Towards realistic models from Higher-Dimensional theories with Fuzzy extra dimensions

We briefly review the Coset Space Dimensional Reduction (CSDR) programme and the best model constructed so far, and then present some details of the corresponding programme in the case that the extra dimensions are considered to be fuzzy. In particular, we present a four-dimensional $\mathcal{N} = 4$ Super Yang-Mills theory, orbifolded by $\mathbb{Z}_3$, which mimics the behaviour of a dimensionally reduced $\mathcal{N} = 1$, 10-dimensional gauge theory over a set of fuzzy spheres at intermediate high scales and leads to the trinification GUT $SU(3)^3$ at slightly lower scales, which in turn can be spontaneously broken to the MSSM at low scales.

Introduction

Since the 1970s there has been an intense pursuit of unification, that is, the establishment of a single theoretical model describing all interactions. Profound research activity has resulted in two very interesting frameworks, namely Superstring Theories [1] and Non-Commutative Geometry [2]. Both approaches, although developing independently, share common unification targets and aim at exhibiting improved renormalization properties in the ultraviolet regime as compared to ordinary field theories. Moreover, these two (initially) different frameworks were bridged together after the realization that a Non-Commutative gauge theory can describe the effective physics on D-branes when a non-vanishing background antisymmetric field is present [3].

Significant progress has recently been made regarding the dimensional reduction of the $E_8 \times E_8$ Heterotic String using non-symmetric coset spaces [4]-[20], in the presence of background fluxes and gaugino condensates. It is widely known that the large number of free parameters of the Standard Model, which enter the theory because of the ad hoc introduction of the Higgs and Yukawa sectors, is a major problem demanding a solution. This embarrassment can be overcome by considering that those sectors originate from a higher-dimensional theory. Various frameworks, starting with the Coset Space Dimensional Reduction (CSDR) [21][22][23] and the Scherk-Schwarz [24] reduction schemes, suggest that unification of the gauge and Higgs sectors can take place by making use of higher dimensions. This means that the four-dimensional gauge and Higgs fields are the surviving components of the reduction procedure applied to the gauge fields of a pure higher-dimensional gauge theory. Furthermore, the addition of fermions in the higher-dimensional gauge theory leads naturally (after CSDR) to Yukawa couplings in four dimensions. The last step in this unified description in high dimensions is to relate the gauge and fermion fields, which can be achieved by demanding that the higher-dimensional gauge theory is N = 1 supersymmetric, i.e. that the gauge and fermion fields are members of the same vector supermultiplet.
In order to maintain an N = 1 supersymmetry after dimensional reduction, Calabi-Yau (CY) manifolds serve as suitable compact internal spaces [25]. However, the moduli stabilization problem that arose led to the study of compactification with fluxes (for reviews see e.g. [26]). Within the context of flux compactification, recent developments suggested the use of a wider class of internal spaces, called manifolds with SU(3)-structure. The latter class of manifolds admits a nowhere-vanishing, globally-defined spinor, which is covariantly constant with respect to a connection with torsion, and not with respect to the Levi-Civita connection as in the CY case. Here we focus on an interesting class of SU(3)-structure manifolds called nearly-Kähler manifolds.

The homogeneous nearly-Kähler manifolds in six dimensions have been classified in [27]; they are the three non-symmetric coset spaces G2/SU(3), Sp(4)/(SU(2) × U(1))_non-max and SU(3)/U(1) × U(1), and the group manifold SU(2) × SU(2). The latter cannot lead to chiral fermions in four dimensions and is therefore, for our purposes, ruled out of further interest. It is worth noting that four-dimensional theories resulting from the dimensional reduction of ten-dimensional N = 1 supersymmetric gauge theories over non-symmetric coset spaces contain terms which can be interpreted as soft scalar masses. Here we will briefly describe the dimensional reduction of the N = 1 supersymmetric E8 gauge theory over the nearly-Kähler manifold SU(3)/U(1) × U(1). More specifically, an extension of the Minimal Supersymmetric Standard Model (MSSM) was derived by dimensionally reducing the E8 × E8 gauge sector of the heterotic string [28].

Non-Commutative geometry is considered an appropriate framework for regularizing quantum field theories, or even better, for building finite ones. Unfortunately, constructing quantum field theories on Non-Commutative spaces is much more difficult than expected and, furthermore, they present problematic ultraviolet features [29]; see however [30] and [31]. In the beginning, several models of the type of the Standard Model were built making use of the Seiberg-Witten map, but they could only be considered as effective theories, which moreover lacked renormalizability. A more promising use of Non-Commutative geometry in particle physics occurred after the suggestion that it could describe the extra dimensions [32]; see also [33]. This proposal motivated the construction of higher-dimensional models which present many interesting features, e.g. renormalizability and potential predictivity. Within this framework, a higher-dimensional gauge theory has been developed in which the extra dimensions are described by fuzzy spaces [32], i.e.
matrix approximations of smooth manifolds. The first step was to find a manifold on which one would construct a higher-dimensional gauge theory. The appropriate one was the product of Minkowski space and a fuzzy coset space, (S/R)_F. Afterwards, in order to achieve the necessary dimensional reduction, the CSDR scheme was employed, which is described in the next section. Although the reduction is performed using the CSDR programme, there is a significant difference between the ordinary and the fuzzy version: the four-dimensional gauge group that appears in the former at an intermediate stage, between the geometrical breaking and the spontaneous breaking due to the four-dimensional Higgs fields, does not appear in the latter. In the fuzzy CSDR scheme, the spontaneous symmetry breaking occurs already when solving the fuzzy CSDR constraints, resulting in a non-zero minimum of the four-dimensional potential. Thus, in four dimensions, there remains only one scalar field, the physical Higgs field, which survives the spontaneous symmetry breaking. In the same way, regarding the Yukawa sector, we obtain the welcome results of massive fermions as well as interactions among the physical Higgs field and the fermions (Yukawa interactions). We conclude that in order to be able to reproduce the spontaneous symmetry breaking of the SM in this framework, one would have to consider large extra dimensions. A decisive difference between ordinary and fuzzy CSDR is that a non-Abelian gauge group G is not necessary in the higher-dimensional theory: non-Abelian gauge theories in four dimensions can originate from a U(1) group in the higher-dimensional theory.

These theories are equipped with a very strong advantage compared to other higher-dimensional ones, namely renormalizability. Arguments leading to this result are given in [32], but the strongest one emerged after examining the issue from a different perspective. In a detailed analysis, a renormalizable four-dimensional SU(N) gauge theory was established, to which a scalar multiplet was assigned that dynamically develops fuzzy extra dimensions, forming a fuzzy sphere [34]. The model develops non-trivial vacua which are interpreted as a six-dimensional gauge theory, in which the geometry and the gauge group depend on the parameters present in the initial Lagrangian. One obtains a finite tower of massive Kaluza-Klein modes, a result consistent with a dimensionally reduced higher-dimensional gauge theory. This model presents many interesting features. First, the extra dimensions are generated dynamically by a geometrical mechanism. This feature is based on a result from non-commutative gauge theory, namely that solutions of matrix models can be interpreted as non-commutative, or fuzzy, spaces. The above mechanism is very generic and does not need fine-tuning, which means that supersymmetry is not involved. In the renormalizable quantum field theory framework, this constitutes a realization of the concepts of compactification and dimensional reduction. Moreover, since it is a large-N gauge theory, every analytical technique of this context should be applicable. More specifically, one can show that the generic low-energy gauge group is SU(n_1) × SU(n_2) × U(1) or SU(n). In this model, gauge groups formed by more than two simple factors (apart from U(1)) are not observed.
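To make the notion of a fuzzy sphere concrete at the matrix level, the following minimal numpy sketch (our own illustration, not code from the works cited above; the function name su2_generators is ours) builds the SU(2) generators in the N-dimensional irreducible representation and verifies the two defining relations of the fuzzy sphere: the su(2) commutation relations and the constant Casimir that plays the role of the squared radius.

```python
import numpy as np

def su2_generators(N):
    """SU(2) generators in the N-dim irrep (spin j = (N-1)/2):
    the 'coordinates' of the fuzzy sphere, with [J_a, J_b] = i eps_abc J_c."""
    j = (N - 1) / 2.0
    m = np.arange(j, -j - 1, -1)                     # j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # matrix elements <m+1|J+|m> = sqrt(j(j+1) - m(m+1))
    jplus = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(jplus, k=1).astype(complex)
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / 2j
    return Jx, Jy, Jz

N = 5                                                # matrix "resolution" of the sphere
Jx, Jy, Jz = su2_generators(N)
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)       # su(2) algebra: [Jx, Jy] = i Jz
casimir = Jx @ Jx + Jy @ Jy + Jz @ Jz
assert np.allclose(casimir, (N**2 - 1) / 4 * np.eye(N))  # x_a x_a = R^2 * identity
print("fuzzy sphere relations verified for N =", N)
```

Increasing N refines the matrix approximation; the ordinary sphere is recovered in the commutative limit N → ∞.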
The features that emerged from the above mechanism are quite appealing, suggesting the construction of phenomenologically viable particle physics models. Moving in this direction, one encounters a severe problem, namely the chiral-fermion assignment in four dimensions. The best candidates in the above category of models, when it comes to inserting the fermions, are theories with mirror fermions in bi-fundamental representations of the low-energy gauge group [35]. Detailed studies of the fermionic sectors of models based on the mechanism of dynamical generation of extra dimensions with a fuzzy sphere, or a product of two fuzzy spheres, showed that, when extrapolating to low energies, the fermionic sector of the theory consists of two mirror sectors, even after the inclusion of magnetic fluxes on the two fuzzy spheres [36]. Although the presence of mirror fermions does not exclude the possibility of obtaining phenomenologically viable models [37], it is certainly preferable to end up with exactly chiral fermions. This is achieved by extending the above context and inserting an additional structure based on orbifolds. Specifically, a Z3 orbifold projection of an N = 4, SU(3N) SYM theory leads to an N = 1 supersymmetric theory with gauge group SU(3)^3 [38]. In order to obtain specific vacua in the N = 1 theory, required for interpreting the theory as resulting from fuzzy extra dimensions, one is normally obliged to introduce soft supersymmetry-breaking terms. This induces the dynamical generation of twisted fuzzy spheres. The introduction of such soft breaking terms seems necessary in order to build phenomenologically viable supersymmetric theories, with the MSSM being the leading example. The vacua that emerge give rise to models which preserve the features described above, but in addition accommodate a chiral low-energy spectrum. The most appealing chiral models of this kind are SU(4) × SU(2) × SU(2), SU(4)^3 and SU(3)^3. The most interesting of these unified theories seems to be the latter, which is described by the trinification group. In addition, this theory can be upgraded to a two-loop finite theory (for reviews see [39], [40], [41], [42]) and, moreover, it is able to make testable predictions [42]. Therefore, we conclude that fuzzy extra dimensions can be used in constructing chiral, renormalizable and phenomenologically viable field-theoretical models.
The Coset Space Dimensional Reduction

In the Coset Space Dimensional Reduction (CSDR) scheme (see [21][22][23] for a detailed exposition) one starts with a Yang-Mills-Dirac Lagrangian, with gauge group G, defined on a D-dimensional spacetime M_D, which is compactified to M_4 × S/R, with S/R a coset space. S acts as a symmetry group on the extra coordinates, and both S and its subgroup R are Lie groups. The most general S-invariant metric is always diagonal and depends on the number of radii that each space admits; regarding the coset of our interest, SU(3)/U(1) × U(1), three radii R_1, R_2, R_3 are introduced. According to the CSDR framework, an S-transformation of the extra d coordinates is a gauge transformation of the fields defined on M_4 × S/R; thus a gauge-invariant Lagrangian written on this space is independent of the extra coordinates. Fields defined in this way are called symmetric. The initial gauge field A_M(x, y) is split into its components A_μ(x, y) and A_a(x, y), corresponding to M_4 and S/R respectively. Consider the action of a D-dimensional Yang-Mills theory with gauge group G, coupled to fermions, defined on a manifold M_D compactified to M_4 × S/R, D = 4 + d, d = dim S − dim R:

$$A = \int d^4x\, d^dy\, \sqrt{-g}\left[-\frac{1}{4}\,\mathrm{Tr}\big(F_{MN}F_{K\Lambda}\big)\, g^{MK} g^{N\Lambda} + \frac{i}{2}\,\bar\psi\,\Gamma^M D_M \psi\right],$$

where M, N run over the D-dimensional space, D_M is the covariant derivative, and A_M and ψ are D-dimensional symmetric fields. Let ξ_A^α (A = 1, ..., dim S, with α = dim R + 1, ..., dim S the curved index) be the Killing vectors which generate the symmetries of S/R, and W_A the compensating gauge transformation associated with ξ_A. The requirement that transformations of the fields under the action of S/R are compensated by gauge transformations is expressed by the following constraint equations for scalar φ, vector A_α and spinor ψ fields on S/R:

$$\xi_A^\beta \partial_\beta \phi = D(W_A)\,\phi, \qquad (2)$$
$$\xi_A^\beta \partial_\beta A_\alpha + \partial_\alpha \xi_A^\beta\, A_\beta = \partial_\alpha W_A - [W_A, A_\alpha], \qquad (3)$$
$$\xi_A^\beta \partial_\beta \psi - \tfrac{1}{2} G_{Abc}\Sigma^{bc}\psi = D(W_A)\,\psi, \qquad (4)$$

where the W_A depend only on the internal coordinates y and D(W_A) represents a gauge transformation in the appropriate representation of the fields. The constraints (2)-(4) provide us [21,22] with the four-dimensional unconstrained fields as well as with the gauge invariance that remains in the theory after dimensional reduction. The analysis of these constraints implies that the components A_μ(x, y) of the initial gauge field A_M(x, y) become, after dimensional reduction, the four-dimensional gauge fields and, furthermore, that they are independent of y. In addition, one finds that they have to commute with the elements of R_G, the subgroup of G in which R is embedded. Thus, the four-dimensional gauge group H is the centralizer of R in G, H = C_G(R_G). Similarly, the components A_α(x, y), denoted by φ_α(x, y) from now on, become scalars in four dimensions and transform under R as a vector v, i.e.

$$S \supset R, \qquad \mathrm{adj}\, S = \mathrm{adj}\, R + v. \qquad (5)$$

Furthermore, the φ_α(x, y) act as intertwining operators connecting induced representations of R acting on G and on S/R. This implies, according to Schur's lemma, that the transformation properties of the fields φ_α(x, y) under H can be found if we express the adjoint representation of G in terms of R_G × H:

$$G \supset R_G \times H, \qquad \mathrm{adj}\, G = (\mathrm{adj}\, R, 1) + (1, \mathrm{adj}\, H) + \sum (r_i, h_i).$$

Then, if v = Σ s_i, where each s_i is an irreducible representation of R, there survives a Higgs multiplet transforming under the representation h_i of H for every pair (r_i, s_i) with r_i and s_i identical irreducible representations of R. All other scalar fields vanish.

The analysis of the constraints imposed on spinors [22,43-45] is analogous to the scalar case and implies that the spinor fields act as intertwining operators connecting induced representations of R in SO(d) and in G. In order to specify the representation of H under which the four-dimensional fermions transform, we decompose the representation F of the initial gauge group, in which the fermions are assigned in higher dimensions, under R_G × H, i.e.

$$F = \sum (r_i, h_i),$$

and the spinor of SO(d) under R,

$$\sigma_d = \sum \sigma_j.$$

It turns out that for each pair (r_i, σ_i), where r_i and σ_i are identical irreducible representations of R, there is an h_i multiplet of spinor fields in the four-dimensional theory. Regarding the existence of chiral fermions in the effective theory, we note that if we start with Dirac fermions in higher dimensions it is impossible to obtain chiral fermions in four dimensions. Further requirements must be imposed in order to achieve chiral fermions in the resulting theory. Imposing the Weyl condition in D dimensions, we obtain two sets of Weyl fermions with the same quantum numbers under H.
This is already a chiral theory, but one can go further and impose the Majorana condition in order to eliminate the doubling of the fermionic spectrum. The Majorana and Weyl conditions are compatible in D = 4n + 2 dimensions, which is the case of interest here.

An important requirement is that the resulting four-dimensional theories should be anomaly free. Starting with an anomaly-free theory in higher dimensions, Witten [46] has given the condition to be fulfilled in order to obtain anomaly-free four-dimensional theories. The condition restricts the allowed embeddings of R into G by relating them to the embedding of R into SO(6), the tangent space group of the six-dimensional cosets we consider [22,47]. According to ref. [47], the anomaly cancellation condition is automatically satisfied for the choice of embedding

$$E_8 \supset SO(6) \supset R, \qquad (11)$$

which we adopt here.

Dimensional Reduction

Let us next present a few results concerning the dimensional reduction of the N = 1, E8 SYM theory over SU(3)/U(1) × U(1) [48]. To determine the four-dimensional gauge group, the embedding of R = U(1) × U(1) in E8 is suggested by the decomposition

$$E_8 \supset E_6 \times SU(3) \supset E_6 \times U(1)_A \times U(1)_B.$$

After the dimensional reduction of E8 under SU(3)/U(1) × U(1), according to the rules of the previous section, the surviving gauge group in four dimensions is

$$H = C_{E_8}\big(U(1)_A \times U(1)_B\big) = E_6 \times U(1)_A \times U(1)_B.$$

Similarly, the explicit decomposition of the adjoint representation of E8, the 248, under U(1)_A × U(1)_B provides us with the surviving scalars and fermions in four dimensions. Eventually, one finds that the dimensionally reduced theory in four dimensions is an N = 1, E6 GUT with U(1)_A, U(1)_B as global symmetries. The potential is determined by a tedious calculation [49,50]. The D-terms can be constructed, and the F-terms are obtained from the superpotential. The remaining terms in the potential can be interpreted as soft scalar masses and trilinear soft terms. Finally, the gaugino mass was also calculated; it receives a contribution from the torsion, contrary to the rest of the soft supersymmetry-breaking terms.

SU(3)^3 due to Wilson flux

In order to further reduce the gauge symmetry, one has to apply the Wilson flux breaking mechanism [51][52][53]. Instead of considering a gauge theory on M_4 × B_0 (B_0 a simply connected manifold in our case), one considers a gauge theory on M_4 × B, with B = B_0/F_{S/R} and F_{S/R} a freely acting discrete symmetry of B_0. The discrete symmetries F_{S/R} which act freely on coset spaces B_0 = S/R are the center of S, Z(S), and W = W_S/W_R, where W_S and W_R are the Weyl groups of S and R, respectively. In the case of our interest, S = SU(3) and R = U(1) × U(1), and the freely acting discrete symmetry employed is Z3. The presence of the Wilson lines imposes further constraints on the fields of the theory. The surviving fields are invariant under the combined action of the discrete group Z3 on the geometry and on the gauge indices.

After the Z3 projection, the gauge group E6 breaks to

$$SU(3)_c \times SU(3)_L \times SU(3)_R$$

(the first of the SU(3) factors is the Standard Model colour gauge group). Moreover, one can obtain three fermion generations by introducing non-trivial monopole charges in the U(1)'s in R. In ref. [28] it was shown that the scalar potential leads to the proper hierarchy of spontaneous breaking. Using the appropriate vevs, a first spontaneous symmetry breaking leads to the MSSM [54], while the electroweak breaking proceeds by a second one [42]. It is worth noting that before the EW symmetry breaking, supersymmetry is broken by both D-terms and F-terms, in addition to its breaking by the soft terms.
We plan to examine in detail the phenomenological consequences of the resulting model, taking also into account the massive Kaluza-Klein modes.

Field theory orbifolds and fuzzy spheres

Let us begin by recalling briefly how the orbifold structure applies in field theory, and how this structure is related to the dynamical generation of fuzzy extra dimensions. The motivation is that we seek to end up with chiral fermions when constructing particle physics models.

We commence with an SU(3N), N = 4 supersymmetric Yang-Mills (SYM) theory. The orbifold projection of this theory is achieved by the action of the discrete group Z3. The procedure is to embed a discrete symmetry into the R-symmetry of the original theory, i.e. SU(4)_R. Depending on this embedding, the projected theories may retain different amounts of supersymmetry [55]. For example, supersymmetry is completely broken when Z3 is embedded maximally in SU(4)_R, while if it is embedded in an SU(3) or SU(2) subgroup of SU(4)_R, one obtains N = 1 or N = 2 supersymmetric theories, respectively. In this contribution we concentrate on the N = 1 case, which is compatible with our prime motivation, namely the construction of chiral models.

Projecting the initial theory under the discrete symmetry Z3 leads to an N = 1 SYM theory, in which the only fields that remain are those invariant under the action of Z3. For the technicalities of this procedure see [38]. In the initial N = 4 SYM theory there are in total four superfields, one vector and three chiral in N = 1 language. The component fields are the gauge fields A_μ, μ = 0, ..., 3 of the SU(3N) gauge group; three complex scalar fields φ^i, i = 1, 2, 3, which are accommodated in the adjoint of the gauge group and the vector of the global symmetry; and four Majorana fermions ψ^p, which are assigned to the adjoint of the gauge group and the spinor of the global symmetry. After the orbifold projection, we end up with a theory with a different gauge group and particle spectrum. In short, Z3 acts non-trivially on the various fields depending on their representations under the R-symmetry and the gauge group [55]. The gauge group breaks down to H = S(U(N) × U(N) × U(N)), and the scalar and fermionic fields that survive transform under representations of the non-Abelian factor gauge groups, yielding a spectrum free of gauge anomalies. It is easily seen that the fermions belong to chiral representations and that there is a threefold replication, i.e. three chiral families.

As for the F-term scalar potential of the N = 4 SYM theory, it reads

$$V_F(\phi) = \frac{1}{4}\,\mathrm{Tr}\left([\phi^i, \phi^j]^\dagger\, [\phi^i, \phi^j]\right).$$

After the projection, the potential V_F remains practically the same, obviously containing only the terms which describe interactions of the surviving fields. We also have a contribution to the total scalar potential from the D-terms, namely

$$V_D = \frac{1}{2}\, D^I D^I,$$

with the D-terms having the form

$$D^I = \phi_i^\dagger\, T^I \phi_i,$$

where the T^I are the generators in the representation of the corresponding chiral multiplets. Setting both the F-terms and the D-terms to zero, i.e. [φ^i, φ^j] = 0, we obtain the minimum of the full scalar potential.
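The Z3 projection described above can be made tangible with a toy numpy sketch (our own illustration under our own conventions, with N = 2 blocks for readability). The orbifold group element is embedded in the gauge group as γ = diag(1_N, ω 1_N, ω² 1_N), and each field is kept according to its Z3 charge: the gauge field (charge 0) survives only in diagonal blocks, reproducing S(U(N)³), while the chiral scalars (charge 1) survive only in cyclic off-diagonal blocks, i.e. in bi-fundamental representations.

```python
import numpy as np

N = 2                                         # size of each of the three blocks (toy value)
w = np.exp(2j * np.pi / 3)
# Z3 embedded in the gauge group: gamma = diag(1_N, w 1_N, w^2 1_N)
gamma = np.kron(np.diag([1, w, w**2]), np.eye(N))

def z3_project(M, charge):
    """Keep the component of M with gamma M gamma^dag = w**charge * M
    (charge 0: gauge field / vector multiplet, charge 1: chiral scalars)."""
    gk = np.eye(3 * N, dtype=complex)         # gamma^k, starting from k = 0
    out = np.zeros_like(gk)
    for k in range(3):
        out += w ** (-charge * k) * gk @ M @ gk.conj().T
        gk = gk @ gamma
    return out / 3

M = np.random.default_rng(1).standard_normal((3 * N, 3 * N))
A   = z3_project(M, charge=0)   # survives only in diagonal blocks -> S(U(N)^3)
phi = z3_project(M, charge=1)   # survives only in cyclic off-diagonal (bi-fundamental) blocks
print((np.abs(A)   > 1e-12).astype(int))
print((np.abs(phi) > 1e-12).astype(int))
```

The printed masks show the surviving block patterns: block-diagonal for the gauge field, cyclic off-diagonal for the scalars, which is exactly the bi-fundamental assignment quoted in the text.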
At this minimum all scalar fields vanish in the vacuum, and therefore no spontaneous supersymmetry breaking takes place. However, interesting vacua are obtained by inserting soft supersymmetry-breaking terms into the theory. Specifically, the scalar part of the soft supersymmetry-breaking sector is

$$V_{SSB} = \frac{1}{2}\sum_i m_i^2\, \phi^{i\dagger}\phi^i + \frac{1}{2}\sum_{i,j,k} h_{ijk}\, \phi^i\phi^j\phi^k + \mathrm{h.c.},$$

which respects the orbifold symmetry. The full scalar potential of the theory therefore becomes

$$V = V_F + V_D + V_{SSB},$$

which, for suitable parameters, can be equivalently written in the form

$$V = \frac{1}{4}\,(F^{ij})^\dagger F^{ij} + V_D,$$

having also defined

$$F^{ij} = [\phi^i, \phi^j] - i\varepsilon_{ijk}\,(\phi^k)^\dagger.$$

Since the first term is always positive, the global minimum of the potential is attained when

$$[\phi^i, \phi^j] = i\varepsilon_{ijk}\,(\phi^k)^\dagger, \qquad (24)$$
$$\phi^i(\phi^i)^\dagger = R^2, \qquad (26)$$

where (φ^i)† is the hermitian conjugate of φ^i and [R², φ^i] = 0. The above relations define a twisted fuzzy sphere. This can be easily understood by considering the twisted fields φ̃^i, defined by

$$\phi^i = \Omega\,\tilde\phi^i, \qquad (27)$$

for Ω ≠ 1 satisfying Ω³ = 1, Ω† = Ω^{-1} and [Ω, φ^i] = 0. Then (24) reduces to the relation of the ordinary fuzzy sphere,

$$[\tilde\phi^i, \tilde\phi^j] = i\varepsilon_{ijk}\,\tilde\phi^k,$$

generated by the φ̃^i, and (26) becomes φ̃^i φ̃^i = R². Configurations φ^i satisfying (24) have the form

$$\phi^i = \Omega\,\big(\lambda^i_{(N)} \otimes \mathbf{1}_3\big), \qquad (31)$$

where the λ^i_{(N)} are the generators of SU(2) in the N-dimensional representation. The matrix Ω implements a cyclic permutation of the three N-dimensional blocks; schematically (up to the ordering of the tensor factors), Ω = P ⊗ 1_N, with P the 3 × 3 cyclic permutation matrix. (32) The true meaning of this configuration is revealed by diagonalizing Ω, that is

$$\Omega_3 := U^{-1}\Omega\, U = \mathrm{diag}(1, \omega, \omega^2), \qquad \omega = e^{2\pi i/3}.$$

This form of φ^i indicates that there are actually three identical fuzzy spheres, embedded with relative angles 2π/3.

The solution (31) breaks the gauge symmetry SU(N)³ completely (it can be regarded as the Higgs mechanism of the SYM theory), yet there exists a class of solutions which do not break the gauge symmetry completely, namely

$$\phi^i = \Omega\,\big((\lambda^i_{(N-n)} \oplus \mathbf{0}_n) \otimes \mathbf{1}_3\big), \qquad (34)$$

where 0_n is the n × n matrix with zero entries. In this case the gauge symmetry breaks from SU(N)³ to SU(n)³, with the vacuum interpreted as M_4 × K_F, with an internal fuzzy geometry K_F consisting of a set of twisted fuzzy spheres (in the φ^i coordinates). It is possible that this kind of vacua leads to low-energy theories of high phenomenological interest; see the discussion in [38].

Summing up, we should emphasize the general picture of the theoretical model. In the very-high-scale regime we have an unbroken renormalizable gauge theory. After the spontaneous symmetry breaking, the resulting gauge theory is an SU(3)³ GUT, accompanied by a finite tower of massive Kaluza-Klein modes. Finally, the trinification SU(3)³ GUT breaks down to the MSSM in the low-scale regime.
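As a closing illustration, here is a short numerical check (our own, spin-1/2 for brevity and with our own ordering of the tensor factors, which is a convention choice) that a configuration of the form (31), with Ω a cyclic permutation of the three blocks, indeed satisfies the twisted fuzzy sphere relation (24).

```python
import numpy as np

# spin-1/2 generators lambda_i = sigma_i / 2; any N-dim irrep works identically
lam = [np.array(s, dtype=complex) / 2 for s in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
N = 2

P3 = np.roll(np.eye(3), 1, axis=1)          # 3x3 cyclic shift: P3^3 = 1, eigenvalues 1, w, w^2
Omega = np.kron(P3, np.eye(N))              # the twist (tensor-factor ordering is our choice)
phi = [Omega @ np.kron(np.eye(3), l) for l in lam]

eps = np.zeros((3, 3, 3))                   # totally antisymmetric epsilon tensor
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        lhs = phi[i] @ phi[j] - phi[j] @ phi[i]
        rhs = sum(1j * eps[i, j, k] * phi[k].conj().T for k in range(3))
        assert np.allclose(lhs, rhs)        # [phi_i, phi_j] = i eps_ijk (phi_k)^dagger
print("twisted fuzzy sphere relations (24) hold")
```

The check works because Ω commutes with the untwisted generators, Ω³ = 1 and Ω† = Ω², so the commutator picks up exactly the hermitian conjugation appearing on the right-hand side of (24).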
Chloroplast genome and nuclear loci data for 71 Medicago species

We present a dataset containing nuclear and chloroplast sequences for 71 species in genus Medicago (Fabaceae), as well as for 8 species in genera Melilotus and Trigonella. Sequence data for a total of 130 samples was obtained with high-throughput sequencing of enriched genomic DNA libraries targeting 61 single-copy nuclear genes from across the Medicago truncatula genome. Chloroplast sequence reads were also generated, allowing for the recovery of chloroplast genome sequences for all 130 samples. A fully-resolved phylogenetic tree was inferred from the chloroplast dataset using maximum-likelihood methods. More than 80% of accepted Medicago species are represented in this dataset, including three subspecies of Medicago sativa (alfalfa). These data can be further utilised for phylogenetic analyses in Medicago and related genera, but also for probe and primer design and plant breeding studies.

Specifications Table

• Subject: Biological Sciences
• Specific subject area: Phylogeny and Evolution
• Type of data: Processed sequence data, tables
• Data collection: The Illumina MiSeq high-throughput platform (San Diego, California, USA) was used to sequence enriched genomic DNA libraries. The MYBaits hybrid-capture method (MYcroarray, Ann Arbor, Michigan) was used for library enrichment targeting 61 single-copy nuclear loci. Probes for target enrichment were obtained from the Medicago truncatula genome (http://medicagohapmap.org). Sequence reads were processed using the CLC Assembly Cell software (CLC Bio, Aarhus, Denmark).
• Data source location: Department of Biological and Environmental Sciences, University of Gothenburg, Sweden
• Data accessibility: Repository name: Mendeley Data. Data identification number: https://doi.org/10.17632/r5zzxg4xsw.1. Direct URL to data: https://data.mendeley.com/datasets/r5zzxg4xsw/1

Value of the Data

• These data correspond to the first genomic dataset generated by high-throughput sequencing that includes the vast majority of Medicago species, including species never before sampled in molecular phylogenetic studies.
• Other researchers can use these data directly for phylogenetic and population studies in Medicago and related genera, but also for probe and primer design.
• Chloroplast genome data contribute to the understanding of phylogenetic relationships among Medicago species and can be used in future comparative studies.

Background

The dataset presented herein was generated in the context of a research project on the phylogeny of genus Medicago (Fabaceae), aimed at exploring biological causes of phylogenetic incongruence affecting tree inference in this genus, namely incomplete lineage sorting, paralogy and hybridisation [1]. The availability of an annotated genome for genus Medicago enabled the development of a probe set to obtain sequence data using innovative sequence-capture techniques and high-throughput sequencing. Sequence capture targeted the 61 nuclear gene set, but chloroplast genomes were also sequenced in the process, allowing for the compilation of the chloroplast whole-genome dataset. Part of the sequence data generated in the context of this research has supported several different publications [2,3,4,5]. However, all sequences in the present dataset have been newly generated from the original raw reads using a pipeline that automated allele phasing and SNP calling. Sampling includes 71 species and subspecies in genus Medicago L. [6], as well as two species in genus Trigonella L.
and six species in genus Melilotus Mill., in a total of 130 samples. Genera Trigonella and Melilotus form the sister-group to genus Medicago and were sampled to be used as outgroups in phylogenetic inference.

Data Description

The data set presented herein is divided into two folders, one for nuclear data and another for chloroplast data, along with two tables describing the data and sampling.

Nuclear data is organised into 61 unaligned multifasta files corresponding to all targeted genes. Each multifasta file contains one consensus sequence and two phased sequences (alleles 0 and 1) for each sample present in the file. Samples for which only the consensus sequence is available are assumed to be homozygous in that locus. Genes were sampled from 20 genomic blocks of three to four genes, distributed in eight chromosomes, each chromosome containing two or three unlinked genomic blocks. Genes were chosen according to the following criteria: minimum distance between each gene within a genomic block = 30 Kbp; minimum gene length = 2 Kbp; maximum intron length = 500 bp. All genes were in single-copy in the Medicago genome and had homologues in other plant genomes [2]. Gene names and references, their location in each chromosome of the Medicago genome, the corresponding genomic block and the number of sequences present in each multifasta file are presented in "Table A - Nuclear Data".

Chloroplast data is organised into 130 fasta files, one for each sample. Sample names, their corresponding genus, species, accession, number of base pairs and percentage coverage in chloroplast sequences, as well as accession numbers in the European Nucleotide Archive, where raw sequence data was stored (https://www.ebi.ac.uk/ena), are presented in "Table B - Samples". The minimum coverage per sample in the chloroplast genome dataset is 40%, and the maximum coverage is 99.6%, with a median of 92% for the 130 samples; a simple way of recomputing such per-sample coverage figures is sketched below.

Table 1 shows the completeness of the nuclear and chloroplast datasets, for each of the three plant genera sampled and an averaged total, based on the comparison between reference sequence lengths and the number of sequenced base pairs (for the 61-gene dataset, estimates were made from consensus sequences).
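The per-sample coverage percentages quoted above can be recomputed from the released fasta files. The sketch below is a minimal illustration, not part of the dataset description: the folder name, the treatment of unresolved bases, and the reference length (set to the length of the GenBank AC093544 reference, not reproduced here) are our assumptions.

```python
from pathlib import Path

# Assumptions (ours): one single-record fasta per sample in ./chloroplast_data/,
# and REF_LEN set to the length of the M. truncatula chloroplast reference
# (GenBank AC093544) used for mapping.
REF_LEN = 124_000   # placeholder -- substitute the actual reference length

def fasta_sequence(path):
    """Concatenate the sequence lines of a single-record fasta file."""
    return "".join(line.strip()
                   for line in Path(path).read_text().splitlines()
                   if line and not line.startswith(">"))

def coverage_pct(path, ref_len=REF_LEN):
    """Percent of reference positions recovered, treating N and '-' as missing."""
    seq = fasta_sequence(path).upper()
    called = sum(base not in "N-" for base in seq)
    return 100.0 * called / ref_len

for fasta in sorted(Path("chloroplast_data").glob("*.fasta")):
    print(f"{fasta.stem}\t{coverage_pct(fasta):.1f}%")
```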
Experimental Design, Materials and Methods

Sampling, DNA extraction, target enrichment and sequencing were done as described in [2]. In brief, genomic DNA libraries were enriched for a target of 61 nuclear loci using the MYBaits hybrid-capture method (MYcroarray, Ann Arbor, Michigan) and sequenced with Illumina MiSeq (San Diego, California, USA) at the Genomics Core Facility of the University of Gothenburg, Sweden.

Read trimming, quality filtering and mapping were performed using the CLC Assembly Cell software (CLC Bio, Aarhus, Denmark). Raw paired-end sequence reads were trimmed of adapter sequence and filtered for quality with a minimum quality score of 20. A first round of mapping was done against the 61 nuclear gene reference sequences used to design probes for target enrichment (sequences obtained from the Medicago truncatula genome). As chloroplast DNA present in the enriched genomic libraries was also sequenced, reads were also mapped against a whole-chloroplast reference sequence of Medicago truncatula, retrieved from GenBank (accession AC093544). Mapped reads were converted into sequences with the program samtools [7], using the mpileup tool. Consensus sequences generated for each sample and each gene, containing indels not present in the original reference, were used as reference for a second round of mapping, followed by phasing of the two alleles using samtools phase. Allele and consensus sequences from the second round of mapping were generated using mpileup without the reference sequence option, to avoid erroneous base calling where read depth was low [8].

Chloroplast genome sequences were used to infer a phylogenetic tree using maximum-likelihood methods (Fig. 1). Whole-chloroplast sequence files of the 130 samples were aligned using MAFFT v. 7.3 [9]. Sites containing gaps in more than 50% of sequences were deleted from the alignment using TrimAl v. 1.2 [10]. A maximum-likelihood analysis of the alignment was run using IQTree v. 2.2.2.3 [11], under the GTR substitution model and a discrete four-category gamma model of site rate heterogeneity, as determined by model testing in IQTree and the corrected Akaike Information Criterion. Support for the best tree was obtained with 1000 ultrafast bootstrap replicates. The analysis was run on the CIPRES Science Gateway [12]. The analysis recovers a fully resolved tree with high bootstrap support for most nodes, including those corresponding to samples with lower coverage, thus confirming the utility of the chloroplast dataset for phylogenetic analyses. The main clades recovered can be identified in earlier tree inferences [1], although previously used markers did not recover supported relationships among clades. A minimal sketch of this alignment-trimming-inference pipeline is given below.

Limitations

None.

Fig. 1. Phylogenetic tree. Phylogram obtained from the maximum-likelihood analysis of the chloroplast data in IQTree. Branch support values were obtained from 1000 ultrafast bootstrap replicates.

Table 1. Completeness of the nuclear and chloroplast datasets.
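For readers who want to rerun the tree inference, the following sketch chains the three steps described above. The input file name is hypothetical, and the tool flags reflect common usage of MAFFT, TrimAl and IQ-TREE 2 (e.g. -gt 0.5 for the 50% gap threshold, -B for ultrafast bootstrap); they should be checked against the exact versions cited in the text.

```python
import subprocess

def run(cmd, stdout=None):
    """Echo and execute an external command, failing loudly on errors."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True, stdout=stdout)

# 1) align the 130 whole-chloroplast sequences (MAFFT writes to stdout)
with open("alignment.fasta", "w") as out:
    run(["mafft", "--auto", "chloroplast_all.fasta"], stdout=out)

# 2) delete columns gapped in more than 50% of sequences
#    (-gt 0.5 keeps columns with at least 50% non-gap characters)
run(["trimal", "-in", "alignment.fasta", "-out", "trimmed.fasta", "-gt", "0.5"])

# 3) ML tree under GTR + 4-category gamma, 1000 ultrafast bootstrap replicates
run(["iqtree2", "-s", "trimmed.fasta", "-m", "GTR+G", "-B", "1000"])
```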
Comparative transcriptomic analysis of long noncoding RNAs in Leishmania-infected human macrophages

It is well established that infection with Leishmania alters the host cell's transcriptome. Since mammalian cells have multiple mechanisms to control gene expression, different molecules, such as noncoding RNAs, can be involved in this process. MicroRNAs have been extensively studied upon Leishmania infection, but whether long noncoding RNAs (lncRNAs) are also altered in macrophages is still unexplored. We performed RNA-seq from THP-1-derived macrophages infected with Leishmania amazonensis (La), L. braziliensis (Lb), and L. infantum (Li), investigating a previously unappreciated fraction of the macrophage transcriptome. We found that more than 24% of the total annotated transcripts and 30% of differentially expressed (DE) RNAs in Leishmania-infected macrophages correspond to lncRNAs. LncRNAs and protein coding RNAs with altered expression are similar among macrophages infected with the three Leishmania species. Still, some species-specific alterations could occur due to distinct pathophysiology, in which Li infection led to a more significant number of exclusively DE RNAs. The most represented classes among DE lncRNAs were intergenic and antisense lncRNAs. We also found enrichment for immune response-related pathways in the DE protein coding RNAs, as well as putative targets of the lncRNAs. We performed a coexpression analysis to explore potential cis regulation of coding and antisense noncoding transcripts. We identified that antisense lncRNAs are similarly regulated as their neighbor protein coding genes, such as the BAALC/BAALC-AS1, BAALC/BAALC-AS2, HIF1A/HIF1A-AS1, HIF1A/HIF1A-AS3 and IRF1/IRF1-AS1 pairs, which can occur as a species-specific modulation. These findings are a novelty in the field because, to date, no study has focused on analyzing lncRNAs in Leishmania-infected macrophages. Our results suggest that lncRNAs may account for a novel mechanism by which Leishmania can control macrophage function. Further research must validate putative lncRNA targets and provide additional prospects on lncRNA function during Leishmania infection.
Introduction

Endogenous noncoding (nc) regulatory RNAs are classified according to their length: the small noncoding RNAs (sncRNAs), of which the 25-nucleotide-long microRNAs (miRNAs) exert regulatory functions, and the long noncoding RNAs (lncRNAs), which are larger than 200 nucleotides. LncRNAs are classified based on biogenesis signatures, such as genomic position and proximity to protein coding genes: sense intergenic (IG), overlapping (OT), and intronic (IT) RNAs, transcribed in the same orientation as closely related protein-coding genes at their genomic loci, and antisense (AS) RNAs, transcribed from the opposite strand of protein coding genes (Fernandes et al., 2019). LncRNAs are transcribed by RNA polymerase II but differ from mRNAs since the transcripts can assume multiple forms. The linear transcripts can be polyadenylated or not; alternatively, lncRNAs can be circularized, forming circular (circ)RNAs (Zhang et al., 2013). Unlike miRNAs, which mostly perform posttranscriptional modulation of target genes through 3'UTR recognition, lncRNAs act by transcriptional to posttranslational mechanisms, regulating many physiological and pathological processes (Fernandes et al., 2019). LncRNAs can regulate their molecular targets by multiple mechanisms in both the nucleus and the cytoplasm, such as regulating neighbor genes (Joung et al., 2017), mediating protein function through direct structural association (Rinn & Chang, 2012), and even by sponging miRNAs (Ebert et al., 2007). Also, lncRNAs are strong components in defining cell phenotype and function. The transcriptome of T lymphocytes during development and differentiation showed that more than 50% of the identified lncRNAs are stage-specific, while the coding transcript fraction is mainly shared among the compared groups (Hu et al., 2013). Similarly, the lncRNA signature in primary or THP-1-derived human macrophages stimulated either with IFN-γ plus LPS or with IL-4 revealed a subset of lncRNAs defining macrophage phenotype, and RNA interference (RNAi) of specific lncRNAs prevented the expression of macrophage polarization markers (Huang et al., 2016). LncRNAs are mediators of the immune response (Chen et al., 2017; de Lima et al., 2019) and act in both pro- and antiinflammatory (Du et al., 2017) pathways in macrophages. There is an increasing interest in ncRNA biology and function in the context of host-parasite interaction, but most studies focus on miRNAs (Bayer-Santos et al., 2017; Bensaoud et al., 2019).
Pathogen-mediated changes in lncRNA expression have been studied in macrophages infected with the parasitic protozoan Toxoplasma gondii (Menard et al., 2018), in fungal infection by Cryptococcus neoformans (Gao et al., 2022), and in bacterial infections, such as Mycobacterium tuberculosis (Yang et al., 2016) and Salmonella typhimurium (Westermann et al., 2016). Twenty different species of the Leishmania genus cause 0.7 to 1 million new human cases of leishmaniasis each year worldwide (Burza et al., 2018). The disease can cause ulcers in the skin or mucosa that can be self-healing, or cause organ damage, mainly in the liver, spleen, and bone marrow, within three main clinical forms: cutaneous, mucocutaneous, and visceral leishmaniasis. After the transmission of Leishmania promastigote forms by infected sandflies, the parasites differentiate into amastigotes inside phagocytic cells in the mammalian host. Leishmania establishes its replicative niche primarily in the phagolysosome of macrophages. Disease outcome reflects the balance between pro-inflammatory macrophages with the M1 phenotype and antiinflammatory M2 macrophages induced by Leishmania's immune response subversion mechanisms (Soong, 2012). Multiple biological processes are involved in the macrophage response, such as metabolism (Muxel et al., 2018; Ferreira et al., 2021) and cytokine production (Zamboni & Sacks, 2019). Previous studies determined the transcriptome of Li infection in THP-1-derived macrophages (Gatto et al., 2020), of La and L. major infection in primary human macrophages (hMDM) (Fernandes et al., 2016), and of Lb infection in patients' lesions (Maretti-Mira et al., 2012). In murine macrophages, the transcriptome of BALB/c and C57BL/6 macrophages infected with La indicated an inflammatory response different from the spectrum extremes of M1 and M2 polarized macrophages (Osorio y Fortéa et al., 2009; Aoki et al., 2019) observed in L. major infection (Sacks & Noben-Trauth, 2002). All of the above-mentioned transcriptome-wide experiments exhibit some results contrasting with the literature, because the macrophage response to Leishmania is highly dependent on the parasite species and strain and on the host cell type, hence the importance of investigating different models (Salloum et al., 2021). The transcriptome of macrophages is widely affected by inflammatory stimuli (Das et al., 2018; Vollmers et al., 2021). Changes in gene expression upon Leishmania infection have been documented in a variety of Leishmania-host cell models, explaining different aspects of the immune response subversion elicited by this parasite (Salloum et al., 2021). But, to our knowledge, no data are available that systematically compare the transcriptomic profile of macrophages infected with La, Lb, or Li. Besides these transcriptomic studies on genes that encode proteins, ncRNAs in Leishmania infection of both murine and human macrophages have also been investigated, focusing primarily on miRNAs and their role in regulating mRNAs related to the inflammatory response (Lemaire et al., 2013; Geraci et al., 2015; Muxel et al., 2017; Colineau et al., 2018; Diotallevi et al., 2018; Kumar et al., 2020; Souza et al., 2021; Ramos-Sanchez et al., 2022). Some groups are dedicated to identifying and characterizing Leishmania's own subsets of ncRNAs regulating parasite developmental stages, and although we did not evaluate the parasite's reads, these groups can benefit from our publicly available datasets (Dumas et al., 2006; Freitas Castro et al., 2017; Ruy et al., 2019).
LncRNAs were not investigated in Leishmania infection until recently. Sanz and collaborators identified 21 differentially expressed lncRNAs in the lymph nodes of dogs infected with L. infantum (Sanz et al., 2022). Maruyama and collaborators explored the lncRNA content in the blood transcriptome of visceral leishmaniasis patients infected with L. infantum (Maruyama et al., 2022). Still, how different species of Leishmania parasites affect the macrophage transcriptome is an open question. In this study, we investigated human macrophages infected with Leishmania amazonensis (La), L. braziliensis (Lb), and L. infantum (Li), the causative agents of cutaneous, mucocutaneous and visceral leishmaniasis, respectively (Burza et al., 2018). Here we show the coding and noncoding RNA profile of THP-1-derived macrophages infected with La, Lb, or Li. We focus on describing dysregulated lncRNAs, comparing their specificity upon infection by different Leishmania species, pathway enrichment analysis based on the mRNA profile and putative lncRNA targets, lncRNA-mRNA pairs with closely located genomic loci showing coexpression, and highlighting prospects for the study of macrophage ncRNA function in Leishmania infection.

Modulation of protein coding and long noncoding host RNAs upon Leishmania infection

To compare transcripts involved in the infection by La, Lb, and Li, we performed RNA-seq of human THP-1-derived macrophages infected with these species for 24 h (Supplementary Figure S1); by this time, infection is established, and the transcriptomic alterations related to lncRNAs are relevant to study in the early phase of infection, together with their implications for the activation of the immune response. The high-quality reads were deposited at NCBI under BioProject accession number PRJNA881925. We identified an average of 41 million reads per sample (with an average of 40% of reads mapped to the human genome for infected macrophage samples and 99% for uninfected macrophages), resulting in 19,043 genes mapped to the GRCh38 human genome. Of these, 4,311 were differentially expressed (DE) in at least one infection model compared to uninfected macrophages. The most abundant transcripts were protein coding and long noncoding RNAs (lncRNAs) (Table 1). All infected samples presented similar scores for molecular degree of perturbation (MDP, Supplementary Figure S2A), and no outlier was found among our samples (Gonçalves et al., 2019). We performed principal component analysis (PCA, Supplementary Figure S2B), showing that 50% of the total variance is explained by the differences between uninfected and Leishmania-infected macrophages in dimension 1 (Dim1). In comparison, Dim2 further explains 14% of the variance by clustering La- and Lb-infected macrophage samples away from Li-infected macrophage samples. Since lncRNAs are important regulators of mRNA expression and those were the two most abundant transcript classes identified as DE, we investigated the number of DE protein coding and lncRNA transcripts identified in the RNA-seq of THP-1 macrophages infected with La, Lb or Li compared to uninfected macrophages (Supplementary Table S1). We identified a total of 1,503 DE protein coding transcripts (883 up- and 620 downregulated) in La-infected macrophages, 1,708 DEs (984 up- and 724 downregulated) in Lb-infected macrophages, and 2,572 DEs (1,285 up- and 1,287 downregulated) in Li-infected macrophages (Supplementary Table S1).
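Counts like the ones above can be reproduced from a tidy DE table such as Supplementary Table S1. The pandas sketch below is illustrative only: the file name, the column names and the significance cutoff are our assumptions, not the paper's.

```python
import pandas as pd

# Assumed layout (ours, not the paper's actual file): one row per transcript and
# contrast, with columns gene_id, biotype ('protein_coding' or 'lncRNA'),
# species ('La'/'Lb'/'Li'), log2FC and padj versus uninfected macrophages.
de = pd.read_csv("supplementary_table_S1.csv")

sig = de[de["padj"] < 0.05].copy()                 # assumed significance cutoff
sig["direction"] = sig["log2FC"].map(lambda x: "up" if x > 0 else "down")
print(sig.groupby(["species", "biotype", "direction"]).size().unstack(fill_value=0))

# transcripts altered in all three infections, per direction (cf. Figures 1A,B)
for direction in ("up", "down"):
    sets = [set(sig[(sig["species"] == sp) & (sig["direction"] == direction)]["gene_id"])
            for sp in ("La", "Lb", "Li")]
    print(direction, "shared by La, Lb and Li:", len(set.intersection(*sets)))
```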
From the lncRNA-annotated subset, we identified a total of 735 DE lncRNAs (17% of DE, 335 up- and 400 downregulated) for La-infected macrophages and 795 DE lncRNAs (13% of DE, 380 up- and 415 downregulated) for Lb-infected macrophages, with the corresponding numbers for Li-infected macrophages given in Supplementary Table S1. To observe the similarities of modulated protein coding transcripts (mRNAs) and lncRNAs during infection by different Leishmania species, we compared the mRNA and lncRNA subsets of each model in the intersection plots (Figures 1A,B). Among the three infection models, we identified 673 commonly upregulated protein-coding genes (Figure 1A) and 448 commonly downregulated ones (Figure 1B). Also, 227 lncRNAs were upregulated regardless of parasite species (Figure 1A), and 249 had their expression reduced compared to uninfected macrophages (Figure 1B).

We observed that the majority of DE lncRNAs were intergenic (IG) and antisense (AS) lncRNAs for La-, Lb- and Li-infected macrophages; part of them were classified as novel transcripts (Figure 1C). Many transcripts remain unannotated and are herein included as non-classified (Figure 1C). Our data showed a species-specific alteration in lncRNA and protein coding RNAs, in which DE transcripts from Li infection stand apart (Supplementary Figure S2). Also, Leishmania infection can regulate sense- (mainly intergenic) and antisense-encoded lncRNA transcripts in macrophages (Figure 1C). Although not classified as a separate class, as they can be intronic or intergenic, we also identified modulation of some microRNA host genes (miR-HGs). We saw the upregulation of the mature microRNA (miR)-155 and its precursor MIR155HG, which can function as a lncRNA (Supplementary Table S1). We also identified other miR-HGs that act as lncRNAs (Supplementary Table S1).

Enrichment of immune response pathway-related gene sets upon Leishmania infection

We ran an enrichment analysis to investigate overrepresented pathways within the identified DE protein coding genes. Level 3 Reactome pathways were ranked by the normalized enrichment score (NES) obtained through gene set enrichment analysis (GSEA) for THP-1 macrophages infected with La, Lb or Li (Figure 2; Supplementary Table S2). [Figure 2: Enriched Reactome pathways for La, Lb, and Li infection in THP-1 macrophages. The plot represents the normalized enrichment score (NES), by both color and bubble size, of significantly altered level 3 Reactome pathways based on the protein coding genes of THP-1 macrophages infected with La, Lb or Li.] We found that most of the significantly enriched immune response pathways in Li-infected macrophages are involved in parasite recognition, such as Toll-like receptor cascades, the NLR signaling pathway, cytosolic sensors of pathogen-associated DNA, and the C-type lectin receptor. In contrast, MHC class II antigen presentation was enriched for La and Lb infection. Indeed, we extended our analysis to further evaluate species-specific activation of immune pathways, as shown by the cellular response to stimuli pathways. The pathways' related DE genes, to a higher or lesser extent, are depicted in the alluvial plots (Figure 3). We found DE mRNAs related to reactive oxygen species (ROS) and reactive nitrogen species (RNS) production in phagocytes, such as Neutrophil NADPH Oxidase Factors 1 and 2 (NCF1 or p47phox and NCF2 or p67phox) and ATPase H+ Transporting V1 Subunits B2, C1, D, F, H (ATP6V1), and to detoxification of ROS, such as thioredoxin (TXN) (Supplementary Table S2).
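A GSEA-style ranking like the one described above can be approximated with the gseapy package. The sketch below is a hedged illustration: the input file and column names are hypothetical, the Enrichr 'Reactome_2022' library stands in for the level 3 Reactome annotation actually used, and output column names may vary across gseapy versions.

```python
import gseapy as gp
import pandas as pd

# Hypothetical input: one contrast (e.g. Li vs. uninfected) with genes ranked
# by a signed DE statistic; the column names are ours.
rnk = (pd.read_csv("Li_vs_uninfected_DE.csv")[["gene_symbol", "stat"]]
         .sort_values("stat", ascending=False))

# Pre-ranked GSEA; 'Reactome_2022' is an Enrichr library used here as a
# stand-in for the paper's level 3 Reactome pathway set.
res = gp.prerank(rnk=rnk, gene_sets="Reactome_2022",
                 permutation_num=1000, seed=42, outdir=None)
print(res.res2d.head(10))     # includes pathway term, NES and FDR columns
```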
Among the enriched signal transduction pathways, we highlighted TNF signaling genes, such as X-Linked Inhibitor of Apoptosis (XIAP), the inhibitor of apoptosis Baculoviral IAP Repeat Containing 3 (BIRC3), MAP Kinase Activating Death Domain (MADD), and UBC, exclusively DE mRNAs in Li-infected macrophages, as well as DE mRNAs shared by all three Leishmania species infections, such as TNF, TNF Receptor Associated Factor 1 (TRAF1) and TNF Alpha Induced Protein 3 (TNFAIP3) (Figure 3D). In the cytokine response mediated by the tumor necrosis factor receptor 2 (TNFR2) non-canonical nuclear factor kappa B (NFκB) pathway, we found DE mRNAs exclusive to Li-infected macrophages, such as NFKB2 and RelB (Figure 3E), but also DE mRNAs shared by all three Leishmania species infections, such as TNF Superfamily Members 14 and 15 (TNFSF14/15) and TNF receptor Superfamily Members 1B, 9, 12A and 14 (TNFRSF).

Antisense lncRNAs are coexpressed with sense protein coding genes during Leishmania infection and may be involved in immune response

Since antisense lncRNAs correspond to a major regulated class during Leishmania infection, we ran a coexpression analysis to decipher whether the DE lncRNAs and their respective neighbor protein coding genes are simultaneously regulated and may be functionally connected through cis regulation. We found 69 lncRNA-mRNA pairs coregulated in La-infected macrophages, 77 in Lb-infected macrophages, and 101 in Li-infected macrophages (Figures 4A-C). Of those, we highlighted 10 antisense lncRNAs pairing to 8 genes previously described in Leishmania-infected macrophages, with their genomic positions (Figure 4D). We highlighted exclusively regulated mRNA-lncRNA pairs in Li-infected macrophages, such as IL21R/IL21R-AS1, HIF1A/HIF1A-AS1, HIF1A/HIF1A-AS3, and IRF1/IRF1-AS1. These results suggest that modulation of these genes may be linked to the specificity of the pathophysiology of each Leishmania species. We used NcPath's (Li et al., 2022) database of experimentally validated and predicted lncRNA-protein coding gene interactions to investigate these lncRNA targets by mechanisms other than cis regulation. To investigate their possible function, we submitted the putative target genes to Reactome enrichment of immune response-related pathways (Table 2). We found 15 significantly overrepresented pathways that can be regulated by the abovementioned lncRNAs, including key recognition receptors, such as TCRs, CLRs and NLRs, and cytokine signaling by interleukins and interferon.
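The coexpression logic can be illustrated in a few lines of pandas/scipy. The table layout, file name and column names below are our assumptions, not the study's actual files.

```python
import pandas as pd
from scipy import stats

# Assumed table (ours): one row per sense/antisense neighbour pair with the
# log2 fold changes of both transcripts for a given infection model.
pairs = pd.read_csv("as_lncRNA_mRNA_pairs_Li.csv")   # columns: mRNA, lncRNA,
                                                     # log2FC_mRNA, log2FC_lncRNA

r, p = stats.pearsonr(pairs["log2FC_mRNA"], pairs["log2FC_lncRNA"])
print(f"Pearson r = {r:.2f} (p = {p:.2e}) across {len(pairs)} neighbouring pairs")

# pairs regulated in the same direction: candidates for cis coregulation
coreg = pairs[pairs["log2FC_mRNA"] * pairs["log2FC_lncRNA"] > 0]
cols = ["mRNA", "lncRNA", "log2FC_mRNA", "log2FC_lncRNA"]
print(coreg.sort_values("log2FC_mRNA", ascending=False)[cols].head(10))
```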
Discussion

LncRNAs interfere with gene expression at multiple levels and affect the transcriptome by controlling transcription and mRNA stability in cis and in trans (Fernandes et al., 2019). To see whether lncRNAs may play a role in Leishmania-elicited macrophage responses, we analyzed the profile of host mRNAs and lncRNAs. For the first time, we showed species-specific regulation of host RNA expression in human macrophages infected with L. amazonensis, L. braziliensis, or L. infantum. Interestingly, even though La and Lb belong to different subgenera, the gene expression patterns of macrophages infected with those species are more similar to each other than to Li, indicating that gene expression can reflect the different pathologies more than evolutionary proximity.

We used total RNA depleted of rRNA for RNA-seq to address the analysis of both polyadenylated and non-polyadenylated RNAs, including lncRNAs from excised introns or some non-polyadenylated intergenic lncRNAs (Pinkney et al., 2020; L. Yang et al., 2011), facilitating a broader analysis of lncRNA expression in Leishmania-infected macrophages. With this, our transcriptome mapped 24% of the total identified DE transcripts to lncRNAs, the second most abundant class of RNA in our data after rRNA depletion (Table 1), revealing that a significant fraction of the transcriptome was unappreciated in previous studies in this field (Salloum et al., 2021). LncRNAs were also investigated in the genomes of Lb (Ruy et al., 2019), L. major, and L. donovani (Freitas Castro et al., 2017). Together, these recent studies open a field to study the interplay of host-parasite ncRNAs. Regarding our dataset, it is important to emphasize that there was no enrichment for small RNAs during library construction, as required for the proper identification of mature miRNAs, so the fraction of these molecules shown in Table 1 must be interpreted with caution. The literature has already established that miRNAs are essential for macrophage reprogramming during Leishmania infection (Paul et al., 2020). Moreover, as a new class of DE transcripts, we observed regulation of lncRNAs that are host genes for miRNAs (Supplementary Table S1), displaying important functions in inflammation, as recently described in the literature. Our data show an upregulation of the well-known inflammation-related miR-155 (Supplementary Table S1) as well as its host gene MIR155HG, which is involved in the proinflammatory response during chronic obstructive pulmonary disease (Li et al., 2019) and against influenza A virus (Maarouf et al., 2019). Recent work shows MIR155HG as a suppressor of dendritic cell-mediated autoinflammation (Niu et al., 2020). Also, MIR210HG was upregulated in our infection models and has been described to act together with HIF1α to promote glycolysis during cancer, revealing that lncRNAs may be involved in regulating metabolic pathways (Du et al., 2020). To further understand the distinct trends in mRNA and lncRNA regulation, we compared host gene expression in the infection by different Leishmania species. With this, we identified commonly induced lncRNAs (Figures 1A,B) that may control pathways mechanistically shared by distinct species to circumvent the immune response. On the other hand, the unique sets of lncRNAs regulated in Li-infected macrophages, or shared between La and Lb only, could unravel regulatory modules related to specificities of clinical manifestations. Although the total number of DE mRNAs and lncRNAs is similar among La-, Lb- and Li-infected macrophages (Supplementary Table S1), we observed a higher fraction of those specific for Li (Figures 1A,B). The difference is also evident in the multivariate analysis by PCA (Supplementary Figure S2), suggesting that the separation of the La and Lb infection clusters from the Li infection cluster reflects the host's gene expression modulation specificities, leading to the distinct physiopathological outcomes caused by these Leishmania species. Since lncRNA classification can contribute to interpreting their function (Fernandes et al., 2019), we showed that most of these transcripts are intergenic or antisense to protein-coding genes (Figure 1C). The high proportion of these types of lncRNAs is in line with that observed in the whole blood transcriptome, mainly composed of neutrophils and T cells, of visceral leishmaniasis patients infected with L. infantum (Maruyama et al., 2022). The prevalence of AS lncRNAs is in accordance with a study showing that over 20% of human transcripts pair to AS gene expression (Chen et al., 2004).
We first compared macrophage phenotypes upon infection with La, Lb, and Li based on the protein coding subset of genes. Our analysis showed that Li infection elicits a response with higher NES scores than La and Lb infection for immune response-related Reactome pathways (Figure 2). We also depicted the results in alluvial plots, facilitating further discussion of the specificities of DEGs among infections by different Leishmania species (Figure 3), since to our knowledge this is the first study to compare the transcriptomes of La-, Lb- and Li-infected macrophages. For this, we included pathways from the cellular response to stimuli from Reactome. Many of the identified DEGs were already identified in multiple models of Leishmania infection. Here, we identified upregulation of NCF1 (Figures 3A,B), corresponding to the gp47 subunit of the NADPH oxidase (NOX), which was previously shown to be essential for ROS production in murine neutrophils upon La infection (Carlsen et al., 2013). The NOX components NCF1, NCF2 (gp67), and CYBB (gp91) also lead to ROS production against Lb in human monocytes activated by IFN-γ and in cutaneous lesions (Novais et al., 2014). We also found other markers induced in the transcriptome, such as TNF-α, STAT1, and STAT4 (Novais et al., 2014). On the other hand, we identified enrichment of the detoxification of ROS pathways (Figure 3B). In the literature, the antioxidant response to La infection was evaluated at a systemic level, showing increased SOD2 levels and activity in the liver of infected mice (Gasparotto et al., 2017). Our data also corroborate data from L. major-infected BALB/c or C57BL/6 macrophages, where GSR was upregulated (Bichiou et al., 2021). The increased level of detoxification molecules agrees with the high glutathione levels found in La-infected macrophages (Mamani-Huanca et al., 2021). There was an exclusive regulation of mRNAs related to the cellular response to hypoxia, such as UBC, HIF1A, EPAS1/HIF2A, and the proteasome-related PSME1, PSME2, and PSMA6, in Li-infected macrophages (Figure 3C). In previous studies on visceral leishmaniasis models, HIF1α was essential for a host-protective response during L. donovani infection in vitro and in vivo (Mesquita et al., 2020). However, during L. major infection, Hif1a mRNA was only observed upon LPS + IFN-γ treatment or under hypoxia (Schatz et al., 2016), probably because some Leishmania species can interfere with its stabilization. HIF1α binds to hypoxia response element (HRE)-containing target genes, regulating the transcription of the genes encoding glucose transporter 1 (GLUT1), hexokinase II, pyruvate dehydrogenase kinase 1 and lactate dehydrogenase A, and glycolysis itself (Wheaton & Chandel, 2011), controlling the proinflammatory response of macrophages (Tannahill et al., 2013). The TNF and TNFR2 signaling pathways are also prevalent during Li infection. A previously published transcriptome of Li-infected THP-1 macrophages showed TNFAIP3 and IRF7 upregulation, but not of the IL1B transcript, which we found (Supplementary Table S1). The difference may be due to different procedures and the Leishmania strain used for the experiments (Gatto et al., 2020). To understand the implications of coexpressed protein coding mRNAs and closely located lncRNAs (Figure 4D), we ran a coexpression analysis showing that the fold changes of these mRNA-lncRNA pairs are positively correlated (Figures 4A-C). This approach led us to spot interesting correlations and to find the reported lncRNAs in previously published data.
Manual inspection of the available DEG table from a previously published transcriptome (Fernandes et al., 2016) allowed the identification of four antisense lncRNAs regulated after 24 h of La infection in primary human macrophages: the upregulation of BAALC antisense RNA 2 (BAALC-AS2) and GSN antisense RNA 1 (GSN-AS1) and the downregulation of BAIAP2 antisense RNA 1 (BAIAP2-AS1) and the solute carrier family 22 member 18 antisense (SLC22A18AS) (Fernandes et al., 2016). They also found upregulation of both the protein-coding BAALC and BAALC-AS2 after 4 and 24 h of La and L. mexicana infection. In our study, the BAALC/BAALC-AS1 and BAALC/BAALC-AS2 pairs were upregulated in La, Lb, and Li infection. BAALC is a binder of MAP3K1 and KLF4, previously described in cancer as interacting with these molecules and inhibiting their functions (Morita et al., 2015). MAP3K1 is involved in MAPK signaling in response to pro-inflammatory stimuli (Otto et al., 2021). It can be directly targeted by miR-770, reducing the M2 polarization of macrophages (Liu et al., 2021). During Leishmania infection, MAP3K1 appears to have species-specific expression, since it is upregulated by L. mexicana and downregulated by L. donovani, and could be involved in type I interferon production in dendritic cells (Favila et al., 2014). During La infection, MAPK activation is required for IL-10 production (Yang et al., 2007). The other target, KLF4, is essential for the M2 polarization of macrophages. We saw downregulation of the RORA/RORA-AS1 pair in La- and Li-, but not in Lb-infected macrophages. RORα is a transcription factor that negatively regulates inflammation, and its knockout in THP-1 cells significantly increases TNF, IL1β, and IL-6 production upon LPS stimulation (Nejati Moharrami et al., 2018). The RORα function in promoting M2 macrophage polarization is related to KLF4 (Han et al., 2017), the abovementioned BAALC target. However, no study has explored the function of RORA-AS1 yet. Maruyama et al. also found CA3-AS1 and IRF1-AS1 downregulated in the serum of patients with active visceral disease caused by Li versus controls (Maruyama et al., 2022). While the CA1/CA3-AS1 pair was upregulated in the blood of visceral leishmaniasis patients (Maruyama et al., 2022), in our study the CA3/CA3-AS1 pair is downregulated by La, Lb, and Li infection. CA3-AS1 was shown to act as a sponge of miR-93 (Zhang et al., 2020), while CA3 is a known antioxidant in different cell types (di Fiore et al., 2018); however, both transcripts lack functional studies in macrophages. The IRF1/IRF1-AS1 pair appears upregulated exclusively in Li-infected macrophages in our coexpression analysis (Figures 4A-C). IRF1-AS1 is induced by IFNα and was shown to be essential for its signaling and the NF-κB-mediated response, mediating the transcription of the IRF1 gene (Barriocanal et al., 2022), consistent with the correlation seen in our results. The upregulation of IRF1 prevents immunopathology in Li-infected mice (Sacramento et al., 2020). We also highlighted cytokine receptors in our coexpression analysis (Figure 4). Upregulation of IL1R1 occurred in all three Leishmania infection models. However, other species, like L. donovani, can impair IL1R1 activity at multiple levels to survive in macrophages (Parmar et al., 2018). We also could not find publications on the IL1R1-AS1 function. Among the IL-2 cytokine family receptors, the IL21R/IL21R-AS1 pair was downregulated only in Li-infected macrophages. IL21R is dispensable in the cutaneous leishmaniasis model caused by L. major (Fröhlich et al., 2007).
Also, in contrast to our finding, a previous study found an inverse expression level of the IL21R/IL21R-AS1 pair (Riege et al., 2017). The study of Maruyama et al. also found modulation of the antisense to the cytokine IL21, the IL21-AS1 lncRNA (Maruyama et al., 2022). The upregulation of the TNFRSF14/TNFRSF14-AS1 pair occurred upon Lb and Li infection. An independent study showed the upregulation of TNFRSF14 in Li-infected THP-1 cells (Gatto et al., 2020). TNFRSF14-AS1 is poorly studied, with one report indicating it as a prognostic marker in breast cancer (Dashti et al., 2020; Lv et al., 2021). Interestingly, we found that 13 of the 15 pathways targeted by the highlighted AS lncRNAs (Table 2) are also enriched in the analysis of protein coding genes (Figure 2). Both TNFRSF14-AS1 and IRF1-AS1 are upregulated in Li-infected macrophages and are involved in MHC-II antigen presentation. However, this pathway is only enriched in La- and Lb-infected macrophages in our analysis of protein-coding genes, indicating that a regulation with negative effects could be investigated for those targets. Some of the putative target genes related to the TNFR2 non-canonical NF-κB pathway, such as UBC, are depicted in the alluvial plots as regulated at the transcriptional level (Figure 3); however, since lncRNAs have multiple mechanisms, we cannot exclude regulation at the level of protein abundance or activity. Finally, this analysis attributed functions related to the immune response to the lncRNAs CA3-AS1 and BAALC-AS1, which were previously shown in the transcriptome of Leishmania-infected macrophages. We show that the lncRNA signature is altered during macrophage infection with Leishmania in a species-specific pattern. We could also detect coexpressed pairs of protein-coding mRNAs and proximal AS lncRNAs, suggesting cis regulatory roles at the host-parasite interface. We also included a pathway analysis of lncRNA putative targets indicating association with genes of the immune response at multiple regulatory levels. However, further validations and experiments are necessary to unravel the regulation, molecular mechanisms, and implications of lncRNAs in the response of macrophages to Leishmania infection.
Intracellular parasite count
For the infection count, parasites were labeled with the Vybrant CFDA SE Cell Tracer Kit (Invitrogen) at a proportion of 1×10⁸ parasites/mL with 5 μM CFDA SE reagent for 20 min at 25 °C. Afterwards, cells were washed with both FBS-supplemented RPMI and PBS 1X before infection. About 1×10⁶ THP-1-derived macrophages were infected with CFDA SE-labeled Leishmania in 24-well plates. Cells were harvested after 24 h of infection using PBS/EDTA 1 mM treatment for detachment and 4% PBS/paraformaldehyde (PFA) for fixation. Cells were resuspended in 25 μL of PBS 1X, and images were obtained in the FlowSight® imaging cytometer. Data analysis was performed in the IDEAS software using the Wizard Spot Count. The number of intracellular parasites per cell ranged from 2.7 to 4.2, and the percentage of infected macrophages ranged from 26% to 56% (Supplementary Figure S1).
RNA extraction and RNA-seq
Total RNA extraction from five independent biological replicates of each infected and uninfected macrophage condition was performed 24 h after infection using TRIzol reagent (Life Technologies, Carlsbad, CA, United States) following the manufacturer's instructions. Afterwards, RNA samples were treated with DNase I (1 U/µg of RNA) (Thermo Scientific, Lithuania, EU) at 37 °C for 1 h.
The absence of DNA contamination was determined from the A260/A280 ratio using a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, United States). RNA integrity was evaluated using an Agilent 2100 Bioanalyzer and a Pico Agilent RNA 6000 kit (Agilent Technologies, Santa Clara, CA, United States). rRNA depletion was performed using the Ribo-Zero Plus rRNA Depletion Kit (human/rat/mouse; Illumina). Library preparation was performed using 1 µg of rRNA-depleted total RNA with the TruSeq Stranded Total RNA LT Sample Prep Gold kit, without molecular barcodes. Samples were then prepared using the NovaSeq 6000 S4 Reagent Kit (Illumina) according to the manufacturer's instructions for RNA sequencing submission. The sequencing was performed with paired ends (100 bp) using the Illumina NovaSeq 6000 platform at the Macrogen Inc. service (Seoul, South Korea).
RNA-seq analysis
Quality control of the raw sequencing data was done using the FastQC tool. Mapping to the human reference genome assembly (GRCh38) was done using bowtie2 (Langmead & Salzberg, 2012). Read counts from the resulting BAM alignment files were obtained with featureCounts using a GTF gene annotation from the Ensembl database (Yang et al., 2014; Howe et al., 2021). The R/Bioconductor package edgeR was used to identify differentially expressed genes among the samples after removing absent features (0 counts) (McCarthy et al., 2012). Genes with adjusted p-values less than 0.05 were identified as differentially expressed. For each Leishmania-infected group comparison, gene set enrichment analysis was performed using the fgsea R package. Genes with Ensembl IDs were converted to gene symbols with the biomaRt package (Steffen et al., 2009) and ordered by their log FC values. The identified lncRNAs were classified using the GRCh38.p14 database source table with data for lncRNA annotation according to genomic position. These annotations were intergenic, antisense, overlapping, intronic, and non-classified; transcripts lacking validation studies were referred to as novel transcripts. For the enrichment analysis of protein-coding transcripts, preranked genes and Reactome gene sets from Enrichr were provided to GSEA (Subramanian et al., 2005), with the remaining parameters set to default. To identify significant common pathways among all comparisons, pathways with a p-value below a threshold of 0.05 in at least one comparison were selected and clustered based on the NES with hierarchical clustering. Correlation plots were generated to display the NES values using the corrplot package. The molecular degree of perturbation was assessed for each Leishmania-infected group's samples relative to the uninfected control group samples using the mdp R package (Gonçalves et al., 2019). The Pearson correlation test (|R| > 0.8) was used to identify the association between 10 pairs of lncRNAs (BAALC-AS1, BAALC-AS2, HIF1A-AS1, HIF1A-AS3, TNFRSF14-AS1, IRF1-AS1, IL1R1-AS1, IL21R-AS1, RORA-AS1, and CA3-AS1) and mRNAs (BAALC, HIF1A, TNFRSF14, IRF1, IL1R1, IL21R, RORA, and CA3). These lncRNAs' predicted targets were retrieved from the NcPath database (Li et al., 2022) and submitted to GSEA based on Reactome gene sets for enrichment analysis.
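To make the coexpression step concrete, the sketch below illustrates, in Python rather than the R stack used in this study, how a |R| > 0.8 Pearson filter over lncRNA-mRNA pairs can be applied. The expression matrix and its values are hypothetical placeholders, not data from this study.

# Minimal sketch (Python, not the authors' R pipeline) of the coexpression
# filter described above: keep lncRNA-mRNA pairs whose expression profiles
# across samples correlate with |R| > 0.8 (Pearson). The gene names carry
# over from the text, but the expression values are made-up placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
samples = 15  # e.g., 5 replicates x 3 infection conditions

# Toy normalized expression (one vector of sample values per gene).
expr = {
    "IRF1": rng.normal(8.0, 1.0, samples),
    "CA3":  rng.normal(5.0, 1.0, samples),
}
# Antisense partners built to correlate with their sense genes.
expr["IRF1-AS1"] = expr["IRF1"] * 0.9 + rng.normal(0, 0.2, samples)
expr["CA3-AS1"]  = expr["CA3"]  * 0.8 + rng.normal(0, 0.3, samples)

pairs = [("IRF1", "IRF1-AS1"), ("CA3", "CA3-AS1")]
for mrna, lnc in pairs:
    r, p = pearsonr(expr[mrna], expr[lnc])
    flag = "coexpressed" if abs(r) > 0.8 else "filtered out"
    print(f"{mrna}/{lnc}: R = {r:+.2f}, p = {p:.2e} -> {flag}")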
Data availability statement
The high-quality reads were deposited at NCBI under BioProject (https://www.ncbi.nlm.nih.gov/) with the accession number PRJNA881925.
Author contributions
Conceptualization, investigation, and writing-original draft preparation, JF, AG, and SM; methodology, JF, AG, and SM; software, AG; formal analysis, JF, AG, and SM; investigation, JF, AG, and SM; data curation, JF, AG, and SM; writing-review and editing, JF, AG, LF-W, HN, and SM; supervision, project administration and funding acquisition, HN and SM. All authors have read and agreed to the published version of the manuscript.
Suppression of vacancies boosts thermoelectric performance in type-I clathrates
Intermetallic type-I clathrates continue to attract attention as promising thermoelectric materials. Here we present structural and thermoelectric properties of single crystalline Ba8(Cu,Ga,Ge,v)46, where v denotes a vacancy. By single crystal X-ray diffraction on crystals without Ga, we find clear evidence for the presence of vacancies at the 6c site in the structure. With increasing Ga content, vacancies are successively filled. This increases the charge carrier mobility strongly, even within a small range of Ga substitution, leading to reduced electrical resistivity and enhanced thermoelectric performance. The largest figure of merit ZT = 0.9 at 900 K is found for a single crystal of approximate composition Ba8Cu4.6Ga1.0Ge40.4. This value, which may further increase at higher temperatures, is one of the largest found to date in transition-metal-element-based clathrates.
Introduction
Intermetallic type-I clathrates are promising materials for high-temperature thermoelectric (TE) applications. The unique TE properties of these materials are associated with the crystal structure, which is composed of polyhedral cages formed by covalently bonded host framework atoms and guest atoms ionically bonded inside these cages. The guest atoms can act as rattlers, 1-6 creating low-lying optical modes. If the frequency of the rattling modes lies within the acoustic range, an interaction of acoustic and optical modes may result and lead to low lattice thermal conductivities. [7][8][9][10][11][12][13][14] The charge transport is mainly governed by the framework, 1,2 giving rise to comparably high charge carrier mobilities. The combination of these properties is beneficial for reaching high values of the dimensionless thermoelectric figure of merit ZT = TS²/(ρκ), where T is the absolute temperature, S the Seebeck coefficient, ρ the electrical resistivity, and κ the thermal conductivity. κ is usually composed of the lattice thermal conductivity κ_ph and the electronic thermal conductivity κ_e. As a common parameter of S, ρ, and κ_e, the charge carrier concentration plays a crucial role in determining ZT. Generally, type-I clathrates can be regarded as Zintl compounds: 11,15,16 the guest atoms (for anionic clathrates) donate their valence electrons to the framework atoms, which use them in covalent framework bonds. If all valence electrons are used up, the system is an insulator. If there are more (fewer) valence electrons than needed to complete the bonding, the system is an n-type (a p-type) semiconductor. Thus, the composition as well as details of the crystal structure (e.g., whether there are vacancies and how atoms distribute in the crystal structure [17][18][19]) are critical for the thermoelectric properties of clathrates. 11,16,20 So far, the type-I clathrates most promising for thermoelectric applications are Ba-Ga-Ge(Si)-based compounds with a Ga content around 16 atoms per unit cell (u.c.). Transition metal (TM) element containing clathrates have also been widely studied 11 and remarkable ZT values have been reported for clathrates such as the Ba-Au-Ge system 21 and the Ba-Zn-Ge-Sn system. 22 Cu-containing clathrates, interesting due to the low price of Cu, still have low ZT values due to non-optimized charge carrier concentrations and low charge carrier mobility. For Ba8CuxGe46-x clathrates, studies showed that vacancies exist when x ≲ 5.5 (Ref. 23).
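As a quick numerical illustration of the two bookkeeping rules used throughout this work, namely the figure of merit ZT = TS²/(ρκ) and a simple Zintl electron count, the following Python sketch evaluates both for a Ba8CuxGayGe46-x-y composition. All numerical inputs are illustrative round numbers, not measured values from this study.

# Toy sketch of ZT and a Zintl electron count for Ba8 Cu_x Ga_y Ge_(46-x-y).
# Inputs are illustrative, chosen only to land near the quoted ZT ~ 0.9.

def figure_of_merit(T, S, rho, kappa):
    """ZT from temperature T [K], Seebeck S [V/K],
    resistivity rho [ohm*m], thermal conductivity kappa [W/(m*K)]."""
    return T * S**2 / (rho * kappa)

def zintl_excess_electrons(x_cu, y_ga):
    """Excess electrons per unit cell for Ba8 Cu_x Ga_y Ge_(46-x-y).
    Each Ba donates 2 e-; each Cu needs 3 extra e- and each Ga 1 extra e-
    to complete its four framework bonds (vacancy-free framework assumed)."""
    return 8 * 2 - 3 * x_cu - 1 * y_ga

# Order-of-magnitude check at 900 K with round placeholder values:
print(figure_of_merit(T=900, S=-180e-6, rho=1.2e-5, kappa=2.5))  # ~0.97
print(zintl_excess_electrons(x_cu=4.6, y_ga=1.0))  # ~1.2 e-/u.c., i.e. n-type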
Previous studies have suggested that it is unfavorable if a clathrate optimized for charge carrier concentration contains vacancies, because these may scatter charge carriers and reduce the charge carrier mobility. 11 Attempts were then made to change the atomic environment by elemental substitution, for instance with Sn (Ref. 24) or Ga. 17,25 Interesting results, including improved charge carrier mobility and enhanced TE properties, have been observed. 24,25 In the present work, we study variations of the Ga content in the framework of Ba8(Cu,Ga,Ge,v)46 clathrates, and their effects on the structural and TE properties. For this purpose, we grew two different large single crystals by the floating-zone technique, one with and the other without Ga. The Ga-containing as-grown crystal shows compositional gradients, as seen in other crystals prepared by the floating-zone technique. [26][27][28][29][30][31][32] This can be exploited to study the compositional dependence of the TE properties. Interestingly, within a very narrow composition range, the charge carrier mobility is sizably enhanced with increasing Ga content, leading to reduced electrical resistivity and enhanced thermoelectric performance.
Crystal growth
As starting materials for the growth of the Ga-containing crystal, two cylindrical rods with the same nominal composition Ba8Cu4.8Ga1Ge40.2 were prepared in a high-frequency induction furnace from high-purity elements. One rod, 7 mm in diameter and 60 mm in length, served as the feed rod; the other, with the same diameter and 20 mm in length, served as the seed for the crystal growth. The crystal was grown in a 4-mirror furnace equipped with 1000 W halogen lamps. The pulling speed of the rod was 3-5 mm/h. Both rods rotated oppositely (speed: ∼8 rpm) to ensure efficient mixing of the liquid and a uniform temperature distribution in the molten zone. A pressure of 1.5 bar of Ar was used during the crystal growth. For more details on the growth conditions, please refer to our previous work. 30 To elucidate the effects of Ga substitution, a Ga-free single crystal with nominal composition Ba8Cu4.8Ge41.2 was grown using the same synthesis process.
Characterization
Single crystals with a size of about 60 µm were mechanically isolated from crushed single crystal pieces. Inspection on an AXS-GADDS texture goniometer assured high crystal quality and provided unit cell dimensions and Laue symmetry of the specimens prior to X-ray intensity data collection on a four-circle Nonius Kappa diffractometer equipped with a CCD area detector, employing graphite-monochromated Mo-Kα radiation (λ = 0.071069 nm) at 300 K. The orientation matrix and unit cell parameters were derived using the program DENZO. 33 No absorption corrections were necessary because of the rather regular crystal shapes and the small dimensions of the investigated specimens. The structures were solved by direct methods and refined with the Oscail program. A quantitative analysis of the structural details was done with the program SHELXS-97. 34 X-ray powder diffraction (XPD) data were collected using a HUBER-Guinier image plate system (Cu Kα1, 8° ≤ 2θ ≤ 100°). Lattice parameters were calculated by least squares fits to indexed 2θ values employing Ge (a_Ge = 0.5657906 nm) as internal standard. Rietveld refinements were performed for the XPD data using the program FULLPROF. 35 The composition was determined by energy dispersive X-ray spectroscopy (EDX) in a scanning electron microscope (SEM) operated at 20 kV (Zeiss Supra 55VP, probe size: 1 µm).
Table 1: Average compositions derived from Eq. 1 for the different samples/parts (Fig. 1), theoretical carrier concentration n calculated from Eq. 2 under the assumption of no vacancies in the framework, experimental charge carrier concentration n_H and mobility μ_H (both at 300 K) evaluated from Hall effect measurements, electrical resistivity ρ(300 K), Seebeck coefficient S(300 K), and effective mass m*(300 K) derived from Eq. 3.
Physical properties
The electrical resistivity ρ and Seebeck coefficient S were measured with a ZEM-3 (ULVAC-Riko, Japan) between room temperature and 600 °C. In order to fulfill the size limitation for the measurements, the as-grown single crystal was cut into 3 parts of ∼7 mm in length (samples S-a, S-b, and S-c shown in Fig. 1). The two temperature sensors (T1 and T2) are asymmetrically arranged around the sample center (see Fig. 1 (c)). To maximize the number of different measurement geometries and thus the amount of data from different sample compositions, each sample was measured along two directions, as indicated at the bottom of Fig. 1 (b). Both ρ and S have uncertainties of < 5%. The thermal conductivity at high temperatures (300-900 K) was derived from the thermal diffusivity D_t measured using the flash method with a Flashline-3000 (ANTER, USA), the specific heat C_p estimated using the Dulong-Petit approximation, and the density D, using the relation κ = D_t C_p D. A disc-like sample (diameter φ = 6 mm, thickness t = 1 mm), selected from near the beginning of the as-grown single crystal, was used. Hall effect measurements were performed in a physical property measurement system (PPMS, Quantum Design, Model 6000) in the temperature range 2 to 300 K, in magnetic fields up to 9 T. We used a standard 6-point ac technique in which the Hall contacts are perpendicular to both the magnetic field and the electrical current. Small longitudinal resistivity components due to contact misalignment were subtracted by magnetic field reversal. At selected temperatures we confirmed that the Hall response is linear in field. The charge carrier concentration n_H was calculated using a simple one-band model, n_H = 1/(eR_H). The Hall mobility was determined by μ_H = R_H/ρ. To determine the average charge carrier concentration of each part, the Hall contacts were positioned around the center of each part (Fig. 1 (d)). Specific heat measurements in zero magnetic field were performed with a PPMS by a standard relaxation method between 2 and 300 K.
Chemical properties of as-grown single crystals
Both the Ga-containing and the Ga-free as-grown single crystals have a length of ∼22 mm and a diameter of ∼7 mm. XPD, optical microscopy, and SEM measurements confirmed the type-I clathrate structure (no. 223, Pm-3n) and the high quality with no visible foreign phases. Figure 2 shows an example from XPD. The crystals are stable in air and mechanically strong. Lattice parameters from different parts of each crystal are very similar; the average value is 1.06975(2) nm. The composition determination by EDX was performed along both the growth direction and the radial direction (Fig. 1 (a)). In the Ga-free crystal, the composition differences in both directions are very small. The average composition is Ba8Cu5.0Ge41.0, with a Cu content slightly above that of the nominal composition Ba8Cu4.8Ge41.2.
The composition of the Ga-containing crystal, however, changes distinctly along the growth direction (Fig. 1 (b)), indicating a complex reaction scheme during the crystal growth process. 30 There is a clear correlation between the Cu and the Ga content, with the Cu content increasing and the Ga content decreasing along the growth direction. This is the behavior expected within an electron-balanced scheme of the Zintl rule. 11,15,16 The changes of the Cu and Ga contents along the coordinate z (defined along the growth direction, see Fig. 1 (b)) are described by x_Cu(z) = 0.022z + 4.556 and y_Ga(z) = -0.020z + 1.089. The average composition of each piece S-a to S-c used for the physical property investigations can then be estimated from these relations. The refinements of the single crystal data for both crystals revealed isomorphism of the type-I clathrate structure (SG: 223, Pm-3n). The heavier Ba atoms are located at the 2a and 6d sites, and the framework sites are 6c, 16i, and 24k. For further refinements, we first assumed an ordered model for the framework, i.e., Cu fully located at the 6c site and Ge/Ga at the 16i and 24k sites. As the differences between Cu, Ge, and Ga atoms are essentially invisible in X-ray diffraction, this is certainly a plausible way forward. The refinements gave very good reliability factors and reasonable thermal parameters (temperature factors). Though vacancies have been evidenced in some ternary Ba8CuxGe46-x clathrates, 23,36 we could not pin down their existence in our refinements. This might be due to a very low level of vacancies in our crystals. The almost spherical shapes of the atoms at the 6c site (Fig. 3), which reflect the thermal parameters of the refinement, make it difficult to recognize vacancies in the structure. Even a model with a site splitting at the 24k site does not reveal any sizable distortion from a spherical shape. A change of the occupation in the structure model for the refinement does not change the interatomic distances in the structure. Thus, a comparison of interatomic distances in our two crystals may be the most sensitive means to reveal vacancies. We focused on the following distance changes induced by Ga substitution: the distance between the Ba atoms and the framework, and the distance between coordinated framework atoms (see Fig. 3 (b) and (c, 1-3)). The results are given in Table 2 and visualized in Fig. 4 (a). The Ga substitution of about 1 at./u.c. shrinks the small cages by shortening the interatomic distance Ba(2a)-Ge(24k), but leaves the large cages essentially unchanged. All large changes shown in Fig. 4 are associated with the atoms at the 24k site, suggesting at first glance that Ga could replace Ge at the 24k site in Ba8Cu4.8Ga1Ge40.2, just as Sn does in Ba8.0Cu5.1Sn0.7Ge40.2. 24 However, the structure is more complex here, because locating Ga at the 24k site alone cannot explain (1) the shrinkage of the interatomic distance Ge(24k)-Ge(24k), which should be elongated due to the slightly larger covalent radius of Ga compared to that of Ge; (2) the elongation of the interatomic distance Cu(6c)-Ge(24k); and (3) the shrinkage of the small cages. We therefore introduce a vacancy-filling model in which Ga atoms fill vacancies at the 6c site, leading to an increased interatomic distance Cu(6c)-Ge(24k) and a shortened distance Ge(24k)-Ge(24k), as sketched in Fig. 4 (b). This strongly suggests that vacancies exist in Ba8Cu4.8Ge41.2 and are filled by atoms upon Ga substitution. A possible structure model for Ba8Cu4.8Ge41.2, with Cu+Ge (the Cu content is fixed to 5.0 at./u.c.
from EDX; M1 in Table 3) occupying the 6c site, and a model for Ba8Cu4.8Ga1Ge40.2, with Cu+Ge (the Cu content is fixed to 4.6 at./u.c. from EDX; M1) at the 6c site and Ge+1.0Ga (M2 in Table 3) at the 24k site, are shown in Table 3. The atomic parameters are comparable with the available values for similar compositions in the literature. 23
Physical properties
3.3.1 Thermoelectric properties
The temperature dependent electrical resistivity ρ(T), Seebeck coefficient S(T), and power factor PF(T) = S²/ρ for the three pieces of the Ga-containing single crystal are shown in Fig. 5. ρ(T) exhibits metal-like behavior for all samples and changes systematically from S-a to S-c (see also Fig. 6 (c)). S(T) is negative and depends linearly on temperature. The highest PF value of 1.4 mW/(mK²) is reached in S-a at 900 K. This is about 30% larger than the PF of sample S-c-2 at the same temperature. The negative sign of S is related to the remaining nonbonded electrons (Table 1), and the increase of ρ from S-a to S-c may, at least in part, be due to an accompanying decrease of the charge carrier concentration n (Fig. 6 (d)). Note that the decrease of the Cu content x_Cu associated with the increase of the Ga content y_Ga slows down the change of n with composition, providing flexibility to finely tune the charge carrier concentration by composition. The Hall effect analysis, however, shows that changes in the Hall mobility μ_H dominate the change in electrical resistivity ρ. As expected from the Zintl rule, the experimentally determined n_H does indeed change with composition, but not as strongly as predicted by Eq. 2 (see Fig. 6 (d)). The Hall mobility, however, is strongly enhanced with increasing Ga content (Fig. 6 (d) and Table 1). At 300 K, for instance, μ_H of S-a is 40% larger than μ_H of S-c, corresponding to almost the same relative reduction as that in ρ. In comparison, n_H increases by only 15% (see Fig. 6 (d)). To understand the origin of the enhanced mobility, we analyzed the effective mass m* and the scattering parameter λ, which are related to the mobility by μ = eτ/m*. Here, τ is the average relaxation time for all scattering processes, and e is the electron charge. m* is estimated from S(T) (Fig. 5 (b)) following Ref. 47, and λ is derived by fitting μ_H(T) (Fig. 6 (b)) between 200 and 350 K with μ_H ∝ T^λ. With increasing Ga content, both m* and λ decrease slightly (Table 1 and Fig. 6 (b)). The small decrease of m* (by only 5% between S-a and S-c) can only partially account for the increase of μ_H. Therefore, we identify an increased relaxation time τ as the main origin of the observed mobility enhancement. The λ values between -0.65 and -0.70 are relatively close to -0.5, the value for alloy disorder scattering. 25 The increase of |λ| may be related to the vacancy filling, which reduces alloy disorder scattering and thus yields the high mobility. To test this conjecture, we performed Hall effect measurements also on our Ga-free single crystal Ba8Cu4.8Ge41.2, which has distinctly more vacancies than all the Ga-containing crystals. Indeed, the Ga-free sample has the lowest |λ| (λ is close to -0.5) and the lowest mobility of all our crystals (Fig. 6 (b)). The mobility of 9.5 cm²/Vs at 300 K for the Ga-free sample is even lower than the 11.9 cm²/Vs for the Sn-substituted single crystal Ba8.0Cu5.1Sn0.7Ge40.2, which has strongly distorted cages due to the large size of Sn (Ref. 24).
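The one-band Hall analysis and the scattering-exponent fit described above can be sketched as follows in Python. The R_H(T) and ρ(T) arrays are synthetic placeholders chosen only to reproduce the order of magnitude of the quantities discussed, not measured data from this study.

# Sketch of the one-band Hall analysis, n_H = 1/(e*R_H) and mu_H = R_H/rho,
# and of the scattering-exponent fit mu_H ~ T^lambda. Synthetic inputs only.
import numpy as np

e = 1.602176634e-19  # elementary charge [C]

T = np.linspace(200.0, 350.0, 7)                  # K
R_H = -np.full_like(T, 6.0e-9)                    # m^3/C (n-type, ~const n)
rho = 4.0e-6 * (T / 300.0) ** 0.67                # ohm*m, toy T-dependence

n_H = 1.0 / (e * np.abs(R_H))                     # carriers per m^3
mu_H = np.abs(R_H) / rho                          # m^2/(V*s)

# lambda from a straight-line fit of log(mu_H) vs log(T) between 200 and 350 K
lam, intercept = np.polyfit(np.log(T), np.log(mu_H), 1)

print(f"n_H(300 K)  ~ {n_H[0]:.2e} m^-3")
print(f"mu_H(300 K) ~ {np.interp(300.0, T, mu_H) * 1e4:.1f} cm^2/Vs")
print(f"lambda ~ {lam:.2f}  (alloy-disorder scattering would give -0.5)")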
Figures 7 (a) and (b) show the temperature dependent thermal conductivity κ_tot and the figure of merit ZT, respectively, for sample S-a and, for comparison, for Ba8.0Cu5.1Sn0.7Ge40.2. 24 S-a has a higher thermal conductivity than the Sn-substituted single crystal, which, however, is in part due to the higher electronic contribution κ_e. Below about 780 K, ZT of our sample S-a is somewhat lower than that of the Sn-substituted crystal, which is mostly due to the lower κ_tot of the latter. At high temperatures, however, ZT of our S-a crystal is much higher due to its high power factor (see Fig. 5 (c)). The highest ZT of 0.9 is achieved at 900 K. As ZT(T) is still not saturated at the highest temperature of our experiments, we anticipate even larger values at higher temperatures.
Specific heat
Specific heat C_p data of S-a are shown in Fig. 8. Below 4 K the standard description C_p = γT + βT³ is assumed, where γ is the Sommerfeld coefficient of the electronic contribution and β the low-temperature coefficient of the lattice contribution. The fit (Fig. 8, inset) yields γ = 14.4 mJ/(mol K²) and β = 3.75 mJ/(mol K⁴). Using θ_D = (12π⁴RN/5β)^(1/3), where R is the gas constant and N the number of atoms per u.c., and treating the guest atoms as independent Einstein oscillators and the framework atoms as a Debye solid, i.e., N = N_D = 46, we obtained θ_D = 287 K. Both γ and θ_D are in good agreement with the values derived for Ba8Cu5.3Ge40.7 (γ = 11.8 mJ/(mol K²) and θ_D = 289 K). 48 To model the data in the entire temperature range (Fig. 8, main panel) we used the sum C_p = C_D + C_E, where C_D and C_E are the Debye and the Einstein contributions, respectively, with x = ω/(k_B T) and the phonon angular frequency ω, and where p_i is the number of degrees of freedom, N_Ei the number of Einstein oscillators, and θ_Ei the Einstein temperature of the i-th vibrational mode. With the constraints for type-I clathrates given in Table 4, 20,40 the data are well described with two Einstein temperatures, representing vibrations in two perpendicular directions for Ba at the 6d site, and one Einstein temperature for Ba at the 2a site (see Table 4).
Table 4: Constraints evaluated from the structural characteristics of type-I clathrates, 20,40 used in the fit of C_p(T) with Eq. 5, and corresponding results. The subscripts 1 and 2 denote atoms at the 2a site (dodecahedral cages) and the 6d site (tetrakaidecahedral cages), respectively. ∥ and ⊥ represent vibration directions of the atoms at the 6d site parallel and perpendicular to the 6-atom-ring planes of the tetrakaidecahedra.
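A minimal Python sketch of this Debye-plus-Einstein description follows, using the standard textbook forms of C_D and C_E. The fitted β and the N_D = 46 framework count are the values quoted above, while the Einstein temperatures and mode assignments in the usage example are illustrative placeholders, not the fitted values of Table 4.

# Sketch of the Debye + Einstein specific-heat model (standard textbook
# forms). beta = 3.75 mJ/(mol K^4) and N_D = 46 are quoted in the text;
# the Einstein temperatures below are illustrative placeholders.
import numpy as np
from scipy.integrate import quad

R = 8.314462618  # gas constant, J/(mol K)

def debye_cv(T, theta_D, N_D=46):
    """Debye contribution for N_D framework atoms per formula unit."""
    x_D = theta_D / T
    integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x) ** 2, 0.0, x_D)
    return 9.0 * N_D * R * (T / theta_D) ** 3 * integral

def einstein_cv(T, theta_E, p=1, N_E=1):
    """Einstein contribution: p degrees of freedom for N_E oscillators."""
    x = theta_E / T
    return p * N_E * R * x**2 * np.exp(x) / np.expm1(x) ** 2

def theta_D_from_beta(beta, N=46):
    """theta_D = (12*pi^4*R*N/(5*beta))^(1/3), beta in J/(mol K^4)."""
    return (12.0 * np.pi**4 * R * N / (5.0 * beta)) ** (1.0 / 3.0)

print(theta_D_from_beta(3.75e-3))  # ~288 K; the text quotes 287 K

# Illustrative total at 100 K: Ba(6d) perpendicular and parallel modes
# plus Ba(2a), so that the guest degrees of freedom sum to 8 x 3 = 24.
T = 100.0
C = (debye_cv(T, 287.0)
     + einstein_cv(T, 60.0, p=2, N_E=6)    # Ba(6d), two perpendicular modes
     + einstein_cv(T, 90.0, p=1, N_E=6)    # Ba(6d), parallel mode
     + einstein_cv(T, 110.0, p=3, N_E=2))  # Ba(2a), isotropic
print(f"C_p(100 K) ~ {C:.0f} J/(mol K)")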
Conclusions
Our detailed crystal structure and thermoelectric property investigation of type-I Ba8(Cu,Ga,Ge,v)46 clathrate single crystals unambiguously revealed that vacancies, present at the 6c site in Ba8Cu4.8Ge41.2, are successively filled upon Ga substitution. This was established by an X-ray single crystal diffraction study, demonstrating an elongation of the Cu(6c)-Ge(24k) interatomic distance and a shrinking of both the Ge(24k)-Ge(24k) distance and the diameter of the small cages in Ba8Cu4.8Ga1Ge40.2. The vacancy filling removes local disorder and leads to an increased charge carrier mobility and thus to enhanced thermoelectric performance. In view of the narrow composition range (Cu: ∼4.6 to ∼4.9 at./u.c., Ga: ∼1.0 to ∼0.7 at./u.c.) in our single crystal, the size of the enhancement is surprisingly large. The highest figure of merit ZT = 0.9 at 900 K was achieved for a single crystal with an approximate composition Ba8Cu4.6Ga1.0Ge40.4. This value, which had still not reached saturation at the highest temperature of our measurements, is to date one of the largest in transition metal element-containing clathrates. We conclude that reducing the vacancy content in type-I clathrates is an important design strategy for optimizing their thermoelectric performance.
Conflict of interest
There are no conflicts to declare.
Heisenberg-limited spin squeezing in a hybrid system with Silicon-Vacancy centers
In this paper, we investigate spin squeezing in a hybrid quantum system consisting of a Silicon-Vacancy (SiV) center ensemble coupled to a diamond acoustic waveguide via the strain interaction. Two sets of non-overlapping driving fields, each containing two time-dependent microwave fields, are applied to this hybrid system. By modulating these fields, the one-axis twisting (OAT) interaction and the two-axis two-spin (TATS) interaction can be realized independently. In the latter case the squeezing parameter scales with spin number as $\xi_R^2 \sim 1.61N^{-0.64}$ when dissipation is taken into account, which is very close to the Heisenberg limit. Furthermore, this hybrid system allows for the study of spin squeezing generated by the simultaneous presence of OAT and TATS interactions, which reveals sensitivity to the parity of the number of spins $N_{tot}$, whether even or odd. Our scheme enriches the approaches for generating Heisenberg-limited spin squeezing in spin-phonon hybrid systems and offers possibilities for future applications in quantum information processing.
In this work, we propose a scheme for generating spin-squeezed states in a hybrid system consisting of an ensemble of SiV centers coupled to the acoustic mode of a diamond waveguide via the strain interaction. This SiV ensemble is partitioned into two segments by two sets of non-overlapping microwave fields. The strain-induced coupling enables effective spin-spin interactions mediated by virtual phonons; the OAT and TATS interactions can then be induced independently, where the latter can realize Heisenberg-limited spin squeezing [1][2][3]. Furthermore, we investigate the spin-squeezed states generated by the mixed Hamiltonian of OAT and TATS interactions and show the sensitivity of these states to even versus odd numbers of spin particles, which holds potential for sensing applications. Considering the practical dissipation in the system, the squeezing parameter ξ_R² follows the trend ξ_R² ∼ 1.61N^(-0.64), which can be used to achieve a measurement precision close to the Heisenberg limit. Compared to other schemes that necessitate the use of squeezed-field injection, complex pulsed drives or parametric drives to generate better spin-squeezed states, our scheme requires only the appropriate modulation of microwave fields to obtain better spin-squeezed states in this spin-phonon hybrid system.
Our paper is organized as follows. In Sec. II, we introduce the theoretical model of a hybrid quantum system consisting of two SiV-center segments embedded in a quasi-1D acoustic waveguide. Section III shows the time evolution of the squeezing parameters ξ_S² and ξ_R² in the cases of the OAT, TATS and mixed OAT-TATS Hamiltonians. In Sec. IV, we discuss the experimental feasibility of this scheme and analyze the influence of the experimental dissipation of this hybrid system. Finally, we summarize in Sec. V.
We consider an ensemble of SiV centers coupled to an acoustic mode of a 1D diamond waveguide via the strain-induced interaction. This interaction arises from the change of the Coulomb energy of the electronic states due to the displacement of the atoms forming the defect. First, we consider the SiV centers in segment S1, which are driven by two time-dependent microwave fields Ω1(t) and Ω2(t); this system can be described by the Hamiltonian [63,65] H = H_SiV,S1 + H_ph + H_strain,S1, where H_SiV,S1 and H_ph are the Hamiltonians of the SiV centers in segment S1 and of the acoustic mode, respectively, and H_strain,S1 denotes the strain-induced coupling between the orbital degree of freedom of the SiV centers in segment S1 and the common acoustic mode of the waveguide, as shown in Fig. 1(b). The SiV center is an interstitial point defect in which a silicon atom is positioned midway between two adjacent missing carbon atoms in the diamond lattice, as depicted in the inset of Fig. 1(a). Its ground state is four-fold degenerate, with the corresponding energy splitting Δ = [λ_g² + Υ_x² + Υ_y²]^(1/2) ≈ 2π × 46 GHz, where λ_g = 2π × 45 GHz is the spin-orbit coupling strength and Υ_x(y) describes the strength of the Jahn-Teller (JT) effect along the x (y) direction [59,63]. Two time-dependent microwave fields Ω1,2(t) with frequencies ω1,2 induce transitions between the states |1⟩ ↔ |4⟩ and |2⟩ ↔ |3⟩, as shown in Fig. 1(b). Consequently, the dynamics of the SiV centers can be described by a Hamiltonian (ℏ = 1) [63,65] in which ω_B = γ_s B_0 denotes the energy-level splitting induced by the Zeeman effect, and we set ω_B ≈ 2π × 5 GHz here. γ_s is the spin gyromagnetic ratio, and j labels the j-th SiV center in segment S1. Now we consider the acoustic modes in the quasi-1D diamond waveguide. The length, width, and thickness of the waveguide are L, w, d, respectively, as shown in Fig. 1(a), satisfying L ≫ w, d. The quantized Hamiltonian of the acoustic modes can be written in the standard form H_ph = Σ_{n,k} ω_{n,k} a†_{n,k} a_{n,k}, where a_{n,k} is the annihilation operator of one acoustic mode. Considering that the acoustic modes are well separated in frequency (Δω_n ≥ 2π × 50 MHz) in a waveguide of small size, we can treat the mechanical mode as a single standing wave with ω_n ≈ 2π × 46 GHz for simplicity. Then the Hamiltonian Eq. (1) can be rewritten accordingly (Eq. (6)). Performing a unitary transformation with respect to U = e^(-iH_0 t) gives the Hamiltonian in the interaction picture, where ν, δ1, δ2 are the corresponding detunings between the frequencies ω_{n,k}, ω1, ω2 and the eigenfrequencies of the states |3⟩ and |4⟩, as shown in Fig. 1(b). In the regime of large detunings, we may further eliminate the higher energy levels |3⟩ and |4⟩ via a Froehlich-Nakajima transformation [69][70][71]. Finally, we obtain an equivalent two-level Hamiltonian (Eq. (8)), whose parameters ε1, λ1, Λ1 take the forms given in Eq. (9). Next we consider the total hybrid system with the two segments S1,2, which are connected by a common acoustic mode. The effective Hamiltonian of this whole hybrid system can be written as in Eq. (10), where collective spin operators of the SiV centers built from sums such as Σ_j |1⟩_j⟨2| appear, the subscripts 1 and 2 denote the two segments, and N1, N2 are the total spin numbers of the corresponding SiV-center segments. The parameters ε2, λ2, Λ2 in Eq. (10) take the forms given in Eq. (11). As shown in Eq. (9) and Eq. (11), by properly adjusting the microwave fields, the effective detunings can be set as ε1 = -ε2 = Δ_s, which implies that the two parts of the SiV centers are physically different. In addition, we set w1 = w2 = 0.
Assuming that the setup works at a temperature of 100 mK, the thermal phonon number of the acoustic mode is close to 0, i.e., ⟨a†a⟩ ∼ 0. Moreover, when the condition Δ_s ≫ λ_{1,2}g_n, Λ_{1,2}g_n is satisfied, applying the canonical transformation H → e^(-S) H_eff e^(S) [71,72] finally yields an effective projected Hamiltonian in the spin-ensemble subspace [72,73] (Eq. (13)). Part of the terms in Eq. (13) represent the OAT interaction, while a further term indicates the TATS interaction [2,3,23]. Thus, by tuning the driving fields, one can realize an OAT Hamiltonian along the z axis, a TATS Hamiltonian, and a mixed Hamiltonian containing the OAT and TATS interactions, respectively. In addition, the dynamical evolution of the system can be described by a quantum master equation (Eq. (14)), in which n_th is the average thermal phonon number and the collective spin relaxation is induced by the mechanical dissipation Γ_m of the corresponding acoustic mode.
III. SPIN SQUEEZING
In this section, we quantify the degree of spin squeezing by calculating the two most frequently used squeezing parameters, ξ_S² = 4(ΔJ_n⊥)²_min/N_tot and ξ_R² = N_tot (ΔJ_n⊥)²_min/|⟨J⟩|², where (ΔJ_n⊥)²_min is the minimum variance in a direction perpendicular to the mean spin direction and |⟨J⟩| = [⟨J_x⟩² + ⟨J_y⟩² + ⟨J_z⟩²]^(1/2) denotes the magnitude of the mean spin. N_tot = N1 + N2 is the total number of SiV centers in the waveguide, and for the sake of simplicity we assume that N1 ≃ N2.
A. OAT interaction Hamiltonian
When the terms in Eq. (13) are set appropriately by tuning the amplitudes and frequencies of the driving fields, we can obtain an OAT Hamiltonian along the z axis (Eq. (16)), with OAT interaction strengths of the corresponding spin ensembles set by the drive parameters and Δ_s; these terms realize a standard OAT interaction Hamiltonian [3]. Figure 2 shows the time evolution of the squeezing parameters ξ_S² and ξ_R². The red line, black line, and green dotted line represent the squeezing parameters for N1 = N2 = N = 20, 30, 50, respectively. The hybrid system evolves from a spin coherent state oriented along the x axis, for which both parameters ξ_S² and ξ_R² equal 1, as depicted in Fig. 2 at time 0. As the system begins to evolve, these two parameters become smaller than 1, indicating that spin squeezing has been generated in this hybrid system. As shown in Fig. 2, the generated spin-squeezed states reach their optimal squeezing at time t ∼ 20 µs, with minimum values ξ_S² ≈ 0.18, 0.11, 0.08 and ξ_R² ≈ 0.22, 0.15, 0.13 for N = 20, 30, 50, respectively. In addition, we find that ξ_S² < ξ_R² for the same spin numbers, which is consistent with the results mentioned in Ref. [3].
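For illustration, the following pure-state Python sketch (plain numpy, no dissipation) evolves a coherent spin state along +x under a generic OAT Hamiltonian H = χJ_z² and evaluates the two squeezing parameters defined above. χ and t are in arbitrary units, and the code is a generic OAT demonstration rather than a simulation of the full SiV-phonon model.

# Minimal pure-state sketch of OAT squeezing in the symmetric Dicke basis.
import numpy as np
from math import comb

def spin_ops(N):
    """Collective J_x, J_y, J_z for N spins in the symmetric subspace (j = N/2)."""
    j = N / 2.0
    m = np.arange(j, -j - 1.0, -1.0)                          # m = j, ..., -j
    Jz = np.diag(m)
    Jm = np.diag(np.sqrt(j*(j+1) - m[:-1]*(m[:-1]-1)), -1)    # lowering operator
    Jp = Jm.conj().T
    return 0.5*(Jp + Jm), -0.5j*(Jp - Jm), Jz

def squeezing(psi, Jx, Jy, Jz, N):
    """(xi_S^2, xi_R^2); assumes the mean spin stays along x, which holds
    for OAT evolution started from a coherent spin state along +x, so the
    transverse plane is spanned by y and z."""
    ev = lambda A: np.real(np.vdot(psi, A @ psi))
    # 2x2 covariance matrix of (J_y, J_z); its smallest eigenvalue is the
    # minimal transverse variance (Delta J_perp)^2_min.
    c_yz = ev((Jy @ Jz + Jz @ Jy) / 2) - ev(Jy) * ev(Jz)
    cov = np.array([[ev(Jy @ Jy) - ev(Jy)**2, c_yz],
                    [c_yz, ev(Jz @ Jz) - ev(Jz)**2]])
    var_min = np.linalg.eigvalsh(cov)[0]
    Jlen2 = ev(Jx)**2 + ev(Jy)**2 + ev(Jz)**2
    return 4 * var_min / N, N * var_min / Jlen2

N = 40
Jx, Jy, Jz = spin_ops(N)
css_x = np.sqrt([comb(N, k) for k in range(N + 1)]) / 2**(N / 2)  # CSS along +x
m = np.arange(N / 2, -N / 2 - 1.0, -1.0)
for chi_t in (0.0, 0.02, 0.05):
    psi = np.exp(-1j * chi_t * m**2) * css_x    # exact: J_z is diagonal
    xiS2, xiR2 = squeezing(psi, Jx, Jy, Jz, N)
    print(f"chi*t = {chi_t:.2f}: xi_S^2 = {xiS2:.3f}, xi_R^2 = {xiR2:.3f}")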
B. TATS interaction Hamiltonian
Similar to the case of the OAT interaction, we can also choose the terms in Eq. (13) by tuning the amplitudes and frequencies of the driving fields. Then a Hamiltonian with a TATS interaction can be obtained from Eq. (13), where G_TATS = (λ1Λ2 - λ2Λ1)/Δ_s indicates the TATS interaction strength and Δ_{s1,s2} = Δ_s + 2λ²_{1,2}/Δ_s. Figure 3 depicts the time evolution of the squeezing parameters ξ_S² and ξ_R² for different spin numbers in the case of the TATS interaction Hamiltonian. The red, blue, and green lines in this figure represent the squeezing parameters ξ_S² and ξ_R² for N1 = N2 = N = 20, 30, 50, respectively. We can see that the minimum values of ξ_S² and ξ_R² in Fig. 3 have decreased significantly compared to the OAT interaction case with the same spin number; specifically, ξ_S² ≈ 0.03, 0.021, 0.013 and ξ_R² ≈ 0.113, 0.079, 0.049 for N = 20, 30, 50, respectively. Similarly, ξ_S² < ξ_R² for the same spin numbers. Figure 3 shows that both squeezing parameters reach their minimum values more quickly as the spin number increases. Moreover, we can see that the spin-squeezed state generated by this TATS interaction Hamiltonian can approach the Heisenberg limit 1/N for large spin numbers, which is not possible in the OAT case [3].
C. Mixed Hamiltonian of OAT and TATS interactions
With appropriate tuning of the microwave driving fields, it is also possible to obtain a mixed Hamiltonian comprising both OAT and TATS interactions from Eq. (13), where G_mix represents the mixed interaction strength and Δ_{s1,s2} = Δ_s + 2 min(λ²_{1,2}, Λ²_{1,2})/Δ_s. We also plot the time evolution of the squeezing parameters ξ_S² and ξ_R² for different spin numbers in Fig. 4; the black, red, and green lines in this figure represent the squeezing parameters for N1 = N2 = N = 20, 30, 50, respectively. In the case of the mixed Hamiltonian of OAT and TATS interactions, the minimum values of the corresponding squeezing parameters are ξ_S² ≈ 0.08, 0.063, 0.047 and ξ_R² ≈ 0.146, 0.109, 0.075 for N = 20, 30, 50, respectively, which are smaller than in the OAT case but slightly larger than in the ideal TATS case. From Fig. 4, we can also see that the time for the system to reach the optimal squeezing is significantly shorter than in the OAT (Fig. 2) and TATS (Fig. 3) cases. In particular, in the mixed OAT-TATS interaction case, we find that the spin squeezing effect differs significantly depending on whether the total number of spins is odd or even. This property may be utilized to detect changes of the number N_tot of coupled spins at the single-particle level. Figure 5 shows the time evolution of the squeezing parameter ξ_S² for N_tot = 40 and N_tot = 39. Notably, during the first instance of spin squeezing, the parameters ξ_S² for N_tot = 40 and N_tot = 39 are almost identical. However, as the hybrid system evolves from the spin coherent state to a spin-squeezed state for the second time, the spin squeezing in the N_tot = 39 case is significantly poorer compared to the N_tot = 40 case, as shown in Fig. 5(a). When the total number of spins is odd, there is a difference in the parity of the spin numbers between the two segments, so that the overall spin-squeezing dynamics is a combination of two parts with different periods and parities. Therefore, as in destructive and constructive interference, the squeezing parameter ξ_S² for odd total spin numbers displays a maximum squeezed value that alternates between large and small in odd and even periods. In contrast, for even total spin numbers, such alternations in the maximum value of the spin squeezing are absent. Figure 5(b) illustrates how this odd-even sensitivity could be used for sensing. We plot the value of J_X² for different total spin numbers N_tot = 40, 39, 38, 37, 36. When spins leave or decouple from the waveguide one by one, the corresponding values of J_X² become smaller and smaller, as shown in Fig. 5(b).
IV. EXPERIMENTAL FEASIBILITY
In this section, we discuss the relevant parameters used in the numerical simulations to assess the practical feasibility of this scheme. First, we take the strain-induced coupling strength between the acoustic mode in the diamond waveguide and the SiV centers to be g = 2π × 5 MHz. To embed the SiV centers into the 1D diamond waveguide, we can utilize ion implantation based on state-of-the-art nanofabrication techniques [75]. The ground state splitting of SiV centers is Δ ≈ 46 GHz, and the transitions between the states |1⟩ ↔ |4⟩ and |2⟩ ↔ |3⟩ can be induced by the microwave driving fields or via an equivalent optical Raman process, which has already been experimentally realized [63,76,77]. At 100 mK, the spin dephasing rate of a single SiV center is about γ_d ∼ 100 Hz, corresponding to a coherence time T_s ∼ 10 ms [47,60,78].
FIG. 6. The optimal squeezing parameter (1/ξ_R²)_max versus the total spin number N_tot, taking experimental dissipation into account in the TATS interaction case. The black dots are the values of (1/ξ_R²)_max for the corresponding spin numbers, and the red line represents the curve fit to the numerical results.
The driving fields adopted here are Ω1, Ω2, Ω3, Ω4 ∼ 2π × (30-50) MHz with δ1, δ2, δ3, δ4 ∼ 2π × (300-500) MHz, respectively. It should be noted that we did not take the effect of dissipation into account in the numerical simulations of the squeezing in the previous section. A quality factor of Q ≈ 10⁵ for the mechanical phonon modes of a small-sized diamond waveguide has been demonstrated [79,80], which leads to a mechanical dissipation value of Γ ∼ 2π × 500 kHz. Consequently, the effective collective decay rate induced by mechanical dissipation in Eq. (14) has a value of Γ_eff ∼ 2π × 50 Hz in the TATS interaction case, which is of the same order of magnitude as the spin dephasing rate. Here, we modify the master equation (Eq. (14)) by including the spin dephasing term (Eq. (19)). In Fig. 6, we plot the optimal squeezing parameter (1/ξ_R²)_max versus the total number of spins N_tot in the TATS interaction case, taking into account the collective decoherence induced by the mechanical dissipation of the acoustic mode and the dephasing rate of the SiV centers. Using the numerical results obtained from Eq. (19), we fit a curve and obtain the trend of the optimal squeezing parameter (ξ_R²)_max with respect to the number of spins as ξ_R² ∼ 1.61N^(-0.64). As such, our scheme can generate highly squeezed spin states under currently available experimental conditions in this hybrid system based on SiV centers.
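The power-law trend quoted above can be extracted from simulated (N, ξ_R²) pairs by a straight-line fit in log-log space, as in the short Python sketch below. The data points are placeholders standing in for the dissipative-simulation results.

# Sketch of the power-law fit xi_R^2 ~ a * N^b via a log-log linear fit.
# The (N, xi2) pairs are placeholders, not the actual simulation output.
import numpy as np

N = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
noise = 1.0 + 0.02 * np.random.default_rng(1).normal(size=N.size)
xi2 = 1.61 * N ** (-0.64) * noise

b, log_a = np.polyfit(np.log(N), np.log(xi2), 1)
print(f"xi_R^2 ~ {np.exp(log_a):.2f} * N^{b:.2f}")  # close to 1.61 * N^-0.64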
V. CONCLUSION
In summary, we have designed a hybrid quantum system consisting of an ensemble of SiV centers coupled to the acoustic mode of a diamond waveguide via strain-induced coupling. The system is partitioned into two segments with different sets of microwave driving fields, and by adjusting the frequencies and amplitudes of the fields, we can achieve the OAT interaction, the TATS interaction, and a mixed Hamiltonian with both OAT and TATS interactions. The scheme still works when the numbers of SiV centers in the two segments differ, despite a reduction of the squeezing effect. In the ideal TATS scenario with large numbers of spins, the two spin-squeezing parameters ξ_R² and ξ_S² scale with the total spin number as ξ_S², ξ_R² ∼ N⁻¹, reaching the Heisenberg limit. In the mixed interaction case, our hybrid system can generate the optimal spin squeezing more rapidly, and these spin-squeezed states are sensitive to the parity of the total number of spins. Moreover, we have provided a possible method for measuring changes of the spin number at the single-particle level. Considering realistic dissipation, ξ_R² scales with the total number of spins as ξ_R² ∼ 1.61N^(-0.64), demonstrating its potential for application in quantum metrology. Consequently, our scheme can work well under experimental conditions and extends the applications of SiV-based hybrid quantum systems in quantum information processing and quantum metrology.
FIG. 1. (a) Sketch of an array of SiV centers embedded in a 1D diamond waveguide. The length, width, and thickness of the waveguide are L, w, d, respectively. The molecular structure of the SiV center is shown in the inset. In this system, there are two different segments S1 and S2 in the SiV-center ensemble, containing N1 and N2 SiV centers, respectively, resulting from the different sets of driving fields. (b) The level structure of the electronic ground state of the SiV center. The time-dependent microwave driving fields induce the transitions between levels |1⟩ ↔ |4⟩ and |2⟩ ↔ |3⟩, while the transitions between levels |1⟩ ↔ |3⟩ and |2⟩ ↔ |4⟩ are caused by the strain-induced coupling.
FIG. 5. The effects of even and odd total spin numbers on the spin squeezing generated by the mixed Hamiltonian. (a) Time evolution of the squeezing parameter ξ_S² with N_tot = 40 and N_tot = 39. (b) An example of how the even-odd sensitivity of the spin squeezing could be used for sensing: time evolution of the value of J_X² with N_tot = 40, 39, 38, 37, 36.
Curcuma Longa (Medicinal Plant) Research: A Scientometric Assessment of Global Publications Output with Reference to Web of Science
The present study explores the characteristics of publication records over a duration of twenty years, from 2000 to 2019, in the field of Curcuma longa research. The study was carried out on the multidisciplinary bibliographic database of the Web of Science, covering the Science Citation Index-Expanded (SCIE) and Social Sciences Citation Index (SSCI), using scientometric research techniques. In order to make this analysis a holistic and comprehensive survey of the research trends in the chosen field, the following variables are taken into account: growth rate; global citation scores; distribution of publications by journals, conferences and institutions; favored media of communication; Hirsch index and citation profile of top institutions, countries and authors; contribution of funding agencies; and highly cited papers and the characteristics of their bibliographic details. The total number of publication records was found to be 6087 during the study period. These 6087 publications received an h-index of 171, a global citation score of 184,715 and 30.34 average citations. On the whole, the 6087 records published during the study period (2000-2019) comprised 18 types of documents from 107 countries in 2005 journals, contributed by as many as 20,855 authors affiliated with 4879 institutions. These publications were brought out in 18 languages, and they contained 156,986 cited references. The majority of the records were in the form of journal articles, reviews, papers in conference proceedings and meeting abstracts, accounting for 97 percent of the total publications. Naturally enough, English happens to be the leading language, accounting for 98.8 percent of the publications. The four largest contributing countries to the total literature on Curcuma longa during the entire study period are India (24.68 percent), the USA (17.7 percent), China (12.2 percent) and Iran (6.09 percent). The largest institutional contributor of publication records happens to be the Mashhad University of Medical Sciences, Mashhad, Iran, with 1.8 percent of the papers to its credit. The most prolific authors publishing research documents during the study period were Sahebkar A (73 papers), Aggarwal BB (67 papers), Nayak S (35 papers) and Kumar A (33 papers). The journal "Food Chemistry" (Elsevier Ltd) tops the list of journals with the maximum number of publication records in the field for the given study period with 70 publications, followed by the "Journal of Agricultural and Food Chemistry" (American Chemical Society; 69 papers), "Phytotherapy Research" (John Wiley and Sons Ltd; 63 papers) and "PLOS One" (Public Library of Science; 59 papers). While the Third World Congress on Medicinal and Aromatic Plants (WOCMAP III), held in February 2003 in Thailand, resulted in the publication of 6 papers, the following three major funding agencies contributed immensely to the research activities in the field: the National Natural Science Foundation of China with 318 papers, the United States Department of Health & Human Services, USA, with 304 papers, and the Council of Scientific & Industrial Research, India, with 99 papers.
Introduction Turmeric (Curcuma longa Linn) is a medicinal herb belonging to the family Zingiberaceae, which is widely cultivated in tropical and subtropical regions, originating from India, Indonesia and Southeast Asia (Paramasivam et al. 2009). It is used as a spice and also in traditional medicine for its widespread medicinal properties such as anti-microbial, anti-oxidant, anti-inflammatory, anti-cancer, anti-aging and anti-malarial characteristics. These medicinal properties are ascribed to its curcuminoid compounds, which consist of curcumin (CUR), dimethoxycurcumin (DMC) and bisdemethoxycurcumin (BDMC). Among these curcuminoids, curcumin (diferuloylmethane) is the most predominant bioactive compound. Materials and Methods The required data were collected from Science Citation Index-Expanded (SCIE) and Social Sciences Citation Index (SSCI) using the "ISI Web of Knowledge", an international database of Clarivate Analytics (version 4.10 - Web of Science), in September 2020. For the purposes of analysis, the aforesaid data on global publications in the last twenty years (2000-2019) were collected in the form of an electronic download. The basic search was conducted using the keywords "Turmeric" or "Curcuma longa" or "curcumin" in the core collection of this database, and a custom year range spanning 2000 to 2019 was chosen for arriving at the data results. The data were downloaded in batches of 500 records, each batch as a single component with full records, cited references and plain text. Thus, a total of 6087 publication records were obtained for the entire study period using this method. The downloaded data were then tabulated and analysed using the HistCite software and MS Excel to obtain the relevant information required for analysis and interpretation. Additionally, abstracts were included within the search range while using the specific keywords, so as to integrate publications relating to this study from the most related records for the special issue on turmeric. As a pioneering study in this field of research encompassing such a vast time range, the search terms 'turmeric', 'Curcuma longa' and 'curcumin' were used in order to arrive at as broad a picture of the research in this area as possible. A more restricted use of the search terms in an earlier version of this study had affected the identification of literature, particularly in fields relating to the life sciences. The HistCite software was used for analyzing the results after the data were downloaded from the Web of Science database. This software provided the results by analyzing a few areas and by preparing tables with local citation scores and global citation scores. Total records were shown through this software, and the analysis was carried out following result outputs such as Records, Authors, Journals, Cited References, Words, Yearly output, Document type, Language, Institution, Institution with subdivision and Country. The next step of this analysis was carried out using the "create citation report" tool of the Web of Knowledge database. The final results were arrived at using the total publications, sum of times cited, citing articles, counts without self-citations, average citations per item and h-index. Literature Review K. K. Mueen Ahmed, B. M.
Gupta and Ritu have examined twenty years (from 1997 to 2016) of global research publications on Curcuma longa in the Scopus database, covering a total of 5351 publication records in terms of citation impact, growth rate, collaborative share of papers, subject areas, and the output and citations of authors and organizations. The publication share of the first 15 countries was 92.66 percent, while 340 publications received the highest citations, between 100 and 3869, during this period. Laksham, S et al. (2020) have examined a global-level view of Coronavirus publication outputs by retrieving 7381 records for the period extending from 1989 up to March 2020. They have analyzed the annual publication growth, the global publication share, the pattern of research communication channels and the productiveness of journals. The article concluded that the output of jointly authored publications was higher than that of single-author publications, and that open-access journals published more than paid journals. Gupta BM, Mueen Ahmed KK and Ritu have analyzed the global publication records on Glycyrrhiza glabra using the bibliographic database of Scopus for a total period of twenty years. They came up with the following results: the average annual growth rate was 10.87 percent with 19.09 citations per paper; China and India were the most productive countries with 19.81 percent and 13.71 percent respectively; 1153 journals published 3352 papers; the top twenty organizations globally published 15.08 percent of the papers and the top authors 9.16 percent, receiving citation shares of 14.57 percent and 16.62 percent in the study period. This study reveals that Asian countries excelled in the total number of publication records in Glycyrrhiza glabra research, whereas the quality of research was found to be higher in American and western countries. Konur, O (2011) has studied the scientometric evaluation of research on algae and bio-energy for a period of three decades extending from 1980 to 2009 using the ISI Web of Knowledge database. He has investigated the most prolific authors, countries, research institutions, journals, subject areas, languages of publication and most cited papers. The results of this study showed that research on algae and bio-energy had developed exponentially over the past three decades. B. M. Gupta and K. K. Mueen Ahmed (2018) have conducted a scientometric review of 4900 global publication outputs in the field of Azadirachta indica research during 1997 to 2016. They found that the average annual growth rate stood at 7.61 percent and the citation score was 13.91 citations per paper. The largest share of publication records was found to have emerged from India with 53.49 percent, and agricultural and biological sciences contributed 48.41 percent. The first twenty-five global organizations and authors accounted for 20.65 percent and 8.92 percent of the total share respectively, and 43.63 percent of journal publications were shared among the first 20 most prolific journals during the study period of 1997 to 2016. Milad Haghani, Michiel C.J. Bliemer et al. (2020) analysed the bibliometric aspects of COVID-19 studies on a macro level, as well as those addressing Coronaviruses in general. Moreover, through a scoping analysis of the literature on COVID-19, they identified the main safety-related dimensions that these studies have thus far addressed.
Nirmal Singh (2017) outlined the growth of scientific literature on Azadirachta indica in journals by applying bibliometric analysis. The distribution of articles in journals was found to conform fairly closely to Bradford's law of scattering, making it obvious that there are a few core journals contributing significantly on Azadirachta indica. Anwar MA shows that the growth of the literature analyzed in his study indicates that research on Phoenix dactylifera L grew very fast from 1971 onward, reached its peak by 1989, and stabilized after that period. There is a clear focus in this research on improving plant breeding, supervising plant diseases, and augmenting food and feed quality. The literature with a medical aspect places more emphasis on animal studies than on human studies.

Year | TR | TCS | CPP | Authors | NAPR | h-index
2018 | 680 | 4676 | 6.88 | 3322 | 4.89 | 29
2019 | 702 | 2351 | 3.35 | 3945 | 5.62 | 18
2000-2009 | 1335 | 97478 | - | 5962 | - | -
2010-2019 | 4752 | 87177 | - | 24324 | - | -
Total | 6087 | 184708 | - | - | - | -
TR - Total Records; TCS - Total Citation Score; CPP - Citations Per Record; NAPR - Number of Authors per Record

The cumulated literature output in the field of Curcuma longa on the global scenario is 6087 records for the 20-year period starting from 2000 up to 2019. The yearly output in Curcuma longa research increased from 55 in the year 2001 to 702 publications in 2019 on the global level. The second half of the study period, 2010-2019, saw an increased number of publications and authors following the previous trend, but the number of citations became lower compared to the first decade of the twenty-first century (2000-2009). The number of publications increased 3.6 times, and the number of authors increased 4 times between the beginning and ending years of the total study period. The years 2005 to 2014 witnessed a total gain of more than 50 h-index, while it was lower than 50 in the rest of the study period years. These data reveal that the total number of researchers and their publications have attained drastic growth in the field of Curcuma longa research. Yet, the rather decreased number of citations and the decline of the h-index below 50 may be indicating an alarming trend in this particular area of research. It is generally expected that more authors and research publications would lead to more citations and an increase in the h-index; it therefore appears strange and alarming that citations and the h-index have declined in this field of research. Of the total global publication output in the field of Curcuma longa research for the total study period, 4846 (79.6 percent) appeared in the form of articles, while 659 (10.8 percent) were reviews, 204 (3.4 percent) were conference papers, 199 (3.3 percent) were meeting abstracts, 69 (1.1 percent) were articles in proceedings, 34 (0.6 percent) were editorials, 23 (0.4 percent) were letters, 12 (0.2 percent) were corrections, 9 were news items, 8 were retracted articles, 7 were early access articles, 5 were review book chapters, 4 were article book chapters, 3 were article data papers, 2 were early access reviews, and one publication each was a book review, a retraction and a retracted review.
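The derived metrics in the table above follow directly from a list of per-paper citation counts. The sketch below, in Python, shows one way to compute the total citations, CPP and h-index from such a list; the citation counts in the usage example are hypothetical, not taken from the study.

```python
def citation_metrics(citations):
    """Compute total citations, citations per paper (CPP) and the h-index
    from a list of per-paper citation counts."""
    total = sum(citations)
    cpp = total / len(citations) if citations else 0.0
    # h-index: largest h such that at least h papers have >= h citations each
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return total, cpp, h

# Hypothetical example: five papers with the given citation counts
total, cpp, h = citation_metrics([45, 12, 7, 3, 0])
print(total, round(cpp, 2), h)  # 67 13.4 3
```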
Country-wise, the global publication share of the top 15 countries varied widely from as low as 1.28 percent to as high as 24.68 percent during the study period of 2000-2019, with India accounting for the highest publication share (24.68 %), followed by USA (17.66 %), China (11.24 %), UK, Thailand, Japan and South Korea (from 2.58 % to 5.52 %), and Spain, Canada, Egypt, Germany, Taiwan, Brazil and Iran (from 1.28 % to 6.09 %) between 2000 and 2019. Among the 106 countries with research publications in Curcuma longa to their credit, four countries (Bosnia, Lithuania, Ukraine and Yemen) have yet to receive any citations. Incidentally, these four countries have produced only one paper each, thereby explaining the zero citations to their credit. While 17 countries in the total list received fewer than 10 citations, 42 countries managed to score more than 100 citations. Notably, 37 of them received between 100 and 1000 citations, while four countries, USA (66987), India (36899), China (19214) and South Korea (11215), accounted for the highest numbers of citations during the study period. Even though the USA has only 1075 publication records, placing it second among the most prolific countries, it has gained the highest h-index (125) on the global level in 2000-2019, followed by India (90), China (64) and Japan (50). The remaining countries have gained only a low h-index below 50, exemplifying the minimal impact of their research publications on the global scenario. It is clearly inferred from the above table that India stands out as the most prolific country from which the highest number of research publications have originated, closely followed by the United States. Such an interest in this area of research can be explained by the fact that turmeric is widely cultivated and used in India for medicinal, cooking and ritual purposes on a large scale. Notwithstanding the number of publication records, the US tops the citation score and h-index. TR - Total Records; TCS - Total Citation Score; CPP - Citations Per Paper; HI - h-index; CR - Cited References; CRPP - Cited References Per Paper. The publication output varied from 73 to 18 papers among the top twenty most prolific authors listed in the above table in the field of Curcuma longa research during the total study period (2000-2019). These twenty most prolific authors together contributed 544 publication records on the global level, with a share of 2.6 percent of the research output to their credit. More significantly, these twenty most prolific authors took a huge share of 23.9 percent of the citations, with a sweeping citation score of 22,566 for their publications from 2000 to 2019. It is further inferred from the above table that these twenty most prolific authors on average produced 27.2 publications, received 2214.1 citations, attained an h-index of 15.05 and accumulated 1817.9 cited references over the total study period of twenty years. A total of 4879 institutions are found to have participated on the global scenario in the field of research on Curcuma longa during the study period of 2000-2019. Out of these, as many as 4452 institutions contributed one to five papers each to the field during the study period.
It is also found that 216 institutions produced six to ten papers each, 89 institutions came out with eleven to fifteen papers each, 40 institutions contributed sixteen to twenty papers each, 21 institutions published twenty-one to twenty-five papers each, 11 institutions contributed twenty-six to thirty records each, 10 institutions have thirty-one to thirty-five publication records each to their credit, and 6 institutions came out with forty-one to fifty papers each. As a remarkable contribution of exceptional nature, four institutions contributed from 51 to 107 papers each, as inferred from the analysis of publication records data for the study period. A conference is usually understood as a summit of a large number of people gathered to talk about issues of shared relevance and interest. It is a formal meeting of shared interest, naturally one that takes place over one or a few days. Any research conference provides a chance for people thinking about one particular area to meet, conduct serious discussions and come out with new themes. In this line, the above table deals with the conferences on Curcuma longa research for the period of twenty years from 2000 to 2019. Out of the top 100 conferences, 6 conferences led to the publication of four to six papers each, ten of them resulted in 3 papers each, 35 of them produced 2 papers each, and the remaining 56 conferences provided one paper each. Among the top 20 conferences, the most were held in the USA (6), followed by Indonesia (3) and Laos (2). South Africa, the Czech Republic, South Korea, Taiwan, Switzerland, Greece, Tanzania, India and Thailand were each found to have held one conference during this period. The highest number of publication records from a single conference (6) resulted from the Third World Congress on Medicinal and Aromatic Plants (WOCMAP III) held in Thailand. Public or private organizations offering financial support for research work by individual researchers or research groups, based in laboratories and producing research papers, are termed 'funding agencies'. Most countries in the world have funding agencies aimed at disseminating research funds to find solutions to current problems such as medical and agricultural technology issues. A few research areas, especially in the fields of science, technology and engineering, depend on full funding from such agencies, since research in these fields is more often than not very expensive and no time limit can be stipulated. As shown in the above table, twenty organizations from different countries of the world top the list of funders for research work in the field of Curcuma longa. Based on the application of the formula of Time Series Analysis, results have been obtained separately for the years 2025 and 2030 (see the sketch after this paragraph). It is predicted that the future growth rate of turmeric research literature output may rise, as the present scenario reveals an increasing trend. The assumption is that there is a positive growth level in the productivity of turmeric research literature. This study has been conducted for a time period of twenty years. Such an extended time period was stipulated for this study in order to arrive at a holistic picture of the research trends in the particular field. The findings based on such a holistic analysis are intended to help researchers, both experienced and in their early career, to get a clear map of the research.
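The forecasting step described above is not specified beyond "the formula of Time Series Analysis". A minimal sketch of one common choice, a least-squares linear trend extrapolated to 2025 and 2030, is given below. Only the annual counts actually quoted in the text (55 for 2001, 680 for 2018 and 702 for 2019) are used, so the fitted trend is illustrative rather than a reproduction of the study's prediction.

```python
import numpy as np

# Annual publication counts reported in the text (other years omitted here)
years = np.array([2001, 2018, 2019])
records = np.array([55, 680, 702])

# Least-squares linear trend: records ~ slope * year + intercept
slope, intercept = np.polyfit(years, records, 1)

for target in (2025, 2030):
    print(target, round(slope * target + intercept))
```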
Language of Publications The total corpus of 6087 publications was authored in 18 different languages, among which English, quite naturally for the present scenario, emerged as the predominant language. A sweeping 98.8 percent of the records were published in the English language, whereas the other 17 languages together account for only 1.2 percent of the publications. Next to English, 17 papers were published in Portuguese, followed by eight papers in Indonesian, seven papers each in Japanese and Spanish, and six papers in German. Four papers each were published in Chinese and Turkish, three papers in Polish, and two papers each were authored in Korean, Russian and Thai. One paper each was published in Czech, French, Hungarian, Italian, Malay and Persian. Usage of Words The entire corpus of publication records in the chosen field for the study period contained 59644 words matching the keyword search. The word 'curcumin' was found to have been used in 2435 papers, followed by the word 'turmeric' used in 1302 papers. While the keyword 'Curcuma' was found in 1219 records, 'longa' was used in 1007 papers. Some medical terms like cancer (468), antioxidant (258) and curcuminoids (254) were also found in the publication records. VOSviewer VOSviewer was developed by Nees Jan van Eck and Ludo Waltman of the Centre for Science and Technology Studies, Leiden University. It provides network, overlay and density visualizations. Three types of maps can be created, based on network data, bibliographic data and text data. The supported file types are Web of Science, Scopus, Dimensions and PubMed. Here we have used bibliographic data files to create maps of the bibliographic coupling of authors, countries and organizations. Conclusion This study was carried out using scientometric analysis methods on the field of Curcuma longa research published and indexed in the past 20 years, from 2000 to 2019. The major outcome of this study is the segregation of publication records for the study period in the chosen field of research in terms of document types, countries, journals, authors and research institutions with the highest numbers of publications, preferred medium of publication, growth ratio during the twenty years of the study period, contribution of funding agencies and the role of conferences held in this field. For this research, the HistCite and VOSviewer software were optimally used so as to derive a more complete picture of the research. The HistCite software was utilized for preparing tables of the annual growth of publications, authorship, organizations, journals and measured citations. Likewise, a graphical picture of the bibliographic coupling of authors, institutions and countries in networked references was produced through VOSviewer. While mapping the quality of publication records by measuring citations based on local and global index scores, the study has also provided valuable information on citations through the papers on Curcuma longa research, including the total number of citations, average citation score and Hirsch index. A total of 6087 papers received 184,708 citations and the h-index was found to be 171 during this study period. The highest number of research works was published in the year 2019 (702) and the lowest number of publications (55) was recorded in the year 2001.
The first fifteen countries contributed the maximum number of publication records (92.92 percent). India became the topmost producer of research publications in terms of numbers (1502), whereas the USA published 1075 records and achieved the highest h-index of 125. The study also found that the first twenty authors contributed 2.6 percent of the publications. In terms of authorship pattern, Sahebkar, A from Mashhad University of Medical Sciences published 73 records to his credit with 3212 citations, while Aggarwal BB from the University of Texas received a huge 21,325 citations from 67 publications. The first twenty institutions were found to have contributed 17.81 percent of the total publication records and received 21.22 percent of the citations. From Mashhad University of Medical Sciences, Iran, 583 authors published 107 records.
LMP2-mRNA lipid nanoparticle sensitizes EBV-related tumors to anti-PD-1 therapy by reversing T cell exhaustion Background Targeting EBV proteins with mRNA vaccines is a promising way to treat EBV-related tumors like nasopharyngeal carcinoma (NPC). We assume that it may sensitize tumors to immune checkpoint inhibitors. Results We developed an LMP2-mRNA lipid nanoparticle (C2@mLMP2) that can be delivered to tumor-draining lymph nodes. C2@mLMP2 exhibited high transfection efficiency and lysosomal escape ability and induced an increased proportion of CD8+ central memory T cells and CD8+ effector memory T cells in the spleen of the mouse model. A strong synergistic anti-tumor effect of C2@mLMP2 in combination with αPD-1 was observed in tumor-bearing mice. The mechanism was identified to be associated with a reversal of CD8+ T cell exhaustion in the tumor microenvironment. The pathological analysis further proved the safety of the vaccine and the combined therapy. Conclusions This is the first study proving the synergistic effect of an EBV-mRNA vaccine and PD-1 inhibitors for EBV-related tumors. This study provides theoretical evidence for further clinical trials that may expand the application scenario and efficacy of immunotherapy in NPC. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s12951-023-02069-w. Introduction Epstein-Barr virus (EBV) is the first human oncogenic virus discovered [1]. More than 90% of the world's population has been infected with EBV [2]. In most cases, the host remains asymptomatic for life [3]. However, some infections develop into EBV-associated tumors, including nasopharyngeal carcinoma (NPC) [4]. EBV induces a type II latent infection in host cells. The main viral proteins include the core antigen EBNA1 and the latent membrane proteins LMP1 and LMP2 [5-7]. These viral proteins induce intracellular signal distortion and promote tumor cell survival, invasion, and metastasis, resulting in a poor prognosis [8]. Considering the important role EBV plays in tumor progression, targeting EBV proteins has become a new strategy in NPC treatment [9,10]. The common treatment methods for NPC include radiotherapy, chemotherapy, surgery or combined therapy [11]. However, due to the invasiveness and asymptomatic nature of NPC, most patients are diagnosed at an advanced stage with local spread [12]. Treatment resistance and toxicity still hinder the therapy of nasopharyngeal cancer [13,14]. NPC is often accompanied by chronic EBV infection with massive lymphocyte infiltration, high expression of programmed cell death-ligand 1 (PD-L1), and deregulation of T lymphocyte activation [15]. These characteristics suggest that NPC may benefit from immune checkpoint blockade, which blocks immune checkpoint interactions, such as that between programmed death receptor-1 (PD-1) and its ligand PD-L1, to cut off immunosuppressive signals from tumor cells and reverse T cell exhaustion [16]. Anti-PD-1 (αPD-1) immunotherapy has been used to treat locally recurrent and/or metastatic NPC (R/M-NPC) and achieved great improvement [17,18]. However, immune checkpoint blockade faces many difficulties such as response heterogeneity, resistance, and intricate immunosuppression pathways. αPD-1 monotherapy has been recommended as a second- or late-line choice after platinum-based chemotherapy. Therefore, it is essential to find new combination strategies to enhance the efficacy of αPD-1.
For EBV-driven tumors, vaccines using EBNA1, LMP1, and LMP2 as antigens can induce enhanced anti-tumor immunity [10]. Messenger RNA vaccines have attracted attention because of their advantages in tolerability, safety, rapid production, and excellent immune activation ability [19]. We speculated that the combination of an EBV-related mRNA vaccine with αPD-1 therapy may induce a potent and long-lasting anti-tumor effect; to the best of our knowledge, very little is known about this. Lymph nodes are the main site where a tumor vaccine works [20], and tumor-draining lymph nodes (TDLN) are also important for the αPD-1 response. Therefore, we developed an ionizable lipid nanoparticle (LNP) to deliver the EBV-mRNA LMP2 to the TDLN. Targeted lymph node delivery can reduce side effects and increase the immune response [21]. The LMP2-mRNA is expressed and presented by antigen-presenting cells in the lymph node, which then activate CD8+ T cells to attack cancer cells expressing LMP2. At the same time, αPD-1 is used to cut off the inhibiting signal from tumor PD-L1, achieving a synergistic anti-tumor effect. Animals Female Balb/c mice (6-8 weeks old) were purchased from Chongqing Tenxin Bio-Technology Co., Ltd. Mice were housed in a specific-pathogen-free (SPF) laboratory with water and food. The mice were uniquely marked by earmarks in advance. A tumor challenge was performed by injecting 1 × 10^6 EBV-CT26 cells subcutaneously into each Balb/c mouse. All animal experiments in this study were approved by the Animal Ethics Committee of West China Hospital of Sichuan University. Synthesis of the ionizable lipid (C2) The ionizable lipid C2 used in this study was designed based on four tertiary amino nitrogen atoms (4N4T). The 4N core was synthesized via a Michael addition reaction and deprotection of Boc groups, and the four tails were added through a ring-opening reaction of the epoxide. The synthesis route has been described in our previous study [22]. Preparation of C2@mRNA-LNP C2, cholesterol, DSPC, and DMG-PEG2000 were dissolved in anhydrous ethanol at a molar ratio of 35/46.5/16/2.5 to prepare an ionizable lipid composite solution with a C2 concentration of 15 mg/ml. The mRNA was diluted in a 10 mmol/L citric acid buffer solution composed of citric acid and sodium citrate dissolved in enzyme-free water at pH 6. The ionizable lipid stock solution and the mRNA aqueous solution were mixed using a microfluidic chip at a volume ratio of 1/3, with a C2/mRNA mass ratio of 15/1. The mixing speed was 12 ml/min. The resulting mRNA concentration was 0.25 mg/ml, which was diluted with citrate buffer to the administration concentration (0.1 mg/ml). The microstructure of the LNP was observed under transmission electron microscopy (TEM). Characterization of LNP The particle size and potential of the LNP were measured by a Malvern laser particle size analyzer (Zetasizer Nano ZS 90, Malvern, UK) at 25 °C. Encapsulation efficiency of C2@mLMP2 The encapsulation efficiency of the sample was analyzed by a Stunner high-throughput concentration and particle size analyzer.
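As a worked example of the formulation arithmetic above, the sketch below converts the stated molar ratio (C2/cholesterol/DSPC/DMG-PEG2000 = 35/46.5/16/2.5), the C2/mRNA mass ratio of 15/1 and the aqueous/ethanol volume ratio of 3/1 into per-batch quantities. The molecular weight of C2 is not given in the paper, so it is a placeholder here; the other molecular weights are approximate textbook values.

```python
# Formulation sketch for C2@mRNA-LNP (ratios from the Methods section;
# the C2 molecular weight is a hypothetical placeholder).
MOLAR_RATIO = {"C2": 35.0, "cholesterol": 46.5, "DSPC": 16.0, "DMG-PEG2000": 2.5}
MW = {"C2": 1200.0, "cholesterol": 386.65, "DSPC": 790.15, "DMG-PEG2000": 2509.0}  # g/mol

def batch_quantities(mrna_ug: float):
    """Return the mass of each lipid (ug) and the ethanol/buffer volumes (ul)
    for a batch encapsulating `mrna_ug` of mRNA."""
    c2_ug = 15.0 * mrna_ug                      # C2/mRNA mass ratio 15/1
    c2_nmol = c2_ug / MW["C2"] * 1e3            # ug -> nmol
    lipids_ug = {name: c2_nmol * ratio / MOLAR_RATIO["C2"] * MW[name] / 1e3
                 for name, ratio in MOLAR_RATIO.items()}
    ethanol_ul = c2_ug / 15.0                   # lipid stock at 15 mg/ml = 15 ug/ul
    buffer_ul = 3.0 * ethanol_ul                # aqueous/ethanol volume ratio 3/1
    return lipids_ug, ethanol_ul, buffer_ul

lipids, etoh, buf = batch_quantities(mrna_ug=10.0)
print(lipids, etoh, buf)
```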
Transfection efficiency of C2@eGFP-mRNA DC2.4 cells were cultured and transferred into six-well plates one day in advance. On the second day, C2@eGFP-mRNA was prepared and diluted with citric acid buffer to a concentration of 0.1 mg/ml; the six-well plate was divided into three groups. The first group was used as the control group. The second group was transfected with mRNA (0.5 μg) using Lipo2K reagent. The third group received C2@eGFP-mRNA (0.5 μg). After incubation overnight and being photographed under a fluorescence microscope the next day, the DC cells in the six-well plate were collected into a flow tube to observe the luminescence of the cells in the FITC channel. Lysosomal escape ability of C2@Cy5-mRNA DC2.4 cells in the logarithmic growth phase were spread in confocal dishes (10^5 cells per dish) with 200 μl medium containing antibiotics and serum, and incubated with 5% CO2 at 37 °C overnight. The medium was discarded, and a mixture of 50 μl C2@Cy5-mRNA (0.1 mg/ml) and 150 μl complete medium was added to the confocal dishes (a mixture of 50 μl PBS and 150 μl complete medium for the control group), followed by incubation for 3 h. Then LysoTracker Green was added to the confocal dishes and incubated for 2 h. Cells in the confocal dishes were washed with PBS and fixed for 20 min. Intracellular Staining Permeabilization Wash Buffer containing DAPI dye was added, and cells were observed under the confocal microscope after PBS washing. Antigen expression in vivo (lymph node targeting effect) C2@Luc-mRNA was prepared according to the method above. The mRNA concentration was diluted to 0.2 mg/ml with citric acid buffer and inoculated subcutaneously in mice. Each animal was given 30 μg of mRNA. Six hours after administration, each animal was intraperitoneally injected with 200 μl of PBS-dissolved solution containing 3 mg of luciferase substrate. Ten minutes later, the animals were placed supine under anesthesia and observed using live imaging. Flow cytometry Tumor tissues of mice in each group were collected to prepare single-cell suspensions at a density of 10^6 cells/well (5 samples per group). Antibodies targeting CD45, CD3, CD8, CD279, and TIGIT were used to label exhausted CD8+ T cells, with 1 μl per sample. Spleen tissues of mice in each group were collected to prepare single-cell suspensions (5 samples per group). Antibodies targeting CD45, CD3, CD8, CD44, and CD62L were used to label memory CD8+ T cells, with 1 μl per sample. Cells were detected using flow cytometry. Pathological examination The heart, liver, spleen, lung, kidney, and tumor tissues were fixed using formaldehyde and embedded in paraffin. H&E staining of the tissue slices was performed for pathological analysis. Statistical analysis Tumor volume was calculated as V = length × width^2 / 2. Differences in tumor growth and body weight curves among the four groups were tested using two-way ANOVA and Tukey's multiple comparisons tests. Cell numbers in flow cytometry were compared using the Mann-Whitney test. Characterization of C2@mLMP2 The preparation of C2@mLMP2 is displayed in Materials and Methods (Fig. 1A, B). Figure 1C shows that C2@mRNA exhibited a multilayer capsule structure, indicating the formation of lipid nanoparticles, which is also an important feature of nanoliposomes. The particle size of the nanomaterial was measured to be 97.60 nm, and the zeta potential was −2 mV (Additional file 1: Fig.
S1). The encapsulation efficiency was 95.6% by Stunner high-throughput particle measurement analysis. The effect of C2 on mRNA expression at the cellular level was examined in DC2.4 cells. The mRNA-delivery expression ability of the C2 nanoliposomes was stronger compared with Lipo2K (Fig. 1D), and flow cytometry showed a higher proportion of cells with fluorescent expression in the C2@eGFP-mRNA group (Fig. 1E), which indicates enhanced intracellular expression of mRNA synthesized in vitro. Then we tested the lysosomal escape ability of C2@mRNA. Figure 1F shows that Cy5-mRNA in C2@Cy5-mRNA can escape from lysosomes sufficiently after administration in DC2.4 cells, which indicates that during the delivery of mRNA, C2 nanoliposomes can release mRNA from lysosomes to be translated in the cytoplasm. The in vivo distribution assay was performed using live imaging. The results showed that the fluorescence in the administered mice was mainly distributed in the liver, spleen, and lymph nodes (bilateral abdominal lymph nodes) (Fig. 1G, H). The successful delivery of mRNA to tumor-draining lymph nodes enables the activation of anti-tumor immunity through antigen-presenting cells, and the vaccine mainly performs its immune response through liver metabolism and the spleen. (Fig. 1: Characterization of C2@mLMP2. A Chemical structure of C2. B Preparation of C2@mRNA-LNP. C TEM image of C2@mLMP2. D Fluorescence microscopy of DC2.4 cells transfected by C2@mRNA, Lipo2k@mRNA, and naked mRNA. E Luminescence of the transfected DC2.4 cells in the FITC channel. F Cy5-mRNA in C2@Cy5-mRNA escapes from lysosomes sufficiently after administration in DC2.4 cells. G, H Antigen expression in vivo.) The combination of C2@mLMP2 and αPD-1 provoked a strong anti-tumor effect The EBV proteins EBNA1, LMP1, and LMP2 are expressed in most EBV+ NPC tumors and play a key role in the transformation of normal cells into cancer cells [23,24]. Targeting EBNA1, LMP1 or LMP2 has become an effective way to treat NPC through vaccines. To test the in vivo effect of C2@mLMP2 and its synergy with αPD-1, we constructed a cell line (EBV-CT26) expressing EBNA1, LMP1, and LMP2. CT26 was used as a template because it shows a poor response to αPD-1 treatment. When the tumors were measurable with an average volume of 30-50 mm^3 (Day 0), the mice were randomly divided into four treatment groups: 1. Control group (PBS injection); 2. VAC group (subcutaneous injection of C2@mLMP2 on Days 0, 3 and 8 with 15 μg mLMP2); 3. PD1 group (intraperitoneal injection of 100 μg αPD-1 on Days 1, 4, 7 and 10); 4. VAC + PD1 group (combined treatment of the VAC and PD1 groups) (Fig. 2A). Tumor volume was recorded every two days from Day 0. As shown in Fig. 2B, single vaccination achieved better tumor control compared with the PBS group, although with a P value of 0.0517 using Tukey's multiple comparisons test after two-way ANOVA, while the combination of C2@mLMP2 and αPD-1 significantly enhanced the anti-tumor effect compared with the control group or each single-treatment group (Fig. 2B-D). H&E staining of tumor tissues verified pronounced tumor cell apoptosis in the VAC + PD1 group (Fig. 2E).
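A minimal sketch of the tumor-volume bookkeeping behind the growth curves, using the V = length × width^2 / 2 formula from the Methods. The caliper measurements below are hypothetical, and the comparison shown uses SciPy's Mann-Whitney test (applied in the paper to flow cytometry counts) rather than the two-way ANOVA used for the growth curves themselves.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """V = length * width^2 / 2 (mm^3), as in the Methods."""
    return length_mm * width_mm**2 / 2.0

# Hypothetical caliper measurements (mm) for two groups on one day
control = [tumor_volume(l, w) for l, w in [(12.0, 9.5), (11.2, 8.8), (13.1, 9.9)]]
combo   = [tumor_volume(l, w) for l, w in [(7.1, 5.9), (6.4, 5.2), (8.0, 6.3)]]

# Nonparametric two-sample comparison
stat, p = mannwhitneyu(control, combo, alternative="two-sided")
print(np.mean(control), np.mean(combo), p)
```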
Although the high PD-L1 expression rate and TIL-infiltrated TME make NPC suitable for αPD-1 therapy, it faces other challenges like high tumor heterogeneity, a high recurrence rate, and complicated immunosuppressive factors including neoantigen loss, MHC aberration, DC incapacity, Treg infiltration and T cell exhaustion caused by multiple checkpoints [15]. Clinical use of αPD-1 in NPC is still limited to locally recurrent and/or metastatic NPC, while αPD-1 monotherapy is mainly used as a second- or late-line strategy after platinum-based chemotherapy. The combination of αPD-1 and chemotherapy achieves a synergy effect since platinum and 5-FU can promote the presentation of neoantigens and alleviate the immunosuppressive environment [25-27]. Radiation therapy, which is the main treatment for NPC, can also enhance tumor response to αPD-1 through immunogenic cell death that releases neoantigens, but it also leads to upregulation of Tregs and expression of immune checkpoints. Since EBV plays an important role in NPC, targeting EBV proteins using vaccines may be a promising way to enhance the αPD-1 effect. The C2@mLMP2 vaccine was designed to target tumor-draining lymph nodes to achieve strong immune activation with limited systemic side effects. The results indicated that C2@mLMP2 can clearly improve the tumor response to αPD-1. C2@mLMP2 enhanced the anti-tumor effect of αPD-1 by reversing CD8+ T cell exhaustion We performed flow cytometry to examine the phenotypic alterations of immune cells in tumors and spleens from the different groups. The memory T cell ratios in the spleen were compared between mice receiving C2@mLMP2 and PBS. As shown in Fig. 3A, B, the ratios of both CD8+ central memory T (Tcm) cells and CD8+ effector memory T (Tem) cells among CD3+ T cells in the spleen were significantly increased in mice receiving C2@mLMP2 treatment. The immunological memory effect is one of the advantages of tumor vaccines, especially for tumors with a high recurrence rate like NPC. The memory effect of Tcm can persist for years. Compared with naïve T cells, the activation threshold of memory T cells is much lower and the response time is shorter. Memory T cells also show higher migration ability toward lymph nodes owing to high expression of CCR7, a lymph node homing receptor [28]. To understand the mechanism of the synergistic effect, we characterized tumor-resident CD8+ T cells of the four groups. The results showed that the ratio of exhausted CD8+ T cells among CD3+ T cells, marked by PD-1 and TIGIT, was significantly decreased in mice from the combination therapy group (Fig. 3C, D). T-cell exhaustion is commonly caused by the persistence of antigens and inflammation [29]. Compared with memory T cells and effector T cells, exhausted T cells lose effector function and exhibit enhanced and sustained expression of multiple immune checkpoints [29,30]. αPD-1 is designed to block the immune checkpoint of PD-1 and its ligands PD-L1/2 to restore the function of CD8+ T cells. However, reversing T cell exhaustion using αPD-1 only works when part of the T cells are not completely terminal [29]. Monotherapy may not be an effective strategy considering the complexity of the exhaustion mechanism. In this study, we observed a strong synergy effect of exhaustion reversal when we combined αPD-1 with a lymph node-directed EBV-mRNA vaccine.
Safety of C2@mLMP2 combined with αPD-1 in vivo To evaluate the safety of C2@mLMP2 and its combination with αPD-1, we monitored the body weight of the mice during treatment. As shown in Additional file 1, the body weights of mice from each group did not differ from Day 0 to tumor harvest (Additional file 1: Fig. S2). We further collected the heart, liver, spleen, lung, and kidney to perform H&E staining. No significant histopathological changes like necrosis, inflammation, or structural destruction were observed in the organs of the VAC and VACPD1 groups (Fig. 4). The results indicate that C2@mLMP2 combined with αPD-1 is safe in vivo. Compared with other forms of vaccines, mRNA-based vaccines are well tolerated, with adverse events being manageable and transient [19]. The mRNA is delivered to the cytoplasm without integration into the host genome, and it is easily degraded, which also reduces toxicity. LNPs make mRNA vaccines more selective by preventing non-specific uptake in healthy tissues. Conclusions This is the first study proving the synergistic effect of an EBV-mRNA vaccine and PD-1 inhibitors for EBV-related tumors. We developed an LMP2-mRNA vaccine based on ionizable lipid nanoparticles to deliver the vaccine to lymph nodes. The strong synergistic effect with αPD-1 and its safety were verified by animal experiments. This study provides theoretical evidence for further clinical trials that may expand the application scenario and efficacy of immunotherapy in EBV-related tumors like NPC. Fig. 3 Flow cytometry results of the in vivo experiment. A CD8+ Tcm% (CD8+ Tem%) of CD3+ T cells in spleens from the PBS and VAC groups. B Typical flow cytometry graphs of A. C PD1+ CD8+ T% (TIGIT+ CD8+ T%) of CD3+ T cells in tumors from the PBS, PD1, VAC, and VACPD1 groups. D Typical flow cytometry graphs of C. Fig. 4 Safety of C2@mLMP2 combined with αPD-1 in vivo. H&E staining of the heart, liver, spleen, lung, and kidney of mice from different groups.
State-Space Based Network Topology Identification In this work, we explore the state-space formulation of network processes to recover the underlying structure of the network (local connections). To do so, we employ subspace techniques borrowed from the system identification literature and extend them to the network topology inference problem. This approach provides a unified view of traditional network control theory and signal processing on networks. In addition, it provides theoretical guarantees for the recovery of the topological structure of a deterministic linear dynamical system from input-output observations, even though the input and state evolution networks can be different. INTRODUCTION In recent years, major efforts have been focused on extending traditional tools from signal processing to cases where the acquired data is not defined over typical domains such as time or space but over a network (graph) [1,2]. The main reason for the increase of research in this area is the fact that network-supported signals can model complex processes. For example, by means of signals supported on graphs we are able to model transportation networks [3], brain activity [4], and epidemic diffusions or gene regulatory networks [5], to name a few. As modern signal processing techniques take the network structure into account to provide signal estimators [6-8], filters [9-12], or detectors [13-15], appropriate knowledge of the interconnections of the network is required. In many instances, the knowledge of the network structure is given and can be used to enhance traditional signal processing algorithms. However, in other cases, the network information is unknown and needs to be estimated. As the importance of studying such structures in the data has been noticed, retrieving the topology of the network has become a topic of extensive research [16-24]. Despite the extensive research done so far (for a comprehensive review the reader is referred to [2,17] and references therein), most of the approaches do not leverage a physical model beyond the one induced by the so-called graph filters [18,25] drawn from graph signal processing (GSP) [10,26,27]. Among the ones that propose a different interaction model, e.g., [20,24], none of them considers the network data (a.k.a. graph signals) as states of an underlying process, nor considers that the input and the state may evolve on different underlying networks. However, different physical systems of practical interest can be defined through a state-space formulation with (probably) known functions, e.g., brain activity diffusion, finite element models, and circuit/flow systems. For these processes, a more general approach to find the underlying connections is required. In this work, we therefore focus on the general problem of retrieving the underlying network structure, from input-output signals, of a process that can be modeled through a deterministic linear dynamical system whose system matrices depend on the interconnections of the network. (This research is supported in part by the ASPIRE project (project 14926 within the STW OTP programme), financed by the Netherlands Organization for Scientific Research (NWO). Mario Coutino is partially supported by CONACYT and AIP RIKEN.) STATE-SPACE MODELS FOR NETWORK PROCESSES Let us consider a tuple of graphs G_1 = {V, E_1} and G_2 = {V, E_2} representing two networks, where V = {v_1, . . .
, v_n} and E_i ⊆ V × V for i ∈ {1, 2} denote their vertex and edge sets, respectively. Further, let P be a process over {G_1, G_2} that describes the evolution through time of a signal (the state) x(t) defined over G_1, coupled with another signal (the input signal) defined over G_2. Such a process can be described through the linear dynamical system

x(k+1) = f_1(Ł_1) x(k) + f_2(Ł_2) u(k),
y(k) = C x(k) + D u(k),    (1)

where Ł_i, i ∈ {1, 2}, is the matrix representation of the graph G_i, i.e., the shift operator in GSP terminology, C ∈ R^{l×n} and D ∈ R^{l×n} are the observation matrices, and f_i : R^{n×n} → R^{n×n} is a matrix function defined via the Cauchy integral [28]

f_i(Ł) := (1/2πi) ∮_{Γ_{f_i}} f_i^s(z) R(z, Ł) dz,    (2)

with f_i^s being the scalar version of f_i, which is assumed analytic on and over the contour Γ_{f_i}. Here, R(z, Ł) := (zI − Ł)^{−1} is the resolvent of Ł. Model (1) is expressed in terms of its state-space representation and captures the relation between the input, the output, and the state through a first-order difference equation [29]. It connects the output (observables), y(k), to a set of variables (states), x(k), which vary over time and depend on their previous value and on external inputs (excitations), u(k). After observing model (1), a natural question arises: assuming that the observation matrix C and the relation between P and {G_i, G_j} are known, how can we retrieve {Ł_i, Ł_j}, i.e., the network structures, from a number of samples of the input signal u(k) and the output signal y(k)? In this work, we aim to answer this question by employing techniques commonly used in control theory, which rely on results for Hankel matrices and linear algebra. In particular, we employ subspace techniques which do not require any parametrization of the model; hence the problem of performing nonlinear optimization, as in prediction-error methods [30], is avoided. IDENTIFIABILITY CONDITIONS FOR LTI SYSTEMS For the sake of simplifying notation, from this point on we omit the dependency on Ł_i and Ł_j of the system matrices in (1) and refer to the matrices f_i(Ł_i) and f_j(Ł_j) as A and B, respectively. Prior to introducing the methods for network topology identification, we review the identifiability conditions of LTI systems. In this section, we briefly recap the requirements on the system matrices (A, B, C, D) for applying subspace techniques to estimate them. The main requirement for system identification is the minimality of system (1). This property is intrinsically related to two well-known properties of dynamical systems: reachability and observability. The first property denotes the ability of the input, u(k), to steer the system state from the zero state to any desired state within a finite time interval, while the second denotes the ability to observe the time evolution of the states through the evolution of the output; that is, it answers the question of the uniqueness of the relation between the state and the output. Before stating these notions mathematically, let us introduce the following two matrices [29]

• Controllability matrix: C_s := [B, AB, . . . , A^{s−1}B];
• Observability matrix: O_s := [C^T, (CA)^T, . . . , (CA^{s−1})^T]^T.

Based on these matrices, the following two standard results state the concepts of reachability and observability in a more formal way: system (1) is reachable if and only if rank(C_n) = n (Lemma 1), and it is observable if and only if rank(O_n) = n (Lemma 2). By using these results we can now formally state the definition of minimality of a system. Definition 1. (Minimality) The LTI system (1) is minimal if and only if it is both reachable and observable. Furthermore, the dimension of the state vector x(k) of the minimal system defines the order of the LTI system.
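The reachability and observability conditions above reduce to simple rank tests; the sketch below, for a generic (A, B, C) triple, builds both matrices and checks minimality.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def is_minimal(A, B, C):
    n = A.shape[0]
    return (np.linalg.matrix_rank(ctrb(A, B)) == n and
            np.linalg.matrix_rank(obsv(A, C)) == n)

# Toy example: a 3-state system driven through the first node only
A = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])
B = np.eye(3)[:, :1]
C = np.eye(3)
print(is_minimal(A, B, C))  # True
```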
As the system identification framework only guarantees recovery of a minimal system, from this point on we only consider problem instances where the system of interest is minimal. Note that this is not a restrictive assumption: even when we retrieve a minimal system of order p < n, this can be interpreted as a system on the nodes of a hypergraph, i.e., clusters of nodes that drive the general behavior of the process over the network. SUBSPACE NETWORK IDENTIFICATION In this section, we introduce a general framework for estimating the topology of the networks, i.e., the associated matrices {Ł_1, Ł_2}, from input-output relations. To do so, we first provide the methods for retrieving the system matrices in (1). Then, we state the required conditions and propose different methods for estimating the graph matrices {Ł_1, Ł_2} from the obtained system matrices. State-Space Identification It is not hard to show that the state of system (1) with initial state x(0) at time instant k is given by

x(k) = A^k x(0) + Σ_{τ=0}^{k−1} A^{k−1−τ} B u(τ).

Observing the expression relating the states, the input and the output in (1), we can specify the following relationship between the batch input {u(k)}_{k=0}^{s−1} and the batch output {y(k)}_{k=0}^{s−1}:

y_s = O_s x(0) + T_s u_s,

where y_s := [y(0)^T, . . . , y(s−1)^T]^T, u_s := [u(0)^T, . . . , u(s−1)^T]^T, T_s is the block-Toeplitz matrix built from the impulse response (Markov) parameters {D, CB, CAB, . . .}, and s is the size of the batch, which must be larger than the number of states (assuming the number of nodes equals the number of states, this implies s > n). Given that the underlying system is time-invariant (i.e., the graph does not change in time), the same relation holds for block-Hankel matrices U and Y built from shifted input and output batches [31], i.e., Y = O_s X + T_s U, where X collects the corresponding initial states. Throughout this work, we assume that C has rank equal to n. Although this assumption seems restrictive, we adopt it to simplify the exposition of the approach. Dealing with dynamical models whose output dynamics satisfy l < n is not trivial; as will become evident, disambiguation of the system matrices requires extra information when C is wide or singular. Therefore, this is left for immediate future work. To identify the system matrices from (1), we first make use of the following lemma. Lemma 3. (Verhaegen and Dewilde [32]) Given the RQ factorization

[U; Y] = [R_11, 0; R_21, R_22] [Q_1; Q_2],

for appropriately sized matrices R and Q, the following relationship holds for the input-output data matrices: range(R_22) = range(O_s). Using Lemma 3 and the singular value decomposition (SVD) of R_22, i.e., R_22 = U_R Σ_R V_R^T, we can obtain the transition matrix A (up to a similarity transform) as follows. First, from

U_R = O_s T = [C_T^T, (C_T A_T)^T, . . . , (C_T A_T^{s−1})^T]^T,    (11)

where we have defined A_T := T^{−1}AT and C_T := CT for an unknown similarity transformation matrix T ∈ R^{n×n}, we can compute an estimate Â_T of A_T by solving the overdetermined system

U_{R,l} A_T = U_{R,r},    (12)

which exploits the shift-invariance of the system. Here, we have defined the matrices U_{R,l} := U_R(1 : (s−1)l, :) and U_{R,r} := U_R(l+1 : sl, :), and abused MATLAB notation to denote the rows and columns that are considered for building system (12). From (11), we can observe that an estimate Ĉ_T of C_T can be obtained by selecting the first l rows of U_R. Since C is full rank, we can estimate the similarity transform T from C_T. Therefore, the estimate Â of A can be obtained as

Â = T̂ Â_T T̂^{−1}.    (15)

While a similar approach using the matrices R_21 and R_11 can be performed for retrieving a transformed B, i.e., B_T = T^{−1}B [32], here we compute it, together with the initial state x_T(0) = T^{−1}x(0), by solving a least squares problem. This is done to keep the exposition of the approach conceptually simple, as the usage of the information in R_21 and R_11 requires the introduction of another (more involved) shift-invariant structure.
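A compact numerical sketch of the pipeline just described: build block-Hankel data matrices, take the RQ factorization via a QR of the transposed stacked data, extract the column space of R_22 by an SVD, and recover A_T from the shift-invariance of U_R. This is a bare-bones MOESP-style routine, not the authors' exact implementation; it assumes noise-free data, a persistently exciting input and enough samples for the reduced QR to be square.

```python
import numpy as np

def block_hankel(w, s):
    """Block-Hankel matrix with s block rows from samples w[0..K-1] (each in R^d)."""
    K, d = w.shape
    N = K - s + 1
    return np.vstack([w[i:i + N].T for i in range(s)])  # (s*d) x N

def subspace_A(u, y, n, s):
    """Estimate A up to similarity from input/output samples (MOESP-style)."""
    U, Y = block_hankel(u, s), block_hankel(y, s)
    # RQ factorization of [U; Y] obtained from a QR of the transpose
    q, r = np.linalg.qr(np.vstack([U, Y]).T)
    R = r.T                               # lower-triangular factor: [U; Y] = R Q
    m = U.shape[0]
    R22 = R[m:, m:]                       # column space ~ extended observability O_s
    U_R = np.linalg.svd(R22)[0][:, :n]    # dominant n left singular vectors
    l = y.shape[1]
    U_l, U_r = U_R[:-l, :], U_R[l:, :]    # shift-invariance: U_l @ A_T = U_r
    return np.linalg.lstsq(U_l, U_r, rcond=None)[0]
```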
To do so, first observe that for given matrices A_T and C_T the output can be expressed linearly in the matrices B_T and D as

y(k) = C_T A_T^k x_T(0) + ( Σ_{τ=0}^{k−1} u(τ)^T ⊗ C_T A_T^{k−1−τ} ) vec(B_T) + (u(k)^T ⊗ I_l) vec(D).

After the estimates B̂_T and D̂ are obtained, we can solve for the original matrices by appropriately multiplying them with the estimate of the similarity transform, as we did to retrieve A [cf. (15)]. Network Identification At this point, the system matrices have been obtained. Now, we consider different scenarios for estimating the topology of the underlying networks. Known Scalar Mappings {f_1^s, f_2^s}. In this case, we first obtain the eigenvalues of the graph matrices by applying the inverse mappings (f_1^s)^{−1} and (f_2^s)^{−1} to the spectra of the respective matrices. Therefore, to guarantee a unique set of eigenvalues for the graph matrices, the functions {f_1^s, f_2^s} should be bijective, i.e., one-to-one mappings, on an appropriate domain. For instance, for Ł_i being the normalized Laplacian, the mappings should be bijective in the interval [0, 2], as the spectrum of the normalized Laplacian lies there. When the inverse mappings cannot be found analytically (e.g., due to computational reasons), the problem of finding the eigenvalues of the graph matrices boils down to a series of root-finding problems. That is, consider [ω_i]_k as the kth eigenvalue of the matrix M_i, where M_1 := Â and M_2 := B̂, and f_i^s is the known scalar mapping. Then, the estimation of the eigenvalue vector λ_i for each of the matrices can be formulated as

[λ_i]_k = arg min_λ ([ω_i]_k − f_i^s(λ))^2,

for i ∈ {1, 2}. Fortunately, there exist efficient algorithms to obtain roots with high accuracy even for non-linear functions [34]. In addition, note that even when only A_T is known, we can still retrieve the eigenvalues of Ł_i, as this matrix is similar to A, i.e., A_T = T^{−1}AT. As by definition Â and B̂ are matrix functions of Ł_1 and Ł_2 [cf. (2)], respectively, we can use the eigenbases of these matrices to reconstruct the graph matrices as

Ł̂_i = V_i diag(λ̂_i) V_i^{−1},

where V_i is the eigenvector matrix of the respective estimated system matrix. Unknown Scalar Mappings. When the scalar functions {f_1^s, f_2^s} are unknown, we can opt to retrieve the sparsest graphs that are able to generate the estimated matrices, i.e.,

Ł̂_i = arg min_{Ł ∈ S, λ} ||Ł||_0  subject to  Ł = V_i diag(λ) V_i^{−1},    (20)

where diag(·) denotes a diagonal matrix with its argument on the main diagonal and S is the set of desired graph matrices, e.g., adjacency matrices, combinatorial Laplacian matrices, etc. To do so, we can employ methods existing in the GSP literature that, given the graph matrix eigenbasis, retrieve a sparse matrix representation of the graph [18,35]. One-Shot State Graph Estimation. As an alternative to the previous two cases, we can estimate the network topology related to the states while avoiding the explicit computation of A_T. That is, after obtaining an estimate of C_T, and hence T, we can notice that system (12) can be modified to include the graph matrix, i.e.,

U_{R,l} T^{−1} (Ł_i A) T = U_{R,r} T^{−1} Ł_i T,    (21)

where U_{R,l} and U_{R,r} are the left and right matrices associated with U_R in (12). Notice that in (21), we not only exploit the shift invariance in the U_R matrix but also the fact that Ł_i and A commute. We can check that this relation holds by recalling that the kth block row of U_R is C_T A_T^k [cf. (11)]. As a result, we can pose the following optimization problem

minimize_{Ł_i ∈ S, M ∈ M} ||U_{R,l} T^{−1} M T − U_{R,r} T^{−1} Ł_i T||_F^2 + μ ||Ł_i||_1,    (23)

where we have defined M := Ł_i A to convexify the problem. Here, μ is a regularization parameter controlling the sparsity of Ł_i, the optimization is carried out over the set of desired graph matrices S (as in (20)), and M is a convex set of matrices meeting conditions derived from the matrix representation of the graph, e.g., if Ł_i is restricted to a combinatorial Laplacian then 1^T M = 0^T must hold.
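For the heat-kernel mapping used in the synthetic example later on (f^s(x) = α e^{−xτ}), the demapping step has a closed form, but the general root-finding formulation can be sketched with SciPy's bounded scalar minimizer; here α, τ, the search domain and the eigenvalue values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def demap_eigenvalues(omega, f_s, bounds=(0.0, 2.0)):
    """Recover graph eigenvalues lambda from system eigenvalues omega by
    solving [lambda]_k = argmin ([omega]_k - f_s(lambda))^2 on a domain
    where f_s is bijective (e.g., [0, 2] for the normalized Laplacian)."""
    lams = []
    for w in omega:
        res = minimize_scalar(lambda x: (w - f_s(x))**2,
                              bounds=bounds, method="bounded")
        lams.append(res.x)
    return np.array(lams)

# Illustrative heat kernel f^s(x) = alpha * exp(-x * tau)
alpha, tau = 1.0, 0.5
f_s = lambda x: alpha * np.exp(-x * tau)
true_lam = np.array([0.0, 0.4, 1.1, 1.9])
omega = f_s(true_lam)                 # eigenvalues of A = f(L)
print(demap_eigenvalues(omega, f_s))  # ~ [0.0, 0.4, 1.1, 1.9]
```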
Alternatively, we could solve for Ł_i and A by means of alternating minimization [36]. Although in principle this approach requires knowledge of T, in many instances it is possible to find a graph matrix associated with the transformed system, i.e., a graph associated with the system {A_T, B_T, C_T, D}, as the shift-invariance property is oblivious to this ambiguity. NUMERICAL EXAMPLES To illustrate the performance of the proposed framework, we carry out a pair of experiments using synthetic and real data. Synthetic Example. For this example, we consider a simple system where Ł_i ≠ Ł_j with n = 15 nodes, f_1^s is a scaled diffusion map (i.e., f_1^s(x) = α_i e^{−xτ_i}, also known as a heat kernel), and f_2^s is the identity map, i.e., f_2^s(x) = x. Here, it is assumed that all states are measured, i.e., C = I, and that there is a direct feedback from the input to the observations, i.e., D = I. As input, we considered random piecewise-constant (over the sampling period) binary bipolar signals with 300 samples each. The reconstruction of the topology using the proposed framework is shown in Fig. 1. In Fig. 1a, the true and reconstructed adjacency matrices for the states and input are shown. As expected, when the data follows a practical model, the reconstruction of the matrices Ł_1 and Ł_2 is guaranteed to be exact. Here, since we have considered simple scalar mappings, we only perform root finding to retrieve the eigenvalues of the graph matrices. The eigenvalue comparison for both graphs is shown in Fig. 1b. ETEX dataset. We now consider data from the European Tracer Experiment (ETEX) [37]. In this experiment, a tracer was released into the atmospheric system and its evolution was sampled and stored at multiple stations over time. As it is unlikely that such a process has as many states as stations, we cluster the 168 measuring stations into 25 geographical regions and aggregate their measurements as a preprocessing step. This preprocessing is supported by the singular values of (1/m)Y_m shown in Fig. 2a, where it is observed that most of the dynamics can be described with a system of order 5, i.e., the first knee in the plot. Here, we selected 25 nodes as a trade-off between complexity and graph interpretability (second knee). As the propagation of the tracer is considered to be a pure diffusion in an autonomous system, i.e., the matrices B and D equal zero, we employ the proposed one-shot state graph estimation method [cf. (23)] to retrieve the underlying network structure. In this case, it is also assumed that the observations are the states of the system, i.e., C = I. The estimated graph is shown in Fig. 2b. Here, the size of the circle representing a vertex is proportional to the degree of the node. From the estimated graph, we can observe that the region of Berlin presents the highest degree, which is consistent with the concentration results in [33]. Further, the strong connectivity along the France-Germany region correlates with the spreading pattern of the agent. Although this graph has fewer nodes than the one obtained in [33] (see Fig. 2c), the estimated graph presents better visual interpretability and exhibits similar edge behaviour.
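For the unknown-mappings variant discussed above, the sparse-recovery step admits a direct convex-programming sketch. The formulation below (with cvxpy) is one common l1 surrogate of the sparsest-graph problem from the GSP literature [18,35]; it assumes a real, orthonormal eigenbasis (e.g., from a symmetric estimated system matrix), and the trace normalization used to exclude the all-zero solution is an assumption, so the graph is recovered only up to scaling.

```python
import numpy as np
import cvxpy as cp

def sparsest_graph_from_eigenbasis(V):
    """Given a real, orthonormal eigenbasis V of an estimated system matrix,
    find a sparse graph matrix sharing that eigenbasis (l1 surrogate)."""
    n = V.shape[0]
    lam = cp.Variable(n)               # free graph eigenvalues
    L = V @ cp.diag(lam) @ V.T         # graph matrix constrained to eigenbasis V
    constraints = [cp.trace(L) == n]   # normalization excluding L = 0 (an assumption)
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(L))), constraints)
    prob.solve()
    return V @ np.diag(lam.value) @ V.T

# Toy usage: eigenbasis of a known Laplacian; recovery is up to scaling
S_true = np.array([[2., -1., -1.], [-1., 1., 0.], [-1., 0., 1.]])
_, V = np.linalg.eigh(S_true)
print(np.round(sparsest_graph_from_eigenbasis(V), 2))
```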
CONCLUSION

In this paper, we have introduced a general framework for graph topology learning using state-space models and subspace techniques. Specifically, we have shown that it is possible to retrieve the matrix representation of the involved graphs from the system matrices by different means. In the particular case of the graph related to the states, we presented a one-shot method for topology identification that does not require the explicit computation of the system matrix. Numerical experiments on both synthetic and real data have demonstrated the applicability of the proposed method and its ability to recover the topology of the underlying graph from data.
Experiences of activity monitoring and perceptions of digital support among working individuals with hip and knee osteoarthritis – a focus group study

Background: Mobile health (mHealth), wearable activity trackers (WATs) and other digital solutions could support physical activity (PA) in individuals with hip and knee osteoarthritis (OA), but little has been described regarding experiences and perceptions of digital support and the use of WATs to self-monitor PA. Thus, the aim of this study was to explore the experiences of using a WAT to monitor PA and the general perceptions of mHealth and digital support in OA care among individuals of working age with hip and knee OA.

Methods: We conducted a focus group study where individuals with hip and knee OA (n = 18) were recruited from the intervention group in a cluster-randomized controlled trial (C-RCT). The intervention in the C-RCT comprised 12 weeks' use of a WAT with a mobile application to monitor PA, in addition to participation in a supported OA self-management program. In this study, three focus group discussions were conducted. The discussions were transcribed, and qualitative content analysis with an inductive approach was applied.

Results: The analysis resulted in two main categories: A WAT may aid in optimization of PA, but is not a panacea, with subcategories WATs facilitate PA; Increased awareness of one's limitations; and WATs are not always encouraging, and Digital support is an appreciated part of OA care, with subcategories Individualized, early and continuous support; PT is essential but needs to be modernized; and Easy, comprehensive, and reliable digital support.

Conclusion: WATs may facilitate PA but may also aid individuals with OA in finding the optimal level of activity to avoid increased pain. Digital support in OA care was appreciated, particularly as a part of traditional care with physical visits. The participants expressed that digital support should be easy, comprehensive, early, and continuous.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-022-14065-0.

Introduction

Osteoarthritis (OA) is a chronic and common musculoskeletal disorder occurring frequently in the hips and knees [1][2][3]. Individuals with hip and knee OA often experience pain and reduced function of the affected joint [3][4][5], which may lead to reduced quality of life and reduced work ability [6,7]. Hip and knee OA are also associated with an increased prevalence of comorbidities and premature mortality [8,9]. There is ample evidence that physical activity (PA) decreases pain and improves physical function and health-related quality of life in individuals with hip and knee OA [10]. PA is defined as "any bodily movement produced by skeletal muscles that results in energy expenditure" [11]. For all adults, the World Health Organization (WHO) recommends at least 150-300 min of moderate-intensity PA, or at least 75-150 min of vigorous-intensity PA, or a combination of both during the week for substantial health benefits [12]. Doing some PA but not reaching the recommended levels is still better than no PA at all [13]. However, despite the recommendations and the evidence showing the effect of PA, previous research has reported that most individuals with hip and knee OA are not physically active enough [13,14]. Interventions using behavior change techniques have previously been shown to improve adherence to PA in the short term [15,16].
Behavior change techniques are defined as the smallest "active ingredients" in an intervention and support the individual in the behavior change process [17]. Some of the most effective behavior change techniques to enhance adherence to PA have been found to be goal setting, self-monitoring of behavior, social support, and non-specific reward [16]. These and several other techniques are often incorporated in mobile health (mHealth) interventions [18,19], which have frequently been used in the last decade to promote PA in different populations [20,21]. mHealth is a subsegment of electronic health and encompasses the use of mobile communication devices such as smartphones, tablets, personal digital assistants, and wearable activity trackers (WATs) for digital health [22][23][24]. WATs are increasingly popular among users but also in research, with eight published studies in 2013 and 199 in 2017 [21]. They are often used for self-monitoring of PA and can provide the user with prompts and feedback through an application (app) on the smartphone or tablet [21]. Commercially available WATs measure different aspects of PA such as steps, distance walked, intensity level and heart rate [21]. WATs have been used in interventions to promote PA, and systematic reviews have shown that they can be effective in increasing PA levels in healthy adults [25], older adults [19], individuals with rheumatic and musculoskeletal diseases [26], and individuals with other chronic diseases [27]. Several studies have also shown high short-term adherence to WAT-use among participants in PA interventions [26,[28][29][30][31]. Other types of digital health are also used to support individuals with hip and knee OA. There are several examples of web-based platforms and mobile apps that offer digital support such as information, exercises, and feedback [32][33][34]. Before implementing new methods to promote PA and health, it is important to gain information about the users', i.e., patients', perceptions and opinions about the method [35]. Several published studies have reported experiences and perceptions of using digital solutions and mHealth to support self-management in adult arthritis and OA patients [36][37][38][39][40][41][42]. The experiences differ but, in general, the results showed that the digital solutions could aid in self-management, increase adherence to exercise and improve the patients' communication with health care personnel. Apprehensions towards the digital solutions and desired features of the digital support were also reported [36][37][38][39][40][41][42]. Only a few studies have reported on participants' experiences of self-monitoring PA with a WAT [36,42] and, to our knowledge, there are no studies on a Swedish, working-age population. The results could add relevant information about OA patients' experiences and perceptions in this area, which might guide clinicians and researchers when designing and providing future OA care. The aim of this study was to explore the experiences of using a wearable activity tracker to monitor physical activity and the general perceptions of digital support in OA care among individuals of working age with hip and knee osteoarthritis.

Design

We conducted a focus group study and applied qualitative content analysis to the data [43][44][45][46]. The consolidated criteria for reporting qualitative research (COREQ) were used as guidance when reporting the study [47].
Setting

This study was part of a larger project investigating the effect of self-monitoring PA with a WAT in working individuals with OA [48]. Briefly, a cluster-randomized controlled trial (C-RCT) was conducted with one control group (n = 74) and one intervention group (n = 86). The primary outcome in the C-RCT was work ability, and the secondary outcomes were PA and work productivity. Both groups received information about OA, self-management, and exercise in group lectures according to the Supported OA Self-management Program (SOASP) [49,50]. In addition, the participants in the intervention group used a WAT, the Fitbit Flex 2, and the Fitbit app for 12 consecutive weeks. The Fitbit Flex 2 device is worn in a small wristband and measures distance, steps, time in different activity levels, etc., which can be viewed in the Fitbit app [51]. The Fitbits had a default step goal of 10,000 steps per day, which was changed to 7,000 steps per day. This was changed to make the step goal more achievable for the participants, but also because previous research has reported that taking 7,000 steps or more per day is associated with a lower risk of mortality [52] and has been shown to correspond to 150 min of moderate-to-vigorous PA (MVPA) per week [53]. The participants were asked to monitor their activity daily, and they also received some automatic feedback from the app. Feedback could be positive push notifications when they reached their step goal, reminders to move, or different badges of PA accomplishment. The feedback was visible in the app or sent to the participant's e-mail.

Participants

In this study, a combination of purposive and convenience sampling methods was used [54]. Participants from the intervention group of the C-RCT who participated in 2019 (n = 57) were approached by email and asked if they were willing to take part in focus group discussions about their experiences of using the WAT and their perceptions of digital support in OA care. We chose to ask only participants who had taken part in the intervention in 2019 so that they would more easily recollect the intervention. Of all contacted potential participants (n = 57), twenty individuals agreed to participate, but two dropped out due to different unforeseen events. Three focus group discussions with six participants each were held. The groups were formed based on the participants' preferences of date and, in general, the participants were not familiar with each other.

Process

The first author EÖ moderated each session, and the co-authors KS (discussions one and three) and EEH (discussion two) assisted. All three researchers are female, registered physiotherapists (PTs) with experience in qualitative research. EÖ had previously met the participants on one or several occasions. However, these meetings took place as a part of the research project, e.g., delivery of the Fitbit or group lectures in the SOASP. The participants had received brief information about the study by e-mail in conjunction with their informed consent. They signed the informed consent form and brought it with them to the focus group discussion. Each group discussion was carried out in the same manner. The participants were offered coffee and a sandwich upon arrival at the conference room and were able to get casually acquainted with the other participants. The participants, the moderator and the assistant sat around a table. Before commencing the discussion, the moderator started with a brief introduction.
It was emphasized that the participants could feel secure in talking freely and expressing their experiences, and that there were no 'right' or 'wrong' things to say. Participants were also asked not to pass on the information that emerged during the discussions. A questioning route was thereafter used, with an opening question, introductory questions, key questions and ending questions [45]. The questioning route was designed before the focus group discussions and was applied in all three sessions without any changes (Additional file 1). The questions were mostly open-ended and designed to address the aim of the study. Discussions between the participants were encouraged. Follow-up questions, or questions that targeted a specific participant, were asked when needed. Field notes were taken by the assistant. At the end of each session, the assistant verbally summarized what had been discussed during the focus group, and the participants were allowed to comment on this. After each focus group discussion, the moderator and the assistant had a brief debriefing where they reflected on the content of the focus group discussion. The focus group discussions lasted between 60 and 75 min and were conducted in November-December 2019. The three discussions and the debriefings were audio-recorded and transcribed verbatim by EÖ. Participant demographics were collected prior to this study in conjunction with the C-RCT and are presented in Table 1.

Table 1: Participant characteristics and physical activity levels (IPAQ-SF categories). SD: standard deviation; WAT: wearable activity tracker; IPAQ-SF: International Physical Activity Questionnaire – Short Form.

Data analyses

The data from the focus group discussions were analysed using qualitative content analysis with the inductive approach as presented by Elo and Kyngäs [43]. No themes or categories were identified in advance. We followed the three phases of the analysis: preparation, organizing and reporting. All three transcribed focus group discussions were seen as one unit of analysis. The transcribed discussions were read through several times by EÖ and KS to become familiar with the data. Thereafter, the data was anonymized and organized using the software program NVivo (released 2020). Open coding was conducted in NVivo, headings were written using annotations, and codes were thereafter created. Similar codes were grouped into subcategories, and similar subcategories were grouped into main categories. The process was not linear, and data was re-organized several times.

Results

Two main categories were identified during the analyses: A WAT may aid in optimization of PA but is not a panacea and Digital support is an appreciated part of OA care. The main categories and their subcategories are presented in Fig. 1. Representative quotes from all three focus group discussions are attached to each category.

A WAT may aid in optimization of PA, but is not a panacea

The participants expressed that the WAT had in different ways facilitated PA and increased their awareness of the number of steps that were optimal for handling their OA symptoms. However, using the WAT was not experienced as encouraging by all participants, and in some situations prompts from the app regarding PA were experienced as stressful and discouraging if they were unable to walk.

WATs facilitate PA

The WATs facilitated PA in more than one way. Targeting and reaching the daily step goal were experienced as a spur to walk more than usual.
The participants described that they would walk around the block or take the dog out for an extra walk in the evening if they saw that they were some steps short of reaching the goal. Setting a realistic and achievable step goal and having a "good enough is perfect" approach to PA were seen as important.

"...it will be easy to push or trigger yourself to go those steps extra if you are at 6,500, it is easy to motivate and take another walk to reach the goal." Quote from discussion 1

"I'm amazed at how controlled I am by it, 7,000 steps, it was like, that's what I walked every day. And now that I don't have this [the WAT] anymore, I don't think I take that many steps anymore. I'm really affected by it." Quote from discussion 2

The different types of feedback from the Fitbit app (prompts, reminders, and rewards) were also experienced as an incentive to do more PA, especially walking. The participants could receive prompts about reaching the step goal but also reminders to move if they had been sedentary for some time.

"It's positive that it beeps when you haven't walked 250 steps in an hour. When it "beeps" you get to move and take a turn in the corridors at work…" Quote from discussion 1

Increased awareness of one's limitations

One aspect that surfaced during the discussions was that the WATs not only facilitated PA but also made the participants aware of their PA level and their limitations in engaging in PA, especially walking longer distances. They used information about the number of steps taken and related it to their pain and other health-related issues. In that way, they became more aware of the number of steps that was optimal specifically for themselves. When they stayed within their optimal number of steps, they experienced fewer pain flares and fewer pain-related disruptions of their regular exercise. However, sometimes the reason for a pain flare was unknown. Some participants could not identify any pattern at all regarding PA and pain. Being able to show others how many steps they had walked during the day was also seen as valuable. It could be used as a sort of evidence to legitimize their need for rest, whether after a day at work or after an entire day of sightseeing while on vacation.

"I could see [in the app] that I should probably quit now. Plus, you can say it to others: I have taken the steps that I can manage, and I can't tag along any longer." Quote from discussion 2

WATs are not always encouraging

Both limitations and disadvantages of using a WAT were highlighted during the discussions. Some speculated that the facilitating effect of the WAT depended on the interest of the user. A WAT and its information/feedback serve no purpose if the user is not encouraged by them. Another factor that could limit the facilitating effect was if the user was hindered from walking because of OA pain or functional limitations. Also, those who were already highly physically active reported limited effects of the WAT, since there was little room for increasing their PA. Concerns and experiences of anxiety and stress related to WAT-use were expressed in the discussions. These feelings were experienced when participants failed to reach their step goal, or when they received prompts from the WAT to move but were not able to walk because they were driving a car, attending a meeting, etc. Pushing oneself too hard and never feeling content with the amount of PA was also highlighted as a disadvantage of WAT-use.
"You get disquiet if you do not reach 7,000 steps… I think it happened to me one day and that was very tough…" Quote from discussion 1 "You can go too far with this, as you said, you push yourself and then you have to do a little more and then you have to do a little more and you will never be satisfied. " Quote from discussion 1 Digital support is an appreciated part of OA care Digital support in OA care was, in general, discussed in positive terms but a combination of traditional faceto-face OA care and digital support was perceived as the best solution. Perceptions on OA care, functionality of digital support and the PTs' role were also highlighted. Individualized, early, and continuous support It was considered important that the advice and exercise delivered in OA care were individualized and that the health care personnel or personal trainer identified what would motivate individuals to engage in PA. They also felt that the traditional care failed to recognize that also younger, working individuals are affected by OA. The SOASPs are often held during working hours and some participants had experienced that the attendees of OA were mostly older individuals. "I feel that this… I participated in the SOASP… that it was me and then it was 90-year-olds. " Quote from discussion 3 "It [SOASP] should be sort of more separated in the age groups maybe because I have no one... but it felt like they were not in the same stage as I was. I would probably like to have that. " Quote from discussion 3 The timing of OA care was also discussed. They would like to have received information and treatment at an earlier stage of the disease. Self-monitoring certain aspects of one's own health and detecting any changes was seen as a way to encourage seeking care at an early stage. More frequent visits to health care personnel in the early stage of the disease was also mentioned to support and consolidate behavior change or learn suggested exercises. The need for continuous support from health care was also stressed among the participants. A suggestion emerged of an OA-PT that would see them regularly for check-ups. The suggestion was based on their experiences of individuals with diabetes that see a nurse specialized in diabetes for check-ups once yearly. Quote from discussion 2 PT is essential but needs to be modernized PT has a key role in OA care, both traditional and digital care. Some of the participants had experiences of a digital platform for OA care. They appreciated that they had a personal and continuous connection with a PT in the digital platform that could individualize their exercises and offer guidance and support. One functionality that they lacked in the digital platform was the possibility to receive feedback regarding how they performed their exercises. They received the exercises on video but could not film themselves and show the PT. "That's what I miss about Joint Academy (digital platform). I have never shown how I do my exercises. So theoretically, I can do them completely wrong. " Quote from discussion 1 The participants talked about sharing their WAT activity information with a PT. A positive aspect was that the PT would gain more information regarding their health and would therefore be able to guide them better regarding PA, exercises etc. Knowing that there was a recipient to their activity data was also seen as a motivating factor. To trust the PT that they shared their activity information with was important. 
"Someone could help me check what it is that makes me feel so bad today, if it's because I did too much or I did too little or what could be the cause... Then I was grateful because I can't find a pattern myself and don't really know... " Quote from discussion 2 "It can be a good discussion basis for the follow-up visit: "You have walked far too much" or "you have not moved enough. "" Quote from discussion 3 PT treatment was discussed in the three sessions and particularly home exercises with stick figure drawings on paper. The participants did not appreciate that they received stick figures drawn by their PT. To instead be provided with instructional videos of the exercises was seen as a superior alternative compared to exercises illustrated with stick figures. "The stick figures should have been a video instead.-Yes, an instructional film. " Quote from discussion 1 Digital support should be easy, comprehensive, and reliable High availability, more frequent feedback, and initial help with setting up the app or WAT were mentioned when digital support was discussed. There were diverging opinions about apps and WATs in general. Where some participants expressed a great interest in them and had many apps in their smartphone, others said that they had no general interest in apps or WATS and that they wanted a simple support that worked as intended. "I think it's a problem, that you can't get in… That I can't make it work. I feel it's like a sort of handicap. But once it works, it's amazing. " Quote from discussion 2 "… someone must probably instruct me what to do and how to set it up because as I said, I'm not interested in sitting and looking among the apps and what features they have and so on... " Quote from discussion 3 Desired features of digital support were brought up in the discussions. They appreciated step counting, feedback, and reminders to move that existed in the Fitbit. Other desired features of an optimal digital support were information, automatic registration of PA, to receive new exercises (on video) automatically, reminders to do the exercises and to be able to check it off from a list when you have finished an exercise. A more comprehensive digital support was also discussed with additional features supporting weight loss (logging food and counting calories). "I would like to have an increased support so that you get the whole concept of diet and other things as well, it would have been great, I think. " Quote from discussion 3 Experiences related to the reliability of the measurements and the data security of the WAT also surfaced during the discussions. A fear that unauthorized individuals or organizations would get access to the users WAT-data was also mentioned as a disadvantage, but they didn't feel it was a major issue for them. Also experiences about the accuracy of the WAT were discussed. The WAT did not measure all PA which was seen as a limitation. Participants had also experienced that the WAT sometimes measured incorrectly, registering other activities as steps, or not registering other activities at all. Discussion This focus group study reports the experiences and perceptions of WAT-use and digital support in working individuals with hip and knee OA. Experiences of the WAT as a tool to facilitate and optimize PA emerged in the discussions but also diverging experiences and perceptions were described; WATs could be discouraging for some individuals and in certain situations. 
Digital support was perceived as a valuable part of OA care, and the participants perceived that it should be individualized, easy, continuous, and reliable. The categories can also be linked to behavior change techniques such as self-monitoring of behavior, social support, problem solving and goal setting [17]. Although WAT-use in interventions to promote PA is a relatively new phenomenon, there has been a rapid increase in its popularity and use in research during the last decade [21]. Several meta-analyses have reported that WAT-use seems to increase PA in different populations [55]. The experience of the WAT as a tool to facilitate PA is also reported in a US study describing and comparing current and former WAT-users, where a majority (both current and former users) answered that the device contributed to increased PA [56]. Correspondingly, a qualitative study reported that patients with OA or inflammatory arthritis described that the WAT reinforced their motivation and helped them to reach their activity goal [36]. The importance of having a step goal to strive for is also shown in other qualitative studies reporting experiences from individuals with OA, arthritis, and type 2 diabetes [36,42,57]. The participants in this study also experienced that the WAT made them aware of how many steps per day were optimal for them to avoid worsening of pain. This experience, that both too little and too much PA might be suboptimal in OA, has been described as a U-shaped relationship [58,59]. WAT-use may aid the individual in finding the PA dosage that works best for them. In line with this, clinicians in the study by Leese et al. [36] expressed that the WAT could work as a "teaching tool" to help patients with OA and arthritis see the connection between the level of PA and the perceived pain. Negative opinions about, and limitations of, WAT-use were also highlighted in the discussions. The participants perceived that WATs would be more encouraging if the user had at least some interest in technology. This is in line with the results from a US study describing and comparing current and former WAT-users [56]. That study reported that the top three reasons for WAT-use (current and former users) were 'an interest in the technology', 'to monitor health variables' and 'aid to lose weight'. Even the interested and positive WAT-users in this study expressed that there were situations in which the WAT gave rise to feeling more discouraged or irritated than encouraged. They could feel discouraged when they were in so much pain that they could not walk enough to reach their step goal. These feelings of discouragement when using a WAT have also been reported in previous research [36,60]. In the study by Leese et al. [36], both patients and rehabilitation professionals expressed that the WAT-user might feel discouraged and uninspired by the activity information from the WAT if they could not reach their goal due to a fluctuating ability to walk or a constant deterioration. This could possibly be avoided if individual and realistic goals are set together with a rehabilitation professional instead of only using the default goals of the WAT app [36,61]. The other main category in this study entailed participants' experiences and perceptions of digital support in OA care. In general, the participants talked about digital support in positive terms. Digital support was seen as accessible and could help them easily gain more knowledge regarding their disorder and their health.
These results are also reported in previous research, where patients with OA described that having more information about their disorder and health would empower them to manage their symptoms better [38]. The participants in that study also expressed that if they could share data from their WAT with a health care professional, their information would be more objective and accurate. The health care professional would then have more knowledge and be able to make more informed and individually targeted recommendations. Sharing activity information with others might also increase adherence to WAT-use [62]. In a previous study exploring the perspectives of individuals with OA on mHealth, participants expressed that they would appreciate simple data input, personalized settings, and individual goals in an mHealth app [41]. Other desired features that were brought up during the discussions in our study were that OA care and digital support should be early and continuous. Early and continuous OA care could be important as a preventive measure to reduce the risk of avoidance of activities [63]. The importance of PTs in traditional and digital OA care was also discussed. Some of the participants had used a digital platform for OA care and said that they appreciated having contact with a PT through the platform and receiving individualized exercises with video instructions. In a previous study on OA patients' experiences of an exercise app, the participants said that they needed input from a professional who could see whether they were doing their exercises correctly [64]. This was echoed in this study, where participants expressed that the optimal OA care would be a hybrid between digital and traditional OA care with physical meetings with their PT. In the study by Danbjörg et al. [64], a combination of digital support and physical meetings was also preferred. When discussing the importance of PT in this study, stick figures illustrating exercises on paper generated lively discussions among the participants. They found the stick figures difficult to interpret and would have preferred to receive the exercises on video instead. Desired features of digital support have been reported in previous qualitative studies and were also discussed in this study. Simplicity and comprehensiveness were highly valued in an eHealth intervention [65], while in this study the desired features included being easy and comprehensive and including several functions, such as information about OA and exercises, automatic registration of activity, and the ability to log food. It was also seen as essential that the digital support worked as intended and was reliable. Previous research has reported that users lost interest if the app or other digital support did not function as intended [60].

Clinical implications

The general results of this study are in line with the results of previous research exploring the experiences and perceptions of mHealth and activity monitoring among individuals with hip and knee OA and other musculoskeletal disorders [65]. This strengthens our belief that the results from this study can be applied to similar populations. WATs can facilitate PA in different populations but may also be used to guide individuals with OA to find the specific dose of PA that is optimal for them. Pain is often a limiting factor and important to take into consideration when setting a PA goal. The implications for finding the optimal dose of PA are, however, limited by the WAT used in this study, which was mainly used for counting steps.
There may be situations where perhaps only bicycling is suitable. Where applicable, a treating PT or other health professional may also receive relevant activity information from the WAT. However, to our knowledge, patients cannot at present digitally share activity data with a PT in primary health care in Sweden. Future health care systems could be constructed to allow activity and other health data to be shared, to aid the clinician in their recommendations. WATs in general may perhaps facilitate PA particularly for individuals who are already physically active and have an interest in digital support, but some factors that emerged in this study might enhance the possibility of encouraging even those who are not as interested. The participants' perceptions of mHealth and digital support in OA care were also within the scope of this study. Digital support was seen as useful and accessible, especially as a complement to or part of traditional OA care with physical visits. Digital health care could probably be used by traditional health care to a larger extent. Below, we present the key clinical implications and suggestions from the results of this study. Some of the implications are somewhat outside the scope of this study but are included since they emerged during the discussions and were seen as relevant.

• When initiating WAT-use, technical "hands-on" support with settings and goals might be needed. Achievable and individualized step or activity goals are essential.
• Sharing the activity data with a PT or others may facilitate PA and adherence to WAT-use.
• The participants expressed that core treatment in OA should be delivered at an early stage of the disorder.
• The SOASP may need adjustment to suit younger and working individuals.
• Since OA is a chronic disease, OA care should be continuous. The care could be mainly digital but with visits at regular intervals, for example, annually.

Strengths and limitations

Measures to achieve trustworthiness, as suggested by Graneheim and Lundman [66], have been considered throughout this study. A questioning route was used in all three focus group discussions and no alterations were made to it. The same moderator, place and time of day were used, and the discussions took place within a period of a few weeks. An experienced assistant moderator participated in the discussions. Keeping these contextual factors consistent across all discussions increased the dependability of the results. Credibility has been strengthened by choosing the most suitable meaning units and presenting the analysis process thoroughly for transparency. Also, quotes from the participants were chosen to represent the content of the discussions. A continuous dialogue between EÖ and KS was held throughout the analysis process to make sure that all data was included in the results. Agreement was continuously sought between the two researchers in the analysis process. After each focus group discussion, the assistant summed up the discussion and offered the participants the possibility to comment. We believe that the results of this study could be transferred to a similar population of individuals with hip and knee OA of working age who are probably somewhat interested in mHealth and digital support. Even though many of the study participants were moderately to highly physically active, participants with low PA levels are also represented in this study.
Participant characteristics were presented to increase the opportunity for comparison with other study populations. This study also has limitations. The moderator and first author (EÖ) had met with all participants at least once. The number of meetings and the reason for the meeting(s) differed between participants (handing out the Fitbit and lecturing in the SOASP). This previous contact might have had an inhibitory effect on the participants' willingness to talk freely during the discussions. However, since the questions in the discussions were not directly related to their contact with EÖ, we believe that the participants felt that they could speak freely. The participants in this study are probably not representative of the general population with hip and knee OA, which may have affected the transferability to the general OA population. Based on data previously collected in the C-RCT, about 40% of the C-RCT participants already used a WAT when they registered for the study. This could indicate an interest in WATs and mHealth and might have introduced a selection bias. Most of the participants were women (72%), which could have had an impact on the results. Previous studies have shown that WAT-use is more common in women [67] and that women have higher adherence to WAT-use in a PA intervention than men [68]. Hence, the participants in our study were perhaps more positive towards WAT-use than a sample with an equal sex distribution would have been. Our sample is in other respects probably similar to individuals participating in SOASPs in Sweden, where the majority are women and have OA in the knee. In this study, 18 individuals took part, which resulted in the three focus groups. Additional participants and a fourth focus group could possibly have provided additional information, but given the consistency of the experiences and perceptions across the three discussions, we do not believe that a fourth focus group would have induced any major changes in the results.

Conclusion

This study provides information on how individuals with hip and knee OA experience and perceive PA monitoring and digital support in OA care. Using WATs may aid in facilitating PA for some individuals but not all. WATs could also help individuals with OA to relate their steps taken, or PA conducted, to their perceived pain or other health outcomes. This may help them (and their PT) to optimize the PA level. Digital support was seen as an appreciated part of OA care, but it should preferably be a hybrid solution between traditional and digital OA care. Health care should offer solutions for a hybrid health care that is individualized, comprehensive, easy, reliable, and continuous.
Oncology nurse: Psychological nursing for cancer patients, what can we do?

In recent years, the incidence of cancer in China has been increasing year by year. According to a report from the International Agency for Research on Cancer, the estimates of new cancer cases and cancer deaths in China in 2020 were 4.569 million and 3.003 million, respectively [1]. Cancer accounts for nearly one-third of deaths from all diseases each year. The increasing number of patients with cancer in China has led to an increasing need for cancer care services and challenges for oncology nurses. In the training course for oncology nurse certification, the most common question asked by nurses is: what can we do about the psychological distress of cancer patients? Talking about psychosocial and spiritual care for cancer patients, I would like to share some of my own understanding and experience. When I was in college, the teachers of the nursing school mentioned psychological nursing in almost every part of the course. My understanding of psychological nursing at that time was to comfort patients when they were stressed, anxious or in a bad mood. After many years of clinical practice in cancer nursing, I have come to realize that psychological nursing is much more than this. Cancer is a life-threatening disease with the characteristics of insidious onset, rapid disease progression, long treatment cycles, complex symptoms and poor prognosis. Patients have a variety of painful psychological experiences, such as anxiety, agitation, depression, anger, loneliness and despair, throughout the illness process.
Every patient is like a traveler lost in a desert, full of suffering and wondering. Once the disease is diagnosed, they begin a difficult and unusual journey. As oncology nurses, if we want to relieve cancer patients' psychological distress, understanding the causes of psychological pain should come first. The main causes of psychological distress in cancer patients, and the corresponding nursing strategies, are as follows:

Unrelieved physical symptoms and psychological distress

Cancer patients' physical symptoms include, but are not limited to, fatigue, weakness, insomnia, pain, nausea, vomiting, dyspnea, and constipation. In clinical nursing practice, we can see that unalleviated pain makes patients restless and even suicidal; nausea, vomiting and difficulty swallowing caused by illness or treatment make them unable to eat and depressed; dyspnea keeps them sleepless and anxious; malignant wounds with copious oozing fluid and a foul smell leave them afraid to go out and feeling lonely and depressed; and so on. The psychological distress of these cancer patients is caused by poorly controlled physical symptoms. A survey of 156 hospitalized patients with cancer pain showed that 35.2% of them did not receive effective pain relief [2]. Without physical comfort, psychological comfort is out of the question. Therefore, effective control of physical symptoms is the key to alleviating psychological pain. Oncology nurses should play a professional role in the symptom management of cancer patients, including symptom screening and assessment, correct medication administration, implementation of drug and non-drug nursing measures, patient education, follow-up support, etc.

Uncertainty of illness and psychological distress

In clinical nursing practice, patients with a clear cancer diagnosis, disease progression, poor prognosis, and even approaching death are often still not told the truth, and the uncertainty of the disease runs throughout the illness trajectory, which makes patients suspicious, anxious, restless, sleepless and without appetite. Their psychological distress comes from being deprived of their right to know by their families and medical staff. A survey indicated that in China a high proportion of cancer patients continue to receive inadequate information about their illness [3]. Nurses, as the primary caregivers of cancer patients, often become their most trusted persons. We should identify their concerns and needs, use communication skills appropriately to tell the truth, provide the information they need, guide them in decision making, and advocate for patients' rights, so that they have the opportunity to be involved in treatment and care plans consistent with their values and preferences, feel a greater sense of self-control, and experience less psychological distress.

Death approaching and psychological distress

In clinical nursing practice, we can see many cancer patients at the end of life, even in the last days or hours, who cannot face their current condition and accept the fact that death is approaching. Some patients struggle between hope and despair. Some patients have died by suicide in extreme fear and hopelessness. Some dying patients develop restlessness and delirium that cannot be explained by physiological factors. The psychological distress of these patients comes from their anxiety and fears about death and separation from their loved ones, and from impending death with wishes unfulfilled.
A study revealed that of 300 Chinese patients with advanced cancer, 43 (16.8%) had moderate death anxiety based on scores of 45 on the Death and Dying Distress Scale – Chinese version [4]. Providing timely death education for terminal patients to promote a good death is the responsibility of healthcare staff. A good death is actually "good living" at the end of life. Nurses, with their unique role as professional caregivers, have more opportunities to guide terminal patients to understand and accept dying. We should accompany and support patients to find meaning, reorganize their life plans, appreciate and treasure the happy days and loved ones they have, and live actively in the present until the last moment rather than passively waiting for death.

Anger and psychological distress

In clinical nursing practice, we often see patients full of anger who express the emotion in various forms: blaming others, becoming irrationally angry, scolding family members, or refusing examination and treatment. Many family members feel sad, hiding in the corridor alone in tears. As oncology nurses, we should know that patients' anger is actually the manifestation of serious psychological and spiritual pain, which has many causes, including: a sense of frustration caused by the patient's changed role in family and social settings or loss of independence; disappointment about the gap between expectations and reality because of disease progression; spiritual distress because their beliefs have crumbled in the face of cancer; and guilt over the heavy financial and caring burden of long-term treatment on their family. Nurses should express understanding and respect to patients through accompanying and deep listening, empathize with their feelings and distress, identify the concerns and worries underneath the anger, and build a bridge between patients and their families, encouraging them to communicate openly, express feelings, concerns and worries, support each other and face difficulties together.

Implications for nursing

Nursing as a discipline focuses on the patient's response to disease and treatment. This response is reflected in the patient's all-round physical, psychological, social, and spiritual needs. When any one of these needs is not met, psychological distress will occur. In addition, physical, psychological, social, and spiritual problems are closely linked and interact with each other. When we talk about psychosocial and spiritual care, providing holistic nursing for cancer patients is actually the focus. To provide quality care for cancer patients, oncology nurses, as primary members of the health care team, should have competencies including professional symptom management, appropriate communication skills and person-centered compassionate care. These competencies are precisely the core content of palliative care. That is to say, palliative care is an essential part of holistic cancer nursing, and every oncology nurse should have basic knowledge and skills in palliative care. How to integrate palliative care into cancer nursing to achieve quality cancer care is where further effort is needed.
Effect of a lifestyle-integrated functional exercise (LiFE) group intervention (sLiFE) on falls prevention in non-institutionalized older adults. Protocol of a randomised clinical trial

Introduction: Personalized programs of integrated strength and balance activities have shown their effectiveness in reducing falls in older adults.

Objective: To measure whether a group intervention based on the strength and balance principles of the sLiFE program is more effective than standard health advice in reducing the incidence of falls.

Methods: The study will comprise 650 participants aged over 65 years who live at home, observing established inclusion and exclusion criteria. Participants will be randomly assigned to two groups: group intervention (n = 325) and standard health advice (n = 325). The intervention group will follow the balance and strength activities described in the LiFE program manual. The group intervention will be carried out in groups of 12-14 and will consist of seven one-hour sessions over 12 weeks in health centres. Incidence of falls and quality of life will be assessed as primary outcome variables. Fear of falling and exercise adherence will be analysed as secondary outcome variables.

Discussion: Physical activity has been put forward as an effective treatment technique for these patients; however, long-term adherence to such programs remains a challenge. Group interventions could reduce dropout rates.

Conclusion: Falls represent a major health problem globally due to the disability they cause in older people. Prevention would help reduce not only their incidence but also the health costs derived from their treatment. Group intervention helps clinicians save resources and time, making it possible to attend to more people with the same quality of care.

Clinical trial registration: https://clinicaltrials.gov/study/NCT05912088?distance=50&term=NCT05912088&rank=1, identifier NCT05912088.

Introduction

Practitioners and researchers are focused on ageing and frailty, as demonstrated by different initiatives (1,2). It has been shown that in the population older than 70 years, frailty represents a 5.5 times higher adjusted risk of mortality, a 2.5 times higher risk of new disability, and a 2.7 times higher risk of loss of mobility (3). To reduce frailty, action must be taken on its main risk factor, sedentary lifestyle (4), and the prevention of falls is thus of particular importance; falls are among the five main health problems related to disability in people older than 60 years (5). Approximately 30% of people aged over 65 and 50% of those over 80 who live in the community fall at least once a year (3,5,6). Similarly, about 30% of those who fall suffer a new fall in the same year and 10% suffer several falls (7). Falling is, therefore, a risk factor for further falls. Furthermore, Noureldin et al. found that 15% of hospitalised older adults had a fall within 30 days of discharge (8).
Falls represent a major cause of disability in older adults, and over 50% of fallers present sequelae (5,6); half of those suffering a fracture from a fall do not fully recover their previous functional level. Older adults are more frequently admitted to hospital for injuries related to falls, and they also present repeated discharges and readmissions over the following three years (8,9). Moreover, between 32% and 80% of patients who survive hospitalisation after a hip fracture are left with a permanent disability (10), and 95% of hip fracture cases are the result of falls (11). In this regard, it has been observed that physical exercise approaches can reverse the functional disability caused by hospitalisation in older patients (12). Several studies have demonstrated improvements at the cardiovascular and mental (dementia) levels, and in psychological stress and quality of life (13,14). Exercise programs, multifactorial strategies for fall prevention and home interventions diminish falls (6,15). Other systematic reviews and meta-analyses are in this line, finding multifactorial interventions that include exercise to be the most effective, with exercise as a single intervention also showing significant effects. The literature suggests that exercise is the best approach for preventing falls in this population, but this could be influenced by the exercise component selected (16,17). Other structured training programs, for example the Otago Exercise Program, aim to improve muscle strength and balance (18). Nevertheless, they often do not generate long-term change, participation and adherence (19,20), even though physical activity has shown numerous benefits (21). A review concluded that multicomponent group exercise and exercise at home, as well as safety interventions at home, diminish the rate and the risk of falls (22).

The LiFE (Lifestyle-integrated Functional Exercise) intervention stood out as achieving the best results in preventing falls, with a reduction of 13% (23). It is a personalized program that has demonstrated its effectiveness in improving balance, strength, and physical activity, while reducing falls in older adults by incorporating exercise activities into their daily routines. This program showed a clinically significant reduction of 31% in fall rate in comparison to the control program. A 30% reduction in falls is similar to that of most interventions currently recommended for preventing falls in clinical guidelines. It has recently been suggested in a pilot study that LiFE could be effective when administered in a group setting compared to an individual intervention (24).

This project aims to compare standard health advice to the original LiFE program implemented in a group (sLiFE), with the aim of facilitating large-scale implementation with lower use of resources and verifying effectiveness in terms of fall rates, physical activity, and cost-effectiveness. The objectives will be to assess whether a group intervention implementing the sLiFE program principles reduces fall rates compared to standard advice; to assess whether fall prevention is more efficient in the group intervention than with standard individual recommendations; to measure the incidence rate of falls according to participants' level of physical activity; to assess medium- and long-term adherence to the exercise program; and to find out the participants' fear of falling.

Hypothesis

The sLiFE group intervention is more effective than usual health advice for the prevention of falls in older adults living at home.
Design

Multicentre randomised clinical trial with two parallel arms, designed according to the CONSORT statement. This protocol follows the SPIRIT guidelines for randomised trials. It was registered with ClinicalTrials.gov in June 2023 under the identifier NCT05912088.

Sample/participants

The study population will comprise subjects aged over 65 years who agree to take part in the study and are being treated in the primary care setting of the Health Area of "X." This study was reviewed and approved by the Salamanca Drug Research Ethics Committee in July 2022 under registration number PI 2022 071126. Before the study onset, all participants will be informed of the study objectives and will sign an informed consent form. Throughout the study, the standards established in the Declaration of Helsinki will be followed. All those meeting the inclusion criteria in the health centres will be invited to participate.

Exclusion criteria

Heart failure (NYHA class III-IV); previous stroke (<6 months); Parkinson's disease diagnosis; active cancer treatment (last 6 months); chronic obstructive pulmonary disease (GOLD class III-IV); fragility fracture of the lower extremity; lower extremity amputation; treatment for depression initiated less than six months ago; uncontrolled resting blood pressure (systolic > 160 or diastolic > 100 mmHg); unavailability for the intervention (more than two months of travel or transfers planned in the first six months of the study); moderate-to-severe cognitive impairment (Mini-Mental Cognitive Assessment < 23); simultaneous participation in another clinical intervention trial.

Sample size

The main study variable, the annual incidence rate of falls in this population, was used to estimate the sample size. Accepting an alpha risk of 0.05 and a beta risk of 0.2 in a two-sided contrast (power 80% for the chi-square test and 94% for the t-test; estimated effect sizes of 0.30 for Cohen's d and 0.01 for Cramér's V), 325 participants are needed in the intervention group and 325 in the control group to detect a difference of ten percentage points as statistically significant between the control group, expected to be 30% [the estimated rate of falls in people older than 65 years of age (3,5,6)], and the intervention group, expected to be 20%. A dropout rate of 10% during follow-up has been assumed. This figure was calculated following the formula for qualitative variables published by Argimon Pallàs et al. (25); a worked version of this computation is sketched at the end of this section.

Participant assessment

An external researcher will receive the participants and carry out the initial evaluation. The interview will be completed with information from the participants' primary care medical history and the records of the University Hospital of "X." The visit will take around 50-60 min and includes the collection of sociodemographic data, lifestyle, physical activity, cognitive status, adherence to exercise, quality of life and fear of falling. Participants will be fitted with a digital pedometer to record their physical activity for eight days. After the initial assessment, participants will be randomized to the intervention or control group.

Randomisation

Once the inclusion criteria have been assessed, participants will sign the informed consent and will then be randomised into the intervention or control group (Figure 1). An independent investigator, blinded until groups have been assigned, will generate the allocation sequence in a 1:1 ratio using the Epidat 4.2 software package. Considering the nature of the study, participants cannot be blinded to the intervention.
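As a check on the figures above, here is a minimal Python sketch of the classical two-proportion sample-size formula (30% vs. 20% expected annual incidence, alpha 0.05 two-sided, power 80%). It is an illustration, not the exact procedure of Argimon Pallàs et al. (25); small differences with the protocol's 325 per group come from rounding and dropout-adjustment conventions.

```python
from math import ceil, sqrt
from scipy.stats import norm

p1, p2 = 0.30, 0.20            # expected annual fall incidence per group
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided contrast
z_b = norm.ppf(power)          # ~0.84
p_bar = (p1 + p2) / 2

# Classical formula for comparing two independent proportions.
n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
     + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
n = ceil(n)                        # ~293 evaluable participants per group
n_adjusted = ceil(n / (1 - 0.10))  # inflate for ~10% dropout -> ~326
print(n, n_adjusted)
```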
Procedure for the sLiFE program intervention
The intervention will take place in four different stages.

Stage one
Physiotherapists (who will lead the sessions) will establish the guidelines to be followed and the intervention dynamics (sessions will be carried out in the same way and with the same contents in all health centres). The manuals for professionals, in Spanish, will be used to guarantee reproducibility (26).

Stage two
The intervention group (in groups of 12-14 participants) will be given a brief guide in Spanish, drawn from the participant's manual, to support them in carrying out the activities. The intervention group (n = 325) will do the balance activities (tandem stance, tandem walk, one-legged stand, leaning from side to side, leaning forward and backward, stepping over obstacles forwards and backwards, stepping over obstacles sideways) and the strength activities (knee bends, sitting down and getting up from normal and low chairs, toe stand, toe walk, heel stand, heel walk, walking sideways, climbing stairs, and tightening muscles), following the principles and implementation strategies of the program (26). A total of five one-hour in-person sessions and two follow-up telephone sessions will be held.

Stage three
Implementation of the sLiFE program (26). The total intervention comprises twelve weeks according to the schedule in Table 1.

Stage four
Follow-up evaluation at six months. The SPIRIT figure has been added as Supplementary material.

Sociodemographic variables
Participants' age, education level, marital status, and profession will be noted. The prescribed pharmacological treatment, lifestyle habits, smoking history and smoking pattern, and alcohol consumption will be recorded.

Anthropometric variables
Height will be measured with the portable Seca 222 system, with the subject standing. The average of two measurements, rounded to the nearest centimetre, will be recorded. Weight will be measured using a SOEHNLE 7830 digital column scale. Waist and hip circumference will be measured twice, following the recommendations of the Spanish Society for the Study of Obesity. Systolic and diastolic blood pressure will be measured with a validated OMRON M10-IT blood pressure monitor (Omron Health Care, Kyoto, Japan), following the recommendations of the European Society of Hypertension.

Charlson Comorbidity Index
The Charlson Comorbidity Index (CCI) was developed in 1987 by Mary E. Charlson. It has been considered the gold-standard tool in clinical research as a prognostic index to predict mortality. This index is a standardized score calculated as a simple weighted sum of comorbidity item scores (27,28). The original version of the CCI was composed of 19 items corresponding to different clinical comorbidities (27).

Frailty
Frailty will be measured following the five criteria of Fried's phenotype (29): (1) low muscle strength; (2) poor nutrition; (3) poor endurance; (4) slow walking; and (5) low physical activity. Participants who meet these criteria will nevertheless be classified as active if they report a high amount of daily usual physical activity (e.g., climbing stairs or lifting weights). The outcome will be meeting at least one frailty criterion.
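The frailty outcome above reduces to a simple classification rule. A minimal sketch, assuming the five Fried criteria are recorded as a set of labels and the high-activity override is applied as described (the function name and encoding are illustrative, not part of the protocol):

```python
# Illustrative encoding of the frailty outcome rule: the outcome is met
# when at least one Fried criterion is present, unless the participant
# reported a high amount of daily usual physical activity.
FRIED_CRITERIA = ("low muscle strength", "poor nutrition", "poor endurance",
                  "slow walking", "low physical activity")

def frailty_outcome(criteria_met: set, highly_active: bool) -> str:
    unknown = criteria_met - set(FRIED_CRITERIA)
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    if highly_active:
        return "active"
    return "meets outcome (>=1 criterion)" if criteria_met else "does not meet outcome"

print(frailty_outcome({"slow walking"}, highly_active=False))
# -> meets outcome (>=1 criterion)
```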
Physical activity
Physical activity will be assessed with a digital pedometer (Omron HJ-321 Tri-Axial) (30), to be placed front and middle on one thigh for a period of 8 consecutive days (30). In addition, the Global Physical Activity Questionnaire (GPAQ) will be used (31). This questionnaire is made up of 16 questions about physical activity carried out in a typical week, differentiating between the types of activity in work, travel and free time. Data are collected on the intensity (low/moderate/high), frequency (days/usual week) and duration (hours-minutes/typical day) of physical activities carried out in three domains: (1) work (paid or unpaid employment, study, housework or job search), (2) commuting (walking/cycling to get from one place to another), and (3) free time (leisure). A question is also included about sedentary behaviour (time usually spent sitting or lying down, excluding time spent sleeping at night).

Mobility
The Short Physical Performance Battery (SPPB) assesses three aspects of mobility: balance, gait speed, and lower limb strength for rising from a chair (32).

Cognitive performance
The Montreal Cognitive Assessment (MoCA) (33) determines the existence of mild cognitive dysfunction. It comprises 30 questions and takes 10-12 min to complete.

Primary outcome measures
o The incidence rate of falls will be estimated for the intervention and control groups. Falls will be recorded using a daily log sent to the study centre monthly. On suffering a fall, participants must provide information about the time, date, injuries and prescribed treatment in relation to the fall, the fall location, and the movement which caused the fall. The person will then be interviewed by telephone to complete any missing data, determine the details of injuries, and confirm their health status at that time (7).
o Quality of life will be measured through the EuroQol 5D questionnaire, validated in Spanish (34). This questionnaire comprises five items (mobility, personal care, daily activities, pain/discomfort, and anxiety/depression) and a self-assessed thermometer of health status.

Secondary outcome measures
o Fear of falling: the Falls Efficacy Scale-International, short version (Short FES-I), assesses "concerns about falling" (35). It comprises seven items (items 2, 4, 6, 7, 9, 15 and 16). Item responses are coded on a 4-point Likert scale: (1) not at all concerned, (2) somewhat concerned, (3) fairly concerned, and (4) very concerned.
o Exercise adherence will be measured using the Exercise Adherence Rating Scale (EARS) as part of the monthly returns. This scale is composed of 16 items, scored on a 5-point Likert scale (0 = completely agree to 4 = completely disagree), with a total summed score ranging from 0 to 64 (36).
o Cost-effectiveness of the intervention: the incremental cost-effectiveness ratio (ICER) is the ratio of the difference in costs to the difference in health effects between the two interventions (37). These costs include outpatient treatment, formal/informal care, medication, transportation, room rental, and intervention costs covering labour, staff and participant transportation, and materials.
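The ICER defined in the last secondary outcome is simply the cost difference divided by the effect difference between the two arms. A minimal sketch, with purely illustrative placeholder numbers (not trial results):

```python
# ICER = (C_intervention - C_control) / (E_intervention - E_control).
# Costs and effects below are hypothetical placeholders.
def icer(cost_i: float, cost_c: float, effect_i: float, effect_c: float) -> float:
    delta_e = effect_i - effect_c
    if delta_e == 0:
        raise ZeroDivisionError("equal effects: ICER is undefined")
    return (cost_i - cost_c) / delta_e

# e.g. 180 vs 420 EUR per participant, 0.12 vs 0.10 falls avoided per person
print(icer(180.0, 420.0, 0.12, 0.10))  # -12000.0: cheaper and more effective (dominant)
```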
Procedures
Six months after the initial control group assessment and six months after the intervention group sessions have finished, subjects in both groups will be assessed with the same tests carried out in the initial assessment. After random assignment, follow-up assessments will be performed by assessors blinded to group assignment. To preserve this blinding, the study database will only display information unrelated to the intervention when assessors are logged in. Should a participant wish to withdraw from the study, they will remain eligible to complete the follow-up measurements with their consent. Researchers will record the reasons and date of withdrawal, but data recorded before withdrawal will be used unless the participant exercises their right to have all data deleted. Study data will be collected and managed using REDCap electronic data capture tools, hosted at the University of "X." REDCap is a secure, web-based software platform designed to support data capture for research studies.

Statistical analysis
The baseline characteristics of the study population will be expressed as means ± standard deviation (SD) for quantitative variables and as frequency distributions for categorical variables. Student's t-test, chi-square, and Fisher's exact tests will be applied to find differences in baseline characteristics between the intervention and control groups. All variables obtained from the questionnaires will be analysed using the reliability and validity criteria proposed by their authors.

The main analyses will be performed on the intention-to-treat principle, so all randomised subjects for whom the initial assessment was performed will be included in the data analysis set. Participants who withdraw or drop out will be asked to be included in the follow-up measurements; those lost to follow-up will be treated as missing data in the full analysis set. Variation between participants and groups will be modelled in detail in terms of factors such as dose, acceptability, and contextual factors. Using the chi-square test, we will compare the proportions of subjects who have had a fall in the two groups. All outcomes will be compared between baseline, six and twelve months using two-way repeated-measures ANOVA. Logistic regression analysis will be conducted to determine the influence of the different risk factors on falls. Statistical analyses will be performed with the SPSS V.25.0 package (SPSS Inc., Chicago, Illinois, United States). The cut-off for statistical significance is set at p < 0.05 (two-tailed). Effect sizes will be interpreted according to Cohen (38): small if 0.2 ≤ d < 0.5, medium if 0.5 ≤ d < 0.8, and large if d ≥ 0.8.

Trial status
Participants are currently being contacted and invited to take part. The first groups will be recruited by the end of the year.

Discussion
Falls prevention is seen as one of the most needed interventions in the population aged over 65. Both the NICE guideline on falls prevention (39) and the British and American Geriatrics Societies (6) recommend annual screening of subjects older than 65 years for a falls history and for the presence of gait and balance disorders.
While some publications have found evidence of the efficacy of multifactorial interventions in reducing falls in older adults and/or their consequences (40), some interventions developed in primary care in Spain have not been able to reduce the frequency of falls (7,40). However, the 2018 Cochrane review showed that most of these multifactorial and multicomponent studies were of low quality and at high risk of bias, and that there may be little or no effect on other fall-related outcomes. Furthermore, structured programs have failed to induce long-term behaviour change towards more regular exercise, demonstrating poor adherence (41). New concepts and formats that allow large-scale implementation and long-term adherence to balance and strength exercise are urgently required.

The intervention of the LiFE study has achieved the best results in preventing falls (23) and is considered to be of high quality. Moreover, the LiFE program was superior in terms of function and participation, providing support for this program in addressing both frailty and fall risk. However, less than 10% of older people regularly do strength training, and probably even fewer do balance activities. In the LiFE program, adherence was significantly better (23) and exceeded the 42% adherence reported in the New Zealand Otago trial, which tested a successful structured, home-based exercise program (18). Despite its effectiveness, however, the implementation of the home-based LiFE program requires considerable economic costs and human resources. No study has yet compared the LiFE intervention in a group format to standard health advice in a larger population.

A group intervention could facilitate adherence to these activities, and clinicians could save resources and time, becoming more effective while guaranteeing quality of care.

For these reasons, the sLiFE program aims to promote physical activity centred on modifying participants' behaviour, an approach that has demonstrated its effectiveness in reducing fall risk in a large randomised controlled trial. It seems necessary to assess whether the sLiFE program delivered in a group format can be recommended over individual participation.

Limitations
The 24 months prior to data collection could be a long period for such individuals to remember falls. In this study, the interview will be complemented with the information available in the participants' primary care medical history and in the records of the University Hospital "X" to ensure that all falls with consequences are considered.
Conclusion
This project can help to determine whether interventions that increase physical activity are effective in reducing falls and in avoiding the consequences derived from them. Its key characteristic is the analysis of whether a more economical (group) intervention can be recommended if it obtains results similar to the individual intervention on which it is based: the LiFE program, which has been shown to reduce falls in older adults. The group format could be applied to larger groups, making it possible to recommend it to most older people as the best fall prevention strategy.

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This study has been funded by the Spanish Ministry of Science and Innovation, Instituto de Salud Carlos III (ISCIII). RD21/0016/0010 (Network for Research on Chronicity, Primary Care, and Health Promotion, RICAPPS) is funded by the European Union-Next Generation EU, Recovery and Resilience Facility (MRR). The Government of Castilla y León also collaborated in the funding of this study through research project GRS 2502/B/22. The funders played no role in the study design, data analysis, reporting of results, or the decision to submit the manuscript for publication.

FIGURE 1 Sample size flow chart.

TABLE 1 Session schedule.
2024-01-10T16:16:37.116Z
2024-01-08T00:00:00.000
{ "year": 2024, "sha1": "8d3da125fb43da712cad98c6efa79ad3f321d7c6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1304982/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41e78141cd0ea105428c9fa8c82db1798ff7ed55", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
222080278
pes2o/s2orc
v3-fos-license
Investigating the effect of reinforcing particulates on the weight loss and worn surface of compocast AMCs

This paper aims to investigate the abrasive wear behavior of sol-gel coated B4C particulate reinforced aluminum metal matrix composites. Sliding wear is related to asperity-to-asperity contact of two counter surfaces in relative motion against each other. The effective wear from the specimen surface is due to the combined effect of a number of factors. An increase in the applied load leads to an increase in the penetration of hard asperities of the counter surface into the softer pin surface, an increase in the micro-cracking tendency of the subsurface, and an increase in the deformation and fracture of asperities of the softer surface. In general, composites offer superior wear resistance compared to the alloy, irrespective of applied load and B4C particle volume fraction. This is primarily due to the presence of the hard dispersoid, which protects the matrix from severe contact with the counter surfaces and thus results in less wear, a lower coefficient of friction and a smaller temperature rise in the composite than in the alloy. The worn surfaces of all specimens were covered with grooves parallel to the sliding direction and some plastic deformation. These grooves are typical features of abrasive wear, in which hard asperities on the steel counter face, or hard particles between the pin and disc, plough or cut into the pin and cause wear by removal of material. Plastic deformation, material smearing, cavities and craters imply adhesive wear.

Keywords: dry sliding, adhesive, mechanically mixed layer

Introduction

Wear is the process occurring at the interfaces between interacting bodies and is usually hidden from investigators by the wearing components. It is usual to classify wear in terms of four different categories: adhesive wear, abrasive wear, fatigue wear and tribochemical wear. Adhesive wear is characterized by the appearance of junctions between the surfaces that are subject to friction. Abrasive wear occurs when a hard material is put into contact with a soft material. This type of wear can cause scratches and wear grooves, and leads to material removal. Surface fatigue wear occurs when a material is subject to cyclical stresses. Tribochemical wear is a phenomenon which involves the growth of a film of reaction products due to chemical interactions between the surfaces in contact with each other and the surrounding environment. The enhancement of the tribological properties of Aluminum Matrix Composites (AMCs) has been effectively attainable by introducing ceramic particles [1]. There are excellent reviews on the tribology of AMCs. Chung et al. [2] reported that increasing the ceramic particle content enhanced the wear resistance of the base alloy. During sliding wear of AMCs, a layer is formed over the specimen surface, which strongly dictates the wear behavior of the materials. Shabani et al. [3] showed that subsurface micro cracks are generated during the wear of AMCs, which finally leads to the removal of wear debris, especially from asperity contacts. The formation of these micro cracks is due to the combined action of load, sliding speed and sliding distance.

*Corresponding author: tel.: +98 912497959; fax: +98 21 66930963; e-mail address: vahid ostadshabany@yahoo.com
In addition, a higher temperature rise also leads to greater flowability of surface materials and thus greatly increases the possibility of compaction of wear debris on the specimen surface. In this case, valleys between the asperities of the counter surface get partially occupied with the material of the specimen, which reduces the effectiveness of the abrasive action of the counter surface asperities. With increasing sliding distance, the temperature increases to a critical value at which the specimen surface gets oxidized. This oxidized surface either gets fragmented or becomes stable to some extent. The fragmented oxide particles sometimes act as a lubricating agent, and thus these oxide layers reduce the effective wear rate. Furthermore, the fragmentation and compaction of wear debris, counter surface material and thin oxide layers lead to the formation of a mechanically mixed layer (MML), which protects the specimen surface from wear [3]. However, further increasing the sliding distance raises the temperature, which leads to subsurface softening, and because of plastic incompatibility and thermal mismatch, the MML gets fractured and subsequently removed from the specimen surface. Thus at higher sliding distances it is expected that the formation and removal of the MML take place simultaneously; the rate of removal and the rate of growth of the MML may be the same, so the wear rate remains unchanged with sliding distance [4]. It is reported that as the sliding distance reaches the point of seizure, the MML becomes unstable because of a greater temperature rise in the subsurface, resulting in a higher degree of thermal as well as plastic incompatibility between the MML and the subsurface [5]. The wear rate of the unreinforced alloy is found to be higher than that of the composites. This is primarily because the hard dispersoids present on the surface of the composite act as protrusions and protect the matrix from severe contact with the counter surfaces [6], thus resulting in less wear in the composite than in the alloy. The hard particles resist the destructive action of the abrasive and protect the surface, so with increasing particle content the wear resistance is enhanced [7]. Additionally, the hard dispersoid makes the matrix alloy plastically constrained and improves the high-temperature strength of the virgin alloy [8]. During dry sliding wear of aluminum based composites, wear of the counter face is usually evident. The extent of iron deposition is reported to be more significant for AMC/steel sliding couples than for Al/steel sliding couples, due to micro cutting and ploughing of the reinforcing hard ceramic particles on the counter face. Razavizadeh reported that extensive mechanical mixing took place between the aluminum matrix composite and the steel counterpart during sliding wear, and an MML containing elements from the two sliding counterparts was formed on the worn surface [9]. In this study, B4C powders were incorporated into the semisolid matrix alloy and the sliding wear behavior of the composites was examined under varying applied loads and sliding distances. The benefits of semisolid agitation processes include reduced solidification shrinkage, a lower tendency for hot tearing, suppression of segregation, settling or agglomeration, and faster process cycles [10,11].
These advantages are accompanied by a lack of superheat (lower operating temperatures) as well as a lower latent heat, which results in a longer die life together with reduced chemical attack of the reinforcement by the alloy, and also a globular, non-dendritic structure of the solid phase which explains the thixotropic behavior of the material [12].

Experimental procedure

Al composites reinforced with B4C particles have been used for the present study. The Al alloy has the chemical composition of 4% Cu, 4% Mg, 0.5% Fe, 0.25% Cr, 0.25% Mn, 0.25% Ti, 0.25% Zn and the rest Al. The stir casting process was used, which generally involves the admixture of ceramic particulate reinforcement with a molten metal matrix. The B4C powders (particle size of 40 µm) were first coated with TiB2 via a sol-gel process. Mg was added to the melt in the final stage prior to pouring to enhance the wettability between the metal matrix and the reinforcement particles. The process involved melting the alloy in a graphite crucible using an electrical resistance furnace. The furnace was controlled using a J-type thermocouple located inside the gas chamber. The temperature of the alloy was raised to about 850 °C and stirred at 800 rpm using an impeller fabricated from graphite and driven by a variable AC motor. The stirrer was positioned just below the surface of the slurry and the coated particles were added uniformly at a rate of 50 g min⁻¹ over a time period of approximately 3 min. The temperature of the furnace was gradually lowered until the melt reached a temperature in the liquid-solid range (i.e. 590 °C) while stirring was continued. Squeeze casting was performed by pouring the composite slurry into a preheated permanent die and punch, where it was allowed to solidify under a squeeze pressure of 80 MPa for a duration of 5 min. High-temperature graphite powder was used in the die to facilitate removal of the cast blanks from the die after cooling. The pressure applied during solidification in the squeeze casting technique resulted in excellent feeding during solidification shrinkage. For the purpose of comparison, unreinforced Al alloy was also cast following procedures identical to those for the composite samples. A continuous purge of nitrogen gas was used inside and outside the crucible to minimize oxidation of the molten aluminum and graphite parts. The obtained cast bars were turned into small pins (each pin 6 mm in diameter and 25 mm in length). These pins were subsequently used in the wear tests. The disk, with a diameter of 50 mm and a thickness of 10 mm, was made of steel hardened to 63 HRC and polished to a very fine grade with a surface roughness of about 0.22 mm. Before the abrasion tests, each specimen was polished to 0.5 µm. Figure 1 shows a schematic diagram of the abrasion wear test. The pin-on-disk wear machine consists of a stationary pin pressed at the required load against a disk rotating at the defined speed. An AC motor ensures stable running speeds of the disk. The testing machine is equipped with a set of measuring transducers. The experiments were carried out at room temperature (21 °C, relative humidity 55%) with water as the lubricant. The samples were cleaned with acetone and weighed (to an accuracy of 0.01 mg using a microbalance) prior to and after each test. The temperature rise and friction force were recorded from the digital display interfaced with the wear test machine. The wear tests were conducted up to a total sliding distance of 2000 m.
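The pin mass loss measured in these tests (reported in the next section) can be converted to a volumetric wear rate for comparison across loads and distances. A minimal sketch, assuming a nominal density of about 2.78 g cm⁻³ for the Al-Cu-Mg matrix (an assumption for illustration; the paper reports mass loss directly):

```python
# Convert pin mass loss to volumetric wear rate (mm^3 per metre of sliding).
# The density value is an assumed nominal figure for an Al-Cu-Mg alloy.
def volumetric_wear_rate(mass_loss_mg: float, density_g_cm3: float,
                         distance_m: float) -> float:
    volume_mm3 = (mass_loss_mg / 1000.0) / density_g_cm3 * 1000.0  # mg -> g -> cm^3 -> mm^3
    return volume_mm3 / distance_m

# Hypothetical reading: 45 mg lost over the full 2000 m sliding distance.
print(volumetric_wear_rate(45.0, 2.78, 2000.0))  # ~0.0081 mm^3/m
```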
The mass loss of the pin was used to study the effect of B4C addition on the wear resistance of the composite materials under consideration. The worn surfaces of the samples were examined using a scanning electron microscope (SEM) equipped with energy dispersive X-ray spectroscopy (EDS) (EDAX).

Results

The volume fraction of B4C particles was measured by means of an image analyzer system attached to the optical microscope. A sedimentation experiment was conducted on the composites containing 15 vol.% particles. Figure 2 shows the B4C concentration as a function of the distance from the bottom of the mould (D). This clearly indicates that when the composite slurry was held in the molten state, the lower parts of the ingot contained a lower volume percent of particles than the upper parts, representing an uneven macroscopic particle distribution. The microstructures of the coated composites were examined by SEM in order to determine the distribution of the B4C particles and the presence of porosity. A typical SEM micrograph of the compocast B4C reinforced Al alloy composites is shown in Fig. 3. The distribution of the B4C particles within the matrix alloy was characterized by a distribution factor (DF) defined as DF = S.D./Af, in which Af is the mean value of the area fraction of the B4C particles measured on 100 fields of a sample, and S.D. is its standard deviation. Figure 4 shows the gradual decrease in DF for the composites as the particle content increased, indicating an improvement in the uniformity of the B4C particle distribution. A non-uniform microscopic distribution of the reinforcing phase within a sample is reflected as a relatively high value of DF. The wear tests were performed at normal loads of 5, 10, 15, 20, 25 and 30 N, and a sliding speed of 0.3 m s⁻¹, using a pin-on-disk type test machine. Figure 5 shows the weight losses of Al-B4C composites with different volume percentages during the wear test at an applied load of 20 N. The wear rate of the unreinforced alloy is found to be higher than that of the composites. The lowest mass loss in the wear test was observed for Al-15vol.%B4C and the highest for the bare Al alloy. Although the rate of change for the composites is much smaller than that of the matrix, the weight loss of both the matrix and the composites increases linearly with sliding distance. It is clear that the unreinforced matrix alloy wore much more rapidly than the reinforced composite materials.

Discussion

The uniformity of the particle distribution within the sample is a microstructural feature which determines the in-service properties of particulate AMCs. A non-homogeneous particle distribution in cast composites arises as a consequence of sedimentation (or flotation), agglomeration and segregation. The subject of particle distribution in particulate MMCs has been studied by several investigators, either qualitatively or quantitatively. The macroscopic particle segregation due to gravity (settling) has also been studied both experimentally and theoretically, the latter of which generally involves correlating the particle settling rate within the composite slurry with Stokes' law [13-18]. Boron carbide (B4C) powder was chosen as reinforcement because of its higher hardness (very close to diamond) compared with conventional and routinely used reinforcements such as SiC, Al2O3, etc. Further, its density (2.52 g cm⁻³) is very close to that of the Al alloy matrix [13].
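Returning to the distribution factor DF = S.D./Af defined in the Results above, the computation from the 100 per-field area-fraction measurements is straightforward. A minimal sketch with synthetic data (the real inputs came from the image analyzer measurements):

```python
# DF = standard deviation / mean of the B4C area fractions over 100 fields;
# a lower DF indicates a more uniform particle distribution.
import numpy as np

def distribution_factor(area_fractions: np.ndarray) -> float:
    return float(np.std(area_fractions, ddof=1) / np.mean(area_fractions))

rng = np.random.default_rng(0)
fields = rng.normal(loc=0.15, scale=0.03, size=100).clip(min=0.0)  # synthetic fields
print(distribution_factor(fields))
```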
The wettability of B4C particles represents a very important issue; it is poor at temperatures near the melting point of aluminum (660 °C) [19]. It is reported that B4C powders coated with certain Ti-compounds might have reasonable wettability with aluminum [20-23]. Figure 3 shows that fabrication of these composites via the compocasting technique leads to a reasonably uniform distribution of particles in the matrix and minimal clustering or agglomeration of the reinforcing phase. During solidification of the composite slurries, the reinforcing particles are pushed to the interdendritic or intercellular regions and tend to segregate along the grain boundaries of the matrix alloy. The quantitative assessment of the B4C particle distribution within the composite samples shows a gradual decrease in DF as the particle content increased, indicating an improvement in the uniformity of the B4C particle distribution (Fig. 4). These results can be attributed to the restricted movement of particles within the melt during solidification, as a consequence of the increased effective viscosity of the slurry, and to the less pronounced coarsening effects resulting in a finer matrix microstructure, which in turn causes a more uniform ceramic particle distribution. The wear resistance of the composites is considerably improved by the addition of the B4C particles and increases with increasing B4C volume fraction up to 15 vol.%. Generally, the most important factor in the improved wear resistance of all the composites is the presence of B4C particles, whose hardness is much greater than that of the matrix alloy [5]. It is well known that hard ceramic particles in the matrix alloy protect the softer matrix during sliding and strengthen the aluminum matrix. This protection limits deformation and also resists the penetration and cutting of the asperities of the sliding disk into the surface of the composite. The B4C particles also improve the load-bearing capacity and thermal stability of the composites [24]. It is noted that the weight loss of the composites is less than that of the unreinforced alloy, increases with increasing sliding distance, and shows a declining trend with increasing particle volume fraction. It is known that wear loss is inversely proportional to the hardness of alloys. In the case of the unreinforced Al alloy, the depth of penetration is governed by the hardness of the specimen surface and the applied load. But in the case of the Al matrix composite, the depth of penetration of the harder asperities of the hardened steel disk is primarily governed by the protruding hard ceramic reinforcement. Thus, the major portion of the applied load is carried by the B4C particles. The role of the reinforcement particles is to support the contact stresses, preventing large plastic deformations and abrasion between the contact surfaces, and hence to reduce the amount of worn material. However, if the load exceeds a critical value, the particles will be fractured and comminuted, losing their role as load supporters [25]. Figure 6 shows the hardness values for the unreinforced alloy and the composites investigated in this study. It is observed that the addition of hard ceramic B4C particles increases the hardness of the Al alloy. During sliding, a frictional force acts between the counter surfaces, causing frictional heating. While the counter surfaces are in relative motion, the frictional heating is continuous because there is insufficient time for heat dissipation.
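The inverse relation between wear loss and hardness noted above is often formalized through Archard's law, V = K·W·L/H, where V is the wear volume, W the normal load, L the sliding distance and H the hardness of the softer surface. The paper does not invoke Archard explicitly, so the sketch below is illustrative only, with a placeholder wear coefficient K rather than a value fitted to these data:

```python
# Archard-type estimate of wear volume (illustrative; K is a placeholder,
# not a coefficient fitted to the measurements in this paper).
def archard_wear_volume(k: float, load_n: float, distance_m: float,
                        hardness_mpa: float) -> float:
    """V [mm^3] = K * W [N] * L [mm] / H [MPa], since 1 MPa = 1 N/mm^2."""
    return k * load_n * (distance_m * 1000.0) / hardness_mpa

# At 20 N over 2000 m, raising hardness from 80 to 120 MPa cuts V by a third:
for h in (80.0, 120.0):
    print(h, archard_wear_volume(1e-5, 20.0, 2000.0, h))  # 5.0 and ~3.33 mm^3
```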
The variation of temperature with sliding distance at an applied load of 25 N is presented in Fig. 7. It is noted that the temperature rise is greater for the unreinforced alloy than for the composite, irrespective of the applied load and surface conditions. During sliding, in fact, a considerable fraction of the energy is spent on overcoming the frictional force, which leads to heating of the contact surfaces. Initially, the asperities are stronger and sharper, which is why the frictional force, and consequently the frictional heating, occurs at a higher rate. After a certain period, because of the increase in flowability of the material on the specimen surface, the slipping action is greater, which results in a reduction of frictional heating. A greater possibility of adhesion between the counter surfaces leads to a higher degree of friction. It is noted in Fig. 8 that the wear rate of all the samples increases marginally with applied load prior to reaching the critical load. The increase in the applied load leads to an increase in the penetration of hard asperities of the counter surface into the softer pin surface, an increase in the micro-cracking tendency of the subsurface, and an increase in the deformation and fracture of asperities of the softer surface. On the other hand, a greater amount of material from the pin surface accumulates in the valleys between the asperities of the counter surface, resulting in a reduction in the height and cutting efficiency of the counter surface asperities. Beyond the critical load for each composite, the wear rate starts increasing abruptly with the applied load. The load at which the wear rate increases suddenly to a very high value is termed the transition load. When the applied load is greater than the transition load, the wear rate of the composite shoots up to a significantly higher value. This is attributed to the significantly higher frictional heating, and thus to localized adhesion of the pin surface to the counter surface, as well as to increased softening of the surface material and hence greater penetration of the asperities. Under such conditions, material removal due to delamination of adhered areas, micro cutting and micro fracturing increases significantly. This leads to destruction of the MML, which was formed at lower applied loads in the initial period of sliding. As a result, after the critical load there is a transition from a smooth linear increase in wear rate to a sudden increase. It was reported in previous research that when the applied load induced stresses exceeding the fracture strength of the carbide particles, the particles fractured and largely lost their effectiveness as load-bearing components. The shear strains are then transmitted to the matrix alloy and wear proceeds by a subsurface delamination process. Furthermore, liberated reinforcing particles, as wear debris, roll over the contacting surfaces, creating a three-body abrasion situation and causing more wear on both contacting surfaces. The extent of this situation depends on sliding speed, applied load and frictional heating. In this case, the lower ductility of the Al-B4C composites appears to control the wear rates rather than the hardness of the particles, resulting in wear rates almost similar to those observed in Al alloys without B4C reinforcement. On the other hand, when the nominal load induces stresses lower than the strength of the particles, the particles act as load-bearing components.
In this case, the B4C particles remain intact during wear, supporting the applied load and acting as effective abrasive elements. The particles protruding from the surface of the composite bear most of the wear load, and the surface hardness of the composite is mainly a result of the hardness of the particles [26-28]. It is reported that the friction coefficients of composites containing B4C are higher than those of aluminum-based alloys sliding under identical conditions [3-5]. The higher coefficients of friction in the case of composites containing hard B4C particles are due to the formation of a tribofilm at the interface between pin and disk. If the effective load on an individual particle increases above its flexural strength, the particle fractures. Parts of the removed B4C particles become entrapped between the two partners, i.e. the asperities of the softer pin material and the asperities of the harder material (the hardened steel disk), possibly leading to three-body abrasion; this results in surface roughening between the contacting surfaces, and the coefficient of friction increases. Figure 9 shows a schematic illustration of a three-body abrasion model. The tribofilm contains debris from the specimen and the counter face steel disk. EDS analysis was used to detect traces of Fe on the worn surfaces (Fig. 10a,b). The variation of Fe content on the worn surface with B4C content is shown in Fig. 10c. It is observed that the formation of iron-rich layers on the contact surfaces increases with increasing B4C content [29]. The wear surface of the unreinforced alloy under an applied load of 15 N is depicted in Fig. 11. The flow of material along the sliding direction, the generation of cavities due to delamination of surface material, and the tearing of surface material are also noted in this figure. It is noted that the slider could penetrate and cut deeply into the surface and cause extensive plastic deformation, resulting in a large amount of material loss. The worn surface of the 10 vol.% composite at an applied load of 15 N is shown in Fig. 12. It indicates the formation of continuous wear grooves, a relatively smooth MML, and some damaged regions. However, the degree of crack formation on the wear surface is not great. The wear surface is characterized by the formation of parallel lips along the continuous groove markings. Unlike the worn surface of the unreinforced alloy, the number of scratches by abrasives or hard asperities was small. The worn surfaces of the composites were smoother, with shallower grooves along the sliding direction [30]. It was therefore reasonable that the wear resistance of the composites was higher than that of the unreinforced alloy.

Conclusion

The sliding wear tests showed that the weight loss of the coated B4C reinforced composites decreases with increasing volume fraction of B4C particulates. The improvement of the tribological properties of the Al alloy matrix via the addition of ceramic particles, including SiC and Al2O3 reinforcements, was reported by previous investigators; it is due to the resistance of the hard particles against the destructive action of the abrasive, so with increasing particle content the wear resistance is enhanced. The wear rate of all the samples increases marginally with applied load prior to reaching the critical load.
This result is also consistent with previous works and is ascribed to the increased fracture of the reinforcement, the penetration of hard asperities of the counter surface into the softer pin surface, and the micro-cracking tendency of the subsurface. The critical load of the unreinforced alloy is measured between 5 and 10 N, while for the composite reinforced with 7.5% B4C it lies between 15 and 20 N. After the critical load there is a transition from a smooth linear increase in wear rate to a sudden increase. This is attributed to the significantly higher frictional heating, and thus to localized adhesion and softening of the surface in contact with the counter surface. During sliding, the temperature of the contact surface is initially low, and hence the asperities are expected to be stronger and more rigid. As time progresses, the frictional heating increases, which leads to higher temperatures and softening of the surface materials.
2020-10-01T09:55:52.287Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "2c5458255c58ebced562f312c981dce6cb997252", "oa_license": null, "oa_url": "https://doi.org/10.4149/km_2013_1_11", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2c5458255c58ebced562f312c981dce6cb997252", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
38419970
pes2o/s2orc
v3-fos-license
Sequential Outbreaks Due to a New Strain of Neisseria Meningitidis Serogroup C in Northern Nigeria, 2013-14 – PLOS Currents Outbreaks

Background
Neisseria meningitidis serogroup C (NmC) outbreaks occur infrequently in the African meningitis belt; the most recent report of an outbreak of this serogroup was in Burkina Faso, 1979. Médecins sans Frontières (MSF) has been responding to outbreaks of meningitis in northwest Nigeria since 2007, with no reported cases of serogroup C from 2007-2012. MenAfrivac®, a serogroup A conjugate vaccine, was first used for mass vaccination in northwest Nigeria in late 2012. Reactive vaccination using polysaccharide ACYW135 vaccine was done by MSF in parts of the region in 2008 and 2009; no other vaccination campaigns are known to have occurred in the area during this period. We describe the general characteristics of an outbreak due to a novel strain of NmC in Sokoto State, Nigeria, in 2013, and a smaller outbreak in 2014 in the adjacent state, Kebbi.

Methods
Information on cases and deaths was collected using a standard line-list during each week of each meningitis outbreak in 2013 and 2014 in northwest Nigeria. Initial serogroup confirmation was by rapid Pastorex agglutination tests. Cerebrospinal fluid (CSF) samples from suspected meningitis patients were sent to the WHO Reference Laboratory in Oslo, where bacterial isolation, serogrouping, antimicrobial sensitivity testing, genotype characterisation and real-time PCR analysis were performed.

Results
In the most highly affected outbreak areas, all of the 856 and 333 clinically suspected meningitis cases were treated in 2013 and 2014, respectively. Overall attack rates (AR) and case fatality rates (CFR) were 673/100,000 population and 6.8% in 2013, and 165/100,000 and 10.5% in 2014. Both outbreaks affected small geographical areas of less than 150 km² and populations of less than 210,000, and occurred in neighbouring regions in two adjacent states in successive years. Initial rapid testing identified NmC as the causative agent. Of the 21 and 17 CSF samples analysed in Oslo, NmC alone was confirmed in 11 and 10 samples in 2013 and 2014, respectively. Samples confirmed as NmC through bacterial culture had sequence type (ST)-10217.

Conclusions
These are the first recorded outbreaks of NmC in the region since 1979, and the sequence type (ST)-10217 has not been identified anywhere else in the world. The outbreaks had similar characteristics to previously recorded NmC outbreaks. Outbreaks of NmC in two consecutive years in northern Nigeria indicate a possible emergence of this serogroup. Increased surveillance for multiple serogroups in the region is needed, along with consideration of vaccination with conjugate vaccines covering more than NmA alone.

Funding Statement
This study was funded as part of MSF routine operations. The WHO Collaborating Centre for Reference and Research on Meningococci, Oslo, funded sample transport media and laboratory serogroup and strain tests.

Introduction
In the African meningitis belt, and specifically within northern Nigeria, most meningitis outbreaks have been caused by N. meningitidis serogroup A (NmA), including large (10-100 thousand cases) and widespread outbreaks 1, 2, 3, 4. In the past 15 years, there have been an increasing number of large outbreaks caused by N. meningitidis serogroups W135 and X 5, 6, 7, 8, 9. Outbreaks due to Neisseria meningitidis serogroup C (NmC) have also occurred but were smaller and less frequent than NmA outbreaks 4. The last NmC outbreak in this region occurred in 1979 in Burkina Faso, with 539 cases reported (attack rate (AR) 517/100,000) 10. Outbreaks caused by NmC in northern Nigeria are rare, with the last and only recorded outbreak in 1975, for which no detailed report was published 4. Other notable NmC outbreaks occurred in the 1970s in Sao Paulo, Brazil and Ho Chi Minh, Vietnam, with 2005 (11/100,000 people) and 1015 (>20/100,000 people) cases, respectively 4. In the USA, morbidity and mortality are higher among young adults in outbreaks caused by NmC compared with other serogroups 11.

Médecins sans Frontières (MSF) has conducted surveillance and response to cerebrospinal meningitis (CSM) outbreaks in northwest Nigeria since 2007. Meningitis outbreaks due to NmA in northwest Nigeria in 2008 and 2009 were recorded with 7601 and 9442 cases, respectively; MSF carried out reactive vaccination using polysaccharide ACYW135 vaccine in affected parts of the region in these years. An outbreak due to serogroup W135 occurred in 2010 with 2307 cases. From 2007-2012, MSF recorded no outbreak caused by NmC in the region. In December 2012, the most recent mass vaccination for meningitis in northwest Nigeria, conducted by the National Primary Health Care Development Agency (NPHCDA) of the Ministry of Health (MoH), the World Health Organization (WHO) and donor organizations, used MenAfrivac®, a serogroup A conjugate vaccine 12. To our knowledge, there has been no mass vaccination specifically targeting NmC alone in this region. This paper describes the general characteristics of an outbreak due to a novel strain of NmC in Sokoto State, Nigeria in early 2013 and a smaller outbreak of the same strain in 2014 in the adjacent state, Kebbi, during which time no other serogroups were confirmed in the region.
Methods
Case definition. During this outbreak, the case definition used for CSM for those over 1 year of age was sudden onset fever and either neck stiffness or petechial rash. For infants under 1 year of age, the case definition was sudden onset fever and either bulging fontanelle or petechial rash. Only cases meeting this case definition were treated, had CSF samples taken, and were recorded as suspected meningitis cases.
Data collection. In the four northwestern states of Nigeria, meningitis surveillance is done by the MSF Surveillance Nurse through weekly proactive contact with all government disease notification officers. There is one notification officer for each Local Government Area in each state, and they are required to contact all health posts in their jurisdiction each week. The MSF Surveillance Nurse relays reports of meningitis cases to the MSF Emergency Response Unit for follow-up and confirmation using clinical and laboratory criteria. At MSF meningitis case-management sites, information for each case was recorded in a standardized line-list of core data. Maps of the outbreak area were created using data from case tracing. Affected population estimates were derived by combining the known population, as per the most recent national census, for each ward which had at least one case. The aggregated data used for this paper were collected as part of routine activities which MSF has approval from the MoH to conduct. This work met the standards set by the independent MSF Ethics Review Board for retrospective analyses of routinely collected programmatic data 13.
Laboratory methods. Cerebrospinal fluid (CSF) samples were collected from all eligible suspected cases at the start of the outbreak and tested using the rapid Pastorex® latex agglutination kit. Pastorex test kits were kept in controlled, refrigerated storage between 2 and 8 degrees Celsius. Cold chain procedures were maintained while transporting test kits to the field, in Gio'Style boxes with ice-packs. The field team conducted quality control tests on the kits with each usage and returned the kits to refrigerated storage at the end of each day. The first 21 and 17 samples in 2013 and 2014, respectively, were inoculated into Trans-isolate media 14 and sent to the WHO Collaborating Centre for Reference and Research on Meningococci, Oslo, for confirmation. Bacterial identification was determined by Gram staining, the oxidase reaction and standard biochemical tests. The strains were stored at -80 °C in brain heart broth with 15% sterile glycerol or in Greaves solution. N. meningitidis strains were serogrouped by slide agglutination with commercial antisera (Remel, GA, USA) 15. Antimicrobial susceptibility testing was performed by determination of the minimal inhibitory concentrations (MIC) using Etest (AB Biodisk, Solna, Sweden). Isolates were tested for susceptibility to penicillin G, amoxicillin, ceftriaxone, ciprofloxacin, chloramphenicol, rifampin, tetracycline and sulphonamides, and classified using the breakpoints from the European Committee on Antimicrobial Susceptibility Testing 16.
Genotypic characterization: DNA from each strain was prepared by suspending bacteria in Tris-EDTA buffer (10 mM Tris-HCl and 1 mM EDTA), pH 8.0, heating at 95 °C for 10 min, followed by centrifugation at 16,000 × g for 5 min. The supernatant was used as the DNA template for PCR. Multi-locus sequence typing (MLST) was performed as described on the MLST website 17.
The DNA sequences were compared with those on the MLST website for determination of the allele numbers, STs, and clonal complexes of the isolates 18 . Variation in the porA and fetA genes, coding for the outer membrane proteins PorA and FetA, respectively, was determined by DNA sequencing, as described previously 19 , 20 . New MLST alleles and STs were submitted to the MLST database 17 together with the strain serogroup and porA and fetA sequences. PCR analysis of the genes coding for the polysaccharide capsule was performed for genogroup determination of non-serogroupable isolates as described 21 . PCR analysis of culture negative specimens: DNA from Trans-isolate supernatants was purified using the QIAamp DNA mini kit (Qiagen) and analysed by real-time PCR for species identification, followed by genogrouping if N. meningitidis was identified. Determination of the PorA variant was done by DNA sequencing of the porA gene using a nested porA-PCR 22 .

Data analysis. Line-lists were entered into an MSF standardized database in Microsoft Excel. Quality checks on the data were done weekly. Epidemiological curves and frequency summaries of patient history and symptoms were generated in Excel.
Results
Case Numbers and Attack Rates, 2013. During the 20 weeks from February 9th until June 23rd, 2013, a total of 856 suspected cases of CSM presented for treatment at MSF or MoH treatment sites in Sokoto State (Table 1). The attack rate was 673 cases per 100,000 population in the affected wards of the state (Figure 1). Fifty-eight (58) deaths were recorded from treatment centres, giving a case fatality rate (CFR) of 6.8%. During the same period in 2013, some CSM cases were reported and treated by the MoH in Kebbi State, which borders Sokoto to the West. Detailed information on the cases from Kebbi State is not available.

The 2013 outbreak was limited to a small geographical area, spreading gradually to 44 villages and remaining restricted to a region of 105 km² (Figure 2). The epidemic curve for the 2013 outbreak (Figure 1) shows a peak in the 9th epidemiological week, during which time the outbreak was mostly restricted to the two index villages with high attack rates. Case numbers in these villages then decreased, leading to the low attack rate seen in the 11th week. The increase in weeks 12 and 13 reflects presentations from areas outside the index villages. The gradual overall decline in cases after this period seems to be due to the subsequent serial rise and fall of cases in other villages. The 2014 outbreak was slightly more widespread, affecting 57 villages in a region of approximately 150 km².

In total, 11 (52%) and 10 (59%) samples were confirmed as serogroup C in 2013 and 2014, respectively (Table 2). Most were also confirmed as a new strain with sequencing of ST-10217, PorA type P1.21-15,16 and FetA type F1-7. No other serogroups were identified during testing of CSF samples from these outbreaks.
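The reported 2013 rates can be sanity-checked with a few lines of arithmetic. The sketch below (Python) re-derives the 6.8% CFR from the published case and death counts and backs out the affected-ward population implied by the reported attack rate; the population figure is an inference from the published rate, not a number given in the paper.

```python
# Back-of-envelope check of the reported 2013 rates (illustrative only).
suspected_cases = 856
deaths = 58

cfr = deaths / suspected_cases * 100           # case fatality rate, %
print(f"CFR = {cfr:.1f}%")                     # -> 6.8%, matching the report

reported_ar = 673                              # cases per 100,000 population
implied_population = suspected_cases / reported_ar * 100_000
print(f"implied affected-ward population ~ {implied_population:,.0f}")  # ~127,000

# General form: attack rate per 100,000 = cases / population-at-risk * 100,000
def attack_rate(cases: int, population: float) -> float:
    return cases / population * 100_000
```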
Discussion
These outbreaks were caused by a strain of NmC that has not been seen anywhere else in the world: sequence type ST-10217, PorA type P1.21-15,16 and FetA type F1-7. As far as we know this is the first meningitis outbreak caused by NmC in northern Nigeria since 1975 and in the meningitis belt since 1979 4 , 10 , 23 . The seasonal pattern and presentation of these outbreaks, in the dry period during and following the Harmattan winds, did not differ much from those of other meningitis outbreaks caused by N. meningitidis in sub-Saharan Africa 24 , 25 , 26 . The outbreaks were confined to relatively small areas, and did not have the 'wild fire' effect more typical of meningitis outbreaks caused by NmA 2 , 4 . Age groups with the highest proportion of cases in these outbreaks were 5-14 years and 15-29 years, also typical of meningitis, with these ages possibly exposed to more risk factors for transmission such as overcrowding and active and passive smoke exposure 11 , 27 , 28 .

A relatively high percentage of Pastorex latex agglutination tests, carried out only on patients fitting the clinical case definition, had negative results during both the 2013 (64%) and 2014 (48%) outbreaks. This may have occurred for a number of reasons, including meningitis symptoms with a non-bacterial cause, or self-medication with antibiotics or traditional medicines prior to presentation. As there is no other report of Pastorex testing being used in a real outbreak situation for NmC, it is uncertain whether these negative test result percentages are unusually high.

During the 2013 outbreak, the MSF control strategy consisted of active case finding and health promotion activities. Due to heightened security issues and anti-vaccination sentiments in the area, reactive vaccination was not done. It is not clear that health promotion activities significantly decreased outbreak spread, though it was observed that after initiation of this strategy, cases presented earlier to treatment centres, and this could have contributed to lower mortality rates in the latter part of the outbreak. In 2014, the Kebbi state MoH, along with the WHO, attempted twice to apply for vaccines from the International Coordinating Group; however, neither request was granted. The state MoH later received approximately 20,000 doses of ACYW135 polysaccharide vaccine from the Federal MoH, which was used for reactive vaccination in some affected villages; details of the vaccination strategy and outcomes were not available to MSF. Similar to the previous year, health promotion and active case finding were carried out by MSF teams during the 2014 outbreak. Control strategies employed in 2014 by MSF and the MoH did not clearly impact outbreak spread.

The 2014 NmC outbreak had fewer cases than the 2013 outbreak; however, since the outbreaks occurred in different regions and were controlled with different measures, we cannot say this signifies a pattern of decrease. It is possible that these small, localized outbreaks will precede increasingly widespread occurrence of meningitis due to this serogroup (C) in the meningitis belt, as has been noted as a characteristic N. meningitidis outbreak pattern 4 .
It is possible that the mass vaccination with a conjugate 'A' vaccine (MenAfriVac®) in this region a few months prior to the 2013 outbreak could have had an influence on the emergence of new strains or less commonly seen serogroups, such as NmC. Serogroup replacement following mass meningitis vaccination has been noted in west Africa; reports from Niger and Burkina Faso have indicated a significant increase in serogroup W prevalence in the years following campaigns with MenAfriVac® around 2010 29 , 30 . Following a mass vaccination with MenAfriVac® in Chad in 2011/2012, it was seen that in one community serogroup A carriage decreased from 0.7 to 0.02%, while carriage of "other" serogroups (i.e. not A, W, X) increased from 0.4 to 0.7% 31 . Because of the possibility of serogroup replacement following vaccination, enhanced surveillance systems in the region are a priority 32 .

Conclusions and Recommendations
NmC outbreaks have emerged in northwest Nigeria in the past 2 years, and there is some evidence of serogroup replacement in the meningitis belt following recent mass vaccination with NmA conjugate vaccine. Meningitis case surveillance systems, for both serogroup and strain, should continue to be strengthened in this region to allow for early identification and proper control (such as vaccination for the appropriate serogroup) of outbreaks. If NmC outbreaks become more widespread in northern Nigeria or adjacent regions in the coming years, large-scale preventative action may be required; a key measure is to ensure availability of ACYW135 polysaccharide vaccine for reactive vaccination.
Investigation into Innovative Fabrication of Fiber Metal Laminated Pipe and Application
The fiber reinforced metal laminated tube (FMLT) is a multi-layer super-hybrid material produced by alternately laying metal tubes and fiber composite layers and curing them at a fixed pressure and temperature. The development of GLARE composite pipe hydroforming technology and its performance testing is of great significance to the aviation and automotive industries in terms of lightweighting and safety.

Introduction
For a long time, aircraft and automobile manufacturers have been looking for a new material with high specific strength, good fatigue performance and corrosion resistance. The GLARE composite pipe addresses many of these requirements, and it has been the subject of successive studies. Building on research into fibre metal laminates, Tao Jie used Yan Huigeng's graphic method to determine the matching pipes for fibre metal reinforced pipes and, assuming that the expansion nozzle is in a plane stress state, derived from the Lamé formula the minimum internal pressure required for expansion. KUL CAN obtained the optimum pressure curve for the expansion process using Dynaform and found that the best forming condition is an axial feed speed of 8 mm/s. Building on this, Dai Qiwei studied the preparation and formability of Ti/CF/PEEK/Ti composite layer tubes, showing that the critical bulging pressure of the composite crown is 7.9 MPa and that the strength of the inner titanium tube must be lower than that of the outer titanium tube. Deep drawing experiments on the Ti/CF/PEEK/Ti GLARE composite pipe were carried out and its deep drawing performance was characterized. In addition, KUL CAN carried out energy absorption experiments on GLARE tubes under axial compression. The main purpose of this study is to further develop the hydraulic forming technology of GLARE composite pipes and to study the formed parts themselves.

Research Background and Significance
Lightweighting is the development trend of the aerospace, high-speed railway, automobile and other transport machinery manufacturing industries, and thin-walled, integral, lightweight structures are an important means of realizing lightweight products. The first ten-year action plan of the Chinese government to implement the "Made in China 2025" strategy clearly emphasizes that green manufacturing, aerospace equipment and new energy vehicles are key areas for development. In addition, COP15, the World Climate Conference held in Copenhagen, Denmark in 2009, advocated a green, low-carbon lifestyle and corresponding patterns of production and consumption. For green manufacturing, one of the important measures is lightweight components. At the same time, countries and regions are imposing ever stricter requirements on the environmental protection, safety and corrosion resistance of the automobile industry, which strongly motivates automobile manufacturers worldwide to develop environmentally friendly products. According to statistics, automobiles produced with lightweight design weigh about 2.5% less than their earlier all-steel counterparts [1].
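The minimum expansion pressure obtained from the Lamé formula, mentioned above, can be illustrated with a short calculation. The sketch below (Python) is a minimal estimate of the internal pressure at which a thick-walled inner tube first yields at the bore, using the Lamé stress solution with the Tresca criterion; the radii and yield strength are invented placeholder values, not data from the studies cited above.

```python
# Onset-of-yield internal pressure for a thick-walled tube (Lamé solution).
# At the inner radius, sigma_theta - sigma_r = 2 * p * ro^2 / (ro^2 - ri^2);
# setting this equal to the yield strength (Tresca criterion) gives the
# minimum pressure at which plastic expansion can begin.

def yield_onset_pressure(ri_mm: float, ro_mm: float, sigma_y_mpa: float) -> float:
    """Internal pressure (MPa) at first yield of the bore, Tresca criterion."""
    ri2, ro2 = ri_mm ** 2, ro_mm ** 2
    return sigma_y_mpa * (ro2 - ri2) / (2.0 * ro2)

# Hypothetical inner tube: 20 mm bore radius, 1.5 mm wall, 300 MPa yield strength.
p_min = yield_onset_pressure(ri_mm=20.0, ro_mm=21.5, sigma_y_mpa=300.0)
print(f"minimum expansion pressure ~ {p_min:.1f} MPa")   # ~20 MPa for these values
```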
Against this background, tube hydroforming and its derivative technologies have gained wide attention and development in the aviation, aerospace and automobile manufacturing fields worldwide. This technology, one of the key technologies for lightweight structural manufacturing, was proposed in the late 1970s and first used in large-scale engineering practice in Germany in the 1990s. In the field of automobile manufacturing, its industrial chain covers hydroforming equipment, hydroforming pipes, hydroformed parts and the related inspection and testing equipment. In Japan, Europe, America, Korea and other countries and regions, a relatively independent and complete supplier system has formed, producing parts for the chassis, body and exhaust systems of automobiles, among many other fields. Pipe hydroforming technology has also developed extensively in the aerospace industry. In the manufacture of aircraft, rockets and engines, the requirement for lightweighting is even more urgent; the integral, complex thin-walled structural parts suited to tube hydroforming are widely used in these fields, so the technology has attracted much attention. In 2004, the Wright Foundation of the United States designated hydroforming as one of the key forming technologies to develop. In China, the technology has been funded by the National Natural Science Foundation, the National Science and Technology Support Plan and the National Key Basic Research Development Plan, and its development prospects are broad [2].

In addition to the development of new technologies, the search for new materials is an inevitable requirement for aircraft with better performance. Traditional metal materials have good plasticity and are easy to process and mould, but their corrosion resistance is relatively poor. Fiber reinforced resin matrix composites have high specific modulus, specific strength, corrosion resistance and fatigue properties, but poor impact damage resistance and ductility, and they are vulnerable to water aging in humid environments. Since these two classes of materials have complementary advantages, a new type of composite with an adhesively bonded structure, the fiber reinforced metal laminate, emerged. Figure 1 shows the application of GLARE composite laminated tubes in various fields.

Development status
Fiber reinforced metal laminates can be divided into generations according to the fiber reinforcement: the first generation, aramid fiber reinforced aluminium laminates (ARALL); the second generation, glass fiber reinforced aluminium laminates (GLARE); the third generation, carbon fiber reinforced aluminium laminates (CARALL); and the fourth generation, graphite fiber-titanium alloy laminates (TiGr). The fiber reinforced metal laminated tube (FMLT) is an interlaminar super-hybrid material cured at a fixed pressure and temperature after alternately laying metal tubes and fiber composite material.
Among these, the composite laminated tube formed by a manufacturing process similar to that of second-generation GLARE fiber reinforced metal laminates has good impact resistance, so it can be used for aircraft crashworthiness structures, aircraft landing buffer structures, automobile body crashworthiness devices, airborne material protection and so on; at the same time, because of its excellent corrosion resistance, it can be widely used in the chemical industry. The development of GLARE laminated tube hydroforming technology and performance testing is of great significance to the aviation and automotive industries in terms of lightweighting and safety. In this paper, the formability and hydroforming behaviour of the fibre metal laminated tube are studied as an example; the structure is shown in Figure 2.

Figure 2. Structural sketch of the GLARE layered tube
The manufacturing technology of the GLARE tube is mainly hydroforming, which has developed rapidly in recent years. After pre-treatment (deburring, bending and oiling), the tube is placed in a pre-set cavity die and deformed by controlling the axial feed (lateral thrust) and the internal pressure until the given shape is obtained [3]. Figure 3 shows the process schematic of pipe hydroforming.

Figure 3. Process schematic diagram of hydroforming pipe
Because the material is formed under a real-time controllable, uniform fluid surface force, and the axial thrust can continuously feed material into the deformation area, tube hydroforming greatly improves the forming limit and the dimensional precision of the part. Because water (or an emulsion) is used as the force transfer medium in industrial production and testing, and hydraulic proportional servo control is applied, the forming pressure and lateral thrust can be precisely servo-controlled in real time, with pressure holding applied as required, so the quality of the formed parts can be accurately controlled. Pipe hydroforming is suitable for any plastically deformable thin-walled metal, including stainless steel, carbon steel, copper alloys, aluminium alloys, magnesium alloys and titanium alloys. Hydroformed pipe parts are widely used in advanced manufacturing fields such as automobiles, aerospace, and military and nuclear power. Twist beams, trailing arms, front and rear frames and energy-absorbing boxes in automobile manufacturing are typical application cases, as are fuel conduits, heat-dissipating vanes, tee tubes and small-radius elbows.

Development Status of Hydraulic Expansion Joints
Generally, the methods of manufacturing laminated pipes include hot extrusion, explosive cladding (forming), coil welding, centrifugal casting, cold drawing and hydraulic bulging [4]. Compared with other technologies for manufacturing composite laminated pipes, hydraulic expansion has the advantages of low cost and good formability. The manufacturing processes of common composite pipes are summarized in Table 1.
Table 1. Manufacturing processes of common composite pipes
- Hot extrusion: suitable for alloy metal composites with low plasticity and poor workability; small deformation resistance, but high surface roughness of the composite pipe.
- Explosive forming (mechanical combination): can compound a wide range of metals with high efficiency; the cladding metal thickness can be large or small and the interface is tightly bonded; however, the explosion site is dangerous and the technical requirements are high.
- Welding (mechanical combination): can manufacture composite steel pipes for natural gas transportation with diameters above 300 mm; the production process is complicated and the production cost is high.
- Diffusion bonding: allows more diversified material combinations with good quality at slightly higher cost; the production process is complicated, the equipment investment is large, and the production efficiency is low.
- Drawing (mechanical combination): simple production process and low cost; the interface is a non-diffusion connection that delaminates easily at elevated temperature, so the application temperature is low.

The expansion joint of the composite pipe refers to plastic deformation of the inner pipe while the outer pipe deforms only elastically. When the internal pressure is unloaded, the springback of the outer tube is greater than that of the inner tube, and the residual contact stress at the interface mechanically bonds the two layers of tube. As a plastic forming method, expansion moulding of composite layer tubes greatly improves material utilization and offers high forming precision. In addition, with hydraulic expansion the joint force is uniform and can be calculated from the set parameters and the mechanical properties of the two metal pipes, the wall thickness distribution is uniform, and the inner surface quality of the pipe is high [5]. Figure 4 is an expansion joint schematic.

Status of foreign research
In the early 1970s, Fokker discovered the effect of metal-sheet thickness in adhesively bonded laminate structures: unexpectedly, bonding thin sheets improved fracture toughness and suppressed fatigue crack propagation [6]. Based on this observation, Delft University of Technology in the Netherlands, in cooperation with Fokker, added a reinforcing phase (unidirectional aramid fiber) between aluminium alloy sheets, obtaining a fiber reinforced metal laminate (ARALL) with a novel structure. Since then, Delft University has systematically studied fatigue performance, durability, processing properties, fracture toughness, impact damage and structural design, laying the foundation of fiber reinforced metal laminate research [7]. In 1987 the second generation of fiber reinforced metal laminates appeared, the glass fiber-aluminium laminate (GLARE), which is the main material concept used in this work. Compared with the first generation, GLARE laminates offer higher fatigue performance, notch strength and compressive strength. Airbus was the first to take the risk of adopting them, applying GLARE laminates to fuselage sections of the A340 and A330; after more than 100,000 flight tests, no damage was found. Boeing subsequently used GLARE laminates in the cabin floors of the 777 and 757.
The first large-scale application of GLARE laminates is in the A380: the upper fuselage skin panels, fairings, the vertical and horizontal tail, and the side panels of the upper double-deck fuselage. Figure 5 shows the crown of the fuselage barrel section made of GLARE. According to statistics, the upper fuselage of the A380 uses 27 GLARE panels with a total area of about 470 square metres [8]; the longest part is 11 metres, and the weight saving is about 800 kilograms.

Carbon fiber can likewise be combined with an aluminium alloy laminate (CARALL), but owing to the large difference between the galvanic potentials of carbon and aluminium, such laminates are prone to galvanic corrosion. To solve this problem, a barrier layer can be inserted between the carbon fiber layer and the aluminium alloy sheet, but the barrier layer increases the difficulty of preparing the CARALL laminate. For these reasons, carbon fiber reinforced aluminium alloy laminates have not yet achieved large-scale application in production.

Research on the mechanics of GLARE laminates focuses on shear properties and anisotropy. GLARE laminates are directional and exhibit anisotropy along the different fiber arrangement directions. In general, because of the high elastic modulus of the fibers, the tensile strength of a unidirectional GLARE laminate in the fiber direction is much higher than that of the aluminium alloy sheet, but its transverse tensile strength is low. If a cross-ply prepreg is selected for the fiber layer, both the longitudinal and transverse tensile strengths are guaranteed. Hale et al. first studied the effect of the orientation of the glass fiber prepreg on the tensile properties of GLARE laminates. Donnellan et al. studied the effect of temperature on the tensile properties of ARALL laminates at varying ambient temperatures: low and room temperature have no effect on the tensile properties of the various ARALL grades, but at 120 °C the tensile strength of ARALL-1, -2 and -3 degrades, while for ARALL-4 the same phenomenon occurs only at 170 °C or above [9].

Liu Cheng et al. used the short-beam method to study the interlaminar shear behaviour of GLARE laminates. The results show that when the span-to-thickness ratio is 8, the specimen undergoes pure shear failure, i.e. interlaminar peeling (delamination) near the neutral layer, and the apparent interlaminar shear strength obtained in this way can characterize the interlaminar shear performance of the composite laminate. When the span-to-thickness ratio is 5, the test fails in a combined squeeze-shear mode, and when it reaches 10 the failure mode is dominated by bending [10]. Table 2 lists the GLARE laminate shear performance.

Table 2. GLARE laminate shear performance
In addition to basic mechanical properties, the damage of GLARE laminates and the associated mechanics theory are also active topics. Metal volume fraction (MVF) theory is a classical theory [11] that characterizes the tensile properties of GLARE laminates. In its usual rule-of-mixtures form, the metal volume fraction and the predicted laminate strength are

MVF = (sum over i of t_metal,i) / t_lam,
sigma_lam = MVF * sigma_metal + (1 - MVF) * sigma_fiber,

where t_lam is the total thickness of the laminate, t_metal,i is the thickness of a single metal layer, sigma_metal and sigma_fiber are the yield strengths of the metal and fiber constituents, and a = 1 - MVF is the fiber volume fraction of the laminate in the tensile direction.
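As a quick illustration of the MVF relation, the sketch below (Python) computes the metal volume fraction and a rule-of-mixtures strength estimate for a hypothetical 3/2 GLARE layup; the layer thicknesses and strengths are illustrative placeholder values, not data from this paper.

```python
# Metal volume fraction (MVF) and rule-of-mixtures strength estimate
# for a hypothetical 3/2 GLARE layup (3 metal layers, 2 fiber layers).

def mvf(metal_layers_mm: list[float], t_lam_mm: float) -> float:
    """MVF = (sum of metal layer thicknesses) / (total laminate thickness)."""
    return sum(metal_layers_mm) / t_lam_mm

metal = [0.3, 0.3, 0.3]      # aluminium layer thicknesses, mm (assumed)
fiber = [0.25, 0.25]         # glass prepreg layer thicknesses, mm (assumed)
t_lam = sum(metal) + sum(fiber)

m = mvf(metal, t_lam)
print(f"MVF = {m:.3f}")      # ~0.643 for these thicknesses

sigma_metal, sigma_fiber = 345.0, 1200.0   # MPa, placeholder strengths
sigma_lam = m * sigma_metal + (1.0 - m) * sigma_fiber
print(f"estimated laminate strength ~ {sigma_lam:.0f} MPa")
```

For these assumed thicknesses the MVF is about 0.64, which falls inside the 0.45-0.85 range for which, as noted below, the Delft studies found MVF theory predictive.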
Delft University studied the uniaxial tensile properties of GLARE-1 through GLARE-6 and other grades. The results show that for 0.45 < MVF < 0.85, MVF theory predicts the tensile strength well [12]. GLARE laminates also follow the industry-standard classical laminate theory (CLT). Although CLT ignores the plastic deformation of the aluminium alloy sheet after yielding, it reflects the main mechanical behaviour before the aluminium yields and can be used for the relevant mechanical analysis, including calculation of the residual stress generated in the GLARE laminate during curing. During cooling of the cured laminate, the mismatch in thermal expansion coefficients between the aluminium alloy layers and the fiber layers produces residual stresses between the constituents, which significantly affect the mechanical properties of the laminate [13]. Figure 6 uses CLT to illustrate the effect of these residual stresses on GLARE laminates.

Status of domestic research
Based on research into fiber reinforced metal laminates, Tao Jie used Yan Huigeng's graphic method to determine the matching of the fiber metal reinforced tube and, assuming the expansion joint is in a plane stress state, derived from the Lamé formula the minimum internal pressure required for expansion. KUL CAN obtained the optimum pressure curve for the expansion process through Dynaform and found an axial feed rate of 8 mm/s to be most favourable for forming. Building on this theory, Dai Qijun studied the preparation and forming properties of Ti/CF/PEEK/Ti composite layer tubes, pointing out that the critical bulging pressure of the composite crown is 7.9 MPa and that the inner titanium tube strength must be lower than that of the outer titanium tube; deep drawing tests on the Ti/CF/PEEK/Ti tube established its deep drawing performance. In addition, KUL CAN carried out axial compression energy absorption experiments on the GLARE layer tube and reported its compression curve [14]. Figure 7 shows the internal pressure loading curve.

Interlayer "viscous" effect test
Compared with conventional adhesive bonding, the semi-cured state involves large tangential deformation and a complicated deformation process; in addition to the fluid hydroforming process itself, the interface in semi-cured forming carries a normal pressure that persists over time. To better describe the viscous effect at the metal-prepreg interface in the semi-cured GLARE laminate and tube, the physical simulation test shown in Figure 8 was designed.

Figure 8. Interlayer "viscous" effect test
As shown in Figure 8, a 3+2 GLARE layup is used. The aluminium plates are 3 mm thick, of aluminium alloy 2024-T3, with properties listed in Table 3; the prepreg is glass fiber WP9011 (25 × 25), with properties listed in Table 4. In the middle of the specimen there is a 25 × 25 mm pressure zone, where a constant normal pressure keeps the prepreg tightly bonded to the metal. Because the aluminium plate is thick, its tensile deformation can be neglected in this experiment. Under the axial force, the middle aluminium plate slides continuously in the axial direction. Performing this experiment under different pressures (five specimens per group, averaged) yields the tension-displacement curves shown in Figure 9.

Figure 9. Tensile-displacement curve at different pressures
Figure 9 shows that the curves follow essentially the same law at all pressures: the slope gradually decreases after an initial linear rise, the curve decays roughly exponentially after the peak, and finally settles into a linear trend; moreover, both the peak value and the final linear-segment value are positively correlated with the applied pressure. For any pressure, the relationship between tension and displacement can be represented as in Figure 10. The typical tension-displacement curve divides into four parts: linear region 1, curved region 2, curved region 3, and linear region 4, which can be analysed as follows:
(1) Linear region 1: the main deformation mode of the bonded unit is shear deformation in the elastic range; the bonded unit remains stable without major deformation.
(2) Curved region 2: as the elastic shear deformation grows, the deformation of the bonded unit exceeds the elastic limit and becomes permanent, and the deformation is large; tension and displacement are no longer linearly related.
(3) Curved region 3: as deformation increases further, the tension-displacement curve turns downward; the deformation is now very large and is not recovered on unloading.
(4) Linear region 4: the tension-displacement curve stabilizes and decreases linearly.
It can be seen that once the curve passes its peak, the interlaminar viscous effect degrades, which is favourable for subsequent forming.
Technology development prospects
Composite materials are materials composed of two or more different constituents combined in various ways; they overcome the defects of any single material and exploit the advantages of each, extending the range of applications. Owing to their light weight, high strength, convenient processing, excellent elasticity, and chemical and weather resistance, composite materials are gradually replacing wood and metal alloys and are widely used in automobiles, aerospace, construction, electrical and electronic products, fitness equipment and other areas, growing rapidly in recent years. As a new type of composite material, the fiber reinforced metal layer tube is still at the laboratory development stage. Owing to its good impact resistance, it can be used in aircraft crashworthy structures, aircraft landing buffer systems, automobile body anti-collision devices, airborne material protection, and so on; at the same time, its excellent corrosion resistance suits it to wide use in the chemical industry.

Development prospects in the transportation market
The development potential of composite materials in the transportation market is substantial. Although composite materials are common in the automotive field, they account for only about 1% of the weight of an ordinary vehicle, and the share of fiber-reinforced metal sheet materials is smaller still, so the market potential of fiber-reinforced metal materials is enormous. Regulatory and technological drivers make optimism about the growth of composites reasonable, but adoption remains difficult, and market conditions in the coming years will challenge the industry. Composite materials will be used in aerospace and other industries with urgent lightweighting needs, while still retaining a certain share of the general market. As with other materials, composites are chosen only when the overall cost, weight and performance add up to exceptional value. In automotive and other fields where lightweighting is less pressing, the main obstacle is that processing costs are higher than for ordinary manufacturing materials; declining light-vehicle sales and competition from other materials add to the barriers. Figure 11 shows the Ford GT's extensive use of carbon fiber materials.

Figure 11. Extensive use of carbon fiber materials in the Ford GT
Survey results show that a 10% reduction in vehicle weight yields roughly a 7% improvement in fuel economy. In addition, lighter vehicles require less power to accelerate, so even with smaller, lower-consumption engines they retain good drivability. Original equipment manufacturers are therefore investing in lightweight materials to produce vehicles that are economical yet enjoyable to drive. Composite materials, here the fiber reinforced metal tube, will be used more in automotive structural design because of their outstanding performance relative to other materials.

Development prospects in the aerospace market
At present, the aerospace industry is one of the main application fields of carbon fiber, mainly because of carbon fiber's high strength and light weight: compared with aluminium or steel, carbon fiber can reduce weight by 20% to 40%. Figure 12 shows a carbon fiber aviation interior.

Figure 12. Carbon fiber aviation interior
In the aerospace industry, carbon fiber is mainly used for aircraft structural materials (about 40% of aircraft weight), so its use is comprehensive.
It can reduce the weight of the aircraft by 6% to 12%, which significantly reduces fuel costs. In the aerospace industry, carbon fiber composites were first used in the manufacture of satellite antennas and satellite supports, and because of their heat and fatigue resistance they have also been widely used in solid rocket engine casings and nozzles. Tubular aircraft structures such as the fuselage, oil pipelines and landing gear can use this product to improve strength while reducing mass, making it a good choice.

Conclusions
1. A new preparation method for the metal fiber tube has been studied and tubes have been successfully formed. The results show that the tube has higher specific strength, specific modulus, fatigue performance and corrosion resistance than common composite materials and homogeneous metal components.
2. Hydroforming is an effective route. For hollow structural parts of variable cross-section, the traditional manufacturing process is to stamp two halves and then weld them into a whole, whereas hydroforming can form hollow variable-cross-section parts in one operation. Compared with the stamping-welding process, hydroforming technology has the following main advantages:
(1) Reduced weight and material savings: for hollow shaft-type parts, weight can be reduced by 40%-50%.
(2) Fewer parts and moulds, lowering mould cost: hydroformed parts usually need only one set of dies, while stamped parts mostly need multiple sets.
(3) Less subsequent machining and assembly welding.
(4) Improved strength and stiffness, especially fatigue strength.
(5) Lower production cost: statistics for parts already produced by hydroforming show costs on average 15%-20% lower than for stamped parts, with die costs 20%-30% lower.
3. Hydroforming can greatly improve the interface properties. Hydraulic expansion of GLARE composite pipes is a mechanical connection realized through the residual stress caused by the different deformations of the inner and outer pipes before solidification. The basic principle is that the two ends of the pipe are sealed by left and right sealing devices; high-pressure liquid enters the inner pipe through the sealing device and plastically deforms the inner pipe under the fluid surface force. When the internal pressure is unloaded, the springback of the outer pipe is greater than that of the inner pipe, and the residual contact stress at the interface mechanically bonds the two layers of pipe. As a plastic forming method, bulge forming of the composite tube greatly improves material utilization efficiency and forming accuracy. In addition, the hydraulically expanded composite pipe has a uniform expansion force, which can be calculated from the set parameters and the mechanical properties of the two metal pipes, a uniform wall thickness distribution, and high surface quality.
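The springback-mismatch mechanism described above can be quantified with thin-ring elasticity. The sketch below (Python) estimates the residual contact pressure left between two tubes after unloading, treating both as thin elastic rings that share a radial interference equal to the difference in elastic recovery; all dimensions, moduli and the assumed interference are placeholder values, not measurements from this work.

```python
# Residual contact pressure between inner and outer tube after unloading,
# thin-ring approximation: an interference delta shared between two rings gives
#     p = delta / (r^2 * (1/(E_i*t_i) + 1/(E_o*t_o))),
# since a thin ring of radius r and thickness t under pressure p changes
# radius by u = p * r^2 / (E * t).

def contact_pressure(delta_mm: float, r_mm: float,
                     E_i_mpa: float, t_i_mm: float,
                     E_o_mpa: float, t_o_mm: float) -> float:
    compliance = 1.0 / (E_i_mpa * t_i_mm) + 1.0 / (E_o_mpa * t_o_mm)
    return delta_mm / (r_mm ** 2 * compliance)

# Hypothetical pair: aluminium inner tube, steel outer tube, 25 mm radius,
# 0.05 mm residual interference from the springback difference (assumed).
p = contact_pressure(delta_mm=0.05, r_mm=25.0,
                     E_i_mpa=70_000.0, t_i_mm=1.5,
                     E_o_mpa=210_000.0, t_o_mm=1.5)
print(f"residual contact pressure ~ {p:.2f} MPa")   # ~6 MPa for these values
```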
On multiprocessor temperature-aware scheduling problems
We study temperature-aware scheduling problems under the model introduced in [Chrobak et al. AAIM 2008], where unit-length jobs of given heat contributions and common release dates are to be scheduled on a set of parallel identical processors. We consider three optimization criteria: makespan, maximum temperature and (weighted) average temperature. On the positive side, we present polynomial time approximation algorithms for the minimization of the makespan and the maximum temperature, as well as optimal polynomial time algorithms for minimizing the average temperature and the weighted average temperature. On the negative side, we prove that there is no approximation algorithm of absolute ratio 4/3 − ε for the problem of minimizing the makespan for any ε > 0, unless P = NP.

Introduction
The exponential increase in the processing power of recent (micro)processors has led to an analogous increase in the energy consumption of computing systems of any kind, from compact mobile devices to large scale data centers. This has also led to vast heat emissions and high temperatures affecting the processors' performance and reliability. Moreover, high temperatures reduce the lifetime of chips and may permanently damage the processors. For this reason, manufacturers have set appropriate temperature thresholds for their processors and use cooling systems to control the temperature below these thresholds. However, the energy consumption and heat emission of these cooling systems have to be added to that of the whole system.

The issues of energy and thermal management at the (micro)processor and system design levels date back to the first computer systems. During the last few years these issues have also been addressed at the operating system's level, generating new interesting questions. In this context the operating system has to decide the order in which the jobs should be scheduled so that the system's temperature (and/or energy consumption) remains as low as possible, while at the same time some standard user or system oriented criterion (e.g., makespan, response time, throughput) is optimized. Clearly, the minimization of the temperature and the optimization of the scheduling criteria are typically in conflict, and several models have been proposed in the literature to analyze such conflicts and trade-offs. A first model is based on the speed-scaling technique for energy saving and Newton's law of cooling; see for example Bansal et al. (2007) and Atkins et al. (2011), as well as recent reviews on speed-scaling in Irani and Pruhs (2005) and Albers (2010, 2011).
In another model, proposed in Zhang and Chatha (2007), a thermal RC circuit is utilized to capture the temperature profile of a processor. In this study, we adopt the simplified model for cooling and thermal management introduced by Chrobak et al. (2008), who were motivated by Yang et al. (2008). In particular, they consider a set of unit-length jobs (corresponding to slices of the processes to be scheduled), each one with a given heat contribution, and model the thermal behavior of the system as follows: if a job of heat contribution h is executed on a processor within a time interval [t − 1, t), t ∈ N, and the temperature of the processor at time t − 1 is Θ_{t−1}, then the processor's temperature at time t is (Θ_{t−1} + h)/2. Although in practice the heat contribution of the executed jobs and the cooling effect are spread over time (Zhou et al. 2010), the authors in Chrobak et al. (2008) consider the above simplified discrete model, in which the heat contribution of the job to be executed is first added to the current temperature and then this sum is halved, in order to take into account the cooling effect.

In Chrobak et al. (2008), the authors study the problem of scheduling a set of unit-length jobs with release dates and deadlines on a single processor so as to maximize the throughput, i.e., the number of jobs that meet their deadlines, without exceeding a given temperature threshold θ at any time t ∈ N. Extending the well-known three-field notation for scheduling problems (Graham et al. 1979), this problem is denoted as 1|r_i, p_i = 1, h_i, θ|∑U_i. They prove that this problem is NP-hard even for the special case where all jobs are released at time 0 and their deadlines are equal, i.e., 1|p_i = 1, h_i, θ|∑U_i. Furthermore, in the presence of release dates and deadlines it is shown that a family of reasonable list scheduling algorithms, including the coolest first and earliest deadline first algorithms, have a competitive ratio of at most two. This result also implies an approximation factor of two for the off-line problem. On the negative side, they give an instance showing that no deterministic on-line algorithm has a competitive ratio less than two. The same model has also been adopted by Birks et al. (2010, 2011a, b), where online algorithms for several generalizations of the throughput maximization problem have been studied. In fact, in Birks et al. (2010) the cooling effect is generalized by multiplying the temperature by 1/c, where c > 1, instead of one half. In Birks et al. (2011a) the weighted throughput objective is considered, while in Birks et al. (2011b) the jobs have equal (non-unit) processing times.

Our problems and results. Under the thermal model of Chrobak et al. (2008), we initiate the study of scheduling a set J = {J_1, J_2, . . . , J_n} of n jobs on a system of m identical processors, unlike the previous works that study only single processor systems. All jobs have common release dates and unit processing times, and for each one of them we are given a heat contribution h_i, 1 ≤ i ≤ n. Let h_max = max{h_i : J_i ∈ J} be the maximum heat contribution among all jobs. We consider each job J_i executed in a time interval [t − 1, t), t ∈ N, which we call slot t, on some processor. By Θ^j_t we denote the temperature of processor j at time t.
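To make the thermal model concrete, the short simulation below (Python) applies the update Θ_t = (Θ_{t−1} + h)/2 slot by slot on one processor, with an idle slot encoded as heat 0; the heat values are an arbitrary illustration, not an instance from the paper.

```python
# Simulation of the discrete thermal model: executing a job of heat
# contribution h in slot t updates the temperature as
#     theta_t = (theta_{t-1} + h) / 2,
# and an idle slot corresponds to h = 0 (the temperature simply halves).

def simulate(heats):
    """Return the temperature after each slot, starting from theta_0 = 0."""
    theta, trace = 0.0, []
    for h in heats:
        theta = (theta + h) / 2.0
        trace.append(theta)
    return trace

# Example: three hot jobs, then an idle slot, then a cool job.
print(simulate([2.0, 1.5, 1.0, 0.0, 0.4]))
# -> [1.0, 1.25, 1.125, 0.5625, 0.48125]
```

Note that this trace exceeds 1 in the second and third slots; in the threshold model defined next, with θ = 1, such a schedule would be infeasible, while in the optimization model it is allowed and simply scored by its temperatures.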
As in Chrobak et al. (2008), if we start executing job J_i at time t − 1 on processor j, then Θ_t^j = (Θ_{t−1}^j + h_i)/2. The initial temperature of each processor (the ambient temperature) is considered to be zero, i.e., Θ_0^j = 0. In what follows, we simplify the notation using Θ_t instead of Θ_t^j when the processor is specified by the context. We consider two natural variants of the above model: The threshold thermal model. In this model, a given threshold θ on the temperature of the processors cannot be violated at any time t ∈ N. This is the case with the throughput maximization problems studied in Chrobak et al. (2008) and Birks et al. (2010, 2011a). It is clear that, for a given instance in this model, a feasible schedule may exist only if h_i ≤ 2·θ for each job J_i. By normalizing the values of the h_i's and θ we can assume w.l.o.g. that 0 < h_i ≤ 2 and θ = 1, as in Chrobak et al. (2008). Moreover, if a processor at time t − 1 has temperature Θ_{t−1} and it holds that (Θ_{t−1} + h_i)/2 > 1 for every job J_i that has not yet been scheduled, then this processor will remain idle for the slot [t − 1, t) and its temperature at time t will be reduced by half, i.e., Θ_t = Θ_{t−1}/2. Note also that once a processor has executed some job(s), its temperature will never become exactly zero. Therefore, in this model, a feasible instance cannot contain more than m jobs of heat contribution equal to 2, as there are m slots with Θ_0 = 0 (the first slots on each one of the m available processors). Under this model we study the makespan minimization problem, that is P|p_i = 1, h_i, θ|C_max. The optimization thermal model. In this model, no explicit threshold on the processors' temperature is given. The lack of such a threshold is counterbalanced by studying the problems of minimizing the maximum and average temperature of a schedule. For any instance in this model, any schedule of length at least ⌈n/m⌉ is feasible, independently of the range of the jobs' heat contributions. However, the optimum value of our objectives depends on the time available to execute the given set of jobs: the maximum or average temperature of a schedule of length equal to ⌈n/m⌉ is, clearly, greater than that of a schedule of longer length, where we are allowed to introduce idle slots. In what follows, we are interested in minimizing these two objective functions with respect to a given schedule length (makespan or deadline) of d ≥ ⌈n/m⌉. Such a schedule will contain md − n idle slots and we can consider them as executing md − n fictitious jobs of heat contribution equal to zero. This length d is part of our problems' instances, denotes the time available to complete the execution of all the jobs, and represents the need to complete them within a given time at the price of higher temperatures. Thus, in both problems we consider under this model (minimizing the maximum and the average temperature) we are accounting for the temperatures at the end of any of the md slots available on the m processors. The problems of minimizing the maximum and the average temperature we consider under this model are denoted by P|p_i = 1, h_i, d|Θ_max and P|p_i = 1, h_i, d|ΣΘ_t^j, respectively. The complexity of our problems is strongly related to the complexity of the throughput maximization problem studied in Chrobak et al. (2008). It is already mentioned in Chrobak et al. (2008) that the NP-hardness of the maximum throughput problem of scheduling jobs with common release dates and deadlines on a single processor, 1|p_i = 1, h_i, θ|ΣU_i, implies the NP-hardness of our makespan minimization problem 1|p_i = 1, h_i, θ|C_max.
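To make the cooling rule and the role of idle slots concrete, here is a minimal simulation sketch for a single processor under the threshold model. It is our illustration, not code from the paper; the function name, the example heat contributions and the fixed job order are arbitrary choices.

```python
# Minimal simulation of the threshold thermal model (illustrative sketch).
# Unit-length jobs with heat contributions 0 < h_i <= 2; threshold theta = 1.

def simulate_single_processor(heats, theta=1.0):
    """Run the jobs in the given order on one processor; insert an idle
    slot whenever executing the next job would exceed theta.
    Returns (makespan, trace of (slot, action, temperature))."""
    # A job with h = 2 fits only at a slot of temperature exactly zero,
    # i.e., the very first slot (cf. the observation in the text).
    assert all(h < 2.0 for h in heats[1:]), "h = 2 fits only in the first slot"
    temp = 0.0                 # ambient temperature Theta_0 = 0
    trace = []
    pending = list(heats)
    slot = 0
    while pending:
        slot += 1
        h = pending[0]
        if (temp + h) / 2.0 <= theta:   # Theta_t = (Theta_{t-1} + h)/2
            temp = (temp + h) / 2.0
            trace.append((slot, "run h=%.3f" % h, temp))
            pending.pop(0)
        else:                           # idle: Theta_t = Theta_{t-1}/2
            temp /= 2.0
            trace.append((slot, "idle", temp))
    return slot, trace

if __name__ == "__main__":
    makespan, trace = simulate_single_processor([2.0, 1.9, 0.5])
    for entry in trace:
        print(entry)
    print("makespan =", makespan)
```

Running the example shows the effect discussed above: after the first hot job the temperature is 1, and several idle slots are needed before a job of heat contribution 1.9 can be executed.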
In fact, the decision version of the latter problem asks for the existence of a feasible schedule where all jobs complete their execution by some given deadline d. Moreover, the decision version of the maximum temperature problem on a single processor, 1|p_i = 1, h_i, d|Θ_max, asks for the existence of a schedule where all jobs complete their execution by some given deadline d without exceeding a given temperature threshold θ. Therefore, the same reduction gives NP-hardness for both the makespan and the maximum temperature minimization problems. The NP-hardness for our problems on an arbitrary number of parallel processors follows trivially. Given these NP-hardness results, in this paper we focus on approximation algorithms and inapproximability results for the above-mentioned problems, under the threshold and optimization thermal models, for the case of multiple processors. We start in Sect. 2 with the problem P|p_i = 1, h_i, θ|C_max of minimizing the schedule length (makespan) in the threshold thermal model. We first prove that this problem cannot be approximated within an absolute ratio less than 4/3. Then, we present a generic algorithm of approximation ratio 2ρ, where ρ is the approximation ratio of an algorithm A for the classical makespan problem on parallel machines, used as a subroutine in our algorithm. This leads to a (2 + ε)-approximation ratio within a running time that is polynomial in n but exponential in 1/ε for m processors (using the known PTAS's for minimizing makespan), and a 2-approximation ratio for a single processor, within O(n log n) time. If in the place of algorithm A we use the standard LPT (4/3 − 1/(3m))-approximation algorithm, we are able to give a tighter analysis, improving the 2ρ-approximation ratio to 7/3 − 1/(3m), while the overall running time is O(n log n). Then, in Sects. 3 and 4, we move to the optimization thermal model. In Sect. 3, we study the problem P|p_i = 1, h_i, d|Θ_max of minimizing the maximum temperature of a schedule, and we give a 4/3-approximation algorithm. In Sect. 4, we prove that the problem P|p_i = 1, h_i, d|ΣΘ_t^j of minimizing the average temperature of a schedule, as well as a time-dependent weighted version of this problem, are both solvable in polynomial time. We conclude in Sect. 5. Makespan minimization In this section, we study the approximability of makespan minimization under the threshold thermal model, that is, P|p_i = 1, h_i, θ|C_max. We start with a negative result on the approximability of our problem. The proof of the next theorem is along the same lines with the NP-hardness reduction for the throughput maximization problem under the same model (Chrobak et al. 2008). Theorem 1 There is no polynomial time algorithm achieving an absolute approximation ratio better than 4/3 for the minimum makespan problem P|p_i = 1, h_i, θ|C_max, unless P = NP. Proof We give a reduction from Numerical 3-Dimensional Matching (N3DM), where we are given three sets A, B, C of n integers each and an integer β, and the question is whether A ∪ B ∪ C can be partitioned into n disjoint triples (a, b, c) ∈ A × B × C such that each triple contains exactly one integer from each of A, B, C, and a + b + c = β for each triple. W.l.o.g., we assume that Σ_{x∈A∪B∪C} x = βn and x ≤ β for each x ∈ A ∪ B ∪ C. The N3DM problem is known to be NP-complete (see Garey and Johnson (1979)).
Given an instance I of N3DM, we construct an instance I′ of P|p_i = 1, h_i, θ|C_max consisting of n processors and 3n jobs, one for each integer in A ∪ B ∪ C. The reduction works by showing that it is hard to decide whether the optimal schedule is of length three or not. Claim There is a N3DM for instance I if and only if there is a feasible schedule for the instance I′ of P|p_i = 1, h_i, θ|C_max of length three. (⇒) Given a solution of N3DM consisting of the triples (a_i, b_i, c_i), 1 ≤ i ≤ n, we schedule in the i-th processor the jobs corresponding to a_i, b_i and c_i in the first, second, and third slots, respectively. For the temperatures Θ_{a_i}, Θ_{b_i}, Θ_{c_i} of the i-th processor after each one of those executions, a direct calculation shows that the threshold θ = 1 is never exceeded, and hence there is a feasible schedule of length three. (⇐) Assume, now, that there is a feasible schedule of length three. In this schedule there are exactly three jobs in each processor, since there are 3n jobs in total. If a job corresponding to an integer a ∈ A is scheduled to the second slot of a processor, then the temperature threshold θ = 1 is violated after the third slot of this processor, since the temperature at this slot can be shown to exceed one. In a similar way, we can show that a job corresponding to an integer a ∈ A cannot be scheduled to the third slot of a processor. Hence, each of the n jobs corresponding to one of the n integers a ∈ A is scheduled to the first slot of a processor. Moreover, we can show that a job corresponding to an integer b ∈ B cannot be scheduled to the third slot of a processor. In all, in each processor exactly three jobs are scheduled: a job a ∈ A in the first slot, a job b ∈ B in the second slot, and a job c ∈ C in the third slot. Therefore, the jobs of a processor correspond to a feasible triple for N3DM. To finish our proof, we have to show that each triple sums up to β. If this does not hold, then there is a triple (a, b, c) for which a + b + c > β, since Σ_{x∈A∪B∪C} x = βn. The temperature after the third slot of the processor in which the jobs corresponding to this triple are scheduled then exceeds the threshold, which contradicts the feasibility of the schedule. This completes the proof of Theorem 1, since an approximation ratio better than 4/3 would be able to decide the N3DM problem. Note that the result of Theorem 1 allows the possibility of an asymptotic PTAS or even an additive constant approximation ratio. In what follows in this section, we present an approximation algorithm for the minimum makespan problem. Note that, in order to respect the temperature threshold, a schedule may have to contain idle slots. To argue about the number of idle slots that are needed before the execution of each job, we will first introduce an appropriate partition of the set of jobs according to their heat contributions. In particular, for each integer k ≥ 0, we can argue separately for jobs whose heat contribution belongs to the interval (2 − 1/2^{k−1}, 2 − 1/2^k]; recall that h_i ≤ 2, for 1 ≤ i ≤ n. Moreover, the interval to which a job of heat contribution h_i belongs is indexed by k_i, that is, k_i is the unique integer with h_i ∈ (2 − 1/2^{k_i−1}, 2 − 1/2^{k_i}], i.e., k_i = ⌈−log_2(2 − h_i)⌉, with k_i = 0 for h_i ≤ 1. Our algorithm and its analysis are based on the following proposition for the structure of any feasible schedule. Proposition 1 Let J′ be the set of the n′ jobs with heat contribution greater than one. (i) Any feasible schedule can be transformed, without increasing its length, into a feasible schedule in which min{n′, m} jobs of J′ are executed in the first slots of the processors. (ii) For every job J_i, k_i idle slots before its execution always suffice for the temperature threshold to be respected. (iii) After the execution of a job J_j with h_j > 1, at least k_i slots, each being idle or executing a job of heat contribution at most one, are needed before a job J_i can be executed on the same processor. Proof (i) Consider a feasible schedule that has less than min{n′, m} jobs of J′ executed in the first slot of the processors. Assume, first, that in this schedule there is a processor, p, in which a job J_i ∈ J \ J′ is executed in its first slot and there is at least one job of J′ executed in p. Let J_j ∈ J′ be the earliest of these jobs, executed in slot s > 1.
By swapping the jobs J_i and J_j, the temperature Θ_s of processor p after slot s is decreased. Indeed, let Θ_s be the temperature of processor p after slot s and let Λ be the contribution of the jobs executed in slots 2, 3, . . . , s − 1 to Θ_s, that is, Θ_s = h_i/2^s + Λ + h_j/2. After the swap it holds that Θ′_s = h_j/2^s + Λ + h_i/2 < Θ_s, since h_i < h_j. Thus, the temperature of any slot s′ ≥ s in p is decreased. Moreover, by assumption, each slot s′, 2 ≤ s′ ≤ s − 1, of p executes a job in J \ J′. Hence, no new idle slots are required for these jobs, although the temperature before their execution is increased. Therefore, the new schedule is feasible and it has the same length. If there is no such processor, then let J_i ∈ J \ J′ be a job executed in the first slot of some processor p and J_j ∈ J′ be a job executed in the s-th, s > 1, slot of a processor q. By swapping the jobs J_i and J_j the temperature of any slot s′ ≥ s of processor q is decreased, as h_i < h_j. Moreover, by assumption, the processor p contains only jobs in J \ J′, and, as in the previous case, no new idle slots are required for these jobs. Therefore, after the swap we get a feasible schedule of the same length. (ii) Consider a schedule that is feasible up until the execution of the job preceding J_i. Let x be the number of idle slots before the execution of job J_i and let Θ be the temperature of the processor before the first of these x slots. Since the schedule is feasible before J_i, we have that Θ ≤ 1. The temperature will become Θ/2^x after the last idle slot, and (Θ/2^x + h_i)/2 after the execution of job J_i. For such a schedule to be feasible we need that (Θ/2^x + h_i)/2 ≤ 1, which, since Θ ≤ 1, holds whenever x ≥ k_i. This means that with at least k_i idle slots, feasibility is ensured. (iii) Let Θ_t be the temperature of the processor before executing J_j. Then, after the execution of J_j we have Θ_{t+1} = (Θ_t + h_j)/2. Then, after x slots (idle or executing jobs of heat contribution h ≤ 1) we get a temperature Θ_{t+x+1} ≥ (Θ_t + h_j)/2 · 1/2^x. In order for J_i to be executed in the next slot, it should hold that Θ_{t+x+1} ≤ 2 − h_i, which, as h_j > 1, implies that x ≥ k_i. In what follows we consider instances with n > m, for otherwise the problem becomes trivial. By Proposition 1(i), we also assume that the number of jobs of heat contribution h_i > 1 is greater than m. If this is not the case, all jobs can be executed without any idle slot before them and the length of an optimal schedule is exactly ⌈n/m⌉. We consider the jobs in non-increasing order of their heat contributions, i.e., h_1 ≥ h_2 ≥ . . . ≥ h_n, and we define A = {J_1, J_2, . . . , J_m} and B = {J_{m+1}, J_{m+2}, . . . , J_n}. Our algorithm schedules first the jobs in A to the first slot of each processor. Each one of the jobs in B is scheduled by leaving before its execution exactly k_i idle slots, according to Proposition 1(ii). In this way, our problem, for the jobs in B, is transformed to an instance of the classical makespan problem on parallel machines, P||C_max, where the processing time of each job is p_i = k_i + 1, that is, k_i idle slots plus its original unit processing time. Then, these jobs are scheduled using any known approximation algorithm A for P||C_max. From now on we fix an instance of our problem and we denote by SOL the length of the schedule S provided by Algorithm MAX_C and by OPT the length of an optimal schedule S* for our original scheduling problem.
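A compact sketch of Algorithm MAX_C as we read the prose description follows. The paper's numbered listing of MAX_C is not reproduced in this extraction, so this is our reconstruction: the function names are ours, and Graham's LPT rule stands in for the generic subroutine A.

```python
# Sketch of Algorithm MAX_C (our reconstruction from the prose; LPT is
# used as the P||Cmax subroutine A). Heat contributions satisfy h < 2
# for the jobs in B on feasible non-trivial instances.
import math

def k_of(h):
    """Idle slots that always suffice before a job of heat h
    (Proposition 1(ii)): k = ceil(-log2(2 - h)), and k = 0 for h <= 1."""
    if h <= 1.0:
        return 0
    return math.ceil(-math.log2(2.0 - h))

def lpt(processing_times, m):
    """Graham's LPT list scheduling for P||Cmax; returns machine loads."""
    loads = [0] * m
    for p in sorted(processing_times, reverse=True):
        loads[loads.index(min(loads))] += p   # first least-loaded machine
    return loads

def max_c(heats, m):
    """Hottest m jobs go to the first slots; every other job J_i becomes
    a block of k_i idle slots plus one execution slot (instance I_B^+)."""
    jobs = sorted(heats, reverse=True)
    B = jobs[m:]                               # A = jobs[:m] fills slot 1
    blocks = [k_of(h) + 1 for h in B]
    return 1 + max(lpt(blocks, m))             # SOL = 1 + C(I_B^+)

if __name__ == "__main__":
    print(max_c([2.0, 2.0, 1.9, 1.5, 0.7, 0.7], m=2))
```

The returned value mirrors the identity SOL = 1 + C(I_B^+) used in the analysis below; swapping `lpt` for a PTAS would give the (2 + ε) guarantee of Corollary 1.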
For the presentation and the analysis of our algorithm, we denote by I_B and I_B^+ the instances of P||C_max consisting only of the jobs in B with processing times p_i = k_i and p_i = k_i + 1, respectively, for each J_i ∈ B. For an instance I of P||C_max, we denote by S(I) the schedule found by an algorithm A and by C(I) the length of this schedule. In a similar way, we denote by S*(I) and C*(I) an optimal schedule for P||C_max and the length of this optimal schedule, respectively. Clearly, SOL = 1 + C(I_B^+). To analyze our Algorithm MAX_C, we need a lower bound on the optimal makespan. To derive this bound we will utilize an optimal schedule S*(I_B). Note that for jobs with h_i ∈ (0, 1], k_i = 0, hence the schedule S*(I_B) involves only jobs for which h_i > 1. Lemma 1 For the optimal makespan it holds that OPT ≥ max{⌈n/m⌉, 1 + C*(I_B)}. The first bound on the optimal makespan follows trivially by considering all jobs requiring a single slot for their execution. For the second bound, let A*, |A*| = m, be the set of jobs executed in the first slot of the m processors in an optimal solution and B* = J \ A*. Consider, first, an auxiliary schedule of length OPT^−, identical to the optimal one apart from the fact that each job in B* ∩ A has been replaced by a different job in A* ∩ B. Observe that in this schedule, the jobs executed in the first slot of the processors remain those of A*, while the jobs executed in the remaining slots are the jobs in B. Since each job in B has smaller or equal heat contribution than any job in A, it follows that OPT ≥ OPT^−. Consider, next, the schedule S*(I_B). For this schedule it holds that OPT^− ≥ 1 + C*(I_B), since by Proposition 1(i),(iii) each job in B requires at least k_i slots to be executed; recall that we consider instances where the number of jobs of heat contribution h_i > 1 is greater than m, and that jobs in B with h_i ≤ 1, and hence k_i = 0, do not appear in the schedule S*(I_B). It is well known that the P||C_max problem is strongly NP-hard and a series of constant approximation algorithms and PTASs have been proposed. Our main result in this section is that in step 4 of Algorithm MAX_C we can use any algorithm A for P||C_max to obtain twice the approximation ratio of A for our problem. Theorem 2 Algorithm MAX_C achieves a 2ρ approximation ratio for P|p_i = 1, h_i, θ|C_max, where ρ is the approximation ratio of the algorithm A for P||C_max. Proof A ρ-approximation algorithm A implies that C(I_B^+) ≤ ρ · C*(I_B^+). To obtain an upper bound on C*(I_B^+) we start from the schedule S*(I_B). The processing times of the jobs in the latter schedule are reduced by one with respect to the former one, and the jobs in B with h_i ≤ 1 do not appear in the schedule S*(I_B). Let B′ ⊆ B be this set of jobs. We transform the schedule S*(I_B) to a new schedule S′(I_B^+) in two successive steps: (i) we increase the processing time of jobs in B \ B′ from k_i to k_i + 1, and (ii) we introduce the jobs in B′, with unit processing time, at the end of the resulting schedule in a first-fit manner. Clearly, for the length C′(I_B^+) of this new schedule it holds that C*(I_B^+) ≤ C′(I_B^+), as both of them refer to the same instance I_B^+. Let us now bound C′(I_B^+) in terms of C*(I_B). To this end, we consider the construction of S′(I_B^+) and we argue about the completion time of a critical processor in S*(I_B), i.e., the processor that finishes last.
By step (i), the length of the schedule S*(I_B) increases at most by a factor of two, since each job in B \ B′ has processing time at least one and this is increased by one. Combining the above inequalities with Lemma 1 yields SOL ≤ 2ρ·OPT, which completes the proof. For the case of a single processor the 1||C_max problem is trivially polynomial, whereas for multiple processors there are well-known PTAS's, e.g., Hochbaum and Shmoys (1987); Alon et al. (1998). Hence, the main implication of Theorem 2 is Corollary 1 For any ε > 0, there is a (2 + ε)-approximation algorithm for P|p_i = 1, h_i, θ|C_max. For the case of a single processor, there is an algorithm that achieves an approximation ratio of 2. To obtain the ratio of 2 + ε, as stated above, one needs to use a PTAS for the classical makespan problem in step 4 of Algorithm MAX_C, resulting in a running time that is exponential in 1/ε. To achieve more practical running times, we can investigate the use of other algorithms for step 4. In particular, if the standard Longest Processing Time (LPT) algorithm is used, then Theorem 2 leads to a 2(4/3 − 1/(3m)) approximation ratio within O(n log n) time. Recall that the LPT algorithm greedily assigns the next job (in non-increasing order of processing times) to the first available processor (Graham 1969). In the next theorem we are able to improve this ratio to 7/3 − 1/(3m), based on an LPT oriented analysis of Algorithm MAX_C. Theorem 3 Algorithm MAX_C using the LPT rule in step 4 achieves an approximation ratio of 7/3 − 1/(3m). Proof Our proof follows the standard analysis given in Graham (1969) for the classical multiprocessor scheduling problem. For the lower bound on the length of an optimal schedule we use Lemma 1 and the fact that C*(I_B) ≥ (1/m)·Σ_{J_i∈B} k_i. To upper bound the length SOL of the schedule S returned by Algorithm MAX_C, we consider the job J_ℓ which finishes last in S. Clearly ℓ > m, for otherwise there are at most m jobs to be scheduled and the problem becomes trivial. The job J_ℓ will start being executed not later than (1/m)·Σ_{J_i∈B, i≠ℓ}(k_i + 1) time units after the first slot, and hence it holds that SOL ≤ 1 + (1/m)·Σ_{J_i∈B}(k_i + 1) + (1 − 1/m)(k_ℓ + 1). Thus, we get SOL ≤ 2·OPT − 1 + (1 − 1/m)(k_ℓ + 1). If k_ℓ ≤ OPT/3, then the theorem follows directly. If k_ℓ > OPT/3, then we consider the subinstance, I′, of the original problem that contains only the jobs of heat contribution at least h_ℓ, i.e., J′ = {J_1, J_2, . . . , J_ℓ}. Obviously, k_ℓ ≥ 1, as k_ℓ is an integer and k_ℓ > OPT/3 > 0. Moreover, for the length of an optimal schedule, C*(I′), of the subinstance I′ it holds that C*(I′) ≤ OPT. As ℓ > m, the lengths of the schedules returned by Algorithm MAX_C for instances I and I′ are equal, i.e., C(I′) = SOL. Hence, SOL/OPT ≤ C(I′)/C*(I′). In an optimal schedule of I′ there are at most three jobs in each processor, for otherwise, if there is a processor with four assigned jobs, the length of that schedule would be, by Proposition 1(iii), at least 1 + 3k_ℓ > OPT, a contradiction. Hence, ℓ ≤ 3m. Algorithm MAX_C schedules the jobs of I′ as follows: the job J_i, 1 ≤ i ≤ m, is scheduled to the first slot of processor i, the job J_{m+i}, 1 ≤ i ≤ m, to the (1 + (k_{m+i} + 1))-th slot of processor i, and the job J_{2m+i}, 1 ≤ i ≤ m, according to the LPT rule. If m < ℓ ≤ 2m, then the length of the above schedule is C(I′) = 1 + (k_{m+1} + 1) = 2 + k_{m+1}. By Lemma 1 it follows that C*(I′) ≥ 1 + k_{m+1}, since there is a processor executing at least two jobs in {J_1, J_2, . . . , J_{m+1}}. Hence, SOL/OPT ≤ (2 + k_{m+1})/(1 + k_{m+1}) ≤ 3/2. If 2m < ℓ ≤ 3m, then the Algorithm MAX_C schedules in the first processor either the jobs J_1 and J_{m+1}, or the jobs J_1, J_{m+1} and J_ℓ.
In the first case, the job J_ℓ starts its execution not later than the slot 1 + (k_{m+1} + 1), for otherwise J_ℓ would have been scheduled by Algorithm MAX_C in processor 1; that is, C(I′) ≤ 1 + (k_{m+1} + 1) + (k_ℓ + 1). In the second case, J_ℓ is the job that finishes last, that is, C(I′) = 1 + (k_{m+1} + 1) + (k_ℓ + 1). Thus, in both cases it holds that C(I′) ≤ 3 + k_{m+1} + k_ℓ. For an optimal schedule for I′, Lemma 1 implies as before that C*(I′) ≥ 1 + k_{m+1}. Moreover, in such a schedule there is a processor with at least three jobs, and hence C*(I′) ≥ 1 + 2k_ℓ. Combining these two bounds we get C*(I′) ≥ 1 + k_{m+1}/2 + k_ℓ. Therefore, we get SOL/OPT ≤ C(I′)/C*(I′) ≤ (6 + 2k_{m+1} + 2k_ℓ)/(2 + k_{m+1} + 2k_ℓ). This ratio is decreasing with k_ℓ and as k_ℓ ≥ 1 we get SOL/OPT ≤ (8 + 2k_{m+1})/(4 + k_{m+1}) = 2, and the proof is completed. Note that the (4/3 − 1/(3m))-approximation ratio of the LPT algorithm for the classical makespan problem on parallel machines is tight. Concerning the tightness of our algorithm, we are able to give an instance where it achieves a 2-approximation ratio. This instance consists of m(k + 2) jobs: a set J1 of m jobs of heat contribution h_i = 2, a set J2 of m jobs of heat contribution h_i = 2 − 3/2^{k+1}, and a set J3 of mk jobs of heat contribution h_i = 1/(2(2^k − 1)). An optimal solution for this instance is to schedule the jobs in the following way: every processor executes a job of J1 in the first slot, k jobs of J3 in slots 2, 3, . . . , k + 1, and a job of J2 in slot k + 2. The temperature of every processor after slot k + 1 is 1/2^k + 1/(2(2^k − 1)) · (2^k − 1)/2^k = 3/2^{k+1}, and hence a job of J2 can be executed in slot k + 2. Moreover, as the jobs of J3 have heat contribution h_i ≤ 1, this schedule is feasible. On the other hand, our algorithm schedules in every processor a job of J1 in the first slot, a job of J2 in slot k + 2, and k jobs of J3 in slots k + 3, k + 4, . . . , 2k + 2. Therefore, the ratio achieved by our algorithm is (2k + 2)/(k + 2), which tends to 2 as k grows. Maximum temperature minimization Now, we turn our attention to the optimization thermal model and to the problem of minimizing the maximum temperature, i.e., P|p_i = 1, h_i, d|Θ_max. Recall that, as we discussed in the Introduction, we consider a schedule length d ≥ ⌈n/m⌉ and that n = m · d, by adding the appropriate number of fictitious jobs. Recall also that the maximum is taken over the temperatures at the end of any of the md slots available on the m processors. In the sequel, we will denote by Θ*_max the maximum temperature of an optimal schedule. We start with the observation that any algorithm for this problem achieves a 2-approximation ratio. Indeed, it holds that Θ*_max ≥ h_max/2, no matter how we schedule the job of maximum heat contribution. It also holds that for any algorithm, Θ_max ≤ h_max, with Θ_max being the maximum temperature of the algorithm's schedule. Therefore, Θ_max ≤ 2 · Θ*_max. To improve this trivial ratio we propose the Algorithm MAX_T below, which is based on the intuitive idea of alternating the execution of hot and cool jobs. To elaborate a little more on how the algorithm works, note that processor 1 will be assigned the job J_1, followed by J_n, then followed by J_{m+1}, and then by J_{n−m}, and this alternation of hot and cool jobs will continue till the end of the schedule. Similarly, processor 2 will be assigned the jobs J_2, J_{n−1}, J_{m+2}, J_{n−m−1}, and so on. The schedule is illustrated further in Table 1.
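The alternation just described is easy to state in code. The sketch below is our illustration of the Round-Robin placement (function names are ours; it assumes the input has already been padded to n = m·d jobs with zero-heat fictitious jobs):

```python
# Sketch of Algorithm MAX_T: hottest jobs fill the odd slots, coolest jobs
# fill the even slots, both assigned Round-Robin over the m processors.

def max_t(heats, m, d):
    """heats: n = m*d heat contributions. Returns schedule[j][t], the heat
    executed on processor j in slot t+1."""
    assert len(heats) == m * d
    jobs = sorted(heats, reverse=True)             # h_1 >= ... >= h_n
    n_odd = ((d + 1) // 2) * m                     # odd slots overall
    hot = jobs[:n_odd]                             # order of Step 1
    cool = jobs[n_odd:][::-1]                      # reverse order of Step 1
    schedule = [[None] * d for _ in range(m)]
    hi = ci = 0
    for t in range(d):                             # slot t+1
        for j in range(m):                         # Round-Robin
            if t % 2 == 0:
                schedule[j][t] = hot[hi]; hi += 1
            else:
                schedule[j][t] = cool[ci]; ci += 1
    return schedule

def temperatures(schedule):
    """Per-slot temperatures, Theta_t = (Theta_{t-1} + h)/2, Theta_0 = 0."""
    out = []
    for row in schedule:
        th, tr = 0.0, []
        for h in row:
            th = (th + h) / 2.0
            tr.append(th)
        out.append(tr)
    return out
```

With m = 2, d = 3 and n = 6 jobs, `max_t` reproduces the pattern described above: processor 1 receives J_1, J_6, J_3 and processor 2 receives J_2, J_5, J_4.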
To analyze the Algorithm MAX_T, we start with the proposition below, which is implied by the Round-Robin scheduling of jobs in Steps 2 and 3 of the algorithm. Algorithm MAX_T 1: Sort the jobs in non-increasing order of their heat contributions: h_1 ≥ h_2 ≥ ... ≥ h_n; 2: Using the order of Step 1, schedule the ⌈d/2⌉·m hottest jobs to the odd slots of the processors using Round-Robin; 3: Using the reverse order of Step 1, schedule the ⌊d/2⌋·m coolest jobs to the even slots of the processors using Round-Robin; Average temperature minimization In this section, we look at the problem of minimizing the average temperature, P|p_i = 1, h_i, d|ΣΘ_t^j, instead of the maximum temperature. We will again consider a schedule length d and assume that the number of jobs is n = md. Contrary to the maximum temperature, we show that minimizing the average temperature of a schedule is solvable in polynomial time. Our algorithm is based on the following lemma. Lemma 4 In any optimal solution for the average temperature, jobs are scheduled in a coolest first order, i.e., for any pair of jobs J_i, J_j such that h_i > h_j, scheduled at slots t and t′, respectively, it holds that t′ ≤ t, regardless of the processor they are assigned to. Proof Consider the job J_i to be scheduled at slot t of some processor p in a schedule S. The contribution of job J_i to the temperature of the s-th slot of processor p (with t ≤ s ≤ d) is h_i/2^{s−t+1}, while this job does not affect the temperature of any other slot in any processor. Hence, the contribution of job J_i to the objective function is Σ_{s=t}^{d} h_i/2^{s−t+1} = h_i − h_i·2^t/2^{d+1}. Therefore, the later job J_i is scheduled, the smaller its contribution to the objective function becomes. Assume, now, that in an optimal schedule S* the job J_i is scheduled at slot t of some processor, while the job J_j at slot t′ > t in any processor. By swapping the execution of this pair of jobs the contribution of the job J_i to the objective function decreases by h_i·(2^{t′} − 2^t)/2^{d+1} and the contribution of job J_j increases by h_j·(2^{t′} − 2^t)/2^{d+1}. As h_i > h_j, it follows that the resulting schedule contradicts the optimality of the schedule S*, and this completes the proof of the lemma. The previous lemma leads directly to the next simple algorithm. Algorithm AVR_T 1: Sort the jobs in non-decreasing order of their heat contributions: h_1 ≤ h_2 ≤ ... ≤ h_n; 2: According to this order, schedule the jobs to the processors using Round-Robin; Conclusions We have provided algorithms as well as negative results for various optimization criteria in scheduling under thermal management models. There are many interesting open questions remaining. The most important is to improve the approximation ratio both for the problem of minimizing the makespan and for minimizing the maximum temperature. Also, it would be interesting to generalize our results to the case where the cooling effect is different from one half, as in Birks et al. (2010, 2011a). Towards a different direction, one can also consider other objectives under the threshold thermal model, in line with the objectives that have been studied in the more traditional models of job scheduling. Resolving these questions seems technically more challenging than the classic scheduling problems due to the different nature of the constraints that are introduced by temperature management models.
Note that scheduling problems under the threshold thermal model can be seen as scheduling problems with sequence-dependent setup times; such a setup time for a job corresponds to the idle slots required to respect the temperature threshold. In scheduling problems with setup times (see for example Pinedo (1995)), the setup time of a job usually depends only on the job itself and the previous job in the schedule. However, in our case, the number of idle slots required before executing a job depends on all the jobs scheduled before it, as well as on their order. Hence, existing results from the literature cannot be applied.
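For completeness, the coolest-first rule of Algorithm AVR_T from Sect. 4 admits an equally short sketch. This is our illustration (function names ours); it assumes, as in the text, that zero-heat fictitious jobs have been added so that n = m·d.

```python
# Sketch of Algorithm AVR_T: sort non-decreasingly and assign Round-Robin,
# so the coolest jobs occupy the earliest slots (optimal by Lemma 4).

def avr_t(heats, m, d):
    assert len(heats) == m * d
    jobs = sorted(heats)                  # h_1 <= h_2 <= ... <= h_n
    return [[jobs[t * m + j] for t in range(d)] for j in range(m)]

def average_temperature(schedule):
    """Mean of Theta_t over all m*d slots, Theta_t = (Theta_{t-1} + h)/2."""
    total = count = 0
    for row in schedule:
        th = 0.0
        for h in row:
            th = (th + h) / 2.0
            total += th
            count += 1
    return total / count
```

Comparing `average_temperature(avr_t(...))` against any permuted schedule gives a quick numerical check of Lemma 4's exchange argument.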
Within leaf variation is the largest source of variation in agroinfiltration of Nicotiana benthamiana Transient gene expression utilizing syringe agroinfiltration offers a simple and efficient technique for different transgenic applications. Leaves of Nicotiana benthamiana show reliable and high transformation efficiency, but in quantitative assays also a certain degree of variation. We used a nested design in our agroinfiltration experiments to dissect the sources of this variation. An intron-containing firefly luciferase gene was used as a reporter for agroinfiltration. The top leaves of a number of 6-week-old tobacco plants were infiltrated, several samples were punched from the leaves after 2 days of transient expression, and protein extracts from the samples were repeatedly measured for luciferase activity. Interestingly, most of the variation was due to differences between the sampling spots in the leaves, the next most important source being the different leaves on each plant. Variation between similar experiments, between plants and between repetitive measurements of the extracts could be easily minimized. Efforts and expenditure of agroinfiltration experiments can be optimized when the sources of variation are known. In summary, infiltrate more plants but fewer leaves, sample more positions on the leaf but run only a few technical replicates. Background A wide range of methods and techniques have been used to produce transient gene expression in plant cells for studying promoter activity, gene and protein function, or protein-protein interactions in vivo [1][2][3][4]. Protoplast transformation and particle bombardment date back furthest [5,6] and, in spite of their drawbacks of being time consuming and sometimes inefficient, they are still used because of their benefits [7]. For example, particle bombardment is targeted to intact tissues where different cell and tissue types can be distinguished for the assay. During more recent years, agrobacterium based transient assays have become more and more widely used [8][9][10]. Agrobacterium is the earliest [11,12] and still today often the preferred gene transfer tool to generate stably transformed plants. Agrobacterium interacts with a wide range of plant cells and through a type IV secretion system injects a single stranded DNA molecule into the plant cell, which is subsequently transported to the nucleus, made double stranded and finally integrated into a chromosomal position [13]. Interestingly, genes residing on the transferred DNA (T-DNA) are expressed early during the process and, according to the present view, prior to and independent of the integration event itself [14]. This early expression is transient and is strongly reduced after peaking at ca. 2 days [15]. The fading away of the transient expression is not due to fast degradation of non-integrated T-DNA, but to an active silencing process. Coinfiltration of T-DNA from which viral silencing suppressor proteins are expressed prolongs transient expression by many days, with the highest accumulation levels occurring at around 6 or 7 days post infiltration [16,17]. Agrobacterium based transient gene expression can take place in various tissues [9], but the most commonly used target is the mesophyll of expanded leaves. An agrobacterium suspension can be infiltrated with vacuum or a syringe into the parenchymal airspace, hence the method is referred to as "agroinfiltration". Particularly leaves of Nicotiana benthamiana have proven to be rewarding targets for agroinfiltration. A large fraction of N.
benthamiana mesophyll cells are transformed by agrobacterium, and in extreme cases as much as 50 % [18] of the total soluble leaf protein can be encoded by the transferred gene. This has led to applications where pharmaceutically active proteins are produced by leaf infiltration at a commercially viable scale [19][20][21]. For research, proteins difficult to produce in microbial systems have been expressed in N. benthamiana for their characterization [22][23][24], or to allow their function to take place in plant cells, leading to changes in metabolism that clarify their (enzymatic) roles or to the formation of pharmaceutically or commercially interesting small molecules [25]. In addition to bulk protein production, syringe or vacuum agroinfiltration has been used to study protein-protein interactions and plant promoter function in vivo [1,26]. For quantitative assays, variation originating from biological and technical sources limits the accuracy and statistical power of the assays. Compared to using stably transformed plant lines, transient expression assays already eliminate the variation due to different chromosomal positions and epigenetic states of the transferred genes. Still, plenty of variation remains. In this work, we address the source of this variation by using a hierarchical (nested) experimental design, where the components of the experimental variance can be teased apart. Our aim was to understand the source of the variation in order to design experiments that are optimal with respect to the effort and expense used. In short, our results show that most of the variation originates from within the infiltrated leaf (between sampling spots), the position of the leaf on the plant being the second largest source. Experimental design We ran two different experiments using a similar hierarchical design. Our original intention was to test estradiol induction of the XVE/LexA system [27] in agroinfiltrated N. benthamiana, to compare the background and induced levels to the widely used Cauliflower Mosaic Virus 35S promoter [28], and to compare the G10-90 promoter [29], driving the XVE transcription factor, to the 35S promoter. Therefore, three constructs with the reporter gene encoding firefly luciferase (LUC) were used in this experiment. For each construct (for XVE-LUC with and without estradiol), two N. benthamiana plants were used, three top leaves were infiltrated from each plant, five samples were punched from each leaf and extracted, and each extract was measured five times for luciferase activity (technical replicates) (Fig. 1). In the second experiment we used only a 35S-LUC construct. Three plants were used, three top leaves were infiltrated from each plant, four samples were punched from each leaf and each sample was measured twice. This was repeated three times with 1 week intervals (experimental replicates), giving the topmost hierarchical level of the second experiment. All results were tabulated (Additional file 1: Tables S1, S2) and the variance components were calculated as described in the materials and methods. Promoter efficiencies in agroinfiltration Comparison of the three different promoters (the XVE promoter for uninduced and induced levels) showed that, compared to the 35S promoter, the XVE promoter gave an uninduced background level of 17 % and an induced level of 140 %. In this system, the G10-90 promoter yielded luciferase activity that was 12 % of the 35S promoter driven activity (Fig. 2).
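Before dissecting the measured variance, note that the nested layout lends itself to a small in-silico emulation. The sketch below is ours, not from the paper: the per-level standard deviations are invented for illustration, and it merely generates data with the same plant → leaf → punch → technical-read hierarchy as the first experiment.

```python
# Sketch: simulate readings under the nested design (2 plants x 3 leaves
# x 5 punches x 5 technical reads), with assumed Gaussian effects per level.
import random

random.seed(1)
SD = {"plant": 0.10, "leaf": 0.15, "punch": 0.30, "read": 0.05}  # assumed

def simulate(mu=10.0, a=2, b=3, c=5, r=5):
    """Returns y[i][j][k][l]: a plants, b leaves/plant, c punches/leaf,
    r technical reads/punch."""
    y = []
    for i in range(a):
        ep = random.gauss(0, SD["plant"])
        plant = []
        for j in range(b):
            el = random.gauss(0, SD["leaf"])
            leaf = []
            for k in range(c):
                es = random.gauss(0, SD["punch"])
                leaf.append([mu + ep + el + es + random.gauss(0, SD["read"])
                             for _ in range(r)])
            plant.append(leaf)
        y.append(plant)
    return y
```

Such simulated data is used with the variance-component estimator sketched later in the Statistics section.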
Source of variation The hierarchical design of the promoter test experiment allowed us to split the total experimental variance into its components. The largest fraction of the variance (85 %) was due to the promoters (or induction conditions) applied, as expected. As the promoters cause a fixed effect, their contribution was ignored when inspecting the distribution of the remaining variance (Fig. 3a). The remaining variance concentrated in the within leaf sampling (between punch holes or disks, 53 %), in the leaf position (17 %) and in the plant individual (19 %). Inspecting the results from the individual plants used in the experiment showed that in a few cases the two plants used for the experiment were not alike. Technical replication of the luciferase activity contributed least (11 %) to the total variance. The second experiment was designed to address the agroinfiltration variance in more detail by using a single reporter construct (35S-LUC) and more plants but fewer technical replicates. In the first experiment, only 0.5 µl of leaf extract was used for the luciferase assay. Although the variance of technical replication was smallest, part of it might be due to inaccurate pipetting. We increased the sample volume to 10 µl, but in order to keep the luciferase activity within the range of the luminometer, we mixed the reporter agrobacterium strain with one expressing the silencing suppressor p19 [15] at a ratio of 1:50. Silencing suppression is commonly used in agrobacterium infiltration and allows transient expression to continue for up to a week; however, here the role of the second strain was simply to dilute the luciferase carrying agrobacterium. We also took extra care to choose plants identical in size and shape for the experiment, leaving the largest and smallest plants on the tray out of the experiment. Analysing the second experiment for its variance components showed that increasing the volume pipetted for the luciferase assay nearly completely eliminated the variance from technical replication (Fig. 3b). In addition, the variance between the three plants in the experiment and between the three experimental replicates of the infiltration series was negligible. Similar to the first experiment, the largest variation came from between the samples punched from each leaf analysed (66 %) and the next largest from the leaves infiltrated within each plant (33 %). To catch possible sources of the within leaf variation, we ran some additional controls. The agrobacterium suspension spreads seemingly evenly in the airspace of the expanded leaf, but this does not assure that the bacteria are distributed evenly. To test this, infiltrated leaves were sampled as for the luciferase assay and the bacteria were released by homogenisation. Plating of serial dilutions of the suspensions showed 12 % variation but no trend with respect to the distance from the infiltration spot (Additional file 2: Figure S1). Buyel and Fischer [30] observed significant variation between sampling positions within agroinfiltrated N. tabacum leaves, and their experiments showed a trend of increased transient expression towards the basal parts of the leaf. Two of the four sampling spots in our experiment were taken closer to the tip of the leaf and two closer to the base, but the variation observed could not be attributed to the sampling position (Additional file 2: Figure S2).
Still, there was a slightly higher average level of expression closer to the tip of the leaf, and the variation within the tip samples was somewhat lower than between the basal samples (Additional file 2: Figure S2). Finally, we tested if our protein extraction procedure causes variation. We repeatedly sampled test leaves and measured the soluble protein content in the extracts. Variation was only 5.5 %, while for the transient luciferase expression it was 26 % within leaves, on average (Additional file 2: Figure S3). Although none of the tested sources contributed a major fraction of the within leaf variance, together they may contribute up to 15 % (Additional file 2: Figure S3). The second largest source of variation comes from the leaves within each infiltrated plant. In the second experiment we originally infiltrated four top leaves of each plant. The fourth leaf gave consistently lower expression levels and was not included in the analysis. The three top leaves that were included did not differ significantly from each other (Additional file 2: Figure S4). Discussion Syringe agroinfiltration has been increasingly used as a fast, reliable and low cost method for transient gene expression. The method works particularly well in N. benthamiana, but for quantitative assays it suffers from a degree of variation. In order to optimize the resources spent on conducting agroinfiltration experiments, we investigated the source of variation using a hierarchical (nested) design in our experiments. A hierarchical design is a special case of a factorial design where the factors do not interact. Instead, error (variance) is propagated from one hierarchical level up to the next in a simple manner that allows easy calculation of the variance contribution by each nested level. Hierarchical designs are typically used for resource optimisation [31], in biology for example for guiding optimal expenditure for replication in quantitative PCR [32]. We conducted two experiments where the activity of an intron containing reporter gene encoding firefly luciferase was used to monitor transient gene expression 2 days after infiltration of N. benthamiana leaves with agrobacterium carrying the reporter in its T-DNA. Both experiments showed that the main variation comes from the unequal distribution of the reporter activity within an infiltrated leaf. This was somewhat unexpected, and we could not attribute the variation to uneven spread of agrobacteria in infiltration, to variation in the sampling procedure itself or to positional effects of the sampling along the leaf axis. However, in agroinfiltration many errors add up at this particular hierarchical level and may together explain part of the high variation.
Fig. 3 Components of variance in the agroinfiltration experiment. In the first experiment (a), the variance caused by the different promoters is excluded. In the second experiment (b), none of the observed variance could be attributed to the three plant individuals within one agroinfiltration subexperiment, or to its three repetitions. In both experiments, the largest variation occurred between the sample disks punched from infiltrated leaves.
A more expected variation, but second to the within leaf variation, was due to the individual leaves infiltrated. We usually saw little variation between plants within a single experiment, although in the first experiment we observed in one case a major difference between the two plants used for infiltration.
The second experiment also addressed replication of the infiltration setup (experimental replicates), including a different batch of agrobacterium suspension and a different history of the set of plants growing on a shared tray. Variation between the experimental replicates was negligible. Finally, for technical replication of the luciferase assay, we found that using a submicroliter sample of leaf extract caused variation that could be easily avoided by increasing the sample volume. In the first experiment we used different promoters to drive the luciferase reporter. The promoter choice naturally introduced a large variation in reporter activity, but was included in order to assay for the inducibility of the XVE/LexA system and to compare it to the commonly used constitutive 35S promoter. We could measure an eightfold induction by estradiol of the XVE/LexA transcription factor/promoter cassette, and the induced levels were about the same or slightly higher than the constitutive levels achieved with the 35S promoter. Zuo and coworkers [27] tested XVE/LexA in stably transgenic Arabidopsis plants with GFP as the reporter. Without estradiol induction, GFP mRNA was below the level of detection. When induced with a saturating estradiol concentration (5 µM), the promoter activity was four times higher than that of 35S. The G10-90 promoter, in our hands, was much less active than the 35S promoter. Using stably transformed N. tabacum and an assay for β-glucuronidase enzyme activity encoded by the reporter gene uidA, Ishige and coworkers [29] concluded that G10-90 is much stronger than the 35S promoter (assayed in cotyledons, roots and seeds). Conclusions We have teased apart the variation in transient agrobacterium infiltration experiments and can come up with recommendations for setting up similar experiments. Most of the variation comes from uneven expression of the reporter gene within a leaf. Therefore, several sampling spots should be combined for the assay. Technical replication of the reporter enzyme assay is not important, if one takes care that pipetting errors are controlled by avoiding submicroliter volumes. The physiological state of the test plant can cause variation. The growth of the plants should be standardized and individuals with extreme characteristics should be discarded. In order to monitor the plant parameter, several individuals should be used. In summary, infiltrate more plants but fewer leaves, sample more positions on the leaf but run only a few technical replicates. Plant material Nicotiana benthamiana plants were grown under fluorescent light at 24 °C in peat:vermiculite (1:1). The day length was 16 h and the relative humidity 65 %. Plants were watered twice a week with commercial fertilizer (Substral, Thompson Siegel, Germany) and used for infiltration at the age of 6 weeks, when they typically carried nine leaves. Construction of plasmids In order to avoid measuring luciferase activity generated by agrobacterium cells, we used a firefly luciferase cDNA that contains an intron in the coding sequence [33]. The binary plasmid pLKB10, a kind gift from George Allen, contains this reporter under the 35S promoter. In order to generate expression constructs for the first experiment, we amplified the LUC gene from pLKB10 using first the primers 5′-AAAAAGCAGGCTCCATGGAAGACGCCAAAAAC and 5′-AGAAAGCTGGGTGTTACAATTTGGACTTTC, followed by attB adapter primers, as described in the manual for Gateway cloning (Invitrogen).
The fragment was inserted into pDONR221 (Invitrogen) using the Gateway BP Clonase enzyme (Invitrogen) to form plasmid pEnLUC. For generation of the estradiol inducible reporter construct and the G10-90-LUC reporter, multisite Gateway cloning was used. The following plasmids were kind gifts from Ari Pekka Mähonen: pEnNosT2-R2R3, containing a nopaline synthase gene polyadenylation site flanked by attR2 and attL3 sites; pEnPG1090-L4R1, containing the G10-90 promoter flanked by attL4 and attR1 sites; pEn-PG1090XVE-L4R1, containing a G10-90-XVE construct expressing the chimeric estrogen inducible transcription factor XVE [27], followed by the LexA promoter, with the cassette flanked by attL4 and attR1 sites; and pCAMkan-R4R3, which is a pCAM1300 [34] derived Gateway destination vector where attR4 and attR3 sites flank the ccdB cam cassette. The luciferase reporter was also recombined from pEnLUC to the destination vector pK7WG2D [35] using Gateway LR Clonase. The resulting plasmid pExp35S-LUC, used in the first experiment, is functionally equivalent to pLKB10, which was used in the second experiment. All resulting expression vectors were transformed into the Agrobacterium tumefaciens strain C58C1(pGV2260) [36] using electroporation. Preparation of Agrobacterium suspension In addition to the luciferase containing Agrobacterium tumefaciens strains described above, we also used in the second experiment C58C1(pGV2260, pBin61-p19), which provides suppression of gene silencing [15]. The purpose was to dilute the luciferase expressing strain so that the luminometer readings would not overflow; suppression of silencing is not needed when the reporter is assayed after only 2 days of expression. Agrobacterium strains were streaked on solid Luria Broth (LB) supplemented with antibiotics (rifampicin, carbenicillin and kanamycin or spectinomycin, all at 100 µg/ml) and grown at 28 °C for 3 days to single colonies. Colonies were inoculated into 5 ml LB with 20 μM acetosyringone and 10 mM 2-(N-morpholino)ethanesulfonic acid (MES, pH 6.0) without antibiotics, and grown overnight with vigorous shaking at 28 °C. Cells were collected by centrifugation at 3200×g for 10 min at room temperature and resuspended in 2 ml Mg-MES buffer (200 µM acetosyringone, 10 mM MgCl2, 10 mM MES, pH 6.0). 200 µl of the bacterial suspensions were diluted into 3 ml of Mg-MES buffer and adjusted to a final density of OD600 = 0.5. The cell suspensions were kept for 3 h at room temperature before infiltration into tobacco leaves. Agroinfiltration of tobacco leaves Three top leaves of 6-week-old N. benthamiana plants were used for infiltration, excluding the youngest leaf, which was difficult to infiltrate. The agrobacterium suspension was infiltrated into the whole leaf area from a small cut in the lower epidermis, using a 1 ml plastic syringe without a needle. After agroinfiltration, the plants were kept in the growth room for 2 days before harvest. For estradiol induction, the plants were watered with 10 µM 17-β-estradiol (Sigma-Aldrich) 3 days prior to infiltration and subsequently until sampling. All transgenic material was handled according to the Finnish GMO legislation. The laboratories where this work was conducted have a permanent permit for this type of experiment (Diary number 004/S/2002). Determination of luciferase activity Leaves were sampled at four or five different positions by using a cork borer as a punch. The punched leaf disks were 5.5 mm in diameter and weighed approximately 2.2 mg.
Soluble proteins were extracted from the leaf disks using 100 µl of modified lux buffer (50 mM Na-phosphate pH 7.0, 4 % soluble polyvinylpyrrolidone Mw 360,000, 2 mM EDTA, 20 mM DTT) [37], homogenised with a small pestle on ice and centrifuged for 10 min at 4 °C in a microcentrifuge. In the first experiment, luciferase activity was measured in the samples at 24 °C by pipetting 0.5 µl of the supernatant into 50 µl of enzyme substrate (Luciferase 1000 Assay System, #E4550, Promega), vortexing briefly and counting photons for 1 s in the luminometer (Luminoskan TL plus, generation II, Thermo Labsystems, Finland). In the second experiment, 10 µl of the supernatant was pipetted into 80 µl of enzyme substrate and photons were counted for 5 s. Statistical analysis Our infiltration experiments are hierarchical (nested) designs that allow calculation of the amount of variance generated at the different hierarchical levels of infiltration, sampling or measurement of the luciferase activity. The statistical (linear effects) model used to analyse the nested designs is y_ijklm = µ + A_i + B_ij + C_ijk + D_ijkl + E_ijklm, where µ represents the mean of all measurements, A the top hierarchical level (promoter in the first experiment and repetition of the infiltration subexperiment in the second experiment), B the second hierarchical level (plant treated), C the third (leaf infiltrated), D the fourth (sample punched) and E the residual error, estimated by running technical replicates of the luciferase assay. Calculation of the variance components is explained by Quinn and Keough [38] and shown in Additional file 1: Tables S1, S2. Authors' contributions THT conceived and planned the work, HB and SJ constructed the plasmids and conducted the experiments, SJ, HB and THT analysed the data, and HB and THT wrote the manuscript, which all authors commented on. All authors read and approved the final manuscript. Additional files Additional file 1: Table S1. Measurements of luciferase activity from the first experiment and calculation of variance components. Table S2. Measurements of luciferase activity from the second experiment and calculation of variance components. Additional file 2: Figure S1. Spread of agrobacterium in leaf infiltration. Figure S2. Distribution of luciferase activity within leaves. Figure S3. Variation in protein extraction. Figure S4. Efficiency of transient expression in different leaves.
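To make the variance-component calculation concrete, here is a minimal method-of-moments sketch for a balanced nested design (our illustration; the level names and the nested-list layout match the simulation sketch given earlier, and it assumes at least two units at every level).

```python
# Balanced nested ANOVA: mean squares per level, then method-of-moments
# variance components (negative estimates truncated at zero).
from statistics import mean

def nested_variance_components(y, a, b, c, r):
    """y[i][j][k][l]: a plants x b leaves x c punches x r reads."""
    assert a > 1 and b > 1 and c > 1 and r > 1
    grand = mean(y[i][j][k][l] for i in range(a) for j in range(b)
                 for k in range(c) for l in range(r))
    m_i = [mean(y[i][j][k][l] for j in range(b) for k in range(c)
                for l in range(r)) for i in range(a)]
    m_ij = [[mean(y[i][j][k][l] for k in range(c) for l in range(r))
             for j in range(b)] for i in range(a)]
    m_ijk = [[[mean(y[i][j][k]) for k in range(c)]
              for j in range(b)] for i in range(a)]
    ms_a = b * c * r * sum((v - grand) ** 2 for v in m_i) / (a - 1)
    ms_b = c * r * sum((m_ij[i][j] - m_i[i]) ** 2
                       for i in range(a) for j in range(b)) / (a * (b - 1))
    ms_c = r * sum((m_ijk[i][j][k] - m_ij[i][j]) ** 2 for i in range(a)
                   for j in range(b) for k in range(c)) / (a * b * (c - 1))
    ms_e = sum((y[i][j][k][l] - m_ijk[i][j][k]) ** 2
               for i in range(a) for j in range(b) for k in range(c)
               for l in range(r)) / (a * b * c * (r - 1))
    return {
        "plant": max(0.0, (ms_a - ms_b) / (b * c * r)),
        "leaf":  max(0.0, (ms_b - ms_c) / (c * r)),
        "punch": max(0.0, (ms_c - ms_e) / r),
        "read":  ms_e,
    }
```

Applied to data from the earlier simulation sketch, e.g. `nested_variance_components(simulate(), 2, 3, 5, 5)`, the estimates recover the assumed per-level variances up to sampling noise.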
Pairing of Fermions with Unequal Effective Charges in an Artificial Magnetic Field Artificial magnetic fields (AMFs) created for ultracold systems depend sensitively on the internal structure of the atoms. In a mixture, each component experiences a different AMF depending on its internal state. This enables the study of Bardeen-Cooper-Schrieffer pairing of fermions with unequal effective charges. In this Letter, we investigate the superconducting (SC) transition of a system formed by such pairs as a function of field strength. We consider a homogeneous two-component Fermi gas of unequal effective charges but equal densities with attractive interactions. We find that the phase diagram is altered drastically compared to the usual balanced charge case. First, for some AMFs there is no SC transition and isolated SC phases are formed, reflecting the discrete Landau level (LL) structure. SC phases become reentrant both in AMF and temperature. For extremely high fields where both components are confined to their lowest LLs, the effect of the charge imbalance is suppressed. Charge asymmetry reduces the critical temperature even in the low-field semiclassical regime. We discuss a pair breaking mechanism due to the unequal Lorentz forces acting on the components of the Cooper pairs to identify the underlying physics. Cold atom experiments have realized novel many-particle systems, challenging some of the most fundamental models of condensed matter theory. In particular, Bardeen-Cooper-Schrieffer (BCS) theory of fermion pairing, which has successfully explained superconducting (SC) and superfluid (SF) behavior in a large number of systems, had to be extended to cover new regimes. Pairing due to resonant interactions has been explored both theoretically and experimentally, uncovering the BEC-BCS crossover in detail. Density imbalance between the components forming the Cooper pairs and the resulting unconventional SC states were first considered for condensed matter systems, but the experimental observation of polarized SFs [1,2] with cold atoms required significant improvements upon prior approaches. Similarly, Cooper pairs made up of fermions with unequal masses have been explored theoretically [3,4]. The constituents of cold atom experiments are neutral atoms.
The dominant interaction between these atoms is through s-wave scattering, which can be tuned via Feshbach resonances between the atoms. While the absence of Coulomb interactions facilitated the realization of some fundamental condensed matter models, the neutrality of the particles prevented the observation of the effects of an external magnetic field on these systems. Initial efforts in this direction used rotation to mimic the magnetic field, which brings further constraints on the confining potential of the ultracold system [5]. Over the last five years, a significant development, namely the creation of Raman laser-assisted artificial magnetic fields (AMFs) for neutral atoms [6], has extended the capabilities of cold atom experiments. These AMFs are realized by coupling the internal states of the atoms to light to imprint a Berry phase on the motion. While a number of different schemes have been used to manufacture these synthetic Hamiltonians, all of them sensitively depend on the internal excitation structure [7]. Hence, for a mixture of two different atom species, or even a mixture composed of atoms in different hyperfine states, the effective magnetic field acting on each component can be different. For example, the g-factors for 87Rb 5S_{1/2} F = 1 and 85Rb 5S_{1/2} F = 2 have a 3/2 ratio. If the scheme in Ref. [6] is applied to a mixture of these atoms, the position dependent detunings, and consequently the AMFs, would reflect this ratio. Although the Zeeman shifts due to the real magnetic field are utilized to create the AMF, this artificial field only couples to the spatial motion of the atoms and does not cause an artificial Zeeman effect [7]. In this Letter, we explore the consequences of an AMF that couples unequally to the fermions forming a Cooper pair. Essentially, we consider the pairing of fermions with different cyclotron frequencies, which we regard as unequal effective charges coupling to the same AMF. We discuss the conditions for pairing and the response of the paired state to the external AMF. We show that this system displays reentrant SC in temperature, i.e. a normal sample at zero temperature can become SC as the temperature is increased. An oscillatory dependence of T_C on the AMF, which is a direct consequence of the Landau level (LL) structure of single-particle excitations, is observed. However, for some AMFs, the SC state is not preferred even at zero temperature. We calculate the phase diagram of the system for various representative cyclotron frequency ratios and present physical mechanisms to elucidate the fundamental changes in the SC transition. We consider a mixture of two fermion species of equal mass and equal density. The system is assumed to be spatially homogeneous, as in most cold atom experiments the effects of the confining potential can be taken into account through the local density approximation. An AMF of arbitrary strength is acting on the system by coupling only to the orbital motion of the fermions but causing no Zeeman shift. The coupling of the AMF to each component is different, defining the effective charges q_1 and q_2. The corresponding cyclotron frequencies ω_1 = q_1·B/m and ω_2 = q_2·B/m define the respective LL separations. We introduce the relative frequency ω_r = ω_2/ω_1 and the effective frequency ω = √(ω_1 ω_2).
Within the Landau gauge A = (0, Bx, 0), the non-interacting Hamiltonian can be written as H_0 = Σ_{i,ν} ε_{i,ν} c†_{i,ν} c_{i,ν}, where the index ν = (n, k_y, k_z) incorporates the LL index n, the momentum along the z-direction k_z, and the momentum k_y, which also labels the LL degeneracy. The associated kinetic energy is ε_{i,ν} = ℏω_i(n + 1/2) + ℏ²k_z²/2m − µ_i. The chemical potentials µ_i are not equal but are chosen to fix the density of both species to be the same at each AMF value, n_1 = n_2. The particle densities are then scaled with the effective magnetic length ℓ = √(ℏ/mω), i.e. n_1 = Nπ²ℓ³. We numerically solve the number equations for the chemical potentials at each AMF. Thus, for a fixed charge ratio ω_r and total real-space density N, changing the AMF strength alters only the effective density n_1. This single-particle spectrum is unique in that the LLs of up and down spins do not match in energy. Since their zero-point energies and the separations between the LLs are different, two LLs can have equal energy only if the charge ratio ω_r is a rational number. When the chemical potentials are adjusted to equate the densities, the low-energy single-particle excitation spectra for up and down spins are asymmetric. This mismatch has drastic consequences on pairing when interactions are introduced. The two species interact resonantly through s-wave scattering, which we model using the two-channel Hamiltonian following Refs. [8,9], which studied the balanced magnetic field case, ω_r = 1. We write the interacting Hamiltonian, in which the open channel fermions interact to form the closed channel boson with energy ε_B = γ + ℏ(ω_1 + ω_2)/2 − µ_1 − µ_2 + C, where γ is the unrenormalized detuning between the closed channel boson and the open channel fermions, and C is the counterterm which is set to compensate the divergence of the boson self-energy due to the infinite number of LLs of the closed channel fermions (Eq. 3). The closed channel boson eigenstates have the same form as the fermion single-particle states; however, the effective charge of the boson is q_B = q_1 + q_2, and its mass is 2m. Consequently, the boson magnetic length is ℓ_B = √(ℏ/m(ω_1 + ω_2)). Two fermions with up and down spin in different LLs can interact to form a boson through the coupling constant α and the overlap integral Q_{νν′}, whose form involves the n-th Hermite polynomials H_n(·). The dominant pairing mechanism is through the closed channel boson with the lowest possible energy. Hence, the wave function of the boson is in general a superposition over all the states in the bosonic lowest LL (LLL) with zero k_z. The distribution of the boson wave function over the degenerate LLL states with different k_y does not affect the physical properties after the bosons are integrated out [8,9]. Thus, we calculate the overlap integral only for the LLL state with k_y = 0. The renormalization parameters γ and α are chosen in such a way that the system reproduces the low-energy scattering properties for (ω_1, ω_2) → 0. Hence, they are related to the physical parameters, the s-wave scattering length a_s and the effective range r_0. In order to analyze the pairing of unequal charges, we examine this Hamiltonian around the SC transition. Following mean-field theory, we introduce α⟨b⟩ = ∆, the average amplitude of the bosonic wave function, which defines the order parameter. Near the transition, ∆ is small and we expand the free energy as F = F_0 + a|∆|² + O(|∆|⁴). Pairing is favorable if the coefficient a of the second-order term is negative. The critical temperature for pairing is obtained by setting this coefficient to zero (Eq. 7).
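The number-equation step above lends itself to a short numerical illustration. The following sketch is our own hypothetical code, not the authors'; units, the level cutoff, and parameter values are assumptions. It solves n_i(µ_i) = n for one species with the Landau-level dispersion given above, with energies measured in units of ℏω.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad
from scipy.special import expit

def density(mu, w, T, n_max=100):
    """Dimensionless density for one species: LL degeneracy ~ w times
    the kz integral of the Fermi function for each level n, with
    eps(n, kz) = w*(n + 1/2) + kz**2 in units of hbar*omega."""
    def kz_integral(eps0):
        f = lambda kz: expit((mu - eps0 - kz**2) / T)  # Fermi occupation
        return 2.0 * quad(f, 0.0, np.inf)[0]           # +kz and -kz
    return w * sum(kz_integral(w * (n + 0.5)) for n in range(n_max))

def chemical_potential(n_target, w, T):
    """Invert density(mu) = n_target by bracketed root finding."""
    return brentq(lambda mu: density(mu, w, T) - n_target, -5.0, 50.0)

# Charge ratio w_r = 0.95; with the effective frequency set to 1,
# w1 = 1/sqrt(w_r) and w2 = sqrt(w_r). Equal target densities.
w_r, T, n_target = 0.95, 0.05, 10.0
mu1 = chemical_potential(n_target, 1.0 / np.sqrt(w_r), T)
mu2 = chemical_potential(n_target, np.sqrt(w_r), T)
print(mu1, mu2)  # generally unequal: the two LL spectra are mismatched
```

The two chemical potentials come out unequal whenever ω_r differs from 1, which is the mismatch the text identifies as the origin of the modified phase diagram.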
FIG. 1. (Color online) Phase diagram of the system as a function of the dimensionless temperature k_B T/ℏω and effective density n_1 = Nπ²ℓ³, where N is the real-space density and ℓ = √(ℏ/mω). Notice that increasing AMF, B = mω/√(q_1 q_2), corresponds to lower n_1 values. Two frequency ratios, ω_r = 1 and ω_r = 0.95, are displayed for the same interaction strength a_s(Nπ²)^{1/3} = 0.53. For ω_r = 1, there is a SC transition at any field; the oscillations in T_C (stars) originate from the LL structure. For ω_r = 0.95, these oscillations evolve into bubble SC regions (shaded areas). The system is not SC even at zero temperature between the bubbles, and the transition becomes weakly reentrant in temperature. In the low-field (many LLs) and high-field (only LLL) regimes, T_C is not affected significantly by a small charge imbalance.

The chemical potentials µ_i are chosen so that the real-space densities of the two components are the same. The right-hand side of Eq. (7), the pairing susceptibility, determines the behavior of T_C and can be used to understand the underlying physical picture for its evolution. We solve Eq. (7) numerically by calculating the pairing susceptibility for a given value of T and n_1. The phase diagram of the system is then obtained by comparing this value with −1/a_s. For the numerical solution we scale all energies by the effective magnetic energy ℏω. Similarly, the dimensionless scattering length is a_s(Nπ²)^{1/3}. Our equations are symmetric under ω_r → 1/ω_r, which is equivalent to switching the indices of the components. We checked this symmetry numerically and concentrate on 0 < ω_r ≤ 1 in the following. In Fig. 1, we present the phase diagram for ω_r = 1, which is in agreement with Ref. [8]. For the balanced charge case, there is always a critical temperature below which the SC state is preferred within the mean-field approximation. The critical temperature is non-monotonic with the applied field. It first decreases and becomes exponentially small as a smaller number of LLs are involved in the pairing, but then increases when only the LLL contributes. This high-field SC has been studied for both solid state [10,11] and cold atom systems [8]. The oscillatory behavior of T_C with the applied field is a direct result of the underlying LL spectrum. When the LLs of the two components coincide in energy, the pairing susceptibility diverges as 1/√T at the peaks and as ln(T) elsewhere, as shown in Fig. 2(a), always guaranteeing a SC state. We also display the phase diagram for ω_r = 0.95 in Fig. 1. First of all, unlike the balanced case, there are AMF values for which a SC state is never favored. Oscillatory behavior in pairing due to the LL structure causes stable islands of SC in the phase diagram, isolated from the zero-field SC phase. Although the low-field SC is not affected by unequal frequencies, as can be expected, the high-field SC proves to be surprisingly resilient too. The destruction of the low-temperature SC in the presence of a small misalignment between LLs has been predicted [12]; however, this misalignment also leads to a third effect, reentrant SC. For some AMFs, at low temperatures the sample is in the normal state while at higher temperatures it becomes SC. Consequently, even for a slight asymmetry between the charges of the components, the phase diagram undergoes fundamental changes. These changes are most pronounced in the regime where only a few LLs are populated for both components, and we display representative phase diagrams in Fig. 2.
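To make the boundary-finding procedure concrete, here is a schematic sketch, entirely our own. The susceptibility below is a toy stand-in with an assumed functional form, not the paper's Eq. (7); only the logic of comparing a susceptibility against an interaction threshold is taken from the text.

```python
import numpy as np
from scipy.optimize import brentq

def chi_toy(T, gap):
    """Toy pairing susceptibility: log-divergent at low T for matched
    LLs, cut off by an assumed LL-mismatch scale 'gap'."""
    return np.log(1.0 + 1.0 / np.sqrt(T**2 + gap**2))

def t_c(threshold, gap, T_hi=5.0):
    """SC boundary: chi_toy(T_c) = threshold (stand-in for -1/a_s)."""
    f = lambda T: chi_toy(T, gap) - threshold
    if f(1e-8) * f(T_hi) > 0:
        return None          # no crossing: no SC transition at this field
    return brentq(f, 1e-8, T_hi)

for gap in (0.0, 0.05, 0.2):  # gap ~ |w1 - w2|, the LL mismatch
    print(gap, t_c(threshold=2.0, gap=gap))
```

The `None` branch mirrors the paper's observation that the gap equation can fail to have a solution at some field strengths once the charges are imbalanced.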
FIG. 2. (Color online) Pairing susceptibility (left panel) and respective phase diagrams (right panel) for three frequency ratios ω_r = 1, 0.95, 0.75. The system is made dimensionless with the effective magnetic energy ℏω as in Fig. 1, and decreasing n_1 corresponds to increasing AMF at fixed real-space density N. The phase diagrams are in linear scale in order to cover T_C = 0 and are plotted for intermediate field strengths, where charge imbalance effects are most prominent. (a) The pairing susceptibility of equal charges diverges at low temperature, guaranteeing SC for any field value. The divergence is more pronounced at LL thresholds. The corresponding phase diagram (b) is obtained from Eq. (7). The phase boundary is also highlighted on the surface. (c) Even a slight asymmetry between the charges, ω_r = 0.95, lifts the low-temperature divergences, and the oscillations in (b) turn into bubble SC phases (d). Each LL susceptibility peak is split into smaller peaks; thus, the bubble phases branch into smaller bubbles for weaker interactions (not displayed). (e,f) The mismatch between LL spectra is greater for smaller ω_r, resulting in prominent reentrance with temperature. The maximum reentrance temperature is controlled by ℏ|ω_1 − ω_2|.

In the following, we discuss the physical reason for each of these three features. Fig. 2(c) displays the pairing susceptibility for ω_r = 0.95, focusing on intermediate field strengths. The most striking feature of this phase diagram is the emergence of isolated islands of SC, Fig. 2(d), which come about because Eq. (7) does not have a solution at some AMF strengths. The absence of solutions even for a minute amount of charge imbalance is best understood by considering the single-particle spectra. The one-particle density of states (DOS) for each component has sharp peaks at each LL threshold. For the balanced case, the DOS and the chemical potentials of the components are always equal. If the temperature is low enough, only the DOS near the chemical potential is relevant. At low temperatures, the pairing susceptibility diverges as T^{−1/2} at each LL threshold, and the peaks in T_C follow from the one-particle DOS. When ω_r ≠ 1, the LLs of different components do not have the same energy or total degeneracy. In general, the chemical potentials of the two components must be chosen differently to give equal real-space densities. If the mismatch between the chemical potentials and the LLs is large, the energy cost of exciting particles may not be redeemed by the attractive interactions even at zero temperature. The pockets of SC phases roughly correspond to the population of a new LL. Whenever a chemical potential crosses a LL threshold, there is a new set of states at the Fermi surface which suddenly become available to contribute to the pairing. The most favorable case for pairing is when both chemical potentials simultaneously cross a new LL. For small charge imbalance, these threshold crossings happen within a small difference in AMF, which essentially turns the T_C oscillations of the balanced case into the bubble SC phases in Fig. 2(d). For a general ω_r ratio, the picture is much more complicated. For the chemical potentials to give equal densities and cross LL thresholds simultaneously, ω_r must be close to a simple fraction. This complicated behavior is evident in the pairing susceptibilities displayed in Fig. 2, where the simple peaks of balanced LLs are split into smaller structures. For very strong attractive interactions, these bubble phases are not resolved as T_C becomes comparable to the LL separation.
A similar effect was predicted for the balanced case [8]. As the AMF is increased further, we observe that T_C increases and reaches the same value as in the equal-charge limit. This surprising revival happens only when both components populate their LLLs. Although the degeneracies of the two LLLs are not equal, they both increase with increasing AMF. The chemical potentials of both components then lie very close to the corresponding LLL thresholds. Hence, the excitation cost for pairing decreases at such high fields. We can estimate the AMF for which the effect of charge imbalance vanishes by requiring all the particles with the smaller effective charge to reside in their LLL. This estimate is in good agreement with our numerical results. Another fundamental change brought about by the charge imbalance is SC that is reentrant with temperature. While this effect is not clear for small imbalance, as in Fig. 2(d), we found that it is a common feature of the phase diagrams for general ω_r. A more prominent reentrant SC phase can be observed for ω_r = 0.75, as in Fig. 2(f). For some field strengths, the system prefers the normal phase at zero temperature and becomes SC only above T_C1; the SC phase subsequently disappears above a higher temperature T_C2. Similar reentrant behavior was predicted for graphene bilayers [13] and asymmetric nuclear matter [14]. In our system it is easy to understand the physical basis for this reentrance. Increasing temperature generally favors a disordered state; however, it also excites a significant number of particles to higher LLs. Because of the asymmetric and oscillatory nature of the DOS in the charge-imbalanced system, states which are close not only to the chemical potential but also to LL thresholds are most favorable for pairing. Thus, if the pairing contribution from thermally excited particles overcomes the entropy cost, increasing the temperature can drive the SC transition. With this scenario, we expect the maximum lower critical temperature T_C1 to be of the order of the LL mismatch between the components, which agrees with our numerical results. In contrast to other reentrant SC phases where a competing order precludes SC at low temperatures [15], the current system has reentrance solely due to the non-trivial nature of the single-particle DOS. Although we concentrate on the interplay between the discrete LL structure and charge asymmetry, it is worth mentioning that there is a profound effect even in the semiclassical regime where many LLs are filled for both components. The effect of an external magnetic field on a Cooper pair is usually modeled by considering only the phase acquired by the center of mass motion. This semiclassical approximation due to orbital dephasing successfully describes the upper critical field H_C2 in type-II SC. However, if the charges of the fermions forming the pairs are different, the magnetic field couples the center of mass motion with the relative coordinate. As pairing is controlled by the relative coordinate, especially for tightly bound pairs, it is possible for the Lorentz force due to the center of mass motion to break the pairs. Classically, if the center of mass of a bound pair with charges q_1, q_2 is moving with velocity v in a perpendicular magnetic field B, the Lorentz force difference between the two particles is F ≃ |q_1 − q_2|Bv. This real-space picture can be utilized to give a rough estimate of the strength of this pair breaking mechanism by comparing the work done by this force over the size of the pair to the SC gap.
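A one-line numerical version of this estimate, purely for illustration: all quantities below are placeholder values in arbitrary units, not numbers from the Letter. The pair breaks when the differential Lorentz force, acting over the pair size ξ, does work comparable to the gap Δ.

```python
# Pair-breaking estimate: |q1 - q2| * B * v * xi ~ Delta.
def breaking_field(delta, v, xi, q1, q2):
    """Field at which the differential Lorentz force unbinds the pair."""
    return delta / (abs(q1 - q2) * v * xi)

delta, v, xi, q1 = 1.0, 1.0, 1.0, 1.0   # gap, CM velocity, pair size, charge
for w_r in (0.95, 0.75, 0.50):           # q2/q1 charge ratio
    print(w_r, breaking_field(delta, v, xi, q1, q2=w_r * q1))
```

The divergence of this field as the charges approach each other (w_r → 1) is consistent with the H_C3 scaling quoted below.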
This effect becomes dominant especially for large charge ratios, and we estimate the upper critical field due to this pair-breaking mechanism as H_C3 ≈ (√ω_r/(1 − ω_r)) H_C2, which is in agreement with our numerical results. While competing orders such as charge density waves, which were not taken into account in our approach, may complicate the physics in the high magnetic field limit, the decrease of T_C with charge imbalance is observed even when many LLs are filled and the mean-field approximation is most reliable. In summary, cold atom experiments with AMFs can create mixtures where each component has a different effective charge. The pairing between fermions of unequal effective charges presents a unique extension of BCS theory, which is fundamental in diverse areas of physics. In this Letter, we find that even for a slight asymmetry between the charges, the phase diagram changes drastically with the emergence of reentrant SC both in temperature and in AMF. The oscillatory behavior of T_C with AMF for the balanced case modifies into isolated SC phases. For extremely high AMFs, where both components are in their LLLs, the transition temperature is independent of the charge ratio. Finally, we argue that T_C is reduced due to pair breaking facilitated by unequal Lorentz forces on the charges forming the pairs. F.N.Ü. is supported by Türkiye Bilimsel ve Teknolojik Araştırma Kurumu (TÜBİTAK). This work is supported by TÜBİTAK Grant No. 112T974.
2016-02-02T01:48:23.000Z
2015-08-25T00:00:00.000
{ "year": 2015, "sha1": "0a1596abd2dac0e3c675ef0a438485eae72aceef", "oa_license": null, "oa_url": "http://repository.bilkent.edu.tr/bitstream/11693/36535/1/Pairing_of_Fermions_with_Unequal_Effective_Charges_in_an_Artificial_Magnetic_Field.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0a1596abd2dac0e3c675ef0a438485eae72aceef", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
16361672
pes2o/s2orc
v3-fos-license
Estimated Effect of Climatic Variables on the Transmission of Plasmodium vivax Malaria in the Republic of Korea

Background: Climate change may affect Plasmodium vivax malaria transmission in a wide region including both subtropical and temperate areas. Objectives: We aimed to estimate the effects of climatic variables on the transmission of P. vivax in temperate regions. Methods: We estimated the effects of climatic factors on P. vivax malaria transmission using data on weekly numbers of malaria cases for the years 2001–2009 in the Republic of Korea. Generalized linear Poisson models and distributed lag nonlinear models (DLNM) were adopted to estimate the effects of temperature, relative humidity, temperature fluctuation, duration of sunshine, and rainfall on malaria transmission while adjusting for seasonal variation, between-year variation, and other climatic factors. Results: A 1°C increase in temperature was associated with a 17.7% [95% confidence interval (CI): 16.9, 18.6%] increase in malaria incidence after a 3-week lag, a 10% rise in relative humidity was associated with a 40.7% (95% CI: –44.3, –36.9%) decrease in malaria after a 7-week lag, a 1°C increase in the diurnal temperature range was associated with a 24.1% (95% CI: –26.7, –21.4%) decrease in malaria after a 7-week lag, and a 10-hr increase in sunshine per week was associated with a 5.1% (95% CI: –8.4, –1.7%) decrease in malaria after a 2-week lag. The cumulative relative risk for a 10-mm increase in rainfall (≤ 350 mm) on P. vivax malaria was 3.61 (95% CI: 1.69, 7.72) based on a DLNM with a 10-week maximum lag. Conclusions: Our findings suggest that malaria transmission in temperate areas is highly dependent on climate factors. In addition, lagged estimates of the effect of rainfall on malaria are consistent with the time necessary for mosquito development and P. vivax incubation.

Climate change is predicted to have a variety of impacts on human health, many of which have been extensively reviewed (Ebi et al. 2006; Mills et al. 2010). Among them, malaria has been recognized as one of the diseases most sensitive to climate change (Haines et al. 2006; Patz and Olson 2006). Temperature, humidity, and rainfall have been reported to affect the incidence of malaria, either through changes in the duration of mosquito and parasite life cycles or through influences on human or parasite behavior (Paaijmans et al. 2009; Parham and Michael 2010; Snow and Gilles 2002). According to a report by the Intergovernmental Panel on Climate Change (2007), the rate at which temperatures are increasing is higher in the temperate areas of the world. Although the relationship between malaria and meteorological variables has been assessed in many regions, including Africa, Europe, Asia, South America, and Australia, few studies have been conducted and little is known about the impact of climate variation on malaria in temperate regions (Bi et al. 2003; Zhang et al. 2010). In addition, few relationships between climatic variables other than temperature and Plasmodium vivax have been reported. Most studies of meteorological effects on malaria have focused on temperature and rainfall (Parham and Michael 2010; Zhang et al. 2010). Paaijmans et al. (2009) reported that diurnal temperature fluctuation plays an important role in parasite development. Moreover, it is known that the longevity of a mosquito increases with increasing relative humidity (Martens et al.
1995), and that mosquito activity decreases with increasing sunshine because mosquitoes are more active during the dark. However, the effects of these variables on malaria have not been estimated. Studies of the effect of various meteorological variables such as temperature, relative humidity, diurnal temperature range (DTR), duration of sunshine, and rainfall are needed to clarify the link between malaria and climate and suggest new approaches to reduce the current and future disease burden of malaria. Since its reemergence in 1993 from the northwest border area facing the Democratic People's Republic of Korea (DPRK), P. vivax malaria has become endemic in the Republic of Korea (ROK), with a peak incidence in 2007 of 2,192 cases, one of the highest among countries in temperate regions (Korea Centers for Disease Control and Prevention 2010; Park 2011). Moreover, the surface air temperature on the Korean peninsula has significantly increased by about 1.5°C during the past 100 years (Korea Ministry of Environment 2011), an increase which is greater than the global average increase of 0.74°C. P. vivax malaria continues to be problematic in the northwestern part of the ROK, presenting a seasonal pattern (Park et al. 2009). In 2005, the proportion of cases that occurred after a short incubation period increased, suggesting an increase in the length of the transmission period that could be a consequence of rising temperatures in the ROK (Park 2011; Park et al. 2003). The new emergence and expanding pattern of malaria in the ROK, which has the highest latitude among the temperate region countries, may be evidence of the potential effect of climate change on malaria transmission. We aimed to estimate the effects of diverse climatic variables, such as temperature, relative humidity, DTR, duration of sunshine, and rainfall, on the transmission of P. vivax while taking the lag time into account. We also aimed to provide strategic insights into the current and future impact of climate change on malaria transmission, especially in temperate regions.

Study area. During the 1990s, after the initial reemergence of P. vivax malaria in the ROK, more than half of the total annual cases were diagnosed among active military personnel and veterans within 24 months of their discharge from military service. However, the proportion of civilian cases subsequently increased consistently, reaching over 60% in 2006, and the geographical area associated with malaria transmission has expanded southward from the Demilitarized Zone (DMZ), a strip of land running across the Korean Peninsula that serves as a buffer zone between the ROK and the DPRK (Yeom et al. 2005, 2007). Only civilian cases, which constituted 44–63% of all cases for the years 2001–2009 in the ROK, were included in the analysis. Military cases were excluded because mass chemoprophylaxis has been conducted on a large scale in the military. Thus, P. vivax malaria cases among civilians may be more informative for investigating effects of climate on P. vivax malaria in the ROK. We obtained surveillance data for malaria cases, including information about the date of onset and place of residence, from the Korea Centers for Disease Control and Prevention (Osong, ROK), which monitors and manages malaria in the ROK as a nationally notifiable communicable disease.
Study cases were restricted to those in the capital region, which covers about 90% of all civilian cases in the ROK and is the only area where malaria is endemic. The capital region includes Seoul, Incheon, and Gyeonggi province and is located in the northwestern ROK, covering 11,730 km² with a combined (census) population of 22,766,850 as of 2005, amounting to over 48% of the entire population of the ROK. Seoul, the capital city of the ROK and the center of its capital area, is located at 37.6°N and 127.0°E. The study area has a continental climate with four distinct seasons, including a hot and humid summer and a cold and snowy winter. Daily meteorological parameters, including the daily maximum, mean, and minimum temperature; relative humidity; duration of sunshine; and the amount of rainfall, were obtained from eight sites in the capital region that are monitored by the Korea Meteorological Administration (Seoul, ROK). Daily weather data were averaged across the eight sites for use in analyses. Daily DTRs were calculated as the difference between the maximum and minimum temperature on each day and were used as an index of diurnal temperature fluctuation, which can substantially alter the incubation period of malaria parasites and reduce the impact of mean temperature (Paaijmans et al. 2009). Weekly DTRs were calculated as the average of daily DTRs over each week. Weekly mean values for temperature, relative humidity, DTR, duration of sunshine, rainfall amounts, and numbers of malaria cases were used to estimate the effect of climatic factors on P. vivax.

Statistical analysis. Generalized linear Poisson regression models allowing for overdispersion were used to examine relationships between the number of malaria cases per week and the climatic variables (McCullagh and Nelder 1989; Zanobetti et al. 2000). We began by using a generalized additive model with natural cubic splines (Hastie and Tibshirani 1990) to characterize the shapes of relationships between P. vivax malaria and weather variables while controlling for possible confounders. Models included Fourier terms up to the sixth harmonic per year to account for seasonality in malaria incidence, and indicator variables for each calendar year to account for temporal trends over the study period (Hashizume et al. 2008). After examining the shapes of each exposure–outcome relation to confirm assumptions regarding linearity or identify threshold values, we fitted generalized linear Poisson regression models with natural cubic splines [4 degrees of freedom (df)] to control for confounding factors, including other climatic variables. Final models for each variable of interest (i.e., temperature, relative humidity, DTR, and duration of sunshine) were selected based on model fit using Akaike's information criterion. We estimated associations between climatic variables and malaria incidence for various single-week lags. For example, a lag of 0 weeks (unlagged) corresponds to the association between weather in a given week and the risk of malaria incidence in that same week. A lag of 8 weeks refers to the association between weather in a given week and malaria incidence 8 weeks later.
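As an illustration of the preprocessing just described, the following sketch is our own code with simulated data; the column names and the synthetic weather series are assumptions, not the study's data. It starts from a station-averaged daily series, forms the daily DTR, and aggregates to weekly values.

```python
import numpy as np
import pandas as pd

# Simulated station-averaged daily weather for 2001-2009.
rng = pd.date_range("2001-01-01", "2009-12-31", freq="D")
season = 10 * np.sin(2 * np.pi * rng.dayofyear / 365)
daily = pd.DataFrame({
    "tmax": 15 + season + np.random.randn(len(rng)),
    "tmin": 5 + season + np.random.randn(len(rng)),
    "rain": np.random.gamma(0.3, 10, len(rng)),
}, index=rng)

# Daily diurnal temperature range, then weekly aggregation:
# means for temperatures and DTR, totals for rainfall.
daily["dtr"] = daily["tmax"] - daily["tmin"]
weekly = daily.resample("W").agg({"tmax": "mean", "tmin": "mean",
                                  "dtr": "mean", "rain": "sum"})
print(weekly.head())
```

The weekly table produced this way is the kind of input the lagged regression models below operate on.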
The model specification used for the climatic variables except rainfall was as follows:

log[E(Y)] = β₀ + β₁X_t + Σ_i S_i(X_i) + Σ_{j=1}^{N} [γ_j sin(2πj t/52) + δ_j cos(2πj t/52)] + year indicators.

Here, E(Y) denotes the number of expected malaria cases; β₁ is the coefficient (slope) for the weather variable (X_t) during week t; S_i(X_i) denotes the smooth functions for the i covariates; t_j denotes the week of the year j (t = 1, 2, …, 52); and N is the number of Fourier pairs, up to the sixth harmonic per year. Weekly mean values for relative humidity and duration of sunshine were included in the model for a 1°C increase in the mean temperature; weekly mean values for temperature, duration of sunshine, and DTR were included in the model for a 10% increase in relative humidity; weekly mean values for duration of sunshine, relative humidity, and mean temperature were included in the model for a 1°C increase in the DTR; and weekly mean values for temperature and relative humidity were included in the model for a 10-hr increase in the weekly duration of sunshine. To account for a longer and nonlinear lag effect, as suggested by Thomson et al. (2006), we examined the relationship between malaria incidence and rainfall by fitting nonlinear unconstrained distributed lag models, a subtype of distributed lag nonlinear model (DLNM) (Armstrong 2006). We adjusted for the weekly averaged mean temperature using a cross-basis framework (Gasparrini et al. 2010) to account for the combined effects of a 10-week maximum lag structure (stratified at 4 weeks for mean temperature and polynomial for rainfall) and a nonlinear exposure response represented by natural cubic splines with 5 df for the effects of rainfall and temperature. The knot for temperature at 4 weeks was included to account for a change in the effect of temperature, which was greater for 2-, 3-, and 4-week lags than for 5- through 10-week lags; the lagged effect of temperature began to decrease after a 4-week lag. After plotting the DLNM for the effect of rainfall without a threshold, we estimated the increase in the number of malaria cases for a 10-mm increase in rainfall ≤ 350 mm/week, up to which level the effect of rainfall increased linearly. In addition, we estimated single-day lag effects of an increase in rainfall on the daily (vs. weekly) number of malaria cases to account more accurately for daily variation in rainfall. A generalized linear Poisson regression model with natural cubic splines was utilized for the single-day lag effects of rainfall after examining the shapes of each exposure–outcome relation to confirm assumptions regarding linearity. All statistical analyses were performed with R software (version 2.13.0; R Project for Statistical Computing, Vienna, Austria) using the packages MGCV (version 1.7-6) and DLNM (version 1.4.0). All tests were two-sided, and an alpha level of < 0.05 was considered significant.

Results. Between 2001 and 2009, 6,548 cases of malaria were reported among civilians in the capital area (of 7,557 total cases in the ROK, with an annual maximum of 1,193 cases out of 1,295 total civilian cases in 2007). Over the entire study period (2001–2009), there was a mean (± SD) of 14.0 ± 18.2 malaria cases per week, the mean temperature was 12.0 ± 9.9°C, and the mean weekly rainfall was 27.2 ± 49.1 mm (Table 1).
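The study fitted these models in R (mgcv, dlnm). Purely as an illustration of the model structure specified above, here is a minimal Python sketch, our own code with simulated data and assumed variable names: a quasi-Poisson fit with sixth-harmonic Fourier seasonality, year indicators, and a lagged weekly exposure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

weeks = np.arange(52 * 9)
df = pd.DataFrame({
    "t": weeks % 52 + 1,                      # week of year
    "year": 2001 + weeks // 52,
    "temp": 12 + 10 * np.sin(2 * np.pi * weeks / 52),
})
df["cases"] = np.random.poisson(np.exp(1 + 0.02 * df["temp"]))

lag = 3
df["temp_lag"] = df["temp"].shift(lag)        # weather 3 weeks earlier

# Design matrix: lagged exposure + Fourier seasonality + year dummies.
X = pd.DataFrame({"temp_lag": df["temp_lag"]})
for j in range(1, 7):
    X[f"sin{j}"] = np.sin(2 * np.pi * j * df["t"] / 52)
    X[f"cos{j}"] = np.cos(2 * np.pi * j * df["t"] / 52)
X = X.join(pd.get_dummies(df["year"], prefix="yr",
                          drop_first=True, dtype=float))
X = sm.add_constant(X)

# Poisson GLM; scale="X2" gives a quasi-Poisson (overdispersed) fit.
fit = sm.GLM(df["cases"][lag:], X[lag:],
             family=sm.families.Poisson()).fit(scale="X2")
beta = fit.params["temp_lag"]
print(f"% change per 1C at lag {lag}: {100 * (np.exp(beta) - 1):.1f}")
```

The DLNM cross-basis for rainfall used in the study has no direct one-liner here; this sketch only shows the single-lag Poisson structure.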
A strong seasonal variation in malaria incidence coincided with seasonal variation in mean temperature and rainfall (Figure 1), with the highest average number of malaria cases reported during the 31st week. Pearson correlation coefficients between the number of P. vivax malaria cases and mean temperature, relative humidity, DTR, duration of sunshine, and rainfall were 0.74, 0.60, −0.48, −0.22, and 0.44, respectively.

Estimated effects of temperature, relative humidity, DTR, and duration of sunshine. Associations between malaria cases and temperature, relative humidity, DTR, and duration of sunshine during the same week, estimated using Poisson regression with adjustment for seasonal variation, between-year variation, and other weather variables, are shown in Figure 2. The numbers of malaria cases were positively associated with relative humidity and minimum, mean, and maximum temperature, and negatively associated with DTR and duration of sunshine. Figure 3 shows estimated single-week lag effects of mean temperature, relative humidity, DTR, and duration of sunshine based on generalized linear Poisson regression adjusted for the other climate variables. A 1°C increase in mean temperature was associated with a 16.1% [95% confidence interval (CI): 15.3, 16.9%] increase in malaria cases during the same week (unlagged), and with a maximum increase of 17.7% (95% CI: 16.9, 18.6%) after a 3-week lag. A 10% increase in relative humidity was associated with a 10.4% (95% CI: 2.5, 18.9%) increase in malaria cases during the same week. However, numbers of malaria cases decreased in association with a 10% rise in relative humidity during previous weeks, with a 40.7% decrease (95% CI: −44.3, −36.9%) when lagged by 7 weeks. A 1°C increase in the DTR was associated with a 7.3% decrease (95% CI: −10.9, −3.5%) in malaria during the same week, and a 24.1% decrease (95% CI: −26.7, −21.4%) when lagged by 7 weeks. A 10-hr increase in the duration of sunshine per week was associated with a 5.1% (95% CI: −8.4, −1.7%) and a 4.8% (95% CI: −8.1, −1.5%) decrease in malaria when lagged by 2 and 4 weeks, respectively, after adjusting for mean temperature and relative humidity.

Estimated effect of rainfall. A three-dimensional plot of the estimated effect of rainfall based on a nonlinear unconstrained distributed lag model, with adjustment for the lagged effect of temperature using natural cubic splines (5 df), suggests an effect of weekly rainfall up to a total of approximately 350 mm when lagged 2–4 weeks (Figure 4A). Model estimates, fitted with the assumption that the effects of rainfall were absent above 350 mm/week, indicate statistically significant increases in relative risk (RR) for P. vivax malaria incidence associated with a 10-mm increase in weekly rainfall after lags of 3–7 weeks (Figure 4B). The cumulative RR for a 10-mm increase in weekly rainfall ≤ 350 mm/week with a 10-week lag is 3.61 (95% CI: 1.69, 7.72) (Figure 4C). The effects of rainfall on daily malaria incidence were also estimated in order to determine whether a more specific lag effect is suggested when daily (vs. weekly) data are modeled.
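For readers tracing these percentages back to the regression output: with a log link, a coefficient β for a d-unit exposure increase translates into a percent change of 100·(exp(βd) − 1). A small sketch of this conversion, our own code; the standard error below is an assumed value chosen only to reproduce an interval of the quoted width.

```python
import numpy as np

def pct_change(beta, se, d=1.0):
    """Percent change in incidence (point, lower, upper) per d units."""
    lo, hi = beta - 1.96 * se, beta + 1.96 * se
    return tuple(100.0 * (np.exp(b * d) - 1.0) for b in (beta, lo, hi))

# Example: a beta giving the 17.7% rise per 1 degree C at a 3-week lag.
print(pct_change(beta=np.log(1.177), se=0.0037))
# -> roughly (17.7, 16.9, 18.6), matching the quoted interval
```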
Estimated associations based on an adjusted generalized additive model indicated a linear effect for rainfall ≤ 50 mm/day (data not shown), consistent with the effect of ≤ 350 mm of weekly rainfall shown in Figure 4A. We therefore estimated the percent change in malaria incidence due to single-day lag effects of rainfall of < 50 mm/day occurring ≤ 60 days earlier, after adjusting for the daily mean temperature, duration of sunshine, seasonal variation, day of week, and year. Estimated lagged effects were statistically significant for 28-, 47-, 48-, and 56-day lags, with corresponding percent increases in daily malaria incidence with a 10-mm increase in rainfall of 5.7% (95% CI: 2.4, 9.1%), 5.4% (95% CI: 2.0, 8.9%), 4.2% (95% CI: 0.8, 7.7%), and 4.5% (95% CI: 0.9, 8.2%) (Figure 5A). In addition, we analyzed the effect of daily rainfall after the average peak incidence of malaria during the 31st week of the year (Figure 1, approximately 21 August). Significant lag effects were observed at 38, 40, 43, and 45 days, with the largest percent change at 58 days (a 19.1% increase with a 10-mm increase in daily rainfall ≤ 50 mm; 95% CI: 2.1, 39.0%) (Figure 5B).

Discussion. Although P. vivax is the prevalent strain of malaria in seasonal climates (i.e., with distinct dry and wet seasons) (Gilbert and Brindle 2009), few studies have been conducted on the association of the transmission of P. vivax malaria with climatic variables in temperate areas using empirical and short-interval data. Zhang et al. (2010) estimated effects of climate factors on P. vivax malaria in a Chinese city located in a temperate zone using monthly data, but did not include rainfall and humidity in their Seasonal Autoregressive Integrated Moving Average model because they were not statistically significant predictors of malaria incidence. Bi et al. (2003) reported that monthly mean temperature and total monthly rainfall were significant predictors of P. vivax malaria after a 1-month lag in Shuchen, China, a subtropical city. We estimated lagged effects of diverse climatic variables, including temperature, rainfall, relative humidity, DTR, and duration of sunshine, on the incidence of P. vivax in a temperate area of the ROK using weekly and daily data, which provided more detailed information on the relationship between climate factors and P. vivax malaria than previous studies.

Figure 4. Estimated effect of weekly rainfall on P. vivax malaria cases fitted with a distributed lag nonlinear model adjusted for mean temperature, seasonal variation, and between-year variation. (A) Three-dimensional image of the associations between an increase in weekly rainfall and malaria cases adjusted for temperature with a maximum lag of 10 weeks, (B) lag-specific relative risk estimates (95% CI), and (C) estimated cumulative lagged RR (95% CI) for a 10-mm increase in weekly rainfall up to 350 mm/week.

We estimated significant effects of weekly rainfall on malaria incidence after accounting for distributed lag effects. Although not directly comparable due to differences in methodology, Plasmodium species, and climate zone, our results are consistent with a study of Plasmodium falciparum in the East African highlands that reported a 6–138% increase in malaria incidence with a 22% increase in monthly rainfall (Zhou et al. 2004).
The main advantage of the distributed lag model is that it can incorporate a detailed representation of the time course of the exposure–response relationship, and thereby estimate the overall effects of climate variables in the presence of lagged effects or harvesting. The DLNM is an extension of the distributed lag model that simultaneously includes functions describing the shape of the relationship between exposure and response and its distributed lag effects (Gasparrini et al. 2010). We estimated the relation between rainfall and malaria incidence adjusted for confounding by temperature, which was modeled using natural cubic splines (5 df) and a moving average with ≤ 10 weeks of lag with a 4-week lag knot to reflect the change in the estimated effect of mean temperature on malaria at 4 weeks. When the temperature was modeled using a polynomial lag type instead of the moving average, the estimated relative risk for the cumulative effect of rainfall increased from 3.61 (95% CI: 1.69, 7.72) to 5.02 (95% CI: 2.13, 11.8). However, when we adjusted for the duration of sunshine per week in addition to temperature (modeled as a moving average for the lagged effect), the cumulative effect of rainfall decreased to 2.95 (95% CI: 1.19, 7.31). When the maximum lag was prolonged to 12 weeks, the overall cumulative effect of rainfall was no longer significant (RR = 1.98; 95% CI: 0.76, 5.22), whereas the RR increased to 4.19 (95% CI: 2.30, 7.65) with a maximum lag of 8 weeks after adjusting for temperature. In temperate regions, the onset of primary symptoms after P. vivax infection reflects two different incubation periods: short and long (or intermediate). If the sporozoite injected by a mosquito into a human host develops directly into a tissue schizont, the incubation period between the initial infection and the onset of symptoms is short, typically from 10 days to 4 weeks, resulting in an early primary attack. Conversely, if the sporozoite develops into a dormant hypnozoite (rather than a tissue schizont) in liver cells, the onset of illness may be delayed for ≤ 1 year, resulting in a long incubation period and a late primary attack (Hankey et al. 1953; Sinden and Gilles 2002). Nishiura et al. (2007) estimated that the average length of a short incubation in the ROK is 26.6 days (95% CI: 21.0, 32.2 days). In addition to the incubation period between infection and symptoms, the association between rainfall and malaria incidence also reflects the time required for mosquito larvae to develop into adult mosquitoes, and the time required for P. vivax gametocytes to develop into infectious sporozoites after an adult mosquito has taken a blood meal from an infected human host [referred to as the extrinsic incubation period (EIP)]. We estimate that the EIP for P. vivax in the ROK is approximately 12–14 days according to the formula EIP = DD/(T − Tmin), where degree-days (DD) represent the required accumulation of temperature units over time (estimated to be 105 DD for P. vivax), Tmin indicates the minimum temperature for parasite development (14.5–15°C for P. vivax) (Martens et al. 1995), and T indicates the average temperature (23°C during 2001–2009). In addition, the estimated time for the development of a mosquito from egg to adult at 22–26°C has been reported to be 11.2–17.6 days (Bayoh and Lindsay 2003). Therefore, the time required to complete mosquito development and the EIP would delay the apparent effect of increased rainfall on malaria incidence by at least 23–32 days.
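The degree-day arithmetic above is easy to check; a short sketch (our own) reproduces the quoted 12–14 day range.

```python
def eip_days(T, dd=105.0, t_min=14.5):
    """Extrinsic incubation period: EIP = DD / (T - Tmin), in days."""
    return dd / (T - t_min)

print(eip_days(23.0))                 # ~12.4 days with Tmin = 14.5 C
print(eip_days(23.0, t_min=15.0))     # ~13.1 days with Tmin = 15 C
```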
Based on the estimated time required for mosquito development, the EIP, and a short incubation period before the onset of symptoms, we would expect the effect of rainfall on malaria incidence to be lagged by approximately 42–61 days. Models based on daily data after 21 August, the approximate peak in annual malaria incidence during the study period, indicated clear lag effects at 43–45 days, with the largest increase at 58 days. These estimated effects were much greater than estimates based on data for the entire year, which is consistent with our hypothesis that the majority of malaria cases occurring after late August represented a primary attack after a short incubation period. In contrast, we hypothesize that incidence at other times during the year would include a larger proportion of late primary attacks after a long incubation, which would have a much weaker temporal relation (if any) with rainfall and other climatic variables.

Figure 5. Estimated percent increase (95% CI) in daily malaria incidence expected with a 10-mm increase in rainfall (≤ 50 mm/day) for single-day lags based on all data (A) and based on data for 21 August through 31 December only (B).

Parham and Michael (2010) reported that the malaria transmission rate strongly depends on the vector (mosquito) density, and that changes in rainfall govern malaria endemicity, invasion, and extinction by influencing mosquito abundance. Although it has been assumed that effects of rainfall would be less predictable and more difficult to quantify than effects of temperature (Bi et al. 2003; Zhang et al. 2010), our analyses suggest a clear relationship between rainfall and malaria transmission in temperate regions, with a lag effect consistent with the time required for the development of the mosquito, the EIP, and the incubation period in the human body. The estimated effect of a 1°C rise in weekly mean temperature on the incidence of P. vivax malaria in the ROK (a 17.7% increase; 95% CI: 16.9, 18.6%) was larger than the 11.8–15.8% increase estimated for the city of Jinan, China, which is also located in a temperate region (Zhang et al. 2010). An increase in relative humidity had a positive effect on malaria incidence during the same week. However, numbers of malaria cases decreased in association with a rise in relative humidity when lagged by 2–8 weeks. We estimated a negative effect of DTR on malaria with a long lag period, in accordance with previous research by Paaijmans et al. (2009), who reported that large temperature fluctuations can slow increases in malaria under warm conditions, effects that will tend to lessen the impact of increases in mean temperature. Our results also showed that an increase in the duration of sunshine was associated with a decrease in malaria, which is reasonable considering that the main hours of mosquito activity are after sunset.
The present study provides strategic insights into the current and future impact of climate change on malaria transmission, especially for P. vivax malaria in temperate regions, based on reliable daily and weekly data on malaria incidence, which must be reported when diagnosed in the ROK.

Conclusions. The incidence of malarial infection in a relatively high latitude area appears to depend strongly on humidity, DTR, duration of sunshine, and rainfall, as well as on temperature. Lagged effects estimated for the association between rainfall and malaria are consistent with expectations given the time necessary for mosquito development, P. vivax development, and the onset of malaria symptoms. Effects of climate change on rainfall, temperature, and other climatic variables may increase the range of populations at risk of P. vivax infection, especially in temperate regions.
2017-04-04T08:14:42.122Z
2012-06-18T00:00:00.000
{ "year": 2012, "sha1": "c2898ab96de044a7b6a6f147b02abd2ca00e3b82", "oa_license": "CC0", "oa_url": "https://doi.org/10.1289/ehp.1104577", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2898ab96de044a7b6a6f147b02abd2ca00e3b82", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
260751309
pes2o/s2orc
v3-fos-license
P758: THE MECHANISM OF ABNORMAL EXPRESSION AND MUTATION OF RBPJ GENE PROMOTING CLONE PROLIFERATION BY REGULATING PAF1 IN PAROXYSMAL NOCTURNAL HEMOGLOBINURIA

Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal disease of hematopoietic stem cells caused by somatic mutations. Our previous studies used whole exome sequencing (WES) technology to perform deep sequencing on 13 patients with PNH (Fig. A), and also found that the relative expression of the gene is significantly correlated with the clinical condition of PNH. Therefore, we speculate that high expression of the RBPJ gene may be involved in the proliferation of PNH abnormal clones.

Aims: To explore the mechanism by which RBPJ gene mutation is involved in PNH clone proliferation.

Methods: We obtained blood samples from 6 PNH patients and 5 healthy individuals and used qRT-PCR to assess RBPJ expression. The PNH cell line (KO) was constructed by knocking out PIGA in the K562 cell line using CRISPR/Cas9 technology. Transfection of siRNA resulted in K562 and PNH cell lines with low expression of RBPJ. The EdU technique and flow cytometry were used to assess cell proliferation and cycle. A K562-KO cell line stably expressing flag-RBPJ was constructed, and the RBPJ-interacting proteins in each group were identified by LC-MS after silver staining. Co-IP confirmed the physical interactions of RBPJ. Western blot analysis revealed the expression of RBPJ, PAF1, NOTCH1, Gapdh, and Tubulin in the low-RBPJ-expression K562 and PNH cell lines. PNH and K562 cell lines with low expression of PAF1 were constructed by siRNA transfection, and Western blot was used to detect RBPJ, PAF1, and Gapdh expression.

Results: PNH patients had higher levels of RBPJ mRNA expression (Fig. B). Based on this result, RBPJ expression was knocked down in the K562 and PNH cell lines and the effect was verified (p < 0.05), indicating that the low-RBPJ-expression K562 and PNH cell lines were successfully constructed. Flow cytometry showed that the proliferation of the PNH cell line was decreased (Fig. C) and apoptosis was increased (Fig. D) after RBPJ knockdown (p < 0.05). LC-MS was used to identify the RBPJ-interacting proteins in each group. A total of 281 proteins interacting with RBPJ were identified (Fig. E), including the core NOTCH pathway component Notch1 and other proteins already reported to interact with RBPJ, as well as polymerase-associated factor 1 (Paf1) and other components of the Paf1 complex. Thus, we speculate that the Paf1 complex may be involved in regulating the expression of RBPJ in PNH patients. We chose PAF1 from the interacting proteins for further investigation, and used immunoprecipitation experiments to further confirm the physical interaction between PAF1 and RBPJ in the K562-KO cell line, whereas there was no physical interaction between PAF1 and RBPJ in the K562 cell line (Fig. F).
Upon RBPJ knockdown, PAF1 protein content increased in the K562 cell line, while it decreased in the K562-KO cell line. Upon RBPJ knockdown, NOTCH1 protein content also decreased in both cell lines (Fig. G). Knockdown of PAF1 expression in the K562 and PNH cell lines was confirmed (p < 0.05), indicating that PAF1-low K562 and PNH cell lines were successfully created. In the K562 and K562-KO cell lines, PAF1 knockdown decreased the amount of RBPJ protein (Fig. H).

Summary/Conclusion: Patients with RBPJ mutations showed higher RBPJ expression. After RBPJ was knocked down in the PNH cell line, cell proliferation decreased and the apoptosis rate increased. After PAF1 was knocked down, the amounts of RBPJ and PAF1 protein decreased, and there was a physical interaction between PAF1 and RBPJ, while the expression of NOTCH1 protein decreased. RBPJ may regulate PAF1 through the NOTCH signaling pathway, thus promoting PNH clone proliferation.
2023-08-10T15:05:13.189Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "d01fd7ff0ba868e9ab668ddad01291c0aa7c0d71", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "35168582330ef86bae4176fca47ec70c18cf3c4c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
85508467
pes2o/s2orc
v3-fos-license
Determination of the Biomass Content of End-of-Life Tyres

Studies have been conducted in France and Spain for (1) the validation of sampling methods to achieve representative samples of end-of-life tyre (ELT) materials and (2) the comparison and validation of test methods to quantify their biomass content. Both studies conclude that 14C techniques are the most reliable for determining the biomass content of end-of-life tyres. Indeed, thermogravimetry and pyrolysis-GC/MS do not lead to results consistent with the theoretical content of biogenic materials present in tyres, and results in both cases differ considerably from the known natural rubber content of the reference samples studied using thermogravimetric analysis. Furthermore, with the latter two techniques, natural isoprene cannot be distinguished from synthetic isoprene. Results obtained with radiocarbon analysis based on 14C contents could be used as reference values for the biomass content of ELTs: in the ranges of 18-22% for passenger car tyres and 29-34% for truck tyres, in line with the actual content of natural rubber and other components. Additionally, the presence of textile fibres and stearic acid, which are known sources of biomass in the tyre, cannot be evaluated by thermogravimetry and pyrolysis-GC/MS techniques.

Introduction

An end-of-life tyre (ELT) is any pneumatic tyre removed from any vehicle and not selected to be mounted on a vehicle again [1]. Because the end-of-life tyre is a non-reusable tyre in its original form, it enters a waste management system based on product/material recycling and energy recovery. The compositions and combustion properties of ELTs are similar to, or even better than, those of coal (see Table 2). Due to their high carbon content (60-70%), they have become an interesting alternative fuel with a net calorific value in the range of 26.4-30.2 MJ/kg [2]. The use of secondary fuels is progressively increasing, not only because of its economic benefits, but also because of the environmental advantages of using solid recovered fuels [3]. These include natural resource savings, the preservation of fossil fuels such as petroleum coke, and above all the reduction in net emissions of CO2 due to the biogenic origin of some components of the tyre, mainly natural rubber. Indeed, according to Directive 2003/87/EC [4], emissions associated with the biomass fraction are considered to be neutral with regard to the greenhouse effect. It also leads to the reduction of other pollutants [5] such as SOx, mainly because the sulphur content in tyres (1-2%), used for the vulcanization process, is in any case lower than the quantity in most fossil fuels (see Table 2). ELTs contain a fraction of biogenic carbon that mainly comes from their natural rubber content. This is not, however, the only source of biogenic carbon. Most tyre formulations also include stearic acid in small quantities, used as an activator of the vulcanization reaction, and also smaller quantities of rayon, a natural fibre used as a reinforcement material in the manufacturing of some tyre carcasses. Nowadays, cotton can only be found in the carcasses of older tyres [6,7]. Despite the range of variation in the formulation of tyres, in practice their composition hardly varies, and thus they are one of the more dependable fuels derived from wastes. The greatest variation in biomass content is found among tyres of different types: passenger car tyres, truck tyres or agro vehicle tyres [1].
Tyre particles obtained from treatment by shredding are quite heterogeneous in terms of biomass content. This intrinsic heterogeneity of tyre particles at the microscopic level is related to their composition. For example, elastomer mixtures are not the same in each part of the tyre (see Figure 1). Although the heterogeneity does not appear at the industrial scale (consumption of around one ton per hour), the microscopic heterogeneity is of importance when it is necessary to take representative laboratory samples and to prepare them for analysis, in order to prevent divergent results. In order to quantify the biomass content of an ELT, three analytical techniques have been identified. The first one is a method that determines the biogenic carbon content by measuring the activity of the 14C isotope, a technique employed in archaeology to date organic materials [8-10]. Another widely used technique for determining the composition of vulcanized elastomers is thermogravimetric analysis (TGA) [11]. This method is based on the measurement of the weight variation of a sample when it is submitted to a progressive increase in temperature in a controlled atmosphere. The third method, pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), has been used extensively for the qualitative and quantitative identification of polymer blends [12]. The results obtained under sampling, testing and analysis conditions respecting good practices with regard to heterogeneous materials show a remarkable stability in the measured parameters. This chapter provides, first, the appropriate methodology to select test samples from tyres and, second, novel information on the use of different technologies for the determination of their biomass content. Two case studies conducted independently by ALIAPUR and SIGNUS, in France and Spain, respectively, have established which techniques are appropriate to measure biomass content in tyres and which are not. In both studies, reference values of biogenic content have also been established, and the results are quite close to the theoretical values. This chapter then offers an analysis of the differences between the results of the three techniques, as well as their advantages, disadvantages and problems.

Sampling procedure for the estimation of the biogenic fraction of end-of-life tyres

The management of ELTs to be used as secondary fuel mainly consists of the shredding of relevant quantities of different types of tyres coming from a diverse range of origins. Shredded material can be stocked in piles, in which each particle contains parts of different layers of the tyre, each with a particular composition. So, the first problem to solve is how to estimate the biomass content by taking random portions of material. If this process is not performed carefully, there is an important risk that the sample will not be representative enough of the total stock. This is especially critical taking into account that the analytical techniques used hardly need more than a few milligrams of material. One of the key points for this calculation of biomass content is therefore the design of a sampling plan which is representative of the big-sized lots. In the case of samples taken on the shredding site, the samples will be representative of one to several days of production. Depending on the size of the facility, the sample could represent tens or even hundreds of tons of material.
In the case of samples taken during loading or unloading processes at the storage site, the samples will be representative of several weeks of production; the stocks in this case could even amount to thousands of tons of material.

General criteria for the definition of the sampling procedure

The minimum mass needed to produce a representative laboratory sample from the lot should be determined by the formula given in [13], whose parameters are:

-- m_m is the mass of the minimum sample size, in grams as received.
-- d_95 is the nominal top size of a particle (a mass fraction of 95% of the particles are smaller than d_95), in mm. This value is measured by means of sieves following the method described in CEN/TS 15415:2006 [14].
-- s is the shape factor, in mm³/mm³; the reference value is 1.0 in the case of granular materials with nominal size smaller than 50 mm [13].
-- λ is the average particle density of the particles in the solid recovered fuel, in g/mm³ as received [15].
-- g is the correction factor for the particle size distribution. Its value is related to the superior nominal size d_95 and the minimum particle size d_05.
-- p is the fraction of the particles with a specific characteristic (such as a specific contaminant), in g/g, and is equal to 0.1 [13].
-- Cv is the coefficient of variation. Its value is 0.1 [13].

General criteria for the preparation of laboratory samples

Samples were prepared using a riffle splitter with 14 slots of 27 mm in width, until the mass of the sample is greater than the minimum size of the laboratory sample necessary to guarantee total representativeness down to a few milligrams. This must be calculated according to the third-power law [16], which for granular materials is expressed as

m = α · d_95³,

where m is the mass retained after each sample division step in grams, d_95 is the nominal top size in millimetres, and α is a constant over the whole sample preparation procedure for a particular material, in g/mm³ (a numerical illustration follows at the end of this subsection). Depending on the size of the laboratory sample for each method, a particular number of tests is needed to guarantee the representativeness of the sample.

Methodology for taking representative samples from ELTs

The preparation of the sample consists of a reduction made in several stages of fragmentation/quartering until the different subpopulations (rubbers, metal wires, textile fibres) are obtained (see Figure 2). The standard means and procedures for reducing test samples are carried out in a laboratory with the appropriate equipment (sample division, cryogenic mill…). Figure 3 shows the flow diagram of the whole process for the collection and preparation of the representative sample to obtain a test portion. Taking into account the size of the average production lots, it was estimated that 1.5 t of whole tyres is the minimum necessary quantity of ELTs. For the Spanish case, four different samples were taken in order to estimate the biomass content by type of tyre. Starting with 1.5 t of tyres, the first step is the shredding of tyres to reduce the size of the sample and to obtain a mix of particles from different tyres. This first size reduction can be carried out in a primary shredder to obtain pieces within the interval of 35-200 mm. The sample must be taken at the production platform, using a tool of the open rectangular shovel type, by completely cutting the flow of falling material. Samples are considered valid if a total quantity of at least 25 kg is taken per increment to represent a production of 1.5 t. In a second step, those particles are reduced to below a maximum size of 20 mm.
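As a quick illustration of how the third-power law drives the sample-division plan, the sketch below is our own; the value of α is an illustrative placeholder, not one taken from the studies.

```python
# Third-power law for sample division: m = alpha * d95**3.
def min_retained_mass(d95_mm, alpha=0.5):
    """Minimum mass (g) to retain for nominal top size d95 (mm);
    alpha is an illustrative constant in g/mm^3."""
    return alpha * d95_mm ** 3

for d95 in (20.0, 10.0, 1.0):   # successive size-reduction stages
    m = min_retained_mass(d95)
    print(f"d95 = {d95:5.1f} mm -> retain at least {m:10.1f} g")
```

With α = 0.5 g/mm³ this gives 4 kg at 20 mm and about 0.5 g at 1 mm, the same order of magnitude as the 1.5-3 kg subsamples and milligram test portions described in the text: each comminution stage sharply reduces the mass that must be carried forward.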
The samples produced by this secondary shredder should be taken in smaller portions, at constant intervals of time, called increments. No fewer than 24 increments are recommended to achieve a representative sample. The obtained sample must be quartered to obtain subsamples of a quantity between 1.5 and 3 kg, which is larger than the minimum quantity necessary for this type of material (around 1.0 kg). Then the steel fraction should be removed from the sample using a magnet, taking special care to leave behind the rest of the material (rubber and textile fibres), which should then be reduced to particles sized under 1 mm. Different methods could be used for this purpose, especially cryogenic milling. The obtained product should then be quartered to obtain a test portion. For the French case, the procedure is similar to the one described above (see Figure 3).

Test methods for the determination of biomass content in tyres

Different test methods were identified in the literature for the determination of the natural rubber content in blends, which can thus be used indirectly for the quantification of the biomass content in tyres [17]. Some of those methods are based on the determination of elastomer content and are particularly able to distinguish the presence of isoprene, the main component of natural rubber, among the other elastomers of the rubber blend. This natural rubber has been identified as the main source of biogenic carbon in a tyre; nonetheless, this component is not the only bio-based one. Thus, some of the test methods identified for this chapter are not really conclusive about the total content of biomass in tyres. On the other hand, there are significant differences in the time needed to analyse the samples, ranging from a few hours to about two months depending on the method (see Table 3).

Pyrolysis-GC/MS

This method is based on the degradation of a sample in an electric furnace at 500-600°C, keeping the sample within this temperature range [18]. This temperature range is recommended to obtain rapid pyrolysis without excessive degradation or carbonization of the rubber sample. However, a temperature of 550°C is advised to obtain the maximum quantity of pyrolysate for NR, IR, BR, SBR, IIR, BIIR and CIIR, which are the major elastomeric components of a tyre. The pyrolysis must be performed while passing a stream of nitrogen through the pyrolysis reactor; the nitrogen displaces air, prevents oxidation and facilitates the transfer of the pyrolysis products to the gas chromatograph. The gas chromatograph is equipped with a 30-m-long non-polar fused-silica capillary column and is coupled to a mass spectrometer operating in scan mode, which detects and registers decomposition products between 35 and 550 atomic mass units (amu). The pyrolysis-GC/MS carried out in one of the studies is based on ISO standard 7270-2 and requires a calibration curve, produced by pyrolysing samples with known styrene/butadiene/isoprene ratios. The approach of this method is to evaluate the natural rubber content in a sample of tyres: it is possible to calculate the total concentration of elastomers in samples, and also the concentration of natural elastomers, by reading the result off the previously produced calibration curve. The authors of this study observed several problems during the application of this method, the main one being that natural isoprene cannot be distinguished from synthetic isoprene.
One major drawback is that determining the content by comparing results with a curve made from known samples with different styrene/butadiene/isoprene ratios only gives relative values within the elastomeric fraction and not within the whole sample. An unrealistic biomass composition could therefore be reported with this method. Another problem derived from the use of PY-GC/MS is the non-detection of other biogenic components of the rubber. A variability of the results with the pyrolysis temperature and with the extraction time in solvents before pyrolysis is also reported. Finally, the presence of brominated butyl could also disturb the results. Taking all these issues into account, pyrolysis-GC/MS is therefore not considered a valid technique for the evaluation of biogenic content.

Thermogravimetric analysis (TGA)

This method is based on the continuous measurement of the weight loss of a sample submitted to a temperature ramp in a controlled atmosphere [11,19]. In the case of TGA, each type of elastomer has a particular temperature at which the loss of mass occurs. When a sample of vulcanized rubber is tested, particular peaks appear at specific temperatures. At lower temperatures, below 300°C, moisture, volatile components derived from plasticizers and other simple chemicals of the rubber blend volatilize. In the range from 300 to 525°C, most of the elastomers in a tyre rubber blend are degraded by the heat. The first thermal decomposition corresponds to natural rubber (NR), whose maximum weight-loss rate occurs in the 300-400°C interval; the maximum weight-loss rate of styrene-butadiene rubber (SBR) occurs between 420 and 550°C. One of the studies tried to produce a reference calibration curve based on different binary NR/SBR rubber samples of known composition and the intensity (height) of the peaks of the DTG curves. Each measurement produces a typical graph of weight loss as a function of the increase in temperature. Figure 4 shows an example of a weight loss curve for a (NR 75%, SBR 25%) blend and its derivative weight loss curve, which shows one minimum per elastomer, of heights H_NR and H_SBR. The calibration curve in Figure 5 represents r as a function of the ratio of these peak heights, where r is the percentage of NR in the elastomeric fraction (NR+SBR), H_NR is the maximum rate of weight loss in the area where NR decomposes, and H_SBR is the maximum rate where SBR decomposes. Using this calibration curve, the value of r of an unknown sample can then be determined, based on the height of the peaks for NR and SBR. However, it has been reported by the authors of this chapter [17] that the correlation between the result for NR obtained by TG analysis and the actual content of NR in samples with known quantities of this elastomer is very poor. As in the case of PY-GC/MS, this analysis technique is only valid for an estimation of the natural rubber content, independently of the accuracy of the results; indeed, there is no possibility of distinguishing natural isoprene from synthetic isoprene. Furthermore, problems derived from the use of this technique arise when there is a combination of more than two elastomers in the sample; in such cases, the identification and quantification of the elastomers are very difficult because of the overlapping of peaks. Finally, other biogenic components of the rubber, such as cotton or stearic acid, are not detectable by this technique. Taking all these issues into account, thermogravimetric analysis is therefore not considered a valid technique for the evaluation of biogenic content.
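Purely for illustration (the studies ultimately reject TGA for this purpose), the sketch below shows how a peak-height calibration of the kind just described would be applied in practice. The linear fit and all data points are hypothetical placeholders, not the study's actual calibration.

```python
import numpy as np

# Hypothetical calibration set: known NR percentage r in the NR+SBR
# fraction versus the DTG peak-height ratio H_NR / (H_NR + H_SBR).
r_known = np.array([0.0, 25.0, 50.0, 75.0, 100.0])       # % NR
height_ratio = np.array([0.02, 0.24, 0.49, 0.76, 0.97])  # illustrative

# Fit a straight line r = a * ratio + b (the real curve need not be linear).
a, b = np.polyfit(height_ratio, r_known, 1)

def nr_percent(h_nr, h_sbr):
    """Estimate % NR in the elastomeric fraction from DTG peak heights."""
    return a * (h_nr / (h_nr + h_sbr)) + b

print(f"estimated NR content: {nr_percent(0.8, 0.3):.1f} % of NR+SBR")
```

Note that even a perfect calibration of this kind would still report NR only relative to the NR+SBR fraction, which is exactly the limitation raised above for whole-sample biomass estimates.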
Radiocarbon analysis: 14C methods

The determination of the biomass content in different materials using 14C methods is based on analytical procedures used for the determination of the age of the carbon contained in materials [8,9]. Three well-known methods for the determination of 14C content are described in the literature, two of which have been used in the studies made by the authors of this chapter. Those methods are commonly accepted for the determination of the age of objects. The technique is based on the principle that all the carbon atoms in organic materials have either a contemporary origin, proceeding directly or indirectly from the fixation of contemporary atmospheric CO2 by means of photosynthesis, or a fossil origin, having been fixed millions of years ago. Every living organism contains a quantity of 14C proportional to the relative abundance of 14C in the atmosphere. Thus, the percentage of biomass in a material is directly proportional to its 14C content. Fossil fuels, however, do not contain 14C, as its half-life is 5,700 years [20,21].

14C/12C determination by beta-ionization (BI)

One of the studies used the analysis of the concentration of carbon of bio-based origin (beta-ionization method), focusing on the biogenic carbon assay and considered to be the more accurate approach. The test was developed according to standard ASTM D6866-08 and determined the biogenic carbon content specifically for this purpose. In particular, in this case the test method was adapted and used for measuring 14C in the elastomeric fraction and in the textile fraction.

Liquid scintillation spectrometer (LSC) 14C determination

Another alternative is 14C determination by liquid scintillation spectrometry (LSC), using butyl-PBD as scintillation agent added to benzene (C6H6) samples previously prepared by the standard benzene-synthesis reactions (conversion of the CO2 to lithium carbide, hydrolysis of the carbide to acetylene, and trimerization of the acetylene to benzene). The CO2 is produced in a combustion chamber by burning an appropriate sample of rubber coming from a tyre; it was ensured that the reacting CO2 was coming only from the sample. The 14C activity was corrected for isotopic fractionation according to the directives of the ASTM D6866-05 standard test methods for the determination of biomass content [10]. To do this, the 12C/13C ratio was established in the stable-isotope laboratory of reference.

Discussion of results of the biomass content in tyres

Two different studies have been conducted recently, one in France and the other in Spain, for the quantification of the biomass content in tyres. In both cases, the purpose of the studies was not only the quantification of the biomass content in tyres but also the validation, or not, of different techniques for this purpose. In paragraph 3 of this chapter, three techniques have been compared: PY-GC/MS and TGA were discarded, and the techniques related to 14C determination have been accepted and are highly recommended. Table 4 shows the number of tests conducted for each method. Table 5 indicates the content of each sample of tyres in the recycling plant selected to conduct this study in Spain, taking into account the Spanish market share in terms of sizes. The lot coded PT140611-1 represents a sample of motorcycle, passenger car, SUV and van tyres, representative of the percentages of the market in Spain. The other sample, PA140611-2, was randomly taken from a stockpile of those types of tyres. In addition, two other samples were prepared, one corresponding to truck tyres and the other to agro tyres.
Tyres were shredded and reduced to granules following the procedure described in paragraph 2 of this chapter. Samples were then divided using a riffle box with the appropriate number of slots. After reducing the size of the particles to below 20 mm, the steel wires were removed from the samples. This process was carried out with a magnet, and the percentage of steel in the samples is given in Table 6. Some rubber particles remain attached to the steel samples, so a calcination of these samples was performed to calculate the actual content of metal wires in the representative samples. In the same way, the rubber granulate samples were reduced to a fine powder, under 1 mm. The French study used specific laboratory equipment that permits a complete separation of the three different fractions of materials, with no contamination between them, starting from the shredding stage; the calcination step for the steel fraction was therefore not used. Summaries of the content of the different materials are listed in Tables 7 and 8. The rubber and textile fractions were then also reduced to particles under 1 mm. In both studies, a cryogenic laboratory mill was used to achieve this final particle-size reduction in the samples.

Results of the 14C analysis

According to the sampling plan, the sample for the laboratory test is representative of the lot if its quantity is over 66 mg. In the case of liquid scintillation spectrometry, the mass of the sample should be between 10 and 12 g, and then only one single test per sample is necessary. Nevertheless, all the samples were tested twice to ensure the repeatability of the results. Results of 14C per sample of rubber and textile using this LSC are shown in Table 9. The percentage of modern carbon is calculated with the following expression:

pMC = (A_SN / A_ON) × 100 (7)

where A_SN is the isotopic activity of the sample, standardized for isotopic fractionation, A_ON is the activity of the oxalic acid reference, also standardized for isotopic fractionation, and pMC is the percentage of modern carbon. Using this Eq. (7), the results of the biomass content in the Spanish study are summarized in Table 10. In general terms, duplicated samples show similar values, all of them within the limits of tolerance established for this analytical technique. Table 10 shows the average value of the percentage of biomass in the rubber-textile fraction for each sample. The final result for the biomass in the tyre samples, taking into account the content of steel in each one, is given in Table 11. As a conclusion, the biomass contents in the samples corresponding to passenger car tyres, both the random one and the one representative of the Spanish market share, are exactly the same, with an average value of 22.2%.

Table 11. Results of biomass content in tyres samples (Spanish case).

Conclusions

A tyre is a complex product with a biomass content heterogeneously distributed inside it. Furthermore, there is also a huge variability in composition across the market, by brand and type, each one having a different biomass content. Therefore, to carry out a proper study of the biomass content of a batch or of a sample representing a significant number of tyres, it is absolutely necessary to make a good selection of the laboratory sample by means of statistical methods to ensure representativeness of the lot. In this chapter, the procedure for managing tyre samples to obtain representative samples by continuous size reduction and quartering has been described.
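As a quick numerical illustration of how Eq. (7) and the steel correction fit together, the sketch below reproduces the arithmetic behind Tables 9-11. The activity values are hypothetical, chosen only so the output lands near the reported passenger-car figure, and pMC is treated as a direct proxy for the biomass percentage of the steel-free fraction, as in the chapter.

```python
def percent_modern_carbon(a_sn, a_on):
    """Eq. (7): pMC = 100 * A_SN / A_ON, with both activities already
    standardized for isotopic fractionation."""
    return 100.0 * a_sn / a_on

def whole_tyre_biomass(pmc_rubber_textile, steel_fraction):
    """Steel carries no biogenic carbon, so the rubber-textile result
    is scaled by the non-steel mass fraction of the tyre."""
    return pmc_rubber_textile * (1.0 - steel_fraction)

pmc = percent_modern_carbon(a_sn=3.1, a_on=12.4)   # hypothetical activities
print(f"rubber-textile biomass: {pmc:.1f} %")      # 25.0 %
print(f"whole tyre (12% steel): {whole_tyre_biomass(pmc, 0.12):.1f} %")
```

With a 12% steel content (hypothetical here; the measured values are those of Table 6), a 25% rubber-textile result translates into 22% biomass for the whole tyre, the order of magnitude reported for the Spanish passenger-car samples.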
Furthermore, it should be taken into account that the quantities needed for the biomass content determination are of the order of a few grams or even milligrams, so the sampling step is crucial to obtain proper results. Several methods have been explored for determining the biomass fraction of tyres, but some of them are not completely reliable and conclusive, mainly for the following reasons. The results of the thermogravimetric method differed considerably from the known natural rubber content of the reference samples, as well as from the results obtained with the 14C technique; this is because synthetic isoprene cannot be distinguished from natural isoprene. The pyrolysis-GC/MS method is not considered a reliable method either, mainly for the same reason and because the results are affected by the pyrolysis temperature and the extraction time. In addition, the presence in the tyre of textile fibres and of stearic acid, a well-known source of biomass, cannot be evaluated by either technique (pyrolysis-GC/MS and TGA). The French and Spanish studies conclude that the 14C techniques are the most reliable for determining the biomass content of end-of-life tyres. Both methods, beta-ionization (BI) and liquid scintillation spectrometry (LSC), lead to results close to the actual biomass content in tyres. Finally, reference values for the biomass content of end-of-life tyres have been established. The average content for passenger car tyres is 18.3% for the French market and 22% for the Spanish market. In the case of truck and bus tyres, the contents range from 29.1% in the French case to 34% in the Spanish case. Apart from the errors of the different analytical techniques and laboratories, the differences found between the two studies could be related to the different market shares, in terms of sizes and brands, in the two countries. In the Spanish case, a reference value has also been established for agro tyres, with an average biomass content of 26.4%.
2018-12-30T08:34:06.290Z
2017-02-22T00:00:00.000
{ "year": 2017, "sha1": "1e06b70bb357e63d03855f673d73a7506142eab4", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/52753", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b19abd4da21000af723e381bbee38ac46e802688", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
17611150
pes2o/s2orc
v3-fos-license
Mesenchymal–epithelial cell interactions and proteoglycan matrix composition in the presumptive stem cell niche of the rabbit corneal limbus

Purpose: To investigate whether mesenchymal–epithelial cell interactions, similar to those described in the limbal stem cell niche in transplant-expired human eye bank corneas, exist in freshly enucleated rabbit eyes and to identify matrix molecules in the anterior limbal stroma that might have the potential to help maintain the stem cell niche.

Methods: Fresh limbal corneal tissue from adult Japanese white rabbits was obtained and examined in semithin resin sections with light microscopy, in ultrathin sections with transmission electron microscopy, and in three-dimensional (3D) reconstructions from data sets of up to 1,000 serial images from serial block face scanning electron microscopy. Immunofluorescence microscopy with five monoclonal antibodies was used to detect specific sulfation motifs on chondroitin sulfate glycosaminoglycans, previously identified in association with progenitor cells and their matrix in cartilage tissue.

Results: In the rabbit limbal cornea, while no palisades of Vogt were present, the basal epithelial cells stained differentially with Toluidine blue and extended lobed protrusions proximally into the stroma, which were associated with interruptions of the basal lamina. Elongate processes of the mesenchymal cells in the superficial vascularized stroma formed direct contact with the basal lamina and basal epithelial cells. From a panel of antibodies that recognize native, sulfated chondroitin sulfate structures, one (6-C-3) gave a positive signal restricted to the region of the mesenchymal–epithelial cell associations.

Conclusions: This study showed interactions between basal epithelial cells and subjacent mesenchymal cells in the rabbit corneal limbus, similar to those that have been observed in the human stem cell niche. A native sulfation epitope in chondroitin sulfate glycosaminoglycans exhibits a distribution specific to the connective tissue matrix of this putative stem/progenitor cell niche.

Although some studies [9] have suggested that stem cells seem to be present throughout the central corneal epithelium, the evidence applies to the mouse cornea only, and consensus continues to favor the corneal limbus and, in particular, deep involutions of the limbal epithelium into the underlying vascularized stroma, termed the palisades of Vogt, as the major location of epithelial progenitor cells [3,5]. Basal epithelial cells at the human limbus also possess different biochemical signatures compared to epithelial cells more centrally in the cornea when examined with spectroscopic techniques [10]. In the human eye, further specialized regions have been identified within this stem cell niche, termed limbal epithelial crypts, limbal crypts, and focal stromal projections [11,12]. It seems, however, that well-defined palisades of Vogt are not present in all mammalian species; for example, palisades of Vogt are present in the pig eye [13] but reportedly absent in rabbits [14] and rodents. In the rabbit, although epithelial rete ridges projecting into the subjacent stroma, characteristic of the palisades [15], are not seen, the basement membrane zone is nonetheless undulating [16] and exhibits discontinuities of the basal lamina, reminiscent of the human limbus [14].
Chen and associates [8], in 2004, suggested that the basal invaginations of basal epithelial cells through the basement membrane at the human corneal limbus allowed close contact with the underlying vascularized stroma to facilitate nutrient transfer. More recently, Dziasko et al. [17] used a three-dimensional electron microscopy approach to demonstrate focal associations between small basal epithelial cells in crypt-rich zones of the human limbus in transplant-expired eye bank tissue and cells in the subjacent stroma. These stromal cells labeled positively for CD90 and CD105, two markers for mesenchymal stem cells, encouraging the authors to propose the limbal crypt region as a site of cellular interactions between epithelial and stromal stem cells. Hayashi and associates [18] demonstrated N-cadherin expression by putative basal epithelial stem cells and associated melanocytes in the human limbus, implicating these cells in the modulation of the stem cell niche, and this idea has recently been expanded on by others [19]. Another concept, that specific properties of the non-epithelial stromal milieu, including the limbal microvasculature and localized connective tissue composition, may provide cues for the maintenance of a stem cell population in the limbal cornea has also been supported by several recent studies [20,21]. These putative factors have not yet been investigated in detail. Signaling pathways, including those leading to cell proliferation and differentiation, are known to be significantly modulated by some of the tissue glycosaminoglycans (GAGs), such as chondroitin sulfate (CS) and heparan sulfate (HS), bound to proteoglycans in extracellular matrices, which can function as ligands for signaling molecules. The heterogeneity of sulfation patterns on the disaccharide chains of GAGs provides the potential for the diverse reactivity of these molecules. An earlier study on articular cartilage showed that CS sulfation motifs can identify distinct cellular populations with stem cell characteristics in this tissue [22]. Here, we document the three-dimensional architecture of a putative stem cell niche in the rabbit corneal limbus with a focus on the associations between basal epithelial cells and anterior stromal cells. We also provide new information on matrix-specific CS localization subjacent to the limbal niche, using a panel of monoclonal antibodies that react with novel sulfation motif epitopes. METHODS Tissue acquisition: Female Japanese white rabbits (n=2; Shimizu Laboratory Supplies Co., Ltd., Kyoto) were anesthetized by intravenous administration of pentobarbital sodium solution (64.8 mg/kg, Somnopenthyl, Kyoritsu Yakuhin Corporation, Tokyo) and euthanized by scission of abdominal aorta and vena cava. Corneas were removed immediately post-mortem by an incision parallel to, and approximately 2 mm outside, the limbus. They were then cut into superior, inferior, nasal, and temporal quadrants and transferred immediately to fixatives as described below. Animals were housed and treated in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. A specimen of human corneal limbus was dissected from the eye of a 56-year-old male donor. The research was approved by the Human Science Ethical Committee (School of Optometry and Vision Sciences, Cardiff University, UK) and the South East Wales Research Ethics Committee (Cardiff, UK). 
The institutional review board granted approval with a waiver of consent, as the cornea was obtained from the Bristol Corneal Transplant Service Eye Bank (Bristol Eye Hospital, UK). All tissue used in this study was obtained in accordance with the tenets of the Declaration of Helsinki, and local ethical rules were adhered to throughout. The cornea was removed to storage culture medium in the eye bank within 48 h post-mortem and obtained for study after 4 weeks, when the low endothelial cell count rendered the cornea unsuitable for transplantation.

Light, transmission, and SBF SEM: Samples from each quadrant were fixed by immersion in 2.5% (v/v) glutaraldehyde and 2% (w/v) paraformaldehyde in 0.1 M sodium cacodylate buffer, pH 7.3. After storage in buffer at 4 °C, they were processed using a modification of the method described by Deerinck and Ellisman for the generation of high backscatter electron contrast for serial block face scanning electron microscopy (SBF SEM). However, the blocks thus produced were also suitable for light microscopy on semithin (0.2-0.3 µm) sections after Toluidine blue staining and for transmission electron microscopy on unstained ultrathin (90-100 nm) sections. After the primary fixation step, full-thickness, 1×5 mm tissue blocks were dissected from the limbus in each of the four corneal quadrants and transferred to 1.5% potassium ferricyanide/1% osmium tetroxide in cacodylate buffer for 1 h and then washed in distilled water. The blocks were then placed sequentially in 1% aqueous thiocarbohydrazide, 1% aqueous osmium tetroxide, and 1% aqueous uranyl acetate, each for 1 h and each followed by thorough washing in distilled water. After a further 1 h incubation in lead aspartate at 60 °C with more washes, the specimens were dehydrated in an ethanol series, from 70%, through 90%, to 100%, and, via propylene oxide, infiltrated and embedded in Araldite CY212 epoxy resin over 2 days. After the resin was cured at 60 °C for 48 h, semithin sectioning was performed to identify areas of interest, and ultrathin sections were cut onto uncoated G300 copper grids for examination in a JEM 1010 transmission electron microscope (Jeol (UK) Ltd, Welwyn Garden City, UK). Images of the epithelial basement membrane region, including the basal epithelium and superficial mesenchymal cells, were acquired with an 11-megapixel, 14-bit Orius SC1000 CCD camera (Gatan, Pleasanton, CA). For SBF SEM, blocks containing suitably oriented regions of interest were glued to aluminum specimen pins and, after the surface was polished with ultramicrotomy, sputter-coated with gold (EM ACE 200, Leica Microsystems (UK) Ltd, Milton Keynes, UK). They were transferred to a Zeiss Sigma VP FEG SEM (Carl Zeiss Microscopy Ltd, Cambridge, UK), and sequences of up to 1,000 images were acquired, with surface cuts of 50 nm, using an in-chamber Gatan 3View® 2 sectioning system (Gatan UK, Oxford, UK). The image area was 10.598 × 10.598 µm, giving a pixel size of approximately 2.6 nm (see the numerical check below). Images were taken at 2.5 kV with 8 ms dwell time and a scan resolution of 4096 × 4096 pixels. Data sets were analyzed using Amira 5.6 software or ImageJ/Fiji [23] and displayed in 3D Viewer.

Immunohistochemistry: Corneal quadrants were fixed in 4% paraformaldehyde in 0.1 M Sörensen's phosphate buffer, washed briefly, cryoprotected in a graded series of sucrose, to 30%, in buffer, and frozen over dry ice in optimum cutting temperature (OCT) embedding compound for sectioning at 10 µm thickness on a cryostat at −21 °C.
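Before continuing with the staining protocol, here is the quick check of the SBF SEM geometry referred to above; all numbers are taken directly from the acquisition settings described in this section.

```python
# Sanity check of the quoted SBF SEM imaging geometry.
field_um = 10.598      # image area edge length, micrometres
pixels = 4096          # scan resolution per edge
cut_nm = 50.0          # block-face cut between consecutive images
n_images = 1000        # longest acquired sequence

px_nm = field_um * 1e3 / pixels
print(f"lateral pixel size : {px_nm:.2f} nm/pixel")          # ~2.59 nm
print(f"voxel (x, y, z)    : {px_nm:.2f} x {px_nm:.2f} x {cut_nm:.0f} nm")
print(f"depth of sequence  : {n_images * cut_nm / 1e3:.0f} um")
```

A full 1,000-image sequence therefore samples a tissue block roughly 10.6 × 10.6 µm wide and 50 µm deep, at a lateral pixel size of about 2.6 nm.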
Sections were collected on poly-L-lysine-coated slides and washed in PBS (1X; 145 mM NaCl, 1 mM NaH2PO4·2H2O, 18 mM Na2HPO4·2H2O, pH 7.4) containing 0.1% Tween 20, before exposure to a panel of monoclonal antibodies, shown in Table 1, which recognize specific sulfation motifs on proteoglycans carrying chondroitin sulfate glycosaminoglycan chains. Before exposure to the antibodies, all of which were diluted 1:10 in PBS/Tween, sections were blocked with PBS/Tween containing 1% bovine serum albumin (BSA) for 30 min. Antibodies were applied at 4 °C overnight. To validate the antibody reactions, some sections for each antibody were pretreated with 0.1 U/ml chondroitinase ABC (Sigma-Aldrich) in 100 mM Tris acetate buffer, pH 8.0, for 2 h at 37 °C, which degrades the CS chains carrying the epitopes they identify. Washing after antibody incubation was followed by treatment of all sections with goat anti-mouse AlexaFluor 488 secondary antibody (Molecular Probes, Invitrogen), diluted to 5 µg/ml in PBS/Tween, for 2 h, after which they were mounted under coverslips with Vectashield containing the nuclear stain 4',6-diamidino-2-phenylindole (DAPI; Vector Laboratories, Peterborough, UK). Sections obtained from the specimen of human limbus were treated in the same way as described for the rabbit tissue and exposed to 3B3, 4C3, and 6C3 primary antibodies. All sections were examined with phase and immunofluorescence microscopy with an Olympus BX40 microscope. Sections exposed to non-immune mouse serum or PBS instead of the primary antibody served as negative controls.

Table 1. Monoclonal antibodies used: antibody (isotype), specificity, reference.
3-B-3 (IgM): native CS/DS sulphation epitopes; also C-6-S neoepitope "stubs" after digestion with chondroitinase [31,33]
3-C-5 (IgG): native CS/DS sulphation epitopes [22,31,35]
4-C-3 (IgM): native CS/DS sulphation epitopes [22,31,35]
6-C-3 (IgM): native CS/DS sulphation epitopes [22,31,35]
7-D-4 (IgM): native CS/DS sulphation epitopes [22,31,35]

Light and transmission electron microscopy: In the Toluidine blue-stained semithin sections obtained from all four quadrants of the rabbit limbal cornea (Figure 1), no evidence of structures resembling palisades of Vogt or epithelial crypts was found. Nonetheless, the basement membrane zone in all four regions exhibited a markedly irregular profile, distinct from the smooth interface present in the center of the cornea. Small blood capillaries were present in the subepithelial stroma at the limbus and in the peripheral cornea. In the former, a characteristic feature of mesenchymal cells, often in the pericapillary matrix, was the presence of cytoplasmic processes extending from the cells distally to make contact with the basement membrane and basal epithelial cells. Basal cells sometimes exhibited less basophilia than the overlying layers and appeared paler-staining with Toluidine blue. In the electron microscope, additional details of the irregular basement membrane were evident. Basal cells formed ornate lobed protrusions into the underlying stroma (Figure 2A), and their contours were followed faithfully by the basal lamina. However, at locations where it was approached by the processes emanating from mesenchymal cells, the lamina often appeared discontinuous, permitting close epithelial-mesenchymal cell contact (Figure 2B). Mesenchymal cell processes at the rabbit limbus were numerous and reached extensively into the pockets created by the lobed basal membrane of the epithelial cells.
Small rounded cells with a high nucleus to cytoplasm ratio were sometimes observed within the basal epithelial cell layer, invariably with multiple mesenchymal cell processes nearby (Figure 2B).

Serial block face scanning electron microscopy: Observations of epithelial cell-mesenchymal cell associations in large data sets of serial images, acquired with SBF SEM of limbal corneal specimens from the superior and nasal quadrants of the rabbit eye, clearly showed the extent of the penetration of mesenchymal processes into the basal epithelium at numerous sites (Figure 3 and Appendix 1), with individual cells often making several connections. Three-dimensional reconstructions of selected image sequences emphasized the abundance of mesenchymal cell processes, indicating that they occasionally extended distally between adjacent basal epithelial cells (Figure 4, Appendix 2, and Appendix 3). Small capillaries were commonly identified in the superficial stroma in close proximity to sites of mesenchymal-epithelial cell interaction (Figure 5 and Appendix 4).

Immunohistochemistry: A panel of sulfation motif-specific anti-chondroitin sulfate/dermatan sulfate (CS/DS) glycosaminoglycan antibodies provided tissue-specific staining results when applied to the rabbit corneal limbus (Figure 6). The immunoreactivity of these antibodies is indicated in Table 1. The analysis showed that some antibodies (i.e., 3B3 and 3C5; Figure 6B,E) detected no native CS/DS epitope whatsoever. However, a positive enzyme-sensitive signal was detected with antibody 7D4 in the midstroma of the peripheral cornea and was seen in association with deep capillaries in the limbus, but staining was absent from the putative stem cell zone along the limbal basement membrane (Figure 6N,O). Antibody 6C3 labeled the extracellular matrix along the basement membrane and around capillaries in the limbus, at the site where epithelial-mesenchymal associations were observed with SBF SEM (Figure 6K), and the staining was removed by section pretreatment with chondroitinase ABC (Figure 6L). Antibody 3B3, as mentioned, detected no native epitope (Figure 6B) but showed positive localization of a CS/DS neoepitope after enzyme pretreatment (Figure 6C). Control sections were consistently negative, with or without enzyme pretreatment (Figure 6Q,R). Only the 6C3 primary antibody showed positive immunofluorescence when applied to the human limbal sections (Figure 7).

DISCUSSION

Our observations of the epithelial basement membrane region in the rabbit corneal limbus confirm a lack of structures comparable to the palisades of Vogt, a characteristic feature of the human limbus, which have previously been identified as the site of limbal epithelial progenitor or stem cells [5,24]. Nevertheless, basal epithelial cells in this region of the rabbit limbus exhibit markedly irregular profiles, sending elaborate lobed protrusions into the underlying stroma, and thus present a significantly increased surface area toward the superficial mesenchyme. This observation is not new, having been made in previous studies of the rabbit limbus [14,16]. However, our three-dimensional reconstructions of this region from serial block face scanning electron microscopy provide a new perspective and clear evidence that, in the rabbit, mesenchymal cells interact with the basal limbal epithelium just as discovered recently in the human limbus [17].
Given that the rabbit tissue examined here was processed immediately, this lends extra credence to the recently published findings of Dziasko and coworkers [17] and implies, though indirectly, of course, that gaps in the corneal basal epithelium at the limbus in human eye bank tissue are not the consequence of extended storage in preservation medium. We suggest that the lobed protrusions of basal cells in the rabbit cornea may represent the palisades in miniature, broadly fulfilling the same role. Large-volume 3D reconstructions indicate that, in the rabbit, mesenchymal cells subjacent to the epithelium extend numerous cellular processes that make contact with the basal lamina and occasionally penetrate distally between basal cells, making close associations. In some of these locations, the basal lamina appeared discontinuous, raising the possibility that confluence may be established between the two cell types. These associations, together with the presence of small basal cells with high nucleus-to-cytoplasm ratios resembling those of cells identified as stem cells in other studies, provide strong circumstantial evidence that this site represents the stem cell niche in the rabbit corneal limbus. The presence of a source of cells in the limbus to regenerate the corneal epithelium in the rabbit has been accepted since the observation by Kinoshita et al. [25] of centripetal movement of epithelial cells from limbal origins to repair central corneal surface wounds; in addition, delayed central corneal epithelial wound healing resulted from removal of the limbal epithelium [26]. In the human eye, evidence suggested that the limbal stem cell niche is predominantly within the inferior and superior limbal quadrants, corresponding to the prominence of the palisades of Vogt, epithelial crypts, and focal stromal projections at these locations [27]. In contrast, we observed mesenchymal-epithelial associations in semithin sections from all four quadrants of the rabbit eye and examined them in detail in 3D reconstructions of specimens from the superior and nasal limbus. This implies that the nature of the stem cell niche in the rabbit limbus is quite different from that in humans. Rabbit eyes exhibit some significant differences from those of humans, for example, in the blink rate (approximately 10 min versus 5-8 s between consecutive blinks in the rabbit and human, respectively), anatomically in the tear film structure and stability [28], and in the presence of a nictitating membrane in the rabbit. The potential influence of these factors on the distribution of limbal stem cells is unknown and requires further study. Stem cell potential has previously been assessed with immunocytochemical detection of specific epitopes in tissue sections [8,18], even though no definitive stem cell markers have yet been identified; cell isolation and assessment of colony-forming activity in cell culture have become another criterion employed [12,29]. An alternative approach, based on the supposition that cues for the maintenance of stem cell characteristics may reside in the extracellular matrix [20], relies upon localization of specific epitopes in the niche environment. To date, few studies have attempted to define the stem cell niche based on the matrix composition. Hayes et al. [22] used specific monoclonal antibodies (including 3B3, 4C3, 6C3, and 7D4) that recognize the sulfation patterns of the chondroitin sulfate/dermatan sulfate glycosaminoglycan chains associated with cell and extracellular matrix proteoglycans.
Several of these antibodies (3B3, 4C3, and 7D4) successfully identified, with immunocytochemistry and flow cytometry, pericellular matrix and colony-forming cell populations in the superficial zone of articular cartilage from which progenitor cells had previously been isolated [30]. The remarkable heterogeneity in sulfation motifs on CS/DS proteoglycans has long been recognized as a contributory factor through which considerable diversity of structure and specificity of function could be achieved [31]. A conservative assessment indicated that the human glycome may encompass around 7,000 distinct glycan determinants, including numerous CS/DS structures [32]. The precise expression of specific CS/DS motifs, together with other matrix components, could therefore define the local extracellular compartment that contributes to the maintenance of the stem cell phenotype. The antibodies employed in our study recognize carbohydrate moieties in native (i.e., non-enzyme-predigested) chondroitin sulfate. Antibody 3B3 is believed to recognize a 6-sulfated disaccharide epitope at the non-reducing terminus of CS glycosaminoglycan chains [31]. No native epitope could be detected with this antibody in the corneal limbus, but the antibody identified a neoepitope exposed by chondroitinase pretreatment [31], which is the more conventional application of the 3B3 reagent, as it was initially raised against the enzyme-generated epitope [33]. Although the structural epitopes of antibodies 4C3, 7D4, and 6C3 remain to be precisely defined, the available evidence from chain disruption studies suggests that they are all different and reside in non-terminal sequences of the CS glycosaminoglycan chain, near the linkage region to the protein core [31,34,35]. Different susceptibilities of these epitopes to chondroitinase digestion in different tissues have previously been identified, consistent with our observations of the rabbit limbus, where some residual staining persisted after enzyme treatment followed by 7D4 antibody staining, whereas all staining with antibody 6C3 was removed. Previous studies showed a diversity of staining results with native CS antibodies in cartilages from different species, as well as changes through development and in disease [31,33,34]. Interestingly, the labeling we observed in the rabbit limbus with antibodies 6C3 and 7D4 appeared somewhat similar to that reported previously in human skin, with both antibodies labeling sites of vascular and neural structures: 7D4 in the deep limbal stroma and the reticular dermis, and 6C3 along the basement membrane and superficial stroma and the papillary dermis, in the limbus and skin, respectively.

Figure 4. Three-dimensional reconstructions in ImageJ 3D Viewer of the epithelial basement membrane zone of the rabbit limbal cornea from serial block face scanning electron microscopy. A: A blood vessel can be seen in the superficial stroma (asterisk), below mesenchymal cells that extend numerous cytoplasmic processes (arrows) distally to contact the basal epithelial cells. The scale bar represents 4 µm. B: Mesenchymal cell processes form diffuse associations with epithelial cells, occasionally appearing to extend between adjacent cells (arrow). The scale bar represents 4 µm.

Antibody 4C3 failed to show a significant signal in the corneal limbus, unlike the results reported in the skin.
In addition, the intense labeling that this antibody generated in a region of the midperipheral corneal stroma was only slightly reduced by enzyme predigestion, suggesting both native and neoepitope distributions for this antibody in the eye. The CS glycosaminoglycan epitope revealed by antibody 6C3 was the only component of the limbal matrix that appeared clearly coincident with the putative stem cell niche in the subepithelial stroma revealed by SBF SEM. Our preliminary results showed that antibody 6C3 also labels this site positively in the human corneal limbus. The rabbit eye has historically proven to be a useful model for studies of corneal epithelial regeneration, and the present study indicates that, in spite of morphological differences between the human and rabbit cornea at the limbus, this site has other features in common between the rabbit and the human that are indicative of a stem cell niche at this location. The specific native CS/DS sulfation moieties recognized by our antibodies have been highly conserved in most animal species (chicken to human) in the stem/progenitor cell niche. We hypothesize that the collective function of CS sulfation patterns in the matrix is to act as ligands that bind and present, or sequester, a spectrum of signaling molecules, including cytokines, morphogens, chemokines and growth factors, which may lead to the initiation or inhibition of the stem cell to progenitor cell to mature cell cascades [31]. Evidence in support of this mechanism has already been reported from studies on cell migration and maturation in chondrocytes and cartilage [36,37]. Specific markers for connective tissue glycosaminoglycans can thus help define the stem cell microenvironment in the eye, although further studies are required to expand this possibility and to confirm the progenitor capability of epithelial and mesenchymal cells in the rabbit limbus.

Figure 5. Three-dimensional reconstruction of the rabbit corneal limbal basement membrane zone using automated and manual segmentation techniques with Amira 5.6 software. Mesenchymal cells colored in blue and purple make associations with basal epithelial cells, in green. A superficial stromal capillary (orange) can also be seen. The scale bar represents 4 µm.

Figure 6 (partial caption). Panels E, H, K, N, and Q show native epitope localization without section pretreatment. Antibody 6C3 labels the matrix subjacent to the limbal epithelial basement membrane (K, arrows). The right-hand panels (C, F, I, L, O, and R) show validation of the results by removal of native epitopes by section pretreatment with the chondroitinase ABC enzyme, and thus the loss of the 6C3 signal (L). In some cases, neoepitopes are generated by enzyme treatment, as with antibody 3B3. The central cornea is toward the left in all panels, but toward the right in the controls (P, Q, and R). The scale bar represents 100 µm.

APPENDIX 1: FLYTHROUGH OF 300 SERIAL IMAGES FROM SBF SEM OF RABBIT EPITHELIAL BASEMENT MEMBRANE ZONE IN CORNEAL LIMBUS. Mesenchymal cells send processes into the basal epithelium at multiple sites. To access these data, click or select the words "Appendix 1".

APPENDIX 2: VIDEO OF 3D RECONSTRUCTION MADE WITH IMAGEJ 3D VIEWER PLUGIN FROM SBF SEM IMAGES OF RABBIT LIMBAL BASEMENT MEMBRANE ZONE. Superficial mesenchymal cells extend multiple processes to contact basal epithelial cells. To access these data, click or select the words "Appendix 2".

APPENDIX 3: VIDEO OF 3D RECONSTRUCTION MADE WITH IMAGEJ 3D VIEWER PLUGIN FROM SBF SEM IMAGES OF RABBIT LIMBAL BASEMENT MEMBRANE ZONE.
Mesenchymal cell processes extend between adjacent cells of the basal epithelial layer. To access these data, click or select the words "Appendix 3".

APPENDIX 4: VIDEO OF 3D RECONSTRUCTION MADE WITH AMIRA 5.6 SOFTWARE FROM SBF SEM IMAGES OF RABBIT LIMBAL BASEMENT MEMBRANE ZONE. Mesenchymal cells, colored in blue and purple, associate with epithelial cells, colored in green, in the vicinity of a superficial stromal capillary, colored in orange. To access these data, click or select the words "Appendix 4".
2017-08-20T05:35:19.799Z
2015-12-29T00:00:00.000
{ "year": 2015, "sha1": "6ba5cad0dbc6eaa339c86f06adb1f085025dd3c0", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6ba5cad0dbc6eaa339c86f06adb1f085025dd3c0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
213265177
pes2o/s2orc
v3-fos-license
Population-Specific Genetic and Expression Differentiation in Europeans

Abstract: Much of the enormous phenotypic variation observed across human populations is thought to have arisen from events experienced as our ancestors peopled different regions of the world. However, little is known about the genes involved in these population-specific adaptations. Here, we explore this problem by simultaneously examining population-specific genetic and expression differentiation in four human populations. In particular, we derive a branch-based estimator of population-specific differentiation in four populations, and apply this statistic to single-nucleotide polymorphism and RNA-seq data from Italian, British, Finnish, and Yoruban populations. As expected, genome-wide estimates of genetic and expression differentiation each independently recapitulate the known relationships among these four human populations, highlighting the utility of our statistic for identifying putative targets of population-specific adaptations. Moreover, genes with large copy number variations display elevated levels of population-specific genetic and expression differentiation, consistent with the hypothesis that gene duplication and deletion events are key reservoirs of adaptive variation. Further, many top-scoring genes are well-known targets of adaptation in Europeans, including those involved in lactase persistence and vitamin D absorption, and a handful of novel candidates represent promising avenues for future research. Together, these analyses reveal that our statistic can aid in uncovering genes involved in population-specific genetic and expression differentiation, and that such genes often play important roles in a diversity of adaptive and disease-related phenotypes in humans.

Introduction

Human phenotypes vary widely across the globe. In particular, geographically separated populations often differ in skin pigmentation (Loomis 1967), hair color (Rees 2003), tooth morphology (Scott and Turner 1997; Hanihara and Ishida 2005), surface area to body mass ratio (Katzmarzyk and Leonard 1998), and predisposition to diseases (Frank 2004). Much of this phenotypic variation is thought to have arisen due to a diversity of selective pressures experienced as early humans peopled the world and encountered novel environments (Sabeti et al. 2002; Voight et al. 2006), food sources (Sabeti et al. 2002), and pathogens (Diamond 2002; Jobling et al. 2013). As a result, uncovering the genetic targets of phenotypic differentiation among human populations is critical both for understanding past human adaptations (Sabeti et al. 2002) and for advancing future biomedical research (Jorde et al. 2001; Akey et al. 2004). Due to the abundance of whole-genome sequence and polymorphism data for many human populations (Cann et al. 2002; International HapMap 3 Consortium 2010; 1000 Genomes Project Consortium 2015), much work in the past several years has focused on elucidating and understanding genetic differentiation that occurred during human evolution (Li et al. 2008; Pickrell et al. 2009; Field et al. 2016). A common summary statistic for estimating genetic distances between two populations is the fixation index, F_ST (Wright 1951), which has been used to infer human demographic history (Hinds et al. 2005; Holsinger and Weir 2009; Keinan et al. 2009; Patterson et al. 2012; 1000 Genomes Project Consortium 2015) and to identify loci that may be targets of natural selection (Bowcock et al. 1991; Akey et al. 2002; Bersaglieri et al. 2004).
However, because F_ST is a pairwise metric, it cannot identify the directionality of genetic differentiation, nor can it be used as sole evidence for natural selection (Yi et al. 2010). To address this issue, Yi et al. (2010) developed the Population Branch Statistic (PBS), a summary statistic that utilizes pairwise F_ST values among three populations to quantify genetic differentiation along each branch of their corresponding three-population tree. Genes with large PBS values on one branch represent loci that underwent population-specific genetic differentiation consistent with relaxed selective constraint or positive selection (Yi et al. 2010). PBS has been applied to corroborate previously established targets of selection, including genes associated with skin pigmentation (Lamason et al. 2005) and dietary fat sources (Mathias et al. 2012), as well as to identify novel candidates for high-altitude adaptation in Tibetans (Yi et al. 2010). However, because natural selection acts on phenotypes, analysis of genetic data only enables assessment of its indirect effects. For this reason, it may be advantageous to study selection more directly by exploiting the recent availability of RNA-seq data for several human populations (Lappalainen et al. 2013). Specifically, phenotypic evolution is thought to often occur through modifications in gene expression (King and Wilson 1975; Wang et al. 1996; Wray et al. 2003; Carroll 2005, 2008; Raj et al. 2010). Thus, studying gene expression differentiation among human populations may increase power for identifying loci underlying population-specific phenotypes. Indeed, like genetic differentiation, gene expression levels vary considerably across human populations (Cheung et al. 2005; Stranger et al. 2007) and often reflect population structure (Brown et al. 2018). Moreover, human genes with large PBS values are enriched for expression quantitative trait loci (Quiver and Lachance 2018). In the present study, we simultaneously explore population-specific genetic and expression differentiation in four human populations: the Toscani in Italia (TSI), British in England and Scotland (GBR), Finnish in Finland (FIN), and Yoruba in Nigeria (YRI). For these analyses, we employ single-nucleotide polymorphism (SNP; 1000 Genomes Project Consortium 2015) and RNA-seq (Lappalainen et al. 2013) data from each population. First, we use F_ST (Wright 1951) and its analog for estimating quantitative trait differentiation, P_ST (Leinonen et al. 2006), to quantify and examine genome-wide patterns of genetic and expression differentiation in the four human populations. Next, we adapt the approach of PBS (Yi et al. 2010) to P_ST, and extend its computation to a four-population tree, enabling us to estimate both genetic and expression differentiation in each of the four human populations. Last, we apply this branch-based statistic to study population-specific genetic and expression differentiation, and uncover candidate genes and functional modules underlying adaptation in the TSI, GBR, and FIN populations.

Genome-Wide Patterns of Genetic and Expression Differentiation in Four Human Populations

A first goal of our study was to estimate genetic and expression differentiation among the TSI, GBR, FIN, and YRI populations. To address this problem, we used SNP data (1000 Genomes Project Consortium 2015) to calculate the F_ST (Wright 1951), and RNA-seq data (Lappalainen et al. 2013) to calculate the P_ST (Leinonen et al. 2006), of every gene between each pair of the four human populations.
We calculated F_ST using Hudson's formula (Hudson et al. 1992) and computed the ratio of averages to minimize bias (Reynolds et al. 1983; Weir and Cockerham 1984; International HapMap 3 Consortium 2010; Bhatia et al. 2013; see Materials and Methods for details). Due to environmental effects on P_ST, we followed the approach of Leinonen et al. (2006) in calculating P_ST under two contrasting scenarios: one in which environmental and nonadditive genetic effects account for half of the observed expression variation (h² = 0.5), and a second in which only additive genetic effects contribute to the observed expression variation (h² = 1; see Materials and Methods for details). Examinations of Pearson's linear (r) and Spearman's nonlinear (ρ) correlations revealed small (~10^-2) but significantly positive relationships between F_ST and P_ST in the TSI-FIN, TSI-YRI, GBR-YRI, and FIN-YRI population pairs (supplementary tables 1 and 2, Supplementary Material online), consistent with previous observations that genetic and expression differentiation are weakly or moderately associated (Makova and Li 2003; Nuzhdin et al. 2004; Sartor et al. 2006; Assis and Bachtrog 2013, 2015; Hunt et al. 2013). To explore genome-wide patterns of genetic and expression differentiation among the four human populations, we independently used F_ST and P_ST to construct gene trees and then infer population trees supported by majorities of these gene trees (see Materials and Methods for details). Population trees inferred from F_ST and P_ST (with h² = 0.5 and h² = 1) have the same topology (fig. 1), indicating that there is consistency between relationships estimated from genome-wide patterns of genetic and expression differentiation despite their weak correlations with one another. Further, the topology of the inferred population trees recapitulates known relationships among these four populations, in that TSI and GBR are most closely related to one another, FIN is an outgroup to TSI and GBR, and YRI is an outgroup to all three European populations. These results mirror those from similar studies of F_ST (Hinds et al. 2005; Jakobsson et al. 2008; Li et al. 2008; Auton et al. 2009; Holsinger and Weir 2009; Keinan et al. 2009; Patterson et al. 2012; 1000 Genomes Project Consortium 2015), as well as findings that gene expression data often display population structure comparable to that of genetic data (Cheung et al. 2005; Stranger et al. 2007; Brown et al. 2018). Yet, there is greater support for the inferred population tree when using F_ST (fig. 1A) than when using P_ST (fig. 1B and C) as input. This effect is not surprising, given the complex and dynamic nature of gene expression data. Specifically, gene expression levels can vary across space (e.g., cell type), time (e.g., age), and condition (e.g., disease). Additionally, the experimental methodology used to collect and quantify these data may influence expression levels as well. This contrasts with the relatively static nature of genetic data. Further, whereas our calculation of F_ST for a gene was often based on allele frequencies at multiple SNPs across the gene, our calculation of P_ST for a gene was based on a single measurement. Therefore, the differing levels of support observed for the inferred population trees may reflect higher accuracy and lower variance in estimating F_ST, given the more representative and larger samples available for genetic data.
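A minimal sketch of the two pairwise statistics follows. The Hudson-style estimator with a ratio of averages follows Bhatia et al. (2013); the P_ST expression is the commonly used Leinonen-style form σ²_B/(σ²_B + 2h²σ²_W), which we assume here since the paper's exact variance decomposition is described only in its Materials and Methods. All input values are hypothetical.

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Per-gene Hudson F_ST, combining SNPs as a ratio of averages
    (Bhatia et al. 2013). p1, p2: per-SNP allele frequencies in the
    two populations; n1, n2: haploid sample sizes."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()

def pst(var_between, var_within, h2=0.5):
    """P_ST under the assumed form; the paper evaluates h2 = 0.5 and 1."""
    return var_between / (var_between + 2.0 * h2 * var_within)

p_tsi = [0.10, 0.42, 0.80]   # hypothetical frequencies at 3 SNPs
p_yri = [0.35, 0.05, 0.55]
print(f"F_ST(TSI, YRI) = {hudson_fst(p_tsi, p_yri, 214, 216):.3f}")
print(f"P_ST (h2=0.5)  = {pst(1.8, 4.0):.3f}")
```

The ratio-of-averages choice matters: averaging per-SNP F_ST values instead would upweight low-diversity SNPs and bias the per-gene estimate, which is the distortion the cited references warn against.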
To investigate this effect, we examined the association between the number of SNPs in a gene and the difference between the topology of the gene tree constructed with F_ST and that of the population tree. In particular, if mismatches between gene trees constructed with P_ST and the population tree are often due to the small sample size of expression data, then we also expect gene trees constructed with F_ST to differ from the population tree when the number of SNPs is small. To quantify the difference between each gene tree constructed with F_ST and the population tree, we used the Robinson-Foulds (RF) distance, which is the sum of the number of unique clades in the two trees being compared (Robinson and Foulds 1981). Here, RF = 0 when the tree topologies are identical, RF = 2 when there is one unique clade in each tree, and RF = 4 when the tree topologies are distinct. As hypothesized, there is an inverse relationship between RF and the number of SNPs, in that we tend to obtain larger RF values for genes with fewer SNPs.

Estimation of Population-Specific Genetic and Expression Differentiation on a Four-Population Tree

Next, we sought to quantify population-specific genetic and expression differentiation of genes in each of the four human populations. For a three-population tree, population-specific genetic differentiation of a gene along each branch can be estimated with PBS (Yi et al. 2010; fig. 2A), which applies equation (11.20) in Felsenstein (2004) to F_ST. In particular, considering the unrooted three-population tree shown in figure 2A, the PBS value of a particular gene in population W is estimated as PBS_W = (E_W,X + E_W,Y − E_X,Y)/2, where E_W,X, E_W,Y, and E_X,Y denote the log-transformed F_ST between populations W and X, W and Y, and X and Y, respectively (Yi et al. 2010; see Materials and Methods for details). In a recent study, equation (11.20) in Felsenstein (2004) was also applied to expression distances between orthologous genes to estimate branch lengths corresponding to lineage-specific expression divergence on a three-species tree (Assis 2019). Analogously, by substituting P_ST for F_ST in the formula for PBS (Yi et al. 2010), we can obtain the PBS corresponding to gene expression differentiation in population W on the three-population tree. To distinguish between these two PBS in our study, we will refer to the calculation with F_ST as "genetic PBS," and the calculation with P_ST as "expression PBS." To enable quantification of population-specific genetic and expression differentiation in four human populations, we extended the derivation of PBS to a four-population tree (fig. 2B). Henceforth, we will denote PBS as PBS_3 when applied to a three-population tree (fig. 2A) and as PBS_4 when applied to a four-population tree (fig. 2B). To derive PBS_4, suppose that we have four populations W, X, Y, and Z that are related by the unrooted tree depicted in figure 2B. Then, we can compute four PBS_4 values for a particular gene, one corresponding to its population-specific differentiation in each population. Because the PBS_4 value for a gene in a population represents the differentiation that occurred in the lineage of that population, it can be estimated by the length of the external branch corresponding to the population. We can obtain the length of each external branch by first computing four distances: those between populations W and X (E_W,X), W and Y (E_W,Y), X and Y (E_X,Y), and X and Z (E_X,Z). Then, we can use these distances to compute the length of each external branch by following the schematic pictured in figure 2B.
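The branch computation is compact enough to transcribe directly. The sketch below implements the PBS_3 formula as reconstructed above, with the log transform E = −log(1 − F_ST) of Yi et al. (2010); the comment on PBS_4 reflects our reading of the paper's figure 2B schematic, and the example F_ST values are hypothetical.

```python
import math

def branch_dist(fst):
    """Divergence transform used by PBS: E = -log(1 - F_ST)."""
    return -math.log(1.0 - fst)

def pbs_w(fst_wx, fst_wy, fst_xy):
    """External-branch length for population W from the three pairwise
    distances involving W's two neighbours (Yi et al. 2010). Under the
    reconstruction above, PBS_4 for W uses the same combination, with
    the fourth distance E_X,Z entering only the remaining branches."""
    return 0.5 * (branch_dist(fst_wx) + branch_dist(fst_wy)
                  - branch_dist(fst_xy))

# Hypothetical per-gene values in which W is strongly differentiated:
print(f"PBS_W = {pbs_w(0.30, 0.35, 0.05):.3f}")   # ~0.37
# The "expression PBS" analog simply substitutes pairwise P_ST for F_ST.
```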
Population-Specific Genetic and Expression Differentiation of Genes with Copy Number Variations

Gene duplications and deletions are key contributors to human genetic diversity (Sudmant et al. 2015). Moreover, because they are large-scale mutation events that may impact gene dosage, duplications and deletions have been implicated in numerous human diseases (Sebat et al. 2004; Kumar et al. 2008; Sharp et al. 2008; Weiss et al. 2008), as well as in adaptive events in many diverse species (Kaessmann 2010; Chen et al. 2013). For these reasons, genes harboring copy number variations (CNVs) are thought to be more frequently targeted by natural selection than those without CNVs (Freeman et al. 2006; Nguyen et al. 2006). Indeed, genes with CNVs often display signatures of adaptation (Sudmant et al. 2015), and fixation of duplications and deletions has been associated with natural selection in many species (Freeman et al. 2006; Nguyen et al. 2006; Han, Demuth, et al. 2009; Jiang and Assis 2017). Therefore, we hypothesized that genes with CNVs would have larger genetic and expression PBS_4 values than genes without CNVs. To test this hypothesis, we compared the distributions of maximum PBS_4 values of genes with and without known human CNVs larger than 50 bp (fig. 3), finding that maximum PBS_4 values are indeed elevated in genes with CNVs. Though the magnitudes of the effects are modest, genes with CNVs also contain more SNPs than those without CNVs (P < 0.001, two-sample permutation test; see Materials and Methods for details), which is expected to decrease their genetic PBS_4 values (Yi et al. 2010). Taken together, these findings suggest that genes with CNVs tend to undergo increased population-specific genetic and expression differentiation that is consistent with positive selection.

However, increased population-specific genetic and expression differentiation of genes with CNVs may not only be attributed to positive selection, but alternatively to relaxed selective constraint. To disentangle these mechanisms, we examined levels of background selection in genes with and without CNVs. Background selection reduces genetic diversity at linked deleterious sites (Charlesworth et al. 1993), and is therefore weaker in regions with reduced selective constraint. As a result, if genes with CNVs primarily evolve under relaxed selective constraint, then we expect a reduction in their levels of background selection relative to those of genes without CNVs. To determine whether this is the case, we compared distributions of median B values (McVicker et al. 2009) in genes with and without CNVs. We found no significant difference between groups
(supplementary fig. 2A, Supplementary Material online, P > 0.05, two-sample permutation test; see Materials and Methods for details), suggesting that overall levels of selective constraint do not differ between genes with and without CNVs. Further, because F_ST is correlated with background selection (Charlesworth et al. 1997), we performed a follow-up analysis in which we explicitly accounted for background selection when comparing the genetic PBS_4 of genes with and without CNVs. Specifically, we corrected F_ST for background selection using estimated B values (see supplementary Methods, Supplementary Material online, for derivation) and recalculated the background selection-corrected F_ST and genetic PBS_4 of each gene. Even after this correction, genetic PBS_4 is elevated in genes with CNVs (supplementary fig. 2B, Supplementary Material online, P < 0.001, two-sample permutation test; see Materials and Methods for details). Whereas B values are not perfect measures of selective constraint, particularly for short evolutionary timescales, these findings better support the hypothesis that increased population-specific differentiation in genes with CNVs is due to positive selection than to relaxed selective constraint.

Relationship of Population-Specific Genetic and Expression Differentiation to Gene Function in Europeans

A natural question that emerges from our study is whether there are functional drivers of population-specific genetic and expression differentiation. In answering this question, it was important to exclude YRI, as it is an outgroup to the three European populations and therefore contains greater overall population-specific genetic and expression differentiation that cannot be polarized. Hence, we only considered TSI, GBR, and FIN populations. To globally assess functional modules contributing to population-specific genetic and expression differentiation in these populations, we utilized annotation data from the GO Consortium (Ashburner et al. 2000; GO Consortium 2018). In particular, GO terms classify genes by their molecular functions, cellular components, and biological processes (Ashburner et al. 2000; GO Consortium 2018). Though GO terms refer to intracellular gene functions that cannot be directly related to phenotypes that natural selection acts on, they can aid in elucidating the classes of gene functions that may be associated with population-specific genetic and expression differentiation. To examine these associations, we ranked genes by their genetic and expression PBS_4 values in each European population, performed GO enrichment analysis on ranked lists, and extracted significantly overrepresented GO terms (supplementary tables S6-S14, Supplementary Material online; see Materials and Methods for details). After correcting for multiple testing, there are no significantly enriched GO terms for genetic PBS_4 in any of the populations (supplementary tables S6-S8, Supplementary Material online). However, there are many significantly enriched GO terms for expression PBS_4 in all three populations (supplementary tables S9-S14, Supplementary Material online). Enriched GO terms for expression PBS_4 calculated from P_ST with h² = 0.5 and h² = 1 are similar, consistent with our previous comparisons (see figs. 1 and 3). Moreover, several enriched GO terms are shared among the three related populations, and numerous related terms are enriched in individual populations.
Though most GO terms are quite general and have limited interpretability, it appears that population-specific expression differentiation in Europeans often affects genes involved in signal transduction and immunity. This is not surprising, as such processes are frequent targets of natural selection (Barreiro and Quintana-Murci 2010; Fumagalli et al. 2011; Enard et al. 2016).

To glean further insight into the individual genes potentially driving population-specific genetic and expression differentiation in Europeans, we performed literature searches on genes with the largest genetic and expression PBS_4 values in each population (tables 1 and 2). In both TSI and GBR, the gene with the largest genetic PBS_4 value is MCM6, or Minichromosome Maintenance Complex Component 6. MCM6 is part of a protein complex essential for the initiation of eukaryotic genome replication (Labib et al. 2000). Two of its introns contain enhancers for its upstream gene LCT, or Lactase, one of which has a mutation prevalent in European populations that is thought to confer lactose tolerance in adulthood (Enattah et al. 2002; Troelsen et al. 2003). Interestingly, LCT also has the second-largest genetic PBS_4 in GBR, and several genetic studies have identified both MCM6 and LCT as targets of recent positive selection in Europeans (Bersaglieri et al. 2004; Voight et al. 2006; Ranciaro et al. 2014; Cheng et al. 2017). In FIN, the gene with the largest genetic PBS_4 value is HLA-DPA1, or Major Histocompatibility Complex, Class II, DP Alpha 1. As a member of the HLA gene family, HLA-DPA1 plays an important role in antigen presentation (Bottazzo et al. 1983) and is believed to be evolving under balancing selection in humans (Hughes and Nei 1988, 1989; Takahata and Nei 1990; Hughes and Yeager 1998; Yasukochi and Satta 2013).

In TSI, the gene with the largest expression PBS_4 value (calculated from P_ST with h² = 0.5 and h² = 1) is PRKCB, or Protein Kinase C Beta. PRKCB is involved in numerous signaling pathways, including apoptosis (Reyland 2009) and B cell activation during immune response (Lutzny et al. 2013). As a result, mutations in PRKCB are associated with many cancers (Lutzny et al. 2013; Wallace et al. 2014; Antal et al. 2015) and autoimmune diseases (Han, Zheng, et al. 2009; Sheng et al. 2011; Kawashima et al. 2017). The association with autoimmune diseases is particularly intriguing, as such genes are often targets of recent positive selection (Barreiro and Quintana-Murci 2010; Ramos et al. 2014). It is hypothesized that mutations that cause autoimmune response today may have conferred pathogen resistance in the past (Barreiro and Quintana-Murci 2010). In GBR, the gene with the largest expression PBS_4 value (calculated from P_ST with h² = 0.5 and h² = 1) is PRRX1, or Paired Related Homeobox 1. PRRX1 is a DNA-associated protein that is involved in the establishment of diverse mesodermal muscle types during development (Martin et al. 1995). It has also been connected with numerous cancers (Takahashi et al. 2013; Guo et al. 2015; Hirata et al. 2015; Jurecekova et al. 2016; Takano et al. 2016; Zhu et al. 2017) and is thought to mediate metastasis (Ocaña et al. 2012; Takahashi et al. 2013; Guo et al. 2015; Zhu et al. 2017). In FIN, the genes with the two largest expression PBS_4 values are VDR followed by FZD1 when P_ST was calculated with h² = 0.5, and FZD1 followed by VDR when P_ST was calculated with h² = 1.
VDR, or Vitamin D Receptor, interacts with vitamin D in the small intestine to facilitate calcium transportation into circulation (Holick 2006). Skin exposure to solar ultraviolet radiation produces about 90% of the vitamin D that an individual requires (Holick 2006), and living at high latitudes has been associated with vitamin D deficiency due to decreased ultraviolet radiation (Kimlin 2008; Chaplin and Jablonski 2009). Therefore, it is possible that expression differentiation of VDR may contribute to high latitude adaptation in FIN. FZD1, or Frizzled Class Receptor 1, is a receptor for Wnt signaling proteins (Kennerdell and Carthew 1998). It has been associated with several cancers (Kirikoshi et al. 2001; Benhaj et al. 2006; Zhang et al. 2015) and specifically with chemoresistance (Flahaut et al. 2009), thus making it a promising therapeutic target.

Materials and Methods

Gene Expression Analyses

We obtained RNA-seq data from lymphoblastoid cell lines in TSI, GBR, FIN, and YRI populations from the GEUVADIS project (Lappalainen et al. 2013). These data comprise 93 individuals in TSI, 94 individuals in GBR, 95 individuals in FIN, and 89 individuals in YRI, all of whom are from the 1000 Genomes Project (1000 Genomes Project Consortium 2015). We excluded data from the population of Utah Residents with Northern and Western European Ancestry (CEU) because they were collected from an older cell line and have been shown to display expression patterns that are inconsistent with their relationships to other populations (Yuan et al. 2015). We quantified the abundance of transcripts using featureCounts (Liao et al. 2014) with default parameters and the GRCh37 human genome (Zerbino et al. 2018) as our reference. To normalize count data, we used the "median ratio" method (Anders and Huber 2010) by implementing the estimateSizeFactors function in DESeq2 (Love et al. 2014). Next, we calculated the Fragments Per Kilobase of transcript per Million mapped reads (FPKM) of each gene using DESeq2 (Love et al. 2014). We removed genes that contained fewer than ten reads in each sample (lowly expressed), were located on sex chromosomes, or were not protein coding. For the remaining 13,075 genes, we log-transformed their FPKM values by log(FPKM + 1). We computed the P_ST for each gene as P_ST = σ²_between / (σ²_between + 2h²σ²_within) (Leinonen et al. 2006), where σ²_between is expression variance between populations, σ²_within is expression variance within populations, and h² is heritability. For our analysis, we used h² = 0.5 and h² = 1 as was done previously (Leinonen et al. 2006), though we note that the patterns in figure 1 do not change as a function of h². When h² = 1, P_ST reduces to Q_ST (Spitze 1993), another common metric for differentiation of quantitative traits between populations.
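The P_ST computation can be sketched as follows. Note that the paper does not spell out the exact variance-component estimators, so the simple moment estimates below are an assumption, as are the function and argument names.

```python
import numpy as np

def p_st(expr_a, expr_b, h2=0.5):
    """P_ST = Var_between / (Var_between + 2*h2*Var_within) for one gene,
    computed from log-transformed FPKM values of two populations."""
    a = np.asarray(expr_a, dtype=float)
    b = np.asarray(expr_b, dtype=float)
    grand_mean = np.concatenate([a, b]).mean()
    # variance of the population means around the grand mean
    var_between = np.mean([(a.mean() - grand_mean) ** 2,
                           (b.mean() - grand_mean) ** 2])
    # average within-population variance
    var_within = np.mean([a.var(ddof=1), b.var(ddof=1)])
    return var_between / (var_between + 2.0 * h2 * var_within)
```

With h2=1, the same function returns Q_ST, consistent with the reduction noted above.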
Population-Genetic Analyses

We downloaded the 1000 Genomes Project phase 3 data set (1000 Genomes Project Consortium 2015) for TSI, GBR, FIN, and YRI populations from ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/, last accessed February 12, 2020. To be conservative in our analyses, we only included the 371 individuals also present in the GEUVADIS Project (Lappalainen et al. 2013). After filtering out insertions, deletions, and monomorphic sites, we were left with 30,734,317 biallelic SNPs. Though we used SNPs of all allele frequencies, limiting our analysis to those with minor allele frequencies >0.01 did not alter our findings. We calculated Hudson's F_ST for each SNP as F_ST^Hudson = 1 − H_w/H_b, where H_w and H_b are the mean numbers of pairwise differences within and between populations, respectively (Reynolds et al. 1983; Weir and Cockerham 1984; Bhatia et al. 2013). Then, we combined SNPs within the entire annotated region of each gene and computed the "ratio of averages" for Hudson's F_ST (Reynolds et al. 1983; Weir and Cockerham 1984; Bhatia et al. 2013). Because negative F_ST values are not defined (Wright 1951) and have no biological interpretation (Akey et al. 2002), we followed the standard of setting all negative F_ST = 0 (e.g., Nei 1990; Akey et al. 2002).

Phylogenetic Analyses

To infer population trees, we first constructed gene trees using the NEIGHBOR program in the PHYLIP package (Felsenstein 2005). We constructed gene trees using either F_ST or P_ST as input distances between populations. Application of the UPGMA algorithm in the NEIGHBOR program yielded totals of 12,977 gene trees for F_ST and 13,075 gene trees for P_ST. Next, we used gene trees as input for the CONSENSE program in the PHYLIP package (Felsenstein 1993) and obtained rooted population trees supported by the majority of gene trees based on F_ST and P_ST. Specifically, the nodes in gene trees are included if they continue to resolve the population tree and do not contradict with more frequently occurring nodes. The number above each node in figure 1 represents its proportion in all gene trees.

Calculation of PBS_4

We first computed the genetic or expression distance between populations as E_{A,B} = −log[1 − Z_ST(A, B)], following the approach of Cavalli-Sforza (1969), where Z_ST represents either F_ST or P_ST between populations A and B. We used these as input for calculations of genetic and expression PBS_4 values. Negative branch lengths were set to 0.

Gene Ontology Enrichment Analyses

Genes were ranked by their genetic PBS_4 and expression PBS_4 values in each population (provided in supplementary tables S3-S5, Supplementary Material online). We performed Gene Ontology (GO) enrichment analysis on each ranked list of genes with the web-based GOrilla tool at http://cbl-gorilla.cs.technion.ac.il/, last accessed February 12, 2020 (Eden et al. 2007, 2009), which searches for enriched GO terms that appear densely at the top of a ranked list of genes (Eden et al. 2007, 2009). For each run, we chose "Homo sapiens" as the organism, set the running mode to "Single ranked list of genes," selected all ontologies (process, function, and component), and set the threshold P = 10⁻³.

Statistical Analyses

All statistical analyses were performed in the R software environment (R Core Team 2013). Two-sample permutation tests were used to assess differences between all pairs of distributions compared in figure 3 and supplementary figures 1 and 2, Supplementary Material online. For each test, we performed 1,000 permutations, using the difference between medians of groups as the test statistic. In particular, we computed the difference between the medians of the two groups for each permutation, and the P value of the permutation test as the proportion of times the absolute value of this difference was greater than or equal to the absolute value of the observed difference in the data. Student's t-tests were used to assess the statistical significance of correlation coefficients shown in supplementary tables 1 and 2, Supplementary Material online.
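The two-sample permutation test described above translates directly into code. This sketch (the analyses were run in R; the Python version and its naming are ours) follows the stated procedure of 1,000 permutations with the absolute difference between group medians as the test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(x, y, n_perm=1000):
    """Two-sample permutation test on the difference between medians.
    Returns the proportion of permuted differences at least as large
    (in absolute value) as the observed difference."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = abs(np.median(x) - np.median(y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of group membership
        diff = abs(np.median(pooled[:len(x)]) - np.median(pooled[len(x):]))
        if diff >= observed:
            count += 1
    return count / n_perm
```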
Discussion

Identifying drivers of human phenotypic differentiation is crucial to understanding adaptive events that occurred in the past, as well as to developing population- and individual-targeted treatments for diseases in the future (Jorde et al. 2001; Sabeti et al. 2002; Akey et al. 2004). Though previous research (Sabeti et al. 2002; Akey et al. 2004; Voight et al. 2006) has made use of abundant whole-genome and polymorphism data for many human populations (International HapMap 3 Consortium 2010; 1000 Genomes Project Consortium 2015) to answer this question, simultaneously studying genetic and expression differentiation may provide unique insights into direct phenotypic targets of natural selection. In particular, it is thought that phenotypic evolution more often occurs through changes in gene regulation and expression, rather than their protein-coding sequences (King and Wilson 1975; Wang et al. 1996; Wray et al. 2003; Carroll 2005, 2008; Raj et al. 2010). For this reason, gene expression differentiation might better reflect phenotypic differentiation. Therefore, a major advantage of the present study is that we utilized both genetic and expression data to address questions about population-specific differentiation in humans. Further, results from our combined analysis suggest that population-specific genetic and expression differentiation in humans may be attributed to several important biological processes, most notably signal transduction and immunity, and also pinpoint many candidate genes for future studies of human phenotypic variation in adaptation and disease.

Yet, there are three key limitations of the data analyzed here that must be considered when interpreting our findings in the context of human evolution. The first is that there is only a single estimate of the expression level of a gene in each population, which is particularly problematic given the complex and dynamic nature of gene expression data. In contrast, there are multiple SNPs per gene in each population, and genetic data are static. Therefore, we expect our estimates derived from expression data to have lower accuracy and higher variance than those from genetic data. Indeed, we found that gene trees constructed with F_ST match the topology of the inferred population tree more often than those constructed with P_ST and, further, that mismatches between topologies of gene trees constructed with F_ST and the inferred population tree are associated with fewer SNPs. Hence, it is also not surprising that genetic and expression PBS_4 do not have common outlier genes (supplementary tables S3-S5, Supplementary Material online), and gene-level values of expression (and in some cases genetic) PBS_4 should thus be interpreted with caution. In spite of this issue, a handful of genes with the largest expression PBS_4 are well-known candidates of adaptation, such as VDR (Kimlin 2008; Chaplin and Jablonski 2009). Moreover, at a genome-wide level, the discordance between findings derived from genetic and expression data illustrates the importance of integrating both types of data into population-genetic studies. Nevertheless, future availability of larger sample sizes for gene expression data in multiple human populations will be invaluable for accurately pinpointing genic targets of population-specific expression differentiation in humans.

The second caveat is that TSI, GBR, and FIN are closely related European populations. As a result, genetic distances among them are small, which can lead to noise in gene-level analyses. Moreover, due to shared ancestry and gene flow among these closely related populations, their genetic and expression differentiation are likely to be correlated.
This limitation is clearly demonstrated by MCM6 having the largest genetic PBS_4 value in both TSI and GBR, which are the most closely related of the three European populations studied. Thus, though genome-wide patterns of genetic and expression differentiation are consistent with population relationships, caution needs to be taken when making inferences based on the genetic and expression PBS_4 values of individual genes. Despite this limitation, several genes with the largest genetic PBS_4 values, such as MCM6 and HLA-DPA1, are well-established targets of natural selection (Hughes and Nei 1988, 1989; Takahata and Nei 1990; Hughes and Yeager 1998; Bersaglieri et al. 2004; Voight et al. 2006; Yasukochi and Satta 2013; Ranciaro et al. 2014; Cheng et al. 2017), and novel candidates therefore may represent promising avenues for future research. Nevertheless, phenotypic differences among distantly related populations are better described than those among closely related populations, making it inherently more difficult to interpret our findings in the context of human phenotypes. Therefore, future availability of RNA-seq data from additional populations, particularly those that are more distantly related, will be critical to studying population-specific variation and its role in both human evolution and disease.

The third limitation is that the RNA-seq data used in this study were obtained from lymphoblastoid cell lines. In particular, the enrichment of immune-related functions in genes with high levels of population-specific expression differentiation may be attributed to usage of this cell line, rather than reflecting widespread evolutionary patterns of immunity genes across tissues. Yet, it is important to note that associations between increased population-specific expression differentiation and immunity are consistent with previous findings. Specifically, immunity genes are among the fastest evolving genes in the human genome, likely due to adaptations to rapidly changing environments and introductions of novel pathogens (Barreiro and Quintana-Murci 2010; Fumagalli et al. 2011; Enard et al. 2016). Therefore, though observed patterns of population-specific expression differentiation may not be representative of those in other cell types, genes with high population-specific expression differentiation should be further studied to examine their potential roles in human evolutionary history and disease. Regardless, future availability of RNA-seq data for multiple cell or tissue types in several populations will be invaluable for capturing complex patterns of population-specific expression differentiation and pinpointing genic targets of phenotypic variation among human populations.

In spite of the noted issues with the data analyzed here, a major advantage of our study is the design of PBS_4, a novel summary statistic that can be used to estimate population-specific differentiation of a quantitative trait in four populations. PBS_4 requires minimal assumptions about the data and can be used to rapidly estimate population-specific differentiation on a genome-wide scale. Further, because PBS_4 utilizes data from four populations, branch lengths are more likely to represent true population-specific differentiation than differentiation that occurred ancestral to two populations, as is possible in a three-population scenario (Assis 2019).
Therefore, though the data set used in our study is not ideal in many respects, PBS_4 can easily be applied to existing or future data sets to estimate population-specific differentiation of a wide array of genetic, expression, and other measurable traits in humans and other species. In particular, we envision that application of PBS_4 to future human RNA-seq data from multiple cell lines or tissues and in many populations of varying divergence levels will shed light on complex questions about human evolutionary history and disease processes.

Supplementary Material

Supplementary data are available at Genome Biology and Evolution online.
Treatment of Subacromial Impingement Syndrome: Platelet-Rich Plasma or Exercise Therapy? A Randomized Controlled Trial

Background: Subacromial impingement syndrome (SAIS) is the most common disorder of the shoulder. The evidence for the effectiveness of treatment options is inconclusive and limited. Therefore, there is a need for more evidence in this regard, particularly for long-term outcomes.

Hypothesis: Platelet-rich plasma (PRP) would be an effective method in treating subacromial impingement.

Study Design: Randomized controlled trial; Level of evidence, 1.

Methods: This was a single-blinded randomized clinical trial with 1-, 3-, and 6-month follow-up. Sixty-two patients were randomly placed into 2 groups, receiving either PRP or exercise therapy. The outcome parameters were pain, shoulder range of motion (ROM), muscle force, functionality, and magnetic resonance imaging findings.

Results: Both treatment options significantly reduced pain and increased shoulder ROM compared with baseline measurements. Both treatments also significantly improved functionality. However, the treatment choices were not significantly effective in improving muscle force. Trend analysis revealed that in the first and third months, exercise therapy was superior to PRP in pain, shoulder flexion and abduction, and functionality. However, in the sixth month, only shoulder abduction and total Western Ontario Rotator Cuff score were significantly different between the 2 groups.

Conclusion: Both PRP injection and exercise therapy were effective in reducing pain and disability in patients with SAIS, with exercise therapy proving more effective.

PRP releases various growth factors involved in the tissue repair process. 15 There is some evidence demonstrating a positive effect of PRP in tendinopathies 4,12 and osteoarthritis of the knee 3 ; however, the evidence in rotator cuff tendinopathy is limited. In spite of increased PRP use in clinical settings, we found only 3 randomized controlled trials that evaluated the effectiveness of PRP injection in treating rotator cuff tendinopathy nonsurgically. 24,35,39 On the other hand, although inconclusive, current evidence suggests that physical therapy is effective in treating patients with SAIS. 1,10,14,23,26,27,30,34 For those patients who seek nonsurgical treatment in the early stages of SAIS, therapeutic exercise combined with other therapies (eg, Kinesio taping, acupuncture, localized corticosteroid injection, and ultrasound) is recommended. 7 However, there is a need for further evidence of the effectiveness of both PRP and exercise in treating patients with SAIS. We conducted a randomized controlled trial (RCT) comparing PRP injections with exercise therapy. Our hypothesis was that PRP would be more effective at reducing pain and improving function than exercise therapy.

Study Design

This study was a parallel-group, single-blinded RCT with 1-, 3-, and 6-month follow-ups conducted from April 2013 to October 2014. All clinical assessments and treatments were performed at a university hospital in Tehran, Iran. This study was approved by the ethics committee of Iran University of Medical Science, and all participants provided informed consent. The trial was registered in the Iranian Registry of Clinical Trials.

Sample Size

With the repeated-measures design of the study in mind, G*Power 3.1.5 software (Heinrich-Heine-Universität Düsseldorf) was used to calculate the required sample size. We considered an effect size equal to 0.4 in our sample size calculation. Based on a power of 80% and a 2-tailed α of .05, we calculated that the sample size required per group was 18. Assuming a 15% loss to follow-up, the final sample size required was 21 patients per group.
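The attrition adjustment is simple arithmetic, though inflation conventions differ across trials: dividing 18 by (1 − 0.15) would give 22 per group, whereas multiplying by (1 + 0.15), as sketched below, reproduces the reported 21. The snippet and its variable names are illustrative only.

```python
import math

n_per_group = 18   # from the power analysis (effect size 0.4, power 80%, alpha .05)
dropout = 0.15     # anticipated 15% loss to follow-up

# Inflate the analyzable sample by the expected dropout fraction:
n_final = math.ceil(n_per_group * (1 + dropout))  # 18 * 1.15 = 20.7 -> 21
print(n_final)     # 21 patients per group
```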
Patient Recruitment

For the purposes of recruiting patients, advertising posters were put up in several local hospitals. Moreover, a number of medical practitioners interested in shoulder pathologies were notified of the objectives of the study by email and were asked to introduce their patients to us. A total of 72 volunteer patients came forward. SAIS was diagnosed via a clinical assessment. Patients underwent shoulder magnetic resonance imaging (MRI) for diagnosis confirmation. The 3-mm cuts were taken in T1-weighted, T2-weighted, proton density sequences in 3 planes (sagittal, coronal, and axial) using a 1.5-T MAGNETOM Essenza, a Tim system MRI (Siemens). Of these 72 patients, 62 were included in this study (Figure 1).

The inclusion criteria were (1) a minimum age of 40 years, (2) shoulder pain lasting at least 3 months prior to the study, (3) platelet count of more than 100,000, and (4) positive result in at least 3 of the following tests: empty can test, Speed test, Jobe test, Neer impingement sign, and Hawkins-Kennedy test. Exclusion criteria consisted of (1) radicular pain; (2) presence of pathologies such as frozen shoulder, calcific tendinitis, biceps dislocation, and a superior labrum anterior posterior lesion; (3) previous surgery within 6 months; (4) inflammatory diseases such as rheumatoid arthritis, polymyalgia rheumatica, or fibromyalgia; (5) full-thickness rotator cuff tear on MRI; (6) ligamentous laxity (positive sulcus test) or shoulder dislocation (positive apprehension test); (7) corticosteroid injection within 3 months prior to the study; (8) physical therapy 6 months prior to the study; (9) fear of MRI; and (10) contraindication to MRI.

Group Allocation

Because of the possibility of sample attrition, all 62 patients who had already met the inclusion criteria were recruited for this study. To eliminate the effect of confounding variables in the design stage, SAIS stage in shoulder MRI was considered as a confounder; thus, 2 different sequences were used to randomly allocate patients into the treatment groups: the first sequence for stages 1 and 2 and the second sequence for patients with stage 3 SAIS (only partial tear). After assessing the MRI findings for each patient, we used the proper sequence of randomization. After that, random number generator software was used for randomization. In this single-blinded RCT, only the assessor was blinded to group assignment. One group received PRP injections and the other exercise therapy. The CONSORT (Consolidated Standards of Reporting Trials) flowchart of the study is given in Figure 1.

Interventions

Participants were requested to refrain from receiving other forms of intervention for 6 months. They were, however, advised to take 500 mg of paracetamol if the pain in their shoulder during rest was more than a 5 on 10-point visual analog scale (VAS). Patients were also advised to avoid painful activities and to continue their usual daily activities during the study.

Platelet-Rich Plasma Group. Patients in the PRP group were injected twice: once at the beginning of the study and again 1 month after the first visit. On each occasion, an aliquot of 25 mL of venous blood was collected from each patient using a syringe containing 2.5 mL of anticoagulant citrate dextrose solution.
The samples were projected into a Tubex Autotube System (Moohan Enterprise) and were centrifuged at 1300 rpm for 10 minutes. The separated plasma was subsequently centrifuged at 2770 rpm for 8 minutes. As a result, 5 mL of PRP was prepared. One milliliter of this PRP was sent to a laboratory for platelet counting. It was revealed that the obtained PRP had a platelet concentration of approximately 900,000 ± 15,000 platelets per mm³, almost 3 times the size of the baseline blood platelet count. The leukocytes obtained from the centrifugation were also measured to be 5000 to 10,000 per mm³. The remainder of the obtained PRP (4 mL) was injected into the injured tendons under sterile conditions without any activator within 30 minutes of centrifugation. More specifically, 3 mL of PRP was injected into the partial tear in the tendon or, in the case of patients with tendinopathy, into hypoechogenic areas using an 18-gauge catheter guided by a 10-MHz ultrasound machine (Mindray). The other 1 mL was injected into the subacromial space from the lateral posterior side of the arm at an angle of 45° to the horizon without ultrasound guidance. Patients were advised to avoid ice packs and excessive use of their shoulder joint within 48 hours after injection. In addition, they were asked not to take NSAIDs or aspirin for a period of 12 days, starting from 1 week before the injection and ending on the fifth day after injection. They were also asked not to eat onion, garlic, or dogwood over the same period, as these foods are known to affect platelet counts. They did not participate in exercise therapy for 6 months.

Exercise Therapy Group. Patients in the exercise therapy group received supervised exercise therapy in the hospital once a week for 3 months and performed the therapy exercises at home on the other days of the week. After this supervised period, the hospital program was terminated, but the patients were asked to continue the exercises at home for 6 months. No supervision was provided during this latter period. Each exercise session began with warm-up aerobic activities lasting for 10 to 15 minutes and ended with ice packs being applied on the affected areas for 20 minutes to relieve pain. A number of images showing how each exercise should be performed were also provided. The exercises were performed in 4 phases (see the Appendix). Each patient, depending on his or her condition, started with phase 1 and progressed to phase 4.

Phase 1 was aimed at achieving passive range of motion (ROM) without pain. For this purpose, the isometric shoulder exercise and the passive ROM exercise were performed in all directions 8 to 10 times per day. Postural exercises (eg, chin tuck and scapular retraction) and glenohumeral ROM exercises were also performed 15 to 20 times per day. In the event of a 50% increase in the ROM, the active-assistive ROM exercise was performed in all directions with the help of a strap. Also in this phase, cross-body and neck stretches were performed 4 times a day, each for a length of 10 seconds. Mobilization exercises were performed once per week. When a patient was able to perform the passive and active-assistive ROM exercises fully and painlessly, phase 2 (active ROM exercises) began. Shoulder abduction or scaption (scapular plane elevation) was performed by elevating the arm in the scapular plane to an angle of less than 60°. Strength training was performed on the external and internal rotator cuff muscles while the arms were placed at the sides of the body.
This exercise was in the form of 3 sets per day, each with 10 repetitions. The stretching exercises performed in phase 1 were also performed in phase 2, but their duration was increased to 15 to 20 seconds. The aim of phase 3 was to strengthen the muscles of the rotator cuff and scapula. Scaption was performed at an angle greater than 60°. The exercises intended to strengthen the rotator cuff muscles responsible for external and internal rotation of the humerus were performed at a 90° angle to shoulder abduction. The reverse-fly, shoulder extension, and bent-over row exercises were performed using an elastic band or a 1- to 1.5-kg weight in 3 sets of 10 repetitions each. In phase 4, the exercises intended to train the scapular muscles were performed using a medicine ball. The exercises for strengthening the muscles of the rotator cuff and biceps were performed in 3 sets of 15 repetitions with a gradual increase of 25% to 50% in external resistance.

Outcome Parameters

The primary outcome parameter was pain. In addition to baseline measurement, patients underwent follow-up 1, 3, and 6 months later. Pain was measured using a 10-point VAS, with higher scores on the scale showing more pain. The secondary outcome parameters were shoulder ROM, muscle strength, patient-reported outcome measures (Disabilities of the Arm, Shoulder, and Hand [DASH] and Western Ontario Rotator Cuff Index [WORC]), and MRI findings. As for shoulder-active ROM, a goniometer was used to measure flexion, extension, abduction, internal rotation in 90° of shoulder abduction, and external rotation in 90° of shoulder abduction. Muscle strength was assessed for shoulder flexion, abduction, and internal rotation via manual muscle strength testing and was measured on a scale from 0 (no active ROM) to 5 (full active ROM). 36 The WORC consists of 21 items in 5 categories: pain and physical symptoms, sports and recreation, work, lifestyle, and emotions. If a category score is closer to 100, the shoulder is in a poorer condition. However, a total score that is closer to 100 indicates that the shoulder is in a better condition. 33 The DASH questionnaire contains 30 items and measures the ability to do various activities of the upper extremities, including carrying loads and tools, overhead activities, key turning, writing, and many other activities of daily living. As the DASH score gets closer to 100, the shoulder is considered to be in a poorer condition. 32 MRIs of each patient were taken at the beginning of the study and again 6 months later. A musculoskeletal radiologist who had more than 10 years of experience assessed the MRIs of the patients for signs of tendinopathy (ie, signal change without loss of tendon integrity), partial tears in the tendons of the biceps or the rotator cuff (ie, partial tears involving less than 50% of the tendon thickness), or pathologies in the subacromial space such as bursitis. The difference between baseline and follow-up observations was classified as either improvement, no change, or worsening.

Statistical Analysis

Data obtained from the patients were analyzed using Stata software (version 12; StataCorp). Normal distribution of the continuous variables was determined using the Shapiro-Wilk test. The data pertinent to these variables are shown as either mean ± SD or median, as appropriate. The categorical variables were analyzed using the chi-square test. Pretreatment differences between the 2 groups were determined using t tests. To determine the treatment effect, the data were analyzed using either a random-effects mixed model or a generalized estimating equations model, as appropriate. For all tests, statistical significance was set at an α level of <.05 (2-tailed).
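As an illustration of the repeated-measures model structure described above: the study used Stata, so the Python/statsmodels analogue below is only a sketch, and the data file and column names ('vas', 'group', 'month', 'patient') are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient per visit: pain score, treatment group,
# follow-up month (0, 1, 3, 6) and patient identifier.
df = pd.read_csv("outcomes.csv")

# Random-intercept mixed model: fixed effects for group, time and their
# interaction; a random intercept per patient accounts for the
# repeated measurements on the same individual.
model = smf.mixedlm("vas ~ group * C(month)", df, groups=df["patient"])
result = model.fit()
print(result.summary())
```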
Compliance

Patients receiving exercise therapy exhibited good compliance with treatment throughout the study period: 68.96% of the patients in the exercise group attended the 3-month course of exercise therapy (at home and in the hospital). They followed virtually all instructions. Patients in the PRP group also exhibited good compliance: 77.27% of them had both injections performed.

Effect of Treatment

The difference between the 2 groups was not significant at the beginning of the study considering the variables in question (Table 1). Both treatments resulted in improvement in pain (VAS) and function (total WORC and DASH) scores. The improvement in all areas of shoulder ROM was significant in the exercise group, but for the PRP group it was significant in all areas except external rotation. Improvement in muscle force did not reach significance. Tables 2 to 4 and Figures 2 to 4 show the effect of treatment, time, and differences between groups. As can be seen in Table 4, the 2 study groups were significantly different in some parameters. At the 1-month follow-up, the exercise group saw better results than the PRP group in VAS, total WORC, abduction ROM, and force of internal rotation. At the 3-month follow-up, the exercise group had better results than the PRP group in VAS, total WORC, DASH, abduction ROM, forward flexion ROM, force of flexion, force of internal rotation, and force of abduction. Finally, at the 6-month follow-up, the exercise group had better results than PRP only for total WORC and abduction ROM.

MRI Findings

A total of 38 patients (20 PRP, 18 exercise therapy) agreed to undergo both MRIs, one at the beginning of the study and the other 6 months after the study began. According to the MRI data obtained, none of the patients underwent any change in the pathology of the biceps or the acromiohumeral distance. Improvement in the appearance of supraspinatus tendinopathy was seen in 4 patients (3 PRP [P = .06] and 1 exercise therapy [P = .1]). Chi-square tests yielded P = .34, which indicated no significant difference between the 2 methods according to rate of improvement.

DISCUSSION

The main finding of the present research was that the shoulder pain emanating from SAIS can be reduced through either PRP injection or exercise therapy. At final follow-up, there was no significant difference between the 2 groups in our primary outcome measure (pain), but the exercise therapy group had significantly higher WORC scores and abduction ROM. The use of PRP in the surgical treatment of rotator cuff tendinopathy has been shown to expedite healing. 16,22,25,41,42 However, a study found that the nonsurgical use of PRP injection neither reduces pain nor improves functionality any better than placebo in patients with tendinopathy or partial tear of the rotator cuff who received exercise therapy. 24 In the current study, 2 PRP injections were given 30 days apart. The injections were made both into areas of tendinopathy and into the bursa. The continued clinical improvement seen in our patients could be related to the difference in PRP technique. The improved results obtained during the 3-month follow-up could be related to a delayed effect of the first PRP injection or to a boosting effect of the second injection.
These results echo those reported by Rha et al 39 but disagree with the findings of Randelli et al, 37 who reported that the effect of PRP injection decreased after 3 to 6 months as well as after 12 months. This difference can be attributed to the fact that the patients in the study by Randelli et al 37 received only 1 PRP injection whereas in the present study and in that by Rha et al, 39 there were 2 injections administered. Another explanation is that while the present research evaluated the effect of PRP injection on partial rotator cuff tears, Randelli et al 37 studied the effect of PRP injection on full-thickness rotator cuff tears. Similarly, in studies that investigated the effect of PRP injection on full-thickness rotator cuff tears, 5,6,41 PRP injection failed to prove effective in reducing pain and improving shoulder functionality. The inconclusive results reported in the literature about the effect of PRP injection are also attributable to the fact that different studies use different formulations of PRP. For instance, there is controversy about the use of leukocyte-rich PRP (L-PRP). Whereas some studies have claimed that L-PRP has the antibacterial property of regulating immunity and is the preferred treatment for pain relief in the medium- and long-term, 2,13 some researchers believe that the presence of leukocytes in PRP is a cause of inflammation, and the likelihood of repair decreases as the inflammation at the site of injection increases. 11,28,29 In the present research, the leukocyte content of the injected PRP was 1 to 1.5 times as high as that of the baseline PRP, suggesting that the presence of leukocytes does not disturb the healing process. A final point to reiterate is that we found no significant effect of PRP injection on external rotation ROM, a condition also observed by Kesikburun et al, 24 while in the exercise group, external rotation ROM was improved similar to other ranges of shoulder motion. We believe that this could be because the amount of external rotation in the PRP group was nearly in the normal range at the baseline (89.9°) but in the exercise group it was 76.2°, which was improved by exercise and mobilization.

The present study also provides more evidence in support of a healing effect of exercise therapy in patients with SAIS. Similarly, the results of other studies 23,26,27 indicate that exercise alone is an effective way of reducing pain and improving functionality in such patients. However, unlike our results, the RCTs evaluated in the review by Kuhn 27 did not report a significant improvement in shoulder ROM. This dissimilarity is perhaps because our exercise protocol focused on both passive and active-assistive exercises and shoulder mobilization in phase 1, but in some of the RCTs evaluated in the study by Kuhn, 27 the exercise protocol was limited to pendulum exercises. We also observed the greatest level of pain relief and the largest increase in flexion, extension, and abduction ROM in the first month of treatment, and this effect improved with the passage of time, as observed 6 months after the first visit. Similar findings are reported in the literature with regard to the positive short-term (6-12 weeks) role of exercise therapy in reducing pain, strengthening the muscles of the rotator cuff, stabilizing the scapula, and increasing shoulder ROM. 10,20,30
From another perspective, the frequency at which exercise therapy was provided in the present research was 5 times a week, and the findings align with those of Calis et al, 10 who showed that completing exercises 5 days a week for 3 weeks significantly increased flexion, abduction, internal rotation, and external rotation ROM and improved functionality. In addition, the results of the present research, like those of Calis et al, 10 indicate that performing more frequent exercises per week can bring about a more significant increase in shoulder ROM. Concerning the prolonged beneficial effect of exercise therapy, our study cannot provide evidence because the length of the follow-up period was rather limited (ie, a maximum of 6 months). However, Hallgren et al, 19 in a clinical trial with a follow-up of 1 year, found that a specific SAIS exercise protocol performed for 3 weeks decreases the need for surgery, suggesting that the effect of exercise therapy persists for at least 1 year. In addition, the studies by Brox et al 8,9 show that the effects of 3 to 6 months of strengthening exercises twice per week continue to be seen for 2.5 years. The present research found no difference between 3 and 6 months of exercise therapy, which may be because the benefits of therapy plateau after 3 months. However, supervision was discontinued after 3 months in this study. In other words, it is possible that the patients did not continue their exercise protocol as prescribed and thereby achieved minimal improvement throughout the rest of the experiment. This shows the importance of supervised exercise as a source of motivation. Finally, the results show that although patients with SAIS clinically improved as a result of the 2 treatment options under study, paraclinical (ie, MRI) data hardly changed. This finding concurs with the results reported in another study that showed that despite clinical improvement seen with physical therapy, sonography data did not show any tangible change even 9 months after treatment. 34

Limitations

As with any research, our study is not without limitations. A major limitation was the absence of a control group without treatment or with placebo injection. Other limitations included the short length of the follow-up period and crude measurement of muscle strength (manual muscle testing instead of a dynamometer).

CONCLUSION

This study showed that both PRP injection and exercise therapy can significantly reduce pain and improve shoulder ROM and functionality in patients with SAIS, with these beneficial effects lasting for 6 months. In spite of our hypothesis, exercise therapy was found to be more effective than the other treatment option until 3 months after initiation. Moreover, neither treatment choice significantly improved shoulder muscle force. What is more, even though the treatments resulted in clinical improvement, MRI findings did not change.
Component-based specification, design and verification of adaptive systems

Control systems are typically tightly embedded into their environment to enable adaptation to environmental effects. As the complexity of such adaptive systems is rapidly increasing, there is a strong need for coherent tool-centric approaches to aid their systematic development. This paper proposes an end-to-end component-based specification, design and verification approach for adaptive systems based on the integration of a high-level scenario language (sequence chart variant) and an adaptation definition language (statechart extension) in the open source Gamma tool. The scenario language supports high-level constructs for specifying contracts and the adaptation definition language supports the flexible activation and deactivation of static contracts and managed elements (state-based components) based on internal changes (e.g., faults), environmental changes (e.g., varying context) or interactions. The approach supports linking managed elements to static contracts to formally verify their adherence to the specified behavior at design time using integrated model checkers. Implementation can be derived from the adaptation model automatically, which can be tested using automated test generation and verified at runtime by contract-based monitors.

Many existing approaches propose the specification of functional system requirements using an informal language, whose automated utilization in later development phases, for example, V&V, is not feasible. In order to tackle these issues, model-based systems engineering (MBSE) and component-based systems engineering (CBSE) 5-10 put the focus on the application of models and components, considering them the main artifacts that provide the primary means of information exchange during system development. These artifacts can be utilized to support platform-independent, high-level design, the automated derivation of implementations to multiple platforms and automatic V&V both at design time and runtime with the potential utilization of formal methods. 11 However, there is a lack of cohesive tools as well as methodologies for building adaptive applications 12 and the employment of formal methods for V&V, for example, model checking, remains low. 11 Thus, engineers generally struggle with incompatible techniques and tools to specify requirements, design and implement the adaptation logic and managed elements (e.g., adaptable components) and verify and validate the resulting application using integrated tools. Consequently, there is a need for coherent tool-centric MBSE and CBSE approaches to aid the systematic development of adaptive systems along the entire development process, including (i) requirement specification, (ii) the design of adaptation logic and elements managed by it, (iii) formal verification and testing of the resulting behavior and (iv) automated derivation of implementation with monitoring support.

As a solution, we propose an approach in the context of our open source Gamma Statechart Composition Framework, 1,13 which is an integrated modeling toolset for the semantically well-founded modeling and analysis of state-based reactive systems. 14 Our approach offers a scenario language to specify contracts for adaptive behavior in terms of interactions (input and output events) and sequence chart 15 constructs, for example, alternative, parallel and loop fragments.
It also offers a design language, called adaptation definition language, based on the well-known statechart formalism 16 to design the adaptation logic. The language supports the definition of adaptive application behavior based on configurations of activated and deactivated state-based components (managed elements). Moreover, the activation and deactivation of scenario contracts can also be defined by linking them to component configurations that shall satisfy these contracts. The created adaptation model can be formally verified against contract specifications in our integrated design environment by mapping these models into the inputs of model checker tools. We also provide methods for deriving implementation (source code) for the designed system, as well as generating test cases that can check the conformance of the implementation on a specific platform to the contracts.

This work builds on and extends our contract-based specification and black-box test generation approach for adaptive systems, presented in its initial version in literature. 17 Nevertheless, this approach did not support adaptation logic design (the definition of application behavior in system states) and thus, either formal verification of adaptive behavior against contract specifications or implementation derivation supporting monitoring. In this work, we extend the original approach and provide solutions to these missing features. Accordingly, this paper presents an end-to-end CBSE approach for adaptive systems development seamlessly integrated in our modeling tool, Gamma, based on the following main features. In the development standards of safety-critical systems, for example in IEC 61508:2010, 18 dynamic reconfiguration in the software architecture is not recommended as it may complicate the achievement of predictability, verifiability and testability. Adopting a state-based approach to design the adaptation logic and managed components opens a way to design predictable adaptation. The use of precise modeling languages (including scenario-based representation of requirements) allows for complete formal verification and test generation with respect to well-defined test coverage criteria.

(More information about the framework and the source code can be found at http://gamma.inf.mit.bme.hu/ and https://github.com/ftsrg/gamma/.)

The rest of the paper is structured as follows. Section 2 presents the motivation and basic concepts of our work in the context of an example smart house themed adaptive system, and positions our approach among existing solutions (related work). Section 3 overviews our proposed CBSE approach for adaptive systems in our Gamma tool. Section 4 describes how adaptive behavior can be modeled in our approach based on the example system presented in Section 2. Section 5 presents the model transformations enabling the formal verification, test generation and code generation supporting monitoring for adaptive models in Gamma. Section 6 presents experiments with the smart house themed example model. Finally, Section 7 concludes the paper and outlines plans for future work.

MOTIVATION AND BACKGROUND

This section presents the motivation and background of our work. Section 2.1 introduces a motivating example, namely an adaptive smart house system, as well as basic concepts of adaptivity, in the context of which later sections present our CBSE approach. Section 2.2 covers related work and positions our solution in the state of the art based on the identified gaps in terms of adaptive systems development.
Motivating example and basic concepts of adaptivity

The motivating example builds on an adaptive smart house system 2 presented in literature 19,20 and extends it with additional components and adaptation behaviors. With this example, in addition to presenting our approach, we also aim to enrich the set of models available for evaluating solutions focusing on the development of adaptive systems. The adaptive smart house system, whose thorough functional breakdown can be found in the Appendix, controls the ventilation in a room based on the number of present people. The system, as illustrated in Figure 1, comprises two sensors, a camera and a simple motion sensor (see Figures 3 and 4), responsible for detecting people, as well as two actuators (see Figures 5 and 6), a smart ventilator and a switch, for controlling the ventilation level in the room based on presence data; these are the components (managed elements) managed by the adaptation logic realized by a central controller (adaptation model; we will present it later in Figures 7 and 10 after introducing our adaptation definition language to make its adaptation-related features understandable).

The adaptive nature of the system stems from the fact that these components can become unavailable (e.g., due to internal faults) or must be deactivated (e.g., due to external commands) to which the adaptation logic must react by component adaptation (reconfiguration), for example, activating the motion sensor after the failure of the camera (see an illustration in Figure 2), or the adaptation of parameters, for example, adjusting the frequency of communication between components based on presence data or energy consumption. Upon changing between component configurations, the requirements that the system must satisfy can also change, that is, the newly activated components may offer an extended functionality or can provide only a degraded system service, for example, in the case of the more complex camera and the simpler motion sensor components. History between the de- and reactivation of components may also have to be saved to support continuity after handling a certain external or internal event, for example, keeping the ventilation level in the room the same after handling a fault.

Regarding the detailed functionalities of the sensors, the motion sensor can detect only the moving of people in the room, whereas the camera can also identify the number of present people with image processing techniques, which can be used to adjust the ventilation level accordingly. The system wants to minimize internal event transmission between components to minimize power consumption by frequent communication and changes of ventilation level. Thus, the camera has an adjustable granularity parameter (variable) that sets the difference in the number of detected people that, compared to the last update, results in an update towards the actuators. The camera also has a battery that is drained when people are detected and can be recharged in an idle state. As a special feature, the motion sensor can count incoming motion events in a specified time interval and detect unexpected situations (i.e., too many events in a time interval), such as failures in the underlying hardware components or potentially malicious behaviors.
Regarding actuators, the switch can only turn off and on the ventilation at a predefined (default) level, whereas the ventilator can also adaptively set the ventilation level based on the number of present people (data received from the camera) and the time elapsed since the last received data (data becomes obsolete after a while). As demonstrated in the description of the example system, there are special concepts for adaptation, which must be addressed during the functional breakdown of the system (requirement specification), as well as during system design, to ensure its correct functioning in case of different events. Accordingly, we explain the following concepts in detail, which are prevalent in the design of adaptive systems both in our approach and in the state of the art. 21
Reconfiguration We refer to the activation and deactivation of components as reconfiguration in the rest of the paper (see the activation of the motion sensor after the failure of the camera in our example, illustrated in Figure 2). Our approach supports components with an event-driven behavior modeled using statecharts.
Parametric adaptation Parametric adaptation allows for the fine tuning of system behavior.
Related work and gaps in the development of adaptive systems
Many different aspects of adaptive systems development have been researched in the past 22,23 due to the wide spectrum of application types. Accordingly, many design solutions, for example, goal-based, [24][25][26] rule-based, 27 actor-based, 28 service-based 29 and model-based, 30 as well as V&V solutions, for example, (runtime and probabilistic) formal verification, 31-34 theorem proving 27 and monitor-based runtime verification, 35 have been presented. This section overviews adaptive systems development solutions related to ours that support behavior. In addition, adaptive CSP models can be exhaustively verified using process-algebraic mechanisms, as well as LTL model checking. The authors also present how adaptive CSP models can be used to derive implementations realizing the specified behavior; however, the derivation process is not fully automated. The main limiting factor in their methodology is that they do not support the description of adaptive behavior based on high-level requirement specification and state-based models, greatly hindering its application by systems engineers.
COMPONENT-BASED DEVELOPMENT OF ADAPTIVE SYSTEMS
This section presents our CBSE approach for adaptive systems. First, Section 3.1 overviews how our solution addresses the general concepts of adaptivity presented in Section 2. Next, Section 3.2 introduces the modeling languages supporting the approach, which is followed by the description of the design and internal transformation steps necessary from the designer (user) and the Gamma tool in the development workflow (see Section 3.3).
Proposed modeling strategy
In Section 2.1, we argued that the presented concepts of adaptivity must be addressed in a solution that aims to support the flexible and V&V-oriented design of adaptive systems. Accordingly, we describe how our approach addresses the above concepts with different modeling facilities; a minimal code sketch of the reconfiguration idea follows below. The relations and linking of these modeling facilities are demonstrated in a conceptual diagram in Figure 7 (explained in the sequel).
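For intuition, the following Python sketch shows reconfiguration as a state-based adaptation logic: each adaptation state fixes a configuration of active managed components, and error or command events move the logic between states. The state names, events and configurations are hypothetical simplifications of the smart house example, not the paper's actual Gamma models.

```python
# Hypothetical sketch of a state-based adaptation logic for the smart
# house example: states map to component configurations, and events
# (faults, commands) trigger reconfiguration. Not actual Gamma syntax.

CONFIGURATIONS = {
    "CameraMode": {"camera", "ventilator"},         # full functionality
    "MotionMode": {"motion_sensor", "ventilator"},  # degraded sensing
    "SwitchMode": {"motion_sensor", "switch"},      # degraded actuation
}

TRANSITIONS = {  # (current adaptation state, event) -> next state
    ("CameraMode", "cameraError"): "MotionMode",
    ("MotionMode", "cameraRecovered"): "CameraMode",
    ("MotionMode", "ventilatorError"): "SwitchMode",
}

def adapt(state: str, event: str):
    """Return the next adaptation state together with the components to
    activate; components outside the configuration are deactivated."""
    next_state = TRANSITIONS.get((state, event), state)  # else stay put
    return next_state, CONFIGURATIONS[next_state]

state, active = adapt("CameraMode", "cameraError")
print(state, sorted(active))  # MotionMode ['motion_sensor', 'ventilator']
```

Linking scenario contracts to these configurations, as the approach does, then means that each adaptation state also determines which contracts are expected to hold.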
Adaptation logic In our approach, the adaptation logic is captured in a statechart model, History In addition to historyless reconfiguration, our approach supports both shallow history, that is, history only for the topmost regions in statechart components, and deep history, that is, history for all nested regions, during component and contract reconfiguration. In both cases, history considers internal variables. Shallow or deep history are not distinguished in the case of contracts as each contract represents scenarios of events without multiple levels of refinement. Supporting modeling languages The concepts of adaptivity presented in Section 3.1, as well as the services provided by our CBSE approach presented in Section 3.3 are supported by the following modeling languages: The Gamma Scenario Language (GSCL) is a configurable variation of the LSC 15 (Live Sequence Chart) formalism supporting the highlevel description of system behavior in terms of input and output events (interactions). The Gamma Statechart Language (GSL) is a configurable UML/SysML-inspired formal statechart 16 Supporting development workflow Our CBSE workflow (depicted in Figure 8) comprises two parts. The MODELING OF ADAPTIVE BEHAVIOR This section presents the modeling languages of our approach for modeling functional behaviors of adaptive systems in the context of the example smart house system presented in Section 2. Specifying scenario contracts Scenarios can be defined in GSCL, which is a Live Sequence Chart (LSC) 15 Figure 9 with scenario examples that specify interaction sequences for the camera component and the whole adaptive smart house system. For a more thorough description of the language, we direct the reader to literature. 52,53 1. DelayThenMotion specifies that after a certain time (specified by the TIMEOUT_TIME constant) if the camera receives a motion event via its Camera port, it has to transmit it via its Motion port (towards the actuators). 2. MotionThenMotion specifies that after the camera receives a motion event via its Camera port, and then receives another one in a specified time interval (timeoutTime parameter), it must not transmit it via its Motion port (to save battery). 3. MotionThenDelayThenMotion specifies that after the system (spec- interactions that need to occur at the same time/cycle (Lines 28-32). In an interaction set, the direction and modality of the contained inter- GSCL supports test generation with configuration options for (1) specifying constraints for system response and (2) categorizing unspecified system behavior. Option 1 handles the issue of not knowing the exact latency between inputs and reactions in an implementation. For example, Lines 11 and 12 specify that the system must respond to an incoming motion event with a motion event on the respective output port. Depending on the implementation, this may take several execution cycles. Therefore, GSCL introduces an annotation (Line 8) that specifies the accepted range of latency in terms of execution cycles. If the response does not arrive within the specified interval, a violation occurs. Note that this kind of uncertainty does not apply to events received by the system, because they do not depend on the system implementation. Option 2 allows the different interpretation of unspecified events sent by the system during test execution. As explained above, after receiving a motion event (Line 11) the system may have "time" to respond, during which it may send unspecified events, too. 
GSCL offers annotations (Line 7) to decide whether this behavior is permitted (permissive mode) by ignoring that event or treated as a violation of the interaction expecting the response (strict mode). Designing the adaptation model linked with changing scenario contracts As detailed in Section 2.1 (see changing of requirements), the functional requirements that adaptive systems must satisfy may change depending on external or internal events, for example, the failure of a component or an external command prescribing the reconfiguration of components to better handle a certain need. Therefore, a modeling language is required to support the description of adaptivity with respect to scenario contracts. GASL is a statechart language 16 extension to support the definition of adaptive contracts, that is, the activation and deactivation of static scenario contracts upon specific events. It builds on GSL, the built-in formal statechart language of Gamma, 48 VERIFICATION AND MONITORING OF ADAPTIVE BEHAVIOR This section presents the internal model transformations in Gamma that facilitate the formal verification (see Section 5.3.1), implementation derivation supporting monitoring (see Section 5.3.2) and test generation (see Section 5.3.3) for the adaptive system based on derived composite models that integrate scenario contracts, the adaptation model and state-based components (see Section 5.2). In order to support these functionalities, as a first step, observer automaton models are derived from scenarios (Section 5.1). 5.1 Mapping scenarios into observer automata In the case of monitoring, the above construction is used both for event reception and transmission. In turn, for test generation, event reception is modeled using a single transition targeting the source state of the next interaction while event transmission is modeled as above. Note that this construction introduces determinism to event reception In the case of monitoring, the resultant automaton is determinized using the powerset construction method, 55 Creating models for formal verification and monitoring With the scenario models mapped into observer automata according to the selected functionality (monitoring or test generation, see Contract-based black box test generation for adaptive behavior The approach supports black box test generation based on the original (unprocessed) adaptation and contract models using the model checking functionalities of the Gamma framework. Currently, only historyless contract links are supported. As a general idea, in a testing context, an execution trace derived during model checking as a witness for satisfying a property can be considered as an abstract test case for the property based on which it is generated, representing a test target. 57 Thus, with the goal of generating tests, we control model checkers in a way that they generate execution traces (abstract test cases) to cover test targets specified as formal properties, for example, in the case of state coverage as a test target, state reachability properties. Such abstract test cases then can be customized to concrete test environments according to various aspects. 58 Test generation comprises two steps: generating paths in the adaptation model to activate contract models and traversing the activated contract models. The traversal of the adaptation model can be configured using various coverage criteria, including state, transition, out-event or interaction coverage. 
Based on the configured criteria, reachability properties are generated automatically and then passed to the selected model checkers along with the adaptation model, which return execution traces (paths), represented in GTL, leading to state configurations with linked contracts. The observer automaton models derived from these contracts are then traversed to derive positive or negative tests using Theta, as it is the only integrated back-end supporting the retrieval of all paths from the initial state to a certain state in acyclic models. Abstract test cases are created as the sequential combination of paths activating particular contracts in the adaptation model and paths in the contract models. In addition, test configuration options present in the contract models, that is, allowed latency and permissive/strict mode, are saved to support test concretization in the next step. As a last step, abstract test cases are customized to execution environments, that is, they are mapped into sequences of concrete calls to provide test inputs and time delays as well as schedule system execution, and then retrieve and evaluate outputs. The test configuration options in the concretized tests are handled with a simple auxiliary method used in conjunction with output evaluations.
EXPERIMENTS WITH THE EXAMPLE MODEL
This section presents experiments with the adaptive smart house system.
Scenario contracts for the adaptive smart house system
We defined 11 scenario contracts 5 based on the functional requirements of the system presented in the Appendix, which specify system and component behavior from different aspects. Two scenarios (S1 and S2) describe behaviors of the entire system whereas the remaining ones capture the behaviors of managed components (S3 to S11). S1 and S2 specify how the system must control the ventilation level in the case of motion sensing and timeouts when the ventilator component is active. S3, S4 and S5 capture the common behavior of the camera and motion sensor components focusing on motion sensing and internal event transmission (see DelayThenMotion and MotionThenMotion scenarios in Figure 9). S6 specifies person detection based on the granularity parameter and timeouts for the camera component (an extension of the MotionThenDelayThenMotion scenario in Figure 9), whereas S7 specifies how too many events in a certain time interval must be handled by the motion sensor. S8 describes how ventilation must be turned on and off by the ventilator and switch components based on received motion sensing events. Finally, S9, S10 and S11 are listed in Table 1. Typically, having unconstrained combinations and sequences of input events is not a practical goal for verification; realistic behavior can be restricted with user-defined environment models.
Formal verification based on the derived composite models
Therefore, we experimented with environment models that restricted error and recovery events. In particular, we specified such restrictions in an environment model. After eliminating such problematic cases, the correctness of a coherent parameterization was proven by formal verification for scenarios S3-S11 and the respective component models based on the derived models.
Semantic variations
Gamma offers multiple semantic variation options for statechart components. In the second phase, in order to generate positive and negative tests, the algorithm traversed the observer automata that were linked to states in which the generated execution traces ended.
Table 3 summarizes the number of generated execution traces and the median and maximum number of steps in these traces considering every linked observer automaton for positive and negative tests for the motion sensor, ventilator and switch components. The generation of such traces was carried out under ten seconds for each automaton. The number of generated traces in an automaton depends on the number of distinct paths between the initial and the accept states for positive tests, determined by the optional and alternative fragments (branchings) of the scenario. For negative tests, in addition to the number of distinct paths between the initial and the hot violation states, the number of hot modality interactions is also a determining factor. As Table 3 shows, there were no branchings and there was a single hot modality interaction.
Table 3: Results of the observer automaton traversal phase of test generation in the adaptive smart house system.
CONCLUSION AND FUTURE WORK
In this paper, we introduced an end-to-end component-based specification and development approach for adaptive systems. Our experiments show that the approach is applicable in an extended adaptive house automation system, first presented in literature. 19,20 Adaptive behavior can be adequately modeled using the proposed modeling languages, and the satisfaction of functional requirements specified as scenarios can be formally verified based on component-contract links and the employment of optimization techniques that reduce irrelevant context details. Nevertheless, adaptive models with many adaptation options, as expected, can pose a great challenge to model checkers if optimization is not employed. In general, this problem could be addressed by extending the formal verification capabilities of Gamma by integrating additional model checkers more tailored to these models along with abstraction and reduction techniques. Also, an implementation with monitoring support can be automatically derived and then tested using the generated test sets. Subject to future work, we plan to extend our approach to allow the management of composite components in order to support a system modeling approach based on compositional variation points and application variants. 36 We intend to realize the extension based on the generalization of the integration and (de)activation facilities for observer automata presented in Section 5.2. Moreover, we aim to extend our scenario language to support inter-component communication (multiple lifelines). We also plan to extend our approach by introducing the modeling and verification of extra-functional properties based on literature 60 and to support the automatic deployment of adaptive functionality to computation nodes.
ACKNOWLEDGMENTS
We would like to express our gratitude to Benedek Horváth for his initial contributions to the GSCL metamodel and Dénes Lendvai
The double-slit quantum eraser experiments and Hardy's paradox in the quantum linguistic interpretation Recently we proposed the linguistic interpretation of quantum mechanics (called quantum and classical measurement theory), which was characterized as a kind of metaphysical and linguistic turn of the Copenhagen interpretation. This turn from physics to language does not only extend quantum theory to classical systems but also yield the quantum mechanical world view (i.e., quantum philosophy or quantum language). The purpose of this paper is to formulate the double-slit experiment, the quantum eraser experiment, Wheeler's delayed choice experiment, Hardy's paradox and the three boxes paradox (the weak value associated with a weak measurement due to Aharonov, et al.) in the linguistic interpretation of quantum mechanics. Through these arguments, we assert that the linguistic interpretation is just the final version of so called Copenhagen interpretation. And therefore, we conclude that the Copenhagen interpretation does not belong to physics (i.e., the realistic world view) but the linguistic world view.
1.1 The overview of quantum language
As mentioned in the above abstract, our purpose is to understand the double-slit experiment, the quantum eraser experiment, Wheeler's delayed choice experiment, Hardy's paradox and the three boxes paradox in the linguistic interpretation of quantum mechanics, which is proposed in [10]-[22]. According to ref. [14], we shall give an overview of quantum language (or, measurement theory; in short, MT). Quantum language is characterized as the linguistic turn of the Copenhagen interpretation of quantum mechanics. Quantum language (or, measurement theory) has two simple rules (i.e., Axiom 1 (concerning measurement) and Axiom 2 (concerning causal relation)) and the linguistic interpretation (= how to use Axioms 1 and 2). That is,
Quantum language (= MT (measurement theory)) = Axiom 1 (measurement) + Axiom 2 (causality) + Linguistic interpretation (how to use Axioms)
(cf. refs. [10]-[22]). Measurement theory is, by an analogy of quantum mechanics (or, as a linguistic turn of quantum mechanics), constructed as the scientific theory formulated in a certain C*-algebra A (i.e., a norm closed subalgebra in the operator algebra B(H) composed of all bounded linear operators on a Hilbert space H, cf. [24,26]). Let N be the weak* closure of A, which is called a W*-algebra. The structure [A ⊆ N ⊆ B(H)] is called a fundamental structure of MT. When A = C(H), the C*-algebra composed of all compact operators on a Hilbert space H, the MT is called quantum measurement theory (or, quantum system theory), which can be regarded as the linguistic aspect of quantum mechanics. Also, when A is commutative, that is, when A is characterized by C_0(Ω), the C*-algebra composed of all continuous complex-valued functions vanishing at infinity on a locally compact Hausdorff space Ω (cf. [26]), the MT is called classical measurement theory. Thus, we have the following classification: the MT is quantum measurement theory when A = C(H), and classical measurement theory when A = C_0(Ω). Also, we assert that quantum language is located as follows:
[Figure 1: the history of the world-description, locating quantum language after Greek philosophy (Parmenides, Socrates, Plato, Aristotle).]
Observables
Let [A ⊆ N ⊆ B(H)] be the fundamental structure of measurement theory. Let N_* be the pre-dual Banach space of N. That is, N_* = {ρ | ρ is a weak* continuous linear functional on N}, and the norm ∥ρ∥_{N_*} is defined by sup{|ρ(F)| : F ∈ N such that ∥F∥_N (= ∥F∥_{B(H)}) ≤ 1}. The bi-linear functional ρ(F) is also denoted by _{N_*}⟨ρ, F⟩_N, or in short ⟨ρ, F⟩.
Define the mixed state ρ (∈ N_*) such that ∥ρ∥_{N_*} = 1 and ρ(F) ≥ 0 for all F ∈ N satisfying F ≥ 0. And put S^m(N_*) = {ρ ∈ N_* | ρ is a mixed state}. According to the noted idea (cf. ref. [3]) in quantum mechanics, an observable O ≡ (X, F, F) in the W*-algebra N is defined as follows:
(B_1) [Measurable space] X is a set, and F (⊆ 2^X) is a σ-field of X.
(B_2) [Countable additivity] F is a mapping from F to N satisfying: (a) for every Ξ ∈ F, F(Ξ) is a non-negative element in N such that 0 ≤ F(Ξ) ≤ I; (b) F(∅) = 0, where 0 and I are the 0-element and the identity in N, respectively; (c) for any countable decomposition {Ξ_k}_{k=1}^∞ of Ξ ∈ F (i.e., Ξ = ∪_{k=1}^∞ Ξ_k, Ξ_k ∩ Ξ_{k'} = ∅ for k ≠ k'), it holds that F(Ξ) = Σ_{k=1}^∞ F(Ξ_k), i.e., lim_{K→∞} F(∪_{k=1}^K Ξ_k) = F(Ξ) in the sense of weak* convergence in N.
Remark 1. In the above (b), it is usual to assume the condition F(X) = I. In fact, throughout this paper except Section 5, the condition F(X) = I is assumed. However, for the reason mentioned in Remark 9 later, we start from the above (b).
Quantum language (Axioms)
With any system S, a fundamental structure [A ⊆ N ⊆ B(H)] can be associated in which the pure measurement theory (A_1) of that system can be formulated. A pure state of the system S is represented by an element ρ^p (∈ S^p(A*) = "pure state class" (cf. ref. [14])) and an observable is represented by an observable O = (X, F, F) in N. Also, the measurement of the observable O for the system S with the pure state ρ^p is denoted by M_N(O, S_[ρ^p]) (or more precisely, M_N(O ≡ (X, F, F), S_[ρ^p])). An observer can obtain a measured value x (∈ X) by the measurement M_N(O, S_[ρ^p]). The Axiom 1 presented below is a kind of mathematical generalization of Born's probabilistic interpretation of quantum mechanics.
Axiom 1 [Pure Measurement]. The probability that a measured value x (∈ X) obtained by the measurement M_N(O, S_[ρ^p_0]) belongs to a set Ξ (∈ F) is given by ρ^p_0(F(Ξ)), if F(Ξ) is essentially continuous at ρ^p_0 (cf. ref. [14]).
Next, we explain Axiom 2 in (A_1). Let (T, ≤) be a tree, i.e., a partially ordered set such that "t_1 ≤ t_3 and t_2 ≤ t_3" implies "t_1 ≤ t_2 or t_2 ≤ t_1". Assume that there exists an element t_0 ∈ T, called the root of T, such that t_0 ≤ t (∀t ∈ T) holds. Put T²_≤ = {(t_1, t_2) ∈ T² | t_1 ≤ t_2}. The family {Φ_{t_1,t_2} : N_{t_2} → N_{t_1}}_{(t_1,t_2) ∈ T²_≤} is called a causal relation (due to the Heisenberg picture), if it satisfies the following conditions (C_1) and (C_2).
(C_1) With each t ∈ T, a fundamental structure [A_t ⊆ N_t ⊆ B(H_t)] is associated.
(C_2) For every (t_1, t_2) ∈ T²_≤, a Markov operator Φ_{t_1,t_2} : N_{t_2} → N_{t_1} is defined, and it satisfies Φ_{t_1,t_2} Φ_{t_2,t_3} = Φ_{t_1,t_3} for all (t_1, t_2), (t_2, t_3) ∈ T²_≤.
The family of pre-dual operators {Φ_{t_1,t_2,*} : N_{t_1,*} → N_{t_2,*}}_{(t_1,t_2) ∈ T²_≤} is called a pre-dual causal relation (due to the Schrödinger picture). If Φ_{t_1,t_2,*}(S^p(A_{t_1}*)) ⊆ S^p(A_{t_2}*) holds, the causal relation is said to be deterministic. Now Axiom 2 in the measurement theory (1) is presented as follows:
Axiom 2 [Causality]. The causality is represented by a causal relation {Φ_{t_1,t_2} : N_{t_2} → N_{t_1}}_{(t_1,t_2) ∈ T²_≤}.
Linguistic Interpretation
Next, we have to study the linguistic interpretation (i.e., the manual of how to use the above axioms) as follows.
(D_2) Only one measurement is permitted. And thus, the state after a measurement is meaningless since it can not be measured any longer. Therefore, the wave collapse is prohibited. Also, the causality should be assumed only in the side of the system; however, a state never moves. Thus, the Heisenberg picture should be adopted, and the Schrödinger picture is rather makeshift. Thus, the problem "when and where a measurement is performed?" is nonsense. And so on.
For example, the axioms seem like the rules of how to move the pieces of a chess game. On the other hand, the linguistic interpretation resembles the standard tactics of a chess game. In this sense, we cannot completely say all about the linguistic interpretation. The following argument is a consequence of the above (D_2). For each k = 1, 2, . . . , K, consider a measurement M_N(O_k ≡ (X_k, F_k, F_k), S_[ρ]).
However, since the (D 2 ) says that only one measurement is permitted, the measurements {M N (O k , S [ρ] )} K k=1 should be reconsidered in what follows. Under the commutativity condition such that we can define the product observable (or, simultaneous observable) × K . Consider a finite tree (T ≡{t 0 , t 1 , . . . , t n }, ≤) with the root t 0 . This is also characterized by the map π : is an observable in the N π(t) . For the case that a tree T is not finite, see [11]. if the commutativity condition holds (i.e., if the product exists) for each s ∈ π(T ). Using (4) iteratively, we can finally obtain the Remark 2 [Particle or wave]. The argument about the "particle vs. wave" is meaningless in quantum language. As seen in the following table, this argument is traditional: Theories \ P or W Particle(=symbol) Wave(= mathematical representation ) Aristotles hyle eidos Newton mechanics point mass state (=(position, momentum)) Statistics population parameter Quantum mechanics particle state (≈ wave function) Quantum language system (=measuring object) state In the above table, Newtonian mechanics (i.e., mass point ↔ state) may be easiest to understand. Thus, "particle" and "wave" are not confrontation concepts. In this sense, the "wave or particle" is meaningless. In the linguistic interpretation of quantum mechanics, this should be usually understood as the problem "interference or no interference". Remark 3 [Reality]. Since quantum language is a kind of metaphysics, we are not concerned with the reality such as discussed in [4] and [2]. Also, since space and time are independent in quantum language (cf. [15] ), we can not expect it to yield a good physical theory (i.e., 5 in Figure 1). Remark 4 [The Schrödinger's cat]. Axiom 2 allows us to deal with more than the deterministic causal relation, for example, the Brownian motion and the quantum decoherence, etc. Therefore, we can easily describe the Schrödinger's cat by quantum language. Thus, this is not a paradox in quantum language. However, quantum language (due to dualism composed of "observer" and "system") does not have a power to describe Wigner's friend as well as Descartes' proposition "I think, therefore I am" (cf. [16]). Then, there is a reason to call the O a simultaneous observable. (iii): Also, it may be worth while investigating the concept such that O = (× K k=1 X k , ⊠ K k=1 F k , F ) is an simultaneous observable concerning ρ. The double-slit experiment Although Feynmann's enthusiasm is transmitted in the explanation of the double-slit experiment in [6], we do not think that his explanation is sufficient. That is because the double-slit experiment and so on should be explained after the answer to "What kind of measurement is taken?". That is, Consider a tree (T, ≤) with the two branches such that For each t ∈ T , define the fundamental structure where the average momentum (p 0 1 , p 0 2 ) is calculated by That is, we assume that the initial state of the particle P ( in Figures 2(1) and 2 (2) ) is equal to |u 0 u 0 |. As mentioned in the above, consider two branches T 1 and T 2 . Thus, concerning T 1 , we have the following Schrödinger equation: Also, concerning T 2 , we have the following Schrödinger equation: Let s 1 , s 2 be sufficiently large positive numbers. Put t 1 = (1, s 1 ) ∈ T 1 , t 2 = (2, s 2 ) ∈ T 2 . Define the subtree T ′ (⊆ T ) such that T ′ = {0, t 1 , t 2 } and 0 < t 1 , 0 < t 2 . Thus, we have the causal relation: Put Z = {0, ±1, ±2, · · · }. Let δ be a sufficiently small positive number. 
For each n ∈ Z, define the region D n (⊆ R 2 ) such that where χ Dn (x, y) = 1 ((x, y) ∈ D n ), = 0 (elsewhere). Hence, we can consider the two observables O t 1 = (Z, 2 Z , F ) in B(H t 1 )(= B(L 2 (R 2 )) and We consider that this is just the description of the standard double-slit experiment. The following is well known: [14]) will show the interference fringes. Fig. 2(2) says that (E 2 ) if we get the positive measured value n by the measurement , we may conclude that the particle P passed through the hole A. Further, note that we have the sequential causal observable Remark 6 Although, strictly speaking, we have to say that the statement "the particle P passed through the hole A" can not be described in terms of quantum language, it should be allowed to say the statement (E 2 ). Also, concerning the statement (E 3 ), note that but the observables O t 1 and O t 2 are in different worlds (i.e., different branches), except while Φ 0,t 1 1 = Φ 0,t 2 2 . We consider that, the double-slit experiment can not be completely explained without branches In this sense, our argument may be similar to Everett's (cf. [5]). Also, for our other understanding of the double-slit experiment, see [8] and [9]. 3 The quantum eraser experiment 3 No interference Consider the measurement: Then, we see (F 1 ) the probability that a measured value (1, x)(∈ {1} × X) belongs to {1} × Ξ is given by where the interference term disappears. Interference Consider the measurement: Then, we see: where the interference term (i.e., the third term) appears. Also, we see: where the interference term (i.e., the third term) appears. This was experimentally examined in [27]. Firstly, consider the measurement: Then, we see: (G 1 ) the probability that a measured value 1 a measured value 2 is obtained by M B(C 2 ) (ΦO f , S [ρ] ) is given by Next, consider the measurement: half mirror 2 mirror mirror Figure 3(2). [D 1 + D 2 ]=ObservableO g Then, we see: Also, consider the following Figure 3(3). This is clearly the same as the situation of Figure 3(1). Therefore, this is characterized by the same measurement M B(C 2 ) (ΦO f , S [ρ] ). half mirror 2 mirror Figure 3(1)(i.e., when the observable O f changes to the O g ), we see that the measurement ). Thus, we think that Wheeler's delayed choice experiment (cf. [28]) is not surprising in the linguistic interpretation of quantum mechanics. That is because the problem is not "wave or particle" but "interference or no interference". On the other hand, the statement (C 1 ) concerning M B(C 2 ) (ΦO f , S [ρ] ) is surprising, since it implies the non-locality. This surprising fact is essentially the same as the de Broglie's paradox (in B(L 2 (R 3 ))). 5 Hardy's paradox Let H be a two dimensional Hilbert space, i.e., H = C 2 . Let f 1 , f 2 , g 1 , g 2 ∈ H such that Now, consider the tensor Hilbert space H ⊗ H = C 2 ⊗ C 2 . 
Thus, put Define the projection P : and thus, define the Ψ : Concerning the tensor observable O g ⊗ O g Define the observable O gg = ({1, 2}×{1, 2}, 2 {1,2}×{1,2} , H gg ) in B(C 2 ⊗C 2 ) by the tensor observable O g ⊗ O g , that is, Consider the measurement: Then, the probability that a measured value (2, 2) is obtained by Also, the probability that a measured value (1, 1) is obtained by Further, the probability that a measured value (1, 2) is obtained by Similarly, Consider the measurement: Then, the probability that a measured value (2, 2) is obtained by Also, the probability that a measured value (1, 1) is obtained by M B(C 2 ⊗C 2 ) ( Ψ O gf , S [ ρ] ) is given by Further, the probability that a measured value (1,2) is obtained by Similarly, Remark 10. It is usual to consider that "Which way pass problem" is nonsense. However, for the other aspect of this problem, see Remarks 11 and 12 later. The three boxes paradox Let H be the three dimensional Hilbert space, i.e., H = C 3 . Let f 1 , f 2 , f 3 ∈ H such that And, put ρ = |u u| and And consider the measurements Clearly, the probability that a measured value 1 obtained by M B(H) (O 1 , S [ρ] ) is given by and, the probability that a However, we try to consider the "measurement" And further, we can calculate as follows. In spite of the non-commutativity of Ψ O gg and O f f , consider the "measurement": And we can calculate as follows. (I) under the condition that the measured value ((2, 2), (y 1 , y 2 )) is obtained by " )", the probability (or precisely, weak value) that This (I) and the idea in ref. [1] are superficially similar, but completely different in essence. However, if the latter says something good, we can expect that the (I) is somewhat meaningful. For completeness, note that quantum language is not physics but language. Therefore, we say that (J) if this statement (I) can be used effectively, then the concept: "weak value" should be accepted in the linguistic interpretation, however, if this is not more than "even not wrong", we will not be concerned with "the weak value". Conclusions In this paper, we discussed the double slits experiment, the quantum eraser experiment, Wheeler's delayed choice experiment, Hardy's paradox and the three boxes paradox in the linguistic interpretation of quantum mechanics. Quantum language says that everything should be described in terms of Axioms 1 and 2. Therefore, we always have to describe "measurement" explicitly. In fact, in this paper, any measurement was explicitly described such as the formula [(5)- (14), (15), (16) ]. Particularly, in Section 2, we say that the double-slit experiment can not be understood without the concept of "branch". And, in Section 4, we note that Wheeler's delayed choice experiment is not surprising, since it should be regarded as the problem such as "interference or no interference". Through these arguments, we assert that the linguistic interpretation is just the final version of so called Copenhagen interpretation. And therefore, the Copenhagen interpretation does not belong to physics but the linguistic world view (cf. Figure 1). We hope that our proposal will be discussed and examined from various view-points.
Direct Uptake of Nutrition and Caffeine Study (DUNCS): biscuit based comparative study Abstract Objectives To identify the time required to achieve optimal palatability of a cup of tea without risk of harm (oral scalding) using the resources available in a standard hospital staff room, and to identify the best accompanying biscuit for nutritional content, crunchiness, and integrity when dunking. Design Prospective, non-masked, biscuit based, comparative study. Setting Staff room in the surgery department of a UK hospital. Participants Four different varieties of round, non-chocolate biscuit: oat, digestive, rich tea, and shortie. A standardised cup of tea was determined on the basis of the investigators’ preference for colour and palatability and pragmatic tea making methods. Main outcome measures The main outcome was time to achieve a safe temperature for consumption of tea, and the best biscuit to pair with the tea on the basis of nutritional content, absorptive ability, crunchiness, and integrity after dunking. Biscuits were ranked first to last (according to scores 1-4), with penalty points given for adverse events such as scalds and breakability. Results Baseline data suggested that after adding 240 mL of freshly boiled water to an unwarmed mug containing a tea bag, the median temperature of a standard cup of tea was 82ºC (range 81-84ºC). Optimal palatability and agreed universal drinking temperature of 61ºC was achieved at 400 (range 360-420) seconds with 30 mL of cow’s milk and 370 (330-450) seconds with 40 mL of milk. The investigators considered tea colour preferable with 40 mL of milk. Conclusion Healthcare workers can safely consume a cup of tea after less than 10 minutes, especially if enjoyed with a biscuit. Making time for a cup of tea may help healthcare workers avoid their break point. Introduction Over recent years the public sector has faced immense challenges, such as the covid-19 pandemic. 1 2 During difficult times, when morale and motivation have been tested, one quintessentially British refreshment has helped the nation to push through. Although the choice of hot drinks now available is extensive, a cup of tea remains the preferred choice 3 ; when paired with a biscuit, this combination is rocket fuel for the National Health Service. Staff wellbeing is important, 4 5 and optimising staff hydration 6 and nutrition can help to improve mood and performance. We have witnessed how NHS staff avoid breaks because of constraints on their time, often grabbing substandard refreshments in a rush. As scientific evidence is lacking on best snack practice, most healthcare workers are forced to rely on their own experience. Efficient tea making skills and a good quality accompanying biscuit are important for healthcare workers, who deserve to have a brew-tiful day. Freshly boiled water is an important part of the tea making process, but this raises concerns about the risk of oral scalding if busy healthcare workers are tempted to consume a beverage before it reaches a safe temperature. The choice of accompanying biscuit is also important. If staff prefer dunking, will their biscuit of choice survive immersion and sustain crunchiness? We identified the time taken to produce a safe, palatable cup of tea. In keeping with the tradition of tea paired with a biscuit, we assessed four biscuit varieties for nutritional content and durability after dunking. 
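As a quick sanity check on the timings reported in the Results below (a brew of about 82ºC, an 11ºC drop on adding 40 mL of milk, cooling of roughly 1ºC every 30 seconds, and an agreed drinking temperature of 61ºC), here is a minimal Python sketch of the implied cooling timeline. The linear cooling model is our simplifying assumption; real mugs cool roughly exponentially, which is consistent with the slightly longer observed median of 370 seconds.

```python
# Back-of-envelope time-to-drinkable-tea (TTDT) from the reported data:
# start ~82 C, lose ~11 C when 40 mL of milk is added, then ~1 C per
# 30 s until the agreed drinking temperature of 61 C is reached.
# Linear cooling is an assumption made here, not a claim of the study.

def ttdt_seconds(brew_c=82.0, milk_drop_c=11.0, cool_c_per_30s=1.0,
                 target_c=61.0) -> float:
    after_milk_c = brew_c - milk_drop_c      # ~71 C just after the milk
    to_lose_c = after_milk_c - target_c      # ~10 C still to shed
    return max(0.0, to_lose_c / cool_c_per_30s * 30.0)

print(ttdt_seconds())  # 300.0 s, versus the observed median of 370 s
```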
biscuit selection
Based on extensive research from years of frequenting staff rooms in NHS hospitals, we identified the four biscuits most commonly found in staff biscuit tins: oat, digestive, rich tea, and shortie. We describe and quantify the desirable characteristics (eg, nutritional content and integrity when being dunked) of these biscuits when paired with a safe, palatable cup of tea. To minimise the risk from heterogeneity, we limited our selection to single layer, non-coated, non-filled biscuit varieties.
Preparing a standard cup of tea
We performed a pilot experiment to determine the best volumes of water and cow's milk (this being to our knowledge the most commonly available for healthcare workers) required for NHS staff to make a standard cup of tea in a timely manner; this baseline data informed subsequent tests. We reviewed existing literature to identify a standardised methodology; however, the findings were inconsistent or had historical reasoning not in keeping with the contemporary workplace. One of us (JF) prepared all cups of tea throughout the study, and the other (CJ) monitored relevant times with a stopwatch and recorded data. As the tests involved risks, health and safety was taken into consideration and we refreshed our good cuppa preparation (GCP) training before the study. Equipment was checked thoroughly, syringes were designated for enteral use only, and emergency equipment, including cold water, kitchen roll, a mop, and a food waste bin, were readily available. Throughout the study, tea was prepared in standardised, newly purchased, unchipped porcelain mugs, each with a liquid holding capacity of 310 mL. Taking into account the non-filled void for safe carriage and a comfortable slurp rim, we determined that the total fluid volume (water plus milk) could not exceed 280 mL. We then agreed that 240 mL of water would be the standard and would allow the addition of 30-40 mL of milk. The mutually agreed tea making process involved pouring 240 mL of freshly boiled water over a single tea bag in an unwarmed mug. JF stirred the tea bag gently with a metal spoon for 60 seconds (checked by CJ), before giving it a gentle squeeze and extracting it from the cup. The milk was taken straight from the fridge (temperature set to 4ºC) and as soon as it was added to the tea, the stopwatch was started (time zero). The temperature of the tea was measured at 30 second intervals using a thermometer (fig 1). We both slurped the tea to assess palatability and the potential for oral scalding. Testing for palatability was repeated every 30 seconds. Data were collected on the rate the tea temperature dropped and the overall time required for a comfortable, and therefore potentially safe, drinking temperature to be achieved. Two volumes of milk (30 mL and 40 mL) were initially investigated, and we both reviewed the tea's colour, palatability, and cooling time, or time to drinkable tea (TTDT). Based on our pilot experiment findings, we used 40 mL of semi-skimmed cow's milk to prepare standard cups of tea for all six tests.
biscuit tests
Time to drinkable tea; the impact of dunking
The first biscuit test was used to determine TTDT. JF prepared a standard cup of tea, and the temperature was monitored throughout the test.
CJ started the stopwatch at time zero (addition of 40 mL of milk); at this point JF selected two biscuits from a packet and dunked each into the tea, one at 30 seconds and one at 60 seconds, these intervals representing natural first and second biscuit dunks during a tea break, as determined over the authors' many years of working in the NHS. We recorded tea cooling and TTDT data for each biscuit variety and repeated the tests three times with freshly prepared cups of tea. The biscuit with the shortest TTDT was ranked first (score 1) and the biscuit with the longest TTDT was ranked last (score 4).
Nutritional content
To assess the nutritional content of each biscuit and to corroborate the findings with the information provided by the manufacturers, we weighed three randomly chosen biscuits from each of the four packets. We then compared the recorded weights with those shown on the relevant packet and reviewed the energy content (kcal). The biscuit with the highest energy content was ranked first (1 point) and the one with the lowest energy content was ranked last (4 points).
Saturation volume
For the saturation volume test we hypothesised that those biscuits that absorbed the most tea in each dunk would help towards TTDT. Bespoke doilies, made from kitchen roll, were placed on saucers, and one randomly chosen biscuit from each packet was then placed on a doily. Using a syringe, we dripped freshly brewed tea (fig 2) and recorded the volume of tea required before permeation to the doily. The biscuit that absorbed the largest volume of tea before signs of permeation was ranked first (score 1).
[Visual abstract: summary of the tests. Saturation volume: tea syringed onto the centre of each biscuit in 1 mL increments; the winner absorbed the highest volume before permeation onto the doily below. Crunch reduction: crunch scores recorded with a decibel meter before and after adding 2 mL of tea; the winner had the smallest reduction in crunch volume. Overall rank: biscuits held between thumb and index finger, dunked into a fresh brew, and gently moved back and forth until they fell into the tea; the winner endured the longest dunk. Pragmatic dunk break point: biscuits dunked for 2 seconds into a fresh standard brew, then held away from the cup; the winner had the longest post-dunk integrity. Study aim: identifying the time required for the safe preparation and consumption of a cup of tea and the best accompanying biscuit for the uptake of caffeine, hydration, and nutrition, without spillage, scalds, or biscuit breakage.]
Crunch reduction
To test for crunchiness after a biscuit had been dunked, JF first selected three biscuits at random from each packet and then determined the baseline crunch score of each biscuit variety by breaking the biscuits in half next to a decibel meter (app on smartphone). Three more biscuits were then selected from each packet. Using another syringe, 2 mL of freshly brewed tea was then dripped onto the centre of each biscuit. Each treated biscuit was immediately broken in half next to the decibel meter and the crunch score recorded. For comparative purposes, both dry and wet crunch volumes were recorded (three sets of data for each biscuit variety) and the percentage reduction in crunch volume was calculated.
The biscuit with the smallest reduction in crunch volume was ranked first (score 1). Dunk break point The dunk break point test was used to identify the biscuit that would be best for dunking. One of each biscuit variety was held firmly between thumb and index finger (the universal dunking grip) before being dunked into a standard cup of tea as far as the fingertips. The biscuit was gently moved back and forth until the dunked portion broke away and sank (the dunk break point). JF dunked the biscuits while CJ recorded the dunk break point. The biscuit that took the longest to reach the dunk break point was ranked first (score 1). Pragmatic dunk break point The test for pragmatic dunk break point imitates the real world of tea and biscuit pairing more closely. JF selected a biscuit from each packet in turn and dunked it (using the universal dunking grip) for two seconds into a cup of freshly brewed tea. The biscuit was then held away from the cup and the time recorded until the biscuit fell apart. The biscuit that maintained its integrity for the longest after being dunked was ranked first (score 1). Biscuits were given penalty points if they broke apart before being moved away from the cup (the floater effect). statistical analysis For all six tests, each biscuit variety was ranked based on median and mean scores, with a score of 1 (ranked first) assigned to the biscuit with the best result and a score of 4 (ranked last) assigned to the biscuit with the worst result. We added the scores for each test together, with each test given equal weighting in the scoring process. The biscuit with the lowest overall score was thus considered the best biscuit. Patient and public involvement Members of the public were not involved in the design of the study. The manuscript, however, has been received with interest. results Preparing a standard cup of tea Baseline data suggested that the initial median temperature of a standard brewed cup of tea was 82ºC (range 81-84ºC) after adding 240 mL of freshly boiled water to a single teabag in an unwarmed mug. After gently stirring for 60 seconds, removing the tea bag, and adding the milk we observed temperature drops of 10ºC (range 9-10ºC) with the addition of 30 mL milk and 11ºC with the addition of 40 mL milk. The tea further cooled by 1ºC every 30 seconds. Optimal palatability and agreed universal drinking temperature was 61ºC, which was achieved at 400 (360-420) seconds with 30 mL of milk and 370 (330-450) seconds with 40 mL of milk. We preferred the colour of tea with 40 mL of milk. nutritional content No discrepancies were found between the recorded weights of biscuits and the weights given on the packets for all four biscuit varieties (table 2). The oat biscuit had the highest energy content (70 kcal/biscuit; 1 kcal=4.18 kJ) and was ranked first (1 point), and the rich tea had the lowest energy content (43 kcal/biscuit) and was ranked fourth (4 points). saturation volume The rich tea was ranked first (1 point) in the saturation volume test (table 2), absorbing a median of 9 (range 8-9) mL of tea before permeating to the doily. The results for the oat biscuit and digestive were comparable. The shortie was ranked fourth (4 points), absorbing a median of 4 (3-4) mL of tea during the three tests. crunch reduction The digestive was ranked first (1 point) for crunch reduction (table 2), with a 15% reduction in crunch volume. 
The results for the oat biscuit and rich tea were comparable, and the shortie was ranked fourth (4 points), with a 32% reduction in crunch volume. Dunk break point The oat biscuit ranked first in the dunk break point test, with a mean dunk time of 34.3 seconds to dunk break point (table 3). The shortie was ranked second, with a mean 31.7 seconds to dunk break point, followed by the digestive (ranked third) and rich tea (ranked fourth), with 28.3 seconds and 21.3 seconds, respectively. Pragmatic dunk break point The oat biscuit ranked first in the mean pragmatic dunk break point test, with 29 seconds compared with 17.5 seconds for the shortie (ranked second) and 8.5 seconds for the digestive (ranked third). The rich tea was ranked fourth; it was also given three additional penalty points for having the lowest dunk break point in all three repeat tests. Overall biscuit scores and best performing biscuit The oat biscuit ranked first after all six tests (table 4). The digestive ranked second-it crumbled in three tests of absorptive capability and structural integrity (saturation volume, dunk break point, and pragmatic dunk break point). The shortie was ranked third, whereas the rich tea (the only biscuit given penalty points) was ranked fourth; the penalty points did not directly influence the rich tea's ranking. discussion Good hydration and nutrition are fundamental, whether in the context of protocols for enhanced recovery after surgery, 7 trace element replacement on the intensive care unit, or simply avoiding "hangriness." 8 Optimising fluid and energy intake is essential for peak performance. As with elite athletes who require expert diet management to optimise performance, 9 healthcare workers also need to perform at their best. Tea and biscuits are part of British culture, and it would be beneficial to harness the rejuvenation provided by the pairing of these two and to deliver them to healthcare workers directly. Although the study results varied, important findings were that it takes around 400 seconds for a cup of tea to reach optimal palatability (61ºC) with 30 mL of milk, and just 370 seconds with 40 mL of milk. A healthcare worker can expect to enjoy a cup of tea in less than 10 minutes and paired with an oat biscuit (ranked first overall in all six tests) can help towards improved sustenance. strengths and limitations of this study Leafing through the literature and steeping ourselves in the evidence, we were unable to identify a pragmatic recipe for brewing a standard cup of tea. Although we appreciate that opinions differ widely on how to brew a palatable cup of tea, 10 waiting 3-4 minutes for tea to brew is unrealistic for all but the most senior of NHS managers. We are confident that our study methods reflect a real world approach to tea making in NHS staff rooms. Each biscuit was assessed and scrutinised in an open and unbiased manner, although both of us acknowledge a personal preference for the shortie. Our study addressed the multifactorial nature of tea making and biscuit choice to better inform NHS staff when having a tea break. We used the time needed to brew and consume a standard cup of tea as a proxy for the time needed to have a restorative tea break, and we identified the cooling effect of a dunked biscuit that might help in the consumption of this beverage. Although we performed six tests, the joy of dunking a biscuit never waned and, at times, actually provided hilarity. 
This joyfulness enhanced the tea break experience and this could have an important place in teambuilding and connectedness between different hierarchies and disciplines; a powerful influence to be considered. We limited our biscuit choice, excluding chocolate and cream variants with their potential for high desirability as we believed it important to limit the distraction and potential finger licking that usually occurs when eating biscuits with cream or chocolate fillings. The sticky and licked finger scenario is not compatible with a healthcare environment and should be reserved for non-work time. Future research will include an observation of sandwich style biscuits, such as the classic custard and bourbon creams; perhaps studies on the biscuit filled with jam could be enlightening. The British public will have many anecdotal views on how to brew and enjoy a cup of tea-the late Queen Elizabeth II and Paddington Bear included. 11 Although tea making facilities are accessible to most NHS workers, constraints on time mean that certain compromises must be made when preparing a cup of tea. One debatable and popular culture point that was addressed was when to add milk. 10 The investigators of the current study can confirm that although it may be reasonable when using fine bone china cups to add milk first to protect the cup from hot tea, this is not relevant to NHS workers, who often have a wide choice of mugs available. Such mugs have stood the test of time, and often just finding milk is a triumph. comparison with other studies Previous research into biscuit dunking has predominantly focused on how quickly biscuits break as a popular science experiment for schoolchildren aged 5-7 years. A previous study applied the Washburn equation-a more mathematical approach to the optimum dunk as it describes capillary flow in porous materials. 12 conclusion NHS staff can easily enjoy the pairing of a cup of tea with a biscuit in less than 10 minutes. Biscuit dunking has a beneficial effect on tea cooling and should be encouraged, and the oat biscuit was the best at achieving this when compared with the digestive, rich tea, and shortie. Making time for a cup of tea is an important daily ritual, and it should be encouraged to help improve the mood and performance of healthcare workers. Contributors: The authors contributed equally to the design, implementation, funding, and drafting of the manuscript. Both authors are guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding: No funding was available for this study. The investigators purchased study materials (mugs, biscuits, tea bags, and milk) out of pocket. Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. Ethical approval: Despite actual biscuit harm being planned we successfully buttered up the local research and development department and were given full approval to dunk, break, and eat the participant biscuits. 
Data sharing: Dataset is available from the corresponding author (ceri.jones4@wales.nhs.uk). The manuscript's guarantors (both authors) affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. Dissemination to participants and related patient and public communities: We have not disseminated the information at present but continue to share the information within our regional multidisciplinary team meetings. Provenance and peer review: Not commissioned; externally peer reviewed. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license.
Ultra-broadband Heteronuclear Hartmann-Hahn polarization transfer It is shown on the basis of the multiple-quantum operator algebra space formalism that ultra-broadband heteronuclear Hartmann-Hahn polarization transfer could be achieved by amplitude- and frequency-modulated quasi-adiabatic excitation (90 degree) pulses, while it is usually difficult for adiabatic inversion pulses to achieve effectively broadband Hartmann-Hahn transfer in a heteronuclear coupled two-spin system. The adiabatic and quasi-adiabatic pulses have an important property that, within their activation bandwidth, the flip angle of the pulses is independent of the pulse duration and the bandwidth increases with the pulse duration. This property is important for the construction of heteronuclear Hartmann-Hahn transfer sequences with the quasi-adiabatic 90 degree pulses. Theoretic analysis and numerical simulation show that the heteronuclear Hartmann-Hahn transfer is performed in the even-order multiple-quantum operator algebra subspace of the two-spin system. The multiple-quantum operator algebra space formalism may give a powerful guide to the construction of ultra-broadband heteronuclear Hartmann-Hahn transfer sequences with the quasi-adiabatic 90 degree pulses.
Introduction
The Hartmann-Hahn polarization transfer experiment [1] is one of the most important nuclear magnetic resonance (NMR) experiments and has extensive applications in high-resolution NMR spectroscopy both in liquids and solids [1][2][3][4][5][6]. The Hartmann-Hahn transfer sequence may be used as a basic polarization-transfer building block to enhance the NMR signal intensity of dilute nuclei, which usually have a low spin polarization, and as a mixing sequence to achieve correlation among different nuclear spins in molecules to help determination of molecular structures. In high-resolution NMR spectroscopy in liquids, broadband heteronuclear Hartmann-Hahn transfer sequences were usually derived from heteronuclear decoupling sequences [7][8][9][10][11]. These decoupling sequences, such as the WALTZ, MLEV, and DIPSI families [7,8], usually are composite-pulse sequences of rectangular radiofrequency (RF) pulses or amplitude-modulated shaped RF pulses. Such Hartmann-Hahn transfer sequences have been used extensively in structural determination of organic molecules and large biomolecules [12,13]. Usually these sequences need to dissipate much more RF power and have a high peak RF power in order that an effective bandwidth covering the full chemical-shift ranges of the nuclear spins in molecules can be obtained for the Hartmann-Hahn transfer. This generates significant resonance shifts due to sample heating, which may cause problems for applications of the Hartmann-Hahn transfer sequences in high-resolution NMR spectroscopy. Therefore, a high-performance heteronuclear Hartmann-Hahn transfer sequence that has a lower peak RF power and dissipates less RF power but still covers large chemical-shift ranges has been highly desired. Stimulated by the success of the decoupling sequences based on adiabatic inversion pulses [14,15] in achieving ultra-broadband heteronuclear decoupling in high-field NMR spectroscopy [16][17][18], several researchers have suggested exploiting adiabatic pulses to construct the Hartmann-Hahn sequences [19,20]. Although a high tolerance to inhomogeneous RF fields may be achieved for these sequences [19][20][21][22][23], it is really difficult to obtain a broadband Hartmann-Hahn transfer with these sequences at a low RF power.
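For orientation, the adiabatic condition that such amplitude- and frequency-modulated pulses must fulfill can be written in its standard textbook form; this gloss is added here for the reader and is not an equation reproduced from the paper.

```latex
% Standard adiabatic condition for a swept RF pulse: the effective
% field in the rotating frame must reorient slowly compared with the
% precession about it.
\[
  \omega_{\mathrm{eff}}(t) = \sqrt{\omega_1(t)^2 + \Delta\omega(t)^2},
  \qquad
  \theta(t) = \arctan\!\frac{\omega_1(t)}{\Delta\omega(t)},
  \qquad
  Q(t) = \frac{\omega_{\mathrm{eff}}(t)}{\left|\mathrm{d}\theta/\mathrm{d}t\right|} \gg 1 ,
\]
```

Here ω₁(t) is the RF amplitude modulation, Δω(t) is the frequency offset of the swept pulse, θ is the tilt angle of the effective field, and Q is the adiabaticity factor; the "constant adiabatic factor" discussed below corresponds to keeping Q(t) constant during the sweep.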
The adiabatic pulses have superior performance [14,15]: (a) this type of pulse has a much wider inversion bandwidth, yet dissipates much less RF power and has a much lower peak RF power, than rectangular pulses; (b) the pulses have a high tolerance to an inhomogeneous RF field. These advantageous properties can be attributed to the fact that such pulses consist of pairs of amplitude- and frequency-modulation functions which fulfill the adiabatic condition [14,15]. It has long been desirable to carry the superior performance of adiabatic pulses over to Hartmann-Hahn transfer sequences by using adiabatic pulses to build up the sequences. However, so far no broadband heteronuclear Hartmann-Hahn sequences built up from adiabatic inversion pulses with a low RF power have been found in high-resolution liquid-state NMR spectroscopy. Other researchers have suggested adiabatic Hartmann-Hahn transfer sequences with amplitude-modulated pulses, as can be seen in Refs. [23][24][25][26]. These sequences are usually used in solid-state NMR spectroscopy. It is to be expected that it is usually difficult for these sequences to obtain broadband Hartmann-Hahn transfer with a low RF power, since they use only an amplitude-modulated RF field. It is well known that the flip angle of an RF pulse is usually proportional to the pulsewidth for a hard pulse or, more generally, depends in a complex way on the pulsewidth for an amplitude-modulated shaped pulse [7]. Thus, for a given RF power an excitation pulse (90 degree) or an inversion pulse (180 degree) has a fixed pulsewidth [5,7]. However, there is also another type of pulse whose flip angle is independent of the pulsewidth; examples are the adiabatic inversion pulses and the quasi-adiabatic excitation (90 degree) pulses [27], as can be seen below. This type of pulse has many advantageous performances and characteristic properties. One of the important properties is that the conversion efficiency of the initial longitudinal magnetization M_0 in a single-spin system under the pulse is determined only by the constant adiabatic factor of the pulse within the conversion bandwidth, as investigated recently [27]. This indicates that the flip angle of the pulse is determined only by the constant adiabatic factor. Furthermore, the flip angle is independent of the pulse duration within the conversion bandwidth, as investigated below, and the conversion bandwidth increases with the pulse duration and with the square of the RF power [15,27,35,36]. This property shows that adiabatic inversion pulses may not be proper for constructing broadband heteronuclear Hartmann-Hahn transfer sequences, according to the theoretical analysis and numerical simulation below. It was the understanding of this point that led me to the new idea of constructing heteronuclear Hartmann-Hahn transfer sequences from quasi-adiabatic excitation (90 degree) pulses. As investigated below, this property also shows that quasi-adiabatic 90 degree pulses could be suitable for constructing heteronuclear Hartmann-Hahn transfer sequences, in contrast to the adiabatic inversion pulses. In this paper it is proposed to use the amplitude- and frequency-modulated quasi-adiabatic 90 degree pulses to build up heteronuclear Hartmann-Hahn transfer sequences. A theoretical analysis of the possible mechanism of the heteronuclear Hartmann-Hahn transfer is presented on the basis of the multiple-quantum operator algebra space formalism [28]. Numerical simulation confirms the theoretical analysis.
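As a quick reminder of the conventional flip-angle/pulsewidth relation invoked above (standard textbook material, not from the paper): for a resonant pulse of amplitude ω_1(t),

$$\theta = \int_0^{t_p}\omega_1(t)\,dt \;\;\xrightarrow{\text{hard pulse}}\;\; \theta = \omega_1\, t_p ,$$

so that, e.g., ω_1/2π = 25 kHz fixes a 90 degree pulse at t_p = (π/2)/ω_1 = 10 µs. It is exactly this fixed-length relation that the adiabatic and quasi-adiabatic pulses evade.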
With the help of the multiple-quantum operator algebra space formalism, ultra-broadband heteronuclear Hartmann-Hahn transfer sequences could be constructed with the quasi-adiabatic 90 degree pulses.

The even-order multiple-quantum operator subspace
A heteronuclear coupled two-spin system under a pair of amplitude- and frequency- (or phase-) modulated radiofrequency (RF) pulses has the total spin Hamiltonian in the rotating frame, when neglecting relaxation effects,

H(t) = H_0(t) + H_I, (1)

H_0(t) = Ω_i I_z + Ω_s S_z + ω_i(t)[I_x cos φ_i(t) + I_y sin φ_i(t)] + ω_s(t)[S_x cos φ_s(t) + S_y sin φ_s(t)], (2)

H_I = 2πJ I_z S_z, (3)

where Ω_i and Ω_s are the chemical shifts of the spins I and S, respectively, J is the scalar coupling constant between the two spins, and ω_q(t) and φ_q(t) are the amplitude- and phase-modulation functions of the pulse applied to the spin q (q = i, s), respectively. The total time-evolution propagator of the spin system under the pulses can then be written as

U(t) = T exp(−i ∫_0^t H(t′) dt′), (4)

where T is the Dyson time-ordering operator. This propagator can be decomposed into the product of two factors corresponding to the two Hamiltonian operators,

U(t) = U_0(t) U_i(t), (5)

U_0(t) = T exp(−i ∫_0^t H_0(t′) dt′), (6)

U_i(t) = T exp(−i ∫_0^t H_i(t′) dt′), (7)

with the interaction Hamiltonian in the interaction frame

H_i(t) = U_0(t)^{-1} H_I U_0(t). (8)

Since the Hamiltonian H_0(t) of Eq. (2) does not contain the interaction between the two spins, the corresponding propagator U_0(t) actually describes the time evolution of a non-interacting two-spin system under a pair of amplitude- and frequency-modulated pulses. One then has the following unitary transformations [29][30][31][32]:

U_0(t)^{-1} I_z U_0(t) = α_i(Ω_i, t) I_z + β_i(Ω_i, t) I_x + γ_i(Ω_i, t) I_y, (9a)

U_0(t)^{-1} S_z U_0(t) = α_s(Ω_s, t) S_z + β_s(Ω_s, t) S_x + γ_s(Ω_s, t) S_y, (9b)

because each of these unitary transformations is equivalent to a rotation in the Lie algebra space su(2). Inserting Eqs. (9a) and (9b) into Eq. (8) gives

H_i(t) = 2πJ [α_iα_s I_zS_z + β_iβ_s I_xS_x + β_iγ_s I_xS_y + γ_iβ_s I_yS_x + γ_iγ_s I_yS_y + α_iβ_s I_zS_x + α_iγ_s I_zS_y + β_iα_s I_xS_z + γ_iα_s I_yS_z]. (10)

In Eq. (10) the first term is the longitudinal two-spin order operator (2I_zS_z), the last four terms are single-quantum coherence operators, and the remaining four terms are even-order multiple-quantum (double- and zero-quantum) coherence operators [28]. It will be seen in the next sections that, for a pair of adiabatic inversion pulses applied simultaneously to the heteronuclear coupled two-spin system, the dominating term in the interaction Hamiltonian of Eq. (10) within their inversion band is the longitudinal two-spin order operator,

H_i(t) ≈ 2πJ_zz I_zS_z, (11)

where the effective coupling constant J_zz ≈ J. It is well known that this Hamiltonian cannot drive the Hartmann-Hahn transfer [5]. This may be the main reason why the adiabatic inversion pulses are usually not suitable for driving the Hartmann-Hahn transfer. For the quasi-adiabatic excitation (90 degree) pulses, however, the dominating terms in the interaction Hamiltonian of Eq. (10) within their excitation band are the even-order multiple-quantum operators,

H_i(t) ≈ 2πJ [β_iβ_s I_xS_x + β_iγ_s I_xS_y + γ_iβ_s I_yS_x + γ_iγ_s I_yS_y]. (12)

This Hamiltonian plays an important role in driving the heteronuclear Hartmann-Hahn transfer. It also shows that the quasi-adiabatic excitation pulses could be useful for constructing broadband heteronuclear Hartmann-Hahn transfer sequences. In the coupled two-spin system IS, the even-order multiple-quantum operator subspace [28] contains the double- and zero-quantum coherence operators, the longitudinal magnetization operators (I_z and S_z), and the two-spin order operator (2I_zS_z). The conventional Hartmann-Hahn polarization transfer [1][2][3][4][5][6] is usually performed in the zero-quantum operator subspace, which consists of the zero-quantum coherence operators, the longitudinal magnetization operators (I_z and S_z), and the two-spin order operator (2I_zS_z) of the two-spin system, since the effective spin Hamiltonian driving the transfer is a zero-quantum operator.
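For readers who want to see the decomposition of Eqs. (8)-(10) concretely, the following minimal numerical sketch (added here for illustration, not taken from the paper; the pulse shape, offsets, and step count are arbitrary choices, and the helper name U0 is mine) builds the propagator of the non-interacting spins for a pair of simultaneous modulated pulses, rotates H_I into the interaction frame, and prints the nine product-operator coefficients of Eq. (10). Python with NumPy/SciPy is assumed.

import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators and their two-spin embeddings I (left slot) and S (right slot)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
e2 = np.eye(2, dtype=complex)
I = {p: np.kron(op, e2) for p, op in zip("xyz", (sx, sy, sz))}
S = {p: np.kron(e2, op) for p, op in zip("xyz", (sx, sy, sz))}

J = 140.0                                  # scalar coupling (Hz), as in the text
H_I = 2 * np.pi * J * I["z"] @ S["z"]      # coupling Hamiltonian, Eq. (3)

def U0(t_p, n, offsets, amp, phase):
    """Propagator of the two NON-interacting spins (Eq. (6)) under a pair of
    simultaneous amplitude/phase-modulated pulses, piecewise-constant in time."""
    U, dt = np.eye(4, dtype=complex), t_p / n
    for k in range(n):
        t = (k + 0.5) * dt
        w1, ph = amp(t), phase(t)
        H0 = sum(off * A["z"] for off, A in zip(offsets, (I, S)))
        H0 = H0 + w1 * ((I["x"] + S["x"]) * np.cos(ph) + (I["y"] + S["y"]) * np.sin(ph))
        U = expm(-1j * H0 * dt) @ U
    return U

# Illustrative pulse pair: constant amplitude with a linear frequency sweep
t_p = 2e-3                                  # 2 ms pulse
amp = lambda t: 2 * np.pi * 2.0e3           # 2 kHz RF amplitude
rate = 4.0e6                                # sweep rate (Hz/s): quadratic phase
phase = lambda t: np.pi * rate * (t - t_p / 2) ** 2

U = U0(t_p, 2000, (2 * np.pi * 500.0, -2 * np.pi * 300.0), amp, phase)
H_i = U.conj().T @ H_I @ U                  # interaction-frame Hamiltonian, Eq. (8)

# Coefficients of H_i on the orthonormal product-operator basis 2*I_p*S_q
for p in "zxy":
    for q in "zxy":
        B = 2 * I[p] @ S[q]                 # Tr(B^2) = 1 for these operators
        c = np.trace(B.conj().T @ H_i).real
        print(f"2I{p}S{q}: {c / (2 * np.pi):+9.2f} Hz")

Which of the nine coefficients dominate depends, exactly as in the text, on the offsets and on the pulse parameters; replacing the toy sweep with an inversion or a quasi-adiabatic 90 degree shape reproduces the contrast between Eqs. (11) and (12).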
However, the Hartmann-Hahn polarization transfer may be performed more generally in the even-order multiple-quantum operator subspace, with an effective spin Hamiltonian that is an even-order multiple-quantum operator [28,29]. One possible form of the even-order multiple-quantum Hamiltonian is given by Eq. (12); it can be written generally as

H_e(t) = 2π Σ_{p,q=x,y} J_pq I_pS_q. (13)

The Hartmann-Hahn transfer driven by the even-order multiple-quantum Hamiltonian of Eq. (13) can then be expressed as ρ(t) = U_e(t) ρ(0) U_e(t)^{-1}, where the even-order multiple-quantum unitary propagator is given by

U_e(t) = T exp(−i ∫_0^t H_e(t′) dt′). (16)

This propagator may also be expressed generally in a unitary matrix form whose nonzero matrix elements are time-dependent and also dependent on the parameters {J_pq; p, q = x, y} of Eqs. (14a) and (14b), and hence on the resonance offsets. Applying this matrix propagator to the initial longitudinal magnetization ρ(0) = I_z yields the transformed density operator of Eq. (18), which shows that the transformation may in general create double- and zero-quantum coherences as well as the longitudinal magnetization and spin order operators. Obviously, this is due to the properties of the even-order multiple-quantum operators [28,33,34]. By comparing Eq. (18) with Eq. (15) one obtains the conditions for the complete Hartmann-Hahn transfer. If the spin Hamiltonian of Eq. (13) is time-independent, the even-order multiple-quantum propagator of Eq. (16) can be written generally as the exponential operator of Eq. (19), U_e(t) = exp(−iH_e t). Using this propagator one can calculate the Hartmann-Hahn transfer analytically; in the resulting expressions of Eqs. (20a) and (20b) the first two terms are longitudinal magnetization operators and the last four terms are double- and zero-quantum coherence operators, from which the conditions for the complete Hartmann-Hahn transfer follow. The transformations of Eqs. (20a) and (20b) show that not only zero-quantum but also double-quantum coherences are generated during the Hartmann-Hahn transfer period, besides the longitudinal magnetization I_z and S_z, indicating that the Hartmann-Hahn transfer driven by the even-order multiple-quantum Hamiltonian of Eq. (13) is performed in the even-order multiple-quantum operator subspace of the two-spin system (IS).

The amplitude- and frequency-modulated adiabatic inversion and quasi-adiabatic excitation pulses
The adiabatic inversion and quasi-adiabatic excitation pulses are composed of pairs of amplitude- and frequency-modulation functions. There are many methods to construct an adiabatic inversion pulse [14,15,27,35,36]. Recently, a general analytical method [27] has been proposed to construct high-performance adiabatic inversion and quasi-adiabatic 90 degree pulses. This method emphasizes the effect of the adiabatic factor on the performance of adiabatic and quasi-adiabatic pulses; that is, the adiabatic factor can play an important role in the construction of the pulses. Here the adiabatic factor p is defined as the ratio of the rotation rate of the effective field to its magnitude; note in particular that this definition of the adiabatic factor is the inverse of the one in Refs. [14,15,35,36]. Based on the general analytical method [27], a simple and convenient formula, Eq. (22), to design an adiabatic pulse is obtained; it constructs the frequency-modulation function from the amplitude-modulation function ω_1(t) and the constant adiabatic factor p for 0 ≤ t ≤ t_p, where t_p is the pulsewidth of the adiabatic pulse, such that the adiabatic condition is always met over the whole chemical shift (Ω) range or the whole resonance offset (Δω) range. The last term in Eq. (22) is the contribution of the time derivative of the amplitude-modulation function to the construction of the frequency-modulation function.
If this term is small and can be neglected, Eq. (22) reduces to the conventional approximate formula [35,36] for constructing an adiabatic inversion pulse,

dΔω(t)/dt = p ω_1(t)^2. (23)

Equations (22) and (23) provide simple and convenient methods to construct adiabatic inversion and quasi-adiabatic 90 degree pulses; the more general method can be found in Ref. [27]. It is particularly important that, when constructing an adiabatic inversion pulse with Eqs. (22) and (23), the constant adiabatic factor is set to p = 1/3, whereas for a quasi-adiabatic 90 degree pulse the constant adiabatic factor is set to p = 2.3. The quasi-adiabatic 90 degree pulse is not an adiabatic pulse, since it does not fulfill the adiabatic condition, but it has properties similar to those of an adiabatic inversion pulse and is constructed with the same methods, such as Eqs. (22) and (23), as the adiabatic inversion pulse [27]; it is therefore called a quasi-adiabatic pulse here. The frequency-modulation function is approximately proportional to the product pω_0^2 of the adiabatic factor p and the square of the peak amplitude ω_0 of the RF pulse [35,36], as can be seen from Eqs. (22) and (23). Therefore, for some pulses such as the hyperbolic secant adiabatic pulse, one could change the peak power of the pulse to obtain a 90 or 180 degree flip angle even if the adiabatic factor is not set to p = 2.3 or 1/3 when constructing the frequency-modulation function of the pulse by Eqs. (22) and (23) [15]. However, it is optimal to construct the adiabatic inversion and quasi-adiabatic excitation pulses using Eqs. (22) and (23), or more generally the general analytical method [27], with the settings p = 1/3 and 2.3, respectively. Since the operating bandwidth of an adiabatic or quasi-adiabatic pulse is approximately proportional to the adiabatic factor p, the excitation bandwidth of the quasi-adiabatic 90 degree pulse is about seven times wider than the inversion bandwidth of the adiabatic inversion pulse with the same RF power and pulsewidth. That the quasi-adiabatic 90 degree pulses could be suitable for constructing heteronuclear Hartmann-Hahn transfer sequences is due to the extraordinary property that the flip angle of these pulses is independent of the pulse duration within their operating band. It is well known that the flip angle of conventional rectangular pulses and amplitude-modulated shaped pulses usually depends on the pulsewidth. Actually, the flip angle of the adiabatic and quasi-adiabatic pulses is determined only by the constant adiabatic factor [27]. Numerical simulation confirms this important property that the flip angle is independent of the pulsewidth. As a typical example, the conventional hyperbolic secant (backward-half part) quasi-adiabatic 90 degree pulse is investigated in the numerical simulation. For simplicity, the frequency-modulation function of the pulse is generated by Eq. (23) with the constant adiabatic factor p = 2.3, starting from the backward-half hyperbolic secant amplitude-modulation function. Figure 1 shows that the bandwidth of the quasi-adiabatic excitation pulse increases with the pulse duration, while all the flip angles remain the same (90 degree) for the different pulse durations; the same behaviour is indicated by Figure 2, where the inversion profiles of the backward-half hyperbolic secant adiabatic inversion pulse with different pulse durations are plotted.
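To make the bandwidth scaling concrete, here is a minimal sketch assuming that Eq. (23) takes the conventional offset-independent-adiabaticity form dΔω/dt = p·ω_1(t)^2 of Refs. [35,36] (the exact Eqs. (22)-(23) are given in Ref. [27]); the sech truncation factor, pulse length, and RF amplitude are illustrative values, and the helper names are mine.

import numpy as np

def backward_half_sech(t, t_p, w0, beta=5.3):
    """Backward-half hyperbolic secant amplitude: rises to the peak w0 at t = t_p."""
    return w0 / np.cosh(beta * (1.0 - t / t_p))

def freq_modulation(p, t_p, w0, n=10000):
    """Integrate d(dw)/dt = p * w1(t)^2 and center the sweep on zero offset."""
    t = np.linspace(0.0, t_p, n)
    w1 = backward_half_sech(t, t_p, w0)
    dw = np.cumsum(p * w1**2) * (t_p / n)
    return t, w1, dw - dw[-1] / 2.0         # symmetric sweep about resonance

t_p, w0 = 2e-3, 2 * np.pi * 2.5e3           # 2 ms pulse, 2.5 kHz peak RF amplitude
for p in (1 / 3, 2.3):                      # inversion vs. quasi-adiabatic 90 degree
    _, _, dw = freq_modulation(p, t_p, w0)
    print(f"p = {p:4.2f}: sweep range = {(dw[-1] - dw[0]) / (2 * np.pi):8.0f} Hz")

The sweep range scales linearly with p and with ω_0^2 t_p, so the p = 2.3 pulse comes out about 2.3/(1/3) ≈ 7 times wider than the p = 1/3 pulse of the same power and length, as quoted in the text.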
However, the adiabatic inversion (180 degree) pulses are just not proper for driving the broadband Hartmann-Hahn transfer, as will be discussed in detail below.

(a) The adiabatic inversion pulses
If the pair of pulses applied to the two heteronuclear spins are adiabatic inversion pulses, the conversion coefficients α_q(Δω_q, t_p), β_q(Δω_q, t_p), and γ_q(Δω_q, t_p) of Eqs. (9a) and (9b), where Δω_q is the resonance offset of the spin q (q = i, s) and t_p is the pulse duration, behave as follows. (1) Within the inversion band ΔW_q(t_p) the coefficients fulfill the relationship

α_q(Δω_q, t_p) = −1, β_q(Δω_q, t_p) = γ_q(Δω_q, t_p) = 0, (24)

which shows that the initial longitudinal magnetization M_0 (I_z or S_z) is inverted completely within the inversion band during the duration t_p of the pulses. (2) In the transition regions the initial longitudinal magnetization M_0 is converted partly into the transverse magnetization M_x and M_y,

−1 < α_q(Δω_q, t_p) < 1, β_q(Δω_q, t_p) ≠ 0, γ_q(Δω_q, t_p) ≠ 0. (25)

(3) At large resonance offsets, i.e., outside the inversion band, the initial longitudinal magnetization M_0 remains unchanged,

α_q(Δω_q, t_p) = 1, β_q(Δω_q, t_p) = γ_q(Δω_q, t_p) = 0, (26)

which indicates that the adiabatic inversion pulses do not act when the resonance offset is outside the inversion bandwidth. It must be emphasized that the conversion coefficients α_q(Δω_q, t) (q = i, s) are independent of the pulse duration t (t > t_p) within the inversion band ΔW_q(t_p) of the adiabatic pulses, as shown in Figures 2 and 3(A). The average Hamiltonian theory [5] may explain approximately why the heteronuclear Hartmann-Hahn transfer may not be achieved with adiabatic inversion pulses. For simplicity, examine the Hartmann-Hahn polarization transfer on the resonance offset plane region SW(t_p) spanned by the inversion bands ΔW_q(t_p) of the two spins (q = i, s), and assume that a pair of simultaneous adiabatic inversion pulses is applied to the two-spin system. These pulses have the same inversion bandwidth, which depends on the duration t_p of the pulses, as shown in Figure 3; the inversion bandwidth increases with the pulse duration while the flip angle is independent of it, as shown in Figure 2. For convenience, the total duration of the Hartmann-Hahn transfer driven by the adiabatic inversion pulses is denoted by T_p, and the interaction Hamiltonian driving the transfer during the period T_p is assumed to be given generally by Eq. (10). The zero-order average Hamiltonian for the interaction Hamiltonian is written as [5]

H̄^(0) = (1/T_p) ∫_0^{T_p} H_i(t) dt. (27)

This integral can be expressed as the discrete sum

H̄^(0) = (1/n) Σ_{k=1}^{n} H_i(kΔt), Δt = T_p/n, (28)

where the number n is taken sufficiently large. It should be noted that the interaction Hamiltonian (10) generally depends on the resonance offsets of the two spins, H_i(t) = H_i(Δω_i, Δω_s, t). Now, when the pulse duration t (or the Hartmann-Hahn transfer duration) is smaller than t_p, the interaction Hamiltonian H_i(t) (t < t_p) is given by the full Eq. (10) within the inversion plane region SW(t_p) (q = i, s), as shown in Figure 3(B), since on this region all the coefficients in Eq. (10) may be nonzero when t < t_p. This is related to the fact that the inversion profiles of the adiabatic pulses with pulse duration t < t_p still contain transition regions within the inversion band ΔW_q(t_p), as can be seen in Figure 3(A). The interaction Hamiltonian may then contain zero-, single-, and double-quantum coherence and longitudinal two-spin order operators. Such a Hamiltonian may not, in general, drive the broadband Hartmann-Hahn transfer.
However, when the pulse duration t is longer than t_p, the interaction Hamiltonian H_i(t) (t > t_p) of Eq. (10) within the inversion plane region SW(t_p) becomes simple and contains only the longitudinal two-spin order operator (2I_zS_z), since within the inversion band ΔW_q(t_p) only the coefficients α_q(Δω_q, t) = −1 (q = i, s; t > t_p) of Eqs. (9a) and (9b) survive and all other coefficients vanish. The zero-order average Hamiltonian of Eq. (28) within the inversion plane region SW(t_p) can then be divided into two parts,

H̄^(0) = (1/T_p) [∫_0^{t_p} H_i(t) dt + ∫_{t_p}^{T_p} H_i(t) dt],

where the first part takes the general form of Eq. (10) but the second part contains only the longitudinal two-spin order operator (2I_zS_z). Since the total duration T_p of the Hartmann-Hahn transfer is much longer than the time t_p, that is, T_p >> t_p, the second part is the dominating term and the first part can be neglected. Consequently, the zero-order average Hamiltonian within the inversion plane region SW(t_p) is approximately equal to the longitudinal two-spin order operator,

H̄^(0) ≈ 2πJ_zz I_zS_z.

Obviously, this Hamiltonian cannot drive the Hartmann-Hahn transfer within the inversion plane region SW(t_p). More generally, the time-evolution propagator of Eq. (7) can be expressed as a sequence of propagators over sufficiently small intervals {t_k} (Eq. (29)). The total duration of the Hartmann-Hahn sequence can be divided into two periods, so that the sequence of Eq. (29) can be expressed as the product of two parts, one with evolution time shorter than t_p and the other covering the rest of the evolution, with the propagators defined as in Eq. (7):

U_i(T_p) = U_i(T_p, t_p) U_i(t_p, 0). (30)

All these unitary propagators usually depend on the resonance offsets of the two spins. Since the interaction Hamiltonian is generally given by Eq. (10) within the region SW(t_p), any initial magnetization ρ(0) may be transferred under the propagator U_i(t_p, 0) into all the possible operator components, e.g., double- and zero-quantum coherences, etc., in the Liouville operator space of the two-spin system. If the duration t_p is much shorter than the total duration T_p, the total propagator within the inversion plane region SW(t_p) is given approximately by

U_i(T_p) ≈ exp(−i 2πJ_zz I_zS_z T_p). (32)

It is well known that such a propagator cannot drive the Hartmann-Hahn transfer. On the other hand, outside the resonance offset plane region SW(T_p), i.e., for resonance offsets beyond the inversion bands of the two spins (q = i, s), the adiabatic inversion pulses do not act: the coefficients in Eqs. (9a) and (9b) reduce to α_q = 1 and β_q = γ_q = 0, so that the propagators reduce to free evolution under the scalar coupling, as can be seen from Eqs. (1)-(3). These propagators also cannot drive the Hartmann-Hahn transfer. In the transition region between the resonance offset plane regions SW(T_p) and SW(t_p), the interaction Hamiltonian is complicated and is given generally by Eq. (10); it is usually difficult for such a Hamiltonian to drive a broadband Hartmann-Hahn transfer. Therefore, a pair of adiabatic inversion pulses applied simultaneously to two heteronuclear coupled spins usually cannot achieve an effective Hartmann-Hahn transfer between the two spins. Numerical simulation was carried out for a heteronuclear two-spin system with scalar coupling constant J = 140 Hz under full hyperbolic secant adiabatic inversion pulses, with the hyperbolic secant amplitude-modulation function and the frequency-modulation function constructed by Eq. (23) with the constant adiabatic factor p = 1/3.
There are differences between the backward-half and the full hyperbolic secant pulses, but the conclusion drawn from the numerical calculation below is the same for the two adiabatic pulses. For the full hyperbolic secant pulse, numerical simulation shows that during the beginning T_p/4 period of the pulse the longitudinal magnetization I_z (or S_z) remains unchanged over the inversion band; the interaction Hamiltonian during this beginning T_p/4 period is therefore the longitudinal two-spin order operator of Eq. (11), with corresponding propagator exp(−i 2πJ_zz I_zS_z T_p/4), and after the time T_p/4 the propagator is approximated by Eq. (32). Assume that the initial magnetization is ρ(0) = I_x. The total propagator (the pulse sequence) to be simulated numerically is composed accordingly for the inversion pulses and of the free-evolution propagator for non-inversion pulses. The density operator is calculated numerically according to ρ(T_p) = U ρ(0) U^{-1} with the propagator (33a) or (33b). Figure 4 shows the resonance offset dependence of the anti-phase magnetization (2I_yS_z) created by the propagator within the inversion plane region (q = i, s) for the pulse duration T_p = 3/(2J). It can be seen that the initial magnetization I_x is almost completely transferred into the anti-phase magnetization −(2I_yS_z) on the resonance offset plane region; the theoretical propagator U_i(T_p) of Eq. (32) likewise predicts that the initial magnetization I_x should be completely transferred into the anti-phase magnetization −(2I_yS_z) when the duration is T_p = 3/(2J). Numerical simulation also shows that the initial magnetization I_x is almost completely transferred into the magnetization −I_x when T_p = 1/J, which is also consistent with the prediction of the propagator of Eq. (32). These numerical simulations using the propagator (33b) and the theoretical propagator show that it is difficult to achieve an effective broadband Hartmann-Hahn transfer in the two-spin system under the hyperbolic secant adiabatic inversion pulses. Therefore, a pair of adiabatic inversion pulses applied simultaneously to two heteronuclear coupled spins usually cannot drive an effective Hartmann-Hahn transfer between the two spins. Actually, it was first understanding this point that led me to the new idea of building up heteronuclear Hartmann-Hahn transfer sequences with the quasi-adiabatic excitation (90 degree) pulses.

(b) The quasi-adiabatic excitation (90 degree) pulses
Now consider the case that the pair of pulses used to drive the Hartmann-Hahn transfer in a heteronuclear two-spin system are two quasi-adiabatic excitation (90 degree) pulses instead of adiabatic inversion pulses. The conversion coefficients then behave as follows. (1) Within the excitation band ΔW_q(t_p), as shown in the schematic Figure 3(A), the conversion coefficients fulfill

α_q(Δω_q, t_p) = 0, β_q(Δω_q, t_p)^2 + γ_q(Δω_q, t_p)^2 = 1,

which shows that under the quasi-adiabatic excitation pulses the initial longitudinal magnetization M_0 (I_z or S_z) is converted completely into the transverse magnetization M_x and M_y within the excitation band. (2) In the transition regions the initial longitudinal magnetization M_0 is converted only partly into the transverse magnetization M_x and M_y. (3) At large resonance offsets, i.e., outside the excitation band, the initial longitudinal magnetization M_0 remains unchanged. Obviously, the quasi-adiabatic 90 degree pulses do not act outside their excitation band.
The quasi-adiabatic excitation (90 degree) pulses have the same property as the adiabatic inversion pulses: the flip angle of the pulses is independent of the pulse duration, and the excitation bandwidth of the pulses increases with the pulse duration and with the square of the RF power. Therefore, the excitation band at any duration t covers the excitation band ΔW_q(t_p) attained at the duration t_p if t > t_p. Using this property, one may find the possible reason why it could be possible for the quasi-adiabatic 90 degree pulses to drive the heteronuclear Hartmann-Hahn transfer. In the resonance offset plane region SW(t_p) spanned by the excitation bands of the two spins, as shown in Figure 3(B), the interaction Hamiltonian of Eq. (10) can be simplified by the excitation condition of the quasi-adiabatic 90 degree pulses, α_q(Δω_q, t) = 0, whenever the duration t of the Hartmann-Hahn transfer sequence is longer than t_p, i.e., t > t_p. In this case the interaction Hamiltonian reduces to the form of Eq. (12), indicating that the interaction Hamiltonian is now an even-order multiple-quantum operator in the region SW(t_p) instead of the longitudinal two-spin order operator obtained in the case of the adiabatic inversion pulses. When the total duration T_p of the Hartmann-Hahn sequence is much longer than the time t_p, that is, T_p >> t_p, the even-order multiple-quantum interaction Hamiltonian makes the main contribution to the whole Hartmann-Hahn transfer in the region SW(t_p). It was shown in the former sections that an even-order multiple-quantum Hamiltonian may drive the Hartmann-Hahn transfer; the Hartmann-Hahn transfer may therefore be achieved within the resonance offset plane region SW(t_p) by the pair of quasi-adiabatic 90 degree pulses. This is completely different from the case of the adiabatic inversion pulses. Therefore, it could be possible for a pair of quasi-adiabatic excitation (90 degree) pulses applied simultaneously to two heteronuclear coupled spins to drive the Hartmann-Hahn transfer between the two spins. Numerical simulation was used to investigate the Hartmann-Hahn transfer process driven by the hyperbolic secant quasi-adiabatic 90 degree pulses in a two-spin system with J = 140 Hz. The frequency-modulation function of the quasi-adiabatic pulse is generated by Eq. (23) with the constant adiabatic factor p = 2.3 for both spins (q = i, s). One can see that the complete Hartmann-Hahn transfer from the spin I to the spin S may occur even at some large resonance offsets of the spin S, although the transfer is not uniform and may not be complete at some resonance offsets of the spin I. This shows that the hyperbolic secant quasi-adiabatic excitation pulse could provide a possibility to achieve the heteronuclear Hartmann-Hahn transfer. However, according to the properties of even-order multiple-quantum operators [28,33,34], an even-order multiple-quantum Hamiltonian generated by the hyperbolic secant quasi-adiabatic pulses should not drive the transfer directly from the initial longitudinal magnetization I_z to S_z but should first create even-order multiple-quantum coherences. The backward-half hyperbolic secant quasi-adiabatic 90 degree pulse is used to investigate this transfer in the numerical calculation. Figure 6 shows that the initial longitudinal magnetization ρ(0) = I_z in the two-spin system with J = 140 Hz is almost completely transferred to the double- and zero-quantum coherences on the resonance offset plane region, indicating that the interaction Hamiltonian in the resonance offset region is approximately an even-order multiple-quantum operator.
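As a concrete illustration of how an even-order multiple-quantum Hamiltonian drives the transfer, consider the special time-independent case of Eq. (13) with J_xx = J_yy = J and J_xy = J_yx = 0 (a worked example added here for clarity, not taken from the paper):

$$H_e = 2\pi J\,(I_xS_x + I_yS_y).$$

The sum I_z + S_z commutes with H_e, while the difference rotates into zero-quantum coherence,

$$e^{-iH_e t}\,(I_z - S_z)\,e^{iH_e t} = (I_z - S_z)\cos(2\pi J t) - 2\,(I_yS_x - I_xS_y)\sin(2\pi J t),$$

so that the initial state ρ(0) = I_z evolves as

$$\rho(t) = \tfrac{1}{2}(I_z + S_z) + \tfrac{1}{2}(I_z - S_z)\cos(2\pi J t) - (I_yS_x - I_xS_y)\sin(2\pi J t),$$

and ρ(1/(2J)) = S_z: the transfer is complete at t = 1/(2J), with only even-order (here zero-quantum) coherence generated transiently, consistent with the transfer time of order 1/J noted in the concluding section. The double-quantum choice J_xx = −J_yy = J analogously transfers I_z into −S_z through double-quantum coherence.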
A possible ultra-broadband heteronuclear Hartmann-Hahn sequence
Although it could be possible for the quasi-adiabatic 90 degree pulses to drive heteronuclear Hartmann-Hahn transfer, it is usually difficult to obtain a broadband heteronuclear Hartmann-Hahn transfer with a simple sequence of quasi-adiabatic 90 degree pulses. The multiple-quantum operator algebra space formalism [28] may be very helpful in the construction of an ultra-broadband heteronuclear Hartmann-Hahn sequence with the quasi-adiabatic 90 degree pulses. According to the theoretical analysis in the former sections, the heteronuclear Hartmann-Hahn transfer can be performed in the even-order multiple-quantum operator space; this principle may guide the design of ultra-broadband heteronuclear Hartmann-Hahn sequences using the quasi-adiabatic 90 degree pulses. A possible ultra-broadband heteronuclear Hartmann-Hahn transfer sequence could thus be constructed with the quasi-adiabatic 90 degree pulses on the basis of the theoretical analysis in the even-order multiple-quantum operator subspace. Figure 7(A) shows the Hartmann-Hahn transfer sequence. The initial longitudinal magnetization I_z of the spin I is first transferred completely into the even-order multiple-quantum coherence operators by the first even-order multiple-quantum unitary propagator U_e1(t_p); then, under the second even-order multiple-quantum unitary propagator U_e2(t_p), the even-order multiple-quantum coherence operators are transferred completely to the longitudinal magnetization S_z of the spin S. The sequence consists of the quasi-adiabatic 90 degree pulses, which are used to prepare the two even-order multiple-quantum propagators. This sequence is called the quasi-adiabatic excitation pulse driven ultra-broadband heteronuclear Hartmann-Hahn polarization transfer echo, as shown in the schematic picture of Figure 7(B). The polarization transfer echo is achieved in the even-order multiple-quantum operator subspace. The propagator U_0(t) of the quasi-adiabatic 90 degree pulses applied simultaneously to two non-interacting spins makes no net contribution to the transfer, but it can degrade and even destroy the echo, and hence needs to be refocused effectively. How to refocus the propagator U_0(t) effectively is at present a challenge in implementing the Hartmann-Hahn transfer echo experiment. One possible Hartmann-Hahn transfer echo sequence can be constructed as follows. The first even-order multiple-quantum unitary propagator U_e1(t_p) is prepared by a pair of backward-half hyperbolic secant quasi-adiabatic 90 degree pulses applied simultaneously to the two spins with the RF phase along the X direction, together with a refocusing propagator. The second even-order multiple-quantum unitary propagator U_e2(t_p) is prepared by the pair of backward-half hyperbolic secant quasi-adiabatic 90 degree pulses applied simultaneously to the two spins with the RF phase along the Y direction, together with a refocusing propagator. Numerical simulation has been carried out for the echo sequence of Figure 7 over a large resonance offset plane region of the two spins (q = i, s). These results illustrate for the first time the possibility that an ultra-broadband heteronuclear Hartmann-Hahn transfer can be achieved with quasi-adiabatic excitation pulses at a lower RF power, just as an ultra-broadband inversion can be obtained with an adiabatic inversion pulse at a lower RF power [14,15].
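The two-step structure of the echo can be mimicked in a toy model (an illustrative sketch, not the paper's pulse-level sequence): take the two even-order propagators to be generated by ideal Eq.-(13)-type Hamiltonians with X and Y phases, assume the U_0(t) refocusing is perfect, and check numerically that I_z is converted entirely into two-spin coherence by U_e1 and into S_z by U_e2. Python with NumPy/SciPy is assumed; the constants 2J and τ = 1/(4J) are choices made for this toy model.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
e2 = np.eye(2, dtype=complex)
Ix, Iy, Iz = (np.kron(s, e2) for s in (sx, sy, sz))
Sx, Sy, Sz = (np.kron(e2, s) for s in (sx, sy, sz))

J = 140.0
tau = 1.0 / (4 * J)
Hx = 4 * np.pi * J * Ix @ Sx        # even-order (ZQ+DQ) Hamiltonian, X phase
Hy = 4 * np.pi * J * Iy @ Sy        # even-order (ZQ+DQ) Hamiltonian, Y phase

U1, U2 = expm(-1j * Hx * tau), expm(-1j * Hy * tau)
rho_mid = U1 @ Iz @ U1.conj().T                 # state after the first propagator
rho_end = U2 @ rho_mid @ U2.conj().T            # state after the second propagator

def overlap(op, rho):
    """Normalized projection of rho onto the product operator op."""
    return np.trace(op.conj().T @ rho).real / np.trace(op.conj().T @ op).real

print("after U_e1:  <2IySx> =", round(overlap(2 * Iy @ Sx, rho_mid), 3))  # -> -1.0
print("after U_e2:  <Sz>    =", round(overlap(Sz, rho_end), 3))           # -> +1.0

After the first step the state is pure two-spin (zero- plus double-quantum) coherence, and after the second it is entirely S_z, which is exactly the echo picture of Figure 7 realized in the even-order multiple-quantum subspace.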
The echo-sequence simulations also show that the multiple-quantum operator algebra space formalism [28] is very helpful for the construction of the ultra-broadband heteronuclear Hartmann-Hahn transfer echo sequence. From the viewpoint of the quasi-adiabatic pulses, a longer pulse duration t_p corresponds to a wider excitation band of the pulses, which results in a wider excitation band of the even-order multiple-quantum interaction Hamiltonian and hence a more broadband Hartmann-Hahn transfer. On the other hand, the complete Hartmann-Hahn transfer also depends on the scalar coupling between the two heteronuclear nuclei: the transfer time is usually approximately proportional to the inverse of the scalar coupling constant [5,6]. It might seem that the bandwidth of the Hartmann-Hahn transfer is limited by the scalar coupling, but the excitation band of the quasi-adiabatic excitation pulses depends not only on the pulse duration but also on the square of the RF power [27]. In particular, the bandwidth of the quasi-adiabatic pulses increases quadratically with the RF power, indicating that the bandwidth of the Hartmann-Hahn transfer sequence should increase as the square of the RF power. This is the extraordinary performance of the heteronuclear Hartmann-Hahn transfer sequences based on the quasi-adiabatic 90 degree pulses. It indicates that ultra-broadband heteronuclear Hartmann-Hahn transfer could be achieved by the quasi-adiabatic 90 degree pulses with low RF power. Finally, it must be pointed out that the key to implementing the ultra-broadband heteronuclear Hartmann-Hahn transfer echo in NMR experiments is how the propagator U_0(t) of the quasi-adiabatic 90 degree pulses can be refocused effectively.
The C-terminal Region of Human Adipose Triglyceride Lipase Affects Enzyme Activity and Lipid Droplet Binding*
Adipose triglyceride lipase (ATGL) catalyzes the first step in the hydrolysis of triacylglycerol (TG), generating diacylglycerol and free fatty acids. The enzyme requires the activator protein CGI-58 (or ABHD5) for full enzymatic activity. Defective ATGL function causes a recessively inherited disorder named neutral lipid storage disease that is characterized by systemic TG accumulation and myopathy. In this study, we investigated the functional defects associated with mutations in the ATGL gene that cause neutral lipid storage disease. We show that these mutations lead to the expression of either inactive enzymes localizing to lipid droplets (LDs) or enzymatically active lipases with defective LD binding. Additionally, our studies assign important regulatory functions to the C-terminal part of ATGL. Truncated mutant ATGL variants lacking ∼220 amino acids of the C-terminal protein region do not localize to LDs. Interestingly, however, these mutants exhibit substantially increased TG hydrolase activity in vitro (up to 20-fold) compared with the wild-type enzyme, indicating that the C-terminal region suppresses enzyme activity. Protein-protein interaction studies revealed an increased binding of truncated ATGL to CGI-58, suggesting that the C-terminal part interferes with CGI-58 interaction and enzyme activation. Compared with the human enzyme, the C-terminal region of mouse ATGL is much less effective in suppressing enzyme activity, implicating species-dependent differences in enzyme regulation. Together, our results demonstrate that the C-terminal region of ATGL is essential for proper localization of the enzyme and suppresses enzyme activity.

Adipose triglyceride lipase (ATGL; official gene symbol PNPLA2, patatin-like phospholipase domain containing 2) is an important triacylglycerol (TG) lipase involved in the mobilization of TG stores (1,2). The enzyme belongs to a family of patatin domain-containing proteins originally observed in plants (3). The members of this family have been shown to hydrolyze TG, phospholipids, or retinyl esters (4-8). Defective ATGL function is characterized by systemic TG accumulation in humans (9,10) and rodents (2). In humans, mutations in either the ATGL gene or the gene for CGI-58 (comparative gene identification-58; also known as α/β-hydrolase fold-containing protein 5, ABHD5) are associated with a rare inherited disorder annotated as neutral lipid storage disease (NLSD) (11). CGI-58 functions as an activator protein of ATGL, and mutant forms of CGI-58 associated with NLSD completely lose their capability of activating ATGL (12). Although mutations in both ATGL and CGI-58 cause NLSD, the phenotypical appearance of patients is not identical. NLSD caused by defective CGI-58 function (also known as Chanarin-Dorfman syndrome) is clinically characterized by ichthyosis, often associated with mild myopathy and hepatomegaly. Other observed symptoms include ophthalmologic abnormalities, hearing loss, intestinal involvement, short stature, mental retardation, and microcephaly (13)(14)(15). In contrast, mutations in ATGL are not associated with ichthyosis. Affected individuals appear to develop a more severe form of myopathy than patients with defective CGI-58 function. Cardiac abnormalities and hepatomegaly have also been described (9,10). According to these divergent clinical phenotypes, Fischer et al.
(9) proposed NLSD with ichthyosis as a name for the subgroup of individuals with mutations in the CGI-58 gene and NLSD with myopathy for individuals with mutations in the ATGL gene. Because naturally occurring mutations in human genes offer a unique opportunity to study the structure-function relationship of enzymes, we investigated the functional defects of mutations in the ATGL gene causing NLSD. Our results identify the biochemical basis of the known genetic defects and assign an important function to the previously uncharacterized C-terminal region of the protein, which affects enzyme activity and mediates LD binding of the enzyme.

The PCR products were ligated into compatible restriction sites of the eukaryotic expression vectors pcDNA4/HisMaxC (Invitrogen) and pEYFP-C1 (BD Biosciences Clontech, Palo Alto, CA). A control pcDNA4/HisMax vector expressing β-galactosidase was provided by the manufacturer (Invitrogen).

Sequence Analysis-Sequence analysis of plasmid DNA was performed using the BigDye terminator mixture (Applied Biosystems, Foster City, CA). The PCR products were sequenced on an ABI PRISM 310 Genetic Analyzer (Applied Biosystems).

Expression of Recombinant Proteins and Preparation of Cell Extracts-Monkey embryonic kidney cells (Cos-7, ATCC CRL-1651) were cultivated in DMEM (Invitrogen) containing 10% fetal calf serum (Sigma-Aldrich) under standard conditions (37°C, 5% CO2). The cells were transfected with 1 µg of DNA complexed to Metafectene (Biontex GmbH, Munich, Germany) in serum-free DMEM. After 4 h the medium was replaced by regular growth medium supplemented with 10% fetal calf serum. For the preparation of cell extracts, the cells were washed with PBS, collected using a cell scraper, and disrupted in buffer A (0.25 M sucrose, 1 mM EDTA, 1 mM dithiothreitol, 20 µg/ml leupeptin, 2 µg/ml antipain, 1 µg/ml pepstatin, pH 7.0) by sonication (Virsonic 475, Virtis, Gardiner, NJ). The nuclei and unbroken cells were removed by centrifugation at 1,000 × g, 4°C for 10 min. The protein concentration of cell lysates was determined with the Bio-Rad protein assay according to the manufacturer's protocol (Bio-Rad 785) using BSA as standard. The expression of the His-tagged proteins was detected by Western blotting analysis as described (1).

Assay for TG Hydrolase Activity-For the determination of the TG hydrolase activity of the various recombinant proteins, 10-40 µg of protein of the respective cell extracts in a total volume of 100 µl of buffer A were incubated with 100 µl of substrate in a water bath at 37°C for 60 min. As a control, incubations under identical conditions were performed with LacZ-expressing lysates alone or mixed with the various recombinant protein lysates. After incubation, the reaction was terminated by adding 3.25 ml of methanol/chloroform/heptane (10:9:7) and 1 ml of 0.1 M potassium carbonate, 0.1 M boric acid (pH 10.5). After centrifugation (800 × g, 15 min), the radioactivity in 1 ml of the upper phase was determined by liquid scintillation counting.

Labeling and Isolation of LDs-Human skin fibroblasts of Patient 2 (FS282 mutation) were cultured in DMEM containing 10% fetal calf serum. For radioactive labeling of TG, confluent cells were incubated for 20 h in the presence of 0.2 mM oleate (4 mCi of [3H]-9,10-oleate/mmol) complexed to BSA at a FFA/BSA molar ratio of 3:1. For the isolation of LDs, the cells were washed with PBS and collected using a cell scraper. Thereafter, the cells were suspended in buffer A and disrupted by sonication (Virsonic 475, Virtis, Gardiner, NJ).
The cell lysates were transferred to SW41 tubes, overlaid with buffer B (50 mM potassium phosphate, pH 7.4, 100 mM KCl, 1 mM EDTA, 20 µg/ml leupeptin, 2 µg/ml antipain, 1 µg/ml pepstatin), and centrifuged in a SW41 rotor (Beckman, Fullerton, CA) (2 h, 100,000 × g, 4°C). LDs were collected as a white band from the top of the tubes and concentrated by centrifugation (20,000 × g, 15 min, 4°C). The underlying solution was removed, and LDs were resuspended in buffer B by brief sonication. The TG and protein contents of LDs were determined using commercial reagents (Thermotrace, Thermo Electron Corporation, Victoria, Australia, and Bradford, Bio-Rad Laboratories GmbH, Munich, Germany, respectively).

Assay for TG Hydrolase Activity Using Purified LDs as Substrate-For the determination of the TG hydrolase activity of the various recombinant proteins, 40 µg of protein of the respective cell extracts were incubated with 25 nmol of [3H]-9,10-oleate-labeled LDs (1,660 cpm/nmol TG) and 5% defatted BSA in a total volume of 200 µl. The reaction was incubated for 1 h at 37°C. The release of FFA was determined as described for the TG hydrolase activity assays using an artificial substrate.

Cellular Localization of ATGL Mutants-Cos-7 cells were seeded on glass coverslips in 6-well dishes (1.5 × 10^5 cells/well) and transfected with YFP-tagged human ATGL (hATGL) and ATGL mutants. 24 h after transfection, the cells were incubated for 20 h in regular growth medium supplemented with oleic acid (400 µM) complexed to fat-free BSA. Lipid droplets were stained by incubating the cells with 15 µg/ml Bodipy 558/568 C12 (Invitrogen) in DMEM for 2 h. The cells were washed three times with 1× PBS before mounting them on a Nipkow-based array confocal laser scanning microscope (19). The array confocal laser scanning microscope was built on a Zeiss Axiovert 200M (Zeiss Microsystems, Jena, Germany) equipped with a VoxCell Scan (VisiTech, Sunderland, UK), a 150-milliwatt argon laser (Laser Physics, West Jordan, UT), and a 30-milliwatt 405-nm laser diode (VisiTech). Single cells displaying a clear fluorescence were selected to acquire three-dimensional stacks (with a z-distance of 100 nm) using the α Plan-Fluar 100×/1.45 oil objective from Zeiss (Zeiss Microsystems, Jena, Germany). YFP fluorescence was excited with the argon laser at 488 nm and detected at 535 nm using the emission filter 535AF26 (Omega Optical, Brattleboro, VT). Bodipy 558/568 C12 fluorescence was excited at 514 nm and detected at 570 nm using the emission filter 570LP (Omega Optical, Brattleboro, VT). For the quantification of fluorescence signals, five LDs were randomly selected in single cells, and the average fluorescence intensity of the YFP-tagged ATGL variants was obtained along a circular line at the edge of each droplet. In analogy, the average intensity along a corresponding circular line at least 2 µm away from the lipid droplets was extracted to measure the LD-free cytoplasm. Differences in the subcellular localization of the different ATGL constructs were expressed as the ratio between the average fluorescence intensities at LDs and the average intensities in the cytoplasm. All of the image analyses were performed using Metamorph 5.0 (Universal Imaging, Visitron Systems, Puchheim, Germany) (20).

Isolation of Lipid Droplets for Western Blot Analysis-24 h after transfection, Cos-7 cells were incubated for 20 h in regular growth medium supplemented with oleic acid (400 µM) complexed to fat-free BSA (molar ratio 3:1).
Thereafter, the cells were washed with PBS and collected using a cell scraper in buffer A containing 1 mM phenylmethylsulfonyl fluoride and 1 mM EDTA. The cells were disrupted by sonication (Virsonic 475, Virtis, Gardiner, NJ), transferred to SW41 tubes, and centrifuged as described above. Proteins of the LD fraction were subjected to SDS-PAGE and Western blotting analysis using an anti-His antibody (GE Healthcare).

CGI-58 ELISA-For the detection of interacting proteins, ELISA plates (MaxiSorp, Nalge Nunc Int., Rochester, NY) were coated with 3 µg of GST-CGI in buffer C (50 mM Tris, pH 8.0, 150 mM NaCl). The wells were blocked with 5% BSA in buffer C and incubated with 50 µg of protein/well of Cos-7 cell extracts in 50 mM potassium phosphate buffer, pH 7.0, containing equimolar concentrations of His-tagged proteins. After washing with buffer C containing 0.05% Tween 20, the mouse anti-His antibody (GE Healthcare) was added in the same buffer containing 0.5% BSA. Subsequent to three further washes, horseradish peroxidase-conjugated anti-mouse antibody (GE Healthcare) was added. After washing three times with buffer C containing 0.05% Tween 20, the absorbance of tetramethylbenzidine was determined at 450 nm using 620 nm as the reference wavelength.

Biochemical Analysis-The TG concentration was determined using the Infinity Triglycerides reagent (Thermo Electron Corporation). Protein concentrations of cell extracts were measured with the Bradford protein assay (Bio-Rad) and BCA reagent (Pierce), respectively, using BSA as standard.

Structure of Wild-type and Mutant Human ATGL-Previous sequence analysis and three-dimensional structural comparisons with related proteins of the PNPLA family showed that the N-terminal part (residues 1-251) of ATGL belongs to the class of α/β proteins containing a patatin domain of a three-layer (α/β/α) sandwich architecture (residues 10-178). In addition to the name-giving plant protein patatin, Pat17 (21), this homologous superfamily (CATH code 3.40.1090.10) also contains the catalytic domain of human cytosolic phospholipase A2 (cPLA2) (22), of known three-dimensional structure. In these proteins, the hydrolytic reaction is mediated through a catalytic serine-aspartate dyad (Ser47-Asp215 in Pat17, Ser228-Asp548 in cPLA2, and Ser47-Asp166 in ATGL (predicted)), with the nucleophilic serine located within a GXSXG motif typically found in lipases of the α/β-hydrolase fold family (Fig. 1a). The C-terminal part (residues 250-504) is expected to consist mostly of α-helical and loop regions. A hydrophobic stretch (amino acids 315-360) potentially represents a lipid-binding region. Fischer et al. (9) described four different mutations associated with NLSD with myopathy (Fig. 1a). Patient 1 was reported as a compound heterozygote. One mutation led to an amino acid exchange within the α/β-fold at position 195 (P195L). A single base pair deletion on the second allele resulted in a frameshift at position 270 (FS270), leading to the expression of a protein with 319 amino acids (aa). Patient 2 exhibited a homozygous single base pair deletion resulting in a frameshift at position 282 (FS282) and the expression of a protein with 319 aa. Patient 3 was homozygous for a nonsense mutation, which led to the expression of a protein with 289 amino acids (Q289X).
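For orientation, a minimal sketch of how released FFA counts translate into the specific activities quoted in the following results, given the LD specific radioactivity of 1,660 cpm/nmol TG stated in the assay description above. The helper name, the counted fraction of the upper phase, and the example counts are illustrative assumptions, not values from the paper.

def specific_activity(cpm_sample, cpm_blank, counted_fraction,
                      cpm_per_nmol, hours, mg_protein):
    """nmol FFA released per hour per mg cell protein, blank-corrected.
    counted_fraction: assumed fraction of the total upper phase counted."""
    nmol_ffa = (cpm_sample - cpm_blank) / counted_fraction / cpm_per_nmol
    return nmol_ffa / (hours * mg_protein)

# e.g. 40 ug of lysate protein, 1 h at 37 C, one quarter of the upper phase counted
# (all numbers hypothetical):
print(specific_activity(cpm_sample=138, cpm_blank=70, counted_fraction=0.25,
                        cpm_per_nmol=1660, hours=1.0, mg_protein=0.040))
# ~4.1 nmol FFA/h per mg protein, the order of the hATGL activity on LDs (Fig. 1)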
TG Hydrolase Activity of ATGL and ATGL Mutants-To investigate the functional defects caused by the different mutations, Cos-7 cells were transfected with expression vectors encoding His-tagged wild-type ATGL or the four mutant versions of the enzyme described above. The expression of the proteins and their molecular weights were determined by Western blotting analysis (Fig. 1b). For the determination of TG hydrolase activity, cytoplasmic lysates of Cos-7 cells were incubated with an artificial triolein substrate emulsified with phospholipids (Fig. 1c). Compared with wild-type human ATGL (hATGL), the allele with a single missense mutation at amino acid position 195 (P195L) exhibited reduced TG hydrolase activity in the absence (−45%, p = 0.06) and in the presence (−87%) of human CGI-58 (hCGI-58). In contrast, and quite unexpectedly, all of the deletion mutants of ATGL exhibited increased activity. Based on the expression levels of the His-tagged proteins, FS270, FS282, and Q289X were 2-, 4.5-, and 8.6-fold more active than hATGL, respectively (Fig. 1c). A comparable increase in TG hydrolase activity was also observed in the presence of hCGI-58 (1.7-, 5.4-, and 11.6-fold for FS270, FS282, and Q289X, respectively, compared with CGI-58-stimulated hATGL). Thus, ATGL mutants lacking the C-terminal region exhibit increased in vitro TG hydrolase activity compared with the full-length enzyme.

TG Hydrolase Activity of ATGL Mutants Using Lipid Droplets as Substrate-To investigate whether the ATGL mutants are also active against a physiologically more relevant substrate, we used purified LDs as substrate in TG hydrolase assays (Fig. 1d). As observed with the artificial substrate, P195L exhibited decreased activity in the absence and in the presence of CGI-58, while the other mutants again exhibited increased activity. Based on the expression levels of the respective His-tagged proteins, FS270, FS282, and Q289X were 2.0-, 7.6-, and 24.6-fold more active than hATGL. In the presence of hCGI-58, TG hydrolase activity was increased 1.3-, 5.0-, and 22.3-fold for FS270, FS282, and Q289X, respectively, compared with CGI-58-stimulated hATGL.

Cellular Localization of ATGL Mutants-Next we determined whether the ATGL mutations affect the cellular localization of ATGL. Cos-7 cells were transfected with YFP-tagged wild-type and mutant ATGL. The YFP-tagged proteins exhibited TG hydrolase activity similar to that of the His-tagged constructs, suggesting that the YFP tag did not interfere with the enzymatic function (data not shown). As shown in Fig. 2a, hATGL and P195L mainly localized to LDs and were only barely detectable in the cytosol. In contrast, the ATGL mutants FS270, FS282, and Q289X were located in the cytosol and barely detectable around LDs. To compare the LD association of hATGL and the mutants, we quantified the fluorescent signals in the LD-free cytoplasmic fraction (F-cytoplasm) and around lipid droplets (F-LD). Compared with hATGL and P195L, the mutants FS270, FS282, and Q289X exhibited a substantially decreased ratio F-LD/F-cytoplasm, again indicating defective LD binding (Fig. 2b). The quantification of the fluorescent signals of whole cells (Fig. 2c) as well as Western blotting analysis of the ATGL constructs (Fig. 2d) revealed that wild-type and mutant enzymes were expressed at comparable levels, suggesting that the localization of the ATGL variants is not affected by different expression levels.

FIGURE 1. P195L, FS270, FS282, and Q289X. a, domain organization of wild-type and mutant ATGL variants. b, Western blot analysis of His-tagged proteins expressed in Cos-7 cells using an anti-His antibody. c, TG hydrolase activity in cell lysates expressing ATGL or mutant ATGL using an artificial triolein substrate. d, TG hydrolase activity of hATGL and mutant ATGL using purified LDs as substrate. The measurements were performed in the absence (basal) and in the presence of human CGI-58 (+CGI-58). TG hydrolase activity was normalized to the expression levels of the His-tagged proteins and is shown in relation to the basal activity detected in hATGL-expressing cells. The specific activity of hATGL was 19.0 and 4.1 nmol of FFA/h·mg cell protein using emulsified triolein or lipid droplets as substrate, respectively. The activity measured in cells expressing β-galactosidase was set as blank. The measurements were performed in triplicate and are representative of three independent experiments. The data are presented as the means ± S.D. *, p < 0.05; **, p < 0.01; ***, p < 0.001.

Similar results as shown for the YFP-tagged proteins were obtained in cell fractionation experiments using His-tagged constructs. As shown in Fig. 2e, the signals for truncated ATGL mutants detected in isolated LDs were substantially decreased compared with hATGL, whereas P195L exhibited increased LD binding. Together, our data suggest that the C-terminal region is essential for the proper localization of ATGL.

Comparison of the TG Hydrolase Activity of Human and Mouse ATGL-Human ATGL was reported to possess low in vitro TG hydrolase activity compared with the mouse ortholog (mATGL) or hormone-sensitive lipase (12,23). Sequence comparison of hATGL and mATGL reveals 84% identity and 87% homology. The N-terminal 266 aa of the human and murine orthologs exhibit 96% homology. Stretches with low similarity are found in the C-terminal part of the enzymes (indicated in red, Fig. 3a), which might be causal for the reported low TG hydrolase activity of hATGL. To compare the activity of the human and murine orthologs in the presence and in the absence of their C-terminal regions, full-length enzymes and truncated ATGL variants were expressed in Cos-7 cells. Based on the expression levels of the His-tagged constructs, we found that mATGL is 8.0- and 9.2-fold more active than hATGL in the absence and presence of purified CGI-58, respectively (Fig. 3b). In comparison, Q289X exhibited increased TG hydrolase activity compared with mATGL (2.1- and 1.3-fold in the absence and in the presence of CGI-58, respectively). To investigate whether the C-terminal region also affects the activity of mATGL, we generated a construct encoding the N-terminal 289 aa of mATGL (m289X). In contrast to the human enzyme, truncation of mATGL did not affect basal activity (Fig. 3c). In the presence of CGI-58, m289X was 2-fold more active than the full-length enzyme. These data suggest that the activity of mATGL is influenced to a much lower extent by its C-terminal region compared with the human ortholog.

TG Hydrolase Activity of Chimeric Enzymes-To investigate whether the C-terminal region of hATGL is capable of suppressing the activity of mATGL, we produced a chimeric enzyme (mN/hC-ATGL) consisting of the N-terminal 266 aa of mATGL and the C-terminal part of hATGL (aa 267-504). Compared with mATGL, mN/hC-ATGL showed markedly decreased TG hydrolase activity in the absence and in the presence of CGI-58 (−83% and −90%, respectively; Fig. 3d).
In contrast, a chimeric enzyme consisting of the N-terminal region of hATGL and the C-terminal part of mATGL (hN/mC-ATGL) exhibited increased activity compared with hATGL (4-fold (p = 0.07) and 7.2-fold in the absence and in the presence of CGI-58, respectively). Thus, the comparatively low activity of hATGL can largely be explained by the activity-suppressing character of the human C-terminal region.

Interaction of ATGL Variants with CGI-58-To compare the interaction of hATGL, mATGL, and the truncated and chimeric proteins with CGI-58, we determined the binding of the His-tagged enzymes to GST-CGI, which was immobilized on ELISA plates as described (12). Compared with β-galactosidase (LacZ), which was used as a negative control, hATGL and mATGL exhibited a significant increase in CGI-58 binding (Fig. 4a). hATGL interacted only barely with GST-CGI compared with mATGL (−68% after subtraction of the LacZ control). hN/mC-ATGL showed increased binding compared with hATGL (2.5-fold). In contrast, mN/hC-ATGL exhibited reduced binding compared with mATGL (−59%; Fig. 4b). The truncated enzymes Q289X and m289X exhibited 2.5- and 1.6-fold increased GST-CGI binding in comparison with their respective wild-type enzymes (Fig. 4b). Together, these data suggest that ATGL interacts with CGI-58 through its N-terminal region and that the C-terminal region of the enzyme interferes with CGI-58 binding. As observed in the activity assays, the suppressive effect of the C-terminal region is more pronounced in the human ortholog compared with mATGL.

FIGURE 3. Alignment of protein sequences of human and mouse ATGL orthologs and comparison of TG hydrolase activity of wild-type ATGL, truncated ATGL, and chimeric proteins. a, sequences of hATGL and mATGL were aligned using EMBOSS pairwise alignment algorithms (25). Stretches with low similarity are indicated in red. b, activity of hATGL, mATGL, and Q289X. c, activity of mATGL and m289X (encoding the N-terminal 289 aa of mATGL). d, activity of chimeric ATGL. Chimeric enzymes were produced by exchanging the C-terminal regions of mATGL and hATGL at position 266 of the protein sequence. mN/hC-ATGL encodes the N-terminal 266 aa of mATGL and the C-terminal part of hATGL (aa 267-504). hN/mC-ATGL encodes the N-terminal 266 aa of hATGL and the C-terminal part of mATGL (aa 267-486). TG hydrolase activity was measured using an artificial triolein substrate. Western blotting analyses of His-tagged proteins were performed using an anti-His antibody and are shown as insets. TG hydrolase activity was normalized to the expression levels of the His-tagged proteins and is shown in relation to the basal activity detected for hATGL (b and d) or mATGL (c). The activity detected in cells expressing β-galactosidase was set as blank. Measurements were performed in the absence (basal) and in the presence of purified mouse GST-tagged CGI-58 (+CGI-58). The data are representative of at least three independent experiments performed in triplicate and are presented as the means ± S.D. ***, p < 0.001.

FIGURE 4. ELISA plates were coated with GST-CGI and incubated with Cos-7 cell extracts containing His-tagged β-galactosidase (LacZ), wild-type ATGL, or mutant ATGL variants at equimolar concentrations. Binding of the proteins was detected using an anti-His primary and a horseradish peroxidase-conjugated secondary antibody. The absorbance of the peroxidase reaction was detected photometrically using tetramethylbenzidine. a, comparison of the interaction of hATGL and mATGL with GST-CGI. The data represent four independent experiments performed in triplicate.
FIGURE 4. ELISA plates were coated with GST-CGI and incubated with Cos-7 cell extracts containing His-tagged β-galactosidase (LacZ), wild-type ATGL, or mutant ATGL variants at equimolar concentrations. Binding of proteins was detected using an anti-His primary and a horseradish peroxidase-conjugated secondary antibody. The absorbance of the peroxidase reaction was detected photometrically using tetramethyl-benzidine. a, comparison of the interaction of hATGL and mATGL with GST-CGI. The data represent four independent experiments performed in triplicate. b, interaction of wild-type, mutant, and chimeric ATGL with GST-CGI. The measurements were performed in triplicate and are representative of three independent experiments. The data are presented as the means ± S.D. *, p < 0.05; **, p < 0.01; ***, p < 0.001.

Activity of P195L and Interaction with CGI-58-P195L exhibits reduced TG hydrolase activity compared with the wild-type enzyme (Fig. 1, c and d). To investigate whether this mutation affects the activity of the enzyme or its interaction with CGI-58, we generated a construct encoding the N-terminal 289 aa of P195L (P195L/289X). As shown in Fig. 5a, truncation of P195L did not increase enzyme activity in the absence or in the presence of CGI-58. Interaction studies with GST-CGI revealed that P195L and P195L/289X are capable of binding CGI-58 (Fig. 5b), suggesting that the P195L amino acid substitution affects the enzymatic activity of ATGL rather than its interaction with CGI-58.

DISCUSSION

Excess FFA are stored in the form of TG in cytosolic lipid droplets. Although many cell types are capable of storing TG, most FFA are deposited in adipose tissue. In times of starvation or in periods of increased energy demand, adipocytes release FFA into the circulation to provide the body with energy. The concentration of circulating FFA is determined by a balance between TG synthesis and hydrolysis in adipose tissue. An increased net release may result in elevated FFA levels that represent an important risk factor for the development of type 2 diabetes, by virtue of their ability to promote insulin resistance in skeletal muscle and liver (24).

ATGL performs the first step in the hydrolysis of TG, generating FFA and diacylglycerol. In humans and mice, defective ATGL activity is associated with systemic TG accumulation, indicating a function of the enzyme in multiple tissues (2, 9). In this study, we analyzed the functional defects caused by mutations in the ATGL gene that are associated with NLSD. Our study demonstrates that NLSD may be caused by mutations leading to the expression of inactive ATGL or of active lipases with reduced lipid droplet binding.

The N-terminal region of ATGL is predicted to adopt an α/β/α sandwich structure containing a patatin domain and a GXSXG consensus motif with the active serine. The P195L mutation led to the substitution of a single amino acid within the lipase-typical α/β-structure, resulting in substantially decreased lipase activity. The YFP-tagged P195L mutant was predominantly localized on lipid droplets, demonstrating that this mutation did not decrease the lipid droplet binding of ATGL. The P195L mutation is located outside the amino acid stretch of ATGL that shows similarity to patatin (aa 10-178 in ATGL), and it can be expected that the architecture of the predicted catalytic site comprised of Ser-47 and Asp-166 of ATGL remains intact. However, our data show that this aa substitution drastically affects the catalytic function of the enzyme, whereas the interaction with CGI-58 seems unaffected.

In contrast to P195L, all of the mutations that left the N-terminal part of ATGL intact were enzymatically active and stimulated by CGI-58. Moreover, truncated mutants missing most of the C-terminal region were up to 20-fold more active than full-length ATGL, suggesting that the C-terminal region is involved in the regulation of enzyme activity. The activity of mutants was tested using an artificial substrate and purified LDs containing numerous proteins that might positively or negatively affect lipolysis.
ATGL mutants were active against both substrates, demonstrating that they are also capable of hydrolyzing TG in the presence of the LD-associated proteins. Together, these observations would predict functional substrate binding and enhanced lipolysis rather than the defective lipolysis and TG accumulation observed in tissues and cultured cells of NLSD patients (9). However, in accordance with the NLSD phenotype, studies with YFP-tagged proteins revealed that the cellular appearance of these mutants is predominantly cytosolic because of defective LD binding. Thus, the in vitro activity of ATGL does not predict the capacity of the enzyme to hydrolyze TG in vivo. Presumably, additional, as yet unidentified factors control the targeting of the enzyme to the lipid droplet and its activity. The C-terminal region of ATGL apparently possesses two functions: (i) a negative regulatory function affecting the activity of ATGL and (ii) a domain or binding site necessary for efficient substrate binding in vivo.

FIGURE 5. Activity of P195L and interaction with CGI-58. a, TG hydrolase activity of hATGL, P195L, and P195L/289X in the presence and in the absence of GST-CGI using an artificial triolein substrate. TG hydrolase activity was normalized on the expression levels of His-tagged proteins and is shown in relation to basal activity detected for hATGL. Western blotting analysis of His-tagged proteins was performed using an anti-His antibody and is shown as inset. b, binding of His-tagged β-galactosidase (LacZ), P195L, and P195L/289X (expressing the N-terminal 289 aa of P195L) to GST-CGI-coated ELISA plates. The experiments were performed with Cos-7 cell lysates containing equimolar concentrations of the His-tagged constructs as described in Fig. 4. The measurements were performed in triplicate and are representative for two independent experiments. The data are presented as the means ± S.D. ***, p < 0.001.

ATGL and hormone-sensitive lipase (HSL) are the major enzymes in adipose triglyceride catabolism. Together, these enzymes are responsible for more than 95% of the TG hydrolase activity present in adipocytes (17). Both enzymes are activated by signals that raise cAMP levels and activate protein kinase A (PKA). PKA phosphorylates HSL and the LD-associated protein perilipin A. This process leads to the translocation of HSL from the cytosol to the LD, where the enzyme gains access to the TG substrate (26). In contrast to HSL, ATGL is not a target for PKA-mediated phosphorylation (1). In adipocytes, ATGL is present on lipid droplets and in the cytosol. This distribution pattern does not markedly change in lipolytically stimulated cells, which excludes an activation mechanism based on the translocation of the enzyme (1, 27). However, recent observations suggest that the perilipin-adipophilin-Tip47 family proteins adipocyte differentiation-related protein and perilipin A are involved in the regulation of ATGL activity. Listenberger et al. (28) showed that overexpression of adipocyte differentiation-related protein reduces ATGL binding to lipid droplets in various cell lines. In adipocytes, current data suggest that PKA regulates ATGL-mediated lipolysis by an indirect process that involves CGI-58 and perilipin A. In the basal state, CGI-58 is bound to LDs and interacts with perilipin. In the activated state, the phosphorylation of perilipin leads to the release of CGI-58 (27, 29, 30).
It was hypothesized that ATGL activity is controlled by the hormone-stimulated release of CGI-58, which is then available for ATGL activation (17, 27). This mechanism potentially represents a regulatory event in adipocytes and needs further investigation. However, several aspects of ATGL regulation remain incompletely understood. ATGL substantially contributes to TG catabolism in many tissues, whereas the expression of perilipin A is restricted to adrenals and adipose tissue. In addition, ATGL is phosphorylated at two positions in the C-terminal region (Ser-404 and Ser-428) (16). To date, it is not known whether these phosphorylation events affect the activity of the enzyme or its localization. Our data clearly indicate that the molecular mechanisms regulating ATGL activity could involve additional regulatory steps. Apparently, most of the activity of the enzyme is masked by its C-terminal region, and full activity can be detected only in proteins lacking the C-terminal part. The increased binding of truncated ATGL to CGI-58 suggests that the C-terminal region controls the access of CGI-58 to ATGL. We propose that conformational changes in the C-terminal region are necessary to unmask the activity of the human enzyme, which might be induced by phosphorylation events and/or chaperone activity. It is reasonable to assume that such a mechanism can control ATGL activity and the activation of the enzyme by CGI-58 independently of the presence of perilipin A and thus could represent an activation event in tissues where perilipin A is not expressed.

In comparison with the full-length human enzyme, mouse ATGL is severalfold more active in hydrolyzing TG. Truncation of both the human and the mouse ATGL at position 289 increased enzyme activity, indicating that the activity of both orthologs is suppressed by their C-terminal region. However, the extent of enzyme activation was much higher in the human ortholog, indicating species-dependent differences in enzyme regulation. Studies with chimeric enzymes revealed that an exchange of the C-terminal regions between the mouse and human orthologs suppresses the activity of mATGL and increases the activity of hATGL. Thus, specifically in humans, ATGL activity is efficiently suppressed by the C-terminal region of the enzyme.

Together, studies with mutant ATGL revealed that the C-terminal region is essential for proper localization of the enzyme and possesses a negative regulatory function. The higher in vitro activity of truncated human ATGL and its increased interaction with CGI-58 indicate that this region controls ATGL activity by interfering with CGI-58 binding and enzyme activation. We propose that ATGL activity is regulated by C-terminal events that increase the interaction of CGI-58 and ATGL. These processes might include phosphorylation at the C-terminal region and/or interaction with unidentified regulatory proteins.
2018-04-03T02:46:30.442Z
2008-06-20T00:00:00.000
{ "year": 2008, "sha1": "d48723aeb87cbee339f3c8fdc74dc6e8baa06924", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/283/25/17211.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "6698a1d544ef1528adc168d90555584e4cb4300a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250679120
pes2o/s2orc
v3-fos-license
Effect of Accident Due to Fall From Height at Construction Sites in Malaysia

Nationally, falls from height (FFH) are a significant threat in the construction field and are one of the leading causes of fatal accidents among construction workers. Since construction work is carried out in hazardous environments, accidents occur at various severity rates, leading to minor, severe, and fatal injuries. Most accidents arise from a variety of significant causes and from unsafe actions or unsafe conditions. Recognizing the effects of falls from height at construction sites is the focus of this research. This paper therefore sets out the major effects of accidents due to falls from height as reported by past researchers. Accident cases investigated by the Malaysian Department of Occupational Safety and Health (DOSH) were also reviewed. The findings indicate that time loss in project execution due to accident investigation was the major effect of falls from height at construction sites, together with the cost implications of hiring new workers, training new employees, and compensation for injury or settlement of death claims. The findings of this paper should help the construction industry improve the safety performance and regulation of all construction projects.

Introduction

Accidents are unpredictable events that involuntarily and unexpectedly cause damage or injury. In the construction sector, accidents are inevitable and carry a higher risk than in other professions. Rising death rates have been recorded for the construction industry around the world, illustrating the industrial crisis caused by accidents. Furthermore, the construction sector has expanded over the last decades, resulting in improved company profits, economic accessibility, and enhanced demand for commodities. While it is significant, it has long been recognised as one of the riskiest sectors in many regions of the world. Tang et al. [1] mention that the construction industry has unsatisfactory working conditions, a complicated environment, a high rate of workforce turnover, weak safety management, feeble educational performance, and poorly trained employees compared to other industries.

Occupational accidents in the construction sector are prevalent, leading to physical and mental disabilities and a high fatality rate. Forteza et al. [2] highlighted that fatal accidents cause heavy casualties as well as enormous personal, social, and financial costs. According to information from the Social Security Organization [3], the number of accidents and casualties in the construction industry increased in 2018 compared to 2017. From January to November 2018, there were 143 fatalities and 8,191 injuries in the construction sector. Moreover, multi-storey and high-rise structures continue to dominate construction projects, and there are a number of risks associated with work at height, heavy machinery, and vertical operations [4]. In the construction industry, falls from height are among the most common causes of serious work-related accidents and deaths [5]. Workers operating four feet or higher off the ground are at a higher risk of falling, and anything that can cause a loss of balance is a hazard. Most falls occur from working platforms, frameworks, ladders, or scaffolding.
Zhang et al. [6] concluded that the significant threats to the safety of construction workers and the resulting socio-economic losses have made accident prevention a priority for enhancing construction management practices. Falls from height (FFH) remain significantly more frequent in construction accidents than other types of accidents [7]. The causes of accidents in the construction industry in Malaysia have been reviewed, and the evidence shows that administrative weakness in maintaining a good safety management system and workers' incorrect work practices are the two main causes of accidents [8]. In fact, safety commitment is reflected by good management in the workplace, in which the workers' sense of safety responsibility is an important factor influencing injuries at the workplace [9]. Therefore, this study reviews the effects of accidents due to falls from height at construction sites.

Issues of Fall from Height (FFH)

The construction industry has a unique, complex, and temporary nature, making it one of the most dangerous industries [10]. Statistics from the Department of Occupational Safety and Health have demonstrated that the fatality rate in the construction industry was 5 times greater than in other industries. The Master Builders Association Malaysia (MBAM) identified a concerning rise in the fatality rate in the construction industry per 100,000 workers: there were 7.26 deaths per 100,000 workers in 2014, rising to 10.74 in 2015, 12.78 in 2016, and 14.94 per 100,000 in 2017. Furthermore, 187 of the 650 fatalities across all industries occurred in the construction industry, meaning that, within one year and excluding Sundays as days off, there were about 1.2 fatalities every two days in the construction industry [11]. Statistics published by the Department of Occupational Safety and Health (DOSH) on the number of fatalities by sector in Malaysia highlight that the construction sector recorded the highest number of fatalities, 81 people, in 2018 [12].
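The rates quoted above are straightforward arithmetic. The sketch below reproduces the "1.2 fatalities every two days" figure, assuming a 313-day working year (365 days minus 52 Sundays), which is our reading of "excluding Sundays"; the implied-workforce line is purely illustrative.

```python
# Worked arithmetic behind the fatality statistics quoted above. The
# inputs come from the text; the 313-day year assumes 365 days minus
# 52 Sundays, which is our reading of "excluding Sundays as days off".

def rate_per_100k(deaths, workers):
    return deaths / workers * 100_000

# Illustration: back out the workforce implied by the 2017 rate.
rate_2017 = 14.94                      # deaths per 100,000 workers
deaths = 187                           # construction fatalities cited
implied_workers = deaths / rate_2017 * 100_000
print(f"implied workforce: {implied_workers:,.0f} workers")

# "1.2 fatalities every two days": 187 deaths over ~313 non-Sunday days.
working_days = 365 - 52
per_two_days = deaths / working_days * 2
print(f"fatalities per two working days: {per_two_days:.1f}")
```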
The findings reveal that working at height is one of the main contributors to the high construction accident rate. Inappropriate company policies, weak management dedication, unsafe practices, inappropriate attitudes of construction workers, and inadequate safety and worker training can result in fall from height accidents [13]. Meanwhile, Aminbakhsh et al. [14] stated that inadequate safety measures go beyond safety concerns, since construction accident rates can have a significant impact on the financial success of construction companies and raise construction costs by up to 15%. If all parties do not work together to reduce the number of cases, this problem will keep growing. Statistics published by DOSH on accidents caused by falling from a height in the construction industry are shown in Figure 1. Referring to the statistics published by the Department of Occupational Safety and Health for 2015-2019, falls from height produced the highest rate of fatalities in 2015 and 2018 compared to the other years. Moreover, the greatest number of fatalities is found to be caused by falls from height. Hence, to address this critical issue, this paper focuses on the effects of accidents due to falls from height on construction sites.

Effect of Accident Due to Falls from Height at Construction Sites

According to Kadiri et al. [15], workers are the main victims of these accidents, and the loss of project time is the main effect on project execution triggered by accidents. Table 1 shows that the most serious effect of accidents on construction sites is time loss in project execution, according to 48.57% of the construction firms surveyed. Labourers are the main group of workers exposed to construction site accidents and also the primary victims of construction accidents. To reduce the risk of injuries on-site, construction firms should establish safety measures based on this study. In project management, the time frame and duration of the project are critically important, and the primary consequence of construction site accidents is the loss of time in project accomplishment. Furthermore, to ensure a safe construction site, management has to recognise, adopt, and execute all or some of the following steps, such as effective supervision and evaluation by safety officials and on-site leaders [15].

When projects are not completed as per contract during the construction period, numerous negative effects follow from the failure to meet the specified time, estimated costs, and specified quality. According to Salunkhe [16], cost overruns and project delays decrease the productivity of the economic resources available, restrict the potential for growth, and reduce economic profitability. Moreover, cost overruns lead to an increase in the capital-output ratio for the entire economy. According to Asanka and Ranasinghe [17], accidents lead to construction delays and cost overruns, and sometimes damage the organization's image, erode confidence among workers, or result in prohibition from tendering by government authorities. This can contribute to stakeholder dissatisfaction, uncompetitiveness during tenders, financial losses related to property damage, and restrictions from the authorities. Furthermore, accidents can change corporate objectives or even make the business uncompetitive. Human factors are the primary reasons for accidents, but they are not the only cause. Within the human factor, negligence has a significant impact on construction-related accidents and is one of the vital causes reported by several researchers [17].

Arunkumar and Gunasekaran [18] note that, in the workplace, anything which can contribute to a loss of balance or body support, resulting in a falling risk, constitutes a fall hazard. Moreover, construction accidents can have negative effects such as loss of time, damage to the credibility of the company, workers' mental illness, increased medical costs, recruitment costs, training costs, reimbursement costs, loss of productivity, and the time cost of accident investigation. According to Oladipupo [19], accidents have considerable negative effects on project execution; some of these effects are damage to materials and equipment, injuries to labour, delay of works, reduced productivity, resource wastage, and increased construction cost. In addition, the effects of construction accidents impose huge costs on the employer for the reorganization of jobs, substitutes or reimbursements for equipment, workers, and facilities, and legal fees.
Table 2 summarizes the assessed effects of accidents on construction sites: they contribute to production delays, operational delays while the causes of accidents are determined, and productivity losses that affect the delivery of construction projects. The cost consequences of accidents on construction projects include increased insurance premiums, expenses for rescue activities and equipment, medical payments, payments for injury or death claim settlements, legal fees for protection against claims, workers' compensation insurance costs, and enhanced insurance expenses.

Table 3. Combination of effects of accidents due to falls from height at construction sites among previous researchers.

Table 3 draws together a plethora of research conducted by selected previous researchers. Eighteen major effects were identified by the researchers through a detailed and comprehensive literature review. Eight different authors have classified the effects of accidents due to falls from height at construction sites, and the factors can be segregated into three categories. Certain factors are listed eight times, while others are stated only once; nevertheless, being stated only once does not indicate that an aspect does not affect the safety of the worker [20]. As shown in Table 3, the economic element is the most severely affected when an accident due to a fall from height occurs at a construction site. The most frequently cited effects fall into economic categories such as time loss in project execution, the cost of recruiting a new worker, the cost of training given to a new worker, compensation costs, and the cost of repairing or substituting damaged equipment. The results demonstrate the scale of accident costs that would have to be absorbed in the cost of construction projects, potentially making them less profitable than originally planned.

Many of the reasons for the high frequency of economic effects can be traced to the client. The client is the project initiator and wants to deliver the project on time, at low cost, and with good project performance, but importance should also be given to the safety of employees [21]. Nonetheless, a previous study by Irumba [22], indicating that construction employees have experienced injuries through rapid operations, suggests that this might be attributed to the client's mandate to the company to complete the project quickly. This is in agreement with Udo et al. [23], who stated that construction today is defined by speedy project completion. While the client wants to execute the project quickly, the contractor takes the same path, encouraging employees to carry out work as soon as possible to the detriment of safety. Hasty construction contributes to poor workmanship, which can lead to workers' negligence, a major contributor to fall from height accidents at construction sites [24].

Furthermore, humanitarian elements are the second most frequently mentioned effect of accidents due to falls from height in the previous research. The effects listed include suffering to the individual, a bad reputation for the construction firm, mental illness of workers, injuries of workers, possible loss of earning ability, and fatalities. Therefore, accident costs and health hazards cannot be reflected in financial terms alone.
The occurrence of a construction accident affects all elements of society, including employees, families, employers, and resources [25]. It has a devastating social aspect because an injury may place an emotional burden on independence within the household, which has an adverse impact on the family's social relations [26]. Lastly, the legal category is the least frequently mentioned effect of accidents, covering legal liability and the fact that failure to safeguard employees is a criminal offence leading to prosecution. This may be because most companies refuse to report accidents to the government authorities in order to protect their reputation and avoid being investigated. According to sections 15, 16, and 17 of the OSHA 1994 by DOSH [27], it is the duty of the employer (or a self-employed person) to prepare a safety and health policy. Employers who do not comply with the provisions shall be guilty of an offence and, on conviction, be liable to a fine not exceeding fifty thousand ringgit or to imprisonment for a term not exceeding two years, or to both.

Conclusion

Construction projects are commonly produced through a dangerous, complicated, and lengthy process. In every project, cost, time, reliability, and safety are essential features. This paper has set out the effects of accidents due to falls from height as reported by various researchers. All organisations and individuals participating in construction projects should be specifically concerned with the safety of employees involved on-site. Ultimately, recognizing the effects of falls from height will help prevent certain accidents from occurring and improve the overall level of safety on construction sites.
2022-06-28T08:58:40.747Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "d1f408186e62917f3db96a674e126f476c90e9a7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/498/1/012106", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d1f408186e62917f3db96a674e126f476c90e9a7", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
250285858
pes2o/s2orc
v3-fos-license
The impacts of economic growth, foreign direct investments, and gas consumption on the environmental Kuznets curve hypothesis CO2 emission in Iran

Economic development is associated with higher energy consumption, which has a direct impact on climate change. As a result, today's growth policies should also align with environmental sustainability goals. Although socioeconomic variables related to air pollution have been extensively studied, little research has examined their long- and short-term effects. This study investigates the long-run and short-run relationships between carbon dioxide (CO2) emissions, energy consumption (especially gas as a clean fuel), foreign direct investment (FDI), and gross domestic product (GDP) using an autoregressive distributed lag (ARDL) model in Iran over a period of 40 years. The estimation results indicate the validity of the environmental Kuznets curve (EKC) hypothesis for Iran. Moreover, the empirical findings illustrate that the impact of financial growth on CO2 emissions in the long run is U-shaped in Iran. The country's reliance on gas as a fuel led to a reduction of the carbon and ecological footprints in the short run compared to other, more polluting fuels. Further, our empirical results indicate that economic growth and foreign direct investment contribute to reducing pollutant and carbon emissions in Iran over both long and short periods. Based on the empirical findings, important energy policy recommendations are offered.

Introduction

One of the most urgent crises of our time is climate change. Human and natural systems face existential threats due to the increased severity of extreme weather events and changing climate patterns. Alongside these phenomena, the coronavirus epidemic poses an unprecedented parallel threat to both human society and the sustainability of our planet. The global community must respond immediately and significantly to these dual crises, involving coordinated efforts between various state governments and contributions from both public and private sectors (Mahmood, Eqan, et al. 2020). Nowadays, nations all over the world are looking for approaches to mitigate the adverse effects of climate change. Nevertheless, there is still a lack of effective practical action by most under-developed countries to combat climate change. Despite Iran's efforts to formulate policies and plans related to sustainable development and low-carbon technologies, the desired outcomes from adopting such policies have not yet materialized (Razmjoo et al. 2019).

According to the British Petroleum (BP) Statistical Review 2019 (Table 1), the top 10 carbon dioxide emitting countries have seen their emission rates roughly double since the Kyoto Protocol. Emissions in China and India, which are among the top 3 emitters, have both increased massively since 2005. According to Table 1, Iran is the eighth largest CO2 emitter in the world; its emissions have increased by 50% since the Kyoto agreement. According to this report, global environmental degradation is reputed to be a significant barrier to socio-economic development. Iran holds the world's largest oil reserves after Venezuela, Saudi Arabia, and Canada (Davarpanah and Mirshekari 2020), together with massive natural gas resources second only to Russia's; it therefore has a relatively large share in this field compared to other countries.
Although various socioeconomic variables influence CO2 emissions, such as economic growth, energy consumption, trade openness, financial development, urbanization, capital investment, and labor force, a crucial question is which of these variables is the most effective (Lotfalipour et al. 2010; Omri 2013; Hajilary et al. 2018). Historically, investment decisions and environmental protection policies have influenced the quality of the environment, which has also affected the economy. Consequently, long-run economic growth does not conflict with social cohesion or with environmental preservation; rather, they strengthen each other. The environmental Kuznets curve (EKC) hypothesis developed by Grossman and Krueger has been a fundamental part of the study of economic growth and environmental impacts for the past three decades. According to Kuznets' original curve, inequality and economic growth are related in an inverted U-shaped way. According to this hypothesis, pollution increases in low-income countries and decreases as income rises. Whether the EKC hypothesis holds for each country is therefore one of the most critical questions relating to this hypothesis. Whether other important parameters, such as renewable or non-renewable energy usage, are represented on this curve is another essential question.

According to our previous investigations, energy consumption and its costs, population, non-oil gross domestic product (GDP), and foreign direct investment (FDI) significantly affect CO2 emissions, and there is a linear relationship between these factors and CO2 emissions. Using a partial least squares (PLS) method, the relationships among the significant factors were evaluated for the first time. The results of that prior investigation show that lower energy consumption leads to a lower CO2 emission rate. They also indicate that FDI is the important factor that decreases CO2 emissions, while population raises the CO2 emission rate (shares of 13% and 4%, respectively). CO2 emissions are only slightly affected by energy costs or non-oil GDP (Hajilary et al. 2018).
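For readers who want to reproduce the flavor of that PLS analysis, a minimal sketch with scikit-learn follows. The data are synthetic stand-ins; the original study used Iranian time series for energy use, population, non-oil GDP, and FDI against CO2 emissions.

```python
# Minimal sketch of a partial least squares (PLS) regression of the kind
# cited above. Data are synthetic stand-ins, not the Iranian series.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 40                               # ~40 annual observations
X = rng.normal(size=(n, 4))          # energy, population, non-oil GDP, FDI
y = 0.9 * X[:, 0] + 0.04 * X[:, 1] - 0.13 * X[:, 3] + rng.normal(scale=0.1, size=n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("PLS coefficients:", pls.coef_.ravel())  # relative factor weights
print("R^2:", pls.score(X, y))
```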
As a continuation of our previous research, the purpose of this study is to evaluate the environmental Kuznets curve hypothesis and to examine the connections between CO2 emissions and economic growth, foreign direct investment, energy consumption, and income from a variety of angles in Iran over 40 years, using Pesaran's autoregressive distributed lag (ARDL) bounds test. As the consumption of natural gas, a common non-renewable energy source, has increased considerably in Iran, a more detailed assessment is useful; thus, the dynamic relationship between natural gas consumption and CO2 emissions in Iran is examined. The study tests the validity of the hypothesis and discusses both the short and the long run. Hence, the results could be beneficial for policymakers in putting forward specific measures to reduce carbon emissions in Iran. On the other hand, renewable energy sources such as hydro, nuclear power, wind, and solar have been found to produce energy with little or no effect on climate change, and thus they are less harmful to the environment and social well-being (Adedoyin et al. 2020). The Iranian government has pledged to reduce carbon dioxide emissions by 4% by 2030 under the Paris agreement; upon receiving international support and with no further sanctions, the Iranian government believes that a reduction of 12% is possible.

Iran's policy on sustainable development emphasizes using renewable energy sources and increasing natural gas consumption (Hosseini et al. 2019). The following questions are addressed in this paper:

1. Has an inverted U-shaped EKC for gas consumption been identified in Iran?
2. What is the strongest interaction between CO2 emissions and FDI, gas consumption, or GDP in the short and long run?
3. Does natural gas consumption show a positive impact on CO2 emissions?
4. Which policies strengthen energy resilience and mitigate the greenhouse effect?

A review of the relevant literature is provided in Sect. 2 "Literature review", which is divided into two parts, "Important theoretical factors for EKC hypotheses" and "Iran's CO2 emission factors." Section 3 discusses "Statistical and methodological information," Sect. 4 presents "Empirical results and discussion," and finally Sect. 5 "Conclusions and policy recommendations" concludes and highlights policy implications.

Literature review

This section is divided into two subsections. The first part discusses the theoretical factors that are most important for EKC hypotheses and carbon emissions (based on previous studies, Hajilary et al. 2018), and the actual parameters of Iran's CO2 emissions during the years 1976-2016 are examined in the second part.

Theoretical important factors on the EKC hypotheses

Researchers have confirmed the negative impact of economic growth and the positive impact of energy decline on carbon emissions at the provincial, national, and global levels (Li et al. 2021). Therefore, economic growth cannot be sustained without environmental sustainability in tandem. Consequently, the relationship between economic growth and environmental quality has become a significant area of research in the contemporary era (Murshed et al. 2021a). One popular theory that explains the correlation between environmental pollution and economic growth, from both academic and policy-making perspectives, is the EKC hypothesis of Grossman and Krueger (1991), according to which there is an inverse relationship between a nation's economic growth and its environmental quality (Murshed et al. 2021b). According to Fig. 1, environmental degradation increases as income rises up to a certain level, called the inflection point, and then decreases once income per capita passes that level, contributing to environmental betterment.

It has been noted in the literature that the EKC's curvature and shape are a function of various macroeconomic aggregates that have direct and indirect effects on the relationship between economic growth and the quality of the environment. Therefore, scientists have conducted a significant amount of research on structural factors and carbon emissions, particularly in the areas of energy, trade, and society. Nevertheless, work examining the parameters that have a more direct impact on CO2 emissions is relatively scarce for a country like Iran, one of the significant producers of energy and, unfortunately, a major emitter of environmental pollutants. Among the mentioned variables, energy consumption is one of the effective parameters of the EKC hypothesis, and the validity of the EKC hypothesis is also influenced by the level of energy consumption. A relevant explanation for this phenomenon is that, according to the energy-push emission hypothesis, energy consumption rises with economic growth, and as a consequence greenhouse gas emissions are likely to increase (Khan et al. 2019).
Moreover, the environmental impacts associated with energy use depend on the type of energy consumed; for example, the use of fossil fuels is thought to contribute to environmental degradation (Ito 2017). Based on actual data, one of the main drivers of Iran's energy consumption is population growth over recent years. Iran is nonetheless one of the largest producers of gas and oil in the world and in the Middle East. In recent years, Iran has tried to shift its primary energy consumption, including domestic consumption and many large industries, from oil to gas; gas consumption has therefore increased. Due to the importance of this change in the type of energy consumed, the short-run and long-run effects of gas consumption as a common energy source are investigated here. In addition, foreign direct investment is also relevant when explaining environmental variations within a host country. Whether FDI always has positive effects on host countries or instead leads to environmental degradation is a fundamental question for many governments (Peng et al. 2016). The effect of this parameter has been studied in different countries, including Iran. Considering the effect of the various parameters on the shape of the EKC, the authors confirmed the presence of the EKC hypothesis for Iran's case, too.

Examining the relation between economic growth and CO2 emissions

Gross domestic product is the most commonly used measure of the size of an economy, and it can be calculated in three ways: using expenditures, production, or income. The environmental Kuznets curve suggests that economic development initially leads to a deterioration of the environment; after a certain level of economic growth, a society begins to improve its relationship with the environment, and levels of environmental degradation decline. Studies conducted on European countries indicate that CO2 emissions and GDP correlate positively (Dogan and Inglesi-Lotz 2020). In another article, studying the G-20 nations, the positive relationship between GDP and CO2 emissions is clear (Han and Lee 2013). A direct relationship between GDP and carbon dioxide emissions in China is found by Michael and Xibao (Minlah and Zhang 2021). China's CO2 emissions exhibit an inverted U-shaped EKC effect (Yin et al. 2015). A panel of fourteen Asian countries was used to test the EKC hypothesis from 1990 to 2011; according to the results, emissions and per capita income exhibited an inverted U-shaped relationship (Apergis and Ozturk 2015). Some studies have found that economic growth and CO2 emissions follow N-, M-, or W-shaped relationships: over a long period of time, Canada, Japan, and the USA have experienced M-shaped financial development, France, Italy, and the UK inverted N-shaped development, and Germany inverted M-shaped (W-shaped) development. According to other scholars, there is also a linear relationship between economic growth and CO2 emissions (Farhani and Hossein 2012). Some studies, however, have not found a significant relationship between these two variables (Lantz and Feng 2006). Based on these empirical findings, policymakers can adopt comprehensive economic policies for using financial institutions as economic tools to keep environmental quality at sustainable levels.
Examining the link between foreign direct investment and CO2 emissions

FDI and the greenhouse gas proxy CO2 have been studied widely in the environmental literature over the past few decades. Foreign direct investment is an investment conferring controlling ownership of a business in one country by an entity based in another country. Countries can be divided into three groups according to their real gross national income per capita: high-income, upper-middle-income, and lower-middle-income countries. There may be various reasons directly affecting FDI in developing and developed countries. Indeed, foreign direct investment is an indispensable source of finance for developing countries, but policymakers must minimize its risks. Host countries can benefit from FDI through employment creation, technology diffusion, economic growth, and sustainable development (UNCTAD 2015). Alfaro et al. (2004) state that absorption capacities include macro-economic management, infrastructure, human capital, industrial share, growth potential, high absorption capacity, and an adequate legal framework (UNCTAD, 2015). A study conducted on 26 European countries showed that foreign direct investment worsens pollution levels in most EU countries (Mert et al. 2019). According to an analysis of 57 developing countries from 1980 to 2013, FDI does not directly cause CO2 emissions in the short run; in addition, even though the CO2 emission elasticity is statistically significant, it is minimal in the long run (Kim 2019). A considerable amount of economic growth has occurred in the emerging markets of Asia for decades. Foreign direct investment (FDI) has been attracted to these nations, positively impacting their economic growth. The increased importance and flow of foreign direct investment have helped to transfer management skills and technologies, create jobs, and improve the standard of living for millions of people in the region since the early 1970s. FDI has a linear relationship with environmental degradation in Asia, and the EKC hypothesis holds for selected developing countries. With half of Asia's FDI share, China's environmental degradation has been positively affected by FDI (He and Yao 2017).

Examining the link between energy consumption and CO2 emissions

Due to the inextricable link between energy use and economic growth, it is quite likely that energy use affects the environment as well (Destek and Sarkodie 2019; Murshed et al. 2021b). Several studies have indicated a positive relationship between per capita emissions and energy consumption. One study asserts that the adverse environmental impacts of energy use are comparatively higher in the long run than in the short run (Khan et al.). Two studies investigated selected EU countries and found a positive relationship between energy consumption and carbon emissions (Balsalobre-Lorente; Bekun et al. 2019). Due to the use of renewable energy, developed countries have reduced fossil fuel consumption despite an increase in energy consumption. Renewable energy resources are not available to all countries because of their high costs, according to the EIA report. However, developed countries are investing in renewable energies through foreign direct investment, and energy consumption from renewable resources is growing rapidly (Rezagholizadeh et al. 2020). One study from Pakistan concluded that economic growth contributed to a certain amount of carbon emissions (Ahmed and Long 2012).
In China, there is evidence of energy consumption contributing to carbon emissions (Li et al. 2010). Analysis results attribute carbon emissions in eight oil-rich MENA countries, including Iran, to fossil fuel consumption (Magazzino and Cerulli 2019). Murshed, in a recent study focusing on South Asia, argues that consumption of natural gas, petroleum products, liquefied natural gas, and hydroelectricity will reduce CO2 emissions. Since the impacts of energy use on the quality of the environment are determined by the nature of the energy resource consumed, many existing studies have probed the heterogeneous impacts of renewable and non-renewable energy use on the environment (Murshed et al. 2022).

Iran's CO2 emission factors

In most regions, population growth is among the major factors that drive CO2 emissions; both developed and developing countries show a positive correlation between population and CO2 emissions (O'Mahony 2013; He et al. 2017). Figure 2 shows the growth of Iran's population over the last four decades. Iran's population has become more energy intensive since 1995. In countries like Iran, where fossil fuels such as oil and gas are the main energy source, CO2 emissions are on the rise (Soytas et al. 2007; Kais and Sami 2016; Heydari et al. 2019). Figure 3 shows the increase in energy consumption per capita in Iran between 1976 and 2016. According to Fig. 4, natural gas consumption in Iran is increasing the most among the main primary energy sources. In line with state policies, oil consumption is declining, and more natural gas is used for industrial and domestic purposes. Other energy sources such as nuclear energy, hydroelectricity, and coal are less influential on energy consumption in this country (Hajilary et al. 2018). Energy consumption in Iran is positively correlated with per capita CO2 emissions, as a rise of 100% in energy consumption leads to an increase of 87% in CO2 emissions.

It is important to note that although natural gas is a fossil fuel, it is a relatively clean one. Natural gas is therefore an alternative to conventionally consumed crude oils and, compared to other options, is said to be cleaner. The few decreases in carbon emissions in Iran are connected to the use of more gas. Compared to coal and oil, natural gas emits 50% less pollution (Solarin and Shahbaz 2015). Taking advantage of natural gas as a cleaner alternative to other fossil fuels, the Chinese government has adapted its energy consumption structure to address increased energy needs and cope with environmental issues. There is little discussion of CO2 emissions and gas consumption in the literature, although one study (Kanyin Dong) has addressed it.

Iranians, however, rely heavily on fossil fuels for their economic development. A considerable portion of Iran's export revenue in 2016 came from the sale of fossil fuels, specifically oil. GDP does not have a remarkable impact on CO2 emissions, whereas GDP measured in non-oil terms does. The domestic GDP in Iran cannot represent real economic growth because oil exports make up a large share of the total; non-oil GDP (GDP without considering oil exports) shows the real domestic economic growth. As a result, actual developments in industrial products lead to more significant CO2 emissions (Ozcan 2013; Farhani and Shahbaz 2014; Hajilary et al. 2018). It is accepted that FDI generates both positive and negative effects that involve costs and cause benefits.
According to Fig. 6, FDI in Iran has undergone many changes between 2000 and 2016, related to various factors, including the amount of foreign trade and privatization. According to the results of the Iranian studies, foreign direct investment is significantly associated with lower CO2 emissions, which suggests that higher foreign investment leads to a reduction in environmental pollution: with a 100% increase in foreign investment, emissions per capita are reduced by 5% (Hajilary et al. 2018). According to Ghorashi, financial development has a statistically significant negative effect on CO2 emissions in the long run; accordingly, domestic credit to the private sector, as a percentage of value added in each economic sector, could reduce CO2 emissions in Iran. However, the estimated coefficients in the short run indicate that economic development does not have a statistically significant negative effect on CO2 emissions in Iran. According to this study, policymakers should recognize the potential of financial development to minimize CO2 emissions (Naghmeh Ghorashi, Abbas Alavi Rad). An investigation by Rafat over the period 1991 to 2014, identifying the relationship between FDI and economic growth in Iran, shows that economic growth and foreign direct investment have a positive impact on each other; hence, there is a reciprocal relationship between them.

So far, no consensus has been reached on CO2 emissions and economic growth. Some scholars have found the traditional inverted U-shape and others an N-shape; there is also research suggesting a linear relationship between these two variables (Dong et al. 2017). To demonstrate the validity of the hypothesis, the typical approach in early EKC studies has been to estimate the statistical relationship between environmental pollutants (such as emissions) and GDP. Consequently, several studies have examined the validity of the EKC hypothesis, whether for one country or for groups of countries, using different econometric methodologies.

Statistical and methodological information

In this section, the theoretical framework and the equations used are discussed. Among the different methods, we decided to use the ARDL approach, since this cointegration technique can determine the long-run relationship between series with varying orders of integration; its parameterization yields both the short-run and the long-run relationships of the considered variables. The following variables have been chosen for the current study: gross domestic product (GDP) on a non-oil basis (constant prices), CO2 emissions (million tons), fuel consumption (million tons), foreign direct investment (FDI) (billions of USD), gas consumption (million cubic meters), and average income (rials). All variables have been transformed into logarithmic form from the World Bank development database (2018). The related data are indicated in Figs. 1, 2, 3 and 4.

Theoretical framework

Following the preliminary study of Grossman and Krueger (1991), there has recently been extensive literature analyzing the EKC hypothesis and its implications. The EKC relates the quantity of pollutants emitted per capita to GDP. To put it another way, environmental degradation increases up to a point as income increases, but beyond some point in income per capita, it slows down.
In proving the value of the hypothesis, the early studies of the EKC used the typical approach of estimating the statistical relationship between environmental pollutants (CO2 emissions) and the per capita economy. Thus, the validity of the EKC hypothesis has been examined many times using different econometric methodologies, whether for one country or for several countries. Empirical examination demonstrates that alternative environmental quality measures can be applied instead of the traditional EKC specification, which studies the association between GDP and environmental degradation using either the U-shaped or the N-shaped pattern (Udemba et al. 2020). As global warming and other environmental problems become more severe, the effects of economic growth on the environment have become more prominent. Numerous empirical studies examine the nexus between carbon dioxide emissions and economic growth within the EKC framework; however, the assumptions of these models have not been rigorously tested with large data sets and, therefore, may not be appropriate to describe the relationship. According to the standard EKC form, economic growth and carbon dioxide emissions are linked by an inverted U shape. The economic model used to test the EKC in Iran can be illustrated as follows:

ln CO2_t = β0 + β1 ln GDP_t + β2 (ln GDP_t)² + β3 ln FDI_t + β4 ln GAS_t + β5 ln INCOME_t + ε_t    (2)

where CO2_t is the carbon dioxide emissions in year t, GDP_t is the per capita GDP of Iran's economy, FDI_t represents foreign direct investment, GAS_t is the gas consumption in Iran, and INCOME_t is the income in Iran in year t.

Stationary tests

Several countries have experienced interruptions in social, political, and financial activities, which makes most yearly country data non-stationary. Time-series analysis requires testing the stationarity of the variables used in this study to eliminate errors and spurious results. In our model, the variables were analyzed for stationarity and unit roots in a mixed order of integration. Two unit root tests were employed: Augmented Dickey-Fuller (Dickey and Fuller 1979) and Phillips-Perron (Phillips and Perron 1988). Both tests can be run in three variants: with intercept, with trend, or with neither.

Approach to ARDL-bound testing

This model uses the output of the unit root tests from the different techniques, with an emphasis on the autoregressive distributed lag (ARDL) approach, for a well-fitted model specification. The ARDL method, as described by Pesaran et al. (2001), is considered suitable for this type of analysis with a mixture of orders of integration. The econometric arrangement of the ARDL model is:

ln CO2_t = c + Σ(i=1..p) γ_i ln CO2_(t-i) + Σ(i=0..q) δ_i X_(t-i) + ε_t    (3)

where X_t collects ln GDP_t, (ln GDP_t)², ln FDI_t, ln GAS_t, and ln INCOME_t. It is necessary to transform the linear model into natural log form to examine the validity of the environmental Kuznets curve hypothesis; compared to the linear model, the log-linear model produces more consistent and efficient results. Two sets of equations were constructed to account for the associations among CO2, GDP, FDI, income, and gas consumption; thus:

Δln CO2_t = α0 + Σ α_j Δln GDP_(t-j) + Σ α_k Δln FDI_(t-k) + Σ α_l Δln GAS_(t-l) + Σ α_m Δln INCOME_(t-m) + Σ α_n Δln CO2_(t-n) + λ1 ln CO2_(t-1) + λ2 ln GDP_(t-1) + λ3 (ln GDP_(t-1))² + λ4 ln FDI_(t-1) + λ5 ln GAS_(t-1) + λ6 ln INCOME_(t-1) + ε_t    (4)

Δln CO2_t = α1 + Σ α_j Δln GDP_(t-j) + Σ α_k Δln FDI_(t-k) + Σ α_l Δln GAS_(t-l) + Σ α_m Δln INCOME_(t-m) + Σ α_n Δln CO2_(t-n) + φ ECM_(t-1) + ε_t    (5)

Equation (4) carries the long-run coefficients (the λ terms on the lagged levels), and Eq. (5) describes α1, α_j, α_k, α_l, α_m, and α_n as short-run coefficients; Δ denotes the first differences of the variables over time, and ECM_(t-1) measures the promptness of correction. To complete the ARDL exploration, a bounds testing strategy was used to test the long-run associations among the chosen variables.
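The testing sequence just described (unit roots in levels and differences, then a bounds test on an unrestricted error-correction model) can be sketched in Python as follows. This assumes statsmodels 0.13 or later (for the ARDL/UECM classes and their bounds test) and the arch package for Phillips-Perron; the file name and column names are placeholders, not the study's actual data.

```python
# Sketch of the unit-root and ARDL bounds-testing sequence described
# above. Assumes statsmodels >= 0.13 (UECM with bounds_test) and the
# `arch` package for Phillips-Perron; "iran.csv" and its columns are
# placeholders rather than the paper's data file.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import PhillipsPerron
from statsmodels.tsa.ardl import UECM

df = np.log(pd.read_csv("iran.csv", index_col="year"))  # co2, gdp, fdi, gas, income
df["gdp_sq"] = df["gdp"] ** 2

# Step 1: unit-root tests in levels (repeat on .diff() for I(1) checks).
for col in df.columns:
    adf_p = adfuller(df[col].dropna(), autolag="AIC")[1]
    pp_p = PhillipsPerron(df[col].dropna()).pvalue
    print(f"{col}: ADF p={adf_p:.3f}, PP p={pp_p:.3f}")

# Step 2: unrestricted error-correction model and Pesaran bounds test.
uecm = UECM(df["co2"], lags=1,
            exog=df[["gdp", "gdp_sq", "fdi", "gas", "income"]], order=1)
res = uecm.fit()
bounds = res.bounds_test(case=3)  # unrestricted constant, no trend
print(bounds)                     # F-stat vs the I(0)/I(1) critical bounds
print(res.summary())              # long-run and short-run coefficients
```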
We compared the F and T statistics with the lower I(0) and upper I(1) bounds to determine long-run cointegration: F and T statistics exceeding the bounds indicate a long-run association, and vice versa. The graphical representation of the ARDL model is shown in Fig. 7.

Empirical results and discussion

It is necessary to examine the stationarity properties of the variables in the first step to avoid incorrect analysis. Table 2 presents the results of the unit root and stationarity tests for each series. As shown, all series are I(1) or I(0): GDP is stationary in levels, I(0), while the other variables become stationary in first differences, I(1). Given the ARDL test presented in the tables, it appears that the cointegration equation is stable in both the short and the long run when CO2 is taken as the dependent variable.

A positive correlation between per capita CO2 emissions and per capita income (the EKC hypothesis) is observed in Table 3, where the series show that quadratic per capita GDP negatively affects per capita CO2 emissions. Indeed, the long-run GDP coefficient is β1 = 3.991, and the GDP² coefficient is β2 = −1.182, based on Eq. (2). The estimation results for these data show that per capita CO2 emissions are positively correlated with GDP and negatively associated with quadratic GDP per capita. According to these findings, CO2 emissions and economic growth in Iran were in an inverted U-shaped relationship during this period, consistent with the standard EKC form, in which economic growth and carbon dioxide emissions are linked by an inverted U shape. In some studies, however, the association between GDP and environmental degradation follows a U-shaped or an N-, M-, or W-shaped pattern (Udemba et al. 2020). These findings enable policymakers to design comprehensive economic policies for utilizing financial institutions as economic tools to help maintain environmental quality (Murshed et al. 2021a). This result indicates that economic development in Iran initially leads to a deterioration of the environment for a long time; after a certain level of economic growth, a society begins to improve its relationship with the environment, and levels of environmental degradation decline.
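The quoted coefficients pin down the turning point of the inverted U implied by Eq. (2). A quick check, treating β1 and β2 as the long-run estimates and leaving the units to whatever scale the underlying GDP series uses:

```python
# Turning point of the inverted-U EKC implied by Eq. (2): setting
# d(ln CO2)/d(ln GDP) = b1 + 2*b2*ln GDP = 0 gives ln GDP* = -b1/(2*b2).
# b1 and b2 are the long-run estimates quoted in the text; the units of
# GDP* follow the scale of the underlying (non-oil, constant) series.
import math

b1 = 3.991    # long-run coefficient on ln GDP
b2 = -1.182   # long-run coefficient on (ln GDP)^2

ln_gdp_star = -b1 / (2 * b2)
print(f"ln GDP* = {ln_gdp_star:.3f}")            # ~1.688
print(f"GDP*    = {math.exp(ln_gdp_star):.2f}")  # ~5.41 in the series' units

# b2 < 0 is the inverted-U condition: emissions rise with income below
# GDP* and fall above it.
assert b2 < 0
```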
Based on Shahbaz et al. (2022), because of its potential to promote the collaborative reduction of pollutants and carbon emissions, China's provincial and national development strategies should prioritize financial inclusion. Since CO2 and GDP exhibit a negative relationship, it is clear that economic growth via manufacturing activities or outsourced industries enhances the efficient management of carbon emissions in Iran (Shahbaz et al. 2022). As a result of this finding, a 1% increase in gas consumption will result in a 0.625% increase in CO2 emissions in the long run. Alola et al. (2019) provide affirmative evidence of this conclusion in their study of 16 large EU economies. Bekun et al. also reported different findings. In the short run, a 1% increase in gas consumption will result in a 0.125% reduction in CO2 emissions. Therefore, CO2 emissions could be reduced in the short run by substituting natural gas for other fossil fuels. Economic growth and CO2 emissions follow an inverted U-shaped relationship. Natural gas consumption reduces CO2 emissions in the short run; since it burns cleaner than other fossil fuels (oil, petroleum, and coal), its use will lead to lower emissions when it becomes a major alternative to other fuel types. This trend has, however, shown a positive long-run effect on CO2 emissions, indicating that energy consumption is the most environmentally harmful factor and that renewable energy sources such as solar or wind should be used instead (Al-Mulali et al. 2015; Omri et al. 2015; Taghvaee and Hajiani 2015a). Furthermore, a 1% upsurge in FDI would lead to a 0.008% rise in CO2 emissions in Iran in the short run. Saboori et al. (2012) found a similar relationship for Malaysia, and this finding supports the work of Alola (2019) on large European economies. Conversely, in the long run, an increase of 1% in FDI induces a reduction of 0.233% in carbon dioxide emissions. Foreign direct investment directly affects several items beyond the industrial sector; modern technology, moreover, consumes less energy, resulting in fewer greenhouse gas emissions. In that case, Iran could invest in energy-efficient systems, in technology that will decrease emissions, and in policies designed to reduce carbon emissions without diminishing economic growth. According to Table 4, the results indicate a significant negative relationship between CO2 and the other variables, including gas, GDP, income, and FDI, but these estimates do not generalize because of the high p values (above 10%). According to this result, gas consumption and GDP significantly and positively affect CO2 emissions in the long run, whereas the FDI and income coefficients are not positively related to CO2 emissions in Iran in the long run, indicating that income has no direct positive relationship with CO2 emissions in the long run. To ensure the accuracy of the analyses, diagnostic tests were conducted to identify any approximation or estimation errors. The estimated ARDL models were found to be stable and reliable according to the cumulative sum of squares tests. Figure 8 shows the cumulative sum of recursive residuals (CUSUM) test for the estimated model. The test clearly showed that the coefficients were stable over the explored period. If the blue line in Fig. 8 is read against the design of the cumulative sum of squares test (CUSUMSQ), we find that the parameters and the variance are stable. The straight lines represent the 5% significance bounds. According to the graph, the movement path of the test statistic always lies between the straight lines, so the model is stable.
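The stability check behind Fig. 8 can be reproduced in outline with recursive least squares; the sketch below is a minimal illustration that reuses the hypothetical dataframe from the earlier sketch rather than the authors' actual data.

import statsmodels.api as sm

# Recursive least squares on the long-run relation; plot_cusum and
# plot_cusum_squares draw the recursive-residual paths with 5% bands,
# mirroring the CUSUM / CUSUMSQ stability checks described above.
X = sm.add_constant(df[["lgdp", "lgdp2", "fdi", "gas", "income"]])
rls = sm.RecursiveLS(df["lco2"], X).fit()
fig_cusum = rls.plot_cusum(alpha=0.05)
fig_cusumsq = rls.plot_cusum_squares(alpha=0.05)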
Conclusions and policy implications

The effect of carbon emissions on climate change makes it imperative to investigate carbon emissions. In this study, we used time-series data for Iran from 1976 to 2016 to explore the interacting forces between carbon emissions, GDP, FDI, income, and gas consumption, as well as to test the validity of the standard EKC curve, while also taking environmental impacts into account. CO2 emissions and the other outlined variables exhibit short- and long-run associations, as shown by the ARDL bound estimate. By utilizing the ARDL approach, long-run and short-run equations for CO2 emissions were specified, taking the other variables as independent variables, and both linear and non-linear effects of the variables on CO2 emissions were investigated. To begin with, according to these findings, CO2 emissions and economic growth in Iran were in an inverted U-shaped relationship during this period. The estimation results show that per capita CO2 emissions are positively correlated with GDP and negatively associated with quadratic GDP per capita. This result indicates that economic development in Iran initially leads to a deterioration in the environment for a long time; after a certain level of economic growth, a society begins to improve its relationship with the environment, and levels of environmental degradation decline. Secondly, along with the dramatic increases in Iran's natural gas consumption and CO2 emissions in recent years, a better understanding of the EKC and of the links between carbon emissions, economic growth, and natural gas consumption will help the country achieve low-carbon economic development and support the natural gas sector. The ARDL results also indicate that natural gas consumption and CO2 emissions in Iran exhibit a significant negative short-run relationship: increasing natural gas consumption by 1% will decrease CO2 emissions by 0.125% in the short run in the ARDL model. This energy source would dramatically reduce carbon emissions if it became a major replacement for other fossil fuels; with this finding, natural gas can serve as a viable alternative to hydrocarbon fuels. In any case, an increase of 1% in gas consumption results in an increase of 0.625% in CO2 emissions in the long run, suggesting that fossil fuels generally have a negative ecological impact. Finally, the results indicate a positive short-term and long-term shift in the EKC curve, which will result in increased CO2 emissions in Iran. A negative correlation exists between FDI and CO2 emissions: if foreign investors focus more on energy sectors such as oil and gas, petrochemicals, telecommunications, and auto manufacturing, the result will be lower CO2 emissions. This result is also consistent with our previous study. According to the main findings of this study, essential policy implications can be developed from the following recommendations: 1) It is clear that the relationship between CO2 emissions, economic growth, natural gas consumption, income, and FDI is more complex than what is deduced from the EKC model. Because this study showed natural gas to reduce CO2 emissions in the short run, natural gas consumption could be considered an efficient alternative to other fossil fuels. It might even prove to reduce CO2 emissions in the long run.
So, to resolve the issue of increased carbon emissions, the development of the natural gas industry should be accelerated in the long run. Technology policy includes both technology push and demand pull. Although technology support policies have helped significantly in the diffusion and innovation of new technologies, it is often difficult to assess their cost-effectiveness. Despite this, program evaluation data can provide information on the relative effectiveness of policies and help design new policies (IPCC, 2014: Summary for Policymakers). 2) Increasing the share of renewable energy such as solar, wind, and other forms of renewable energy and reducing non-renewable energy intensity are still effective ways to reduce carbon emissions. This is achievable if governments further increase R&D investment to improve energy efficiency, for example by funding the development of low-carbon technologies or participating in the development of the private sector through the appropriate use of renewable energy and the unbundling of power generation, transmission, and distribution processes (Shen and Lin 2020; Adedoyin et al. 2020). Furthermore, the government can promote the use of renewable energy in manufacturing through fiscal subsidies, while simultaneously imposing carbon taxes on the use of fossil fuels (Li et al. 2021). A further suggestion is to consider the heterogeneous environmental effects of non-renewable and renewable energy sources: fossil-dependent countries should diversify their energy sources by incorporating renewable energy into their energy mix (Murshed et al. 2021b). 3) Policymakers should minimize risk for foreign direct investment (FDI) to maximize its benefits. Environmental concerns can be addressed in several ways, including through effective governance, stakeholder participation, improving local capacities (entrepreneurship, technology, skills, communication), and creating a practical regulatory framework. Environmental regulation must therefore be strengthened, since strictly enforcing laws and regulations that protect the environment is likely to prevent inflows of dirty foreign direct investment. On the other hand, the economies of these regions will attract relatively cleaner foreign direct investment, especially for the development of renewable energy through technology (Murshed et al. 2021a). In general, there is an indirect effect of financial risk on global carbon emissions: an increase in financial risk not only reduces global carbon emissions directly but also promotes technical innovation to mitigate carbon emissions (Zhao et al. 2021). Although our study has made some contributions, there are still limitations. First of all, this paper tried to access a wide field of information and considered the range of data available from our previous study; however, this analysis can be extended by expanding and updating the database. Besides, this paper focused on the ARDL method because of its ability to determine short- and long-run relationships between series with different orders of integration. As part of the future scope of research, this method could be revised toward dynamic simulations of autoregressive distributed lag models. This paper mainly focuses on fossil fuel energy, considering the advantage of using gas compared with other types. The research can be further refined in the future for different kinds of energy, such as electricity, which is cleaner than fossil fuels, or renewable energy such as wind or solar.
2022-07-06T13:30:07.692Z
2022-07-06T00:00:00.000
{ "year": 2022, "sha1": "21b1f23832040b77f9e04133a3e69e3d86a4f4f7", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-022-20794-x.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "21b1f23832040b77f9e04133a3e69e3d86a4f4f7", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
13053668
pes2o/s2orc
v3-fos-license
The importance of family relations for cannabis users: the case of Serbian adolescents.

Background: Adolescence is a transitional stage of physical and mental human development occurring between childhood and adult life. Social interactions and environmental factors together are important predictors of adolescent cannabis use. This study aimed to examine the relationship between social determinants and adolescent behavior and cannabis consumption. Methods: A cross-sectional study, part of the European School Survey Project on Alcohol and other Drugs, was conducted among 6,150 adolescents aged 16 years in three regions of Serbia and three types of schools (gymnasium, vocational-professional, and vocational-handicraft) during May-June 2008. A multivariate logistic regression analysis was carried out to obtain adjusted odds ratios with 95% confidence intervals, in which the dependent variable was cannabis consumption: non-user or user. Results: Among the 6.7% of adolescents who had tried cannabis at least once in their lives, boys were more involved in cannabis use than girls, especially boys from gymnasium schools. A well-off family, lower education of the mother, and worse relations with parents were significantly associated with cannabis use (P < 0.05). Behaviors like skipping school, frequent evenings out, and playing on slot machines were also related to cannabis use (P < 0.05). Conclusions: The study confirmed the importance of family relationship development. Drug use prevention programmes should include building interpersonal trust in the family lifecycle and school culture.

Introduction

Cannabis use has been recognized to have negative effects on adolescents' health, including psychological and physical problems. Immediate health problems are psychotic symptoms, depression, suicidal behavior, respiratory disease, difficulties in functions related to learning, concentration and memory, and a decrease in school performance. Behavioral problems that follow are interpersonal bullying, traffic accidents and drowning, and risky sexual behaviors (1,2). At older ages, these people may adopt other risk behaviors, such as other illegal drug use, violence, and delinquency (3). Cannabis still holds the position of the most widely used and available illicit drug in Europe (4). The prevalence of experimental cannabis consumption (at least once) among European 15-year-olds ranged from under 10% to over 30% (5). In most European countries, cannabis use increased during the 1990s and early 2000s. The growing popularity of cannabis use has been particularly observed in the countries of central and Eastern Europe (6). Serbia is on the traditional Balkan drug trafficking route, well known as the "drug road", which is used by organized criminal groups to transfer a variety of illegal drugs into and from Europe to supply the increasing poly-drug consumption of some two million citizens with drug problems. The route's expansion is a driving factor of Europe's black economy as well as of the development of organized crime (7). The biggest cities in Serbia are on this road: Novi Sad (the capital city of the Serbian autonomous province of Vojvodina), Belgrade (the capital city of the Republic of Serbia), Kragujevac (a city in central Serbia) and Nis (a city in southern Serbia). The Serbian transition milieu in the post-war period is very convenient for international crime operations from Asia via the Balkan Peninsula to Western Europe.
In some parts of Serbia, along the rivers Sava, Dunav, and Morava, cannabis is grown, and it can be bought at local markets (8). Studies related to drug abuse among adolescents are highly topical. During the nineties, the illegal drug market was similar in all parts of Serbia, and cannabis was the most used drug among children 13 to 15 years of age (9). According to various studies, cannabis consumption by adolescents could be a serious public health problem (10). The study Health Behavior of School Children, completed in 1999 with a sample of 5,500 children in Belgrade aged 11, 13, and 15 (11), showed that cannabis was the most commonly used drug among adolescents. During the social disturbances in Serbia prior to 2000, two studies pointed out that 2.9% of adolescents had experience with this drug and that 5-7% of 15-year-old schoolchildren had tried cannabis; cannabis users were more often boys than girls (8,12). In 2005, a pilot study based on the European School Survey Project on Alcohol and other Drugs (ESPAD) methodology indicated ages 15-16 years as critical for first trying cannabis: 12.9% of adolescents from the cities of Belgrade, Nis and Novi Sad, at age 16, reported lifetime experience with cannabis (13). Adolescence is a transitional stage of physical and mental human development from childhood to adult life. Its characteristics are rapid physical growth and psychological, mental and social maturation (14). Adolescents' behavior is shaped by different factors, such as personal factors, factors of the personal social context, environmental and socio-cultural factors, and the interaction of all these factors (15). Indeed, experimentation with drugs during this period can be considered a statistically normative phenomenon (16). Regarding factors of the personal social context, substance-using peer groups and others, such as older siblings, have been found to be the strongest predictors of cannabis use in adolescence (17). Environmental factors such as the easy availability of cannabis also tend to increase the possibility of adolescent substance use (18). Besides, other environmental factors could play a critical role in personal development and the social context: culture, race, socioeconomic and demographic factors (living in a single-parent family, poor interaction and communication with parents), education (poor academic performance or leaving school), social guidance (alcohol availability, social drinking norms and history of tobacco smoking) and health condition (mental conditions, antisocial behaviors) (1,2,19). Social interactions and environmental factors together are important predictors of adolescent cannabis use (10). They create an ambience in which adolescents are exposed to cannabis use or have the opportunity to abuse it; this could be measured by the frequency of evenings out with friends. For different target groups in Serbia there are drug prevention activities, and the government also adopted a National Strategy for the Fight against Drugs from 2009 to 2013 (20). The aim of this study was to examine the relationship between social and demographic determinants and adolescent behavior and cannabis consumption.

Design and selection of the sample

The study was part of the European School Survey Project on Alcohol and other Drugs (ESPAD) among adolescents, conducted during May-June 2008 (13). A cross-sectional study was carried out among a stratified one-stage sample of 6,155 schoolchildren out of 7,911, aged 16 years, born in 1992.
There were 2,856 (46.4%) boys and 3,299 (53.6%) girls. The schoolchildren were attending their first year of secondary school in Serbia. The study included 273 secondary schools out of 290. The sampling frame, the list of all secondary schools in Serbia, was provided by the Ministry of Education. The number of classes and the number of students in classes were estimated based on the number of schools and classes in the previous period. The sample was selected to provide statistical reliability at two levels: territorial coverage and type of secondary school. The territorial coverage comprised three regions of Serbia: the autonomous province of Vojvodina, represented by its capital city, Novi Sad; the territory of Belgrade, represented by the capital city of Serbia, Belgrade; and Central Serbia, represented by the biggest city in South Serbia, Nis. In Vojvodina there were 1,491 (24.2%) schoolchildren, in Belgrade 1,160 (18.8%) and in Central Serbia 3,504 (56.9%). Territorial coverage also included big and small cities and rural areas: there were 3,285 (53.4%) adolescents in big cities, 2,695 (43.8%) in small cities and 175 (2.8%) in rural areas. Types of schools were selected by the school branches that existed in urban and rural areas: gymnasium (four years of general education), vocational-professional (four years of professional education) and vocational-handicraft (three years of specific education for different types of crafts). The numbers of schoolchildren were 1,524 (24.8%) in gymnasium, 3,682 (59.8%) in vocational-professional school and 949 (15.4%) in vocational-handicraft school.

Procedure

The survey data were obtained through a self-reported questionnaire, with prior consent obtained from the Parent Council in the selected schools and the Ethical committee of the Institute of Public Health of Serbia. The ESPAD 2003 questionnaire was translated and adapted into the Serbian language. It included 74 questions: 30 items related to socio-demographic characteristics (school, spare time, social contexts, problems, self-perception and crime/violence) and 44 items addressing psychoactive substance use (2 questions about tobacco, 12 questions about alcoholic beverages, 12 questions about cannabis and various illicit drug usages, and 18 questions about attitudes and ideas with regard to psychoactive substance (PAS) use). The questionnaire was administered in classrooms under conditions similar to a written test, in the presence of the research assistants and with teachers absent. The survey took about one school class, roughly 45 minutes (41 minutes on average in our study). Participation was voluntary, and there were no consequences for those who did not wish to participate. On the day of the research, 7 schoolchildren (5 boys and 2 girls) refused to take part. Children were free to leave some questions unanswered. In order to preserve the complete anonymity of the respondents, children placed each completed questionnaire in a sealed envelope. After analyzing the data quality, the sample involved 6,155 schoolchildren.

Statistical analysis

The categorical variables are expressed as frequencies/percentages. Univariate analyses were carried out to study differences in sociodemographic characteristics and behavior associated with cannabis use by regions of Serbia and types of schools.
A multivariate logistic regression analysis was carried out to obtain adjusted odds ratios (OR) with 95% confidence intervals (CI), in which the dependent variable was cannabis consumption: non-user or user (having never tried it or having consumed it at least once). The independent variables were those which were statistically associated with cannabis use (P < 0.05). The IBM SPSS Statistics 19 package was used for these analyses.

Results

Adolescents were, in great percentage, satisfied with their relationships with their parents, a little more with the mother than with the father. Also, two-thirds of adolescents were very satisfied with their relationships with friends. More than one-third of adolescents rated their school performance as average. Almost two-thirds of adolescents had never missed classes during the last 30 days; the others had skipped classes for more than one day for reasons other than illness or other causes (for example, tests). The most frequently performed everyday activities were going with friends to a shopping mall and walking in the streets or in parks (41.8%). The same percentage of adolescents spent their spare time actively practicing sports, athletics or exercise. More than one-third of adolescents spent time on the computer or playing computer games.

Sociodemographic factors associated with cannabis consumption

As shown in Table 2 and Table 3, after applying the univariate analysis, 11 of the 18 variables describing adolescents and their habits were associated with cannabis consumption, by region and by type of school, in at least one sociodemographic or spare-time dimension. Multiple logistic regression analyses (Table 4) indicated that sex was associated with cannabis consumption in Vojvodina and Central Serbia, i.e., boys consumed cannabis more than girls. The family's financial situation was associated with cannabis consumption in Central Serbia: adolescents from well-off families used cannabis more. In Belgrade and Central Serbia, adolescents who were satisfied with the relationship with their father consumed less cannabis. In Vojvodina, adolescents who were satisfied with the relationship with their mother consumed less cannabis. Adolescents whose parents knew where they spent Saturday night consumed less cannabis in Belgrade and Central Serbia. Adolescents who skipped classes consumed more cannabis in Belgrade and Central Serbia. In all three regions, spare-time activities like evenings out and playing on slot machines were associated with cannabis consumption. Only in Vojvodina was using the internet in spare time associated with cannabis consumption. Multiple logistic regression analyses (Table 5) indicated that sex was associated with cannabis consumption in gymnasium and vocational-professional secondary schools. Gymnasium boys were almost two times more likely to consume cannabis than girls, while the difference was smaller in vocational-professional secondary schools. In vocational-professional schools, adolescents with a better family financial situation used cannabis more. Adolescents whose parents knew where they spent Saturday night consumed less cannabis in gymnasium and vocational-professional schools. Adolescents who had a better relationship with their father abused cannabis less in vocational-professional and vocational-handicraft schools, while adolescents who had a better relationship with their mother abused cannabis less in gymnasium. Higher education of the mother was associated with lower cannabis consumption among adolescents in vocational-professional schools. Missed classes due to skipping were associated with cannabis use in all three types of school, and more so among adolescents in vocational-handicraft schools. In all three schools, more spare time spent on activities like playing on slot machines was associated with more frequent cannabis consumption (21).
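To make the statistical analysis described above concrete, a minimal sketch of how adjusted odds ratios and 95% confidence intervals can be obtained from a logistic regression is given below; it uses Python with statsmodels rather than SPSS, and the file and predictor names are hypothetical stand-ins for the study's variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey dataframe; 'cannabis_user' is 1 if the adolescent
# has tried cannabis at least once, 0 otherwise. Predictor names assumed.
data = pd.read_csv("espad_serbia_2008.csv")
predictors = ["male", "well_off_family", "mother_education",
              "father_relationship", "evenings_out", "slot_machines",
              "skipped_classes"]
X = sm.add_constant(data[predictors])
fit = sm.Logit(data["cannabis_user"], X).fit()

# Adjusted odds ratios and 95% confidence intervals
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI 2.5%": np.exp(fit.conf_int()[0]),
                         "CI 97.5%": np.exp(fit.conf_int()[1])})
print(or_table.round(2))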
Discussion

Our study showed that boys in Vojvodina and Central Serbia consumed cannabis more than girls. Similar data were found in 31 European and North American countries (10,17). However, when boys and girls were in the same situation, their behavior in cannabis consumption was the same (1,2,22,23). In our study, boys who attended gymnasium and vocational-professional schools, where the genders were proportionally represented, in contrast to vocational-handicraft schools, consumed cannabis more frequently than girls. This might be a bias if there were more boys than girls in those types of schools. There is evidence that low socio-economic status may lead to increased drug use (24). Researchers have emphasized the necessity of examining whether adolescents from wealthier families are exposed to greater drug abuse (25). In our study, adolescents from wealthier families were more associated with cannabis consumption in Central Serbia and in vocational-professional schools. The drug market is available to everyone, but the wealthier can buy more easily. The period 2003-2008 recorded decreasing or stable retail prices of cannabis in most European countries (6,26). Serbia is among the countries with a low cannabis price, lower than in the surrounding countries ($2.2 per gram, with a range of $1.5-4.4, in 2008) (27). A danger for adolescents is the increasing production of modified cannabis (skunk) in laboratories and its extremely acceptable price (26,28,30). Relationships with parents and satisfaction with the mother or father were predictors of cannabis use among adolescents in Serbia. The literature indicates that family structure and difficulties in communicating with parents are predictive factors for cannabis abuse (1,2,31). Good communication and a quality relationship between parents and children lead to parents' awareness of where their children spend time and which activities they carry out with friends. Traditional gender roles persist in 77% of Serbian families, unlike European ones: the mother takes care of the home and household, while the father secures the family finances; at the same time, the relationship with the mother is better than with the father (32,33,34). Our research showed that better satisfaction with the mother and her higher education were more protective against cannabis use. Absences from classes due to skipping were associated with cannabis use in Belgrade and Central Serbia, and among adolescents in all three types of schools. It is important to offer a positive school climate, and school and family together may support the development and implementation of effective prevention and intervention approaches (35-38). Spare-time activities offer opportunities for adolescents to experiment with new roles and participate in risky behavior (10). Our results are consistent with evidence from previous studies in the United States and European countries considering the correlation between cannabis use and evenings out (10,17,39). In Serbia, when the study was conducted, casinos and places with slot machines were located everywhere, even near schools, which posed a potential risk of higher accessibility to cannabis. At the beginning of 2010, an action was initiated to close places where gambling slot machines were available.
Also, it has been recommended to prohibit gambling places less than 200 meters from schools and to prohibit entry to persons under the age of 18. In Serbia, there is a trend of opening internet coffee spots where adolescents can spend spare time. The internet, as a closer form of communication, gives them easier access to information about cannabis consumption and prices. Online retailers of drug products are growing in the UK, the Netherlands, Germany and Austria, and they adapt rapidly to new attempts to control the market (40). In 2008, 33.2% of Serbian households had internet access; internet connection was greatest in Belgrade, followed by Vojvodina and Central Serbia (41). The main strengths of our study are the possibility of following up the data in the future, the large sample, and coverage at the national level. Application of the standardized ESPAD methodology enables comparisons between European countries. However, the data are not representative of all adolescents in Serbia born in 1992, but only of adolescents who attended the first grade of secondary school and were the same age. The reasons are that secondary school is not obligatory, some young people did not continue their education after primary school, and some adolescents went to secondary school later. Other limitations are self-report biases. Even though the study reflects adolescents' perceptions, some evidence suggests that reporting of current substance use is generally reliable.

Conclusion

In Serbia, socio-economic factors such as well-off families and lower education of the mother are significantly associated with adolescent cannabis use. The study confirmed the importance of family relationship development for decreasing adolescent cannabis use, as well as the role of risky activities during leisure time (particularly playing on slot machines) and skipping school. Better relations and trust building between adolescents and parents should be the focus of prevention programmes. The results of this study are important for policy makers to create partnerships in the community, which can contribute to effective drug prevention programs. Our findings may also be important for further investigations, because it is known that cannabis use can be associated with the acceptance of other risk behaviors at a later stage.

Ethical considerations

Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
2016-08-09T08:50:54.084Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "436ae0c2a3799a1860621875203da125f93860b6", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "436ae0c2a3799a1860621875203da125f93860b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248142148
pes2o/s2orc
v3-fos-license
Editorial: Identification of Attack Traffic Using Machine Learning in Smart IoT Networks

Introduction

Identifying attack traffic is very important for the security of the Internet of Things (IoT) in smart cities when using machine learning (ML) algorithms. Recently, the IoT security research community has endeavoured to build anomaly, intrusion, and cyber-attack traffic identification models using machine learning algorithms for IoT security analysis. However, some critical and significant problems have not yet been studied in depth. One such problem is how to select an effective ML algorithm when there are a number of ML algorithms available for a cyber-attack detection system for IoT security. Will early-stage traffic management give effective results if applied to IoT traffic management using ML algorithms, or will the performance of the ML model be affected if several features are selected? Methods must avoid the risks of inaccuracy, inefficiency, and privacy leakage of machine learning techniques in IoT. The main objective of this Special Issue is to publish articles based on feature selection, algorithms, protocols, frameworks, and machine learning techniques in IoT that extend the current state of the art with innovative ideas and solutions in the broad area of security attack traffic detection and network traffic management. Theoretical and experimental studies of typical and newly emerging convergence technologies and cases enabled by recent advances are encouraged. High-quality review papers are also welcome.

Papers in This Special Section. A large number of papers were submitted to this Special Issue, and each paper was reviewed by three or more experts during the assessment process. After evaluating the overall scores, thirteen papers were selected for inclusion in this Special Issue. Following is a brief description of the accepted papers:

(i) In paper [1], the Hybrid Monotone Empirical Mode Decomposition (HM-EMD) is a recent EMD-based method of generating intrinsic mode functions (IMFs) using the monotone property. The monotone property assumes that, at each IMF extraction step, local maxima and minima are either increasing or decreasing. Based on this property, and along with the characteristics of EMD, HM-EMD is a useful method for extracting hidden information in audio streams. This paper proposes an enhancement of HM-EMD based on the predicted correlation and periodicity between IMFs obtained from a modified intensity function. In addition, to prove its feasibility, the authors apply the method to detect short messages in music files. Experimental results show that, compared with traditional EMD and other recent EMD-based methods such as reduced iteration EMD, scalar-reduced iteration EMD, and modified iteration EMD, the proposed algorithm is superior to both the nondominated sorting genetic algorithm II and the fast nondominated sorting genetic algorithm II.

(ii) Sophisticated cyberattacks are evolving every day, and they are becoming difficult to detect with conventional security measures. To defend the cyber-security of modern computer systems, researchers have been working on developing intelligent techniques to detect cyberattacks. AI techniques have so far proved successful for many cybersecurity applications, such as intrusion detection, malware analysis, and attack forecasting. However, the complexity of these attacks grows rapidly, and the AI techniques need to be continuously updated to detect them.
In this paper, the authors compare and analyze the approaches used in applying intelligent techniques to some applications of cybersecurity, such as intrusion detection systems (IDS), malware analysis, and network traffic monitoring. Based on the analysis, they define some open challenges in using AI for combating cybercrime. They also discuss the challenges and prospects by combing through over one hundred articles related to future research directions. Finally, they present their perspectives on how future research can improve cyberattack detection systems.

(iii) Paper [2] presents a communication cost optimization method based on security evaluation to address the problem of increased communication cost due to node security verification in the blockchain-based federated learning process. By studying the verification mechanism for useless or malicious nodes, the authors also introduce a double-layer aggregation model into the federated learning process by combining competing voting verification methods and aggregation algorithms. The experimental comparisons verify that the proposed model effectively reduces the communication cost of node security verification in the blockchain-based federated learning process.

(iv) In paper [3], entitled "Poor Coding Leads to DoS Attack and Security Issues in Web Applications for Sensors," researchers from the Department of Computer Engineering at Konkuk University, South Korea, identify common web programming errors that could lead to a denial-of-service (DoS) attack in web applications for sensors. The research team developed a testbed for two kinds of applications: one for single-sensor data collection and the other for data retrieval from a sensor network. Their findings reveal how easily common coding blunders can expose critical infrastructure to unfortunate circumstances.

(v) Study [4] notes that in edge computing environments, dynamic network failures happen frequently due to factors like time-varying nodes and service fluctuations. Such failures often affect the performance of applications or even cause crashes. With the emergence of model-based anomaly detection methods, previous work has proven their effectiveness in helping edge computing systems detect anomalous behaviors and recover from failures at runtime. However, these techniques often require ad hoc model regeneration for each new state of the system and are not suitable for unpredictable edge computing environments. To address this problem, the authors present AdaGUM, an adaptive graph updating model-based anomaly detection method. The proposed method uses a multidimensional graph to capture the interdependency between different elements of edge computing systems (e.g., software components) and then generates the subsequent state transition paths through random walks over graphs. The system behavior is then compared with the transition path based on the behavior space. They evaluate AdaGUM with three real-world open-source systems (e.g., Spark Streaming), using real failures as anomalies and two criteria: accuracy, and performance overhead, which measures system resource consumption. The evaluation results show that AdaGUM can correctly detect 99% of anomalies with an average overhead of 3%.

(vi) The authors of the paper "TBSMR: A Trust-Based Secure Multipath Routing Protocol for Enhancing the QoS of the Mobile Ad Hoc Network" propose a trust-based multipath routing protocol called TBSMR to enhance the MANET's overall performance.
The main strength of the proposed protocol is that it considers multiple factors like congestion control, packet loss reduction, malicious node detection, and secure data transmission to intensify the MANET's QoS. The performance of the proposed protocol is analyzed through simulation in NS2. The simulation results justify that the proposed routing protocol exhibits superior performance compared with existing approaches.

(vii) The paper entitled "Compressed Wavelet Tensor Attention Capsule Network" proposes the compressed wavelet tensor attention capsule network (CWTACapsNet), which integrates multiscale wavelet decomposition, tensor attention blocks, and quantization techniques into the framework of a capsule neural network. Specifically, the multilevel wavelet decomposition is in charge of extracting multiscale spectral features in the frequency domain; in addition, the tensor attention blocks explore the multidimensional dependencies of convolutional feature channels [5], and the quantization techniques make the computational and storage complexities suitable for edge computing requirements. The proposed CWTACapsNet provides an efficient way to explore spatial-domain features, frequency-domain features [6], and their dependencies, which are useful for most texture classification tasks. Furthermore, CWTACapsNet benefits from quantization techniques and is suitable for edge computing applications. Experimental results on several texture datasets show that the proposed CWTACapsNet outperforms the state-of-the-art texture classification methods not only in accuracy but also in robustness.

(viii) In the paper entitled "Employing Deep Learning and Time Series Analysis to Tackle the Accuracy and Robustness of the Forecasting Problem," the authors apply time series analysis to predict the crime rate in order to facilitate practical crime prevention solutions. Machine learning [7,8] can play an important role in better understanding and analysis of future trends in violations. Different time-series forecasting models have been used to predict crime; these forecasting models are trained to predict future violent crimes. The proposed approach outperforms other forecasting techniques for daily and monthly forecasts.

(ix) In paper [9], the RSSI fingerprint dataset of the UCI repository, having seven classes, is used for simulation purposes. The dataset is preprocessed by min-max normalization to increase accuracy and reduce computation time. The proposed model is simulated using MATLAB and evaluated in terms of accuracy, precision, and recall against K-nearest neighbor (KNN) and support vector machine (SVM). Moreover, the simulation results show that the proposed model achieves a high accuracy of 99.87%.

(x) Although access control is one of the most important and effective methods for time-series data security, most existing access control models focus on the function of holding or managing data, while the method of controlling the transmission path is ignored. To maximize data security, the authors propose an IoT time-series data security model based on thermometer encoding and propose a new hyper-chaotic system as the source-generating system to build an adversarial attack model using input parameter sensitivity detection. The authors designed a new adversarial attack model which can prevent input parameter sensitivity detection, realizing maximum data security in the transmission process.

(xi) Due to the development of the digital economy, the Internet of Things (IoT) has been widely used in various fields.
The data security of IoT has become a hot research topic. Generally, the data security of IoT cannot be guaranteed without encryption. Time-series encryption can better protect IoT data, but it remains a challenge, especially in the presence of an adversary attack. Therefore, the authors design an adversarial attack model and then propose an IoT time-series data security model based on thermometer encoding. Finally, the authors evaluate the performance of the proposed model through experiments and compare it with other encryption algorithms.

(xii) The last paper proposes an anomaly detection [10] algorithm selection service (ADS) with a genetic algorithm (GA) and the tsfresh tool. For IoT stream data, it is required that the anomaly detection algorithm provide good recommendations for the easy operation of IoT devices in factory automation systems. Moreover, the proposed method can compare suitable detection models from 28 candidates that are introduced by the tsfresh tool, with suitable input parameters determined by GA methods. The experiments were conducted, and the ADS system achieved good results for anomaly detection, which can be a good reference for other researchers or users for their solutions.

Conflicts of Interest

The editors declare that they have no conflicts of interest.
2022-04-14T15:15:38.089Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "d5556d7a9242ed8f16b0df43ed55126fe4b4529d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/scn/2022/9804596.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17d9285cebd838f873cfd84e8bb1b63d08c8a9f1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
4572837
pes2o/s2orc
v3-fos-license
Focused update on Gastrointestinal (GI) Oncology from ASCO 2016

The 52nd annual meeting of the American Society of Clinical Oncology was held in Chicago, Illinois, on June 3-7, 2016, gathering 30,000 oncology professionals and giving the attendees the opportunity to discuss and view ground-breaking research. In this article, the pivotal presentations at the American Society of Clinical Oncology (ASCO) 2016 related to colorectal cancer (CRC) and other gastrointestinal malignancies are discussed. The presentations on pancreatic cancer and neuroendocrine tumors have practice-changing potential. The provocative retrospective study on sidedness in KRAS wild-type patients with metastatic colorectal cancer (CRC) receiving biologics, such as epidermal growth factor receptor (EGFR) targeted antibody and vascular endothelial growth factor (VEGF) receptor targeted antibody treatment, could change the paradigm of management of these patients. The addition of doxorubicin to sorafenib was not superior to sorafenib alone for advanced hepatocellular carcinoma. For resectable gastric cancer patients, post-operative chemoradiation resulted in similar survival when compared with post-operative chemotherapy. The novel peptide receptor radionuclide therapy significantly increased progression-free survival in low-grade metastatic midgut neuroendocrine tumors (NETTER-1). Immunotherapy in colorectal and non-colorectal malignancies continues to evolve, as noted in several presentations. Microsatellite instability has again been confirmed to be an important predictor in patients with stage IV colon cancer receiving immunotherapy. As expected, immunotherapy and precision medicine featured heavily at ASCO 2016. The selected presentations from the 2016 annual meeting of the American Society of Clinical Oncology (ASCO) related to GI Oncology are reviewed here.

COLORECTAL CANCER

Epidermal growth factor receptor (EGFR) targeted antibodies are approved for clinical use in patients with metastatic CRC. Several retrospective studies in CRC patients receiving anti-EGFR antibody treatment have shown that patients with mutated KRAS did not benefit from anti-EGFR therapy. The KRAS data have changed the paradigm of anti-EGFR antibody treatment in CRC. The retrospective analyses of KRAS data from CRYSTAL [1], OPUS [2] and EVEREST [3] have further demonstrated that patients with KRAS-mutant CRC do not benefit from anti-EGFR antibody treatment. The addition of cetuximab to FOLFIRI or FOLFOX as first-line treatment only benefits patients with wild-type KRAS tumors. The National Cancer Institute (NCI) has suspended all ongoing U.S. cooperative group studies involving anti-EGFR antibodies.

Dr. Venook presented the retrospective data (Abstract 3504) on the impact of primary tumor location on survival in KRAS wild-type patients with colorectal cancer, based on the CALGB/SWOG 80405 trial [5]. This was originally a randomized trial looking at either cetuximab or bevacizumab, added to first-line chemotherapy per the oncologist's choice, in the first-line setting, initially enrolling all RAS patients.
In the current study, the investigators assessed the impact of primary tumor location on survival in KRAS-WT metastatic CRC. Among 1137 patients reviewed retrospectively, about one quarter had right-sided tumors and two thirds had left-sided tumors. There were some interesting findings: patients with left-sided tumors tended to be younger and more often male, had fewer synchronous tumors, and were more likely to have received prior adjuvant chemotherapy; additionally, more patients with left-sided primaries had liver-only metastases. For overall survival, there was a 14-month median survival difference between left-sided and right-sided tumors (33.2 months (left) vs 19.4 months (right)). In terms of the biologics received, subjects who received bevacizumab did better on the left side than the right, with about a seven-month difference in survival (31.4 months (left) vs 24.2 months (right)), but the difference was even greater when cetuximab was the biologic used in the first line, with a 19.3-month difference in survival (36.0 months (left) vs 16.7 months (right)); in other words, survival was about 19 months inferior when the tumor was on the right side of the bowel. The overall survival in patients with stage IV cancers is 14 months greater with left-sided tumors than with right-sided tumors. Cetuximab appears to be more effective than bevacizumab in KRAS wild-type disease on the left side, whereas bevacizumab appears to be more effective on the right side. These data are in agreement with previous results from the FIRE-3 study, presented a few years ago: a randomized trial of cetuximab versus bevacizumab in the first-line setting with FOLFIRI as the chemotherapeutic backbone, in which a twenty-month difference in survival was demonstrated with cetuximab between right- and left-sided tumors, and the same with bevacizumab.

Dr.
Shragg and her colleagues (Abstract 3505) attempted to further address this issue by assessing the SEER database; across the 18 registries, they had over 60,000 patients. Basic drawbacks of this study were that they did not have any information on KRAS and that the only information they had about chemotherapy was from those patients who had Medicare (above 65 years of age). Once again, patients on the left side were demonstrated to be younger, and there were more males on that side as well. And once again they showed that there was indeed a difference in stage IV disease between the left side and the right, with a hazard ratio of 1.25; for stage III disease there was a difference, but not quite as striking as stage IV, with a hazard ratio of 1.12; and no difference was noted in stage II disease. So why the difference between right- and left-sided tumors in terms of clinical outcomes? To address this, Dr.
Michael Lee (Abstract 3601) and colleagues from MD Anderson tried to examine the molecular features associated with survival and with anti-EGFR therapy. Colon cancer is biologically heterogeneous, with differing mutation profiles, microsatellite instability, and the consensus molecular subtyping (CMS) classification that was reported two years ago. Molecular analyses suggest that right-sided tumors are characterized by high rates of BRAF mutation and hypermethylation, and thus distinct gene expression patterns. From a clinical point of view, patients with right-sided tumors tend to be older and more often female; the tumors often present late, are histologically mucinous or signet-cell tumors, and, finally, peritoneal metastases are more common with right- than left-sided tumors. It seems that the side of the cancer really is a surrogate marker for tumor biology, with differential BRAF and hypermethylation status. As this is a retrospective ad hoc analysis, it has its own strengths and limitations, and it is unclear whether it is generalizable to the way these patients are treated now with multimodality treatment strategies. Comprehensive molecular analysis of specimens and precise biomarkers are needed from phase 2 and 3 prospective clinical trial cohorts in order to individualize patient care.

PANCREATIC DUCTAL CANCER

Surgical resection remains the only potentially curative strategy for pancreatic ductal carcinoma (PDC) patients. However, 5-year survival for surgically resected patients is less than 30%, and most patients die of distant and local progression. Therefore, effective adjuvant strategies have been sought to enhance clinical outcomes. The Gastrointestinal Tumor Study Group (GITSG) 9173 trial indicated that post-operative 5-FU and radiotherapy extended the median overall survival to 20 months, as compared with 12 months with observation alone. The European Study Group for Pancreatic Cancer (ESPAC-1) trial indicated for the first time that adjuvant systemic chemotherapy led to superior survival as compared with either no chemotherapy or chemo-radiotherapy, thus setting the stage for adjuvant treatment of resectable PDC [6]: there was an advantage to receiving chemotherapy, with a five-year survival of 21% versus 8% with no chemotherapy. The Radiation Therapy Oncology Group (RTOG) 9704 trial indicated that adjuvant gemcitabine followed by chemo-radiation was superior to 5-FU for pancreatic head carcinomas [7,8]. The CONKO-1 study [9] was a multi-center European trial which randomized 368 patients with surgically resected pancreatic cancer to post-operative gemcitabine for 6 months vs. observation. ESPAC-3 [10] assessed over a thousand patients with resectable PDC, comparing 5-FU and gemcitabine, and found them to be equal in terms of survival outcomes; 5-FU was given as a bolus and had relatively more toxicity compared with gemcitabine. Gemcitabine thus became the reference standard for adjuvant treatment of PDC.
ESPAC-4 is a randomized trial (Abstract 4006), presented by the UK group, comparing gemcitabine alone versus the combination of gemcitabine and capecitabine following the Whipple procedure. Over 700 patients with pancreatic ductal carcinoma, treated with curative intent in terms of surgery, were randomized to adjuvant treatment with gemcitabine for six cycles (days 1, 8, 15) or the combination of gemcitabine (1000 mg/m2 on days 1, 8, 15 for 6 cycles) and capecitabine (830 mg/m2 daily, 21 days out of 28). The overall survival data have been presented: at the two-year mark the survival curves started to separate, with a hazard ratio of 0.82 that was statistically significant, and an overall survival of 28 months compared with 25.5 months for gemcitabine alone. The five-year survival difference is about 12%, increasing from 16.3% with gemcitabine alone to 28.8% with the combination, which is quite impressive. Slightly more toxicity was seen in the combination treatment arm, including hand-foot syndrome, diarrhea and neutropenia; however, these were manageable. With five-year overall survival now 29% for gemcitabine and capecitabine compared with 16% for gemcitabine alone, this regimen is likely to be the standard of care and certainly an option to be discussed with our patients.

HEPATOCELLULAR CANCERS

Sorafenib is a multikinase inhibitor of Raf kinase, the VEGF receptor (VEGFR) and the platelet-derived growth factor receptor (PDGFR), and has been approved for the treatment of advanced hepatocellular cancer (HCC) based on the results of the SHARP trial [11], which demonstrated an approximately three-month survival benefit with sorafenib compared with placebo in Child-Pugh A cirrhotics with HCC. In Asian countries, the incidence of HCC is higher than in Western nations and is more likely to be HBV-associated, compared with HCV in Western populations. Sorafenib significantly prolonged OS and PFS as compared with placebo in a randomized trial of 226 Asian HCC patients [12], thus establishing this agent as a standard therapy for HCC. Since 2009, several chemotherapeutics and targeted agents have been investigated; however, none of them demonstrated superior survival to sorafenib alone.

The combination of sorafenib and doxorubicin was found to be synergistic in phase 1 and 2 studies. Abou-Alfa et al. presented the results of the phase III ALLIANCE study (Abstract 4003) of combination doxorubicin and sorafenib therapy in HCC patients with Child's A cirrhosis. In this study, 137 histologically proven HCC patients with Child's A cirrhosis, no prior systemic therapy, good performance status and adequate Child-Pugh score received the standard dose of sorafenib 400 mg bid and doxorubicin 60 mg/m2 IV every 3 weeks for 6 cycles. The median OS for Child's B cases was 14 weeks, and time to progression (TTP) was 13 weeks. There was some allowance for patients with high bilirubin to dose reduce. This study was powered to detect a 37% increase in median overall survival. Unfortunately, this was a negative trial that did not demonstrate superiority with the addition of cytotoxic chemotherapy; indeed, the combination of chemotherapy with sorafenib appears harmful in terms of OS, and toxicity was also worse in the combination arm. Therefore, it was concluded that chemotherapy is not recommended for advanced HCC.
GASTRIC CANCER

For clearly resectable gastric adenocarcinoma, two trials have been quoted as standard of care: "MAGIC" [13] with perioperative chemotherapy and "McDonald" [14] with post-operative chemo-radiation. One of the common scenarios encountered in clinical practice while treating resectable gastric cancer with perioperative chemotherapy is weighing the role of post-operative radiation, or a change in chemotherapy, when there is a low to modest treatment response of the tumor. The CRITICS trial (Abstract 4000) attempted to readdress the role of radiation in the adjuvant setting in a multicenter randomized phase III clinical trial of neoadjuvant chemotherapy followed by surgery and then either continuing chemotherapy or switching to chemo-radiation. The study population received either ECC (epirubicin, cisplatin and capecitabine) or EOC (epirubicin, oxaliplatin and capecitabine), so basically platinum- and fluoropyrimidine-based chemotherapy. The radiation was delivered as 45 Gy in 25 fractions using IMRT techniques, and patients received weekly cisplatin or capecitabine during the radiation. Eligibility criteria included stage IB to IVA resectable gastric cancers (83%) and gastro-esophageal junction (GEJ) tumors (17%). The primary endpoint was overall survival, with progression-free survival as a secondary endpoint. This trial was powered to detect a 10% increase in five-year overall survival. The majority of the study population had T3 or T4 disease, and about 50 percent were node positive. This trial did not demonstrate any overall survival benefit with post-operative chemoradiation when compared to post-operative chemotherapy alone (40.9% vs 41.3%, P = 0.99). It is important to note that this trial reflected general clinical practice in treating gastric cancer patients, where only 46% of the planned patients could complete post-operative chemotherapy and about 50% could complete post-operative chemoradiation. This is the third trial, in addition to the CALGB and ARTIST trials, to address the role of radiation in the adjuvant setting for gastric cancer, and all of them have failed to demonstrate positive clinical outcomes. Currently, an ongoing trial called TOPGEAR is assessing the impact of radiation upfront rather than in the adjuvant setting: patients are randomized after two cycles of chemotherapy to a third cycle of chemotherapy or to radiation.
NEUROENDOCRINE NEOPLASMS

The PROMID [15] and CLARINET [16] trials showed improvement in PFS with somatostatin analogues (SSA) in neuroendocrine tumors, which are therefore considered first-line treatment. Peptide receptor radionuclide therapy (PRRT) is an infusion administered by nuclear medicine physicians every 8 weeks for 4 doses. This is essentially an SSA, an octreotide molecule linked to a radioactive isotope called Lutetium-177, that binds to somatostatin receptors two and five. It is given systemically, intravascularly, over 30 minutes. To avoid radiation effects on the kidneys, the amino acids lysine and arginine are infused for about 30 minutes, followed by simultaneous administration of the radiopharmaceutical, with the amino acid infusion continued for 3 more hours, for a total of 4 hours. A PFS and OS advantage was demonstrated previously in a large case series, suggesting that this compound is active in neuroendocrine malignancies. NETTER-1 (Abstract 4005) is a randomized controlled trial conducted in Europe and the US. Patients who progressed on an SSA were randomized to receive 4 cycles of PRRT; the experimental group was still able to continue the somatostatin analogue if needed for symptom control. The comparator arm was a dose-escalated group receiving 60 mg of SSA. The primary objective was progression-free survival in midgut tumors. Tumors were well differentiated, low grade, with a Ki-67 index of less than 20%, and somatostatin receptor positive. The median progression-free survival was not reached beyond 2 years, with a hazard ratio of 0.21, that is, a 79% risk reduction, and a response rate of 18% compared to the SSA arm (3%). In fact, the progression-free survival of the SSA group was about eight months, providing little evidence that increasing the dose of SSA has an impact on PFS. Patients in the experimental arm had short-term GI toxicity, including nausea, vomiting, and diarrhea, which has been attributed to the amino acid infusion given for renal protection. Therefore, PRRT provides a major therapeutic benefit for patients progressing on SSAs, for whom few treatment options are available.
IMMUNOTHERAPY

There were two key takeaways regarding immune therapy in colon cancer. First, all colon cancer patients, at all stages, should be tested for microsatellite status, both to learn whether they harbor the inherited HNPCC syndrome and to open a new therapeutic option with the checkpoint inhibitors, which have demonstrated substantial clinical benefit in patients with MSI-high metastatic disease. Even though these drugs are not formally approved in the United States, many sites gain access through company-sponsored compassionate access programs. Second, based on pre-clinical evidence, a small phase I trial (Abstract 3502) showed that the combination of cobimetinib (a MEK1 inhibitor) and atezolizumab (a PD-L1 inhibitor) demonstrated interesting clinical benefit (response rate and prolonged stable disease) in MSS colon cancer patients. MEK inhibition increases intra- and peri-tumoral T cell accumulation through upregulation of MHC-I on tumor cells, and therefore, combined with a PD-L1 inhibitor, it resulted in synergistic action. This phase 1 trial enrolled 23 KRAS-mutant patients with metastatic colorectal cancer. A partial response was noted in 17% and stable disease in 22% of enrolled patients. Based on these outcomes, a larger randomized trial is in the process of being initiated. This may be the most important observation presented: before this, MSS colon cancer patients, who represent an unmet need, were considered unresponsive to immune therapies. These results have the potential to open more opportunities for further exploration of combination trials of immune therapy for non-MSI-high patients.

Last year, Dr. Le and colleagues demonstrated that patients with MMR-deficient tumors, whether colon cancer or in fact non-CRC tumors, have a significant benefit from pembrolizumab, with no benefit if they were MMR proficient or microsatellite stable. [17] This year, this group of investigators presented updated reports of the MSI-H cohort of CRC, with a total of 28 patients showing an overall response of 57%. With this study as background, Dr. Overman from the MD Anderson Cancer Center presented data (CheckMate 142 trial, Abstract 3501) on nivolumab (a PD-1 inhibitor) with or without ipilimumab (a CTLA-4 inhibitor) in patients with microsatellite-stable or -high disease. The primary endpoint was the investigator-assessed response rate. Study patients stopped monotherapy mainly because of disease progression and the combination mainly because of toxicity. The overall response rate was 25.5% for monotherapy and 33.3% for the combination. However, the clinical benefit rate, which includes both overall response and stable disease, was significantly higher with the combination: 81% compared to 56% for monotherapy.
In summary, this year at ASCO, ESPAC-4, an adjuvant pancreatic cancer trial, improved median and five-year overall survival and thus became the reference standard. The NETTER trial for metastatic midgut tumors is quite exciting: with an HR of 0.21, a very impressive progression-free survival, a good response rate, and early signs of an overall survival benefit, it has high potential to become an excellent therapeutic option in the future. The CRITICS trial in resectable gastric cancers unfortunately does not, at this point, support the use of post-operative radiation treatment when embarking on a perioperative chemotherapy strategy. In hepatocellular cancer, chemotherapy does not improve overall survival; sorafenib alone continues to be the standard. For metastatic colon cancer, side really does matter, as it is not only prognostic but also predictive of the treatment effect, and is clearly a biologic surrogate marker. This may impact the management paradigm in the near future. The immune oncology space continues to expand in GI malignancy, with new hope in MSS colorectal cancers.

Conflicts of interest

There are no conflicts of interest.

Ravi Kumar Paluri, GI Oncology, Department of Medicine, Division of Hematology Oncology, Associate Scientist, Experimental Therapeutics Program, Comprehensive Cancer Center, University of Alabama at Birmingham, AL, USA. E-mail: rpaluri@uabmc.edu
MOLECULAR DOCKING AND MOLECULAR DYNAMIC SIMULATION OF THE AGLYCONE OF CURCULIGOSIDE A AND ITS DERIVATIVES AS ALPHA GLUCOSIDASE INHIBITORS

Diabetes mellitus (DM) is characterized by high blood sugar levels caused by insufficient insulin production in the pancreas or insulin resistance in the body. Alpha-glucosidase enzymes are therapeutic targets for type 2 diabetes treatment. Structurally, curculigoside resembles xanthohumol (a chalcone), which has strong inhibitory activity against alpha glucosidase. The present study aims to determine the interaction of the aglycone of curculigoside A and its derivatives with alpha glucosidase (PDB ID 2QMJ), evaluated on the basis of interaction mode and binding stability by performing docking and molecular dynamics (MD) simulations using AutoDock 4.2 and AMBER 18, respectively. All ligands can interact with alpha glucosidase, and ligands 34, 36, 43, and 56 have the best binding modes, with free binding energies of -6.30, -5.67, -5.16, and -5.92 kcal/mol, respectively. The hydrogen bonds formed in MD differ from those of the docked pose because of the large movement of the alpha glucosidase receptor and ligand during the MD process. In conclusion, ligands 34, 36, 43, and 56 are candidate lead compounds as alpha glucosidase inhibitors.

INTRODUCTION

Diabetes mellitus (DM) currently affects 10.3 million Indonesian people, and this number is predicted to increase to 16.7 million by 2045. 1 DM occurs due to deficient insulin secretion by pancreatic β-cells, increased insulin resistance, or impaired insulin action in the target tissue. Type 2 diabetes (T2D) is the most common form of the disease, accounting for approximately 90% of cases. 2 The magnitude and timing of the peak plasma glucose (PG) depend on a variety of factors, including the timing, composition and quantity of the meal. PG peaks in diabetic individuals reach about ≥200 mg/dL, but in non-diabetic individuals rarely exceed 140 mg/dL (about 60 minutes after the start of a meal). Thus, controlling postprandial hyperglycemia (PPHG) to achieve blood glucose levels like those of non-diabetic individuals is one of the therapeutic strategies for T2D. 3

An α-glucosidase inhibitor is a compound that inhibits the breakdown of carbohydrates into glucose by the α-glucosidase enzyme. α-Glucosidase is a carbohydrate hydrolase located on the brush border of the small intestinal epithelium 4 and acts as a catalyst in the final stage of carbohydrate digestion, after which only monosaccharides such as glucose and fructose can be absorbed by the intestine. 5 This enzyme breaks down oligosaccharides through a hydrolysis reaction, cleaving the 1,4-α glycosidic linkage between the glucosyl residue and the glycosidic oxygen (C1-O), accompanied by proton exchange between water and the glucosyl residue, and yielding D-glucose as a final product that is easily absorbed by the intestine and causes an increase in postprandial blood glucose levels. 6 Inhibition of this enzyme can effectively reduce the digestion of complex carbohydrates and their absorption, thereby decreasing postprandial glucose levels in diabetics. 4 One of the chalcone derivatives with strong inhibitory activity against alpha glucosidase is xanthohumol, with an IC50 of 8.8 µM. 7 The aglycone of curculigoside A has structural similarity to the chalcone. Curculigoside A is a phenolic glycoside isolated from the rhizome of Curculigo orchioides. 8 Its aglycone was predicted to have good absorption, moderate permeability, and weak binding to plasma proteins. 9
In silico, one of the common approaches to determining the mechanism of action of a compound is to consider the similarity of chemical structures. The similarity aspect of a chemical structure refers to similarity of chemical elements, molecules, and substructures. The basic principle assumes that compounds with similar chemical structures will have similar biological properties and will tend to bind to the same protein. 10 Therefore, several studies have been carried out to modify the structure of a lead compound to obtain a more active but less toxic compound, both chemically 11 and in silico 12. One method used is molecular docking, which is the process of docking a molecule into the active site of the target macromolecule through noncovalent bonds. It is important to know the basic structure of the drug to be designed in order to optimize the ligand binding interaction in the macromolecule. 13 Docking is used to predict the interaction and orientation of ligand binding to the target protein, as well as in the virtual screening of several candidate compounds to obtain the best hit for specific targets. 14 Docking focuses on the poses and interactions of ligands at an active site; to obtain more comprehensive information, it is necessary to run molecular dynamics simulations, which provide information about the stability of ligand-protein binding. Although docking can provide an acceptable binding mode, the solvent effect and protein flexibility are not fully considered. Therefore, MD simulations were carried out on the best-docked interactions to further explore the ligand-receptor interactions. In this study, the docking mode and interaction stability of the aglycone of curculigoside A and its derivatives on α-glucosidase were investigated using docking and molecular dynamics simulation.

Ligand Preparation

The molecular structures of the aglycone of curculigoside A and its derivatives 9 were sketched using the ChemDraw Ultra software 15 of ChemOffice, subjected to energy minimization using Allinger's Molecular Mechanics (MM2) force field, and then geometry-optimized using semi-empirical quantum mechanics based on AM1 (Austin Model 1).

Preparation of Alpha Glucosidase Macromolecules

The protein structure 2QMJ was obtained from the Protein Data Bank (PDB) at a resolution of 1.9 Å, bound with the ligand 1,4-deoxy-4-((5-hydroxymethyl-2,3,4-trihydroxycyclohex-5,6-enyl)amino) fructose, and prepared using Discovery Studio Visualizer 16; it was used for optimization and minimization until the root-mean-square deviation reached 2.0 Å. Then, a grid was generated using the grid generation wizard for the docking studies.

MD Simulations

The MD simulations were carried out using the AMBER 18 software package. 18 The initial structures of the 34, 36, 43, and 56 complexes from the docked results were used for the MD simulations. The FF14SB AMBER force field was used for the protein, and charges were added to the protein using the software database. The general AMBER force field (GAFF2) was used for the ligands 19, and the AM1-BCC method was applied to assign their partial charges, because of the lack of partial charge parameters for the ligands in the GAFF2 force field. 20 The Antechamber suite (AMBER 18 package) was used to produce the atomic charges and topology files of the ligands. 19 The Tleap module of AMBER 18 was used to produce the topology and coordinate files of the whole system.
The whole system was immersed in a TIP3P water box with a margin distance of 10 Å. 21 To neutralize the charge of the system, a proper number of sodium ions were added. The particle mesh Ewald (PME) method was adopted during the MD simulations to deal with the long-range electrostatic interactions 22, and the cut-off distance for nonbonded interactions was set to 10 Å. The SHAKE algorithm was used to constrain the bonds involving hydrogen. 23 Firstly, two-stage energy minimizations were performed on each system: the algorithms (1,000 steps of steepest descent and 1,000 steps of conjugate gradient) with restraints were used in the first stage, and the same algorithms without restraints were used in the second stage. Secondly, each system was gradually heated from 0 to 300 K within 20 picoseconds (ps). Then, the system was equilibrated for up to 100 ps at 300 K and constant pressure. Finally, a production run of 50 ns was performed at constant temperature and pressure (NTP) with a time step of 2 fs. The trajectories were recorded every 10 ps, and the stability of the system was checked by the RMSD of the backbone. Trajectory analysis was carried out using CPPTRAJ. 24

Calculation of Binding Free Energy

The MM-GBSA method in AMBER 18 was used to compute the binding free energies of the receptor-ligand complexes. 25 All 100 snapshots of the simulated structures within the last 1 ns of the MD trajectory were extracted to perform the binding free energy calculations.

RESULTS AND DISCUSSION

Molecular Docking

Molecular docking is done to predict the orientation of one molecule in the receptor, and its interaction is evaluated based on conformational and electrostatic properties. 26 The native ligand, 1,4-deoxy-4-((5-hydroxymethyl-2,3,4-trihydroxycyclohex-5,6-enyl)amino) fructose (acarbose), was docked into the binding site of the alpha-glucosidase (PDB: 2QMJ) receptor 27 to validate the docking method. The root-mean-square deviation (RMSD) between the docked structure (red) and the X-ray crystal structure (green) of acarbose was 1.52 Å (less than 2 Å) (Fig.-1), which is satisfactory. A smaller RMSD shows that the position of the re-docked ligand is closer to the position of the crystallographic ligand. 13 Also, all 57 curculigoside A derivatives 9 were docked into the binding pocket of alpha glucosidase (Table-1). All docking modes formed bonds to the amino acid residues Asp327, Asp203, Arg526, Asp542, and His600, and the four best compounds were obtained, i.e., 34 (3,5-dihydroxybenzyl-4-chlorobenzoate), 36 (3,5-dihydroxybenzyl-3-bromobenzoate), 43 (3,5-dihydroxybenzyl-4-(tert-butyl)benzoate), and 56 (4-hydroxybenzyl-4-(tert-butyl)benzoate), with free binding energies (kcal/mol) of -6.74, -6.69, -6.81, and -6.68, respectively. The native ligand has a docking mode involving amino acid residues on the active site of the receptor. These ligand-receptor interactions are formed through hydrogen bonds, van der Waals interactions, and/or electrostatic interactions (Fig.-2).
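The redocking check above reduces to a single formula. As a rough illustration (not the study's actual script), the RMSD between two matched sets of atomic coordinates can be computed as follows, with the 2 Å acceptance threshold applied at the end; the coordinates here are toy values:

# Redocking validation: RMSD between docked and crystallographic
# poses of the native ligand, with the < 2 A acceptance threshold.
import numpy as np

def rmsd(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Toy N x 3 coordinate arrays; a real check would parse the two poses.
docked  = np.array([[0.0,  0.0, 0.0], [1.5, 0.2, -0.3]])
crystal = np.array([[0.1, -0.1, 0.0], [1.4, 0.0, -0.1]])

value = rmsd(docked, crystal)
print(f"RMSD = {value:.2f} A ->", "method validated" if value < 2.0 else "re-dock")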
Ligands 34, 36, 43, and 56 have docking modes similar to that of the native ligand (acarbose), being able to form van der Waals interactions and π-interactions. The Asp542 residue formed a π-anion interaction with 34, 36, 43, and 56, which plays a role in the stability of the binding interaction. 28 The Asp327 residue forms hydrogen bonds to the O atoms of the hydroxyl groups substituted on benzene in 34, 36, 43, and 56.

Fig.-2: Binding modes of 34, 36, 43, 56, and the native ligand (acarbose) in the active site of alpha glucosidase.

Based on these results, ligands 34, 36, 43, and 56 are able to interact via hydrogen bonds with the key amino acid residues in the binding pocket of alpha glucosidase, namely the Asp327, Asp542, Arg526, and His600 residues, which are important amino acid residues on the active site of alpha glucosidase, 27 although with free binding energies higher than that of the native ligand (acarbose). These results indicate that compounds 34, 36, 43, and 56 have a good affinity for the alpha glucosidase receptor. The free binding energy (∆G) reflects the stability of the ligand binding interaction with the alpha glucosidase enzyme in the binding site: the more negative the free binding energy, the more stable the ligand-receptor interaction. 29

MD Simulations

MD Simulation Features

Molecular dynamics simulations were carried out to explore the receptor-ligand interactions while taking protein flexibility into account. To observe the stability of the complexes, the properties of each complex (such as pressure, temperature, structure, and energy) were examined over the entire MD trajectory (Fig.-3). The simulations were carried out on the protein-ligand complexes of acarbose, 34, 36, 43, and 56. The RMSD of the backbone atoms, referred to the starting structure of each protein-ligand complex, was used to monitor the dynamic stability of the MD trajectories. The average RMSD fluctuations for the protein and the ligand are 1.36 Å and 1.91 Å (acarbose); 1.60 Å and 1.57 Å (34); 1.28 Å and 1.71 Å (36); 1.33 Å and 1.72 Å (43); and 1.30 Å and 1.28 Å (56), respectively. These results order the average ligand RMSD fluctuations of the five ligands as 56 < 34 < 36 < 43 < acarbose.

RMSF (Stability of the Binding Pocket)

The root-mean-square fluctuation (RMSF) was used to explore the stability of the binding pocket during the MD simulation. The RMSF of all residues around the ligand in the acarbose, 34, 36, 43, and 56 complexes was computed within the last 10 ns of the MD trajectory using Discovery Studio. The residues around the ligand and their RMSF values relative to the initial complexes can be seen in Table-2. In all the complexes, the RMSF of each residue surrounding the ligand is lower than 1.5 Å (Table-2), which means that the binding pocket is stable during the MD simulation.

Hydrogen Bond Interactions

Hydrogen bond interactions play an important role in the complexes between receptor and ligand. Hydrogen bonds were computed within the last 10 ns of the trajectory. All possible hydrogen bond acceptors were taken into consideration, including the protein, the ligands, and water molecules.
The results of the hydrogen bond analysis for the five systems are listed in Table-3. There were five, three, six, three, and one hydrogen bond(s) formed in the acarbose, 34, 36, 43, and 56 complexes, respectively. The hydrogen bonds formed in the complexes during the MD simulation are quite different from those of the binding mode from the docking simulation, because of the large movement of ligand and receptor during the MD simulation.

Binding Free Energies

The ∆G_bind of the five complexes was calculated using the MM-GBSA method (Table-3), comprising ∆E_vdW, ∆E_ele, ∆G_GB, ∆G_SA, ∆E_gas, ∆G_sol, and ∆G_bind. Van der Waals interactions occur when adjacent atoms come close enough that their outer electron clouds just barely touch. The 56 complex has the lowest van der Waals energy, -18.38 kcal/mol, which is influenced by hydrophobic interactions between the t-butoxybenzyl moiety and the residues surrounding 56. The 43 complex has a van der Waals energy of -15.36 kcal/mol, influenced by the hydrophobic interaction between the methoxybenzyl moiety and the residues surrounding 43. The 36 complex has a van der Waals energy of -12.50 kcal/mol, influenced by the hydrophobic interaction between the dihydroxybenzyl moiety and the residues surrounding 36. The 34 complex has a van der Waals energy of -10.88 kcal/mol, influenced by hydrophobic interactions between the dihydroxybenzyl moiety and the residues surrounding 34. The acarbose complex has the highest van der Waals energy, -6.01 kcal/mol. The electrostatic term also affects the binding free energy, because each complex shows hydrogen bonds with the receptor as well as unfavorable polar solvation (∆G_GB) (e.g., the 34 and 36 complexes). The 34 complex has a low electrostatic value, but its binding free energy is higher than that of the 36 complex because it has a high ∆G_GB, whereas the non-polar solvation contribution (∆G_SA) does not affect the binding free energy.

CONCLUSION

In conclusion, the binding modes of the four inhibitors (34, 36, 43, and 56) in the docking simulations are similar, and the RMSD fluctuations of the four complexes in the MD simulations are consistent with their inhibitory activities. Therefore, these ligands can be considered lead compounds for alpha-glucosidase inhibitors.
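As a footnote to the energy analysis above, the MM-GBSA binding free energy is simply the sum of its components, ∆G_bind = ∆E_vdW + ∆E_ele + ∆G_GB + ∆G_SA, and a docking-style ∆G can be converted to an approximate inhibition constant through ∆G = RT ln Ki. In the sketch below, ∆E_vdW is the value reported for ligand 34, while the remaining components are placeholders chosen so the sum matches the abstract's -6.30 kcal/mol; they are not the paper's tabulated data:

# MM-GBSA bookkeeping: dG_bind = dE_vdW + dE_ele + dG_GB + dG_SA,
# plus an approximate inhibition constant from dG = RT ln(Ki).
import math

R, T = 1.987e-3, 298.15                 # kcal/(mol K), K

def dg_bind(de_vdw, de_ele, dg_gb, dg_sa):
    return de_vdw + de_ele + dg_gb + dg_sa

def ki_from_dg(dg):                     # dg in kcal/mol -> Ki in mol/L
    return math.exp(dg / (R * T))

# dE_vdW from the value reported for 34; the rest are placeholders
# chosen so the sum matches the abstract's -6.30 kcal/mol.
dg = dg_bind(-10.88, -15.0, 21.0, -1.42)
print(f"dG_bind = {dg:.2f} kcal/mol, Ki ~ {ki_from_dg(dg) * 1e6:.0f} uM")  # ~24 uM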
A phase I trial of S-1 with concurrent radiotherapy for locally advanced pancreatic cancer

This study investigated the maximum tolerated dose of S-1, based on the frequency of its dose-limiting toxicities (DLT), with concurrent radiotherapy in patients with locally advanced pancreatic cancer. S-1 was administered orally at escalating doses from 50 to 80 mg m−2 b.i.d. on each day of irradiation during radiotherapy. Radiation therapy was delivered through four fields to a total dose of 50.4 Gy in 28 fractions over 5.5 weeks, and no prophylactic nodal irradiation was given. Twenty-one patients (50 mg m−2, three; 60 mg m−2, five; 70 mg m−2, six; 80 mg m−2, seven patients) were enrolled in this trial. At a dose of 70 mg m−2 S-1, two of six patients demonstrated DLT, involving grade 3 nausea and vomiting and grade 3 haemorrhagic gastritis, whereas no patients at the other dose levels demonstrated any sign of DLT. Among the 21 enrolled patients, four (19.0%) showed a partial response. The median progression-free survival time and median survival time for the patients overall were 8.9 and 11.0 months, respectively. The recommended dose of S-1 therapy with concurrent radiotherapy is 80 mg m−2 day−1. A multi-institutional phase II trial of this regimen in patients with locally advanced pancreatic cancer is now underway.

Pancreatic cancer (PC) is one of the leading causes of cancer death worldwide. The prognosis of patients with this disease remains extremely poor, with a 5-year survival rate after diagnosis of less than 5%. Despite recent improvements in diagnostic techniques, PC is diagnosed at an advanced stage in most patients. Among these patients, roughly one-third are diagnosed as having locally advanced disease radiographically confined to the pancreas and surrounding tissues. In patients with locally advanced PC, concurrent external-beam radiation therapy and 5-fluorouracil (5-FU) therapy has been shown to offer a survival benefit in comparison with radiotherapy alone (Moertel et al, 1969, 1981) or chemotherapy alone (Gastrointestinal Tumor Study Group, 1988). In an attempt to improve the efficacy of 5-FU with concurrent radiotherapy, various anticancer agents and radiation schedules are being examined in clinical trials, but no significant impact on survival has been accomplished. Because of these results, 5-FU with concurrent radiotherapy remains the predominant chemoradiotherapy for locally advanced PC in clinical use (Willett et al, 2005; Yip et al, 2006).

S-1 is a novel orally administered drug, a combination of tegafur, 5-chloro-2,4-dihydroxypyridine and oteracil potassium in a 1 : 0.4 : 1 molar ratio. Tegafur is hydroxylated and converted to 5-FU by the hepatic microsomal enzymes. 5-Chloro-2,4-dihydroxypyridine is a competitive inhibitor of dihydropyrimidine dehydrogenase, which is involved in the degradation of 5-FU, and acts to maintain effective concentrations of 5-FU in plasma and tumour tissues. Oteracil potassium, a competitive inhibitor of orotate phosphoribosyltransferase, inhibits the phosphorylation of 5-FU in the gastrointestinal tract, reducing the serious gastrointestinal toxicity associated with 5-FU (Shirasaka et al, 1996a). In athymic nude rats, S-1 has been shown to result in the retention of a higher and more prolonged concentration of 5-FU in plasma and tumour tissues in comparison with 5-FU and uracil/tegafur (Shirasaka et al, 1996b).
The antitumour effect of S-1 has already been demonstrated in a variety of solid tumours, including advanced gastric cancer (Sakata et al, 1998), colorectal cancer (Ohtsu et al, 2000), non-small-cell lung cancer (Kawahara et al, 2001), and head and neck cancer (Inuyama et al, 2001). In patients with metastatic PC, a recent early phase II study demonstrated a response rate of 21% (Ueno et al, 2005), and a more favourable tumour response (response rate: 38%) and survival (median: 8.8 months) have been reported in a multi-institutional late phase II trial of S-1 (Furuse et al, 2005). Thus, S-1 has promising antitumour activity against advanced PC, and is much more convenient to administer than intravenous 5-FU infusion, as it is taken orally. Concurrent radiotherapy with S-1 therapy, as an alternative to 5-FU infusion, may result in more efficient treatment and improve the quality of life of patients. Therefore, we conducted a phase I trial to determine the maximum tolerated dose of S-1 with concurrent radiotherapy, based on the frequency of dose-limiting toxicities (DLT), in patients with locally advanced PC.

The exclusion criteria were: watery diarrhoea; pleural effusion or ascites; active infection; active gastroduodenal ulcer; severe complications such as heart disease or renal disease; mental disorder; history of drug hypersensitivity; active concomitant malignancy; pregnant or lactating females; and females of childbearing age unless using effective contraception. Ultrasonography, multidetector-row computed tomography of the abdomen, and chest X-ray were performed for pretreatment staging, to assess the local extension of the tumour and to exclude the presence of distant metastasis. The computed tomography-based criteria for tumour nonresectability included tumour encasement of the celiac trunk, common hepatic artery or superior mesenteric artery, or bilateral invasion of the portal vein. All patients with obstructive jaundice underwent percutaneous transhepatic or endoscopic retrograde biliary drainage before treatment. This phase I study was approved by the Institutional Review Board of the National Cancer Center and conducted in accordance with the Declaration of Helsinki principles.

Treatment schedule

This was an open-label, two-institutional, single-arm phase I study performed on an in-patient basis. Radiotherapy was administered with 10 or 25 MV photons using three-dimensional treatment planning. A total dose of 50.4 Gy was delivered in 28 fractions over 5.5 weeks. The clinical target volume (CTV) included only the gross primary tumour and nodal involvement enlarged over 10 mm as detected by computed tomography. Elective nodal irradiation was not used. The planning target volume was defined as the CTV plus a 10 mm margin in the lateral direction and a 10-20 mm margin in the craniocaudal direction, to account for respiratory organ motion and daily set-up error. The four-field technique (anterior, posterior and opposed lateral fields) was used. There was no field reduction. The spinal cord dose was maintained below 45 Gy. The dose received by ≥50% of the liver was limited to ≤30 Gy, and that received by ≥50% of both kidneys was limited to ≤20 Gy. S-1 was administered orally twice daily, after breakfast and dinner, on each day of irradiation (Monday to Friday) during radiotherapy. The initial dose of S-1 was 50 mg m−2 day−1, and the dose was escalated to 80 mg m−2 day−1 in increments of 10 mg m−2 day−1 (Table 1).
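The per-patient arithmetic behind this escalation scheme is straightforward. The sketch below computes the absolute daily dose for each level from body surface area and applies the round-down banding to the fixed daily doses described in the next paragraph (60, 80, 100 or 120 mg); the BSA value and the handling of doses below the lowest band are illustrative assumptions, not protocol specifications:

# Daily S-1 dose: BSA-scaled dose level, rounded down to the nearest
# fixed daily dose (60/80/100/120 mg). BSA is illustrative; behaviour
# below the lowest band is an assumption (the minimum band is given).
DOSE_BANDS = (120, 100, 80, 60)          # mg/day, highest first

def s1_daily_dose(bsa_m2, level_mg_per_m2):
    raw = bsa_m2 * level_mg_per_m2       # exact calculated dose, mg/day
    for band in DOSE_BANDS:
        if raw >= band:
            return band
    return DOSE_BANDS[-1]

for level in (50, 60, 70, 80):           # the four escalation levels
    print(level, "mg/m2/day ->", s1_daily_dose(1.6, level), "mg/day")
# 50 -> 80, 60 -> 80, 70 -> 100, 80 -> 120 for a 1.6 m2 patient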
The calculated S-1 dose was rounded down to the nearest of 60, 80, 100 or 120 mg. S-1 at 50 mg m−2 day−1 is reported to be almost equivalent to 200 mg m−2 day−1 of intravenous 5-FU (Hirata et al, 1999), which has been used as protracted 5-FU infusion with concurrent radiotherapy for locally advanced PC at our institutions (Ishii et al, 1997). S-1 at 80 mg m−2 day−1 is the standard dose used as a single agent for systemic therapy (Furuse et al, 2005; Ueno et al, 2005). Patients maintained a daily journal to record their intake of S-1 and any signs or symptoms that they experienced.

Patient cohorts had a minimum of three patients at each dose level. If no DLT was observed in the initial three patients, the dosage was escalated in successive cohorts. If DLT was observed in one or two of the initial three patients, three additional patients were evaluated at that dose level. If only one or two of six patients experienced DLT, dose escalation was continued. However, if three or more patients experienced DLT at a given dose level, then the previous dose level was considered the maximum tolerated dose. DLT was defined as any of the following manifestations of toxicity observed up to the completion of chemoradiotherapy: grade 3 leucocytopenia and/or neutropenia with a fever ≥38°C lasting 3 days or more; grade 3 leucocytopenia and/or neutropenia with infection; grade 4 leucocytopenia and/or neutropenia lasting 3 days or more; grade 4 leucocytopenia and/or neutropenia requiring haematopoietic colony-stimulating factors; platelets <25,000 mm−3; grade 3 thrombocytopenia requiring transfusion; serum AST/ALT ≥10 times the UNL; grade 3 or 4 nonhaematological toxicities excluding nausea, vomiting, anorexia, fatigue, constipation, hyperglycaemia, and abnormalities of sodium, potassium, and calcium; or treatment interruption for longer than 12 days. When grade 3 or greater haematological toxicity, a total bilirubin level 2.0-3.0 times the UNL, serum AST/ALT 5.0-10.0 times the UNL, grade 3 vomiting, and/or grade 2 nonhaematological toxicity (excluding nausea, vomiting, anorexia, fatigue, constipation, alopecia and pigmentation change) was observed, radiotherapy and S-1 administration were suspended. Treatment was resumed when the toxicities had resolved by one grade or more relative to these suspension criteria. Dose modification was not performed in this study. When DLT or tumour progression was observed during chemoradiotherapy, the treatment was discontinued. After this treatment, the patients were allowed to receive other anticancer treatments at their physician's discretion.

Toxicity and response evaluation

The primary end point of this trial was the frequency of DLT, and the secondary end point was the potential antitumour activity. Treatment-related toxicities were assessed using the National Cancer Institute Common Toxicity Criteria version 2.0. During treatment, complete blood counts with differentials, serum chemistry and urinalysis were carried out at least once a week. Tumour response was evaluated at the completion of chemoradiotherapy and every 8 weeks thereafter until tumour progression, according to the Japan Society for Cancer Therapy criteria (Japan Society for Cancer Therapy, 1993), as follows: a complete response was defined as the disappearance of all clinical evidence of the tumour for a minimum of 4 weeks.
A partial response was defined as a 50% or greater reduction in the sum of the products of two perpendicular diameters of all measurable lesions for a minimum of 4 weeks. A minor response was defined as a reduction of 25% or greater but less than 50% in the sum of the products of two perpendicular diameters of all measurable lesions for a minimum of 4 weeks, or a 50% or greater reduction lasting less than 4 weeks. No change was defined as a reduction of less than 25% or an increase of less than 25% in the sum of the products of two perpendicular diameters of all lesions for a minimum of 4 weeks. Progressive disease was defined as an increase of 25% or more in the sum of the products of two perpendicular diameters of all lesions, or the appearance of any new lesion. Progression-free survival time was defined as the time from the date of initial treatment to the first documentation of progression or death. Overall survival was measured from the date of initial treatment to the date of death or the date of last follow-up. Progression-free and overall survival times were calculated by the Kaplan-Meier method. Serum carcinoembryonic antigen (CEA) and carbohydrate antigen 19-9 (CA19-9) levels were measured at least every 8 weeks by radioimmunometric assay using the Centocor radioimmunoassay kit (Centocor Inc., Malvern, PA, USA).

Patient characteristics

Twenty-one patients were enrolled in this study between May 2004 and November 2005 at the National Cancer Center Hospital, Tokyo, and the National Cancer Center Hospital East, Kashiwa, Chiba, Japan. The characteristics of the patients are listed in Table 2. The median age was 59 years (range: 51-74). Karnofsky performance status was 100 in 12 patients (57%), 90 in 8 (38%) and 80 in one (5%). The median maximum tumour size was 37 mm (range: 25-60), and the median planning target volume was 265 cm3 (range: 153-408). The causes of unresectability were invasion of the celiac trunk in nine patients, invasion of the superior mesenteric artery in eight patients and invasion of both regions in four patients. Patients were treated with S-1 and concurrent radiation over four dose levels, as listed in Table 1. After completion of chemoradiotherapy, 20 patients (95%) received gemcitabine alone for their cancer until disease progression, and one patient received other treatment at another hospital.

Toxicity

The toxicities observed in the 21 enrolled patients are listed in Table 3. With regard to overall haematological toxicity, grade 3 neutropenia was observed in only one patient, at dose level 1, and other grade 3-4 toxicities were not observed. For nonhaematological toxicity, grade 3 anorexia and nausea (three patients), grade 3 vomiting (one patient) and grade 3 haemorrhagic gastritis (one patient) occurred at level 3, and grade 3 AST elevation was observed in one patient at level 4. As a late toxicity, a duodenal ulcer with epigastralgia was observed in one patient at level 3 (S-1 70 mg m−2) 8 months after chemoradiotherapy, requiring embolisation of the gastroduodenal artery to treat severe bleeding from the ulcer and a 2-month hospital stay. No other grade 3-4 nonhaematological toxicities or treatment-related deaths occurred in this study.
Treatment was suspended in four patients (level 2, one; level 3, two; level 4, one patient) because of obstructive jaundice (two patients) or grade 3 anorexia (two patients); the durations of S-1 dose withholding were 3, 12, 2 and 13 days, respectively. One patient with grade 3 anorexia (level 3) was unable to resume treatment. The compliance rate for S-1 intake was as high as 99% (1170/1176 doses). There was no occurrence of DLT at dose levels 1 or 2, but two of six patients who received a level 3 dose experienced DLT: one of these patients required suspension of treatment for more than 12 days due to grade 3 anorexia, nausea and vomiting after the third fraction of chemoradiotherapy, and the second developed grade 3 haemorrhagic gastritis after completion of 13 fractions. However, no DLT was observed at the level 4 dose, and S-1 at 80 mg m−2 with concurrent radiotherapy was considered to be well tolerated. Five of the 21 enrolled patients (level 2, two; level 3, two; level 4, one) had to abandon the treatment. Two patients at level 2 developed massive ascites and infarction of the cerebellum, respectively, during chemoradiotherapy. The cause of the massive ascites was disease progression, as cancer cells were confirmed in the ascitic fluid. The cerebellar infarction was considered unrelated to the treatment, because the patient had a history of the same problem. Two patients at level 3 had to discontinue the treatment because of DLT, according to the protocol, and one patient at level 4 decided to stop the treatment at her own request, despite the lack of severe toxicity.

Efficacy

All the patients were included in the response evaluation. Four patients (levels 1 and 2, none; level 3, one; level 4, three) achieved a partial response, giving an overall response rate of 19% (95% confidence interval, 5-42%). Four patients (19%) showed a minor response, and nine (43%) and three patients (14%) had no change and progressive disease, respectively. Tumour response could not be evaluated in one patient (5%), because she was transferred to another hospital to seek other treatment for her PC. None of the patients' diseases became resectable or operable after the completion of treatment. After the start of chemoradiotherapy, the serum CA19-9 level was reduced by more than 50% compared with the pretreatment level in 14 (88%) of 16 patients who had shown a pretreatment level of 100 U/ml or greater, and the serum CEA level was reduced by more than 50% relative to the pretreatment level in three (100%) of three patients who had a pretreatment level of 10 ng ml−1 or greater. Eighteen of the 21 patients had disease progression at the time of analysis. The pattern of disease progression was distant metastases in 11 (52%), deterioration of general condition in five (24%) and locoregional recurrence in two patients (10%). The median progression-free survival time for all patients was 8.9 months (Figure 1). At the time of analysis, 13 patients had died of tumour progression. The median survival time and 1-year survival rate for the patients as a whole were 11.0 months and 42.9%, respectively (Figure 1).

DISCUSSION

On the basis of the results of previous randomised controlled trials (Moertel et al, 1969, 1981; Gastrointestinal Tumor Study Group, 1988), the combination of 5-FU therapy and radiotherapy has become a frequently employed treatment for locally advanced PC (Willett et al, 2005; Yip et al, 2006).
Because of the modest survival benefit of 5-FU-based chemoradiotherapy, numerous investigators are pursuing phase I and II trials of radiotherapy with new chemotherapeutic agents such as gemcitabine, paclitaxel, capecitabine, bevacizumab, gefitinib and erlotinib (Blackstock et al, 2003; Okusaka et al, 2004; Rich et al, 2004; Crane et al, 2006; Czito et al, 2006). However, no marked improvement in survival has been observed. S-1 is an oral fluoropyrimidine derivative that has demonstrated excellent efficacy with mild toxicity in patients with metastatic PC (Furuse et al, 2005). It is considered beneficial because of the convenience of oral administration. In addition, combined S-1 and radiotherapy has been demonstrated to exert a synergistic effect against 5-FU-resistant cancer xenografts (Harada et al, 2004; Nakata et al, 2006). Therefore, a clinical trial of concurrent radiotherapy with S-1 therapy for locally advanced PC was designed to intensify the treatment efficacy and improve the convenience of administration.

In this study, a limited radiation field, in which the planning target volume included only the gross tumour volume without prophylactic nodal irradiation, was adopted to minimise the volume of normal tissue treated, because our retrospective study showed that a larger planning target volume for irradiation was a significant predictor of severe acute gastrointestinal toxicity in patients treated with chemoradiotherapy (Ito et al, 2006). A similar radiation field has been attempted in recently reported trials of chemoradiotherapy to decrease the degree of gastrointestinal toxicity (Muler et al, 2004; Crane et al, 2006). Gastrointestinal toxicities, such as anorexia, nausea and vomiting, are major troublesome adverse events during chemoradiotherapy, necessitating intravenous fluid infusion and sometimes discontinuation of chemoradiotherapy (Talamonti et al, 2000; Crane et al, 2002; McGinn and Zalupski, 2003; Okusaka et al, 2004). In the present study, some gastrointestinal toxicities were observed, but they were easily managed. Moreover, the limited radiation field used in this study did not result in excess failures at the border of the radiation field, because locoregional recurrence was observed in only two patients in this series. In this study, DLT was observed in only two patients, at level 3 (S-1 70 mg m−2). The DLT in the first patient was grade 3 anorexia, nausea and vomiting requiring suspension of treatment for longer than 12 days, and the second DLT was grade 3 haemorrhagic gastritis. Apart from the DLTs, acute grade 3-4 toxicities during chemoradiotherapy were observed in only three patients: grade 3 neutropenia, grade 3 anorexia and nausea, and grade 3 AST elevation in one patient each. As a late toxicity, a duodenal ulcer was observed 8 months after treatment in one patient at level 3, but no other late toxicity occurred. Accordingly, S-1 at a daily dose of 80 mg m−2 (level 4) was considered to be well tolerated, and this dose was deemed recommendable. In contrast, in the present trial, the combination of full-dose S-1 (80 mg m−2) and standard-dose radiotherapy (50.4 Gy/28 fractions) was easy to administer and had a favourable toxicity profile. Therefore, this regimen might have the dual benefit of counteracting systemic tumour spread as well as acting as a potent radiosensitiser for local control.
With regard to the antitumour activity of this treatment, four (19%) of the 21 patients achieved a partial response, and the response rate at the recommended dose was 43% (3/7). The progression-free survival time (median: 8.9 months) and overall survival time (median: 11.0 months) were also favourable for a phase I trial. In this study, many patients (95%) received gemcitabine alone after completion of this regimen. Such adjuvant gemcitabine therapy might influence the efficacy of treatment, although the extent of its impact on tumour response and survival has not been fully elucidated in patients with locally advanced PC. Since both the efficacy and the toxicity profile of this regimen appear attractive, a phase II trial is required to clarify the antitumour activity, survival and toxicity of S-1 at 80 mg m−2 day−1 with concurrent radiation therapy for locally advanced PC. In conclusion, the recommended dose of S-1 with concurrent radiotherapy is 80 mg m−2 day−1 on each day of irradiation, and this regimen has a mild toxicity profile while delivering substantial antitumour activity in patients with locally advanced PC. Orally administered S-1 may offer an easy alternative to intravenous 5-FU without impairing quality of life. A phase II trial of S-1 at the optimal dose of 80 mg m−2 day−1 with concurrent radiation in patients with locally advanced PC is now underway in a multi-institutional setting.
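The PFS and OS medians quoted throughout come from Kaplan-Meier estimates. For readers unfamiliar with the method, a minimal sketch follows; the times are illustrative, not trial data, and ties and same-time censorings are handled naively:

# Minimal Kaplan-Meier estimator, the method behind the PFS/OS medians
# above. Each observation is (time, event): event=1 for progression or
# death, 0 for censoring.
def kaplan_meier(observations):
    surv, curve = 1.0, []
    at_risk = len(observations)
    for t, event in sorted(observations):
        if event:                        # survival steps down at events
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                     # either way, leaves the risk set
    return curve

# e.g. months: 3+, 5, 8, 9, 11, 14+  ("+" marks a censored observation)
print(kaplan_meier([(3, 0), (5, 1), (8, 1), (9, 1), (11, 1), (14, 0)]))
# [(5, 0.8), (8, 0.6), (9, 0.4), (11, 0.2)]  (up to float rounding)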
RAFT Hydroxylated Polymers as Templates and Ligands for the Synthesis of Fluorescent ZnO Quantum Dots

The remarkable photoluminescent properties, biocompatibility, biodegradability, and antibacterial properties of zinc oxide quantum dots (ZnO QDs), coupled with their low cost and nanoscale size, guarantee bio-related and technological applications. However, the effect of the polymeric ligand during synthesis has hardly been investigated compared to other, less environmentally friendly QDs. Thus, the objective of this work was to focus on the synthesis of fluorescent hybrid ZnO QDs by the sol-gel method using different polymers with hydroxyl groups as templates and ligands, in order to obtain stable particles in different media. For this purpose, well-defined hydroxylated statistical polymers and block copolymers were synthesized using reversible addition-fragmentation chain transfer (RAFT) polymerization to establish the influence of molecular weight, hydrophobic/hydrophilic balance, and polymer architecture on the colloidal and photophysical properties of the synthesized hybrid ZnO QDs. Dynamic light scattering (DLS), TEM, and X-ray diffraction measurements indicated the formation of stable nanoparticles of a few nanometers. A remarkable enhancement in fluorescence was observed when the ZnO QDs were synthesized in the presence of the hydroxylated homopolymers, and even more so with the block copolymer architecture. Organosilanes combined with the hydroxylated polymers were used to improve the colloidal stability of the ZnO QDs in aqueous media. These samples exhibited uniform and stable enhanced photoluminescence over the nearly five months during which they were investigated. Among other applications, the hybrid ZnO QDs synthesized in this work exhibit high selectivity for detecting Cr6+, Fe2+, or Cu2+ in water.

The sol-gel method is the most widespread method for the synthesis of ZnO QDs. Typically, this process involves the hydrolysis of zinc acetate in ethanol under sonication, generating nanoparticles of 3-5 nm with an emission at about 500-550 nm [26]. However, the as-synthesized particles do not have colloidal stability, and after a short period of time the particles undergo the process known as Ostwald ripening and end up precipitating in the reaction medium [27]. To overcome this problem, several strategies have been applied, most notably the synthesis of ZnO QDs using triethylene glycol or tetraethylene glycol as reaction media [28,29] and the use of silanes [25] and polymer ligands of different natures [12,27,30]. Furthermore, besides providing colloidal stability in the reaction medium, polymers can confer other properties, such as solubility in different media, increased biocompatibility, and functionalization capacity, among others [31]. Thus, Laopa and Vilaivan [12] found that ZnO QDs can be stabilized by a cationic copolymer via an ionic interaction with the citrate ligand, enhancing the fluorescence quantum yield (ΦF) up to 27-32%. Zheng et al. [30] investigated the role of double hydrophilic block copolymers, consisting of a polyethylene glycol (PEG) stabilizing block and a second block bearing chemical groups with affinity for the ZnO surface (either carboxylic or phosphonic acid), on the stabilization and luminescence of ZnO QDs in THF and water. The hydrolysis of zinc methacrylate and in situ polymerization of the methacrylic surface ligand with a polyethylene glycol methyl ether methacrylate produced ZnO@polymer core-shell nanoparticles with tunable PL [3] and a high ΦF.
In addition to these functionalities (carboxylic, phosphonic groups, etc.), the hydroxyl groups of poly(vinyl alcohol) (PVA) have shown avidity for the surface of ZnO QDs through the formation of hydrogen bonds, giving rise to very stable nanocomposites [32]. This leads us to hypothesize that other polymers with hydroxyl groups could be suitable ligands for the stabilization of ZnO QDs. Controlled living radical polymerization (CLRP), such as RAFT, accommodates a wide variety of monomers with hydroxyl groups, which opens up a range of polymers with control over their chemical composition, molecular weight, and architecture. Furthermore, RAFT polymerization is ideal for the synthesis of functional polymers to hybridize inorganic nanostructures [33,34]. With this background, in this work we focused on the synthesis of fluorescent hybrid ZnO QDs in the presence of polymers with multiple hydroxyl groups as templates and ligands, in order to obtain stable particles on the nanoscale. Well-defined multihydroxylated statistical homopolymers and block copolymers, including a hydrophilic poly(polyethylene glycol methacrylate) block and a second, more hydrophobic block comprising monomers with hydroxyl groups able to interact with the ZnO surface, were synthesized by RAFT polymerization. The influence of the molecular weight, polymer architecture, and hydrophobic/hydrophilic balance on the colloidal and photophysical properties of the ZnO QDs was investigated in detail. However, while polymers can improve colloidal stability in the reaction medium and enhance the ΦF, achieving stability in aqueous media for long periods of time remains a significant challenge, since the addition of small amounts of water to the medium can cause the precipitation of ZnO with the unavoidable loss of luminescence. The most successful and robust method to overcome this drawback is the silanization of the ZnO QD surface by means of alkoxysilanes [35-37]. The dense cross-linked network around the ZnO nanoparticle prevents its evolution and loss of luminescence over time. Since silanes can react with hydroxyl groups during condensation, in this contribution we explored the combination of hydrophobic and hydrophilic silanes with hydroxylated polymers to create a more versatile and protective coating. The hydroxyl groups of the copolymers can covalently attach to the siloxane network, which provides a polymeric shell for the silanized ZnO QDs; this in turn improves the colloidal stability, providing long-lasting fluorescence in aqueous medium. Owing to the robustness of these nanoparticles, and among other promising applications in the fields of biotechnology and optoelectronics, in this work we took advantage of the fluorescence quenching experienced by QDs in the presence of metals for the purpose of detecting pollutants. Unfortunately, the contamination of water with different metals from industry is currently a significant concern, so the development of robust and economical detection methods is critical from an environmental point of view.

Synthesis of Homopolymers via RAFT Polymerization

pPPGMA, pHPMA, and pPEGMEMA homopolymers, from now on denoted as PG, HP, and EG, respectively, were synthesized by RAFT polymerization for further use, namely the hydrophobic PG and HP as coatings for ZnO nanoparticles, owing to the hydroxyl groups they contain, and the hydrophilic EG for subsequent block copolymer synthesis.
PG, HP, and EG homopolymers (Table 1) of different molecular weights were synthesized in ethanol at 70 °C by setting different monomer/CDTPA RAFT ratios (from 25 to 75) and reaction times (2, 3, 4, 5 h). Their chemical structures are shown in Scheme 1a,b. A typical protocol for the synthesis of sample PG14k (the subscript indicates the theoretical Mn in kg mol−1) is shown below: CDTPA RAFT agent (0.0333 g, 0.08 mmol) was added to PPGMA (1.546 g, 4 mmol) and dissolved in absolute ethanol (2.3 g, 48.8 mmol) in a 20 mL glass tube, followed by the addition of ACVA initiator (5.98 mg, 0.016 mmol, CDTPA/ACVA ratio = 5). This resulted in a yellow solution that was purged with nitrogen for 20 min in an iced-water bath, followed by another 10 min at room temperature. The sealed tube was immersed in an oil bath at 70 °C and magnetically stirred for 5 h. After that, the reaction was stopped by exposure to air and immersion in an iced-water bath. The product was diluted in methanol and precipitated three times in hexane in order to remove the unreacted monomers. The purified polymers were dried under high vacuum and stored at 4 °C. The monomer conversions estimated from the 1H-NMR spectra are gathered in Table 1.

Scheme 1. (a) RAFT hydroxylated homopolymers and block copolymers used as templates for the ZnO QD synthesis; (b) procedure for poly(polyethylene glycol methacrylate) (EG) synthesis; (c) synthesis of the poly(polyethylene glycol methacrylate)-b-poly(hydroxypropyl methacrylate) (EG-HP) block copolymer; and (d) synthesis of luminescent hybrid ZnO QDs protected with the EG-HP block copolymer.

Synthesis of Copolymers via RAFT Polymerization

A series of pPEGMEMA-b-pPPGMA (EG-PG) and pPEGMEMA-b-pHPMA (EG-HP) copolymers of several molecular weights (Table 1) were synthesized to be used as ZnO nanoparticle templates and ligands in further reactions. Their structures are shown in Scheme 1a. The two series of copolymers were synthesized using the three EG homopolymers collected in Table 1. The EG macro-CTA/monomer (PPGMA or HPMA) ratio was fixed to 50 or 75 in a reaction of 3 h at 70 °C. The protocol for the synthesis of the copolymers is presented in Scheme 1c and is illustrated below with the sample EG21k-PG14k. Thus, EG21k (0.618 g, 0.02 mmol) and PPGMA (0.394 g, 1.02 mmol) were dissolved in absolute ethanol (1.5 g, 32.6 mmol) in a 20 mL glass tube, followed by the addition of ACVA initiator (1.17 mg, 0.0041 mmol, EG21k/ACVA ratio = 5). This resulted in a yellow solution that was purged with N2 gas for 20 min in an iced-water bath, followed by another 10 min at room temperature. The sealed tube was immersed in an oil bath at 70 °C and magnetically stirred for 3 h. After that, the reaction was exposed to air and immersed in an iced-water bath.
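Before moving on to the work-up, note that the sample subscripts (e.g., PG14k) refer to the theoretical Mn of a RAFT polymerization, Mn,th = ([M]0/[CTA]0) × conversion × M_monomer + M_CTA. The short sketch below evaluates this using the masses quoted in the PG14k recipe above; the conversion value is an illustrative stand-in for the NMR value gathered in Table 1, and M_CTA ≈ 404 g/mol is the approximate molar mass of CDTPA:

# Theoretical Mn of a RAFT polymerization:
#   Mn_th = ([M]0/[CTA]0) * conversion * M_monomer + M_CTA
def mn_theoretical(ratio, conversion, m_monomer, m_cta=404.0):
    return ratio * conversion * m_monomer + m_cta      # g/mol

m_ppgma = 1.546 / 0.004        # ~386.5 g/mol, from the quoted masses
ratio = 4.0 / 0.08             # monomer/CDTPA = 50

print(f"{mn_theoretical(ratio, 0.70, m_ppgma) / 1000:.1f} kg/mol")  # ~13.9, i.e. "14k"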
An aliquot of the reaction crude was taken to determine the monomer conversion by 1H-NMR. The reaction crude was diluted in methanol and precipitated in hexane; this process was repeated three times in order to remove the unreacted monomers. The precipitate was dried under high vacuum and stored at 4 °C.

Synthesis of Fluorescent ZnO QDs

The synthetic procedure for the ZnO QDs synthesis [26], based on the sol-gel method, is presented in Scheme 1d. The procedure begins with the preparation of the organometallic precursor, i.e., 25 mL of a 0.06 M solution of zinc acetate dihydrate (Zn(CH3COO)2·2H2O) in absolute ethanol. The solution, in a round-bottom flask, is refluxed at 80 °C and magnetically stirred for 1 h. After cooling to room temperature, 5 mL of the solution are taken and transferred to a glass reaction tube, followed by the addition of the polymer, silane, or silane/polymer combination chosen for each experiment. The tube is then sealed and placed inside an ultrasonic bath at 30 °C, where 0.9 M KOH solution is added dropwise in an OH−/Zn2+ molar ratio of 2:1. The obtained clear solution with the coated ZnO QDs is cooled down in an iced-water bath, stored at 4 °C, and protected from light. For XRD and FTIR analysis, samples were concentrated in ethanol and precipitated in hexane to remove the excess precursors.

Characterization and Properties

1H-NMR spectra were recorded in DMSO-d6 or CDCl3 (depending on polymer solubility) using a Bruker Avance III-HD-400 spectrometer. Molecular weight distributions and dispersities (Ð = Mw/Mn) of homopolymers and copolymers were determined by size exclusion chromatography (SEC) on a Perkin Elmer Series 200 system equipped with a refractive index detector and columns heated at 70 °C. DMF stabilized with 0.1 wt% LiBr was used as the mobile phase at 0.8 mL min−1 and 70 °C, using poly(methyl methacrylate) (PMMA) standards (Polymer Laboratories Ltd., Shropshire, United Kingdom) for the calibration. Infrared measurements were carried out on a Perkin-Elmer Spectrum Two FTIR spectrometer fitted with an attenuated total reflectance (ATR) accessory. The crystalline structure of the ZnO QDs was analyzed by X-ray diffraction (XRD). Diffractograms were recorded in reflection mode using a Bruker D8 Advance diffractometer provided with a PSD Vantec detector (Bruker, Madison, WI). Cu Kα radiation (λ = 0.1542 nm) was used, operating at 40 kV and 40 mA. The equipment was calibrated with different standards. The diffraction scans were collected within the range 2θ = 4-80°, with a 2θ step of 0.024° and 0.5 s per step. ZnO QDs morphology and size were determined by transmission electron microscopy (TEM) in a JEOL JEM-2100 HT microscope operated at 200 kV and equipped with a LaB6 gun, a CCD ORIUS SC1000 (Model 882) camera, a STEM unit with an ADF detector, and a point resolution of 0.25 nm. This microscope is located at the ICTS Centro Nacional de Microscopía Electrónica at UCM (Madrid, Spain). The mean particle size was obtained by measuring at least 120 particles. The hydrodynamic size of the nanoparticles was determined by dynamic light scattering (DLS). Diluted samples in ethanol were measured at 20 °C to determine the hydrodynamic size as the number distribution on a Zetasizer Nano ZS instrument (Malvern Instruments Ltd., Malvern, UK). Malvern Dispersion Software was used for data acquisition and analysis.
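The amount of base implied by the stated OH−/Zn2+ ratio follows from simple stoichiometry; a minimal sketch using only the volumes and concentrations given in the procedure above:

```python
# Volume of 0.9 M KOH needed to reach an OH-/Zn2+ molar ratio of 2:1
# for the 5 mL aliquot of 0.06 M zinc acetate solution.

v_zn, c_zn = 5e-3, 0.06          # L, mol/L
n_zn = v_zn * c_zn               # mol of Zn2+  (3e-4 mol)
n_oh = 2 * n_zn                  # OH-/Zn2+ = 2:1
c_koh = 0.9                      # mol/L
v_koh = n_oh / c_koh             # L
print(f"Zn2+: {n_zn * 1e3:.2f} mmol -> KOH volume: {v_koh * 1e3:.2f} mL")
# ~0.67 mL of 0.9 M KOH, added dropwise under sonication
```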
The emission spectra of hybrid ZnO QDs were recorded on a Perkin Elmer FL6500 spectrophotometer in ethanol at an excitation wavelength of 365 nm, using a Rhodamine 6G solution in ethanol as standard. The absorption spectra of the synthesized ZnO QDs and Rhodamine 6G in ethanol were recorded on a UV/Vis NanoDrop One Thermo-Scientific spectrometer. The fluorescence quantum yield (Φ_F) of the ZnO QDs was calculated by the relative method using the following equation, applicable to diluted solutions (absorbance ≤ 0.10 at the excitation wavelength):

Φ_F = Φ_S · (I_E / I_E(s)) · (I_A(s) / I_A) · (η / η(s))²,   (1)

where Φ_S is the quantum yield of the Rhodamine 6G standard in ethanol (Φ_S = 95%) [38], I_A refers to the light intensity absorbed at the excitation wavelength (365 nm) by the sample, and I_A(s) is the same quantity for the Rhodamine 6G, calculated from the absorbance A at the excitation wavelength as

I_A = 1 − 10^(−A).

I_E and I_E(s) are the integrated emitted fluorescence intensities of the sample and the Rhodamine 6G, respectively, η is the refractive index of the sample solution, and η(s) the refractive index of the Rhodamine 6G ethanolic solution. Experiments to investigate the use of hybrid ZnO QDs as sensors for metal detection were carried out by incubating 50 µL of the reaction sample, diluted 1:10 in water, for 1 h with 50 µL of different salt solutions to obtain final concentrations between 5 and 100 µM, and studying the resulting fluorescence emission signal.

Synthesis of RAFT Hydroxylated Copolymers

Since previous reports indicated that polymers with hydroxyl groups, such as PVA, are useful for the stabilization of ZnO QDs, in the present work we propose the synthesis of hydrophobic and amphiphilic polymers multi-functionalized with hydroxyl groups to act as templates in the synthesis of hybrid ZnO QDs. To this end, hydrophobic methacrylic homopolymers (PG and HP) and amphiphilic block copolymers with either a polypropylene glycol side chain (EG-PG) or a hydroxypropyl (EG-HP) side chain were synthesized (Scheme 1a) by RAFT polymerization. For the homopolymers and the EG first block, CDTPA was used as the RAFT agent at a CDTPA/ACVA ratio of 5, in ethanol at 70 °C. Ethanol was selected as solvent since the synthesis of the ZnO QDs was also to be carried out in this alcohol, so traces of residual solvent would not affect this reaction. In Table 1, the polymer composition, conversion, and molecular weight parameters are collected, and in Figure S1 the 1H-NMR spectra corresponding to representative EG, HP, and PG homopolymers are shown. As can be seen in Table 1, the hydroxylated homopolymers synthesized by RAFT present low dispersity (Ð), indicating that the polymerization was well controlled, although PG homopolymers exhibit a higher Ð than HP and EG homopolymers. Block copolymers were synthesized by chain extension of EG of different molecular weights, yielding the copolymers collected in Table 1, which also exhibit a low Ð (lower than 1.3), indicating the suitability of the polymerization conditions. Figure S2 displays selected SEC chromatograms for representative EG-HP and EG-PG block copolymers, while 1H-NMR spectra of EG macro-RAFTs and block copolymers, displaying their characteristic chemical shifts, are presented in Figures S1 and S3.

ZnO QDs with Hydroxylated Polymers as Ligands

The synthesis was carried out using the classical sol-gel method since it has the lowest cost and is a simple, repeatable, and reproducible synthesis [20].
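A minimal sketch of this relative method is given below. The integrated intensities and absorbances are placeholders rather than measured values, and the refractive-index term cancels here because both solutions are ethanolic.

```python
# Relative quantum-yield method described above: phi_s = 0.95 for
# Rhodamine 6G in ethanol; I_A = 1 - 10**(-A) is the fraction of light
# absorbed at the excitation wavelength (365 nm).

def absorbed_fraction(absorbance):
    return 1.0 - 10.0 ** (-absorbance)

def quantum_yield(I_E, A, n, I_E_s, A_s, n_s, phi_s=0.95):
    """Phi_F = phi_s * (I_E/I_E_s) * (I_A_s/I_A) * (n/n_s)**2."""
    return (phi_s * (I_E / I_E_s)
            * (absorbed_fraction(A_s) / absorbed_fraction(A))
            * (n / n_s) ** 2)

# Hypothetical integrated intensities, absorbances <= 0.10:
phi = quantum_yield(I_E=1.8e6, A=0.05, n=1.361,        # ZnO QDs in ethanol
                    I_E_s=4.0e6, A_s=0.05, n_s=1.361)  # Rhodamine 6G std
print(f"Phi_F ~ {phi:.0%}")   # ~43% with these placeholder numbers
```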
A solution of zinc acetate in ethanol was hydrolyzed by adding KOH in the absence or in the presence of different polymers with hydroxyl groups, while keeping the solution in an ultrasonic bath (frequency of 37 kHz). The process is streamlined in Scheme 1d and the sample properties are displayed in Table 2. The reaction was carried out at 30 °C since preliminary experiments (Figure S4) indicated that increasing the synthesis temperature leads to a decrease of the PL intensity, a red shift of the fluorescence maximum, and an increase in the particle size, in agreement with previous studies [39]. The formation of ZnO QDs with hydrodynamic diameters under 10 nm in the presence of RAFT hydroxylated polymer ligands is clearly observed in the DLS measurements shown in Figure 1a for a number of representative hybrid ZnO QDs in ethanol, while the dispersion of uncoated ZnO QDs (ZnO@bare) in ethanol results in large aggregates. In Figure 1b, the XRD pattern shows the typical profile for ZnO nanoparticles, with broadening of the peaks due to the nanometric size [40]. All of the diffraction peaks were consistent with JCPDS file No. 80-0075 and could be indexed according to a wurtzite structure, although the presence of a Zn(OH)2 phase could not be completely ruled out. According to the Debye-Scherrer equation, a mean size of 5.04 nm was obtained for the ZnO@HP10k nanoparticles in Figure 1b. This value was calculated from the diffraction peaks at 2θ = 47.4° and 56.6°, assignable to the (102) and (110) planes, respectively [41]. Figure 1c displays TEM images corresponding to ZnO@HP7k nanoparticles, revealing the formation of spherical nanoparticles that are on average 4.5 nm in size, in good agreement with the XRD. Lattice fringes with d-spacings of 0.26 nm and 0.28 nm, corresponding to the (002) and (100) planes of wurtzite, are indicated by parallel lines in Figure 1c. The presence of homopolymer and copolymer ligands on the ZnO QDs surfaces was ascertained by the analysis of the FTIR spectra shown in Figure 1d. The FTIR spectrum of bare ZnO synthesized without polymer exhibits bands at 1560 and 1400 cm−1, corresponding to the -COO− stretching vibrations of residual potassium acetate, a broad band from 3600 to 2700 cm−1, assigned to the hydroxyl groups of ethanol still adsorbed on the ZnO surface, as well as a broad band around 400 cm−1, attributed to the Zn-O bonds [42]. The FTIR spectra corresponding to ZnO QDs with polymer coatings exhibit the vibration bands of the polymeric ligands; notably, the C-H stretching vibrations at 3000 to 2800 cm−1, corresponding to the CH3, CH2, and CH of the polymer backbone and the isopropyl groups of the side chain, and the strong -C=O stretching vibration at 1725 cm−1. At approximately 1090 cm−1, the typical C-O-C ether stretching appears in ZnO QDs functionalized with polypropylene glycol and polyethylene glycol polymers. In addition, bands at 1560 and 1400 cm−1 corresponding to residual potassium acetate still appear. The band attributed to Zn-O appears at around 460-467 cm−1 in ZnO nanoparticles with polymer ligands.
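The Scherrer estimate quoted above can be reproduced as follows. Note that the peak widths (FWHM) used here are hypothetical, chosen only to illustrate how a ~5 nm crystallite size emerges from the (102) and (110) reflections with Cu Kα radiation.

```python
import math

# Scherrer estimate, D = K * lambda / (beta * cos(theta)), with shape
# factor K ~ 0.9 and Cu K-alpha wavelength 0.1542 nm (as in the XRD setup).

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.1542, K=0.9):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)          # FWHM converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

for two_theta, fwhm in ((47.4, 1.8), (56.6, 1.9)):   # hypothetical FWHMs
    print(f"2theta = {two_theta} deg: "
          f"D ~ {scherrer_size(two_theta, fwhm):.1f} nm")
```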
In Figure 2a, the absorbance spectrum corresponding to ZnO QDs without a polymer ligand (ZnO@bare) is compared with the spectra of the particles synthesized in the presence of representative PG, EG-PG, HP, and EG-HP copolymers (dashed lines). ZnO QDs with polymer protection exhibit a sharp increase in absorbance below 360 nm. The absence of absorbance above 360 nm for the samples synthesized in the presence of polymers indicates that there are no aggregates, in contrast to the sample synthesized without any ligand. Emission curves for a series of hybrid ZnO QDs with different polymer coatings are presented in Figure 2a as solid lines and compared with ZnO QDs without coating. For all the synthesized hybrid ZnO QDs the emission is centered between 549 and 564 nm after excitation at 365 nm, regardless of the presence or nature of the polymer employed in the synthesis. However, the photoluminescence emission experiences a strong enhancement upon the addition of the different homopolymers and copolymers to the reaction batch (Table 2). To quantify this effect, the Φ_F (%) values obtained are shown in the last column of Table 2 and represented, for clarity, in the form of a bar chart in Figure 2b for ZnO QDs coated with PGx homopolymers and EGy-PGx block copolymers, and in Figure 2c for ZnO QDs coated with HPx and EGy-HPx block copolymers, where x and y represent the molecular weight of each block.

As can be seen in Figure 2 and Table 2, the quantum yield (Φ_F) of ZnO@polymer noticeably increases with the polymer protection, in agreement with previous reports [12,27]. Comparing the results obtained with the different polymeric coatings, polypropylene glycol methacrylate homopolymers (PGx) provide comparable protection for ZnO QDs to that provided by hydroxypropyl methacrylate homopolymers (HPx), which results in similar quantum yields (Φ_F). However, the Φ_F does not improve when ZnO nanoparticles are hybridized with EG-PG block copolymers. On the other hand, EG-HP block copolymers clearly increase the Φ_F over the HP homopolymers; in particular, for ZnO QDs coated with the EG21k-HP5k block copolymer, the Φ_F reached 43%. In the case of HP polymers, a clear influence of the molecular weight is observed: specifically, increasing the size of the hydrophilic EG block improves the fluorescence quantum yield. On the contrary, increasing the size of the more hydrophobic hydroxylated HP block reduces the quantum yield for both homo- and block copolymer ligands (Table 2 and Figure 2c). The emission was maintained and even slightly increased in ethanol during the investigation (Figure S5). However, transferring the sample to water causes the formation of aggregates and luminescence loss after a few days (Figure S5). That occurred for all the polymeric coatings investigated, even in the presence of amphiphilic block copolymers, where the hydrophobic block bears hydroxyl groups able to interact with the ZnO surface while the hydrophilic block could in principle provide colloidal stability in water. Therefore, these polymer coating agents are not able to prevent the final aggregation of the hybrids. For this reason, we investigated the use of silanes in combination with the hydroxylated polymers to improve the colloidal stability and to preserve the luminescence properties in water.
ZnO QDs with Silanes and a Combination of APTES Silane and Hydroxylated Polymers

The silanization of the ZnO QDs surface by means of alkoxysilanes [35,36] has proven to be a robust method to preserve luminescence in organic and aqueous media. In this work, the three silanes displayed in Scheme 2 were explored; they exhibit different physico-chemical properties: TMODS (TO) is hydrophobic, APTES (AP) is hydrophilic, and TMSPEDATA (TE) is a salt and therefore very soluble in water. The reaction procedure is summarized in Scheme 2. Basically, silanes were added alone or in combination with hydroxylated polymers and block copolymers. The addition of KOH leads to silane hydrolysis and condensation, forming a dense cross-linked network around the ZnO nanoparticle, which prevents the evolution of the ZnO QDs and the loss of luminescence over time. In addition, during condensation, silanol groups can react with the hydroxyl groups of the polymers, forming an organo-inorganic protective coating, as shown in Scheme 2.
As a first step, ZnO QDs with one of the three silanes, or a combination of the silanes shown in Scheme 2a, were prepared, and their photophysical properties are collected in Table 3. As can be seen, λem.max. lies between 550 and 561 nm (λexc. = 365 nm) in ethanol, and the Φ_F is as high as 38% for APTES (ZnO@AP3.5). Increasing the silane concentration or combining the silanes does not significantly increase the Φ_F. In fact, it would appear that 3.5% of silane is enough to provide good protection either in ethanol or in water (Figure 3). Transferring the samples from ethanol to water resulted in a slight decrease in the fluorescence emission compared to EtOH (Figure 3), but the solutions were still luminescent over time (Figure S6).
Comparing the nanoparticles with the three silanes, ZnO@TE presented a low solubility in EtOH due to TMSPEDATA, while ZnO@TO nanoparticles presented a low dispersion in water due to the high hydrophobicity of TMODS. For this reason, and because it exhibited the highest Φ_F, APTES at 3.5 mol% was chosen for combination with hydroxylated polymers in the synthesis of hybrid ZnO QDs (Scheme 2b). The results corresponding to the ZnO QDs synthesis in the presence of a combination of APTES (3.5 mol%) and a selected polymer are shown in Table 4. The λem.max. values in ethanol and water are similar to those of the ZnO QDs coated with polymers (Table 2) or silane (Table 3), indicating that the nature of the coating is not a determining factor; it is likely that other synthetic conditions, such as the temperature or the KOH/Zn(CH3COO)2 ratio, have more influence over the λem.max. The hydrodynamic sizes of these silane-polymer-protected ZnO QDs in ethanol lie between 5 and 6 nm (Table 4), slightly smaller than the hydrodynamic sizes of ZnO QDs synthesized solely in the presence of the hydroxylated polymers (Table 2). TEM images and histograms of the particle size distribution of representative samples with the different types of ligands are shown in Figure 4a-c, whereas in Figure S7 a chart comparing the mean size values of some representative samples is displayed. In Figure 4, it is observed in all cases that the particles are spherical and well dispersed. The histograms displayed in Figure 4, determined by measuring the diameter of more than 120 ZnO nanoparticles, evidence that polymer-coated ZnO QDs exhibit broader distributions and larger sizes (3.8 to 5 nm, Figure 4 (a.i, a.ii, b.i, b.ii)) than silane-coated ZnO QDs (2.6 to 4.4 nm, Figure 4 (c.i, c.ii, c.iii)). The nanoparticles with the hydrophobic TMODS silane present larger sizes than those synthesized with hydrophilic APTES or a combination of APTES and TMODS. In agreement with this result, the combination of APTES and hydroxylated copolymers resulted in ZnO QDs of smaller sizes than when they are coated solely with polymer (Figure 4a,b). The sizes estimated by TEM (Figure S7) match the results obtained from the XRD data (Figure S8), where a broadening of the diffraction peaks can be observed as the size of the hybrid nanoparticles decreases. This behavior also indicates a reduction in the crystallinity of the samples with silane functionalization.
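The histogram statistics described above (mean diameter from more than 120 measured particles) amount to the computation sketched below; the diameters are synthetic random numbers standing in for manual TEM measurements.

```python
import numpy as np

# Mean diameter and distribution width from per-particle TEM measurements.
# The array below is synthetic data, not the paper's measurements.

rng = np.random.default_rng(0)
diameters = rng.normal(loc=4.5, scale=0.7, size=120).clip(min=1.0)  # nm

mean, std = diameters.mean(), diameters.std(ddof=1)
counts, edges = np.histogram(diameters, bins=np.arange(2.0, 7.5, 0.5))
print(f"mean = {mean:.1f} nm, std = {std:.1f} nm (n = {diameters.size})")
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.1f}-{hi:.1f} nm: {'#' * c}")   # crude text histogram
```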
In Figure 5, the evolution of the integrated emission in ethanol and water is presented for selected ZnO@AP-polymer samples. For all samples investigated, it is observed that the emission noticeably increases in ethanol after 7 days, remaining practically unchanged during the entire period investigated. When transferring the hybrid ZnO QDs to water, the λem.max. experiences a red shift compared to ethanol (Table 4), while the fluorescence emission decreases after 7 or 14 days in the case of the samples synthesized with APTES in combination with the PG14k or HP6k homopolymers, respectively.
By contrast, the protection offered by the block copolymers in combination with APTES is even more remarkable, as can be seen in Figure 5c,d for ZnO@AP-EG12k-PG14k and ZnO@AP-EG21k-HP9k, respectively. In Figure S9, images of these two ZnO QD hybrids dispersed in water and ethanol evidence their colloidal stability in both media. As can be seen in Figure 5c,d, the evolution of the fluorescence emission is similar in both media, ethanol and water, remaining stable for 70 days. After this time, the emission in water is drastically extinguished. This result is promising considering that, in the absence of silane, the emission was quenched after a few days (see Figure S5). Although these samples present high emission over time and are well dispersed in water, an increase in sample scattering when the sample was transferred to water was unavoidable. Since Φ_F is inversely proportional to the absorbed light intensity (Equation (1)), this parameter drops in aqueous solution (e.g., for ZnO@AP-EG21k-HP9k, the Φ_F in H2O is 4.7%).

ZnO QDs as Sensors for Metal Detection

In previous works, it was shown that the loss of fluorescence experienced by ZnO QDs in the presence of certain metals can be used to sense these analytes in water [12-14,43]. In particular, a loss of fluorescence has been noticed in the presence of Cr6+, Cu2+ and Fe3+ [13], Fe2+ [12], and Cu2+ [14]. Accordingly, certain hybrid ZnO QDs synthesized in this work were selected to explore this environmental application. Specifically, ZnO QDs coated with the HP6k homopolymer (ZnO@HP6k) and ZnO coated with AP silane and the EG21k-HP9k block copolymer (ZnO@AP-EG21k-HP9k) were investigated as visible "turn-off" sensors for different metal ions. In Figure 6a,b, the decrease in fluorescence for ZnO@HP6k as a function of metal type (100 µM aqueous solution) proves that the nanohybrids are feasible for Fe2+, Cr6+, and Cu2+ detection compared to the other metal ions tested (Li+, Mg2+, K+, Ca2+, Mn2+, Fe3+, Co2+, Ni2+, Hg2+, and Pb2+). Remarkably, a decrease in the fluorescence emission of almost 90% was detected in the presence of a 100 µM solution of Cu2+.

To verify the sensitivity of the ZnO QD hybrids against the quencher concentration, Stern-Volmer plots representing F0/F vs. the concentration of Cu2+ and Fe2+ are displayed in Figure 6d,f, respectively. As shown in Figure 6, the fluorescence emission of the hybrid ZnO@HP6k decreased significantly after incubation with increasing concentrations of Cu2+ and Fe2+, showing high sensitivity and a linear response in the range from 0 to 100 µM of the quencher metal. Similar results were obtained when the ZnO@AP-EG21k-HP9k nanohybrid was incubated in the presence of these metallic salts (Figure S10). It also reveals a noticeable selective fluorescence quenching by Cu2+, Fe2+, and Cr6+. It is interesting that both systems exhibit similar behavior despite the different polymeric composition and the presence or absence of AP silane.
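The Stern-Volmer analysis mentioned above amounts to fitting F0/F = 1 + Ksv·[Q] over the linear range; a minimal sketch with synthetic intensities (the Ksv value is invented for illustration):

```python
import numpy as np

# Stern-Volmer fit, F0/F = 1 + Ksv*[Q], by least squares through the
# origin over the 0-100 uM range. F values are synthetic placeholders,
# not the measured emission intensities.

conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0]) * 1e-6   # mol/L
F0 = 1.0
F = F0 / (1.0 + 2.0e4 * conc)     # fake quenching with Ksv = 2e4 M^-1
ratio = F0 / F

# Fit (F0/F - 1) = Ksv * [Q]:
ksv = np.sum(conc * (ratio - 1.0)) / np.sum(conc ** 2)
print(f"Ksv ~ {ksv:.2e} M^-1")
```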
Conclusions

Fluorescent hybrid ZnO QDs with sizes between 4 and 5 nm were successfully synthesized using the sol-gel method and different polymers with hydroxyl groups as templates and ligands. By means of reversible addition-fragmentation chain transfer (RAFT) polymerization, multihydroxylated polymers and block copolymers with low dispersity were obtained to establish the influence of the molecular weight, hydrophobic/hydrophilic balance, and polymer architecture on the colloidal and photophysical properties of the ZnO QD nanohybrids. A fluorescence enhancement occurred when ZnO QDs were synthesized in the presence of the hydroxylated polymers, especially when using block copolymers, although the nanoparticles aggregate when they are transferred to aqueous solution. The stability of the ZnO QDs in aqueous medium was achieved by means of a combination of organosilanes and hydroxylated polymers. In fact, these samples exhibited uniform, stable and enhanced photoluminescence for nearly five months of investigation. The exceptional photoluminescent properties of these new ZnO QDs, coupled with their low price, mean that they can be expected to find biotechnological or environmental applications, such as metal detection. Indeed, the fluorescence quenching in the presence of metals such as Fe2+, Cr6+, and Cu2+ makes these ZnO QDs promising materials for the detection of environmental contaminants.
Quantum enhanced positioning and clock synchronization

A wide variety of positioning and ranging procedures are based on repeatedly sending electromagnetic pulses through space and measuring their time of arrival. This paper shows that quantum entanglement and squeezing can be employed to overcome the classical power/bandwidth limits on these procedures, enhancing their accuracy. Frequency entangled pulses could be used to construct quantum positioning systems (QPS), to perform clock synchronization, or to do ranging (quantum radar): all of these techniques exhibit a similar enhancement compared with analogous protocols that use classical light.

Quantum entanglement and squeezing have been exploited in the context of interferometry [1][2][3][4][5], frequency measurements [6], lithography [7], and algorithms [8]. Here, the problem of positioning a party (say Alice) with respect to a fixed array of reference points will be analyzed. Alice's position may be obtained simply by sending pulses that originate from her position and measuring the time it takes for each pulse to reach the reference points. The time of flight, the speed of the pulses and the arrangement of the reference points determine her position. The accuracy of such a procedure depends on the number of pulses, their bandwidth and the number of photons per pulse. This paper shows that by measuring the correlations between the times of arrival of $M$ pulses which are frequency-entangled, one can in principle increase the accuracy of such a positioning procedure by a factor $\sqrt{M}$ as compared to positioning using unentangled pulses with the same bandwidth. Moreover, if number-squeezed pulses can be produced [9], it is possible to obtain a further increase in accuracy of $\sqrt{N}$ by employing squeezed pulses of $N$ quanta, vs. employing "classical" coherent states with mean photon number $N$. Combining entanglement with squeezing gives an overall enhancement of $\sqrt{MN}$. In addition, the procedure exhibits improved security: because the timing information resides in the entanglement between pulses, it is possible to implement [10] quantum cryptographic schemes that do not allow an eavesdropper to obtain information on the position of Alice. The primary drawbacks of this scheme are the difficulty of creating the requisite entanglement and the sensitivity to loss. On the other hand, the frequency entanglement allows similar schemes to be highly robust against pulse broadening due to transit through dispersive media [11]. The clock synchronization problem can be treated using the same method. In Refs. [12] and [13] two novel techniques for clock synchronization using entangled states are presented.
However, the authors of Ref. [12] themselves point out that the resources needed by their scheme could be used to perform conventional clock synchronization without entanglement. Similarly, all the enhancement of [13] arises from employing high-frequency atoms which themselves could be used for clock synchronization to the same degree of accuracy without any entanglement. In neither case do these proposals give an obvious enhancement over classical procedures that use the same resources. Here, by contrast, it is shown that quantum features such as entanglement and squeezing can in principle be used to supply a significant enhancement of the accuracy of clock synchronization as compared to classical protocols using light of the same frequency and power. In fact, the clock synchronization can be accomplished by sending pulses back and forth between the parties whose clocks are to be synchronized and measuring the times of arrival of the pulses (Einstein's protocol). In this way synchronization may be treated on the same basis as positioning, and the same accuracy enhancements may be achieved through entanglement and squeezing. In this paper only the positioning accuracy enhancement will be addressed in detail.

In order to introduce the formalism, the simple case of position measurement with classical coherent pulses is now presented. Since each dimension can be treated independently, the analysis will be limited to the one-dimensional case. For the sake of simplicity, consider the situation in which Alice wants to measure her position $x$ by sending a pulse to each of $M$ detectors placed at known positions (refer to Fig. 1). This can be easily generalized to different setups, such as the case in which the detectors are not all in the same location, the case in which only one detector is employed with $M$ time-separated pulses, the case in which the pulses originate from the reference points and are measured by Alice (as in GPS), etc. Alice's estimate of her position is given by $x = \frac{c}{M}\sum_{i=1}^{M} t_i$, where $t_i$ is the travel time of the $i$-th pulse and $c$ is the light speed. The variable $t_i$ has an intrinsic indeterminacy dependent on the spectral characteristics and mean number of photons $N$ of the $i$-th pulse. For example, given a Gaussian pulse of frequency spread $\Delta\omega$, according to the central limit theorem, $t_i$ cannot be measured with an accuracy better than $1/(\Delta\omega\sqrt{N})$, since it is estimated at most from $N$ data points (i.e. the times of arrival of the single photons, each having an indeterminacy $1/\Delta\omega$). Thus, if Alice uses $M$ Gaussian pulses of equal frequency spread, the accuracy in the measurement of the average time of arrival is

$$\Delta \bar t \simeq \frac{1}{\Delta\omega\,\sqrt{MN}} \;. \qquad (1)$$

Quantum Mechanics allows us to do much better. In order to demonstrate the gain in accuracy afforded by Quantum Mechanics, it is convenient to provide first a fully quantum analysis of the problem of determining the average time of arrival of a set of $M$ classical pulses, each having mean number of photons $N$. Such a quantum treatment of a classical problem may seem like overkill, but once the quantum formalism is presented, the speedup attainable in the fully quantum case can be derived directly. In addition, it is important to verify that no improvement over Eq. (1) is obtainable using classical pulses. The $M$ coherent pulses are described by a state of the radiation field of the form

$$|\Psi_{cl}\rangle = \bigotimes_{i=1}^{M} \bigotimes_{\omega} |\alpha(\lambda_\omega)\rangle_i \;, \qquad \lambda_\omega = \sqrt{N}\,\phi_\omega \;, \qquad (2)$$

where $\phi_\omega$ is the pulses' spectral function, $|\alpha(\lambda_\omega)\rangle_i$ is a coherent state of amplitude $\lambda_\omega$ in the mode at frequency $\omega$ directed towards the $i$-th detector, and $N$ is the mean number of photons in each pulse.
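The $1/(\Delta\omega\sqrt{MN})$ scaling of Eq. (1) can be checked with a quick Monte Carlo, treating each photon's arrival time as an independent Gaussian variable of spread $1/\Delta\omega$ (an idealization of the time-bandwidth relation assumed for this sketch):

```python
import numpy as np

# Classical limit check: with M pulses of N photons each, the standard
# deviation of the estimated mean arrival time scales as 1/(dw*sqrt(M*N)).

rng = np.random.default_rng(1)
dw, M, N, trials = 1.0, 10, 100, 2000
dt = 1.0 / dw                      # single-photon timing spread ~ 1/dw

# Each trial: M*N independent photon arrival times; estimate = their mean.
times = rng.normal(0.0, dt, size=(trials, M * N))
estimates = times.mean(axis=1)

print(f"simulated accuracy : {estimates.std():.4f}")
print(f"1/(dw*sqrt(M*N))   : {1.0 / (dw * np.sqrt(M * N)):.4f}")
```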
The pulse spectrum $|\phi_\omega|^2$ has been normalized so that $\int d\omega\, |\phi_\omega|^2 = 1$. For detectors with perfect time resolution, the joint probability for the $i$-th detector to detect $N_i$ photons in the $i$-th pulse at times $t_{i,k}$ is given by [14]

$$p(\{t_{i,k}\}) \propto \Big\langle \prod_{i=1}^{M} \prod_{k=1}^{N_i} E_i^{(-)}(t_{i,k})\, E_i^{(+)}(t_{i,k}) \Big\rangle \;, \qquad (3)$$

where $t_{i,k}$ is the time of arrival of the $k$-th photon in the $i$-th pulse, shifted by the position of the detectors, $t_{i,k} \to t_{i,k} + x/c$. The signal field at the position of the $i$-th detector at time $t$ is given by $E_i^{(+)}(t) \propto \int d\omega\, a_\omega^{(i)} e^{-i\omega t}$, where $a_\omega^{(i)}$ annihilates a photon of frequency $\omega$ directed towards the $i$-th detector. For the coherent pulses (2), Eq. (3) yields

$$p(\{t_{i,k}\}) \propto \prod_{i=1}^{M} \prod_{k=1}^{N_i} |g(t_{i,k})|^2 \;, \qquad (4)$$

where $g(t) \equiv \int d\omega\, \phi_\omega\, e^{-i\omega t}$ is the Fourier transform of the spectral function $\phi_\omega$. Averaging over the times of arrival $t_{i,k}$ and over the number of photons $N_i$ detected in each pulse, one has

$$\Delta \bar t \ge \frac{\Delta\tau}{\sqrt{MN}} \;, \qquad (5)$$

with approximate equality for $N \gg 1$. Here $\tau \equiv \int dt\; t\, |g(t)|^2$ and $\Delta\tau^2 \equiv \int dt\; t^2 |g(t)|^2 - \tau^2$ are independent of $i$ and $k$, since all the photons have the same spectrum. Eq. (5) is the generalization of (1) for non-Gaussian pulses.

Quantum light can exhibit phenomena that are not possible classically, such as entanglement and squeezing, which, as will now be seen, can give significant enhancement for determining the average time of arrival. First consider entanglement. The framework just established allows the direct comparison between frequency entangled pulses and unentangled ones. For the sake of clarity, consider initially single photon entangled pulses. Define the "frequency state" $|\omega\rangle$ for the electromagnetic field as the state in which all modes are in the vacuum state, except for the mode at frequency $\omega$, which is populated by one photon. Thus the state $\int d\omega\, \phi_\omega |\omega\rangle$ represents a single photon wave packet with spectrum $|\phi_\omega|^2$. Consider the $M$-photon frequency entangled state given by

$$|\Psi_{en}\rangle = \int d\omega\; \phi_\omega\, |\omega\rangle_1 \cdots |\omega\rangle_M \;, \qquad (6)$$

where the ket subscripts indicate the detector each photon is traveling to. Inserting $|\Psi_{en}\rangle$ in Eq. (3), and specializing to the case $N_i = 1$, it follows that

$$p(t_1, \cdots, t_M) \propto \big|g(t_1 + \cdots + t_M)\big|^2 \;. \qquad (7)$$

That is, the entanglement in frequency translates into the bunching of the times of arrival of the photons of different pulses: although their individual times of arrival are random, the average $\bar t \equiv \frac{1}{M}\sum_{i=1}^{M} t_i$ of these times is highly peaked. (The measurement of $\bar t$ follows from the correlations in the times of arrival at the different detectors.) Indeed, from Eq. (7) it results that the probability distribution of $\bar t$ is $|g(M\bar t)|^2$. This immediately implies that the average time of arrival is determined to an accuracy

$$\Delta \bar t = \frac{\Delta\tau}{M} \;, \qquad (8)$$

where $\Delta\tau$ is the same as in Eq. (5). This result shows a $\sqrt{M}$ improvement over the classical case (5). To emphasize the importance of entanglement, Eq. (8) should be compared to the result one would obtain from an unentangled state analogous to $|\Psi_{en}\rangle$. To this end, consider the state defined as

$$|\Psi_{un}\rangle = \bigotimes_{i=1}^{M} \int d\omega\; \phi_\omega\, |\omega\rangle_i \;, \qquad (9)$$

which describes $M$ uncorrelated single photon pulses, each with spectral function $\phi_\omega$. By looking at the spectrum of the state obtained by tracing away all but one of the modes in (6), each of the photons in (9) can be shown to have the same spectral characteristics as the photons in the entangled state $|\Psi_{en}\rangle$. Now, using Eq. (3) for the uncorrelated $M$ photon pulses $|\Psi_{un}\rangle$, it follows that

$$p(t_1, \cdots, t_M) \propto \prod_{i=1}^{M} |g(t_i)|^2 \;, \qquad (10)$$

which is the same result that was obtained for the classical state (2). Thus Eq. (5) holds, with $N = 1$, also for $|\Psi_{un}\rangle$. From the comparison of Eqs. (5) and (8), one sees that, employing frequency-entangled pulses, an accuracy increase by a factor $\sqrt{M}$ is obtained in the measurement of $\bar t$ with respect to the case of unentangled photons. Since $|\Psi_{en}\rangle$ is tailored so as to give the least indetermination in the quantity $\bar t$, it is appropriate for the geometry of the case given in Fig. 1, where the sum of the times of arrival is needed.
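The quantities $\tau$ and $\Delta\tau$ of Eq. (5), and the accuracies of Eqs. (5) and (8), are straightforward to evaluate numerically for a given pulse shape; the Gaussian $|g(t)|^2$ below is only an example:

```python
import numpy as np

# Evaluate tau and delta-tau for a pulse shape g(t), then compare the
# unentangled accuracy of Eq. (5) with the entangled one of Eq. (8).

t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
g2 = np.exp(-t**2)          # |g(t)|^2 for an example Gaussian pulse
g2 /= g2.sum() * dt         # normalize so integral of |g|^2 dt = 1

tau = (t * g2).sum() * dt                            # mean arrival time
dtau = np.sqrt((t**2 * g2).sum() * dt - tau**2)      # spread

M, N = 10, 1
print(f"delta tau                       : {dtau:.3f}")
print(f"unentangled, Eq. (5), N = 1     : {dtau / np.sqrt(M * N):.3f}")
print(f"frequency entangled, Eq. (8)    : {dtau / M:.3f}")
```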
Other entangled states can be tailored for different geometric dispositions of the detectors, as will be shown through some examples. How is it possible to create the needed entangled states? In the case $M = 2$, the twin beam state at the output of a cw-pumped parametric downconverter will be shown to be suitable. It is a 2-photon frequency entangled state of the form $\int d\omega\, \phi_\omega |\omega\rangle_s |\omega_0 - \omega\rangle_i$, where $\omega_0$ is the pump frequency and $s$ and $i$ refer to the signal and idler modes respectively. This state is similar to (6) and it can be employed for position measurements when the two reference points are in opposite directions, e.g. one to the left and one to the right of Alice. In fact, it can be seen that $p(t_1, t_2) \propto |g(t_1 - t_2)|^2$, and hence such a state is optimized for time of arrival difference measurements, as experimentally reported in [15]. In the case of $M = 3$, a suitable state can be obtained starting from a 3-photon generation process that creates a state of the form $\int d\omega\, d\omega'\, f(\omega, \omega')\, |\omega\rangle |\omega'\rangle |\omega_0 - \omega - \omega'\rangle$, and then performing a non-demolition (or a post-selection) measurement of the frequency difference of two of the photons. This would create a maximally entangled 3-photon state, tailored for the case in which Alice has one detector on one side and two detectors on the other side. However, for $M > 2$, the creation of such frequency-entangled states represents a continuous variable generalization of the GHZ state, and, as such, is quite an experimental challenge.

Now turn to the use of number-squeezed states to enhance positioning. The $N$-th excitation of a quantum system (i.e. the state $|N\rangle$ of exactly $N$ quanta) has a de Broglie frequency $N$ times the fundamental frequency of the state. Its shorter wavelength makes such a state appealing for positioning protocols. In this case, the needed "frequency state" is given by $|N\omega\rangle$, defined as the state where all modes are in the vacuum except for the mode at frequency $\omega$, which is in the Fock state $|N\rangle$. The probability of measurement of $N$ quanta in a single pulse at times $t_1, \cdots, t_N$ is given by Eq. (3) with $M = 1$ detectors. It is straightforward to see that, for a state of the form $\int d\omega\, \phi_\omega |N\omega\rangle$, the time of arrival probability is given by

$$p(t_1, \cdots, t_N) \propto \big|g(t_1 + \cdots + t_N)\big|^2 \;. \qquad (11)$$

Such a result must be compared to what one would obtain employing a classical pulse $|\Psi_{cl}\rangle$ of mean photon number $N$, i.e. the state (2) with $M = 1$. Its probability (4) shows that employing the $N$-photon Fock state gives an accuracy increase of $\sqrt{N}$ vs. the coherent state with mean photon number $N$. The similarity of this result (11) with the one obtained in Eq. (7) stems from the fact that the Fock state $|N\omega\rangle$ can be interpreted as composed of $N$ one-photon pulses of identical frequency. Hence, all the results and considerations obtained previously apply here. An experiment which involves such a state for $N = 2$ is reported in [16]. Entangled pulses of number-squeezed states combine both these enhancements. By replacing $|\omega\rangle$ with the number-squeezed states $|N\omega\rangle$ in the $M$-fold entangled state (6), one immediately obtains an improvement of $\sqrt{MN}$ over the accuracy obtainable by using $M$ classical pulses of $N$ photons each.

The enhanced accuracy achieved comes at the cost of an enhanced sensitivity to loss. If one or more of the photons fails to arrive, the times of arrival of the remaining photons do not convey any timing information. The simplest way to solve this problem is to ignore all trials where one or more photons is lost.
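The relative gains discussed above, taken with respect to the classical baseline of $M$ pulses with $N$ mean photons each, can be tabulated directly; the values of $M$ and $N$ below are purely illustrative:

```python
# Accuracy gains relative to M classical pulses of N mean photons each
# (classical accuracy ~ dtau/sqrt(M*N)):
#   - M frequency-entangled single photons: gain sqrt(M)
#   - one N-photon Fock state per pulse:    gain sqrt(N)
#   - entangled Fock-state pulses:          gain sqrt(M*N)

from math import sqrt

for M, N in ((2, 1), (3, 1), (1, 2), (10, 10)):
    print(f"M={M:2d}, N={N:2d} | entangled x{sqrt(M):.2f} | "
          f"Fock x{sqrt(N):.2f} | combined x{sqrt(M * N):.2f}")
```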
A more sophisticated method is to use partially entangled states: these states provide a lower level of accuracy than fully entangled states, but are more tolerant to loss. As shown in Fig. 2, even the simple protocol of ignoring trials with loss still surpasses the unentangled state accuracy limit for significant loss levels. The use of intrinsically loss-tolerant, partially entangled states does even better [10].

Before closing, it is useful to consider the following intuitive picture of quantum measurements of timing. A quantum system such as a pulse of photons or a measuring apparatus with spread in energy $\Delta E$ can evolve from one state to an orthogonal state in a time $\Delta t$ no less than $\pi\hbar/(2\Delta E)$ [17]. Accordingly, to make more accurate timing measurements, one requires states with sharp time dependence, and hence high spreads in energy. Classically, combining $M$ systems each with spread in energy $\Delta E$ results in a joint system with spread in energy $\sqrt{M}\,\Delta E$. Quantum-mechanically, however, $M$ systems can be put in entangled states in which the spread in energy is proportional to $M \Delta E$. Similarly, $N$ photons can be joined in a squeezed state with spread in energy $N \Delta E$. The Margolus-Levitin theorem [18] limits the time $\Delta t$ it takes for a quantum system to evolve from one state to an orthogonal one by $\Delta t \ge \pi\hbar/(2E)$, where $E$ is the average energy of the system (taking the ground state energy to be 0). This result implies that the $\sqrt{MN}$ enhancement presented here is the best one can do.

In conclusion, quantum entanglement and squeezing have been shown to increase the accuracy of position measurements, and, as a consequence, they can also be employed to improve the accuracy of distant clock synchronization. For maximally entangled $M$-particle states we have shown an accuracy increase $\propto \sqrt{M}$ vs. unentangled states with identical spectral characteristics. A further increase $\propto \sqrt{N}$ in accuracy in comparison with classical pulses was also shown for the measurement of $N$-quanta states. At least for the simple cases of $M = 2$ or $N = 2$, the described protocols are realizable in practice. This work was funded by the ARDA, NRO, and by ARO under a MURI program.

FIG. 2. Sensitivity to loss. The quantum efficiency $\eta$ needed for having an accuracy increase over the unentangled state $|\Psi_{un}\rangle$ is plotted vs. the number $M$ of photons (here $N = 1$). The upper white region is where $|\Psi_{en}\rangle$ does better than $|\Psi_{un}\rangle$. The white and light grey regions are where a partially entangled state, which exploits a configuration where one partially entangles subgroups of 2 maximally entangled photons, does better than $|\Psi_{un}\rangle$. The lower dark region is where $|\Psi_{un}\rangle$ does better.
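An order-of-magnitude version of the boundary in Fig. 2 can be derived for the simple discard-on-loss protocol discussed above; the threshold below is an illustrative estimate under stated assumptions, not the paper's exact calculation. With per-photon efficiency $\eta$, an $M$-photon entangled trial survives with probability $\eta^M$ and yields accuracy $\Delta\tau/M$, while each of $M$ unentangled photons survives with probability $\eta$ and yields $\Delta\tau$; averaging over $K$ trials, entanglement wins when $(\Delta\tau/M)/\sqrt{K\eta^M} < \Delta\tau/\sqrt{KM\eta}$, i.e. $\eta > M^{-1/(M-1)}$.

```python
# Illustrative loss threshold for the discard-on-loss protocol:
# entangled beats unentangled (N = 1) when eta > M**(-1/(M-1)).
# This is a rough estimate, not the exact boundary plotted in Fig. 2.

for M in (2, 3, 5, 10, 20):
    eta_min = M ** (-1.0 / (M - 1))
    print(f"M = {M:2d}: entangled beats unentangled for eta > {eta_min:.3f}")
```

As expected, the required efficiency grows towards 1 as $M$ increases, consistent with the qualitative shape of the boundary described in the caption of Fig. 2.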
Child-friendly Kampoong: quality of play value criteria for children's identity and play place in Malang, Indonesia

The paper discusses the child-friendly kampoong. A kampoong, with its complex and diverse environment, is a dense residential reality in urban areas that gives the impression of being a slum and unorganized. The kampoong actually contains cultural values of shared life that must be maintained and preserved, especially for children. The paper aims to study the criteria for developing the quality of play value for children's identity and play places in Malang, Indonesia. The research method for the social identity and children's play places uses a narrative method with a qualitative approach to obtain the general condition of the child-friendly kampoong. The method used in determining the quality of play value criteria is the Analytic Hierarchy Process (AHP), through interviews with experts. The discussion of the child-friendly kampoong covers the child-friendly city; the character of child-friendly play value space; the child-friendly kampoong as social identity; and children's play places and the quality of play value space. The results indicate that the main criterion for quality of play value, according to the experts, is safe spaces, with a weight of 0.141. Using several broader perspectives, it is hoped that this paper can foster awareness about safe, comfortable and enjoyable playgrounds in dense urban environments.

Child-Friendly Kampoong as the Implementation of the Child-Friendly City

As an effort to accelerate the fulfillment of children's rights, several cities in Indonesia have taken the initiative to form child-friendly kampoong programs. A child-friendly kampoong is implemented in the form of a pilot project. This pilot project is an activity that makes a community group in an area a model for other regions, running according to the program objectives that have been established in the rules. The implementation of child-friendly cities in the form of child-friendly kampoongs follows rules that are legally legitimate, so that all components involved act together to achieve the goals. Child-friendly kampoongs are implemented through a series of activities. The kampoong area is analyzed as the smallest unit after the family able to accommodate the various child-friendly indicators, which means ensuring the condition of children and their rights in living their lives. Thus, a child-friendly kampoong can be defined as a place providing interaction space, so that the community can more easily socialize and build awareness of children's rights. A child-friendly kampoong is a program unit carried out by citizens who are members of the kampoong association, in the form of efforts to fulfill children's civil rights and to provide opportunities to grow and develop, based on realistic conditions, towards a kampoong that is able to provide comfort, livability, and suitability for development on the basis of health, education and legal protection, built on independent initiative. This program is implemented in an integrated manner with the activities of the regional and neighborhood units as a fulfillment of basic living needs. Child-friendly kampoongs are a tangible manifestation of increased awareness that guarantees the fulfillment of children's rights at the kampoong level and ensures efforts to pay attention to the needs, aspirations, attention, and appreciation of children without discrimination.
Child-friendly kampoongs are expected to be physically and non-physically capable of meeting the needs and rights of children. The development of child-friendly kampoongs is expected to unite the commitment and resources of the kampoong, the community and the business world within the kampoong, so as to respect, guarantee and fulfill the rights of children, protect children from acts of violence, exploitation, harassment and discrimination, and hear children's opinions, in a way that is consciously planned, comprehensive and sustainable. The concept of child-friendly city policy, as stated by Corsi [1], rests on two supporting models: first, a model oriented towards education, cognition and norms, and second, a model promoting social participation. The concept of environmental security, as formulated by Tranter and Sharpe [2], holds that the danger of traffic causes parents to watch over their children. The concept of fulfilling basic rights was conveyed by Wilks [3]: the fulfillment of children's basic rights to obtain all basic services and security, as well as protection from exploitation, is also a concern in realizing a child-friendly city.

Criteria for play value include aspects of location and size; safe spaces; easy access to entrances; circulation paths; signage; seating; fencing; playground equipment; lighting facilities; trees and plants; gardens; environmental sustainability; and sand and water playgrounds. The location of play spaces should be in areas that are easily accessible to children and have visual views from all directions [4]. Criteria for a safe space include adequate lighting facilities, emergency telephones and a fence that fully encloses the space [5]. Appropriate signage provides important information about the space and gives directions for movement inside the open spaces [6]. Seating arrangements can support or hinder social interaction [5]. According to Shackell [4], the following design considerations must be taken into account when fencing a child-friendly playing space: the fence acts as barrier, protector, and aesthetic element. Playground equipment should be multipurpose and support the development of creativity and coordination [4]. Lighting serves security, protection against crime, and aesthetics [7]. Trees create a variety of play activities, such as climbing, hide and seek, exploring, discovering, imaginative play, and gathering, and they stimulate children's senses [4,6,8]. A garden is one of the best ways to allow children to interact with each other and with nature: children learn about ecological cycles and how to preserve the environment, and gardens foster cooperation between children. Trees in the garden are chosen according to their roots, water requirements, endurance and growing behavior [8]. A sand area in child-friendly spaces must be close to a pathway that is easily accessible to children with disabilities, and located near or under a tree for protection from sun and wind [4,6]. Water playgrounds can be integrated into child-friendly playing spaces through various forms, such as fountains, spray pools and water tables [5].

The Character of Child-Friendly Play Value Space

The greater a city's flow of urbanization, the more populated it becomes. Gradually, its calm and comfort are eroded, replaced by the hustle and bustle of community activity and competition for jobs amid expensive living needs. This situation makes a city no longer friendly, especially for children.
Many parents or communities forget to pay attention to their children. Parents only fulfill the need for clothing, food and shelter, even though children also need love and attention. Besides that, many playgrounds have been lost, converted to residential functions. Children now play more at home with their gadgets, access the internet at the side of the road, watch television all day, or play alone with computer games. The child-friendly kampoong program is a form of city government effort to empower people to care for children's growth, encouraging people to be aware of the importance of preparing the next generation well. A good next generation certainly needs a conducive environment as a medium for growth and development. The added value of a child-friendly kampoong is making children feel more comfortable. This means that the child must be comfortable in his or her own kampoong, through the planting of moral values in daily life; for example, adults who smoke are provided with designated places. In addition, children are involved in various activities, such as the waste bank program, so that the children in the kampoong are more focused in their activities. Beyond this, in the child-friendly kampoong children are also involved in activities such as reading gardens, sustainable food house areas, children's playgrounds, arts stages, healthy-toddler and fond-of-learning programs, garbage banks, kampoong nursery gardens, and skills programs for school dropouts. The kampoong dominates the allotment of land in Indonesian cities (around 70 percent) and is the foundation of housing for 70 to 85 percent of the city population. Meanwhile, housing provision through formal channels by the private sector and the government can supply only around 15 percent of the total need for homes in urban areas. Data and facts show that self-help residents are the largest suppliers of housing. The kampoong follows the compact city principle with mixed uses; it is a kind of mini-urban collage that allows residents to continue to develop the principles of diversity, tolerance and solidarity. One space is used for various needs: streets in kampoongs are used for passing vehicles, playing, hanging out, and earning a living. State protection of children's right to play is a protection of human rights, so there is no reason for the state not to enforce children's right to play. Playing is a direct and spontaneous activity undertaken for fun. Every child wants to play, because when playing children feel comfortable, happy and not depressed. The playing function can improve language development, discipline, moral development, creativity and physical development. Thus, children's play activities are a way for children to engage in activities that contain elements of learning, carried out with pleasure and relaxation, without any pressure on the child. The element of learning in children's play activities is beneficial for physical and motoric development, for psychological development (emotion, attitude, intelligence, perception) and as a medium for children to develop their social relationships. Play is considered very important for physical and psychological development, so all children should be given time and opportunity to play and be encouraged to play, regardless of their family's socioeconomic status [9].
During play, children develop various social skills that make it possible to enjoy group membership in the children's community. Hurlock [9] further explained children's play patterns, classified into playing activities in early childhood and playing activities in late childhood. Playing activities in early childhood are often called the toy stage, because in this period almost all games use toys; interest in playing with toys begins to decrease when children reach school age. Hurlock [9] states that playing in childhood is a serious activity that forms an important part of development in the first years of childhood. In this perspective, the playing activities of early childhood differ from one location to another; for example, the play patterns of American children differ from those of Asian and African children. Likewise, not all play patterns remain equally popular from one period to the next. Playing activities at the end of childhood differ from those at the beginning. In early childhood, playing activities tend to be individual, while in late childhood play activities prioritize playing together in groups and favor popular games. Playing activities at the end of childhood also tend to be more constructive: exploring, collecting, sports, and games that contain elements of entertainment. The character of the child-friendly play value space provides an overview of importance criteria such as: children must feel safe and want to play in the area; improving health and well-being; social meeting space; various types of paths that support a variety of different activities; design for playing purposes; encouraging interaction between children; providing a sense of security, enclosure and support for activities; supporting the development of body muscles, social interaction and fantasy play; providing a safe and aesthetic atmosphere; stimulating behavior to explore and discover, and encouraging fantasy and imaginative play; increasing social interaction, developing fine motor skills and stimulating the senses; a very good medium for creative play and social interaction; and multisensory characteristics, including sounds and textures, that make children interested and relaxed. Child-Friendly Kampoong Method. The development of a child-friendly city policy cannot be separated from the basic rights of children, which are an important locus in fulfilling children's rights. The rights of the child in question are things that must be fulfilled by parents, the community and the government. Thus, children will avoid discrimination, have special protection, and be able to participate in activities. This is important for achieving the fulfillment of children's rights in a fundamental way. In the kampoong area, children's rights are based on 5 clusters established by the government and translated into 61 child rights indicators. The clusters are as follows: civil rights and freedom; family environment rights and alternative care; the right to basic health and welfare; the right to education, use of leisure and cultural arts activities; and the right to special protection. As an implementation program, child-friendly kampoongs cannot be separated from various problems.
The reality in the field shows that the historical value and the plurality of social, economic and cultural conditions experienced by the community in a kampoong have provided space for its citizens to construct and give subjective meaning to the current existence of child-friendly kampoongs [10]. The research method for the social identity and children's play place in child-friendly kampoongs is a narrative method with a qualitative approach, used to obtain the general condition in Malang city. The research method for the quality of play value place in child-friendly kampoongs is a descriptive method with a quantitative approach, used to obtain the results of the Analytic Hierarchy Process (AHP) applied to experts related to the child-friendly kampoong. The questionnaire design of the AHP was focused on quality factors of the child-friendly play value space in the city kampoong. The criteria used in the AHP analysis consist of the thirteen aspects listed earlier: location and size; safe spaces; entrance; circulation path; signage; seating; fence; playground equipment; lighting facilities; trees and plants; garden; environmental sustainability; and sand and water playgrounds. Child-Friendly Kampoong as Social Identity. Social change is something that is sure to happen in every society. Even though social change may be fairly slow, society does not stagnate and continues to undergo changing social reality. Social change is a series of events that bring people to a new historical dimension, and it includes aspects of acculturation, assimilation and enculturation in the culture being practised. Social change brings people to realize conditions they judge to be no longer relevant in social life. One of these concerns identity. Identity is important because it is a characteristic that distinguishes one human from another. In philosophical terms, according to Aini [11], the formation of identity is divided into three approaches: primordialism, constructivism and instrumentalism. The primordialist approach explains that identity is something obtained naturally (given), formed through a process of hereditary socialization. The constructivist approach explains that identity is a complex social process built through cultural ties in society. In the instrumentalist approach, identity is something constructed for the sake of the elite and of power. Identity is a form of affirming the existence of an individual and his or her group. According to Kinasih [12], identity is a dimension of necessity inherent in human relations, because a person's existence is part of an ethnic group, religion, tradition and language within a particular cultural system. Humans, individually or in groups, place themselves within the corridor of identity in a cultural context. Through identity, the individual is recognized as existing in the social space. Jeffrey Weeks, quoted by Kinasih [12], explains the importance of identity for an individual: identity is about the similarities among a number of people and about what distinguishes you from others. As the most basic thing, identity gives someone a sense of personal location, a stable core for individuality. A sense of belonging in this context provides a sense of security for individuals, and security provides stability in the social system being enacted by individuals in society. Through identity, individuals outside the community provide an assessment that consciously shapes selfhood for the individual. As revealed by Barker [13], social identity is the expectation or opinion of others about selfhood.
However, it is necessary to realize that an identity can change shape as social change recurs. Because it is not taken for granted, identity can be used inconsistently, adapted to the needs of the individuals and groups concerned. This adjustment occurs as a step in forming a positive identity: a meaningful identification step, an effort to refine the identity that is already attached in order to obtain a better view and assessment from others [12]. The social construction built by the residents of a kampoong regarding the child-friendly kampoong program was born through a simultaneous process that took place dialectically, namely objectivation, internalization and externalization. As a government program, the reality of the child-friendly kampoong is present as a manifestation of policies carrying a legally binding set of rules. So far, judging by the reality, the implementation of child-friendly kampoongs shows a gap when assessed in the frame of social construction. These differences can be grouped into several elements: context, issues, agents, strategies and results. In terms of context, the government assumes the kampoong area to be urban sprawl with an inherent negative stigma of being unresponsive to children, so the child-friendly kampoong program is believed to be able to change this. On the other hand, kampoong residents constructed the child-friendly kampoong as a form of adaptation to social change in the region, although it was acknowledged that the child-friendly kampoong program had not yet bound the subjective awareness of citizens. Child-friendly kampoongs have given meaning to the components of society, consisting of children, parents, administrators acting as cadres, and the government. At the same time, the reality of child-friendly kampoongs is an effort to form a positive image of a kampoong. This positive image is believed to change the views of outsiders towards the kampoong's territory, so that people will visit the kampoong area and tell others about the social changes that have taken place. For children, the reality of child-friendly kampoongs has been interpreted as an appreciation of the voices and aspirations conveyed by children. However, modern instrumental views are still held by most kampoong residents; these still place children as objects, as individuals who must submit and obey. This absence of respect for children's voices results in a lack of involvement of children in decision-making at the institutional level. Based on environmental aspects, child-friendly kampoongs are interpreted collectively by parents as areas that can provide a comfortable and safe environment for children's activities. A safe environment is perceived as a place that is far from danger, while a comfortable environment is interpreted as a place that gives children a sense of comfort when playing and spending leisure time.
Children's Play Places. Facilities and infrastructure for children's play should be educative, namely: developing children's personality, attitudes, abilities, talents, and mental and physical capacities to reach their optimal potential; developing respect for human rights and fundamental freedoms; developing respect for parents, cultural identity, language and the child's own values, the national values of the places where children live and come from, and civilizations that differ from their own; preparing children for a responsible life; and developing respect and love for the environment. Playground facilities in open space areas, according to Kusumo [14], can be divided into three categories: play lot, playground and play field. Kampoongs are densely populated, and access in and out of residential areas is connected by footpaths and alleys, which illustrates that insufficient public space is available. Densely populated settlements cause the physical distance between individuals to be very small, so that there is no private space, let alone public space. Spatial conditions like these affect the interaction between residents. Interaction among residents is very high and leads to the emergence of pro-social behavior, such as mutual help among citizens. Social interaction in the space and time of children's play naturally creates types of children's play that make use of spatial situations and the repetition of games; on a daily basis, one can see children playing along the alleys with the same games repeated over time. Although children have many opportunities to play, they are less satisfied, because play affects the mobility of people passing along the kampoong streets and alleys, few children can play together in large numbers, and the games played are very limited. A common phenomenon is the severe lack of children's playgrounds in the kampoong, so that children play around the house or on the road. The games include no traditional games, apart from running around, playing ball in the alley (which sometimes disturbs people passing through), and riding bicycles, which takes children onto the main road and is certainly dangerous. The play tools children use are whatever they can find, such as bricks, bamboo sticks, pipes, and so on. Because of land conditions, children sometimes play in public spaces at government facilities such as schools. Children's games that are repeated over time include running without rules, or sometimes running games with rules that the children create themselves. Other games that children often play are playing ball and playing cards. Children's games in the urban kampoong generally do not discriminate by gender, because boys and girls play together. This situation is advantageous for fostering a pro-gender attitude from childhood and building togetherness. Children's games that involve a group of children can help train children's social development. Social development means the ability to behave in accordance with social demands. Good social development is needed so that children in urban kampoongs are able to socialize and make social adjustments. Several factors influence the social adjustment of children in the city kampoong, as follows. First, children in urban kampoongs have full opportunities to socialize, because every day children have time to play and socialize.
Playing while socializing is important for children in learning to live in a community. If most of their time is spent alone, or in solitary play activities, children lose the opportunity to socialize and learn to live in a community. Second, children who play while socializing are able to talk socially, which ultimately makes them socially acceptable. Such children associate easily and use language appropriately in relationships, even though in general they use the local language. Third, children learn socialization if they have motivation. Motivation largely depends on the satisfaction provided by the social activities offered to children. Although children's play satisfaction is limited by the conditions of place and space, this does not reduce children's motivation to socialize, as can be seen from the frequency with which children in urban kampoongs gather in groups. Although children's games are limited by the available land, they can still develop motor skills: the running movements performed while playing are, in principle, physical movement. Motor development means the development of control of physical movement through the coordinated activity of the nerve centres and the muscles. If a child has no environmental or physical disturbances or mental barriers interfering with normal motor development, the child will be ready to adjust to playing with peers, which strongly supports motor development. Good motor skills provide an opportunity to learn many things, including social skills and the physical security that gives rise to a sense of psychological security and, ultimately, the self-confidence that affects children's behavior. The limited places to play and the lack of varied play activities can still develop children's emotions. Emotional development has a very important role in life. The emotional development of children in the city kampoong is influenced by two factors. The first is the maturation factor, as a form of intellectual development; this is obtained from the children's playing of ball and cards. Ball games are not only physical and cooperative but also help children's intellectual maturation, while playing cards contributes to children's efforts at emotional control. The second is the factor of learning experience: playing is learning, and learning determines which potential reactions children will use to express anger or pleasure. The availability of facilities and infrastructure, or space and playgrounds, for children is an indicator that must be conditioned gradually so that it is achieved and its fulfillment improved. Children's right to a decent place for recreation and play is very important for developing their basic potential, such as emotion, intelligence, creativity, motor skills, social relations, and others. One of the activities of children playing in a narrow alley is running. Even though the playground is very limited, children can still run along the alley; even though they run without rules, this is still useful for training their psychomotor development. Climbing games are one of the bases of the sport of rock climbing. This game can train children's motor development, because one of the functions of play is to train children's motor skills. Climbing is a form of motor skill, but it is unfortunate when a child climbs an electric pole, which can endanger his life because no attention is paid to safety. Children also play using small poles embedded in the alley; the children do not give this game a name, but it is played very often by children in the kampoong.
Although children play with whatever game tools are available, such games can train collaboration as one element of children's psycho-social development. Ball is played in very narrow alleys: children cannot run freely to chase, dribble, fool opponents and kick the ball, so it is difficult for them to develop the skills to play football well. Football technique is trained on an adequate field, not a narrow one, if children are to acquire reliable playing skills; playing ball on a narrow field serves solely the function of health sport, not achievement sport. With no drawing paper available, walls are used for drawing. Children try to practise their creativity even though the walls become littered. Behind the children training their drawing skills on the wall is reflected the low economic ability of the family to buy drawing paper; on the other hand, scribbling on walls reflects the slum character of a city kampoong. Child-Friendly Kampoong: Quality of Play Value Space. Based on the weighting results of all experts, the highest value of the experts' geometric average is criterion B, safe space (security), with a value of 0.141 (Figure 1). Three experts argue that security is a priority in the development of child-friendly play value. Security, as explained above, can take the form of the safety of playground equipment, safety in interacting with each other, safety in absorbing information, and safety in consuming food available in the open space. This accords with research by Munoz [15], which states that children's playing spaces must avoid danger without reducing the child's motivation to be braver, by providing safe facilities. The safe-space criterion ranks highest because of the need to maintain child safety in using all elements of the playing place; it aims to minimize the dangers children face, and it is for this reason that the child-friendly kampoong concept was created. The safe-space criterion is also stated by Parson [8]: the creation of safe open spaces refers to the health and well-being of all children, protected from all dangerous conditions. The lowest criterion value based on the weighting calculation of all experts is the entrance, criterion E, with a value of 0.018. This is because, although the entrance is important, it is less significant: a single gate entrance suffices, and a child-friendly play space does not need many entrances. The single-gate entrance system also serves to maintain the safety of children playing in the kampoong open space. This evaluation is in harmony with the statement by Parson [8] that seating in child-friendly open spaces mainly involves aspects of functionality and waiting zones. Conclusion. It is fitting that child-friendly city policies be known by the community, so that community participation in realizing children's playgrounds can be continuously improved. This will encourage the government not to ignore the function of children's playgrounds, so that we avoid cities with many parks that serve the beauty of the city rather than children's play. Children's right to play is sometimes neglected because of the income level of parents in urban kampoongs, who are generally informal-sector workers. The government is therefore obliged to provide space or playground areas for children who live in densely populated areas such as urban kampoongs.
The availability of space or areas for children's playgrounds, whether play lots, playgrounds or play fields, is very limited in urban areas. The most influential criteria of play value space for the child-friendly kampoong in Malang city are determined by the weighting results from all experts in the AHP. The highest value from the experts' geometric average is safe space, with a value of 0.141. Three experts argue that security is the top priority in child-friendly spaces, in the form of playground equipment safety, safety in interacting with others, safety in absorbing information, and safety in consuming the food available in the park. These criteria are considered the main priority for maintaining children's safety in using all elements of the playing space and for minimizing the dangers that occur.
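To make the weighting procedure concrete, here is a minimal Python sketch of the AHP steps used above: each expert supplies a pairwise-comparison matrix on Saaty's 1-9 scale, the matrices are aggregated by element-wise geometric mean (matching the "geometric average of the experts" reported in the results), and priority weights are read off as normalized row geometric means. The three criteria, the three expert matrices and every judgement value are hypothetical illustrations, not the study's data.

```python
import numpy as np

def priority_weights(m):
    # Geometric-mean method: row geometric means, normalised to sum to 1.
    gm = np.prod(m, axis=1) ** (1.0 / m.shape[0])
    return gm / gm.sum()

def consistency_ratio(m, w):
    # Saaty's CR: estimate lambda_max from (A w) / w, compare CI to the
    # random index for the matrix size (0.58 for n = 3).
    n = m.shape[0]
    lam = float(np.mean((m @ w) / w))
    ci = (lam - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    return ci / ri if ri else 0.0

# Hypothetical judgements of three experts over three criteria
# (safe spaces, entrance, signage); reciprocal matrices on the 1-9 scale.
e1 = np.array([[1, 5, 3], [1/5, 1, 1/2], [1/3, 2, 1]])
e2 = np.array([[1, 7, 4], [1/7, 1, 1/3], [1/4, 3, 1]])
e3 = np.array([[1, 4, 2], [1/4, 1, 1/2], [1/2, 2, 1]])

# Aggregate the experts by element-wise geometric mean, then derive weights.
group = np.exp(np.mean(np.log(np.stack([e1, e2, e3])), axis=0))
w = priority_weights(group)
print(dict(zip(["safe spaces", "entrance", "signage"], np.round(w, 3))))
print("CR:", round(consistency_ratio(group, w), 3))
```

Because the weights are normalized to sum to one, a value such as the 0.141 reported for safe spaces can be read directly as that criterion's relative priority among the thirteen criteria.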
2022-06-27T21:06:57.783Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "ac006f90aaf6076d8b3a1db4c94f148c2ea81daa", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/314/1/012080", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ac006f90aaf6076d8b3a1db4c94f148c2ea81daa", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Physics" ] }
234060879
pes2o/s2orc
v3-fos-license
The chief financial officer (CFO) profile and R&D investment intensity: evidence from listed European companies. Purpose – This study aims to investigate whether the characteristics of the chief financial officer (CFO) have an impact on the intensity of corporate research and development (R&D) investment. Design/methodology/approach – Based on hand-collected data for the CFOs of a sample of the largest European listed companies for the period 2013-2016, this study uses regression analyses to test empirically the association of CFO education, CFO gender and CFO age with R&D investment intensity. Findings – The presence of female CFOs, CFOs with a Master of Business Administration (MBA) or Doctor of Philosophy (PhD) degree, and older CFOs is positively associated with the intensity of R&D investment. Research limitations/implications – This study relies on some observable characteristics of CFOs and focuses on large listed companies. Practical implications – The results of this study may help investors, stakeholders and practitioners to understand better which CFO characteristics are more likely to result in higher firm-level R&D investment intensity. Originality/value – This study offers the first insights into the impact of CFOs, as among the most prominent C-suite executives, on the level of corporate investment in R&D activity. Introduction. Corporate research and development (R&D) investment is increasingly capturing the attention of the academic debate, since it is crucial to translating novel technologies into organizational processes, products and services (Almor et al., 2019; Hou et al., 2019; Talke et al., 2010). Decisions concerning corporate investments are typically the responsibility of chief (C-suite) executives (Kor, 2006), who should be accountable to all the stakeholders for an adequate return on whatever they bring to the firm (Bhaumik et al., 2019). Among these executives, chief financial officers (CFOs) play a key role in business decisions, often as the second-in-command to the chief executive officer (CEO) (Caglio et al., 2018). Both the academic literature and the professional community claim that CFOs have expanded their responsibilities from the supervision of financial and accounting processes toward major corporate strategic decision-making (Baxter and Chua, 2008; Bedard et al., 2014; Zorn, 2004). For instance, scholars argue that a CFO contributes to the CEO's corporate strategy on financial plans, the composition of investments and the allocation of resources between alternate projects (Firk et al., 2019). Datta and Iskandar-Datta (2014) claim that the CFO is one of the most prominent C-suite executives and that his/her attributes represent a matter of increasing interest for the future research agenda, particularly considering that individual-level characteristics are likely to affect a firm's business choices and outcomes (Florackis and Sainani, 2018). At the same time, the role of the CFO in improving business innovation is currently a central theme among practitioners, as the following quote illustrates: "CFOs and the finance function can help companies successfully deliver on the full potential of a transformation. To do so, they must be judicious about which activities truly add value and embrace their roles in leading the improvement in both performance and organizational health." (McKinsey: R. Davies
and D. Huey, February 2017). Hence, we argue that a firm's CFO can be expected to play a decisive role in guiding the CEO's decisions on R&D. Accordingly, a crucial question arises concerning whether the individual-level characteristics of the CFO are likely to affect corporate investment in R&D. We aim to answer this question by investigating the impact on R&D investment intensity of three main individual-level attributes of a CFO: (1) educational background; (2) gender; and (3) age. We focus on the educational background, gender and age of the CFO drawing on a number of studies (Bamber et al., 2010; Bertrand and Schoar, 2003; Sun et al., 2019b) suggesting that these individual characteristics may explain variations among companies in managerial decisions on R&D investment (Barker and Mueller, 2002). To document the association between CFOs' characteristics and R&D investment intensity, we use hand-collected data for the CFOs of a sample of the largest European listed companies from 2013 to 2016. We believe that this investigation is important for several reasons. First, corporate R&D investment is a central theme in many countries, and especially in Europe, aiming to promote knowledge sharing and sustain national economic growth (Alam et al., 2019). Second, in the modern business landscape, firms are constantly called upon to increase R&D investment in order to create competitive advantage, leapfrog their competitors and ensure better performance (Chen et al., 2010; Jang et al., 2016; Ruiz-Jiménez et al., 2016). Third, since R&D investment is a risky strategic decision, the specific characteristics of the C-suite executives are important for understanding the conditions under which managers are more likely to build long-term firm value (Coles et al., 2006; Sun et al., 2019a). Our study offers several important insights for the academic debate. First, given that CFOs are currently involved in strategic decisions but have received less attention from scholars (Florackis and Sainani, 2018), we contribute to the literature on the top management team beyond the CEO. In particular, this research allows us to tap into the influence of the CFO on R&D investment intensity. Second, by focusing on the individual-level attributes of CFOs, we grasp further determinants of heterogeneity among firms in sustaining R&D investment and therefore respond to the calls for more investigation into the types of C-suite executives who are more beneficial for firms' outcomes (Naranjo-Gil et al., 2009; Datta and Iskandar-Datta, 2014). As for the managerial implications, this study increases the current knowledge about the types of characteristics that are important for the CFO, who is turning away from being the "numbers guy" of the C-suite while starting to act as a focal actor in decision-making processes. Consistent with this, we argue that the role of the CFO goes beyond financial reporting issues and impacts the whole strategic attitude of the organization toward innovation. Accordingly, investors and stakeholders should be more aware that the role of the CFO is crucial, allowing the firm to be agile in its decision-making on R&D investment implementation, promoting innovation and safeguarding the value creation process from risks. Hence, by showing which CFO characteristics are most closely correlated with R&D investment, this study may enable firms to determine the type of CFO they should hire when they wish to support the growth of R&D.
The remainder of this study is organized as follows: Section 2 reviews the literature and offers a set of hypotheses; Section 3 illustrates the research methodology with a description of the sample, the data collection, the variables and the empirical model; Section 4 describes the results and offers a series of robustness checks; Section 5 concludes the paper. Top management characteristics and R&D investment. The literature explores several factors associated with R&D investment. For instance, some scholars focus on the firm's industry and on interindustry relationships in determining R&D investment (Barge-Gil and López, 2014). We can also find studies arguing for the influence of external factors (Wang, 2010) and maintaining that a better institutional environment may stimulate R&D investment by providing firms with enhanced collaborative capacity (Srholec, 2011; Wang et al., 2015). Yet other studies devote their attention to the relationship between country-level features and R&D investment (Varsakelis, 2001; Wang, 2010). Collectively, a growing stream of studies examines the relationship between the characteristics of the top management and R&D investment under the umbrella of upper echelons theory (Hambrick and Mason, 1984). This is because upper echelons theory suggests that organizational outcomes can be considered a reflection of the values and cognitive bases related to the individual characteristics of top executives (Meyer and Goes, 1988; Barker and Mueller, 2002). In this regard, studies based on the upper echelons perspective state that the educational background of a manager is an important factor affecting the firm's R&D expenditure (Barker and Mueller, 2002; Harymawan et al., 2020), since a higher educational level improves cognitive ability and opens the mind of an individual to the opportunity for innovation (Naranjo-Gil et al., 2009). Moreover, research asserts that gender is a basis on which to understand managerial orientation in the decision-making process (Adams and Ferreira, 2009; Ruiz-Jiménez et al., 2016; Torchia et al., 2011), including in the case of R&D investment (Almor et al., 2019). Finally, scholars claim that R&D investments are strongly influenced by executives' age, because their preferences in business decisions may change over the years due to their risk-taking attitude and career concerns (Holmstrom, 1999; Serfling, 2014). The present study addresses the impact of the educational background, gender and age of the CFO on R&D investment intensity; although extensive attention has been devoted to CEOs, few studies investigate the decision-making on R&D investment of CFOs, who have progressively switched from having mere financial supervision responsibilities to playing the role of a "business partner" of the CEO (Caglio et al., 2018; Florackis and Sainani, 2018). CFO education and R&D investment. Academic research tends to agree that the education of a firm's employees is likely to influence its propensity for innovation, constituting an important part of its absorptive capacity (Cohen and Levinthal, 1990; Sannino et al., 2020; Wenger, 2000) and having an influence in improving both working methods and decision-making. Dahlin et al.
(2005) suggest that educational diversity may be beneficial for a team, allowing it to absorb and use a novel range of information. Other studies emphasize that employees' educational background plays a relevant role in shaping strategic decisions (Bertrand and Schoar, 2003; Finkelstein and Hambrick, 1990) and has an impact on a firm's competitive posture (Hambrick et al., 1996). Studies relying on upper echelons theory suggest that better-educated top executives are more likely to absorb new ideas and promote innovation (Naranjo-Gil et al., 2009) and R&D investment (Barker and Mueller, 2002). Despite the awareness that R&D is not a guarantee of innovation, prior research suggests that R&D investment is an important trigger of it. For instance, investment in R&D is a process that promotes knowledge creation, which entails innovation (Kor, 2006; Wang, 2010), and represents a critical factor for a multinational corporation in sustaining an innovative competitive advantage (Kawai and Chung, 2019). Since there is a link between R&D investment and innovation, we claim that firms whose CFOs have a higher education level show a more positive attitude toward innovation and thus are more likely to invest in R&D. This is consistent with studies suggesting that a higher level of education positively influences R&D investment both at the firm level (Scherer and Huh, 1992) and at the country level (Wang, 2010). Hence, we formulate the first hypothesis as follows: H1. There is a positive relationship between the intensity of R&D investment and CFO education. CFO gender and R&D investment. Diversity in the top management team is an important factor that affects the quality of decision-making processes and increases the variety of perspectives among individuals (Murray, 1989; Hillmann et al., 2015). Scholars claim that groups with diverse characteristics may generate alternative solutions to problems, show increased levels of creativity and support business model innovation (Van der Vegt and Janssen, 2003; Guo et al., 2018). The academic literature increasingly investigates the role of gender as a basis on which to understand the effects of top management diversity on corporate innovation (Nielsen and Huse, 2010). Miller and del Carmen Triana (2009) find that gender diversity on the board is positively associated with innovation, suggesting that the benefits of gender diversity can be converted into R&D expenditure. Østergaard et al. (2010) find that gender diversity among employees positively affects the firm's innovative performance, suggesting that gender diversity is a key variable for understanding the knowledge base of an organization. Torchia et al. (2011) suggest that gender diversity on the corporate board leads to a higher level of organizational innovation, while Ruiz-Jiménez (2016) argues that gender diversity in the top executive team is expected to have indirect beneficial effects on corporate innovation. However, there is also a potential negative impact of gender diversity on R&D investment, which scholars explain with reference to a gender effect on individuals' attitude toward risk-taking. For example, prior research suggests that female directors are more risk averse in corporate decisions than male directors (Adams and Ferreira, 2009). Since R&D projects are risky investments, it is plausible that female directors may be less encouraging about investing in R&D (Chen et al., 2016). However, Almor et al.
(2019) claim that the relationship between female managers and R&D investment is more complex than a question of individuals' attitude toward risk-taking, and managers' cognitive orientations are difficult to measure (Rajagopalan and Datta, 1996). Moreover, Almor et al. (2019) claim that female directors encourage long-term R&D processes resulting in organizational innovation. Despite the existence of contrasting findings, we develop our second hypothesis relying on the proponents of the value of diversity, who suggest that gender diversity in the top management team provides the firm with greater creativity, enlarges the knowledge pool and encourages investment in innovation (Van der Vegt and Janssen, 2003; Adams et al., 2015; Almor et al., 2019; Guo et al., 2018). This leads to the following hypothesis: H2. There is a positive relationship between the intensity of R&D investment and CFO gender. CFO age and R&D investment. Scholars generally agree that age should be considered when investigating how R&D investment varies with the observable characteristics of top executives (Barker and Mueller, 2002; Serfling, 2014; Sun et al., 2019b). This is because managerial actions are related to the changes in individual orientation that arise with age (Hart and Mellons, 1970). For example, under upper echelons theory, Hambrick and Mason (1984) suggest that firms with older managers are less likely to pursue risky strategies, for three main reasons. First, as their age increases, executives have less mental and physical energy to sustain extensive and long-term investment projects (Child, 1974), such as R&D projects (Barker and Mueller, 2002). Second, the preference for a quiet life and the commitment to the organizational status quo tend to increase with age (Bertrand and Schoar, 2003); thus, older executives avoid increasing R&D investment (Barker and Mueller, 2002), which may change the business model. Finally, Barker and Mueller (2002) and Hambrick and Mason (1984) suggest that older executives pay more attention to their financial security, which may lead them to reduce risky investments in R&D. Conversely, there are empirical works that provide different evidence regarding the impact of age. Shefrin (2008) finds that individual risk aversion increases until the age of 70 years and then decreases rapidly. Other scholars argue that younger managers pay more attention to their career development and to the scrutiny of the labor market than older managers (Holmstrom, 1999) and thus could avoid risky investments (Chevalier and Ellison, 1999). Zwiebel (1995) suggests that career concerns may influence corporate choices; therefore, younger executives may avoid innovative investments and pursue less risky projects that are easier to justify to external scrutiny. Hence, from the perspective of the risk-taking attitude, the age of a CFO is negatively associated with the level of R&D investment. Conversely, if we consider career concerns and reputation, it is plausible that the age of a CFO is positively associated with the level of R&D investment. Accordingly, it is unclear how CFO age can affect R&D investment, although we believe that a relationship potentially exists. Hence, we formulate our third, nondirectional prediction as follows: H3. There is a relationship between the intensity of R&D investment and CFO age.
Research design. 3.1 Data, sample and variables. The sample-selection process began by choosing from ORBIS Bureau van Dijk (ORBIS) all 163 nonfinancial listed firms included in the S&P350-Europe index from five European countries (Italy, France, Germany, Spain and the United Kingdom). Using this sample allows us to consider larger listed companies that operate in countries representing a significant portion of the European capital markets while differing in market conditions and legal systems (Devalle et al., 2010). To test the hypotheses, we collected and merged data from different sources. First, we used the ORBIS database to obtain data related to firm-level characteristics (i.e. size, leverage, profit, R&D expenses, intangible assets, etc.). Next, we completed our data set by searching for information (i.e. age, gender and educational background) in the CFOs' biographies on the companies' websites and through the LEXIS/NEXIS database. Firms whose CFOs' biographies were not found were deleted from our sample. After removing observations with missing R&D expenditure (61) and other firm-level data (5), as well as firms for which we were unable to find information on the CFO's profile (16), we obtained a final sample of 81 firms (and 324 firm-year observations). By including data for the same firms covering the fiscal years 2013-2016, we obtained balanced panel data. In Appendix 1, we report the sample composition by country. 3.1.1 Dependent variable. The level of R&D investment undertaken by companies can be operationalized using different proxies. For instance, firm-level R&D investment can be measured as (1) the absolute value of R&D investments; (2) the level of R&D expenditure standardized by firm sales; and (3) R&D outcomes expressed in the form of technologies developed and intellectual capital measures (i.e. patents and copyrights) [1]. In this study, we follow research that determines the firm-level R&D investment intensity (RD_INV_INT) as the R&D expenditure divided by total sales (Chen et al., 2010; David et al., 2001; O'Brien, 2003; Kor, 2006; Sun et al., 2019a). 3.1.2 Main independent variables. To test our hypotheses, we calculate variables related to the CFO's profile with reference to prior literature (Barker and Mueller, 2002). More precisely, the CFO's age (CFO_AGE) is measured as the natural logarithm of his/her age in years (Serfling, 2014), while the CFO's educational stage (CFO_ED) is an indicator variable that equals 1 if the CFO holds a Master of Business Administration (MBA) or a Doctor of Philosophy (PhD) degree and 0 otherwise (Hiebl et al., 2017). The gender of the CFO is measured using a dummy variable (CFO_GENDER) that equals 1 if the CFO is female and 0 when the CFO is male (Francis et al., 2013).
3.1.3 Control variables. We include a set of firm-level control variables to account for other potential determinants of variation in RD_INV_INT. First, we control for firm size (SIZE), since prior research suggests that larger firms may have more resources to invest in R&D projects than smaller ones (Kor, 2006). However, Barker and Mueller (2002) claim that in larger firms the top managers may have less incentive to invest in R&D projects, preferring to avoid risky investments and maintain their power within the organization's status quo. Additionally, we include profitability (PROFIT), since prior studies indicate that a firm's profitability is related to decisions on R&D expenditure (Barker and Mueller, 2002; Kor, 2006; Sun et al., 2019a). At the same time, we include leverage (LEV), because scholars claim that more leveraged firms are likely to avoid onerous long-term investments in R&D to protect their risky financial condition (Barker and Mueller, 2002; Long and Ravenscraft, 1993). Like Sun et al. (2019a), we also take into account the firm's cash flow volatility (CASH_VOL) as a proxy for its financial risk, which can affect managerial decisions on R&D investment. Furthermore, we consider the level of intangible assets (INTAG), because scholars suggest that investment in intangible activity explains a firm's higher or lower propensity to invest in R&D projects (Honoré et al., 2015). Another control variable that we consider is the firm's growth, measured by the market-to-book (MTB) ratio, because research asserts that firms with greater growth opportunities usually engage more in R&D investment (Kuo et al., 2018; Sun et al., 2019a). Finally, we control for the industry type (TECH), distinguishing whether the firm operates in a high-technology sector or in a more traditional business (David et al., 2001; Kuo et al., 2018; Sun et al., 2019a). Our view is that high-technology firms are likely to invest more in R&D projects than firms operating in other business sectors. The description of the variables used in the analysis is reported in Table 1. 3.2 Empirical model. To test our hypotheses, we assess whether the fixed-effect (FE) or random-effect (RE) model is more appropriate for panel data by employing the Hausman test (Kor, 2006; Onali et al., 2017). This approach suggests that the RE model is more efficient, and thus we use the generalized least squares (GLS) estimation technique. Specifically, we first enter the control variables and run Model (1): RD_INV_INT_it = b0 + b1 SIZE_it + b2 PROFIT_it + b3 LEV_it + b4 CASH_VOL_it + b5 INTAG_it + b6 MTB_it + b7 TECH_it + e_it (1). Then, we enter the variables relative to the CFO's individual-level characteristics in Model (2): RD_INV_INT_it = b0 + b1 CFO_ED_it + b2 CFO_GENDER_it + b3 CFO_AGE_it + b4 SIZE_it + b5 PROFIT_it + b6 LEV_it + b7 CASH_VOL_it + b8 INTAG_it + b9 MTB_it + b10 TECH_it + e_it (2). Results of estimations. Table 2 reports the descriptive statistics for the variables used in the analysis. For our sample of firms, on average, the value of RD_INV_INT is around 0.03, with a maximum of 0.14. With regard to the main variables of interest, Table 2 shows that the average age of the CFOs of listed European firms is 51 years and that these firms have a low share of female CFOs (around 7%). Further, Table 2 shows that 36% of our sample presents CFOs with high post-university education (i.e.
an MBA and/or a PhD). Table 3 reports the correlations among the variables, showing that RD_INV_INT is positively correlated with CFO_ED, in line with our prediction that companies hiring top executives with higher post-university education invest more in R&D. Moreover, RD_INV_INT is positively correlated with the level of intangible resources (INTAG) and with the firm being in the high-technology sector (TECH). The correlations among the other variables are generally in line with our expectations. Table 4 presents the results of the regression analysis. In model (1), we find that leverage (LEV) has a negative impact on RD_INV_INT, while cash flow volatility (CASH_VOL) has the opposite effect, meaning that a firm's financial condition and financial risk are important factors when the top management decides to invest funds in R&D (Sun et al., 2019a). Moreover, the results indicate a negative relationship between PROFIT and RD_INV_INT, suggesting that firms with higher profitability invest less in R&D. Our results document that firms from the high-tech industry (TECH) are more sensitive to investment in R&D than companies operating in other sectors. In relation to the other control variables, we find that SIZE, MTB and INTAG are not significantly associated with RD_INV_INT. The results of model (2) confirm the significant impact of LEV, PROFIT and TECH on RD_INV_INT. As predicted in the first hypothesis, the regression results show that CFO_ED is significantly and positively associated with RD_INV_INT (p-value < 0.05), suggesting that a higher level of post-university education enhances the CFO's inclination to invest in R&D. This is in line with the literature suggesting that education is a distinctive attribute of a top manager who promotes innovation, because greater knowledge increases the manager's "acumen" and facilitates R&D investment (Hambrick and Mason, 1984; Meyer and Goes, 1988; Kor, 2006; Scherer and Huh, 1992). Our results provide evidence that CFO_GENDER is significantly and positively associated with RD_INV_INT (p-value < 0.1), meaning that firms with a female CFO are more prone to pursue investment in R&D. This result supports the argument that the business case for gender diversity in the top management team is not only a matter of quotas or social fairness (Østergaard et al., 2010; Hillmann et al., 2015) but, above all, a driving force for promoting investment in R&D. Moreover, these results pave the way for the idea that the relationship between gender diversity and R&D investment is not only a question of risk-taking approach (Almor et al., 2019) but requires more comprehensive investigation. With reference to the third hypothesis, CFO_AGE is significantly and positively associated with RD_INV_INT (p-value < 0.05), suggesting that older CFOs increase the investment in R&D. This result contrasts with studies contending that younger top managers are more prone to engage in risky strategies such as R&D investment, but we suggest that there are circumstances under which firms with older CFOs may increase R&D spending. In particular, from the perspective of career concerns and opportunity, a younger CFO may be scared of the risk of R&D projects, since a future adverse performance may harm his or her position in the labor market (Zwiebel, 1995; Andreou et al., 2017), while an older one may be more prone to engage in risky investments, wanting to appear dynamic and to defeat the stereotype of being reluctant to change.
Our results may also mean that, in the current knowledge economy, there is a tendency for older CFOs to show a proactive approach to R&D investment, since they are constantly under scrutiny by investors (Zimmerman, 2013) and the market. However, it is important to highlight that the CFOs in our sample are, on average, 51 years old, meaning that they are not close to retirement and, thus, that their career horizon does not necessarily have a negative impact on their investment decisions (McClelland et al., 2012). Additional analyses. We carry out additional tests to increase the robustness of our results. First, we re-estimate model (2) using the FE model and obtain similar results (not reported here to save space). Second, in line with McClelland et al. (2012), by entering the square of CFO_AGE in model (2), the regression results (untabulated) exclude any curvilinear effects of CFO_AGE on RD_INV_INT [2]. Finally, we investigate whether our results are sensitive to the inclusion of additional firm- and country-level factors that potentially have an impact on R&D investment. More specifically, we include in the baseline regression model (2) variables related to corporate governance quality (David et al., 2001; Honoré et al., 2015), corporate tax incentives (Dyreng et al., 2010), the rule of law in the country (Alam et al., 2019) and the country's legal system (Devalle et al., 2010). We use the BvD independence indicator from ORBIS [3] as a proxy for ownership quality (OW_QUAL), and we control for the impact of tax incentives by using the effective tax rate (ETR), calculated as the income tax divided by the pre-tax income. As in prior research, the values for the rule of law in the country (RULE_LAW) are gathered from the Worldwide Governance Indicators (World Bank), while we use a dummy variable for the country's legal system (COUNTRY) to distinguish between civil law (1) and common law (0) systems. As shown in Table 5, the results suggest that only tax incentives affect corporate RD_INV_INT, whereas the impacts of our main variables of interest (i.e. CFO_ED, CFO_GENDER and CFO_AGE) remain unaltered. Conclusions. R&D investment at the firm level has received increasing attention in the current academic and policy debate, because R&D represents a primary resource for firms wishing to stay competitive in the era of a digital and tech-based business environment. Using a sample of 81 of the largest listed firms from European countries, this study developed three hypotheses on the relationship between CFO-specific attributes and R&D investment intensity. Exploring the impact of these factors allows us to gain a better understanding of the heterogeneity among firms in promoting R&D activity.
More specifically, this study offers three important contributions to the top management literature beyond the CEO, by looking specifically at the role of the CFO. First, our findings suggest that CFOs with a higher educational background are more likely to direct more financial resources to R&D activities. Second, we find that firms led by a female CFO show superior R&D investment intensity to those led by a male CFO, meaning that increasing gender diversity among top executives is beneficial for sustaining R&D projects. This result calls for more comprehensive investigation of the effect of managers' gender diversity beyond the theoretical perspective of individual attitudes toward risk-taking. Finally, the findings of this study question the strand of the literature suggesting negative effects of managers' age on R&D, since older CFOs are associated with a higher level of R&D investment intensity. The contribution of this study is also of interest for managerial practice. First, the paper fills an important gap in terms of understanding the current role of CFOs in the strategic decision-making processes for R&D investment. Indeed, although some articles have contended that there is a progressive enrichment of CFOs' tasks and professional standing, few have shown practically how this unfolds in organizations. Thus, in depicting the significant relationships between the attributes of CFOs and R&D investment, we lay the basis for greater awareness among firms and investors of the effects of CFOs on corporate policies. Furthermore, being aware that CFOs are no longer only the "numbers men" but play an active role in shaping the strategic attitude of the firm, we can also contend that this is an essential aspect to consider for managing efficiently and avoiding problems of strategic discontinuity, even when there is a CEO turnover event. In particular, it is important to highlight that new discoveries or ideas in the market may rapidly shift the resources allocated to one R&D project to meet the needs of another. In this scenario, we argue that R&D investment is a matter of governance organization in which the CFO has a key role in allowing a firm to be far more agile in managing R&D projects, removing the obstacles to their implementation and guarding against related risks in the best interests of investors and stakeholders. Collectively, our results may be useful for capital providers and practitioners who are interested in identifying the combination of CFO characteristics and business model that is most likely to result in higher firm-level R&D investment. Our study is not free from limitations. First, this research relies on the observable characteristics of CFOs and does not consider the potential impact of other managerial traits (i.e. ethnicity, experience, tenure) that may affect decisions on the level of investment in R&D. Second, our sample consists of firms included in the S&P350-Europe and is thereby limited to the largest listed companies, which are subject to greater scrutiny by investors than smaller ones.
First, future research is needed to corroborate our results through the in-depth collection of more data about the individual attributes of CFOs. Second, we call on researchers to increase the number of studies identifying factors that may undermine the willingness of CFOs to pursue innovation. For instance, it could be useful to explore the impact of the role of owners (i.e. family vs non-family firms) and of any potential conflicts within the board of directors or between the board and the CFO. Profitability enters the model negatively, suggesting that firms with higher profitability invest less in R&D. Our results document that firms from the high-tech industry (TECH) are more sensitive to investment in R&D than companies that operate in other sectors. In relation to the other control variables, we find that SIZE, MTB and INTAG are not significantly associated with R&D_INV_INT. Table 3. Correlation matrix; refer to Table 1 for variable descriptions.
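As a concrete illustration of the augmented specification described in the additional analyses, the following is a hedged sketch, not the authors' code: column names mirror the variables in the text (with R&D_INV_INT renamed RD_INV_INT for Python), the data file and firm identifier are hypothetical, the exact control set of baseline model (2) is assumed from the controls named in this excerpt, and the clustered-errors estimator is a common choice rather than the paper's stated one.

```python
# Hedged sketch of the augmented model (2): not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfo_panel.csv")  # hypothetical firm-year panel

# ETR as defined in the text: income tax divided by pre-tax income.
df["ETR"] = df["income_tax"] / df["pretax_income"]

formula = (
    "RD_INV_INT ~ CFO_ED + CFO_GENDER + CFO_AGE"
    " + SIZE + MTB + INTAG"                   # firm-level controls named above
    " + OW_QUAL + ETR + RULE_LAW + COUNTRY"   # added firm- and country-level factors
)

# Pooled OLS with standard errors clustered by firm.
model = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.summary())
```

COUNTRY is the civil-law (1) versus common-law (0) dummy described in the text, so it enters the formula directly without further encoding.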
2021-05-10T00:03:17.136Z
2021-02-02T00:00:00.000
{ "year": 2021, "sha1": "986c24ca60ff106237f1fe0c42c65602fc9d18ef", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/MD-05-2020-0650/full/pdf?title=the-chief-financial-officer-cfo-profile-and-rampd-investment-intensity-evidence-from-listed-european-companies", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f7956e0dee77cb13b9abfd3e9461b69bc271a8a4", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
45126572
pes2o/s2orc
v3-fos-license
Formation of a new dog population observed by pedigree and mtDNA analyses of the Polish Hovawart The aim of the study was to evaluate changes in the gene pool of a dog population during the period of its formation. Pedigree and mtDNA analyses were performed on the Polish population of Hovawart dogs. A total of 192 litters of 93 dams and 115 sires were born between 1988 and 2008. Breeding began using Hovawarts imported mainly from the Czech Republic and Slovakia; however, the role of Western European dogs increased continually throughout the period analysed. No unfavourable effects caused by the limited size of the population were identified because of the constant inflow of new genes from abroad. The continual increase in the gene pool was indicated by all of the pedigree parameters analysed. Two different mtDNA haplotypes were found, and complete agreement between pedigree and molecular data was noted. The results of the analyses permit the conclusion that the process of formation of the new Hovawart population was also impacted by non-genetic factors that directly influenced the composition of the gene pool. Introduction Many articles published in recent years have focused on pedigree analyses in dogs (Cole et al. 2004, Leroy et al. 2006, Calboli et al. 2008, Głażewska 2008, Ólafsdóttir & Kristjánsson 2008, Leroy et al. 2009, Voges & Distl 2009, Mäki 2010, Leroy & Baumung 2011, and the review by Leroy 2011). The subjects of these studies were either rare local breeds classified as endangered or popular breeds bred in many countries. The databases used in these analyses differed in completeness and ranges of pedigree information. They comprised all information on a given breed (Głażewska 2008, Mäki 2010), or were composed using local sources of pedigree information, for example the database of the UK Kennel Club (Calboli et al. 2008) or the Société Centrale Canine (SCC) database (Leroy et al. 2006). As a rule, a group of dogs born within a defined period of a few years (a reference population) was the subject of analyses; however, some studies focused on populations observed for longer periods (Cole et al. 2004, Głażewska 2008, Voges & Distl 2009, Mäki 2010). The analyses have indicated a number of unfavourable occurrences in dog breeding, such as a high level of inbreeding, high disproportion in the breeding use of sires and strong imbalance in founder contributions to a gene pool, all of which might negatively influence the health condition of a given population (Cole et al. 2004, Leroy et al. 2006, Calboli et al. 2008, Głażewska 2008, Ólafsdóttir & Kristjánsson 2008, Oliehoek et al. 2009, Leroy et al. 2009, Voges & Distl 2009, Mäki 2010). In the present study, we concentrated on changes in the gene pool of a pedigree dog population during the period of its formation. Our interest focused on genetic and non-genetic factors that influenced the decisions made by the breeders and whether the breeding policy was advantageous for the new population from a genetic point of view. Hovawart dogs, which have been bred in Poland since the 1980s, were used as a model population. The analyses were performed using pedigree data collected in the Polish archives, i.e. data available to Polish breeders. The second goal of the study was to analyse mitochondrial DNA (mtDNA) to evaluate mtDNA diversity over the span of the 21-year breeding period. Hovawart dogs are a German working breed from FCI (Fédération Cynologique Internationale) group 2 that originates from old guard dogs.
Modern breeding began in 1924, and the breed was restored using Hovawart-type farm dogs and representatives of different breeds from FCI groups 1 and 2. The breed was recognised in 1936 under FCI number 190. Breeding of this dog began in Poland with Britta von der Funkenmühle, which was imported from East Germany (GDR), and her first litter by Ago vom Bretterkeller was born in 1988 at the Heland kennel. Material and methods Pedigree analysis of dogs born in Poland between 1988 and 2008 was conducted using the pedigree database (further referred to as the Polish database), which comprised general information on the origin of the litters (parents, date of birth, kennel) and data on the origin of foreign parents from their four-generation pedigrees. The first part of the database, containing data for the 1988-2004 period, was published earlier by Głażewska (2006), and the remaining data were collected from the archive of the Polish Kennel Club (ZKwP). No fewer than five generations of ancestors were known for each litter. The average pedigree length increased from 7.8 to 12.5 generations during the 21-year period studied; however, the average pedigree completeness increased only from 5.2 to 5.7 generations. This stemmed directly from the way the database was constructed, i.e. using pedigree information exclusively from Polish archives. Data on the number of puppies born between 2003 and 2008 and on dogs presented during 55 international and national Polish dog shows in 2009, which were published on two web sites (www.klub.hovawart.pl, www.zkwp.pl), were used in the study. Additionally, information from interviews with the Hovawart breeders and owners, and from the www.forum.hovawart.pl web site, was used for the interpretation of the results, and these sources of information are quoted in the article as »breeder statements«. For the purpose of analysis, the 21-year period was divided into seven periods: 1988-1993, 1994-1998, 1999-2000, 2001-2002, etc. The basic breeding and genetic parameters were calculated. Ancestors with unknown parents in the Polish database were considered as founders. The founder contributions in a pedigree and the founder genome equivalent were computed using GENES v11.8 (Lacy 1998). The founder contribution is defined as the expected proportion of the population's gene pool that has descended from this founder (Lacy 1989), and it is equal to the value of the coefficient of relationship between the founder and its descendants. The founder genome equivalent (FGE), defined as the theoretical number of founders that would be required to provide the level of genetic diversity observed in the living population if the founders were all equally represented and had lost no alleles, was computed according to Ballou & Lacy (1995). The total (FTOTAL) and 5-generation (F5) inbreeding coefficients (Wright 1922), mean kinship (MK) (Malécot 1948), generation interval and the pedigree completeness level (given above) were computed using ENDOG v4.5 (Gutiérrez & Goyache 2005); a toy illustration of these pedigree quantities is sketched below. The analysis of mtDNA diversity was performed using hair samples, each comprising 30-40 hairs, which were taken from the backs and tails of the dogs. In total, samples from 23 Hovawarts representing all dam lines and their branches were collected. Total genomic DNA from hair bulbs was extracted according to the standard organic procedure (Wilson et al. 1995).
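The pedigree quantities defined above follow from simple recursions over the pedigree. The sketch below is a minimal, hypothetical illustration (not the GENES/ENDOG implementations): Malécot kinship is computed by recursion on parents, an animal's inbreeding coefficient F (Wright 1922) equals the kinship of its sire and dam, founder contributions are expected gene-pool shares, and the founder-equivalent number 1/sum(p_i^2) approximates the FGE only under the simplifying assumption that founders lost no alleles. The toy pedigree and all names are illustrative.

```python
# Minimal sketch (not the GENES/ENDOG code) of the pedigree parameters above.
from functools import lru_cache

# id -> (sire, dam); founders have unknown parents (None, None).
PED = {
    "F1": (None, None), "F2": (None, None), "F3": (None, None),
    "A": ("F1", "F2"), "B": ("F1", "F3"), "X": ("A", "B"),
}

@lru_cache(maxsize=None)
def depth(x):
    """Generations separating x from the founders (founders have depth 0)."""
    if x is None:
        return -1
    s, d = PED[x]
    return 1 + max(depth(s), depth(d))

@lru_cache(maxsize=None)
def kinship(i, j):
    """Malecot's coefficient of kinship, by recursion on parents."""
    if i is None or j is None:
        return 0.0
    if i == j:
        s, d = PED[i]
        return 0.5 * (1.0 + kinship(s, d))  # self-kinship = (1 + F_i) / 2
    if depth(i) > depth(j):
        i, j = j, i  # always expand the animal farther from the founders,
                     # so we never recurse on an ancestor of the other one
    s, d = PED[j]
    return 0.5 * (kinship(i, s) + kinship(i, d))

def inbreeding(x):
    """Wright's F: the kinship between the parents of x."""
    s, d = PED[x]
    return kinship(s, d)

def founder_contribution(f, x):
    """Expected proportion of x's gene pool descended from founder f."""
    if x is None:
        return 0.0
    if x == f:
        return 1.0
    s, d = PED[x]
    return 0.5 * (founder_contribution(f, s) + founder_contribution(f, d))

founders = [a for a, parents in PED.items() if parents == (None, None)]
p = {f: founder_contribution(f, "X") for f in founders}
fe = 1.0 / sum(v * v for v in p.values())  # founder equivalents (no-loss FGE)
print(inbreeding("X"), p, round(fe, 2))
```

For this toy pedigree, X is the offspring of a half-sib mating through founder F1, so the recursion returns F = 0.125, and founder contributions of 0.5, 0.25 and 0.25 give a founder-equivalent number of about 2.67.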
DNA amplification was performed using primers designed in this study with Primer3 (Rozen & Skaletsky 2000): CRDOGF (15372-15392): 5'GTA ACC GCC CTC CCT AAG AC3' and CRDOGR (16096-16117): 5'TGT CCT GAA ACC ATT GAC TGA3'. The PCR reaction was conducted in a GeneAmp PCR System 9600 Thermal Cycler (Applied Biosystems, Foster City, CA, USA). The PCR products (660 base pair length) were purified using Microcon 100 microconcentrators (Amicon, USA) and sequenced with the BigDye Terminator v1.1 Cycle Sequencing Kit (Amicon, Beverly, MA, USA) according to the user's manual. Purified sequencing products were separated in an Applied Biosystems 3130 DNA Analyser (Applied Biosystems, Foster City, CA, USA). The electrophoretic data were collected by the Data Collection v2.1 software and analysed by the Sequencing Analysis v3.0 software (Applied Biosystems, Foster City, CA, USA). The Hovawart sequences were compared to dog sequences deposited in GenBank. Phylogenetic analysis of the mtDNA haplotypes was performed using MEGA v4.0 software (Tamura et al. 2007). Breeding data analysis Between 1988 and 2008, a total of 192 litters were born in Poland in 74 kennels, by 93 dams and 115 sires. Twenty-three dams and 91 sires originated from foreign breeding. The parents of Polish origin came from 68 litters, a number equal to 50.7 % of the total number of litters born until 2005. Differences in the breeding use of particular individuals were found. The majority of parents, 51.6 % of the dams and 76.5 % of the sires, had one litter only, and the maximum number of litters by one parent was 7 and 14, respectively. The length of the breeding use of females and males was similar. The average age of a dam and a sire at the birth of their first litter was 3.34 and 3.52 years, respectively, and the average generation interval between a dam and her litter was 4.47 years. Significant differences in the number of litters born in particular kennels were observed. The majority of kennels (43) produced only one litter, and only 12 kennels gave five or more litters, with a maximum of 15. In the kennels that produced only one litter, 22.4 % of the total number of litters were born. Only six bitches originating from these litters have been used in breeding, which is 8.6 % of the total number of Polish breeding dams. Incomplete data for the 2003-2008 period, which referred to 64 % of the litters born, indicated that the average number of puppies in a single litter was 8.14 (from 2 to 13). The analysis of parental origin indicated progressive changes in the population gene pool. During the initial years of breeding, with the exception of the first pair of parents that came from East Germany (GDR), breeding was dominated by individuals from the Czech Republic and Slovakia (CS) (Figure 1). Of the 23 imported bitches used in Polish breeding, 15 were from CS, three each from France and Germany, and one each from Hungary and Norway. The group of foreign sires was dominated by males from CS in the initial period, but in subsequent periods there was an increasing tendency to use males from Western European countries, mainly Germany. Breeders also decided to use dogs from other countries, such as France, Denmark, Finland and Norway, but this applies only to single individuals. The highest proportion of matings with Western European sires was noted in the 2003-2004 and 2005-2006 periods (45.5% and 45.3%, respectively).
The increasing tendency of foreign matings might be linked to the improvement of the economic situation in Poland (Figure 2). According to the Polish database, 23 imported breeding bitches represented 12 dam lines. The founders of these lines were nine bitches from the GDR, and single bitches from West Germany, Switzerland and Sweden. Figure 3 presents the dynamics of changes in the number of litters born in particular dam lines. The dominant line during the whole period was that of Adda vom Annatal, which is represented mainly by descendants of Britta von der Funkenmühle, the first bitch used in Polish breeding. The analysis of lists of dogs in Polish dog shows in 2009 indicated that only a limited number of dogs was shown (Figure 4). The highest proportion of dogs shown from a given litter was in the first or second year of life, i.e. they were shown mainly in junior classes. In comparison to the estimated total number of dogs born, this means that only about 24 % of young dogs were judged in shows. A decreasing tendency in the number of dogs shown was linked to their age: older dogs were shown more rarely. Table 1 presents detailed data of the pedigree analyses. A total of 203 founders (95 males and 108 females) were present in the pedigrees from the whole period analysed. The stable increasing tendency in founder numbers was accompanied by the loss of some founders from the pool and the appearance of new founders in the subsequent periods. Figure 5 presents the dynamics of change of founder contributions to the gene pool of the population. In the 1988-1993 period, seven founders each covered over 4.5 % of the gene pool, the next nine founders covered from 2.5 % to 4.5 %, and the contribution of the remaining 27 founders was lower than 2.5 %. In subsequent years, the supplanting of the first founders' genes by those of new founders was noted. As the number of founders increased, the values of the inbreeding coefficient and mean kinship decreased (Table 1). The constant decreasing trend in the values of F5, which ranged from 6.05 % in the first period to 0.60 % in the last period, is worth particular notice. Table 1. Basic parameters of pedigree analysis in Hovawart dogs bred in Poland, by period: 1988-1993, 1994-1998, 1999-2000, 2001-2002, 2003-2004, 2005-2006. Analysis of mtDNA Samples from representatives of all 12 dam lines and their main branches were studied. Two haplotypes, Ho1 and Ho2 (HM007196-HM007197 GenBank accession numbers), belonging to two clearly distinct haplogroups that correspond to clade A (Ho1) and clade B (Ho2) as determined by Savolainen et al. (2002), were noted. The Ho1 sequence differed from the reference sequence (U96639; Kim et al. 1998) by 15 nucleotides (14 transitions, one transversion), and the Ho2 sequence by six nucleotides (five transitions, one transversion). The sequences differed among themselves by 18 nucleotides. The Ho1 haplotype was found in the dam line that originated from Adda vom Annatal and which was continued by two imported bitches (Britta von der Funkenmühle, Andromeda Queen Elsa). The Ho2 haplotype was found in the remaining 11 Polish lines. Comparing the Hovawart sequences with those of other dog breeds, we found some identical sequences (Table 2). The Ho1 haplotype is identical to the set of sequences of breeds representing different FCI groups. Among them is the St. Bernard, a breed recognised as one of the Hovawart founder breeds.
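The substitution counts reported above (e.g., 15 differences between Ho1 and the reference, 14 of them transitions) reduce to a positional comparison of aligned sequences. The following is a minimal sketch of that tally, not the authors' pipeline; the sequences are toy inputs, not the Ho1/Ho2 haplotypes.

```python
# Hedged sketch: classify each differing site of two aligned sequences as a
# transition (purine<->purine or pyrimidine<->pyrimidine) or a transversion.
PURINES = {"A", "G"}

def classify_substitutions(seq1, seq2):
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a in "-N" or b in "-N":
            continue  # skip identical sites, alignment gaps, ambiguous bases
        if (a in PURINES) == (b in PURINES):
            transitions += 1   # A<->G or C<->T
        else:
            transversions += 1
    return transitions, transversions

# Toy example with two differences: one transition (G->A), one transversion (T->A).
print(classify_substitutions("ACGTACGT", "ACATACGA"))  # (1, 1)
```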
An identical sequence was also found in the Golden Retriever, which exhibits significant phenotypic similarity to blond-haired Hovawarts, even though it is not mentioned as a founder breed for Hovawarts. To date, a sequence identical to the Ho2 haplotype has been found only in the Labrador Retriever breed. Discussion The Polish population of Hovawart dogs is an interesting example of a new dog population. The population exhibited both similarities and differences in breeding parameters in comparison to breeds studied previously. Similarly to other breeds, occasional breeding prevailed and 58.1 % of kennels had just one litter. One-litter kennels are typical of dog breeding, and occasional breeders were also noted frequently by Leroy et al. (2007), Calboli et al. (2008), Głażewska (2008) and Mäki (2010). The generation interval of the Hovawart (4.47) was relatively long in comparison with other breeds. For example, in Leroy et al. (2009) the generation interval in just 20 of 61 breeds representing different FCI groups was 4.5 or more. Moreover, the average generation interval was 3.8 in the breeds from FCI group 2, which is of particular note since the Hovawart breed also belongs to this group. Leroy et al. (2009) explained this result by the shorter lifespan of the dogs and the early end of their reproductive capacity. Meanwhile, the prevailing opinion among Polish breeders is that breeding Hovawarts early is not recommended because the breed matures late. The result of this opinion is a late breeding entry for both Polish Hovawart males and females. Significant differences were noted regarding the ratio of the number of dams to sires. The number of dams is generally higher than that of sires in dog breeding, and this is mainly because of the »popular sire effect« (Ostrander & Kruglyak 2000), which refers to the high number of offspring that are fathered by the most desirable sires. Meanwhile, for Hovawarts the number of dams was lower than the number of sires. This is the result of the high proportion of foreign matings and the almost unlimited choice of sires available to Polish breeders. Over the span of 21 years of breeding, the gene pool increased continuously as indicated by all the pedigree parameters. Significant differences between F5 and FTOTAL observed in recent years and the high proportion of litters with F5 = 0 %, which resulted from intentional breeder decisions to avoid mating between relatives, are both noteworthy. This approach is not typical of dog breeding. According to Leroy et al. (2007), 24 % of French breeders declared using close-breeding, and a similar tendency can probably be found in the breeding of the majority of dog breeds. The constant inflow of new genes resulted in changes in the structure of the gene pool, as seen in the founder contributions. With the passage of time, the contributions of the first population founders decreased, as did disproportions in the contributions of particular founders. The trend is advantageous because high disproportion in the founder contributions to a gene pool is one of the most important issues in breeding such small populations of dogs (Leroy 2011). The increasing genetic diversity of the population was mainly the result of foreign matings. The use of newly imported bitches for breeding was significantly less effective genetically. The dam line established by the first imported bitch held the dominant position in relation to the number of litters born throughout the period analysed.
The high level of genetic diversity of the population was not reflected by mtDNA analysis, and only two mtDNA sequences were found in the population. However, this number should be recognised as the typical number of haplotypes present in a single dog breed (Angleby & Savolainen 2005, Pires et al. 2006). The number of haplotypes is not directly related to the size of the dog population but rather stems from breed history or breeding policy, e.g., whether the breeders ensure the continuation of particular dam lines or not. Using additional pedigree information from the database http://www.working-dog.eu, we ascertained that the founders of the 12 Polish lines originated from two founders of the breed, Dina (Geisler) (Adda vom Annatal line) and Dina (Bruser) (remaining 11 lines), which were described as Hovawart farm dogs. Dina (Bruser) was born in 1923, and the date of birth of Dina (Geisler) is not given in this database, but her daughter, Hova, was born in 1925. The results of the mtDNA analysis confirm that the pedigree data in the dam lines from almost 90 years of breeding are reliable. This is an important observation since similar analyses in horses have always led to some pedigree records being disproved (Hill et al. 2002, Kavar et al. 2002, Głażewska et al. 2007). An interesting result of the analysis is the numerical supremacy of the Ho2 haplotype in the imported bitches. The question remains open whether this haplotype also dominates in the world population of Hovawarts, or if its high frequency in the group of imported bitches is a founder effect stemming from breeder decisions regarding imports. One negative aspect noted in the breeding policy was the high imbalance in the chances of particular dogs transferring their genes to subsequent generations. Firstly, the chance of doing so depended on show participation. Only a small number of dogs were shown in dog shows, and the remaining dogs are, in fact, a priori eliminated from breeding. This follows from the Polish kennel regulations (http://www.zkwp.pl), according to which positive evaluations earned from adult show classes are required to receive breeding qualifications. Therefore, the decision of owners to show a given dog or not is crucially important for its breeding career. Show titles are also very important, and dogs with champion titles dominated breeding. According to the Polish database, of 22 Polish fathers of litters born since 1995, 19 were Polish champions. The significance of champion titles is economic, since parents that are successful in shows determine the commercial success of Polish kennels in the market because buyers prefer puppies of champions (breeder statements). In the population studied, the chances of bitches transferring their genes to subsequent generations also depended on their birthplaces. Bitches born in one-litter kennels were bred relatively more rarely. Since such litters originated from new and often genetically highly valuable matings, this is an especially unfavourable aspect of breeding. Moreover, this situation is difficult to explain in genetic terms because the parents of litters from one-litter kennels appeared to be of the same high quality, with regard to show titles and health condition, as were the parents from multi-litter kennels. This suggests that the reasons for this must lie with the subjective decisions of dog buyers.
Our observations of the Hovawart breeding community indicate that persons with a greater knowledge of the breed and with precise plans as to dog shows, training and future breeding look for puppies at established kennels. Meanwhile, persons with less knowledge of the breed or those interested in acquiring a family dog buy puppies from kennels located closer to their homes or from those that are selling at lower prices (breeder statements). This stratification in buyer intention negatively impacted the dynamics of change in the gene pool of the population. Despite the high proportion of foreign matings and the use of many new dams in recent years, the enrichment and rebuilding of the gene pool has not progressed as expected based on the demographic data of the Polish population. This is clearly visible in Figure 3, which presents the dominant position of the first Polish dam lines in light of the number of litters born. The composition of the gene pool of Polish Hovawarts was also influenced by economic factors. Initially, breeding animals were purchased in the Czech Republic and Slovakia, where prices for puppies were lower than in Western Europe. As the economic situation in Poland improved, breeders began to use breeding material from Western Europe more frequently. The country of origin of breeding dogs plays an important role in the breeding of working dogs, such as Hovawarts, because in some countries breeding focuses on phenotypes rather than the behaviour traits of the dogs. This can be seen, for instance, in the different qualification criteria for breeding dogs in the German Rassezuchtverein für Hovawarthunde (RZV) (one dog show note and two high-quality club performance tests, which were described in detail by Boenigk et al. 2006) (http://www.hovawart.org) and in the Polish Kennel Club (three dog show notes and a standard performance test) (http://www.zkwp.pl). The superiority of the show criterion over utility value in Polish breeding has contributed to the prevailing opinion that while Polish Hovawarts are handsome, they have weaker characters and are not as good workers as are German dogs (breeder statements). Based on the present analysis and many years of observing Polish Hovawart breeding, we concluded that the breeding is significantly conditioned by non-breeding, economic and psychological factors. These begin with decisions to purchase eight-week-old puppies and end with the decision to begin and continue breeding. The personal preferences of breeders regarding phenotype and behavioural characteristics of the dogs, not always corresponding with the standards of the breed, are also very important. Equally important factors in dog breeding also include personal ambitions and relations among breeders. Unfortunately, friendships or animosities behind particular breeding decisions are immeasurable with genetic methods, and so this important aspect of dog breeding remains terra incognita.
2017-10-27T07:52:12.465Z
2012-10-10T00:00:00.000
{ "year": 2012, "sha1": "520c626b3620db058f15054bd4cff4ff624ab8a8", "oa_license": "CCBY", "oa_url": "https://aab.copernicus.org/articles/55/391/2012/aab-55-391-2012.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "520c626b3620db058f15054bd4cff4ff624ab8a8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
237828741
pes2o/s2orc
v3-fos-license
A Gland of Many Uses: a Diversity of Compounds in the Labial Glands of the Bumble Bee Bombus impatiens Suggests Multiple Signaling Functions Communication in social insect colonies depends on signals accurately reflecting the identity and physiological state of the individuals. Such information is coded by the products of multiple exocrine glands, and the resulting blends reflect the species, sex, caste, age, task, reproductive status, and health of an individual, and may also contain caste-specific pheromones regulating the behavior and physiology of other individuals. Here we examined the composition of labial gland secretions in females of the bumble bee Bombus impatiens, of different castes, social condition, age, mating status, and ovarian activation. We show that active queens, gynes, and workers each produce caste-specific compounds that may serve different communicative functions. The composition and amounts of wax esters, mostly octyl esters produced by active queens, differed significantly between castes, mating, and social conditions, suggesting a social signaling role. Farnesyl esters were predominant in gynes and peaked at optimal mating age (6–10 days), suggesting their possible roles as sex pheromone components. Reproductive status of females and age across castes was reflected by the ratio between short- and long-chain hydrocarbons, suggesting that these compounds may serve as fertility signals. Our findings overall suggest that the labial gland composition in B. impatiens reflects different facets of female physiology. While further bioassays are required to determine the functions of these compounds, they are likely to have important roles in communication between individuals. Introduction Insect societies rely on chemical signaling for regulating diverse activities ranging from foraging to reproduction, and the outcome of communication depends heavily on how accurately the signals reflect the identity and physiological state of the individuals. Semiochemicals specific to species, caste, age, task, reproductive status, and social status have been identified in numerous eusocial species and mapped to diverse glandular origins (Amsalem 2020; Billen and Šobotník 2015; Blomquist and Bagnères 2010; Keeling et al. 2004; Stokl and Steiger 2017). Some exocrine glands are specific to certain taxa, whereas others are shared across taxa. These ubiquitous glands in insects are useful tools to study the emergence of new signaling functions across species and levels of social organization. The labial glands are an outstanding example of such glands. Together with the mandibular, hypopharyngeal, and maxillary glands, they constitute the salivary gland complex of insects. The labial glands have thoracic and cephalic compartments, located, respectively, in the thorax and the head (Poiani and Cruz-Landim 2009). These two compartments have the same origin and also likely share the same secretion in hymenopteran species that have a salivary pouch at the intersection of the compartments (Bombus and Meliponinae spp.) (Poiani and Cruz-Landim 2010), making this gland suitable for examining functions associated with social signaling. Some hymenopteran species (e.g., Apis mellifera) lack this pouch and it is still debatable whether the secretion is the same (Katzav-Gozansky et al.
2001) or different (Poiani and Cruz-Landim 2010) between the two compartments, but the cephalic labial glands are well-developed only in eusocial species of Apinae in which the secretion is assumed to be associated with social roles (Poiani and Cruz-Landim 2010). The thoracic part of the labial gland is ubiquitous across insect orders with the notable exception of Coleoptera and has been studied primarily in the context of larval feeding, digestion, and silk production (Afshar et al. 2013; Musser et al. 2006; Sehnal and Sutherland 2008), with the main components of the larval glandular secretion being various proteins and digestive enzymes (Rivera-Vega et al. 2018). In social insects, studies have focused on the cephalic labial glands in Hymenoptera and on the thoracic labial glands in Isoptera. In the honey bee, the labial secretion was suggested to be associated with worker tasks, but the functional role was not examined (Katzav-Gozansky et al. 2001). In stingless bees, the cephalic labial glands contain a variety of wax-type esters and terpenes that serve as trail pheromones (Jarau et al. 2006; Stangler et al. 2009), and geraniol, the main compound in the secretions of nurse workers, was found to increase the proportion of larvae differentiating into queens (Jarau et al. 2010). Finally, in termites that lack cephalic labial glands, studies of thoracic labial glands found that those contain a variety of caste-specific defensive compounds, most of them volatile, such as pyrazines and benzoic acid (Sillam-Dussès et al. 2012), but also non-volatile food marking pheromones (Reinhard and Kaib 1995). Although limited to a small number of species, these studies emphasize the varied social roles of the labial gland products in social insects. Bumble bees are an interesting group for the study of labial gland composition and function. In these species, queens experience both a solitary and a social phase during their life cycle. Newly emerged queens (gynes) are produced in late summer at the end of the colony annual life cycle. They leave their natal colony and mate before entering a lengthy winter diapause (Alford 1969). In spring, upon emerging from diapause, they found a nest and live a solitary lifestyle until the first worker emerges. Following that, the queen monopolizes reproduction, but only for a short period. Our previous studies (Orlova et al. 2020) show that the ratio between short- and long-chain hydrocarbons on the queen cuticle (below 24 and above 26 carbons, respectively) decreases throughout her life cycle as she progresses towards heading a large colony and producing sexual offspring, and the ability of a cuticular hydrocarbon extract (CHC) to inhibit worker reproduction is dependent on the social context. Workers retain the ability to reproduce and challenge the queen's reproductive monopoly towards the end of the colony life cycle (Amsalem et al. 2015; Duchateau and Velthuis 1988). Examining the labial gland composition during the transitions in reproductive roles throughout the life cycles of queens and workers may shed light on the function of the gland products and the adaptive changes that the glandular secretion has acquired. In bumble bees, the content of cephalic labial glands in females has only been studied in Bombus terrestris. There, the cephalic labial glands exhibit quantitative differences in the amounts of fatty acid dodecyl esters between queens and workers.
These esters are produced in larger quantities by sterile compared to fertile females in both castes (Amsalem et al. 2014). The cephalic labial glands of bumble bee males, however, have been studied extensively. Males produce various terpenes and fatty alcohols that serve for territory marking (Appelgren et al. 1991; Svensson and Bergström 1977; Valterova et al. 2019), and the secretion is highly variable across species and is often used as a chemotaxonomy tool to distinguish between cryptic species (Bertsch et al. 2005). Here we examined the cephalic labial gland secretion across different castes, social conditions, ages, and life stages in the bumble bee Bombus impatiens. Previous studies show that despite similarities in the life cycles of B. terrestris and B. impatiens, the Dufour's gland and the cuticular lipid compositions are different and may have different roles (Amsalem et al. 2014, 2009; Derstine et al. 2021; Orlova et al. 2020). We examined the composition of the cephalic labial gland contents in gynes and active queens, and in workers under queenright and queenless conditions, across different ages. We discuss possible functions of these secretions in B. impatiens females. Bumble Bee Rearing Source colonies for experimental bees were obtained from Koppert Biological Systems (Howell, Michigan, USA) or Biobest Canada Ltd. (Leamington, Ontario, Canada). They were approximately 3-4 wk old with fewer than 30 workers each, a queen, and all stages of brood. Colonies were maintained in closed 30 × 30 × 22.5 cm nest-boxes in a growth chamber at 28-30 °C, 60% relative humidity, and constant darkness, and were supplied ad libitum with a 60% sugar solution and honeybee-collected pollen (Koppert Biological Systems, Howell, Michigan, USA). Queens and workers used in the study were the same as in Derstine et al. (2021). Briefly, all workers were collected upon emergence (< 24 h old) from 10 colonies before the colonies produced gynes and males. Newly emerged workers were individually marked at the time of collection and randomly assigned to one of three treatments: queenright (QR, n = 70), queenless (QL, n = 70), and queenless broodless (QLBL, n = 70). QR workers were returned to their natal QR colony until they reached the desired age, while QL and QLBL workers were housed in plastic cages (11 cm diameter × 7 cm height) in groups of 3-6 workers without a queen until they reached the desired age of sampling. Queenless groups of workers typically lay eggs within 6-8 d (Amsalem et al. 2015), and because the presence of brood affects worker reproduction (Starkey et al. 2019), we included a group without brood. In the QL groups, eggs laid by workers were left intact, while in the QLBL groups, eggs laid by workers were removed daily. We collected 5 workers of each age (days 1-14) in each treatment (70 workers/treatment). All workers were stored at −80 °C until dissection. Twenty active queens that were all mated and laying eggs (hereafter, "active queens") were obtained from twenty full-sized colonies with > 100 workers. These queens were several months old and were actively producing female workers prior to sampling. Newly emerged, unmated queens (hereafter "gynes"; n = 20) were collected from 3 colonies. Gynes were separated from their natal colonies upon emergence to prevent mating, housed in small cages in groups of 3-5 gynes, and sampled at 4 time points: 3, 6, 10, and 14 d after emergence (5/time point).
All sampled individuals were examined for reproductive status and labial gland composition. Ovarian Activation Ovaries were dissected under a stereomicroscope in distilled water, and the largest three terminal oocytes across both ovaries (at least one from each ovary) were measured with an eyepiece micrometer. The mean of these three oocyte measurements was recorded as mean terminal oocyte size and used in all analyses except when PERMANOVA was conducted (see below). This analysis required the use of a categorical variable for ovaries. Therefore, ovary stages were classified into four categories using the mean terminal oocyte size as follows: 1 - undeveloped ovaries (oocytes < 1 mm), 2 - partial development (1-2 mm), 3 - advanced development (2-3 mm), and 4 - full development (> 3 mm). Preparation and Analysis of Labial Gland Extracts After freeze killing, both cephalic labial glands were dissected out of the head capsule by opening the sclerotized cuticle of the head capsule with forceps and separating the two clusters of gland acini (i.e., small saclike cavities that form the glands) from the surrounding tissue using fine forceps. The clusters of acini were then placed in a vial with 50 µl hexane with 100 ng pentadecane as an internal standard. The vials were stored at −20 °C. Prior to GC analysis, samples were evaporated to a volume of 10 µl, of which 1 µl was analyzed with an Agilent 7890A GC equipped with an HP-5ms column (0.25 mm id × 30 m × 0.25 µm film thickness, Agilent, Santa Clara CA, USA) and interfaced to an Agilent 5975C mass selective detector operated in electron impact ionization mode (70 eV). The temperature program was 60 °C to 120 °C at 15 °C/min, then 4 °C/min to 300 °C (5 min hold). The injector port and FID were held at 250 °C and 320 °C, respectively. Compounds were tentatively identified based on diagnostic ions in the resulting spectra and retention indices relative to straight-chain alkanes. For unsaturated compounds, the locations of double bonds were not determined in this study, and the double bond positions and geometries for compounds listed in Table 1 are tentative. Where possible, supporting evidence for tentative identifications was obtained by matching retention times and mass spectra with those of authentic standards of known structure. Mass spectra of selected compounds are provided as supplementary material (Figure S1). Compounds in labial gland extracts were quantified on a Trace 1310 GC (Thermo Fisher, Waltham, MA, USA) equipped with a flame-ionization detector (FID) and a TG-5MS column (0.25 mm id × 30 m × 0.25 µm film thickness, Thermo Fisher). The temperature program and conditions were the same as above. Synthesis of Ester Standards Approximately 40 wax esters and terpenoid esters were synthesized by one of three methods, as represented by the following examples. A full list of the esters and the methods used to synthesize and purify each one is provided in Table S1. Depending on their properties, synthesized compounds were purified by one or more of vacuum flash chromatography, vacuum distillation, or low-temperature recrystallization (see Table S1). Method A (example, (E,E)-farnesyl linoleate): (E,E)-Farnesol (0.222 g, 1 mmol), linoleic acid (0.281 g, 1 mmol), 3-(3-dimethylaminopropyl)-1-ethyl-carbodiimide hydrochloride (0.384 g, 2 mmol), and a few crystals of dimethylaminopyridine catalyst were dissolved in 20 ml CH2Cl2 and stirred overnight at room temperature.
The following morning, the solvent was removed by rotary evaporation, and the residue was partitioned between hexane and water. The hexane layer was washed sequentially with 1 M aqueous HCl and brine, dried over anhydrous Na2SO4, and concentrated. The residue was purified by vacuum flash chromatography on silica gel, eluting with 7.5% EtOAc in hexane. Method B (example decyl myristate): Myristoyl chloride (1.24 g, 5 mmol) was added by syringe pump over 30 min to a solution of decanol (0.95 g, 6 mmol), pyridine (0.4 g, 5 mmol), and a few crystals of dimethylaminopyridine catalyst in 25 ml CH2Cl2 at room temperature, and the mixture was stirred overnight. The solvent was then removed by rotary evaporation and the residue was partitioned between water and hexane. The hexane layer was washed successively with 1 M aqueous HCl and brine, dried over anhydrous Na2SO4, and concentrated. The residue was purified by vacuum flash chromatography on silica gel, eluting with 7.5% EtOAc in hexane. The purified compound was then recrystallized from 15 ml acetone at -20 °C overnight, filtering the resulting mixture in a cold room, producing the purified compound as low-melting white plates. Method C (example eicosyl (Z)-9-octadecenoate = eicosyl oleate): A solution of (Z)-9-octadecenoic acid (0.564 g, 2 mmol), eicosyl alcohol (0.54 g, 1.8 mmol), and 50 mg p-toluenesulphonic acid in benzene was heated to reflux for 3 h, removing the water formed with a Dean-Stark trap. The cooled mixture was diluted with hexane, washed twice with saturated aqueous NaHCO3 and once with brine, then dried over anhydrous Na2SO4 and concentrated. The residue was purified by vacuum flash chromatography on silica gel, eluting with 3% EtOAc in hexane, and the purified ester was recrystallized from hexane at -20 °C overnight. Statistical Analysis Statistical analyses were performed using SPSS v.21 and RStudio. Permutational multivariate analysis of variance (PERMANOVA; adonis and adonis2 functions in R) was used to compare chemical profiles in their entirety between groups. Similarity percentage analysis (simper function in R) was used to identify the components contributing to distinction between groups. Prior to analysis, relative amounts of compounds were Z-transformed with a mean of 4 and standard deviation of 1 to avoid negative values. Colony identity was always used as the first term in PERMANOVA with the term of interest being added second to avoid overestimation of its contribution to variance. Pseudo-F and p-values for terms of interest are reported in the results after accounting for variance between colonies. Generalized Linear Mixed Model analysis was performed to assess the effect of continuous and categorical factors on the relative amounts of major classes of compounds in the gland. Robust estimation was used to handle violations of model assumptions (Ghosh and Basu 2016). In all analyses, we used treatment group (QL workers, QLBL workers, QR workers, gynes, and active queens) as the main effect followed by post-hoc contrast estimation using the Least Significant Difference (LSD) method. Colony identity was used as a random effect in all analyses involving workers. Satterthwaite correction was employed to account for small and unequal sample sizes (Loh 1987; Yau and Kuk 2002). Generalized Linear Mixed Model analysis was performed on standardized values (Z-scores) of oocyte size and relative amounts of compounds to obtain standardized beta coefficients. Statistical significance was accepted at α = 0.05.
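To make these preparation steps concrete, the following is a minimal sketch, not the authors' SPSS/R code: the ovary staging rule from the previous subsection, the Z-transformation shifted to mean 4 and SD 1 applied before PERMANOVA, and the short- to long-chain hydrocarbon ratio used in the results below. Column names, the chain-length map, and all values are illustrative assumptions.

```python
# Hedged sketch of three data-preparation steps described above.
import pandas as pd

# Ovary staging from mean terminal oocyte size (mm): <1 -> 1, 1-2 -> 2,
# 2-3 -> 3, >3 -> 4 (handling of exact 1/2/3 mm boundaries is an assumption).
oocyte_mm = pd.Series([0.4, 1.6, 3.3])
stage = pd.cut(oocyte_mm, bins=[0, 1, 2, 3, float("inf")], labels=[1, 2, 3, 4])

# Relative amounts (%) per bee; Z-transform column-wise, then shift so the
# mean is 4 and the SD is 1, avoiding negative values before PERMANOVA.
rel = pd.DataFrame({
    "C23": [12.0, 8.0, 15.0],            # tricosane (short-chain hydrocarbon)
    "C27": [4.0, 9.0, 3.0],              # heptacosane (long-chain hydrocarbon)
    "farnesyl_ester": [30.0, 2.0, 1.0],  # a terpenoid ester
})
z_shifted = (rel - rel.mean()) / rel.std(ddof=1) + 4.0

# Ratio of short- (<= 24 carbons) to long-chain (>= 26 carbons) hydrocarbons.
chain = {"C23": 23, "C27": 27}  # hydrocarbons only
short = [c for c, n in chain.items() if n <= 24]
long_ = [c for c, n in chain.items() if n >= 26]
ratio = rel[short].sum(axis=1) / rel[long_].sum(axis=1)
print(stage.tolist(), ratio.round(2).tolist())
```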
Identification of Gland Constituents The chemical analyses of the cephalic labial glands showed a total of 79 compounds in queens and workers, of which 53 were conclusively or tentatively identified, and 26 remain unknown at present (Figure S2). All compounds were used in subsequent discriminant analyses (Fig. 1) but only known compounds were used in further analyses (Figs. 2, 3 and 4). The main ions of the unknown compounds are provided as supplementary material (Table S2). The secretion was composed mainly of hydrocarbons ranging from 21 to 33 carbons, fatty acids, wax esters, and terpenoid esters. Of the 79 compounds, 41 were ubiquitous in all groups while 38 compounds (20 of them identified) were present only in specific groups. Nine compounds were specific to active queens (mostly wax esters), ten to gynes (mostly terpenoids), nine to the queen caste as a whole (mostly terpenoids), and another ten to workers (mostly hydrocarbons and wax esters) (Table 1). The mean relative percentages and the absolute amounts of individual compounds in each of the examined groups are provided in Table 1 and the mean relative percentages of the main classes of compounds are provided in Table S3. Discriminant Analyses The cephalic labial gland profiles of all bees were analyzed by PERMANOVA using standardized relative quantities of substances. Active queens, gynes, and workers differed significantly from one another (Pseudo-F4 = 30.39, R2 = 0.16, p = 0.001) (Fig. 1A). Compounds contributing to the difference between gynes and other females included terpenoid compounds and very long-chain esters (detailed below). [Figure 4 caption: Relationship between the terminal oocyte size and the short- to long-chain hydrocarbon ratio in workers of different treatment groups; all trendlines were fitted by polynomial regression.] Because large differences between castes may have obscured differences between treatments and ages in workers, we analyzed the data of workers separately. Worker groups also differed significantly (Pseudo-F2 = 11.63, R2 = 0.08, p = 0.001) with QR workers being distinguished from other workers by a series of unknown compounds characterized by a base peak at m/z = 95 (unknowns 14 and 16 in Table 1, cumulative percentage of variance > 10%) (Fig. 1B). Based on the discriminant analysis information, further analyses of the labial gland secretions were done using three major classes of compounds: 1) terpenoid compounds, comprising farnesene, farnesyl esters, dihydrofarnesyl esters, and geranyl esters, 2) wax esters, with the alcohol moiety chain lengths ranging from 8 to 18 carbons and the acid moiety chain lengths ranging from 14 to 22 carbons, 3) hydrocarbons with chain lengths ranging from 21 to 33 carbons. The relative proportion of each compound class in the total secretion was calculated and used in further analyses. The ratio of short- (≤ 24 carbons) to long-chain (≥ 26 carbons) hydrocarbons was also calculated. Terpenoid Components GLMM analysis revealed that the proportions of terpenoid components differed significantly between all groups and were highest in gynes, where they comprised up to 68% of the total secretion, and lowest in QL workers (0.1% of the total secretion) (GLMM, F4,227 = 75.66, p < 0.0001 for group, p < 0.0001 for all post hoc comparisons) (Fig. 2A). Among gynes of different ages, terpenoid compound proportions peaked on days 6 and 10 and declined on day 14, without covariance with oocyte size or interaction between oocyte size and age (GLMM, F3,12 = 9.19, p = 0.002 for age, F1,12 = 1.49, p = 0.24 for oocyte size, F3,12 = 0.4, p = 0.75 for interaction) (Fig.
2B). Wax Ester Components The proportion of wax esters was highest in active queens (on average 23%) and lowest in QL workers (on average 2.9%) (GLMM, F4,240 = 52.20, p < 0.0001 for group, p < 0.05 for all post hoc comparisons). However, the composition of wax esters differed between groups, with octyl esters being almost exclusively present in active queens and dodecyl ester proportions being highest in active queens, QR, and QLBL workers and undetectable in gynes, which almost exclusively produced long-chain esters (> 32 carbons in total) (Fig. 3). Wax ester proportion in gynes was not significantly explained by either age or oocyte size (GLMM, F3,12 = 1.82, p = 0.19 for age, F1,12 = 1.1, p = 0.31 for oocyte size, F3,12 = 0.47, p = 0.70 for interaction). Following our findings on caste differences in the abundance of different wax esters, we performed a PERMANOVA based solely on ester compounds. The groups differed from one another significantly (Pseudo-F2 = 21.59, R2 = 0.10, p = 0.001). Active queens differed from all other females by the proportions of octyl esters (cumulative percentage of variance > 30%), gynes differed from all other females by the proportions of very long chain esters (> 32 carbons) (cumulative percentage of variance > 30%), and workers of different social conditions differed in the proportions of dodecyl esters and palmityl octadecenoate (cumulative percentage of variance > 70%). Differences in Compound Classes Between Workers of Different Ages and Treatments Based on the results of the discriminant analysis, we tested whether treatment group, age, and oocyte size predicted the relative proportion of wax esters and the series of unknown compounds with the m/z 95 mass spectral base peak. Wax ester proportion was significantly predicted by age and treatment group, being highest in QR workers and at later ages (day 11 and later), but not by oocyte size, with significant interaction between age and treatment (GLMM, F13,138 = 1.96, p = 0.028 for age, F2,114 = 23.12, p < 0.0001 for treatment, F26,120 = 2.23, p = 0.002 for interaction between age and treatment, p > 0.05 for covariance with oocyte size and interaction between oocyte size and other terms). The proportion of unidentified compounds with the m/z 95 base peak was significantly predicted by age, treatment group (highest in QR workers), and oocyte size, with significant interactions between age and treatment and between age and oocyte size (GLMM, F13,135 = 2.58, p = 0.003 for age, F2,112 = 24.17, p < 0.0001 for treatment, F26,111 = 2.33, p = 0.001 for interaction between age and treatment, F1,147 = 5.08, p = 0.026 for covariance with oocyte size, F2,147 = 0.94, p = 0.39 for interaction between treatment and oocyte size and F13,147 = 1.81, p = 0.046 for interaction between age and oocyte size). Hydrocarbon Composition and Ovarian Development In line with a previous study (Orlova et al. 2020), the short- to long-chain hydrocarbon ratio was highest in active queens (5.21 ± 0.16) and lowest in gynes (0.82 ± 0.09) (GLMM, F4,240 = 96.65, p < 0.0001 for group, post-hoc LSD: p < 0.0001 for active queen vs. other groups, p < 0.0001 for gyne vs. other groups, p > 0.05 for comparisons between worker treatments).
In workers, the short- to long-chain hydrocarbon ratio was on average 3.17 ± 0.08 and was significantly predicted by age, treatment group, and oocyte size, peaking on day 8 and being initially higher in QLBL and QL workers, and then in QR workers at later ages, with significant interaction between age and treatment (GLMM, F13,142 = 2.37, p = 0.007 for age, F2,88 = 4.65, p = 0.012 for treatment, F26,134 = 1.92, p = 0.009 for interaction between age and treatment, F1,141 = 8.14, p = 0.005 for covariance with oocyte size, p > 0.05 for interaction of oocyte size with other terms). In gynes, the short- to long-chain hydrocarbon ratio peaked on day 14 and displayed no covariance with oocyte size, but there was significant interaction between oocyte size and age (GLMM, F3,12 = 51.04, p < 0.0001 for age, F1,12 = 3.2, p = 0.098 for oocyte size, F3,12 = 12.16, p = 0.001 for interaction). When the relationship between the short- to long-chain CHC ratio and oocyte size was analyzed separately using regression curve estimation, polynomial regression with a cubic fit proved the best fitting curve (R = 0.57, R2 = 0.325, F3,202 = 32.39, p < 0.0001) (Fig. 4). Discussion Our analysis of the cephalic labial gland secretions revealed a diversity of compounds representing a number of different chemical classes. This structural diversity and the substantial differences in composition between bees of differing caste, age, and social condition allude to diverse roles played by the different compounds. Some of these differences, such as the abundance of terpenoids in gynes and the octyl esters in queens, parallel those found in other secretions of B. impatiens and B. terrestris (Amsalem et al. 2014, 2009; Derstine et al. 2021; Orlova et al. 2020). Overall, we showed strong associations of terpenoid compounds with caste and mating status, of esters with social condition, and of the hydrocarbon profile with reproductive status. Terpenoid compounds were predominant in gynes. These compounds comprised 40-60% of the total secretion, and their amounts peaked in gynes aged 6 to 10 days, coinciding with the age range optimal for mating (Treanore et al. 2021). This finding suggests that terpenoid compounds may play a role in mating in bumble bee queens. Terpenoid compounds were also found to play a role in territory marking and mating in bumble bee males (Bergman and Bergström 1997), although males produce predominantly low molecular weight terpenes like farnesol, whereas, in queens (this study), terpenoids are mainly represented by farnesyl esters of unsaturated fatty acids. The low volatility of these esters suggests that if they do have a signaling role, they are likely short-range signals that are perceived upon contact. Interestingly, terpenoid compounds, albeit of a different structure, were found to be the distinguishing feature of the Dufour's gland secretion of B. impatiens gynes (Derstine et al. 2021), where they may also serve as sex pheromones. The similarity in compounds across species, sexes, and castes may point to evolutionary constraints on chemical diversity and perhaps an adoption of the same chemicals for different functions. For example, previous studies found that terpenoid compounds were produced by the same metabolic pathway as juvenile hormones in non-social insects (Engel et al. 2016).
We know very little about the levels of juvenile hormone in bumble bee queens before and after mating, and exploring the relationship between juvenile hormone level and terpenoid production (and the changes caused in these parameters by mating) could be a productive avenue of research. The amounts and identities of wax ester components were differentiating factors across castes. Specifically, active queens, gynes, and workers differed in the composition of non-terpenoid esters, and the differences we observed in the labial glands mirror trends previously determined for Dufour's gland secretions of Bombus impatiens. Workers were characterized by dodecyl esters, whereas gynes produced no dodecyl esters at all, but produced predominantly longer esters with 14-18 carbons in the alcohol moiety and 18-20 carbons in the acid moiety. This suggests that common biosynthetic pathways are activated in different glands, or alternatively, that esters are produced outside of the glands, possibly in the fat body, and are transported separately to different glands. Mechanisms regulating ester biosynthesis are not yet well characterized in bumble bees. The predominance of dodecyl esters in workers and octyl esters in queens of B. impatiens mirrors the results of analyses of cephalic labial gland secretions of B. terrestris (Amsalem et al. 2014). Overall, aliphatic esters were by far most abundant in the labial glands of active queens and QR workers, and least abundant in gynes. The commonality between active queens and QR workers is perhaps due to the fact that they were sampled from a fully functional large colony, unlike gynes and QL and QLBL workers, which were reared in small groups. The abundance of esters in these bees might suggest a social communication function, but, alternatively, esters might be used for their physical properties, for example, in the building and repair of wax cells. Labial gland esters have been implicated in nest building in solitary bees, but their function in social species is as yet unknown. Hydrocarbons made up a large part of the cephalic labial gland secretions in all castes. The ratios of short- to long-chain hydrocarbons in the labial glands displayed the same trend as hydrocarbons on the cuticle, where active queens have the highest short- to long-chain hydrocarbon ratio, and gynes have the lowest (Orlova et al. 2020). Additionally, in both gynes and workers, the change in the ratio occurs in tandem with ovarian development, and the terminal oocyte size is significantly correlated with the short- to long-chain hydrocarbon ratio. This suggests that in bumble bees, hydrocarbon synthesis is associated with oogenesis and might serve as a fertility signal, as was previously shown in solitary insects (Blomquist and Bagnères 2010). Finally, we observed an intriguing set of unidentified, relatively heavy (likely molecular weights 430-530 amu) compounds characterized by a base peak at m/z 95. The proportions of these compounds were not large (0.5-4% of total secretion) but they discriminate significantly between castes and between different treatment groups in workers, in a similar manner to esters. As with the ester components, the proportion of these compounds increased with age, and their amounts significantly correlated with ester amounts. Further attempts are in progress to identify these compounds and understand the cause of their co-occurrence with esters.
Overall, our analysis of labial gland secretion compositions revealed differences between castes, social conditions, and physiological states in both queens and workers, and allowed us to formulate several hypotheses about the possible functions of the cephalic labial gland compounds. The terpenoid esters, which are abundant in gynes, may act as a sex pheromone, while the wax esters may have a social signaling function. The ratio of short- to long-chain hydrocarbons may be associated with or regulated by oogenesis and may signal fertility. Testing these hypotheses will require further research involving behavioral assays and elucidation of the physiological and molecular mechanisms underlying the biosynthesis of different classes of compounds.
18F-Fluorination Using Tri-Tert-Butanol Ammonium Iodide as Phase-Transfer Catalyst: An Alternative Minimalist Approach

The 18F syntheses of tracers for positron emission tomography (PET) typically require several steps, including extraction of [18F]fluoride from [18O]H2O, elution, and drying prior to the nucleophilic substitution reaction, making this a laborious and time-consuming process. The elution of [18F]fluoride is commonly achieved by phase transfer catalysts (PTC) in aqueous solution, which makes azeotropic drying indispensable. The ideal PTC is characterized by a slightly basic nature, its capacity to elute [18F]fluoride with anhydrous solvents, and its efficient complex formation with [18F]fluoride during subsequent labeling. Herein, we developed tri-(tert-butanol)-methylammonium iodide (TBMA-I), a quaternary ammonium salt serving as the PTC for 18F-fluorination reactions. The favorable elution efficiency of [18F]fluoride using TBMA-I was demonstrated with aprotic and protic solvents, maintaining high 18F recoveries of 96-99%. 18F-labeling reactions using TBMA-I as PTC were studied with aliphatic 1,3-ditosylpropane and aryl pinacol boronate esters as precursors, providing 18F-labeled products in moderate-to-high radiochemical yields. TBMA-I revealed adequate properties for application to 18F-fluorination reactions and could be used for elution of [18F]fluoride with MeOH, omitting an additional base and azeotropic drying prior to 18F-labeling. We speculate that the tert-alcohol functionality of TBMA-I promotes intermolecular hydrogen bonding, which enhances the elution efficiency and stability of [18F]fluoride during nucleophilic 18F-fluorination.

Introduction

The diagnosis and quantification of various physiological and pathophysiological processes in vivo by positron emission tomography (PET) have become increasingly crucial in medical research [1]. The high resolution and sensitivity of PET allow the detection of changes in cellular function or receptor densities during disease development using molecular tracers, most frequently labeled with fluorine-18. Radiolabeled biologically and pharmaceutically active molecules carrying 18F are of increasing importance for preclinical, clinical, and nuclear medical research due to the unique properties of 18F, such as its low β+ energy, long half-life (109.77 min), and the easy accessibility of no-carrier-added [18F]fluoride [2]. The nuclear reaction 18O(p,n)18F in a small-scale cyclotron is the commonly applied process for the production of [18F]fluoride. After trapping on an anion exchange cartridge, [18F]fluoride is typically eluted with aqueous solutions of PTCs such as Kryptofix 2.2.2 or tetraalkylammonium bicarbonates, which are part of a conserved protocol. In the reaction vial, the water is finally removed by azeotropic drying under heating and gas flow to provide the reactive and dry [18F]fluoride-PTC reagent for nucleophilic labeling of target precursors [3,4]. Nucleophilic substitution (SN2) is a widely adopted method in 18F chemistry for the introduction of [18F]fluoride to provide important radiotracers, such as [18F]FP-CIT [5], [18F]FDG [6], [18F]FMISO [7], [18F]F-tryptophan derivatives [8], and many others. This strategy has gained significant importance in the radiopharmaceutical field, as it is not limited to aliphatic systems but can also be applied to aromatic systems, whereas [18F]fluoride is always introduced to the reaction by taking advantage of an eluting agent consisting of an appropriate solvent and the PTC.
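Given the 109.77 min half-life quoted above, the decay bookkeeping behind time-stamped yields is simple exponential arithmetic. The following is a minimal illustrative sketch (our own; the paper does not state whether its RCYs are decay-corrected, and the function names are ours):

```python
import math

F18_HALF_LIFE_MIN = 109.77  # 18F half-life quoted in the text

def decay_factor(elapsed_min: float) -> float:
    """Fraction of 18F activity remaining after elapsed_min minutes."""
    return math.exp(-math.log(2.0) * elapsed_min / F18_HALF_LIFE_MIN)

def decay_corrected(measured_fraction: float, elapsed_min: float) -> float:
    """Divide out physical decay to refer a measured yield back to t = 0."""
    return measured_fraction / decay_factor(elapsed_min)

# A 20-min labeling reaction loses ~12% of activity to decay alone:
print(round(decay_factor(20), 3))           # 0.881
print(round(decay_corrected(0.57, 20), 3))  # a 57% yield at 20 min -> ~0.647
```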
Numerous PTCs have been developed to improve radiosynthetic processes [9,10]; among them, tetraalkylammonium salts have been studied most widely due to their counter-anion exchanging ability with [18F]fluoride and fair solubility in most organic solvents, including CH3CN and alcohols. SNAr radiofluorination with phosphonium borane modified cartridges and tetrabutylammonium cyanide (TBACN) as an eluting agent was reported, whereby tetrabutylammonium cations serve as PTCs and the cyanide counter anion displaces [18F]fluoride from the cartridge [11]. Radiochemists have made efforts to skip the typical aqueous elution of [18F]fluoride and the following azeotropic drying steps. Recently, Aerts et al. developed the n-tetradecyltrimethylammonium cation (TDTMA) to avoid the azeotropic drying step before radiofluorination [12]. Tetraethylammonium (TEA) hydrogen carbonate, tosylate, and perchlorate salts were developed by Inkster et al. to avoid basic reaction conditions as well as azeotropic drying. Such PTC salts were efficiently used for both aliphatic and aromatic 18F-fluorination reactions [10]. Apart from tetraalkylammonium salts, various arylonium salts, such as quaternary anilinium, diaryliodonium, and triarylsulfonium salts, were developed to simplify the 18F-labeling approach [13]. The aforementioned onium salts act as PTCs as well as promoters for radiofluorinations of aliphatic and aromatic compounds. However, although radiolabeling using quaternary alkylammonium [18F]fluoride is much faster at >80 °C, such reaction conditions typically produce by-products, such as alcohols or alkenes, due to strongly basic PTC-solvent systems. Recent reports on alcohol-containing solvent systems for 18F-fluorination [14,15] and on tert-alcohol-coordinated tetraalkylammonium fluoride [16,17] demonstrated the favorable properties of such reagents in terms of reactivity, nucleophilicity, and solubility in organic solvents. Furthermore, the tert-alcohol-functionalized tri-tert-butanolamine was reported as a promoter of SN2 fluorination reactions; it accelerates nucleophilic aliphatic substitution due to the coordination between the hydroxyl (-OH) and amine functionalities and fluoride [18]. Importantly, the nucleophilic properties of fluoride then dominate over its basicity, in favor of SN2 substitutions [19].

Results and Discussion

Initially, we examined the application of the known tri-(tert-BuOH)A [18] to the recovery of [18F]fluoride from QMA cartridges (Table 1). Entry 2 of Table 1 illustrates that the recovery of [18F]fluoride was very poor, at 11%. Most of the [18F]fluoride was trapped on the cartridge, similar to an elution without any PTC (entry 1). Thus, tri-(tert-BuOH)A was converted into the corresponding quaternary ammonium form by following a reported procedure with some modifications [20,21]. Scheme 1 shows the synthesis of TBMA-I by neat treatment of tri-(tert-BuOH)A with 1.1 equivalents of MeI under pressure at 70 °C for 3 days. The resulting product was semi-solid and immiscible with DCM, Et2O, EtOAc, and hexane, but soluble in CH3CN, H2O, MeOH, and DMSO. The purity and identity of TBMA-I were confirmed by LC/MS and 1H-NMR analysis, and the reactivity of TBMA-I was studied by performing iodination reactions with 1,3-ditosylpropane using 1.5 equivalents of TBMA-I in CH3CN at room temperature for 12 h, providing the desired 3-iodo-1-tosylpropane in a yield of 67%.
With TBMA-I in hand, we studied its properties as a PTC for [18F]fluoride elution, as summarized in Table 1, where different parameters were optimized for the elution of [18F]fluoride from QMA cartridges using 1 mL of various PTC elution mixtures. We investigated the elution efficiency of the protic functionalized quaternary ammonium iodide salt TBMA-I in an elution mixture of an aqueous solution (1 mL total volume) of K2CO3 (1 M, 15 µL) and CH3CN (800 µL), which showed excellent eluent properties, with an 18F recovery of 98.8% (entry 3). The same elution in the presence of tri-(tert-BuOH)A resulted in a lower 18F recovery, similar to that in the absence of a PTC (entries 1 and 2). Amounts of TBMA-I both lower and higher than 39 µmol did not significantly affect the efficiency of [18F]fluoride elution; 18F recoveries greater than 94% were obtained (Table 1, entries 4 and 5).

The ability of TBMA-I to facilitate nucleophilic 18F-fluorinations in CH3CN was assessed using [18F]fluoropropyl tosylate ([18F]2a) as a model product (Table 2). TBMA-[18F]fluoride was eluted from the QMA cartridge using an elution mixture of K2CO3 (1 M, 15 µL) and H2O (185 µL) in CH3CN (800 µL), then azeotropically dried by applying anhydrous CH3CN (3 × 500 µL) in a gas flow at 85 °C. A solution of the precursor 1,3-ditosylpropane (1, 6.0 mg) in anhydrous organic solvent (500 µL) was added to the reaction vial and heated at 85 °C over 20 min, and the radiochemical yield (RCY) was analyzed by radio-TLC at specific time points.

Entry 1 of Table 2 was performed using tri-(tert-BuOH)A, which is known to be a good promoter for fluorinations with alkali metal fluoride salts [18]. However, as mentioned above, the elution efficacy of pure tri-(tert-BuOH)A was poor, such that the remaining [18F]fluoride on the cartridge had to be eluted using an additional elution mixture of K2CO3 (1 M, 100 µL) in H2O (400 µL). Subsequently, the solvent was azeotropically dried using anhydrous CH3CN and a solution of precursor 1 containing additional tri-(tert-BuOH)A in CH3CN was added, but no reaction was observed. In contrast, with TBMA-I the same reaction, applying a combination of the solvents tert-BuOH and CH3CN in a 1:4 ratio and a total volume of 500 µL, offered an RCY of 50% for the desired product [18F]2a. Surprisingly, the formation of the hydrolytic byproduct [18F]2b was suppressed. Changing the tert-BuOH/MeCN ratio to 1:9, we found that the RCY increased to 57% of [18F]2a as a single product (Table 2, entry 6). Notably, this reaction gave the best conversion and chemoselectivity compared to the conventional Kryptofix 2.2.2-assisted reaction (Table 2, compare entry 6 with entries 2 and 3). Further evaluation of polar solvents (DMSO, DMF, and THF) indicated poor RCYs (entries 7 and 8). In an attempt to find a protic solvent that improves the reactivity of TBMA-I, 18F-fluorination was performed in iso-propanol, giving a decreased RCY of 25% after 20 min. The analogous reaction in tert-BuOH was attempted, but the precursor was insoluble. These results showed that TBMA-I allows adequate recovery of [18F]fluoride from the QMA cartridge using the classical aqueous elution mixture, together with significant PTC activity in radiofluorinations.

Organic PTCs have been reported for the elution of [18F]fluoride using MeOH as a solvent [22]. Importantly, TBMA-I has excellent solubility in MeOH and could be used for the elution of [18F]fluoride without employing an additional base. Methanolic TBMA-I solutions showed a good ability to elute [18F]fluoride (Table 1). Furthermore, MeOH can be evaporated easily below 100 °C. These conditions are time-saving, as they allow skipping the azeotropic drying that is needed after elution under classical aqueous conditions. Thus, we studied the reactivity of [18F]fluoride eluted with TBMA-I in MeOH in the absence of water and potassium salt bases. Table 3 shows the results of the radiofluorination of 1 using [18F]fluoride that was eluted with a solution of methanolic TBMA-I (10.4 µmol; entry 7, Table 1) in various reaction solvent systems. Entry 1 in Table 3 was performed in CH3CN (4.5 mL) with tert-BuOH (0.5 mL) as a co-solvent, affording [18F]2a with 40% RCY. Varying the tert-BuOH-to-CH3CN ratio to 1:4 and 4:1, respectively, did not increase the RCY of [18F]2a (entries 3 and 4). We speculate that the basicity of TBMA-I is lower than that of other ammonium PTCs due to the presence of three tert-OH moieties. Therefore, the reaction was also performed with the addition of 15 µL of aqueous K2CO3 (1 M), but hardly any radiolabeled product was observed (Table 3, entry 4). To determine the solvent effect, the reaction was also carried out in pure CH3CN without any co-solvent, and only 14% RCY was observed for [18F]2a (Table 3, entry 5). These results suggest that the evaporation of MeOH before addition of the precursor is not sufficient, as the presence of traces of water might affect the [18F]fluoride reactivity.
However, the elution process and reaction conditions of Table 3 resulted in a maximum RCY of 40% (Table 3, entry 1), compared to 57% under the conventional tert-BuOH/MeCN conditions (Table 2, entry 6); nevertheless, the benefits of a simple, time-saving procedure that omits the azeotropic drying step may offer great potential for use in automated radiosynthesis modules. This comparative study clearly shows the difference in RCY between the reaction starting from [18F]fluoride after elution with CH3CN/H2O and subsequent azeotropic drying and the same reaction starting from [18F]fluoride after elution with MeOH without azeotropic drying. As mentioned above, it is tempting to speculate that the decrease in RCY from 57% to 40% could be due to remaining water content in the reaction mixture.

Considering the important advantage of TBMA-I that no azeotropic drying is needed to obtain an adequate RCY, we turned our attention to aromatic radiofluorinations using TBMA-I as the PTC. In a first step, starting from the reported reaction conditions of the copper-mediated aromatic 18F substitution of aryl boronic esters, which utilize protic solvents in combination with polar aprotic solvents for optimal RCY [15], we adopted and optimized the precursor-to-[Cu(OTf)2py4] ratio for the 18F-labeling of 5-benzoxazole boronic acid pinacol ester (3), which served as precursor for the radiosynthesis of 5-[18F]fluorobenzoxazole ([18F]4; Figure 2). Figure 2 shows the optimization of the 3-to-[Cu(OTf)2py4] ratio, revealing that the reported ratio of 2.2 equivalents provided 16% RCY of [18F]4 after 20 min. Interestingly, in the case of precursor 3, a ratio of 1.1:1 provided a maximum RCY of 25%. When the ratio was increased to 4.4, the product could not be detected after 10 min, and a further decrease in the precursor amount relative to [Cu(OTf)2py4] did not improve the RCY. The optimal reaction conditions employed a total reaction volume of 1 mL.

In a second step, we aimed to minimize the reaction volume, as a reduced total amount of boronic acid ester precursor could simplify the purification of the 18F-labeled compounds. Figure 3 shows the effect of a reduced reaction volume on the 18F-fluorination of 6-benzothiazole boronic acid pinacol ester (5).
Elution of [18F]fluoride with TEAB in MeOH allowed a reduction of the reaction volume to 300-600 µL, while the concentrations of precursor 5 and [Cu(OTf)2py4] were kept constant (32.2 mM and 29.2 mM, respectively). The highest RCY achieved was 62% for 6-[18F]fluorobenzothiazole ([18F]6) after 20 min in a reaction volume of 500 µL, whereas a further decrease or increase led to reduced RCYs. The optimal conditions thus found, i.e., a precursor-to-Cu-catalyst ratio of 1.1:1 (16.1 µmol of precursor and 14.6 µmol of [Cu(OTf)2py4]) and a reaction volume of 500 µL, were then applied in the following experiments using TBMA-I as the PTC.

Table 4 summarizes the direct comparison of TEAB and TBMA-I as the PTC in nucleophilic SNAr, applying 18F-fluorination to 5-indoleboronic acid pinacol ester (7) as a model compound. We evaluated the reactivity of 7 using a total volume of 500 µL of DMA/n-BuOH (2:1) in the presence of TBMA-I as the PTC, producing the desired 18F-fluorinated indole [18F]8 with a 55-60% RCY. For comparison, we performed the same reaction using TEAB as the PTC, which resulted in about 10% higher RCY of [18F]8 than with TBMA-I. The time course of the reaction clearly indicates that the TBMA-18F intermediate is more stable and less reactive than the TEAB analog, but reaches a comparable RCY after about 20 min. We suggest that hydrogen bonding between [18F]fluoride and the tert-OH groups of TBMA tightly coordinates the [18F]fluoride. This could be the reason for a relatively slow reaction, due to the reduced availability of free fluoride anions in the reaction mixture.
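As a quick consistency check of the numbers above (a sketch of the arithmetic only, not part of the original work): 16.1 µmol and 14.6 µmol in 500 µL do reproduce the stated 1.1:1 ratio and the 32.2 mM / 29.2 mM concentrations.

```python
precursor_umol, catalyst_umol, volume_mL = 16.1, 14.6, 0.500

print(f"{precursor_umol / catalyst_umol:.2f}")  # 1.10 -> the 1.1:1 ratio
print(f"{precursor_umol / volume_mL:.1f} mM")   # 32.2 mM (µmol / mL = mM)
print(f"{catalyst_umol / volume_mL:.1f} mM")    # 29.2 mM
```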
The applicability of TBMA-I as the PTC was further studied with various aryl pinacol boronic esters. Figure 5 shows the reactivity of TBMA-18F with commercially available boronic acid pinacol ester derivatives of 5-indole, 6-quinoxaline, and 5-benzothiazole. The optimal reaction conditions identified above were applied, affording the highest RCY of 80% for the indole [18F]8. The reactivity of TBMA-18F with 6-quinoxaline boronic acid pinacol ester and 5-benzothiazole boronic acid pinacol ester was lower, providing 57% and 29% RCY, respectively, of the corresponding 18F-fluorinated products after 20 min.

General

Radio-HPLC was performed on an Agilent 1100 system (Agilent Technologies, Böblingen, Germany) with a quaternary pump, a variable wavelength detector, and a HERM LB 500 radio-HPLC detector (Berthold Technologies, Bad Wildbad, Germany) on a Chromolith RP-18e column (RP, 100 × 4.6 mm, 5 µm particle size, flow rate: 4 mL/min), using a linear gradient from 10-100% acetonitrile (0.1% TFA) in water (0.1% TFA) over 5 min. All 18F-labeled compounds were identified via the retention times (tR) of their non-radioactive reference compounds.

Aromatic 18F-Fluorination: Radiosynthesis of [18F]4

The elution of [18F]fluoride was modified from the reported procedure of Zischler et al. [15]: after loading aqueous [18F]fluoride onto a Sep-Pak Light (46 mg) Accell Plus QMA carbonate cartridge from the male side, acetone (dry, 2.0 mL) was passed through the cartridge from the same side. Subsequently, air (10 mL) was applied from the male side, and [18F]fluoride could be eluted directly into the reaction vial using a solution of TEAB in n-BuOH.

Conclusions

In conclusion, we introduced the new tert-OH-functionalized quaternary ammonium salt tri-(tert-butanol)-methylammonium iodide (TBMA-I) for efficient [18F]fluoride elution and as the PTC in nucleophilic 18F-fluorination of aliphatic tosylates and aryl pinacol boronic ester precursors. TBMA-I showed promising PTC properties for use in 18F-fluorination reactions. Elution of [18F]fluoride with TBMA prevented the occurrence of hydrolytic byproducts during the aliphatic radiofluorination of the bistosylates used as precursors in the 18F-synthesis of prosthetic groups. Moreover, TBMA-I also demonstrated its potential for use as a PTC in the aromatic radiofluorination of aryl boronic acid pinacol esters.
Differences in the gut microbiomes of dogs and wolves: roles of antibiotics and starch

Background: Dogs are domesticated wolves. Changes in living environment, such as diet and veterinary care, may affect the gut bacterial flora of dogs. The aim of this study was to assess gut bacterial diversity and function in dogs compared with captive wolves. We surveyed the gut bacterial diversity of 27 domestic dogs, which were fed commercial dog food, and 31 wolves, which were fed uncooked meat, by 16S rRNA sequencing. In addition, we collected fecal samples from 5 dogs and 5 wolves for shotgun metagenomic sequencing to explore changes in the functions of their gut microbiome. Results: Differences in the abundance of core bacterial genera were observed between dogs and wolves. Together with shotgun metagenomics, the gut microbiome of dogs was found to be enriched in bacteria resistant to clinical drugs (P < 0.001), while wolves were enriched in bacteria resistant to antibiotics used in livestock (P < 0.001). In addition, a higher abundance of putative α-amylase genes (P < 0.05; P < 0.01) was observed in the dog samples. Conclusions: The living environments of dogs and captive wolves have led to increased numbers of bacteria with antibiotic resistance genes, owing to exposure to antibiotics through direct and indirect routes. In addition, the living environment of dogs has allowed the adaptation of their microbiota to a starch-rich diet. These observations reflect the domestic lifestyles of dogs and captive wolves, which might have consequences for public health.

Background

Dogs (Canis lupus familiaris) were probably the first and only animal domesticated before the advent of settled agriculture [1]. The history of dog domestication is often considered to be a two-stage process, where primitive dogs were first domesticated from gray wolves (Canis lupus laniger) and then, in a second stage, further selection on these primitive forms yielded the many specialized dog breeds found today [2-4]. Recent investigations suggest that the novel adaptations allowing early ancestors of dogs to thrive on diets rich in starch, in comparison to the carnivorous diet of wolves, were a crucial step in domestication [5]. Since the typical food sources for wolves are ungulates, such as wild boar, and small mammals, the adaptation of dogs to eating grains and other vegetation is reflected in changes in the dog genome in the sequences of genes involved in starch and glucose metabolism [5]. Microbes have been found living in the gut of virtually all metazoans, including both invertebrates and vertebrates [6]. It is commonly appreciated that the activity of microbes, and their metabolic products, plays important roles in the health of mammals, including humans [7,8]. Adaptation and convergence of microbiota to diet occur across mammals, and the food consumed by a mammal influences its gut microbiota [9]. For example, diets rich in plant fiber promote a gut microbiota that is considerably different from the microbiota found with diets rich in animal fat [10]. It has been shown that the microbial composition of the gut of the giant panda differs from that of its carnivorous close relatives, likely due to the adaptation of its gut microbiota to the digestion of bamboo [11]. A comparative study of 51 dog breeds has shown that diet influences bacterial composition and function [12].
To date, however, reconstruction of host-microbe evolutionary histories has been limited, and additional studies are needed on the gut microbiomes of wild animals [13]. An area of interest is the diversity of antibiotic resistance in the gut bacteria of dogs. For example, cephalosporin-resistant Enterobacteriaceae were found to be prevalent among dogs of various backgrounds living in animal shelters [14]. Similarly, a study of companion animals in North-West Germany found that 2.6% of the dogs in this population possessed methicillin-resistant Staphylococcus aureus and 3.6% of them had extended-spectrum beta-lactamase-producing Enterobacteriaceae [15]. In addition, a study that characterized and compared antibiotic resistance in fecal E. coli isolates from dogs and their owners found that the most prevalent resistance was to sulfamethoxazole [16]. These studies suggest that differences in the prevalence of antibiotic-resistant bacteria exist within dogs that might reflect their living conditions. In this study, we addressed this question by assessing the composition of the gut microbiota found in 27 dogs, belonging to 3 different breeds, and 31 captive wolves, initially using bacterial 16S rRNA sequences from fecal samples, followed by the parallel deciphering of microbial genomes from five samples from each population, to assess the functional consequences of the microbes for their hosts. We found that the gut microbes of dogs and wolves possess distinct sets of genes involved in antibiotic resistance, which might echo direct and indirect antibiotic intake. In addition, genes related to starch metabolism are found in greater abundance in the gut microbes of dogs compared to wolves, which might assist the better utilization of starch by dogs.

Comparative analysis of 16S communities of Canis lupus

Gut bacteria in the two groups of animals (27 dogs and 31 wolves) were identified from Illumina 16S ribosomal DNA V4-V5 hypervariable region sequence data from fecal samples. A total of 3,858,805 effective tags were obtained, with an average of 72,808 tags per sample. Tags were clustered into 14,118 operational taxonomic units (OTUs) using a 97% sequence identity cutoff. Rarefaction curves for phylogenetic diversity plateaued, approximating a saturation phase, after 7000 sequences per sample. The α-diversity of the gut microbes, which was measured using the observed numbers of OTUs (P < 0.01), Shannon index (P < 0.001), and Simpson index (P < 0.001), was significantly higher within the dog group than within the wolves (Fig. 1a). We then compared the overall community structure and composition of the microbiota between the two groups. Interestingly, the two groups showed highly separated clustering by NMDS (non-metric multidimensional scaling) distances (Fig. 1b). We also found that dogs and wolves have different microbial community compositions, with Allobaculum (Kruskal-Wallis; LDA = 4.93, P < 0.001) and Lactobacillus (Kruskal-Wallis; LDA = 4.91, P < 0.001) dominating in dogs, while wolves possess more Clostridium sensu stricto 1 (Kruskal-Wallis; LDA = 5.17, P < 0.001) (Fig. 1c).
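The α-diversity metrics used above are simple functions of each sample's OTU count vector. A minimal sketch of the standard definitions is given below (our own illustration in Python, not the QIIME code used by the authors; note that "Simpson" here is the Gini-Simpson form, 1 − Σp²):

```python
import numpy as np

def alpha_diversity(otu_counts):
    """Observed OTU richness, Shannon index, and (Gini-)Simpson index
    for a single sample's vector of OTU counts."""
    counts = np.asarray(otu_counts, dtype=float)
    counts = counts[counts > 0]        # drop OTUs absent from this sample
    p = counts / counts.sum()          # relative abundances
    shannon = -np.sum(p * np.log(p))   # Shannon entropy (natural log)
    simpson = 1.0 - np.sum(p ** 2)     # probability two random reads differ
    return {"observed_otus": int(counts.size),
            "shannon": float(shannon),
            "simpson": float(simpson)}

print(alpha_diversity([120, 30, 5, 0, 45]))
```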
Antibiotic resistance profiling of the gut microbiomes of dogs and wolves

Shotgun metagenomic data can be used to assess the metabolic repertoire of the entire complex microbial population by analyzing coding genes within the microbiomes [17]. To examine the consequences of the genetic changes of the dog compared to the wolf for the composition of their gut microbiomes, we investigated metagenomes from 5 dogs (ASD04, ASD05, D19, D20, and D23) and 5 wolves (ASW03, ASW04, ASW05, W28, and W27) using shotgun metagenomic sequencing. The 10 samples were selected based on their 16S microbial profiles to minimize intragroup differences; nevertheless, we cannot eliminate differences other than composition. The shotgun metagenomic approach generated a total of 869,723,146 reads, with an average of 86,972,315 reads per individual. We searched for antibiotic resistance genes (ARGs) with the CARD database [18-20], where we found that ARGs account for 0.58% and 0.84% of the genes in the dog and wolf microbial metagenomes, respectively. More specifically, genes coding for ARGs such as cdeA were more abundant in dogs, while tetO, Bifidobacteria intrinsic ileS, aminocoumarin-resistant alaS, mefA, Streptomyces cinnamoneus EF-Tu, adeG, adeC, CfxA6, mefC, and tet40 were more prominent in wolves (Fig. 2). In addition, although the Staphylococcus aureus parE and LlmA 23S ribosomal genes were not among the top ten most abundant gene types in dogs, they were about 270 and 12 times more abundant, respectively, than in the wolf.

Discussion

Bacterial 16S rRNA sequences were used to assess the composition of the gut microbiota from fecal samples of 27 dogs, belonging to 3 different breeds, and 31 captive wolves. Bioinformatic analyses performed to evaluate whether differences in the gut microbiota existed between the canine breeds found no differences. A recent study comparing domestic dog breeds and their wild relatives also suggested that host phylogeny plays only a minor role in the modulation of gut populations in Canis lupus [12]. Therefore, we ignored differences among dogs and considered them as one group. Alpha-diversity analyses, using the observed number of OTUs, Shannon index, Simpson index, and Pielou's evenness index, show that the microbiota of dogs have a higher diversity than that of wolves, which might be due to the diversity of the dog food.

Fig. 1 (a) Variation in microbial diversity and richness in dogs and wolves, calculated using the observed number of OTUs, Shannon index, Simpson index, and Pielou's evenness index (*P < 0.05; **P < 0.01; ***P < 0.001). (b) NMDS between the gut microbiota from pairs of animals. Each node represents a pair of samples. Note that the gut bacteria of wolves from different zoos cluster together, indicating that the influence of different feeding areas is weak. (c) Histograms of the proportions of the top 20 OTUs classified at the genus level. OTUs were compared by Holm-Sidak t-tests, with significant differences indicated by asterisks (*P < 0.05; **P < 0.01; ***P < 0.001).

Fig. 2 Abundance of different types of antibiotic resistance genes (ARGs) in the gut microbiomes of dogs and wolves. Histograms of the relative abundance of the top 20 ARGs in each group. Significance of the differences was tested by the Holm-Sidak method and indicated by asterisks (*P < 0.05; **P < 0.01; ***P < 0.001).

Fig. 3 Abundance of genes associated with starch digestion in the metagenomes of the dog and wolf. (a) Pathway for starch digestion, from KEGG. (b) Histogram showing that the normalized abundance of genes encoding enzymes related to starch digestion is significantly enriched in the dog microbiome relative to the wolf.
Significant differences are indicated by asterisks (*P < 0.05; **P < 0.01; ***P < 0.001).

Examination of the abundance of different species in the microbiota showed that the genus Clostridium sensu stricto 1 was most prevalent (35.6%) in the wolf microbiota, which is consistent with the results of Wu et al. [21], suggesting co-evolution between this genus and the wolf gut. In the dog, the genera Lactobacillus (17.5%) and Allobaculum (19.1%) were the most prevalent, which might be due to the change in diet to one that is high in carbohydrates and fiber. Gut resistome research typically focuses on humans, where numerous and diverse resistance gene orthologues occur and the origins of drug resistance genes in the clinic have long been debated [22,23]. Today, veterinary antibiotics (VAs) are widely used in many countries to treat diseases in pets and to improve growth rates and feed efficiency in farmed livestock [24]. This has resulted in increased levels of antibiotic resistance in the gut flora of food animals, which subsequently enters the food chain, or groundwater, of other omnivores and carnivores [23,25], potentially having deleterious effects [26]. Therefore, a comparison of the gut microbe resistome of the wolf to that of the domestic dog might provide insight into how antibiotic use, both direct and indirect, has altered antibiotic resistance in the gut microbiome of Canis lupus. Furthermore, research on the gut resistome in domestic dogs might provide clues concerning antibiotic resistance in dogs, identifying antibiotics that should be replaced for increased efficacy. Compared to wolves, the dog gut microbiome is considerably enriched for cdeA, Staphylococcus aureus parE, and llmA, which confer resistance to fluoroquinolones, novobiocin, and clindamycin, respectively. Indeed, enrofloxacin (a fluoroquinolone) and clindamycin are common clinical antimicrobial drugs given to sick dogs [27], and cdeA was found to be enriched in the gut microbiota of human infants treated with antibiotics [28]. This observation suggests that the fluoroquinolone and clindamycin resistance in dogs likely comes from veterinary drugs, and that dog gut microbiota have undergone selection due to clinical exposure to antibiotics. Moreover, resistance to some antibiotics, such as novobiocin, which is rarely used in the treatment of dogs, was potentially acquired through exposure to these antibiotics from pet food. Unexpectedly, we found a diverse set of ARGs in wolves: tetO and tet40, conferring resistance to tetracycline; Bifidobacteria intrinsic ileS, conferring resistance to mupirocin; aminocoumarin-resistant alaS, conferring resistance to novobiocin; mefA, conferring resistance to macrolides; Streptomyces cinnamoneus EF-Tu, conferring resistance to elfamycin; adeG, mefC, and adeC, conferring multidrug resistance; and CfxA6, conferring resistance to cephamycin. Since uncooked meat is the primary food source for the wolves, we suspect that this is the source of these antibiotic resistance genes. Livestock, such as chickens, are treated with antimicrobials during production to maintain health and productivity [29,30]. Thus, uncooked meat will contain bacteria with antibiotic exposure, which could then be indirectly transferred to a predator (e.g., a wolf) via the digestive system. In contrast, the high-temperature processing used in dog food production would lead to the destruction of ARGs and drug-resistant bacteria.
It is worth noting that humans also obtain protein from livestock; thus, the long-term consumption of under-cooked meat potentially has serious consequences for public health and threatens the sustainability of the livestock industry. In 2019, Alessandri et al. showed that different diets in dogs resulted in differentiated microbiota, however with a core set of gut bacterial genera that did not fluctuate, which might be due to extensive co-evolution with the host [12]. We hypothesize that, among the environmental factors separating our two populations (diet, sanitation, hygiene, geography, and climate), the presence of Allobaculum could be a consequence of high fiber intake, maximizing metabolic energy extraction from ingested plant polysaccharides. It has been reported that Clostridium sensu stricto 1 and Allobaculum are linked to protein and lipid degradation [31,32], whereas Lactobacillus can help ferment carbohydrates. Enhanced starch digestion, through AMY2B copy number expansion in the dog genome, has been postulated to be an adaptation to the shift from the carnivorous diet of wolves to the starch-rich diet of the domesticated dog [5]. Our dog microbiome samples show higher abundances of putative GH13- and GH31-type genes compared to the wolf. This result suggests that the increased amylase generated by the changes in the dog genome may not completely explain the shift to the starch-rich diet in dogs, and that changes in the composition of gut microbes might help break down starch-rich food. Gut microbes in domestic dogs should be better able to digest starch than those of captive wolves; we further hypothesize that the gut microbes of free-running wolves, which get significantly more aerobic exercise and have fewer opportunities to eat grain, should have a weaker ability to digest starch than those of our captive wolves.

Conclusions

In summary, our findings demonstrate that long-term domestication has affected the gut microbes of dogs, leading to increases in the number of genes coding for starch digestion and antibiotic resistance. Furthermore, direct consumption of uncooked livestock products also indirectly leads to increases in ARGs. Nevertheless, the two groups differ in many other variables, such as amount of aerobic exercise, hygiene, and exposure to humans (for example, veterinary care); thus, it is difficult to disentangle the roles of genetic and environmental changes in shaping the microbiome under domestication and captivity.

Sample collection

Fecal samples from 27 adult police dogs (Canis lupus familiaris) (including 22 purebred German Shepherds, 4 purebred Belgian Malinois, and 1 purebred English Springer Spaniel) (25-96 months old) and 31 adult wolves (Canis lupus laniger) were collected for this project between December 2016 and January 2017. The dogs were from a dog-breeding center where the animals were kept individually in kennels, and this dog-breeding center had only these three breeds. Investigators were gloved, masked, and gowned during sampling. To avoid non-physiological changes in the fecal microbiota and contamination with organisms from the environment, fresh feces were collected as soon after defecation as possible, while the fecal material was still warm, soft, and odorous. A sterile medicine spoon was used to remove the outer part of the feces, and each sample was transferred into a tube using a new sterile medicine spoon as quickly as possible.
Tubes with fecal samples were kept initially on dry ice and then stored at −80 °C until processing. Wolf fecal samples were similarly collected from communal pens at two zoos (Shenyang Forest Zoological Garden, Liaoning, China, 17 wolves; Changchun Plant and Animal Park, Jilin, China, 14 wolves). While wild free-running wolves might have been more suitable for this study, collecting samples from these animals would have been more difficult; we therefore collected from captive wolves that are recent (a few generations) descendants of wild-caught wolves. The habits of these wolves show no evidence of domestication.

Diets and treatments of the dogs and wolves

The diet of the police dogs was composed of commercial dog food, which contains grain (rice/wheat/corn), meat, vitamins, and minerals, and was manufactured by Pedigree, MARS, China. Moreover, as puppies, these dogs were injected with a combination vaccine to prevent viral diseases, and the police dog breeding center is equipped with a dog hospital where cephalosporins (β-lactams), gentamicin (an aminoglycoside), and enrofloxacin (a fluoroquinolone) are commonly used as medication. Wolves were primarily fed unprocessed chicken carcasses and beef, which were purchased from markets and not labeled as organic, and could eat grass found on the grounds of their habitats. In addition, live sheep were occasionally put in with the wolves, as evidenced by white bones seen in their habitats. Although the zoo animals did receive some medical attention, including treatment with antibiotics, this was rare, as injured and sick wolves usually healed by themselves, and treating them can be extremely dangerous for the breeders. The wolf populations are descendants of wild wolves that were captured a few decades ago and have since multiplied in a fenced area of three thousand square meters.

DNA extraction and sequencing

Total bacterial DNA was extracted at Novogene Bioinformatics Technology Co., Ltd. (Beijing, China) using TIANGEN kits according to the manufacturer's recommendations. Approximately 40-200 mg of fecal material was used for each extraction. The hypervariable V4-V5 region of the 16S rRNA gene was amplified using specific primers (515F: GTG CCA GCM GCC GCG G; 907R: CCG TCA ATT CMT TTR AGT TT). All PCR reactions were carried out with Phusion® High-Fidelity PCR Master Mix (New England Biolabs). An equal volume of 1X loading buffer (containing SYBR Green) was mixed with the PCR products, which were then separated by electrophoresis on 2% agarose gels. Bright bands between 400 and 450 bp in length were chosen for further analysis. Selected PCR product bands were then mixed in equidense ratios and purified with the Qiagen Gel Extraction Kit (Qiagen, Germany). Sequencing libraries were generated using the TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA) following the manufacturer's recommendations, with index codes added. Library quality was assessed on the Qubit® 2.0 Fluorometer (Thermo Scientific) and the Agilent Bioanalyzer 2100 system. The library was then sequenced on an Illumina HiSeq 2500 platform, generating 250 bp paired-end reads.

16S rRNA sequence analysis

Paired-end reads were assigned to samples based on their unique barcodes and truncated by removing the barcode and primer sequences. Paired-end reads (raw tags) were merged using FLASH (Version 1.2.7). Quality filtering of the raw tags was performed under specific filtering conditions to obtain high-quality clean tags according to the QIIME2 software quality control process.
Tags were compared with the Gold database using the UCHIME algorithm to detect chimeric sequences, which were then removed. Sequence analysis was performed using UPARSE (Version 7.0.1001). Sequences with ≥97% similarity were assigned to the same OTU. For each representative sequence, the Greengenes database was used, based on the RDP classifier (Version 2.2) algorithm, to annotate the taxonomic information. To study the phylogenetic relationships of the different OTUs, and the differences in the dominant species between groups, a multiple sequence alignment was constructed using MUSCLE (Version 3.8.31). OTU abundance information was normalized to the number of sequences in the sample with the fewest sequences. α-diversity and β-diversity analyses were performed on the normalized data, calculated with QIIME, and displayed with R (Version 2.15.3).

Metagenomic sequence analysis

All samples were paired-end sequenced on the Illumina platform (insert size 350 bp, read length 150 bp) at Novogene Bioinformatics Technology Co., Ltd. After quality control, the clean data were aligned to the dog genome with Bowtie (Version 2.2.4; parameters: --end-to-end, --sensitive, -I 200, -X 400) to filter out reads of host origin. The set of high-quality reads was then used for further analysis. Assembly of the clean data was performed using SOAPdenovo2 (Version 2.04). Assembled scaffolds were broken at N connections to generate scaftigs without Ns. Clean data from all samples were mapped to the scaftigs using Bowtie (Version 2.2.4) to identify unused paired-end (PE) reads. All reads not used in this step were combined and then used for a mixed assembly with SOAPdenovo2 (Version 2.04). Scaftigs (continuous sequences within scaffolds) < 500 bp were filtered out before statistical analysis of both the single and mixed assemblies. ORFs in the scaftigs (≥ 500 bp) assembled from both the single and mixed assemblies were predicted using MetaGeneMark (prokaryotic GeneMark.hmm Version 2.10). A non-redundant gene catalogue was then constructed with CD-HIT (Version 4.5.8). Clean data for each sample were mapped to the initial gene catalogue using Bowtie (Version 2.2.4) to obtain the number of reads mapped to each gene in each sample. Only genes with ≥2 mapped reads were retained and used for subsequent analysis. The abundance of a gene was calculated by counting the number of reads that aligned to the gene, normalized by the gene length. To obtain functional assignments of the genes, they were aligned to integrated functional databases, for example, the CAZy database, using DIAMOND (Version 0.9.9, blastp, -e 1e-5). For each sequence's alignment result, the best hit was used for subsequent analysis. Resistance Gene Identifier (RGI) software was used to align the unigenes to the CARD database with default parameter settings and a blastp e-value ≤ 1e-5.
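The gene-abundance measure described above (≥2 mapped reads, counts normalized by gene length) can be sketched as follows; this is our own minimal illustration of that normalization, not the authors' pipeline code, and the per-kilobase scaling and relative-abundance step are our choices:

```python
def gene_abundances(read_counts, gene_lengths_bp, min_reads=2):
    """Length-normalized relative gene abundances, following the scheme
    described above: genes with fewer than min_reads mapped reads are
    discarded and the remaining counts are divided by gene length."""
    raw = {gene: reads / (gene_lengths_bp[gene] / 1000.0)  # reads per kb
           for gene, reads in read_counts.items()
           if reads >= min_reads}                          # keep >= 2 reads
    total = sum(raw.values())
    # relative abundance, so samples of different depth are comparable
    return {gene: value / total for gene, value in raw.items()}

print(gene_abundances({"tetO": 120, "cdeA": 1, "mefA": 30},
                      {"tetO": 1920, "cdeA": 1233, "mefA": 1218}))
```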
Personalized prescription of imatinib in recurrent granulosa cell tumor of the ovary: case report

Ovarian cancer is the fifth leading cause of cancer-related female mortality and the most lethal gynecological cancer. In this report, we present a rare case of recurrent granulosa cell tumor (GCT) of the ovary. We describe the case of a 26-yr-old woman with progressive GCT of the right ovary despite multiple lines of therapy, who underwent salvage therapy selection based on a novel bioinformatical decision support tool (Oncobox). This analysis generated a list of potentially actionable compounds, which, when used clinically, led to a partial response and later long-term stabilization of the patient's disease.

INTRODUCTION

Ovarian cancer is the fifth most common cause of cancer-related death among women and the most lethal gynecological malignancy (Stewart and Wild 2014). Worldwide, malignant ovarian neoplasms account for an estimated 225,500 new cases and 140,200 deaths (22,300 and 15,500, respectively, in the United States) (Siegel et al. 2017). Total incidence increased by 6% from 2005 to 2010 (Siegel et al. 2017). Despite significant advances in the development of new treatment regimens, the survival rate has remained poor, with 50% of affected women succumbing to their disease by 5 years (Eisenhauer 2017). Granulosa cell tumor (GCT) of the ovary constitutes 2%-5% of all ovarian malignancies (Schumer and Cannistra 2003). Most cases are diagnosed early, and the prognosis is favorable (Khosla et al. 2014). The scope of surgical treatment depends on the stage of disease and the age of the patient. In cases of favorable prognosis and reproductive age, treatment may be limited to unilateral salpingo-oophorectomy and further observation. In postmenopausal women, bilateral salpingo-oophorectomy is recommended. The standard extent of surgical intervention in GCT is extirpation of the uterus with adnexa and removal of the greater omentum. At late (II-IV) stages of the disease, radical tumor removal is necessary. Surgical treatment (radical removal of recurrent tumors or cytoreductive operations) is also recommended by the NCCN for treatment of GCT relapses and metastases. Common metastatic lesions include neoplasms in both the pelvic area and the parenchymal organs. Adjuvant platinum-based chemotherapy is recommended for patients with a high risk of recurrence. In the presence of residual neoplasms, regimens that include platinum drugs are effective in a considerable proportion of cases (∼60%) (Bridgewater and Rustin 1999). In this report, we present a case of recurrent ovarian GCT, which progressed during platinum-based therapy but was successfully treated with imatinib monotherapy. The imatinib prescription was based on individual analysis of gene expression in the patient's tumor and bioinformatic profiling of signaling pathway activation.

RESULTS

A 26-yr-old woman was diagnosed at the N.N. Blokhin Russian Cancer Research Center with primary GCT of the right ovary in 2001. The patient underwent unilateral salpingo-oophorectomy, with peritoneal biopsies showing no evidence of tumor growth in the left ovary and solitary complexes of malignant cells. From 2003 until 2008, the patient underwent three excisions of the following neoplasms: a cystadenoma in the left ovary (the operation was organ-preserving because of pregnancy planning, and part of the left ovary was saved), a cystic formation in the left ovary, and a GCT in the right lateral region of the abdominal cavity.
Dissemination of neoplastic foci on the patient's peritoneum was observed in 2010. Ultrasound examination revealed a 2.9 × 0.8-cm neoplasm in the S7 liver capsule; the dimensions of a cystic formation in the pelvis were 4.0 × 3.2 × 2.6 cm. The patient received megestrol (Megace, 160 mg/day) for 5 mo; however, the lesions progressed during this period. Extirpation of the recurrent tumors in the pelvic area was performed, with dissection of adhesions and resection of the greater omentum. Relapse and further progression of the disease started in 2012. Abdominal examinations showed formations in the Douglas space, in the pelvic area and to the left behind the uterus, in the region of the splenic hilum, and in the lateral canal of the liver's right lobe. We introduce a numbering of the neoplasms here to allow comparison of measurements across examinations; the dimensions of all neoplasms across the study are summarized in Supplemental Table S1. The largest formations were identified on the posterior surface of the liver (3.8 × 2.5 cm, neoplasm #1) and in the right part of the posterior paranephric fat (6.8 × 5.8 cm, neoplasm #3). A 4.8 × 4.5-cm neoplasm in the Douglas space (neoplasm #2) displaced the rectum to the right. The patient underwent cytoreductive (debulking) surgery. Hematoxylin and eosin staining confirmed the primary origin of the tumor (Fig. 1). The patient's postoperative condition was satisfactory, without complications. BEP (bleomycin, etoposide, and cisplatin) therapy was prescribed following the surgical procedures; however, it was the patient's decision to refuse further chemotherapy. The disease progressed in 2014. The lesions revealed included multiple cystic neoplasms in the right lobe of the liver, neoplasms in the splenic hilum and in the epigastrium, and multiple lesions in the navel field. The neoplasm in the pelvic area also progressed. The patient agreed to receive chemotherapy at the beginning of 2015 and was administered four courses of BEP therapy. However, ultrasound examination revealed continuous progression of the disease. To identify further treatment options, we performed molecular analysis of the patient's tumor. We extracted DNA from the patient's formalin-fixed, paraffin-embedded (FFPE) tumor tissue sample, obtained following cytoreductive (debulking) surgery in 2013, and performed whole-exome sequencing. The sequencing data were deposited in the NCBI Sequence Read Archive (SRA) under accession ID PRJNA503667. The tumor appeared to be FOXL2 C134W-positive, which corresponds to adult-type GCT of the ovary (Shah et al. 2009). We also extracted RNA from the patient's sample and profiled gene expression (for details, see the Materials and Methods section). The results of the molecular analysis were deposited in the Gene Expression Omnibus (GEO) database under accession ID GSE112579. We next used the Oncobox bioinformatical platform for personalized prescription of target therapy. The Oncobox target drug scoring algorithm is based on the analysis of intracellular signaling pathway activation using gene expression data. Oncobox analysis estimates the activation level for approximately 380 cancer-related signaling pathways. In particular, Oncobox analysis revealed that ERK signaling was one of the pathways strongly up-regulated in the patient's tumor sample, as compared to normal tissue taken from unrelated postmortem donors (Fig. 2).

Fig. 2 The ERK signaling pathway was hyperactivated in the patient's tumor tissue. Visualization was provided by the Oncobox software. The pathway is shown as an interacting network, where green arrows indicate activation and red arrows indicate inhibition. The color depth of each node of the network corresponds to the logarithm of the case-to-normal (CNR) expression ratio for that node, in which "normal" is a geometric average over the normal tissue samples; the scale represents the extent of up-/down-regulation. The molecular targets of imatinib are shown by black arrows.
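The node shading described in the Fig. 2 caption is a log case-to-normal ratio against the geometric mean of the normal samples. A minimal sketch of that quantity is shown below (our illustration only; the full Oncobox pathway-activation scoring aggregates such values over whole pathways and is not reproduced here):

```python
import numpy as np

def log_cnr(tumor_expr: float, normal_expr) -> float:
    """Logarithm of the case-to-normal ratio (CNR) for one gene:
    tumor expression over the geometric mean of normal samples."""
    normal = np.asarray(normal_expr, dtype=float)
    geo_mean = np.exp(np.mean(np.log(normal)))  # geometric average of normals
    return float(np.log(tumor_expr / geo_mean))

# A gene at 800 units in the tumor vs. four normals near 100 units:
print(round(log_cnr(800.0, [95.0, 110.0, 105.0, 98.0]), 2))  # > 0 -> up-regulated
```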
According to the results of the Oncobox test, the following target drugs could potentially be effective for the treatment of this patient (in order of decreasing predicted efficacy): regorafenib, sorafenib, sunitinib, pazopanib, axitinib, aflibercept, cabozantinib, and imatinib. The pathway activation profiles and the full ratings of the target drugs are provided in Supplemental Table S2. We also determined the expression level of c-Kit (an imatinib target) using immunohistochemistry. The sample appeared to be c-Kit-negative, in accordance with the microarray data (Table 1; Supplemental Table S3). The patient was administered sorafenib (Nexavar, 400 mg daily) from October 2015. The expression levels of the sorafenib target genes are presented in Table 2. However, sorafenib was not well tolerated, and the patient developed polyarthritis. Sorafenib therapy was terminated 2 mo after initial administration. In January 2016, ultrasound examination indicated a decrease in the size of several cystic formations: three of four neoplasms decreased in size after sorafenib treatment (a 7% decrease in the sum of all lesions' diameters). As sorafenib was not tolerated by the patient, the therapy regimen was switched to imatinib, another TKI that was predicted to be effective for this patient by the Oncobox test. Generic imatinib (Filachromine, 400 mg daily) was then administered; the corresponding tumor measurements are given in Supplemental Table S1 and Figs. 3-5A,B. However, the generic imatinib was not as well tolerated as the original imatinib (Gleevec): the patient complained of severe colitis, and Filachromine administration was terminated because of this side effect. Indeed, substitution of the original drug with a generic could also potentially decrease the treatment effect (Supplemental Table S1; Figs. 3-5B,C); the sum of all target lesions' diameters increased by 1.6%. Furthermore, ascites, compression of the left ureter, and blockage of the left kidney were observed. MRI of the abdomen and pelvic area performed in December 2016 (source images were not available) revealed that the cystic-solid nodules under the right lobe of the liver (13.5 × 12.0 cm) compressed the liver and the hepatic hilum; the nodules in the right lateral canal (7.8 × 5.5 cm) displaced the right kidney upward and medially. The dimensions of the tumor in the pelvic cavity were 17.0 × 11.0 cm; the rectum was shifted to the right and compressed, and the bladder was compressed and shifted anteriorly. Another cytoreductive surgery was performed in December 2016. The neoplasms in the right lobe of the liver (#1) and in the pelvic area (#5) were partially removed. Revision of the abdominal cavity revealed an extensive adhesive process. Tumor nodules with a thin capsule were intimately connected with the bladder, rectum, and ureters. Operative blood loss was 4 L. The metastatic nodules were removed only in part because of technical difficulties.
The ultrasound examination performed after surgery revealed that the liver was enlarged, with the right lobe pushed back and pressed by a cystic neoplasm (neoplasm #1, 11.0 × 12.0 × 14.0 cm). A similar neoplasm of size 8.4 × 6.6 cm (neoplasm #3) was located laterally. Original imatinib (Gleevec) became available, and its administration (400 mg daily) started in February 2017. Before imatinib treatment, ultrasound examination in February 2017 revealed a tumor in the region of the posterolateral liver capsule (15.5 × 12.0 cm, neoplasm #1), in the splenic hilum (5.0 × 3.3 cm, neoplasm #4), and in the area of the lateral channel in the abdominal cavity (9.0 × 7.5 cm, neoplasm #3), as well as several neoplasms in the pelvic region (neoplasms #7, #8, #9). MRI analysis in October 2017 revealed a decrease in size for these lesions, with the exception of the neoplasm in the splenic hilum area (its measurements are given in Supplemental Table S1). A new cystic neoplasm (#12, Supplemental Fig. S9) was observed during the MRI examination in October. The sum of the largest diameters for all target lesions increased by 12% when compared to the MRI results from September 2016 (considering only the lesions investigated during both examinations), which supports disease stabilization. An MRI examination in March 2018 revealed moderate growth of lesions #3, #4, #10, and #13 (Fig. 5; Supplemental Figs. S1, S7, and S10, respectively; the sum of the largest diameters for all target lesions increased by 15%), which corresponds to disease stabilization. As of June 2018, the patient is alive and physically active with a Karnofsky scale index of 90%. Imatinib administration is continued, no significant side effects are observed, and no surgical procedures are required.

DISCUSSION

Tyrosine kinase inhibitors (TKIs) represent a class of target drugs that have been widely integrated into clinical practice since the beginning of the twenty-first century (Vergoulidou 2015). Protein tyrosine kinases (TKs) play key roles in the development and progression of cancer by acting as major components of various intracellular signaling pathways. These enzymes actively participate in many intracellular processes, including proliferation, metabolism, angiogenesis, differentiation, and apoptosis. Imatinib is a TKI that targets the pathological fusion enzyme BCR-ABL, the platelet-derived growth factor receptors (PDGFRs), KIT, and several other TKs (Druker et al. 2001; Matei et al. 2004; Miselli et al. 2007; Ren et al. 2011). Imatinib is a derivative of 2-phenylaminopyrimidine and acts through blocking of the ATP-binding domain of TKs, thus preventing their phosphorylation and subsequent activation (Iqbal and Iqbal 2014). Imatinib monotherapy is FDA-approved for the treatment of Philadelphia chromosome-positive chronic myeloid leukemia and Kit-positive unresectable malignant gastrointestinal stromal tumors (Demetri et al. 2002; Peng et al. 2005; Berman et al. 2013). The Phase 2 clinical trial of imatinib monotherapy in epithelial ovarian cancer was terminated because of the absence of an objective response (NCT00510653). The combination of imatinib and paclitaxel in recurrent epithelial ovarian cancer was studied in trial NCT00840450. Twelve-month progression-free survival was achieved for only 17% of participants. Thus, there was no evidence of imatinib efficacy in epithelial ovarian cancer. However, several other TKIs showed promising efficacy in the treatment of epithelial ovarian cancer (Ntanasis-Stathopoulos et al. 2016). Several studies investigated the efficacy of imatinib in ovarian GCT.
The results obtained in cell lines were conflicting (Chu et al. 2008; Jamieson and Fuller 2015); however, a previous case report described a benefit of imatinib in GCT of the ovary (Raspagliesi et al. 2011). Here, we report a case of adult-type recurrent ovarian GCT successfully treated with imatinib monotherapy. Although the disease progressed during best supportive care, imatinib treatment resulted in a prolonged stabilization of the disease. Importantly, the prescription of imatinib was based on the bioinformatical analysis of gene expression data from the patient's tumor biopsy (Oncobox platform). Moreover, sorafenib treatment, which was also suggested by Oncobox, resulted in a partial tumor response and was terminated only because of significant side effects. We conclude that a personalized approach to TKI prescription in ovarian cancer is needed. The selection of patients who may potentially benefit from imatinib or other TKI treatment may be based on the molecular profiling of tumor biopsies and further bioinformatical analysis. However, further extended clinical trials are required for the validation and adjustment of clinical indications for this particular bioinformatical platform, Oncobox.

MATERIALS AND METHODS

An FFPE block with >80% tumor cells was analyzed. We extracted RNA from five 250-µm-thick sections of this block that were made simultaneously. DNA was extracted from the FFPE tissue using the AnaPrep FFPE DNA extraction kit following the manufacturer's instructions. Whole-exome DNA was captured from total genomic DNA using the SeqCap EZ System from NimbleGen according to the manufacturer's instructions. Briefly, genomic DNA was sheared and size-selected to roughly 200-250 base pairs, and the ends were repaired and ligated to specific adapters and multiplexing indexes. Fragments were then incubated with SeqCap biotinylated DNA baits followed by ligation-mediated PCR, and the hybrids were purified using streptavidin-coated magnetic beads. The baits were then digested to release the targeted DNA fragments, followed by a brief amplification of 15 or fewer PCR cycles. Sequencing was performed using an Illumina HiSeq 3000. The reads were aligned with BWA-MEM. A sequencing coverage table is available in Supplemental Table S4. Mutation calling was performed using Picard and the Genome Analysis Toolkit. The gene expression profile of the mixed sample was analyzed using a CustomArray Inc. (USA) microarray platform. The manufacturer's protocol was modified by adding biotinylated dUTP to the amplification reaction dNTP mix, resulting in a final dTTP/biotin-dUTP proportion of 5:1. Hybridization was performed according to the CustomArray ElectraSense Hybridization and Detection protocol. The hybridization mix contained 2.5 µg of labeled DNA library, 6× SSPE, 0.05% Tween 20, 20 mM EDTA, 5× Denhardt's solution, 100 ng/µL sonicated calf thymus gDNA, and 0.05% SDS. The hybridization mix was incubated with the chip overnight at 50°C. Hybridization efficiency was detected electrochemically using the CustomArray ElectraSense Detection Kit and an ElectraSense 4X2K/12K Reader. The expression of more than 3000 human genes was profiled and deposited at GEO under accession ID GSE112579. The analysis of gene expression was performed based on comparison with four samples of healthy ovarian tissue (samples of normal ovary from data set GSE6008). Gene expression profiles were pooled and quantile-normalized using the R statistical programming language and the "preprocessCore" library.
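The pooling step above relies on quantile normalization. The study used R's preprocessCore; the following Python sketch reimplements the textbook algorithm for illustration only and is not guaranteed to match that library's handling of ties.

# Minimal sketch of quantile normalization: each sample's k-th smallest
# value is replaced by the mean of the k-th smallest values across samples.

def quantile_normalize(samples):
    """samples: list of equal-length expression vectors (one per sample)."""
    n = len(samples[0])
    # Rank order of each sample's values.
    orders = [sorted(range(n), key=lambda i: s[i]) for s in samples]
    # Mean of the k-th smallest value across samples defines the k-th quantile.
    ref = [sum(s[o[k]] for s, o in zip(samples, orders)) / len(samples)
           for k in range(n)]
    normalized = []
    for s, o in zip(samples, orders):
        out = [0.0] * n
        for rank, idx in enumerate(o):
            out[idx] = ref[rank]
        normalized.append(out)
    return normalized

print(quantile_normalize([[5.0, 2.0, 3.0], [4.0, 1.0, 2.0]]))
# -> [[4.5, 1.5, 2.5], [4.5, 1.5, 2.5]]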
The profiling of intracellular signaling pathways altered in the patient's tumor tissue compared to normal tissue was performed using the Oncobox bioinformatical platform. The Oncobox system is capable of modeling a drug's ability to block pathological changes in molecular pathways and to simultaneously block gene products with a pathological increase in expression level. In contrast to other known analogs, the Oncobox platform uses the parameter of the balanced efficiency score (BES) for each drug as a measure of target drug efficiency. Here, the data on molecular pathway activity in a test sample and the data on expression levels of gene products that are targets of a certain drug are used simultaneously for the BES calculation. The BES value for a drug d is calculated by combining, with weight coefficients a and b, two drug efficiency scores: the drug efficiency score for molecular pathways, DES^MP_d, which is calculated based on the activity levels of molecular pathways containing molecular targets of drug d, and the drug efficiency score for target genes, DES^TG_d, which is calculated based on the expression levels of individual gene products. The weight coefficients a and b vary from −1 to 1.5 depending on the target drug type. The calculation of DES^MP_d involves the following quantities: d is the unique identifier of the target drug; t is the unique identifier of a gene product that is a target of drug d; p is the unique identifier of a signaling pathway; PAL_p is the activation strength of molecular pathway p; and the discrete value AMCF (activation-to-mitosis conversion factor) is determined as follows: AMCF_p = 1 when the activation of a pathway facilitates cell survival, growth, and division; 0 when there are no data as to whether the molecular pathway activation facilitates cell survival, growth, and division, or when such data available to the researcher are conflicting; and −1 when the activation of a pathway prevents cell survival, growth, and division. The discrete value DTI (drug-target index) is defined as DTI_{d,t} = 0 when drug d does not affect gene product t, and 1 when drug d affects gene product t. The discrete value NII (node involvement index) is defined as NII_{t,p} = 0 when there is no gene product t in pathway p, and 1 when there is gene product t in pathway p. The calculation of DES^TG_d additionally involves CNR_t (case-to-normal ratio), the ratio of the expression level of the protein-coding gene t in the test sample to the norm (the averaged expression level in a control group), of which the natural logarithm (ln) is taken; the definitions of DTI_{d,t}, AMCF_p, and NII_{t,p} are as given above. The discrete value ARR_{t,p} (activator/repressor role) is defined for a gene product t in the pathway p as follows and deposited into the molecular pathway database: ARR_{t,p} = −1 when gene product t is a repressor of pathway p; −0.5 when gene product t is rather a repressor than an activator of pathway p; 0 when the activator/repressor role of gene product t in pathway p is unclear or unknown; 0.5 when gene product t is rather an activator than a repressor of pathway p; and 1 when gene product t is an activator of pathway p. To calculate the BES for drug d, the weight coefficients a and b are used, which differ depending on the drug type.
For low-molecular-weight TKIs (nibs), both weight coefficients are equal to 0.5, representing the equal significance of target molecular pathway activation and target gene expression levels in the pathological tissue sample tested. This is related to the nibs' capability of blocking their molecular targets, thus inhibiting their activities, as well as modulating cell signaling via the related molecular pathways. For hormones, both weight coefficients are equal to −0.5, because they activate, rather than inhibit, their molecular targets and act accordingly on their target molecular pathways. For antihormones, the coefficients are equal to 0.5 again, because of their inhibitory effect on their molecular targets (hormone products) and on the respective molecular pathways. For retinoids, both coefficients are equal to 0.5 because these drugs bind retinoic acid receptors and activate a number of dependent molecular pathways. For rapalogs (rapamycin analogs), both coefficients are equal to 0.5 because they exert their inhibitory effect by directly binding their molecular targets and act accordingly on the relevant molecular pathways. For mibs (proteasome inhibitors), both coefficients are equal to 0.5 because these drugs exert an inhibitory effect when binding their molecular targets and act accordingly on the relevant molecular pathways and proteasome signaling. For VEGF-blocking agents, a = 0 and b = 1, because these drugs directly block the VEGF molecules in the bloodstream while not binding molecular targets inside the cell or on the cell surface and, therefore, do not directly affect intracellular signaling. For monoclonal antibodies that bind their molecular targets on the cell surface (mAbs), a = 0 and b = 1, as their main mode of action consists in activating an immune cytotoxic response against the cells that have bound mAb molecules on their surface and does not rely on strong modulation of signaling by affecting molecular pathways. Killer mAbs consist of antibodies against molecular targets on the cell surface chemically bound to cytotoxic agents. When binding their targets on the cell surface, killer mAbs kill these cells, thus exhibiting a therapeutic mechanism not related to intracellular molecular pathway activation. For them, a = 0 and b = 1.5; in this case, the increased coefficient b represents the inherently high cytotoxic activity of these drugs. For drugs blocking de novo tubulin polymerization, a = 0 and b = 1; this represents the indefinite function of many targeted pathways for these drugs in cell survival and proliferation, as well as their direct inhibitory effect on their molecular targets. The same coefficients are also set for histone deacetylase inhibitors, for the same reasons concerning their mechanism of action. For DNA-alkylating agents, a = 0 and b = −1, reflecting the indefinite functions of the majority of targeted pathways for cell survival and proliferation, as well as the direct inhibitory effect of these drugs on the DNA repair proteins that target the alkylated DNA (reflected by the coefficient b = −1). For immunotherapeutic drugs, both coefficients are equal to 0.5, because of the dependence of their effect on the availability of both direct molecular targets and molecular pathway activation profiles related to tumor infiltration with lymphocytes. Similarly, the poly-ADP-ribose polymerase (PARP) blocking drugs inhibit DNA repair and depend on both the availability of direct molecular targets and the activities of the targeted molecular pathways; this is reflected by both coefficients a and b being equal to 0.5.
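A minimal sketch of how the BES calculation described above could be assembled, assuming BES_d is the a/b-weighted combination of DES^MP_d and DES^TG_d and assuming simple sum-product aggregations for the two scores. The exact Oncobox formulas are not reproduced in this text, so the aggregation forms, function names, and data layout below are illustrative assumptions; only the a/b coefficient table transcribes the text.

# Hypothetical sketch of the balanced efficiency score (BES).
from math import log

def des_mp(drug_targets, pathways):
    """Assumed aggregation over molecular pathways.
    pathways: list of dicts with 'pal' (pathway activation level),
    'amcf' (+1/0/-1), 'genes' (gene products in the pathway), and
    'arr' (activator/repressor role per gene).
    drug_targets: set of gene products affected by the drug (DTI = 1)."""
    score = 0.0
    for p in pathways:
        # NII is 1 for targets contained in the pathway, 0 otherwise.
        n_hits = len(drug_targets & p["genes"])
        score += p["amcf"] * p["pal"] * n_hits
    return score

def des_tg(drug_targets, pathways, cnr):
    """Assumed aggregation over target genes; cnr maps gene -> CNR value."""
    score = 0.0
    for p in pathways:
        for gene, arr in p["arr"].items():
            if gene in drug_targets and cnr.get(gene, 0.0) > 0:
                score += p["amcf"] * arr * log(cnr[gene])
    return score

# Weight coefficients per drug class, as enumerated in the text.
AB_COEFFICIENTS = {
    "nib": (0.5, 0.5), "hormone": (-0.5, -0.5), "antihormone": (0.5, 0.5),
    "retinoid": (0.5, 0.5), "rapalog": (0.5, 0.5), "mib": (0.5, 0.5),
    "vegf_blocker": (0.0, 1.0), "mab": (0.0, 1.0), "killer_mab": (0.0, 1.5),
    "tubulin_blocker": (0.0, 1.0), "hdac_inhibitor": (0.0, 1.0),
    "dna_alkylating": (0.0, -1.0), "immunotherapy": (0.5, 0.5),
    "parp_inhibitor": (0.5, 0.5),
}

def bes(drug_type, drug_targets, pathways, cnr):
    a, b = AB_COEFFICIENTS[drug_type]
    return a * des_mp(drug_targets, pathways) + b * des_tg(drug_targets, pathways, cnr)

# Toy usage: a TKI hitting KIT/PDGFRA in one pro-survival pathway.
pathways = [{"pal": 2.1, "amcf": 1, "genes": {"KIT", "PDGFRA", "ERK1"},
             "arr": {"KIT": 1.0, "PDGFRA": 1.0, "ERK1": 0.5}}]
print(bes("nib", {"KIT", "PDGFRA"}, pathways, {"KIT": 0.4, "PDGFRA": 3.2}))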
ADDITIONAL INFORMATION

Data Deposition and Access

Gene expression data derived from the patient's tumor tissue were deposited at the GEO (https://www.ncbi.nlm.nih.gov/geo/) under accession number GSE112579. Whole-exome sequencing data were deposited in the NCBI SRA under accession number PRJNA503667.

Ethics Statement

The patient provided informed written consent for gene expression analysis and whole-exome sequencing of her sample and for publication of this article. Gene expression profiling was approved by the Institutional Review Board (IRB) at Clinical Center Vitamed, Moscow, Russia, according to the principles of the Declaration of Helsinki.
Fair Correlation Clustering in Forests

The study of algorithmic fairness has received growing attention recently. This stems from the awareness that bias in the input data for machine learning systems may result in discriminatory outputs. For clustering tasks, one of the most central notions of fairness is the formalization by Chierichetti, Kumar, Lattanzi, and Vassilvitskii [NeurIPS 2017]. A clustering is said to be fair if each cluster has the same distribution of manifestations of a sensitive attribute as the whole input set. This is motivated by various applications where the objects to be clustered have sensitive attributes that should not be over- or underrepresented. We discuss the applicability of this fairness notion to Correlation Clustering. The existing literature on the resulting Fair Correlation Clustering problem either presents approximation algorithms with poor approximation guarantees or severely limits the possible distributions of the sensitive attribute (often only two manifestations with a 1:1 ratio are considered). Our goal is to understand if there is hope for better results in between these two extremes. To this end, we consider restricted graph classes which allow us to characterize the distributions of sensitive attributes for which this form of fairness is tractable from a complexity point of view. While existing work on Fair Correlation Clustering gives approximation algorithms, we focus on exact solutions and investigate whether there are efficiently solvable instances. The unfair version of Correlation Clustering is trivial on forests, but adding fairness creates a surprisingly rich picture of complexities. We give an overview of the distributions and types of forests where Fair Correlation Clustering turns from tractable to intractable. The most surprising insight to us is the fact that the cause of the hardness of Fair Correlation Clustering is not the strictness of the fairness condition.

Introduction

In the last decade, the notion of fairness in machine learning has increasingly attracted interest; see, for example, the review by Pessach and Schmueli [32]. Feldman, Friedler, Moeller, Scheidegger, and Venkatasubramanian [26] formalize fairness based on a US Supreme Court decision on disparate impact from 1971. It requires that sensitive attributes like gender or skin color are not explicitly considered in decision processes like hiring, and moreover that the manifestations of sensitive attributes are proportionally distributed in all outcomes of the decision process. Feldman et al. formalize this notion for classification tasks. Chierichetti, Kumar, Lattanzi, and Vassilvitskii [19] adapt this concept for clustering tasks. In this paper, we employ the same disparate-impact-based understanding of fairness. Formally, the objects to be clustered have a color assigned to them that represents some sensitive attribute. Then, a clustering of these colored objects is called fair if, for each cluster and each color, the ratio of objects of that color in the cluster corresponds to the total ratio of objects of that color. More precisely, a clustering is fair if it partitions the set of objects into fair subsets. To understand how this notion of fairness affects clustering decisions, consider the following example. Imagine that airport security wants to find clusters among the travelers in order to assign to each group a level of potential risk with corresponding anticipatory measures.
There are attributes, like skin color, that should not influence the assignment to a risk level. A bias in the data, however, may lead to some colors being over- or underrepresented in some clusters. Simply removing the skin color attribute from the data may not suffice, as it may correlate with other attributes. Such problems are especially likely if one of the skin colors is far less represented in the data than the others. A fair clustering finds the optimum clustering such that for each risk level the distribution of skin colors is fair, by requiring the distribution of each cluster to roughly match the distribution of skin colors among all travelers. The seminal fair clustering paper by Chierichetti et al. [19] introduced this notion of fairness for clustering and studied it for the objectives k-center and k-median. Their work was extended by Bera, Chakrabarty, Flores, and Negahbani [11], who relax the fairness constraint in the sense of requiring upper and lower bounds on the representation of a color in each cluster. More precisely, they define a generalization of fair sets in which, for each color i, the fraction of i-colored elements in each cluster must lie within prescribed bounds [p_i, q_i] (see Definition 2). Following these results, this notion of (relaxed) fairness was extensively studied for centroid-based clustering objectives, with many positive results. For example, Bercea et al. [12] give bicriteria constant-factor approximations for facility-location-type problems like k-center and k-median. Bandyapadhyay, Fomin, and Simonov [7] use the technique of fair coresets introduced by Schmidt, Schwiegelshohn, and Sohler [34] to give constant-factor approximations for many centroid-based clustering objectives; among many other results, they give a PTAS for fair k-means and k-median in Euclidean space. Fairness for centroid-based objectives seems to be so well understood that most research already considers more generalized settings, like streaming [34] or imperfect knowledge of group membership [25]. In comparison, there are few (positive) results for this fairness notion applied to graph clustering objectives. The most studied with respect to fairness among those is Correlation Clustering, arguably the most studied graph clustering objective. For Correlation Clustering, we are given a pairwise similarity measure for a set of objects, and the aim is to find a clustering that minimizes the number of similar objects placed in separate clusters and the number of dissimilar objects placed in the same cluster. Formally, the input is modeled as a graph whose edges connect similar objects; we give the precise definition in the preliminaries. On the positive side, we identify color distributions that allow for efficient algorithms. Not surprisingly, this includes the ratio 1 : 1, and it extends to a constant number k of colors with distribution c_1 : c_2 : c_3 : ⋯ : c_k for constants c_1, …, c_k. Such distributions can be used to model sensitive attributes with a limited number of manifestations that are almost evenly distributed. Less expectedly, we also find tractability for, in a sense, the other extreme. We show that Fair Correlation Clustering on forests can be solved in polynomial time for two colors with ratio 1 : c with c being very large (linear in the total number of vertices). Such a distribution can be used to model a scenario where a minority is drastically underrepresented and thus in dire need of fairness constraints. Although our results only hold for forests, we believe that they can offer a starting point for more general graph classes. We especially hope that our work sparks interest in the so far neglected distribution of ratio 1 : c with c being very large.
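To make the two fairness notions above concrete, here is a small Python sketch checking them for a single cluster. The names are illustrative: exact fairness compares color shares as rationals, and the relaxed variant of Bera et al. checks the per-color bounds [p_i, q_i].

# A cluster is exactly fair if each color's share matches its share in the
# whole vertex set; in the relaxed variant, the share may lie in [p_i, q_i].
from fractions import Fraction
from collections import Counter

def color_shares(colored):
    counts = Counter(colored.values())
    total = len(colored)
    return {c: Fraction(k, total) for c, k in counts.items()}

def is_exactly_fair(cluster, colored):
    global_shares = color_shares(colored)
    local_shares = color_shares({v: colored[v] for v in cluster})
    return local_shares == global_shares

def is_relaxedly_fair(cluster, colored, p, q):
    local_shares = color_shares({v: colored[v] for v in cluster})
    return all(p[c] <= local_shares.get(c, Fraction(0)) <= q[c] for c in p)

colors = {1: "red", 2: "red", 3: "blue", 4: "red", 5: "red", 6: "blue"}
print(is_exactly_fair({1, 2, 3}, colors))  # global ratio 2:1 matched -> True
print(is_relaxedly_fair({1, 2, 4}, colors,  # all-red cluster violates bounds
                        p={"red": Fraction(1, 2), "blue": Fraction(1, 6)},
                        q={"red": Fraction(5, 6), "blue": Fraction(1, 2)}))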
Related Work

The study of clustering objectives similar or identical to Correlation Clustering dates back to the 1960s [10, 33, 37]. Bansal, Blum, and Chawla [8] were the first to coin the term Correlation Clustering as a clustering objective. We note that it is also studied under the name Cluster Editing. The most general formulation of Correlation Clustering regarding weights considers two positive real values for each pair of vertices, the first to be added to the cost if the objects are placed in the same cluster and the second to be added if the objects are placed in separate clusters [4]. The recent book by Bonchi, García-Soriano, and Gullo [13] gives a broad overview of the current research on Correlation Clustering. We focus on the particular variant that considers a complete graph with {−1, 1} edge weights and the min-disagree objective function. This version is APX-hard [16], implying in particular that there is no algorithm giving an arbitrarily good approximation unless P = NP. The best known approximation for Correlation Clustering is the very recent breakthrough by Cohen-Addad, Lee, and Newman [20], who give a ratio of (1.994 + ε). We show that in forests, all clusters of an optimal Fair Correlation Clustering solution have a fixed size. In such a case, Fair Correlation Clustering is related to k-Balanced Partitioning. There, the task is to partition the graph into k clusters of equal size while minimizing the number of edges that are cut by the partition. Feldmann and Foschini [27] study this problem on trees, and their results have interesting parallels with ours. Aside from the results on Fair Correlation Clustering already discussed above, we are only aware of three papers that consider a fairness notion close to the one of Chierichetti et al. [19] for a graph clustering objective. Schwartz and Zats [35] consider incomplete Fair Correlation Clustering with the max-agree objective function. Dinitz, Srinivasan, Tsepenekas, and Vullikanti [23] study Fair Disaster Containment, a graph cut problem involving fairness. Their problem is not directly a fair clustering problem since they only require one part of their partition (the saved part) to be fair. Ziko, Yuan, Granger, and Ayed [38] give a heuristic approach to fair clustering in general that, however, does not allow for theoretical guarantees on the quality of the solution.

Contribution

We now outline our findings on Fair Correlation Clustering. We start by giving several structural results that underpin our further investigations. Afterwards, we present our algorithms and hardness results for certain graph classes and color ratios. We further show that the hardness of fair clustering does not stem from the requirement that the clusters exactly reproduce the color distribution of the whole graph. This section is concluded by a discussion of possible directions for further research.

(Figure 1: Example forest where a cluster of size 4 and two clusters of size 2 incur the same cost. With one cluster of size 4 (left), the inter-cluster cost is 0 and the intra-cluster cost is 4. With two clusters of size 2 (right), both the inter-cluster and the intra-cluster cost are 2.)

Structural Insights

We outline here the structural insights that form the foundation of all our results. We first give a close connection between the cost of a clustering, the number of edges "cut" by a clustering, and the total number of edges in the graph.
We refer to this number of "cut" edges as the inter-cluster cost, as opposed to the number of non-edges inside clusters, which we call the intra-cluster cost. Formally, the intra- and inter-cluster cost are the first and second summand of the Correlation Clustering cost in Equation (1), respectively. The following lemma shows that minimizing the inter-cluster cost suffices to minimize the total cost if all clusters are of the same size. This significantly simplifies the algorithm development for Correlation Clustering. The condition that all clusters need to be of the same size seems rather restrictive at first. However, we prove in the following that in bipartite graphs, and in particular in forests and trees, there is always a minimum-cost fair clustering such that indeed all clusters are equally large. This property stems from how the fairness constraint acts on the distribution of colors and is therefore specific to Fair Correlation Clustering. It allows us to fully utilize Lemma 3, both for building reductions in NP-hardness proofs and for algorithmic approaches, as we can restrict our attention to partitions with equal cluster sizes. Consider two colors of ratio 1 : 2; then any fair cluster must contain at least 1 vertex of the first color and 2 vertices of the second color to fulfill the fairness requirement. We show that a minimum-cost clustering of a forest, due to the small number of edges, consists entirely of such minimal clusters. Every clustering with larger clusters incurs a higher cost. An exception is the color ratio 1 : 1, as illustrated in Figure 1. In fact, this color distribution is the only case for forests where a partition with larger clusters can have the same (but no smaller) cost. We prove a slightly weaker statement than Lemma 4, namely, that there is always a minimum-cost fair clustering whose cluster sizes are given by the color ratio. We find that this property, in turn, holds not only for forests but for every bipartite graph. Note that in general bipartite graphs there are more color ratios than only 1 : 1 that allow for these ambiguities.

(Table 1: Running times of our algorithms for Fair Correlation Clustering on forests depending on the color ratio. The value p is any rational such that n/p − 1 is integral; c_1, c_2, …, c_k are coprime positive integers, possibly depending on n. The functions f and g are given in Theorems 23 and 27.)

In summary, the results above show that the ratio of the color classes is the key parameter determining the cluster size. If the input is a bipartite graph whose vertices are colored with k colors in a ratio of c_1 : c_2 : ⋯ : c_k, our results imply that, without losing optimality, solutions can be restricted to contain only clusters of size d = ∑_{i=1}^k c_i, each with exactly c_i vertices of color i. Starting from these observations, we show in this work that the color ratio is also the key parameter determining the complexity of Fair Correlation Clustering. On the one hand, the simple structure of optimal solutions restricts the search space and enables polynomial-time algorithms, at least for some instances. On the other hand, these insights allow us to show hardness already for very restricted input classes. The technical part of most of the proofs consists of exploiting the connection between the clustering cost, the total number of edges, and the number of edges cut by a clustering.

Tractable Instances

We start by discussing the algorithmic results. The simplest case is that of two colors, each one occurring equally often.
We prove that for bipartite graphs with a color ratio of 1 : 1, Fair Correlation Clustering is equivalent to the maximum bipartite matching problem, namely, between the vertices of different color. Via the standard reduction to computing maximum flows, this allows us to benefit from the recent breakthrough by Chen, Kyng, Liu, Peng, Probst Gutenberg, and Sachdeva [18]. It gives an algorithm running in time m^{1+o(1)}. The remaining results focus on forests as the input, see Table 1. It should not come as a surprise that our main algorithmic paradigm is dynamic programming. A textbook version finds a maximum matching in linear time in a forest, solving the 1 : 1 case. For general color ratios, we devise much more intricate dynamic programs. We use the color ratio 1 : 2 as an introductory example. The algorithm has two phases. In the first, we compute a list of candidate splittings that partition the forest into connected parts containing at most 1 blue and 2 red vertices each. In the second phase, we assemble the parts of each of the splittings into fair clusters and return the cheapest resulting clustering. The difficulty lies in the two phases not being independent of each other. It is not enough to minimize the "cut" edges in the two phases separately. We prove that the cost incurred by the merging additionally depends on the number of parts of a certain type generated in the splittings. Tracking this along with the number of cuts results in an O(n^6)-time algorithm. Note that we did not optimize the running time as long as it is polynomial. We generalize this to k colors in a ratio c_1 : c_2 : ⋯ : c_k. We now have to consider all possible colorings of a partition of the vertices such that in each part the i-th color occurs at most c_i times. While assembling the parts, we have to take care that the merged colorings remain compatible. The resulting running time is O(n^{g(c_1, …, c_k)}) for some (explicit) polynomial g. Recall that, by Lemma 4, the minimum cluster size is d = ∑_{i=1}^k c_i. If this is a constant, then the dynamic program runs in polynomial time. If, however, the number of colors k or some color's proportion grows with n, it becomes intractable. Equivalently, the running time gets worse if there are very large but sublinearly many clusters. To mitigate this effect, we give a complementary algorithm, at least for forests with two colors. Namely, consider the color ratio 1 : (n/p − 1). Then, an optimal solution has p clusters, each of size d = n/p. The key observation is that the forest contains p vertices of the color with fewer occurrences, say, blue, and any fair clustering isolates the blue vertices. This can be done by cutting at most p − 1 edges and results in a collection of (sub-)trees where each one has at most one blue vertex. To obtain the clustering, we split the trees with red excess vertices and distribute those among the remaining parts. We track the costs of all the O(n^{poly(p)}) many cut-sets and rearrangements to compute the one of minimum cost. In total, the algorithm runs in time O(n^{f(p)}) for some polynomial f. In summary, we find that if the number of clusters p is constant, then the running time is polynomial. Considering in particular an integral color ratio 1 : c, we find tractability for forests if c = O(1) or c = Ω(n). We will show next that Fair Correlation Clustering with this kind of color ratio is NP-hard already on trees, hence the hardness must emerge somewhere for intermediate c.
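As a sketch of the 1 : 1 case above: the paper's equivalence goes via maximum flow, but for illustration, Kuhn's augmenting-path algorithm suffices to find a maximum matching between red and blue vertices along graph edges; pairing matched vertices (and pairing leftover red-blue vertices arbitrarily) yields an optimal fair clustering, since every matched edge avoids one cut edge and one intra-cluster non-edge. All names below are illustrative.

# Kuhn's augmenting-path maximum matching between red and blue vertices.

def max_red_blue_matching(edges, color):
    reds = [v for v, c in color.items() if c == "red"]
    adj = {v: [] for v in reds}
    for u, v in edges:
        if color[u] != color[v]:
            r, b = (u, v) if color[u] == "red" else (v, u)
            adj[r].append(b)
    match_of = {}  # blue vertex -> matched red vertex

    def try_augment(r, seen):
        for b in adj[r]:
            if b in seen:
                continue
            seen.add(b)
            if b not in match_of or try_augment(match_of[b], seen):
                match_of[b] = r
                return True
        return False

    return sum(try_augment(r, set()) for r in reds)

# Path r1 - b1 - r2 - b2: two red-blue pairs can be matched along edges.
color = {"r1": "red", "b1": "blue", "r2": "red", "b2": "blue"}
edges = [("r1", "b1"), ("b1", "r2"), ("r2", "b2")]
m = max_red_blue_matching(edges, color)
print(m, "clusters can be edges; cut edges:", len(edges) - m)  # 2, 1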
Table 2 shows the complexity of Fair Correlation Clustering on graphs with bounded diameter. We obtain a dichotomy for trees with two colors with ratio 1 : c. If the diameter is at most 3, an optimal clustering is computable in O(n) time, but for diameter at least 4, the problem becomes NP-hard. In fact, the linear-time algorithm extends to trees with an arbitrary number of colors in any ratio.

A Dichotomy for Bounded Diameter

The main result in that direction is the hardness of Fair Correlation Clustering already on trees with diameter at least 4 and two colors of ratio 1 : c. This is proven by a reduction from the strongly NP-hard 3-Partition problem. There, we are given positive integers a_1, …, a_ℓ, where ℓ is a multiple of 3 and there exists some B with ∑_{i=1}^ℓ a_i = B · ℓ/3. The task is to partition the numbers a_i into triples such that each one of those sums to B. The problem remains NP-hard if all the a_i are strictly between B/4 and B/2, ensuring that, if some subset of the numbers sums to B, it contains exactly three elements. We model this problem as an instance of Fair Correlation Clustering as illustrated in Figure 2. We build stars, where the i-th one consists of a_i red vertices, and a single star of ℓ/3 blue vertices. The centers of the blue star and all the red stars are connected. The color ratio in the resulting instance is 1 : B. Lemma 4 then implies that there is a minimum-cost clustering with ℓ/3 clusters, each with a single blue vertex and B red ones. We then apply Lemma 3 to show that this cost is below a certain threshold if and only if each cluster consists of exactly three red stars (and an arbitrary blue vertex), solving 3-Partition.

Maximum Degree

The reduction above results in a tree with a low diameter but arbitrarily high maximum degree. We have to adapt our reductions to show hardness also for bounded degrees. The results are summarized in Table 3. If the Fair Correlation Clustering instance is not required to be connected, we can represent 3-Partition with a forest of trees with maximum degree 2, that is, a forest of paths. The input numbers are modeled by paths with a_i vertices. The forest also contains ℓ/3 isolated blue vertices, which again implies that an optimal fair clustering must have ℓ/3 clusters, each with B red vertices. By defining a sufficiently small cost threshold, we ensure that the fair clustering has cost below it if and only if none of the path edges are "cut" by the clustering, corresponding to a partition of the a_i. There is nothing special about paths; we can arbitrarily restrict the shape of the trees, as long as it is always possible to form such a tree with a given number of vertices. However, the argument crucially relies on the absence of edges between the a_i-paths/trees and does not transfer to connected graphs. This is due to the close relation between inter-cluster costs and the total number of edges stated in Lemma 3. The complexity of Fair Correlation Clustering on a single path with a color ratio 1 : c therefore remains open. Notwithstanding, we show hardness for trees in two closely related settings: keeping the color ratio at 1 : c but raising the maximum degree to 5, or having a single path but a total of n/2 colors with each color shared by exactly 2 vertices. For the case of maximum degree 5 and two colors with ratio 1 : c, we can again build on the 3-Partition machinery. The construction is inspired by how Feldmann and Foschini [27] used the problem to show hardness of computing so-called k-balanced partitions.
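Before turning to that adaptation, the following sketch builds the forest-of-paths instance described above; the naming is illustrative, and the input is assumed to be a well-formed 3-Partition instance.

# One red path per input number a_i, plus l/3 isolated blue vertices,
# giving the color ratio 1 : B.

def build_path_forest(numbers):
    l = len(numbers)
    assert l % 3 == 0 and sum(numbers) % (l // 3) == 0
    B = sum(numbers) // (l // 3)
    edges, colors = [], {}
    for i, a in enumerate(numbers):
        path = [f"r{i}_{j}" for j in range(a)]
        colors.update((v, "red") for v in path)
        edges.extend(zip(path, path[1:]))
    for j in range(l // 3):
        colors[f"b{j}"] = "blue"
    return edges, colors, B

edges, colors, B = build_path_forest([3, 3, 3, 3, 3, 3])  # l = 6, B = 9
blues = sum(c == "blue" for c in colors.values())
print(f"ratio 1:{(len(colors) - blues) // blues} (B = {B})")  # ratio 1:9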
We adapt it to our setting, in which the vertices are colored and the clusters need to be fair. For the single path with n/2 colors, we reduce from (the 1-regular 2-colored variant of) the Paint Shop Problem for Words [24]. There, a word is given in which every symbol appears exactly twice. The task is to assign the values 0 and 1 to the letters of the word such that, for each symbol, exactly one of the two occurrences receives a 1, while the number of blocks of consecutive 0s and 1s over the whole word is minimized. In the translation to Fair Correlation Clustering, we represent the word as a path and the symbols as colors. To remain fair, there must be two clusters containing exactly one vertex of each color, translating back to a 0/1-assignment to the word.

Relaxed Fairness

One could think that the hardness of Fair Correlation Clustering already for classes of trees and forests has its origin in the strict fairness condition. After all, the color ratio in each cluster must precisely mirror that of the whole graph. This impression is deceptive. Instead, we lift most of our hardness results to Relaxed Fair Correlation Clustering, considering the relaxed fairness of Bera et al. [11]. Recall Definition 2. It prescribes two rationals p_i and q_i for each color i and allows the proportion of i-colored elements in any cluster to be in the interval [p_i, q_i], instead of being precisely c_i/d, where d = ∑_{j=1}^k c_j. The main conceptual idea is to show that, in some settings but not all, the minimum-cost solution under a relaxed fairness constraint is in fact exactly fair. This holds for the settings described above where we reduce from 3-Partition. In particular, Relaxed Fair Correlation Clustering with a color ratio of 1 : c is NP-hard on trees with diameter 4 and on forests of paths, respectively. Furthermore, the transfer of hardness is immediate for the case of a single path with n/2 colors and exactly 2 vertices of each color. Any relaxation of fairness still requires one vertex of each color in every cluster, maintaining the equivalence to the Paint Shop Problem for Words. In contrast, algorithmic results are more difficult to extend if there are relaxedly fair solutions that have lower cost than any exactly fair one. We then no longer know the cardinality of the clusters in an optimal solution. As a proof of concept, we show that a slight adaptation of our dynamic program for two colors in a ratio of 1 : 1 still works for what we call α-relaxed fairness. There, the lower fairness ratio is p_i = α · c_i/d and the upper one is q_i = (1/α) · c_i/d, for some parameter α ∈ (0, 1). We give an upper bound on the necessary cluster size depending on α, which is enough to find a good splitting of the forest. Naturally, the running time now also depends on α, but is of the form O(n^{h(1/α)}) for some polynomial h. In particular, we get a polynomial-time algorithm for constant α. The proof of correctness consists of an exhaustive case distinction already for the simple case of 1 : 1. We are confident that this can be extended to more general color ratios, but did not attempt it in this work.

Summary and Outlook

We show that Fair Correlation Clustering on trees, and thereby forests, is NP-hard. It remains so on trees of constant degree or diameter, and, for certain color distributions, it is also NP-hard on paths. On the other hand, we give a polynomial-time algorithm if the minimum size d of a fair cluster is constant.
We also provide an efficient algorithm for the color ratio 1 : c if the total number of clusters is constant, corresponding to c ∈ Θ(n). For our main algorithms and hardness results, we prove that they still hold when the fairness constraint is relaxed, so the hardness is not due to the strict fairness definition. Ultimately, we hope that the insights gained from these proofs, as well as our proposed algorithms, prove helpful to the future development of algorithms to solve Fair Correlation Clustering on more general graphs. In particular, fairness with color ratio 1 : c with c being very large seems to be an interesting and potentially tractable type of distribution for future study. As a first step towards generalizing our results, we give a polynomial-time approximation scheme (PTAS) for Fair Correlation Clustering on forests. Another avenue for future research could be to show that Lemma 5, bounding the cluster size of optimal solutions, extends also to bipartite graphs. This may prove helpful in developing exact algorithms for bipartite graphs with other color ratios than 1 : 1. Parameterized algorithms are yet another approach to solving more general instances. When looking at the decision version of Fair Correlation Clustering, our results can be cast as an XP-algorithm when the problem is parameterized by the cluster size d, for it can be solved in time O(n^{g(d)}) for some function g. Similarly, we get an XP-algorithm for the number of clusters as parameter. We wonder whether Fair Correlation Clustering can be placed in the class FPT of fixed-parameter tractable problems for any interesting structural parameters. This would require a running time of, e.g., g(d) · poly(n). There are FPT-algorithms for Cluster Editing parameterized by the cost of the solution [15]. Possibly, future research might provide similar results for the fair variant as well. A natural extension of our dynamic programming approach could potentially lead to an algorithm parameterized by the treewidth of the input graph. Such a solution would be surprising, however, since, to the best of our knowledge, even for normal, unfair Correlation Clustering and for the related Max Dense Graph Partition [22] no treewidth approaches are known. Finally, it is interesting how Fair Correlation Clustering behaves on paths. While we obtain NP-hardness for a particular color distribution from the Paint Shop Problem for Words, the question of whether Fair Correlation Clustering on paths with, for example, two colors in a ratio of 1 : c is efficiently solvable is left open. However, we believe that this question is more likely to be answered by the study of the related (discrete) Necklace Splitting problem; see the work of Alon and West [6]. There, the desired cardinality of every color class is explicitly given, and it is non-constructively shown that there always exists a split of the necklace with the number of cuts meeting the obvious lower bound. A constructive splitting procedure may yield some insights for Fair Correlation Clustering on paths.

Preliminaries

We fix here the notation we use in the technical part and give the formal definition of Fair Correlation Clustering.

Notation

We use standard graph notation: a path is a sequence of vertices v_1, v_2, …, v_k in which consecutive vertices are connected by an edge. We only consider simple paths, i.e., we have v_i ≠ v_j for all i ≠ j. A graph is called connected if for every pair of vertices u, v there is a path connecting u and v. The distance between two vertices is the length of the shortest path connecting these vertices, and the diameter of a graph is the maximum distance between a pair of vertices.
A circle is a path whose first and last vertices coincide. A forest is a graph without circles. A connected forest is called a tree. There is exactly one path connecting every pair of vertices in a tree. A tree is rooted by choosing any vertex r ∈ V as the root. Then, every vertex v, except for the root, has a parent, which is the next vertex on the path from v to r. All vertices that have v as a parent are referred to as the children of v. A vertex without children is called a leaf. Given a rooted tree T, by T_v we denote the subtree induced by v and its descendants, i.e., the set of vertices such that there is a path starting in v and ending in that vertex without using the edge to v's parent. Observe that each forest is a bipartite graph, for example by placing all vertices with even distance to the root of their respective tree on one side and the other vertices on the other side. A finite set U can be colored by a function c : U → [k], for some k ∈ ℕ_{>0}. If there are only two colors, i.e., k = 2, for convenience we call them red and blue instead of referring to them by numbers. For a partition P = {S_1, S_2, …, S_k} of some set U = S_1 ∪ S_2 ∪ … ∪ S_k, with S_i ∩ S_j = ∅ for i ≠ j, and some u ∈ U, we use P[u] to refer to the set S_i for which u ∈ S_i. Further, we define the term coloring for sets and partitions. The coloring of a set counts the number of occurrences of each color in the set. The coloring of a partition counts the number of occurrences of set colorings in the partition.

Definition 7 (Coloring of Partitions). Let U be a colored set and let P be a partition of U. Let C = {C_S | S ⊆ U} denote the set of set colorings for which there is a subset of U with that coloring. By an arbitrarily fixed order, let C_1, C_2, …, C_ℓ denote the elements of C. Then, the coloring of P is an array C_P such that C_P[i] = |{S ∈ P | C_S = C_i}| for all i ∈ [ℓ].

Problem Definitions

In order to define Fair Correlation Clustering, we first give a formal definition of the unfair clustering objective. Correlation Clustering receives a pairwise similarity measure for a set of objects and aims at minimizing the number of similar objects placed in separate clusters and the number of dissimilar objects placed in the same cluster. For the sake of consistency, we reformulate the definition of Bonchi et al. [13] such that the pairwise similarity between objects is given by a graph rather than an explicit binary similarity function. Given a graph G = (V, E) and a partition P of V, the Correlation Clustering cost is

cost(G, P) = |{{u, v} ⊆ V | u ≠ v, {u, v} ∉ E, P[u] = P[v]}| + |{{u, v} ∈ E | P[u] ≠ P[v]}|.   (1)

We refer to the first summand as the intra-cluster cost ψ and to the second summand as the inter-cluster cost χ. Where G is clear from context, we abbreviate to cost(P). Sometimes, we consider the cost of P on an induced subgraph. To this end, we allow the same cost definition as above also if P partitions some set V′ ⊇ V. We define (unfair) Correlation Clustering as follows.

Input: A graph G = (V, E).
Task: Find a partition P of V that minimizes cost(P).

We emphasize that this is the complete, unweighted, min-disagree form of Correlation Clustering. It is complete as every pair of objects is either similar or dissimilar, but none is indifferent regarding the clustering. It is unweighted as the (dis)similarity between two vertices is binary. A pair of similar objects that are placed in separate clusters, as well as a pair of dissimilar objects placed in the same cluster, is called a disagreement, hence the naming of the min-disagree form. An alternative formulation would be the max-agree form, with the objective to maximize the number of pairs that do not form a disagreement.
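The following is a direct transcription of Equation (1) into Python (illustrative naming). The demo instance is one forest consistent with the caption of Figure 1, assuming a red-red and a blue-blue edge, which reproduces the tie between one cluster of size 4 and two clusters of size 2.

# Correlation Clustering cost: non-edges inside clusters (intra) plus
# edges between clusters (inter).
from itertools import combinations

def cc_cost(vertices, edges, partition):
    cluster_of = {v: i for i, cluster in enumerate(partition) for v in cluster}
    edge_set = {frozenset(e) for e in edges}
    intra = sum(1 for u, v in combinations(vertices, 2)
                if cluster_of[u] == cluster_of[v]
                and frozenset((u, v)) not in edge_set)
    inter = sum(1 for e in edge_set
                if len({cluster_of[v] for v in e}) == 2)
    return intra + inter

vertices = ["r1", "r2", "b1", "b2"]
edges = [("r1", "r2"), ("b1", "b2")]
print(cc_cost(vertices, edges, [set(vertices)]))               # 4 (intra 4)
print(cc_cost(vertices, edges, [{"r1", "b1"}, {"r2", "b2"}]))  # 4 (2 + 2)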
Note that both formulations induce the same ordering of clusterings, though approximation factors may differ because of the different formulations of the cost function. Our definition of the Fair Correlation Clustering problem loosely follows [2]. The fairness aspect limits the solution space to fair partitions. A partition is fair if each of its sets has the same color distribution as the universe that is partitioned. We now define the complete, unweighted, min-disagree variant of the Fair Correlation Clustering problem. When speaking of (Fair) Correlation Clustering, we refer to this variant, unless explicitly stated otherwise.

Input: A graph G = (V, E) and a coloring c : V → [k].
Task: Find a fair partition P of V that minimizes cost(P).

Structural Insights

We prove here the structural results outlined in Subsection 2.1. The most important insight is that in bipartite graphs, and in forests in particular, there is always a minimum-cost fair clustering such that all clusters are of some fixed size. This property is very useful, as it helps for building reductions in hardness proofs as well as for algorithmic approaches that enumerate possible clusterings. Further, by the following lemma, this also implies that minimizing the inter-cluster cost suffices to minimize the Correlation Clustering cost, which simplifies the development of algorithms solving Fair Correlation Clustering on such instances. If all clusters are of size d, there are n(d − 1)/2 intra-cluster pairs of vertices, each incurring an intra-cost of 1 if not connected by an edge. Let the total intra-cost be ψ. As there is a total of m edges, of which χ are cut, we have cost(P) = ψ + χ = n(d − 1)/2 − (m − χ) + χ = n(d − 1)/2 − m + 2χ. In particular, if G is a tree, this yields cost(P) = (d − 3)n/2 + 2χ + 1, as then m = n − 1.

Forests

We find that in forests, in every minimum-cost partition, all sets in the partition are of the minimum size required to fulfill the fairness requirement. For any clustering P of V to be fair, all clusters must be at least of size d. We show that if there is a cluster S in the clustering with |S| > d, then we decrease the cost by splitting S. First note that, in order to fulfill the fairness constraint, we have |S| = ad for some a ∈ ℕ_{≥2}. Consider a new clustering P′ obtained by splitting S into S_1, S_2, where S_1 ⊂ S is an arbitrary fair subset of S of size d and S_2 = S \ S_1. Note that the cost incurred by every edge and non-edge with at most one endpoint in S is the same in both clusterings. Let ψ be the intra-cluster cost of P on F[S]. Regarding the cost incurred by the edges and non-edges with both endpoints in S, we know that ψ ≥ ad(ad − 1)/2 − (ad − 1), since the cluster is of size ad and, as it is part of a forest, it contains at most ad − 1 edges. In the worst case, P′ cuts all the ad − 1 edges. However, we profit from the smaller cluster sizes: the split removes (a − 1)d² intra-cluster pairs. Hence, P′ is cheaper by at least (a − 1)d² − 2(ad − 1) = (a − 1)d² − 2ad + 2. This term is increasing in a. As a ≥ 2, by plugging in a = 2, we hence obtain the lower bound d² − 4d + 2. For d ≥ 2, the bound is increasing in d and it is positive for d > 3. This means, if d > 3, no clustering with a cluster of size more than d has minimal cost, implying that all optimum clusterings only consist of clusters of size d. Last, we have to argue the case d = 3, i.e., we have a color ratio of 1 : 2 or 1 : 1 : 1. In this case, d² − 4d + 2 evaluates to −1. However, we obtain a positive change if we do not split arbitrarily but keep at least one edge uncut. Note that this means that one edge less is cut and one more edge is present, which means that our upper bound on cost(T[S], P′) decreases by 2, so P′ is now cheaper. Hence, assume there is an edge {u, v} such that c(u) = c(v).
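To complement these calculations, the brute-force sketch below enumerates all fair clusterings into minimal clusters on a small forest with color ratio 1 : 2 and checks the identity cost(P) = n(d − 1)/2 − m + 2χ derived above. It is a toy verification under these assumptions, not part of the paper's algorithms.

from itertools import combinations

def cost_and_cuts(vertices, edge_set, partition):
    cluster_of = {v: i for i, s in enumerate(partition) for v in s}
    intra = sum(1 for u, v in combinations(vertices, 2)
                if cluster_of[u] == cluster_of[v]
                and frozenset((u, v)) not in edge_set)
    chi = sum(1 for e in edge_set if len({cluster_of[v] for v in e}) == 2)
    return intra + chi, chi

def fair_triples(vertices, colors):
    # All partitions into clusters of 1 blue + 2 red vertices.
    if not vertices:
        yield []
        return
    blue = next(v for v in vertices if colors[v] == "blue")
    reds = [v for v in vertices if colors[v] == "red"]
    for pair in combinations(reds, 2):
        rest = [v for v in vertices if v != blue and v not in pair]
        for tail in fair_triples(rest, colors):
            yield [{blue, *pair}] + tail

# Two paths: b1-r1-r2 and b2-r3-r4 (ratio 1:2, n = 6, m = 4, d = 3).
colors = {"b1": "blue", "r1": "red", "r2": "red",
          "b2": "blue", "r3": "red", "r4": "red"}
edge_set = {frozenset(e) for e in [("b1", "r1"), ("r1", "r2"),
                                   ("b2", "r3"), ("r3", "r4")]}
vs, n, m, d = list(colors), 6, 4, 3
for part in fair_triples(vs, colors):
    c, chi = cost_and_cuts(vs, edge_set, part)
    assert c == n * (d - 1) // 2 - m + 2 * chi
print("identity holds for all fair clusterings into minimal clusters")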
Bipartite Graphs

We are able to partially generalize our findings for trees to bipartite graphs. We show that there is still always a minimum-cost fair clustering with cluster sizes fixed by the color ratio. However, in bipartite graphs there may also be minimum-cost clusterings with larger clusters. We start with the case of two colors in a ratio of 1 : 1 and then generalize to other ratios.

Lemma 10. Let G = (A ∪ B, E) be a bipartite graph with two colors in a ratio of 1 : 1. Then, there is a minimum-cost fair clustering of G that has no clusters with more than 2 vertices. Further, each minimum-cost fair clustering can be transformed into a minimum-cost fair clustering such that all clusters contain no more than 2 vertices in linear time. If G is a forest, then no cluster in a minimum-cost fair clustering is of size more than 4.

Proof. Note that, due to the fairness constraint, each fair clustering consists only of evenly sized clusters. We prove both statements by showing that in each cluster of at least 4 vertices there are always two vertices such that splitting them from the rest of the cluster does not increase the cost and preserves fairness. Let P be a clustering, let S ∈ P be a cluster with |S| ≥ 4, and let P′ denote the clustering after the split. Let S_A = S ∩ A and S_B = S ∩ B. Assume there are a ∈ S_A and b ∈ S_B such that a and b do not have the same color. The edges from a and b into S \ {a, b} are cut in P′ but not in P, so they incur an extra cost of at most |S| − 2. However, due to the bipartite structure, there are |S_A| − 1 vertices in S \ {a, b} that have no edge to a and |S_B| − 1 vertices in S \ {a, b} that have no edge to b. These |S| − 2 vertices incur a total cost of |S| − 2 in P but no cost in P′. This makes up for any cut edge, so splitting the cluster never increases the cost. If there are no a ∈ S_A and b ∈ S_B such that a and b have different colors, then either S_A = ∅ or S_B = ∅. In both cases, there are no edges inside S, so splitting the cluster in an arbitrary fair way never increases the cost. By iteratively splitting large clusters in any fair clustering, we hence eventually obtain a minimum-cost fair clustering such that all clusters consist of exactly two vertices. Now, assume G is a forest and there would be a minimum-cost clustering P with some cluster S ∈ P such that |S| = 2a for some a ∈ ℕ_{>2}. Consider a new clustering P′ obtained by splitting S into {u, v} and S \ {u, v}, where u and v are two arbitrary vertices of different color that have at most 1 edge towards another vertex in S. There are always two such vertices due to the forest structure and because there are |S|/2 vertices of each color. Then, P′ is still a fair clustering. Note that the cost incurred by each edge and non-edge with at most one endpoint in S is the same in both clusterings. Let ψ denote the intra-cluster cost of P in G[S]. Regarding the edges and non-edges with both endpoints in S, we know that ψ ≥ 2a(2a − 1)/2 − (2a − 1) = 2a² − 3a + 1, as the cluster consists of 2a vertices and has at most 2a − 1 edges due to the forest structure. In the worst case, P′ cuts 2 edges. However, we profit from the smaller cluster sizes. We have cost(G[S], P′) ≤ (2a − 2)(2a − 3)/2 + 1 + 2 = 2a² − 5a + 6. Hence, P costs at least 2a − 5 more than P′, which is positive as a > 2. Thus, in every minimum-cost fair clustering, all clusters are of size 4 or 2. We employ an analogous strategy if there is a different color ratio than 1 : 1 in the graph. However, then we have to split more than 2 vertices from a cluster.
To ensure that the clustering cost does not increase, we have to argue that we can take these vertices in some balanced way from both sides of the bipartite graph. For a bipartite graph with k colors in a ratio of c_1 : c_2 : ⋯ : c_k, there is then a minimum-cost fair clustering such that all its clusters are of size d = ∑_{i=1}^k c_i. Further, each minimum-cost fair clustering with larger clusters can be transformed into a minimum-cost fair clustering such that all clusters contain no more than d vertices in linear time.

Proof. Due to the fairness constraint, each fair clustering consists only of clusters of size ad, where a ∈ ℕ_{>0}. We prove the statements by showing that a cluster of size at least 2d can be split such that the cost does not increase and fairness remains. Let P be a clustering and S ∈ P be a cluster with |S| = ad for some a ≥ 2. Let S_A = S ∩ A as well as S_B = S ∩ B, and w.l.o.g. |S_A| ≥ |S_B|. Our proof has three steps. First, we show that there is a fair S̃ ⊆ S such that |S̃| = d and |S̃ ∩ A| ≥ |S̃ ∩ B|. Then, we construct a fair set Ŝ ⊆ S by replacing vertices in S̃ with vertices in S \ S̃. Last, we prove that splitting S into Ŝ and S \ Ŝ does not increase the clustering cost. We then observe that the resulting clustering is fair, so the lemma's statements hold, because any fair clustering with a cluster of more than d vertices is transformed into a fair clustering with at most the same cost and only clusters of size d by repeatedly splitting larger clusters. For the first step, assume there would be no such S̃ ⊆ S, i.e., that we could only take s < d/2 vertices from S_A without taking more than c_i vertices of each color i ∈ [k]. Now, for the second step, we transform S̃ into Ŝ. Note that, if |S_A \ S̃| ≥ |S_B \ S̃|, it suffices to set Ŝ = S̃. Otherwise, we replace some vertices from S̃ ∩ S_A by vertices of the respective color from S_B \ S̃. We have to show that after this we still take at least as many vertices from S_A as from S_B. Consequently, Ŝ fulfills the requirements. Assume there would be no such δ/2 vertices, but that we could only replace s < δ/2 vertices. Let s_i be the number of vertices of color i among these vertices for all i ∈ [k]. By a similar argumentation as above, and because there are only (a − 1)c_i vertices of each color i in S \ S̃, we obtain a contradiction. Consider now the edges that are cut when splitting S into Ŝ and S \ Ŝ. At the same time, there are pairs of vertices that are not connected by an edge and are placed in separate clusters in P′ but not in P; as in the arguments above, these pairs compensate for the cut edges. Hence, P is at least as expensive as P′, so splitting a cluster like this never increases the cost. Unlike in forests, however, the color ratio yields no bound on the maximum cluster size in minimum-cost fair clusterings on bipartite graphs; it just states that there is a minimum-cost fair clustering with bounded cluster size. To see this, let G = (R ∪ B, E) be a complete bipartite graph with |R| = |B| such that all vertices in R are red and all vertices in B are blue. Then, all fair clusterings in G have the same cost, including the one with the single cluster S = R ∪ B. This holds by a similar argument as employed in the last part of Lemma 10, since every edge that is cut by a clustering is compensated for by exactly one pair of non-adjacent vertices that is then no longer in the same cluster.

Hardness Results

This section provides NP-hardness proofs for Fair Correlation Clustering under various restrictions.

Forests and Trees

With the knowledge of the fixed sizes of clusters in a minimum-cost clustering, we are able to show that the problem is surprisingly hard, even when limited to certain instances of forests and trees.
To prove the hardness of Fair Correlation Clustering under various assumptions, we reduce from the strongly NP-complete 3-Partition problem [29].

Input: Positive integers a_1, a_2, . . . , a_{3p} with Σ_i a_i = pB and B/4 < a_i < B/2 for all i ∈ [3p].
Task: Decide if there is a partition of the numbers a_i into triples such that the sum of each triple is B.

Our first reduction yields hardness for many forms of forests.

Theorem 11. Fair Correlation Clustering on forests with two colors in a ratio of 1 : c is NP-hard. It remains NP-hard when arbitrarily restricting the shape of the trees in the forest as long as for every a ∈ N it is possible to form a tree with a vertices.

Proof. We reduce from 3-Partition. For every a_i, we construct an arbitrarily shaped tree of a_i red vertices. Further, we let there be p isolated blue vertices. Note that the ratio between blue and red vertices is 1 : B. We now show that there is a fair clustering P with cost(P) ≤ p·(B(B + 1)/2 − B + 3) if and only if the given instance is a yes-instance for 3-Partition.

If we have a yes-instance of 3-Partition, then there is a partition of the set of trees into p clusters of size B. By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain a fair partition. As there are no edges between the clusters and each cluster consists of B + 1 vertices and B − 3 edges, this partition has a cost of p·(B(B + 1)/2 − B + 3). For the other direction, assume there is a fair clustering of at most this cost. By Lemma 4, it consists of exactly p clusters, each with one blue and B red vertices, and the cost bound leaves no room for cut edges, so every tree is contained in a single cluster. As every tree has more than B/4 and less than B/2 vertices, this implies that each cluster consists of exactly one blue vertex and exactly three uncut trees with a total of B vertices. This way, such a clustering gives a solution to 3-Partition, so our instance is a yes-instance. As the construction of the graph only takes polynomial time in the instance size, this implies our hardness result.

Note that the hardness holds in particular for forests of paths, i.e., for forests with maximum degree 2. With the next theorem, we adjust the proof of Theorem 11 to show that the hardness remains if the graph is connected.

Theorem 12. Fair Correlation Clustering on trees with diameter 4 and two colors in a ratio of 1 : c is NP-hard.

Proof. We reduce from 3-Partition. For every a_i, we construct a star of a_i red vertices. Further, we let there be a star of p blue vertices. We obtain a tree of diameter 4 by connecting the center v of the blue star to all the centers of the red stars. The construction is depicted in Figure 3 (a code sketch of this construction is given below).

[Figure 3: The tree with diameter 4 in the reduction from 3-Partition to Fair Correlation Clustering. The notation follows that of Theorem 12.]

Note that the ratio between blue and red vertices is 1 : B. We now show that there is a fair clustering P with cost(P) ≤ (pB² − pB)/2 + 7p − 7 if and only if the given instance is a yes-instance for 3-Partition. If we have a yes-instance of 3-Partition, then there is a partition of the set of stars into p clusters of size B, each consisting of three stars. By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain a fair partition. We first compute the inter-cluster cost χ. We call an edge blue or red if it connects two blue or red vertices, respectively. We call an edge blue-red if it connects a blue and a red vertex. All p − 1 blue edges are cut. Further, all edges between v (the center of the blue star) and red vertices are cut except for the three stars to which v is assigned. This causes 3p − 3 more cuts, so the inter-cluster cost is χ = 4p − 4. Each cluster consists of B + 1 vertices and B − 3 edges, except for the one containing v which has B edges.
The intra-cluster cost is hence ψ = p·B(B + 1)/2 − ((p − 1)(B − 3) + B) = (pB² + pB)/2 − pB + 3p − 3. Combining the intra- and inter-cluster costs yields the desired cost of (pB² − pB)/2 + 7p − 7.

For the other direction, assume there is a fair clustering of cost at most (pB² − pB)/2 + 7p − 7. As there are p(B + 1) vertices, Lemma 4 gives that there are exactly p clusters, each consisting of exactly one blue and B red vertices. Let a denote the number of red center vertices in the cluster of v. We show that a = 3. To this end, let χ_r denote the number of cut red edges. We additionally cut p − 1 blue and 3p − a blue-red edges. The inter-cluster cost of the clustering hence is χ = χ_r + 4p − a − 1. Regarding the intra-cluster cost, there are no missing blue edges and, as v is the only blue vertex with blue-red edges, there are (p − 1)B + B − a = pB − a missing blue-red edges. Last, we require p·B(B − 1)/2 red edges, but the graph has only pB − 3p red edges and χ_r of them are cut. Hence, there are at least p·B(B − 1)/2 − pB + 3p + χ_r missing red edges, resulting in a total intra-cluster cost of ψ ≥ p·B(B − 1)/2 + 3p + χ_r − a. This results in a total cost of cost(P) ≥ (pB² − pB)/2 + 7p − 1 + 2χ_r − 2a. As we assumed cost(P) ≤ (pB² − pB)/2 + 7p − 7, we have 2χ_r − 2a + 6 ≤ 0, which implies a ≥ 3 since χ_r ≥ 0. Additionally, χ_r ≥ aB/4 − (B − a), because there are at least B/4 red vertices connected to each of the a chosen red centers but only a total of B − a of them can be placed in their center's cluster. Thus, we have aB/2 − 2B + 6 = (a − 4)·B/2 + 6 ≤ 0, implying a < 4 and proving our claim of a = 3. Further, as a = 3, we obtain χ_r ≤ 0, meaning that no red edges are cut, so each red star is completely contained in a cluster. Given that every red star is of size at least B/4 and at most B/2, this means each cluster consists of exactly three complete red stars with a total number of B red vertices each and hence yields a solution to the 3-Partition instance. As the construction of the graph only takes polynomial time in the instance size and the constructed tree is of diameter 4, this implies our hardness result.

The proofs of Theorems 11 and 12 follow the same idea as the hardness proof of [27, Theorem 2], which also reduces from 3-Partition to prove a hardness result on the k-Balanced Partitioning problem. There, the task is to partition the vertices of an uncolored graph into k clusters of equal size [27].

Input: Graph G = (V, E) and k ∈ N_{>0}.
Task: Find a partition of V into k clusters of equal size that minimizes the number of edges cut by the partition.

k-Balanced Partitioning is related to Fair Correlation Clustering on forests in the sense that the clustering has to partition the forest into clusters of equal sizes by Lemmas 4 and 10. Hence, on forests we can regard Fair Correlation Clustering as the fair variant of k-Balanced Partitioning. By [27, Theorem 8], k-Balanced Partitioning is NP-hard on trees of degree 5. In their proof, Feldmann and Foschini [27] reduce from 3-Partition. We slightly adapt their construction to transfer the result to Fair Correlation Clustering.

Theorem 13. Fair Correlation Clustering on trees of degree at most 5 with two colors in a ratio of 1 : c is NP-hard.

Proof. We reduce from 3-Partition, which remains strongly NP-hard when limited to instances where B is a multiple of 4, since for every instance we can create an equivalent instance by multiplying all integers by 4. Hence, assume a 3-Partition instance such that B is a multiple of 4. We construct a graph for Fair Correlation Clustering by representing each a_i for i ∈ [n] by a gadget T_i. Each gadget has a center vertex that is connected to the end of five paths: one path of length a_i, three paths of length B/4, and one path of length B/4 − 1.
Then, for i ∈ [n − 1], we connect the dangling ends of the paths of length B/4 − 1 in the gadgets T_i and T_{i+1} by an edge. So far, the construction is similar to the one by Feldmann and Foschini [27]. We color all vertices added so far in red. Then, we add a path of 4n/3 blue vertices and connect it by an edge to an arbitrary vertex of degree 1. The resulting graph is depicted in Figure 4. Note that the construction takes polynomial time and we obtain a graph of degree 5. We now prove that it has a fair clustering P with cost(P) = 2n(B + 1)(B − 2)/3 + 20n/3 − 3 if and only if the given instance is a yes-instance for 3-Partition.

Assume we have a yes-instance for 3-Partition. We cut the edges connecting the different gadgets as well as the edges connecting the a_i-paths to the centers of the stars. Then, we have n components of size B and 1 component of size a_i for each i ∈ [n]. The latter ones can be merged into p = n/3 clusters of size B without further cuts. Next, we cut all edges between the blue vertices and assign one blue vertex to each cluster. Thereby, note that the blue vertex that is already connected to a red cluster should be assigned to this cluster. This way, we obtain a fair clustering with inter-cluster cost χ = n − 1 + n + 4n/3 − 1 = 10n/3 − 2, which, by Lemma 3, gives cost(P) = 2n(B + 1)(B − 2)/3 + 20n/3 − 3.

For the other direction, let there be a minimum-cost fair clustering P of cost at most 2n(B + 1)(B − 2)/3 + 20n/3 − 3. The graph consists of 4n/3 · B red and 4n/3 blue vertices. By Lemma 4, P hence consists of 4n/3 clusters, each consisting of one blue vertex and B red vertices. Thus, P has to cut the 4n/3 − 1 edges on the blue path. Also, P has to partition the red vertices into sets of size B. By [27, Lemma 9] this requires at least 2n − 1 cuts. This bounds the inter-cluster cost by χ ≥ 2n − 1 + 4n/3 − 1 = 10n/3 − 2, leading to a Correlation Clustering cost of 2n(B + 1)(B − 2)/3 + 20n/3 − 3 as seen above, so we know that no more edges are cut. Further, the unique minimum-sized set of edges that upon removal leaves no red components of size larger than B is the set of the n − 1 edges connecting the gadgets and the n edges connecting the a_i-paths to the center vertices [27, Lemma 9]. Hence, P has to cut exactly these edges. As no other edges are cut, the a_i-paths can be combined to clusters of size B without further cuts, so the given instance has to be a yes-instance for 3-Partition.

Paths

Theorem 11 yields that Fair Correlation Clustering is NP-hard even in a forest of paths. The problem when limited to instances of a single connected path is closely related to the Necklace Splitting problem [5, 6].

Input: Opened necklace N, represented by a path of n · k beads, each in one of t colors such that for each color i there are a_i · k beads of that color for some a_i ∈ N.
Task: Cut the necklace such that the resulting intervals can be partitioned into k collections, each containing the same number of beads of each color.

The only difference to Fair Correlation Clustering on paths, other than the naming, is that the number of clusters k is explicitly given. From Lemmas 4 and 10 we are implicitly given this value also for Fair Correlation Clustering, though. However, Alon and West [6] do not constructively minimize the number of cuts required for a fair partition but non-constructively prove that there is always a partition of at most (k − 1) · t cuts, if there are t colors and the partition is required to consist of exactly k sets with the same amount of vertices of each color. Thus, it does not directly help us when solving the optimization problem.
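Returning to Theorem 12, the following is a sketch of its gadget, assuming the 3-Partition instance is given as a list a of 3p integers summing to p·B; the vertex names and the adjacency-dict representation are our own choices, for illustration only.

def build_diameter4_tree(a, B):
    p = len(a) // 3
    adj, color = {}, {}
    def add(u, col):
        adj[u], color[u] = set(), col
    def edge(u, v):
        adj[u].add(v)
        adj[v].add(u)
    # Blue star: center ("b", 0) with p - 1 blue leaves.
    add(("b", 0), "blue")
    for j in range(1, p):
        add(("b", j), "blue")
        edge(("b", 0), ("b", j))
    # One red star per number a_i; its center is joined to the blue center,
    # which yields the claimed diameter of 4 and a color ratio of 1 : B.
    for i, a_i in enumerate(a):
        add(("r", i, 0), "red")
        edge(("b", 0), ("r", i, 0))
        for j in range(1, a_i):
            add(("r", i, j), "red")
            edge(("r", i, 0), ("r", i, j))
    return adj, color

Dropping all blue and blue-red edges recovers a forest instance as in Theorem 11, with stars as the arbitrarily shaped trees.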
Moreover, Fair Correlation Clustering on paths is related to the 1-regular 2-colored variant of the Paint Shop Problem for Words (PPW). For PPW, a word is given as well as a set of colors, and for each symbol and color a requirement of how many such symbols should be colored accordingly. The task is to find a coloring that fulfills all requirements and minimizes the number of color changes between adjacent letters [24].

Input: Word w = w_1 w_2 . . . w_n ∈ Σ*, number of colors k ∈ N_{>0}, and requirement function r : Σ × [k] → N.
Task: Find a coloring of the letters of w that, for every s ∈ Σ and i ∈ [k], colors exactly r(s, i) occurrences of s with color i and minimizes the number of positions j with the colors of w_j and w_{j+1} differing.

PPW instances with a word containing every symbol exactly twice and two PPW-colors, each requiring one of each symbol, are called 1-regular 2-colored and are shown to be NP-hard and even APX-hard [14]. With this, we prove NP-hardness of Fair Correlation Clustering even on paths.

Theorem 14. Fair Correlation Clustering on paths is NP-hard, even when limited to instances with exactly 2 vertices of each color.

Proof. We reduce from 1-regular 2-colored PPW. Let w = s_1 s_2 . . . s_ℓ. We construct a path of ℓ vertices, one for each letter, where each type of symbol is represented by a unique color. By Lemma 4, any optimum Fair Correlation Clustering solution partitions the path into two clusters, each containing every color exactly once, while minimizing the number of cuts (the inter-cluster cost) by Lemma 3. As this is exactly equivalent to assigning the letters in the word to one of two colors and minimizing the number of color changes, we obtain our hardness result.

APX-hardness, however, is not transferred since, though there is a relationship between the number of cuts (the inter-cluster cost) and the Correlation Clustering cost, the two measures are not identical. In fact, as Fair Correlation Clustering has a PTAS on forests by Theorem 42, APX-hardness on paths would imply P = NP. On a side note, observe that for every Fair Correlation Clustering instance on paths we can construct an equivalent PPW instance (though not all of them are 1-regular 2-colored) by representing symbols by colors and PPW-colors by clusters. We note that it may be possible to efficiently solve Fair Correlation Clustering on paths if there are e.g. only two colors. There is an NP-hardness result on PPW with just two letters in [24], but a reduction from these instances is not as easy as above since its requirements imply an unfair clustering.

Beyond Trees

By Theorem 12, Fair Correlation Clustering is NP-hard even on trees with diameter 4. Here, we show that if we allow the graph to contain circles, the problem is already NP-hard for diameter 2. Also, this nicely contrasts that Fair Correlation Clustering is solved on trees of diameter 2 in linear time, as we will see in Subsection 6.1.

Theorem 15. Fair Correlation Clustering on graphs of diameter 2 with two colors in a ratio of 1 : 1 is NP-hard.

Proof. Cluster Editing, which is an alternative formulation of Correlation Clustering, is NP-hard on graphs of diameter 2 [9]. Further, Ahmadi et al. [1] give a reduction from Correlation Clustering to Fair Correlation Clustering with a color ratio of 1 : 1. They show that one can solve Correlation Clustering on a graph G = (V, E) by solving Fair Correlation Clustering on the graph G′ = (V ∪ V′, E ∪ E′ ∪ Ē) that mirrors G. The vertices in V are colored blue and the vertices in V′ are colored red. Formally, V′ and E′ contain a copy u′ of every vertex u ∈ V and a copy {u′, v′} of every edge {u, v} ∈ E. Further, Ē connects every vertex with its mirrored vertex as well as the mirrors of adjacent vertices, i.e., Ē = {{u, v′} | u, v ∈ V and u = v or {u, v} ∈ E}. The construction is depicted in Figure 5.
Observe that if G has diameter 2 then G′ also has diameter 2, as follows. As every pair of vertices u, v ∈ V is at distance at most 2 and the vertices as well as the edges of G are mirrored, every pair of vertices u′, v′ ∈ V′ is also at distance at most 2. Further, every vertex and its mirrored vertex have a distance of 1. For every pair of vertices u ∈ V, v′ ∈ V′ we distinguish two cases. If {u, v} ∈ E, then {u, v′} ∈ Ē, so the distance is 1. Otherwise, as the distance between u and v is at most 2 in G, there is w ∈ V such that {u, w} ∈ E and {v, w} ∈ E. Thus, {u, w′} ∈ Ē and {w′, v′} ∈ E′, so the distance of u and v′ is at most 2. As Correlation Clustering on graphs with diameter 2 is NP-hard and the reduction by Ahmadi et al. [1] constructs a graph of diameter 2 if the input graph is of diameter 2, we have proven the statement.

Further, we show that on general graphs Fair Correlation Clustering is NP-hard, even if the colors of the vertices allow for no more than 2 clusters in any fair clustering. This contrasts our algorithm in Subsection 6.4 solving Fair Correlation Clustering on forests in polynomial time if the maximum number of clusters is constant. To this end, we reduce from the NP-hard Bisection problem [29], which is the k = 2 case of k-Balanced Partitioning.

Input: Graph G = (V, E).
Task: Find a partition P = {A, B} of V that minimizes |{{u, v} ∈ E | u ∈ A ∧ v ∈ B}| under the constraint that |A| = |B|.

Theorem 16. Fair Correlation Clustering on graphs with two colors in a ratio of 1 : c is NP-hard, even if c = n/2 − 1 and the graph is connected.

[Figure 6: Graph constructed for the reduction from Bisection to a Fair Correlation Clustering instance with just 2 large clusters. The middle part corresponds to the input graph G and is colored red. Clique_1 and Clique_2 are both cliques of |V| red vertices and one blue vertex each.]

Proof. We reduce from Bisection. Let G = (V, E) be a Bisection instance and assume it has an even number of vertices (otherwise it is a trivial no-instance). The idea is to color all of the vertices in V red and add two cliques, each consisting of one blue and |V| red vertices, to enforce that a minimum-cost Fair Correlation Clustering consists of exactly two clusters and thereby partitions the vertices of the original graph in a minimum-cost bisection. The color ratio is 2 : 3|V|, which equals 1 : N/2 − 1 with N being the number of vertices of the newly constructed graph. We have to rule out the possibility that a minimum-cost Fair Correlation Clustering is just one cluster containing the whole graph. We do this by connecting the new blue vertices v_1, v_2 to only one arbitrary red vertex v ∈ V. We illustrate the scheme in Figure 6.

We first argue that every clustering with two clusters is cheaper than placing all vertices in the same cluster. Let n = |V| as well as m = |E|. Let P be a clustering that places all vertices in a single cluster. Then, cost(P) = (3n + 2)(3n + 1)/2 − (m + 2 + n(n + 1)), as the cluster is of size 3n + 2, there is a total of m + 2 edges plus the n(n + 1) edges of the cliques, and no edge is cut. Now assume we have a clustering P′ with an inter-cluster cost of χ that puts each clique in a different cluster. There are at most (n/2) · (n/2) inter-cluster edges between vertices of V and one inter-cluster edge from v to either v_1 or v_2, so χ ≤ n²/4 + 1. Placing all vertices in the same cluster is hence more expensive than any clustering with two clusters by (3n + 2)²/4 − 2χ ≥ (3n + 2)²/4 − n²/2 − 2. This is positive for n ≥ 2. Thus, Fair Correlation Clustering will always return at least two clusters.
Also, due to the fairness constraint and there being only two blue vertices, it creates exactly two clusters. Further, it does not cut vertices from one of the two cliques, for the following reason. As the clusters are of fixed size, by Lemma 3 we can focus on the inter-cluster cost to argue that a minimum-cost Fair Correlation Clustering only cuts edges in E. First, note that it is never optimal to cut vertices from both cliques, as just cutting the difference from one clique cuts fewer edges. This also implies that at most n/2 red vertices are cut from the clique, as otherwise the other cluster would have more than the required 3n/2 red vertices. So, assume 0 < a ≤ n/2 red vertices are cut from one clique. Any such solution has an inter-cluster cost of a · (n + 1 − a) + χ_E, where χ_E is the number of edges in E that are cut to split V into two clusters of size n/2 + a and n/2 − a as required to make a fair partition. We note that by not cutting the cliques and instead cutting off a vertices from the cluster of size n/2 + a, we obtain at most a · n/2 + χ_E cuts. As n/2 < n + 1 − a, this implies that no optimal solution cuts the cliques. Hence, each optimal solution partitions V in a minimum-cost bisection. Thus, by solving Fair Correlation Clustering on the constructed graph we can solve Bisection in G. As, further, the constructed graph is of polynomial size in |V|, we obtain our hardness result.

Algorithms

The results from Section 5 make it unlikely that there is a general polynomial time algorithm solving Fair Correlation Clustering on trees and forests. However, we are able to give efficient algorithms for certain classes of instances.

Simple Cases

First, we observe that Fair Correlation Clustering on bipartite graphs is equivalent to the problem of computing a maximum bipartite matching if there are just two colors that occur equally often. This is due to there being a minimum-cost fair clustering such that each cluster is of size 2.

Theorem 17. Computing a minimum-cost fair clustering with two colors in a ratio of 1 : 1 is equivalent to the maximum bipartite matching problem under linear-time reductions, provided that the input graph has a minimum-cost fair clustering in which each cluster has cardinality at most 2.

Proof. Let the colors be red and blue. By assumption, there is an optimum clustering for which all clusters are of size at most 2. Due to the fairness constraint, each such cluster consists of exactly 1 red and 1 blue vertex. By Lemma 3, the lowest cost is achieved by the lowest inter-cluster cost, i.e., when the number of clusters where there is an edge between the two vertices is maximized. This is exactly the matching problem on the bipartite graph G′ = (R ∪ B, E′), with R and B being the red and blue vertices, respectively, and E′ = {{u, v} ∈ E | u ∈ R ∧ v ∈ B}. After computing an optimum matching, each edge of the matching defines a cluster and unmatched vertices are packed into fair clusters arbitrarily. For the other direction, if we are given an instance G′ = (R ∪ B, E′) for bipartite matching, we color all the vertices in R red and the vertices in B blue. Then, a minimum-cost fair clustering is a partition that maximizes the number of edges in each cluster as argued above. As each vertex is part of exactly one cluster and all clusters consist of one vertex in R and one vertex in B, this corresponds to a maximum bipartite matching in G′. By Lemma 10, the condition of Theorem 17 is met by all bipartite graphs.
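A compact sketch of the first reduction direction of Theorem 17 follows; max_matching stands in for any bipartite matching routine and is an assumed parameter, not part of the paper.

def fair_clustering_via_matching(vertices, edges, color, max_matching):
    # Keep only red-blue edges; these are the candidate size-2 clusters.
    cross = [(u, v) for u, v in edges if color[u] != color[v]]
    matching = max_matching(vertices, cross)   # iterable of (u, v) pairs
    clusters = [[u, v] for u, v in matching]
    matched = {x for pair in matching for x in pair}
    free_red = [v for v in vertices if color[v] == "red" and v not in matched]
    free_blue = [v for v in vertices if color[v] == "blue" and v not in matched]
    # Unmatched vertices form fair clusters arbitrarily; by Lemma 3 the cost
    # only depends on the number of matched pairs, which is maximized.
    clusters.extend([r, b] for r, b in zip(free_red, free_blue))
    return clusters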
The recent maxflow breakthrough [18] also gives an m^{1+o(1)}-time algorithm to compute bipartite matchings; this then transfers to Fair Correlation Clustering with color ratio 1 : 1. For Fair Correlation Clustering on forests, we can do better, as the reduction in Theorem 17 again results in a forest, for which bipartite matching can be solved in linear time by standard techniques. We present the algorithm here for completeness.

Theorem 18. Fair Correlation Clustering on forests with two colors in a ratio of 1 : 1 can be solved in time O(n).

Proof. We apply Theorem 17 to receive a sub-forest of the input for which we have to compute a maximum matching. We do so independently for each of the trees by running the following dynamic program. We visit all vertices, but each one only after we have already visited all its children (for example by employing topological sorting). For each vertex v, we compute the maximum matching M_v in the subtree rooted at v as well as the maximum matching M′_v in the subtree rooted at v assuming v is not matched. We directly get that M′_v combines the matchings M_u of the children u of v, while M_v additionally considers, for each child u adjacent to v in the sub-forest, matching v with u and combining M′_u with the matchings of the other children, taking the best of these options. Each vertex is visited once. If the matchings are not naively merged during the process but only their respective sizes are tracked and the maximum matching is retrieved after the dynamic program by using a backtracking approach, the time complexity per vertex is linear in the number of its children. Thus, the dynamic program runs in time in O(n).
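The dynamic program could look as follows, assuming the sub-forest from Theorem 17 is given as an adjacency dict; for brevity this sketch only returns the matching size, omitting the backtracking for the matched edges.

def max_matching_size(adj):
    seen, total = set(), 0
    for root in adj:
        if root in seen:
            continue
        # Iterative traversal recording a pre-order of the tree of `root`;
        # reversing it visits every vertex after all of its children.
        order, stack, parent = [], [root], {root: None}
        while stack:
            v = stack.pop()
            seen.add(v)
            order.append(v)
            for w in adj[v]:
                if w != parent[v]:
                    parent[w] = v
                    stack.append(w)
        M, M_free = {}, {}  # best size in T_v; best size with v unmatched
        for v in reversed(order):
            children = [w for w in adj[v] if w != parent[v]]
            M_free[v] = sum(M[w] for w in children)
            # Either leave v unmatched, or match it to one child u that
            # stays unmatched within its own subtree.
            M[v] = max([M_free[v]] +
                       [1 + M_free[u] + M_free[v] - M[u] for u in children])
        total += M[root]
    return total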
Next, recall that Theorem 12 states that Fair Correlation Clustering on trees with a diameter of at least 4 is NP-hard. With the next theorem, we show that we can efficiently solve Fair Correlation Clustering on trees with a diameter of at most 3, so our threshold of 4 is tight unless P = NP.

Theorem 19. Fair Correlation Clustering on trees with a diameter of at most 3 can be solved in time O(n).

Proof. Diameters of 0 or 1 are trivial and the case of two colors in a ratio of 1 : 1 is handled by Theorem 17. So, assume d > 2 to be the minimum size of a fair cluster. A diameter of 2 implies that the tree is a star. In a star, the inter-cluster cost equals the number of vertices that are not placed in the same cluster as the center vertex. By Lemma 4, every clustering of minimum cost has minimum-sized clusters. As in a star all these clusterings incur the same inter-cluster cost of n − d + 1, they all have the same Correlation Clustering cost by Lemma 3. Hence, outputting any fair clustering with minimum-sized clusters solves the problem. Such a clustering can be computed in time in O(n).

If we have a tree of diameter 3, it consists of two adjacent vertices u, v such that every vertex w ∈ V \ {u, v} is connected to either u or v and no other vertex, see Figure 7. This is due to every graph of diameter 3 having a path of four vertices. Let the two in the middle be u and v. The path has to be an induced path or the graph would not be a tree. We can attach other vertices to u and v without changing the diameter, but as soon as we attach a vertex elsewhere, the diameter increases. Further, there are no edges between vertices in V \ {u, v} as the graph would otherwise not be circle-free. For the clustering, there are now two possibilities, which we try out separately: either u and v are placed in the same cluster or not. In both cases, Lemma 4 gives that all clusters are of minimal size d. If u and v are in the same cluster, all clusterings of fair minimum-sized clusters incur an inter-cluster cost of n − d + 2, as all but d − 2 vertices have to be cut from u and v. In O(n), we greedily construct such a clustering P_1. If we place u and v in separate clusters, the minimum inter-cluster cost is achieved by placing as many of their respective neighbors in their respective clusters as possible. After that, all remaining vertices are isolated and are used to make these two clusters fair and, if required, form more fair clusters. Such a clustering P_2 is also computed in O(n). We then return the cheaper clustering. This is a fair clustering of minimum cost as either u and v are placed in the same cluster or not, and for both cases, P_1 and P_2 are of minimum cost, respectively.

Color Ratio 1 : 2

We now give algorithms for Fair Correlation Clustering on forests that do not require a certain diameter or degree. As a first step to solve these less restricted instances, we develop an algorithm to solve Fair Correlation Clustering on forests with a color ratio of 1 : 2. W.l.o.g., the vertices are colored blue and red with twice as many red vertices as blue ones. We call a connected component of size 1 a b-component or r-component, depending on whether the contained vertex is blue or red. Analogously, we apply the terms br-component, rr-component, and brr-component to components of size 2 and 3.

Linear Time Attempt

Because of Lemma 4, we know that in every minimum-cost fair clustering each cluster contains exactly 1 blue and 2 red vertices. Our high-level idea is to employ two phases. In the first phase, we partition the vertices of the forest F in a way such that in every set there are at most 1 blue and 2 red vertices. We call such a partition a splitting of F. We would like to employ a standard tree dynamic program that bottom-up collects vertices to be in the same connected component and cuts edges if otherwise there would be more than 1 blue or 2 red vertices in the component. We have to be smart about which edges to cut, but as only up to 3 vertices can be placed in the topmost component, we have only a limited number of possibilities to track to find the splitting that cuts the fewest edges. After having found that splitting, we employ a second phase, which finds the best way to assemble a fair clustering from the splitting by merging components and cutting as few additional edges as possible. As, by Lemma 3, a fair partition with the smallest inter-cluster cost has a minimum Correlation Clustering cost, this would find a minimum-cost fair clustering.

Unfortunately, the approach does not work that easily. We find that the number of cuts incurred by the second phase also depends on the number of br- and r-components. For our approach to work, the first phase has to simultaneously minimize the number of cuts as well as the difference between br- and r-components. This is, however, not easily possible. Consider the tree in Figure 8. There, with one additional cut edge we have three br-components less and one r-component more. Using a standard tree dynamic program therefore does not suffice, as when encountering the tree as a subtree of some larger forest or tree, we would have to decide between optimizing for the number of cut edges or the difference between br- and r-components. There is no trivial answer here as the choice depends on how many br- and r-components are obtained in the rest of the graph. For our approach to work, we hence have to track both possibilities until we have seen the complete graph, setting us back from achieving a linear running time.

The Join Subroutine

In the first phase, we might encounter situations that require us to track multiple ways of splitting various subtrees.
When we reach a parent vertex of the roots of these subtrees, we join these various ways of splitting. For this, we give a subroutine called Join. We first formalize the output by the following lemma, then give an intuition on the variables, and lastly prove the lemma by giving the algorithm.

Lemma 21. Let R_1, R_2, . . . , R_{ℓ_1} be arrays of length ℓ_2 each and let f : [ℓ_2] × [ℓ_2] → 2^{[ℓ_2]} be a computable function, extended to multiple arguments by f(x_1, x_2, . . . , x_k) = ⋃_{y ∈ f(x_1, . . . , x_{k−1})} f(y, x_k). Then, an array R of length ℓ_2 with R[x] = min{ Σ_{i=1}^{ℓ_1} R_i[j_i] | x ∈ f(j_1, j_2, . . . , j_{ℓ_1}) } for all x ∈ [ℓ_2] can be computed in time O(ℓ_1 · ℓ_2² · T_f), where T_f is the time to evaluate f.

As we later reuse the routine, it is formulated more generally than required for this section. Here, for the 1 : 2 case, assume we want to join the splittings of the children u_1, u_2, . . . , u_{ℓ_1} of some vertex v. For example, assume v has three children as depicted in Figure 9. Then, for each child u_i, let there be an array R_i such that R_i[x] is the minimum number of cuts required to obtain a splitting of the subtree T_{u_i} that has exactly x more br-components than r-components. For our example, assume all edges between v and its children have to be cut; the entries of R_1 and R_2 are then given by the possible splittings of the respective subtrees, and R_3 = R_2.

The function f returns the set of indices that should be updated when merging two possibilities. When a splitting of one child's subtree has x_1 more br-components and a splitting of another child's subtree has x_2 more br-components, then the combination of these splittings has x_1 + x_2 more br-components than r-components. Hence, the only index to update is f(x_1, x_2) = {x_1 + x_2}. Later, we will require to update more than a single index, so f is defined to return a set instead of a single index. Note that, by the definition of f and its extension, each value placed in R[x] by the routine corresponds to choosing exactly one splitting from each array R_i such that the total difference between br-components and r-components sums up to exactly x.

We now describe how the Join subroutine is computed.

Proof of Lemma 21. The algorithm works in an iterative manner. Assume it has found the minimum value for all indices using the first i − 1 arrays and the values are stored in R. It then joins the i-th array by trying every combination of indices and checking whether the resulting value is smaller than the current element at each index to be updated. Thereby, it tries all possible ways of combining the interim solution with R_i and for each index tracks the minimum that can be achieved. Formally, we give the algorithm in Algorithm 1.

[Algorithm 1: The Join subroutine.]

The algorithm terminates after O(ℓ_1 · ℓ_2² · T_f) iterations due to the nested loops. We prove by induction that R is a solution of Join over the arrays R_1, . . . , R_i after each iteration i. The first iteration simply tries all allowed combinations of the arrays R_1, R_2 and tracks the minimum value for each index, matching our definition of Join. Now assume the statement holds for some i. Observe that we only update a value R[x] if there is a respective combination attaining it, so none of the values is too small. To show that no value is too large, take any x ∈ [ℓ_2] and let a be the actual minimum value that can be obtained for R[x] in this iteration. Let j_1, j_2, . . . , j_{i+1} with x ∈ f(j_1, j_2, . . . , j_{i+1}) be the indices that obtain a. Then, there is y ∈ [ℓ_2] such that after joining the first i arrays the value at index y is a − R_{i+1}[j_{i+1}] and y ∈ f(j_1, j_2, . . . , j_i). This implies R[y] ≤ a − R_{i+1}[j_{i+1}] by our induction hypothesis. Further, as both x ∈ f(j_1, j_2, . . . , j_{i+1}) and y ∈ f(j_1, j_2, . . . , j_i), we have x ∈ f(y, j_{i+1}). Thus, in this iteration, R[x] is set to at most R[y] + R_{i+1}[j_{i+1}] = a. With this, all values are set correctly.
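A direct implementation of Algorithm 1 might look as follows; INF marks unattainable indices, and f takes two 0-based indices and returns a set of result indices (for this section, f = lambda x1, x2: {x1 + x2} after shifting the index range to be non-negative).

INF = float("inf")

def join(arrays, f):
    R = list(arrays[0])
    for R_i in arrays[1:]:
        new = [INF] * len(R)
        # Try all combinations of an index of the interim solution R with
        # an index of the next array R_i, tracking the minimum per target.
        for x1, v1 in enumerate(R):
            if v1 == INF:
                continue
            for x2, v2 in enumerate(R_i):
                if v2 == INF:
                    continue
                for x in f(x1, x2):
                    if 0 <= x < len(new) and v1 + v2 < new[x]:
                        new[x] = v1 + v2
        R = new
    return R

With f(x1, x2) = {x1 + x2}, each iteration of the outer loop is exactly a (min, +)-convolution of R and R_i, which connects to the conditional lower bound discussed next.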
Observe that in the case of f(x_1, x_2) = {x_1 + x_2}, which is relevant to this section, the loop in lines 4–6 computes the (min, +)-convolution of the arrays R and R_i. Simply trying all possible combinations as done in the algorithm has a quadratic running time. This cannot be improved without breaking the MinConv Conjecture, which states that there is no algorithm computing the (min, +)-convolution of two arrays of length n in time in O(n^{2−ε}) for any constant ε > 0 [21].

The Tracking Algorithm

With the Join subroutine at hand, we are able to build a dynamic program solving Fair Correlation Clustering on forests with two colors in a ratio of 1 : 2. We first describe how to apply the algorithm to trees and then generalize it to work on forests. In the first phase, for each possible difference between the number of br-components and r-components, we compute the minimum number of cuts to obtain a splitting with that difference. In the second phase, we find the splitting for which the sum of edges cut in the first phase and the number of edges required to turn this splitting into a fair partition is minimal. This sum is the inter-cluster cost of that partition, so by Lemma 3 this finds a fair partition with the smallest Correlation Clustering cost.

Splitting the tree. In the first phase, our aim is to compute an array D such that, for all integers −n ≤ x ≤ n/3, D[x] is the minimum number of edges one has to cut to obtain a splitting with exactly x more br-components than r-components. To this end, for every vertex v we compute arrays ∆^h_v for the subtree T_v, one for each possible head h ∈ {∅, r, b, rr, br}, i.e., each possible coloring of the component containing v that remains connected to the parent of v (with h = ∅ if the edge to the parent is cut). Figure 10 gives examples of how a head is composed from the splittings of the children. In Figure 10b, the composition is unambiguous, as the only way to obtain an rr-head is to choose the r-head for the left child and an ∅-head for the right one. Both the left and the right variants have to be considered as they differ in the number of br-components minus the number of r-components. The splittings in Figures 10c–10e create an ∅-head, as they cut the edge above the root of the subtree, so no vertices of the subtree can be part of a component with vertices outside the subtree. Out of these 3 splittings, however, only Figures 10c and 10d will be further considered, as Figure 10e obtains the same difference between br- and r-components as Figure 10c but cuts one more edge. We note that other splittings obtain an ∅-head as well that are not listed here.

In a leaf v, the only possibilities are to cut the edge above the leaf or not: not cutting it yields the head corresponding to c(v) at no cost, while cutting it costs one cut and turns {v} into an r-component or b-component. This concludes the computations for the leaves. Now suppose we have finished the computation for all children u_1, u_2, . . . , u_k of some vertex v. Observe that at most two children of v are placed in a head with v. For every head h ∈ {∅, r, b, rr, br} that is formable at vertex v, we try all possibilities to obtain that head. If h ∈ {r, b} and c(v) corresponds to h, this is done by choosing ∅-heads for all children. There is no unique splitting of the subtrees however, as for each subtree rooted at some child vertex u_i there is a whole array ∆^∅_{u_i} of possible splittings with different numbers of br- and r-components. To find the best choices for all child vertices, we employ the Join subroutine that, when called with f(x_1, x_2) = {x_1 + x_2} and a list of arrays, returns an array R such that, for all indices x, R[x] is the minimum value obtained by summing up exactly one value from each of the input arrays such that the indices of the chosen values sum up to x. We hence set ∆^h_v = Join(∆^∅_{u_1}, . . . , ∆^∅_{u_k}). Here and in the following, we only call the Join subroutine with at least two arrays. If we would only input a single array, we go on as if the Join subroutine returned that array. We note that here our indexing ranges from −n to n/3, so an
index x here maps to an index x + n + 1 in the subroutine. If h = br, or both h = rr and c(v) corresponds to r, then the heads for all children should be ∅ except for one child that we place in the same component as v. It then has a head h′ ∈ {r, b}, depending on h and c(v). We have h′ = r if h = rr and c(v) corresponds to r, or if h = br and c(v) corresponds to b. Otherwise, h′ = b. For each choice of the child u_i providing the head h′, we employ the Join subroutine on ∆^{h′}_{u_i} and the arrays ∆^∅_{u_j} for all j ≠ i, and take the element-wise minimum over all choices; again, f(x_1, x_2, . . . , x_k) = {x_1 + x_2 + . . . + x_k} gives the correct index when merging the k subtrees. In particular, ∆^∅_r is the array containing for each −n ≤ x ≤ n/3 the minimum number of edges to cut such that there are exactly x more br-components than r-components, where r is the root of T. By adjusting the Join subroutine to track the exact combination that leads to the minimum value at each position, we also obtain an array D that contains not only the numbers of edges but the sets of edges one has to cut, or that is marked with N if no such set exists.

Forests. Our algorithm is easily generalized to also solve Fair Correlation Clustering on unconnected forests with two colors in a ratio of 1 : 2 by slightly adapting the first phase. We run the dynamic program as described above for each individual tree. This still takes overall time in O(n⁶). For each tree T_i in the forest, let ∆^∅_{T_i} denote the array ∆^∅_r with r being the root of tree T_i. To find a splitting of the whole forest and not just of the individual trees, we perform an additional run of the Join subroutine using these arrays ∆^∅_{T_i} and the function f(x_1, x_2) = {x_1 + x_2}. This gives us an array R such that R[x] is the minimum number of cuts required to obtain a splitting with exactly x more br-components than r-components for the whole forest rather than for the individual trees. Note that we choose the ∅-head at each tree: as the trees are not connected to each other, in order to find a splitting we do not yet have to consider how components of different trees are merged; this is done in the second phase. The first phase then outputs an array D that contains the sets of edges corresponding to R, which is obtained by a backtracking approach. As the additional subroutine call takes time in O(n³), the asymptotic running time of the algorithm does not change. This gives the following result: Fair Correlation Clustering on forests with two colors in a ratio of 1 : 2 can be solved in time O(n⁶).

Small Clusters

To obtain an algorithm that handles more colors and different color ratios, we generalize our approach for the 1 : 2 color ratio case from the previous section. We obtain the following.

Theorem 23. Fair Correlation Clustering on forests with k colors in a ratio of c_1 : c_2 : . . . : c_k can be solved in time O(n^{2·setvars+setmax+2} · setvars^{setmax}), where setmax = Σ_{i∈[k]} c_i is the size of a minimum-sized fair cluster and setvars = Π_{i∈[k]} (c_i + 1). In particular, the running time is polynomial if setmax is constant.

Once more, the algorithm runs in two phases. First, it creates a list of possible splittings, i.e., partitions in which, for every color, every set has at most as many vertices of that color as a minimum-sized fair component has. In the second phase, it checks for these splittings whether they can be merged into a fair clustering. Among these, it returns the one of minimum cost. We first give the algorithm solving the problem on trees and then generalize it to also capture forests.

Splitting the forest. For the first phase in the 1 : 2 approach, we employed a dynamic program that kept track of the minimum number of cuts to obtain a splitting for each possible cost incurred by the reassembling in the second phase. Unfortunately, if we are given a graph with k ≥ 2 colors in a ratio of c_1 : c_2 : . . .
: c_k, then the number of cuts that are required in the second phase is not always as easily bounded by the difference of the numbers of two component types, like the r- and br-components in the 1 : 2 case. However, we find that it suffices to track the minimum number of cuts required to obtain any possible coloring of a splitting. We first bound the number of possible colorings of a splitting. As during the dynamic program we consider splittings of a subgraph of G most of the time, we also have to count all possible colorings of splittings of less than n vertices. In the following, for bounds d_1, d_2, . . . , d_k on the number of vertices of each color in a component, we write setvars = Π_{i∈[k]} (d_i + 1) for the number of possible colorings of a single component and setmax = Σ_{i∈[k]} d_i for the maximum size of a component.

Lemma 24. Let U be a set of n elements, colored in k ∈ N_{>1} colors, and let d_1, d_2, . . . , d_k ∈ N_{>0}. Then there are at most (n + 1)^{setvars−1} colorings of partitions of U in which every set contains at most d_i elements of color i for every i ∈ [k].

Proof. The number of sets with different colorings is at most setvars, as there are 0 to d_i many vertices of color i in each component. Thus, a coloring of a partition P using only these sets is characterized by an array of size setvars with values in [n] ∪ {0}, as no set coloring occurs more than n times. There are (n + 1)^{setvars} ways to fill such an array. However, as the set colorings together have to form a partition, the last entry is determined by the first setvars − 1 entries, giving only (n + 1)^{setvars−1} possibilities.

With this, we employ a dynamic program similar to the one presented in Subsection 6.2 but track the minimum cut cost for all colorings of splittings. It is given by the following lemma.

Lemma 25. Let F = (V, E) be an n-vertex forest colored in k colors, let d_1, d_2, . . . , d_k ∈ N_{>0}, and let C be the set of possible colorings of splittings. Then, in time O(n^{2·setvars+setmax+2} · setvars^{setmax}), for all C ∈ C, we find a minimum-sized set D_C ⊆ E such that the connected components in F − D_C form a partition of the vertices with coloring C, or certify that there is no such set.

Proof. We first describe how to solve the problem on a tree T and then generalize the approach to forests. We call a partition of the vertices such that for every color i there are at most d_i vertices of that color in each set a splitting. We employ a dynamic program that computes the set D_C for the colorings of all possible splittings and all subtrees rooted at each vertex in T. We do so iteratively, by starting to compute all possible splittings at the leaves and augmenting them towards the root. Thereby, the connected component that is connected to the parent of the current subtree's root is of particular importance, as it is the only connected component that can be augmented by vertices outside the subtree. We call this component the head. Note that the head is empty if the edge between the root and its parent is cut. We do not count the head in the coloring of the splitting and only give it explicitly. Formally, for every v ∈ V, every possible coloring of a splitting C, and every possible coloring h of the head we compute D^h_v[C] ⊆ E, the minimum-sized set of edges such that the connected components of T_v − D^h_v[C] form a splitting of the subtree T_v with coloring C and a head colored h. For a leaf v, we set D^{h_v}_v[C_∅] = ∅, where h_v is the coloring of the component {v} and C_∅ the coloring of the partition over the empty set. Also, we set D^∅_v[C_{{v}}] to contain only the edge to the parent of v, where the vertex v is not placed in the head as the edge to its parent is cut. As to cut or not to cut the edge above are the only options for leaves, this part of the array is now completed.

Next, suppose we have finished the computation for all children of some vertex v. For every possible coloring h of the head that is formable at vertex v, we try all possibilities to obtain that coloring. To this end, first assume h to be non-empty. Therefore, v has to be placed in the head. Let h_{−c(v)} denote the coloring obtained by decreasing h by one at color c(v). To obtain head h, we hence have to choose colorings of splittings of the subtrees rooted at the children u_1, u_2, . . . , u_ℓ of v such that their respective heads h_{u_1}, h_{u_2}, . . .
, h_{u_ℓ} combine to h_{−c(v)}. A combination of colorings C_1, C_2, . . . , C_ℓ refers to the coloring of the union of partitions M_1, M_2, . . . , M_ℓ that have the respective colorings and is defined as the element-wise sum over the arrays C_1, C_2, . . . , C_ℓ. Often, there are multiple ways to choose heads for the child vertices that fulfill this requirement. As every head is of size at most setmax and contains v, h_{−c(v)} is composed of less than setmax non-empty heads. As there are at most setvars possible head colorings and we have to choose less than setmax children to provide them, there are at most n^{setmax−1} · setvars^{setmax−1} possible ways to form h_{−c(v)} with the children of v. Let each way be described by a function H assigning each child of v a certain, possibly empty, head. Then, even for a fixed H, there are multiple splittings possible. This stems from the fact that even if the head H(u) for a child u is fixed, there might be multiple splittings of the subtree of u with different colorings resulting in that head. For each possible H, we hence employ the Join subroutine with the arrays D^{H(u)}_u for all children u, using the cardinality of the sets as input for the subroutine. For the sake of readability, we index the arrays here by some vector C instead of a single numerical index as used in the algorithmic description of the Join subroutine. We implicitly assume that each possible coloring is represented by a positive integer. By letting these indices enumerate the vectors in a structured way, converting between the two formats only costs an additional time factor in O(n). For f(x_1, x_2) we give the function returning a set containing only the index of the coloring obtained by combining the colorings indexed by x_1 and x_2, which is computable in time in O(n). Combining the colorings means, for each set coloring, summing the occurrences in both partition colorings. Thereby, f(x_1, x_2, . . . , x_k) as defined in the Join subroutine returns the index of the combination of the colorings indexed by x_1, x_2, . . . , x_k. Note that there are at most n arrays and each is of length less than (n + 1)^{setvars−1}, as there are so many different colorings by Lemma 24. After executing the Join subroutine, by Lemma 21, we obtain an array D_H that contains the minimum cut cost required for all possible colorings that can be achieved by splitting according to H. By modifying the Join subroutine slightly to use a simple backtracking approach, we also obtain the sets of edges that achieve these cut costs. We conclude our computation of D^h_v by element-wise taking the minimum-sized set over all computed arrays D_H for the possible assignments H.

If h is the empty head, i.e., the edge above v is cut, then v is placed in a component that is either of size setmax or has a coloring corresponding to some head h′. In the first case, we compute an array D_full in the same manner as described above by trying all suitable assignments H and employing the Join subroutine. In the second case, we simply take the already filled array D^{h′}_v. Note that in both cases we have to increment all values in the array by one to reflect cutting the edge above v, except if v is the root vertex. Also, we have to move the values in the arrays around, in order to reflect that the component containing v is no longer a head but, with the edge above v cut, should also be counted in the coloring of the splitting.
Hence, the entry D_full[C] is actually stored at D_full[C_{−full}], with C_{−full} being the coloring C minus the coloring of a minimum-sized fair cluster. If no such entry D_full[C_{−full}] exists, we assume it to be ∞. The same goes for accessing the arrays D^{h′}_v, where we have to subtract the coloring h′ from the index. Taking the element-wise minimum-sized element over the so-modified arrays D_full and D^{h′}_v for all possibilities for h′ yields D^∅_v. By the correctness of the Join subroutine, and as we try out all possibilities to build the specified heads and colorings at every vertex, we thus know that after completing the computation at the root r of T, the array D^∅_r contains for every possible coloring of a splitting of the tree the minimum cut cost to achieve that coloring.

For each of the n vertices and the setvars possible heads, we call the Join subroutine at most n^{setmax−1} · setvars^{setmax−1} many times. Each time, we call it with at most n arrays and, as by Lemma 24 there are O(n^{setvars}) possible colorings, all these arrays have that many elements. Hence, each subroutine call takes time in O(n · (n^{setvars})²) = O(n^{2·setvars+1}), so the algorithm takes time in O(n^{2·setvars+setmax+2} · setvars^{setmax}), including an additional factor in O(n) to account for converting the indices for the Join subroutine. When the input graph is not a tree but a forest F, we apply the dynamic program on every tree in the forest. Then, we additionally run the Join subroutine with the arrays for the ∅-head at the roots of all trees in the forest. The resulting array contains all minimum-cost solutions from all possible combinations of colorings of splittings from the individual trees and is returned as output. The one additional subroutine call does not change the asymptotic running time.

Because of Lemmas 4 and 10 it suffices to consider partitions as possible solutions that have at most c_i vertices of color i in each cluster, for all i ∈ [k]. We hence apply Lemma 25 on the forest F and set d_i = c_i for all i ∈ [k]. This way, for every possible coloring of a splitting we find the minimum set of edges to obtain a splitting with that coloring.

Assembling a fair clustering. Let D be the array produced in the first phase, i.e., for every coloring C of a splitting, D[C] is a minimum-sized set of edges such that the connected components in F − D[C] induce a partition with coloring C. In the second phase, we have to find the splitting that gives the minimum Correlation Clustering cost. We do so by deciding for each splitting whether it is assemblable, i.e., whether its clusters can be merged such that it becomes a fair solution with all clusters being no larger than setmax. Among these, we return the one with the minimum inter-cluster cost computed in the first phase. This suffices because of the following reasons. First, note that deciding assemblability only depends on the coloring of the splitting, so it does not hurt that in the first phase we tracked only all possible colorings of splittings and not all possible splittings themselves. Second, we do not have to consider further edge cuts in this phase: Assume we have a splitting S with coloring C_S and we would obtain a better cost by further cutting a edges in S, obtaining another splitting S′ of coloring C_{S′}. However, as we filled the array D correctly, there is an entry D[C_{S′}] with |D[C_{S′}]| ≤ |D[C_S]| + a. As we will consider this value in finding the minimum anyway, there is no need to think about cutting the splittings any further.
Third, the minimum inter-cluster cost yields the minimum Correlation Clustering cost by Lemma 3. When merging clusters, the inter-cluster cost computed in the first phase may decrease but not increase. If it decreases, we overestimate the cost. However, this case implies that there is an edge between the two clusters and, as they are still of size at most setmax when merged, in the first phase we will also have found another splitting considering this case.

We employ a dynamic program to decide the assemblability for all possible O(n^{setvars}) colorings of splittings. Define the size of a partition coloring to be the number of set colorings in that partition coloring (not necessarily the number of different set colorings). We decide assemblability for all possible colorings of splittings from smallest to largest. Note that each such coloring is of size at least n/setmax. If it is of size exactly n/setmax, then all contained set colorings are of size setmax, so this partition coloring is assemblable if and only if all set colorings are fair. Now assume we have found all assemblable colorings of splittings of size exactly j ≥ n/setmax. Assume a partition coloring C of size j + 1 is assemblable. Then, at least two set colorings C_1, C_2 from C are merged together. Hence, let C′ be the partition coloring obtained by removing the set colorings C_1, C_2 from C and adding the set coloring of the combined coloring of C_1 and C_2. Now, C′ is of size j and is assemblable. Thus, every assemblable splitting with j + 1 components has an assemblable splitting with j components. The other way round, if we split a set coloring of an assemblable partition coloring of size j, we obtain an assemblable partition coloring of size j + 1. Hence, we find all assemblable colorings of splittings of size j + 1 by, for each assemblable partition coloring of size j (fewer than n^{setvars} many), trying each possible way to split one of its set colorings (fewer than j · 2^{setmax}, as there are j set colorings, each of size at most setmax). Thus, to compute all assemblable colorings of splittings of size j + 1, we need time in O(n^{setvars} · j · 2^{setmax}), which implies a total time for the n − n/setmax iterations in the second phase in O(n^{setvars+2} · 2^{setmax}). This is dominated by the running time of the first phase. The complete algorithm hence runs in time in O(n^{2·setvars+setmax+2} · setvars^{setmax}), which implies Theorem 23. This gives an algorithm that solves Fair Correlation Clustering on arbitrary forests. The running time, however, may be exponential in the number of vertices, depending on the color ratio in the forest.

Few Clusters

The algorithm presented in the previous section runs in polynomial time if the colors in the graph are distributed in a way such that each cluster in a minimum-cost solution is of constant size. The worst running time is obtained when there are very large but few clusters. For this case, we offer another algorithm, which runs in polynomial time if the number of clusters is constant. However, it is limited to instances where the forest is colored in two colors in a ratio of 1 : c for some c ∈ N. The algorithm uses a subroutine that computes the minimum number of cuts that are required to slice off clusters of specific sizes from the tree. It is given by the following lemma.

Lemma 26. Let T = (V, E) be a tree rooted at some vertex r and let k ∈ N. Then, in time O((k + 3)! · n^{2k+3}), we can compute an array R such that, for every a_0 ∈ [n] and a = (a_1, a_2, . . . , a_k) ∈ ([n] ∪ {0})^k, R[a_0, a] is a partition of V into clusters A_0, A_1, . . . , A_k with r ∈ A_0, |A_0| = a_0, and |A_i| = a_i for all i ∈ [k] that minimizes the inter-cluster cost, or N if no such partition exists.

Proof. We give a construction such that R[a_0, a] stores not the partition itself but the incurred inter-cluster cost. By a simple backtracking approach, the partitions are obtained as well.
We employ a dynamic program that involves using the Join subroutine. For the sake of readability, we index the arrays here by some vector a ∈ ([n] ∪ {0})^k and a value a_0 ∈ [n] instead of a single numerical index as used in the algorithmic description of the Join subroutine. We implicitly assume that each possible pair (a_0, a) is represented by some index in [n^{k+1}]. By letting these indices enumerate the vectors in a structured way, converting between the two formats only costs an additional time factor in O(k). Starting at the leaves and continuing at the vertices for which all children have finished their computation, we compute an array R_v with the properties described for R, but for the subtree T_v, for each vertex v ∈ V. In particular, for every vertex v we do the following. Let (a_0, a) and (a′_0, a′) be two indices and recall that f((a_0, a), (a′_0, a′)) should return a set of indices of the form (a″_0, a″). Each such index describes a combination of all possibilities for v and the already considered children, (a_0, a), with the possibilities for the next child, (a′_0, a′). First, we consider the possibility to cut the edge between v and the child u that is represented by (a′_0, a′). Then, we add all possible ways of merging the two sets with their k + 1 clusters each. As we cut the edge {u, v}, there are k possible ways to place the cluster containing u (all but the cluster containing v), and then there are k! ways to assign the remaining clusters. All these are put into the set f((a_0, a), (a′_0, a′)). Second, we assume the edge {u, v} is not cut. Then, the clusters containing v and u have to be merged, so there are only k! possible ways to assign the other clusters. In particular, for all indices (a″_0, a″) put into f((a_0, a), (a′_0, a′)) this way, we have a″_0 = a_0 + a′_0. Note that f can be computed in O(k · k!). Note that f(x_1, x_2, . . . , x_ℓ) as defined in the Join subroutine lists all possibilities to cut the combined tree, as it iteratively combines all possibilities for the first child and the vertex v and, for the resulting tree, lists all possible combinations with the next child, and so on. The Join subroutine takes time in O((k + 1) · (n^{k+1})² · (k · k!) · k), which is in O((k + 3)! · n^{2k+2}). All O(n) calls of the subroutine hence take time in O((k + 3)! · n^{2k+3}).

With this, we are able to give an algorithm for graphs with two colors in a ratio of 1 : c, which runs in polynomial time if there is only a constant number of clusters, i.e., if c ∈ Θ(n).

Theorem 27. Fair Correlation Clustering on forests with two colors in a ratio of 1 : c can be solved in time n^{O(p²)}, where p = n/(c + 1) is the number of clusters in a minimum-cost fair clustering.

Proof. Note that, if there are c red vertices per 1 blue vertex, p = n/(c + 1) is the number of blue vertices. By Lemma 4, any minimum-cost clustering consists of p clusters, each containing exactly one blue vertex, and from Lemma 3 we know that it suffices to minimize the number of edges cut by any such clustering. All blue vertices are to be placed in separate clusters. They are separated by cutting at most p − 1 edges, so we try all of the O((p − 1) · (n − 1 choose p − 1)) subsets of edges of size at most p − 1. Having cut these edges, we have trees T_1, T_2, . . . , T_ℓ, with p of them containing exactly one blue vertex and the others no blue vertices. We root the trees at the blue vertex if they have one, or at an arbitrary vertex otherwise. For each tree T_i, let r_i be the number of red vertices. If we have exactly p trees and r_i = c for all i ∈ [p], we have found a minimum-cost clustering, where the i-th cluster is simply the set of vertices of T_i for all i ∈ [p].
Otherwise, we must cut off parts of the trees and assign them to other clusters in order to make the partition fair. To this end, for each tree T_i we compute an array R_i that states the cost of cutting up to p − 1 parts of certain sizes off. More precisely, R_i[(a_1, a_2, . . . , a_{p−1})] is the number of cuts required to cut off p − 1 clusters of size a_1, a_2, . . . , a_{p−1}, respectively, and ∞ if there is no such way. We compute these arrays employing Lemma 26. Note that here we omitted the a_0 used in the lemma, which here refers to the number of vertices not cut from the tree. However, a_0 is still unambiguously defined over a, as all the values sum up to the number of vertices in this tree. Further, by connecting all trees without blue vertices to some newly added auxiliary vertex z and using this tree rooted at z as input to Lemma 26, we reduce the number of subroutine calls to p + 1. Then, the only entries from the array obtained for the all-red tree we consider are the ones with a_0 = 1, as we do not want to merge z in a cluster but every vertex except z from this tree has to be merged into another cluster. We call the array obtained from this tree R_0 and the arrays obtained for the other trees R_1, R_2, . . . , R_p, respectively. Note that every fair clustering is characterized by choosing one entry from each array R_i and assigning the cut-off parts to other clusters. As each array has fewer than n^p entries and there are at most (p!)^p ways to assign the cut-off parts to clusters, there are at most n^{O(p²)} possibilities in total. For each of these, we compute in linear time whether they result in a fair clustering. Among these fair clusterings, we return the one with the minimum inter-cluster cost, computed by taking the sum over the chosen entries from the arrays R_i. By Lemma 3, this clustering has the minimum Correlation Clustering cost. We obtain a total running time of n^{O(p²)}.

Combining the results of Theorems 23 and 27, we see that for the case of a forest with two colors in a ratio of 1 : c for some c ∈ N_{>0}, there are polynomial-time algorithms when the clusters are either of constant size or have sizes in Θ(n). As Theorem 11 states that Fair Correlation Clustering on forests is NP-hard, we hence know that this hardness evolves somewhere between the two extremes.

Relaxed Fairness

It might look like the hardness results for Fair Correlation Clustering are due to the very strict definition of fairness, which enforces clusters of a specific size on forests. However, in this section, we prove that even when relaxing the fairness requirements our results essentially still hold.

Definitions

We use the relaxed fairness constraint as proposed by Bera et al. [11] and employed for Fair Correlation Clustering by Ahmadi et al. [1]. For the following definitions, consider a set U colored by a function c : U → [k]. A partition of U is relaxed fair with regard to values p_i, q_i ∈ (0, 1] for i ∈ [k] if, in every cluster S and for every color i, the fraction of vertices of color i in S lies between p_i and q_i. Note that we require the p_i and q_i to be such that an exact fair solution is also relaxed fair. Further, we exclude setting p_i or q_i to 0, as this would allow clusters that do not include every color, which we do not consider fair.

Input: Graph G = (V, E), coloring c : V → [k], and values p_i, q_i ∈ (0, 1] for i ∈ [k].
Task: Find a relaxed fair partition P of V with regard to the p_i and q_i that minimizes cost(P).

While we use the above definition for our hardness results, we restrict the possibilities for the p_i and q_i for our algorithms. For α ∈ (0, 1], a partition is α-relaxed fair if it is relaxed fair with p_i = α · |c^{−1}(i)| / |U| and q_i = (1/α) · |c^{−1}(i)| / |U| for every color i ∈ [k].

Input: Graph G = (V, E), coloring c : V → [k], and α ∈ (0, 1].
Task: Find an α-relaxed fair partition P of V that minimizes cost(P).
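A small checker for the α-relaxed constraint follows, under our reading of the definition above (each cluster's color fraction within a factor of α of the global fraction); the function and variable names are our own, for illustration only.

from collections import Counter

def is_alpha_relaxed_fair(clusters, color, alpha):
    vertices = [v for S in clusters for v in S]
    n = len(vertices)
    overall = Counter(color[v] for v in vertices)
    for S in clusters:
        local = Counter(color[v] for v in S)
        for col, cnt in overall.items():
            fraction, target = local[col] / len(S), cnt / n
            # p_i = alpha * target and q_i = target / alpha, as defined above.
            if not (alpha * target <= fraction <= target / alpha):
                return False
    return True

For α = 1 this coincides with exact fairness, and any exactly fair clustering passes the check for every α ∈ (0, 1].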
Hardness for Relaxed Fairness The hardness result for exact fairness on paths, see Theorem 14, directly carries over to the relaxed fairness setting. This is due to it only considering instances in which there are exactly two vertices of each color. As any relaxed fair clustering still requires at least one vertex of every color in each cluster, this means that every relaxed clustering either consists of a single cluster or two clusters, each with one vertex of every color. Thereby, relaxing fairness makes no difference in these instances. Corollary 32. Relaxed Fair Correlation Clustering on paths is NP-hard, even when limited to instances with exactly 2 vertices of each color. Our other hardness proofs for relaxed fairness are based on the notion that we can use similar constructions as for exact fairness and additionally prove that in these instances the minimum-cost solution has to be exactly fair and not just relaxed fair. To this end, we require a lemma giving a lower bound on the intra-cluster cost of clusterings. Lemma 33. Let G = (V, E) be an n-vertex m-edge graph and P a partition of V with an inter-cluster cost of χ. Then, the intra-cluster cost of P is at least n 2 2|P| − n 2 − m + χ. If |S| = n |P| for all clusters S ∈ P, then the intra-cluster cost of P is exactly ψ = n 2 2|P| − n 2 −m+χ. Proof. We first prove the lower bound. We employ the Cauchy-Schwarz inequality, stating that for every ∈ N, x 1 , x 2 , . . . , x , and y 1 , y 2 Observe that we can write the intra-cluster cost ψ of P as By Cauchy-Schwarz, we have S∈P |S| 2 1 |P| · S∈P |S| 2 = n 2 |P| . This bounds the intra-cluster cost from below by ψ For the second statement, assume all clusters of P to be of size n |P| . Then, there are 1 2 · n |P| · n |P| − 1 pairs of vertices in each cluster. Thereby, we have We further show that no clustering with clusters of unequal size achieves the lower bound given by Lemma 33. Lemma 34. Let G = (V, E) be an n-vertex m-edge graph and P a partition of V with an inter-cluster cost of χ such that there is a cluster S ∈ P with |S| = n |P| + a for some a 0. Then, the intra-cluster cost of P is ψ Proof. If a = 0, the statement is implied by Lemma 33. So, assume a > 0. We write the intra-cluster cost as with ψ rest being the intra-cluster cost incurred by P \ {S}. By applying Lemma 33 on P \ {S}, we have Bringing the first summands to a common denominator of 2|P| − 2 yields ψ n 2 (|P| − 1) We then add 0 = − n 2 2|P| · 2|P |−2 2|P |−2 + n 2 2|P| and obtain Observe that as |P| > 1 and a = 0 this means that such a clustering never achieves the lower bound given by Lemma 33. In particular, this means that for fixed inter-cluster costs in minimum-cost fair clusterings in forests all clusters are of equal size. This way, we are able to transfer some hardness results obtained for exact fairness to relaxed fairness. Proof. We reduce from 3-Partition. We assume B 2 > 16p. We can do so as we obtain an equivalent instance of 3-Partition when multiplying all a i and B by the same factor, here some value in O(p). For every a i we construct a star of a i red vertices. Further, we let there be a star of p blue vertices. We obtain a tree of diameter 4 by connecting the center v of the blue star to all the centers of the red stars. Note that the ratio between blue and red vertices is 1 : B. We now show that there is a relaxed fair clustering P such that cost(P) if and only if the given instance is a yes-instance for 3-Partition. 
If we have a yes-instance of 3-Partition, then there is a partition of the set of stars into p clusters of size B, each consisting of three stars. By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain an exact fair partition, which is thus also relaxed fair. We first compute the inter-cluster cost. We call an edge blue or red if it connects two blue or red vertices, respectively. We call an edge blue-red if it connects a blue and a red vertex. All p − 1 blue edges are cut. Further, all edges between v (the center of the blue star) and red vertices are cut except for the three stars to which v is assigned. This causes 3p − 3 more cuts, so the inter-cluster cost is χ = 4p − 4. Each cluster consists of B + 1 vertices and B − 3 edges, except for the one containing v which has B edges. The intra-cluster cost is Combining the intra-and inter-cluster costs yields the desired cost of For the other direction, assume there is a relaxed fair clustering P such that cost(P) We prove that this clustering is not just relaxed fair but exactly fair. To this end, we first show |P| = p. Because each cluster requires one of the p blue vertices, we have |P| p. Now, let χ denote the inter-cluster cost of P. Note that |V | = p(B + 1) and |E| = p(B − 3) + 3p + p − 1 = p(B + 1) − 1. Then, by Lemma 33, we have Note that the lower bound is decreasing in |P|. If we had |P| p − 1, then As the inter-cluster cost χ is non-negative, we would thereby get cost(P) Figure 11 Exemplary path with a color ratio of 1 : 1 where there is a 2 3 -relaxed fair clustering of cost 3 (marked by the orange lines) and the cheapest exactly fair clustering costs 4. Thus, we have proven a = 3, which also gives χ r = 0 and χ = 4p − 4. So, not only do we have that cost(P) pB 2 −pB 2 + 7p − 7 but cost(P) = pB 2 −pB 2 + 7p − 7. In Equation 3 we see that for χ = 4p − 4 this hits exactly the lower bound established by Lemma 33. Hence, by Lemma 34, this implies that all clusters consist of exactly 1 blue and B red vertices and the clustering is exactly fair. As χ r = 0, all red stars are complete. Given that every red star is of size at least B 4 and at most B 2 , this means each cluster consists of exactly three complete red stars with a total number of B red vertices each and hence yields a solution to the 3-Partition instance. As the construction of the graph only takes polynomial time in the instance size and the constructed tree is of diameter 4, this implies our hardness result. In the hardness proofs in this section, we argued that for the constructed instances clusterings that are relaxed fair, but not exactly fair would have a higher cost than exactly fair ones. However, this is not generally true. It does not even hold when limited to paths and two colors in a 1 : 1 ratio, as illustrated in Figure 11. Because of this, we have little hope to provide a general scheme that transforms all our hardness proofs from Section 5 to the relaxed fairness setting at once. Thus, we have to individually prove the hardness results in this setting as done for Theorems 35 and 36. We are optimistic that the other hardness results still hold in this setting, especially as the construction for Theorem 13 is similar to the ones employed in this section. We leave the task of transferring these results to future work. Algorithms for Relaxed Fairness We are also able to transfer the algorithmic result of Theorem 23 to a specific α-relaxed fairness setting. 
We exploit that the algorithm does not really depend on exact fairness but on the fact that there is an upper bound on the cluster size, which allows us to compute respective splittings. In the following, we show that such upper bounds also exist for α-relaxed fairness with two colors in a ratio of 1 : 1 and adapt the algorithm accordingly. To compute the upper bound, we first prove Lemma 37, which analogously to Lemma 4 bounds the size of clusters but in uncolored forests. Using this lemma, with Lemma 38, we then prove an upper bound on the cluster size in minimum-cost α-relaxed fair clusterings for forests with two colors in ratio 1 : 1. pairs of vertices and m edges, none of which is cut by P 1 . In the worst case, P 2 cuts all of the at most n − 1 edges in the forest. It has one cluster of size |S| and one of size n − |S|, so Then, we have Note that the bound is increasing in n. As we have, n |S| + 3 and |S| > 4, this gives With the knowledge of when it is cheaper to split a cluster, we now prove that also for α-relaxed Fair Correlation Clustering there is an upper bound on the cluster size in minimum-cost solutions in forests. The idea is to assume a cluster of a certain size and then argue that we can split it in a way that reduces the cost and keeps α-relaxed fairness. Lemma 38. Let F be a forest with two colors in a ratio of 1 : 1. Let 0 < α < 1 and let α ∈ N be minimal such that 2α α ∈ N and 2α α > 4. Then, if P is a minimum-cost α-relaxed fair clustering on F , we have |S| < 4α α 2 for all S ∈ P. Proof. Assume otherwise, i.e., there is a cluster S with |S| 4α α 2 . Let b and r denote the number of blue and red vertices in S, respectively, and assume w.l.o.g. that b r. Because |S| 4α α 2 we have α 2 2α α|S| . Due to the α-relaxed fairness constraint, this yields b |S| 2α α|S| and thereby r b 2α α . Then, consider the clustering obtained by splitting offα blue and 2α α −α red vertices of from S into a new cluster S 1 and let S 2 = S \ S 1 . Note that we chooseα in a way that this is possible, i.e., that both sizes are natural numbers. As the cost induced by all edges with at most one endpoint in S remains the same and the cost induced by the edges with both endpoints in S decreases, as shown in Lemma 37, the new clustering is cheaper than P. As we now prove that the new clustering is also α-relaxed Fair, this contradicts the optimality of P. We first prove the α-relaxed fairness of S 1 . Regarding the blue vertices, we have a portion ofα α+ 2α α −α = α 2 in S 1 , which fits the α-relaxed fairness constraint. Regarding the red vertices, we have 2α α −α α+ 2α α −α = 1 − α 2 , which fits the α-relaxed fairness constraint as 0 < α < 1, so 1 − α 2 α 2 and 1 − α 2 = 2α−α 2 2α 1 2α . Now we prove the α-relaxed fairness of S 2 . The portion of blue vertices in S 2 is b−α r+b− 2α α , so we have to show that this value lays between α 2 and 1 2α . We start with showing the value is at least α 2 by proving α 2 · r + b − 2α α b −α. As S is α-relaxed fair, we have r 2b Similarly, we show the ratio is at most 1 2α by proving the equivalent statement of 2α(b −α) r + b − 2α α . As we assume r b, we have the first phase, we do not have to consider cutting more edges in this phase, because for the resulting splittings coloring we already have tracked a minimum inter-cluster cost. Hence, the only questions are whether a splitting is assemblable, i.e., whether its components can be merged such that it becomes an α-relaxed fair clustering, and, if so, what the cheapest way to do so is. 
Regarding the first question, observe that the assemblability only depends on the partition coloring of the splitting. Hence, it does not hurt that in the first phase we tracked only all possible partition colorings of splittings and not all possible splittings themselves. First, note that the coloring of a splitting may itself yield an α-relaxed fair clustering. We mark all such partition colorings as assemblable, taking time in O(n d 2 +1 ). For the remaining partition colorings, we employ the following dynamic program. Recall that the size of a partition coloring refers to the number of set colorings it contains (not necessarily the number of different set colorings). We decide assemblability for all possible partition colorings from smallest to largest. Note that each partition coloring is of size at least n d . If it is of size exactly n d , then there are no two set colorings that can be merged and still be of size at most d, as all other set colorings are of size at most d. Hence, in this case, a splitting is assemblable if and only if it is already an α-relaxed fair clustering so we have already marked the partition colorings correctly. Now, assume that we decided assemblability for all partition colorings of size i n d . We take an arbitrary partition coloring C of size i + 1, which is not yet marked as assemblable. Then, it is assemblable if and only if at least two of its set colorings are merged together to form an α-relaxed fair clustering. In particular, it is assemblable if and only if there are two set colorings C 1 , C 2 in C such that the coloring C obtained by removing the set colorings C 1 , C 2 from C and adding the set coloring of the combined coloring of C 1 and C 2 is assemblable. Note that C is of size i. Given all assemblable partition colorings of size i, we therefore find all assemblable partition colorings of size i + 1 by for each partition coloring of size i trying each possible way to split one of its set colorings into two. As there are at most i d 2 partition colorings of size i, this takes time in O(i d 2 · i · 2 d ). The whole dynamic program then takes time in O(n d 2 +1 · 2 d ) ⊆ O(n d 2 +d+1 ). It remains to answer how we choose the assembling yielding the minimum cost. In the algorithm for exact fairness, we do not have to worry about that as there we could assume that the Correlation Clustering cost only depends on the inter-cluster cost. Here, this is not the case as the α-relaxed fairness allows clusters of varying size, so Lemma 3 does not apply. However, recall that we can write the Correlation Clustering cost of some partition P of the vertices as S∈P |S|(|S−1|) 2 + 2χ, where χ is the inter-cluster cost. The cost hence only depends on the inter-cluster cost and the sizes of the clusters, which in turn depends on the partition coloring. To compute the cost of a splitting, we take the inter-cluster cost computed in the first phase for χ. Once more, we neglect decreasing inter-cluster cost due to the merging of clusters as the resulting splitting is also considered in the array produced in the first phase. By an argument based on the Cauchy-Schwarz Inequality, we see that merging clusters only increases the value of S∈P |S|(|S−1|) 2 as we have fewer but larger squares. Hence, the cheapest cost obtainable from a splitting which is itself α-relaxed fair is just this very clustering. If a splitting is assemblable but not α-relaxed fair itself, the sum is the minimum among all the values of the sums of α-relaxed fair splittings it can be merged into. 
This value is easily computed by not only passing down assemblability but also the value of this sum in the dynamic program described above and taking the minimum if there are multiple options for a splitting. This does not change the running time asymptotically and the running time of the second phase is dominated by the one of the first phase. Theorem 39. Let F be an n-vertex forest in which the vertices are colored with two colors in a ratio of 1 : 1. Then α-relaxed Fair Correlation Clustering on F can be solved in time in O(n 2d 2 +6d+4 · (d + 1) 4d ), where d = 4α α 2 andα ∈ N is minimal such that 2α α ∈ N and 2α α > 4. We are confident that Lemma 38 can be generalized such that for an arbitrary number of colors in arbitrary ratios the maximum cluster size is bounded by some function in α and the color ratio. Given the complexity of this lemma for the 1 : 1 case, we leave this task open to future work. If such a bound is proven, then the algorithmic approach employed in Theorem 39 is applicable to arbitrarily colored forests. Similarly, bounds on the cluster size in the more general relaxed fair clusterings can be proven. As an intermediate solution, we note that for Relaxed Fair Correlation Clustering we can employ the approach used for α-relaxed Fair Correlation Clustering by setting α large enough to contain all allowed solutions and filtering out solutions that do not match the relaxed fairness constraint in the assembling phase. We do not give this procedure explicitly here as we suspect for these cases it is more promising to calculate the precise upper bound on the maximum cluster size and perform the algorithm accordingly instead of reducing to the α-relaxed variant. Approximations So far, we have concentrated on finding an optimal solution to Fair Correlation Clustering in various instances. Approximation algorithms that do not necessarily find an optimum but near-optimum solutions efficiently are often used as a remedy for hard problems, for example, the 2.06-approximation to (unfair) Correlation Clustering [17]. In this section, we find that just taking any fair clustering is a quite close approximation and the approximation becomes even closer to the optimum if the minimum size of any fair cluster, as given by the color ratio, increases. Formally, a problem is an optimization problem if for every instance I there is a set of permissible solutions S(I) and an objective function m : S(I) → R >0 assigning a score to each solution. Then, some S ∈ S(I) is an optimal solution if it has the highest or lowest score among all permissible solutions, depending on the problem definition. We call the score of this solution m * (I). For example, for Fair Correlation Clustering, the instance is given by a graph with colored vertices, every fair clustering of the vertices is a permissible solution, the score is the Correlation Clustering cost, and the objective is to minimize this cost. 8 An α-approximation an optimization problem is an algorithm that, for each instance I, outputs a permissible solution S ∈ S(I) such that 1 α m(S) m * (I) α. For Fair Correlation Clustering in particular, this means the algorithm outputs a fair clustering with a cost of at most α times the minimum clustering cost. APX is the class of problems that admit an α-approximation with α ∈ O(1). 
A polynomialtime approximation scheme (PTAS), is an algorithm that for each optimization problem instance as well as parameter ε > 0 computes a (1 + ε)-approximation for a minimization problem or a (1 − ε)-approximation for a maximization problem in time in O(n f (ε) ), for some computable function f depending only on ε. We use PTAS to refer to the class of Now, the approximation factor is still decreasing in d and converges to 1 as d → ∞. However, it is positive and defined for all d 2. For d = 2 we obtain 6n−4 2n−4 < 3. Therefore, we have a 3-approximation to Fair Correlation Clustering on trees. Nevertheless, our results for forest suffice to place Fair Correlation Clustering in APX and even in PTAS. First, for d 5 we have a 5-approximation to Fair Correlation Clustering on forests. If d 4, a minimum-cost fair clustering is found on the forest in polynomial time by Theorem 23. Hence, Fair Correlation Clustering on forests is in APX. Next, recall that the larger the minimum fair cluster size d, the better the approximation becomes. Recall that our dynamic program for Theorem 23 has better running time the smaller the value d. By combining these results, we obtain a PTAS for Fair Correlation Clustering on forests. This contrasts Fair Correlation Clustering on general graphs, as even unfair Correlation Clustering is APX-hard there [16] and therefore does not admit a PTAS unless P = NP. 1 + ε, it suffices to return any fair clustering by Theorem 40. Otherwise, we have d 5 and It follows that, d − 5 + dε − 5ε < d − 1, which simplifies to d < 4 ε + 5. Hence, by Theorem 23, we find a minimum-cost fair clustering in time in O(n f (ε) ) for some computable function f independent from n. In all cases, we find a fair clustering with a cost of at most 1 + ε times the minimum Correlation Clustering cost and take time in O(n f (ε) ), giving a PTAS. To show that f is in fact bounded by a polynomial in 1 /ε, we only need to look at the third case (otherwise f is constant). The bound d < 4 ε + 5 and d = k i=1 c i together imply the the number of colors k is constant w.r.t. n. Under this condition, the exponent of the running time in Theorem 23 is a polynomial in d and thus in 1 /ε.
2023-02-23T06:42:26.815Z
2023-02-22T00:00:00.000
{ "year": 2023, "sha1": "c73a547a5dc848c46b20df669e42d73155f62743", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c73a547a5dc848c46b20df669e42d73155f62743", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
229921955
pes2o/s2orc
v3-fos-license
End-stage heart failure patients should be treated instantly despite a pandemic with all-time available technology to ensure best outcomes Abstract Since the earliest cases of coronavirus disease 2019 (COVID-19) infection were reported, our care delivery systems have been reorganized and challenged in unprecedent ways, specifically the cardiovascular community. COVID-19 poses a challenge for heart transplantation, affecting donor selection, immunosuppression, and posttransplant management. Left Ventricular Assist Device (LVAD) therapy is currently a viable option for patients with end-stage heart failure as a bridge to heart transplantation or destination therapy. Here, we present a therapeutic strategy for the management of acute HF with Intermacs profiles from 1 to 4, with or without Covid-19 infection, exemplified by serie of patients presenting with severe HF and successfully treated by LVAD therapy during the spread of the Covid-19 pandemic and the French national lockdown. This experience has shown that we still have the capacity to provide the right therapy for the right disease to the right patient. LVAD implantation seems to be the treatment of choice for advanced HF due to the lack of healthy donor hearts for cardiac transplantation. Covid or non-Covid context, we have to take care of our patients with end-stage HF the best we can. Since the earliest cases of coronavirus disease 2019 (COVID-19) infection were reported, our care delivery systems have been reorganized and challenged in unprecedent ways, specifically the cardiovascular community. COVID-19 poses a challenge for heart transplantation, affecting donor selection, immunosuppression, and posttransplant management. Left Ventricular Assist Device (LVAD) therapy is currently a viable option for patients with end-stage heart failure as a bridge to heart transplantation or destination therapy. Here, we present a therapeutic strategy for the management of acute HF with Intermacs profiles from 1 to 4, with or without Covid-19 infection, exemplified by serie of patients presenting with severe HF and successfully treated by LVAD therapy during the spread of the Covid-19 pandemic and the French national lockdown. This experience has shown that we still have the capacity to provide the right therapy for the right disease to the right patient. LVAD implantation seems to be the treatment of choice for advanced HF due to the lack of healthy donor hearts for cardiac transplantation. Covid or non-Covid context, we have to take care of our patients with end-stage HF the best we can. Coronavirus disease 2019 (COVID-19) is a global pandemic that represents the biggest public health challenge in the 21st century. Since the earliest cases of COVID-19 infection were reported, our care delivery systems have been reorganized and challenged in unprecedent ways, specifically the cardiovascular community. 1,2 COVID-19 poses a challenge for heart transplantation, affecting donor selection, immunosuppression, and post-transplant management. 3 Most clinicians have noted a decline in the number of patients seeking medical care for non-COVID-19-related causes, which has raised concerns for significant collateral damage in a lot of patients with cardiac disease and, in particular, patients with heart failure (HF), who are tenuous at baseline. 
4 End-stage HF patients were particularly affected not only by the increased risk of acquiring COVID-19 but also by transplant volume reduction to meet intensive care unit (ICU) bed, staffing, and medical equipment needs of the majority non-transplant population. The reduction of organ donors during lockdown period also contributed to their risk. 5 However, advanced HF continues to be a life-threatening condition carrying a high mortality and morbidity, but which may become far worse during a pandemic. Left ventricular assist device (LVAD) therapy is currently a viable option for patients with end-stage HF as a bridge to heart transplantation or destination therapy. 6 *Corresponding author. Marie-Cécile Bories, Department of Cardiology, Hôpital Européen Georges Pompidou, 20 rue Leblanc, 75015 Paris. Tel.: +33652249418, Email: marie-cecile.bories@aphp.fr Published on behalf of the European Society of Cardiology. V C The Author(s) 2020. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com Here, we present a therapeutic strategy for the management of acute HF with Intermacs profiles from 1 to 4, with or without COVID-19 infection, exemplified by series of patients presenting with severe HF and successfully treated by LVAD therapy (HeartMate 3, Abbott) during the spread of the COVID-19 pandemic and the French national lockdown. Cases series During the period from 2 March through 6 June, we identified six consecutive critically ill patients with end-stage left monoventricular HF in our institution from Intermacs 1 to 4. Their clinical characteristics are shown in Table 1. Patient 1 A 51-year-old man with ischaemic cardiomyopathy had coronary artery bypass surgery in 2012. He was diagnosed with systolic severe HF with large anterior akinesia and left ventricular ejection fraction (LVEF) at 25%. He developed signs of congestive HF in 2019 with severe pulmonary hypertension (PH). He was Intermacs 4 and despite medical optimization and cardiac rehabilitation, his PH increased to 90 mmHg. He was admitted in February 2020 because of worsening dyspnoea with acute renal failure and hepatic cytolysis. The results of right heart catheterization showed extremely elevated pulmonary vascular resistance (PVR), with low cardiac output (CO) at 1.8 L/min/m 2 . Vasoreactivity testing with inhaled nitric oxide showed partially reversible PH based on decreased PVR from 8.5 to 5.1 Wood Units. After failure of all attempts to stabilize his condition with optimal medical therapy, and because of his PH, we decided to implant an LVAD promptly as a bridge to candidacy. LVAD implantation was performed on 2 March 2020. After the surgery, he needed concomitant transitory percutaneous right ventricular (RV) assistance for 5 days. He had a septic shock on 11 March 2020 cured by antibiotics. He was discharged one month later on 2 April 2020. Today, he is home, well, without signs of COVID-19 infection and with normal pulmonary arterial pressure, awaiting a heart transplant. Patient 2 A 33-year-old woman had suffered from a transmural anterior wall myocardial infarction in 2019. We performed a coronary angiography, which showed a local dissection in the left anterior descending artery without thrombus nor stenosis. 
The coronary arteries were otherwise normal. During the following months, she developed severe symptoms of HF. The echocardiography revealed severe dilated cardiomyopathy (DCM) with LVEF at 25%, a low CO, and mild PH, but her RV function was normal. She was Intermacs 4 with three acute decompensations during 1 year. She came to the ICU at the beginning of March 2020 because of severe shortness of breath and postprandial abdominal pain. We decided to do an urgent LVAD implantation as a bridge to transplantation. Our decision was based on our fear not to have a graft on the right time if we decided to transplant first. On the other hand, we wanted to get her home as soon as possible to minimize her potential exposure to COVID-19, while staying in hospital where COVID-19 patients are being admitted every day. Abbott's HeartMate 3 TM LVAD implantation was performed on 13 March 2020, with simple post-operative course. At 6 months, she is still perfectly well at home. Patient 3 A 39-year-old man was diagnosed with idiopathic DCM in 2013 following resuscitation from an out-of-hospital cardiac arrest caused by ventricular fibrillation. Troponin level was persistently low at 0.41 ng/L after this cardiac event and cardiac magnetic resonance showed dilated cardiomyopathy with late patchy contrast enhancement on inferolateral and septal wall without confirm or rule out the diagnosis of acute myocarditis. At that time, we did not have any histologic proof of the aetiology of his cardiomyopathy. The patient was treated with a dual chamber implantable cardioverter-defibrillator (ICD) and usual medications with a good response for three years. He developed ventricular tachycardia (VT) in 2016, and conduction disturbances with complete atrioventricular block. Genetic testing for DCM was negative, as was the standard exhaustive aetiological assessment including infectious diseases. Despite optimal medical treatment and upgrade of He was admitted to the ICU in April 2020 for cardiogenic shock. His COVID-19 polymerase chain reaction (PCR) was negative. Cardiac evaluation revealed severe left ventricular hypokinesis, low CO, and good RV function. Despite inotropic support, the patient's liver enzymes and creatinin increased greatly, requiring higher dose of inotropes. In order to achieve haemodynamic stability, a decision was made to give LVAD support as a bridge to transplantation. On Day 9, he was taken to the operating room (OR) for implantation of a HeartMate 3 TM LVAD. Anatomopathological findings found giant cell myocarditis (Figure 1), which we decided not to treat with immunosuppressive drugs, given the risk for infection (including a potential severe form of COVID-19 infection). Over the next 2 days, dobutamin was weaned and the patient was discharged to the cardiac rehabilitation centre on post-operative day 22. The patient is currently well at home and is awaiting a transplant. Patient 4 A 57-year-old man presented with out-of-hospital cardiac arrest in 2015. Echocardiography showed DCM with LVEF of 45% and good RV function. Genetic testing showed a mutation in the MYH7 gene, which is a pathogenic variant for hypertrophic cardiomyopathy or dilated cardiomyopathy. He had two atrial fibrillation ablations and VTablation in 2016. His LVEF decreased to 35% in 2019. He presented with fever and cough at the emergency department on 21 March 2020. COVID-19 was diagnosed in the patient based on rapid test-polymerase chain reaction (RT-PCR) testing. 
Chest computed tomography (CT) revealed multiple patchy ground-glass opacities in both lower lobes. The initial treatment was supportive, and he was discharged from hospital 2 days later. Almost 1 month later, on 12 April 2020, he called emergency medical services for shortness of breath. He was admitted to the ICU immediately with acute respiratory distress and arterial hypotension. The RT-PCR test for COVID-19 was still positive. Point-of-care cardiac ultrasonography revealed severely depressed left ventricular function (10%), while he was receiving dobutamine. We did not find a pulmonary embolism or novel infection. Ultra-sensitive cardiac troponin was persistently low. Acute decompensation of cardiomyopathy was irreversible, with a lot of non-sustained VT, and persistent kidney failure. We decided, together with infectiologists, to implant a HeartMate 3 TM at least 40 days after the first symptoms of COVID-19 to avoid the spread of infection in the OR to our medical staff despite personal safety protection measures. The first negative result for RT-PCR was on 21 April 2020, 1 month after his first symptoms. The surgery was scheduled on 27 April 2020. He was discharged 30 days later, after surgical pericardial and pleural drainage at post-operative day 9. The anatomopathological findings showed hypertrophic cardiomyopathy with no signs of acute myocarditis. Furthermore, he's on continued LVAD support with persistent low LVEF, confirming irreversible HF. Patient 5 A 59-year-old man was diagnosed with idiopathic DCM in 2012. He became a frequent flyer in 2019 despite optimal medical management and cardiac resynchronization therapy, and in December, he had his first cardiogenic shock event requiring dobutamin. He went to the cardiac rehabilitation centre in January and then came to our centre for pre-operative evaluation for heart transplantation without contraindication. Unfortunately, he was admitted for cardiogenic shock on 15 April 2020 and had atrial fibrillation. Despite atrioventricular node ablation, he remained in refractory shock. He was listed for heart transplantation, but after waiting for 10 days without call for a graft, we decided to perform a HeartMate 3 TM implantation on 4 May 2020 as a bridge to transplantation. After a few days on IV dobutamin, he had a simple postoperative course and left our institution for cardiac rehabilitation 21 days later. He had a pleural effusion many weeks later which was drained, and today he's perfectly well, with ongoing LVAD support, and without COVID-19 infection. End-stage heart failure patients P35 Patient 6 A 66-year-old man presented with ischaemic cardiomyopathy. He had an anterior myocardial infarction in 2018 with many clinical sequelae. His LVEF was 20% with persistent severe symptoms of HF despite medical treatment and resynchronization therapy. His past medical history included a stroke in the middle cerebral artery territory, and another one in the cerebellar territory whose origin was cardioembolic, but with few clinical sequelae. He had severe PH (pulmonary arterial pressure at 81 mmHg and SVR 5 Wood Units). An LVAD was planned as bridge to decision. Unfortunately, on 4 June 2020, he developed acute pulmonary oedema and cardiogenic shock. Venoarterial extracorporeal membrane oxygenation (VA-ECMO) was inevitable to stabilize his condition. He received a HeartMate 3 TM on 9 June 2020. The post-operative course was complicated: RV failure, tracheotomy, pleural drainage, multiple sepsis, and paresis acquired in the ICU. 
He left the ICU after being weaned of his tracheotomy 2 months later, and is currently in ambulatory state on LVAD support in our cardiac rehabilitation centre. Discussion This single-centre case series describes six successful HeartMate 3 TM implantations during the French national lockdown due to the COVID-19 pandemic. All patients survived, and five were discharged within 30 days following implantation. One patient contracted COVID-19 infection before LVAD implantation and developed cardiogenic shock within 21 days. Though some cardiac injuries are reversible in the context of COVID-19 infection, preexisting cardiac conditions may be exacerbated by COVID-19 and result in severe chronic HF. 7 All patients tested negative by COVID-19 RT-PCR on nasopharyngeal swab test preoperatively to authorize the surgery. The five other patients did not contract the virus after implantation with standard precautions of care. Social distancing and strict precautions applied by caregivers has been applied for those patients considered vulnerable. For some doctors, the need to protect caregivers and preserve critical care capacity may affect their decisions. For everyone, radical transformation of the healthcare system will affect the ability to maintain high-quality care. 8 Our major focus has been to prevent high-risk patients with chronic disease from infection. But our experiences show that patients with serious chronic disease, such as HF, may have changed their behaviour if symptoms occurred to avoid hospitalizations. Amplifying the messages that those with chronic conditions should practice social distancing and stay home may have confused and frightened patients with HF, leading them to delay evaluation for advanced congestive or low output symptoms, and result in worse outcomes. 9 In a recent Danish report, the admission rates for worsening HF and the incidence rates of new-onset HF declined by 30% in Denmark after the country's lockdown. However, the 15-day mortality rate for admitted HF patients with COVID-19 diagnosis was 37%. 10 Additionally, many centres have inactivated the heart transplant waiting lists to meet ICU bed, staffing, and medical equipment needs of the majority non-transplant population. 11 Furthermore, a lack of donors was observed and seems to be multifactorial: safety measures applied to organ procurement organizations, mandatory PCR test, CT scans to exclude possible asymptomatic COVID carriers, less car accidents, and less non-COVID patients admitted to ICUs due to confinement leads to less potential organ donors. Answers to many questions remain unclear, including still limited available knowledge of the virus and its impact on heart transplant recipients compared to patients on LVAD support. A theory about the potential protective effect of immunosuppression has been reported, mitigating the 'cytokine storm' related to COVID-19 poor outcomes. 12 Dexamethasone may reduce mortality for patients receiving either invasive mechanical ventilation or oxygen alone in a recent study. 13 In the other hand, in a recent report of 28 heart transplant recipients with COVID-19, 79% were hospitalized, 25% required mechanical ventilation, and 25% died, suggesting poor outcomes. 14 Conclusion In this unprecedented context, our main goal was to evaluate ways to treat people with non-COVID-19-related disease, especially in patients with end-stage HF. 
Left ventricular assist device implantation has been considered in our centre as a first choice to treat patients to get at-risk patients home and out of the hospital, minimizing their exposure to COVID-19. Left ventricular assist device therapy presents several advantages, especially in the context of a pandemic; it is always available, which allowed us to anticipate and plan implants in the OR, and secure ICU beds. As well, by implanting earlier (Intermacs 4), we improved patient status by stabilizing their condition and keeping them safe at home. We also may have been able to reduce hospital length-of-stay as patients' physical condition at baseline was associated with less complications. This experience has shown that we still have the capacity to provide the right therapy for the right disease to the right patient. Left ventricular assist device implantation seems to be the treatment of choice for advanced HF due to the lack of healthy donor hearts for cardiac transplantation. COVID or non-COVID context, we have to take care of our patients with end-stage HF the best we can.
2020-12-24T09:04:38.460Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "1ddbbb0cf40e15a23de89a64a52c777f80b4fd0a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/eurheartj/suaa183", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "48a041f8173b94e3a6cb745860ba9c3b805f6a43", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201641462
pes2o/s2orc
v3-fos-license
Cardy-like asymptotics of the 4d $\mathcal{N}=4$ index and AdS$_5$ blackholes Choi, Kim, Kim, and Nahmgoong have recently pioneered analyzing a Cardy-like limit of the superconformal index of the 4d $\mathcal{N}=4$ theory with complexified fugacities which encodes the entropy of the dual supersymmetric AdS$_5$ blackholes. Here we study the Cardy-like asymptotics of the index within the rigorous framework of elliptic hypergeometric integrals, thereby filling a gap in their derivation of the blackhole entropy function, finding a new blackhole saddle-point, and demonstrating novel bifurcation phenomena in the asymptotics of the index as a function of fugacity phases. We also comment on the relevance of the supersymmetric Casimir energy to the blackhole entropy function in the present context. Introduction It has been a long-standing challenge in AdS 5 /CFT 4 to reproduce the entropy of the charged, rotating, BPS, asymptotically AdS 5 blackholes of [1][2][3][4][5] from a microscopic counting of BPS states in the 4d N = 4 CFT. Several attempts in this direction were made in the past fifteen years or so (e.g. [6][7][8][9][10]), leading to various new lessons for holography and superconformal field theory (SCFT), but not to the desired microscopic count. In particular, an index was devised in [6,11] for counting the BPS states of general 4d SCFTs. The index counts all of the states-in the radial quantization of the SCFT-that are annihilated by a chosen supercharge. We adopt conventions in which such states satisfy the "BPS condition" ∆ − J 1 − J 2 − 3 2 r = 0, where ∆, J 1 , J 2 , r are the quantum numbers of the 4d N = 1 superconformal group SU(2, 2|1). The index I(p, q, u k ) := Tr (−1) F eβ (∆−J 1 −J 2 − 3 2 r) p J 1 + r 2 q J 2 + r 2 k u q k k , (1.1) is thus independent ofβ, but it does depend on the spacetime fugacities p, q, as well as the flavor fugacities u k associated with flavor quantum numbers q k commuting with the supercharge. In the case of the N = 4 theory, the SU(4) R-symmetry of the N = 4 superconformal algebra decomposes into SU(3)×U(1) r , so there is an SU(3) "flavor" symmetry group commuting with the chosen supercharge; hence there are three q k with 3 k=1 q k = 0, and three u k satisfying 3 k=1 u k = 1. It is customary to define y k := (pq) 1/3 u k and Q k := q k + r/2. Then, dismissingβ, we can rewrite the index of the N = 4 theory as I(p, q, y 1,2,3 ) = Tr (−1) F p J 1 q J 2 y Q 1 1 y Q 2 2 y Q 3 3 . (1.2) This index was computed at finite rank for the U(N ) N = 4 theory in the original paper [6]. Then, in an initial attempt to make contact with holography, the large-N limit of the index was evaluated for real-valued fugacities and was seen to be O(N 0 ); the result perfectly matched the index of the KK supergravity multi-particle states in the dual AdS 5 theory, but clearly could not account for the O(N 2 ) entropy of the bulk supersymmetric AdS 5 blackholes [6]. For some time this negative result was interpreted as an indication that the index does not encode the bulk blackhole microstates. Very recently it has been discovered by Choi, Kim, Kim, and Nahmgoong (CKKN) [12], and independently by Benini and Milan [13], that allowing the five fugacities in the index to take complex values one can achieve the desired O(e N 2 ) behavior in the large-N limit of the index. 
Benini and Milan have succeeded in directly obtaining the AdS 5 blackhole entropy function in the large-N limit of the index [13], while CKKN took a different route and derived the entropy function in a double-scaling-Cardy-like as well as large-N -limit [12,14]. In the present paper we derive the entropy function in a Cardy-like limit of the index at finite rank; although our analysis is closely related to that of CKKN [14], ours is more analogous to the Cardy-formula [15] derivations of blackhole entropy in AdS 3 /CFT 2 (e.g. [16][17][18]) where the central charge is kept fixed. The study of the Cardy-like asymptotics of 4d superconformal indices had some history prior to [14], but was again mostly limited to real-valued fugacities (e.g. [19][20][21][22][23][24]). The idea that blackhole microstate counting requires complex-valued fugacities in the N = 4 index was not properly appreciated until the recent work of Hosseini, Hristov, and Zaffaroni (HHZ) [25]. This work provided the impetus for the later investigations of CKKN [12,14] and Benini-Milan [13]. HHZ started from the supergravity side and bridged half-way towards the CFT by presenting a "grand-canonical" functional-henceforth the HHZ functional-from which a Legendre transform gives the micro-canonical entropy of the AdS 5 blackholes; the remaining challenge was to extract the HHZ functional in an appropriate asymptotic regime from the index. In particular, it was understood by HHZ [25] (based on recent lessons from AdS 4 /CFT 3 [26,27]) that complexified fugacities are needed in the index in order to make contact with the grand-canonical functional of the AdS 5 blackholes. As alluded to above, CKKN [14] and independently Benini and Milan [13] have recently completed the bridge between the CFT and the bulk by deriving the HHZ functional through asymptotic analysis of the N = 4 theory index, the first group in a double-scaling limit and the second group in a large-N limit. In the present paper we analyze the Cardy-like asymptotics of the N = 4 theory index with complexified fugacities using the rigorous machinery of elliptic hypergeometric integrals [28][29][30][31] in various Cardy-like regimes of parameters where the flavor fugacities approach the unit circle and the spacetime fugacities approach 1. In particular, we fill a gap in the CKKN derivation of the HHZ functional in this limit by showing that the eigenvalue configuration they chose in their asymptotic analysis of the matrix-integral expression for the index is indeed the dominant configuration in the regime of parameters pertaining to the blackhole saddle-point they considered. Moreover, we discover a new blackhole saddle-point in a differ-ent regime of parameters, corresponding to fugacities that are complex conjugate to those at the CKKN saddle-point. We present intuitive arguments suggesting that no other blackhole saddle-points exist in the Cardy-like limit. We also demonstrate interesting dependence of the qualitative behavior of the Cardy-like asymptotics of the index on the complex phases of the fugacities. In the rest of this introduction we give a sketchy account of the asymptotic analysis extracting the blackhole entropy function from the appropriate Cardy-like limit of the superconformal index of the N = 4 theory. 
The main body of the paper starts in Section 2 where we elaborate on the sketchy derivation of the present section; we study the Cardy-like asymptotics of the N = 4 theory index with all its fugacities complexified, clarifying-and addressing a gap in-the CKKN derivation of the HHZ functional. A thorough enough understanding of the asymptotics of the index in different Cardy-like regimes of parameters results in that section which reveals a second blackhole saddle-point in a regime complementary to that of CKKN, and moreover allows us to argue intuitively that no other relevant saddlepoints exist. In Section 3 we keep the spacetime fugacities real-valued, and demonstrate novel bifurcation phenomena in the asymptotics of the index as a function of the flavor-fugacity phases. Section 4 discusses the relation between the Hamiltonian superconformal index and the Lagrangian index computed through path-integration; the two differ by a Casimir-energy factor which is argued to be irrelevant to the blackhole entropy function in the present context. Finally, Section 5 discusses the important open ends of the present work. Outline of the CKKN derivation in the elliptic hypergeometric language We now present an outline of the CKKN derivation [14] of the HHZ functional [25], translated to the language of elliptic hypergeometric integrals. More precisely, the problem we consider differs from that of [14] in two respects: • while [14] considered the N = 4 theory with U(N ) gauge group, we consider the SU(N ) theory-the details are rather similar and the end results are related via N 2 → N 2 − 1 shifts; • while in [14] a double-scaling-Cardy-like as well as large-N -limit is taken to simplify the analysis, here in analogy with the Cardy-formula derivations of blackhole entropy in AdS 3 /CFT 2 we keep N finite and only take a Cardy-like limit. The special function as the starting point The superconformal index of the SU(N ) N = 4 theory is given by the following elliptic hypergeometric integral (see e.g. [32]): with the unit-circle contour for the z j = e 2πix j while N j=1 z j = 1, and with p, q, y k strictly inside the unit circle such that 3 k=1 y k = pq. The two special functions (·; ·) and Γ(·) are respectively the Pochhammer symbol and the elliptic gamma function [33]: The integral expression gives the index as a meromorphic function of p, q, y k in the domain 0 < |p|, |q|, |y k | < 1. A contour deformation can presumably allow meromorphic continuation of the index to 0 < |p|, |q| < 1, y k ∈ C * (c.f. [34]). Asymptotic analysis in the limit encoding blackholes The Cardy-type limit analyzed prior to the work of CKKN [14] was of the form p, q, y k → 1; more precisely, it was what in the mathematics literature is referred to as the hyperbolic limit of the elliptic hypergeometric integral [31,35]. CKKN considered instead limits of the type p, q → 1, y i → e iθ i , with θ i / ∈ 2πZ: they correctly recognized that giving finite (non-vanishing) phases to the flavor fugacities can obstruct the bose-fermi cancelations 1 occurring in the hyperbolic limit. For future reference we define σ, τ, T k through p = e 2πiσ , q = e 2πiτ , y k = e 2πiT k , and write the appropriate limit explicitly as the CKKN limit: |σ|, |τ |, ImT k → 0, with τ σ ∈ R >0 , ReT k fixed, and Imτ, Imσ > 0. Note that the "balancing condition" 3 k=1 y k = pq implies 3 k=1 T k − σ − τ ∈ Z, and that the restriction Imτ, Imσ > 0 keeps us in the domain of meromorphy of the index. 
1 A similar obstruction mechanism is at work in the AdS3/CFT2 context, where the entropy of the AdS3 blackholes is derived from a Cardy-like limit of the CFT2 elliptic genus χ(q, y): the limit q, y → 1 does not encode the bulk blackholes, but the limit q → 1, y → e iθ with θ / ∈ 2πZ does. However, note that while in the AdS3/CFT2 context q can be kept real, in AdS5/CFT4 the spacetime fugacities p, q should take off the real line to meet the blackhole saddle-points. See [13,26,27] for related discussions of "I-extremization" in the large-N analysis. The asymptotic analysis of the integral (1.3) now proceeds as follows. As will be explained in Section 2, the leading asymptotics comes from the elliptic gamma functions Γ(·), so the Pochhammer symbols (·; ·) and the N ! in the pre-factor can be neglected. The required estimate, reviewed in Section 2, follows from Proposition 2.11 of Rains [31]: for |τ |, |σ| → 0, with Imτ, Imσ > 0, and τ σ ∈ R >0 , x ∈ R. Here κ(·) is the continuous, odd, piecewise cubic 2 , periodic function In order to apply the estimate (1.6) to the gamma functions in (1.3) we have to identify the phase of the arguments with 2πx; then, for instance, we can apply (1.6) to the gamma function in the numerator of the integrand of (1.3) by identifying x with ReT k ± (x i − x j ). This way we can simplify (1.3) to where κ(A ± B) stands for κ(A + B) + κ(A − B). It only remains to evaluate the asymptotics of the integral (1.8). Note that we are assuming Im(τ σ) = 0; this corresponds to complexifying the "temperature" as explained below. When Im(τ σ) = 0 the integrand of (1.8)-or already the RHS of (1.6)-would be a pure phase, and not sufficient to describe the exponential growth of the blackhole microstates. The Im(τ σ) = 0 case is therefore not directly relevant to the AdS 5 blackhole physics, but it exhibits some interesting asymptotic bifurcation phenomena that are discussed in Section 3. The last step of the asymptotic analysis of the index involves arguing that in the appropriate range of parameters the dominant small-|τ |, |σ| configuration in (1.8) is x = 0. CKKN presented [14] numerical evidence for this in the N = 2 case, and left it as a conjecture that the same is true for N > 2. In Section 2 we will prove that for the range of parameters relevant to the AdS 5 blackholes (e.g. for Im(τ σ) > 0 and −1 < ReT 1,2 , −1 − ReT 1 − ReT 2 < 0) their conjecture is correct. Hence the asymptotics of the index becomes 4 log I(p, q, (1.10) The right-hand-side is a nonanalytic function of the ReT k , manifestly invariant under ReT k → ReT k + 1 as it should be. To match the grand-canonical functional of HHZ [25] we now pick a particular chamber in the parameter-space so that an analytic expression can be written down. Specifically, assuming Im(τ σ) > 0, going into the chamber −1 < ReT 1,2,3 < 0 with ReT 3 = −1 − ReT 1 − ReT 2 , we can simplify 3 k=1 κ(ReT k ) to 6ReT 1 ReT 2 ReT 3 , and arrive at Analytic continuation of (1.11) to complex T k (i.e. replacing every ReT k with T k ) allows recovering the subleading terms in the CKKN limit and connecting with the complex HHZ functional [25]: So far in this subsection we have been essentially rephrasing the developments due to CKKN [14]. 
One of the novel contributions of the present paper is to demonstrate in Section 2 that when Im(τ σ) < 0 another chamber with 0 < ReT 1,2,3 < 1 and ReT 3 = 1 − ReT 1 − ReT 2 yields the asymptotics (1.11), this time with Legendre transform and blackhole entropy Thinking of the index (1.2) as the generating function of the degeneracies d(J 1,2 , Q 1,2,3 ) of the BPS states 5 in the N = 4 theory, methods of elementary analytic combinatorics can be used to extract the large-J 1,2 , Q 1,2,3 asymptotics of d(J 1,2 , Q 1,2,3 ) from the Cardy-like asymptotics of the index. The CKKN limit of the index encodes the degeneracy of the BPS states as of the bulk AdS 5 blackholes is satisfied [14]. 4 Compare with Eq. (2.34) of CKKN [14]; note that 2πiT Although at first glance it appears that because of the (−1) F factor in it the index (1.2) counts the number of bosonic states minus the number of fermionic states, as argued in [13], on the blackhole saddlepoints essentially all the states are bosonic, so the index counts a degeneracy. The degeneracies can be obtained from the generating function through with all the contours slightly inside the unit circle; note that y 3 is not independent, so is not integrated over on the RHS (c.f. Section 5 of [13]). The asymptotic degeneracy can be obtained using a saddle-point evaluation of the integral on the right-hand side. Using the Cardy-like asymptotics in (1.12), the result for the asymptotic entropy S( The subscript "ext" on the RHS means picking its extremized value on the saddle-point. The extremization problem was addressed for the Imτ σ > 0 case by HHZ [25], but was made completely explicit and analytic by CKKN [14] (and independently in Appendix B of [36] by Cabo-Bizet, Cassani, Martelli, and Murthy), who found the blackhole saddle-point at and giving the entropy which thanks to the charge relation (1.13) can be written in the alternative form Both of the relations (1.17), (1.18) correctly reproduce the Bekenstein-Hawking entropy of the BPS AdS 5 blackholes of [1][2][3][4][5] in the scaling limit of CKKN, upon using the AdS/CFT dictionary , with AdS 5 , G AdS 5 respectively the radius and the Newton constant of the bulk AdS 5 . In Section 2 we show that (1.11) is valid also when 0 < ReT 1,2,3 < 1, Imτ σ < 0, though this time with T 1 + T 2 + T 3 − τ − σ = +1, and find a new blackhole saddle-point at with the same entropy S as that of the CKKN saddle-point 6 . We moreover argue that besides the two just described-having complex conjugate fugacities p, q, y 1,2,3 -no other blackhole saddle-points exist in the Cardy-like asymptotics of the N = 4 theory index. Final remarks A remaining gap for unequal Q k . A rather serious gap in the above derivation is revealed upon closer inspection of the critical T k in (1.16) and (1.19): while our asymptotic analysis is valid only in the limit ImT k → 0, the blackhole saddle-points have nonzero ImT k unless Q 1 = Q 2 = Q 3 . It is therefore only in the special case with equal-or approximately equal-charges that the above derivation (augmented with the refinements of Section 2) is satisfactory. CKKN assumed in a leap of faith [14] that the asymptotics (1.12) remains valid away from the limit ImT k → 0, and thus the blackhole entropy derivation can be extended to the general case with unequal charges. In Section 2 we present a partial justification for this extrapolation; the rigorous justification is beyond the scope of the present paper, and its absence constitutes the most important open end of this work. 
Cardy-like versus large-N. The above derivation extracts the AdS 5 blackhole entropy from a "high-temperature" (Cardy-like) limit of the 4d superconformal index at finite N . This is analogous to how the classic papers of Strominger-Vafa [16], BMPV [17], and Strominger [18] derived the Bekenstein-Hawking entropy of certain blackholes in what nowadays might be called an AdS 3 /CFT 2 context. From the holographic perspective, a more conceptually satisfying derivation would involve the large-N limit of the index. In AdS 3 /CFT 2 such conceptually satisfactory derivations can be found in [37,38]. In the AdS 5 /CFT 4 context this was achieved very recently by Benini and Milan [13], leveraging the Bethe Ansatz formula of Closset, Kim, and Willett [39]. Curiously, although the derivation in [13] is not limited to the equal-charge blackholes, because of certain technical obstacles it so far applies only to the case with equal angular momenta J 1 = J 2 and the general case with J 1 = J 2 is still open. The more general Bethe Ansatz formula of [40] seems promising in that direction. The elliptic gamma function estimate (1.6) Let us define the parameters b, β through τ = iβb −1 2π , σ = iβb 2π . For p, q ∈ R, the parameter β defined as such was referred to as the inverse-temperature in [21,22]; here we similarly refer to β as the complexified inverse-temperature. Throughout the present work we assume b ∈ R >0 (i.e. τ /σ ∈ R >0 ); this simplifies the analysis and suffices for making contact with blackhole physics in the Cardy-like limit. We also take Reβ > 0 (i.e. |argβ| < π 2 ) to stay within the domain of meromorphy of the index (1.3). In terms of b, β we have the CKKN limit: |β|, ImT k → 0, with b ∈ R >0 , ReT k fixed, and Reβ > 0. The starting point for deriving the estimate (1.6) is the following identity, essentially due to Narukawa [41]: and ψ b (x) a function [see Appendix A of [22] for its definition in terms of the hyperbolic gamma function] with the important property that for argx ∈ (−π, 0) and fixed b > 0 with an exponentially small error, of the type e −|x| -see Corollary 2.3 of Rains [31] for the precise statement and see Appendix B of [42] for an earlier analysis in a different notation. This property guarantees that the infinite product in (2.1) is convergent when Reβ > 0. For x strictly inside the strip as |β| → 0 with Reβ > 0 and with b > 0 fixed, all the ψ b functions on the RHS of (2.1) approach unity exponentially fast. Moreover, the dominant piece of Q + in the limit is of order 1 τ σ and gives Since the LHS of the above relation is periodic in x → x + 1, we can extend it beyond x ∈ S + by replacing every x on the RHS with its horizontal shift {x} := x − Rex + Imx · tan(argβ) to inside S + . For x ∈ R we have {x} = x − x ; this yields our desired estimate (1.6). A somewhat subtle point is that the estimate (1.6) is not uniform with respect to x when applied to the ("vector multiplet") gamma functions in the denominator of the RHS of (1.3)-or more generally (2.5) is not uniform when x approaches the boundaries of the strip S + . We need a uniform estimate because we want to apply the estimate in the integrand of the index. We expect though that an argument similar to that at the top of page 23 of [22] can be given implying that the non-uniform estimate introduces a negligible error on the leading asymptotics of the index. 
Cardy-like asymptotics of the index (1.10)

It follows from the relation between the Pochhammer symbol and the Dedekind eta function,

η(τ) = e^{2πiτ/24} (e^{2πiτ}; e^{2πiτ}),   (2.6)

and the modular property η(−1/τ) = √(−iτ) η(τ) of the eta function, that in the Cardy-like limit the Pochhammer symbols on the RHS of (1.3) contribute an exponential growth of order e^{O(1/|β|)}. They can hence be neglected, along with the N! in the denominator of (1.3), in the Cardy-like limit when 0 < |argβ| < π/2. We thus end up with (1.8) as promised. We remind the reader that if β ∈ R_{>0} the integrand of (1.8) becomes a pure phase, and the more precise asymptotic analysis of Section 3 has to be performed.

For 0 < |argβ| < π/2, in the small-|β| limit the integral (1.8) is localized around the minima of −sin(2argβ) · Q_h(x; ReT_k), whose x-dependent part can be read from (1.9) to be the pair-wise sum (2.8),

V_Q(x) := −(sin(2argβ)/12) Σ_{i<j} Σ_{k=1}^{3} [κ(ReT_k + x_{ij}) + κ(ReT_k − x_{ij})];

V_Q is thus roughly a pair-wise potential for the "holonomies" x_i. We take argβ and ReT_{1,2} to be our control-parameters; ReT_3 is determined (mod Z, to be precise, which is enough) by the balancing condition. We take the fundamental region of ReT_{1,2} to be [−1/2, 1/2]. The two qualitatively different behaviors that the function V_Q can exhibit in various regions of the space of the control-parameters ReT_{1,2} are shown in Figure 1 for −π/2 < argβ < 0. This figure can be deduced either by numerically scanning (using Mathematica for instance) the fundamental region ReT_{1,2} ∈ [−1/2, 1/2], for some fixed argβ ∈ (−π/2, 0), or by analytically investigating the function Σ_{k=1}^{3} κ(ReT_k ± x_{ij}) in its various regions of analyticity. Note that an M-type potential means x_{ij} = 0 is preferred in the small-|β| limit, while a W-type potential means some x_{ij} ≠ 0 (always a neighborhood of x_{ij} = ±1/2, it turns out) is preferred. Since Figure 1 is a bit too featureful, we use the equivalence ReT_{1,2} → ReT_{1,2} ± 1 to shift its triangular regions so that the equivalent Figure 2 is obtained, which is one of the main results of the present paper. It should be clear from the sin(2argβ) factor in (2.8) that the M and W wings in Figure 2 switch places if argβ is taken to be inside (0, π/2) instead.

To be specific, let us continue with the argβ ∈ (−π/2, 0) case for the moment. Then on the M wing of Figure 2 the minimum value of V_Q occurs at x = 0. Moreover, since V_Q is stationary at x = 0, the phase of the integral (1.8) is stationary there. We conclude that for −π/2 < argβ < 0 and −1 < ReT_{1,2}, −1 − ReT_1 − ReT_2 < 0, the leading small-β asymptotics of the index is dominated by the x_{ij} = 0 configuration, which in our SU(N) case implies x_i = 0. This proves CKKN's conjecture in [14] and fills the gap in their derivation of the HHZ functional in the appropriate region of the parameter-space.

On the bifurcation set, indicated by the dashed lines in Figure 2, the functions V_Q and Q_h vanish; a more precise analysis using the techniques of Section 3 is then required, but in any case it is clear that the asymptotic growth of the index is much slower (with Re log I = O(1/|β|)) there, so we do not discuss this set any further.

Points on the W wing, on the other hand, turn out not to yield faster asymptotic growth than the M wing. For N = 2 the reason is that the x-independent piece of Q_h in (1.9) moves κ(−1/3 ± 0) = −4/9 further down by −2/9, while it moves κ(+1/3 ± 1/2) = −5/9 further up by +2/9. Thus in the CKKN limit with −π/2 < argβ < 0 the growth of I_{N=2}(p, q, y_{1,2,3}) on the W wing cannot exceed that on the M wing. In short, for N = 2 the fastest asymptotic growth in the CKKN limit with −π/2 < argβ < 0 occurs on the M wing of Figure 2.
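A rough Python counterpart of the Mathematica scan mentioned above, for a single holonomy difference x: we take κ(x) = {x}(1 − {x})(1 − 2{x}) (an assumption consistent with the values −4/9 and −5/9 quoted in the text, since the definition (1.7) is not reproduced here) together with the pair-wise potential written above, and classify sample chamber points as M- or W-type.

import math

def kappa(x):
    f = x - math.floor(x)                       # fractional part {x}
    return f * (1 - f) * (1 - 2 * f)

def V(x, ReT, arg_beta):
    return -(math.sin(2 * arg_beta) / 12) * sum(
        kappa(T + x) + kappa(T - x) for T in ReT)

def wing(ReT, arg_beta):
    """'M' if x = 0 minimizes the pair-wise potential, 'W' otherwise."""
    xs = [i / 400 - 0.5 for i in range(401)]
    vmin = min(V(x, ReT, arg_beta) for x in xs)
    return "M" if V(0.0, ReT, arg_beta) <= vmin + 1e-12 else "W"

arg_beta = -0.5                                   # inside (-pi/2, 0)
print(wing((-1/3, -1/3, -1/3), arg_beta))         # lower-left chamber point: "M"
print(wing((+1/3, +1/3, +1/3), arg_beta))         # upper-right chamber point: "W"
# With arg_beta in (0, pi/2) the sin(2 arg beta) factor flips sign and the
# two classifications swap, as stated in the text.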
For higher ranks there is a more important reason why points on the W wing do not exhibit a faster asymptotic growth. That is because for N > 2 it is impossible to distribute N holonomies x_i on the fundamental region [−1/2, 1/2] (with −1/2 and 1/2 identified, and with x_N determined from the rest via Σ_{i=1}^{N} x_i ∈ Z) and have all of them at equal distance |x_{ij}| = 1/2 from each other. Colloquially speaking, it is not possible to capitalize on the minima of V_Q on the W wing at |x_{ij}| = 1/2 with all the holonomies, whereas it is possible to do so on the minima at |x_{ij}| = 0 on the M wing; hence as we increase the rank it becomes intuitively more and more likely that the asymptotic growth of the index should be faster on the M wing, and so we expect that only this region potentially bears blackhole entropy functions (a numerical check of this counting obstruction is sketched below). Based on a numerical study of the contribution of the equal-distanced configuration for the holonomies we conjecture that on the W wing, as N increases, the index exhibits an asymptotic growth with an O(N^0) exponent in the CKKN limit, and so asymptotic growth with an O(N^2) exponent is viable only on the M wing.

Let us recapitulate our findings so far. We have demonstrated that the |x_{ij}| = 0 point is preferred in the CKKN limit on the M wing of the space of the control-parameters ReT_{1,2}, and thus the asymptotic result (1.10) is valid there. We have also argued intuitively that the W wing yields slower asymptotic growth and is not expected to bear blackhole entropy functions. It is straightforward to deduce the analogous statements for 0 < argβ < π/2. In that case the M and W wings of Figure 2 are swapped. Hence this time it is on the upper-right wing that the |x_{ij}| = 0 configuration is preferred in the CKKN limit, and the asymptotic result (1.10) is valid, though this time with ReT_3 = 1 − ReT_1 − ReT_2. We also know that for N = 2 the asymptotic growth of the index is slower on the lower-left wing, and as N increases we conjecture that the asymptotic growth has an O(N^0) exponent there.

Now we ask: in the case −π/2 < argβ < 0 does the lower-left wing, and in the case 0 < argβ < π/2 does the upper-right wing, contain blackhole saddle-points? To make contact with the AdS_5 blackholes we have to find the critical points of the Legendre transform of log I in the CKKN limit. In both cases it turns out that one blackhole saddle-point exists. The latter blackhole saddle-point seems to have been overlooked in [14], but can be obtained with minor modification of the computations in their Section 2.3, as we now outline.

Recall that when 0 < argβ < π/2 we impose Σ_k T_k = τ + σ + 1 rather than Σ_k T_k = τ + σ − 1; while CKKN [14] (following HHZ [25]) impose the latter relation via Eq. (2.11), the former relation can be simply imposed by putting Eq. (2.12) instead. We now would like to argue that the z*_{1,2,3,4} which solve the extremization problem for 0 < argβ < π/2 are indeed the complex conjugates of the z_{1,2,3,4} that CKKN found solving the extremization problem for −π/2 < argβ < 0. To demonstrate this, we present some of the details of the extremization problem, in parallel with Section 2.3 of CKKN [14]. Setting the derivatives of (1.15) with respect to z*_{1,2,3,4} to zero, we get stationarity conditions with a different sign on the RHS compared to the CKKN case (c.f. their Eq. (2.79)). As a result, the equations for z*_{1,2,3,4} following from the above relations can be read off. To obtain S, we can follow CKKN and write things in terms of f*, and then use the definition of f* to obtain a cubic relation for f*.
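The counting obstruction invoked at the start of this passage (that for N > 2 one cannot have all pair distances |x_{ij}| = 1/2 on the circle) is elementary to check numerically; the sketch below lists the distinct pair distances of maximally spread holonomies.

import math
from itertools import combinations

def dist(a, b):
    """Distance on the circle R/Z, valued in [0, 1/2]."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

for N in (2, 3, 4, 6):
    xs = [i / N for i in range(N)]   # maximally spread; an overall shift can
                                     # enforce sum(x_i) in Z without changing
                                     # any pair distance
    dists = sorted({round(dist(a, b), 6) for a, b in combinations(xs, 2)})
    print(N, dists)                  # only N = 2 achieves {0.5} for all pairs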
The cubic equation that follows for S = 2πi(f* + J_2) will then have the entropy functions (1.17) and (1.18) as its solutions in the CKKN scaling limit. To demonstrate the self-consistency of our computations we need to show that the CKKN/HHZ saddle-point (1.16) is indeed on the lower-left wing of Figure 2 and has −π/2 < argβ < 0, while the new saddle-point lies on the upper-right wing and has 0 < argβ < π/2. We show only the second statement, as the first follows using the fact that the two saddle-points have their T_k, τ, σ negative complex conjugates of each other. A quick way to the desired result is to note that the S + 2πiQ_k are on a straight line in the complex plane, so that their reciprocals are on a circle. This observation motivates the change of variables 1/(S + 2πiQ_k) = (1/2S)(1 + e^{−iφ_k}), with φ_k ∈ (0, π). Then the desired ranges of ReT_k and argβ follow easily from the vector representation of the complex numbers 1 + e^{−iφ_k}.

In summary, we have shown that when 0 < argβ < π/2 a blackhole saddle-point exists on the upper-right wing of Figure 2; as comparison of Eqs. (2.11) and (2.12) shows, the new saddle-point has fugacities p*, q*, y*_k that are complex conjugates of the fugacities at the CKKN/HHZ saddle-point. Moreover, we have argued that besides this and the CKKN/HHZ saddle-point no other (inequivalent) blackhole saddle-points exist in the Cardy-like limit.

Moving the flavor fugacities away from the unit circle

As we noted at the end of Subsection 1.1, unless Q_1 = Q_2 = Q_3, the critical T_k have nonzero imaginary parts, and thus the critical fugacities u_k (and also y_k) lie away from the unit circle. Hence to complete the blackhole entropy derivation for the general case with unequal Q_k, we need to be able to justify the Cardy-like asymptotics (1.12) when ImT_k are not sent to zero. A partial justification is as follows. Let us assume that ImT_k are small enough so that the integral (1.3) still represents the index, albeit with a slightly deformed contour of integration^7. We can then use (2.5) to arrive at a variant of (1.8) in which κ(x) is still defined as in (1.7), but with {x} := x − ⌊Rex + Imx · tan(argβ)⌋, as discussed around (2.5). We expect that for fixed argβ (either in (−π/2, 0) or in (0, π/2)), and for small enough ImT_k, the catastrophic behavior of the pair-wise potential for the holonomies remains similar to that discussed above, with the two complementary "wings" T_{1,2}, 1 − T_1 − T_2 ∈ S_+ and T_{1,2}, −1 − T_1 − T_2 ∈ S_+ − 1 being associated to M- or W-type behaviors, with one or the other having x = 0 as its preferred configuration depending on the sign of argβ. Then for argβ ∈ (0, π/2) one can use (2.5) on the wing T_{1,2}, 1 − T_1 − T_2 ∈ S_+ to arrive at (1.12) with Σ_k T_k = τ + σ + 1, while for argβ ∈ (−π/2, 0) one can use (2.5) with x → x + 1 on the wing T_{1,2}, −1 − T_1 − T_2 ∈ S_+ − 1, arriving at (1.12) with Σ_k T_k = τ + σ − 1.

Beyond a small neighborhood of ImT_k = 0 the methods of the present paper do not seem powerful enough to demonstrate (1.12). Whether the fascinating formalism of [39,40] can help addressing the general case with nonzero ImT_k is currently being investigated.

Real-valued temperature

In this section we keep the spacetime fugacities p, q real-valued and define b, β ∈ R_{>0} through p = e^{−βb}, q = e^{−βb^{−1}}. We also keep the flavor fugacities u_k = e^{2πiT_k} on the unit circle (hence T_k ∈ R), and study the effect of finite nonzero T_k on the small-β asymptotics of the index.
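The geometric fact used in the argument above (points on a straight line invert to points on a circle) can be checked directly; in the sketch below S and Q_k are arbitrary illustrative numbers, not the critical values of the paper.

import cmath

S = 3.0 + 2.0j                              # arbitrary; only Re S > 0 matters here
Qs = [1.0, 2.5, 4.0]
ws = [S + 2j * cmath.pi * Q for Q in Qs]    # all on the vertical line Re w = Re S

# The line Re w = a (a > 0) inverts under w -> 1/w to the circle
# |z - 1/(2a)| = 1/(2a), which passes through the origin.
a = S.real
for w in ws:
    assert abs(abs(1 / w - 1 / (2 * a)) - 1 / (2 * a)) < 1e-12
print("reciprocals of the collinear w_k lie on a circle")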
In order to provide some conceptual context for the somewhat technical analysis in the rest of this section we now briefly discuss the path-integral interpretation of the index with real-valued p, q. We will still be analyzing the Hamiltonian index I, and only importing intuition from the path-integral picture, until the next section where the path-integral partition function is analyzed.

The superconformal index with real p, q can be obtained via the path-integral SUSY partition function of the theory on S^3_b × S^1_β, where S^3_b is the squashed three-sphere with unit radius and squashing parameter b, while S^1_β is the circle with circumference β [43]. The integration variables z_i in the index (1.3) correspond to the eigenvalues of the holonomy matrix P exp(i∮_{S^1_β} A_0), with A_0 the component along S^1_β of the SU(N) gauge field. The u_k correspond to the eigenvalues of the background holonomy matrix P exp(i∮_{S^1_β} A^u_0), with A^u_0 the component along S^1_β of the background gauge field A^u associated to the "flavor" SU(3) of the N = 4 theory. The path-integral partition function actually differs from the index by a Casimir-energy factor; this factor is irrelevant for the present analysis and we postpone its discussion to the next section.

^7 It appears like we might only need the contour-deformation to be small near x = 0, which is the dominant eigenvalue configuration in the regime of parameters pertaining to the blackhole saddle-points.

Interpreting the S^3_b as the spatial manifold and the S^1_β as the Euclidean time circle, we refer to β as the inverse-temperature in analogy with thermal quantum physics, even though our fermions have supersymmetric (i.e. periodic) boundary conditions around S^1_β. Next, we note that while large-N QFTs (N → ∞) on compact spatial manifolds can have finite-temperature phases associated to large-N saddle-points, in the present work we are considering a finite-N QFT on a compact spatial manifold (namely S^3_b), which can not be assigned a phase at any finite temperature. In the high-temperature limit (β → 0), however, infinite-temperature phases can be associated to the small-β saddle-points. In particular, we will say that the infinite-temperature phase of the index is Higgsed if the dominant small-β saddle-point(s) of its matrix-integral lie away from the "origin" x = 0. For example, the infinite-temperature phase of the index of the SU(2) ISS model is Higgsed, but that of the N = 1 SU(N) SQCD (say in the conformal window) is not [22]. Moreover, we will say that the infinite-temperature phase of the index is deconfined if for the leading small-β asymptotics we have Re log I ≈ A/β with A > 0; in other words if the index exhibits exponential growth in the high-temperature limit. Below we will see that for generic non-zero T_k ∈ R the infinite-temperature phase of the index of the SU(N) N = 4 theory is Higgsed, and in the N = 2 case for a specific range of T_k also deconfined. We suspect, but could not demonstrate, that for large enough N no values of T_k can make the infinite-temperature phase of the index deconfined.

Taking p, q to be real means taking τ, σ to be pure imaginary. Then we have Im(τσ) = 0, so that the estimate (1.6) gives only a pure phase; hence we have to consider the subleading terms in the exponent of its RHS to get information about the modulus of the index. The improved estimate is the relation (3.1) of [22], in which the continuous, positive, even, periodic function ϑ, defined after Rains [31], appears.
In order to apply the estimate (3.1) to the gamma functions in (1.3) we have to interpret the modulus of the arguments of the gamma functions as (pq)^{r/2}, and interpret the phase of the arguments as 2πx; then, for instance, we can apply (3.1) to the gamma function in the numerator of the integrand of (1.3) by identifying r, x as r = 2/3, x = T_k ± (x_i − x_j); note that the balancing condition Π_{k=1}^{3} y_k = pq implies Σ_{k=1}^{3} T_k ∈ Z. Since the Pochhammer symbols in (1.3) yield asymptotics that cancel the contribution of the gamma functions from the third term on the RHS of (3.1) [20,22], applying (3.1) to (1.3) we get the estimate (3.3), where we have used τ = iβb^{−1}/2π and σ = iβb/2π. The functions L_h and Q_h appearing in (3.3) are the natural generalizations of those defined in [22] for T_k = 0, and are explicitly given by (3.4) and (3.5)^8. Note that for T_k = 0 both functions identically vanish, as in [22]. Since the 1/β^2 term in the exponent of the RHS of (3.3) gives a pure phase, the dominant contribution to the integral presumably comes from the locus of minima of L_h(x, r_k = 2/3; T_k). One has to make sure that Q_h(x; T_k) is stationary at that locus though, otherwise a more careful analysis is required.

The SU(2) case

Take for example the N = 2 case. Figure 3 shows the L_h function of the SU(2) N = 4 theory for sample values of T_k. As the picture clearly shows, at the point x_1 = 0 the integrand is maximally suppressed. It is easy to check that the correct "saddle-point" for N = 2 lies at |x_1| = 1/4 (Figure 3 is suggestive of this also); not only is L_h minimized there, but also Q_h is stationary, as desired. Moreover, we see from Figure 3 that depending on T_k the minimum of L_h can be positive, negative, or zero. Only when the minimum is negative is the infinite-temperature phase deconfined. The contours of L_h(x_1 = ±1/4, r_k = 2/3; T_k) are shown in Figure 4: outside the blue contour we have L_h(x_1 = ±1/4, r_k = 2/3; T_k) < 0, so the index is deconfined, except on the blue dots at special values of T_k.

Let us review what we have observed. While for T_k = 0 both functions L_h and Q_h are zero and the index has a power-law asymptotics (more precisely an I ≈ 1/β behavior as β → 0 [22]), finite nonzero T_k can induce Mexican-hat potentials for the holonomies in the high-temperature limit, triggering an infinite-temperature deconfinement in the index.

Higher ranks

We now show that the integrand of the index is maximally suppressed at x = 0 in fact for arbitrary N ≥ 2 and T_k ∈ R∖Z. Let us study the behavior of the L_h function in (3.4) with respect to x_i. For this purpose, we use the equality (3.6) derived in [22] (c.f. Eq. (3.51) there), valid for −1/2 ≤ u_i ≤ 1/2, whose right-hand side involves the pair-wise combinations min(|u_l|, |u_m|). We will use the above identity with M = 4 and u_{1,2,3} = T_{1,2,3}; we would moreover like to take u_4 = x_i − x_j, but this is not allowed since the range −1 < x_i − x_j < 1 is incompatible with −1/2 ≤ u_4 ≤ 1/2; to fix that we put instead u_4 = {x_i − x_j + 1/2} − 1/2. Using (3.6) we can now rewrite the L_h function in (3.4) such that its only x-dependent piece is a pair-wise sum of min-type terms. The latter expression is obviously negative-semi-definite as a function of x_i, and it is maximized when x_i − x_j = 0. So the index is Higgsed for any T_k ∈ R∖Z at infinite temperature. Just as we argued for the W wings of the previous section, here we expect that with increasing rank it becomes increasingly difficult to have the holonomies distributed such that they can pair-wise yield the negative minima of the L_h function.
Here one might speculate that for large enough N, since in the dominant configuration we would likely have a significant portion of the holonomies close to each other (giving x_{ij} near zero and thus yielding near-maxima of L_h), the total L_h function would likely have a positive minimum. In other words, we find it tempting to speculate that for large enough N the infinite-temperature phase of the index is not deconfined, no matter how T_k ∈ R are tuned. This is of course not incompatible with the asymptotic exponential growth arising for complexified temperature discussed in the previous section (and the related discussions of "deconfinement" in [12-14]).

Supersymmetric Casimir energy with complex chemical potentials

When all the fugacities p, q, u_k are real-valued, the index I(p, q, u_k) is related to the path-integral SUSY partition function Z(β, b, m_k) of the theory on S^3_b × S^1_β via

Z(β, b, m_k) = e^{−βE_SUSY(b, m_k)} I(p, q, u_k),   (4.1)

where E_SUSY(b, m_k) is known as the supersymmetric Casimir energy, β, b, m_k are defined through

p = e^{−βb}, q = e^{−βb^{−1}}, u_k = e^{−βm_k},   (4.2)

and S^3_b is the squashed three-sphere with unit radius and squashing parameter b, while S^1_β is the circle with circumference β. (The special case of (4.1) with m_k = 0 was understood already in [21,45], based on earlier slightly contrasting computations of [43].)

As made clear by HHZ [25] (and further elucidated in [12-14, 36]), making contact with the AdS_5 BPS blackholes requires considering complex fugacities p, q, u_k in the index. With the goal of understanding the role of the supersymmetric Casimir energy in the blackhole entropy discussion, in this section we study the relation between Z and I for complex fugacities such that b ∈ R_{>0} and β ∈ C with Reβ > 0 as in Section 2, while u_k are on the unit circle as in Section 3. Rather than modifying the background geometry to achieve such complexified β (c.f. [43]), we simply analytically continue the results obtained for real p, q.

Let us consider a free chiral multiplet to begin with; as in [21,43], we expect that solving this case leads to the solution of the interacting non-abelian case as well. Following Appendix A of [21], we start with the one-loop determinant of the nth KK mode on S^3 × S^1. Eq. (A.15) in [21] now generalizes to an expression for log Z^(n) involving the special function of [21], the R-charge of the multiplet (denoted by R), and T_k := iβm_k/2π ∈ R, with m_k the only chemical potential the chiral multiplet couples to. Define X := (R − 1)(b + b^{−1})/2 for notational convenience. Following [21] step by step, we now rewrite log Z^(n) in terms of ψ_b, which has a simple asymptotic behavior. Eq. (A.2) of [21] implies the rewriting (4.5), in which an overall factor of sgn(n + T_k) appears. One way to check the above equation is to check it separately for sgn(n + T_k) = +1 and sgn(n + T_k) = −1, using the oddness of the special function of [21] and Eq. (A.2) there. The reason for this rewriting is to divide the ψ_b's between the numerator and denominator of Z, so we can eventually relate Z to I using expressions such as (2.1).

Finally, we sum (4.5) over n ∈ Z. In doing so, we use the relations (4.6)-(4.8) that Di Pietro and Honda used [24] for analyzing the high-temperature asymptotics of the index, among them

Σ_{n∈Z} sgn(n + T_k) (n + T_k)^2 = −(1/3) κ(T_k).

With this regularization, combining techniques from [21] and [24], we obtain (4.9).
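The regularized sum quoted above can be checked by interpreting the two half-sums through the Hurwitz zeta function, with κ(x) = {x}(1 − {x})(1 − 2{x}) as assumed earlier (the definition (1.7) is not visible here).

from mpmath import mpf, zeta

def kappa(t):
    return t * (1 - t) * (1 - 2 * t)   # valid as written for 0 < t < 1

for T in (mpf("0.2"), mpf("0.5"), mpf("0.8")):
    # sum_{n>=0} (n+T)^2 -> zeta(-2, T); sum_{n>=1} (n-T)^2 -> zeta(-2, 1-T)
    lhs = zeta(-2, T) - zeta(-2, 1 - T)
    print(T, lhs, -kappa(T) / 3)       # the two columns agree for 0 < T < 1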
Putting T_k = 0 we can compare with Eq. (A.16) in [21], noting that κ(0) = ϑ(0) = 0, so that the only surviving term on the second line of the RHS of the above relation gives the Di Pietro-Komargodski asymptotics [20] as β → 0; the first and the third lines combine to give the first and the third terms on the RHS of Eq. (A.16) in [21].

We are done with our regularization. We believe our method of regularization is correct because we have been careful with the convergence of the infinite product appearing in Z (or equivalently the convergence of the infinite sum appearing in log Z) after regularization, and because we have used well-established tools of analytic continuation^9 for evaluating the sums (4.6)-(4.8). As a byproduct, from the second line on the RHS of (4.9) we can read off the high-temperature asymptotics of the partition function of a chiral multiplet with a flavor fugacity on the unit circle.

^9 See Chapter VII of [46] for some context.

We now would like to relate Z as obtained in (4.9) to the index I. We use (2.1) and the fact that the index of the chiral multiplet is Γ((pq)^{R/2} u_k). For simplicity we assume 0 < T_k < 1, and replace all {T_k} in (4.9) with T_k. Then we set (4.9) equal to e^{−βE_SUSY} times the index of the chiral multiplet, as in (4.1). The end result is that E_SUSY comes out just as in [21,45]: there is no dependence on T_k! In other words, for u_k = e^{2πiT_k} on the unit circle (which is relevant to the equal-charge AdS_5 blackholes) we have E_SUSY(b, m_k) = E_SUSY(b, 0). Since in the small-|β| limit with b > 0 fixed we have βE_SUSY(b, 0) → 0, we conclude that on the saddle-point associated to the equal-charge blackholes the supersymmetric Casimir energy has no significance in the leading Cardy-like asymptotics of the partition function Z. In particular, the Casimir-energy factor relating Z and I is irrelevant to the blackhole entropy function arising in the Cardy-like limit of either. The relation between the above discussion and the interesting proposal of [36], which seems to involve analytic continuation of Z with respect to τ and σ, is currently under study.

Open problems

We have presented a careful analysis of the asymptotics of the SU(N) N = 4 theory index in the CKKN limit where the flavor fugacities approach the unit circle and the spacetime fugacities approach 1. For 0 < |argβ| < π/2, we have demonstrated that in the CKKN limit, depending on the sign of argβ, there are complementary M or W wings in the space of the control-parameters ReT_{1,2}. On the M wings we have given the leading asymptotics of the index, and from it extracted the two blackhole saddle-points discussed above. On the W wings, except for the N = 2 case, the analysis seems difficult; we have only presented intuitive arguments suggesting that for N > 2 the index has a slower asymptotic growth there (with an exponent that, based on our numerical investigation, we conjecture to be O(N^0) as N → ∞) and therefore no blackhole saddle-points are expected in those regions.

Problem 1) In the CKKN limit

|σ|, |τ|, ImT_k → 0, with τ/σ ∈ R_{>0}, ReT_k fixed, and Imτ, Imσ > 0,   (5.1)

find the asymptotics of the SU(N) N = 4 theory index for N > 2 when τ, σ are inside the 2nd quadrant and ReT_{1,2} are on the lower-left wing of Figure 2, or when τ, σ are inside the 1st quadrant and ReT_{1,2} are on the upper-right wing of Figure 2. In particular, prove (or disprove) that in those regions the growth exponent in the CKKN limit is O(N^0) as N → ∞.

Even without addressing the above problem, we have successfully derived two blackhole saddle-points in the CKKN limit.
However, the saddle-points have flavor fugacities that are away from the unit circle unless the three charges Q_k are equal (or approximately equal). Therefore our derivation of the blackhole entropy function is incomplete for the general blackholes with unequal Q_k. To complete the analysis for the general case we have to derive the asymptotic relation (1.12) when ImT_k are not sent to zero.

Problem 2) Derive the Cardy-like asymptotics (1.12) of the SU(N) N = 4 theory index when ImT_k are nonzero.

We would like to emphasize that although we have not given a complete derivation of the entropy function for the general case with unequal charges, our analysis in the equal-charge case already allows addressing various conceptual issues in the derivation. One such conceptual issue has been the significance of the rather special relation Σ_k T_k = τ + σ − 1 in the HHZ functional [25]. In the present paper we have shown that a similar asymptotics arises with Σ_k T_k = τ + σ + 1 in a separate region of parameters, leading to a second blackhole saddle-point with fugacities that are complex conjugate to those of the CKKN saddle-point. (See [13] for related statements in the large-N analysis.)

Another conceptual point that we were able to clarify in the special case with equal charges was the insignificance of the supersymmetric Casimir energy to the blackhole entropy function in the Cardy-like limit. Generalizing that discussion to the case with the flavor fugacities away from the unit circle constitutes another important open problem related to the present work.

Problem 3) Study the supersymmetric Casimir energy of the N = 4 theory with flavor fugacities away from the unit circle. In particular, investigate its relevance to the blackhole entropy function in the Cardy-like limit.

Note added: While this work was nearing completion the preprint [47] appeared on arXiv, which has some overlap with our Section 2 and moreover suggests that extra hairy-blackhole [48,49] saddle-points might reside in the regions of Problem 1 above. As discussed in Section 2, we find it more likely that no such extra blackhole saddle-points (with O(N^2) entropy) exist in the Cardy-like limit of the index. The existence/interpretation of extra saddle-points in the large-N analysis [13] is of course a separate issue.
The Incidence and Severity of Patient-Reported Side Effects of Chemotherapy in Routine Clinical Care: A Prospective Observational Study

Introduction: Understanding patients' self-reported chemotherapy side effects is significant because it affects patients' quality of life (QOL) and compliance with treatment. Our current knowledge of chemotherapy side effects comes from the available literature, whose external validity is questionable. Moreover, there are very few studies available in the literature that focus on various cancers and their associated side effects. Methods: A single-center, prospective observational study was conducted at a tertiary care center from July 2019 to July 2021. After deriving the sample size, we interviewed 76 consecutive study patients with gastric, periampullary, colorectal, and breast cancer for six months after chemotherapy initiation with a structured patient-reported outcome tool adapted in English and Tamil to record side effects like diarrhea, vomiting, chest pain, constipation, dyspnea, fatigue, mucositis, and rash. The grading of symptoms was done according to the Common Terminology Criteria for Adverse Events version 5.0. The frequency and prevalence of side effects were calculated as the number of patients who reported the side effect of any grade at least once during the follow-up period. The incidence rate of side effects was calculated in terms of person-time. The association between each side effect and cancer type was calculated using the chi-square test and Fisher's exact test as appropriate. Results: Of the 77 patients in the study, 51.9% were male, 63.6% were between 40 and 60 years of age, 45.5% had stage-3 disease, and 44.2% received neoadjuvant treatment. During the six-month follow-up period, 97.4% of patients experienced at least one side effect. Fatigue was the most common side effect (87%), followed by loss of appetite (71.4%) and diarrhea (49.4%). Approximately 66.7% of patients experienced six or more side effects. There was a statistically significant difference in the frequency of side effects between cancer types. However, age, socioeconomic status, BMI, comorbidity, chemotherapy intent, and stage of disease did not affect the frequency of side effects. Conclusions: This study highlights the need to integrate patient-reported side effects into routine clinical practice. Identifying these side effects, even if they are mild in intensity, and managing them in a timely manner may improve the patient's emotional state, QOL, and compliance with chemotherapy.

Introduction

The incidence of cancer is increasing globally, with an estimated 19.3 million new cases worldwide in 2020 [1]. In India, the cancer burden for 2020 was estimated to be 98.7 per 100,000 population, accounting for 1,392,179 patients [2]. Improved treatment modalities and increased overall survival have resulted in an increase in the number of patients living with cancer [3]. However, chemotherapy drugs that are effective in killing cancer cells can also damage normal cells and cause side effects [4], which may affect patients' quality of life (QOL) and psychosocial well-being [5]. While clinical trials provide important information about chemotherapy side effects, their results may not be generalizable to routine clinical care. Patients in clinical practice may experience more chemotherapy side effects than reported in clinical trials, which often lack appropriate external validation [6].
For instance, a clinical trial reported a frequency of 34% and 45% for diarrhea (any grade) in patients receiving three months and six months of fluorouracil-/oxaliplatin-/leucovorin- or capecitabine-/oxaliplatin-based chemotherapy, respectively [7]. An observational study on chemotherapy side effects done in clinical practice showed that the frequency of diarrhea (any grade) in colorectal cancer is 75% (107 of 142) [8]. In clinical trials, chemotherapy side effects are often reported by clinicians, who may underestimate the incidence and severity of side effects compared to patient reports [9]. To obtain more generalizable results, observational studies that rely on patient-reported outcomes in routine clinical care are necessary [10]. Although there are some observational studies available in the literature on patient-reported chemotherapy side effects for specific cancer types, stages, chemotherapy regimens, or side effects [11,12], there are few studies available on the side effects of chemotherapy across various cancers and treatment regimens [13]. Moreover, no established research is available on the Indian population. Therefore, this study aims to estimate patient-reported chemotherapy side effects, including diarrhea, vomiting, chest pain, constipation, dyspnea, fatigue, mucositis, and rash, in patients with gastric, periampullary, colorectal, and breast cancers in routine clinical care in the Indian population rather than in clinical trials. Identifying and managing these side effects, even if they are mild, can improve patients' emotional state, quality of life, and compliance with chemotherapy.

Materials And Methods

This study was a single-center, prospective observational study done in the Departments of Surgery and Medical Oncology at the Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, which is a tertiary care teaching hospital in South India. The study's ethical clearance was obtained from the Institute Ethics Committee (approval number: JIP/IEC/2019/0202). The study period was two years, from July 2019 to July 2021. All consecutive patients with gastric, periampullary, colorectal, and breast cancers above 18 years of age and receiving chemotherapy were included in the study. Patients not willing to consent, or participating in clinical trials, were excluded from the study. The primary objective of this study is to assess the incidence and severity of patient-reported chemotherapy side effects in gastric, periampullary, colorectal, and breast cancers in routine clinical care during the follow-up period of six months.

Study patients

Informed consent was obtained from patients who were included in the study. Demographic data like age, sex, phone number, address, BMI, education, income, occupation, comorbidities, type and staging of carcinoma, chemotherapy intent, chemotherapy regimen, imaging, histopathology, and any hospital admissions were collected from hospital medical records. The socioeconomic status of the participants was calculated using a modified Kuppuswamy scale [14].

Study methods

Eligible patients were identified and informed about the study and the need for monthly face-to-face or telephone interviews for six months. Patients who gave consent were included in the study and interviewed for six months after chemotherapy initiation. The first interview was three to four days after the first cycle of chemotherapy and then every month for the next five months.
(Due to the COVID-19 pandemic, most of the subsequent interviews were via telephone.) Participants were asked, via a structured set of questions, whether they had experienced any side effects like diarrhea, vomiting, chest pain, constipation, dyspnea, fatigue, mucositis, and rash, with examples of each grade. These side effects were selected because they are common side effects observed in patients receiving chemotherapy and can be easily expressed from the patient's perspective. This patient-reported outcome tool was made according to the Common Terminology Criteria for Adverse Events version 5.0 and was adapted into English and Tamil.

Sample size calculation

The sample size calculation was performed using OpenEpi v3.03, with a calculated sample size of 63, a power of 80%, relative precision of 10%, and CI of 95%, based on an overall adverse effects rate of 86% [8]. With an attrition rate of 20%, the final sample size for the study was 76. A convenience sampling technique was used.

Statistical analysis

The statistical analysis was performed using SPSS Statistics version 21.0 (IBM Corp. Released 2012. IBM SPSS Statistics for Windows, Version 21.0. Armonk, NY: IBM Corp.). The frequency and prevalence of side effects were calculated as the number of patients who reported the side effect of any grade at least once during the follow-up period. The incidence rate of side effects was calculated in terms of person-time. Once individuals experienced the selected side effect, they were censored. The severity of side effects was determined using the worst grade of each side effect experienced by the individual during follow-up. Data were presented as percentages for categorical variables. The association between each side effect and cancer type was calculated using the chi-square test and Fisher's exact test as appropriate, with a p-value of less than 0.05 considered significant.

Demographic and clinical data

A total of 77 patients were included in the study. The largest group (34%) were breast cancer patients, and the smallest were periampullary cancer patients (4%). In the study cohort, 48% were females, and most patients were between 40 and 60 years of age (63.6%). About 58.4% of the cohort had BMI ≥ 25 kg/m2, and 33.8% of the cohort had comorbidities (diabetes mellitus (19.5%)/hypertension (24.7%)/coronary artery disease (3.9%)/others (6.5%)). About 45% of patients had locally advanced disease, 14% had metastatic disease, and 44.2% received neoadjuvant chemotherapy.

Incidence of patient-reported side effects

Fatigue had an overall high incidence in the study cohort (25.9 per 100 person-months), followed by loss of appetite (17.7 per 100 person-months). Conversely, chest pain had a low incidence in the study cohort (0.4 per 100 person-months), followed by rash (0.7 per 100 person-months) (Table 3).

Severity of patient-reported side effects

For 96.1% of study participants, the overall grade of side effects experienced was grade I or II, and 2.6% of participants reported grade III side effects (chest pain and dyspnea) (Table 4).

Discussion

In recent decades, there has been a rise in cancer incidence. Chemotherapy, along with surgery and radiotherapy, is an indispensable modality in cancer treatment. Thanks to the effective chemotherapy drugs available in the last few decades, overall survival has increased. As the overall survival of cancer patients has improved, knowledge about chemotherapy side effects affecting patients' QOL and compliance with treatment has become more critical.
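The sample-size arithmetic reported in the Methods above can be reproduced with the standard single-proportion formula (OpenEpi's exact conventions, and the role of the quoted 80% power, may differ slightly; this is only a sketch).

import math

z = 1.96                 # 95% confidence
p = 0.86                 # anticipated overall adverse-effect rate [8]
d = 0.10 * p             # 10% relative precision -> absolute precision 0.086

n = z**2 * p * (1 - p) / d**2
print(math.ceil(n))                      # 63, the reported base sample size

attrition = 0.20
print(math.ceil(n * (1 + attrition)))    # 76, the final sample size with attrition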
Currently, the available knowledge about chemotherapy side effects comes from clinical trials. However, their external validity is questionable, and there are few observational studies across various cancers, regimens, and side effects. In a recent prospective study among breast cancer patients by Galizia et al., it was concluded that doctors often underestimate the incidence and severity of treatment-related side effects in clinical practice, and the extent of underreporting is more significant in high-volume centers. The study also found that patients' questionnaires were a reliable tool for collecting side effect-related information [15].

Frequency of patient-reported side effects

The frequency of any side effect reported by patients in this study is 97.4%, with fatigue being the most common side effect reported (87%). The study results are similar to a cross-sectional survey conducted in the United States by Henry et al., which showed that 88% of participants reported at least one side effect during their cancer treatment, and fatigue was the most common side effect reported (80%) [13]. Another study by Alison Pearce et al. about patient-reported side effects in chemotherapy showed that 86% of participants reported at least one side effect during the study period, and fatigue was the most common side effect reported (85%), consistent with the findings in this study [8]. The frequency of side effects in this study was comparable to previous observational studies available in the literature. The increased frequency of side effects reported by patients in this study shows that chemotherapy side effects are more common in clinical practice than in clinical trials.

Incidence of patient-reported side effects

It is not easy to compare the incidence rates of each side effect with available studies in the literature because the follow-up period differs between studies. In a similar study done in the Australian population, the incidence rate of any event was 0.22 events per person per month of follow-up, whereas, in this study, the incidence rate of any event was 0.65 events per person per month of follow-up [8]. The difference in the incidence rates reported may be due to data collection methods: providing an example of each side effect and its grade yields a better estimate of side effects than open-ended questions do. The difference might also be due to the timing of data collection; for example, immediately after chemotherapy, patients might report more side effects than when data are collected a few days after chemotherapy. Additionally, due to the ongoing COVID-19 pandemic, many patients with side effects might have been left untreated owing to difficulty in accessing healthcare, and ongoing side effects reported in monthly interviews could have also contributed to the higher incidence rates in the study.

Severity of patient-reported side effects

Most patients in the study reported mild (grade I and II) side effects during the six-month follow-up period (96.1%). The study by Alison Pearce et al. about patient-reported side effects in chemotherapy showed that 35% of participants reported moderate (grade III) side effects, and 27% reported severe (grade IV) side effects during the median 5.64 months of follow-up [8]. The discrepancy in the severity of side effects reported between studies may be explained by (a) the sample size (77 vs. 441), (b) the age group of the study participants (in this study, 63.6% of participants are between 40 and 60 years of age, whereas in the other study, 27.5% of participants are above 65 years of age), (c) the type of cancer patients included in the study (the present study included gastric, periampullary, colorectal, and breast cancer patients, whereas the other study included breast, colorectal, and lung cancer patients), (d) the difference in the stage of the disease included in the studies (in the current study, 14.3% of participants had stage 4 disease, whereas the previously cited study had 52.4% of participants with stage 4 disease), and (e) the fact that many patients did not receive their intended chemotherapy cycles as planned due to the COVID-19 pandemic; this delay between cycles could have influenced the severity of side effects in the study [8]. However, both studies showed that many patients reported ongoing side effects during their chemotherapy. Even mild but persistent symptoms can affect the patient's emotional state, which in turn influences compliance with treatment and causes patients to delay the next chemotherapy cycle or miss cycles altogether. Some patients reported side effects after a few months of treatment, and a few reported severe side effects during the follow-up period because of which their treatment regimen was changed; this shows the importance of monitoring side effects in patients receiving chemotherapy.

Limitations

Dropout rates and patient compliance with chemotherapy could not be evaluated in this study, and the overall sample size was relatively small. Side effects of chemotherapy could not be monitored for the entire treatment duration and follow-up. Only breast, gastric, periampullary, and colorectal cancers were included in the study. The study did not record the proportion of patients who received treatment for side effects. Recall bias may have occurred owing to the monthly interviews. There was no control group for comparison; hence, it was difficult to determine the proportion of reported side effects attributable to chemotherapy.

Conclusions

This study shows the need to integrate patient-reported outcomes into routine clinical care. Identifying these common chemotherapy side effects in clinical practice and managing them in time can help increase patient compliance and thus decrease dropout rates. It may help to improve the patient's emotional state and QOL.
‘Look out you rock’n’rollers, pretty soon now you’re gonna get older’: A unique study of ‘Boys to Men’ over half a century

In September 1973, a second-year undergraduate social science student at Cardiff University, from a middle class background, moved to ‘Milltown’, a large and extremely disadvantaged working class estate on the edge of the city. He did so because it was the location of housing association accommodation he had just been granted. One day that autumn he was intercepted on the street by an ‘angel-faced boy’, aged 12 or 13, who said unashamedly: ‘Lend me a tenner’. The tenner was needed for a court fine and it was beyond the means of the boy’s mother. Even though he thought it must be a wind up, and the money might never be seen again, the student lent the boy the money. The boy duly paid it back at a pound a week, turning up at the student’s flat every Thursday evening to do so. The student, Howard Williamson, got to know the boy ‘Marty’, and Marty’s mates, and soon began to volunteer at the ‘youth club’ they attended, a derelict former nursery school building on an adventure playground at the top of the council estate, bordering woods and farmland. It was known as the ‘Rec’ and they were the ‘Rec Boys’. Thus began what must be one of the most remarkable journeys, intellectual and emotional, ever undertaken by a social scientist, resulting among other things in the three books under review here. And it is not over yet.

Williamson chose the pseudonym Milltown because at one time a paper mill bordered the estate. Often described as one of the largest housing estates in Europe, its construction began after World War 1. Even in the inter-war period, it had a reputation for ‘trouble’. By the time Williamson came to live there it had all the characteristics of the most disadvantaged estates: high unemployment (or unskilled and uncertain work), poverty, educational underachievement, pervasive social problems, poor transport links and few public amenities apart from what was then (but is no longer) a ‘flourishing network’ of social clubs and pubs.

Five years (‘that’s all we’ve got?’)

Williamson was not at first intending to embark on an ethnographic study, much less a research project lasting many years. He thought he was going to become a social worker. It was only a few years later, beginning a PhD, that he asked the Boys if they would be the subjects of his thesis (1981). By then he was able to draw on several years’ knowledge of their lives and experiences which, together with the ethnography for the PhD research, provided the content for Five Years (Williamson and Williamson, 1981). Quite unlike a PhD thesis, however, Five Years began as a memoir and was aimed at a wide rather than scholarly readership.

This is not an academic book ... It is a book about people, about their interests, their music, their crime and about the high spots and bad times of their adolescence. Above all, it is my attempt at a tribute to them all. (p. 5)

Five Years includes only a handful of academic references, including what were then two key recent youth studies texts: Parker (1974) and Willis (1978). It is essentially a detailed narrative account of the teenage years of five of the Boys, with a particular but not exclusive focus on their brushes with the law. The five were selected because the author ‘knew them well and also because each individual had characteristics that almost caricature the beliefs and way of life of others in the area’ (p. 4).
The five were (pseudonymously): Danny (the ‘coolest’, who progressed steadily through Detention Centre, Borstal and prison), Marty (the most intelligent and attractive, but also the meanest about money), Jerry (relatively cautious and ‘conventional’, eventually staying out of trouble long enough to be able to join the army), Ted (physically the toughest, from a huge family, ‘all either crooks or married to crooks’) and Pete (loud and opinionated, who after considerable agonising came out as gay, although ‘the term seemed ironic at the time’, and left Milltown for London). All except Jerry were devoted David Bowie fans, as were most of their mates.

Full of incident (sometimes hair-raising, sometimes hilarious) and densely packed with detailed insight (into the youth and adult justice systems, the strategies and tactics of routine, mostly petty, criminal activity and also the humdrum realities of family and community life in a poor working class neighbourhood), Five Years is astonishing for the candour of the author’s commentary on the Boys’ behaviour and personalities. It is frequently positive and flattering but it is sometimes very much the contrary. Two Boys are described as ‘incurable skivers and pathological liars’. There was nothing underhand about this because the book was intended to be read by the Boys and it is clear from the later volumes that they did indeed read it, critically but enthusiastically. Moreover, they were well able to give as good as they got, often with added twists of humour.

One day I was saying to [Marty] that he had never been generous in his life. He replied, ‘what do you mean, I paid twenty-five pound off my fine today’. (p. 81)

The main purpose of Five Years is to explore and document how, despite having such similar backgrounds, the Boys went different ways between the ages of 13 and 18. It therefore challenges simplistic models and metaphors of youth ‘transitions’ or of the life course more generally, and it has a continued relevance for that reason, all the more so because of the follow-up studies of the Boys as middle-aged and then older men.

Still crazy (about Bowie?) after all these years

The Milltown Boys Revisited (Williamson, 2004) is a very different book, and not just because the participants and author were twenty five years older. It is an ‘unashamedly empirical study’ (p. 23) and much more of an academic text than Five Years, making more explicit references to relevant theory and with a detailed chapter on methodology. It goes far beyond the original five ‘case studies’. Williamson started with a list of 67 names of boys from the Rec and from Milltown, and with the help of a small number of those with whom he had stayed in closest contact over the years (including Danny and Marty), snowballed his way to a sample of 30 with whom he conducted in-depth interviews. The result was almost half a million words of transcribed data. Given that seven of the 67 Boys were dead, this meant there were formal interviews with half of the surviving cohort, remarkable for a follow-up study after such a long time, and this did not include a significant number of informal conversations with others (see Figure 1 for a summary of the sampling strategy; reproduced from Williamson, 2004). The book describes and analyses the experiences of the Boys over the preceding quarter century, across all major life domains (employment or ‘ways of getting by’, involvement in crime, housing, health, leisure, families and relationships, and more).
It is full of interesting ‘taxonomies’ and systematic comparisons. There is even a detailed analysis of the experiences (educational and otherwise) of the Boys’ children, sixty in all at that stage. The author concludes with a tentative suggestion of three ‘clusterings within the life course’ of these Boys, who were the first generation of ‘status zer0’ or ‘NEET’ young people (not in education, employment or training; see Istance et al., 1994; Williamson, 1997). The clusters could be termed, loosely, the ‘successful’, the ‘unsuccessful’ and those ‘in the middle’ with regard to matters such as employment, housing, health, desistance from crime, stability of relationships, children’s education and so on. But even that categorisation leaves out a small number of cases and Williamson ends by urging:

Caution must be exercised in passing judgement on the Boys on the basis of some extraneous measures of success (or failure), for the most significant finding from this study is the complex interaction between the life-course trajectories in the public domain and those within more private spheres. (p. 237)

His careful teasing out of such interactions in individual cases, comparing and contrasting them with the experiences of others, is an enormous strength of this book and of the overall Milltown ‘project’. To answer the question at the head of this section, the author found the middle-aged Boys ‘surprisingly quiet’ about the place of music in their lives, although Danny remained a specialist in trivia about both Bowie and the Beatles.

‘Will you still interview me when I’m (almost) 64?’

As 2020 approached, having had sporadic contact with the Boys in the ensuing years (but seeing most of the original ‘core’ group at least every Christmas), Williamson began to consider a follow-up study. His initial hesitance was dispelled by a few developments, including a call from Danny to say he had become a grandfather. ‘He said that he had given up drinking and that perhaps his life was turning a corner. Perhaps I should write another book, he suggested’ (Williamson, 2021: 33). Then Adrian, the son of another Milltown Boy Gary, died by suicide. At the funeral, despite the sadness, Gary said that it must be time for another book and the response from the other Boys was decisive for the author. He was unsure whether semi-structured face-to-face interviews would work this time and was leaning towards ‘more opportunistic and spontaneous exchanges of experiences and perspectives’ (p. 57), when the Covid-19 crisis provided an unexpected opportunity to conduct the research online. The result is an absorbing study drawing on 12 online interviews, augmented by informal conversations with many more Boys and information from miscellaneous social media. While ethics had ‘hardly been an issue’ for the previous books, since, even around the turn of the century, ethical procedures and expectations were ‘light-touch’ (p. 58), this time it was necessary to get formal ethical approval from his university. Today, when ‘light-touch’ institutional ethics procedures seem ‘light years’ away, this seems quite an achievement, and it must have made a great difference that it was not an ab initio study but a follow-up in the context of an established research relationship. Once again the book teems with empirical material, covering themes similar to those in ‘Revisited’.
There is much fuller engagement than before with relevant theory, including a range of literature on youth transitions, living with ‘precarity’ and much broader sociological questions such as the nature of modernity/postmodernity and the relationship between structure and agency (e.g. Evans and Furlong, 1997; Giddens, 1991; Helve and Bynner, 1996; Standing, 2011; Swartz et al., 2021). James Côté (2014: 62) has suggested that ‘the structure-agency debate often implicitly informs many topical areas in youth studies’. In the third Milltown volume it is dealt with very explicitly. Williamson cautions against overemphasising either structure or agency and, in a nuanced reading, suggests:

It might be preferable to consider life course decisions in terms of whether the Boys have been proactive or reactive, and, in relation to each decision, how much room for manoeuvre, of which they were aware, was available to them. Going to prison might suggest little scope for ‘agency’ of any kind, yet the collective knowledge (even wisdom) about custody amongst the Boys meant that most were well prepared for the experience and already had contacts and networks on the inside when they got there. That helped considerably in the balance of power within the prison system. Similar points can be made in relation to the social security system, with which many of the Boys have been dealing throughout their lives ... It is invariably some combination of external circumstance and internal judgement that moved the Boys in particular directions (p. 193).

As before, it is the detailed scrutiny of the balance of proactivity and reactivity for individual Boys, and the way in which that balance can be tilted by a range of factors (home and family circumstances, poverty, the peer group, parental choice of school (if choice there is), the young person’s experience of attending or avoiding school, the quality of personal and intimate relationships, and sometimes entirely unanticipated ‘critical moments’ of diverse kinds), that makes for an exceptionally engaging and provocative read. But most fascinating about The Milltown Boys at Sixty is the reflective dimension, as the author looks back on the process of research for all three books and on his half-century relationship with the Boys, interweaving his and their own reflections and reminiscences.

All about the Boys?

The heavily gendered nature of youth cultural and sub-cultural studies was a matter of comment and controversy even before the publication of Five Years. The seminal collection Resistance through Rituals (Hall and Jefferson, 1976), which Williamson acknowledges as one of the main catalysts for his interest in youth studies, included in its almost 300 pages a single chapter on ‘Girls and Subcultures’. In it, Angela McRobbie and Jenny Garber criticised and challenged the usual invisibility of girls or the way in which, if visible, they were ‘fleetingly and marginally presented’ (p. 209), a fact ironically confirmed by their own experience. It prompted another important and ground-breaking collection a few years later, McRobbie and Nava’s Gender and Generation (1984). It is not that studies of boys were not also ‘about’ girls. Even if not the main focus, even if ‘invisible’ or out-of-sight, and not the subject of direct observation or explicit analysis, girls and women were always ‘there’ within the discourse of youth studies, in the sense meant by Foucault when he referred to the ‘repressive presence ... of the not-said’ (Foucault, 1972: 35).
But the 'said' was often all too clear. The attitudes to girls and women of the Boys in Five Years, some in particular, were breathtakingly sexist and demeaning, and Williamson does not spare the reader from them. In today's climate of (somewhat?) heightened sensitivity to gender inequality in all its guises - blatant, subtle and insidious - it might be difficult to print some of the content. Presenting it explicitly was in keeping with the author's purpose of conveying the reality of the Boys' lives as authentically as possible, and as we have seen his relationship with them was one in which he could robustly challenge their attitudes and behaviour without losing their trust. Today, while giving the reader a jolt, it also prompts the question of how much things have really changed. Not surprisingly, girls and women become much more visible and vocal in later volumes, particularly The Milltown Boys at Sixty. Even in the earlier years, Williamson was close to Marty's grandmother and some other Boys' female family members (and knew that if they didn't approve he mightn't have had the access that he did), but as time passed he got to know many wives, female partners and children, and they feature frequently (at the 60th birthday party of Kelvin's wife Julie, he breaks one of the Boys' 'golden rules' by spending time in the women's company). There is also an amusing account of an exchange on Facebook in 2017 between a number of people from Milltown after one of the Boys posted the cover of 'Revisited' with the comment 'Just read this book... great read... £10 on e bay... It's a book about the rec boys'. A woman replies 'Not about the rec girls then xx'! After a number of other contributions, some witty and ribald, the author joins in and explains why he had not 'included the stories of the girls'. The reasons are implicit throughout the books, but including the explanation explicitly here in the published volume would have been of interest and benefit to the reader and the field.

Patterns of difference

Apart from gender, and of course class (they all grew up 'within a stone's throw of each other' in social housing but the Boys' 'destinations' are far from uniform), the contemporary reader is also much more likely than the reader of the early 1980s to ask whether diversity is otherwise reflected in the lives of the Boys. All of the 'original' five Boys were white, and only one of the follow-up study participants comes from wholly BAME heritage: both Matt's biological parents were from Barbados. Vic's father was from Nigeria, his mother from Ireland. Looking back, Matt says that it is now clear in his mind that in his youth he was often 'picked out', including by the police, but he had hardly considered that possibility at the time. The existence of anything other than a 'white British (or Welsh)' ethnicity only features in Five Years through the disparaging remarks of the Boys (not unlike gender). Marty, as we hear for ourselves, was 'a staunch racist and although he was clever enough to see the irrationality of his prejudice he was not willing to accept it' (p. 78). By contrast, in The Milltown Boys at Sixty there is a thoughtful series of observations by the Boys, Black and white, both on the past and the present (the murder of George Floyd by a police officer in Minneapolis and the Black Lives Matter street protests in the UK formed part of the recent context for the interviews). As mentioned above, Pete, one of the Five Years Boys, is gay.
After coming out, he left Milltown, a place where the culture 'suppressed difference, distinction and achievement' (2021: 19-20). Despite openly homophobic attitudes from the other Boys, a few (in particular Jerry and Marty) continued to show great personal loyalty and visit him in London, while Vic would always continue to see him as a 'top mate' and in later years would threaten (with his commanding physical presence) anyone who made derisory comments. Having suffered close bereavement and tragedy in his personal life more than once, Pete suffers from profound depression and was not up to a formal interview for the third volume. One of the most poignant contributions in the second ('Revisited') is when he comments on returning to Milltown after some years: 'I didn't realise I had any friends until I came back. But there's all the Boys in Milltown - I'm not such an embarrassment after all! I'd missed them desperately for all those years. I cared about them but I never thought they cared for me' (2004: 192). Disability does not feature in Five Years, but it increasingly becomes an explicit part of the Boys' lives as described in the later books. This is either because they have become the parents of children with a disability (Jerry and his wife Sam have a profoundly disabled daughter, Rachel, and we learn a lot about her life in the family and community and the terrible impact on her and them of public sector austerity measures) or because of the onset for some of the Boys of chronic physical or mental health problems. Marty's story is a heart-breaking one. In Five Years he rues the frequency with which he gets caught and arrested because he was too drunk to plan his burgling carefully enough: 'I must be mad. I don't know why I do it. I think I need to see a psychiatrist. There must be something wrong with me'. These words take on a terrible irony when we later learn of his diagnosis of paranoid schizophrenia and read about him being taunted and ridiculed by local youngsters (as an 'old man', in his thirties). A deeply moving account of his funeral in 2014, at the age of 52, is the subject of the first chapter of The Milltown Boys at Sixty. (The funeral of Ted, another of the 'five', took place just before the book was completed.) Later, the chapter on beliefs concludes with a typically droll quip from 'Spaceman' that when it came to religion, because of his schizophrenia Marty sometimes thought he was God (2021: 140).

Positioning and relationships - 'half-in, half-out'

The Introduction of The Milltown Boys at Sixty quotes the late Peter Lauritzen of the Council of Europe, who praised the second Milltown volume for being an example of the 'distant intimacy' that is required for good participant observation. As so often happens in this trilogy, one of the Boys (Tony) is later quoted capturing a similarly sophisticated idea in the most uncomplicated of terms: 'You were half in, half out - that's how I would see it' (2021: 176). Howard Parker's study of car radio thieves in Liverpool (1974) was an important influence on Williamson's research interests and orientation, although it hadn't been published when he moved to Milltown. He learned from Parker that after the book came out its author lost touch with the 'Roundhouse boys' because they resented him for - as they saw it - becoming wealthy by using them. 'He couldn't go back... I was adamant that I would not end up like Parker' (p. 171).
He set out to ensure this in various ways: sharing with the Boys the proceeds of newspaper articles or radio interviews, offering advice and information (and, more than once, loans), accompanying them to court, visiting them in custody and at home long after the initial ethnography was formally completed, taking photographs at weddings and parties, and in other ways. The 'norm of reciprocity' that he learned about as a social science student was the touchstone, and the existence of two detailed follow-up studies, many years apart, is persuasive evidence that it has been successfully observed on both sides. But the Boys can be forgiven for extracting dry humour from the relationship, as they do from so much else. One of them gives the following response to a newspaper article featuring the author:

You're pretty clever really, aren't you, How. We tell you stuff in simple language. You put it into posh words and you get paid a fucking fortune for it. And then you talk to the paper about it, and get paid for that too, and then they write it in simple words, so that we can read what we told you in the first place! (2021: 171-172)

Conclusion

Towards the end of The Milltown Boys at Sixty there is a passage in which Paul talks about his feelings and about the fact that he doesn't do so with the other Boys: 'How can I say to them what I say to you, some of the things I've told you...? Well you're not going to talk about them...' (p. 173). The author tells us that even though the Boys have pseudonyms, there are many things they have revealed to him that he has not published. Reflecting on why they are as open with him as they are, Williamson says: 'I am a useful repository for some of their deeper thoughts... precisely because - paradoxically - I don't really count and don't really matter. It is precisely because I am not one of them that makes them more comfortable in sharing certain things with me' (p. 174). It's not hard to grant the validity of this observation, and to appreciate the paradox of it. But it is not the whole truth. The power of the Milltown trilogy lies in the fact that, quite palpably, and increasingly as we move through the volumes, the author does count in the lives of the Boys, as they do in his. The relationship is an unusually complex one between researcher and researched, because that is precisely what it takes for a study of such depth and duration to be possible. It is probably unique: in the scope of participants' lives that it covers, and in how long it has been sustained, by the same individual researcher throughout (as Williamson notes in comparing it with Laub and Sampson, 2003). The trilogy documents many things, including all those mentioned above and many others that would deserve attention if space allowed, like Danny's way with words, or Spaceman's artwork and analytical skill. But it also documents a history of relationships, both among the Boys and between them and the author. This latter relationship, while not being one between 'friends' in the conventional sense (although there are many degrees and types of friends, as the Boys themselves attest), certainly has some of the key qualities of friendship. The mix at play in the Milltown Boys trilogy has many striking parts, including a truly prodigious research effort, robust scholarship, insight, imagination and humour. When the warmth, care and affection of long friendship are added, it becomes an extraordinarily special combination, and a life-affirming as well as enlightening experience for the reader.
Views from the boys...

1981

This copper asks me where I'm going... he says he'll give me a lift... I think they wanted me to tell them who was [pinching cars]... So we pulls up outside my house and I says 'You know I wouldn't say anything anyway, but seen as you gave me a lift home, I'll tell you it was none of the boys I know'. (Danny)

I'm shit scared of going to Borstal. All the boys will probably think I'm cool [but] I don't know what it's like in Borstal. It's murder, that is, when you don't know nothing about a place... (Marty)

[Magistrates] got no idea what it's like being skint. I know they probably says that they're skint, but they can always get money from somewhere - you know, the fucking banks'll always lend 'em money because they're respectable. Being skint is having fuck all. (Marty)

2004

I just lost interest in it [the apprenticeship]... and then I found prison. (Nathan)

I didn't care [about going to prison]. I knew what it was going to be like and I knew everybody. No big deal. All of them was helping me. Home from home really. All you miss is the beer. (Ryan)
A Two-Year Longitudinal Study of the Effectiveness of the CRT® Bacteria Test in Evaluating Caries Risk in Three-Year-Old Children

Objective. To study the correlation between the level of infection with Streptococcus mutans (SM) and lactobacilli (LB) in saliva and both the existing status and the development of primary dental caries in 3-year-old children, and to evaluate the CRT® bacteria test as a Caries Risk Test.

Methods. A total of 140 3-year-old children were selected for the study. Oral examination was conducted and the levels of infection with SM and LB in saliva were measured using a CRT® bacteria test. Oral reexamination was conducted after two years. The prevalence rate of caries, the decayed-missing-filled tooth (dmft) and decayed-missing-filled surface (dmfs) indices, and the Caries Severity Index (CSI) were calculated at the start and end of the two years. The indices were statistically analyzed.

Results. The caries prevalence rate, dmft, dmfs, and CSI increased with increasing levels of CRT-SM and CRT-LB at the start and end of the two years; the increases in dmft, dmfs, and CSI over the period were consistent with the increases in CRT-SM and CRT-LB levels, with all differences being highly statistically significant. The increase in caries prevalence rate over the two years was not statistically different for different CRT-SM and CRT-LB levels. CRT-SM and CRT-LB levels were highly positively correlated with dmft, dmfs, and CSI and their increases over the two years. Levels of infection with oral SM and LB were each found to be independent risk factors for primary dental caries. For an SM concentration in saliva of ≥10⁴ CFU/mL and an LB concentration of <10⁴ CFU/mL, the risk of caries increased by approximately 2.8-fold. When the concentration of LB in saliva was ≥10⁴ CFU/mL and that of SM <10⁴ CFU/mL, the risk of caries increased by approximately 3.9-fold. When the concentration of both SM and LB was ≥10⁴ CFU/mL, the risk increased by approximately 10.9-fold.

Conclusions. Significant positive correlations were found between the level of infection with oral SM and LB and both the existing oral decay status and the trend in the development of primary dental caries. Infection with SM and LB significantly increased the risk of caries in primary teeth. The CRT® bacteria test is a simple, convenient, reliable, and effective Caries Risk Test.

Introduction

The most common chronic illness among children globally, in both developed and developing countries, is caries [1,2]. The prevalence of childhood caries remains the highest of all childhood diseases, five times greater than that of asthma and seven times that of hay fever [1]. Untreated primary dental caries affected 621 million children worldwide in 2010, representing approximately 9% of the global population [2]. In 2005 and 2015, the 3rd and 4th national oral epidemiological surveys were conducted in China, and the results revealed that the prevalence rate of caries and the mean value of the decayed-missing-filled tooth (dmft) index in 5-year-old children were 66.0% and 3.5 in 2005 and 71.9% and 4.24 in 2015, respectively, reflecting a significant deterioration in oral health [3,4]. The 3rd oral epidemiological survey in China found that 79.3% of caries in children aged 5 years was concentrated in one-third of the population, with a mean dmft value of 8.33, and the 4th survey reported that 75.4% of caries in 5-year-old children could be found in one-third of the population, with a mean dmft value of 9.61 [4].
An oral epidemiological survey conducted in the US also found that 20% of the population suffered from approximately 60% of all caries [5]. Evidence indicates a skewed distribution of caries in the population, with a particular subpopulation susceptible to severe caries [3-5]. Caries is a chronic infectious illness resulting from the interaction of multiple factors, such as varying microorganisms, the host, and dietary factors [6,7]. A wide range of bacteria grow in the oral cavity, and the population undergoes pathological evolution and change during the development of caries in children, from an equilibrium state with multiple bacteria to one in which a few cariogenic bacteria predominate [7-9]. Furthermore, cariogenic bacteria decompose and ferment food to produce acid, which gradually erodes the teeth, eventually leading to demineralization and so to caries [6]. In particular, Streptococcus mutans (SM) and lactobacilli (LB) are considered the primary cariogenic bacteria [6,9]. Caries risk refers to the sensitivity of a host to caries, reflecting their susceptibility and propensity to develop caries [6]. A Caries Risk Test (CRT) aims to detect risk factors for the occurrence of caries by objectively evaluating the risk of caries or the level of caries activity in an individual, which is significant for the prevention and control of caries in high-risk populations [6]. In the present study, we conducted an investigation of 3-year-old children from the Shenzhen Kindergarten, Guangdong Province, China, in which oral examinations were performed and the levels of infection with SM and LB in saliva were measured using a CRT® bacteria test, from which we assessed the correlation between the level of SM and LB infection and the development and status of primary dental caries, as well as the ability of the CRT® bacteria test to function as a CRT during surveillance of the occurrence and development of caries in children.

The inclusion criteria were as follows:

(i) Healthy children with no systemic disease, 3 years of age at the time of screening
(ii) Willing to accept oral examination and collection of a saliva sample stimulated using paraffin
(iii) No antibiotic use for two weeks prior to saliva collection
(iv) No professional fluoride treatment within the 48 hours prior to saliva collection
(v) No use of an antimicrobial mouth rinse for 12 hours prior to saliva collection
(vi) Signed parental consent, or that of a legal guardian or a family member that is the primary care provider when the primary caregiver is not the parent

The results of the first clinical examination and of the reexamination after two years were provided to the parents in written form. Parents were provided oral healthcare guidance each year, principally relating to the children's diet, oral cleaning, and healthcare, such as the impact of sugar consumption and the frequency of its consumption on oral health, the need to gargle after eating, the use of fluoride toothpaste, teaching parents how to brush and floss for children, and the importance of regular oral examination.

Research Method. Oral examination and the relative measurement of SM and LB infection levels in saliva were conducted for each individual at the initial clinical examination. Only an oral examination was conducted at the clinical reexamination after two years.

Oral Examination. The initial and follow-up oral examinations were conducted by the same senior pediatric dentist.
The kappa statistic was calculated for both examinations; the resultant values were found to be greater than 0.9, indicating that the results were reliable [10,11]. The dentist performed a diagnosis both visually in natural light and with probing using a disposable mirror and probe, with all examination results recorded contemporaneously. The diagnostic criteria for caries as described in the Oral Health Surveys: Basic Methods by the World Health Organization (WHO) [12] were used, any uncertain cases being excluded. The type of caries was also recorded, such as secondary caries, enamel caries, dentin caries, or caries of the residual crown or residual root.

Indicators of Caries Status. Based on the oral examination results, the prevalence rate of caries, dmft, decayed-missing-filled surface (dmfs), and Caries Severity Index (CSI) were calculated [6,13]. CSI was scored using the caries criteria developed by Shimono et al. [13]: 0 if a tooth was sound; 0.5 where a filling was present; 1 if secondary caries was present after the filling had been placed, or for enamel caries or superficial dentin caries; and 2 when deep caries of the dentin, exposure of the endodontium, or a residual crown or root was observed. The highest score was recorded if multiple decayed surfaces were detected on one tooth.

Measurement of Levels of Infection of Oral SM and LB. A standard CRT® bacteria kit (Ivoclar Vivadent Inc., Liechtenstein) was adopted in the present study, containing paraffin pellets and two special plates. One side of the special plate was covered with black SM selective culture medium (MSB) and the other side coated with green LB selective culture medium (Rogosa agar). The test was performed between 9 and 10 a.m. Participants fasted for one hour prior to the examination. The specific collection procedure was as follows: stimulated saliva was obtained from the children by asking them to chew the paraffin pellets prior to its collection in a sterile sputum cup. The agar was entirely covered with saliva. The carrier was then held slightly obliquely to allow excess saliva to flow out. The agar was held upright and placed tightly to form a seal. No contact by the researcher was permitted with the surface of the agar during the entire process. The agar plates were incubated at 37°C for 48 h and the colony density of SM and LB was recorded. The numbers of SM and LB on the agar were compared with a standard plate and the results recorded accordingly.

Sample Size. This was a prospective cohort study. From the data in a previous report [14], the two-year increments in decayed-filled surface (dfs) at CRT-SM levels 0, 1, 2, and 3 were 1.53 ± 2.43, 2.75 ± 3.12, 6.91 ± 6.49, and 9.61 ± 6.19, and at CRT-LB levels 0, 1, 2, and 3 were 1.54 ± 3.21, 3.80 ± 4.82, 6.82 ± 4.83, and 10.36 ± 7.52, respectively. For a two-sided test where α = 0.05 and β = 0.10 with power = 90%, the sample size N for a study of SM was 64 cases, and 80 cases for a study of LB, as calculated using PASS 15 software. Considering a loss to follow-up of 15%, at least 92 cases should be included. In the present study, all the 3-year-old children in the same kindergarten in Shenzhen were selected. A total of 143 3-year-old children were included in the first clinical examination. After two years, 3 children had withdrawn from the kindergarten due to their families moving away, a rate of loss to follow-up of 2.10%. A total of 140 children completed the two-year follow-up, half male and half female.
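The loss-to-follow-up inflation, and the flavor of the underlying power calculation, can be sketched as follows. This is a minimal illustration of mine, not the paper's PASS 15 computation: it applies the standard two-group normal-approximation formula to the two extreme CRT-LB levels only, so the per-group n it prints is not expected to match the reported 64 and 80, whereas the 15% inflation step does reproduce the quoted total of 92.

```python
# Sketch of the sample-size reasoning described above (not the PASS 15 routine).
from math import ceil
from scipy.stats import norm

def n_per_group(mean1, sd1, mean2, sd2, alpha=0.05, power=0.90):
    """Two-sided, two-sample comparison of means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z**2 * (sd1**2 + sd2**2) / (mean1 - mean2) ** 2)

# Extreme CRT-LB levels from the cited report [14]: 1.54 ± 3.21 vs 10.36 ± 7.52.
print("per-group n (two extreme levels only):", n_per_group(1.54, 3.21, 10.36, 7.52))

# Inflation for an anticipated 15% loss to follow-up, as in the text:
print("total with 15% loss allowance:", ceil(80 * 1.15))  # -> 92
```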
Statistical Analysis. A normality test, Chi-square test, Kruskal-Wallis test, Wilcoxon signed-rank test, and logistic regression with Spearman's rank correlation coefficient were performed in the present study. All statistical analyses were conducted using SAS 8.02 software. The results of two-sided tests were considered statistically significant where P < 0.05 and highly significant at P < 0.01.

Oral Examinations for Caries. In the initial oral examination, the prevalence of caries in the 140 children was 34.29%, with dmft, dmfs, and CSI values of 1.48, 2.23, and 4.71, respectively. In the follow-up oral examination two years later, the prevalence was 66.43%, and the values of dmft, dmfs, and CSI were 3.81, 6.08, and 11.87, respectively. No significant difference between the genders in the indicators of caries status was observed either at the initial exam or after two years, nor in the increase in indicator values over the two years (Table 1).

Correlation between Caries Status and CRT-SM Levels. Table 2 displays the caries status and statistical analysis of the two oral examinations of the 140 children at each CRT-SM level. The prevalence of caries, dmft, dmfs, and CSI at each CRT-SM level at both the initial and the follow-up examinations, and the increase in dmft, dmfs, and CSI over the two years, increased with increasing CRT-SM level, at high levels of statistical significance. The prevalence of caries, dmft, dmfs, and CSI was statistically different when comparing CRT-SM levels 0, 2, and 3 at the initial oral examination. At the follow-up examination, the prevalence of caries was statistically different between CRT-SM levels 0 and 3, in addition to between levels 1 and 3, and dmft, dmfs, and CSI were statistically different between CRT-SM levels 0, 2, and 3, between levels 1 and 3, and between levels 2 and 3. The increase in dmft, dmfs, and CSI during the two years was found to be statistically different between CRT-SM level 3 and all other levels. No statistical difference was found for the increase over two years of the caries prevalence rate between CRT-SM levels. Table 3 displays the results of statistical and correlation analysis between the CRT-SM levels and the indicators of caries status (dmft, dmfs, and CSI) in the two oral examinations and the increase in indicator values over the two years. All were positive correlations, with coefficients ranging from 0.30 to 0.41 (P < 0.01).

Correlation between Caries Status and CRT-LB Levels. Statistical analysis of caries status at each CRT-LB level for the two oral examinations of the 140 children is presented in Table 4. The prevalence of caries, dmft, dmfs, and CSI increased with increased CRT-LB level at both the initial and follow-up examinations, with dmft, dmfs, and CSI increasing over the two years with increased CRT-LB level, at a high level of statistical significance. In the initial oral examination, the prevalence of caries was significantly different between CRT-LB level 0 and all other levels, and dmft, dmfs, and CSI were statistically different between CRT-LB levels 0, 2, and 3. In the follow-up examination after two years, the prevalence was statistically different between CRT-LB levels 0 and 2, with dmft, dmfs, and CSI significantly different between CRT-LB levels 0, 2, and 3, and levels 1, 2, and 3. The increase in dmft and dmfs over the two years was statistically different between CRT-LB levels 0 and 2, with CSI significantly different between CRT-LB levels 0, 2, and 3, and between levels 1 and 2.
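The group comparisons and rank correlations reported above were run in SAS 8.02. The following minimal sketch of mine illustrates the same two core computations in Python, using synthetic stand-in data since the per-child records are not public: a Kruskal-Wallis test of two-year dmft increments across the four CRT levels, and a Spearman rank correlation between level and increment.

```python
# Illustrative re-implementation (not the authors' SAS code) of the
# Kruskal-Wallis and Spearman analyses described in the text.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)

# Hypothetical two-year dmft increments for children at CRT levels 0-3.
increments_by_level = [
    rng.poisson(lam, size=n)  # synthetic counts per level
    for lam, n in [(1.5, 40), (2.8, 35), (4.0, 35), (6.0, 30)]
]

# Kruskal-Wallis: do the increments differ across CRT levels?
h_stat, p_kw = kruskal(*increments_by_level)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_kw:.4f}")

# Spearman: is CRT level monotonically associated with the increment?
levels = np.concatenate([np.full(len(g), lvl)
                         for lvl, g in enumerate(increments_by_level)])
values = np.concatenate(increments_by_level)
rho, p_sp = spearmanr(levels, values)
print(f"Spearman rho = {rho:.2f}, P = {p_sp:.4f}")  # paper reports rho of 0.26-0.41
```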
Table 5 displays the results of statistical and correlation analysis between CRT-LB levels and the indicators of caries status (dmft, dmfs, and CSI) in the two examinations and the increase in indicator values over the two years, all positively correlated with coefficients ranging from 0.26 to 0.39 (P < 0.01).

Multivariate Logistic Regression Model for Caries Status at Different Levels of Infection with Cariogenic Bacteria. The results of multivariate logistic regression analysis of the impact of different levels of infection of oral SM and LB on caries status found χ² = 19.9783 with P < 0.0001, a highly significant result (Table 7). The parameters obtained and the statistical analysis are shown in Table 8. Furthermore, the resultant probability of caries was calculated to be

p = Pr(caries = 1) = e^(−2.2740 + 1.0442·SM + 1.3482·LB) / (1 + e^(−2.2740 + 1.0442·SM + 1.3482·LB)),

with the odds of caries increasing 2.8-fold when SM ≥ 10⁴ CFU/mL and LB < 10⁴ CFU/mL in saliva, 3.9-fold when LB ≥ 10⁴ CFU/mL and SM < 10⁴ CFU/mL in saliva, and 10.9-fold when both SM and LB were equal to or greater than 10⁴ CFU/mL.

Discussion

Both SM and LB are naturally present within the human oral microbiota [6-8]. SM is a chain-like coccus 0.5-0.8 μm in length and can be observed everywhere in the human mouth [15]. LB are rod-shaped bacteria, not generally abundant in the oral cavity, accounting for approximately 1% of the total salivary flora, and can often be obtained from the surface of the tongue, oral saliva, and decayed teeth [16-18]. SM and LB share the following biological characteristics: they are Gram-positive; they are acidogenic and aciduric bacteria that can survive in a strongly acidic environment and continue to ferment sugars to produce lactic acid; they rely on glycolysis for energy; and they are microaerophiles and require similar nutrition [18,19]. Hence, both SM and LB can survive and thrive at low pH, in addition to environments with inadequate oxygen or nutrition [18,19]. Furthermore, SM-derived glucosyltransferase can synthesize glucans by fermenting sucrose [20,21]. Glucan is a high-molecular-weight polymer that can be both water-soluble and water-insoluble. Soluble glucan can act as a reserve source of energy, while insoluble glucan is highly viscous and plays an important role in SM adhesion and aggregation on the surface of teeth [20,21]. Additionally, surface proteins on SM are also important factors for adhesion, which can selectively attach the bacteria to the surface of tooth enamel to form dental plaque [20,22]. Unlike SM, LB have no adherent surface proteins; because they do not produce large quantities of extracellular polysaccharides to promote adhesion, they have a low affinity for dental tissue, thereby often presenting at low levels in plaques [16]. A clinical study investigating changes in the proportion of cariogenic bacteria in dental plaques during the development of caries in children's primary teeth indicated that the percentage of SM in a complete plaque was 16.35%, 26.10%, and 37.24% in precaries, enamel caries, and superficial dentin caries, respectively, while the proportion of LB was extremely low, 0.02%, and 7.17% in precaries, enamel caries, and superficial dentin caries, respectively.
The increase in both SM and LB was statistically significant, indicating that SM was the primary cariogenic bacteria and that LB was not the initiating factor in the development of caries but the driving factor in its progression [23]. In the results of the present study, the prevalence of caries, dmft, dmfs, and CSI significantly increased with increasing CRT-SM and CRT-LB levels at both the initial and follow-up examinations (P < 0.01), suggesting that children with different levels of infection of oral SM and LB had significant differences in caries status, with caries severity increasing as concentration levels of SM and LB increased in the saliva. In children with different levels of CRT-SM and CRT-LB, although there was no statistical difference in the increase in caries prevalence over two years, the increase in dmft, dmfs, and CSI over the two years was highly significant (P < 0.01). Increasing evidence has emphasized the contribution of SM and LB to caries. Beighton et al. [24] demonstrated that SM and LB are detected in children with caries significantly more frequently than in caries-free children. Lin et al. [25] studied children aged 3 to 4 years and found that, in the caries group (mean dmft of 9.00) and the caries-free group, SM was present in 95.0% and 65.0% of cases, respectively, and LB in 42.5% and 10.0%, respectively, differences that were significant in each case. Matee et al. [26] discovered that the mean SM and LB counts in dental plaque in children with rampant caries were 100-fold higher than in caries-free children, indicating that the level of infection with salivary SM is directly related to rampant caries status. Mattos-Graner et al. [27] studied children aged 1 to 2.5 years and established that children with high levels of infection of salivary SM had a higher prevalence of caries than those with low infection levels. Additionally, Wu et al. [28] observed 8-month-old infants and conducted caries and LB tests on their plaque every 6 months until 32 months of age, revealing that LB measurements in infants with caries were significantly higher in all age groups than in caries-free infants. The levels of CRT-SM and CRT-LB were highly positively correlated with dmft, dmfs, and CSI in the two oral examinations and with their increase over the two years, further demonstrating that the levels of infection of oral SM and LB are associated with the severity and activity of caries in children [6]. SM and LB can colonize the mouth in early infancy [26]. Teanpaisan et al. [29] conducted a longitudinal study of 169 infants aged 3 to 24 months and found that the detection rates of SM and LB in the saliva of 3-month-olds were 1.78% and 8.88%, respectively, and 86.98% and 66.86% by 24 months, respectively. Moreover, the detection rate of LB in children aged 3-9 months was evidently higher than that of SM, and the rate of SM in children aged 18-24 months was considerably higher than that of LB [29].
The risk of caries in children aged 12-24 months with an SM count >50 CFU/1.5 cm² in the saliva was found to be 7.5-13.0-fold higher than in children without SM infection, and the risk of caries was 3.1- and 13.3-fold higher in children aged 24 months with salivary LB counts of 1-50 and >50 CFU/1.5 cm², respectively, compared with children without LB infection. Importantly, children in whom SM and LB had colonized the oral cavity at an early time point were more susceptible to caries, and the level of infection with SM and LB was positively correlated with the caries status of the children [29]. Kanasi et al. [30] also reported that the level of infection with oral SM and LB was positively correlated with caries in children and was a risk marker for early childhood caries. The results of the present study confirmed that infections with SM and LB are independent risk factors for caries in primary teeth, with the risk of caries increasing approximately 10.9-fold when both SM and LB counts are ≥10⁴ CFU/mL in saliva. Li et al. [31] researched 3- and 5-year-old children and found that the risk of caries increased 6-8-fold when SM were present at >10⁶ CFU/mL in saliva. Hong et al. [32] investigated the association between the concentration of salivary SM in children aged 11 to 12 years and caries; their findings demonstrated that the concentration of salivary SM in children with caries was significantly higher than that of caries-free children, with a highly positive correlation between the concentration of SM in saliva and caries. Moreover, Hong and Hu [32] also concluded that the prevalence of caries in children increases exponentially at an SM concentration of 8.64 × 10⁷/L in saliva. High levels of infection with SM and LB in childhood caries, and their capacity to generate a low pH environment, in addition to their pathogenicity and aciduric properties, indicate that they are key determinants of the development and progression of caries [16]. In the present study, we found consistent results. Matee et al. [26] pointed out that there is no difference in SM count between dental plaque and healthy enamel surfaces, but the LB count in dental plaque is 100-fold higher than on a healthy tooth surface, and the proliferation of LB in caries lesions suggests that LB is associated with the progression and severity of caries. Studies have revealed that it is SM rather than LB that is the initiating cariogenic bacteria, whereas LB is involved in the development of caries. Specifically, LB counts gradually increase in dental plaque after caries has occurred in normal plaque, lowering the pH and thus affecting the development of caries [17,18]. It has also been concluded that LB and SM interact and operate in combination during the development and progression of caries. LB is an indirect indicator of fermentable carbohydrate [16,18]. Caries is a chronic infectious disease that is affected by multiple factors. In addition to microbiological factors, children's feeding and oral hygiene habits are also closely related to the occurrence and development of caries [6,7]. Studies have demonstrated that the risk factors for caries in young children are a delay in starting to brush the teeth, the absence of toothpaste, and a high frequency of sweet consumption [6,33]. All subjects in the present study were from the same kindergarten in Shenzhen. The composition of the diet and the frequency of its consumption in the kindergarten were identical for each child.
The children were from civil servants' families living close to the kindergarten, with relatively little mobility. The parents, who had been provided with oral healthcare guidance every year, were relatively consistent in how they had educated their children and in the habits they had retained, reducing the impact of host and dietary factors on the research results to the greatest extent, although this was also a limitation of the study. The caries diagnostic criteria as described in the Oral Health Surveys: Basic Methods (5th Edition) formulated by the WHO [12] in 2013 were used in the present study. Caries with cavitated lesions were examined and recorded, but early caries of the enamel with initial noncavitated lesions were not observed and evaluated, a limitation of the WHO caries diagnostic criteria and also of the present study. In addition, if follow-up observation data a year after the initial examination, and additional follow-up examinations over a longer duration such as three years, had been available, the research results would be more complete and the data more convincing.

A Caries Risk Test contributes to the identification of populations with high caries activity or at high risk of developing caries [6]. An ideal Caries Risk Test has the following attributes: consistency with clinical findings; high reproducibility; the ability to reflect current caries status and predict caries trends; ease of use; short test duration with high accuracy; and the capacity to reflect individual characteristics [34]. At present, no Caries Risk Test fully meets the aforementioned criteria [34]. Tests often require sampling from dental plaque, saliva, and teeth [25-28, 35-37]. Saliva is a bridge between different tissues and structures in the oral cavity and serves as an oral microecological medium with a large number of microorganisms that remain relatively stable [36]. The collection of saliva is a simple, noninvasive, and acceptable approach and a common source for oral clinical research [18,32,36]. Evidence has shown that a dynamic balance exists between the bacteria in saliva and dental plaque, with SM and LB counts in saliva highly correlated with the numbers of the corresponding cariogenic bacteria in dental plaque [18]. Motisuki et al. [36] compared the influence of different sample types and collection methods on SM and LB counts and found that the numbers of SM identified in whole saliva and in dental plaque were similar, whereas the number of LB detected using a whole saliva method was superior to the dental plaque method, suggesting that whole saliva is sensitive for LB measurements. The present study utilized stimulated whole saliva as the sample collected from 3-year-old children, who showed a high level of cooperation. The results of the study demonstrated that levels of infection with SM and LB in saliva can be used to predict caries risk in children [37-41]. In the present study, we leveraged the CRT® bacteria test for semiquantitative measurement of SM and LB in saliva; the results showed that the test represents a simple, convenient, reliable, and effective method of conducting a Caries Risk Test, consistent with the findings of Liang and Xu et al. [14,42]. Tanabe et al. [43] found satisfactory consistency in terms of outcomes between the CRT® bacteria method and conventional methods of selective microbial culture and counting.
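To make the risk-prediction arithmetic concrete, here is a minimal sketch of mine (not part of the paper) that evaluates the fitted logistic model quoted in the Results above, using the reported intercept −2.2740 and coefficients 1.0442 (SM) and 1.3482 (LB); the indicator coding of each predictor as 1 when the salivary count is ≥10⁴ CFU/mL is my reading of the text. It recovers the quoted odds-ratio increases.

```python
# Sketch of the reported logistic model; coefficients are taken from the
# text (Table 8 of the paper), variable coding is my assumption.
import math

INTERCEPT, B_SM, B_LB = -2.2740, 1.0442, 1.3482

def caries_probability(sm_high: int, lb_high: int) -> float:
    """P(caries) given indicator coding: 1 if count >= 10^4 CFU/mL, else 0."""
    logit = INTERCEPT + B_SM * sm_high + B_LB * lb_high
    return 1.0 / (1.0 + math.exp(-logit))

for sm, lb in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(f"SM={sm}, LB={lb}: P(caries) = {caries_probability(sm, lb):.3f}")

# Odds ratios implied by the coefficients; these reproduce the paper's
# reported ~2.8-, ~3.9-, and ~10.9-fold increases in the odds of caries.
print(f"OR(SM only) = {math.exp(B_SM):.2f}")         # ~2.84
print(f"OR(LB only) = {math.exp(B_LB):.2f}")         # ~3.85
print(f"OR(SM and LB) = {math.exp(B_SM + B_LB):.2f}")  # ~10.94
```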
The CRT® bacteria kit contains a special plate preprepared with MSB and Rogosa agar on different sides; thus, no special preparation is required, and it is characterized by simple operation and measurement, high reproducibility and feasibility, low technical requirements, suitability for large-sample testing, and easy generalization [14,42,43]. However, this method requires incubation for 48 hours after sample collection and manual comparison of results rather than precise quantification. Therefore, the development of easy-to-use, fast, and accurate quantification methods would be a significant step forward for Caries Risk Testing.

Conclusions

The level of infection with oral SM and LB was positively correlated with caries status in children's primary teeth and with the development and progression of caries. A high level of infection with oral SM and LB suggests a high prevalence of caries and predicts an increasing trend in the future, with a large number of decayed teeth and surfaces indicating more severe caries. Furthermore, infections with oral SM and LB are independent risk factors for caries in primary teeth, the risk of caries increasing approximately 10.9-fold when both salivary SM and LB counts are ≥10⁴ CFU/mL. Finally, the CRT® bacteria test is a facile yet effective form of the Caries Risk Test.

Data Availability

The data used to support the findings of this study are restricted by the Ethics Committee of the Shenzhen Maternity and Child Healthcare Hospital Affiliated to Southern Medical University in order to protect children's privacy. The data that support the findings of this study are available from the corresponding author for researchers who meet the criteria for access to confidential data upon reasonable request.

Conflicts of Interest

The authors have declared that no competing interests exist.
A case of giant ameloblastoma: destructive effect on the facial skeleton and soft tissues of the head and neck

Ameloblastoma is a benign odontogenic tumor characterized by slow growth causing painless facial swelling. The tumor can behave locally aggressively, and may have direct destructive effects on the surrounding soft and hard tissues. This paper reports the unique case of a female patient with giant ameloblastoma of the mandible. Computed tomography (CT) revealed an enormous swelling of the left side of the face, resorption of the affected hemi-mandible, the left maxilla, and tissues of the temporal, infratemporal, and pterygopalatine fossae. Pressure from the tumor resulted in displacement and destruction of the facial skeleton, upper aero-digestive tract structures, and some structures of the neck. The patient was treated by radical hemimandibulectomy with removal of the tumorous mass. Precise knowledge of the anatomical structures, and of their locations and topographical relationships, is required in the diagnosis and treatment plan for each surgical procedure in cases of giant ameloblastoma. CT imaging can be used to determine the extent and exact location of the lesion, revealing other important details that may help in selecting appropriate treatment.

Introduction

Ameloblastoma is a locally aggressive and destructive benign tumor. This tumor has the potential to grow to an enormous size, with resulting bone deformity, facial asymmetry, and displacement of the soft tissues and neurovascular structures. Ameloblastoma usually originates from remnants of the dental lamina and odontogenic epithelium in the mandible and maxilla. 1,2 Ameloblastoma typically occurs at approximately equal rates in both sexes between 30 and 60 years of age. The peak age at diagnosis is in the fifth decade of life in the overall population, while the peak incidence in Europe, specifically, is reported in adults in the fifth and sixth decades of life. 3,4 These differences are likely attributable to socioeconomic factors in specific countries. 2-4 The objective of this case report was to highlight a giant destructive ameloblastoma. To the best of our knowledge, no similar cases describing such extensive changes in the facial skeleton and soft tissues have previously been reported in the European population.

Case report

A 60-year-old female patient was referred because of a massive swelling of the left side of her face and neck. The size of the swelling had increased over at least 10 to 15 years. The patient suffered no pain or problems with breathing and eating, although there was an obvious obstruction of the upper aero-digestive tract. Medical examination revealed a monstrous swelling (23.0 × 12.8 × 13.4 cm) on the left side of the face affecting the temporal, parotideomasseteric, zygomatic, buccal, and oral regions of the head, and the entire left half of the mandible, spreading to the anterior cervical region. Pathological masses filled most of the oral cavity, spreading from the left part of the mandible. Histopathological examination of a biopsy confirmed the presence of conventional "multicystic" ameloblastoma, with mixed follicular, acanthomatous, and reticular growth patterns. Three-dimensional (3D) computed tomography (CT) virtual reconstruction and surgical planning were performed using the software program SimPlant® OMS 10.1 (Materialise®, Leuven, Belgium).
CT angiography was performed to examine the relationship of the major neck vessels to the tumor and to evaluate the potential microanastomosis options. 3D CT imaging revealed a large, well-defined lesion affecting the entire left side of the mandible, destroying and pressing on the left maxilla and growing into the infratemporal and temporal fossae (Figure 1). Axial CT images (Figures 2-4) showed the presence of a large multilocular radiolucent area on the left side of the head. Growing from the midline, the tumor formed numerous cavities of various sizes and shapes containing septations. The cavities, presenting as "soap bubbles", were surrounded by tumorous bony septa with a thickness of up to 2.5 cm (Figure 2a-c). The left half of the mandible was completely consumed by the tumor (Figure 2b). The temporomandibular joint and associated structures on the left side were completely missing (Figure 2c). The alveolar bone of the lateral segment of the maxilla was also completely missing, and the body of the maxilla, including the sinus, was considerably reduced in volume but was not destroyed (Figure 3a, b). The size and shape of the zygomatic bone were severely affected by the abnormally growing tumorous mass extending from the infratemporal to the temporal fossa and pushing anteriorly and laterally. The zygomatic arch was 0.09 to 0.12 cm thick, compared with 0.17 to 0.41 cm on the healthy right side (Figure 3c). The cranial margin of the tumor was identified 4.15 cm above the zygomatic arch plane, extending into the temporal fossa (Figure 3d). Muscles as well as vessels and nerves located within the temporal, infratemporal, and pterygopalatine fossae were not identifiable. The tongue, sublingual tissue, and the oropharynx were pushed to the healthy side by the tumor, and the oropharynx was deviated 1.43 cm from the midline (Figure 2b). CT also revealed morphological changes and compression of the cervical blood vessels. The carotid bifurcation was separated from the tumor by only 0.53-cm-thick soft tissue and lay at a distance of only 1.18 cm from the midline (compared with 3.37 cm on the healthy side). The lumen of the internal carotid artery and the beginning of the external carotid artery (ECA) were preserved. The ECA had merged with the tumor at the level of the C2-C3 vertebrae (hyoid bone). The internal jugular vein was significantly compressed cranially, with its lumen disappearing at the level of the C4 vertebra (Figure 4a-c). Surgery was performed as radical hemimandibulectomy with removal of the tumorous mass in the left mandible via a submandibular approach. The patient's postoperative recovery was uneventful and smooth. Neither clinical nor radiological evidence of tumor recurrence was found during the 3.5-year follow-up period. The patient refused mandibular reconstruction owing to the complicated nature and risks of treatment.

Discussion

Ameloblastoma, often referred to classically as an intraosseous lesion, is a slow-growing benign epithelial odontogenic tumor. Ameloblastoma accounts for approximately 10% of all odontogenic tumors in the mandible and maxilla, which together comprise 90% of all cases of ameloblastoma. 4 Most cases of odontogenic tumors are diagnosed in young adults, with a median age of 10 to 38 years, with no significant sex predilection. 5-8 In Asia and North America, the mean (± standard deviation) age of patients with ameloblastoma is 38.27 ± 17.78 years, 6,8 and older sources report a mean of 36 years.
5 Approximately 80% of ameloblastomas occur in the mandible, usually in the posterior region, and they represent only 1% of all oral/head and neck tumors. 4,7-9 Patients are very often asymptomatic because tumor growth is intermittent, with no evidence of swelling. In cases of massive and rapid growth, aggressive tumors can cause severe disfigurement, facial asymmetry, pathological fractures, and functional impairment of neurovascular structures in affected and surrounding areas. 8 Tumors may erode through the cortical bone into adjacent soft tissues and impair facial expressions, speech, and mouth opening. Paraesthesia and pain are rare. 10 Up to 80% of cases are associated with an unerupted mandibular third molar, and the remaining 20% occur in the maxilla, causing a grotesque facial appearance if the patient delays seeking treatment. The most common symptom is a painless facial swelling. Other symptoms include malocclusion, and tooth displacement and loosening. 7,9,10 Histological examination remains the most sensitive tool for the differential diagnosis. However, clinical and radiological findings are important in the final diagnosis. 11 Many lesions, especially smaller ones, are asymptomatic and may be detected incidentally during an intra-oral examination or by conventional dental intra-oral or panoramic X-ray. With larger tumors, CT with 3D volume rendering techniques or magnetic resonance imaging is useful and provides precise information in the assessment of the buccolingual expansion of the lesion and cortical bone destruction. 9-11 Knowledge of the characteristic radiological imaging features narrows the differential diagnosis and is crucial in planning treatment. The treatment of ameloblastoma includes various surgical methods, which are divided into two types: a conservative approach (type I), such as enucleation with curettage, and a radical approach (type II), with wide local excision and reconstruction. Considering the lesser aggressiveness of the tumor, enucleation is an adequate treatment for unicystic-type lesions, while radical treatment with bone resection is appropriate for aggressive multicystic ameloblastoma, which has a higher recurrence rate than the unicystic variant. 5,8 Segmental hemimandibulectomy with wide margins and concurrent reconstruction is currently accepted as the treatment of choice in most cases. Segmental resection with 1- to 2-cm margins is therefore favored for solid or multicystic ameloblastoma. 8,10 A recent quantitative and epidemiological study revealed that the risk of recurrence is three times higher with conservative treatment compared with resection. Solid ameloblastoma shows a high recurrence rate (60%-90%) with conservative treatment. 12 In conclusion, even though human anatomy has not changed over time, precise knowledge of anatomical structures and their clinical presentation is required in the diagnosis and treatment plan for current surgical procedures. Although ameloblastoma is a locally invasive neoplasm, delayed surgical treatment can lead to severe facial disfigurement; therefore, early referral to a specialist is the best approach.

Authors' contributions

KL designed the study and created the figures; was responsible for the acquisition, analysis, and interpretation of the data for the work; and had final control and approval of the version to be published (PhD supervisor). BB wrote the manuscript and figure legends (PhD student and the oral surgeon who performed the operation).
PK is a maxillofacial surgeon who performed the operation and was actively involved in obtaining the results data. MA is an otorhinolaryngologist who performed the operation and evaluated the patient. DK took part in critical revision of the manuscript and of the descriptions of the anatomical relationships. IH was responsible for acquiring the relevant references and is the corresponding author.

Declaration of conflicting interest

The authors declare that there is no conflict of interest.

Ethics statement

The present study was performed in accordance with the current laws in our country and with the written approval (number: 13N/2020) of the Scientific Ethical Committee of the Faculty of Medicine, Pavol Jozef Šafárik University in Košice, which are based on the World Medical Association Declaration of Helsinki. Written informed consent was obtained from the patient for publication of this case report prior to submission of the manuscript, including the accompanying images.
Effects of traditional forest management on carbon storage in a Mediterranean holm oak (Quercus ilex L.) coppice

Sebastiano Sferlazza, Federico Guglielmo Maetzke, Massimo Iovino, Giorgio Baiamonte, Vincenzo Palmeri, Donato Salvatore La Mela Veca

In the last decade, there has been increased interest in measuring and modeling storage in the five forest carbon pools: the aboveground and belowground biomass (living biomass), the deadwood and litter (dead biomass), and the soil (soil organic matter). In this paper, we examined carbon storage in a holm oak coppice stand in the Madonie Mountains in Sicily (Italy), which is a typical case of managed coppice stands. Today, traditional coppice practices are only applied to a small number of forested areas in Sicily, such as the selected site, because of the decline in demand for wood and charcoal. The dendrometric parameters of the stands were recorded, and silvicultural indices were calculated immediately after cutting as well as during and at the end of the rotation period; they showed the trends typical of coppices. The carbon stocks in the five carbon pools were quantified to investigate the effects of coppicing on carbon storage in this Mediterranean area. Results showed that the lowest living biomass values were observed in the first years following coppicing, except for litter carbon. Belowground biomass and the soil carbon stock did not vary significantly with coppicing. During the rotation period, the aboveground biomass was completely restored, and the balance of the carbon stocks indicates that coppicing is a sustainable forest management choice from the point of view of the carbon balance, given that the logged trees are generally used for bioenergy production.

Introduction

According to the Intergovernmental Panel on Climate Change (IPCC 2003), carbon storage in forest ecosystems involves the following five carbon pools: the aboveground and belowground biomass (living biomass), the deadwood and litter (dead biomass), and the soil (soil organic matter). Forest carbon storage provides an important mechanism for mitigating climate change, and it is an essential ecosystem service that is classified as a regulating service (MEA 2005). However, the role of forests as a carbon pool is only ensured if the proportion of living biomass exceeds the loss of carbon due to dying biomass, forest fires, and harvest. In the context of climate change, Mediterranean forests are considered vulnerable to the loss of biodiversity and carbon storage services (Fischlin et al. 2007, Badalamenti et al. 2017). Moreover, the projected effects of climate change in the Mediterranean basin may lead to reduced productivity and lower resilience of forests (Sferlazza et al. 2017). Therefore, estimating carbon stocks and their distribution in the different components of ecological and production systems is essential to understanding how carbon is allocated among labile and stable components. This information is also important for evaluating the quantity of carbon that can potentially be emitted into the atmosphere owing to natural or human-induced disturbances (Sierra et al. 2007).

Interest in measuring and modeling carbon storage in forests has greatly increased over the last decade, and many studies have adopted a comprehensive approach to the quantification of carbon stocks that accounts for all five carbon pools in forest ecosystems (Nunes et al. 2010, De Simon et al. 2012, Ruiz-Peinado et al. 2013, Moreno-Fernandez et al. 2015, Oubrahim et al.
2015, Ruiz-Peinado et al. 2016). However, some studies have focused on only one carbon pool, such as the soil (Vesterdal et al. 2008, Diaz-Pines et al. 2011, Rodeghiero et al. 2011), deadwood (logs, snags, fine and coarse woody debris: Herrero & Bravo 2012, Herrero et al. 2014, Paletto et al. 2014) or living biomass (Ruiz-Peinado et al. 2011, Ruiz-Peinado et al. 2012), while other studies have considered four carbon pools by omitting the soil (Xu et al. 2016) or deadwood (Rodeghiero et al. 2010, Scalenghe et al. 2015). Pan et al. (2011) attempted to quantify forest carbon pools at the global level and estimated the total stock to be 861 Pg of carbon, with 45% in the soil (up to 1 m in depth), 42% in the above- and belowground biomass, 8% in deadwood and 5% in litter. Geographically, 55%, 32% and 14% of this carbon is stored in tropical, boreal and temperate forests, respectively, which underlines the need for studies of the carbon pools in the Mediterranean region.

In recent years, some research has focused on the influence of forest management on carbon storage (Powers et al. 2013), and some studies have also investigated the distribution of carbon stocks among the different pools in the Mediterranean region (Bravo et al. 2008, De Simon et al. 2012, Ruiz-Peinado et al. 2013, Moreno-Fernandez et al. 2015, Oubrahim et al. 2015, Ruiz-Peinado et al. 2016). However, most of those studies were carried out in coniferous stands, except for De Simon et al. (2012) and Oubrahim et al. (2015), who investigated broadleaved stands.

The history of Mediterranean forests encompasses fragmentation, degradation and deforestation, natural expansion (Scarascia-Mugnozza et al. 2000) and afforestation, and Sicilian forests reflect all of these dynamics. In Sicily, the typical forest stand is dominated by holm oak (Quercus ilex L.), which forms a discontinuous patchwork mainly located along the slopes of the primary mountain ranges. Holm oak coppices account for 28,650 hectares, or approximately 10% of the forest area in Sicily (Camerano et al. 2011), and play a significant role in the carbon balance of this region. In most of these stands, minimal or no silvicultural management has been applied in recent decades. Since these forests are generally characterized by a simplified structure and composition that originated from past intensive coppicing, they urgently require silvicultural treatments to ensure both their ecological resilience and their functioning as carbon pools. Only a few holm oak coppices are still managed; thus, the reconstruction of historical management is in most cases practically impossible. Coppicing remains the only traditional forest management system used to provide firewood at the local scale.

The objectives of this study were (i) to quantify the carbon stocks in the five carbon pools (above- and belowground biomass, deadwood, litter and soil) in a holm oak coppice stand generated by silvicultural felling carried out at different times, and (ii) to investigate the effects of traditional forest management, in the form of coppicing, on carbon storage in a Mediterranean area by examining a significant example of a correctly and timely managed stand in Sicily. The quantification of carbon in forest stands is currently of interest to forest managers, since carbon storage can be significantly modified through silvicultural practices (Del Río et al. 2008). This work contributes to the knowledge of carbon dynamics in a managed holm oak coppice in a Mediterranean area.
Study area

The study area is located in the Madonie Mountains (Sicily, Italy; 37° 53′ N, 14° 06′ E, elevation ~1000 m a.s.l.) within the B zone of the Madonie National Park, in the meso-Mediterranean vegetation belt. The selected forest stand is mainly composed of holm oak (Quercus ilex L.), downy oak (Quercus pubescens Willd.) and manna ash (Fraxinus ornus L.). According to data collected at the Castelbuono meteorological station over the period 1980-2003, the mean annual rainfall is 811 mm, and the corresponding mean air temperature is 14.5 °C. According to the USDA classification system, the soil in the plots is Lithic Xerorthents (Soil Survey Staff 2010).

In the past, coppicing represented the main silvicultural management system aimed at firewood and charcoal production in the Madonie Mountains (Cullotta et al. 2016a, 2016b). The number of residents of the nine municipalities of the Madonie Mountains (Castelbuono, Petralia Soprana, Petralia Sottana, Castellana Sicula, Polizzi Generosa, Isnello, Gratteri, Collesano and Geraci Siculo) decreased from 52,762 in 1951 to 31,258 in 2011 (-41%; ISTAT 2017) due to emigration to other countries and internal migration. The trend of forest harvesting and its products (cut timber and fuel wood) in Sicily (ISTAT 2011) reflects the depopulation of rural areas such as the Madonie Mountains: (i) forest harvesting decreased from 133,000 m³ in 1950 to 35,000 m³ in 2011; (ii) cut timber decreased from 36,000 m³ in 1950 to 16,000 m³ in 2011; and (iii) fuel wood decreased from 97,000 m³ in 1950 to 20,000 m³ in 2011 (La Mela Veca et al. 2016). The gradual abandonment of silvicultural treatments has consequently left these stands in a state of natural evolution, which is now the most common management condition. Today, traditional coppicing practices are applied to only a small number of areas, such as the forest stand selected for this study.

Four plots (A1, A2, A3, and A4) were established in the study area on the northeastern slopes (Tab. S1 in the Supplementary material), characterized by coppice stands of different ages under the same 40-year rotation, based on the past silvicultural felling dates. In particular, felling occurred in 2013 in plot A1, 2009 in plot A2, 1993 in plot A3, and 1973 in plot A4.

Sampling of dendrometric and structural attributes

Field surveys were conducted in 2014, and one circular subplot with a 20-m radius was established in each plot. The subplots were as homogeneous as possible in terms of altitude, exposure and stand structure. For the dendrometric characterization, all trees taller than 1.30 m were individually labeled, and their diameters at breast height (Dbh ≥ 4 cm) and heights (H) were measured in each subplot. Dbh values were measured for all shoots on each stool. Using these basic data, the following parameters were calculated for each plot: stem density (shoots ha⁻¹), stool density (stools ha⁻¹), mean tree diameter (Dm, in cm), mean tree height (Hm, in m) and basal area (G, in m²), and the whole shoot volume (V, in m³) was calculated using the mathematical models developed by Tabacchi et al.
(2011). Moreover, three different deadwood components were sampled in each subplot: woody debris (WD), standing dead trees (SDT) and stumps (S). WD includes fallen dead trees and branches lying on the ground with a minimum top diameter (the diameter of the narrowest section of the end of a piece of deadwood) of 3 cm and a minimum length of 20 cm; SDT includes all dead trees still standing with Dbh ≥ 3 cm; S includes the portions of trees remaining after cutting or, less frequently, stems truncated by natural hazards at less than 1.30 m and with a diameter of at least 3 cm at the cut or breaking section. All deadwood components were classified according to the decay classes adopted for deadwood assessment by the Italian National Forest Inventory (Paletto & Tosi 2010, Di Cosmo et al. 2013, Paletto et al. 2014). To characterize the structure, measurements were taken along 10 × 12 m transects oriented to the cardinal directions within the core subplots, owing to the homogeneity of the forest stand. In each transect, we recorded the Dbh and H of all living trees, the height of crown insertion (m), the crown radius (mean of the radii taken at the four cardinal points), the diameter of the cut section and the height of the stools (in the A1 and A2 plots), and the polar coordinates (angle and distance of each shoot and stool from the center of the subplot). Natural regeneration was also recorded along each transect; in particular, the origin of the plants from seeds or sprouts was determined by examining the form of the stem base and the belowground root system. All plants were classified based on dimensional thresholds: h < 130 cm, or h ≥ 130 cm with Dbh < 4 cm. The forest crown cover was determined using the Stand Visualization System (SVS) software (McGaughey 1997), which generates graphic images of stand conditions and displays overhead, profile, and perspective views of a forest stand. Finally, we assessed the diversity of the holm oak coppice stands using three indices (see Tab. S2 in the Supplementary material): the Shannon index (SH; Shannon 1948), which accounts for species diversity; the Winkelmass index (W; Von Gadow et al. 1998), which describes the spatial distribution, or horizontal structure, of the stand; and the vertical evenness (VE) index, which describes the vertical structure (Neumann & Starlinger 2001). The SH index integrates both the species number and the relative abundance of the different species, assuming values from 0 to ∞; values close to zero indicate low species diversity, while high values indicate high species diversity. The W index reflects the regularity of the horizontal spatial distribution of trees in a forest and was calculated based on a number of reference trees (n = 6) and the k trees closest to each randomly identified reference tree (k = 4). The W index assumes values between 0 (regular distribution) and 1 (clumped distribution); values close to 0.5 indicate a random distribution. The VE index characterizes the vertical distribution of the coverage within a stand and was assessed using the TSTRAT function (Latham et al. 1998), which defines multiple vertical height cut-off points based on tree and crown lengths and assigns individual trees to vertical strata depending on the position of the tree crowns relative to these cut-off points. In this study, we set the lower stratum limit at a height of 1.30 m based on field observations; all trees with heights below this limit were placed in the lower stratum. The VE index assumes values between 0 and 1; low VE values are characteristic of single-storied stands, whereas the theoretical maximum of 1 would correspond to vertically equally distributed trees.
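As a concrete illustration of the first two indices, the following minimal Python sketch (hypothetical function names, not the authors' code) computes SH from a list of species labels and W from stem coordinates, with n = 6 reference trees, k = 4 neighbours and the standard angle 360°/k = 90° used in this study:

import numpy as np
from collections import Counter

def shannon_index(species):
    # SH = -sum(p_i * ln(p_i)) over the proportions p_i of each species
    counts = np.array(list(Counter(species).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def winkelmass(xy, n_ref=6, k=4, alpha0=90.0, seed=None):
    # Mean uniform angle index over n_ref randomly chosen reference trees:
    # W_i counts how many of the k angles between consecutive neighbours,
    # viewed from the reference tree, fall below the standard angle alpha0.
    rng = np.random.default_rng(seed)
    xy = np.asarray(xy, dtype=float)
    w_values = []
    for i in rng.choice(len(xy), size=n_ref, replace=False):
        dist = np.linalg.norm(xy - xy[i], axis=1)
        dist[i] = np.inf                      # exclude the reference tree itself
        nbrs = np.argsort(dist)[:k]           # indices of the k nearest neighbours
        d = xy[nbrs] - xy[i]
        az = np.sort(np.mod(np.degrees(np.arctan2(d[:, 1], d[:, 0])), 360.0))
        gaps = np.diff(np.append(az, az[0] + 360.0))  # k angles summing to 360
        w_values.append(np.sum(gaps < alpha0) / k)
    return float(np.mean(w_values))

For a regularly spaced stand the four angles approach 90° and W tends to 0, while for clumped trees most angles are small and W tends to 1, matching the interpretation given above.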
Soil sampling

Soil sampling was performed at 10 randomly selected points in each subplot. At each point, undisturbed soil cores (0.05 m in height by 0.05 m in diameter) were collected, after removal of the litter layer, at depths of 0 to 0.05 m and 0.05 to 0.10 m, for a total of 20 cores per subplot. In the laboratory, the undisturbed soil cores were used to determine the initial volumetric soil water content, θi (m³ m⁻³), i.e., the antecedent moisture condition (the soil water content at the time of sampling), and the dry soil bulk density, ρb (Mg m⁻³). Both quantities were measured using the oven-drying method and were averaged over the two depths. Ten disturbed soil samples (0 to 0.10 m in depth) were also collected in each subplot to determine the clay (cl), silt (si), and sand (sa) contents according to USDA standards (Gee & Bauder 1986). The soil organic carbon, SOC (kg Mg⁻¹), of seven samples was measured by the Walkley-Black method.

Carbon stock estimation

To estimate the carbon stocks in the five forest carbon pools, the approach shown in Fig. 1 was adopted.

(Fig. 1 - Flowchart describing the approach adopted for estimating forest carbon pools in the investigated stands.)

The biomass equations for holm oak, downy oak and manna ash developed by Tabacchi et al. (2011) were used to estimate the dry weight of the aboveground biomass of the shoots, ABVshoots (Mg ha⁻¹), from the Dbh and H of the shoots. In the recently cut plots A1 (felled in 2013) and A2 (2009), the aboveground biomass (ABV) of the cut stools was also estimated as follows (eqn. 1):

ABV = (V · WBD) / A    (1)

where V (m³) is the fresh volume of the cut stool, WBD (Mg m⁻³) is the wood basic density used to convert fresh volume to dry weight for each forest typology (ISPRA 2015), and A (ha) is the area of the subplot. The fresh volume of the cut stools was calculated assuming a cylindrical shape for the cut stool (eqn. 2):

V = π (d/2)² h    (2)

where d (in m) is the average diameter of the cut section, taking into account two perpendicular measures, and h (in m) is the height of the stool.

The aboveground biomass per unit area (Mg ha⁻¹) was defined as the sum of the aboveground biomass of the shoots and the aboveground biomass of the cut stools. A carbon fraction of dry matter conversion factor of 0.5 (IPCC 2003) was applied to obtain the aboveground carbon, Cabv (Mg ha⁻¹), from the biomass.

The belowground biomass was estimated by applying a standard root/shoot ratio (dimensionless) for each forest typology to the aboveground biomass (ISPRA 2015). In particular, we used a coefficient equal to: (i) 1 for evergreen oaks (i.e., holm oak); (ii) 0.2 for other oaks (i.e., downy oak); and (iii) 0.24 for other broadleaved species (i.e., manna ash). The same 0.5 carbon fraction of dry matter conversion factor (IPCC 2003) was applied to obtain the belowground carbon, Cblw (Mg ha⁻¹), from the biomass. Because these stands are managed as coppice, the belowground biomass calculated for the 40-year-old stand is assumed to remain constant after each cutting, when shoot production begins.
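As an illustration, this living-biomass bookkeeping can be sketched in Python as follows (hypothetical function names; ABVshoots is assumed to come from the Tabacchi et al. 2011 allometries, which are not reproduced here):

import math

CARBON_FRACTION = 0.5                                  # IPCC (2003)
ROOT_SHOOT = {"holm oak": 1.0, "downy oak": 0.2, "manna ash": 0.24}  # ISPRA (2015)

def cut_stool_volume(d, h):
    # eqn 2: cylinder from mean cut-section diameter d (m) and stool height h (m)
    return math.pi * (d / 2.0) ** 2 * h

def aboveground_carbon(abv_shoots, stool_volumes, wbd, area_ha):
    # eqn 1 summed over the cut stools, added to the shoot biomass (Mg ha-1),
    # then converted from dry matter to carbon
    abv_stools = sum(stool_volumes) * wbd / area_ha
    return (abv_shoots + abv_stools) * CARBON_FRACTION

def belowground_carbon(abv_by_species):
    # root/shoot ratio per forest typology, then the same 0.5 carbon fraction
    blw = sum(abv * ROOT_SHOOT[sp] for sp, abv in abv_by_species.items())
    return blw * CARBON_FRACTION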
Litter carbon, Clitter (Mg ha⁻¹), was estimated from the amount of carbon in the aboveground biomass based on linear relationships between stand biomass and litter for each forest typology; this approach has been used in many forest studies (Waring & Running 1998, Federici et al. 2008). We applied the following relationship for evergreen oak coppices (eqn. 3):

y = -0.0299 x + 9.366    (3)

where x is the aboveground carbon (Mg ha⁻¹) and y is the litter carbon (Mg ha⁻¹). Thus, the litter carbon is assumed to equal 9.366 Mg ha⁻¹ when there is no aboveground carbon.

Dead mass in the form of woody debris (WD), standing dead trees (SDT) and stumps (S) was estimated by recording all of the dead material in the subplots. In the case of WD and S, the volume (m³) was calculated from the maximum diameter D (in m), the minimum diameter d (in m), and the length/height L (in m) of the dead material (eqn. 4).

In the case of SDT, the volume (V, in m³) was calculated using the standard biometric equation (Cannell 1984; eqn. 5):

V = G · h · f    (5)

where G (in m²) is the basal area, h (in m) is the height obtained from the hypsometric curve, and f is a standard stem form factor equal to 0.5.

To estimate the deadwood carbon stock, the volume (in m³) in each subplot was converted to dead mass (Mg) using the appropriate basic density (kg m⁻³) value for each deadwood category (broadleaves, in our case) and decay class (Di Cosmo et al. 2013, Paletto et al. 2014). The dead mass was then converted to deadwood carbon per unit area, Cdead (Mg ha⁻¹), by applying an oak wood carbon factor of 0.4895 obtained by direct analysis (Matthews 1993).

The soil carbon stock per unit area, Csoil (Mg ha⁻¹), was calculated using the following relationship (eqn. 6):

Csoil = SOC · ρb · L · (10000/1000)    (6)

where SOC (kg Mg⁻¹) is the soil organic carbon content, ρb (Mg m⁻³) is the soil bulk density, L (m) is the depth of the sampled layer, and the ratio 10000/1000 expresses Csoil in Mg ha⁻¹.
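A minimal Python sketch of these dead-biomass and soil computations follows (hypothetical function names). The exact form of eqn. 4 is not reproduced in the text above, so Smalian's formula, a common choice when only the two end diameters and the length of a piece are measured, is assumed here for the WD and stump volumes:

import math

def litter_carbon(c_abv):
    # eqn 3 for evergreen oak coppices (Mg ha-1)
    return -0.0299 * c_abv + 9.366

def wd_stump_volume(D, d, L):
    # eqn 4 is not spelled out above; Smalian's formula is ASSUMED here:
    # V = (pi / 8) * (D**2 + d**2) * L, with D, d and L in metres
    return math.pi / 8.0 * (D ** 2 + d ** 2) * L

def sdt_volume(G, h, f=0.5):
    # eqn 5 (Cannell 1984): standing dead tree volume from basal area and height
    return G * h * f

def deadwood_carbon(volume_m3, basic_density_kg_m3, area_ha, carbon_factor=0.4895):
    # volume -> dry mass via the decay-class basic density, then the oak carbon factor
    mass_mg = volume_m3 * basic_density_kg_m3 / 1000.0  # kg -> Mg
    return mass_mg * carbon_factor / area_ha            # Mg ha-1

def soil_carbon(soc_kg_mg, rho_b, depth_m):
    # eqn 6: Csoil (Mg ha-1) = SOC * rho_b * L * 10000/1000
    return soc_kg_mg * rho_b * depth_m * 10.0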
Data analysis

Spearman's correlation analysis was used to assess the correlation between the species diversity index (SH) and the stand structure indices (W and VE). Differences in the carbon pools among plots were analyzed with the Kruskal-Wallis one-way analysis of variance on ranks. If significant differences were detected, a post-hoc Tukey's Honestly Significant Difference (THSD) test was applied for pairwise comparisons, and Dunn's method was applied for post-hoc pairwise multiple comparisons in the case of unequal treatment group sizes. For each of the basic soil properties considered in this investigation (cl, si, sa, θi, ρb, SOC), the dataset was summarized by calculating the arithmetic mean, ma, and the associated coefficient of variation, CV. The four plots were compared in terms of the basic soil properties using the THSD test. The significance level for all tests was α = 0.05.

Dendrometric and structural aspects

For each forest plot, the values of all measured and derived stand parameters are reported in Tab. 1. Moving from plot A1 to plot A4, the complexity and closure of the forest stands increased with the age of the stems (shoots) since the last coppice felling (see Tab. S1 in the Supplementary material). For example, the crown cover index progressively increased from 34% in plot A1 to 97% in plot A4; similarly, the basal area (G) and the volume (V) of all shoots increased from A1 to A4 (Tab. 1). Conversely, the stool density decreased from plot A1 to plot A4, which can be explained by normal competition among plants for space and light over time.

In the recently cut plots A1 (2013) and A2 (2009), the percentages of natural regeneration with height < 130 cm were 100% (41,322 shoots ha⁻¹) and 67% (21,916 shoots ha⁻¹), respectively, and the corresponding percentages of sprout-origin regeneration were 86% (35,447 shoots ha⁻¹) in plot A1 and 90% (29,583 shoots ha⁻¹) in plot A2 (Tab. S3 in the Supplementary material). In plots A3 and A4, the natural regeneration with height < 130 cm was less abundant than in plots A1 and A2, at 90% (5,000 shoots ha⁻¹) and 97% (7,750 shoots ha⁻¹), respectively. In both plots, the natural regeneration was of sprout origin (Tab. S3). Considering (i) that manna ash is a light-demanding species and (ii) that there is a higher density of sprout-origin regeneration in the more recently disturbed plots (A1 and A2), the lower abundance of standards in A3 (63.7 plants ha⁻¹) and A4 (127.3 plants ha⁻¹) suggests a tree-cutting effect; the openness of the canopy and the increased light should have positively influenced seed germination and early seedling development.

The values of all structural indices calculated for each forest plot are reported in Tab. S4 (Supplementary material). Generally, all stands showed low species diversity, but the Shannon index values were higher in plots A1, A2 and A3, where three species (holm oak, downy oak and manna ash) were detected in the tree layers; in contrast, only two species (holm oak and downy oak) were found in plot A4. Holm oak was dominant in all stands. Forest stands A1 and A2 were characterized by a clumped tree distribution, with W index values of 0.71 and 0.83, respectively, whereas stands A3 and A4 were characterized by randomly distributed trees, with W index values of 0.63 and 0.46. The vertical distribution of crowns obtained by TSTRAT consisted of three strata for the A2, A3 and A4 forest stands, which were characterized by VE values greater than 0.8, and of two strata for the most recently cut stand (A1), with a VE value of 0.37. Except for the latter stand, the distribution of crowns among the strata was uniform, since the crowns of all trees fell within the vertical strata. There were no significant correlations (p > 0.05) between any pair of indices (SH, W and VE).

Soil properties

The A1, A2, A3 and A4 forest plots were established very close together, no further than 600 m apart, which ensured the pedological uniformity of the site; the mean steepness values were also very similar, varying from 48% (A4) to 59% (A3; Tab. 2). Therefore, stand age represented the main factor that differed among these plots. Plots A3 and A4 did not differ significantly in terms of any basic soil property (Tab. 2), suggesting that the possible effects of soil alteration due to tree cutting did not last for more than 20 years.
A coppicing effect was detectable when the less disturbed plots (A3 and A4) were compared with the more recently disturbed plots (A1 and A2). The soil in the latter plots was denser and had less organic carbon than in the former, but the effects of coppicing on ρb and SOC were statistically negligible. However, the plot disturbed six years before sampling (A2) had significantly more sand and less clay than the less disturbed plots (A3 and A4), and a similar result was detected in the most recently disturbed plot (A1), although the differences were smaller and not significant. Given the close proximity of the four stands, in particular plots A1 and A3, the differences in soil texture likely resulted from coppicing. The soil in plot A2 remained exposed to the direct action of rainfall the longest, and it was also affected by some loss or weakening of stabilizing agents, since the re-establishment of plant cover was rapid but not immediate (i.e., a couple of years). Therefore, the conditions in this plot particularly favored soil erosion and likely facilitated the removal of fine, easily transportable soil particles. The data collected in plot A1 were consistent with this interpretation, since they suggested that the above-described phenomena began soon after coppicing.

Carbon storage

The aboveground carbon (Cabv) was higher in the older coppices (109.82 Mg ha⁻¹ in A3 and 245.67 Mg ha⁻¹ in A4) than in the recently cut plots (23.87 Mg ha⁻¹ in A1 and 22.00 Mg ha⁻¹ in A2), with significant differences between the plots except for A1 and A2 (Tab. 3). The belowground carbon (Cblw) was 170.93 Mg ha⁻¹ in plot A4 and was assumed to have remained constant over time. The litter carbon (Clitter) decreased from the recently cut plots (8.65 Mg ha⁻¹ in A1 and 8.71 Mg ha⁻¹ in A2) to the older coppices (6.08 Mg ha⁻¹ in A3 and 2.02 Mg ha⁻¹ in A4), with significant differences between the plots except for A1 and A2 (Tab. 3). The recently cut plots (A1 and A2) had low levels of carbon stored in the deadwood (Cdead), 1.61 Mg ha⁻¹ in A1 and 0.06 Mg ha⁻¹ in A2, with no significant difference between the two plots. In contrast, Cdead was higher in the older coppices (4.89 Mg ha⁻¹ in A3 and 9.56 Mg ha⁻¹ in A4), with no significant difference between these two plots but with significant differences between plot A4 and the recently cut plots (A1 and A2; Tab. 3). The soil carbon stock (Csoil) varied from 67.25 to 87.90 Mg ha⁻¹ (Tab. 4). The lowest Csoil values were observed in the stands with the lowest stand density, with no significant differences among the four plots (Tab. 4). The total carbon stocks (Cstock) in plots A1, A2, A3 and A4 were 276.89 Mg ha⁻¹, 268.95 Mg ha⁻¹, 377.61 Mg ha⁻¹ and 516.08 Mg ha⁻¹, respectively (Fig. 2), with no statistically significant differences among the plots (Tab. 4).

The Cstock values of the studied stands were higher than those found for stands of other species under similar climatic conditions and with similar sampling methods in Mediterranean forests, such as the 86.5-159.5 Mg ha⁻¹ reported by Oubrahim et al. (2015) for Quercus suber L. stands in Morocco, the 234.4-317.3 Mg ha⁻¹ observed by Ruiz-Peinado et al. (2013) for a reforestation of Pinus pinaster Ait. in degraded open woodlands of Quercus faginea Lamk. and Q. suber L. in Spain, and the 197.1-276.8 Mg ha⁻¹ recorded by Ruiz-Peinado et al. (2016) for a reforestation of Pinus sylvestris L. established on natural forests of Quercus pyrenaica Willd. in Spain. The most important carbon pool identified in the studied stands was the living biomass (70.4-80.7%), which includes the above- and belowground carbon (Fig. 2). The next largest was the soil carbon pool (top 10 cm in our case), which accounted for 17.0-25.9% depending on the stand, with the remaining carbon held in deadwood and litter (Fig. 2). Soils of Mediterranean broadleaved forests are relatively poor in carbon; our results therefore fall well within the range of other observations (Diaz-Pines et al. 2011, Ruiz-Peinado et al. 2013, Moreno-Fernandez et al. 2015, Oubrahim et al. 2015, Ruiz-Peinado et al. 2016).

(Tab. 3 - Carbon stocks in the living and dead biomass of each forest stand. For a given carbon pool, values followed by the same letter are not significantly different (p > 0.05) according to Tukey's Honestly Significant Difference test.)
(Tab. 4 - Soil carbon stocks (Csoil, Mg ha⁻¹) and total carbon stock (Cstock, Mg ha⁻¹) of each forest stand (± standard error). For a given carbon pool, means followed by the same letter are not significantly different (p > 0.05) according to Tukey's Honestly Significant Difference test.)

The results revealed an effect of coppicing on carbon storage in the living and dead biomass; except for litter carbon, the lowest carbon storage values were observed in the plots with the lowest stand density and basal area (A1 and A2), which is explained by coppicing. In terms of soil carbon, coppicing caused an almost immediate loss of carbon (i.e., in plot A2), which was followed by a slow recovery of this pool over time, as exhibited by the older coppices A3 and A4 (Tab. 4). The Csoil value measured in the most recently disturbed plot, A1, was consistent with this interpretation, since it is conceivable that these phenomena begin a few years after coppicing; however, there were no statistically significant differences in soil carbon among the plots (Tab. 4). These results agree with those of other studies (Powers et al. 2013, Ruiz-Peinado et al. 2013, 2016), which, on average, report little effect of harvesting on soil carbon, although this depends on the type of harvest.

Taking into account the relationships between the carbon stocks and five dendrometric parameters (crown cover, number of stools, number of shoots, basal area and volume), the most appropriate parameter for describing the changes in the carbon stocks among the four forest stands was the basal area, G (m² ha⁻¹), since there were statistically significant relationships between this parameter and three carbon pools (aboveground carbon, dead carbon and soil carbon). In particular, Cabv, Cdead and Csoil increased significantly and linearly with G (Fig. 3a, Fig. 3b, Fig. 3c). In all cases, the correlations were very strong (R² equal to 0.922, 0.890 and 0.845, respectively). A positive relationship between Csoil and G has already been reported in the Mediterranean area (Oubrahim et al. 2015). Therefore, these relationships, despite being obtained for a small number of plots, could be applied under similar forest conditions (i.e., stand development, climate, topography, soil properties) to estimate carbon storage in the aboveground, dead and soil pools of holm oak coppice stands.
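The linear relationships behind Fig. 3 amount to ordinary least-squares fits of each pool against basal area; a minimal sketch (illustrative only, with the plot values taken from Tab. 3 and Tab. 4) is:

import numpy as np

def fit_pool_vs_basal_area(G, C):
    # OLS fit C = a*G + b across the plots, returning slope, intercept and R^2,
    # the statistic reported for Cabv, Cdead and Csoil against basal area G.
    G, C = np.asarray(G, dtype=float), np.asarray(C, dtype=float)
    a, b = np.polyfit(G, C, 1)
    resid = C - (a * G + b)
    r_squared = 1.0 - resid.var() / C.var()
    return a, b, r_squared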
Conclusions

This study examined managed holm oak stands that had been regularly coppiced under a 40-year rotation in the Madonie Mountains of Sicily. This case study is particularly significant given that, contrary to most Sicilian stands, few holm oak coppices are still managed and the reconstruction of historical management is in most cases practically impossible. The trends in the dendrometric parameters and silvicultural indices of the stands immediately after cutting and during and at the end of the rotation period were typical of coppices, i.e., they were characterized by poor compositional and structural diversity, with very limited diversification in the older stands.

A complete analysis of the five carbon pools was carried out. The living biomass was the main carbon pool, and, on average, no significant differences were found in the total carbon stock or the soil carbon of the investigated forest stands. However, we observed an effect of coppicing on carbon storage in the living and dead biomass. Except for litter carbon, the lowest living biomass values were observed in the plots in the early stages of development after coppicing. The results of this study indicate that coppicing does not affect the carbon balance, endorsing the sustainability of this kind of management, at least over a four-decade perspective and from the point of view of the total carbon stocks and the carbon stored in the soil. The limited number of investigated plots does not allow conclusions of general validity to be drawn on the relationship between basal area and the three carbon pools (aboveground, dead and soil carbon) in the Mediterranean area. However, the data obtained in the present study improve the understanding of the effects of coppicing on carbon storage in a Mediterranean holm oak stand. Long-term monitoring of the investigated stands, as well as the characterization of other managed coppices in Sicily and in the Mediterranean area, would be useful for the development of an international database on the effects of coppice management on carbon storage.

(Tab. 2 - Summary statistics of the basic soil properties for each forest stand. (Ns): sample size; (CV): coefficient of variation; (ρb): dry soil bulk density; (SOC): soil organic carbon. For a given variable, means followed by the same letter are not significantly different (p > 0.05) according to Tukey's Honestly Significant Difference test.)

(Fig. 2 - Estimation of carbon stocks in the five carbon pools for each investigated plot.)
HIGH RESOLUTION COMPUTED TOMOGRAPHY AND CHEST X-RAY FINDINGS IN PATIENTS WITH PULMONARY TUBERCULOSIS

Background: Pulmonary tuberculosis (PTB) is a major public health problem in Nepal. Diagnosis of pulmonary tuberculosis rests on bacteriological confirmation of respiratory specimens; however, a negative smear requires clinical and radiological evaluation for diagnosis in suspected patients. This study focuses on radiological findings in both pulmonary bacteriologically confirmed (PBC) and pulmonary clinically diagnosed (PCD) tuberculosis.

INTRODUCTION

Tuberculosis (TB) is an infection caused by Mycobacterium tuberculosis and is a leading cause of mortality, predominantly in developing countries.1 In Nepal, a total of 31,764 cases of TB were notified and registered in 2016/17. About 71% of all TB cases were pulmonary, of which 77% were bacteriologically confirmed.2 WHO classifies PTB into pulmonary bacteriologically confirmed tuberculosis (PBC) and pulmonary clinically diagnosed tuberculosis (PCD). Chest X-ray (CXR) has historically been performed in all patients with suspected PTB, but CXR is initially correct in only 49% of all cases.3 On the other hand, High Resolution Computed Tomography (HRCT) of the chest can correctly diagnose 91% of cases of PTB.4 HRCT chest findings of active PTB include tree-in-bud appearance, lobular consolidation, cavitation and bronchial wall thickening.5 Although chest radiographs and sputum Acid-Fast Bacilli (AFB) examination usually provide adequate information for the diagnosis of active pulmonary tuberculosis, clinicians often face difficulties with sputum smear-negative suspected PTB patients. In such situations, they are uncertain whether anti-tubercular therapy (ATT) should be initiated for these patients, because prompt initiation of ATT will render them non-infectious and eventually cured. In this scenario, radiological imaging may help in the early diagnosis of suspected disease.6 In patients with suspected PTB, when every effort to diagnose a case by bacteriological confirmation fails, clinical and radiological features may help in formulating the diagnosis. Despite this fact, one of the major causes of delay in case detection is the infrequent use of radiological modalities such as CXR and HRCT chest. Hence this study aimed to evaluate the chest X-ray and HRCT chest findings of pulmonary tuberculosis in both PBC and PCD tuberculosis.

METHODS

This observational study was conducted at the authors' center from February 2019 to July 2019, wherein 45 cases of pulmonary tuberculosis were included. The necessary permission from the Institutional Review Committee (IRC) (ref. no. 076/077-004) was obtained prior to the study. All the patients enrolled in this study were informed about the nature of the study, and written informed consent was obtained. Relevant data were collected by direct interview and analysis of the final reports. Patients aged more than 18 years with bacteriologically confirmed pulmonary tuberculosis who underwent CXR and HRCT chest, clinically diagnosed pulmonary tuberculosis patients with CXR and HRCT chest, and patients under anti-tubercular therapy with CXR and HRCT chest were included in this study. Patients with pleural pathology and patients with MDR PTB were excluded.

Patterns of disease activity were analyzed on both chest X-ray and HRCT by pulmonary and critical care medicine (PCCM) fellows. The terms used for the interpretation of radiological findings were the presence or absence of cavity, lobar consolidation, infiltrates, ground glass opacity, micronodule, macronodule, bronchiectasis, tree-in-bud opacity, etc. Findings were recorded in a proforma, and if the radiological findings did not reveal any abnormality, the study was recorded as normal. SPSS version 16 was used for data recording and analysis. For the purposes of this study, a p-value of <0.05 was accepted as significant.
RESULTS

A total of 45 patients (range: 21-90 years; mean age 54.60 ± 19.01 years) were enrolled in the study. There were 21 males (46.7%) and 24 females (53.3%). Cough and fever were the predominant symptoms in most of the patients, while 15 patients (33.3%) also had hemoptysis. Very few patients presented with anorexia and dyspnea. 57.8% of the patients presented with symptoms of more than 3 weeks' duration. Ten patients (22.2%) had a past history of PTB, 13.3% were current consumers of alcohol, and 10/45 were current smokers. Chronic obstructive pulmonary disease, diabetes and hypertension were the most common comorbidities in these patients. Out of the 45 patients, 24 (53.3%) had PBC and 21 (46.7%) had PCD tuberculosis. The two groups did not differ by age, sex or duration of symptoms. The characteristics of the study population in the PBC and PCD groups are given in Table 1, the X-ray findings in Table 2, and the HRCT findings in Table 3.

DISCUSSION

In most tuberculosis centers, even after a meticulous search, the bacteriologically positive yield from sputum is around 16 to 50%, despite the clinical profile and chest X-ray lesions being consistent with a diagnosis of pulmonary tuberculosis.8 HRCT chest is better than the plain chest radiograph in the identification of the disease activity of pulmonary TB, mostly in subtle areas of consolidation, cavitation, and bronchogenic and miliary spread.9 In a study that compared CXR and HRCT chest, HRCT showed cavities in 58% of patients with active PTB, versus only 22% on chest radiographs.10 Findings in our study likewise show cavities in 45.8% on HRCT vs 8.3% on CXR in PBC, and in 23.8% vs 14.3% in PCD, respectively. Patients with negative AFB sputum but strong clinical and radiological suspicion of active PTB underwent bronchoscopy and BAL from the HRCT-suspected pathological area. Among them, 8 patients were positive for AFB in BAL samples. The findings of this study are supported by a study of 100 patients suspected to have smear-negative active pulmonary tuberculosis, in which HRCT chest findings segregated higher-risk patients among the suspects for further laboratory tests or bronchoscopy.14 There are some limitations to our study. As the imaging findings were analyzed by fellows of pulmonary and critical care medicine, there might be inter-reader variation in the interpretation of the HRCT and CXR findings. BAL was not performed in all sputum-negative patients, which could have underestimated the actual number of PBC tuberculosis cases.

CONCLUSION

Our study concluded that radiological findings are helpful in the diagnosis of pulmonary tuberculosis; HRCT chest is more useful for recognizing disease activity than chest radiography. Findings such as cavities, tree-in-bud opacities and lobar consolidation are seen more clearly on HRCT and can easily be missed on chest X-ray, especially in sputum-negative cases.

(Table 2 - X-ray findings in bacteriologically confirmed pulmonary TB (PBC) and clinically diagnosed pulmonary TB (PCD).)

(Table 3 - HRCT findings in bacteriologically confirmed pulmonary TB (PBC) and clinically diagnosed pulmonary TB (PCD). In comparison to the chest X-ray, cavity and tree-in-bud are more common findings on HRCT chest.)
Sensory brain activation during rectal balloon distention: a pilot study in healthy volunteers to assess safety and feasibility at 1.5T

Objective: Although increasing evidence suggests a central mechanism of action for sacral neuromodulation, the exact mechanism remains unclear. We set up a scanning paradigm to measure brain activation related to various stages of rectal filling using rectal balloon distention. Materials and Methods: Six healthy volunteers underwent rectal balloon distention during MRI scanning at a 1.5T scanner with a Tx/Rx head coil. MR images were collected at four levels of distention: empty balloon (EB), first sensation volume (FSV), desire to defecate volume (DDV), and maximum tolerable volume (MTV). Data were analyzed using BrainVoyager 20.4. Whole-brain and ROI-based fixed-effects general linear model analyses were performed on the fMRI time-course data from all participants. Results: Rectal filling until FSV evoked the most blood-oxygen-level-dependent responses in several clusters throughout the cortex, followed by the responses evoked by rectal filling until DDV. Interestingly, rectal filling until MTV evoked negative responses compared to baseline throughout the cortex. No negative side effects were found. Discussion: This study shows that a standardized paradigm for functional MRI combined with rectal filling is feasible and safe in healthy volunteers and is ready to be used in fecal incontinent patients to assess whether their brain activity differs from healthy controls.

Introduction

In healthy individuals, the defecation process starts when luminal contents are propelled forward by colonic activity, filling the rectum [1]. Rectal filling induces the rectoanal inhibitory reflex (RAIR), which relaxes the internal anal sphincter (IAS). Upon relaxation of the IAS, distal rectal contents are moved to the upper anal canal, where the nature of the rectal contents can be differentiated [2]. If defecation is convenient, the pelvic floor and external anal sphincter (EAS) are relaxed and defecation occurs. If defecation is not convenient, the pelvic floor muscles and external sphincter are contracted, thereby deferring defecation. The ability to sufficiently contract the pelvic floor muscles, combined with adequate anorectal sensation, forms the cornerstone of continence [3]. In fecal incontinence (FI) patients, one of these mechanisms, or a combination of both, is absent, but the underlying relationship remains complex and multifactorial.

Throughout the past two decades, sacral neuromodulation (SNM) has been demonstrated to be an effective treatment option for intractable fecal incontinence [4-10]. Although there is increasing evidence for a central mechanism of action of SNM on FI, the exact mechanism responsible for the beneficial effect remains unclear [11-14]. To clarify the possible central working mechanisms of SNM in patients with FI, the exact central neural mechanisms involved in the sensation of rectal filling need to be identified. Functional magnetic resonance imaging (fMRI) is a method that is sensitive to fluctuations in blood oxygen levels in the brain's vascular system. These changes in blood oxygen levels are closely related to synaptic input activity [15,16]. To date, it is unknown how the neural responses associated with rectal sensation differ between healthy controls and FI patients.
Therefore, setting up experimental designs similar to those of studies performed on urinary incontinence (UI) and of studies in which sensation or pain was exerted through rectal balloon distention may be fruitful [11,17]. In patients with UI, bladder fullness enhances activity in several cortical regions, most prominently in the midbrain and limbic cortical areas [17]. In studies in which sensation or pain was exerted through rectal balloon distention, changes in evoked neural responses were found in several areas, including the pre- and postcentral gyrus, thalamus, primary somatosensory cortex (SI), secondary somatosensory cortex (SII), sensory association cortex, anterior cingulate cortex (ACC) and the insular cortex [18-20]. These studies showed that specific brain regions are related to perceptions of bladder fullness and rectal sensation/pain. We assume that comparable neural mechanisms are involved in the perception of rectal filling. To study these mechanisms, a standardized scanning paradigm to evaluate evoked neural responses during the rectal filling stages should be available. We therefore designed such a paradigm to measure neural responses during the following rectal filling stages: first sensation volume (FSV), desire to defecate volume (DDV) and maximum tolerable volume (MTV) of the rectum.

The aim of this feasibility and safety pilot study was to evaluate whether this scanning paradigm was successful at assessing evoked neural responses related to various stages of rectal filling in healthy volunteers. All volunteers underwent 1.5T MRI with a Tx/Rx head coil. Moreover, the participants' pain scores were recorded on a visual analog scale (VAS) to establish the safety of the scanning paradigm. We studied brain activation levels in the following bilateral regions of interest (ROIs): anterior cingulate gyrus, insular cortex, precentral gyrus, postcentral gyrus, thalamus and pons. We hypothesized that the different rectal filling stages would evoke different activation levels within these areas.

Participants and ethical statement

Six healthy volunteers were recruited through public advertising (1 female; mean age 42.5 years, range: 20-70 years). All participants were fecally continent and without a history of inflammatory bowel disease or neurological, psychiatric, kidney or cardiac disorders. Approval for the study was granted by the Ethical Committee of Maastricht University Medical Center (MUMC+, Maastricht, the Netherlands). The Clinical Trial Center Maastricht independently monitored this study. All participants gave written informed consent to participate and to publish prior to participating in the study.

Experimental design

Prior to the start of MRI scanning, a rectal balloon was inflated, using the Solar GI, to determine the following four levels of distention thresholds in each participant, as per the International Anorectal Physiology Working Group (IAPWG) recommendations [21]: empty balloon (EB), first sensation volume (FSV), desire to defecate volume (DDV) and maximum tolerable volume (MTV). These volumes were determined while the participants were in the MRI scanner, shortly before the start of the experiment. After positioning of the rectal balloon, subjects were placed in the supine position with their knees bent over a triangular pillow and covered with a blanket. Their heads were placed in a head coil. A schematic depiction of the experimental setup with the Solar GI (Fig. 1A), the rectal balloon (Fig. 1B) and the MRI scanner (Fig. 1C) can be found in Fig. 1.
The experiment consisted of 4 functional runs that lasted about 10 min each (Fig. 2). Blood-oxygen-level-dependent (BOLD) responses were measured during the following staircase cycle: EB, inflation level 1, FSV, inflation level 2, DDV, inflation level 3, MTV and deflation. This cycle was repeated 4 times within one run. Each run ended with an extra EB measurement. Each EB measurement lasted 20 s; each FSV, DDV or MTV measurement lasted 12 s; and each deflation measurement lasted 40 s. The filling periods varied from 1 to 7 s; during these periods the balloon was gradually inflated until it reached the participant's individual filling threshold for the corresponding condition. A vacuum pump was used for deflation of the balloon. Ultimately, we collected 16 repetitions (12 s, corresponding to 4 volumes, per repetition) for each of the filling conditions (FSV, DDV and MTV). Prior to the fMRI measurements, anatomical images were collected for each participant. The entire experiment was completed within 45 min, which was easily feasible for all participants. For one participant, three runs were collected instead of four, due to a technical failure.

Given the expected low neural responses to the subtle stimulation, we aimed to collect as many repetitions as possible in a 1-hour scanning period. This entails that we chose to fill the balloon in an increasing stepwise fashion (a staircase approach), rather than pseudo-randomizing the trials. Pseudo-randomizing would take more time, since it would require deflating the balloon between conditions. Not only would this be unfavorable for scanning time, it would also be detrimental to the participants' comfort.
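To make the run structure concrete, the nominal timing of one functional run can be written out as an event list; this is an illustrative sketch only, and since the inflation periods actually varied from 1 to 7 s per participant, a fixed 4 s placeholder is assumed:

import math

# Nominal block timing (s) for one staircase cycle; the 4 s inflation is ASSUMED.
CYCLE = [("EB", 20), ("inflate", 4), ("FSV", 12), ("inflate", 4),
         ("DDV", 12), ("inflate", 4), ("MTV", 12), ("deflate", 40)]

def run_timing(n_cycles=4, tr=3.0):
    # Returns (condition, onset, duration) tuples for one run and the number
    # of functional volumes at the TR of 3 s used in this study.
    events, t = [], 0.0
    for _ in range(n_cycles):
        for name, dur in CYCLE:
            events.append((name, t, float(dur)))
            t += dur
    events.append(("EB", t, 20.0))   # each run ends with an extra EB block
    t += 20.0
    return events, math.ceil(t / tr)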
Safety

Within 5 min after concluding the experiment, participants were asked to rate their maximum, minimum and current pain scores during the experiment on a 10-point VAS scale. Additionally, participants were called 48 h after the experiment to determine whether any discomfort had occurred after leaving the hospital.

MRI parameters

Brain imaging was performed on a 1.5T scanner (Ingenia; Philips Medical Systems, Best, the Netherlands) with a transmit and receive head coil (Philips dStream T/R head coil) at the Maastricht University Medical Center (MUMC+, Maastricht, the Netherlands). The functional T2*-weighted images were acquired using a multi-shot Echo Planar Imaging (EPI) sequence (repetition time [TR] = 3000 ms; echo time [TE] = 50 ms; voxel size = 1.8 × 1.8 × 4 mm). Each volume consisted of 31 slices covering the whole brain. Anatomical T1-weighted images were acquired using the 3D-NAFTRA sequence (voxel size = 1.0 × 1.0 × 1.0 mm).

Data preprocessing

Functional and anatomical images were analyzed using BrainVoyager 20.4 [24]. Preprocessing of the functional images consisted of slice scan-time correction (cubic spline interpolation), 3D motion correction (trilinear interpolation for motion estimation, sinc interpolation for correction) and temporal high-pass filtering to remove low-frequency drifts (of at most four cycles per time course/run). The functional images were co-registered to the anatomical images, and both were transformed into MNI space.

Anatomical ROIs

Region-based group analyses were performed to assess the differences in activation levels between the three filling conditions. The following regions of interest (ROIs) were obtained from the Harvard-Oxford Cortical Structural and Harvard-Oxford Subcortical Structural atlases as implemented in the FMRIB Software Library (FSL) [25]: anterior cingulate gyrus, insular cortex, precentral gyrus, postcentral gyrus, thalamus and brainstem. To examine responses within a subregion of the brainstem, the pons was manually segmented from this structure using BrainVoyager and added as an additional ROI to the analyses. Since the pontine micturition center is located in the pons, we hypothesized that a similar structure might be present to control defecation. All ROIs were segmented into left- and right-hemispheric portions by following the midsagittal plane of the MNI brain atlas (Fig. 3).

fMRI statistical analysis

Whole-brain and ROI-based fixed-effects general linear model (GLM) analyses were performed on the fMRI time courses from all participants. We used one predictor per condition (EB, FSV, DDV and MTV; convolved with a double-gamma hemodynamic response function). Inflation and deflation measurements were included as confounding predictors. Whole-brain contrast maps (t statistics) were calculated to estimate the evoked neural responses to the separate rectal filling conditions throughout the entire brain. All contrast maps were thresholded at an uncorrected p-value < 0.05 with a cluster-size threshold of 10 voxels. Per ROI, contrast maps were calculated to assess the differences in evoked responses between the conditions, for each hemisphere separately. For each ROI we tested the following contrasts: c1: FSV > DDV, c2: FSV > MTV, c3: DDV > MTV. The contrasts were corrected for multiple comparisons using false discovery rate (FDR) correction (p_FDR < 0.05; [26], as implemented in Matlab, www.mathworks.com).
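The FDR correction applied here [26] is the Benjamini-Hochberg step-up procedure; a minimal self-contained sketch of that rule (not the Matlab implementation actually used) is:

import numpy as np

def fdr_bh(pvals, q=0.05):
    # Benjamini-Hochberg step-up: find the largest k such that
    # p_(k) <= (k / m) * q and reject the hypotheses with the k smallest p values.
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject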
Results

None of the volunteers reported any pain or negative side effects related to the experiment.

Cortical responses during different rectal filling stages

We measured the evoked neural responses during the different rectal filling stages. Generally, rectal filling until FSV evoked the most (uncorrected) BOLD responses in several clusters throughout the cortex, followed by the responses evoked by rectal filling until the desire to defecate. Interestingly, rectal filling until MTV evoked negative responses compared to baseline (empty balloon) throughout the cortex (uncorrected p-value < 0.05 with cluster-size thresholding of 10 voxels; Fig. 4).

Differences in ROI-based cortical responses between rectal filling stages

Within the ROIs, we did not find differences in evoked responses between FSV and DDV (c1: FSV > DDV).

Safety

The average current pain score, obtained 5 min after concluding the experiment, was 0.38 (range 0-1) on a 10-point VAS scale. During the experiment, the maximum pain score was 3.77 (range 0-7) and the minimum pain score was 0.87 (range 0-3). The participants with higher-than-average pain scores all mentioned that the sudden distention of the balloon surprised them, causing discomfort, even though it had been explicitly explained before the study that this could occur. No adverse events occurred during the study. Moreover, no experiences of pain or discomfort came to light during the telephone conversations 48 h after the experiment.

Discussion

In this pilot study, we evaluated the feasibility and safety of a standardized scanning paradigm to measure evoked neural responses during rectal filling in healthy volunteers. In addition to feasibility, we explored which brain regions elicited different neural responses during rectal filling. The most uncorrected BOLD responses were found in several clusters throughout the cortex when the rectum was filled until FSV, compared with baseline. Rectal filling until DDV evoked fewer BOLD responses than filling until FSV, although this decrease did not reach statistical significance. Filling until MTV evoked even fewer BOLD responses than filling until DDV in several regions. Thus, it seems that most brain regions are activated during the first stage of rectal filling. The BOLD responses even showed a deactivation of several cortical and subcortical regions, such as the bilateral insular cortex, pre- and postcentral gyri and pons, the right anterior cingulate cortex and the left thalamus, during maximum filling of the rectum.

Although the BOLD response shows a downward trend with increasing rectal pressure, the magnitude of the negative peak in the MTV condition is larger than would be expected from the increase in rectal pressure alone, i.e., the signal drops below baseline. This implies that other explanations, both physiological and methodological, must be sought for this effect. On the physiological level, one could argue that the maximum tolerable pressure differs from the other conditions in that it induces pain. Kong and colleagues showed that pain can lead to deactivation in several brain areas involved in the so-called pain matrix, which comprises regions such as the thalamus, insula and postcentral gyri [27]. Therefore, it is not unlikely that painful sensations induced by maximum filling of the rectum directly involved regions of the pain matrix. A methodological explanation for the negative peak in the MTV condition would be that this condition always followed FSV and DDV, without deflation of the balloon in between. Therefore, a possible effect of these preceding conditions on MTV cannot be ruled out. The staircase setup of the scanning paradigm was chosen for reasons of time, since it allowed more scanning time per condition, and to minimize participant discomfort. However, since the order of conditions was identical in each cycle, each filling state was always preceded by the same filling state. Moreover, for time reasons, no baseline measurement was conducted between conditions; baseline brain activity was only measured before and after each staircase cycle. Although this is not an uncommon procedure when one is interested in the relative difference between conditions, randomization of the trial order would have allowed for better response estimates.

The low pain scores and the absence of adverse events or discomfort after concluding the experiment showed that this scanning paradigm is safe to use. In a follow-up study, it would be advisable to explain very explicitly to participants that the sudden distention of the balloon might initially surprise them.

The lead (model 3889/3093) and implantable pulse generator (IPG, model 3058) produced by Medtronic, which were used in the majority of SNM patients up until 2020, received FDA conditional MRI approval at 1.5T with a transmit/receive (Tx/Rx) head coil [22,23]. Therefore, to allow for follow-up studies with patients having an IPG in place, we deliberately chose to use a scanner with this field strength. However, this inherently carries several limitations. First, 1.5T MRI has limited spatial resolution per unit time compared to higher magnetic field strengths.
Second, the Tx/Rx coils that are approved for use in patients with the aforementioned implants are typically restricted in transmit power, limiting the signal-to-noise ratio. Third, one could hypothesize that nuclei in the brainstem are of interest in this line of research, given this brain region's involvement in, for instance, micturition. At 1.5T, with limited transmit capabilities, these deep-lying brain regions are likely too small for the relatively large voxel size, as well as too far from the surface, to yield a reliable signal. Lastly, the long repetition time (TR) needed to acquire a given spatial resolution led to a long scanning time per volume. Since total scanning time was limited, this resulted in the staircase approach of the scanning paradigm mentioned previously, whereas a randomized stimulation protocol would have been preferable. These factors need to be taken into consideration when designing an experiment in this specific participant population.

In this study, we wanted to evaluate whether we could differentiate brain activity between EB, FSV, DDV and MTV. We therefore needed a controlled situation, and due to the technical limitations of our MRI, we placed our participants in a supine position. We realize that with this study setup, the outcomes may not perfectly reflect body position throughout the day in real life. We know that gravity in daily life has an effect on the filling status of the rectum, which implies that, in theory, the DDV in the supine position is larger than in the upright position. For example, it is very common for runners to have bowel problems during running, but when they stop running and sit down, these problems disappear. Additionally, most people do not have problems holding their stool during the night.

Upon completion of this study, Medtronic introduced new SNM leads and IPGs that are suitable for 3T MRI. Therefore, future studies using 3T MRI can benefit from a shorter repetition time (TR) to acquire a given resolution, decreasing the time needed for scanning. Shorter repetition times may increase the T1 weighting of the images; however, this will not compromise the ability to detect BOLD fluctuations, which depends on T2(*) fluctuations. Consequently, randomizing conditions without sacrificing statistical power becomes feasible. Given the relatively lower SNR that can be achieved at 1.5T compared to higher magnetic field strengths, compromises in slice thickness had to be made in favour of a higher in-plane resolution. However, future studies at higher field strengths can benefit from the increased SNR to acquire isotropic voxels at high resolution, improving spatial specificity in all three dimensions and reducing the partial-volume effects caused by increased slice thickness.

In conclusion, this study showed that a standardized scanning paradigm for functional MR imaging combined with rectal filling is feasible and safe. New developments in SNM MR compatibility make this an interesting prospect for future studies. The next step would be to use an optimized (non-staircase) scanning paradigm based on the one discussed in this paper and apply it to fecal incontinent patients, to assess whether their brain activity differs from that of healthy controls and, if so, to establish the nature of these differences.

Author contributions

RA: study conception and design, acquisition of data, analysis and interpretation of data, drafting of manuscript, critical revision.
SR: analysis and interpretation of data, drafting the manuscript, critical revision. J van den H: study conception and design, acquisition of data, analysis of data, drafting of the manuscript,
Fruits, vegetables and their polyphenols protect dietary lipids from oxidation during gastric digestion Mylène Gobert, Didier Remond, Michele Loonis, Caroline Buffière, Veronique Sante-Lhoutellier, Claire Dufour Introduction Accumulating evidence suggests that lipid oxidation products present in the diet may contribute to the pathogenesis of atherosclerosis. 1 Among others, the intake of oxidized oils was shown to induce endothelial dysfunction. 2 At the molecular level, lipid oxidation products appear to be absorbed by the small intestine before their incorporation into chylomicrons and then LDL, as shown for humans and pigs. 3,4 Besides, LDL postprandial modifications such as aldehyde binding to apolipoproteins are reported to be strongly implicated in the atherogenicity of LDL. 5,6 Food processing or food storage may not be the only routes for the formation of dietary lipid oxidation products. The latter can be generated in vivo, and the gastric compartment has been proposed as a major site for diet-related oxidative stress. 7 Indeed, after food intake, dietary iron could trigger lipid oxidation during gastric digestion. This assumption was substantiated in vitro using oil-in-water emulsions to model the physical state of dietary lipids. 8,9 In this work, lipid-derived conjugated dienes and short-chain aldehydes and alcohols were produced concomitantly. Their accumulation rates were found to be drastically influenced by pH, the emulsifier type (proteins vs. phospholipids), and iron forms (heme vs. non-heme iron). To the best of our knowledge, very little is known about the in vivo gastric fate of lipid hydroperoxides. Only two studies reported the decomposition of trilinolein and linoleic acid hydroperoxides to aldehyde, epoxyketone and alcohol derivatives in the stomach of rats fed intragastrically. 10,11 Nonetheless, the gastric stability of dietary lipids after the ingestion of a complex meal remains unknown and should be further elucidated. On the other hand, various meta-analyses have revealed that the consumption of fruit and vegetables (F&V) was associated with a reduced rate of coronary artery disease 12 and stroke. 13 Besides, the development of coronary artery disease was inversely associated with the consumption of flavonoids, a class of polyphenols largely distributed in fruit and vegetables. 14 Increase in plasma antioxidant capacity, inhibition of LDL oxidation, decrease in platelet aggregation and improvement of the endothelial function are the main mechanisms proposed for the health benefit of flavonoids. 15,16 In a recent controlled trial, flavonoid-rich apples independently augmented the nitric oxide status, enhanced endothelial function, and lowered blood pressure acutely, outcomes that may benefit cardiovascular health. 17
Similarly, the ingestion of a Western-type meal enriched in wine polyphenols led to a reduced elevation of malondialdehyde in plasma 18 and decreased the susceptibility of postprandial LDL to oxidation. 19 Dietary intakes of polyphenols have been reliably evaluated for the British (0.9 g per day) 20 and for the French people (1.2 g per day). 21 A deeper insight revealed that tea, coffee and fruit juices are the major contributors for both groups. After ingestion of a meal rich in plant products, native forms of polyphenols could thus be recovered in elevated concentrations in the gastric tract. However, the bioaccessibility of polyphenols, which is defined as the amount of polyphenols released and solubilized in the chyme, can be modulated by several parameters such as plant matrix, processing, bolus constituents and physiological conditions. 22 The present study aims at assessing lipid oxidation in the gastric tract after the consumption of a typical Western diet. Because lipid oxidation is triggered by dietary iron forms which themselves show pH dependency, the contents in heme and non-heme iron forms will be kinetically monitored along with pH. Additionally, the lipid protective capacity of polyphenols either embedded in or extracted from their natural F&V matrix will be compared. The reported digestion study is conducted with minipigs, as the relevance of this animal model has already been established for the digestion of proteins. 23 2 Experimental section 2.1 Test meals 2.1.1 Fruit and vegetables (F&V) and the phenolic extract (PE). Frozen artichoke hearts (Camus de Bretagne var., Picard) were cooked in a microwave oven for 5 min (8 hearts at a time), then cut into 6 pieces and quick-frozen before storage at −20 °C. Fresh rennet apples were purchased from a local market. The central part was removed before cutting apples into 12 or 24 pieces for meal or extraction, respectively. Quick freezing of apple pieces was followed by storage at −20 °C until use. Frozen quetsche plums (halves) were purchased from Picard and kept at −20 °C until needed. Each F&V portion was made of 120 g of apple, 40 g of artichoke heart and 40 g of plum as prepared above. For the extraction of phenolic compounds from frozen F&V, apple (2.4 kg), cooked artichoke (800 g) and plum (800 g) were ground separately in liquid nitrogen for 3 min at 3000 rpm using a PM-400 ball grinder (Retsch GmbH, Germany). The resulting powders were freeze-dried and kept at −20 °C. The combined powders were divided into four portions and each one was extracted as follows. One powder batch (ca. 170 g) was homogenized with 800 mL of acetone-water (70:30) for 2 min at 24,000 rpm (Ultra-Turrax T25, IKA) before addition of 2.2 L of the same solvent system and stirring for 30 min at RT. After Buchner filtration on Whatman paper no. 3, the powder was extracted once more with 3 L of this solvent system for 30 min. The combined liquid phases were concentrated in vacuo using a rotary evaporator at 30 °C. The obtained aqueous extract was distributed into plastic trays, freeze-dried and kept at −20 °C until needed. One F&V portion contained the same amount of polyphenols as 22.8 g of the phenolic extract (PE). 2.1.2 Meal preparation. Each meal contained primarily 40 g of sunflower oil (Lesieur "Coeur de Tournesol" from a local market) as a source of lipids and 120 g of ground beef meat as a source of protein (Table 1). The meat (Triceps brachii muscle) was obtained from a 15-month-old Charolais bull and aged 15 days.
It was minced through an 8 mm diameter grind before cooking in vacuum packing at 70 °C (water bath) for 30 min and finally freezing at −20 °C. The meals were prepared by quickly mixing in a food processor (KM336 Kenwood) the defrosted meat, the sunflower oil, egg yolk phospholipids (Sigma-Aldrich, St Quentin-Fallavier, France) and either the frozen F&V cut into cubes (2, 5 and 8 mm edge lengths for apple, plum and artichoke, respectively) or the phenolic extract. F&V defrosted during the mixing step and were thus protected as long as possible from browning and polyphenol degradation. When F&V were absent from the meal, starch, cellulose, and apple pectin (all from Sigma-Aldrich, St Quentin-Fallavier, France) were added along with water to simulate complex sugars and cell wall materials as found in the F&V matrix. Study design All procedures were conducted in accordance with the guidelines formulated by the European Community for the use of experimental animals (L358-86/609/EEC), and the study was approved by the Local Committee for Ethics in Animal Experimentation (no. CE24-10; Comité d'Ethique en Matière d'Expérimentation Animale d'Auvergne, Aubière, France). 2.2.1 Animals. The study involved 6 female Göttingen minipigs (Ellegaard, Denmark) (12-16 months old; 20-25 kg body weight). At least 3 weeks before initiating the study, minipigs were surgically fitted with a permanent cannula (silicone rubber; 12 mm i.d., 17 mm o.d.) in the body of the stomach, in the middle of the long axis of the greater curvature. Surgical procedures, as well as post-surgical care, have been previously described in detail by Rémond et al. 24 Minipigs were housed in individual pens (1 × 1.5 m), separated by Plexiglass walls, in a ventilated room with controlled temperature (20-23 °C). Apart from sampling days, they were fed once daily, at 0815, with 400 g of a commercial feed [18% protein, 2% fat, 5% cellulose, 6% ash] (Porcyprima, Sanders Nutrition Animale, France), and had free access to water. In order to ensure a rapid and complete ingestion of the test meals during the sampling days, they were accustomed to receiving this type of meal before starting the experiment. 2.2.2 Experimental protocol. The three test meals were randomly tested on each minipig. For a given minipig, the days of sampling were separated by at least 3 days. On days in between, minipigs received the commercial feed. The evening before the day of sampling, the stomach was flushed by intragastric injection of 200 mL of water followed by free evacuation of the chyme through the cannula. On the day of sampling, minipigs did not receive the commercial feed and were exclusively offered test meals (at 0815). They always consumed the whole meal in less than 15 min. Minipigs had continuous access to water during the sampling period. Digesta (average volume 60 mL) were gravimetrically collected in a graduated beaker 30 min before and 15, 45, 90, 150, 240, and 330 min after test meal delivery. The exact digesta volume was recorded before mixing with 10 mL of water for better consistency. Then the diluted digesta were halved. One part was homogenized for 30 s with an Ultra-Turrax (IKA25, 20 000 rpm) and the pH was immediately recorded. The homogenized digesta were subsampled for the remaining analyses (TBARS, lipid-derived conjugated dienes, iron forms). All aliquots were immediately frozen in liquid nitrogen and kept at −80 °C until analysis. Analyses of the meals and digesta 2.3.1 Fatty acid chemical analysis.
Total lipids of oil, bovine meat and whole meals were extracted from 6 g of ground samples with chloroform-methanol (2:1, v/v) according to the method reported by Folch et al., and then assayed gravimetrically. 25 Lipids were converted into fatty acid methyl esters (FAME) at room temperature using 1 M sodium methanolate followed by 14% (vol/vol) BF3-methanol for 2 × 20 min. FAME analysis was performed by gas chromatography as described previously. 26 FAME were quantified using C19:0 as internal standard (Supelco, Bellefonte, PA, USA). The identification and calculation of the response coefficient for each individual FAME were achieved using the Supelco quantitative mix C4-C24 FAME. Determination of lipid oxidation Determination of total lipids. Total lipids from freeze-dried homogenized digesta and meals (1-1.5 g) were extracted twice with chloroform-methanol (2:1, v/v) according to the method reported by Folch et al. using 4 mL per g of fresh matter. 25 The combined organic phases were washed with 0.9% aq. NaCl, dried on sodium sulfate and concentrated first in vacuo and then under nitrogen. Total lipids were assayed gravimetrically and the results are expressed in grams of lipids per 100 g of fresh sample. Measurement of conjugated dienes. Total lipids were dissolved in 2-propanol (2 mL). The concentration of conjugated dienes (CD) was determined by measuring the absorbance at 234 nm (HP 8453 diode-array spectrometer equipped with a magnetically stirred cell; optical path length = 1 cm) and by using 27,000 M−1 cm−1 as the molar absorption coefficient for conjugated linoleyl hydroperoxides. Results are expressed in micromoles of CD per gram of lipids. Determination of TBARS. Thiobarbituric acid-reactive substances (TBARS) were evaluated according to Lynch & Frei 27 with slight modifications. Freeze-dried samples of meat, meal and gastric digesta (1 g) were homogenized for 30 s with 10 mL of 0.15 M KCl containing 0.1 mM butylated hydroxytoluene (BHT) using an Ultra-Turrax homogenizer (IKA25, 15 000 rpm). Homogenates (0.5 mL) were incubated with 1% (w/v) 2-thiobarbituric acid in 50 mM NaOH (0.25 mL) and 2.8% (w/v) trichloroacetic acid (0.25 mL) for 10 min in a boiling water bath. After cooling at room temperature for 30 min, n-butanol (2 mL) was added to the aqueous phase under stirring, and the mixture was then centrifuged (4000g, 10 min). The absorbance of the extracted pink chromogen in n-butanol was measured at 535 nm with deduction of potential turbidity at 760 nm. TBARS concentrations were calculated using 1,1,3,3-tetraethoxypropane as a standard, and expressed as µmol of malondialdehyde (MDA) equivalents per g of lipids.
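As a concrete illustration of the conjugated-diene assay described above, the short sketch below converts a measured absorbance at 234 nm into µmol of CD per gram of lipids via the Beer-Lambert law (A = εcl), using the stated molar absorption coefficient; the sample mass, solvent volume and absorbance are hypothetical values chosen only for the example, not data from the study.

```python
# Minimal sketch: conjugated dienes (CD) from absorbance at 234 nm
# via the Beer-Lambert law, A = epsilon * c * l.

EPSILON = 27_000.0   # molar absorption coefficient, M^-1 cm^-1 (from the text)
PATH_CM = 1.0        # optical path length, cm (from the text)

def cd_umol_per_g(absorbance_234, solvent_volume_ml, lipid_mass_g):
    """Return the CD content in micromoles per gram of lipids.

    absorbance_234    : measured absorbance at 234 nm (dimensionless)
    solvent_volume_ml : volume of 2-propanol the lipids were dissolved in (mL)
    lipid_mass_g      : mass of total lipids dissolved (g)
    """
    conc_mol_per_l = absorbance_234 / (EPSILON * PATH_CM)   # mol/L
    moles = conc_mol_per_l * solvent_volume_ml / 1000.0     # mol
    return moles * 1e6 / lipid_mass_g                       # umol per g lipids

# Hypothetical example: A = 0.51 for 0.02 g of lipids in 2 mL of 2-propanol
print(f"{cd_umol_per_g(0.51, 2.0, 0.02):.1f} umol CD per g lipids")
```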
Aer 3 to 4 h of dialysis at room temperature under agitation, the contents of the dialysis tubing were centrifuged (4000 rpm, 4 C, 10 min). The rst dialysate was added with 1 mM ferrozine and the second one with 1 mM ascorbate and 1 mM ferrozine, allowing the determination of the Fe 2+ form and free iron, respectively. Iron was determined spectrophotometrically at 562 nm using iron sulfate for calibration. The level in Fe 3+ is the difference between [free iron] and [Fe 2+ ]. The heme iron level was deduced by subtracting the free iron content from that of the total iron. All iron form levels are expressed in mg of Fe per g of sample. Statistical analysis All data are presented as mean AE SEM (n ¼ 6 per group). The postprandial evolutions of pH, CD and TBARS were compared by one-way ANOVA for repeated measures (Tukey post-hoc test Evolution of pH during gastric digestion Aer the ingestion of a standard Western diet containing principally beef meat and sunower oil (beef meal, Table 1), the gastric pH increased sharply from 2.1 in the fasting state to 5.6 aer 15 min (Fig. 1). When F&V or the phenolic extract (PE) were added to the meal, this pH was found to be 4.5 in both cases outlining a signicant effect of meal (p < 0.05). The postprandial pH decayed faster during the rst 150 min for the beef meal compared to the F&V-and PE-added meals and similarly for the last part of the digestion. At 45 min, the gastric pH aer ingestion of the PE meal was still signicantly different from pH for the beef meal. Aer 330 min, pH has not yet returned to the fasting pH suggesting that a period of 5 h 30 min was not sufficient for completion of gastric digestion by minipigs. Finally, a signicant effect of time on gastric pH was found during the digestion of the three meals (p < 0.0001). Iron forms The content in total iron for cooked beef (23 mg per g FW, Table 2) was in the range of data reported for total iron for raw beef (19.5-26.1 mg per g) 29-31 and cooked beef (24.1 mg per g). 31 In the beef, F&V and PE meals, total iron levels were respectively 10, 8 and 9 mg per g FW as a result of the dilution by the different meal constituents. There is no apparent contribution of F&V although artichoke, apple and plum could theoretically contribute for 2. (Fig. 2A). Free iron was mostly recovered in the form of Fe 3+ in both the meal and the digesta suggesting a more oxidizing environment compared to meat. The Fe 3+ form decayed more slowly than heme iron suggesting that part of the heme iron atoms could be released from the protoporphyrin ring of metmyoglobin. For the F&V and the PE meals, slower decreases for total and heme irons were observed while Fe 3+ was even shown to accumulate in agreement with the suggested conversion of heme iron into free iron ( Fig. 2B and C). It is worth noting that the content in the Fe 3+ form is unexpectedly low in the F&V meal (0.4 mg per g) and the corresponding T15 min digesta (0.28 mg per g) compared to the contents in the PE meal (1.8 mg per g) and corresponding T15 min digesta (1.07 mg per g). This could be attributed to a strong complexation of free iron by unidentied F&V components and its subsequent lack of dialysability, thus making free iron unavailable for titration by ferrozine. Lipid stability in the gastric tract 3.3.1 Total lipids. The decrease in total lipids in the gastric digesta was almost linear over the 330 min long period of monitored digestion for the three test meals (Fig. 
Lipid stability in the gastric tract 3.3.1 Total lipids. The decrease in total lipids in the gastric digesta was almost linear over the 330 min period of monitored digestion for the three test meals (Fig. 3), in agreement with dilution by gastric juices and simultaneous gastric emptying. Although the effect of meals cannot be statistically assessed owing to the difference in meal size, a faster rate for the decay in total lipids was observed during the digestion of the beef meal. This trend is similar to the one observed for the gastric pH and suggests that some biomolecules present in both the extract and F&V slow down these digestion parameters. 3.3.2 Lipid oxidation in the gastric digesta. The oxidative state of lipids was first probed by analyzing the gastric digesta for lipid-derived conjugated dienes (CD) as primary markers. Forty-five minutes after the ingestion of the beef meal, CD started to accumulate following bell-shaped kinetics (Fig. 4A). The maximal content, observed between 150 and 240 min, corresponds to a 35% increase in CD. The addition of the phenolic extract to the beef meal (PE meal) had no effect on the CD accumulation. However, with the F&V meal, CD levels were found significantly higher at the initial stage of the digestion (T15 and T45 min). Nevertheless, there was no noticeable CD accumulation within this meal during the 330 min-long digestion process. TBARS were next followed as secondary lipid oxidation products. Their evolution was clearly different from that of CD (Fig. 4B). Indeed, TBARS accumulated continuously for the three meals for at least 240 min. The F&V and PE meals markedly slowed down the formation of TBARS. ANOVA with repeated measures revealed significant effects for meal (p = 0.03), time (p < 0.0001) and meal × time (p = 0.0003). At 240 min, the TBARS level per gram of lipids was significantly lower (p < 0.05) for both the F&V and the PE meals compared to the beef meal. At this stage, TBARS had increased by a 5-fold factor for the beef meal, while only by a 2-fold factor for both the F&V and PE meals. Discussion In the few intervention studies investigating gastric pH for complex meals, liquid test meals were classically fed to nasogastrically intubated humans. By contrast, solid ingredients of human consumption such as beef meat, sunflower oil and fruit and vegetables (F&V) were used in our study after classical home processing including grinding, cooking and mixing. The ratio between triglycerides and phospholipids is representative of an average Western adult consumption (100-150 g triglycerides and 2-10 g phospholipids each day) (Table 1). The Western diet is also characterized by a markedly high content in linoleic acid compared to linolenic acid, as evidenced here with the commonly consumed sunflower oil, egg yolk phospholipids and beef meat. It was reported that gastric pH reached 6.4 thirty minutes after the consumption of Ensure Plus R (a nutrient-rich emulsion with an intrinsic pH of 6.6), pH between 5.4 and 6.2 twenty minutes after the ingestion of a liquid Western-type diet enriched in vegetable purees, and pH 5.4 only three minutes after the consumption of a cocoa beverage (intrinsic pH 6.4). [32][33][34] The pH variations recorded in the minipig stomach during digestion thus appear to be similar to those observed in humans, with a very rapid rise after food ingestion followed by a nearly linear decay to return to the fasting pH. The high pH values reached after a few minutes are mainly related to the food intrinsic pH and its buffering capacity. In meat, proteins and carnosine play this role.
In this study, initial gastric pHs were found to be 5.6, 4.5, and 4.5 after the ingestion of the beef, F&V and PE meals, respectively. A significant effect of the F&V and PE matrices is highlighted, possibly resulting from the additional presence of soluble sugars, amino acids, small peptides or polyphenols. Hence, the pH kinetic data obtained for gastric digestion in this study, like data previously reported for meat and milk protein digestibility, 35,36 well support the use of the minipig as an animal model for digestion studies. The fate of lipids during digestion 4.1.1 Total lipids. The contents in total lipids were evaluated for beef meat, meals and the corresponding gastric digesta over 330 min (Table 2 and Fig. 3). The measured total lipid contents were 12.1, 9.4 and 12.8 g per 100 g FW in the initial meals, in agreement with a higher dilution of the lipids by F&V than by PE during meal preparation. In the T15 min sampling arising from the beef, F&V and the PE meals, lipids represent 7.1, 5.8 and 5.6 g per 100 g of FW, respectively. These concentrations in total lipids correspond to only 59, 62 and 44% of the total lipids initially present in the beef, F&V and PE meals, respectively. Part of this difference in concentrations could be explained by the dilution of the chyme by both saliva and gastric juices, as shown by the 12-15% decrease in total iron. Indeed, the viscous aspect observed for the chyme reveals the presence of mucins known to be present in both fluids. However, the loss at 15 min of nearly half of the lipids could also be accounted for by the formation of a lipid layer on top of the chyme in the upper part of the gastric compartment. This hypothesis could not be confirmed as sampling was performed on the greater curvature of the stomach, i.e. at the mid-height of the full stomach. Nevertheless, the gastric digesta that were sampled did not exhibit phase separation, only a continuous decrease in viscosity over time (ESI, S1†). Additionally, light microscopy and granulometry revealed perfectly circular objects which could be ascribed to emulsified oil droplets (unpublished data). Egg yolk phospholipids and meat proteins or their hydrolysates are known to be efficient dietary emulsifiers, thus helping the early emulsification of sunflower oil triacylglycerols. 4.1.2 Lipid oxidation. CD correspond to early lipid oxidation products which share a conjugated dienyl system and diversely oxygenated functions. Unstable lipid hydroperoxides are known to give rise to related alcohols, epoxides and ketones through intramolecular radical and non-radical rearrangements. 37 In the presence of metmyoglobin, a heme iron form, linoleic acid hydroperoxides were found to be mainly converted into the corresponding ketones. 38 Other pathways, including carbon-carbon cleavage, lead to short-chain aldehydes, unsaturated aldehydes or alcohols among others. 39 Most of these short-chain derivatives were shown to be produced during the storage of sunflower oil or in the thermal treatment of vegetable oils rich in linoleic and linolenic acids. 40,41 Among them, malondialdehyde (MDA) is a typical marker for secondary lipid oxidation and is classically assessed as thiobarbituric acid-reactive substances (TBARS). Although CD are mostly produced through lipid oxidation of linoleic acid, a fatty acid largely found in sunflower oil, MDA may be a more suitable marker for more highly polyunsaturated fatty acids such as linolenic and arachidonic acids, 42,43 mostly provided by meat and egg yolk phospholipids (Table 3).
For the complete oxidation of polyunsaturated fatty acids, the yield in TBARS is 0.55% (mol/mol) for linoleic acid, 4.9% for linolenic acid and 8.6% for arachidonic acid. 43 For the meals under study, the relative composition in these fatty acids is 100:1:1, respectively (Table 3). Although unexpected, linoleic acid could thus produce 4-fold more TBARS than combined linolenic and arachidonic acids. The initial oxidation state of the test meals was evaluated right before serving to minipigs. CD were present at levels of 18.7, 10.4 and 10.1 µmol per g lipids in the beef, F&V and PE test meals, respectively (Table 2). TBARS were also identified, with again higher levels for the initial beef meal (0.156 µmol per g lipids) compared to both F&V and PE meals (0.106 and 0.105 µmol per g lipids, respectively). F&V and the phenolic extract may thus exert a protective effect during meal preparation when polyunsaturated fatty acids and prooxidant iron species from meat are brought into contact. In the same way, a polyphenol-rich grape seed extract was reported to inhibit the onset of lipid oxidation during the storage of minced fish. 44 This difference in favor of a higher oxidation level in the beef meal was also outlined in the T15 min sample of gastric digesta, with TBARS values of 0.222, 0.175 and 0.191 µmol per g lipids (Fig. 4B). Similarly, CD evolved rapidly between meal preparation and T15 min, with values of 12.6, 13.9 and 12.2 µmol per g lipids for the beef, F&V and the PE meals, respectively (Fig. 4A). It is noteworthy that a significantly higher CD content was observed for the F&V meal (+10%, p = 0.0004), although this content did not increase further during the course of the digestion process. A steady-state pattern for CD usually accounts for identical rates for the formation and degradation of lipid oxidation products sharing a conjugated dienyl moiety. Thus, the bell-shaped kinetics observed for CD (Fig. 4A) indicates faster rates of formation than decomposition during the period between 15 and 150 min. After 150 min, the CD content tends to level off before apparently decreasing. The assumption of a continuous accumulation of CD is supported by recent reports of in vitro digestion. TBARS and lipid hydroperoxides were shown to dramatically increase when cod hemoglobin was added to cod liver oil 45 or when cooked turkey meat was digested with simulated gastric juices. 46 The low stability of the lipid hydroperoxyl group under gastric conditions was investigated by Kanazawa and Ashida. These authors found that, when partly peroxidized trilinolein was intragastrically administered to rats, the stomach content in trilinolein hydroperoxides decayed over 4 h. 10 Linoleic acid hydroperoxides and the corresponding alcohols were recovered in the stomach, probably through the action of gastric lipase. Besides, neither trilinolein hydroperoxides nor linoleic acid hydroperoxides reached the intestine, but only cleavage products. These data support the decomposition of lipid conjugated dienes, which mainly consist of linoleyl hydroperoxides owing to the abundance of linoleic acid residues in the meals. Whatever the meal ingested, TBARS levels increased during the whole process of gastric digestion, in agreement with the continuous degradation of primary lipid oxidation products (Fig. 4B). These results substantiate the occurrence of lipid oxidation in gastro and validate previous results obtained in static in vitro models of gastric digestion. 8,9,47
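The 4-fold estimate quoted earlier in this section follows directly from weighting the molar TBARS yields by the 100:1:1 fatty-acid proportions; a minimal check of that arithmetic, using only values taken from the text, is:

```python
# Weighting the molar TBARS yields (mol %) by the fatty-acid proportions
# reported in the text (linoleic : linolenic : arachidonic = 100 : 1 : 1).
yields = {"linoleic": 0.55, "linolenic": 4.9, "arachidonic": 8.6}  # mol %
moles = {"linoleic": 100, "linolenic": 1, "arachidonic": 1}

linoleic = yields["linoleic"] * moles["linoleic"]                  # 55.0
others = sum(yields[k] * moles[k] for k in ("linolenic", "arachidonic"))  # 13.5
print(linoleic / others)   # ~4.1, i.e. the "4-fold more TBARS" of the text
```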
Lorrain et al. reported a quasi-linear accumulation of both CD and short-chain volatile compounds upon addition of metmyoglobin, the heme iron form of beef, to sunflower oil-in-water emulsions. The emulsifier type (BSA, phospholipids), pH and the iron form were demonstrated to be key factors governing the lipid oxidation rates. Overall, the extent of lipid oxidation was markedly depressed when egg yolk phospholipids were present. 8,9,47 This small-sized surfactant gives more homogeneous interfaces, limiting the access of the prooxidant species. Additionally, in the early step of the in vitro digestion at pH 5.8, heme iron forms (metmyoglobin, hematin) had twice the prooxidant activity of free iron forms (Fe2+ and Fe3+/ascorbate). When the pH was set at 4, the free and the heme iron forms were found to be equally aggressive. Moreover, metmyoglobin undergoes denaturation at pH 4 with the concomitant release of its protoporphyrin nucleus. In this study, the pH is above 4 between T15 and T150 min, and the prooxidant iron form is thus mainly metmyoglobin or digested metmyoglobin. After 150 min, the pH decreases below 4 and the main iron forms may be hematin and Fe3+ (Fig. 2). A redox Fe3+/Fe2+ cycle in the presence of ascorbic acid may lead to transient Fe2+ concentrations, an iron form which cleaves lipid hydroperoxides through the Fenton reaction. The amount of TBARS compared to CD can be calculated. The CD/TBARS ratio evolved from 60 to 70 in the initial stage of the digestion (T15 min) to 15 for the beef meal and 30 for the F&V and PE meals, respectively, at T240 min. This difference is thus in favor of the primary marker of oxidation and is similar to that observed for lipoproteins, where TBARS were found to accumulate 5- to 10-fold less than lipid hydroperoxides in normolipidic subjects. 48 Free MDA may be underestimated in the presence of proteins as it reacts with the ε-amino group of lysine, leading to the formation of covalent adducts such as Schiff bases. 49 Additional routes for the formation of secondary oxidation products may also be responsible for the measured low levels in TBARS. Indeed, the formation of covalent adducts between proteins and either 4-hydroxy-2-nonenal or 4-hydroxy-2-hexenal, arising from the respective oxidation of n-6 and n-3 polyunsaturated fatty acids, was evidenced in the three meals with a noticeable increase starting after 150 min (unpublished results). Lipid protection by F&V and the corresponding phenolic extract The sunflower oil used in this study contained 900 ppm of vitamin E. The main constituent of vitamin E in sunflower oil, α-tocopherol, is thus unable to totally protect emulsified lipids from oxidation during gastric digestion, as also observed for the in vitro digestion of cod liver oil. 45 F&V and their flavonoid constituents exert coronary and vascular protection as demonstrated by epidemiologic studies. [12][13][14] Although the causal mechanism of these associations needs to be demonstrated, these studies provide strong support for the recommendations to consume more than five servings of F&V per day. In this study, minipigs were fed with a Western-type diet associated with half of the recommended portion, i.e. 2.5 servings or 200 g of F&V. Both cubed F&V and the corresponding hydroacetonic extract contained 154 mg of identified monomeric phenolic compounds (ESI, S2†), 79 mg of oligomeric flavanols (average degree of polymerization = 3) along with F&V soluble sugars and amino acids (22.6 g).
As expected, apple (120 g) was a source of monomeric and oligomeric flavanols, flavonols as well as dihydrochalcones. Quetsche plum (40 g) contributed to the different classes of phenolic compounds. As to the artichoke heart (40 g), it provided 61% (w/w) of the monomeric phenolic pool, mainly as hydroxycinnamic acids. In the French diet, hydroxycinnamic acids are the most largely consumed polyphenols (599 mg per day), followed by proanthocyanidins (227 mg per day). 21 Actually, caffeoylquinic acids are the main contributors (74%, w/w) to the extract, with chlorogenic acid being the most abundant compound. Caffeoylquinic acids and flavanols, the second major group, display the typical 1,2-dihydroxyphenyl moiety that is critical to the reducing capacity of phenolic compounds. It has been reported that ferrylmyoglobin (MbFe(IV)=O), produced upon activation of metmyoglobin MbFe(III) by lipid hydroperoxides or hydrogen peroxide, is efficiently reduced by hydroxycinnamic acids 50,51 and flavonoids. 38,52 Thus, the phenolic compounds brought by F&V and the extract may protect lipids by reduction of hypervalent iron forms as well as by chelation of free iron forms, all involved in the initiation step of lipid oxidation. In the evaluation of the lipid protection, different effects were unexpectedly observed on the accumulation kinetics of CD. The polyphenol extract had no influence on the CD pattern whereas F&V, although increasing the initial level of the primary marker, totally and significantly (p < 0.05) prevented their apparent formation (Fig. 4A). By contrast, when TBARS were assessed, both F&V and the corresponding extract proved to be highly protective of lipids, limiting TBARS accumulation by a 2.5- to 3-fold factor (Fig. 4B). Significance (p < 0.05) was only reached at T240 min owing to a large inter-individual variability. Similarly, Gorelik et al. found a marked inhibition of lipid hydroperoxide and MDA formation when heated turkey meat was digested in vitro in the presence of red wine polyphenols. 53 Additionally, the inclusion of a polyphenol-rich grape seed extract during the digestion of minced fish in a dynamic in vitro digestion model decreased the formation of CD in both the gastric and intestinal compartments. 44 In a static in vitro digestion model, Lorrain et al. established that catechol-bearing quercetin, (+)-catechin and chlorogenic acid highly inhibited the accumulation of CD and short-chain volatiles in the initial step of gastric digestion (pH 5.8), although only slightly when human gastric juice was added or the pH was set at 4. 8 By contrast, in this in vivo study, the inhibitory capacity of F&V and the corresponding phenolic extract appeared conserved throughout the digestion process. Conclusion In conclusion, the present study clearly demonstrates the occurrence of in vivo oxidation of dietary lipids in the presence of meat iron and suggests that F&V and their polyphenols can play a protective role. The chemical structure of the antioxidant microconstituents and their respective bioaccessibility are key determinants of the antioxidant capacity of F&V. Because data on the metabolism of polyphenols in the human gastrointestinal (GI) tract are scarce and mainly from ileostomy patients, efforts should now be devoted to the evaluation of the polyphenol bioaccessibility in the GI tract.
An ALE-type discrete unified gas kinetic scheme for low-speed continuum and rarefied flow simulations with moving boundaries In this paper, the original discrete unified gas kinetic scheme (DUGKS) is extended to the arbitrary Lagrangian-Eulerian (ALE) framework for simulating low-speed continuum and rarefied flows with moving boundaries. For the ALE method, the mesh moving velocity is introduced into the Boltzmann-BGK equation. A remapping-free scheme is adopted to develop the present ALE-type DUGKS, which avoids the complex rezoning and remapping process of the traditional ALE method. In some application areas, large discretization errors are introduced into the simulation if geometric conservation is not guaranteed. Three approaches compliant with the geometric conservation law (GCL) are discussed, and a uniform flow test case is conducted to validate these schemes. To illustrate the performance of the present ALE-type DUGKS, four test cases are carried out. Two of them are continuum flow cases: the flow around an oscillating circular cylinder and the flow around a pitching NACA0012 airfoil. The others are rarefied flow cases: one is a moving piston driven by rarefied gas, the other is the flow caused by a plate oscillating in its normal direction. The results of all test cases are in good agreement with other numerical and/or experimental results, demonstrating the capability of the present ALE-type DUGKS to cope with moving boundary problems in different flow regimes. Introduction Moving boundary problems can be found in various scientific and engineering fields. For example, in the aerospace area: the store separation from the aircraft body, the motion of the landing gear during take-off and landing, etc. [1]. For micro air vehicles, the design of rotary wings and flapping wings, which involves many moving boundary problems, is another application area [2]. In the above examples, the reference lengths of the obstacles are usually much larger than the mean free path of the gas molecules. According to the definition of the Knudsen number [3], the flows in the above cases are continuum flows. Furthermore, moving boundary problems are also encountered in applications in the rarefied flow regime, such as the sound wave generated by an oscillating plate, the motion of the vanes of a Crookes radiometer, etc. [4]. Generally speaking, macro-methods based on the N-S equations [5] can cope with the corresponding problems in the continuum flow regime, and the prevailing direct simulation Monte Carlo (DSMC) method [6] can deal with those in the rarefied flow regime. But in some applications, both macro-methods and DSMC encounter obstacles when the flow regime is outside their computational range, such as flows in the transition regime. Though some hybrid methods [2,7] can be used, due to the different temporal and spatial scales, this kind of hybrid method also encounters great difficulties. Consequently, developing and improving a method which can simulate moving boundary problems at all flow regimes is of enormous value for engineering applications. The discrete unified gas kinetic scheme (DUGKS) recently proposed by Guo et al. [8] is a promising method which can handle flows at all regimes [9]. It combines the advantages of both the lattice Boltzmann method [10] (LBM) and the unified gas kinetic scheme [11] (UGKS): the flux at the cell interface is easy to calculate, as in LBM, and the computational cost is reduced compared with the UGKS.
Some details can be found in Refs. [8,12,13,14]. At the current stage, the DUGKS is implemented on stationary meshes, so the purpose of this paper is to further extend the application range of the original DUGKS so that it can deal with moving boundary problems. Nowadays, there are many methods that can handle moving boundary problems. Based on the mesh systems used in the numerical computation, these methods can be broadly divided into two categories: Eulerian methods and Lagrangian methods. For an Eulerian method, the mesh is fixed during each iteration of the computation. The immersed boundary method (IBM) is one representative [15,16,17]. In this method, uniform Cartesian grids are currently used near the wall region, the moving boundary is regarded as a set of Lagrangian nodes, and the influence of the wall (no-slip condition) on the nearby nodes of the Cartesian grid is accounted for with interpolation methods. Besides its applications in the continuum flow regime, the IBM coupled with the UGKS can now also deal with rarefied moving boundary problems [18]. The primary disadvantage of the IBM is that in some applications, such as high-Reynolds-number flows, the required number of mesh cells is intolerable. The static mesh movement method [19] is another representative Eulerian method for coping with moving boundary problems. During the numerical simulation, after an Eulerian step, a new mesh is generated according to certain requirements. In general, regenerating a new mesh is time-consuming for most applications. Besides, interpolation methods are also needed to transfer the flow variables from the old mesh to the new one. For a Lagrangian method, the moving velocity of the mesh is equal to the local fluid flow velocity, so mesh distortion and tangling are unavoidable in most cases. To combine the advantages of both Eulerian and Lagrangian methods, a well-known method, the arbitrary Lagrangian-Eulerian [20] (ALE) technique, has been developed and improved during the last few decades. Nowadays, the ALE method can be divided into two types. In the traditional procedure, three steps are implemented: the explicit Lagrangian phase, the rezoning phase, and the remapping phase. Similar to the static mesh movement method among pure Eulerian methods, mesh regeneration or modification and flow variable transfer are the two critical steps for this ALE type. In other words, if the mesh quality can be maintained very well during the moving process, this type of ALE degenerates into the pure Lagrangian method. For the other type of ALE, the mesh velocity is introduced into the governing equations (convective terms) to modify the net flux at the cell interface, and a mesh moving technique is introduced to maintain the mesh quality. Besides, the mesh moving velocity is constructed based on the old and the new meshes. In the aerospace area, e.g. for aeroelastic analysis, this type of ALE method is usually used. Generally speaking, the time consumed by the mesh moving technique is less than that of regenerating a new mesh, and the rezoning and remapping phases can be discarded for this ALE type, so the remapping-free ALE technique will be used in this paper to improve the original DUGKS. For methods dealing with moving boundary problems, one source of numerical error is the violation of the geometric conservation law (GCL). In some applications, this yields erroneous results [21].
Following previous works, GCL-compliant schemes are considered in this paper to exclude this source of numerical error. The rest of this paper is organized as follows. In Sec. 2, the original DUGKS is introduced briefly, and the ALE-type DUGKS and several GCL schemes are illustrated in detail. In Sec. 3, one case to verify the GCL schemes and four test cases to validate the capability of the present method are conducted. Finally, a short conclusion is summarized in Sec. 4. The sketch of the original discrete unified gas kinetic scheme In this section, the original DUGKS proposed by Guo et al. [8] is introduced briefly. The starting point of DUGKS is the Boltzmann-BGK equation, which can be expressed as ∂f/∂t + ξ · ∇x f = Ω ≡ (f^eq − f)/τ, (1) where f = f(x, ξ, η, ζ, t) is the velocity distribution function for particles moving in D-dimensional velocity space with ξ = (ξ1, . . . , ξD) at position x = (x1, . . . , xD) and time t. η = (ξD+1, . . . , ξ3) contains the remaining components of the particle velocity, with length L = 3 − D. ζ is a vector with K dimensions which represents the internal degrees of freedom of the molecules. τ is the relaxation time, related to the fluid dynamic viscosity µ and pressure p by τ = µ/p. And f^eq is the Maxwellian equilibrium distribution function, which is given by f^eq = ρ (2πRT)^{−(3+K)/2} exp[−(|c|² + |η|² + |ζ|²)/(2RT)], (2) where R is the gas constant, T is the fluid temperature, ρ is the density of the fluid, and c = (ξ − u) is the peculiar velocity, with u being the macroscopic flow velocity. In order to remove the dependence of the distribution function on the internal variables η and ζ, two reduced distributions [22] can usually be introduced in practical computation, respectively given by g(x, ξ, t) = ∫ f dη dζ (3) and h(x, ξ, t) = ∫ (|η|² + |ζ|²) f dη dζ. (4) With Eq. (1), the evolution equations for g and h can be expressed as ∂g/∂t + ξ · ∇g = (g^eq − g)/τ (5) and ∂h/∂t + ξ · ∇h = (h^eq − h)/τ, (6) respectively, where the equilibrium distribution functions g^eq and h^eq are given by g^eq = ρ (2πRT)^{−D/2} exp[−|c|²/(2RT)] (7) and h^eq = (K + 3 − D) RT g^eq. (8) The macro-quantities can be calculated with ρ = ∫ g dξ, ρu = ∫ ξ g dξ, ρE = (1/2) ∫ (|ξ|² g + h) dξ, (9) and with the ideal gas law, p = ρRT, the pressure can be obtained. In addition, the relationship between the dynamic viscosity µ and the temperature T, based on the hard-sphere (HS) or variable hard-sphere (VHS) molecules, can be given by µ = µref (T/Tref)^ω, (10) where ω is the index related to the HS or VHS model and µref is the viscosity at the reference temperature Tref. As Eq. (5) and Eq. (6) have exactly the same form, they can be rewritten as ∂φ/∂t + ξ · ∇φ = Ω ≡ (φ^eq − φ)/τ, (11) with φ representing g or h. In DUGKS, Eq. (11) is solved with the finite volume method, and the discretization of this equation can be divided into two steps: velocity-space discretization and physical-space discretization. For the particle velocity-space discretization, a finite set of discretized micro-velocities is usually used [8], with ξi representing the i-th discretized velocity. As the macro-quantities depend on the particle micro-velocities (Eq. (9)), the values of the micro-velocities can be chosen to coincide with the abscissas of a quadrature rule. For low-speed continuum flows, the discretized velocities, weights, and corresponding equilibrium distribution functions developed in LBM, such as the D2Q9 model [23], can be used in DUGKS, which also builds the connection between LBM and DUGKS. For rarefied flows, the Gauss-Hermite and Newton-Cotes quadrature rules are usually used to integrate the macro-quantities, and the corresponding sets of abscissas are used as the sets of discretized velocities.
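To make the velocity-space discretization concrete, the sketch below evaluates the conservative moments of Eq. (9) for a one-dimensional reduced model with Gauss-Hermite quadrature. It is a toy illustration written for this article under stated assumptions (1D, nondimensional gas state, 28 quadrature points), not the production discretization of the paper.

```python
# Toy 1D illustration of the moment evaluation in Eq. (9): the reduced
# distributions are sampled at Gauss-Hermite abscissas and the integrals
# are replaced by weighted sums.
import numpy as np

R, T, rho0, u0 = 1.0, 1.0, 1.0, 0.1     # assumed nondimensional gas state
D, K = 1, 2                             # 1D reduced model with K internal dof

x, w = np.polynomial.hermite.hermgauss(28)   # weight function exp(-x^2)
xi = np.sqrt(2.0 * R * T) * x                # discrete velocity set
# Undo the Hermite weight so that sum(W * f(xi)) ~ integral of f d(xi):
W = w * np.exp(x ** 2) * np.sqrt(2.0 * R * T)

def g_eq(rho, u):
    """Reduced Maxwellian g^eq of Eq. (7), in one dimension."""
    return rho * np.exp(-(xi - u) ** 2 / (2 * R * T)) / np.sqrt(2 * np.pi * R * T)

g = g_eq(rho0, u0)
h = (K + 3 - D) * R * T * g                  # h^eq of Eq. (8)

rho = np.sum(W * g)                          # density
u = np.sum(W * xi * g) / rho                 # velocity
E = 0.5 * np.sum(W * (xi ** 2 * g + h))      # total energy density, Eq. (9)

print(rho, u, E)   # recovers rho0, u0 and 0.5*rho*u^2 + (3+K)/2 * rho*R*T
```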
For the physical-space discretization, the finite volume method based on unstructured meshes is used in this paper. Fig. 1(a) shows the schematic of the unstructured mesh: j is the center of the triangular cell ABC, and the subscript represents the index number of the cell. If φ_j and Ω_j are the average values of φ and Ω in cell ABC, Δt = t_{n+1} − t_n is the time step, and the mid-point rule is used for the time integration of the convection term and the trapezoidal rule for the collision term, Eq. (11) can be rewritten as φ_j^{n+1} − φ_j^n + (Δt/V_j) F^{n+1/2}(ξ) = (Δt/2) [Ω_j^{n+1} + Ω_j^n], (12) where V_j represents the volume of cell ABC. F^{n+1/2}(ξ) is the flux across the cell surface, given by F^{n+1/2}(ξ) = ∮_{∂V_j} (ξ · n) φ(x, ξ, t_{n+1/2}) dS, (13) where ∂V_j is the cell surface, x is the center of the cell interface, and n is the outward unit vector normal to the surface. To remove the implicit collision term, two new distribution functions are introduced: φ̃ = φ − (Δt/2) Ω, (14) φ̃⁺ = φ + (Δt/2) Ω. (15) Then Eq. (12) can be further rewritten as φ̃_j^{n+1} = φ̃_j^{+,n} − (Δt/V_j) F^{n+1/2}. (16) Due to the conservative property of the collision term, φ̃ is solved in practical computation instead of φ. Eq. (9) for computing the macro-quantities is also rewritten as ρ = ∫ g̃ dξ, ρu = ∫ ξ g̃ dξ, ρE = (1/2) ∫ (|ξ|² g̃ + h̃) dξ. (17) For the calculation of the interface flux (Fig. 1(b)), Eq. (11) is integrated along the characteristic line within a half time step s = Δt/2, ending at the interface center x_b: φ(x_b, t_n + s) − φ(x_b − ξs, t_n) = (s/2) [Ω(x_b, t_n + s) + Ω(x_b − ξs, t_n)]. (18) Similar to the treatment of φ̃, another two new distribution functions are introduced and given by φ̄ = φ − (s/2) Ω (19) and φ̄⁺ = φ + (s/2) Ω. (20) Then Eq. (18) can be rewritten as φ̄(x_b, t_n + s) = φ̄⁺(x_b − ξs, t_n). (21) If we replace φ̃ in Eq. (17) by φ̄, the macro-quantities at the cell interface can be calculated. And with Eq. (19), the original distribution function at (x_b, t_{n+1/2}) is given by φ(x_b, t_{n+1/2}) = [2τ/(2τ + s)] φ̄(x_b, t_{n+1/2}) + [s/(2τ + s)] φ^eq(x_b, t_{n+1/2}). As illustrated by Eq. (21) and Fig. 1(b), for a given φ̄⁺ at x_a = x_b − ξs, the φ^{n+1/2} at the cell interface can be obtained. With Eqs. (14), (19) and (20), we can build the relationship between φ̄⁺ and φ̃: φ̄⁺ = [(2τ − s)/(2τ + Δt)] φ̃ + [3s/(2τ + Δt)] φ^eq. (22) Thus, with φ̃ and the corresponding equilibrium distribution function φ^eq stored at x_j, a Taylor expansion gives φ̄⁺(x_a, t_n) as φ̄⁺(x_a, t_n) = φ̄⁺(x_j, t_n) + (x_a − x_j) · ∇φ̄⁺(x_j, t_n), (23) where ∇φ̄⁺ is the gradient of φ̄⁺. For the calculation of ∇φ̄⁺ on the unstructured mesh, the least-squares method is used in this paper: [Σ_n w_{j,n} (x_{j,n} − x_j)(x_{j,n} − x_j)ᵀ] ∇φ̄⁺_j = Σ_n w_{j,n} (φ̄⁺_{j,n} − φ̄⁺_j)(x_{j,n} − x_j), (24) where w_{j,n} = 1/(x_{j,n} − x_j)² is the geometrical weighting factor and n runs over the total number of cell neighbors. Besides, in practical computation, when φ̄⁺ has been calculated, with Eqs. (15) and (23) the φ̃⁺ in Eq. (16) can be updated with φ̃⁺ = (4/3) φ̄⁺ − (1/3) φ̃. (25)
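The weighted least-squares reconstruction of Eq. (24) amounts to a small normal-equations solve per cell. The sketch below shows a 2D version with the stated inverse-distance-squared weighting; the stencil coordinates and values are made up for the example and are not taken from the paper.

```python
# Minimal sketch: weighted least-squares gradient on an unstructured stencil,
# with w_n = 1/|x_n - x_j|^2, solving (sum w dx dx^T) grad = sum w dx dphi.
import numpy as np

def ls_gradient(xj, phij, neighbors_x, neighbors_phi):
    dx = np.asarray(neighbors_x) - np.asarray(xj)      # (n, 2) offsets
    dphi = np.asarray(neighbors_phi) - phij            # (n,) value differences
    w = 1.0 / np.sum(dx ** 2, axis=1)                  # geometric weights
    A = dx * w[:, None]
    return np.linalg.solve(dx.T @ A, A.T @ dphi)       # (2,) gradient

# Made-up stencil: cell at the origin, phi = 2x - y sampled at three neighbors
grad = ls_gradient([0.0, 0.0], 0.0,
                   [[1.0, 0.1], [-0.2, 1.0], [-0.8, -0.9]],
                   [1.9, -1.4, -0.7])
print(grad)   # ~ [2, -1]: exact for this linear field
```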
2.2. The ALE-type discrete unified gas kinetic scheme 2.2.1. The discretized method for ALE-type DUGKS In this section, the discretized formulation of the ALE-type DUGKS is introduced in detail. Under the ALE framework, the geometric information of a cell, such as its volume, the location of the cell center, the length of the cell interface, etc., changes from time to time during the simulation. Following the handling used in macroscopic numerical methods based on the N-S equations and in the UGKS, which is also based on the Boltzmann-BGK equation [24], the mesh moving velocity v, which modifies the net flux at the cell interface, is introduced, and Eq. (1) is rewritten as d/dt ∫_{V(t)} f dV + ∮_{∂V(t)} [(ξ − v) · n] f dS = ∫_{V(t)} Ω dV. (26) The discretization scheme presented for the original DUGKS is also used in the ALE-type DUGKS; then Eqs. (12) and (13) are modified as V_j^{n+1,*} φ_j^{n+1} − V_j^{n,*} φ_j^n + Δt F^{n+1/2} = (Δt/2) [V_j^{n+1,*} Ω_j^{n+1} + V_j^{n,*} Ω_j^n] (27) and F^{n+1/2}(ξ) = Σ_b [(ξ − v_b^{n+1/2}) · n_b*] S_b* φ(x_b, ξ, t_{n+1/2}), (28) respectively, where V_j^{n+1,*} and V_j^{n,*} are the cell volumes at the (n+1)-th and n-th time steps; the superscript * means that the value of the volume at the corresponding time may not be equal to the true value of the volume at that time, as will be illustrated in the next section; v_b^{n+1/2} is the moving velocity of the cell interface at time n + 1/2; and n_b* and S_b* are the outward unit normal vector and the area of the cell interface, respectively. The computational method for these three variables will also be illustrated in the next section. Similar to the original DUGKS, in the ALE-type DUGKS the two new distribution functions φ̃ and φ̃⁺ are also introduced, and Eqs. (27) and (28) can be combined and modified as V_j^{n+1,*} φ̃_j^{n+1} = V_j^{n,*} φ̃_j^{+,n} − Δt Σ_b [(ξ − v_b^{n+1/2}) · n_b*] S_b* φ(x_b, ξ, t_{n+1/2}), (29) where the formulations of φ̃_j^{n+1} and φ̃_j^{+,n} are the same as those of the original DUGKS. The scheme for reconstructing φ^{n+1/2} at the cell interface is also the same as that of the original DUGKS, and x_a = x_b^n − ξs is used to compute the location of the interpolation point. Besides, the sign of ξ · n_b^n is used to judge the upwind direction. This yields the ALE-type DUGKS solver, which has the ability to deal with both continuum and rarefied flow problems with moving boundaries. Additionally, Laplace smoothing equations for mesh deformation [25] are solved to update the unstructured mesh under a moving boundary. Geometric Conservation Law The concept of the GCL was first well defined by Thomas and Lombard in 1979 [26]. Generally speaking, for a uniform flow, if a scheme based on a moving mesh is GCL-compliant, no disturbance should be introduced into the flow domain at any time. The 'free-stream preservation property' is the fundamental condition for any time-integration scheme on a moving mesh [21]. Following the theoretical derivation used in macroscopic numerical methods, the Boltzmann-BGK equation is considered here. Integrating Eq. (1) over a moving control volume, in semi-discrete form we have d(V_j f_j)/dt + Σ_b [(ξ − v_b) · n_b] S_b f_b = V_j Ω_j. (31) If the flow is uniform, that is f = const in each cell, and with Σ_b n_b S_b = 0, Eq. (31) can be simplified as dV_j/dt = Σ_b (v_b · n_b) S_b, (32) which is the governing equation of the GCL and means that the variation of the volume of a moving cell equals the integration of the volume flux (or "sweeping volume") over all the surfaces (named "faces" in the following context) surrounding the control volume [21]. In this paper, the mesh velocity v_b of the cell interface in Eq. (29) is given as v_b^{n+1/2} = (x_b^{n+1} − x_b^n)/Δt. (33) Based on the mesh velocity and the GCL governing equation, three discretized geometric conservation law (DGCL) compliant schemes, which decide V_j^{n+1,*}, V_j^{n,*} and S_b*, will be discussed. (1) DGCL scheme 1: we set V_j^{n+1,*} = V_j^{n+1} and V_j^{n,*} = V_j^n, (34) that is, the true values of the volumes at the corresponding times are used. Following the idea used in Ref. [27], n_b* S_b* can be calculated from the mid-step face geometry, n_b* S_b* = (n_b^n S_b^n + n_b^{n+1} S_b^{n+1})/2. (35) Through a simple mathematical derivation, Eqs. (34) and (35) automatically satisfy Eq. (32) during the mesh moving process. (2) DGCL scheme 2: we set V_j^{n,*} = V_j^n and n_b* S_b* = n_b^n S_b^n. (36) To satisfy the DGCL, the volume V_j^{n+1,*} must be modified [21]. With the first-order Euler time-discretized scheme, Eq. (32) can be rewritten as (V_j^{n+1,*} − V_j^{n,*})/Δt = Σ_b (v_b^{n+1/2} · n_b*) S_b*, (37) and V_j^{n+1,*} can be modified with V_j^{n+1,*} = V_j^{n,*} + Δt Σ_b (v_b^{n+1/2} · n_b*) S_b*. (38) (3) DGCL scheme 3: we set V_j^{n+1,*} = V_j^{n+1} and n_b* S_b* = n_b^{n+1} S_b^{n+1}; similar to DGCL scheme 2, V_j^{n,*} must be modified and can be calculated with V_j^{n,*} = V_j^{n+1,*} − Δt Σ_b (v_b^{n+1/2} · n_b*) S_b*. (39) As DGCL scheme 1 is easy to implement, and only one time level (F^{n+1/2}) is needed to calculate the flux at the cell interface, it couples naturally with the current ALE-type DUGKS. Though DGCL schemes 2 and 3 are more complex, they are a good choice for further improved ALE-type DUGKS frameworks, such as the high-order DUGKS [28] or multi-time-level implicit methods [21], for which the geometric information at intermediate time levels is much more difficult to define [29]. Besides, schemes 2 and 3 are volume-constrained schemes; face-constrained schemes [21] can also be used but are not considered in this paper.
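To make the free-stream-preservation requirement tangible, the sketch below checks the discrete identity of Eq. (32) on a single randomly deformed quadrilateral cell, using the Eq. (33)-type face velocity and the averaged mid-step face normal of Eq. (35). In 2D with linearly moving vertices this identity holds to machine precision. This is an illustrative check written for this article, not code from the paper.

```python
# Illustrative DGCL check on one quadrilateral cell: the discrete volume
# increment must equal the sum of the face sweeping volumes, Eq. (32).
import numpy as np
rng = np.random.default_rng(0)

def poly_area(p):
    """Shoelace formula; counterclockwise vertex order gives positive area."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

dt = 0.1
pn = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # cell at t^n
pn1 = pn + 0.1 * rng.uniform(-0.5, 0.5, pn.shape)                # cell at t^{n+1}

swept = 0.0
for i in range(4):
    j = (i + 1) % 4
    e_avg = 0.5 * ((pn[j] - pn[i]) + (pn1[j] - pn1[i]))     # mid-step edge vector
    n_s = np.array([e_avg[1], -e_avg[0]])                   # outward normal * length
    v_b = 0.5 * ((pn1[i] - pn[i]) + (pn1[j] - pn[j])) / dt  # Eq. (33)-type face velocity
    swept += np.dot(v_b, n_s) * dt                          # face sweeping volume

print(poly_area(pn1) - poly_area(pn), swept)  # identical to machine precision
```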
Boundary conditions In this section, the boundary conditions used in this paper are illustrated in detail. For the wall boundary condition, depending on the flow regime (continuum or rarefied), the non-equilibrium extrapolation [30] or the diffuse-scattering rule [8] is used. For continuum flow, the original distribution functions at n + 1/2 can be given by φ_w^{n+1/2}(ξ_i) = φ^eq(ρ_w, u_w) + [φ_j^{n+1/2}(ξ_i) − φ^eq(ρ_j, u_j)], (40) where the subscript w represents the wall boundary, j is the neighbor cell of the wall interface, and ρ_w and u_w are the density and velocity at the wall, respectively. For rarefied flow, the particles whose directions point away from the wall are assigned f_w^{n+1/2}(ξ_i) = f^eq(ρ_w, u_w), ξ_i · n_b > 0, (41) where n_b represents the unit vector normal to the wall pointing into the cell. And the density at the wall, ρ_w, is determined by the condition that no particles can go through the wall: Σ_{ξ_i·n_b>0} (ξ_i · n_b) f^eq(ρ_w, u_w) + Σ_{ξ_i·n_b<0} (ξ_i · n_b) f_w^{n+1/2}(ξ_i) = 0, (42) then ρ_w = − Σ_{ξ_i·n_b<0} (ξ_i · n_b) f_w^{n+1/2}(ξ_i) / Σ_{ξ_i·n_b>0} (ξ_i · n_b) f^eq(1, u_w), (43) where the distribution functions f_w^{n+1/2} with directions ξ_i · n_b < 0 are constructed following the procedure described in Sec. 2.2.1. For the test cases of flow around an obstacle, the far-field boundary condition is used in this paper. Similar to the treatment of the wall boundary, the distributions entering the domain from the boundary are set to f^eq(ρ_0, u_0), (44) where ρ_0 and u_0 are the density and velocity of the free stream, respectively. Numerical results and discussions In this section, several test cases are set up to validate the ALE-type DUGKS proposed in this paper. The first case is the GCL compliance test, which shows why it is important that the GCL be satisfied. The second and third cases are low-speed continuum flows around an oscillating circular cylinder and a pitching NACA0012 airfoil, respectively, which show that the method presented in this paper can obtain good results compared with macroscopic methods. The fourth and last cases are low-speed rarefied flows. One is a moving piston driven by rarefied gas. The other is the flow caused by a plate oscillating in its normal direction; this is a typical problem in MEMS devices, but a systematic study has not yet been completed [31]. For the continuum cases, only the governing equation of the g distribution function is solved, and the three-point Gauss-Hermite quadrature [8] is used to calculate the macro-quantities. For the rarefied cases, both the g and h distribution functions are solved, and the quadrature rules are the Gauss-Hermite and Newton-Cotes formulations [14]. Besides, the continuum flow problems are 2D flows and the rarefied flow test cases are set up as 1D flows. The ALE-type DUGKS formulation has been coded with the help of Code Saturne [32], an open-source computational fluid dynamics software of Electricite De France (EDF), France (http://code-saturne.org/cms/). We appreciate the development team of Code Saturne for their great work. The uniform flow for the GCL compliance test Uniform flow is usually used as the basic case for GCL compliance tests; some additional test cases for GCL-compliant schemes can be found in Ref. [21]. Firstly, the importance of the GCL is illustrated. For the mesh motion, the grid nodes oscillate randomly about their original positions with an amplitude of ±0.5Δx (Δx is the size of a cell). As shown in Fig. 2, the GCL error is a small value compared with other numerical errors. But, in some areas like aero-elasticity, it has been reported that the GCL error yields erroneous results [33]. So, following the suggestions proposed in Ref. [21], the DGCL-compliant schemes of the ALE-type DUGKS will be studied further in future work. Besides, DGCL scheme 1 is used in this paper as it is easy to implement. DGCL schemes 2 and 3 may be useful for developing other ALE-type DUGKS variants, like the high-order DUGKS [28] or multi-step implicit DUGKS, where the geometric information of the cell at middle time levels is not easy to define.
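Before moving to the flow test cases, the diffuse-scattering normalization of Eqs. (41)-(43) described in the boundary-condition section can be illustrated with a small 1D sketch: the wall density is simply the ratio of the incoming half-flux to the outgoing equilibrium half-flux. The velocity set and the incoming distribution below are toy stand-ins, not the solver's actual data.

```python
# Toy 1D sketch of the diffuse-scattering normalization, Eqs. (41)-(43):
# the wall density rho_w is fixed by requiring zero net mass flux.
import numpy as np

R, Tw, uw = 1.0, 1.0, 0.0                       # assumed wall state
x, w = np.polynomial.hermite.hermgauss(28)
xi = np.sqrt(2.0 * R * Tw) * x                  # discrete velocities
W = w * np.exp(x ** 2) * np.sqrt(2.0 * R * Tw)  # quadrature weights

def maxwellian(rho, u):
    return rho * np.exp(-(xi - u) ** 2 / (2 * R * Tw)) / np.sqrt(2 * np.pi * R * Tw)

f_in = maxwellian(1.0, -0.05)   # stand-in for the reconstructed incoming f

nb = 1.0                        # wall normal pointing into the gas (1D)
out = xi * nb > 0               # velocities leaving the wall
inc = ~out                      # velocities hitting the wall

flux_in = np.sum(W[inc] * xi[inc] * nb * f_in[inc])            # negative
flux_eq = np.sum(W[out] * xi[out] * nb * maxwellian(1.0, uw)[out])
rho_w = -flux_in / flux_eq      # Eq. (43): scales f^eq so the net flux is zero
print(rho_w)
```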
Continuum flow around an oscillating circular cylinder In this section, the continuum flow around an oscillating circular cylinder is simulated. For this case, the flow is incompressible and the cylinder oscillates sinusoidally in the horizontal direction; the equation of motion can be expressed as x(t) = −A sin(2πf t), (45) where x is the displacement of the cylinder in the horizontal direction, A is the amplitude and f is the oscillating frequency. Following the set-up used in Ref. [34], two key parameters are defined: the Reynolds number, Re, and the Keulegan-Carpenter number, KC. These two parameters dominate the pattern of this oscillating flow, and their expressions are Re = ρ Umax d / µ (46) and KC = Umax / (f d), (47) respectively, where d is the diameter of the cylinder, Umax is the maximum velocity of the cylinder in the horizontal direction, ρ is the fluid density, and µ is the fluid viscosity. In this simulation, Re = 100 and KC = 5 are considered. As shown in Fig. 5, over one oscillating period T it is clear that the inline force Fx is much influenced by the different Ma, especially the amplitude of Fx. Because, in the framework of the LBM, the equilibrium distribution function used in this case recovers the compressible Navier-Stokes equations, the compressibility effect might lead to some undesirable errors in numerical simulations [35]. For the tests based on the mesh size Δx and the time step Δt, as illustrated in Fig. 6 and Fig. 7, the velocity profiles are compared with Ref. [34], where ūx, ūy and x̄ (ȳ is the vertical distance for comparison) are the normalized velocity components and coordinates: x and y are the coordinates relative to the equilibrium position of the cylinder, and ux and uy are the velocity components in the horizontal and vertical directions. As shown in Fig. 9, our results generally agree well with the numerical and experimental results of Dütsch et al. [34]. For oscillating flow, the semi-empirical equation of Morison et al. [36] is widely used to estimate the inline force Fx on a body. When the circular cylinder oscillates in a stationary fluid, the time-dependent inline force Fx is expressed as Fx(t) = −(1/2) ρ d cd ẋ|ẋ| − (π/4) ρ d² ci ẍ, (50) where x is the displacement of the cylinder, and cd and ci are the drag coefficient and the added-mass coefficient, respectively. Integrating the pressure and stress over the surface of the cylinder, Fx can be calculated; then, with the help of least-squares fitting or Fourier analysis, cd and ci can be evaluated. Fig. 10 shows the time history of Fx; it is clear that the pressure is the dominating contribution to the total force. Similar behavior was also described by Dütsch et al. [34]. The fitted cd and ci, together with some other numerical results, are compared in Table II (note: the reference data are based on the finest meshes in Refs. [34] and [37]); though ci is a little higher, the present ALE-DUGKS generally obtains good results. Given cd and ci, Eq. (50) can be used to evaluate the empirical values of Fx over one oscillating period. In Fig. 11, our results agree very well with those of Dütsch et al. [34], and show rougher consistency with the empirical values of Morison et al. [36].
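The evaluation of cd and ci from Morison's equation amounts to a linear least-squares fit with the basis functions ẋ|ẋ| and ẍ. The sketch below fits synthetic force data generated from known coefficients to show the procedure; all parameter values and the noise level are illustrative, not the paper's results.

```python
# Minimal sketch: fit the drag (c_d) and added-mass (c_i) coefficients in
# Morison's equation F_x = -0.5*rho*d*c_d*v|v| - 0.25*pi*rho*d^2*c_i*a.
import numpy as np

rho, d, A, f = 1.0, 1.0, 0.8, 0.2          # illustrative parameters
t = np.linspace(0.0, 1.0 / f, 400)
v = -A * 2 * np.pi * f * np.cos(2 * np.pi * f * t)       # velocity xdot
a = A * (2 * np.pi * f) ** 2 * np.sin(2 * np.pi * f * t)  # acceleration xddot

cd_true, ci_true = 2.09, 1.45              # made-up "truth" to recover
Fx = -0.5 * rho * d * cd_true * v * np.abs(v) - 0.25 * np.pi * rho * d**2 * ci_true * a
Fx += 0.01 * np.random.default_rng(1).normal(size=t.size)  # measurement noise

# Linear least squares in (c_d, c_i)
B = np.column_stack([-0.5 * rho * d * v * np.abs(v),
                     -0.25 * np.pi * rho * d**2 * a])
cd_fit, ci_fit = np.linalg.lstsq(B, Fx, rcond=None)[0]
print(cd_fit, ci_fit)                      # ~2.09, ~1.45
```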
Continuum flow around a pitching NACA0012 airfoil

When studying the details of propulsion for insects, birds or fishes, flows around oscillating airfoils with pitching and/or heaving motions are usually used as benchmark test cases [38]. In this section only the pitching motion is considered, which also checks the performance of the present ALE-DUGKS for rotary moving-boundary problems. The airfoil profile is NACA0012, pitching about its quarter-chord. The variation of the angle of attack (AoA) α for the pitching airfoil is sinusoidal in time, where d is the pitching amplitude and f is the pitching frequency. As usual for pitching airfoils, a reduced frequency k is also defined (a sketch of its common convention is given after this section).

First, the flow around the stationary airfoil is simulated. As reported experimentally by Koochesfahani [40], vortex shedding occurs at this Reynolds number, and the equivalent reduced frequency k_equi, based on the shedding frequency f, is 8.7. Fig. 13 shows the time evolutions of the lift coefficient C_l and the drag coefficient C_d, obtained by normalizing F_y and F_x (the force components in the y- and x-directions) with the free-stream dynamic pressure and the chord; ρ_0 is the free-stream density (ρ_0 = 1.0 in this case). The amplitude of C_l is very small and, possibly because an unstructured mesh is used, its maximum and minimum differ slightly. From our test, the numerical value of k_equi is 8.23, close to the experimental value. Moreover, the mesh in the region about 0.5c wide behind the trailing edge must be refined; if the mesh in this region is too coarse, as with an O-type mesh, the simulation produces a steady flow.

Second, a series of flows over the pitching airfoil are simulated at d = 2° and 4°, respectively, with different reduced frequencies k. Fig. 14 shows the time evolutions of C_d at three values of k; the C_d are again calculated with Eq. (52). At small k, both the maximum and the minimum of C_d are larger than zero, so the flow generates a drag force. As k increases, the minimum of C_d drops slightly below zero; the flow still generates drag, but its magnitude decreases. Increasing k further, the absolute value of the minimum of C_d eventually exceeds that of the maximum, and the flow generates a thrust force. We therefore define a new force coefficient, the thrust coefficient C_T = -C_d. Fig. 15 shows the mean thrust coefficient C̄_T at different kα. In general, we obtain the same tendency as other numerical results [41,38,42] and the experimental result [40]. For d = 2°, when kα is less than 0.2 the numerical results are consistent with each other and close to the experimental values; when kα is larger than 0.2 the discrepancy becomes obvious, and some reasons described in Ref. [41] may explain it. Fig. 16 shows C̄_T at different Ma; the discrepancy at high values of kα is much larger. This demonstrates again that if a compressible flow solver is used, the free-stream Mach number must be set to a small value, at least for this pitching-airfoil test case; similar profiles were reported by Young et al. [38]. Fig. 17 and Fig. 18 show the time evolutions of C_l and C_d at two different reduced frequencies. We reproduce the phenomena illustrated by Liang et al. [39] at these two flow conditions: for the lift coefficient, although the amplitudes are very large, a zero mean lift acts on the airfoil, the shear-stress contribution is almost zero, and the pressure dominates the total lift force; for the drag coefficient, the shear-stress contribution keeps almost the same values at these two flow conditions, but the instantaneous pressure contribution declines and offsets the shear stress, so the magnitude of the total drag force is reduced.
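The following small sketch illustrates the post-processing used in this section: the mean thrust coefficient C̄_T obtained by averaging -C_d over whole periods, together with the common reduced-frequency convention k = πfc/U∞. The paper's exact definition of k was lost in extraction, so that convention, like the toy drag history, is an assumption.

```python
import numpy as np

def reduced_frequency(f, chord, u_inf):
    """Common convention k = pi * f * c / U_inf (assumed here)."""
    return np.pi * f * chord / u_inf

def mean_thrust_coefficient(t, cd):
    """C_T = -C_d, averaged over an integer number of periods
    via the trapezoidal rule."""
    return -np.trapz(cd, t) / (t[-1] - t[0])

t = np.linspace(0.0, 2.0, 801)                  # two pitching periods
cd = 0.02 - 0.05 * np.sin(2.0 * np.pi * t)**2   # toy drag history
print(reduced_frequency(f=4.0, chord=0.1, u_inf=1.0))  # ~1.257
print(mean_thrust_coefficient(t, cd))                  # ~0.005 (net thrust)
```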
A moving piston driven by the rarefied gas

In this section, a case in which a rarefied gas drives a piston is simulated. This problem has been studied by Dechristé et al. [43] with a deterministic numerical scheme coupled with an immersed boundary method, and by Shrestha et al. [6] with DSMC. Fig. 21 shows a schematic of the problem. The one-dimensional computational domain is divided into two sub-domains by a piston; the length of each sub-domain is L and the width of the piston is 2l. At the initial time the two sub-domains are filled with the same gas, with identical density ρ, pressure p and temperature T. In the right part of the computational domain, the wall temperature is higher than the gas temperature, so the pressure rises and pushes the piston from right to left. The piston stops moving once the pressures on its two faces are equal. From mass conservation and the equation of state of the gas for each part [43], where R is the gas constant, the equilibrium location of the piston follows (Eq. (55)), together with the equilibrium density and pressure in each part (a worked sketch is given after this section). Following the setup described by Shrestha et al. [6], the gas is argon, with atomic mass m_g = 6.63 × 10⁻²⁶ kg.

Profiles similar to those in Ref. [6] are obtained: the piston position for Kn = 0.31 converges to its equilibrium position much faster than that for Kn = 0.031. In addition, DSMC results fluctuate at small Kn, and several independent runs are needed to reduce the stochastic fluctuations in time-dependent problems; in contrast, our results are smooth even at small Kn. The red dashed and dotted lines shown in the figure are the theoretical solutions (Eq. (55)); the errors between the numerical results and the theoretical solutions are both less than 1%. Fig. 23 and Fig. 24 show the time evolutions of density and pressure in the two sub-domains, respectively; our numerical results are again consistent with the theoretical solutions. During the evolution, the pressure difference across the piston at Kn = 0.31 is larger than that at Kn = 0.031, which may also explain why the piston reaches its converged equilibrium position faster at the larger Kn.

In this case, the Gauss-Hermite quadrature rule is used [14] to integrate the macroscopic quantities, and code to calculate the abscissas and weights is given in Ref. [44]. Fig. 25 shows the influence of the number of quadrature points on the results. For Kn = 0.031, our tests show that 28 quadrature points give sufficient integration accuracy for the left sub-domain, but about twice as many points are needed to obtain good results on the right side. Otherwise, due to the quadrature error, the density in the right part keeps declining and the velocity does not converge to zero; consequently, even though the pressure difference across the piston converges to a very small value, the piston drifts from left to right with a small velocity and the simulation eventually blows up.
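As a worked version of the equilibrium relations referenced above (cf. Eq. (55)), the sketch below combines mass conservation with p = ρRT in each sub-domain and balances the pressures on the piston faces. The wall-temperature assignment (left gas at T0, right gas at the hot wall temperature Tw) and the numerical values are assumptions for illustration.

```python
import numpy as np

def piston_equilibrium(L, rho0, T0, Tw, R=208.13):  # R of argon [J/(kg K)]
    """Each sub-domain of initial length L and density rho0 conserves mass;
    at equilibrium the pressures balance: T0*(L + s) = Tw*(L - s),
    where s is the leftward piston displacement."""
    s = L * (Tw - T0) / (Tw + T0)
    rho_left = rho0 * L / (L - s)      # compressed cold side
    rho_right = rho0 * L / (L + s)     # expanded hot side
    p_eq = rho_left * R * T0           # equals rho_right * R * Tw
    assert np.isclose(p_eq, rho_right * R * Tw)
    return s, rho_left, rho_right, p_eq

print(piston_equilibrium(L=1.0, rho0=1e-3, T0=270.0, Tw=330.0))
# s = 0.1: the hotter side expands until the face pressures match
```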
Rarefied flow caused by a plate oscillating in its normal direction

In this section another rarefied flow test case is simulated, which has been studied by Tsuji et al. [31]. Fig. 26 shows a schematic of the problem. In a one-dimensional domain, the right wall is stationary and the left wall oscillates with the cosine function x(t) = a_w cos(ωt) (a_w = 0.1 and ω = 1 in this case); the small amplitude of 0.1 ensures that this is a low-speed flow. Following the setup described in Ref. [31], the initial length of the computational domain is d = 2π√(5/6), which is the wavelength of the sinusoidal acoustic wave with angular frequency ω in an inviscid (Euler) gas [45]. The Knudsen number is expressed through a rarefaction parameter K; two values, K = 0.5 and K = 1.0, are considered in this case.

Figs. 27-29 show the profiles of density, velocity and temperature, respectively, at ten moments in each oscillation period. Our results generally agree well with those of Tsuji et al. [31], especially at the higher value of K; without further reference data, it is hard to judge which result is more accurate. From the figures it is clear that at t/π = 1.0 the velocity profile is close to a sinusoidal shape, whereas the density and temperature profiles deviate from it significantly. Furthermore, the velocity profile deviates more from the sinusoidal shape and tends to attenuate more rapidly as K increases, especially in the right part of the wave. The Newton-Cotes rule is used to integrate the macroscopic quantities. For higher values of K, because the number of particles is small, the wave generated by the left moving wall and the wave reflected from the right stationary wall lead to a nearly singular distribution function [4]; many more abscissas are therefore needed to obtain smooth results at higher K. In our tests, 50 abscissas are enough to obtain convergent and smooth results for K = 0.5, while about 200 abscissas are needed for K = 1.0.

Conclusion

In the present work, the original DUGKS is extended to an ALE-type DUGKS. The mesh-moving velocity is introduced into the Boltzmann-BGK equation to modify the net flux through the cell interfaces, and, based on the constructed mesh-moving velocity, the remapping-free ALE method is used to develop the current ALE-type DUGKS. To exclude the GCL error, three DGCL-compliant schemes are discussed. Since the present DUGKS needs the geometry only at the middle time level to calculate the flux through the cell interface, and this information is easily defined via geometric averaging, DGCL scheme 1 is a good choice. For further developments such as high-order or multi-time-level implicit ALE-type DUGKS, where the geometry information at those time levels is not easy to define, DGCL schemes 2 and 3 would be the better choices. In the uniform-flow test case, all three schemes perform well and introduce no disturbances into the computational domain. Four low-speed flow test cases are simulated, two continuum and two rarefied, and the results of all cases are in good agreement with other numerical and/or experimental results. Therefore, for continuum flows the present ALE-type DUGKS, like the macroscopic methods based on the N-S equations, is capable of handling more complex low-speed moving-boundary problems; for rarefied flows, which are beyond the reach of macroscopic methods, the present method is also effective. Further work, such as parallel computing and implicit acceleration methods, will follow to enhance the ability of the present ALE-type DUGKS to simulate moving-boundary problems in different flow regimes.
2019-06-05T03:52:45.000Z
2019-06-05T00:00:00.000
{ "year": 2019, "sha1": "0b0f86620dade2fece77806eec2465bd31d20b57", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1906.01813", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0b0f86620dade2fece77806eec2465bd31d20b57", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
40973102
pes2o/s2orc
v3-fos-license
Addition of in-situ reduced amidinato-methylaluminium chloride to acetylenes

In the last few decades, the activation of various small molecules and unsaturated systems by low-valent main group metal complexes and their subsequent chemical transformations have attracted considerable attention.1 The chemistry of low-valent aluminium compounds,2 such as AlI and AlII species, developed considerably after the first stable species with an Al-Al arrangement, [(Me3Si)2CH]2Al-Al[CH(SiMe3)2]2, was prepared and structurally characterized by Uhl3 in 1988. However, the key milestone was the synthesis of a stable monomeric AlI species, an aluminium analogue of a carbene decorated with a crowded bidentate diketiminato ligand, reported in 2000 by Roesky and co-workers.4
In addition to synthetic routes yielding new low-valent aluminium complexes, such as AlI species, dialumanes1i,2e,5 and some metalloids/clusters,2g,h,6 new reactivity patterns of these compounds towards smaller and larger molecules7 have been reported.

An important part of these synthetic and structural studies is the activation of the C-C multiple bond via either in situ generated AlI/AlII species or the stepwise reaction with an isolable Al-Al/Al:/Al• intermediate. The in situ reduction of diiodoaluminium N,N-diketiminate in the presence of an RC≡CR moiety (R = Ph or SiMe3)8 and the stepwise activation of RC≡CR9 (R = H, Ph, Me or SiMe3) by the isolable LAlI intermediate both afforded aluminacyclopropenes. Dialuminacyclobutenes, however, were obtained with organoaluminium compounds containing sterically demanding ligands, from the in situ reduction of Me3SiC≡CSiMe3 by KC810 and from its stepwise reaction via the Al-Al fragment.7b To the best of our knowledge, only examples of dinuclear aluminium ethylene-bridged5a or doubly bridged compounds have been described, prepared from bisamido-dialane and PhC≡CH5c followed by heating of the product in benzene (1,4-dialuminacyclohexadienes11) and from PhC≡CPh, respectively. Pioneering studies of the reactivity of trialkylaluminium compounds with acetylenes activated by UV light or sodium metal have also been published.5d,e Furthermore, the reaction of dichloroaluminium amide with an excess of alkali metal acetylides (Li,12 Na and K12b) yielded ate complexes consisting of an ionic aluminium fragment carrying two or three terminal ethynyl groups involving alkali metal ions in bridging mode. Herein, we report the synthesis, structural properties and reactivity of products resulting from the in situ reduction of a chloromethylaluminium species supported by the NCN-chelating amidinato ligand in the presence of various acetylenes. Our approach stemmed from previous work,13 in which the preparation of the starting LAlMeCl (L = DippNC(Me)NDipp) and its reduction by a potassium mirror to yield LAlMe2 and L2AlMe were investigated. The probable existence of transient LAlMe species offers the possibility of using them in further studies as a trap for various unsaturated systems.

Thus, the reductive coupling (Scheme 1) of [DippNC(Me)NDipp]AlMeCl with either PhC≡CPh or 4-Me3Si-C6H4C≡CPh and potassium at room or lower temperatures yielded the novel ethylene-bridged methylaluminium amidinates 1 (31%) and 2 (27%), respectively, along with the aluminium amidinates LAlMe2 and L2AlMe (L = DippNC(Me)NDipp) as side-products that could be removed by crystallization (ESI‡). In addition, a blank test showed no reaction between the two components without potassium. The in situ activation of acetylenes within a three-component framework affording aluminacyclopropenes or dialuminacyclobutenes has been published by Roesky and others.8,10
Compounds 1 (Fig. 1 and S5 in ESI‡) and 2 (Fig. S6 and S7 in ESI‡) were fully characterized by 1H and 13C NMR spectroscopy in C6D6, elemental analyses, and XRD. The structures of compounds 1 and 2 both contain four-coordinate aluminium atoms with a distorted tetrahedral arrangement of the substituents. The main feature of both dinuclear structures is the presence of an Al-C(Ph)=C(Ph)-Al chain fragment with twisted phenyl groups (torsion angles of 50.50 and 49.95°) in the trans configuration. This structural arrangement may predetermine the nature of the further reactivity of the complex and the structural design of the products. The diphenylethylene moiety (C=C in 1: C55-C56 1.367(3) Å) serves as a linker (Al1-C55 1.985(2) and C56-Al2 1.987(3) Å in 1) between the two aluminium atoms decorated by bidentately bonded amidinates.

Two mechanisms were proposed by Roesky et al.8 for the reduction of the similar aluminium complex LAlI2 in the presence of alkynes: either the formation of the aluminium-centred radical LAlI•, which couples with alkynes, or electron transfer from K to the alkyne and formation of the radical anion K+(RCCR)•−, which displaces the iodide in LAlI2. In both pathways the same intermediate, LAlI(RCCR)•, is formed, yielding the desired product via a further electron-transfer reaction. For our system, the latter pathway can be ruled out because only one electron-transfer reaction can take place; therefore, alkyne coupling would be observed instead of the formation of 1. Based on these facts, DFT calculations were performed to elucidate a plausible reaction mechanism, considering one of the pathways described above and another possible pathway via the experimentally postulated dialumane intermediate5a (Fig. 2). The second pathway is similar to the mechanism proposed by Roesky et al.,8 involving the reaction of the methylaluminium radical INT-1 with diphenylacetylene, which generates intermediate INT-2B. The coupling of the aluminium-diphenylethylene radical INT-2B with the methylaluminium radical INT-1 forms the expected product P (P′). Similar to the first pathway, the rate-determining step of the reaction mechanism is the activation of the C≡C triple bond, which has a slightly negative ΔG. The second pathway therefore seems to be more thermodynamically favourable; however, the negligible difference in ΔG (−0.9 vs. 2.4 kcal mol−1) means that the first pathway cannot be excluded.

Finally, the two radicals occurring in the proposed reaction pathways were investigated. INT-1 is an aluminium-centred radical (Mulliken spin density at Al 82%), whereas for INT-2B the spin density is more delocalized (Mulliken spin density at Cethylene 58%) into the π-system of the phenyl ring (Fig. S15 in ESI‡).

The oxidation of 1 by molecular iodine produced a clean mixture of diphenylacetylene and [DippNC(Me)NDipp]AlMeI (Fig. S1 and S2 in ESI‡). The 1H and 13C NMR spectra of the reaction mixture are shown in Fig. S9 and S10 in the ESI.‡ The reaction mixture was used without further workup for re-reduction using the same method, and the reaction proceeded to the same product (1) in 34% yield. In addition, oxidizing 1 with oxygen gas afforded a complex mixture of products, mainly consisting of benzil and DippNC(Me)NHDipp, along with a small amount of diphenylacetylene and other by-products.

The importance of the structural arrangement of the diphenylethylene moiety is demonstrated by the reactivity of complex 1 towards small molecules (Scheme 1).
Based on the 1H (Fig. S11‡) and 13C NMR spectra (Fig. S12‡) and the EI-MS results (Fig. S14 in ESI‡), the chemical transformation of the diphenylethylene fragment by two equivalents of HCl to trans-stilbene was quantitative. This hydrogen-substitution process was accompanied by the formation of the amidine DippNC(Me)NHDipp (1H, 13C NMR and EI-MS) and an unidentified methylaluminium chloride-containing species (Fig. S13 in ESI‡) as by-products, formed by the decomposition of the initially formed [DippNC(Me)NDipp]AlMeCl.

The analogous in situ reduction of [DippNC(Me)NDipp]AlMeCl in the presence of bis(trimethylstannyl)acetylene (Scheme 2) resulted in the formation of ca. 15% of the ate complex 3, identified by NMR and XRD. In the reaction mixture, aluminium amidinate 3 was accompanied by the major by-products Me3SnSnMe3 (−109 ppm in the 119Sn NMR spectrum)14 and L2AlMe (L = DippNC(Me)NDipp).13 This structural arrangement at the aluminium atom is not entirely surprising: some examples of such rare aluminium ate complexes have been obtained from the reaction of amido-aluminium dichloride with a large excess of alkali metal acetylide.12 Most probably, the potassium atom attacks the Sn-C bond in the first step to form KC≡CSnMe3.15 The internal ethynyl groups, which are bridged by two potassium atoms, have K-C distances (K1-C32a vs. K1-C33a, see caption of Fig. 3) that differ by 0.23 Å, whereas the other K-C distances are similar to K1-C32a. The Al-C≡C fragment is not linear (angles from 170.53° to 176.15°) in 3, nor in the structures of {[LAl(C≡C-Ph)3]M}2 (M = Li, Na or K),12b which could be explained by the small energy difference between linear and non-linear Al-C≡C arrangements.

In conclusion, we have described the in situ activation of an unsaturated C≡C multiple bond via the reduction of amidinato-methylaluminium chloride in the presence of various acetylenes. The reaction was partially reversible through oxidation by iodine followed by re-reduction. The structure of the products is strongly affected by the nature of the C≡C substituents (C substituent vs. Sn substituent). Moreover, we proposed two possible reaction mechanisms for model compound 1 using DFT calculations. In addition, the use of the building-block concept was demonstrated by the reactivity of 1 towards HCl, resulting in the formation of trans-stilbene.

Financial support from the Grant Agency of the Czech Republic (grant no. P207/12/0223) is acknowledged. J.T. and F.D.P. would like to acknowledge the financial support of the Research Foundation Flanders (FWO Pegasus Marie Curie fellowship) and the Free University of Brussels (VUB).

Notes and references

Scheme 1 Synthesis of dinuclear ethylene-bridged methylaluminium amidinates (1, 2) via the three-component approach and the reactivity of 1 towards HCl and iodine.

† Dedicated to Dr Bohumil Štíbr on the occasion of his 75th birthday in recognition of his outstanding contributions to the area of boron chemistry.
‡ Electronic supplementary information (ESI) available: experimental details, spectroscopic characterization, computational details, and X-ray crystallographic data. CCDC 1406406-1406408. For ESI and crystallographic data in CIF or other electronic format see DOI: 10.1039/c5dt03128a
2018-04-03T04:57:02.291Z
2015-10-06T00:00:00.000
{ "year": 2015, "sha1": "30b6786dca8d2439b81569444c9d00311c315df1", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2015/dt/c5dt03128a", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "30b6786dca8d2439b81569444c9d00311c315df1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
255393958
pes2o/s2orc
v3-fos-license
Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples

There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet. UEs are training samples added with invisible but unlearnable noise, which have been found to prevent unauthorized training of machine learning models. UEs are typically generated via a bilevel optimization framework with a surrogate model to remove (minimize) errors from the original samples, and then applied to protect the data against unknown target models. However, existing UE generation methods all rely on an ideal assumption called label-consistency, where the hackers and protectors are assumed to hold the same label for a given sample. In this work, we propose and promote a more practical label-agnostic setting, where the hackers may exploit the protected data quite differently from the protectors. E.g., an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are rendered ineffective in this challenging setting. To tackle this challenge, we present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations. Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains. We empirically verify the effectiveness of our proposed approach under a variety of settings with different datasets, target models, and even the commercial platforms Microsoft Azure and Baidu PaddlePaddle. Code is available at https://github.com/jiamingzhang94/Unlearnable-Clusters.

Introduction

While the huge amount of "free" data available on the Internet has been key to the success of deep learning and computer vision, this has also raised public concerns about the unauthorized exploitation of personal data uploaded to the Internet to train commercial or even malicious models [16]. For example, a company named Clearview AI has been found to have scraped billions of personal images from Facebook, YouTube, Venmo and millions of other websites to construct a commercial facial recognition application [44]. This has motivated the proposal of Unlearnable Examples (UEs) [17] to make data unlearnable (or unusable) to machine learning models/services. Similar techniques are also known as availability attacks [2,41] or indiscriminate poisoning attacks [14] in the literature. These techniques allow users to actively add protective noise into their private data to avoid unauthorized exploitation, rather than putting their trust into the hands of large corporations.

The original UE generation method generates error-minimizing noise via a bilevel min-min optimization framework with a surrogate model [17]. The noise can then be added to samples in a training set in either a sample-wise or class-wise manner to make the entire dataset unlearnable to different DNNs. It has been found that this method cannot survive adversarial training, a weakness that has been addressed by a recent method [11]. In this work, we identify one common assumption made by existing UE methods: label-consistency, where the hackers are assumed to exploit the protected dataset in the same way as the protector, including the labels. This means that, for the same image, the hacker and protector hold the same label.
We argue that this assumption is too ideal: it is possible that the hackers will collect the protected (unlearnable) samples into a dataset for a different task and label the dataset into a different number of classes. As illustrated in Figure 1, an image can be labelled with different annotated labels (cat or animal), showing that an m-class (e.g., 10-class) unlearnable dataset may be exploited by the hacker as an n-class (e.g., 5-class or 20-class) dataset depending on its actual needs. We term this more generic assumption label-agnostic and propose a novel method, Unlearnable Clusters (UCs), to generate more effective and transferable unlearnable examples under this harsh setting.

In Figure 2 (a), we show that this more generic label-agnostic setting poses a unique transferability challenge for the noise generated by existing methods like Error-Minimizing Noise (EMinN) [17], Adversarial Poisoning (AdvPoison) [10], Synthetic Perturbations (SynPer) [41] and DeepConfuse [9]. This indicates that the protective noise generated by these methods is label-dependent and is rendered ineffective when presented with a different number of classes. As such, we need more fundamental approaches to make a dataset unlearnable regardless of the annotations. To this end, we start by analyzing the working mechanism of UEs generated by EMinN and AdvPoison, as they are very representative under the label-consistency setting. Through a set of visual analyses, we find that the main reason why they can break supervised learners is that the generated noise tends to disrupt the distributional uniformity and discrepancy in the deep representation space. Uniformity refers to the property that the manifold of UEs in the deep representation space does not deviate much from that of the clean examples, while discrepancy refers to the property that examples belonging to the same class are richly diverse in the representation space. Inspired by the above observation, we propose a novel approach called Unlearnable Clusters (UCs) to generate label-agnostic UEs using cluster-wise (rather than class-wise) perturbations. This allows us to achieve a simultaneous disruption of the uniformity and discrepancy without knowing the label information.

Arguably, the choice of a proper surrogate model also plays an important role in generating effective UEs. Previous methods generate UEs by directly attacking a surrogate model and then transfer the generated UEs to fight against a diverse set of target models [10,17]. This may be easily achievable under the label-consistency setting, but may fail badly under the label-agnostic setting. Even under the label-consistency setting, however, few works have studied the impact of the surrogate model on the final unlearnable performance. To generate effective, and more importantly, transferable UEs under the label-agnostic setting, we need to explore more generic surrogate-model selection strategies, especially those that can be tailored to a wider range of unknown target models. Intuitively, the surrogate model should be a classification DNN that contains as many classes as possible so as to facilitate the recognition and protection of billions of images on the Internet. In this paper, we propose to leverage large-scale Vision-and-Language Pre-trained Models (VLPMs) [22,23,30] like CLIP [30] as the surrogate model. Pre-trained on over 400 million text-to-image pairs, CLIP has the power to extract the representation of extremely diverse semantics.
Meanwhile, VLPMs are pre-trained with a textual description rather than a one-hot label to align with the image, making them less overfit to the actual class "labels". In this work, we leverage the image encoder of CLIP to extract the embeddings of the input images and then use the embeddings to generate more transferable UCs. We evaluate our UC approach with different backbones and datasets, all in a black-box setting (the protector does not know the attacker's network architecture or the class labels). Cluster-wise unlearnable noise can also prevent unsupervised exploitation against contrastive learning to a certain extent, proving its superiority to existing UEs. We also compare UC with existing UE methods against two commercial machine learning platforms: Microsoft Azure and Baidu PaddlePaddle. To the best of our knowledge, this is the first physical-world attack on commercial APIs in this line of work. Our main contributions are summarized as follows:

• We promote a more generic data protection assumption called label-agnostic, which allows the hackers to exploit the protected dataset differently (in terms of the annotated class labels) than the protector. This opens up a more practical and challenging setting against unauthorized training of machine learning models.
• We reveal the working mechanism of existing UE generation methods: they all disrupt the distributional uniformity and discrepancy in the deep representation space.
• We propose a novel approach called Unlearnable Clusters (UCs) to generate label-agnostic UEs with cluster-wise perturbations without knowing the label information. We also leverage VLPMs like CLIP as the surrogate model to craft more transferable UCs.
• We empirically verify the effectiveness of our proposed approach with different backbones on different datasets. We also show its effectiveness in protecting private data against the commercial machine learning platforms Azure and PaddlePaddle.

Related Work

Unlearnable examples (UEs) can be viewed as one special type of data poisoning attack [1,2] that aims to make model training fail completely on the poisoned (protected) dataset. UEs should be differentiated from the other two well-known attacks on deep learning models: backdoor attacks [5,13,24] and adversarial attacks [12,37]. Backdoor attacks are another special type of data poisoning attack that does not impact the model's performance on clean data, in sharp contrast to UEs. Adversarial attacks are test-time attacks that evade the model's prediction by adding small imperceptible adversarial noise to the inputs. UEs can be generated via a min-min bilevel optimization framework with a surrogate model [17], similar to the generation of strong data poisons via bilevel optimization [18,34,36,45]. The generated noise is termed Error-Minimizing Noise (EMinN) as it progressively eliminates errors from the training data to trick the target model into believing there is nothing to learn [17]. We use EMinN to denote the original UE generation method. In addition to EMinN, there are also UE generation methods that utilize adversarial noise, such as Error-Maximizing Noise (EMaxN) [19], DeepConfuse [9] and Adversarial Poisoning (AdvPoison) [10]. Recently, Yu et al. [41] unveiled a linear-separability property of unlearnable noise and proposed the Synthetic Perturbations (SynPer) method to directly synthesize linearly-separable perturbations as effective unlearnable noise. The original UE method EMinN has a few limitations.
First, the generated unlearnable noise can be removed to a large extent by adversarial training [26], although this also decreases the model's performance by a considerable amount [17]. This was later solved by a recent work published at ICLR 2022 [11], whose idea is to optimize the adversarial training loss in place of the standard training loss to produce more robust error-minimizing noise. The other limitation is its transferability to different training schemes, target models (the models to protect against) or datasets. For example, it has been found that unlearnable noise generated in a supervised manner fails to protect the dataset from unsupervised contrastive learning [14]. An unsupervised UE generation method was then proposed to craft UEs unlearnable to unsupervised contrastive learning. However, a very recent work by Ren et al. [32] demonstrates that, surprisingly, unsupervised UEs cannot protect the dataset from supervised exploitation. All the above UE methods rely on the ideal label-consistency assumption, i.e., the same (or no) labels for the protected data will be used by both the protectors and hackers. In this paper, we promote a more practical label-agnostic setting where different labels could be used by the hackers for their own purposes.

Besides UEs, strong adversarial attacks have also been proposed to protect personal data from malicious face recognition systems, such as LowKey [6] and APF [44]. They differ from UEs by making a normally trained model unable to recognize the protected images, rather than preventing the proper training of any machine learning models on the protected images. In this work, we focus on UEs rather than other data protection techniques, which we believe are of independent interest.

Proposed Method

Threat Model. We introduce two parties: the protector and the hacker. The protectors leverage a surrogate model to generate UEs for their private data before publishing it on the Internet. For example, online social network companies (or users) could convert their photos to their UE versions before posting them online. These "protected" images are then collected, without the protectors' consent, by a hacker into a dataset to train a commercial or malicious model. The protectors' goal is to make the collected dataset unlearnable, i.e., unusable for model training, while the hackers' goal is to train accurate models on the unlearnable (protected) dataset. Following prior works [11,17,25], we assume the released dataset is 100% protected, i.e., all the samples are perturbed to be unlearnable. While this assumption appears ideal, if the protection technique is reliable there is no reason not to employ it to gain more protection and privacy. Therefore, in this work we choose to focus on the unlearnable technique itself rather than changing the setting of the protectors. Following our label-agnostic setting, we also assume the hackers could exploit the unlearnable dataset with different labels; e.g., an m-class dataset could be exploited by the hacker as an n-class dataset. Here, we give an example of such a label-agnostic scenario with an online social media company that strives to protect the contents created by all of its users. The company could leverage unlearnable techniques to develop a systematic protection scheme against unauthorized data explorers. In this case, we can assume all the images uploaded by the users are protected (by the company).
Potential hackers like Clearview AI may crawl the images from the online platform, without the users' consent, into one or a set of datasets for their own purposes. Thus, the collected datasets cannot be guaranteed to have the same labels as their original versions. The protector thus needs to craft more powerful and transferable unlearnable examples to make data unexploitable against different labeling strategies.

Problem Formulation

We focus on image classification tasks in this paper. Given a clean m-class training dataset D^m_c consisting of k clean training images x ∈ X ⊂ R^d and their labels y ∈ Y, in a standard unlearnable setting [17] the protector trains an m-class surrogate model f^m_s on D^m_c. The protector can then generate an unlearnable version of the dataset as D^m_u = {(x_i + δ_i, y_i)}, where δ_i is the generated unlearnable noise, which is often regularized to be imperceptible. The unlearnable dataset D^m_u is assumed to be the dataset collected by the hackers, and will be exploited to train a commercial or malicious m-class target model f^m_t without the protectors' consent.

Label-consistency vs. Label-agnostic. The above formulation follows the standard label-consistency assumption of previous works [11,17], where the hackers collect, annotate and exploit the unlearnable dataset D^m_u exactly as it was initially released by the protectors. Under the more general and practical label-agnostic assumption, the hackers could annotate the collected dataset D^m_u differently, e.g., assigning it a different number of classes. In this case, the hackers may exploit the dataset as an n-class (n ≠ m) classification dataset to train an n-class target model f^n_t. Note that the protectors have no knowledge of the target class number n nor the target labels y_i. Arguably, the hackers may even exploit the dataset as an object detection dataset rather than a classification dataset; we will explore such a more challenging task-agnostic assumption in future work and focus on the label-agnostic setting here.

The Label-agnostic Challenge

Existing methods are not robust to label-agnostic exploitation. We test the effectiveness of existing unlearnable methods, developed under the label-consistency setting, against label-agnostic hackers. Here we consider current unlearnable methods including Error-Minimizing Noise (EMinN) [17], Adversarial Poisoning (AdvPoison) [10], Synthetic Perturbations (SynPer) [41] and DeepConfuse [9] on the CIFAR-10 dataset [21]. ResNet-18 [15] models are used for both the surrogate and target models. As shown in Figure 2 (a), these methods are extremely effective in preventing the training of machine learning models on the unlearnable dataset with the same labels. However, if the unlearnable dataset is crafted using an ImageNet surrogate model with the predicted ImageNet labels (i.e., labels predicted by the surrogate model), they fail to prevent model training with the original CIFAR-10 labels. This indicates one unique challenge of the label-agnostic setting: unlearnable noise generated to prevent one set of labels is not transferable to preventing other labeling strategies.

The working mechanism of existing UEs under the label-consistency setting. Here, we investigate the representations learned by the target model on clean vs. unlearnable examples, aiming to gain more understanding of the unlearnable mechanism. It can be observed in Figure 2 (b) that the unlearnable examples crafted by EMinN and AdvPoison tend to significantly reduce the variance at certain dimensions. There are also classes that collapse into smaller clusters, like the green class. This indicates that the noise disrupts the distributional discrepancy in the representation space to make the data "unlearnable". The other key observation is that the noise greatly shifts the points away from the normal data manifold, causing an unnecessary spread along a certain direction. This indicates that the noise also breaks the distributional uniformity of the data. Overall, it is evident that the unlearnable noise crafted by EMinN and AdvPoison cripples the learning process by distorting both the discrepancy and uniformity of the data distribution in the deep representation space. A simple numerical proxy for these two properties is sketched below.
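Below is a minimal sketch of such a proxy: within-class feature variance as a stand-in for discrepancy and the spread of class means as a stand-in for uniformity. These metrics and the synthetic features are our own illustration, not quantities defined in the paper.

```python
import numpy as np

def discrepancy(Z, y):
    """Mean within-class feature variance; UEs tend to shrink it."""
    return np.mean([Z[y == c].var(axis=0).mean() for c in np.unique(y)])

def uniformity(Z, y):
    """Spread of class means around the global mean; UEs tend to inflate
    it by pushing points off the normal data manifold."""
    centers = np.stack([Z[y == c].mean(axis=0) for c in np.unique(y)])
    return np.linalg.norm(centers - Z.mean(axis=0), axis=1).std()

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=2000)
Z_clean = rng.standard_normal((2000, 128)) + y[:, None] * 0.1
Z_ue = Z_clean.copy()
Z_ue[:, :8] *= 0.05                    # mimic variance collapse of UEs
Z_ue[:, 8] += 5.0 * (y == 3)           # mimic a class drifting off-manifold
print(discrepancy(Z_clean, y), discrepancy(Z_ue, y))
print(uniformity(Z_clean, y), uniformity(Z_ue, y))
```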
Unlearnable examples can overfit to the labels. A closer look at the visualizations in Figure 2 (b) reveals that the unlearning effects occur only within the classes, i.e., the UEs have overfitted to the class labels. This is somewhat unsurprising, as the unlearnable noise is generated via a supervised loss function (i.e., cross-entropy) defined by the labels. The noise is thus optimized to thwart the information most predictive of the class labels. However, this causes the overfitting problem and fails to work if the labels are changed. Intuitively, if we could remove the dependency on the class labels and instead exploit the clusters that naturally arise during the learning process, we could make the unlearnable noise more robust to different annotations.

Unlearnable Clusters (UCs)

Overview. Motivated by the above observations, in this work we propose to generate UEs by exploiting the clusters learned by a surrogate model and making the clusters unlearnable instead of the labeled classes. We term this approach Unlearnable Clusters (UCs) and illustrate its workflow in Figure 3.

Figure 3. The Unlearnable Clusters pipeline. The entire dataset is divided into p clusters via K-means clustering, where each cluster corresponds to a certain generator with parameters θ_i and a cluster-wise perturbation δ_i.

The key components of UC are one generator model G and one surrogate model f_s. At a high level, UC first employs a surrogate model f_s to extract the representations E of all samples in the clean dataset D_c. It then utilizes K-means [35] clustering to detect clusters from the representations E. Subsequently, for each cluster, it generates a cluster-wise perturbation δ_i using the generator G. The noise is then applied to craft the UE for each sample in D_c, with samples belonging to the same cluster added with the same cluster-wise noise δ_i. UEs crafted in this manner prevent the target model from learning meaningful clusters rather than class predictions, and are thus more general across different types of label exploitation. Next, we introduce the details of UCs.

Cluster-wise Perturbations. In our UC framework, one encoder-decoder [29] generator network is used to generate the cluster-wise perturbations, and the generator is reinitialized for each cluster. As such, we need to extract the clusters first. Here, we leverage the classic clustering method K-means [35] to detect clusters in the deep representations. In particular, the clean dataset D_c is fed into the surrogate model f_s to extract the representation matrix before the classification layer, E = [e_1, · · ·, e_k]. K-means clustering is then applied to the representation matrix to detect p clusters C = {C_1, · · ·, C_p}, with corresponding cluster centers µ_C = {µ_C1, · · ·, µ_Cp}.
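A minimal sketch of this clustering stage is given below, using torchvision's ResNet-50 with the classification head removed as the surrogate f_s (the paper's default; swapping in CLIP's image encoder would only change how the features are computed) and scikit-learn's K-means. The random tensors stand in for the clean dataset.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.cluster import KMeans

p = 10                                          # number of clusters
backbone = resnet50(weights=None)               # load pretrained weights in practice
f_s = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
f_s.eval()

images = torch.randn(256, 3, 224, 224)          # stand-in for the clean dataset
with torch.no_grad():
    E = f_s(images).flatten(1).numpy()          # k x 2048 feature matrix

km = KMeans(n_clusters=p, n_init=10, random_state=0).fit(E)
cluster_ids = km.labels_                        # cluster assignment per image
centers = km.cluster_centers_                   # mu_{C_1} ... mu_{C_p}
print(cluster_ids.shape, centers.shape)
```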
With the detected clusters C, we can now propose the following method to generate the unlearnable noise for each cluster. Intuitively, for cluster C_i we hope the unlearnable noise δ_i moves all samples in the cluster to a wrong cluster center, so as to force the model to forget the correct clusters. This is done via the following minimization framework:

min_{θ_i} L_DDU = Σ_{x_ij ∈ C_i} d( f_s(x_ij + G(σ; θ_i)), g(µ_Ci) ),    (1)

where L_DDU is our proposed Disrupting Discrepancy and Uniformity (DDU) loss, which measures the distance d(·) of the samples in C_i to a permuted (wrong) cluster center given by a permutation function g(µ_Ci); θ_i are the parameters of the generator network G; and G(σ; θ_i) generates the unlearnable noise for all samples in C_i (i.e., x_ij ∈ C_i). Please note that the above problem needs to be solved p times to obtain the cluster-wise unlearnable noise for all p clusters, and for each cluster the generator G is reinitialized with new parameters θ_i. The complete procedure is described in Algorithm 1.

Algorithm 1 Unlearnable Cluster Generation
1: Input: surrogate model f_s, distance metric d, uniform noise σ, number of clusters p, random permutation g, L∞-norm restriction ε, clean images x ∈ D_c, initialized generator G with parameters θ
2: Output: cluster-wise perturbations δ = {δ_1, · · ·, δ_p}
3: feature matrix E = f_s(x)
4: clusters and cluster centers {C, µ_C} = K-means(E, p)
5: for i in 1 · · · p do
6:   Initialize θ_i
7:   Optimize θ_i by minimizing L_DDU (Eq. (1)) over the samples in C_i, with δ_i = G(σ; θ_i) restricted so that ||δ_i||∞ ≤ ε
8: end for

CLIP Surrogate Model. How to choose a surrogate model remains an independent challenge for generating effective cluster-wise unlearnable noise. As shown in prior works, it plays a central role in the transferability of the generated UEs to different datasets or target models [17]. In the traditional label-consistency setting, the surrogate model can be a model directly trained on the original (unprotected) dataset, possibly with a different (and plausibly better or more complex) architecture. It could also be a model trained on a larger dataset with more classes, e.g., ImageNet-trained models [10,17]. We thus adopt an ImageNet-pretrained ResNet-50 as the default surrogate model of our UC. Analogous to the classification surrogate models used for generating traditional UEs, the ideal surrogate models for unlearnable clusters are powerful feature extractors that lead to accurate detection of clusters in an image dataset. We thus propose to also leverage the large-scale vision-and-language pre-trained model (VLPM) [22,23] CLIP [30] as our surrogate model. As noted above, CLIP was pre-trained on over 400 million text-to-image pairs with textual descriptions rather than one-hot labels, so it extracts extremely diverse semantics while overfitting less to the actual class labels. Concretely, we employ the image encoder of CLIP to extract the feature matrix for the clean dataset, which is then used to compute the clusters and cluster centers. We denote the version of UC equipped with the CLIP surrogate model as UC-CLIP.
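To make Algorithm 1 concrete, the sketch below implements the per-cluster loop with a deliberately tiny convolutional generator and an L2 distance to the permuted cluster center, in place of the paper's encoder-decoder generator and exact DDU distance; it reuses f_s, images, cluster_ids and centers from the clustering sketch above. The hyperparameters (learning rate, 100 steps) are illustrative assumptions.

```python
import torch
import torch.nn as nn

eps = 16.0 / 255.0
p = 10

def make_generator():
    # stand-in for the paper's encoder-decoder generator
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),   # output in [-1, 1]
    )

sigma = torch.rand(1, 3, 224, 224)                   # fixed uniform noise input
perm = lambda i: (i + 1) % p                         # g: i -> i + 1 closed loop
deltas = []
for i in range(p):
    G = make_generator()                             # reinitialize theta_i per cluster
    opt = torch.optim.Adam(G.parameters(), lr=1e-3)
    x_i = images[cluster_ids == i]                   # samples of cluster C_i
    target = torch.from_numpy(centers[perm(i)]).float()  # permuted (wrong) center
    for step in range(100):
        delta = eps * G(sigma)                       # enforces ||delta||_inf <= eps
        feats = f_s((x_i + delta).clamp(0, 1)).flatten(1)
        loss = ((feats - target) ** 2).sum(dim=1).mean()  # L2 stand-in for d(.)
        opt.zero_grad(); loss.backward(); opt.step()
    deltas.append(eps * G(sigma).detach())           # cluster-wise perturbation
```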
Experiments

In this section, we evaluate our UC methods on different datasets against different target models, simulating as many unknown cases as possible. We also examine the robustness of UCs against several advanced defenses. Finally, we demonstrate their effectiveness in attacking the commercial machine learning platforms Azure and PaddlePaddle.

For each δ_i, the generator G is reinitialized and trained for 10 epochs on the entire ImageNet dataset and 50 epochs on the other entire datasets; this is repeated p times in total. For the random permutation g(·), we simply chose i → i + 1 to build a closed loop. We consider the L∞-norm restriction ||δ||∞ ≤ 16/255 in this work. The number of clusters p is set to 10, with an analysis provided in Section 4.5.

Label-agnostic Setup. Please note that we conduct all of our experiments under the proposed label-agnostic setting. The UCs (and the UEs they are compared with) are all generated with the labels predicted by the surrogate models. The predicted labels may overlap with the ground-truth labels to some extent, but are highly inconsistent with the original labels. We report the test accuracy of the target models on the respective clean test sets.

Main Results

Effectiveness against different target models. We first compare our UC and UC-CLIP with the 5 baselines against different target models. Table 1 shows the results against ResNet-18, EfficientNet-B1, and RegNetX-1.6GF. We have the following main findings: (1) Our methods outperform the baselines by a huge margin consistently across different datasets and target models, demonstrating their superiority over the baselines. (2) Our UC-CLIP achieves better performance than UC, in most cases by a considerable margin. This proves the great potential of using CLIP as the surrogate model to protect personal data from unauthorized exploitation.

Effectiveness Against Different Labelings. An even more challenging label-agnostic setting is one in which the hacker exploits the unlearnable dataset using several different labeling strategies instead of one. A natural question, then, is: what if the number of labeled classes of the unlearnable dataset is less than our cluster number p = 10? Here, we take the 37-class Pets dataset as an example and explore the impact of the hacker re-labeling the unlearnable version of the dataset as a 5- to 36-class dataset. One possible labeling strategy is that the hacker first extracts the embeddings of the original text labels using the BERT model [8], and then clusters the embeddings into 5-37 classes using K-means, so as to construct a mapping from the old labels to the new labels. As shown in Figure 4 (a), both our UC and UC-CLIP bring the test accuracy of the target model down to a level close to random guessing (the black curve). This verifies that our methods can craft more generic UEs against the most severe label-agnostic exploitations.

Robustness to Unsupervised Exploitation. We also compare our methods with the baselines under an unsupervised contrastive learning setting against SimCLR [4]. Although our UC methods are not specifically designed for this unsupervised setting, Figure 4 (b) shows that cluster-wise unlearnable noise can also prevent unsupervised exploitation against SimCLR.

Preventing Commercial Platforms

Here, we apply our UC methods to prevent two commercial machine learning platforms, Microsoft Azure and Baidu PaddlePaddle, from exploiting the protected data; the results are shown in Table 2 and are consistent with those in Table 1. I.e., both of our methods can protect the data uploaded to the two platforms against their training algorithms. Unsurprisingly, the ViT-B/32-powered UC-CLIP method achieves the best protection performance by causing the lowest test accuracy. This suggests the effectiveness of our methods even against commercial platforms.
Resistance to Potential Defenses

In this section, we test the robustness of our UC methods to several augmentation-based defenses, including Mixup [43], Gaussian smoothing, Cutmix [42] and Cutout [7]. As can be observed in Table 3, the 4 data augmentation defenses have minimal impact on our UC and UC-CLIP methods. Gaussian smoothing appears to be the most effective defense, but the accuracy is still below 25%.

Ablation Study

Here, we analyze the sensitivity of our methods to the number of clusters p, which is set to p = 10 by default. We take the 37-class Pets dataset as an example and evaluate our UC and UC-CLIP methods under different values of p ∈ [5, 40]. As shown in Figure 5, our methods are quite stable to the varying hyperparameter p for p ≥ 10. This indicates that, as long as the clusters can cover most of the concepts in a dataset, the generated unlearnable noise can effectively prevent the model from learning the real content of the dataset. As the number of clusters increases, the noise tends to become more effective, although there is a slight variation at 35. Note that, even in the worst case at p = 5, our methods still outperform the baselines.

All the above experiments are conducted under the assumption that all samples in the dataset are protected, a commonly adopted assumption in the literature [10,17,41]. This setting is reasonable when the protectors have access to the entire dataset, e.g., when an online social media company adopts the technique to protect the contents created by all of its users. A more general case is that only a certain proportion of the users protect their data while others do not, resulting in a mixed dataset with both clean and unlearnable samples. Here we test our UC method under this setting and show the change in test accuracy with the number of clean classes in Figure 6 (a); i.e., for the mixed dataset, the rest of the classes are made unlearnable by UC. It can be inferred that the unlearnable classes contribute almost nothing to the model training, a conclusion similar to that of previous works [10,17,41]. This implies that only those who adopt the technique will be protected.

More Understanding

Why are our UCs more powerful than standard UEs against label-agnostic exploitation? As explained in Section 3.1, the idea of UCs is inspired by the effectiveness of disrupting the uniformity and discrepancy in preventing the model from learning useful information. However, this raises another question: what exactly does the target model learn? To answer these questions, here we analyze the learning curves of the target model on the clean vs. unlearnable examples separately. As shown in Figure 6 (b), as training progresses, the training accuracy on the unlearnable training samples steadily improves until it reaches 100%, but there is almost no improvement in the test accuracy on the clean test samples. This is consistent with the above experimental results: the target model has not learned the capability to perceive normal samples. Surprisingly, however, the model's accuracy on the perturbed test samples is fairly high (> 60%), considering that a normally trained ResNet-18 only achieves a test accuracy of 62.31% on the clean Pets dataset. This implies that the unlearnable noise distribution contained in the UCs has effectively concealed the real data distribution.
Conclusion

Unlearnable examples (UEs) have shown great potential in preventing hackers from using users' private data to train commercial or malicious models. A number of methods have been proposed to improve UEs' transferability and robustness to different datasets, target models and training paradigms. In this work, we identified one limitation of existing UE methods, i.e., their label-consistency assumption. To overcome this limitation, we proposed a more general setting in which the hackers can exploit the protected data with different sets of labels. We termed this more challenging setting label-agnostic, and proposed an Unlearnable Clusters (UC) technique, built on conditioned generator models, K-means clustering, and the large-scale vision-and-language pre-trained model CLIP, to craft effective UEs against a wide range of datasets and target models. We also demonstrated its effectiveness against the commercial platforms Microsoft Azure and Baidu PaddlePaddle.
2023-01-04T06:42:11.178Z
2022-12-31T00:00:00.000
{ "year": 2023, "sha1": "8e8bce055cb1cbf688a43b5cfe598159294ce39c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8e8bce055cb1cbf688a43b5cfe598159294ce39c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
30690574
pes2o/s2orc
v3-fos-license
Effects of Quercetin Administration on the Pregnancy Outcome of Diabetic Rats

Objective: To investigate the effects of diabetes and of treatment with quercetin on maternal reproductive performance and foetal growth. Study design: A total of 32 female Wistar rats were distributed into four groups: non-diabetic (G1); non-diabetic treated with quercetin (G2); diabetic (G3) and diabetic treated with quercetin (G4). At day 21 of pregnancy, each rat was anesthetized and humanely killed for laparotomy; reproductive performance, foetal and placental weights and the placental index were evaluated. Maternal and foetal data were analysed by ANOVA followed by the Tukey test. Foetal weight classification was assessed by Goodman's test. Results: Diabetes, whether or not treated with quercetin, caused placentomegaly, an increased placental index and increased rates of foetuses small for pregnancy age. Conclusion: Quercetin, administered to pregnant diabetic rats, controlled glucose levels and promoted weight gain compared to untreated diabetic rats, but it did not improve reproductive performance or foetal or placental development.

Citation: Braga CP, de Fátima Ferreira Baptista R, Peixoto FB, Momentti AC, Fava FH, et al. (2012) Effects of Quercetin Administration on the Pregnancy Outcome of Diabetic Rats. J Diabetes Metab 3:180. doi:10.4172/2155-6156.1000180

Introduction

Diabetes mellitus is a syndrome characterized by an absolute or relative deficiency of the action of insulin in target organs, resulting in the exposure of all tissues to chronic hyperglycaemia [1]. This deficiency in insulin action, which is the common basis of diabetes, causes characteristic abnormalities in the metabolism of lipids, proteins and carbohydrates, resulting in high concentrations of glucose in the blood, i.e. hyperglycaemia with metabolic disorders [2]. Streptozotocin (STZ), an antibiotic produced by Streptomyces achromogenes, is a frequently used agent in experimental diabetes. In STZ-induced type 1 diabetes, hyperglycaemia and oxidative stress have been implicated in the aetiology and pathology of disease complications [3]. The mechanism by which STZ destroys cells of the pancreas and induces hyperglycaemia is still unclear. One of the actions attributed to STZ is the depletion of intracellular nicotinamide adenine dinucleotide (NAD) in islet cells. In addition, STZ has been shown to induce DNA strand breaks and methylation in pancreatic islet cells. Chemicals with antioxidant properties and free-radical scavengers have been shown to protect pancreatic islets against the cytotoxic effects of STZ or alloxan, another agent that induces experimental diabetes [4], because they inhibit free-radical formation at initiation (by interacting with superoxide ions), the formation of hydroxyl radicals (by chelating iron ions) and lipid peroxidation (by reacting with lipid peroxyl radicals) [5].

Flavonoids are a group of naturally occurring compounds widely distributed as secondary metabolites throughout the plant kingdom. They have been recognized for interesting clinical properties, such as anti-inflammatory, antiallergic, antiviral, antibacterial, and antitumoural activities [6]. One of these flavonoids, quercetin (3,5,7,3′,4′-pentahydroxyflavone), prevents oxidant injury and cell death via several mechanisms, such as scavenging oxygen radicals [7,8], protecting against lipid peroxidation [9] and chelating metal ions [10].
Quercetin is capable of inhibiting biomolecule oxidation, and it can alter antioxidant defence pathways in vivo and in vitro [11]. Quercetin is present in many plants, such as Camellia sinensis, Allium sativum, Capsicum frutescens, Ginkgo biloba and Hypericum perforatum, which are used for the treatment of diabetes [12]. Quercetin often comprises a major component of the medicinal activity of these plants, and it has been shown in experimental studies to have numerous protective effects on the body [12].

Pregnancy complicated by poorly controlled diabetes is associated with an increased risk of abortion, congenital malformations and perinatal mortality [13]. Diabetes mellitus is a state of chronic hyperglycaemia and a major cause of serious micro- and macrovascular diseases, affecting, therefore, nearly every system in the body. Growing evidence indicates that oxidative stress is increased in diabetes due to the overproduction of reactive oxygen species and the decreased efficiency of antioxidant defences, a process that starts very early and becomes worse over the course of the disease [14].

The aim of the present study was to investigate the effects of diabetes and of treatment with quercetin on the maternal reproductive performance and the foetal and placental development of rats.

Animals and experimental groups

Six-week-old female and male Wistar rats, weighing approximately 190 g and 220 g, respectively, were obtained from São Paulo State University (UNESP) at Botucatu, São Paulo State, Brazil. During the 3-week acclimatization period and the experimental exposure periods, the rats (four per cage) were maintained in an experimental room under controlled conditions of temperature (22 ± 2 °C) and humidity (50 ± 10%), with a 12-hour light/dark cycle and ad libitum access to a commercial diet (Purina® Rat Chow, Purina, Brazil) and tap water. A total of 32 rats were randomly distributed into four groups (n = 8 each): G1 = non-diabetic, G2 = non-diabetic treated with quercetin, G3 = diabetic and G4 = diabetic treated with quercetin. The Experimental Ethical Committee for Animal Research of the Botucatu School / UNESP approved the protocols used in this study.

Induction of diabetes

Diabetes was induced only in the female rats, with streptozotocin (Sigma Chemical Company, St. Louis, Missouri, United States). Streptozotocin was dissolved in a citrate buffer (0.1 mol/l, pH 6.5) and administered by intravenous (i.v.) injection at a dose of 60 mg/kg bodyweight. The diabetic state was confirmed by a blood glucose concentration > 220 mg/dL [15], and the rats were then subjected to mating. To verify pregnancy, vaginal washing was performed: the tip of an automatic pipette containing 10 μl of 0.9% saline was introduced into the vagina of each female and the fluid was then aspirated. Indicators of pregnancy, such as the presence of sperm, were used to define gestational day zero (GD 0) [16].

Administration of quercetin

Once pregnancy was established, quercetin was administered via intragastric gavage. Animals belonging to groups G2 and G4 received the flavonoid quercetin (Q SIGMA.-0125) at a dose of 50 mg/kg body weight. The pregnant rats received the flavonoid throughout pregnancy at 7-day intervals (on days 0, 7, 14 and 20 of pregnancy).
The dose and administration interval of quercetin were based on a protocol adopted in our laboratory [17], which found that quercetin administered at 7-day intervals can have beneficial effects on biochemical parameters of diabetic rats.

Evaluation of the pregnancy at term

At day 20 of pregnancy, the dams were weighed to determine body weight gain (maternal weight at day 20 compared to day 0 of pregnancy) and anaesthetized with sodium pentobarbital (Hypnol 3%) for laparotomy. The uterus was removed and weighed, and the ovaries and uterine contents were examined to determine the number of corpora lutea and implantation sites, resorptions (embryonic death), and the number and position of viable or dead foetuses. The rate of embryonic loss before implantation was calculated as (number of corpora lutea − number of implantations) × 100/number of corpora lutea, and used as a measure of failed conceptions or pre-implantation losses. The percentage of embryonic loss after implantation was calculated as (number of implantations − number of live foetuses) × 100/number of implantations, which was used as a measure of the abortifacient effect or to identify post-implantation loss [18]. Immediately after exploratory laparotomy, all viable foetuses and placentas were weighed to determine the placental index (placental weight/foetal weight). The foetuses were classified according to the mean ± SD of foetal weights in the non-diabetic group (G1): small for pregnancy age (SPA) when the weight was lower than the G1 mean − 1.7 SD; appropriate for pregnancy age (APA) when the weight fell within the G1 mean ± 1.7 SD; and large for pregnancy age (LPA) when the weight was greater than the G1 mean + 1.7 SD [19].
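These reproductive indices are simple arithmetic on the litter counts and weights. The following Python sketch illustrates the calculations exactly as defined above; the variable names and example values are hypothetical and only for illustration.

```python
def pre_implantation_loss(corpora_lutea, implantations):
    """Pre-implantation loss (%) = (corpora lutea - implantations) x 100 / corpora lutea."""
    return (corpora_lutea - implantations) * 100.0 / corpora_lutea

def post_implantation_loss(implantations, live_foetuses):
    """Post-implantation loss (%) = (implantations - live foetuses) x 100 / implantations."""
    return (implantations - live_foetuses) * 100.0 / implantations

def placental_index(placental_weight, foetal_weight):
    """Placental index = placental weight / foetal weight."""
    return placental_weight / foetal_weight

def classify_foetus(weight, g1_mean, g1_sd):
    """Classify a foetus relative to the non-diabetic group (G1) mean +/- 1.7 SD."""
    if weight < g1_mean - 1.7 * g1_sd:
        return "SPA"  # small for pregnancy age
    if weight > g1_mean + 1.7 * g1_sd:
        return "LPA"  # large for pregnancy age
    return "APA"      # appropriate for pregnancy age

# Hypothetical example for a single dam:
print(pre_implantation_loss(corpora_lutea=12, implantations=10))   # 16.67 %
print(post_implantation_loss(implantations=10, live_foetuses=8))   # 20.0 %
print(classify_foetus(weight=4.1, g1_mean=5.0, g1_sd=0.4))         # 'SPA'
```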
Measurement of glycaemia

Biochemical parameters were measured using spectrophotometric methods with commercial enzymatic kits (CELM - Modern Laboratory Equipment Company, São Paulo, Brazil).

Statistical analysis

The results are reported as mean ± SD. All data were statistically analysed using analysis of variance (ANOVA) followed by the Tukey test. Goodman's test was used for the foetal weight classification. Statistical significance was set at p < 0.05 [20].

Results

In non-diabetic rats (G1) and non-diabetic rats treated with quercetin (G2), normoglycaemia was confirmed, with mean glucose values around 110 mg/dL; in diabetic rats (G3) hyperglycaemia was confirmed by mean glucose concentrations around 309 mg/dL, while diabetic rats treated with quercetin (G4) had mean glucose concentrations around 164 mg/dL. A comparison between the diabetic rats (G3) and the diabetic rats treated with quercetin (G4) showed that maternal serum glucose declined significantly in the treated group (p < 0.05) (Table 1).

Table 1 presents the maternal reproductive performance (values are mean ± SD, analysed by ANOVA followed by the Tukey test; means followed by different letters indicate significant differences between groups, p < 0.05; G1 = non-diabetic, G2 = non-diabetic treated with quercetin, G3 = diabetic, G4 = diabetic treated with quercetin). The mean number of corpora lutea in G3 and G4 did not differ from that of any of the other groups. Diabetes (G3) and diabetes treated with quercetin (G4) did not cause a significant decrease in the number of implantations or live foetuses relative to groups G1 and G2. Regarding maternal weight gain, G1 and G2 showed higher values that were not significantly different from each other (p > 0.05) but were significantly different from G3 and G4. The diabetic group treated with quercetin showed a higher weight gain than the untreated diabetic group, but did not reach the values observed for the control groups. No significant difference (p > 0.05) in the rate of pre-implantation loss was observed among the groups (G1, G2, G3 and G4), whereas the rate of post-implantation loss differed significantly between the control groups (G1 and G2) and the diabetic groups (G3 and G4).

Table 2 shows that the foetal weights were significantly lower in G3 and G4 than in G1 and G2, and that the placental weights and placental indices were significantly higher (p < 0.05) in G3 and G4 than in the control groups (G1 and G2). There was an increase in the proportion of SPA foetuses in the diabetic group and in the diabetic group treated with quercetin relative to the G1 and G2 groups (Table 3).
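As a rough illustration of the statistical pipeline described above (one-way ANOVA across the four groups followed by Tukey's post-hoc test), a minimal sketch in Python is given below. The group arrays are hypothetical placeholders, and Goodman's test for the proportion data is not part of standard scientific Python libraries, so it is omitted here.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical maternal glycaemia values (mg/dL), n = 8 per group.
g1 = np.array([108, 112, 109, 111, 110, 107, 113, 110])  # non-diabetic
g2 = np.array([109, 111, 108, 112, 110, 109, 111, 110])  # non-diabetic + quercetin
g3 = np.array([305, 312, 298, 315, 310, 301, 320, 311])  # diabetic
g4 = np.array([160, 168, 158, 170, 165, 162, 166, 163])  # diabetic + quercetin

# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(g1, g2, g3, g4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post-hoc test for all pairwise group comparisons.
values = np.concatenate([g1, g2, g3, g4])
labels = ["G1"] * 8 + ["G2"] * 8 + ["G3"] * 8 + ["G4"] * 8
print(pairwise_tukeyhsd(endog=values, groups=labels, alpha=0.05))
```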
Discussion

In our study, the diabetic rats (G3) showed a significant increase in blood glucose levels compared with the control rats. The hypoglycaemic effect of quercetin may be due to its antioxidant properties [21,22].

The weight of a pregnant female is an indirect measure of the degree of maternal and foetal impairment: insufficient weight gain can accompany insufficient intrauterine growth [23,24]. In contrast, in pregnancies complicated by diabetes, maternal weight gain is often exaggerated and associated with macrosomia and polyhydramnios [25,26]. In rats with drug-induced diabetes, excessive maternal weight gain and macrosomia are not easily reproduced; one possible explanation lies in the developmental phase of the foetuses at day 20, when foetal weights differ [27,28]. Another explanation is that these disorders may be associated with hyperglycaemia in the intrauterine environment [25]. Moreover, it has been demonstrated that diabetes leads to thickening of the membranes, limiting the intervillous space [26], with a consequent reduction in blood flow and maternal-foetal exchange; the flow of blood to the placenta in diabetic rats is reduced by 50% in late pregnancy [18], restricting the supply of oxygen and nutrients to the foetus, which can cause lower birth weight.

Experimental studies suggest that maternal hyperglycaemia results in a severe renal overload, with the elimination of large amounts of water and electrolytes, and dehydration, culminating in weight loss or difficulty in gaining weight [29,30]. Persistence of a severe hyperglycaemic state in the intrauterine environment prevents the foetal pancreas from obtaining an adequate energy intake from glucose, and restricts development of the foetus between the 18th and 21st days of pregnancy [14]. This situation was confirmed in the present study: treatment with quercetin failed to match the weight gain values of the control group, although the weight of these rats did increase significantly (p < 0.05) compared to the diabetic group.

Miscarriages are frequent in women with uncontrolled diabetes [19]. The diabetic rats showed a similar outcome, with higher numbers of resorptions and increased rates of post-implantation loss, leading to decreased numbers of live foetuses. The average foetal weight was lower in the G3 and G4 groups than in the control group, and the average placental weight in G4 was improved relative to G3. Treatment with quercetin in the diabetic animals failed to prevent the development of these complications.

Placental changes have been demonstrated in diabetic women, including the predominance of endarteritis, the thickening of membranes and restriction of the intervillous space [19,28], with a consequent reduction in blood circulation and maternal-foetal exchanges. Total blood flow to the placenta in diabetic rats is decreased by 50% in the last days of pregnancy [31], restricting the levels of oxygen and nourishment reaching the foetus. The increase in placental weight is a compensation mechanism that attempts to increase the surface area for maternal-foetal exchange. However, this increase in placental weight has been shown to be insufficient, hindering foetal nutrition [19]. The high value of the placental index in the diabetic group confirmed placental alteration. As a result, there was a higher proportion of small for pregnancy age foetuses in the diabetic groups, further confirming the existence of placental alterations in maternal-placental-foetal exchanges [32,33]. Findings in placental morphometry may also help to explain the increased rate of intrauterine growth restriction in foetuses of diabetic and quercetin-treated diabetic dams (characterized by a higher proportion of foetuses classified as small for pregnancy age).

Although quercetin improved glucose concentrations, it was not able to improve reproductive performance or placental development. The main stimulus for the changes in these parameters appears to be hyperglycaemia in the intrauterine environment during the first week of pregnancy, which favours the development of oxidative stress and creates adverse conditions for implantation and foetal development. Quercetin may therefore have exerted its beneficial glucose-lowering effect later in pregnancy without reducing glucose in the first week, which would have left conditions unfavourable for reproductive and placental development.

Conclusion

Quercetin administered to pregnant diabetic rats controlled glucose levels and promoted weight gain compared to untreated diabetic rats, but it did not improve reproductive performance or foetal or placental development.
An Experimental Study on the Effectiveness of Disclosing Stressful Life Events and Support Messages: When Cognitive Reappraisal Support Decreases Emotional Distress, and Emotional Support Is Like Saying Nothing at All

How can we best support others in difficult times? Studies testing the effects of supportive communication have revealed mixed findings. The current study focuses on the effects of supportive communication following different disclosure styles, and includes outcome measures that assess emotional well-being. Hypotheses were tested in a 2 (disclosure style: cognitive reappraisal disclosure vs. emotional disclosure) × 3 (support message: cognitive reappraisal response vs. socio-affective response vs. no response) between-subjects factorial design. Receiving a cognitive reappraisal response, rather than a socio-affective response or no response, decreased emotional distress in the emotional disclosure group. Support messages showed no effects in the cognitive reappraisal disclosure group. Although socio-affective responses were positively evaluated, cognitive reappraisal responses may be more effective during emotional upheaval because they provide a positive way out of negative emotions.

Introduction

A little comfort can go a long way during moments of distress. Research has shown that social support may improve coping with stressful events, positively affect relationships, and decrease levels of emotional distress (for an overview of the literature, see [1]). However, the question remains: what do we need to say to let others benefit most from our support? Is it most important to acknowledge and understand one's feelings, or should we help the person to change perspective by portraying the event as a learning experience and focusing on the future?

The current study has an interdisciplinary character, combining knowledge from two fields of research: communication research on support messages and social psychology literature on processing and disclosing trauma. We propose that the effects of a support message might depend on the disclosure style of the individual in need. Previous research showed that the psychological impact of an event depends not only on the type of support individuals receive, but also on one's personal appraisal of the experience [2,3]. Psychological research suggests that after a traumatic or stressful experience individuals go through different phases of appraisal and emotional arousal, and these phases influence one's needs for support [4]. In line with these findings, we put forward that support messages should match an individual's disclosure style.

Furthermore, we aim to extend previous research on supportive communication by assessing the effects of social support messages not only through indications of self-reported helpfulness, but also with regard to emotions and emotion-related symptoms. Most previous studies on support messages assessed the effectiveness of support messages by self-reported evaluations of helpfulness or perceived affective change. However, perceptions of helpfulness do not necessarily correlate with actual relief of emotional distress [5,6,7]. In order to move research in this domain beyond indications of what individuals think a conversational partner should say, we compare these with actual psychological measures of emotional distress in the present study. The next section starts by providing an overview of empirical research on supportive communication.
We then put forward several propositions regarding the interaction between disclosure style and supportive communication, followed by a discussion of the reliance on introspective outcome measures. We describe an experimental study to test the effects of the fit between disclosure style and support message on both perceptions of helpfulness (i.e., evaluations of appropriateness, pleasantness, and supportiveness) and measures of emotional distress (i.e., emotions and emotion-related symptoms).

Supportive communication

What makes supportive communication effective? Research examining this question has increased our understanding extensively by assessing the type of support provided and its perceived helpfulness in conversations about a stressful event [8]. However, some findings across studies appear mixed, e.g., [8][9][10][11][12][13]. The research field mainly consists of two types of approaches.

Departing from a naturalistic framework [14,15], descriptive typologies of support behaviors were developed based on retrospective self-reports. In these retrospective self-reports, individuals are asked to recall the responses they received from others following a stressful life event and evaluate the helpfulness of each response, e.g., [16][17][18]. This approach has yielded insight into helpful and unhelpful behaviors. For example, a study on cancer patients classified 'emotional support behaviors', 'being physically present', and 'showing empathy and concern' as helpful behaviors, and 'critical responses' or 'minimization' as unhelpful behaviors [17]. The difficulty is, however, that different contexts have generally yielded different typologies, and therefore findings are not easily generalized across situations.

Research based on the deductive message perception paradigm [14,15] tested perceptions of helpfulness of pre-defined support messages across contexts. In this research paradigm, the researcher presents an imaginary scenario or dialogue (see [19] for an exception that deals with actually experienced situations), followed by different, often emotional, support messages. Participants are asked to indicate the helpfulness, effectiveness, appropriateness or sensitivity of each support message, e.g., [20] (Study 2), [21,22]. Across studies, this paradigm has also yielded different results; for instance, giving advice is in some situations perceived as helpful, whereas in others it is not.

To overcome these mixed findings, some researchers proposed 'matching models' according to which supportive interactions should match the coping demands created by a certain stressor. For example, Cutrona and colleagues distilled five types of support: emotional support, network support, esteem support, tangible support, and informational support [23][24][25] (for a slightly different model see [26]), and four dimensions of life stressors: desirability (i.e., the intensity of negative emotions the event provokes), controllability (i.e., the preventability of the consequences of the event), the duration of the consequences, and its life domain (i.e., loss or threat of assets, relationships, achievements, social roles [23]). They propose that the type of support should match the demands produced by the stressful event. A number of studies indeed found the proposed effects, e.g., [11,12,27]. However, others did not, e.g., [9,28,13].
Disclosure style

One reason for the observed inconsistencies in findings across studies may be that most studies focused on characteristics of the event (as categorized by the researcher) and the type of support received, but did not take into account individual differences in appraisal and disclosure style. These might, however, be of interest, considering that individuals who experience a negative event use different emotion regulation strategies [29] and have their own interpretation of its emotional load, controllability, and consequences [2]. Although, to our knowledge, the matching between support type and disclosure style has not received any empirical attention, Jacobsen already underscored the necessity of a match between support messages and the phase of disclosure in 1986 [30]. He suggests that support should match 'stressor sequences' [31]. Specifically, a crisis situation (i.e., when something occurs or changes abruptly that elicits emotional arousal) especially demands emotional support, whereas in times of transition (i.e., a period of personal and relational change between the individual and the stressor) cognitive support is more appropriate, and in a deficit state (i.e., a situation in which someone's life is defined by chronically excessive demands) someone is in need of material support and direct action to restore the balance between needs and tangible resources. Related to this point, Rimé has proposed that coping with stressful events involves different regulation needs: socio-affective needs (i.e., emotional support, comforting) during the emotional episode, cognitive needs (i.e., reorganization of motives, re-creation of meaning) to overcome perseveration, and action needs in the form of creating new experiences [4]. Hence, since processing a stressful life experience follows a sequence of different coping phases, as Jacobsen (1986) suggested, we propose that support messages are required to match the current appraisal of the person in need.

Although until now this proposition has not been tested explicitly in the context of supportive communication, more information regarding the effects of disclosing stressful life events can be found in the expressive-writing literature. Expressive writing is a form of expressive therapy aimed at helping individuals to overcome emotional trauma. In expressive writing experiments, participants express their deepest thoughts and feelings about a stressful event that has affected them and their life (for the explicit assignment, see [32]). Research has shown that such disclosure about emotional life events positively affects psychological and physical health over time, e.g., [32][33][34][35][36][37]. In line with the idea of Jacobsen and Rimé that processing a stressful event follows a sequence of different phases and needs, Lepore, Greenberg, Bruno, and Smyth suggested that expressive writing enables three important underlying mechanisms for coping with trauma: directing attention to the stressor and related emotions, habituation to the emotions, and cognitive restructuring [38]. Cognitive restructuring of the experience appears especially valuable in this psychological process, since the influence of stress on health outcomes is mediated by appraisal [2]. Hence, expressive writing initially promotes habituation to emotions and coping with demands related to the stressor, which in turn frees mental capacity to positively reinterpret the stressor and its relation to the self.
Emotional disclosure therefore seems to facilitate cognitive reappraisal [39]. In an experimental test of this idea, Lu and Stanton used different disclosure assignments, focused on emotional disclosure, cognitive reappraisal, or a combination of both [39]. With the emotional disclosure instructions, participants had to focus on their deepest emotions about a current most stressful experience that had affected them and their lives. The cognitive reappraisal assignment was mainly focused on perceptions of the stressful event, consequences of the event, challenges and opportunities arising from the event, and cognitive reappraisal of coping strategies. Results revealed that cognitive reappraisal writing reduced physical symptoms, emotional disclosure buffered a decrease in positive affect over time, and the combination of emotional disclosure and cognitive reappraisal was most effective for both physical symptoms and positive affect. However, to date no study has tested what type of social support is most valuable when individuals are emotionally aroused by thinking about the experience (i.e., a crisis situation) or when they are cognitively restructuring the event (i.e., in times of transition). We propose that support is most effective when it matches the disclosure style of the recipient.

The first goal of the present study was thus to empirically test the proposition that social support messages should fit the recipient's disclosure style. Based on the above reasoning, we propose that individuals with an emotional disclosure style benefit especially from a socio-affective support message, and that individuals with a cognitive reappraisal disclosure style benefit most from a cognitive reappraisal support message (main hypotheses).

Evaluations of helpfulness

The second goal of this study is to extend previous studies by testing the effects of support messages through participants' emotions, in addition to self-reported perceptions of helpfulness. Thus far, most studies assessed the effectiveness of social support messages using self-report ratings of helpfulness (or sometimes 'sensitiveness', 'supportiveness', 'appropriateness', 'effectiveness'; e.g., [8,40]) or perceived affective improvement, e.g., [19,41,42]. These studies have increased our knowledge of support messages, but introspective procedures have their limits, simply because not all mental processes are accessible to people. For instance, when individuals are asked to report why they made a certain choice or how they arrived at a certain judgment, the resulting reports are often confabulated [5,6]. People may underestimate the helpfulness of unpleasant strategies in particular. For instance, a study on public speaking showed that talking about feelings was related to less fear of speaking, but was not related to self-reported supportiveness [7]. Hence, although individuals may perceive some types of support as less helpful or unhelpful, there are conditions under which this support may still be good for them, i.e., have a positive impact on their emotional well-being. This may hold true especially for socially undesirable support strategies. For example, socio-affective responses in which a conversational partner affirms an individual's emotions may positively affect perceptions of relatedness to the response provider but may not necessarily be most beneficial in terms of emotion and health outcomes.
The current study is a first attempt to increase insight into the effects of social support by including evaluations of the support message as well as relatedness to the support provider, together with measures of emotional well-being, i.e., emotions and emotion-related symptoms [43]. Since there is a lack of knowledge on the relationship between support message evaluations (i.e., appropriateness, pleasantness, supportiveness), relatedness to the support message provider, and emotional well-being in the context of support messages, we introduce a guiding research question (RQ): What is the relationship between perceptions of helpfulness, relatedness and emotional distress, and is this relationship moderated by the match of disclosure style and support message?

Overview

Previous studies have investigated supportive communication, but the match with an individual's disclosure style has not been examined, and findings beyond self-reported perceptions of helpfulness are lacking. We propose an experiment to test the combined effects of disclosure style (emotional disclosure vs. cognitive reappraisal) and support messages (cognitive reappraisal (CR) response vs. socio-affective (SA) response vs. no response) on support message evaluations (i.e., appropriateness, pleasantness, and supportiveness); the extent to which one feels related to the response provider; emotions; and emotion-related symptoms.

Design and Participants

Hypotheses were tested in a 2 (disclosure style: cognitive reappraisal vs. emotional disclosure) × 3 (support message: cognitive reappraisal (CR) response vs. socio-affective (SA) response vs. no response) between-subjects factorial design. There were 122 individuals who participated in this study. Most of them were undergraduate students and received credits for participation. Seven respondents were excluded from data analysis because they misunderstood the disclosure assignment. Our final sample consisted of 115 respondents (87 females and 28 males), with a mean age of 22 years (SD = 8.42). The distribution of male and female participants was almost equal per experimental condition (emotional disclosure style, 14 males and 40 females; cognitive reappraisal disclosure style, 14 males and 47 females; no response, 9 males and 29 females; SA response, 10 males and 32 females; CR response, 9 males and 26 females).

Procedure and Independent Variables

All respondents were invited to participate in a study about written disclosure. Half the respondents received disclosure instructions focused on emotional expression and the other half received instructions facilitating cognitive reappraisal (for the exact writing instructions, see [39]). The emotional disclosure group was instructed to write for 15 minutes about their deepest emotions about a current most stressful event that had affected them and their lives. They were asked to let go and explore their feelings and thoughts about it. Participants assigned to the cognitive reappraisal condition were instructed to write for 15 minutes about the positive and negative consequences of a current most stressful event, their perceptions of the stressful event, the challenges and opportunities arising from the event, cognitive reappraisal of their coping strategies, and their positive thoughts about the stressor.
After the disclosure assignment, participants were first told that another respondent would read and react to their story (only in the conditions where participants received a SA or CR response) and then answered filler questions and filled out demographics, to make it plausible that another participant had had enough time to read and respond to their story in the meantime. Subsequently, respondents randomly received a response to their story on their computer screens (except for the control group, who received no response), purportedly from another anonymous participant. This response was manipulated as a socio-affective response or a cognitive reappraisal response. Responses were matched in length and in the valence of 'person centeredness', i.e., the extent to which the feelings and perspective of a distressed other are explicitly acknowledged, elaborated, and granted legitimacy [8].

The difference in response type (socio-affective response vs. cognitive reappraisal response) was based on the regulation needs described by Rimé, whereby the socio-affective response is especially focused on social integration by comforting, understanding, and legitimating feelings [4]. Participants in the socio-affective response condition read the response: 'Dear writer, thanks for telling me your story. I think it was an impressive story. It must have been intense to experience something like that. I experienced something quite similar, and I recognize a lot in your story. I understand how it must have felt and the impact it must have had on your life. Take care.'

The cognitive reappraisal response, in contrast, focused on the re-creation of meaning, i.e., learning from and coping with the experience in order to change motives or goals. Respondents in the cognitive reappraisal response condition read: 'Dear writer, thanks for telling me your story. I admire the way you dealt with this situation. Learning from these experiences is very important. Whenever you experience something similar, you will know better how to deal with it. I wish you good luck in the future.'

After they received this support message, we measured participants' emotions and emotion-related symptoms. Subsequently, except for the control group, participants evaluated the support message they received (i.e., appropriateness, pleasantness, supportiveness) and indicated whether they felt related to the anonymous person who provided the support message.

Disclosure assignment

To confirm that the two different writing assignments elicited different disclosure styles, the stories participants wrote during the experiment were analyzed with the Dutch LIWC computerized text analysis program [44,45]. The software is designed to analyze written text on a word-by-word basis. The program calculates the percentage of words in the text that match different language dimensions, such as emotional, cognitive, structural, and process components. The proportion of words indicating each dimension was counted for each participant. One would expect the cognitive reappraisal disclosure assignment to elicit the use of more cognitive mechanism words (words indicating causation, e.g., because, depend; insight, e.g., know, explain; discrepancy, e.g., should, would; inhibition, e.g., block, conflict; tentativeness, e.g., perhaps, might; and certainty, e.g., always, never) than the emotional disclosure assignment, and the emotional disclosure assignment to elicit more words indicating negative emotions (e.g., sad, hate, hurt, guilty) (word categories LIWC; [44,45]) than the cognitive reappraisal assignment. Previous studies support the reliability and validity of LIWC-based analyses, e.g., [46,47].
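LIWC itself is proprietary, but the per-category word proportion it reports is straightforward to reproduce. The sketch below, with a deliberately tiny hypothetical word list standing in for the real LIWC dictionaries (which are much larger and also support stem and wildcard matching), illustrates the computation.

```python
import string

# Hypothetical mini-dictionary; real LIWC categories contain hundreds of entries.
NEGATIVE_EMOTION = {"sad", "hate", "hurt", "guilty"}

def category_percentage(text, category):
    """Percentage of words in `text` that belong to `category`."""
    words = [w.strip(string.punctuation).lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in category)
    return 100.0 * hits / len(words)

story = "I was sad and hurt, and I still feel guilty about it."
print(f"{category_percentage(story, NEGATIVE_EMOTION):.1f}%")  # 25.0%
```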
Support message

To verify that the social support responses differed in socio-affective level, three items measured perceived socio-affective characteristics (validating, soothing, comforting; Cronbach's α = .86), for example: 'Was the response from the other person comforting?'.

Dependent Measures

Emotions

Emotions were measured with the Symptom/emotion checklist: a state measure [43], including 5 items (e.g., sad) on a 5-point scale (Cronbach's α = .83). Positive emotion items were recoded. Higher scores imply more negative emotions.

Emotion-related symptoms

A 12-item symptom measure (Symptom/emotion checklist: a state measure [43]) was used to assess the emotion-related symptoms respondents felt after disclosing their story and receiving the support message. Participants rated on a 5-point scale whether they felt each symptom ('Now, at this moment, I have a headache'; Cronbach's α = .81). Ratings were summed and averaged across items. Higher scores indicate more emotion-related symptoms.

Support message evaluation

Three items were included to assess response evaluation (appropriateness, pleasantness, supportiveness; Cronbach's α = .87). In previous studies, single-item outcome variables have frequently been used to measure message quality, for example appropriateness, effectiveness, or supportiveness [21,22]. Item example: 'Did you perceive the reaction of the other person to your story as supportive?'. All items were answered on a 5-point scale from 'Not at all' to 'Very much'.

Perceived relatedness

Participants filled out a 4-item measure on a 4-point scale to assess perceived relatedness to the person who wrote the response (e.g., 'I feel that I associate with the person who read and responded to my story, in a very friendly way'). These questions were based on the relatedness subscale of the Autonomy, Competence, and Relatedness in Exercise scale [48]. The scale was internally consistent (Cronbach's α = .85). See S1 Appendix for the items of all dependent variables.

Covariates

Because it is plausible that a very recent event has more impact on well-being than something that happened years ago, participants were asked when the event occurred. Participants could respond by choosing one of six categories, ranging from 'this year' to 'more than 8 years ago'. For 35.7% of the participants the event took place in the last year, for 15.7% about a year ago, for 14.8% about two years ago, for 13.9% about 3 or 4 years ago, for 12.2% about 5 to 8 years ago, and for 7.8% more than 8 years ago. To examine a potential influence of the topic participants wrote about, all stories were coded by subject. The first author coded the stories based on the Life Events Inventory [49], in which life events are ranked by the severity of the stress they elicit. The second author coded 50% of the stories to test intercoder reliability, which was high (Krippendorff's α = .94). Since most of our participants were undergraduate students, the ranking was based on results of LEI scales tested among student samples [50,51]. See S2 Appendix for the codebook.
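The internal consistencies reported for the scales above are Cronbach's α values. As a reminder of what is being computed, here is a minimal sketch of Cronbach's α from a respondents-by-items rating matrix; the rating matrix shown is hypothetical.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a matrix with one row per respondent, one column per item:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-point ratings from six respondents on a 3-item scale.
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```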
Ethics Statement

All procedures were approved by the Department of Communication Science of the VU University Amsterdam, because 1) no adverse events were expected based on the current expressive writing literature, 2) the experimental conditions do not deviate from participants' real-life situations, and 3) participants voluntarily chose the topic they wrote about and were in control of the details they disclosed. The study adhered to all the APA ethical guidelines [52], and complies with EU legislation [53] and the Dutch legislation [54] on data protection. Participants (mostly undergraduate students) voluntarily registered online to participate in the study to earn credits. On this university website, students can freely pick a study that appeals to them out of a number of studies provided. The online introduction page of the experiment included the length and purpose of the study (i.e., writing about a personally distressful life event, and that during the study there was a possibility that another study participant would read the story written), contact information of the investigator (in case participants had any questions), and an assurance of anonymity. On the last page of the study, participants were debriefed; we explained that we were examining the effects of support messages, that the response of the other study participant was automated, hence not real, and that no other participant had read the story written. We again provided contact information on the last page, in case participants had any additional questions.

Manipulation Checks

Disclosure assignment

A unifactor (disclosure condition: emotional disclosure vs. cognitive reappraisal disclosure) ANOVA revealed the expected difference in the use of negative emotion words and cognitive mechanism words between the two disclosure assignments. Participants in the emotional disclosure condition used more negative emotion words (M = 2.72, SD = 0.89) than participants in the cognitive reappraisal disclosure condition (M = 2.16, SD = 0.89), F(1,113) = 11.184, p = .001, η²p = .090. Results also showed that participants used more cognitive mechanism words in the cognitive reappraisal disclosure condition (M = 6.89, SD = 1.56) than in the emotional disclosure condition (M = 6.22).

Effect testing

Correlation analyses between all dependent variables showed that there was a significant relation between emotions and emotion-related symptoms, and between support message evaluation and perceived relatedness (see Table 1).

Perceived relatedness

A 2 × 2 ANOVA showed a marginally significant main effect of the support message condition on relatedness to the person who provided this message, F(1,71) = 3.30, p = .073, η²p = .044. Respondents felt slightly more related to the person who provided the socio-affective response (M = 2.73, SD = 1.04) than to the person who provided the cognitive reappraisal response (M = 2.28, SD = 0.79). No significant main effect of disclosure condition (F < 1) and no interaction was found (F(1,71) = 1.60, p = .210, η²p = .022; see Table 3).

Emotion-related symptoms

A 2 × 3 ANOVA revealed only an interaction effect of disclosure condition and support message condition on emotion-related symptoms, F(2,109) = 3.30, p = .041, η²p = .057 (see Table 5).
Post-hoc comparisons indicated that significant mean differences emerged for respondents in the emotional disclosure condition: respondents reported fewer symptoms after the cognitive reappraisal response (M = 1.30, SD = 0.33) compared with the socio-affective response (M = 1.86, SD = 0.74; p = .008) or the no response condition (M = 1.69, SD = 0.72; p = .071), although the latter effect was only marginally significant. The difference between the socio-affective response and the no response condition was not significant (Fig. 1). No significant simple effects were observed in the cognitive reappraisal writing condition (Fig. 2).

Additional analyses

To reveal whether the topic participants wrote about or the time since the event happened had an influence on the dependent variables (i.e., emotions, emotion-related symptoms, support message evaluation and perceived relatedness), we ran a correlation matrix. Only the topic of the story was related to emotions; no other correlations were found. The more serious the topic (i.e., the lower the score on this variable), the more negative emotions participants experienced (r = −.208, p = .025). We added 'story subject' to our model to see if this would change our findings. The 2 (disclosure condition: cognitive reappraisal vs. emotional disclosure) by 3 (support message condition: cognitive reappraisal vs. socio-affective vs. no response) ANOVA still revealed a similar main effect of the assignments on emotions, F(1,108) = 4.65, p = .033, η²p = .041. The previously found interaction effect of disclosure condition and support message condition on emotions became marginally significant, F(2,108) = 2.91, p = .059, η²p = .051. Post-hoc comparisons showed exactly the same mean differences as before: respondents reported fewer negative emotions after a cognitive reappraisal response (M = 1.64, SD = 0.62) compared with a socio-affective response (M = 2.35, SD = 0.96; p = .015) or no response (M = 2.19, SD = 0.83; p = .050). No main effect of 'story subject' on emotions was found.
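For readers who want to reproduce this kind of factorial analysis, a minimal sketch of a 2 × 3 between-subjects ANOVA with an interaction term is given below, using statsmodels; the data frame and its column names are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)

# Hypothetical long-format data: one row per participant.
n = 120
df = pd.DataFrame({
    "disclosure": rng.choice(["emotional", "reappraisal"], size=n),
    "support": rng.choice(["CR", "SA", "none"], size=n),
    "emotions": rng.normal(loc=2.0, scale=0.8, size=n),
})

# 2 (disclosure) x 3 (support) between-subjects ANOVA with interaction,
# using type II sums of squares.
model = smf.ols("emotions ~ C(disclosure) * C(support)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```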
Discussion

The present study tested the effects of disclosing a negative life experience and receiving a supportive response on perceived helpfulness, relatedness to the support message provider, and the emotions and emotion-related symptoms of the recipient. Supportive responses moderated the effects of disclosure style on emotions and emotion-related symptoms. Cognitive reappraisal responses, which focused on reinterpreting the negative life experience, decreased negative emotions and symptom reporting particularly for individuals who had just expressed their deepest emotions, i.e., for participants in the emotional disclosure condition. Supportive responses had no effect on participants who disclosed a negative life event by cognitively reappraising the experience.

These findings suggest that cognitively reappraising a stressful situation may have beneficial effects on well-being in two different ways. First, the fact that individuals who cognitively reappraised a stressful situation had similarly low levels of negative emotions and emotion-related symptoms regardless of the type of support message they received suggests that cognitively reappraising a negative life experience makes individuals less vulnerable to responses from others. Cognitively re-evaluating a negative experience might not only make individuals feel better about the situation; it also buffers one's susceptibility to responses. Cognitive reappraisal may thus promote resilience and a decreased dependency on others.

Second, cognitive reappraisal responses from a conversational partner may help individuals to interpret an emotional experience from a different viewpoint, especially when they are emotional; it might provide a positive way out of negative emotions. Solely disclosing the emotions attached to a stressful situation could evoke a vicious cycle of negative emotions, which may drain the individual resources needed to look at a situation from a different viewpoint. In such conditions, supportive responses may be helpful to break this vicious cycle and help individuals see a different picture. These findings are in line with Rimé and with Lu and Stanton, who proposed that satisfaction of socio-affective needs is not sufficient; individuals should fulfil their cognitive needs as well to overcome mental rumination and intrusive thoughts [4,39]. Furthermore, studies have shown that individuals who innately reappraise stressful situations (i.e., "constructing a more positive meaning out of the many possible meanings that may be attached to that situation", p. 352, [29]) generally show more positive emotions, fewer negative emotions, and better well-being [3,55] than individuals with a lower score on this regulation strategy. Thus, support messages that stimulate the recipient to cognitively reappraise the situation might help individuals to change perspective, especially when they do not naturally use reappraisal as an emotion regulation strategy. In future studies it might be interesting to assess whether individual differences in the ingrained use of certain emotion regulation strategies (e.g., reappraisal, suppression) affect the current effects.

Contrary to expectations, our findings suggest conditions under which responses that do not match a certain style of disclosure are actually better than matched responses, and that validating one's negative feelings does not break the vicious cycle of negative emotions. Future studies should further examine the effects of different support messages on well-being, for example by comparing short- versus long-term effects of different disclosure styles and support types. There is some empirical evidence that expressing one's emotions elicits more emotional distress and a higher heart rate during disclosure, but promotes psychological well-being in the longer run [56,57]. It would be worthwhile to examine whether diminishing negative emotions by providing cognitive reappraisal support messages also promotes long-term well-being.

The present study also extends previous research on supportive communication by comparing effects on emotional distress with the evaluation of the support message. This study seems to indicate that individuals are not always capable of assessing certain effects on their own well-being. Participants felt slightly more related to the person who provided a socio-affective response, and perceived this response as more soothing, comforting, and validating than a cognitive reappraisal response. However, these positive evaluations did not translate into lower levels of emotional distress. On the contrary, participants who had just expressed their deepest emotions did not benefit from a socio-affective response; levels of emotional well-being were similar to the control condition (i.e., no response), and lower than in the cognitive reappraisal response condition.
Finally, although the experimental conditions showed no effects on the perceived supportiveness of the support message, effects were observed on measures of emotional well-being. Additionally, message evaluations were unrelated to emotions and to emotion-related symptoms. Together, these findings indicate the need for additional outcome measures, next to self-perceived helpfulness, in future studies.

Limitations and Future Research

A limitation of this research is that only two different response messages were used to cover the different response types. For example, Jackson and Jacobs recommend using more than one message to cover a support category in order to verify whether the different support messages differ in the proposed theoretical categories, or whether there was something particular about the messages that led to the observed effects [58]. To keep the experiment as naturalistic as possible, we chose to provide participants with only one supportive response, purportedly from another study participant. Nonetheless, one message to cover a response type is limited, and future experiments should be extended with more responses covering each response type.

A second limitation is the lack of a control group for the writing assignment, i.e., study participants who write about a neutral event. Since we were especially interested in the effects of different support messages when individuals disclose stressful events, we only included a control group for the support message condition and did not include a control group for the writing assignment. In future research it might be interesting to compare the effects of the different writing assignments in order to gain a better understanding of baseline values for the measures used in the present research.

Furthermore, we cannot exclude the possibility of selection bias. For ethical reasons we had to inform potential participants upfront that they would disclose a personal stressful life event. There is a possibility that the current study participants differ from individuals not willing to participate. For example, the current participants might have a higher need for disclosure (i.e., to talk about thoughts and feelings) than individuals who decided not to participate, and that, in turn, might have influenced the effects of the support messages.

Another restriction is that a large proportion of the participants were female. Although there was no effect of gender on the dependent variables and every experimental condition contained an almost equal distribution of males and females, it could be that gender has an effect on moderators of the psychological process, such as personality traits or coping strategies. For example, a meta-analysis focused on gender differences in coping showed that females cope by engaging in social relationships and try to create change (in cognitive and actual terms) more frequently than men do. Males, on the other hand, rely more often on stress reduction activities or tend to distract themselves (i.e., diversions) [59]. Gender differences may be important for the process of recovering from a stressful event, and should be further investigated in relation to social support messages.

Additionally, in the current study the response provider was an unknown anonymous person. Future research should reveal whether responses from significant others (e.g., family, friends) elicit different outcomes. Finally, future studies should examine long-term effects on well-being.
By repeating this experiment and conducting additional measurements of emotional distress or well-being a few weeks later, it may be possible to see how disclosure in combination with different support messages affects well-being over time.

Conclusions

The current study's findings suggest that responding by cognitively reappraising a stressful situation may produce positive effects on emotions and emotion-related symptoms. Although telling someone that 'you understand how they feel' is perceived as helpful and might strengthen a relational bond, it may not be the best strategy to get someone back on track following a stressful situation: in the current study its effects were similar to saying nothing at all.
Cell Membrane-Coated Mimics: A Methodological Approach for Fabrication, Characterization for Therapeutic Applications, and Challenges for Clinical Translation

Cell membrane-coated (CMC) mimics are micro/nanosystems that combine an isolated cell membrane and a template of choice to mimic the functions of a cell. The design exploits its physicochemical and biological properties for therapeutic applications. The mimics demonstrate excellent biological compatibility; enhanced biointerfacing capabilities; physical, chemical, and biological tunability; the ability to retain cellular properties; immune escape; prolonged circulation time; protection of the encapsulated drug from degradation; and active targeting. These properties, and the ease of adapting them for personalized clinical medicine, have generated significant research interest over the past decade. This review presents a detailed overview of the recent advances in the development of cell membrane-coated (CMC) mimics. The primary focus is to collate and discuss the components, fabrication methodologies, and the significance of the physiochemical and biological characterization techniques used to validate a CMC mimic. We present a critical analysis of the two main components of CMC mimics, the template and the cell membrane, and map their use in therapeutic scenarios. In addition, we emphasize the challenges associated with the clinical translation of CMC mimics. Overall, this review is an up-to-date toolbox that researchers can benefit from while designing and characterizing CMC mimics.

Pharmacological/drug-based therapies are the most common and foremost recourse prescribed for treating diseases and disorders in the human body. In practice, for many years, these therapies have improved health and extended lives without the need for aggressive interventions. 1−6 However, the advent of nanomedicine has revolutionized this traditional approach to disease diagnosis and treatment. Nanomedicine combines the principles of nanotechnology, immunology, and biomaterials to create delivery systems with significantly improved safety and efficacy. 7−9

Delivery systems have two main functions: to execute the specific application that they are designed for, and to interact favorably with the complex physiological environment surrounding them so as to support and enhance their function. Loading a drug of interest or modulating its physiochemical properties can improve these functions partially. However, it is vital to ensure that they have biointerfacing capabilities to avoid roadblocks during clinical translation. 10−12 Biointerfacing capabilities include improving stimuli responsiveness, reducing nonspecific interactions, increasing circulation times, and evading uptake or clearance by the reticuloendothelial system. 13−15 While PEGylation offered some respite by introducing stealth properties, minimizing nonspecific interactions, and prolonging circulation, negative immunogenic responses and allergic reactions were unavoidable. 16,17 An alternative approach is incorporating ligands (antibodies, 18,19 aptamers, 20,21 peptides, 22,23 and small molecules 24,25) to improve targeting efficacy, but this renders the system overly complicated for scale-up. These strategies were only partial remedies and not universally applicable or sufficient for clinical translation.
Vital clues for improving the biointerfacing capabilities of synthetic delivery systems can be obtained by understanding the structure, function, and homeostasis of cells in the complex physiological environment surrounding them. Incorporating cell properties like shape and flexibility, 26,27 compartmentalization, 28−30 a lipid bilayer structure, 31,32 autonomous and specific functionality, 33−35 and cargo protection 36,37 can be advantageous in delivery systems. In this regard, researchers have attempted to use liposomes, 38,39 polymeric micelles, 40 or naturally occurring extracellular vesicles 41 as delivery systems. For example, Doxil and Genexol-PM are the first FDA-approved liposomal and polymeric micelle formulations, respectively, translated into the clinic, with many more in different phase trials. 42,43 However, the long-term stability issues of liposomes and polymeric micelles, their degradation during sterilization, and the complex surface modifications needed for active targeting still remain a challenge for large-scale production. 42−46

To avoid the complexity of surface modification, extracellular vesicles are viable alternatives as delivery systems. These are lipid bilayer vesicles, naturally secreted by cells, that display on their surface the same proteins, ligands, and targeting moieties as the parent cell. 47 Unfortunately, the existing isolation and purification methods for vesicle production cause functional heterogeneity and low yield. 48−50 Besides, low drug-loading efficiency also limits their use for a wide range of applications. 51−53

The cell membrane is a major structural component of a cell and of extracellular vesicles and replicates their surface functionality. If done correctly, cell membrane isolation conserves this functionality, and coating with the isolated membrane improves biointerfacing capabilities. Referred to as cell membrane-coated (CMC) mimics henceforth, these intelligently engineered delivery systems combine the biomimetic features of the cell membrane and the functional versatility of a template. The template (spherical or nonspherical) acts as the central scaffold that carries a payload of interest and provides a structural basis. The cell membrane offers surface functionality that mimics a natural cell to improve accumulation and efficacy at the target site. 54 The assembly process utilizes noncovalent interactions and physical and soft techniques, eliminating the need for complex chemical processing and traditional synthetic modifications. 55−57 Compared to conventional delivery systems, these CMC mimics demonstrate excellent biological compatibility and stealth properties, and retain cellular properties for active targeting using receptor−ligand interactions. 58−66

In this review, we focus on providing a detailed insight into the various aspects of designing CMC mimics. We begin with an overview of different cell types, their inherent biological properties, and their suitability for specific therapeutic applications, including cancer, inflammatory diseases, infectious diseases, and their potential use in personalized medicine. In the next section, we present protocols for isolating cell membranes from both nucleus-free and nucleus-containing cells with minimal nuclear and mitochondrial contamination. It is vital to follow protocols that conserve their surface functionality and mechanical stability during the isolation process. Next, we present an overview of the templates and their properties available for cell membrane coating.
Selecting the right template allows for the chemical and genetic tunability of the mimics and improves bioimaging, 67,68 drug delivery, 55,69,70 diagnostic, 71−74 biosensing, 75 detoxification, 76,77 and phototherapy performance. 56,78,79 We then highlight the processes used for CMC assembly and the challenges for large-scale production, followed by the physicochemical and biological characterization techniques that validate their structural integrity and functionality. The last part of the review presents examples of CMC mimics designed for therapeutic applications and the in vitro and in vivo models that evaluate their efficacy. Finally, we conclude with an overview of current challenges en route to clinical translation. BIOLOGICAL PROPERTIES OF DIFFERENT CELL MEMBRANES IN CMC MIMICS The cell membrane is the outermost protective layer of a cell, with a thickness of around 5−10 nm; it is mainly composed of lipids, proteins, and carbohydrates, and it interacts and performs complex biological functions with the surrounding environment for survival and proliferation. 80,81 The bilayer assembly of lipids provides structural rigidity and fluidity, 82 carbohydrates are responsible for cellular recognition, 83,84 and proteins play a vital part in signaling and adhesion. 85 The composition and properties of these three components differentiate the membranes of different cell types. The possibility of benefiting from native functionalities originating from cell membranes has resulted in significant research interest in CMC mimics. 86−89 Figure 1 provides a timeline of the different cell sources utilized in CMC mimic fabrication. Figure 1. Timeline of cell membrane sources explored for designing CMC mimics: the idea of isolating RBC vesicles was explored in 1994 90 and coating cell membrane vesicles onto a template gained significant research interest in 2011; 62 since then, membranes from a wide variety of cell sources (leukocyte, 63 cancer cell, 92 platelet, 93 bacteria, 94 stem cell, 95 macrophage, 69 β-cell, 96 RBC-platelet hybrid, 97 neutrophil, 55 T-cell, 98 platelet-leukocyte hybrid, 99 RBC-cancer cell hybrid, 100 epithelial cell, 101 RBC-stem cell hybrid, 86 natural killer (NK) cell, 102 leukemic cell, 103 fibroblast, 104 patient-derived tumor cell, 105 and dendritic cell 106 ) have been explored depending upon the importance of each cell type for a specific application; recently, intracellular organelle membrane coating was investigated using mitochondria as a model organelle, and these CMC mimics have shown great potential for use in personalized medicine; 92 patents granted on CMC mimics using the RBC membrane are highlighted in green. The idea of isolating RBC vesicles was explored in 1994, 90 and utilizing cell membrane vesicles for coating onto a template to design CMC mimics gained significant research interest in 2011. 62 Until 2020, natural cell membranes from different cell types were widely used, but recently, the outer intracellular membrane from the mitochondria has also been explored to enhance biointerfacing capabilities. 91 This section describes the specific biological functions that the cell membranes of various cell types and of an intracellular organelle offer to a CMC mimic. Red Blood Cell Membrane. Red blood cells (RBCs) are the most abundant cell type of the human body, with the longest circulation time of approximately 120 days. 107
RBCs express the transmembrane protein cluster of differentiation 47 (CD47), also known as the 'do not eat me' marker, 108 which selectively binds to the signal-regulatory protein alpha (SIRPα) glycoprotein expressed by macrophages to prevent their uptake. 109,110 RBCs are also responsible for oxygen transport to various tissues and organs in the body 111 and are involved in pathogen removal by oxycytosis. 112 Their membrane is rich in glycophorins that attract pathogens to their surface, where oxygen is released to kill them. 113 Thus, coating the template with an RBC membrane improves long-term circulation, 62 pathogen removal, 64,114 and toxin absorption 77,115 for detoxification applications. These specific advantages have popularized the use of RBC membrane-coated CMC mimics. Platelet Cell Membrane. Platelets, also known as thrombocytes, inhibit bleeding by forming clots and help in tissue repair. 116 The platelet membrane, like that of RBCs, expresses CD47 receptor proteins on its surface that help in evading macrophages. Additional membrane proteins on platelets serve specific roles: integrins like αIIbβ3 and α6β1 and P-selectin help in targeting tumor cells; 117 the glycoprotein Ib (GPIb/IX/V) complex binds to exposed subendothelial collagen at the injury site in blood vessels by interacting with von Willebrand factor (VWF); 117 clusters of differentiation 55 (CD55) and 59 (CD59) contribute to immune modulation; 118 and toll-like receptors aid pathogen removal. 119 Platelets are involved in a cross-talk with inflamed endothelial cells and bind with immune cells to redirect them to the injury site. 120 Thus, coating the template with a platelet membrane offers an escape from macrophage detection, selective adhesion to tumor tissues or injured vessels, 70,121 targeting of vascular disorders, 93,122,123 binding ability to circulatory tumor cells, 87 and pathogen removal. 93 Macrophage Cell Membrane. Macrophages are part of the innate immune system, known for removing unwanted or foreign materials/bacteria/viruses from the human body by engulfing them (phagocytosis) using recognition receptors such as scavenger receptors, mannose receptors, and toll-like receptors ((TLR)-2, -4, -5). 124,125 Derived from circulatory monocytes, they are present in all tissues. During infections or tissue damage, cytokines actively recruit monocytes, which differentiate into macrophages. 126 Chemokine receptors on the macrophage membrane like C−C chemokine receptor type 2 (CCR2), C−X−C chemokine receptor type 1 (CXCR1), C−C chemokine receptor type 7 (CCR7), etc., facilitate their recruitment at the inflammation site. 127 Along with other leukocytes, macrophage membranes also express adhesion molecules like P-selectin glycoprotein ligand-1 (PSGL-1), L-selectin, lymphocyte function-associated antigen 1 (LFA-1), and very late antigen-4 (VLA-4) that assist in their recruitment and cell adhesion. 128,129 Thus, coating the template with the macrophage membrane has the potential to bind pathogens while easily escaping macrophage detection, providing active targeting of inflammatory sites 130 and tumors. 69,131,132 Neutrophil Cell Membrane. Neutrophils belong to the innate immune system and constitute around 40−60% of the white cell population in a healthy human body. 133 In response to inflammation, their production rate in bone marrow increases by at least 10-fold. 134 After leaving the bone marrow, their targeting abilities depend on their phenotypic changes and surface receptors. 135,136
Neutrophils are usually in a resting state when circulating in a healthy body. They become activated by cytokines or chemokines like tumor necrosis factor-alpha (TNF-α), granulocyte-macrophage colony-stimulating factor (GM-CSF), interleukin 8 (IL-8), and interferon gamma (IFN-γ), which mobilize them to the infection or inflammation site. 137 Conformational changes in integrin adhesion receptors like very late antigen-4 (VLA-4), lymphocyte function-associated antigen 1 (LFA-1), macrophage-1 antigen (Mac-1), P-selectin glycoprotein ligand-1 (PSGL-1), and L-selectin also facilitate neutrophil migration through extravasation from blood vessels. 133,138 Thus, coating the template with the activated neutrophil membrane actively targets tumors 55,139 and inflammatory sites. 66 Natural Killer Cell Membrane. Natural killer (NK) cells are part of the innate immune system and the first line of defense against tumor and virally infected cells; unlike other immune cells (T cells, B cells), they do not require any prior activation. 140 In human peripheral blood, NK cells comprise 10−15% of the total lymphocyte population. These cells carry many activating and inhibitory receptors on their surface that selectively target tumor/virally infected cells without affecting healthy cells. 141 Some of the important activating receptors are NK group 2D (NKG2D), DNAX accessory molecule-1 (DNAM-1), natural cytotoxicity receptor (NKp30), etc.; 142 integrin adhesion receptors shared with other leukocytes, such as LFA-1, VLA-4, Mac-1, PSGL-1, and L-selectin, help in extravasation from blood vessels. 142,143 These cells also activate other immune cells like T cells by releasing cytokines and chemokines. 144 NK cell lines, for example, KHYG-1 and NK-92, have membranes that also contain activating and adhesion receptors like the primary NK cell membrane, facilitating their use in clinical trials. 145−147 These cell lines are also easy to culture and expand in vitro. Therefore, utilizing an NK cell line membrane in CMC mimics could also be a potential alternative. Recently, chimeric antigen receptor (CAR)-NK and CAR-NK-92 technologies began undergoing clinical trials for immunotherapy. 148,149 Thus, coating the template with the NK cell membrane has the potential to actively target inflammation, infection, and tumor sites without prior activation. 102,150 T-Cell Membrane. T cells are part of the adaptive immune system and recognize antigens using T-cell receptors (TCRs). 151 TCRs cannot bind to antigens directly and require peptide fragments of antigens for binding. These fragments are presented to them by major histocompatibility complex molecules (MHC I or II) present on antigen-presenting cells (dendritic cells or macrophages). 152 Naive T cells recognize these specific fragments and differentiate into subsets like cytotoxic, helper, or regulatory T cells. Cytotoxic T cells express the cluster of differentiation 8 (CD8) coreceptor (CD8 + T cells), recognize antigens on MHC-I molecules, and can kill infected cells (virus/bacteria/cancer cells) by releasing cytotoxic granules or through Fas/FasL interaction. 153 Helper T cells express the cluster of differentiation 4 (CD4) coreceptor (CD4 + T cells), recognize antigens on MHC-II molecules, and regulate the immune response, which indirectly affects the infected cells. 154 According to literature reports, helper T cells play an important role in treating HIV due to their high-affinity CD4 receptor. 155
CAR-T cell therapy is an FDA-approved therapy for multiple myeloma (ABECMA) and is under evaluation for treating other cancer types while avoiding unwanted side effects. 156 Therefore, utilizing the T-cell membrane in CMC mimics could be a potential strategy for treating cancer and infectious diseases. 98,157−159 Dendritic Cell Membrane. Dendritic cells (DCs) are central players of the immune system that link the innate and adaptive immune systems. These cells are also known as "professional" antigen-presenting cells (APCs). 160 DCs are the first immune cells to become activated in the human body after a pathogenic attack (bacteria, viruses, or cancer cells). 161,162 Even in their resting immature state, immature DCs (iDCs) are involved in phagocytosis. They encapsulate pathogens, process and degrade them into fragments, and present these fragments on the MHC molecules on their surface. 163 During this activation process, iDCs mature, migrate to adaptive immune cells (T cells and B cells), and present antigens for their activation. During antigen presentation, DCs upregulate the expression of the co-stimulatory receptor molecules CD86, CD83, CD80, and CD40 on their cell membrane. 164 These molecules effectively bind to their corresponding receptors on T cells and trigger the release of cytokines (interleukins, IL-12 or IL-10) from DCs that differentiate T cells into their pro-inflammatory or anti-inflammatory subsets. According to experimental reports, one mature DC can stimulate 100−3000 T cells. 165,166 Thus, CMC mimics fabricated with the mature dendritic cell membrane can generate a sufficient immune response to activate T cells, as required to treat several tumors and infectious diseases. 106,167,168 Cancer Cell Membrane. Cancer cells can escape the immune system and are known for their rapid and unlimited proliferation. Because of their robust nature, they are easy to culture and expand in vitro. Different types of cancer cell membranes express numerous tumor-specific antigens and adhesion molecules on their surface. Some of these include cadherins, integrins, galectin-3, lymphocyte-homing receptors (like cluster of differentiation 44 (CD44)), epithelial adhesion molecules, and mucoprotein-1, which play a vital role in cell-to-cell and cell-to-matrix interactions. 169−171 Importantly, cancer cell membranes have self-targeting abilities to adhere to their homologous cells. 65,172 Thus, coating a template with the cancer cell membrane allows it to escape from macrophage detection, enables homotypic tumor targeting, 173−175 and helps in the development of personalized medicine for cancer. 105 Stem Cell Membrane. Stem cells are known for their ability to replicate indefinitely and differentiate into specialized cell types in the body. Among stem cells, mesenchymal stem cell (MSC)-based therapies have shown immense potential as regenerative medicine 176 and have entered many clinical trials. 177,178 These cells can specifically target different cancerous and metastatic diseases because of their intrinsic tumor-tropic properties; 179−181 they are readily isolated, are stable through multiple in vitro passages, and are produced under good manufacturing practice (GMP) conditions. 182,183 Various chemokine and cytokine receptors like CCR1, CCR2, CXCR1, CXCR2, etc., help the MSCs migrate to the inflammatory or injured site. 184 Like leukocytes, stem cells also undergo rolling, adhesion, and an extravasation process.
Thus, coating the template with a stem cell membrane provides active targeting abilities toward tumors 95,185,186 and degenerative diseases. 187,188 Bacterial Cell Membrane. Bacteria have an additional peptidoglycan cell wall, unlike mammalian cell types. Gram-positive bacteria have a thick peptidoglycan cell wall and no outer membrane, while Gram-negative bacteria have thin cell walls as well as lipopolysaccharide outer membranes. 189 Both Gram-positive and Gram-negative bacteria secrete membrane vesicles: Gram-positive bacteria secrete extracellular vesicles (EVs), whereas Gram-negative bacteria secrete outer membrane vesicles (OMVs). 190 These membrane vesicles express several immunogenic antigens with adjuvant properties and pathogen-associated patterns that help immune modulation. 94,191 Thus, coating the template with bacterial membrane vesicles (Escherichia coli (E. coli); Staphylococcus aureus (S. aureus); Klebsiella pneumoniae (K. pneumoniae)) provides an antibacterial immune response, 94 vaccination against bacterial infection, 94,192,193 and tumor targeting abilities. 194,195 Hybrid Cell Membrane. The hybrid cell membrane coating strategy fuses cell membranes from multiple cell types to incorporate multiple cell-specific functional properties in a single mimic. 168,196,197 For example, CMC mimics designed using RBC and B16-F10 melanoma cancer cell membranes express both the CD47 transmembrane protein from RBCs and self-recognition markers (glycoprotein gp100) from the cancer cell membrane. 100,198 Overall, these RBC-cancer hybrid membranes provide several features like long-term circulation, immune evasion, and homotypic targeting abilities in the CMC mimics. 196 Depending on the specific target application, the relative amount of each membrane can be varied when designing CMC mimics. Thus, hybrid membrane coating, by coupling different cell types (refer to the previous sections), provides the possibility of designing CMC mimics with multiple desired functionalities, offering several advantages in various therapeutic applications. 97,99,199−201 Intracellular Cell Membrane (Organelle). Intracellular membranes from the organelles of eukaryotes display the same fundamental structure as the plasma membrane, with a phospholipid bilayer responsible for specific functions. 202 Targeting intracellular membrane functions can be an intelligent strategy for treating several diseases. For example, the delivery of biomolecules across nuclear membranes is considered safe and effective gene therapy. 203,204 For drug-resistant bacterial or viral infections, it is preferable to block the alteration of intracellular membranes by pathogens and inhibit their intracellular replication. 205 Figure 2. A schematic illustration of isolating and preparing membrane vesicles from nucleus-free cells, nucleus-containing cells, and organelles before coating: (A) the two-step process involves extracting cell membrane fragments (step 1) and preparing cell membrane vesicles (step 2); depending on the type of cell used, cell membrane extraction and vesicle formation require a combination of techniques for (B) nucleus-free cells, (C) nucleus-containing cells, and (D) organelles; step 3 is the final step of coating the cell membrane onto a template (spherical or nonspherical) using a suitable technique. Abbreviations: RBCs, red blood cells; OMVs, outer membrane vesicles; EVs, extracellular vesicles.
Inducing permeability in the mitochondrial, nuclear, and lysosomal membranes is a well-established strategy to overcome drug resistance during cancer treatment. 206 Recently, CMC mimics fabricated using intracellular membranes were explored for targeted detoxification and molecular detection in ABT-263-induced thrombocytopenia. 91 Therefore, coating templates with intracellular membranes can be an innovative approach to probe the complex intracellular machinery for several therapeutic applications. PROTOCOLS FOR CELL MEMBRANE EXTRACTION There are two categories of cells: nucleus-free and nucleus-containing cells. There are several reports on cell membrane isolation from various cell types. An attempt has been made here to simplify the procedure and discuss the main steps involved during isolation (Figure 2). Cell membrane isolation protocols aim to separate the cell membrane from the cell with minimal or no nuclear/mitochondrial/cytosol contamination, depending on the cell type. Using a pure cell membrane helps in assembling CMC mimics by enabling an efficient and homogeneous surface coating with maximal functional replication on the template surface. The extraction buffers (pH 7−7.4) are supplemented with protease/phosphatase inhibitor cocktails in ice-cold conditions to protect the membrane proteins from degradation. 55,106,188,207,208 Prior to isolation, cells are washed multiple times with 1× phosphate-buffered saline (PBS) buffer to remove remnants of cell culture media. Post-isolation, the cell membrane is lyophilized and usually stored at −80°C to maintain the long-term stability and function of membrane proteins. 100,173,175,187,209 Cell membrane isolation mainly involves two steps, depending on the cell type (Figure 2): (1) gentle rupturing of cells using detergent-free hypotonic treatment (osmotic imbalance) or a combination of hypotonic treatment and a physical disruption technique and (2) separation and purification of the cell membrane from intracellular components using multiple centrifugation steps, differential centrifugation, or discontinuous sucrose density gradient centrifugation. In this section, we discuss the membrane isolation methodology for nucleus-free cells, nucleus-containing cells, and the recently explored intracellular organelle (mitochondria) used in designing CMC mimics. All the different conditions (hypotonic buffers, physical disruption techniques, and centrifugation speeds) used in cell membrane isolation are summarized in Tables 1 and 2. Nucleus-Free Cells. RBCs and platelets do not contain nuclei, making their membrane extraction process relatively simple. These cells are first isolated from whole blood using appropriate methodologies. For RBCs, hypotonic treatment easily ruptures the cells, followed by centrifugation to collect a pink RBC membrane/ghost pellet. Multiple cycles of centrifugation remove hemoglobin impurities from the pellet. 62,210,211 For platelets, it is common to use multiple freeze−thaw cycles, in which ice crystal formation damages the cell membrane to release the cytosol, followed by centrifugation to obtain the cell membrane. 93,212 According to one report, the obtained platelet vesicles were subjected to a discontinuous sucrose gradient (5%, 40%, 55%) step to remove any free proteins, intact platelets, and high-density granules and to collect pure platelet vesicles from the interface of the 5% and 40% sucrose layers. 213 Bacteria are interesting exceptions in this nucleus-free cell category.
Because they contain peptidoglycan in addition to the cell membrane, their cell membrane extraction process can be laborious. 189,190 Therefore, they undergo ultrafiltration to separate their membranes as OMVs without a cell lysis step. The reported protocols for isolating OMVs and EVs from Gram-negative (E. coli, K. pneumoniae) and Gram-positive (S. aureus) bacteria, respectively, are quite similar. In the first step, the bacterial cultures are centrifuged, and the supernatant is collected. The supernatant is further vacuum filtered through a micron filter and concentrated using ultrafiltration. Finally, the obtained filtrate is subjected to ultracentrifugation to get OMV or EV pellets. 94,192,193 Some groups reported further purification of these OMVs or EVs with some modifications. For example, after the first ultrafiltration step, the concentrate was reprecipitated overnight using ammonium sulfate (4°C) and ultracentrifuged to get E. coli OMVs. 194,195 OMVs resuspended in PBS were further purified using a sucrose gradient (1 mL each of 2.5, 1.6, and 0.6 M sucrose) and separated by ultracentrifugation. In another report, after the ultrafiltration and ultracentrifugation steps, the obtained S. aureus EV pellet was resuspended in 50% Optiprep/HEPES (2.2 mL). 190 The suspension was applied to the bottom of a step-density gradient (2.0 mL of 40% and 0.8 mL of 10% Optiprep in 10 mM HEPES, supplemented with 150 mM NaCl, pH 7.0), yielding pure S. aureus EVs floating at 1.16−1.20 g/mL. Nucleus-Containing Cells. For nucleus-containing cells, cell membrane isolation and purification are slightly more tedious than for nucleus-free cells. Examples include immune cells (macrophages/monocytes, neutrophils, NK cells, T cells), cancer cells, stem cells, fibroblasts, and β-cells. These cells can either be obtained from established cell lines (like breast cancer cell lines (MCF-7, 4T1), the mouse macrophage cell line (J447), the human NK cell line (NK-92), etc.) or isolated from tissues or blood (neutrophils, cancer cells, T cells, NK cells, stem cells). On average, 200−300 million cells are required for cell membrane isolation to assemble a CMC mimic. 63,67 These cells are ruptured using hypotonic treatment and physical disruption techniques, resulting in a mixture containing pure cell membrane, intact cells, and high-density granules. Differential centrifugation or discontinuous sucrose gradient ultracentrifugation of the mixture finally isolates the cell membrane. These methods are described in detail below. Differential centrifugation method: This method is the one most commonly used for isolating cell membranes. 55,130,150 It works by a stepwise increase in the centrifugation speed. The lower g at the beginning of the process removes heavy particles like nuclei. A gradual increase in g removes other particles like mitochondria. Finally, a very high g is used to pellet the cell membrane, as it is the lightest component. For example, commonly reported centrifugation speeds for isolating the cell membrane are 800 g (4°C, 10 min), followed by 10,000 g (4°C, 30 min), and finally 100,000 g (4°C, 60 min) to isolate pure cell membrane. 175,214 Discontinuous sucrose gradient ultracentrifugation method: In this method, the sucrose concentration increases discretely from top to bottom, aiding density-based separation of particles in the solution. The particles move across the density gradient, stopping in a region where their density matches that of the medium.
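As a practical aside, protocols such as the differential-centrifugation steps above quote speeds as relative centrifugal force (in units of g) rather than rotor RPM, and the conversion depends on the rotor radius. Below is a minimal Python sketch applying the standard relation RCF = 1.118 × 10⁻⁵ × r(cm) × RPM²; the rotor radius used is an assumed example value, not a parameter from any cited protocol.

```python
# Minimal sketch: convert the relative centrifugal force (RCF, in g)
# quoted in differential-centrifugation protocols to rotor speed (RPM)
# via the standard relation RCF = 1.118e-5 * r_cm * RPM**2.
# The rotor radius below is a hypothetical example, not from the review.

def rcf_to_rpm(rcf_g: float, radius_cm: float) -> float:
    """Rotor speed (RPM) needed to reach a given RCF at radius r (cm)."""
    return (rcf_g / (1.118e-5 * radius_cm)) ** 0.5

ROTOR_RADIUS_CM = 8.0  # assumed fixed-angle rotor radius
steps = [(800, 10), (10_000, 30), (100_000, 60)]  # (g, minutes) from the text
for i, (rcf, minutes) in enumerate(steps, start=1):
    rpm = rcf_to_rpm(rcf, ROTOR_RADIUS_CM)
    print(f"Step {i}: {rcf:>7,} g for {minutes:>2} min  ->  ~{rpm:,.0f} RPM")
```

The same conversion explains why the final membrane-pelleting step (100,000 g) requires an ultracentrifuge, while the first nucleus-removing step runs on a benchtop instrument.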
For example, this method was used to demonstrate the isolation of the leukocyte cell membrane using 55%, 40%, and 30% (w/v) sucrose gradients in a physiological saline solution. 63 The cell membrane was collected from the 30/40% interface with minimal/no nuclear and mitochondrial contamination. A similar approach has been used to isolate several other cell membranes. 63,98,102 During membrane isolation there can be a loss of functional components like transmembrane proteins/receptors or structural components like cholesterol from the membrane. Cholesterol is mainly responsible for maintaining the rigidity of the cell membrane. 215,216 Such loss may result in a decrease in the mechanical stability of the membrane. Therefore, to reduce protein loss and maintain membrane stability, hypotonic buffers with divalent ions (like MgCl2) or the addition of cholesterol can be useful. These stabilize the membrane skeleton by specifically binding to the junction complex and other membrane proteins like tropomyosin. 217,218 Additionally, mild lysis buffers, gentle rupturing techniques, the right pH, and ice-cold conditions must be used for membrane isolation to prevent the degradation of the transmembrane proteins and receptors. Intracellular Organelle. Membrane isolation from an intracellular organelle requires additional steps, unlike nucleus-containing cells. Before membrane isolation, it is essential to first isolate the desired organelle from nucleus-containing cells in its pure form. Isolating organelles from cells is a three-step process: hypotonic treatment, physical disruption, and ultracentrifugation. The final step is carried out in a sucrose density gradient to get the purified organelle in a specific sucrose band. The process is repeated with the purified organelle to extract the pure membrane. The mitochondrial outer membrane was isolated from mouse liver using a similar protocol 91 (Table 2). CHOICE OF TEMPLATE BASED ON ITS PROPERTIES The template is the central component of a CMC mimic that provides a structural basis during its assembly. Inherent properties of templates extend the application of CMC mimics to diagnosis, drug delivery, and disease suppression/treatment. There are two major categories of templates (spherical or nonspherical): organic and inorganic. Poly(lactic-co-glycolic acid) (PLGA), gelatin, and liposomes are examples of organic templates. Mesoporous silica, gold, iron oxide (Fe3O4), upconversion nanoparticles (UCNPs), persistent luminescent nanoparticles (PLNPs), and metal−organic frameworks (MOFs) are examples of inorganic templates. Organic templates offer features like biocompatibility, biodegradability, and nontoxicity and are often straightforward choices. 229 In comparison, inorganic templates display additional features like magnetic, optical, and electrical properties that determine their selection for a CMC mimic. 230 In this section, we provide a general overview of templates categorized by the properties relevant to CMC mimics: Food and Drug Administration (FDA) approval, biocompatibility, biodegradability, and low toxicity from the clinical translation perspective; phototherapy for cancer suppression/treatment; bioimaging for disease diagnosis; and detoxification for enhancing the removal/absorption of toxins (summarized in Figure 3). FDA-Approved, Biocompatible, Biodegradable, Low Toxicity Templates.
CMC mimics are biocompatible, as the cell membrane protects the template from the external microenvironment. For clinical translation, it is also vital to consider template biodegradability and biocompatibility. The byproducts of biodegradation and their interaction with the human body also determine toxicity. 231 Renal clearance helps to evade undesirable side effects. 231 FDA-approved templates are considered the safest, being nontoxic or nonhazardous to the human body in every aspect. These properties help to protect the healthy cells in the body and avoid any unwanted immune response. Most organic templates are generally thought to be safer than inorganic templates and have, therefore, entered clinical trials more easily. 232,233 Examples of organic templates used in CMC mimics are PLGA, gelatin, and liposomes, which are FDA approved, biocompatible, and biodegradable in nature. In 2011, the possibility of designing these mimicking systems was demonstrated using a PLGA nanoparticle as a template. 62 PLGA is a versatile synthetic polymer that molds into both nano- and micro-sized particles. It is the most common template, used for several cell membrane coatings (RBC, 62 dendritic cell, 106 and so on). Gelatin is a natural polypeptide used in cosmetics, pharmaceuticals, the food industry, and the assembly of CMC mimics. 236 For designing CMC mimics, the cell membranes coated on gelatin templates include RBC, 64 stem cell, 186 T cell, 237 mosquito medium host Aedes albopictus (C6/36) cell, 228 and patient-derived tumor cell membranes. 105 Liposomes are spherical vesicles having at least one lipid bilayer. Liposomes have been used for coating with macrophage, 129 RBC, 238 and cancer cell membranes. 175 As reported in the literature, liposomes can also easily fuse with cell membrane vesicles like those from RBCs 239 and NK-92 cells 102 for designing CMC mimics. Perfluorocarbons (PFCs) are another example of a regulatory-approved template. In 1989, a PFC formulation (Fluosol-DA) was approved in the US, Japan, and Europe for clinical use but was taken off the market after 5 years due to storage-related difficulties. 240 Nevertheless, PFCs are biocompatible and biodegradable and have a high oxygen-carrying capacity; many PFCs have a capacity for oxygen dissolution nearly 20 times that of water. They can, moreover, be easily fabricated at the nanoscale for oxygen delivery even to the smallest capillaries. 241 Therefore, several reported PFC-based CMC mimics can supply oxygen at tumor sites to relieve hypoxic conditions. 242,243 Most inorganic templates are biocompatible, but their toxicity depends on the metal used for their synthesis and its degradation in the cell. Among inorganic templates, mesoporous silica is considered the safest (approved by the FDA) and is biocompatible and biodegradable. 244 It degrades into nontoxic, water-soluble silicic acid. 245 It has been a popular template for many years in research due to its high porosity, large surface area, and high drug/photosensitizer loading capacity. 246 CMC mimics reported with spherical silica nanoparticles used cell membranes from RBCs, 61 cancer cells, 247 and macrophages. 247 Other templates like liposome-PEG, 175 UCNPs, 185 and PLNPs 68 were used in combination with silica to increase the drug/photosensitizer loading capacity. Mesoporous silica nanoparticles are tunable to different sizes and shapes. 248,249
According to reports, rod-shaped silica nanoparticles can enhance antimicrobial properties 250 and regulate endogenous reactive oxygen species for oxidative therapy. 251 These tunable properties, coupled with CMC mimics, could offer potential therapeutic benefits if explored further. Silica templates can also gain several desired surface functionalities post chemical modification. 252 For example, positively charged 3-aminopropyl triethoxysilane (APTES) was used to modify the surface charge of silica microparticles to coat a negatively charged leukocyte membrane 115 and a platelet membrane for circulating tumor cell (CTC) detection. 213 For Fe3O4 nanoparticles, iron ions are the biodegradation byproducts and are mostly nontoxic. 231 Several reported CMC mimics with Fe3O4 templates used cell membranes from macrophages, 131 MSCs, 95 and HeLa cells. 173 Similarly, MOFs are well-defined 3D architectures formed by complexation between organic ligands and inorganic metal ions. 253 These are biocompatible, and their toxicity depends on the nature of the metal and organic linker used. For example, zinc-based MOFs (zeolitic imidazolate framework, ZIF-8) release Zn2+ ions post-degradation, an endogenous element that causes little harm to the human body when present in a low amount. 254 MOFs of porphyrin (TPP)-based Gd/Zn nanocomposites release gadolinium (Gd3+) and zinc (Zn2+) ions post-degradation. Gd3+ can have a toxic effect in patients with abnormally functioning kidneys and can cross the blood−brain barrier to accumulate in the brain. 255 Several CMC mimics reported using cancer cell membrane-coated MOFs for homologous targeting. 219,256−258 MOFs also have high porosity, a large surface area, and a high photosensitizer loading capacity 256−258 due to their structural arrangement. Gold particles are another commonly used biocompatible inorganic template because of their inert nature. However, they are not biodegradable, which may be cause for concern. 259 To overcome these issues, the use of nano or ultrasmall templates that facilitate rapid renal clearance is preferred. 231,260 Gold particles are tunable to different shapes (nanoparticles, nanocages, nanorods, and nanoshells), all of which have been used as templates for fabricating CMC mimics. Examples are gold nanocages with RBC membrane coating 56 and H22 liver cancer cell membrane coating, 226 nanorods with RBC membrane 261 and platelet membrane coating, 262 and nanoshells with macrophage membrane coating 139 for specific applications. Phototherapy. Phototherapy is a noninvasive and effective cancer treatment. It includes photothermal therapy (PTT) and photodynamic therapy (PDT). 263 With the right choice of template, these photothermal or photodynamic properties can be exploited in CMC mimics. PTT uses photoabsorbing agents that generate heat under near-infrared (NIR) laser irradiation to kill cancer cells thermally, with less harm to other cells or tissues. 264 Gold templates have a large NIR absorption cross-section and a tunable localized surface plasmon resonance (LSPR) band in the NIR region. 226 This makes them most suitable for incorporation in CMC mimics for PTT. 221,226 Similarly, magnetic templates like Fe3O4 are also good alternatives for use in CMC mimics for PTT. Fe3O4 templates are efficient in photothermal conversion and are outstanding options for hyperthermia treatment. 265 Fe3O4 nanoclusters showed a significant increase in NIR absorption 265 in contrast to the corresponding nanoparticles.
PDT involves reactive oxygen species (ROS) generation by photosensitizers under light of a specific wavelength to oxidize and kill cancer cells. The main ROS are singlet oxygen (1O2), the superoxide anion radical (O2•−), and the hydroxyl radical (•OH). 266 Some combinations of photosensitizers and templates used together in CMC mimics are chlorin e6 (Ce6) in hollow mesoporous silica, 267 merocyanine 540 (MC540) in UCNPs, 268 zinc phthalocyanine (ZnPC) and MC540 in mesoporous silica-encapsulated UCNPs, 185 5,10,15,20-tetraphenylchlorin (TPC) in a ROS-responsive paclitaxel (PTX) dimer (PTX2-TK), 79 and silicon phthalocyanine in PLNPs. 72 PFCs used in combination with photosensitizers provide an adequate oxygen supply to accelerate the generation of reactive singlet oxygen (1O2) and enhance PDT. 269 In porphyrin-based MOFs, 168,219,257 porphyrin acts as a photosensitizer due to its ability to readily absorb visible light and improve the overall ROS generation efficiency. 270 A template used in CMC mimics for both PDT and PTT is semiconducting polymer (SP) nanoparticles of poly(cyclopentadithiophene-alt-benzothiadiazole) (PCPDTBT). SP nanoparticles are known for their excellent optical properties and high NIR absorbing capacity and can generate singlet oxygen and heat. 104 Verteporfin is a photodynamic agent approved by the US FDA for eliminating abnormal blood vessels in the eyes. 271 Recently, platelet membrane-coated, verteporfin-loaded PLGA nanoparticles reduced skin damage during PDT in combination with solar radiation. 121 Indocyanine green (ICG) is an FDA-approved photosensitizer and photothermal agent for template encapsulation. 174,209,269,272 Bioimaging. Bioimaging technology has significantly enhanced the ability to diagnose, treat, and prevent diseases by enabling early detection inside the animal and human body. Bioimaging includes magnetic resonance imaging (MRI), near-infrared (NIR) imaging, and fluorescence (FL) imaging. Fe3O4 nanoparticles are most commonly used as negative (T2) contrast agents for MRI in CMC mimics. 57,265,273 Currently, the standard probes used in MRI scans are gadolinium (Gd3+)-based compounds. These are positive (T1) contrast agents in MRI and have been a preferred choice in the clinic for their better image resolution, easy detection, tunable magnetic properties, and higher colloidal stability. 274 However, these agents are of limited use in patients with renal impairment and have been reported to cross the blood−brain barrier to accumulate in the brain. 255 Some examples of Gd3+-based templates used in CMC mimics are PLGA-Gd-lipid 67,71 and MOFs like porphyrin (TPP)-based Gd/Zn nanocomposites. 219 Manganese (Mn2+) ions can also be a potential alternative to gadolinium as positive (T1) MRI contrast agents. 275 Since Mn2+ is one of the essential elements in the human body, its intake in small amounts does not produce toxic effects. 274 Recently, CMC mimics designed using porphyrin (TCPP)-based Zr4+ cluster MOFs coated with MnO2 nanosheets converted MnO2 into Mn2+ through the H2O2 generated in the system, enabling MRI. 256 Porphyrin-based MOFs can absorb the energy produced by light excitation and generate fluorescence for imaging. 270 Gold nanoparticles, PLNPs, UCNPs, and semiconducting polymer (SP) nanoparticles are examples of templates in CMC mimics used for NIR imaging.
Gold templates have a large NIR absorption cross-section and a tunable LSPR band in the NIR region, providing greater penetration depth in imaging. 226,276 PLNPs have a long-lasting near-infrared afterglow and avoid the tissue autofluorescence caused by in situ excitation. 68,72 SP nanoparticles have a high NIR absorption capacity. 104 UCNPs have significant light penetration depth, narrow emission peaks, no background fluorescence, and exceptional photostability. 132,277 Indocyanine green (ICG) is best known for NIR fluorescence imaging 65 along with phototherapy. Detoxification. Detoxification removes toxins, including those produced during infections by pathogens. The RBC membrane alone 278 or in combination with the platelet membrane 59 has toxin-absorbing capabilities. There are also some templates/devices used in CMC mimics to enhance the detoxification process in different ways. These include olive oil nanodroplets, Janus micromotors, redox-responsive hydrogels, and a 3D bioprinted nanoparticle-hydrogel hybrid device. RBC membrane-wrapped olive oil nanodroplets were used to form biomimetic oil nanosponges. 77 In these nanosponges, the olive oil core soaked up nonspecific toxicants through physical partitioning, and the RBC membrane absorbed and neutralized toxicants through biological binding. They also showed greater detoxification than PLGA-RBC nanosponges. RBC membrane-coated, antibiotic-loaded, redox-responsive hydrogels (RBC-nanogels) were reported to absorb and neutralize pore-forming toxins in the extracellular environment. 114 This facilitated their uptake into the bacteria. Once inside the bacteria, the cross-linked hydrogel cleaved to release the antibiotics and inhibit bacterial growth. These redox-responsive hydrogels were more effective in inhibiting bacterial growth than free antibiotics and nonresponsive hydrogels. Further, RBC membrane-coated Janus micromotors were used to improve the speed of absorption and neutralization of both nerve agent simulants and biological protein toxins. 76 These water-driven mimicking systems were designed by integrating RBC membranes, gold nanoparticles, and alginate (ALG) onto the exposed surface areas of magnesium (Mg) microparticles partially embedded in parafilm. This partial embedding leaves a small opening in the Mg particles. Hydrogen bubbles produced by the spontaneous redox reaction between Mg and water provided guided propulsion without any external fuel. The 3D bioprinted nanoparticle-hydrogel hybrid device was designed with multiple inner channels encapsulating many RBC nanoparticles. 201 The many RBC nanoparticles in one device enhanced the detoxification process by absorbing various nonspecific toxins flowing through the channels. ASSEMBLY OF CELL MEMBRANE-COATED MIMICS The most crucial step in designing a CMC mimic is the assembly of the extracted cell membrane from the cell of interest with the template of choice, which imparts the membrane's physicochemical properties to the CMC mimic. The isolated cell membrane may be in the form of either fragments or vesicles. Before coating, it may be necessary to include an additional extrusion 56,69,92 or sonication 55,93,279,280 step to form cell membrane vesicles (Figure 2). This section describes commonly employed CMC assembly techniques. We also highlight other less explored assembly techniques such as microfluidic electroporation, in situ polymerization, and graphene nanoplatform-mediated cell membrane coating.
Additionally, we emphasize the scope and challenges of the assembly processes, manufacturing difficulties (reproducibility and scale-up), and limitations for clinical translation. Extrusion. Producing uniformly sized particles by pushing material through a porous membrane of the desired cross-section is called extrusion. 281 The extrusion technique is preferred for forming a wide range of nanomaterials like nanoparticles, liposomes, nanotubes, nanofibers, and emulsions. The commonly used membrane extrusion strategies are vesicle extrusion (for liposomes), 282 membrane emulsification (for emulsions), 283,284 precipitation extrusion (for nanofibers and nanoparticles), 285 and biological membrane extrusion (for CMC mimics). 62,286 For fabricating CMC mimics, a solution of the cell membrane vesicles and the template is repeatedly passed through a porous polycarbonate membrane in a mini extruder. The mechanical force applied during the process disrupts the membrane structure and helps it to wrap around the template. In 2011, extrusion was reported for the uniform coating of an RBC membrane onto a PLGA nanoparticle template through 400 and 100 nm polycarbonate porous membranes. 62 Since then, several groups have reported this technique for assembling CMC mimics using different polycarbonate membrane pore sizes, cells, and template types. 56,69,130,132,186,235 After repeated extrusion, centrifugation separates the leftover/unbound cell membrane vesicles from the mixture. The main limitation of this technique is the loss of sample due to the accumulation of material on the porous membrane, leading to difficulty in large-scale production. Sonication. Sonication is the process of applying sound energy to disperse particles in a liquid using an ultrasonic bath or probe sonicator. In this technique, the cell membrane and the template are co-incubated, followed by sonication in ice-cold conditions for a few minutes to fabricate CMC mimics. Sonication disrupts the cell membrane layer, and the noncovalent interactions between the template and the cell membrane facilitate their assembly. Several groups have reported this technique for CMC mimic assembly using different cell and template types, for example, RBC membrane coating onto cross-linked 2-hydroxyethyl acrylate (HEA) hydrogel microparticles; 287 cardiac stem cell membrane onto PLGA microparticles; 187 stem cell, 227 platelet, 93 and neutrophil membranes 55 onto PLGA nanoparticles; and a hybrid of RBC and platelet membranes onto gold nanowires. 196 After sonication, centrifugation of the mixture separates the leftover/unbound cell membrane vesicles. Sonication, unlike the extrusion technique, avoids the loss of material during the coating process. It requires optimization of parameters like power, frequency, and time to avoid sample damage or protein denaturation due to heat. However, the resulting particles may vary in size and coating uniformity. 187,227 This technique might also not be appropriate for some soft templates, as it might affect their size and stability. 288,289 In Situ Polymerization. In situ polymerization is a technique for preparing nanocomposites. It involves polymeric molecules bound to nanoparticles 290 (like carbon nanotubes, graphene oxide, etc.) or to biomolecules 291 (like DNA, RNA, or proteins) in a polymerization reaction mixture to form linear conjugates or nanocapsules.
The reaction mixture consists of a monomer, an initiator, and a cross-linker, exposed to a source of heat or radiation to initiate the polymerization mechanism. In 2015, this technique was reported using RBC membrane-derived vesicles as nanoreactors to synthesize polymeric cores via in situ polymerization, thereby preparing cell membrane-coated hydrogel nanoparticles. 114,292 Membrane vesicles were prepared by extruding a mixture containing RBC ghosts, monomer (acrylamide), cross-linker (N,N′-methylenebisacrylamide), and an initiator (lithium phenyl-2,4,6-trimethylbenzoylphosphinate) through a polycarbonate membrane filter. A PEG-modified (2,2,6,6-tetramethylpiperidin-1-yl)oxyl (TEMPO) inhibitor was added to this solution to prevent cross-linking of monomers on the outside of the cell membrane vesicles. This inhibitor selectively promotes in situ cross-linking, protects outer cell proteins from denaturing, and inhibits nonspecific interactions and leakage of inner monomers across the cell membrane. Upon UV exposure for 5 min, the monomers inside the cell membrane selectively polymerized to form a stable template at room temperature. This process is the reverse of traditional coating methods. It has the potential to be extended to other cross-linking mechanisms and materials and to template−cell membrane combinations that are not currently feasible due to unfavorable surface properties. However, preparing cell membrane vesicles using the extrusion technique can lead to sample loss during large-scale production. Microfluidic Electroporation. Electroporation is a high-throughput technique for incorporating nanoparticles within cells. 293,294 In this technique, cells subjected to rapid high-voltage electric field pulses develop temporary hydrophilic pores within the cell membrane. In 2017, the microfluidic-assisted fabrication of CMC mimics was demonstrated using electroporation. 57 An electroporation setup was integrated with a microfluidic chip with an S-shaped channel to facilitate efficient mixing of RBC vesicles and nanoparticles, fed through a Y-shaped polydimethylsiloxane microchannel. During electroporation, the pores formed in the cell membrane allow passive transport of nanoparticles into the RBC vesicles and the fabrication of uniformly coated RBC-Fe3O4 nanoparticles with improved colloidal stability, uniform size, and in vivo efficacy. An advantage of this technique is the autologous extraction of RBCs, allowing for personalized diagnosis and therapy. The scalability and storage capacity of this technique promote its feasibility for industrial translation. Graphene Nanoplatform-Mediated Cell Membrane Coating. In 2019, a single-step methodology for the extraction and assembly of the leukocyte cell membrane was reported. 295 The design aimed to increase the antileukocyte targeting ability of CMC mimics using a leukocyte cell membrane. The selective ability of graphene nanosheets to vigorously extract phospholipids from cells was the innovative aspect of this CMC mimic. Initially, negatively charged Fe3O4 magnetic nanoparticles were modified with graphene, prepared by the layer-by-layer technique. A positively charged polyethylenimine (PEI) facilitated the immobilization of negatively charged graphene nanosheets onto the Fe3O4 nanoparticles. The CMC mimics were assembled in a quick single-step process by co-incubating the graphene-modified nanoparticles with leukocytes in serum-free media.
The high phospholipid content on the surface of the CMC mimics helped to immobilize lipids for antibody conjugation to target epithelial cell adhesion molecule (EpCAM)-positive CTCs, for example, MCF-7 (human breast cancer cell line) and HepG2 (human hepatocellular cancer cell line) cells. They also demonstrated the CMC mimics' selectivity, with a very high antileukocyte targeting efficacy when tested in synthetic samples (blood mixed with green fluorescent protein (GFP)-MCF-7 cells). The advantage of this protocol is the selective extraction and immobilization of phospholipids from different cell types 296 and the efficient separation and proliferation of the captured CTCs over several passages. Further, it is possible to use these CTCs to design a biomimetic system with homotypic targeting abilities. CHARACTERIZATION OF CMC MIMICS Physicochemical Characterization. After the fabrication of a CMC mimic, it is essential to analyze its structural features, particularly the cell membrane−template interface, to ensure its colloidal stability. An incomplete/unstable membrane may lead to template exposure and impair the effectiveness of the CMC mimics. Therefore, it is critical to perform qualitative and quantitative evaluations of their structural integrity. The quantitative determination of the number of templates coated with the cell membrane remains unexplored, even though this is an important parameter for clinical translation. In this section, we collate the reported physicochemical techniques to quantify and visualize the thickness, uniformity, and stability of the cell membranes and the deformability and permeability properties of CMC mimics post-assembly (Figure 4). Figure 4. Schematic of the qualitative and quantitative physicochemical and biological properties of CMC mimics that validate their formation. Essential parameters to consider while designing CMC mimics include the surface charge, the thickness of the cell membrane coated onto a template, elasticity, protein quantification and identification of the right orientation, the amount and area of cell membrane covering a template, and the permeability of the mimics to diffusion; these help in confirming and visualizing a right-side-out cell membrane in CMC mimics. The schematic also lists the methods and instruments used for characterizing specific physicochemical and biological properties of CMC mimics. Size and Surface Charge. Size and surface charge are two parameters monitored in real time during the assembly of a CMC mimic. The size (hydrodynamic radius) and surface charge of CMC mimics can be measured using a dynamic light scattering (DLS) analyzer and zeta sizer (Figure 5A). Post-assembly of CMC mimics, it is typical to note a negative surface charge close to that of the cell membrane and an increase of a few nanometers in size, confirming the coating. 64,94,100,198,226,257 Measuring the size pre- and post-assembly using DLS helps to determine the thickness of the cell membrane. However, the thickness of the outer membrane coating can vary depending on the number of layers and their extent of fusion with the template. 63 In this section, we mention a few variations of template and cell membrane thickness in different CMC mimics. For example, a T cell membrane-coated PLGA system was reported with an observed size change from 88.3 ± 1.3 nm to 105.4 ± 4.4 nm (thickness ∼17.1 nm) and a surface charge of −29.5 ± 1.2 mV, similar to that of the cell membrane. 159 In 4T1 cell membrane-coated cerium oxide-dotted CS, there was an increase of ∼20 nm in size, from 131.7 ± 5.2 nm to 152.8 ± 3.9 nm post-assembly, with a ζ potential of −26.1 ± 0.9 mV after cell membrane coating. 297 In monocyte membrane (U937)-coated PLGA systems, there was an increase in size of ∼20−40 nm, with a ζ potential of −16.5 mV (PLGA: −8.3 mV; U937: −13.6 mV). 298 In MDA-MB-231 cell membrane-coated mesoporous silica loaded with ferric oxide, an increase in average size from 164 to 220 nm (thickness ∼56 nm), with a surface charge of −20.88 ± 0.4 mV post-assembly, was observed. 247
Similarly, in MCF-7 cell membrane-coated mesoporous silica PEG-liposomes, a size change from 74.07 ± 0.7 nm to 188.5 ± 3.3 nm (thickness ∼114 nm), with a surface charge of −23.8 ± 1.1 mV closer to that of the cell membrane, was observed. 175 In addition to using a DLS analyzer, the thickness and coverage of the cell membrane on each template can be visualized using the microscopic techniques discussed in the next section. Structure Integrity. Different microscopic techniques help to visualize the structural integrity of CMC mimics. The three microscopic techniques most often used to gain insight into the structural integrity and uniformity of assembled CMC mimics are cryo-transmission electron microscopy (cryo-TEM) or TEM, field-emission scanning electron microscopy (FESEM) or SEM, and confocal laser scanning microscopy (CLSM) (Figure 5B). CMC mimics have a characteristic core−shell structure, consisting of a dense inner core of a template and a thin outer coating of a cell membrane. Due to differences in composition, there is a difference in electron density between these two layers. Hence, TEM imaging visualizes the structure as a dark core with a light outer coating. The variation in the thickness of the cell membrane in different CMC mimics was also visualized by TEM analysis, as in the previous section. For example, in the leukocyte membrane-coated silica-APTES system, the outer layer thickness could reach 500 nm by increasing the membrane:particle ratio. 63 For leukocyte cell membrane-coated Fe3O4-PEI-graphene-modified nanoparticles, variable membrane thicknesses of around 11.78 and 16.94 nm were observed. 295 In the case of hybrid RBC and MCF-7 breast cancer cell membrane-coated melanin nanoparticles, an ∼9.1 nm-thick membrane was reported. 198 Similarly, for 4T1 cancer cell membrane-coated MOFs 257 and RBC membrane-coated PFC nanoparticles, 269 ∼10 and 20 nm-thick membranes were observed, respectively. Overall, TEM can provide a qualitative estimation of the membrane homogeneity around the template post-coating, mostly in the case of nanoscale CMC mimics. SEM is another qualitative technique used to visualize the change in surface morphology/texture after cell membrane coating and the complete/incomplete coverage of the cell membrane, predominantly in the case of microscale CMC mimics. For example, using SEM, the complete coverage of leukocyte membrane on APTES-silica microparticles 63 and the incomplete coverage of cardiac stem cell membrane fragments on PLGA microparticles were observed. 187 In the case of a motor sponge designed using the RBC membrane and gold nanowires of 400 nm diameter and 3 μm length, no change in the concave end of the nanowire was observed after complete RBC membrane coating. 299
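Since coating thickness is routinely back-calculated from the pre- and post-assembly DLS sizes quoted above, a minimal Python sketch of that arithmetic may be useful. Note that the convention varies: the reports cited here quote the full diameter increase as the "thickness", while halving it gives a per-side estimate closer to the bilayer scale. The function below supports both, and the numbers reproduce the T cell/PLGA example from the text.

```python
# Minimal sketch: estimate membrane-coating thickness from DLS sizes
# measured before and after CMC assembly. Whether the diameter increase
# is halved (per-side coating) varies between reports; both are shown.

def coating_thickness(d_core_nm: float, d_coated_nm: float,
                      per_side: bool = False) -> float:
    """Diameter increase after coating, optionally halved per side."""
    delta = d_coated_nm - d_core_nm
    return delta / 2 if per_side else delta

# T cell membrane-coated PLGA example from the text (88.3 -> 105.4 nm)
print(coating_thickness(88.3, 105.4))                 # ~17.1 nm, as reported
print(coating_thickness(88.3, 105.4, per_side=True))  # ~8.6 nm per side
```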
Uniform RBC membrane coating onto Mg Janus motors was observed, with a spherical geometry of size 20 μm and a small circular opening of 2 μm. 76 CLSM is a qualitative technique that provides insight into the efficiency of the whole assembly process by estimating the number of templates that are well coated. For CLSM, the cell membrane and the inner core are fluorescently tagged with different dyes to aid visualization. Coverage. The coverage of the cell membrane on the template can be validated using an aggregation assay based on the streptavidin-biotin cross-linking ability, Forster resonance energy transfer (FRET) analysis, thermogravimetric analysis (TGA), and Fourier transform infrared (FT-IR) spectroscopy (Figure 5C). In 2014, the completeness of the RBC membrane coating on PLGA nanoparticles was demonstrated using streptavidin-biotin cross-linking chemistry (aggregation assay). 300 The RBC membrane-coated biotinylated PLGA nanoparticles were mixed with streptavidin and monitored for a change in particle size. When the membrane coverage is low, exposed biotin on the PLGA surface binds to streptavidin and induces significant aggregation and size change. Therefore, a suitable membrane-to-polymer ratio can be identified as the one at which aggregation or size change upon the addition of streptavidin ceases. Such an aggregation assay helps in determining the efficiency of membrane coating in CMC mimics. The TGA technique measures the percentage weight loss of samples when heated. 301 The TGA profiles can differ from material to material (template, cell membrane, and the final CMC mimic). The difference in percentage weight loss between the template and the CMC mimic determines the amount of cell membrane coated onto a template. It also helps to study the stability of the membrane on a template. The temperature range in TGA depends on the thermal degradation of the samples. For example, the amount and stability of the leukocyte membrane coating on APTES-silica microparticles were demonstrated in the range from 30 to 150°C; about 15 wt % and 8.0 wt % membrane were observed after 1 and 24 h incubation of the CMC mimics in PBS, respectively. 63 In the case of U-251 MG glioblastoma cell membrane-coated magnetic nanocubes (a hybrid of Fe3O4 and MnO2), particles were heated from 100 to 600°C, and a 12 wt % cell membrane contribution was observed on the nanocubes. 273 Similarly, for LNCaP-AI prostate cancer cell membrane-coated CaCO3-capped mesoporous silica nanoparticles (MSN@CaCO3), around 30 wt % of cell membrane was found on the MSN@CaCO3 by heating samples from 25 to 800°C. 225 FT-IR spectroscopy is another versatile technique to qualitatively characterize cell membrane coating by comparing spectra before and after CMC assembly. Only a few reports mention this technique to confirm CMC mimic assembly; these are discussed in this section. Weakening of the sharp characteristic IR peaks of the template after cell membrane coating confirms the immobilization of the membranes on the template surface. For example, the U-251 MG glioblastoma membrane coating on magnetic nanocubes was confirmed by observing an additional peak in the range of 1500−3000 cm−1. 273 The additional broad peak at approximately 1750 cm−1 was due to the C−N and C=O vibrations of the cell membranes.
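The TGA readout described above reduces to simple arithmetic: if the extra weight loss of the coated particle relative to the bare template is attributed to the organic membrane, the difference gives the membrane mass fraction. Below is a minimal sketch under that assumption; the input weight-loss percentages are hypothetical illustration values, not data from the cited studies.

```python
# Minimal sketch: membrane mass fraction from TGA weight-loss curves,
# assuming the extra weight loss of the coated mimic relative to the
# bare template comes entirely from the cell membrane. The input
# percentages are hypothetical illustration values.

def membrane_wt_pct(template_loss_pct: float, mimic_loss_pct: float) -> float:
    """Weight-loss difference attributed to the membrane (wt %)."""
    return mimic_loss_pct - template_loss_pct

# e.g., bare template loses 3 wt % on heating, coated mimic loses 15 wt %
print(f"membrane content ~ {membrane_wt_pct(3.0, 15.0):.1f} wt %")
```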
In the case of MDA-MB-231 breast cancer cell membrane-coated mesoporous silica nanoparticles (SiFePNs), an additional peak was also observed around 2950 cm −1 , attributed to the C−H stretching vibration of the methyl groups of cell membrane phospholipids, along with weakening of the sharp IR peaks of SiFePNs after coating. 247 The characteristic peaks of C=O and −NH 2 in the spectra were reported to confirm the RBC membrane coating onto persistent luminescence nanocarriers. 68 On occasion, additional peaks can also arise from chemical interactions between the cell membrane and the template surface. For example, in the leukocyte membrane-coated APTES-silica microparticles, two strong peaks were observed at 1652 and 1544 cm −1 that correspond to the amide I and II modes of the membrane proteins. 63 The C=O stretching vibrations of the peptide bonds give rise to the amide I band, while the C−N stretch coupled with the N−H bending mode gives rise to the amide II band. This amide linkage arose from the peptide bonds between protein residues and the covalent bond between the carboxylic moieties of proteins and the primary amines of the APTES molecules in the microparticles. The spectra of the CMC mimics also exhibited a weak peak for the Si−O moieties and a strong peak for C−H stretching compared with the spectra of the APTES-silica particles. This indicates immobilization of the membranes on the particle surface, shielding the silicon surface while exposing the long C−H and C−C chains of phospholipids and proteins. FRET is another assay used to characterize CMC mimics assembled using one or two different cell membranes. In this assay, each cell membrane is labeled with a donor or acceptor fluorescent dye from a FRET pair. 7-Nitrobenz-2-oxa-1,3-diazol-4-yl (NBD) and RhB (rhodamine), and DiI and DiD, are commonly used FRET pairs. FRET reports on the molecular distance between the fluorophores via the distance-dependent energy transfer mechanism: donor emission is minimized when energy transfer to a nearby acceptor occurs. For example, fusion of the NK cell membrane and liposomes to create CMC mimics was validated using a FRET study. 102 Two sets of liposomes were tagged with a fluorescence donor (PE-NBD) and a fluorescent acceptor (PE-RhB) and mixed with the NK cell membrane. As fusion progressed, insertion of NK membrane material increased the donor−acceptor distance and suppressed FRET, resulting in a decrease in the fluorescence intensity of the acceptor and an increase in the fluorescence intensity of the donor. Similarly, FRET studies have been used to confirm the fusion of various cell membranes, for example, the fusion of platelet and RBC membranes using a DOPE-RhB/C6-NBD-doped platelet membrane, 97 fusion of the B16-F10 cancer cell membrane and RBC membrane using a DiD/DiO-doped B16-F10 cancer cell membrane, 100 fusion of liposomes and the RBC membrane using DHPE-RhB/C6-NBD-doped liposomes, 239 and the fusion of RBC and MCF-7 cancer cell membranes using a DOPE-RhB/C6-NBD-doped MCF-7 cancer cell membrane. 198 FRET also helps to validate the fusion of the cell membrane onto a template. In this case, the cell membrane and template are both doped with a FRET pair. For example, the fusion of the RBC membrane onto PLGA was confirmed by doping the RBC membrane with NBD-PE and the PLGA with RhB-egg phosphatidylcholine (PC). 71
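The distance dependence exploited by all of these FRET experiments follows the Förster relation; a minimal sketch is given below. The Förster radius R0 used here is a hypothetical value, as R0 depends on the specific donor/acceptor pair (typically a few nanometers).

```python
# Minimal sketch of the Förster relation underlying these FRET assays;
# R0 = 5 nm is a hypothetical value for illustration only.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Energy-transfer efficiency E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
# Membrane fusion dilutes the dye pair and increases r, so E drops:
# the donor fluorescence recovers while the acceptor fluorescence falls.
```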
As the distance between the NBD-labeled RBC membrane and the RhB-labeled polymeric core becomes shorter, the NBD donor fluorophore transfers energy more efficiently to the RhB acceptor fluorophore, increasing the fluorescence intensity of the acceptor and decreasing that of the donor. Membrane Permeability. The permeability of the cell membrane in CMC mimics has been little explored. In live cells, the membrane is selectively permeable and regulates the flux of ions through ion pumps. Understanding this permeability enables better control of drug encapsulation and release in CMC mimics. The permeability of RBC-PLGA systems was investigated using a membrane-permeable molecular probe. 302 The probe succinimidyl 6-(N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)hexanoate (NBD-NHS) was used to label both sides of the RBC membrane, and the dithionite (S 2 O 4 2− ) ion was used to quench the NBD fluorescence. It was observed that the RBC-PLGA system was more permeable to dithionite ions than living RBCs and egg-PC/cholesterol liposomes (Figure 5D). Deformability. Deformability is a vital design parameter that affects the behavior of particles on both the micro- and nanoscales. 303,304 It is mostly dependent on the shape and average elasticity of the particle. Atomic force microscopy (AFM) and compression/universal testing machines are among the techniques used for measuring the mechanical properties of CMC mimics. The multiparticle tracking (MPT) method and microfluidic techniques help to validate and visualize the deformability of CMC mimics. There are ongoing efforts to incorporate elastic properties (Figure 6) into designed particle systems for enhancing mobility and biodistribution in animal studies. For example, RBC-shaped microparticles (RBC-MPs) were prepared using an electrospinning-based technique and showed an intraparticle elasticity difference (IED), which was measured by AFM. 305 The Young's modulus (E) of the dent in the RBC-MPs was <100 MPa, whereas that of the thick rim was 100−300 MPa. This difference in the E value between the dent and the rim gave the RBC-MPs their characteristic shape, which helped these particles deform and retain their original shape after passing through a membrane filter with 1 μm pores. These RBC-MPs also showed less accumulation in the lungs and the spleen. The same group used these RBC-MPs for RBC membrane coating to mimic the shape and surface structure of RBCs and increase their circulation time in blood. 287 In 2011, the DeSimone group also prepared tunable elastic RBC-shaped hydrogel microparticles (RBCMs) using the particle replication in nonwetting templates (PRINT) technique. 26 They tested the mechanical properties (bulk modulus) with a universal testing machine (Instron) at a rate of 5 mm/min. They achieved a tunable modulus of the hydrated polymer from 63.9 to 7.8 kPa by varying the cross-linker from 10% to 1%, respectively, which overlapped with the reported modulus of RBCs (26.7 kPa). They successfully demonstrated the deformability of RBCMs under flow conditions using microfluidic models of vascular constriction. Microfluidics has also been widely used to study the deformability of cells like RBCs 306 and leukocytes, 307 and can therefore also be explored to analyze and visualize the flexibility of CMC mimics.
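The AFM measurements cited above are not described algorithmically in this review, but converting force-indentation data into a Young's modulus typically follows a contact model. Below is a minimal sketch, assuming a spherical AFM tip and the Hertz model; all numbers are synthetic, not from the cited studies.

```python
# Minimal Hertz-model fit: E from F = (4/3) * E/(1 - nu^2) * sqrt(R) * d^1.5.
# Synthetic data only; not taken from the cited studies.
import numpy as np

def hertz_modulus(force_N, indent_m, tip_radius_m, poisson=0.5):
    """Least-squares fit of Young's modulus E (Pa) from force-indentation data."""
    x = (4.0 / 3.0) * np.sqrt(tip_radius_m) * np.asarray(indent_m) ** 1.5
    f = np.asarray(force_N)
    slope = np.sum(x * f) / np.sum(x * x)   # best-fit F = slope * x
    return slope * (1.0 - poisson ** 2)     # slope = E / (1 - nu^2)

# Synthetic check: a 40 MPa sample probed with a 20 nm tip
delta = np.linspace(1e-9, 50e-9, 25)
force = (4.0 / 3.0) * 40e6 / (1.0 - 0.25) * np.sqrt(20e-9) * delta ** 1.5
print(f"fitted E = {hertz_modulus(force, delta, 20e-9) / 1e6:.1f} MPa")  # ~40.0
```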
Recently, the mechanical properties of yolk−shell structured MCF-7 cancer-cell-membrane-coated mesoporous-silica-nanoparticle-supported liposomes (CCM@LM) were validated using AFM and demonstrated using the MPT method. 175 These yolk−shell structures showed moderate rigidity, with a Young's modulus around 40 MPa. During filtration, these CMC mimics could also transform into an ellipsoidal shape. These properties facilitated their penetration through spheroids in vitro. Their ECM diffusion capability was evaluated using the MPT method, in which the MPT medium was collagen (I) hydrogel. The mean squared displacement (MSD) value of CCM@LM was approximately 7.1- and 2.6-fold higher than that of LM and PLGA nanoparticles, respectively. Incorporating elastic properties into CMC mimics can enhance their mobility and penetration in tumors; this property needs to be explored in more depth. Biological Characterization of CMC Mimics. The cell membrane provides surface functionality to CMC mimics that allows communication with other cells, helps them escape macrophages, and lets them circulate longer in the bloodstream. For example, the CD47 receptor on the RBC membrane selectively binds to the signal-regulatory protein alpha (SIRPα) glycoprotein expressed by macrophages to prevent uptake. 109,110 Therefore, for CMC mimics to function efficiently, the cell membrane must maintain the right orientation post-coating and contain the maximum amount of translocated protein on its surface. The isolated membrane should have minimal nuclear, mitochondrial, and cytosolic contamination, and the transmembrane proteins must face outward for active targeting and accumulation at the intended site. Improper membrane orientation (integral proteins exposed on the wrong face of the mimics) impairs cell-to-cell communication and overall function, increases the risk of macrophage detection, and causes unwanted side effects. Therefore, proper qualitative and quantitative evaluation of intact membrane proteins, purity, and orientation is required in CMC mimics to improve their functional efficiency and reproducibility for clinical translation. All of these properties are also summarized in Figure 4. Protein Analysis. The protein composition and expression on the isolated cell membrane and CMC mimics can be analyzed, identified, and quantified using several techniques. These include the bicinchoninic acid (BCA) assay/Bradford assay, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), Coomassie brilliant blue staining, Western blot/dot blot, flow cytometry, and liquid chromatography−mass spectrometry (LC-MS) (Figure 7). Before protein analysis, the proteins need to be extracted from the isolated cell membrane and CMC mimics using a lysis buffer (for example, radioimmunoprecipitation assay (RIPA) buffer). The lysis buffer should be supplemented with a protease and phosphatase inhibitor cocktail and phenylmethylsulfonyl fluoride (PMSF) to prevent degradation of the proteins, and samples should be stored at −80°C for protein analysis. The BCA and Bradford assays are most commonly used for colorimetric detection and quantification of the total protein concentration from various cell membranes and CMC mimics. 65,67,100,150,195,308,309 Although most studies have discussed the use of these assays, only a few reports have mentioned the amount of total protein translocated onto the isolated cell membrane or CMC mimics, and there are no reported standards on the total protein content that should be present on CMC mimics for their therapeutic effects.
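As a worked illustration of the arithmetic behind these colorimetric assays, the sketch below fits a linear standard curve to synthetic BSA absorbance values (not data from any cited study) and converts protein readings into the weight-percent loading yields that the reports cited next quote.

```python
# Synthetic BCA-style standard curve and loading-yield arithmetic.
# All values are hypothetical, for illustration only.
import numpy as np

standards_ug_ml = np.array([0, 125, 250, 500, 1000, 2000])       # BSA standards
absorbance_562 = np.array([0.05, 0.16, 0.27, 0.49, 0.93, 1.78])  # hypothetical

slope, intercept = np.polyfit(standards_ug_ml, absorbance_562, 1)

def protein_conc_ug_ml(a562: float) -> float:
    """Read a sample concentration off the linear standard curve."""
    return (a562 - intercept) / slope

def loading_yield_wt_pct(protein_ug: float, mimic_ug: float) -> float:
    """Loading yield = membrane protein mass / total mimic mass * 100."""
    return 100.0 * protein_ug / mimic_ug

print(f"A562 = 0.60 -> {protein_conc_ug_ml(0.60):.0f} ug/mL")  # ~630 ug/mL here
print(f"yield = {loading_yield_wt_pct(18.6, 100.0):.1f} wt %")
```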
For example, around 300 mg of protein content was found in NK cell (NK-92) membranes extracted from 1 × 10 7 cells using the Bradford assay. 102 A 7.9 ± 2.0 wt % protein loading yield of the bacterial cell membrane onto gold nanoparticles, 94 a 2.8 ± 0.5 wt % protein loading yield of the MIN6 cell membrane onto fibers, 96 and an 18.6 ± 5.7 wt % protein loading yield of the RBC membrane coating onto mesoporous silica nanoparticles 61 were found using the BCA assay. In fact, determination of the protein content before and after the cell membrane coating can also help validate the membrane coating onto a template. The protein profiles of the cells, the isolated cell membrane, and the CMC mimics can be visualized, analyzed, and compared qualitatively by loading and running the same amount of protein in an SDS-PAGE gel of specified percentage, which separates proteins based on mass. 99,150,207,277 After the separation of proteins, these gels can be stained with the irreversible Coomassie brilliant blue dye, which binds nonspecifically to proteins through ionic interactions between its sulfonic acid groups and positively charged protein amine groups, as well as van der Waals attractions, and appears as blue protein bands. 63,87,106,175,235,310 The intensity of the blue protein bands helps to compare the total protein profile translocated from the natural cell to the isolated cell membrane and CMC mimics. This is the basic analysis reported in almost every study to validate the successful coating of a cell membrane onto a template. Western blot is the most widely used technique to identify and compare the expression of specific proteins from among a mixture of proteins in the cell lysate, isolated cell membrane, and CMC mimics. For example, using Western blot, a comparable presence of the CD47 receptor on natural RBC lysate, the RBC membrane, and RBC membrane-coated Fe 3 O 4 nanoparticles was observed. 211 Similarly, comparable levels of DNAM-1, NKG2D, and CD56 (neural cell adhesion molecule) were found on the NK cell lysate, the NK cell membrane, and its membrane-coated PLGA nanoparticles 67 and mPEG-PLGA nanoparticles, 150 respectively. In the case of hybrid RBC and MCF-7 cancer cell membrane-coated melanin nanoparticles, 198 comparable RBC-specific membrane proteins (band 3, GPA, CD55, and CD47) and MCF-7-specific membrane proteins (EpCAM, N-cadherin, galectin-3) were also observed on the hybrid RBC-MCF-7 vesicles and their membrane-coated mimics. Enrichment of proteins such as the cluster-of-differentiation receptors (CD11c, CD86, CD40) was found on the dendritic cell membrane and its membrane-coated PLGA nanoparticles relative to the dendritic cell lysate. 106 Similarly, significant enrichment of LPS-binding proteins (CD14, TLR4) and cytokine-binding receptors (CD126 and CD130 for IL-6, CD120a and CD120b for TNF, and CD119 for IFN-γ) was observed on the macrophage membrane and membrane-coated PLGA nanoparticles relative to the macrophage cell lysate. 130 Likewise, enrichment of surface proteins such as TNF-α receptor, IL-1R, and LFA-1 was observed on the neutrophil membrane and its membrane-coated PLGA relative to the neutrophil lysate. 66 Dot blot is another blotting technique, requiring only a few microliters of sample spotted directly onto a PVDF or nitrocellulose membrane followed by standard blotting procedures. Dot blot is a quick technique used to identify the position of the isolated membrane fraction in a sucrose gradient using specific protein markers.
For example, the leukocyte cell membrane was isolated using a 55−40−30% (w/v) sucrose gradient. 63 The gradient was divided into 10 fractions and analyzed for specific markers using dot blot, and the lipid ring at the 40−30% sucrose interface (fractions 5 and 6) was found to be enriched with plasma membranes. Similarly, a 44−40−5% sucrose gradient was divided into eight fractions for dot blot analysis, 213 and the majority of platelet membranes were found in the lipid ring at the 5−40% sucrose interface. Very few reports have used dot blot to identify the expression of a specific protein on the isolated cell membrane and CMC mimics, because it does not provide information on the actual size of the target protein as a Western blot does. For example, the presence of CD47 and glycophorin A (GPA) on the RBC membrane and its coated PLGA-Gd nanoparticles was reported using both dot blot and Western blot. 71 Western blot or dot blot also helps to determine the purity of the isolated cell membrane by using nuclear-, mitochondrial-, or cytosol-specific antibodies to detect contamination in the membranes. For example, histone H3 209,257 or nucleoporin p62 63 antibodies have been used as nuclear markers, cytochrome c oxidase (COX IV) 63,209,257 or ATP5a 224 antibodies as mitochondrial markers, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) 209,224,257 as a cytosol marker, and P-cadherin 92 or Na+/K+-ATPase 73,92,150 as plasma membrane markers. Flow cytometry is a powerful qualitative and quantitative technique, useful for identifying and quantifying specific proteins on CMC mimics by measuring fluorescence intensity. Only a few reports have explored this technique for protein analysis. For example, the fluorescence intensities of CD42b, CD47, CD41, and CD61 on platelet membrane-coated APTES-modified silica nanoparticles 213 and the fluorescence intensity of CXCR4 on U87 cancer cell membrane-coated PLGA nanoparticles 224 were measured. An equal amount of LFA-1 receptor on neutrophil membrane-coated PLGA nanoparticles and neutrophils 66 was observed. Similarly, comparable fluorescence intensity of the MHC-II protein, corresponding to an equal amount of MHC-II surface protein, was observed on dendritic cell membrane-coated mimics and natural dendritic cells. 106 Therefore, this technique has great potential for analyzing and comparing the amount of a specific protein on CMC mimics with that on the natural cell in a given batch, which may help overcome reproducibility issues. LC-MS/proteomics is a quantitative, large-scale protein analysis. It involves fractionation of complex peptide or protein mixtures, acquisition of the data necessary to identify individual proteins using mass spectrometry, and, finally, analysis and organization of the mass spectrometry data using bioinformatics. 311 The total number of proteins identified can be further characterized based on cellular function (integral or peripheral plasma proteins, cytoskeletal or junctional proteins), biological process (transport, immunity, cell−cell adhesion, developmental process, proteolysis, lipid metabolism, etc.), and molecular function (GTP (guanosine triphosphate) binding, protein binding, GTPase activity, GDP (guanosine diphosphate) binding, actin binding, etc.). 150,239,312 For example, a shotgun proteomics method identified 868 distinct proteins on human NK cell membrane-coated mPEG-PLGA nanoparticles. 150
They also analyzed specific proteins on the CMC mimics, such as immunity-related GTPase family M protein 1 (IRGM1), cannabinoid type 1 receptor (CB1), the ras-related protein encoded by the RAB10 gene (RAB10), and receptor activator of nuclear factor κB ligand (RANKL), involved in the polarization of M1 macrophages, as well as NKG2D and DNAM-1, involved in targeting tumor cells. A total of 2215 common proteins were identified on U-251 MG cell membranes and their coated magnetic nanocubes. 273 The identified proteins included cluster of differentiation 59 (CD59), epidermal growth factor receptor (EGFR), CD44, tight junction protein 1 (TJP1), myosin light-chain kinase (MYLK), and others. A label-free quantification proteomics method identified 474 membrane proteins on RBC membrane-coated PLGA nanoparticles 313 and 148 common proteins in erythroliposomes (a hybrid of RBC vesicles and liposomes) and RBC vesicles. 313 All of the research papers cited above have provided long lists of proteins identified on the cell membrane and the corresponding CMC mimics. The Orientation of Cell Membranes (Right-Side-Out). The orientation of the cell membrane in CMC mimics determines the direction the receptors face after coating onto a template. The extracellular proteins must face outward, whereas the intracellular proteins should face inward, to maintain the functionality of the CMC mimics. Among the methods for studying this orientation are immunogold staining, antibody binding assays, quantification of glycoproteins or sialic acid content, and quantification of transmembrane or internal membrane proteins (Figure 8). Immunogold staining or labeling is a qualitative electron microscopy technique used to identify the distribution of a specific protein of interest located on the interior or exterior of the cell membrane of CMC mimics. It utilizes gold nanoparticles conjugated with a secondary antibody, which in turn attaches to the primary antibody designed to bind a specific protein on the CMC mimic. The gold nanoparticles are visible as black spots under electron microscopy and help visualize the arrangement of the specific protein on the CMC mimics. Reported biomarkers used for immunogold staining and TEM analysis include CD47 198 and CD235a 97 on the RBC membrane, CD61 97 and CD47 93 on platelet membranes, and CD340 198 on the MCF-7 breast cancer cell membrane. The orientation of the HeLa membrane on PLGA nanoparticles was also demonstrated using immunogold staining, but with the gold nanoparticles tagged with AS1411, a nucleic acid aptamer that targets the extracellular region of nucleolin in the cell membrane, and analyzed under a transmission electron microscope. 235 An antibody binding assay was reported to examine the coating and sidedness of the RBC membrane on gold nanoparticles. 286 Two distinct anti-CD47 antibodies (exoplasmic and cytoplasmic) conjugated to polystyrene microspheres were used for interaction with the membrane and further analyzed under TEM. Glycans and sialic acid groups are asymmetrically distributed on the extracellular side of the cell membrane; hence, their quantification has also been reported for validating the sidedness of the RBC membrane in a CMC mimic. 211,299,300 The glycoprotein content and terminal sialic acid groups were first enzymatically removed from the CMC mimics using trypsin and sialidase, respectively.
The glycoprotein content was then quantified using a periodate-based glycoprotein detection assay or a mouse glycoprotein ELISA kit, and the sialic acid content using a sialic acid quantification kit. Transmembrane and/or internal membrane proteins in a CMC mimic can also be identified, quantified, and compared with the source cell using flow cytometry, immunoblotting, or fluorescence microscopy to validate the orientation of the cell membrane. The localization of CD3z (intracellular) and LFA-1 (extracellular) proteins on leukocyte membrane-coated mimics was determined using both flow cytometry and immunoblotting: the fluorescence intensity of LFA-1 on the CMC mimics was three times higher than that of the leukocyte vesicles, whereas CD3z signals were detected only after permeabilization. 63 The comparable intensity of major histocompatibility complex II (MHC-II) protein (extracellular) on dendritic cell membrane-coated mimics and dendritic cells was confirmed using flow cytometry. 106 Similarly, comparable intensities of the LFA-1 (extracellular) protein on neutrophil membrane-coated mimics and neutrophils 66 and of the CD47 antibody (extracellular) on RBC membrane-coated mimics and RBC ghosts were observed using flow cytometry. 278 In the case of platelet membrane-coated mimics, two anti-CD41 (glycoprotein (Gp) IIb/IIIa integrin) primary antibodies were used, binding to the N-terminus and C-terminus of CD41 located in the extracellular and intracellular regions, respectively. 213 By flow cytometry, the extracellular CD41 intensity on the outer surface of the CMC mimics was observed to be four times higher than that of the intracellular domain. Similarly, for the UM-SCC-7 membrane (a human squamous carcinoma cell line) and its mimics, two anti-CXCR4 antibodies were used to bind the extracellular and intracellular regions, 73 and a comparable fluorescence intensity of CXCR4 on the CMC mimics and cells was observed using a fluorometer. Epifluorescence microscopy has also been reported for analyzing the immunostained CD3 receptor (extracellular) on T cell membrane-coated mimics. 280 Fluorescence intensity per particle was quantified using ImageJ software, and 40% of the CMC mimics were observed to display some or all of the CD3 receptors in the correct orientation. Role of CMC Mimics in Various Therapeutic Applications. CMC mimics have gained attention in several therapeutic applications, such as cancer, inflammatory diseases, and infectious diseases. The purpose of designing these mimics is to achieve targeting efficacy and accumulation at the target site. Figure 9 presents an overview of the different cell membrane and template combinations used for these applications, and Table 3 summarizes the in vitro and in vivo models used to validate their efficacy. Cancer Therapy. Cancer is the abnormal and uncontrolled growth of cells in the human body. The primary tumor is the initial region from which the cancer cells begin to spread. These tumor cells secrete various chemokines that redirect platelets and immune cells (neutrophils, macrophages, T cells, NK cells) to facilitate their growth and progression in different parts of the body. 314,315 Circulating tumor cells (CTCs) rapidly spread to the blood and lymph nodes and cause life-threatening metastasis. 316,317 Therefore, during chemotherapy, delivering the drug to the metastatic site and neutralizing the CTCs in the blood and lymph nodes is crucial.
Chimeric antigen receptor T cell immunotherapy (CAR-T), 318,319 adoptive immunotherapy, 320,321 immune checkpoint blockade therapy, 322,323 vaccines, treatment with oncolytic viruses, 324,325 monoclonal antibodies, 326 cytokines, 327 and immunomodulatory treatment 328 are immunotherapies currently under consideration. However, tumor heterogeneity, immune cell dysfunction, acquired resistance to immunotherapy, and immunotoxicity complicate their clinical translation. 329−331 Therefore, there is a pressing need to discover and deliver tumor neoantigens to activate the patient's immune system efficiently. Incorporating a cancer cell membrane within a CMC mimic provides the required neoantigens, particularly in the case of highly mutagenic tumors. Current treatment options include modifying nano/microparticle surfaces and delivering immunomodulators and chemotherapeutic drugs. To resolve the complexity of modification, several CMC mimics with natural biocompatible characteristics have been designed with various combinations of cell membranes and templates to effectively target primary tumors, metastases, and CTCs. If required, the cell membrane can be further modified with the desired active targeting moieties using lipid insertion 71,234,332,333 or membrane fusion 97,100,196,198 to increase its targeting efficacy toward tumor cells in various organs (brain, breasts, lungs, cervix, colon, pancreas, etc.). This section discusses CMC mimics used for cancer treatment, as summarized in Table 3. RBC membrane-coated PLGA nanoparticles were designed in Zhang's lab to overcome the low circulation half-life of nanoparticle drug delivery systems by utilizing the CD47 receptors on the RBC membrane. 62 Compared to PEGylated systems, the half-life of these mimics improved by at least 2-fold, and they remained in circulation for up to 72 h post-injection. Following this work, RBC membrane coating of gold nanocages, Fe 3 O 4 nanoparticles, and PFCs improved their circulation times and additionally enhanced their suitability for bioimaging and phototherapy applications. 265,269,59,68 Due to the lack of tumor-targeting proteins on the RBC membrane, many researchers have modified it with peptides or fused it with another cell membrane to enhance its targeting efficacy toward specific primary or metastatic tumors. For example, arginylglycylaspartic acid (RGD)-modified RBC membrane-coated paclitaxel-loaded polycaprolactone (PCL) nanoparticles significantly inhibited the growth of the primary tumor in breast cancer and lung metastasis. 309 Similarly, brain-targeting peptide ( D CDX, 234 T7, 334,335 and NGR 335−337 )-modified RBC membranes facilitated the crossing of the blood−brain barrier by CMC mimics and improved their ability to target glioma. Platelet cell membranes have gained interest for designing CMC mimics that target CTCs due to the effective interaction between P-selectin on platelets and CD44 receptors on tumor cells. 338 According to these reports, platelet membrane-coated mimics captured and killed CTCs in blood and lymph nodes and effectively inhibited breast cancer metastasis. 87,119 Further, TNF-related apoptosis-inducing ligand (TRAIL)-modified platelet membrane-coated templates also eliminated CTCs effectively; TRAIL additionally activates apoptosis in tumor cells by binding to the death receptors (DR4, DR5) on the cell surface. 213,339,340
Additionally, coating a hybrid membrane of platelets and leukocytes on commercially available immunomagnetic beads was very effective in isolating pure CTCs from clinical blood samples collected from breast cancer patients, demonstrating the possibility of extending these isolated CTCs to in vitro applications and their potential for use in personalized medicine. 99 The cancer cell membrane is known for its homologous targeting abilities, attributed to the presence of different adhesion molecules on the cell surface. These adhesion molecules play an important role in the development of invasive and distant metastasis. 341 Designing mimics using cancer cell membranes could therefore be a potential strategy for developing personalized tumor-specific therapies or vaccines. In this context, CMC mimics of cisplatin-loaded gelatin nanoparticles coated with a patient-derived tumor cell membrane (head and neck squamous cell carcinoma) (Figure 10) were fabricated and tested for efficacy in a patient-derived xenograft model. 105 These autologous cell membrane-coated mimics were able to ablate the tumor completely and inhibit tumor recurrence; however, a mismatch of membrane donors and hosts resulted in weaker targeting. Numerous cancer cell membrane-coated mimics using different templates have been developed for such homotypic targeting (Table 3). 235,173,219,342,223,174,103,342,175,220,221 Recently, CMC mimics have been redesigned as nanovaccines for cancer. These vaccines combine the cancer cell membrane with an adjuvant that facilitates delivery of tumor-associated antigens to dendritic cells and stimulates tumor antigen-specific T cells. For example, B16-F10 (murine melanoma cell line) membrane-coated PLGA mimics were designed as cancer nanovaccines incorporating the adjuvant monophosphoryl lipid A (MPL, an FDA-approved LPS derivative), which binds specifically to toll-like receptor 4 to boost the immune response. Additionally, CMC mimics of a cancer cell (B16-OVA) membrane and PLGA nanoparticles were formulated as nanovaccines to specifically target antigen-presenting cells (APCs) and enhance uptake to trigger cell maturation. Imiquimod R837, an adjuvant and agonist of toll-like receptor 7 (TLR-7), was preloaded into the PLGA particles, which were then coated with a mannose-modified cell membrane. 343 These mimics effectively inhibited the growth of melanoma tumors when combined with anti-programmed cell death protein 1 (PD-1) checkpoint blockade therapy. Viruses also have natural adjuvant properties that initiate an immune response. For example, oncolytic viruses replicate inside tumor cells, releasing tumor antigens and causing tumor lysis without affecting healthy cells. 344 APCs engulf these antigens and redirect dendritic and T cells toward the infected site. Therefore, coating adenovirus serotype-5 particles with melanoma (B16.F10) and lung cancer (LL/2) cell membranes can help exploit these properties to treat aggressive melanoma and lung tumors. 214 Although the virus was coated with different cancer cell membranes, binding efficacy was maximized using homologous, tumor-matched cell membranes. Macrophages are the most abundant cells in the tumor microenvironment of solid tumors. Interactions of the α4 and β1 integrins present on the macrophage surface with vascular cell adhesion molecule-1 (VCAM-1) present on cancer cells are responsible for tumor progression and metastasis. 345−348
Using these interactions, CMC mimics of macrophage (RAW 264.7) membranes and emtansine-loaded liposomes targeted and inhibited lung metastasis in breast cancer models. 129 In another report, macrophage recruitment via CCL2/CCR2 chemokine interactions was exploited, with quercetin-loaded bismuth selenide nanosystems acting as the templates for mimic assembly. 88 After targeting, the nanosystems inhibited the primary cancer and lung metastasis by photothermal therapy, and the released quercetin suppressed thermoresistant tumors by damaging their heat shock protein 70 (HSP70). Subsequently, several groups have reported the fabrication of CMC mimics with macrophage membranes on various templates that target breast cancer for photothermal therapy. 131,132,139 Inflammatory neutrophils are activated and directed by granulocyte-colony stimulating factor (G-CSF) and C−X−C chemokines (CXCL1, CXCL2, CXCL5) toward early premetastatic niche formation. 349,350 Inspired by this mechanism, neutrophil membrane-coated PLGA nanoparticles loaded with carfilzomib were designed. 55 These mimics neutralized CTCs, prevented early lung metastasis, and inhibited the progression of already-formed lung metastases in breast cancer. Among other immune cells, NK cells can detect and target tumor cells without preactivation; they also regulate the immune response and T cell activation to kill tumor cells. 144 NK cell membranes from murine NK cells and the NK-92 cell line coated on poly(ethylene glycol) methyl ether-block-poly(lactide-co-glycolide) (mPEG-PLGA) nanoparticles loaded with 4,4′,4″,4‴-(porphine-5,10,15,20-tetrayl)tetrakis(benzoic acid) (TCPP) 150 were able to polarize macrophages toward M1, kill primary tumors, and inhibit the growth of distant tumors. Further, fusion of the NK-92 cell membrane with liposomes to create doxorubicin-loaded NKsomes demonstrated an excellent tumor-homing potential against breast cancer cells. 102 MSCs have inherent tumor-targeting properties and exhibit immunomodulatory activities. 351,352 However, biosafety concerns and stability and reproducibility issues limit their use in clinical applications. 353 Taking advantage of the MSC mechanism, adipose-derived MSC membrane-coated Fe 3 O 4 nanoparticles were constructed as a proof of concept to inhibit prostate tumor cells via a hyperthermia mechanism. Furthermore, bone marrow-derived MSC membranes were used to coat doxorubicin-loaded gelatin nanogels 186 and UCPNs 185 to enhance targeting efficacy toward cervical tumor cells. The presence of tumor recognition receptors and adhesion molecules on T-cell surfaces has prompted their use for fabricating CMC mimics. 354 Taking advantage of T-cell receptors, human cytotoxic T-lymphocyte membrane-coated paclitaxel-loaded PLGA nanoparticles were designed in combination with low-dose irradiation (LDI) to target gastric tumor cells. 98 These mimics inhibited gastric tumor growth significantly more when used in combination with LDI than mimics alone. Further, a T-cell membrane modified with azide was assembled with a PLGA template for bioorthogonal targeting of bicyclo[6.1.0]nonyne (BCN)-modified tumor cells. 158 The modified cell membrane-coated mimics showed a 1.5-fold higher accumulation around Raji tumor cells. Dendritic cells (DCs) are the initiators of the primary immune response and are capable of activating naive T cells. 355 DC-based cancer vaccines have drawn attention in immunotherapy for treating prostate cancer, with one variant approved by the US FDA. 356
There is also evidence that immunotherapy could benefit patients with ovarian cancer. 357−359 Recently, in a phase 1 clinical study in ovarian cancer patients, DC vaccines initiated T-cell responses in only half of the patients. Their clinical efficacy was limited by the low immunogenicity of tumor-associated antigens (TAAs), the immunosuppressive tumor-associated microenvironment, restricted migration due to physiological barriers, and downregulation of the major histocompatibility complex (MHC). 360−362 To overcome these limitations and utilize the interesting functions of DCs, cell membranes from mature DCs (primed by ovarian cancer cell lysate) were coated on interleukin-2 (IL-2)-loaded PLGA nanoparticles to fabricate mini DCs. 106 These mini DCs enhanced the activation of the T-cell immune response and effectively inhibited the progression and metastasis of ovarian cancer. There are also reports of CMC mimics designed from the membranes of activated fibroblasts (AFs) to target fibroblast-associated cancer cells. This approach enabled crossing of the protective physical barriers built around tumor cells by cancer-associated fibroblasts for delivering anticancer drugs. 363 Chemically modified nanoparticles are known for their ability to target and kill cancer-associated fibroblasts, prevent biological interactions between tumor and stroma, and enhance chemotherapy. 364 Based on these reports, CMC mimics were designed using activated fibroblast cell membranes and semiconducting polymeric nanoparticles, and their efficacy was compared with that of 4T1 cancer cell membrane-coated mimics in breast cancer models. 104 The AF- and 4T1-coated mimics showed superior targeting efficacy for cancer-associated fibroblasts and 4T1 cells, respectively, due to their homologous targeting capabilities. Outer membrane vesicles (OMVs) have the potential to induce the production of antitumor cytokines and trigger the antitumor immune response. 365−367 Utilizing this mechanism of action, membranes of OMVs and cancer cells were fused to both induce an immune response and increase homotypic targeting ability. A hybrid membrane of E. coli DH5α membrane vesicles and the B16-F10 cell membrane coated on hollow polydopamine nanoparticles significantly inhibited melanoma growth and stimulated DC maturation in lymph nodes. 200 Inflammation and Immune Diseases. Inflammation is a physiological process that protects the body from harmful stimuli using immune cells, blood vessels, and molecular mediators, and it promotes tissue repair. 368 Chronic or uncontrolled inflammation causes diseases like atherosclerosis, ischemic diseases (myocardial infarction, ischemic stroke, hindlimb ischemia), rheumatoid arthritis, and acute liver failure. Modulation of inflammatory responses to balance immune homeostasis helps to counter disease progression. 369 Some of the inflammation-related cells that play a vital role in shaping its microenvironment are neutrophils, NK cells, macrophages, lymphocytes, platelets, and stem cells. These cells are in a resting state during circulation but are activated by cytokines or chemokines during inflammation and migrate to the affected site. 370,371 Therefore, designing CMC mimics using these cells has considerable potential for treating inflammatory diseases. Atherosclerosis is a condition caused by the accumulation of lipids, cholesterol, and fibrous elements in the artery wall that restricts blood flow. 372 The primary challenge of this disease is that it is asymptomatic until the very late stages.
Surgically stenting the artery is often the preferred intervention route; however, this can lead to potential side effects such as restenosis and stent thrombosis, eventually triggering neointimal hyperplasia. 373,374 A noninvasive strategy to image and monitor plaque development would be the preferred mode of treatment. As platelets are responsible for hemostasis in the body and are involved in atherogenesis, 375−378 CMC mimics assembled with their cell membranes may provide a viable alternative to the existing line of treatment. Platelet membrane-coated PLGA nanoparticles loaded with docetaxel were reported for restenosis therapy. 93 The results showed that the CMC mimics localized better than the drug alone at the plaque site and inhibited neointima growth. Further, MRI-capable platelet membrane-coated PLGA nanoparticles localized better at plaque-forming and atherosclerotic areas than PLGA or RBC membrane-coated PLGA nanoparticles, providing crucial information for managing atherosclerosis. 123 Nonthrombogenic, stent-free restenosis therapy was also demonstrated using platelet membrane-coated nanoclusters of poly(amidoamine) and polyvalerolactone (PAMAM-PVL). 122 These dendritic, unimolecular nanoclusters were preloaded with an endothelium-protective epigenetic inhibitor (JQ1) before coating. A comparison of JQ1 with rapamycin (the endothelium-toxic status quo drug) showed up to a 60% reduction of neointimal hyperplasia; moreover, rapamycin impairs endothelial re-coverage, while JQ1 protects the endothelial coverage of the inner artery wall. Therefore, these noninvasive platelet membrane-coated mimics loaded with MRI contrast agents or endothelium-protective inhibitors or drugs have the potential for live imaging to assess, prevent, and manage the development of atherosclerosis at an early stage. Restricted blood flow in blood vessels, resulting in tissue damage or dysfunction, is characteristic of ischemic disease. 379 Mesenchymal stem cells are promising candidates for its treatment, as they can interact effectively with the stromal-derived factor (SDF) overexpressed in ischemic tissue through their CXCR4 receptors. 380,381 Researchers have tried to overexpress the CXCR4 receptor on stem cells to increase this efficiency. For example, human adipose-derived stem cell (hASC) membranes were bioengineered to overexpress CXCR4 receptors and used to coat PLGA nanoparticles loaded with vascular endothelial growth factor (VEGF). These particles showed higher accumulation at the ischemic site than mimics coated with unmodified hASC membranes. 227 Likewise, a neural stem cell membrane overexpressing CXCR4 was coated on PLGA nanoparticles loaded with glyburide for testing in ischemic stroke models. 188 Even in the injured brain, these CMC mimics could effectively cross the blood−brain barrier for drug delivery. PLGA microparticles loaded with secretomes were coated with a cardiac stem cell membrane for myocardial infarction. 187 Their functional efficacy was comparable to cardiac stem cell therapy, and they could be transferred surgically into the myocardium. Acute liver failure causes deterioration of liver function and requires liver transplantation to cure the patient. 382 Stem cell therapy can be a promising treatment for liver failure, as MSCs secrete anti-inflammatory factors, reducing inflammation and promoting healing. 383
Cultured MSCs, at ∼20 μm in diameter, are larger than the width of the lung microcapillaries; 384 therefore, intravenously infused MSCs are short-lived, are easily filtered out by the lungs, and do not reach the liver. 385 Thus, 200 nm RBC membrane-coated PLGA nanoparticles loaded with MSC regenerative factors were used to resolve this size issue. 86 The small size of the mimics helped them pass through the lungs and reach the liver, and, additionally, the RBC membrane coating prolonged their circulation time. Rheumatoid arthritis (RA) is another autoimmune disorder that leads to joint damage and disability. Current treatment focuses on targeting inflammatory responses, such as inhibiting interleukin (IL)-1 and tumor necrosis factor-alpha (TNF-α). 386,387 Various chemoattractants are known to promote neutrophil migration into the joints during RA, 388 and microvesicles produced by neutrophils can readily enter cartilage and protect joints. 389 Therefore, neutrophil membrane-coated PLGA nanoparticles were designed. 66 These mimics neutralized proinflammatory cytokines (IL-1β and TNF-α), suppressed synovial inflammation, targeted the cartilage matrix, and protected chondrocytes against damage. Apart from neutrophils, other cells such as T cells, dendritic cells, macrophages, and monocytes are present in the primary stage of RA, working together in its progression. Therefore, CMC mimics using the membranes of these cells have the potential to show stealth properties for the treatment of RA. 390,391 Infectious Diseases. Pathogens such as viruses and bacteria cause infectious diseases by interacting with and penetrating the host cell membrane. 392,393 A lack of effective drugs, limited specific treatment options, and the rise of drug-resistant strains caused by the overuse of antibiotics are the main challenges to their treatment. CMC mimics can target viruses, bacteria, and multidrug-resistant bacteria and can absorb toxins. Viral Infection. Infections caused by viruses are the most difficult to treat, as viruses do not follow a regular cell division process for their growth. They replicate by binding to receptors on the host cell membrane and introducing their genetic material into the host. CMC mimics can be potential alternatives for neutralizing virus-caused infections. Many cells are rich in virus-binding receptors, and fabricating CMC mimics from the membranes of these cells diverts viruses away from host cells. This mechanism has been employed for treating infections caused by the influenza virus, Zika virus, and human immunodeficiency virus (HIV), and recently by SARS-CoV-2 (Figure 11). The surface of the influenza virus is rich in hemagglutinin, a glycoprotein with a high affinity for the sialic acid residues present on cells. 394 The RBC membrane is rich in sialic acid and glycoproteins, and CMC mimics designed using this membrane can favorably interact with the influenza virus, as shown with RBC membrane-coated PLGA nanoparticles. 74 These mimics bind efficiently to the influenza virus and form clusters that can be readily isolated in vitro by magnetic extraction. COVID-19 is a viral infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The spike protein (S) of SARS-CoV-2 consists of S1 and S2 subunits: the S1 subunit engages human angiotensin-converting enzyme II (ACE2) for binding to the host cells, and the S2 subunit facilitates the entry and fusion of the virus within the host cells.
Preclinical and clinical studies report that monoclonal antibodies targeting interleukin 6 (IL-6) and granulocyte-macrophage colony-stimulating factor (GM-CSF) can prevent the infection caused by SARS-CoV-2. 395,396 Based on these reports, a human embryonic kidney 293T cell membrane genetically engineered to express the ACE2 receptor and a monocyte THP-1 cell membrane with abundant cytokine-binding receptors were combined to form vesicles. 397 These fused vesicles effectively bind and neutralize IL-6 and GM-CSF, suppressing the immune disorders and lung injury in an acute pneumonia mouse model. SARS-CoV-2 also binds to CD147 expressed on host cells, human alveolar epithelial type II cells, and human macrophages. 398,399 PLGA nanoparticles coated with their cell membranes effectively target SARS-CoV-2. 400 Overall, epithelial- and macrophage-based CMC mimics are preferable for the inhibition and neutralization of SARS-CoV-2 (Figure 11). Zika virus is a mosquito-borne flavivirus transmitted by Aedes mosquitos (Aedes albopictus, Aedes aegypti). 401,402 It can easily pass physiological barriers like the blood−brain barrier and blood−placental barrier, causing fetal microcephaly and other neurological complications. 403,404 In general, nanoparticles cannot enter such immune-privileged sites. 405 Coating gelatin nanoparticles with the Aedes albopictus (C6/36) cell membrane circumvents this limitation. 228 These CMC mimics divert the Zika virus away from the fetal brain, suppress fetal microcephaly in pregnant mice, negate virus-induced degenerative changes, prevent replication, and improve the overall survival rate. HIV infects leukocytes via interactions between the glycoproteins on its surface (gp120) and the cluster of differentiation 4 (CD4) receptor and the C−C chemokine receptor type 5 (CCR5) or CXCR4 coreceptors on CD4 + T cells. 406 CD4 + T-cell membrane-coated PLGA nanoparticles effectively treated two distinct HIV strains, X4 and R5: 159 these mimics neutralized the strains and prevented HIV-1 from binding to and entering healthy CD4 + T cells. Bacterial Infection. Bacteria have both positive and negative impacts on the human body. While probiotic bacteria aid the digestive process, other bacterial strains (Gram-positive: S. aureus; Gram-negative: E. coli, K. pneumoniae) cause mild to severe infections and host cell death. Additionally, excessive use of antibiotics results in drug-resistant bacterial strains (e.g., methicillin-resistant S. aureus (MRSA), carbapenem-resistant K. pneumoniae (CRKP)) in the human body, posing additional therapeutic challenges. This section highlights antimicrobial activity and toxin neutralization using bacterial- and cell membrane-coated mimics. The α-toxins are a class of pore-forming toxins secreted by bacteria (E. coli, S. aureus, MRSA, etc.) that create pores in the host cell membrane, causing cell lysis. 407 RBCs and platelets express several surface markers (e.g., glycophorin A in RBCs, 112 toll-like receptors in platelets 119 ) that readily interact with such pathogens. Using these receptors, RBC membrane-coated PLGA nanoparticles sequestered α-toxins 115 and platelet membrane-coated PLGA nanoparticles 93 delivered vancomycin to MRSA252. Platelet membrane-coated particles loaded with vancomycin (PNP-Vanc) showed superior efficacy, specificity, and retention due to the presence of the serine-rich adhesin for platelets (SraP) on MRSA252. 395
Similarly, RBC membrane-coated vancomycin-loaded redox-responsive hydrogels absorbed α-toxins and killed MRSA. 114 The intracellular reducing environment of the bacteria triggered vancomycin release from the hydrogels. Other examples include acoustic nanorobots made of gold nanorods coated with a hybrid (platelet and RBC) membrane; these fuel-free nanorobots accelerated toxin neutralization and removal of bacteria (MRSA USA300). 196 RBC membrane coating on carbon nanotube-based field-effect transistors enabled rapid detection of several pore-forming toxins; these biomimetic nanosensors could quantitatively detect live pathogens without traditional colony-counting methods. 75 Homotypic targeting utilizes the membrane of the targeted pathogen itself for specificity. For example, bovine serum albumin nanoparticles coated with CRKP bacterial membranes enhanced the immune response by triggering cytokine secretion from macrophages and activating dendritic cells. 193 Similarly, S. aureus membrane-coated PLGA nanoparticles showed superior targeting efficacy toward S. aureus-infected macrophages. 192 These mimics actively targeted all organs except the liver and showed improved efficacy in the kidneys and lungs, which are prone to a higher risk of S. aureus infection. Similarly, E. coli membrane-coated gold nanoparticles cleared E. coli bacterial infection. 94 They also activated dendritic cells in lymph nodes and increased the production of IFN-γ and IL-17, but not IL-4, generating type 1 T-helper cell (Th1)- and type 17 T-helper cell (Th17)-based responses against bacterial infections. CURRENT CHALLENGES Ingraining complex biological functionalities in delivery systems is a significant outcome of coating with cell membranes and differentiates them from synthetic mimics. Throughout this review, we have emphasized why these CMC mimics, which utilize the surface functional properties of cells, are better suited to therapeutic applications than their synthetic counterparts. Thus far, fabrication and in vitro and in vivo evaluation of CMC mimics have been limited to the lab setting; however, the therapeutic doses of material required and the conditions for clinical studies are higher and more stringent. In this context, it is vital to have standardized protocols and well-established characterization tools for the scale-up and GMP production of these mimics to maintain quality and reduce batch-to-batch variability. Herein, we discuss some of the challenges associated with their clinical translation. (1) Large-scale expansion: Cell membrane isolation requires at least 100 million cells that maintain their phenotype, purity, and quality while passaging. A standardized and well-established cell culture protocol specific to each cell type is essential for large-scale production. In this regard, one can benefit from existing, well-established biomanufacturing platforms using 3D bioreactors (like stirred-tank bioreactors, WAVE bioreactors, etc.) for the ultralarge scale-up of stem cells, T cells, and dendritic cells. 411−414 (2) Cell membrane yield: Lab-scale procedures for cell membrane isolation are multistep and specific to each cell type and may result in loss of sample, loss of functional receptors, and nuclear/mitochondrial/cytosol contamination. Therefore, an established protocol with minimal manual steps is required for cell membrane isolation with high yield and purity for various cells, especially nucleus-containing cells.
(3) Assembly of CMC mimics: In CMC assembly, it is vital to control the number of cell membrane layers coated onto each template while achieving a homogeneous coating. The physiological effects of differences in membrane layers on CMC mimics remain unexplored. Using automated technology to improve coating efficiency and avoid uneven coating can be a viable alternative; for example, assembly of RBC-coated CMC mimics using a microfluidic electroporation technique resulted in uniformly sized mimics. 57 (4) Long-term storage: Optimizing long-term storage conditions and the membrane stability of CMC mimics is critical to improving their shelf life. Post-lyophilization, isolated cell membranes can be stored in cold conditions and resuspended in buffers before use. However, shelf-life studies to determine the stability and functional efficacy of isolated cell membranes remain unexplored. (5) Quantitative evaluation of CMC mimics: An in-depth quantitative characterization of CMC mimics is essential to avoid batch-to-batch variability in their biological efficacy. This includes the number of templates evenly coated with the cell membrane versus left uncoated and the amount of transmembrane protein translocated onto the mimics in the correct orientation. (6) Quality control: Standard quality control criteria must be defined to ensure that the cell membranes are free of contamination such as viruses, bacteria, or pyrogens. Additionally, removal of denatured proteins from the CMC mimics avoids potential immune responses to endogenous antigens. Every assembly step (isolation of the cell membrane, synthesis of the template, fabrication of the CMC mimics) should also be carried out under sterile conditions to avoid chemical and biological contamination and to maintain GMP requirements. (7) Unwanted proteins on the CMC mimics: Numerous proteins are present on the cell membrane. Some are responsible for effective targeting and evading immune responses, while others interact with the host environment, affecting biodistribution, immune response, and the toxicity profile. Optimizing protocols to selectively retain proteins of interest and remove unwanted proteins from the cell membrane could enhance CMC performance and remains to be explored. (8) Surface modification of the cell membrane: Numerous membrane modification strategies are known, but not all of them offer proper orientation, linkage strength, and conservation of membrane protein functionality. For example, noncovalent modifications protect membrane protein functionality, but the interactions are weaker in linkage strength. 415 Conversely, covalent bonding with a template is robust, but there is a risk of altering the natural membrane functionality and compromising the protein profiles. Changes in the ζ potential are often the only primary measure of membrane modification, 333 leaving a gap in qualitative and quantitative evaluation techniques. It is also difficult to observe small-molecule conjugation on the cell membrane and to evaluate the overall impairment. (9) Autologous cells: For designing CMC mimics, most studies have utilized immortal cell lines. However, certain cell types (like leukocytes) can be heterogeneous and induce hemolysis during a blood transfusion. In such cases, autologous cells are the most suitable option, but this would require screening of donors to prevent the use of allogeneic cells as membrane sources.
VOCABULARY Biointerface, the point of interaction between nano/microparticles and the surrounding cells; personalized medicine, modifying disease treatment to suit the patient's genetic profile; autologous therapy, utilizing patient-derived cells for disease treatment; tumor microenvironment, the cellular environment surrounding tumors, consisting of immune cells, blood vessels, and extracellular matrix, that facilitates their growth; receptor, a protein present on the cell surface that binds selectively to a ligand to transmit signals.
2021-10-28T06:23:42.198Z
2021-10-26T00:00:00.000
{ "year": 2021, "sha1": "40a1414f4d2dce3e7c3fb02731d341683bfa042b", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsnano.1c03800", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "04637a4ce4ba85b646fa245ebacabb3ce6e25b01", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
256164857
pes2o/s2orc
v3-fos-license
Deep Dermatophytosis Presented as Multiple Exophytic Masses caused by Trichophyton rubrum in an Immunocompromised Patient with Rheumatoid Arthritis; A Case Report Dermatophytes invade the stratum corneum and infect the skin, nails, and hair, mostly resulting in superficial infections. Deep dermatophytosis involving the dermis and subcutaneous layer has rarely been reported, mainly in immunocompromised individuals. Herein, we report a case of deep dermatophytosis caused by Trichophyton rubrum. A 71-year-old woman presented with multiple erythematous exophytic and subcutaneous nodules in both lower legs. The patient was taking immunosuppressive agents for rheumatoid arthritis and antifungal agents for tinea pedis and onychomycosis, which were improperly discontinued. Histopathological findings showed diffuse granulomatous infiltration with multinucleated cells, lymphocytes, histiocytes, and neutrophils in the dermis. Septate and branched hyphae were observed in the dermis using periodic acid-Schiff diastase and Gomori methenamine silver staining. T. rubrum was identified in fungal culture from the tissue sample and confirmed through phylogenetic analysis of the internal transcribed spacer and large subunit regions of the ribosomal RNA gene. Intravenous amphotericin B was administered for septic shock before confirmation of the causative organism, which rapidly improved the condition. CASE REPORT A 71-year-old woman admitted for pneumonia was referred for an erythematous exophytic mass with multiple subcutaneous nodules on both lower legs (Fig. 1A). The initial skin lesion had been detected on the left leg a month before referral and subsequently spread to the adjacent tissue and the right leg. The patient had a history of rheumatoid arthritis and had been treated with immunosuppressants, such as tacrolimus, a low-dose systemic steroid, leflunomide, and methotrexate, for more than 10 years. Furthermore, she had taken an antifungal medication for tinea pedis and tinea unguium but had decided to discontinue the treatment. A deep fungal infection was suspected based on her skin lesions and history, and a skin biopsy was performed on her left lower leg. Histopathological examination revealed pseudoepitheliomatous epidermal hyperplasia with micro-abscess formation in the epidermis and diffuse granulomatous inflammation consisting of multinucleated giant cells, lymphocytes, neutrophils, and histiocytes in the dermis. Histochemical staining with periodic acid-Schiff and Gomori methenamine silver revealed septate and branched fungal hyphae in the dermis (Fig. 2). Fungal culture performed using the biopsy tissue showed red-brown pigmentation, suggesting infection by a Trichophyton species (Fig. 3). A molecular approach based on sequence analysis of ribosomal DNA (rDNA) was used, and T. rubrum was identified by phylogenetic analysis of the internal transcribed spacer and large subunit (LSU) region sequences of the ribosomal RNA gene (Fig. 4). Before identification of the causative organism, the patient's condition deteriorated because of septic shock. Amphotericin B was administered empirically for 6 days to prevent hematogenous dissemination, and the skin lesions resolved simultaneously (Fig. 1B). This case corresponded to deeper dermal dermatophytosis.
DISCUSSION
Dermatophytes are the most common cause of superficial fungal infections in humans, and Trichophyton (T.) rubrum is the most frequently isolated pathogen. In humans, dermatophytes are mainly confined to the stratum corneum, nails, and hair and do not actively penetrate deeper than the basal layer 2 . Chronic infection, with its higher rates of recurrence and recalcitrance, carries an increased risk of transformation to the invasive form.

Dermatophytes rarely cause deep infections invading the dermis, and these occur in patients with acquired or innate immunosuppression. Most patients with acquired immunosuppression are organ transplant recipients. However, deep infection can also occur in patients receiving immunosuppressive treatments for other diseases, such as interstitial lung disease 3 and rheumatoid arthritis, as observed in our case. Recently, deep dermatophytosis has also been reported in healthy individuals with a genetic predisposition to fungal organisms, such as those with autosomal recessive caspase recruitment domain-containing protein 9 (CARD9) deficiency 5 and the C282Y mutation. According to a recent systematic review, the most common predisposing factor for invasive dermatophytosis is superficial dermatophytosis, followed by solid organ transplantation, topical immunosuppressant use, gene mutation, diabetes mellitus, and trauma.

Deep dermatophytosis is classified into three types: (i) Majocchi granuloma, (ii) deeper dermal dermatophytosis, and (iii) disseminated dermatophytosis 6 . In contrast to Majocchi granuloma, deeper dermal dermatophytosis is characterized by a rapidly growing lesion, commonly confined to the extremities without the involvement of other internal organs. Although the mechanism of fungal invasion into the dermis is yet to be clarified, the possibilities of follicular invasion 7,8 and of direct invasion from the epidermis into the dermis have been raised 3,9 . T. rubrum is the most common causative agent of deep dermatophytosis; however, other species, such as T. violaceum, T. mentagrophytes, T. verrucosum, and T. ferrugineum, have also been found 1 .

Herein, we report an atypical presentation of deep dermatophytosis as multiple exophytic masses in the bilateral lower limbs.

Fig. 1. (A) A 71-year-old woman presented with multiple erythematous exophytic and subcutaneous nodules on the lower legs. (B) All lesions showed improvement after amphotericin B administration.
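The species-level identification in this report rests on sequencing the ITS and LSU regions and comparing them against reference databases. As a rough illustration of how such a comparison is typically scripted — a generic sketch, not the authors' pipeline; the input file name and the E-value cutoff are invented for the example — one could query GenBank with Biopython:

```python
# Illustrative sketch (not the authors' pipeline): BLAST an ITS sequence
# against GenBank and print the top hits, as is typically done to confirm
# a dermatophyte species such as Trichophyton rubrum.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("isolate_ITS.fasta", "fasta")     # hypothetical file name
handle = NCBIWWW.qblast("blastn", "nt", record.seq)   # remote BLAST (slow)

blast_record = NCBIXML.read(handle)
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    if hsp.expect < 1e-50:                            # illustrative E-value cutoff
        print(f"{alignment.title[:60]}  identity={identity:.1f}%")
```

In practice the top hits would then be aligned with reference sequences and a phylogenetic tree built to confirm the species assignment, as the authors did.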
Towards an explicit construction of local observables in integrable quantum field theories

We present a new viewpoint on the construction of pointlike local fields in integrable models of quantum field theory. As usual, we define these local observables by their form factors; but rather than exhibiting their $n$-point functions and verifying the Wightman axioms, we aim to establish them as closed operators affiliated with a net of local von Neumann algebras, which is defined indirectly via wedge-local quantities. We also investigate whether these fields have the Reeh-Schlieder property, and in which sense they generate the net of algebras. Our investigation focuses on scalar models without bound states. We establish sufficient criteria for the existence of averaged fields as closable operators, and complete the construction in the specific case of the massive Ising model.

Introduction
Quantum field theory is based on the concept of local observables, i.e., operators associated with points or regions of space-time which commute at spacelike distances. Yet these local objects are notoriously difficult to construct in the presence of interaction. Even in simplified situations that are amenable to a mathematically rigorous treatment, explicit control over the local observables is hard to obtain: one can either make a direct ansatz for quantum fields, but face difficulties in controlling their singular nature, particularly in the high-energy regime; or one can define local quantities via an abstract limiting process, allowing one to control their functional analytic properties, but losing track of their explicit form. These difficulties are exemplified in the models we consider in this article, namely, quantum integrable models on 1+1 dimensional Minkowski space; open questions remain about the structure of their local observables, despite substantial research focusing on this issue. There are two complementary approaches to obtaining local observables in integrable models. The first of them, known as the form factor program [1,2], aims at constructing point-local quantum fields Φ(x) directly. Their n-point functions are expanded in a series by inserting a basis of intermediate asymptotic states; for example, for n = 2,

\[
\langle \Omega, \Phi(x)\Phi(y)\,\Omega\rangle \;=\; \sum_{m=0}^{\infty} \int \frac{d\theta_1 \cdots d\theta_m}{(2\pi)^m}\, \Big|\langle \Omega|\Phi(0)|\theta_1,\ldots,\theta_m\rangle_{\mathrm{in}}\Big|^2\, e^{\,i(y-x)\cdot \sum_{j=1}^{m} p(\theta_j)},
\tag{1.1}
\]

where the θ_j are rapidities. The expansion terms ⟨Ω|Φ(0)|θ_1, …, θ_m⟩_in are called form factors; locality and covariance requirements for Φ(x) then lead to restrictions on these, the form factor equations. For specific forms of the interaction, such as the massive Ising model [3] or the sinh-Gordon model [4], one can find explicit solutions of the form factor equations. The remaining problem is now to control the convergence of the infinite series, in order to verify, e.g., the Wightman axioms [5]. However, only partial results in certain asymptotic regimes exist so far, even in the simplest interacting case, the massive Ising model [6]. The second approach [7,8], which we call the operator algebraic one, proceeds in an indirect way: One first constructs observables with weaker localization properties, namely, quantum fields localized in spacelike wedges. While not the desired final result, these wedge-local fields can explicitly be described and mathematically controlled.
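For orientation, we record schematically two of the standard constraints that locality and covariance impose on the form factors — the exchange (Watson) relation and cyclicity — in one common convention. This is only a sketch for the reader's convenience; conventions vary between references, and the refined version actually used in this paper is the set of axioms (FD1)–(FD6) in Theorem 2.2 below:

\[
F_k(\theta_1,\ldots,\theta_{j+1},\theta_j,\ldots,\theta_k) \;=\; S(\theta_j - \theta_{j+1})\, F_k(\theta_1,\ldots,\theta_j,\theta_{j+1},\ldots,\theta_k),
\]
\[
F_k(\theta_1 + 2\pi i, \theta_2,\ldots,\theta_k) \;=\; F_k(\theta_2,\ldots,\theta_k,\theta_1).
\]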
Passing to algebras of bounded operators A(W) associated with wedges W, one then obtains observable algebras in bounded regions by taking intersections: where a bounded region is the intersection of two wedges, O = W₁ ∩ W₂, one sets

A(O) := A(W₁) ∩ A(W₂).  (1.2)

This net of algebras quite directly fulfills the Haag-Kastler axioms [9]. The mathematically hard task, however, is to show nontriviality of the intersections. This can be done by abstract arguments in a class of models [10,11], including the sinh-Gordon and Ising models, at least for sufficiently large regions O. But explicit control of the form of these observables A ∈ A(O) is lost; essentially, they are obtained from the axiom of choice. Thus, known results allow one to either control the explicit form of observables or their functional analytic behaviour.

In the present paper, we propose a method to close this gap using a hybrid approach: We take our local observables to be defined by explicit expressions for pointlike fields, following ideas from the form factor program. Then, we aim to show that they are local operators in a mathematically strict sense: namely, that their closures are affiliated with the algebras A(O) as defined in (1.2). Relying on affiliation with von Neumann algebras rather than on n-point functions of fields gives us the flexibility needed to tackle longstanding convergence issues.

We carry out this programme in the context of scalar integrable quantum field theories without bound states. In this context, we present sufficient criteria that make this approach work, and that do not refer to details of the interaction, i.e., to the two-particle scattering function. We verify the criteria in the massive Ising model. To that end, we make use of techniques from [12] which exhibit the connection between the two approaches to integrable systems. The local operators A ∈ A(O) constructed abstractly in [10] can be expanded into a series (1.3) in "interacting" annihilators and creators z, z† (cf. [13]), with meromorphic coefficient functions F_{m+n}, paralleling the form factor program; a schematic version of this expansion is recorded below. In fact, the F_{m+n} fulfill very similar relations to the known form factor equations, plus certain growth bounds encoding the localization of A. (These will be recalled in Sec. 2.3.) The expansion (1.3) is not restricted to bounded operators, but should also hold for other local quantities, such as locally averaged quantum fields, or more general quadratic forms A. However, most functional analytic properties (such as boundedness or closability) of the operator A are not directly visible on the level of the expansion coefficients F_k, and the series exists only in the sense of matrix elements between finite particle number states, where the sum is actually finite. Locality for these objects is only defined in a weak sense, namely, as relative locality to the wedge-local field mentioned above (ω-locality, see Def. 2.1 below).

Hence our main line of argument is as follows. As our input, we take meromorphic functions F_k that fulfill a refined version of the form factor axioms (see Thm. 2.2 below); in concrete models, candidates are known in the literature. This gives us our observables (averaged quantum fields) as quadratic forms by (1.3). Additionally, we assume a certain summability condition for the series (1.3), resulting in our local fields as closed operators. Based on the locality conditions for the functions F_k, the operators are then shown to be affiliated with the local algebras A(O).
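Schematically, and with normalization conventions as in [10,12] (the combinatorial prefactors shown here are our reading of those references and should be taken from them), the expansion (1.3) has the form

\[
A \;=\; \sum_{m,n=0}^{\infty} \int \frac{d^m\theta\; d^n\eta}{m!\,n!}\; F_{m+n}\big(\theta_1+i0,\ldots,\theta_m+i0,\; \eta_1+i\pi-i0,\ldots,\eta_n+i\pi-i0\big)\; z^{\dagger}(\theta_1)\cdots z^{\dagger}(\theta_m)\, z(\eta_1)\cdots z(\eta_n),
\tag{1.3}
\]

matching the boundary values f_{mn}(θ, η) = F_{m+n}(θ + i0, η + iπ − i0) that appear as expansion coefficients in Secs. 2 and 5 below.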
We note that this construction does not depend on a priori information on the size of the algebras A(O).

We verify our summability condition in an example, the massive Ising model. In our context, the massive Ising model is the 1+1-dimensional massive integrable quantum field theory with constant two-particle S-matrix S = −1. While the massless Ising model is generated by (the even powers of) a free Fermi field, the massive Ising model differs from this in important aspects: its scattering states are bosonic, and its PCT operator is different from that of the related Fermi field on the same Hilbert space [14, Sec. II]. It is, in this sense, a theory of interacting Bosons, even if with a very simple type of interaction. While quadratic expressions in the Fermi field generate a subnet of A, this is a proper subnet, and A(O) also contains operators with odd particle number transfer. Crucially, for these the series in (1.3) cannot terminate, thus providing us with a test case for our ideas. The Ising model has been constructed in the operator algebraic context [15] and as a Euclidean quantum field theory [16], but to the authors' knowledge, direct convergence results for the series (1.3) in Minkowski space are new.

We stress that, while the observables we construct are formally averaged versions of the local field of the form factor program, given as A = Φ(g) = ∫ d²x g(x) Φ(x) with Φ(x) as in (1.1), we do not claim that they fulfill the Wightman axioms. For one, we do not use Schwartz functions g, but rather functions of Jaffe class [17]; but this is a more minor point. More fundamentally, we do not want to, or need to, control the product of two such operators; we do not claim that their n-point functions exist, or that the fields have a common invariant domain. For our interpretation as local observables, it is sufficient to establish them as closed operators affiliated with the local von Neumann algebras A(O).

In a slight extension of scope, one can ask whether this method leads to all local observables of the model. Namely, for each bounded region O of space-time, we obtain a linear space Q(O) of quadratic forms (which extend to closed operators, affiliated with A(O)); this set would also include the "composite fields" or "descendant operators" of the model, although we do not explicitly deal with normal products or product expansions. Is this Q(O) maximally large, in a well-defined sense? One criterion would be whether the space has the Reeh-Schlieder property, i.e., whether Q(O)Ω is dense in the Hilbert space of the model. A somewhat stricter notion is whether the elements of Q(O), or their spectral data, generate the algebra A(O). Both questions can be traced back to sufficient conditions on the functions F_k, where for the last-mentioned point, we understand "generate" in the sense of the dual of a net of algebras. We also investigate which consequences this completeness has for the net A(O) itself.

The paper is organized as follows. In Sec. 2, we recall our mathematical setting, including the definition of wedge-local algebras and the characterization of local operators in terms of a series expansion. Then, in Sec. 3, we develop sufficient criteria for closability of operators, affiliation with local algebras, and completeness in the sense of the Reeh-Schlieder property or duality. We explicitly treat the situation in the Ising model in Secs. 4 and 5. In the Ising model, for local observables with even particle number transfer, the series (1.3) can be finite, whereas for odd particle number transfer, it is necessarily infinite. We discuss the easier, even case in Sec. 4, hoping it will be instructive for the reader. The odd case is treated in Sec. 5; it involves quite delicate estimates of the singular integral operators with kernels F_{m+n}(θ + i0, η + iπ − i0), which are boundary values of meromorphic functions with first-order poles located on the boundary. We summarize our results, and give an outlook on future work, in Sec. 6. Two appendices provide technical results needed in Sec. 5: Appendix A deals with symmetric Laurent polynomials that are required for treating composite fields, and Appendix B investigates the singularity structure of a certain multivariable meromorphic function needed in the construction. This paper is partly based on one of the authors' Ph.D. thesis [18].

Background
The context of this paper is integrable models of quantum field theory on 1+1 dimensional Minkowski space, with a single species of massive scalar particle. We also exclude bound states, i.e., the two-particle scattering function will not have poles in the physical strip. (For possible generalizations, see Sec. 6.3.) We formulate them in the mathematical framework of [10,19,12], the relevant aspects of which we now recall. On the corresponding Hilbert space H (an S-symmetrized Fock space), annihilation and creation operators z(θ) and z†(θ) act, defined as usual in a distributional sense on finite particle number vectors, but fulfilling an S-deformed version of the CCR [10, Sec. 3]. The "smearing functions" of these operator-valued distributions will often be Fourier transforms of functions f ∈ S(R²), taken with the conventions of [10,19,12].

Quadratic forms
We wish to describe observables as operators or quadratic forms on H with a certain high-energy behaviour. To that end, let ω : [0, ∞) → [0, ∞) be an analytic indicatrix [12, Def. 2.1], that is, a function growing slightly less than linearly with certain additional conditions; here we just note that

ω(p) = β log(1 + p) for some β > 0,  (2.6)
ω(p) = p^α for some 0 < α < 1  (2.7)

are valid examples. Associated with ω and an open region O ⊂ R², we define the test function space D^ω(O) ⊂ D(O) of test functions supported in O whose Fourier transforms decay fast enough to compensate a factor e^ω; in the case (2.6), this space coincides with D(O) (since e^ω grows only polynomially), whereas in the case (2.7), it is a dense subspace. We also consider the dense subspace H^{ω,f} ⊂ H of vectors ψ such that ‖e^{ω(H/µ)}ψ‖ < ∞ and which have finite particle number (P^f_n ψ = ψ for some n). Further, let Q^ω be the space of quadratic forms A on H^{ω,f} × H^{ω,f} such that the ω-weighted norms ‖A‖^ω_n of [19] are finite for any n ∈ N₀. Examples of such forms are smeared normal-ordered monomials in the annihilators and creators [19, Prop. 2.1], written in formal integral notation as

∫ d^mθ d^nη f(θ, η) z†(θ₁) ⋯ z†(θ_m) z(η₁) ⋯ z(η_n),

where f ∈ D(R^{m+n})′ is such that the norm ‖f‖^ω_{m×n} is finite. In fact, all A ∈ Q^ω can be decomposed into a sum (2.13) of monomials of this form. The sum is finite in matrix elements, so that convergence issues do not arise at this point. Vice versa, given distributions f_{m,n} such that ‖f_{m,n}‖^ω_{m×n} < ∞, we can define A ∈ Q^ω by the sum above. For an explicit expression of the (unique) relation between A and f_{m,n}[A], see [19, Sec. 3.1]. The symmetry representation U acts on Q^ω by adjoint action, and correspondingly on the expansion coefficients f_{m,n}[A]; we refer to [19, Sec. 3.3] for details.

Locality
We now describe locality of our observables in open spacetime regions R, the most relevant being: the right wedge W with tip at the origin; its causal complement, the left wedge W′; shifted wedges W_x, W′_x with tip at x; double cones O_{x,y} = W_x ∩ W′_y; and the standard double cone O_r of radius r around the origin. We start by introducing the wedge-local field [10, Sec. 3], defined for f ∈ S(R²) by φ(f) = z†(f⁺) + z(f⁻).
(2.14) This field, or its formal kernel φ(x), can with respect to the symmetry representation U be consistently interpreted as localized in the wedge W ′ x . We then define a von Neumann algebra of bounded operators associated with the right wedge as The subscript R indicates real-valuedness. In [10] this was introduced with ω = 0, but the algebra is actually independent of ω by density arguments for the test functions f .) From here, algebras associated with other wedges W x and W ′ y can be defined by symmetry transformations, and for double cones In this way, one obtains a Haag-Kastler net A of local algebras for every region of Minkowski space, where the vacuum Ω is cyclic and separating for A(W), Haag duality for wedges holds, i.e., A(W ′ x ) = A(W x ) ′ , and the Tomita-Takesaki modular group of A(W) coincides with the boosts U (0, Λ). It is a priori not clear whether the algebras A(O x,y ) contain any operator other than multiples of the identity, but under certain conditions ("modular nuclearity"), 1 the vacuum is in fact cyclic for these as well [10,Sec. 2]. This gives a well-defined sense of locality for bounded operators. For (unbounded) quadratic forms, the situation is different, as we cannot formulate commutation relations between these directly. Instead, we can define a weaker notion by means of relative locality to the wedge-local field φ: We will clarify in Sec. 3 how ω-locality is related to the local net A, as well as to locality conditions for closed unbounded operators. For our purposes, it is crucial to know how locality of A ∈ Q ω is reflected in the properties of its expansion coefficients f m,n [A]. In fact, for A localized in a double cone, one finds that f m,n [A] are distributional boundary values of meromorphic functions F m+n at specific points. To formulate this, consider the regions in R k , I k + = {λ : 0 < λ 1 < . . . < λ k < π}, I k − = {λ : −π < λ 1 < . . . < λ k < 0}. (2.17) When we write boundary distributions of the type F k (θ + i(0, . . . , 0), η + i(π − 0, . . . , π − 0)), or F k (θ + i0, η + iπ − i0) for short, this is understood as an approach from within the region I k + , and similar for I k − . With this, we can characterize ω-locality in the double cone O r as follows, in a reformulation of [12,Theorem 5.4]. Theorem 2.2. Let ω be an analytic indicatrix and let r > 0. Let F = (F k ) ∞ k=0 be a collection of functions C k →C which fulfills the following conditions 3 for any fixed k, and with ζ ∈ C k arbitrary: (FD1) Analyticity: F k is meromorphic on C k , and analytic where Im ζ 1 < . . . < Im ζ k < Im ζ 1 + π. Locality of operators and quadratic forms In this paper, we investigate local (unbounded) operators in integrable models, going beyond the quadratic forms considered earlier [19,12]. More specifically, we aim at closed operators affiliated with the local von Neumann algebras A(O). This class, while still technically manageable, seems large enough to contain a variety of accessible examples, including smeared pointlike fields where they exist [21,22]. The present section gives general criteria that allow us to investigate the problem, independent of the scattering function S and of specific examples of local observables. The criteria will later be applied to examples in the case S = −1, in Secs. 4 and 5. We first clarify in Sec. 3.1 how quadratic forms in Q ω relate to closed (unbounded) operators, and establish sufficient criteria for convergence of the infinite series (2.13) in this context. Then, in Sec. 
3.2, we show how the closed operators are related to bounded local operators, in the sense of affiliation with the local algebras. Lastly, in Sec. 3.3, we ask when a set of quadratic forms is large enough to describe all local observables of the quantum field theory, in the sense of the Reeh-Schlieder property and of generating the net of local von Neumann algebras. Throughout the section, an analytic indicatrix ω is kept fixed. Closable operators and summability We will be concerned with the extension of quadratic forms A ∈ Q ω to closed operators. Since A is a priori only a quadratic form, we clarify in which case this extension, or closure, is to be understood. Correspondingly, the operator A − , which is uniquely determined, is called the ω-closure of A. (It may depend on ω, but this will not matter for our purposes.) A simple criterion for ω-closability is as follows. and only if, the expression ψ, Aχ has a continuous linear extension to χ ∈ H for any fixed ψ ∈ H ω,f , and a continuous antilinear extension to ψ ∈ H for any fixed χ ∈ H ω,f . Proof. Let A ∈ Q ω . The two continuity conditions imply that A can be extended to a linear operator In particular, A 0 and A * 0 are both densely defined, which implies that The converse is evident. In particular, this shows that the ω-closable elements form a subspace of Q ω . While the criterion in Lemma 3.2 is easy to state, it is rather hard to apply in examples where the expansion coefficients f m,n [A] are used to define A. We will therefore deduce a sufficient criterion for ω-closability which is based directly on estimates for the f m,n [A]. The idea is to establish absolute convergence of the series (2.13) in a certain sense. (In the sense of quadratic forms, the series is always well-defined as it is finite in matrix elements; for obtaining closed operators, however, convergence issues become relevant.) Then, A is ω-closable. Proof. By [19, Prop. 2.1], the annihilator-creator monomials fulfill the estimate, k ∈ N 0 , Using this estimate in the expansion (2.13), we obtain for ψ, χ ∈ H ω,f , which converges by assumption. Thus the matrix element is H-continuous in ψ at fixed χ. A similar argument, with the roles of m and n exchanged, shows continuity in χ at fixed ψ. The result then follows from Lemma 3.2. Under a stricter summability condition, we can deduce an additional property that will become relevant later, in Proposition 3.5(b). Then, A is ω-closable; and for any g Proof. In view of Proposition 3.3, only the second part requires proof. We recall the estimates [19, With φ(g) = z † (g + ) + z(g − ), it follows that for any j ∈ N, where c g := e ω(cosh ·) g + + g − . Since H ω,f consists of analytic vectors for φ(g), and since φ(g) changes the particle number by at most 1, we then obtain for k ≥ ℓ, with some constant c ′ g > 0 depending on g. For any χ ∈ H ω,f , we therefore have with suitable ℓ, m,n ω m×n , (3.9) where the estimate on A − has been deduced from (3.4). The series on the r.h.s. exists by hypothesis. Now setχ k := P f k exp(iφ(g) − )χ ∈ H ω,f . Since the r.h.s. of (3.9) is summable over k, bothχ k and A −χ k are convergent sequences in H. As A − is closed, this implies that lim k→∞χk = exp(iφ(g) − )χ is contained in the domain of A − . Locality We now consider local observables of our model. In Sec. 2, we introduced two notions of locality: a net of von Neumann algebras A(O), where locality can be expressed in terms of commutation relations in the usual sense, and the concept of ω-locality for quadratic forms (Def. 
2.1), which was based on relative locality to the wedge-local fields φ(g), φ ′ (g). A priori, ω-locality is a much weaker notion, since it only involves commutators in the weak sense between a restricted set of observables. However, we show that for suitably regular quadratic forms (bounded or ω-closable), ω-local observables can be linked to the net of local algebras. (c) In the case S = −1, statement (b) is true even without the condition (3.10). Proof. We will prove the statement only for R = W (the standard right wedge). For R = W x or R = W ′ y , it can then be obtained by applying Poincaré transformations, and for R = O x,y by considering intersections. Also, (a) is a special case of (b). an operator defined at least on H ω,f , along with its adjoint. Noting that powers of φ(g) leave H ω,f invariant, we can deduce from ω-locality of A by repeated application of Definition 2.1 that where the last equality uses ψ ∈ dom (A − ) * . Both ψ and χ are analytic vectors for φ(g). Therefore, as n → ∞, we have B n χ → Bχ and B * n ψ → B * ψ, with B := exp iφ(g) − . Equation (3.12) implies Since B is bounded and ψ can be chosen from a dense set in H, we conclude that If now more generally χ ∈ dom A − , we can find a sequence ( We compute from Eq. (3.14), using boundedness of B, The same then holds if B is a finite product of operators exp(iφ(g) − ), a linear combination of those, or their strong limit (by a similar computation as in (3.15)). Thus, by the double commutant theorem, For the converse, let A − η A(W) and let g ∈ D R (W ′ ). For any t ∈ R, we have exp Since ψ, χ are analytic vectors for φ(g), both sides of (3.17) are real analytic in t. Computing their derivative at t = 0, we find Since g ∈ D R (W ′ ) was arbitrary, and since we can extend the relation to complex-valued g by linearity, this means that A is ω-local in W. This completes the proof of (b). For (c), note that in the case S = −1, the operators φ(g) − are actually bounded, and generate the algebra A(W) ′ [15]; we can restrict to g ∈ D ω R (W ′ ) here by density. It is clear that φ(g)H ω,f ⊂ H ω,f ⊂ dom A − , and using this instead of (3.10), a similar (in fact, simpler) computation as for (b) shows that Cyclicity of the vacuum, and relation to the local algebras We now ask for criteria which guarantee that a set of (local) quadratic forms is "maximally large", in the sense of generating all vectors in the Hilbert space, or all local observables in a certain sense. We first investigate the Reeh-Schlieder property, i.e., the question whether the vacuum is cyclic for given subspaces Q d ⊂ Q ω of quadratic forms. More specifically, we suppose that each We show that for cyclicity, it is sufficient to check density of the states at finite particle number over compact sets in rapidity space only. To that end, for m ∈ N 0 , we denote with P m the projector onto H m ⊂ H as before, and for M ⊂ N 0 we write P M := m∈M P m . Further, let P m,ρ be the subprojection of P m onto functions supported in the ball of radius ρ > 0, and P M,ρ accordingly. be a subspace with the following properties: (iii) For each finite subset N ⊂ M , and each ρ > 0, the inclusion Remarks: Condition (i) can be replaced with the weaker requirement that each A extends to an operator with Ω in its domain. In applications in Secs. 4-5, M will either be the set of even or of odd numbers, as we need to treat even and odd particle numbers separately. Proof. Let ψ ∈ H be orthogonal to P M A − Ω for all A ∈ Q d ; we need to show P M ψ = 0. 
We apply a variant of the well-known Reeh-Schlieder argument [24]. To that end, let e be the unit vector in time direction, and consider for fixed A ∈ Q d the function which is well-defined and continuous due to (i). It vanishes for |t| < ǫ due to (ii). On the other hand, due to the spectrum condition for U , it is the boundary value of a function analytic in the upper half-plane, which must therefore vanish identically. Computing its Fourier transform in the sense of distributions, we see that In particular, for given q > 0, we can choose h to equal 1 on [−q, q] and 0 outside [−2q, 2q]. Since E(θ) ≥ m, the sum (3.21) is then finite (m ≤ 2q), and the integration can be restricted to the compact region E(θ) ≤ 2q. Due to (iii) with suitably chosen ρ, we then conclude that ψ m (θ) vanishes when m ∈ M , m ≤ 2q, for (almost every) θ in the support of h(E( · )) -that is, at least where E(θ) ≤ q. Now letting q → ∞, we see that ψ m (θ) = 0 for all m ∈ M and almost all θ, i.e., P M ψ = 0. From here, if the A ∈ Q d are affiliated with some algebra A(O), we can deduce that the local algebra has the Reeh-Schlieder property. But more is true. To that end, consider the "locally generated" net of algebras, Then we have: (a) Reeh-Schlieder property: Ω is cyclic and separating for A(Ô) and for A(Ô) ′ . (b) Locally generated wedge algebras: There remains the question whether the spaceQ generates the algebra A(Ô) in some sense. This cannot follow directly from the above: Namely, if we considerÔ := O r for some fixed r > 0, and Then, A Q as defined in (3.25) is a local, isotonous, covariant net of von Neumann algebras; Ω is cyclic for A Q (R) if R is nonempty, and separating if R ′ is nonempty; and A is the dual net of A Q , in the sense that for every double cone O, Proof. As A Q (R) ⊂ A(R) for every R, locality is automatic; isotony and covariance follow from (iv) and (v), respectively. Ω is cyclic and separating by Theorem 3.7(a). Also, thanks to covariance of A Q , one obtains with methods as in Theorem 3. for any x, y. Hence we have for any double cone O x,y . Examples of local operators: even case We now illustrate the above methods for constructing local observables in examples; specifically, we will in a moment specialize to the massive Ising model, defined by the scattering function S = −1. In view of the results in Sec. 3, our strategy will be as follows: We define meromorphic functions F k that satisfy the conditions (FD1)-(FD6) for some r > 0, guided by experience from the form factor program. By Theorem 2.2, the associated quadratic form is then ω-local in the double cone O r . Separately, we show using summability criteria (Proposition 3.3 or 3.4) that A is also ω-closable. Then A − is affiliated with A(O r ) by Proposition 3.5. If we construct sufficiently large sets of such A, fulfilling additional constraints such as isotony, covariance, and a density condition in compact regions of rapidity space, then the results in Sec. 3.3 imply the Reeh-Schlieder property for the local algebras, Haag duality, and that the A(O) are generated by our quadratic forms via duality. For all scattering functions S in our class, the overall theory is invariant under the Z 2 -symmetry that replaces the wedge-local field φ with −φ. As a consequence, all local observables can be split into an even and an odd part, in which only the even-and odd-numbered F j contribute, respectively. Hence it suffices to consider even and odd observables separately. 
Specific to the Ising model is the fact that for even observables, the recursion relations (FD4) simplify considerably, since the factor on the right-hand side vanishes, hence the relation does not link F k and F k+2 . We will consider this simpler case in the present section, and the case of odd observables in Sec. 5. In the even case of the Ising model, we can hence choose only one of the functions F 2k to be analytic and nonzero. This may seems an uninteresting special case at first glance; yet it comprises physically important observables, such as the averaged energy density T 00 (g) (see, e.g., [14]) whose only nonvanishing coefficient function is where g ∈ S(R 2 ) is nonnegative. 4 Other examples of this type haven been given by Buchholz and Summers [28] by considering even polynomials of the field φ(f ). We aim at constructing a large enough set of observables so that the Reeh-Schlieder property is fulfilled for these. To that end, let k ∈ N 0 , let g ∈ D(R 2 ), and let P ∈ Λ ± 2k be a symmetric Laurent polynomial in 2k variables (see Appendix A for notational conventions). We define a sequence of analytic functions F [2k,P,g] j (ζ) := g(p(ζ))P (e ζ )M even 2k (ζ) for j = 2k, 0 otherwise, and by convention, M even 0 := 1. We claim that these functions fulfill our locality conditions. Proposition 4.1. Let k ∈ N 0 , P ∈ Λ ± 2k , and g ∈ D(O r ) be fixed, with some r > 0. Then F [2k,P,g] j enjoy the properties (FD1)-(FD6) with respect to this r and both (a) ω(p) = β log(1 + p) with sufficiently large β > 0 for given P and k, and (b) ω(p) = p α with any fixed α ∈ (0, 1), independent of P and k. Proof. By Poincaré covariance, we can assume without loss of generality that O = O r . (Note that translations act only by shifting the argument of g, whereas boosts also scale the arguments of the polynomial P by a constant factor; cf. [19,Sec. 3.3].) Now the property of ω-locality is a consequence of (FD1)-(FD6) by Theorem 2.2. Closability follows from Proposition 3.4, where the sum is actually finite; and Proposition 3.5 proves affiliation. We now show that we have constructed all (even) local quantities in the sense of the Reeh-Schlieder property. To that end, let us define for any double cone O, where ω(p) = p α with α ∈ (0, 1), fixed in the following. By the above remark, this is a covariant definition in the sense that U (x, Λ)Q even (O)U (x, Λ) * = Q even (ΛO + x). With P even the projector onto the even particle number space within H, we prove: But since ψ( · )g(p( · ))M even 2k is symmetric, andg(p( · ))M even 2k vanishes only on a null set (due to analyticity), this follows from the density of polynomials on compact sets. We postpone duality results to Sec. 5.3. Examples of local operators: odd case We now consider observables in the Ising model where the "odd" coefficients F 2k+1 are nonzero. Due to the recursion relations (FD4), which are nontrivial in this case, we are forced to choose an infinite sequence of nonvanishing F 2k+1 , linked to each other by their residues. Observables of this type have been considered in [3,29,30], among others; they include the so-called order parameter, or basic field, of the Ising model. Our particular focus is on closability of these quadratic forms, or put differently, on the summability of the expansion series (4.1). Specifically, we choose the sequences of meromorphic functions, k ∈ N 0 , Here g ∈ D(R 2 ) is a test function; we will make further restrictions on its momentum space behaviour below. 
P is (essentially) a Laurent polynomial in any number of variables such that P (y, −y, x) = P (x); we formalize the class Λ ± I of these polynomials in Appendix A, but let us note here that typical examples are the odd power sums, π 2s+1 (x) = j x 2s+1 j . The meromorphic function will be further explored in Appendix B. As in Sec. 4, we want to verify that these functions indeed define local observables, and sufficiently many. We first check the more elementary properties (FD1)-(FD4) and (FD6) in Sec. 5.1. Then we turn to (FD5) and summability in Sec. 5.2, which involves delicate operator norm estimates of singular integral operators. Finally we derive the Reeh-Schlieder property and duality results in Sec. 5.3. Throughout this section, we will take ω(p) = p α with some α ∈ (0, 1). We also define the function spaces Elementary properties We briefly state the results for (FD1)-(FD4) and (FD6), which can be deduced from the properties of P and M odd 2k+1 as explained in Appendices A and B, respectively. Operator domain and summability The remaining part for establishing F [1,P,g] 2k+1 as the coefficients of a local operator is as follows. Setting f mn (θ, η) := F [1,P,g] m+n (θ + i0, η + iπ − i0), we need to find bounds for the norm f mn ω m×n . This will, first of all, establish (FD5). However, we also need these estimates in order to show the summability of the series (4.1) when applied to a certain class of vectors, in order to extend A to a closed operator. The individual terms of the series are singular integral operators due to the poles of the F m+n along the integration contour, and we have to find operator norm estimates for these. We start with a lemma to that end. In it, for a set of integers J = {j 1 , . . . , j ℓ }, we denote mixed partial derivatives of a function h as ∂ J h(θ) = ∂/∂θ j1 · · · ∂/∂θ j ℓ h(θ); and where the function has additional arguments denoted η, the derivatives will not act on these. Lemma 5.3. Let m, n ∈ N 0 and 0 ≤ ℓ ≤ min(m, n). Let h : R m × R n → C be smooth, and let L : R\{0} → C be a continuous function, bounded outside a neighborhood of zero and analytic inside that neighborhood, except for a possible first order pole at 0. Then, the integral kernel on R m × R n , fulfills the bound with a constant c L > 0 that depends on L but not on m, n, ℓ, or h. Proof. We reduce the statement to special cases in four steps (a)-(d). (a) It suffices to prove the statement for m = n = ℓ. Namely, once known for that case with some c L , we can write for ψ ∈ D(R m ), ϕ ∈ D(R n ), We now apply the known statement to the inner integral and the Cauchy-Schwarz inequality to the outer integral, which yields the desired result as long as we choose c L ≥ √ π. (b) If the statement holds for some L, and M is a bounded continuous function analytic near 0, then it holds for L + M in place of L as well. To see this, write Note here that δ J,θ−η h depends on θ j only if j ∈ J, and that (θ j − η j ± i0) −1 (j ∈ J c ) thus acts as a bounded operator with respect to that variable. Splitting the integration variables as in (5.9), we find that where c > 0 is some constant. Since the finite difference quotients are majorized by the corresponding partial derivatives, and the sum has 2 ℓ terms, this proves the statement with c L := 2(c + 1). We are interested in particular in the following kernels which appear as building blocks of the F 2k+1 . Lemma 5.4. Let P ∈ Λ ± I , let g ∈ D α (O), and let k ∈ N 0 be fixed in the following. 
Consider the kernels on R m × R n , m, n ≥ k, For each ǫ > 0, there exists c > 0 such that ∀m, n ≥ k : Proof. To apply Lemma 5.3, we need to estimate for J ⊂ {1, . . . , k} the function We can explicitly compute . With the help of this result, and knowledge of the singularity structure of the functions M odd 2k+1 as developed in Appendix B, we can now estimate the · ω m×n -norms of the expansion coefficients f mn of our proposed local operators. Proof. Representing M odd m+n (θ + i0, η + iπ − i0) as in Lemma B.3, we obtain with the notation introduced there, Here M odd m−k , M odd n−k are bounded by 1 (Proposition B.2(e)), hence they act as multiplication operators with norm ≤ 1 with respect to the variablesθ,η. Applying Lemma 5.4 to each term of the sum (5.25), knowing that the number of terms grows like 2 m at fixed n (see Lemma B.3), then yields where c is a constant depending on ǫ, α, g, n but independent of m. Exchanging m with n, and θ with η, one obtains a similar result for exp(−E(θ) α )f mn (θ, η) m×n , and likewise for f nm , which yields the result at ζ = (θ + i0, η + iπ − i0) after a redefinition of constants. The computation at ζ = (θ − iπ + i0, η − i0) is analogous, as M odd m+n depends only on the differences of its variables. With Proposition 5.5, we have shown that the F [1,P,g] fulfill all conditions (FD). Therefore they yield ω-local quadratic forms A [1,P,g] via the series (4.1). But our estimates suffice even for affiliation with the local algebras. Theorem 5.6. Let ω(p) = p α with some α ∈ (0, 1); let P ∈ Λ ± I , and g ∈ D α (O) with some double cone O; and let F [1,P,g] j be defined as in (5.1). The associated quadratic form A [1,P,g] ∈ Q ω is ω-closable, and its closure is affiliated with A(O). and choosing ǫ < 1/2, the series then converges by the quotient criterion. Therefore, Proposition 3.3 shows that A [1,P,g] is ω-closable, and Proposition 3.5(c) shows that its closure is affiliated with A(O r ). Discussion of results In this paper we have explicitly constructed a set of local observables in the massive Ising model, as an example for an integrable quantum field theory. To that end, we defined sequences of meromorphic functions F k which are, essentially, solutions of the well-known form factor equations (more precisely, conditions (FD1)-(FD6) in Theorem 2.2). Via the series (4.1), these F k define local operators, for which the main technical point is to control convergence of the series in a suitable sense. Proposition 3.3 gives a sufficient criterion in this respect, which we can indeed verify in relevant examples (Sec. 5.2). In fact, we have found sufficiently many examples to generate all local observables in a well-defined sense (Corollary 5.8). This indicates that our approach can overcome the difficulties inherent in the convergence of n-point functions in the form factor program, and give mathematical meaning to local fields in a more general sense, i.e., as closed operators affiliated with local von Neumann algebras. Let us comment on some individual aspects of our results. Operator content of the Ising model In the Ising model, we have shown that the sets of fields Q(O) that we constructed, separated into even and odd parts, have the Reeh-Schlieder property, and that they generate the local algebras A(O) by duality (see Sec. 5.3, in particular Corollary 5.8). In this sense, we can claim that we have constructed the full operator content of the Ising model. 
In particular, this provides an alternative proof that the A(O) are nontrivial for any open O, which was already shown in [15]. Specific elements of our class of observables include the order parameter A [1,1,g] [3] and the energy density A [2,P,g] , which plays an important role, e.g., in the study of quantum energy inequalities in integrable models [14,32]. All operators that we have constructed are, in principle, pointlike fields smeared with test functions g in space and time; their Fourier transformg appears in the rapidity-space expansion coefficients F k . We will elaborate more on their functional analytic aspects in Sec. 6.2. Let us remark here that it would be sufficient to use averaging only in time; our results would be the same, but Poincaré covariance of the local observables would be less manifest. The Laurent polynomial P in the coefficients F k serves to enumerate the field content. For the Reeh-Schlieder property, usual polynomials P ∈ Λ I in the odd case and P ∈ Λ 2k in the even case would suffice, and indeed a subalgebra of Λ I would already have the relevant density property (cf. the proof of Proposition A.4). However, the generalization to Laurent polynomials is important for applications: it allows us to include also derivatives of our fields, which act on the coefficients by multiplication with p j (ζ), and related quantities such as the averaged energy density. It is interesting to note (cf. the end of Sec. 2 in [30]) that we obtain, among others, local observables A that do not couple the vacuum with states of low particle number; that is, for some n ∈ N, one has AΩ ⊥ H m for all m < n, but AΩ ⊥ H n . In that respect, these A are analogous to n-th Wick powers of a free field. Indeed, if n is even, then every A [n,P,g] has this property, and for n odd, one can construct such operators A [1,P,g] by including a factor of J n in the polynomial P , as discussed in Appendix A (see Lemma A.3). Pointlike fields in integrable models As mentioned, the observables we constructed in the Ising model have the structure of local averages of pointlike quantum fields. Formally replacing the averaging function g with a delta function, and hence its Fourier transform with a constant, reproduces the well-known expressions from the form factor program. This ansatz can likely be carried over to other integrable models (see Sec. 6.3). In this respect, the structure of our local fields is in line with expectations. However, our mathematical interpretation of these fields is very different from the usual approach: We construct them as (unbounded) operators, but we do not show, or require, that they exist as operator-valued distributions on a common invariant domain, neither in the axiomatic setting by Wightman [5] nor -more aligned with our choice of test functions -in its generalization by Jaffe [17]. In particular, the closed extensions of our averaged fields A ∈ Q(O) have a common dense core H ω,f ∋ Ω, which is however not invariant under their action. Consequently, we do not make any statement about products of the field operators or about their n-point functions, beyond the 2-point function which exists since Ω ∈ dom A − * . Instead, we can show -at least in the Ising model -that the closures of the field operators are affiliated with the (abstractly defined) local von Neumann algebras A(O), and indeed that they generate the algebras A(O) by duality. Note that, particularly for the Ising model, our claim is not that n-point functions of local fields do not exist. 
In fact, there are alternative constructions of (likely) the same model in a Euclidean setting, where the Osterwalder-Schrader axioms can be verified [16]. Therefore one would expect that the expressions from the form factor program actually do yield Wightman fields in the usual sense, fulfilling polynomial H-bounds, and hence field products should exist, even when using Schwartz-class test functions. But in a more general setting, the Wightman axioms might be too strict; and with our methods, we can interpret the fields meaningfully as local objects without the need of controlling the singular nature of operator products. In this sense, our results demonstrate that the n-point functions are not conceptually necessary. An interesting question arises for the products of operators localized at spacelike distances. Namely, let O 1 and O 2 be two spacelike separated regions, and do not a priori have a common invariant domain, they are affiliated with the commuting von Neumann algebras A(O 1 ), A(O 2 ), which means that |A − 1 | and |A − 2 | spectrally commute. Therefore, the product can be defined on a suitably chosen domain. Thus there is hope to establish an operator product expansion with methods as in [33], though the technical situation described there is somewhat different. Other integrable models While we have carried out our full construction only in the massive Ising model, there is reason to believe that similar methods can be applied in other models as well. As far as a single species of massive scalar particles is concerned, the expansion (2.13) and the characterization of locality in Theorem 2.2 apply independent of the scattering function S, and so do the criteria developed in Sec. 3. Candidates for local observables (i.e., form factors) are known in some of these models, most notably for the sinh-Gordon model [4]. Hence our methods should be applicable to the sinh-Gordon case in principle. Care is needed, however, since the form factors F k there have a more intricate structure, complicating the estimates at large k. Also, the extra condition (3.10) in Proposition 3.5 will need to be established outside the case S = −1. The situation is similar in models with a richer particle spectrum, such as the O(N ) nonlinear sigma models. Here form factors have been computed [34], and progress has been made towards the construction of the local algebras via wedge-local fields [11,35]. The expansion (2.13), Theorem 2.2, and the criteria in Sec. 3 have not yet been established for this case, but would be expected to generalize quite directly, using matrix-valued coefficient functions F k . A challenge, of course, are the ever more complicated estimates on higher-order integral kernels F k . A quite different problem arises in models with bound states, i.e., where the scattering function S has poles in the physical strip 0 < Im ζ < π, such as the Bullough-Dodd, Z(N )-Ising, and sine-Gordon models. Here the form factor equations need to be modified, but solutions to them are known (see [36,37,38], among others). However, on the side of the operator algebraic approach, the wedge-local fields can no longer have the simple form (2.14). Work towards a construction of wedge-local fields and of local algebras A(O) in this case has recently been carried out by one of the authors together with Y. Tanimoto [39,40,41]. This gives hope that our present methods, with a suitably modified version of Theorem 2.2, can be applied to models with bound states as well. 
In particular, a generalization of our results might imply nontriviality of the local algebras A(O) in cases where other methods have so far been unable to resolve this question: Our construction does not rely on the modular nuclearity condition or the split property for wedge algebras, rather it directly shows the existence of closable local operators and hence of their polar data. A Symmetric Laurent functions We discuss here a certain class of Laurent polynomials which are relevant in our constructions of operators in the Ising model, but more generally for "descendant fields" in integrable models of quantum field theory; see, e.g., [30,42,4]. To that end, for n ∈ N, we denote Λ n = C[x 1 , . . . , x n ] Sn the algebra of symmetric polynomials in n variables, and Λ ± n = C[x ±1 1 , . . . , x ±1 n ] Sn the algebra of symmetric Laurent polynomials (i.e., polynomials which can contain negative powers of the x i ). For our purposes in particular in Sec. 5, we need a notion of Laurent polynomials "independent of the number of variables". Let Λ be the algebra of symmetric functions (see, e.g., [43]), and ϕ n : Λ → Λ n the homomorphism that reduces a symmetric function to a polynomial in n dimensions, i.e., ϕ n P (x) = P (x 1 , . . . , x n , 0, 0, . . . ). Following [44], we define the algebra of symmetric Laurent functions as Λ ± = Λ ⊗Λ, whereΛ is a copy of Λ but read with respect to the "inverse variables" x −1 i . More formally, we set ϕ ± n : Λ ± → Λ ± n , ϕ ± n (P ⊗ Q)(x) = (ϕ n P )(x 1 , . . . , x n ) · (ϕ n Q)(x −1 1 , . . . , x −1 n ); this is compatible with ϕ n with respect to the natural inclusions Λ ⊂ Λ ± , Λ n ⊂ Λ ± n . The ring Λ ± is freely generated by the power sum functions π k = j x k j , k ∈ Z\{0}. In the following, we will often write P (x) rather than ϕ ± n P (x) for x ∈ R n , where no confusion can arise. For our purposes, we are particularly interested in functions with the property P (y, −y, x) = P (x) for all n ∈ N 0 and x ∈ R n . (A.1) More formally, for y ∈ R + , let α y : Λ → Λ be the homomorphism that substitutes x 1 → y, x 2 → −y, x j+2 → x j , and set α ± y := α y ⊗ α 1/y . Let Λ I ⊂ Λ (respectively, Λ ± I ⊂ Λ ± ) be the subalgebra of invariants under all α y (respectively, α ± y ). We are interested in characterizing these subalgebras. If now P ∈ Λ ± I , then this expression is constant in n, even if we extend the r.h.s. to n ∈ R as a polynomial. Taking derivatives by n, this means This (finite) sum must vanish at every order in y; hence Q is independent of all π 2k , k = 0. This shows the statement for Λ ± I ; the one for Λ I is analogous. Hence we have a simple characterization of the invariant subalgebras. Other generators have been constructed in [30,42]; we include them here for completeness: Let σ k = i1<···<i k x i1 · · · x i k ∈ Λ be the elementary symmetric polynomials, k ∈ N. For s ∈ N 0 , set We also set I −2s−1 (x i ) = I 2s+1 (x −1 i ) ∈ Λ ± . They have the following properties. Lemma A.2. (a) We have for every s ∈ N 0 , (A.6) (b) I 2s+1 is homogeneous of degree 2s + 1, s ∈ Z. We now show a density property of Λ I (and hence Λ ± I ). Proposition A.4. Let n ∈ N, and for each j = 1, . . . , n, let f j be a continuous symmetric function from R j to C. For every r > 0 and ǫ > 0, there exists a P ∈ Λ I such that P (e θ ) − f j (θ) < ǫ for every j ∈ {1, . . . , n} and every θ ∈ [−r, r] j . (A.10) Proof. Define the compact Hausdorff space x ∈ R j : e −r ≤ x 1 ≤ x 2 ≤ · · · ≤ x j ≤ e r . 
(A.11) With the obvious identification, we can consider Λ I as a * -subalgebra (with identity) of C(X, C). We show that Λ I separates points, i.e., if x, y ∈ X such that P (x) = P (y) for all P ∈ Λ I , then x = y. Let such x ∈ X ∩ R i , y ∈ X ∩ R j be given. As π 2k+1 ∈ Λ I , we have in particular for all k, x 2k+1 1 + · · · + x 2k+1 i = y 2k+1 1 + · · · + y 2k+1 j , (A.12) and hence, noting x i ≥ e −r > 0, The left-hand side has a finite, nonzero limit as k → ∞. For the right-hand side, this is true only if y j = x i . Hence we can cancel the last term on both sides of (A.12). Continuing this scheme, we either arrive at x = y (if i = j) or at a contradiction (if i = j).-Thus Λ I separates points, and hence by the Stone-Weierstraß Theorem [46, Ch. V §8], Λ I is dense in C(X, C). After symmetric extension in the j variables, and a variable transformation θ i = log x i , this is exactly the statement claimed. Proposition A.5. Let P ∈ Λ ± . There exists a, b > 0 such that for any n ∈ N, any ζ ∈ C n , and any set J ⊂ {1, . . . , n}, ∂ J P (e ζ ) ≤ a n E(Re ζ) b . (A.14) Proof. If P, Q ∈ Λ ± fulfill an estimate of the type claimed, then so do P + Q, cP (with c ∈ C) as well as P · Q, noting that the product rule reads ∂ J (P · Q)(e ζ ) = I⊂J ∂ I P (e ζ ) ∂ J\I Q(e ζ ) (A. 15) and that the sum contains at most 2 n terms. It hence suffices to prove the statement for the generators π k , k ∈ Z\{0}, which can be done by direct computation. As a first step, it is useful to rewrite the function using the following technique. By a pairing of n indices, we understand a set of pairs, p = {(ℓ 1 , r 1 ), . . . , (ℓ k , r k )} where k = ⌊n/2⌋, where ℓ j , r j ∈ {1, . . . , k} are all pairwise different and ℓ j < r j . We denote the set of all such pairings as P n , where P 0 = P 1 = {∅}. The signum of a pairing p is defined, in the case n = 2k + 1, as sign p := sign 1 2 3 4 · · · 2k−1 2k 2k+1 ℓ 1 r 1 ℓ 2 r 2 · · · ℓ k r km , wherem is the unique number not occurring in the pairs; if n = 2k, we drop the last column. We note that this expression does not depend on the ordering of the pairs. Also, sign ∅ := 1. With these definitions, we can express the function M odd n as follows. our aim is to show M odd n (ζ) = T n (e ζ ). Since T n is antisymmetric in its arguments, the expression T n (x) i<j (x i +x j ) is a skew-symmetric polynomial. Therefore, there exists [47, Thm. 3.1.2] a symmetric polynomial Q n such that But since T n is homogeneous of order 0, so is Q n ; thus Q n must be constant, and T n (e ζ ) = Q n M odd n (ζ). To determine the constant, note that lim ǫ→0 M odd n (log(ǫ), log(ǫ 2 ), . . . , log(ǫ n )) = 1 = lim ǫ→0 T n (ǫ, ǫ 2 , . . . , ǫ n ). (B.6) (All quotients (x ℓ − x r )/(x ℓ + x r ) etc. converge to 1 in this limit; and one has p∈Pn sign p = 1, as can be seen by induction on n.) Thus Q n = 1, which concludes the proof. We collect the main features of the functions M odd n . Proof. Properties (a), (b) and (e) can be read off directly from Eq. (B.1). For (c), one notes that the only factor in M odd n contributing to the pole is tanh 1 2 (ζ 1 − ζ 2 ), with residue −2; the claim then follows from (B.1) or, alternatively, from (B.3). Regarding (d), we estimate the function M odd n (ζ) for ζ ∈ I n + (the argument is similar for I n − ). We remark that tanh ζ 2 ≤ c 1 1 + 1 |ζ + iπ| ≤ 5c 1 Im ζ + π for all ζ ∈ R + i(−π, 0) (B.9) with some constant c 1 > 0. 
Applying this to every term in the representation (B.3), we find a constant c 2 (depending on n) such that for all ζ ∈ I n + , |M odd n (ζ)| ≤ c 2 max i<j 1 Im(ζ i − ζ j ) + π ⌊n/2⌋ ≤ c 2 dist(Im ζ, I n + ) −⌊n/2⌋ , (B.10) noting that dist(Im ζ, I n + ) ≤ | Im(ζ i − ζ j ) + π| for every i, j. Finally, we derive a representation of M odd n that is crucial for controlling its behaviour as an integral kernel. The sum runs over all pairs of indices with the described properties, including over the number k of pairs; it contains at most 2 m+n (min(m, n) + 1)! summands.θ ∈ R m−k denotes the θ j with j not in the list ℓ 1 , . . . , ℓ k , andη analogously. The integer s ℓr may depend on the choice of pairs. Proof. Recall that M odd m+n is given as a sum over pairings as in (B.3). Inserting ζ = (θ, η + iπ), we reorganize the sum over pairings p as follows: We first fix the number k of pairs (ℓ, r) ∈ p with ℓ ≤ m and r > m, and sum over k; then we sum over all possibilities for such pairs at fixed k; then we sum over the possibilities for choosing the ⌊(m − k)/2⌋ pairs (ℓ, r) ∈ p with ℓ < r ≤ m, and the ⌊(n − k)/2⌋ pairs (ℓ, r) ∈ p with m < ℓ < r, which complete the pairing of m + n indices. For the last-mentioned two sums, applying (B.3) yields the factors M odd m−k (θ)M odd n−k (η); the remaining factors of the product are of the form tanh((θ ℓ − η r−m − iπ)/2) = coth((θ ℓ − η r−m )/2) with ℓ ≤ m < r. Thus we arrive at Eq. (B.11), where s ℓr is some integer depending on the pairing (which has no further relevance for us). The sum contains at most m k n k k! ≤ 2 m+n k! summands at fixed k, so that the number N of terms can be estimated by N ≤ 2 m+n min(m,n) k=0 k! ≤ 2 m+n (min(m, n) + 1)! (B.12) as claimed.
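The combinatorial claims of the two appendices lend themselves to a direct numerical check. The sketch below is written under one assumption made explicit here: that M^odd_n(ζ) = Π_{i<j} tanh((ζ_i − ζ_j)/2), as the proof of Proposition B.2(c) and the quotients (x_ℓ − x_r)/(x_ℓ + x_r) entering T_n suggest (this is our reading of (B.1), not a formula quoted from it). It enumerates the pairings P_n with their signs, verifies Σ_{p∈P_n} sign p = 1, compares T_n(e^θ) against the tanh product (the identity M^odd_n(ζ) = T_n(e^ζ) of Appendix B), and checks the invariance π_{2s+1}(y, −y, x) = π_{2s+1}(x) of the odd power sums from Appendix A.

```python
# Numerical sanity check of the pairing combinatorics (Appendix B) and the
# odd power-sum invariance (Appendix A).
import math, random

def perfect_matchings(idx):
    """All perfect matchings of a sorted index list, as lists of (l, r) with l < r."""
    if not idx:
        yield []
        return
    a = idx[0]
    for j in range(1, len(idx)):
        for rest in perfect_matchings(idx[1:j] + idx[j + 1:]):
            yield [(a, idx[j])] + rest

def pairings(n):
    """The set P_n: for odd n, exactly one index stays unpaired."""
    idx = list(range(1, n + 1))
    if n % 2 == 0:
        yield from perfect_matchings(idx)
    else:
        for skip in idx:
            yield from perfect_matchings([i for i in idx if i != skip])

def perm_sign(perm):
    """Sign of a permutation given as the list of images of 1..n (cycle parity)."""
    n, seen, sgn = len(perm), [False] * len(perm), 1
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j] - 1
                length += 1
            if length % 2 == 0:
                sgn = -sgn
    return sgn

def pairing_sign(p, n):
    """sign p as defined before (B.3): bottom row (l1, r1, ..., lk, rk[, m-hat])."""
    flat = [x for pair in p for x in pair]
    if n % 2 == 1:
        flat.append(next(i for i in range(1, n + 1) if i not in set(flat)))
    return perm_sign(flat)

def T(x):
    """T_n(x) = sum over pairings p of sign(p) * prod (x_l - x_r)/(x_l + x_r)."""
    total = 0.0
    for p in pairings(len(x)):
        term = pairing_sign(p, len(x))
        for (l, r) in p:
            term *= (x[l - 1] - x[r - 1]) / (x[l - 1] + x[r - 1])
        total += term
    return total

random.seed(1)
for n in range(2, 7):
    assert sum(pairing_sign(p, n) for p in pairings(n)) == 1   # sum_p sign p = 1
    theta = sorted(random.uniform(-2, 2) for _ in range(n))
    lhs = T([math.exp(t) for t in theta])                      # T_n(e^theta)
    rhs = math.prod(math.tanh((theta[i] - theta[j]) / 2)       # assumed M^odd_n
                    for i in range(n) for j in range(i + 1, n))
    assert abs(lhs - rhs) < 1e-10, (n, lhs, rhs)

# Appendix A: odd power sums are invariant under inserting the pair (y, -y).
pi = lambda k, xs: sum(x ** k for x in xs)
xs, y = [0.3, 1.7, 2.2], 0.9
assert abs(pi(3, [y, -y] + xs) - pi(3, xs)) < 1e-12
print("all pairing and power-sum checks passed")
```

Note that the check of Σ_p sign p = 1 is independent of the assumed form of M^odd_n; only the comparison of T_n(e^θ) with the tanh product depends on it.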
Functionally inert HIV-specific cytotoxic T lymphocytes do not play a major role in chronically infected adults and children.

The highly sensitive quantitation of virus-specific CD8+ T cells using major histocompatibility complex–peptide tetramer assays has revealed higher levels of cytotoxic T lymphocytes (CTLs) in acute and chronic virus infections than were recognized previously. However, studies in lymphocytic choriomeningitis virus infection have shown that tetramer assays may include measurement of a substantial number of tetramer-binding cells that are functionally inert. Such phenotypically silent CTLs, which lack cytolytic function and do not produce interferon (IFN)-γ, have been hypothesized to explain the persistence of virus in the face of a quantitatively large immune response, particularly when CD4 help is impaired. In this study, we examined the role of functionally inert CTLs in chronic HIV infection. Subjects studied included children and adults (n = 42) whose viral loads ranged from <50 to >100,000 RNA copies/ml plasma. Tetramer assays were compared with three functional assays: enzyme-linked immunospot (Elispot), intracellular cytokine staining, and precursor frequency (limiting dilution assay [LDA]) cytotoxicity assays. Strong positive associations were observed between cell numbers derived by the Elispot and the tetramer assay (r = 0.90). An even stronger association between tetramer-derived numbers and intracellular cytokine staining for IFN-γ was present (r = 0.97). The majority (median 76%) of tetramer-binding cells were consistently detectable via intracellular IFN-γ cytokine staining. Furthermore, modifications to the LDA, using a low input cell number in each well, enabled LDAs to reach equivalence with the other methods of CTL enumeration. These data together show that functionally inert CTLs do not play a significant role in chronic pediatric or adult HIV infection.

Introduction

The critical importance of HIV-specific cytotoxic T cells in controlling virus replication and in determining the outcome of infection has become increasingly apparent. Earlier studies demonstrated that, in chronic infection, high levels of CTLs were evident in asymptomatic subjects (1), generally declining to undetectable levels with progression to disease (2). In acute infection, the timing of early control of viremia was associated with the appearance of HIV-specific CTLs (3,4). More recently, utilization of peptide–MHC tetramer assays (5) has shown a striking negative association between CTL numbers and viral load in chronic HIV infection (6), and CD8+ T cell depletion studies in acute and chronic simian immunodeficiency virus (SIV) infection in macaques have revealed the strong dependence on virus-specific CTLs for virus control and protection against rapid progression to disease (7,8). However, in spite of the quantitatively strong virus-specific CTL responses typically observed in chronic HIV infection, control of virus replication is ultimately lost, for reasons that remain unclear. Although the newer peptide–MHC tetramer assays have allowed more precise enumeration of the magnitude of the virus-specific CTL response, in the lymphocytic choriomeningitis virus (LCMV) model the presence of significant numbers of phenotypically silent CTLs, capable of binding tetramer but not of elaborating effector functions, has now become apparent (9,10).
The studies of chronic LCMV infection in CD4 knockout mice may have particular relevance to HIV infection, in which the dependence on virus-specific T helper responses, in addition to CTLs, for successful containment of virus has also been clearly demonstrated (11–13). Although studies using tetramers to quantitate CTLs in HIV infection have not been compared with functional assays, similar investigations in SIV infection have shown 50–500-fold higher numbers of antigen-specific CTLs than are indicated by functional assays (14). Thus, it may be hypothesized that, in the context of impaired virus-specific T helper activity, functionally inert CTLs may exist in HIV infection, and may partially explain the paradox of a numerically strong HIV-specific CTL response and yet the almost universal failure to control viremia long term.

In this study of HIV-specific CTL activity, we address the relationship between functional CTL activity and antigen-specific CD8+ T cell numbers in infected adults and children. We reasoned that if phenotypically silent CTLs were present in HIV infection, they would be most evident either in pediatric infection, where viral loads tend to be higher than in adult infection (15–17), disease progression is more rapid (18), and levels of functional CTLs are reportedly lower (19,20), or in adults with high viral loads who simultaneously generated significant levels of CTLs. Three functional assays were used to study a broad range of subjects (n = 42) in order to compare the numbers of functional CTLs present with the levels detectable by peptide–MHC tetramers. The results of these studies show that the most sensitive of the functional assays, intracellular IFN-γ staining after stimulation with the appropriate peptide, is equivalent to the tetramer assay in all subjects who were investigated (r = 0.97, n = 29). The majority (median of 76%) of tetramer-binding cells were detectable with intracellular IFN-γ staining. The absence of phenotypically silent CTLs was clear even in the antiretroviral therapy-naive adults and children studied whose viral loads exceeded 100,000 RNA copies/ml plasma.

The least sensitive of the functional assays, the limiting dilution assay (LDA) or precursor frequency assay, was investigated further to distinguish between the two major possibilities that have been proposed to explain the well-recognized low estimation of CTL numbers by these cytotoxicity assays (21–24). The first explanation is that a proportion of tetramer-binding cells have a fundamental inability to proliferate in culture to a level detectable in a chromium release assay (25,26). The second is that the insensitivity of the LDA is principally due to competition among different cells within the space of the LDA wells, which prevents the antigen-specific cells of interest from proliferating to their true capacity (27,28). The studies described below indicate that modifications to the standard LDA can raise its sensitivity to approximate that of the tetramer assay. The underestimation of CTL numbers by the LDA is therefore chiefly methodological and not the consequence of substantial numbers of phenotypically silent CTLs.

Materials and Methods

Subjects Studied. Samples of blood from 23 children and 19 adults infected with HIV-1 were studied (Tables I and II).
All of the children were perinatally infected and attend clinics at the Boston Medical Center or the Children's Hospital (Boston, MA), except two (001-UNC and 002-UNC) from the University of North Carolina (Chapel Hill, NC), one (VI06) from the University of Massachusetts (Amherst, MA), and one (DBN-11) from the University of Natal (Durban, South Africa). The mean age of the children was 8.8 yr, with a range of 3–17 yr. 21 of the 23 children and 3 of the 19 adults studied were treated with antiretroviral therapy. The viral loads in the children ranged from <40 RNA copies/ml plasma to 867,724 copies/ml plasma (median 5,692 copies/ml). The CD4 percentage of total lymphocytes ranged from 4 to 43% (median 28%). The viral loads in the adults ranged from <50 copies/ml to >750,000 copies/ml (median 19,627 copies/ml). The CD4 percentage in the adults studied ranged from 8 to 56% (median 30%). The adults studied all attend clinics at the Massachusetts General Hospital, except for one (9354) who attended the Fenway Community Health Center (Boston, MA), two attending King Edward VIII Hospital clinics in Durban, South Africa (DBN-1 and DBN-12), and three (AA, FWW, and SP) who attended the University of Texas Southwestern Medical Center (Dallas, TX). All subjects tested were chronically infected (>1 yr) except for adult subjects MCW and GV, who were studied within 1 yr of presentation with acute HIV syndrome.

LDAs. LDAs were set up as described previously (29). In brief, PBMCs were plated in 24 replicate wells at limiting dilution, ranging from 16,000 to 100 cells/well. A total of 0.73 × 10^6 PBMCs were required for each precursor frequency assay (PFA). When cell numbers were limited (for 13 of the 54 PFAs performed), between 50 and 100% of 0.73 × 10^6 PBMCs were used in the assay, with dilutions reduced proportionately. These effector cells were cultured with irradiated allogeneic feeder PBMCs at 50,000 cells/well in a final volume per well of 200 µl of R10 medium (RPMI 1640, 10% FCS, and 10 mM Hepes buffer [all from Sigma-Aldrich] with antibiotics). The anti-CD3 mAb, 12F6, was added at 10 µg/ml. On day 5 and once weekly thereafter, the medium was changed with R10 medium containing 50 U/ml of recombinant IL-2 (provided by Dr. M. Gately, Hoffmann-La Roche, Nutley, NJ). Wells were screened for specific recognition of HLA-matched, peptide-pulsed, 51Cr (New England Nuclear)-labeled EBV-transformed B lymphoblastoid cell line (BCL) target cells as described previously (29) after 15–25 d in culture. Calculation of CTL precursor frequency (CTLp) was performed using the maximum likelihood method (28) using a statistical program written by S.A. Kalams. Wells that showed 10% or greater specific lysis were scored as positive, as per convention. Comparison was made of lysis of peptide-pulsed target cells and lysis of control target cells that had not been pulsed with peptide. The fraction of negative wells at a given input cell number showing <10% lysis of peptide-pulsed targets was subtracted from the fraction of negative wells showing <10% lysis of control targets. To perform the calculation of CTL frequency at each individual seeding set of 24 replicate wells, the identical assumptions were made, which include the premise that the specific CTLs of interest are distributed in the LDA wells according to the Poisson distribution (28).
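The zero-term Poisson estimate underlying this calculation, derived explicitly in the next paragraph, reduces to a few lines of code. The sketch below is an illustration written for this text — it is not the authors' statistical program — and the well counts in the example are chosen arbitrarily.

```python
import math

def ctlp_estimate(negative_fraction: float, cells_per_well: float) -> float:
    """Most likely CTL precursor frequency from one seeding of replicate wells,
    assuming specific CTLs are Poisson-distributed over the wells:
    P(well negative) = exp(-f * x)  =>  f = -ln(y) / x."""
    if not 0.0 < negative_fraction <= 1.0:
        raise ValueError("fraction of negative wells must lie in (0, 1]")
    return -math.log(negative_fraction) / cells_per_well

# Example: 9 of 24 replicate wells negative at 1,000 cells/well.
f = ctlp_estimate(9 / 24, 1000.0)
print(f"CTLp ~ 1 per {1 / f:,.0f} PBMCs")

# Sanity check of the conventional rule of thumb: when 37% of wells are
# negative (ln 0.368 = -1.00), the frequency is the reciprocal of the seeding.
assert abs(ctlp_estimate(0.368, 1000.0) - 1 / 1000.0) < 1e-6
```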
In this case, if the CTL frequency is z, the expected fraction F of LDA wells containing at least one of the specific CTLs would be

F = 1 − e^(−zx),

where x is the number of cells per well at that particular seeding. From this can be derived the formula for the most likely estimate of the CTL frequency, f, which is

f = −ln(y)/x,

where y is the fraction of wells scoring negative in the LDA. To convert this expression into log10 values, as LDA data have conventionally been presented (for example as in Fig. 1 A), this becomes (as ln 0.368 = −1.00)

f = 2.30 × [−log10(y)]/x.

Thus from this expression can be derived the more familiar fact that when the fraction of wells scoring negative in the LDA, y, is 37%, the best estimate of CTL frequency is the reciprocal of the number of cells per well at that seeding, x.

In the 50 comparisons of the percentage of tetramer-binding cells and the percentage of specific lysis in the LDA wells, the specific effector-to-target ratio was not determined for each LDA well in addition to the tetramer staining measurement. However, the cell numbers in the eight wells that were counted were similar (Table III).

Enzyme-linked Immunospot Assays. Fresh PBMCs were plated in 96-well polyvinylidene plates (Millipore) that had been precoated with 0.5 µg/ml anti-IFN-γ mAb, 1-DIK (Mabtech). The peptides were added in a volume of 20 µl and then PBMCs were added at 50,000 cells/well in a volume of 180 µl. The final concentration of the peptides was 10 µM. The plates were incubated overnight at 37°C, 5% CO2, and washed with PBS before addition of the second, biotinylated anti-IFN-γ mAb, 7-B6-1 biotin (Mabtech), at 0.5 µg/ml, and incubated at room temperature for 100 min. After washing, streptavidin-conjugated alkaline phosphatase (Mabtech) was added at room temperature for 40 min. Individual cytokine-producing cells were detected as dark spots after a 20-min reaction with 5-bromo-4-chloro-3-indolyl phosphate and nitro blue tetrazolium using an alkaline phosphatase-conjugate substrate (Bio-Rad Laboratories). The number of specific T cells was calculated by subtracting the negative control values. The background was <40/10^6 PBMCs (2 spots/well at 50,000 PBMCs/well) in all cases. Wells which contained >50 spots were not used for accurate quantification. Assays were repeated using lower input numbers of cells as necessary, and in quadruplicate, in order to quantitate responses to individual peptides more accurately.

Intracellular IFN-γ Staining. Intracellular cytokine staining assays were performed as described elsewhere (30,31). In brief, 0.2–1.0 × 10^6 PBMCs were incubated with 4 µM peptide and 1 µg/ml each of the mAbs anti-CD28 and anti-CD49d (Becton Dickinson) at 37°C, 5% CO2 for 1 h, before the addition of 10 µg/ml of brefeldin A (Sigma-Aldrich). After an additional 6-h incubation at 37°C, 5% CO2, the cells were placed at 4°C overnight. PBMCs were then washed and stained with surface Abs anti-CD8 and anti-CD3 (Becton Dickinson) at 4°C for 20 min. PBMCs which were also stained with tetramers were incubated with the tetramer at 4°C for 30 min before the addition of the surface Abs. After washing, the PBMCs were then fixed and permeabilized (Caltag) and anti-IFN-γ mAb was added (Becton Dickinson). Cells were then washed and analyzed.

Staining of lymphocytes was performed by incubating 0.5 × 10^6 PBMCs for 30 min at 4°C with the appropriate tetramer at 0.5 mg/ml, then for an additional 20 min with saturating amounts of peridinine chlorophyll protein-conjugated anti-CD8 mAb and allophycocyanin-conjugated anti-CD4 mAb (Becton Dickinson).
Stained samples were analyzed on a FACSCalibur™ flow cytometer using CELLQuest™ software (Becton Dickinson). Control samples for the tetramer staining were PBMCs from HLA-mismatched HIV-infected persons. Quadrant boundaries for tetramer staining were established by exclusion of >99.97% of control CD8+ T cells.

Results

Comparison of LDA, Enzyme-linked Immunospot, and Tetramer Assay to Quantify CTL Numbers. To determine whether higher numbers of HIV-specific CTLs were detectable by tetramer assays compared with functional assays, as has been demonstrated for other virus-specific CTLs (14, 22–24), initial studies were performed to compare the LDA with the tetramer assay. An example of data from one subject is shown in Fig. 1, A and B. Similar assays performed on a total of 20 subjects (13 children and 7 adults; a median of two comparisons per subject) are compiled in Fig. 2 A. CTL numbers were substantially underestimated by the LDA in comparison with the tetramer assay.

Estimates of antigen-specific cells were also made using a second functional assay, detection of IFN-γ production after peptide stimulation in enzyme-linked immunospot (Elispot) assays (Fig. 1 C). Comparison of these assays demonstrated a very strong association (r = 0.90, P < 0.001; Fig. 2 B), with tetramer assays detecting only a 3.5-fold greater number of antigen-specific cells than the Elispot assay, comparing median values. These data suggest that at least 25–30% of the tetramer-binding cells are functional, to the extent of elaborating IFN-γ in response to specific antigen. The data regarding cytotoxic function of tetramer-binding cells suggest that at least 5–10% of tetramer-binding cells are capable of lysing target cells expressing the appropriate antigen. However, if there are methodological shortcomings underlying one or both of the LDA and Elispot assays, these estimates of CTL functionality may be grossly inaccurate. The relatively poor correlation between the two functional assays of antigen-specific CTLs (Fig. 2 C) suggests that this is indeed the case. The degree of precision of the LDA and the Elispot assays was therefore explored further.

Comparison of CTL Numbers Derived from Tetramer and Intracellular Cytokine Staining. These results from studying persons with chronic HIV infection, as well as data from similar comparisons made in subjects with chronic EBV infection (24), show that the Elispot assay consistently estimates antigen-specific CD8+ T cell numbers at 25–30% of the figures derived from tetramer staining (Fig. 2 B). One explanation for these differences would be that there is a large fraction of tetramer-positive cells that are incapable of producing IFN-γ, and are therefore functionally inert. A second possibility would be that the Elispot assay is simply less sensitive than a flow-based assay, and therefore some IFN-γ-producing cells are below the Elispot detection limit. To differentiate between these possibilities, the Elispot assay was modified to allow quantification by flow cytometry. A comparison was made between the number of cells that could bind tetramer and the number of peptide-stimulated cells detectable by intracellular IFN-γ staining using flow cytometry. In 29 direct comparisons (Table II), the number of CTLs by intracellular IFN-γ staining was a median of 76% of the number of CTLs estimated by tetramer staining (range 40–112%; r = 0.97, P < 0.001; Fig. 2 D).
Not shown are 15 additional comparisons that were undertaken using PBMCs from HIV-infected persons in which responses were undetectable (<0.03% of CD8+ T cells) by either assay. These data also included six comparisons using EBV- and CMV-specific tetramers (gray symbols in Fig. 2 D). Analyzing only the comparisons using HIV-specific tetramers, the correlation coefficient was virtually unaltered (r = 0.96). As one might be more likely to find nonfunctional CD8+ T cells in subjects not on antiretroviral therapy who had been persistently exposed to high levels of viremia, six subjects defined in this study as such were also analyzed separately (shown by filled symbols; Fig. 2 D), again with an unaltered correlation coefficient (r = 0.97) between tetramer-binding cells and IFN-γ-producing cells. (These six subjects were defined using the following conservative criteria: known to have been infected >4 yr; viral load >20,000/ml plasma; and antiretroviral therapy naive.) Thus, even for subjects such as 9354, whose absolute CD4 count had declined over 11 yr (1988 to 1998) from 853/mm3 to 180/mm3, and who was studied at the 12/98 time point at which his viral load was greatest (147,000 copies/ml plasma) before starting antiretroviral therapy, there remained clear evidence of substantial functional activity of CD8+ T cells in response to peptide stimulation (Fig. 3). This assay was performed three times using different aliquots of PBMCs cryopreserved from the same time point, with very similar results: the proportion of tetramer-binding cells detectable by intracellular IFN-γ staining was 65, 76, and 77%, respectively (Fig. 3, and data not shown). Similarly, all PBMCs from a 4-yr-old child, DBN-11 (viral load 867,000), that bound the B42-Gag tetramer appeared to be functional by the intracellular IFN-γ staining assay (Fig. 3). These data support the evidence from comparisons of Elispot assays and tetramer assays (Fig. 2 C) that the great majority of HIV-specific CTLs that bind tetramer are also functional, and can be detected by flow-based assays that measure intracellular IFN-γ production in response to peptide stimulation. By comparison, in the description of Db-GP33-specific CTL activity in CD4 knockout mice chronically infected with LCMV (10), >98% of cells that were capable of binding the corresponding tetramer were nonfunctional.

To address the question of whether the chronically infected subjects described above with persistently high viral loads might nonetheless still be able to generate HIV-specific T helper responses, the identical intracellular IFN-γ assay was used to measure responses to p24 Gag antigen. Consistent with previous studies of Gag-specific T helper responses in HIV infection (11,30), p24 Gag-specific T helper activity was either undetectable or extremely weak in subjects with high viral loads, and only high in persons with low viral loads (Table II and Fig. 3).

Analysis of Antigen-specific T Cells within the LDA Wells Using Tetramers. The cytotoxic functionality of tetramer-binding cells was investigated further.
To determine whether a factor contributing to the underestimation of CTL numbers by LDAs was the presence of antigen-specific CD8+ T cells that were detectable by tetramer, mostly able to elaborate IFN-γ, but incapable of cytotoxic function, a comparison was made, for each of 50 wells, between the percentage of specific lysis observed in the chromium release assay and the level of tetramer staining in each well on the same day as the chromium release assay. This revealed a strong correlation (r = 0.79, P < 0.001) between the specific lysis observed in each well and the level of tetramer-staining cells present in the well (50 wells tested; Fig. 4). This correlation demonstrates the absence of wells that contained tetramer-binding cells but lacked the corresponding degree of cytotoxicity. Tetramer staining of the 33 "negative" LDA wells (that is, those that scored <10% specific lysis) revealed that 19 of these wells in fact contained antigen-specific CD8+ T cells (Fig. 4). Whereas only 17 of the wells that were analyzed registered as positive in the LDA chromium release assay, 36 of the wells contained A*0201-SL9-specific CTLs when assayed by tetramer. Thus, it is clear that a substantial number of A*0201-SL9-specific CTLs are present in the LDA wells but are not detected in the chromium release assay, as the threshold for detection by the cytotoxicity assay is set at 10%. This relatively high cutoff has conventionally been accepted to maintain the specificity of the cytotoxicity assay, but the downside is a loss in sensitivity.

To determine the stability of the proportion of tetramer-staining cells within LDA wells over time, eight of the wells that were analyzed after 15 d of culture were reanalyzed 6 d later. Some of these wells showed marked changes in cellular composition between days 15 and 21 (Table III). Although the proportion of CD8+ cells in the wells had increased only modestly (a 1.5-fold increase) in this short time, absolute numbers of tetramer-staining cells in some wells increased up to 24-fold (well 1), and in one case decreased by >40% (well 5). Thus, it is clear that antigen-specific (tetramer-staining) cells within the LDA wells can increase with longer in vitro culture, but can also be overgrown by cells that do not stain with that tetramer.

Figure 4. Tetramer staining by the A*0201-SLYNTVATL tetramer of cells after 16 d in culture in 50 PFA wells, measured against specific lysis in the chromium release assay performed the previous day. 25 wells were from one PFA using cells from adult subject 161j (A*0201-positive); 25 wells were from a separate PFA from pediatric subject 048-TCH (A*0201-positive). Lymphocytes in control wells cultured for the same length of time (16 d) were from pediatric subject 049-TCH (A*0201-negative), and <0.02% stained with the tetramer. Dashed lines show the threshold for positivity of the wells by the chromium release assay (10% or more specific lysis) and for the tetramer staining (>0.02% of all gated lymphocytes in the well). Inset shows the number of wells that were positive by chromium release assay but negative by tetramer staining (0), the number positive by both (17), etc. Thus, of the 50 wells analyzed, 19 contained tetramer-binding cells but did not score positive in the chromium release assay.

CTL Frequency Estimation by LDA Using Low Numbers of Input Cells per Well. From the data described above, it was hypothesized that a major cause of the underestimate of CTL numbers by the LDA is overgrowth of specific effectors by other cells.
To test this hypothesis, it was reasoned that LDA wells that started with a low input cell number would be less likely to be overgrown, and should provide a closer estimate of the true CTL frequency. To calculate the CTLp at each individual replicate set of cell seedings, as opposed to from the standard seven dilutions, the identical assumptions are made, including that CTLs are distributed in the LDA wells according to the Poisson distribution (see Materials and Methods). From this assumption, and knowing both the fraction of wells scoring "negative" in the LDA for a particular replicate set of cell seedings and the input number of cells used to seed each well, the best estimate of the CTLp can be made for each of the seven dilutions of the LDA and compared with the standard estimate derived from all seven calculations together. In the four sets of assays performed in this way, the LDA using a low input cell number per well (10–40 cells/well) provided a very close approximation to the tetramer- or Elispot-derived estimate of CTL numbers (32, 37, 41; Table IV). We therefore reevaluated standard LDAs from these donors to determine whether there is a negative association between input cell number per well and CTLp estimate. Performing these calculations for each of five LDAs (PBMCs from two donors) showed a clear negative association between CTLp estimate and input cell number per well, with an overall correlation coefficient of r > 0.80 in each case (three replicate LDAs using PBMCs from one donor are shown in Fig. 5, A–C). In Fig. 5 C the estimates of CTL frequency from input cell numbers of <50 cells/well are included and show that the LDA method closely approximates the Elispot assay. These data strongly support the hypothesis that a major factor contributing to the underestimate of CTL numbers by the standard LDA is the large input cell number per well which is adopted, and not functionally inert cells.

Discussion

Use of peptide–MHC tetramers has revealed that even higher levels of CTLs are present in response to acute and chronic virus infections than was previously supposed (5,25,42,43). An important potential explanation for the failure of the high-frequency HIV CTL response to control HIV replication would be that a substantial portion of these tetramer-staining CTLs are phenotypically silent. This phenomenon has been clearly demonstrated in LCMV infection in mice (9,10). However, the data shown here argue against functionally inactive CTLs playing a major role in either pediatric or adult chronic HIV infection.

The conclusion that functionally inert CTLs are not present in HIV infection is supported by numerous findings. First, across a broad range of subjects studied, a very close correlation exists between CTL numbers detectable by tetramer and by Elispot (n = 19, r = 0.90) or intracellular IFN-γ staining assays (n = 29, r = 0.97). The high correlation between the tetramer and the Elispot or intracellular IFN-γ staining assays was clearly present even in subjects who had progressed to AIDS. This result is not compatible with a dissociation between tetramer binding and IFN-γ release in response to antigen. Second, the level of tetramer staining is strongly associated with the level of cytotoxicity observed (r = 0.79), as measured by detailed analysis of 50 LDA wells by tetramer staining. No wells were observed that contained lymphocytes able to bind tetramer but incapable of lytic activity.
Third, modification of the LDA, using very low numbers of cells per well before in vitro culture, consistently and substantially increased the sensitivity of the LDA to levels equivalent to those of the Elispot or tetramer assay. This result implies that, given adequate growth conditions, virtually every antigen-specific cell placed in a PFA well has the capability to proliferate to a level at which it is detectable in the chromium release assay 2–3 wk later. None of these results are consistent with phenotypically silent CTLs playing a substantial role in the course of HIV infection in the 42 children and adults studied. Furthermore, these data are also fully consistent both with those of Ogg et al. (6), who showed a strong negative association between CTL numbers and HIV viral load in chronically infected adults, and with the CD8 depletion studies in SIV-infected macaques (7,8), which directly demonstrated the functionality of SIV-specific CTLs in controlling viral load.

These studies, by the analysis of cells within LDA wells using tetramers, have also highlighted the ease with which standard LDAs can underestimate CTL numbers. Direct evidence has shown that antigen-specific cells can be overgrown by antigenically irrelevant cells. These data explain how the fraction of negative wells in one set of 24-well replicates can be 4% if assayed at one time point, and 100% if assayed a few days later (as illustrated in Fig. 5). However, at low input cell numbers per well these drastic changes were not observed over time. It is interesting to note that in a study of primary EBV infection, LDAs performed using autologous EBV-transformed B lymphoblastoid cell lines as feeders (37) gave very similar results to tetramer staining subsequently performed (42). It is possible that in this particular viral system, the potential for overgrowth by irrelevant T cells may be less of a problem.

These studies have relied mainly on PBMCs from chronically infected children and adults, and it is quite possible that phenotypically silent CTLs may play a more substantial part during acute infection. It has been proposed that perhaps <1% of antigen-specific cells can proliferate adequately in culture in acute virus infections (10), irrespective of competition for space from other cells. Whereas massive Vβ-specific and clonotypic expansions have been demonstrated in acute HIV infection (44,45), functional CTL activity of a corresponding magnitude has in general not been described (46,47). Recent studies in acute hepatitis C infection suggest that CTLs may initially adopt a "stunned" phenotype, unable to operate effectively, and soon to recover as viremia subsides (48). However, it is worth noting that in one description of CTL responses in acute HIV infection (16 d after the onset of symptoms), the frequency of CTLs specific for an HLA-B44 Env epitope as determined by the LDA was as high as 62,500/10^6 PBMCs (49). Also, in the two B8+ subjects analyzed in this study soon after infection, 63 and 77% of B8-Nef tetramer-binding cells were detectable, respectively, by intracellular IFN-γ staining after stimulation of PBMCs with the B8-Nef peptide (Table II). More extensive studies of acutely infected subjects are clearly warranted and are planned in this laboratory, pending the synthesis of the requisite array of peptide–MHC tetramers.
It is possible that the CTL specificities that were not tested in these studies may prove to lack functional activity when analyzed similarly, as has been suggested by data both from human T lymphotropic virus I infection (50) and HIV infection (51; and Hay, M., unpublished data). Thus, it will be important to extend these studies to include more extensive coverage of the detectable HIV-specific CTL responses than the four specificities described here.

The apparently low sensitivity of the LDA, as conventionally performed, has raised questions about the future usefulness of this assay. Even though the LDA can approach the sensitivity of Elispot and tetramer assays when performed under optimal conditions that allow proliferation of the antigen-specific cells of interest, as has been suggested by in vitro expansion of low-frequency CTL clones sorted using tetramers (52), it remains a much more labor-intensive assay with other significant disadvantages, including the use of radioactivity. Epitope mapping by Elispot assay (53) and by flow cytometric detection of intracellular IFN-γ in response to peptide stimulation (31,41) is rapidly reducing the value of CTL clones as a means of defining novel CTL epitopes. However, the LDA remains a functional assay that addresses the central cytotoxic activity of CTLs.

In conclusion, these data show that, in chronic pediatric and adult HIV infection, phenotypically silent CTLs are not a significant cause of persistence of virus in the face of an apparently strong immune response. CTL numbers can be reliably and conveniently estimated in HIV-1 infection using Elispot assays, as Elispot- and tetramer-derived estimates correlate closely (r = 0.90). Intracellular IFN-γ staining of PBMCs by flow cytometry after peptide stimulation provides an even closer estimate of CTL numbers to that obtained by tetramer staining (r = 0.97), and simultaneously allows immunophenotypic characterization of antigen-specific cells. As a method, intracellular IFN-γ staining has greater flexibility, as synthesis of one tetramer per CTL epitope is not practicable. LDAs can also be a reliable method of quantifying CTLs, provided that the conventional methods for performing the assays are modified and only low input cell numbers per well are used. However, LDAs are not a consistent method of CTL enumeration when, as is standard, high input cell numbers are used, as the antigen-specific cells of interest can be unpredictably overgrown by antigenically irrelevant cells. Further comparison of CTL numbers using LDA, tetramer, Elispot, and intracellular cytokine staining assays will be of value in acute HIV infection, in which the relation between tetramer-binding cells and the functionality of those cells has not been established.
Acid Catalyzed Alcoholysis of Sulfinamides: Unusual Stereochemistry, Kinetics and a Question of Mechanism Involving Sulfurane Intermediates and Their Pseudorotation

The synthesis of optically active sulfinic acid esters has been accomplished by the acid catalyzed alcoholysis of optically active sulfinamides. Sulfinates are formed in this reaction either with full or predominant inversion of configuration at the chiral sulfur, or with predominant retention of configuration. The steric course of the reaction depends mainly on the size of the dialkylamido group in the sulfinamides and of the alcohols used as nucleophilic reagents. It has been found that bulky reaction components preferentially form sulfinates with retention of configuration. It has also been demonstrated that the stereochemical outcome of the reaction can be changed from inversion to retention and vice versa by adding inorganic salts to the acidic reaction medium. The unusual stereochemistry of this typical bimolecular nucleophilic substitution reaction, as confirmed by kinetic measurements, has been rationalized in terms of the addition-elimination mechanism, A-E, involving sulfuranes as intermediates which undergo pseudorotation.

Introduction

The mechanism and stereochemistry of nucleophilic substitution reactions at sulfur, SN-S, as well as at other heteroatoms (P, Si, Se, etc.), have been a subject of extensive studies by many research groups in recent decades [1]. Because sulfur may form tetra- or pentacoordinate compounds (sulfuranes) [2,3], the most important question concerning the mechanism of SN-S reactions is whether these reactions occur synchronously, according to an SN2-S mechanism, or stepwise, by an addition-elimination mechanism, A-E, involving sulfuranes as intermediates that are formed by addition of nucleophiles, N, to the reaction substrates (Scheme 1).

Scheme 1. Possible mechanisms for nucleophilic substitution reactions at sulfur.

The second, closely related problem concerns the relationship between the structure of the transiently formed sulfuranes and the stereochemical outcome of nucleophilic substitution reactions. It is now generally accepted that diaxial or diequatorial disposal of the entering, N, and leaving, L, groups in a trigonal bipyramidal structure of the transient sulfurane intermediate should lead to inversion of configuration at sulfur, while the steric course of axial-equatorial substitution is predicted to be retention (Scheme 2).

Scheme 2. Relationship between the structure of transient sulfuranes and stereochemistry of A-E reactions at sulfur.

However, the steric course of SN-S reactions proceeding according to the A-E mechanism may also be affected by permutational isomerization of the sulfuranes. This process, commonly called pseudorotation, consists in an internal ligand reorganization changing the relative positions of axial and equatorial ligands in a trigonal bipyramidal structure. A single pseudorotation process according to the Berry mechanism is shown below (Scheme 3). Since pseudorotation processes are of very low energy (the energy barriers are in the range from ca. 6 to 8 kcal/mol [4,5]), they may have an important influence on the stereochemical outcome of nucleophilic substitution at sulfur, which may vary from inversion to retention and racemization.

Scheme 3. A single Berry pseudorotation process of a sulfurane structure.

The majority of nucleophilic substitution reactions at the stereogenic sulfur atom occur with inversion of configuration.
For example, this steric course has been unequivocally established in the reaction of optically active methyl p-toluenesulfinate, containing 14C in the methoxy group, with methanol catalyzed by trifluoroacetic acid (Scheme 4) [6]. Measurements of the rate of racemization of this sulfinate and of the rate of isotopic methoxy-methoxy exchange revealed that it loses its optical rotation practically twice as fast as it loses the radiolabelled methoxy group. This finding provided clear-cut evidence for full inversion of configuration in the elementary process of the methoxy-methoxy exchange at the sulfinyl sulfur. However, the observation of inversion in the above reaction, as well as in a great number of other SN reactions at sulfur, does not allow one to distinguish between the SN2-S and A-E mechanisms, since both the transition state A and the sulfurane intermediate B proposed for the methoxy-methoxy exchange at the chiral sulfinyl centre explain this steric course. The only conclusion which can be drawn is that the sulfurane B, if it is formed, should decompose before pseudorotation, because all substituents around sulfur are properly placed in the trigonal bipyramidal structure from the viewpoint of apicophilicity.

In contrast to stereoinvertive SN-S reactions, those occurring with retention at the sulfinyl sulfur give more convincing evidence for the operation of the A-E mechanism. In almost all such reactions reported so far, retention at sulfur was convincingly explained by formation of a transient four-membered ring sulfurane intermediate with apical-equatorial arrangement of the entering and leaving groups, which undergoes pseudorotation and then decomposes to the final product with retained configuration [7]. In accord with the microscopic reversibility rule, the pseudorotation of the primarily formed sulfurane intermediate is required to form a new sulfurane with the leaving group in the apical position, however without changing the preferred apical-equatorial disposal of the four-membered ring. The sulfur-oxygen exchange between 18O-labelled (+)-(R)-methyl p-tolyl sulfoxide and dimethyl sulfoxide, proceeding without racemization, i.e., with retention, is the best example of the A-E mechanism discussed above (Scheme 5) [8].

Scheme 5. Steric course and mechanism of 18O/16O exchange in optically active methyl p-tolyl sulfoxide.

In the course of our studies on the static and dynamic stereochemistry of organic sulfur compounds, especially those with the sulfur atom as the sole centre of chirality, we became interested in the reactions of sulfinamides with alcohols catalyzed by acids (Equation (1)):

R-S(O)-NR'2 + R''OH --(H+)--> R-S(O)-OR'' + R'2NH   (1)

We hoped that, based on this reaction, completely unknown at the beginning of our work, a new and general synthetic approach to sulfinates could be devised. Moreover, since the starting sulfinamides were accessible in enantiomeric forms, this reaction should also provide new access to optically active sulfinates. Apart from synthetic aspects, the acid catalyzed alcoholysis of sulfinamides attracted our attention as a model reaction of nucleophilic substitution at the sulfinyl sulfur atom. Examination of its stereochemistry could give further experimental insight into the complex nature of SN-S reactions and new evidence for the addition-elimination mechanism, A-E.
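Returning to the isotope-exchange experiment discussed above (Scheme 4), the factor of two in the racemization-versus-exchange comparison is worth making explicit. The following short derivation — standard stereomutation kinetics, added here for clarity — shows why full inversion at each exchange event forces the racemization rate constant to be exactly twice the exchange rate constant.

```latex
% If every methoxy-methoxy exchange event (rate constant k_ex per molecule)
% inverts the configuration at sulfur, then for the enantiomer concentrations
% [R](t) and [S](t):
\[
  \frac{\mathrm{d}\,\bigl([R]-[S]\bigr)}{\mathrm{d}t}
  \;=\; -2\,k_{\mathrm{ex}}\,\bigl([R]-[S]\bigr)
  \quad\Longrightarrow\quad
  k_{\mathrm{rac}} \;=\; 2\,k_{\mathrm{ex}} .
\]
% The optical rotation, proportional to [R]-[S], thus decays with 2 k_ex,
% while the 14C label, lost once per exchange into the large unlabelled
% methanol pool, decays with k_ex -- the ratio of two observed in [6].
```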
In this paper we wish to report the complete results of our detailed investigations of this reaction, using a broad spectrum of optically active sulfinamides, alcohols and acidic catalysts, and to rationalize the most interesting and unique discovery that its steric course varies from inversion to predominant retention and can be influenced by many factors. Preliminary results of our studies have been reported in two short communications [9,10].

Synthesis of Racemic Sulfinates

At the outset of our studies, the synthetic value of the reaction under discussion was checked using racemic sulfinamides as substrates. The latter were easily prepared by condensation of sulfinyl chlorides with amines. It was found that treatment of a series of racemic sulfinamides 1 and 3 with alcohols in the presence of trifluoroacetic acid afforded the corresponding sulfinates 5 and 7 in excellent yields (Scheme 6). In general, the reactions were carried out at 0 °C or at room temperature using an excess of alcohol and two molar equivalents of trifluoroacetic acid with respect to the sulfinamide. Pure sulfinates were obtained by distillation or column chromatography. The efficient and simple preparation of racemic sulfinates from sulfinamides paved the way for the elaboration of a chiral version of this conversion, in which optically active sulfinamides are used.

Synthesis of Optically Active Sulfinamides

With a view to developing a general synthesis of optically active sulfinates and a detailed examination of the stereochemistry of the sulfinamide→sulfinate conversion, a number of optically active sulfinamides were prepared. Their structures are shown below (Figure 1). The optically active sulfinamides 1a-d were prepared essentially according to the method reported by Montanari et al., from the diastereoisomerically pure (-)-(S)-menthyl p-toluenesulfinate (4) and the appropriate dialkylaminomagnesium bromides [11]. This reaction was demonstrated to occur with inversion of configuration at sulfur (Equation (2)):

(-)-(S)-4 + R2N-MgBr --> (+)-(S)-1 + menthyloxymagnesium bromide   (2)

However, because in our hands this procedure gave results (see Table 1) different from those reported, it seems desirable to describe briefly our own observations. Firstly, we found that this reaction in THF-ethyl ether or ethyl ether solutions is very slow at -45 °C and proceeds at a synthetically acceptable rate only at temperatures above 0 °C. Secondly, the reaction stereoselectivity was found to depend on the reaction temperature and on the structure of the aminomagnesium bromides and of the sulfinamides formed. For example, when the reaction of the sulfinate (-)-(S)-4 with pyrrolidinemagnesium bromide was carried out at 0 °C, the sulfinamide 1d was isolated with [α]D = +215 (88.8% op). The same reaction carried out at room temperature gave 1d with a much lower optical rotation, [α]D = +135 (50.7% op), but in comparable yield. Optical purity values for 1a and 1b were calculated based on reference [11], and those for 1d were determined in this work. The stereoselectivity of the reaction leading to the sulfinamide 1c, although low, was found to be independent of the reaction temperature, and 1c turned out to be optically stable under the reaction conditions. However, our additional experiments showed that it undergoes decomposition in the presence of diisopropylaminomagnesium bromide. Therefore, a shorter reaction time resulted in a higher yield of 1c.
These observations taken altogether are most probably indicative of an addition-elimination mechanism being responsible for the partial retention of configuration at sulfur during the replacement of the menthoxy group by the bulky diisopropylamino moiety. The reaction of the sulfinate (-)-(S)-4 with dimethylaminomagnesium bromide carried out under our experimental conditions afforded the sulfinamide 1a in practically racemized form. We believe that in this case the racemization of 1a in the reaction medium is due to a competitive symmetrical exchange of the dimethylamino group at the sulfinyl sulfur. Optically pure (+)-(S)-benzenesulfinamide (2a) and (+)-(S)-N-methylbenzenesulfinamide (2b) were obtained by reduction of the corresponding sulfoximides with aluminum amalgam according to, and in full agreement with, the procedure reported by Johnson [12,13]. The optically active N,N-dimethyl derivative 2c was prepared by methylation of the lithium salt of (+)-(S)-2b with methyl iodide at low temperature (Scheme 7), however with a very low optical purity (15% op).

Interconversion of Sulfinamide Enantiomers

The synthesis of optically active sulfinamides 1 from (-)-(S)-menthyl p-toluenesulfinate (4) described above afforded only the (+)-(S)-enantiomers of 1. Therefore, in the course of our study and for the sake of its completeness, it was desirable to find a way to convert the sulfinamides (+)-(S)-1 into their (-)-(R)-enantiomers. This would avoid the tedious preparation of the diastereoisomeric (+)-(R)-sulfinate 4. Stimulated by the original work of Johnson [14] on the interconversion of sulfoxide enantiomers, involving their O-methylation and subsequent alkaline hydrolysis, we decided to extend this approach to the optically active sulfinamides 1. In view of the fact that racemic N-p-tolylsulfinylpyrrolidine (1d) forms relatively stable O-alkoxysulfonium salts [15,16], the sulfinamide (+)-(S)-1d, [α]D +215, was reacted with an excess of methyl triflate in nitromethane to give the corresponding methoxy-N-pyrrolidinyl-p-tolylsulfonium salt, which was isolated and, in the crude state, hydrolysed under mild alkaline conditions, affording the (-)-(R)-sulfinamide 1d, [α]D -175. It is necessary to point out here that the absolute configuration and optical purity of the starting sulfinamide (+)-(S)-1d were established by its conversion, accompanied by inversion of configuration at sulfur, into the well-known (-)-(S)-methyl p-tolyl sulfoxide, with an optical rotation value [α]D -120, which corresponds to 81% optical purity. Hence, the optical purity of the sulfinamide (-)-(R)-1d obtained is equal to 66%. Most probably the loss of stereoselectivity in the alkaline hydrolysis of the sulfonium triflate is due to a competitive attack of the hydroxide anion at the methoxy carbon, giving back the starting (+)-(S)-1d. The reactions discussed above are summarized in Scheme 8.

Stereoselective Synthesis of Optically Active Sulfinates and Stereochemistry of Their Formation

Having in hand the enantiomerically enriched sulfinamides 1 and 2, we could pursue the main goal of the present work, i.e., the synthesis of optically active sulfinates and the determination of the stereochemistry of their formation in the acid catalyzed alcoholysis of sulfinamides shown in Equation (3). In general, the reaction of the optically active sulfinamides 1 and 2 with various alcohols was carried out at room temperature using a large excess of alcohol and two molar equivalents of the acidic catalyst.
The progress and termination of the alcoholysis were followed polarimetrically. The isolated, analytically pure sulfinates 5 and 6 were characterized by IR and NMR spectroscopy. Their optical purity and absolute configuration were estimated from literature data or via their stereospecific conversion into optically active methyl p-tolyl sulfoxide [17-19].

In the first series of experiments, (+)-(S)-N,N-diethyl p-toluenesulfinamide (1b) was reacted with primary, secondary and tertiary alcohols in the presence of strong acids to give the corresponding optically active sulfinates 5, p-TolS(O)OR. In all the investigated cases the obtained 5 exhibited a negative sign of optical rotation, which points to the S-configuration at sulfur and formation with inversion of configuration. An inspection of the results of this set of experiments, summarized in Table 2, revealed that the stereoselectivity of the conversion of (+)-(S)-1b into the sulfinates (-)-(S)-5 is markedly dependent on the structure of the alcohols. With primary alcohols, except benzyl alcohol, full or almost full stereoselectivity was observed. The isopropanolysis reaction gave the corresponding sulfinates with stereoselectivities from 58% to 84%; interestingly, the stereoselectivity depended to some extent on the nature of the acidic catalyst. When t-butanol was used, the reaction stereoselectivity was quite low. A similar gradual decrease in the reaction yields was found on going from primary to tertiary alcohols. Moreover, the reaction rates changed in the same direction. For example, the termination time of methanolysis, estimated polarimetrically, is 45 min; isopropanolysis requires 2.5 h for completion, while the reaction with t-butanol is finished after 8 h. These features of the alcoholysis of (+)-(S)-1b are due to its partial racemization and decomposition under the acidic reaction conditions. For instance, when the reaction of (+)-(S)-1b, [α]D +96 (78.5% op), with t-butanol was quenched at half-conversion, the sulfinamide was recovered with a much lower optical rotation, [α]D +45 (39.8% op), whereas the sulfinate 5i was isolated with almost the same optical rotation, [α]D -32.5 (25% op), as when the reaction was complete. Moreover, it was found that the sulfinamide (+)-(S)-1b in the presence of strong acids undergoes very fast racemization and decomposition in nonpolar solvents (CCl4, CHCl3), which occur more slowly in alcohols.

From the viewpoint of dynamic stereochemistry, much more interesting results were obtained when (+)-(S)-N,N-diisopropyl p-toluenesulfinamide (1c) was used as the substrate in the acid catalyzed alcoholysis. In contrast to (+)-1b, (+)-1c was found to be optically and chemically stable under the acidic reaction conditions. In Table 3 selected results of this series of experiments are summarized. Thus, with primary alcohols (MeOH, EtOH, nPrOH) the laevorotatory (S)-sulfinates 5a, 5b and 5c were formed with predominant inversion of configuration. However, the reaction of (+)-(S)-1c with isopropanol, its hexadeutero and hexafluoro analogues, cyclohexanol and cyclopentanol unexpectedly afforded the corresponding dextrorotatory sulfinates 5d, 5d', 5d'', 5j and 5k with predominant retention of configuration. The percentage of retention was especially high with cyclohexanol (74.5%).
These findings clearly show that steric factors in the attacking alcohol and in the departing dialkylamino group exert an important influence on the stereoselectivity and, above all, on the steric course of the investigated reaction. Therefore, the combination of a sterically hindered alcohol as nucleophile and a bulky leaving diisopropylamino group is mainly responsible for the reversal of stereochemistry from inversion to retention.

In an extension of the present work, the reaction of the optically active benzenesulfinamides (+)-(S)-2a-c with methanol and ethanol catalyzed by trifluoroacetic acid was investigated. As the results collected in Table 4 show, the sulfinates (-)-6 were formed with inversion of configuration and a variable degree of stereoselectivity. Full inversion of configuration was observed with (+)-(S)-N,N-dimethylbenzenesulfinamide (2c).

Having in hand both enantiomerically enriched N-p-toluenesulfinylpyrrolidines (+)-(S)-1d and (-)-(R)-1d, prepared as shown in Scheme 8, we were also able to determine the steric course of their methanolysis. It turned out that in this case the corresponding enantiomeric sulfinates 5a were formed in a stereospecific way with inversion of configuration. It is worth pointing out that the synthesis of both enantiomeric methyl sulfinates 5a (Scheme 9) demonstrates how they can be prepared starting from only one form of the diastereoisomeric menthyl p-toluenesulfinate, (-)-(S)-4.

The stereochemistry of the acid-catalyzed alcoholysis of sulfinamides may also be affected by the addition of silver perchlorate and other inorganic salts. In a preliminary experiment we found that the isopropanolysis of (+)-(S)-N,N-diethyl p-toluenesulfinamide (1b), catalyzed by trifluoroacetic acid and carried out in the presence of silver perchlorate, occurred with a higher stereoselectivity (92% inversion) than that observed in its absence (84.7% inversion). Stimulated by this observation, a detailed study was undertaken of the effect of added inorganic salts on the steric course of the acid catalyzed alcoholysis of the sulfinamide (+)-(S)-1c (Equation (4)). The results are outlined in Table 5.

(+)-(S)-1c + ROH --(CF3COOH, AgClO4)--> 5   (4)

As is seen, silver perchlorate favours the formation of the sulfinates 5 with inversion of configuration. The most impressive change was observed with isopropanol and cyclohexanol, which reacted with (+)-(S)-1c in the absence of this inorganic salt with prevailing retention of configuration. The effect of silver perchlorate discussed above prompted us to investigate the stereochemistry of the acid catalyzed reaction of the sulfinamide (+)-(S)-1c with isopropanol in the presence of other inorganic salts (Equation (5)). The results of this set of experiments are collected in Table 6.

(+)-(S)-1c + i-PrOH --(CF3COOH, inorganic salt KA)--> 5d   (5)

An inspection of the results collected in Tables 5 and 6 demonstrates that the added inorganic salts are able to radically change the overall stereochemistry of the investigated reaction, and that both the cation (K) and the anion (A) play an important role in this regard. In other words, a new tool was found which allows us to control and design the reaction stereochemistry. In contrast to the significant influence of inorganic salts, the reaction stereochemistry was found to be only slightly solvent dependent (see Equation (6) and Table 7).

(+)-(S)-1c + i-PrOH --(CF3COOH, solvent)--> 5d   (6)

Although only four solvents were tested, it seems that polar solvents may favour retention of configuration.
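The optical-purity bookkeeping used throughout the preceding paragraphs is simple proportionality to the maximum specific rotation. The short sketch below — added for illustration, using only rotation values quoted above — reproduces the 66% optical purity given earlier for (-)-(R)-1d; note that the maximum rotation of 1d is inferred from the quoted data, not an independently measured value.

```python
def optical_purity(alpha_obs: float, alpha_max: float) -> float:
    """Optical purity (%) as the ratio of observed to maximum specific rotation."""
    return 100.0 * abs(alpha_obs) / abs(alpha_max)

# (+)-(S)-1d with [a]D +215 was assayed at 81% op via its conversion to
# methyl p-tolyl sulfoxide ([a]D -120); this implies a maximum rotation of:
alpha_max_1d = 215 / 0.81          # ~265 (inferred from the quoted values)

# (-)-(R)-1d, isolated with [a]D -175, then corresponds to:
print(f"{optical_purity(-175, alpha_max_1d):.0f}% op")  # -> 66% op, as stated
```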
Reaction Kinetics

The observation of the unique stereochemistry of the acid catalyzed alcoholysis of the optically active sulfinamides 1, which may vary from inversion to predominant retention of configuration at the stereogenic sulfur atom, and its sensitivity to internal (the structure of both reactants) and external (the presence of inorganic salts) factors, prompted us to determine the reaction kinetics as an integral part of the present study aimed at elucidating the reaction mechanism. Two model reactions were chosen for the kinetic investigations, i.e., the isopropanolysis of (+)-(S)-N,N-diethyl p-toluenesulfinamide (1b) and of N,N-diisopropyl p-toluenesulfinamide (1c) in the presence of trifluoroacetic acid. The latter was used in two-fold molar excess (0.11752 mol/L) with respect to the sulfinamide (0.05876 mol/L), and both reactions were carried out in isopropanol as solvent, used in 200-fold molar excess. The progress of the isopropanolysis reaction was followed polarimetrically. The calculated pseudo-first-order rate constants at various temperatures (298-318 K) are listed in Table 8. For comparison purposes, the pseudo-first-order rate constant of the reaction of (+)-(S)-N,N-dimethyl p-toluenesulfinamide (1a) at 310 K was also determined. Based on the variable temperature measurements, the energy and entropy of activation (at 25 °C) have been calculated and are shown in Table 9.

Table 9. Activation energy Ea and entropy of activation for the isopropanolysis of sulfinamides 1b and 1c catalyzed by trifluoroacetic acid.

The values of the energy and entropy of activation are characteristic of a typical bimolecular substitution reaction. The gradual decrease in the rate constants measured at 310 K on going from the isopropanolysis of 1a (k = 16 ± 0.35) to 1b (k = 5.22 ± 0.1) and to 1c (k = 0.935 ± 0.04) indicates that an increase of steric bulk at the amido-nitrogen atom is responsible for these changes. Similarly, the rate constant (at 298 K) of the isopropanolysis of 1b (k = 2.32 ± 0.08) is much smaller than that for the methanolysis of 1b (k = 33 ± 1.2) determined at the same temperature. In this case, the difference in rate constants is due to the introduction of steric bulk into the reacting alcohol as nucleophilic reagent. Such a relationship between reaction rate constants and steric hindrance is typical of bimolecular nucleophilic substitution reactions.

In addition to the calculation of the rate constant for the reaction of (+)-(S)-1b with methanol, which plays here the dual role of nucleophile and solvent, the corresponding rate constant in deuterated methanol, CH3OD, was estimated. This allowed us to calculate the solvent kinetic isotope effect:

k(CH3OH)/k(CH3OD) = 1.45   (7)

This value of the solvent kinetic isotope effect also points to a bimolecular reaction mechanism. Moreover, it indicates that protonation is the first and fast reaction step and does not determine the reaction rate. Interestingly, a very similar value of the solvent kinetic isotope effect was reported by Tillet, who investigated the hydrolysis of N-aryl arenesulfinamides under acidic conditions [20]. Moreover, the recent investigation of the hydrolysis rates of N-alkyl and N-aryl methanesulfinamides led the authors [21] to the conclusion that if nitrogen protonation does occur, it is not the rate-limiting step.
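The activation parameters in Table 9 follow from a linear fit of ln k against 1/T. Since Table 8 itself is not reproduced in this excerpt, the sketch below uses placeholder rate constants — only the 298 K and 310 K values for 1b are quoted in the text; the remaining entries are invented for illustration — to show the standard Arrhenius regression.

```python
import math

# Hypothetical pseudo-first-order rate constants k(T) for the isopropanolysis
# of 1b, in the relative units used in the text; the 305 K and 318 K entries
# are placeholders, not values from Table 8.
data = [(298.0, 2.32), (305.0, 3.6), (310.0, 5.22), (318.0, 9.1)]  # (T in K, k)

R = 8.314  # gas constant, J mol^-1 K^-1

# Linear least squares on ln k = ln A - Ea / (R * T).
xs = [1.0 / T for T, _ in data]
ys = [math.log(k) for _, k in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
Ea = -slope * R / 1000.0   # activation energy in kJ/mol
lnA = ybar - slope * xbar  # intercept, ln of the pre-exponential factor
print(f"Ea ~ {Ea:.0f} kJ/mol, ln A ~ {lnA:.1f}")
```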
In order to rationalize our most interesting observation of predominant retention of configuration at sulfur in the reaction of the sulfinamide (+)-(S)-1c with isopropanol catalyzed by trifluoroacetic acid, we took into consideration a possible two-step mechanism for this reaction involving the formation of a mixed anhydride as an intermediate product (Equation (8)). Assuming that its formation and subsequent isopropanolysis could each occur with inversion of configuration, the sulfinate (+)-(R)-5d should be formed with overall retention of configuration. To support or rule out this mechanistic possibility, the rate constant and the steric course of the reaction were determined at various concentrations of added sodium trifluoroacetate (Equation (9)). It was anticipated that the presence of the trifluoroacetate anion should facilitate the formation of the mixed anhydride and increase the percentage of retention. As can be seen in Table 10, the results of the above kinetic measurements are not consistent with the hypothesis of a two-step mechanism involving a mixed anhydride and double inversion. Therefore, the diverse stereochemistry of the reaction under discussion is most probably due to a competition between the inversion and retention processes.

Discussion

Before discussing the mechanism of the acid-catalyzed alcoholysis of sulfinamides it is necessary to emphasize that the stereochemistry of this reaction, studied with optically active sulfinamides, shows unique features. Namely, it was found that optically active sulfinates are formed with full or predominant inversion, or with predominant retention, of configuration at the stereogenic sulfur. The predominant retention of configuration was observed with bulky dialkylamido groups in the sulfinamides and with sterically demanding alkyl substituents in the alcohols. Moreover, the stereochemical outcome of the alcoholysis reaction may be changed from inversion to retention and vice versa by adding inorganic salts to the reaction medium. To a lesser extent the steric course is influenced by the nature of the acid catalysts and the solvents. On the other hand, in contrast to the unusual stereochemistry, kinetic measurements revealed that the acid-catalyzed alcoholysis of sulfinamides is a typical bimolecular substitution reaction at sulfur and that protonation is a fast and not rate-determining step. Although the sulfinamide molecule may be protonated on the nitrogen and oxygen atoms, it is evident that the nitrogen atom is protonated, because in this way a leaving dialkylammonium group is created. Our comparative studies on the spectral properties of neutral and protonated sulfinamides led us to the same conclusion [22]. All the stereochemical observations on our reaction summarized above may best be rationalized in terms of the addition-elimination mechanism, A-E, involving sulfurane intermediates that are able to undergo pseudorotation. Theoretically, addition of an alcohol to the protonated sulfinamide may result in the formation of twenty chiral sulfuranes interconnected by thirty pseudorotations. Four sulfuranes are formed by nucleophilic attack of an alcohol on the four different walls of the protonated (S)-sulfinamide tetrahedron. Another six sulfurane structures result from the attack of an alcohol on the six edges of the tetrahedron. The remaining ten sulfuranes are the enantiomeric structures which may be derived from the (R)-sulfinamide. All these twenty sulfuranes are in equilibrium due to a very low energy barrier for pseudorotation.
They are displayed in the form of a hexaasterane graph, originally proposed by Mislow [23] for pentacoordinate phosphoranes, which we applied here to discuss the stereochemical outcome (retention or inversion) of the alcoholysis of sulfinamides (Figure 2); a short enumeration verifying the twenty-sulfurane, thirty-pseudorotation count is given at the end of this subsection. As it is now generally accepted that in nucleophilic substitution reactions apical entry and apical departure are preferred over their equatorial counterparts [24], only the four sulfuranes (1, 6, 8, and 14) resulting from attack on the four walls remain as candidates for the initial products of addition of an alcohol to the (S)-sulfinamide (Scheme 10). In further considerations we assumed that, after addition, the negatively charged sulfinyl oxygen atom is protonated. Among these four structures, the sulfurane 14 should have the highest probability of formation, because its arrangement of substituents in a trigonal bipyramid is optimal from the viewpoint of the apicophilicity of the ligands. Due to the diapical disposition of the entering alkoxy group and the leaving protonated dialkylamido group, its direct decomposition should afford the sulfinic acid ester 5 as a substitution product with inverted configuration at sulfur. In fact, this steric course has been observed for the reaction of alcohols with (+)-(S)-N,N-diethyl p-toluenesulfinamide (1b) and with (+)-(S)-benzenesulfinamide (2a), (+)-(S)-N-methyl benzenesulfinamide (2b) and (+)-(S)-N,N-dimethyl benzenesulfinamide (2c). Interestingly, the acid-catalyzed methanolysis of both enantiomers of N-p-toluenesulfinylpyrrolidine (1d) also occurred with full inversion of configuration. However, as described earlier, the reactions of (+)-(S)-N,N-diisopropyl p-toluenesulfinamide (1c) with secondary alcohols gave the corresponding sulfinates 5 with predominant retention of configuration. In this case, in the initially formed sulfurane 14 the apical positions are occupied by two bulky groups, namely the protonated diisopropylamino group, having a tetrahedral structure, and the bulky alkoxy group. It is reasonable to expect that steric repulsive interactions between the latter groups and the equatorial substituents (a-e angle ~90°) force the sulfurane 14 to pseudorotate to a new trigonal bipyramidal structure in which a number of unfavourable interactions are diminished. Thus, the pseudorotation of 14 using the lone electron pair as a pivot leads to the sulfurane 15, where the two bulky groups, the dialkylammonium and the alkoxy substituent, are placed in equatorial positions (e-e angle ~110°), and the steric interactions of apical and equatorial substituents are smaller. The next pseudorotation of 15, with the dialkylammonium group as a pivot, results in the formation of the sulfurane 3. In this structure the unfavourable apical position of the lone electron pair is compensated by the favourable apical placement of the strongly apicophilic alkoxy group. To complete the retention pathway it is necessary to put the departing dialkylammonium moiety into an apical position via pseudorotation of 3. The resulting sulfurane 9 decomposes to the sulfinate 5 with retention of configuration. The most probable and shortest pathways for inversion and retention are shown in Scheme 11.
Scheme 11. Two competing inversion and retention pathways in the acid-catalyzed alcoholysis of sulfinamides 1.
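The twenty-sulfurane, thirty-pseudorotation count quoted above can be verified by a short enumeration: treating the sulfurane as a trigonal bipyramid carrying five distinct "ligands" (the four substituents plus the lone electron pair) and connecting arrangements by Berry pseudorotations yields exactly 20 stereoisomers and 30 interconversions. The sketch below is our own illustration of this combinatorial fact, not code from the original work.

```python
from itertools import permutations

LIGANDS = "ABCDe"  # four substituents plus the lone pair 'e'

# Positions: indices 0,1 apical; 2,3,4 equatorial.
# Proper rotations of the trigonal-bipyramid skeleton (group D3, order 6):
ROTATIONS = [
    (0, 1, 2, 3, 4),  # identity
    (0, 1, 3, 4, 2),  # C3 about the apical axis
    (0, 1, 4, 2, 3),  # C3 squared
    (1, 0, 2, 4, 3),  # C2 through equatorial position 2
    (1, 0, 4, 3, 2),  # C2 through equatorial position 3
    (1, 0, 3, 2, 4),  # C2 through equatorial position 4
]

def canonical(arr):
    """Representative of an arrangement modulo proper rotations
    (enantiomers therefore count as distinct, as in the text)."""
    return min(tuple(arr[i] for i in rot) for rot in ROTATIONS)

isomers = {canonical(p) for p in permutations(LIGANDS)}

def berry(arr, pivot):
    """One Berry pseudorotation with the equatorial ligand at index
    'pivot' (2, 3 or 4) as the pivot: the two apical and the two other
    equatorial ligands exchange roles."""
    others = [arr[i] for i in (2, 3, 4) if i != pivot]
    return canonical((others[0], others[1], arr[pivot], arr[0], arr[1]))

edges = {frozenset((iso, berry(iso, p))) for iso in isomers for p in (2, 3, 4)}

print(len(isomers), len(edges))  # -> 20 30
```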
The important effect of the added inorganic salts on the steric course of the acid-catalyzed alcoholysis of sulfinamides is, at present, very difficult to rationalize and requires further studies. Although it is evident that sulfinamides and their protonated species may form complexes with inorganic salts, their structures are unknown and sometimes hard to predict. As in the case of protonation of sulfinamides, the salt cations may be coordinated to three different sulfinamide sites: sulfur, oxygen or nitrogen (Scheme 12).
Scheme 12. Preferred steric course of the acid-catalysed alcoholysis of sulfinamides in the presence of silver perchlorate.
Most probably silver perchlorate, which has been found to strongly prefer the inversion pathway, is coordinated to the protonated sulfinamide 1 in such a way that the silver cation, as a soft metal ion, is bound to sulfur via its electron pair, while the perchlorate anion, still being under the control of the silver cation, forms a hydrogen bond with the ammonium nitrogen atom. Addition of an alcohol to this complex affords the sulfurane 14' with the same mode of silver perchlorate coordination. In this way, the energy barrier for pseudorotation of 14' is increased as compared with that of the uncomplexed 14, and the direct apical departure of the dialkylammonium moiety is facilitated, affording the sulfinates 5 with inverted configuration. However, the effect of other salts on the stereochemical outcome of our reaction is still obscure and should be investigated. Finally, it is necessary to point out that the formation of sulfurane intermediates has been postulated not only in nucleophilic substitution reactions at the stereogenic sulfur atom [25,26] but also in many diverse chemical reactions [27-32] and enzymatic biotransformations [33] of organic sulfur compounds.

General

Melting and boiling points are uncorrected. THF was distilled over K/benzophenone and benzene was distilled over Na wire, both immediately before use. Chloroform was distilled over P2O5 and stored over anhydrous Na2CO3. Thin layer chromatography (TLC) was conducted on Silica Gel 60 F254 TLC plates purchased from Merck (Darmstadt, Germany). Column chromatography was performed with Merck Silica gel (200-300 mesh). NMR spectra were recorded at 20 °C with Jeol C 60 HL, Tesla BS-487 C and Bruker HX 90 (Karlsruhe, Germany) instruments. 1H-NMR chemical shifts are reported relative to TMS as internal standard. IR spectra were recorded with Specord 71 IR Carl Zeiss (Jena, Germany) and Perkin-Elmer spectrophotometers. Optical rotations were measured at 20 °C using a Perkin-Elmer 141 photopolarimeter. All reactions under anhydrous conditions were carried out under a dry argon atmosphere. Elemental analyses were done in the Microanalytical Laboratory of the institute. Correct microanalysis data (H, ±0.3%; C, ±0.4%; S, ±0.4%) were obtained for all new compounds prepared in this work.

General Procedure

To a stirred solution of n-propylmagnesium bromide (0.03 mol) in ethyl ether (50 mL), a solution of the proper amine (0.03 mol) in ethyl ether (20 mL) was added at room temperature. After 20 min, a solution of (-)-menthyl p-toluenesulfinate (4) (0.01 mol) in ethyl ether (20 mL) was added at the given temperature. The reaction mixture was stirred for the appropriate time (see below). Then, the reaction solution was washed with a saturated aqueous solution of NH4Cl (2 × 40 mL), a 3% aqueous solution of HCl (1 × 20 mL) and a 5% aqueous solution of Na2CO3 (2 × 20 mL). The organic layer was dried over MgSO4 and the solvent evaporated. The reaction product 1 was purified by column chromatography. Analytically pure products 1 were characterized by 1H-NMR and IR spectroscopy.
Enabling Low-Latency Bluetooth Low Energy on Energy Harvesting Batteryless Devices Using Wake-Up Radios

With the growth of the number of IoT devices, the need for changing batteries is becoming cumbersome and has a significant environmental impact. Therefore, batteryless and maintenance-free IoT solutions have emerged, where energy is harvested from the ambient environment. Energy harvesting is relevant mainly for devices that have a low energy consumption, in the range of thousands of micro-watts. Bluetooth Low Energy (BLE) is one of the most popular technologies and is highly suitable for such batteryless energy harvesting devices. Specifically, the BLE friendship feature allows a Low Power Node (LPN) to sleep most of the time. An associated friend node (FN) temporarily stores the LPN's incoming data packets. The LPN wakes up and periodically polls its FN to retrieve the stored data. Unfortunately, the LPNs typically experience high downlink (DL) latency. To resolve the latency issue, we propose combining the batteryless LPN with a secondary ultra-low-power wake-up radio (WuR), which enables it to always listen for an incoming wake-up signal (WuS). The WuR allows the FN to notify the LPN when new DL data is available by sending a WuS. This removes the need for frequent polling by the LPN, and thus saves the little valuable energy available to the batteryless LPN. In this article, we compare the standard BLE duty-cycle-based polling and WuR-based data communication between an FN and a batteryless energy-harvesting LPN. This study allows optimising the LPN configuration (such as capacitor size and polling interval) based on the packet arrival rate, the desired packet delivery ratio and the DL latency at different harvesting powers. The results show that WuR-based communication performs best for high harvesting power (400 µW and above) and supports Poisson packet arrival rates as low as 1 s with maximum PDR using a capacitor of 50 mF or more.

Introduction

The Internet of Things (IoT) has managed to make the world a connected place with a growing number of devices. It is projected that by 2022 the Internet will connect 14.6 billion IoT devices [1]. With this growth, there is a need to maximize the devices' energy efficiency in order to maximize their lifetime. Usually, these devices use batteries as the primary energy source. However, battery replacement for such a large number of IoT devices is not only impractical but also impacts the environment due to the harmful chemicals that discarded batteries can leak into the soil. To solve this problem, environment-friendly capacitors and energy harvesters can replace the batteries. Such batteryless devices have applications in tracking goods in warehouse logistics, monitoring environmental conditions, wildlife monitoring, and as in-body devices. Capacitors support a vast number of charging cycles and thus have a much longer lifetime than batteries. However, the much smaller energy density of the capacitor and the unpredictable availability of harvested energy result in intermittent on-off behaviour of the device, as shown in Figure 1. When the capacitor voltage drops below the minimum operating voltage (V_off), the device will turn off. Therefore, the device needs an energy harvester to continuously replenish the energy stored in its capacitor. A variety of environmental sources, such as light and motion, can be used to harvest energy.
As shown in Figure 1, after the device turns off, it will turn on again only upon reaching a predefined turn-on voltage V_on. Bluetooth Low Energy (BLE), being a short-range energy-efficient communications technology, is highly suitable for batteryless devices [2]. The BLE specification [3] already provides a friendship feature that allows its Low Power Nodes (LPNs) to save energy by keeping themselves in sleep mode or turned off most of the time. LPNs wake up only to transmit or receive data packets. For sending uplink (UL) data packets, the LPN can broadcast them at any time. To receive the downlink (DL) data that are temporarily buffered at the friend node (FN), the LPN can start a poll process (request/response). The polling happens periodically (based on a predefined duty cycle) to receive any incoming data packets. This can reduce the overall energy consumption but increases the DL latency. Additionally, the duty-cycled polling can also lead to wasted energy by sending and receiving poll requests/responses when no data is buffered. Therefore, to reduce the DL latency and superfluous polling, it is required to align the LPN's polling with the moments when the FN receives incoming packets. Such coordination can be accomplished by an additional secondary wake-up radio (WuR). As the WuR's power consumption (a few tens of µW) is many orders of magnitude lower than the main radio's (hundreds of mW), it can be kept in listening mode (switching the main radio to sleep mode) even when the device is powered by harvested energy [4,5]. This allows the FN to notify the LPN when a DL packet is available by sending a wake-up signal (WuS). The reception of a WuS then triggers the LPN to start the standard friendship polling process to receive the DL data. As shown in Figure 2, there can be two types of batteryless LPNs. LPN-A, connected with FN-X, is the standard node without a WuR. LPN-B is WuR-based and can receive the WuS from its associated FN-Y. An FN can support friendship with a maximum of seven LPNs (of either type) simultaneously. Each FN maintains multiple buffers to store its corresponding LPNs' data packets. A small amount of additional power consumption due to the WuR can have a significant effect on the energy availability in a device whose harvesting power is in the order of tens to hundreds of µW. To the best of our knowledge, this work is the first to investigate the network performance and requirements for a batteryless LPN in combination with a WuR. It is necessary to know the optimal capacitance, for distinct harvesting powers, at which the LPN can perform its operations without experiencing an outage. In this article, we evaluate the optimal capacitor size and polling interval for an LPN at different harvesting powers, by maximising the packet delivery ratio (PDR) and minimising the DL latency. We also evaluate the possible benefits of integrating a WuR in the LPN and compare the system with the standard polling approach. The outline of this article is as follows. In Section 2, we provide an overview of the related literature. In Section 3, we present an introduction to the friendship feature, explaining the communication between the LPN and the FN. Both types of LPNs, the standard one without a WuR and the modified WuR-enabled LPN, are considered in the discussion. Additionally, a model of the batteryless device is presented to calculate its voltage over time. The system performance is analysed in Section 4, and Section 5 presents the conclusion and future work.
Related Work

Energy harvesting has been extensively explored to support sustainable operation of IoT systems. Various types of ambient energy have been utilized, such as radio frequency (RF), solar and wind energy. An overview of energy harvesting technologies for various applications is presented in [6-8]. Meli et al. [9,10] demonstrated the suitability of the BLE protocol for a battery-free IoT device. They concluded that it is possible to use the BLE wireless standard in combination with energy harvesters. They showed that battery-free BLE devices powered with solar cells in a room or building environment can broadcast pre-programmed information such as GPS coordinates. Batteryless BLE prototypes for smart building applications have also been presented, making use of ambient light energy [11] or RF energy harvesting [12]. Sanislav et al. [13] implemented a proof-of-concept design of a BLE device based on a wireless energy harvesting element. The node, equipped with a 50 mF capacitor, is charged by an RF energy harvester module harvesting from a GSM mobile phone 5 m away. It can take measurements once every 30 s. Brunecker et al. achieved up to 32 mW of harvested power using a 6 dBi gain transmitter antenna placed 5 cm from the harvesting receiver, and up to 1.5 mW at a distance of 40 cm [14]. Zhong et al. [15] implemented a design of an implantable batteryless bladder pressure monitor system that monitors bladder storage in real time and transmits the feedback signal to the external receiver through BLE. They use a four-coil wireless energy transmission method, which supports a power transmission range of up to 7 cm. Another solution is a batteryless BLE beacon powered by a customized water leak sensor, as proposed by Witham et al. [16]. They considered a peak short-circuit harvesting current of 8.1 mA and a peak current requirement of 8.25 mA during radio events of the BLE beacon. A capacitor of 3.9 mF thus enabled the BLE beacon to transmit the data for a short time. All the works mentioned above focus only on UL data. In contrast, our work studies the ability of a batteryless device to receive DL data using the friendship feature. We also focus on evaluating the optimal capacitor size required to receive data packets for different harvesting power ranges. A control loop system has been proposed in which the nodes can request the energy source to provide energy, enabling them to replenish their storage (capacitor or battery) [17]. This method can ensure a certain level of quality of service for an IoT application. The authors also present a prototype using BLE nodes equipped with a photovoltaic energy harvester, which communicate with and request a recharge from an indoor smart lighting system. Detailed surveys on WuR hardware and protocols are given in [18,19]. Several researchers have presented designs coupling a low-power WuR with BLE to explore its potential. Several WuRs have also been implemented to be triggered using BLE packets [20-22]. Giovanelli et al. [23] evaluated the possible benefits of integrating a WuR in the BLE protocol stack. They observed that the use of a WuR reduces energy consumption and DL latency. The WuR decreases the DL latency by up to 40% in the case of connection-oriented communication when the number of devices is large (100+), while with few devices the traditional approach performs better. Mikhaylov et al.
[24] demonstrated that WuR-based BLE can outperform the classic BLE solution (without WuR) if the maximum data delivery latency tolerable by the application does not exceed 2.1 s. Sanchez [25] also showed that WuR-based BLE performs better than classic BLE for infrequent data transfers. These tests were performed for battery-powered nodes, whereas our work focuses on batteryless nodes. This is expected to impact the results and conclusions significantly. Specifically, the low harvesting power density combined with the added power consumption of a WuR can worsen the intermittent behaviour of a batteryless device. Other works that have integrated WuR capabilities in BLE are reported in [21,26,27]. These works focused on hardware design aspects of WuR integration, while our work looks at protocol aspects instead. Liu et al. [28] presented an RF-based, passive-WuR-enabled batteryless node. They observed that the energy harvested within 100 ms at a distance of 1 m from the RF energy transmitter is sufficient to transmit and receive 40 B long beacon messages over a range of 3 m. Whereas they target a fixed type of harvesting technology without considering the impact of the capacitor, this article investigates the optimal size of the capacitor for different ranges of harvested power. At the time of writing, no work presents an evaluation of the BLE friendship feature considering batteryless LPNs. As such, this work is the first to study the DL performance of batteryless LPNs. Moreover, we consider combining the batteryless device with a WuR to further optimize the DL latency.

Batteryless LPN Design

In this section, the communication between the FN and the batteryless LPN is described. First, we summarize the friendship feature of the Bluetooth mesh specification [3] and introduce the modification required in the communication mechanism when adding a WuR to an LPN to reduce the DL latency. Next, we present the batteryless LPN model used to calculate its available capacitor voltage over time, as a function of the energy harvesting power, the capacitance, and the energy consumption. Lastly, we explain the behaviour of a batteryless LPN and the usage of the model to predict when to perform communication with its FN. Both communication schemes are described: one where the LPN polls directly using the main radio, and another where it has a WuR.

BLE Friendship Feature

The devices that join a Bluetooth mesh network are called nodes. They follow a publish-subscribe communication pattern. A node publishes messages to send, and receivers can subscribe to the sender's address to receive them. The nodes can possess optional additional features based on their capabilities in the network. These features categorize the nodes as relay, proxy, friend, or low power nodes. The relay nodes support the re-transmission of data packets that are broadcast by other nodes. They help extend the range of the entire network. The proxy nodes help non-mesh-supported BLE devices to communicate via the mesh network. The FN and LPN have a friendship relationship in which the FN receives and stores the DL data packets intended for the associated LPN while the LPN sleeps or temporarily shuts down. The FN maintains multiple friend queues (FQs), one for each connected LPN, to store all the incoming data packets [29]. The maximum size of an FQ, containing 16-byte lower transport protocol data units (PDUs), that the LPN can request is 128 packets. The data can be retrieved later by the LPNs using a polling mechanism.
This provides the LPNs with the flexibility to remain mostly in the lowest power state. A node in the network can enable or disable its responsibility to support these four features, as they come with additional overhead while connected to the network. According to the specification, the LPN initiates the request to establish a friendship relationship. Neighbouring nodes (within a single hop) can respond with their capabilities, offering to become a friend. Subsequently, the LPN accepts one of the most capable nodes as its friend. A node cannot have the low power feature enabled unless a neighbouring node agrees to be a friend. Additionally, an FN needs a sustainable power supply to stay awake consistently. An FN can support friendship with a maximum of seven LPNs simultaneously, whereas an LPN can be a friend of only one FN. Figure 3 shows an example of the message exchange between the LPN and the FN after establishing the friendship. The LPN sends a friend subscription list message to the FN, which contains all its subscribed addresses. This list enables the FN to identify which messages to buffer for the LPN. The LPN periodically sends a friend poll (FP) message to the FN to get any stored data and to keep the connection alive. The FP messages are sent in all three BLE advertising channels (37, 38 and 39). After receiving the FP message, the FN replies with the oldest buffered data packet. It discards the packet from the FQ once the LPN acknowledges its reception. The acknowledgement consists of a single bit and is referred to as the friend sequence number (FSN). The LPN toggles the FSN each time it successfully receives a packet. Therefore, the FN sends another entry of the FQ if it receives an FP message that has a different FSN field value than the previously received FP message. If the FSN is the same, it retransmits the previous message (if it has not been discarded in the meantime). The FN returns a friend update (FU) message if the FQ is empty or when the security parameters of the network have changed. The FN signals the FQ occupancy to the LPN via the 1-byte More Data (MD) flag in the FU message. If MD equals 0, it denotes that the FQ is empty; if 1, that it is not. The values 2 to 255 are reserved for future use. The standard BLE friendship protocol defines three timing parameters, ReceiveDelay (RD), ReceiveWindow (RW), and PollTimeout (PT), which are fixed for a session of the relationship. These timers are negotiated during the friendship establishment. The LPN presents the timer values for RD and PT in the friend request message, whereas the FN proposes the RW in the friend offer message. These timers can be seen in Figure 4. The RD and RW can be configured with a maximum value of 255 ms and the PT with a maximum of 96 h. RD is the delay between the LPN sending the FP message and the moment it starts listening for a response from the FN. During the RD, the LPN can turn off its radio and switch to sleep mode. During the RW, the LPN expects the data and actively listens for it. The LPN uses more energy due to active listening during the RW, and therefore a short RW should be configured if possible. The PT is used as a timeout to ensure that when an LPN leaves the network, its friendship relationship is not kept alive indefinitely. If the FN does not receive any poll request from an LPN before the PT timer expires (since the last poll), it terminates the friendship with that LPN and removes the corresponding FQ.
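To make the polling and acknowledgement logic concrete, the following is a minimal, simplified sketch of FN-side friend queue handling. It is not a complete implementation of the Bluetooth mesh specification; in particular, the overflow policy, the return values and the handling of FU retransmission are our own simplifying assumptions.

```python
from collections import deque

class FriendQueue:
    """Simplified FN-side buffer for a single LPN."""
    MAX_PDUS = 128  # maximum number of buffered lower transport PDUs

    def __init__(self):
        self.fq = deque()
        self.last_fsn = None   # FSN seen in the previous FP message
        self.last_sent = None  # packet awaiting acknowledgement

    def buffer(self, pdu):
        if len(self.fq) >= self.MAX_PDUS:
            self.fq.popleft()  # drop the oldest packet (assumed policy)
        self.fq.append(pdu)

    def on_friend_poll(self, fsn):
        """Handle an incoming FP message; return (response, MD flag)."""
        if fsn == self.last_fsn and self.last_sent is not None:
            # Same FSN: the LPN did not receive the last packet; retransmit.
            return self.last_sent, 1 if self.fq else 0
        # Toggled FSN: the previous packet (if any) is acknowledged and
        # can be discarded; send the oldest buffered entry.
        self.last_fsn = fsn
        if self.fq:
            self.last_sent = self.fq.popleft()
            return self.last_sent, 1 if self.fq else 0
        self.last_sent = None
        return "FRIEND_UPDATE", 0  # empty queue: FU message with MD = 0
```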
As mentioned above, the data communication is initiated by the LPN based on its duty cycle, independently of the arrival of DL data packets at the FN. As such, the DL latency can become very high if the LPN has a long duty cycle (which is often desirable to minimize energy consumption). To solve this, adding a WuR to an LPN provides the FN with the ability to notify it about an incoming data packet before the LPN initiates polling. An LPN equipped with a WuR can continuously listen for an incoming WuS, as the WuR's idle listening power consumption is orders of magnitude lower than that of the main radio. The FN sends a WuS to the LPN whenever it receives a network packet corresponding to that LPN in an empty FQ. Upon reception of the WuS by the LPN's WuR, the main radio is triggered to send the FP message, and the LPN (having sufficient energy) polls until it receives an FU message indicating that there are no messages buffered in the FQ. This reduces the need for sending frequent periodic FP messages: the LPN polls only when data is actually buffered in the FQ. Thus, it not only reduces the DL latency but also reduces the energy wasted on polling without receiving buffered data.

Batteryless Device Model

A batteryless LPN is equipped with an energy harvester, a capacitor, a micro-controller unit (MCU), a main radio and an optional WuR. The batteryless circuit model introduced by Delgado et al. [30], shown in Figure 5, is used to calculate the voltage of a batteryless device at a specific time. Assume the device's maximum operating voltage is E (volt) and the harvester provides a power of P_h (watt), modelled as a real voltage source having an internal resistance r_i (ohm). In order to limit the power of the harvester, this internal resistance r_i is defined as E^2/P_h. The energy-consuming components are the MCU, the main radio, the WuR, and any other peripherals, which are modelled together as a load resistance. The current consumption of these components varies with their operating states (e.g., sleep, active). Let the total current consumption of the LPN load at a time instant t be I_L(t) (ampere); then, according to Ohm's law, the load resistance of the LPN is R_L(t) = E/I_L(t). As the capacitor, the harvester and the LPN components are connected in parallel, the equivalent resistance seen by the capacitor is calculated as Equation (1):

R_eq(t) = (r_i · R_L(t)) / (r_i + R_L(t)).    (1)

By applying Kirchhoff's voltage law to the circuit, the voltage across the load V(t+Δt) after a time period Δt starting at time t, given its voltage V(t) at time t, a capacitor of C (farad) and a fixed resistance R_eq(Δt) during the time interval Δt, can be calculated as Equation (2) [30]:

V(t+Δt) = (E/r_i)·R_eq(Δt) + (V(t) − (E/r_i)·R_eq(Δt)) · e^(−Δt/(R_eq(Δt)·C)).    (2)

After substituting the values of r_i and R_eq(Δt) (Equation (1)) into Equation (2), the final voltage is derived as Equation (3):

V(t+Δt) = (E·P_h)/(P_h + E·I_L(t)) + (V(t) − (E·P_h)/(P_h + E·I_L(t))) · e^(−Δt·(P_h + E·I_L(t))/(E^2·C)).    (3)

The formula in Equation (3) calculates the voltage change of an LPN while its state (and thus its current consumption) remains the same during a time interval Δt. It needs to be recalculated every time the LPN's state changes; a minimal code sketch of this update rule is given below.
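As a concrete illustration, a minimal Python sketch of the per-state voltage update follows. The numeric values in the example call are placeholders chosen for illustration, not entries from Table 1.

```python
import math

E = 3.3  # maximum operating voltage (V)

def next_voltage(v, dt, i_load, p_h, cap):
    """Capacitor voltage after dt seconds in one fixed state, Equation (3).
    v: voltage at the start of the interval (V)
    i_load: total load current I_L in this state (A)
    p_h: harvesting power (W); cap: capacitance (F)."""
    v_inf = E * p_h / (p_h + E * i_load)     # steady-state voltage
    k = (p_h + E * i_load) / (E ** 2 * cap)  # 1 / (R_eq * C)
    return v_inf + (v - v_inf) * math.exp(-k * dt)

# Example: a 100 mF capacitor at 2.5 V, harvesting 400 uW while the LPN
# sleeps at an assumed 1.9 uA total current, evaluated after 10 s:
print(next_voltage(2.5, 10.0, 1.9e-6, 400e-6, 0.1))  # charges towards ~3.25 V
```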
Poll-Based LPN

In the default BLE friendship mechanism, the LPN initiates the communication by sending an FP message. Figure 6 shows a sequence diagram representing the communication steps. It can be seen that the FP message is sent in the three broadcast channels and that the LPN then switches to sleep mode until the RD expires. During the RW, the FN starts sending the buffered message, which is received by the LPN after the arrival time (AT). The LPN stops listening at this point and starts receiving the buffered message. The AT depends on the required processing time at the FN, the queuing time (e.g., because the FN first needs to send a packet to another node), and the propagation delay. As mentioned above, the AT should always be smaller than the RW, as after that time the LPN stops listening and the response would thus be lost. To avoid frequent, continuous FU messages, a poll interval timer is introduced. On receiving an FU message (meaning that the FQ is empty), an LPN needs to wait until the poll interval timer expires before sending the next FP message. The poll interval should be less than the PT, to prevent the FN from disconnecting the LPN. In the sequence diagram, the LPN first receives an FU message because the FQ is empty. It thus waits until the poll interval expires to send the next FP message. The FN subsequently replies with a buffered data packet, which is removed from the FQ upon receiving the third FP message. At this point, the FQ is empty again, and the FN thus sends another FU response. On receiving a buffered data packet, the LPN can poll immediately to receive the next message. With a batteryless LPN, this communication becomes more complicated, because the LPN might not always have sufficient stored energy to poll at the predefined polling interval, or immediately after receiving a buffered message. Since the harvesting power can influence the polling, a batteryless LPN treats the predefined polling interval as a minimum interval and waits longer to poll if not enough energy is available. Similarly, it will not poll immediately after receiving a buffered message, but only as soon as it has harvested enough energy. During the process of receiving a buffered data packet, the LPN could experience multiple shutdown events, as the capacitor voltage required to successfully receive a buffered data packet could be higher than its voltage at the start of the communication. Put simply, the LPN should start sending the FP message only once it has acquired a sufficient threshold voltage (V_threshold). Initiating the poll from V_threshold, it can receive at least one buffered data packet successfully without dropping below the device turn-off voltage V_off. Thus, the overall DL latency for a batteryless LPN could increase (with respect to a battery-powered one), as the packets need to wait in the FQ until the LPN acquires the voltage V_threshold and the poll interval expires. The LPN can be equipped with a hardware circuit, such as an ultra-low-power comparator with a power consumption in the order of pico-watts [31], to determine whether it has reached V_threshold. The comparator can be configured to generate events every time the LPN voltage reaches V_threshold or V_off: an UP event is generated whenever the LPN's voltage reaches V_threshold, and a DOWN event whenever it falls below V_off [32]. The LPN's MCU can take appropriate actions based on these generated events. Using such an ultra-low-power comparator does not significantly impact the LPN voltage. During idle time, the LPN remains in sleep mode. In this time, the current consumption of the LPN includes the sleep-mode current consumption of the MCU (I_sleepMCU) and of the main radio (I_sleepMR). The new voltage at the end of the sleep mode can be predicted using Equation (3), where I_L(Δt) = I_sleepMCU + I_sleepMR. Before starting the communication, the comparator is used to compare the LPN's instantaneous voltage with V_threshold.
If that voltage is higher than V_threshold, the LPN initiates the events from top to bottom, as listed in Table 1, to receive the data packet from the FN. The LPN voltage changes according to the execution time and the current consumption of the corresponding events. V_threshold is calculated by deducing the minimum initial voltage V(t) (using Equation (3)), executing all the events in Table 1 from bottom to top, starting with V(t+Δt) equal to V_off; the deduced initial voltage of each event needs to stay above V_off. A code sketch of this backward calculation is given after the simulation setup below.

WuR-Based LPN

The sequence diagram shown in Figure 7 presents the communication scheme for receiving DL data packets at a WuR-based LPN. The WuR remains turned on while listening for a WuS, operating at a power consumption orders of magnitude lower than the regular radio [5]. The state in which the LPN actively listens using the WuR, keeping the main radio in deep sleep mode, is called the wake-up state. When the FN receives a message for an LPN in an empty FQ, it initiates a communication event by sending a WuS. Upon receiving the WuS, the WuR can interrupt the main radio of the LPN to start the process of requesting the message. As with the poll-based LPN, to prevent the LPN from shutting down during the communication, it wakes up the main radio to send the FP message only if it has sufficient voltage (V_threshold). Thereafter, the LPN follows the same procedure as described for the direct poll scheme (cf. Section 3.3), sending the FP messages in the three advertisement channels. To account for the potential loss of a WuS (e.g., because the LPN does not have enough energy to receive it or is temporarily shut down), the WuS is re-transmitted by the FN if no FP is received within a pre-configured WuS interval timer. While sleeping, the current consumption of a WuR-based LPN includes not only the sum of the MCU sleep current I_sleepMCU and the main radio sleep current I_sleepMR, but also the WuR listening current I_listenWuR.

Results and System Analysis

This section presents the simulator setup and compares the performance of both considered LPN communication approaches, i.e., direct poll-based and WuR-based.

Simulation Setup

We implemented a Python-based simulator to imitate the friendship communication mechanism of the batteryless LPNs, as shown in Figures 6 and 7. The simulator is capable of reproducing the BLE radio activities, such as sending FP messages and FQ-buffered data packets, and it also implements the sending of a WuS. The flow chart of the simulator is presented in Figure 8. Each experiment is run until a total of 25,000 packets have been generated according to a Poisson arrival process. To request and receive a buffered data packet, the LPN follows the sequence of events listed in Table 1, in order from top to bottom. The table also gives the execution time and the current consumption of the corresponding events for both types of LPNs (with and without a WuR). These time and current consumption values can be used as Δt and I_L(Δt) to calculate the LPN's voltage at any time using Equation (3). The times and current consumptions of the main radio and the MCU are based on the Nordic nRF52 power profiler [33]. According to the datasheet of the AS3933 WuR [34], a WuR-based LPN consumes a current of 2.7 µA when one WuR channel actively listens for incoming signals (I_listenWuR) and 12 µA while receiving them. We consider a BLE data rate of 1 Mbps.
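To illustrate the backward V_threshold calculation mentioned earlier, the sketch below inverts Equation (3) and walks a hypothetical event list bottom to top, starting from V_off. The durations and currents in the event list are placeholders only, since Table 1 itself is not reproduced in the text.

```python
import math

E, V_OFF = 3.3, 1.8  # maximum operating and turn-off voltages (V)

def prev_voltage(v_final, dt, i_load, p_h, cap):
    """Inverse of Equation (3): the starting voltage that ends at
    v_final after dt seconds with load current i_load."""
    v_inf = E * p_h / (p_h + E * i_load)
    k = (p_h + E * i_load) / (E ** 2 * cap)
    return v_inf + (v_final - v_inf) * math.exp(k * dt)

# Hypothetical communication events in Table 1 order (duration s, current A):
EVENTS = [
    (384e-6, 5.3e-3),  # Tx: send the FP message (values illustrative)
    (0.100,  3.0e-6),  # sleep during the ReceiveDelay
    (544e-6, 6.0e-3),  # Scan Message: receive the buffered packet
]

def v_threshold(p_h, cap):
    """Walk the event list bottom to top, starting from V_OFF."""
    v = V_OFF
    for dt, i_load in reversed(EVENTS):
        v = prev_voltage(v, dt, i_load, p_h, cap)
    return v

print(v_threshold(p_h=100e-6, cap=50e-3))
```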
At this rate, transferring a data packet of 68 B from an FN to an LPN takes 544 µs, as specified in Table 1 (Event: Scan Message), and the 48 B FP message takes 384 µs (Event: Tx). The data packet size is calculated as the sum of the network PDU, the advertising data (AD) type (1 B), the message length (1 B), the preamble (1 B), the access address (4 B) and the CRC (3 B). The calculated buffered data packet, FU message and FP message sizes are given in Table 2. The other parameters defined in Table 2 are used to compare the LPN communication schemes. As such, V_min and E are taken as 1.8 V and 3.3 V, respectively, considering the regulated supply for external components of the Nordic nRF52 [32]. The performance metrics considered in the comparison between the friendship communication mechanisms are the PDR and the DL latency. In each experiment, we calculate the values of the capacitor size and signal (WuS/poll) interval that achieve the minimum DL latency while maintaining the maximum PDR. For simplicity, continuous power harvesting is assumed. We have considered packet loss only due to the LPN not having enough energy to receive the WuS or the packet, and not due to interference or collisions. There have been many studies evaluating the impact of interference on BLE from other technologies such as ZigBee, IEEE 802.11, and IEEE 802.15.4 [35-37]. Such interference causes the reception of erroneous packets, thereby affecting the DL latency. Collisions could happen when multiple LPNs are attached to an FN with a short advertisement interval or deployed near BLE mesh nodes. The percentage of packet collisions with an advertising interval of 500 ms and 7 BLE nodes is less than 0.4% [38]. The batteryless LPNs (maximum 7) connected to an FN generally transmit or receive data much less frequently than every 500 ms. As a consequence, the PDR and DL latency of the LPNs would be negligibly affected by the presence of other nearby LPNs.

Minimum Harvesting Power

As the LPN consists of many energy-consuming components, it is necessary to know the minimum harvesting power at which the LPN can still charge its capacitor to the threshold voltage V_threshold while in sleep mode (or WuR listening mode). Moreover, the harvested power needs to be sufficient to complete at least one full polling cycle with a fully charged capacitor (i.e., V_threshold needs to be lower than the maximum capacitor voltage), given a specific capacitance. The minimum harvesting power is calculated based on Equation (3), where the final voltage V(t+Δt) equals V_threshold in the limit of Δt towards infinity in the sleep state. The harvesting power becomes independent of the initial voltage and the capacitor size, as for an infinitely large Δt the final voltage becomes (E·P_h)/(P_h + E·I_L(Δt)). However, to perform the BLE friendship communication cycle (including sending a poll and receiving the response), LPNs with different capacitor sizes need different threshold voltages. Accordingly, the required minimum harvesting power varies with the capacitance. Considering a turn-off voltage V_off of 1.8 V, the threshold voltage and the corresponding minimum harvesting power for different ATs are shown in Figure 9. The minimum harvesting power does not vary much for capacitors of 50 mF and larger: for such large capacitors, it is around 63.2 µW for the LPN with only a main radio and around 73.9 µW for the LPN with a WuR alongside the main radio.
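The quoted steady-state limit can also be inverted to obtain the minimum harvesting power in closed form: setting (E·P_h)/(P_h + E·I_L) equal to V_threshold and solving for P_h gives P_h = V_threshold·E·I_L/(E − V_threshold). A small sketch follows; the sleep currents used are illustrative assumptions, not the exact values behind Figure 9.

```python
E = 3.3  # maximum operating voltage (V)

def steady_state_voltage(p_h, i_load):
    """Limit of Equation (3) for dt -> infinity."""
    return E * p_h / (p_h + E * i_load)

def min_harvesting_power(v_threshold, i_sleep):
    """Smallest P_h whose steady-state voltage reaches v_threshold."""
    return v_threshold * E * i_sleep / (E - v_threshold)

# Illustrative currents (A): MCU + main-radio sleep, plus the 2.7 uA
# AS3933 listening current for the WuR variant.
I_SLEEP_POLL = 1.9e-6
I_SLEEP_WUR = I_SLEEP_POLL + 2.7e-6

for i in (I_SLEEP_POLL, I_SLEEP_WUR):
    print(min_harvesting_power(3.0, i))  # order of tens of microwatts
```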
Both communication schemes presented for the LPN (with and without WuR) show similar behaviour in how the minimum harvesting power and the threshold voltage change with the capacitor size. As the LPN with a WuR has a higher current consumption, it requires more harvesting power; the observed differences in the minimum harvesting power are up to 165.6 µW. Additionally, the LPN with a WuR has a lower threshold voltage, with a difference of up to 0.028 V compared to the LPN without a WuR. For the WuR-enabled LPN (cf. Figure 9b) at an AT equal to 0 ms, the minimum harvesting power decreases exponentially from 14.8 to 0.077 mW for capacitor sizes between 7.5 and 750 µF. However, with the increase in AT, this exponential decrease shifts to larger capacitor sizes: as the AT increases, the listening time of the LPN increases, which increases its energy consumption. Therefore, higher threshold voltages are required for smaller capacitor sizes, increasing the minimum harvesting power needed.

WuR-Based and Direct Poll-Based Friendship Protocol Performance

The results are grouped based on the capabilities of different harvesting techniques: small (0.075 to 0.099 mW), representing harvesting at the rate of electromagnetic or piezoelectric harvesting techniques; medium (0.1 to 1 mW), in line with indoor light; and large (1.1 to 500 mW), in line with techniques based on direct sunlight, mechanical movement, or thermal energy [39]. The parameters used to compare the performance of both LPN communication schemes (WuR-based and direct poll-based) are defined in Table 2. Generally, for a fixed harvesting power, the PDR decreases with an increase in the signal (poll/WuS) interval (SI), because the LPN's polling frequency or WuS notification frequency decreases. Furthermore, with an increase in the capacitance, the LPN can store more energy, and thus the PDR improves. Moreover, for a given harvesting power there exists a minimum optimal capacitance; further increasing the capacitor size above that value affects neither the PDR nor the DL latency. Therefore, for each value of the harvesting power, we calculate this optimal capacitance and the minimum SI that can provide the highest PDR and lowest DL latency. We consider all capacitance and SI combinations that deviate at most 5% from the maximum achievable PDR and the lowest DL latency as being optimal as well. This 5% deviation allows us to eliminate minor differences in the PDR and the DL latency that occur for the optimal and higher capacitance values due to the randomness in the Poisson packet arrivals at the FQ. By allowing this 5% deviation, we can smooth the curves of the optimal capacitance and interval. A value smaller than 5% did not provide the necessary smoothing effect, while a larger value would affect the optimality too much. Figures 10-12 compare both communication schemes for Poisson packet arrival rates of 1, 10 and 60 s, respectively, showing the optimal capacitance and its corresponding PDR, DL latency and minimum SI at different harvesting power values. As shown in Figure 10a, with the increase in the harvesting power the PDR improves, but the DL latency does not. This is because, with the increase in harvesting power, the LPN can reach the threshold voltage faster, and so it polls to receive the data more frequently.
The DL latency values do not vary at these harvesting powers because most of the packets are dropped from the FQ due to the low PDR. The observed variations in the DL latency values are due to the randomness in the Poisson packet arrivals. Moreover, as the harvesting power increases, the capacitor charges faster, and therefore the optimal capacitance decreases. The minimum SI does not vary much at low harvesting power, because the FQ is never empty and the LPN never gets an FU, so the SI does not play any role. The WuR-based LPN achieves lower PDR values because some of its energy is wasted in listening to the periodic WuSs while remaining in sleep mode, and this delays acquiring the voltage V_threshold needed to start the communication. With the increase in the delay to poll, more packets are dropped from the FQ. The FN sends these WuSs assuming the previous WuS has not reached the LPN (the WuS might be lost in transmission or the LPN might be shut down), as it does not get any response from the LPN. Therefore, the optimal capacitance required for the WuR-based LPN is also higher. Figure 10a shows that for low harvesting power (between 75 and 300 µW), direct poll-based communication performs better, but neither approach achieves a sufficiently high PDR. Above 300 µW, however, both types of communication achieve the maximum PDR (cf. Figure 10b), and WuR-based communication starts to outperform direct poll-based data communication in terms of DL latency. At the higher harvesting power values, when a PDR of around 80% is achieved (200 µW and above in Figure 10b), it is observed that with the increase in the harvesting power the DL latency decreases and stabilizes at around 0.33 and 0.73 s for the WuR- and poll-based approaches, respectively. With an increased harvesting power, the LPN takes less time to reach the threshold voltage. Thus, the DL latency is reduced with an increase in the harvesting power; as seen in Figure 10b, it decreases drastically from 15.3 to 0.34 s for WuR-based communication when the harvesting power increases from 0.2 to 0.4 mW. The optimal capacitance is largest at the harvesting power at which the maximum PDR is achieved (such as at 400 µW in Figure 10b, at 86 µW in Figure 11a or at 80 µW in Figure 12a), and there the LPN can support frequent SIs. This means the LPN can harvest enough energy to successfully receive the WuSs whenever they are sent and can poll frequently without letting the FN drop packets from the FQ. At the Poisson packet arrival rate of 10 s and low harvesting powers, the PDR improves as compared to that of the 1 s packet arrival rate, but the DL latency deteriorates (up to 178.6 s), as shown in Figure 11a. Similarly, it increases up to 967.67 s for a Poisson packet arrival rate of 60 s (cf. Figure 12a). At the low Poisson packet arrival rate (1 s) and low harvesting power, a greater number of packets enter the FQ without being polled by the LPN, and thus a greater number of packets are dropped due to the queue being full. However, as the Poisson packet arrival rate increases, fewer packets are dropped, improving the PDR but also increasing the DL latency. With a higher Poisson packet arrival rate, the LPN receives the older packets that have waited longer, whereas at a low Poisson packet arrival rate the older packets are dropped and the LPN receives the recently added FQ packets, obtaining a lower DL latency.
Moreover, at the Poisson packet arrival rate of 10 s, once a PDR of around 80% is achieved (80 µW and above for poll-based, or 90 µW and above for WuR-based, in Figure 11a), the DL latency starts decreasing drastically with the increase in the harvesting power. For poll-based communication, it decreases from 142.3 to 8.6 s as the harvesting power increases from 80 to 95 µW, and for WuR-based communication it decreases from 146.2 to 19.6 s as the harvesting power increases from 90 to 99 µW. For higher harvesting power, as shown in Figure 11a, the latency of WuR-based communication drops below that of poll-based communication, but a higher optimal capacitance is required. Similar to the Poisson packet arrival rates of 1 and 10 s, at 60 s WuR-based communication with a PDR of 80% or above (cf. Figure 12a) shows a decrease in the DL latency from 660.7 to 0.30 s as the harvesting power increases from 77 to 82 µW. WuR-based communication performs better in terms of DL latency for harvesting powers above 82 µW, where it achieves a DL latency of 0.30 s. For poll-based communication, the latency decreases to 0.66 s, but at a much higher harvesting power of 300 µW. Moreover, for Poisson packet arrival rates higher than 60 s, the conclusions are similar to those for 60 s, showing improvements in PDR at lower harvesting power values for WuR-based communication. Analysing the results for a harvesting power higher than 1 mW (graphs omitted), it is observed that the optimal capacitance for both types of communication becomes the same at high harvesting power. The harvesting power at which the optimal capacitance becomes the same decreases with an increase in the Poisson packet arrival rate. Moreover, at high harvesting power values, the DL latency values for both scenarios remain almost constant, with WuR-based communication continuing to perform better than direct polling. WuR-based communication obtains DL latencies of 33.55, 30.26 and 30.21 ms for Poisson packet arrival rates of 1, 10, and 60 s, respectively, whereas poll-based communication obtains 71.77, 65.42, and 64.88 ms. It can be concluded that poll-based data communication performs better for low-power harvesting techniques such as electromagnetic or piezoelectric harvesting and for low-irradiance indoor light (producing up to 400 µW of power). In contrast, WuR-based data communication outperforms it for medium and large harvesting powers. For high harvesting powers (e.g., using thermal energy), a small capacitor of only 50 µF can support all data rates (1 s and above) at maximum PDR. For medium harvesting power, a capacitance of 25 mF is enough to support a packet arrival rate of 10 s or more with a DL latency of at most 5.9 and 14.64 s for poll- and WuR-based communication, respectively. To support high packet arrival rates of 1 s while achieving maximum PDR, a larger capacitor of at least 50 mF is required. Finally, for a low harvesting power, a 100 mF capacitor is required to support a packet arrival rate of 10 s at maximum PDR.

Conclusions and Future Work

In this article, we studied the optimal parameters for performing the communication between a friend node and a batteryless low power node in BLE mesh networks. We studied the achievable PDR and latency of DL packets, considering different parameters (i.e., capacitance, energy harvesting power and Poisson packet arrival rate).
The results have shown that a batteryless BLE device can easily support DL communications by using the BLE friendship feature, both using the traditional polling-based technique and by employing a WuR. Even with harvesting powers in the order of tens of micro-watts, a packet arrival rate of 10 s can be supported without any packet loss. The WuR-based approach is mainly beneficial in terms of DL latency when the packet arrival rate is very low (i.e., 1 s) or high (i.e., 60 s). In these scenarios, it provides a DL latency reduction of more than 50% compared to the polling-based technique, that is, from 71.77 to 33.55 ms at a 1 s packet arrival rate. In summary, this work can be used to determine the minimum harvesting power and the optimal capacitor size that provide the desired PDR and DL latency for different configurations of the batteryless LPN and FN. There are several future research directions. In our experiments, we considered fixed values of the signal interval, which could be optimised dynamically depending on the harvested power. As mentioned earlier, multiple low power nodes should be attached to a friend node to evaluate the impact of collisions, as well as the interference due to the presence of other technologies. Moreover, the effect of the friend queue size and the impact on the power consumption of the friend node are interesting open research directions. As only simulation results are presented here, experiments using real hardware are still needed; we are currently working on a hardware setup to validate the presented results.

Conflicts of Interest: The authors declare no conflict of interest.
Liver Synthesis Function in Chronic Asymptomatic or Oligosymptomatic Alcoholics: Correlation with Other Liver Tests

BORINI, P. et al. - Liver synthesis function in chronic asymptomatic or oligosymptomatic alcoholics: correlation with other liver tests.

SUMMARY: Liver function and its correlation with bilirubin and hepatic enzymes were evaluated in 30 male chronic asymptomatic or oligosymptomatic alcoholics admitted into the psychiatric hospital for detoxification and treatment of alcoholism. Hypoalbuminemia, lowered prothrombin activity, hypotransferrinemia and hypofibrinogenemia were detected in 32%, 32%, 28%, and 24% of patients, respectively. Transferrin was elevated in 8%. A greater prevalence of hyperbilirubinemia was found in patients with lowered prothrombin activity, hypofibrinogenemia, or hypotransferrinemia. No correlation was found between serum bilirubin or aminotransferase levels and normal or elevated albumin levels, time or activity of prothrombin, and fibrinogen levels. Serum alkaline phosphatase was elevated in normoalbuminemics and gamma-glutamyltransferase in patients with lowered prothrombin activity. Hypoalbuminemia was associated with hypofibrinogenemia, hypotransferrinemia with elevated aspartate aminotransferase or gamma-glutamyltransferase, and hypertransferrinemia with elevation of alanine aminotransferase. These data indicated the occurrence of hepatic dysfunction due to liver damage caused directly by alcohol or by alcoholism-associated nutritional deficiencies.

Alcohol exerts direct toxic action upon the liver, producing structural and functional alterations that may be enhanced by nutritional deficiencies due to inadequate ingestion of food or disturbances in the digestion or absorption of nutrients. Alcoholism frequently results in reduced protein synthesis in the liver, leading to deficiency in serum proteins such as albumin, transferrin, and blood coagulation factors. Social and psychic problems caused by alcoholism usually precede physical medical problems by years. Consequently, alcoholics presenting themselves for treatment of the habit in specialized units compose a group clearly different from those that are received in clinical hospitals or are admitted for treatment of physical problems. The majority of studies evaluating liver functional disturbances in chronic alcoholics refer to the latter, involving patients with exuberant clinical manifestations, with a paucity of observations on phases where symptoms are not evident or are very mild. This study aimed at: 1) analyzing the behavior of serum biochemical tests that are usually employed for evaluating liver function in chronic asymptomatic or oligosymptomatic alcoholics, and 2) correlating alterations of those functional tests with alterations in bilirubin and liver enzymes.
MATERIALS AND METHODS

Thirty male chronic alcoholics admitted to the psychiatric hospital for treatment of alcoholic intoxication were considered asymptomatic or oligosymptomatic at admission, through a physical exam and a clinical structured anamnesis interview 5. The majority of patients was classified into low or average middle class subgroups. Twenty-five were smokers, and none of them had used illicit drugs or any medicines during the 30 days prior to admission. Viral antigens were not searched for, and coproparasitologic exams were negative for Schistosoma mansoni eggs.

Data are presented as average ± standard deviation. Statistical comparisons employed were chi-squared or one-tailed Fisher's tests for qualitative data and Student's t test for quantitative variables 33; the 95% confidence intervals (CI) are shown for some data. Correlation studies were conducted by linear regression, correlation coefficient, and Pearson's significance test 25. Statistically significant findings (p < 0.05) are noted (*) in the tables.

RESULTS

Time and activity of prothrombin (TAP) was low in ten (32%) patients. The prevalence of elevated total bilirubin was significantly higher in patients with low TAP (p < 0.05). The GGT average was significantly higher in patients with low TAP (p < 0.02, CI 13 to 217). No difference in plasma levels of bilirubin or other liver enzymes was found between groups with normal or low TAP (Table 5). Hypofibrinogenemia was found in seven (24%) patients. Hyperbilirubinemia was significantly more common in patients with reduced fibrinogen levels (p < 0.05). Prevalence and elevated values of liver enzymes did not differ between groups with normal or reduced fibrinogen levels (Table 6). A significant correlation was observed between albumin and fibrinogen serum levels (r = 0.50, p < 0.02) but not between albumin and TAP (r = 0.04, p > 0.05).
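To make the statistical procedures named in Materials and Methods concrete, the sketch below applies the same tests (one-tailed Fisher's exact test for prevalence comparisons, Student's t test for group means, Pearson's test for correlations) using SciPy. All counts and values here are invented for demonstration and are not the study data.

```python
# Illustrative use of the tests named in Materials and Methods (SciPy).
# All numbers below are invented for demonstration; they are NOT the study data.
import numpy as np
from scipy import stats

# Fisher's exact test (one-tailed): hyperbilirubinemia prevalence in
# low-TAP vs normal-TAP patients, as a 2x2 contingency table.
table = [[6, 4],   # low TAP:    hyperbilirubinemic / not
         [3, 17]]  # normal TAP: hyperbilirubinemic / not
_, p_fisher = stats.fisher_exact(table, alternative="greater")

# Student's t test: GGT averages in the two groups.
ggt_low_tap = np.array([210.0, 180.0, 260.0, 150.0, 300.0, 240.0])
ggt_normal = np.array([90.0, 120.0, 60.0, 110.0, 140.0, 80.0])
_, p_t = stats.ttest_ind(ggt_low_tap, ggt_normal)

# Pearson correlation: albumin vs fibrinogen serum levels.
albumin = np.array([3.1, 3.8, 4.2, 2.9, 3.5, 4.0, 3.3])
fibrinogen = np.array([180.0, 260.0, 310.0, 170.0, 240.0, 290.0, 210.0])
r, p_r = stats.pearsonr(albumin, fibrinogen)

print(f"Fisher p={p_fisher:.3f}  t-test p={p_t:.4f}  Pearson r={r:.2f} (p={p_r:.3f})")
```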
Plasma transferrin, measured through total iron-binding capacity, was reduced in eight (28%) patients and increased in two (8%). Hyperbilirubinemia was significantly more common in patients with reduced transferrin levels (p < 0.02). No patient with normal transferrin levels showed increases in AST and GGT, while all with reduced levels showed elevation of both enzymes. The prevalence of alteration of ALT or AP did not differ between groups with or without transferrin alteration. In the two cases where transferrin was elevated, AST was altered in both, but ALT or GGT in only one (Table 7). Transferrin alterations were drastic, both altered subgroups being significantly different (p < 0.001) from normal, with CI of 59.2 to 132.0 and -220.2 to -78.2. Transferrin levels were not correlated with those of albumin (r = 0.014, p = 0.95), prothrombin activity (r = 0.074, p = 0.72) or fibrinogen (r = 0.087, p = 0.68), but did correlate with serum iron (r = 0.774, p < 0.0001).

COMMENTS

The caloric value of ethanol is enough to substitute for an important fraction of the calories derived from diet components, leading to a reduced need for food ingestion 23. About 8% of patients in this study reported recently not having even one full meal a day, and 60% just a scarce daily meal. Ethanol-oxidizing enzymes and the integrity of hepatocytes depend on a high dietary intake of protein and other nutrients. Precursor amino acids are also needed for the synthesis of antioxidants such as glutathione, which is drastically diminished in alcoholism 34. Vitamins A and E are other antioxidants protecting cells against ethanol-induced oxidative damage 24.

Our study detected such alterations in less than one-third of patients, as revealed by albumin and fibrinogen levels and by the time and activity of prothrombin. These tests are usually employed for evaluation of hepatocyte synthesis function, and their alterations were not correlated with those of liver enzymes.

In relation to albumin, previous reports are contradictory. Low serum albumin was not related to abnormal sulfobromophthalein retention 20, but patients with normal levels of total serum protein and albumin/globulin ratios showed altered sulfobromophthalein tests 19. Deficient protein intake or alteration in absorption or metabolism of amino acids could lead to insufficient availability of the amino acid precursors of glutathione and later deficiency of its reduced form. A vicious cycle may be established between lack of substrate and cell damage.

We detected not only a correlated prevalence of alterations of albumin and fibrinogen but also hypoalbuminemia usually accompanied by hypofibrinogenemia. It is likely that such associations are related to both proteins being synthesized in the same functional zones of the liver acini, especially zone 3, where hepatocytes become especially susceptible to aggression, having diminished glutathione reserves and receiving the highest concentrations of some toxic products of drug metabolism 31. While prothrombin activity depends on fibrinogen, the paradox of not having detected a correlation between their alterations could be explained by the normality of other blood clotting factors, especially factor VII 18.

Transferrin, the main iron transport protein, can be reduced in alcoholics due to liver damage with reduced synthesis or alterations in its metabolism 27. As in another study 16, average serum transferrin levels were normal.
Plasma transferrin values, regulated in accordance with iron levels, are elevated in iron deficiency. Conversely, reduced hepatic transferrin synthesis is one of the causes of iron deficiency, which has been shown in a significant proportion of alcoholics 12. Nonetheless, our patient sample had normal plasma iron levels. However, among reported cases with high levels of AST, ALT and AP (relative to levels found in normo-ironemics), 40% were hyperironemic 6. Increases in serum iron follow the development of histologically demonstrable liver necrosis 11. In all cases with reduced transferrin levels there were AST and GGT alterations, and in both cases with increased serum transferrin there were concomitantly higher ALT levels. The most plausible explanation for these observations would be that reduced transferrin levels reflect liver aggression with functional impairment, while its increase would correspond to the acute phase response 29,30. In alcoholic hepatitis, leukocytes and macrophages can release cytokines (interleukins and tumor necrosis factor) 4,17 that act in the regulation of the hepatic synthesis of acute phase proteins 22. The non-correlation between levels of transferrin and fibrinogen, both acute phase proteins, could arise from their production depending on different regulators.

It is intriguing that hyperbilirubinemia associated with low prothrombin activity, hypofibrinogenemia, or hypotransferrinemia showed in all cases a predominance of non-conjugated bilirubin. Various mechanisms, isolated or associated, could be proposed to explain this, relating lack of substrates to hepatocytic dysfunction in chronic alcoholics. Non-conjugated hyperbilirubinemia could result from increased turnover of plasma bilirubin pools and/or a reduction in its clearance. Fasting is a very common phenomenon during alcoholic intoxication, and it occurred in a significant proportion of patients in our study. During fasting, non-conjugated serum bilirubin elevation could be due to various mechanisms, acting in isolation or in association: (1) increase in intestinal absorption of non-conjugated bilirubin from the enterohepatic pool, for reasons that are not yet clear 14,26; (2) deficient liver uptake and conjugation, a hepatocytic dysfunction that would be similar to that occurring in Gilbert's syndrome; alcohol ingestion in this syndrome causes elevation of non-conjugated bilirubin 3; (3) bilirubin flux through the hepatocyte plasma membrane is bidirectional, and about 40% of the bilirubin taken up during the first round through the liver is returned unaltered to circulation 2. After uptake, bilirubin is transported to the cytosolic sites of transformation, such as by glutathione S-transferase B. This reaction seems important for minimizing bilirubin efflux from the hepatocyte to blood 10.
Glutathione is drastically diminished in alcoholism 34. Structural and functional alterations of the plasma and organelle membranes of hepatocytes, due to lipid peroxidation 24 and to the reduction of intracellular glutathione caused by acetaldehyde derived from ethanol metabolism, could not only reduce the hepatic capacity for clearance of serum non-conjugated bilirubin but also increase the rate of its non-conjugated efflux from the hepatocyte. Some reports have shown that the liver is, among all organs investigated, the one losing structural proteins most quickly and in greatest amounts during fasting, reaching 20% loss in only 2 days 1, but not all serum proteins are affected in the same way 21. In a previous study 7 involving patients with a clinical profile similar to this one, we observed a correlation between the prevalence of fasting hypoglycemia and hypofibrinogenemia and indicated that nutritional deficiency would have contributed to impaired synthesis of the protein.

In cases of liver damage, serum albumin concentration decreases slowly due to the protein's long in vivo half-life (about 22 days) 31, while the half-lives of others, such as fibrinogen and the vitamin K-dependent factors, are short (1.5 to 6.3 days) 32. Continuation of fasting during alcoholic intoxication also interferes with the serum levels of different proteins.

Up to a certain period of alcohol abuse, chronic alcoholics develop greater tolerance to ethanol due to increased activity of the oxidizing microsomal system but, after about 30 years of usage, there ensues a decline in tolerance 8, in such a way that the toxic state is reached faster and admissions for detoxification become ever more frequent 9. Patients in this study presented themselves for hospitalization about 3 times a year. At admission, ethanol consumption is interrupted, and the quality and amount of feeding and the vitamin deficiencies are corrected, so that many patients recover from nutritional deficiencies.

Hepatic dysfunction may occur in alcoholics without parallel damage detectable by light microscopy 15. It is also known that there may be no correlation between the degree of liver fibrosis and the plasma levels of hepatic enzymes 28. Our findings go deeper, indicating that liver cell aggression is not necessarily followed by reduced protein synthesis, since no significant differences were detected in the levels of hepatic enzymes in groups with or without reduction of albumin, fibrinogen, or prothrombin activity. Nonetheless, the hypothesis has not been ruled out that the lack of correlation between protein and enzymatic alterations might be due to the former being more sensitive to nutritional deficiencies than to liver cell damage. Studies are needed employing specific methods for evaluation of the nutritional state of patients and more sensitive procedures for testing liver functions.
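The half-life comparison made above (albumin, about 22 days, versus fibrinogen and the vitamin K-dependent factors, 1.5 to 6.3 days) implies sharply different decay once synthesis halts. The short calculation below is our own illustration of that point, assuming simple first-order decay and using a mid-range 4-day half-life for fibrinogen; neither value is study data beyond what the text quotes.

```python
# Fraction of a serum protein remaining t days after synthesis halts,
# assuming simple first-order (exponential) decay with half-life t_half.
def fraction_remaining(t_days: float, t_half_days: float) -> float:
    return 0.5 ** (t_days / t_half_days)

# After one week without synthesis:
print(f"albumin (t1/2 ~ 22 d):   {fraction_remaining(7, 22):.0%}")  # ~80% remains
print(f"fibrinogen (t1/2 ~ 4 d): {fraction_remaining(7, 4):.0%}")   # ~30% remains
```

This is why short-half-life proteins such as fibrinogen fall measurably within days of impaired synthesis, while albumin changes only slowly.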
Table 1 - Demographic characteristics, alcoholism and feeding pattern, and admission history.

* Non-conjugated bilirubin predominated in all cases.

Table 4 - Group values and prevalence of altered values of hepatic enzymes and total bilirubin in groups with normal or reduced albumin values.

Table 5 - Group values and prevalence of altered values of hepatic enzymes and total bilirubin in groups with normal or reduced prothrombin activity.

Fisher's one-tailed or Student's t tests; * p ≤ 0.05.

Table 6 - Group values and prevalence of altered values of hepatic enzymes and total bilirubin in groups with normal or reduced fibrinogen values.

Table 7 - Group values and prevalence of altered values of hepatic enzymes and total bilirubin in groups with normal or altered transferrin values.
The Ubiquitin Proteasome System in Genome Stability and Cancer

Simple Summary: Genomic instability is a major driving force of tumour development and evolution. Cells have developed sophisticated regulatory systems to preserve the stability of the genome, and defects in these mechanisms can lead to the acquisition of mutations. In this review we look at the role of ubiquitination, a common post-translational modification, in the regulation of genomic integrity.

Abstract: Faithful DNA replication during cellular division is essential to maintain genome stability, and cells have developed a sophisticated network of regulatory systems to ensure its integrity. Disruption of these control mechanisms can lead to loss of genomic stability, a key hallmark of cancer. Ubiquitination is one of the most abundant regulatory post-translational modifications and plays a pivotal role in controlling replication progression, repair of DNA and genome stability. Dysregulation of the ubiquitin proteasome system (UPS) can contribute to the initiation and progression of neoplastic transformation. In this review we provide an overview of the UPS and summarize its involvement in replication and replicative stress, along with DNA damage repair. Finally, we discuss how the UPS presents as an emerging source for novel therapeutic interventions aimed at targeting genomic instability, which could be utilized in the treatment and management of cancer.

Introduction

Genome instability first emerged as a hallmark of cancer in the famous revised article "Hallmarks of Cancer: The Next Generation" [1]. From there, research and drug discovery surged to understand this mechanism that underlies both the development and progression of cancer. Faithful DNA replication is paramount for maintaining genome integrity and has evolved over millennia, developing sophisticated regulatory systems, including DNA damage repair machinery and checkpoint kinases, to ensure that genomic material is passed on to the next generation with the highest levels of fidelity. Often it is alterations in these regulatory systems that pose the biggest threat to genome stability and give rise to the development of many cancers [2,3].

A commonly dysregulated system observed in neoplasms is the ubiquitin proteasome system (UPS). The UPS regulates a myriad of cellular processes that are altered during tumorigenesis, including cell differentiation, the cell cycle, cellular homeostasis, DNA replication and DNA repair. The UPS comprises three specialized enzymes, referred to as E1, E2 and E3, along with the 26S proteasome, a multi-catalytic ATP-dependent protease complex [4]. The E3 ligases afford specificity to the UPS, and aberrant expression or mutation of a number of these enzymes has been linked to malignant transformation [5][6][7][8]. This review focuses on the influence of the UPS, and E3 ligases in particular, on genome stability and how understanding their role in genome integrity could potentially provide novel therapeutic strategies.

The Ubiquitin Proteasome System

As a multi-component regulatory system, the UPS exists in all eukaryotic cells and has been widely studied in the fields of immunology and cancer. It is composed of three types of ubiquitin enzymes and the 26S proteasome [9]. Ubiquitin, a 76 amino acid protein, is highly conserved among eukaryotic organisms and gains its name from its ubiquitous expression in cells. Ubiquitination is one of the most common post-translational modifications (PTM), with ramifications in many cellular processes.
It acts as a label or signal to determine the fate and/or function of the substrate protein it marks [10]. The process involves the covalent attachment of the C-terminus of a ubiquitin molecule or chain to a lysine (K) residue of the substrate protein by a cascade of enzymes. The 26S proteasome is a large multi-catalytic protease complex that recognizes and degrades ubiquitinated substrates. It is composed of two distinct complexes: a 20S core particle, capped at one or both ends by a 19S regulatory particle. The 19S regulatory particle functions to recognize ubiquitinated proteins, remove and recycle ubiquitin, unfold the substrate protein and translocate it into the 20S core particle for degradation. The 20S core particle is a barrel-shaped structure made up of four heptameric rings; the two outer rings are composed of α subunits, which serve as a docking domain for the 19S regulatory particle, and the two inner rings are composed of β subunits, three of which contain catalytic sites. Caspase-like, trypsin-like and chymotrypsin-like activities are associated with the β1, β2 and β5 subunits, respectively, and confer the ability to cleave after acidic, basic and hydrophobic amino acid residues. The UPS is a highly complex regulatory system that is responsible for the degradation of over 80% of intracellular proteins and, therefore, oversees a myriad of essential processes in the cell. The mechanism and function of the UPS is illustrated in Figure 1.

The Process of Ubiquitination

Ubiquitination is performed through the action of three classes of ubiquitin enzymes: a ubiquitin-activating enzyme (E1), a ubiquitin-conjugating enzyme (E2) and a ubiquitin ligase (E3). The E1 enzyme functions to activate ubiquitin in an adenosine-triphosphate (ATP)-dependent manner, forming a high-energy thioester bond between a cysteine residue in its active site and the C-terminus of ubiquitin. Ubiquitin is then transferred to a cysteine residue of an E2 enzyme, and in the final step ubiquitin is moved to a lysine residue of a substrate protein by an E3 ligase. The E3 ligase interacts with a ubiquitin-bound E2 enzyme to facilitate the formation of an isopeptide or peptide bond between ubiquitin and a lysine residue of the substrate protein [11,12]. This is a diverse modification where proteins can have one or multiple ubiquitin molecules added to specific lysine residues, whereby both the number and the location of the ubiquitin moieties have significance with regard to the form of regulation that the substrate protein will be subject to. There are two known E1 enzymes (UBA1 and UBA6), >30 E2 enzymes and over 600 E3 ligases encoded in the human genome. The addition of ubiquitin moieties to specific residues on a substrate protein is, in part, due to pairings of E2 and E3 enzymes. However, it is the E3 ligase enzymes that predominantly confer specificity to the UPS recruitment of substrate proteins [12]. E3 ligases are classified into three main groups based on their structure and function: Really Interesting New Gene (RING), Homologous to E6-AP Carboxyl Terminus (HECT) and RING-between-RING (RBR). RING finger E3 ligases constitute the largest class and are characterised by the presence of a RING domain, a type of zinc finger, that confers E3 ligase activity by binding to a ubiquitin-loaded E2 and mediating the direct transfer of ubiquitin to a substrate protein.
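As a purely schematic illustration of the E1 to E2 to E3 hand-off described above, the toy model below walks ubiquitin through the cascade and flags a substrate for proteasomal degradation once it carries a K48-linked chain of four or more ubiquitins (a canonical degradation signal). Every name, type and threshold here is an illustrative simplification of the text, not a biochemical simulation.

```python
# Toy model of the ubiquitination cascade (illustrative only).
from dataclasses import dataclass

@dataclass
class Substrate:
    name: str
    k48_chain: int = 0  # length of the K48-linked ubiquitin chain

def e1_activate(atp: bool) -> str:
    """E1 activates ubiquitin (ATP-dependent thioester bond)."""
    if not atp:
        raise RuntimeError("E1 activation requires ATP")
    return "Ub~E1"

def e2_conjugate(ub_e1: str) -> str:
    """Ubiquitin is transferred to a cysteine of an E2 enzyme."""
    return ub_e1.replace("E1", "E2")

def e3_ligate(ub_e2: str, substrate: Substrate) -> None:
    """E3 bridges the Ub-loaded E2 and a substrate lysine (K48 chain here)."""
    assert ub_e2 == "Ub~E2"
    substrate.k48_chain += 1

def proteasome_degrades(substrate: Substrate) -> bool:
    """K48 chains of >= 4 ubiquitins canonically signal degradation."""
    return substrate.k48_chain >= 4

target = Substrate("example substrate")
for _ in range(4):  # four rounds of the cascade build a K48 tetra-ubiquitin chain
    e3_ligate(e2_conjugate(e1_activate(atp=True)), target)
print(proteasome_degrades(target))  # True
```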
RING E3 ligases function either as monomers, homo/heterodimers or large multi-subunit complexes, such as the Cullin-RING ligases (CRLs), which generally comprise a RING E3 ligase, a Cullin scaffold and a substrate recognition protein [13]. HECT E3 ligases contain an N-terminal substrate-binding domain and a C-terminal HECT domain containing a catalytic cysteine that accepts a ubiquitin molecule from an E2 before conjugating ubiquitin to a substrate protein [14]. RBR E3 ligases contain two RING domains (RING1 and RING2) with an In-Between-RING (IBR) domain between them, and share common features with both RING and HECT E3s. The RING1 domain binds to a ubiquitin-bound E2 and transfers ubiquitin onto a catalytic cysteine on the RING2 domain before its conjugation to a substrate protein [15].

Ubiquitination Is a Diverse Modification

When it was first described, ubiquitination was thought to be solely a post-translational modification that labelled proteins for degradation via the 26S proteasome through the addition of K48-linked ubiquitin chains. However, numerous additional linkages have been identified that play central roles in diverse biological processes. Currently there are seven known types of lysine-associated ubiquitin linkages (K6, K11, K27, K29, K33, K48 and K63) and one methionine residue (M1) found at the N-terminus. In the most elementary form of ubiquitination, one ubiquitin molecule is added to a lysine residue of a substrate protein. This is referred to as mono-ubiquitination and has been predominantly linked with the regulation of histones [16,17]. Further, multi-mono-ubiquitination, where single ubiquitin molecules are added to multiple lysine residues, has been linked with endocytosis [18]. Additionally, further complexity and versatility in the system has been identified through the discovery of both homotypic and heterotypic chains. Homotypic refers to chains in which ubiquitin molecules are connected through the same lysine residue, while heterotypic chains are conjugated through different lysine residues [19].

The type of ubiquitin linkage determines the form of regulation placed on the substrate protein. K6 ubiquitin linkages remain poorly characterised; however, they have been implicated in the DNA damage response, with K6 ubiquitin linkages found on the tumour suppressor E3 ligase breast cancer 1 (BRCA1) and its substrate proteins [20]. K11 ubiquitin linkages have been associated with both signals for proteasome degradation and regulation of cell cycle progression. For example, the multi-subunit RING E3 ligase anaphase-promoting complex/cyclosome (APC/C) utilizes K11 ubiquitin conjugation during mitosis, as the cell transitions from metaphase to anaphase [21]. K27 ubiquitination is reported to be an important linkage for promoting DNA damage response (DDR) mediators. Gatti et al. found that activation of the DDR at double-strand breaks (DSBs) requires K27 ubiquitination of histone 2A (H2A) by the E3 ligase RNF168 [22]. K29 and K33 ubiquitin modifications have been associated with many roles within the cell, including autophagy, protein trafficking, stress responses and cell cycle regulation [23,24]. The methionine-linked ubiquitin modification (M1), or linear ubiquitin chains, are added to the N-terminus of a substrate protein by the linear ubiquitin chain assembly complex (LUBAC), the only known E3 ligase capable of the addition of these linear chains, and are well characterised for their role in the activation of the transcription factor nuclear factor kappa B (NF-κB) [25].
The best characterised ubiquitin modifications are the addition of K48- and K63-linked poly-ubiquitin chains. While K48-linked ubiquitin chains result in proteolytic degradation by the 26S proteasome, K63-linked modifications are responsible for mediating protein-protein interactions and have been associated with the DDR [26]. For example, the E3 ligase TRAF6 has been shown to assist in the trafficking of DNA repair proteins to sites of DNA damage through K63-linked poly-ubiquitination [27]. In common with most post-translational modifications, ubiquitination is reversible, and ubiquitin removal is carried out by the actions of a complex family of cysteine protease deubiquitinating enzymes, referred to as DUBs. These enzymes act to remove ubiquitin or remodel ubiquitin chains on substrate proteins, allowing for the generation of free ubiquitin molecules that can then be recycled by the UPS in other cellular processes. The balance between ubiquitination and deubiquitination acts to maintain protein homeostasis and protein activities [28].

DNA Replication and Replicative Stress: UPS Surveillance of the Genome

In simple terms, replication is the duplication of the genome: it starts with the DNA double helix and proceeds in a semi-conservative fashion, whereby each strand of the double helix acts as a template for the creation of a new strand; the finished product is two double helices, each containing one old strand and one new strand. In reality, replication is a complex process that is orchestrated by proteins acting almost simultaneously, and it is controlled by a group of enzymes, the checkpoint kinases, that regulate cyclin-dependent kinase activity and respond to perturbations on DNA that risk the integrity of the genome [29]. Replication begins with the assembly of pre-replication complexes (pre-RCs) at multiple sites across the genome. Double-stranded DNA is unwound at these sites by DNA helicases to form a replication fork containing two single-stranded DNA templates, which are subsequently utilized by DNA polymerases to replicate the DNA [30]. Termination of replication occurs upon the convergence of replication forks; DNA synthesis is completed and the replisome dissociates [31]. It is important for both initiation and termination of replication to be tightly regulated to ensure timely cell cycle progression and faithful duplication of the genome.

Initiation of Replication

Initiation of replication begins at specific genomic sites, known as replication origins, and can be divided into two phases, referred to as licensing and firing. Licensing occurs as the cell cycle progresses from M to G1 phase and involves the assembly of pre-RCs at replication origins. Pre-RCs are formed when the origin recognition complex (ORC), made up of six subunits (ORC1-6), recognizes and binds to replication origins. This promotes recruitment of CDT1 and CDC6, which in turn allows the helicase mini-chromosome maintenance complex (MCM2-7) to be loaded onto DNA to form a pre-RC [30]. Origin activation, or firing, subsequently occurs upon entry to S phase, whereby MCM2-7 is activated by the kinases CDK and DDK, triggering the recruitment of CDC45 and the GINS complex to form the functional helicase CMG (CDC45/MCM2-7/GINS) [32]. Activation or firing of origins is reliant on the timely coordination of each of the components to allow unwinding of the double-strand helix and to prevent re-replication of DNA.
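The control logic just described, licensing allowed only in the M-to-G1 window, firing only in S phase, and a fired origin never relicensed within the same cycle, can be captured in a few lines of schematic code. The sketch below is our own simplification for illustration; it models the separation of the two phases, not the actual kinase and ligase network.

```python
# Schematic sketch of the licensing/firing separation (illustrative only).
class Origin:
    def __init__(self) -> None:
        self.licensed = False   # pre-RC assembled (MCM2-7 loaded)
        self.fired = False      # CMG helicase activated

    def license(self, phase: str) -> None:
        # Licensing (pre-RC assembly) is restricted to the M-to-G1 window.
        if phase in ("M", "G1") and not self.fired:
            self.licensed = True

    def fire(self, phase: str) -> None:
        # Firing (CMG activation by CDK/DDK) is restricted to S phase
        # and consumes the licence, so the origin cannot re-fire.
        if phase == "S" and self.licensed:
            self.licensed = False
            self.fired = True

origin = Origin()
for phase in ["M", "G1", "S", "G2"]:   # one pass through the cell cycle
    origin.license(phase)
    origin.fire(phase)
print(origin.fired, origin.licensed)    # True False: fired once, no relicensing
```

Because licensing and firing never overlap in the same phase, each origin can fire at most once per cycle, which is exactly the replication-limiting property the next paragraph attributes to cell cycle-dependent ubiquitination.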
Separation of origin licensing and origin firing into different phases of the cell cycle is the key replication-limiting mechanism, and it is regulated in part through cell cycle-dependent ubiquitination of key replication factors, including CDC6, CDT1 and ORC1 [33]. Ubiquitin-mediated degradation of CDC6 by the E3 ligase complex APC/C Cdh1 in early G1 prevents its accumulation until late G1, where it is required to form pre-RCs [34]. The Cullin-RING ligase CRL4 is recruited to replication origins in G1 by the CRL4 substrate receptor replication initiation determinant protein (RepID) to facilitate initiation of replication [35]. CRL4 containing CDT2 as a substrate recognition subunit targets CDC6 for degradation once cells enter S phase, and the SCF (SKP1-Cullin1-F-box protein) ubiquitin ligase complex with the substrate receptor Cyclin F (SCF Cyclin F) promotes the degradation of CDC6 in late G2 and early M phase, thus preventing origin relicensing [36,37]. This pre-RC protein is also ubiquitinated and marked for degradation upon DNA damage by the large HECT E3 ligase HUWE1 in S and G2 phases, when the APC/C Cdh1 ligase complex is inhibited. Control of CDC6 protein levels during later cell cycle stages by HUWE1 is pivotal in maintaining genome integrity by preventing replication of DNA lesions [38,39]. The removal of CDT1 from DNA replication origins is mediated by the SCF Skp2 E3 ligase complex at the G1-to-S-phase transition and, subsequently, by CUL4 CDT2 in S phase to ensure it is not available for relicensing origins [40]. CUL4 CDT2-mediated degradation of CDT1 is dependent on its binding to proliferating cell nuclear antigen (PCNA), which directly interacts with CDT1 to promote its ubiquitination [41]. Conversely, CDT1 is stabilized in G1 by the APC/C Cdh1-mediated degradation of Geminin, an inhibitor of CDT1 [42]. Finally, after origin firing, ORC1, the largest subunit of the ORC complex, is ubiquitinated and degraded by SCF Skp2 [43]. A schematic of the initiation of replication, along with the regulatory E3 ligases, is given in Figure 2.

Elongation and Termination of Replication

Following origin firing, a number of additional replication proteins, including replication protein A (RPA), PCNA and DNA polymerases, are recruited to nascent replication forks to begin DNA synthesis. This involves the creation of new DNA strands that are incorporated into a double helix with the original template strand; the two strands are joined by hydrogen bonds according to complementary base pairing (Chargaff's rules), whereby adenine (A) pairs only with thymine (T) and cytosine (C) always pairs with guanine (G). The creation of these hydrogen bonds is performed largely by the DNA polymerases ε and δ in a 5′ to 3′ bidirectional manner, whereby the two new strands are synthesized simultaneously. The leading strand is synthesized continuously in the 5′-3′ direction towards the replication fork, while the lagging strand is synthesized discontinuously as small DNA fragments referred to as Okazaki fragments [44]. The synthesis of the leading and lagging strands continues until two replication forks converge [45]. Termination of replication requires the disassembly of the replication machinery from chromatin and is regulated by ubiquitination. Polyubiquitination of MCM7, by the E3 ligase complex CRL2 LRR1, recruits the ATPase VCP/p97 to remove MCM7 from chromatin, leading to disassembly of the MCM complex [46][47][48][49].
Ubiquitination at Stalled Replication Forks

Replication can also be terminated or stalled prematurely when replication forks encounter obstacles such as DNA damage, DNA-protein crosslinks, DNA-RNA hybrids and replication stress. Arresting replication and forming a stalled fork serves to prevent unfaithful DNA replication, but if it persists the stalled fork can result in the formation of double-strand breaks and further compromise the integrity of the genome. Cells have developed sophisticated mechanisms to overcome replication fork barriers, including fork reversal, translesion synthesis (TLS) and template switching (TS). Ubiquitination plays a crucial role in the regulation of fork stability and the DNA damage response at stalled forks; the key players are discussed below.

Regulation of RPA

At stalled replication forks, the replicative DNA helicase and DNA polymerases are uncoupled from the DNA, generating regions of ssDNA. This ssDNA is rapidly bound by RPA, which serves as a signalling platform to recruit factors involved in replication stress and DNA damage responses, as well as in the subsequent restart of stalled forks. RPA is a heterotrimeric protein composed of three subunits, RPA70, RPA32 and RPA14, and can be ubiquitinated at multiple lysines upon replication fork stalling [50]. Optimal loading of RPA onto ssDNA at stalled forks, and the subsequent modification of RPA, requires the timely degradation of the replication stress response regulator SDE2. During replicative stress, SDE2 is first cleaved in a PCNA-dependent manner to generate a C-terminal fragment known as SDE2 Ct. SDE2 Ct is recognized and polyubiquitinated by the UBR1/2 E3 ligase and subsequently extracted and degraded via the VCP/p97 segregase complex. Cells lacking SDE2 Ct fail to induce a ssDNA-RPA platform, leading to defects in PCNA-dependent DNA damage bypass and stalled fork recovery [51]. To date, several E3 ligases are known to modulate RPA's ubiquitination during replication stress, including RFWD3 and PRP19. RFWD3 has recently been shown to facilitate the ubiquitination of RPA subunits both during normal DNA replication and in response to replicative stress. In unperturbed cells, RFWD3 is recruited to and stabilized at replication forks by PCNA, where it targets RPA for proteasomal degradation to allow fork progression to proceed [52]. Cells lacking RFWD3 display an accumulation of RPA and an increased frequency of stalled replication forks. A number of roles have been reported for RFWD3-mediated ubiquitination at stalled replication forks. RFWD3 has been shown to promote non-proteolytic ubiquitination of all three RPA subunits to promote homologous recombination (HR)-dependent fork repair and restart [53]. Inano and colleagues subsequently found that RFWD3 facilitates HR through polyubiquitination of both RPA and RAD51, leading to VCP/p97-mediated degradation [54]. Furthermore, RPA-mediated recruitment of RFWD3 to stalled replication forks is essential for the repair of DNA interstrand crosslinks (ICLs), lesions that inhibit DNA strand separation and therefore block replication. Mutations in RFWD3 lead to defects in ICL repair by disrupting RPA-RFWD3 binding at ICL-induced stalled replication forks and have been associated with Fanconi anaemia (FA), a rare genetic disorder characterised by genomic instability and predisposition to cancer [55].
Mutations in BRCA2, which functions in replication fork stability and HR, are also associated with FA, and RFWD3 has been shown to affect stalled fork stability in BRCA2-mutant cells. In the absence of BRCA2, RPA is hyperubiquitinated by RFWD3 at stalled forks, contributing to fork instability and collapse [56]. PRP19 is an essential U-box E3 ligase that is well known for its role in pre-mRNA processing. While RFWD3 is constitutively associated with RPA, PRP19 binds and ubiquitinates RPA after DNA damage [57]. In the absence of this enzyme, cells exhibit a heightened sensitivity to inducers of replication stress, including UV and hydroxyurea (HU). Furthermore, knockdown of PRP19 leads to reduced K63-linked ubiquitination of the RPA70 and RPA32 subunits during induced replication stress, with an associated attenuation of ataxia telangiectasia and Rad3-related (ATR) signalling, alongside a subsequent decrease in the abundance of the phosphorylated ATR substrates RPA and checkpoint kinase 1 (Chk1) [58].

PCNA Ubiquitination

Modification of PCNA by ubiquitin plays a crucial role in rescuing stalled replication forks. Monoubiquitination of PCNA at K164 by the E2-E3 ubiquitin ligase complex Rad6-Rad18 promotes error-prone TLS-mediated replication, a process which uses TLS polymerases to replicate across the DNA lesion [59]. Meanwhile, K63-linked polyubiquitination of PCNA at K164 by helicase-like transcription factor (HLTF) promotes TS, which employs the newly synthesized daughter strand as a template to bypass the DNA lesion [60]. K63-linked polyubiquitination of PCNA is also important for replication fork reversal and restart. The translocase ZRANB3 is recruited to K63-linked PCNA to stabilize stalled forks and facilitate replication restart [61]. The HECT E3 ligase HUWE1 has also been reported to relieve replicative stress in cells by facilitating fork restart through its interaction with PCNA. Choe et al. [62] demonstrated that HUWE1 binding to PCNA at stalled forks resulted in the recruitment of DNA repair machinery through HUWE1-mediated mono-ubiquitination and subsequent phosphorylation of the DNA damage marker H2AX. The DNA repair proteins, including BRCA1 and BRCA2, allow repair of the DNA and restart of the replication fork; HUWE1 thus promotes the repair and ultimately the restart of stalled forks, aiding the integrity of the genome that would otherwise be compromised by the creation of DNA breaks from prolonged fork stalling [63]. PCNA is also ubiquitinated by several other E3 ligases, including CRL4 CDT2 and BRCA1, and this likely serves as an additional control mechanism to reduce the number of stalled forks and limit the incidence of DSBs [64].

TRAIP-Mediated Regulation of Replisome Stability

TRAF-interacting protein (TRAIP) is a replisome-associated RING E3 ligase with important roles in replication and in promoting genomic stability. In response to replication-blocking lesions such as ICLs or DNA-protein crosslinks (DPCs), TRAIP-mediated ubiquitination promotes the completion of DNA replication in a number of ways. ICLs can be repaired by two pathways: the FA pathway can create a DSB that is repaired through HR, or the DNA glycosylase NEIL3 can unhook or cleave the crosslink. TRAIP functions upstream of these pathways and can determine the pathway choice. Convergence of replication forks at a crosslink triggers TRAIP-mediated ubiquitination of the CMG helicase. Short ubiquitin chains recruit NEIL3 to unhook the ICL, thereby allowing completion of replication.
Alternatively, the ubiquitin chain can be extended to facilitate CMG unloading by VCP/p97, enabling the FA pathway and subsequent HR repair of the lesion [65]. DPCs block the progression of DNA replication, and the arrival of a replication fork at a DPC triggers TRAIP-mediated ubiquitination of the DPC, in turn promoting CMG bypass of the lesion and proteasomal degradation of the DPC [66]. Another important function of TRAIP in preserving genome stability is triggering replisome unloading in mitosis. Failure to rescue stalled forks or repair DNA damage can result in unreplicated DNA persisting into mitosis, which can lead to mitotic defects including chromosomal rearrangements. TRAIP promotes replisome disassembly in mitosis through K6- and K63-linked ubiquitination of MCM7, leading to CMG unloading by VCP/p97 [67].

R-Loop-Induced Stress

The formation of DNA:RNA hybrids, referred to as R-loops, poses a threat to fork progression and ultimately acts as a source of replication stress that can contribute to genomic instability. While the exact mechanisms of how R-loops are formed and add to instability in the genome are not yet delineated, there is evidence of UPS-mediated regulation of R-loops during replication to counteract replicative stress (Figure 3). By their nature, R-loops leave single-stranded DNA (ssDNA) exposed and susceptible to harmful lesions, with an additional risk of transcription-associated mutagenesis [68]. Two E3 ligases, MDM2 and RNF2, have been reported to act to prevent the formation of R-loop structures that would impair replication. They achieve this by promoting the mono-ubiquitination of H2A at K119, with coordinated deubiquitination by the DUB enzyme BAP1, which removes the ubiquitin modification when appropriate [69,70]. It is this balance of ubiquitination and deubiquitination that supports DNA replication and prevents the formation of R-loops. Pharmacological targeting of MDM2 is currently under investigation with the aim of potentially sensitising cancer cells to topoisomerase inhibitors, drugs that induce R-loop formation [62,63]. The rationale for this might be explained by the findings of Klusmann et al. [70], where depletion of MDM2 left cells predisposed to the occurrence of these DNA:RNA structures and thereby to the genomic instability that they promote. Interestingly, overexpression of MDM2 has a similar effect, with cells exhibiting heightened levels of replication stress and cell cycle arrest.

The UPS Facilitates the DNA Damage Response

A major obstacle for genome stability is avoiding the acquisition of damage during S phase [71]. The cell has developed sensors, the checkpoint kinases, that initiate cascades of cellular signalling which halt replication and cell cycle progression, allowing the recruitment of DNA repair machinery to the site of damage with the goal of repairing the damage and restoring the normal cell cycle. In response to genotoxic stress, DNA damage is recognized by the kinases ATM and ATR, which coordinate a network of cellular processes to maintain genomic integrity, including the DDR, made up of multiple pathways for the detection and repair of different types of DNA damage [72]. Specific repair mechanisms are designated for single-strand breaks (SSBs), including the base excision repair (BER) and nucleotide excision repair (NER) pathways, while in the case of DSBs the homologous recombination (HR) and non-homologous end joining (NHEJ) pathways are activated [73].
These repair pathways operate at different points in the cell cycle, ensuring that minimal amounts of damage are replicated and passed on. As with any physiological process, post-translational modifications such as ubiquitination contribute to the tight control of DDR signalling and prevent aberrant activation of DNA damage repair. Ubiquitin-mediated regulation of the individual DDR pathways has been extensively reviewed, and here we focus predominantly on ubiquitin regulation of upstream DNA damage sensing and DDR signalling via ATM and ATR.

UPS Regulation of ATR-Mediated Repair

A broad spectrum of DNA damage stimuli, such as UV radiation, replication stress and interstrand DNA crosslinking agents, results in activation of ATR, a kinase that activates and recruits a number of substrates, including the protein kinase Chk1 (checkpoint kinase 1). ATR-Chk1 signalling promotes the degradation of the CDC25A phosphatase via the UPS and prevents the de-phosphorylation of cyclin-dependent kinases (Cdc2/cyclin B1), leading to a halt in replication and cell cycle progression. Two E3 ligase complexes, APC/C Cdh1 and SCF βTrCP1/2, play a role in the regulation of CDC25A protein levels upon DNA damage in a cell cycle-dependent manner [74]. During late mitosis and G1 phase, the APC/C Cdh1 ligase is responsible for regulating the abundance of CDC25A; this E3 ligase labels it with ubiquitin by recognizing a specific sequence known as a KEN-box motif found on the N-terminus of the protein. During S and G2 phase, CDC25A is instead modulated by an SCF ligase containing a βTrCP1 or βTrCP2 F-box protein [74]. The SCF βTrCP1/2 ligase recognizes phosphorylated CDC25A and ubiquitinates it, thus promoting its degradation via the 26S proteasome. CDC25A is predominantly phosphorylated at Ser76 by the ATR-Chk1 axis, in collaboration with a DNA-binding protein known as claspin and the Rad9-Rad1-Hus1 complex. Other paralogs of CDC25, including CDC25B and CDC25C, are regulated by Tribbles homolog 2 (TRIB2), a member of the Tribbles family of serine/threonine pseudokinases, which promotes their ubiquitination and degradation. TRIB2 is thought to act as an adaptor protein, working in conjunction with a currently unknown E3 ligase to promote the addition of K48 ubiquitin linkages and thereby regulate the G2/M DNA damage checkpoint [75,76]. Mutations in both βTrCP1 and βTrCP2 have been reported in cancers, potentially leading to stabilisation and accumulation of CDC25A and subsequent replication stress and genomic instability [77].

UPS Regulation of ATM-Mediated Repair

While ATR can be activated by a range of stimuli, ATM is predominantly activated by DSBs [78]. ATM is brought to the site of the DSB by the MRN complex (MRE11-RAD50-NBS1) in a ubiquitin-dependent manner. The E3 ligase Skp2 attaches K63-linked polyubiquitin chains to the MRN subunit NBS1, which in turn recruits ATM for activation [79,80]. Upon recruitment of ATM to the site of the DSB, it phosphorylates a plethora of substrates, including the protein kinase Chk2, histone H2AX and the tumour suppressor p53, to mediate effects on DNA repair, cell cycle arrest and apoptosis. Phosphorylation of Chk2 serves to amplify and expand ATM-mediated signalling. The tumour suppressor p53 is stabilized upon DNA damage and plays a central role in maintaining genome stability. ATM phosphorylates both p53 and its inhibitor, the E3 ligase MDM2, and this serves both to activate p53 and to protect it from MDM2-mediated polyubiquitination and proteasomal degradation [81].
Once activated, p53 can facilitate DNA repair by inducing a cell cycle arrest, allowing time for DNA repair, and it can also directly impact many of the DDR signalling pathways [82]. p53 is frequently mutated or deleted in cancer, and wild-type p53 expression can also be downregulated through MDM2 overexpression, leading to genomic instability. Phosphorylation of H2AX on Ser139 initiates the recruitment of DNA repair complexes, in part through ubiquitin signalling. Phosphorylated H2AX, or γH2AX, recruits the mediator of DNA damage checkpoint protein 1 (MDC1) to sites of DNA damage [83]. Here MDC1 undergoes ATM-mediated phosphorylation and in turn recruits the E3 ligases RNF8 and RNF168 [84]. RNF8 facilitates the K63-linked poly-ubiquitination of H1 linker histones at the site of double-strand breaks, which in turn mediates the mono-ubiquitination of H2A-type histones on K13 and K15 by RNF168 [85]. The ubiquitination of these histones results in an eventual accumulation of the repair proteins p53-binding protein 1 (53BP1) and BRCA1. These proteins have reciprocal roles in the repair of DSBs: 53BP1 is associated with NHEJ repair of the DSB, while BRCA1 promotes HR-mediated repair. The actions of RNF8 and RNF168 are instrumental in the cell's ability to repair DSBs, with studies in mice showing that knockout of either of these ligases predisposes to cancer development [86,87]. The E3 ligase RNF4 is another key player in DSB repair through its effects on MDC1 and downstream DDR factors. RNF4 is a Small Ubiquitin-like Modifier (SUMO)-targeted ubiquitin ligase (STUbL) that specifically recognizes and ubiquitinates proteins modified with SUMO. While MDC1 is required for the recruitment of DDR factors, its removal from DSBs is required for HR-mediated repair. SUMOylation of MDC1 by the SUMO E3 ligases PIAS1 and PIAS4 recruits RNF4 to promote turnover of MDC1 via the proteasome, thereby facilitating access of other DDR factors to sites of damage [88]. The loading of HR proteins, including RPA and RAD51, onto the DNA is an important part of this DDR pathway. During HR-mediated repair of DSBs, ssDNA is coated with RPA subunits to prevent the ssDNA from binding to itself before RAD51 can be recruited. In order for RAD51 to be loaded onto the DNA by its binding protein BRCA2, the RPA must first be removed [89]. RNF4 also plays a key role in regulating RPA turnover and BRCA2-mediated RAD51 loading. During HR, PIAS1 and PIAS4 SUMOylate RPA70, which recruits RNF4 to RPA70, leading to ubiquitin-mediated degradation of RPA70 [90]. In the absence of RNF4, BRCA2 is not efficiently recruited to sites of DNA damage and cells exhibit an HR repair defect, similar to the HR deficiency that results from mutations in the BRCA1 or BRCA2 genes, a so-called 'BRCAness' phenotype [90]. While there are adverse effects of an HR defect, such as increased mutagenesis and genomic instability, there is also a targetable therapeutic vulnerability in tumour cells harbouring an HR repair defect. BRCA1/2 mutants or tumours displaying BRCAness exhibit a heightened sensitivity to PARP inhibition, eliciting cell death via a synthetic lethality mechanism [91,92]. Inhibition of PARP in HR-deficient cells prevents repair of SSBs in an already DSB-repair-defective setting, leading to replication fork collapse, unrepaired DNA damage and cytotoxicity. Inhibition of RNF4 could offer an additional mechanism to sensitize cancer cells to PARP inhibition (Figure 4).
Therapeutic Interventions

Targeting the UPS has been at the forefront of many biomedical research laboratories across the world for the last few decades, with the development of the first-in-class proteasome inhibitor bortezomib and additional second-generation proteasome inhibitors swiftly following. More recently there has been an influx of research into components of the UPS that precede the proteasome, including drugs designed to target the ubiquitin-conjugating (E2) and ubiquitin ligase (E3) enzymes, as well as therapies focused on the DUBs. Here we focus on those aimed at DNA replication and repair pathways, with an overview of compounds in clinical development presented in Table 1.

The Proteasome

Inhibition of proteasome function is established as a powerful anti-cancer strategy for some haematological malignancies. Bortezomib was introduced into the clinic for the treatment of multiple myeloma in 2003 and mantle cell lymphoma in 2006 and has contributed towards improved survival for many patients [93,94]. Following the success of bortezomib, the second-generation proteasome inhibitors carfilzomib and ixazomib were subsequently approved for clinical use [95,96], and additional proteasome inhibitors (oprozomib, marizomib) are in clinical trials [97,98]. While the inhibitors differ in their pharmacodynamic properties, the key molecular target of all of them is the β5 catalytic subunit [99]. Proteasome inhibitors are largely thought to exert their anti-cancer effect by inducing an acute proteotoxic effect. However, this proteotoxicity has been shown to also have implications for the DDR. When the proteasome is inhibited, the result is a reduction of free ubiquitin in the nucleus, abrogation of H2AX ubiquitination and decreased recruitment of BRCA1 and Rad51 to sites of DSBs, leading to impaired HR. This so-called 'BRCAness' phenotype, induced by proteasome inhibitors, sensitizes cells to PARPi in a similar manner to the synthetic lethality observed with PARPi in BRCA-deficient tumours [100].

MDM2

The tumour suppressor p53 is the most frequently mutated gene in cancer and is fittingly the most studied human gene [101]. Playing a central role in the DDR, it has been dubbed the guardian of the genome and has an influence on DNA repair, the cell cycle and apoptosis. It is heavily regulated by the UPS, predominantly by ubiquitin-mediated degradation by MDM2, but also through ubiquitination by the E3 ligases HUWE1, p53-induced RING H2 (Pirh2) and constitutive photomorphogenesis protein 1 homolog (COP1), among others [102]. One of the major avenues of research for targeting p53 is the use of small-molecule inhibitors of MDM2. MDM2 is overexpressed in many malignancies, including lung, liver and colorectal cancers and many blood neoplasms. The first selective inhibitors of MDM2 were the nutlins, imidazoline derivatives that function as competitors of p53-MDM2 binding [103]. The consequence of this reduced p53-MDM2 binding is an accumulation of p53 and its substrates and a subsequent increase in apoptosis. The nutlins competitively bind to the hydrophobic pocket of MDM2 by mimicking the three crucial amino acids required for p53 binding (Phe19, Trp23, Leu26), as determined by crystallographic studies [104]. Another compound developed to inhibit MDM2 is 5-deazaflavin, which targets the RING finger domain of the protein to obstruct its E3 ligase activity. This results in increased p53 levels due to stabilisation of the protein, with an associated increase in p53-mediated apoptotic activity.
Deazaflavin analogues function to promote p53 activity by hampering MDM2's ability to ubiquitinate both p53 and itself [105]. The development of MDM2 inhibitors is a promising strategy not only for their anti-tumour effect but also for overcoming chemotherapy resistance, a common clinical problem. Preclinical and early clinical studies have been encouraging, and a growing number of MDM2 inhibitors are undergoing clinical evaluation [106].

The Anaphase Promoting Complex

Another potential therapeutic target in the UPS is the anaphase promoting complex (APC/C). The APC/C uses one of two coactivators for substrate recognition, CDC20 and CDH1, which activate the complex at distinct phases of the cell cycle. APC/C CDC20 primarily controls progression from metaphase to anaphase and mitotic exit, while APC/C CDH1 is primarily active through mitotic exit and early G1 [107]. Two small-molecule inhibitors of the APC/C have been developed, pro-TAME and Apcin, which function by distinct mechanisms. Apcin disrupts the interaction of CDC20 with APC/C substrates, while pro-TAME blocks the activity of both APC/C CDC20 and APC/C CDH1. A number of pre-clinical studies have demonstrated an anti-cancer effect for these compounds, with a combination of Apcin and pro-TAME eliciting a greater effect than either compound alone [108].

Cullin-RING Ligases

The CRL protein family is the largest family of multicomponent E3 ligases and is involved in the regulation of many biological processes. Activation of CRLs requires the conjugation of NEDD8 to a key lysine residue at the C-terminus of Cullins, a process similar to ubiquitination referred to as neddylation. A number of CRLs are involved in the regulation of DNA replication, including the SCF complex and CRL4. One promising inhibitor of CRLs is MLN4924, a small-molecule inhibitor of the NEDD8-activating enzyme (NAE) [109]. MLN4924 exhibits anti-cancer effects in numerous cell types and has been demonstrated to stabilize the replication licensing factor CDT1, through inhibition of SCF and CRL4, leading to re-replication and DNA damage and eliciting a G2 cell cycle arrest [110]. MLN4924 is currently undergoing early-phase clinical evaluation across a range of cancer types (Table 1). In addition to general CRL inhibition, another strategy under investigation is specific inhibition of the SCF subunit SKP2. The F-box protein SKP2 is overexpressed in many cancers and is associated with an inferior prognostic outcome in gastric, colon and breast cancers [111,112]. It negatively regulates the abundance of cyclin-dependent kinase inhibitors, including p27, p21 and p57, and so it is not surprising that overexpression of this SCF E3 ligase complex could result in uncontrolled cell cycle progression. The continuous replication of unchecked DNA allows harmful mutations to be passed on without correction and results in loss of genomic stability and, ultimately, tumorigenesis. A small-molecule inhibitor of SKP2, known as compound 25, was identified through an in silico screen [113]. Compound 25 was found to significantly attenuate the Skp1-SKP2 interaction and displayed synergy with other chemotherapeutics both in vitro and in vivo. More recently, Li et al. [114] reported that treatment with SMIP004, an additional SKP2 inhibitor, led to increased sensitivity to radiation in human breast cancer cell lines in vitro, with similar results obtained in breast cancer cell xenografts.
Hijacking an E3 Ligase

E3 ligases are increasingly being demonstrated to contribute to oncogenesis and have become a promising target in cancer. The development of proteolysis-targeting chimeras (PROTACs) over the last decade has shown potential in targeting proteins, including c-MET, MCL-1, MYC and TRIM24, which were once deemed undruggable [115]. PROTACs recruit and link E3 ligases to a protein of interest by acting as a bridge between the enzyme and the substrate. The protein is then modified with K48-linked ubiquitin chains and is subsequently degraded by the 26S proteasome. The mode of action of PROTACs is depicted in Figure 5. This manipulation of the UPS has the potential to find, bind and reduce the abundance of oncogenic proteins within the cell, thereby eliciting an anti-cancer effect [116][117][118]. PROTACs targeting MDM2 and PCNA have recently been described, highlighting the potential of this approach in targeting genomic instability [119,120]. While further research into PROTACs is required to ensure clinical efficacy and safety, their ability to effectively degrade hard-to-target proteins and their potential in overcoming drug resistance will no doubt gain them entry to the arsenal of anti-cancer treatments in the future.

Conclusions and Future Perspectives

Coordination of DNA replication, including origin firing, the rescue of stalled forks and termination, is paramount to maintaining genome stability. The cell has developed many cellular cascades that respond to DNA damage and replication stress. Their response acts to facilitate repair, cell cycle arrest and, when this does not suffice, apoptosis. Deregulation of the DDR and replisome machinery fuels the genomic instability needed to drive cancer cell development and the clonal evolution that affords tumour cells resistance to chemotherapies. In the past, recognition of the deregulation of members of DNA repair and replication pathways led to the discovery of novel therapeutics such as PARP inhibitors, which highlighted genomic stability as an Achilles heel of tumours with defective DNA repair mechanisms. Now, studies of other mediators of replication fork protection, including RPA and RAD51, of their roles in cancer and of the development of chemo-resistance are starting to be elucidated. This may set the stage for the development of therapies aimed at the regulators of these proteins, in an attempt to manipulate DNA repair pathways by disrupting the abundance of the machinery, which may help in the fight against drug resistance. With further research into the ubiquitin proteasome system and its role as a regulator of genome stability, it is likely that novel therapies, such as specific small-molecule inhibitors and better-defined PROTACs, will emerge.
A Geometric Approach to Active Learning for Convolutional Neural Networks Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe: training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labeled from a very large collection (i.e. active learning). In this paper, we first show that uncertainty based active learning heuristics are not effective for CNNs even in an oracle setting. Our counterintuitive empirical results make us question these heuristics and inspire us to come up with a simple but effective method: choosing a set of images to label such that they cover the set of unlabeled images as closely as possible. We further present a theoretical justification for this geometric heuristic by giving a bound on the generalization error of CNNs. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin. Introduction Deep convolutional neural networks (CNNs) have shown unprecedented success in many areas of research in computer vision and pattern recognition, like image classification, object detection, and scene segmentation. Although CNNs are universally successful in many tasks, they have a major drawback: they need a very large amount of labeled data to be able to learn their millions of parameters. More importantly, it is almost always better to have more data, since the accuracy of CNNs is often not saturated with increasing dataset size. Hence, there is a constant desire to collect more and more data. Although this is the desired behavior from an algorithmic perspective (higher representative power is typically better), labeling a dataset is a time-consuming and expensive task. These practical considerations raise a critical question: what is the optimal way to choose data points to label such that the highest accuracy can be obtained given a fixed labeling budget? Active learning is one of the common paradigms to address this question. The goal of active learning is to find effective ways to choose data points to label, from a pool of unlabeled data points, in order to maximize the accuracy. Although it is not possible to obtain a universally good active learning strategy [5], there exist many heuristics [29] which have been proven to be effective in practice. However, most of these heuristics are typically not effective for CNNs. The prevalent belief explaining this behavior is CNNs' tendency to make very confident mistakes. It is empirically observed that when CNNs make mistakes, they can assign arbitrary confidence values to their decisions. In other words, it is typically not possible to deduce that a CNN is uncertain solely by looking at its outputs. Although we agree with this observation, our empirical analysis suggests that this is not the main reason behind the ineffectiveness of active learning for CNNs. Following our empirical study, we decided not to adopt an uncertainty based method, and to approach the problem from a geometric perspective. We hypothesize that given a large dataset, the desired property of the set of labeled points is to cover the set of unlabeled ones as closely as possible (see Figure 1 for a visualization).
In other words, we find a set of points to label such that, when they are labeled, every remaining unlabeled point in the dataset will have a close labeled neighbor. We formulate this space-covering property as an optimization problem and present an efficient solution. We carry out an in-depth analysis of our algorithm both theoretically and empirically. We study the generalization error of CNNs in a realistic setting and present a bound on the difference between the population risk and the empirical risk. We further consider the active learning case and present a bound on the risk of the unlabeled data points in terms of the maximum distance between unlabeled points and their labeled nearest neighbors. We further study the behavior of our proposed algorithm empirically for the problem of image classification using three different datasets. Our empirical analysis demonstrates state-of-the-art performance by a large margin. Related Work We discuss the related work in the following categories separately. Briefly, our work is different from existing approaches since i) it specifically targets CNNs, ii) we consider both fully supervised and weakly supervised cases, and iii) we theoretically analyze our algorithm. Active Learning Active learning has been widely studied and most of the early work can be found in the classical survey [29]. It discusses most query strategies, such as information theoretical methods [22], ensemble approaches [23,9] and uncertainty based methods [32,19,16,20]. Bayesian active learning methods typically use a non-parametric model like a Gaussian process to estimate the expected improvement from each query [17] or the expected error after a set of queries [27]. These approaches are not applicable to deep learning scenarios since they do not scale to large-scale datasets. Ensemble methods are also not applicable to deep learning due to the large parameter space of neural networks. Such ensemble methods require an intractable number of networks to be trained. One important class is that of uncertainty based methods, which try to find hard examples using heuristics like highest entropy [16] and geometric distance to decision boundaries [32,3]. We present an empirical result in Section 4.1 which motivated us to move away from such techniques. We empirically demonstrate that, even in the oracle case, such algorithms are not effective for CNNs. There are recent optimization based approaches which can trade off uncertainty and diversity to obtain a diverse set of hard examples. Elhamifar et al. [8] design a discrete optimization problem for this purpose and use its convex surrogate. However, the algorithm uses n^2 variables, where n is the number of data points. Hence, it does not scale to the deep learning case. There are also many discrete optimization based active learning algorithms designed for specific classes of machine learning algorithms like k-nearest neighbors and naive Bayes [36]. Even in the algorithm-agnostic case, one can design a set-cover algorithm to cover the hypothesis space using sub-modularity [13,10]. Our algorithm can be considered to be in this class; however, we do not use any uncertainty information. Our algorithm is also the first one which applies to CNNs. Recently, a discrete optimization based method [2] which is similar to ours has been presented for k-NN type algorithms in the domain shift setting. Although our theoretical analysis borrows many techniques from [2], their results are only valid for k-NN and are not applicable to CNNs.
To the best of our knowledge, the only active learning algorithm for CNNs is presented in [35]. It is a heuristic based algorithm which directly assigns labels to the data points with high confidence and queries labels for the ones with low confidence. We discuss its limitations in Section 6. Unsupervised Subset Selection The closest literature to our work is the problem of unsupervised subset selection. This problem considers a fully labeled dataset and tries to choose a subset of it such that the model trained on the selected subset will perform as closely as possible to the model trained on the entire dataset. For specific learning algorithms, there are methods like core-sets for SVM [33] and core-sets for k-Means and k-Medians [15]. The most similar algorithm to ours is the unsupervised subset selection algorithm described in [37]. It uses a facility location problem to find a diverse cover for the dataset. Our algorithm differs in that it uses a slightly different formulation of the facility location problem. Instead of the min-sum, we use the minimax [38] form of the facility location. More importantly, we apply this algorithm for the first time to the problem of active learning and provide theoretical guarantees. Weakly-Supervised Deep Learning Our paper is also related to semi-supervised deep learning since we experiment with active learning in both the fully-supervised and the weakly-supervised scheme. One of the early weakly-supervised convolutional neural network algorithms was Ladder networks [26]. Recently, we have seen adversarial methods which can learn a data distribution as a result of a two-player non-cooperative game [28,11,25]. These methods are further extended to feature learning [7,6]. We use Ladder networks in our experiments since adversarial architectures are notoriously hard to train. Our algorithm is agnostic to the weakly-supervised learning algorithm choice and can easily use any weakly-supervised or fully-supervised model. Problem Definition In this section, we formally define the problem of active learning and set up the notation for the rest of the paper. We are interested in a C class classification problem defined over a compact space X and a label space Y = {1, . . . , C}. We also consider a loss function l(·, ·; w) : X × Y → R parametrized over the hypothesis class w, e.g. the parameters of the deep learning algorithm. We further assume the class-specific regression functions η_c(x) = p(y = c | x) to be λ^η-Lipschitz continuous for all c. We consider a large collection of data points sampled i.i.d. over the space Z = X × Y as {x_i, y_i}_{i∈[n]} ∼ p_Z, where [n] = {1, . . . , n}. We further consider an initial pool of data points chosen uniformly at random as s^0 = {s^0(j) ∈ [n]}_{j∈[m]}. An active learning algorithm only has access to {x_i}_{i∈[n]} and {y_{s^0(j)}}_{j∈[m]}. In other words, it can only see the labels of the points in the initial sub-sampled pool. It is also given a budget b of queries to ask an oracle, and a learning algorithm A_s which outputs a set of parameters w given a labeled set s. The active learning with a pool problem can simply be defined as min_{s^1 : |s^1| ≤ b} E_{x,y∼p_Z}[l(x, y; A_{s^0 ∪ s^1})]. In other words, an active learning algorithm can choose b extra points and get them labeled by an oracle to minimize the future expected loss. There are a few differences between our formulation and the classical definition of active learning. Classical methods consider the case in which the budget is 1 (b = 1), but a single point has a negligible effect in the deep learning regime; hence, we consider the batch case. It is also very common to consider multiple rounds of this game.
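As a schematic illustration of this multi-round game, the following Python sketch spells out the pool-based loop described above; all names (query_oracle, train, select) are our own placeholders standing in for the annotator, the learner A_s, and a query strategy, not part of any published implementation.

```python
def active_learning_rounds(query_oracle, s0, budget, rounds, train, select):
    """Myopic multi-round pool-based active learning (schematic).

    query_oracle : callable idx -> label; stands in for the human annotator
    s0           : indices of the initial uniformly sampled pool
    train        : callable dict{idx: label} -> model (the learner A_s)
    select       : callable (model, labeled_idx, budget) -> batch of indices
    """
    labels = {i: query_oracle(i) for i in s0}        # initial pool s^0
    model = train(labels)
    for _ in range(rounds):
        batch = select(model, set(labels), budget)   # choose s^{k+1}
        labels.update({i: query_oracle(i) for i in batch})
        model = train(labels)                        # retrain from scratch
    return model
```

The two stages of each round (query a batch, then retrain) map directly onto the select and train callables.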
We also follow the multiple round formulation with a myopic approach by solving a single round of labeling as min_{s^{k+1} : |s^{k+1}| ≤ b} E_{x,y∼p_Z}[l(x, y; A_{s^0 ∪ ... ∪ s^{k+1}})]. We only discuss the first iteration, where k = 0, for brevity, although we apply it over multiple rounds. At each iteration, an active learning algorithm has two stages: 1. identifying a set of data points and presenting them to an oracle to be labeled, and 2. training a classifier using both the new and the previously labeled data points. The second stage (training the classifier) can be done in a fully or weakly-supervised manner. Fully-supervised is the case where training the classifier is done using only the labeled data points. Weakly-supervised is the case where training also utilizes the points which are not labeled yet. Although the existing literature only focuses on active learning for fully-supervised models, we consider both cases and experiment on both. Active Learning as a Set Cover When there is no direct measure of uncertainty over the hypothesis class, the active learning problem is typically considered as refining decision boundaries by querying hard examples. Hence, using uncertainty is an empirically proven heuristic. However, this heuristic has very limited success in CNNs. This is widely attributed to the fact that CNNs typically make very confident mistakes and the confidence values computed via soft-max outputs do not correspond to the true confidence of the model. Here, we focus on the following more fundamental question: would classical query methods work for CNNs if CNNs had an accurate uncertainty estimate? Although the common-sense answer is affirmative, our empirical analysis shows that this is typically not the case. We describe this experiment in detail in Section 4.1. Inspired by the empirical observation on the ineffectiveness of uncertainty based approaches, we propose not using uncertainty information, and instead approach the problem from a purely geometric perspective. We design an algorithm based on the heuristic of covering the set of unlabeled data points as closely as possible. We explain this algorithm in detail in Section 4.2 and further analyze it empirically in Section 6 and theoretically in Section 5. Ineffectiveness of Uncertainty based Methods It is common to attribute the ineffectiveness of uncertainty based methods in CNNs to the inaccuracy of the uncertainty estimates based on soft-max outputs. The common hypothesis is the following: Deep learning algorithms lead to an inaccurate estimate of uncertainty, hence the uncertainty based active learning methods fail. Although this hypothesis is intuitive considering the many confident mistakes CNNs make, it is not enough to answer a more fundamental question: If CNNs produced accurate estimates of uncertainty, would uncertainty based active learning methods work for CNNs? We can answer this question by simply replacing the uncertainty estimate in active learning with the oracle ground-truth loss. In other words, we replace the uncertainty with l(x_i, y_i, A_{s^0}) for all unlabeled examples x_i. Since this is the oracle for the estimation of the uncertainty, in practice the uncertainty based methods are expected to be upper bounded by this oracle. We sample the queries from the normalized form of this function by setting the probability of choosing the i-th point to be queried as p_i = l(x_i, y_i, A_{s^0}) / Σ_j l(x_j, y_j, A_{s^0}); as oracle losses we use the 0-1 loss 1[y_i ≠ argmax_c h_c(x_i, w)] and the soft loss 1 − h_{y_i}(x_i, w), where 1[·] is the indicator function and h_c(·, w) is the activation of the c-th softmax output given network weights w.
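A minimal sketch of this oracle sampling rule, assuming the per-point oracle losses have already been computed; the exploration mixture anticipates the variant discussed in the next paragraph, and all function names are ours.

```python
import numpy as np

def sample_oracle_queries(losses, budget, explore=0.0, seed=None):
    """Sample a batch of query indices proportionally to the oracle loss.

    losses  : (n,) array of l(x_i, y_i, A_{s0}) over the unlabeled points
    budget  : batch size b
    explore : probability mass given to uniform sampling (e.g. 0.2)
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(losses, dtype=float)
    p = p / p.sum()                               # normalized oracle density
    uniform = np.full(len(p), 1.0 / len(p))
    mixed = (1.0 - explore) * p + explore * uniform
    return rng.choice(len(p), size=budget, replace=False, p=mixed)
```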
As the oracle, we use the maximum accuracy obtained by querying based on either of these loss functions. We perform this experiment for the fully supervised and weakly supervised cases and plot the results in Figure 2. (See Section 4.3 for implementation details.) Results in Figure 2 suggest that even in the oracle case, uncertainty based methods are not effective for CNNs when compared with random sampling. We even observe that they cause the accuracy to drop in the weakly-supervised case. Hence, the aforementioned hypothesis is not entirely correct, at least in the batch setting, and we can conclude that an inaccurate estimate of uncertainty does not explain the failure of uncertainty based active learning methods in CNNs in the batch setting. We believe this counterintuitive result is mostly due to the fact that we are sampling/labeling images in batches instead of querying them one by one. The batch sampling of queried samples creates strong correlation among the chosen data points. On the other hand, querying images one by one is not desired since a single point has no significant effect in deep learning due to SGD. In order to fix this, we perform the same experiment with exploration, sampling points from the oracle density with probability 0.8 and sampling uniformly with probability 0.2. We plot the oracle with exploration in the same figure. Although it helps, the oracle still does not outperform random sampling. In order to visualize the correlation, consider an embedding of images computed using the tSNE [21] algorithm based on features learned after incorporating the entire dataset in learning. We plot the images in the initial pool (s^0), the chosen images for labeling (s^1), and the remaining images ([n] \ (s^0 ∪ s^1)) with separate colors in Figure 1-a. As shown in Figure 1-a, the oracle algorithm fails to cover the space efficiently, creating a bias. A successful set of queries must not only be hard negatives but must also cover the space efficiently. Hence, we believe covering the space effectively is very important for CNNs, and we design an algorithm purely based on space covering in Section 4.2. We also show the tSNE plot for our algorithm in Figure 1-b. The Algorithm We hypothesize that a good way to choose points to be labeled is to cover the unlabeled data points as closely as possible with labeled points. For example, consider a set of balls with radius δ centered at labeled points covering the entire unlabeled dataset. Intuitively, a smaller δ should indicate a better performance. Hence, we try to choose a subset of points which can minimize δ as an active learning strategy. Our algorithm is simply based on the k-Center problem (minimax facility location [38]), which can intuitively be defined as follows: choose k center points such that the largest distance between a data point and its nearest center is minimized. Formally, we are trying to solve min_{s^1 : |s^1| ≤ b} max_i min_{j∈s^1∪s^0} ∆(x_i, x_j). Unfortunately this problem is NP-Hard [4]. However, it is possible to obtain a 2-OPT solution efficiently using the greedy approach shown in Algorithm 1 (k-Center-Greedy; input: data {x_i}, existing pool s^0 and a budget b): initialize s = s^0 and, repeating b times, add to s the point u = argmax_{i∈[n]\s} min_{j∈s} ∆(x_i, x_j); return s \ s^0. If OPT = min_{s^1} max_i min_{j∈s^1∪s^0} ∆(x_i, x_j), the greedy algorithm is proven to have a solution s^1 such that max_i min_{j∈s^1∪s^0} ∆(x_i, x_j) ≤ 2 × OPT. Although the greedy algorithm gives a good initialization, in practice we can improve the 2-OPT solution by iteratively querying upper bounds on the optimal value: the resulting robust procedure (Algorithm 2, Robust k-Center; input: data {x_i}, existing pool s^0, budget b and outlier bound Ξ) binary-searches δ between the greedy value and its half, checking at each step the feasibility of the mixed integer program defined below.
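The following NumPy sketch implements the farthest-first greedy selection just described, together with the binary-search refinement over δ; `feasible` is a placeholder for the MIP check defined in the next paragraphs, and the function names are ours.

```python
import numpy as np

def k_center_greedy(features, pool_idx, budget):
    """Greedy 2-OPT selection for the k-Center problem (farthest-first).

    features : (n, d) array of embeddings (e.g. final FC activations)
    pool_idx : list of indices already labeled (s^0)
    budget   : number of new points to pick (b)
    """
    # distance from every point to its nearest labeled center
    # (for very large n, compute this in chunks instead of one broadcast)
    min_dist = np.min(
        np.linalg.norm(features[:, None, :] - features[None, pool_idx, :],
                       axis=2),
        axis=1,
    )
    selected = []
    for _ in range(budget):
        u = int(np.argmax(min_dist))          # farthest from current centers
        selected.append(u)
        d_u = np.linalg.norm(features - features[u], axis=1)
        min_dist = np.minimum(min_dist, d_u)  # new center may cover points
    return selected

def robust_k_center(delta_2opt, feasible, tol=1e-6):
    """Tighten the greedy cover radius by binary search over delta.

    feasible : callable delta -> (ok, centers); placeholder for the MIP
               Feasible(b, s0, delta, Xi) described in the next subsection.
    """
    lb, ub = delta_2opt / 2.0, delta_2opt     # OPT is guaranteed in [ub/2, ub]
    best = None
    while ub - lb > tol:
        mid = 0.5 * (lb + ub)
        ok, centers = feasible(mid)
        if ok:
            ub, best = mid, centers           # a smaller radius suffices
        else:
            lb = mid                          # radius too small, relax it
    return ub, best
```

In practice the candidate radii can be restricted to the finite set of pairwise distances, which makes the search terminate exactly rather than up to a tolerance.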
In other words, we can design an algorithm which decides if OPT ≤ δ. In order to do so, we define a mixed integer program (MIP) parametrized by δ such that its feasibility indicates min_{s^1} max_i min_{j∈s^1∪s^0} ∆(x_i, x_j) ≤ δ. A straightforward algorithm uses this MIP as a sub-routine and performs a binary search between the result of the greedy algorithm and its half, since the optimal solution is guaranteed to be included in that range. While constructing this MIP, we also try to handle one of the weaknesses of the k-Center algorithm, namely robustness. To make the k-Center problem robust, we assume an upper limit Ξ on the number of outliers, such that our algorithm can choose not to cover at most Ξ unsupervised data points. This mixed integer program, Feasible(b, s^0, δ, Ξ), can be written as: Σ_j u_j = |s^0| + b; u_j = 1 for all j ∈ s^0; Σ_j ω_{i,j} = 1 for all i; ω_{i,j} ≤ u_j for all i, j; Σ_{i,j} ξ_{i,j} ≤ Ξ; ω_{i,j} = ξ_{i,j} for all i, j such that ∆(x_i, x_j) > δ. In this formulation, u_j is 1 if the j-th data point is chosen as a center, ω_{i,j} is 1 if the i-th point is covered by the j-th point, and ξ_{i,j} is 1 if the i-th point is an outlier covered by the j-th point without the δ constraint, and 0 otherwise; all variables are binary, u_j, ω_{i,j}, ξ_{i,j} ∈ {0, 1}. We further visualize these variables in a diagram in Figure 3, and give the details of the method in Algorithm 2. One of the most important design choices is the distance metric ∆(·, ·). We use the l_2 distance between activations of the final fully-connected layer as the distance. For weakly-supervised learning, we used Ladder networks [26], and for all experiments we used VGG-16 [31] as the CNN architecture. We optimized all models using RMSProp with a learning rate of 1e-3 using Tensorflow [1]. We train CNNs from scratch after each iteration. Implementation Details While implementing our algorithm, we used the Gurobi [14] framework for checking feasibility of the MIP defined above. As an upper bound on the number of outliers, we used Ξ = 1e-4 × n, where n is the number of unlabeled points. Analysis of the Algorithm In this section, we analyze our algorithm in terms of generalization error. We are typically interested in the error on unseen images, E_{x,y∼p_Z}[l(x, y, A_s)], in terms of the empirical loss over the labeled images, (1/|s|) Σ_{j∈s} l(x_j, y_j, A_s). However, this analysis requires joint treatment of the generalization error and the effect of query selection. For simplicity, we divide this analysis into two parts. First, we analyze the relationship between the expected loss on unseen images (generalization error) and the empirical loss over the entire dataset, (1/n) Σ_{i∈[n]} l(x_i, y_i, A_s). Secondly, we analyze the relationship between the loss over the entire dataset and the loss over the labeled samples. We study the first relationship by assuming a Lipschitz continuous loss function. We state the following proposition as a direct result of one of the examples from [39] and defer its proof to the appendix. Proposition 1 ([39, Example 4]). Given n i.i.d. samples drawn from p_Z as {x_i, y_i}_{i∈[n]}, if the loss function l(·, y, w) is λ^l-Lipschitz continuous for all y, w and bounded by L, and X × Y admits an ε-cover of size K, then with probability at least 1 − γ, |E_{x,y∼p_Z}[l(x, y, A_s)] − (1/n) Σ_{i∈[n]} l(x_i, y_i, A_s)| ≤ λ^l ε + L √((2K log 2 + 2 log(1/γ)) / n). First of all, this proposition is applicable to any machine learning algorithm with a Lipschitz loss function, and we further prove the Lipschitz-continuity of CNNs. It can clearly be seen that the empirical loss converges to the expected loss for a large number of data points n, since the λ^l ε term can be made arbitrarily small by refining the cover.
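For orientation, the two halves of the analysis combine as follows; this display is our own summary in the section's notation, with the core-set term bounded by the δ-cover result stated in the next paragraphs.

```latex
\mathbb{E}_{x,y\sim p_{\mathcal{Z}}}\!\left[\,l(x,y,A_s)\,\right]
\;\le\;
\underbrace{\frac{1}{n}\sum_{i\in[n]} l(x_i,y_i,A_s)}_{\text{core-set term, bounded via the }\delta\text{-cover}}
\;+\;
\underbrace{\lambda^{l}\epsilon
  + L\sqrt{\frac{2K\log 2 + 2\log(1/\gamma)}{n}}}_{\text{generalization gap (Proposition 1)}}
```

Since the generalization gap vanishes as n grows, the only quantity the query strategy controls is δ, which is exactly what the k-Center objective minimizes.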
In order to complete the study about the generalization performance of CNNs, we prove the Lipschitz-continuity of the loss function of a CNN, where max-pool and rectified linear units are the non-linearities and the loss is defined as the l_2 distance between the desired probabilities and the soft-max outputs, with the following lemma. Lemma 1. A convolutional neural network with n_c convolutional (with max-pool and ReLU) and n_fc fully connected layers, defined over C classes and with the loss function defined as the 2-norm between the softmax output and the class probabilities, is (√(C−1)/C) α^(n_c + n_fc)-Lipschitz. Here, α is the maximum sum of input weights per neuron (see appendix for a formal definition). Although it is in general unbounded, it can be made arbitrarily small without changing the loss function behavior (i.e. keeping the label of any data point unchanged). We can conclude that CNNs enjoy a 0 generalization error in the limiting case thanks to the Lipschitz property. In order to complete the analysis, we need to study the behavior of the loss over the dataset in terms of the empirical loss over the selected (queried) samples. Here, we make a no-training-error assumption; in other words, we assume that the training error on labeled images is 0 at the end of learning. This is clearly a restrictive assumption; however, it is very feasible due to the large parameter space of CNNs. Moreover, this can also be enforced by simply converting the average loss into a maximal loss [30]. Using this assumption, we show that the loss over the entire dataset can be bounded using the result of our discrete optimization problem: if the labeled set s is a δ-cover of the dataset, then with probability at least 1 − γ, (1/n) Σ_{i∈[n]} l(x_i, y_i) ≤ δ(λ^l + λ^η LC) + √(L² log(1/γ) / (2n)). It can easily be shown that in this setting, lim_{n→∞} (1/n) Σ_{i∈[n]} l(x_i, y_i) ≤ δ(λ^l + λ^η LC), since the second term vanishes. Clearly, δ decreases when m increases; however, the rate is critical. To show that our algorithm has a finite query property, we need to show that δ can be made arbitrarily small with finite m in the limiting behavior of the number of unlabeled data points (i.e. n → ∞). Since our data points come from a compact space, there exists a finite sub-cover for any union of open sets. Hence, the finite query property is a straightforward result of compactness. In summary, we show that CNNs have Lipschitz continuous loss functions, making them generalize to unseen images. In addition, when the underlying data distribution has Lipschitz continuous regression functions, we further show, under reasonable assumptions, that it is enough to label a small subset of the dataset as long as it covers the space efficiently. Since the difference between the empirical loss over unseen images and the optimal loss is bounded by δ(λ^l + 2λ^η), direct minimization of δ is a theoretically sound approach to this problem, validating our space-covering heuristic. Experimental Results We tested our algorithm on the problem of classification using three different datasets. We performed experiments on the CIFAR [18] and Caltech-256 [12] datasets for image classification and on the SVHN [24] dataset for digit classification. The CIFAR [18] dataset has two tasks, one coarse-grained and one fine-grained. There are 100 fine-grained categories and 10 coarse-grained categories defined as strict supersets of some of these fine-grained categories. We performed experiments on both. We also conducted experiments on active learning for fully-supervised models as well as active learning for weakly-supervised models. In our experiments, we start with a small set of images sampled uniformly at random from the dataset as an initial pool.
The weakly-supervised model has access to labeled examples as well as unlabeled examples. The fully-supervised model only has access to the labeled data points. We run all experiments with five random initializations of the initial pool of labeled points and use the average classification accuracy as a metric. We plot the accuracy vs. the number of labeled points. We also plot error bars as three standard deviations. We run the query algorithm iteratively; in other words, we solve the discrete optimization problem min_{s^{k+1} : |s^{k+1}| ≤ b} E_{x,y∼p_Z}[l(x, y; A_{s^0 ∪ ... ∪ s^{k+1}})] for each point on the accuracy vs. number of labeled examples graph. We present the results in Figures 4 and 5. We compare our algorithm with sampling uniformly at random as well as with the uncertainty oracle explained in Section 4.1. We also compared our algorithm with CEAL [35], which is, to the best of our knowledge, the only active learning algorithm presented for CNNs. Since it is a weakly-supervised approach utilizing unlabeled data points, we only include it in the weakly-supervised analysis. Our method outperforms the baselines, particularly for the case of weakly-supervised models, by a large margin. We believe the effectiveness of our approach in the weakly-supervised case is due to the better feature learning. Weakly-supervised models provide better feature spaces, resulting in accurate geometries. Since our method is geometric, it performs significantly better with better feature spaces. We also observed that our algorithm is less effective on CIFAR-100 and Caltech-256 when compared with CIFAR-10 and SVHN. This can easily be explained using our theoretical analysis. Our generalization bound scales with the number of classes; hence, it is better to have fewer classes. Optimality of the k-Center Solution: Our proposed method uses the greedy 2-OPT solution for the k-Center problem as an initialization and checks the feasibility of a mixed integer program (MIP). Internally, we use an LP-relaxation of the defined MIP and use branch-and-bound to obtain integer solutions. The utility obtained by solving this expensive MIP should be investigated. We compare the average run-time of the MIP with the run-time of the 2-OPT solution in Table 1. We also compare the accuracy obtained with the optimal k-Center solution and the 2-OPT solution in Figure 6 on the CIFAR-100 dataset. As shown in Table 1, although the run-time of the MIP is not polynomial in the worst case, in practice it converges in a tractable amount of time for a dataset of 50k images. Hence, our algorithm can easily be applied in practice. Figure 6 suggests a small but significant drop in accuracy when the 2-OPT solution is used. Hence, we conclude that unless the scale of the dataset is too restrictive, using our proposed optimal solver is desirable. Even with the accuracy drop, our active learning strategy using the 2-OPT solution still outperforms the other baselines. Hence, we can conclude that our algorithm can scale to any dataset size with a small accuracy drop even if solving the MIP is not feasible. In addition to active learning, our algorithm can also be used for unsupervised subset selection. We further performed experiments in this setting and discuss them in the supplementary materials. Conclusion We described an active learning algorithm for CNNs. Our empirical analysis showed that classical uncertainty based methods have limited applicability to CNNs. We designed a simple but effective active learning algorithm for CNNs using geometric intuitions.
We further validated our algorithm using both theoretical analysis and an empirical study. Empirical results on three datasets showed state-of-the-art performance by a large margin. A Proofs of the Theorems and Lemmas Provided in the Main Paper A.1 Proof of Lemma 1 Proof. We start by showing that the softmax function defined over C classes is (√(C−1)/C)-Lipschitz continuous. It is easy to show that for any differentiable function f : R^n → R^m, ||f(x) − f(y)||_2 ≤ J*_F ||x − y||_2 for all x, y ∈ R^n, where J*_F = max_x ||J||_F and J is the Jacobian matrix of f. The softmax function is defined as f_i(x) = exp(x_i) / Σ_{j=1}^{C} exp(x_j), i = 1, 2, . . . , C. For brevity, we denote f_i(x) as f_i. The entries of the Jacobian matrix are ∂f_i/∂x_j = f_i(1[i = j] − f_j), and the Frobenius norm of this matrix follows directly. It is straightforward to show that f_i = 1/C is the optimal solution of J*_F = max_x ||J||_F; hence, substituting f_i = 1/C, we get J*_F = √(C−1)/C. Now, consider two inputs x and x̄, such that their representations at layer d are x^d and x̄^d. Any convolutional or fully-connected layer can be written as x^d_j = Σ_i w^d_{i,j} x^{d−1}_i. If we assume Σ_i |w^d_{i,j}| ≤ α for all j, d, then for any convolutional or fully-connected layer, ||x^d − x̄^d|| ≤ α ||x^{d−1} − x̄^{d−1}||. On the other hand, using |max(0, a) − max(0, b)| ≤ |a − b| and the fact that a max-pool layer can be written as a convolutional layer in which a single weight is 1 and the others are 0, the same bound holds for ReLU and max-pool layers. Combining these with the Lipschitz constant of the soft-max layer gives the stated constant (√(C−1)/C) α^(n_c + n_fc). A.2 Proof of Proposition 1 In order to prove Proposition 1, we use the robustness bound from [39]. Proof. We start with |E_{x,y∼p_Z}[l(x, y, A_s)] − (1/n) Σ_{i∈[n]} l(x_i, y_i, A_s)|; for brevity, we denote l(x, y, A_s) as l(x, y). In step (a), we use the fact that the space admits an ε-cover {C_j}_{j∈[K]} such that each C_j has diameter at most ε; we further define the auxiliary variables μ_j = p((x, y) ∈ C_j) and n_j = Σ_i 1[(x_i, y_i) ∈ C_j] and use the triangle inequality. In step (b), we write i ∈ n_j to represent (x_i, y_i) ∈ C_j. Finally, in step (c), we use the fact that each cell has diameter at most ε and the loss function is λ^l-Lipschitz. We can then bound E[l(x, y) | z ∈ C_j] by the maximum loss L and use the Bretagnolle-Huber-Carol inequality (cf. Proposition A6.6 of [34]) in order to bound Σ_j |μ_j − n_j/n|, obtaining, with probability at least 1 − γ, |E_{x,y∼p_Z}[l(x, y, A_s)] − (1/n) Σ_i l(x_i, y_i, A_s)| ≤ λ^l ε + L √((2K log 2 + 2 log(1/γ)) / n).
Microvesicles secreted from equine amniotic-derived cells and their potential role in reducing inflammation in endometrial cells in an in-vitro model Background It is known that a paracrine mechanism exists between mesenchymal stem cells and target cells. This process may involve microvesicles (MVs) as an integral component of cell-to-cell communication. Methods In this context, this study aims to understand the efficacy of MVs on stressed endometrial cells in vitro, in view of potential healing in in-vivo studies. For this purpose, the presence and type of MVs secreted by amniotic mesenchymal stem cells (AMCs) were investigated, and the response of endometrial cells to MVs was studied using a dose-response curve at different concentrations and times. Moreover, the ability of MVs to counteract the in vitro stress induced in endometrial cells by lipopolysaccharide was studied by measuring the rate of apoptosis and cell proliferation, the expression of some pro-inflammatory genes such as tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), interleukin-1β (IL-1β), and metalloproteinases (MMP) 1 and 13, and the release of some pro- or anti-inflammatory cytokines. Results MVs secreted by the AMCs ranged in size from 100 to 200 nm. The incorporation of MVs was gradual over time and peaked at 72 h. MVs reduced the apoptosis rate, increased cell proliferation values, downregulated pro-inflammatory gene expression, and decreased the secretion of pro-inflammatory cytokines. Conclusion Our data suggest that some microRNAs could contribute to counteracting in-vivo inflammation of endometrial tissue. Background The regular uterine environment promotes normal embryo development, but clinical or subclinical disorders could contribute to pregnancy failure. As reviewed by Hurtgen [1], endometritis is an important cause of reduced fertility in mares, in which artificial insemination with fresh or frozen semen may induce acute endometrial inflammatory reactions. If these conditions are not promptly resolved, infections become chronic and, in old pregnant mares, often result in higher pregnancy loss. A similar clinical endometritis also occurs in dairy cows following parturition [2,3]. Furthermore, cytological endometritis has emerged as a problem of remarkable importance for dairy cattle reproduction because animals suffering from this disorder present a persistent inflammatory uterine environment even in the absence of clinical symptoms. A reduced conception rate and increased calving-to-conception intervals are consequences of these uterine diseases [4][5][6][7]. Successful implantation requires a complex sequence of signaling events that are crucial to the establishment of pregnancy, and a large number of molecular mediators, influenced by the level of ovarian hormones, are involved in this early embryo-maternal interaction. These mediators include adhesion molecules, cytokines, growth factors, lipids, and others [8,9]. Koot et al. [10] underlined that infertility could occur after the early phases of implantation as a malfunction of the endometrium-embryo 'dialogue'. The degree of endometrial production of these mediators could be impaired by persistent endometritis. Indeed, pro-inflammatory factors transcribed in bovine endometrial epithelial cells are elevated in cases of subclinical or clinical endometritis [11].
Repeat-breeding cows (animals that after three or more inseminations do not get pregnant because of fertilization failure or early embryonic death) show abnormalities in the growth factor-cytokine network, specifically in endometrial epidermal growth factor (EGF) concentration [12]. The EGF family acts on the trophectoderm, promoting cell attachment and embryo development [13], and its impairment could explain the pregnancy failure in these animals. Many therapies have been proposed to treat or prevent mare endometritis. Post-mating endometritis is usually treated with uterine irrigation and ecbolic agents, while acute endometritis is treated with systemic or intra-uterine antibiotics. However, these therapies are not always effective for resolving chronic uterine inflammation. Prevention, mainly in cattle, includes nutritional supplements and hygienic conditions during parturition. Commonly used therapies include hormonal treatments with GnRH, exogenous gonadotrophins, and prostaglandins [14], or the exploitation of assisted reproductive techniques, such as in vitro embryo production and embryo transfer. However, in cases of infertility due to endometrial damage, the embryo-maternal interaction and the restoration of uterine receptivity could be improved by regenerative medicine treatment. Regenerative medicine has several applications in the treatment of many pathologies in both human and veterinary medicine. Treatments are based on mesenchymal stem cell (MSC) transplantation but, although engraftment of the transplanted MSCs has been documented in some cases [15][16][17], only a small percentage of the injected MSCs engraft successfully in various disease models [18][19][20][21]. In an irradiated murine model, endometrial regeneration by bone marrow-derived MSCs has been studied, showing a low number of cells engrafted in the regenerating endometrium [22]. Consistent with these findings, some studies recently showed that the regenerative ability of MSCs could be attributed to the production of molecules and mediators capable of activating the intrinsic repair processes in the damaged tissues. To date, the conditioned medium (CM) obtained from in vitro cultured MSCs has been proven to be sufficient to stimulate the structural and functional regeneration of cardiac [20,23], renal [19,24], spinal cord [25], and tendon [26] tissues. These results indicate that the beneficial effects of MSCs can be attributed to the activation of paracrine mechanisms enabling stimulation of endogenous stem cells. These cells are responsible for the bioactive soluble factors (lipids, growth factors, and cytokines) known to inhibit apoptosis and fibrosis, enhance angiogenesis, stimulate mitosis and/or differentiation of tissue-resident progenitor cells, and modulate the immune response [27]. In addition to soluble factors, recent findings indicate that extracellular vesicles are released from MSCs into the CM and that these can act as important mediators in cell-to-cell communication [28]. Microvesicles (MVs) have been categorized into exosomes (EXs), released from the endosomal compartment, and shedding vesicles (SVs), which bud directly from the cell membrane. MVs contain various active molecules such as lipids, proteins, mRNA, and microRNA (miRNA) [29]. It has been demonstrated that CM and MVs can be used in vitro and in vivo to repair tissue damage, increasing the healing rate [26,29,30].
MVs are involved in a dynamic mutual paracrine communication between the embryonic and the maternal environment at the early stage of pre-implantation embryo development [31]. Equine embryos at day 8 are thought to secrete MVs that can modulate the functions of the oviduct epithelium through transfer of early pregnancy factor (HSP10) and miRNAs [32]. On the other hand, MVs can be secreted from the maternal side, and endometrium-derived MV miRNAs have been revealed to have potential targets in biological pathways highly relevant for embryo implantation [33]. Uterine miRNAs are suggested to play a potential regulatory role in the development and progression of bovine subclinical endometritis. Indeed, Hailemariam et al. [34] demonstrated that there is an aberrant expression of 23 miRNAs in cows with subclinical endometritis compared with healthy cows. Furthermore, they observed a similar expression of miRNA patterns in cytobrush samples from sick cows and in in vitro cultured endometrial cells challenged by lipopolysaccharide (LPS). This suggests that in vitro endometrial cell culture, treated with LPS, could be an excellent model to test potential regenerative medicine treatments for endometritis. In human medicine, the different patterns of miRNAs between women with and without endometriotic disease have been proposed as biomarkers that could underpin the development of a noninvasive diagnostic test for endometriosis [35]. In this context, the aims of this study were to identify the presence and type of MVs secreted by amniotic mesenchymal progenitor cells (AMCs), and to elucidate whether equine endometrial cells can be targeted by MVs in vitro. In addition, we considered whether MVs are able to counteract an in vitro endometrial cell inflammatory process induced by LPS. Materials Uteri samples were collected from horses slaughtered in a national slaughterhouse under legal regulation. Chemicals were obtained from Sigma-Aldrich Chemical (Milan, Italy) unless otherwise specified, and tissue culture plastic dishes were purchased from Euroclone (Milan, Italy). Study design Initially, amniotic cells were isolated and cultured to produce MVs that were characterized using a Nanosight instrument (nanoparticle tracking analysis, NTA; NanoSight Ltd., Amesbury, UK). Endometrial cells were isolated, and specific endometrial genes were identified by qualitative reverse transcription polymerase chain reaction (RT-PCR). Isolated endometrial cells were used as the target for different concentrations of MVs. Furthermore, the effect of MVs on endometrial cells treated with LPS was analyzed by quantitative RT-PCR (qRT-PCR) expression of inflammatory genes, by evaluation of the release of different cytokines, and by cell viability tests. Finally, the presence inside the MVs of some miRNAs regulating inflammation was evaluated. Tissue collection Allanto-amniotic membranes were obtained at term from normal pregnancies in three mares. Samples of allanto-amnion were transported at 4°C in calcium- and magnesium-free phosphate-buffered saline (PBS; Euroclone, Milan, Italy) supplemented with 4 μg/mL amphotericin B (Euroclone), 100 IU/mL penicillin and 100 μg/mL streptomycin (Euroclone), and were processed within 12 h of collection. The amniotic membrane was mechanically separated from the allantois and the isolation of AMCs was performed as previously reported by Lange-Consiglio et al. [36]. Endometrial samples were obtained during the reproductive season from normal-cycling mares at the diestrus stage (early-mid luteal phase).
Before slaughtering, 5 ml of blood was collected in heparinized tubes from all mares. After centrifugation, plasma was separated, kept refrigerated, and immediately transported to the laboratory for progesterone determination by a quantitative enzyme-linked fluorescent assay (ELFA) based on MiniVidas (Biomerieux, Firenze, Italy) technology. According to the manufacturer, the measurement range of the assay varied from 0.25 to 80 ng/ml, with an intra-assay variation of 4.12 % and an inter-assay variability of 6.32 %. Only uteri belonging to mares with an obvious corpus luteum on the ovary and progesterone levels between 6 and 20 ng/ml, indicative of the early/mid diestral phase of the estrous cycle [37], were used for endometrial fragment collection and ensuing cell culture. Tissue fragments for RNA isolation were immediately immersed in RNAlater solution, whereas those destined for cell isolation and the expansion procedure were kept at 4°C in saline solution supplemented with 4 μg/ml amphotericin B, 100 IU/ml penicillin, and 100 μg/ml streptomycin and processed within 8 h. Cell isolation Amniotic membrane-derived mesenchymal cells were isolated as recently reported by Lange-Consiglio et al. [36]. Briefly, amnion fragments were incubated for 9 min at 38.5°C in PBS containing 2.4 U/mL dispase (Becton Dickinson, Milan, Italy). After a resting period (5-10 min) at room temperature in high-glucose Dulbecco's modified Eagle's medium (HG-DMEM; Euroclone, Milan, Italy) supplemented with 10 % heat-inactivated fetal bovine serum (FBS) and 2 mM L-glutamine, the fragments were digested with 0.93 mg/mL collagenase type I and 20 mg/mL DNase (Roche, Mannheim, Germany) for approximately 3 h at 37°C. The amnion fragments were then removed, and mobilized cells were passed through a 100-μm cell strainer before being collected by centrifugation at 200 × g for 10 min. Endometrial cells from diestrum uteri of mares were obtained according to the protocol described by Donofrio et al. [38], slightly modified for equine cells. Briefly, the endometrium was digested in sterile-filtered Hanks' balanced salt solution supplemented with 2 mg/ml collagenase II, 4 mg/ml bovine serum albumin, and 0.4 mg/ml DNase I for 90 min at 38.5°C in a shaking bath. Cells were then filtered through a membrane with a pore size of 80 μm, centrifuged at 200 × g for 10 min, and washed twice in PBS. This protocol allowed for the isolation of the endometrial stromal portion. Before seeding, cells were counted using a Burker chamber with the trypan blue dye exclusion assay. To remove non-adherent cells, for both cell lines the medium was replaced for the first time after 72 h, and then changed either twice per week thereafter or according to the experiment requirements. For maintenance of cultures, cells were plated in 25-cm² flasks at a density of 1 × 10⁵ cells/cm² and incubated at 38.5°C in a humidified atmosphere with 5 % CO₂. Adherent cells were detached with 0.05 % trypsin-EDTA just prior to reaching confluence (80 %) and then reseeded for culture maintenance at a density of 1 × 10⁴ cells/cm². A detailed characterization of these cells was performed in the paper of Corradetti et al. [39]. In this study, a molecular characterization of the endometrial-derived cells (EDCs) was performed only at passage (P)0 as a de facto control for gene expression.
Isolation and measurements of MVs MVs were obtained from the culture media of AMCs derived from three different placentas, cultured for 1 week in HG-DMEM supplemented with 10 % MV-deprived FCS and overnight in HG-DMEM deprived of FCS and supplemented with 0.5 % BSA (Sigma). The overnight culture media were pooled and centrifuged at 2000 × g for 20 min to remove debris, then at 100,000 × g (Beckman Coulter Optima L-100 K ultracentrifuge) for 1 h at 4°C, washed in serum-free medium 199 containing N-2-hydroxyethylpiperazine-N′-2-ethanesulfonic acid (HEPES; 25 mM), and submitted to a second ultracentrifugation under the same conditions. After ultracentrifugation, the pellet was immediately resuspended in HG-DMEM, and a fraction of the resuspended pellet was taken for measurements of MV size and concentration. A second fraction was labeled with the fluorochrome PKH-26, and the remaining part of the pellet was cryopreserved with 1 % dimethylsulfoxide at -80°C and used for the in vitro tests. The size and concentration of MVs were evaluated with the Nanosight LM10 instrument, which permits discrimination of microparticles less than 1 μm in diameter. The software (NTA 2.0 analytic software) allows the analysis of video images of particle movement under Brownian motion and the calculation of the diffusion coefficient, sphere-equivalent size, and hydrodynamic radius of particles by using the Stokes-Einstein equation. This instrument was configured with a 405-nm laser and a high-sensitivity sCMOS camera (OrcaFlash2.8, Hamamatsu C11440, NanoSight Ltd). Videos were collected and analyzed using the NTA software with the minimal expected particle size, minimum track length, and blur setting all set to automatic. Ambient temperature was recorded manually and did not exceed 25°C. Each sample (5 μl) was diluted in sterile physiological solution to a final volume of 1 ml. Samples were analyzed within 15 min of the initial dilution, with a delay of 10 s between sample introduction and the start of the measurement. For each sample, multiple videos of 30 s duration were recorded, generating replicate histograms that were averaged. MV labeling with PKH-26 To trace MVs in vitro by fluorescence microscopy, MVs from AMCs were labeled with the red-fluorescent PKH-26 dye (Sigma), an aliphatic chromophore that intercalates into lipid bilayers. Briefly, after ultracentrifugation, the MV pellet was diluted to 1 ml with the PKH-26 kit diluent, 2 μl of fluorochrome was added to this suspension, and the mixture was incubated for 30 min at 38.5°C. At the end of the reaction, 7 ml of serum-free DMEM was added to the suspension, which was ultracentrifuged again at 100,000 × g for 1 h at 4°C. The final pellet was immediately resuspended in HG-DMEM. Incorporation of MVs in endometrial cells To study the incorporation capacity of MVs into endometrial cells, a dose-response study was performed in three replicates. Endometrial cells were seeded at a density of 60 × 10³ on culture slides (13 mm; Nalge Nunc International, Rochester, NY, USA) in 24 wells and co-cultured with 10, 20, 30, 40, and 50 × 10⁶ MVs/ml labeled with PKH-26 dye, pre-incubated or not with trypsin (0.5 mM), for 24, 48, and 72 h at 38.5°C. At the end of each experimental condition, cells were nuclear-stained with 10 μg/ml Hoechst 33342 for 15 min at 38°C.
The uptake of MVs was evaluated with an Olympus BX51 microscope equipped with a Scion Corporation 1394 video camera interfaced with a computer provided with software for image acquisition and analysis (Image-Pro Plus 5.1; Media Cybernetics, Immagini & Computer, Bareggio, Italy). The excitation wavelength was positioned at 550 nm while the emission wavelength was set at 567 nm. Hoechst 33342 dye (Sigma) was excited at 353-365 nm while the emission wavelength was set at 460 nm. To detect the intensity of fluorescence, a semi-quantitative analysis was performed. Different images were acquired for each condition and then, for each image, the area of interest (AOI; where the signal was present) was manually defined by the user. Inside the AOI, up to three different background signals were sampled. The background areas were positioned by the user only where the fluorescent signal was not specific. The maximum value collected from the background areas was then used to define the threshold. Only fluorescence with an intensity above the threshold was considered to indicate fluorescence due to labeled MVs. Finally, the program measured the signal intensity expressed in arbitrary units (a.u.). Confocal microscopy analysis to assess internalization of MVs was performed using a Leica SP2 laser scanning confocal microscope (Leica Microsystems Srl, Italy) equipped with a PL Fluotar 20× NA 0.5 dry objective. In vitro effect of MVs on endometrial cells treated with LPS The dose-response curve of LPS on endometrial cells was studied, showing that 10 ng/ml for 12-24 h was the most effective dose and exposure time for inducing cellular stress, as evaluated by an apoptosis study (data not shown). Sixty thousand cells were incubated simultaneously with 10 ng/ml LPS and 40 × 10⁶ MVs/ml for 3, 12, and 24 h. In another experimental condition, endometrial cells were treated first for 3 h with LPS and then with MVs at the same concentrations and times. In the last experiment, endometrial cells were treated first for 24 h with MVs and then with 10 ng/ml LPS. Endometrial cells alone, or endometrial cells with LPS or MVs only, were used as controls at the different times. At the end of each experimental condition, the MTT reduction assay and the apoptotic test were used to analyze cell proliferation and viability on some samples. Cells from other samples were detached with 0.05 % trypsin-EDTA, centrifuged, and cryopreserved in liquid nitrogen for molecular biology studies using standard cryopreservation protocols. The supernatants were destined for the evaluation of cytokines released from endometrial cells. All experiments were performed in three replicates. Viability cell tests Cell proliferation test by the MTT reduction assay The MTT reduction assay (Chemicon, Temecula, CA, USA) estimates mitochondrial dehydrogenase activity through the conversion of the MTT compound (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) into formazan. The measurement was performed with a spectrophotometer (Perkin Elmer HTS 700 Plus; Boston, MA, USA) at an absorbance of 570 nm for each sample. Briefly, under each experimental condition of the in vitro effect of MVs on endometrial cells treated with LPS, cells were washed twice in PBS, and 1 ml of 5 mg/l MTT solution was added to each well. Avoiding light, plates were then placed in a humidified incubator at 37°C for 4 h.
The supernatant was discarded, 1 ml of dimethylsulfoxide was added as an extracting solution, and plates were incubated for 2 h until the precipitates were completely dissolved for spectrophotometric reading. This test was performed in three replicates. Apoptotic test The percentage of apoptotic cells was assessed using an Annexin V-FITC Apoptosis Detection Kit (Sigma) following the manufacturer's instructions; 500 μl of cells (5 × 10⁵ cells) were incubated with 5 μl of Annexin V solution and with 10 μl of propidium iodide for 1 h at room temperature while protected from light. Apoptosis rates were evaluated by conventional fluorescence analysis using a BX51 microscope (Olympus) equipped with a DMU filter set. One hundred cells were analyzed using a combination of 488/560 nm emission. Cells at the early stage of apoptosis stained with Annexin V-FITC alone. Live cells showed no staining with either propidium iodide or Annexin V-FITC. Cells that died by apoptosis were stained by both propidium iodide and Annexin V-FITC, and cells that died by necrosis were stained by propidium iodide alone. The apoptotic test was performed in three replicates. Molecular biology studies Characterization of endometrial cells After isolation from endometrial tissue, cells were analyzed to detect the expression of specific endometrial genes as previously reported [39]. Total RNA was extracted from endometrial cells immediately after isolation (P0) using TRI Reagent Solution (Life Technologies, Monza, Italy) and conventional RT-PCR was performed with RBC Taq DNA Polymerase (RBC Bioscience) using previously optimized primers [39]. The primer sequences and conditions are shown in Tables 1 and 2. The glyceraldehyde-3-phosphate dehydrogenase gene (GAPDH) was employed as a reference gene. Gene expression of pro-inflammatory cytokines Genes involved in the inflammatory process, such as interleukin-1β (IL-1β), interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α), were analyzed by qRT-PCR under all experimental conditions. The mRNA expression levels of all genes were measured in three samples (biological replicates). Total RNA was isolated using the mirVana™ miRNA Isolation Kit (Life Technologies) according to the manufacturer's protocol and stored at −20°C. The concentration and purity of the RNA were evaluated three times with a NanoQuant spectrophotometer (Thermo Scientific, USA) and, in order to verify the integrity of the extracted RNA, eight randomly chosen samples were analyzed on a Bioanalyzer 2100 using the Agilent RNA 6000 Pico Kit (Agilent). According to the RNA quantity, each sample was normalized to a final RNA concentration of 10 ng/μl. RT-PCRs were performed with the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems/Life Technologies, Carlsbad, CA, USA) using 100 ng of RNA per reaction. All the qPCR experiments were run in triplicate (technical replicates) using the qPCR protocol described by TaqMan Fast Gene Expression Assays (Life Technologies™) on a 7500 Fast Real-time PCR System instrument (Applied Biosystems by Life Technologies™). To assess gene expression, each target gene and GAPDH, as the housekeeping control gene, were co-amplified. The assay primers were available from and synthesized by Life Technologies™. The average target gene threshold cycle for each sample (calculated using the Ct values of the technical replicates within each experimental condition) was normalized to the average GAPDH value of the same cDNA sample to obtain ΔCt.
Then the expression variations calculated were normalized to the internal control (i.e., control cells at 3 h) using the ΔΔCt method. Finally, the fold-change expression of each gene was calculated as 2^(−ΔΔCt) [40]. Gene expression of metalloproteinases Matrix metalloproteinase 1 (MMP-1) and matrix metalloproteinase 13 (MMP-13) were selected to evaluate the ability of MVs to counteract LPS activity. Gene expression analysis was performed with the SYBR Green method in a MyiQ iCycler thermal cycler (Biorad). Triplicate PCR reactions were carried out for each sample and analyzed using the primer sequences reported in Table 1. The reactions were set up on a strip in a final volume of 25 μl by mixing, for each sample, 1 μl of cDNA, 12.5 μl of 2× concentrated SYBR Premix Ex Taq II (Takara Bio) containing SYBR Green as a fluorescent intercalating agent, 0.2 μM forward primer, 0.2 μM reverse primer, and MQ water. PCR efficiencies were tested and found to be close to 1. The thermal profile for all reactions was 30 s at 95°C and then 40 cycles of 5 s at 95°C and 30 s at 60°C. Fluorescence monitoring occurred at the end of each cycle. The efficiency of amplification for each primer was monitored through the analysis of serial dilutions. Additional dissociation curve analysis was performed and, in all cases, showed a single peak. The data thus obtained were analyzed using the iQ5 optical system software version 2.0 (BioRad). The expression of each gene was normalized to the reference gene GAPDH in order to standardize the results by eliminating variation in cDNA quantity. The sequences used are listed in Table 1. miRNA analyses by RNA extraction and PCR amplification The MV pellet was subjected to RNase digestion to remove extraneous ribonucleic acids [41]. Total RNA was isolated from a pool of different MV and amniotic-derived cell preparations using the NucleoSpin® mRNA kit (Macherey-Nagel, Germany), in combination with TRIzol® lysis and purification of small and large RNA in one fraction (total RNA). RNAs were quantified using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). RNA quality was checked using the Agilent Bioanalyser 2100 (Agilent, Santa Clara, CA, USA), where the presence of small RNAs was verified in both MV and cell samples. RNAs from all samples were reverse transcribed with the miScript Reverse Transcription Kit, and the cDNA was then pre-amplified using the miScript PreAMP PCR Kit (all from Qiagen, Valencia, CA, USA), following the manufacturer's instructions with some modifications: the miScript PreAMP Primer Mix was replaced with miR-specific primers (hsa-miR-26a-2, -335, -146a, and SNORD95) as forward primers and the miScript Universal Primer as reverse primer in separate reactions. The Homo sapiens (hsa) miRNA sequences were perfectly homologous with the Equus caballus (eca) miRNA sequences. PCR was performed on the pre-amplified products using the PCR Master Mix (2×) (Thermo Fisher Scientific Inc., Waltham, MA, USA) with the same primer pairs: hsa-miR-26a-2, -335, -146a, and SNORD95 in combination with the miScript Universal Primer. The small nucleolar RNA C/D box 95 (SNORD95) was used as the positive control. Negative controls using water in place of the pre-amplified product were performed alongside each reaction. The cycling conditions were 3 min at 95°C, followed by 35 cycles of 30 s at 95°C, 30 s at 58°C, and 1 min at 72°C, and finally 7 min at 72°C.
miRNA analyses by RNA extraction and PCR amplification

The MV pellet was subjected to RNase digestion to remove extraneous ribonucleic acids [41]. Total RNA was isolated from a pool of different MV and amniotic-derived cell preparations using the NucleoSpin® mRNA kit (Macherey-Nagel, Germany) in combination with TRIzol® lysis, with purification of small and large RNA in one fraction (total RNA). RNAs were quantified using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). RNA quality was checked using the Agilent Bioanalyzer 2100 (Agilent, Santa Clara, CA, USA), and the presence of small RNAs was verified in both MV and cell samples. RNAs from all samples were reverse transcribed with the miScript Reverse Transcription Kit, and the cDNA was then pre-amplified using the miScript PreAMP PCR Kit (all from Qiagen, Valencia, CA, USA), following the manufacturer's instructions with one modification: the miScript PreAMP Primer Mix was replaced with miR-specific primers (hsa-miR-26a-2, -335, -146a, and SNORD95) as forward primers and the miScript Universal Primer as reverse primer, in separate reactions. The Homo sapiens (hsa) miRNA sequences are perfectly homologous to the corresponding Equus caballus (eca) miRNA sequences. PCR was performed on the pre-amplified products using PCR Master Mix (2×) (Thermo Fisher Scientific Inc., Waltham, MA, USA) with the same primer pairs: hsa-miR-26a-2, -335, -146a, or SNORD95 in combination with the miScript Universal Primer. The small nucleolar RNA C/D box 95 (SNORD95) was used as the positive control. Negative controls using water in place of the pre-amplified product were run alongside each reaction. The cycling conditions were 3 min at 95 °C, followed by 35 cycles of 30 s at 95 °C, 30 s at 58 °C, and 1 min at 72 °C, with a final extension of 7 min at 72 °C. The amplified PCR products were separated electrophoretically on 2.5 % agarose gels and visualized under UV light, using the GeneRuler 50 bp DNA ladder (Thermo Fisher Scientific Inc.) as a size reference.

Cytokines

Cytokine release (IL-6, transforming growth factor (TGF)-β, and TNF-α) was measured in cell-free supernatants obtained by centrifugation at 1200 rpm for 5 min and stored at −80 °C until measurement. Cytokine production was assessed by commercially available sandwich ELISAs (Bioptis SA, Liège, Belgium), performed according to the supplier's instructions. Results are expressed in pg/ml. The limit of detection was 15.6 pg/ml for all cytokines tested.

Statistical analysis

For quantitative PCR experiments, data were analyzed by one-way analysis of variance (ANOVA). Cell viability data were likewise analyzed by one-way ANOVA, applying a Bonferroni correction. For cytokines, statistical differences were determined using ANOVA followed by Dunnett's multiple comparison test, the Tukey-Kramer multiple comparisons test, or an unpaired t test. Differences were considered statistically significant if the value of P was <0.05.
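A minimal sketch of the one-way ANOVA with pairwise Bonferroni-corrected comparisons described above; the group labels and absorbance values are hypothetical, for illustration only:

```python
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

# Hypothetical viability readings (absorbance) for three conditions:
groups = {
    "CTR":     [0.82, 0.85, 0.80],
    "LPS":     [0.51, 0.48, 0.55],
    "LPS+MVs": [0.78, 0.81, 0.76],
}

# Omnibus one-way ANOVA across all conditions
f_stat, p_val = f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Pairwise t tests with a Bonferroni correction
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni: multiply by number of comparisons
    print(f"{a} vs {b}: adjusted p={p_adj:.4f}, significant={p_adj < 0.05}")
```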
Results

Tissue collection and cell isolation

Cells were selected for their ability to adhere to plastic. For AMCs, the initial viability was >90 %, whereas for EDCs it was >85 %. EDCs (Fig. 1a) and AMCs (Fig. 1b) displayed fibroblast-like morphology. Molecular biology analyses at P3 showed that AMCs had a typical mesenchymal stromal phenotype, with expression of markers such as CD29, CD44, CD106, CD105, and MHCI, but not CD34 and MHCII. Moreover, AMCs showed differentiative potential toward mesenchymal (osteogenic, adipogenic, and chondrogenic) and ectodermal (neurogenic) lineages, as reported by Lange-Consiglio et al. [36]. The molecular biology study on endometrial cells at P0 confirmed that these cells were endometrial cells through the expression of PR, MPR, PGRMC1, and HOXA-9 (Fig. 1c).

Isolation and measurement of MVs

In all the studied samples, the viability of AMCs at the time of MV collection was 99 %, as detected by trypan blue exclusion. By NanoSight, the size of the MVs ranged from 50 to 670 nm, with a mean size of 258 ± 55 nm for three samples. The number of MVs ranged from 800 to 4700 particles/cell, with a mean value of 2550 ± 71 particles/cell (corresponding to 540 × 10⁶ particles/ml of medium). In a previous study [42], transmission electron microscopy (TEM) analysis revealed the presence of variably sized extracellular membranous vesicles budding from, or lying near, the cell of origin. The size of the MVs ranged from 100 nm to 1000 nm, with a predominance of vesicles between 100 and 200 nm. On the basis of their size, by NanoSight and TEM, and their morphological characteristics, the vesicles observed were considered mainly shedding vesicles.

Incorporation of MVs in endometrial cells

As seen by fluorescence microscopy, in all the studied samples no fluorescence signal was detectable up to the sixth hour of co-incubation of MVs with endometrial cells, and only nuclei stained with Hoechst 33342 were visible (Fig. 2a). The increase in uptake of 40 × 10⁶ MVs/ml by endometrial cells between 24 h and 72 h is shown in Fig. 2b-e. No signal was detected after treatment of the MVs with trypsin. The incorporation of MVs was gradual and constant at 24 h at a concentration of 40 × 10⁶ MVs/ml, and increased suddenly at a concentration of 50 × 10⁶ MVs/ml. The uptake of MVs increased drastically at 48 h at a concentration of 40 × 10⁶ MVs/ml and decreased at a concentration of 50 × 10⁶ MVs/ml. The internalization and accumulation of MVs peaked at 72 h for all the different concentrations but, once again, decreased at the concentration of 50 × 10⁶ MVs/ml (Fig. 3a). As seen by confocal microscopy, after 24 h of incubation with MVs, endometrial cells showed a fine granular fluorescent pattern within their cytoplasm, indicating incorporation of MVs (Fig. 3b and c).

In vitro effect of MVs on endometrial cells treated with LPS

Cell viability tests

The effect of LPS and MVs was evaluated by apoptotic and cell proliferation tests. The rate of cells at the early stage of apoptosis increased dramatically on treatment with LPS: the percentage of apoptotic cells reached 55 ± 4.1 % at 12 h of stress, decreasing to 40.48 ± 4.82 % at 24 h. The rate of apoptosis due to MVs alone was not statistically different from that of endometrial cells alone (Fig. 4a). The results of the cell proliferation test showed the opposite trend to the apoptotic test, confirming the effects of LPS and MVs (Fig. 4b). MVs were able to counteract the action of LPS either when used simultaneously with LPS or when incorporated by the endometrial cells 24 h before treatment with LPS. In the latter condition, cells treated with MVs before being exposed to LPS had a lower apoptotic rate (P < 0.05) than the control cells at both 12 and 24 h of the experiment (12.01 ± 1.38 % vs 18.05 ± 1.34 % at 12 h, and 15.56 ± 1.5 % vs 24.5 ± 2.78 % at 24 h). On the other hand, the stress induced by LPS exposure before treatment with MVs was not counteracted by the MVs, and the apoptotic rate increased up to 63.16 ± 6.8 % at 24 h (Fig. 4c). The results of the cell proliferation test showed the opposite trend to the apoptotic test (Fig. 4d).

Molecular biology study

The expression of some pro-inflammatory genes was evaluated by qRT-PCR. Endometrial cells, LPS, and MVs were each tested alone. Data were obtained from three samples and are shown in Fig. 5. LPS at 3 h significantly upregulated (P < 0.05) the expression of TNF-α and IL-6 (0.0019 ± 0.317 E-6 and 10.54 ± 0.014, respectively) and that of IL-1β at 24 h (9.91 ± 0.017). Endometrial cells used as controls (CTR) and MVs did not induce expression of pro-inflammatory genes. In the experiment with simultaneous use of LPS and MVs, the action of LPS was counteracted by the MVs: the expression of IL-6 first increased significantly (P < 0.05) at 3 h with LPS and then fell significantly in the presence of MVs at both 12 h (2.41 ± 0.039) and 24 h (1.15 ± 0.081). IL-1β at 24 h was completely and significantly (P < 0.05) downregulated (0.22 ± 0.0008). The expression of TNF-α was not dependent on the presence of MVs. When LPS was applied for 3 h before adding the MVs, its action was neutralized by the presence of the MVs: the expression of IL-6 at 3 h, 12 h, and 24 h was 2.08 ± 0.0019, 1.75 ± 0.0033, and 2.65 ± 0.0013, respectively, compared to the treatment with LPS only. In addition, the expression of IL-1β was significantly (P < 0.05) downregulated at 24 h (0.21 ± 0.0016). Under the final condition of MVs for 24 h followed by LPS, the expression of all genes was downregulated: TNF-α expression at 3 h fell 0.0002-fold compared to the treatment with LPS, IL-6 expression was statistically (P < 0.05) downregulated at each time point, and IL-1β was statistically (P < 0.05) downregulated at 24 h.
A moderate but significant (P < 0.05) increase in expression was observed for MMP-1 (5.18 ± 0.44) and MMP-13 (2.69 ± 0.19), compared to untreated cells, when endometrial cells were exposed to LPS for 24 h (Fig. 5d and e). The presence of MVs simultaneously with LPS significantly counteracted the effect of LPS on the expression of the metalloproteinases, as shown by the striking reduction in the expression levels of MMP-1 (1.67 ± 0.14) and MMP-13 (0.36 ± 0.11).

miRNA analyses by RNA extraction and PCR amplification

The expression of specific miRNAs was determined by a PCR assay that showed the presence of miR-26a-2, miR-335, and miR-146a in both MVs and cells isolated from the amniotic membrane (Fig. 5f).

Cytokines

The release over time (3, 12, and 24 h) of pro-inflammatory (TNF-α and IL-6) and anti-inflammatory (TGF-β) cytokines was as follows. Cells stressed with LPS secreted significantly (P < 0.05) more TNF-α and IL-6, at 3 h and 12 h respectively, when compared to control cells and cells treated with MVs. The MVs, whether used simultaneously with LPS or incorporated by the endometrial cells 24 h before treatment with LPS, were able to counteract the action of LPS and significantly (P < 0.05) and equally decreased the production of TNF-α and IL-6, mainly between 12 h and 24 h. Cells incubated with LPS for 3 h before treatment with MVs secreted significantly more TNF-α and IL-6 than under all the other experimental conditions. TGF-β was constitutively produced by control cells, but treatment with MVs induced a higher release (P < 0.05) of TGF-β, mainly when the MVs were used simultaneously with LPS or incorporated by the endometrial cells 24 h before treatment with LPS. No efficacy of MVs was evident when cells were stressed with LPS for 3 h before the addition of the MVs.

Discussion

MVs are secreted by MSCs. Although bone marrow represents the most widely investigated source of MSCs, cells harvested from bone marrow have limited potential in terms of in vitro proliferation capability [43,44] and do not appear to noticeably improve long-term functionality [45] compared to those from extra-fetal tissues. AMCs have already been demonstrated to be an excellent source for the treatment of tendon diseases in the horse [26]. Furthermore, Lange-Consiglio et al. [46] used the conditioned medium derived from AMCs for the treatment of horse tendon diseases and showed that the positive evolution of spontaneous tendon injuries in competition horses was comparable to that achieved with AMCs, thereby showing that AMC-CM had angiogenic and immunomodulatory properties mediated by paracrine mechanisms. Corradetti et al. [39] demonstrated that equine AMCs share the same transcriptional profile as endometrial cells and express genes that are involved in early pregnancy, pre-implantation, and conceptus development. In addition, co-culture of endometrial cells by transwell in the presence of AMCs, or incubation with CM, produced a significant increase in the proliferation rate of endometrial cells compared to fibroblasts and the CM secreted by them. All these preliminary data suggest that AMCs and CM exert regenerative effects through paracrine mechanisms, and that AMCs and their CM may have the potential to improve endometrial cell replenishment. Since MVs are contained in the CM, the aim of the present study was to investigate the role of MVs produced by AMCs in the in vitro cell-to-cell communication that could ultimately lead to endometrium repair.
We ultimately aim to demonstrate that they could be used as effective tools for regenerative medicine, especially in the reproduction field. Our results show that AMCs secrete MVs (with a mean size of 258 nm), as detected by a NanoSight instrument, and this size allows us to categorize them as shedding vesicles. Moreover, we found that MVs are easily internalized by endometrial cells. Fluorescence microscopy analysis suggests that the uptake of MVs by endometrial cells starts at 6 h after co-culture and increases gradually up to 24 h, rising to maximum internalization at 72 h at all the different concentrations. At a concentration of 50 × 10⁶ MVs/ml, a decrease at 48 h and 72 h was detected. We hypothesize that after 48 h and 72 h of exposure at 40 × 10⁶/ml, the cells are saturated and phagocytosis of MVs, with release of their contents into the cell cytoplasm, begins. This phagocytosis probably results in the destruction of the MV membranes and, consequently, in the loss of the fluorescence signal. Cocucci et al. [47] demonstrated that the internalization of MVs is the result of direct fusion or endocytic uptake by target cells. Once internalized, the MVs fuse their membrane with that of endosomes, effecting a horizontal transfer of their contents into the cytoplasm of the recipient cells. Alternatively, MVs can remain segregated inside the endosomes and be phagocytized by lysosomes or eliminated from the cells after fusion with the plasma membrane through a mechanism of transcytosis [47]. Our data show that horizontal transfer of the contents of MVs into endometrial cells could be one of the mechanisms of action of MVs, although other modes of interaction may occur simultaneously. The process of uptake and internalization of MVs by the endometrial cells could also be facilitated by the presence of surface cell receptors. This hypothesis is supported by the results of the experiments in which MVs were treated with trypsin before being incubated with endometrial cells. Having identified that endometrial cells represent target cells for MVs secreted by equine AMCs, a further aim of this study was to understand whether MVs might be involved in the regeneration of endometrial diseases. Given the difficulty of studying the repair systems of endometritis in vivo, we recapitulated the inflammatory process in vitro by stimulating cells with LPS. The inflammatory response is a complex process involving many signaling cascades, and cytokines have a significant role in the recruitment of inflammatory cells [48]. In the genital tract, the initial response of the endometrium against infection is dependent on innate immunity and mucosal defense systems [49,50]. The uterine immune response is generated not only by professional immune cells but also by endometrial epithelial and stromal cells, which can respond to LPS through the Toll-like receptors (TLRs) [51]. Activated TLRs subsequently stimulate the production of pro-inflammatory cytokines and chemokines [52]. To understand the mechanism of action of MVs, a single concentration of 40 × 10⁶ MVs/ml was used after stress induced with LPS. This concentration was chosen because, during the internalization study, endometrial cells at 24 h of culture were not saturated with MVs and no MV degradation had started. In these experiments, we used LPS and MVs under different conditions.
In control cells, even if apoptosis was higher than expected, the proliferation of the remaining 80 % of cells was very high during the 24 h of the experiment, as shown by the MTT values. Indeed, viability was constantly high over the different culture times, and the absorbance values did not change statistically in the presence of MVs. When the cells were stressed with LPS, viability decreased with respect to control cells and cells cultured in the presence of MVs: apoptosis increased drastically after treatment with LPS and, conversely, proliferation declined with the same intensity. The dose of LPS (10 ng/ml) was chosen on the basis of data obtained by Herath et al. [53], who found that this value is present in cows with clinical endometritis. Even over a short time, this dose of LPS is probably more deleterious for endometrial cells in a static in vitro system than in the in vivo environment, where its effect can be modulated. These results underscored the stressor effect of LPS, which was reduced by the beneficial effect of MVs on cell vitality. When LPS and MVs were used at the same time, the viability of the cells was high, without significant differences compared to control cells. These results confirm the anti-inflammatory properties of MVs. However, the beneficial effect of MVs is time-dependent: MVs did not counteract the action of LPS if the cells had previously been exposed to LPS (3 h of LPS exposure). On the other hand, LPS showed no detrimental effect if internalization of MVs by the endometrial cells had occurred 24 h before LPS treatment. The expression of pro-inflammatory genes supports these data. LPS induces overexpression of IL-1β at 24 h, and MVs are able to counteract this action: at 24 h this expression was downregulated in all three experimental conditions (LPS and MVs simultaneously; LPS for 3 h and then MVs; MVs for 24 h and then LPS). This downregulation was observed even in the experiment where cells were previously exposed to LPS; the cells surviving the apoptotic process are probably able to incorporate MVs that, by 24 h, are capable of counteracting the action of LPS. The overexpression of IL-6 and TNF-α induced by LPS occurs at 3 h and, at this time point, MVs used either simultaneously with LPS or added after the action of LPS were not able to block it. This seems to indicate that downregulation of these genes can occur only when the cells are incubated with MVs long enough to guarantee the necessary incorporation of the MVs; indeed, we found that MVs were visible only after 6 h, with uptake increasing up to 24 h. In parallel with gene expression, the release of cytokines was studied, confirming the observations regarding gene regulation. LPS was demonstrated to be capable of inducing the release of TNF-α, the peak of which occurred 24 h after stimulation with LPS. The release of IL-6 appeared earlier, reached its highest level 12 h after stimulation with LPS, and then gradually decreased over the subsequent observation period. MVs reduced the release of these pro-inflammatory cytokines, and the maximum modulatory activity was observed between 12 h and 24 h, both when the MVs were used simultaneously with LPS and when they were applied 24 h before LPS treatment. In both cases, no counteracting action of MVs was obtained within the first 3 h, confirming that internalization had yet to begin.
MMP-1 and MMP-13 expression is induced by IL-1β, so the expression of these two genes was investigated at 24 h after treatment with 10 ng/ml LPS, taking into account the higher expression of IL-1β at this time point. MMP-1 and MMP-13 expression was statistically higher compared to the control, confirming the inflammatory effect induced by LPS. Matrix metalloproteinases (MMPs) are a family of structurally related zinc/calcium-dependent proteinases with a pivotal role in extracellular matrix degradation during both normal and pathological tissue remodeling processes [54,55]. In addition, the MMP collagenases are key participants in extracellular matrix remodeling and are important for the separation of bovine placental tissues from the endometrium at term [56]. MMP-1, -2, -3, -9, and -13 are all highly expressed in the bovine endometrium in late gestation [57], while MMP-1 and MMP-13 expression levels are downregulated in the postpartum endometrium. Our study demonstrated that MVs downregulated the expression of MMP-1 and MMP-13 at 24 h of LPS treatment. It is well known that miRNA-containing microvesicles can regulate the inflammation process [58]. In this context, our results make it possible to assume that the cargo of the MVs contributed to the anti-inflammatory effect. Since MVs contain various active molecules, such as lipids, proteins, mRNA, and miRNA [29], we studied the presence in MVs of three miRNAs involved in the regulation of pro-inflammatory genes in our in vitro model (miR-335, miR-146a, and miR-26a-2). miR-335 has been demonstrated to regulate the expression of TNF-α and IL-6 during human adipose cell inflammation [59]. miR-146a has been demonstrated to decrease the expression of IL-1β and, as an indirect effect, to suppress the level of MMPs in intervertebral discs in the bovine species [60]. miR-26a-2 has been widely studied and has been correlated with human inflammation, cell proliferation, and apoptosis [61]. The DIANA tool confirmed that these miRNAs have predicted targets in horse inflammation as well. The downregulation of gene expression shown in this study could be correlated with miRNA transfer from MVs to endometrial cells.

Conclusion

These data provide a critical starting point for dissecting how equine amniotic MVs respond to and alter an inflammatory situation, and they point to a promising approach for the treatment of endometritis. Much research is still needed to establish the true biological role of miRNAs in endometrial disease, with a view to translating this knowledge into clinically effective outcomes.

Funding

The research was financially supported by Università degli Studi di Milano.

Authors' contributions

CP: isolation of amniotic cells, conditioned medium, and microvesicles; collection and assembly of data on vitality, cell proliferation, and LPS experiments; final approval of the manuscript. MGS: molecular biology study, data analysis, and final approval of the manuscript. AB: molecular biology study, collection and assembly of data, financial support, and final approval of the manuscript. PE: isolation of endometrial cells; collection and assembly of data on vitality, cell proliferation, and LPS experiments; final approval of the manuscript. MGM: molecular biology study, collection and assembly of data, and final approval of the manuscript. BC: molecular biology study, collection and assembly of data, and final approval of the manuscript.
DB: molecular biology study, collection and assembly of data, financial support, and final approval of the manuscript. AI: collection and assembly of data on cytokine release, and final approval of the manuscript. SL: collection and assembly of data on cytokine release, financial support, and final approval of the manuscript. EC: molecular biology study on miRNA, and final approval of the manuscript. FP: molecular biology study on miRNA, and final approval of the manuscript. FC: conception and design, financial support, and final approval of the manuscript. AL-C: conception and design, in vitro study of MV uptake, coordination of all experiments, collection and assembly of all data, data analysis and interpretation, manuscript writing, and final approval of the manuscript.
DO THE RADIOLOGICAL CRITERIA WITH THE USE OF RISK FACTORS IMPACT THE FORECASTING OF ABDOMINAL NEUROBLASTIC TUMOR RESECTION IN CHILDREN?

ABSTRACT

Background: The treatment of neuroblastoma depends on accurate staging, which is traditionally performed postoperatively and depends on the surgeon's expertise. The use of image-defined risk factors at diagnosis appears to be predictive of resectability and complications, and to promote homogeneity in staging. Aim: To compare the traditional resectability criteria with the image-defined risk factors for resectability at two moments, at diagnosis and in the pre-surgical phase, analyzing resectability, surgical complications, and relapse rate. Methods: Retrospective study of 27 children with abdominal and pelvic neuroblastoma, stages 3 and 4, with tomography and/or resonance imaging at diagnosis and pre-surgery, identifying the presence of risk factors. Results: The mean age of the children was 2.5 years at diagnosis; 55.6% were older than 18 months, 51.9% were girls, and 66.7% were in stage 4. There was agreement on the resectability of the tumor between the two methods (INSS and IDRFs) at both moments of evaluation, at diagnosis (p=0.007) and post-chemotherapy (p=0.019). Accordingly, all patients deemed resectable by IDRFs post-chemotherapy had complete resection, while 87.5% of those deemed unresectable had incomplete resection. There was remission in 77.8%, 18.5% relapsed, and 33.3% died. Conclusions: Resectability was similar with both methods, both at diagnosis and after preoperative chemotherapy; preoperative chemotherapy increased resectability and decreased the number of risk factors, and the presence of at least one IDRF was associated with incomplete resections and surgical complications; relapses were few.

INTRODUCTION

Neuroblastic tumors were described by Wright in 1910 and originate from undifferentiated nervous cells of the neural crest, present in the adrenal medulla and the sympathetic ganglia and plexus. For this reason, they can grow in various parts of the body: 48% are adrenal, 25% retroperitoneal, and 16% thoracic, and they are rarer in the neck and pelvis [1-4,11,17]. The etiology of neuroblastoma is unknown, but it seems to be related to congenital and genetic anomalies [4,7,10]. Neuroblastoma is the most common extracranial solid tumor in children, representing 10% of childhood cancers (one case for every 7000 children born) and 15% of pediatric cancer deaths [15,17]. In São Paulo State, Brazil, it represents 7.7 cases per million children, with 30% diagnosed within the first year of age and 90% by 19 months [3,4,7,8]. These are heterogeneous tumors that can mature spontaneously or be highly undifferentiated, depending on the biology of the tumor; thus, biological and molecular factors are related to clinical presentation and prognosis [2,3,5,9]. Signs and symptoms depend on the tumor site, but these tumors involve the main vascular trunks of the body and are often metastatic at diagnosis. Surgical resection can be very challenging and severe complications can occur, although complete resection is the aim of surgery. On the other hand, complete resection is often related to favorable histology [7,11,15]. Clinical symptoms may also be related to catecholamines and VIP produced by the tumor [3,4,7,17]. Imaging studies are essential for staging and for determination of the primary tumor site [2,3,4], and the diagnosis is made through tumor biopsy or by demonstrating bone marrow infiltration by neuroblasts [7,10]. Staging determines risk groups and individual treatment, and several systems have been proposed.
In 1988, the INSS (International Neuroblastoma Staging System) was presented as a common language for neuroblastoma staging, but it is applied postoperatively and depends on surgical expertise. Therefore, in 2009 the International Neuroblastoma Risk Group (INRG) established a new staging system, the INRGSS (International Neuroblastoma Risk Group Staging System), which evaluates the initial images at diagnosis and describes more than 20 risk factors, the IDRFs (Image-Defined Risk Factors), predicting surgical risks and the challenges to complete resection at diagnosis [2,6,7,8,13,14,15]. The presence of IDRFs is related to surgical complications and incomplete resections, and the literature uses them to equalize the type of resections among different institutions around the world [2,5,6,9,13]. The aim of this study was to compare the traditional resectability criteria (INSS) with the image-defined risk factor criteria (IDRF) for resectability at two moments, at diagnosis and preoperatively after chemotherapy, in a reference institution for pediatric cancer in Brazil.

METHOD

This is a retrospective review of patients with neuroblastoma treated at the Pediatric Oncology Institute - GRAACC - UNIFESP from 2000 to 2015. Inclusion criteria were: patients with abdominal and pelvic neuroblastomas, stages 3 and 4, with images at diagnosis and before surgery. Of 198 patients treated for neuroblastic tumors in the observation period, 64 met the inclusion criteria, but 25 were excluded because their images could not be found, nine because they had been referred to the institution after surgery elsewhere, and three because they had initially been diagnosed as renal tumors. Thus 27 patients were included in the study, and their clinical data were collected. Images at diagnosis and post-chemotherapy (before surgery) were reviewed by surgeons and radiologists. The aim was to evaluate resectability at diagnosis and after chemotherapy based on the presence of the IDRFs described by Brisse et al. (2009) as part of the INRGSS staging system, and to determine whether this system would have affected the surgical decision made for each patient using the INSS.

Statistical analysis

SPSS 20.0 and STATA 12 were used; significance was set at 5%. Kappa and McNemar coefficients were used to compare resectability at diagnosis and post-chemotherapy between the INSS and IDRF systems. Uni- and multivariate analyses, Kaplan-Meier curves, and Cox regression models were performed.

RESULTS

Data from 27 children were analyzed. Age varied from 0 to 9 years (mean 2.5 years, median 2 years). The mean time from the beginning of symptoms to diagnosis was 1.4 years; 51.9% of patients were female; 55.6% were older than 18 months at diagnosis; 66.7% were stage 4; and the distribution by tumor location was similar across sites (p=0.895, Table 1). Resectability was compared between INSS and IDRFs at diagnosis (n=27) and after chemotherapy (n=26); one patient was treated with surgery as the initial approach and had complete resection (Table 2). Comparing resectability between INSS and IDRFs at diagnosis (Kappa=0.362, p=0.007) and after chemotherapy (Kappa=0.354, p=0.019), weak but significant agreement was observed. However, when comparing results between diagnosis and post-chemotherapy using IDRFs, no agreement was observed (Kappa=0.194, p=0.107). For the INSS criteria it was not possible to calculate the Kappa coefficient, because all 26 patients were considered unresectable at diagnosis (Figure 1).
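The Kappa agreement statistics reported here can be reproduced from per-patient resectability calls. A minimal sketch follows; the vectors are hypothetical illustrations, not the study data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient resectability calls (1 = resectable, 0 = unresectable)
inss_calls = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
idrf_calls = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]

print(cohen_kappa_score(inss_calls, idrf_calls))  # agreement beyond chance

# Note: when one classifier assigns a single constant label (as the INSS did
# at diagnosis, with all 26 patients unresectable), observed and chance
# agreement coincide and the coefficient degenerates, which is consistent
# with the authors being unable to report it for that comparison.
```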
FIGURE 1 - Concordance and Kappa values

As for the type of surgical resection at diagnosis and post-chemotherapy, for both INSS and IDRFs there was an association between IDRF status and the type of resection post-chemotherapy (p=0.001), meaning that all patients considered resectable post-chemotherapy by IDRFs had complete resections. On the other hand, 87.5% of patients considered unresectable had incomplete resections (Table 3). On the ROC curve, a cut point of 1 IDRF post-chemotherapy was associated with 87.5% sensitivity and 66.7% specificity for incomplete resection. Thus, if all post-chemotherapy patients with one or more IDRFs were classified as incomplete resections, 87.5% would be correctly classified, and if those with none were classified as complete resections, 66.7% would be correctly classified (Figure 2). For patients who had surgery after chemotherapy (n=16), differences were observed between the number of IDRFs and complete or incomplete resection (p=0.009): the median number of IDRFs was lower for patients who had complete resections. Survival was affected by the number of IDRFs: the more IDRFs, the worse the survival (Figure 3).

DISCUSSION

Neuroblastoma is a heterogeneous and multifactorial malignancy whose biology affects survival rates. Multimodal treatment has enhanced the chances of survival and cure [8]. The literature shows a predominance of males, but in the present series 51.9% of patients were female (p=0.188); gender had no influence on survival [5,13,17,18]. On the other hand, age at diagnosis above 18 months is an independent risk factor for prognosis [5,9]; 55.6% of patients were older than 18 months in this series (mean 30 months), reflecting the prevalence of high-stage tumors. Concerning site, 62.9% of tumors were adrenal, but site had no impact on survival (p=0.266). Surgery is the best initial approach in localized disease, but there is debate about the best initial approach for larger tumors that encase other structures and for advanced-stage tumors. The type of surgical resection and the staging influence prognosis, and some groups advocate complex and risky resections; others argue that aggressive surgery is questionable and of little benefit in high-risk patients already heavily treated within the multimodal protocol [6,17]. Preoperative chemotherapy is essential in neuroblastomas that involve the renal vessels, celiac trunk, or SMA, after which the possibility of complete resection is enhanced; nephrectomies should be avoided when possible [5,8,9]. Mullassery et al. performed a systematic review on the impact of aggressive surgery in stage 3 and 4 neuroblastomas: complete resections are associated with better prognosis in stage 3 but have limited impact in stage 4 tumors. Irtan et al. compared images from diagnosis and the preoperative phase, identifying IDRFs at both moments, along with the site and extent of the tumor and the local impact of chemotherapy on surgery; resectability as assessed by IDRFs was enhanced by chemotherapy, from 14.8% at diagnosis to 34.6% after chemotherapy. In our series, post-chemotherapy IDRFs and the type of surgical resection were convergent, since patients classified as resectable under the new criteria had in fact undergone complete resection (p=0.001); of those considered unresectable, 87.5% had incomplete resections.
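The sensitivity and specificity quoted above for the one-IDRF cut point follow directly from a 2 × 2 table of IDRF status against resection outcome. A minimal sketch, with hypothetical counts chosen only to match the reported percentages:

```python
# Hypothetical 2x2 counts: test = at least one IDRF post-chemotherapy,
# outcome = incomplete resection (illustration only, not the study data)
tp, fn = 7, 1  # incomplete resections with / without at least one IDRF
tn, fp = 4, 2  # complete resections without / with at least one IDRF

sensitivity = tp / (tp + fn)  # fraction of incomplete resections flagged
specificity = tn / (tn + fp)  # fraction of complete resections not flagged
print(sensitivity, specificity)  # 0.875, 0.667 -- the reported values
```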
In a previous study at the same institution, in 1998, severe surgical complications occurred in 16.4% of patients, with 30.7% mortality [1]; with the advances in chemotherapy, supportive care, and bone marrow transplantation, the present series had only one patient with a surgical complication, and the overall survival that used to be 49.4% is 66.6% today [1]. Of 17 patients with high-stage disease treated with surgery, only one post-chemotherapy patient had a surgical complication (hemorrhage). This patient had six IDRFs at diagnosis and two IDRFs after chemotherapy, which correlates with surgical difficulty and with the incomplete resection he in fact had. The low incidence of surgical complications, even in high-stage disease, can be explained by the fact that the institution is a reference center for pediatric cancer in Brazil, and by the use of preoperative chemotherapy, which reduces the number of IDRFs. Treatment protocols have changed over the years, and the relapse rate was 18.5%, lower than reported in the literature. Survival rates are comparable to those described in the literature (66.6%) [1,14]. There are several limitations to this study: it is a single-institution case series; it is retrospective; it has a limited number of cases; and the biology of the tumors was not analyzed. Further prospective studies should be conducted to better compare the INSS with the INRGSS.

CONCLUSION

Resectability was similar using the INSS and IDRF systems at diagnosis and post-chemotherapy. Chemotherapy enhanced resectability (from 14.8% to 34.6%) as the number of IDRFs declined. The presence of at least one IDRF was associated with incomplete resections; there was only one surgical complication and a low relapse rate.
Different electronic states at crystallographically inequivalent CuO2 planes on the four-layered cuprate HgBa2Ca3Cu4O10+δ

We report tunneling conductances due to two kinds of crystallographically inequivalent CuO2 planes on the multi-layered cuprate Hg0.95Ba2Ca3Cu4O10.05 (Hg1234), measured by a point-contact tunneling technique. One is an outer CuO2 plane (OP), which has a pyramidal five-oxygen coordination; the other is an inner CuO2 plane (IP), which has a square four-oxygen coordination. These tunneling conductances exhibit two kinds of superconducting gaps Δ with clearly different sizes: Δ on Hg1234 was 36 ± 2 meV for the OP and 55 ± 2 meV for the IP. Moreover, we report the correlation between the mode energy Ω and Δ. The ratio Ω/Δ exhibits the feature common to other cuprates and does not exceed 2. This behaviour implies that the collective spin excitation is a candidate for the mediator of pair formation.

Introduction

More than 30 years have passed since the discovery of the high-Tc cuprate superconductors; however, the mechanism of pair formation has not yet been elucidated, and it therefore remains one of the challenging subjects. Cuprates having three or more CuO2 planes in a unit cell are called multilayer cuprates (MLCs) and have two kinds of CuO2 planes that are crystallographically inequivalent. Sufficient investigations of MLCs have not yet accumulated, owing to the complexity of their crystal structure; this is one of the reasons why the high-Tc superconducting mechanism remains unexplained. On the other hand, investigations of mono- and bilayered cuprates have been actively conducted. In particular, many common properties of cuprate superconductors have been revealed by spectroscopic studies such as point-contact/break-junction tunneling (PCT/BJT) [1,2], scanning tunneling microscopy/spectroscopy (STM/STS) [3], and angle-resolved photoemission spectroscopy (ARPES) [4], since the electronic state can be observed directly. For hole-doped cuprates, for example, there is a common understanding of features such as d-wave superconductivity, the bell-shaped superconducting phase diagram, and the decrease of the superconducting gap magnitude ∆ with increasing doping. As shown in Fig. 1, an MLC such as Hg0.95Ba2Ca3Cu4O10.05 (Hg1234) has two kinds of crystallographically inequivalent CuO2 planes: an outer CuO2 plane (OP), which has a pyramidal five-oxygen coordination, and an inner CuO2 plane (IP), which has a square four-oxygen coordination. MLCs have been investigated in a few spectroscopic studies [5], and intensive NMR studies have been performed [6,7]. According to the NMR results, the local carrier concentration at the OP is higher than that at the IP [6]. By combining the NMR results with the doping dependence of ∆, it can be expected that ∆s of different magnitudes will be observed in MLCs. However, in a spectroscopic experiment using a vacuum as the tunnel barrier, the information from the OP closest to the cleavage surface dominates, and it is difficult to investigate the superconducting characteristics of the IP. On the other hand, the two kinds of superconducting gap originating at the OP and IP have been successfully observed by the PCT method, by forming a direct junction to the OP and IP, as shown in Fig. 1(b, c). Since MLCs have CuO2 planes (IP) distant from the charge-supplying layer, there is a possibility that the nature of the essential superconducting CuO2 plane can be clarified.
Therefore, it is essential to investigate the two kinds of CuO2 planes of multi-layered cuprates. Hg-based cuprates are well known for having flat CuO2 planes and the highest Tc among cuprate superconductors; Hg1234 is therefore one of the ideal materials for the PCT method. Despite numerous experimental studies, there is still no unified understanding of the mechanism of pair formation in cuprate superconductors [8]. As a probe to explore the mechanism of pair formation, the dip structure observed outside the coherence peak of the tunneling conductance has been drawing attention. In one interpretation, the dip structure results from the contribution of bosonic excitations, and it has been reproduced by strong-coupling analysis reflecting the pairing interaction [1,9,10]. These results are consistent with the theory that the peak due to the mode energy Ω is located inside 2∆ in the pairing function for the collective spin excitation model [11]. Furthermore, it has been found that the doping dependence of the Ω estimated from the dip structure coincides with the magnetic resonance mode energy observed in inelastic neutron scattering (INS) [1,12]. Thus, the magnetic interaction has been interpreted as one of the candidates playing the role of glue for the Cooper pairs. However, interpretations other than bosonic excitation have also been proposed, such as an energy-dependent gap function model [13], gap inhomogeneities [14], charge-density-wave ordering [15], and bilayer band splitting [16]. In addition, the tunneling conductance can be reproduced over a wide range by a model considering the pair-pair interaction by Sacks et al. [...].

Tunneling conductances for a superconductor-insulator-normal metal (SIN) junction were measured by the PCT method using a Au tip. The insulating layer corresponds to blocking layers such as HgO, BaO, and Ca, while the CuO2 planes are regarded as the superconducting layer. The Au tip was gradually brought close to the sample surface, and the SIN junction was formed by using the blocking layer as the insulating layer. Figure 1(b, c) shows schematic illustrations of the two types of SIN junction. In PCT measurements, we have reported that the electronic states at the OP and IP can be observed by forming a junction with the tip in contact with the sample surface. That is, in the case of Hg1234, as shown in Fig. 1(b), the S(OP)-I-N junction is considered to be formed by adopting the HgO and BaO layers as the insulating layer. On the other hand, as shown in Fig. 1(c), when the HgO layer supplying charge to the CuO2 planes is scraped off by the Au tip, the OP cannot enter the superconducting state because of the carrier deficiency; in such a situation, the S(IP)-I-N junction is considered to be formed by adopting the non-superconducting OP and the Ca layer as the insulating layer. dI/dV curves were measured by an ac lock-in technique at 4.2 K. The negative (positive) bias in the tunneling conductance corresponds to the occupied (unoccupied) states of the superconductor.
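For orientation, the "d-wave like" sub-gap shape discussed in the next section can be illustrated by the angle-averaged d-wave density of states with Dynes broadening, which the dI/dV of an ideal SIN junction follows at low temperature. The following is a minimal numerical sketch, not a fit to the data; the gap value is set to the OP value reported below, and the broadening is an arbitrary illustrative choice:

```python
import numpy as np

def dwave_dos(E_meV, delta0=36.0, gamma=1.5, n_phi=400):
    """Angle-averaged d-wave quasiparticle DOS with Dynes broadening:
    N(E) = < |Re[(E - i*gamma) / sqrt((E - i*gamma)^2 - (delta0*cos(2*phi))^2)]| >_phi
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    gap = delta0 * np.cos(2.0 * phi)       # d-wave gap around the Fermi surface
    Ec = E_meV[:, None] - 1j * gamma       # Dynes smearing of the quasiparticle energy
    dos = np.real(Ec / np.sqrt(Ec**2 - gap[None, :]**2))
    return np.abs(dos).mean(axis=1)        # ~1 for |E| >> delta0

E = np.linspace(-150.0, 150.0, 601)        # bias energy in meV
g = dwave_dos(E)                           # V-shaped sub-gap, coherence peaks at +/- delta0
```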
Results and discussion

Two kinds of superconducting gap, originating from the OP and IP of MLCs, have been reported by our PCT method [19-24]. As shown in Fig. 2, two kinds of spectra with different gap magnitudes have also been observed, non-selectively, for Hg1234. Figure 2(a) shows typical tunneling conductances for Hg1234. The tunneling conductance for Hg1234 exhibits features similar to those of the monolayer cuprates Bi2Sr2CuOy (Bi2201) [25] and Tl2Ba2CuOy (Tl2201) [26] and the bilayer cuprates Bi2Sr2CaCu2Oy (Bi2212) [27] and TlBa2CaCu2Oy (Tl1212) [28]: the tunneling conductances exhibit not only the peak-dip-hump structure [as indicated by arrows in Fig. 2(a)] but also the d-wave-like shape of the sub-gap region. In Fig. 2(a), the upper curve corresponds to the tunneling conductance due to the electronic state at the OP and the lower curve to that at the IP. As shown in Fig. 2(a), the tunneling conductance at the OP exhibits sharp and high coherence peaks, and the dip structure (indicated by arrows) is observed on both the positive and negative bias sides; we note that the dip structure is clearly observed on the positive bias side as well. On the other hand, the tunneling conductance at the IP is more asymmetric and exhibits coherence peaks that are broader and lower than those at the OP. The superconducting gap ∆ was estimated as ∆ = eV_p, where V_p is the voltage at the coherence peak of the tunneling conductance. (Figure 2(b) caption: Hg1234 exhibits two distinct distributions, originating from the OP and IP; ∆ on Hg1234 was 36 ± 2 meV at the OP and 55 ± 2 meV at the IP; the green and blue curves are fits with a Gaussian distribution.) Furthermore, the dip structure is observed more strongly on the negative bias side than on the positive bias side. Such features have been discussed in strong-coupling analyses considering the Van Hove singularity at the M point in the Brillouin zone [10,29]: a small coupling constant reproduces a coherence peak that is higher on the negative bias side, whereas a large coupling constant reproduces a coherence peak that is higher on the positive bias side. As shown in Fig. 2(b), the histogram of the gap magnitudes on Hg1234 exhibits two distinct distributions, indicating that there are two kinds of gap due to the different electronic states at the OP and IP. Based on the statistical distribution, the gap magnitudes of Hg1234 are determined to be 36 ± 2 meV for the OP and 55 ± 2 meV for the IP. The existence of two kinds of superconducting gap has also been observed by the PCT method [19-24]. According to the NMR results, MLCs exhibit different local carrier concentrations at the OP and IP, with p(OP) larger than p(IP) [7]. On the other hand, as established in spectroscopic studies, the gap magnitude decreases with increasing hole concentration [2-4]. These two experimental facts lead to a conclusion: the gap magnitude at the OP is smaller than that at the IP. Therefore, the tunneling conductance exhibiting the smaller (larger) gap magnitude reflects the electronic state at the OP (IP). Another characteristic feature common to the tunneling spectra of cuprate superconductors is the dip structure observed outside the coherence peak of the tunneling conductance. This structure has been interpreted as an effect of a collective excitation mode within strong-coupling theory [1,10], and it has also been observed for MLCs [24]. As shown in Fig. 2(a), Hg1234 also exhibits the dip structure outside the coherence peak.
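Given a measured dI/dV trace, the peak and dip positions used in the analysis below can be read off numerically. The following is a minimal sketch (the array names and smoothing window are ours): locating the extrema of dI/dV is equivalent to finding where the second derivative of the tunneling current vanishes, which is the criterion used for V_p and V_dip in the text.

```python
import numpy as np
from scipy.signal import savgol_filter

def peak_and_dip(v, didv):
    """Locate V_p (coherence peak) and V_dip on the positive-bias branch.

    v: bias voltages in mV (positive branch, increasing); didv: conductance.
    Returns (Delta, Omega_proxy) in meV, with Delta = e*V_p and the mode
    energy taken from the peak-dip separation.
    """
    g = savgol_filter(didv, window_length=11, polyorder=3)  # mild denoising
    i_p = np.argmax(g)                 # coherence-peak position (dI/dV extremum)
    i_dip = i_p + np.argmin(g[i_p:])   # dip minimum outside the coherence peak
    delta = v[i_p]                     # Delta = e*V_p (meV for v in mV)
    omega = abs(v[i_dip] - v[i_p])     # mode energy from the peak-dip separation
    return delta, omega
```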
According to the collective spin excitation model, the resonance peak at the mode energy in the pairing function corresponds to the position of the dip minimum [10,11]. Thus, we can estimate the mode energy from the dip structure as Ω = eV_p − eV_dip. V_p and V_dip were estimated from the points at which the second derivative of the tunneling current becomes zero at the peak and dip positions, respectively. We now discuss the correlation between the mode energy and the superconducting gap. In Figure 3, the values of Ω/∆ estimated from the dip structure are plotted as a function of ∆ and compared with those for Bi2212 [1], Bi2223 [10], and TlBa2Ca2Cu3Oy (Tl1223) [24]. (Figure 3 caption: Comparison of Ω/∆ vs ∆ for Bi2212 [1], OPT-Bi2223 (110 K) [10], OD-Tl1223 (112 K) [24], and UD-Hg1234 (127 K). Squares are for Hg1234; circles, diamonds, and triangles represent the results for Bi2212 [1], Bi2223 [10], and Tl1223 [24], respectively. The shaded region is a guide to the eye.) The squares correspond to the average values of Ω/∆ on Hg1234, with error bars. The dataset for Tl1223 includes values estimated at the OP and IP from the PCT results. As shown in Fig. 3, the Ω/∆ vs ∆ relation is in good agreement among these cuprates, despite the differences in the number of CuO2 planes and in the charge-supplying layer. That is, Ω correlates with ∆: Ω/∆ increases with decreasing ∆, and its value is less than 2 for both the IP and the OP. Thus, this result is not inconsistent with the model in which the excitation exhibits excitonic character. From the results on Bi2212, it has been concluded that this behavior is based on the spin resonance mode, given the coincidence of the mode energy estimated from BJT [1] with that observed in INS [12]. Furthermore, these results are also consistent with the results on Bi2223 [10] and Tl1223 [24]. Although the spin resonance mode of Hg1234 has not been reported in INS measurements, it should be consistent with the mode energy on Hg1234 reported by our PCT.

Conclusion

We have succeeded in observing two kinds of superconducting gap originating from the different electronic states at the OP and IP of Hg1234. The tunneling conductance of Hg1234 exhibits the features common to mono- and bilayered cuprates, such as the d-wave-like gap shape and the dip structure. The ∆ on Hg1234 was 36 ± 2 meV at the OP and 55 ± 2 meV at the IP. In the analysis of the mode energy estimated from the dip structure, Ω/∆ increases with decreasing ∆ but does not exceed 2. The Ω/∆ vs ∆ relation for Hg1234 is in good agreement with that for other cuprates such as Bi2212, Bi2223, and Tl1223. This behavior is consistent with the model in which the excitation exhibits excitonic character. Thus, this common feature implies that the collective spin excitation is a candidate for the mediator of pair formation in cuprate superconductors.
Intramuscular Haemangioma with Diagnostic Challenge: A Cause for Strange Pain in the Masseter Muscle

Intramuscular hemangiomas are unique vascular tumors, benign in nature, most commonly occurring in the trunk and extremities. When present in the head and neck, they most frequently involve the masseter and trapezius muscles, and they account for less than 1 % of all hemangiomas. Most of these lesions present with pain and discomfort, and some patients may demonstrate progressive enlargement. Because of their infrequency, deep location, and unfamiliar presentation, these lesions are seldom correctly diagnosed clinically. We report a clinically misdiagnosed case of a painful soft tissue mass in the right masseteric region of a 23-year-old female patient, confirmed as intramuscular hemangioma on the basis of imaging studies and histopathologic examination and treated by surgical excision, with no recurrence after a 3-year follow-up.

Introduction

Hemangiomas are vascular neoplasms constituting 7 % of all benign tumors [1]. They are tumors of infancy, occurring most frequently on cutaneous and mucosal surfaces. Such tumors occurring in the skeletal muscles of the head and neck region are rare: intramuscular hemangiomas make up 0.8 % of all hemangiomas [2]. Approximately 14 % of intramuscular hemangiomas occur in the head and neck, with the masseter muscle representing the most common site of involvement [3]. Because of their rarity, deep-seated location, and bizarre clinical presentation, intramuscular hemangioma should be considered in the differential diagnosis of a nonspecific soft tissue swelling with strange pain.

Case Report

A 23-year-old woman reported to the Department of Oral Medicine & Radiology, Sri Ramachandra University and Research Institute, with a complaint of swelling on the right side of the face for the past 6 months. She had noticed the swelling incidentally; it remained the same size for 3 months, after which it became more prominent, and over the last 3 months she had experienced an intermittent pricking type of pain. Her medical history was unremarkable. Clinical examination revealed a diffuse swelling in the right mandibular body region, measuring about 2 × 2 cm, which was warm, tender, and soft in consistency. The swelling was mobile in the horizontal direction and showed restricted mobility in the vertical plane. The overlying skin was pinchable (Figure 1). On clenching, the swelling became more prominent. Intraoral examination revealed no abnormality. A clinical diagnosis of buccal node lymphadenitis was made, with a differential diagnosis of soft tissue abscess, masseteric hypertrophy, accessory parotid tumor, and lymphovascular tumor, and the necessary diagnostic workup was performed. Routine hematological investigations were within normal limits. A posteroanterior view of the mandible showed no pathology; ultrasonography showed a 2.2 × 0.6 cm mixed-echoic lesion within the right masseter muscle with a speck of calcification (Figure 2), while the left masseter appeared normal. Colour Doppler ultrasound showed dilated vascular channels with good flow (Figures 3(a) and 3(b)); there were no interarterial/venous communications. MRI showed a small, well-defined space-occupying lesion (SOL) with a mixed hypo- and hyperintense T2 signal, along the anteroinferior aspect of the right masseter, measuring 2.1 × 2.5 × 1.8 cm, in the axial view (Figure 4(a)) and the fat-suppressed T2 coronal view (Figure 4(b)).
Under general anesthesia, a preauricular skin incision was made, dissection was carried lateral to the parotid gland, and skin flaps were raised. Within the masseter, a small bulging mass measuring about 2 × 2.5 cm was evident. The branches of the facial nerve were preserved. The external carotid artery was looped and proximal vascular control was achieved; small feeding vessels were individually ligated, and blood loss during the procedure was minimal. The mass was completely removed with a margin of normal surrounding muscle to prevent recurrence, and primary closure was done. Postsurgically, the patient was prescribed antibiotics and analgesics for 5 days. There was mild postoperative facial edema, which subsided within twenty days, with no evidence of pain or significant cosmetic problem. Histological examination revealed fibrofatty tissue and fragments of muscle with several thick-walled and thin-walled vessels and occasional nerves; there were also areas of hemorrhage and congestion, suggestive of a venous hemangioma (Figure 5). After a 3-year follow-up, the patient was asymptomatic and ultrasonography revealed no evidence of the lesion (Figure 6).

Discussion

Hemangiomas are benign vascular neoplasms or hamartomas that are indigenous to their site of origin. Intramuscular hemangiomas are very rare, with the masseter muscle accounting for 5 % of all intramuscular hemangiomas; other frequently involved muscles are the trapezius, the extraocular muscles, the sternocleidomastoid, and the temporalis. Their growth may be accelerated by a growth spurt or trauma, and they tend to enlarge slowly; they can regress spontaneously. Malignant transformation is rare. A sudden increase in size on taking oral contraceptive pills has also been reported [2]. They are usually detected early. These tumors generally present as enlarging soft tissue masses with or without pain, and the signs and symptoms may suggest their vascular nature; 90 % of cases occur before the age of 30 years [3]. Most hemangiomas can be diagnosed on clinical examination and do not require any investigation or treatment, as they tend to subside spontaneously. However, imaging is needed in cases of a deep hemangioma with normal overlying skin or in cases of clinically atypical soft tissue masses. When imaging is used, it is important to choose the modality based on the specific lesion and clinical situation. Conventional radiographs help in identifying phleboliths and calcifications, but they may not be specific. Ultrasonography and magnetic resonance imaging (MRI) are the commonly used modalities of choice. In intramuscular hemangiomas, Colour Doppler sonography is particularly useful to demonstrate the vascular structures in and around the muscle, to evaluate pathological changes such as fibrosis, and to detect calcifications. In our case, the presence of calcification was not evident on plain radiography or MRI, but sonography demonstrated it. Hemangioma can be distinguished from other soft tissue lesions by its abundant vascularity and high blood-flow velocity; a Colour Doppler signal in a well-defined hypoechoic mass with heterogeneous echotexture should raise the possibility of hemangioma [4]. A hemangioma with arterial flow can be distinguished from an arteriovenous malformation (AVM) by the presence of solid parenchymal tissue and the absence of interarterial communications. MRI aids in discerning and delineating deep-seated and large intramuscular hemangiomas, and it gives the best diagnostic information.
The MRI findings of an intramuscular hemangioma consist of an intermediate signal on T1-weighted images and an intense signal on T2-weighted images [5], but it should be noted that not all intramuscular hemangiomas give a high-intensity signal on T2-weighted MRI [6]. If pulsations, bruits, or thrills are evident on clinical examination, arteriography is indicated to identify large-vessel communications [7]. On pathologic analysis, vascular lesions can be classified as capillary, cavernous, venous, and arteriovenous malformations depending on the predominant anomalous vascular channels. In our case, imaging studies revealed that the lesion was located within the masseter muscle, and the differential diagnosis included masseteric hypertrophy, which can be unilateral or bilateral and is mostly asymptomatic; its etiology includes defective occlusion, temporomandibular joint disorders, congenital and functional hypertrophies, and emotional disorders, and an increased masseteric bulk on sonography is diagnostic. Soft tissue abscesses are mostly due to bacterial infections and appear as focal hypoechoic areas with echogenic debris, pus, and occasionally gas. Vascular malformations can be grouped as either high-flow (arteriovenous) or low-flow (capillary, cavernous, and venous) lesions; flow characteristics are best demonstrated by Doppler sonography. Lymphatic malformations are present from birth and are usually detected before the age of 2 years; on sonographic examination they generally appear cystic, with thick or thin septa and interspersed solid areas. The treatment of choice is total excision. Surgical excision is associated with a 9-28 % recurrence rate [8] because of the infiltrative growth pattern. Sclerotherapy has a role in the management of intramuscular hemangioma when excision is not possible. Intramuscular hemangioma should be considered in the differential diagnosis whenever a painful soft tissue lesion in a skeletal muscle of a young adult is encountered. Sonography and MRI are excellent diagnostic aids for such lesions.
Marble burying as compulsive behaviors in male and female mice

Marble burying is considered an, albeit controversial, animal model of the compulsive-like behaviors of obsessive-compulsive disorder (OCD). Hallmark features of OCD patients are similarities and, more prominently, differences from anxiety disorders, e.g., the absence of sex differences and resistance to spontaneous remission. We report an experiment on marble burying by male and female C57BL/6N mice. Animals were administered either the classic anxiolytic drug diazepam, which targets the GABA receptor, or a "pure" inhibitor of the serotonin transporter, escitalopram, which has been reported to be particularly effective in OCD. A burying paradigm that more precisely mimics the human condition was used, e.g., testing in the home environment, chronic drug exposure, and acknowledging individual differences by pre-selecting for high marble burying. There were no sex differences in either the drug-treated groups or the control mice. Both diazepam and escitalopram decreased numbers of marbles buried compared to vehicle-only controls, in the absence of correlated changes in anxiety. Diazepam, however, was more effective than escitalopram in suppressing marble burying. The conclusion is that, along with serotonin, GABA is involved in regulating compulsive behaviors. The marble burying paradigm may prove more useful for pharmacological drug tests of impulsivity or attention deficit because of the involvement of serotonin and GABA in both disorders.

INTRODUCTION

Obsessive-compulsive disorder (OCD) is characterized by unwanted, intrusive thoughts and images (obsessions) and repetitive, ritualistic behaviors (behavioral compulsions). The latter presumably serve to reduce anxiety caused by the obsessions. Yet, OCD is now separated from anxiety disorders in DSM-V, largely based on the obsessional component (American Psychiatric Association 2013).

Our research interests are in developing and assessing reliable and valid animal models for psychiatric conditions. OCD has proven to be a particularly difficult condition to model. The task of animal modelers is to develop behavioral measures that isolate compulsions from anxiety measures.

Spontaneous burying of marbles in rodents has been suggested as a compulsive-like behavior (Broekkamp et al. 1986, Deacon 2006, Gyertyan 1995). Marble burying (MB), however, has been criticized on both conceptual and empirical grounds for its ability to serve as a unique benchmark of OCD (Albelda and Joel 2012, Wolmarans de et al. 2016). Indeed, MB has also appeared in the literature as a measure for autism, motivation or general anxiety disorder (GAD) (Ene et al. 2016, Jury et al. 2015, Silverman et al. 2015). Despite the criticisms, MB continues to appear regularly in the literature as a measure of compulsion (Gawali et al. 2016, Kudryashov et al. 2016, Nichols et al. 2016, Satta et al. 2016).

We designed an experiment with mice to assess MB in relation to unique features in OCD patients. 1) In contrast to the notably higher frequencies of most anxiety disorders in women, incidences of OCD have no reliable sex differences (Martin 2003). Our experiment used both male and female mice. 2) There are clearly individual differences in compulsive behaviors among people. Our experiment used pre-tests to select the mice that were most likely to bury marbles. 3) Unlike the anxiety disorders, untreated OCD frequently fails to remit with the passage of time (Taylor et al. 2011).
Our experiment tested the selected mice repeatedly to determine spontaneous reductions as marbles became familiar. 4) The consensus is that the neural circuits for OCD and anxiety differ (Burguiere et al. 2015, Hoffman 2011). Neurotransmitters underlying OCD are the monoamines, mostly serotonin and, likely, glutamate (Bokor and Anderson 2014, Egashira et al. 2008). GABA remains the primary transmitter thought to underlie most anxieties. Our study compared a classic benzodiazepine, diazepam, and an SSRI, escitalopram, that has proven particularly effective for OCD (Shim et al. 2011, Zohar 2008). Finally, all animals were examined in the open field apparatus as a general measure of anxiety.

Hypotheses for the study included no sex differences in MB, consistency over time for individual MB habits in untreated animals, and greatest effectiveness of escitalopram in reducing MB.

Animals

A total of 35 male and 35 female C57BL/6N mice, 40 days of age, obtained from Charles River (Sulzfeld, Germany), were acclimatized for 2 weeks before pretests were conducted. After pretesting for spontaneous MB, 42 mice equally divided between sexes were selected as subjects for the experiment. All mice were individually housed in flat-bottom plastic Macrolon type II cages measuring 360 cm² (Tecniplast, Italy) under SPF conditions. Standard lab diet (Rod16A, LASvendi, Soest) and water were available ad libitum. The colony room lighting was a 12:12 h reversed light/dark cycle with lights off at 9 am; room temperature (20-22 °C) and relative humidity (50%) were controlled automatically. The Institutional Animal Care and Use Committee and the local authorities (Regierungspräsidium Karlsruhe) approved the experimental protocol (permit number: G-37/15).

Materials

All behavioral sessions were conducted during the nocturnal light cycle and under dim illumination. The open field apparatus (50×50×50 cm) was constructed of black Plexiglas. Movement of the animal in the open field was measured automatically by the Ethovision 4.0 tracking system (Noldus Information Technology, Wageningen, The Netherlands) via an overhead, infrared camera (Ikegami Digital). The tracking system can record locomotor movement in the various quadrants of the open field. The innermost quadrant (25×25 cm) was designated as the center arena. After each test, the apparatus was cleaned with 70% ethanol. Marbles used were multi-colored and approximately 16 mm in diameter. The home cage of the animal was used for measurement of MB. Bedding consisted of aspen wood chips (ABEDD LTE-001, Lab & Vet Service, Vienna, Austria) approximately 5 cm deep. Bedding was changed weekly but never on the days before a behavioral test.

Experimental Design

Following an initial test to identify the tendency of each mouse to bury marbles, males (N=21) and females (N=21) were selected that had buried at least 6 marbles in the pre-test and randomly assigned to groups. There were 3 groups of each sex (n=7 per group) that were s.c. injected daily for 11 days with either 0.9% saline (Fresenius Kabi, Bad Homburg) vehicle (Veh only), 2 mg/kg bwt diazepam (Diaz) or 2 mg/kg bwt escitalopram (Esc). These drug dosages are in the low to mid ranges of those employed in the rodent literature (Erfanparast and Tamaddonfard 2015, Nicolas et al. 2006, Pandey et al. 2009, Schneider and Popik 2007).
Procedures

Pre-selection tests of animals for the experiment were completed over 2 days. The pre-selection MB trial was treated as a pre-test, i.e., prior to drug treatments. Using the procedure described below for MB, the 35 males and 35 females were tested for spontaneous MB in their home cages. The 21 mice of each sex burying the most marbles were retained for the experiment, and the other mice were removed to another animal housing room. Drug administrations began at the beginning of behavioral tests of the animals. Injections were done 1 hr before tests.

Behavioral sessions were conducted over 11 days in the open field and in the home cages. The order of tests was counterbalanced between and within groups. In addition to the pretests, experimental subjects were given 2 tests in each apparatus for a total of 4 tests separated by at least 2 days. Test 1 was conducted during the initial 4 days and Test 2 during the last 4 days of the 11 days of testing. Animals were injected on all days, including "off" days.

For a session in the open field, a mouse was removed from the colony room to an adjacent experimental room. The animal was placed in the center of the open field apparatus and movement was recorded over the 30 min session. Also recorded was time spent in the center quadrant of the apparatus. The open field is a marker of activity changes under drug influence, and time in the central area relative to the other areas adjacent to the walls serves as a measure of anxiety (Archer et al. 1987, Benatti et al. 2014, Ene et al. 2016).

The procedure used for MB followed the paradigm used commonly in the literature (Deacon 2006, Gawali et al. 2016, Witkin 2008), except that we conducted the tests in the home cages rather than in novel cages. The logic was to mimic human OCD, in which compulsive behaviors occur with unsettling disturbances of a familiar environment. For a test, the home cage was moved from the cage rack to a nearby table and the mouse removed from its home cage to a holding cage for approximately 1 min. During that time, 12 marbles were distributed equally around the perimeter of the home cage at least 2 cm from the walls. Marbles were placed on top of the approximately 5 cm-deep wood chip bedding of the home cage. The mouse was returned to its home cage, which was placed back into its normal place in the cage rack.

After 30 min, the cage was again moved to the table, and the animal placed in the holding cage while the numbers of marbles buried were counted. Although some marbles were buried out of sight, most often the marbles were buried only partially. A marble was counted as buried if it was covered half or more by the bedding.

Statistical Analyses

Assessment of behavioral differences among groups was accomplished with 3-way analyses of variance (ANOVAs). The first 3 × 2 × 3 ANOVA on numbers of marbles buried had main factors of Drug (diazepam, escitalopram or vehicle only) × Sex, with Trial (Pre-test, Trial 1 and Trial 2) as a repeated measure. Open field activity used a similar 3 × 2 × 2 arrangement, except with 2 trials as the repeated measure. Numbers of marbles buried were the primary measure of compulsive behavior. Time in the center arena of the open field served as the measure of anxiety. Distance traveled, in cm, was a measure of general activity. Post-hoc Tukey-Kramer tests were used for pair-wise comparisons of mean group differences. The p<0.05 significance level was used for all analyses.
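As a hedged illustration of this mixed between/within design, the following minimal Python sketch runs a reduced two-factor version (Drug as the between-groups factor, Trial as the repeated measure) on synthetic data; the full 3 × 2 × 3 Drug × Sex × Trial model would extend it. The pingouin library and all counts here are illustrative choices, not the software or data actually used in the study.

# Minimal sketch of a mixed-design ANOVA with a Tukey post-hoc test,
# using the pingouin library on synthetic (hypothetical) data.
# The paper's full design is 3 x 2 x 3 (Drug x Sex x Trial); this sketch
# covers one between-groups factor (drug) and one repeated measure (trial).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
drugs = ["Veh", "Diaz", "Esc"]
trials = ["Pre", "Trial1", "Trial2"]

rows = []
for subject in range(42):                  # 42 mice, as in the experiment
    drug = drugs[subject % 3]              # hypothetical group assignment
    for trial in trials:
        buried = int(rng.integers(0, 13))  # fake counts out of 12 marbles
        rows.append({"subject": subject, "drug": drug,
                     "trial": trial, "buried": buried})
df = pd.DataFrame(rows)

# Mixed ANOVA: 'trial' varies within subjects, 'drug' between groups
aov = pg.mixed_anova(data=df, dv="buried", within="trial",
                     subject="subject", between="drug")
print(aov.round(3))

# Pairwise Tukey HSD comparisons between the three drug groups
# (the Tukey-Kramer variant used in the paper handles unequal group sizes)
posthoc = pg.pairwise_tukey(data=df, dv="buried", between="drug")
print(posthoc.round(3))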
RESULTS

Post-hoc group comparisons revealed statistically significant differences (p<0.05) between and within groups. The groups did not differ on the pre-test data. However, between-group differences showed the mice administered diazepam burying the fewest marbles on Trials 1 and 2 and the control animals the most, with the escitalopram mice differing from both groups on both trials. Within-group differences indicate that the control animals did not change burying over the pre-trial and two drug trials, but the drug groups did change over that period. Diazepam animals reduced burying from the Pre-trial to Trial 1 but showed no further reduction on Trial 2. The escitalopram group had yet a third pattern, decreasing burying from the Pre-trial to Trial 1 and then increasing again on Trial 2. There were no reliable differences between male and female mice in any of the MB comparisons.

Results for the distance traveled measure of general locomotor activity (Fig. 2) revealed no between-group differences, and only the single main factor of sex was statistically reliable, F(1,35) = 7.02, p = 0.012, η² = 0.167. The females were more active than the male mice, independent of drug treatments. None of the interactions with the sex factor was statistically significant.

Examination of time spent in the center arena of the open field was conducted as a measure of anxiety. Results are in Fig. 3. The 2×3×2 factorial analysis indicated the sex × drug interaction was statistically reliable, F(1,35) = 3.61, p = 0.037, partial eta squared (η²) = 0.171. Further analyses revealed that the females in the diazepam group spent the most time in the center arena, and, surprisingly, females of the escitalopram group had the least time in the center area. All other groups were statistically similar. Neither the 3-way interaction nor any of the other 2-way interactions achieved statistical significance.

Within-group results were that only the Trials main effect was significant, F(1,35) = 10.30, p = 0.003. Overall, both sexes spent more time in the center arena during their second trial than the first trial.

DISCUSSION

Results of the experiment included no sex differences in control and drug groups of mice selected for high MB. Males and females administered only vehicle showed consistent levels of burying behaviors, while both sexes administered diazepam and escitalopram reduced their MB. Diazepam was more effective in eliminating the behavior than escitalopram. Indeed, the influence of escitalopram on burying appeared to weaken over time. The implication is that, along with the serotonergic system, GABA neurotransmission is critically involved in MB.

The literature places emphasis on serotonin in both patients and animal models of compulsive behaviors. Findings of certain SSRIs being the first-line treatment for OCD and that those same SSRIs reduce MB have reinforced the serotonin hypothesis (Egashira et al. 2008). Escitalopram is a notable example (Stein et al. 2008, Wolmarans de et al. 2016). However, longer durations and higher doses of the SSRIs often are needed to treat OCD patients compared to other psychiatric disorders (Bokor and Anderson 2014). More telling, SSRIs fail to reduce symptomatology in 40-60% of OCD patients (Pallanti and Quercioli 2006), although benzodiazepines were even less effective (Goddard et al. 2008). The clear implication is that OCD is a complex disorder that involves multiple systems, rather than the current emphasis on the serotonin receptor (Egashira et al. 2008, Marazziti et al. 2010, Takeuchi et al. 2002).
The value of MB as an animal model for OCD remains an open question (Albelda and Joel 2012). Nonetheless, there are empirical reasons indicating that rodents do not bury objects simply because they are anxious. MB has features not observed in other anxiety paradigms. For example, burying will occur in the safety of the home cage, in the absence of obvious fear or stressful stimuli, and burying fails to habituate over test sessions (Chotiwat and Harris 2006, Greene-Schloesser et al. 2011, Thomas et al. 2009).

Our findings provide additional evidence. The most common behaviors in OCD patients are compulsive checking, involving the performance of routines related to security, orderliness, and accuracy but without resolution (Taylor et al. 2011). Our findings parallel this in the absence of a reduction in compulsive burying by the control animals. Over time the control animals revealed a similar resistance to extinction of burying despite becoming familiar with the marbles. Finally, there was no obvious relation in our study between MB and time in the center arena of our open field, a measure of anxiety (Benatti et al. 2014). That measure indicated females administered diazepam were least anxious, but escitalopram females were the most anxious. All other group comparisons were not significantly different. The data for general locomotor activity indicated that, over all groups, females were more active. Untreated female rodents often are found to be more active than untreated males (Blizard et al. 1975, Palanza et al. 2001, Taylor et al. 2011).

A combination of methodological features makes our experiment a unique contribution to this literature. Animals were tested in their familiar home cages rather than in a novel, neutral apparatus (Witkin 2008). We acknowledged the fact that there are individual differences in compulsive behaviors by pre-selecting mice that buried marbles (Fineberg et al. 2015, Wirth-Dzięciołowska et al. 2005). Whereas we examined burying by both sexes, almost all previous reports have used males, even when testing the influence of ovarian steroids (Gomez et al. 2002, Umathe et al. 2009). We compared diazepam, a classic benzodiazepine and a GABA agonist (Nicolas et al. 2006), with escitalopram, an SSRI that has been described as a pure inhibitor of the serotonin transporter (Stahl 2013). Moreover, animals were chronically exposed to the drugs, as opposed to the acute treatments in most reports in the literature (Jimenez-Gomez et al. 2011).

Yet, diazepam proved more effective (Joel et al. 2004) than escitalopram in decreasing MB. The present experiment, essentially, failed to establish MB as an animal model capable of dissociating compulsive and anxious behaviors (Albelda and Joel 2012).

This may not be a failing so much as empirical support for OCD and some forms of anxiety being inseparable (Schneier et al. 2008), despite the newest DSM moving OCD from the anxiety categories (American Psychiatric Association 2013). Notably, both disorders share neural pathology of the cortico-striatal-thalamic-cortical circuitry (Milad and Rauch 2007, Stahl 2013). Generalized anxiety disorder and OCD both show frontal/striatal hyperactivity. Social anxiety and OCD both show similar anterior cingulate dysfunction (Kim and Gorman 2005).
We remain convinced the MB paradigm has important potential as an animal model for psychiatric disorders. The puzzle is that it is not clear what is actually being measured. Perhaps it is too narrow a perspective to focus on anxiety or compulsion as the only possibilities. It is entirely possible that there are other dimensions being measured, for example, impulsivity, which is neither specifically anxious nor specifically compulsive. Indeed, it has been proposed that individual differences in patients suggest a continuum of compulsivity to impulsivity (Allen et al. 2003, Geller 2006). DSM-V indicates that impulsive acts may be performed by patients for pleasure or gratification rather than relief of tension or anxiety. MB could prove useful for innovative pharmacological treatments for impulse-control, attention deficit and related psychiatric disorders. These are conditions for which both serotonin and GABA, as well as dopamine, glutamate and neurosteroids, have been implicated (Hoffman 2011, Perry et al. 2011, Schule et al. 2011, Yates et al. 2012). We believe a fresh look at the MB paradigm is warranted.

CONCLUSIONS

Our results suggest the conclusion that, independent of sex, marble burying can be suppressed with therapeutic drugs used to treat both anxious and compulsive patients. However, MB cannot clearly distinguish compulsions from anxiety. MB remains an intriguing animal model partly because burying objects appears to be an inherent trait of some, but not all, mice. Moreover, burying has the virtues of ease, reliability and sensitivity to drug treatments. A broader perspective, thinking "outside the box," may reveal MB as useful for pharmacological drug tests of impulsivity, attention deficit disorder or other psychiatric disorders that have proven difficult to model in animals.

Fig. 1. Numbers of marbles buried by male and female mice during a pre-drug test and over two trials during 11 daily administrations of either vehicle only (Veh), escitalopram (Esc) or diazepam (Diaz). An asterisk (*) indicates significant differences (p<0.05) of drug groups from Veh controls. The double asterisk (**) indicates Diaz groups differed significantly (p<0.05) from the Esc groups.

Fig. 2. Distance traveled by male and female mice in the open field during a pre-drug test and over trials during exposure to either vehicle only (Veh), escitalopram (Esc) or diazepam (Diaz). There were no differences between groups, although the combination of female groups differed significantly (p<0.05) from the combination of male groups.

Fig. 3. Time in a central arena of the open field by male and female mice over two trials during 11 daily administrations of either vehicle only (Veh), escitalopram (Esc) or diazepam (Diaz). An asterisk (*) indicates significant differences (p<0.05) from the other groups.
Dietary Fibre Intake in Relation to Asthma, Rhinitis and Lung Function Impairment—A Systematic Review of Observational Studies

A high intake of dietary fibre has been associated with a reduced risk of several chronic diseases. This study aimed to review the current evidence on dietary fibre in relation to asthma, rhinitis and lung function impairment. Electronic databases were searched in June 2021 for studies on the association between dietary fibre and asthma, rhinitis, chronic obstructive pulmonary disease (COPD) and lung function. Observational studies with cross-sectional, case-control or prospective designs were included. Studies on animals, case studies and intervention studies were excluded. The quality of the evidence from individual studies was evaluated using the RoB-NObs tool. The World Cancer Research Fund criteria were used to grade the strength of the evidence. Twenty studies were included in this systematic review, of which ten were cohort studies, eight cross-sectional and two case-control studies. Fibre intake during pregnancy or childhood was examined in three studies, while seventeen studies examined the intake during adulthood. There was probable evidence for an inverse association between dietary fibre and COPD and suggestive evidence for a positive association with lung function. However, the evidence regarding asthma and rhinitis was limited and inconsistent. Further research is needed on dietary fibre intake and asthma, rhinitis and lung function among adults and children.

Introduction

Epidemiologic evidence has consistently shown that a high intake of dietary fibre is associated with a reduced risk of several chronic diseases, such as cardiovascular diseases, cancer, type 2 diabetes and obesity, as well as of total and specific-cause mortality [1][2][3]. Fibre-rich, plant-based dietary patterns, including grains, fruits, vegetables and nuts, stimulate the growth of beneficial bacterial species and contribute to a healthy colonic microbiota ecosystem due to the fermentation of fibres into short-chain fatty acids (SCFAs) [4].

Asthma is a chronic inflammatory disorder of the airways and the most common chronic disease among children. It causes a substantial burden of disease, including reduced quality of life in people of all ages and premature death [5]. Children with asthma, particularly those with persistent and severe forms of asthma, may attain a lower maximum lung function in adulthood, which increases the risk for the development of chronic obstructive pulmonary disease (COPD) [6]. Additionally, asthma frequently coexists with rhinitis, mostly among adolescents, as well as with other atopic diseases, and it has been suggested that allergy-related diseases cannot be studied as isolated entities [7].

Both genetic and environmental factors have been implicated in the aetiology of the aforementioned diseases; however, the increase in the prevalence of asthma and other allergic diseases in the second half of the 20th century has been mostly associated with environmental factors, such as smoking, air pollution and changes in lifestyle and diet [8]. Following this increase, a growing interest in identifying potentially modifiable factors has been expressed in the literature. In recent years, epidemiological studies have also explored the association between dietary fibres and respiratory and allergic diseases.
Dietary fibres may influence the development of respiratory and atopic outcomes through different mechanisms: for example, through the antioxidant and anti-inflammatory effects of whole grains, by enhancing the bio-accessibility of antioxidants from fruits and vegetables, or through immunomodulatory effects induced by changes in the gut microbiota [9][10][11]. However, the epidemiological evidence for this potential association has not been systematically reviewed. The aim of this systematic review is, therefore, to explore the existing evidence on dietary fibre intake in relation to asthma, rhinitis, COPD and lung function.

Protocol and Registration

This systematic review was performed according to the PRISMA guidelines [12] (checklist in the Supplementary Materials), and an application for registration in PROSPERO was submitted.

Eligibility Criteria

Original studies reporting empirical findings on the association between dietary fibre intake and at least one outcome of interest (asthma; rhinitis; COPD, or symptoms of the aforementioned diseases, such as wheeze, cough, and phlegm; lung function) were searched. Observational studies on humans with cross-sectional, case-control or prospective designs were included. Studies on animals, case studies (case reports or case series) and intervention studies were excluded.

Information Sources

Systematic searches using predefined search terms were performed in multiple databases, including Medline (OVID), Embase, Cochrane Library, Web of Science and Scopus. The databases were searched from inception, limited to the English, French, German and Swedish languages. Additionally, reference lists of the articles included in the review and of relevant review studies were manually screened to identify other relevant articles. Information from conference abstracts, dissertations and grey literature (e.g., reports) was not included.

Search Strategy

The search was conducted in June 2021 based on the term construct used for Medline (see the Supplementary Materials), assisted by professional librarians at the Karolinska Institute University Library. The following MeSH terms were used in the Medline (OVID) search: Dietary Fiber, Lung Diseases, Obstructive, Rhinitis and Respiratory Function Tests. The MeSH terms were adapted in accordance with the corresponding vocabulary in Embase Emtree. Each concept was also complemented with relevant free-text terms. The free-text terms were, if appropriate, truncated and/or combined with proximity operators. The full search strategies are available in the Supplementary Materials.

Study Selection

The search results were exported to Endnote X9, where duplicates were excluded. As the first step, relevant articles were considered based on their title and abstract. At the second step, full-text versions of the selected papers were examined. In case there were multiple publications from the same cohort study, they were all included if they referred to different outcomes of interest. Following the above inclusion and exclusion criteria, two reviewers (E.S. and A.V.G.) independently assessed the studies for potential inclusion, without considering their results. Any differences in opinion were resolved through discussion until a consensus was reached. A third reviewer (S.E.) was consulted when necessary.

Data Extraction

The two reviewers independently conducted the data extraction from each study using a predefined data extraction sheet.
The items extracted regarding the study characteristics comprised the first author name; year of publication; objectives; country; name of cohort (if applicable); study design; sample size; source population (age, sex and other characteristics); exposure assessment; categorisation of exposure; outcome assessment; mean follow-up period (if applicable); statistical methods; effect measures; covariates; missing data; control for selection bias and confounding; effect modifications; and sensitivity analyses.

Risk of Bias in Individual Studies

The 'Risk of Bias for Nutrition Observational Studies' (RoB-NObS) tool recently developed by the US Department of Agriculture (USDA) Nutrition Evidence Systematic Review (NESR) team [13] was used by two reviewers (E.S. and A.V.G.) to independently assess the risk of bias in individual studies in the following domains: bias due to confounding, selection bias, bias in the classification of exposures, bias due to departures from intended exposures, bias due to missing data, bias in the measurement of outcomes and bias in the selection of reported results. In case of discrepancies, a third reviewer (S.E.) assessed the study, and a consensus was achieved.

Presentation/Synthesis of Results

We performed a qualitative synthesis of the results, including a summary table presenting the association between the total fibre intake and the outcomes of interest (highest vs. lowest category and p for trends) from each study. Associations between different sources of fibre and the outcomes, as well as the stratified analysis results, are reported in text only.

Risk of Bias across Studies

The World Cancer Research Fund (WCRF) criteria [14], applied as suggested by Arnesen et al. [15], were used to grade the strength of the evidence for each outcome of interest as convincing (high), probable (moderate), limited/suggestive (low) and limited/no conclusion (insufficient).
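Many of the associations summarised in the results below are reported as odds ratios for the highest vs. lowest intake category with 95% confidence intervals. As a hedged reminder of the arithmetic behind such figures (in the included studies the estimates come from adjusted logistic regression models, not raw tables; the counts below are purely hypothetical), a minimal Python sketch for an unadjusted 2×2 table:

# Minimal sketch: odds ratio (OR) and Wald-type 95% CI for a
# highest-vs-lowest exposure comparison from a 2x2 table.
# All counts are hypothetical, purely for illustration.
import math

# Hypothetical 2x2 table: rows = fibre intake quartile (Q4 high, Q1 low),
# columns = outcome status (cases, non-cases)
cases_q4, noncases_q4 = 40, 460   # highest fibre quartile
cases_q1, noncases_q1 = 70, 430   # lowest fibre quartile

odds_q4 = cases_q4 / noncases_q4
odds_q1 = cases_q1 / noncases_q1
odds_ratio = odds_q4 / odds_q1    # OR < 1 suggests an inverse association

# Standard error of ln(OR) from the four cell counts
se_ln_or = math.sqrt(1/cases_q4 + 1/noncases_q4 + 1/cases_q1 + 1/noncases_q1)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_ln_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_ln_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")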
Study Selection

A flowchart of the study selection is presented in Figure 1. Briefly, the search of the electronic databases yielded 1328 articles, 169 of which were considered relevant after title and abstract screening. Additionally, one article was identified in the reference lists. Finally, 20 articles were considered for inclusion in the systematic review. Of these, ten were cohort studies, eight cross-sectional studies and two case-control studies. Ten studies were published in the last five years (2017-2021) and ten before 2017 (2004-2016). Seven studies were conducted in the US, seven in Asia, three in Europe and three in Australia.

Maternal Fibre Intake during Pregnancy

An Australian cohort study of 639 mother-infant pairs, including infants with a family history of allergic disease, found that, although the total fibre intake during pregnancy was not associated with allergic disease in the offspring, a higher resistant starch intake was associated with a reduced risk of infant wheeze up to age 12 months (OR 0.68; 95% CI 0.49-0.95) [16].

Fibre Intake during Childhood

Two cross-sectional studies reporting fibre intake during childhood were identified, with inconsistent results. An Australian study of 144 adolescents aged 12-18 years reported no association between fibre intake and self-reported wheeze [17]. However, a study among 4133 children aged 2-11 years from the US National Health and Nutrition Survey (NHANES) indicated that the odds of having asthma were higher for children who had a lower fibre intake (Q1 vs. Q4, ever asthma OR 1.31; 95% CI 0.88-1.96, p-trend 0.034 and current asthma OR 1.38; 95% CI 0.87-2.20, p-trend 0.027) [18]. In this study, the median fibre intake was 6.7 g/1000 kcal, which is below the recommended US intake of 14 g/1000 kcal.

Fibre Intake during Adulthood

Asthma, Rhinitis and Related Symptoms

Five studies reporting fibre intake during adulthood in relation to asthma, rhinitis and related symptoms were identified, with somewhat consistent results. In a cross-sectional study of 13,147 adults from the US NHANES, a low fibre intake was associated with increased odds of prevalent asthma (Q1 vs. Q4, OR 1.4; 95% CI 1.0-1.8, p-trend 0.092), wheeze (OR 1.3; 95% CI 1.0-1.6, p-trend 0.017), cough (OR 1.7; 95% CI 1.2-2.3, p-trend < 0.001) and phlegm (OR 1.4; 95% CI 1.1-2.0, p-trend 0.011) [19]. Regarding asthma, stronger associations were seen for women and for non-Hispanic White adults. In a cross-sectional study of 10,479 adults from the Korean National Health and Nutrition Examination Survey (KNHANES), a higher dietary fibre intake was associated with reduced odds of asthma (Q4 vs. Q1, OR 0.66; 95% CI 0.48-0.91, p-trend < 0.001) and allergic rhinitis, the latter, however, only for Q2 vs. Q1 (OR 0.84; 95% CI 0.70-1.00, p-trend < 0.001), especially in males [20]. In additional analyses, fibre intake reduced the allergic rhinitis symptoms, including watery rhinorrhoea and dog allergen sensitisation, only among males. However, in a cross-sectional study of 1002 Japanese pregnant women from the Osaka Maternal and Child Health Study, no association between fibre intake and allergic rhinitis was reported [21]. A French cross-sectional study of 26,640 women and 8740 men reported inverse associations between the highest quintile of total dietary fibre compared with the lowest quintile and the asthma symptom scores both among women (OR 0.73; 95% CI 0.67-0.79, p-trend < 0.001) and men (OR 0.63; 95% CI 0.55-0.73, p-trend < 0.001) [22]. With regards to specific sources of fibre, the intake of fibre from cereals, fruit and seeds was most consistently associated with fewer asthma symptoms. Additionally, among participants with asthma, inverse associations were reported between the fibre intake and uncontrolled asthma. In an Australian case-control study of 137 participants with asthma and 65 healthy controls, participants with severe persistent asthma (n = 64) consumed, on average, 5 g/day less fibre as compared to healthy controls (OR 0.94; 95% CI 0.90-0.99) [23].

COPD and Related Symptoms

Eight studies reporting the fibre intake in adulthood in relation to COPD and COPD symptoms were identified, with consistent results of a protective association. A cross-sectional study of 11,897 participants of the Atherosclerosis Risk in Communities (ARIC) study in the US indicated a reduced prevalence of COPD with a higher total fibre intake (Q5 vs. Q1, OR 0.85; 95% CI 0.68-1.05, p-trend = 0.044). Inverse associations were also observed with cereal or fruit fibre but not with vegetable fibre [24]. No interaction with smoking status was observed, although associations were limited to current or ex-smokers. In a Japanese case-control study, high levels of total and insoluble dietary fibre were associated with a reduced risk of COPD
(Q4 vs. Q1, OR 0.49; 95% CI 0.26-0.95, p-trend 0.160 and OR 0.50; 95% CI 0.26-0.94, p-trend 0.174, respectively) [25]. A study of 49,140 cohort members from the Singapore Chinese Health Study examining the association between dietary fibre and new onset of cough with phlegm reported inverse associations with non-starch polysaccharides (Q4 vs. Q1, OR 0.61; 95% CI 0.47-0.78, p-trend < 0.001), fruits (OR 0.67; 95% CI 0.52-0.87, p-trend 0.006) and soy isoflavones (OR 0.67; 95% CI 0.53-0.86, p-trend 0.001) [26]. Moreover, a large cohort study of 111,580 participants from the US Nurses' Health Study and Health Professionals Follow-up Study with long follow-up periods (16 and 12 years, respectively) reported inverse associations between the total dietary fibre intake and newly diagnosed COPD (Q5 vs. Q1, RR 0.67; 95% CI 0.50-0.90, p-trend 0.03) [27]. Inverse associations were also observed with cereal fibre but not with fruit or vegetable fibre. In stratified analyses by smoking status, associations were stronger among current smokers than among ex-smokers. Two cohort studies on dietary fibre intake from Sweden, which used registry data to identify incident COPD cases, confirmed the aforementioned results. The first study included 45,058 men from the Cohort of Swedish Men and reported strong inverse associations with the total fibre intake (Q5 vs. Q1, HR 0.62; 95% CI 0.53-0.71, p-trend < 0.001), mainly in current smokers or ex-smokers but not in never smokers [28]. The second study included 35,339 women from the Swedish Mammography Cohort and evaluated the association between the baseline and long-term total fibre intake and COPD risk; in this study, a high long-term dietary fibre intake was associated with a reduced risk of COPD (Q5 vs. Q1, HR 0.70; 95% CI 0.59-0.83, p-trend < 0.001), mainly in current or ex-smokers. For specific fibre sources, cereal and fruit fibre, but not vegetable fibre, were associated with a lower COPD risk [29]. A cross-sectional study of 702 adults with COPD from the KNHANES evaluated the association between disease severity and dietary nutrient intake; in this study, fibre intake was associated with a decreased severity of airway impairment in elderly men (≥60 years old) with COPD but not in women [30]. Additionally, a cohort study of 1439 participants from Korea studied the relationship between new airflow limitation development, defined as FEV1/FVC < 0.70, and changes to the dietary pattern after a 5-year period; in this study, a 10% decreased intake of dietary fibre was associated with a newly developed airflow limitation (OR 2.71; 95% CI 1.54-4.81) [31].

Lung Function

Six studies reporting fibre intake in adulthood in relation to lung function were identified, with generally consistent findings. The already mentioned study of 11,897 participants from the ARIC study in the US found positive cross-sectional associations between the total fibre intake and lung function (Q5 vs. Q1, forced expiratory volume in one second (FEV1) 60.2 mL; 95% CI 27.7-92.7, p-trend < 0.001, forced vital capacity (FVC) 55.2 mL; 95% CI 18.2-92.3, p-trend 0.001 and FEV1/FVC 0.4; 95% CI −0.1-0.9, p-trend 0.040). Similar patterns were seen for the fibre intake from cereal and fruit sources, while no association was observed for vegetable fibre [24]. Additionally, a recent cross-sectional study of 1921 participants from the US NHANES examined the association between fibre intake and measures of lung function.
According to this study, a low fibre intake was associated with reduced measures of lung function (Q4 vs. Q1, FEV1 82 mL (p = 0.05), FVC 129 mL (p = 0.01), % predicted FEV1 2.4% (p = 0.07) and % predicted FVC 2.8% (p = 0.02)) [32]. Another prospective study of 12,532 adults from the ARIC study reported an increased fibre intake associated with improved lung function when followed up three years after the baseline; the coefficients per increase in one quintile of fibre intake were %FEV1 0.201, p-trend ≤ 0.05 and FEV1/FVC 0.129, p-trend ≤ 0.01, but there was no association with FEV1 or FVC [33]. A prospective study including 5880 participants from the Korean Ansan-Ansung cohort followed for four years indicated a positive association between the fibre intake and lung function among men but not among women [34]. A study among smokers in the US Lovelace Smokers cohort (LSC), with replication in the Veteran Smokers cohort (VSC), identified, among other nutrients, the fibre intake to be significantly associated with a better average FEV1 (LSC 80.9 mL; SE 20.3, p = 0.0032 and VSC 97.8 mL; SE 41.8, p = 0.045) [35]. Finally, in cross-sectional analyses in the aforementioned Australian case-control study of 137 participants with asthma and 65 healthy controls, the fibre intake was positively associated with FEV1, FVC and FEV1/FVC (coefficient per unit increase in fibre intake 0.02 L (p = 0.001), 0.02 L (p = 0.002) and 0.2% (p = 0.035), respectively) and negatively associated with airway eosinophilia (−0.36% (p = 0.005)) among participants with asthma [23].

Asthma and Related Symptoms

Out of seven studies, five were cross-sectional [17-20,22], one was a case-control [23] and one a cohort study [16]. With regards to fibre intake assessment, four studies used food frequency questionnaires (FFQ) [16,17,20,23], and three studies used 24-h dietary recalls [18,19,22], including repeated assessments in the last two studies. Regarding the outcome assessment, asthma was self-reported in all studies, while five studies also included a clinical examination with spirometry, skin prick tests and/or blood sampling [16,17,19,20,23]. All studies adjusted for age and sex, while most studies adjusted for body mass index (BMI) or energy intake, socioeconomic factors and smoking. Overall, the articles were assigned a moderate-to-serious risk of bias (Figure 2a).
Rhinitis and Related Symptoms

Both studies had a cross-sectional study design [20,21]. To assess the fibre intake, both studies used an FFQ [20,21]. Allergic rhinitis was self-reported in both studies and assessed based on the symptoms and, additionally, nasal endoscopy and serum IgE levels in one study [20], while the assessment was based on drug treatment in the previous 12 months in the other study [21]. Both studies adjusted for major potential confounders, including age, BMI, socioeconomic factors and smoking. Overall, the articles were assigned a moderate-to-serious risk of bias (Figure 2b).

COPD and Related Symptoms

Out of eight studies, five were cohorts [26-29,31], two were cross-sectional [24,30] and one a case-control study [25]. Fibre intake was assessed using FFQs in all but one study [24-29,31], with repeated assessments in three studies [27,29,31], and one study used a 24-h dietary recall [30]. COPD and related symptoms were assessed using self-reported questionnaires in three studies [24,26,27], spirometry in four studies [24,25,30,31] and registries in two studies [28,29] (one study used both self-reported and spirometry-diagnosed definitions). All the studies adjusted for major potential confounders, including age, sex and smoking, and most adjusted for BMI or energy intake and socioeconomic factors, while some additionally adjusted for lifestyle (physical activity and alcohol intake) and other dietary factors. Overall, the articles were assigned a moderate risk of bias (Figure 2c).

Lung Function

Out of six studies, there were three cohort [33-35] and three cross-sectional studies [23,24,32]. Fibre intake was assessed using FFQs in all studies, apart from one study that used repeated 24-h dietary recalls [32]. Lung function was measured by spirometry in all the studies and additionally using eNO and combined bronchial provocation and sputum induction in one study [23]. Most of the studies adjusted for major potential confounders, including age, sex, BMI, total energy intake, smoking and socioeconomic factors. Overall, the articles were assigned a moderate risk of bias (Figure 2d).

Strength of the Evidence

In the present study, we did not include intervention studies in the eligibility criteria. According to the WCRF criteria, and based on the available evidence from observational studies, the overall strength of the evidence was graded as limited/no conclusion (insufficient) with regards to asthma and rhinitis, probable (moderate) for COPD and limited/suggestive (low) for lung function.
Discussion

This review sought to explore whether there is a protective association between dietary fibre intake and asthma, rhinitis, COPD and lung function and, if so, which sources of fibre are the most beneficial. The findings show that the current evidence from observational studies is limited and inconclusive with regards to asthma and rhinitis. There is suggestive evidence that dietary fibres may be associated with improved lung function in the general adult population, with very few studies reporting fibre intake in high-risk populations. Moreover, there is probable evidence for a beneficial role of fibres in the risk of COPD, which is considered strong evidence according to the WCRF criteria.

Based on the intake level observed to protect against coronary heart disease, an adequate intake of total fibre has been set at 30-35 g/day and 25-32 g/day for adult men and women, respectively [36], and 10-40 g for children and adolescents, depending on age, gender and energy intake [37]. The mean fibre intake was reported to be below the recommended levels in all the included studies, and only subjects in the highest quartile/quintile of fibre intake met the recommendations. Additionally, geographical differences in the amount of total dietary fibre intake were observed, with lower intakes reported in studies from countries in Asia, followed by the US, and higher intakes in studies from Australia and Europe.

With regards to different types of diets, while the fibre content of animal products is scarce, plant-based diets include fibre-rich foods, such as cereals, fruits, vegetables and nuts, in abundance. We observed a difference in the sources of dietary fibres in the studied populations as well, reflecting different dietary patterns; however, this was not consistently reported in all the included studies. The current evidence is not conclusive about which fibre-rich foods are most beneficial for respiratory health; the grain sources of dietary fibre have been shown to be more beneficial compared to fruits and vegetables, but it is unclear if this is due to their higher fibre content, greater amounts consumed, less probability of measurement error, displacement of high-energy foods and overall diet quality, or associated lifestyle factors, such as greater levels of physical activity [36].

We were able to identify only one study on maternal fibre intake during pregnancy in relation to allergic disease in the offspring. Although these results do not support an association between prenatal exposure to dietary fibre and allergic disease, the association is biologically plausible. In two recent birth cohort studies, the faecal concentration of SCFAs during pregnancy was inversely associated with asthma and allergic rhinitis in the offspring up to 6 years [38,39]. In one of the studies, which was part of a randomised controlled trial and, therefore, not included in the selection criteria of our review, the fibre intake in pregnancy was positively associated with the total SCFAs but not with any of the atopic outcomes in the offspring [38]. It was therefore hypothesised that dietary fibres contribute to offspring disease risk only in combination with the relevant intestinal microbes. These findings, supported by studies in animal models [39,40], require further replication in observational studies with a larger sample size and can potentially pave the way to microbiome-targeted interventions to prevent asthma and atopy in the offspring [9,41,42].
Additionally, the two selected studies on dietary fibre intake during childhood showed inconsistent results. However, previous reviews have reported protective associations between fruit and vegetable consumption, or dietary patterns rich in fruits, vegetables, legumes and cereals (such as the Mediterranean diet), and asthma or wheeze among children [43][44][45]. Moreover, a protective association between whole grains and asthma among children has been reported [46]. These associations may be partly explained by the concomitant intake of dietary fibres. Dietary fibres can potentially improve airway inflammation by promoting anti-inflammatory cytokines, improving glucose control and modulating the gut immunologic response [10]. On the other hand, asthma is a heterogeneous disease, and asthma development is a dynamic process, characterised by remission, relapse and a new onset of symptoms from childhood up to adulthood [47]. A reduced fibre intake has been observed among adults with severe asthma and has been associated with increased eosinophilic airway inflammation [23]. Lung function growth may be impaired not only during early childhood, since it continues throughout adolescence and early adulthood [48]. Thus, a critical period of development is missed by the current body of evidence, which addresses dietary fibre intake and asthma and lung function impairment mainly in adult populations. A paucity of studies addressing asthma severity, different asthma phenotypes and lung function among participants with asthma has also been identified.

We were able to identify only two studies on fibre intake in relation to allergic rhinitis, with inconsistent results, and no study on nonallergic rhinitis. Allergic rhinitis is associated with sensitisation to inhalant allergens, whereas nonallergic rhinitis is a nasal mucosal inflammation without systemic signs of allergic inflammation, associated with exposure to irritants, hormonal dysfunction and specific medications [49,50]. Regarding allergic rhinitis, a high fibre intake in a murine model showed less eosinophil infiltration, less goblet cell metaplasia in the nasal mucosa and decreased Th2 cytokines compared to a low intake [51]. In a study among children, adherence to the Mediterranean diet has been inversely associated with allergic rhinitis [52]. Further research is needed on dietary fibre intake and rhinitis outcomes, both among children and adults.

In our review, we identified eight studies on fibre intake in relation to COPD and related symptoms reporting consistent results of a protective association, which has also been suggested by two previous systematic reviews of studies on fibre intake in relation to COPD, partly based on the same studies [53,54]. The association with COPD may be explained by the antioxidant and anti-inflammatory properties of dietary fibres, including lower levels of C-reactive protein and proinflammatory cytokines and higher levels of some anti-inflammatory cytokines, such as adiponectin [54]. In addition, high dietary fibre has been suggested to attenuate innate immune-mediated systemic and pulmonary inflammation through the presence of a gut-liver-lung axis [55]. The stronger inverse association with COPD among current or ex-smokers may be explained by the higher oxidative stress in these groups, as well as the continued endogenous production of reactive oxygen species even after smoking cessation.
Among non-smokers, the mechanisms related to COPD development may differ from those in current or ex-smokers and relate more to genetic predisposition and environmental exposures [28]. Among smokers, lung function improved with an increased intake of dietary fibre, further supporting the importance of the gut-liver-lung axis in COPD [41]. On the other hand, a protective effect of fibre intake on lung function in both smokers and non-smokers has also been observed [24]. In non-smokers, fibre intake may protect against the deleterious effects of indoor and ambient air pollutants.

Considering the sources of dietary fibres, the results from the included studies suggest that fibres from cereals and fruits, but not vegetables, are inversely associated with the risk of COPD. It has been suggested that the similar protective associations of a higher intake of cereal fibre and of total dietary fibre may be due to the high dietary fibre content of cereals [56]. Nevertheless, the lack of an inverse association with vegetables has been suggested to be related to the higher uptake of heavy metals, especially cadmium and lead, from vegetables compared to fruits [29].

The strength of evidence is primarily related to the methodological quality of the included studies. Most of the studies used questionnaires to assess dietary fibre intake, which might have led to some misclassification of the exposure, albeit nondifferential with regard to the outcomes of interest. Although the absolute fibre intake may be difficult to estimate by FFQs, the ranking of participant intakes is possible and sufficient in this type of analytic epidemiologic study [57]. In studies where multiple exposures were studied or fibre was part of an overall dietary pattern, the methods used to assess the exposure were less well described, which hampers systematic reviews and meta-analyses [58]. In line with this, an increased risk of selective reporting could be inferred from studies reporting associations with multiple outcomes and analyses among different subgroups [13]. The risk of publication bias for studies finding no associations between fibre intake and respiratory and atopic outcomes should also be considered. Although most of the studies extensively adjusted their analyses for major potential confounders and some additionally included dietary and lifestyle factors, such as smoking, physical activity and alcohol consumption, residual confounding cannot be completely ruled out. In this systematic review, we were able to assess the strength of the current evidence based on observational studies and highlight specific areas where further research is needed.

Conclusions

In conclusion, the current evidence from observational studies on dietary fibre intake is probable (moderate) for an inverse association with COPD and limited/suggestive (low) for an association with lung function in the general adult population. In contrast, there is insufficient evidence for an association with asthma or rhinitis in adults. Thus, further research is needed with regards to asthma, rhinitis and lung function in adults, as well as among children.
Analyzing the Impact of Using Interactive Animations in Teaching

This study intends to measure the impact of interactive animations on students' performance. Two courses from Subotica Tech were included, the subjects "Analog and Digital Electronics" and "Microcontrollers". The experiment lasted over a period of three years, and it involved the formation of two groups in every academic year. Both groups' members participated in traditional frontal teaching, but the experimental group could use interactive Flash animations built from selected parts of those courses as a supplementary tool. At the end of the semester, the exam marks were analyzed with a two-sample t-test. The results show that learning with properly created interactive animations could have positive effects on most students' academic performance.

Introduction

In the era of modernization in the teaching process, when the use of novel information technologies aims to achieve easier, faster and more efficient knowledge transfer in education, the application of interactive animations has become more and more important. The question arises as to what the reasons are which have made interactive animations a vital part of modern e-curricula, and whether there is empirical evidence to support claims that using multimedia and interactivity in an e-curriculum has a positive impact on students' cognitive development and academic achievement. In the first part of this paper, the authors analyze the characteristics of interactive animations. The second part presents some research done with interactive animations developed at Subotica Tech. The e-contents are compiled from selected parts of the courses "Analog and Digital Electronics" and "Microcontrollers" at Subotica Tech.

The thorough investigation by Sekular and Blake [1] into how students take in information and how they learn pointed out that the learning process takes place primarily by way of sight, and since it is the most vital of our senses, it is also the most highly developed one. It enables a person to gather information from one's surroundings, analyze it and then decide how to proceed based on the deduced data. In terms of teaching, it is by seeing that students will best grasp a complicated string of steps, as it helps transform a vague idea into an image in their brains. Kraidy [2] stated that, if the aim is to increase the amount of information to be processed by students within a set time frame, then giving them visual information to work with will help them reach this goal.

Graphical representations are defined as visual aids that act as a supplement to any other textual information and will concentrate learners' attention [3]. Such representations will have maximum effect when accompanying some learning material that is (relatively) new to the learner [4]. This is especially the case with computer animation that is designed to aid long-term learning in the form of focusing learners on certain objects in the beginning.
The research of Rieber [5] showed that abstractions connected with time transitions in a process can be reduced by implementing animations to convey ideas and processes that change over time. The dual-coding theory by Paivio [6] [7] offers an explanation as to why graphics are so effective: retaining memory over a long time is made easier if a combination of verbal and visual cues is used. This makes animations a distinctively significant support for visualizing material for long-term memorization. Animation and narration further support dual-coding [8].

What makes animations stand out is movement, as opposed to static, still images, and this demonstrates the various relationships within and along a certain process. According to Goldstein, Chance, Hoisington and Buescher [9], movement will be remembered longer than static images. According to Gordin and Pea [10] and also Brodie, Carpenter, Earnshaw, Gallop, Hubbold, Mumford, Osland and Quarendon [11], visualization is a vital part of the acquisition of scientific topics, since important relationships between concepts will be pointed out for learners.

Research results have demonstrated that animations are more effective learning tools than static images, and this was further supported by lesson plans incorporating lectures as well as different learning inputs [12]. Based on the dual-coding theory [7], it may be asserted that learning will be the most effective if there are lectures alongside animations, since together they form a base of reference that helps learners fully understand the knowledge conveyed through the animations. Lectures will cue the students, but the actual studying happens through the animations [13].

Interactive Animations
One of the tendencies in education is the continually growing amount of learning content which must be acquired by the student. Almost every generation's curricula are extended by a certain amount of new, updated or revised material. With this swelling of learning content, another issue arises, namely that the time intended for learning this amount of content grows ever shorter for each subsequent generation. Besides that, students are no longer interested in the foundations of some complex system and how it is put together; rather, they want to know how the system works and how it can be managed. In accordance with these tendencies, educators have been searching for learning tools which can help students acquire knowledge.

As animations are able to unambiguously portray changes over time (temporal changes), they are extremely suitable for use in teaching processes and procedures. Animations are applied to show dynamic content, and they reflect alterations in position (translation) as well as form (transformation), which form the basis of learning this kind of topic [14].
Unlike static pictures, animations show temporal changes directly (instead of indirectly, through awkward auxiliary markings such as arrows and motion lines). The application of animations, as opposed to static graphics, makes these extra markings unnecessary, thus stripping down the displays and making them attractive, lively and easily understandable [15]. Furthermore, there is no need for the learner to process these auxiliary markings and the changes they try to indicate. Interpreting the markings and drawing the inferences may actually surpass the level of graphical skill that the learner possesses. Yet with animations, the displays immediately show all information concerning the changes, so no extra mental depiction is required.

Learning can be facilitated by animations in two ways. On the one hand, their function is to affect the learner, raise their interest and keep up motivation. The entertainment industry implements this same function in its animations. On the other hand, animations also have the function of facilitating comprehension and memorization of a given content. The knowledge-building process is thus supported, and this cognitive function is essential to effective learning.

Superficially, it may seem that animations are the perfect candidates for presenting dynamic content. Nevertheless, there is no unambiguous research evidence supporting this. Some researchers have compared how effective static and animated displays are in education across a number of content domains. Although there have been positive results where animations have proven rather effective, these results have been countered by other investigations that have found no positive, and even negative, effects of using animations. On the whole it is safe to say that animations are not by definition more effective than static graphics. Instead, the specific features of certain animations and their method of application are crucial to the kind of effect they will have on knowledge acquisition.

Do Animations Make Learning Faster?
Animations play an important role in computer-based learning environments. So far, however, it has not been sufficiently resolved under which conditions and in which respects animations actually lead to better learning outcomes. Well-designed animations are likely to be a real asset to the teacher. They will speed up the learning process and make the material easier to grasp and memorize. This especially comes in handy when the teacher is trying to explain a difficult subject. The question arises: why is a subject perceived as difficult? It may be because it requires a certain amount of imagination. For example, in our animations we visualized clock signals, the values and shapes of the input and output voltage signals, and the states and changes of the microcontroller's internal registers. With the help of computer animations both the teaching and the learning process will be made less difficult, will take less time and will be livelier.
However, what then explains the fact that animations are sometimes not as educationally effective as one would expect them to be? A possible answer is that students are unable to "compute" the information seen in the animation adequately. If a complex subject is presented with animation, the result may be an equally complex animation, leaving students feeling overwhelmed. This is supported by the role of visual perception and cognition in human information processing. The perceptual and cognitive systems of humans have their limits for information processing. Once the presented animation reaches or oversteps the learners' information-processing limits, the learning process may no longer be effective. Negative effects also come forward if the new information is presented in the animation faster than the learner is capable of processing it effectively.

Replacing current static graphics with animations without careful consideration is not likely to result in improved learning; instead, animations should be accompanied by textual explanations, and the learner should have control over the speed of the animation. Such user-controllable animations will enable learners to "customize" the animations by varying the playing speed and direction, the labels and the audio commentary to suit their own personality. Controllable animations can be realized with interactive animation. The interactivity within the animation could mean setting one's own playing speed and walk-through, different amounts of auxiliary explanation, etc.

Besides the visualization of the curriculum, this kind of animation offers another advantage: the possibility of modeling and simulating systems. This means that knowledge acquisition can also take place by changing the model's parameters, or otherwise experimenting with the system. So, when using interactive simulations, besides the previously mentioned advantages, some new ones can be defined:
• The model offers the possibility of analyzing and experimenting with systems which cannot be handled in real life.
• The model enables studying certain fast occurrences in a much slower mode, or time-consuming events in a much shorter time span than in reality.
• The model makes it possible to focus on the vital characteristics of the learning content being taught.
• The model offers the users the freedom of experimentation without any consequences.

The Advantages of Flash Animations
The developing environment provided by the package Adobe Flash CS3 (and its prior versions) was used by the authors as the tool of choice for creating these interactive animations. In a simplified form, this software tool is an application for creating vector sketches and animation, with the option of adding interactivity. Naturally, the Flash developing environment offers many more options, but it also includes very straightforward ways of creating animations. The fact that it is rather easy to create interactive animations is a crucial aspect, as in such a case it is not a prerequisite for the subject teacher to be highly educated in information technologies. This type of animation can be used for presenting the material in theoretical classes, but also for creating a fully electronic curriculum for consolidating the material previously taught in practice, as well as for independent work outside classes.
Practice shows that creating effective interactive animations still requires the close cooperation of the teacher and the expert in Flash technologies. Successful acceptance of the animations by the students primarily depends on the course teacher. It is their task to determine the following:
• the goals that are to be achieved with the animation,
• the content that is to be shown,
• which elements of the learning material are to be represented statically (with an image), and which will take the form of animation or interactive animation (simulation),
• the guidelines (design of the layout, which controls are to be used, the user's options within the system, etc.) based on which the application will be developed.

The task of the "Flash expert" is to realize the requirements of the teacher as well as possible. The programmability of the animation thus comes in really handy for the expert. When developing a Flash application, one of the programs that may be used is ActionScript (the current version is 4), an object-oriented programming language. With the help of this language every element of the animation (lines, colors, sound, etc.) can be controlled, calculations can be made using the entered parameters, and finally the results can be presented and actually used to draw new objects or their trajectories, as well as to communicate with the server, among others.

It is safe to say there is no such task in creating an animation that an experienced Flash programmer cannot solve. In fact, this is the real advantage of this tool, as it can meet all requirements irrespective of school age or learning material. Besides the listed advantages of a Flash animation, it is also rather easy to distribute the application. There are two commonly used formats for saving these animations: the executable (*.EXE) format, which starts in its built-in player, and the standard (*.SWF) format for playing in a web browser or in the FlashPlayer (which can be downloaded easily from the Internet). What is characteristic of these two formats is the small file size, which is a vital factor when distributing the application via the Internet. Another benefit of the Flash animation is that it is a single file: there are no separate sound files, and the images do not comprise a separate module. All this ensures that there is no special installation procedure, only a file to be saved and started, which makes it an accessible program for even somewhat computer-wary users.

Besides these technical advantages, with the use of adequate design techniques the Flash-type animation can gain further benefits. One of those benefits is the result of how a Flash animation is developed: most often the parts of a Flash animation are drawn, and there is little use of images from the real world. The advantage of drawing, i.e. of creating vector objects for animation, is that the drawn objects are represented in a simpler form, with less detail than, for example, if they were shown in a bitmap format. This means that once the educator has abstracted the material for the students, there is yet another simplification of the learning material. There are also other design techniques which can lead to a more effective learning process, for example:
• Using the "inserting and removing fragments" technique. The complexity and information load of the animation interface can be regulated by inserting or removing objects or pieces of information from it.
• Using the "Dimming fragments" technique.With this technique one can differentiate between important parts of the animation and those which serve as additional information. The dimmed elements look like as if they are melting into the background. • Using background (blurred) animation to attract and keep user's attention on the interface. Also, in these projects the following design aspects were used: • Minimize the number of visual elements, thus making it easy to follow the presented process. • Minimal amount of lateral information used solely for presenting the essence as simply as possible. • " Data entry by keyboard was not incorporated.The reason for this is that the data entry option does not always mean an advantage in the learning process: they may cause the user to be preoccupied with trying to crash the application by entering invalid formats and values. As a result of these design techniques, the system will show a straight-forward form, using only the vital details, leading directly to a better and easier understanding of the model, and the user cognitive load is kept on adequate (i.e.low) level. Are these the only reasons why the animation should be used in teaching?No, they are not.There are problems which occur in educational communication called information barriers, and the Flash animation will yield some solutions to this problem.Some of these barriers can be classified in the following way: • perceptual barriers -each subject in the communication process feels and interprets events occurring to them differently, depending on their psychological, cultural and social status, • psychological barriers -the same word or event will have a different meaning for different persons, • social barriers -these barriers become apparent by the different social statuses of the subjects in the educational communication, • cultural barriers -these arise in communication due to the different cultural backgrounds of the subjects participating in the communication process, • semantic barriers -barriers of this type appear when interpreting written contents, speeches, images, and other, thus the way the message is read will change the content itself, • media barriers -this information barrier occurs when the there are different communication media used on educational communication.It is well-known fact that each carrier has their own markings, which may be helpful as well as distracting in communication, • physical barriers -informational barriers come up in educational communication when transferring the message, i.e. in the channels of connection. How and where do information barriers occur when there are PCs used in the teaching process?Some of possible sources of problems are described below: • experience shows that old programs which exclusively use the keyboard for interaction will be accepted to a lesser extent due to the fact that using the keyboard is more complicated than using the mouse, • programs (simulations) designed using too much detail will be harder to accept because first the users have to make out what is on the screen and only then move on to the explanation of the modeling system, • if there are too many options for simulation set up, result saving, parameter input, etc, where the users might 'become disoriented', then, according to Murphy 's Law, they probably will. 
Practical Applications
The following section describes interactive animations which have been successfully in use as an auxiliary teaching tool at Subotica Tech - College of Applied Sciences [16]. Unfortunately, the advantages of the animations as described before are difficult to convey on paper only with the help of images. The applications have been designed as interactive tutorials presenting the functioning of some of the basic systems of analogue and digital electronics (Figure 1) and microcontrollers (Figures 9 and 10). For the Microcontrollers course, two e-contents (interactive Flash simulations) were developed. They present exercises for three out of fourteen lessons, but these three lessons count as "difficult"; for example, they cover the following themes: using the microcontroller's built-in timer/counter in different modes, setting and using interrupts, communication through the serial port, controlling analog-to-digital signal conversion (and vice versa), etc. The e-content for Analogue and Digital Electronics comprises altogether 19 simulations classified into 5 groups/exercises. Through these simulations the students can practice approximately 40% of the curriculum's theory. For example, "Exercise 1" contains simulations on the topics: sources of alternating signals, voltage splitter, passive voltage adder, RC low-pass filter and RC high-pass filter.

Figure 1 shows the screenshot of Exercise 3 and the accompanying simulation entitled "Pojačavač sa zajedničkim" (Common emitter amplifier). The design of the application shown in this image is followed through in the rest of the simulations as well: the upper left corner contains the sketch of the system, below it are the system parameters which can be altered in the simulation, while the "oscilloscope" is situated on the right side of the screen, showing the change of the signal over time. In this part of the application, clicking on the link labeled "Objašnjenje" (Explanation) brings up the theoretical background in text form.

Below is a detailed description of the content and functions of the elements on the screen:
1. Links for transition to the next/other simulation within this exercise.
2. Sketch to be simulated. The parameters listed next to the components change depending on the values of the checkboxes under the sketch.
3. Representation of the shape of the voltage signal at the input and output. The part of the image marked with arrow 3 shows the shape of the output voltage, while the one marked 4 shows the input voltage. These signal shapes are constantly redrawn. The lighter point on the line shows the current voltage value. A break in the line is the consequence of a change in the RC components on the sketch during the simulation.
4. Buttons for starting and stopping the simulation.
5. The button for calling up the background explanation of how the sketch functions.
6. The list of equations used for calculating the necessary parameters of the sketch and the results of the calculation/estimation.
7. The return button leading to the introductory page where the exercises can be chosen.
8. Values of the sketch components. These parameters can be changed by choosing values from the checkboxes. Each change affects the listing of calculated values based on the new parameters and the shape of the signal at the output (the "upper channel of the oscilloscope"); a minimal sketch of this kind of calculation is given after this list.

The following image (Figure 2) shows the simulation "Decade counter", with the help of which students can learn the logic of the synchronous counter.
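To make the kind of computation behind these exercises concrete, here is a minimal Python sketch (not the authors' ActionScript) of what the RC low-pass filter simulation from Exercise 1 calculates: given the user-selected R and C values and a sinusoidal input, it returns the output amplitude and phase that the on-screen "oscilloscope" would draw. The component values and the input signal below are illustrative assumptions, not values taken from the application.

```python
import math

def rc_lowpass_output(r_ohm, c_farad, amplitude_v, freq_hz):
    """Steady-state response of a first-order RC low-pass filter to a
    sinusoidal input: gain |H| = 1 / sqrt(1 + (wRC)^2), phase = -atan(wRC)."""
    w = 2.0 * math.pi * freq_hz          # angular frequency
    wrc = w * r_ohm * c_farad
    gain = 1.0 / math.sqrt(1.0 + wrc ** 2)
    phase_rad = -math.atan(wrc)          # output lags the input
    return amplitude_v * gain, phase_rad

# Illustrative parameters: R = 50 kOhm (as selected from the combo box),
# C = 100 nF, 1 V sine input at 100 Hz.
v_out, phi = rc_lowpass_output(50e3, 100e-9, 1.0, 100.0)
print(f"Output amplitude: {v_out:.3f} V, phase: {math.degrees(phi):.1f} degrees")
```

Each time the user picks a new component value, the application recomputes quantities of this kind and redraws the output trace, which is the behaviour described for element 8 above.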
All simulations in this application are entirely controlled by mouse. Changing the parameters is done with the help of combo boxes and the predefined values they contain. In this way the application is protected from irregular data. It is important to mention the following advantages of these simulations:
• it is not necessary to actually build an electric circuit in order to see how it works,
• changing the components in the system only takes a few clicks in the checkbox,
• it is possible to show the state of important values continuously, as done by an oscilloscope.

The following few paragraphs present some ActionScript (version 2) programming code, which shows how one can input data from the combo box, calculate the output voltage, and draw the form of the voltage signal as it is done on a real oscilloscope. The combo box is presented as an object on the main animation scene. Figure 3 shows a combo box which is used for the input of predefined resistor values. When the user selects a value from the "r" combo box's list, the code is executed. The first line of the code assigns the currently selected item's label (here the string "50k") to the 'r1' variable. The 'r1' variable is the label in the scheme (see Figure 5, dashed-line rectangle, to the right of the R resistor), so changes of the values in the combo box are also displayed on the scheme. The second line of the code assigns the value associated with the currently selected item (the numerical value 50000 for the "50k" string) to the 'r' variable. The scheme has its own ActionScript code, which uses the 'r' variable for calculating the new output value of the voltage. Because this code changes several global variables, other movie clips on the scene which also use those variables are affected by it. In this way, for example, changes in the resistor value propagate to the redrawn output signal.

Figure 9 shows one of a series of seven interactive simulations that are part of the e-curriculum which had been developed for the Microcontrollers course. The simulations present the i8051 microcontroller's timer/counter hardware, the setting and use of interrupts, and the application of the special forms of the ADD and MOV instructions. Figure 10 presents one of the four interactive simulations created specifically for the Microcontrollers course. The simulations refer to the practical use of the i8051 microcontroller.

Experiments and Analysis
For the purpose of this study the following research question was specified: what is the impact of the interactivity of the animations on learning? The null hypothesis is defined as follows: interactive animations have no significant positive impact on studying the "Microcontrollers" and "Analog and Digital Electronics" courses. In order to obtain answers to the research question, the authors compared the final exam scores in the "Analog and Digital Electronics" and "Microcontrollers" courses independently, where the animations were used as supplementary tools for learning and practicing after class.
Participants and Data Collecting Method
The data acquisition was done at Subotica Tech - College of Applied Sciences over a three-year period. It involved second-year students from two undergraduate programs: the Electrotechnical Engineering major (EE), where these two courses were obligatory, and the Computer Science major (CS), where these courses were optional. The number of participants for the first course (Analog and Digital Electronics) over the period of 3 years was 441 students, 56 female (12.7%) and 385 male (87.3%). The second course's participants (Microcontrollers) were the same students from the EE major, while from the CS major there were some old students and some new ones (those who had not selected the first course). The composition of this group was 464 participants, 58 female (12.5%) and 406 male (87.5%); see Table 1. Most participants, 98.5%, were between 18 and 20 years old; the remaining percentage is represented by a few students whose ages were between 20 and 30. In these 3 years, at the beginning of the semesters (the first course was in the fall and the second in the spring semester), the students were divided into two equal-sized groups, the control and the experimental group. The group members were chosen randomly, and only one condition had to be satisfied by the experimental group members: to have the possibility of accessing the web application and the simulations from home. If this condition was not satisfied, the student automatically became a member of the control group.

After forming the groups, access to the web application was enabled only for the experimental group. There was no additional motivation for the students. All participants attended face-to-face (f2f) classes of these two courses, which were taught by the same lecturer presenting identical material. This further strengthens the consistency of the comparisons.

The web application collected the following data from the users:
1. how many times he/she logged on to the system to use the e-content,
2. how much time he/she spent using each particular simulation.

Students who logged on only a few times and spent less time than the authors foresaw were assumed not to be using the system in an adequate mode; they were not counted as members of the experimental group and were transferred to the control group (for details see Table 1). Ineligibility meant that the number of logins was less than half of the available exercises, and the time spent in the system was less than 2 minutes per exercise.

The authors took as the null hypothesis that the two groups would have the same average mark in both courses. The alternative hypothesis claims that the experimental group would achieve better results in both courses. The data were analyzed with a one-sided t-test, assuming that the variances of the two samples are different. Because one course was in the fall semester and the second one in the spring semester, the analysis was done twice a year at the end of the semesters, independently for both courses. From the presented data, the following conclusions can be drawn:
• In 4 cases out of 6 we can reject the null hypothesis and say, at the 95% confidence level, that those experimental groups achieved better results on the exam than the control groups.
• In two cases there is no reason to reject the null hypothesis.
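As an illustration of the test used here, the following is a minimal Python sketch of a one-sided two-sample t-test with unequal variances (Welch's test). The mark lists are invented placeholders, not the study's data, and SciPy >= 1.6 is assumed for the `alternative` argument.

```python
from scipy import stats

# Invented exam marks for illustration only (not the study's data).
experimental = [8, 7, 9, 6, 8, 7, 9, 8, 10, 7]
control = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]

# Welch's t-test (equal_var=False) with the one-sided alternative that
# the experimental group's mean exceeds the control group's mean.
t_stat, p_value = stats.ttest_ind(
    experimental, control, equal_var=False, alternative="greater"
)

# Reject the null hypothesis of equal means at the 5% significance level.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, reject H0: {p_value < 0.05}")
```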
The results show evidence that interactive simulation contents can be very effective tools in the learning process. They can deliver information in a very attractive way, which can also be advantageous when assembling curricula for students who have different skill levels and learning styles. Besides that, they can help learners to understand scientific topics by presenting important conceptual relationships. It is also important that simulations enable students to become acquainted with the presented system and make changes to its parameters with no additional costs or risks. But only well-designed animations may help to ease and shorten the learning process, and only with them, through play and experimentation, can the learning process become more interesting [17] [18]. The students' answers from the questionnaires show that not every simulation is accepted in the same manner. For example, the third e-content (Figure 10) was given lower grades and worse comments than the other two. The reason for this could be the themes presented in the simulation, because it does not contain spectacular or experimentation options. The design/look of the animation also received worse marks from the students. Future research should also investigate how effective the interactive animations are when the users have different learning styles.

Various studies focusing on the effectiveness of learning with the help of visualization point out that, in order for the animation to be well accepted [19] [20] [21], the following have to be kept in mind:
• positive effects in learning can only be achieved in topics that are dynamic in character,
• an exaggerated multitude of colors in the animation will have the exact opposite effect,
• it is important for the application to contain an optimal amount of information.

Due to the lack of a standard for creating successful visual applications [22], experience gained from well-accepted electronic materials may serve as guidelines for defining a methodology which, if applied in the design of animations and simulations, will lead to greater effect and efficiency in the learning process [23].

However, the results also show a tendency for the difference between those learners who had used the animations and those who had not to decrease. Is this because there is an increasing number of such and similar e-curricula available to students, so that this kind of attractive multimedia presentation no longer motivates students as it used to; or was it simply a case of students from the control group getting hold of the animations and using them in their learning process? Unfortunately, the questionnaire filled in by the students at the end of the semester failed to provide definitive answers to this question. The questionnaires show that students were on the whole satisfied with the applications.

A number of studies indicate that the user's performance is much better if the teaching methods are matched to the user's learning style [24]. Designing the animation's interface and contents to match the students' preferred learning styles could lead to a more effective learning process. For example, according to the Felder-Silverman [25] learning style model, animations containing a lot of visual elements, such as pictures, diagrams, flow charts, etc.,
are preferred by students with a visual learning profile, while written and auditory explanations are effective with the verbal type of student. To mention another example: students with an active profile prefer the simulation (interactive animation) which allows experimenting with the system parameters.

Figure 1: Representation of the exercise "Common emitter amplifier"
Figure 2: Representation of the exercise "Decade counter"
Figure 3: Input option via combo box
Figure 4: Source code for the combo box's onClipEvent event
Figure 5: Scheme of the RC low-pass filter
Figure 7: Movie clip of the oscilloscope drawing beam
Figure 8: Appearance of the drawing beam in the oscilloscope movie clip
Table 3: Significance of differences between the two groups
2018-12-11T11:02:11.599Z
2012-03-01T00:00:00.000
{ "year": 2012, "sha1": "82e8830d52a9adfd8d230a367640cee33aa5c72d", "oa_license": "CCBYNC", "oa_url": "https://univagora.ro/jour/index.php/ijccc/article/download/1430/404", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "82e8830d52a9adfd8d230a367640cee33aa5c72d", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
52941248
pes2o/s2orc
v3-fos-license
Do changing medical admissions practices in the UK impact on who is admitted? An interrupted time series analysis

Introduction Medical admissions must balance two potentially competing missions: to select those who will be successful medical students and clinicians and to increase the diversity of the medical school population and workforce. Many countries address this dilemma by reducing the heavy reliance on prior educational attainment, complementing this with other selection tools. However, evidence as to what extent this shift in practice has actually widened access is conflicting. Aim To examine if changes in medical school selection processes significantly impact on the composition of the student population. Design and setting Observational study of medical students from 18 UK 5-year medical programmes who took the UK Clinical Aptitude Test from 2007 to 2014; detailed analysis on four schools. Primary outcome Proportion of admissions to medical school for four target groups (lower socioeconomic classes, non-selective schooling, non-white and male). Data analysis Interrupted time-series framework with segmented regression was used to identify the impact of changes in selection practices in relation to invitation to interview to medical school. Four case study medical schools were examined, looking at admissions within each of the four target groups. Results There were no obvious changes in the overall proportion of admissions from each target group over the 8-year period, averaging at 3.3% lower socioeconomic group, 51.5% non-selective school, 30.5% non-white and 43.8% male. Each case study school changed its selection practice in decision making for invitation to interview during 2007-2014. Yet, this within-school variation made little difference locally, and changes in admission practices did not lead to any discernible change in the demography of those accepted into medical school. Conclusion Although our case schools changed their selection procedures, these changes did not lead to any observable differences in their student populations. Increasing the diversity of medical students, and hence the medical profession, may require different, perhaps more radical, approaches to selection.

Strengths and limitations of this study
► We were able to assess if changes in selection processes for medical school resulted in increased diversity of the student population, a focus of the UK widening access strategy.
► A strength was that over 24 000 admissions were considered across 18 UK medical schools, with linkage to admissions test scores. The use of the case study approach was advantageous as it allowed a more nuanced account of the process of medical selection than is available from aggregated data.
► A limitation was that we were not able to consider any underlying secular trends that may have impacted on admissions.

Introduction
Selection into medicine is a complex process with multiple, potentially competing priorities. Medical schools want to select applicants who will be successful both in the short term, as medical students, and in the longer term, as practising clinicians. However, in many countries, medical schools are also under strong political pressure to increase the matriculation of certain under-represented groups. [1][2][3][4][5] The rationale for this is twofold. First, to address societal issues of social justice and social mobility in terms of encouraging people from all backgrounds into higher education rather than birth dictating one's social and economic outcomes in life. [6][7][8] Second, training a diverse healthcare workforce is considered essential to improving healthcare quality by ensuring doctors are as representative as possible of the society they serve (in order to provide the best possible care). [9][10][11] There is clear evidence that significant under-representation of some social, cultural and ethnic groups in medical schools and medicine worldwide persists despite a variety of national initiatives (eg, quota systems and political imperatives) and local activities (eg, pipeline programmes) to ameliorate such under-representation. [12][13][14] Moreover, these goals of predictive validity and increasing the diversity of the medical school population are potentially conflicting because prior academic attainment, which until relatively recently has been the main selection 'tool' for medical education, is strongly influenced by factors associated with demographic disadvantage, such as ethnicity and/or socioeconomic class. 8 15-18 In other words, certain groups face inequalities in preuniversity education that then significantly limit their chances of obtaining the necessary grades/grade point average to be eligible for medical school. The precise groups that are educationally disadvantaged vary by country.
In the UK, disadvantage related to socioeconomic background, status or 'class' is the main issue, [19][20][21] whereas ethnicity/race is the foremost issue in other countries. [12][13][14] Medical schools have tried to address this dilemma by redesigning their selection processes. In the UK context, most medical schools have shifted from relying solely on prior academic attainment as an indicator of capability to using combinations of different tools designed to assess a range of other cognitive and personal attributes. [22][23][24][25] For example, all UK medical schools now include an admissions test as part of selection (eg, the UK Clinical Aptitude Test (UKCAT)), and schools are also expected to make use of interviews in the selection process (see later). Medical schools have chosen to increasingly use evidence-based selection tools such as multiple mini interviews (MMIs; a format of many short independent assessments, typically in a timed circuit) rather than traditional interviews, and have decreased their use of personal statements in attempts to become fairer, more objective and transparent in their selection methods. 26 Such a broader approach to selection seems, on face value, to address the dilemma of balancing predictive validity and widening access. However, it is critical to know if selection strategies, tools and processes are actually effective in terms of helping medical schools achieve the aim of increasing diversity. Measuring this is not straightforward. While there is much research examining whether the tools used for selection into medical school measure what they claim to measure, and do so consistently, 7 26 27 very little is known about whether selection practices support increasing diversity/widening access to medicine. What evidence is available is conflicting. For example, Tiffin et al 28 found that certain ways of using the UKCAT (an admissions test) were associated with a higher proportion of students from under-represented groups being admitted to UK medical schools. However, in a more recent longitudinal study, Mathers and colleagues 29 failed to identify any consistent effect of different usages of the UKCAT on equity in selection processes. In another context (Denmark), O'Neill et al 30 showed that selection strategy (grade based or attribute based) had no effect on the social diversity of their medical student population. There is an additional issue. As mentioned above, medical schools in the UK and many other countries use a combination of tools (such as prior academic attainment, MMIs, an aptitude test, references and personal statements). 26 However, selection research has typically focused on the qualities of one particular tool or method in its own right, rather than on whether various tools can be combined effectively. Where studies have looked at combining tools, the focus has typically been on examining the psychometric properties of doing so (ie, incremental validity). 26 31 32 The few studies that have considered the impact of combining tools suggest that different weightings (eg, 50% for prior academic attainment, 30% aptitude test and 20% local assessment; or a hurdle model of 'if over x, then through to the next stage') may lead to different outcomes in terms of who is selected. 28 33 However, yet again, the literature is conflicting, with certain combinations putting some groups at less of a disadvantage but biases remaining towards other groups.
34 35 In short, while UK medical schools now typically use a combination of selection tools to discriminate between applicants, we do not know if this more systems-based approach to selection supports increasing diversity/widening access. Building on our previous work, 7 26 36-38 we wished to examine if changes in medical school selection criteria or processes impact on the demographic composition of the student population. Specifically, we wanted to examine if a change in selection processes at medical school level impacts on the proportion of students admitted from certain groups/target populations for widening access to medicine initiatives in the UK context. 21 39

Methods
This was a quantitative study using an interrupted time-series framework with segmented regression to examine if changes in medical school selection criteria during 2007 to 2014 impacted on the proportion of admissions in four target groups.

Data source
The most commonly used admissions test is the UKCAT, introduced in 2006 to help medical schools increase the diversity of medical students (www.ukcat.ac.uk). UKCAT provided data on student admissions to undergraduate programmes at 18 UK medical schools from 2007 to 2014, covering around 24 000 admissions. The data were accessed within the Health Informatics Centre Safe Haven (HIC), run through the University of Dundee, to ensure adherence to the highest standards of security, governance and confidentiality when storing, handling and analysing identifiable data. Received data were anonymised. The following datasets were provided and merged together to form a working data file. The most recent UKCAT score was used, and duplicates were removed.
► Admissions: anonymised student ID, university, course and academic year.
► Demographics collected by UKCAT: anonymised student ID, gender, ethnic group, year of birth, year of UKCAT test, school type (defined according to funding criteria, whether state funded or privately funded: see later), highest qualification (indicated by academic score or tariff (the weighting applied to academic results in the admissions process)), socioeconomic group based on parental occupation (derived from the National Statistics Socio-Economic Classification: see later).
► UKCAT test scores: anonymised student ID, year of test and scores for the five UKCAT subtests (verbal reasoning, decision making, quantitative reasoning, abstract reasoning and the situational judgement test (SJT)) and the UKCAT total score. Candidates receive a scale score (300-900) for each of the first four subtests and a banding (1-4) for the SJT. Note that schools using the test in selection will have mainly relied on an aggregated 'total score' on the UKCAT, which typically ranges from 1200 to 3600.

Demographics
In line with UK widening access policies, 21 39 we were interested in the admission of applicants who were from lower socioeconomic status (SES) groups, who had attended non-selective secondary schooling, were non-white and/or were male. Each of these is explained below. SES was determined by the widely used parental National Statistics Socio-Economic Classification (NS-SEC), 40 where categories 4 and 5 of a 1-5 scale represent lower socioeconomic groups (group 1: managerial and professional occupations, group 2: intermediate occupations, group 3: small employers and own account workers, group 4: lower supervisory and technical occupations, and group 5: semiroutine and routine occupations).
The proportion of the UK population in the five categories in 2011 was NS-SEC1: 41.4%, NS-SEC2: 12.7%, NS-SEC3: 9.4%, NS-SEC4: 6.9% and NS-SEC5: 25.2%. 41 A non-selective secondary school is typically a state school (no entrance exam, not fee paying and based on residential catchment), with the comparison being independent, typically fee-paying schools, often with an entrance exam, which are attended by about 7% of UK school pupils overall. 42 There is an overlap of some schools, but this categorisation was deemed appropriate for analysis given our knowledge of how this has been approached in UK studies from the wider field of education. The dichotomy of white and non-white is typically used in UK studies looking at selection of UK students into medical school. 31 38 43 44 Non-white is a broad categorisation that is likely to mean different things in different contexts. In the UK context, non-white participants are typically Indian or Asian, with very small numbers of black/Afro-Caribbean participants represented in this group. There is much research indicating that 'non-white' applicants and medical students are disadvantaged in terms of performance at selection and at medical school. 31 38 43-45 Finally, we also looked at male gender, as females surpass males in high school examination performance in many countries including the UK. 46 47 Whether related to this or not (the pattern of performance is different at the extremes), 48 in the UK the proportion of male medical students is significantly lower than that of female students. This is of relevance to this paper as there is much debate about the impact of this pattern on future healthcare delivery. 49

Admissions processes
During the time period examined, most medical schools in the UK used a combination of prior academic attainment (eg, A Levels), a cognitive or aptitude test (eg, UKCAT), the personal statement and an interview for selection. Prior academic attainment and UKCAT scores were used by medical schools in one of two main ways: as a factor percentage in a decision to interview, offer a place or both, and as a threshold score to select for interview or to make an offer, with a score typically between 1900 and 2800 used. 28 50 An assessment of the personal statement and/or reference could also be used in this process as part of the factor weighting, but the use of the personal statement as part of the selection process decreased between 2007 and 2014 (RG, personal data). Information on the selection criteria used was obtained from RG, who receives this information on an annual basis as part of her employment at the UKCAT consortium (the information is not published as such). We were specifically interested in whether the introduction of different usages, increases/decreases in the factor weightings or increases/decreases in the threshold score for invitation to interview led to changes in the proportion of admissions in the four target demographic groups. 51 We anticipated that increased use of the UKCAT, shifting from traditional interviews to MMIs and decreasing the use of traditional interviews and personal statements would potentially increase the diversity of applicants invited to interview, given the evidence base for the 'fairness' of each of these selection tools. 25 35 We also anticipated that a 'stronger' use of the UKCAT score (as a factor or threshold) would be associated with increased odds of selecting entrants who were male, from a low socioeconomic background or from a state school. 28
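To make these target-group definitions concrete, here is a minimal Python (pandas) sketch of how the four binary widening-access indicators could be derived from the demographic fields just described. The column names and codings are hypothetical stand-ins, not the actual UKCAT dataset fields.

```python
import pandas as pd

# Hypothetical columns standing in for the anonymised UKCAT demographics data.
df = pd.DataFrame({
    "ns_sec": [1, 4, 5, 2, 3],  # parental NS-SEC class (1-5)
    "school_type": ["state", "independent", "state", "state", "independent"],
    "ethnicity": ["white", "non-white", "non-white", "white", "white"],
    "gender": ["F", "M", "M", "F", "M"],
})

# The four target-group indicators used in the paper.
df["low_ses"] = df["ns_sec"].isin([4, 5])           # NS-SEC groups 4 and 5
df["non_selective"] = df["school_type"] == "state"  # state-funded schooling
df["non_white"] = df["ethnicity"] == "non-white"
df["male"] = df["gender"] == "M"

# Proportion of admissions in each target group (computed per year in the study).
print(df[["low_ses", "non_selective", "non_white", "male"]].mean())
```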
Selection criteria
Previous work by RG provided information on the selection policies of each of the 18 schools included in this analysis and any changes from year to year (eg, a change in factor weightings, a change in the UKCAT threshold score, a change in prior academic attainment, for example, a requirement of AAA at A Level instead of AAB, or the introduction of interviews). The information presented here is the selection criteria used to decide whether to invite a candidate to interview or not, the last hurdle in the selection process that would ultimately lead to an applicant being admitted to medical school or not.

Analysis
Within the data safe haven (HIC), data were merged using STATA (V.14) after appropriate recoding. For each year of admission, the total number of admissions for the 18 schools was obtained, and the proportion of those of low SES, attending a non-selective secondary school, of non-white ethnicity and male was calculated across the whole sample. Due to the confidential nature of the data from schools, we have not presented this information for all schools separately, as it would potentially give away the location of the schools. The aim was to assess if the change in selection procedure impacted on the proportion of admissions. Again, to maintain the anonymity of the schools, we were unable to do this for every school. Thus, we selected four schools to act as individual case studies. These four schools were chosen to be representative of the 18 medical schools in terms of diverse geographical locations, student intake (ranging from 120 to 230 per year group), curricula (eg, case based or traditional learning) and age of school (from hundreds of years old to one of the newest UK medical schools). In addition, the four chosen schools had known changes in policy that we could look at and sufficient data before and after the change to allow analyses. As the changes occurred at different times for different schools, we needed to analyse each school individually. For each case study, we looked at changes in selection criteria during 2007-2014 and whether any changes impacted on the proportion of admissions in the four target groups. This was undertaken within an interrupted time-series framework using segmented regression. The interruption was the year of admission in which the change had occurred (ie, the selection year was one prior). In some cases, minor changes had occurred, but the interruption chosen was the biggest amendment to selection policy. It was important that the change being tested was prespecified. Models were fitted with an interruption (level) effect, a preinterruption trend (slope), a postinterruption trend (slope) and a constant term. These coefficients represent the proportion at the start of the period, the slope in the preinterruption phase (pre-trend), the change in level caused by the interruption and the slope of the postinterruption phase (post-trend). In addition to these effects, it was of interest to calculate the absolute and relative effects (rate of change) of the interruption. The statistical software R was used to calculate estimates of relative change (following the interruption). SEs for these estimates were generated using the method specified by Zhang and Wagner. 52

Patient and public involvement
Patients and the public were not involved in this research.

Results
Overall admissions
In total, across the 18 medical schools, there were 24 346 recorded admissions between 2007 and 2014 for which data were available.
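As a sketch of the segmented-regression model described in the Analysis section above, written here in Python with statsmodels rather than the STATA/R the authors used, the design matrix below carries a constant, a preinterruption trend, a level-change indicator and a postinterruption slope change, matching the coefficients described. The yearly proportions are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Invented yearly proportions for one target group, 2007-2014 (illustration only).
years = np.arange(2007, 2015)
prop = np.array([0.30, 0.31, 0.29, 0.30, 0.32, 0.31, 0.34, 0.35])

interruption = 2012                            # year the selection policy changed
t = years - years[0]                           # time since the start of the series
level = (years >= interruption).astype(float)  # step change at the interruption
post = np.where(years >= interruption, years - interruption, 0.0)  # slope change

# Constant + pre-trend + level change + post-trend change.
X = sm.add_constant(np.column_stack([t, level, post]))
fit = sm.OLS(prop, X).fit()
print(fit.params)  # [baseline proportion, pre-slope, level shift, slope change]
```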
Table 1 shows the number of admissions in each year across the 18 medical schools and the proportion of admissions by the four target groups (lower socioeconomic group, non-selective secondary schooling, non-white and male), along with the number of schools that provided data for that year. Not all schools provided data in each year, so there is some year-to-year variation in the number of total admissions. The percentages presented are the percentages of admissions for known values of the characteristics (ie, excluding missing data). Figure 1 displays the proportion of admissions by year and shows that, when the data from all 18 medical schools were combined, there were no obvious changes in the proportion of admissions from each of the target groups over the 8-year period of study.

Case study A
Figure 2 shows the proportion of admissions by target group across each year for case study A. Case study A used the UKCAT as a weighted factor along with prior academic attainment and personal statements in the decision to invite for interview. In 2012, a change was made to academic attainment (an increase in As at A Level) and the weighting for the UKCAT increased from 7% to 14%. Year 2013 saw the introduction of MMIs, and 2014 the removal of personal statements from the decision to interview and, as a result, a much larger weighting placed on both academic attainment and the UKCAT. A segmented regression using 2012 as the interruption year was carried out. Later changes could not be investigated as there were insufficient postinterruption time points (table 2). There were no obvious interruption effects, and the trends before and after the interruption were not statistically significant. Relative change estimates were not statistically significant for the proportion of low SES, the proportion from non-selective schooling and the proportion non-white. The relative change for the proportion male was −0.33 (95% CI −0.73 to −0.07), indicating that the proportion of males per year decreased by this amount compared with what it would have been if the selection policy had not changed.

Case study B
Figure 3 shows the proportion of admissions for case study B. This institution used both a convenience threshold (ie, set to select the required number of candidates for interview) and an actual UKCAT threshold up to 2013, with both increases and decreases in required scores. In 2009, the required academic attainment was increased. In 2013, the university continued to use a UKCAT threshold score but also added a large factor weighting for both academic tariff and the UKCAT to select for interview; thus, 2013 was used as the year of interruption in the analysis (table 2). The only evidence of the interruption having an effect was for the proportion from non-selective schools, where there was a jump up, although the trends before and after were not statistically significant. There was a jump up in the proportion of non-white students at the same time, but this was not found to be statistically significant (table 2). None of the estimates of relative change were significant, indicating that the proportions observed postinterruption were not obviously different from what they would have been if the change in selection policy had not occurred.

Case study C
Figure 4 shows the proportion of admissions for case study C. This school increased its academic attainment requirements in 2011 and used both a UKCAT threshold and a factor percentage for the UKCAT.
The year of interruption for the analysis was taken as 2013, when the UKCAT factor weighting and threshold value were increased (the factor percentage to 50%, and the threshold by an additional 500 points). The factor percentage for the personal statement was reduced. The interrupted time series (table 2) did not yield any statistically significant results for the trends before or after the interruption or at the 2013 interruption. None of the estimates of relative change were statistically significant, indicating that the proportions observed postinterruption were not obviously different from what they would have been if the change in selection policy had not occurred.

Case study D
Figure 5 shows the proportion of admissions for case study D. This institution used a combination of personal statement scoring and a UKCAT 'trade-off' approach (ie, candidates with higher UKCAT scores being considered favourably) up until 2010 and switched to factor weighting in 2012 (UKCAT and academic attainment). In 2014, the factor weighting for academic attainment was increased at the expense of the UKCAT. The year of interruption investigated was 2012. There were no significant trends or intervention effects for male, non-selective or low SES students for case study D (table 2). However, there was a significant postinterruption increasing trend for the proportion of non-white students (p=0.032), although there were only two additional postinterruption time points. This translated into a relative change of 1.46 (95% CI 0.37 to 2.54), showing that the rate of change in the proportion of non-white students increased following the change in policy. Estimates of relative change were not statistically significant for the other three widening access criteria.

Discussion
Overall, there were no obvious changes in the proportions of students accepting a place who were from lower socioeconomic groups or non-selective schools, or who were non-white and/or male. Yet, our case study data show that all four example schools changed their admissions practices over the time period of the study. Some schools changed their admissions practices frequently over the 8-year study period, sometimes changing multiple things in the same year. This within-school variation made little difference locally, or overall in terms of increasing the diversity of medical students: changes in admission practices to practices that seemed 'fairer' 25 did not lead to any discernible change in who was accepted into medical school. 21 There are numerous possible reasons for this. First, none of the adjustments we observed were particularly radical. A set of selection criteria results in a ranking of applicants. Small changes to weightings (such as school A increasing the weighting for the UKCAT from 7% to 14%) would not radically alter that ranking, especially when gaining a place remains largely determined by prior academic attainment in all cases. Second, historical evidence suggests that academic and cognitively oriented assessment tools, which encompass school exams and cognitive ability tests such as those used as part of the UKCAT, tend to favour 'traditional' applicants to medicine, that is, white and high social class individuals. 28 53 54 Thus, it is possible that the common usage of prior academic attainment and an aptitude test comprising cognitive ability tests as the first two hurdles within the selection process sequence may select appropriately in terms of predictive validity yet at the same time actually 'narrow' rather than widen access.
55 However, to change from this practice would require medical schools to, for example, move the assessment of personal values or attributes from its typical position as the last hurdle to the forefront of the process. Our previous work suggests that this would not be embraced by medical schools, many of which struggle to see how widening access can fit with their culture, ethos and aspirations. 36 On a pragmatic note, many need the first stages of selection to help them screen large numbers of applications for a much smaller number of interview places and, ultimately, medical school places. Third, we do not know why schools changed their processes. Did school A, for example, bring in MMIs and remove personal statements from the selection process with the explicit aim of opening the doors to a wider range of applicants? For example, what was the rationale for school A doubling the UKCAT weighting in 2012 (and why was it 7% originally and 14% thereafter)? What did they hope to achieve by this change in practice? We do know that the timeframe of the study was one where widening access to medicine was extremely topical within the UK, and schools were expected to provide evidence to the regulator as to how they were addressing this issue. Was change enacted for accountability purposes only, in search of improved validity, or embraced as a means of really making a difference in terms of widening access? 56 Fourth, different selection processes may attract different applicants, which would change the nature of the applicant pool and hence influence the changes observed. However, our data did not indicate that the applicant pools differed notably pre and post changes. Fifth, our outcome measure of invitation to interview is not equivalent to accepting a place. It may have been that changes in the selection processes did have the intended effect on diversity, but the interview process mitigated the impact of change. However, what little evidence does exist is conflicting as to whether the medical school interview introduces further social bias into the selection system. 57 58 Further work is needed to explore this potential source of bias. Sixth, we do not know how independent or otherwise the various selection tools are from each other. What is the relationship between, for example, UKCAT and MMI scores? Do different selection tools overlap in terms of what they measure? In terms of the latter, the few studies that look at this report modest to moderate relationships between different selection tools, even those that measure the same qualities. 59 60 Finally, an alternative, or additional, explanation for this finding is to consider the nature of applicants. Modifying selection processes is unlikely to have a major impact on widening access if there is a very small pool of people from certain backgrounds applying. However, while this may be the case in the UK in terms of applicants from lower socioeconomic groups, 61 there are plenty of applicants to medicine who are male, non-white and/or who have attended non-selective schools. 15 38 This suggests there may be biases inherent in the current selection processes that need further exploration 28 33 35 and that medical schools need to increase their focus on encouraging pupils from diverse backgrounds to apply for medicine. 61 In addition, a potential limitation of an interrupted time-series approach is that there may be other secular trends occurring alongside any identified interruption.
Thus, it is possible that other background trends may have swamped any signal from the changes in admission policy implemented by the medical schools. For example, as the UK economy weakened, those from under-represented groups may have been further discouraged from applying to university due to fees. This work joins an ongoing conversation in the literature related to selection into medicine and how best to widen access to medical education. 1-5 7 22 25 The particular approach used in this paper (time series analysis of case studies) provides a more nuanced account of the processes of medical selection than is available from large-scale studies looking solely at aggregate data. 18 26 28 31 32 38 53 62 The time series analysis allowed for detailed analysis of individual school admission information and shows that there was individual variation but that, combined across medical schools as a whole, the picture is unchanged. However, the UKCAT now incorporates two different components: the cognitive ability tests that are the component referred to throughout this study and the new SJT component that measures a range of personal attributes. Recent research suggests that the SJT may not favour those from more privileged educational backgrounds. 34 To alter the demography of those offered a place would mean weighting the SJT component of the UKCAT more heavily than the cognitive ability components and/or using the interview component of selection differently. Medical schools may not be ready to do this. In short, the implication of the current study is not just that the combination, weighting and sequencing of selection methods need rethinking but that schools need to decide what they want to achieve via selection. For example, are they trying to attract particular groups to their medical schools? Depending on the priority, the weighting and sequencing of selection methods needed to achieve their aims may differ. Indeed, a framework has previously been proposed to guide how selection can be optimised in order to maintain entrant quality while minimising the adverse impact on disadvantaged groups. 63 It may also be that schools need to completely rethink their approach to increasing diversity, to depend less on comparing the performance of diverse applicants on standardised selection tools, and to shift more towards an individualised approach, which gives consideration to applicant background and life experiences, and what they can bring to medicine. 64 65 Addressing calls in the literature, we focused on widening access in terms of socioeconomic background and examined other potential dimensions of disadvantage, such as gender, ethnicity and schooling. Our methodological approach also avoids the issues associated with single-site, cross-sectional work by comparing across schools and over time. We need to know more about why schools change their admissions policies and what they hope to achieve by doing so. A qualitative methodology would be appropriate to explore this question. 36 Also, on a more practical note, we urge schools to collect and scrutinise their own data at a granular level, akin to the approach taken in our case studies, as this information is essential to assess the status quo (baseline) and evaluate the impact of any change. The four medical schools were chosen from the original 18 for which data were available because they represented the diversity of the sample.
The commonalities were that they all used the UKCAT, the most widely used admissions test in the UK, and offered traditional 5-year programmes. However, we took care in how we presented the (historical) data to minimise the chance of individual schools being identified by 'insiders'. We do not know if the within-school and between-school diversity we identified is also found in UK schools using other admissions tests, or in accelerated or extended medical programmes; this remains to be explored. This work was carried out in one context, the UK, and hence the findings may not be generalisable across contexts. However, the admissions 'tools' combination of prior attainment (whether school exit examinations, grade point average or specific knowledge-based examinations for medical admissions (eg, the Medical College Admission Test)), some sort of aptitude test and either a traditional interview or MMI is typical of many countries. 7 26 Moreover, our study does not focus on the tools themselves but rather on changing admissions processes. Given that medical schools across the world are constantly reviewing and changing their admissions practices, our messages will resonate across contexts. In conclusion, UK medical schools now have a political mandate to increase the diversity of medical students and a timeline in which to achieve certain goals. 39 However, this study suggests that current selection processes will not help the medical profession 'throw open its doors to a far broader social intake than it does at present'. 21 If we wish to increase the diversity of the medical profession in the future, we suggest that medical schools should be better supported to take a more radical and less risk-averse approach to selection.

Acknowledgements Our thanks to the Medical Schools Council for funding and to the UKCAT Consortium for access to data. Neither organisation was involved in determining the study design or results reporting.

Contributors JC and AJL had the original idea for the study and developed the study design in collaboration with SF and PAT within a larger programme of work including SN and FP. RG collected some of the data and provided advice on data interpretation. SF undertook data cleaning and analysis advised by PAT and AJL, with the analysis approach developed and refined through full team meetings. SF initially drafted the methods and results, and JC drafted the introduction and discussion, with all authors then contributing to redrafts. All authors approved the final paper before submission.

Funding This work was supported by the Medical Schools Council (MSC) of the UK under the Selecting for Excellence programme.

Disclaimer The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

Competing interests JC, PAT, FP and SN have previously received research funding from the UKCAT Board, the MSC and the GMC. In addition, PAT, JC and SN have received travel and subsistence expenses for attendance at UKCAT Research Group meetings. RG is employed by UKCAT. FP and the Work Psychology Group design and develop the UKCAT Situational Judgement Test. SN was Chair of the UKCAT Consortium and is now Chair of the UKCAT Research Group, while PAT and JC are members of this group. PAT is supported in his research by an NIHR Career Development Fellowship. This paper presents independent research part-funded by the National Institute for Health Research (NIHR).

Patient consent Not required.
Ethics approval UKCAT candidates are informed at registration that their data may be used to undertake research related to admissions to medicine and dentistry that reflects the legitimate interests of UKCAT, and that research and analysis take place only on anonymised data. UKCAT is committed to ensuring that no individual or group of individuals can be identified in any published research undertaken on its behalf. No specific ethical approval was required.

Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement The data contained within this study are held within a data safe haven and therefore are not available publicly.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Cellular changes in boric acid-treated DU-145 prostate cancer cells

Epidemiological, animal and cell culture studies have identified boron as a chemopreventative agent in prostate cancer. The present objective was to identify boron-induced changes in the DU-145 human prostate cancer cell line. We show that prolonged exposure to pharmacologically relevant levels of boric acid, the naturally occurring form of boron circulating in human plasma, induces the following morphological changes in cells: increases in granularity and intracellular vesicle content, enhanced cell spreading and decreased cell volume. Documented increases in β-galactosidase activity suggest that boric acid induces conversion to a senescent-like cellular phenotype. Boric acid also causes a dose-dependent reduction in cyclins A–E, as well as in MAPK proteins, suggesting their contribution to proliferative inhibition. Furthermore, treated cells display reduced adhesion, migration and invasion potential, along with F-actin changes indicative of reduced metastatic potential. Finally, the observation of media acidosis in treated cells correlated with an accumulation of lysosome-associated membrane protein type 2 (LAMP-2)-negative acidic compartments. The challenge of future studies will be to identify the underlying mechanism responsible for the observed cellular responses to this natural blood constituent.

The element boron is nearly completely absorbed from drinking water and plant-derived foods in the gastrointestinal tract, and circulates in blood as boric acid (BA) (Price et al, 1997). Cells were once thought incapable of processing the element, yet this has since been disproved. Boron is utilised by bacteria in the structure of several antibiotics and of autoinducer-2, a signalling molecule utilised during interspecies quorum sensing (Chen et al, 2002; Semmelhack et al, 2004). Plants require the element for growth, flowering and seed formation, and obtain boron from soil pore water using a borate transporter, BOR1, expressed in root pericycle cells (Takano et al, 2002). A human homologue, the electrogenic, voltage-regulated, Na+-coupled borate transporter NaBC1, was recently identified in human kidney tubular cells and may function to maintain plasma BA levels (Park et al, 2004). There are several reports supporting boron as a chemopreventative agent against prostate cancer. An epidemiological study using data from the NHANES III database reported that the risk of prostate cancer in US men is inversely proportional to dietary intake of boron (Cui et al, 2004). The biological plausibility of this observation has been supported by cell culture and animal studies. Treatment with BA of nude mice injected with androgen-sensitive LNCaP prostate cancer cells caused a reduction in tumour growth of 25–38%, along with a reduction in plasma PSA levels of 88% (Gallardo-Williams et al, 2004). BA inhibits the activity of serine proteases, including prostate-specific antigen (PSA), presumably by binding to its active site (Bone et al, 1987; Gallardo-Williams et al, 2003). In culture, BA has been shown to inhibit the proliferation of LNCaP and of the androgen-independent prostate cancer cell lines DU-145 and PC-3 in a dose-dependent manner (Barranco and Eckhert, 2004). Since DU-145 cells do not synthesise PSA, BA's antiproliferative action is unlikely to occur by inhibiting the conversion of IGFBP-3 to IGF-1, as proposed in LNCaP tumours (Gallardo-Williams et al, 2004; Sobel and Sadar, 2005).
The present investigation was initiated to define morphological and molecular responses of DU-145 prostate cancer cells to BA, which might lead to an explanation of its antiproliferative properties. In the current report, we examined the effects of pharmacological concentrations of BA on cell morphology and on molecular markers of proliferation, senescence, metastasis and motility. We show that prolonged exposure to BA causes DU-145 cells to develop a flattened, angular phenotype with numerous vesicles appearing in the cytoplasm. These changes occur coincident with a decrease in the expression of cyclin proteins, p21 and P-MEK1/2, as well as a reduction in cell motility and invasion capacity. Finally, increased β-galactosidase activity reflects a conversion of DU-145 cells to a senescence-like state.

MATERIALS AND METHODS

Experimental culture
DU-145, LNCaP and PC-3 PCa cells, donated by Dr Allan Pantuck, were cultured in RPMI 1640 media (Invitrogen, USA) supplemented with 10% FBS, penicillin/streptomycin (100 U ml^−1; 100 μg ml^−1) and L-glutamine (200 mM) (Gemini Bioproducts, USA). Experimental media was prepared as previously published in Barranco and Eckhert (2004). Cells were plated directly onto culture plates or glass coverslips and allowed to settle overnight in nontreated media. After 24 h, media was aspirated and replaced daily, for 7–8 days, with BA-supplemented media (0–1000 μM). Cell counts were performed using a haemocytometer and Trypan Blue (Invitrogen) for identifying nonviable cells.

Flow cytometry
Following 8 days in culture with BA (0, 250 and 1000 μM), DU-145 cells were trypsinised, resuspended as 1 ml aliquots (10^6 cells ml^−1) in loading buffer (RPMI 1640 without phenol red), and incubated in 12 × 75 mm polystyrene test tubes for 30 min at 37°C, 5% CO2. Following incubation, forward light scatter and side light scatter (serving as measures of cell size and granularity, respectively) were determined using a Becton Dickinson BD-LSR analytic flow cytometer on samples of 10 000 cells. Data analysis was performed with FLOWJO. Loading buffer was supplemented with Indo-1 AM (1 μM) (Sigma, USA), a cell-permeable Ca2+ fluorescent probe, for concordant measurements of intracellular calcium.

Fluorescent probe detection of actin and acidic compartments
For actin probing (F-actin, fluorescein phalloidin; G-actin, fluorescent deoxyribonuclease I conjugate) (Molecular Probes, USA), 8-day BA-treated DU-145 cells were washed 2× with PBS and fixed in PBS containing 3.7% formaldehyde for 10 min at room temperature. Fixed cells were washed 2× with PBS before being extracted with acetone (−20°C) for 5 min. A further 2× wash with PBS followed before cells were loaded with phalloidin (0.16 μM in 1% BSA/PBS) or deoxyribonuclease I (0.3 μM in glycerol/PBS) for 20 min at 37°C. Loaded cells were washed 2× with PBS, mounted on slides and viewed under confocal microscopy (fluorescein: ex 496, em 516). For intracellular acidic compartment labelling, 8-day BA-exposed DU-145 cells were loaded with a nonspecific lysosome marker (LysoTracker Green) (Molecular Probes). Cells were submerged in prewarmed media containing LysoTracker (1 μM) for 1 h at 37°C. Following incubation, loading medium was aspirated and replaced with PBS, and cells were viewed under confocal microscopy (ex 504, em 511). All fluorescent images, along with light images, were recorded using an Axioskop 2 FS confocal microscope and brightened using Photoshop 6.0.
Cell attachment, migration and invasion assays
For cell attachment efficiency calculations, DU-145 cells were cultured in the presence of BA (0, 250 and 1000 μM) for 8 days on 100 × 20 mm tissue culture plates, trypsinised and replated onto six-well polystyrene culture plates (Fisher, USA) at 2.5 × 10^5 cells well^−1. Following a 24-h incubation, nonadherent cells and media were aspirated, while attached cells were trypsinised and counted. The migration analysis protocol was identical to that of the attachment assay, except that 2.5 × 10^5 cells were loaded into the upper migration chamber of a Corning transwell permeable support (24-well transwell, 8-μm polycarbonate membrane) in 0.1 ml of serum-free RPMI-1640 media. RPMI-1640 (0.6 ml) supplemented with 10% FBS, serving as a chemoattractant, was deposited in the lower chamber. Plates were covered and incubated for 24 h at 37°C, 5% CO2. Following incubation, cells remaining on the upper filter were removed with a cotton swab, while the migrated population on the filter underside was washed with PBS, fixed in methanol, stained with Giemsa stain, rinsed with PBS and deionised water, and allowed to air-dry. Cells in four random optical fields were counted to determine the number of migratory cells. The invasion assay procedure was identical to that used in the migration analysis, except that each filter, prior to loading, was coated with 20 μg of growth factor-reduced Matrigel (BD Biosciences) in 100 μl of cold, serum-free RPMI-1640 media and subsequently allowed to air-dry overnight in a sterile culture hood. Since plating efficiency varied among BA-treated cells on Matrigel-treated and untreated polycarbonate membranes, test cells were cultured alongside experimental cells and, after incubation, were trypsinised from filters, counted on a haemocytometer and used to determine the motility fraction.

Media pH measurements
Following 8-day BA (250 and 1000 μM) exposures of DU-145 cells, with daily media refreshment, the media used for the 24 h period between days 7 and 8 was removed and its pH measured using a Pinnacle 530 pH meter (Corning). Cell counts were performed on the adherent cells from corresponding plates and utilised for calculating the pH shift per cell:

acidic pH shift per cell = (7.4 − observed pH) / (cells per plate (×10^6)).

Statistics
SigmaStat 3.1 statistical software (Systat Software, Point Richmond, CA, USA) was utilised for paired t-tests. All experiments were performed in triplicate.
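As a worked illustration of the pH metric just defined (our sketch, with invented numbers; the study's own values appear in Figure 4A), the calculation can be expressed as follows:

# Minimal sketch of the acidic pH shift per cell metric defined above;
# the example values are hypothetical, not measurements from the study.
def acidic_ph_shift_per_cell(observed_ph: float, cells_per_plate: float) -> float:
    """(7.4 - observed pH) divided by cells per plate in millions."""
    return (7.4 - observed_ph) / (cells_per_plate / 1e6)

# e.g. media at pH 7.1 over a plate of 2 million cells:
print(acidic_ph_shift_per_cell(7.1, 2e6))  # 0.15 pH units per 10^6 cells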
RESULTS

BA alters cell morphology, while inducing cellular senescence
Flow cytometry and light microscopy were used to assess morphological alterations resulting from BA exposure. Following an 8-day exposure to BA (0, 250 and 1000 μM), flow cytometry analysis showed a dose-dependent increase in cellular granularity (side light scatter) and a decrease in cell size (forward light scatter) (Figure 1A). No differences in cell morphology were apparent between confocal images of treated and untreated cells during the first 2 days. By day 8, treated DU-145 cells became flattened and contained numerous vesicles (Figure 1B). BA's ability to inhibit cell proliferation without cell death inspired our investigation of its effects on markers of senescence. The activity of β-galactosidase at pH 4.0, a marker of senescence, increased with BA exposure in a dose-dependent manner (Figure 1C). Enzyme activity was not detected at pH 6.0 (data not shown).

BA alters proliferation-relevant protein expression
DU-145 cells were exposed to BA (0–1000 μM) for 1, 2 or 7 days. No changes were apparent at 1 or 2 days, but at 7 days the protein expression of cyclins A, B1, C, D1 and E, and of the phosphorylated form of the MAPK signaller MEK (P-MEK1/2), decreased at 500 and 1000 μM concentrations (Figure 2A–F). Phosphorylated ERK (P-ERK1/2) increased at intermediate exposures (100 and 250 μM), relative to control, but was reduced by higher concentrations of BA (Figure 2B). Expression of the tumour suppressor p53 remained stable, but p21 decreased following 7-day exposures (Figure 2H).

BA induces cytoskeletal alterations, while inhibiting cell attachment, migration and invasion
Measurements were taken to assess cell attachment, migration, invasion and intracellular cytoskeletal actin distribution, to determine whether BA (250 and 1000 μM) exposure for 8 days had an effect on metastasis-related aspects of cancer cells. Staining for filamentous (F)-actin, a marker for intercellular connections and extensions such as filopodia, was decreased in cells exposed to high levels of BA. Cells treated with 1000 μM BA had smooth edges and were angular in appearance (Figure 3A). Intracellular globular (G)-actin expression was unaltered by BA exposure. BA-treated cells showed a reduction in attachment efficiency to polystyrene culture dishes, with a drop in plating efficiency of 34% at 1000 μM (Figure 3B). With 10% FBS serving as a chemoattractant, the capacity of DU-145 cells to migrate across an 8-μm polycarbonate permeable membrane was reduced by 28 and 89% by 250 and 1000 μM BA, respectively (Figure 3C). The same trend was observed for invasion potential, where 250 and 1000 μM BA pretreatments reduced Matrigel invasion by 82 and 97% (Figure 3C).

BA induces media acidosis and accumulation of acidic vesicles
Acidic yellowing of phenol red in culture media was more pronounced in BA-treated cells. The pH of media was measured prior to (pH 7.4) and following exposure to DU-145 cells for 24 h, between the 7th and 8th days of culture. The pH for each concentration of BA (0, 250 and 1000 μM) was then converted into an acidic shift from pH 7.4 per cell. Chronically BA-exposed DU-145 cells acidified the surrounding culture media in a dose-dependent manner (Figure 4A). The number of acidic vesicles (measured using the LysoTracker fluorescent probe) also increased in a dose-dependent manner, but both the lysosome-specific LAMP-2 protein and the early endosome marker EEA1 decreased (Figure 4B and C). The concentrations of BA used had no significant effect on the pH of cell-free culture media (data not shown). To exclude the possibility that BA might alter the buffering capacity of densely populated culture plates, cells were cultured to near-confluence in control media before exposure to BA-supplemented media (250 and 1000 μM) for 24 h. The pH remained unchanged at all BA concentrations, showing that the acidity was not associated with the media, but instead with cell changes that occurred during the 7-day exposure (data not shown).

DISCUSSION

Boron has a high affinity for oxygen and is present in aqueous solution, depending on pH, as either BA (B(OH)3) or borate (B(OH)4)−. Since the pKa of the equilibrium between B(OH)3 and borate (B(OH)4)− is 9.2, at intracellular pH (7.4) free boron exists as the weak Lewis acid, BA. BA, a small molecule with a mass of 61.83, is rapidly absorbed from the human intestine and excreted via urine with a half-life of 21 h. There is no evidence supporting metabolism of BA in any animal species (EPA, 1991).
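As a back-of-the-envelope check on this speciation claim (our illustration, not a calculation from the paper), the Henderson-Hasselbalch relation with pKa = 9.2 at pH 7.4 gives

[B(OH)4−] / [B(OH)3] = 10^(pH − pKa) = 10^(7.4 − 9.2) ≈ 1.6 × 10^−2,

so only about 1.6% of free boron is present as borate and roughly 98% as boric acid, consistent with the statement above.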
BA does bind to molecules with cis-hydroxyl groups, as established through mass spectrometry and NMR analyses identifying a high affinity for the ribose moieties of NAD+, and a somewhat lower affinity for mononucleotides (Kim et al, 2003). Nucleotide phosphorylation and loss of charge greatly reduce substrate affinity for BA.

Morphology
Flow cytometry analysis showed that BA caused a reduction in cell volume, yet under light microscopic investigation cells appeared to have a larger diameter. We believe the DU-145 cell line responds to higher concentrations of BA by rearranging its cell shape into a flattened, low-volume state (Figure 3A). These structural alterations in shape and size are likely contributing to the inability of the cells to proliferate, since increased cell volume and a rounding up from the attached substrate are both critical events during mitotic division (Lang et al, 2000; Fujibuchi et al, 2005). The observation that morphological alterations did not appear following 1- and 2-day BA exposures, but did at 8 days, argues that the changes reflect a secondary response to long-term treatment with BA. The relative intensity of fluorescent staining for G- and F-actin was found to be unchanged, regardless of BA concentration, indicating a steady-state actin pool ratio. Although actin concentrations in general appear unaltered, F-actin-stained filopodia extending about the periphery of the cells were reduced by 1000 μM BA. With actin serving as an important cytoskeletal factor in cell migration and invasion (Lambrechts et al, 2004), the observed F-actin retraction in BA-treated cells suggests a reduced capacity to perform either. This interpretation was reinforced by the analysis showing a dose-dependent inhibitory effect on motility and invasion capacity, along with incompetence for reattachment (Figure 3B and C). Together, these results suggest that BA reduces the metastatic potential of the DU-145 cell. In BA-treated cells, granularity increased in proportion to exposure concentration, possibly due to the formation of intracellular vesicles (Figure 1A and B). The origin and content of these vesicles are unknown, since fluorescent probes for acidic compartments, tubulin and intracellular calcium all failed to colocalise (data not shown).

Proliferation
The mechanism underlying the antiproliferative activity of BA has not been elucidated. One of the intriguing properties of BA is its ability to inhibit proliferation without causing a shift in cell cycle stage distribution or cell death (Barranco and Eckhert, 2004). In the current study, BA decreased the expression of five major cyclin proteins, all presumably playing significant roles in cell cycle progression (Figure 2A). Furthermore, the ability of antiproliferative agents to inhibit the expression of these proteins is important, since cyclins A, B1, E and D1 have been correlated with prostate cancer aggressiveness (Mukhopadhyay et al, 2002; Maddison et al, 2004; Tsao et al, 2004). The DU-145 cell line has a mutant p53 protein incapable of signalling through p21, making it all the more surprising to see p21 expression reduced by BA exposure (Figure 2B) (Lecane et al, 2003). The downregulation of p21 helps to explain why BA does not shift DU-145 cell populations into a G1 arrest (Barranco and Eckhert, 2004; Shukla and Gupta, 2004). BA's effects on growth have been shown to be parabolic in embryonic trout and zebrafish, with poor embryonic growth occurring at very low and at high concentrations (U-shaped curve) (Rowe et al, 1998).
BA's growth effects are cell-type dependent, with maximum growth occurring in Saccharomyces cerevisiae cells at <0.8 mM BA, whereas 500 μM BA maximised proliferation in HeLa cervical cancer cells (Bennett et al, 1999; Park et al, 2004). Furthermore, in HeLa cells BA (300 μM) was shown to stimulate the MAPK pathway in a bell-shaped fashion, with an initial induction of P-MEK1/2 and P-ERK1/2 followed by a decline in expression of P-MEK1/2 over time. In the present study, BA reduced P-MEK1/2 expression in a dose-dependent manner, yet increased P-ERK1/2 moderately at 250 μM (Figure 2F and G). By way of Ras/Raf signalling, the phosphorylated form of MEK phosphorylates ERK, which then translocates to the nucleus and activates transcription factors relevant to proliferative induction. Thus, by upregulating this pathway's activity, DU-145 cells appear to be attempting to counter the BA-induced growth inhibition (Giehl, 2005). Expression of MEK, ERK and all cyclins was not altered following 1- and 2-day treatments, suggesting, as observed with the cell morphological changes, that these were not the primary effect of BA.

Senescence
DU-145 cells were evaluated for β-galactosidase activity, a marker of senescence or reversible cellular quiescence (Coates, 2002). When enzymatic activity is measured at pH 4.0, it is thought to indicate an increase in lysosomal enzyme concentration, whereas enhanced activity at pH 6.0 reflects an increased lysosomal mass (Kurz et al, 2000). In our study, BA treatment increased the activity of β-galactosidase in a dose-dependent manner at pH 4.0, yet no activity was apparent at pH 6.0. Nevertheless, the dose-dependent increase recorded at pH 4.0 suggests that BA induces some 'senescent-like' characteristics.

Accumulation of acidic intracellular vesicles
A peculiar manifestation of BA treatment was discovered when the media of chronically exposed DU-145 cells became increasingly acidic (Figure 4A). This effect was dose-dependent and not due to changes in the buffering capacity of the media or to BA itself. The documented accumulation of acidic intracellular compartments supports a link with the media pH shift, either contributing directly to the environmental acidification or resulting from it, as seen in breast cancer cells (Glunde et al, 2003). Initially, we believed the upregulation of acidic vesicles reflected an increase in lysosome organelles, yet the LAMP-2 protein, expressed on lysosomal membranes in prostate tissue, decreased in expression (Figure 4C) (Furuta et al, 1999). It was also possible that the acidic vesicles were early endosomes, yet the protein expression of the early endosome marker EEA-1 was likewise reduced (Eskelinen et al, 2003). Further studies are needed to determine whether this response is unique to cancer cells or a universal response to BA (Gatenby and Gillies, 2004). Interestingly, metabolic acidosis has been reported in a case of fatal BA poisoning (Restuccio et al, 1992).

Conclusion
The rationale for this study was based on the facts that BA is (i) a natural constituent of human blood and (ii) readily absorbed, with plasma levels determined by dietary intake, and that (iii) there is epidemiological, animal and cell culture evidence supporting its antiproliferative capacity in prostate cancer.
In this report, we show that pharmacologically relevant BA treatment causes DU-145 prostate cancer cells to convert to highly granular, low-volume, flattened cells that have a markedly reduced capacity to migrate, invade Matrigel and attach to synthetic substrates. Reduction in the expression of proliferation-relevant proteins, along with the upregulation of β-galactosidase activity, ultimately leads to a nonproliferating entity reminiscent of a senescent cell. Finally, the resulting cell accumulates intracellular acidic vesicles while acidifying its extracellular environment.
Liberalising Regional Trade: Socialists and European Economic Integration

The socialist contribution to the creation of the European Economic Community has long been overlooked and misunderstood. Existing scholarship emphasises short-term considerations in explaining why the French Socialist and German Social Democratic Parties supported a European Common Market in 1956-7. This article offers a new perspective by placing these parties' decisions within a longer context of socialist views on free trade, tariffs and regional economic organisation. Based on fresh archival materials, this article explores how socialist proposals for securing an economic peace after the First World War continued to influence socialist policies on European economic integration in the 1950s.

Two-thirds of the delegations, including the SFIO and SPD, declared themselves 'very positive towards the liberalisation of foreign trade'. 2 This article contends that SFIO and SPD conceptions of international trade and organisation expounded at this meeting have their origin in their parties' peace programmes developed during the First World War. Whereas the parties had contrasting policies on trade before 1914, by 1918 both asserted that economic protectionism leads nations to war. Disappointed by the emasculation of US President Woodrow Wilson's proposals for a liberal peace, in the 1920s and 1930s they developed a woeful narrative of alternatives not taken. These alternatives, which married free trade with regional organisation, became fixtures of interwar congresses, international meetings and party programmes. During the Second World War these narratives passed through personal contact and ideological affinity from one generation of party leaders to another. The article reinterprets SFIO and SPD support for the European Economic Community (EEC), a six-nation common market, by highlighting a long tradition of socialist thought on trade liberalisation in transnational and national spheres. This approach contributes focus and precision to the more abstract discussions of interwar ideas of 'Europe' in studies by Willy Buschak and Tania Maync. 3 It also accomplishes several historiographical innovations. First, it rebuts claims that SPD support for the EEC constituted what Gabriele d'Ottavio calls a 'shift in the SPD's European policy', 'a conversion' according to Rudolf Hrbek, an 'about-turn' in Detlef Rogosch's phrasing, or a rejection of former SPD leader Kurt Schumacher's legacy, as Paterson argues. 4 The argument here agrees with Jürgen Bellers, who considers the SPD decision 'not as surprising as it seemed to the public at the time', but it does not credit the decision, as Bellers and d'Ottavio do, to the reformist impulses that led to the 1959 Bad Godesberg party platform. 5 It concurs with Talbot Imlay's claim that the decision represented 'not a new departure as much as the logical outcome of earlier developments', but the decision was less a 'foregone conclusion' than Imlay suggests. 6 The article publishes excerpts of internal SPD debates where the crucial decisions on the EEC Treaty were made in 1957. Further, the analysis follows Imlay's urging that scholars take international socialist meetings seriously, unlike Richard Griffiths (and others) who argues that they were 'inconclusive wrangle(s) . . . around pre-conceived, pre-rehearsed positions'.
7 The discussions were indeed often inconclusive, but focusing on loose coalitions rather than unanimous compromises reveals important achievements on issues of regional trade liberalisation. Laurent Warlouzet rightly argues that Socialist Prime Minister Guy Mollet's contribution to the EEC 'has long been ignored' in existing historiography. 8 Not only did it matter that a 'pro-European' government negotiated the Treaties of Rome, as Gérard Bossuat and Craig Parsons stress, but this article also demonstrates how Mollet's policy embodied a largely unbroken continuity in SFIO thought on regional trade. 9 The common market offered a framework for post-war cooperation between the French and German governments, the states with the largest populations and most important economies in continental Europe, and inaugurated the French-German 'motor' that has fuelled European integration until the present day. Based on fresh research in national, socialist international, party and private archives, this article explores why the SFIO and SPD, the largest parties of the non-communist left, voted for the EEC treaty. Though Julia Angster and Michael Held discuss transatlantic networks and Keynesianism in the 1930s-1960s and Christian Bailey traces the interwar origins of the SPD's Ostpolitik, they do not focus on international trade or the EEC. 10 This continuity in international economic policy, though, is essential for understanding party responses to the EEC. Without their votes, there was no majority in either France or Germany to ratify the common market. Exploring party-level continuities is therefore indispensable not only for explaining socialist policies on European integration, but for analysing why a common market came into being in the first place. Socialist votes for the EEC were not inevitable consequences of interwar ideas. Nonetheless, interwar economic conceptions, reinforced at transnational socialist meetings, provided legitimacy and inspiration for socialists to endorse a European common market in 1956-7.

French Socialists, German Social Democrats and the Economics of Peace

In 1889 the Second International, the Universal Peace Congress and the Inter-Parliamentary Union (IPU) all held their founding congresses. A peace movement dominated by liberal activists proposed international organisations to secure peace among nations through binding arbitration, freedom of commerce and a 'United States of Europe'. 11 Socialists heaped scorn on these 'bourgeois apostles of peace', in the words of SPD theoretician Karl Kautsky, but also embraced many of their ideas. The 1907 Second International congress supported binding arbitration. In 1911 Kautsky wrote that 'there is only one way' to 'ban the spectre of war': 'the union of the states of European civilisation in a confederation with a universal trade policy, a federal Parliament, a federal Government and a federal army - the establishment of the United States of Europe'. 12 The SPD's peace resolution during the First World War called for a 'supranational organisation' but, fearing a 'victors' peace', the party was ambivalent about Wilson's 1919 proposal for a League of Nations, as Ulrich Hochschild demonstrates. 13 Radicals in the left-wing Independent Social Democratic Party (USPD) opposed Wilson's proposal, but its moderate wing welcomed it, though it preferred a supranational 'European Bundesstaat', or federal state.
14 In France, the SFIO embraced international organisation as a guarantor of peace at its 1915 congress and endorsed a 'League of Nations' the following year. French socialists who were not drawn to Vladimir Lenin's call for international communist revolution generally rallied around Wilson's vision. When the war ended, SFIO officials demanded a 'socialist federal Republic of the United States of Europe'. 15 The SPD turned to Wilson as well, hoping for a mild peace. The SFIO and SPD emerged from the war espousing the classic liberal assertion that free trade promotes peace among nations. This consensus was a product of the war. Previously, their views on international trade had little in common. The SPD was an overwhelmingly urban, working-class party. French socialists, in contrast, built firm roots in agrarian France. The parties responded differently when their governments increased agricultural tariffs from the 1870s to the 1890s. In France, socialists were 'flexible and pragmatic' on tariffs and courted the farming vote. Protectionism became part of France's 'liberal-democratic tradition'. 16 To German social democrats, however, tariffs represented the power of East-Elbian agrarian estates. The burden of tariffs on urban consumers was higher in Germany than in France, feeding the SPD's class-based analysis. Hochschild perceptively argues that what distinguished the SFIO among French supporters of the League was that 'it understood [it] not only as a political and military organisation to prevent war but also intended it to be for economic affairs', though the argument here is that socialists considered international economic organisation integral rather than ancillary to their anti-war programme. 17 A 1916 SFIO resolution proposed that a League of Nations prevent 'prolonging the disasters of the European war in an economic war' and dismantle 'excessive protectionism'. 18 The SPD resolved in a memorandum to an aborted 1917 international socialist meeting that 'the peace treaty should . . . prevent the military war from being prolonged by an economic war' and 'gradually eliminate protectionism' by 'suppressing all restrictions of a tariff or commercial nature'. 19 A socialist party conference of the Entente powers in 1918 championed 'a League of Nations, which implies compulsory arbitration, in order to reach general disarmament, and free trade in order to remove possible causes of conflict'. 20 Socialist peace programmes thus merged support for international organisation and free trade. This convergence facilitated the rebuilding of inter-party relations after the war. In 1919 an international conference assembled in Bern to offer a socialist alternative to the peace emerging from Paris. This was a dramatic meeting. For the first time since 1914, French socialists met their German social democratic counterparts. Tempers flared as the delegates debated the emotional question of German responsibility for the war. Yet there was unanimity at the conference on free trade and international organisation. The Bern resolution envisioned the League as an international economic organisation invested with powers to regulate interstate trade, approve or veto tariffs and 'supervis[e] the world production and distribution of food and primary resources'. 21

Socialist Free Trade: Peace and Economic Renewal

After 1919 the SFIO and SPD advocated central planks of liberal internationalism.
Daniel Laqua emphasises the 'blurred boundaries between' interwar 'socialist and liberal internationalisms', but he does not explore the economic dimension of these socialist 'politics of peace'. 22 Stefan Feucht analyses SPD foreign policy under the Weimar Republic, but economics takes a back seat to the Baltic question, the Ruhr crisis, the Locarno treaty and disarmament, as it does in René Girault's treatment of SFIO parliamentary leader Léon Blum's European policy. 23 This article will demonstrate that economics was in fact central to socialist discussions on peace in the 1920s. In its resolution rejecting the Versailles treaty, the SFIO declared that 'tomorrow like yesterday, tariff barriers will separate territories . . . competition will recover the bitterness of before [and begin again] the historical cycle: commercial rivalry, diplomatic tension, unleashing of war'. 24 In 1921 Blum supported extending most favoured nation trading status to Germany to promote Franco-German reconciliation. 25 For the SPD's leading economic thinker and two-time German Finance Minister, Rudolf Hilferding, 'this politics of disrupting international traffic . . . the import-export ban, the high tariffs are so much more dangerous [because] we know from historical experience that they fan the flames of state conflicts and increase the likelihood of war'. 26 Trade liberalisation contributed to three socialist goals: a peaceful international system, economic modernisation and, especially for the SPD, lower consumer prices. Modernisation seemed imperative to compete with the US internal market, which experienced impressive growth in the 1920s. The SFIO developed a narrative of an unambitious, lacklustre French industrial class obtuse to the requirements of the international economy. Alexandre Bracke, the idol of post-war SFIO leader Guy Mollet, railed against 'economic Malthusianism' in French industry as early as 1919. 27 The German Metal Workers Union contended that post-war cartels in heavy industry, discussed by Wolfram Kaiser in this special issue, stunted technological modernisation. 28 Trade union federations also pushed socialists to support free trade. The General Federation of German Trade Unions (Allgemeiner Deutscher Gewerkschaftsbund; ADGB) supported a European internal market to lower production costs and to preserve peace. 29 The General Confederation of Labour (Confédération générale du travail; CGT), France's largest union, favoured 'a diffusion of products throughout the world by means of rapid and free trade'. 30 Under the impetus of CGT leader Léon Jouhaux, the International Federation of Trade Unions voted to end 'economic nationalism' and to remove tariffs and subsidies for 'doomed' industries, 31 preferences that the International Labour Organization also adopted, as Lorenzo Mechi demonstrates in this special issue. International meetings nurtured this developing consensus into a core tenet of the socialist politics of peace. The USPD and SPD reunification in 1922 precipitated the re-founding of the Labour and Socialist International (LSI) in 1923. Invitations to its founding congress in Hamburg demanded that all parties accept the 1922 Hague Peace Congress's resolutions, thereby formalising the rapprochement between liberal and socialist internationalism. 32 The LSI's first resolution stated that 'the Peace Treaties violate all economic principles . . . unrestricted protectionism . . . has balkanized economically a Europe rent in pieces, and . . .
added to the catastrophe', concluding that 'labour must . . . fight against protectionism and in favour of free trade'. 33 When the French parliament contemplated new tariffs in 1927, the SFIO contacted the socialist parties of Belgium, Germany and Switzerland to form a united front against protectionism. 34 A 1927 SPD-ADGB resolution on the World Economic Conference listed three demands, the first of which was 'the removal of restrictions on international trade'. 35 Fritz Naphtali, an economic expert, wrote the SPD's proposal for the Third LSI Congress in 1928. He demanded 'the removal of restrictions on international trade, in particular inter-European trade'. 36 The Weimar-era SPD presented itself as defender of a free-trade fortress besieged by economic elites pursuing their interests at the expense of working-class consumers. Tariffs fuelled internal polemics over participation in coalition governments. 37 In the SPD's last period in office, in 1928-30, frustrated SPD leaders were unable to break the protectionist alliance in the government's Foreign Trade Committee. 38 Party leaders were on the defensive at party congresses, beseeching delegates to understand that they could not block the Reichstag majority's support for higher tariffs. The situation was more complex in France, where conflicting interests continued to influence SFIO trade policy. 39 The SFIO voted for tariffs on sectors as diverse as textiles, shoes, coal and cars. 40 However, the socialists behind these measures did not challenge the party's economics of peace, asserting instead that war damages, periodic or structural crises and unfair competitive practices warranted temporary protectionist measures. 32 Despite pragmatic concessions on tariffs, most revealing is how the SFIO and SPD responded to the Great Depression. When political and economic liberalism collapsed between 1929 and 1933, the SFIO and SPD became the largest political forces in their countries committed to liberalising international trade. Both vigorously supported a moribund 'tariff truce' in 1930. SFIO economic experts lamented the world economy's regression into a 'mercantile system', abandoning the 'conquest[s] of modern society'. 41 In his notes for the Fourth LSI Congress in 1931, Blum listed 'lowering tariff barriers' as an 'essential condition for an amelioration of the crisis'. 42 Hilferding's proposal stated that 'the war was the launching point for economic nationalism . . . protectionist policies, especially the constantly increasing tariff walls'. He called for 'an international tariff peace pact' and a 'convention to remove tariffs for single goods'. Socialists, he said, should 'support all efforts to build a single European economic area free from tariff walls'. 43 The next year Rudolf Breitscheid, the SPD's parliamentary leader, dedicated a significant portion of his speech to a LSI conference on disarmament to international trade. 'We experience with a shudder', he said, how 'ever more means are found to close a country against others. . . . We know how these trade disputes are roots for political disputes, political distrust, that contribute to raising walls between states instead of tearing them down'. 44 By 1933 world trade had collapsed to half its 1929 level. Once in power, the Third Reich established an unprecedented system of import restrictions.
After the French Popular Front won the 1936 elections, SFIO Prime Minister Léon Blum concluded several commercial accords in an attempt to alleviate international tensions, including with Nazi Germany, as Gordon Dutter explores, but Blum soon concluded that trade concessions would not lure Adolf Hitler's government from its path towards war. Blum's next government instead undertook a massive programme of French rearmament. 45

Socialists, International Organisation and Economic Institutions

At the second LSI congress in 1925 in Marseilles, Hilferding argued that 'we must not only desire peace but organise it. Economic competition between nations for the conquest of markets must be replaced by cooperation', comments to which Blum expressed his 'entire agreement', continuing that, 'again in agreement with Hilferding . . . we must counter national sovereignty with international organisation' by granting the League of Nations 'super-sovereignty above states'. 46 The congress resolved that an 'expanded market is . . . incompatible with . . . protectionism . . . the Congress thinks that we must move towards organised exchange'. 47 Hilferding and Blum expressed similar thoughts at the LSI in 1931. Hilferding wrote, for instance, that 'the removal of unhealthy protectionism alone is not enough. On top of this is needed international cooperation under the leadership of the League of Nations and the International Labour Organization . . . to replace the chaos wrought by economic nationalism with a well-planned order of world-wide exchange.' 48 Hochschild discusses how, as early as 1919-20, socialists thought that the League was inadequate to meet the challenges of peace, though he downplays the economic dimension of their critiques. The League had no executive powers to resolve economic problems, and Germany was excluded (it joined in 1926). A SFIO official wrote in 1919 of his 'disillusion': 'we cannot find the generous spirit of Wilson's messages, nor the necessary provisions for the League's composition, action, and role'. 49 SPD leader Hermann Müller considered the League a 'shameless humbug'. 50 For his party the post-war settlement was a disaster. Thrust into power with Germany's defeat, it faced the bitter task of signing a peace treaty universally reviled in Germany. Nonetheless, after 1921 the SPD supported German membership in the League on the basis of an 'equality of conditions'. 51 Interwar SFIO and SPD leaders sought international remedies for the reconstruction of war-damaged territories, for reparations, for raw material shortages and for agricultural and industrial crises. Free trade required international institutions to peacefully order economic relations among nations and mitigate negative domestic repercussions. Ernest Poisson, a SFIO delegate to international meetings, argued in 1919 that Allied wartime boards to distribute food and primary materials should serve as 'the first embryos of an international trade organisation'. 52 Former SFIO Minister Albert Thomas called for an extension of 'the role of the League of Nations in the economic sphere' and for the 'international control of commerce'. 53 Soon after, Thomas became Director of the International Labour Organization (1919-32). Under SFIO influence the Lucerne international socialist conference resolved in August that the League should supervise 'credit, navigation, food, and primary resources'.
54 A SFIO newspaper aptly described socialist views on international trade as 'a synthesis that borrows from free trade the notion of a world market and from protectionism its notion of a directed economy'. 55 A 'Collective International Economic Council' would 'regulate consumption, international production, currency and transport relations, [and] raw material distribution'. 56 A year earlier, Blum proposed extending the League's programme for Austria to all of Europe: 'an international issuing institute, a system of credit for countries incapable of consuming or producing, perhaps an international money supported by international taxes or loans'. 57 When the depression struck, the SFIO appealed for an 'international bank' to serve as 'the central financial organism of the future federated Europe'. 58 Pre-war 'free traders progressively convert[ed] to the regional solution', as Eric Bussière argues, 59 and socialist internationalism merged with this evolution of liberal internationalism. In 1925 the SPD became the first major European party to enshrine the 'United States of Europe' in its party programme. For Breitscheid, the aim was 'a European customs union'. 60 SPD chair Otto Wels proposed a 'European parliament' in 1921 and then a United States of Europe at the first LSI Congress. 61 When Germany shed the shackles on its trade sovereignty established by the Versailles Treaty in 1926, the SFIO, the Belgian Workers Party (POB) and the SPD met to discuss the future of European trade. Their resolution called for a European customs union. 62 Socialist statements supporting free trade, however, were almost always followed by demands for more powerful international institutions, often modelled on interventionist wartime economies. In his 1928 LSI speech Naphtali regretted that 'right after the war a revival of liberal views came about as reaction against the war economy'. 'Meanwhile', he continued, 'almost everyone now recognises that the hardest problems . . . can only be solved through . . . national and international organisations'. The resulting LSI resolution signalled socialists' disappointment with the 1927 World Economic Conference and adopted the SPD's call for an 'International Economic Office' under the League that would 'supervis[e] trusts and international cartels'. 63 In 1930 Naphtali proposed that the LSI appoint an 'international secretary for economic policy' who would reside in Geneva to lobby the League for the LSI's views on 'international tariff policy . . . cartels [and] agricultural co-operation'. 64 Blum sympathised with French Foreign Minister (and former socialist) Aristide Briand's 1929-30 call for a European customs union but criticised its vagueness and reaffirmation of national sovereignty. The Briand Plan, lacking enforcement mechanisms, seemed inadequate to address the growing tensions of the time. For the SPD the Briand Plan 'was more a general conception than a concrete proposal', but a 'healthy' idea of 'great worth'. 65 The SFIO and SPD also supported smaller integration projects within Europe including, in principle, a customs union between Austria and Germany. When proposed by the German government in 1931 without international consultation, though, it was a 'deplorable' act of aggression, prompting both parties to oppose it. 66 In the 1930s 'planning' ideas percolated within socialist parties and trade unions as alternatives to the apparent dynamism of Soviet and fascist examples, though they also met with suspicion or rejection.
When crisis struck French coal, the SFIO supported import quotas 'for the moment' but preferred a national coal board to fix prices and organise trade. 67 The best solution, though, was an 'international organisation of the coal industry'. 68 When crisis hit French agriculture, socialists voted 'in desperation' for minimum prices and tariffs but regretted their impact on consumers. 69 In 1931 Adéodat Compère-Morel, author of the SFIO agrarian programme, wrote that 'it is necessary that in the near future the representatives of European countries . . . create a vast International Wheat Office to end these disorganised and dangerous oscillations of wheat prices'. 70 The Popular Front government, in the absence of international solutions, established a Wheat Office in 1936 as a national interventionist body.

Reclaiming Europe for Socialism: Exile, Resistance and War

When the German military won the 1940 Battle of France, French socialists dispersed to Algeria, London, home or underground. The party organisation dissolved. Soon after Blum was imprisoned, he wrote A l'échelle humaine, in which he portrayed the League of Nations as a 'magnanimous and magnificent creation'. A post-Nazi 'world must draw tomorrow a lesson from its defeat' by creating a 'Supreme State' with powers 'distinct from and superior to national sovereignties'. Invested with 'means to borrow, [its own] budget', it 'must regulate the problem of customs, manage currency crises perhaps with an international monetary institution' and 'undertake massive works of international utility'. 71 Slowly, a small socialist resistance formed around the Socialist Action Committee (Comité d'action socialiste; CAS) in the Vichy South and around Libération-Nord in the German-occupied North. Within these organisations socialist ideas about international organisation and trade passed from interwar blumistes and anti-fascists to the generation of post-war SFIO leaders. CAS leader Daniel Mayer was a Blum disciple. Important figures in the Nord and Pas-de-Calais federations, home of Libération-Nord, strongly supported Blum's interwar foreign policy and argued for international organisations with executive powers and for free trade. 72 During the war they rubbed shoulders with a younger generation of northern resisters, including Gérard Jaquet and Christian Pineau, who survived the war to become forceful advocates of European integration. Ensconced in this web of personal ties, Blum's vision became the template for the SFIO's 1943 resistance manifesto laying out the party's objectives for a post-war peace. 73 The manifesto called for a 'super-state to which nations will cede part of their sovereignty', in particular over 'the distribution of primary resources, emigration, transportation, working conditions, hygiene, public works, customs legislation . . . and monetary exchange'. 74 This 'political confederation must have its own government . . . a budget, tax resources, borrowing capacities'. The SFIO resistance took up Blum's call to re-appropriate 'Europe' from its Vichy and National Socialist usurpers. It endorsed a 'United States of Europe' as a step towards a 'United States of the World', with the power to 'supervise the problem of customs'. 75 The clandestine press promoted 'unions of federation . . . of neighbouring states . . . to suppress monetary, customs, and military borders and to manage their resources in common'.
76 It also saw 'joyful' signs of convergence with exiled German socialists, reprinting a 1944 resolution of the Organisation of German Socialists of Great Britain that stated, 'we advocate a Federation of all the peoples of Europe because full national sovereignty is no longer compatible with the economic and political conditions of Europe'. 77 German social democrats fractured into splinter groups during the National Socialist dictatorship. Boris Schilmar discusses the broader exile community's diverse discourses on Europe, Paterson the exiled socialists' 'general agreement' on European federation and Bailey the importance of the International Socialist Combat League (Internationaler Sozialistischer Kampfbund; ISK) in carrying support for a united Europe into the post-war period. 78 This section builds on their work by demonstrating continuities in social democratic thought on international trade and organisation. It departs from Paterson by rejecting the implicit break he sees between exiles and the 'nationalist' post-war SPD and, though it agrees with Bailey on this point, it finds former ISK figures less relevant for the SPD's ECSC policy than Bailey suggests. Kurt Schumacher emerged from twelve years of imprisonment and hiding to lead the SPD. Ollenhauer became Schumacher's deputy and replaced him after Schumacher died in 1952. Despite the long hiatus of National Socialist rule, post-war German social democrats had maintained their affective ties to their party. Emerging from war, the SFIO promoted trade liberalisation within international organisations. When it came to economics, the party most often conceptualised regional institutions. With a socialist-led coalitional government in power, Blum wrote that liberalisation 'creates the conditions for peace, whereas tariff wars prepare the spirits for war'. 84 The government also launched a state-directed modernisation programme. Socialists insisted that there was no 'contradiction between the progressive return of free foreign commerce and an internal economic regime founded on the direction of the economy [dirigisme]'. 85 The party continued to propose supranational institutions. In supporting the Marshall Plan the SFIO announced that 'dirigisme is absolutely indispensable at the international level' and welcomed the US government's demand that European governments coordinate their recovery programmes in what became the Organisation of European Economic Cooperation (OEEC). 86 It supported executive powers for the OEEC, which, it argued, should not only liberalise, but organise trade through 'the unification of taxes, salaries and social security legislation'. 87 Plagued by coal shortages, the SFIO urged an 'internationalisation' of European raw materials and heavy industry. François Tanguy-Prigent, SFIO Agricultural Minister in 1944-7, drew inspiration from interwar SFIO proposals to call for a European agrarian union in 1949, a year before an official French proposal for a 'green pool'. 88 At the SPD's founding post-war congress, Schumacher resurrected Wels's call for a 'United States of Europe'. The SPD supported the Marshall Plan and German entry into the OEEC on the basis of an 'equality of conditions'. Kreyssig renewed his support for a European customs union. 89 Internal policy documents in 1949 emphasised the 'necessity of a European-regional connected economy' and 'striv[ed] for a true world economy on the basis of a regional (not nationstate) connected economy'.
90 Another document, titled 'Supranational Economic Relations', favoured 'planned, supranational economic relations as a foundation for European-Union'. 91 When Dutch Labour colleagues attended a SPD parliamentary meeting in January 1950, Schumacher asserted that Europe should lower tariffs and create a common currency, a united dollar pool and a European division of labour. 92 Nonetheless, growing domestic and international anxieties tempered SFIO and SPD enthusiasm for regional integration. The British and Scandinavian governments, governed by Labour and social democratic parties, refused to participate in supranational institutions. SFIO and SPD leaders worried about the submergence of socialism within continental institutions dominated by Christian democrats. The SPD fretted that new economic borders on the North Sea would stymie growth in German port cities, electoral strongholds. Both parties feared that a Franco-German tête-à-tête could spell disaster. Further, Schumacher believed that his party had to steal the thunder of anti-democratic forces by appealing to German national interests. When French Foreign Minister Robert Schuman proposed a supranational coal and steel community in May 1950, each party hesitated before the SFIO announced its support in June and the SPD its opposition in October. The SPD argued that French governments intended to 'colonise' Germany, an argument that placed it in an awkward position vis-à-vis the German Federation of Trade Unions (Deutscher Gewerkschaftsbund; DGB). DGB leaders supported the ECSC after bargaining with German Chancellor Konrad Adenauer for a law on workers' participation in the management of heavy industries. When, in 1949-50, OEEC governments negotiated trade liberalisation, French socialists complained that they should first integrate fiscal and welfare systems, otherwise 'anarchic and often disloyal competition' would prejudice French industries, which had '[higher] social . . . and . . . production costs'. 93 The SFIO shared widespread concerns about French economic competitiveness and a narrative that German industrial hegemony had driven Nazi expansionism. It insisted that international institutions protect French 'economic security' vis-à-vis Germany. Manufacturers' pressure convinced the SFIO to pull its support for a customs union with Italy and the Benelux countries in 1949 after Dutch leaders demanded the inclusion of the new West German state. Lacoste summarised the French government's predicament: the formation of a large European market is a necessity of our time because we are now facing industrial and commercial dimensions that largely surpass national dimensions. . . . We cannot leave Germany out. It is necessary therefore to have enormous guarantees. The export strength of the German economy is such that we strongly risk winding up with a flooding of the French market by German products. 94 Reticence towards trade liberalisation grew within the SPD as well. In 1950-1 West Germany developed a massive balance-of-payments deficit after it entered the European Payments Union (EPU), a multi-currency clearing house. SPD leaders called for trade liberalisation to halt until conditions improved. Yet even a leading proponent of the freeze, Erik Nölting, said that we are of course 'old fighters' for the idea of worldwide free trade.
We are against superfluous trade restrictions, we are for the integration of Western Europe as an economic union. 95 Significantly, the parties continued to support trade liberalisation in international conferences. The compromise 'Resolution on the Liberalisation of Trade' of the 1951 First Congress of the Socialist International (SI) in Frankfurt a.M. reflected the success of the SFIO, the SPD and other parties in beating back proposals by the British Labour Party. 96 This congress was preceded by economic expert meetings, the first of which published a resolution in March 1950 that rejected trade liberalisation. Both the SFIO and SPD worked to change this resolution. Ollenhauer attended a September meeting armed with a report that the SPD was more liberal on trade than the German government, which it accused of protecting special interests (clearly the agricultural sector). The SPD called on European countries to eliminate tariffs on whole sectors of goods. West Germany would eliminate agricultural tariffs if nations like France eliminated industrial tariffs. The goal was 'the preparation of a customs union'. 97 The SFIO, for its part, produced four documents for the December meeting discussed at the opening of this article. One document expressed indirect approval for the SPD's call to end luxury goods imports during the EPU crisis. The other documents clearly laid out the SFIO's desire for regional trade liberalisation. Lacoste's report, titled 'Trade Liberalisation', demanded multilateral rather than bilateral trade agreements and the 'elimination of quantitative restrictions on imports', and stated that 'we, socialists, we agree with . . . the goal of achieving . . . a single European market'. 98 The SFIO's International Bureau prepared another document supporting trade liberalisation that emphasised the importance of intra-European trade. Interestingly, given the outcome of the EEC negotiations, which included a twelve-year transitional period for the elimination of internal tariffs, the SFIO document concluded that 'perhaps one can fix a delay -that could be around a dozen years -during which political and economic measures can be taken to arrive at the final objective'. 99 As discussed above, the SPD and SFIO joined the two-thirds socialist majority that was 'very positive towards the liberalisation of foreign trade'. The British, Danish and Norwegian delegations opposed this statement and their countries did not join the EEC in 1957.

The EEC and Institutional Guarantees

The SPD made German reunification the centrepiece of its critique of Adenauer's government in the 1950s. During the party's raucous campaign against the European Defence Community (EDC) in 1952-4, anti-EDC discourses and critiques of 'small Europe' collided with older discourses supporting regional economic integration. All the while, SPD economic experts promoted trade liberalisation because it would 'multiply the economic strength of Western Europe, markedly increase real income and . . . working-class living standards'. 100 After the German balance-of-payments crisis subsided, the SPD promoted lower tariffs and the removal of all quantitative and administrative restrictions in a 1953 resolution titled 'Closer European Economic Cooperation'. 101 The SPD delegation to the ECSC Common Assembly also adopted a constructive attitude.
Birkelbach, a SPD delegate and future president of the assembly's transnational socialist group (1959-64), told his international colleagues that, whereas agreement on geopolitical issues was difficult, 'socialists can easily reach agreement on the concrete economic and social questions dealt with by the [ECSC] Assembly'. 102 In 1955 Ollenhauer joined Jean Monnet's Action Committee and supported its campaign for a six nation supranational atomic energy community. The SPD remained cautious, though, towards a six nation customs union because it feared that it might become a protectionist bloc, dividing Europe, already split in two, into three. French governments were far more resistant to trade liberalisation than Germany due to fears of industrial competition and widening French trade deficits in the EPU. Socialist leaders, in opposition in 1951-5, supported deeper economic integration but wanted institutional guarantees. The SFIO acknowledged French economic problems in a 1952 report to the SI but praised the EPU for 'maintain(ing) and develop(ing) intra-European commerce'. Further, 'it would be beneficial to orient ourselves towards surpassing the dilemma of trade liberalisation and bilateralism by proceeding further in the integration of Europe, that is to say, towards the creation of a homogenous space subject to the same planning'. 103 Pineau synthesised the SFIO's position in 1954, describing trade liberalisation as first of all enlarging the market by opening to goods produced in Europe, a notion that is at the base of most of our European conception. . . . On the European level, the free circulation of goods ought to include as corrective an organisation of production so that competition does not become murderous in the end for the states concerned . . . the term 'liberalisation of trade' should be opposed to 'protectionism' and not to 'organisation'. 104

NO. These controls are a consequence of the balance-of-payments deficit. Socialists . . . would like an equilibrium of these balances that would permit trade liberalisation. They know however that the balance of French trade has deep causes that will require a lot of time to eliminate.

The SFIO response continued: The French Socialist Party would like trade to be liberalised, first within Europe, then in a larger space. However, it thinks that this liberalisation requires conditions that are not currently met in France and which it would dedicate itself to fulfilling if it were to take power. 111 A month later, a left-leaning coalition won the French national elections. Mollet became Prime Minister in January 1956 after promising to pursue common market negotiations in his investiture speech. The new French leaders continued to insist that European institutions mitigate the negative consequences of trade liberalisation. Gradually, though, ever-moving targets became goals to achieve, rather than pretexts for delay. Mollet gave Pineau a green light to pursue 'guarantees' in tenacious negotiations in August-October 1956. When the talks stalled, Mollet assiduously marked up notes on the French negotiating position in preparation for a November 1956 meeting with Adenauer. 112 Mollet's pro-EEC position was strengthened by an ILO report that rejected making social harmonisation a precondition for a European common market, as discussed by Mechi, and ministerial reports arguing that social costs only had a marginal impact on price disparities between France and other ECSC economies.
113 A breakthrough agreement in Mollet's meeting with Adenauer included funds for retraining workers, reconverting uncompetitive enterprises, a European Investment Bank, equal pay for female workers and greater flexibility in the customs union's transitional stages in the event of economic difficulties. 114 The agreement contained fewer 'guarantees' than those sought by the French delegation, but far more than those envisioned at Messina. Mollet and Pineau considered them sufficient to implement their economic vision, rooted in interwar SFIO preferences, that trade liberalisation would foster peace and modernise France's economy. To ratify the EEC treaty they reconstructed the centrist majority that had dominated French policy-making until 1952, including the Christian democratic Mouvement Républicain Populaire, now in opposition, pro-European radicals and right-wing deputies who supported Mollet's hard line in Algeria. This validates the assessment of Bossuat, Parsons and Warlouzet that Mollet and Pineau were decisive for the EEC treaty, against scholars like Alan Milward and Andrew Moravcsik who downplay the importance of pro-integration actors. 115 This success, though, had as much to do with long-held socialist ideas on regional trade liberalisation as with the leaders' general pro-European attitudes. SPD leaders meanwhile pondered whether to support a six nation common market. Ollenhauer told the party that it could still reject the treaty but he tipped the scales towards the EEC, reflecting that 'it would be bad for the party to refuse in the economic field that which it has always pushed for'. 116 Wilhelm Mellies argued that economic integration would endanger prospects for reunification. 117 However, Wehner, who oversaw SPD policy on the German Democratic Republic, thought that the party 'must be for the European Economic Community and the customs union', though the association of colonial territories concerned him. In the end Wehner abstained. Later Mellies said the party should ratify the EEC Treaty for tactical reasons as federal elections approached. Tariffs were central to the SPD's internal discussion. Baade urged rejection, warning that ceding sovereignty over tariffs would likely result in higher tariffs. 118 In February 1957 SPD economic experts debated whether to make their support conditional on a prior agreement between a British-inspired free trade area and the EEC. The British government's clumsy attempts to derail the EEC negotiations informed the position the SPD adopted. 119 The party supported an OEEC-wide free trade area but SPD leaders knew that the British government had proposed it in order to pre-empt the customs union. Birkelbach told the meeting that:

Conclusion

Six nations ratified the Treaties of Rome in 1957. In retrospect, there was only a narrow window for success. In 1955 the Messina proposals did not have majority support in the French National Assembly. Nor did Charles de Gaulle, who became French prime minister in June 1958 and then President in December, support supranationalism. Adenauer's CDU/CSU won an absolute majority in September 1957. By that time the Bundestag, with SPD votes, had approved the EEC treaty over the votes of Adenauer's coalitional allies, the Free Democratic Party (Freie Demokratische Partei; FDP). The CDU/CSU could have ratified the treaty without SPD votes in autumn 1957 but, if the six nation process had been delayed to 1958, the treaty might have fallen victim to the disintegrative forces that buried the French Fourth Republic.
Short-term considerations offered incentives for the SFIO and SPD to support the EEC. After holding out for German reunification, by 1957 SPD leaders were more pessimistic than ever. The Soviet invasion of Hungary bolstered Adenauer's contention that only a 'policy of strength' could deter Soviet aggression. Schumacher had believed that opposing the ECSC and EDC would bring the SPD electoral rewards, but European integration was a political success by 1956-7. For Ollenhauer, opposing the EEC would be a liability in the 1957 election. In France, Mollet experienced a series of political reversals in autumn 1956. Socialist critics began to abandon him due to his government's disastrous military escalation in Algeria and the French-British-Israeli invasion of Egypt. Mollet no doubt urgently desired a foreign policy success when he met Adenauer to discuss European integration. Viewed in a longer perspective, however, SFIO and SPD decisions reflected party preferences for regional economic cooperation originating in the peace programmes of the First World War, when the parties advocated free trade and international organisations. In the 1920s socialists argued that tariffs, war by another means, should come under the supranational governance of the League of Nations or of a European customs union. After 1945 the Cold War division of Germany, the challenges of reconstruction and the recalcitrance of northern European governments towards supranationalism changed the post-war calculus. In 1955-7, however, older preferences for European economic cooperation rose to the surface. This time, when SFIO and SPD leaders contemplated a six nation common market, they prioritised socialist ideas about international trade and organisation passed down by a previous generation and still firmly entrenched in party ideology and programmes. These ideas continued to shape socialist approaches to European integration in the next decade. In 1958 de Gaulle suspended negotiations for an association between the EEC and a British-led free trade group. A rival European Free Trade Association (EFTA) of the so-called 'outer seven' countries was founded in 1960. In 1963 de Gaulle vetoed Britain's application to join the EEC, ignoring objections from a wide swath of European public opinion, including from French and German socialists. Despite these setbacks the SPD embraced the EEC and argued for stronger supranational institutions than those desired by Adenauer, as did the SFIO. The SPD also remained eager to liberalise beyond the EEC. The successful conclusion of the Kennedy Round negotiations under the General Agreement on Tariffs and Trade, which lowered tariffs between the United States and the EEC by an average of 35 per cent, mollified concerns that the community would become a protectionist bloc. In the late 1960s French socialist and German social democratic views on European integration increasingly diverged. Brandt, foreign minister (1966-9) and then Chancellor (1969-74), promoted a deepening and widening of the EEC. With Karl Schiller as finance minister, the SPD pursued a liberal approach to international trade, a legacy that Helmut Schmidt continued as Chancellor (1974-82). The SFIO, by contrast, was in decline in the 1960s, struggling to fashion a centre-left majority to win the presidency of the Fifth Republic. In 1971 the SFIO dissolved into the new Socialist Party (Parti Socialiste; PS), led by François Mitterrand. He supported the EEC but the party became increasingly critical of the 'capitalist' Community.
A socialist-communist alliance won the 1981 presidential election. President Mitterrand tried to build socialism within the nation state while keeping France in the EEC. When his project failed, Mitterrand 'turned' to Europe, giving impetus to the Single European Act and the Maastricht Treaty, landmark agreements that established today's EU.
Sharing economy in the smart city development

The concept of the sharing economy developed in the last decade is vastly underestimated. The study aims to substantiate the potential of the sharing economy in the development of smart cities. Based on a bibliometric analysis of research publications, it is shown that the concepts of the sharing economy and smart cities intersect in such areas as sustainable development, digital technologies, and the development of public goods. Three regression models have been built. We prove that the key parameter for the development of sharing economy services is the availability of free and fast access to the Internet. The development of some services, in particular carsharing, was found to be dependent on the size of the city, which explains the expediency of its development only in large cities and nearby territories. It is also shown that the impact of bicycle rental services, as well as of digital platforms of the sharing economy, does not depend on city size, and that these can be used to develop the public goods sector and to ensure sustainable development, respectively. In conclusion, using the case of Moscow and Saint Petersburg, we demonstrate that the development of these services has not been stable.

Introduction

"Smart city" and "sharing economy" are concepts that have been actively developed over the last 10 years in international practice. These concepts are based on the idea of a more efficient use of resources and call for an appropriate level of digitalization. It is important to note that the principle of sharing is not new, but it reveals new opportunities for social exchange under changing economic, social and ecological conditions. At the same time, according to some approaches, the sharing economy is considered a component of the circular economy, which confirms its contribution to the task of more efficient exploitation and distribution of resources [1,2]. However, some studies show that these concepts have different incentives for development [3]. A more detailed analysis of the sharing economy reveals economic, social and environmental aspects of this business model [4]. From an economic point of view, the sharing economy creates new forms of enterprise as well as additional sources of income. From an environmental perspective, the sharing economy, through the pooling of resources, contributes to addressing the problem of climate change. The social perspective concerns creating and strengthening social ties and forming communities focused on more efficient resource use [5]. Thus, the sharing economy, which has economic, environmental and social influence and is based on digital technologies, can be used to ensure sustainable development in cities. At the same time, the platforms and services of the sharing economy are often underestimated. The novelty of this business model and its rapid expansion into different economic activities gave rise to resistance, owing to the insufficient development of institutional mechanisms for its regulation. The advantages of the sharing economy can be revealed through special cooperation mechanisms implemented when applying this business model to the development of the urban environment.
Thus, the aim of this study is to substantiate the potential of the sharing economy in the development of smart cities. To achieve this aim, an analysis of the research literature on the topic was carried out; indicators characterizing the sharing economy in the assessment of smart cities were analyzed; regression models showing that access to high-speed Internet is a key parameter for the development of these services were built; and the dynamics of the identified indicators in Moscow and Saint Petersburg were shown.

Sharing economy in the context of smart cities development

The concepts of "smart city" and "sharing economy" are two key trends of the fourth industrial revolution. On the one hand, they were formed independently; on the other, they can be used together, in particular to address socially significant tasks. The concept "smart city" began to be used in the scientific literature in the nineties of the 20th century. When defining this term, researchers as a rule note two key aspects: 1) the application of information and telecommunication technologies (ICT); 2) improving the efficiency of urban infrastructure utilization. At the same time, in the course of developing research on this topic, the role of human and collective capital for the development of urban agglomerations is also highlighted [6]. According to Giffinger et al. [7], a "smart city" implies a "smart" combination of the abilities and activities of self-reliant, independent and conscious citizens. When studying this concept, Nam and Pardo singled out the institutional component in addition to the digital and human components [8]. In this study, we define the "smart city" as a concept of city governance based on the application of digital technologies, taking into account the participation of society in solving socially significant problems and aimed at improving the efficiency of urban infrastructure utilization.

The term "sharing economy" was introduced into research discourse by Lessig as opposed to the "commercial economy" [5]. The development of collaborative consumption is linked to the work of Rogers and Botsman, "What's Mine Is Yours: The Rise of Collaborative Consumption" [9]. It is important to note that the sharing economy is an umbrella term that includes various aspects: applying the access right, resource sharing, and using digital platforms for communication. For example, Acquier shows that the sharing economy is made up of three overlapping "organizational cores" - the "access economy", the "platform economy" and the "community-based economy" [10]. The principle of sharing can also be applied to various types of resources: material, financial, information and labor. This variety of areas of application indicates the gradual introduction of this principle into socio-economic processes, changing the behavior of economic agents and initiating the transformation of both informal and formal institutions. In addition, one of the drivers of sharing economy development was globalization, which created requirements for mobility and thereby stimulated the development of services based on the principles of sharing in related activities.
The development of the sharing economy is also associated with the search for ways to work around existing compliance rules in order to achieve a final result. On the one hand, this points to shortcomings in existing processes. On the other hand, it makes it possible to use new forms of interaction and to form new niches [11], accelerating changes in the institutional environment.

Thus, both the sharing economy and smart cities are linked from the standpoint of technologies, institutions and societies. At the same time, the formation of these concepts is based on partnership and network relations. As Coe et al. put it, "…community partnerships, not wires, are the fibers that connect smart communities" [12]. The dominance of digital technologies or of sharing in interactions is shaping new business models and requiring specific regulatory measures. Comparing the components of a smart city (smart economy, smart people, smart governance, smart mobility, smart environment, smart living) with the services of the sharing economy, Koźlak shows that all areas of sharing (accommodation, workspace, mobility and transport, financing, food, general goods, skill/talent) satisfy the tasks of a smart economy, 6 out of 7 areas correspond to "smart environment" and "smart living", and the components "smart governance" and "smart mobility" correspond to 1 area of sharing [3]. A bibliometric analysis of papers indexed in Scopus showed that there were 391 publications with the keywords "sharing economy" AND "smart city" from 2014 to 2023. A more detailed analysis of these publications made it possible to select 44 publications that discuss how to use sharing economy projects for the development of the urban environment.

To analyze these works we used the VOSviewer software. Figure 1 shows the resulting map of related topics. From the analysis of this map, four groups of publications can be distinguished. The first cluster of publications is related to the technical aspects of these concepts, in particular blockchain, networks, smart contracts, etc. The second cluster concerns studies on transport development in the urban environment, reflecting the fact that transport companies use this business model quite actively (carsharing, taxi, bicycle sharing, etc.). The third cluster overlaps with the first and includes big data processing, one of the key conditions that allows the sharing economy model to be applied to urban development. The fourth cluster of keywords characterizes studies that relate directly to the development of smart cities, including aspects of sustainable development, the public goods sector, etc.

The most cited paper [13] shows the potential of using blockchain technology for the development of smart cities. Anthony discusses the issues associated with developing a decentralized data marketplace for smart cities, suggesting recommendations to enhance the deployment of decentralized and distributed data marketplaces [14]. He notes the emergence of digital data markets, but shows that market data raises security, efficiency and privacy concerns. In addition, the problem of ensuring trust and fairness between the owners and sellers of data during their exchange becomes relevant. To solve this problem, he proposes the design of an ecosystem consisting of a blockchain-based data market with support for the Message Queuing Telemetry Transport (MQTT) protocol, which makes it possible to ensure trust and fairness between data owners and sellers.
Using the example of Airbnb development in London, Ferreri and Sanyal [15] show how the development of short-term rental services stimulates the authorities to develop new regulatory rules, as well as to consider proposals for the use of algorithms and big data as a means of city management. Rahman et al. described in detail the structure of building services for the sharing economy, taking into account blockchain technologies, the Internet of Things, and artificial intelligence. With the support of the proposed infrastructure, a future smart city will be able to offer the services of a cyber-physical sharing economy through IoT data. Using smart contracts, the platform is able to provide complex spatio-temporal services at a global level without requiring a central verification authority [16]. Ferraro, King and Shorten present a scheme for applying blockchain technology as a social compliance control mechanism in smart city environments [17]. In their analysis of predictors of sharing economy development, Akande, Cabral and Casteleyn revealed that economic benefit is one of the key factors for participants in the sharing economy. However, sharing property with strangers comes with some risk, which negatively impacts people's propensity to share [18]. Kowalska and Wolniak show that certain forms of the sharing economy function best in large cities; an obstacle to the development of the sharing economy is the non-market placement of goods and services and a strong attachment to private property.

In addition, the concept of the sharing city is discussed in the scientific literature. An important role in the implementation of both concepts is played by citizens (a bottom-up approach) and by social capital [8,19]. Zvolska et al. [19] consider the potential of city sharing using the experience of Berlin and London. The authors show that both cities indirectly support city sharing through smart agenda programs that promote ICT-enabled tech innovation and start-ups. However, there is a lack of programs, policies, support measures and regulations that directly target urban resource sharing initiatives. In addition, public authorities in Berlin are skeptical of organizations for the sharing of urban resources, while London is more supportive. Communication and social participation are important in the processes of integration of local communities, local development and city management. Continuing this theme, Bernardi and Diamantini [4] explore how local governments manage the sharing economy to form a sharing city. Using the analysis of Milan and Seoul, the authors show that both cities are developing three key dimensions (economic, technological and human) of the sharing paradigm to create a common city. While the cities choose different approaches, institutionalized cooperation mechanisms remain common to both. Jonek-Kowalska and Wolniak formulated and tested three hypotheses about the impact of city size and per capita income on municipal support for the sharing economy. In addition, the authors verified whether the degree of municipal support affects the differentiation of the implemented forms of the sharing economy [20]. Noesselt presents the Chinese experience of sharing economy regulation in smart cities and shows that regulation efforts, contrary to conventional top-down steering approaches, rely on central-local collaboration and network coordination involving multiple actors operating under the "shadow of hierarchy" of the central party-state [21].
Using similar queries in the Russian Science Citation Index, we found 16 publications that highlight the application of sharing economy projects to the design and development of the smart city environment. These publications can be divided into two groups: 1) publications related to the organization of a "smart city" based on the principles of sharing; 2) publications covering the application of sharing economy services to the development of the urban environment. Vulfovich, for instance, calls into question whether it is necessary to develop the city governance system as a "platform" for the interaction of the multiple actors that have a real impact on life processes in the city and the quality of life of residents [22]. Buletova and Sokolov highlight how smart city technologies affect the development of transport in million-plus cities in Russia [23]. The authors note that the effects of introducing smart city transport technologies in Russian million-plus cities can consist of improving the environmental and transport safety of living, the emergence of new jobs through the development of services and sectors of the sharing economy, the growth of initiatives coming from public groups, the inclusion of cities actively developing smart city transport technologies in national and international economic projects that allow for high growth rates of gross regional product and the expansion of markets for the products and services of the regional economy, etc. [23].

Methodology

At the first stage of the study, we identified what data are available to measure the sharing economy at the smart city level. For this analysis, the IMD Smart City Index [Smart City Index, 2021. Available at: https://imd.cld.bz/Smart-City-Index-2021/6/] was used, with data for 2021. The authors of this index took into account three indicators that characterize the services of the sharing economy. The initial data used in this study are presented in the Appendix. The index is based on the assessments of citizens and their agreement with certain statements. Components of smart city development are evaluated in terms of technology and structure. When calculating the index, components such as Health and Safety, Mobility, Urban Development (parks, bars, museums), Opportunities (education and work) and Governance were evaluated. A list of problems that respondents identified in each of the cities was also shown separately. As an example, the "Mobility" component contains two points from the "structure" direction, namely the degree of citizens' agreement that "Traffic congestion is not a problem" and that "Public transport is satisfactory". In the "technology" direction, the mobility component includes: 1) "car sharing apps have reduced congestion"; 2) "apps that show free parking space have reduced travel time"; 3) "bike rentals have reduced congestion"; 4) "Online planning and ticketing has made public transport easier." Items 1 and 3 are related to the sharing economy. Another question characterizing the sharing economy is also placed in the "technology" direction: "a website or application allows residents to easily give away unwanted items." In the "structure" direction it corresponds to the question: "processing services are satisfactory." Other questions in this index also show how Internet technologies can be used for the development of public goods and communication with the authorities; however, they were not directly attributed to the sharing economy.
At the second stage, a correlation analysis was carried out to determine the impact of individual characteristics of a smart city on the development of the sharing economy. This analysis made it possible to form a set of factors that influence the development of the sharing economy services mentioned above. At the third stage, regression analysis was used to show how individual services of the sharing economy are related to the parameters of smart city development. As a result, three regression models were built that describe the impact of smart city parameters on sharing economy development. Finally, the dynamics of the development of these services in Moscow and Saint Petersburg are presented using data for 2019-2021.

Results

The correlation analysis showed that several parameters really affect the sharing economy indicators. These include citizens' assessments of the statements: 1) "a large proportion of everyday payment transactions are non-cash"; 2) "free public Wi-Fi has improved access to city services"; 3) "the current speed and reliability of the Internet correspond to the needs of the connection"; 4) "processing services are satisfactory" [these indicators were presented in the Smart City Index 2021. URL: https://www.imd.org/smartcity-observatory/home/]. In general, the results obtained are quite expected, correspond to theoretical studies, confirm the role of IT technologies and other digital solutions in the development of both smart cities and sharing economy projects, and reveal the potential of sharing economy projects to solve public sector problems.

The results of the regression analysis made it possible to form three models. The first model has the following form (1, Table 1):

Y1 = 0.27 · X1^0.98 · X2^0.063 (R² = 0.43, p < 0.001) (1)

where Y1 is citizens' assessment of the statement "car sharing apps have reduced congestion"; X1 is citizens' assessment of the statement "the current speed and reliability of the Internet meet the needs of the connection"; and X2 is the population size. This model shows that a favorable impact on the city's infrastructure, in particular on road congestion, is determined not only by the availability of information and communication technologies but also by the population. This fact confirms the presence of carsharing mainly in large cities.
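Since the estimated coefficients enter multiplicatively, equations (1)-(3) appear to be power-law models, which are conventionally estimated by ordinary least squares after a log transformation; the paper does not state its estimation procedure, so the following Python sketch is only a plausible reconstruction. All inputs below are synthetic placeholders (the real inputs would be city-level IMD Smart City Index scores and populations, which are not reproduced here), and the sample of 118 cities is an assumption.

```python
# A minimal sketch of fitting a multiplicative model Y1 = a * X1^b1 * X2^b2
# by OLS on log-transformed variables, with the quality checks the paper
# reports (multicollinearity, residual autocorrelation, heteroscedasticity).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 118  # hypothetical number of cities

# Synthetic stand-ins: X1 = internet speed/reliability score,
# X2 = city population, Y1 = "car sharing apps have reduced congestion".
X1 = rng.uniform(40, 90, n)
X2 = rng.uniform(2e5, 1e7, n)
Y1 = 0.27 * X1**0.98 * X2**0.063 * rng.lognormal(0.0, 0.15, n)

# Taking logs turns the power law into a linear model:
# ln Y1 = ln a + b1 * ln X1 + b2 * ln X2 + eps
design = sm.add_constant(pd.DataFrame({"lnX1": np.log(X1),
                                       "lnX2": np.log(X2)}))
fit = sm.OLS(np.log(Y1), design).fit()
print(fit.summary())  # slope coefficients play the role of the exponents

# Variance inflation factors for the multicollinearity check ...
print("VIF:", [variance_inflation_factor(design.values, i)
               for i in range(1, design.shape[1])])
# ... Durbin-Watson for residual autocorrelation (values near 2 are fine) ...
print("Durbin-Watson:", durbin_watson(fit.resid))
# ... and fitted values next to residuals for a visual
# heteroscedasticity check, as described in the text.
print(np.column_stack([fit.fittedvalues, fit.resid])[:5])
```

On real index data, the two printed slope coefficients would correspond to the exponents 0.98 and 0.063 of equation (1), and the same diagnostics cover the quality checks reported for all three models below.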
The second model characterizes the role of bicycle sharing in reducing traffic congestion (2, Table 2):

Y2 = 0.83 · X3^0.93 (R² = 0.5, p < 0.005) (2)

where Y2 is citizens' assessment of the statement "bike rentals have reduced congestion" and X3 is citizens' assessment of the statement "free public Wi-Fi has improved access to city services". This model shows that the impact of bike sharing on traffic congestion depends on free Internet access in the city; an effect of the number of citizens was not found. So, while the development of carsharing is more appropriate for large cities, the beneficial effect of bike sharing does not depend on the size of the city. The third model is related to the aspect of sustainable development, in particular to the assessment of the development of platforms that allow citizens to give away unneeded things. It follows that the development of ICT technologies, as well as a favorable assessment of processing services, which characterizes the city's orientation towards the implementation of sustainable development principles, contributes to the development of digital exchange and resale platforms. Here it is also advisable to speak of an inverse effect, in which sharing platforms have a positive influence on citizens' assessment of processing services (3, Table 3):

Y3 = e^(-0.17) · X3^0.52 · X4^0.15 (R² = 0.62, p < 0.001) (3)

where Y3 is citizens' assessment of the statement "a website or application allows residents to easily give away unwanted items"; X3 is citizens' assessment of the statement "free public Wi-Fi has improved access to city services"; and X4 is citizens' assessment of the statement "processing services are satisfactory". Quality checks of the models were carried out: multicollinearity was eliminated, and autocorrelation of residuals was not revealed. The models were also tested for heteroscedasticity using a visual analysis of the residuals plot; signs of non-constant variance and dependence of the residuals were not found.

Sharing economy in Russian cities

The analysis of the joint development of sharing economy services and "smart cities" showed that the most significant factor is the development of ICT technologies, which is a direct characteristic of both concepts. At the same time, according to the results of the index under consideration, the dynamics of the indicators in Russian cities are somewhat different, which, as we see it, is connected with the heterogeneity of sharing services development as well as with the level of social inequality. Figure 2 shows the dynamics of the considered variables in Moscow and Saint Petersburg for the period from 2019 to 2021. Figure 2 shows that the situation in carsharing is mostly stable. There is a gradual increase in bike rental in Moscow. Saint Petersburg also shows growth, but there was a slight decline during the pandemic period. The results for websites or applications that allow residents to easily give away unwanted items are not uniform and require additional research. When examining the presence of carsharing companies in cities of the Russian Federation, it is clear that this type of business is developing mainly in large cities.
Figure 3 shows the presence of carsharing companies in million-plus cities in absolute terms. Of the 11 cities with a population of over a million, only one (Omsk) lacks carsharing, which is due to low taxi prices in this area. Carsharing services are also present in tourist areas. Kicksharing (scooter rental) is also quite widespread, which is confirmed by the presence of this service in large cities, in cities with a population of over 100,000 people, and in cities located in close proximity to the capital of the region. Figure 4 shows the rating of million-plus cities and cities with a population of 500,000 or more.

The role of the sharing economy in solving social problems

When revealing the role of the sharing economy (SE), one should note the connection of this business model with such concepts as "circular economy" and "collaborative economy". The main idea of increasing the efficiency of resource utilization, embedded in this business model, allows us to consider it as one of the elements of a circular economy (CE). The connection between the sharing and circular economies is reflected in research works. In particular, Henry presents a comparative analysis of these concepts based on bibliometric analysis [24]. The authors found a connection between these concepts in the fields of sustainable development, business models, sustainable consumption and management, and confirm the nesting of the sharing economy within the circular economy. However, a detailed analysis of the circular economy and the sharing economy also shows that the goals of SE and CE digital platforms can differ [25], owing to the gap between the theoretical principles of the sharing economy and practical activities. While the circular economy is more focused on the analysis of large corporations, the sharing economy covers small and medium-sized businesses as well as the activities of start-ups, which makes a comprehensive study of these concepts promising [26]. As for this study, websites and applications that allow residents to give away unwanted items permit them to extend the life of a product, which is one of the models of the circular economy. Carsharing and kicksharing are focused primarily on the sharing model and consumption reduction, which can also be attributed to circular economy models [27]. Considering sharing services, in particular in the field of transport, it is important to note that the use of this model makes it possible to reduce CO2 emissions and use resources more efficiently, thus reducing the demand for the purchase of personal vehicles. Whereas the connection between the sharing economy and the circular economy involves common tasks and implementation goals, the collaborative economy is associated with a general principle and model of consumption, which reveals another side of the sharing model. At the same time, sharing and consumption concern not only material resources but also information, labor and financial ones. In this context, the sharing economy goes beyond the circular economy and corresponds to an actively implemented model of shared resource consumption based on the access right. For example, the sharing of financial resources in the form of crowdfunding, crowdlending and crowdinvesting is an example of collaborative consumption; it can have a beneficial effect on socio-economic processes, including at the city level, but is not included in the circular economy.
We should also mention data sharing and the formation of the above-mentioned concept of sharing cities, which develops the smart city concept by shifting the focus not only to the use of digital technologies but also to increasing the role of citizens in solving socially significant tasks, while providing the required levels of security and trust. It should be noted that the development of sharing economy services in the urban environment requires the active involvement and support of the authorities. In addition, the list of services and platforms presented here is not exhaustive. In particular, it is advisable to consider the role of the digital platforms of the sharing economy in the development of individual projects that have high social and economic significance. Support by the authorities for the development of investment and crowdfunding platforms, which release underutilized resources and are a fairly effective tool for supporting small businesses, seems promising. Equipment sharing is also a rather popular tool for supporting business entities; the importance of such services is most clearly seen in the development of agriculture, where sharing economy services and related platforms provide access to expensive equipment. Designing effective mechanisms for cooperation with the authorities, forming an institution of trust in society, and increasing human capital will reduce the threats to sharing economy development and free up additional resources to stimulate socio-economic processes.

Conclusion

In this study, in order to substantiate the potential of the sharing economy in the development of "smart" cities, the following results were obtained. First, based on a bibliometric analysis of research papers, it was shown that the concepts of the sharing economy and smart cities intersect in such areas as sustainable development, digital technologies, and the development of public goods. Second, indicators that can be used to assess the development of sharing economy services were identified. These indicators characterize the impact of carsharing and bike sharing on traffic congestion, as well as citizens' assessment of sharing platforms for exchange and resale. Third, three regression models were built showing that the key parameter for the development of sharing economy services is the availability of free and fast access to the Internet. In addition, it was found that the development of carsharing depends on the size of the city, which explains the expediency of its development only in million-plus cities and nearby territories.
Fourth, using data from Moscow and Saint Petersburg, it was demonstrated that the development of these services is not stable. At the same time, these data are sufficient to conclude that it is expedient to implement mechanisms for cooperation between the authorities and the operators of digital platforms (and services) of the sharing economy in order to improve the standard of living. In addition, the paper shows the presence of carsharing and kicksharing companies in Russian cities, which indicates the prospects of and demand for this market. At the same time, the issues of regulating the services of the sharing economy, and the development not only of norms and rules for regulating this area but also of an appropriate infrastructure, remain extremely important. Thus, the present study showed the prospects for developing sharing economy services for the development of smart cities and drew attention to this area of research in Russian cities. The results of this research expand theoretical studies on the role of the sharing economy in economic, social and environmental change. The practical significance lies in substantiating the importance of these services for the development of smart cities.

Fig. 2. Dynamics of the sharing economy indicator in the IMD Smart City Index.
Fig. 3. Rating of cities by the presence of carsharing.
Fig. 4. Rating of cities by the presence of kicksharing.
Table 1. Results of regression analysis for model 1.
Table 2. Results of regression analysis for model 2.
Table 3. Results of regression analysis for model 3.
Relationship between reproductive health literacy and components of healthy fertility in women of the reproductive age

BACKGROUND AND AIM: One of the key factors affecting women's behavior with fertility issues is their health literacy, but this topic has been less addressed in the existing studies. We aimed to determine the relationship between reproductive health literacy and components of healthy fertility in women of reproductive age. MATERIALS AND METHODS: This cross-sectional study was conducted from March 2020 to September 2021 on 230 married women who were referred to comprehensive health centers in Lordegan city. Data were collected using a reproductive health literacy questionnaire, a demographic and fertility information checklist, and a checklist of the components of healthy fertility. Data analysis was done using SPSS software, version 20. Pearson, Spearman, and independent t-tests were used as appropriate. RESULTS: The mean ± SD reproductive health literacy score of the participants was 43.80 ± 18.99, depicting an average literacy level in more than half of the women. The reproductive health literacy score had a statistically significant relationship with the use of low-failure contraceptive methods (P < 0.001) and planned pregnancy (P = 0.03). However, this relationship was not significant regarding pre-pregnancy care (P = 0.88) and observing the interval between pregnancies (P = 0.57). CONCLUSION: We found a relationship between the level of reproductive health literacy and the use of low-failure family planning methods and planned pregnancy. Hence, it seems that interventions to improve reproductive health literacy are effective in reducing the occurrence of high-risk pregnancies and unwanted and unplanned pregnancies. Therefore, it is suggested that the health system consider and provide education related to reproductive health literacy as a part of healthy reproductive services.

Introduction

Reproductive health is defined as having a satisfying and safe sex life as well as having the ability to reproduce and the freedom to decide when and how often to have children. In other words, helping all members of society to control their fertility and experience healthy fertility is one of the main missions of reproductive health programs. [1] It is possible to provide a healthy fertility experience for people in the community by providing access to all information related to reproductive health, as well as access to healthy reproductive services and care services before, during, and after pregnancy. [2] The nationwide implementation of the integrated care program of healthy reproductive services started in Iran in 2019 in comprehensive health centers. The main components of this program are respect for the right of couples to obtain correct information and necessary services about healthy fertility and childbearing, respecting the two-year interval between pregnancies, providing pre-pregnancy care by qualified personnel, using low-failure contraceptive methods in couples who do not have the conditions for childbearing, and having a planned pregnancy and the desired number of children. The ultimate goal of this program is to reduce maternal and infant mortality and related complications.
According to the World Health Organization (WHO), 94% of all maternal deaths occur in low-, middle-, and very low-income countries. To prevent these deaths, it is necessary to prevent unwanted pregnancies. Providing skilled care before, during, and after pregnancy and childbirth has also been proposed as a means of preventing maternal death. [4] Alongside the success and effectiveness of healthy fertility services, the unmet needs related to them are causes of unplanned and unwanted pregnancies and of couples' inability to observe the correct childbearing interval. [5] These pregnancies are common health and social problems in the country, with a negative effect on the health of mothers and infants. Marriage at a young age, economic poverty, lack of access to contraceptives, and contradictory beliefs about sexual issues are major contributing factors. [6] In general, unplanned and unwanted pregnancies increase the physical and mental complications of the mother and child and impose a large financial burden on the health system. [7] Of the 215 million pregnancies that occur annually worldwide, more than a third are unwanted, and 21% end in induced abortion. About 21 million induced abortions are unsafe, and a quarter of these lead to severe complications and even the death of the mother. [8] According to the statistics announced in Iran, nearly one-fifth of pregnancies occur unintentionally, comprising 18.6% of the total pregnancy index. [9]

One of the important causes of unwanted pregnancy is the lack of use, or incorrect use, of available effective contraceptive methods. A study showed that in Tehran, the use of condoms and withdrawal methods increased from 20% in 1979 to 69% in 1993. Amiri and colleagues also stated that the most frequent method of prevention in women was the withdrawal method. [10] In another study, Gholami and Shabazian stated that the causes of unwanted pregnancy are the type of contraceptive method used by women and how they use that method. [11]

As mentioned, access to quality care before and during pregnancy and around childbirth is another way to ensure healthy fertility and reduce maternal mortality. Preconception care is an important part of this care, and evidence has shown that providing it is one of the most effective measures in predicting and planning to reduce mortality and complications in mothers and newborns. [12] The coverage of this care has been reported differently in several countries; for example, it was estimated at 40% in China and 18.2% in Ethiopia. [13,14] The frequency of preconception care in Iran is also low, with varying frequencies in different cities; for example, the frequency of this care in Semnan and Gorgan was 11.6% and 32.7%, respectively. [15] A study on the barriers to preconception care concluded that increasing awareness, improving attitudes about the importance of care, and increasing access are among the most important strategies to improve the uptake of these services. [16]

One of the key factors affecting women's behavior and decisions related to fertility issues, and how they use available services, is their health literacy status. The findings of a systematic review show that women's health literacy is associated with knowledge and behavior in the field of contraception, fertility-related decisions, prenatal screening, vitamin intake during pregnancy, exclusive breastfeeding, and postpartum depression. [17]
On the other hand, the results of a meta-analysis showed that the average health literacy score of Iranian women is in the low or borderline range, [18] and that most studies have focused on general health literacy; few studies have addressed reproductive health literacy, which is a specific field of health.

The high rate of unplanned or unwanted pregnancies, the use of traditional contraceptive methods or the incorrect use of methods, and women's insufficient attention to preconception care make it necessary to conduct studies investigating their causes. Therefore, we aimed to determine the relationship between reproductive health literacy and components of healthy fertility in women of reproductive age in Lordegan city, Iran.

Study Design and Setting

This cross-sectional study was conducted with a descriptive-analytical approach in Lordegan city from March 2020 to September 2021. Lordegan is one of the cities of Chaharmahal and Bakhtiari province and has a population of 450,000 people, half of whom are women (22,400). [19] The city of Lordegan has six comprehensive health centers, all of which provide services related to healthy fertility to the population they cover.

Study Participants and Sampling

The study population was women of reproductive age referred to the comprehensive health centers of Lordegan city. They were included in the study by quota sampling across all comprehensive health centers of Lordegan (according to the population of women of reproductive age covered by each center's services); within each center, convenience sampling was used. The inclusion criteria were being married and having a spouse, being of reproductive age (15-49 years), living with the spouse, having Iranian citizenship, being able to read, write, and answer all questions in the questionnaire, the absence of a confirmed diagnosis of primary infertility or recent secondary infertility in the woman or her spouse, not being currently pregnant, the absence of underlying diseases (which would prohibit the use of contraceptive methods or necessitate therapeutic abortion), and being married for at least two years. We excluded women who did not want to cooperate or who had not properly completed the questionnaires. Ultimately, 228 women were required, considering 80% power, a 95% confidence interval (CI), a correlation coefficient of 0.2 between the reproductive health literacy score and components of healthy fertility, and a dropout rate of 20%.
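A minimal sketch of the standard sample-size calculation for detecting a correlation via Fisher's z-transform, using the inputs stated above; the published figure of 228 presumably reflects a slightly different formula or rounding and dropout convention, so the numbers below are illustrative only:

```python
# Sample size for detecting a correlation r = 0.2 with 80% power at a
# two-sided 5% significance level, via Fisher's z-transform, then inflated
# for 20% dropout. Illustrative sketch; the conventions are assumptions.
import math
from scipy.stats import norm

r = 0.2                                  # expected correlation
z_alpha = norm.ppf(1 - 0.05 / 2)         # ~1.96 (two-sided 95%)
z_beta = norm.ppf(0.80)                  # ~0.84 (80% power)

C = 0.5 * math.log((1 + r) / (1 - r))    # Fisher z-transform of r
n = ((z_alpha + z_beta) / C) ** 2 + 3    # minimum sample size (~194)
n_dropout = math.ceil(n / (1 - 0.20))    # inflate for 20% dropout (~243)

print(round(n), n_dropout)
```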
Data Collection Method and Tools

Sampling started by obtaining permission from the Vice-Chancellor's Office for Research of the university and the Lordegan Vice-Chancellor of Health, and continued with the researcher visiting the comprehensive health centers, presenting a letter of introduction, and talking to eligible women. After establishing communication with the participants, the researcher introduced herself, explained the objectives and methods of the study, and asked the women to complete the study questionnaires voluntarily. The questionnaire was completed as a self-report. The data collection tool was a researcher-made questionnaire on women's reproductive health literacy. A researcher checklist was also used to record demographic and fertility information and the components of healthy fertility. In this research, observing a two-year interval between the previous birth and the next pregnancy, using low-failure contraceptive methods in eligible people, planned pregnancy, and performing preconception care were considered the components of healthy fertility. Family planning methods such as the condom, withdrawal, and rhythm methods were defined as high-failure methods, and all other methods as low-failure contraceptive methods.

The researcher-made reproductive health literacy questionnaire was adapted from a questionnaire developed in a study at Shiga University in Japan. [20] That tool measured the level of women's reproductive health literacy with 21 items. After translation and back-translation of the questionnaire, its face and content validity were first checked qualitatively by asking the opinions of 15 experts. After the objectives of the study were stated, the experts were asked to review the questionnaire in terms of fluency, ease of understanding, grammar, style of the items, and ease of completion, and to add their suggested questions. Content validity was also investigated quantitatively by asking the opinions of 15 experts and calculating the content validity index (CVI) and content validity ratio (CVR); reliability was assessed by the test-retest method (completion of the questionnaire by 20 women at a two-week interval) and internal consistency by calculating Cronbach's alpha coefficient. Items with a CVR above 0.49 and a CVI above 0.79 were kept. Ultimately, 29 items remained in the questionnaire. The items were scored on a five-point Likert scale as follows: not at all (score = 0), very little (score = 1), little (score = 2), high (score = 3), and very high (score = 4). The score range of the questionnaire was 0-116, and a higher score indicated a better level of reproductive health literacy. The reliability of the questionnaire was confirmed with r = 0.72 and a Cronbach's alpha of 0.8. Data were analyzed using SPSS software, version 20 (SPSS, Chicago, IL, USA), with independent t-tests. Before conducting the statistical tests, the normality of the data distribution was checked and confirmed with the Kolmogorov-Smirnov test. The significance level was set at 5%.
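To make the scoring and reliability procedure concrete, here is a minimal sketch of computing total scores (29 items scored 0-4 each, so totals lie in 0-116) and Cronbach's alpha; the response matrix below is hypothetical, not the study data:

```python
# Total questionnaire scores and Cronbach's alpha from an item-score matrix.
# Hypothetical responses: 20 pilot participants x 29 items, each scored 0-4.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(20, 29))

total_scores = responses.sum(axis=1)  # each total lies in 0..116

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(total_scores.min(), total_scores.max(), round(cronbach_alpha(responses), 2))
```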
Ethical Considerations

All ethical considerations, such as obtaining approval of the research project from the Ethics Committee (code: IR.MUI.RESEARCH.REC.1400.156), keeping the participants' information confidential, and obtaining informed consent from them, were observed. Since the study was conducted during the COVID-19 pandemic, social distancing and health protocols were also observed for people who attended in person.

Results

This study was done on 230 women aged 17-46 years living in Lordegan city, with an average age of 30.69 ± 6.86 years. In terms of age, most women were 25-29 years old (27.4%). Most women had a diploma degree (43%) and were housewives (63.47%). Among the investigated women, the mean ± SD number of pregnancies was 2.74 ± 1.46 and the mean ± SD number of deliveries was 2.21 ± 1.17. Most women had 1-2 pregnancies (52.2%), 1 or 2 deliveries (62.2%), and no history of abortion (64.3%) [Table 1]. The women's mean ± SD reproductive health literacy score was 43.80 ± 18.99. Reproductive health literacy was poor in 38.3%, average in 57.8%, and good in 3.9% of the women. Based on the results of the independent t-test, a significant difference was observed in the mean reproductive health literacy score between eligible women using low-failure contraceptive methods and eligible women who did not use these methods (P < 0.001); the mean score was higher in women who used low-failure contraception. Also, the mean reproductive health literacy score in women who had a planned pregnancy was higher than in women whose pregnancy was not planned (P = 0.03). However, we found no significant difference in the mean reproductive health literacy score between women who had followed the correct interval between children and those who had not (P = 0.57). Moreover, no significant difference was observed in the mean reproductive health literacy score between women who had received preconception care and those who had not [P = 0.88, Table 2].

Discussion

We aimed to determine the relationship between reproductive health literacy and components of healthy fertility in women of reproductive age. The mean ± SD reproductive health literacy score was 43.8 ± 18.99; literacy was good in only 3.9% of the women, most participants (57.8%) had an average score, and 38.3% had poor literacy. This was consistent with the results of another study in Iran reporting that only 19.9% of pregnant women had sufficient health literacy, with the health literacy of most women at an insufficient or borderline level. [21] Kohan and colleagues reported a mean ± SD reproductive health literacy score of 66.16 ± 10.26 in Isfahanian women aged 18-62 years, which was higher than the mean score in our study. [22] The level of functional health literacy of pregnant mothers in Urmia city was insufficient in 24% of participants, borderline in 25%, and sufficient in 51%. [23] In a study of the health literacy status of pregnant women in Bandar Abbas city (2016), 27.2% of women had insufficient health literacy, 20.8% borderline, and 52% sufficient. [24] In another study on health literacy and self-care in women of reproductive age, 28%, 23%, and 49% of the studied women had insufficient, borderline, and sufficient literacy levels, respectively. [25]
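The group comparisons reported in the Results above rest on independent t-tests over the literacy scores; a hedged sketch follows, with purely hypothetical score arrays standing in for the two contraception groups:

```python
# Independent t-test comparing reproductive health literacy scores between
# two groups (here: low-failure contraception users vs. non-users).
# The scores below are hypothetical, not the study data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
low_failure_users = rng.normal(loc=48, scale=18, size=120)
non_users = rng.normal(loc=40, scale=18, size=110)

t_stat, p_value = ttest_ind(low_failure_users, non_users)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # expect p well below 0.05 here
```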
It seems that in all the studies mentioned above, the proportion of women with a sufficient level of health literacy was considerably higher than the figure obtained in our study. Two reasons can be offered for this difference. First, most of the mentioned studies used general or functional health literacy questionnaires, and women's health literacy is probably better in the general health field than in the reproductive health field. However, since the same questionnaire was used in the study by Kohan and colleagues, another reason may be the socio-cultural and economic differences between the studied populations. [28,29] Many of these factors were not examined in our study.

The results showed that the reproductive health literacy score has a significant relationship with the use of low-failure contraceptive methods: the mean score in women who used low-failure contraception was significantly higher than in women who did not. This result was consistent with the findings of Yee et al., whose study showed that a low health literacy score was related to poor knowledge of contraception and difficulty in using contraceptive methods, with these women facing problems in deciding to use contraception. [23] Our results are also in line with another study showing that both health literacy and knowledge related to the use of oral contraceptive pills are significantly related to adherence to the regular use of these pills; health literacy was the strongest predictor of adherence (in multivariate regression). [30] It seems that higher reproductive health literacy increases women's awareness and ability to use family planning methods, and that these women use low-failure contraceptive methods more than others in order to control the number and timing of their pregnancies as they wish.

Data analysis showed that the mean reproductive health literacy score did not differ significantly between women who had followed the correct interval between children and those who had not. Different studies have linked different factors to the interval between births. For example, Bagheri and Saadati found that working women, women living in developed urban areas, and young women give birth to their second child after a longer interval. [31] Among African women, Miherti and colleagues stated that the mother's lack of formal education, the lack of use of contraceptive methods, and a short period of breastfeeding (less than 24 months) are the determining factors for a short interval between pregnancies. [32] In their systematic review, Damtie and colleagues concluded that in Ethiopian women, non-use of contraceptive methods, living in rural areas, and a short duration of breastfeeding were associated with a short interval between births (less than two years). [33] In another study, the interval between births in women with higher education and in women whose previous child was a boy was significantly greater than in other women. [34]
Based on the results of these studies, and the fact that the level of education has a direct relationship with the level of health literacy, [25] one might conclude that the level of reproductive health literacy is directly related to compliance with the correct interval between births. However, to the best of our knowledge, no studies have directly examined the relationship between the level of reproductive health literacy and the interval between births. It is therefore necessary to conduct further studies with larger sample sizes to investigate this relationship.

We found no significant difference in the mean reproductive health literacy score between women who had received preconception care and those who had not. In contrast, many studies have shown that the level of health literacy is related to preventive care and care for chronic diseases. Yee and colleagues found that an insufficient level of health literacy was correlated with low levels of health-related knowledge and less self-care in pregnant women with diabetes. [35] Asadi et al. found that women with a higher level of health literacy used preconception counseling more than other women. [36] Since this finding of our study contradicts the results of other studies, and since it seems logical that women with a higher level of health literacy would seek preconception care more often, studies with larger sample sizes are needed in this field.

In this study, the mean reproductive health literacy score in women who had a planned pregnancy was significantly higher than in women with an unplanned pregnancy; this is consistent with the study of Yee et al. [37] Also, Dongarwar and Salihu concluded that a higher level of reproductive and sexual health literacy was associated with a lower rate of unplanned and repeated pregnancy in teenagers, the two factors being directly related. [38] Based on these results, it can be said that increasing the level of reproductive health literacy increases women's ability to control their fertility and makes planning for births possible.

The use of a dedicated reproductive health literacy questionnaire is one of the strengths of the present study, as it increased the power of reasoning about the study variables. However, the small sample size and the dependence of key research variables, such as health literacy and women's fertility behaviors, on the sociocultural background limit the generalizability of the results and are limitations of the present study. To address this limitation, studies with larger sample sizes and in different sociocultural contexts are needed.

Conclusion

We found a significant relationship between the level of reproductive health literacy and both the use of low-failure family planning methods and planned pregnancy. Unplanned pregnancies, whether continued or terminated by legal or illegal measures, have many consequences for the health of the mother and the fetus and contribute greatly to maternal death and complications. Therefore, it is suggested that the health system consider and provide education related to reproductive health literacy as part of the healthy reproductive services currently being provided in the country.
Prevalence and Genotyping of HPV in Oral Squamous Cell Carcinoma in Northern Brazil

Highly oncogenic human papillomavirus (HPV) is well known to be associated with, and a risk factor for, various types of oral carcinomas such as oral squamous cell carcinoma (OSCC). The aim of this study was to evaluate and describe the prevalence and genotypes of HPV-induced OSCC in the city of Belém, northern Brazil. This cross-sectional study included 101 participants who attended an oral pathology referral center in a dental college seeking diagnoses of oral lesions (OL). After signing the consent form and meeting the inclusion criteria, all participants completed a sociodemographic and epidemiological questionnaire. OL were then collected by excisional or incisional biopsy, depending on lesion size, and the tissues were preserved in paraffin blocks for histopathological diagnosis. The paraffin blocks were divided into benign and malignant/premalignant lesions based on the classification of potentially malignant disorders of the oral and oropharyngeal mucosa. DNA was extracted from the paraffin blocks by the ReliaPrep FFPE gDNA Miniprep method in order to identify HPV DNA of high and low oncogenic risk. The viral DNA was then amplified and typed using the Inno-LiPA Genotyping Extra II method, and the collected data were analyzed by Chi-square and G-tests. In total, 59/101 (58.4%) OL were malignant/premalignant lesions, of which OSCC was the most prevalent (40/59, 67.7%), and 42/101 (41.6%) were benign lesions. The most common site of OL was the upper gingiva (46/101, 45.5%). Regarding HPV DNA detection, 27/101 (26.7%) had positive results; of these, 17 occurred in malignant/premalignant lesions (17/59, 28.8%), where the most prevalent genotypes were 16, 18, 52 and 58, while 10 occurred in benign lesions (10/42, 23.8%), where the most prevalent genotypes were 6, 11 and 42. Age range was the only risk factor with a significant association with HPV and OSCC presence (p-value: 0.0004). A correlation between OSCC and oral HPV among the analyzed samples could not be demonstrated in our small cohort.

Introduction

The human papillomavirus (HPV) is a member of the Papillomaviridae family. According to Monteiro et al. [1], there are more than 130 species, and nearly 228 HPV genotypes have been identified so far, all with tropism for mucosal and cutaneous epithelia, such as squamous tissue. As explained by the International Committee on Taxonomy of Viruses (ICTV), HPV is a small, non-enveloped, double-stranded circular DNA virus approximately 52-55 nm in diameter, composed of an icosahedral protein capsid made of 72 pentameric capsomeres that surrounds a viral genome of about 8000 nucleotide base pairs [2,3]. The oncogenic potential of HPV is based on the virus's capacity to introduce two genes encoding specific viral oncoproteins, E6 and E7, into the genome of infected host cells [4,7,8]. This capability promotes the inactivation of important tumor suppressor and apoptotic proteins, such as Tumor Protein p53 (TP53) and Retinoblastoma Protein (pRb). The E6 viral oncoprotein interacts with TP53, causing its degradation through its relationship with the E6-Associated Protein [9-11].
The E7 viral oncoprotein attaches to pRb, inactivating its capacity to inhibit excessive cell cycle progression and thereby resulting in carcinogenic activity in the oropharyngeal and genital epithelium [9-11].

Worldwide, HPV infection is a major public health issue, mainly because it is one of the most common sexually transmitted infections (STIs). Globally, HPV infection prevalence is approximately 12%, with continental discrepancies due to the socioeconomic development and vaccination programs of each country, and HPV-induced cancer has a prevalence of 5.1%, varying across genders and numerous anatomic sites [12,13]. In Brazil, according to Colpani et al. [14,15], HPV has a national prevalence of 25.41%, with anatomic site variation: penile region, 36.21%; anal region, 25.68%; oropharyngeal region, 11.89%. Of this prevalence, 17.65% was associated with HR HPV subtypes.

SCC is one of the main malignant lesions of invasive skin cancer and is easily identified by an atypical, accelerated increase in squamous cells. Given the morphological similarities with oropharyngeal mucosal tissue, an oral SCC (OSCC) is possible; indeed, OSCC can arise from any location of the oropharyngeal mucosa [16,17]. According to van der Waal [18] and Jiang et al. [19], the most frequently affected sites are the tongue, mouth floor, sublingual area, gingiva, hard palate and lips. As described by Syrjänen [20], Tumban [21] and Panarese et al. [22], OSCC clinically appears as an ulcerative lesion with a necrotizing central area and raised borders.

In general, the well-established risk factors for OSCC are smoking, alcohol overconsumption and tobacco chewing, although since 1983 it has been hypothesized that OSCC can emerge from HR HPV subtypes; in particular, HPV subtype 16 (HPV-16) is associated with unprotected sexual behavior. However, the specific role of HR HPV among OSCC risk factors is not fully understood, and further information is needed to clarify the relationship between HR HPV infection and OSCC in northern Brazil. In this context, this study aimed to evaluate and describe the prevalence and genotypes of HPV-induced OSCC in the city of Belém, northern Brazil.

Materials and Methods

This descriptive, cross-sectional, single-center study was based on clinical symptoms and sociodemographic and epidemiological data from individuals who attended an oral pathology and malignant lesions referral center at a dental college (CESUPA) located in the city of Belém, Pará, northern Brazil (Figure 1). All individuals who attended this referral center from January 2019 to December 2019 were invited to participate in the study; of these, 101 individuals met the inclusion criteria and were diagnosed with oral benign or malignant/premalignant lesions/tumors.
All interventions were performed in accordance with the guidelines and regulatory standards for research involving human subjects of the National Health Council and the Papilloma Virus laboratory of Instituto Evandro Chagas (IEC). This study was approved by the Ethics Committee on Human Research of the University Center of the State of Pará (CESUPA) under protocol number 4.197.815. Written informed consent was obtained from all 101 patients for the publication of any potentially identifiable images or data included in this paper.

Clinical Parameters

The benign and malignant lesions were established according to the classification of potentially malignant disorders of the oral and oropharyngeal mucosa [18]. OL were subdivided in situ into benign lesions (traumatic fibroma, focal epithelial hyperplasia, pyogenic granuloma, papilloma, verruca vulgaris, condyloma acuminatum) and malignant or premalignant lesions (oral squamous cell carcinoma, leukoplakia, erythroplakia, oral lichen planus, oral submucous fibrosis and carcinoma). Although the main objective of the study was to associate HPV infection and SCC, both OL groups were biopsied, and benign lesions were used as a control group to evaluate HPV prevalence in different lesions [23,24].

Sample Collection and Processing

The sample consisted of patients registered and treated at CESUPA. In total, 101 individuals were informed about the purpose of the study and invited to participate; all agreed and signed a written consent form before data collection and oral evaluation. The study eligibility criteria were: (i) age ≥18 years; (ii) a conclusive diagnosis of oral benign lesion or OSCC; (iii) residence in Pará State; (iv) completed medical records; and (v) a signed free and informed consent form. The exclusion criteria were: (i) individuals who transferred to other cities, affecting follow-up; (ii) individuals with neurological and/or cognitive impairment; (iii) medical records not filled out correctly; and (iv) refusal to sign the consent form.

Each participant was orally evaluated in a private location in the oral pathology department. Clinical data were collected by a single researcher, a specialist in oral pathology with previous experience in clinical studies. The intraoral clinical examination was performed in a dental office, in a dental chair, under indirect and artificial light, using a sterile dental mirror and clinical tweezers consisting of disposable materials; the OL evaluations were performed daily. Demographic and epidemiological data were obtained through a pre-tested, standardized, semi-structured questionnaire and medical records.

Regarding oral biopsies, all included lesions had been submitted either to excisional biopsy, when OL were approximately ≤1 cm, or to incisional biopsy, when they were approximately ≥1 cm, with a scalpel. All biopsies were performed by a single researcher with previous experience in clinical biopsy. Examinations occurred under local anesthesia, and for excisional biopsies, a 3 mm margin of normal tissue was included. The biopsy material was preserved in 10% formaldehyde and transported for histopathologic examination.

All pieces were processed in paraffin blocks for histological analysis. At first, it was necessary to replace tissue liquid with paraffin; a sequence of ethyl alcohol baths at increasing concentrations (70-99%) for approximately 6 h was used for tissue dehydration. Subsequently, the pieces went through diaphanization in xylol for 3 h and impregnation in molten paraffin at 60 °C for 2 h. Finally, the pieces were transferred to steel molds, bathed in liquid paraffin at 65 °C, and taken to rapid cooling at 0 °C [25]. After preparation, two different pathologists evaluated the hematoxylin and eosin-stained sections of all lesions for confirmation of the diagnosis. A microscopic diagnosis was rendered according to the WHO classification of potentially malignant disorders of the oral and oropharyngeal mucosa [18].

DNA Extraction from Paraffin Samples

Each paraffin block was cut into 10 slices of 5 µm thickness each, and viral DNA extraction was performed using the ReliaPrep FFPE gDNA Miniprep System (Promega Corporation, Madison, USA). This system is based on the use of positively charged cellulose membranes in spin columns: the lysed biological material is subjected to centrifugation, and the negatively charged DNA binds to the membrane. Subsequently, washes were carried out with alcoholic solutions, and the DNA was eluted in a saline medium. All conditions were as specified in the manufacturer's protocol.
Each "sample" was heated in a thermoblock at 80 • C for approximately 2 min, and 500 µL of mineral oil was added to dissolve the paraffin-a process which was repeated for all "slices". Subsequently, 300 µL of PBS buffer and 20 µL of Proteinase K were added for sample digestion. Soon, the samples were incubated at 65 • C for 1 h. After the complete digestion process, the DNA samples were considered homogeneous; then, they were transferred to another Eppendorf tube and were incubated at 95 • C for 15 min. After this process, the samples were kept at room temperature, and later, 220 µL of lysis buffer and 240 µL of absolute ethanol were added to assist in the aggregation of precipitated DNA. All samples were transferred to kit columns and then centrifuged at 10,000 RCF for 3 min. The fluid remaining in the tube was discarded, and 500 µL of washing solution was added, centrifuging at 10,000 RCF for 30 s with the cap closed and 16,000 RCF for 3 min with the tube cap open. Subsequently, the column was transferred to another Eppendorf tube, discarding the collection tube. Finally, 50 µL of elution buffer was added directly to the column, and it was centrifuged again at 16,000 RCF for 1 min. The column was discarded and the Eppendorf tube with the filtered sample was stored at −70 • C. HPV Detection and Typification The extracted viral DNA was identified and typed in the analyzed samples using the "Inno-Lipa Genotyping Extra II System (Fujirebio, Tokyo, Japan), which is able to amplify a portion of the HPV genome in the viral L1 gene region. Through the reverse hybridization system, this generated fragment is able to identify infection by up to 32 viral types. Therefore, the presence of up to 28 genotypes of HPV (high oncogenic risk: 16 This system is based on the hybridization of the PCR-amplified fragment through the base homology of the amplified fragment with the probe contained in a "nylon" strip that corresponds to the specific sequence for each of the mentioned types, which defines, in addition to positivity, the infecting viral type. So, the viruses were cataloged on a decreasing scale as to their oncogenic risk. Statistical Analysis The collected data were analyzed by the BioEstat program and evaluated with respect to mean, standard deviation and absolute and relative frequency, as well as p value (p < 0.005) by the Chi-square, Fisher Exact Test and G Test, in the selected groups. Discussion Since 1983, the increasing number of papers indicating a correlation between HPV infection with oropharyngeal tumors or lesions has increased [4][5][6]9,11,[17][18][19][20][21][22][23][24][26][27][28][29][30]. The present study evaluated the prevalence of HR HPV using fresh and frozen biopsied samples; through various oral lesions (OL), OSCC was included and associated with patients' sociodemographic parameters in northern Brazil. To the best of authors' knowledge, this is the first epidemiological cohort of this type in northern Brazil. However, in this study, it was not possible to demonstrate a direct correlation between HR HPV and OSCC among the analyzed samples, even with different methods regarding comparisons with other studies such as analysis of freshly frozen OSCC samples to improve positive results in histopathological analysis (the same method as Drop et al. [31]) and having a benign lesion control group. Through the years, OSCC-related HR HPV has been well documented, although different important factors could affect the outcomes of HPV-induced OSCC. 
The most common associations are smoking and alcohol overconsumption, which, according to the literature, influence various populations worldwide. Madathil et al. [32] evaluated 631 participants in Montreal, Canada, regarding smoking habits and their relationship with HPV-associated oropharyngeal tumors; 40% were HPV-positive smokers, a higher prevalence than in the control group (16%), and HPV-16 was the most prevalent subtype among participants. Auguste et al. [33] investigated the influence of joint tobacco and alcohol consumption together with HPV on the occurrence of head and neck SCC among 550 individuals in Guadeloupe and Martinique, French Caribbean; the authors demonstrated that the combination of tobacco and alcohol consumption could have a synergistic effect on the incidence of OSCC, and HPV-52 was the most prevalent subtype. Smith et al. [34] evaluated 201 participants with smoking and alcohol consumption habits, HPV presence, and their relationship with head and neck SCC in Iowa, United States; they demonstrated a prevalence of 46% HPV-positive cases associated with smoking and alcohol consumption, with HPV-16 the most prevalent subtype. In Brazil, Rodrigues et al. [30] evaluated the prevalence of oral HPV in various OL among 278 people who use crack-cocaine (PWUCC) in the cities of Bragança and Capanema in northern Brazil; of these, 111 (39.9%) PWUCC were HPV-positive, and HPV-16 was the most prevalent subtype among PWUCC. Another interesting finding corroborated by various studies is that increased smoking and alcohol consumption associated with the presence of HR HPV influences the severity of OL.

In this study, risk factors such as smoking (70.3%) and alcohol overconsumption (70.3%) proved to be important etiological factors for OSCC, confirming the results of the studies mentioned above, although among our samples an association between smoking or alcohol overconsumption and HR HPV prevalence could not be inferred. Perhaps the influence and prevalence of HR HPV in OL are affected by other risk factors besides smoking and alcohol overconsumption, such as unsafe sex, multiple sexual partners, drug dependence, lack of access to public health services, co-infections such as HIV-HPV, and HPV non-vaccination, which would facilitate HPV spread throughout the state [1,25]. Bezerra et al. [35] demonstrated that smoking and alcohol overconsumption, jointly or individually, are important risk factors for a higher prevalence of OSCC. The authors also stated that tobacco and ethanol may increase oral epithelium permeability by exposing oral tissue to the various carcinogenic agents present in tobacco and ethanol. They can also decrease the expression of interleukins (IL) such as IL-18 and of the DDX3 protein, which regulate the cell cycle and control the progression of malignant neoplasms, as well as increase COX-2 pro-inflammatory activity.

In our study, one of the main risk factors was age (p < 0.05). According to our results, the 60-69 age range could be considered a major influencing factor for high OSCC prevalence; however, a cause-effect relationship could not be established. In the present study, the prevalence of malignant/premalignant lesions was 58.4% (59/101) and the HPV prevalence was 26.7% (27/101). Among HPV-positive samples, the most prevalent HR HPV subtype was HPV-16 (10/17, 58.8%), exactly the same subtype as in previous studies.
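As a hedged illustration of the contingency-table tests named in the Statistical Analysis section, the sketch below applies the Chi-square and G-test to a hypothetical 2x2 table assembled from the counts quoted above (17/59 HPV-positive malignant/premalignant vs. 10/42 HPV-positive benign); the non-significant result is consistent with the conclusion that a correlation between OSCC and oral HPV could not be demonstrated:

```python
# Pearson chi-square and G-test on a 2x2 table of lesion type vs. HPV status,
# built from the counts reported in the text (illustrative, not the authors'
# BioEstat analysis).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [17, 59 - 17],   # malignant/premalignant: HPV+, HPV-
    [10, 42 - 10],   # benign: HPV+, HPV-
])

chi2, p_chi2, dof, _ = chi2_contingency(table)                    # Pearson chi-square
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood")  # G-test
print(f"chi2 = {chi2:.2f} (p = {p_chi2:.2f}); G = {g:.2f} (p = {p_g:.2f})")
```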
When comparing HPV prevalence across studies, ours is the lowest, mainly because our cohort had a smaller sample size. Other influencing factors might be demographic region, socioeconomic status, anatomical sites, and access to quality public health services. Bean et al. [36] demonstrated that populations with a lower income, living in countryside areas or in overly dense urban areas, are more likely to present severe cases of late-stage cancer. Our lower HPV prevalence could also reflect that many cities in Brazil's northern region are surrounded by hydrographic access routes, which make access to health care difficult; in addition, the distances between major urban areas and countryside cities are greater than in southern Brazil [37,38]. Although OSCC prevalence was the main objective of our study, the decreased prevalence of HPV found during the analysis was interesting, because it differs from the literature. The predominance of HR HPV subtypes among HPV-positive samples highlights the urgency of improving the distribution of specialized oral pathology services, HPV vaccination, and the implementation of oral cancer prevention and treatment programs in the cities of northern Brazil.

Although it yields some interesting results, this study has limitations: the small sample size, the short study interval, and the paraffinization process, which may degrade HPV DNA.

Conclusions

This study determined the prevalence of OSCC associated with HPV infection, despite the low HPV prevalence found. This is a reassuring result considering that HPV is an important etiological factor in oral carcinoma. Our major concern was the high prevalence of malignant/premalignant lesions (58.4%), which is worrying for our region and suggests that the lower prevalence presented in previous studies might carry biases that directly influenced their results. We therefore identified the need to improve diagnosis and therapy services for the various lesions of the oral cavity in Pará state, in order to provide greater assistance to the population of Pará and prevent oral cancer.

Informed Consent Statement: All participants were included in the study after providing informed and written consent.

Data Availability Statement: All data referred to in this study are available in the manuscript.
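As a hedged footnote to the prevalence comparisons above, a Wilson 95% confidence interval for the reported overall HPV prevalence (27/101) can be computed as follows; the wide interval at this sample size supports the caution expressed about comparing rates across studies:

```python
# Wilson 95% confidence interval for a proportion, applied to the reported
# 27/101 overall HPV prevalence (illustrative companion to the text).
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(27, 101)
print(f"prevalence = {27/101:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")  # ~(19%, 36%)
```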
SARS-CoV-2 subunit vaccine adjuvants and their signaling pathways

ABSTRACT

Introduction: Vaccines are the agreed-upon weapon against the COVID-19 pandemic. This review discusses COVID-19 subunit vaccine adjuvants and their signaling pathways, which could provide a glimpse into the selection of appropriate adjuvants for prospective vaccine development studies.

Areas covered: The introduction provides a brief background on the SARS-CoV-2 pandemic, the vaccine development race, and the classes of vaccine adjuvants. The antigen, trial stage, and types of adjuvants were extracted from the included articles and then assimilated. Finally, the pattern recognition receptors (PRRs), their classes, cognate adjuvants, and potential signaling pathways are summarized.

Expert opinion: Adjuvants are the unsung heroes of subunit vaccines. In silico studies are vital for avoiding costly trial errors and saving considerable working time. The majority of the (pre)clinical studies are promising, and it is encouraging that most of the selected adjuvants are novel. Much emphasis must be placed on the optimal pairing of antigen, adjuvant and PRRs to obtain the desired vaccine effect. A good subunit vaccine/adjuvant is one with high efficacy, safety, dose-sparing ability, a rapid seroconversion rate, and a broad spectrum of immune response. In the years to come, COVID-19 adjuvanted subunit vaccines are expected to have greater utility than any other vaccines, for various reasons.

SARS-CoV-2 and vaccine status

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the etiologic agent of coronavirus disease 2019 (COVID-19), is ravaging the human race regardless of political and geographic boundaries [1,2]. As of 10 July 2021, by the lowest estimates, infections and fatalities had surpassed 186 million and four million, respectively. The pandemic is now broadening its target demography and geography through mutations in key structural genes [3-5].

Following the SARS-CoV-2 outbreak in Wuhan, China, century-old containment measures such as lockdowns, physical/social distancing, and school closures were applied. However, these measures were found to be costly and to collaterally damage the global economy [6]. Vaccines are the ideal weapon against SARS-CoV-2 for bringing life back to the pre-pandemic situation [7,8]. As such, several studies investigated the aftermath of countries' routine immunization programs with a view to repurposing existing vaccines [9-14]. However, major biases could not be excluded in the majority of these studies [15], and the reports are conflicting [16].

Currently, the globe is rolling out mRNA and vector vaccines [17-21]. Additionally, several new vaccines are under different stages of scrutiny [22]; the complete list of candidate vaccines is available on the WHO website [23]. Of the listed candidates, subunit protein vaccines account for 34% of COVID-19 vaccine research, the highest proportion of any vaccine form [23]. By virtue of containing a heterogeneous mixture of structures and genetic materials that can function as intrinsic adjuvants, live-attenuated vaccines are relatively more effective than subunit vaccines [24,25]. Conversely, purified subunit vaccines lack pathogen-associated molecular patterns (PAMPs), and such vaccines are barely immunogenic unless supplemented with adjuvants [24,26].
Synthetic DNA-based vaccines targeting the S protein of SARS-CoV-2 exhibited promising results in animal model experiments [27-29]. However, besides safety issues, host-related factors in higher animals might delay their translation, as can be inferred from earlier research [30,31]. Adjuvants (1) increase the yield of vaccine production using smaller amounts of antigen, (2) allow dose sparing, (3) broaden the profile of adaptive immune components, and (4) hasten seroconversion rates [25,34]. These effects come through their depot effect, activation of PRR-mediated innate immune signaling, enhancement of the activities of antigen-presenting cells (APCs), and activation of inflammasomes [33,35].

The immunological function of mineral salts arises from their depot effect, complement activation, and inflammasome activation and tissue damage releasing damage-associated molecular patterns (DAMPs) [36-38]. Emulsion groups have good antigen bioavailability; however, they are relatively toxic and are associated with delayed-type hypersensitivity. Unlike mineral salts, which are biased toward a Th2 immune response, emulsion groups activate Th1 pathways [36,37,40]. The PAMP, cytokine, hormone, and synthetic adjuvants are considered 'novel adjuvants' by virtue of having cognate receptors for their effector function; however, toxicity is the main limitation of these classes [41-45]. Despite attempts to reduce the toxicity issues associated with synthetic adjuvants, they are found to be less bioavailable and to remain localized at the injection site [43-45]. In general, particulate vaccines are taken up by APCs more easily than soluble vaccine forms; as such, efficacy can be increased by delivering vaccines in particulate forms [41].

In situations where measuring the clinical correlates of vaccine protection (infection, transmission, or disease) is difficult for various reasons, the immunological correlates of protection (titer, affinity, isotypes and half-life of the neutralizing antibody, and CD4+ T cells) are used as surrogate criteria for measuring vaccine/adjuvant efficacy [46,47]. Several lines of evidence reveal that each adjuvant has limitations on one or more of the desired immunological correlates of protection. For instance, alum in a spike protein subunit vaccine study induced increased B cell and long-lived neutralizing antibody (NA) production; however, alum-S adjuvants failed to induce a remarkable level of cell-mediated immunity (Th1 CD4+ T cells and cytotoxic CD8+ T cells) and are linked to eosinophil-associated lung pathology. The CpG adjuvant is associated with increased production of CD8+ T cells and of IgG and IgA, but the half-life of the produced antibodies is short and the response is skewed toward Th1. Liang and colleagues considered STimulator of INterferon Genes (STING), AS01B, delta inulin microparticle, and Matrix-M1 adjuvants to be better at inducing long-lived neutralizing antibody and IFN production in the mucosal area [48]. In general, despite several remarkable successes, we are far from identifying and unlocking the magic-bullet vaccine adjuvant.

Of note, the year 2020 was a year of human suffering but also the year of the mRNA vaccine breakthrough against the COVID-19 pandemic. Unfortunately, mRNA vaccines need freezers for transportation, which is very challenging in resource-limited countries [49]. Additionally, mRNA vaccines are expensive and unaffordable for nations of the Global South. Furthermore, the existing vaccines cannot satisfy the global need.
Additionally, due to the continued emergence of 'variants of concern,' developing a new generation of vaccines is a top priority for global health [50]. Hence, effective and safe alternative second- and third-generation COVID-19 vaccines are urgently needed. Adjuvanted subunit vaccines are the best alternative, and such vaccines are currently under intense research. Thus, the aim of this review is to identify primary articles evaluating the efficacy and safety of adjuvanted subunit COVID-19 vaccines, to give a glimpse into the landscape and immunology of COVID-19 adjuvants, and to facilitate the subunit vaccine research arena.

AND Title-Abs-Key ("subunit vaccine") OR Title-Abs-Key ("vaccine adjuvant") OR Title-Abs-Key ("Recombinant Protein Vaccine")) AND (Limit to (Pubyear, 2021) OR limit to (pubyear, 2020)) AND (Limit-To (Language, "English")). The identified articles were imported into the EndNote library, and eligible articles were filtered following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram. The complete article selection strategy is given in the supplementary material (S1).

Antigen profile of COVID-19 subunit vaccine trials

The systematic literature search yielded a total of 1062 articles (S1). After screening these 1062 articles, 52 fulfilled the eligibility criteria: 24 were in silico experiments, 25 were animal model studies, and the remaining three were clinical trial reports. Articles evaluating two or more adjuvants were treated as separate studies; hence, 71 adjuvant experiments/evaluations were identified within the 52 included articles, of which 38 were pre-clinical, 29 in silico, and 4 clinical trials (Figure 1a). The complete data set of the review, including a summary of the key findings of each study, is found in Table S2.

The full spike protein and its subunits are the leading antigens used by SARS-CoV-2 structural immunologists. For instance, out of 71 adjuvant/vaccine evaluations, 15 (21%) and 14 (19.7%) employed the spike protein and the receptor binding domain (RBD), respectively. When derivatives and modifications such as (S1, S2, S-2P, S-Trimer) and (RBD-NG, RBD-mFc, RBD-Fc, RBD-NP) are included, their shares increase to 31 (43.6%) and 24 (33.8%), respectively. Similarly, around 15 subunit vaccine trials used multi-epitopes (MEs) (Figure 1b). Table 1 depicts the antigens, adjuvants, and stages of the COVID-19 subunit vaccine trials.

COVID-19 subunit vaccine adjuvants

Overall, our review identified 27 distinct types of adjuvants, although this list might not be comprehensive given that our literature search was limited to two databases. Human beta-defensin, alum, Matrix-M, CpG, and MF59 are the top five adjuvants employed by COVID-19 subunit vaccine researchers (Table 1 and Figure 2). As shown in Figure 2, most of the adjuvants are PAMPs and small molecules with known receptors.

The review captured several insightful findings. For instance, immunoinformatics-based vaccine construction contributed the lion's share of vaccine epitope searches. This in silico strategy helps predict the antigenicity, allergenicity, and toxicity of the construct, as well as physical properties such as the molecular weight, half-life, and hydrophobicity of the predicted subunit vaccine. These collectively save resources and time and reduce costly trial errors [86].
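A quick arithmetic check of the antigen shares quoted above, using the counts as stated in the text (small rounding differences, e.g. 43.7% vs. the quoted 43.6%, come from the rounding convention, which is an assumption here):

```python
# Recomputing the antigen-share percentages from the stated counts (n = 71).
evaluations = 71
counts = {
    "spike (S)": 15,
    "RBD": 14,
    "S incl. derivatives": 31,
    "RBD incl. derivatives": 24,
    "multi-epitope (ME)": 15,
}
for antigen, n in counts.items():
    print(f"{antigen}: {n}/{evaluations} = {100 * n / evaluations:.1f}%")
```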
As an example of this in silico strategy, a β-defensin-adjuvanted multi-epitope subunit vaccine (28 epitopes, including 3 from replicase, 3 from NSP1, 2 from envelope, 5 from membrane, 6 from nucleocapsid, and 9 from spike proteins) has been constructed; molecular docking demonstrated excellent affinities for TLR3 and TLR8 [87]. Such vaccines might have broad-spectrum action, including against the emerging variants of concern (VOC).

Tian et al., using the full-S Matrix-M-adjuvanted vaccine (NVX-CoV2373), elicited high-titer anti-S IgG, polyfunctional CD4+ and CD8+ T cells, follicular CD4+ Th cells, and germinal center B cells in the spleens of mice [88]. A phase 1 subunit COVID-19 vaccine trial (SCB-2019) assessed safety, efficacy, and tolerability in S-AS03, S-CpG/alum, and placebo groups at 3, 9, and 30 μg doses given at a 21-day interval. In terms of safety, CpG was relatively safe compared with AS03. Both S-AS03 and S-CpG/alum induced neutralizing antibody (NA) production; however, neutralizing antibodies were produced more rapidly in the S-AS03 group than in the S-CpG/alum group, showing the distinct qualities of the adjuvants. S-protein-specific, Th1-biased immune responses could be induced in the two adjuvanted groups but not with the non-adjuvanted S-Trimer COVID-19 vaccine. This dose-finding study concluded that 9 µg S-Trimer-AS03 and 30 µg S-Trimer-CpG/alum were the preferred candidates [89].

A phase 1 and 2 subunit vaccine study was also carried out. In both phases, adverse events were mild to moderate. In phase 2, 14 days after the second dose, the seroconversion rates of NA were 76% and 72% in the 25 μg and 50 μg dose groups, respectively; 14 days after the third dose, seroconversion rates reached 97% and 93% in the 25 μg and 50 μg groups, respectively. Hence, three consecutive shots of the 25 μg dose at 14-day intervals were found to be safe and effective [90]. The adjuvant in this vaccine is alum. From these reports, the broad dimensions of adjuvant function can be appreciated: the titer and durability of the produced antibody are revealed, as are the types of cells engaged in the adaptive immune system and the dose-sparing effect of adjuvants.

Besides finding new adjuvants, researchers are also modifying existing adjuvants to enhance their immune-inducing ability and reduce toxicity. For instance, the main limitation of alum adjuvants was their inability to induce Th1 cellular immunity. As a solution, Peng and colleagues packed alum onto the squalene-water interface, forming Particulate Alum via Pickering Emulsion (PAPE). The findings showed six-fold higher NA titers and three times more IFN-γ-producing T cells [91]. In the same fashion, modification at the epitope level can further increase the immunogenicity of subunit vaccines; for instance, fusion of RBD with IgG Fc increased the half-life, stability, solubility, and APC uptake, which collectively increase the Th1 response [92].

Multiple subunit vaccine reports claimed neutralizing antibody production an order of magnitude higher after two vaccine doses than the antibody titers of convalescent sera. For instance, according to Keech et al. (2020), the geometric mean titer (GMT) levels at the 5 and 25 µg vaccine doses were nearly four times higher than those in symptomatic COVID-19 patients [93]. Additionally, a two-fold higher titer than convalescent sera was induced by a single immunization with spike-Helicobacter pylori ferritin particles [94].
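The fold-comparisons against convalescent sera quoted above are ratios of geometric mean titers; a minimal sketch with hypothetical neutralization titers (not the trial data):

```python
# Geometric mean titer (GMT) and the vaccinee-vs-convalescent fold ratio,
# computed on hypothetical titers for illustration only.
import numpy as np

vaccinee_titers = np.array([1280, 2560, 640, 5120, 2560])    # hypothetical
convalescent_titers = np.array([320, 640, 160, 1280, 320])   # hypothetical

def gmt(titers: np.ndarray) -> float:
    # Geometric mean: exponential of the mean of log titers.
    return float(np.exp(np.log(titers).mean()))

fold = gmt(vaccinee_titers) / gmt(convalescent_titers)
print(f"GMT ratio (vaccinee / convalescent) = {fold:.1f}x")
```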
Collectively, a higher titer of NA is induced through vaccination than through natural infection. In a phase three trial of a matrix-M1 adjuvanted subunit vaccine, an overall efficacy of 96.4% was recorded against common SARS-CoV-2 strains, 86.3% efficacy against the B.1.1.7 (alpha) variant, and 51% against the B.1.351 (beta) variant [95,96]. These variants of concern are now threatening all first generation vaccines [97]. Liu and colleagues performed an experiment to generate strong and broad NA using a subunit vaccine of RBD-Fc adjuvanted with FA/FIA. The experimental vaccine sera collected from immunized mice effectively neutralized seven mutant SARS-CoV-2 strains 35 days post first immunization [98]. The T cell response is the hallmark of, and is preferred over, the humoral immune response in viral infection. Optimal SARS-CoV-2 subunit vaccines must produce a mainly Th1-skewed immune response across age groups. However, achieving this desired outcome is not straightforward, and an innovative approach is required for the adjuvant, the antigen, or the delivery system. Steinbuck et al designed a subunit vaccine composed of an amphiphile (AMP)-CpG (diacyl lipid with modified CpG) mixed with S-RBD. Animal experiments in young and aged mice showed greater than 25-fold higher epitope-specific and Th1-skewed polyfunctional cell induction. The induced NA reached 265-fold higher titers than convalescent sera, with higher efficiency in terms of neutralization capacity. Additionally, higher orders of cellular and humoral immunity were also induced among aged mice. This is due to the art of adjuvant modification; AMP modification adroitly distributes CpG to the lymph nodes [99]. The summary of the key findings of the included articles is given in S2. Additionally, a brief account is given below for some of the common adjuvants used in COVID-19 subunit vaccine studies.

B-defensin

B-defensin is a TLR3 agonist used by several SARS-CoV-2 subunit vaccine studies [100][101][102]. Defensins are cationic peptides produced by human immune and epithelial cells, serving as antimicrobials and signaling molecules [103,104]. Among α, β, and θ defensins, β-defensin is the most abundant antimicrobial in most cells [103]. Three β-defensins, human β-defensin-1 (hBD1), hBD2, and hBD3, have been identified in human epithelial cells [104]. hBD3 plays a role in dendritic cell and T cell activation, migration, and polarization [105]. It activates IFN-γ and plays a role in the integration of innate and adaptive immune responses [106]. A study evaluated the adjuvant role of hBD2 and demonstrated increased expression levels of antiviral molecules [107].

Alum, emulsion, and liposome

Cationic adjuvant formulations (CAF01) are a liposome adjuvant containing a cocktail of dimethyldioctadecyl ammonium bromide (DDA) as a delivery vehicle and synthetic mycobacterial cord factor as an immunomodulator. Worzner et al [108] evaluated the efficacy of alum, a squalene oil-in-water emulsion system (SE), and CAF01 with spike protein antigen in mice. The finding confirmed that CAF01 induced higher levels of B, Th, and CD4+ T cells than alum [108]. In a similar study, while CAF01 induced higher titers of IFN-γ and IL-17, alum adjuvanted vaccines skewed toward IL-5, IL-10, and IL-13 [108]. Studies confirmed that pre-fusion stabilized spike protein (S-2P), S1, and RBD based subunit vaccines produced NA regardless of the adjuvant [108,109].
A study using S1 as the antigen evaluated the titer of neutralizing antibody and found that CoVaccine adjuvanted S1 protein subunit vaccines produced more neutralizing IgG antibodies than aluminum adjuvanted S1 protein vaccines [109].

CpG adjuvant

Cytosine phosphate guanine oligodeoxynucleotides (CpG ODNs) are a popular novel adjuvant that contain unmethylated CG motifs. This adjuvant activates B lymphocytes and plasmacytoid dendritic cells, which sense it through TLR9 [110]. It enhances the production of Th1 and proinflammatory cytokines. The adjuvant properties of CpG ODNs are improved when the vaccine antigen and the ODN are in close proximity. Structurally, three distinct classes of synthetic CpG ODNs have been described [111], namely, 'K/B,' 'D/A,' and 'C' type ODNs. Each class activates distinct immune responses [111][112][113]. Collectively, CpG ODN is a novel and recommended adjuvant that functions through enhancing TNF-α and IL-6 production. Additionally, CpG is known to augment the surveillance power of antigen presenting cells. The utility of CpG ODNs is further increased by their dual ability to raise mucosal and systemic immunity [111][112][113]. CpG motifs are rare in the genome of SARS-CoV-2, and its microevolution is toward even fewer CpG motifs. The low CpG motif content might be associated with the high rate of asymptomatic and mild cases. Hence, using CpG ODN as an adjuvant might be a good approach for enhancing immunogenicity with reduced toxicity [114]. A preclinical COVID-19 subunit vaccine study was carried out to determine the efficacy and safety of the SARS-CoV-2 S-2P antigen combined with CpG and/or aluminum hydroxide. The finding showed that the induction of NA is higher when CpG 1018 and aluminum hydroxide are combined than when they are used as individual adjuvants. Addition of CpG 1018 to alum suppressed the expression levels of the Th2 cytokines (IL-5 and IL-6). However, CpG is associated with liver toxicity, spleen and lymph node enlargement, and inflammation [42]. Taken together, CpG 1018 is a more potent neutralizing antibody and Th1 inducer than the alum adjuvant [115].

Saponin-based matrix-M

Matrix-M is a cocktail of two individually formed saponin matrix particles: a highly active saponin adjuvant (Fraction-C saponin) and a safe but weak saponin adjuvant (Fraction-A). The admixture generates a new potent adjuvant with a dose sparing nature. The matrix-M adjuvant is a nanoparticulate adjuvant containing a heterogeneous mixture of saponin, cholesterol, and phospholipid [116]. This is the adjuvant of the potent COVID-19 subunit vaccine recently released [93] (Table 1). Matrix-M is known to induce high titer and durable NA and multifunctional cell mediated immunity [116].

Nano-adjuvants

Several types of vaccines, including COVID-19 mRNA vaccines, are designed at the nanoscale [117][118][119][120]. The architecture and application of nano-adjuvants are reviewed elsewhere [121]. Nanomaterials have several important functions, including antigen/nucleic acid delivery, limiting bioavailability, and a depot effect, among others [122]. For instance, according to Sun et al (2020), a nanodepot of manganese was found to be effective and safe as a treatment and vaccine adjuvant compared with free Mn2+. NanoMn treatment increased the CD8+ memory T cell population, polarized macrophages into M1 types, and increased serum IgG, TNF-α, and IFN-γ concentrations. Pharmacokinetic and safety evaluation data demonstrated reduced neural inflammation.
These collectively make nano-manganese a safe and effective adjuvant for COVID-19 [123]. Another experiment was performed to evaluate the adjuvant nature of cationic nanocarriers: polyethyleneimine (PEI), DOTAP, and chitosan. The experiment compared these candidate cationic nanocarrier adjuvants with anionic and neutral nanocarrier controls. An ELISA serum antibody titer showed that the PEI adjuvanted subunit vaccine induced a significantly higher titer of NA than the control nanocarriers [124]. Nanoparticles are more membrane penetrating and are able to reach and accumulate inside DCs and macrophages. These phenomena enhance the innate immune response power of DCs and macrophages [125]. Contrary to these claims, a systematic review by Hoseini et al (2021) concluded against the effectiveness of metal nano-adjuvants [126]. Several studies in the literature and our synthesis confirmed the superior value of nano-particulate adjuvants over other forms of the same adjuvant.

Signaling through pattern recognition receptors

The innate immune cells sense the entry of invading pathogens by targeting PAMPs and damage associated molecular patterns (DAMPs). The nature of the danger is investigated, weighed, and immediately confronted by the innate immune system. The adaptive immune defense is a learned effector of the message encoded by innate immune signaling products [127,128]. Thus, it is the strength and type of the PRR-PAMP/DAMP interaction that determines the nature of the downstream signaling pathways across the PRRs for controlling infection. Adjuvants derived from PAMPs/DAMPs enhance and modulate innate immunological signal transduction pathways. Whether the SARS-CoV-2 genome contains potential PAMP adjuvants or not is a future area of scrutiny. A recent immunoinformatic study identified motifs having high affinity to TLR7 and TLR8 [129].

Retinoic acid-inducible gene-I-like receptors (RLR)

The RLRs retinoic acid-inducible gene I (RIG-I), melanoma differentiation associated gene 5 (MDA5), and laboratory of genetics and physiology 2 (LGP2) are members of the RNA helicase family and sensors of pathogen-derived RNA in the cytoplasm [130,144]. The RLR-RNA interaction activates type I interferons (IFN-α) and proinflammatory cytokines, which are known effectors of the innate immune system [144,145]. However, the RLR signaling pathway is very prone to overactivation, which leads to autoimmunity. Hence, it is under strict regulation to keep immune homeostasis [146] (Figure 3). Signaling pathways via RLRs are engaged by several viruses, including SARS-CoV-2 [147].

The cGAS-STING signaling axis

Cyclic GMP-AMP (cGAMP) synthase (cGAS) is a cytoplasmic sensor of cytosolic DNA that acts either directly or indirectly through the second messenger cGAMP [148,149]. The binding of cGAMP to the STING adaptor protein at the surface of the endoplasmic reticulum (ER) pushes the complex into the Golgi complex for further recruitment of the interferon regulatory factor 3 (IRF3), IKK, and TANK-binding kinase 1 (TBK1) complex. This complex formation, followed by phosphorylation and dimerization, leads to the production of type I interferon (IFN-α) and the type I IFN/NF-κB dependent proinflammatory cytokines [134,148,150,151]. Recent evidence proposes that, besides DNA viruses, RNA viral infection could also activate the cGAS-STING signaling pathways through DAMPs released from mitochondria. An indirect cGAS-STING signaling pathway inhibition experiment confirmed the upregulation of this pathway in SARS-CoV-2 infection [152].
Another study further identified the specific cGAS-STING signaling pathways leading to antiviral resistance. Based on this study, SARS-CoV-2 antiviral resistance in the cGAS-STING pathway is mediated through selective activation of the NF-κB pathway, while the IRF3 pathway is suppressed [153] (Figure 4).

Conclusions

Several in silico and (pre)clinical studies evaluated different types of adjuvanted COVID-19 subunit vaccines. Current COVID-19 subunit vaccine development research includes several 'novel adjuvants' that have known PRR receptors. Of these, defensins, alum, matrix-M, and CpG are the most utilized adjuvants. Despite some controversy, nanoparticulate adjuvants are found to be superior to larger sizes/forms of adjuvants. Novel SARS-CoV-2 adjuvants activate the innate immune defense system either through endosomal (TLR3/7/8/9/13) and/or cytosolic (RLRs, cGAS, and AIM2) sensors. The effectiveness of a subunit vaccine relies on the art of designing vaccines that have optimal antigen-adjuvant-PRR blending. As such, like epitopes, in-depth structural and molecular characterization of candidate adjuvants is equally important for the rational selection of adjuvants. Available evidence shows that the world will have several alternative COVID-19 vaccine adjuvants in the coming few years.

Expert opinion

Subunit vaccines are state-of-the-art modern biotechnology products. In the immunoinformatics stage, sequence identification from the database, prediction of epitope allergenicity and toxicity, adjuvant and linker selection, construction, molecular docking, and physico-chemical characterization are the key upstream research activities. The animal model experiment is a dose finding and safety evaluation stage. As such, selection of an appropriate lab animal followed by inoculation and measurement of the safety and efficacy of the vaccine are the key tasks. The clinical trial phase is the measure of safety, efficacy, and correlates of protection using clinical and immunological variables. Different types of molecular adjuvants have been applied in COVID-19 subunit vaccine development. Across the three stages (in silico, pre-clinical, and clinical), safe and effective adjuvants (adjuvanted vaccines) have been characterized in terms of their physicochemical nature, size, depot and dose sparing effects, speed of seroconversion, and ability to induce a broad spectrum immune response. All published COVID-19 subunit clinical studies demonstrated excellent efficacy and safety profiles [89,90,93]. Many more clinical trials are running against time and the pandemic (NCT04783311, NCT04780035, and NCT04813562) to produce second and third generation vaccines. However, the number of clinical trials is small compared with that of upstream experiments. This might be due to failure in defining the appropriate immunological product profile and the subsequent selection of antigens and adjuvants with synergistic immunological effects. Both pre-clinical and clinical trials confirmed the production of antibody titers several orders of magnitude higher than the convalescent sera of recovered people. The reason behind this scenario is unknown, but it is likely due to the persistent stimulation and dose sparing nature of vaccines. The most commonly used COVID-19 adjuvants include β-defensin, alum, matrix-M, MF59, and CpG. However, this does not guarantee their superiority in terms of efficacy and safety. For instance, all β-defensin adjuvanted experiments are at the in silico stage.
Despite that, it is a good step that the majority of research groups are now using PAMP/small molecule adjuvants that have known PRRs. Additionally, several improved results were obtained through modification of the existing classical adjuvants and antigens. Such a strategy must be expanded. It is likely that more potent and safe adjuvants will be identified from the study of PRR signaling pathways. Currently, major new fields are also being explored for the identification of metabolic, cell death, and epigenetic adjuvants [38]. The search for safe and effective adjuvants must go down to the nanoscale size and nanoparticulate form. This is because nanomaterials concentrate the antigen, display antigens in prolonged patterns, and help APCs co-localize antigens and adjuvants [154]. The smaller the size, the stronger the inflammatory response. On the other hand, nanoscale materials are associated with toxicity through different mechanisms than bulk materials; nanomaterial toxicity is thought to originate from size and surface area, composition, and shape [155]. Summing up, the nanoparticulate adjuvant is a promising future area of vaccine research. Safety is the single most important issue when we talk about adjuvants. The surrogate markers of the correlate of protection, such as titer, durability, class switching, rate of seroconversion, and dose sparing, are common across the major adjuvants, and the differences between them are largely insignificant. Rather, the major differences concern the ability of an adjuvant to induce cell-mediated immunity (polyfunctional CD4+ Th and CD8+ T cells), the balance of Th1/Th2, the induction of life-long memory cells, etc. Future vaccine research must focus on identifying adjuvants that have the potential of reducing the number of vaccine shots per individual and are able to induce tissue resident memory T cells and long lived plasma cells [38]. Taken together, adjuvants in subunit COVID-19 vaccines are the unsung heroes that give the most controlled, efficacious, and safe vaccines. Our review showed that the search for effective and safe subunit vaccines is broadening with unprecedented depth and speed. The search spans from modification of the existing adjuvants to mining of the OMICS sciences. The results of new formulations of the existing adjuvants are astonishing. The continued spillover of pandemic infectious diseases is leveraging the vaccine research arena and is expected to boost biomedical and vaccine research funding. Hence, the search for biological adjuvants is an untouched area of innovation.

Acknowledgments

The authors would like to express their thankfulness to the staff of the First Affiliated Hospital of USTC and colleagues in the TJ Lab of USTC for their encouragement and support.

Reviewer comments

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Author contributions

D Mekonnen set the outline and subtopics, collected the literature, reviewed evidence, and drafted the manuscript. HM Mengist reviewed the draft and edited the manuscript. T Jin conceived the review topic, supervised the review process, reviewed, investigated, and validated the final manuscript. All authors read and approved the final manuscript.

Data availability statement

All datasets presented in this study and its supplementary materials are included in the submission.
2021-10-12T06:23:26.216Z
2021-10-11T00:00:00.000
{ "year": 2021, "sha1": "99aaa3616fb125dc2815223cb0beb6e3290098a9", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14760584.2021.1991794?needAccess=true", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "088ff351b418d7da17bcb5145f3588fb289cbebe", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248094480
pes2o/s2orc
v3-fos-license
Effects of synbiotics preparations added to Pengging duck diets on egg production and egg quality and hematological traits

Background and Aim: Duck eggs have high cholesterol levels; the addition of inulin combined with a probiotic is known from several studies to lower cholesterol while maintaining egg production capacity and blood hematology. This study aimed to investigate the effect of the addition of synbiotic preparations on the egg production, egg quality, and hematology of Pengging ducks. Materials and Methods: A total of 200 female Pengging ducks aged 75 weeks (late production phase) and weighing 1467±90.87 g were maintained in litter cages, each measuring 1×1 m. The treatments included the addition of synbiotics combining inulin of gembili tuber (Dioscorea esculenta L.) and Lactobacillus plantarum Ina CC B76 as follows: T0 = control feed ("farmer feed"), T1 = control feed + synbiotics 1 mL/100 g, T2 = control feed + synbiotics 1.5 mL/100 g, and T3 = control feed + synbiotics 2 mL/100 g in the feed. A completely randomized design was used in this study. The production performance, physical and chemical qualities of eggs, and hematological parameters of Pengging ducks were evaluated. Results: The addition of synbiotics had no significant impact on the production performance, physical and chemical qualities of eggs, and hematological parameters (p>0.05), except for the egg yolk cholesterol content. The cholesterol content decreased significantly (p<0.05) with the T2 and T3 treatments, but T2 and T3 did not differ significantly from each other (p>0.05). A significant decrease (p<0.01) in cholesterol levels was observed when the synbiotic dose was given at 1.5 mL/100 g feed (T2). However, there was no further decrease in cholesterol level when the synbiotic dose was increased to 2 mL/100 g feed (T3). Conclusion: The addition of synbiotic preparations at 1.5 mL/100 g reduced the cholesterol content but did not improve the egg production, egg physical quality, and hematology of Pengging ducks.

Introduction

The Pengging duck is an Indonesian local egg-producing duck. The age at first egg is approximately 6 months, with an egg production rate of 110-130 eggs/year [1]. In general, duck maintenance is still performed using a low-quality ration ("farmer feed"), due to which egg production and egg physical quality are very low, especially in the late production phase. Akruyek and Okure [2] mentioned that after the age of 50 weeks, egg production decreases and egg weight increases; however, the weight and thickness of the eggshell, the pH of the albumen and yolk, the yolk index, and the albumen index decrease. Duck eggs contain complete nutrition, that is, protein 10.7% [3], omega-3 and omega-6 fatty acids [4], cholesterol 2.04 mg/g, and fat 31.88% [5]. Some consumers have an issue with the high content of cholesterol in duck eggs. It has been reported that the regular consumption of half an egg per day could increase the risk of developing cardiovascular disease in adults [6]. Feed additives are widely used in diets to improve the production and quality of chicken eggs, whereas their use for duck eggs is still extremely limited. Kiczorowska et al. [7] reported that feed additives increase the production performance of monogastric animals.
Moreover, studies have shown that synbiotic supplementation (a combination of prebiotics and probiotics) as a feed additive improves health, nutrient absorption, and livestock production performance [8]; increases hemoglobin concentration; decreases the heterophil-to-lymphocyte (H/L) ratio [9]; increases eggshell weight and thickness [10]; and increases hen-day egg production at 19, 20, 21, 22, and 23 weeks of age [11]. Furthermore, the addition of synbiotics can also decrease egg cholesterol levels [12]. The administration of a probiotic (Saccharomyces spp.) was found to decrease the cholesterol content of the yolk and increase the egg mass, feed efficiency, feed digestibility, yolk color, yolk and eggshell weight, shell thickness, and Ca content in the eggshell and yolk of duck eggs [13]. Natural feed additives consisting of natural ingredients, probiotics, and phytobiotics were found to increase egg production and egg quality and decrease the cholesterol content of Mojosari duck eggs [14]. Furthermore, Savedboworn et al. [15] reported that the use of inulin as a prebiotic increased the viability of Lactobacillus plantarum. Synbiotics (a mixture of inulin and L. plantarum) evidently inhibited the proliferation of pathogenic bacteria [16] and improved intestinal morphology, metabolizable energy, and nitrogen retention [17]. The improvement of intestinal morphology increases nutrient absorption, thereby improving physical quality and reducing the egg cholesterol content [18]. Shehata et al. [19] reported that synbiotics could reduce cholesterol content through the mechanisms of bile deconjugation by bile salt hydrolase (BSH), binding of cholesterol to the cellular surface, coprecipitation of cholesterol with deconjugated bile, incorporation of cholesterol into cellular membranes, and short-chain fatty acids. They found that conjugated bile salts were regularly recirculated back into the enterohepatic circulation, whereas the circulating deconjugated bile salts were less soluble and were eliminated in the excreta. In the present study, inulin derived from gembili was combined with L. plantarum Ina CC B76. Although synbiotics (based on inulin from gembili tubers and L. plantarum) have been used in broilers [17,24,25], to the best of the authors' knowledge, they have never been used in ducks, especially in the late production phase. This study aimed to evaluate the effect of the addition of synbiotics (inulin of gembili tuber and L. plantarum InaCC B76) to "farmer feed" on the production, physical and chemical qualities, especially the cholesterol content, and hematology of Pengging duck eggs in the late production phase.

Ethical approval

The procedure for using ducks in this study was approved by the Animal Ethics Committee of the Faculty of Animal Sciences, Diponegoro University, Semarang, Indonesia, approval number 57-09/A-6/KEP-FPP.

Study period and location

This research was conducted from September to December 2021. Samples were collected from the Duck Breeding and Rearing Unit "Satker Itik Banyubiru", Semarang District, Central Java.

Animals

In this study, 200 female Pengging ducks aged 75 weeks (late production phase) with uniform body weight (1467±90.87 g) were used; they were maintained in an open-sided housing system. During the maintenance period, the temperature and relative humidity were 21.64-29.86°C and 60.39-88.36%, respectively.
The basal diet was formulated based on the "farmer feed" with 14.23% protein content, 4.58% fat, 14.41% crude fiber, and 2,403.74 kcal/kg ME, plus synbiotics (inulin of gembili tuber and L. plantarum) with a total bacterial count of 5.8×10^8. The "farmer feed" consisted of 32.5% yellow corn, 40% rice bran, and 27.5% commercial concentrate, a product of PT Charoen Pokphand Indonesia Tbk (Hi-Pro-Vite 144). The nutrient content of the Hi-Pro-Vite 144 concentrate is protein 37.0%-39.0%, fat 2%, crude fiber 6%, Ca 12%, P 1.2%, and 1750-1850 kcal/kg ME. The feed ingredients and nutritional content of the "farmer feed" are presented in Table-1 (analyzed values: crude protein 14.23%, crude fat 4.08%, crude fiber 14.41%; footnotes: (1) product of PT Charoen Pokphand Indonesia Tbk (Hi-Pro-Vite 144); (2) metabolizable energy calculated according to the formula of Bolton cited by Sugiharto et al. [52]).

Preparation of synbiotics

Gembili was harvested at the age of approximately 9 months and obtained from Pati, Central Java, and L. plantarum InaCC B76 was a product of the Indonesian Institute of Sciences. Gembili was washed, peeled, sliced, sun-dried, and then mashed (gembili tuber flour). Inulin was extracted using the method developed by Setyaningrum et al. [25]. Briefly, gembili tuber flour was added to hot water (90°C) in a ratio of 1:15, heated in a water bath at 80°C for 1 h, and then filtered using a filter cloth. The resulting filtrate was precipitated with 40% ethanol and stored in a freezer for 6 h. It was then removed from the freezer, allowed to melt, and then centrifuged at 3,075×g for 5 min to obtain inulin deposits. The resulting precipitate was dried in an oven at 50°C and ground into inulin flour. Synbiotics were prepared by mixing 7 g of inulin flour in 100 mL of distilled water with 10 mL of L. plantarum at a bacterial concentration of 1×10^9 CFU/mL and incubating at 37°C for 24 h.

Tests and procedures

Egg production data were collected every day for 4 weeks, and feed consumption, egg weight, and egg mass were measured every day. Feed conversion was calculated by dividing feed consumption/individual/day by egg mass. Data concerning egg physical quality (eggshell weight, eggshell thickness, eggshell strength, yolk weight, albumen weight, Haugh unit (HU), yolk index, yolk color, and albumen pH) were obtained by weekly sampling, and the chemical quality of the yolk was measured by sampling three eggs/experimental unit in the final week of the study (78 weeks of age). The protein content was analyzed using the Kjeldahl method [26]. The fat content was measured by Soxhlet extraction. The calcium content was analyzed using an atomic absorption spectrophotometer (AAS-AA 6200 Shimadzu, Japan), and the yolk cholesterol content was determined using the enzymatic colorimetric method. First, the sample was saponified using methanolic KOH and then treated with the Fluitest cholesterol kit. Results were read using a spectrophotometer at 500 nm. The analyses of the cholesterol and triglyceride contents were conducted based on the cholesterol p-aminophenazone method [13], and the HDL and LDL analyses were based on the enzymatic colorimetric method [13]. Blood hematology was evaluated using a hematology analyzer at the end of the study using a sample of one duck per experimental unit. Approximately 2 mL of blood was collected from the brachial vein and mixed with ethylenediaminetetraacetic acid, after which the hematological profile was analyzed. Hemoglobin level was measured by cyanide-free hemoglobin spectrophotometry, and erythrocyte, hematocrit, and leukocyte levels were determined using the electrical impedance method.

Statistical analysis

Data were analyzed in a completely randomized design using one-way analysis of variance and Duncan's multiple range test at the 5% significance level. The analysis was performed using the Statistical Analysis System (SAS) University Edition (https://www2.nau.edu/stat-lic/sas/sas-univ.html). Several data sets, such as the hematological results, did not follow the normal distribution; hence, a transformation was performed according to the characteristics of the data before the analysis of variance.
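To make the performance definitions above explicit, here is a minimal sketch of the egg mass and feed conversion calculation (the daily records below are invented placeholders, not data from this study):

# Hypothetical daily records for one experimental unit
feed_consumption = 160.0    # g of feed per duck per day
hen_day_production = 0.55   # fraction of ducks laying on a given day
egg_weight = 60.0           # g per egg

egg_mass = hen_day_production * egg_weight     # g of egg per duck per day
feed_conversion = feed_consumption / egg_mass  # g of feed per g of egg, as defined in the text
print(f"Egg mass: {egg_mass:.1f} g/duck/day; feed conversion: {feed_conversion:.2f}")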
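For readers who want to reproduce this kind of analysis outside SAS, a minimal Python/scipy sketch of the one-way ANOVA over the four treatment groups follows; the yolk cholesterol values are invented placeholders (the real data are in Table-3), and Tukey's HSD stands in here for Duncan's multiple range test, which scipy does not provide:

from scipy import stats  # requires SciPy >= 1.11 for tukey_hsd

# Hypothetical yolk cholesterol values (mg/g) per treatment group
t0 = [14.1, 13.8, 14.5, 14.0, 13.9]
t1 = [13.7, 13.9, 13.5, 14.0, 13.6]
t2 = [12.1, 12.4, 11.9, 12.3, 12.0]
t3 = [12.2, 12.0, 12.5, 11.8, 12.3]

# One-way ANOVA for a completely randomized design with one factor (treatment)
f_stat, p_value = stats.f_oneway(t0, t1, t2, t3)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons between treatment means
print(stats.tukey_hsd(t0, t1, t2, t3))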
Results

Production performance

Table-2 shows the results of the effect of adding synbiotics (inulin extract of gembili tuber and L. plantarum) on the egg production and egg quality of Pengging ducks aged 78-82 weeks. The addition of synbiotics had no significant effect on the feed consumption, egg production, egg weight, egg mass, and feed conversion of Pengging ducks.

Physical and chemical egg quality

The addition of synbiotics had no significant impact on the egg physical quality and the yolk protein, fat, and Ca contents (p>0.05), but it had a significant (p<0.05) impact on yolk cholesterol (Table-3). The yolk cholesterol content decreased with the addition of 1.5 mL/100 g (T2) and 2.0 mL/100 g (T3) synbiotics to the feed.

Hematology

As shown in Table-4, the addition of synbiotics had no significant effect on the hematological parameters of Pengging ducks (p>0.05).

Discussion

Production performance

Table-2 shows that the production performance was lower than in previous research reports. For example, Purwati et al. [27] reported that the average weight of a Pengging duck egg was 63.66 g at a feed consumption of 160 g. The ducks used in the present study were old or pre-molting; for this reason, their production performance was low. In old age, the ovary and oviduct weight decrease [28], the levels of the hormones required for the process of egg formation decrease, and egg production becomes low [29]. In the present study, 78-82-week-old ducks were used; thus, the addition of synbiotics (inulin of gembili tuber and L. plantarum) had no significant effect on feed consumption, feed conversion, egg production, egg weight, and egg mass (p>0.05). The ducks were in the late production/pre-molting phase; hence, the added synbiotics caused no changes in the intestinal villi and did not increase nutrient digestibility or production performance. Along with the increasing age of the ducks, there was a decrease in the levels of the hormones required for egg formation, so egg production decreased, and the addition of the synbiotics could not increase egg production. According to Dibner and Richards [30], the structure, dynamics, and function of the digestive organs are influenced by age and diminish with age. Older poultry have lower egg production [31]. Purbarani et al. [32] reported that feed containing 18% protein (low protein) fortified with a combination of 1.2% inulin of dahlia tuber and 1.2 mL Lactobacillus spp. increased the height of the jejunal villi and the growth of 8- to 70-day-old chickens. The combination of probiotics and phytobiotics in a "standard feed" (17% protein and 2,654 kcal/kg ME) was found to improve the histomorphology of the ileum. However, it did not significantly affect the protein digestibility, feed conversion, and egg mass of laying hens aged 72-77 weeks [33]. Tang et al. [34] reported that the supplementation of synbiotics effectively increased the feed consumption, feed conversion, egg production, egg weight, and egg mass of laying hens aged 20-36 weeks but had no significant effect in laying hens aged 37-52 weeks. The addition of 1.3% inulin to the feed increased the length of the small and large intestines, egg production, and egg weight of laying hens aged 57-60 weeks [35]. Consistently, Zarei et al. [10] showed that the use of chemical/commercial synbiotics did not significantly affect feed consumption, egg production, egg mass, egg weight, and feed conversion. However, this finding differed from that reported by Getachew [36], who showed that probiotic supplementation may increase egg production.

Physical and chemical egg quality

As shown in Table-3, the physical quality of the eggs was not significantly different (p>0.05). This result confirmed that the addition of synbiotics (inulin of gembili tuber and L. plantarum) could not improve the egg physical quality of late-phase ducks. The supplementation of synbiotics to old ducks fed a low-protein diet ("farmer feed") also did not increase egg quality. This result indicated that this synbiotic preparation cannot improve intestinal histomorphology, and the levels of the hormones required for the process of egg formation in old ducks were low. According to Table-3, the yolk, albumen, and eggshell weights were lower than in previous research studies; Sun et al. [3] showed that the yolk weight of 50-week-old ducks was 24.06 g, whereas the egg white/albumen weight was 42.79 g. The eggshell weight of the Pengging duck was 8.40 g, and the eggshell thickness was 0.29 mm [37]. The yolk color was within the normal range (11.75-12.45). According to Du et al. [38], the yolk color score of Shan Partridge ducks was 12.38, whereas the eggshell strength was 4.34 kg.f, the HU was 73.64, the yolk weight was 24.56 g, and the eggshell thickness was 0.43 mm. The yolk protein content was within the normal range. The protein content of duck egg was 9.24%, the cholesterol content was 11.38%, the egg white relative weight was 50.87%, and the yolk relative weight was 32.68% [39]. The protein, fat, and calcium contents of the yolk were not significantly different, but the cholesterol content was (p<0.05). The addition of 1-2 mL/100 g synbiotics did not increase the egg protein and Ca contents, but at the levels of 1.5 and 2 mL/100 g, the yolk cholesterol content was reduced. In the present study, the "farmer feed" had a very low protein and a very high crude fiber content (Table-1), so despite synbiotic supplementation, the protein and Ca deposition in eggs were not significantly different (p>0.05). This result is consistent with the study of Sari et al. [24], who showed that supplementation of 0.5-1.5% synbiotics in drinking water did not significantly affect the yolk protein and fat content of eggs. In the present study on Pengging ducks aged 78-82 weeks fed with low-nutrient feed ("farmer feed"), it was expected that the egg protein and Ca contents would increase and that the fat and cholesterol contents would decrease. However, the yolk protein, fat, and Ca contents were not significantly different, which may be because of the ducks' age. At this age, their intestinal morphology did not change. Thus, the intestine absorbed nutrients only adequately for body maintenance, and egg production (Table-2) and the yolk protein and Ca contents did not increase (Table-3).
Villagrán-de la Mora et al. [40] reported that supplementation of synbiotics in drinking water increased the number of lactic acid bacteria, villus length, and crypt depth and resulted in a better villus-to-crypt ratio. Synbiotic addition increased the height of the villi in the duodenum and ileum of 35-week-old [41] and 48-week-old broilers [42]. However, supplementation with feed additives had no significant effect on the intestinal morphology of 73-week-old layers [43]. Because the ducks were old, the synbiotics did not increase the length of the villi, the depth of the crypts, or the villus-to-crypt ratio, and hence, nutrient absorption was not optimal. Prakatur et al. [44] reported that nutrient absorption is influenced by the villus height and the villus height-to-crypt depth ratio; that is, the greater the villus height and the villus height-to-crypt depth ratio, the higher the nutrient absorption. As shown in Table-3, the addition of synbiotics (inulin of gembili tuber and L. plantarum, 1.5 mL/100 g [T2]) was able to reduce the yolk cholesterol content (p<0.05). According to Getachew [36], supplementation with probiotics reduced the chicken egg cholesterol content. Shehata et al. [19] mentioned that synbiotics could reduce cholesterol content through BSH-mediated bile salt deconjugation in the enterohepatic circulation, with the deconjugated bile then eliminated through the excreta. BSH is known to facilitate bile salt deconjugation. The hypocholesterolemic effect of synbiotics was due to reduced cholesterol absorption from the gastrointestinal tract and/or the deconjugation of bile salts in the intestine, which would prevent their reabsorption through the enterohepatic circulation. In a previous study, Elkin et al. [45] cautioned that the cholesterol-lowering ability of probiotics or prebiotics should be interpreted carefully because, in most studies, the yolk weight was not reported. Concerning the hypocholesterolemic effect in the present study, it was manifested without affecting the hen-day production, the size or weight of the yolk, or the whole egg weight. An increasing influx of polysaccharides into the cecum can increase the microbial population [46]. Inulin is rich in complex polysaccharide compounds that may provide supporting substances for gut microbial proliferation, including lactic acid bacteria and other beneficial microorganisms. Similarly, it can cause alterations in the physical conditions of the digestive tract environment, such as an optimum intestinal pH [47]. The availability of polysaccharides and an increased number of beneficial microorganisms can reduce blood cholesterol profiles [46], and a decrease in blood cholesterol levels was positively related to a decrease in egg cholesterol levels. The cholesterol-lowering effect of the probiotic was due to reduced cholesterol absorption from the gastrointestinal tract and/or the deconjugation of bile salts in the intestine, which would prevent their reabsorption through the enterohepatic circulation [45].

Hematology

There were no significant changes in the hematological status (p>0.05), as shown in Table-4. The addition of synbiotics successfully reduced the yolk cholesterol content without affecting the hematological status of the ducks. This result was similar to those of previous studies, where the addition of synbiotics to the feed was found to have no significant impact on the hemoglobin levels in the starter, grower, and finisher phases of broilers [47,48] and on the H/L ratio of laying hens [10,49].
Similarly, Zbikowski et al. [50] and Tarabees et al. [51] reported that the use of synbiotics had no significant effect on the hematology of broilers.

Conclusion

The addition of 1.5 mL/100 g synbiotic (inulin of gembili tuber and L. plantarum) preparations to the feed reduced the cholesterol content in the egg yolks of Pengging ducks. However, there was no improvement in egg production, egg physical quality, and hematology. A limitation of this study is that it was conducted using Pengging ducks (an Indonesian local duck); the egg cholesterol deposition response might be different in other types of ducks. In the future, the results can be implemented to produce duck eggs with lower cholesterol levels.

Authors' Contributions

SK: Conducted the research and data collection and drafted the manuscript. DS: Developed the feeding concept and supervised the study. LD: Designed the experiment. TAS: Conducted the data analysis. All authors read and approved the final manuscript.
2022-04-12T15:03:23.348Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "991f60de542517cd7590b3aa43cf9fe7dc594a27", "oa_license": "CCBY", "oa_url": "http://www.veterinaryworld.org/Vol.15/April-2022/8.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3e2faf5f5ff375e0a5ae021f8b120aeb3895de40", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
119173069
pes2o/s2orc
v3-fos-license
$S$-arithmetic Inhomogeneous Diophantine approximation on manifolds

We investigate S-arithmetic inhomogeneous Khintchine type theorems in the dual setting for nondegenerate manifolds. We prove the convergence case of the theorem, including, in particular, the S-arithmetic inhomogeneous counterpart of the Baker-Sprindžuk conjectures. The divergence case is proved for Q_p but in the more general context of Hausdorff measures. This answers a question posed by Badziahin, Beresnevich and Velani [4].

Introduction

In this paper we are concerned with metric Diophantine approximation on nondegenerate manifolds in the p-adic, or more generally S-arithmetic setting for a finite set of primes S. To motivate our results we recall Khintchine's theorem, a basic result in metric Diophantine approximation. Let Ψ : R^n → R_+ be a function satisfying

Ψ(a_1, ..., a_n) ≥ Ψ(b_1, ..., b_n) if |a_i| ≤ |b_i| for all i = 1, ..., n.  (1.1)

Such a function is referred to as a multivariable approximating function. Given such a function, define W_n(Ψ) to be the set of x ∈ R^n for which there exist infinitely many a ∈ Z^n such that

|a_0 + a · x| < Ψ(a)  (1.2)

for some a_0 ∈ Z. (Ghosh acknowledges support of a UGC grant and a CEFIPRA grant.) When Ψ(a) = ψ(‖a‖) for a non-increasing function ψ, we write W_n(ψ) for W_n(Ψ). Khintchine's Theorem ([29], [27]) gives a characterization of the measure of W_n(ψ) in terms of ψ:

|W_n(ψ)| = 0 if Σ_{k=1}^∞ k^{n-1} ψ(k) < ∞, and W_n(ψ) has full measure if Σ_{k=1}^∞ k^{n-1} ψ(k) = ∞.  (1.3)

Here, ‖ ‖ denotes the supremum norm of a vector and | | denotes the absolute value of a real number as well as the Lebesgue measure of a measurable subset of R^n; the context will make the use clear. The kind of approximation considered above is called "dual" approximation in the literature, as opposed to the setting of simultaneous Diophantine approximation. In this paper, we will only consider dual approximation. Given an approximating function, one can consider the corresponding S-arithmetic question as follows; we follow the notation of Kleinbock and Tomanov [33]. Given a finite set of primes S of cardinality l, we set Q_S := ∏_{ν∈S} Q_ν and denote by | |_S the S-adic absolute value, |x|_S = max_{ν∈S} |x^{(ν)}|_ν. For a = (a_1, ..., a_n) ∈ Z^n and a_0 ∈ Z we set ã := (a_0, a_1, ..., a_n). We say that y ∈ Q_S^n is Ψ-approximable (y ∈ W_n(S, Ψ)) if there are infinitely many solutions a ∈ Z^n to

|a_0 + a · y|_S < Ψ(a)  (1.4)

for some a_0 ∈ Z. We fix a Haar measure on Q_p, normalized to give Z_p measure 1, and denote the product measure on Q_S by | |_S. Then, the following analogue of Khintchine's theorem can be proved. Namely,

Theorem 1.2. W_n(S, ψ) has zero or full measure depending on the convergence or divergence of the series (1.5).

Indeed, the convergence case follows from the Borel-Cantelli lemma as usual, and the divergence case can be proved using the methods in [36].

1.1. Inhomogeneous approximation. Given a multivariable approximating function Ψ and a function θ : R^n → R, we set W_n^θ(Ψ) to be the set of x ∈ R^n for which there exist infinitely many a ∈ Z^n \ {0} such that

|a_0 + a · x + θ(x)| < Ψ(a)  (1.6)

for some a_0 ∈ Z. For ψ as above, the set W_n^θ(ψ) is often referred to as the (dual) set of "(ψ, θ)-inhomogeneously approximable" vectors in R^n.
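To indicate where the convergence conditions in the results below come from, we record the standard Borel-Cantelli computation (a sketch we include for the reader's convenience; it assumes x ranges over a fixed ball B ⊂ R^n and that θ is bounded and Lipschitz on B). For fixed a ∈ Z^n \ {0} with ‖a‖ large, each set {x ∈ B : |a_0 + a·x + θ(x)| < Ψ(a)} lies in a neighbourhood of width ≪ Ψ(a)/‖a‖ of a level set, and only ≪_B ‖a‖ values of a_0 ∈ Z are relevant, so
\[
\Big|\bigcup_{a_0\in\mathbb{Z}}\{x\in B : |a_0 + a\cdot x + \theta(x)| < \Psi(a)\}\Big| \ \ll_B\ \Psi(a).
\]
Hence, if \(\sum_{a\in\mathbb{Z}^n\setminus\{0\}} \Psi(a) < \infty\), the Borel-Cantelli lemma gives \(|W^\theta_n(\Psi)\cap B| = 0\); for \(\Psi(a)=\psi(\|a\|)\) this sum is comparable to \(\sum_{k\ge 1} k^{n-1}\psi(k)\), matching the series in (1.3). The substance of the theorems below is that the same dichotomy survives restriction to a nondegenerate submanifold, where this simple covering argument is no longer available.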
The following inhomogeneous version of Theorem 1.1 is established in [4]. We denote by C^n the set of n-times continuously differentiable functions.

Theorem 1.3. Let θ : R^n → R be a C^2 function. Then W^θ_n(ψ) has zero or full Lebesgue measure according as the series in (1.3) converges or diverges.  (1.7)

We remark that the choice of θ = constant is the setting of traditional inhomogeneous Diophantine approximation, and in that case the above result was well known; see for example [19]. Similarly, inhomogeneous Diophantine approximation can be considered in the S-arithmetic setting. For a multivariable approximating function Ψ and a function Θ : Q_S^n → Q_S, we say that a vector x ∈ Q_S^n is (Ψ, Θ)-approximable if there exist infinitely many (a, a_0) ∈ Z^n \ {0} × Z such that

|a_0 + a · x + Θ(x)|_S < Ψ(a).  (1.8)

The convergence case of Khintchine's theorem in this setting again follows from the Borel-Cantelli lemma. The divergence theorem when S = {p} comprises a single prime p is a consequence of the results in this paper.

Diophantine approximation on manifolds. In the theory of Diophantine approximation on manifolds, one studies the inheritance of generic (for Lebesgue measure) Diophantine properties by proper submanifolds of R^n. This theory has seen dramatic advances in the last two decades, beginning with the proof of the Baker-Sprindžuk conjectures by Kleinbock and Margulis [32] using nondivergence estimates for certain flows on the space of unimodular lattices. Motivated by problems in transcendental number theory, K. Mahler conjectured in 1932 that almost every point on the curve f(x) = (x, x^2, ..., x^n) is not very well approximable, i.e. ψ-approximable for ψ := ψ_ε(k) = k^{-n-ε}. This conjecture was resolved by V. G. Sprindžuk [41,42], who in turn conjectured that almost every point on a nondegenerate manifold is not very well approximable. This conjecture, in a more general, multiplicative form, was resolved by D. Kleinbock and G. Margulis in [32]. The following definition is taken from [33] and is based on [32]. Let f : U → F^n be a C^k map, where F is any locally compact valued field and U is an open subset of F^d, and say that f is nondegenerate at x_0 ∈ U if the space F^n is spanned by the partial derivatives of f at x_0 up to some finite order. Loosely speaking, a nondegenerate manifold is one which is locally not contained in an affine subspace. Subsequent to the work of Kleinbock and Margulis, there were rapid advances in the theory of dual approximation on manifolds. In [11] (and independently in [1]) the convergence case of the Khintchine-Groshev theorem for nondegenerate manifolds was proved, and in [6] the complementary divergence case was established. As for the p-adic theory, Sprindžuk [41] himself established the p-adic and function field (i.e. positive characteristic) versions of Mahler's conjecture. Subsequently, there were several partial results (cf. [34,7]) culminating in the work of Kleinbock and Tomanov [33], where the S-adic case of the Baker-Sprindžuk conjectures was settled in full generality. In [23], the second named author established the function field analogue. The convergence case of Khintchine's theorem for nondegenerate manifolds in the S-adic setting was established by Mohammadi and Golsefidy [37] and the divergence case for Q_p in [38]. In the case of inhomogeneous Diophantine approximation on manifolds, following several partial results (cf. [18] and the references in [12,13]), an inhomogeneous transference principle was developed by Beresnevich and Velani, using which they resolved the inhomogeneous analogue of the Baker-Sprindžuk conjectures. Subsequently, Badziahin, Beresnevich and Velani [4] established the convergence and divergence cases of the inhomogeneous Khintchine theorem for nondegenerate manifolds.
They proved a new result even in the classical setting by allowing the inhomogeneous term to vary. The divergence theorem is established in the same paper in the more general setting of Hausdorff measures. In this paper, we will establish the convergence case of an inhomogeneous Khintchine theorem for nondegenerate manifolds in the S-adic setting, as well as the divergence case for Q_p. As in [4], the divergence case is proved in the greater generality of Hausdorff measures. Prior results in the p-adic theory of inhomogeneous approximation for manifolds focused mainly on curves, cf. [14,15,43,44].

Main Results. To state our main results, we introduce some notation following [37], recall some of the assumptions from that paper and set forth one further standing assumption. The assumptions are as follows.

(I0) S contains the infinite place.

(I1) We will consider the domain to be a product U = ∏_{ν∈S} U_ν, where each U_ν is an open ball in Q_ν^{d_ν}. Here, the norm is taken to be the Euclidean norm at the infinite place and the L^∞ norm at finite places.

(I2) f = (f_ν)_{ν∈S}, where each f_ν : U_ν → Q_ν^n is an analytic map for any ν ∈ S and can be analytically extended to the boundary of U_ν.

(I3) We assume that the restrictions of 1, f to any open subset of U_ν are linearly independent over Q_ν and that ‖f(x)‖ ≤ 1, ‖∇f_ν(x_ν)‖ ≤ 1 and |Φ_β f_ν(y_1, y_2, y_3)| ≤ 1/2 for any ν ∈ S, any second difference quotient Φ_β and any x_ν, y_1, y_2, y_3 ∈ U_ν. We refer the reader to Section 3 for definitions.

(I4) We assume that the function Ψ : Z^n → R_+ is monotone decreasing componentwise, i.e. Ψ(a_1, ..., a_n) ≥ Ψ(b_1, ..., b_n) whenever |a_i| ≤ |b_i| for all i = 1, ..., n.

The divergence case of our theorem is proved in the more general setting of Hausdorff measures. However, we need to impose some restrictions: we only consider the case when S = {p} consists of a single prime, the inhomogeneous function is assumed to be analytic, and the approximating function is not as general as in Theorem 1.4. We will denote by H^s(X) the s-dimensional Hausdorff measure of a subset X of Q_S^n and by dim X the Hausdorff dimension, where s > 0 is a real number.

Theorem 1.5. Let S be as in (I0) and U as in (I1). Suppose f : U ⊂ Q_p^m → Q_p^n satisfies (I2) and (I3). Let

Ψ(a) = ψ(‖a‖), a ∈ Z^{n+1}  (1.10)

be an approximating function and assume that s > m − 1. Let Θ : U → Q_p be an analytic map satisfying (I5). Then (1.11) holds.

Given an approximating function ψ, the lower order at infinity τ_ψ of 1/ψ is defined by

τ_ψ := lim inf_{t→∞} (−log ψ(t)) / (log t).  (1.12)

The divergent sum condition of Theorem 1.5 is satisfied whenever s is below an explicit threshold depending on τ_ψ. Therefore, by the definition of Hausdorff measure and dimension, we get

Corollary 1.1. Let f and Θ be as in Theorem 1.5. Let ψ be an approximating function as in (1.10) such that n + 1 ≤ τ_ψ < ∞. Then (1.13) holds.
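As a worked example of the lower-order condition in Corollary 1.1 (an example we add, using only the definition of τ_ψ above): for \(\psi(t) = t^{-\tau}\) with \(\tau > 0\),
\[
\tau_\psi = \liminf_{t\to\infty} \frac{-\log\psi(t)}{\log t} = \liminf_{t\to\infty} \frac{\tau\log t}{\log t} = \tau,
\]
so the hypothesis \(n+1 \le \tau_\psi < \infty\) holds exactly when \(\tau \ge n+1\). The same computation shows that logarithmic perturbations are invisible to \(\tau_\psi\): for \(\psi(t) = t^{-\tau}(\log t)^{-1}\) one still gets \(\tau_\psi = \tau\), since \(\log\log t / \log t \to 0\).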
Remarks. (1) We have assumed that S contains the infinite place in Theorem 1.4. This is not a serious assumption; the proof in the case when S contains only finite places needs some minor modifications but follows the same outline, and details will appear in [20], the PhD thesis, under preparation, of the first named author. In [37], the (homogeneous) S-adic convergence case is proved in slightly greater generality than in the present paper. Namely, instead of Q, the quotient field of a finitely generated subring of Q is considered. This more general formulation will also be investigated in [20].

(2) Our proof of the convergence case, namely Theorem 1.4, blends techniques from the homogeneous results, namely [33,11,37], and uses the transference principle developed by Beresnevich and Velani in the form used in [4]. The structure of the proof is the same as in [4]. We also take the opportunity to clarify some properties of (C, α)-good functions in the S-adic setting, which may be of independent interest.

(3) The proof of Theorem 1.5 follows the ubiquity framework used in [4] but needs new ideas to implement in the p-adic setting. At present, we are unable to prove the more general S-adic divergence statement. We note that the S-adic case remains open even in the homogeneous setting.

(4) We now undertake a brief discussion of the assumptions (I1)-(I5). The conditions (I1)-(I4) are assumed in [37] and, as explained in loc. cit., are assumed for convenience. Namely, as mentioned in [37], the statement for any nondegenerate analytic manifold over Q_S follows from Theorem 1.4. In [4], the inhomogeneous parameter Θ is allowed to be C^2 when restricted to the nondegenerate manifold. However, we need to assume it to be analytic.

(5) Theorem 1.5 is slightly more general than Theorem 1.2 of [38] in the homogeneous setting. In [38], the approximating function is taken to be of the form

Ψ(a) = ‖a‖^{-n} ψ(‖a‖), a ∈ Z^{n+1},  (1.14)

which is a more restrictive class of approximating functions. For an n-tuple v = (v_1, ..., v_n) of positive numbers satisfying v_1 + ... + v_n = n, define the v-quasinorm ‖ ‖_v on R^n by setting

‖a‖_v := max_{1≤i≤n} |a_i|^{1/v_i}.

Following [4], we say that a multivariable approximating function Ψ satisfies property P if Ψ(a) = ψ(‖a‖_v) for some approximating function ψ and v as above. As noted in loc. cit., when v = (1, ..., 1) we have that ‖a‖_v = ‖a‖, and any approximating function ψ satisfies property P, where ψ is regarded as the function a → ψ(‖a‖). The proof of Theorem 1.5 can be modified to deal with the case of functions satisfying property P.

Structure of the paper. In the next section, we recall the transference principle of Beresnevich and Velani. The subsequent section studies (C, α)-good functions in the S-adic setting. We then prove Theorem 1.4 and then Theorem 1.5. We conclude with some open questions.

Inhomogeneous transference principle

In this section we state the inhomogeneous transference principle of Beresnevich and Velani from [12, Section 5], which will allow us to convert our inhomogeneous problem to the homogeneous one. Let (Ω, d) be a locally compact metric space. Given two countable indexing sets A and T, let H and I be two maps from T × A × R_+ into the set of open subsets of Ω, written

(t, α, ε) ↦ H_t(α, ε) and (t, α, ε) ↦ I_t(α, ε).

Let Ψ denote a set of functions ψ : T → R_+ : t ↦ ψ_t. For ψ ∈ Ψ, consider the limsup sets

Λ_H(ψ) = lim sup_{t∈T} ⋃_{α∈A} H_t(α, ψ_t) and Λ_I(ψ) = lim sup_{t∈T} ⋃_{α∈A} I_t(α, ψ_t).  (2.4)

The sets associated with the map H will be called homogeneous sets and those associated with the map I, inhomogeneous sets. We now come to two important properties connecting these notions.

The intersection property. The triple (H, I, Ψ) is said to satisfy the intersection property if, for any ψ ∈ Ψ, there exists ψ* ∈ Ψ such that, for all but finitely many t ∈ T and all distinct α and α' in A, we have that

I_t(α, ψ_t) ∩ I_t(α', ψ_t) ⊂ ⋃_{α''∈A} H_t(α'', ψ*_t).  (2.5)

The contraction property. Let µ be a non-atomic finite doubling measure supported on a bounded subset S of Ω. We recall that µ is doubling if there is a constant λ > 1 such that, for any ball B with centre in S, we have

µ(2B) ≤ λ µ(B),

where, for a ball B of radius r, we denote by cB the ball with the same centre and radius cr.
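For instance (an example we include for concreteness, with max-norm balls and Haar measure normalized so that µ(Z_p^d) = 1), Haar measure µ on Q_p^d is doubling: a closed ball of radius r coincides with the ball of radius p^{⌊log_p r⌋}, so
\[
\mu(B(x,2r)) = p^{\,d\lfloor \log_p 2r\rfloor} \le (2r)^d
\qquad\text{and}\qquad
\mu(B(x,r)) = p^{\,d\lfloor \log_p r\rfloor} \ge (r/p)^d,
\]
whence µ(2B) ≤ (2p)^d µ(B) and one may take λ = (2p)^d.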
We say that µ is contracting with respect to (I, Ψ) if, for any ψ ∈ Ψ, there exists ψ+ ∈ Ψ and a sequence of positive numbers {k_t}_{t∈T} satisfying

Σ_{t∈T} k_t < ∞  (2.6)

such that, for all but finitely many t ∈ T and all α ∈ A, there exists a collection C_{t,α} of balls B centred at S satisfying the conditions (2.7), (2.8) and (2.9). We are now in a position to state Theorem 5 from [12].

Theorem 2.1. Suppose that (H, I, Ψ) satisfies the intersection property and that µ is contracting with respect to (I, Ψ). Then

µ(Λ_H(ψ)) = 0 for all ψ ∈ Ψ implies µ(Λ_I(ψ)) = 0 for all ψ ∈ Ψ.  (2.10)

(C, α)-good functions

In this section, we recall the important notion of (C, α)-good functions on ultrametric spaces. We follow the treatment of Kleinbock and Tomanov [33]. Let X be a metric space, µ a Borel measure on X, and let (F, | · |) be a local field. For a subset U of X and C, α > 0, say that a Borel measurable function f : U → F is (C, α)-good on U with respect to µ if, for any open ball B ⊂ U centred in supp µ and any ε > 0, one has

µ({x ∈ B : |f(x)| < ε · sup_{y∈B} |f(y)|}) ≤ C ε^α µ(B).  (3.1)

The following elementary properties of (C, α)-good functions will be used:

(G1) f is (C, α)-good on U if and only if |f| is (C, α)-good on U;

(G2) if the f_i, i ∈ I, are (C, α)-good on U, then so is sup_{i∈I} |f_i|;

(G3) if f is (C, α)-good on U and c_1 ≤ |f(x)/g(x)| ≤ c_2 for all x ∈ U, then g is (C(c_2/c_1)^α, α)-good on U;

(G4) if f is (C, α)-good on U, then it is (C', α')-good on U' for any C' ≥ C, α' ≤ α and U' ⊂ U.

One can note that from (G2) it follows that the supremum norm of a vector valued function f is (C, α)-good whenever each of its components is (C, α)-good. Furthermore, in view of (G3), we can replace the norm by an equivalent one, only affecting C but not α. Polynomials in d variables of degree at most k defined on local fields can be seen to be (C, 1/dk)-good, with C depending only on d and k, using Lagrange interpolation. In [32], [11] and [33] (for ultrametric fields), this property was extended to smooth functions satisfying certain properties. We rapidly recall, following [40] (see also [33]), the definition of smooth functions in the ultrametric case. Let U be a non-empty subset of X without isolated points. For n ∈ N, define

∇^n U := {(x_1, ..., x_{n+1}) ∈ U^{n+1} : x_i ≠ x_j whenever i ≠ j}.

The n-th order difference quotient of a function f : U → X is the function Φ^n(f) : ∇^n U → X defined inductively by Φ^0(f) = f and, for n ∈ N,

Φ^n f(x_1, x_2, ..., x_{n+1}) = (Φ^{n-1} f(x_1, x_3, ..., x_{n+1}) − Φ^{n-1} f(x_2, x_3, ..., x_{n+1})) / (x_1 − x_2).

This definition does not depend on the choice of variables, as all difference quotients are symmetric functions. A function f on U is called a C^n function if Φ^n f can be extended to a continuous function Φ̄^n f : U^{n+1} → X. We also set D^n f(a) = Φ̄^n f(a, ..., a), a ∈ U. To define C^k functions in several variables, we follow the notation set forth in [33]. Consider a multiindex β = (i_1, ..., i_d) and let Φ_β denote the composition Φ^{(1)}_{i_1} ∘ ··· ∘ Φ^{(d)}_{i_d}, where Φ^{(j)}_{i_j} denotes the i_j-th difference quotient taken with respect to the j-th variable. We are now ready to gather the results on ultrametric (C, α)-good functions that we need. We begin with Theorem 3.2 from [33]. The following is an ultrametric analogue of Proposition 1 from [4].

Proposition 3.1. Let F be a compact family of functions in C^l(U_ν) satisfying (3.4). Then there exists a neighbourhood V_ν ⊂ U_ν of x_0 and C, δ > 0 satisfying the following property. For any Θ ∈ C^l(U) such that (3.5) holds and for any f ∈ F, we have that (1) f + Θ is (C, 1/(d_ν l))-good on V_ν, and (2) |∇(f + Θ)| is (C, 1/(d_ν(l−1)))-good on V_ν.

Proof. We follow the proof of [4], which in turn is a modification of the ideas used to establish Proposition 3.4 in [11]. Here ν = ∞ is exactly Proposition 1 of [4], so we assume that ν ≠ ∞. By (3.4) there exists C_1 > 0 such that for any f ∈ F there exists a multiindex β with |∂_β f(x_0)| ≥ C_1. By the compactness of F, inf_{f∈F} max_{|β|≤l} |∂_β f(x_0)| will actually be attained for some f, and we may take that value to be C_1. Since there are finitely many β, we can consider the subfamily F_β, which is also compact in C^l(U) and satisfies (3.4). Proving the theorem for each F_β will yield sets U_β where (1) and (2) above hold. Setting V_ν := ⋂_β U_β then proves the Proposition. We may therefore assume without loss of generality that β is the same for every f ∈ F. We wish to apply Theorem 3.2 of [33], and to do so we need to satisfy (3.3).
We are going to show that there exists such that every element in the left side of (3.7) above is nonzero knowing that for at least one β, x 1 = g(a 11 , · · · , a d1 ) . . . , · · · , a dd ) and g is a homogeneous polynomial of degree k. We already know that ∂ k β=(i 1 ,··· ,i k ) f (x 0 ) = 0 for at least one β, so at least one x (i 1 ,··· ,i k ) = 0 and thus g is a nonzero polynomial. Now g should have at least one nonzero value on {1 + πO} × {πO} × · · · × {πO}, otherwise g is identically zero. So take (a 11 , · · · , a 1d ) to be the point of the aforementioned set where g(a 11 , · · · , a 1d ) = 0. Then by a similar argument choose (a i1 , · · · , a id ) ∈ {πO} × · · · × {1 + πO} × · · · × {πO} such that g(a i1 , · · · , a id ) = 0. Choosing A this way we will automatically get that det(A) is a unit, which implies that in fact there exists a uniform C > 0 such that This is because we can take Taking limits, we get that which is a contradiction to (3.8). Consider the following map In particular, . Thus any Θ satisfying (3.5) will also satisfy By the compactness of F and (3.5) there is a uniform upper bound for every f ∈ F and Θ of the aforementioned type. Now applying The- This completes the proof of the first part. Now consider the set F Clearly this is a closed subset of the compact set F, so it is also compact. may, without loss of generality, take the same A for every f ∈ F. Now we want to apply the first part of this Proposition. Suppose |β| ≥ 2 in (3.6), then to apply part(1) we have to check condition (3.4) Then by compactness of F we have that for some f ∈ F, which implies that Φ 1 (A, f, x 0 ) = 0, which is a contradiction. Thus by applying the first part of the Proposition we get that for every j = 1, · · · , d, ) and so is |∇(f + Θ)|. The case |β| = 1 in (3.6) is trivial (See property (G3) of (C, α)-good functions). This completes the proof. As a Corollary, we have, (1) a 0 + a.f ν + Θ ν is (C, 1 dν l )-good on V ν , and (2) |∇(a.f ν + Θ ν )| is (C, 1 dν (l−1) )-good on V ν . Proof. For the case ν = ∞, see Corollary 3 of [4] and also [11]. So we may assume ν = ∞. Let F := {a 0 + a.f ν + Θ ν | (a 0 , a) ∈ O n+1 }. This is a compact family of functions of C l (U ν ) for every l > 0 since O is compact in Q ν . Now if this family satisfies condition (3.4) for some l ∈ N, then the conclusion follows from the previous Proposition. Hence we may assume that the family does not satisfy (3.4) for every l ∈ N. Then by the continuity of differential and the compactness of O, there exists c l ∈ O n such that for every 2 ≤ l ∈ N we have Now this sequence {c l } ∈ O n has a convergent subsequence {c l k } converging to c ∈ O n since O n is compact. By taking limits we get that However, as each of the f ν and Θ ν are analytic on U ν , there exists a neighbourhood V x 0 of x 0 such that First consider the case where |a 0 + u| < 2|a − c|, then is compact in C l (U ν ) for every l ∈ N. Then by linear independence of 1, f ν , · · · , f (n) ν , F 1 satisfies (3.4) for some l ∈ N. And then by Proposition 3.1 we can conclude that every element in F 1 is (C, 1 dν l )good on some V ν ⊂ V x 0 ⊂ U ν together with conclusion (2) of the Corollary above. This also implies a as |a 0 + u| ≥ 2|a − c| and it turns out to be a trivial case. This implies that for C ≥ 3 and 0 < α ≤ 1 the aforementioned functions are (C, α)-good. Corollary 3.2. For j = 1, · · · , n, let X j be a metric space, µ j be a measure on X j . 
Let U j ⊂ X j be open, C j , α j > 0 and let f be a function on U 1 × · · · × U d such that for any j = 1, · · · d and any x i ∈ U i with i = j, the function y → f (x 1 , · · · , x j−1 , y, x j+1 , · · · , x d ) (3.12) is (C j , α j )-good on U j with respect to µ j . Then f is ( C, α) -good on U 1 ×· · ·×U d with respect to µ 1 ×· · ·×µ d , where C = d, α are computable in terms of C j , α j . In particular, if each of the functions (3.12) is (C, α)-good on U j with respect to µ j , then the conclusion holds with α = α d and C = dC. Now combining Corollary (3.1) and (3.2) we can state the following: Then there exists a neighbourhood V ⊂ U of x 0 and C > 0, k, k 1 ∈ N such that for any (a 0 , a) ∈ Z n+1 the following holds: From the definition, it follows that W f Ψ,Θ admits a description as a limsup set. Namely, As the set S is finite, we have where W large To prove Theorem 1.4, we will show that each of these limsup sets has zero measure. Namely, the proof is divided into the "large derivative" case where we will show |W large f (Ψ, Θ)| = 0, and the "small derivative" case which involves |W small ν,f (Ψ, Θ)| = 0 ∀ ν ∈ S. Remark 4.1. We will consider |.| the measure to be restricted on some bounded open ball V x 0 around x 0 ∈ U. Then we will get |Λ ν I (φ δ ) ∩ V x 0 | = 0. But because the space is second countable, we eventually get . We have to show that for φ δ there exists φ * δ such that for all but finitely many t ∈ T and all distinct α = (a 0 , a), α = (a 0 , a 0 ) ∈ A, we have that and Now subtracting the respective equations of (4.8) from (4.7) we have α = (a 0 − a 0 , a − a ) satisfying the following equations Observe that a = 0, because otherwise , which is true for the finitely many t's that we are avoiding. Therefore α ∈ A and x ∈ H ν t (α , φ δ (t)). So here the particular choice of φ * δ is φ δ itself. This verifies the intersection property. 4.5. Verifying the Contraction Property : Recall that to verify the contraction property we need to verify the following: for any φ δ ∈ Φ we need to find Φ + δ ∈ Φ and a sequence of positive numbers {k t } t∈T satisfying t∈T k t < ∞ such that for all but finitely many t ∈ T and all α ∈ A, there exists a collection C t,α of ball B centred at a point in S = V = V satisfying (2.7), (2.8) and (2.9). Let us consider the open set 5V x 0 in Corollary 3.3. So we have that for any t ∈ T and α = (a 0 , a) ∈ A Using this new function F ν t,α , we can write the previous inhomogeneous sets as following : (4.11) We also note that If I ν t (α, φ δ (t)) = ∅ then it is trivial. So without loss of generality we can assume that I ν t (α, φ δ (t)) = ∅. Because for every We recall Corollary 4 of [4] , for sufficiently large |t| . The measure restricted to V x 0 will be denoted as | | Vx 0 and and (4.14) holds for all but finitely many t . The second inequality holds because we would otherwise have V x 0 ⊂ I ν t (α, φ + δ (t)), a contradiction. Then take C t,α := {B(x) : x ∈ S ∩ I ν t (α, φ δ (t))}. Hence (2.7) and (2.8) are satisfied. By (4.14) we have for all but finitely many t. So in view of the definitions we get (4.16) Therefore for all large |t| and α ∈ Z n+1 we have Hence finally we conclude since 5B ⊂ 5V x 0 . Here we are using that the measure is doubling and the centre of the ball 5B is in V x 0 . So C is only dependent on d ν . We dk |t| and as ( δ 2 − ε 4 ) < 0 we also have k t < ∞ as required in (2.6). This verifies the contracting property. 4.6. The large derivative. In this section, we will show that |W large f (Ψ, Θ)| = 0. 
Let us recall Theorem 1.2 from [37]. Note that the function (f , Θ) : U → Q n+1 S satisfies the same properties as f . So as a Corollary of the previous theorem we get, (4.20) Then |A (T i ) n 1 | < Cδ |U|, for large enough max(T i ) and a universal constant C. Now take T i = 2 t i +1 and δ = 2 n 1 t i +1 Ψ(2 t ). As 2 t i ≤ |a i | S < 2 t i +1 , this implies by (1.3) that Ψ(a) ≥ Ψ(2 t+1 ) and we have using (4.1) that (4.21) Note that Ψ(a) ≥ Ψ(2 t 1 +1 , · · · , 2 tn+1 )2 so the convergence of Ψ(a) implies the convergence of the later. Therefore by (4.21) and by the Borel-Cantelli lemma we get that almost every point of U are in at most finitely many W large f (a, Ψ, Θ). Hence |W large f (Ψ, Θ)| = 0 completing the proof. The divergence theorem for Q p In this section we prove Theorem 1.5 using ubiquitous systems as in [4]. In [6], the related notion of regular systems was used. As mentioned in the introduction, the divergence case will be proved for a more restrictive choice of approximating function than the convergence case, namely for those satisfying property P. Indeed a more general formulation which includes the multiplicative case of the divergence Khintchine theorem remains an outstanding open problem even for submanifolds in R n . Without loss of generality, and in an effort to keep the notation reasonable, we will prove the Theorem for the usual norm, i.e. we will assume v = (1, . . . , 1). The interested reader can very easily make the minor changes required to prove it for general v. For δ > 0 and Q > 1 we follow [4] in defining Φ f (Q, δ) := {x ∈ U : ∃ a = (a 0 , a 1 ) ∈ Z × Z n \{0} such that We now recall definition of a nice function. Definition 5.1 ([4], Definition 3.2). We say that f is nice at x 0 ∈ U if there exists a neighbourhood U 0 ⊂ U of x 0 and constants 0 < δ, w < 1 such that for any sufficiently small ball B ⊂ U 0 we have that If f is nice at almost every x 0 in U then f is called nice. The following Theorem from [38] plays a crucial role. It's proof involves a suitable adaptation of the dynamical technique in [11]. Theorem 5.1. [38] Assume that f : U → Q n p is nondegenerate at x ∈ U. Then there exists a sufficiently small ball B 0 ⊂ U centred at x 0 and a constant C > 0 such that for any ball B ⊂ B 0 and any δ > 0, for sufficiently large Q, one has This implies that if f is nondegenerate at x 0 then f is nice at x 0 . We will now state the main two theorems of this section. Let ψ : N → R + be a decreasing function. Theorem 5.2. Assume that f : U ⊂ Q m p → Q n p is nice and satisfies the standing assumptions (I1 and I2) and that s > m − 1. Let Θ : U → Q p be an analytic map satisfying assumption (I5). Let Ψ(a) = ψ( a ), a ∈ Z n+1 be an approximating function. Then, In view of Theorem 5.1, Theorem 5.2 implies Theorem 1.5. Note that condition (I3) implies the nondegeneracy of f at every point of U. Ubiquitous Systems in Q n p . Let us recall the the definition of Ubiquitous systems in Q n p following [4]. Throughout, balls in Q m p are assumed to be defined in terms of the supremum norm | · |. Let U be a ball in Q m p and R = (R α ) α∈J be a family of subsets R α ⊂ Q m p indexed by a countable set J. The sets R α are referred to as resonant sets. Throughout, ρ : R + → R + will denote a function such that ρ(r) → 0 as r → ∞. Given a set A ⊂ U, let where dist(x, A) := inf{|x − a| : a ∈ A}. Next, let β : J → R + : α → β α be a positive function on J. Thus the function β attaches a 'weight' β α to the set R α . 
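Before the remaining conditions are stated, it may help to keep the classical one-dimensional example in mind (this example is standard in the ubiquity literature and is not specific to the paper): the resonant sets are the rational points p/q in U = [0, 1], indexed by α = (p, q) with weight β_α = q, and they form a ubiquitous system relative to ρ(r) = const · r⁻². A quick Monte Carlo check of this covering behaviour:

    import random

    # Classical example: resonant sets R_alpha = {p/q}, weights beta_alpha = q,
    # rho(r) = r^(-2). The fraction of random points within rho(2^t) = 2^(-2t)
    # of some rational with denominator at most 2^t should stay bounded below.

    def dist_to_rationals(x, Q):
        # distance from x to the nearest p/q with 1 <= q <= Q
        return min(abs(x - round(x * q) / q) for q in range(1, Q + 1))

    random.seed(1)
    for t in range(4, 9):
        Q = 2 ** t
        hits = sum(dist_to_rationals(random.random(), Q) < Q ** -2
                   for _ in range(2000))
        print(t, hits / 2000)  # stays comparable to a fixed positive constant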
We will assume that for every t ∈ N the set J t = {α ∈ J : β α ≤ 2 t } is finite. The intersection conditions: There exists a constant γ with 0 ≤ γ ≤ m such that for any sufficiently large t and for any α ∈ J t , c ∈ R α and 0 < λ ≤ ρ(2 t ) the following conditions are satisfied: where B is an arbitrary ball centred on a resonant set with radius r(B) ≤ 3 ρ(2 t ). The constants c 1 and c 2 are positive and absolute. The constant γ is referred to as the common dimension of R. Furthermore, suppose that the intersection conditions (5.4) and (5.5) are satisfied. Then the system (R, β) is called locally ubiquitous in U relative to ρ. Let (R, β) be a ubiquitous system in U relative to ρ and φ be an approximating function. Let Λ(φ) be the set of points x ∈ U such that the inequality dist(x, R α ) < φ(β α ) (5.7) holds for infinitely many α ∈ J. We are going to use this following ubiquity lemma from [4] in our main proof. Lemma 5.1. Let φ be an approximating function and (R, β) be a locally ubiquitous system in U relative to ρ. Suppose that there is a 0 < λ < 1 such that ρ(2 t+1 ) < λρ(2 t ) ∀ t ∈ N. Then for any s > γ, We will also need the strong approximation theorem mentioned in [45]. there exists a rational number r ∈ Q such that (5.10) Before we start the proving the main theorem in this section we would like to calculate a covolume formula of certain lattices. Proof. Let π : Q m p → Q m−1 p be the projection map given by π(x 1 , x 2 , · · · , x m ) = (x 2 , · · · , x m ), and let and We claim that the R F are resonant sets. The intersection property, namely (5.4) and (5.5) can be checked exactly as in the case of real numbers as accomplished in [4], Proposition 5. We only need to note that implicit function theorem for C l (U ) in R n was used in [4]. The Implicit function theorem in Q p holds for analytic maps and all our maps have been assumed analytic, so the proof in [4] goes through verbatim. It remains to check the covering property (5.6) to establish ubiquity. Without loss of generality we will assume that the ball U 0 in the definition of (5.1) satisfies From the Definition 5.1 of f being nice at x 0 , there exist fixed 0 < δ, w < 1 such that for any arbitrary ball B ⊂ U 0 , So for sufficiently large Q we have that Therefore it is enough to show that (a 0 , a 1 , · · · , a n ) ∈ Z n+1 : |a 0 + a 1 f 1 (x) + · · · + a n f n (x)| p < δQ −(n+1) 20) and the convex set K = [−Q, Q] n+1 in R n+1 . Note that if and only if So by Lemma 5.3 we have that δ . (5.23) Since f 1 (x) = x 1 , the determinant of this aforementioned system is det(a j,i ) = 0. Therefore there exists a unique solution to the system, say (η 0 , η 1 , · · · , η n ) ∈ Q n p . By the argument above, there is at least one |a j,i | ∞ > Q. Without loss of generality assume |a 0,0 | ∞ > Q. Using the strong approximation Theorem 5.2 we get r i ∈ Q such that |r i | q ≤ 1 for prime q = p. Observe that As ψ is an approximating function so we got that the above series This completes the proof of the Theorem. 6. Concluding Remarks 6.1. Some extensions. An interesting possibility is an investigation of the function field case. In [23], the function field analogue of the Baker-Sprindžuk conjectures were established and similarly it should be possible to prove the function field analogue of the results in the present paper. 6.2. Affine subspaces. In [30], analogues of the Baker-Sprindžuk conjectures were established for affine subspaces. In this setting, one needs to impose Diophantine conditions on the affine subspace in question. 
Subsequently, Khintchine type theorems were established (see [22,24]); we refer the reader to [25] for a survey of results. Recently, in [10], the inhomogeneous analogue of Khintchine's theorem for affine subspaces was established in both the convergence and divergence cases. It would be interesting to consider the S-adic theory in the context of affine subspaces. 6.3. Friendly Measures. In [31] a category of measures called friendly measures was introduced and the Baker-Sprindžuk conjectures were proved for friendly measures. Friendly measures include volume measures on nondegenerate manifolds, so the results of [31] generalize those of [32], but they also include many other examples, including measures supported on certain fractal sets. In [12], the inhomogeneous version of the Baker-Sprindžuk conjectures was established for a class of measures, called strongly contracting measures, which includes friendly measures. It should be possible to prove an S-adic inhomogeneous analogue of the Baker-Sprindžuk conjectures for strongly contracting measures.
Measurement of the neutrino-oxygen neutral-current interaction cross section by observing nuclear deexcitation γ rays

We report the first measurement of the neutrino-oxygen neutral-current quasi-elastic (NCQE) cross section. It is obtained by observing nuclear de-excitation γ-rays which follow neutrino-oxygen interactions at the Super-Kamiokande water Cherenkov detector. We use T2K data corresponding to 3.01 × 10²⁰ protons on target. By selecting only events during the T2K beam window and with well-reconstructed vertices in the fiducial volume, the large background rate from natural radioactivity is dramatically reduced. We observe 43 events in the 4-30 MeV reconstructed energy window, compared with an expectation of 55.7, which includes an estimated 17.3 background events. The background is primarily non-quasielastic neutral-current interactions and has only 1.2 events from natural radioactivity. The flux-averaged NCQE cross section we measure is 1.35 × 10⁻³⁸ cm² with a 68% confidence interval of (1.06, 1.94) × 10⁻³⁸ cm² at a median neutrino energy of 630 MeV, compared with the theoretical prediction of 2.01 × 10⁻³⁸ cm².

I. INTRODUCTION

Nuclear de-excitation γ-rays are a useful tool for detecting neutrino-nucleus neutral-current (NC) interactions where the final state neutrino and associated nucleon are not observed in a Cherenkov detector. They have previously been observed in neutrino-carbon interactions [1,2]. The best-known γ-ray production process on oxygen is coherent inelastic scattering, ν + ¹⁶O → ν + ¹⁶O*, where the residual oxygen nucleus can de-excite by emitting a nucleon or γ-rays with energies between 1-10 MeV. This process can be used to detect supernova neutrinos [3], which have an average energy of 20-30 MeV. Most theoretical work on γ-ray production in NC interactions has been performed in this low neutrino energy range with the assumption that it is applicable up to neutrino energies of several hundred MeV [4][5][6]. A recent calculation of γ-ray production in neutrino NC interactions shows that quasi-elastic (QE) nucleon knockout, ν + ¹⁶O → ν + p + ¹⁵N* (ν + n + ¹⁵O*), overwhelms the coherent process at E_ν ≳ 200 MeV [7]. The NCQE cross section is more than an order of magnitude larger than the NC coherent cross section from [5] at E_ν ≈ 500 MeV. The γ-rays produced when the residual nucleus de-excites are labeled primary γ-rays.
Secondary γ-rays can also be produced when the knocked out nucleon goes on to interact with other nuclei in the water. Both types of γ-rays, produced in interactions of atmospheric neutrinos, are a major background for the study of astrophysical neutrinos in the 10 MeV range [8,9], and a direct measurement of the rate of this process with a known neutrino source will be useful for ongoing and proposed projects [10][11][12][13]. This paper reports the first measurement of the neutrino-oxygen NCQE cross section via the detection of de-excitation γ-rays. The neutrinos are produced using the narrow-band neutrino beam at J-PARC and measured with the Super-Kamiokande (SK) water Cherenkov detector.

II. THE T2K EXPERIMENT

The Tokai-to-Kamioka (T2K) experiment [14] is a long-baseline neutrino oscillation experiment consisting of a neutrino beam, several near detectors, and using Super-Kamiokande as a far detector. It is designed to search for ν_µ → ν_e appearance, which is sensitive to the neutrino mixing angle θ₁₃, and to precisely measure the mixing angle θ₂₃ and the mass difference |Δm²₃₂| by ν_µ disappearance. The accelerator at the Japan Proton Accelerator Research Complex (J-PARC) provides a 30 GeV proton beam which collides with a graphite target to produce charged mesons. Positively-charged pions and kaons are collected and focused by magnetic horns and ultimately decay in flight to produce primarily muon neutrinos inside a 96 m long cavity filled with helium gas. The proton beam is directed 2.5° away from SK. The off-axis neutrino beam has a narrow peak with median energy 630 MeV at SK because of the two-body decay kinematics of the π⁺ which dominate the focused beam. This peak energy was chosen because it corresponds to the first maximum in the neutrino oscillation probability at the location of the far detector. The narrow energy peak also allows for the measurement of the NC cross section at a particular energy. Typically, it is not possible to make energy-dependent measurements of this cross section because the invisible outgoing neutrino makes accurate energy reconstruction impossible.

The T2K experiment has several near detectors located 280 m from the neutrino production target. The on-axis near detector, INGRID, which consists of 16 modules made up of alternating layers of iron and plastic scintillator arranged in a cross, monitors the neutrino beam direction. The off-axis near detectors, ND280, measure the neutrino beam spectrum and composition for the oscillation analyses. The neutrino measurements at the INGRID and ND280 detectors are consistent with expectations [15], but this information is not used to constrain systematic uncertainties in this analysis so that an absolute cross-section measurement can be made.

Super-Kamiokande [10] is a cylindrical water Cherenkov detector consisting of 50 ktons of ultrapure water, located 295 km from the neutrino target at J-PARC. It was built in the middle of Mt. Ikenoyama, near the town of Kamioka, 1000 m below the peak. The tank is optically separated into two regions which share the same water.
The inner detector (ID) is a cylinder containing the 22.5 kton fiducial volume and is instrumented with 11,129 inward-facing photomultiplier tubes (PMTs). The outer detector (OD) extends 2 m outward from all sides of the ID and is instrumented with 1,885 outward-facing PMTs. It serves as a veto counter against cosmic-ray muons as well as a shield for γ-rays and neutrons emitted from radioactive nuclei in the surrounding rock and stainless steel support structure.

III. EVENT SIMULATION

T2K events at SK are simulated in three stages. First, the neutrino beamline is simulated to predict the flux and energy spectrum of neutrinos arriving at SK. Next, the interactions of those neutrinos with the nuclei in the SK detector are simulated, including final-state interactions within the nucleus. Finally, the SK detector response to all of the particles leaving the nucleus is simulated. FLUKA [16] is used to simulate hadron production in the target based on the measured proton beam profile. Hadron production data from NA61/SHINE at CERN [17,18] is used to tune the simulation and evaluate the systematic error. Once particles leave the production target they are transported through the magnetic horns, target hall, decay volume, and beam dump using a GEANT3 [19] simulation with GCALOR [20] for hadronic interactions. A more detailed description of the neutrino flux prediction and its uncertainty can be found in Ref. [21].

Neutrino interactions based on the above flux are simulated using the NEUT event generator [22,23]. The NCQE cross section on oxygen is simulated using a spectral function model [24,25] with the BBBA05 form factor parameterization [26], which is then reweighted as a function of neutrino energy to match the recent theoretical calculations from [7]. In order to simulate the de-excitation γ-ray emission, it is necessary to identify which state the remaining nucleus is in after the neutrino interaction. The spectroscopic factors for three possible states, the ground state, 1p₁/₂, and excited states, 1p₃/₂ and 1s₁/₂, are used for this determination. The excited states can release primary γ-rays at a variety of energies ranging from 3 to 15 MeV, though more than 80% have an energy close to 6 MeV. The branching ratios for γ-ray production from the 1p₃/₂ nucleon hole state are taken from a theoretical estimate in Ref. [27], while the branching ratios of the 1s₁/₂ proton hole state are estimated using the result of the ¹⁶O(p, 2p)¹⁵N experiment (RCNP-E148) [28]. We used the same branching ratios for γ-ray production from neutron hole states as from proton hole states. Non-QE NC interactions make up the largest neutrino-induced background component and predominantly consist of NC single-pion production where the pion is absorbed during final state interactions in the nucleus. This resonant production is simulated using the Rein-Sehgal model [29], the position dependence within the nucleus is calculated with the model from [30], and the scale of the microscopic pion interaction probabilities in the nuclear medium is determined from fits to pion scattering data [31][32][33]. The simulation of primary de-excitation γ-rays from this process is based on measurements of π⁻ absorption-at-rest on H₂O at CERN [34]. These pion-absorption interactions can also release nucleons which go on to produce secondary γ-rays as described below. More details about NEUT, including the models used to simulate the smaller charged current backgrounds, can be found in [14,23].
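The bookkeeping above (hole state, then a possible primary γ-ray) can be pictured with a toy sampler. This is purely illustrative: the hole-state probabilities standing in for the spectroscopic factors and the shape of the non-6-MeV spectrum are invented placeholders, and only the 3-15 MeV range and the fact that more than 80% of primary γ-rays lie close to 6 MeV are taken from the text.

    import random

    random.seed(3)

    # Toy sampler of primary de-excitation gamma rays after NCQE nucleon
    # knockout. The probabilities below are placeholders; the actual
    # spectroscopic factors are not quoted in the text.
    HOLE_STATES = [("1p1/2 (ground)", 0.25), ("1p3/2", 0.55), ("1s1/2", 0.20)]

    def primary_gammas():
        r, acc = random.random(), 0.0
        for state, prob in HOLE_STATES:
            acc += prob
            if r < acc:
                break
        if state == "1p1/2 (ground)":
            return []                       # ground state: no primary gamma
        if random.random() < 0.8:           # ">80% ... close to 6 MeV"
            return [6.0]
        return [random.uniform(3.0, 15.0)]  # remaining lines span 3-15 MeV

    print([primary_gammas() for _ in range(10)])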
SK's GEANT3-based simulation [19] is used to transport all the particles leaving the nucleus through the detector, produce and transport the Cherenkov light, and to simulate the response of the photodetectors and electronics. Charged pions with momenta above 500 MeV/c are simulated with GCALOR [20], while lower momentum pions are simulated with a custom routine based on the NEUT cascade model for final state hadrons. GCALOR also simulates the interactions of nucleons with nuclei in the water, including the production of secondary γ-rays. In this simulation, secondary γ-rays are typically produced in multiples: 95% of events with secondary γ-rays have at least two. The total secondary γ-ray energy per event is distributed widely, with a peak around 7 MeV and a long tail towards higher energies.

There is an additional signal-like contribution from the coherent inelastic process, ν + ¹⁶O → ν + ¹⁶O*. However, since there is no accurate estimation of γ-ray production induced by the NC coherent process in the T2K energy range, we do not subtract its contribution in the final result. If we assume that the rate of γ-ray production after a coherent interaction is similar to that of a nucleon knockout reaction, and extrapolate the NC coherent cross section predicted in [5] to the energy region of this analysis, we expect its contribution to be no larger than a few percent of our final sample.

IV. ANALYSIS

The results presented in this paper are based on T2K RUN1-3 data from 3.01 × 10²⁰ protons on target (POT) [35]. The expected number of beam-related events after the selections described in the next section is summarized in Tab. I, which categorizes them by neutrino flavor and interaction mode. For the computation of the CC components, we assume three-flavor oscillations with |Δm²₃₂| = 2.44 × 10⁻³ eV², sin²θ₂₃ = 0.50, and sin²2θ₁₃ = 0.097. The majority of the beam-related background comes from non-quasi-elastic NC events, in particular single-pion production followed by pion absorption within the nucleus. The CC background comes from interactions where the outgoing charged lepton has low momentum and is misidentified as an electron, or where the charged lepton itself is below Cherenkov threshold but de-excitation γ-rays are emitted. The expected number of beam-unrelated events after all selections are applied is estimated to be 1.2 by sampling events at least 5 µs before the T2K beam trigger so that no beam-related activity is included. The measured event rate is normalized to the total livetime of the analyzed beam spills. Since the beam-unrelated background is directly measured with data outside the beam window, the systematic uncertainty associated with it is small.

A. Event selection

The reconstruction of the event vertex, direction, and energy is the same as that used in the SK solar neutrino analysis [36]. The reconstructed energy is defined as the total energy of a single electron that would have produced all Cherenkov photons in the event. We use this definition because it is used by the SK low-energy reconstruction tools, though we know many events have multiple particles and a variety of particle species. The first selections applied are a cut on the reconstructed energy, only allowing events between 4 MeV and 30 MeV, a standard fiducial volume cut of 2 m from the detector wall, and an event timing cut. An energy threshold of 4 MeV, lower than in previous SK analyses, is possible in this analysis thanks to the sharp reduction in accidental backgrounds due to the beam timing cut.
This low threshold significantly increases the detection efficiency for these low-energy events, which is predicted by the Monte Carlo to be greater than 99% for 6 MeV de-excitation γ-rays from 1p₃/₂ proton and neutron hole states. The neutrino beam spill has a bunch structure, reflecting the underlying proton bunch structure, with 6 or 8 bunches separated by 581 ns gaps, delivered every 3 s. A timing cut of ±100 ns, much longer than the lifetimes of the de-excitation modes relevant to this analysis [37], is applied between the event time and the closest neutrino beam bunch time, which is synchronized between the near and far sites using a common-view GPS system. The bunch timing is calibrated using the higher energy T2K neutrino events at SK, and the RMS of the observed timing distribution is about 24 ns.

[Figure 2 caption: The Cherenkov angle distribution in data and MC expectation after the beam-unrelated selections and the pre-activity cut. The expectation has a three-peak structure corresponding to low-momentum muons around 28°, single γ-rays around 42°, and multiple γ-rays around 90°. A selection cut is applied at 34° to remove the muon events, but no attempt is made to separate single- and multiple-γ events.]

Further selection cuts are applied based on the event vertex and reconstruction quality to remove beam-unrelated background, similar to those used in SK solar [36] and supernova relic neutrino analyses [8]. These cut criteria are simultaneously optimized in an energy-dependent way to maximize the figure-of-merit defined as N_beam/√(N_beam + N_unrel), where N_beam and N_unrel denote the number of expected beam-related and beam-unrelated events, respectively. The cut optimization is done separately for each of the three T2K run periods since the beam intensities and beam bunch structures differ.

Most of the beam-unrelated background comes from radioactive impurities in the PMT glass, cases, and support structure, and so is concentrated near the ID wall. Cuts on the distance from the nearest wall, D₁, and the distance from the wall along the backward direction of the reconstructed track, D₂, together effectively eliminate background events produced at or near the ID wall. A minimum cut of 2 m is applied for both, with the cut on D₁ increasing linearly below 4.75 MeV to about 3.2 m and the cut on D₂ increasing linearly below 5.75 MeV to about 10 m. Beam-unrelated background events that pass the fiducial cuts typically have reconstruction errors which move the vertex to the center of the tank. These errors can be identified based on the distribution of hits in time and space. The hit time distribution should be a sharp peak after time-of-flight correction from the correct vertex, which we quantify as the timing goodness, g_t. The hit pattern should also be azimuthally symmetric around the reconstructed particle direction, which we test using g_p, the Kolmogorov-Smirnov distance between the observed hit distribution and a perfectly symmetric one. The reconstruction quality cut criterion, Q_rec, is defined as the hyperbolic combination of these two parameters: Q_rec ≡ g_t² − g_p², and is shown in Fig. 1. The cut on Q_rec is also energy-dependent and varies from about 0.25 at its tightest at the low end of the energy spectrum to effectively no cut above 11 MeV. More detailed descriptions of g_t and g_p are found in Ref. [38].
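The energy-dependent optimization can be sketched in a few lines. The Q_rec shapes and the expected yields below are invented placeholders (they are not the T2K distributions); the sketch only shows the mechanics of scanning a threshold and keeping the value that maximizes N_beam/√(N_beam + N_unrel):

    import math
    import random

    random.seed(2)

    # Placeholder stand-ins for the Q_rec distributions: well-reconstructed
    # beam-related events peak at larger Q_rec than misreconstructed
    # beam-unrelated ones. Shapes and yields here are invented, not T2K's.
    beam = [random.gauss(0.55, 0.15) for _ in range(5000)]
    unrel = [random.gauss(0.15, 0.15) for _ in range(5000)]

    def fom(cut, n_beam_exp=40.0, n_unrel_exp=1800.0):
        # expected counts passing Q_rec > cut, fed into the paper's
        # figure of merit N_beam / sqrt(N_beam + N_unrel)
        n_b = n_beam_exp * sum(q > cut for q in beam) / len(beam)
        n_u = n_unrel_exp * sum(q > cut for q in unrel) / len(unrel)
        return n_b / math.sqrt(n_b + n_u) if n_b + n_u > 0 else 0.0

    best_fom, best_cut = max((fom(c / 100), c / 100) for c in range(80))
    print("optimal cut: Q_rec >", best_cut, " FOM:", round(best_fom, 2))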
Before selection, the beam-unrelated background rate from natural radioactivity is 284 counts per second, or 1.2 million events expected during the 1 ms beam windows used for other T2K analyses [39]. Applying the tight timing cut reduces this background to 1,816 events. The fiducial and reconstruction quality cuts further reduce the beam-unrelated background to 1.77 events, or 2.2% contamination. These beam-unrelated selection cuts reduce the estimated NCQE signal efficiency to 74%. Among the selected signal events, we estimate 97% have true vertices within the fiducial volume.

Finally, to suppress the beam-related charged-current (CC) interaction events, two additional cuts are applied: a pre-activity cut and a Cherenkov opening angle cut. The pre-activity cut rejects electrons produced in muon decays with more than 99.9% efficiency by rejecting events which occur less than 20 µs after a high-energy event, defined as a group of 22 or more hits in a 30 ns window. The likelihood of this selection rejecting a signal event because of accidental dark noise hits is less than 0.1%. For this low-energy sample, the Cherenkov angle of an event is defined as the peak of the distribution of Cherenkov angles calculated for every combination of three PMTs with hits, following the technique from [8]. For single particles this peak will be close to the opening angle of the particle, while the more isotropic light distributions from multiple particles will have peaks close to 90°. The Cherenkov angle depends on the velocity of the particle, approaching 42° as the velocity approaches c. The electrons produced by the de-excitation γ-rays selected in this analysis are highly relativistic and so peak at 42°. The heavier muons from CC ν_µ events have smaller opening angles, peaking around 28°; the higher momentum muons with larger opening angles having already been removed by the energy cut at 30 MeV. These muons are removed by a cut at 34°. The Cherenkov angle distribution for events passing all other selection criteria can be seen in Fig. 2. The data-expectation disagreement in the multi-γ peak is likely due to the approximations made in the model of γ-ray emission induced by secondary neutron interactions used by GEANT3 and GCALOR.

After all selections, 55.7 events are expected, of which 38.4 are expected to be NCQE signal, for a purity of 69%. The overall selection efficiency is estimated to be 70% relative to the number of true NCQE events in the true fiducial volume which produce either primary or secondary γ-rays (approximately 25% of NCQE events produce no photons and are consequently unobservable). The beam-unrelated contamination remains 2.2% after the final beam-related selections, giving 1.2 background events in the final sample. Figure 3 shows the observed event timing distribution in a region from −1 µs to 5 µs with respect to the beam trigger time, before the tight ±100 ns timing cut on each bunch has been applied. Six events are found outside the tight bunch time windows, which is consistent with the 3.6 beam-unrelated events expected for this amount of integrated livetime. These events are separate from the 1.2 beam-unrelated events expected to fall within the 200 ns bunch windows.

B. Observed Events

After all cuts, 43 events remain in the 4-30 MeV reconstructed energy range, compared with 55.7 expected. The vertex distribution of the sample is shown in Fig. 4, in which no non-uniformity or biases with respect to the neutrino beam direction are found.
The energy distribution of the data after all the selection cuts is shown in Fig. 5. A peak due to 6 MeV prompt de-excitation γ-rays is clearly seen in data, and the observed distribution matches well with the expectation. The high energy tail originates primarily from the contribution of additional secondary γ-rays overlapping the primary γ-rays.

C. Systematic uncertainties

The sources of systematic uncertainty on the expected number of signal and background events and their size are summarized in Tab. II. The methods for calculating these uncertainties are described below.

The flux errors, calculated in correlated energy bins, are determined based on beam monitoring, constraints from external measurements (particularly NA61/SHINE [17,18]), and Monte Carlo studies of focusing parameters (e.g. horn current, beam alignment, etc.) [21]. The neutrino interaction uncertainties which affect the normalization of the background are evaluated by comparing NEUT predictions to external neutrino-nucleus data sets in an energy region similar to T2K [15].

The systematic uncertainty on primary γ-ray production in signal (and the QE component of the CC background) comes from several sources. The largest contribution is from final-state nuclear interactions: NEUT assumes that the de-excitation γ-ray production is the same whether the final state contains a single nucleon or multiple nucleons. We estimate the systematic uncertainty introduced by this assumption by observing the change in the number of signal events with the extreme alternate assumption that no de-excitation γ-rays are released from events with multi-nucleon final states. Additional uncertainty comes from the spectroscopic factors, the errors on which are estimated as the difference between models from Benhar [24] and Ejiri [27], and the relative branching ratios for the 1s₁/₂ state, estimated from Kobayashi et al. [28]. For the non-QE NC background events, a conservative uncertainty was calculated by removing all primary γ-rays from the events and evaluating the difference in total selected events. The effect is relatively small since the pion-absorption events which make up the bulk of the NC non-QE background produce many secondary γ-rays and so are still detected thanks to the low threshold of the analysis. The uncertainty on secondary γ-ray production is dominated by uncertainties on the production of neutrons. It was evaluated by comparing alternate models of neutron production and how they altered the observed Cherenkov light level for our simulated events, for both signal events and the pion-absorption background.

The detector uncertainty includes contributions from uncertainties in the SK energy scale, vertex resolution, and selection efficiency. It is estimated by comparing simulation and data from the linear electron accelerator (LINAC) installed above SK [40]. The systematic uncertainty due to the atmospheric oscillation parameters, θ₂₃ and |Δm²₃₂|, is estimated by varying the parameters within the uncertainties from the T2K measurement of these parameters [35].

There are two final systematic uncertainties that were evaluated but have a negligible impact on the result. We evaluated the potential non-uniformity of the selection efficiency with respect to Q² by changing the value of the MC axial mass to distort the differential cross section. This variation changes the final calculated cross section by less than a percent.
The beam-unrelated background is estimated from the out-of-time events, which have a statistical error of 0.8%.

V. MEASURED CROSS SECTION

The NCQE cross section is measured by comparing the NCQE cross section as calculated in recent theoretical work [7], averaged over the unoscillated T2K flux, with the observed number of events after background subtraction:

σ^obs_ν,NCQE = [(N_obs − N^exp_bkg) / (N_exp − N^exp_bkg)] × σ^theory_ν,NCQE, (1)

where σ^obs_ν,NCQE is the observed flux-averaged NCQE cross section and σ^theory_ν,NCQE = 2.01 × 10⁻³⁸ cm² is the flux-averaged cross section from [7]. The total number of observed events is N_obs (43), the total number of expected events is N_exp (55.7), and N^exp_bkg (17.3) denotes the expected number of background events. The obtained flux-averaged neutrino-oxygen NCQE cross section is 1.35 × 10⁻³⁸ cm² at a median neutrino flux energy of 630 MeV. The 68% confidence interval on the cross section is (1.06, 1.94) × 10⁻³⁸ cm² and the 90% confidence interval is (0.84, 2.34) × 10⁻³⁸ cm². They include both statistical and systematic errors and were calculated using a Monte Carlo method to account for the systematic errors that are correlated between different samples. While the underlying systematic uncertainties are symmetric and Gaussian, the confidence interval is asymmetric around the central value because some of the uncertainties, primarily the production of secondary γ-rays and to a lesser extent the neutrino flux, are correlated between the background expectation and the signal expectation, which are found in the numerator and denominator, respectively, of Eq. 1.

[Figure 6 caption: The measured cross section compared with the theoretical calculation of [7], as a function of neutrino energy (GeV). The dashed line shows the cross section versus neutrino energy; the solid horizontal line shows the flux-averaged cross section. The vertical error bar on the data represents the 68% confidence interval on the measured cross section, while the horizontal error bar is placed at the central value from our data and represents 68% of the flux at each side of the median energy. The solid gray histogram shows the unoscillated T2K neutrino flux.]

Figure 6 shows our result compared with a theoretical calculation of the NCQE cross section [7]. The vertical error bar for data shows the 68% confidence interval on the data, and the horizontal error bar represents 68% of the flux at each side of the median energy. The measurement is lower than the recent theoretical calculation and outside the 68% confidence level, but consistent at the 90% confidence level.

VI. SUMMARY

We have reported the first measurement of the cross section of neutrino-oxygen NCQE interactions via the detection of nuclear de-excitation γ-rays in the Super-Kamiokande detector using the T2K narrow-band neutrino beam, below but consistent with the theoretical expectation at the 90% confidence level. Due to the similar peak energies for T2K neutrinos and atmospheric neutrinos, the present work will shed light on the study of the atmospheric background events for low energy astrophysical phenomena in neutrino experiments.
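Eq. 1 can be checked directly against the numbers quoted in the text; the following short computation reproduces the central value:

    # Sanity check of Eq. 1 using the inputs quoted in the text.
    n_obs, n_exp, n_bkg = 43.0, 55.7, 17.3
    sigma_theory = 2.01e-38  # cm^2, flux-averaged prediction from [7]

    sigma_obs = (n_obs - n_bkg) / (n_exp - n_bkg) * sigma_theory
    print(sigma_obs)  # ~1.345e-38 cm^2, i.e. the quoted 1.35e-38 cm^2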
Access to health insurance coverage among sub-Saharan African migrants living in France: Results of the ANRS-PARCOURS study

Background: Migrants' access to care depends on their health insurance coverage in the host country. We aimed to evaluate in France the dynamics and the determinants of health insurance coverage acquisition among sub-Saharan migrants.

Methods: In the PARCOURS life-event retrospective survey conducted in 2012-2013 in health-care facilities in the Paris region, data on health insurance coverage (HIC) each year since arrival in France have been collected among three groups of sub-Saharan migrants recruited in primary care centres (N = 763), centres for HIV care (N = 923) and centres for chronic hepatitis B care (N = 778). Year to year, the determinants of the acquisition and lapse of HIC were analysed with mixed-effects logistic regression models.

Results: In the year of arrival, 63.4% of women and 55.3% of men obtained HIC, but three years after arrival, 14% of women and 19% of men still had not obtained HIC. HIC acquisition was accelerated in case of HIV or hepatitis B infection, for migrants who arrived after 2000, and for women in case of pregnancy and when they were studying. Conversely, it was slowed down in case of lack of a residency permit and, for men, lack of financial resources. In addition, women and men without residency permits were more likely to have lost HIC when they had one.

Conclusion: In France, the health insurance system, which aims at protecting all, including undocumented migrants, leads to prompt access to HIC for migrants from sub-Saharan Africa. Nevertheless, this access may be impaired by administrative and social insecurities.

Introduction

With 244 million international migrants worldwide and increasing migration to Europe, migration is a global phenomenon that could influence the health of individuals [1,2]. The question of the health of migrants and their access to the health care system is therefore all the more acute. Despite an increasing focus on migration globally, there are insufficient data on the interaction between migration and health and on how health systems cope with immigration [3]. The migrant population is very heterogeneous, depending on the country of origin, the circumstances of migration and the living conditions at arrival in the host country. However, many migrants arriving in Europe from developing countries, and particularly those arriving from Africa, experience difficult migration pathways and find themselves in a precarious situation after arrival in the host countries [4]. They are thus considered at higher risk for a range of health problems in Europe, especially the undocumented ones, who are the most vulnerable [5,6]. This higher risk is partly due to poor socioeconomic conditions and, in some countries, to the lack of rights to health coverage for undocumented migrants [7][8][9].
Existing evidence from different European countries highlights the difficulties that migrants face in accessing health services [10][11][12][13]. These difficulties have various causes, such as lack of health insurance coverage or insufficient knowledge of rights and structures [14][15][16][17][18]. Access to health insurance that provides coverage for medical and hospital care is a major determinant of healthcare access and of the reduction of morbidity and mortality [19][20][21][22]. Universal health coverage is the subject of a globally approved United Nations General Assembly resolution and is the third Sustainable Development Goal of the UN Development Programme [23,24]. In addition, the specific challenges encountered in the field of migration and health have been recognized as a priority for research, as has the need for better evidence to improve health system responses to migration [3,25].

In France, the health-care system was built at the end of World War II as part of the social security system and, to date, has continuously improved to ensure health access for all [26]. It is based on a public health insurance system named Health Insurance (HI) (see the supporting information S1 Text for a detailed description). HI is based on compulsory social insurance funded by social contributions. The government provides basic Health Insurance Coverage (HIC) for French and foreign people residing in France legally and working, studying or being linked to a recipient of the social security system (assignee). This standard Health Insurance is supplemented by a voluntary private insurance that covers health care costs not otherwise reimbursed. However, such supplementary insurance is less common among the lower segments of the population. In 1999, the Universal Health insurance Coverage (UHC) was created for French people and foreign nationals living legally in France under an income ceiling who were previously excluded from Health Insurance based on administrative and/or socio-professional criteria. UHC provided them the right to basic health insurance and, depending on income, to complementary health insurance. Thus, UHC is basic health insurance coverage for inactive people living legally in France without an assignee. At the same time, the State Medical Assistance (SMA) was created for undocumented immigrants [27]. SMA covers the entire cost of care. Several supporting documents are required to apply: a passport or identity card, an address of domiciliation and over three months' presence in France. Beneficiaries must be below a resource threshold similar to the supplementary UHC coverage threshold (on the order of $10,000 annually for a single person). Dependent people (i.e. partners and children) can also benefit from State Medical Assistance. In theory, all healthcare professionals are obliged to accept SMA beneficiaries and may not charge them fees above the standard rates. The period of entitlement is one year, renewable. With the SMA, France is one of the few European countries to ensure wide access to care for undocumented migrants, albeit through a separate system [28].

In contrast to these theoretical possibilities of universal access to care, some reports show that access to care is not as easy as it should be [20,26,[29][30][31]. There is limited empirical research available that analyses migrants' access to Health Insurance Coverage.
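To summarize the three basic coverage routes just described, here is a schematic sketch. It is a deliberate simplification of our own: the broad logic and the 3-month rule follow the text, while the function name and argument encoding are invented, and the real eligibility rules, ceilings and paperwork are richer than this.

    # Schematic of the three basic coverage routes described above.

    def basic_coverage(legal_resident, working_studying_or_assignee,
                       below_income_ceiling, months_in_france):
        if legal_resident and working_studying_or_assignee:
            return "HI"   # standard Health Insurance, funded by contributions
        if legal_resident and below_income_ceiling:
            return "UHC"  # Universal Health insurance Coverage (created 1999)
        if (not legal_resident) and months_in_france > 3 and below_income_ceiling:
            return "SMA"  # State Medical Assistance for undocumented migrants
        return "uninsured"

    print(basic_coverage(True, False, True, 24))  # -> UHC
    print(basic_coverage(False, False, True, 6))  # -> SMA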
People from sub-Saharan Africa are at a higher risk of HIV and chronic hepatitis B (CHB) infections and need preventive services and access to diagnostic, care and treatment [32,33]. For migrants living with HIV or CHB, being engaged in care promotes medication adherence, prevents complications, and decreases the risk of transmission [34][35][36]. Health insurance coverage could play an important role in their diagnosis, entry and retention in care [37]. Using the data from a large life-event survey of people from sub-Saharan Africa living in France with or without HIV or CHB, we aimed to investigate the acquisition time of Health insurance after arrival in France and how acquisition and disruption are associated with social, administrative and medical determinants. Study design and participants The PARCOURS study was conducted to analyse how health trajectories and social and migratory paths are interlaced for migrants from sub-Saharan Africa who are living in France. This retrospective quantitative life-event survey was conducted from February 2012 to May 2013 in health-care facilities in the Paris metropolitan area (Ile-de-France). Three groups of migrants born in sub-Saharan Africa have been studied: one group followed in care for HIV infection in dedicated HIV centres (HIV group), one group in care for Chronic Hepatitis B (without concomitant HIV infection) followed in dedicated CHB centres (CHB group), and a third group of people who visited primary-care centres for any reason (reference group). The study used time-location sampling [38], in which healthcare facilities were randomly selected from three exhaustive lists of primary-care centres (including primary-care centres for vulnerable populations), HIV outpatient hospital clinics and hepatitis treatment clinics. We constructed three distinct sampling frames (one for each healthcare specialty) by each half-day that the healthcare facilities were open. All eligible patient visits were included from each healthcare facility and each half-day time interval. To construct a sample that reflected the contribution of the various types of healthcare facility found in Île-de-France, the number of individuals to include from each facility was determined according to the group's weight within the total population of migrants from sub-Saharan Africa in the Paris metropolitan era. The data were weighted according to each individual's probability of inclusion in the survey. Patients were eligible if they were born in sub-Saharan Africa, were citizens of a sub-Saharan African country at birth, were between 18 and 59 years old, and had not been diagnosed with HIV or hepatitis B (for the primary-care group) or with HIV infection or CHB (the other two groups) for at least 3 months. Recruitment occurred at the healthcare facilities. Physicians asked their eligible patients to participate and acquired their written consent. A trained interviewer administered a face-to-face standardized life-event history questionnaire to each participant. Information collected included sociodemographic characteristics, conditions of migration and life in France, relational, sexual, and reproductive histories, and healthcare pathways that included HIV and hepatitis B virus (HBV) testing, healthcare insurance coverage, and engagement in care. Each parameter of interest was documented year to year from birth until the time of data collection. 
To collect retrospectively this life-event information, we used the life history calendars or "Ageven" sheet (also known as life event or life grid calendars). This tool has been shown to be effective in reducing recall bias and improving data quality in retrospective studies by providing a graphical time line that helps participants to anchor their responses in relation to different life stages and events. [39][40][41][42] Clinical and laboratory information was documented from medical records. All information was anonymously collected. Ethical considerations The Advisory Committee on Data Collection in Health Research (CCTIRS) and the French Data Protection Authority (CNIL) approved the study protocol (CD-2011-484 approval on 7 December 2011). All information was anonymously collected. To take into account difficulties in participating in the survey due to poor or no knowledge of the French language, the patient questionnaire was available in French or English, and, by appointment, an interpreter could be made available to conduct the interview in an African language spoken by the respondent. Outcomes and variables of interest For each year between the arrival in France and the year of data collection, HIC was documented. The first outcome was the delay of acquisition of first HIC since arrival in France. HIC was defined as any type of basic HIC that lasted for at least one year without considering supplementary health insurance. The others outcome were incidence of first HIC interruption after obtaining it and basic HIC at the time of the study (HI, UHC, SMA or none). The fixed covariates for the analysis of the factors associated with the acquisition delay included the period of arrival, the age, the level of education, place of birth and the reported reasons for migration. Living conditions in France were documented for each year between arrival and the year of data collection through several time-dependent variables: permit of residence, housing situation, economic resources, and activity. Medical conditions including pregnancy, hospitalization, HIV and/or CHB diagnosis were dated and treated as time-dependent variables. Statistical analyses The analysis focused on people who arrived in France after 1980, who have been in France at time of interview for at least 2 years, aged over 18 on arrival and without missing data in the model variables. Persons who arrived before 1980 or who were under 18 years of age on arrival were not included. Persons who arrived in the previous year did not allow for a satisfactory analysis of the factors related to time. The database and analysis file for reproducing this analysis is available in supporting information (S1 Table and S2 Text) Sociodemographic characteristics, including the main reasons for coming to France and the hardships experienced in France were compared between groups with a design-based [chi] 2 test to compare proportions. Medians of duration were compared with non-parametric equality-of-medians tests. Characteristics associated with the acquisition of HIC each year since the time of arrival in France were identified using mixed-effect logistic regression models. Models included both fixed and time-dependent covariates and were systematically adjusted for time since arrival in France. Given the retrospective nature of the data and the heterogeneity regarding the time since arrival in France, migrants with a delayed access to HIC may have been particularly underrepresented among those who arrived within the most recent period. 
Given the retrospective nature of the data and the heterogeneity in time since arrival in France, migrants with delayed access to HIC may have been particularly underrepresented among those who arrived in the most recent period. To assess possible bias, an additional analysis was performed on a database restricted to participants who had been in France for at least 3 years at the time of interview. In the same way, we analysed factors associated with the loss over time of this first HIC among men and women in the 4 years after it was obtained. Data were weighted according to each individual's probability of inclusion in the survey, and the weights were applied to all percentages. All analyses were stratified by sex because of differentiated migratory patterns and interactions with the healthcare system. All analyses were performed in Stata SE 13.1 (Stata Corporation, College Station, TX, USA). Study population A total of 1184 (reference group), 1829 (HIV) and 1169 (CHB) individuals met the eligibility criteria, among which 124, 141 and 25, respectively, were not offered participation by their physicians due to health problems or cognitive impairment. Eventually, 763 migrants in the reference group, 926 migrants with HIV, and 778 migrants with CHB agreed to participate. A total of 552 subjects were excluded for different reasons: 76 people arrived in France before 1980, 81 had been in France for less than a year, 210 were under the age of 18 at the time of their arrival, and 185 were excluded because of missing data in the model variables. Consequently, a total of 1008 men and 907 women were included in the analysis: 547 in the reference group, 749 in the HIV group and 619 in the CHB group. The sociodemographic characteristics of the participants are described in Table 1. Women accounted for 55.6% of the reference group, 62.4% of the HIV group, and 26.8% of the CHB group. The median age at arrival was 29 years in the reference group for both sexes. Men and women in the HIV group arrived when they were older. Most came from Western and Central Africa. Men most often reported coming to France to seek work, and women reported that they came for family reunification. The median duration of residence in France was 9 years (IQR: 2-15) for men and women in the reference group. The absence of a residency permit, of personal housing, or of resources on arrival in France was frequent. The absence of a residency permit on arrival was more common in the HIV and CHB groups (Table 1). Delay to acquisition of first HIC since arrival in France The proportion of participants with HIC is presented year by year after arrival in France in Fig 1. The median time to obtaining HIC was the year of arrival in France (IQR: 0-1), with no difference across groups. Among men, 55.3% had acquired HIC by the first year after arrival. This percentage rose to 74.6% by the second year and 80.9% by the third year after arrival in France; these figures were 63.4%, 80.2% and 86.0%, respectively, for women. Factors associated with acquisition of HIC year to year after arrival in France In the univariate analysis, women acquired HIC more quickly than men. Among men, those who arrived after 2000 acquired HIC faster than those who had arrived earlier (Table 2). Men were more likely to acquire HIC during the year of a hospitalization and, where applicable, once they were diagnosed with HIV or CHB. Conversely, men were less likely to obtain HIC during years without a residency permit and during years without resources.
In the multivariate analysis, the characteristics significantly associated with faster access to HIC among men were arrival in France after the year 2000 (adjusted OR = 1.57 [1.20-2.06]) and HIV or CHB diagnosis (aOR = 1.72 [1.10-2.69] and 3.17 [2.17-4.64], respectively). Characteristics associated with delayed access to HIC were the absence of a residency permit (aOR = 0.36 [0.18-0.72]) and the absence of resources (aOR = 0.52 [0.29-0.96]). Among women, the same associations were observed with arrival after 2000, hospitalization, being diagnosed with HIV or CHB and lack of a residency permit (Table 2). In addition, women were more likely to have acquired HIC when they had a secondary level of education or higher at arrival. Furthermore, women were more likely to acquire HIC during the year of a pregnancy and during the years that they were students. In the multivariate analysis, the characteristics significantly associated with faster access to HIC among women were arrival in France after the year 2000 (aOR = 1.54 [1.12-2.12]), a secondary level of education at arrival (aOR = 1.65 [1.21-2.26]), French nationality (aOR = 4.84 [1.23-19.13]) and years of school (aOR = 12.47 [5.17-30.…]) (Table 2). When the same analysis was performed on the restricted database (first 3 years after arrival in France among people who had arrived more than 3 years earlier, N = 1736), the period effect was still significant. Participants who arrived after the year 2000 acquired HIC more rapidly than participants who had arrived before the year 2000 (adjusted OR = 1.57 [1.12-2.20] for men and 1.94 [1.38-2.72] for women; detailed results not shown, available on request). HIC interruptions Four years after obtaining their first HIC, 7% of men and 3% of women had lost their HIC for more than a year (Fig 2). Factors associated with HIC interruption year to year in the 4 years after obtaining it among men and women are presented in Table 3. In the univariate analysis, men and women without a residency permit were more likely to have lost HIC. Men who came to France for a medical reason, men diagnosed with HIV, and women under 25 years of age on arrival in France were less at risk of losing their HIC. In the multivariate analysis, the only characteristic significantly associated with interruption in HIC was the lack of a residency permit, for both men and women (aOR = 4.51 [2.17-9.37] and 4.41 [1.50-12.94], respectively). Women under 25 years of age on arrival in France lost their HIC less often (aOR = 0.14 [0.03-0.64]). Of the 84 participants who lost health insurance coverage in the four years after obtaining it, 62% (N = 49) did not have a residency permit during the year of lapsed HIC. Among them, 42% (N = 22) previously had a residency permit and had lost it. HIC at the time of the study At the time of the survey, most African migrants had basic health insurance coverage (from 88.5% to 98.3% depending on the group and sex), most often the Health Insurance (Table 4). The Universal Health insurance Coverage (UHC) was often used (from 17.7% to 24.9%). The State Medical Assistance (SMA) was more frequent in the CHB group (24.0% of men and 21.9% of women) and among men in the reference group (13.6%). In the reference group, 11.4% of men and 5.8% of women were uninsured (p = 0.05).
This proportion was lower in the HIV and CHB groups for both sexes (p<0.01). Uninsured participants had arrived in France more recently (median 2 years before the survey vs 10 years for the others, p<0.001) and were more often without a residency permit (72.2% vs 15.7% of those with health insurance coverage, p<0.001). Discussion The PARCOURS survey provides original life-event data on documented and undocumented migrants living in France. This study is the first to have evaluated access to Health Insurance Coverage (HIC) among migrants year by year after their arrival in Europe. It shows that migrants from sub-Saharan Africa quickly gain access to HIC after their arrival, especially when they have a health need. Nevertheless, this access is impaired by administrative and social insecurity. For most migrants, access to HIC occurs in the first year after arrival in France. This finding emphasizes the positive role played by French regulations that allow health coverage for all residents, including undocumented migrants. A comparative study of regulations on access to health care for undocumented migrants in the European Union placed France among the countries where access was the highest [45]. Migrants who arrived after the year 2000 were more likely to have acquired HIC early. In France, this could be related to the implementation in 1999 of the UHC for unemployed persons, including asylum seekers, and of the SMA for undocumented migrants [27]. Previously, these people could not benefit from the Health Insurance system but only from incomplete social assistance granted by local authorities. Despite this apparently good access, access to HIC for migrants is not always effective and requires knowledge of the French system and social assistance to obtain it [20,26,[29][30][31]. The majority of people who have recently arrived in France are not informed of their rights or of the HIC available to undocumented migrants. In the Doctors of the World medical centres, only 14.2% of people who could theoretically benefit from health coverage had actually opened their rights [29]. They may also be afraid to interact with institutions for fear of being arrested and held in detention. Among those consulting without a residency permit, 35% declared that they limited their movements for fear of being arrested, and nearly one in four lacked an address with which to access their rights [29]. According to the Platform for International Cooperation on Undocumented Migrants and NGOs, thousands of undocumented migrants in France do not have the SMA coverage to which they are entitled [12,27,29,30,46]. The main reasons cited include uneven interpretation and implementation of the law across the different social security desks, undocumented migrants' lack of awareness of the law, lack of acceptable identification documents or adequate evidence regarding residency requirements, language barriers and the fear of being arrested. The Defender of Rights, an independent constitutional authority, has also documented many barriers to rights in France [30]. It noted, in particular, that social security desks sometimes impose excessively restrictive entitlement conditions and request unjustified documents. Institutional barriers to access have also been reported by French NGOs [29,47]. These practices can thus be interpreted as a way of limiting the goals of the law. One of the reasons put forward by the desks for this practice is the fight against fraud.
These administrative practices and abuses vary across the territory and could be corrected through training and oversight. As described before, newly arrived migrants often go through an extended period of hardship (lack of residency permit, economic resources, and housing) in France [4,43]. Half of the women did not obtain their first valid one-year residency permit until their third year in France, and half of the men obtained this permit in their fourth year. When they obtain a residency permit, it is often a temporary permit that may not be renewed the following year. The PARCOURS study shows that the absence of a residency permit delayed the acquisition of HIC and was the main reason for HIC lapse. Undocumented migrants cannot benefit from SMA in the first three months after their arrival in France and, as described above, do not always access their rights beyond that point. It is also important to note that the major French surveys addressing the issue of access to care systematically exclude undocumented migrants because of research legislation. These results show the effect of the management of residency permits, with delays and interruptions linked to immigration policy. The lack of financial resources is also an obstacle to accessing HIC for men, once again emphasizing the weight of social and administrative insecurity in access to care. Thus, despite the introduction of UHC and SMA for unemployed and undocumented migrants residing in France, irregular residency status and financial hardship remain barriers to access to care. During this period of hardship, health is not a priority, and access to the legal rights to medical assistance is often restricted to situations where there is an acute health concern and/or severe illness. Thus, contacts with the health-care system, particularly at the time of a diagnosis or complication related to HIV or CHB infection, pregnancy or hospitalization, promote health coverage. Because of the need for care, contacts with the health system facilitate access to social assistance and therefore make effective the health coverage rights provided by law. Furthermore, an HIV diagnosis allows a person to apply for a residency permit for health reasons, which may contribute to better access to HIC. In addition, there is no restriction on access to care in the case of pregnancy in France. Among women, the level of education and current student status appear to be factors favouring HIC. This trend has been described elsewhere, particularly in another large French study in which education and income appeared to be the most important drivers of inequalities between French and immigrant populations in the propensity to see a medical specialist [20,48]. The lack of basic health insurance coverage at the time of the study was more frequent in the reference group and among participants who had arrived recently or were without a residency permit. This confirms that, despite the right to State Medical Assistance for undocumented migrants in France, some do not apply. This study is limited by its focus on patients who are engaged in care. The distribution of Health Insurance Coverage among migrants in care may differ from that of persons not in care. Additionally, our findings may not generalize to all HIV or CHB patients and care settings, since the study was conducted only in the Paris metropolitan area. However, 60% of sub-Saharan migrants in France live in the Paris area, and our sample is highly diversified, with patients having a variety of demographic and clinical characteristics.
Conclusions In conclusion, the French social security system provides quick access to Health Insurance Coverage for the majority of immigrants arriving in France. This access is facilitated by the existence of the Health Insurance and, since 2000, by the Universal Health insurance Coverage and the State Medical Assistance. However, despite a system built to facilitate access to care for all, including undocumented migrants, socioeconomic insecurity and residency-permit insecurity remain barriers to full access. At a time when the French social model is being questioned in the context of increasing arrivals of refugees, vigilance is essential to continue to secure their access to HIC, which is a precondition for access to care. In particular, it is a priority to maintain the State Medical Assistance and the complementary Universal Health insurance Coverage, or, better, to merge them into the Health Insurance. This is all the more important because the benefits of the UHC extend, beyond migrants, to all people in precarious situations in France. It is also important to develop actions that facilitate access to rights and care for newly arrived migrants. This is of particular interest for migrants living with HIV or CHB, to improve early diagnosis, linkage to care and retention in care. It is in line with the individual and public health benefits associated with HIV care and treatment: improved health outcomes and reduced transmission risks. It is also essential to harmonize European policies to achieve the United Nations goal of universal health coverage.
An Exploratory Analysis of Sound Field Characteristics using the Impulse Response in a Car Cabin Sound environments in cars are becoming quieter and receiving attention because of the prevalence of low-noise engines, such as hybrid and electric engines, and the emergence of automated driving. Although the car cabin has potential as a listening space, its acoustic quality has not been examined in detail. The present study investigated sound field characteristics in the car cabin using acoustic parameters obtained by impulse response analysis. In particular, the effects of the passenger position, open windows and the use of an air conditioner on acoustic parameters were investigated. The passenger position affected the sound strength at low frequencies. Rear seats, except for the rear central seat, had lower interaural correlation than the front seats, suggesting that rear seats have more diffused sound fields. Opening the windows and using the air conditioner attenuated the ratio of early- to late-arriving energy at high frequencies, suggesting a loss of clarity for music. Introduction Sound environments in cars are becoming quieter because of technological advances in active noise control [1], acoustic insulation and absorption, as in the case of a rubberized road surface [2], and the prevalence of low-noise engines such as hybrid and electric engines [3]. In the near future, automated driving will change the roles of sounds in cars, and sound will be able to be used to improve the in-car sound environment. A car cabin therefore has appreciable potential to become a safer environment, through the reduction of background noise and the emphasis of informative signals, and to become a listening space. Many sounds are heard in a car cabin, such as those generated by the interaction of the road pavement and rolling tires, the engine, gears, brakes and wind. These sounds affect a passenger's safety and comfort. Many studies have therefore evaluated the sound quality in car cabins, considering sounds of the engine [4,5], doors closing [6,7], power windows [8][9][10], switch buttons [11], hard disk drives [12], heating, ventilation and air-conditioning [13][14][15], tires [16] and wind [17]. An objective sound quality evaluation model for the cabin noise of cars idling or moving at constant speed and cars accelerating and decelerating has been constructed on the basis of sound metrics used in psychoacoustics and an artificial neural network technique [18][19][20][21]. Dimensions of vehicle sound perception have been investigated by conducting an online survey [22]. The important dimensions are timbre, loudness and roughness/sharpness. These three dimensions are consistent with the three dimensions of more general human perception of sound [23]. The quality and design of sound in car cabins are thus receiving much attention. However, previous studies have not paid much attention to the sound field characteristics of car cabins. There has been much research on the reproduction of the sound field in car cabins. The majority of studies acquired impulse responses in a car cabin using microphones, typically dummy-head microphones. The obtained signals were then used to reproduce the binaural field over headphones or loudspeakers [24,25]. An extension to this method has been proposed, allowing for natural head movement during evaluation by dynamically updating the appropriate measurement angle [26].
The analysis and synthesis of microphone array measurements provide more accurate spatial sound reproduction than dummy-head measurements [27][28][29]. Using such higher-accuracy sound field reproduction methods, the effects of the physical characteristics of a sound field in the car cabin on human perception have been investigated [29][30][31][32]. Perceptual attributes such as bass, brightness and envelopment have been proposed as important attributes for acoustic evaluation in the car cabin. In addition, binaural measurements have been made to identify unwanted sounds in the environment [33,34]. Such sounds should also be avoided inside a car, and noise attenuation from the outside to the inside should be studied, to ensure a better listening environment for music. The relationships between human perception and physical acoustic characteristics in car environments are not well understood. Although some studies have investigated the acoustic characteristics in a car cabin [35,36], basic characteristics, such as the reverberation time (RT) and the balance between early- and late-arriving energy (Cx), have not been clarified. In addition, the effects of passenger position, open windows and the operation of an air conditioner on the sound field have not yet been clarified. Understanding the acoustic characteristics of the car cabin could aid the development of the evaluation and optimization of automotive audio. The present study identified factors that change the sound field characteristics in car cabins. To understand the present situations of drivers and passengers, the effects of the passenger position, absorption by passengers, open windows and use of an air conditioner were specifically investigated as a first step, although there are many other factors that may affect sound field characteristics, such as the characteristics of the loudspeakers, the settings of car audio systems and the interior materials. The present study investigated the sound fields of only two car cabins. The findings of the study are but a starting point for the investigation of a wide variety of car cabins. Methods Two cars equipped with normally tuned audio systems were chosen for the measurements. Car A was a sedan, while car B was a small car. Six loudspeakers were installed in each car, as shown in Figure 1. Midrange loudspeakers were installed at the bottom of the left and right front and rear doors in both cars. Tweeters were installed at the left and right A-pillars in car A and at the dashboard in car B. The frequency characteristics of the loudspeakers in cars A and B are shown in Figure 2. Impulse responses were measured three times for each setting in each car. A sinusoidal signal with an exponentially varying frequency, sweeping from 20 Hz to 20 kHz over a period of 20 s and recorded on a compact disc, was sent through the installed loudspeakers and recorded by a laptop computer at a sampling rate of 48 kHz and a sampling resolution of 24 bits via a head and torso simulator (HATS, Type 4128C, Brüel & Kjaer, Naerum, Denmark) and an AD/DA converter (Fireface UCX, RME, Haimhausen, Germany). The recorded signals from the HATS were deconvolved to obtain the impulse responses [37].
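The measurement chain just described follows the familiar exponential-sweep method: play a logarithmic sine sweep, record it at the listening position, and convolve the recording with an inverse filter to recover the impulse response. The sketch below is a minimal NumPy illustration of that idea; the Farina-style inverse filter and the simulated example are ours, not the authors' measurement code.

```python
import numpy as np

FS = 48000  # sampling rate used in the measurements (Hz)

def exp_sweep(f1=20.0, f2=20000.0, duration=20.0, fs=FS):
    """Exponential sine sweep from f1 to f2 and its inverse filter."""
    t = np.arange(int(duration * fs)) / fs
    rate = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / rate
                   * (np.exp(t * rate / duration) - 1.0))
    # Inverse filter: time-reversed sweep with a decaying amplitude envelope
    # that compensates the sweep's spectral slope, so that
    # sweep (*) inverse approximates a band-limited impulse.
    inverse = sweep[::-1] * np.exp(-t * rate / duration)
    return sweep, inverse

def deconvolve(recording, inverse):
    """FFT-based convolution of the recording with the inverse filter,
    yielding the (linear) impulse response."""
    n = len(recording) + len(inverse) - 1
    spectrum = np.fft.rfft(recording, n) * np.fft.rfft(inverse, n)
    return np.fft.irfft(spectrum, n)

# Example: deconvolving the sweep itself should give a near-perfect
# impulse at the end of the sweep (a short sweep keeps this example light).
sweep, inverse = exp_sweep(duration=2.0)
ir = deconvolve(sweep, inverse)
print("peak position (samples):", int(np.argmax(np.abs(ir))))
```

In an actual measurement, `recording` would be the HATS channel captured while the sweep plays through the cabin loudspeakers, giving one impulse response per ear.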
To clarify the effect of the position in the car, the HATS was located on the driver, passenger, or rear seat in Experiment 1. The HATS always faced forward. The effects of an open window, the air conditioner and absorption by a person were investigated in Experiment 2, with the HATS fixed on the driver seat. All windows were open and the air conditioner was turned off in the open-window setting. The effect of the air conditioner was investigated by setting the air-conditioning mode to off, weak and strong. To clarify the effect of absorption by humans, the passenger and rear seats were occupied by persons in one experimental setting. When all seats were occupied, the air conditioner was turned off in car A and set to weak in car B. The background noise level was measured as the A-weighted equivalent continuous sound pressure level (LAeq) and is summarized in Table 1. The car was in an idling state and stationary during the measurement. Orthogonal parameters obtained from the binaural impulse responses in a sound field have been proposed to evaluate the subjective preference at each seat in a concert hall [38,39]. The four orthogonal parameters are the sound pressure level (SPL), the initial time delay gap between the direct sound and the first reflection (ITDG), the reverberation time (RT) and the magnitude of the interaural cross-correlation function (IACC). Three subjectively different aspects of the objective parameters have been proposed to describe the properties of a sound field [40]: loudness (sound strength (G), which corresponds to the SPL), reverberance and clarity (RT, early decay time (EDT) and the balance between early- and late-arriving energy (Cte), where te denotes a time limit of either 50 or 80 ms and C80 denotes the clarity for music) and spaciousness (IACC). To evaluate sound fields in car environments, we calculated G, ITDG, RT, EDT, C80 and IACC from the impulse responses according to the ISO 3382 standard [41], although we could not comply with some rules, such as those concerning the sound sources. G values were normalized by all-pass values. RT was derived from the times at which the decay curve first reaches 5 and 25 dB below the initial level and is denoted T20. G, RT, EDT, C80 and IACC values were obtained at one-octave-band center frequencies between 125 Hz and 4 kHz. Sharpness was also calculated to evaluate the high-frequency content of the impulse response [42]. The sharpness of the impulse response was calculated by applying a weighting function to the specific loudness spectrum. The values obtained for the binaural impulse response were calculated as arithmetic means for the two ears. The analyses were conducted using a Matlab-based analysis program (Mathworks, Natick, MA, USA). The effects of the position in the car, an open window, the air conditioner, absorption by a human and the type of car on G, ITDG, RT, EDT, C80, IACC and sharpness values were statistically analyzed using repeated-measures analysis of variance (ANOVA). The analyses were carried out using SPSS statistical analysis software (SPSS version 24.0, IBM, New York, NY, United States).
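For readers who want to reproduce such parameters, the sketch below shows plain NumPy versions of T20, C80 and IACC computed from a single-band impulse response. It is a simplified illustration: the response is assumed to start at the direct sound, and the per-octave-band filtering required by ISO 3382 is omitted; it is not the authors' Matlab program.

```python
import numpy as np

def schroeder_db(ir):
    """Backward-integrated (Schroeder) energy decay curve in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def t20(ir, fs):
    """T20: line fitted to the decay between -5 and -25 dB,
    extrapolated to 60 dB of decay."""
    decay = schroeder_db(ir)
    t = np.arange(len(decay)) / fs
    mask = (decay <= -5.0) & (decay >= -25.0)
    slope, _ = np.polyfit(t[mask], decay[mask], 1)   # dB per second
    return -60.0 / slope

def c80(ir, fs):
    """C80: early (0-80 ms) to late (>80 ms) energy ratio in dB."""
    k = int(0.080 * fs)
    return 10.0 * np.log10(np.sum(ir[:k] ** 2) / np.sum(ir[k:] ** 2))

def iacc(ir_l, ir_r, fs):
    """Maximum of the normalized interaural cross-correlation
    within +/-1 ms of lag, for a binaural pair of responses."""
    max_lag = int(0.001 * fs)
    norm = np.sqrt(np.sum(ir_l ** 2) * np.sum(ir_r ** 2))
    xcorr = np.correlate(ir_l, ir_r, mode="full")    # lags -(n-1)..(n-1)
    mid = len(ir_r) - 1
    return np.max(np.abs(xcorr[mid - max_lag: mid + max_lag + 1])) / norm

# Quick check on a synthetic exponentially decaying noise "impulse response".
fs = 48000
t = np.arange(fs) / fs
ir = np.random.default_rng(0).standard_normal(fs) * np.exp(-t / 0.05)
print(f"T20 ~ {t20(ir, fs):.2f} s, C80 ~ {c80(ir, fs):.1f} dB")
```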
Results The G values between 125 and 1000 Hz were lower than those at 2000 and 4000 Hz (Figure 3d). One-way ANOVA indicated significant effects of frequency on G values (p < 0.01). This attenuation at lower frequencies has not been observed in concert halls, churches, or temples [43][44][45], although attenuation in a limited low-frequency range caused by the seat-dip effect has been observed in concert halls and theaters [46][47][48]. C80 values are shown for cars A (Figure 6c) and B (Figure 6d). Repeated-measures ANOVA indicated significant effects of the open window, the air conditioner and absorption by a human on C80 values (p < 0.01). The strong air-conditioner setting reduced C80 by more than 3 dB between 250 and 8000 Hz in car A and between 500 and 8000 Hz in car B. The weak air-conditioner setting reduced C80 by more than 3 dB between 1000 and 4000 Hz in cars A and B. The open-window setting reduced C80 between 1000 and 4000 Hz; in particular, there was a reduction of more than 30 dB at 4000 Hz. Discussion Effects of the passenger position on G, EDT, C80 and IACC were observed, yet the effects were not always the same for cars A and B because of differences in, for example, volume and upholstery. G values between 125 and 1000 Hz differed between front and rear seats, although the tendency was not the same in cars A and B. EDT values were longer in the passenger and rear-left seats of car A and in the rear seats of car B. C80 values decreased around 2 kHz in the rear seats of cars A and B. IACC values were high below 1 kHz in the rear central seat of car A because of the strong direct and reflected sounds received from the front direction. IACC values averaged over the 125 Hz to 4 kHz octave bands were smaller in the rear-left and rear-right seats than in the front seats, suggesting more complex reflections and more diffused sound fields in the rear seats. No effects of the passenger position on ITDG were observed. Opening the windows attenuated G values at 125 and 250 Hz and C80 values above 1 kHz, suggesting that lower-frequency components of the reflections escaped through the windows and that higher-frequency components of the reflections were delayed. Opening the windows increased sharpness, confirming the loss of lower-frequency components. IACC values at 500 Hz increased under the open-window condition. A prominent increment was also observed when there were multiple passengers. Open windows and absorption by passengers, in combination with small cabin volumes and differences in upholstery, affect the IACC behavior in a car cabin. No effects of opening the windows on ITDG, RT and EDT were observed. A fluctuation in SPL due to the use of an air conditioner has been observed in the frequency domain above 2 kHz in a large space [49]. This is explained as the result of the combination of the direct wave (regular) and the changing delay time of the reflected sound (irregular). In this study, the use of the air conditioner reduced C80 values from 1 kHz upward under the weak mode and from 500 Hz upward under the strong mode, suggesting that noise generated by the air conditioner blurred the high-frequency components. RT values became slightly longer under the strong air-conditioning mode, although the increment was smaller than the perceptual threshold [50]. No effects of using the air conditioner on G, ITDG, EDT and IACC were observed. Conclusions Factors that affect sound field characteristics in a car cabin were investigated. An effect of the passenger position on the sound strength, G, was found between 125 and 500 Hz. The rear central seat had the highest magnitude of the interaural cross-correlation function, IACC. Opening the windows and using the air conditioner attenuated the balance between early- and late-arriving energy, C80, above 1 kHz, resulting in a loss of clarity of music. In the field of architectural acoustics [38,39,51], theory proposes optimal ranges of acoustical factors, such as the reverberation time (RT), early decay time (EDT) and balance between early- and late-arriving energy (C80), and is used in the actual design of concert halls, opera houses and churches [44,52]. The theory can also be applied to sound fields in car cabins, although the optimal ranges of the acoustical factors may be different. The present study is part of our investigation of the optimal ranges of acoustical factors in car cabins.
The optimal range of an acoustical factor is affected by the type of music [38,39,51] and the characteristics of the music source [53,54]. Sound field characteristics in the car cabin can affect which music is suited to the car cabin. It is possible to harmonize music with the car cabin using the acoustic factors used in this study and factors calculated for the music source. These results will be helpful in understanding sound fields, guiding improvements to the sound field and finding appropriate music for car cabins. The present study thus provides results that are needed to commence further studies. Author Contributions: Yoshiharu Soeta and Yoshisada Sakamoto conceived and designed the experiments; Yoshiharu Soeta performed the experiments and analyzed the data; Yoshisada Sakamoto contributed the adjustments of the cars; Yoshiharu Soeta wrote the paper. Conflicts of Interest: The authors declare no conflict of interest.
Cognitive Functioning among Older Adults in Japan and Other Selected Asian Countries: In Search of a Better Way to Remeasure Population Aging Japan is the oldest society in the world. It has the highest proportion of the population aged 65 and over, a demographic indicator that has been used by demographers for more than a century. One of the main objectives of this study is to apply a new indicator — the cognition-adjusted dependency ratio (CADR) — to remeasure the level of population aging from an innovative point of view. To compute this new index, we apply the mean age-group-specific immediate recall scores for Japan and four other Asian countries, and we compare the results with those derived from the United States and various developed nations in Europe. Our analysis shows that Japan's pattern and level of age-related decline in cognitive functioning are highly comparable to those of many other developed nations, particularly in Continental Europe. Among the other Asian countries, Malaysia shows a pattern of change similar to countries in Southern Europe, although Malaysia has slightly lower scores than Southern Europe in all age groups. More importantly, these comparative results based on the CADR are astonishingly different from the corresponding results obtained from conventional old-age dependency ratios. The Japanese case is the most salient example. Two earlier studies, one published in 2019 and another by Schneeweis, Skirbekk, and Winter-Ebmer (2014), successfully addressed the issue of causal ordering by drawing heavily on powerful instrumental variables created based on the variation caused by major policy reforms. Although we have, in the hope of addressing the issue of endogeneity, attempted to identify appropriate instrumental variables by going through the various datasets available in the five Asian countries, our attempts had met with no success at the time of revising this paper. Thus, following many earlier studies on this research topic, we confine ourselves in this study to examining the association between individuals' cognitive performance and their demographic and socioeconomic backgrounds. The issue of endogeneity remains to be addressed in our future work. Introduction Since the second half of the 1960s, the tempo of world population growth has been gradually slowing down due to substantial fertility declines in various countries, both developed and developing. Population aging has become a worldwide phenomenon, attracting growing attention from researchers and policy makers, particularly for its escalating economic and social costs (Sanderson and Scherbov 2010). The field of demography has increasingly recognized that while the 20th century was the century of "population explosion," the 21st century is becoming the century of "population aging" (Hermalin 2003; Lutz, Sanderson, and Scherbov 2004; United Nations 2007; Clark, Ogawa, and Mason 2007; Fu and Hughes 2009; Uhlenberg 2009; Arifin and Ananta 2009; Tuljapurkar, Ogawa, and Gauthier 2010; Eggleston and Tuljapurkar 2010; Lee and Mason 2011; Park, Lee, and Mason 2012; Kendig, McDonald, and Piggott 2016). At present, almost 60% of the world population inhabits Asia, making it the most populous region in the world.
Also, the proportion of Asia's population aged 65 and over in the world's elderly population has been continuously rising since the end of World War II. In 1950 it was 44%, but it reached 57% by 2020 and is now projected to grow to 62% in 2050 (United Nations 2019). In parallel with such rapid growth of its older population, Asia has also witnessed dramatic changes in its demographic landscape, particularly its population's age composition. Asia's total dependency ratio, which is expressed as the ratio of the number of dependents to the working-age population {[(0-14 years old) + (65 years old and over)]/(15-64 years old)}, reached its peak value (0.81) in 1966, after which its projected long-term trend showed a U-shaped pattern, reaching its trough value (0.47) in 2015. In addition, there have been substantial inter-country differences in the trends and levels of population aging within Asia in the past several decades (Mason 2001; Lee and Mason 2011; Ogawa et al. 2021).1 To compare the burden of population aging across countries, we frequently use conventional demographic indicators such as the old-age dependency ratio, which is defined as the ratio of the number of elderly people to the working-age population [(65 years old and over)/(15-64 years old)], and the index of aging, expressed as the ratio of the number of elderly persons to the young population [(65 years old and over)/(0-14 years old)]. Based on these commonly used demographic indicators, we characterize and rank how old countries are. Although these demographic indicators are readily available to researchers, one of their most serious limitations is that they are exclusively based on chronological age distributions. Because of this, they fail to provide a powerful base for deriving persuasive conclusions on the consequences of and possible responses to population aging. To cope with this major drawback, Skirbekk, Loichinger, and Weber (2012) recommend a new approach in which age variation in cognitive abilities among older persons is incorporated into a revised version of the conventional total dependency ratio, with a view to comparing the extent of aging across countries from an innovative standpoint. It is important to note that this new approach has become feasible primarily thanks to an increasing number of surveys collecting individual data on cognitive abilities among older persons in numerous countries, both developed and developing, particularly since the 1990s. Among them, the Health and Retirement Study (HRS), a longitudinal survey of a representative sample of United States (US) citizens over the age of 50, is the most well-known and has served as a public resource for data on aging since 1990. The HRS has a number of sister studies in many countries all over the world. In recent years, such studies have been implemented in five Asian countries: Japan (the Japanese Study of Aging and Retirement or JSTAR), the People's Republic of China (PRC; the China Health and Retirement Longitudinal Study or CHARLS), India (the Longitudinal Ageing Study in India or LASI), Thailand (the Health, Aging, and Retirement in Thailand or HART), and Malaysia (the Malaysia Ageing and Retirement Survey or MARS).2 By drawing heavily on microlevel datasets gathered from these surveys, we compute age-specific cognitive abilities among adults aged 50 and over in each of these Asian countries, and then compare the differences in their cognitive performance. We also compare them with their counterparts in selected Western countries.
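For concreteness, the conventional indicators defined above, and the kind of cognition adjustment that motivates the CADR, can be written out in a few lines. The sketch below uses invented population counts and recall scores, and the adjustment shown is only one plausible illustration in the spirit of Skirbekk, Loichinger, and Weber (2012), not the exact CADR formula applied later in the paper.

```python
# Conventional aging indicators from an age distribution
# (counts in millions are invented for demonstration).
pop = {"0-14": 20.0, "15-64": 60.0, "65+": 20.0}

total_dependency = (pop["0-14"] + pop["65+"]) / pop["15-64"]   # ~0.67
old_age_dependency = pop["65+"] / pop["15-64"]                 # ~0.33
index_of_aging = pop["65+"] / pop["0-14"]                      # 1.00

# A *purely illustrative* cognition adjustment: discount each older age
# group's contribution by its mean recall score relative to a reference
# group (made-up scores; not the paper's exact formula).
recall = {"65-69": 4.8, "70-74": 4.3, "75+": 3.6}
elderly = {"65-69": 8.0, "70-74": 7.0, "75+": 5.0}             # sums to 20.0
reference_score = 5.5                                          # e.g., ages 50-54
adjusted_elderly = sum(elderly[g] * (1 - recall[g] / reference_score)
                       for g in elderly)
cadr_like = (pop["0-14"] + adjusted_elderly) / pop["15-64"]
print(f"old-age dependency: {old_age_dependency:.2f}, "
      f"cognition-adjusted (illustrative): {cadr_like:.2f}")
```

The point of the exercise is visible even in this toy example: once older persons' retained cognitive capacity is credited, the measured "burden" of aging can differ substantially from what the unadjusted ratios suggest.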
In the second half of the paper, we examine, by applying microlevel data from the Asian surveys to regression models, how and to what extent the cognitive abilities of older adults in each country are related to a host of demographic, socioeconomic, and biopsychological factors. Based on the computed results, we discuss both similarities and dissimilarities in the relationships between cognitive functioning and its covariates, such as demographic, socioeconomic, and medical factors, in the five Asian countries. Subsequently, we briefly discuss the likely future trends in older adults' cognitive abilities in these countries. The paper is organized as follows. Section II discusses cognition measures and matters related to them to facilitate, later in the paper, a variety of analyses of inter-country variations in the cognitive functioning of older workers in the five selected Asian countries, the US, and a number of industrialized nations in Europe. To provide a solid base for conducting such analyses, Section III reviews several important earlier studies, which have examined numerous key factors linking cognitive functioning and a host of demographic and socioeconomic factors. Section IV describes the data from the five Asian surveys mentioned earlier, which will be used in Section V to compute the mean age-group-specific immediate recall scores in the Asian countries. These scores will be compared to those for Europe and the US, as derived from an earlier study by Skirbekk, Loichinger, and Weber (2012). In Section VI, we relate the computed mean age-group-specific immediate recall scores to population aging using the cognition-adjusted dependency ratio (CADR). In Section VII, we attempt to identify the factors associated with immediate recall scores among adults aged 50-79. Section VIII summarizes the main findings with a few policy implications. II. Measuring Cognitive Functioning Over the past few decades, rapid population aging worldwide has compelled several countries in Europe, Asia, and elsewhere to gradually postpone the mandatory retirement age to maintain the financial solvency and sustainability of their public pension schemes (Ogawa 1992; Clark et al. 2008). At the same time, because cognition affects the capacity to acquire and use information, improving cognitive functioning at older ages has been adopted as a top public health priority in many countries to enable individuals to make good decisions and, ultimately, to remain independent and care for themselves longer (Maurer 2010). However, the cognitive functioning of older workers can vary widely across countries, which can create large differences between them in the severity of various problems arising from aging (Skirbekk, Loichinger, and Weber 2012). Due to the importance of cognition and cognitive variation among older adults, the HRS and its sister studies have made cognitive measurement a priority (Ofstedal, Fisher, and Herzog 2005; Weir, Lay, and Langa 2014). In general, the following activities are regarded as cognitive processes: thinking, knowing, remembering, judging, and problem-solving. Both fluid intelligence and crystallized intelligence are used in these cognitive activities (van Aken et al. 2016). Fluid intelligence is the ability to use logic and solve problems in new or novel situations without resorting to pre-existing knowledge. Fluid intelligence plays a role in the creative process, and we often use it to handle nonverbal tasks such as mathematical problems and puzzles.
On the other hand, crystallized intelligence is the ability to make use of information or knowledge previously acquired through education and experience. We usually employ crystallized intelligence when we encounter verbal tasks, such as reading comprehension or grammar. Crystallized intelligence is generally retained or even improved over time. By contrast, because fluid intelligence is rooted in physiological functioning, it typically peaks in young adulthood (at approximately age 25) and then steadily declines. Although fluid and crystallized intelligence represent two distinctly different sets of abilities, they often work jointly. For instance, when taking a test in mathematics, we use mathematical formulas and notations such as (+) and (−), which come from crystallized intelligence as pieces of pre-existing knowledge, but also utilize fluid intelligence to develop strategies and derive solutions to accomplish the task. In the HRS family of studies, different cognitive tests have been used to measure specific mental capacities. For instance, the capacities frequently tapped in previous research investigations are episodic memory, numeracy, orientation, attention, working memory, and verbal fluency. One of the key domains in measuring cognition in an aging population is short-term memory, which is frequently evaluated using word recall tasks (Weir, Lay, and Langa 2014).3 The ability to recall words read from a randomly selected list of a certain number of given words generally declines with age.4 This ability is usually measured in two ways: (i) immediate word recall and (ii) delayed word recall. In immediate word recall, the respondent reads a list of 10 words and, after a very brief interval, recalls as many words as possible, not necessarily in order, within one minute. In delayed word recall, the respondent is asked to recall as many words as possible approximately five minutes after the immediate word recall task, out of the same list the respondent had read for the immediate recall task. The number of words and the length of time allowed for recalling the words can vary from survey to survey. For example, in the HRS, for both immediate and delayed recall tasks, a respondent reads one out of four possible lists of 10 words and then has two minutes to recall the words. Despite being given two minutes, a majority of HRS respondents do not make use of the second minute: 90% of the respondents used less than 49.2 seconds in the immediate recall task, and less than 50.4 seconds in the delayed recall task. Based on these results, Skirbekk, Loichinger, and Weber (2012) assert that recalling the words within one or two minutes does not affect the validity of inter-country comparative results on cognitive abilities. In addition to short-term memory (e.g., immediate word recall and delayed word recall), a person's working memory is also used in the literature to measure variation in cognitive functioning. A common procedure for assessing working memory is a task called serial-7s, which is the repeated subtraction of sevens starting from 100. This activity involves numeric ability as well as the ability to attend to a task, thus falling into the category of fluid intelligence. Apart from these measures of fluid intelligence, indicators of crystallized intelligence such as verbal fluency are also used to measure cognitive functioning. A well-known task in verbal fluency is naming animals: respondents list as many names of animals as they can in one minute.5
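Since the cross-country comparisons later in the paper rest on mean immediate recall scores by 5-year age group, here is a short, self-contained sketch of that computation on synthetic respondent-level data. The column names, the synthetic scores and the survey weights are all hypothetical, not those of any of the surveys used in this study.

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-level data: age, immediate recall score
# (0-10 words) and a survey weight (inverse inclusion probability).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age": rng.integers(50, 85, size=1000),
    "recall": np.clip(rng.normal(5.0, 2.0, size=1000), 0, 10).round(),
    "weight": rng.uniform(0.5, 2.0, size=1000),
})

# 5-year age groups, as used in the inter-country comparisons.
df["age_group"] = pd.cut(df["age"], bins=range(50, 90, 5), right=False)

# Weighted mean recall score per age group.
weighted_means = df.groupby("age_group", observed=True).apply(
    lambda g: np.average(g["recall"], weights=g["weight"]))
print(weighted_means)
```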
In view of the availability and comparability of HRS-type survey data among the five Asian countries, we confine our analysis of the cognitive functioning of older adults to immediate word recall scores.6 Another reason for this restriction is that an overwhelming majority of the empirical studies conducted outside Asia thus far have been based on such scores, which means that we can compare our results with these studies (e.g., Weber et al. 2014; Bonsang, Skirbekk, and Staudinger 2017).

5 Orientation is another measure of crystallized intelligence. Orientation is measured using a set of tests involving simple questions about the date and day of the week. The HRS contains additional items concerning the names of US presidents and vice-presidents.

6 Immediate word recall has been shown to be important for a variety of outcomes, ranging from financial decision-making to the risk of developing dementia (Fein, McGillivray, and Finn 2007; Skirbekk, Loichinger, and Weber 2012). Moreover, technological advances and changes in working procedures imply that the importance of the ability to learn and remember is increasing (Machin and Van Reenen 1998). Employers are particularly interested in whether their employees are able to learn new work procedures and process new information (Munnell, Sass, and Soto 2006), which also suggests that employers view the ability to immediately recall information as advantageous to labor market performance.

III. Earlier Studies Pertaining to Cognitive Functioning among Older Adults The 20th century saw considerable growth in cognitive functioning in many countries. The factors that induced such cognitive improvements include greater exposure to cognitive stimulation through better education, improved living conditions, steady improvements in health, and declining average family size triggered by lower fertility and changing marriage values (Sundet, Borren, and Tambs 2008; Lynn 2009). The study conducted by Skirbekk, Loichinger, and Weber (2012) examined inter-country age variation in cognitive functioning by measuring immediate recall scores. The authors computed the mean age-group-specific immediate recall score using data from the HRS, the World Health Organization Study on global AGEing and adult health (SAGE), and the Survey of Health, Ageing and Retirement in Europe (SHARE). Caution should be exercised, however, in interpreting their results. For each 5-year age group, the mean value of the immediate recall score for older persons in that age group was computed from each relevant survey, but there are some differences in the way the respondents were tested by the interviewers. That is, respondents in as many as 18 countries were given one minute for recalling, but respondents in the US were allowed two minutes.7 Furthermore, the interviewers read out the 10 words to be recalled only once in all surveys except for SAGE, where interviewers read out the words three times before the respondents recalled the words. Despite these differences in the way the data on immediate word recall were collected, the computed results showed a statistically significant age-related decline in all the countries within the 50-84 age group. In the face of rapid societal improvements over time, particularly during the 20th century, cognitive gender differences continue to be a source of scientific and political debate, and the magnitude, pattern, and explanation of these differences remain important research topics. By using data from SHARE, Weber et al.
(2014) investigated gender differences in cognitive performance in the middle-aged and older populations across 13 European countries. They found that the magnitude of the differences varied systematically across cognitive tasks, birth cohorts, and geographical regions. In addition, both the living conditions and educational opportunities the individuals were exposed to during their formative years were related to increased gender differences favoring women in episodic memory (immediate word recall scores), decreased gender differences in the case of numeracy (arithmetic computation), and the elimination of differences in verbal fluency (animal naming). It is also interesting to note that their analysis of immediate word recall scores shows that although women in Northern Europe perform at a higher level than men across all birth cohorts, the pattern is different in Central and Southern Europe. In Central Europe, the female advantage is found only for birth cohorts born in 1932 or later, but not in earlier cohorts. In Southern Europe, there is even less of a female advantage, which gradually switches to a male advantage in earlier cohorts. Weir, Lay, and Langa (2014) also examined gender inequality in cognition, by analyzing data from the PRC's CHARLS and India's LASI pilot survey, as well as from SAGE, using individual data derived from cognitive tests such as immediate word recall, orientation, serial-7s, and listing the names of animals. In both countries and in virtually all the cognitive tasks, men performed considerably better than women. In addition, the study found that despite some notable differences in survey samples and measures, a strong general association of cognition in older ages with education emerges as a potential explanation for the gender gaps and cohort differences. They also found that the female disadvantage in cognition is greater in India than in the PRC, both before and after controlling for education. It is generally considered that being married is associated with a healthier lifestyle and greater daily social interaction (Fuller 2010). These behaviors may improve cognitive reserve and reduce dementia (Kuiper et al. 2015). In this context, as briefly mentioned in footnote 4, the incidence of Alzheimer's disease (one of the subtypes of dementia) is closely connected with the ability to recall words. 8 For this reason, it is highly conceivable that being married is positively related to the ability to perform short-term memory tasks such as immediate word recall. More importantly, a recent study carried out by Sommerlad et al. (2018), which is based on a systematic review and meta-analysis of 15 studies on the association between marital status and the risk of developing dementia, shows that being married is associated with a significantly smaller risk of dementia compared to lifelong single people. 9 Hence, changing one's marital status may affect cognitive abilities throughout one's life. 10 It is well known that old age tends to be related to a host of health risks such as cardiac infarction and cerebral hemorrhage (Slomski 2014). Furthermore, it is increasingly recognized that cognitive functioning tends to be a good predictor of future morbidity and mortality (Negash et al. 2011). Therefore, individuals with higher cognitive abilities are more likely to be healthier and live longer than those with low cognitive abilities. 
Cognitive abilities predict individual productivity better than any other observable individual characteristic, and they are increasingly relevant for labor market performance (Skirbekk, Loichinger, and Weber 2012). Moreover, this finding is applicable to many countries, both developed and developing, and in different settings, both urban and rural (Behrman, Ross, and Sabot 2008). Over the past few decades, the number of seniors in labor markets has been increasing at an accelerating pace. Because certain cognitive abilities decline substantially at late adult ages, most studies previously conducted on older workers have focused on those aged 50 and over (Anderson and Craik 2000). A substantial fraction of these seniors can remain in the labor market for a long time, but how long they stay depends on how long they can retain high cognitive performance.

8 Among dementia subtypes, Alzheimer's disease occupies the largest share in most countries in the world. In Japan, for example, approximately 70% of persons with dementia fall under the category of Alzheimer's disease.

9 The following three Asian economies are included in this meta-analysis: Japan; Taipei,China; and the Republic of Korea.

10 There is no significant difference in the risk of dementia among those currently married, divorced, or widowed.

Staying in the labor market could even improve cognitive performance. Using six waves of the HRS (1998-2008), Bonsang, Adam, and Perelman (2012) demonstrated that retirement negatively influences cognitive functioning for older Americans. This finding suggests that reforms aimed at promoting labor force participation at an older age may not only ensure the sustainability of social security systems but also create positive health externalities for older individuals. However, a study by de Grip et al. (2015) based on a Dutch survey dataset found the opposite result. Using data from the Maastricht Aging Study (MAAS), they examined the relation between retirement and cognitive development in the Netherlands and showed that retirees experienced a lower decline in cognitive flexibility than those who remained employed.11 Primarily due to the growing availability of representative surveys on the cognitive functioning of elderly persons in different countries and regions, an increasing number of empirical studies on the determinants of cognitive performance among the elderly have been carried out in recent years. In addition, almost all of these surveys have used highly comparable questionnaires, thus making inter-country comparisons feasible. One salient example is the study carried out by Maharani and Tampubolon (2016). Using data from the 2006 round of the English Longitudinal Study of Ageing (ELSA) and the 2007 round of the Indonesian Family Life Survey Wave 4, the authors examined the associations between central obesity, as measured by waist circumference, and the cognition level in adults aged 50 and over in England and Indonesia. Conducting regression analysis after controlling for selected demographic, socioeconomic, and biomedical variables, they found that centrally obese respondents had lower cognition levels than non-centrally obese respondents in England, while the opposite was true for Indonesia. Similarly, using data gathered in rural Central Java, LaFave and Thomas (2017) examined the relationship between the respondents' height and cognitive ability. By and large, taller workers earned more.
In lower-income settings, an adult's height is normally a marker of strength, which is rewarded in the labor market. Adult height is also a proxy for cognitive performance or other dimensions of human capital such as school quality, a proxy for health status, and a proxy for family background and genetic characteristics. Taking these observations into account, the authors conducted a regression analysis and showed that the respondents' cognitive abilities were significantly related to their height. Drawing on data derived from SHARE, Doblhammer, van den Berg, and Fritze (2013) examined cognitive functioning at age 60 and over. In their study, a total of 17,070 persons in 10 SHARE member countries were included in the analysis of several domains of cognitive functioning, which were linked to macroeconomic conditions during their birth year.12 One of the main findings of this study was that economic conditions at birth significantly influenced cognitive functioning in late life in various domains. Another finding was that economic recessions adversely affected numeracy, verbal fluency, and recall abilities, as well as scores on omnibus cognitive indicators. Furthermore, Bordone, Scherbov, and Steiber (2015) investigated if and why individuals aged 50 and over who were born into more recent cohorts performed better in terms of cognition than their counterparts of the same age born into earlier cohorts, a phenomenon called the "Flynn effect." They used data from two waves of the ELSA and German Socio-Economic Panel (SOEP) surveys and showed that the cognitive test scores of participants aged 50 and over in the later wave were higher than those of participants aged 50 and over in the earlier wave. In addition to identifying the Flynn effect based on the two cross-sectional waves, they pointed out that they used two waves because a repeated cross-sectional design overcomes the potential bias of retest effects. They also showed that although compositional changes of the older population in terms of education partly explain the Flynn effect, the increasing use of modern technology (i.e., computers and mobile phones) in the first decade of the 2000s also accounts for it. IV. Description of Data Sources Used In the rest of the paper, we aim to shed light on the age-specific pattern of cognitive abilities among older adults in Japan and four other selected Asian countries, and then offer a statistical analysis of the demographic, biomedical, and socioeconomic factors associated with cognitive functioning in each country. To facilitate these quantitative analyses, we employ the following survey datasets: JSTAR for Japan, CHARLS for the PRC, the LASI pilot survey for India, HART for Thailand, and MARS for Malaysia. A. Japanese Study of Aging and Retirement JSTAR is a longitudinal, interdisciplinary survey that collects internationally comparable data on middle-aged and older adults. The JSTAR project commenced in 2007, and the survey has been implemented at 2-year intervals. Because JSTAR is a sister survey compatible with the HRS, a considerable proportion of the content included in the JSTAR questionnaire is comparable to the content of the other four Asian surveys, which were also modeled after the HRS. JSTAR's design and sample methodology are described elsewhere (Ichimura, Hashimoto, and Shimizutani 2009). The baseline sample consists of male and female respondents aged 50-75 from 10 Japanese municipalities.13
13 The respondents were randomly chosen from household registries in each of the 10 cities, towns, or villages.
The sample size and the average response rate at the baseline were approximately 8,000 and 60%, respectively. JSTAR collects a wide range of variables, including economic, social, family, and health conditions of the sampled respondents. As for cognition-related variables, JSTAR gathers data on cognitive tasks such as short-term memory (both immediate and delayed word recall) and serial-7s. Caution should be exercised in interpreting our results because we use data only from the first round of JSTAR for each of the following three groups: the five municipalities surveyed in 2007 (Takikawa, Sendai, Adachi, Kanazawa, and Shirakawa), the two municipalities added in 2009 (Naha and Tosu), and the three that joined the survey in 2011 (Chofu, Tondabayashi, and Hiroshima). This data treatment was chosen to avoid problems that arise from nonrandom dropout and retest-practice effects associated with cognitive tests in longitudinal surveys, including JSTAR (Thorvaldsson et al. 2006; Skirbekk, Bordone, and Weber 2014). As is the case with most internationally comparable surveys such as SHARE, the JSTAR respondents listened to 10 words read out by the interviewers and were given one minute each to recall them, both in the immediate and delayed word recall tasks.

B. China Health and Retirement Longitudinal Study

CHARLS is a nationally representative longitudinal survey of persons 45 years of age or older and their spouses, and includes assessments of the social, economic, and health circumstances of community residents in the PRC. CHARLS examines health and economic adjustments to the rapidly aging population of the PRC. The national baseline sample size is 10,287 households and 17,708 individuals, covering 150 counties in 28 provinces. The first national baseline wave was fielded from June 2011 to March 2012, followed by wave 2 in 2013 and wave 3 in 2015. Core CHARLS questionnaires include numerous sections dedicated to demographics, family structure and changes, health status and functioning, general health now and before the age of 16, physician-diagnosed chronic illnesses, lifestyle and health-related behaviors (smoking, drinking, and physical activities), subjective expectation of mortality, activities of daily living (ADLs), instrumental activities of daily living (IADLs), helpers, cognition testing (short-term memory task: two minutes to recall 10 words), depression (Center for Epidemiological Studies Depression Scale or CES-D), health care and insurance, work, retirement and pension, income and consumption, and assets (individual and household). The interviewers carry equipment for, and conduct, measurements of health functioning and performance in respondents' households. These include the anthropometric measurements of height, weight, waist circumference, lower right leg length and arm length, lung capacity, grip strength, speed in a repeated chair stand test, blood pressure, walking speed, and balance tests.

C. Longitudinal Ageing Study in India

In 2010, a LASI pilot survey was undertaken in four Indian states (Karnataka, Kerala, Punjab, and Rajasthan) on a targeted sample of 1,600 individuals aged 45 and older and their spouses. To capture regional variation, two northern states (Punjab and Rajasthan) and two southern states (Karnataka and Kerala) were included in the survey.
Punjab is an example of an economically developed state, while Rajasthan is relatively poor, with very low female literacy, high fertility, and persisting gender disparities. Kerala, which is known for its relatively efficient health-care system, has undergone rapid social development and is included as a potential harbinger of how other Indian states might evolve. The survey questionnaire consisted of sections such as the household roster, housing environment, household consumption, individual income of all household members, household real estate, household financial and non-financial assets, and household debts. In addition, the survey gathered various information concerning family and social networks, social activities, psychosocial measures, life satisfaction, health conditions, and health-care utilization. In the section on mental health, the following cognitive task scores were collected: time orientation, short-term memory (two minutes to recall 10 words), verbal fluency (animal naming), numeric ability (counting backwards from 20), and computation (serial-7s).

D. Health, Aging, and Retirement in Thailand

The primary objective of the HART project is to create a national longitudinal and household panel dataset on aging in Thailand. 14 HART is a biannual household panel survey designed to provide panel data on the multidisciplinary dimensions of aging in older Thai adults, including (i) demographic characteristics, (ii) family and transfers, (iii) health and cognition, (iv) employment and retirement, (v) income, (vi) assets and debts, and (vii) life expectations and life satisfaction. A total of 5,600 households from five regions and Bangkok and its vicinity were sampled to represent national households. More concretely, 13 provinces were selected to form a household panel in the baseline survey. In each household, one member aged 45 and over was selected as the respondent. 15 The data collected from the national longitudinal survey in 2015 (wave 1) and 2016 (wave 2) are maintained in the data archive at the Intelligence and Information Center of the National Institute of Development Administration, Bangkok. The cognitive test consisted of three tasks: (i) word recall (immediate and delayed word recall tasks: two minutes to recall 10 words), (ii) numeracy (serial-7s), and (iii) data memory. Because cognitive test scores are available only in wave 2, we draw upon the individual data gleaned in wave 2 for our statistical analysis on the cognitive performance of older adults in Thailand.

E. Malaysia Ageing and Retirement Survey

MARS is a longitudinal study launched in 2018 which aims to produce nationally representative data on topics related to aging. MARS was motivated by the country's aging population and the importance of having such data to formulate and implement relevant policies. The baseline sample consists of households from all states in Malaysia, which were randomly selected based on Malaysia's 2010 Population and Housing Census. The Department of Statistics Malaysia (DOSM) selected the sample using a multistage sampling procedure. For each selected household, any member aged 40 and above who lived in the household most of the time was eligible to be selected as a respondent. Should there be more than one eligible member, a maximum of the three oldest eligible members would be selected. The sample size of the first wave was 5,613 respondents with a response rate of 84%.
These respondents will be interviewed every 2 years to measure changes in their health and economic and social circumstances. MARS collects comprehensive information on various aspects of life and personal experiences covering six sections: (i) respondent background; (ii) family information and support; (iii) health and health-care utilization; (iv) work and employment; (v) income and consumption; and (vi) savings and assets. The cognitive abilities of respondents are measured in the health section, where they are required to perform several tasks such as word recall (both immediate and delayed: two minutes to recall 10 words), serial-7s, time orientation, and semantic fluency (animal naming). The word recall task was included in the questionnaires of all five Asian countries. We have carefully examined the inter-country comparability of the words asked in the immediate word recall tasks. Because all the countries developed their survey questionnaires through close contact with the US HRS team and its international network, the words chosen for the immediate word recall test are not only very basic but also similar. Respondents were assigned a list of words from multiple word lists. In India's LASI pilot survey, for example, the interviewer randomly assigned one of three lists, each consisting of 10 words, to a respondent. Interviewers for the PRC's CHARLS, Thailand's HART, and Malaysia's MARS randomly assigned one out of four lists, each consisting of 10 words, to a respondent. 16 To overcome language barriers in India, the questionnaire was translated into four regional languages: Hindi, Kannada, Malayalam, and Punjabi. In Malaysia, the questionnaire was prepared in the following four languages: English, Malay, Chinese/Mandarin, and Tamil. It is conceivable that these additional adjustments incorporated in the questionnaires to overcome language barriers reduce some of the potential biases likely to emerge in inter-country comparative analyses of the immediate word recall task.

V. Inter-Country Comparison of Immediate Word Recall Scores

By closely following the computational steps taken by Skirbekk, Loichinger, and Weber (2012), we compute the mean age-group-specific immediate recall scores for the five Asian countries by drawing on the microlevel data derived from JSTAR, CHARLS, the LASI pilot survey, HART, and MARS. In addition, we quantitatively show where the cognitive abilities of the five Asian countries stand relative to developed Western countries, whose scores were computed by Skirbekk, Loichinger, and Weber (2012). The study by Skirbekk, Loichinger, and Weber (2012) computed the immediate word recall scores for all the countries reported in this study, drawing on the microlevel survey data together with their sampling weights.
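To make the computation concrete, the following is a minimal Python sketch of the weighted age-group mean just described. The file and column names (survey.csv, age, recall, wt) are hypothetical placeholders, not the actual harmonized variable names; where no weights are available, wt can simply be set to 1 for all respondents.

```python
import pandas as pd

# Hypothetical harmonized data file with columns: age, recall (0-10 words),
# and sampling weight wt (set wt = 1 everywhere for unweighted results).
df = pd.read_csv("survey.csv")

# Five-year age groups from 50-54 up to 80-84.
bins = list(range(50, 90, 5))
df["age_group"] = pd.cut(df["age"], bins=bins, right=False,
                         labels=[f"{b}-{b + 4}" for b in bins[:-1]])

# Weighted mean immediate recall score per age group.
weighted_mean = (df.groupby("age_group")
                   .apply(lambda g: (g["recall"] * g["wt"]).sum() / g["wt"].sum()))
print(weighted_mean)
```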
To keep the Asian results compatible with those derived from the study by Skirbekk, Loichinger, and Weber (2012), we have attempted to use the RAND harmonized version of each country survey in Asia. One of the great advantages of using the RAND harmonized versions is that the sampling weights are computed for each data file. 19 At the time of writing this paper, however, the RAND harmonized versions with sampling weights were available only for JSTAR, CHARLS, and the LASI pilot survey. We have encountered another limitation with the harmonized JSTAR. As pointed out in footnote 13, the 10 survey sites did not join the harmonized JSTAR in the same year but in three different years. Moreover, to avoid problems arising from the nonrandom dropout and retest-practice effects associated with cognitive tests in longitudinal surveys, we have used only the immediate recall scores from the first round for each JSTAR survey site (cohort 1 residing in the five survey sites in 2007, cohort 2 living in the two sites in 2009, and cohort 3 in the three sites in 2011). For this reason, three different sets of computed sampling weights covering different numbers of survey sites exist. Technically, no single set of sampling weights can be computed for the entire sample combining the three different cohorts. Hence, in this study, we do not use the sampling weights available in the harmonized JSTAR. 20 Despite this limitation, we still use the harmonized JSTAR data file (without sampling weights) for computation since the JSTAR dataset was carefully cleaned by the country team who, in collaboration with RAND, conducted a consistency check. For this reason, we assume that various types of data entry errors and unreasonable outliers were expunged before the datasets became available for public use. With MARS and HART, we have not been able to obtain the sampling weights needed to retain the comparability of the computed results. In the case of MARS, the Social Wellbeing Research Centre of the University of Malaya is currently collaborating with the RAND Center to rearrange the survey data file to be in line with RAND's harmonized version, and their work is expected to be completed by the end of 2021. Furthermore, in their preliminary computations, they have found that there is only a very small difference between weighted and unweighted results, which seems to indicate that the use of sampling weights may not be critically important. Thailand's HART has not started to compute its sampling weights. Thus, in the rest of the paper, we will employ as the base for our computation the harmonized versions of the LASI pilot survey, CHARLS, and JSTAR. In addition, we will use the original MARS and HART to analyze Malaysia and Thailand, respectively. Figure 1 compares the mean age-group-specific immediate recall scores 21 for the five Asian countries, the US, and three European regions. 22 Clearly, the mean age-group-specific immediate recall scores continuously decline with age in virtually all the countries and regions. 23 For the sake of clear exposition, we have plotted the results in Figure 1 using a solid line for the countries computed from the harmonized data files, and a dotted line for the remaining countries computed without sampling weights.
17 We are grateful to Skirbekk and his associates for providing us with the data on immediate word recall scores used in their study published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) in 2012.
18 At the time of writing this paper, we did not have access to the individual data gathered in main wave 1 and wave 2 of LASI conducted during 2016-2020. We plan to update our findings for India when a new dataset becomes available.
19 Since 1989, the RAND Center for the Study of Aging has been producing harmonized versions of various national aging survey data files to facilitate international comparative research on aging.
20 We have compared the results for the immediate recall scores, calculated with and without sampling weights for each of the three cohorts.
The computed results are virtually the same for each cohort, which seems to indicate that our analytical results are fairly comparable whether or not we use sampling weights. The plotted results for the three cohorts are available from the authors upon request.
21 The scores are expressed in terms of the number of words recalled, ranging from 0 to 10.
22 The SHARE data were used for the following three European regions: Northern Europe (Denmark, Ireland, and Sweden), Continental Europe (Austria, Belgium, Czech Republic, France, Germany, the Netherlands, Poland, and Switzerland), and Southern Europe (Greece, Italy, and Spain). The three European groups of economies were set up by Skirbekk, Loichinger, and Weber (2012), who also added England to the Northern Europe group, using data collected by ELSA.
23 In the case of the Northern European countries, the immediate recall score increased slightly from the 50-54 age group to the 55-59 age group.
Figure 1 reveals a few interesting results. First, the US has the highest score for the 50-54 age group (6.1 words recalled out of 10), followed by the Northern European group (six words). Both the US and the Northern European group show a similar declining pattern with age in almost all age groups except for 50-54 and 80-84. Second, immediate recall age trajectories for Continental European countries and Japan are fairly comparable. 24 Furthermore, though not displayed in Figure 1, Japan's trajectory of change in the age-group-specific immediate recall scores lies between those of the Netherlands and France: Japan's scores are slightly lower than those for the Netherlands but are consistently higher than those for France by a considerable margin. Among the five Asian countries, Japan's age-group-specific immediate recall scores are the highest in all groups until the 70-74 age group. Note that India's score (4.5 words) for the 75-79 age group is higher than the corresponding value for the Continental European countries. This result needs to be interpreted with great caution. The total number of observations in India's LASI pilot survey used for calculating the age-group-specific immediate recall scores is 1,007, but the number of observations for the 75-79 age group is only 65. 25 For this reason, the reliability of India's score for those aged 75-79 is open to question. 26 The age-group-specific immediate recall scores for the Southern European group show a pattern of change similar to that for Malaysia, although Malaysia has slightly lower scores than Southern Europe in all age groups. Furthermore, Thailand exhibits a declining pattern in age-group-specific immediate recall scores and has the lowest scores among the five Asian countries in the 60-64 age group and older. Attention should be drawn to the pattern of change in the PRC's age-group-specific immediate recall scores. In the age groups 50-54 and 55-59, the PRC's scores are marginally lower than those for Thailand, but in the remaining age groups, the PRC has substantially higher scores than Thailand. Moreover, the PRC overtakes Malaysia at ages 75-79. In Figure 1, we have also drawn a horizontal dotted line at score 4 to facilitate an interesting discussion. Let us briefly turn our attention to Thailand and Continental Europe. In the case of Thailand, the average score for the 60-64 age group plunges below four words, a level reached only by those aged 80-84 in Continental Europe.
Although the age difference amounts to approximately 20 years, the cognitive performances of these two groups are at the same level (four words). This suggests a huge difference in cognitive functioning between Thailand and the countries in Continental Europe. Such inter-country differences in cognitive abilities are likely to constitute a crucial and decisive drawback in the future to the transfer of new digitalized technologies and innovative production methods from advanced countries with higher cognition levels to the countries with lower cognition. More importantly, in view of the slow process of cohort replacement, those countries whose seniors already have higher cognitive levels today are very likely to continue to be at an advantage for many decades to come. Thus, the legacy of low cognition among the older populations of today's developing countries will put them at a disadvantage for a very long time (Skirbekk, Loichinger, and Weber 2012; Weir, Lay, and Langa 2014).
25 The number of observations for India's age group 80-84 is only 32.
26 The cohorts that are presently 50 years and older in India have grown up during a period of widespread poverty and high mortality and, as a result, the population has been positively selected in terms of cognitive performance at a more advanced age (Skirbekk, Loichinger, and Weber 2012). We plan to substantiate the validity of this view once we gain access to data from waves 1 and 2 of LASI (2016-2020).

VI. Introducing Cognitive Performance into the Measurement of Population Aging

In this section, we relate the computed mean age-group-specific immediate recall score to the context of population aging. For this purpose, we draw upon a new indicator that focuses on cognition and demographic change: the cognition-adjusted dependency ratio (CADR), which was proposed by Skirbekk, Loichinger, and Weber (2012). The formula for CADR is expressed as follows:

$$\mathrm{CADR} = \frac{P\left(\mathrm{age}_x \geq 50,\; m_x < 4\right)}{P\left(\mathrm{age}_x \geq 50,\; m_x \geq 4\right)}$$

where $m_x$ represents the memory score of person $x$, $\mathrm{age}_x$ represents the age of person $x$, while $P$ stands for the population; four recalled words is the threshold separating poor from good cognitive performance (the dotted line in Figure 1). To compute CADR, we have applied the mean age-group-specific immediate recall scores for Japan and other countries in Asia, as well as in the US and Europe, to the relevant age-composition data derived from the United Nations (UN) population projection prepared in 2019. 27 This formula implies that if a country has a low value of CADR, then it is effectively "younger," since it has a lower share of seniors with poor cognitive performance. The calculated results are displayed in Table 1. 28 Although Japan's CADR value (0.22) is higher than the corresponding values for the US and Northern Europe (Denmark, England, Ireland, and Sweden), Japan's dependency ratio adjusted by age-specific cognitive scores is fairly comparable to that (0.18) of Continental Europe (Austria, Belgium, Czech Republic, France, Germany, the Netherlands, Poland, and Switzerland), and is considerably lower than that (0.32) of Southern Europe (Greece, Italy, and Spain).
27 Because CADRs have already been computed based on the data derived from the 2009 UN population projection for the year 2005 for many European countries by Skirbekk, Loichinger, and Weber (2012), we have applied the 2005 age-composition data gleaned from the 2019 UN population projection to all the Asian countries except Japan to facilitate inter-country comparisons. In the case of Japan, because of the unique survey setup of JSTAR (2007-2011) described in Section V, we have applied the 2010 age composition.
28 As mentioned earlier, data for age groups 75-79 and 80-84 are not available in JSTAR. To calculate CADR, however, cognitive scores for these two old-age groups are required. For this purpose, we have conducted a linear extrapolation based on the data for those aged 50-74, and the linearity has been confirmed by comparing the extrapolated values with the observed values for other countries.
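To illustrate the computation, the following is a minimal Python sketch of the CADR calculation just described, including the linear extrapolation of footnote 28. The age groups, recall scores, population counts, and the impairment threshold of four words are illustrative assumptions, not the values used in this study.

```python
import numpy as np

# Hypothetical mean recall scores (words out of 10) observed for ages 50-74,
# indexed by the midpoints of the five-year age groups.
midpoints = np.array([52, 57, 62, 67, 72])
scores = np.array([5.4, 5.1, 4.8, 4.4, 4.1])

# Footnote 28: linearly extrapolate the missing 75-79 and 80-84 scores.
slope, intercept = np.polyfit(midpoints, scores, 1)
all_scores = np.append(scores, slope * np.array([77, 82]) + intercept)

# Hypothetical population (thousands) in each five-year age group, 50-84,
# e.g., taken from a UN population projection.
population = np.array([900, 850, 800, 700, 600, 450, 300])

# CADR: seniors whose group score falls below the four-word threshold,
# relative to seniors at or above it.
impaired = population[all_scores < 4.0].sum()
unimpaired = population[all_scores >= 4.0].sum()
print(f"CADR = {impaired / unimpaired:.2f}")
```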
More importantly, these comparative results based on the CADRs are astonishingly different from those shown in Table 2, which reports the conventional age-composition indicators such as old-age dependency ratios and age dependency ratios for various countries, both developed and developing. Among the countries listed in Table 2, Japan's population is by far the oldest, but based on the CADRs listed in Table 1, Japan's CADR is fairly close to the medium level observed among the European countries. This finding seems to justify the UN's recent efforts to raise awareness regarding the urgent need for remeasuring population aging in both developed and developing nations with a view to formulating effective policies for coping with aging. 29 Furthermore, in Table 1, Japan's CADR (0.22) is the highest among the five Asian countries, followed by Thailand (0.21) and the PRC (0.20). Because the PRC and Thailand have recently passed the first demographic dividend stage, 30 as illustrated in Figure 2, their aging process will be accelerating in the future, so their CADR values will also be swiftly rising in the years ahead. In contrast, both Malaysia and India have a considerably younger age composition than the other three Asian countries, as reported in Table 2. Moreover, as depicted in Figure 2, Malaysia and India are still enjoying the benefits of the first demographic dividend. Depending on their future fertility trends, their CADR values may vary greatly in the future. For instance, assuming the UN's low-fertility-variant population projection, Malaysia's CADR will be higher than Japan's current level by the mid-2060s.

VII. Factors Associated with Cognitive Functioning among Older Adults in the Five Asian Countries

In this section, we attempt to identify the factors associated with immediate recall scores among the adults aged 50-79 who are included in recent aging surveys in the five Asian countries by running a linear regression. Before going any further, caution should be exercised with regard to our empirical analysis. In our regressions, the dependent variable representing immediate recall scores and a few explanatory variables such as education and work status have a problem of causal ordering. However, we are not able to resolve this endogeneity issue due to the absence of powerful instrumental variables in the datasets available to us. 31 Thus, the regression results presented in this section primarily indicate associations between the dependent variable and the explanatory variables that cannot be interpreted in terms of causal effects. Nevertheless, our statistical analysis can be used to see whether the relationships between the immediate recall scores among the respondents and their individual attributes that have been discovered in various Western countries can also be confirmed in the context of the five selected Asian countries. Let us first look at the computational results derived from the harmonized JSTAR (without sampling weights) in relative detail. As shown at the bottom of Table 3, the total number of observations is 4,873.
The dependent variable is the number of words recalled by the respondent immediately after 10 words were read out by the interviewer. Except for the respondents' height, all other explanatory variables are dummy variables, with the dagger notation (†) representing the reference group. In this regression, we introduced the following 10 explanatory variables: age groups (50-54, 55-59, 60-64†, 65-69, and 70-74), sex (man, woman†), marital status (currently married†, widowed, divorced/separated, and single), work status (working, not working†), education (junior high school†, senior high school, junior college, and university or higher), self-rated health status (excellent, very good, good, fair†, and poor), CES-D (≥16, <16†), IADLs, height (centimeters), and survey cohorts (cohort 1† consisting of those residing in Takikawa, Sendai, Adachi, Kanazawa, and Shirakawa in 2007, cohort 2 comprising those living in Naha and Tosu in 2009, and cohort 3 consisting of those residing in Tondabayashi, Hiroshima, and Chofu in 2011).
31 In our regression model, the issue of causal ordering between the dependent variable, cognitive performance, and some explanatory variables, such as education and work status, needs to be properly addressed. In the past, numerous studies have been undertaken which shed light on the relationships between cognitive functioning (measured in terms of immediately recalled words) and a host of other variables (demographic, socioeconomic, cultural, psychosocial, biomedical, etc.). However, most of these studies have not addressed the issue of potential endogeneity bias in their estimations, primarily because of the unavailability of appropriate instrumental variables. The issue of causal ordering has been solved successfully only in a very limited number of studies, including a study by Atalay, Barret, and Staneva (2019) and another by Schneeweis, Skirbekk, and Winter-Ebmer (2014). These studies successfully addressed the issue of causal ordering by drawing heavily on powerful instrumental variables created based on the variation caused by major policy reforms. Although we have, in the hope of addressing the issue of endogeneity, attempted to identify appropriate instrumental variables by going through various datasets available in the five Asian countries, our attempts have met with no success at the time of revising our paper. Thus, following many earlier studies on this research topic, we confine ourselves in this study to examining the association between individuals' cognitive performance and their demographic and socioeconomic backgrounds. The issue of endogeneity remains to be addressed in our future work.
Notes to Table 3: ***, **, and * indicate 1%, 5%, and 10% levels of statistical significance, respectively. Source: Authors' estimates based on data from the Japanese Study of Aging and Retirement of the Research Institute of Economy, Trade and Industry, Hitotsubashi University, Japan, and The University of Tokyo, Japan.
The respondents' age and education have been incorporated in this regression to capture the effect of two types of intelligence on cognitive functioning. Fluid intelligence refers to the ability to reason and think flexibly, while crystallized intelligence refers to the accumulation of knowledge, facts, and skills throughout life (Cattell 1978). The explanatory variable, age, is expected to capture the change in fluid intelligence, which peaks approximately at age 25.
Because the respondents included in the regression are 50 years or older, the estimated coefficients are expected to have negative signs. The other explanatory variable (educational attainment) is intended to capture the effect of education on crystallized intelligence, which is based on facts and rooted in experiences. As we age and accumulate new knowledge and understanding, crystallized intelligence becomes stronger. More importantly, because education can also improve learning techniques such as memorization skills, education helps improve performance in fluid intelligence even in the case of immediate word recall. Therefore, since fluid abilities are improved by crystallized intelligence to a substantial degree, we expect the estimated coefficient for education to have a positive sign. In addition, we can anticipate that the higher the level of education, the larger the estimated coefficient will be. As discussed in Section III, the magnitude, pattern, and explanation of cognitive gender differences remain important research topics. As demonstrated in a SHARE-based study undertaken by Weber et al. (2014), the magnitude of the gender differences in cognitive performance in middle-aged and older populations across 13 European countries varies systematically across cognitive tasks, birth cohorts, and geographical regions. Bonsang, Skirbekk, and Staudinger (2017) have also found that both living conditions and educational opportunities to which individuals are exposed during their formative years are related to increased gender differences, favoring women in immediate word recall scores. Whether these findings based on the European data are applicable to Japan and other Asian countries will be examined later in this study. As for the other explanatory variables, the health-related variables such as self-rated health status, CES-D, 32 and IADLs 33 are expected to be associated with cognitive performance. Moreover, a respondent without a spouse is likely to be left alone without anybody to communicate with, which may weaken his or her cognitive functioning. Similarly, whether the respondent holds a job is likely to affect his or her level of life satisfaction and career development, both of which may affect cognitive performance.
32 Scores on the CES-D range from 0 to 60, where higher scores suggest a greater presence of depression symptoms. A score of 16 or higher is interpreted as indicating a risk for depression.
33 In JSTAR, the respondents were asked 15 questions pertaining to IADLs, and the variable's score, which ranges from 0 to 15 (IADLs sum), represents the number of activities that the respondent has no difficulty performing, such as shopping, preparing meals, housekeeping, managing finances, taking responsibility for having medication in correct dosages at the right time, etc.
The respondent's height has been incorporated in the regression because adult height is closely related to childhood nutritional condition which, in turn, affects cognitive functioning and other dimensions of human capital, such as school ability (Weir, Lay, and Langa 2014; LaFave and Thomas 2017). We have also included in the regression a set of explanatory variables representing survey cohorts, which differ significantly in terms of the level of urbanization of the areas where the respondents live and their lifestyles.
It is quite conceivable that, because a considerable proportion of the respondents included in cohort 3 live in wealthy urban areas such as Chofu in Tokyo, this cohort is more likely to be exposed to modern technologies, such as the Internet and computers, than their counterparts in cohort 1. 34 It is plausible that those who often use such modern technologies, by doing so, stimulate their crystallized intelligence (Bordone, Scherbov, and Steiber 2015). For these reasons, we expect that modern technologies will be more significantly associated with the cognitive score in cohort 3 than in cohort 1. Table 3 shows the estimated results derived from the JSTAR dataset. Except for work status and height, all explanatory variables introduced in the regression are statistically significant, with the coefficients having expected signs. As expected, the cognitive abilities of Japanese older adults are negatively associated with age. It is important to observe that education is positively related to immediate recall scores: the higher the educational level, the better the cognitive performance. Another important finding is that the respondent's own health evaluation (self-rated health status) and physical limitations (IADLs) are also positively associated with the immediate recall score. Moreover, women show a considerably higher cognitive score than men, which is comparable to the pattern widely seen in the Northern and Central European regions. Those who are currently married have higher cognitive abilities than those who have never been married. In view of the rising prevalence of lifetime singlehood in Japan over the past few decades, this variable may play an increasingly important role in the future. Where respondents live also plays a role in cognitive performance: the coefficient for cohort 3, which includes a considerable number of respondents who live in relatively wealthy residential areas, is not only statistically significant but also positive, which agrees with our a priori expectation.
34 For example, Shirakawa Town, which is included in cohort 1, is predominantly rural.
Let us now compare these JSTAR-based regression results with those estimated based on CHARLS, the LASI pilot survey, HART, and MARS. The results based on the PRC's CHARLS in Table 4 show that the coefficients of all the explanatory variables, except for marital status and work status, are statistically significant with theoretically expected signs. Compared to JSTAR, the coefficients for age, sex, education, and self-rated health status are statistically significant for both datasets with the theoretically expected signs, while marital status is statistically significant only for Japan. However, unlike in Japan, both height (childhood nutritional conditions) and CES-D (representing the level of depression) yielded statistically significant results in the case of the People's Republic of China. Table 5 displays the regression results based on India's LASI pilot survey data. As mentioned, the number of observations in this dataset is relatively small, only 832 observations, which casts some doubt on the reliability of some of the estimated results. For instance, age, sex, work status, and CES-D are not statistically significant. However, education is a statistically significant predictor at the 1% significance level. It is also worth noting that Punjab, which is the most developed state among the four Indian states included in the pilot, exhibits a considerably higher cognitive performance.
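Before turning to the remaining countries, the following is a minimal, illustrative Python sketch of the type of linear regression reported in Tables 3-7, assuming a pandas DataFrame with hypothetical column names rather than the actual harmonized variable names; the reference categories mimic the daggered groups described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names for a harmonized survey extract.
df = pd.read_csv("harmonized_jstar.csv")

# OLS of immediate recall on dummy-coded attributes; C() creates dummies,
# Treatment() fixes the reference group (e.g., age group 60-64).
model = smf.ols(
    "immediate_recall ~ C(age_group, Treatment('60-64')) + C(sex)"
    " + C(marital_status, Treatment('married')) + C(work_status)"
    " + C(education) + C(self_rated_health) + C(cesd_risk)"
    " + iadl_sum + height_cm + C(cohort)",
    data=df,
).fit()
print(model.summary())
```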
Table 6 presents the regression results estimated from Thailand's HART dataset. Age, sex, marital status, education, and CES-D are significant predictors with theoretically expected signs. Due to the paucity of data, however, we could not incorporate the explanatory variables representing self-rated health status and IADLs. All provinces except for Surin have higher cognitive abilities than Bangkok (reference group). This result is rather unexpected, and we do not have a reasonably good explanation at hand. Table 7 shows the regression results based on Malaysia's MARS data. All categories of the explanatory variables are statistically significant. As expected, cognitive functioning decreases as age increases. In addition, the estimated coefficients for sex, marital status, education, depression signs, 35 IADLs, and work status are statistically significant. Work status, unlike in other Asian countries, has a positive coefficient. 36
35 The depression symptom score is constructed using 17 negative and positive statements related to a respondent's experienced psychological well-being. The response scale for positive statements is inversely converted. The total score was calculated as the aggregate for all 17 statements. The total scores thus range from 17 to 85, with the scores in the top 15th percentile (44 or higher) interpreted as indicating a higher risk of depression (≥44, <44†).
36 Although the Malaysian survey data indicate that many of those still working are in the agriculture sector, it is not clear why this contributes to increasing their cognitive functioning.
Another significant finding is the link between the respondent's health and cognitive ability, whereby poor self-rated health and higher signs of depression relate negatively to immediate recall scores. Nutrition, as represented by the respondent's height, is also seen to play an important role in cognitive functioning. We also observe that the coefficients for the more urbanized states, such as Kuala Lumpur, Pulau Pinang, and Perak, are positive and statistically significant. Better cognitive ability in these states may be attributed to greater exposure to and utilization of technology in the subjects' daily lives. Several points of interest emerge from the foregoing discussion of the regression results for the five Asian countries. First, in all five Asian countries, the cognitive abilities of older adults decline with age. Second, education is highly and positively associated with immediate recall scores. Third, health condition is positively related with cognitive performance, and height is positively linked to better cognitive abilities, which implies that nutritional condition in childhood plays an important role in developing cognitive functioning at a later stage. Fourth, those who are currently married have higher cognitive abilities than those who have never been married. In view of the recent gradual shift from universal marriage to lifetime singlehood in Japan and other Asian countries, policy makers and researchers should pay more attention to Asia's changing marriage patterns in the years to come, particularly from the standpoint of cognitive performance among older adults. Fifth, women show considerably higher cognitive scores than men. Nevertheless, to gain further insights into Asia's gender differences in cognition, we plotted male and female age-specific immediate recall scores for the five Asian countries in Figure 3.
Although our regression results have uniformly indicated that women have higher scores than men in these Asian countries, 37 Figure 3 reveals considerable differences across the five Asian countries: in the PRC and India, for example, women have distinctively lower cognitive scores than men. To account for the gap between our regression results and the patterns in Figure 3, we need to pay attention to gender differences in educational attainment in these Asian countries. Once we control for education, as we did in our regressions, women have an advantage over men in cognition, suggesting that Asia's gender difference in cognitive performance is primarily caused by gender gaps in education. Although it falls outside the scope of this study, we are planning to carry out a series of simulation exercises in a future study by using the regression results for the five Asian countries generated here. In the case of Japan, for instance, the statistical results indicate that the cognitive ability of Japanese elderly persons is likely to improve due to the following potential factors: (i) the level of education among those 50 years and over is expected to rise at a phenomenal rate, as shown in Figure 4; (ii) future generations of the elderly are likely to have an advantage over past generations because children's nutrition started to improve considerably in Japan in the late 1950s when the school lunch program was introduced nationwide; and (iii) the use of modern communication technologies among the elderly is likely to increase at a remarkable rate because the overwhelming majority of young cohorts have already been exposed to extensive use of computers and mobile phones, as illustrated in Figure 5. By conducting various simulation exercises of this nature, we will be able to project to what extent cognitive functioning among older adults in Japan will improve, and how high Japan's CADR will be in the years ahead.

VIII. Concluding Remarks

In recent years, the five Asian countries intensively analyzed in this paper have been facing increasingly difficult policy challenges induced by rapid population aging. Among these five countries, Japan's level of population aging has been the most pronounced over the past few decades. Japan has the highest proportion of those aged 65 and over, an indicator which has been used by demographers for more than a century. One of the main objectives of this study was to introduce, from an innovative angle, a new index for measuring the level of population aging and to shed a different light on policy-oriented research on this phenomenon. To compute this new index, the cognition-adjusted dependency ratio, we applied the mean age-group-specific immediate recall scores for Japan and four other Asian countries and compared the computed results with those derived from the US and various developed nations in Europe. Our computed results have shown that Japan's pattern and level of age-related decline in cognitive functioning are highly comparable to those of many other developed nations, particularly those in the group designated as Continental Europe in previous research. This finding seems to have a few important policy implications for the aging Japanese economy, particularly its labor market. The population census data show that the size of Japan's total labor force, after reaching a peak in 1995, has been shrinking continuously, while the overall labor force participation rate has been on a downward trend since 1970.
Despite these substantial changes, Japan's age-based employment practices, which comprise lifetime employment, a seniority wage system, and a mandatory retirement age, have remained virtually intact (Kato 2016). In particular, Japan's policies related to the mandatory retirement age, which require workers to leave the company at a relatively young age (typically 60), are considered extreme compared to the practices in other industrialized nations. 38 Due to the existence of such age-based employment practices, many Japanese businesses that face fierce competition from overseas rivals have been confronted in recent years with a shortage of highly qualified workers with specialized skills acquired from career experiences. Our research finding is not yet widely known in Japan's labor market, but once the market recognizes that older Japanese have reasonably good cognitive performance, the finding could provide a strong incentive for many employers to modify or even abandon the long-running age-based employment practices. This would allow a sizable number of older Japanese with a reasonably good level of cognitive functioning to be recruited, which would likely generate a considerable amount of economic dynamism. It is also worth noting that among the selected Asian countries, Malaysia shows a pattern of change in age-specific cognitive functioning that is similar to the Southern European group, although Malaysia has somewhat lower scores than Southern Europe in all age groups. More importantly, these inter-country comparative results based on cognition-adjusted dependency ratios are astonishingly different from the results emerging from the conventional old-age dependency ratios. This conclusion seems to justify the UN's recent efforts to raise awareness regarding the urgent need for remeasuring population aging with a view to formulating more efficient and effective policies to cope with rapid population aging in both developed and developing nations.
A Hitchhiker's Guide to Functional Magnetic Resonance Imaging

Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers as they are capable of providing unique insights into brain functions. However, multiple technical considerations (ranging from specifics of paradigm design to imaging artifacts, complex protocol definition, and a multitude of processing and analysis methods, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique to overcome the most critical difficulties in its use, as well as to serve as a resource for the neuroimaging community.

INTRODUCTION

Introduced in the early nineties, functional Magnetic Resonance Imaging (fMRI) (Bandettini et al., 1992; Kwong et al., 1992; Ogawa et al., 1992; Bandettini, 2012a; Kwong, 2012) is a variant of conventional Magnetic Resonance Imaging (MRI) intended to measure brain activity and connectivity. It is a fundamentally non-invasive technique, and one which provides a means to assess brain function with unparalleled spatial specificity. Amongst its attributes are high spatial resolution, signal reliability, robustness, and reproducibility. Functional brain mapping is most commonly performed using the venous blood oxygenation level-dependent (BOLD) contrast technique (Ogawa and Lee, 1990; Ogawa et al., 1990a,b; Ogawa, 2012). The magnitude of the BOLD signal is an indirect measure of neuronal activity, and is a composite which reflects changes in regional cerebral blood flow, volume, and oxygenation. Functional MRI principles and basic concepts have been extensively described and reviewed in the literature (Le Bihan, 1996; Gore, 2003; Amaro and Barker, 2006; Norris, 2006; Logothetis, 2008; Buxton, 2009; Faro and Mohamed, 2010; Ulmer and Jansen, 2010; Poldrack et al., 2011; Bandettini, 2012b; Ugurbil and Ogawa, 2015). In summary, the basic concept underlying all fMRI measurement is that an increase in local neuronal activity stimulates both higher energy consumption and increased blood flow. The resultant indirect determination of brain function is typically represented as a statistical map which reflects regional activity. Information transfer between neurons is a metabolically demanding process, which requires an increased flow of oxygenated blood, rich in oxyhemoglobin. The local influx of oxygenated blood results in a net increase in the balance of oxygenated arterial blood to deoxygenated venous blood (associated with elevated deoxyhemoglobin). The increase in the oxy-/deoxy-hemoglobin ratio leads to an increase in the MRI signal compared to that of the surrounding tissue. It is important to note that as local neuronal activity increases, there is an intrinsic delay before regional vasodilation occurs and flow increases. This mechanism, which is a function of the properties of the local vascular network, is referred to as the hemodynamic response function (HRF) and has a time course of several seconds after the increase in activity.
The BOLD signal can be characterized by the shape of this HRF, which reflects its vascular origin. Typically, fMRI software packages model the HRF with a set of gamma functions, commonly designated the canonical HRF, which is characterized by a gradual rise, peaking ∼5-6 s after the stimulus, followed by a return to the baseline (about 12 s after the stimulus) and a small undershoot before stabilizing again 25-30 s after (Figure 1A) (Miezin et al., 2000; Buxton et al., 2004; Handwerker et al., 2012). Occasionally, an initial dip is reported, but its origin and implications are still under debate (Hu and Yacoub, 2012). This also highlights that, despite the good fit of the canonical HRF for most situations, the true HRF is known to present some variability. Whenever a researcher suspects that the canonical HRF is not an adequate model, it is common practice to include its temporal and dispersion derivatives in the model in order to estimate the variability in latency and shape, respectively (Friston et al., 1998; Calhoun et al., 2004). It has become an established practice in fMRI studies to investigate the differential neuronal responses to various forms of stimuli and activity during task performance. Typical investigations have compared periods of brain activation during a task with periods of a matched baseline task or a "rest" condition (Bandettini et al., 1992; Blamire et al., 1992; Vallesi et al., 2015). However, stimulus-evoked responses are only the tip of the iceberg in brain activity. More recently, a new perspective in functional imaging has brought with it the recognition that spontaneous/intrinsic brain activity is a fundamental aspect of normal brain function. Technical advances in neuroimaging methods have contributed to this paradigm shift, and have led to the recognition that the brain is more accurately considered a network of functionally connected (co-varying) and constantly interacting regions, requiring a focus on understanding patterns of connectivity as well as localized activation (Biswal et al., 1995; Carlson et al., 2003; Fox et al., 2005; Fox and Raichle, 2007; Raichle, 2009; Smith et al., 2011). For this reason, resting state fMRI (rs-fMRI) analyses rely upon spontaneous coupled brain activity to reveal intrinsic signal fluctuations in the absence of external stimuli or demands of imposed tasks (Damoiseaux et al., 2006; Fox and Raichle, 2007; Schölvinck et al., 2010; van den Heuvel and Hulshoff Pol, 2010; Friston et al., 2014b). Complementary approaches, combining rs-fMRI with functional deactivation (shifting from periods of stimulation to those of rest), also have been described to study functional activity transitions (Greicius and Menon, 2004; Anticevic et al., 2012; Soares et al., 2016). With its popularity steadily increasing among clinicians and researchers, the technique of fMRI has demonstrated great utility in the study of the functioning brain, both in health and disease. It is important to recognize, however, that it has an intrinsically complex workflow (summarized in Figure 1) which assumes broad knowledge of task design, imaging artifacts, complex MRI acquisition techniques, a multitude of preprocessing and analysis methods (in several software packages, see Tables 1-4), statistical analyses, as well as interpretation of results.
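As a concrete illustration of the canonical HRF described at the start of this section, the following is a minimal Python sketch of a double-gamma HRF. The shape parameters (peak at 6 s, undershoot at 16 s, undershoot ratio 1/6) are the widely used SPM-style defaults and are assumed here purely for illustration; individual packages may use different defaults.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak_delay=6.0, undershoot_delay=16.0, ratio=1.0 / 6.0):
    """Difference of two gamma densities: a positive peak at ~5-6 s and a
    small, delayed undershoot, normalized to unit peak amplitude."""
    hrf = gamma.pdf(t, peak_delay) - ratio * gamma.pdf(t, undershoot_delay)
    return hrf / hrf.max()

t = np.arange(0.0, 30.0, 0.1)  # seconds after stimulus onset
h = canonical_hrf(t)
print(f"peak at ~{t[h.argmax()]:.1f} s")  # ~5 s with these parameters
```

Convolving a stimulus-onset time series with such a kernel yields the predicted BOLD regressor used in standard general linear model analyses.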
Several papers and books describing the main technical issues and pitfalls related to both intrinsic and evoked activity have been published (Jezzard and Song, 1996; Le Bihan, 1996; Norris, 2006; Haller and Bartsch, 2009; Cole et al., 2010; Margulies et al., 2010; Poldrack et al., 2011; Davis and Poldrack, 2013; Lee et al., 2013; Ugurbil and Ogawa, 2015). However, given the complex nature of the data processing, constant methodological advances, and the increasingly broad application of fMRI to both the clinical and research domains, we have sought to compile a practical "hitchhiker's guide," containing essential information and primary references. Such guides have proven to be important in assisting the optimization of data quality and the interpretation of results. We also have provided an analysis of the principal software tools available for each step in the workflow, highlighting the most suitable features of each. Through this process it is our goal to enable investigators/clinicians to design and implement practical workflows which will lead to robust and reproducible results. In the following sections, information about each specific fMRI workflow step, from the current technique applications to the final results interpretation, will be discussed in detail. We have started by presenting a list of common software tools used for fMRI pipelines (Table 1), including both applications for general and wide-ranging purposes (e.g., AFNI, BrainVoyager, FSL, or SPM) as well as for very specific tasks (e.g., Marsbar and NBS).

APPLICATION FIELDS

The use of the technique of fMRI has led to significant expansion of understanding in multiple areas of cognitive neuroscience (Cabeza, 2001; Raichle, 2001; Poldrack, 2008, 2012). It has, for example, been successfully used to study systems involved with sensory-motor functions (Biswal et al., 1995; Calvo-Merino et al., 2005), language (Woermann et al., 2003; Centeno et al., 2014), visuospatial orientation (Formisano et al., 2002; Rao and Singh, 2015), attention (Vuilleumier et al., 2001; Markett et al., 2014), memory (Machulda et al., 2003; Sidhu et al., 2015), affective processing (Kiehl et al., 2001; Shinkareva et al., 2014), working memory (Curtis and D'Esposito, 2003; Meyer et al., 2015), personality dimensions (Canli et al., 2001; Sampaio et al., 2014), decision-making (Bush et al., 2002; Soares et al., 2012), and executive function (Just et al., 2007; Di et al., 2014). Functional MRI has also been used as a tool in the study of topics as diverse as addiction behavior (Chase and Clark, 2010; Kober et al., 2016), neuromarketing (Ariely and Berns, 2010; Kuhn et al., 2016), and politics (Knutson et al., 2006), among others.

FIGURE 1 | Typical fMRI workflow. In order to perform the most appropriate fMRI study (either task-based or resting state), researchers/clinicians need to understand its main application fields, intrinsic hemodynamic characteristics (A) and how to best design the experiment [Resting State (B), Block (C), Event related (D), or Mixed (E) designs]. Identification of the most appropriate acquisition techniques (F) and the recognition of the primary artifacts involved (G) are essential. The acquired data then undergoes several quality control and preprocessing steps [acquisition quality control (H), format conversion (I), slice timing (J), motion correction (K), spatial transformations (L), spatial smoothing (M), and temporal filtering (N)].
The intended analysis methods should be implemented for task-based (O) or resting-state fMRI (P) and statistical inferences performed (Q). Analysis can be complemented with a variety of different methods for multimodal studies (R). Finally, results interpretation should be made with extreme caution.

EXPERIMENTAL DESIGN

The number of variables (such as the specific nature of the research question, availability of imaging instruments, demand of data handling, and cost) associated with each study makes it essential to optimize BOLD signal acquisition time and the statistical efficiency of the analysis. There is not one optimal design which will encompass all fMRI studies. However, optimizing certain parameters can significantly improve the study efficiency and the reliability of the final results. Some reviews and book chapters have already provided the basic fMRI experimental design concepts (Amaro and Barker, 2006; Friston et al., 2007; Filippi, 2009; Bennett and Miller, 2013; Maus and van Breukelen, 2013). The experimental designs used in fMRI are resting state and task-based.

Resting State

Characterization of the resting state is the most straightforward experimental design in fMRI. The subjects are not performing any explicit task (Figure 1B). During acquisitions performed under these circumstances, consistent and stable functional patterns, which are reproducible across individuals, sessions, scanners, and methods, can be identified and are known as Resting State Networks (RSNs) (Damoiseaux et al., 2006; Long et al., 2008; Choe et al., 2015; Jovicich et al., 2016). That said, the specific resting conditions and the duration of the acquisition both have an important effect on the final functional signals. The most traditional design consists of instructing the participants to keep their eyes closed, not to think about anything in particular, and not to fall asleep. Alternative approaches have included keeping the eyes open or keeping the eyes open while fixating upon an object in the visual field, such as a cross, during scanning. The most suitable approach depends on the research question and purpose. If reliability and consistency are of utmost importance, the eyes fixated condition should be preferred, except for the primary visual network, whose connectivity is more reliable with the eyes open but not fixated condition (Yan et al., 2009; Patriat et al., 2013; Zou et al., 2015). On the other hand, if the focus is on obtaining higher functional connectivity (FC) strength, eyes open, either fixated or not, should be used (Yan et al., 2009; Van Dijk et al., 2010). The chosen approach can also have a significant impact on the topological organization, global signal amplitude (Wong et al., 2016), and directionality (Zhang et al., 2015). Nevertheless, the different resting-state conditions present comparable results, and thus the choice of the condition should also take into account which is more comfortable/appropriate for the study population, keeping in mind that it should be consistent for all the study participants.
Differences in scan length also have a demonstrable impact: acquisition times of 5-7 min have been shown to yield a reasonable trade-off between scan time and robustness of RSN FC (Van Dijk et al., 2010;Whitlow et al., 2011), 5.5 min has been shown to be acceptable in young children (White et al., 2014), but both increased reliability and greater in-depth analysis are possible with scans of ∼13 min.

Task-Based

When employing task-based fMRI studies, the way in which the stimuli are presented as a function of time is of utmost importance. The typical experimental designs are termed block (Figure 1C), event-related (Figure 1D), and mixed block/event-related (Figure 1E). The simplest task design, the block design, consists of presenting consecutive stimuli as a series of epochs, or blocks, with stimuli from one condition being presented during each epoch, followed by an epoch of stimuli from another condition, or by rest/baseline epochs. Specific block duration depends on the type of stimulus, with 15-30 s the most commonly used range, although some researchers suggest an optimal length of 15 s (Maus and van Breukelen, 2013). The order of the conditions is also important, and these are recommended to be counter-balanced across subjects of the same study. Block design allows a straightforward approach, good statistical power, signal amplitude and robustness. However, because each block is of such long duration, the participant's rapid habituation to the task as well as the inability to accurately define response-time courses are intrinsic limitations of this design (Dale and Buckner, 1997;Amaro and Barker, 2006;Dosenbach et al., 2006). Event-related designs are intended to delineate the association between brain functions and discrete events (typically randomized and of short duration, between 0.5 and 8 s), separated by an inter-stimulus interval (ISI, normally ranging from 0.5 to 20 s). By incorporating great task flexibility and unpredictability for the participant, this design provides the means to detect transient variations in the local hemodynamic response. It presents, however, a more complex analysis process and a decreased signal-to-noise ratio (SNR), the combination of which leads to diminished detection power (Dale, 1999;Miezin et al., 2000;Huettel, 2012;Liu, 2012). Two types of event-related designs can be implemented, characterized by different ranges of ISI: slow event-related designs, where the individual stimuli are well-separated in time (usually by more than 15 s), which prevents the overlap of the HRFs of successive stimuli, and rapid event-related designs, where stimuli are closely spaced in time (with intervals shorter than the duration of the HRF of the previous stimulus), resulting in the overlap of their HRFs. The latter protocols allow higher stimulus frequencies, resulting in greater statistical power, as well as diminished participant anticipation and boredom (Amaro and Barker, 2006;Huettel, 2012). Additionally, the randomized or pseudo-randomized order of stimulus presentation is also of importance in minimizing habituation. For these rapid event-related designs, implementing variable ISIs (jittering) allows differential overlap of HRFs, reduces multicollinearity problems and may provide better characterization of each condition's response (Dale, 1999). Alternative methods, such as m-sequences (Buracas and Boynton, 2002;Liu, 2004) and genetic algorithms (Wager and Nichols, 2003;Maus et al., 2010), can also be used in event-related experimental designs, in order to reach flexible trade-offs between estimation efficiency and detection power.
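As an illustration of the jittering principle just described, the following minimal Python sketch draws trial onsets for a rapid event-related design with uniformly distributed ISIs. All parameter values here (trial count, stimulus duration, ISI range) are arbitrary illustrative assumptions; for real studies the dedicated optimization tools discussed next should be preferred.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: 40 trials, two conditions, 1 s stimuli,
# and ISIs jittered uniformly between 2 and 6 s.
n_trials, stim_dur = 40, 1.0
conditions = rng.permutation(np.repeat(["A", "B"], n_trials // 2))
isis = rng.uniform(2.0, 6.0, size=n_trials)      # the jittered ISIs

# Each onset is the cumulative sum of preceding stimulus durations and ISIs.
onsets = np.cumsum(np.concatenate([[0.0], stim_dur + isis[:-1]]))
for onset, cond in zip(onsets[:5], conditions[:5]):
    print(f"t = {onset:6.2f} s  condition {cond}")
```

A purely random schedule like this is rarely optimal; tools such as those listed below search over many candidate schedules for designs that balance estimation efficiency and detection power.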
Some tools which can facilitate the implementation of randomized designs are Optseq2 (https://surfer.nmr.mgh.harvard.edu/optseq/), RSFGen (http://homepage.usask.ca/ges125/fMRI/RSFgen.html) and the fMRI Simulator (http://www.mccauslandcenter.sc.edu/crnl/tools/fmristim). Combining stimuli in discrete blocks (mixed block/event-related design) provides information about both sustained and transient functional activations during task performance. While the technique offers the advantages of both block and event-related designs, it involves more assumptions, has a poorer HRF estimation and decreased statistical strength of the sustained signal, and requires more subjects in order to measure statistically significant and sustained effects (Visscher et al., 2003;Amaro and Barker, 2006;Petersen and Dubis, 2012). Independent of the experimental design, the specific way in which the study conditions are modeled (model specification) also plays an important role in the signal optimization process (Price et al., 1997;Friston, 2005;Amaro and Barker, 2006;Friston et al., 2007). The most basic comparison consists of subtracting two or more conditions (e.g., A − B), in which one is typically a control condition. Factorial designs expand this principle to two or more factors (e.g., different cognitive processes), each one with two or more levels. A simple example of such a design would be the visualization of two different words in two different colors, which would result in 4 conditions: the first word with the first color (A), the first word with the second color (B), the second word with the first color (C) and the second word with the second color (D). This design not only enables the exploration of the effect of the two main factors (words and colors), but also of their interactions, specifically how one factor affects the relation between the other factor and the response variables [e.g., (A − B) − (C − D)]; a short numerical sketch of these contrasts is given at the end of this subsection. If the researcher is interested in assessing whether the BOLD response to trials is modulated by a continuously varying parameter, a parametric design (e.g., A < A < A < A) would be more suitable. A typical example would be a study where the goal is to assess whether the BOLD response increases/decreases linearly with the difficulty of the task. Choosing appropriate baselines and controls is of paramount importance, since neural activity may vary unpredictably and overlap (or even exceed in amplitude) regions activated during the target task. A properly defined baseline should allow for maximum sensitivity in the detection of brain activity related to the study target (target isolation) while controlling for as many extraneous variables and unrelated confounds as possible (Stark and Squire, 2001;Peck et al., 2004;Diers et al., 2014). Generic recommendations include the use of multiple baseline conditions, scan times as long as possible (the more trials the better, with several shorter runs preferred over one long run), randomized conditions when possible, avoidance of comparisons between trials widely separated in time, and keeping participants engaged. Several software tools can be used to implement the stated principles and present the task to the participants in the scanner (Table 2). When designing a study involving both task-based and rs-fMRI, in order to avoid contamination of the rs-fMRI with residual activity from previous task performance, it is recommended that one perform the resting-state acquisition before the task-based one or, at the minimum, after a suitable delay (Stevens et al., 2010;Tung et al., 2013).
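The subtraction, main-effect, and interaction contrasts of the 2 × 2 word/color example above can be written as weight vectors over the four condition estimates. The short Python sketch below is purely illustrative; the beta values are invented numbers standing in for condition parameter estimates.

```python
import numpy as np

# Condition order as in the text: A, B, C, D
# (word1/color1, word1/color2, word2/color1, word2/color2).
contrasts = {
    "main effect of word":  np.array([ 1,  1, -1, -1]),  # (A + B) - (C + D)
    "main effect of color": np.array([ 1, -1,  1, -1]),  # (A + C) - (B + D)
    "word x color":         np.array([ 1, -1, -1,  1]),  # (A - B) - (C - D)
}

betas = np.array([2.0, 1.5, 1.0, 1.8])   # hypothetical parameter estimates
for name, c in contrasts.items():
    print(f"{name:22s} effect = {c @ betas:+.2f}")
```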
Power Analyses

The question of "how large is enough," i.e., how to determine an appropriate study sample size, is a matter of ongoing debate in the neuroimaging field. For example, in an attempt to establish the boundaries for an adequate sample size, sensitivity and reliability analyses were conducted, demonstrating that sample sizes of at least 27 subjects provide adequate reliability for fMRI investigations (Thirion et al., 2007). Additionally, in a controversial technical note (Friston, 2012), it was suggested that there is an optimal sample size, compared to which sample sizes could be either too small (studies with fewer than 16 subjects) or even, although less frequently, too large (studies with more than 32 subjects), under the arguments of reduced power or meaningless/trivial findings resulting from overpowered studies, respectively. The argument was that, on one hand, significant findings obtained in small samples (n = 16) indicate large effects and are therefore stronger evidence than the same level of significance obtained with larger sample sizes; on the other hand, the relevance of significant findings obtained with large samples should be judged by the magnitude of the observed effect sizes. However, criticisms have been outlined (e.g., Yarkoni, 2012), particularly focusing on the liberal assumptions (e.g., significance threshold) on which Friston's arguments were built. Furthermore, it was recently described that a substantial number of published studies are statistically under-powered (Button et al., 2013). In this context, it is important to highlight the use of power analyses as a means to obtain robust and meaningful findings in these studies. Statistical power refers to the probability of rejecting the null hypothesis given that the alternative hypothesis is true; power analyses allow the establishment of a sample size that will increase the confidence of detecting true effects (Ioannidis, 2008). Functional MRI studies are often characterized by low statistical power, primarily due to limited sample sizes and the large number of comparisons (Murphy and Garavan, 2004). Calculations of power are rarely performed in fMRI research, possibly due to the uncertainty associated with the unknown variance of the BOLD response and also due to the difficulty in predicting expected effects (Guo et al., 2012). Software tools have been developed in order to facilitate calculation of the statistical power, both for estimating the number of subjects to be included in the study and for the number of stimuli to be presented. In order to employ these tools, information about the mean activation, the variance, the Type I error rate, and the sample size must be provided (Mumford, 2012). The power calculation should use either the statistical images (t/F maps generated by simple study designs) from pilot studies (PowerMap software; Joyce and Hayasaka, 2012), the estimated parameters in specific regions-of-interest (fMRIPower tool; Mumford and Nichols, 2008) or the prevalence of active peaks (NeuroPower; Durnez et al., 2016).
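For a rough sense of what such a calculation involves, the following sketch uses the statsmodels package to solve for the number of subjects in a simple one-sample t-test on group-level contrast values. The effect size, alpha and power values are illustrative assumptions, and this back-of-the-envelope estimate ignores the voxel-wise multiple-comparisons problem that the dedicated tools above are designed to handle.

```python
from statsmodels.stats.power import TTestPower

# Minimal sketch: required sample size for a one-sample t-test on
# group-level contrast estimates, with assumed effect size and thresholds.
analysis = TTestPower()
n = analysis.solve_power(effect_size=0.6, alpha=0.001, power=0.8)
print(f"required subjects: {n:.1f}")   # roughly 45-50 for these settings
```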
DATA ACQUISITION TECHNIQUES AND ARTIFACTS

Performing effective fMRI studies requires a thorough understanding of specific MRI acquisition techniques and artifacts, and of how to deal with them (Figures 1F,G). When the activity of a population of neurons within a voxel (the volume element, i.e., the minimum spatial resolution unit in each image) changes, the associated hemodynamic response can be determined using T2*-weighted MRI acquisitions (details in Buxton, 2009;Hashemi et al., 2012). Detection of the BOLD signal is the most commonly used technique in fMRI, due primarily to its ease of implementation and inherent functional contrast. Alternative detection methods do exist and are based on the measurement of a combination of additional parameters, including changes in cerebral blood volume (CBV), cerebral blood flow (CBF), and the cerebral metabolic rate of oxygen (CMRO2) (Davis et al., 1998). The alternative methods are: calibrated BOLD, based on BOLD contrast but also taking into account physiological variation (e.g., hematocrit, oxygen extraction fraction, and blood volume) (Davis et al., 1998;Blockley et al., 2012); Arterial Spin Labelling (ASL), used to measure regional CBF by tracking intravascular water as an endogenous tracer (Williams et al., 1992;Buxton et al., 1998;Telischak et al., 2015); Vascular-Space-Occupancy (VASO), based on differences between blood and surrounding tissues and determined through dynamic measurement of local CBV (Lu et al., 2003;Lu and van Zijl, 2012); Venous Refocusing for Volume Estimation (VERVE), based on changes in venous cerebral blood volume (Stefanovic and Pike, 2005); Signal Enhancement by Extravascular Protons (SEEP), based on the determination of proton-density changes associated with cellular swelling (Stroman et al., 2003;Figley et al., 2010); and diffusion-weighted fMRI, which measures structural changes in the neural tissues related to cell swelling during activation (Le Bihan, 2012;Aso et al., 2013). Functional MRI data are generally collected over the entire brain through the acquisition of sequential volumes (time-points), each one composed of a set of slices. The typical sequence used for fMRI studies is echo planar imaging (EPI), which is attractive due both to its imaging speed and BOLD contrast sensitivity, but is also associated with inherent artifacts and diminished image quality (Stehling et al., 1991;Poustchi-Amin et al., 2001;Schmitt et al., 2012). EPI may be performed using gradient-echo, spin-echo, or combination techniques. When compared to spin-echo EPI, gradient-echo acquisitions have higher BOLD sensitivity, imaging speed and versatility, and have been used in the majority of fMRI studies. On the other hand, spin-echo sequences have been proposed as a viable alternative when the goal is to obtain increased functional localization in the capillary bed (especially at high fields) and when less superficial regions of interest (ROIs), such as the ventromedial frontal and anterior inferior temporal cortices, are the primary focus of the study (Norris, 2012;Boyacioglu et al., 2014;Halai et al., 2014;Chiacchiaretta and Ferretti, 2015).

Data Acquisition Techniques

In order to minimize artifacts, and to obtain the most reliable data, it is critically important to optimize the acquisition phase. There is no single "gold standard" fMRI protocol, due to the great variability in parameters such as the MRI hardware vendor and configuration, field strength, scanning time available, specific regions under study and subsequent analyses intended. For this reason, we here confine ourselves to a series of suggestions based upon the use of a standard single-shot gradient-echo EPI 3 T fMRI acquisition. When defining an fMRI acquisition protocol, a reasonable strategy is to start from a well-characterized "standard" protocol usually provided by the vendor, and then to modify it according to the specific requirements of the study to be undertaken.
A practical description of the parameters involved in a typical fMRI acquisition, and a guide to how they should be reported, is provided in Inglis' checklist (Inglis, 2015). While many characteristics of the individual MRI scanner and of the specific acquisition protocols have a strong impact on the fMRI results, magnetic field strength is amongst the most defining. The amplitude of signal usually associated with the BOLD contrast is very low (around 1% of baseline or less). With increased field strengths the sensitivity is increased, as are the spatial resolution and SNR (Gore, 2003;van der Zwaag et al., 2009;Wald, 2012;Skouras et al., 2014), but all at the cost of increased artifacts (Triantafyllou et al., 2005). The majority of scanners currently in use, both in diagnostic and research centers, are units having field strengths of 1.5-3 T, but some research groups are already utilizing 7 T fields, and it is expected that the availability and use of such scanners will increase (Duyn, 2012). Typically, fMRI data are acquired using a series of 2D axial slices to cover the whole brain (one volume) and then the process is repeated to collect a number of volumes over time (the time-series). Each volume can be acquired using either interleaved or sequential slice acquisitions. While interleaved acquisitions have less adjacent-slice interference, they can be more vulnerable to spin-history effects generated by head motion (Muresan et al., 2005). To reduce the influence of both these potential issues, most fMRI acquisitions utilize a gap between slices (around 10-25% of the total slice thickness). Slice acquisition also can be performed either in an ascending (foot-to-head) or descending order, with the former theoretically affected by excitation and saturation of in-flowing blood. Although no significant differences have been reported between the two directions, the most robust approach seems to favor the use of descending sequential acquisitions (Howseman et al., 1999). An important trade-off in fMRI acquisition is between temporal and spatial resolution. Since the BOLD signal changes as a function of time, optimizing the temporal resolution is critical. Typical fMRI acquisitions with full brain coverage have repetition times (TRs, the time it takes to acquire one volume) of 2-3 s. For task-based studies, shorter TRs are usually chosen for event-related designs than for block designs, due to the relative lack of experimental power and the greater importance of time-course information. Shorter TRs may lead to a significant reduction in SNR, while longer TRs are theoretically associated with higher sensitivity to motion (Filippi, 2009;Wald, 2012;Craddock et al., 2013). Due to the necessity of optimizing temporal measurements, spatial resolution is usually sacrificed. With high field strengths and/or if full brain coverage is not mandatory for the specific study, the TR can be made as low as 1 s, or even less. One way of increasing temporal resolution while still maintaining full brain coverage is to use a parallel imaging method, such as GRAPPA (Griswold et al., 2002), SENSE (Pruessmann et al., 1999), or multiplexed-EPI (Feinberg et al., 2010). GRAPPA and SENSE work by reducing the time required for acquiring a single slice, but increase the sensitivity to motion. Thus, extra care should be taken, especially with participants prone to moving during scanning. On the other hand, multiplexed-EPI works by simultaneously acquiring more than one slice at a time (Feinberg et al., 2002).
However, the simultaneous excitation of slices causes signal leakage from one slice to another, which increases with the number of slices acquired simultaneously (i.e., the acceleration factor) and also induces artifactual thermal-noise correlations, critical for functional connectivity studies (Setsompop et al., 2013). The combination of both techniques can also be employed, further reducing the acquisition time, with demonstrated increased sensitivity for detecting RSNs at moderate acceleration factors (Preibisch et al., 2015). Isotropic voxels (in-plane resolution and slice thickness with equal dimensions) are recommended, because the folded cortex has no dominant orientation. At 3 T fields, typical voxel sizes range between 2.8 and 3.5 mm³ (Wald, 2012;Craddock et al., 2013). Higher spatial resolution can be achieved at higher field strengths, but is associated with increased artifacts (Olman and Yacoub, 2011). A square Field of View (FOV) ranging between 192 and 224 mm, with a matrix size of 64 and a slice number of 30-36, is common at 3 T. The most critical parameter when optimizing an fMRI protocol with respect to timing is the interval between slice excitation and signal acquisition, known as echo time (TE). The choice of TE to maximize the BOLD contrast depends on the tissue characteristics and the field strength, and is ideally equal to the apparent tissue T2*. The TE for 3 T field strength is typically around 30 ms (ranging from 25 to 40 ms) (Gorno-Tempini et al., 2002;Craddock et al., 2013;Murphy et al., 2013). The appropriate flip angle also is of relevance when optimizing the BOLD signal. One recommended practice is to select a flip angle equal to the Ernst angle (Ernst and Anderson, 1966) for gray matter. More recently, however, it has been shown that the use of much lower flip angles is possible, as long as physiological noise is the dominant noise source in the fMRI time-series (Gonzalez-Castillo et al., 2011). For a field strength of 1.5 T and a TR of 3 s, the Ernst angle is ∼89°, resulting in the common choice of 90° for the flip angle. For 3 T and a TR of 2 s, the angle is closer to 77° (Ernst and Anderson, 1966); a short numerical check of these values is given at the end of this subsection. These specifications are even more complex when a multicenter study is planned, and a number of considerations need to be taken into account in order to maximize reproducibility (Stöcker et al., 2005;Friedman and Glover, 2006;Glover et al., 2012;Keator et al., 2016). Some important tips include: for studies involving both resting-state and task-based fMRI, it is recommended that the same acquisition protocol be used, or at least protocols as similar as possible, in order to most accurately integrate and compare results (Ganger et al., 2015;Pernet et al., 2016); when performing task-based studies, it is of utmost importance to precisely synchronize scan acquisition with stimulus presentation. Such synchronization can be achieved through the use of manual configurations (e.g., sending triggers between the scanner and the stimulus presentation software) or with integrated solutions such as the Lumina Controller (http://cedrus.com/lumina/controller/), SyncBox (http://www.nordicneurolab.com/products/SyncBox.html), SensaVue fMRI (http://www.invivocorp.com/solutions/neurological-solutions/sensavue/), or the nordic fMRI solution (http://www.nordicneurolab.com/products/fMRISolution.html).
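The flip-angle figures quoted above can be checked from the closed-form Ernst relation, cos(θ_E) = exp(−TR/T1). The minimal sketch below assumes approximate gray-matter T1 values of ∼1.0 s at 1.5 T and ∼1.33 s at 3 T; these are rough literature figures, and small differences in the assumed T1 shift the result by a couple of degrees.

```python
import math

def ernst_angle(tr, t1):
    """Ernst angle in degrees for repetition time tr and T1 (same units)."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

# Assumed gray-matter T1 values (approximate literature figures).
print(f"1.5 T, TR = 3 s: {ernst_angle(3.0, 1.0):.0f} deg")   # ~87-89 deg
print(f"3 T,   TR = 2 s: {ernst_angle(2.0, 1.33):.0f} deg")  # ~77 deg
```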
Artifacts

The primary goal of any fMRI acquisition is to obtain the highest possible SNR and contrast-to-noise ratio (CNR) (Welvaert and Rosseel, 2013) while minimizing the impact of artifacts. The artifacts in fMRI are usually related to the pulse sequence, gradient system hardware and acquisition strategy used, as well as to physiological noise. Three artifacts are characteristic of the traditional EPI pulse sequence: spatial distortions (Figure 1G1), signal dropouts (Figure 1G2), and ghosting (Figure 1G3). Geometric and intensity spatial distortions may result from static field inhomogeneity and appear locally either as stretched or compressed pixels along the phase-encoding axis, being worse at higher field strengths. A number of strategies have been suggested to correct the distortions, including the use of shimming coils (Reese et al., 1995;Balteau et al., 2010), field mapping (Zeng and Constable, 2002), point spread function mapping, or reversed phase gradients (Holland et al., 2010;In et al., 2015). Signal dropouts due to field inhomogeneities near air/tissue interfaces, particularly prevalent in the frontal and temporal lobes, also occur in EPI. The choice of an appropriate echo time (TE, described above; optimum BOLD contrast occurs when the TE matches the local T2* of the tissue of interest), a greater number of thinner (rather than a lower number of thicker) slices, as well as optimizing the slice tilt, the direction of the phase-encoding or the z-shim moment, may all help to reduce these dropouts (Weiskopf et al., 2006;Balteau et al., 2010). Ghosting artifacts, which occur only in the phase-encoding direction, are triggered because odd and even lines of k-space are acquired with opposite polarity. Techniques such as implementing a multi-echo reference scan, two-dimensional phase correction or applying dual-polarity generalized autocalibrating partially parallel acquisitions (GRAPPA) can reduce the magnitude of these effects (Schmithorst et al., 2001;Chen and Wyrwicz, 2004;Robinson et al., 2013;Hoge and Polimeni, 2015). Hardware-related artifacts such as scanner and head coil heterogeneities, spiking, chemical shifts, and radiofrequency (RF) interference all can significantly impact the fMRI image quality and compromise results (Bernstein et al., 2006;Poldrack et al., 2011). One approach for reducing the impact of these artifacts is to implement an Independent Component Analysis (ICA) or Robust Principal Component Analysis (RPCA) (Behzadi et al., 2007;Griffanti et al., 2014;Campbell-Washburn et al., 2016). Although hardware-related artifacts can, at least theoretically, be fixed, participant-related confounds will always be present. Participants' physiological confounds such as head motion (Power et al., 2012), cardiac and respiratory "noise," as well as vascular effects, all have a significant impact on the final fMRI results (Faro and Mohamed, 2010;Murphy et al., 2013). The most common and critical artifact in fMRI is head motion. Even though it is common to correct for subject motion during preprocessing (see the preprocessing section), the best approach is to prevent motion as much as possible in the first place, using comfortable padding and optimized head fixation (Edward et al., 2000;Heim et al., 2006), as well as to fully inform the subject in advance about the scanner noise and the confining environment. Performing multi-echo acquisitions can also help reduce motion artifacts (Kundu et al., 2013). Cardiac pulsation and the respiratory cycle can have an impact similar to that of head motion.
Due to the long repetition time of standard BOLD EPI acquisitions (2-3 s), these fluctuations are aliased into low-frequency signals which may be mistaken for neural activity-related BOLD oscillations, especially in rs-fMRI (Birn, 2012;Murphy et al., 2013;Cordes et al., 2014). A number of strategies have been used in an attempt to reduce these artifacts, including the use of band-stop filtering, dynamic retrospective filtering (Särkkä et al., 2012), image-based methods (RETROICOR; Glover et al., 2000), corrections based on canonical correlation analysis (Churchill et al., 2012c) and the use of externally recorded cardiac and respiratory waveforms as regressors (Falahpour et al., 2013). A thorough understanding of the link between neural activity and the hemodynamic changes that give rise to the BOLD signal (neurovascular coupling), as well as of the variation in its response, should help to reduce inter-subject variability and increase the homogeneity and statistical power of the studies (Handwerker et al., 2012;Liu, 2013;Phillips et al., 2016). One key feature is that as the signal increases (field strength, array coils), the physiological noise increases proportionally (Triantafyllou et al., 2005, 2006;Hutton et al., 2011). A great variety of software tools have been developed to minimize the impact of artifacts, for example the Artifact Detection Tool (ART, http://www.nitrc.org/projects/artifact_detect/), the Physiological Artifact Removal Tool (PART, http://www.mccauslandcenter.sc.edu/CRNL/tools/part), the PhysIO Toolbox (http://www.translationalneuromodeling.org/tnu-checkphysretroicor-toolbox/), the ArtRepair Software (http://cibsr.stanford.edu/tools/human-brain-project/artrepair-software.html), FMRIB's ICA-based X-noiseifier (FIX, http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FIX), and the RobustWLS Toolbox (http://www.icn.ucl.ac.uk/motorcontrol/imaging/robustWLS.html) (Diedrichsen and Shadmehr, 2005). While a significant problem in task-based fMRI, artifact identification and removal is even more complex with rs-fMRI. In the absence of an a priori hypothesis, it may be hard to distinguish the signal related to neural activity from the sources of noise, particularly when the artifacts are spatially or temporally correlated and may share a degree of spatial or spectral overlap with the RSNs. Whenever artifacts cannot be corrected, it may be necessary to adopt alternative strategies such as the exclusion of the affected subject, volume or slice, or to limit the analysis to regions without significant artifacts.

QUALITY CONTROL AND PREPROCESSING

Quality control and preprocessing procedures are key steps in the detection and correction of artifacts in fMRI, thus providing consistency and reliability to maps of functional activation. A variety of automated preprocessing pipelines have been described and implemented [e.g., DPABI, LONI (Rex et al., 2003), Nipype (Gorgolewski et al., 2011), BrainCAT and C-PAC], but there is a lack of consensus about which workflow is the most effective. Several studies and reviews have explored the effects of preprocessing techniques on both task-based (Strother, 2006;Churchill et al., 2012a,b) and rs-fMRI results (Aurich et al., 2015;Magalhães et al., 2015). Herein we attempt to provide a practical guide to the most commonly used methodologies.

Acquisition Quality Control and Data Conversion

The first quality control point comes during the acquisition phase.
It is important to loop through the images using the real-time display of the scanner, while it is still possible to repeat the acquisition and not lose data. Assessing the images using two different contrast settings, standard anatomical (to verify the appearance of the brain, gross head motion and spiking) and background noise contrast (to verify hardware issues and small but important motion), is a wise strategy (Figure 1H). Following data acquisition, it is important to verify that all images have been imported and sorted correctly, and to ensure that the same acquisition protocol has been used for all study participants. At this point, inspection of the scans to screen for obvious brain lesions (except for those specifically being studied) as well as visible artifacts can be performed using general-purpose viewers, such as Osirix, MRIcro, RadiAnt, or ImageJ (Escott and Rubinstein, 2003;Rosset et al., 2004). Due to the absence of a standard file format, it is necessary to start by converting the original scanner data from the DICOM format (Mildenberger et al., 2002;Liao et al., 2008;Mustra et al., 2008) to the most common file format used by fMRI preprocessing tools, the NIfTI format (Neuroimaging Informatics Technology Initiative; it allows either separate *.img and *.hdr files or both combined in a single *.nii file) (Poldrack et al., 2011), which is an extension of the Analyze 7.5 format (a set of two files: *.img containing the binary image data and *.hdr with the metadata) (Figure 1I). In the NIfTI format most of the DICOM header information is discarded (e.g., patient information) and only basic acquisition information (e.g., TR, resolution, FOV, image orientation) is kept. Most of the fMRI processing packages include file-converting tools, and several dedicated converters also are available [e.g., dcm2nii (https://www.nitrc.org/plugins/mwiki/index.php/dcm2nii:MainPage), MRIConvert (https://lcni.uoregon.edu/downloads/mriconvert/mriconvert-and-mcverter) and NiBabel (http://nipy.org/nibabel/index.html)].

Initial Stabilization, Slice-Timing, and Motion Correction

Upon beginning an acquisition, the scanner typically takes some seconds to completely stabilize its gradients, and the tissue being imaged requires some time to reach a steady state of excitation. To remove the influence of these factors, it is common to discard the initial volumes (usually around the first 10 s) of the fMRI acquisition, whether for task-based or rs-fMRI. Because fMRI volumes are acquired as 2D images, one slice at a time, and even though short and fixed TRs are utilized, there is an intrinsic delay between the real and the expected slice acquisition times, which may substantially decrease the ability to discern a given effect. The interval between the first and the last acquired slice depends on the TR selected. Slice timing correction adjusts the time-course of voxel data in each slice to account for these differences by interpolating the information in each slice to match the timing of a reference slice (first or mean-TR slice) (Calhoun et al., 2000;Sladky et al., 2011) (Figure 1J). The impact of using slice-time correction is described as quite variable, depending on the type of study, ranging from very important for event-related designs (especially for time-course analysis), to less important for block designs, to having minimal effect on rs-fMRI. However, it seems that it never has a negative impact on the results (Henson et al., 1999;Sladky et al., 2011;Wu et al., 2011).
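A minimal sketch of the two steps just described (discarding initial volumes, and slice-timing correction by temporal interpolation) is given below using NiBabel and NumPy. The file name, the assumption of an ascending sequential acquisition, and the use of simple linear interpolation are all illustrative; production tools use more sophisticated resampling schemes.

```python
import numpy as np
import nibabel as nib   # pip install nibabel

img = nib.load("sub01_task_bold.nii.gz")          # hypothetical file name
data = img.get_fdata()                            # shape: (x, y, z, time)
tr = float(img.header.get_zooms()[3])             # 4th zoom holds the TR, if set

n_dummy = int(np.ceil(10.0 / tr))                 # drop roughly the first 10 s
data = data[..., n_dummy:]

# Naive slice-timing correction, assuming ascending sequential acquisition:
# shift every slice's time-series onto the reference (slice 0) timing grid.
n_slices, n_vols = data.shape[2], data.shape[3]
ref_times = np.arange(n_vols) * tr
for s in range(n_slices):
    acq_times = ref_times + s * tr / n_slices     # this slice's acquisition delay
    data[:, :, s, :] = np.apply_along_axis(
        lambda ts: np.interp(ref_times, acq_times, ts), -1, data[:, :, s, :])
```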
In addition to the debate about whether or not to employ slice timing correction, there is the issue of when, if used, such correction ought to be done, as this step can interact strongly with motion correction (described below). Common suggestions include: for interleaved acquisitions, it is usually performed before motion correction, and for sequential acquisitions thereafter; for subjects with low head motion, before motion correction, and with high head motion, after (it is recommended to keep the order consistent for all the study subjects) (Sladky et al., 2011). Nevertheless, the issue remains poorly addressed, as slice timing and motion correction are two inextricably linked steps (Bannister et al., 2007). An alternative option is to perform slice timing and motion correction simultaneously through 4D realignment, using the Nipype 4D realignment function (Roche, 2011) or the Seshamani data reconstruction framework (Seshamani et al., 2016). Additional methods exist for slice timing adjustment, such as adding regressors as nuisance variables (Henson et al., 1999) or altering the model rather than the data, as in dynamic causal modeling (DCM), though that specific approach is not suitable for interleaved acquisitions. Head motion during scanning is probably the most common and critical confound for both task and rs-fMRI studies, both of which are dependent upon precise spatial correspondence between voxels and anatomical areas over time (Satterthwaite et al., 2012;Maclaren et al., 2013;Zeng et al., 2014;Power et al., 2015). The most common strategy used to perform motion correction is first to realign each volume to a reference volume (mean image, first, or last volume) using a rigid-body transformation (x, y, and z rotations and translations) (Jiang et al., 1995) (Figure 1K). While there is no standard rule about the motion threshold to be used, it is a rule of thumb to discard data sets with motion greater than the dimensions of a single voxel (Formisano et al., 2005;Johnstone et al., 2006). Because most traditional realignment strategies take into account each volume at a single point in time, and because residual motion-induced fluctuations still are present in the data set and decrease the reliability and statistical sensitivity of the study, a complementary strategy was proposed: to include in the subject-level general linear model (GLM) the motion parameters estimated during the realignment step as "nuisance variables" (covariates of no interest), possibly also including the temporal derivatives of those variables (Johnstone et al., 2006;Power et al., 2012). Most of the commonly used fMRI packages include motion correction tools, and significant differences in their performance are not evident (Oakes et al., 2005;Morgan et al., 2007). Several groups have recently demonstrated that small head motion produces spurious but structured noise, which then triggers distance-dependent changes in signal correlations (Power et al., 2012, 2015;Satterthwaite et al., 2012;Van Dijk et al., 2012;Siegel et al., 2014). The method proposed to reduce these effects has been called scrubbing, and is based on two measures to capture head displacements: Framewise Displacement (FD), or the brain-wide BOLD signal displacement (temporal Derivative VARiance, DVARS), derived from volume-to-volume measurements over all brain voxels (Power et al., 2012).
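FD, as defined by Power et al. (2012), is straightforward to compute from the six realignment parameters. The sketch below assumes an SPM-style parameter file (three translations in mm followed by three rotations in radians; other packages, e.g., FSL's mcflirt, order the columns differently), with rotations converted to millimeters on an assumed 50 mm head radius.

```python
import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    """FD per Power et al. (2012): sum of absolute volume-to-volume changes
    in the six rigid-body parameters, rotations expressed as arc length."""
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:6] *= radius              # radians -> mm on an assumed 50 mm sphere
    deltas = np.abs(np.diff(params, axis=0))
    return np.concatenate([[0.0], deltas.sum(axis=1)])   # FD of volume 0 set to 0

# Hypothetical usage with an SPM-style realignment parameter file:
# motion = np.loadtxt("rp_sub01.txt")   # columns: x, y, z (mm), pitch, roll, yaw (rad)
# outliers = framewise_displacement(motion) > 0.5   # e.g., flag FD > 0.5 mm
```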
After FD or DVARS calculation, a threshold is applied and, despite a lack of standardization, it is common to use FD > 0.2-1 mm and DVARS > 0.3-0.5% of the BOLD signal in order to identify outliers. Scrubbing corrections can be implemented with several tools, including C-PAC, the Artifact Detection Tools (Mazaika et al., 2007), DPARSF (Yan and Zang, 2010) and the fsl_motion_outliers tool. By default, fsl_motion_outliers detects outliers if FD or DVARS exceeds the 75th percentile + 1.5 times the InterQuartile Range. The identified outliers are commonly regressed out of the data later in the preprocessing pipeline (but before temporal filtering), with a GLM where each outlier is entered as a nuisance regressor. Additional alternative motion correction strategies are available, such as the use of slice-derived information (Beall and Lowe, 2014), task-associated motion (Artifact Detection Tool), expansion to 24-36 motion regressors, independent component analysis de-noising (Mowinckel et al., 2012;Griffanti et al., 2014;Pruim et al., 2015), and group-level motion covariates (Van Dijk et al., 2012). Furthermore, the use of non-gray-matter nuisance signals (Behzadi et al., 2007;Jo et al., 2013) and regression of the global signal have been shown to help reduce the impact of motion.

Spatial Transformations

Performing spatial transformations to align the images from the individual's native space with those acquired from a different modality or subject [(co-)registration] or into a common standard space (normalization) is a fundamental step of fMRI preprocessing (Brett et al., 2002) (Figure 1L). If homologous brain regions are not properly aligned between individuals, sensitivity is lost, leading to an increase in the false-negative rate. On the other hand, systematic normalization errors between groups may trigger false-positive activations. In fMRI studies there are two main standard coordinate systems which have been used in order to reduce inter-subject variability and to facilitate the reporting of results in the form of standard stereotactic (x,y,z) coordinates. These are the Talairach space, where the principal axis corresponds to the anterior commissure-posterior commissure (AC-PC) line, and which is based upon the brain of a single individual (Talairach and Tournoux, 1988), and the Montreal Neurological Institute (MNI) templates (there are several MNI templates available, with the MNI152 the most commonly used), which are based on the average of T1-weighted MRI scans of a large number of subjects (Mazziotta et al., 1995, 2001). These templates normally are associated with an atlas (Cabezas et al., 2011;Evans et al., 2012) and allow the localization of designated anatomical features in coordinate space, as well as the association of functional results with identified anatomical regions. The Automated Anatomical Labeling (AAL) atlas (Tzourio-Mazoyer et al., 2002), the Talairach atlas (Lancaster et al., 2000), and the Harvard-Oxford atlas (Desikan et al., 2006) are amongst the most commonly used. It is important to note that Talairach and MNI coordinates do not refer to the same brain regions or structures (Laird et al., 2010), and it is frequently necessary to convert between the two (e.g., for meta-analyses). Available tools to implement the transformation between the two coordinate spaces include "icbm2tal" (Lancaster et al., 2007;Laird et al., 2010) (GingerALE,
http://www.brainmap.org/icbm2tal/) and "mni2tal" (Brett et al., 2002) (BioImage Suite, http://bioimagesuite.yale.edu/mni2tal/). Tools also are available to localize and label brain regions according to MNI (MRIcron, http://www.mccauslandcenter.sc.edu/mricro/mricron/; Neurosynth, http://neurosynth.org/) or Talairach (Talairach software, http://www.talairach.org/; WFU_PickAtlas, http://www.nitrc.org/projects/wfu_pickatlas/) coordinates. Normalization strategies rely on optimization functions which maximize the similarity between two images (Jenkinson and Smith, 2001) by applying translations, rotations, and scaling in multiple axes. Transformations are usually divided into two subtypes: linear, applied uniformly along an axis and usually represented as affine matrices, and non-linear, defined locally (meaning that different points along an axis undergo unique transformations) and usually defined by warp or distortion maps. Several deformation algorithms are available which can be applied to MRI registrations (Klein et al., 2009). An alternative registration method is the use of surface registration techniques, in which the functional time-series are mapped onto cortical surface models [e.g., automatically implemented by Freesurfer (Fischl et al., 1999)], improving the computational efficiency and the mapping of the cortical surface, which is beneficial for subsequent processing and analysis steps (surface-based smoothing kernels and surface registration can be used) (Klein et al., 2010;Khan et al., 2011). In fMRI there are two commonly used processing streams for spatial normalization. In one, a single-step strategy is used to normalize directly to a standard EPI template, while the other employs a multi-step method which first aligns to the matching structural image using rigid-body or affine transformations, following which the composite image is then registered to the reference space using either affine or non-linear transformations (Poldrack et al., 2011). Complementary techniques for removing non-brain areas from the analysis and reducing the data size, such as skull stripping or masking, may also help to improve the normalization step (Tsang et al., 2007;Andersen et al., 2010;Fischmeister et al., 2013). The choice of the optimal atlas template and mapping function depends on a multitude of factors and is influenced by age, gender, hemispheric asymmetry, normalization methodology, and disease specificity (Crinion et al., 2007). Following the normalization step, it is always important to perform visual quality control, for example by displaying the fMRI data of each participant along with a reference EPI template.

Spatial Smoothing and Filtering

The next preprocessing step normally implemented is that of spatial smoothing/filtering, a process during which data points are averaged with their neighbors, suppressing high-frequency signals while enhancing low-frequency ones, which results in the blurring of sharp edges (Figure 1M). Smoothing simultaneously increases the SNR and the validity of the statistical tests (from random field theory) by providing a better fit to the expected assumptions while reducing anatomical differences. On the other hand, smoothing reduces the effective spatial resolution, may displace activation peaks (Reimold et al., 2006) and may extinguish small but meaningful local activations, depending on the filter parameters chosen (Yue et al., 2010;Poldrack et al., 2011;Sacchet and Knutson, 2013).
The standard spatial smoothing procedure consists of convolving the fMRI signal with a Gaussian function of a specific width (as, spatially, the BOLD signal is expected to follow a Gaussian distribution). The choice of the proper size of the Gaussian kernel [its Full Width at Half Maximum (FWHM)], which determines the extent to which the data are smoothed, will depend on specific features of the study undertaken, such as the type of paradigm and inference expected, as well as on the primary image resolution. The amount of smoothing should always be the minimum necessary to achieve the intended results, and a reasonable starting point is a FWHM of twice the voxel dimension (care must be taken when using large smoothing kernels, as they make the detection of smaller patterns of activation harder). The typical smoothing values used range between 5 and 10 mm for group analyses (Beckmann and Smith, 2004;Mikl et al., 2008;Poldrack et al., 2011). Alternative approaches to smoothing are the use of varying kernel widths (Worsley et al., 1996), adaptive smoothing (Yue et al., 2010;Bartés-Serrallonga et al., 2015), wavelet transforms (Van De Ville et al., 2007), and prolate spheroidal wave functions (Lindquist et al., 2006). Despite its common use, care must be taken when performing smoothing due to its effects on the final results (Geissler et al., 2005;Molloy et al., 2014), its interaction with motion correction (Scheinost et al., 2014) and its impact upon analyses which are sensitive to the activation of individual voxels (such as ROI-to-ROI analysis, Regional Homogeneity and Multi-voxel Pattern Analysis). This step is not recommended for connectomic approaches, in order to prevent the BOLD signal from extending across different regions of interest (Zuo et al., 2012;Tomasi et al., 2016). A final step in the data preprocessing pipeline is temporal filtering (Figure 1N). This step is performed in order to remove the effects of confounding signals with known or expected frequencies. The use of frequency filtering (and/or spatial smoothing) may help attenuate noise and thus increase the SNR (White et al., 2001). Functional MRI time-courses often manifest low-frequency drifts which may substantially reduce the statistical power of the results. It is therefore of great relevance to attempt to identify which frequencies are those of interest and which are noise (Kruggel et al., 1999). For example, fMRI noise may be associated with slow scanner drifts (∼ <0.01 Hz), as well as cardiac (∼0.15 Hz) and respiratory (∼0.3 Hz) effects (Cordes et al., 2001, 2014). The most frequently used filters for task-based fMRI acquisitions are high-pass filters (typically ∼0.008-0.01 Hz, i.e., 100-128 s), generally deployed with a rough rule of using a cut-off period of at least 2 times the fundamental task period (the interval between one trial start and the next one). With rs-fMRI the standard strategy is to apply a band-pass filter (0.01-0.08 Hz), following the reports of Biswal and colleagues (among others), which have shown that spontaneous low-frequency (∼ <0.1 Hz) BOLD fluctuations are physiologically meaningful and reflect spontaneous neural activity (Biswal et al., 1995;Fransson, 2005;Shirer et al., 2015). Nevertheless, high-frequency signals (>0.1 Hz) have also been shown to present functional significance (Chen and Glover, 2015;Gohel and Biswal, 2015). Exploring such frequency bands requires extra caution in controlling for physiological sources of noise (e.g., respiratory and cardiac effects), as these are known to present frequencies greater than 0.1 Hz. This can be achieved using simultaneous monitoring of pulse oximetry, electrocardiogram and/or a breathing belt.
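A minimal sketch of the conventional rs-fMRI band-pass described above, using a zero-phase Butterworth filter from SciPy; the filter order and the band edges are the usual conventions, not prescriptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, tr, low=0.01, high=0.08, order=3):
    """Zero-phase Butterworth band-pass of a 1D BOLD time-series.
    tr is the repetition time in seconds; low/high are the conventional
    rs-fMRI band edges in Hz."""
    nyquist = 0.5 / tr
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, ts)

# Illustrative use on synthetic data (TR = 2 s, 240 volumes):
tr = 2.0
ts = np.random.default_rng(0).standard_normal(240)
filtered = bandpass(ts, tr)
```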
Effective quality control is of fundamental importance in the optimization of data usability, reliability and reproducibility. Software tools have been developed in order to implement quality control procedures complementary to the ones already mentioned, such as BIRN QA (http://www.nitrc.org/projects/bxh_xcede_tools/), the NYU CBI Data Quality tool (http://cbi.nyu.edu/software/dataQuality.php), and the CANLAB Diagnostic Tools (http://wagerlab.colorado.edu/tools).

ANALYSIS METHODS

The next stage in the fMRI workflow is the selection of the most suitable method to extract the relevant functional information. There are many fMRI analysis methods and software tools for both task-based (Figure 1O) and rs-fMRI (Figure 1P). Thus, choosing the one most suitable for a specific study may be a complex, often confusing and time-consuming task. In order to assist with this choice, we herein present a table with the most commonly used software tools for the analysis of task-based and rs-fMRI data (Table 4). Some existing reviews have already explored fMRI analysis methods (van den Heuvel and Hulshoff Pol, 2010;Lohmann et al., 2013;Smith et al., 2013;Sporns, 2014;Haynes, 2015;Zhan and Yu, 2015;Pauli et al., 2016). In the following sections we distinguish between task-based and resting-state fMRI analyses according to the predominant use of each method; nevertheless, some methods are suitable for both types of acquisition. Other distinctions could be made, namely between methods suitable for localization and for connectivity approaches. The appropriate application of each method will also be discussed below.

Typical Task-Based Analysis Methods

The most widely employed method in the analysis of task-based fMRI is Statistical Parametric Mapping (SPM), which is based on the GLM (Figure 2A) (Friston et al., 1994a;Kiebel and Holmes, 2003;Poline and Brett, 2012). The GLM's popularity is based on its straightforward implementation, interpretability and computability. It incorporates most data modeling structures and provides the means for minimizing/controlling the effects of confounding factors such as motion, respiratory and cardiac signals, and HRF derivatives (Calhoun et al., 2004;Lund et al., 2006;Bright and Murphy, 2015). One common approach in the use of this technique is to convolve the stimulus onsets and durations with a canonical HRF, which yields an estimate of the expected BOLD signal for any condition of interest. These estimates are then defined, along with intrinsic confounding factors (e.g., motion parameters), as the independent variables of the GLM. Each voxel time-series is then set as the dependent variable. The result of this process is a test statistic for each voxel in the brain, which makes possible the creation of a statistical parametric map (SPM). The process is performed separately for each subject and is commonly designated first-level analysis. The GLM can be used very generally, ranging from the simplest subtraction method to parametric correlations with behavior, and also serves as the reference for several methods used to estimate connectivity.
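To make the first-level GLM concrete, the following sketch builds an expected BOLD regressor by convolving a block-design boxcar with a rough approximation of the canonical double-gamma HRF, and fits a single synthetic voxel by ordinary least squares. All shapes and parameter values are illustrative assumptions; packages such as SPM, FSL or AFNI implement the full machinery (multiple conditions, drift terms, autocorrelation modeling).

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Rough double-gamma HRF: a positive peak followed by an undershoot."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

tr, n_vols = 2.0, 150
frame_times = np.arange(n_vols) * tr

boxcar = ((frame_times % 60) < 30).astype(float)   # hypothetical 30 s on / 30 s off

# Expected BOLD: convolve the boxcar with the HRF sampled at the TR.
regressor = np.convolve(boxcar, hrf(frame_times))[:n_vols]

X = np.column_stack([regressor, np.ones(n_vols)])  # task regressor + intercept
y = 2.0 * regressor + 0.5 * np.random.default_rng(1).standard_normal(n_vols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task beta: {beta[0]:.2f}")       # close to the simulated 2.0
```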
The main criticism of the GLM is based upon the intrinsic assumptions which must be made, related to parametric testing in general and to the GLM in particular, and which usually are neither verified nor tested (Monti, 2011). Despite these reservations, GLM analysis remains extremely popular for fMRI. Task-based connectivity analysis is being performed with increasing frequency, and its results are quite sensitive to the choice of analysis tool. For it to be used appropriately, it is necessary to distinguish undirected associations between brain regions (functional connectivity, FC) from directed and causal relationships (effective connectivity) (Horwitz et al., 2005;Friston, 2011). Functional connectivity will be discussed in greater detail below, since the methods involved are more widely used for rs-fMRI, although some of the same principles apply also to task-based analysis. Also closely related to the GLM, and concerned with effective connectivity, psychophysiological interaction (PPI) is a method used to quantify how task-specific FC between a particular brain ROI (source/seed) and the rest of the brain voxels is affected by psychophysiological variables (Figure 2B) (O'Reilly et al., 2012). Some caveats when using PPI analyses are the hemodynamic deconvolution, the low power, and the difficulties intrinsic to event-related designs (Gitelman et al., 2003;O'Reilly et al., 2012). Similar to PPI analysis in that it explores how the experimental context affects connectivity between a group of regions, the structural equation model (SEM) is used to assess effective connectivity based on an a priori model of causality (Figure 2C) (McIntosh and Gonzalez-Lima, 1994;Büchel and Friston, 1997;Kline, 2011). It starts with the definition of a set of ROIs, and then tries to determine the connection strengths between those ROIs that best fit the model. SEM allows the investigation of several brain regions simultaneously, and incorporates prior anatomical and functional knowledge to determine causal relationships, but assumes that the interactions are linear and (similar to PPI) cannot take into account the dynamic changes of the BOLD signal (Tomarken and Waller, 2005). Most often used for task-based fMRI, SEM has also seen application with rs-fMRI (James et al., 2009). DCM allows estimation of the effective connectivity (model states) between brain regions by determining the hemodynamic response (model output) as a function of specified external experimental variables (model input) (Figure 2D). One of the primary characteristics of DCM is that it allows exploration of the brain as a dynamic system, accounting for changes in populations of neurons, and is able to build non-linear models of interacting regions (Penny et al., 2004;Stephan et al., 2008;Friston, 2009). DCM is a reliable and potentially more biologically realistic method for fMRI, in that it deals with function at the neuronal level. It does require pre-specified models and, given its non-linearity and complexity, involves the estimation of many parameters (using Bayesian estimation) and thus considerably more processing time; each region ultimately is characterized by a single state parameter (neuronal activity) (Frässle et al., 2015). DCM is primarily used for task-based fMRI but can also be applied to rs-fMRI analyses (Friston et al., 2014a;Razi et al., 2015;Rigoux and Daunizeau, 2015). Another method which may be used to investigate effective connectivity is Granger Causality Mapping (GCM) (Figure 2E).
The process is based upon determining temporal precedence in neural time-series and infers causality from time-lagged correlations (Goebel et al., 2003;Friston et al., 2013;Seth et al., 2015). GCM does not require the specification of an a priori model, but does have significant limitations imposed by inherent latency differences in the HRF across different brain regions, low sampling rates and noise (Wen et al., 2013). It has been applied both to task-based (Anderson et al., 2015) and rs-fMRI (Liao et al., 2011). Enjoying increasing popularity, Multivoxel Pattern Analysis (MVPA) uses pattern-classification algorithms (classifiers) (Haynes, 2015) in the attempt to delineate different mental states, as well as to correlate the patterns with specific perceptual, cognitive, or disease states (Figure 2F) (Norman et al., 2006;Mahmoudi et al., 2012;Premi et al., 2016). In contrast to the standard GLM approach (which focuses on patterns of activity of individual voxels), MVPA incorporates the signal from the distributed activity or connectivity across multiple voxels simultaneously, allowing the inference of mental states from patterns of distributed neural activity and the formulation of proper reverse inferences. Furthermore, it enables greater sensitivity and specificity, as well as the possibility of testing hypotheses with designs that cannot be implemented with the mass-univariate methods of the standard GLM approach (Etzel et al., 2013). Another difference between the approaches lies in the fact that while t-tests model the complete set of time points, a classifier trains on a subset of the data (Coutanche, 2013). MVPA analyses are typically implemented using a "decoding" approach, which is based on the use of classifiers, such as neural networks (Polyn et al., 2005;Nickl-Jockschat et al., 2015), support vector machines (Meier et al., 2012;Månsson et al., 2015), and linear discriminant analysis (Cox and Savoy, 2003;Mandelkow et al., 2016), as a means to differentiate between different classes or groups of individuals. Despite its popularity in the neuroimaging field, the "decoding" approach has some limitations, particularly related to the different results obtained with different parameters and/or algorithms. An alternative approach, "searchlight" mapping, performs multivariate analysis on a spherical "searchlight" centered on each voxel in turn, resulting in a statistical map of local multivariate effects (Allefeld and Haynes, 2014), which can be interpreted similarly to a GLM statistical output map (Kriegeskorte et al., 2006). MVPA analyses can be applied both to task-based and rs-fMRI and, with their high sensitivity and effective use of spatial information, allow pattern detection in increasingly complex scenarios. On the other hand, the use of complex and specific classifiers may make it difficult to generalize the results of employing this technique (Dosenbach et al., 2010;Cole et al., 2013).

Typical Resting-State Analysis Methods

Historically, the first method applied to rs-fMRI was seed-based correlational analysis (Figure 2G) (Biswal et al., 1995). The method compares the activity in an a priori defined ROI (the seed region), which may be a volume or a single voxel, with that in all other voxels in the brain (Lee et al., 2013). Seed-based analyses are characterized by simple implementation and statistics and are straightforward to interpret, but do require an a priori selection of the ROI. Such selection can be optimized using the data itself (Golestani and Goodyear, 2011). This form of analysis is widely used for rs-fMRI (each RSN can be extracted from a specific associated ROI), but can additionally be applied to fMRI tasks (Schurz et al., 2015) and to PPI analysis, which is in principle a seed-based analysis.
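A minimal sketch of a seed-based correlation map follows, using a synthetic 4D array and a hypothetical single seed voxel; real analyses would use a preprocessed data set and typically average the time-series over a seed ROI rather than take one voxel.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal((8, 8, 8, 200))     # stand-in for preprocessed 4D data

seed_ts = data[4, 4, 4, :]                     # hypothetical seed voxel

# Vectorized Pearson correlation of the seed with every voxel.
flat = data.reshape(-1, data.shape[-1])
flat = (flat - flat.mean(1, keepdims=True)) / flat.std(1, keepdims=True)
seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
r_map = (flat @ seed / seed.size).reshape(data.shape[:3])

# Fisher z-transform, commonly applied before group statistics;
# the seed voxel itself has r = 1, hence the clip.
z_map = np.arctanh(np.clip(r_map, -0.999999, 0.999999))
```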
Regional Homogeneity analysis (ReHo) (Figure 2H) uses Kendall's coefficient of concordance to measure the synchronization between the time-series of each voxel and those of its nearest neighbors (within a pre-defined neighborhood) (Zang et al., 2004). The ReHo method is easy to implement and interpret, and is normally applied to rs-fMRI determinations (Zang et al., 2007;Pedersen et al., 2015). The Amplitude of Low-Frequency Fluctuations (ALFF) and, more recently, the fractional ALFF (fALFF, which has reduced sensitivity to physiological noise) measure signal magnitude on a voxel-by-voxel basis (Figure 2I). ReHo and (f)ALFF both are methods which reflect properties of local spontaneous activity and, because they manifest different properties of the BOLD signal (synchronization and amplitude), they are usually implemented as complementary analyses. In order to overcome the limitations of model-based analyses, exploratory data-driven methods, which require neither prior information nor a previously defined model, have been applied to fMRI. The three primary techniques are Principal Component Analysis (PCA), Independent Component Analysis (ICA), and clustering. PCA is a method built on finding a set of orthogonal axes (identified as principal components) that can maximize the explained variance of the data and separate the relevant information from the noise (Figure 2J) (Wold et al., 1987;Viviani et al., 2005;Abdi and Williams, 2010;Smith et al., 2014). The efficacy of PCA is strongly dependent on assumptions of linearity, orthogonality of the principal components, and high SNR. It can be applied both to task-based (Nomi et al., 2008) and rs-fMRI (Zhong et al., 2009). The method most frequently used for studies of rs-fMRI FC is ICA (an extension of PCA) (Figure 2K) (Jutten and Herault, 1991). This processing technique separates individual elements into their underlying components, and models the fMRI data set as a constant number of spatially or temporally independent components, which then are linearly mixed (Kiviniemi et al., 2003;Beckmann, 2012). For fMRI, ICA maps are normally generated using spatial ICA methods (spatially independent components); however, temporal ICA also can be implemented and is used primarily for task fMRI. Limitations to the use of the technique in the temporal domain are its high computational demands and the necessity of relying on fewer data points than studies considering spatial components (Calhoun et al., 2001). ICA generates a set of spatial maps and corresponding time-courses. The selection of the components of interest is not trivial (in the absence of an a priori hypothesis) and is usually performed by visual inspection or by correlation with a predefined RSN template. While straightforward to implement in single-subject analyses, group ICA analyses are more complex and require choosing between several different workflows and algorithm definitions (Beckmann and Smith, 2004;Calhoun et al., 2009;Schöpf et al., 2010;Du et al., 2016).
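As a toy illustration of spatial ICA, the sketch below applies scikit-learn's FastICA to a synthetic 4D array, treating voxels as samples so that the recovered sources are spatially independent maps and the mixing matrix holds their time-courses. The component number and all data are arbitrary assumptions; dedicated packages (e.g., MELODIC, GIFT) add whitening, dimensionality estimation and group-level machinery.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
data = rng.standard_normal((8, 8, 8, 200))       # synthetic 4D stand-in

n_vols = data.shape[-1]
X = data.reshape(-1, n_vols)                     # (voxels, time)

# Spatial ICA: with voxels as samples, the sources are independent
# across space; mixing_ holds the corresponding time-courses.
ica = FastICA(n_components=10, random_state=0, max_iter=500)
maps = ica.fit_transform(X)                      # (voxels, components)
timecourses = ica.mixing_                        # (time, components)

spatial_maps = maps.T.reshape((10,) + data.shape[:3])
```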
ICA methods have also been used extensively in rs-fMRI studies (Beckmann et al., 2005; Soares et al., 2016), task-based fMRI (Calhoun et al., 2008), and for artifact removal (Perlbarg et al., 2007; Feis et al., 2015; Pruim et al., 2015). The use of clustering methods constitutes a different approach based on mathematical algorithms that group data into subsets (clusters) such that members of the same cluster are more similar to one another than they are to those of different clusters (Figure 2L). Similarly to PCA and ICA, clustering is a totally data-driven approach that enables, for example, the grouping of brain voxels with similar connectivity into the same cluster. The main difference lies in the fact that ICA assumes that there are spatially independent regions that form a network through a shared fMRI time-course, while clustering does not rely on such assumptions and simply groups voxels with similar time-courses. Clustering methods have been successfully implemented both with rs-fMRI (Mezer et al., 2009; Lee et al., 2012) and task-based fMRI (Goutte et al., 1999; Heller et al., 2006). The major associated challenges are the requirements that the spatial reproducibility of networks be optimized across subjects and that individual network homogeneity be maximized (Shams et al., 2015). Clustering can be implemented using hierarchical techniques (Cordes et al., 2002), partitional clustering (such as k-means) (Fadili et al., 2000), spectral clustering approaches (Craddock et al., 2012), or sparse geostatistical analysis (Ye et al., 2011). Despite serving purposes similar to those of ICA, clustering methods have been shown to outperform ICA for classification purposes (Meyer-Baese et al., 2004). An increasingly prominent and powerful tool for the study of functional brain networks is graph theory. These methods model the brain as a network composed of nodes (voxels or regions) and edges (connections between nodes, e.g., time-series correlations). This enables the establishment of functional interactions between every possible pair of brain regions, constituting an extension of the seed-based analysis in which all possible seeds are explored, also known as the functional connectome. This whole-brain network is mathematically modeled as a graph and, consequently, graph-theory metrics can be used to study the topological properties of such a network (Figure 2M). Properties such as the clustering coefficient, characteristic path length, centrality, efficiency, and modularity, among others, provide insights into the functional integration, segregation, resilience and organization of the network as a whole or of its individual nodes (Stam and Reijneveld, 2007; Bullmore and Sporns, 2009). The approach has been used extensively with rs-fMRI (Wang et al., 2010; Ye et al., 2015; Marques et al., 2016) and, to a lesser extent, for task-based fMRI (Cao et al., 2014), where it has been described as sometimes difficult to implement and interpret (Fornito et al., 2013). Another approach, which is somewhat more straightforward to implement, is to characterize the edges of the graph, rather than to consider the topological properties of the entire network. In contrast to most rs-fMRI strategies, which are based on the assumption of stationarity, dynamic functional connectivity (dFC) addresses the temporal component (fluctuations) of spontaneous BOLD signals (Figure 2N), most commonly via the sliding-window correlation approach sketched below.
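The following is a minimal sketch of the sliding-window correlation approach to dFC discussed next; the window length, step, region count, and toy data are illustrative assumptions, and in practice these choices strongly influence the result.

```python
import numpy as np

def sliding_window_fc(ts, win_len=40, step=5):
    """Dynamic FC: one correlation matrix per sliding window.

    ts: (n_timepoints, n_regions) array of region-averaged BOLD signals.
    Returns an array of shape (n_windows, n_regions, n_regions).
    """
    n_t = ts.shape[0]
    starts = range(0, n_t - win_len + 1, step)
    return np.stack([np.corrcoef(ts[s:s + win_len].T) for s in starts])

rng = np.random.default_rng(0)
ts = rng.standard_normal((240, 90))      # toy data: 240 volumes, 90 regions
fc_stack = sliding_window_fc(ts)         # stack of windowed FC matrices
fc_variability = fc_stack.std(axis=0)    # edgewise fluctuation across windows
```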
Dynamic FC analysis has the potential to clarify the constant changes in patterns of neural activity and may be a more appropriate choice for the analysis of rs-fMRI studies (Bassett et al., 2011; Cabral et al., 2011; Madhyastha et al., 2015; Kaiser et al., 2016). The technique can be implemented using the sliding window correlations approach (the most common) (Hindriks et al., 2016), time-frequency analysis (Chang and Glover, 2010), single-volume co-activation patterns, repeating sequences of BOLD activity (Pan et al., 2013), or through phase synchronization (Glerean et al., 2012). Limitations associated with the approach include the initial steps of sliding-window specification and the specificity of pre-processing, as well as its sensitivity to physiological noise and the complexity of the attendant statistical analysis (Hutchison et al., 2013; Leonardi and Van De Ville, 2015; Tagliazucchi and Laufs, 2015). STATISTICAL ANALYSES In a single fMRI experiment, images made up of roughly 100,000 voxels are acquired hundreds to thousands of times, resulting in a massive data set with a complex spatial and temporal structure (Figure 1Q). Group-Level Analyses In order to make inferences at the group level (i.e., second-level), the most widely used analyses of fMRI data are performed within the GLM framework. In general terms, the GLM approach models the time series of the fMRI signal as a linear combination of different signal components, in order to test whether the activity in a defined brain region is systematically associated with a particular condition of interest (Lindquist, 2008). The GLM is expressed as Y = Xβ + ε, where Y is the observed BOLD response, X corresponds to the design matrix, β contains the parameter estimates and ε is the error. Hypothesis testing in the GLM framework includes a set of parametric approaches, comprising the familiar T-Tests (independent and paired), Multiple Regression and ANalysis Of VAriance (ANOVA). Commonly, the research question leads to more complex experimental designs which involve both within-subjects (e.g., condition A vs. B) and between-subjects (e.g., control vs. experimental group) factors. Designs with more than one within-subjects factor, or with a mixture of within- and between-subjects factors, cannot be handled by the traditional tools in a single model. Even though most tools allow the parametrization of such models, the results can be invalid due to their inherent inability to incorporate all the factors into a single model. As an alternative, the GLM Flex tool (Harvard Aging Brain Study, Martinos Center, MGH, Charlestown, MA, http://mrtools.mgh.harvard.edu/index.php/GLM_Flex) was developed. The tool can handle multiple within- and between-subjects factors, while also modeling all the possible interactions between factors within the same model. Parametric tests are popular due to their simplicity and ease of application. However, these tests make some strong assumptions that are minimally met, or not met at all, in fMRI data sets (e.g., the assumption of normality). As a result, it is often more appropriate to use non-parametric tests. Such tests estimate the null distribution from the data itself. The most common non-parametric tests used in fMRI analysis are permutation (randomization) tests. Tools that implement such tests include randomise from FSL (Winkler et al., 2014) and SnPM (Nichols and Holmes, 2002).
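To illustrate how a permutation test estimates the null distribution from the data itself, below is a minimal sign-flipping sketch for a one-sample group analysis with family-wise error control via the maximum statistic; the array shapes and permutation count are illustrative assumptions, and dedicated tools such as randomise or SnPM should be preferred in practice.

```python
import numpy as np

def one_sample_permutation(betas, n_perm=5000, seed=0):
    """Sign-flipping permutation test for group-level contrast images.

    betas: (n_subjects, n_voxels) first-level contrast estimates.
    Returns the observed t-map and FWER-corrected p-values based on the
    null distribution of the maximum absolute statistic.
    """
    rng = np.random.default_rng(seed)
    n_sub = betas.shape[0]

    def t_stat(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    t_obs = t_stat(betas)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))   # flip each subject's sign
        max_null[i] = np.abs(t_stat(betas * signs)).max()  # record the max statistic
    p_fwer = (1 + (max_null[:, None] >= np.abs(t_obs)).sum(0)) / (n_perm + 1)
    return t_obs, p_fwer

rng = np.random.default_rng(1)
t_map, p_map = one_sample_permutation(rng.standard_normal((16, 2000)) + 0.2, n_perm=1000)
```

Using the maximum statistic across voxels is what turns the voxelwise test into an FWER-corrected one, which connects directly to the multiple comparisons discussion that follows.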
Statistical Significance As in all standard statistical inference, the evaluation of fMRI data requires the establishment of a criterion for statistical significance. In early fMRI studies, the commonly-used standard for statistical significance was an uncorrected p-value of 0.001 at each voxel, a value that is 50 times more restrictive than that typically used in scientific research (Lieberman and Cunningham, 2009). In a typical fMRI experiment, more than 100,000 statistical tests may be performed (one test per voxel). With this number of tests, a threshold of p < 0.001 would be expected to produce on the order of 100 voxels erroneously identified as significant. Such a false-positive rate would clearly be unacceptable, so a variety of methods have been proposed to cope with the multiple comparisons issue. They can be divided into two main categories: voxel-based thresholding, including the family-wise error rate (FWER) and the false-discovery rate (FDR); and cluster-extent based thresholding (Forman et al., 1995). A widely used method for voxel-based thresholding consists of controlling the FWER in combination with Random Field Theory (RFT). The technique is implemented by estimating the smoothness of the image, expressed in the number of resels (image resolution elements), since neighboring voxels share statistical dependency. Although it can be thought of as roughly similar to the Bonferroni correction, FWER control using the RFT approach has a number of unique attributes and limitations due to the inherent smoothness of fMRI data. While enabling tight control over type I errors, it is often over-conservative and may prevent true effects from being detected (Hayasaka and Nichols, 2004). The FDR approach, another popular technique to control false positives in neuroimaging studies (Genovese et al., 2002), considers the proportion of false positives among all the rejected tests. FDR control is less stringent than FWER control and usually results in increased power. Because this approach is applied to p-values (rather than to the test statistics themselves) it can be used with any valid statistical test, but it is highly dependent on the sample size. The FDR approach most widely applied to functional imaging data is the Benjamini-Hochberg (BH) procedure, which assumes independence between tests (Benjamini and Hochberg, 1995). Statistical tests in fMRI are known to be dependent, however, so concern has been raised regarding its applicability (Chumbley and Friston, 2009; Chumbley et al., 2010). The most common software tools, specifically AFNI, FSL, and SPM, implement this type of correction method. A significant problem associated with conservative approaches is the increased probability of committing type II errors (failure to detect true effects), which is particularly evident with small samples (Nichols and Hayasaka, 2003). It has also been postulated that such approaches may favor the extraction of the more obvious effects (such as sensorimotor processes) associated with signals of large magnitude, while failing to capture more subtle phenomena (such as complex cognitive and affective processes) often associated with signals of low amplitude (Lieberman and Cunningham, 2009). Cluster-extent based thresholding has been put forward in order to address some of these shortcomings. It detects significant clusters based on the number of contiguous voxels that surpass a pre-determined primary threshold (Friston et al., 1994b).
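The primary-threshold-plus-extent logic just described can be sketched with scipy's connected-component labeling; the thresholds here are illustrative assumptions, since in real analyses the cluster-size cutoff is derived from RFT, Monte Carlo simulation, or permutation rather than fixed by hand.

```python
import numpy as np
from scipy import ndimage

def cluster_extent_threshold(stat_map, primary_z=3.1, min_cluster_size=50):
    """Keep only clusters of supra-threshold voxels larger than a size cutoff.

    stat_map: 3-D array of voxelwise statistics (e.g., z-scores).
    min_cluster_size: illustrative fixed cutoff; in practice this value
    comes from the null distribution of the largest cluster size.
    """
    supra = stat_map > primary_z
    labels, n_clusters = ndimage.label(supra)   # face-connected components by default
    sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))
    keep_ids = np.flatnonzero(sizes >= min_cluster_size) + 1
    return np.where(np.isin(labels, keep_ids), stat_map, 0.0)

rng = np.random.default_rng(0)
zmap = rng.standard_normal((40, 48, 40))        # toy statistic volume
thresholded = cluster_extent_threshold(zmap)
```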
The main rationale for its use is that adjacent voxels are more likely to be involved in the same neuronal processes and thus are not independent (Smith and Nichols, 2009). The net result is that instead of estimating the false-positive probability of each voxel, this approach estimates the false-positive probability of the region as a whole (Woo et al., 2014). The significance of a cluster's size is determined from the sampling distribution of the largest cluster size under the null hypothesis of no signal. The reasoning behind this correction is based on the observation that false positives are randomly distributed and thus are not likely to occur in contiguous groups of voxels (Woo et al., 2014). Cluster-extent approaches, however, are associated with reduced spatial specificity, since they describe the likelihood of finding a cluster of a given size or greater under the null hypothesis. The implication is that the larger the cluster, the less spatially specific the inference, though this aspect is often overlooked in functional imaging (Woo et al., 2014). The most well-known cluster-size estimation methods are based on RFT, as implemented in SPM, or on Monte Carlo simulations, such as AlphaSim distributed with AFNI and with the REST toolbox. All these methods require the definition of an arbitrary primary cluster-defining threshold. An alternative method, termed threshold-free cluster enhancement (TFCE), was developed in order to eliminate the need for the definition of the primary threshold and is implemented in FSL (Smith and Nichols, 2009), the CAT toolbox (http://dbm.neuro.uni-jena.de/cat/), and MatlabTFCE (https://github.com/markallenthornton/MatlabTFCE). Yet another way of performing fMRI statistical analyses is through the use of specified ROIs. Analyses of this type are usually performed when the researcher has some a priori hypothesis regarding a specific brain region, which renders the previously discussed corrections for multiple comparisons too restrictive (see Poldrack, 2007 for other rationales). Generally, ROI analyses lead to increased sensitivity (the signal is averaged across groups of voxels) but a false sense of specificity of a given activation (activity patterns in regions outside the ROIs are masked out). The simplest approach consists of averaging the estimates over the voxels from the ROI and then performing the statistical testing with the averaged estimate. An alternative method, commonly named Small-Volume Correction (SVC), consists of restricting the voxel-wise analysis to the voxels inside the ROI, thus reducing the number of tests that the multiple comparisons correction must account for. Most software tools, such as SPM, FSL, and AFNI, contain routines for ROI-based analysis. The Marsbar tool (http://marsbar.sourceforge.net) for SPM was specifically developed for this purpose. Effect Sizes Contrary to the standard practice in other research areas, effect estimates (i.e., the effects' magnitude) are usually not provided in most neuroimaging reports. A recent publication highlighted that the test statistic does not provide information regarding the actual significance of the findings, serving rather as auxiliary evidence for the existence of the targeted effect. In contrast, the effect estimate provides a clear picture of the property of interest and, consequently, should be the focus of the investigation. For this reason, the absence or misreporting of effect sizes has direct implications for the reliability and interpretability of fMRI findings.
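As one simple way of reporting an effect magnitude alongside the test statistic, the sketch below computes a voxelwise Cohen's d between two independent groups; the shapes and toy data are illustrative assumptions.

```python
import numpy as np

def cohens_d_map(group_a, group_b):
    """Voxelwise Cohen's d between two independent groups.

    group_a, group_b: (n_subjects, n_voxels) contrast estimates.
    """
    n_a, n_b = group_a.shape[0], group_b.shape[0]
    pooled_sd = np.sqrt(((n_a - 1) * group_a.var(0, ddof=1) +
                         (n_b - 1) * group_b.var(0, ddof=1)) / (n_a + n_b - 2))
    return (group_a.mean(0) - group_b.mean(0)) / pooled_sd  # standardized difference

rng = np.random.default_rng(0)
d_map = cohens_d_map(rng.standard_normal((20, 1000)),
                     rng.standard_normal((18, 1000)) + 0.3)
```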
Taking this into consideration, it is strongly recommended that effect-size maps/images be made available. With this practice, the whole range of effects, and not only the significant findings, can be used to compare and properly aggregate effect sizes across different studies/research centers, and power analyses for future studies also become possible. Meta-Analysis The number of fMRI publications continues to grow exponentially, but the results are often not consistent across studies. Therefore, the meta-analysis of functional imaging studies may be essential for the continued development of new hypotheses about the neural mechanisms of cognition, emotion, and social processes (Wager et al., 2007). Individual studies generally provide evidence about brain activity rather than mental states, whereas meta-analyses can help to identify consistently activated regions related to the same psychological state (Wager et al., 2007). Neuroimaging meta-analysis pools statistically significant results and offers the potential to improve predictive power, to build analytic tools and models, and to detect emergent properties of neural systems through large-scale data mining and computational modeling. The methods work by counting the number of activation peaks in each local brain area and comparing the observed number of peaks to a null-hypothesis distribution in order to establish a criterion for significance. Functional MRI meta-analysis can be performed using either full statistical parametric maps, in image-based meta-analysis (IBMA), or the coordinates of significant findings, in coordinate-based meta-analysis (CBMA). IBMA can capture consistent patterns of brain activation across studies even when these patterns are not identified as significant in the individual studies; however, neuroimaging studies rarely provide full statistical parametric maps, which precludes these analyses. Thus, the majority of analyses aggregating neuroimaging results rely on CBMA, in which each eligible study reports the 3-dimensional locations of peak activations in a standard atlas or template space. As a result, CBMA only aggregates results that are reported as significant across studies, and fails to capture individually non-significant but consistent findings across different studies. A number of different algorithms have been developed for CBMA analyses, including Activation Likelihood Estimation (ALE) (Eickhoff et al., 2012), Kernel Density Analysis (KDA) (Wager et al., 2004), Multi-level Kernel Density Analysis (MKDA) (Wager et al., 2007), and Effect-size Signed Differential Mapping (ES-SDM). MULTIMODAL STUDIES Collecting multimodal brain data using different neuroimaging methods has become increasingly popular and is definitely a future trend, providing an opportunity to develop a more global description of brain structure and function (Figure 1R). A number of different modalities and techniques have been used to complement fMRI analysis, either simultaneously or separated in time, and have been reviewed elsewhere (Biessmann et al., 2011; Uludag and Roebroeck, 2014; Liu et al., 2015a; Garcés et al., 2016). One particularly powerful approach to better understanding the brain is to model it as a network of functional connections between every possible pair of regions. The connectomic paradigm provides the investigator with an effective framework with which to study how dynamic changes in function are related to structural change, and how both are connected with brain states.
Several extensive studies and worldwide projects [e.g., the Human Connectome Project (Van Essen et al., 2013), the Developing Human Connectome Project (http://www.developingconnectome.org/), the Baby Connectome Project (http://www.fnih.org/what-we-do/current-research-programs/babyconnectome), or the MyConnectome project (http://myconnectome.org/wp/)] are currently under way and have been enhancing multimodal approaches by combining fMRI data with structural information (e.g., diffusion data, volumetric data, cortical thickness, and voxel-based morphometry) (Labudda et al., 2012; Crossley et al., 2014; Horn et al., 2014; Frank et al., 2016). Another approach employing complementary methodology is the combination of fMRI with the recording of brain electrical activity (the electrophysiological response) using either electroencephalography (EEG) or magnetoencephalography (MEG) (Bledowski et al., 2004; Vaudano et al., 2012; Tewarie et al., 2015). Both techniques add improved temporal resolution to the very good spatial resolution of fMRI (Huster et al., 2012; Hall et al., 2014; Jorge et al., 2014). Positron emission tomography (PET) and single-photon emission computerized tomography (SPECT) both have a long history of providing fundamental information regarding brain metabolism. Though lacking the time resolution of fMRI, they complement that methodology by having the ability to study such parameters as neurotransmitter-receptor interactions and local glucose metabolism over longer periods of time (minutes) (Price, 2012; Sander et al., 2013; Tousseyn et al., 2015). It is possible to perform fMRI and functional near-infrared spectroscopy (fNIRS) simultaneously, and such a multimodal approach may be used to improve the temporal resolution of the former, thus allowing better correlation of the BOLD signal with local hemodynamic changes (Steinbrink et al., 2006; Sato et al., 2013). Inducing small direct currents in the brain using transcranial magnetic stimulation (TMS) or transcranial direct current stimulation (tDCS) makes relatively focal excitation or inhibition possible and, when performed concurrently with fMRI, allows the study of functional interactions (Ruff et al., 2009; Peters et al., 2013; Weber et al., 2014; Leitão et al., 2015). The rapid growth of multimodal neuroimaging techniques has triggered the parallel development of computing methods and workflows capable of analyzing the resultant complex data sets (for a review, see Liu et al., 2015b), and has led to the development of several tools dedicated to this type of study (Casanova et al., 2007; McFarquhar et al., 2016). While the primary focus of this guide has been human neuroimaging, it is useful to note that many of the concepts and strategies described can also be applied to animal experimentation. The availability of ultra-high field scanners, capable of achieving very high resolution, has made feasible the application of fMRI to brains as small as that of a mouse (Jonckers et al., 2011; Schlegel et al., 2015). Other animals studied using this technique are rats (Liang et al., 2012; Henckens et al., 2015), non-human primates (Hutchison et al., 2015; Petkov et al., 2015), dogs (Andics et al., 2014; Berns et al., 2015), and cats (Brown et al., 2013; Hall et al., 2016). Translational research opportunities allow the investigator to develop animal models for studies which cannot be undertaken in patients or volunteers.
A number of technical issues must be considered when designing protocols for animal work: the impact of higher magnetic fields and the ability to detect functional contrasts (Ciobanu et al., 2015); the use, or not, of anesthesia or sedation and its effects on regional and global brain activity (Kalthoff et al., 2013; Schlegel et al., 2015); physiological differences between animals and humans (Kalthoff et al., 2011; Sumiyoshi et al., 2012); and the fact that relatively few reference templates and atlases are available for animals (Stoewer et al., 2012; Nie et al., 2013; Papp et al., 2014). REPORT AND INTERPRETATION OF RESULTS The results reported for a typical fMRI study include such information as the peak cluster coordinates (in x, y, and z), the cluster size, the multiple comparisons correction method used, the statistical score (usually T-statistics or Z-values), and the brain regions of interest labeled with reference to a standard atlas and/or by visual inspection. The correct interpretation of fMRI results is never straightforward and is dependent upon factors which range widely from technical and methodological to conceptual and statistical issues. Because there is such great variation in the manner in which studies are performed (Lange et al., 1999; Carp, 2012a; McGonigle, 2012), it is critically important that researchers/clinicians fully describe and report the methodological details as well as the results, thus allowing replication as well as the potential incorporation of the findings into meta-analytic studies (Carp, 2012b). Comprehensive guidelines for reporting an fMRI study, as well as principles of open and reproducible research for neuroimaging, have been proposed, and have been accompanied by the development of a number of databases (Van Horn and Ishai, 2007; Poldrack and Gorgolewski, 2014; Poldrack and Poline, 2015). Specific examples of such data pools include OpenfMRI (Poldrack and Gorgolewski, 2015), ConnectomeDB (Hodge et al., 2016), the Neuroinformatics Database (NiDB) (Book et al., 2016), and NeuroVault.org (Gorgolewski et al., 2016b). Effective communication of the results of fMRI investigations requires that the information be organized and described in a clear and straightforward manner, using an unambiguous ontology (a formal description of all terms and syntax) (Burns and Turner, 2013; Poldrack and Yarkoni, 2016) and format (Gorgolewski et al., 2016a). The BOLD signal itself has a number of characteristics which present challenges to the accurate interpretation of fMRI data acquired with its use (Aguirre et al., 1998). BOLD responses are known to vary with different acquisition parameters (Renvall et al., 2014) and to be highly dependent on the specific parameters of neurovascular coupling, which are known to vary with age, medication, and in certain pathological states (Bangen et al., 2009; Di et al., 2013; Tsvetanov et al., 2015). In addition, the nature of the BOLD signal has been shown to be affected by a variety of chemical compounds (e.g., caffeine and alcohol) (Levin et al., 1998; Mulderink et al., 2002; Perthen et al., 2008) as well as by respiration (Birn et al., 2008) and oxygen level (Cardenas et al., 2015). The fundamental challenge of fMRI research is to draw conclusions which are completely supported by the data and which are unbiased. The literature contains numerous examples wherein foci of static regional activation are interpreted as associated with specific cognitive functions.
Such empirical conclusions, termed "reverse inference" (Poldrack, 2006), are based on the implicit assumption that, when activation in a region changes as a function of the performance of a specific task, the region whose activity has changed is responsible for the associated cognitive process. This assumption fails to take into account either brain compensatory mechanisms (Meade et al., 2016) or plasticity (Poldrack, 2000; Colcombe et al., 2004; Amad et al., 2016). It is now generally accepted that a more complete description of brain function must not only include the notion of causality but also recognize the relationship between interconnected regions (network properties), through the characterization both of functional specialization (the specific roles played by the different regions) and integration (how the regions interact with one another) (Van Horn and Poldrack, 2009). For all these reasons, drawing significant conclusions about mental states from fMRI data is challenging at best, and classification and predictive models such as machine learning algorithms have increasingly been tasked for this purpose (Pereira et al., 2009; Dosenbach et al., 2010). As stated often throughout this guide, the statistical analysis of fMRI data is a complex process and great caution must be exercised when interpreting the experimental results. Questions have been raised, for example, about whether certain studies have reported findings that can be supported by the methodology used and the data obtained. Some studies purport to find extremely high degrees of correlation between individual behavioral characteristics (including personality, emotion, and social cognition) and specific regions of increased brain activity (Vul et al., 2009). Critics have pointed out that, considering the degree of methodological imprecision both of fMRI and of the measurement of individual characteristics, the reported results may not be robust (Vul et al., 2009). Another issue is that of circular analysis, unfortunately seen with some frequency in functional studies. The issue arises when the data are first analyzed, subsets of those data selected, and then the same subsets re-analyzed to obtain the results (Kriegeskorte et al., 2009). An fMRI example would be to define a ROI on the very basis of a statistical mapping which highlights the voxels of which it is composed in response to a functional activation state (Kriegeskorte et al., 2009). Such "double dipping," the use of the same data for selection and subsequent selective analysis, results in an invalid statistical inference. It violates the criterion that the test statistics must be inherently independent of the selection criteria under the null hypothesis. CONCLUSIONS AND FUTURE DIRECTIONS Functional MRI currently enjoys popularity in the study of brain function and promises to become even more prominent in the future. A number of factors contribute to the optimism about the expanding role of fMRI in neuroscience: greater understanding of the BOLD and other contrast mechanisms; higher resolution and increased sensitivity; the use of new, more optimized preprocessing and analytic techniques; more powerful computational models; and extensive data sharing, enabling the design of studies comprised of large numbers of participants. Strategically, functional neuroimaging appears to be moving from the description and characterization of brain states toward predictive models of function based on the whole-brain network.
It is hoped that such models will incorporate behavioral features, genetic factors and biomarkers, and will evolve to play an increasingly prominent clinical role in the diagnosis, monitoring, and treatment of central nervous system disorders. In order to contribute to future progress, this article has sought to highlight the typical challenges faced when performing fMRI studies, and to offer some practical strategies with which they may be overcome. We have provided guidelines and references for the tools most commonly used at each step of the principal fMRI pipeline. As a concluding remark, we outline a set of general recommendations that we consider to be of utmost relevance for better transparency and reproducibility of neuroimaging studies. Before the study: perform suitable experimental planning, including a proper design and a power analysis (e.g., use previously reported estimates as a means to determine an adequate sample size), and identify the specific targets and analyses to be implemented. During the study: define adequate acquisition protocols; identify and prevent potential artifacts as early as possible (in order to avoid losing data); carefully check the quality of the data; perform accurate preprocessing, analysis and statistical testing; and organize all the information in a standardized way, preferably with open-source software. After obtaining the results: discuss them with caution; report them, as well as the methodological details, thoroughly and in accordance with the guidelines (allowing study replication); and share the full statistical maps, ideally in open repositories (allowing meta-analyses and power analyses for other similar studies). It is our hope that this guide will be of assistance both to those beginning to explore the potential of functional imaging as well as to those who might appreciate a source book of current practice. AUTHOR CONTRIBUTIONS JMS, RM, PSM, AS (Alexandre Sousa), and PM contributed to the literature search, figures, study design, and writing. EG contributed to the writing. AS (Adriana Sampaio), VA, and NS contributed to the study design and writing. ACKNOWLEDGMENTS This article has been developed under the scope of the project NORTE-01-0145-FEDER-000013, supported by the Northern Portugal Regional Operational Programme (NORTE 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). We are also thankful to FCT-ANR/NEU-OSD/0258/2012, funded by FCT/MEC. RM and PSM are supported by FCT fellowship grants from the Ph.D.-iHES program, with the references PDE/BDE/113604/2015 and PDE/BDE/113601/2015, respectively. PM is supported by a grant from the project "Better mental health during ageing based on temporal prediction of individual brain ageing trajectories (TEMPO)" (Contract grant number: P-139977), funded by Fundação Calouste Gulbenkian.
Case of unilateral pellucid marginal corneal degeneration progressing to corneal perforation with keratoconus in contralateral eye
Purpose To report our findings in a case of pellucid marginal corneal degeneration (PMCD) in the left eye and keratoconus (KC) in the right eye, and to review earlier cases of PMCD and KC. Observations A 45-year-old woman visited our hospital with a complaint of reduced vision in her right eye. She had been predisposed to allergies since childhood and had a habit of rubbing her eyes. Based on the results of the corneal topographic study, we diagnosed her with KC in the right eye and PMCD in the left eye. We prescribed a rigid gas-permeable contact lens and treated her allergic conjunctivitis with ocular medications. Three years after her initial visit, she developed a corneal perforation in the left eye. The perforation was closed by conservative treatment consisting of therapeutic soft contact lens wear. One year after the cornea healed, the corneal astigmatism in the left eye was about one-half of what it had been before the corneal perforation. Her corrected visual acuity improved to 1.0 with conventional spectacles. Conclusion and Importance We found a difference in the progression of KC and PMCD even when they occurred in the same individual. We suggest that an atopic predisposition, which is considered a risk factor for acute hydrops in KC, is also a risk factor for acute hydrops and corneal perforation in eyes with PMCD. Introduction Pellucid marginal corneal degeneration (PMCD), first described by Schlaeppi in 1957, is a relatively rare disorder that is associated with non-inflammatory thinning of the lower periphery of the cornea. 1 Patients often visit ophthalmologists in their 30s or later with a main complaint of decreased vision. In these patients, slit-lamp microscopy shows a non-inflammatory band of thinning in the periphery of the inferior cornea and an adjacent anterior protrusion of the upper region. This appearance has been described as a "beer belly" cornea. Corneal topographic analysis shows a "crab claw" pattern. 2 PMCD is more common in men than in women and is more often bilateral. 3 The etiology of PMCD has not been definitively determined, but it is believed to be a disorder related to keratoconus (KC) because it is often seen in the same family. In addition, KC is often present in the contralateral eye of patients with PMCD. 4 The difference between the two disorders is that KC often occurs at a younger age, 10-20 years, while PMCD occurs after the age of 30 years. The common feature of both is that acute hydrops, a condition characterized by stromal edema due to leakage of aqueous into the stroma through a tear in Descemet's membrane, can develop when advanced corneal thinning causes such tears in Descemet's membrane. The risk factor for acute hydrops in KC has been reported to be an atopic predisposition of the patient, 5 but there have been no reports describing the risk factors for acute hydrops in PMCD. This may be because PMCD is much rarer than KC. We have examined a case of unilateral PMCD that progressed to a corneal perforation in a patient whose contralateral eye had KC. We report our findings in this case and compare them with the findings of previous cases of PMCD and KC. Case report The patient had been aware of the decrease in vision in her right eye since she was about 20 years old. When her vision could no longer be corrected by conventional spectacles, she visited an eye clinic around the age of 30 years.
She was prescribed soft contact lenses. After the age of 40 years, her visual acuity in the right eye decreased in spite of wearing the soft contact lenses, and she was referred to our hospital. At the initial examination, the patient had no specific complaints about the left eye. However, she reported that she had had an atopic predisposition since early childhood and had a habit of rubbing her eyes. There was no other relevant family history. Her corrected decimal BCVA was 0.1 with a correction of −8.00 DS = −10.00 DC Ax 53° in the right eye and 1.0 with −0.50 DS = −7.00 DC Ax 103° in the left eye. We noted the very high astigmatism in both eyes. The intraocular pressure was 8 mmHg in the right eye and 17 mmHg in the left. Slit-lamp microscopy showed an anterior protrusion of both corneas (Fig. 1A and B). In addition, there were allergic changes in both conjunctivas, with hyperemia and follicle formation in the upper eyelids. She was aware of an unpleasant pruritic sensation that provoked her desire to rub her eyes. There were no obvious abnormalities in the lens, vitreous cavity, or retina that could have caused the reduction of vision. Examination of the corneal topography showed a marked protrusion of the paracentral inferior cornea in the right eye and a band-shaped, highly refractive area in the most peripheral part of the inferior cornea of the left eye (Fig. 1C and D). Corneal pachymetry maps showed mild thinning in the inferior part of the cornea of both eyes (Fig. 1E and F). We diagnosed her with KC in the right eye based on the age of onset and the corneal topography. We diagnosed her left eye with PMCD based on the area of corneal thinning and the band-shaped protrusion. Rigid gas-permeable contact lenses were prescribed for both eyes to treat the astigmatism due to the abnormal corneal shapes. The decimal visual acuity with the contact lenses was 1.0 for the right eye and 1.5 for the left eye. The corneal topography obtained 2.5 years after the initial examination showed almost no progression in the right eye, but the inferior part of the cornea in the left eye was approximately 100 μm thinner than at the initial visit. The axial power map showed a typical crab claw pattern (Fig. 2A-D). She complained of a foreign body sensation in both eyes, and our examination of the upper eyelid conjunctiva showed an increase in the degree of conjunctival follicles and edema. She was prescribed a topical epinastine (antihistamine) ophthalmic solution. Three years after the initial visit, the patient visited our hospital with complaints of pain, ocular discharge, and tearing in the left eye. Slit-lamp biomicroscopy showed corneal edema and a corneal perforation in the inferior part of the cornea of the left eye, and the anterior chamber was completely collapsed (Fig. 3A). The patient was treated with a bandage contact lens, a systemic carbonic anhydrase inhibitor, and topical atropine. One week later, the bandage contact lens was removed after we confirmed that the anterior chamber was fully formed and that the leakage of aqueous humor had stopped. One year after the healing, a corneal opacity was present (Fig. 3B and D), but the corneal astigmatism had been reduced compared to that before the perforation (Fig. 3C). In addition, the corneal thickness around the perforated area was preserved (Fig. 3D).
Her corrected decimal BCVA at the final visit was 1.0 with a correction of −2.25 DS = −4.00 DC Ax 20° in the left eye. Discussion Our results showed that there was progressive thinning of the cornea in the left eye with PMCD over a three-year period, which led to a corneal perforation. Marked corneal edema around the perforated area suggested that the corneal perforation occurred secondary to acute hydrops. On the other hand, the right eye, diagnosed with KC, showed little change in its corneal topography. Eyes with PMCD and KC can develop acute hydrops, but the frequency in PMCD is not known because PMCD is very rare. 2 Cases leading to corneal perforation in PMCD are even rarer; according to previous reports, 20 eyes with a corneal perforation have been identified in 16 PMCD patients (Table 1). The mean age at the time of the corneal perforation was 50.1 ± 14.6 years. We examined the published findings in 18 eyes with KC leading to corneal perforation that were reported after 1987, and the mean age at the time of corneal perforation was 36.9 ± 16.3 years (Table 1). [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34] We believe that corneal degeneration in eyes with PMCD progresses at a later age than in KC, which then results in a later age of corneal perforation in eyes with PMCD than in eyes with KC. The question then arises as to whether the age of onset of these corneal disorders differs even when they occur in the same individual. Earlier studies in Japan reported that 9 of 108 (8.3%) patients diagnosed with PMCD had KC or suspected KC in the contralateral eye. 3 However, there have been no reports on whether the timing of progression of these two disorders coincides when they occur in the same individual. Furthermore, in our case, the axial power map of the right eye at 3 years after the initial examination showed crab claw-like changes. This suggested that the patient may have developed both KC and PMCD in the right eye. According to Barraquer et al., 30% of patients with PMCD have atypical corneal topographic findings, and KC and PMCD were found in the same eye in those cases. 6 In our case, the degree of progression of the corneal degeneration differed between the right eye, which may have had both KC and PMCD, and the left eye, which had only PMCD. However, there has been no report describing the detailed clinical course of an eye that developed both KC and PMCD. We will continue to monitor the patient to follow the changes in the corneal topography and to determine whether there is any difference in the speed of corneal degeneration between the left eye and the right eye. We examined earlier publications to determine whether acute hydrops had occurred in the 21 eyes with PMCD that developed a corneal perforation, including our case, and found that there were 15 eyes with acute hydrops and 6 eyes without. Cases without acute hydrops were characterized by relatively wide areas of corneal thinning even before the perforation and by a large corneal perforation. [7][8][9][10] In addition, there was one case of a corneal perforation that occurred during a fundus examination with scleral indentation. 10 Conservative treatments failed in these cases, and corneal suturing or corneal transplantation was required. In cases of extensive corneal thinning, the patient may need to be informed of the risk of corneal perforation and advised to avoid any degree of pressure on the eye.
Allergic predispositions such as vernal conjunctivitis, asthma, and other allergic disorders are risk factors for acute hydrops in eyes with KC. 5 Although there are no reports on the risk factors for acute hydrops in eyes with PMCD, a study of 108 patients with PMCD in Japan reported that 22.2% of the patients had an allergic predisposition. 3 Allergic predispositions, such as atopic dermatitis, were found in 6 of the 17 cases of KC that developed a corneal perforation, and 6 of the 18 patients with PMCD who developed a corneal perforation had an allergic predisposition and/or a habit of rubbing the eyes (Table 1). Furthermore, an analysis of the risk factors for acute hydrops in 22 cases of corneal ectatic disorders showed that 95% of the patients had seasonal allergic disease and 91% had allergy associated with eye-rubbing behavior. 11 These reports and the present case suggest that an allergic predisposition is most likely a risk factor for acute hydrops and corneal perforation in eyes with PMCD. Currently, there is no effective treatment to prevent the progression of PMCD, and we should consider aggressive treatment for cases with allergy-associated eye-rubbing behavior to reduce the risk of the development of acute hydrops and corneal perforation. In our case, epinastine eye drops were administered to reduce the eye-rubbing behavior, but the disease had already progressed to corneal perforation. If we had started the anti-allergic treatment earlier, we might have avoided the acute hydrops and corneal perforation. We reviewed the cases that had undergone treatment of a corneal perforation associated with PMCD and found that conservative treatments with tissue adhesive and/or therapeutic contact lenses were performed in 13 of 20 eyes. 7,9,10,[12][13][14][15][16] The conservative treatment was successful in 4 eyes, 4 eyes required corneal sutures, and 12 eyes required corneal transplantation. [7][8][9][10][12][13][14][17][18][19] Three eyes that had been treated by therapeutic contact lenses alone healed, 15,16 and one eye was successfully treated by a combination of tissue adhesive and therapeutic contact lenses. 13 Although corneal transplantation has been the most frequently performed treatment, we need to be aware of the possibility of corneal astigmatism induced by the surgery. Therefore, we should attempt conservative treatments with therapeutic contact lenses, especially in cases of relatively small perforations. Conclusions We examined a case with PMCD in one eye and keratoconus in the contralateral eye. Our findings showed that the degree of progression differed between the eyes, and an allergic predisposition was likely a risk factor for acute hydrops and corneal perforation in the eye with PMCD. Thus, clinicians should be aware of allergic predisposition and eye-rubbing behavior when following these corneal disorders. Patient consent Written informed consent was obtained from the patient for the publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal.
PrPSc spreading patterns in the brain of sheep linked to different prion types
Scrapie in sheep and goats has been known for more than 250 years and belongs nowadays to the so-called prion diseases, which also include, e.g., bovine spongiform encephalopathy (BSE) in cattle and Creutzfeldt-Jakob disease in humans. According to the prion hypothesis, the pathological isoform (PrPSc) of the cellular prion protein (PrPc) comprises the essential, if not exclusive, component of the transmissible agent. Currently, two types of scrapie disease are known: classical and atypical/Nor98 scrapie. In the present study we examine 24 cases of classical and 25 cases of atypical/Nor98 scrapie with the sensitive PET blot method and validate the results with conventional immunohistochemistry. The sequential detection of PrPSc aggregates in the CNS of classical scrapie sheep implies that after neuroinvasion a spread from the spinal cord and obex to the cerebellum, diencephalon and frontal cortex via the rostral brainstem takes place. We categorize the spread of PrPSc into four stages: the CNS entry stage, the brainstem stage, the cruciate sulcus stage and finally the basal ganglia stage. Such a sequential development of PrPSc was not detectable upon analysis of the present atypical/Nor98 scrapie cases. The PrPSc distribution in one case of atypical/Nor98 scrapie in a presumably early disease phase suggests that the spread of PrPSc aggregates starts in the di- or telencephalon. In addition to the spontaneous generation of PrPSc, an uptake of the infectious agent into the brain that bypasses the brainstem and starts its accumulation in the thalamus needs to be taken into consideration for atypical/Nor98 scrapie. Introduction Scrapie in sheep and goats, which has been reported for more than 250 years [1], belongs to the transmissible spongiform encephalopathies (TSEs), also known as prion diseases. This group of fatal diseases includes bovine spongiform encephalopathy (BSE) in cattle, chronic wasting disease (CWD) in deer and Creutzfeldt-Jakob disease (CJD) in humans. TSEs are characterized by the accumulation of protein aggregates which are relatively stable against proteolysis. According to the prion hypothesis, a misfolded protein is the relevant part of the infectious agent [2]. It is widely accepted that this "proteinaceous infectious particle" is the pathological isoform of the physiological prion protein (PrPc), which is encoded by a cellular gene [3]. Recently, it has been shown that infectivity can be generated from a synthetic misfolded form of the prion protein [4]. Depending on the kind of prion disease, the pathological prion protein (PrPSc) is detectable solely in the central nervous system (CNS) or may also be found in other tissues, especially in those of the lymphoreticular system (LRS) [5]. In the worldwide population of small ruminants, BSE and scrapie are considered to be the relevant TSEs affecting sheep and goats. Scrapie, however, is not a homogeneous disease form, as demonstrated by the existence of several strains upon transmission to rodents [6] and the peculiar molecular properties of the sheep-passaged scrapie isolate CH1641 [7,8]. The discovery of a novel type of scrapie in Norway in 1998 (Nor98), which was clearly distinguishable from all previously reported forms of scrapie [9] and was soon after detected in several other countries, added to the diversity of this TSE [10].
In our present work we concentrate on scrapie field cases that include cases of "classical" scrapie as well as "atypical"/Nor98 scrapie. Obvious differences exist between the two scrapie forms with regard to the epidemiology of the disease and the properties of the proteinaceous particle. The latter include Western blot profiles and the stability against denaturation and proteases [11][12][13]. The two forms of sheep scrapie also differ with regard to the genotypes affected. Amino acids at codons 136 (A/V), 154 (H/R) and 171 (H/Q/R) are considered to markedly influence susceptibility to classical scrapie; the most susceptible alleles are V136R154Q171 (VRQ) and A136R154Q171 (ARQ), while the A136R154R171 allele (ARR) seems to confer a certain resistance against the disease [14,15]. Atypical/Nor98 scrapie affects a number of genotypes, including the ARR allele, and animals with the AHQ allele or a phenylalanine (F) instead of a leucine (L) at codon 141 in the ARQ allele are proportionally overrepresented [16][17][18]. The results of a number of case reports and studies have shown that the deposition form and distribution of PrPSc aggregates in atypical/Nor98 scrapie sheep are clearly distinct from classical scrapie; immunohistochemical methods and, recently, the sensitive PET blot method have been used for the detection of PrPSc in the ovine brain [9,[19][20][21][22][23]. Formerly, the PET blot had only been used for the sensitive detection of PrPSc in extracerebral organs of classical scrapie sheep [24][25][26][27]. Surprisingly, the anatomical distribution of PrPSc in the ovine brain found in the literature is more thoroughly documented for atypical/Nor98 scrapie than for classical scrapie. Although the pathogenesis of classical scrapie is well studied [28,29], detailed descriptions of how the infectious agent spreads once it has reached the brain seem to be lacking for both scrapie types. For classical scrapie, numerous reports exist on the different forms of PrPSc that can be found in the brain tissue and on the presence of PrPSc aggregates in peripheral neural and non-neural tissues, at least in sheep carrying susceptible PrP genotypes. Also, the entry of the infectious agent into the CNS has been described thoroughly for field classical scrapie infections and has been shown to agree with the oral infection of sheep with BSE and scrapie as well as with orally infected rodent scrapie models [29][30][31][32]. The infectious agent apparently enters the CNS via the intermediolateral column of the thoracic spinal cord (Th8-Th10 in natural scrapie infection) and the dorsal motor nucleus of the vagus nerve (DMNV) in the brainstem. Unfortunately, reports on the spread of ovine PrPSc from the brainstem into the brain are usually not very detailed. In atypical/Nor98 scrapie, most of the PrPSc load in affected sheep is found in the cerebellum and cerebrum. It still needs to be determined whether or not this novel disease is a sporadic prion disease. If sheep could acquire the disease from their environment, where would the infectious agent enter the CNS? The pattern of PrPSc deposition is apparently reproduced when atypical/Nor98 scrapie is transmitted from one sheep to another via intracerebral inoculation [33]. In this study the PrPSc deposition pattern in the CNS of 24 classical and 25 atypical/Nor98 field scrapie sheep was determined using the sensitive and specific PET blot method.
Different amounts of PrPSc in the CNS of classical scrapie cases have been assigned to different stages of PrPSc spread into the brain, depending on the affected neuroanatomical structures. Material The brains and, if available, the spinal cords as well as lymphatic tissue (tonsils and/or retropharyngeal lymph nodes) were collected from 49 scrapie field cases and 6 further sheep from scrapie-free flocks as controls. Scrapie positivity was diagnosed either ante mortem by tonsil biopsy or post mortem using the respective methods stipulated by the EU VO 999/2001 at that time (samples were collected over a time span of 12 years). The scrapie-positive group included 19 German and 5 Norwegian sheep diagnosed with classical scrapie and 24 Norwegian atypical/Nor98 scrapie cases, plus one German atypical/Nor98 case. The control group was made up of six German sheep derived from scrapie-free flocks. The PrP genotypes were determined either by PCR and melting curve analysis [34] or by automated sequencing as described previously [9]. Further information on the individual animals, including age, breed, genotype, presence of clinical signs and availability of LRS and spinal cord, is listed in Table 1. Depending on the circumstances under which the samples were collected, the post mortem times of the tissues varied between 2 h and 4 days. Usually one half of the brain/tonsil/lymph node was fixed in 4% buffered formaldehyde, cut into slices and embedded in paraffin within five to seven days, while the other half was frozen and stored at -80°C. Histopathology One to three μm-thick CNS/lymphatic tissue sections were cut, collected on silane-coated glass slides and stained with haematoxylin and eosin (H&E). Brain sections were also stained with Luxol Fast Blue and then counterstained with periodic acid-Schiff reagent (LFB/PAS) for the orientation and discrimination of neuronal nuclei and neural tracts. PET blot The PET blot procedure followed the protocol described previously [23,35] using the monoclonal antibody (mAb) P4 (R-Biopharm, Darmstadt, Germany), which had proved to give the best results regarding sensitivity and specificity for the detection of PrPSc in classical and atypical/Nor98 sheep scrapie [23]. In brief, immunolabeling of PrPSc was performed after a 1-3 μm tissue section had been placed on a nitrocellulose membrane (0.45 μm, Bio-Rad, Hercules, CA, USA), which was then deparaffinized and rehydrated. This was followed by treatment with proteinase K (250 μg/mL; Sigma-Aldrich, MO, USA) overnight at 56°C and the decontamination of the membranes in 4 M guanidine thiocyanate (GdnSCN) for 30 min. Membranes were blocked with 0.2% casein in PBS containing 1% Tween before the primary antibody (mAb P4) was applied at 1:5000 in TBST. An alkaline phosphatase-coupled goat anti-mouse antibody (Dako, Glostrup, Denmark) and the formazan reaction with NBT/BCIP were used to visualize the result. Thorough rinsing of the membranes with TBST was required between the different steps. Examination and evaluation of immunolabelled sections From each sheep all available sections of the CNS and the LRS were examined with the PET blot, and the intensity of the PrPSc staining as well as the forms and distribution of the PrPSc deposition were evaluated. The presence of PrPSc deposits and the deposition forms in CNS and LRS sections were verified by immunohistochemistry.
This was usually done using either mAb P4 (German cases) or mAb F89/160.1.5 in combination with mAb 2G11 (Norwegian cases), but if considered necessary, immunohistochemistry was repeated with further antibodies as stated above. The intensity of PrPSc deposits in the PET blots was evaluated on a scale of 0 to 4 (0 = no PrPSc deposits visible; 0.5 = very few indefinable deposits; 1 = very few distinct PrPSc deposits; 1.5 = few distinct PrPSc deposits; 2 = moderate PrPSc deposits, all deposition forms well distinguishable; 2.5 = moderate to pronounced PrPSc deposits, all deposition forms well distinguishable; 3 = pronounced PrPSc deposits, deposition forms partly interfering with each other; 3.5 = pronounced PrPSc deposits, deposition forms interfering with each other; 4 = maximal PrPSc deposits, deposition forms interfering with each other). The value system of the scale itself was established and agreed on by two independent persons who routinely evaluate PET blots. Western blot analysis Ten percent tissue homogenates (wt/vol) were either prepared in PBS containing 0.5% desoxycholic acid sodium salt (DOC) using glass grinding tubes and pestles, or 20% homogenates were obtained by the standard sampling procedure of the TeSeE Western Blot Kit (Bio-Rad, Hercules, CA, USA). Twenty percent homogenates were processed using the TeSeE sheep/goat Western Blot Kit according to the manufacturer's instructions. The antibody P4 was added at a dilution of 1:1000 to the primary antibody of the kit. Ten percent homogenates were subjected to a different protocol using homemade 15% acrylamide gels, a 0.45 μm nitrocellulose (NC) membrane (Bio-Rad) for semi-dry blotting and mAb P4 (1:2000). The membrane was treated with 4 M GdnSCN and blocked with 0.2% casein in PBS including 1% Tween, for 30 min each, before the primary antibody was applied overnight at 4°C. An HRP-conjugated goat anti-mouse antibody (Dako, Carpinteria, CA, USA) and SuperSignal West Femto Maximum Sensitivity Substrate (Perbio, Erembodegem, Belgium) were used to visualize the result on x-ray film. The molecular size of PrPSc was compared only within one system. Western blot In all sheep that had been classified as atypical/Nor98 scrapie cases, the characteristic small fragment of 11-12 kDa [9] was present in CNS tissue samples after proteinase K digestion. The typical triplet pattern of 18-30 kDa found in all classical scrapie sheep was clearly distinguishable from this. We usually used CNS tissue for Western blotting to determine the molecular profile. Only in one sheep with classical scrapie were the PrPSc amounts in the brainstem so minimal that lymphatic tissue was needed to perform a valid Western blot. To ensure that the Western blot PrPSc patterns of different tissues were comparable within one sheep, lymphatic tissues of further sheep with classical scrapie (from this study) were examined as well. PET blot and immunohistochemistry Disease-associated prion protein could be identified in the CNS of all scrapie sheep with the PET blot, and no PrPSc was detectable in the tissues of the negative control group. As previously described, immunohistochemical methods were able to confirm the presence of PrPSc deposits in all sheep except for one atypical/Nor98 case, despite the use of a panel of antibodies [23]. Immunolabeling with the PET blot method allowed the identification of a number of deposition forms of PrPSc in the CNS, and all were confirmed by immunohistochemical methods.
As described before [23], PrP Sc was detectable in the LRS tissue of all classical scrapie sheep where it was present, but in none of the atypical/Nor98 scrapie animals with available LRS tissue could PrP Sc be found (for availability of lymphatic tissue see Table 1). Figure 1a/b shows PET blot and immunohistochemical staining of the PrP Sc aggregates in the follicle of a tonsil derived from a classical scrapie case.

Deposition forms

Intra- and perineuronal PrP Sc aggregates were found with the PET blot solely in classical scrapie (Figure 1f), as were subpial, subependymal, and perivascular deposits. Extraneuronal PrP Sc aggregates in the brains of sheep affected by classical scrapie often had a ramified appearance and were found in grey and white matter structures (Figure 1d and 1f). They were regarded as glia-associated PrP Sc aggregates and found to be relatively conspicuous in the cerebellar molecular layer, where they took a stellate form [36] (Figure 1d). In contrast, PrP Sc aggregates found in the white matter of atypical/Nor98 scrapie sheep were always well-defined granules that varied slightly in size and were occasionally arranged like pearls on a string. The latter deposition form could also be observed in classical cases, but here linear PrP Sc was sometimes present as well. PrP Sc deposits in the grey matter of atypical/Nor98 scrapie cases generally showed a fine granular pattern, termed "synaptic/reticular" rather than "fine granular" in human TSEs [12,37] (see Figure 1c). In some atypical/Nor98 scrapie cases, larger plaque-like aggregates could be seen in the substantia nigra, basal ganglia, thalamic nuclei and white matter. However, a differentiation between real plaques (amyloid) and plaque-like deposits is not possible with immunohistochemical detection methods or with the PET blot method, as demonstrated before [38]. A discrimination between globular and punctate deposits in the white matter of atypical/Nor98 cases [11,21] was irreproducible with the PET blot, which is why the term "granular" was chosen for the PrP Sc deposits present in the white matter. Punctate PrP Sc deposits, comprising smaller aggregates than granular PrP Sc deposits but more defined than the reticular PrP Sc aggregates, were detected in the grey matter of classical scrapie sheep. Small deviations in the composition of the complex deposition pattern could not be related to genotypes in the sheep examined.

Distribution of PrP Sc in the CNS

Sequential appearance of PrP Sc distribution in the CNS of classical scrapie sheep

To determine the sequential appearance of PrP Sc in the CNS, all field cases of classical scrapie were subjected to a thorough examination regarding the anatomical structures affected by PrP Sc deposition. All cases were then arranged according to the total amount of PrP Sc they had accumulated, and the occurrence of PrP Sc in a panel of 127 neuroanatomical loci was compared between the cases. From this evaluation arose a classification of the classical scrapie cases into four stages of PrP Sc spread in the CNS (see Figures 2, 3, 4 and 5). Criteria for these stages turned out to be certain neuroanatomical
structures whose involvement marked a stage, meaning that the respective structure accumulated PrP Sc aggregates (with a minimal score of 1) in all animals belonging to this stage and the following stage/stages. The stages are described in detail below and visualized in Figures 3 and 4.

Figure 2. Classification of the PrP Sc spread during disease development in classical scrapie. The examined classical scrapie cases were classified into four stages of PrP Sc spread according to certain affected neuroanatomical sites (PET blots, mAb P4). In the CNS entry stage (a-d) only discrete PrP Sc deposits are visible in the obex region, while in the brainstem stage (e-h) PrP Sc aggregates are clearly visible in the brainstem and start to appear in more rostral structures. Once PrP Sc deposits can be found in the deep cortical layers of the frontal cortex (i), the cruciate sulcus stage (i-l) is reached. In the basal ganglia stage, intense deposits in basal ganglia and thalamic nuclei can be found (m-p). Brain sections shown for the first, third and fourth stage derived from sheep with the genotype ARQ/ARQ, while the sheep whose brain sections are depicted in the brainstem stage carried the genotype ARH/VRQ (bar = 5 mm).

CNS entry stage: One sheep showed only few discrete PrP Sc deposits in the brain, which were restricted to the dorsal motor nucleus of the vagus nerve (DMNV), the solitary tract nucleus and the spinal trigeminal tract in the brainstem. Further PrP Sc aggregates could be detected in the substantia intermedialis lateralis and centralis of the thoracic spinal cord. This first stage, where PrP Sc is detectable only in these CNS areas, can be considered the "CNS entry stage" in accordance with studies of other authors who have monitored the ascent of PrP Sc from the intestines to the CNS [29,30].

Brainstem stage: In the second stage, all segments of the spinal cord and all nuclei of the obex region accumulate PrP Sc, which also disseminates to the more rostral parts of the medulla; this may therefore be called the "brainstem stage". In the caudal medulla the cellulae marginales and substantia gelatinosa of the spinal trigeminal tract nucleus show a very intense staining. The mesencephalon and thalamus display discrete PrP Sc deposits, which are generally found to be subpial and/or perivascular, while the mamillary body, habenular nuclei and the hypothalamic nuclei accumulate substantial amounts of PrP Sc. The cerebellar nuclei accumulate PrP Sc if the rostral medulla is largely involved, and focal deposits of PrP Sc are visible in the cerebellar cortex.

Cruciate sulcus stage: During the next stage, the mesencephalon, amygdaloid nuclei, septal nuclei, optic tract, cerebral peduncle, hippocampus formation, frontal cortex and subcortical white matter are increasingly affected. Regarding the frontal cortex, it is notably the sulcus cruciatus - and in a number of cases only this part of the cortex - that accumulates PrP Sc in its deeper cortical layers (see Figure 2i). This stage is therefore designated the "cruciate sulcus stage". PrP Sc deposits in the cerebellar cortex are not yet evenly distributed.

Basal ganglia stage: In the final stage, PrP Sc deposits can also be seen in the medial thalamic nuclei (mediodorsal, ventrolateral, ventral posterior and anterior group), the corpora geniculata and the basal ganglia. A positive staining for PrP Sc in the latter determines a classical case in our definition of the "basal ganglia stage". The white matter also displays remarkable amounts of PrP Sc, which are strongly linked to perivascular distribution.

All stages are depicted in Figures 2, 3 and 4, and the stage at which PrP Sc reaches a respective neuroanatomical site is indicated in Figure 5 using a colour code.
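The staging criterion described above is effectively a cumulative rule: a case belongs to the highest stage whose marker structures all show PrP Sc with a score of at least 1. The following sketch is our illustration of that rule only; the marker names are simplified placeholders, not the full 127-locus panel of the study.

# Assign the highest cumulative stage whose marker sites all score >= 1.
MARKERS = [
    ("CNS entry stage", ["DMNV", "solitary tract nucleus", "spinal trigeminal tract"]),
    ("brainstem stage", ["rostral medulla"]),
    ("cruciate sulcus stage", ["frontal cortex deep layers"]),
    ("basal ganglia stage", ["basal ganglia"]),
]

def assign_stage(scores):
    """scores maps a neuroanatomical site to its PET blot score (0-4)."""
    stage = "no CNS involvement"
    for name, sites in MARKERS:
        if all(scores.get(site, 0) >= 1 for site in sites):
            stage = name          # all markers of this stage are involved
        else:
            break                 # stages are cumulative, so stop here
    return stage

# Example: a hypothetical case with brainstem involvement only.
print(assign_stage({"DMNV": 3, "solitary tract nucleus": 2,
                    "spinal trigeminal tract": 2, "rostral medulla": 1.5}))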
In the sheep examined in this study we could not find any influence of the different genotypes on the neuroanatomical distribution of PrP Sc aggregates.

Comparison of PrP Sc deposition patterns in classical and atypical/Nor98 scrapie

PrP Sc deposits in atypical/Nor98 scrapie cases were examined and evaluated in the same way as for the classical field cases. In contrast to the classical scrapie cases, differentiating distribution/spread stages of PrP Sc in the CNS was not feasible for the atypical/Nor98 scrapie cases. In Figure 6, the same brain sections that illustrate the different stages of classical scrapie in Figure 2 are depicted for a case of atypical/Nor98 scrapie. In all atypical/Nor98 scrapie sheep where brainstem material was available (n = 15), apart from one (see below), PrP Sc aggregates were detectable in the rhombencephalon and mesencephalon. Regularly affected neuroanatomical structures were the spinal trigeminal nucleus, reticular formation, pyramid, pontine fibres, substantia nigra and cerebral peduncle. In the spinal cord the corticospinal tract and substantia gelatinosa accumulated PrP Sc in most cases (Figure 1c). Certain grey matter structures, such as the DMNV, hypoglossal nucleus, dorsal tegmental nucleus, oculomotor nucleus, red nucleus and central grey of the mesencephalon, never displayed any PrP Sc in the examined atypical/Nor98 scrapie cases.

Figure 6. PrP Sc distribution in the brain of atypical/Nor98 scrapie: Brain sections of an atypical/Nor98 scrapie case stained with the PET blot (mAb P4) show a different PrP Sc distribution than those of classical scrapie cases shown in Figure 2 (bar = 5 mm). Brain sections derived from a sheep with the genotype ARQ/AHQ.

These listed neuroanatomical sites, however, accumulated large amounts of PrP Sc in the respective stage of PrP Sc distribution in the CNS of classical scrapie sheep, as explained above (Figures 2, 3, 4 and 5). There were no PrP Sc aggregates detectable in the cerebellar nuclei of the examined atypical/Nor98 scrapie cases, in contrast to the classical scrapie cases as described above. The synaptic or reticular PrP Sc staining pattern in the cerebellar cortex of atypical/Nor98 scrapie sheep was in most cases more intense in the molecular than in the granular layer (Figure 1e). Intra- and extracellular complex PrP Sc aggregates in the cerebellar cortex of classical scrapie sheep were predominantly present in the granular layer and surrounding the Purkinje cells; the molecular layer displayed mainly glia-associated PrP Sc deposits that took a stellate form (Figure 1d). The cerebellar peduncles and the white matter of the cerebellum itself showed PrP Sc aggregates in both scrapie types. In the diencephalon of most atypical/Nor98 scrapie sheep, the corpora geniculata, medial thalamic nuclei and reticular nucleus accumulated PrP Sc aggregates. In all atypical/Nor98 cases where the anterior striatum could be examined (n = 14), PrP Sc deposits were also present in the caudate nucleus and putamen. The white matter of diencephalon and telencephalon showed PrP Sc deposits in both types of sheep scrapie. In atypical/Nor98 scrapie, these were mainly confined to the subcortical fibres and certain white matter tracts, e.g. the corpus callosum or the commissura rostralis (Figure 7d, arrow), while the distribution in classical scrapie was more disseminated.
There was one case in which PrP Sc deposits were detectable with the PET blot only in the supratentorial (cerebral) brain structures and to a very small degree in the cerebellar cortex. The brainstem, including midbrain and spinal cord, was completely spared in this case, which was eventually considered to represent an early stage of atypical/Nor98 scrapie [23]. In Figure 7, the contrasts in PrP Sc intensity existing in the grey and white matter between the two types of scrapie are demonstrated in a case of atypical/Nor98 scrapie and a classical scrapie case of the "basal ganglia stage": in classical scrapie it is the centromedial amygdaloid nuclei (Figure 7b) as well as the septal nuclei and basal ganglia (Figure 7d) that show substantially more PrP Sc than the external capsule (Figure 7b, arrow) and the rostral commissure (Figure 7d, arrow). In atypical/Nor98 scrapie, this principle turns out to be exactly the opposite, with the external capsule (Figure 7a, arrow) and the rostral commissure (Figure 7c, arrow) accumulating rather intense PrP Sc deposits in contrast to the adjacent grey matter. The lateral olfactory tract displayed PrP Sc aggregates in both scrapie types with the respective PrP Sc deposition patterns described above. Yet, the Islands of Calleja - clusters of neuronal granular cells in the olfactory tubercle - showed dense PrP Sc deposits solely in classical scrapie cases and were completely devoid of PrP Sc in atypical/Nor98 scrapie sheep. Regarding the hippocampus formation in classical scrapie cases, there was usually a more intense staining of the hippocampus and the fissura hippocampi compared to the dentate gyrus. In contrast to the atypical/Nor98 scrapie cases, there was no obvious accentuation of any layers. Atypical/Nor98 scrapie sheep showed a rather intense PrP Sc staining of the granular layer of the dentate gyrus, the fissura hippocampi and the interconnective fibres between hippocampus and alveus (similar to the subcortical white matter) in comparison to the adjacent layers. The pyramidal layer of the hippocampus appeared to be completely devoid of PrP Sc deposits. The intensity of PrP Sc staining in a single case was usually in agreement with the intensity of PrP Sc deposits that could be found in the cerebral cortex of both scrapie types. As mentioned above, the complex PrP Sc aggregates in classical scrapie were mainly confined to the deeper cortical layers (laminae V and VI), while reticular/synaptic PrP Sc deposits in the cortices of atypical/Nor98 scrapie sheep were distributed more evenly, although an accentuation of laminae I and IV could be noted in some cases. As in classical scrapie, differences regarding the distribution of PrP Sc deposition could not be related to genotypes.

Discussion

In this study 24 cases of classical and 25 cases of atypical/Nor98 scrapie were examined with the PET blot method, focusing on the similarities and differences in the distribution of PrP Sc deposits that were detectable with this method. Recently the PET blot has been shown to provide a sensitive and specific detection of PrP Sc in both types of sheep scrapie, in the same manner as had been previously shown for human, bovine and rodent neuronal and non-neuronal tissues [30,35,38-41]. The high sensitivity of this method allows PrP Sc deposits to be detected even in FFI patients, where conventional immunohistochemistry fails to detect them, and contrasts with Western blotting, which requires up to 1 g of tissue equivalent [42].
The PET blot provides, apart from its sensitivity and specificity, a good overview of where to find PrP Sc in a brain section (Figure 2), as no counterstaining is necessary. The fine resolution of the immunolabeling gives a good impression of the structures that accumulate PrP Sc (Figure 1f), but the general delineation of the single cell is better with immunohistochemistry, which is why these two methods complement each other in a sensible way.

Neuroinvasion and spread of PrP Sc in the ovine brain

In this study, we also give a more detailed account of how the disease-associated PrP aggregates seem to spread in the CNS tissue of sheep infected with classical scrapie. The sequential detection of PrP Sc aggregates in the CNS of classical scrapie sheep implies that a cell-to-cell spread takes place from the entry sites in the spinal cord and obex to the cerebellum, diencephalon and frontal cortex via the rostral brainstem. From these entry sites we conclude that the vagus nerve (for the DMNV) and sympathetic fibres (for the spinal cord) are the structures that transport the infectious agent to the CNS. This is very similar to the results obtained in hamsters after oral inoculation with the 263K scrapie strain [43]. The cerebellum may also receive PrP Sc via the cerebellar tracts of the spinal cord. Noticeable perivascular PrP Sc deposition in the brains of scrapie-affected sheep also raises the possibility that the infectious agent reaches the brain via the haematogenous route [44]. The distribution of brain metastases in humans reflects a haematogenous entry into the brain, as it is proportional to the cerebral blood flow per area. From this one can conclude that a general PrP Sc uptake from the blood would cause quite a different cerebral distribution pattern of PrP Sc deposits than the one we observed [45]. There are three other possible explanations for the perivascular accumulation of PrP Sc aggregates in classical scrapie. Cells of glial origin, e.g. microglia, might use the blood vessels as a structural lead for their movement and carry PrP Sc molecules with them, possibly also distributing them among the astrocytes forming the blood-brain barrier (BBB). This would be a vascular spread in the broader sense. As a second possibility, microglia cells that have incorporated PrP Sc move to the blood vessels in order to dispose of the aggregates, and this leads to a perivascular deposition of the aggregates. A third way for PrP Sc to reach blood vessels could be a spread via sympathetic nerve fibres of the Plexus nervorum perivascularis. Haematogenous neuroinvasion has also been discussed with regard to the circumventricular organs (CVOs), due to the fact that these are usually affected in scrapie-infected sheep and that they are not protected by the BBB [46]. The possibility that the CVOs might be in contact with PrP Sc from the blood during the pathogenesis of the disease cannot be excluded, but our results argue against a major involvement of the CVOs in neuroinvasion. In a very early case the DMNV was affected, but the area postrema and further CVOs were devoid of PrP Sc (Figure 2d). This agrees again with the results obtained for the oral infection of hamsters with scrapie [30]. In contrast to the classical scrapie cases, a sequential development of PrP Sc distribution cannot be seen upon analysis of the present atypical/Nor98 scrapie cases. The PrP Sc distribution in one sheep of a presumably early disease phase suggests that the aggregation of PrP Sc has its origin in the di- or telencephalon.
A spontaneous genesis of misfolded PrP could arise in the cerebral cortex. On the other hand, an ascending spread of the infectious agent that bypasses the brainstem and enters the CNS via sensory nerve fibres should be taken into consideration, e.g. proprioceptive fibres [47] or the spinothalamic tract. This would lead to further spreading of PrP Sc from the thalamic nuclei to the cerebellar and cerebral cortex and from these to the brainstem and spinal cord, e.g. via the corticospinal tract.

Where does the spread of Nor98-PrP Sc start in the brain?

It has been speculated by Nentwig et al. [48] that the PrP Sc deposits and histopathologic lesions in atypical/Nor98 scrapie possibly evolve from the cerebrum to the cerebellum and the brainstem, but according to their examination of six sheep brains, this concept would not explain the PrP Sc distribution in one sheep where they found PrP Sc mainly in the cerebellum. However, immunohistochemistry - as used by these authors - is sometimes not able to detect the fine reticular deposits, e.g. those seen in the cortex of Creutzfeldt-Jakob disease type 1, especially in the rare VV1 subtype [49]. The sensitive PET blot method, in contrast, is able to visualize these reticular deposits [38]. PrP Sc deposits in the one case described by Nentwig et al. could therefore simply have been missed in the cortex by immunohistochemistry. If this proved to be correct, according to the argumentation of Nentwig et al., PrP Sc deposition and histopathologic lesions could indeed evolve from the cerebrum into the cerebellum and the brainstem. The 15 whole brains of atypical/Nor98 scrapie sheep examined by Moore et al. [21] should accordingly represent more or less the final stage of disease, as PrP Sc can generally be found in all parts of the brain, including the brainstem. In other reports on the occurrence of atypical/Nor98 scrapie, cases have been described in which no PrP Sc was detectable by immunohistochemistry in the obex region at all, but only in the cerebellum and cerebrum [50-54]. If the misfolding of PrP c in atypical/Nor98 scrapie really does start in the cerebrum, it is obvious why early stages are not present in the worldwide pool of preserved atypical/Nor98 brains, as only the sampling of brainstem and cerebellum is compulsory in small ruminants according to EU regulations. Thus the question of whether PrP Sc accumulation might start sporadically in the cerebrum - and if so, at one or more sites at the same time - cannot be resolved by this or any other current study using field cases of atypical/Nor98 scrapie. This situation is comparable to the one with CJD type 1, where a spontaneous misfolding of PrP c in the cerebral cortex and a caudal spread from there is assumed, but not proven [55]. The incidence of atypical/Nor98 in sheep is higher than that of CJD in humans [17]. A case control study of atypical/Nor98 scrapie has shown that animal movement does not seem to be a factor for the transmission of atypical/Nor98 scrapie between flocks; thus if sheep were to acquire this prion disease from their environment, its contagiousness would indeed be very low [56]. It has been speculated that this might be due to the relatively low protease stability of atypical/Nor98 PrP Sc, which could also explain the lack of intracellular PrP Sc deposits [33]. There are certainly small differences between the PrP Sc distribution detected by Moore et al. [21] in their described atypical/Nor98 scrapie cases and the one revealed here by the PET blot method.
For instance, in the present atypical/Nor98 scrapie material, PrP Sc was never detectable in the cerebellar nuclei. Also, the affected parts of the hippocampus appear to be different. This might be due to differences in the treatment of tissue, the methods and/or differences in the antibodies used (mAb 2G11 versus mAb P4). As previously reported, perineuronal staining has also been detected in the substantia nigra of some atypical/Nor98 scrapie sheep using immunohistochemistry [11], whereas in our study only plaque-like PrP Sc deposits could be seen in this neuroanatomical structure. Similarly, neuronal deposits could be found in many affected sites of classical scrapie, but in contrast to previous publications [22], neither PET blot nor immunohistochemistry revealed PrP Sc in the Purkinje cells of the cerebellum. It is known that especially intraneuronal immunoreactivity needs to be interpreted with caution [10]. However, the congruence between previous reports on PrP Sc deposition patterns and the present results is obvious.

Conclusion

In summary, this study gives a basic description of PrP Sc deposition patterns in classical as compared to atypical/Nor98 scrapie cases using the sensitive and specific PET blot method. We were able to show a sequential appearance of PrP Sc aggregates in the CNS of sheep with classical scrapie, but not in atypical/Nor98 scrapie. The four emerging stages of spread in classical scrapie were defined by the accumulation of PrP Sc in certain neuroanatomical structures. These structures accumulated PrP Sc aggregates in all animals belonging to the respective stage and the following stage/stages. Further conclusions drawn from this study regarding atypical/Nor98 scrapie might help in the future to elucidate its origin and that of potentially related prion disease types like Creutzfeldt-Jakob disease type 1.
Narrow/Broad-Band Absorption Based on Water-Hybrid Metamaterial

In this work, the possibility of a switchable metamaterial absorber is proposed to control absorption bandwidth in the WiMAX/LTE (worldwide interoperability for microwave access/long term evolution) band, by taking advantage of the low cost and myriad structural configurations afforded by water-based metamaterials. By exploiting truncated cone-type resonators, the fractional bandwidth of 27.6% of the absorption spectrum can be adjusted flexibly down to a narrow-band absorption of 7.4%, depending on the volume of injected water, in both simulation and experiment at room temperature. In particular, this control method can be applied stably for different temperatures of the injected water. We describe a dynamic mechanism for broadband MA, as well as a principle for controlling the absorption characteristics utilizing a combination of magnetic resonance and perfect impedance matching. These results are a stepping-stone towards the realization of smart electronics integrated with multi-functional metamaterials in military, biomedical, communication and other fields.

Introduction

The idea behind metamaterial absorbers (MAs) - which appeared over a decade ago - is that incoming electromagnetic (EM) waves can be trapped inside sub-wavelength structures, a behaviour not observed in natural materials [1,2]. The effective permeability and permittivity can be flexibly controlled to be equal at the intrinsic resonances by changing the physical structure of the assembled materials. Therefore, the operating frequencies of MAs can be tuned from the radio to the visible range, creating extensive applications for both civilian and military uses [3-10]. Multilayer structures have been exploited to achieve broadband absorption and spatial frequency dispersion; however, their bandwidths were hard to tune, owing to the fact that their properties were immutable after fabrication [11-15]. To remedy this, several typical modulatory principles have been intensively developed to achieve multi-functionality, including tunable lumped elements [16], phase-changing materials [17,18] and graphene [19,20]. While these approaches offer the ability to excite multiple resonances in close proximity, there are still inherent difficulties, because multiple absorption peaks are hard to cancel or recover independently. Therefore, it is imperative to develop improved types of tunable/switchable broadband MAs by hybridization of the modulatory materials in multilayer MAs. Recently, water has become a good candidate for broadband MAs due to its low conductivity and relatively large imaginary part of the permittivity in the GHz range (because of the hydrogen-bonded network among water molecules [21]). It is also well known that between 0 and 100 °C (at normal pressures) water is a liquid that can be easily exploited for mechanical- and thermal-tuning purposes [22-27]. In 2015, Yoo et al. showed that water droplets could be periodically positioned on different substrates to present broadband absorption by exploiting their electric and magnetic resonances [28]. The absorption of the obtained spectra was approximately 93% from 8 to 18 GHz (for mobile communication, satellite and radar applications). In the higher-frequency band, 20-40 GHz, Song et al.
found that a flexible water-based MA achieved near-unity electromagnetic absorption by using a water sphere cap sandwiched between top and bottom membranes made of PDMS (polydimethylsiloxane) with a bonded metallic backside [29]. Their optically transparent absorber could be useful in stealth technology [30,31]. To the best of our knowledge, the tunable broadband absorption of water-based MAs in the recent literature has mostly concentrated on the frequency range above the C-band (over 6 GHz), where water has high absorption and where the low cost and the ability to make complex structures without masks are advantageous. However, further improvement of broadband absorption at lower frequencies, especially below 6 GHz, is an interesting and relatively unexplored challenge. In addition, there has been a lack of independently switchable narrow/broad-band MAs, since previous tuning schemes acted entirely through the effective impedance-matching values (which affect the absorption) by changing some physical feature of the water (permittivity, volume, etc.). Moreover, experimental confirmation was rarely provided at different water temperatures, owing to the complicated and/or expensive techniques required for those hybrid MAs. In this work, we propose a simple approach to obtain switchable MAs whose absorption bandwidth can be tuned between narrow and broad band. By hybridization of water into a multilayer MA operating in the WiMAX/LTE (worldwide interoperability for microwave access/long term evolution) band (4-6 GHz), the magnetic resonances can be independently modulated by injecting water to different volume ratios, which is equivalent to the addition or removal of each absorption frequency. The proposed water-hybrid MA was tested for thermo-stable performance, based on both simulation and experiment. We believe that the proposed design can find various applications in manipulating the EM wave-matter interaction.

Materials and Methods

First, our multilayered sample was designed with CST Microwave Studio to simulate a broadband absorption, as shown in Figure 1a. The unit cells were truncated cone-type resonators (TCRs) with height L, which utilized 27 sandwiched layers (metal-dielectric-metal) deposited on a substrate with thickness t1. The meta-surfaces were circular, with diameter varying linearly from D1 at the top surface to D2 at the bottom. Dielectric and substrate layers were selected as flame retardant-4 (FR-4) with a dielectric constant of 4.3 and a loss tangent of 0.03. A 0.036 mm-thick copper layer (conductivity of σ = 5.8 × 10⁷ S/m) was used for all metallic layers. The geometrical parameters were optimized to be D2 = 21.8, D1 = 13.8, A = 23.8, L = 6.3 and t1 = 2.2 mm. As shown in the enlargement in Figure 1a, a sandwiched layer contained a pair of metallic plates (thickness of 0.036 mm), which were separated by a dielectric spacer (0.182 mm thick). This lay on a dielectric substrate of t1 = 2.2 mm. Second, water was directly injected into the air fissures between adjacent unit cells and its volume was controlled by the height h. The fabrication procedure is sketched in Figure 1a. The precise milling and heat pressing processes were applied to fabricate a TCR, as shown in Figure 1b. In order to integrate water, 10 by 12 grids of unit cells were kept inside a boundary wall, which has the same height as the TCRs. The temperature of water was kept at 303 K (room temperature).
The characteristics of switchable narrow/broad-band absorption were estimated from the reflection spectra measured with a ZNB-20 vector network analyzer (VNA) with a pair of radiating/detecting horn antennae, as shown in Figure 1c. The absorption (A) was calculated by A(ω) = 1 − T(ω) − R(ω) = 1 − |S21(ω)|² − |S11(ω)|², where T(ω) = |S21(ω)|² and R(ω) = |S11(ω)|² were the transmission and the reflection, respectively. In our structure, the thickness of the bottom continuous copper layer was 36 μm, which was much thicker than the penetration depth of copper in the investigated frequency range (the penetration depth was calculated to be approximately 1 μm at 5 GHz). Therefore, the transmission of the bottom layer was minimized to be zero [S21(ω) ≈ 0], since the EM wave could not penetrate through the thickness of the copper film at the frequencies of interest. Consequently, the absorption was basically calculated as A(ω) = 1 − |S11(ω)|², where S11(ω) can be directly extracted from CST Microwave Studio (for simulation) and measured by using the ZNB-20 VNA (for experiment). In the frequency range of interest, the permittivity of water was approximately defined by the Debye formula [32,33] in CST Microwave Studio:

ε(ω, T) = ε∞(T) + [εs(T) − ε∞(T)] / [1 + iωτ(T)], (1)

and

εs(T) = a1 − b1(T − 273.15 K) + c1(T − 273.15 K)² − d1(T − 273.15 K)³, (2)

with ε∞(T) and τ(T) given by the corresponding exponential fits of [32,33] in terms of a2, b2 and c2, T1, T2. Here, εs(T), ε∞(T) and τ(T) were the static permittivity, the high-frequency temperature-dependent permittivity and the rotational relaxation time, respectively. In these equations, a1 = 87.9, b1 = 0.404 K⁻¹, c1 = 9.59 × 10⁻⁴ K⁻², d1 = 1.33 × 10⁻⁶ K⁻³, a2 = 80.7, b2 = 4.42 × 10⁻³ K⁻¹, c2 = 1.37 × 10⁻¹³ s, T1 = 406 K and T2 = 924 K. It is well recognized that the permittivity of water depends on the intrinsic temperature. The real and the imaginary parts of the dielectric constant of water in the investigated GHz range are shown in Figure 1d, for intrinsic temperatures from 303 down to 273 K. The imaginary part indicates a relatively low dielectric loss (or a low conductivity of 1.59 S/m) because of its small value [28].
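As a quick cross-check of the Debye model in Equation (1), the sketch below evaluates the permittivity of water over the band of interest. It is our illustration only: the values of εs, ε∞ and τ are approximate literature values for water near 303 K, inserted as assumptions rather than the exact temperature fits of [32,33].

import numpy as np

# Approximate Debye parameters for water near 303 K (assumed values).
eps_s = 76.6      # static permittivity
eps_inf = 5.2     # high-frequency permittivity
tau = 7.3e-12     # rotational relaxation time in seconds

f = np.linspace(4e9, 6e9, 5)          # WiMAX/LTE band of interest
omega = 2 * np.pi * f
eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

for fi, ei in zip(f, eps):
    print(f"{fi/1e9:.1f} GHz: eps' = {ei.real:.1f}, eps'' = {-ei.imag:.1f}")

With these assumed values the loss part ε'' comes out around 13-18 across 4-6 GHz, consistent with the "relatively low dielectric loss" noted above.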
To estimate the dependence of the effective impedance of the TCR on the volume of water, the effective permittivity [εeff(ω)] and the effective permeability [µeff(ω)] can be calculated by the standard retrieval method [34]:

εeff(ω) = n(ω)/z(ω), (3)

µeff(ω) = n(ω)·z(ω), (4)

where n(ω) and z(ω) are the refractive index and the normalized impedance retrieved from the scattering parameters through z = ±√{[(1 + S11)² − S21²]/[(1 − S11)² − S21²]} and e^(ink0d) = S21/[1 − S11(z − 1)/(z + 1)]. Here, d and k0 are the distance to be traveled by the incident wave and the wave number of free space, respectively. Consequently, the total impedance between the TCR and the surrounding environment [Z0 = √(µ0/ε0) = 377 Ω] can be effectively switched for the perfect matching in narrow/broad band as

Z(ω) = Z0·√[µeff(ω)/εeff(ω)], (5)

so that perfect matching (Z = Z0) occurs where εeff(ω) = µeff(ω). In other words, the specific frequency range allowing εeff(ω) = µeff(ω) [or Z = Z0] was dominated by the height of water in the TCR. This is why, when the water filled the space between a pair of metallic plates of each sandwiched layer in the TCR, the effective capacitance of the equivalent LC circuit was reduced, owing to the conductive property of water. The efficiency of modulation for the proposed TCR was evaluated by the fractional bandwidth (FBW),

FBW = 2·(fhigh − flow)/(fhigh + flow) × 100%, (6)

where flow and fhigh indicate the lowest and the highest frequencies at which the absorption was over 90%, respectively.

Results

Figure 2a shows the comparison between simulated and measured absorption spectra in the cases of no water (h = 0) and water integrated (h = 4.0 mm) at 303 K. Without water, the prediction by Equation (6) of a wideband absorption of FBW = 27.6% is in good agreement with both simulation and experiment. Figure 2b shows that, without water, in the range where the absorption was over 90% (around 4.0 to 5.28 GHz), the real part of the effective impedance tends toward one while the imaginary part is suppressed. Meanwhile, when h = 4.0 mm the wide-band absorption was switched to a narrow band with an absorption of 40% at 4.74 GHz. The dependence of the simulated broadband absorption on the height (h) of water at different temperatures is shown in Figure 3. It was found that an increase of the intrinsic temperature of water (from 273 to 303 K) only slightly affected the FBW of the TCR absorber. The value of the FBW gradually decreases (27.6%, 21.6%, 18.2%, 15.6% and 7.4%) as h is increased (0, 1.0, 1.5, 2.0 and 2.5 mm) in Figure 3a,b. At h = 3.5 mm, the wide-band absorption was switched to a narrow band with an absorption of 76% at 5.29 GHz (for 273 K) and 79% at 5.28 GHz (for 303 K). It was further predicted that, for a given h, the FBW was nearly constant as the temperature of water was varied from 273 to 303 K. In addition, Figure 3c,d (T = 303 K) shows that the real and the imaginary parts of the normalized Z(ω) tend to be nearly 1.0 and 0, respectively, as expected from Equation (5) where the absorption exceeds 90%. It is well known that, in these frequency ranges, the real and the imaginary parts of the effective permittivity and permeability were approximately equal [Re(µ) ≈ Re(ε); Im(µ) ≈ Im(ε)] in Equations (3) and (4). Furthermore, the imaginary part of the impedance was always positive in the investigated frequency range. This condition implies that the dispersion of the permeability was more important than that of the permittivity [35]. In other words, the observation suggests that magnetic resonances, rather than electric ones, were dominant in the proposed MA.
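The bandwidth bookkeeping of Equation (6) is easy to reproduce. The following sketch is our illustration, with a synthetic S11 spectrum standing in for the simulated or measured data; it computes A(ω) = 1 − |S11(ω)|² and the fractional bandwidth over the band where absorption exceeds 90%.

import numpy as np

def fractional_bandwidth(freq_ghz, s11, threshold=0.9):
    """FBW over the band where A = 1 - |S11|^2 stays above the threshold."""
    absorption = 1.0 - np.abs(s11) ** 2
    band = freq_ghz[absorption > threshold]
    if band.size == 0:
        return 0.0
    f_low, f_high = band.min(), band.max()
    return 2.0 * (f_high - f_low) / (f_high + f_low) * 100.0

# Synthetic reflection data roughly mimicking the no-water case:
# low |S11| (A > 90%) between about 4.0 and 5.28 GHz.
f = np.linspace(3.5, 6.0, 501)
s11 = np.where((f >= 4.0) & (f <= 5.28), 0.2, 0.8)

print(f"FBW = {fractional_bandwidth(f, s11):.1f}%")   # about 27.6%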
For the experimental confirmation, the measured absorption over 90% with an FBW of 22.6% (from 4.19 to 5.26 GHz, in Figure 4a) can be tuned to an FBW of 16% (from 4.46 to 5.24 GHz, in Figure 4b) by increasing h from 1.0 to 2.0 mm. In the case of h = 3.0 mm, in Figure 4c, the measured absorption was maintained at a maximum value of 82% from 5.02 to 5.28 GHz. These results indicate a good agreement with the simulated data.

Discussion

The underlying switching mechanism for narrow/broad-band absorption utilizing a water-hybrid TCR relies on the distribution of the induced magnetic energy, as shown in Figure 4. As mentioned above, the broadband absorption feature of the TCR structure was caused only by the magnetic resonances, which are induced by the antiparallel surface currents in successive layers (along the k direction). As discussed in depth in our previous work on the sandwiched-layer TCR [36], the induced electric and magnetic fields are excited continuously from the bottom to the top of the TCR, leading to broad-band absorption.
In other words, the varying diameters of these metallic disks result in closely spaced absorption peaks, and water plays an important role in canceling or activating the anti-parallel surface currents flowing on them. Consequently, the modulation was triggered by the bottom-up cancellation of magnetic resonances (i.e., the low-frequency absorption peaks were gradually deactivated as the water level rose). Thereafter, the induced magnetic energy was located and enhanced only in a specific volume of the TCR above the water level. Therefore, having both well-matched impedance and strong magnetic resonance at a given frequency was limited and depends on the injected-water level, as shown in Figure 3c,d. This impedes the ability of the device to consume the energy of incoming EM waves at this switching frequency. The small deviations between measured and simulated absorption spectra in Figure 4 can be largely explained by scattering from imperfections in the fabricated sample and the distorted shape of the injected water due to its surface tension [37]. In spite of these discrepancies, the obtained results are robust enough for stable modulation applications working in thermally variable conditions. The modulation performance of the water-hybrid TCR structure was further predicted to be insensitive to the polarization angle of the EM wave, owing to its symmetric multilayered design (not shown here).

Conclusions

We proposed and demonstrated a simple model to control the bandwidth of the near-perfect absorption of the TCR metamaterial by leveraging the hybridization with water. By exploiting the role of magnetic resonance in a multilayered structure, it was found that the fractional bandwidth of the absorption spectrum can be switched flexibly from FBW = 27.6% to 7.4%, or from absorption over 90% to below 80%, in the WiMAX/LTE band. These results emphasize the important role of water in deactivating (or activating) induced magnetic resonances in the multilayered structure. This control method additionally provides operation that is insensitive to the polarization of the incoming EM radiation and to the thermal variations of the injected water. The proposed model is an important step toward integrating MAs with the next generation of smart electronic components, especially camouflage equipment, sensors and hygienic energy storage devices.
Modern Business Intelligence: Big Data Analytics and Artificial Intelligence for Creating the Data-Driven Value

Currently, business intelligence (BI) systems are used extensively in many business areas that are based on making decisions to create a value. BI is the process applied to available data to extract, analyze and predict business-critical insights. Traditional BI focuses on collecting, extracting, and organizing data to enable efficient and professional query processing for getting insights from historical data. Due to the existence of big data, the Internet of Things (IoT), artificial intelligence (AI), and cloud computing (CC), BI became a more critical and important process and has received great interest in both industry and academia. The main problem is how to use these new technologies for creating data-driven value for modern BI. In this chapter, to meet this problem, the importance of big data analytics, data mining, and AI for building and enhancing modern BI will be introduced and discussed, in addition to challenges and opportunities for creating value from data by establishing modern BI processes.

Introduction

Recently, in the fourth industrial revolution, there is a very huge amount of data created and generated by machines, such as GPS devices, sensors, and website or application systems, or by people through social media (Twitter, Facebook, Instagram, or LinkedIn) [1]. Every moment, data servers store huge amounts of data produced by organizations; this data comes from websites, social media, tracking, IoT applications, sensors, and online news articles. Also, the advancement in computing and communication technologies has facilitated collecting a large volume of heterogeneous data from multiple sources. This data consists of structured and unstructured, complex and simple information. Currently, up to 80% of the data from which business derives revenue is in unstructured form [2]. So, organizations can improve their productive business processes through the analysis of this unstructured data that contains valuable information. In addition, such analysis is significant for education, security, healthcare, and manufacturing. The mathematician and code breaker Alan Turing envisioned a clear way forward in his groundbreaking 1950 paper, "Computing Machinery and Intelligence." At the time, computer technology could not keep up with Turing's ideas, but due to the advancement in computing, AI was eventually established. At Oxford University, the Future of Humanity Institute introduced a 2018 report surveying a panel of AI researchers on timelines for strong AI. This report found a 50% chance that AI will outperform humans in all tasks within 45 years and will automate all human jobs within 120 years. At the same time, AI will bring many opportunities for creating new jobs. Also, removing the need to do tedious and repetitive tasks is one of the great values of AI, as many experts have said. The application of technology in many industries and businesses has aimed at reducing human error, shrinking labor costs, and subsequently increasing profit. This was true for the advancements made from the Industrial Revolution through to the birth of the computer, and it remains true in the era of AI. In this chapter, the importance of big data analytics, data mining, and AI for building and enhancing modern BI will be introduced and discussed, in addition to challenges and opportunities for creating value from data by establishing modern BI processes.
Business intelligence (BI)

Business intelligence (BI) can be described as an automated process for deriving models and insights from raw data that is collected from heterogeneous data sources and organized in a systematic way for improving business operations and processes. In enterprise BI architectures, the best practice is to split the data collection and data organization processes, which are associated with the back-end architecture, from the data analysis and its display to a user through the front-end. In BI, the processed transactions generate data, which are stored in operational data sources called Online Transaction Processing (OLTP) servers. From the OLTP systems, the data is stored in a structured data repository called a data warehouse after extraction and transformation processes. Within the data warehouse, different query optimization techniques can be applied to speed up data analysis and the running of analytics queries. To achieve this speed-up, subsets of the data warehouse, called data marts, are created. Also, reporting mechanisms for accessing transaction data stored in the data warehouse are used in traditional BI systems. Therefore, analyzing these transaction data can help detect patterns and predict business trends. Recently, the data sources of BI are not only traditional sources such as transaction data; they also include modern data sources such as mobile devices and sensor data, and web messages sent over company intranets together with profiles of employees and customers. Most modern data sources are unstructured, for example, posted messages in online social networks (OSN) and data from various sensors. Therefore, the main challenge is how to maintain these modern data sources, as with a traditional relational database, while achieving query efficiency. From the data analysis perspective, additional data means additional opportunities for discovering more insights. However, the big data challenges remain the big problem from the analytics perspective. Due to the increase in data, there are expanded opportunities within the scope of BI, which is not only a mechanism to analyze historical data trends, but can also combine data from sensors and other real-time personal information to infer insights that are not commonly available; this is called situational BI [13]. For business operations, BI is called operational BI, which provides insights in real time to these operations, such as giving a call center instant feedback on its work. In addition, analytics rules may be composed by end users depending on the meta-information of the data exposed to them, which can be considered self-service BI. Therefore, these new BI approaches must be managed carefully such that the compliance models and governance of the enterprise are not violated. The three-tier architecture of a traditional BI system is shown in Figure 1. This architecture consists of three layers: 1) presentation layer, 2) application layer, and 3) database layer. The main challenge with this three-tier architecture is how to fulfill service level objectives such as minimal throughput rates and maximal response times. This is because the data storage management at the low-level layers is hidden from the application layer, which makes it difficult to predict execution times. Traditional BI systems are efficient in extracting and analyzing data, but they are rigid, slow, time-consuming, and require knowledge experts for maintenance.
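To make the warehouse-and-data-mart pattern above concrete, the following sketch is our illustration only: a tiny fact table queried with an aggregate "BI" query. The schema and data are hypothetical, not a real enterprise warehouse.

import sqlite3

# A miniature fact table standing in for a data warehouse / data mart.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("north", "widget", 120.0),
    ("north", "gadget", 75.0),
    ("south", "widget", 200.0),
])

# A typical reporting query: revenue per region, the kind of historical
# insight a data mart would serve to dashboards and reports.
for region, total in con.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"):
    print(region, total)

In a real deployment the same GROUP BY style query would run against a pre-aggregated data mart precisely so that such reports stay fast as the transaction volume grows.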
Therefore, many research works have been done on adding modern features to improve the three-tier architecture, which will establish the next-generation BI.

Modern business intelligence (MBI)

In traditional BI platforms, the main goal is to answer the question "What happened?" by providing efficient analyses, while modern BI platforms answer "What is happening, what will happen, and why?", offering the ability to monitor the organization continuously with fast analytics while accomplishing mission objectives using predictive analytics. Traditional business intelligence platforms over the past two decades have mainly succeeded in providing users with comprehensive historical reports and easy-to-use custom analysis tools. The availability of BI functionality largely depends on the underlying data architecture, which consists of a central data storage solution such as an enterprise data warehouse (EDW). EDWs form the backbone of traditional data management platforms and usually connect vast networks of data source systems into a central data warehouse. The data is then consolidated, refined, and pulled into different reports and dashboards, with the data in the EDW converted to display past business information, such as weekly revenue metrics or quarterly sales. Traditional BI provides a basis for these types of dashboards and interim reports. While users have gained immense value from the historical reporting capabilities of traditional platforms, more users now require data analysis technologies that give direct access to data without depending on IT professionals. Federal agencies highlighted the following challenges associated with traditional BI solutions in analytics [13]:

1. On-Demand Analysis Capabilities Lacking: advanced users of BI today do not want to wait for answers to more complex business problems. These users need self-service capabilities for linking and analyzing specific datasets depending on their own understanding, for any purpose, and at any time.

2. Predictive Analyses Needed: Historical reporting capabilities provide just one puzzle piece: insight about what happened in the past. Companies look to predictive analytics, or insight about the future, to think forward and be truly driven by data. With predictive models, companies can use patterns and forecasts to derive the next actionable steps from their data (a minimal forecasting sketch is given after Figure 2 below).

3. Mixed Data Types Analysis: Traditional BI platforms have largely focused on structured data, but today users require the ability to view and analyze semi-structured, unstructured, and third-party data. In recent years, the amount of produced information has increased massively, partly due to new data mining technologies, the Internet of Things (IoT), the proliferation of data sensors and automated data collection tools. Now, advanced BI users and data scientists need access to unutilized data in different formats to mix data types and create their own algorithms, so that on-demand insights are available to make accurate and quick decisions.

A lot of organizations that lack the processes, technology, and people needed to extend data-analysis capabilities to the next level become frustrated. These challenges need a strategy and platform for analytics that goes far beyond the scope of traditional BI platforms, as shown in Figure 2.

Figure 2. Growth of BI platforms based on insights: hindsight to insight to foresight [14].
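As a toy illustration of the predictive-analytics step called for in challenge 2 above (our sketch, with made-up monthly sales figures), a simple linear trend can be fitted to historical data and extrapolated one period ahead:

import numpy as np

# Hypothetical monthly sales for the last six months.
months = np.arange(6)
sales = np.array([100.0, 104.0, 110.0, 113.0, 121.0, 125.0])

# Fit a linear trend (hindsight) and extrapolate it (foresight).
slope, intercept = np.polyfit(months, sales, 1)
forecast = slope * 6 + intercept
print(f"trend: {slope:.1f} per month, month-7 forecast: {forecast:.1f}")

Real platforms would of course use richer models (seasonality, regression on many features, machine learning), but the workflow of fitting on history and scoring the future is the same.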
Integration of traditional and modern BI platforms is essential to laying the groundwork for enterprise-wide data transformation, since organizations are understandably reluctant to get rid of their IT infrastructure and start over. Data warehouses play a major role in existing data platforms, providing data that is fully cleaned, organized, and managed for most businesses and companies. The data warehouse gives business managers, executives and others the ability to obtain insights from historical data with relative ease and without deep technical knowledge. The data obtained from the data warehouse is very accurate due to careful testing, IT cleaning, and accurate knowledge of the data layers. However, the challenges of traditional BI create a demand to augment the EDW with a different form of optimized architecture for fast access to ever-changing data: the Hadoop data lake. Organizations looking to upgrade their analytics platforms are beginning to adopt the data lake concept. Data lakes store information in its raw and unfiltered form, whether structured, semi-structured, or unstructured. Unlike the standalone EDW, data lakes themselves perform little of the automated data cleaning and transfer operations, allowing data to be ingested more efficiently, but they transfer the greatest responsibility for preparing and analyzing data to business users. Data lakes can offer a low-cost solution by using the Hadoop Distributed File System (HDFS) for efficiently storing various types of data and analyzing them in their original structure. As shown in Figure 3, a data lake is coupled with the data warehouse to define the next generation of BI and provide the optimal basis for data analysis. In the system shown in Figure 3, the EDW receives system data from different sources through the ETL process (Extract, Transform, and Load); a minimal sketch of this process is given below. After the data is cleaned, transformed, and standardized, it will be ready for analysis by a diverse group of users using dashboards and reports. In the meantime, a data lake collects raw data from single or multiple source systems or all systems, and the data is ingested and ready for discovery or analysis processes. The result: a broader user base for exploring and creating relationships between vast amounts of various data for individual analyses, on demand.
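The ETL flow into the EDW referenced above can be made concrete with a small sketch. This is our illustration only; the source file, field names and cleaning rules are hypothetical.

import csv, sqlite3

# Extract: read raw transaction rows from a hypothetical CSV export.
def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

# Transform: clean and standardize, e.g. drop incomplete rows and
# normalize fields, as a warehouse-side ETL job would.
def transform(rows):
    for row in rows:
        if not row.get("customer_id"):
            continue                       # reject incomplete records
        row["amount"] = round(float(row["amount"]), 2)
        row["region"] = row["region"].strip().lower()
        yield row

# Load: insert the cleaned rows into a warehouse fact table.
def load(rows, con):
    con.executemany(
        "INSERT INTO fact_sales (customer_id, region, amount) VALUES (?, ?, ?)",
        [(r["customer_id"], r["region"], r["amount"]) for r in rows])

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact_sales (customer_id TEXT, region TEXT, amount REAL)")
load(transform(extract("sales_export.csv")), con)   # hypothetical file name

A data lake, by contrast, would skip the transform step at ingestion time and keep the raw rows, pushing cleaning decisions to the analysts who later read them.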
Features of modern BI

1. Operational BI (real-time): Today, the competitive pressure on businesses has increased the requirement for almost real-time BI, which is called operational BI. The goal of operational BI is to reduce the latency between data acquisition time and data analysis time. Reducing the response time enables the system to take suitable action when an event occurs. With operational BI realized, companies can discover patterns or time trends across the flow of operational data.

2. Situational BI: it enables situational awareness. Situational BI is important for companies where a rapid turnaround in situations, commonly external business trends, affects the business [15]. However, this external data, which mostly comes from the company intranet, external vendors, or the Internet, is unstructured. Moreover, this unstructured data must be combined with other structured data from the local data warehouse of the company to support real-time decision making. For example, a company may want to know if its users and customers are posting negative or positive comments about its new products. Through the analysis of these comments, companies can provide immediate feedback to the development team for making the product more competitive and qualified. As another example, it is important for a company to know whether natural disasters have affected its contract suppliers. Recognizing natural disasters enables businessmen to take appropriate measures to reduce losses [16].

3. Self-Service BI (SSBI): it enables end users to generate analyses and analytical queries without involving the IT department. In SSBI, the user interface of the applications must be easy to use and intuitive, so that technical knowledge of the data repository is not needed. In addition, the user should be allowed to access not only data sources organized by IT, but also non-traditional sources.

Data architecture

1. Background: Traditional business application architecture has three layers: data, application, and presentation. In the three-tier architecture, execution time is very difficult to predict, due to the relationship between low-level data management processes and high-level operations. Usually, workload management solutions are built on top of a general-purpose DBMS, which introduces time delays for executing parallel requests. With modern business applications, this creates challenges for functions such as operational information in real time. Therefore, technologies that enable simultaneous business transactions and analytical queries to be performed on the same data are important. Organizations today use ETL to extract data, perform transformations, and upload the transformed data into a data warehouse. This model is based on two types of business-critical processes: Online Analytical Processing (OLAP) and Online Transaction Processing (OLTP). OLTP is used for managing business operations, such as the processing of an order. OLAP is used for supporting strategic decision making, such as sales analytics.

2. Challenges: Traditionally, OLAP and OLTP workloads are performed on the same database system. However, OLAP workloads mostly consist of bulk reads on data that is only updated by OLTP, constantly. Therefore, transaction-processing performance may be unpredictable due to competition for resources when both workloads are performed in a single database. Thus, it is necessary to separate OLAP and OLTP workloads; a small example of the two workload types is given at the end of this section.

Figure 3. Data sources, data warehouse, and data analytics in modern BI [14].

In the traditional architecture, each OLAP workload must wait until the data in the data pool is completely refreshed and visible, which causes delays. Today, for reducing the delay, operational BI systems execute OLTP and short-term analytical queries together on the DBMS, as shown in Figure 4-b. These workloads are called short OLAP workloads. However, long-term OLAP workloads may conflict with many short OLTP transactions that make changes to the database. So, heavy synchronization is needed to deal with resource competition, which produces lower utilization of all resources. Also, commercial database management systems (DBMS) use special techniques, such as shadow copies [17], for handling mixed workloads with lower overheads. That is, different workloads will be separated and performed on different logical versions of the data. Therefore, additional space may be needed, which increases the infrastructure costs and requirements. Hence, in current disk-based DBMSs a major challenge is managing these mixed workloads (OLAP and OLTP) [18].
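To make the OLTP/OLAP distinction above concrete, the following sketch (our illustration, with a hypothetical schema) contrasts a short transactional write with an analytical read on the same data:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# OLTP: a short transaction that records one business event.
with con:                                   # commits atomically
    con.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                ("alice", 42.0))

# OLAP: an analytical query scanning the whole table for decision support.
count, avg = con.execute("SELECT COUNT(*), AVG(amount) FROM orders").fetchone()
print("orders:", count, "avg amount:", avg)

On one engine these two access patterns compete for the same resources, which is exactly the motivation for separating them, or for the hybrid systems discussed next.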
Current BI systems
1. Extended systems of traditional BI: Current traditional BI technologies can perform OLAP queries and OLTP transactions on the same database without interfering with each other. Combining these mixed workloads in the same system requires extreme performance improvements because of the huge explosion in the size of dynamic data.
• "In-memory database (IMDB)": Today, in most BI systems, a mixed OLTP and OLAP workload on one system can be handled using an in-memory database (IMDB), also called a main-memory database. This technique requires the system to store all data in main memory; it is faster than disk-optimized databases, and the internal optimization algorithms are simpler and use fewer CPU instructions. When querying data, this technique provides faster and more predictable performance by eliminating disk seek time. However, IMDB systems can lack durability, because the stored information is lost when the device is reset or loses power. Many IMDB systems have therefore proposed various mechanisms to support durability, such as snapshots, non-volatile DIMMs, non-volatile RAM, transaction logging, and high availability. Table 1 shows modern BI systems that use various methods to hold most or all of the data in main memory to obtain high OLTP throughput. For example, the H-Store system runs on a distributed cluster of shared-nothing machines, where the data resides entirely in main memory. H-Store can execute transaction processing at high throughput rates by removing traditional DBMS features such as buffer management, locking, and latching. Recently, the H-Store prototype was commercialized by a startup called VoltDB [19].
• "Hybrids with on-disk database": Main memory has become big enough to handle most OLTP databases; nevertheless, this may not always be the best choice. OLTP workloads exhibit skewed access patterns, where some records are "hot" (accessed frequently) while others are "cold" (rarely or never accessed). So, modern systems store the coldest records on fast secondary storage devices while still ensuring good performance. For example, Stoica and Ailamaki [19] suggested a way to migrate main-memory DB data to cheaper and larger secondary storage. In [20], relational data structures are reorganized using access statistics for OLTP workloads, to improve main-memory hit rates and reduce operating-system I/O during migration. Recently, Siberia was introduced as a cold-data management framework in the Microsoft Hekaton IMDB [21]. Like [19], it does not require storing the entire database in main memory. Hekaton focuses on how records are migrated to and from a cold store and how records in the cold store are accessed and updated in a transactionally consistent manner. So, only some tables are declared and managed in main memory by Hekaton. Experimental evaluation shows that when the cold store is located on commodity flash, Siberia leads to an acceptable throughput loss of 7-14% relative to a purely main-memory DB at realistic cold-data access rates. A minimal sketch of this hot/cold classification follows.
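The following is a minimal sketch of the hot/cold record classification idea behind hybrid systems such as Siberia: frequently accessed records stay in memory, and the coldest are candidates for migration to secondary storage. The threshold and data structures are illustrative assumptions, not the actual Hekaton/Siberia design.

```python
# Minimal sketch: classify records as hot (keep in main memory) or cold
# (migrate to flash/disk) from observed access frequencies.
from collections import Counter

access_log = ["r1", "r2", "r1", "r3", "r1", "r2"]  # observed record accesses
access_counts = Counter(access_log)
all_records = {"r1", "r2", "r3", "r4"}             # r4 was never accessed

HOT_THRESHOLD = 2  # assumed cutoff on access frequency
hot = {r for r in all_records if access_counts[r] >= HOT_THRESHOLD}
cold = all_records - hot  # candidates for migration to the cold store

print("keep in main memory:", sorted(hot))     # ['r1', 'r2']
print("migrate to cold store:", sorted(cold))  # ['r3', 'r4']
```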
2. Modern features of BI systems: There are three modern features of BI systems: operational BI, situational BI, and self-service BI. Whereas the H-Store system targets only OLTP transaction processing, a modern system called HyPer can handle mixed workloads of both OLTP and OLAP at extremely high throughput rates, using a low-overhead mechanism to create differential snapshots [22]. HyPer uses a lock-free approach that allows all OLTP transactions to be carried out sequentially or on private partitions. In parallel with OLTP processing, HyPer performs OLAP queries on the same consistent snapshot. Castellanos et al. [23] proposed a new platform, called SIE-OBI, to alert business managers to situations that could affect their business. SIE-OBI integrates the functions required to exploit relevant, fast-flowing information from the web. They proposed new schemes for extracting information obtained from the web and linking it with the historical data stored in the data warehouse to reveal situational patterns. The relevant information is extracted from two or more different unstructured data sources, usually one slow internal text stream and one fast external text stream. The platform was built to minimize the time and effort needed to integrate structured and unstructured slow and fast data streams, and to analyze them in almost real time.
Data governance
1. Background: In DAMA I [24], data governance is defined as "the exercise of authority and control over the management of data assets, planning, supervision and control over the management and use of data". Data governance describes the responsibilities and roles of the organization in promoting desired behavior in the use of data [25]. Data governance differs from data management, which involves setting data quality standards and making and implementing decisions [26]. It is also different from BI governance, which aims to provide a dedicated decision-making framework through the governance of all activities within the BI environment [27]. DAMA I [28] identifies 10 data management functions, as shown in Figure 5. The data governance function is the high-level supervision, planning, and control of all other functions. Four data management functions relate to next-generation BI, which requires fast access to data, use of external data, and analysis of data by general users. Data architecture management includes setting data standards, maintaining and developing enterprise data structures, and linking application projects and architecture. The data quality management function focuses on planning, implementing, and controlling activities that apply quality management techniques to measure, evaluate, improve, and ensure the use of data. Data warehousing and business intelligence management focus on providing decision-support data for reporting, querying, and analysis. Metadata management focuses on activities that enable easy access to high-quality metadata, covering its architecture, integration, control, and delivery.
2. Deploying next-generation BI in data governance: Data governance has become vital for the organization as data becomes an inherent asset. The business derives its value and makes decisions based on the information derived from the data. Consequently, data control is required to ensure the quality of the data, which directly affects the quality of the decisions made by the organization [29]. More effective data governance (DG) can lead to a higher quality of decision making. To achieve effective data governance, enterprise data governance maturity models help organizations understand DG and determine what the next expected plan is [30]. Many data governance maturity models [31] have been proposed for directing an organization to understand what its data governance level is.
In [29], Oracle anticipated that a data governance maturity model would help the organization locate itself in the evolution of its data governance system, identify the short-term steps needed to reach the next level, and enhance its data management capabilities. In the Oracle model, the highest level of maturity is the integration of data governance with BI. The next generation of BI supports almost real-time insights using external information, which generates large amounts of data and manipulations of that data. So, this requires very mature DG to provide data quality, reliability, and integrity. These three characteristics are crucial for extracting accurate insight through data mining techniques. For example, "self-service" BI (for example, Tableau and QlikTech) allows users to discover insights from many data sources without modeling the data environment and implementing complex ETL operations, which is one of the most time-consuming and difficult tasks in BI. So, these new features allow users to easily access data, get quick results, and obtain visual data representations. To enable the evolution of next-generation BI, data governance is critical to the reliability of the insights discovered from the data. For example, in the case of self-service BI, the fact that end users can access and process their own data reduces the reliability of BI results [32]. In data governance, useful functions to ensure reliability can be considered, such as tracing data lineage back to the source and creating records of how data is processed or transferred. However, integrating data governance into next-generation BI faces some challenges, owing to the requirement of flexible and reliable responses amid an enormous amount of external data and broad user engagement.
3. Data governance challenges: There are two main features of next-generation BI that affect the data governance model. First, decision making in next-generation BI should be more effective and faster amid a huge amount of data that comes in many formats and from many sources. However, data from many sources makes data governance more difficult and sophisticated to control properly, which can also lead to ineffective decisions. When data comes from different, conflicting sources, the decision-maker must do more research and analysis of the data and its sources to determine what is true and accurate, or approximately so, which is a costly operation. Therefore, management of data across heterogeneous sources in a next-generation BI system is very important. Second, in next-generation BI, especially "self-service" BI, business users participate in decision-making procedures. In general, the central IT organization and many data stewards have been involved in data governance initiatives; they maintain a metadata repository for the data governance platform and a set of data management tools to deal with varied data. In advance, they standardize common data definitions of master data and reference data that are widely shared across many enterprise applications.
Framework of data governance as defined in DAMA I [27].
When they receive disparate data, they match it against the predefined shared data definitions, determine its quality, determine which rules apply, and convert and merge the data.
However, in next-generation BI, users also select, manipulate, or merge their data themselves, naming it using various "self-service" BI tools. They may want to upload results to the DB and share their insights with others. The participation of business users in the data process can lead to a mess, where the same data can be converted and combined in various ways both by data stewards in a central organization using data management tools and by business users with "self-service" BI tools. Consequently, metadata sharing criteria are crucial, so that shared data, shared data names, and shared integration rules can be communicated [33].
4. Data governance model for next-generation BI: The data governance model design ranges between centralized and decentralized, and between hierarchical and cooperative. A central design assigns all decision-making authority to the central IT department, while a decentralized design assigns authority to individual business units [25].
Big data is a group of huge and complex data sets from various sources that traditional data management and application processing techniques have difficulty processing. Big data is a collection of a large amount of structured or unstructured data that is processed and analyzed for informed decision making or evaluation. These data can be taken from various sources, including browsing history, geographic location, social media, medical records, and purchasing records. Big data is made up of complicated data that would overwhelm the processing power of traditional simple database systems [34]. In [35], the authors mention three main characteristics associated with big data: (1) Volume describes the vast amounts of data that big data involves; data volumes typically range from gigabytes (GB) up to yottabytes (YB). A big data system should be able to handle any amount of data, even with its highly anticipated growth. (2) Variety describes the various types of data sources that are used as part of a big data analytics system. Currently, there are many data storage formats used by computers all over the world. One format is structured data, such as databases, .csv files, and Excel sheets; other data, such as video and short message service (SMS) messages, is unstructured. Unstructured data can also take the form of handwritten notes. Ideally, data from all these sources can be used for big data analytics. (3) Velocity describes the speed at which data is generated, and also the speed at which the generated data is processed. With the click of a button, an online retailer can quickly view big data about a specific customer. Velocity is also important to ensure that data is updated and available in real time, allowing the system to perform at its best. This speed is necessary because real-time data generation helps organizations accelerate operations, which can save institutions a large amount of money. Today, many companies are increasingly interested in using big data technologies to support their BI, so it becomes very important to understand the practical issues arising from previous experience with BI systems. Today's BI systems sense the world and harness these data points to recommend the best possible options and forecast results accurately. As BI systems continue to be built in real time, the demand for near real-time data collection, integration, processing, and visualization increases; a minimal sketch of such incremental processing follows.
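To make the velocity point concrete, the following is a minimal sketch of incremental (streaming) aggregation: aggregates are updated as each event arrives, instead of re-scanning stored history. The event stream and keys are illustrative assumptions.

```python
# Minimal sketch: maintain per-key running aggregates as events arrive,
# so results are always up to date without re-reading all past data.
from collections import defaultdict

running_sum = defaultdict(float)
running_count = defaultdict(int)

def on_event(key: str, value: float) -> float:
    """Update aggregates for one incoming event and return the running mean."""
    running_sum[key] += value
    running_count[key] += 1
    return running_sum[key] / running_count[key]

for key, value in [("clicks", 3.0), ("clicks", 5.0), ("sales", 40.0)]:
    print(key, on_event(key, value))
```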
BI systems are characterized by rich sensing opportunities, with a diversity of sensors ranging from mobile phones, personal computers, and health-tracking devices to Internet of Things (IoT) technologies designed to give contextual and semantic voice to entities that previously could not contribute intelligently to key decisions. So, many companies are analyzing big data today. Big data analytics often requires machine learning techniques and distributed processing: data sets are frequently distributed across platforms with different computing and network capabilities, and their size and privacy considerations argue for distributed techniques. The diversity of applications and the benefits of big data analytics also pose challenges. As an example, every hour the servers of Walmart handle more than one million customer transactions, and this information is stored in databases containing more than 2.5 petabytes of data, which is 167 times the number of books in the Library of Congress. Likewise, CERN's Large Hadron Collider produces around 15 petabytes of data annually, enough to fill over 1.7 million double-layer DVD discs each year [36]. Big data analytics is used in education, health care, media, insurance, manufacturing, and government. Big data analyses for business intelligence and decision support systems, which enable healthcare organizations to analyze data of tremendous volume, variety, and velocity, have been developed across a wide range of healthcare networks to support evidence-based decision making and action [37]. Therefore, it is clear from the discussion that data management and big data analytics [38] are important in BI for four reasons:
1. Better decision-making (BDM): Big data analytics can analyze current and old data to make predictions about the future. So, companies can not only make better current decisions but also prepare for the future.
2. Cost reduction (CR): Big data technologies such as cloud-based analytics and Hadoop offer great cost advantages when storing large amounts of data. In addition, they provide insights on the effect of various variables.
3. New products and services (NPS): With the ability to measure the needs and satisfaction of customers through analytics comes the power to give customers what they want. So, more companies are creating new products and services to meet customer needs.
4. Understand the market conditions (UMC): By analyzing big data, we can get a better understanding of current market conditions and retrieve important information.
In addition, there are a few features and challenges that must be considered in big data analytics tools and techniques, including scalability and fault tolerance [39][40][41]. Table 1 lists a few of the widely used tools along with the advantages of big data analytics. The rapid development of business intelligence and analytics has attracted the attention of researchers, because organizations no longer rely on traditional technologies as data grows exponentially. This huge amount of data requires advanced analytical techniques in order to convert it into valuable information that helps organizational growth. BI&A is the contemporary methodology for extracting value from this vast amount of data, driving strategic decision making, and forecasting and benefiting from future opportunities. BI&A is necessary in most organizations and has proven to be effective support for decision making.
In addition, data and IT infrastructure are clearly influenced by the good use of BI&A practices. Nowadays, business intelligence and analytics play a vital role in most institutions and sectors because of their value and benefits. BI&A helps organizations gain a better view of their own data and thus improves fact-based decision making. These methodologies and data analyses also help to maintain competitive advantage, in addition to resolving technical and quality problems, which enhances the performance and productivity of enterprises [42,43]. According to Abai et al. [44], BI&A helps to build an integrated framework that supports accelerating organizational performance. Many factors and technological developments have shaped the past and present trends of BI&A. With the rapid development of technology, it is not enough to use traditional analytical techniques, and the future direction of business intelligence and analytics will expand to include diverse areas. According to Chen et al. [45], the success opportunities associated with data analysis technologies have generated future interest in business intelligence and analytics. Additionally, BI&A contains different practices and methodologies that can be applied to different sectors: health care, security, market intelligence, e-government, and others. According to Mohammed and Westbury [46], BI&A is contributing to future development systems. By mapping all the facts, BI&A is expected to become a key technology in developing cities, supporting the real-time information that will turn them into smart cities. One of the most important responsibilities in the data mining process is choosing the appropriate data extraction technology. The nature of the work and the type of problem or difficulty experienced by the business provide appropriate guidance for identifying the best techniques [47].
Application of data mining techniques
There are some generalized approaches that can yield enhanced efficiency and cost-effectiveness. Many of the basic techniques performed in the data mining process determine the nature of the mining process and the choice of data recovery. Artificial intelligence (AI) represents a step in the evolution of technology that has been actively pursued since the British mathematician and code-breaker Alan Turing conceived it as a clear way forward in his pioneering 1950 paper, "Computing Machinery and Intelligence." At the time, computer technology could not keep up with Turing's ideas, but as computing advanced, so did AI. Most of the artificial intelligence that we see today is artificial narrow intelligence (ANI), which means it can perform one well-defined task. A 2018 report by the Future of Humanity Institute at Oxford University surveyed a group of AI researchers on timelines for strong AI. It found a "50% chance of artificial intelligence outperforming humans in all tasks in 45 years and automating all human jobs in 120 years." However, AI will also bring with it many opportunities to create new business. As many experts have pointed out, one of the great values of artificial intelligence is its ability to eliminate the need for strenuous and repetitive tasks; instead, users can focus on their core values and skills. The technology has been applied in many industries, mostly aimed at reducing human error and labor costs, and thus increasing profit.
This was true of the progress made during the Industrial Revolution until the birth of the computer, and it remains true with the emergence of artificial intelligence. Artificial intelligence has advanced significantly in the past few years due to a number of factors, starting with a massive increase in available computing power. Training an AI model that once took weeks now takes days or even hours with machine learning (more on this soon). Another factor is wider access to data. You may have heard that data is the "new oil" or something similar. However, the data must be processed using advanced tools such as analytics and machine learning algorithms to reveal useful information, and this processing is where AI in BI becomes an invaluable tool. Machine learning is the engine of artificial intelligence systems. It strengthens artificial intelligence models by analyzing complex data sets through a set of self-acquired rules and knowledge, as shown in Figure 6. A machine learning model learns from big data and from frequent human interactions so that it can provide information and answers related to the user's interests or goals. Big data refers to very large data sets that can be mathematically analyzed to reveal patterns, trends, and correlations, especially about human behavior and interactions. In the space of artificial intelligence, deep learning represents a major leap forward in technology. As we just touched on, programmers write code that directs the device how to interpret a series of words, pictures, or commands to reach a decision and execute an order. The end user then provides the input (data), while internal engineers may define more specific rules for interpreting and analyzing that data. Finally, the system provides outputs (analyses) based on the specific inputs and defined rules. In [48], the authors proposed a demand-forecasting model for BI with machine learning.
Why BI needs AI?
Does it matter whether consciousness persists in the original, or will the copy be alive anyway? For better or worse, the future comes faster than we realize. There will be no clean before and after artificial intelligence, but a slow transition over a decade or more. As we have seen with Google Glass, it is currently impossible to guess what acceptable results would look like. But how much can we trust our future assistants? Will they work for us or for unknown entities? If we do not ask the right questions now, we will get the default app. It will be free, but what will the small print include? "Good morning John. Here's today's program. Any questions?" Perhaps it does not matter after all: using a good learning algorithm, the program will know what we need, and what we need to do, better than we could ever guess. The power of statistics will win the war against the gods, and we will lose our soul. It is known that job candidates can decisively lose their chances when they behave badly toward reception and waiting staff, thinking that no one is watching. Once NLP and other AIs are widespread, it will not be long before the same kind of test is applied to how we treat them. Looking toward 2050, the future of humanity lies in the transition to a civilization of the first kind; we are type 0, facing extinction. We are about to become half-gods.
Most likely, we will merge with our own processing technology, and each of us will have our own virtual world to dominate with absolute control over every aspect of it, along with the countless millions of "life" planets that we may control or merge with as well, just as video game programmers have absolute control over the worlds they create. Immortal, omniscient, and omnipresent, we would each be all-powerful within our own universes. Of course, such a being could also explore this universe, perhaps contact its creator directly, and learn that we are characters in its game. Our last question will be one of morality and maturity. Will we have only one universe? Or will power drive us into madness, transforming us into "invaders of the universe" who penetrate the universes of others out of greed and the desire for ever more power? Will we be good? Or evil? Or both? Will we be able to achieve wisdom and secure peaceful, harmonious coexistence with all other demigods, or will we go to war? Or will we merge into one excessive force? Or will we one day tire of divinity, start the final game again, and transform ourselves into a universe that will have to evolve for billions of years so that we may be re-created one day? Maybe this is exactly what is happening.
Improving BI with AI
In this section, we explore how AI in BI elevates and improves the way an organization analyzes and interprets the lifeline of its business: data.
1. Turning Business Users into Data Experts (TBUDE): Typically, business analysts (BA) and IT officials control access to data and its interpretation, and these occupations have been crucial until now. With the AI capabilities in today's BI tools, including natural language interfaces (NLIs), line-of-business (LOB) users no longer need to depend on data science experts to analyze their data. AI allows users to obtain actionable answers easily and directly, helping to "democratize" data. In other words, it gives users the ability to have a two-way conversation with their data and to feel empowered to act on the answers in a reliable way. Here is an example of how AI works in practice: an organization deploys a BI solution that uses an advanced NLI, and instead of waiting for system administrators or data scientists to analyze the data, the business unit manager accesses the BI solution directly. The manager makes the data available by calling or downloading it and asks questions in plain language. The user then receives insights in response to these questions, along with a dashboard and presentation-ready visuals to help communicate the answers. A pre-trained AI model can even target specific BI tasks, such as visualization recommendations, "what-if" scenarios, and prediction, to help managers make important decisions for their business.
2. Helping You Explore Your Data (HYEYD): There is something inherently satisfying about exploring your data with the right AI-supported tool. In minutes, you can move from loading data sets to revealing hidden facts in the data and presenting the results in beautiful visualizations. From the moment the data is available, the AI in the BI system does the heavy lifting by automatically sorting, marking columns, and joining matching data across groups. Accessing the NLI is the first step in data exploration for the user. The AI tool will suggest questions that might be helpful if you get stuck. You can also start with the basics, like "How did the retail store department perform during the X period?"
The AI will provide answers and suggest ways to explore the data for additional insights into performance. Exploration is exciting because you can continue to delve deeper into insights that only AI can surface. What captures users' imagination is visualization: visuals are an essential feature of all modern BI solutions, but with AI-enabled solutions, users receive suggested, automated visualizations that best fit the answers to their questions.
3. Learning from the End User (LFEU): The leading AI systems within BI systems are customized and improved over time through machine learning that indexes and learns a user's typical questions and behaviors. The more a user interacts with the BI tool, the better the AI will know what this user wants in presentation and analysis. If the user usually works with forecast data, the system will begin to prepare and present the data in the prediction model via dashboards.
4. Automatically Cleansing and Prepping Data (ACPD): To be interpreted successfully, your data must be organized in a unified and searchable manner. As any business knows well, multiple datasets cause multiple headaches. What if names are formatted as first name/last name in one spreadsheet, and last name/first name in another? What if there are duplicate records? What if there are records in one dataset and not the other? What if the data in one set is daily, and the other is monthly? AI in BI reduces data cleansing and preparation work and provides a massive aspirin for those headaches. By setting up data automatically (one of AI's biggest time savers), you can move from making data available to working with it in minutes, instead of hours or days. Future AI functions will allow users to enter structured and unstructured data without missing a beat; a big change, since most of the data being created today, such as photos, videos, and audio, is unstructured. Removing barriers to effective analysis is one of the ways in which advanced AI in BI tools helps users who are not data scientists to access and interpret their data.
5. Gaining Competitive Advantage (GCA): AI now makes a critical difference between the companies it enables to succeed and those that will soon be left behind. Gartner predicts that by 2021, 75% of pre-built reports, such as those used to extract data, will be either replaced or augmented with automated insights. Robust AI in BI tools also provides improved accuracy for critical operational reporting. If they have not already done so, data and analytics leaders should plan to adopt augmented analytics in their business as the platform capabilities mature. Rita Sallam, vice president of Gartner Research, warned at a recent conference that data and analytics leaders should examine the potential business impact of increasing reliance on predictions using enhanced and automated insights, and adjust their business models accordingly, or risk losing the competitive advantage to those who do. AI is already offered in BI solutions today, and the companies that adopt the technology are poised to succeed more surely than those that do not. By uncovering trends and correlations in data, proposing ways to interpret results in natural language, and providing the best format for presenting these results, AI saves time and provides actionable insights to increase profitability and avoid potential problems before they arise. A minimal sketch of the automated data preparation described above follows.
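To make the data-preparation chores above concrete (inconsistent name order, duplicates, mixed daily/monthly granularity), the following minimal pandas sketch performs them explicitly; an AI-assisted BI tool would automate these steps. Column names and values are illustrative assumptions.

```python
# Minimal sketch of automated data preparation: normalize name order,
# drop duplicates, and resample daily records to monthly totals.
import pandas as pd

df = pd.DataFrame({
    "name": ["Smith, Jane", "Jane Smith", "Lee, Kim"],
    "date": pd.to_datetime(["2021-01-03", "2021-01-03", "2021-02-10"]),
    "amount": [100.0, 100.0, 50.0],
})

def normalize(name: str) -> str:
    """Turn 'Last, First' into 'First Last' so records can be matched."""
    if "," in name:
        last, first = [p.strip() for p in name.split(",", 1)]
        return f"{first} {last}"
    return name

df["name"] = df["name"].map(normalize)
df = df.drop_duplicates()  # the two Jane Smith rows now collapse into one
monthly = df.set_index("date").resample("MS")["amount"].sum()  # daily -> monthly
print(monthly)
```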
Conclusion
In this chapter, traditional and modern BI were reviewed in detail; BI has become a critical and important process and has received great interest in both industry and academia. Data management, data mining, and machine learning techniques are needed to extract insights from big data. By using such techniques, business intelligence achieves better decision making, cost reduction, new products and services, and an understanding of market conditions. In addition, the importance of big data analytics, data mining, and AI for building modern BI was discussed.
2021-07-27T00:06:21.136Z
2021-05-19T00:00:00.000
{ "year": 2021, "sha1": "7372b57e308a0fc83cd8ae43c3fad03ea8413ccc", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/76332", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "36aefef81211c90c3895e7d3eb32d4a957fb2c94", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
254300470
pes2o/s2orc
v3-fos-license
Bridging finite element and deep learning: High-resolution stress distribution prediction in structural components
Finite-element analysis (FEA) for structures has been broadly used to conduct stress analysis of various civil and mechanical engineering structures. Conventional methods, such as FEA, provide high-fidelity results but require the solution of large linear systems that can be computationally intensive. Instead, Deep Learning (DL) techniques can generate results significantly faster than conventional run-time analysis. This can prove extremely valuable in real-time structural assessment applications. Our proposed method uses deep neural networks in the form of convolutional neural networks (CNN) to bypass the FEA and predict high-resolution stress distributions on loaded steel plates with variable loading and boundary conditions. The CNN was designed and trained to use the geometry, boundary conditions, and load as input to predict the stress contours. The proposed technique's performance was compared to finite-element simulations using a partial differential equation (PDE) solver. The trained DL model can predict the stress distributions with a mean absolute error of 0.9% and an absolute peak error of 0.46% for the von Mises stress distribution. This study shows the feasibility and potential of using DL techniques to bypass FEA for stress analysis applications.
Introduction
Stress analysis is an essential part of engineering and design. The development of various design systems continuously imposes higher demands on computational cost while preserving accuracy. Numerical methods, such as structural finite element analysis (FEA), are typically used to conduct stress analysis of various structures. Researchers commonly use FEA methods to evaluate the design, safety, and maintenance of different structures in various fields, including aerospace, automotive, architecture, and civil structural systems. The current workflow for FEA applications includes: a) modeling the geometry and its components, which can be time-consuming depending on the system's complexity; b) specifying material properties, boundary conditions, and loading; and c) applying a meshing strategy to the geometry. The time-consuming nature and complexity of current FEA workflows make them impractical for real-time or near real-time applications, such as in the aftermath of a disaster or during extreme disruptive events that require immediate corrections to avoid catastrophic failures. Given the steps of FEA described above, performing a complete stress analysis with conventional FEM has a high computational cost. To resolve this issue, we propose a Deep Learning (DL) method [1,2] to construct deep neural networks (DNN) which, once trained, allow bypassing the FEA. This method may enable real-time stress analysis by leveraging machine learning (ML) algorithms. DNNs can model complicated, nonlinear relationships between input and output data; thus, these models help us acquire adequate knowledge to make predictions for unseen problems. Data-driven approaches that model physical phenomena have been lauded for their significant and growing successes. Recent works have included design and topology optimization [3][4][5][6], data-driven approaches in fluid dynamics [7][8][9][10], molecular dynamics simulation [11][12][13][14], and material properties prediction [15][16][17][18]. Also, Atalla et al. and Levin et al. [19,20] have used neural regression for FEA model updating.
Recently, DL has shown promise in solving conventional mechanics problems. Some researchers have used DL for structural damage detection, a promising alternative to conventional structural health monitoring methods [21][22][23][24]. Javadi et al. [25] used a typical neural network in FEA as a surrogate for the traditional constitutive material model. They simplified the geometry into a feature vector, an approach that is hard to generalize to more complicated cases. The numerical quadrature of the element stiffness matrix in FEA was optimized on a per-element basis by Oishi and Yagawa [26] using DL. Their approach helps to accelerate the calculation of the element stiffness matrix. A convolutional neural network (CNN) is a type of neural network that has shown great performance in several applications related to computer vision and image processing. The significant learning ability of CNNs is mainly due to their several feature-extraction stages, which can intrinsically learn representations from the input data. Madani et al. [27] developed a CNN architecture for stress prediction of arterial walls in atherosclerosis. Also, Liang et al. [28] proposed a CNN model for aortic wall stress prediction. Their method is expected to allow real-time stress analysis of human organs for a wide range of clinical applications. In this work, we tackle the limitations of stress analysis using FEA. We propose an end-to-end DL method to predict the stress distribution in 2D linear elastic steel plates. The algorithm takes geometry, boundary conditions, and load as input and renders the von Mises stress distribution as output. We model steel gusset plates with loading applied at different edges, different boundary conditions, and varying complex geometries. A dataset of 104448 samples with varying geometry, boundary conditions, and loads is used to train and evaluate the network.
Background on deep learning and convolutional neural network
Artificial intelligence (AI) developed into ML over time, growing out of pattern recognition and learning theory [1]. Samuel [29] defined ML as a "field of study that allows computers to learn without being explicitly programmed". ML algorithms can learn from data, and during the learning process they build models which are used to make decisions or data-driven predictions. DL is a subfield of ML that focuses on modeling hierarchical representations or abstractions to define higher-level concepts. The DL community is making significant advances in solving problems that AI has struggled with for many years [1]. DL has proven highly useful in discovering complex structures in data of high dimensions. Thus, DL is practical for many domains such as government and business, specifically computer vision and image recognition. These methods have shown significant performance in image classification [30], natural sentence classification [31], and image segmentation [32]. DL techniques can extract features; however, we should be careful in choosing the appropriate technique for a specific task. Among these approaches, CNNs have been demonstrated to be particularly efficient at acquiring a representation of grid-type input data such as matrices or images. LeCun et al. [33] proposed the initial skeleton of the CNN to classify handwritten digits. Over the last few years, massive hierarchical image databases, programmable GPUs, and highly parallel computing have significantly improved CNNs.
CNN architectures have developed greatly since the earliest work [34][35][36], and performance has improved remarkably as the networks have become deeper and more complex [37]. CNNs use four concepts to enhance their performance: local connections, weight sharing, pooling, and multiple layers. CNNs are composed of a series of stages. The first stage involves a convolutional layer, with units in this layer organized in feature maps. Local patches in the feature maps of the previous layer are connected to each unit by a set of weights known as a filter bank. In the second stage, the output of this locally weighted sum is passed through a nonlinearity, such as a ReLU or another activation function. The result is then passed on to the pooling layer; pooling layers are used to merge semantically similar features into one. Finally, a series of convolutional, nonlinear, and pooling stages are stacked, followed by further convolutional and fully-connected layers (a minimal code sketch of these stages appears at the end of this section). A CNN uses backpropagated gradients similar to a typical deep network, allowing all the filter banks to be trained simultaneously [37].
Deep learning in civil and mechanical engineering
Artificial neural networks with multilayer perceptrons (MLPs) have been used in civil and mechanical engineering for many years. Researchers use ANNs for structural analysis [38][39][40], regression of material constitutive properties [41,42], and materials' failure and damage [43,44]. Gulgec et al. [23] proposed a CNN architecture to classify simulated damaged and intact samples and localize the damage in steel gusset plates. Modarres et al. [45] studied composite materials to identify the presence and type of structural damage using CNNs. Also, for detecting concrete cracks without calculating the defect features, Zang et al. [46] proposed a vision-based method based on CNNs. An approach for predicting stress distribution on all layers of non-uniform 3D parts was presented by Khadilkar et al. [47]. More recently, Nie et al. [48] developed a CNN-based approach to predict the stress field in a 2D linear cantilever beam. The above works used DL techniques for structural analysis. Guo et al. [49] studied the bending analysis of Kirchhoff plates of various shapes, loads, and boundary conditions. Anitescu et al. [50] presented an artificial neural network-based collocation method for solving boundary value problems. Samaniego et al. [51] studied DNNs as the basis of a technique for approximating data; such networks have shown promising results in areas such as visual recognition. Zhuang et al. [52] proposed a deep autoencoder-based energy method (DAEM) for the bending, vibration, and buckling analysis of Kirchhoff plates. Guo et al. [53] presented a modified neural architecture search method based on physics-informed DL for stochastic analysis of heterogeneous porous materials. Guo et al. [54] proposed a deep collocation method (DCM) based on transfer learning for solving potential problems in non-homogeneous media. To our knowledge, the current work is the first 'DL-FE substitution' approach to perform fast and accurate prediction of high-resolution stress distributions in 2D steel plates.
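To make the convolution–nonlinearity–pooling pipeline described above concrete, the following is a minimal PyTorch sketch; the channel counts are illustrative assumptions, not the architecture used in this work, though the 600 × 600 input size matches the images described later.

```python
# Minimal sketch of one CNN stage: filter bank (convolution), nonlinearity
# (ReLU), and pooling, as described in the background section.
import torch
import torch.nn as nn

stage = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # filter bank
    nn.ReLU(),                                                           # nonlinearity
    nn.MaxPool2d(kernel_size=2),                                         # pooling
)

x = torch.randn(1, 1, 600, 600)  # one single-channel 600 x 600 input image
print(stage(x).shape)            # torch.Size([1, 8, 300, 300])
```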
Data generation
Two-dimensional steel plate structures with five edges, E1 to E5 (denoting edges 1 to 5), as shown in Fig. 1, are considered and assumed to be made of homogeneous and isotropic linear elastic material. The 2D steel plates have a geometry similar to that of gusset plates, as used for connecting beams and columns to braces in steel structures. The boundary conditions and loading angles simulate conditions similar to those affecting common gusset plate structures under external loading. Analysis of the behavior of these components is essential, since various reports have documented failures of gusset plates subject to lateral loads [55][56][57][58]. The distributed static loads applied to the plates in this study range from 1 to 5 kN in intervals of 1 kN. Moreover, loads are applied at three angles, π/6, π/4, and π/3, on either one or two edges of the plate. Each load is decomposed into its horizontal and vertical components. Also, four types of boundary conditions are considered, as shown in Fig. 2, based on the boundary conditions of real gusset plates. All translational and rotational displacements are fixed at the boundary conditions. All input variables used to initialize the population are shown in Table 1. The width and height of the plate range from 30 to 60 cm. Various geometries are generated by changing the position of each node in the horizontal and vertical directions, as shown in Fig. 1, which leads to 1024 unique pentagons. The material properties remain unchanged and isotropic for all samples.
Input data
The geometry is encoded into a 600 × 600 matrix as a single-channel binary image, where 0 (black) and 1 (white) denote the outside and inside of the geometry, as shown in Fig. 3(a). The boundary conditions are also represented by 600 × 600-pixel binary images, where the constrained edges are marked by 1 (white) (Fig. 3(b)). The stress values of all elements outside the material geometry are assigned zero, as shown in Fig. 3(e). The dimensions of the largest sample are 600 mm × 600 mm, and of the smallest 300 mm × 300 mm. The size of each element is 1 mm × 1 mm, which means that each image has 360000 pixels. This high-resolution dataset offers significant accuracy. The maximum and minimum von Mises stress values for elements across the entire dataset are 96366 and -0.73 MPa, respectively. We normalized all the output data between 0 and 1 to ensure faster convergence and encoded it into 600 × 600 matrices.
Convolutional neural network architecture
The CNN is built as a sequence of convolutional layers, which learn to encode the input in simple signals and reconstruct it [59]. Our CNN architecture consists of three stages of layers. The first stage is downsampling, which consists of seven convolutional layers (E1, E2, E3, E4, E5, E6, E7); the second stage has three layers (RS1, RS2, and RS3) of Squeeze-Excitation and Residual blocks (SE-ResNet). In addition, Inception and MobileNetV2 blocks are swapped in for the SE-ResNet block to check whether these modules can further enhance the network's performance. The third stage is upsampling, consisting of six deconvolutional layers (D1, D2, D3, D4, D5, D6), as illustrated in Fig. 4.
Residual block
We use residual blocks [37] to address the vanishing gradient problem. In addition, residual blocks are computationally lightweight and result in only very small increases in model complexity. The shortcut connection in residual blocks simply performs identity mapping, and its output is added to the output of the stacked layers.
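The following is a minimal PyTorch sketch of the residual block just described: the shortcut performs identity mapping and is added to the output of the stacked layers. The channel counts and inner layers are illustrative assumptions, not the exact blocks of this network.

```python
# Minimal sketch of a residual block with an identity shortcut.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut: the input is added to the stacked-layer output.
        return self.relu(self.body(x) + x)

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```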
Squeeze-and-Excitation (SE) blocks improve the representative capacity of the network by enabling dynamic channel-wise feature recalibration. An SE block can be implemented in five phases: 1) the input convolutional block and its number of channels are given to the algorithm; 2) using average pooling, each channel is squeezed to a single numeric value; 3) a fully connected layer reduces the channel dimensionality and is followed by a ReLU to add nonlinearity; 4) a second fully connected layer followed by a sigmoid activation provides smooth gating for each channel; 5) finally, the feature maps of the convolutional block are weighted according to these gates (a minimal code sketch of these phases is given at the end of this section). Figure 5 depicts an SE-ResNet module, in which the SE block transformation is regarded as the non-identity branch of a residual module, and the squeeze and excitation operations act before summation with the identity branch. Using both SE and ResNet in the network outperforms using ResNet alone [60].
Inception block
Inception modules are used to reduce the computational cost of CNNs. Since neural networks have to deal with a vast array of images, each with different content, they must be carefully designed. Using the vanilla version of the inception module, we perform convolutions on the input with three different filter sizes (1 × 1, 3 × 3, 5 × 5) instead of one; max pooling is also performed. The outputs are then concatenated and sent to the next layer. Hence, convolutions occur at the same level and the network gets wider, not deeper. Compared with shallower and less wide CNNs, this method offers significant quality gains at a modest increase in computational cost [36]. Figure 6 depicts the inception module.
MobileNetV2 block
MobileNetV2 is based on an inverted residual block with shortcut connections between thin bottleneck layers [61]. A lightweight depth-wise convolution is used in the intermediate layer to filter features as a source of nonlinearity. The nonlinearities must be removed in the narrow layers to maintain representational power. In general, in this model the bottlenecks encode the intermediate inputs and outputs, while the inner layer encodes how the model transforms from lower-level concepts such as pixels to higher-level features such as image categories. Lastly, shortcuts improve training speed and accuracy, just like traditional residual connections. Figure 7 depicts the MobileNetV2 module.
Network layers and hyperparameters
All details of the network layers and hyperparameters can be found in Tables 2 and 3. As can be seen, the models consist of 7 Conv layers, 3 different bottleneck blocks, and 6 ConvT layers. Of the various combinations of Conv and ConvT layers with maximum channel counts of 512, 1024, 2048, and 4096, the model with 1024 channels shows the best performance. Therefore, we keep the network with 1024 channels as the primary model and swap the bottleneck each time with SE-ResNet, Inception, and MobileNetV2 blocks. We keep the bottleneck dimension the same for all models to match the first ConvT layer. The batch size is set to 16, which leads to the best accuracy compared to other batch sizes. Learning rates from 1e−3 to 1e−6 were tested, and 1e−5 led to the best convergence.
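Before turning to the loss function, the following is a minimal PyTorch sketch of the five SE-block phases listed above; the reduction ratio is an illustrative assumption, not necessarily the value used in this work.

```python
# Minimal sketch of a Squeeze-and-Excitation block (five phases).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):  # phase 1: channels given
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # phase 2: squeeze each channel to one value
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # phase 3: reduce dimensionality
            nn.ReLU(),                                   #          + nonlinearity
            nn.Linear(channels // reduction, channels),  # phase 4: second FC layer
            nn.Sigmoid(),                                #          + smooth gating
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # phase 5: reweight the feature maps channel-wise

se = SEBlock(64)
print(se(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```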
Loss function and performance metrics
We used the mean squared error (MSE) as the training loss, defined in Eq. (1):

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl(s(i)-\hat{s}(i)\bigr)^{2} \quad (1)$$

MSE penalizes large errors more heavily than the mean relative error (MRE), and the errors are approximately normally distributed. Using the mean absolute error (MAE), MRE, percentage mean absolute error (PMAE), peak absolute error (PAE), and percentage peak absolute error (PPAE) helps evaluate the overall quality of the predicted stress distribution. These metrics are defined in Eqs. (2)-(5), where s(i) is the stress value at node i computed by FEA as the ground truth, ŝ(i) is the corresponding stress predicted by the DL model, n is the total number of elements in each sample (360000 in our work), and | | denotes the absolute value. Our model's prediction and the ground truth are displayed as 600 × 600 resolution images. To measure the accuracy of predictions by comparing them to the ground truth, we use

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\bigl|s(i)-\hat{s}(i)\bigr| \quad (2)$$

$$\mathrm{MRE}=\frac{1}{n}\sum_{i=1}^{n}\frac{\bigl|s(i)-\hat{s}(i)\bigr|}{\bigl|s(i)\bigr|+\epsilon} \quad (3)$$

where ε is a small value to avoid division by zero. The percentage mean absolute error is defined as

$$\mathrm{PMAE}=\frac{\mathrm{MAE}}{\max\{s\}-\min\{s\}}\times 100\% \quad (4)$$

where max{s} is the maximum value in the set of ground-truth stress values and min{s} is the minimum value. PAE and PPAE measure the accuracy of the largest stress value in the predicted stress distribution and are defined as

$$\mathrm{PAE}=\bigl|\max\{s\}-\max\{\hat{s}\}\bigr|, \qquad \mathrm{PPAE}=\frac{\mathrm{PAE}}{\max\{s\}}\times 100\% \quad (5)$$

7 Results and discussion
All codes are written in PyTorch Lightning and run on two NVIDIA TITAN RTX 24G GPUs. We use the AdamW optimizer (Adam with weight decay) to speed up the convergence of the models. We train and evaluate different models based on Table 3 to find the model with the best performance. The training data size for models 1 to 3 is 83558 and the testing data size is 20890, randomly divided with a train/test ratio of 80%-20%. Figure 8 shows the MSE and MAE losses as a function of epochs for model 1. Figures 8(a) and 8(b) use linear and logarithmic scales, respectively. Figure 8(a) shows that the MSE and MAE curves decline rapidly after a few epochs, while Fig. 8(b) gives a more precise representation of the model's behavior: MSE is smaller than MAE, but both follow similar general trends. We save the best checkpoint during validation, and all error metrics are based on the best checkpoint. Models 4 to 6 are validated with K-fold cross-validation to ensure that the model is generalizable; to reduce the computational cost, we divide the dataset into three folds. K-fold cross-validation shows the best performance in all models on most metrics, as can be seen in Table 4, which indicates that the model generalizes. We replace the SE-ResNet block in the bottleneck with the Inception and MobileNetV2 blocks in models 2 and 3, respectively. Model 1 has the best performance in terms of PPAE, with an error of 0.46%, and model 2 is the best model based on PMAE, with a 0.57% error. Figure 9 depicts the performance of the different models in terms of MAE. As can be seen in Fig. 9, models 3 and 1, which have MobileNetV2 and SE-ResNet blocks in the bottleneck, have almost the same performance, and model 2 with the Inception block is the best in terms of MAE. We deem these results satisfactory for stress distribution predictions, particularly the PPAE, since the peak value is the most critical for stress distribution in engineering applications. Figure 10 illustrates the cumulative distributions of PMAE and PPAE on the test dataset of model 1. Figure 10(a) shows that about 80% of predicted samples have a PMAE of less than 0.9%, and 50% of samples have a PMAE of less than 0.46%, the median. Figure 10(b) shows that about 99% of predicted samples have a PPAE of less than 0.46%, and 50% of the predicted samples have a PPAE of 0.06%.
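For reference, the metrics in Eqs. (2)-(5) can be computed as in the following minimal NumPy sketch; the arrays are illustrative, not the study's data, and the ground-truth and predicted stress fields are assumed to be flattened.

```python
# Minimal sketch of the error metrics in Eqs. (2)-(5).
import numpy as np

def metrics(s: np.ndarray, s_hat: np.ndarray, eps: float = 1e-8) -> dict:
    mae = np.mean(np.abs(s - s_hat))                      # Eq. (2)
    mre = np.mean(np.abs(s - s_hat) / (np.abs(s) + eps))  # Eq. (3)
    pmae = 100.0 * mae / (s.max() - s.min())              # Eq. (4)
    pae = abs(s.max() - s_hat.max())                      # Eq. (5), first part
    ppae = 100.0 * pae / s.max()                          # Eq. (5), second part
    return {"MAE": mae, "MRE": mre, "PMAE": pmae, "PAE": pae, "PPAE": ppae}

s = np.array([0.0, 10.0, 50.0, 100.0])    # ground-truth stresses (illustrative)
s_hat = np.array([0.5, 9.0, 52.0, 99.0])  # predicted stresses (illustrative)
print(metrics(s, s_hat))
```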
The predictions for some randomly selected samples from the test dataset of model 1 are visualized in Fig. 11. Each row represents a sample. Columns (a) to (d) represent geometry, boundary conditions, and load in the horizontal and vertical directions, respectively. Columns (e) and (f) represent the ground-truth and predicted stress distributions. As can be seen, there is high-fidelity agreement between the ground-truth and predicted stress distributions, in both the maximum stress and the overall stress distribution, across the different samples. Some inaccurate predictions are shown in Fig. 12; these predictions still provide useful information.
7.1 Effect of dataset size on the performance of the network
We break the data into different sizes to evaluate the effect of data size on the network's performance for model 1. Therefore, besides training with the entire dataset of 104448 samples, we train the network with 10000, 20000, 30000, 40000, 50000, and 70000 samples. Figure 13 demonstrates that training with just 10% of the dataset can achieve a mean error of 1.85%, which is acceptable in most engineering applications. It can also be seen that to achieve a mean error of less than 1%, we should train the network with at least 90% of the dataset. We also evaluate the effect of data size on the Gaussian distributions of PMAE and PPAE, illustrated in Figs. 14(a) and 14(b). As shown in Fig. 14(a), increasing the data size decreases the standard deviation of PMAE; however, a data size of 70000 and the total data size have almost the same standard deviation. Figure 14(b) shows that the standard deviation of PPAE decreases when the data size increases from 50000 to 70000. As a result, we should train the network with at least 70000 examples, 67% of our dataset, to ensure an acceptable standard deviation of PPAE.
Conclusions
In this work, we used end-to-end DL techniques. We developed a CNN to alleviate the need for finite element methods in predicting high-resolution stress distributions in loaded steel plates. The CNN was designed and trained to use the geometry, boundary conditions, and load as input, and it provided high-resolution stress contours as output. We used the PDE toolbox of MATLAB to generate the output data for training, containing 104448 FEM samples. We trained and evaluated different models to find the model with the best performance. The best model can predict the stress distributions with a mean absolute error of 0.9% and a maximum stress error of 0.46% for the von Mises stress distribution. The effect of dataset size on model performance was also studied. Training the network with just 10% of the dataset achieved a mean error of 1.85%, which can be considered acceptable in certain engineering applications. Moreover, we evaluated the effect of dataset size on the Gaussian distributions of the mean and maximum stress errors. Increasing the data size decreased the standard deviation of the mean error, and the standard deviation of the maximum stress error also decreased as the number of samples increased. Furthermore, the Gaussian distributions of the mean and maximum stress errors demonstrated that a greater quantity of data induces a smaller standard deviation in PMAE and PPAE.
2022-12-07T16:52:29.945Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "674d0988522414eac3ec850623c6b74402ec47d5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11709-022-0882-5.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "07664ed0796176835df707ed250cbdda2c20aa30", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
257376771
pes2o/s2orc
v3-fos-license
Variation in maternal mortality in Sidama National Regional State, southern Ethiopia: A population based cross sectional household survey
Introduction: Maternal mortality studies conducted at the national level do not provide the information needed for planning and monitoring health programs at lower administrative levels. The aim of this study was to measure maternal mortality, identify risk factors, and assess district-level variations in Sidama National Regional State, southern Ethiopia.
Methods: A cross sectional population-based survey was carried out in households where women reported pregnancy and birth outcomes in the past five years. The study was conducted in the Sidama National Regional State, southern Ethiopia, from July 2019 to May 2020. A multi-stage cluster sampling technique was employed. The outcome variable of the study was maternal mortality. Complex sample logistic regression analysis was applied to assess variables independently associated with maternal mortality.
Results: We registered 10602 live births (LB) and 48 maternal deaths, yielding an overall maternal mortality ratio (MMR) of 419 (95% CI: 260-577) per 100,000 LB. Aroresa district had the highest MMR, with 1142 (95% CI: 693-1591) per 100,000 LB. Leading causes of death were haemorrhage, 21 (41%), and eclampsia, 10 (27%). Thirty (59%) mothers died during labour or within 24 hours after delivery, 25 (47%) died at home, and 17 (38%) at a health facility. Mothers who had no formal education had a higher risk of maternal death (AOR: 4.4; 95% CI: 1.7-11.0). The risk of maternal death was higher in districts with a low midwife-to-population ratio (AOR: 2.9; 95% CI: 1.0-8.9).
Conclusion: The high maternal mortality with district-level variations in Sidama Region highlights the importance of improving obstetric care and employing targeted interventions in areas with high mortality rates. Due attention should be given to improving access to female education. Additional midwives have to be trained and deployed to improve maternal health services and consequently save the lives of mothers.
Introduction
Ethiopia is the second most populous country in Africa, with more than 110 million people [1], constituted by 11 regional states and two chartered administrative cities. The regional states and the two administrative cities are further divided into 800 woredas (districts). Important priorities on the government's agenda include improving maternal health and consequently decreasing maternal mortality [2]. To improve maternal health and reduce maternal deaths, accurate data on maternal mortality should come from studies conducted at the subnational level. However, the country's maternal mortality data mainly come from studies carried out at the national level [3,4]. National maternal mortality estimates may not provide sufficient detail to understand the distribution of maternal deaths at the local levels relevant for health planning and monitoring. Hence, sub-national maternal mortality estimates are needed for program monitoring and local decision making. To understand the distribution of maternal deaths at the local level and improve the monitoring of progress towards reducing maternal deaths, maternal deaths need to be accurately counted and their likely causes identified [5][6][7]. However, most developing countries do not have systems at the national or sub-national level to register vital events, including maternal deaths [8][9][10]. In areas without a vital registration system, maternal deaths can be measured through population-based household surveys [11]; a simplified sketch of the ratio computation used in such surveys is given below.
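The following is a simplified, unweighted sketch of how a maternal mortality ratio and a normal-approximation confidence interval can be computed from survey counts. The numbers are illustrative, and the estimates reported in this study additionally account for the complex sampling design, so they are not reproduced by this sketch.

```python
# Minimal sketch: MMR per 100,000 live births with a Poisson-approximation CI.
import math

def mmr_with_ci(deaths: int, live_births: int, z: float = 1.96):
    mmr = deaths / live_births * 100_000
    se = math.sqrt(deaths) / live_births * 100_000  # Poisson approximation
    return mmr, mmr - z * se, mmr + z * se

point, low, high = mmr_with_ci(deaths=40, live_births=10_000)  # illustrative counts
print(f"MMR = {point:.0f} (95% CI: {low:.0f}-{high:.0f}) per 100,000 live births")
```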
Most maternal deaths occur during labour, delivery or within 42 days postpartum. Important causes are obstetric haemorrhage, infections and hypertensive disorders of pregnancy [12]. Most of these deaths can be avoided through cost effective interventions, including skilled birth attendance [13][14][15]. In countries with many maternal deaths, the coverage and usage of essential interventions are low; where available, the interventions are often provided with poor quality, with a persisting gap between rich and poor, and between urban and rural populations [16]. The Sustainable Development Goal (SDG) aims at reducing the MMR to less than 70 per 100,000 live births (LB) by 2030, but this will not be achieved if universal coverage of essential interventions is not improved [5]. Since the launching of the Millennium Development Goal (MDG), the government of Ethiopia has taken measures to improve access to universal health coverage and emergency obstetric care, and has implemented other interventions focusing on maternal health services to reduce maternal mortality [17]. Hence, the maternal mortality ratio (MMR) in the country was reduced from 1030 per 100,000 LB in 2000 to 401 per 100,000 LB in 2017 [4]. Despite these improvements, many mothers still die [4]. The 2016 Ethiopia Demographic and Health Survey (DHS) reported an MMR of 412 per 100,000 LB [3]. Despite the prevailing problem, measuring maternal mortality remains a challenge in Ethiopia, as the country lacks a functional vital registration system [7]. Well organized household surveys, using large and representative samples with verbal autopsy (VA), can provide information on the local distribution and causes of maternal deaths. To the best of our knowledge, there have been few studies describing maternal mortality estimates and trends in the reduction of MMR at sub-national and district level in the country. Population based studies conducted in south-west Ethiopia [18] and northern Ethiopia [19] found an MMR of 425 and 266 per 100,000 LB, respectively. An implementation study from south-west Ethiopia demonstrated a reduction of the MMR by 64% during the intervention period, from 477 to 219 deaths per 100,000 LB [15]. As there is no previous population-based study describing maternal mortality estimates and district-level variations in Sidama National Regional State, and as the principal investigator (AZK) is affiliated with Hawassa University, which is located in Sidama National Regional State, it was natural to conduct such a comprehensive study in this population. We carried out this study in the Sidama National Regional State, southern Ethiopia, with the following specific objectives: 1) measure the maternal mortality ratio; 2) measure variations of the maternal mortality ratio at district level; 3) assess determinants of maternal deaths. This study could provide essential information to improve maternal health services relevant for lifesaving comprehensive emergency obstetric care in Sidama National Regional State. Furthermore, it will provide important information to the region for priority setting and resource allocation, identifying areas with high rates of maternal mortality. This study can therefore also inform other regional states in the country to carry out similar studies to understand the magnitude of and variations in maternal mortality and improve maternal health care. The information from this study will support the design of maternal health programs at large, which supports the country's effort towards attaining the SDG.
Study design and setting
We used a cross sectional study design employing a population-based survey in households that reported pregnancy and birth outcomes in the past five years (July 2014-June 2019). The study was conducted in six woredas (districts): Aleta Chuko, Aleta Wondo, Aroresa, Daela, Hawassa Zuriya and Wondogenet of Sidama National Regional State, southern Ethiopia, from July 2019 to May 2020. Sidama National Regional State is one of the 11 regional states in Ethiopia. The region had a population of 4.3 million people in 2020 [20] and is administratively divided into 30 rural districts, 6 town administrations and 536 rural kebeles (the smallest administrative structure, with an average population of 5000). Under the kebele, there are local structures known as limatbudin (administrative units organized by 40-50 neighbouring households). The region has 18 hospitals (13 primary, 4 general and 1 tertiary), 137 health centres and 553 health posts operated by the government [21]. In the region, there are also 4 hospitals (1 general and 3 primary), 21 speciality and higher clinics, 131 medium clinics and 79 primary clinics run by private owners. The health centres provide basic emergency obstetric and newborn care (BEmONC), whereas hospitals are responsible for comprehensive emergency obstetric and newborn care (CEmONC) in addition to the BEmONC [22].

Study population and sampling technique
All women who experienced pregnancy and birth outcomes in the past five years in Sidama National Regional State were the source population. Women residing in sampled households who had pregnancy and birth outcomes (live births, stillbirths and neonatal deaths) in the five years preceding the survey were the study population. Fig 1 shows the sampling strategy of the study. We followed a multistage cluster sampling technique to select the study population. Probability sampling, the gold standard technique recommended for obtaining reliable (precise) findings, was employed at each sampling stage [23]. In the first stage, we listed all 30 rural districts of the region with a unique identification code. Then, we selected 6 districts (20% of the districts) by simple random sampling. In the second stage, we listed all the kebeles in the 6 districts and randomly selected 40 kebeles proportional to the size of the kebeles in the districts. We employed a complex sampling technique and used a seed number (245987) in the Statistical Package for the Social Sciences (SPSS) to generate the sample of kebeles. In the third stage, we listed all the limatbudins for each of the selected kebeles and randomly selected 6 limatbudins from each kebele; altogether 240 limatbudins from the 40 kebeles. To identify mothers who experienced pregnancy and pregnancy outcomes in the past five years, we visited all the households in the selected limatbudins and listed all the households that reported births in the past five years. Finally, we selected 37 households from each limatbudin, which amounts to 8880 households in total from the 240 limatbudins (an illustrative sketch of this sampling scheme is given below).

Variables
Maternal mortality was the outcome measurement of the study. Explanatory variables were: educational level of mother, educational level of husband, road type used to reach the nearest health facility, distance to the nearest health centre, distance to the nearest hospital, occupation of household head, number of births given in the past five years, family size, wealth index, hospital to population ratio, health centre to population ratio, doctor to population ratio and midwife to population ratio.
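As a rough illustration of the four-stage selection just described, the following Python sketch draws 6 of 30 districts, 40 kebeles approximately proportional to size, 6 limatbudins per kebele and 37 households per limatbudin. All unit names and population sizes are hypothetical placeholders, the proportional-to-size step is a sequential draw-by-draw approximation, and this is not the authors' SPSS procedure; only the seed value 245987 is taken from the text.

```python
import random

random.seed(245987)  # seed value quoted in the text (used there in SPSS)

def pps_sample(units, sizes, k):
    """Draw-by-draw probability-proportional-to-size sampling without replacement."""
    units, sizes = list(units), list(sizes)
    chosen = []
    for _ in range(k):
        i = random.choices(range(len(units)), weights=sizes, k=1)[0]
        chosen.append(units.pop(i))
        sizes.pop(i)
    return chosen

# Stage 1: 6 of the 30 rural districts, equal probability of selection
districts = [f"district_{i:02d}" for i in range(1, 31)]
sampled_districts = random.sample(districts, k=6)

# Stage 2: 40 kebeles selected proportional to (hypothetical) population size
kebeles = {f"kebele_{i:03d}": random.randint(3000, 8000) for i in range(1, 201)}
sampled_kebeles = pps_sample(kebeles.keys(), kebeles.values(), k=40)

# Stage 3: 6 limatbudins from each sampled kebele -> 240 clusters in total
sampled_limatbudins = [
    f"{keb}/lb_{j}" for keb in sampled_kebeles
    for j in random.sample(range(1, 11), k=6)
]

# Stage 4: 37 eligible households per limatbudin -> 240 * 37 = 8880 households
print(len(sampled_limatbudins), len(sampled_limatbudins) * 37)  # 240 8880
```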
The geographic locations of the households, the nearest health centres and the nearest hospitals were mapped with a global positioning system (GPS) receiver by data collectors who visited all the sampled households during data collection. Traveling time by walking to the nearest hospital was assessed by the data collectors based on reports from the respondents. Data on the number of hospitals, health centres, doctors and midwives of the sampled and other districts of the region was obtained from the Sidama National Regional Health Bureau, Human Resource Department (unpublished). The wealth index was created using 15 household asset variables [18] broadly categorized into five groups: assets owned (radio, mobile phone and motorbike), livestock owned (cattle, horse or mule or donkey, and sheep or goat), housing characteristics and utilities (flooring materials, roofing materials, number of rooms used for sleeping, source of drinking water, type of toilet facilities, access to electricity and use of kerosene lamp), cash crop grown, and ownership of a horse or mule used for transportation. Household utilities and asset variables used for household wealth index creation are presented in S1 Table. Type of road to the nearest health facility was obtained from the participant interviews.

Definitions
Maternal death. A death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management, but not from accidental or incidental causes; International Classification of Diseases and Related Health Problems (ICD-10) [24].
Late maternal death. A death of a woman from direct or indirect obstetric causes, more than 42 days but less than one year after termination of pregnancy [24].
Comprehensive maternal death. A grouping that combines both early and late maternal deaths (ICD-11) [25].
Maternal mortality ratio (MMR). The number of maternal deaths during a given time period per 100,000 live births during the same time period.
Verbal autopsy for maternal health. A method of finding out the medical causes of death and ascertaining factors that may have contributed to the death in women who died outside of a medical facility. The VA consists of interviewing people who know about the events leading to the death, such as family members, neighbours and traditional birth attendants [26].

Data sources and measurement
The data was collected from households that reported pregnancy and pregnancy outcomes in the past five years. In a household which did not have a maternal death, a mother was interviewed about her pregnancy experiences and household characteristics using an interviewer administered questionnaire. When a mother was absent during the initial visit, the data collectors revisited the household the next day. The data was collected by diploma level teachers recruited from each kebele. In a household where a maternal death had occurred, we interviewed the father or any adult knowledgeable about the death of the mother. The data was obtained by administering VA questions adapted from the WHO manual for maternal death [27]. Two public health officers who were familiar with the language and culture of the study area independently conducted the VA interviews.
The VA interviewers determined the cause of death using pre-coded options for the major causes of maternal deaths: bleeding (haemorrhage), fever (sepsis), convulsion (hypertension), prolonged or obstructed labour, and an option for other causes [24].

Data quality control
The questionnaire was developed after reviewing similar studies. Initially, the questionnaire was prepared in English, translated into the local language (Sidaamu Afoo) and then back translated to English by another individual. VA interview questions were adapted from the World Health Organization (WHO) VA guideline [27]. We used the WHO ICD-10 guideline for the ascertainment of causes of maternal deaths [24]. Inter-rater agreement between the two VA interviewers in ascertaining the cause of maternal deaths was assessed by the kappa statistic. We used the Landis and Koch inter-rater reliability classification to interpret the kappa coefficient: <=0.4: poor to fair; >0.4 to <=0.6: moderate agreement; >0.6-0.8: substantial agreement; and >0.8: high agreement [28]. The computed kappa statistic was 0.75 (95% CI: 0.62-0.87), which indicates substantial agreement between the two VA interviewers. Internal consistency of the variables used for wealth index creation was determined using Cronbach's alpha reliability statistics, found to be 0.54, and sampling adequacy was assessed by the Kaiser-Meyer-Olkin test, with a test result of 0.64. The data collectors, the supervisors and the VA interviewers were given training by the principal investigator. Key terms and concepts were translated into local terms during the training. The questionnaire was pretested in one district not included in the survey. The supervisors followed the data collectors and checked the consistency and completeness of the questionnaires on a daily basis. The data was double entered and validated using EpiData version 3.1 software (EpiData Association 2000-2021, Denmark).

Sample size estimation
Sample size estimation for the survey was determined based on the following assumptions: MMR of 412/100,000 LB, crude birth rate of 32 per 1000 population and average household size of 4.6 [3]. With the assumption of an MMR of 412 per 100,000 LB, we used a design effect of 2 (as the study employed a multistage cluster sampling method) and a 0.14% precision level to obtain the number of LB needed for this study. The estimated sample was 15879 LB. We wanted to estimate maternal mortality within 0.14 percentage points of the true value with 95% confidence. From a population of 100,000 people and an assumed crude birth rate of 32 per 1000 people, we would have (32/1000 × 100,000) 3200 LB per year (16000 LB in 5 years). Hence, we expected to observe 66 maternal deaths over five years among 16000 LB, with a 95% confidence interval of the MMR of 412 (324-524) per 100,000 LB [29]. We assumed that two LB would occur in one household over a five-year period [18], and hence 8000 households would be visited to get the 16000 LB. Allowing for 10% nonresponse, the final number of households estimated for the survey was 8800. We used OpenEpi software to calculate the sample size (OpenEpi: Open Source Epidemiologic Statistics for Public Health, version 3.01, www.OpenEpi.com) [29].
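The sample size arithmetic stated above can be reproduced step by step. The sketch below is not the authors' OpenEpi session; it simply recomputes the expected live births, expected maternal deaths and household count from the assumptions given in the text (MMR 412 per 100,000, crude birth rate 32 per 1000, two LB per household over five years, 10% non-response).

```python
# Worked arithmetic behind the sample size stated above (a sketch, not the
# authors' OpenEpi calculation). All inputs are taken from the text.
mmr = 412 / 100_000        # assumed maternal mortality ratio per live birth
cbr = 32 / 1_000           # crude birth rate per person per year
population = 100_000
years = 5

live_births_per_year = cbr * population            # 3,200 LB per year
live_births = live_births_per_year * years         # 16,000 LB in 5 years
expected_deaths = mmr * live_births                # ~66 maternal deaths

births_per_household = 2                           # assumed 2 LB per household in 5 years
households = live_births / births_per_household    # 8,000 households
households_with_nonresponse = households * 1.10    # +10% non-response ~= 8,800

print(live_births, round(expected_deaths), round(households_with_nonresponse))
# 16000.0 66 8800
```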
Statistical analysis
We used Stata version 15 for data analysis (Stata Corp., LLC, College Station, Texas, USA). This study used data obtained through a multistage cluster sampling design [30,31]. To account for the sampling design, we employed complex survey data analysis methods with sampling weights adjusted for non-response [30,32]. The sampling weights were employed to correct for unequal probabilities of selection, so as to produce meaningful estimates that correspond to the population of interest [33]. This study had four sampling units: district, kebele, limatbudin and household. In the primary sampling unit, we applied the same sampling weight, since the districts were selected with equal probability of selection. However, the kebeles, limatbudins and households were selected with different selection probabilities at their respective levels, and hence we computed a sampling weight for each of them that differs according to their sampling probability. We computed sampling weights adjusted for non-response using the three steps stated below [32]. We initially calculated the sampling weight for each sampling unit; the sampling weight was computed as the inverse of the selection probability. Secondly, we adjusted for non-response for each sampling unit; non-response was calculated as the inverse of the response rate. Finally, we calculated the sampling weight adjusted for non-response by multiplying the inverse of the sampling probability (inverse of the inclusion probability) by the inverse of the response rate at each sampling unit [32]. We also estimated a finite population correction (FPC) factor for each sampling unit to adjust the variance estimators, as the survey data was sampled from a finite population without replacement [34]. The FPC was calculated as a function of the population size N and the sample size n of each sampling unit. Principal component analysis (PCA) was computed to create the wealth index [35]. We categorized the wealth index using the first principal component, with an eigenvalue of 2.3 that explained 15.2% of the total variance. We used the geographic coordinates of households, the nearest health centres and hospitals to calculate the distances between them. We calculated straight-line distances using the proximity analysis "generate near table" function in ArcGIS 10.4.1 [36] and exported the data to Stata 15 for further analysis. Walking time to the nearest hospital according to the participants' reports was also used. We computed descriptive statistics such as means, proportions and ratios. A chi-square test was computed to test the association between the outcome variable and potential explanatory variables. Complex sample logistic regression analysis was used to measure the association of the explanatory variables with maternal mortality. We carried out both weighted and non-weighted analyses, but report only the weighted analysis.

Ethical approval
The ethical approval for this study was obtained from the institutional review board of Hawassa University College of Medicine and Health Sciences (IRB/015/11) and the Regional Ethical Committee of Western Norway (2018/2389/REK vest). A support letter to the respective district (woreda) health offices was obtained from the Sidama National Regional State Health Bureau (formerly known as the Sidama Zone Health Department). Letters of permission to the respective kebeles were sought from each woreda health office. Informed written (thumb print and signed) consent was obtained from the study participants before interview. Participant identifiers were anonymized during data entry and analysis to maintain the confidentiality of the participants.
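A minimal sketch of the weight construction described in the statistical analysis above: the stage weight is the inverse inclusion probability times the inverse response rate, multiplied across stages. Apart from the 6-of-30 district draw and the 98.6% response rate reported elsewhere in the paper, the numeric inputs below are hypothetical, and the (N - n)/(N - 1) form of the finite population correction is a common convention assumed here, since the paper's exact formula is not reproduced in this extract.

```python
# Sketch of the design-weight construction described above; not the authors' code.
def stage_weight(selection_prob: float, response_rate: float) -> float:
    # weight = inverse inclusion probability x inverse response rate
    return (1.0 / selection_prob) * (1.0 / response_rate)

def fpc(N: int, n: int) -> float:
    # a common form of the finite population correction for sampling
    # without replacement (assumed here, not quoted from the paper)
    return (N - n) / (N - 1)

# e.g. districts: 6 of 30 selected with equal probability, full response
w_district = stage_weight(6 / 30, 1.0)             # 5.0
# e.g. households: 37 of a hypothetical 120 eligible, 98.6% response rate
w_household = stage_weight(37 / 120, 0.986)

overall_weight = w_district * w_household          # multiply across all stages
print(round(w_district, 2), round(w_household, 2),
      round(overall_weight, 2), round(fpc(120, 37), 3))
```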
Table 1 summarizes the background characteristics of the six districts included in the study. Daela and Wondogenet districts did not have a hospital. The doctor to population ratio in Aroresa district was about 1 per 26000, while there were no doctors in Daela and Wondogenet districts. The midwife to population ratio was 1 per 6200 in Hawassa Zuriya district, whereas it was 1 per 52000 in Aroresa district and 1 per 45900 in Daela district. Hawassa Zuriya is the nearest district, 21 kilometres from the regional capital, Hawassa, whereas Aroresa is the farthest district, situated 181 kilometres away from Hawassa. Table 2 shows the background characteristics of the study participants. In this study we interviewed 8755 participants out of the 8880 households visited, a response rate of 98.6%. On average there were 5 persons (range 1-14 persons) per household. Concerning educational status, 2304 (24.3%) of the mothers and 1467 (15.7%) of the husbands had no formal education. Subsistence farming was the main occupation for 6332 (71.8%) of the heads of households. To access the nearest hospital, 7653 (89.6%) of the families needed more than an hour of walking. The nearest health centre for 8050 (93.3%) of households was within 5 km distance, while the nearest hospital for 6440 (77.7%) of households was within 10 km distance. Table 3 shows place of and assistance at delivery. We identified a total of 10851 births: 10602 live births (LB) and 249 stillbirths. On average, there were 1.2 births per household in the past five years, and 56.2% of the births took place at home. Traditional birth attendants (TBAs) assisted 18.3% of the births, 38.0% were assisted by family or neighbours and 43.2% were assisted by skilled health personnel. Table 4 describes the characteristics of the deceased mothers. The mean age of the deceased mothers was 29 years. Twenty two (47%) of the deaths occurred in the 25-29 age group. Twenty eight (55%) of the deceased mothers had no formal education, 38 (84%) were housewives, 40 (80%) were multiparous, 32 (67%) had pregnancy related complaints, 22 (50%) attended antenatal care (ANC) and, of those who had an ANC check-up, 5 (24%) attended four or more ANC visits. Twenty-four (59%) of the deceased mothers gave birth at home, of which 10 (21%) were assisted by a TBA and 14 (38%) were assisted by family or neighbours. Table 5 shows the causes, time and place of maternal deaths. We registered 10602 LB and 48 maternal deaths, yielding an overall MMR of 419 (95% CI: 260-577) per 100,000 LB. In addition, there were 7 late maternal deaths. Haemorrhage was the most common direct cause of maternal deaths, 21 (41%), followed by eclampsia, 10 (27%). Direct obstetric causes were responsible for 89% of the deaths, while indirect obstetric causes accounted for 11%. Thirty (59%) of the mothers died during labour or within 24 hours after delivery, 25 (47%) died at home and 17 (38%) at a health facility. Table 7 shows the results of the complex sample logistic regression analysis of risk factors for maternal deaths. The risk of maternal death was higher among mothers without formal education than among those with formal education (AOR 4.4; 95% CI 1.7-11.0). Also, the risk of maternal death was higher in districts with a low midwife-to-population ratio than in those with a high midwife-to-population ratio (AOR 2.9; 95% CI 1.0-8.9).
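For orientation, the crude (unweighted) ratio implied by the raw counts above can be computed directly; it differs slightly from the reported 419 (95% CI: 260-577), which comes from the weighted complex-survey analysis. The interval below is a simple Poisson normal approximation used only for illustration, not the authors' method.

```python
# Sketch of the crude (unweighted) MMR computation from the counts above.
import math

deaths, live_births = 48, 10_602
mmr = deaths / live_births * 100_000
# simple normal approximation to a Poisson interval for the death count
lo = (deaths - 1.96 * math.sqrt(deaths)) / live_births * 100_000
hi = (deaths + 1.96 * math.sqrt(deaths)) / live_births * 100_000
print(f"crude MMR = {mmr:.0f} (approx. 95% CI {lo:.0f}-{hi:.0f}) per 100,000 LB")
# crude MMR = 453 (approx. 95% CI 325-581) per 100,000 LB
```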
Principal findings
In a population-based survey in the Sidama National Regional State, southern Ethiopia, we found an overall MMR of 419 per 100,000 LB, with great variation by district. The most remote districts far from the central city, with poor infrastructure and inadequate skilled health personnel, had the highest maternal mortality ratios compared to districts nearer to the central city with good infrastructure and adequate skilled health personnel. Haemorrhage and eclampsia were the leading causes of death. Nearly half of maternal deaths occurred at home and about two fifths in health facilities. The risk of maternal death was high among mothers who had no formal education and in districts which had a low midwife to population ratio.

Strengths and weaknesses of the study
To the best of our knowledge, this is the first population based study describing maternal mortality estimates with district level variations in Sidama National Regional State, southern Ethiopia, using a large and representative sample. We used data collectors recruited from the study area, which enhanced understanding and trustworthy communication with the study population. Each maternal death was independently reviewed by two public health officers using standard VA guidelines, which improved the precise assignment of the cause of maternal deaths. To account for the multi-stage cluster sampling technique employed for the study, we used survey data analysis methods, which improved the precision of the estimates. This study had some limitations. We studied maternal deaths that occurred in the past five years; hence, recall bias and underreporting were important limitations of this study. However, to minimize recall bias, we used a local calendar and events that helped the respondents recognize the time of maternal death. Secondly, we employed data collectors who were familiar with the study setting and who took part in social events, which supported the respondents in recalling the death of mothers. In spite of our efforts to circumvent recall bias, there might be some maternal deaths that were not reported. Though this study was done in one of the regional states of Ethiopia, we assume that the region could represent other regional states of the country in terms of health services and demographics. We also believe that the study was done using a representative sample of the region, as we followed probability sampling techniques at each sampling stage. Misclassification could be another limitation of this study. Unlike medically certified deaths, our conclusions on the causes of maternal deaths were based on lay family member reports, which are liable to misclassification. The following measures were taken to reduce misclassification. Firstly, we used two independent VA interviewers to ascertain the cause of maternal deaths. Secondly, in case of lack of consensus between the two interviewers, we used a third VA interviewer. Thirdly, when we did not get clear information from the first interviewee, we interviewed more than one family member. We did not ask about deaths that occurred during early pregnancy due to abortion, as we did not get ethical approval from the Regional Committee for Medical and Health Research Ethics (REK Western Norway) to include abortion in our study. Studies conducted in south-west Ethiopia estimated that maternal deaths ascribed to abortion accounted for 8-10% of maternal deaths [37,38]. There might be abortion related maternal deaths which were not reported in our study. Hence, we believe that the MMR was underestimated, as we did not include abortion in our study. We also did not ask about early pregnancy maternal deaths due to ectopic pregnancy, since ascertaining ectopic pregnancy could be difficult in a rural setting. A study from Tigray Region, northern Ethiopia, showed that the prevalence of ectopic pregnancy was 0.52% of the total deliveries [39]. Though we assume the prevalence of ectopic pregnancy to be low, there might be ectopic pregnancy related maternal deaths which were not reported in our study.
All mothers in the surveyed households were married women, and we did not find single women or women who were not in a marital union in our study. We believe that the majority of pregnancies in a rural community are a result of marriage. However, there might be maternal deaths among single or unmarried women which were not identified and reported by our study. A study from eastern Ethiopia reported that maternal deaths among never married women constituted 1 (2.4%) [40]. We lack some data on health system and other factors that might have contributed to the variation in maternal deaths across the study districts. Due to resource limitations, we did not use software assisted VA algorithms or an expert panel of obstetricians to ascertain the deaths. However, we provided adequate training for the VA interviewers, pilot tested the questionnaire, and had the VA interviews conducted by two independent interviewers. Another limitation worth mentioning is that in this study we found a lower birth rate than we had planned for initially. We also noted that there were differences in birth rates across the districts in the region. This shows that the true birth rate in the region is lower than we had expected at first. A recent study conducted in Sidama National Regional State is in agreement with our finding, reporting that fertility in the region has shown a falling trend [41]. In this study we were not able to find the number of maternal deaths we had anticipated initially. Our aim was to find 66 maternal deaths; however, we registered 48 maternal deaths. Our results also show a wide 95% CI, as we estimated an MMR of 419 (95% CI: 260-577). A limitation of our study is thus the reduced sample size. Though it is costly to attain a precise estimate of the MMR, since it needs a large number of maternal deaths, we could have obtained a more precise estimate of the MMR if our sample had been larger than the current one.

Magnitude and district level variations of maternal mortality
This study identified an overall MMR of 419 per 100,000 LB. Our finding is in agreement with previous MMR estimates in Ethiopia [4,18,42]. However, it is higher than the 2017 global average of 211 per 100,000 LB [4]. To achieve the SDG, countries must reduce their MMRs by at least 6% each year between 2016 and 2030; in Ethiopia, between 2000 and 2017, the annual rate of MMR reduction was 5.5% [4]. We observed a very high MMR in Aroresa district. A similar finding has been reported from a study in Tigray region, northern Ethiopia [19]. Provincial differences in MMR have also been reported from another African country [43]. A low utilization rate of maternal health services might have contributed to the high number of maternal deaths in the district. For instance, a study by Limaso et al. documented that in 2018 the coverage of institutional delivery for Aroresa district, as reported by the district health office, was 38% [44]. Aroresa district is the most remote district in the region, situated 181 km from the regional capital [44]; most of the kebeles in this district have difficult topography and poor road conditions, and health facilities were hard to reach. A weak referral system and lack of emergency transportation might have contributed to the high MMR in the district. Aroresa district had the lowest midwife-to-population ratio among the 6 districts included in the study. Remoteness, distance of health facilities from households and lack of adequate and skilled health personnel are known risk factors for maternal death [45].
Time, cause and place of maternal mortality
In our study we found that about 60% of maternal deaths occurred during labour or within 24 hours postpartum. This finding is in agreement with a study conducted in the eastern part of Ethiopia, where 55.6% of maternal deaths were reported to occur within the first day [40]. The time around labour and the first 24 hours postpartum is a critical period in which a mother should get emergency obstetric care at a health facility. We observed that around 50% of maternal deaths occurred at home. This is in agreement with the study reported from the eastern part of Ethiopia, where 56% of maternal deaths were found to occur at home [40]. The high proportion of maternal deaths that occurred at home could be associated with the low coverage of skilled birth attendance which we observed in the study area. Home deaths might be a reflection of poor access to emergency obstetric care. In this study we found that a significant number of maternal deaths occurred in health facilities. A study from the eastern part of Ethiopia found a similar result [40]. A study from Indonesia documented that poor quality of care at health facilities was associated with a high chance of maternal death [54]. Poor and inadequate emergency obstetric care at health facilities might have contributed to the deaths that occurred at health facilities [55][56][57].

Skilled delivery
This study found that more than half of the births took place at home, assisted by either a TBA, family or neighbours. Studies have documented that community trust in TBAs, lack of transportation and poor quality of maternal health services were associated with low utilization of maternal health services [58]. Skilled assistance at delivery has been reported to be associated with fewer maternal deaths [59,60].

Independent predictors of maternal mortality
Mothers with no formal education had an increased risk of maternal death. A study conducted in the eastern part of Ethiopia showed that around 84% of deceased mothers were illiterate [40]. The association of low education level with severe maternal outcomes has been documented [61]. A multi-country study showed that mothers with no education had a 2.7 times higher risk of mortality than mothers with high education [62]. Educated women better utilize maternal health services [63,64] and recognize pregnancy complications, prepare for births and obstetric emergencies [65]. In this study, we observed an increased risk of maternal mortality in districts which had an inadequate number of midwives compared with districts which had an adequate number of midwives. It has been documented that the availability of skilled midwives at health facilities increases the uptake of institutional deliveries and other maternal health services [66] and consequently reduces maternal mortality [14,67]. In contrast, lack of skilled personnel for maternal health services increases the risk of maternal deaths. An Indonesian study documented that the lack of an adequate number of doctors working at community health centres and villages was associated with high maternal mortality [45]. Some studies have indicated that low wealth status is associated with an increased risk of maternal death [68]. However, in this study we did not see an association of wealth status with maternal death. A similar finding has been reported by a study from southwest Ethiopia [18]. This study also found that place of birth was not associated with maternal death.
The lack of association of wealth status and place of birth with maternal deaths could be explained by the fact that maternal death is a rare event in terms of absolute numbers.

Policy and clinical implications
This study from Sidama National Regional State highlights that Ethiopia needs more regional studies to address the high maternal mortality rates in the country. The high MMR with significant district level variations in this study indicates that there is a need to amplify the efforts to decrease maternal deaths, identify risk factors and institute interventions tailored to areas with high maternal mortality in the region. Similar to other studies, haemorrhage was the leading cause of maternal deaths in the region. This highlights the importance of improving the skill and practice of health workers in the active management of the third stage of labour. The Sidama National Regional State Health Bureau as well as the Ministry of Health should also consider the inclusion of misoprostol provision for postpartum bleeding within the health extension packages for women who give birth without a skilled provider [69,70]. Comprehensive emergency obstetric care, including blood transfusion services, should be available within reach of the community. The high number of deaths attributable to hypertensive disorders of pregnancy in this rural community indicates the necessity of improving screening and detection of preeclampsia at community level using the health extension workers [71]. The referral system and the management of preeclampsia/eclampsia have to be strengthened in the region. In our study, many maternal deaths occurred at home without skilled birth attendants, which shows the need for strengthening access to emergency obstetric care. Evidence from rural settings of Ethiopia has shown that interventions focused on strengthening emergency obstetric care improved the uptake of maternal health services and decreased maternal mortality [15]. During ANC visits, pregnant women should be counselled and encouraged to use skilled birth attendance. Removing barriers to skilled delivery and integrating the services of TBAs with the formal health system may increase the use of skilled birth attendants [72]. The occurrence of a significant number of maternal deaths at health facilities signals the importance of improving emergency obstetric care in health facilities [57,73]. The association of an inadequate number of midwives with maternal mortality indicates that the Ministry of Health and the Sidama National Regional State Health Bureau should train and deploy an adequate number of midwives, so that quality maternal health services are provided and consequently maternal deaths are averted. In addition, the assignment and distribution of midwives and doctors should be fair, so that the gap between the central and remote districts in the distribution of skilled health personnel is minimized. The increased risk of maternal death among mothers who did not have formal education indicates that there is a need to improve the educational status of females in the region.

Conclusion
This study found a high MMR in rural areas of Sidama National Regional State, with high variation across the districts. The study highlights that Ethiopia needs regional studies to understand the magnitude of maternal mortality and local variations in order to reduce the high number of maternal deaths. The quality of emergency obstetric care, including lifesaving interventions, has to be improved in the region.
Sidama National Regional State Health Bureau should design maternal health interventions targeting local variations and areas with high mortality rates. The shortage of midwives should be alleviated to improve provision of skilled maternal health services and consequently save the life of mothers. Supporting information S1
2023-03-08T06:18:27.443Z
2023-03-07T00:00:00.000
{ "year": 2023, "sha1": "3f313cb471c8862a189900d6ee5d8524f45f1a29", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "be188b85edaf1fe76270c2e49046ec6fc2d670ff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
255818211
pes2o/s2orc
v3-fos-license
Chlamydia Outer Protein (Cop) B from Chlamydia pneumoniae possesses characteristic features of a type III secretion (T3S) translocator protein

Chlamydia spp. are believed to use a conserved virulence factor called type III secretion (T3S) to facilitate the delivery of effector proteins from the bacterial pathogen to the host cell. Important early effector proteins of the type III secretion system (T3SS) are a class of proteins called the translocators. The translocator proteins insert into the host cell membrane to form a pore, allowing the injectisome to dock onto the host cell to facilitate translocation of effectors. CopB is a predicted hydrophobic translocator protein within the chlamydial T3SS. In this study, we identified a novel interaction between the hydrophobic translocator, CopB, and the putative filament protein, CdsF. Furthermore, we identified a conserved PxLxxP motif in CopB (amino acid residues 166-171), which is required for interaction with its cognate chaperone, LcrH_1. Using a synthetic peptide derived from the chaperone binding motif of CopB, we were able to block the LcrH_1 interaction with either CopB or CopD; this CopB peptide was capable of inhibiting C. pneumoniae infection of HeLa cells at micromolar concentrations. An antibody raised against the N-terminus of CopB was able to inhibit C. pneumoniae infection of HeLa cells. The inhibition of the LcrH_1:CopB interaction with a cognate peptide and the subsequent inhibition of host cell infection provide strong evidence that T3S is an essential virulence factor for chlamydial infection and pathogenesis. Together, these results support the conclusion that CopB plays the role of a hydrophobic translocator.

Background
Chlamydia infections represent a significant disease burden worldwide. C. trachomatis infection can lead to pelvic inflammatory disease (PID), salpingitis, and infertility in women and epididymitis and infertility among men [1]. Furthermore, Chlamydia pneumoniae is a respiratory pathogen causing approximately 10 % of community acquired pneumonia [2]. Additionally, C. pneumoniae infections have been associated with asthma exacerbations, cardiovascular disease, Multiple Sclerosis, and Alzheimer's [3][4][5][6][7]. Combined, C. pneumoniae and C. trachomatis represent a significant disease burden. An essential component of Chlamydia's survival is creating an environmental niche that meets its requirements for replication and persistence. Type III secretion (T3S) is a complex mechanism utilized by important Gram-negative bacterial pathogens. Salmonella, Shigella, Yersinia, Pseudomonas, and Chlamydia all contain the highly conserved type III secretion system (T3SS) of approximately 20-30 proteins [8][9][10][11]. To manipulate their host environment, these bacteria secrete toxic effector proteins directly into their target cell. Functionally, the whole apparatus can be referred to as an injectisome; however, it consists of smaller functional components, which include the cytoplasmic C-ring, the inner and outer membrane rings, the needle complex, and the needle-tip complex [8][9][10][12]. Each of these components displays numerous essential protein-protein interactions. Despite the identification and characterization of many putative T3S proteins, it remains unclear whether Chlamydia truly has a functional T3SS, and whether it plays a role in replication and survival, given the absence of a robust genetic manipulation system for gene knockouts [13]. Chlamydia spp.
undergo a unique biphasic life-cycle, starting with an infectious, non-metabolically active elementary body (EB) [14][15][16]. Upon attachment of the EB to the host cell, there is a conformational change within the host membrane that allows for invasion of the EB into a membrane-derived vacuole termed an inclusion [16]. Once inside the inclusion, an as yet unknown signal triggers differentiation of the EB into a metabolically-active, non-infectious reticulate body (RB) that divides through binary fission until late in the infection cycle [16]. The infectious EB will then leave either through a packaged release mechanism, called extrusion, or through cell lysis, to repeat the infection cycle [17][18][19]. Throughout this process the T3SS is believed to play an essential role in pathogenicity [12]. The translocator proteins of the T3S system are believed to be critical to the survival of Chlamydia, forming a pore in the host cell membrane to allow for translocation of effector proteins from the bacterial cytosol to the host cell cytoplasm [8][9][10]. Analysis of the chlamydial genome suggests that there may be two sets of translocator proteins, CopB and CopB2 and CopD and CopD2, both of which are located in the same operon as a predicted class II chaperone [20]. To date, there has been limited characterization of the translocator proteins from Chlamydia spp. Early work on the translocator proteins in Chlamydia indicated that both CopB and CopB2 can be secreted from Yersinia spp. in a T3S-dependent manner and that Scc2 co-precipitated with CopB from a C. trachomatis infected monolayer [21]. More recently, localization experiments have shown that CopB and CopB2, when ectopically expressed in HeLa cells, associate with the cytoplasmic and inclusion membrane, respectively [22]. Our laboratory has previously characterized the minor hydrophobic translocator (CopD) from Chlamydia pneumoniae. We have shown that it associates with T3S components and contains a PxLxxP motif essential for interaction with its class II chaperone, LcrH_1 [23]. Although many hypotheses can be made regarding the possible function of the translocator proteins based on orthologous T3SS translocator proteins, there is limited biochemical characterization of chlamydial translocator proteins, owing to the inherent difficulties of working with Chlamydia spp. In this report, we characterize the putative T3SS translocator protein CopB of C. pneumoniae, explore interactions between CopB and other T3SS proteins, and characterize the chaperone binding domain of CopB. In addition, we generated a novel peptide mimetic that blocks the interaction between the translocators, CopB and CopD, and their chaperone, LcrH_1, and showed that the peptide mimetic prevents infection. We also identify a CopB epitope which is immunogenic and elicits neutralizing antibodies that block C. pneumoniae infection, supporting an essential role for CopB in the infection of host cells.

Protein expression and purification
All constructs were transformed into E. coli BL21 and recombinant protein was expressed following induction with Isopropyl β-D-thiogalactopyranoside (IPTG). Protein expression and purification were performed as described by Bulir et al. (2014), with the following modifications [23]. Briefly, 6 L of LB containing 100 μg/mL ampicillin was inoculated with a 1:100 dilution of an overnight culture and split equally into 6x 2 L flasks.
The cultures were then grown at 37°C with shaking at 250 RPM until an optical density of 0.500 at 600 nm was reached. Cultures were induced with 0.2 mM IPTG and incubated at room temperature, shaking at 250 RPM, for 3 h.

Glutathione-S-transferase (GST) pull-down assay
Glutathione-S-transferase pull-down assays were performed as described by Bulir et al. (2014) [23]. Briefly, GST-tagged proteins were bound to 1 mL of GST beads for one hour at 4°C on a mixing platform. GST beads were centrifuged at 3000 x g for 5 min to remove the supernatant and then blocked with blocking solution (5 % BSA in PBS + 0.1 % TWEEN-20) overnight at 4°C. Blocked beads (50-100 μL) were mixed with E. coli lysates containing overexpressed His-tagged protein for one hour. For experiments involving blockade of the interaction between GST- and His-tagged constructs, the chemically synthesized peptide was incubated with the bait construct for 1 h at 4°C prior to the addition of the overexpressed His-tagged E. coli lysate. The beads were then centrifuged at 16,000 × g for 10 s, the supernatant was removed, and the pellet was washed with high salt wash buffer (500 mM KCl, 20 mM Tris-HCl pH 7.0, 0.1 % Triton X-100). The washing procedure was repeated seven times to ensure complete removal of adventitiously bound protein. For GST pull-downs involving synthetically produced peptide, the peptides were used at a concentration of 500 µM. The glutathione-agarose beads were then resuspended in 75 μL of SDS-PAGE loading dye. The samples were analysed by SDS-PAGE and Western blot analysis using a mouse anti-His antibody (GenScript, New Jersey).

Bioinformatics
Orthologous proteins to CopB were identified using BLASTP (Basic Local Alignment Search Tool Protein) and PSI-BLAST, excluding the Chlamydiaceae family from the search. CopB was analyzed using the TMpred software tool to predict transmembrane domains, using a minimum transmembrane window of 17 and a maximum of 33. The coiled-coil prediction software COILS was used to predict the presence of coiled-coil domains within CopB, using the MTIDK scoring matrix and weighting for positions a & d.

Antibody and peptide inhibition of C. pneumoniae infection in HeLa cells
Infection was performed as previously described by Johnson et al. [15]. At approximately 72 h post infection, chlamydial inclusions were stained with the Pathfinder Chlamydia detection reagent (BioRad) and visualized in multiple, random fields of view. Percent reduction of infection was calculated relative to a control infection, and statistical significance was calculated using a Student's t-test. A polyclonal antibody raised against a 15 amino acid peptide (SGKDKTSSTTKTETC) from CopB was obtained from GenScript (New Jersey). C. pneumoniae was pre-incubated for 2 h at 37°C with dilutions of affinity purified CopB antibody, control antibody (anti-GST), or pre-immunization sera. Additionally, chlamydial infection inhibition was performed using a synthetic peptide (500 µM), vehicle alone (PBS), or a control peptide (anti-RSV peptide). Briefly, 5 × 10^5 IFUs were incubated with the peptide or vehicle alone (PBS) for 2 h at 37°C prior to performing a standard infection, and inclusions were visualized as previously described.
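As an illustration of the kind of scan TMpred (used in the Bioinformatics methods above) performs, the sketch below slides a 19-residue window over a sequence and flags windows whose mean Kyte-Doolittle hydropathy exceeds a threshold. The toy sequence, the window length and the 1.6 threshold are assumptions for demonstration; this is not TMpred's actual algorithm, and the sequence is not CopB.

```python
# Illustrative hydropathy scan for candidate transmembrane segments,
# using the Kyte-Doolittle scale with a 19-residue sliding window.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def candidate_tm_windows(seq: str, window: int = 19, threshold: float = 1.6):
    """Yield (1-based start, mean hydropathy) for windows above the threshold."""
    for i in range(len(seq) - window + 1):
        mean = sum(KD[aa] for aa in seq[i:i + window]) / window
        if mean >= threshold:
            yield i + 1, round(mean, 2)

# Hypothetical sequence with one strongly hydrophobic stretch in the middle
toy = "MSTE" + "LIVALLAVILLIAGLIVLL" + "KDKTSSTTKTET"
print(list(candidate_tm_windows(toy)))
```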
Bioinformatic analysis of Chlamydia outer protein (Cop) B
Translocator proteins have a conserved function across numerous bacterial species, facilitating the translocation of effector proteins from the bacterial cytosol to the host cell cytoplasm through the formation of pores within the host cell membrane. However, there is limited sequence orthology between Chlamydia spp. translocators and other well-characterized bacterial translocator proteins. BLASTP analysis identified potentially orthologous sequences in the recently sequenced genome of Bacteroides fragilis, with an expect value of 6e-141 and a percent identity of 54 %. CopB is a 493 amino acid protein with a predicted molecular weight of 50.5 kDa. Potential transmembrane domains were identified using the online prediction software TMpred, which suggests the presence of two transmembrane domains, spanning amino acids 256-274 and 383-406, respectively, and a hydrophobic stretch of amino acids from 180 to 200. The COILS software identified three potential coiled-coil domains, located at amino acids 117-140, 234-347, and 410-437. Sequence analysis of the N-terminal region of CopB identified a conserved chaperone binding motif of PxLxxP at amino acids 166-171, with the sequence PELPKP (Fig. 1). Together, these results are consistent with features characteristically found in T3S translocator proteins [22,23].

CopB interacts with the putative needle filament protein, CdsF
CopB is believed to be a T3S protein, and thus it should interact with other proteins within the T3SS [10]. Cloning fragments of CopB lacking the transmembrane domains allowed us to identify specific domains of CopB that are responsible for interactions with other type III secretion components. GST pull-downs were performed between CopB and Cpn0803, CdsF, and CopN. No interactions were observed between any fragments of CopB and Cpn0803 or CopN (Fig. 2a and b). There was a positive interaction between the N-terminal (amino acids 1-255) and middle fragments of CopB and CdsF, but not the C-terminus of CopB (Fig. 2c). These observations are consistent with a role in the T3S apparatus of Chlamydia pneumoniae, since translocator proteins from orthologous systems have been shown to interact with the needle filament protein.

LcrH_1 interacts within the N-terminus of CopB
Cpn0811 (LcrH_1) is a small protein with a basic isoelectric point, located upstream in the same operon as CopB (Cpn0809) [20]. We explored the possible interaction between LcrH_1 and CopB and found that His-LcrH_1 interacts within the N-terminus of CopB (Fig. 3a). Both CopB and CopB 1-255 interacted with His-LcrH_1, but CopB 1-180 did not, suggesting that the hydrophobic stretch of amino acids spanning residues 180-200 plays an important role in this interaction. Since CopB 1-200 was the smallest truncation construct that maintained an interaction with His-LcrH_1, we examined the amino acid sequence for the presence of a conserved chaperone binding motif, PxLxxP, which begins at amino acid 166. To elucidate the importance of the conserved motif, we performed an alanine walkthrough of the conserved amino acids in the PxLxxP motif starting at amino acid 166 (P166A CopB 1-200, L168A CopB 1-200, P171A CopB 1-200). Mutation of the PxLxxP motif abrogated the interaction between His-LcrH_1 and CopB (Fig. 3b). To ensure that the absence of interaction was the result of the specific amino acid substitution, as opposed to gross misfolding of the mutant protein, L168A CopB 1-200 was subjected to a GST pull-down against CdsF. As expected, L168A CopB 1-200 maintained the interaction with HisMBP-CdsF (Fig. 3c), suggesting that the PxLxxP motif is a critical interaction domain between the chaperone and CopB.
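Locating a PxLxxP motif is a simple pattern match. The sketch below finds the PELPKP motif reported above; the ten-residue context ETPELPKPGV and the start position 166 are taken from the text, while the flanking residues of the fragment are hypothetical padding added only so the match has context.

```python
# Minimal sketch of locating the PxLxxP chaperone-binding motif.
import re

fragment_offset = 160              # pretend the fragment starts at residue 161
fragment = "TSGETPELPKPGVSQE"      # hypothetical context around the real motif

for m in re.finditer(r"P.L..P", fragment):
    start_residue = fragment_offset + m.start() + 1
    print(m.group(), "starts at residue", start_residue)
# PELPKP starts at residue 166
```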
A CopB peptide mimetic blocks the LcrH_1 and CopB interaction
Given the necessity of the PxLxxP motif for the interaction between translocator proteins and their class II chaperones, a synthetic peptide containing the chaperone binding motif was synthesized and tested for its ability to block the interaction between LcrH_1 and both CopB and CopD. To determine whether a synthetic peptide consisting of a cell penetrating peptide sequence (YGRKKRRQRRR) and the 10 amino acids (ETPELPKPGV) encompassing the chaperone binding motif of CopB is capable of preventing the chaperone:translocator interaction, the peptide was incubated with GST-CopB 1-200 or GST-CopD 1-157 prior to the addition of His-LcrH_1. In the presence of the peptide, no interaction was observed between the putative chaperone and either translocator fragment (Fig. 4a). To explore the hypothesis that the CopB:LcrH_1 and CopD:LcrH_1 interactions are essential for infection, we then tested the ability of the peptide to block C. pneumoniae infection. We pre-incubated C. pneumoniae with the peptide or vehicle alone and then infected host cells. The peptide inhibited infection by 90 % compared to the control infection with vehicle alone (Fig. 4b).

Anti-CopB antibody inhibits C. pneumoniae
Since T3S translocators are believed to be surface exposed proteins in other T3SSs, we hypothesized that antibodies to CopB would inhibit infection [24][25][26]. We generated an antibody to a peptide (15-mer) in the N-terminal region of CopB and tested its ability to inhibit C. pneumoniae infection. To test whether this antibody could inhibit infection, we pre-incubated C. pneumoniae with the polyclonal antibody for 1 h at 37°C prior to infection. C. pneumoniae infection was inhibited by the CopB antibody (Fig. 5a-d), resulting in a 98 % reduction in inclusion forming units as compared to the control antibody (Fig. 5e). Using a Western blot, the polyclonal antibodies were able to detect both recombinant and native CopB (Fig. 5f). The ability of the CopB antibody to block infection suggests that CopB is surface exposed and plays a critical role in the infection process.
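The percent reductions quoted in these results reduce to a simple ratio of inclusion forming units (IFUs) between treated and control infections. The IFU counts below are hypothetical values chosen only to reproduce the reported 90 % and 98 % figures; they are not the experimental data.

```python
# Sketch of the percent-inhibition calculation used to summarise
# neutralisation results (illustrative counts, not the study's data).
def percent_inhibition(control_ifu: int, treated_ifu: int) -> float:
    return (1 - treated_ifu / control_ifu) * 100

print(percent_inhibition(500, 50))   # 90.0  (CopB peptide vs vehicle)
print(percent_inhibition(500, 10))   # 98.0  (anti-CopB antibody vs control)
```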
Discussion
Despite our increasing understanding of the T3SS in Chlamydia spp., there is limited or no direct evidence for a role of the translocator proteins during infection. Our laboratory has previously characterized the putative minor hydrophobic translocator, CopD, showing that it plays an essential role during chlamydial infection [23]. In this report, we provide an initial characterization of the major hydrophobic translocator, CopB. The interaction of CopB with the filament protein CdsF suggests that it plays an essential role in T3S. As seen with other translocator proteins, the putative chaperone located immediately upstream of CopB interacted with the first N-terminal 200 amino acids of CopB. Using an alanine walkthrough of the conserved PxLxxP motif, we show that amino acids P166, L168, and P171, in addition to amino acids 180-200, are required for the interaction between CopB and its cognate chaperone LcrH_1. We demonstrated that a cognate CopB peptide encompassing the chaperone binding motif can block the interaction between LcrH_1 and both CopB and CopD, suggesting that the CBD is a critical binding domain. Furthermore, we show that this peptide, when pre-incubated with C. pneumoniae, blocked infection. Together, these results strongly suggest that the PxLxxP motif is required for the translocator-chaperone interaction, and for infection. We also show that a polyclonal antibody raised against an N-terminal epitope within CopB significantly reduced infection. Collectively, these results are consistent with CopB's role as a translocator within the Chlamydia T3SS. Initial bioinformatic studies were performed to gain insight into the role of CopB in C. pneumoniae [21][22][23]. Chellas-Géry et al. identified potential hydrophobic and coiled-coil domains within CopB from C. trachomatis [22]. Given the moderate level of sequence identity between C. trachomatis and C. pneumoniae CopB (approximately 52 % amino acid identity), a thorough bioinformatics analysis of CopB was performed. BLASTP analysis of CopB yielded one significant result, from Bacteroides fragilis, typically a commensal bacterium found in the gastrointestinal tract, but no matches were found in other T3S systems, suggesting that the C. pneumoniae T3SS may be quite unique among orthologous systems, which is in keeping with Chlamydiae containing an ancient T3SS. Although no orthologous sequences of the chlamydial translocator proteins were identified in the archetypal secretion systems using our bioinformatics approach, the proteins are predicted to have similar structure and function. Since CopB is likely anchored within the host-cell membrane to facilitate translocation of effector proteins, we utilized the TMpred online software to identify potential transmembrane regions. Our analysis identified two potential transmembrane domains, spanning amino acids 256-274 and 383-406, respectively. This is consistent with other translocator proteins possessing two transmembrane domains to anchor themselves within the host cell membrane [27,28]. Using the COILS online prediction software, we identified three potential coiled-coil domains, which may be important for mediating protein-protein or protein-membrane interactions. The N-terminus of CopB contains a conserved PxLxxP motif, followed by a sequence of hydrophobic amino acids, which is seen in other translocator proteins from C. pneumoniae and has been shown to be important for mediating the essential translocator:chaperone interaction in other bacterial systems (Shigella, Yersinia, Salmonella) [23]. Due to the difficulty of genetically manipulating Chlamydia and the inherent challenges of establishing structure-function relationships for T3S proteins of obligate intracellular pathogens dependent on T3S for infection, it is difficult to ascertain the role of chlamydial T3SS proteins. We therefore explored the possible interactions between CopB and other proteins within the chlamydial T3SS. The needle filament protein in Chlamydia, CdsF, is believed to polymerize, forming the needle structure for the translocation of effector proteins. The translocator proteins are believed to be docked on the tip of the injectisome to form the needle-tip complex prior to host cell contact. Two domains of CopB, amino acids 1-255 and 275-382, interacted with CdsF in a GST-pulldown assay. CopN, the putative plug protein, is believed to be localized to the base of the needle apparatus, where it prevents premature secretion of effector proteins. No apparent interaction between CopN and CopB was observed using a GST-pulldown assay. Although an interaction was observed between CopD and CopN, the lack of interaction between CopB and CopN suggests that CopB may be secreted through the apparatus and docked on the end of the needle complex before CopN plugs the needle apparatus.
Considering that recent work has suggested that Cpn0803 may be a chaperone protein, given its biophysical properties and putative interactions, it is not surprising that CopB failed to interact with Cpn0803 [29,30]. The interaction between CopB and the needle filament protein, CdsF, reported here is a novel observation not previously described in the literature. It has been reported that the translocators are recruited to the tip of the needle complex either upon detecting host cell contact or under secretion conditions. Once the translocators are inserted into the host cell membrane, the filament protein must anchor to the host cell via the translocator proteins, which are now embedded in the host membrane. This result is consistent with the role of the hydrophobic translocator proteins in other T3SSs, since the needle protein must interact with the translocator proteins on the host cell to facilitate translocation of effector proteins [31,32]. Interactions between class II chaperones and translocator proteins have been documented in Chlamydia spp. previously. Initial identification of the LcrH_1 and CopB interaction, from C. trachomatis, was performed by Fields et al. (2005) [10,21]. Using a GST pulldown, we demonstrated that the N-terminus of CopB (CopB 1-255) interacts with LcrH_1, which is in keeping with LcrH_1 orthologs interacting within the N-terminus of translocator proteins [28,33]. An additional truncation series showed that removal of the hydrophobic amino acids from 180 to 200 eliminated the interaction of LcrH_1 and CopB, despite the presence of the PxLxxP motif within the CopB 1-180 construct. The PxLxxP motif is conserved in members of the Chlamydiaceae family despite the low amino acid sequence identity, suggestive of an important role for the chaperone binding motif (Table 1). An alanine walkthrough of the conserved amino acids in the PxLxxP motif disrupted the interaction between CopB 1-200 and LcrH_1. Our data indicate that the interaction between CopB and LcrH_1 is dependent on both the PxLxxP motif and the CopB 180-200 domain. The co-crystal structure of class II chaperones (LcrH_1 orthologs) with the translocator PxLxxP domain confirms the interaction between the two proteins [28]. Based on this interaction, we hypothesized that a cognate peptide of CopB containing the PxLxxP peptide sequence could disrupt the translocator-chaperone interaction. We therefore tested a peptide containing the 10 amino acids encompassing the PxLxxP domain plus a cell penetrating peptide sequence. Since the PxLxxP motif is conserved between CopB and CopD, we preincubated LcrH_1 with the cognate peptide, then added either CopB or CopD fragments to the GST pull down, and showed that the peptide blocked the interaction between LcrH_1 and both CopB and CopD. Since the cell penetrating peptide sequence allows proteins to enter cells, we hypothesized that this cognate peptide would block chlamydial infection and intracellular replication [34,35]. After pre-treating C. pneumoniae with this peptide, we observed a significant reduction of 90 % compared to the control infection with vehicle alone. Since it is currently not possible to create genetic knockouts in Chlamydia, peptide mimetics could be used to create functional knockouts to study the resultant phenotype.
The peptide's ability to significantly reduce infection reinforces the importance of the chaperone binding motif for the chaperone-translocator interaction and may represent a novel target for therapeutic intervention using peptide mimetics.

Conclusions

Antibodies to the translocator proteins in orthologous secretion systems have been shown to inhibit infection, suggesting that the translocator proteins play an essential role during infection [36][37][38][39][40]. Using antibodies raised to an N-terminal epitope of CopB, we demonstrated that anti-CopB antibodies inhibited C. pneumoniae infection by 98 %. Inhibition of infection by anti-CopB antibodies indicates that CopB is surface exposed at some time during infection and plays an essential role in infection. Given that CopB is surface exposed during the initial phase of chlamydial infection, the translocator proteins may represent a novel class of antigens for use in vaccination strategies to prevent chlamydial infections.

Table 1. Putative chaperone binding domains within the N-terminal regions of CopB orthologues.

Protein (organism)                             Motif    % identity
CopB (C. pneumoniae)                           PELPKP   100 %
CT578 (C. trachomatis serovar D)               PGLPKP   52 %
SseC-like family protein (C. psittaci)         PDLPKP   53 %
TC_0867 (C. muridarum)                         PGLPKP   50 %
CPE1_0913 (C. pecorum)                         PELTPP   53 %
CAB923 (C. abortus S26/3)                      PDLPKP   54 %
PopB (Y. enterocolitica)                       PALGRP   18 %
IpaB (S. dysenteriae)                          PELKAP   17 %

Putative chaperone binding domains were identified within the N-terminal regions of proteins orthologous to CopB from C. pneumoniae. P1, P3 and P6 represent positions 1, 3 and 6, respectively, of the PxLxxP motif. Per cent identity refers to amino acid sequence identity comparing full-length CopB to the full-length sequences of orthologous proteins.
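As a hedged illustration of how a per cent identity figure like those in Table 1 can be computed once an alignment is in hand (the alignment itself would come from a tool such as BLASTP or a pairwise aligner), consider this minimal Python sketch; the aligned fragments are invented placeholders, not the real sequences.

```python
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Per cent identity over an existing pairwise alignment.

    Assumes the two strings are already aligned (equal length, '-' for gaps).
    """
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have equal length")
    matches = sum(a == b and a != "-" for a, b in zip(aln_a, aln_b))
    aligned = sum(a != "-" and b != "-" for a, b in zip(aln_a, aln_b))
    return 100.0 * matches / aligned if aligned else 0.0

# Toy aligned fragments for illustration only.
print(f"{percent_identity('PELPKP-AVL', 'PGLPKPQAVL'):.1f}% identity")
```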
Observations of feeding practices of US parents of young children with Down syndrome

Abstract: Parental behaviours influence food acceptance in young children, but few studies have measured these behaviours using observational methods, especially among children with Down syndrome (CWDS). The overall goal of this study was to understand parent feeding practices used during snack time with young CWDS (N = 111, aged 11-58 months). A coding scheme was developed to focus on feeding practices used by parents of CWDS from a structured home-use test involving tasting variously textured snack products. Behavioural coding was used to categorise parental feeding practices and quantify their frequencies (N = 212 video feeding sessions). A feeding prompt was coded as successful if the child ate the target food product or completed the prompt within 20 s of the prompt being given, without a refusal behaviour. CWDS more frequently consumed the test foods and completed tasks in response to Autonomy-Supportive Prompts to Eat (49.3%) than to Coercive-Controlling Prompts to Eat (24.2%). By exploring the parent-CWDS relationship during feeding, we can identify potentially desirable parent practices to encourage successful feeding for CWDS. Future research should build upon the knowledge gained from this study to confirm longitudinal associations of parent practices with child behaviours during feeding.

| INTRODUCTION

Parent-child interaction during feeding is important, as it contributes to the development of children's eating behaviour and food preferences (Polfuss et al., 2017; Savage et al., 2007). Parents influence their children's feeding by providing food and creating a feeding environment during consumption, which in turn can impact the child's early experiences with food (Kral & Rauh, 2010; Savage et al., 2007). Encouraging healthy eating habits may be the intent of a parent during mealtime, but specific behaviours used by parents during feeding, such as controlling practices, may have a negative impact on child eating habits, eating behaviour, body composition, and nutrient and energy intakes (Blissett, 2011; Fogel et al., 2019; Fries & van der Horst, 2019; Fries et al., 2017; Savage et al., 2007; Shloim et al., 2015; Wehrly et al., 2014). By contrast, evidence suggests that modelling healthy eating behaviours can improve a child's food acceptance (Fries & van der Horst, 2019; Palfreyman et al., 2015). Parent-feeding practices during child feeding have been extensively studied in children with typical development (CTD), but much less research has been conducted among populations with developmental and/or intellectual delays, such as children with Down syndrome (CWDS; Polfuss et al., 2017). DS is the most common chromosomal condition in the United States, with approximately 1 in every 707 babies born diagnosed with DS (CDC, 2020; Mai et al., 2019). Feeding challenges, and specifically texture selectivity, are more prevalent in CWDS compared to CTD (van Dijk & Lipke-Steenbeek, 2018; Ross et al., 2021). Approximately 80% of CWDS have oral motor delays that may contribute to these challenges (Field et al., 2003). Understanding the impact these challenges impose on child food acceptance, and the resultant parental feeding practices, is important for the development of evidence-based guidelines to support parents aiming to promote healthy diets.
In addition to food rejection, another challenge that may influence the feeding practices used by parents of CWDS is concern about childhood obesity in this population. O'Neill et al. (2005) compared parental feeding behaviours between CWDS and their siblings (n = 36, aged 3-10 years) and reported that parents tended to use controlling behaviours more frequently for CWDS than for their siblings during feeding. This was attributed to the parents' increased concern about obesity in their CWDS, prompting increased use of controlling feeding behaviours (Costanzo & Woody, 1985). The increased use of controlling practices was not observed in general parenting outside of a feeding context (Phillips et al., 2017). This observational finding in general parenting of CWDS is interesting in comparison to the parent-reported increase in controlling behaviours during feeding of CWDS from O'Neill et al. (2005). Further research is needed to determine whether parent behaviour changes from general parenting to feeding CWDS. Accompanying parent-report surveys with observational methods in both general parenting and feeding contexts of CWDS will answer this question. Exploring the relationship between parent feeding practices and CWDS feeding responses is important to inform the development of guidance on the most effective practices used by parents that facilitate a positive feeding experience (e.g., decreased stress, increased food exploration; Caldwell & Krause, 2021). The overall goal of this study was to understand parent feeding practices used during snack time with young CWDS. For the observational measures, video data from a home-use test were used to explore the parent-CWDS relationship during feeding (Surette et al., 2021). The home-use test evaluated child eating behaviours and acceptance of commercial solid snack food products of various textures in a CWDS population. A behavioural coding scheme was developed to quantify parental feeding practices during snack time. We hypothesised that parents of CWDS would use more supportive feeding practices (e.g., Autonomy-Supportive Prompts to Eat) than controlling feeding practices (e.g., Coercive-Controlling Prompts to Eat) during the feeding sessions, based upon the observational results from Phillips et al. (2017), which reported no increased use of controlling practices in general parenting. We further hypothesised that parents would be consistent in their behaviours/practices across two observed days.

| Home-use test overview

A home-use test was developed to compare food texture acceptance and identify mealtime behaviours in CWDS. A detailed description of the methods, participant eligibility criteria and textured products is provided by Surette et al. (2021), and the results have been published by Ross et al. (2022).

Key messages
• A coding scheme was developed to focus on feeding practices used by parents of children with Down syndrome (CWDS).
• Parents of CWDS were observed to use more autonomy-supportive feeding practices to convince their children to eat the target food compared to coercive-controlling practices.
• Parents and feeding practitioners for CWDS may consider using supportive feeding practices to encourage acceptance in this population.
In summary, the test involved shipping four products and additional study materials (e.g., video recording and feeding study instructions) to the homes of participants. The study was completed over six consecutive days, and participants were asked to evaluate each test food once per day. Parents recorded their own liking of the food products, as well as their perception of their child's liking of the food products, using a nine-point scale presented through an online platform. Parents were instructed to provide their children with their normal feeding environment, use a consistent location for filming every day, and minimise distractions and facial gestures that might influence their child's evaluation of the food. From the videos, a panel of trained coders used a pre-defined behavioural coding scheme to capture parents' verbal and non-verbal behaviours during snack time, as well as child mealtime behaviours and food acceptance. To reduce the number of videos that needed to be coded, and to capture parent feeding practices both in response to a new task and after the parent and child had become used to the task, 2 of the 6 days of video recordings were selected as representative for behavioural annotation and coding. To determine which of the 6 days to include, several analyses of variance (ANOVA) were conducted. Results from the home-use test showed that the overall disposition of the children to foods on early feeding days (Days 1-3) did not significantly differ (p > 0.05); no significant differences were noted on later feeding days (Days 4, 5 and 6) either. Thus, Day 2 was selected as it was in the middle of the early study days, while Day 5 was in the middle of the later study feeding days.

| Coding scheme development

The coding scheme used was developed to focus on the feeding techniques used by parents with CWDS. The coding scheme incorporated elements from previously established coding schemes of feeding practices (Edelson et al., 2016; Fries et al., 2017; Fries, van der Horst, et al., 2019; Orrell-Valente et al., 2007; Surette et al., 2021) and was applied to each child's feeding session. Qualtrics software (Version April-August 2021) was used to collect all data pertaining to the coding scheme of the parent feeding behaviours.

| Coding periods

For each feeding session, parental feeding practices and child eating behaviours were coded across the three distinct periods of the eating occasion used in our previous study (Surette et al., 2021): (i) baseline state, (ii) initial presentation and (iii) food engagement. The baseline state was the period during which the child was waiting for a product to be presented to them. The initial presentation was the period between the child looking at the product and the child trying/rejecting the product. Food engagement was defined as the period between the child trying/rejecting the product, and the child finishing the product or the parent taking the product away from the child.

| Preliminary coding scheme

Before the official coding of the CWDS video data, the lead researcher/coder designed and performed a preliminary study with the first version of the coding scheme. The preliminary coding scheme consisted of eight feeding practices previously defined by Fries et al. (2017) and Fries, van der Horst, et al. (2019): Autonomy-Supportive Prompts to Eat (ASP), Coercive-Controlling Prompts to Eat (CCP), Hurrying, Slowing, 'Other', Positive Talking, Negative Talking and Neutral Talking. Eighteen CWDS videos were randomly selected and coded, with a balance of day (1-6 of the home-use test feeding study) and sex (male or female). The coded behaviours are summarised in Table 2.
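As a rough illustration of the day-selection ANOVA described above (not the study's actual analysis, which used the real per-day disposition data), a minimal Python sketch with invented liking scores might look like this:

```python
# One-way ANOVA across early feeding days, as in the day-selection step.
# The scores below are invented placeholders on a 9-point liking scale.
from scipy.stats import f_oneway

day1 = [7, 6, 8, 5, 7, 6]   # hypothetical scores, Day 1
day2 = [6, 7, 7, 6, 8, 5]   # hypothetical scores, Day 2
day3 = [7, 7, 6, 6, 7, 6]   # hypothetical scores, Day 3

f_stat, p_value = f_oneway(day1, day2, day3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference across early days; a middle day can represent the block.")
```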
| Final coding scheme

The final coding scheme was modified with respect to the type and number of feeding practices incorporated. Since no 'Hurrying' or 'Slowing' was observed during the preliminary study, these practices were converted from coded prompts to distinct point events (counted, but not coded as prompts). These two feeding practices were kept in the coding scheme to capture them if present in the final CWDS data set, so as not to miss behaviours that other feeding studies have reported. Another modification of the preliminary coding scheme was the addition of the 'Interference', 'Instruction' and 'Water prompt' categories. It was necessary to distinguish CCP from Interference parent practices, and ASP from Instruction parent practices. This allowed coders to account for differences in parental practices with respect to both the food product and the parent practice that either improved a child's action (Instruction practice) or discouraged certain child behaviours during snack time (Interference practice). The 'Water prompt' category was introduced to distinguish prompts to drink from the other prompts to eat. A prompt was coded as successful if the child ate the target food or completed the prompt within 20 s of the prompt being given, without a refusal behaviour (Edelson et al., 2016). A refusal behaviour was defined as the child turning their head away, increasing distance from the stimulus, throwing food, verbally saying 'no', or similar (Surette et al., 2021). Unsuccessful prompts were those that elicited a refusal behaviour, or where the child did not complete the prompt/eat the target food within 20 s of the prompt being given. An example of a feeding practice that is non-food related is a parent asking their child to 'Say "hi" to the camera'; a successful coded child response would be the child saying 'hi' to the camera. Prompts were also coded as unsuccessful if the child did not have the ability to complete the prompt (e.g., the parent prompted the child to eat the food, but there was no food in front of the child within the 20 s timeframe). If multiple prompts (same or different type) were given within the 20 s allotment, those prompts were not counted if the child had not responded to the first prompt.

| Coder training

Coders were trained using methods similar to those described by Surette et al. (2021) and Edelson et al. (2016). Coders received training materials in advance that included: the project overview, the coding scheme, feeding practice definitions and examples, video period timings (i.e., baseline state, initial presentation and food engagement periods), and serving orders per child participant. Training (2 h/day) occurred over three consecutive days with six randomly selected videos from the data set. The lead researcher reviewed the coding of feeding practices and led the practice coding of two videos on the first day. On the second and third days of training, the lead researcher led the coding of one video and then allowed for separate coding of another video before discussion.
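The 20-second success rule lends itself to a compact algorithmic statement. The sketch below is our illustration only: the event names and data layout are invented, and in the study the rule was applied manually by trained coders.

```python
# A sketch of the 20-second success rule described above. Event names and the
# data layout are invented for illustration.
from dataclasses import dataclass

REFUSALS = {"head_turn", "push_away", "throw_food", "says_no"}

@dataclass
class Event:
    t: float      # seconds from start of feeding session
    kind: str     # e.g. "ate_target", "completed_prompt", or a refusal name

def prompt_success(prompt_t: float, events: list[Event]) -> bool:
    """True if the child ate/completed within 20 s, with no refusal first."""
    window = [e for e in events if prompt_t <= e.t <= prompt_t + 20.0]
    for e in sorted(window, key=lambda e: e.t):
        if e.kind in REFUSALS:
            return False
        if e.kind in {"ate_target", "completed_prompt"}:
            return True
    return False  # no qualifying response within 20 s

events = [Event(12.0, "head_turn"), Event(15.0, "ate_target")]
print(prompt_success(10.0, events))  # False: the refusal precedes the eat event
```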
After the initial 6 h of training, the lead researcher and two coders each coded 24 videos randomly selected by the lead researcher. Per cent agreement was the measurement used to monitor reliability between the two coders for all videos in the study, since the presence of zeros is often an issue with behavioural coding. Once the goal of >80% agreement was reached, the coding of the study videos began (Edelson et al., 2016; Surette et al., 2021). The statistical analysis of inter-coder reliability was performed using Stata v.14 (Stata Corporation). All 222 videos (Days 2 and 5 per child participant) were reviewed by at least two coders. All questions about coding, including disagreements and clarifications, were resolved by the lead researcher. The reliability analysis of the training video data set indicated that >80% agreement would be very difficult to achieve given the subjectivity of the ASP, Positive Talking and Neutral Talking practices. Therefore, a reliability analysis and count measurement were conducted at the end of each week of the official coding (10 weeks with 20 videos coded per week per coder, and 1 week with 22 videos coded per coder). The count measurement consisted of the lead researcher counting the number of ASP, Positive Talking and Neutral Talking practices, and ensuring the sum was the same per food product for each video coded. If coders failed to reach 80% agreement on their responses, or the count measurement sum differed between coders, videos were re-coded until sufficient agreement was met. (Table 2 lists each feeding practice, its definition, how the practice was coded, and corresponding examples per video period; it also includes how the amount of food consumed and the child's overall disposition to the foods were coded.)

| Statistical analysis

XLSTAT 16.0 (Addinsoft) was used to perform a paired t-test to determine whether the video lengths were the same across both days of the feeding study. XLSTAT was also used to perform paired t-tests to determine whether parents were giving the same type and number of feeding practices to their child across both days, and to test the consistency of the feeding prompt success rates across both days. A two-proportion z-test was performed to determine whether the success of prompts differed by type of prompt. XLSTAT was used to perform an ANOVA to determine whether specific feeding practices were used more by parents who fed their children during the home-use test study compared to parents of children who fed themselves independently. The distribution, mean number and standard deviation of the feeding practices were calculated using Microsoft Excel (2021). The frequency of prompting the child to perform a task (i.e., number of ASP, CCP, Instruction, Interference, Water and 'Other') per minute and the frequency of talking to the child (i.e., Hurrying, Slowing, Positive Talking, Negative Talking and Neutral Talking) per minute were also calculated using Microsoft Excel. Hurrying was removed from the analysis because there were no recorded counts of this feeding practice. Content analysis of coder comments was performed in Microsoft Excel to explore additional behaviours occurring during the feeding sessions. Comments from every coded feeding session (Days 2 and 5) were categorised and counted.

| Ethical statement

The home-use test was approved by the Institutional Review Board of Washington State University (IRB #14706), with written informed consent obtained from all study participants.
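As a self-contained illustration of the two-proportion z-test named above, the following Python sketch uses invented counts (the per-type denominators are not reported in this excerpt); only the success rates echo the reported 49.3% versus 24.2%.

```python
# Two-proportion z-test: do ASP and CCP prompts differ in success rate?
# Counts are hypothetical placeholders chosen to match the reported rates.
from statsmodels.stats.proportion import proportions_ztest

successes = [493, 242]   # hypothetical: successful ASP vs CCP prompts
totals = [1000, 1000]    # hypothetical: total prompts of each type

z_stat, p_value = proportions_ztest(successes, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> rates differ
```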
| RESULTS

The average video length coded was 11.5 min (±5.5 min). Days 2 and 5 video lengths were not significantly different from each other (p = 0.090). Table 3 shows the mean number of overall feeding practices (with standard deviation), the overall per cent success of feeding prompts, and the per cent success of feeding prompts for CWDS. Overall, the average number of feeding practices experienced over one feeding session was approximately 21 prompts and 65 counts of talking. The frequency of prompting the child to perform a task during a feeding session was 1.8 prompts per minute, while the frequency of talking to the child was 5.7 counts per minute. The most common feeding practices observed were Positive Talking (mean 8.1 times per video), Autonomy-Supportive Prompts (5.9) and Neutral Talking (4.0).

Notes to Table 3: (a) A successful child response means that the child ate and/or followed the prompt direction without a refusal within 20 s of the prompt. (b) Counts were made for these feeding practices in case they were present in the data set, as none were observed in the preliminary data set. (c) Counts were made for these feeding practices, as the talking practices are not indicative of giving a child a specific prompt/task. (d) This response scale was consolidated to a 3-point scale for all analyses, with strongly negative and negative combining into negative (−1), neutral remaining the same (0), and strongly positive and positive combining into positive (1).

Overall, parents used more feeding practices on Day 2 than on Day 5 (p = 0.015). Specifically, more Neutral Talking was observed on Day 2 than on Day 5 (p = 0.003). Parents who fed their children used significantly more ASP, Instruction, Positive Talking and CCP than parents of children who fed themselves independently (p < 0.05). The overall success of the feeding prompts was not significantly different across both days of the feeding study (p = 0.230). The overall success of the feeding prompts across both days ranged from 24.2% (with CCP) to 69.3% (with Instruction). The success of feeding prompts depended on the type of prompt the parent used. CWDS successfully completed more prompts in response to ASP than CCP (p < 0.05). Examples of 'Other' prompts that were not food-related (n = 37 prompts, as indicated by coder comments) included a parent asking the child to look at the camera and talk, asking the child to say a specific word or phrase, asking for the food product cup to be returned to the parent, and asking the child to sign for a specific item or action. Examples of 'Other' prompts that were food-related (n = 23 prompts, as indicated by coder comments) included the parent encouraging the child to play a game with the food, or the parent singing a song to get the child to eat the food. Table 4 shows the results of the content analysis of the coders' comments from Days 2 and 5 during video coding. Seven content themes were observed in the comments: modelling (e.g., parent modelled eating); sign language (e.g., the parent used sign language to communicate); deviations from home-use test directions (e.g., parent offered milk instead of water); obstacles for the child (e.g., the child fell asleep); distractions (e.g., siblings were distracting the child); environment-related prompts (e.g., the parent asked the child to say something) and positive experiences (e.g., the parent hugged the child).
The majority of comments from all themes were recorded while the child was in the food engagement video period. The most frequently reported comment theme was modelling within the food engagement period, with 123 counts of parents either modelling eating or modelling drinking water. During the initial presentation video period, the most frequently reported comment theme was also modelling, with 46 counts. During the baseline state video period, the most frequently reported comment theme was environment-related prompts (e.g., the parent asked the child to say something, and the parent asked the child to give the sample cup or water cup back to the parent).

| DISCUSSION

The current study sought to observe parent feeding practices used during snack time with young CWDS when exposed to solid snack food products of various textures. Autonomy-supportive prompts were more likely to convince CWDS to eat the target food (49.3% successful) than were controlling prompts (24.2%). Support through positive experiences included parents letting their children play with the food, parents hugging their children and parents encouraging their children to practice feeding on their toys. Since parents of CWDS used more ASP than CCP, and ASP was the more successful approach, this might suggest that parents have noticed that ASP may be a more effective method to guide children's eating behaviour. Since parents of CWDS may have more interactions with feeding specialists, this may be why they utilise supportive practices when feeding CWDS (Marshall et al., 2015; Ross et al., 2019). Parents of CWDS have been encouraged to be attentive, use verbal encouragement, and teach new skills through play (Bruni, 2006). A previous study of CWDS during playtime also found that parental support elicited more engagement in play (Daunhauer et al., 2017). Parental modelling of feeding behaviours has been observed to be an effective way to influence toddlers to eat target foods (Edelson et al., 2016). In a study comparing different types of prompts to eat, modelling was the approach that most successfully convinced toddlers to eat the target food (n = 60 children, aged 12-36 months; Edelson et al., 2016). However, when parents of CWDS were surveyed about modelling during feeding (n = 25 of 40 CWDS, aged 7-63 months), the average response was that the parents slightly agreed that they actively demonstrate healthy eating for the child (from a set of four questions; Melbye et al., 2011; Rogers et al., 2022). Parental modelling in this study was coded as ASP and noted by the coders in the respective video period's comment section; thus, reporting on the direct success of modelling was limited. A total of 174 counts of modelling were recorded across both days of the study, primarily during food engagement, the video period where the child was eating the food product or had rejected the product and the parent was encouraging them to eat. Modelling was mostly accompanied by another ASP. For example, some parents would eat the product in front of their child, then say, 'Your turn to try!' and then wait for a response to their action and prompt.
The large amount of Neutral Talking recorded during the video coding may be attributed to parents, siblings, or others having discussions during the feeding session. The frequency of these Neutral Talking instances was coded because they are part of the feeding environment and experience for the child, and such practices may keep children engaged with the feeding session and attentive to their environment. Previous findings from Fogel et al. (2019) have shown a relationship between the frequency of talking and eating speed. Our second hypothesis was that no differences existed in the amount and type of feeding practices used on Day 2 versus Day 5. This hypothesis was incorrect, since parents gave significantly more feeding practices earlier than later during the home-use test. Parents and children may have become more familiar with the feeding study procedures by Day 5, which may explain why fewer practices were observed on this day. Parents may also have become more comfortable with feeding in front of the camera and felt less need to 'perform' as the study progressed. Another reason for this result may be the bidirectionality of child feeding (Fogel et al., 2018; Quah et al., 2018, 2019). Parents may adapt their feeding practices in response to the child: if their approach is successful, the parent may keep performing the same practice; if their approach is unsuccessful, a change in feeding practice may be observed. In the present study, we also accounted for children directly fed by their parents. The children fed by their parents did not appear to have the gross and fine motor skills (e.g., poor coordination and weak pincer grasp) required to feed themselves some or all of the textured snacks. Parents who fed their children used more ASP, CCP, Instruction and Positive Talking than the parents of children feeding themselves independently. This suggests that parents who are physically feeding their children generally interact more with the child, across the different types of feeding practices. The parent-CWDS relationship during feeding was important to explore, since there are no known studies that examine this relationship with observational methods (Nordstrøm et al., 2020). Perhaps the reason that observational studies are scarce in the context of feeding is recruitment (Surette et al., 2021). Recruiting a specialised population is a common challenge shared by previous studies with CWDS (Gisel et al., 1984a, 1984b; Spender et al., 1996), with obstacles such as geographical location and the feasibility of travel logistics to an on-site evaluation location. This study was the first of its kind to explore the parent-CWDS relationship during feeding with observational measurements. Another strength of this study was the number of parent-CWDS dyads observed (N = 111). Of the few studies that have explored parenting practices and parenting dimensions in CWDS, study populations have ranged from 10 to 35 mother-child dyads (Blacher et al., 2013; Gilmore & Cuskelly, 2012; Phillips et al., 2017). Our larger sample size provided the statistical power needed to draw meaningful conclusions (Surette et al., 2021). Additionally, we recruited a hard-to-reach population nationwide through social media and by using the snowball method (Surette et al., 2021). The in-home nature of the study meant that a larger and more widespread population could be recruited. In addition to longitudinal assessment, future research could conduct an intervention using ASP (e.g., modelling and positive reinforcement) with a cohort of CWDS with high food rejection and challenging feeding. If such an intervention could increase food acceptance and intake in this cohort of CWDS, it would confirm the desirability of these practices for feeding CWDS. As with all studies, this study experienced several limitations.
First, this study was conducted with video data from a home-use test where CWDS evaluated snack products; thus, the results may not generalise to a typical family meal (with vegetables and novel foods; Moding & Fries, 2020). Next, future work should include questionnaires that measure parenting styles and parent behaviours/practices. Questions related to parental concerns about childhood obesity (O'Neill et al., 2005), parental stress level (Phillips et al., 2017) and choking concerns (Spender et al., 1996) should also be included in future work, as these factors may influence parent behaviours/practices. Also, exploring socioeconomic status and access to support for parents of CWDS may be important for understanding parent-feeding practices (Marshall et al., 2015; Phillips et al., 2017). Differences in socioeconomic status may affect access to support services for CWDS (e.g., the cost and quality of medical and therapeutic services; Caldwell & Krause, 2021). The coding scheme was designed to count the number and type of feeding practices observed during a feeding session. For Neutral Talking, it was not possible to differentiate between child-directed speech and background conversations. All counts of talking during the feeding session were included, since this practice was part of the feeding environment. When an 'Other' prompt was observed, the coder was directed to describe the specific prompt in the comment section of the scheme; however, this was not consistently completed. As such, the interpretation and categorisation of these prompts into food-related and non-food-related 'Other' prompts was slightly limited. We also may have missed parental modelling occurring behind the video recording device, potentially underestimating this behaviour.

| CONCLUSION

This is the first study to explore parental practices used during feeding of young CWDS using observational methods, providing more context to CWDS feeding experiences. A coding scheme was developed to focus on feeding practices used by parents of CWDS during feeding of variously textured solid snack food products. We observed both the feeding practices used by parents and the CWDS responses to those parent practices. We observed that autonomy-supportive prompts were more likely to convince CWDS to eat the target food than coercive-controlling prompts. Parents and feeding practitioners for CWDS should consider using more supportive feeding practices, such as modelling desired feeding behaviours, to encourage acceptance and successful intake in this population. Future research can build upon the knowledge gained from this study to confirm longitudinal associations of parent practices with child behaviours during feeding.
A Review of Practice and Implementation of the Internet of Things (IoT) for Smallholder Agriculture

In order to feed a growing global population projected to increase to 9 billion by 2050, food production will need to increase from its current level. The bulk of this growth will need to come from smallholder farmers, who rely on generational knowledge in their farming practices and who live in locations where weather patterns and seasons are becoming less predictable due to climate change. The expansion of internet-connected devices is increasing opportunities to apply digital tools and services on smallholder farms, including monitoring soil and plants in horticulture, water quality in aquaculture, and ambient environments in greenhouses. In combination with other food security efforts, internet of things (IoT)-enabled precision smallholder farming has the potential to improve livelihoods and accelerate low- and middle-income countries' journey to self-reliance. Using a combination of interviews, surveys and site visits to gather information, this research presents a review of the current state of the IoT for on-farm measurement, cases of successful IoT implementation in low- and middle-income countries, challenges associated with implementing the IoT on smallholder farms, and recommendations for practitioners.

Introduction

In the 1930s, one farmer in the United States could grow enough to feed four people. Today, one farmer can feed 155 people [1]. Approximately 1 billion people worldwide are involved in agriculture, and although the total number of farmers is declining, the demand for agricultural crops is expected to double as the world population reaches 9 billion by 2050 [2]. This will require an increase in agricultural productivity, especially from low- and middle-income countries (LMICs). Recent technological advances have contributed to the rise of precision agriculture, enabling farmers to make better decisions with more information about their soil, water, crop, and local climate [3,4]. However, uptake of these advances has been limited to commercial-scale and cash-crop cultivation [1]. There is a growing body of evidence indicating that the implementation of precision agriculture tools and practices across different types of farming offers benefits. For example, a digital decision support system (DSS) that utilized crop information provided by growers, in combination with weather data, effectively alerted growers when to apply fungicide to their potatoes, resulting in the effective mitigation of late blight disease and more efficient fungicide use, saving them up to 500 USD per acre cultivated [5][6][7]. Another DSS platform integrated weather data and electrical capacitance sensors for real-time monitoring of soil water content, along with soil water balance and irrigation scheduling models, to provide recommendations to durum wheat farmers on the timing and intensity of irrigation, resulting in water savings of at least 25% compared to traditional scheduling. An important enabler for IoT services in LMICs (and a subset of LMICs, the global food security strategy (GFSS) countries [20]) is reliable cellular infrastructure [21]. Country-level data from the World Bank suggest that the GFSS country-set average for cellular connections and smartphone penetration is on par with India, a common benchmark for digital services among developing countries (Figure 2). However, the linkage has not yet been made to the agricultural sector.
Internet usage in GFSS countries ranges from 10% to 50% of the population and correlates inversely with the population employed in agriculture (Figure 3a). Where access to the internet falls short, digital services often benefit from cellular connectivity. However, while agricultural productivity has increased in most GFSS countries, those gains show no correlation with mobile phone penetration (Figure 3b). For example, a recent study has shown that mobile phone ownership in farming households is nearly universal; however, fewer than 25% use a phone to access information about agriculture and livestock, or for buying and selling products [22]. These data, and more from the GFSS agriculture dataset, can be accessed in the Supplementary Materials. (In Figure 3b, the crop production index (CPI) shows agricultural production for each year relative to the base period 2004-2006 (CPI = 100); it includes all crops except fodder crops [23].)

Approach

A mixed method approach consisting of a combination of literature reviews, expert interviews, web-based surveys, and site visits was used to carry out this research [25][26][27]. The literature review covered 87 relevant publications, which are catalogued in an open online repository [25]. This review provided an overview of sensors and the IoT for agriculture globally. Next, 25 interviews with relevant experts worldwide were conducted. These expert interviews were used to identify the gaps in understanding the value of the IoT to smallholder farmers and potential directions for technology development in this sector. Seventy web-based surveys were then distributed to stakeholders in the IoT and agricultural technology communities, from which we received 37 responses; these web-based surveys served not only to triangulate the expert interview data, but also to provide use case profiles for our site visits. Finally, five site visits and discussions with farmers at IoT implementation sites in India and Kenya were conducted. The site visits were used to validate findings from the literature review, expert interviews, and web-based surveys. Two countries were selected for the site visits: (i) India, selected because of the large number of agricultural technologies being piloted and implemented there on smallholder farms, and (ii) Kenya, where we conducted the majority of our site visits. Among the different GFSS countries, Kenya had the most active IoT for agriculture projects, and this influenced our selection.

Current State of IoT for Agriculture

For this overview of the IoT for agriculture, sensor and communication technology applications are classified into five categories: (a) climate, (b) livestock, (c) plant, (d) soil, and (e) water. Within each category, there are many common, measurable parameters that can influence the performance of the agricultural system (Figure 4). This categorization focuses on "ground-based" measurements, while other methods exist, including aerial- and space-based earth observation or remote sensing, for example in [26,28]. In some cases, remote sensing and ground-based measurements are combined to provide temporally (ground-based sensors) and spatially (remote sensing) dense crop measurements [29]. Table 1 summarizes common electronic sensors, their applications in agriculture, and articles that describe those in more detail. Wherever available, articles that report on applications in LMICs are included, although relevant precision agriculture research from across the globe is also included.
Some parameters can use different technologies to estimate the same output. Methods for measuring basic agricultural parameters, including soil and atmospheric conditions, are well established, and commercial products are available for IoT applications. New applications for optical sensors, in particular, are evolving as the cost of semiconductor technology and of data storage and transmission decreases. A particular challenge with using low-cost sensors in agriculture is the need to calibrate the sensor for the specific implementation conditions (a simple calibration sketch follows at the end of this section). For applications where artificial intelligence (AI) is applied to classify events based on measurement patterns, complex training datasets are necessary to teach the AI algorithm. The IoT for smallholder agriculture represents a challenge for data transmission due to remote locations, with devices distributed over large areas or multiple farms that may have limited access to electricity and cellular networks. Therefore, range, data rate, and power consumption are important design considerations and are compared for the common communication protocols. In Figure 5, communication protocols are grouped based on data rate and range of transmission, divided into "Long range", including low-power wide area network (LPWAN) protocols such as LoRa and SigFox; "Short range", including Bluetooth Low Energy (BLE), Zigbee, and Z-Wave; and cellular communication, including GSM 2G, 3G, 4G, and 5G. With respect to power consumption, solutions that are tailored to IoT applications offer superior performance to more general protocols (Figure 6: power consumption vs. range for common communication protocols [81,82], with bubble size proportional to data rate). While the range and power consumption of protocols like LoRa and SigFox are well suited for IoT applications, their device compatibility is more limited compared to generic wireless protocols like Bluetooth and WiFi. Additionally, cellular and satellite communication offer the advantage of providing a direct link to a web server instead of passing through an intermediate gateway. Even though many organizations in LMICs have developed IoT systems for agriculture, few studies have reported on implementation in a smallholder context and the associated outcomes. Research in Embu County, Kenya tested different proximal soil sensors to estimate soil properties and composition on smallholder plots and provided management recommendations to farmers [57]. The measurements showed considerable variation, both within plots and regionally, indicating a need for management recommendations based on both local measurements and regional soil maps. Another application implemented a tracing system for smallholders raising live poultry traded in Vietnam, to prevent the spread of avian influenza [75]. Low-cost radio-frequency identification (RFID) tags on transport cages and electronic tag readers at markets tracked birds from farm to market, revealing that birds passed through up to six intermediate traders, creating a high risk of virus transmission. Many commercial entities serving smallholder farmers use the IoT to track assets and interface digitally with customers; for example, tractors and implements for hire by Hello Tractor in West Africa and EM3 AgriServices in India [83], and irrigation equipment by Agriworks, Futurepump, and SunCulture in East Africa [84].
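As an example of the calibration step flagged above, a two-point linear calibration for a low-cost capacitive soil-moisture sensor might be sketched as follows; the raw ADC values and saturation water content are placeholders that would be measured at the actual implementation site.

```python
# Two-point linear calibration for a low-cost capacitive soil-moisture sensor.
# The constants below are hypothetical: in practice, RAW_DRY is read in
# oven-dry soil and RAW_SATURATED in saturated soil from the target field.
RAW_DRY = 830         # hypothetical ADC reading, oven-dry soil
RAW_SATURATED = 410   # hypothetical ADC reading, saturated soil
VWC_SATURATED = 0.45  # hypothetical volumetric water content at saturation

def raw_to_vwc(raw: int) -> float:
    """Map a raw ADC reading to volumetric water content (m^3/m^3)."""
    frac = (RAW_DRY - raw) / (RAW_DRY - RAW_SATURATED)
    frac = min(max(frac, 0.0), 1.0)  # clamp readings outside the calibration range
    return frac * VWC_SATURATED

print(f"VWC = {raw_to_vwc(620):.3f} m^3/m^3")
```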
Enrollment in these services is enabling more productive growing; however, low digital literacy is a barrier that requires a mix of traditional and technology-enabled engagement with farmers. A notable case is the use of sensors to monitor borehole pump reliability for potable and irrigation water supply in East Africa, which is discussed later in this article.

Implementation Cases

In addition to reviewing the literature and interviewing experts, we found it useful to study on-the-ground implementations of the IoT for smallholder agriculture. In particular, this research aimed to understand what helps to create an enabling environment at the country level, and the keys to success and the risks associated with active IoT in smallholder agriculture projects in India and Kenya. India is home to over 130 million smallholder farms [85] and is a common testbed and incubator for digital services for farmers. Site visits and discussions in India focused on identifying the factors that enable innovations in the agricultural sector in order to promote those in GFSS countries. The following is a summary of those factors.
• Mobile network connectivity and cost: Relatively cheap monthly mobile data plans of 0.21 USD/GB [38,39] have helped IoT companies explore opportunities to work with sensors using cellular services for data transmission.
• Market opportunity: The large population size and density, and the increasing incomes of the Indian middle class, make India a lucrative market for IoT providers [41].
• Policies to support farmers: Agriculture accounts for 17.32% of India's GDP and employs over 50% of the population, and some state governments are providing subsidies for new farm equipment that could be leveraged towards precision agriculture purchases [40].
• Academic institutions: Some of the highest-ranking academic institutions in India are performing research that benefits Indian farmers and are raising students' awareness of farming challenges through hackathons.
Kenya represents a very different landscape compared to India. The population and total number of farm holdings are vastly smaller, and many of the enabling factors discussed above are yet to emerge. However, Kenya was selected because, among the different GFSS countries, the authors' investigations and discussions revealed several active IoT for agriculture cases across different applications and locations. These cases are summarized in this section, along with some of the keys to success and the risks associated with project sustainability. As part of the Kenya Resilient Arid Lands Partnership for Integrated Development (RAPID) project to manage the recently discovered Lodwar Basin Aquifer in northern Kenya, electrical current sensors were installed on solar-electric borehole pumps to monitor "water system functionality, the approximate number of pumping hours and volume extracted per day, and the last report date for the sensor" [86]. Data from the current sensors are transmitted via cellular or satellite network to a web server and dashboard where county government staff can monitor borehole use (a sketch of this kind of processing follows below). While the pumped water serves a variety of community needs, some is used for irrigating small farm plots on municipal land that would otherwise be infertile (Figure 7). One of the keys to the success of this IoT implementation was the clear value and utility to the government officials responsible for the pumps, who indicated that access to the dashboard significantly reduced maintenance costs and pump downtime.
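As a hedged sketch of the processing referred to above, daily pumping hours and extracted volume can be estimated from a current-sensor time series roughly as follows; the on-threshold, sample interval, and nominal flow rate are assumptions, not the project's actual values.

```python
# Estimate daily pumping hours and extracted volume from pump current samples.
# All constants are hypothetical placeholders, not the Kenya RAPID values.
SAMPLE_INTERVAL_S = 60        # one current sample per minute (assumed)
ON_THRESHOLD_A = 2.0          # current above this implies the pump is running (assumed)
NOMINAL_FLOW_M3_PER_H = 5.0   # rated pump flow (assumed)

def daily_summary(current_samples_a: list[float]) -> dict:
    on_samples = sum(1 for a in current_samples_a if a > ON_THRESHOLD_A)
    hours = on_samples * SAMPLE_INTERVAL_S / 3600.0
    return {
        "pumping_hours": round(hours, 2),
        "volume_m3": round(hours * NOMINAL_FLOW_M3_PER_H, 2),
    }

# 24 h of fake data: pump on for the first 6 h, off afterwards.
samples = [4.5] * (6 * 60) + [0.1] * (18 * 60)
print(daily_summary(samples))  # {'pumping_hours': 6.0, 'volume_m3': 30.0}
```

A real deployment would also report the last-contact timestamp, which is what lets dashboard users distinguish an idle pump from a dead sensor.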
The Kenya RAPID project also benefits from having a large, distributed, and well-coordinated team where each organization plays a clear role, including the supplier of the IoT technology. The sensor measurements have revealed detailed pump usage patterns in relation to rainfall [74], and this, coupled with a machine learning algorithm to predict failures and reduce detection time, has resulted in an increase in system-wide pump uptime from 70% to >99% [87]. A risk for the project is the current reliance on grant funding, although the IoT component has now been incorporated into the county government budget. (Figure 7: borehole pump monitoring for small-plot community farming in Turkana.)

The IoT solution for the aquaponics system at Kikaboni Farm was developed by Upande, and it monitors water conditions (temperature and pH) in the fish tanks and environmental conditions (ambient temperature and relative humidity) in the hydroponic vegetable growing area (Figure 8: water and greenhouse monitoring for aquaponics in Olooloitikosh). The sensors are battery-powered, charged by 3-10 W solar PV panels, and transmit measurements over a LoRa network to a gateway with a mobile data connection. Data are stored and processed on Upande servers and regularly fed back to the farm's horticulture manager, who can perform necessary adjustments. For example, the cover material for the vegetable structure was changed after measurements showed that temperatures were far above the recommended growing temperatures for leafy green vegetables (Figure 8). One of the keys to success for this application is the ability of the horticulture manager to interpret the data, which has allowed significant improvements to product yield and quality. Kikaboni Farm has also collaborated on product development by providing a testbed for improvements to Upande Vipimo IoT products. A risk for this project is the reliance on the expertise and willingness of the horticulture manager.

With little vegetation in the Mara River watershed, rainy season precipitation causes rapid river level rise and destructive flooding (Figure 9). The IoT solution developed by Upande consists of solar-battery powered sound navigation and ranging (SONAR) level sensors at several points along the river, connected by LoRa to several grid-connected gateways with cellular access. In the event that the level sensors detect a rapid river level rise, an SMS-based system is activated, which alerts downstream farmers to pump water out of the river in order to open capacity to receive the upstream surge. One of the keys to the success of this system is the dedication of volunteers to coordinate activities, maintain the IoT system, and host workshops to engage the local communities. Some of the risks to the sustainability of this project are a lack of financial support, the rugged and remote conditions that the IoT must survive in, and inconsistent support from the county government to allow the system to operate.

Greenhouses offer the opportunity for small farmers to grow high-value vegetables throughout the year in a controlled environment. However, uncontrolled greenhouse conditions can be harsh and damaging to plants. Researchers at the Dedan Kimathi University of Technology developed an IoT temperature, relative humidity, and soil moisture sensor coupled to an internet-connected gateway to assist farmers on their research farm (Figure 10: greenhouse monitoring in Nyeri) [34].
In this case, the system relies on the expertise of the farmers to interpret data and make the proper adjustments to the greenhouse. The farmers showed a high level of satisfaction with the system and reported that it had greatly improved the productivity of tomatoes in their greenhouses. A key to the success of this project is the close connection and proximity between the IoT developers and the farmers, and the ability of the farmers to interpret the sensor measurements. This project is also equipping engineering students with the skills and experience needed to provide commercial IoT for agriculture solutions in Kenya [88]. A potential risk to the sustainability of this project is that steady funding is needed to build a pipeline of work, given the expertise, location, and access to research agriculture facilities.

To supplement the limited information available to smallholder farmers, Arable has developed a multi-parameter IoT device with a suite of sensors for measurements including ambient temperature, humidity, precipitation, Normalized Difference Vegetation Index (NDVI), and photosynthetically active radiation, used for monitoring smallholder maize plots in Nanyuki (Figure 11: a researcher and farmer discussing a multi-parameter sensor installed on a farm in Nanyuki) [22,60]. During current pilots in central Kenya, the devices are installed on smallholder farms and data are transmitted through the cellular network to cloud servers where they are stored and analyzed. Reduced data are fed back to Kenyan partner researchers and agriculture extension agents who offer advice to area farmers. Agriculture extension agents reported that the local-scale information is a valuable supplement to regional forecasts provided by the Kenyan government and helps them to provide better advice to farmers. A risk to this project is the challenge of determining an economical pathway to sustain and expand the IoT technology and staff in the ecosystem.

Discussion of Challenges and Recommendations

Based on our literature review, expert interviews, surveys, and site visits, the team has synthesized a list of the challenges in the IoT for smallholder agriculture in GFSS countries (summarized in Figure 12), and proposed recommendations for some of the relevant players involved. The following section is a summary of challenges grouped into five categories, which correspond to the IoT architecture in Figure 1: (i) measurement device, (ii) data transmission, (iii) data storage and analytics, (iv) feedback and implementation, and (v) project structure and support. A detailed discussion of the challenges, opportunities, and recommendations for the IoT for smallholder agriculture can be found in the full project report [25]. We believe that this section will appeal to audiences beyond the academic and research community, and specific recommendations are segmented towards Technologists, Project managers, and Funders.

Measurement Device Challenges

Access to components: Off-the-shelf IoT products are often not available, suitable, or affordable for commercial technologies in developing countries. Therefore, custom-made solutions are designed in-house or by local IoT companies. During our site visit in Kenya, a number of IoT implementation teams that we spoke to indicated that procuring electronic and hardware components during the product development phase often delayed the project and increased costs.
Technologists: A good starting point for prototype circuit components is the local electronics and scrap market (e.g., CBD in Nairobi, Kisenyi in Kampala, Suame Magazine in Kumasi). These markets usually stock basic circuit and prototyping components that can be used as a "good enough" solution to reach a proof-of-concept prototype.

Device design: Smallholder farms are often in rugged and remote locations and require special consideration when designing a connected, electronic device for long-term monitoring.
Technologists: As soon as possible in the design process, test your device at a pilot site that is representative of the actual implementation site.
Project managers: Involve the farmers, agriculture extension agents, and farmer co-operatives in the product design phase to help your team identify non-obvious challenges and improve the likelihood that farmers will accept the idea.

Sensor calibration: Correlating raw sensor measurements to actual physical values requires performing controlled calibration tests that can be expensive and time consuming.
Technologists: Aim to eventually provide calibration documentation for your product so that it can be benchmarked against other products and measurement methods. Perform some simple tests to check factory-calibrated sensors in conditions as close to the implementation conditions as possible, in case there is a need to apply a correction factor.

Access to expertise: Many teams we visited mentioned a lack of access to expertise and resources for technical and business challenges. For example, several engineers reported spending significant amounts of time combing through websites to find technical solutions during their product development.
Technologists: Participate in online communities focused on IoT for agriculture hardware, especially for idea exchange, technical support, and recruiting. For example, we found the Gathering of Open Ag Tech to be a good example of such a resource in the agriculture sector (forum.goatech.org).

Data Transmission Challenges

Poor connectivity: Due to the remote location of some farming communities, poor mobile network connectivity and reliability is a common challenge.
Technologists: As a last resort, data can be collected manually, i.e., from a central hub connected to individual devices by a local wireless network.
Project managers: Check mobile network coverage in your implementation area using GSMA maps and cross-check with non-industry sources, for example, a compilation of user-contributed Nperf data.

Transmission cost: While data costs have decreased significantly, the recurring cost of providing IoT services was frequently identified as a challenge for commercial applications.
Technologists: For some applications, satellite and LPWAN-based service providers are increasingly cost-competitive with conventional mobile data.

Data Storage and Analytics

Measurement to feedback: Raw sensor measurements can be difficult to reduce into actionable recommendations for farmers.
Project managers: While measurements are relatively easy to display on a dashboard, correlating them with crop growth and other effects requires input from a topical expert. Accessing the right expertise can be a challenge in itself, but resources at universities and agricultural extension agencies can often give some direction.
Data Storage and Analytics

Measurement to feedback: Raw sensor measurements can be difficult to reduce into actionable recommendations for farmers.

Project managers: While measurements are relatively easy to display on a dashboard, correlating them with crop growth and other effects requires input from a topical expert. Accessing the right expertise can be a challenge in itself, but resources at universities and agricultural extension agencies can often give some direction.

Technologists: Take a human-centered approach by having farmers and agriculture experts from the area provide input on (a) what kinds of recommendations would be useful and actionable and (b) the best means for the farmer to receive them.

Equity of access: The ownership of the collected data is often overlooked in IoT projects and can lead to disagreements among stakeholders.

Project managers: Negotiate data access with project partners and funders. While it is important to maintain community access to the data, service providers and funders may have their own requirements for data access.

Funders: Communicate early with the project implementers and set clear guidelines on data access, privacy, ownership, and mechanisms for sharing the data with the community.

Feedback and Implementation

Smartphone penetration: Many agriculture advisory mobile apps are designed for use on smartphones; however, smartphone penetration is low among rural populations in GFSS countries [89,90].

Project managers: Make sure the output reaches the intended audience in a format that is accessible to them. If smartphone penetration is low, another medium such as radio, television, local print media, or extension agents can be appropriate for delivering recommendations to farming communities [17]. Early discussions with farmers, farmer co-operatives, and agriculture extension agents can help ensure that the output reaches the audience through the right communication channel.

Remote location: Many smallholder farms are in remote, difficult-to-access locations, which can add significant cost to IoT services if technicians need to visit farms regularly.

Technologists: Incorporate onboard diagnostics into your device (e.g., battery voltage, microcontroller temperature, accelerometer) in order to minimize maintenance visits, as sketched below.

Project managers: Identify a local farmer or extension agent who can assist with basic sensor maintenance and troubleshooting, for example, the horticulture manager at Kikaboni Farm described in Section 3.2.
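A minimal sketch of the onboard-diagnostics recommendation above: each telemetry upload carries a few health fields alongside the sensor reading so that failing units can be triaged remotely. The read_* functions are hypothetical placeholders for board-specific driver calls.

```python
# Minimal sketch: bundling device-health diagnostics with each
# telemetry payload. The read_* functions stand in for real ADC or
# driver calls and return fixed placeholder values here.

import json
import time

def read_battery_voltage() -> float:  # e.g., via an ADC voltage divider
    return 3.9

def read_mcu_temperature() -> float:  # many MCUs expose an on-chip sensor
    return 41.2  # degrees C

def read_tilt_ok() -> bool:           # accelerometer: has the mast moved?
    return True

def build_payload(soil_moisture: float) -> str:
    """Attach health fields to the normal sensor reading."""
    return json.dumps({
        "ts": int(time.time()),
        "soil_moisture": soil_moisture,
        "diag": {
            "vbat": read_battery_voltage(),
            "mcu_temp_c": read_mcu_temperature(),
            "tilt_ok": read_tilt_ok(),
        },
    })

print(build_payload(soil_moisture=0.23))
```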
IoT revenue generation: Identifying the best customer for the information collected is important. In many cases, there is an opportunity to provide a service along with the hardware, which can generate recurring revenue.

Project managers: Many IoT-for-agriculture projects are initially grant funded; determining approaches to monetize data and analytics services will help ensure project sustainability. Consider this aspect when agreeing on the terms of the funding.

Project Structure and Support

Complicated ecosystem: Successfully implementing an agriculture-IoT project involves coordinating a large, and sometimes complicated, ecosystem of actors over a long timescale.

Project managers: Try to organize a team of collaborators that includes data scientists, sensor experts, and agriculture experts, so that each contributes to the specific part of the problem in which they are expert.

IoT business model: Most people we connected with during this project agreed that data from on-farm sensors is valuable. However, identifying a specific customer, or buyer, for the device or service is less straightforward, especially when the beneficiary is unlikely to be able to afford the technology.

Project managers: Identifying and understanding who the end customer is and who will pay for the data is important. A good example is the Kenya RAPID project, described in Section 3.2, in which the county government includes the IoT services in its annual budget.

Short funding timelines: Two- and three-year grants from donor agencies are short relative to farming seasons: they yield data from only one or two cropping cycles, so collecting quality data and making actionable recommendations is a challenge.

Funders: Increase funding timelines to five to seven years to allow data collection over multiple cropping periods. A larger dataset also enables better pattern recognition and predictive analytics, leading to more suitable, actionable recommendations.

Vertical integration: Many organizations provide an end-to-end solution in which a single organization takes on the responsibility for farmer recruitment, IoT technology development, implementation, data analysis and recommendations, and monitoring and evaluation.

Project managers: Some of the key stakeholders the team spoke with indicated that, while they were experts in a few aspects of the IoT and agriculture, trying to work on the entire end-to-end process of an agriculture-IoT ecosystem was stretching them thin. A horizontal structure, in which each organization offers specific expertise to solve part of the overall problem, is therefore favorable for project success.

Funders: Funding agencies can play a facilitating role in connecting diverse groups with specific expertise to solve the overall problem. Organizations forming new partnerships could particularly benefit from guidance in negotiating agreements and contracts.

Conclusions

This review presents a perspective on the current state of the IoT in smallholder agriculture, including summaries of state-of-the-art sensor and communication technologies, real examples of IoT implementation on farms in Kenya, and the challenges and recommendations associated with implementing the IoT on smallholder farms in LMICs. The barriers and challenges are largely known and surmountable, and viable implementation of the IoT is already occurring. Currently, applications for scenarios with easily detectable and actionable parameters offer more immediate promise and likelihood of uptake than those involving measurements that are more difficult to interpret. Additionally, targeting measurements that address a significant pain point (e.g., water supply reliability) can ensure buy-in from, and value to, the communities involved. Major evolving challenges due to climate change will motivate implementation of the IoT to improve resilience and secure safety nets for the global food supply. Greater resilience and agricultural productivity have the potential to strengthen rural economies and help developing countries along their journey to self-reliance. The IoT-for-agriculture objective remains: to gather sufficient quantities of data of the right type, from the right location, at a low cost, and with sufficiently well-informed analysis and understanding for farmers to take action.

Conflicts of Interest: The authors declare no conflicts of interest.
Signatures of non-monotonic d-wave gap in electron-doped cuprates

We address the issue of whether the data on optical conductivity and Raman scattering in electron-doped cuprates below $T_c$ support the idea that the $d$-wave gap in these materials is non-monotonic along the Fermi surface. We calculate the conductivity and Raman intensity for elastic scattering, and find that a non-monotonic gap gives rise to several specific features in the optical and Raman response functions. We argue that all these features are present in the experimental data on Nd$_{2-x}$Ce$_{x}$CuO$_4$ and Pr$_{2-x}$Ce$_{x}$CuO$_4$ compounds.

I. INTRODUCTION

The studies of the electron-doped cuprates Nd$_{2-x}$Ce$_x$CuO$_{4-\delta}$ (NCCO) and Pr$_{2-x}$Ce$_x$CuO$_{4-\delta}$ (PCCO) are attracting considerable attention from the high-$T_c$ community. The phase diagram of electron-doped cuprates is not as involved as that of hole-doped materials. It contains sizable regions of antiferromagnetic and superconducting phases, and only a small region showing pseudogap behavior (Ref. 1). The superconducting dome is centered around a quantum-critical point at which the antiferromagnetic $T_N$ vanishes, in close similarity to the phase diagrams of several heavy-fermion materials (Ref. 2).

Scanning SQUID (Ref. 3) and ARPES experiments (Refs. 4, 5) on electron-doped cuprates provided strong evidence that the gap symmetry is $d_{x^2-y^2}$, the same as in hole-doped cuprates. This gap has nodes along the diagonals of the Brillouin zone and changes sign twice along the Fermi surface. The functional form of the $d_{x^2-y^2}$ gap is a more subtle issue, however. In hole-doped cuprates, the gap measured by ARPES follows reasonably well the simple d-wave form $\Delta(\mathbf{k}) = \frac{\Delta_0}{2}(\cos k_x - \cos k_y)$ (equivalent to cos 2φ for a circular Fermi surface), at least near and above optimal doping (Ref. 6). In the electron-doped cuprates, high-resolution ARPES data on the leading-edge gap in Pr$_{0.89}$LaCe$_{0.11}$CuO$_4$ (Ref. 4) show a non-monotonic gap, with a maximum in between the nodal and antinodal points on the Fermi surface. Such a gap was earlier proposed in Ref. 7 as a way to explain Raman experiments in NCCO, in particular the higher frequency of the pair-breaking '2∆' peak in the $B_{2g}$ channel than in the $B_{1g}$ channel. Recent measurements of the optical conductivity $\sigma_1(\omega)$ in Pr$_{1.85}$Ce$_{0.15}$CuO$_4$ (Ref. 8) were also interpreted as indirect evidence of a non-monotonic gap.

The interpretation of the experimental results is still controversial, though. The ARPES data on PCCO below $T_c$ in Ref. 4 show a non-monotonic leading-edge gap, but the spectral function along the Fermi surface does not display a quasiparticle peak, from which one would generally infer the functional form of the gap more accurately. The interpretation of the Raman data has been criticized in Ref. 9 on the grounds that, within BCS theory, the shapes of the $B_{1g}$ and $B_{2g}$ Raman intensities for the non-monotonic gap proposed in Ref. 7 do not agree with the data. Finally, the optical results for PCCO in Ref. 8 do show a maximum at about 70 cm$^{-1}$, which is close to $2\Delta_{\max}$ inferred from $B_{2g}$ Raman scattering. However, it is a priori unclear whether one should actually expect such a maximum in the optical conductivity. In particular, in hole-doped materials, $\sigma_1(\omega)$ is rather smooth at $2\Delta_{\max}$ (Ref. 10).

From the theory perspective, the non-monotonic d-wave gap appears naturally under the assumption that the $d_{x^2-y^2}$ pairing is caused by the interaction with a continuum of overdamped antiferromagnetic spin fluctuations.
Spin-mediated interaction is attractive in the $d_{x^2-y^2}$ channel and yields a gap which is maximal near the hot spots, the points on the Fermi surface separated by the antiferromagnetic momentum $Q_{AF}$. In optimally doped NCCO and PCCO, the hot spots are located close to the Brillouin-zone diagonals, and one should generally expect the $d_{x^2-y^2}$ gap to be non-monotonic (Ref. 11). More specifically, in the spin-fluctuation scenario, the maximum of the gap is slightly shifted away from a hot spot towards the antinodal region, such that d-wave superconductivity with a non-monotonic $d_{x^2-y^2}$ gap survives even when the hot spots merge at the zone diagonals (Ref. 12). The solution of the gap equation in this case yields

$$\Delta(\phi) = \Delta_{\max}\,\sqrt{2ae}\;\cos 2\phi\; e^{-a\cos^2 2\phi}. \qquad (1)$$

Here, φ is the angle along the (circular) Fermi surface (φ = π/4 corresponds to a diagonal Fermi point), a > 1/2 is a model-dependent parameter, and $\Delta_{\max}$ is the maximum value of the gap, located at $\cos 2\phi = (1/2a)^{1/2}$. The gap for various values of a is shown in Fig. 1. As a increases, the nodal velocity increases, the maximum of the gap shifts towards the zone diagonal, and the value of the gap at the antinodal point φ = 0 decreases. A similar functional form of the gap can be obtained by adding higher harmonics cos 6φ, cos 10φ, etc., to the cos 2φ gap. We have found, however, that Eq. (1) is somewhat better suited for experimental comparisons than a gap with a few higher harmonics. ARPES measurements (Ref. 4) place the maximum of the gap slightly below φ = π/6. This is best reproduced if we set a = 2. However, since the ARPES results have not yet been confirmed by other groups, we will keep a as a parameter and present the results for various values of a.
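For concreteness, the following minimal Python sketch evaluates the gap of Eq. (1) along the Fermi surface for several values of a; the $\sqrt{2ae}$ prefactor simply normalizes the maximum to $\Delta_{\max}$ at $\cos 2\phi = (1/2a)^{1/2}$.

```python
# Minimal sketch: the non-monotonic d-wave gap of Eq. (1),
#   Delta(phi) = Delta_max * sqrt(2*a*e) * cos(2 phi) * exp(-a cos^2(2 phi)),
# evaluated from the antinode (phi = 0) to the node (phi = pi/4).
import numpy as np

def gap(phi, a, delta_max=1.0):
    c = np.cos(2.0 * phi)
    return delta_max * np.sqrt(2.0 * a * np.e) * c * np.exp(-a * c**2)

phi = np.linspace(0.0, np.pi / 4, 2001)
for a in (0.5, 2.0, 4.0):
    d = gap(phi, a)
    i = np.argmax(d)
    print(f"a={a}: max {d[i]:.3f} at phi={phi[i]:.4f} "
          f"(expected {0.5*np.arccos(1.0/np.sqrt(2.0*a)):.4f}), "
          f"antinodal gap {d[0]:.3f}")
# For a = 2 the maximum sits at phi = pi/6 ~ 0.5236, as quoted in the
# text, and the antinodal gap Delta(0) decreases as a grows.
```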
The goal of our work is to verify to what extent the optical conductivity $\sigma_1(\omega)$ and the Raman scattering intensity R(ω) in a d-wave superconductor with a gap given by Eq. (1) are consistent with the experimental data. For this, we computed $\sigma_1(\omega)$ and R(ω) in the $B_{1g}$ and $B_{2g}$ geometries assuming that the scattering is elastic. The latter does not necessarily have to come from impurities: scattering by collective excitations in the spin or charge channels is also dominated by processes with small frequency transfers. For simplicity, we assume that the normal-state damping rate is independent of frequency and focus only on the effects associated with the pairing.

We have found several features which distinguish the optical and Raman responses of a superconductor with a non-monotonic d-wave gap from those of a superconductor with a cos 2φ gap. The optical conductivity of a pure d-wave superconductor with elastic scattering has a weak maximum followed by a broad suppression region at frequencies of order $\Delta_{\max}$ (Ref. 10). For the non-monotonic gap, we have found a rather strong maximum in $\sigma_1(\omega)$ slightly below $2\Delta_{\max}$, followed by a sharp drop in the conductivity down to very low frequencies, where the conductivity begins to increase again towards a constant value at ω = 0+ (see Fig. 4). For Raman scattering, we have found that the peak in the $B_{2g}$ channel is located at a higher frequency than in the $B_{1g}$ channel, and also that the shapes of the two Raman profiles are very different: the $B_{2g}$ peak is near-symmetric, while the $B_{1g}$ peak is strongly asymmetric, with shoulder-like behavior above the peak frequency. We argue that these features are consistent with the experimental conductivity and Raman data. From this perspective, our findings give additional support to the idea that the $d_{x^2-y^2}$ gap in electron-doped cuprates is highly non-monotonic.

We present the formalism in Sec. II and the results in Sec. III. In the latter section, we also consider the comparison with the data in more detail. The last section presents our conclusions.

II. THE FORMALISM

We adopt the conventional strategy for analyzing optical and Raman responses in non-s-wave superconductors with impurity scattering (Refs. 13, 14, 15). We assume that the scattering originates from the s-wave component of the effective interaction (which includes the impurity potential) and gives rise to a k-independent fermionic self-energy Σ(ω). The pairing comes from a different, d-wave component of the interaction. As in earlier works (Refs. 13-15), we assume that the d-wave anomalous vertex is frequency independent and, to a reasonable accuracy, can be replaced by ∆(φ) from Eq. (1). The time-ordered normal and anomalous fermionic Green's functions in this approximation are given by

$$G(\mathbf{k},\omega) = \frac{\tilde\omega + \epsilon_{\mathbf{k}}}{\tilde\omega^2 - \epsilon_{\mathbf{k}}^2 - \Delta^2(\phi)}, \qquad F(\mathbf{k},\omega) = \frac{\Delta(\phi)}{\tilde\omega^2 - \epsilon_{\mathbf{k}}^2 - \Delta^2(\phi)}, \qquad (2,3)$$

where $\tilde\omega = \omega + \Sigma(\omega)$. The self-energy is itself expressed via the (local) Green's function,

$$\Sigma(\omega) = \gamma\,\frac{G_L(\omega)}{C^2 + G_L^2(\omega)}, \qquad (4)$$

where the local Green's function is

$$G_L(\omega) = \left\langle \frac{\tilde\omega}{\sqrt{\tilde\omega^2 - \Delta^2(\phi)}} \right\rangle_\phi \qquad (5)$$

($G_L = 1$ in the normal state), and the parameter C interpolates between $C \gg 1$ in the Born limit and $C \ll 1$ in the unitary limit.

The optical conductivity $\sigma_1(\omega)$ and the Raman intensity R(ω) are both given by combinations of bubbles made out of the normal (GG) and anomalous (FF) Green's functions. The optical conductivity is proportional to the current-current correlator, while the Raman intensity is proportional to the density-density correlator weighted with angle-dependent Raman vertex factors,

$$\gamma_{B_{1g}}(\phi) \propto \cos 2\phi, \qquad \gamma_{B_{2g}}(\phi) \propto \sin 2\phi. \qquad (6)$$

To a first approximation, $B_{1g}$ Raman scattering then gives information about electronic states in the antinodal regions, near φ = 0, while $B_{2g}$ scattering probes the nodal regions, near φ = π/4. The overall sign of the FF contribution is different for σ(ω) and R(ω): the running momenta in the side vertices of the FF term are k and −k, between which the current operator changes sign, while the density operator remains intact. For a constant density of states, which we assume to hold, the integration over $\epsilon_{\mathbf{k}}$ in the GG and FF bubbles can be performed exactly and yields closed expressions for the conductivity and the Raman intensities [Eq. (7)]. Here, $\sigma_1$ is the real part of the conductivity, $\omega_{pl}$ is the plasma frequency, the index i labels the various scattering geometries, $\tilde\omega = \omega + \Sigma(\omega)$, $\omega_\pm = \omega \pm \Omega/2$ (ω is the internal frequency, Ω the external one), and $R_0$ is the normalization factor for the Raman intensity. The conductivity of a superconductor also contains a δ(Ω) contribution (not shown) related to the superconducting order parameter.

III. RESULTS

A. Fermionic self-energy

We computed the fermionic self-energy by numerically solving the self-consistent equation (4) in the Born and unitary limits. The results for the imaginary part of the self-energy in the Born limit are presented in Fig. 2. For a d-wave superconductor with a monotonic gap, ImΣ is linear in frequency at small ω and has a cusp at $\omega = \Delta_{\max}$. For a non-monotonic gap, ImΣ is reduced at small frequencies and then rapidly increases to a value comparable to that for a monotonic gap. This behavior resembles, particularly for a = 2, the formal solution of Eq. (4) for an angle-independent gap $\Delta = \Delta_{\max}$. In the latter case, ImΣ = 0 up to the frequency $\omega = \Delta\,(1 - (\tilde\gamma/\Delta)^{2/3})^{3/2}$, where $\tilde\gamma = \gamma/C^2$, and rapidly increases above this frequency. For $\tilde\gamma = 0.05\Delta_{\max}$, as used in Fig. 2, the jump occurs at ω ≈ 0.8∆, much like in the plot for a = 2. We emphasize that the solution of Eq. (4) for a constant ∆ is not the result for an s-wave superconductor. For the latter, the fermionic self-energy and the pairing vertex are renormalized by the same interaction, and the self-consistent equation for Σ(ω) does not have the form of Eq. (4) with a frequency-independent ∆.
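A minimal numerical sketch of this self-consistency loop is given below. It assumes the T-matrix form of Eqs. (4)-(5) in the equivalent branch convention $g(\omega) = \langle \tilde\omega/\sqrt{\Delta^2(\phi) - \tilde\omega^2} \rangle$ (so that g = i in the normal state), with parameters chosen to mimic the Born-limit case $\tilde\gamma = \gamma/C^2 = 0.05\Delta_{\max}$ discussed above.

```python
# Minimal sketch: fixed-point iteration for the impurity self-energy,
#   Sigma = gamma * g / (C^2 - g^2),  g = < w / sqrt(Delta^2(phi) - w^2) >,
# with w = omega + Sigma (a small +i eta selects the retarded branch).
# This is Eqs. (4)-(5) up to the branch convention g = i * G_L.
import numpy as np

a, delta_max = 2.0, 1.0
C = 2.0                      # C >> 1 approaches the Born limit
gamma = 0.05 * C**2          # so gamma_tilde = gamma / C^2 = 0.05
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
delta = (delta_max * np.sqrt(2 * a * np.e)
         * np.cos(2 * phi) * np.exp(-a * np.cos(2 * phi) ** 2))

def sigma_of(omega, n_iter=400, mix=0.5, eta=1e-4):
    sigma = 1e-3j                                   # small initial damping
    for _ in range(n_iter):
        w = omega + 1j * eta + sigma                # tilde-omega
        g = np.mean(w / np.sqrt(delta**2 - w * w))  # Fermi-surface average
        sigma = (1 - mix) * sigma + mix * gamma * g / (C**2 - g * g)
    return sigma

for omega in (0.2, 0.5, 0.8, 1.2):
    print(f"omega/Delta_max = {omega:.1f}: Im Sigma = {sigma_of(omega).imag:.4f}")
# Im Sigma is suppressed at low frequency and grows rapidly on
# approaching Delta_max, the trend described for Fig. 2.
```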
In Fig. 3 we show ImΣ in the unitary limit, C = 0. We observe the same trend. For a monotonic d-wave gap, ImΣ is nearly monotonic and has only a slight minimum around $0.8\Delta_{\max}$. For a non-monotonic gap, particularly for a = 2, ImΣ has a more pronounced structure, with a sharp minimum around $0.8\Delta_{\max}$. This behavior again resembles that for a constant gap ∆. In the latter case, a formal solution of Eq. (4) for γ ≪ ∆ yields a vanishing ImΣ(ω) between $2\sqrt{\gamma\Delta}$ and $\sqrt{\Delta^2 + \gamma^2}$. At larger ω, ImΣ gradually approaches the normal-state value γ; at small frequencies it is also finite and approaches $\sqrt{\gamma\Delta}$ at zero frequency [for generic C, a non-zero ImΣ(ω = 0) (the unitary resonance) appears when γ exceeds $\Delta C^2\sqrt{1 + C^2}$]. The region of vanishing ImΣ shrinks to zero when γ exceeds the critical value $2\Delta/(3\sqrt{3}) \approx 0.4\Delta$. For the same $\gamma = 0.3\Delta_{\max}$ as used in Fig. 3, ImΣ for a constant gap sharply drops around $1.1\Delta_{\max}$ and rebounds both at larger and smaller frequencies, much like our actual solution for a = 2.

B. Optical conductivity

Substituting the results for the self-energy into Eq. (7), we obtain the optical conductivity. The results are plotted in Figs. 4(a) and 4(b) for the Born and unitary limits, respectively (Ref. 18). The behavior of the conductivity in the two limits is not identical, but the contrast between the monotonic and the non-monotonic gap is similar. In both cases, the conductivity for a non-monotonic gap passes through a well-pronounced maximum at some frequency below $2\Delta_{\max}$, drops sharply at smaller frequencies, and then increases again at very low frequencies; as ω → 0+ it approaches the universal limit (Ref. 13), in which the conductivity depends on the nodal velocity but does not depend on γ, as long as γ ≪ $\Delta_{\max}$. The universal behavior is, however, confined to very low frequencies, while over a wide frequency range below $2\Delta_{\max}$ the conductivity for a non-monotonic gap is strongly reduced compared to its normal-state value. The frequency at which the conductivity has a maximum depends on a and is closer to $2\Delta_{\max}$ for a = 2 than for a = 4.

The existence of the maximum in $\sigma_1(\omega)$ below $2\Delta_{\max}$ can also be understood analytically. Expanding the gap ∆(φ) near its maximum value $\Delta_{\max}$ and substituting the expansion into Eq. (7), we find, after some algebra, that the conductivity has a one-sided non-analyticity below $\omega = 2\Delta_{\max}$: it contains a negative term proportional to $(2\Delta_{\max} - \omega)^{3/2}$. This negative term competes with the regular part of $\sigma_1(\omega)$, which smoothly increases with decreasing ω, and gives rise to a maximum in $\sigma_1(\omega)$ below $2\Delta_{\max}$.

The behavior of the conductivity in a superconductor with a non-monotonic gap is consistent with the available data on $\sigma_1(\omega)$ in optimally doped PCCO (Ref. 8). The measured conductivity has a rather strong peak at 70 cm$^{-1}$ and decreases at smaller frequencies. The authors of Ref. 8 explained the existence of this maximum by the conjecture that the conductivity of a d-wave superconductor with a non-monotonic gap should largely resemble that of an s-wave superconductor. Our results are in full agreement with this conjecture. The authors of Ref. 8 also associated the peak frequency with $2\Delta_{\max}$. We found that the peak frequency is actually located below $2\Delta_{\max}$, with a difference that depends on the shape of the gap. For our a = 2, the peak frequency is at $1.8\Delta_{\max}$ in the Born limit and at $1.3\Delta_{\max}$ in the unitary limit. For a = 4, the deviations are larger.
Experimentally, $2\Delta_{\max}$ in optimally doped PCCO can be extracted from $B_{2g}$ Raman scattering (see below) and equals 77 cm$^{-1}$ (Ref. 19); i.e., the peak in $\sigma_1(\omega)$ is at $1.8\Delta_{\max}$. This agrees with our a = 2 case in the Born limit.

C. Raman intensity

The results for the Raman intensity are presented in Figs. 5 and 6. In a BCS superconductor with a monotonic gap, the Raman intensity has a sharp peak at $2\Delta_{\max}$ in the $B_{1g}$ scattering geometry and a broad maximum at around $1.6\Delta_{\max}$ for $B_{2g}$ scattering. This behavior holds in the presence of impurity scattering, in both the Born and unitary limits; see Figs. 5(a) and 6(a).

The $B_{1g}$ and $B_{2g}$ Raman intensities for a non-monotonic gap are presented in Figs. 5 and 6, panels (b) and (c), for the Born and unitary limits and for a = 2 and a = 4. In all cases, we find the opposite behavior: the $B_{2g}$ intensity has a sharp peak at $2\Delta_{\max}$, while the $B_{1g}$ intensity is very small at low frequencies, rapidly increases around $\Delta_{\max}$, passes through a maximum, then gradually decreases at higher frequencies and displays a weak kink-like feature at $2\Delta_{\max}$. The position of the $B_{1g}$ peak depends on a: in both the Born and unitary limits it is close to $1.6\Delta_{\max}$ for a = 2 and close to $\Delta_{\max}$ for a = 4.

The occurrence of the '2∆' peak in the $B_{2g}$ channel at a higher frequency than in the $B_{1g}$ channel was the main motivation of Ref. 7 for proposing a non-monotonic d-wave gap. The argument was that a gap with a maximum at intermediate 0 < φ < π/4 has more weight in the nodal region and less in the antinodal region, thus increasing the effective '2∆' for the $B_{2g}$ intensity and decreasing it for the $B_{1g}$ intensity. In optimally doped PCCO, the $B_{2g}$ peak occurs at 77 cm$^{-1}$, while the maximum in $B_{1g}$ scattering is around 60 cm$^{-1}$. In optimally doped NCCO, the $B_{2g}$ peak occurs at 67 cm$^{-1}$, while the maximum in $B_{1g}$ scattering is at 50 cm$^{-1}$ (Ref. 19). The ratios of the peak positions are 1.28 in PCCO and 1.34 in NCCO. This is consistent with our result for a = 2 (the same a that best fits the ARPES and conductivity data), for which this ratio is 1.25. Also, taking the experimental 67 cm$^{-1}$ for $2\Delta_{\max}$ in NCCO, we obtain $\Delta_{\max} = 4.2$ meV, in reasonable agreement with the 3.7 meV observed in tunneling (Ref. 20).

Figure 6. Raman intensities in $B_{1g}$ (dashed) and $B_{2g}$ (solid) scattering geometries for a monotonic gap ∆(φ) ∝ cos 2φ (a) and non-monotonic gaps with a = 2 (b) and a = 4 (c), in the unitary limit.

In addition, the data in Fig. 3 of Ref. 7 show that the $B_{2g}$ peak is nearly symmetric, while the $B_{1g}$ intensity is asymmetric around its maximum: it rapidly increases at frequencies around 40 cm$^{-1}$, passes through a maximum, and then gradually decreases at higher frequencies. This behavior of the $B_{1g}$ intensity is fully consistent with Figs. 5 and 6. Blumberg et al. (Ref. 7) also analyzed the Raman intensities at various incident photon frequencies and found a resonance enhancement of the $B_{2g}$ intensity, but no resonance enhancement of the $B_{1g}$ intensity. We did not attempt to analyze the resonance behavior of the Raman matrix element (this would require considering the internal composition of the Raman vertex (Ref. 22)). We note, however, that the shape of the $B_{2g}$ Raman intensity is virtually unchanged between the resonance and non-resonance cases; only the overall magnitude increases near the resonance, much as in resonant Raman scattering in insulating cuprates (Ref. 23). We therefore believe that our analysis of the Raman profile as a function of transferred frequency is valid in both the non-resonance and resonance regimes.
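The unit conversions and peak ratios quoted above are easy to check; the short sketch below uses the standard conversion 1 meV ≈ 8.066 cm⁻¹.

```python
# Quick check of the numbers quoted above: B2g/B1g peak-position
# ratios and the conversion of the NCCO B2g peak to Delta_max.
CM_PER_MEV = 8.066  # 1 meV ~ 8.066 cm^-1

print(f"PCCO ratio: {77 / 60:.2f}")   # 1.28
print(f"NCCO ratio: {67 / 50:.2f}")   # 1.34

two_delta_cm = 67.0                   # NCCO B2g peak, in cm^-1
delta_max_mev = two_delta_cm / CM_PER_MEV / 2.0
print(f"NCCO Delta_max ~ {delta_max_mev:.1f} meV")  # ~4.2 meV (cf. 3.7 meV tunneling)
```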
Finally, we note that our results for R(ω) are quite similar to those of Ref. 9, whose authors criticized the explanation of the Raman data in terms of a non-monotonic gap (Ref. 21). However, contrary to Ref. 9, we argue that the theoretical results for R(ω) obtained for a non-monotonic gap agree well with the data of Ref. 7. At the same time, we agree with Ref. 9 that one can hardly extract from the data on NCCO and PCCO the ω³ behavior of the $B_{1g}$ intensity (the Raman hallmark of $d_{x^2-y^2}$ pairing), as the low-frequency behavior of the $B_{1g}$ intensity is dominated by a sharp increase at frequencies of order $\Delta_{\max}$.

In the analysis above we neglected the final-state interaction (the renormalization of the Raman vertex). There are two reasons for this. For $B_{2g}$ scattering, the final-state interaction is given by the $B_{2g}$ component of the effective four-fermion interaction. This component is repulsive, at least if the effective four-fermion interaction comes from spin-fluctuation exchange. A repulsive final-state interaction does not give rise to excitonic resonances and generally does not substantially modify the Raman profile (Ref. 24). For $B_{1g}$ scattering, the final-state interaction is the same as the pairing interaction, i.e., it is attractive. In general, such an interaction affects the Raman profile (Ref. 25). However, the interaction which gives rise to a non-monotonic gap of the form (1) is largest at angles φ close to π/4. At these angles, the $B_{1g}$ matrix element $\gamma_{B_{1g}} \propto \cos 2\phi$ is reduced, and we do not expect that repeated insertions of $B_{1g}$ vertices will substantially modify the Raman profile.

IV. CONCLUSION

In this paper, we analyzed the behavior of the optical conductivity and the Raman intensity in $B_{1g}$ and $B_{2g}$ scattering geometries in the superconducting state of electron-doped cuprates. We found that the results are best fitted by a non-monotonic $d_{x^2-y^2}$ gap. Such a gap was originally suggested as a way to explain the Raman data (Ref. 7) and was later extracted from ARPES measurements of the leading-edge gap along the Fermi surface (Ref. 4). The non-monotonic gap has also been obtained theoretically in the analysis of quantum-critical pairing mediated by the exchange of overdamped spin fluctuations (Ref. 12). We found that the non-monotonic gap which agrees best with the ARPES data [Eq. (1) with a = 2] also best fits the data for the optical conductivity and Raman scattering. The agreement with the data is quite good, not only in the positions of the maxima in the optical conductivity and Raman response, but also in the shapes of $\sigma_1(\omega)$ and R(ω). We argue that this good agreement is a strong argument in favor of a non-monotonic $d_{x^2-y^2}$ gap in electron-doped cuprates.

We thank G. Blumberg and C. Homes for useful conversations. AVC acknowledges support from NSF-DMR 0604406 and from the Deutsche Forschungsgemeinschaft via a Mercator Guest Professorship, and is thankful to TU Braunschweig for hospitality during the completion of this work. IE is supported by the DAAD under Grant No. D/05/50420.