| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
252726060 | pes2o/s2orc | v3-fos-license | Gastrointestinal parasites in wild and exotic animals from a zoo in the State of Bahia, Brazil - first record
Parasitic infections can be a serious health problem for wild animals kept in captivity; however, coproparasitological assessments in Brazilian zoos are scarce and sporadic. Therefore, this study aimed to evaluate the occurrence of endoparasites in the feces of wild and exotic captive animals in the zoo of Matinha Municipal Park, Itapetinga, Bahia, Brazil, the only zoo in the interior of Bahia, through the Mini-FLOTAC® technique, providing subsidies for the diagnosis and therapeutic treatment of parasitized animals. From May to August 2022, 124 stool samples from 35 species of reptiles, birds and mammals were collected. Analyses were performed using the Mini-FLOTAC® technique in combination with Fill-FLOTAC®. The results show that 70.97% of the samples were positive for at least one gastrointestinal parasite. Birds (76.7%; 33/43) were the most parasitized animals. Twenty-seven taxa of gastrointestinal parasites were identified from their cysts, oocysts or eggs, comprising 8 protozoans and 19 helminths, with a predominance of coccidia, Oxyuridae and Angusticaecum sp. in reptiles; coccidia, Ascaridia spp., Heterakis spp. and Strongyloides spp. in birds; and coccidia, Ancylostomatidae, Strongylida and Strongyloides spp. in mammals. In summary, the results reveal the importance of periodically carrying out coproparasitological examinations in zoos, in order to support interventions by the technical team to promote the health and well-being of the animals. This work constitutes the first publication on the coproparasitological evaluation of animals from a zoo in the state of Bahia.
Introduction
A zoo is defined as a legal enterprise consisting of a collection of wild animals kept alive in captivity or in semi-freedom and exposed to public visitation, to meet scientific, conservationist, educational and sociocultural purposes (Brasil, 2015). Zoos play an important role in welcoming and conserving endangered species or individuals unable to survive in the wild (Silva et al., 2019). Orsini and Bondan (2006) state that long periods in captivity cause functional changes as a result of somatic (sounds, images and strange odors, among others), psychological, behavioral and mixed stressors (malnutrition, intoxication, action of infectious and parasitic agents, among others), which can leave animals weakened and lacking the physical and psychological skills necessary for survival. Parasitism can be defined as an obligate trophic association between individuals of two species in which one (the parasite) obtains its food from a living organism of another species (the host). This symbiotic relationship is very common in nature, playing an important role in ecosystems by regulating host populations, stabilizing food chains and structuring animal communities (Atkinson, 2008).
Parasitic infections can be a serious health problem for wild animals kept in captivity, and the morbidity and mortality of infections depend on the host species, the parasite and the parasite load, and on the nutritional status, immunocompetence and physiological condition of the host. Weaknesses in the proper management of each species pose a great risk to the health of the animals (Santos et al., 2015). Lima (2018) points out that environmental and ecological changes, combined with the proximity between humans and domestic and wild species, offer numerous opportunities for the emergence of interspecific interactions, which contribute to the spread of numerous parasitic zoonoses.
For Capasso et al. (2019) and Guo et al. (2021), animals raised in restricted environments, like zoos, are highly susceptible to gastrointestinal infection by helminths and protozoans, since zoos are environments with high parasite contamination. These authors demonstrated that the Mini-FLOTAC® technique in combination with Fill-FLOTAC® can be used not only for the rapid diagnosis of parasitic infections in zoos, but also for monitoring control programs quickly and reliably.
Thus, the present study aimed to evaluate the occurrence of endoparasites in the feces of wild and exotic captive animals in the zoo of Matinha Municipal Park, Itapetinga, Bahia, Brazil, the only zoo in the interior of Bahia, using the Mini-FLOTAC® technique, providing subsidies for the diagnosis and therapeutic treatment of parasitized animals. It is worth mentioning that this is the first such work carried out in a zoo in the state of Bahia.
Study area
The Matinha Municipal Park (Figure 1), created by municipal decree nº 860 of October 11, 1973 and law nº 528 of December 19, 1991, is located in the urban perimeter of the municipality of Itapetinga, southwest Bahia, covering 24 hectares, of which 10 hectares constitute a remnant area of the Atlantic Forest biome bordered by the Catolé Grande River up to the bridge next to the bus station (Kulka, 2014).
The park aims to preserve and conserve a representative sample of the Atlantic Forest, serving as a refuge for many species (Itapetinga, 2004). It houses a zoo whose collection includes species of birds, mammals and reptiles from the Center for the Triage of Wild Animals (CETAS), animals that are victims of trafficking and that often are no longer able to return to nature, in addition to exotic specimens from other zoos or breeding sites, thus constituting the only environmental protection area in the municipality and the only zoo in the interior of Bahia (Freitas et al., 2007).
Ethical aspects
The project was submitted to the Ethics Committee on the Use of Animals in Research of the Federal University of Bahia, Campus Anísio Teixeira of the Multidisciplinary Institute in Health (UFBA) (CEUA -IMS/CAT -UFBA) and approved (Opinion No. 104/2022).
Sampling
This work is descriptive quantitative research (Dalfovo et al., 2008; Pereira et al., 2018). In total, 124 stool samples were collected from captive animals at the zoo of Matinha Municipal Park, Itapetinga, Bahia, Brazil between May and August 2022. Fifteen species of birds, 14 of mammals and 6 of reptiles were sampled (Table 1). Table 1 shows the number of wild and exotic animals in captivity, with their respective orders and families, and the number of fecal samples collected.
The samples examined in this study were obtained by the keepers while cleaning the enclosures, prioritizing the collection of individual fresh fecal pellets. Each sample was defined as at least 2 grams of feces spontaneously eliminated by the animals, collected individually or in pools from the floor of the enclosure so as not to stress the animals. The definition of pool adopted in this work follows Fagiolini et al. (2010) and Capasso et al. (2019), and consists of 2 grams taken from each individual fecal deposit.
The fecal samples were placed in isothermal boxes (2 to 8ºC) and immediately transported to the Zoology Laboratory of the Multidisciplinary Institute in Health, Campus Anísio Teixeira of the Federal University of Bahia, in Vitória da Conquista, Bahia, for analysis.
The fecal samples were processed using the Mini-FLOTAC® technique, following all the steps and guidelines of the original description of the technique, using two flotation solutions: FS2 (sodium chloride, specific gravity SG = 1.200) and FS7 (zinc sulfate, SG = 1.350); each sample was analyzed twice (Cringoli et al., 2017).
The preparations were examined under a binocular optical microscope at 100X and 400X magnifications.
Photomicrographs and measurements of the parasitic structures were performed with the aid of a digital camera and micrometric eyepiece, respectively. Fecal samples were considered positive when at least one evolutionary form of a parasite (egg, cyst and/or oocyst) was detected (Barbosa et al., 2019).
Data analysis
Data were tabulated and analyzed using GraphPad Prism® version 5. The results were expressed as the arithmetic mean number of eggs/oocysts/cysts per gram (EPG/OPG/CPG) of feces, together with the minimum and maximum values (Capasso et al., 2019). Prevalence was estimated by dividing the number of positive samples by the total number of samples collected from each group of animals under study (Barbosa et al., 2019).
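As a hedged illustration of these two summary statistics (this is not the authors' analysis; the column names and values below are hypothetical), prevalence and mean intensity per animal group can be computed as follows:

```python
# Minimal sketch of the prevalence and intensity summaries described
# above, using pandas. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":    ["birds", "birds", "mammals", "reptiles"],
    "positive": [True, False, True, True],    # >= 1 egg/cyst/oocyst detected
    "epg":      [1060, 0, 140, 170],          # eggs (or cysts/oocysts) per gram
})

# Prevalence: positive samples / total samples collected per group
prevalence = df.groupby("group")["positive"].mean() * 100

# Arithmetic mean and minimum-maximum intensity per group
intensity = df.groupby("group")["epg"].agg(["mean", "min", "max"])

print(prevalence)
print(intensity)
```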
In total, 27 taxa of gastrointestinal parasites were identified from their cysts, oocysts or eggs, comprising 8 protozoans and 19 helminths (Table 2 and Figure 3). Among the protozoans, 12.5% were amoebae, 12.5% were ciliates (both identified at the genus level) and the vast majority, about 75%, were coccidia (5 taxa were identified to genus and the others were grouped as non-sporulating coccidia). As for the helminths, 10.5% were trematode flatworms, one of them identified at the genus level and the other only to the class Trematoda. All other helminths belonged to Nematoda, representing 89.5%. Of these, 11.8% were identified to family, 11.8% to order, 5.9% to superfamily, 64.7% to genus and 5.9% to species (Table 2).
Strongyloides spp. were recorded in Artiodactyla (Table 2). Quantitatively, the parasite intensity expressed in eggs, cysts and oocysts per gram of feces (EPG, CPG and OPG) detected in the feces of reptiles from the Matinha Municipal Park is presented in Table 3. For C. carbonaria, Oxyuridae eggs were the most abundant, ranging from 10-1060 EPG. For the snakes B. constrictor and M. reticulatus, oocysts of the coccidian Caryospora spp. were the most abundant parasites, ranging from 0-3500 and 0-1200 OPG, respectively. In the lizard S. merianae, eggs of Strongyloides spp. were the most abundant, at about 170 EPG (Table 3).
In the feces of P. cristatus there was an absolute predominance of protozoans, reaching the highest parasitic densities among all the birds studied; oocysts of Eimeria spp. were the most abundant, ranging from 0-44,380 OPG, followed by non-sporulated coccidia with densities varying from 0-1520 OPG. The bird species with the second highest densities of parasites per gram of feces was S. camelus, with Alaria sp. (0-2090 EPG) and non-sporulating coccidia (0-1200 OPG) being the most abundant parasites (Table 4).
Discussion
Most fecal samples from animals of the zoo at Matinha Municipal Park, Itapetinga, Bahia were positive for gastrointestinal parasites. This same pattern has been recorded in several studies with captive animals, whether in zoos or CETAS (Hofstatter & Guaraldo, 2015; Barbosa et al., 2019; Oliveira et al., 2020; Batista et al., 2021). Among the zoological groups evaluated, birds and mammals were more parasitized than reptiles, a pattern also detected by other authors (Batista et al., 2021; Mewius et al., 2021).
The most abundant parasites in fecal samples of C. carbonaria (Oxyuridae eggs), of the snakes B. constrictor and M. reticulatus (Caryospora spp. oocysts) and of the lizard S. merianae (Strongyloides spp. eggs) are commonly the most representative recorded in other works (Rataj et al., 2011; Souza et al., 2014; Rom et al., 2018). According to Ruivo (2019), oxyurids are very frequent in the lumen of the large intestine of herbivorous reptiles and are considered beneficial for the host, improving the passage of food content through the intestinal tract and contributing to the regulation of the cecal microbiota through the ingestion of bacteria by the parasites; however, they can cause intestinal obstructions (Troiano, 2018). Infections caused by Strongyloides spp. in reptiles can be asymptomatic or can trigger anorexia, weight loss, lethargy, enteritis, diarrhea, urethral obstructions and nephritis, which can lead to death (Ruivo, 2019). The genus Caryospora is found in the intestinal mucosa of snakes, lizards and turtles, and infection is usually asymptomatic (Schneller & Pantchev, 2008), but it can cause destruction of the intestinal, biliary and renal epithelium with fibrosis and ulcerations (Troiano, 2018).
In most birds, coccidia were very abundant, a pattern commonly recorded in other studies (Hofstatter & Guaraldo, 2015; Lima et al., 2017; Oliveira et al., 2020). Coccidiosis is rare in free-ranging birds and is usually related to captive breeding, crowding or stress. Infected birds usually do not show any clinical signs in low-intensity infections, as coccidia destroy a limited number of epithelial cells, which can be replaced quickly. However, at high parasite densities, many cells are destroyed, leading to reduced food and water consumption, decreased intestinal absorption, hemorrhage, lack of appetite, weight loss, falls, loss of coordination, ruffled feathers and decreased egg production (Atkinson et al., 2008).
The parasitic intensity of coccidia recorded in fecal samples of P. cristatus was the highest among all species of animals sampled in this work. Peacocks commonly have high densities of coccidia, as indicated by several studies in the literature (Rodrigues et al., 2020; Lozano et al., 2021; Yadav et al., 2021). Coccidia are spread by water and food contaminated with oocysts, affecting several species of birds and even mammals, including humans, and may constitute a zoonosis. Infections caused by coccidia can cause severe damage to birds, promoting diarrhea, dehydration, apathy, reduced reproductive rate, weight loss and death (Marietto-Gonçalves et al., 2009). These results point to the need for the periodic use of anti-coccidial agents and for intensified cleaning and disinfection of the enclosures, drinkers and feeders, in order to prevent the spread of this parasite among the zoo animals.
Samples of S. camelus exhibited high densities of Alaria sp. and non-sporulating coccidia. Alaria sp. are trematode parasites whose infections can range from asymptomatic to diarrhea and hematochezia (Batista et al., 2008). Their presence has already been recorded in several carnivores (canids, felids, mustelids and procyonids) (Ruas, 2005) and even in birds, including S. camelus (Batista et al., 2008). Even when present in large numbers, these parasites may not be accompanied by characteristic clinical signs in these birds (Batista et al., 2008).
The animals of the order Carnivora exhibited the highest densities of parasites among all the mammals studied, with Ancylostomatidae being the most abundant for P. concolor and P. leo. This parasite family has been recorded in other studies evaluating felid parasites (Srbek-Araujo et al., 2014; Gressler et al., 2016; Solórzano-García et al., 2017; Silva et al., 2021). The main adverse effects of hookworms on their hosts (humans, domestic animals and wild species) are anemia, growth retardation, secondary bacterial infections and mortality (Seguel & Gottdenker, 2017).
P. flavus showed the highest abundance of Eimeria spp. In the literature, the few studies on parasites of this mammal species generally report the presence of helminths (Taira et al., 2013; Tokiwa et al., 2014). Barbosa et al. (2019) found non-sporulating coccidian oocysts in the P. flavus sample from the Rio de Janeiro Zoo, making the record of Eimeria spp. in the present article the first record of this coccidian genus for the kinkajou (jupará). The parasite recorded in the H. hydrochaeris sample (Trichostrongyloidea) was also recorded in another study carried out with capybara populations in seven cities in the state of São Paulo (Souza et al., 2021). According to Souza et al. (2021), identification at the genus or species level based only on Trichostrongyloidea eggs is impossible; however, necropsy-based studies point to Viannella hydrochoeri and Hydrochoerisnema anomalobursata as the trichostrongyloids specific to capybaras, with V. hydrochoeri the most likely parasite affecting capybaras in natural and man-made areas.
In the sample of T. terrestris, there was a record of 140 EPG of Strongylida, the only parasite found for this mammal.
These results differ from other analyses carried out with this species: Batista et al. (2021) found only protozoans in tapir samples, with a predominance of trophozoites and cysts of Balantidium sp. and non-sporulated coccidian oocysts.
For Batista et al. (2021), the physical proximity of animals in zoos makes parasitic infections inevitable, and these can be aggravated by the immune status of the host, since confinement and stress weaken the animal and thus compromise its survival. Furthermore, some of these parasites can be zoonotic, impacting the health of zookeepers and workers (Iatta et al., 2020). Redoubled care with the hygiene and deworming of the animals is necessary in order to prevent the transmission of parasites within the studied zoo.
The increase in the breeding stock promoted by the acquisition of new specimens and species at the zoo may have contributed to the increase in the parasitic community, since some specimens obtained from other zoos, breeding sites and/or sorting centers were parasitized, and the lack of an effective deworming and quarantine protocol can lead to contamination of individuals residing in the zoo. Oliveira et al. (2022) carried out a physical-chemical and microbiological evaluation of the water used by the animals of the zoo in the Matinha Municipal Park; the water was suitable for watering animals, but nonconformities in the microbiological parameters (presence of Escherichia coli) were recorded for most of the animals' enclosures. These data indicate fecal contamination of the water in the zoo's enclosures. Therefore, we can infer that there is probably parasitic contamination of the drinking water, which may serve as a means of dissemination among the species that live in each enclosure.
Conclusion
In this study, 27 taxa of gastrointestinal parasites, comprising 8 protozoans and 19 helminths, were identified, with a predominance of coccidia, Oxyuridae and Angusticaecum sp. for reptiles; coccidia, Ascaridia spp., Heterakis spp. and Strongyloides spp. for birds; and coccidia, Ancylostomatidae, Strongylida and Strongyloides spp. for mammals. In summary, the results presented reveal the importance of periodically carrying out coproparasitological examinations in zoos, in order to support interventions by the technical team to promote the health and well-being of the animals.
It is worth mentioning that new specimens must undergo a quarantine period before being relocated to the enclosures, and the performance of these exams, once again, becomes essential for clinical diagnosis and for establishing the appropriate therapeutic conduct for each case, with administration of specific antiparasitic drugs for each type of parasite, whether protozoans or helminths. Therefore, it is evident that routine coproparasitological assessment of captive animals in zoos effectively contributes to diagnosis and to improvements in park management. This work constitutes the first publication on the coproparasitological evaluation of animals from a zoo in the state of Bahia. | 2022-10-06T15:01:48.783Z | 2022-09-26T00:00:00.000 | {
"year": 2022,
"sha1": "6c7c86241a1baf190dcc5cb88fe956df3dacc229",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/34959/29432",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d325daa42ba0e0cc4c5152d712af2f8faf9eada",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
115705979 | pes2o/s2orc | v3-fos-license | Correlation energy within exact-exchange ACFD theory: systematic development and simple approximations
We have calculated the correlation energy of the homogeneous electron gas (HEG) and the dissociation energy curves of molecules with covalent bonds from a novel implementation of the adiabatic connection fluctuation dissipation (ACFD) expression including the exact exchange (EXX) kernel. The EXX kernel is defined from first order perturbation theory and used in the Dyson equation of time-dependent density functional theory. Within this approximation (RPAx), the correlation energies of the HEG are significantly improved with respect to the RPA up to densities of the order of $r_s \approx 10$. However, beyond this value, the RPAx response function exhibits an unphysical divergence and the approximation breaks down. Total energies of molecules at equilibrium are also highly accurate but we find a similar instability at stretched geometries. Staying within an exact first order approximation to the response function we use an alternative resummation of the higher order terms. This slight redefinition of RPAx fixes the instability in total energy calculations without compromising the overall accuracy of the approach.
I. INTRODUCTION
Kohn-Sham (KS) methods that treat the exchange and correlation energy on the basis of the adiabatic connection fluctuation-dissipation (ACFD) theorem 1,2 have raised considerable interest in recent years 3-17, mainly because they provide a route to overcome the shortcomings of standard local-density-approximation/generalized-gradient-approximation density-functional theory (LDA/GGA DFT). In particular, i) an exact expression for the exchange-correlation (xc) energy in terms of the density-density response function can be derived from the ACFD theorem, providing a promising way to develop systematic improvements of the xc functional; ii) all ACFD methods treat the exchange energy exactly, thus canceling out the spurious self-interaction error present in the Hartree energy; moreover, iii) the correlation energy is fully non-local and automatically includes van der Waals interactions.
The ACFD method is computationally very demanding and most often it is limited to a post-self-consistent correction, where the xc energy is computed from the charge density obtained from a self-consistent calculation performed with a more traditional xc functional. The basic ingredients needed to compute the correlation energy within the ACFD formalism are the density-density response function of the non-interacting KS system and the density-density response function of a system where the electron-electron interaction is scaled by a coupling constant. While for the former an explicit expression exists, the latter is usually calculated from the Dyson equation of time-dependent density functional theory 18 containing the xc kernel, $f_{xc}$, which needs to be approximated.
The random phase approximation (RPA) is the simplest approximation: the xc kernel is simply neglected and only the frequency-independent Coulomb or Hartree kernel is taken into account. While correctly describing van der Waals interactions 19,20 and static correlation 4,14,21, as seen for instance when studying H$_2$ dissociation, RPA is known to overestimate correlation energies and thus to describe total energies poorly. 5,6 In this respect various approaches have been developed in order to correct the RPA 9,15,22. A systematic possibility to address the shortcomings of RPA is to include all terms up to a given power of the interaction strength in the kernel. To linear order this implies including not only the Coulomb kernel, defining RPA, but also an exchange contribution. The frequency-dependent exact-exchange kernel, $f_x$, has been derived by Görling from the time-dependent optimized effective potential (TDOEP) method 23 and by Hellgren and von Barth 6,7,26 from a variational formulation of many-body perturbation theory (MBPT). The corresponding approximation for the density-density response function, named RPAx, is obtained by solving the Dyson equation setting $f_{xc} = f_x$ and has been successfully used in the ACFD formula to compute correlation energies of atoms 7,13 and molecules. 14,27,28 Here we set the RPAx within the context of a general scheme which allows one to formally define a power expansion of the xc kernel by combining the general ACFD theory with a many-body approach, specifically the Görling-Levy perturbation theory 29 (GLPT), along the adiabatic-connection path. To first order this reduces to the RPAx, for which a novel and efficient implementation, based on an eigenvalue decomposition of the interacting time-dependent density response function in the limit of vanishing electron-electron interaction, is proposed.
The performance of the RPAx has in this work been tested on the homogeneous electron gas (HEG) at different values of $r_s$, as well as on the dissociation of diatomic molecules with covalent bonds such as H$_2$ and N$_2$. The results give further support to the accuracy of the RPAx but also reveal an instability, or pathological behavior, in the low-density regime of the HEG and in N$_2$, which leads to a breakdown of the approximation. This breakdown points to the need for including correlation or a screening of the exchange kernel. However, we show here that such a procedure is not always necessary, in particular if the aim is to calculate total energies. Instead we reduce the effect of the "bare" particle-hole interaction by omitting all higher-order particle-hole scatterings. This can be achieved by expanding the RPA response function in the irreducible polarizability, approximated to first order. In this way we are able to fix the instability and at the same time keep the overall accuracy of the RPAx.
II. THEORY

Within the ACFD framework the correlation energy can be written as

$E_c = -\int_0^1 d\lambda \int_0^{\infty} \frac{du}{2\pi}\, \mathrm{Tr}\{ v_c [\chi_\lambda(iu) - \chi_0(iu)] \}$ (5)

where $v_c = e^2/|\mathbf{r}-\mathbf{r}'|$ is the Coulomb kernel and $\chi_0(iu)$ is the density response function of the non-interacting KS system. For $\lambda > 0$ the interacting density response function $\chi_\lambda(iu)$ can be related to the non-interacting one via a Dyson equation obtained from time-dependent density functional theory (TDDFT):

$\chi_\lambda(iu) = \chi_0(iu) + \chi_0(iu)\,[\lambda v_c + f^{\lambda}_{xc}(iu)]\,\chi_\lambda(iu)$ (6)

where $f^{\lambda}_{xc}(iu)$ is the scaled frequency-dependent xc kernel. Dependence on the spatial coordinates is implicit in the matrix notation. Once the xc kernel is specified one can thus determine a corresponding correlation energy via Eq. (5).
In the following we will describe a general scheme which allows us to compute the xc kernel to a given order, thus establishing a link between the TDDFT expression for the response function in Eq. (6) and the power expansion of $\chi_\lambda$ in the interaction strength, which can be obtained by resorting to the well-established GLPT 29 along the adiabatic-connection path.
Considering the power expansion for the xc kernel, $f^{\lambda}_{xc} = \lambda f_x + \lambda^2 f^{(2)}_c + \dots$, and explicitly expanding the Dyson equation (6) in powers of the interaction strength, it can be seen that the first-order kernel, $v_c + f_x$, is intimately related to the first-order variation of $\chi_\lambda$ with respect to $\lambda$; similarly, higher-order correlation contributions to the kernel are related to the corresponding powers in the $\chi_\lambda$ expansion. Therefore, i) we can define an arbitrarily accurate approximation to the density-density response function by considering the expansion of the kernel up to a desired order in $\lambda$,

$\chi^{(n)}_\lambda = \chi_0 + \chi_0 \Big[ \lambda (v_c + f_x) + \sum_{m=2}^{n} \lambda^m f^{(m)}_c \Big] \chi^{(n)}_\lambda$,

where ii) the kernel up to order $\lambda^n$ can be exactly determined by comparison with the $\lambda^n$ expansion of $\chi_\lambda$ from GLPT, and iii) the solution of the Dyson equation for $\chi^{(n)}_\lambda$ leads to a density-density response function which is exact to order $\lambda^n$ but also contains higher-order terms.
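A scalar caricature (ignoring operator ordering and spatial structure; the symbols below are purely illustrative) makes the order-by-order matching transparent: expanding the closed-form Dyson solution in $\lambda$ shows that the $O(\lambda)$ term of $\chi_\lambda$ is controlled entirely by the first-order kernel.

```python
# Scalar caricature of the Dyson equation chi = chi0 + chi0*(lam*k)*chi.
# Its closed form chi0/(1 - lam*k*chi0), expanded in powers of lambda,
# has O(lambda) term chi0*k*chi0, so matching it against the GLPT
# expansion fixes the first-order kernel k = v_c + f_x.
import sympy as sp

lam, chi0, k = sp.symbols("lambda chi0 k")
chi = chi0 / (1 - lam * k * chi0)
print(sp.series(chi, lam, 0, 3))
# chi0 + chi0**2*k*lambda + chi0**3*k**2*lambda**2 + O(lambda**3)
```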
In order to solve the many-body Hamiltonian in Eq. (2) so as to obtain the xc kernel to a given order in $\lambda$, the xc potential, and hence the xc energy, must be known up to the same level. This apparent circular dependence does not actually hinder the application of the procedure since, thanks to the coupling-constant integration involved in Eq. (5), the knowledge of the xc energy, and therefore of its functional derivatives, up to order $\lambda^n$ only depends on the xc kernel up to order $\lambda^{n-1}$. Our strategy can thus be applied in a sequential way: to 0th order, i.e. replacing $\chi_\lambda$ with its non-interacting counterpart $\chi_0$, the exact-exchange KS energy is obtained; moving to the next step, the exact-exchange kernel can be derived from first-order GLPT and we recover the so-called RPAx approximation for the response function, i.e. $\chi^{(1)}_\lambda$, and for the correlation energy, i.e. $E^{(r2)}_c$. Notice that the RPAx correlation energy $E^{(r2)}_c$ is exact to order $\lambda^2$ but also contains, although in an approximate way, all higher-order terms, and should not be confused with the 2nd perturbative correction to the correlation energy in the Görling-Levy perturbation theory 29.
The mathematical complexity of this sequential procedure increases very rapidly and makes its application extremely hard already at second order; nevertheless, the prescription is in principle given. The functional derivative of $E^{(r2)}_c$ with respect to the density defines the exact $\lambda^2$ correction to the Hamiltonian in Eq. (2) and allows one to apply GLPT to second order, and hence to access the corresponding second-order contribution to the xc kernel. Solving the Dyson equation with the improved kernel defines a new approximation for the response function, $\chi^{(2)}_\lambda$, which is exact up to second order. Plugging $\chi^{(2)}_\lambda$ into the ACFD formula (5) leads to a new approximation for the correlation energy, $E^{(3r)}_c$, which is exact to order $\lambda^3$ but also contains, although in an approximate way, all higher-order terms.
Essentially, this scheme can be regarded as a revised version of the standard GLPT 24,29,30 with the additional step provided by the solution of the Dyson equation for the response function and the calculation of a non-perturbative correlation energy from the ACFD formula in Eq. (5) (all orders in the coupling constant appear in $E^{(r2)}_c$ and in the following approximations to $E_c$). In this way we expect this approach to be applicable also to small-gap or metallic systems where finite-order many-body perturbation theories break down 31,32.
Having introduced the general framework, we apply our strategy to first order in the coupling strength; hence we focus on the frequency-dependent exact-exchange kernel $f_x$ and on the calculation of the contribution $E^{(r2)}_c$ to the correlation energy (previously denoted as RPAx 7,13 or EXXRPA 14,28), for which we propose a novel and efficient implementation.
III. EFFICIENT CALCULATION OF RPAX CORRELATION ENERGY
Our implementation for computing the RPAx correlation energy is based on an eigenvalue decomposition of the time-dependent response function $\chi_\lambda$ in the limit of vanishing coupling constant. The scheme described below is a generalization of the implementation proposed by Nguyen and de Gironcoli 8 for computing RPA correlation energies.
A. RPAx Correlation Energy
Let us start by defining the following generalized eigenvalue problem:

$h_{vx}(iu)\,|\omega_\alpha\rangle = a_\alpha\, \chi_0(iu)\,|\omega_\alpha\rangle$ (10)

where the eigenpairs $\{|\omega_\alpha\rangle, a_\alpha\}$ and all the operators implicitly depend on the imaginary frequency $iu$. Once the solution of the generalized eigenvalue problem (10) is available, the trace in Eq. (5) can be expressed in terms of the eigenvalues $a_\alpha$ [Eq. (11)], and the integration over the coupling constant can be calculated analytically, leading to the final expression for the RPAx correlation energy [Eq. (12)]. Notice that Eqs. (10) and (12) demonstrate that knowledge of $h_{vx} = \chi_0 (v_c + f_x) \chi_0$, and in particular of $\chi_0 f_x \chi_0$, is sufficient for computing the RPAx correlation energy; the exact-exchange kernel alone is not needed.
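As a toy illustration of this machinery (random model matrices, not a physical system, and not the actual implementation described below), consider the RPA special case $f_x = 0$, for which the analytic coupling-constant integration is known to reduce to the eigenvalue expression $E_c = \frac{1}{2\pi}\int_0^{\infty} du \sum_\alpha [a_\alpha + \ln(1 - a_\alpha)]$ of the RPA implementation of Ref. 8:

```python
# Toy sketch of the eigenvalue machinery on a small matrix model.
# In the RPA limit (f_x = 0, so h_vx = chi0 @ vc @ chi0) the analytic
# coupling-constant integration gives, per imaginary frequency, a
# contribution sum_a [a + ln(1 - a)] / (2*pi) to the correlation energy.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6

# Model operators at one imaginary frequency: chi0 negative definite,
# vc (Coulomb kernel) positive definite, both symmetric.
M = rng.standard_normal((n, n))
chi0 = -(M @ M.T + n * np.eye(n))
vc = np.diag(np.linspace(2.0, 0.5, n))

h_vx = chi0 @ vc @ chi0                    # f_x = 0 (RPA limit)

# Generalized eigenproblem h_vx |w> = a chi0 |w>. eigh needs a
# positive-definite metric, so flip the sign of both matrices.
a, w = eigh(-h_vx, -chi0)

# RPA eigenvalues of vc*chi0 are non-positive, so 1 - a > 0 and the
# frequency integrand a + ln(1 - a) is well defined (and <= 0).
e_c_contrib = np.sum(a + np.log1p(-a)) / (2 * np.pi)
print(a, e_c_contrib)
```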
B. Exact-Exchange kernel
The exact expression for $h_x = \chi_0 f_x \chi_0$ in terms of the KS eigenvalues and eigenfunctions has been derived by Görling, starting from the time-dependent optimized effective potential method equations 23, and by Hellgren and von Barth, starting from the variational formulation of many-body perturbation theory 6,7. Here we propose an alternative derivation, staying within the general scheme described in the previous section.
In Section II it has been shown that $h_{vx} = \chi_0 (v_c + f_x) \chi_0$ is the first-order correction to the non-interacting response function $\chi_0$ due to the switching on of the perturbation $\delta V = \hat{W} - v_H - v_x$. Moreover, in the previous subsection it has been shown that the eigenvalues and eigenvectors of $h_{vx}$ are sufficient for computing RPAx correlation energies. In what follows we derive the exact expression for the matrix elements of $h_{vx}$ in terms of the KS eigenvalues and eigenfunctions and their first-order corrections only, and show how they can be efficiently computed by resorting to the linear-response techniques of density functional perturbation theory 33.
Let us start by considering the matrix element of $\chi_0$ between two arbitrary time-dependent perturbing potentials $\Delta V^{\alpha}$ and $\Delta V^{\beta}$, of the form $\Delta V = \Delta V(\mathbf{r})\,e^{ut}$ at imaginary frequency $\omega = iu$ [Eq. (13)]. For a non-degenerate ground state, the linear-response density $\Delta n$ at imaginary frequency $\omega = iu$ can be written in terms of $|\Delta\Phi^{\pm}_0\rangle$, the first-order corrections to the KS wavefunction $|\Phi_0\rangle$ due to the perturbation $\Delta V$, which satisfy the linearized time-dependent KS equations [Eq. (15)]. Eq. (13) then becomes an explicit expression for $\chi^{\alpha\beta}_0$, and if the (static) perturbation $\delta V$ is turned on, the first-order correction to $\chi_0$ in the coupling constant $\lambda$, i.e. $h_{vx}$, can be computed [Eq. (17)], where $|\delta\Delta\Phi_0\rangle$ is obtained by taking the linear variation of Eq. (15), while the static correction vector $|\delta\Phi_0\rangle$ satisfies the linearized time-independent Schrödinger equation [Eq. (18)]. With a simple manipulation it is easy to show that $\delta\chi_0$ depends only on the ground-state wavefunction and its first-order corrections (and not on the second-order correction $|\delta\Delta\Phi_0\rangle$): taking the Hermitian conjugate of Eq. (15) and multiplying it on the right by $|\delta\Delta\Phi_0\rangle$, multiplying Eq. (17) on the left by $\langle\Delta\Phi_0|$, and subtracting the two identities so obtained, an expression is obtained in which the second-order corrections cancel out.
The final expression for the matrix elements $h^{\alpha\beta}_{vx}$ is given in Eq. (19). Eq. (19), together with Eq. (15) and Eq. (18), defines the matrix elements $h^{\alpha\beta}_{vx}$ as functions of the KS many-body ground-state wavefunction $|\Phi_0\rangle$ and its first-order corrections $|\Delta\Phi^{\pm}_0\rangle$ and $|\delta\Phi_0\rangle$. Introducing their definitions in terms of the single-particle KS orbitals, $\phi_a$, and their first-order variations, $\Delta\phi^{(\pm)}_a$ and $\delta\phi_a$, Eq. (19) becomes the explicit orbital expression of Eq. (20), where the sums run over the occupied single-particle KS states only, and $|\Delta\phi^{(\pm)}_a\rangle$ and $|\delta\phi_a\rangle$ are the (conduction-band projected) variations of the occupied single-particle states. These can be efficiently computed by resorting to the linear-response techniques of density functional perturbation theory 33, solving the linear systems of Eq. (21), in which $V_x$ is the non-local exchange operator, identical to the Hartree-Fock one but constructed from KS orbitals, $P_v = \sum_a^{occ} |\phi_a\rangle\langle\phi_a|$ is the projector on the occupied manifold, and $\gamma$ is a positive constant larger than the valence bandwidth, ensuring that the linear system is not singular even in the limit $iu \to 0$.
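A dense-matrix toy sketch of such $\gamma$-shifted linear solves is given below (all model quantities are hypothetical, and real plane-wave codes solve these systems iteratively rather than by direct factorization):

```python
# Toy dense-matrix sketch of a gamma-shifted linear-response solve of
# the kind described above (illustrative, not the actual routines).
import numpy as np

rng = np.random.default_rng(1)
n, n_occ = 8, 3

A = rng.standard_normal((n, n))
H = 0.5 * (A + A.T)                       # model KS Hamiltonian
eps, phi = np.linalg.eigh(H)

P_v = phi[:, :n_occ] @ phi[:, :n_occ].T   # projector on occupied manifold
P_c = np.eye(n) - P_v                     # conduction-band projector
gamma = 2.0 * (eps[n_occ - 1] - eps[0])   # > valence bandwidth

dV = np.diag(rng.standard_normal(n))      # model perturbing potential
u = 0.3                                   # imaginary frequency iu

dphi = []
for a in range(n_occ):
    rhs = -P_c @ dV @ phi[:, a]           # conduction-projected source
    for s in (+1, -1):                    # the two frequency signs
        lhs = H + gamma * P_v - (eps[a] + 1j * s * u) * np.eye(n)
        dphi.append(np.linalg.solve(lhs, rhs))
# The gamma*P_v shift keeps lhs non-singular even as u -> 0, since the
# right-hand side has no component on the occupied manifold.
```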
In this way the expression derived in Refs. 14 and 28 and in Ref. 26 is recovered.
The scheme described above has been implemented in the Quantum ESPRESSO distribution 34. The basic operations involved in the calculation of the matrix elements $h^{\alpha\beta}_{vx}$ are the same as those required for the calculation of the RPA energy and potential in the implementations proposed by Nguyen and de Gironcoli 8 and by Nguyen et al. 35, respectively, meaning that our RPAx calculation has a computational cost comparable to these RPA implementations and maintains their favorable scaling.
IV. HOMOGENEOUS ELECTRON GAS
As a test of the accuracy of the new approximation we choose the simple homogeneous electron gas, an idealized system of electrons moving in a uniform neutralizing background. At zero temperature it is characterized by two parameters only: the number density $n = 1/(\frac{4}{3}\pi r_s^3 a_B^3)$, or equivalently the Wigner-Seitz radius $r_s$, and the spin polarization $\zeta = |n_\uparrow - n_\downarrow|/(n_\uparrow + n_\downarrow)$, where $n_{\uparrow(\downarrow)}$ is the density of spin-up (spin-down) electrons and $n = n_\uparrow + n_\downarrow$. Despite its simplicity, (i) the HEG model represents the first approximation to metals, where the valence electrons are weakly bound to the ionic cores; (ii) the system displays a complex phase diagram, including the transition to the Wigner crystal; and in addition (iii) it provides the basic ingredient of any practical density functional calculation: the most widely used approximations for the unknown xc-energy functional are based on properties of the HEG.
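For reference, the short sketch below (assuming Hartree atomic units with $a_B = 1$; the function and variable names are our own) converts $r_s$ and $\zeta$ into the densities and Fermi wave-vector used throughout this section:

```python
# Basic HEG bookkeeping in Hartree atomic units (a_B = 1).
import numpy as np

def heg_params(rs, zeta=0.0):
    n = 3.0 / (4.0 * np.pi * rs**3)            # total density from r_s
    n_up = 0.5 * (1.0 + zeta) * n              # spin-up density
    n_dn = 0.5 * (1.0 - zeta) * n              # spin-down density
    kF = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)   # Fermi wave-vector (zeta = 0)
    return n, n_up, n_dn, kF

print(heg_params(4.0))
```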
A. Unpolarized HEG
We begin by studying the unpolarized HEG. While the solution of the Dyson equation is demanding in general, it becomes trivial in the case of the HEG: the response functions and the kernels are all diagonal in momentum space, and the RPAx Dyson equation can be easily solved as

$\chi_\lambda(q, iu) = \dfrac{\chi_0(q, iu)}{1 - \lambda\,[v_c(q) + f_x(q, iu)]\,\chi_0(q, iu)}$ (22)

where $v_c(q) = 4\pi e^2/q^2$ and $f_x(q, iu)$ is the exchange kernel at a given momentum and frequency. The correlation energy per electron $\epsilon_c$ follows from Eq. (5), where the trace has been replaced by an integral over momentum $q$ and the integration over $\lambda$ has been done analytically:

$\epsilon_c = \dfrac{1}{n}\int \dfrac{d^3q}{(2\pi)^3} \int_0^{\infty} \dfrac{du}{2\pi}\; v_c(q)\,\chi_0(q, iu)\left[1 + \dfrac{\ln(1 - K(q, iu))}{K(q, iu)}\right]$ (23)

Here $K(q, iu)$ has been defined as

$K(q, iu) = [v_c(q) + f_x(q, iu)]\,\chi_0(q, iu)$ (24)

While the Lindhard function $\chi_0(q, iu)$ at imaginary frequency $iu$ is known exactly 32, the function $h_x(q, iu)$ can be directly derived from the general expression given in Eq. (20) and is given by a six-fold integral over crystal momenta. Its static value was first computed numerically by several authors 36-38 and later analytically by Engel and Vosko 39. The frequency dependence of $h_x$ has been calculated by Brosens, Lemmens and Devreese 40,41 for real frequencies and by Richardson and Ashcroft 42 for imaginary frequencies. Following Brosens et al., four integrations can be done analytically using cylindrical coordinates; we used numerical quadrature for the two remaining integrations. Our numerical integration is able to recover the analytic results of Engel and Vosko 39 in the limit $u \to 0$. Finally, the integration over momentum $q$ and imaginary frequency $u$ in Eq. (23) has been computed numerically. The results are listed in Table I and shown in Fig. 1. RPA results can be easily obtained from Eq. (23) and Eq. (24) by setting $h_x = 0$, and can be seen to seriously overestimate the correlation energy at all densities. Including the exact-exchange kernel greatly improves over simple RPA, and the RPAx correlation energy per particle is close to the accurate quantum Monte Carlo (QMC) results 43. As expected, RPAx works well for small values of $r_s$ and becomes less accurate as $r_s$ increases. According to our calculation, within RPAx there is a charge-density instability with wave-vector $q \approx 2k_F$ for $r_s > 10.6$. In Fig. 2 the critical behavior of the static density-density RPAx response function is shown for the fully interacting system (Eq. (22), $\lambda = 1$). When the density decreases, a pronounced peak appears at $q \approx 2k_F$, indicating an instability with respect to charge modulations with this wave-vector. As can be seen from the inset in Fig. 2, for sufficiently large values of $r_s$, $K = (v_c + f_x)\chi_0$ approaches unity and the denominator in Eq. (22) tends to vanish, leading to the appearance of the peak. Beyond $r_s = 10.6$, $K$ exceeds unity and the RPAx approximation breaks down, as the density-density response function $\chi_\lambda$ is no longer negative definite.
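To make the structure of Eqs. (22)-(24) concrete, the following self-contained sketch evaluates Eq. (23) numerically in the RPA limit ($f_x = 0$). It is not the code behind the results of this work: the quadratures are deliberately coarse, and the full RPAx would additionally require $f_x(q, iu)$ from the six-fold integral mentioned above. Hartree atomic units are assumed.

```python
# Numerical sketch of Eqs. (22)-(24) in the RPA limit (f_x = 0).
import numpy as np

def chi0_lindhard(q, u, kF, nk=80, nmu=80):
    # chi0(q, iu) = -(1/pi^2) int_0^kF k^2 dk int_-1^1 dmu D/(u^2 + D^2),
    # with D = q^2/2 + k*q*mu, by quadrature over the Fermi sphere.
    xk, wk = np.polynomial.legendre.leggauss(nk)
    xm, wm = np.polynomial.legendre.leggauss(nmu)
    k = 0.5 * kF * (xk + 1.0)              # map [-1, 1] -> [0, kF]
    wk = 0.5 * kF * wk
    K, MU = np.meshgrid(k, xm, indexing="ij")
    WK, WM = np.meshgrid(wk, wm, indexing="ij")
    D = 0.5 * q * q + K * q * MU
    return -np.sum(WK * WM * K**2 * D / (u * u + D * D)) / np.pi**2

def eps_c_rpa(rs, nq=24, nu=24):
    kF = (9.0 * np.pi / 4.0) ** (1.0 / 3.0) / rs
    n = 3.0 / (4.0 * np.pi * rs**3)
    xq, wq = np.polynomial.legendre.leggauss(nq)
    xu, wu = np.polynomial.legendre.leggauss(nu)
    def infmap(x, w, s):                   # map [-1, 1] -> [0, inf)
        return s * (1 + x) / (1 - x), w * 2 * s / (1 - x) ** 2
    q, wq = infmap(xq, wq, 2.0 * kF)
    u, wu = infmap(xu, wu, kF * kF)
    ec = 0.0
    for qi, wqi in zip(q, wq):
        vc = 4.0 * np.pi / qi**2
        for ui, wui in zip(u, wu):
            c0 = chi0_lindhard(qi, ui, kF)
            K = vc * c0                    # Eq. (24) with f_x = 0
            ec += wqi * wui * qi**2 / (4 * np.pi**3) * vc * c0 * (1 + np.log1p(-K) / K)
    return ec / n                          # Hartree per electron

print(eps_c_rpa(4.0))  # RPA overestimates |eps_c| relative to QMC (cf. Table I)
```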
This instability resembles the charge-density-wave instability already observed at the Hartree-Fock level by Overhauser 32,44 and is an artifact of the truncation of the kernel expansion to first order in the interaction strength. A full treatment of correlation, as in the QMC calculations, moves the density instability (toward the Wigner crystal) to much smaller densities, corresponding to $r_s \approx 80$. 43
B. Alternative RPAx resummations
In Sect. II we have established a strategy for a systematic improvement of the xc kernel. However, because of the complexity of the procedure, rather than proceeding along this way we propose here two simple modifications of the original RPAx approximation which are able to fix the instability problem and, at the same time, give correlation energies at the same level of accuracy as RPAx. Introducing the irreducible polarizability $P_\lambda$, it is possible to write the interacting response function $\chi_\lambda$ as 32

$\chi_\lambda = P_\lambda + P_\lambda\, \lambda v_c\, \chi_\lambda$

where $P_\lambda = \chi_0 + \chi_0[\lambda f_x + f_c(\lambda)]P_\lambda$. Neglecting $f_c(\lambda)$ and summing up to infinite order leads again to the RPAx approximation defined above. If we instead replace $P_\lambda$ with only its first-order expansion, we can define a new approximation, here named tRPAx, which contains only a subset of the original RPAx expansion:

$\chi^{tRPAx}_\lambda = P^{(1)}_\lambda + P^{(1)}_\lambda\, \lambda v_c\, \chi^{tRPAx}_\lambda$

with $P^{(1)}_\lambda = \chi_0 + \lambda\,\chi_0 f_x \chi_0$. In this way we are only including terms which contain first-order particle-hole interactions.
A similar idea has been proposed in Ref. 45, where the authors suggest expanding the TDDFT response function $\chi_\lambda$ in a power series of the RPA response function (instead of the non-interacting one), and then keeping only the first order. This amounts to an alternative resummation, here named t'RPAx, for the interacting response function:

$\chi^{t'RPAx}_\lambda = \chi^{RPA}_\lambda + \chi^{RPA}_\lambda\, \lambda f_x\, \chi^{RPA}_\lambda$

We notice that tRPAx and t'RPAx both require only $h_x$ to be defined. Both approximations thus neglect all higher-order particle-hole scatterings, which in the original RPAx are simulated by the kernel.
Up to first order, the alternative RPAx response functions coincide with the original one, while they have different power expansions starting from the $\lambda^2$ term, meaning that only contributions already approximated at the RPAx level are affected by these different resummations. Fig. 3 shows the correlation energies per particle obtained starting from the alternative RPAx approximations of the response function. As expected, for high-density electron gases (small values of $r_s$) the correlation energies are essentially identical to the original ones, since the underlying response functions are the same in the limit $\lambda \to 0$. At the same time they are well behaved also where the original RPAx approximation breaks down.
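A scalar (momentum-diagonal) toy comparison makes the difference in stability easy to see. The numbers below are hypothetical, chosen to mimic the regime near $q \approx 2k_F$ at low density where $v_c + f_x$ becomes negative and $K$ exceeds unity: the original RPAx denominator changes sign, while both alternative resummations stay negative (physical).

```python
# Hypothetical single-(q, iu) values mimicking the unstable regime:
chi0, vc, fx, lam = -0.9, 1.0, -2.2, 1.0

K = lam * (vc + fx) * chi0                 # Eq. (24); here K = 1.08 > 1
chi_rpax = chi0 / (1 - K)                  # original RPAx: +11.25, unphysical sign

P1 = chi0 + lam * chi0 * fx * chi0         # first-order irreducible polarizability
chi_trpax = P1 / (1 - lam * vc * P1)       # tRPAx: about -0.73, well behaved

chi_rpa = chi0 / (1 - lam * vc * chi0)     # RPA building block
chi_tprpax = chi_rpa + lam * chi_rpa * fx * chi_rpa   # t'RPAx: about -0.97

print(chi_rpax, chi_trpax, chi_tprpax)
```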
In Fig. 4 we compare the corresponding static density response functions (calculated at full interaction strength, $\lambda = 1$) with the exact one obtained from QMC calculations 46, for a density corresponding to $r_s = 5$. The difference between the RPA and QMC results reveals that exchange and correlation effects in the kernel are important already at this density; including the exact-exchange kernel (original RPAx) overcorrects the RPA deficiency, in particular between $k_F$ and $2k_F$, while both alternative RPAx approximations give much better agreement with the accurate QMC calculations. Thus, despite the fact that the RPAx energy is better at this value of $r_s$, the static response function is worse, suggesting that the RPAx results benefit from a cancellation of errors when integrated over frequency. In the range of densities analyzed, the tRPAx and t'RPAx response functions do not show any critical behavior; moreover, when the density decreases, a trend opposite to the one found for the RPAx response function is observed, with a reduction (instead of the enhancement shown in Fig. 2) of the height of the peak near $2k_F$, suggesting that no divergence would appear even for smaller densities.
C. Spin-polarized HEG
We continue our analysis of the HEG at the RPAx level by studying the spin-magnetization dependence of the correlation energy of the system. We start by noticing that for the non-interacting system the spin-up and spin-down components of the gas are independent, so that a simple scaling relation between the non-interacting density-density response functions of the polarized and unpolarized gas can be derived:

$\chi^{\sigma\sigma}_0[n_\uparrow, n_\downarrow] = \tfrac{1}{2}\,\chi_0[2n_\sigma]$ (28)

while $\chi^{\uparrow\downarrow}_0 = \chi^{\downarrow\uparrow}_0 = 0$. The spin-up and spin-down components behave as independent constituents of the system at the exchange level too, and a scaling relation similar to Eq. (28) holds true also for the exchange energy 47 and, accordingly, for the exchange potential and kernel:

$f^{\sigma\sigma}_x[n_\uparrow, n_\downarrow] = 2\, f_x[2n_\sigma]$ (29)

while $f^{\uparrow\downarrow}_x = f^{\downarrow\uparrow}_x = 0$. Thus, at the RPAx level the interaction between the spin-up and spin-down components of the system is mediated only by the Coulomb kernel $v_c$. Although more involved than for the unpolarized case, the solution of the RPAx Dyson equation for the polarized gas is nevertheless straightforward and, using the definitions in Eq. (28) and Eq. (29), the RPAx response function of the polarized HEG can be written as

$\chi_\lambda = \dfrac{\tilde{\chi}_\uparrow + \tilde{\chi}_\downarrow}{1 - \lambda v_c\,(\tilde{\chi}_\uparrow + \tilde{\chi}_\downarrow)}, \qquad \tilde{\chi}_\sigma = \dfrac{\tfrac{1}{2}\chi_0(2n_\sigma)}{1 - \lambda f_x(2n_\sigma)\,\chi_0(2n_\sigma)}$ (30)

where $\chi_0$ and $f_x$ are the same functions already used for the unpolarized case but evaluated at density $2n_\uparrow$ or $2n_\downarrow$.
Integrating Eq. (5) with the new definition of $\chi_\lambda$ in Eq. (30) gives the correlation energy per particle, $\epsilon_c$, as a function of $n_\uparrow$ and $n_\downarrow$ or, equivalently, as a function of $r_s$ and $\zeta$. At the RPA level the dependence of the correlation energy on the spin magnetization was calculated long ago by von Barth and Hedin 51 and more recently by Vosko, Wilk, and Nusair 52. Our RPA results, simply obtained by setting $f_x = 0$ in Eq. (30), are, within the numerical accuracy, in perfect agreement with both of the above-mentioned calculations. Fig. 5 shows the spin-polarization function $\gamma$, defined as the fractional change of the correlation energy with polarization, $\gamma(r_s, \zeta) = [\epsilon_c(r_s, \zeta) - \epsilon_c(r_s, 0)]/[\epsilon_c(r_s, 1) - \epsilon_c(r_s, 0)]$, for the case $r_s = 2$, evaluated at the RPA and RPAx level, and compares it with the exchange-only dependence, which is the one assumed in the Perdew-Zunger parametrization 48. There is essentially no difference between the RPA and RPAx spin-polarization functions. For this value of $r_s$, calculations done with the alternative resummations (tRPAx and t'RPAx) give essentially the same results as the original RPAx and are not shown in Fig. 5. Thus, for this property of the system, RPA and all the RPAx (original and alternative) approximations give results in very good agreement with accurate quantum Monte Carlo calculations 50, performing much better than the Perdew-Zunger parametrization and slightly better than the more sophisticated Perdew-Wang parametrization.
V. BOND DISSOCIATION OF DIMERS
As a second test for the RPAx approximation we studied the dissociation curve of the hydrogen and nitrogen molecules.
Within standard density functional approximations (DFAs), the proper (singlet) KS ground state of these molecules at large interatomic separations has too high a total energy (as illustrated later in Figs. 6 and 7). Better agreement with the experimental potential energy curve can be achieved by resorting to a spin-polarized calculation, which gives good energies, however at the price of a qualitatively wrong spin density. In a spin-unrestricted calculation, beyond a certain interatomic separation the two spin components defining the total electron density are no longer equal, leading to a solution which is no longer a singlet, as it should be.
The H$_2$ and N$_2$ dissociation curves at the RPA level have been studied previously 4,21,54. In Refs. 4 and 55 the authors have shown RPA to be size-consistent, and thus to correctly describe the dissociation without resorting to any artificial spin-symmetry breaking. However, the total energy is far too negative because of the well-known RPA overestimation of the correlation energy. Here we would like to assess the performance of the RPAx (original and alternative) approximations for molecules beyond their equilibrium geometries, studying the dissociation curves of H$_2$ and N$_2$.
The dimers and the corresponding isolated atoms were simulated using simple-cubic supercells with side lengths a = 22 and a = 25 bohr, respectively. A kinetic energy cut-off of 50 Ry was used for both systems, and up to 200 of the lowest-lying eigenpairs of the generalized eigenvalue problem in Eq. (10) were used to compute the RPA and RPAx correlation energies. All the calculations have been done starting from well-converged PBE orbitals.
In Fig. 6 we report our results for the dissociation curve of H$_2$, and in Table II the structural parameters extracted from it. Comparison with accurate calculations 53 illustrates the aforementioned deficiencies of the PBE and RPA dissociation curves: standard DFAs give too high a total energy in the dissociation limit, while RPA overestimates the correlation energy, leading to a curve well below the reference one. Including the exact-exchange kernel leads to a marked improvement in the total-energy description; as can be seen from the inset in Fig. 6, the RPAx total energies around the equilibrium position are in very good agreement with accurate quantum chemistry calculations. The alternative resummations, while giving essentially the same energy as the original RPAx in the minimum region, have a positive effect on the dissociation curve at intermediate distances, reducing the height of the repulsive hump. We notice that at large interatomic separations all the RPAx approximations drop below the exact dissociation limit of -2 Ry, in agreement with the analysis reported in Ref. 4.
With the simple H$_2$ example in mind we can turn to the more interesting case of the N$_2$ molecule. In Fig. 7 we report our results for the dissociation curve, and in Table II the structural parameters obtained from it. As already observed for the H dimer, also in this case the whole RPA dissociation curve lies far below all the other curves. Nevertheless, the structural parameters at the RPA level are in very good agreement with results from accurate quantum chemistry calculations 56. Including the exact-exchange contribution to the kernel corrects the RPA overestimation of the correlation energy, shifting the RPAx dissociation curve upward. At the same time, the good performance for the equilibrium bond length and the vibrational frequency already obtained at the RPA level is maintained. However, unlike what happens for the H$_2$ molecule, in this case the original RPAx approximation breaks down when the nitrogen atoms are pulled apart. For bond lengths greater than R = 1.45 Å the RPAx response function is no longer negative definite, leading to an instability which is very similar to the one observed for the low-density homogeneous electron gas and which, ultimately, causes the breakdown of the approximation. The alternative resummations proposed to fix the pathological behavior of the RPAx response function in the HEG turn out to be effective also in this very different situation. The tRPAx and t'RPAx dissociation curves are close to the RPAx one in the equilibrium region (see the inset in Fig. 7) but are well behaved also for bond lengths greater than R = 1.45 Å, overcoming, also in this case, what appears to be an intrinsic inadequacy of the original RPAx approximation.

VI. CONCLUSIONS

In this work we have set the RPAx approximation for the correlation energy within a general scheme that combines the framework of ACFD theory with a systematic many-body approach along the adiabatic-connection path, and that in principle allows the xc kernel to be improved for the purpose of calculating increasingly accurate correlation energies. We have shown that, in a perturbative approach, RPA is an "incomplete" approximation and that the exact-exchange kernel has to be taken into account for a consistent description to first order in the interaction strength. An efficient method for the calculation of the RPAx correlation energy has been proposed, based on an eigenvalue decomposition of the time-dependent response function of the many-body system in the limit of vanishing coupling constant.
The accuracy of the RPAx approximation has been tested on the homogeneous electron gas, revealing a great improvement over RPA results and very good agreement with accurate QMC calculations. The spin-magnetization dependence of the RPA and RPAx correlation energies has been calculated as well, showing a large improvement over the standard parametrizations and nearly perfect agreement with QMC calculations.
These encouraging results are, however, disturbed by the breakdown of the procedure for large values of $r_s$, where the RPAx density-density response function unphysically changes sign, indicating that correlation contributions to the kernel are needed to obtain accurate results for the HEG at low densities. Staying within an exact first-order approximation to the particle-hole interaction, we have suggested two simple and inexpensive modifications of the RPAx approximation which lead to a good description of the correlation energy of the system even in the limit of small densities.
We have then examined the molecular dissociation of H$_2$ and N$_2$ within the RPAx approximation, finding the same virtues and vices already observed in the HEG case. A marked improvement in the total-energy description is disturbed by a pathological behavior of the response function, which ultimately casts doubt on the broad applicability of the RPAx approximation. The alternative resummations proposed here, tRPAx and t'RPAx, have been shown to be able to fix this inadequacy of RPAx without compromising its virtues. Although more tests are needed in order to completely characterize them, tRPAx and t'RPAx emerge as promising and stable alternatives to the original RPAx approximation. | 2014-09-01T10:29:11.000Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "a6eb2cdb38f4130b1b6776a5dc1cc20242f952e7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1409.0354",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a6eb2cdb38f4130b1b6776a5dc1cc20242f952e7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
231713198 | pes2o/s2orc | v3-fos-license | Long-distance migrants vary migratory behaviour as much as short-distance migrants: An individual-level comparison from a seabird species with diverse migration strategies
Abstract 1. As environmental conditions fluctuate across years, seasonal migrants must determine where and when to move without comprehensive knowledge of conditions beyond their current location. Animals can address this challenge by following cues in their local environment to vary behaviour in response to current conditions, or by moving based on learned or inherited experience of past conditions, resulting in fixed behaviour across years. 2. It is often claimed that long-distance migrants are more fixed in their migratory behaviour because as distance between breeding and
| INTRODUC TI ON
Seasonal environments offer animals predictable periods of high productivity, though they also present the challenge of scarcity during the other portion of the annual cycle. Seasonal migration is a life-history strategy by which animals can exploit fluctuations in habitat suitability by moving between distant regions at predictable times throughout the year (Shaw & Couzin, 2013). Animals that match their movements more precisely to environmental patterns in their landscape (e.g. food, weather) typically have higher survival and reproductive success (Both et al., 2006). Environmental conditions, however, vary unpredictably among years. This poses a challenge for migrants, who must determine when and where to move without comprehensive knowledge of the environment through which they must travel.
When environmental conditions are spatially and temporally autocorrelated (Koenig, 1999), conditions at one location can provide information about conditions in another. Such correlations can be used as cues for migratory behaviour (Saino & Ambrosini, 2008), but as the distance an animal migrates increases, the correlation of conditions between wintering and breeding areas, and thus the reliability of these cues, is expected to decrease. A lack of reliable information favours tracking of past conditions (that is, average long-term trends) rather than responding to current conditions (Bauer et al., 2020). Due to these differences in the availability and reliability of cues for predicting remote conditions, it is commonly suggested that long-distance migrants should be more fixed in their migratory behaviour than short-distance migrants (Gwinner, 1977; Hagan et al., 1991; reviewed by Knudsen et al., 2011). Long-distance migrants are thus expected to move based on learned (Campioni et al., 2020), socially transmitted (Jesmer et al., 2018) or genetically inherited (i.e. endogenous; Åkesson et al., 2017; Berthold, 1996; Gwinner, 2003) information about spatiotemporal resource availability in the past. Movement based on past information is synonymous with memory-based movement (Fagan et al., 2013), and such movements should coincide with average climatic conditions (Abrahms et al., 2019; Thorup et al., 2017). Using past information should result in low intra-individual variation in time and space across years, and consistent differences in behaviour between individuals. Short-distance migrants, on the other hand, are generally expected to adjust migratory behaviour based on current conditions, resulting in intra-individual variation across years. This may be done either by following current resource gradients (i.e. surfing resource waves; Armstrong et al., 2016; Van der Graaf et al., 2006) or by using local environmental cues such as temperature (Deutsch et al., 2003) or vegetation (Balbontín et al., 2009; Merkle et al., 2016; Van der Graaf et al., 2006) to predict remote and future resource patterns. Explicit laboratory experiments on differential information use across migration distances have not been performed, while support from inter-species comparisons of variation in the phenology of wild populations is mixed (Knudsen et al., 2011): most report that timing of migration in long-distance migrating species is less varied than in short-distance ones (Butler, 2003; Hagan et al., 1991; La Sorte et al., 2016; Miller-Rushing et al., 2008; Murphy-Klassen et al., 2005; Rainio et al., 2006; Rubolini et al., 2010), while others observe no differentiation or even more advancement in long-distance migrants (Hüppop & Hüppop, 2003; Jonzén et al., 2006).
Most field-based studies examining the influence of migration distance on variation in migration behaviour occur at the population level (Charmantier & Gienapp, 2014), and the extent to which individual-level behavioural plasticity contributes to population-level changes in migratory behaviour remains unclear (Knudsen et al., 2011). Repeated measures of migratory traits from individuals, used to partition inter- and intra-individual variance, are a common method to assess plasticity in migratory behaviour (Conklin et al., 2013; Fraser et al., 2019). Consistent individual differences, or repeatability, may be indicative of inherited or learned preferences based on past conditions, while the residual within-individual behavioural variability reflects the combination of plastic responses to the environment (i.e. adjustment to current conditions) and flexibility (i.e. variation independent of the environment; Hertel et al., 2020; Nakagawa & Schielzeth, 2010; Noordwijk et al., 2006). While repeatability of migratory behaviour has been calculated previously for many avian species (reviewed by Both et al., 2016; Phillips et al., 2017), the spatial accuracy of these studies is typically low due to the tracking technology used, and comparisons among individuals or populations using different strategies are seldom carried out. It is therefore challenging to compare results across these studies to understand the ultimate ecological cause for differences in behavioural variation across taxa (Charmantier & Gienapp, 2014).
Species containing individuals with different migratory strategies are interesting systems for examining whether migration distance influences individual variation in migratory behaviour.

Lesser black-backed gulls Larus fuscus are medium-sized, long-lived seabirds that migrate to diverse wintering regions. A single colony typically contains individuals ranging from short-distance migrants that remain local and only move to winter roosting sites 50 km away, up to intercontinental long-distance migrants travelling thousands of kilometres (Shamoun-Baranes et al., 2017; Stienen et al., 2016; Thaxter et al., 2019). An individual's wintering region is thought to be consistent across years and is not related to either sex or size (Baert et al., 2018). Lesser black-backed gulls are capable of using a range of resource types, including marine, terrestrial and urban (Baert et al., 2018; Camphuysen et al., 2015), though within a given period, many individuals tend to specialize on a particular foraging strategy (Camphuysen et al., 2015; Isaksson et al., 2016). Having the capacity to forage in a broad range of habitats and survive in a range of climatic conditions provides many potential options with regard to how, when and where they migrate.
Using a long-term, high-resolution GPS-tracking dataset of lesser black-backed gulls breeding in colonies in Belgium, the UK and the Netherlands, with individuals that have been tracked for multiple years, we measured variation in the following migratory behaviours: non-breeding distribution, fine-scale wintering site fidelity, migratory routes and date of arrival and departure from breeding and wintering areas. One of the advantages of our study system is the high spatio-temporal resolution of our data across all colonies (hourly at ±3 m spatial resolution), which enables us to accurately quantify each of the migratory behaviours we studied at a fine spatio-temporal scale. Our first objective is to quantify inter- and intra-individual variation of these migratory behaviours in lesser black-backed gulls and determine whether individuals use consistent strategies. While a range of behavioural options may be available to an individual, there are benefits to behaving consistently in space and time (Gunnarsson et al., 2004; Piper, 2011). Thus, we hypothesize that individuals will generally be consistent in their migratory behaviour, with population variation being largely a result of inter-individual differences. Our second objective is to determine whether individual variation in migratory behaviour changes with migration distance. Studying variation in migration behaviour at the individual level, at high spatio-temporal resolution and across such a broad range of migration distances has rarely been possible, and it allows us to address this question from a new ecological perspective.
| Tracking and data processing
We used GPS tracking data from adult lesser black-backed gulls tracked for two or more years from eight colonies in the Netherlands, Belgium and the UK (Table 1, Figure 1). Gulls were captured during the breeding season using walk-in traps set over the nest during incubation. Subsequent movements were recorded using solar-powered GPS-trackers (UvA-BiTS, Bouten et al., 2013), attached with a Teflon wing harness (Thaxter et al., 2014). The total mass of tracker and harness was less than 3% of total body mass.
The breeding season was defined as the period of the year during which an individual occurs in the breeding colony, regardless of their breeding status. The non-breeding season therefore starts with the date of colony departure (last detection within 10 km of the breeding colony following the breeding season) and continues until the date of colony arrival (first detection within 10 km of the colony prior to the breeding season). To quantify time spent in different areas throughout the non-breeding season (non-breeding distribution), we calculated a utilization distribution (UD) from the 95% kernel density estimates of GPS locations taken during the non-breeding season. Tracking data were subsampled to a 12-hr interval to reduce autocorrelation and help distribute data equally through time (in the case of multi-day data gaps) and were projected onto a Lambert equal-area projection (EPSG 3035). UDs were created using the r package 'adehabitatHR' (Calenge, 2006) with a bivariate normal kernel on a grid with a 10-km resolution, using a fixed bandwidth (h) of 100 km.
Gulls can use several distinct core areas over the course of a non-breeding season. These core areas were identified by polygons of the 50% contour from the non-breeding distribution UD (see Figure S1 for examples). Core areas identify coarse-scale regions (hundreds of kilometres in diameter) where birds either wintered or stopped over for prolonged periods.
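As a rough illustration of this workflow, the sketch below estimates a kernel UD with 'adehabitatHR' and extracts the 95% and 50% contours. This is a minimal sketch, not the authors' code: the input object 'gps' (projected coordinates in metres, already subsampled to a 12-hr interval) and the grid setting are assumptions.

library(sp)
library(adehabitatHR)

# Assumed input: data.frame 'gps' with x/y coordinates in metres
# (EPSG 3035), already subsampled to a 12-hr interval
pts <- SpatialPoints(gps[, c("x", "y")],
                     proj4string = CRS("+init=epsg:3035"))

# Bivariate normal kernel with a fixed 100-km bandwidth; grid = 200
# is illustrative (the paper specifies a 10-km grid resolution)
ud <- kernelUD(pts, h = 100000, grid = 200, kern = "bivnorm")

ud95 <- getverticeshr(ud, percent = 95)  # non-breeding distribution
core <- getverticeshr(ud, percent = 50)  # coarse-scale core areas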
Many non-breeding seasons contained multi-day gaps caused by low battery or device malfunction which can influence the UD. Any non-breeding season with a consecutive gap longer than 21 days (the minimum time spent in a core area from gap-free seasons) was removed. If these removals resulted in an individual with only one remaining season, this individual was removed from the study. One individual who remained within 10 km of its colony year-round was also removed.
The core area in which an individual spent the most time between December and March was considered the wintering area, and apart from one individual, was the furthest core area from the colony.
Date of arrival to the wintering area and date of departure from the wintering area were the dates of the first and last GPS detection within this polygon, respectively. The remaining core areas are considered to represent stopover areas. Time spent in these stopover areas sometimes exceeds time spent in the wintering area, and these areas are typically occupied in summer and autumn months (Klaassen et al., 2012). Migration distance, representing the migration strategy of an individual (i.e. direct rather than cumulative distance travelled), was measured as the great circle distance between the colony and the centroid of the wintering area.
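For concreteness, one way to compute such a great circle distance is with the 'geosphere' package; the package is not named in the text and the coordinates below are made up, so treat this as an illustrative sketch only.

library(geosphere)

colony <- c(3.18, 51.34)            # lon/lat of a colony (made up)
winter_centroid <- c(-9.50, 30.40)  # lon/lat of wintering centroid

# distGeo() returns the distance in metres on the WGS84 ellipsoid
mig_dist_km <- distGeo(colony, winter_centroid) / 1000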
For four individuals, the wintering area from 1 year was overlapping with multiple small polygons in another year. To make behaviour comparable across years, these fragmented polygons were grouped into single wintering areas.
| Non-breeding distribution
To quantify intra-individual variation in non-breeding distributions, we calculated mean overlap in the 95% non-breeding season UDs (described above) between all possible paired combinations of non-breeding seasons per individual using Bhattacharyya's affinity (BA, Bhattacharyya, 1943), a recommended method for quantifying home-range overlap (Fieberg & Kochanny, 2005). BA is a function of the product of two UDs which quantifies their similarity, with 0 indicating no overlap and 1 being identical (as we are using 95% UDs, 0.95 would be the highest potential overlap). This metric is independent of area so is comparable across areas of different size (i.e. consistent use of a concentrated area ranks the same as consistent use of a larger, diffuse area). Because BA uses the complete probability distribution, individuals overlapping in areas with higher probability of occurrence (i.e. similar use of wintering and stopover sites across years), will have higher overlap than those overlapping in areas of low probability (i.e. if stopping over in different areas or for less time).
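A minimal sketch of this overlap calculation is given below, assuming 'ud_seasons' is an estUDm object holding the UDs of the non-breeding seasons estimated on a common grid (e.g. from kernelUD() with same4all = TRUE); the object name is an assumption.

library(adehabitatHR)

# BA between all pairs of UDs, restricted to the 95% utilization
# distributions; returns a symmetric matrix (0 = no overlap,
# ~0.95 = maximal overlap for 95% UDs)
ba <- kerneloverlaphr(ud_seasons, method = "BA",
                      percent = 95, conditional = TRUE)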
Inter-individual variation in non-breeding distribution was quantified by calculating non-breeding season overlap between pairs of individuals using similar migration strategies. Pairings were constrained so that neither the breeding colonies nor wintering areas used by paired individuals were further than 250 km apart. The 250-km constraint was chosen a priori to any statistical analysis and was selected because, considering the motion capacity of this species, the area within a 250-km range represents accessible alternatives for an individual while being large enough that most individuals could be paired with at least one other individual. The paired tracks were not required to be from the same year, and if multiple non-breeding seasons were within the distance constraints for a pair of individuals, one non-breeding season per individual was randomly selected.

FIGURE 1 Mean individual migration routes in autumn and spring based on GPS tracking from two or more years. Variation around the mean route is shown by colour. Colonies are indicated with yellow diamonds.
Following Guilford et al. (2011), to determine if individuals were significantly more consistent in their behaviour across years relative to the behaviour demonstrated by others, we used randomization tests. First, the difference between median variation between pairs of individuals and median variation within individuals was calculated.
The data were then randomly re-arranged into new 'between' and 'within' groups and the difference between medians of these random groupings was found. Randomizations were repeated 10,000 times. The probability of the difference in medians from randomly generated groups being larger than that found between the actual within-individual and between-individual groups was then reported.
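The logic of this randomization test can be sketched in a few lines of base R; 'within' and 'between' are assumed vectors of within-individual and between-individual overlap values, not objects from the original analysis.

set.seed(42)
obs_diff <- median(between) - median(within)

pooled <- c(within, between)
n_within <- length(within)

# Re-assign values to random 'within'/'between' groups 10,000 times
rand_diff <- replicate(10000, {
  idx <- sample(length(pooled), n_within)
  median(pooled[-idx]) - median(pooled[idx])
})

# Probability that a random grouping yields a difference in medians
# at least as large as the observed one
p_val <- mean(rand_diff >= obs_diff)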
The relationship between migration distance and intra-individual variation in non-breeding season distribution was examined using a linear model of non-breeding season overlap against the median migration distance used by each individual. Individuals who had a wintering area which did not overlap with previous years were excluded from this and all other comparisons of the influence of migration distance on intra-individual variation, so that the measured intra-individual variation could be associated with a single migration distance and wintering area. A likelihood ratio test between this model and a model with no explanatory variables was used to test whether migration distance significantly influenced individual variation.
However, short-distance migrants are more constrained in how much they can reasonably change their behaviour, and thus should demonstrate less variation regardless of their inclination for behavioural variation. To address this bias, the relationship between migration distance and the overlap found between paired individuals was used as a null model for the expected variation at a given migration distance, assuming inter-individual variation should be similarly influenced by this spatial constraint. The variation predicted by this null model for a given migration distance was subtracted from the intra-individual non-breeding season overlap to determine whether intra-individual variation changed more or less than expected.
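The following sketch shows one way to implement this correction; the data frames 'between_pairs' and 'indiv' (with columns 'overlap' and 'mig_dist') are hypothetical stand-ins for the paired and individual-level datasets.

# Null model: between-individual overlap as a function of distance
null_mod <- lm(overlap ~ mig_dist, data = between_pairs)

# Residual intra-individual variation after removing the spatial
# constraint predicted by the null model
indiv$residual_overlap <- indiv$overlap -
  predict(null_mod, newdata = indiv["mig_dist"])

# Does residual intra-individual variation change with distance?
m1 <- lm(residual_overlap ~ mig_dist, data = indiv)
m0 <- lm(residual_overlap ~ 1, data = indiv)
anova(m0, m1)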
| Winter site fidelity
As a measure of consistency in fine-scale space use, we calculated winter site fidelity. All GPS points between arrival at and departure from the wintering area were used to maximize the temporal resolution of movement data, rather than subsampling as done for non-breeding distributions. The biased random bridge approach was used to calculate a winter area UD, which considers the sampling interval of GPS points, thus accounting for spatio-temporal autocorrelation in high-frequency measurement schemes (Benhamou, 2011). Winter area UDs were calculated on 500-m² grids using the BRB function in the r package 'adehabitatHR', with the plug-in method for estimating the diffusion coefficient. The maximum duration was set to 3 hr, with a minimum distance of 20 m and a minimum smoothing parameter of 150 m. Site fidelity was then calculated using BA overlap of the winter area UD up to the 95th percentile, which is used as a measure of individual consistency (Abrahms et al., 2018; Wakefield et al., 2015).
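An approximate reconstruction of the BRB step with 'adehabitatHR' is sketched below, assuming 'ltr' is an ltraj trajectory of winter GPS fixes built with adehabitatLT::as.ltraj(); the parameter values follow the text, while the grid setting is a guess at the stated resolution.

library(adehabitatLT)
library(adehabitatHR)

# Plug-in estimate of the diffusion coefficient
D <- BRB.D(ltr, Tmax = 3 * 3600, Lmin = 20)

# Biased random bridge UD with a 150-m minimum smoothing parameter;
# the grid setting is a guess at the 500-m resolution in the text
ud_winter <- BRB(ltr, D = D, Tmax = 3 * 3600, Lmin = 20,
                 hmin = 150, type = "UD", grid = 200)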
A linear model of winter site fidelity against individual median migration distance was fit and compared to a model with no fixed effects to test whether migration distance significantly influences winter site fidelity (excluding individuals who changed wintering areas).
| Migratory routes
To quantify intra-individual variation in migration routes, defined as the paths recorded between the breeding colony and the wintering area, routes were compared across years within each individual (see Figure S2).
Inter-individual variation in migration routes was calculated using the same between-individual pairing method used for non-breeding season overlap, and individual consistency was determined using randomization tests, as described above. The influence of migration distance on migration route variability was assessed using linear models for each season. Similar to non-breeding season overlap, short-distance migrants are expected to be more spatially constrained than long-distance migrants, so this relationship was also considered in comparison to that found for between-individual pairings.
| Timing of migration
As measures of intra-individual variation in annual timing we report the range of dates individuals departed and arrived at their colony and wintering areas. One individual was removed from analysis of departure from colony and two individuals from arrival to wintering area as data gaps occurred during this transition.
To quantify individual consistency in timing we calculated repeatability, R = s²ₐ / (s²ₐ + s²), where s²ₐ and s² are the variance among and within individuals, respectively. If individuals are highly consistent in their behaviour relative to variation occurring among individuals, R is close to one.
We calculated s²ₐ and s² using linear mixed models (LMM) for each trait, with migration distance as a fixed effect and colony and individual as random effects (REML method using the lme4 package in r, Bates et al., 2015), where the variance of the individual-level random effect is s²ₐ and the variance of the random error is s² (Nakagawa & Schielzeth, 2010). For arrival to the wintering area, migration distance was excluded to achieve model convergence. As we were interested in the degree of behavioural variation an individual could exhibit, year was not included as a random effect, so that behavioural variation in response to inter-annual changes in environmental conditions would contribute to intra-individual (residual) variation. We used the r package 'rptR' (Stoffel et al., 2017) to calculate repeatability with 95% confidence intervals based on parametric bootstrapping over 1,000 iterations (presented as R [lower CI - upper CI]).
For individual-level measures of variation in arrival and departure dates, using the LMMs above, we calculated an individual-level repeatability, Rᵢ, by substituting the residual variance for the ith individual, s²ᵢ, for s² (excluding individuals who changed wintering areas; Potier et al., 2015; Wakefield et al., 2015). Rᵢ for each arrival or departure was then used as the response variable in the linear models with migration distance.
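A minimal sketch of this repeatability analysis for a single timing trait is given below, assuming a data frame 'dat' with columns date (day of year), mig_dist, colony and id; the column names are illustrative.

library(lme4)
library(rptR)

# LMM: the id random-effect variance is s2_a, the residual is s2
mod <- lmer(date ~ mig_dist + (1 | colony) + (1 | id), data = dat)

# Agreement repeatability with a parametric-bootstrap 95% CI
rep_est <- rpt(date ~ mig_dist + (1 | colony) + (1 | id),
               grname = "id", data = dat,
               datatype = "Gaussian", nboot = 1000, npermut = 0)

# An individual-level R_i follows by substituting that individual's
# residual variance for s2 in R = s2_a / (s2_a + s2)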
All analyses were completed in R version 3.5.1. Final sample sizes for each behaviour can be found in Table 1 and Table S1.
| Variation in migratory behaviour of lesser black-backed gulls
Individuals used between 1 and 3 core areas during the non-breeding season. For all but five individuals (n = 77, 94%), winter areas overlapped across all years. These five individuals switched wintering areas between France and Western Sahara (n = 1), Mauritania and Portugal (n = 1), France and UK (n = 1) and Morocco and UK (n = 2).
Migration distance was therefore highly repeatable (R = 0.81 [95% confidence interval: 0.57-0.93]). Sixty-two individuals (76%) used a stopover in at least 1 year, and 96% of total time spent in stopover areas occurred before arrival to the wintering area. Use of stopover areas was less consistent than that of wintering areas: 18 of the 62 individuals using a stopover (29%) had a stopover area that did not overlap among years (compared to 6% of individuals who had non-overlapping wintering areas).
Despite some variation in stopover area use, overlap in non-breeding distributions was generally high, with a median overlap of 0.91 (range: 0.51-0.95). Non-breeding distributions were considerably more similar within individuals across years than between individuals (median between-individual overlap = 0.61, range = 0.19-0.93, Figure 2a and Figure S3), with none of the randomized sets producing a difference in medians more extreme than the actual data (p < 0.001). Site fidelity within wintering areas was lower than non-breeding distribution overlap, with a median overlap of 0.62, and differed substantially among individuals (range: 0.00-0.91, Figure 2b; Table S2).
| Individual variation in migration routes
While most individuals tended to be highly consistent in each behaviour, a few individuals demonstrated high variation (Figure 2, Figure S5). The individuals demonstrating the most variation were not the same for each behaviour, with 26 different individuals (32%) being in the upper 5th percentile of variation for at least one behaviour (Table S3).
| Influence of migration distance on individual variation
Migration distances ranged from 53 to 4,572 km (median = 1,727 km, n = 77). Intra-individual non-breeding distribution overlap decreased with migration distance (overlap = 0.936 − 2.0 × 10⁻⁵ · migration distance, F1,75 = 28.142, p < 0.001; Figure 2a). However, intra-individual variation increased at a significantly lower rate than the increase between individuals (residual overlap = 0.232 + 3.1 × 10⁻⁵ · migration distance, F1,75 = 68.642, p < 0.001; Figure S6a), suggesting that longer-distance migrants were less variable in their behaviour than expected when considering the total space traversed during their movements (and vice versa for shorter-distance migrants). Intra-individual route variation increased slightly but significantly with migration distance in spring (variation = 22.652 + 0.014 · migration distance, F1,67 = 14.237, p < 0.001; Figure 2d), even after accounting for increasing between-individual variation (variation = −46.208 + 0.010 · migration distance, F1,67 = 6.564, p = 0.013; Figure S6c). Autumn route variation did not change with migration distance (F1,52 = 3.941, p = 0.052; Figure 2c), nor did it significantly differ from variation observed between individuals (F1,52 = 1.544, p = 0.220; Figure S6b).

FIGURE 2 Mean overlap in (a) non-breeding distribution and (b) winter areas, (c) autumn and (d) spring variation in migration routes, and individual repeatability in date of (e) colony departure, (f) arrival to winter area, (g) departure from winter area and (h) colony arrival, across multiple non-breeding seasons from lesser black-backed gulls, versus their median migration distance. The y-axis for autumn and spring route variation (c and d) is reversed so that the order of variation is consistent among plots. Black lines showing trends predicted by the linear models are included where significant. Distributions from between-individual pairs used to calculate 'residual' intra-individual variation are shown in grey (only used for behaviours compared across multiple spatial scales). Individuals who changed wintering areas (n = 5) have been excluded.
| DISCUSSION
This study quantified inter- and intra-individual variation in non-breeding distributions, winter site fidelity, migration routes and timing of migration in lesser black-backed gulls at the individual level, using high spatio-temporal resolution tracking data and covering a broad range of migration distances, to test the hypothesis that migratory behaviour should become more fixed as migration distance increases. However, we found that migration distance did not explain which individuals were most variable across years, contrasting with many previous inter-species comparisons of population phenology.
Instead, we found that regardless of migration distance, individuals consistently differed from each other in their behaviour, suggesting that individuals predominantly follow learned and/or inherited behavioural strategies.
FIGURE 3 The range of (a) departure dates from the colony, (b) arrival dates to the wintering area, (c) departure dates from the wintering area and (d) arrival dates to the colony used across non-breeding seasons by individual lesser black-backed gulls. Individuals who changed wintering areas (n = 5) are identified by white boxplots. Individuals are ordered by their median migration distance. Repeatability [95% confidence interval] is reported at the bottom of each plot.
| Variation in migratory behaviour
For all behaviours examined, intra-individual variation was small compared to that of the population, resulting in distinct individual behavioural strategies, consistent with our hypothesis that gulls will be inclined to rely on past experience. Repeatability was high in comparison to findings across a range of taxa for diverse behavioural traits (Bell et al., 2009), but consistent with studies of avian migration (reviewed by Both et al., 2016;Phillips et al., 2017). This suggests that many avian species preferentially use learned or inherited knowledge of previously reliable wintering and stopover areas, rather than risk searching for the best locations in a given year.
Winter area overlap demonstrated that individuals also had high site fidelity at a fine scale (500 m resolution), suggesting repeated use of foraging areas and roosting sites among years. Individual consistency in space use may provide more stable energetic rewards than plastic behaviour (Abrahms et al., 2018), as familiarity with a site can improve foraging efficiency (Piper, 2011; van den Bosch et al., 2019).
Efficiency resulting from familiarity may be sufficient to balance the benefits of switching to a new location with better environmental conditions for a given year. Consistent individual differences in timing of migration may be a result of individual differences in foraging type and habitat quality at their respective wintering and stopover areas, resulting in different optimal migration times (Studds & Marra, 2005), and it may also be a mechanism to reunite with mates in the breeding colony (Gunnarsson et al., 2004). Understanding how these individual strategies are determined (genetically inherited, socially transmitted or learned) is important for assessing the adaptive scope of migratory animals to changes in their environment. Current studies on avian species suggest migratory behaviour may be under strong genetic influence in early life, but refined or replaced by learning as an individual gains experience (Campioni et al., 2020;Sergio et al., 2014).
Inter-individual variation for most behaviours examined was high.
High inter-individual variation might suggest that selective pressure on these behaviours is low for this species (Verhoeven et al., 2019).
Low selective pressure on migratory traits may be typical for generalist species, such as gulls, for whom the ability to use a range of behaviours at fine spatio-temporal scales (e.g. diet and habitat), and the ability to survive under a range of climatic conditions, may buffer the effects of inter-annual variation, enabling consistency in behaviours at mid-to-broad spatio-temporal scales (e.g. wintering and stopover regions, migratory period). This is consistent with the fact that spatial overlap measured at finer scales (winter site fidelity) was lower than regional-scale, non-breeding season overlap.
While most individuals follow a distinct strategy, the intra-individual variation observed suggests that gulls still adjust behaviour across years, and thus behaviour is not rigidly fixed. Instead, consistent behavioural strategies likely define a broad window in space or time within which an individual can adjust its behaviour based on current conditions, thus allowing for the integration of information based on both past and current conditions (Åkesson & Helm, 2020). Additionally, for each behaviour examined, there were a few individuals with extremely high variation across years (i.e. a change in behavioural strategy). The individuals which exhibited this high variation were not consistent across all behaviours, suggesting that the ability to change strategy could be common across all individuals. The causes of these drastic changes are unknown, but their occurrence suggests that individuals can change strategies to adapt to shifting long-term conditions within their lifetime.
Intriguingly, for migration routes, inter- and intra-individual variation was low, suggesting the entire population is constrained to the use of certain migratory corridors. Despite reduced inter-individual variation, intra-individual variation was still lower, suggesting individuals travelling between similar breeding and wintering areas consistently use different routes. This is in contrast to many migratory bird species, which typically demonstrate high variation in migration routes, presumably as they adjust routes among years to current wind conditions (Dias et al., 2013; López-López et al., 2014; Stanley et al., 2012). This may suggest that there is high selection pressure for moving along coastlines in this species, implying an advantage to foraging or roosting in coastal habitats while migrating.
Coastal areas may also represent energy efficient pathways, as the dunes and cliffs typical of these areas can generate orographic lift enabling gulls to switch from flapping flight to energetically cheap soaring flight (Sage et al., 2019).
| Influence of migration distance on individual variation
No clear effect of migration distance on individual variation was found in lesser black-backed gulls from these populations. This is in contrast to numerous phenological studies, covering a range of avian taxa, which have found that species migrating long distances are more fixed in their timing of spring migration compared to short-to-mid-distance migrants, both in response to long-term climate change (Hagan et al., 1991; Miller-Rushing et al., 2008; Murphy-Klassen et al., 2005; Rubolini et al., 2010) and year-to-year changes in environmental conditions (La Sorte et al., 2016; Rainio et al., 2006). However, these phenological studies are inter-specific comparisons focusing either on population means or 'first individual' observations, rather than examining individual-level variation using repeated measures. Similar to our study, Verhoeven et al. (2019) found no influence of winter region on intra-individual variation in migration timing. High intra-individual variation has also been reported for some long-distance migrants (e.g. Fraser et al., 2019), but not all (e.g. Conklin et al., 2013), providing poor support for a general trend for fixed migratory behaviour in long-distance migrants at the individual level. This highlights the importance of integrating individual- and population-level data to better understand the mechanisms and implications of how species react to changing climates (Visser et al., 2010).
While long-distance migrants may not have reliable cues regarding remote environmental conditions, they may still adjust their migratory behaviour to changes in their intrinsic state or local conditions. Therefore, similar behavioural variability across individuals migrating different distances does not mean all migrants can respond equally well to environmental variation on short or long time-scales. To draw such conclusions, deviations from an individual's strategy should be correlated with changes in environmental conditions in breeding areas. Indeed, our study is limited by our inability to relate movement to a single preferred resource, as can be done for dietary specialists (Abrahms et al., 2019;Thorup et al., 2017;Van der Graaf et al., 2006). In the future, a better understanding of the underlying motivation and environmental cues gulls use to inform migratory behaviour would help further elucidate the mechanisms underlying migratory decision making in this species.
Given the readily accessible environmental information available to the shortest-distance migrants, it is particularly surprising that we still observed individual consistency in space and time. Conditions on wintering areas are typically thought to be less reliable at higher latitudes (Danner et al., 2013), favouring behavioural flexibility and innovation in short-distance migrants (Sol et al., 2005). However, while availability of marine and terrestrial resources may be scarce at high latitudes during the winter, some anthropogenic resources (e.g. waste treatment centres) remain dependable year-round. Such consistency in the environment may limit the need to be plastic, instead favouring reliance on past experience, leading to high site fidelity even at fine spatial scales such as those we observed. Learned patterns and consistency may be a generally favourable strategy for species utilizing reliable and abundant anthropogenic resources.
| CONCLUSIONS
Due to the challenge migrants face of determining when and where to move without comprehensive knowledge of environmental conditions at remote destinations, concern has been raised regarding whether migrants, particularly long-distance migrating species who are thought to be more fixed in their behaviour, can sufficiently adjust migratory behaviour to human-induced environmental change (Møller et al., 2008; Saino et al., 2011). Lesser black-backed gulls demonstrated consistent individual differences in migratory behaviours, suggesting a preference for relying on past conditions to guide movement, and we found no consistent influence of migration distance on intra-individual variation. Use of consistent strategies, even by individuals migrating short distances who presumably have reliable information regarding current conditions, suggests that familiarity with a strategy may be preferable to trying to track optimal conditions. While this may apply to species who use resources that are predictable year-round, such as anthropogenic resources (Riotte-Lambert & Matthiopoulos, 2020), in unpredictable systems a consistent strategy may be detrimental (Abrahms et al., 2018). Importantly, despite an apparent preference for consistency, individuals, regardless of their migration distance, can vary behaviour within the confines of their individual strategies, and occasionally even change strategies. We encourage further examination of the influence of migration distance on behavioural plasticity at the individual level to determine how universal our findings are, as well as extending this research to systems where behavioural variation can be linked with environmental variables to assess whether observed behavioural variation is equally adaptive across migration distances. | 2021-06-30T20:32:41.726Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3c920c47992760f22e3e335e7e1329f3dc6e1e25",
"oa_license": "CCBY",
"oa_url": "https://besjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1365-2656.13431",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "64bd63316a21b399509953415aa6ceb1786fbfbe",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
235772603 | pes2o/s2orc | v3-fos-license | Visual Attention and Sexual Function in Women
Theoretical models situate attention as integral to the onset and regulation of sexual response and propose that problems with sexual response and subsequent sexual dysfunction result from insufficient attentional processing of sexual stimuli. The goal of this paper is to review literature examining the link between attentional processing of sexual stimuli and sexual function in women. Specifically, we sought to understand whether women with and without sexual dysfunction differ in their visual attention to sexual stimuli and examined the link with sexual response, which would support attention as a mechanism underlying sexual dysfunction. Across women with and without sexual concerns, sexual stimuli are preferentially attended to relative to nonsexual stimuli, suggesting that sexual stimuli are more salient than nonsexual stimuli. Differences between women with and without sexual dysfunction emerge when examining visual attention toward the most salient features of sexual stimuli (e.g., genital regions depicting sexual activity). Consistent with theoretical models, visual attention and sexual response are related, such that increasing attention to sexual cues facilitates sexual arousal, whereas reduced attention to sexual stimuli appears to suppress sexual arousal, which may contribute to sexual difficulties in women. Taken together, the research supports the role of visual attention in sexual response and sexual function. These findings provide empirical support for interventions that target attentional processing of sexual stimuli. Future research is required to further delineate the specific attentional mechanisms involved in sexual response and investigate whether these are modifiable. This knowledge may be beneficial for developing novel psychological interventions targeting attentional processes in the treatment of sexual dysfunctions.
Introduction
Sexual function is defined as the absence of difficulty when moving through the different stages of sexual response (i.e., desire, arousal, and orgasm), including an absence of pain with sexual activity, as well as subjective feelings of satisfaction and pleasure during partnered and solitary sexual behavior [1]. Sexual dysfunction involves recurrent problems with sexual response that are distressing to the individual [1]. Contemporary models of sexual response posit that attentional processes, which operate at different stages of awareness and control, are required for sexual arousal [2•, 3, 4]. In this paper, we review literature on how attentional processes, specifically visual attention to sexual stimuli, differ between women with and without sexual dysfunction. First, we situate attention as a key component in sexual response by providing an overview of the emotion-motivational model (EMM) [2•]. Second, we review experimental studies that examine the role of attention on sexual response in samples of women without sexual dysfunction, providing initial support for the EMM. Third, we focus on the limited research examining patterns of visual attention in women with varying levels of sexual functioning. We conclude by reviewing gaps in the current state of knowledge, discussing the clinical implications for interventions that capitalize on attentional mechanisms, and reflecting on research areas that have not yet been explored.
Emotion-Motivational Model
Sexual response is an emotional state involving physiological responses (e.g., genital response), cognitive processes (e.g., attention to sexual stimuli), affective responses (e.g., excitement), and motivational behaviors (e.g., sexual desire) [5,6]. Given that attention acts early on in the emotion regulation process, several theoretical models situate attention as a key component in the initiation and regulation of sexual arousal [3,4,7]. The EMM highlights the dynamic and multi-faceted processes involved in sexual response [2•]. According to this contemporary model, genital arousal is triggered by sexually salient stimuli that are pre-attentively processed and automatically capture attention. Automatic appraisals of the stimuli as sexual and rewarding facilitate genital arousal, through increased or decreased attentional engagement. Genital arousal reinforces maintenance of attention and facilitates the conscious appraisal of sexual stimuli. The evaluation of sexual stimuli as meaningful (e.g., past experiences with the stimulus), along with the emotional state of the individual (e.g., strength of sexual motivation), further determines how sexual response unfolds. When such appraisals result in positive or rewarding interpretations, subjective awareness of sexual arousal and desire follows. Working in a reciprocally reinforcing manner, the ongoing subjective and physiological responses may trigger motivation to engage in sexual behaviors [2•]. Conversely, any negative or unrewarding interpretations experienced during this process may divert attention away from relevant sexual stimuli and thus impede sexual response and contribute to difficulty with sexual arousal, desire, and orgasm (i.e., sexual dysfunction).
The EMM can be a useful model for conceptualizing sexual dysfunction. For example, imagine a woman who presents with problems of low sexual desire and difficulties with arousal. It is possible she may not notice a sexual stimulus (e.g., her partner's naked body) or notices her partner but perceives the nudity to be nonsexual (e.g., not arousing and not indicative of sexual activity), and as a result, she may not experience an automatic genital response. In the absence of a genital response, her attention may be maintained on nonsexual stimuli (e.g., her to-do list). This may result in the conscious appraisal of her nude partner as unrewarding or perhaps even evoke negative emotions, if memories of unfulfilling sexual encounters are triggered. These negative appraisals may further shift her attention away from her partner, impeding genital response, and preventing subjective sexual arousal and desire. Given the absence of sexual arousal, it is unlikely she would be motivated to initiate or be receptive to sexual activity. If she does engage in sexual activity, her lack of sexual response (e.g., genital arousal/lubrication and sexual desire) coupled with her lack of approach motivation may result in an unfulfilling sexual encounter, which may instigate or perpetuate her sexual dysfunction. Taken together, the EMM not only specifies the intermediary processes by which sexual response unfolds but also has implications for mechanisms underlying sexual dysfunction [8].
Experimental Studies of Attention
Generally speaking, attention is used to prioritize and select the most motivationally charged stimuli for further processing [9]. There is strong evidence to suggest that sexual stimuli automatically attract attention and demand the allocation of attentional resources, which makes sense given that they are among some of the most emotionally salient stimuli in the environment [10]. In this section, we review studies utilizing different methodologies to assess the various components of attention. These include studies using reaction-time methods, studies of arousal where attentional focus is experimentally manipulated, and studies of visual attention assessed using eye-tracking. Together, these data support the role of automatic and controlled attention in the onset and regulation of sexual response.
Different reaction-time-based tasks have been used to examine attentional bias, defined as the tendency for emotionally salient (sexual) stimuli to capture attention and be preferentially processed relative to other less salient (nonsexual) stimuli [11]. In general, reaction-time tasks require participants to react as quickly as possible to a secondary task (e.g., the location of a target in the dot probe or the font color of the word in the emotional Stroop). It is expected that the salience of the stimuli (e.g., sexual vs. nonsexual) will either enhance or interfere with performance on the secondary task. For example, in dot probe tasks, when two stimuli compete for attention at opposite screen locations, faster responses to the target appearing in the same location as the salient sexual stimulus indicate attentional capture, whereas slower responses to the target appearing at the location of the nonsalient, nonsexual stimulus demonstrate difficulty disengaging attention from the salient stimulus [11]. Studies using cueing tasks (e.g., dot probe and spatial cueing) have yielded contradictory results. While one study found no evidence for an automatic attentional capture toward sexual stimuli [12], others reported either faster responses to sexual stimuli (e.g., attentional capture) or slower responses due to difficulty disengaging from sexual stimuli [13,14].
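To make the reaction-time logic concrete, the sketch below computes a standard dot-probe bias index per participant; the data frame 'trials' and its columns are hypothetical, not taken from any of the studies reviewed here.

# Mean RT per participant when the probe replaced each stimulus type
rt_sexual <- with(subset(trials, probe_at == "sexual"),
                  tapply(rt_ms, id, mean, na.rm = TRUE))
rt_nonsexual <- with(subset(trials, probe_at == "nonsexual"),
                     tapply(rt_ms, id, mean, na.rm = TRUE))

# Positive values = faster responding when the probe replaced the
# sexual stimulus, consistent with attentional capture by sexual cues
bias_index <- rt_nonsexual - rt_sexual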
Attentional biases have also been explored in clinical samples of women with sexual function concerns. Using a modified dot probe task to assess attentional biases elicited by sexual words, one study found that women with poor sexual function exhibited an attentional bias whereas women high in sexual function did not [15]. Specifically, women with sexual function difficulties (i.e., Female Sexual Function Index < 27) were faster to identify probes that replaced sexual words compared to nonsexual words, suggesting that sexual words captured and sustained attention. The authors hypothesized that similar to other psychopathologies (e.g., anxiety), female sexual dysfunction may be associated with an attentional bias toward disorder-relevant stimuli, specifically that sexual stimuli evoke a threat response that facilitates faster detection of such stimuli [15]. Pain-hypervigilance has been examined in women with and without vulvar vestibulitis syndrome (VVS; a sexual pain condition) using an emotional Stroop task involving neutral and pain-related words [16]. Compared with pain-free women, women with VVS displayed greater interference (i.e., slower response times) for pain-related words, suggesting that women with sexual pain experience hypervigilance for pain-related stimuli. Attentional biases assessed via the dot probe task have been examined in women with and without hypoactive sexual desire disorder (HSDD) [17]. The authors hypothesized that compared to healthy controls, women with HSDD would be faster to detect dots that replaced sexual images, indicative of an attentional bias consistent with sexual stimuli capturing attention [18]. Surprisingly, they found that both groups of women exhibited similar attentional biases, consistent with attentional (dis)engagement with sexual stimuli (i.e., slower response times), rather than attentional capture.
Attentional biases have been used as a means to examine treatment efficacy for sexual dysfunction. A study involving women with HSDD compared pre- and post-treatment attentional biases to sexual stimuli using the emotional Stroop task [19]. Pre-treatment, women with HSDD did not exhibit an attentional bias to sexual words (i.e., similar response times for both sexual and nonsexual words). After treatment with testosterone and a PDE-5 inhibitor, women with HSDD showed an attentional bias, such that sexual words interfered with color naming (i.e., slower response times). This study is important in that it reveals a possible attentional mechanism underlying sexual dysfunction that is modifiable with treatment. Indeed, women with HSDD may lack an attentional bias (e.g., sexual stimuli are not salient and thus do not capture attention) and this may prevent activation of the sexual response system. Together, the evidence indicates that attentional mechanisms may be important for the underlying symptoms of sexual dysfunction. The equivocal findings in reaction-time studies are likely due to the variability in paradigms used (e.g., dot probe and emotional Stroop), which actually capture different attentional mechanisms: cueing and attentional filtering. Although studies using reaction-time tasks have generally supported that sexual stimuli command attentional resources, the link between these attentional mechanisms and sexual outcomes has not been thoroughly investigated. Given the role of attention in the initiation and regulation of sexual arousal, we might predict that cueing tasks, which use different classes of stimuli to experimentally manipulate orienting of attention, may be associated with the initial activation of genital arousal. Filtering tasks, which rely on emotionally salient stimuli capturing and sustaining attention, thereby reducing one's ability to attend to other stimuli, may be more strongly associated with the regulation of sexual arousal.
Over the last several decades, researchers have also manipulated the allocation of attentional resources available for stimulus processing using a variety of tasks (e.g., dichotic listening, instructional, performance demand, or use of distractions). The goal of these studies was to elucidate the role of voluntary or directed attention in women's sexual response. For example, under experimental conditions instructing women to increase their sexual arousal or to focus on pleasure rather than pain, women reported increases in sexual arousal [20,21]. Further, women's subjective feelings of sexual arousal while viewing erotic video stimuli increased after practicing mindfulness exercises encouraging women to focus their attention on bodily sensations [22,23]. Whereas directed attention to internal and external sexual stimuli can facilitate sexual arousal, cognitive distraction can divert attention away from such stimuli and hinder sexual response. Indeed, compared to conditions where attention is maintained on sexual stimuli, auditory (e.g., dichotic listening) or visual (e.g., picture flickering) distractions presented during the sexual stimulus presentation have been found to weaken both subjective and physiological sexual arousal in women [24-28]. Although these studies demonstrate the importance of attention to and distraction from sexual stimuli in sexual arousal, they reveal nothing about the specific attentional mechanisms because they situate attention as an independent rather than dependent variable. As well, the distractors used in these experiments lack ecological validity and may not be accurate representations of real-life distractors that may affect women experiencing sexual dysfunction.
Visual Attention to Sexual Stimuli
Visual attention is defined as the selective orienting to information from one region of the visual field at the expense of other regions in the same field [29]. Eye-tracking technology enables the direct recording of automatic and controlled eye movements and provides a continuous measure of attention allocation in real time [30-32]. Eye-tracking enables the assessment of initial or automatic attention through the latency to first fixation. This is an index of attentional capture, or how quickly a stimulus captures attention, and according to the EMM is thought to trigger the onset of a sexual response [2•]. Sustained or controlled attention is assessed most commonly through total fixation duration. This is an index of overt orienting and sustained attentional processing, or the total amount of time spent looking at a stimulus, and is thought to regulate sexual response over time.
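As an illustration, the two gaze indices described above can be derived from a fixation table as follows; 'fix' and its columns (id, aoi, onset_ms, dur_ms) are hypothetical names for a typical eye-tracker export, not from any specific study.

# Restrict fixations to the sexual area of interest (AOI)
sex_fix <- fix[fix$aoi == "sexual", ]

# Latency to first fixation: index of initial (automatic) attention
first_fix_ms <- tapply(sex_fix$onset_ms, sex_fix$id, min)

# Total fixation duration: index of controlled (sustained) attention
total_dur_ms <- tapply(sex_fix$dur_ms, sex_fix$id, sum)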
Visual attention is a central component of most sexual experiences, and sex researchers have relied on eye-tracking to examine visual processing of sexual stimuli as it relates to sexual interest, arousal, and attentional biases underlying sexual (dys)function. There is robust evidence to suggest that sexual stimuli capture more visual attention than nonsexual stimuli [33, 34•, 35]. Studies presenting participants with sexual (e.g., nude) and nonsexual (e.g., fully clothed) images of male and female targets have revealed a compelling attentional bias in favor of sexual stimuli. Specifically, sexual stimuli elicit longer gaze times relative to nonsexual stimuli [32,33]. Other paradigms focused on examining visual attention to preferred and nonpreferred sexual targets reveal different patterns depending on the stage of attentional processing (i.e., initial versus controlled attention) [35, 36•, 37•, 38, 39]. Both preferred and nonpreferred stimuli elicit an initial attentional bias, suggesting that sexual stimuli, regardless of their sexual relevance in terms of gender cues, automatically capture attention [35, 36•, 37•]. Patterns of controlled attentional bias, however, seem to differ as a function of the eye-tracking paradigm. Specifically, in studies presenting single images of male or female targets, or dyads engaging in sexual activity, both preferred and nonpreferred targets garner similar degrees of visual attention [35,39]. In studies using a forced-attention paradigm, where two single-target images are simultaneously presented and compete for attention, only preferred sexual stimuli elicit a controlled attentional bias [35, 36•, 37•]. Notably, patterns of controlled attention are strongly correlated with self-reported attraction ratings of sexual images (r = .47 to .49), consistent with the EMM [35].
Recently, researchers have begun exploring whether visual attention patterns elicited by sexual stimuli differ as a function of whether or not a woman has a sexual dysfunction. The overarching hypothesis is that sexual dysfunction may arise if sexual stimuli do not attract or command attentional resources, because this would inhibit the activation of sexual response or the proliferation of the response over time. In a sample of women with acquired (n = 16) and lifelong (n = 9) sexual interest/arousal disorder (SIAD) [40], Brown et al. [41••] examined attentional bias for sexual stimuli. When a sexual and a nonsexual image were simultaneously presented to compete for attention (e.g., forced-attention paradigm), women with SIAD, regardless of subtype, exhibited a controlled attentional bias, such that they looked more at the sexual compared to nonsexual images. That is, compared to nonsexual stimuli, sexual stimuli were prioritized by the visual attention system and commanded attentional resources even for women for whom sexual stimuli may not be appraised as positive and associated with reward, as per the EMM.
Velten and colleagues [42••, 43••] investigated gaze patterns of women with clinical (n = 30), subclinical (n = 23), and normal (n = 16) levels of sexual function. Women with clinical levels of sexual dysfunction met DSM-5 criteria for SIAD and/or genito-pelvic pain/penetration disorder (GPPPD) [40]. Women with subclinical levels did not meet full diagnostic criteria for SIAD and/or GPPPD but scored below the clinical cut-off for sexual function (Female Sexual Function Index < 26.55) and above the clinical cut-off for sexual distress (Female Sexual Distress Scale-Revised > 11). The goal was to determine if women with sexual dysfunction attend differently to sexual stimuli (static images and dynamic videos) and if this may be a mechanism contributing to their sexual dysfunction. The authors hypothesized that women with clinical and subclinical sexual dysfunction would look less at the genital regions of sexual stimuli compared to women with normal sexual functioning. Results demonstrated that, compared to healthy controls, women with sexual dysfunction attended less to the genital regions of both static and dynamic sexual stimuli. While these findings may indicate that the salient aspects of sexual stimuli did not capture or maintain attention in women with sexual dysfunction, the reported effects were small and not observed across all dependent variables (e.g., different eye-tracking measures). Consistent with the EMM, across all women, attention toward sexual cues facilitated sexual arousal, such that longer gaze durations on the genital regions of sexual videos were followed by increases in both subjective and physiological sexual arousal [43••].
Given that distraction has been hypothesized as a mechanism contributing to sexual dysfunction [7], some researchers have sought to examine how distraction, rather than attention, influences visual processing of sexual stimuli. In the same study described above [42••], eye movements were recorded during the presentation of sexual images in which distracting objects (e.g., a to-do list or a household item) were embedded. Contrary to the hypothesis, across all women, the presence of distracting objects did not influence attention toward the genital regions of sexual stimuli, suggesting that distractibility from visual sexual cues may not be a key mechanism underlying sexual dysfunction [42••]. Another study evaluated visual attention patterns as a function of specific sexual concerns using static images depicting couples engaged in foreplay [44]. When comparing women without sexual complaints (n = 20) to women with low sexual desire (n = 14) and those experiencing painful intercourse (n = 20), women with genital pain looked significantly less at the sexual scene regions (i.e., the full bodies of the male and female in the sexual images) than women with low sexual desire and those with no sexual complaints. Consistent with Velten et al. [42••], there were no group differences in attention to distractor objects within sexual images. The authors concluded that among women who experience pain with intercourse, there may be an avoidance of sexual stimuli, which may contribute to their sexual difficulties [44].
Recent studies examining attentional biases among women with sexual dysfunction have contradictory findings, as sexual stimuli seem to both automatically capture attention [15] and command less voluntary attention [42••, 43••]. First, these paradigms assess different mechanisms: in reaction-time tasks, the primary variable of interest is response time, whereas in eye-tracking paradigms, gaze (e.g., total fixation duration) is assessed. The former is capturing more reflexive automatic attentional processes, whereas the latter is capturing more consciously controlled processes. Second, our visual system is typically inundated with many stimuli that compete for attention simultaneously and our attentional system prioritizes information based on saliency and stimulus complexity. Compared to paradigms using sexual and nonsexual words, stimuli containing images and videos depicting sexual activity contain many more cues and as such are more cognitively demanding to process. In fact, there is evidence to suggest that contextual cues differentially influence attentional processing of sexual stimuli, particularly in women [36•]. Thus, methodological differences may be responsible for the somewhat equivocal findings on the role of attention in sexual dysfunction.
Taken together, the evidence suggests that visual attention is biased toward sexual stimuli and that such stimuli facilitate sexual arousal among women with and without sexual dysfunction. However, salient aspects of sexual stimuli (e.g., genital regions depicting sexual activity) appear to produce varied patterns of attention among women with and without sexual difficulties. This provides preliminary support for the hypothesis that differences in patterns of sexual response and sexual function may be the result of differences in attentional processing of stimulus cues depicting reward salience (i.e., the regions of stimuli that indicate sexual activity). Additional support for the EMM is that visual attention patterns correlate with sexual outcomes, such that increasing attention to sexual stimuli, particularly salient features, facilitates sexual arousal. Conversely, the EMM hypothesizes that reduced attention (e.g., looking less) to sexual stimuli may impede physiological sexual arousal and reduce vaginal lubrication, which in turn may set the stage for lower levels of sexual desire and/or the experience of pain with intercourse [2•].
Limitations to Studies of Visual Attention
Although studies of visual attention have shed light on our understanding of attentional mechanisms involved in sexual dysfunction, the research is not without limitations. The links between visual attention and sexual dysfunction as proposed in the EMM have yet to be comprehensively examined. Specifically, the few existing studies have focused on two sexual dysfunctions, namely low desire and genital pain. As such, our understanding of how attention relates (or not) to problems with orgasm, or to specific genito-pelvic pain conditions such as vaginismus (involuntary spasms of the pelvic floor muscles) or provoked vestibulodynia (pain at the vulvar vestibule that is provoked by touch or sexual activity) [40], is limited. The dearth of studies integrating visual attention and sexual outcomes is problematic because, although attention may differ across women with varying levels of sexual function, existing work does not reveal whether these attentional differences translate into the downstream effects on sexual response proposed by the EMM. Specifically, if the goal when treating sexual dysfunction is to augment sexual outcomes (including arousal, desire, and satisfaction), then identifying direct links between attentional mechanisms and sexual outcomes is necessary to determine whether addressing attention will ameliorate the deficits seen among women with sexual dysfunction. It is also challenging to draw strong conclusions from the few existing visual attention studies given the relatively small sample sizes within each group; small samples limit variability in gaze and outcomes, producing less reliable estimates. Lastly, even if visual attention is a key mechanism in sexual response, viewing sexual stimuli on a screen, particularly in a laboratory setting, does not necessarily simulate sexual interaction in the real world. Thus, the limited ecological validity of the experimental designs may not reflect how sexual stimuli are processed in actual sexual situations. Solutions for future research might include ambulatory measures or virtual reality methodology, which enables first-person point-of-view stimuli that may better mimic real-life situations.
Clinical Implications and Future Directions
Despite the relative paucity of research examining visual attention and sexual function, preliminary data support an understanding of attentional processes involved in sexual (dys)function that is consistent with the EMM. If sexual stimuli fail to capture attention, sexual response will not be activated, and if sexual stimuli fail to sustain attention, sexual response will be insufficient for sexual activity, thereby contributing to sexual dysfunction. It follows that attention to sexual stimuli is a key factor in the development and maintenance of sexual dysfunction, and that in order to enhance arousal and sexual response, attention to sexual stimuli may need to be targeted in intervention.
Current interventions for sexual dysfunction incorporate attentional aspects. For example, sensate focus (i.e., focusing on one's own sensations and perceptions during arousal instead of goal-oriented behavior) and cognitive-behavioral therapy (i.e., replacing unhelpful cognitions that arise before, during, and after sex with more balanced thoughts) each attempt to augment sexual response through attentional or cognitive mechanisms [45, 46]. Experimental studies investigating other interventions that target aspects of attentional focus, such as mindfulness-based interventions, also find benefit for sexual response [47, 48]. Mindfulness involves bringing one's attention to experiences in the present moment in a nonjudgmental way during sexual activity [49] and is thought to benefit sexual response by reducing judgmental, distracting thoughts and tuning in to present-moment awareness of the experience. Indeed, mindful attention toward bodily sensations (e.g., genital arousal) is associated with higher levels of sexual arousal in women [22, 23]. Although mindfulness benefits sexual response, the exact mechanisms by which this occurs remain unclear.

Thus, exploring the relative contribution of directing visual attention to external stimuli versus directing cognitive attention to internal cues may be a fruitful avenue for future investigations. To do this, we recommend a comprehensive examination of all aspects of sexual response, including attention (i.e., eye-tracking), sexual arousal measurements (i.e., self-report and physiological), and sexual motivation (i.e., approach and avoidance motivation), together with experimental manipulations of attentional focus. Such examinations may generate important knowledge about the specific mechanisms that influence the various aspects of sexual response (e.g., genital response, subjective sexual arousal, and sexual desire) and may thus result in more tailored clinical recommendations for different sexual function problems. Given the finding that the effect of visual distractors on attention does not differ as a function of sexual dysfunction, other types of distraction that do not rely on the visual system (e.g., cognitive distraction) may be more relevant. We would predict that cognitive distractions (e.g., nonsexual thoughts) may be more strongly linked with problems of low sexual desire because attention is drawn away from the present moment, impairing sexual response; as such, cognitive distractions might be a relevant treatment target for women with low sexual desire. There is some evidence supporting visual avoidance and fear of sexual cues among women with genital pain conditions, which may interfere with their sexual response. Thus, directing visual attention to sexual cues and linking these cues with pleasure rather than pain may be a focus of treatment for women with genital pain conditions.

Findings from visual attention research might also pave the way for the development of novel interventions to augment sexual response by directly targeting attentional mechanisms. Attentional bias modification (ABM) training interventions involve modifying attentional biases elicited by negative or threatening stimuli by training individuals to preferentially attend to neutral or positive stimuli [50]. ABM has demonstrated clinical utility in reducing symptoms of depression and anxiety using both reaction-time and eye-tracking methodologies [51, 52].
In recent years, some studies have failed to successfully modify attentional bias, in large part due to the adoption of less rigorous approaches that focus on ABM outcomes, rather than the process of eliciting change [53]. ABM has not yet been examined in the context of sexual dysfunctions, but we believe it could be a promising avenue to explore following careful consideration of the processes required to modify attentional bias. In contrast to ABM for depression and anxiety that reduces attentional bias elicited by emotionally salient stimuli (i.e., negative or threatening stimuli), ABM training for sexual dysfunction would involve strengthening attentional biases toward sexual stimuli in order to facilitate sexual arousal. While more research is required to evaluate the extent to which modifications to attention contribute to changes in sexual response and subsequently symptoms of sexual dysfunction, treatments addressing sexual concerns would likely benefit from targeting attentional processing of sexual stimuli.
Conclusion
Although the field is nascent and in need of more comprehensive examinations to determine which aspects of visual attention are most relevant for sexual response and functioning, research has elucidated some basic aspects to enhance our knowledge. Attention is a key mechanism involved in the initiation and regulation of sexual response. Specifically, controlled visual attention correlates with sexual outcomes, such that increasing attention to sexual stimuli facilitates sexual arousal. Attentional processing of sexual stimuli appears to be impaired in some women with sexual dysfunction. Existing treatments that target attention are effective in improving sexual response and reducing sexual dysfunction. Future research that better delineates specific attentional mechanisms, their possible modification, and the relationship with sexual response outcomes may be beneficial for generating new attention-based interventions to treat sexual dysfunction.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-07-09T13:46:47.823Z | 2021-07-08T00:00:00.000 | {
"year": 2021,
"sha1": "8fe8a1847052cd6c64e86eb79306a8efedbf9e77",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11930-021-00312-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8fe8a1847052cd6c64e86eb79306a8efedbf9e77",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
220525095 | pes2o/s2orc | v3-fos-license | Gallibacterium anatis: Moleculer Detection of Tetracycline Resistance and Virulence Gene. J. World Poult. Res., 10 (2): 385-390. DOI: https://dx.doi.org/10.36380/jwpr.2020.45;
Gallibacterium anatis causes infections in the reproductive tract of egg-laying hens and is associated with increased mortality and decreased egg production. In this study, we used singleplex and multiplex PCR with specific primers to assess the presence of tetracycline resistance (Tcr) genes (tet A, B, C, D, E, G, H, K, L, M, O, S, P, Q and X), virulence genes [cytotoxic (RTX-like toxin, gtxA) and fimbrial (flfA)] and antibiotic resistance in G. anatis isolates. Among the 20 isolates tested, the highest antimicrobial resistance rates were observed for erythromycin, streptomycin and tilmicosin (100%), followed by colistin sulphate (65%), and cephalexin and tulathromycin (50%). Of the 20 isolates examined, 10 (50%) carried tetracycline resistance genes: 7 (35%) had tet(B), 2 (10%) had tet(G), and 1 (5%) each had tet(A), (D), (M) or (L). Six (30%) of the G. anatis isolates carried gtxA, but none carried the flfA gene. Based on the present results, it is concluded that virulence and Tcr genes could contribute to the pathogenicity of G. anatis, which poses a major risk to poultry health.
INTRODUCTION
Major health problems in the poultry industry can affect egg production. In particular, infectious diseases can reduce egg production and egg quality by directly affecting the reproductive system of hens. Such diseases can also indirectly diminish the overall health status of poultry (Clauer, 2009). Gallibacterium anatis (G. anatis) is a resident of the normal microflora of the lower genital and upper respiratory tracts of chickens and many other avian species (Bojesen et al., 2004; Rzewuska et al., 2007; Jones et al., 2013; Paudel et al., 2013; Persson and Bojesen, 2015; Lawal et al., 2018). Because G. anatis is associated with decreased egg production, salpingitis, respiratory problems and mortality in commercial laying hens, its infections have been a topic of research in recent years (Bojesen et al., 2011a; Sing, 2016; Chaveza et al., 2017). Knowledge of bacteria-host interactions and of the antimicrobial susceptibility of G. anatis in laying hens remains limited (Bisgaard et al., 2009; Johnson et al., 2013). Among the most important G. anatis virulence factors involved in colonization and invasion of the epithelium of the trachea, oropharyngeal tissues and oviduct are the IgG-destroying protease, the RTX-like toxin GtxA and hemagglutinin, which suppress the host immune response (Vaca et al., 2011; Lucio et al., 2012). Bacterial fimbriae are also important, not only as virulence factors but as targets for preventive vaccines (Kudirkiene et al., 2014; Sorour et al., 2015). Tetracycline resistance determinants (Tcrs) are widespread among Gram-negative organisms, including the Pasteurellaceae family, and are often found in multi-drug-resistant bacterial species (Levy et al., 1989; Roberts, 1996). To better understand G. anatis pathogenicity in poultry, this study aimed to determine the prevalence of Tcr genes and virulence-specific factor genes in G. anatis isolates from laying hens.
Primers
Primer pairs specific for the 14 tetracycline resistance genes and the G. anatis virulence genes are listed in Tables 1 and 2 (Ng et al., 2001; Bager et al., 2013; Paudel et al., 2013).
DNA Extraction
DNA extraction from G. anatis isolates was performed according to the instructions of the GeneJET Genomic DNA Purification Kit (Thermo Scientific, USA). Extracted DNA was stored at -20°C until use as template for amplification.
DISCUSSION
G. anatis is commonly found among the normal flora of both the upper respiratory tract and lower genital tract of chickens and other avian species, and can therefore be regarded as an opportunistic pathogen. The pathogenesis of G. anatis is not well characterized, particularly at the molecular level, and little is known about which antibiotic resistance genes and mechanisms are associated with its ability to cause disease. The current investigation is the first study of the antimicrobial resistance, tet and virulence genes of G. anatis in Turkey. Among the 20 isolates tested, the highest antimicrobial resistance rates were observed for erythromycin, streptomycin and tilmicosin (100%), followed by colistin sulphate (65%), and cephalexin and tulathromycin (50%), as shown in Table 3. The majority of the isolates exhibited susceptibility to amoxicillin-clavulanic acid, ceftiofur, enrofloxacin, florfenicol, gentamicin and trimethoprim-sulphamethoxazole, which is in agreement with other studies (Jones et al., 2013; El-Bastawy, 2014; El-Adawy et al., 2018; Lawal et al., 2018). All (100%) of the G. anatis isolates were sensitive to doxycycline, while 15% and 85% showed intermediate resistance and resistance to tetracycline, respectively. This especially high level of tetracycline resistance is similar to previous reports (Bojesen et al., 2011b; Jones et al., 2013; Abd El-Hamid et al., 2016; Lawal et al., 2018). In contrast to these findings, Lin et al. (2001) reported moderate sensitivity to tetracycline. Multi-drug resistance analysis revealed that 13 isolates, representing a large percentage (65%), were resistant to three or more antibiotics. The MDR patterns observed in this study were similar to those of a previous study (Bojesen et al., 2011b).

In this study, singleplex and multiplex PCR were used to detect Tcr and virulence genes in G. anatis isolates from laying hens. This study is one of the first attempts to examine the prevalence of these genes in G. anatis isolates in Turkey, and also the first to test for the presence of tet (P), (Q), (S) and (X) in addition to the previously studied tet genes (Hansen et al., 1993; Bojesen et al., 2011b). Four multiplex PCR groups were used to detect the 14 tetracycline resistance genes, and singleplex PCR was used to target the virulence-associated gtxA and flfA genes. Of the 20 G. anatis isolates, 10 (50%) carried genes for tetracycline resistance: 7 (35%) had tet(B), 2 (10%) had tet(G), and 1 (5%) each had tet(A), (D), (M) or (L). Another 2 (10%) carried both tet(B) and tet(G), while 1 (5%) had the tet(B), (D) and (A) genes. None of the other resistance genes was detected. Together, the tet(A), (B), (D), (G), (M) and (L) genes, which are associated with efflux and/or ribosomal protection mechanisms, were detected in G. anatis (Ng et al., 2001; Michalova et al., 2004). Unsurprisingly, the presence of these genes is consistent with previous studies (Kehrenberg et al., 2001; Kehrenberg et al., 2006; Bojesen et al., 2011b). The group I gene tet(B) was the most frequent among the 20 isolates, which is consistent with a report by Bojesen et al. (2011b). Compared with the other determinants, the tet(B) gene is especially well represented among Enterobacteriaceae (Roberts, 1996; Levy, 1998; Kehrenberg et al., 2006) and has been reported to be widely distributed among Pasteurellaceae (Vaca et al., 2011; Lucio et al., 2012; Bager et al., 2013; Kudirkiene et al., 2014; Persson and Bojesen, 2015; Zhang et al., 2017).

The pathogenicity of G. anatis is influenced by various factors encoded by different virulence genes that play important roles in pathogenic activities such as adhesion, invasion, intracellular survival, systemic infection and toxin production (Kristensen et al., 2011; Persson and Bojesen, 2015; Sorour et al., 2015; Sing et al., 2016). In particular, the gtx toxin is responsible for the hemolytic and leukotoxic effects of G. anatis (Bager et al., 2013; Kudirkiene et al., 2014; Persson and Bojesen, 2015). The flfA gene is also implicated in G. anatis virulence and is a target for prevention of diseases caused by G. anatis in laying hens (Bager et al., 2013; Kudirkiene et al., 2014; Persson and Bojesen, 2015). PCR amplification of these genes (gtxA and flfA) showed that 6 (30%) of the tested strains carried gtxA, but none had flfA. All of the isolates in this study displayed hemolytic characteristics, consistent with expectations about the value of detecting gtx for determining pathogenic activity. A previous study that focused on hemolytic strains of G. anatis found that gtx was present in 7/12 (58%) and 5/13 (38.4%) samples from chickens and ducks, respectively (Sorour et al., 2015). Meanwhile, a study by Kristensen et al. (2011) revealed that gtx can also be associated with nonhemolytic G. anatis strains. Other studies found high incidences (50-75%) of the flfA gene (Kudirkiene et al., 2014; Sorour et al., 2015), whereas none of the isolates in the present study had flfA. The absence of fimbriae in the isolates examined could have contributed to the lower pathogenicity of these G. anatis strains. The findings of this study indicated no correlation between the presence of Tcr genes and genes associated with virulence in the isolates tested. The virulence mechanisms by which G. anatis, typically a non-pathogenic component of the normal respiratory microflora, induces opportunistic respiratory tract infections under conditions that compromise immune responses or cause stress, such as inadequate nutritional intake (Bojesen et al., 2003), require further investigation.
CONCLUSION
The present study detected the genes associated with virulence and tetracycline resistance of Gallibacterium anatis isolated from laying hens in Turkey for the first time, and presents the first evidence to support the use of specific primers for the tet P, Q, S and X genes in this species. The findings of this study can increase knowledge of Gallibacterium anatis pathogenicity in poultry. | 2020-07-15T09:05:10.293Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "ad92869d14214988ad70930af6bc11db62507553",
"oa_license": null,
"oa_url": "https://doi.org/10.36380/jwpr.2020.45",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ad92869d14214988ad70930af6bc11db62507553",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
17928349 | pes2o/s2orc | v3-fos-license | Validation of the IPSS-R in lenalidomide-treated, lower-risk myelodysplastic syndrome patients with del(5q)
Myelodysplastic syndromes (MDS) are a heterogeneous group of hematopoietic stem cell disorders with variable clinical outcome. [1][2][3][4] The International Prognostic Scoring System (IPSS) categorizes untreated MDS patients into one of four risk groups (low, intermediate [Int]-1, Int-2 and high) based on the percentage of bone marrow blasts, presence of cytogenetic abnormalities and number of cytopenias. 2 The revised IPSS (IPSS-R), also developed in untreated MDS patients, refines risk group definitions by assigning greater weight to the cytogenetic risk categories, 5 the depth of cytopenias and by altering bone marrow blast percentage cutoff points. 6 The IPSS-R assigns patients to one of five risk categories (very low, low, intermediate, high and very high) and has been validated by several groups. [7][8][9] The prognostic value of the IPSS-R in treated patients has not been fully established, [8][9][10][11] and its prognostic value specifically in patients with deletion 5q [del(5q)] treated with lenalidomide is unknown. We assessed the prognostic value of IPSS-R using data from two multicenter studies (MDS-003 12 and MDS-004 13 ) that evaluated lenalidomide in red blood cell (RBC) transfusion-dependent patients with IPSS-defined low- or Int-1-risk MDS and del(5q), with or without additional chromosomal abnormalities.
Methods for the MDS-003 and MDS-004 studies have been previously described; 12,13 both studies were sufficiently similar to justify combined analyses. Patients received lenalidomide at doses of 5 or 10 mg/day for 28 days of each 28-day cycle, or 10 mg/day for 21 days of each 28-day cycle. This analysis included patients from MDS-003 and MDS-004 with available baseline IPSS 2 and IPSS-R 6 scores, who received lenalidomide from study start. IPSS-R adjusted for age (IPSS-R-A) was also calculated. 6 Risk groups with <5 patients were excluded. End points included overall survival (OS), time to progression to acute myeloid leukemia (AML) and rate of RBC transfusion independence (RBC-TI) ⩾ 26 weeks. AML was defined by French-American-British criteria. 1 OS and time to AML were estimated using the Kaplan-Meier method, with differences evaluated using the log-rank test. Rates of RBC-TI ⩾ 26 weeks were compared across risk groups using the Cochran-Armitage trend test. Univariate and multivariate Cox proportional hazards models were developed using the following covariates: IPSS (low-risk vs Int-1-risk), IPSS-R (multi-level), gender (female vs male), age (per year increase), French-American-British classification (refractory anemia (RA)/RA with ringed sideroblasts vs RA with excess blasts/chronic myelomonocytic leukemia), time since diagnosis (per year increase), transfusion burden (units/8 weeks), bone marrow blast count (as a continuous variable), bone marrow blasts (<5% vs ⩾5%), number of cytopenias (as a continuous variable), number of cytopenias (0-1 vs 2-3), platelet count, absolute neutrophil count, hemoglobin (each analyzed as a continuous variable), del(5q) status (isolated vs ⩾1 additional abnormalities), RBC-TI ⩾ 26 weeks response (yes vs no), RBC-TI ⩾ 26 weeks response (time-varying; yes vs no), serum ferritin (per log unit increase) and serum lactate dehydrogenase (per unit increase). Significant variables identified by univariate analysis (P ⩽ 0.10) were used to develop multivariate models using SAS version 9.2; the best model was chosen using the Akaike information criterion. Data cutoff dates were 1 October 2010 for MDS-003 and 26 November 2012 for MDS-004.
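For readers who wish to reproduce this style of analysis, the sketch below shows the corresponding survival comparisons using Python's lifelines library rather than SAS; the file name and column names (os_months, os_event, ipssr_group, age, platelets, rbc_ti_26wk) are hypothetical placeholders, not the study data:

```python
# Minimal sketch (not the study's SAS code) of the survival comparisons above,
# using Python's lifelines. The file and columns are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("mds_pooled.csv")  # hypothetical pooled MDS-003/MDS-004 data

# Kaplan-Meier OS estimate per IPSS-R risk group
for group, sub in df.groupby("ipssr_group"):
    km = KaplanMeierFitter(label=str(group))
    km.fit(sub["os_months"], event_observed=sub["os_event"])
    print(group, km.median_survival_time_)

# Log-rank test for OS differences across risk groups
res = multivariate_logrank_test(df["os_months"], df["ipssr_group"], df["os_event"])
print("log-rank P =", res.p_value)

# Multivariate Cox model on covariates retained from univariate screening (P <= 0.10)
cph = CoxPHFitter()
cph.fit(df[["os_months", "os_event", "age", "platelets", "rbc_ti_26wk"]],
        duration_col="os_months", event_col="os_event")
cph.print_summary()
```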
Overall, 83 patients (41%) had IPSS-defined low-risk disease and 118 (59%) had Int-1-risk disease. Using IPSS-R, 2, 55, 36, 7 and 1% of patients had very low, low, intermediate, high and very high-risk disease, respectively. A similar distribution was seen with IPSS-R-A, except that more patients were classified as very low risk (10 (5%) vs 3 patients (2%) with IPSS-R) because 8 younger patients in the IPSS-R low-risk group migrated to the IPSS-R-A very low-risk group. Of the 83 patients with IPSS-defined low-risk disease, 2 had very low risk (2%), 65 had low risk (78%) and 16 had intermediate risk (19%) by IPSS-R. Of the 118 patients with IPSS-defined Int-1-risk disease, 1 had very low risk (1%), 45 had low risk (38%), 57 had intermediate risk (48%), 14 had high risk (12%) and 1 had very high risk (1%) by IPSS-R. Baseline characteristics for the individual IPSS-R groups (low, intermediate, high), including IPSS score, transfusion burden, bone marrow blast count, number of cytopenias, platelet count, absolute neutrophil count, hemoglobin and cytogenetic complexity, were generally less favorable for patients in the higher IPSS-R risk groups.
OS was similar across IPSS-defined risk groups (P = 0.50; Figure 1a), but differed significantly across IPSS-R (P = 0.01; Figure 1b) and IPSS-R-A risk groups (P = 0.02; Figure 1c). In multivariate models, IPSS-R was independently associated with OS (low vs high; hazard ratio 0.45; P = 0.02), but IPSS was not (hazard ratio 0.97; P = 0.87). Other independent prognostic factors associated with improved OS included younger age, achieving RBC-TI ⩾ 26 weeks and higher baseline platelet count. For each prognostic system, 20-32% of patients progressed to AML across all risk groups. Time to AML progression was similar across risk groups by IPSS (P = 0.29), IPSS-R (P = 0.30) and IPSS-R-A (P = 0.83). The lack of difference in AML progression rates may be due to competing risks of death in an older patient population, which interceded prior to AML progression, or due to smaller sample size, which precluded adequate power calculations.
The proportion of patients achieving RBC-TI ⩾ 26 weeks was similar across IPSS-defined risk groups (P = 0.53), but differed significantly across the low-to-high IPSS-R risk groups (P = 0.03) and the very low-to-high IPSS-R-A risk groups (P = 0.03) (Table 1).
In summary, compared with the IPSS, the IPSS-R demonstrated significant prognostic value for OS and rates of RBC-TI ⩾ 26 weeks in RBC transfusion-dependent, lenalidomide-treated patients with IPSS-defined low- or Int-1-risk MDS and del(5q). This is likely due to its greater sensitivity to the depth of cytopenias and the greater relative weight it accords to cytogenetic risk. It should be noted, however, that the total lenalidomide dose received and the duration of treatment differed across the IPSS-R risk groups. In treatment cycle 1, the median total dose of lenalidomide was 210, 180 and 155 mg for the IPSS-R low-, intermediate- and high-risk groups, respectively, due to dose interruption and dose reduction. The median duration of treatment was 467, 365 and 118 days for the three IPSS-R risk groups, respectively, which reflects either better tolerability of lenalidomide for patients with lower-risk disease, or confounding by the IPSS-R group itself with respect to bone marrow reserve and the natural history of disease. Neither IPSS-R nor IPSS was prognostic for AML progression, probably due to the small number of reported AML events and competing risks for death.
The prognostic value of IPSS-R has been assessed in MDS patients treated with other therapies, particularly hypomethylating agents, and in 1314 patients receiving best supportive care, induction chemotherapy or allogeneic transplantation, in whom it was superior to the World Health Organization Prognostic Scoring System in predicting OS; [8][9][10][11]14 however, this is the first study to validate the utility of IPSS-R beyond the initial MDS diagnosis in patients treated with lenalidomide. Similarly, the World Health Organization Prognostic Scoring System has been applied to an untreated population of 381 del(5q) patients and was found to be valid for predicting the risk of AML transformation, highlighting the importance of transfusion dependency as a prognostic marker. 15 Although limited by its retrospective nature, the current analysis represents the largest available population of lenalidomide-treated patients with transfusion-dependent lower-risk MDS and del(5q). These results further support the use of IPSS-R as a validated prognostic tool in clinical practice.
CONFLICT OF INTEREST
MAS has received research funding from Celgene Corporation, and has served on the advisory boards for Celgene Corporation and Amgen. ASS, JSL, and MMS are employees of and hold equity in Celgene Corporation. PF has received honoraria and research funding from Celgene Corporation. GFS is on the advisory board of, and has received honoraria and research funding from, Celgene Corporation. FD has received honoraria from Celgene and Novartis. AFL is a consultant for and has received honoraria from Celgene Corporation. PLG and JMB declare no conflict of interest. | 2018-04-03T04:38:30.242Z | 2014-08-01T00:00:00.000 | {
"year": 2014,
"sha1": "750013b993acaab06494b7a5ce27ef105d1cbf86",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/bcj201462.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "750013b993acaab06494b7a5ce27ef105d1cbf86",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230587137 | pes2o/s2orc | v3-fos-license | Long-term target tracking combined with re-detection
Long-term visual tracking involves more challenges and is closer to realistic applications than short-term tracking. However, the performance of most existing methods has been limited in long-term tracking tasks. In this work, we present a reliable yet simple long-term tracking method, which extends the state-of-the-art learning adaptive discriminative correlation filters (LADCF) tracking algorithm with a re-detection component based on the support vector machine (SVM) model. The LADCF tracking algorithm localizes the target in each frame, and the re-detector is able to efficiently re-detect the target in the whole image when tracking fails. We further introduce a robust confidence evaluation criterion that combines the maximum response criterion and the average peak-to-correlation energy (APCE) to judge the confidence level of the predicted target. When the confidence degree is generally high, the SVM is updated accordingly; if the confidence drops sharply, the SVM re-detects the target. We perform extensive experiments on the OTB-2015 and UAV123 datasets. The experimental results demonstrate the effectiveness of our algorithm in long-term tracking.
Introduction
Visual object tracking, a hot research topic in computer vision, has been widely applied in various fields. However, many challenges remain unresolved, especially target disappearance, partial occlusion, and background clutter, so designing a general and robust tracking algorithm remains a tough task.
A typical scenario of visual tracking is to track an unknown object in subsequent image frames given the initial state of the target in the first frame of the video. In the past few decades, visual object tracking technology has made significant progress [1][2][3][4][5][6][7][8][9][10]. These methods are very effective for short-term tracking tasks, in which the tracked object is almost always in the field of view. However, in realistic applications, the requirement is not only to track correctly, but also to track over a longer period of time [11]. During such periods, the tracking output is wrong whenever the target object is absent, and the training samples are then incorrectly annotated, which creates a risk of model drift. Therefore, it is important for long-term trackers to determine whether the target is absent and to have the capability of re-detection.
A long-term tracking task also requires the tracker, like a short-term tracker, to maintain high accuracy under challenges such as disappearance and occlusion, and especially to capture the target object stably throughout a long video [12]. Long-term tracking therefore presents additional challenges in two respects. The first issue is how to determine the confidence degree of the tracking results. In [13], the maximum response value of the target is used to determine the confidence of the tracking result: when the maximum peak of the response map is lower than a threshold, the result is judged unreliable. However, the response map may fluctuate drastically when the object is occluded or disappears, so using the maximum response value alone to judge confidence is unreliable. The average peak-to-correlation energy (APCE) criterion in [14] indicates the degree of fluctuation of the response map, but if the target is undergoing fast motion, the value of APCE will be low even if the tracking is correct; moreover, in [14] the APCE criterion is used only to decide tracker updates. The second issue is how to relocate out-of-view targets, which remains unresolved. The tracking-learning-detection (TLD) [15] algorithm exploits an ensemble of weak classifiers for global re-detection of the out-of-view target, but it struggles to classify the target object due to the huge number of scanning windows. The long-term correlation tracking (LCT) [13] algorithm proposes a random fern re-detection model to detect the out-of-view target. In [16], a spatial-temporal filter is learned in a lower-dimensional discriminative manifold to alleviate the influence of boundary effects, but the method still cannot solve the target disappearance problem.
This paper proposes a tracking algorithm combining the learning adaptive discriminative correlation filter tracker with a re-detector. The proposed method aims to perform robust re-detection and relocate the target when tracking fails. Our main contributions can be summarized as follows: i) We propose a stable long-term tracking strategy to track targets that may disappear or deform heavily during long-term tracking. With the confidence strategy adopted, the learning adaptive discriminative correlation filter (LADCF) tracks the target online, and the support vector machine (SVM) is updated when the confidence degree is generally high. In contrast, if the response map fluctuates heavily, the SVM is used as a re-detector to relocate the target. ii) We utilize not only the maximum response but also the APCE criterion in the re-detection component. The fusion of the two criteria can accurately determine the state of the tracker and improve the accuracy of the tracking system. iii) We evaluate the proposed tracking algorithm on the OTB-2015 [17] and UAV123 [18] datasets; the experimental results show that the proposed algorithm delivers more stable and accurate tracking in cases of occlusion, background clutter, etc. during long-term tracking.
The structure of the rest of the paper is as follows: Section 2 overviews the related work. Section 3 presents the proposed method. Section 4 reports the experimental results and analysis. Section 5 concludes the paper.
Correlation filter
Correlation filters have shown outstanding results for target tracking [17, 19]. These methods exploit the circular correlation of the filter in the frequency domain to locate the target object. Bolme et al. [4] propose the pioneering MOSSE tracker, using only gray-level image features to train the filter. The circulant structure of tracking-by-detection with kernels (CSK) tracker [20] employs illumination intensity features and applies DCFs in a kernel space. The kernelized correlation filter (KCF) [6] further improves CSK by using multi-channel histogram of oriented gradients (HOG) features. Danelljan et al. [5] exploit the color attributes of the target object and learn an adaptive correlation filter. The literature [21] proposes a patch-based visual tracker that divides the object and the candidate area evenly into several small blocks and uses the average score of all blocks to determine the optimal candidate, which greatly improves performance under occlusion. The literature [22] proposes an online representative sample selection method to construct an effective observation module that can handle occasional large appearance changes or severe occlusion.
The estimation of the target scale is another important aspect of an outstanding tracker: it not only improves accuracy, but also provides computational efficiency. The discriminative scale space tracking (DSST) tracker [23] performs translation estimation and scale estimation separately, using a scale pyramid to respond to scale changes. Li and Zhu [24] present an effective scale-adaptive scheme, which defines a scale pool and resizes the samples of each scale to the same size as the initial sample by bilinear interpolation.
The formulation of DCFs exploits circular correlation, which makes learning efficient through the fast Fourier transform (FFT). However, it induces circular boundary effects, which have a drastic negative impact on tracking performance. Danelljan et al. [25] suggest reducing these boundary effects by introducing a spatial regularization component; nevertheless, regularization makes the cost of model optimization higher. Galoogahi et al. [26] propose pre-multiplying by a fixed masking matrix containing the target regions to address this deficiency of DCFs, and apply the alternating direction method of multipliers (ADMM) [27] algorithm to solve the constrained optimization problem in real time. The context-aware correlation filter tracking (CACF) [28] algorithm selects background references around the target by considering global information and adds a background penalty to the closed-form solution of the filter. The discriminative correlation filter with channel and spatial reliability (CSRDCF) [29] method distinguishes foreground and background by segmenting colors in the search area. The learning adaptive discriminative correlation filters (LADCF) [16] approach adds adaptive spatial feature selection and temporal consistency constraints to alleviate the spatial boundary effect and temporal filter degradation problems of the DCF method.
Long-term tracking
Kalal et al. [15] propose the tracking-learning-detection (TLD) algorithm, which decomposes the tracking task into tracking, learning and detection. Tracking and detection facilitate each other: the short-term tracker provides training examples for the detector, while the detectors are implemented as a cascade to reduce computational complexity. Enlightened by the TLD framework, Ma et al. [13] propose a long-term correlation filter tracker using KCF as the baseline algorithm and a random fern classifier as the detector. The fully correlational long-term tracker (FCLT) [30] trains several correlation filters on different time scales as a detector and exploits the correlation response to link the short-term tracker and the long-term detector.
Methods
In this section, we describe our tracker. In Section 3.1, we introduce the main tracking framework of our algorithm, which is shown in Fig. 1. In Section 3.2, we introduce the tracker based on the LADCF correlation filter. In Section 3.3, we introduce the composite confidence evaluation criterion and the SVM-based re-detector.
The main framework of the algorithm
The proposed algorithm combines the DCF tracker and the re-detector for long-term tracking. First, the baseline correlation filter tracker estimates the translation in the tracking stage. Second, the maximum response value and the APCE criterion are utilized to judge the confidence level of the target. Finally, when the confidence value is higher than the threshold, the baseline tracker tracks the target alone; when the confidence level drops sharply, this indicates tracking failure.
In that case, we do not update the model and instead exploit the SVM model to re-detect the target object in the current frame. The structure of the algorithm is shown in Fig. 1.
The tracking framework is summarized as follows: (1) Position and scale detection: We utilize DSST to achieve target position and scale prediction. The target in the t-th frame is I_t, and the filter model is θ_model. When a new frame I_t appears, we extract multiple scale search windows {I_t^patch(s)} from it, s = 1, 2, …, S, with S denoting the number of scales. For each scale s, the search window patch is centered around the previous target center position p_(t−1) with a size of a^N n × a^N n pixels, where a is the scale factor and N = ⌊(2s − S − 1)/2⌋. The size of the basic search window is n × n, which is determined by the target size w × h and the padding parameter ϱ. Bilinear interpolation is applied to resize each patch to n × n. Then, we extract multi-channel features for each scale search window as χ(s) ∈ ℝ^(D²×L). Given the filter template, the response score can be calculated efficiently in the frequency domain as [16]

f(s) = F⁻¹( Σ_(i=1)^(L) χ̂_i(s)* ⊙ θ̂_model,i ),

where ̂ denotes the discrete Fourier transform (DFT), * denotes complex conjugation and F⁻¹ denotes the inverse DFT (IDFT). After the IDFT is applied at each scale, the location of the maximum value of f ∈ ℝ^(D²×S) gives the estimated position and scale.
(2) Updating: We adopt the same updating strategy as the traditional DCF method:

θ_model = (1 − α) θ_model + α θ,

where α is the updating rate. More specifically, since θ_model is not available in the learning stage of the first frame, we use a pre-defined mask in which only the target region is activated to optimize θ, as in BACF. We then initialize θ_model = θ after the learning stage of the first frame.
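As an illustration, the detection and update steps above can be written compactly with FFTs; the following is a minimal numpy sketch under assumed array shapes (feature extraction omitted), not the authors' MATLAB implementation:

```python
# A minimal numpy sketch of steps (1) and (2): frequency-domain response over
# scales followed by linear interpolation of the model. Array shapes and the
# feature extraction are assumptions made for illustration.
import numpy as np

def detect(features, theta_model):
    """features: (S, D, D, L) multi-scale multi-channel patches;
    theta_model: (D, D, L) filter. Returns (score, scale, (row, col))."""
    theta_hat = np.fft.fft2(theta_model, axes=(0, 1))
    best = (-np.inf, 0, (0, 0))
    for s in range(features.shape[0]):
        x_hat = np.fft.fft2(features[s], axes=(0, 1))
        # f(s): sum the conjugate product over the L channels, then apply IDFT
        resp = np.real(np.fft.ifft2((np.conj(x_hat) * theta_hat).sum(axis=2)))
        if resp.max() > best[0]:
            best = (resp.max(), s, np.unravel_index(resp.argmax(), resp.shape))
    return best

def update(theta_model, theta, alpha):
    # theta_model = (1 - alpha) * theta_model + alpha * theta
    return (1.0 - alpha) * theta_model + alpha * theta
```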
Correlation filter tracker
In this paper, we set LADCF [16] as the baseline algorithm of our tracking approach.
The LADCF algorithm proposes a new DCF-based tracking method, which utilizes adaptive spatial feature selection and temporal consistency constraints to reduce the impact of the spatial boundary effect and temporal filter degradation. The feature selection process selects specific elements of the filter to retain distinguishable and descriptive information, forming a low-dimensional, compact feature representation. Considering an n × n image patch x ∈ ℝ^(n²) as a base sample for the DCF design, the circulant matrix for this sample is generated by collecting its full cyclic shifts, X^T = [x_1, x_2, …, x_(n²)]^T ∈ ℝ^(n²×n²), with the corresponding Gaussian-shaped regression labels y = [y_1, y_2, …, y_(n²)]. The spatial feature selection embedded in the learning stage can be expressed as

min_θ ‖y − x ⊛ (diag(ϕ) θ)‖₂², s.t. ‖ϕ‖₀ ≪ n²,

where θ denotes the target model in the form of a DCF, and ⊛ denotes the circular convolution operator. The indicator vector ϕ can potentially be expressed by θ with ‖ϕ‖₀ = ‖θ‖₀, and diag(ϕ) is the diagonal matrix generated from the indicator vector of selected features ϕ. The ℓ₀-norm is non-convex, and the ℓ₁-norm is widely used to approximate sparsity [24], so a temporally consistent spatial feature selection model is constructed by ℓ₁-norm relaxation [16]:

min_θ ‖y − x ⊛ θ‖₂² + λ₁‖θ‖₁ + λ₂‖θ − θ_model‖₁,

where λ₁ and λ₂ are tuning parameters with λ₁ ≪ λ₂, and θ_model denotes the model parameters estimated from the previous frame. The ℓ₂-norm relaxation of the temporal term is adopted to further simplify the expression:

min_θ ‖y − x ⊛ θ‖₂² + λ₁‖θ‖₁ + λ₂‖θ − θ_model‖₂²,

where the lasso regularization controlled by λ₁ selects the spatial features. In the above formula, the filter template model is used to increase smoothness between consecutive frames and thus promote temporal consistency. Since the multi-channel features share the same spatial layout [16], the multi-channel input is represented as X = {x_1, x_2, …, x_L} and the corresponding filter as θ = {θ_1, θ_2, …, θ_L}. The goal can then be extended to multi-channel features with structured sparsity [16]:

min_θ ‖y − Σ_(i=1)^(L) x_i ⊛ θ_i‖₂² + λ₁ Σ_(j=1)^(D²) ‖[θ_1^j, …, θ_L^j]‖₂ + λ₂ Σ_(i=1)^(L) ‖θ_i − θ_model,i‖₂²,

where θ_i^j is the jth element of the ith channel filter θ_i ∈ ℝ^(D²), and ⊙ denotes the element-wise multiplication operator used in the frequency-domain solution. The structured spatial feature selection term calculates the ℓ₂-norm of each spatial location and then applies the ℓ₁-norm to achieve joint sparsity. Subsequently, to optimize the above formula with ADMM [27], we introduce relaxation variables θ′ = {θ′_1, …, θ′_L} to construct a convex optimization objective [31]. We can then obtain the globally optimal solution of the model through ADMM by forming the augmented Lagrangian [16]

L(θ, θ′, H) = E(θ, θ′) + Σ_(i=1)^(L) η_i^T (θ_i − θ′_i) + (μ/2) Σ_(i=1)^(L) ‖θ_i − θ′_i‖₂²,

where H = {η_1, η_2, …, η_L} are the Lagrange multipliers, and μ > 0 is the corresponding penalty parameter controlling the convergence rate [16, 32]. As L is convex, ADMM is exploited to iteratively optimize the following sub-problems with guaranteed convergence:

θ^(k+1) = argmin_θ L(θ, θ′^(k), H^(k)), θ′^(k+1) = argmin_θ′ L(θ^(k+1), θ′, H^(k)), η_i^(k+1) = η_i^(k) + μ (θ_i^(k+1) − θ′_i^(k+1)).
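To make the ADMM split concrete, the sketch below shows one iteration with the group soft-thresholding proximal step implied by the structured sparsity term; ridge_solve is a hypothetical placeholder for the closed-form Fourier-domain least-squares step, so this is an illustrative outline rather than the authors' exact solver:

```python
# Illustrative sketch (not the authors' exact solver) of one ADMM iteration:
# a ridge-type theta-step, a group soft-thresholding step for the auxiliary
# variable, and the multiplier update.
import numpy as np

def group_soft_threshold(theta, tau):
    """theta: (D*D, L); rows are spatial locations across the L channels."""
    norms = np.linalg.norm(theta, axis=1, keepdims=True)           # per-location l2
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))  # shrink factor
    return theta * scale                                           # weak rows vanish

def admm_iteration(theta_aux, eta, theta_model, lam1, lam2, mu, ridge_solve):
    # theta-subproblem: regularized least squares toward theta_aux and theta_model
    theta = ridge_solve(theta_aux - eta / mu, theta_model, lam2, mu)
    # auxiliary subproblem: proximal operator of the group-lasso term
    theta_aux = group_soft_threshold(theta + eta / mu, lam1 / mu)
    # dual (multiplier) update
    eta = eta + mu * (theta - theta_aux)
    return theta, theta_aux, eta
```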
3.3 Re-detector

Confidence criterion
Most existing trackers do not consider whether the detection is accurate or not. In fact, once the target is detected incorrectly in the current frame, severely occluded, or completely missing, tracking may fail in subsequent frames. We therefore introduce a measure of the confidence degree of the predicted target, which is the first step in the re-detection model. The peak value and the fluctuation of the response map reveal the confidence of the tracking results: the ideal response map should have only one peak while all other regions are smooth; otherwise, the response map fluctuates intensely. If we continue to use uncertain samples to track the target in subsequent frames, the tracking model will be corrupted. Thus, we fuse two confidence evaluation criteria. The first one is the maximum response value F_max of the current frame.
The second one is the APCE measure, which is defined as

APCE = |F_max − F_min|² / mean( Σ_(w,h) (F_(w,h) − F_min)² ),

where F_max and F_min are the maximum and minimum responses of the current frame, respectively, and F_(w,h) is the element in the wth row and hth column of the response matrix. If the target is moving slowly and is easily distinguishable, the APCE value is generally high. However, if the target is undergoing fast motion with significant deformations, the value of APCE will be low even if the tracking is correct.
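The APCE measure and the fused confidence test translate directly into code; the following sketch assumes, as is common in practice, that the current values are compared against fixed fractions (beta1, beta2, chosen here for illustration) of their historical means:

```python
# Direct transcription of F_max and APCE plus the fused confidence test:
# tracking is trusted only when both measures exceed fixed fractions of their
# historical means. The fractions beta1 and beta2 are illustrative assumptions.
import numpy as np

def apce(response):
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def is_confident(response, hist_fmax, hist_apce, beta1=0.6, beta2=0.6):
    ok_peak = response.max() >= beta1 * np.mean(hist_fmax)  # max-response test
    ok_apce = apce(response) >= beta2 * np.mean(hist_apce)  # fluctuation test
    return ok_peak and ok_apce
```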
Target re-detection
In this section, we describe the re-detection mechanism used in the case of tracking failure. In the re-detection module, when the confidence level is lower than the threshold, the SVM [33] is used for re-detection. Consider a sample set (x_1, y_1), (x_2, y_2), …, (x_i, y_i), …, x_i ∈ ℝ^d, including positive and negative samples, where d is the dimension of the samples and y_i ∈ {+1, −1} are the sample labels; the SVM separates the positive and negative samples to obtain the best classification hyperplane. The classification plane is defined as [33]

ω^T x + b = 0,

where ω represents the weight vector and b denotes the bias term. In the linearly separable case, for a given dataset T and classification hyperplane, the following conditions are used for classification judgment:

ω^T x_i + b ≥ +1 for y_i = +1, and ω^T x_i + b ≤ −1 for y_i = −1.

Combining the two inequalities, we can abbreviate them as

y_i (ω^T x_i + b) ≥ 1.

The distance from each support vector to the hyperplane can be written as

d = |ω^T x + b| / ‖ω‖ = 1 / ‖ω‖.

The problem of solving for the maximum-margin hyperplane of the SVM model can then be expressed as the following constrained optimization problem:

min_(ω,b) (1/2)‖ω‖², s.t. y_i (ω^T x_i + b) ≥ 1, i = 1, 2, …, n.

Next, we introduce the Lagrangian function to solve the above problem [33]:

L(ω, b, c) = (1/2)‖ω‖² − Σ_i c_i [ y_i (ω^T x_i + b) − 1 ],

where c_i > 0 are the Lagrange multipliers; the solution of the optimization problem satisfies that the partial derivatives of L(ω, b, c) with respect to ω and b are 0. The corresponding decision function is expressed as

f(x) = sgn( Σ_i c_i y_i x_i^T x + b ).

New sample points are then fed into the decision function to obtain their classification.
In the case of linear inseparability, we use a kernel function to map the samples into a high-dimensional space. In this work, we use the Gaussian kernel function

K(x_i, x) = exp( −‖x_i − x‖² / (2σ²) ),

so that the decision function becomes f(x) = sgn( Σ_i c_i y_i K(x_i, x) + b ). When a frame is re-detected, an exhaustive search is performed on the current frame using a sliding window, and HOG features are extracted from each image patch as the x vector in the above formula. Then f(x) is calculated by formula (16), and we take the sample area with the largest f(x). When its response value is greater than the threshold, it is used as the new location of the tracking target.
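A minimal sketch of this sliding-window re-detection search is given below; hog is a stand-in for the HOG extractor, and the support vectors, coefficients and bias are assumed to come from the training stage described next:

```python
# Sketch of the exhaustive sliding-window search: score every candidate patch
# with the kernel decision function and accept the best one only above the
# re-detection threshold. hog() is a hypothetical stand-in for the HOG
# extractor; sv, coef (c_i * y_i per support vector) and bias come from training.
import numpy as np

def gaussian_kernel(sv, x, sigma=0.5):
    return np.exp(-np.sum((sv - x) ** 2, axis=-1) / (2.0 * sigma ** 2))

def redetect(frame, windows, hog, sv, coef, bias, tr=0.13):
    """windows: iterable of (x, y, w, h) boxes covering the whole frame."""
    best_score, best_win = -np.inf, None
    for (x, y, w, h) in windows:
        feat = hog(frame[y:y + h, x:x + w])                     # HOG vector
        score = np.dot(coef, gaussian_kernel(sv, feat)) + bias  # f(x)
        if score > best_score:
            best_score, best_win = score, (x, y, w, h)
    return best_win if best_score > tr else None
```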
The training process of the SVM is as follows [33]. The confidence level determines the quality of each sample: samples with high confidence are used as positive samples, and samples with low confidence are used as negative samples. HOG features are extracted from the positive and negative samples to obtain the feature vectors, represented as (x_i, y_i), i = 1, 2, …, n, where n denotes the number of training samples, x_i represents the HOG feature vector, and y_i represents the label of the extracted sample: if the training sample is positive, then y_i = 1, and if it is negative, then y_i = −1. For this binary classification problem, the loss function of formula (18) is the signed margin

ℓ(x, y) = y · f(x).
When the value of the loss is negative, the parameters of the SVM are updated as

ω ← ω + c_j · y · x,

where c_j is the Lagrangian coefficient, x is the feature vector extracted from the sample, and y is the label corresponding to the sample.
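For illustration, this update can be sketched for a linear decision function (a simplification of the kernel form used above); the margin loss and the perceptron-style correction are assumptions consistent with the description, not a verified transcription of the authors' code:

```python
# A minimal sketch of the online update just described, assuming a linear
# decision function f(x) = w.x + b and the signed-margin loss of formula (18).
# The step size c_j stands in for the Lagrangian coefficient; all names are
# illustrative.
import numpy as np

def online_svm_update(w, b, x, y, c_j=0.01):
    loss = y * (np.dot(w, x) + b)   # formula (18): negative when misclassified
    if loss < 0:                    # update only on misclassified samples
        w = w + c_j * y * x         # formula (19)-style correction of the weights
        b = b + c_j * y             # matching correction of the bias
    return w, b
```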
Experimental results and discussion
In this section, we evaluate the proposed algorithm on the OTB-2015 and UAV123 benchmarks [17, 18] and compare it with other detection-based tracking algorithms and classical correlation filter tracking algorithms. Section 4.1 introduces the experimental platform and parameter settings. Section 4.2 introduces the experimental datasets and evaluation criteria. Section 4.3 presents the quantitative evaluation of the results, and Section 4.4 presents the qualitative evaluation.
Experimental setups
The experiments were run in MATLAB R2016a on a machine with an Intel Core i5-4200M processor, 4 GB of memory, and the Windows 8 operating system. The regularization parameters λ₁ and λ₂ are set to 1 and 15, respectively; the initial penalty parameter μ = 1; the maximum penalty parameter μ_max = 20; the maximum number of iterations K = 2; the padding parameter ϱ = 4; the scale factor a = 1.01; the re-detection threshold tr = 0.13; and the update threshold tu = 0.20.
Experimental datasets and evaluation criteria
The OTB-2015 dataset contains 100 video sequences covering 11 challenges, namely illumination variation (IV), scale variation (SV), occlusion (OCC), deformation (DEF), motion blur (MB), fast motion (FM), in-plane rotation (IPR), out-of-plane rotation (OPR), out-of-view (OV), background clutter (BC), and low resolution (LR). UAV123 consists of 143 challenging sequences, including 123 short-term sequences and 20 long-term sequences. The evaluation adopts the distance precision and overlap precision of the one-pass evaluation (OPE) protocol. The overlap precision is defined as the percentage of frames whose overlap ratio exceeds 0.5, and the distance precision is the percentage of frames whose center location error is within 20 pixels.
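For reference, both OPE criteria reduce to a few lines of code; this sketch assumes axis-aligned (x, y, w, h) boxes:

```python
# A compact sketch of the two OPE criteria above, assuming axis-aligned boxes
# given as (x, y, w, h); box lists are per-frame predictions and ground truth.
import numpy as np

def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = iw * ih
    return inter / (aw * ah + bw * bh - inter + 1e-12)

def ope_metrics(pred, gt):
    pc = np.array([[x + w / 2.0, y + h / 2.0] for x, y, w, h in pred])
    gc = np.array([[x + w / 2.0, y + h / 2.0] for x, y, w, h in gt])
    err = np.linalg.norm(pc - gc, axis=1)
    distance_precision = float(np.mean(err <= 20))                    # 20-pixel rule
    overlap_precision = float(np.mean([iou(p, g) > 0.5
                                       for p, g in zip(pred, gt)]))   # IoU > 0.5
    return distance_precision, overlap_precision
```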
Quantitative evaluation
In this paper, we compare our algorithm with six state-of-the-art trackers on the OTB-2015 dataset, including two tracking-by-detection algorithms, LCT [13] and large margin object tracking with circulant feature maps (LMCF) [14], and four mainstream correlation filter tracking algorithms, CSK [20], KCF [6], DSST [23], and background-aware correlation filters (BACF) [26]. Figure 2 shows the OPE success rate and precision plots of these algorithms. It can be seen from Fig. 2 that the proposed algorithm improves significantly on the other algorithms: the precision and success rate of our method are 81.4% and 59.9%, respectively. Through the experiments, we found that short-term trackers learn erroneous information when the target is occluded or disappears; the template is then polluted and the tracker is unable to follow the target correctly in subsequent frames. Compared with the BACF algorithm, our method improves the precision and success rate by 14.8% and 7.8%, respectively. The LCT exploits the random fern algorithm to re-detect targets, which is slow to run; compared with this tracking-by-detection LCT algorithm, the proposed algorithm improves the precision and success rate by 8% and 9.3%, respectively. Compared with the LMCF algorithm with multi-peak detection, our method increases the precision and success rate by 11.2% and 11.1%, respectively.
In order to further verify the superiority of our method, we analyze the tracking performance through attribute-based comparison in Table 1, which shows the area under the curve (AUC) scores of the success plots with 11 different attributes.
As shown in Table 1, the proposed algorithm achieves the best performance on all 11 attributes. In the case of OCC, our algorithm's score is 10.1% higher than that of the LMCF algorithm (tracking-by-detection style) and 12% higher than that of the BACF algorithm (short-term correlation filtering style). For FM sequences, our algorithm is 4.6% higher than the second-ranked BACF algorithm and 5.1% higher than the LCT algorithm, which uses random fern re-detection. Under these conditions, the target model may be contaminated, which makes target tracking difficult; our model solves this problem through accurate re-detection via the SVM. In the case of OPR, LCT achieves a score of 48.5%, and our tracker provides a gain of 8.7%. This is because the baseline algorithm applied in this paper alleviates the influence of boundary effects to a certain extent and can achieve higher accuracy when target rotation occurs. In the case of OV, the score of our algorithm is 50.7%, which is 3.9% higher than the BACF algorithm. The reason is that our template stops updating when the target goes out of view and the SVM is used to detect the target again; when the target reappears in the field of view, our model is not contaminated and can continue tracking the target correctly. Furthermore, we present the OPE success rate and precision plots on UAV123 in Fig. 3.
As shown in Fig. 3, our method beats the other algorithms on the UAV123 dataset. Specifically, our method achieves scores of 65.2% and 46.1%, which are better than LCT by 13.1% and 13.4%, respectively. At the same time, the proposed method is 16.1% and 10.5% higher than BACF, because the proposed re-detection approach provides a novel solution for re-detecting low-confidence targets and thereby improves tracking accuracy.
Qualitative evaluation
We selected seven representative benchmark sequences from OTB-2015 to demonstrate the effectiveness of our algorithm. The visual evaluation results are shown in Fig. 4. As can be seen from Fig. 4, in the "Jogger" sequence the target is blocked at the 70th frame and reappears in the field of view at the 84th frame. Thanks to the re-detection mechanism, our tracker tracks the target correctly, whereas the short-term correlation filter trackers learn erroneous information during the occlusion, which leads to tracking errors in subsequent frames. In the "Soccer" and "Matrix" sequences, algorithms such as LCT and BACF lose the target due to background clutter; in contrast, the proposed algorithm successfully handles such situations. In the "Car4" sequence, which involves scale change, the scale-aware DSST algorithm and the proposed algorithm both perform well. In the "Shaking" sequence, the proposed algorithm loses its target in the 17th frame due to illumination changes and a similar background; however, owing to the re-detection mechanism, it relocates the target at the 18th frame and keeps tracking correctly.
In the "Bolt" sequence, our algorithm follows the target very closely even in the case of rapid motion of the target.In the "Dog" sequence, when the target is deformed, our algorithm can accurately track the target, while the BACF and LMCF algorithms have a certain offset.It can be seen from the above description that our algorithm achieves higher accuracy in these 7 sequences.Furthermore, we compare our method with the baseline tracker using 7 representative benchmark sequences of OTB-2015 in Fig. 5.The first three rows are short-term sequences which none of which exceeds 1000 frames, and the last four rows are longterm sequences, which all exceed 1000 frames.As shown in Fig. 5, in the experiments for the short-term sequences, the LADCF tracker drifts when the target objects undergo heavy occlusions (Soccer) and does not re-detect targets in the case of tracking failure.Moreover, the LADCF tracker fails to handle background clutter and deformation (Ironman, Bird1), since only the tracking component without the re-detection mechanism makes it less effective to discriminate targets from the cluttered background.In contrast, our method can track the object correctly on these challenging sequences because the trained detector effectively redetects the target objects.
In the Sylvester and Lemming sequences, the LADCF algorithm tracks incorrectly due to the rotations encountered in these sequences, while our method is more robust to these conditions. In the Liquor sequence, the LADCF tracking algorithm behaves similarly to ours before the target is occluded, but once the target is occluded, the LADCF method fails to locate it. In the Rubik sequence, the target object undergoes deformation and color variation at the 854th frame, so the LADCF tracker fails to track it correctly; our method continues to track successfully thanks to re-detection. In our method, if tracking fails, we perform the re-detection procedure and re-initialize the tracker so that the target can be recovered. Thus, our method can track the target correctly throughout.
Overall, our method performs well in estimating the positions of the target objects, which can be attributed to three reasons. Firstly, the combined confidence criterion of our method can correctly identify the target even in very low-confidence cases. Secondly, our re-detection component effectively re-detects the target objects in the case of tracking failure. Thirdly, our baseline tracker achieves adaptive discriminative learning ability on a low-dimensional manifold and improves the tracking effect.
Conclusions
This paper proposes a long-term target tracking algorithm whose two main components are a state-of-the-art LADCF short-term tracker, which estimates the target translation, and a re-detector, which re-detects the target objects in the case of tracking failure. Besides, the algorithm introduces a robust confidence criterion to evaluate the confidence of the predicted target: when the confidence value is lower than a specified threshold, the SVM model is used to re-detect the target objects and the template is not updated. The algorithm is suitable for long-term tracking because it can detect the target accurately in real time and updates the template only when it is highly reliable. Numerous experimental results show that the proposed algorithm achieves better performance than the other tracking algorithms.
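To make the update/re-detection logic above concrete, here is a minimal sketch of a response-map confidence check in Python. It assumes an APCE-style criterion (average peak-to-correlation energy, as popularized by LMCF [14]) together with the peak response; the paper's exact combined confidence criterion may differ, and the thresholds are illustrative.

```python
import numpy as np

def response_confidence(response):
    """Peak value and APCE of a correlation-filter response map; both drop
    sharply under occlusion or out-of-view, signalling low confidence."""
    r_min, r_max = response.min(), response.max()
    apce = (r_max - r_min) ** 2 / (np.mean((response - r_min) ** 2) + 1e-12)
    return r_max, apce

def needs_redetection(response, peak_thr=0.25, apce_thr=20.0):
    """If either measure is low: freeze the template (no update) and run
    the SVM re-detector instead of the normal tracking update."""
    peak, apce = response_confidence(response)
    return peak < peak_thr or apce < apce_thr
```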
Fig. 1. The framework of the algorithm proposed in this paper.
Fig. 2. Precision and success rate plots of the proposed method and state-of-the-art methods over OTB-2015 benchmark sequences using one-pass evaluation (OPE).
Fig. 4. The tracking results of each algorithm on 7 video sequences (from top to bottom: Jogging, Soccer, Matrix, Car4, Shaking, Bolt, and Dog).
Table 1. The AUC scores of success plots on OTB-2015 sequences with different attributes.
Fig. 3. Precision and success rate plots of the proposed method and state-of-the-art methods over UAV123 benchmark sequences using one-pass evaluation (OPE).
"year": 2021,
"sha1": "4be712d3c72940f35d43a3e9e6bdaa323b915e59",
"oa_license": "CCBY",
"oa_url": "https://asp-eurasipjournals.springeropen.com/track/pdf/10.1186/s13634-020-00713-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "114b0e4d329f061518f87b9cce121f2b5b27fd02",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
MM-GATBT: Enriching Multimodal Representation Using Graph Attention Network
While there have been advances in Natural Language Processing (NLP), much of their success comes from applying a self-attention mechanism to single or multiple modalities. Although this approach has brought significant improvements in multiple downstream tasks, it fails to capture the interactions between different entities. We therefore propose MM-GATBT, a multimodal graph representation learning model that captures not only the relational semantics within one modality but also the interactions between different modalities. Specifically, the proposed method constructs an image-based node embedding that contains the relational semantics of entities. Our empirical results show that MM-GATBT achieves state-of-the-art results among all published papers on the MM-IMDb dataset.
Introduction
Despite the huge success of learning algorithms for applications involving unimodal data such as text, less is known for applications involving multimodal data, i.e., scenarios where each data entity has attributes from multiple modes, such as text and image. While previous work shows that models with multimodal representations outperform unimodal ones in downstream tasks such as classification, VQA, and disambiguation, the benefit of a multimodal representation mostly comes from only one mode (such as text), while the other mode contributes only a marginal improvement. That is, the performance difference between text-only and multimodal representations is smaller than that between image-only and multimodal representations (Arevalo et al., 2017; Vielzeuf et al., 2018; Moon et al., 2018; Kiela et al., 2020; Singh et al., 2020; Kiela et al., 2021).
We suspect that improper usage of the image modality limits the creation of multimodal representations. Existing multimodal models have applied a self-attention mechanism or created a graph from a single modality's attributes. However, these approaches ignore the interactions among entities, among modalities, or both: one modality is tied within its own space and cannot see beyond it. To overcome this limitation, we propose a novel framework that constructs a multimodal entity graph, which simultaneously captures the interconnections between different data entries and data modalities. Our idea is motivated by homophily, in which similar nodes tend to be connected and tend to share similar labels (Hamilton, 2020).

Figure 1: Given movie poster and text information, the problem is to predict the multilabel genres of movies. Our method narrows this problem down to a node classification task by constructing a multimodal entity graph where each node represents a movie entity and each edge represents a shared feature between movie entities.
We demonstrate our claim on a multilabel classification task using the MM-IMDb dataset (Arevalo et al., 2017), as in Figure 1. In the MM-IMDb dataset, each movie entity is provided with an image and text, and our goal is to predict the movie's genres. Using these data, we construct a graph where each node represents a movie and is given the movie image as an attribute. Furthermore, we connect two nodes if the corresponding movies share features, e.g., the same producer, director, etc. By capturing the dependencies and interactions between entities using a Graph Attention Network (GAT) (Veličković et al., 2018), we expect to gain latent information that cannot be extracted by the image encoder alone.
The contributions of this work are as follows: (1) We propose a novel Multimodal Graph Attention Network (MM-GATBT) which enables interaction between data modalities. (2) To the best of our knowledge, this is the first attempt to construct an image-based entity graph to enrich image representations by capturing the relational semantics between entities. (3) MM-GATBT achieves state-of-the-art results on the multilabel classification task among all published papers on the MM-IMDb dataset.
Background
Multimodal Representation Joint representation is one of the most popular methods to combine modality vectors. It is straightforward to implement because it concatenates the modalities into a single vector. Guo et al. (2019) explain that it is an intuitive approach to learn a shared semantic subspace from different modalities, providing richer and complementary contexts. Bayoudh et al. (2021) also describe three fusion methods depending on when the modalities are combined. The early fusion method (Sun et al., 2018) fuses data before the feature extractor or classifier to preserve the richness of the original features. The late fusion method fuses data after extracting features from the separate modalities. The hybrid method applies both early and late fusion at some point in the architecture to take advantage of both worlds.
Graph Neural Network A Graph Neural Network (GNN) is powered by neural message passing and generates node embeddings. A graph G = (V, E) is defined as a tuple such that V is a set of vertices and E ⊆ V × V is a set of edges. We also employ the node feature matrix X ∈ R^(d×|V|), where d is the feature dimension. The vanilla GNN (Kipf and Welling, 2017) averages neighbor messages at each layer using a mean aggregation function. Formally, it is defined by

h_i^l = σ( U^l · (1/Deg_i) · Σ_{j ∈ N_i} h_j^{l−1} ),    (1)

where l is the layer index, h_i^l is the hidden representation of node i at layer l, and U^l is a learnable parameter.
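The mean-aggregation update of Eq. (1) fits in a few lines of NumPy; this sketch is only illustrative (random weights, a toy 3-node path graph) and is not the paper's implementation.

```python
import numpy as np

def gnn_mean_layer(A, H, U, sigma=np.tanh):
    """One vanilla-GNN layer, Eq. (1): average neighbour states, apply the
    learnable map U, then a non-linearity. A: (N, N) adjacency; H: (N, d)."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # Deg_i, guard degree 0
    neighbour_mean = (A @ H) / deg                   # (1/Deg_i) sum_{j in N_i} h_j
    return sigma(neighbour_mean @ U.T)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
H = np.random.randn(3, 4)
U = np.random.randn(2, 4)
print(gnn_mean_layer(A, H, U).shape)  # (3, 2)
```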
Here, Deg_i and N_i denote the degree and the neighbor set of node i, respectively, and σ(·) is a non-linear activation function. The Graph Convolutional Network (GCN) (Kipf and Welling, 2017) improves the vanilla GNN by employing symmetric normalization (Hamilton, 2020). This model runs a spectral-based convolution operation. Because the spectral method assumes a fixed graph, it often leads to poor generalization ability (Wu et al., 2021). Therefore, spatial-based models such as GraphSAGE (Hamilton et al., 2017) are often considered to enable inductive generalization:

h_i^l = σ( U^l [ h_i^{l−1} ; h_j^{l−1} ] ).    (2)

In Eq. (2), [h_i^{l−1} ; h_j^{l−1}] denotes the concatenation of the node's previous hidden state h_i^{l−1} and an aggregated representation h_j^{l−1} of its local neighbor nodes j ∈ N_i.

Attention Mechanism The attention mechanism (Luong et al., 2015; Bahdanau et al., 2015) computes a probability distribution α = (α_{t1}, α_{t2}, ..., α_{ts}) over the encoder's hidden states h^(s) that depends on the decoder's current hidden state h^(t). Luong et al. (2015) compute global attention as

α_{ts} = exp(score(h^(t), h^(s))) / Σ_{s′} exp(score(h^(t), h^(s′))),

where s is the index of a source hidden state and t the index of the target hidden state. This method was introduced to assign more importance to the more relevant h^(s), and has since been developed into self-attention (Vaswani et al., 2017) and GAT (Veličković et al., 2018). The self-attention mechanism computes a weighted average of the input vectors; similarly, GAT performs attention over the neighbor nodes.
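As a small illustration of global attention, the sketch below uses the dot-product score, one of the score variants of Luong et al. (2015); the choice of score function here is an assumption.

```python
import numpy as np

def global_attention(h_t, H_s):
    """h_t: (d,) target hidden state; H_s: (S, d) source hidden states.
    Returns the attention distribution alpha and the context vector."""
    scores = H_s @ h_t                     # score(h_t, h_s) for every s
    e = np.exp(scores - scores.max())      # numerically stable softmax
    alpha = e / e.sum()
    context = alpha @ H_s                  # weighted average of the sources
    return alpha, context

alpha, ctx = global_attention(np.random.randn(8), np.random.randn(5, 8))
print(alpha.sum())  # 1.0
```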
Problem Statement
We address the multilabel classification task. We assume that n data samples are given, where each sample corresponds to a movie entity that has a text and an image attribute. The goal is to classify the movie's genres. Note that this is a multilabel classification task, as each movie can belong to more than one genre. Therefore, given text data X_txt = {T_1, T_2, ..., T_n} and image data X_img = {I_1, I_2, ..., I_n}, we train a function f that predicts a binary label y_{ij} for all j, where i is the index of an entity and j is the index of a class. The binary labels y_{ij} are only accessible for the training set. Our approach to this problem is to construct a graph and use graph neural networks; the details are discussed in Section 3.3 below.

Figure 2: MM-GATBT concatenates the text embedding and the image-based node embedding to generate the joint multimodal representation used by the classifier. 1), 2), and 3) denote the token embedding, segment embedding, and positional embedding, respectively, following the BERT-style tokenization method.
Model Overview
MM-GATBT consists of three main components: a text encoder, an image encoder, and a GNN. We chose BERT (Devlin et al., 2019) as the text encoder, EfficientNet (Tan and Le, 2019) as the image encoder, and GAT (Veličković et al., 2018) as the GNN. The encoded images are used as node features in the GAT to learn the relational semantics of entities. We then fuse the text embedding and the image-based node embedding using MMBT (Kiela et al., 2020). We chose this architecture because, unlike ViLBERT and VisualBERT (Li et al., 2019), the encoders can be trained independently rather than jointly; that is, any of the three main components can easily be upgraded in the future. Thanks to this simple but powerful architecture, MM-GATBT leaves considerable room to increase its performance in the future.
Graph Construction
To represent relational semantics, we first construct an undirected graph G = (V, E) where a vertex represents an entity (i.e., a movie) and an edge denotes the presence of a shared feature between the corresponding entities (such as sharing a director). More precisely, if A = (A_{ij} : 1 ≤ i, j ≤ n) denotes the adjacency matrix of G, we have

A_{ij} = 1 if i ≠ j and T_i^feat ∩ T_j^feat ≠ ∅; A_{ij} = 0 otherwise,    (4)

where T_i^feat denotes the feature set of entity i. Since there are many possible combinations of features, we carefully chose the five features that empirically show the best performance: director, producer, writer, cinematographer, and art director.
For implementation purposes, we add self-loops to isolated vertices, i.e., those with degree zero. The graph G is constructed over the whole training and test dataset. While training vertices have access to their labels, we mask the labels of test vertices to prevent the model from seeing the ground truth during the training phase. A minimal construction sketch is given below.
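The sketch implements Eq. (4) with DGL; the O(n²) pairwise loop is kept for clarity, and the input format (one Python set of feature values per movie) is our assumption, not the paper's data loader.

```python
import torch
import dgl

def build_entity_graph(feature_sets):
    """feature_sets[i]: set of feature values (director, producer, writer,
    cinematographer, art director) of movie i. Edge iff the sets intersect."""
    n = len(feature_sets)
    src, dst = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if feature_sets[i] & feature_sets[j]:    # shared feature -> edge
                src += [i, j]
                dst += [j, i]                        # undirected: both ways
    g = dgl.graph((torch.tensor(src, dtype=torch.int64),
                   torch.tensor(dst, dtype=torch.int64)), num_nodes=n)
    isolated = torch.nonzero(g.in_degrees() == 0, as_tuple=True)[0]
    g.add_edges(isolated, isolated)                  # self-loops, Sec. 3.3
    return g
```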
Image-based Node Embedding (GAT)
Graphs representing relations within a single image are well studied, as in (Guo et al., 2020; Johnson et al., 2015). However, no attempts have been made to represent image objects as node inputs to a GNN. We define this novel graph as an image-based entity graph, as visualized in Figure 2.
Instead of using a complex image encoder, we use EfficientNet-b4 (Tan and Le, 2019) to maximize efficiency. Each encoded image is then fed in as the node feature of an entity. Note that entire images represent nodes, not segments of images. Related works such as MMBT-Region (Kiela et al., 2021), ViLBERT, and VisualBERT (Li et al., 2019) employ a pretrained ResNet-based (He et al., 2015) Faster R-CNN, but such encoders are overly expensive for a GNN: a single image-level embedding per node is sufficient to enable effective message passing.

While GraphSAGE (Hamilton, 2020) assigns equal importance to neighbor nodes, in our application different features can have different importance depending on the context. Therefore, instead of GraphSAGE, we employ GAT (Veličković et al., 2018), which assigns different importance to different neighbor edges:

α_{ij} = softmax_j( σ( a† [ W h_i ; W h_j ] ) ),

where a is a learnable weight vector, W is the shared linear transformation, and for the non-linear activation function σ(·) we use the LeakyReLU function.
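A minimal sketch of the image-based node embedding using DGL's built-in GATConv (which applies the LeakyReLU attention above internally) follows; the 1792-dimensional pooled EfficientNet-b4 feature and the head/output sizes are illustrative assumptions.

```python
import torch
from dgl.nn import GATConv

g = build_entity_graph(feature_sets)          # from the sketch in Sec. 3.3
img_feats = torch.randn(g.num_nodes(), 1792)  # stand-in for EfficientNet-b4 output
gat = GATConv(in_feats=1792, out_feats=256, num_heads=4)
node_emb = gat(g, img_feats)                  # (N, 4, 256): one row per head
node_emb = node_emb.flatten(1)                # concatenate heads -> (N, 1024)
```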
Contextualized Text Embedding
BERT (Devlin et al., 2019) achieved remarkable success in various downstream tasks with its tokenization method and self-attention mechanism. As visualized in Figure 2, we apply the same BERT tokenizer to the textual data, producing 1) token embeddings, 2) segment embeddings, and 3) positional embeddings. Their aggregate is fed into a transformer, and the final hidden state of the classification token is used for the classification task. In Figure 2, W_i denotes the i-th tokenized word of the text, where i is the sequence index.
Multimodal Bitransformer
MMBT (Kiela et al., 2020) is used as an early fusion method. This model extends BERT (Devlin et al., 2019) by applying BERT-style tokenization to the image modality, as in Figure 2. For MM-GATBT, because we use image-based node embeddings, we treat each node feature I_n as a token. After applying the BERT-style tokenization of Sections 3.4 and 3.5, we concatenate the two embeddings. Note that the original MMBT (Kiela et al., 2020) pools the image and uses multiple separate image embeddings, whereas we use only a single output vector of the image-based node embedding per image, as sketched below.
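A hedged sketch of this fusion step: the single node embedding is projected to BERT's hidden size and prepended as one image token to the text token embeddings before the joint transformer. Dimensions follow the previous sketches and are assumptions.

```python
import torch

hidden = 768                                   # BERT-base hidden size
proj = torch.nn.Linear(1024, hidden)           # node embedding -> token space

def fuse(node_emb_i, text_tok_emb):
    """node_emb_i: (1024,) image-based node embedding of one movie;
    text_tok_emb: (L, 768) BERT token embeddings. Returns (L+1, 768)."""
    img_token = proj(node_emb_i).unsqueeze(0)  # exactly one image token
    return torch.cat([img_token, text_tok_emb], dim=0)
```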
Training
To solve the multi-label classification task, we optimize a weighted binary cross-entropy loss,

L = − Σ_{m=1}^{M} ω_m [ y_m log ŷ_m + (1 − y_m) log(1 − ŷ_m) ],

where M is the number of classes, ω_m is the fraction of samples of class m, y_m is the true label, and ŷ_m is the predicted label. Because MM-IMDb is an imbalanced dataset, we assign a different ω_m to each class.
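A minimal sketch of this loss (summed over classes, averaged over the batch; the exact reduction and weighting scheme are our assumptions):

```python
import torch

def weighted_bce(y_hat, y, omega, eps=1e-7):
    """y_hat: (B, M) predicted probabilities; y: (B, M) multi-hot labels;
    omega: (M,) per-class weights for the imbalanced MM-IMDb labels."""
    per_class = -(y * torch.log(y_hat + eps)
                  + (1 - y) * torch.log(1 - y_hat + eps))
    return (omega * per_class).sum(dim=1).mean()
```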
Experiment
System Configuration During the training phase, we used a single Nvidia RTX 3090 with a batch size of 12. We implemented our model using PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2020) on top of the MMBT code available in a public repository.¹ For every encoder, we used pre-trained models to reduce the computational cost and maximize performance. For the text encoder, we used the uncased BERT-base model available from Hugging Face (Wolf et al., 2020). For the image encoder, we used pre-trained EfficientNet-b4 (Tan and Le, 2019). For the GNN, we chose the GAT (Veličković et al., 2018) available from DGL and pre-trained it before plugging it into MM-GATBT. We used five features to construct our graph, as explained in Section 3.3 and Eq. (4) therein. The average degree of the resulting graph is 59, and it has 554 isolated nodes.
Experiment Setup
We used the Multimodal IMDb (MM-IMDb) dataset from Arevalo et al. (2017). This dataset consists of 23,351 movie entities; each movie has a title, description, movie poster, producer, and related genres. Since each movie can have multiple genres, this is a multi-label classification task. Empirical results from previous works imply that the text modality carries significantly more weight than the image modality (Jin et al., 2021). The dataset is provided pre-split, with 15,552 training and 7,799 testing entities.

The raw dataset (Arevalo et al., 2017) includes a total of 27 distinct labels across the training and testing sets. However, the literature drops entities with the News and Adult labels, leaving 15,513 training and 7,779 testing entities. Additionally, while the Reality-TV and Talk-Show labels appear in the training set, they do not appear in the testing set. Therefore, following the literature, we test with 23 distinct labels.
Baseline Models
We compare MM-GATBT with two types of models: unimodal and multimodal. For BERT (Devlin et al., 2019) and EfficientNet (Tan and Le, 2019), we use the same model sizes as in the main model and compare their performance. For the graphical model, we implement GAT w/ EfficientNet, which outputs the image-based node embedding used in the main model; we then compare it with a single EfficientNet to examine the information gained from this structural difference. Our implementation is publicly available on GitHub at https://github.com/sbseo/mm-gatbt.
Result
We validated our model using the following metrics: micro F1, macro F1, weighted F1, and samples F1, with results rounded to three decimal places. We report our results, together with the state of the art, in Table 1. Table 1 shows that MM-GATBT significantly outperforms the baseline models on all metrics; in particular, it significantly outperforms its unimodal submodels (i.e., text-only or image-only) when they are run separately.

Figure 3: Example of the constructed graph, visualized using Pyvis (Perrone et al., 2020). Only one movie feature is used, for visualization purposes.

This performance increase can be explained from two perspectives. First, Singh et al. (2020) observed that the performance of the pre-trained models plays a critical role before fusion. As we suspected in Section 1, using the image modality alone performs worst, as it does not leverage the benefits of multimodal fusion. From this perspective, the image-only embedding is upgraded into an image-based node embedding, as shown by GAT w/ EfficientNet; accordingly, the main model performs better when its submodel performs better. This also indicates that our approach successfully captures the interactions between entities through message passing. Second, MM-GATBT reflects the connectivity structure of the constructed graph. As visualized in Figure 3, the graph consists of both connected and isolated nodes, so it is crucial for the architecture to handle both the graph's density and its sparsity. The text encoder at the top of Figure 2 generates word embeddings that neglect the graph structure, which explains its high performance on isolated nodes. In contrast, the GAT at the bottom of Figure 2 takes the connectivity of nodes into account, which explains why MM-GATBT also performs well on non-isolated nodes. By fusing these two embeddings, MM-GATBT leverages both connected and isolated nodes effectively. Note that neither BERT nor the image-based node embedding alone achieved the accuracy of 0.685 before they were fused.
Conclusion
We proposed MM-GATBT, a novel graph-based multimodal architecture, to address the multilabel classification task on the MM-IMDb dataset. MM-GATBT leverages image-based node embeddings and an attention mechanism during the early fusion phase. The results show that the proposed model successfully captures the latent information generated by the interactions between entities and achieves state-of-the-art results among all published works on the MM-IMDb dataset.
"year": 2022,
"sha1": "1bdcf8ac4d0a241acc1816fd1d18e50731c3be63",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "1bdcf8ac4d0a241acc1816fd1d18e50731c3be63",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Quantum limits for precisely estimating the orientation and wobble of dipole emitters
Precisely measuring molecular orientation is key to understanding how molecules organize and interact in soft matter, but the maximum theoretical limit of measurement precision has yet to be quantified. We use quantum estimation theory, i.e., quantum Fisher information (QFI), to derive a fundamental bound on the precision of estimating the orientations of rotationally fixed molecules. While direct imaging of the microscope pupil achieves the quantum bound, it is not compatible with wide-field imaging, so we propose an interferometric imaging system that also achieves QFI-limited measurement precision. Extending our analysis to rotationally diffusing molecules, we derive conditions that enable a subset of second-order dipole orientation moments to be measured with quantum-limited precision. Interestingly, we find that no existing technique can measure all second moments simultaneously with QFI-limited precision; there exists a fundamental trade-off between precisely measuring the mean orientation of a molecule versus its wobble. This theoretical analysis provides crucial insight for optimizing the design of orientation-sensitive imaging systems.
Since the first observation of single molecules [1], chemists, physicists, biologists, and engineers have worked tirelessly to quantify precisely their positions [2-4] and orientations [5-9] to probe dynamic processes within soft matter at the nanoscale. Two fundamental challenges confront these experiments: the optical diffraction limit, i.e., the finite numerical aperture of the imaging system, and the Poisson shot noise associated with photon counting. In recent decades, microscopists have developed numerous technologies [10-14] to measure the orientations of single-molecule (SM) dipole moments. Classical estimation theory, i.e., Fisher information (FI) and the associated Cramér-Rao bound (CRB) [15], allows us to calculate conveniently the best-possible precision of unbiased measurements of a few parameters. However, calculating the CRB requires us to assume a comprehensive set of priors about the object and the imaging system, such as the number of sources, their positions and orientations, their emission spectra and anisotropies, an exact model of the imaging system and its detector, etc. The performances of several orientation-sensing methods have been compared using the CRB [16,17], but the fundamental limit of measurement sensitivity remains unexplored.
Recently, quantum estimation theory has ignited a series of studies that explore the fundamental limits of estimating the 2D [18] and 3D [19] positions of isolated optical point sources, as well as the limits of resolving two or more sources that are separated by distances smaller than the Abbé diffraction limit [20-25]. Since quantum noise manifests itself as shot noise in incoherent optical imaging systems, the quantum Cramér-Rao bound (QCRB) sets a fundamental limit on the best-possible variance of measuring any parameter of interest. Further, this approach provides insight into how one may design an instrument to saturate the quantum bound, thereby achieving a truly optimal imaging system [19,20]. However, to our knowledge, no studies exist to quantify the limits of measuring the orientation and rotational "wobble" of dipole emitters, which has numerous applications in biology and materials science [7,26-28].
Here, we apply quantum estimation theory to derive the best-possible precision of estimating the orientations of rotationally fixed fluorescent molecules, regardless of instrument or technique. We compare multiple existing methods to this bound and present an interferometric microscope design that achieves quantum-limited precision. Extending our analysis to rotationally diffusing molecules, we derive bounds on estimating the temporal averages of second-order orientational moments and show sufficient conditions for reaching quantum-limited measurement precision. Interestingly, we find that it is impossible to achieve QCRB-limited precision in measuring the orientation and wobble of a molecule simultaneously.
I. IMAGING MODEL AND QUANTUM FISHER INFORMATION
We model a fluorescent molecule as an oscillating electric dipole [29] with an orientation unit vector µ = [µ_x, µ_y, µ_z]† = [sin θ cos φ, sin θ sin φ, cos θ]†. For any unbiased estimator, the covariance matrix V_µ of estimates of the molecular orientation µ is bounded by the classical and quantum CRB [15,23,30],

V_µ ⪰ J⁻¹ ⪰ K⁻¹,    (1)

where J and K represent the classical and quantum Fisher information matrices (FI and QFI), respectively, and ⪰ denotes a generalized inequality such that (V_µ − J⁻¹) and (J⁻¹ − K⁻¹) are positive semidefinite. Here, we consider the orientational parameters [µ_x, µ_y] in Cartesian coordinates; other representations of µ can be analyzed similarly.
If the photons detected at position [u, v] follow a Poisson distribution with expected value I(u, v; µ), the entries of the classical Fisher information matrix J are given by

J_ij = ∫∫ [∂I(u, v; µ)/∂µ_i][∂I(u, v; µ)/∂µ_j] / I(u, v; µ) du dv.    (2)

Note that I(u, v; µ) is a property of the imaging system, i.e., any modulation of the collected emission light generally alters the classical FI matrix. A numerical sketch of Eq. (2) is given below.
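Equation (2) is straightforward to evaluate numerically for any forward model; the sketch below approximates the derivatives by central differences on a pixel grid. The intensity model (PSF, grid, pixel size) is whatever the user supplies; nothing here is specific to a particular instrument.

```python
import numpy as np

def classical_fisher(intensity, mu, du, dv, h=1e-5):
    """intensity(mu) -> 2-D array of expected photons I(u, v; mu) on a fixed
    grid. Returns the classical FI matrix of Eq. (2) via central differences
    and a Riemann sum over the detector plane."""
    def dI(i):
        e = np.zeros_like(mu, dtype=float)
        e[i] = h
        return (intensity(mu + e) - intensity(mu - e)) / (2 * h)
    I0 = np.maximum(intensity(mu), 1e-12)      # guard against division by zero
    grads = [dI(i) for i in range(len(mu))]
    return np.array([[np.sum(gi * gj / I0) * du * dv for gj in grads]
                     for gi in grads])
```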
A fundamental bound on estimation precision is given by the quantum FI matrix, which is affected only by how photons are collected by the imaging system, i.e., its objective lens(es). For a density operator ρ representing the collected electric field, the entries of the quantum FI matrix K are given by [30-32]

K_ij = (1/2) tr[ρ (L_i L_j + L_j L_i)],    (3)

where L_i is termed the symmetric logarithmic derivative (SLD), given implicitly by

∂ρ/∂µ_i = (1/2)(L_i ρ + ρ L_i).    (4)

Using a vectorial diffraction model [12,33-37], we express the wavefunction of a photon emitted by a rotationally fixed molecule at the back focal plane (BFP) of the imaging system as the superposition

ψ = µ_x g_1 + µ_y g_2 + µ_z g_3,    (5)

where (ψ_x, ψ_y, ψ_z) denote the linearly-polarized components of ψ along (x, y, z). The basis fields (g_1, g_2, g_3) at the BFP of the imaging system may be interpreted as the classical electric field patterns produced by dipoles aligned with the (x, y, z) Cartesian axes and projected by the microscope objective into the BFP [Appendix A and Eq. (A1)].
To proceed in writing down the density operator ρ of a photon collected by an objective lens, we define a scalar wavefunction such that x- and y-polarized photons are detected separately and simultaneously; here, [u_0, v_0] represents a translation of ψ_y with u_0² + v_0² > r_0² (e.g., by a pair of mirrors) such that ψ_x and ψ_y are spatially resolvable. The dimensionless scalar r_0 = NA/n represents the radius of the pupil of the imaging system (normalized by the focal length of the collection objective) as a function of the numerical aperture NA and the refractive index n of the imaging medium, which is assumed to be matched to that of the sample. Defining the remaining basis states analogously, the one-photon state can be represented as a superposition of the collected state |ψ⟩ and the vacuum state |vac⟩, in which no photon is captured by the objective lens. Stemming from the finite NA of the imaging system, a photon emitted by the dipole escapes detection with a nonzero probability, and |u, v⟩ denotes the position eigenket such that ⟨u, v|u′, v′⟩ = δ(u − u′)δ(v − v′). The scalar c can be viewed as the probability of collecting a photon from a z-oriented molecule, normalized to that from an x- or y-oriented dipole, given by Eq. (11) (Appendix A). Throughout this paper, we use n = 1.515 and NA = 1.4, i.e., r_0 = 0.924 and c = 0.65, unless otherwise specified.
In Appendix A, we derive the QCRB for estimating the first-order orientational moments [Eq. (12)], whose eigenvectors represent orientational unit vectors along the polar and azimuthal directions, with corresponding QFI components F_p and F_a along the polar and azimuthal directions [Eqs. (13a) and (13b)]. We may reparameterize this quantum limit in terms of the best-possible precision of measuring a dipole's orientation in polar coordinates [θ, φ]. Here, we use σ_QCRB to denote the best-possible measurement standard deviation for any imaging system, as determined by the QFI, while we use σ to denote the best-possible measurement standard deviation for a particular imaging system, as determined by the classical FI.
The QFI along the polar direction, F_p, implicitly quantifies the change of the wavefunction ψ with respect to the polar orientation of the source dipole and increases as µ_z increases [Eq. (13a)]. Given the toroidal emission pattern of a dipole, changes in polar orientation are easier to detect when sensing the null in the distribution (i.e., large µ_z) in contrast to viewing the dipole from the side (i.e., large θ). In the limiting case of r_0 = c = 1, the 4π collection aperture captures the entire radiated field, and the limit of polar orientation precision σ_θ,QCRB becomes 0.5 rad for all possible orientations.
Interestingly, and similar to estimating the 3D position of a dipole emitter [19], the QFI for measuring the azimuthal orientation is uniform across all possible orientations [Eq. (13b)], i.e., the best-possible uncertainty (as a longitudinal arc length on the orientation unit sphere) does not vary with NA or orientation µ. However, since the circumference of the circles of latitude decreases with decreasing polar angle θ, the limit of azimuthal orientation precision σ_φ,QCRB degrades as 1/(2 sin θ).
We compare the classical CRB of multiple orientation measurement techniques to the quantum bound. Remarkably, direct BFP imaging (with x- and y-polarization separation) [34] has the best precision among the methods we compared, and since its variance ellipses overlap with the quantum bound, it achieves QCRB-limited measurement precision [Figure 1(a)]. The widely used x/y-polarized standard PSF (xyPol) [38] has relatively poor precision compared to other techniques, as quantified using the standardized generalized variance (SGV) [det(J)]^(-1/2), defined as the positive p-th root of the determinant of a p × p covariance matrix [39]. The SGV scales linearly with the area of the covariance ellipse for estimating [µ_x, µ_y], and the SGV of the xyPol technique is approximately three times larger on average than the quantum bound for out-of-plane molecules [Figure 1(b)] and twice as large for in-plane molecules [Figure 1(d)]. Its precision in measuring x- and y-oriented molecules is severely hampered by its symmetry and the resulting measurement degeneracy. The Tri-spot (TS) PSF, a PSF engineered specifically to measure molecular orientation [8], has better overall precision than the x/y-polarized standard PSF, and its performance degrades only slightly for x- and y-oriented molecules. However, its precision does not reach the quantum limit.
Note that both the x/y-polarized standard and TS PSFs break the azimuthal symmetry associated with conventional imaging systems, leading to φ-dependent performance. Inspired to retain this symmetry, we also characterize the radially/azimuthally-polarized version of the standard PSF (raPol) [40]. This PSF is implemented by placing a vortex wave plate (VWP), S-waveplate, or y-phi metasurface mask [41] at the BFP. These elements convert radially- and azimuthally-polarized light into linearly-polarized light with orthogonal polarizations, which may be separated downstream by a polarization beamsplitter (PBS). This technique has uniform precision for measuring molecular orientation across all azimuthal angles φ due to its symmetry. Its measurement precision is better than that of the TS PSF for most orientations [Figure 1(b,c)] and only slightly worse for in-plane molecules [Figure 1(d)].
II. REACHING THE QUANTUM LIMIT OF ORIENTATION MEASUREMENT PRECISION
Although direct BFP imaging achieves quantum-limited precision, it can only measure the orientation of one molecule at a time, limiting its practical usage. In contrast, the aforementioned widefield imaging techniques can resolve the orientations of multiple molecules simultaneously, but their precisions do not reach the QCRB (Appendix B). Here, we analyze the classical FI of an imaging system [Eq. (2)] to deduce the conditions necessary for achieving the best-possible precision equal to the QCRB.
The expected intensity distribution in the image plane is given by I = |U(ψ)|², where U is a unitary operator, i.e., ⟨U(ψ_1)|U(ψ_2)⟩ = ⟨ψ_1|ψ_2⟩, that depends on the configuration of the imaging system. This linear operator U typically involves a scaled Fourier transform (xyPol), a Fourier transform after phase modulation (TS), or a Fourier transform after modulation by a polarization tensor (raPol). We consider an operator U(·) projecting the wavefunction ψ(u, v) to the image plane such that the resulting field is either real or imaginary at any position [u, v]; the intensity is then non-negative, and Eq. (2) can be simplified accordingly. Further, since the basis fields remain mutually orthogonal after a unitary operation U, i.e., ∫∫ [U(g_i)][U(g_j)] du dv = 0 ∀ i ≠ j, we find that the classical FI becomes equal to the QFI (Appendix B). Therefore, an imaging system achieves the QFI limit for measuring dipole orientations if its images contain non-overlapping (i.e., non-interfering) real and imaginary fields. Further, in Appendix B, we find that the classical FI of a measurement saturates the quantum bound if and only if the phase of the detected electric field does not contain orientation information, i.e., |U(ψ)| ∂ arg{U(ψ)}/∂µ_i = 0. BFP imaging, where U is the identity operator, satisfies this condition, and its precision reaches the quantum limit. In contrast, the field at the image plane is related to the field at the BFP by a Fourier transform; therefore, to satisfy the condition, a system may separate the real and imaginary electric fields at the image plane, which is equivalent to separating even and odd field distributions at the BFP due to the parity of the Fourier transform. Alternatively, measuring the full complex field, i.e., both its amplitude and phase, could in principle reach the quantum limit of measurement precision. A numerical sanity check of this condition is sketched below.
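The two sufficient conditions above are easy to test numerically for candidate designs; the sketch below checks sampled image-plane basis fields for (1) mutual orthogonality and (2) pointwise separation of real and imaginary parts (so the total field's phase is orientation-independent). The tolerances and sampling are illustrative.

```python
import numpy as np

def saturates_quantum_bound(Gx, Gy, Gz, tol=1e-9):
    """Gx, Gy, Gz: complex 2-D arrays sampling U(g_x), U(g_y), U(g_z) on the
    image grid. True if the sampled fields satisfy the sufficient conditions
    discussed above for reaching the QFI limit."""
    F = np.stack([Gx, Gy, Gz])                       # (3, Nu, Nv)
    ortho = all(abs(np.sum(F[i] * np.conj(F[j]))) < tol
                for i in range(3) for j in range(i + 1, 3))
    all_real = np.abs(F.imag).max(axis=0) < tol      # per-pixel: all fields real
    all_imag = np.abs(F.real).max(axis=0) < tol      # ... or all imaginary
    return ortho and bool(np.all(all_real | all_imag))
```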
Leveraging this insight, we propose an interferometric imaging system [dualObj, Figure 2(a)] to measure the orientations of multiple molecules simultaneously with precision reaching the QCRB. This system uses two opposing objectives to collect the field emanating from a dipole, in a manner similar to 4Pi microscopy and iPALM [42,43]. To model the fields captured by each lens, we define orientation coordinates (µ_x, µ_y) such that the two captured fields have identical amplitude distributions in the BFP; due to dipole symmetry, these orientation coordinates (µ_x, µ_y) are not the same as the position coordinates (x, y) depicted in Figure 2. The precision of this interferometric imaging system saturates the QCRB [Figure 2(b)], since (1) the basis fields U(g_x), U(g_y), and U(g_z) captured across cameras (i)-(iv) are mutually orthogonal [Figure 2(c)], and (2) the real [Figure 2(c)(i,ii,iv)] and imaginary [Figure 2(c)(iii)] components of the field are spatially separated. QCRB-limited precision can also be achieved by using a single objective and a 50/50 beamsplitter, as shown in Figure C1, but this system cannot measure the positions and orientations of molecules simultaneously (Appendix C). Note that although the photon detection rate is doubled in experiments using dual-objective detection, the two schemes exhibit identical orientational precision per photon detected.
To demonstrate the features of this optical design, we consider the optical fields of molecules with orientations µ = [−1, 1, 1]/ √ 3 and µ = [1, 1, 1]/ √ 3, propagated by the proposed imaging system to the various image planes [Figure 2(d,e)].Corresponding images with Poisson shot noise are shown in Figure C1(b,c).Without including the VWP, the fields at cameras (i,ii) and intermediate image planes (IIPs, v,vi) represent the response of an x/ypolarized imaging system [Figure 2(d)(i,ii,v,vi)].Both the amplitudes and phases of the fields contain orientation information, but the phase patterns are lost when using photon-counting cameras.Therefore, the performance of the xyPol imaging system is worse than the quantum bound.After guiding the y-polarized fields to the interferometric detection path, the phase shift induced by the BS separates the real and imaginary fields, i.e., the phase patterns of the fields detected are binary [Figure 2(d)(iii,iv)] and do not contain orientation information.Images of these two dipoles are now easier to distinguish from one another, as exemplified by rotation in the elongated PSFs [red lines in Figure 2 While interferometric detection can also be implemented in the x-polarized channel [Figure 2(d)(i,ii)] to boost precision, we notice that a VWP combined with a PBS separates radially-and azimuthally-polarized light, and all basis electric fields in the azimuthal channel are odd at the BFP, i.e., the basis fields are completely imaginary in the image plane [Figure 2(e)(i,ii)].Therefore, using a VWP eliminates the need for interferometric detection in the azimuthal channel, yielding a simpler imaging system.In the radially-polarized channel [
III. FUNDAMENTAL LIMITS OF MEASURING ORIENTATION AND WOBBLE SIMULTANEOUSLY
While a single photon emitted by a dipole has a wavefunction ψ consistent with a single orientation µ, camera images usually contain multiple photons, inherently enabling measurements of rotational dynamics during a camera's integration time [8,14,27,44]. Note that a collection of photons emitted by a partially fixed or freely rotating molecule is equivalent to that emitted by some collection of fixed dipoles with a corresponding orientation distribution. Therefore, the one-photon state for a wobbling molecule may be expressed as a mixed-state density matrix whose weights

M_ij = (1/T) ∫₀^T µ_i µ_j dt

are the temporal averages of the second moments of molecular orientation over the acquisition time T. The corresponding classical image formation model is given by Eqs. (E1) and (E2). The QFI may be expressed as a function of the orientational second moments and can be computed numerically, as shown in Appendix D.
For simplicity, we parameterize a dipole's rotational motions by an average orientation µ̄ = [µ̄_x, µ̄_y, µ̄_z]† with rotational constraint γ [8,9,14], i.e.,

M = γ µ̄ µ̄† + [(1 − γ)/3] I,    (17)

where I is the 3 × 3 identity matrix (see the numerical sketch below). One sufficient condition to saturate the QFI for estimating a subset of parameters is for the measurement to project onto the eigenstates of the corresponding SLDs [32]. For example, when a low-NA objective lens is used, the x/y-polarized standard PSF separates nearly perfectly the basis images corresponding to M_xx and M_yy and has no sensitivity to M_zz. Therefore, the x/y-polarized standard PSF projects onto the eigenstates of L_xx and L_yy, and its precision approaches the QCRB limit for measuring M_xx and M_yy at small NA [Figure 3(a)].

To quantify measurement performance for the out-of-plane second moments, we focus on the CRB σ_zz, since all polarized versions of the standard PSF have poor sensitivity for measuring the cross moments M_xz and M_yz. Not surprisingly, the precision of measuring M_zz dramatically improves when using an objective lens of NA greater than 1 [Figure 3(c)(i)]. Here, we notice the usefulness of dual-objective interferometric detection (dualObj); since the photons corresponding to M_zz are separated from the other second moments [Figure 2(c)], i.e., the system projects onto the eigenstate of L_Mzz, dualObj achieves QCRB-limited precision [Figure 3(c)]. Without interferometric detection (raPol), radially/azimuthally polarized detection achieves worse precision than dualObj but improves upon basic linear polarization separation (xyPol or 45°Pol).
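For concreteness, the second-moment matrix of Eq. (17) can be built directly from (θ̄, φ̄, γ); the closed form below is our reading of the parameterization and satisfies tr(M) = 1 for any γ.

```python
import numpy as np

def second_moments(theta, phi, gamma):
    """M = gamma * mu mu^T + (1 - gamma)/3 * I for mean orientation
    (theta, phi); gamma = 0 is free rotation, gamma = 1 a fixed dipole."""
    mu = np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])
    return gamma * np.outer(mu, mu) + (1.0 - gamma) / 3.0 * np.eye(3)

M = second_moments(np.pi / 3, np.pi / 4, gamma=0.8)
print(np.trace(M))  # 1.0: Mxx + Myy + Mzz is conserved
```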
Close examination of Figure 3(a,b) shows that no existing orientation imaging method, even those that achieve QCRB-limited precision for estimating first moments, can achieve QFI-limited precision for measuring all orientational second moments simultaneously. To gain insight into this phenomenon, we use classical FI to analyze the SGV σ²_{xx,yy,xy} of measuring all in-plane moments simultaneously (Appendix E), yielding Eq. (19), where the superscript i ∈ {x, y, z} denotes the SGV σ², FI J, or QFI K of a dipole with an average orientation along one of the Cartesian axes. Equation (19) reveals that, for all imaging systems, there exists a trade-off between sensitivity for measuring the squared moments, which mainly indicate the average orientation of a molecule, and the cross moments, which correspond to wobble [Eq. (17)]. Radially/azimuthally-polarized standard PSFs, both with (dualObj) and without (raPol) interferometric detection, exhibit nearly identical precision for measuring squared versus cross moments [Figure E2(a,b)] and perform close to the bound given by Eq. (19) for both low [Figure 4(i)] and high NA [Figure 4(ii)]. In contrast, the linearly-polarized standard PSFs, xyPol and 45°Pol, exhibit suboptimal SGVs σ²_{xx,yy,xy} for measuring all in-plane second moments simultaneously at low NA, as expected [Figure 4(i)], and these SGVs improve as NA increases [Figure 4(ii)]. This improvement comes at the cost of worse measurement precision for specific moments [(M_xx, M_yy) for xyPol and M_xy for 45°Pol, Figure 3(a,b)(i)]. Interestingly, no method can achieve QCRB-limited measurement precision for all second-order orientational moments simultaneously, since the bound given by Eq. (19) is greater than the quantum bound [Eq. (18)]. This trade-off also occurs for molecules wobbling around other average orientations (Figure E1).
IV. DISCUSSION AND CONCLUSION
Using quantum estimation theory, we derive a fundamental bound for estimating the orientation of rotationally fixed molecules that applies to all measurement techniques. The key result is that the bound is radially symmetric: the precision along the polar direction depends on the numerical aperture of the imaging system and the polar orientation µ_z of the molecule, while the precision along the azimuthal direction is bounded by a constant 0.5 rad. Our approach can be extended to include appropriately modeled background photons (Appendix F). By comparing the precision of existing methods to the bound, we show that direct imaging of the BFP saturates the quantum bound, while all existing image-plane-based techniques have worse precision. Upon further investigation of the classical FI, we show that a method can saturate the quantum bound if and only if the field in the image plane contains only trivial phase information. Inspired by this necessary and sufficient condition, we propose an imaging system with interferometric detection at the image plane that saturates the quantum bound.
We further examined the quantum bound for estimating the orientation and wobble of a non-fixed molecule. Our analysis shows that the optimality of a measurement depends on the specific molecular orientation to be observed. Although no physically realizable measurement achieves QCRB-limited precision for all second moments and all possible molecular orientations simultaneously, we show several methods that achieve quantum-limited precision for certain subsets of second moments. Generally speaking, spatially separating the basis fields improves the precision of measuring the average orientation of an SM, while mixing (i.e., spatially overlapping) the basis fields improves the precision of measuring its wobble. This trade-off is demonstrated using classical FI (Appendix E). An imaging system that separates radially- and azimuthally-polarized light using a VWP and a PBS is capable of distributing information evenly between measuring the average orientation and the wobble (raPol and dualObj in Figure 3), and these methods achieve optimal measurement precision for in-plane moments in terms of CRB SGV [Figure 4]. Although we model the orientation of SMs using orientational second-order moments, similar results can be derived for other orientation parameterizations, such as generalized Stokes vectors and spherical harmonics (Appendix D).
Interestingly, we note that certain entries of the QFI matrix may be infinite, e.g., K^x_{yy,yy} = K^x_{zz,zz} = ∞ for fixed molecules oriented along the x axis [Eq. (18a)] and K^z_{xx,xx} = K^z_{yy,yy} = K^z_{xy,xy} = ∞ for fixed molecules oriented along the z axis [Eq. (18b)]. Such cases arise when ρ vanishes while ∂ρ/∂M_ij does not as a molecule becomes more fixed (γ → 1). One such example is using the x/y-polarized standard PSF to estimate M_yy for an x-oriented fixed molecule; the classical FI J^x_{yy,yy} is also infinite in this case. That is, there exist position(s) (u, v) in image space such that I(u, v; µ = [1, 0, 0]†) = 0 and ∂I(u, v)/∂M_yy > 0, i.e., we expect certain region(s) of the image to be dark for x-oriented dipoles but bright for y-oriented dipoles, so the corresponding FI diverges. This situation is the orientation analog of MINFLUX nanoscopy [45], where infinitely good measurement precision per photon may be obtained by receiving zero signal [46]; in this case, detecting zero photons in the y-polarized channel implies M_yy = 0. It is remarkable that quantum estimation theory provides fundamental bounds on measurement performance that are both instrument-independent and achievable by readily built imaging systems, such as the dual-objective system with vortex waveplates and interferometric detection proposed here. Further, these bounds give tremendous insight to microscopists, who can now compare existing methods for measuring dipole orientation to the bound and design new microscopes that optimally utilize each detected photon for maximum measurement precision. In particular, our analysis reveals that no single instrument can achieve the best-possible QCRB limit for measuring all orientational second moments simultaneously, due to the trade-off between measuring mean orientation versus molecular wobble [Eq. (19)]. Therefore, the notion of designing a single, fixed instrument that performs optimally may simply be intractable; instead, scientists and engineers should focus on designing "smart" imaging systems that adapt to the specific dipole orientations within the sample and the orientational second moments of interest, thus achieving optimal, QFI-limited measurement precision. Such designs remain the object of future studies.
Thus, because only the intensity |Ψ(u, v)|² is detected by a camera, orientation information is only useful if it is encoded within the field amplitude A_Ψ(u, v), i.e., the FI increases as ∂A_Ψ(u, v)/∂µ_i increases. Any information present within phase variations arising from changes in orientation, given by ∂α_Ψ(u, v)/∂µ_i, is simply lost and does not improve the Fisher information.
Both the field and its partial derivatives can be viewed as superpositions of image-plane basis fields analogous to the fields at the BFP [Eq. (A1)]. Interestingly, we find that the classical FI is bounded from above by the QFI [Eq. (B2)], with equality if and only if the phase of the image-plane field, α_Ψ(u, v), is constant as the dipole changes orientation. That is, if one can design an imaging system such that all changes in orientation correspond solely to changes in the image-plane field amplitude A_Ψ(u, v), such an imaging system may achieve quantum-limited orientation measurement precision.
Appendix C: Single-objective interferometric imaging system that reaches the quantum limit of measurement precision

Here, we show a single-objective interferometric imaging system that achieves QCRB-limited precision for estimating first-order orientational dipole moments (Figure C1), analogous to the dual-objective system discussed in the main text (Figure 2). This system similarly uses a vortex waveplate (VWP) to circumvent the need for interferometric detection of the azimuthally-polarized emission light. However, it passes radially-polarized light through two 50/50 beamsplitters in a Mach-Zehnder configuration, and each arm uses a dove prism (DP) to flip the field for proper detection of orientation information.

Although this imaging system is simpler to implement than a dual-objective system, the use of only one objective lens prevents cameras (iii) and (iv) from measuring the position (x, y) and orientation (µ_x, µ_y) simultaneously (Figure C1). For single-objective detection, the y-polarized field at the BFP of a molecule located at position (x, y) carries the position-dependent phase factor exp[jk(ux + vy)], whereas the dual-objective system collects two copies of the field through objectives 1 and 2. As stated in the main text (Section II), orientation measurements in the image plane achieve maximum precision when even and odd fields at the BFP are separated, e.g., when ψ_y(u, v) + ψ_y(−u, −v) and ψ_y(u, v) − ψ_y(−u, −v) are resolved simultaneously. In the dual-objective setup of Figure 2, the fields captured by cameras (iii) and (iv) are given by ψ′_{y,1}(u, v) + ψ′_{y,2}(−u, −v) and ψ′_{y,1}(u, v) − ψ′_{y,2}(−u, −v), respectively. Thereby, the orientation measurement is optimized, and the position information exp[jk(ux + vy)] is preserved.
Here, G_i are the basis fields in the image plane [Eq. (B2)], and s = 1 is a brightness scaling factor corresponding to one photon detected.
To investigate the trade-off in measuring squared versus cross moments, we bound the relevant FI entries, with equality if and only if G_x G_y* is real, i.e., G_x and G_y have the same phase. Note that this inequality holds for all imaging systems, i.e., for any possible G_i.
The classical FI matrix for estimating the in-plane orientational second moments, obtained by ignoring the third, fifth, and sixth rows and columns of the full FI matrix J, may be written as in Eqs. (E6a)-(E6c); we then develop a relation between the covariance J_12 and the diagonal elements J_11 and J_22. In the main text, we discussed a trade-off between achieving good precision in estimating the squared second moments, e.g., M_xx, and achieving good precision in estimating the cross moments, e.g., M_xy, for molecules wobbling around the in-plane axes or the optical axis. Here, we compute numerically the precision of measuring second moments for molecules with arbitrary average orientations (θ̄, φ̄) and small rotational diffusion (γ = 0.8, equivalent to rotating uniformly within a cone of half-angle 30.7°), using various methods [Figure E1]. The estimation precisions for mostly fixed molecules are similar to those for freely rotating molecules (Figure 3). The x/y-polarized standard PSF with a low-NA objective lens has precision achieving the quantum bound for measuring M_xx and M_yy for some orientations but no sensitivity for measuring M_xy. The 45°-polarized standard PSF has the opposite performance: it achieves the QCRB for measuring M_xy but has no sensitivity for measuring M_xx and M_yy. The radially/azimuthally-polarized standard PSF has better M_zz precision than the in-plane polarized PSFs. We surmise that these methods do not simultaneously achieve QCRB-limited precision for all orientations because they do not project onto the eigenstates of the corresponding SLDs for the orientational second moments.
We next consider the CRB SGV (σ^x_{xx,yy,xy})², which is bounded from below; the minimum SGV in the final inequality is found by setting ∂(σ^x_{xx,yy,xy})²/∂J_11 = 0. Similarly, for z-oriented molecules, the SGV is bounded as in Eq. (E11). We therefore observe that the classical CRB for measuring the in-plane second moments is bounded; the precision in measuring M_xx, M_yy, and M_xy cannot simultaneously reach the best-possible QCRB. These trade-offs are exemplified by comparing the xyPol and 45°Pol techniques in Figure 3(a,b), Figure E1(a,b), and Figure E2(a,b). Interestingly, although both of these methods saturate the QCRB for subsets of M_xx, M_yy, and M_xy, their SGV for measuring all in-plane moments is poor. In contrast, the raPol and dual-objective techniques cannot saturate the QCRB for any one in-plane second moment, but their SGV for all in-plane moments is very close to the bound given by Eqs. (E10) and (E11) [Figure 3].

In this section, we briefly discuss the effect of background on the estimation precision, which depends strongly on the nature of the background photons, especially their spatial distributions. We write a new density matrix ρ that includes the background contribution [Eq. (F1)]. Similar to the backgroundless case [Eq. (12)], the QFI is also azimuthally symmetric, and the probability of a photon emitted by the dipole escaping detection is given by 1 − ε_z = (1 − c)µ_z². Compared to the backgroundless case, and averaging over all possible orientations, the best-possible precision decreases by a factor of two for a signal-to-background ratio (SBR) s/b = 0.75 (Figure F1), i.e., 3 background photons are detected for every 4 signal photons. Note that we have assumed in Eq. (F1) that these background photons project uniformly across |g_x⟩, |g_y⟩, and |g_z⟩. The QCRB will change depending upon how photons from the background emitters project onto the basis fields of the imaging system.
(a) VWPs are placed at the BFPs to transform radially- and azimuthally-polarized light into x- and y-polarized light, respectively. Cameras (i) and (ii) detect identical images of the y- (azimuthally-) polarized fields. The x- (radially-) polarized fields, one of which is flipped by a dove prism (DP) [Figure 2(a)], are guided to a beamsplitter (BS). The resulting interference pattern is captured by cameras (iii) and (iv).
FIG. 2. Dual opposing-objective interferometric imaging (dualObj) for achieving QCRB-limited precision. (a) Vortex waveplates (VWP) are placed at the BFPs to convert radially-polarized light to x-polarized light (blue) and azimuthally-polarized light to y-polarized light (red). Blue arrows depict the fast-axis direction of the VWP. One of the radially-polarized channels is flipped using an x-oriented Dove prism (DP) and then propagates to the beamsplitter (BS). (b) CRB covariance ellipses for measuring [µ_x, µ_y] using 25 detected photons and interferometric detection (magenta), compared to the quantum bound. (c) Basis electric fields U(g_x), U(g_y), and U(g_z) at detectors (i)-(iv). (d,e) Normalized amplitude and phase of the optical fields of molecules with orientations −µ_x = µ_y = µ_z and µ_x = µ_y = µ_z captured at detectors (i)-(iv) and intermediate image planes (v,vi), (d) without and (e) with VWPs. Scale bar: 1 µm. Colorbars: normalized amplitude and phase in rad.
FIG. 3. Classical CRB of several techniques (Appendix E) compared to the quantum CRB of estimating second-order orientational moments of dipole emitters. (a) CRB SGV of estimating $M_{xx}$ and $M_{yy}$ for molecules wobbling around the $\mu_x$ axis; (b,c) best-possible precision $\sqrt{\mathrm{CRB}}$ of estimating (b) $M_{xy}$ for molecules wobbling around the $\mu_x$ axis and (c) $M_{zz}$ for molecules wobbling around the $\mu_z$ axis, as functions of (i) numerical aperture NA (for γ = 0) and (ii) rotational constraint γ [NA = 0.1 in (a),(b) and NA = 1.4 in (c)]. The gray regions are bounded from above by the (a) QCRB or (b,c) $\sqrt{\mathrm{QCRB}}$ [Equation (18)]. Orange: standard PSF with x- and y-polarized detection (xyPol); cyan: standard PSF with linearly polarized detection at ±45° in the xy-plane (45°Pol); green: standard PSF with radially- and azimuthally-polarized detection (raPol); magenta: dual-objective interferometric detection with VWPs (dualObj). All curves assume 1 photon is detected from the dipole emitter. The estimation precisions of 45°Pol in (a) and xyPol in (b) are orders of magnitude larger than those of the other techniques and are not shown.
where γ = 0 represents a freely rotating molecule and γ = 1 indicates a rotationally fixed molecule. We may derive an analytical expression of the QFI for estimating a subset of the second moments $[M_{xx}, M_{yy}, M_{zz}, M_{xy}]$ (Appendix D) by examining special cases where the dipole's average orientation is parallel to the Cartesian axes, namely the QFI matrix $K_x$ for a dipole with an average orientation along the x axis [i.e., $\bar\mu_x = 1$, $M_{xy} = M_{xz} = M_{yz} = 0$, Figure 3(a,b)] and the matrix $K_z$ for a dipole with an average orientation parallel to the optical axis $\mu_z$ [i.e., $\bar\mu_z = 1$, $M_{xy} = M_{xz} = M_{yz} = 0$, Figure 3(c)]. The x/y-polarized standard PSF achieves the quantum bound for measuring the squared moments $M_{xx}$ and $M_{yy}$ for some orientations [Figure 3(a)]. However, this technique lacks sensitivity for measuring the cross moment $M_{xy}$ [Figure 3(b)], since the corresponding FI entry is close to zero [Figure E2(b)]. Intuitively, $M_{xy}$ may be measured simply by rotating the polarizing beamsplitter by 45° around the optical axis to capture linearly polarized light along ±45°. This approach achieves the QFI limit for measuring $M_{xy}$, but consequently contains no information regarding the squared moments $M_{xx}$ and $M_{yy}$ [Figure 3(a,b), Figure E2(a,b)].
FIG. 4. CRB standardized generalized variance (SGV, in steradians) of estimating the in-plane moments $M_{xx}$, $M_{yy}$, and $M_{xy}$ simultaneously for molecules wobbling around the (a) $\mu_x$ axis and (b) $\mu_z$ axis, using (i) NA = 0.1 and (ii) NA = 1.4 objective lenses. The dark gray regions are bounded from above by the quantum bound [Equation (18)]; light gray regions are bounded from above by the classical bound [Equation (19)]. Orange: standard PSF with x- and y-polarized detection (xyPol); cyan: standard PSF with linearly polarized detection at ±45° in the xy-plane (45°Pol); green: standard PSF with radially- and azimuthally-polarized detection (raPol); magenta: dual-objective interferometric detection with VWPs (dualObj). All curves assume 1 photon is detected from the dipole emitter. The estimation precisions of 45°Pol and xyPol in (i) are orders of magnitude larger than those of the other techniques and are not shown.
FIG. C1. (a) A single-objective interferometric imaging system that reaches the quantum limit of measurement precision. A vortex waveplate (VWP) is placed at the BFP to convert radially- and azimuthally-polarized light to x- (blue) and y-polarized (red) light, respectively, which is then separated by a polarizing beamsplitter (PBS). Camera (i) detects an azimuthally-polarized image identical to those captured by cameras (i,ii) in Figure 2. The radial channel is split and recombined by a pair of 50/50 beamsplitters (BS) in a Mach-Zehnder configuration; light in each arm is flipped using orthogonally-oriented Dove prisms (DPs). Cameras (ii) and (iii) detect images identical in shape but half as bright as those captured by cameras (iii,iv) in the dual-objective system in Figure 2. (b,c) Images of molecules with orientations $-\mu_x = \mu_y = \mu_z$ and $\mu_x = \mu_y = \mu_z$ captured at detectors (i)-(iii) and intermediate image planes (iv,v) (b) without and (c) with VWPs. Images depict a total of 2000 photons detected. Scale bar: 1 µm. Colorbar: photons per 58.5 × 58.5 nm² pixel.
FIG. E1. Classical CRB of several techniques (Appendix E) compared to the quantum CRB of estimating second-order orientational moments of a nearly-fixed (γ = 0.8) dipole emitter with average polar orientation (i) θ = 20° and (ii) θ = 80°, using 1 detected photon. (a) CRB SGV for estimating the in-plane squared moments $M_{xx}$ and $M_{yy}$ using a low 0.1 NA objective. (b) Best-possible precision $\sqrt{\mathrm{CRB}}$ of estimating the in-plane cross moment $M_{xy}$ using a low 0.1 NA objective. (c) Best-possible precision $\sqrt{\mathrm{CRB}}$ of estimating the out-of-plane squared moment $M_{zz}$ using a high 1.4 NA objective. The gray regions are bounded from above by the numerically-computed (a) QCRB or (b,c) $\sqrt{\mathrm{QCRB}}$. Orange: standard PSF with x- and y-polarized detection (xyPol); cyan: standard PSF with linearly polarized detection at ±45° in the xy-plane (45°Pol); green: standard PSF with radially- and azimuthally-polarized detection (raPol); magenta: dual-objective interferometric detection with VWPs (dualObj). All curves assume 1 photon is detected from the dipole emitter. The estimation precisions of 45°Pol in (a) and xyPol in (b) are orders of magnitude larger than those of the other techniques and are not shown.
To isolate the trade-off between measuring squared and cross moments, we analyze the in-plane second-order moments $(M_{xx}, M_{yy}, M_{xy})$ and assume that $B_{xz} = B_{yz} = B_{zz} = 0$ for simplicity. Since the total intensity of an image must be non-negative everywhere, the inequality
$$I = B_{xx} M_{xx} + B_{yy} M_{yy} + B_{xy} M_{xy} \geq 0 \qquad (E3)$$
must be satisfied for all $(M_{xx}, M_{yy}, M_{xy})$ such that $M_{xy}^2 = \langle\mu_x \mu_y\rangle^2 \leq \langle\mu_x^2\rangle\langle\mu_y^2\rangle = M_{xx} M_{yy}$. From the definition of the intensity basis images [Equation (E2)], we obtain the relation of Equation (E4).
$$J_{xx,yy,xy} = \begin{pmatrix} J_{xx,xx} & J_{xx,yy} & J_{xx,xy} \\ J_{yy,xx} & J_{yy,yy} & J_{yy,xy} \\ J_{xy,xx} & J_{xy,yy} & J_{xy,xy} \end{pmatrix} = \begin{pmatrix} J_{11} & J_{12} & J_{14} \\ J_{21} & J_{22} & J_{24} \\ J_{41} & J_{42} & J_{44} \end{pmatrix}. \qquad (E5)$$
We use the square root of the inverse of the determinant of the 2 × 2 FI submatrix $J_{xx,yy}$ to quantify the CRB SGV of estimating the squared second moments [Figure 3(a)], and we invert the diagonal entry $J_{44}$ to compute the CRB corresponding to $M_{xy}$ [Figure 3(b)]. When using the aforementioned polarized standard PSFs (xyPol and raPol, with and without interferometric detection) to measure molecules with mean orientations $\bar\mu_x = 1$ or $\bar\mu_z = 1$, there is zero correlation between estimating the in-plane squared moments $(M_{xx}, M_{yy})$ and the cross moment $(M_{xy})$, i.e., $J_{14} = J_{24} = 0$; thus the diagonal entry $J_{44}$ can be directly evaluated to quantify the classical FI. Next, we compute the classical FI of measuring the in-plane second moments of a molecule wobbling around the $\mu_x$ axis, i.e., $M_{xx} = (1 + 2\gamma)/3$, $M_{yy} = M_{zz} = (1 - \gamma)/3$ and $M_{xy} = 0$, as given in Equations (E6a)-(E7), where we have utilized the fact that the total energies in $B_{xx}$ and $B_{yy}$ are each normalized to one. The equalities in Equations (E6a) and (E6b) are only satisfied when $B_{xx} B_{yy} = 0 \;\forall\; [u, v]$, i.e., the classical FI saturates the QFI when $B_{xx}$ and $B_{yy}$ are spatially separated on the camera. However, if this condition holds, then $B_{xy} = 0$ [Equation (E4)], i.e., $I$ does not depend on $M_{xy}$ and contains no information for measuring $M_{xy}$.
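As a concrete illustration of these two figures of merit, the following minimal sketch (ours, not code from the paper; the FI matrix below is a hypothetical example chosen to satisfy $J_{14} = J_{24} = 0$, as for the polarized standard PSFs discussed above) computes the CRB SGV of the squared moments and the CRB of $M_{xy}$ from a given FI matrix.

```python
import numpy as np

# Hypothetical classical FI matrix over (M_xx, M_yy, M_xy), per detected
# photon; the zero off-diagonal entries encode J_14 = J_24 = 0.
J = np.array([[0.50, 0.10, 0.0],
              [0.10, 0.50, 0.0],
              [0.0,  0.0,  0.25]])

# CRB SGV of the squared moments: square root of the inverse determinant
# of the 2x2 submatrix J_{xx,yy} [cf. Figure 3(a)].
sgv_squared = np.sqrt(1.0 / np.linalg.det(J[:2, :2]))

# CRB of the cross moment M_xy: since J_14 = J_24 = 0, the diagonal
# entry J_44 can be inverted directly [cf. Figure 3(b)].
crb_mxy = 1.0 / J[2, 2]

print(f"CRB SGV (M_xx, M_yy): {sgv_squared:.3f}")
print(f"CRB of M_xy:          {crb_mxy:.3f}")
```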
This analysis can be extended to z-related squared and cross moments, resulting in a similar trade-off.
Appendix F: Impact of background photons on the estimation precision of first-order orientational dipole moments
FIG. E2. Classical FI of several techniques (Appendix E) versus the quantum FI of estimating second-order orientational moments of dipole emitters. (a) Inverse of the generalized variance for estimating $M_{xx}$ and $M_{yy}$; classical FIs for estimating (b) $M_{xy}$ for molecules wobbling around the $\mu_x$ axis and (c) $M_{zz}$ for molecules wobbling around the $\mu_z$ axis, as functions of (i) numerical aperture NA (for γ = 0) and (ii) rotational constraint γ [NA = 0.1 for (a),(b) and NA = 1.4 for (c)]. The gray regions are bounded from below by the QFI [Equation (18)]. Orange: standard PSF with x- and y-polarized detection (xyPol); cyan: standard PSF with linearly polarized detection at ±45° in the xy-plane (45°Pol); green: standard PSF with radially- and azimuthally-polarized detection (raPol); magenta: dual-objective interferometric detection with VWPs (dualObj). All curves assume 1 photon is detected from the dipole emitter.
FIG. F1. Quantum FI of estimating first-order orientational moments of fixed dipole emitters as a function of signal-to-background ratio (SBR). (a) QFI of estimating polar orientation. (b) QFI of estimating azimuthal orientation. Black, red, and white lines represent a QFI reduction of 50, 75, and 96 percent, i.e., a best-possible standard deviation in the presence of background equal to $\sqrt{2}$ times, twice, and 5 times that without background, respectively.
Evidence for Deviations from Fermi-Liquid Behaviour in (2+1)-Dimensional Quantum Electrodynamics and the Normal Phase of High-$T_c$ Superconductors

(Invited talk at the 4th Chia meeting on 'Common Trends in Particle and Condensed Matter Physics', Chia-Laguna, Sardegna, Italy, September 1995)
We provide evidence that the gauge-fermion interaction in multiflavour quantum electrodynamics in $(2 + 1)$-dimensions is responsible for non-fermi liquid behaviour in the infrared, in the sense of leading to the existence of a non-trivial (quasi) fixed point (cross-over) that lies between the trivial fixed point (at infinite momenta) and the region where dynamical symmetry breaking and mass generation occurs. This quasi-fixed point structure implies slowly varying, rather than fixed, couplings in the intermediate regime of momenta, a situation which resembles that of (four-dimensional) `walking technicolour' models of particle physics. Connection with the anomalous normal-state properties of certain condensed matter systems relevant for high-temperature superconductivity is briefly discussed. The relevance of the large (flavour) N expansion to the fermi-liquid problem is emphasized.
Introduction
One of the most striking phenomena associated with the novel high-temperature superconductors is their abnormal normal-state properties. In particular, these substances are known to exhibit deviations from the known Fermi-liquid behaviour, which are remarkably stable with respect to variations in the relevant parameters 1 . Recently, Shankar 2 and Polchinski 3 have presented an intuitively appealing idea of using the Renormalization-Group (RG) approach, so powerful in particle and statistical physics, to systems of interacting electrons with a Fermi surface, in order to understand, at least qualitatively, how deviations from Fermi-liquid behaviour can appear naturally (as opposed to being fine-tuned). From this point of view, Landau's fermi liquid is nothing other than a system of free electrons, which has no relevant perturbations, in the RG sense, that can drive it away from its trivial infrared fixed point. In general, however, as we integrate out certain modes of our original theory, some interactions may become relevant in the RG sense, i.e. their effective coupling may grow as one lowers the momentum scale. Then, two interesting possibilities arise 3 . (i) Fermion bound states are formed, symmetries are spontaneously broken, and the low-energy spectrum bears little resemblance to that of the original theory. In such a case one has to re-write the effective theory in terms of the new degrees of freedom: for instance, in the superconducting case this is the Landau-Ginzburg effective action expressed in terms of the fermion condensate. (ii) Alternatively, the growth of the coupling is cut off by quantum effects at a certain low energy scale, and in this way a non-trivial fixed-point structure emerges. The low-energy fluctuations still correspond to fields of the original theory despite their non-trivial interactions. This case leads to observable deviations from the Fermi-liquid behaviour.
In the case of the high-$T_c$ materials, the physically interesting question is whether a single model theory can be found with a structure rich enough to describe both the non-fermi-liquid behaviour of the normal phase and the transition to (and phenomenology of) the superconducting phase. In this article we shall put forward a candidate model which, as we shall argue, fulfills this rôle.
It is known that possibility (i) above can be caused by relevant interactions of superconducting (BCS) or charge-density-wave (CDW) type, both of which are accompanied by the formation of fermion condensates. Possibility (ii) has only rather recently begun to be seriously explored 2,3,4 . It has been known for a long time that the electromagnetic interaction of the vector potential can cause deviations from fermi-liquid behaviour 5 , but its effects are suppressed by terms of $O[(v_F/c)^2]$, with $v_F$ the Fermi velocity and $c$ the light velocity; they occur only at much lower energies than those relevant to the high-$T_c$ materials. Nevertheless, the electromagnetic example is suggestive enough, perhaps, to motivate a search for other (non-electromagnetic) gauge interactions in which the effective signal velocity would be of order $v_F$, and which might be responsible for a non-trivial fixed-point behaviour. It was precisely this sort of ("statistical") gauge-fermion interaction that was studied (in different forms) in refs. 3 and 4, and which led to a non-trivial fixed-point structure in the infrared.
Returning now to possibility (i), we recall that it has been shown 6 that a variant of QED in (2+1) dimensions (QED$_3$) leads to superconductivity, characterized, as appropriate to two space dimensions, by the absence of a local order parameter (Kosterlitz-Thouless mode). Thus the exciting possibility arises that a single fermion-gauge theory could describe both non-fermi-liquid behaviour in the normal phase and the transition to the superconducting phase.
The main purpose of this talk is to review an (approximate) renormalization-group analysis 7 of a simplified version of this model, namely QED$_3$ itself, which indicates that QED$_3$ exhibits two quite different behaviours depending on the momentum scale. At very low momenta QED$_3$ enters a regime of dynamical mass generation (d.m.g.), which in the full theory leads to superconductivity; but at "intermediate" momenta (see below) d.m.g. does not occur and the dynamics is controlled by a non-trivial fixed point, leading to non-fermi-liquid behaviour. Thus we have the possibility, for the first time to our knowledge, of one theory encompassing both the normal and the superconducting phases of the high-$T_c$ cuprates.
At this point the reader might worry that applying renormalization-group techniques to a super-renormalizable theory like QED$_3$ is redundant, since the theory has no ultraviolet divergencies. However, this is a mistaken view. In the modern approach to the RG and effective field theories, one considers quite generally how a theory evolves as one integrates out degrees of freedom above a certain momentum scale, moving progressively down in scale. From this point of view an effective field theory description is equally applicable to non-renormalizable, renormalizable, and super-renormalizable theories. However, there are some crucial new features in the case of a super-renormalizable theory (which, to our knowledge, have not been identified hitherto). First, the QED$_3$ coupling $e$ introduces an intrinsic intermediate scale $e^2$, which has the dimension of mass, this being directly related to the super-renormalizability of the theory. The physical effect of this will be the existence of an intrinsic mass scale, and we can expect different physics in different regimes of momenta relative to this mass scale ($p >> e^2$, $p \simeq e^2$, $p << e^2$).
The second distinctive feature of our RG analysis of QED$_3$ concerns the way in which we introduce a running coupling. Conventionally, such running couplings are dimensionless, so, once again, the dimensionfulness of $e^2$ presents a new feature. The way in which an effective dimensionless running coupling can be introduced into QED$_3$ has been shown by Kondo and Nakatani (KN) 8 , building on work by Higashijima 9 for QCD$_4$. The crucial step is to consider the effect of wavefunction renormalization in the Schwinger-Dyson (SD) equations, as controlled by a large-N approximation. In this case, one considers the theory at large N with $\alpha = e^2 N$ held fixed, and the dimensionless coupling that runs is essentially 1/N. KN actually considered only the regime in which dynamical mass generation (chiral symmetry breaking) occurs, where of course the gauge coupling becomes strong and the use of a large-N expansion is unavoidable. What we did in ref. 7 was to identify the "normal" (no dynamical mass generation) regime of the theory, and extend the RG-type analysis of KN to this normal regime. We argued that there exists a non-trivial (quasi-)fixed point of the effective dimensionless coupling, which governs the dynamics for a range of intermediate momenta p ≃ α, lying between the trivial fixed point at p >> α and the region p << α of dynamical mass generation. Important to this analysis is the introduction (following KN) of an infrared cutoff ǫ, which serves to delineate the different momentum regimes. The analysis of ref. 7 is performed at zero temperature. Some attempts have also been made to connect this to finite-temperature calculations, by interpreting the temperature as an effective infrared cutoff. We presented an approximate computation, at finite temperature, of the electrical resistivity ρ of the fermionic system, and argued that it is the existence of the non-trivial RG fixed point which is responsible for the fact that the non-fermi-liquid behaviour (ρ approximately proportional to the temperature T) is observed over so large a temperature range. Wavefunction renormalization effects, important at O(1/N), lead to calculable logarithmic deviations from the linear-in-T behaviour.
At this stage it is useful to compare and contrast our approach with two other recent explorations of gauge theories in (2 + 1) dimensions in a similar context, by Polchinski 3 and by Nayak and Wilczek 4 . Both works deal with fermions interacting with a statistical gauge field, the latter representing magnetic spin-spin interactions. In both, the fermions represent spin quasi-particle excitations (spinons), and they should therefore not be identified with the carriers of ordinary electric charge (holes or electrons). This is to be sharply contrasted with our own model of refs. 6,7 , in which the spin-charge separation is done differently, leading to the fermions in our model carrying both statistical and ordinary charge.
The alert reader might worry at our cavalier use of a relativistic fermion field theory (QED 3 ) to infer conclusions pertaining to complicated condensed matter systems with non-trivial fermi surfaces, like the ones of relevance to high-temperature superconductivity. To such objections, we first stress that the results of ref. 7 should only be viewed as a qualitative attempt at identifying one particular (but important) source of (cross-over) deviations from fermi liquid behaviour. In support of this we refer the reader to an important observation by Polchinski 3 according to which, in such condensed matter systems, kinematics implies that the most important interactions among fermions are those which pertain to fermionic excitations whose momentum components tangent to the fermi surface are parallel. This is the only way that the gauge field momentum transfer can still be relatively large as compared to the distance of the fermion momenta from the fermi surface, as required by special kinematic conditions 3 . There are two cases where such conditions are met in condensed matter physics. The first pertains to nested fermi surfaces, at which the points with momenta k 0 and −k 0 have parallel tangents. This is the situation relevant to BCS or CDW. The other situation, which is the bulk of Polchinski's work and will be of interest to us as well, is the case where the fermions are close to a single point on the fermi surface. This means that the most important fermion interactions are those which are local on the fermi surface, and hence qualitatively this situation can be extended to relativistic (Dirac) fermions as well, since the dispersion relations become effectively linear 6 .
It should be stressed that the curvature of the fermi surface also plays a non-trivial rôle in deviations from fermi-liquid behaviour, since any shape distortion appears as a relevant RG deformation of the model. However, as already pointed out in ref. 3, the remarkable stability of the observed non-fermi-liquid behaviour in the normal phase of the high-$T_c$ materials, which persists up to temperatures of 600 K, cannot be explained by deformations of the fermi surface, as this would require an unnatural fine-tuning. It is our belief that a dominant rôle in the phenomenon is played by the statistical gauge interaction among charged holes, which arguably characterizes magnetic superconductors 6 . Support for this conjecture, within the context of QED$_3$ prototypes, was one of the main results of ref. 7.
Finally, in an attempt to convince the more skeptical formal reader of the qualitative validity of relativistic models as prototypes for the description of such phenomena in condensed matter, we draw their attention to the fact that the quasi-fixed-point behaviour that characterizes 7 QED$_3$ at T = 0 seems to persist for a wide range of finite temperatures T > 0, where Lorentz invariance is definitely lost.
Another important point, recently made by Shankar 2 in connection with the RG approach to interacting fermions, is the use of an effective large-N expansion in cases where the effective momentum cut-off Λ is much smaller than the size of the fermi surface $k_F$, $\Lambda/k_F \to 0$. Such a situation is encountered in a RG study of (deviations from) fermi-liquid theories, the Landau fermi liquid being defined as a trivial infrared fixed point in the RG sense. To understand the connection of a large-N expansion with the infrared behaviour of excitations, one should recall the work of ref. 10, where the RG approach to the theory of the Fermi surface has been studied in a mathematically rigorous way. The basic observation of ref. 10 is that, unlike the case of relativistic field theories, in systems with an extended fermi surface the fermionic fields exhibiting the correct scaling are not the original excitations $\psi_x$ ($x$ a configuration-space variable), but rather quasiparticle excitations defined, schematically, by
$$\psi_x = \int d\Omega\, e^{i k_F \Omega \cdot x}\, \psi_\Omega(x), \qquad \psi_\Omega(x) \equiv \int_{|k| < \Lambda} dk\, e^{i k \cdot x}\, \tilde\psi(k_F \Omega + k), \qquad (1)$$
where for the sake of simplicity we assumed that the fermi surface is spherical with radius $k_F$, Ω is a set of angular variables defining the orientation of the momentum vector of the excitation at a point on the fermi surface, and the tilde denotes an ordinary Fourier transform in a momentum space $K$. These quasiparticle fields have propagators with the correct scaling 10 , which allows ordinary RG techniques, familiar from relativistic field theories, to be applied, such as the appearance of renormalized coupling constants, scaling fields, etc. Indeed it is not hard to understand why this is so. For this purpose it is sufficient to observe that for large $k_F$ the exponent of the exponential in (1) is nothing other than the linearization, $k \equiv K - k_F \Omega$, about a point on the fermi surface, which makes these quasiparticle excitations identifiable with ordinary field variables of the low-energy limit of these condensed matter systems. The latter is a well-defined field theory 6 . The crucial point in this interpretation is that the field variables now depend on 'internal degrees of freedom', Ω, which denote the angular orientation of the momentum vectors on the fermi surface. In two spatial dimensions, which is the case of interest, Ω is just the polar angle θ. Following ref. 2 we discretize this angular space into small cells of extent $f(\Lambda/k_F) << 1$, e.g. $f = \Lambda/k_F$:
$$\psi_x = \sum_{\Omega_i} e^{i k_F \Omega_i \cdot x}\, \psi_{\Omega_i}(x), \qquad (2)$$
where $k$ denotes a linearizing momentum about a point on the fermi surface. Doing so, we observe 2 that, when looking at interaction terms involving fermionic particle-antiparticle pairs $\bar\psi\psi$, the leading interactions are among those fermion-antifermion pairs for which the creation and annihilation operators lie within the same angular cell. This happens for purely kinematic reasons in the infrared regime $\Lambda << k_F$, similar to those mentioned previously 3 , which implied that the most important fermion interactions on the fermi surface must be among excitations whose tangents to the fermi surface are parallel. It is then straightforward to see that interaction terms involving either gauge excitations or just fermions resemble those in large-N relativistic field theories, given that the only Λ dependence appears through proportionality factors $f(\Lambda/k_F) << 1$ in front of the interactions, in the infrared. One then identifies $1/N$ with $f(\Lambda/k_F) << 1$, and the only difference from ordinary particle-physics large-N expansions is the dependence of this effective N on the cut-off Λ: that is to say, 1/N runs.
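To make the patch counting concrete, here is a minimal numerical sketch (ours, not from refs. 2 or 10): it divides a circular fermi surface into angular cells of width $f = \Lambda/k_F$ and shows the effective flavour number N growing, i.e. 1/N running to zero, as the cutoff Λ is lowered.

```python
import numpy as np

# Hypothetical illustration: a circular fermi surface of radius k_F is cut
# into angular cells of width ~ Lambda/k_F (in radians). The number of
# cells plays the role of an effective flavour number N, so the effective
# coupling strength 1/N shrinks as the RG cutoff Lambda is lowered.
k_F = 1.0                                # fermi momentum (arbitrary units)
for Lam in [0.5, 0.1, 0.01, 0.001]:      # successively lower RG cutoffs
    cell_width = Lam / k_F               # angular extent f(Lambda/k_F)
    N_eff = int(2 * np.pi / cell_width)  # number of cells on the circle
    print(f"Lambda = {Lam:6.3f}  ->  N_eff = {N_eff:6d},  1/N = {1.0/N_eff:.2e}")
```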
As we showed in ref. 7, however, large-N expansions in three-dimensional QED can exhibit precisely such a scale dependence: wave-function renormalization leads to a renormalized 'running' 1/N. Furthermore, the running is of a novel nature. Instead of finding a non-trivial infrared fixed point, we shall demonstrate the existence of an (intermediate) regime of momenta where the effective running of the gauge coupling, which is essentially 1/N times a spontaneously appearing scale, is slowed down considerably, so that one encounters a quasi-fixed-point situation. This quasi-fixed-point structure is sufficient to cause (marginal) deviations from the fermi-liquid picture. At finite temperatures, there are indications 7 that this behaviour will lead to logarithmic temperature-dependent corrections to the linear resistivity of the fermion system, the latter being the result of the presence of (statistical) gauge interactions. This makes such theories plausible candidates for a correct qualitative description of deviations from Landau fermi-liquid theory, which might be related to the observed anomalous normal-phase properties of the high-$T_c$ cuprates.
2 QED$_3$: Super-renormalizability, 'running' couplings and non-trivial (quasi-)fixed-point structure

Three-dimensional quantum electrodynamics (QED$_3$) has recently received a great deal of attention (refs. 11-19), not only as a result of its potential application to the study of planar high-temperature superconductivity 6 , mentioned in the introduction, but also because of its use as a prototype for studies of chiral symmetry breaking in higher-dimensional (non-Abelian) gauge theories 20 .
Despite the theory's apparent simplicity, the situation is not at all clear at present. A great deal of controversy has arisen in connection with the rôle of the wave-function renormalization. In the early papers 11 the wave-function renormalization A(p) was argued to be 1 in the Landau gauge to leading order in 1/N, where N is the number of fermion flavours, and thus was ignored. More detailed studies, however, showed 15 that, within the resummed 1/N graphs, A(p) acquires a non-trivial momentum dependence, of the form $A(p) \simeq (p/\alpha)^{8/(3\pi^2 N)}$ in the Landau gauge, and there appeared to be a critical number of flavours $N_c$, below which, as argued in ref. 11, dynamical mass generation occurs. (This result was not free of ambiguities either, given that the inclusion of wave-function renormalization necessitates the introduction of a non-trivial vertex function. The exact expression for the latter is not tractable, even to order O(1/N), and one has to assume various ansatzes 15 that can be questioned.) The situation became clearer after the work of ref. 8, which showed that the introduction of an infrared cut-off affects the results severely, depending on the various ansatzes used for the vertex function. In particular, there are extra logarithmic scaling violations in the expression for $N_c$, depending on the form of the vertex function assumed, which render the limit where the infrared cut-off is removed not well-defined.
For our present purposes, however, we are not so much interested in whether the inclusion of wavefunction renormalization leads to a critical $N_c$ or not, as in the more general point that, as noted by Kondo and Nakatani (KN) 8 following Higashijima 9 , the vacuum polarization contribution to A effectively produces a running coupling, even in the super-renormalizable theory of QED$_3$. KN's analysis was restricted to the regime of dynamical mass generation, and our main purpose in this section is to extend it to the "normal" regime where mass is not dynamically generated. We emphasize now, however, that if A is set equal to unity at the outset, the power of the running-coupling concept to unify both regimes is completely lost.
We now proceed to a brief review of our analysis in ref. 7. Following ref. 8, we make the vertex ansatz
$$\Gamma^\mu(p) = A(p)^n \gamma^\mu, \qquad (4)$$
where $p$ denotes the momentum of the photon. The Pennington and Webb 15 ansatz corresponds to n = 1, for which chiral symmetry breaking occurs for arbitrarily large N 21 ; it is this case that was argued to be consistent with the Ward identities that follow from gauge invariance 15 . Here we keep the exponent n arbitrary 8 and discuss qualitatively the implications of the vertex ansatz, including its finite-temperature behaviour, for various ranges of the parameter n. As we shall argue below, this is crucial for the low-energy renormalization-group structure of the model.
Using the ansatz (4), one can analyze the Schwinger-Dyson (SD) equations, in the various regimes of momenta, in terms of a running coupling. For pedagogical purposes, we first concentrate on the (infrared) regime of dynamical mass generation. The relevant SD equation for the wave-function renormalization A(p) [Equation (5) of ref. 7] contains two momentum integrals, one over the region ǫ ≤ k ≤ p and one over p ≤ k ≤ α; here $g_0 = 8/\pi^2 N$, N is the number of fermion flavours, and ǫ is an infrared cutoff.
In the low-momentum region relevant for dynamical mass generation, p << α, the first term on the right-hand side of (5), cubic in (k/p), may be ignored. Then, taking into account that $G(k^2) = A(k)^n$, and using the bifurcation method, in which one ignores the gap function B(k) in the denominators of the SD equations, one easily obtains
$$A(t)^{1-n}\,\frac{dA}{dt} = \frac{g_0}{3}, \qquad t \equiv \ln(p/\alpha), \qquad (6)$$
which has the solution
$$A(t) = \left[1 + \frac{(2-n)\,g_0}{3}\,t\right]^{\frac{1}{2-n}}. \qquad (7)$$
Substituting into the SD equation for the gap, one then obtains a 'running' coupling 8 in the low-momentum region,
$$g_L(t) \equiv \frac{g_0}{A(t)^{2-n}} = \frac{g_0}{1 + \frac{(2-n)\,g_0}{3}\,t}, \qquad (8)$$
which, we note, is actually independent of ǫ. The existence of the dimensionless parameter $g_L$ in QED$_3$ may be associated with the ratio of the gauge coupling, $e^2/\alpha$, given that in the large-N analysis the natural dimensionful scale α has been introduced. Thus, a renormalized running $N^{-1}$ might be thought of as expressing 'charge' scaling in this super-renormalizable theory. In particular, (8) implies that the β function corresponding to $g_L$ is of 'marginal' form,
$$\beta_L \equiv \frac{dg_L}{dt} = -\frac{(2-n)}{3}\,g_L^2. \qquad (9)$$
Thus, depending on the sign of 2-n, one may have marginally relevant or irrelevant couplings $g_L \propto e^2/\alpha$. The first derivative of the β function with respect to the coupling $g_L$ is
$$\frac{\partial \beta_L}{\partial g_L} = \frac{2(n-2)}{3}\,g_L, \qquad (10)$$
and since $g_L > 0$ by construction, its sign depends on the sign of n-2. For n < 2 (the marginally relevant case) the gauge interaction decreases rapidly as one moves away from low momenta, and the theory is "asymptotically free" 8 . If n > 2 (the marginally irrelevant case), on the other hand, $g_L(t)$ tends to zero in the low-momentum region, whilst for n = 2 the coupling is exactly marginal and one recovers the results of refs. 11,16 about the existence of a critical flavour number. Gauge invariance, in the sense of the Ward-Takahashi identity, seems to imply 15,16 n ≤ 2, and this is the range we shall explore in this article.
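To see the character of this flow explicitly, the following minimal sketch (ours) evaluates the reconstructed running coupling (8) for illustrative values n = 1 and N = 2; the growth of $g_L$ as t decreases is the infrared 'asymptotic freedom' described above.

```python
import numpy as np

# Sketch of the low-momentum flow (8)-(9) as reconstructed above.
# Assumptions (ours): n = 1 vertex ansatz, N = 2 fermion flavours.
n, N = 1, 2
g0 = 8.0 / (np.pi**2 * N)      # g_0 = 8 / (pi^2 N)

def g_L(t):
    """Running coupling g_L(t), t = ln(p/alpha) <= 0 in the low-momentum region."""
    return g0 / (1.0 + (2 - n) * g0 * t / 3.0)

for t in [0.0, -2.0, -4.0, -6.0]:   # p = alpha down to p << alpha
    print(f"t = ln(p/alpha) = {t:5.1f}  ->  g_L = {g_L(t):.3f}")
# g_L grows as t decreases: the coupling is driven strong in the far
# infrared (for n < 2), signalling the onset of dynamical mass generation.
```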
Our task in ref. 7 was to extend (8) beyond the region p << α. Consider first the true ultraviolet region p → ∞. If (8) were correct for p >> α, one would find a zero of the β function as t → ∞ at the trivial fixed point g* = 0, an ultraviolet fixed point. However, neither (8) nor (9) is reliable for the range of momenta p >> α; both formulas were derived in the regime of momenta relevant to dynamical mass generation, p << α.
This being so, do we have an alternative argument for a trivial ultraviolet fixed point? The answer is affirmative. To this end we use the results of ref. 22, employing a quenched fermion approximation in large-N QED. The result of such an investigation is that once fermion loops are ignored, and hence only tree-level graphs (ladders) are taken into account, the wave-function renormalization is rigorously proved to be trivial in the Landau gauge:
$$A(p) = 1. \qquad (11)$$
This result is a consequence of special mathematical relations among resummed ladder graphs in the Schwinger-Dyson equations. Now in our case, one observes that in the high-energy regime, p → ∞, the 1/N-resummed gauge-boson polarization tensor vanishes as Π(p → ∞) ≃ α/(8p) → 0. Thus, the situation is similar to the quenched approximation, which implies the absence of any wave-function renormalization (11), and therefore the vanishing (triviality) of the effective ('running') coupling constant g in the ultraviolet regime of momenta. This is in qualitative agreement with the naive estimate made above, based on the formulas (8) and (9).
The situation is, therefore, as follows. The coupling grows from the trivial fixed point (ultraviolet regime), where there is no mass generation, to stronger values as the momenta become lower. According to the naive formula (9), this coupling grows indefinitely at low momenta and the perturbation expansion breaks down. But, to repeat, (8) was derived for the regime p << α, and the question now arises whether nothing new happens from this regime all the way up to p → ∞, or whether there is interesting structure at intermediate scales. In particular, we might envisage a "quasi-fixed-point" situation, in which g remains more or less stationary around the value g(0) for a wide range of t below t = 0, before commencing to grow rapidly at very low momenta.
The answer 7 to the above question turns out to reside, essentially, in the infrared cutoff ǫ (which, as we noted above, actually disappeared from (8)). The coupling of (8) is "asymptotically free" (i.e. grows rapidly in the far infrared) for n < 2, provided that the ratio α/ǫ is large enough; in this case dynamical mass generation (d.m.g.) occurs. To get to the region where d.m.g. does not occur, we must consider smaller values of α/ǫ, tending ultimately to unity. This is the region that will yield the effective non-trivial fixed-point structure. In this case p ≃ α, and hence the only allowed region for the momentum k in (5) is k ≤ p, which now eliminates the second term in (5). Solving (5) in this approximation (and taking B = 0, since d.m.g. does not occur), with the vertex (4), one obtains
$$A(t)^{1-n}\,\frac{dA}{dt} = -\frac{g_0}{3}\,e^{3(t_0 - t)}, \qquad (12)$$
which can be easily solved with the result
$$A(t) = \left[\mathrm{const} + \frac{(2-n)\,g_0}{9}\,e^{3(t_0 - t)}\right]^{\frac{1}{2-n}}, \qquad (13)$$
where the constant is positive and can be found from the value of the wave-function renormalization at $t = \ln(\epsilon/\alpha) \equiv t_0$, namely $A(t_0) = 1$; this yields $\mathrm{const} = 1 - \frac{(2-n)\,g_0}{9}$. Substituting (13) back into the gap equation, one obtains a 'running' coupling constant in this new intermediate regime,
$$g_I(t) \equiv \frac{g_0}{A(t)^{2-n}} = \frac{g_0}{1 - \frac{(2-n)\,g_0}{9}\left(1 - e^{3(t_0 - t)}\right)}. \qquad (14)$$
We note that, just as the "lower scale" ǫ disappeared from (8), so the "intermediate scale" α is absent from (14).
Let us study the fixed-point structure of this renormalization-group flow. To this end, consider the β function obtained from (14):
$$\beta_I \equiv \frac{dg_I}{dt} = 3\,g_I\left(1 - \frac{g_I}{g_I^*}\right). \qquad (15)$$
Taking into account that $g_0 = 8/\pi^2 N$, we observe that $\beta_I$ vanishes not only at $g_I = 0$ but also at the non-trivial point
$$g_I^* = \frac{g_0}{1 - \frac{(2-n)\,g_0}{9}}. \qquad (16)$$
For what momenta is this fixed point reached? Accepting (14) at face value, the answer would be that it is reached for p → ∞. But of course (14) is not valid for p >> α, being appropriate for ǫ < p < α, where the ratio ǫ/α is smaller than unity, though not so very small that p can enter the region of d.m.g. Referring to the second equality in (14), we see that when p ≃ α the quantity $g_I$ will be very close to $g_I^*$, differing from it by terms of order $(\epsilon/\alpha)^3 \frac{1}{N^2}$, which is negligible. Indeed, as p moves down to p ≃ ǫ, $g_I$ arrives at $g_0$, which is still within $O(1/N^2)$ of $g_I^*$.
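A minimal numerical check of this near-stationarity, using the reconstructed expressions (14) and (16) with illustrative (assumed) values N = 2, n = 1 and ǫ/α = 0.3, reads:

```python
import numpy as np

# Quasi-fixed-point behaviour of the intermediate-regime coupling (14).
# Parameter values here are illustrative assumptions, not the paper's.
n, N = 1, 2
g0 = 8.0 / (np.pi**2 * N)
t0 = np.log(0.3)                           # t0 = ln(eps/alpha)
g_star = g0 / (1.0 - (2 - n) * g0 / 9.0)   # Equation (16)

def g_I(t):
    """Running coupling in the intermediate regime, Equation (14)."""
    return g0 / (1.0 - (2 - n) * g0 / 9.0 * (1.0 - np.exp(3 * (t0 - t))))

for t in np.linspace(t0, 0.0, 5):          # sweep eps <= p <= alpha
    print(f"t = {t:6.3f}   g_I = {g_I(t):.5f}   (g_I* = {g_star:.5f})")
# Across the whole window eps < p < alpha, g_I stays within O(1/N^2) of
# g_I*: the coupling is almost stationary, i.e., a quasi-fixed point.
```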
Thus the crucial point is that there is, on the basis of this admittedly approximate analysis, a significant momentum region over which the coupling $g_I$ varies very slowly, and we are in a "quasi-fixed-point" situation. In a sense, this slow variation of $g_I$ in the range ǫ < p < α (for not too small ǫ) provides a reconciliation between the normalizations adopted in the two different approximations (8) and (14), namely between $g_L(p = \alpha) = g_0$ and $g_I(p = \epsilon) = g_0$.
The new fixed point occurs at weak coupling for large N . This is consistent with the interpretation that such a fixed point should characterize a regime of the theory, as determined by the ratio α/ǫ, where dynamical mass generation does not occur.
In summary, then, our analysis in ref. 7 suggests a significant modification of the picture presented by Kondo and Nakatani 8 . Whereas those authors considered only ǫ << α, which is the regime of "asymptotic freedom" and d.m.g., we have also explored the region of smaller values of α/ǫ, and have concluded that here quantum corrections create a quasi-fixed point with weak coupling. Both regions of α/ǫ are important in our application of these results to the cuprates, as we discuss in some detail in ref. 7, where we tried to relate the "ǫ" of this QED$_3$ to the temperature T of QED$_3$ at finite temperature.
At this stage, it is worth pointing out the similarity of the above-demonstrated 'slow running' of the effective gauge coupling g at intermediate scales with (four-dimensional) particle-physics models of 'walking technicolour' type 23 . Such models pertain to gauge theories with asymptotic freedom and involve regions of momentum scale at which effective running couplings move very slowly with the scale, exactly as happens in our (asymptotically free) QED$_3$ case. (A similarity of QED$_3$ with walking technicolour had also been pointed out previously 24 , but from a different point of view. In ref. 24, a formal analogy of QED$_3$ with walking technicolour models was noted, based on the rôle of fermion loops in softening the logarithmic confining gauge potential to a Coulombic 1/r type in the infrared regime of momenta. This 1/r behaviour of the potential, and its relevance to dynamical chiral symmetry breaking, is common to both theories. The formal analogy is achieved 24 by replacing the coupling g² of the four-dimensional theory by 1/N of QED$_3$. However, the N of ref. 24 does not vary with the energy scale, since wave-function renormalization effects were not discussed in that case. This is the crucial difference in our case, where there is a more precise analogy with walking technicolour theories, due to the slowing-down of the variation of the 'effective' N (14) with the (intermediate) energy scale.) This slow running of the coupling results, in such theories, in a significant enhancement of the size of the fermion condensate. In our case, such condensates are responsible for the opening of a superconducting gap, and therefore one could associate the slow running of the coupling at intermediate scales with the suppression of the coherence length of the superconductor (inverse of the fermion condensate) in the phase where dynamical mass generation occurs. Such a suppression, as compared to the phonon (BCS) type of superconductivity, which is an experimentally observed and quite distinctive feature of the high-$T_c$ cuprates 25 , appears then, in the context of the above gauge-theory model 6 , as a natural consequence of the non-trivial quasi-fixed-point renormalization-group structure. Note that in ref. 6 the enhancement of the superconducting gap-to-critical-temperature ratio, as compared to the standard BCS case, had been attributed to the super-renormalizability of the theory and the T-independence of quantum corrections, features which are both associated with the above quasi-fixed-point (slow-running) situation. It is understood, of course, that before we arrive at definite conclusions about the actual size of the coherence length in the model, we should be able to perform exact calculations by resumming the higher orders in 1/N, to see whether these features persist. At present this is impossible analytically, but one could hope for (non-perturbative) lattice simulations of the above systems 6,26 .
For completeness, we would like to compare our results 7 to other existing results in the literature concerning the infrared structure of QED$_3$, and in particular to the results of refs. 8,27,28. In ref. 27, it has been argued, on the basis of a power-counting analysis which did not make any use of the Ward-Takahashi identities, that there is no renormalization of N to any order in 1/N in the infrared regime of the model. The arguments were based on the softened Coulombic form of the gauge-boson propagator in the infrared, resulting from fermion vacuum polarization: $D_{\mu\nu} \propto (1/q)\left(g_{\mu\nu} - (1 - \xi)\,q_\mu q_\nu/q^2\right)$, in an arbitrary ξ gauge, for small momentum transfers q << α. It is worth noticing that such arguments appear to apply equally well to Abelian and non-Abelian theories, since in the latter case non-Abelian three- or four-gluon interactions could not contribute to the potential scaling-violating interactions. This analysis was performed without implementing an infrared cutoff, due to the infrared finiteness of the (zero-temperature) theory. In the work of ref. 8, which applies to the infrared regime, an infrared cut-off is introduced, which changes the scaling properties of the gauge-boson propagator. In this case, the scale-invariant situation seems to occur only for the value n = 2 in the vertex ansatz, which notably does not satisfy the Ward-Takahashi identities 15 . As we have seen, gauge invariance requires n = 1, and in that case there exists a running N at infrared momentum scales, as well as a finite critical flavour number, which is however infrared cut-off dependent and diverges in the limit where the cut-off is removed.
We can also compare this result with that of ref. 28, which claims to have proven the gauge invariance of the critical number of flavours in QED$_3$. There, a non-local gauge fixing was used; this mixes orders in the 1/N expansion, in the sense that the gap function in the SD equations now contains graphs of O(1/N²), whilst the wave-function renormalization still remains of O(1/N). The analysis of ref. 8, in contrast, works at a uniform order in 1/N. Thus, the key to a possible explanation of the discrepancy between the works of refs. 27,28 and ref. 8 seems to be hidden in the higher orders of the large-N expansion, as well as in the presence of the infrared cut-off. Notice that a naive removal of the infrared cut-off might lead to ambiguities, as becomes clear from the work of ref. 18 on finite-temperature field theories, provided that one makes 17 the (physically sensible) identification/analogy of the infrared cut-off with the temperature scale, at least within a condensed-matter effective-theory framework. Now we come to our case. As can be seen from the above discussion, our results can offer a way out of the above-mentioned discrepancy. For us, the momentum regime of interest is not the infrared one, where dynamical mass generation occurs, but the intermediate scale. In this regime, the power-counting arguments of ref. 27 do not apply, since the gauge-boson propagator does not have a simple Coulombic behaviour. Thus, the wave-function renormalization effects that appear to exist in our, admittedly rough, truncation of the SD equations might not be incompatible with the results of ref. 27, pertaining to the existence of a critical flavour number. From our point of view, this would mean that, although there is a (slow) running of an effective N, and thus scale invariance is marginally broken, the running of the coupling is even more suppressed in the infrared, where strong quantum effects cut off the increase of the (asymptotically free) coupling. The infrared cutoff then appears as the (spontaneous?) scale above which a slow running of the (asymptotically free) coupling becomes appreciable. In a condensed-matter-inspired framework, such a spontaneously appearing scale makes perfect sense if one associates the infrared cut-off with the temperature scale 17 . For momenta slightly above the infrared cut-off, then, the situation of KN 8 seems to be valid. This regime may be viewed as the boundary regime in which dynamical mass generation can still happen. Below the infrared scale, which is a regime that makes perfect sense in an infrared-finite theory such as QED$_3$, dynamical mass generation certainly occurs, and the arguments of ref. 27 apply, leading to an effective cut-off of the increase of the coupling constant. In this regime, the gauge-boson propagator assumes a softened Coulombic 1/r form, which has been argued to be important for a (superconducting) pairing attraction among fermions (holes) in the model of ref. 6. Such a situation was envisaged in ref. 9 for the case of chiral symmetry breaking in four-dimensional QCD, which in this way was dissociated from the confining properties of the theory.
In the work of KN 8 and in ours, all these issues could be confirmed only if a more complete analysis of the SD equations, including higher-order 1/N corrections, were performed. Whether resummation to all orders in 1/N washes out completely the wave-function renormalization effects at intermediate momenta, leading to an exactly marginal (scale-invariant) situation, or keeps this effect at a RG-marginal level, remains an unresolved issue at present. On the basis of the above discussion, one would expect that marginal deviations from scale-invariant behaviour at intermediate momenta, such as those studied in the present work, survive higher-order analyses, but also that they lead to a critical number of flavours, since the latter is an entity pertaining to the infrared regime of the theory. Moreover, for us, who are interested in performing the analysis in a condensed-matter rather than a particle-theory framework, there is the issue of the ambiguous infrared limit of the theory at finite temperatures, which is by no means a trivial matter 18 . It seems to us that all these important questions can only be answered if proper lattice simulations of the pertinent systems are performed. At present, the existing computer facilities might not be sufficient for such an analysis.
However, as pointed out in ref. 7, the slow running of the coupling constant of the model at intermediate momentum scales, if true, is a desirable effect from a condensed-matter point of view, where both infrared and ultraviolet cut-offs should be kept. The wave-function renormalization effects discussed above prove sufficient to produce a (marginal) deviation of the theory from the fermi-liquid fixed point. At finite temperatures, this effect can have observable consequences, and might be responsible for the experimentally observed abnormal normal-state properties of the high-$T_c$ cuprates, the physics of which the above gauge theories are believed to simulate. We stress once again that such effects would be absent in an exactly marginal situation, like the one suggested in ref. 27.
3 Linear behaviour of the (normal-phase) resistivity in QED$_3$ with the temperature scale

We would like to conclude this talk by making some remarks on the behaviour of the resistivity of QED$_3$, i.e. its response to an externally applied electromagnetic field. This requires connecting the above picture of the behaviour of QED$_3$ at zero temperature to that of the same theory at finite temperature, T. In the absence, again, of anything like an exact solution in the T = 0 case, approximations (quite possibly severe ones) will have to be made. However, the physical aim is clear: we want to connect the experimental observation that the electrical resistivity in the normal phase of the high-$T_c$ superconductors varies linearly with T over a wide range, from low temperatures up to a scale of 600 K, to the existence of the non-trivial quasi-fixed-point structure of QED$_3$ found in the previous section. Qualitatively, the way we shall make the connection is to interpret the temperature in finite-T QED$_3$ as (related to) an effective infrared cutoff. This follows from the form of the gauge-boson propagator for T > 0, which we analysed in some detail in ref. 7 and shall not repeat here.
Our aim in this section is to exhibit the non-fermi-liquid behaviour of the resistivity, and to associate it with the quasi-fixed-point structure at intermediate scales revealed in the previous section, via the qualitative connection α/ǫ ∼ α/T. The resistivity of the model is found by first coupling the system to an external electromagnetic field A and then computing the response of the effective action of the system, obtained after integrating out the (statistical) gauge-boson and fermion fields, to a change in A.
In the case at hand, in the model of ref. 6 ($\tau_3$-QED), the effective action of the electromagnetic field, after integrating out hole and statistical gauge fields, assumes the schematic form, in a resummed 1/N framework,
$$S_{eff}[A] \simeq \frac{1}{2}\int A_\mu\, \Pi^{\mu\nu}\, A_\nu + \dots, \qquad (17)$$
with Π the one-loop polarization tensor due to fermions. (Due to the $\tau_3$ structure, a result of the bi-partite lattice structure 6 , there are no cross-terms between the statistical and the electromagnetic gauge fields to lowest non-trivial order of a derivative expansion in the effective action. This implies that in this model the resistivity is determined by the polarization tensor of the hole (fermion) loop. On the other hand, in models where only a single sublattice is used 29,30 , such cross-terms arise, and are responsible, after the statistical gauge-field integration, for the appearance of a conductivity tensor proportional to $\Pi_B \Pi_F (\Pi_B + \Pi_F)^{-1}$, with $\Pi_{B,F}$ denoting, respectively, polarization tensors for the boson fields of the CP$^1$ model and for the fermions (holes) in a resummed 1/N framework. In such a case, the conductivity is determined by the lowest conductivity among the subsystems 30 . In condensed-matter systems of this type, relevant for the physics of the normal state of the high-$T_c$ cuprates, it is the bosonic contribution that determines the total electrical resistivity 29 .) The functional variation of the effective action with respect to A yields the electric current j. From (17) this is proportional to the electric field E(ω) = ωA in, say, the $A_0 = 0$ gauge, with ω the energy. In the normal phase of the electron system, the proportionality tensor, evaluated at zero spatial momentum, is $\sigma_f \times \omega$, with $\sigma_f$ the conductivity 30 . From (17), then, one obtains the conductivity in the form of Equation (18), where P denotes the spatial components of the momentum.
If the effective action were real, then the temperature (T ) dependence of the resistivity of the model would be given by the T -dependence of the finite-temperature vacuum polarization of the gauge boson. Thus, following the estimates of ref. 19 for the polarization tensor in the resummed-1/N framework, we would have immediately obtained a linear T -dependence for the resistivity. Such a temperature dependence would actually be valid 7 for a wide range of temperatures above the critical temperature of dynamical mass generation 6 , due to specific features of the ansatzes involved in the analysis of ref. 19 .
However, things are not so simple. As first shown by Landau 31 , the analytic structure of the vacuum polarization graphs entering the effective action (17) is such that there are imaginary parts in a real-time formalism 32 . These imaginary parts are associated with dissipation caused by physical (on-shell) processes of the type fermion → fermion + gauge boson. It turns out that these constitute the major contributions to the (microscopic) resistivity 33,34,29 . In this picture, the latter is determined by virtue of the Green-Kubo formula 35 of linear-response theory, and turns out to be inversely proportional to the imaginary part of the two-point function of the "electric" current $j^\psi_\mu = \bar\psi\gamma_\mu\psi$, evaluated at zero spatial momentum. In our case, in the leading 1/N-resummed framework, the two-point function of the electric current is given by the graph of fig. 1; adopting the ansatz (4) for the vertex function yields the current-current correlator of Equation (19). To compute the imaginary parts of (19) would require a real-time formalism, taking into account the processes of Landau damping 18 , which are not easy to compute in the resummed 1/N approximation, especially in the limit of zero momentum transfer relevant for the definition of the resistivity. Indeed, as shown in ref. 18, and mentioned briefly above, there is a non-analytic structure in the imaginary parts of the one-loop polarization tensors appearing in the quantum corrections of the gauge-boson propagator. Such non-analyticities result in a non-local effective action. This non-locality persists upon coupling the system to an external electromagnetic field A. Since the resistivity of the system is defined as the response of the system to a variation of A, the Landau processes, which constitute the major contribution to the (microscopic) resistivity, complicate the situation enormously. At present, only numerical treatment of these non-analyticities is possible 18,19 .
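For orientation, the Green-Kubo relation invoked here may be written in the following standard schematic form (our notation and conventions, which may differ from those of refs. 33-35):

```latex
% Schematic Green-Kubo formula for the d.c. conductivity in linear response:
% sigma is read off from the retarded current-current correlator at zero
% spatial momentum, and the resistivity is its inverse.
\begin{align}
  \Pi^R_{jj}(\omega, \mathbf{q}) &= -i \int dt\, d^2x\;
      e^{i\omega t - i\mathbf{q}\cdot\mathbf{x}}\,
      \theta(t)\, \langle [\, j^\psi_x(t,\mathbf{x}),\, j^\psi_x(0,\mathbf{0})\,] \rangle, \\
  \sigma &= \lim_{\omega \to 0} \frac{1}{\omega}\,
      \mathrm{Im}\, \Pi^R_{jj}(\omega, \mathbf{q} = \mathbf{0}),
  \qquad \rho = \sigma^{-1}.
\end{align}
```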
We can circumvent this difficulty, and use only the real parts of the gauge-boson polarization tensor to estimate the temperature dependence of the resistivity, by making use 7 of the fact that in "realistic" many-body systems 6,34,29 , believed to be relevant for a description of the physics of the cuprates, there is the phenomenon of spin-charge separation of the relevant excitations. According to this picture, the statistical current (responsible for spin transport) is opposite to the hole current (electric-charge transport), and this constraint is implemented by the statistical gauge field $a_\mu$, which plays the rôle of a Lagrange multiplier 29 . The gauge field, on the other hand, is identified 7 , for physical (on-shell) processes, with the bosonic current of the spin excitations. The electric charge is thus transported with a velocity which equals the propagation velocity $v_F$ of the statistical gauge fields $a_\mu$. In non-trivial vacua, such as the one pertaining to our system, the velocity $v_F$ receives quantum corrections 36 from vacuum polarization effects. In a thermal vacuum such corrections are temperature-(T-)dependent.
If we represent the (observable) average of the electric current as $j_\psi = \mathrm{charge} \times v_F$, and use Ohm's law to relate it to a (T-independent) externally applied electric field E, $j_\psi = \sigma E$, then one observes that in this picture the main T-dependence of the resistivity $\sigma^{-1}$ comes from $v_F$, as a result of (thermal) vacuum polarization effects 36 [Equation (20)]. Using the association of the momentum infrared cutoff $Q \simeq \epsilon$ with $(\alpha/\beta)^{1/2} \propto \sqrt{T}$, one gets from (20) a linear T-dependence for $v_F^{-1}$, and thus for the resistivity ρ. Such a linear T-dependence is a characteristic feature of the gauge interactions and, as we shall discuss below, is valid for a wide range of T.
Incorporating wavefunction renormalization effects in the above analysis, one can easily 7 demonstrate the existence of (logarithmic) deviations from this linear T behaviour. This part of the analysis does not require an explicit computation of the imaginary part of the correlator (19); it only requires A evaluated at p = 0. The resistivity, which formally is given by the imaginary part of the inverse of (19) as p → 0, turns out 7 to have a temperature dependence of the schematic form (resummed up to O(1/N))
$$\rho(T) \propto T^{\,1 + O(1/N)}, \qquad (21)$$
where we have taken n = 1 as in ref. 15. We cannot, in any case, take the precise value of the exponent in (21) seriously, in view of the rough approximations made along the way.
However, the region βα >> 1 is, in fact, that of dynamical mass generation, rather than the "intermediate" region βα ≳ 1 in which we expect the quasi-fixed-point structure to play a rôle. A numerical analysis shows 7 that, for a wide range of temperatures below α, but not so low that the symmetry-breaking phase is entered, the resistivity should have the form (21), where the precise coefficient of the 1/N power is not known accurately from the above analysis. The main point, then, is the "stability" of this T-dependence, which correlates remarkably with the quasi-fixed-point structure discussed above.
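To see why such a power acts as a logarithmic correction to linearity, consider the following small sketch (ours; the magnitude and sign of the exponent correction c/N are illustrative assumptions, not results of the paper):

```python
import numpy as np

# Illustrative comparison (assumed parameters): strictly linear resistivity
# rho ~ T versus the O(1/N)-corrected power law of the schematic form (21),
# rho ~ T^(1 + c/N). For small c/N the two differ only by slowly varying
# logarithms, since T^(1 + c/N) = T * exp((c/N) * ln T).
c, N = 1.0, 2.0
alpha = 1.0                                  # intrinsic scale, units of T
for T in [0.05, 0.1, 0.2, 0.4, 0.8]:         # temperatures below alpha
    rho_linear = T / alpha
    rho_corrected = (T / alpha) ** (1.0 + c / N)
    ratio = rho_corrected / rho_linear       # = exp((c/N) * ln(T/alpha))
    print(f"T/alpha = {T:4.2f}: rho_lin = {rho_linear:.3f}, "
          f"rho_corr = {rho_corrected:.3f}, ratio = {ratio:.3f}")
# The ratio varies only logarithmically with T, so the resistivity remains
# approximately linear over a wide temperature window, as described above.
```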
Conclusions and Outlook
In this talk we have reviewed results of some recent work 7, which provide evidence for certain interesting effects of wave-function renormalization in (a variant of) QED 3 that is believed to be a qualitatively correct continuum limit of semi-realistic condensed matter (planar) systems simulating high-temperature superconducting cuprates.
Based on an (approximate) Schwinger-Dyson (SD) improved Renormalization Group (RG) analysis, we have argued for the existence of an (intermediate) regime of momenta where the running of the renormalized dimensionless coupling of multiflavour QED 3, which is nothing other than the inverse of the flavour number, is considerably slowed down, exhibiting a behaviour similar to that of 'walking technicolour' models of particle physics. This slow running, or (quasi) fixed-point structure, has been argued to be responsible for an increase of the chiral-symmetry-breaking (superconducting) fermion condensate of the model, as well as for a (marginal) deviation from the Landau fermi-liquid fixed point. In connection with the latter property, we have argued that the large-N expansion is fully justified from a rather rigorous renormalization group approach to low-energy interacting fermionic systems with large fermi surfaces. Some experimentally observable consequences of this (marginal) non-fermi-liquid behaviour, including logarithmic temperature-dependent corrections to the linear resistivity, have been pointed out, which could be relevant for an explanation of the abnormal normal-state properties of the high-Tc cuprates.
The above RG-SD analysis has, at present, been performed only approximately. To fully justify the above considerations, and to make sure that the above-mentioned effects are not washed out in an exact treatment, one has to perform lattice simulations of the above models. Given that this might not yet be feasible, due to the limited capacity of existing computing resources, an intermediate step would be to perform a more complete analytic RG treatment of the relevant large-N SD equations at finite temperatures. Such a treatment is not easy, however, due to the mathematical complexity of the equations involved. In addition, finite-temperature field theory is known to exhibit unresolved ambiguities concerning the low-momentum limit, which complicates the situation. Some of these issues constitute the object of intensive research effort by our group at present, and we hope to be able to reach some useful conclusions soon.
Our work made use of relativistic fermion systems. We have provided evidence that this might capture the correct qualitative features responsible for the observed deviation from fermi-liquid behaviour in realistic high-temperature superconducting systems, which are known to be characterized by large fermi surfaces. Indeed, the remarkable stability of the observed behaviour up to temperatures of 600 K cannot be ascribed to simple deformations of the fermi surface, which would require an unnatural fine-tuning. The presence of gauge interactions of the type considered in this work, with subtle wave-function renormalization properties, provides a natural and simple explanation of the phenomena in terms of a (quasi-)fixed-point (i.e. cross-over) behaviour, rather than a new universality class. This should be contrasted with the works of refs. 4,3, where the existence of a fixed point was argued. This is a non-trivial point to keep in mind for possible experimental searches in the future. Of course, it is understood that in order to explain the complete set of observed properties of the normal phase of high-temperature superconductors, the simple relativistic QED 3 picture advocated above is not sufficient, not even qualitatively. One should probably take into account all possible sources of deviation, including those arising from the curvature of the fermi surface and its distortions, etc., in order to arrive at a quantitatively satisfactory picture of the situation.
In this context it might be worth pointing out that our results are also of value for condensed matter systems with relativistic spectra around some nodes of their fermi surfaces. At present, we do not have a physical intuition on the microscopic nature of the gauge interactions that might be involved in such situations, nor are we aware of realistic candidate systems that would realize such scenarios. A plausible testing ground for these ideas would be the case of ν = 1/2 fractional quantum Hall systems (we thank A. Tsvelik for a discussion on this point). Experimentally, there appear to be deviations from fermi-liquid behaviour in such systems, and there are recent theoretical attempts 37 to relate this to the existence of new infrared fixed points. From our point of view, if the ν = 1/2 Hall case is to be characterized by a new gauge interaction due to, say, interactions among the magnetic moments of the (planar) electrons, then our work shows 7 that it is more likely to be characterized by a cross-over behaviour rather than the appearance of a non-trivial infrared fixed point. We hope to study these fundamental problems in the future.
Closing, we would like to stress once again the exciting atmosphere for collaboration between the particle-physics and condensed-matter communities triggered by the discovery of fractional quantum Hall systems and high-temperature superconductors. Indeed, as we have heard at this meeting 38, there is a plethora of striking resemblances between many phenomena that characterize these solid state systems and the corresponding phenomena in particle physics. The very nature of the spin-charge separation, which is essential for magnetic scenarios of high-temperature superconductivity, seems to be analogous to the quark fractional-charge phenomenon inside the hadron 38. This 'constituent-fermion' picture of the planar holes in magnetic superconductor models, which was also extended recently to Hall systems as an attempt at a microscopic understanding of the fractional quantum Hall effect 39, is strongly reminiscent of the quark model of hadrons. This is not unrelated to the gauge approach to the high-Tc problem advocated in refs. 6 and 7, and briefly discussed above, where the asymptotic freedom of the abelian three-dimensional gauge field plays a crucial rôle in determining the infrared behaviour of the system in connection with either dynamical mass generation, related to the superconducting phase, or with the anomalous properties of the normal phase. The situation is analogous to chiral symmetry breaking in four-dimensional QCD. In this context, the reader's attention is drawn to recent numerical evidence 38 for a QCD-like string (hadronic) Regge-pole structure in the physical spectrum of Hubbard or t−j models, which was associated with the spin-charge separation property. Certainly this line of research appears very interesting and exciting, and should be pursued further. | 2014-10-01T00:00:00.000Z | 1996-03-30T00:00:00.000 | {
"year": 1996,
"sha1": "6f79073ff5cb535180a07f09e7cbebe6e000c002",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6f79073ff5cb535180a07f09e7cbebe6e000c002",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264103444 | pes2o/s2orc | v3-fos-license | Socio-economic status of fishermen in Cox's Bazar district of Bangladesh
The current study has been conducted to evaluate the socio-economic profile of fishermen in Cox's Bazar district, Bangladesh, through a survey method. The socio-economic profile of the fishermen is discussed in terms of family size, age structure, educational status, religious status
Introduction
Fishing is one of the oldest economic activities, second only to agriculture (Kumbhar, 2017) [11]. Bangladesh is considered one of the most convenient countries for fisheries in the world, bordering the vast Bay of Bengal. It is one of the world's leading fish-producing countries, producing 46.21 lakh MT in Fiscal Year 2020-2021, with aquaculture contributing 57% of total fish production (DoF, 2022) [5]. The fisheries sector is recognized as an important source of income for a large part of the country's population and also a significant source of revenue for the state economy (FAO, 2014) [6]. Socioeconomic status is the strongest indicator of people's lives, describing the social, cultural, economic, and political characteristics of people, households, community groups, and institutions (Marmot et al., 1987) [12]. According to Darin-Mattsson et al. (2018) [3], individuals with lower socioeconomic status (SES) die younger than those with high SES. Education, social class, occupational complexity, and income are interlinked with each other. Low-income households prioritize satisfying their immediate wants rather than building wealth (Saifi et al., 2011) [13]. Families with larger disposable incomes are better equipped to build wealth, address their immediate needs, indulge in luxuries, and handle emergencies. One element of SES is "occupational prestige," which includes income and educational attainment (Saifi et al., 2011) [13]. Lower-income jobs require more effort, carry more physical risk, and provide less autonomy (Scott and David, 2005) [10]. In Bangladesh, artisanal small-scale fishing accounts for 82.86% of marine catch, or 5.56 lakh MT; marine fisheries account for about 15% of the country's overall fish production (DoF, 2022) [5]. Despite being of great economic, commercial, and ecological importance to Bangladesh's economy, little research has been done on the fishermen communities of the coastal districts of Bangladesh. For development to be effective, the quality of life of all people must be improved, especially that of the fishing community, which is among the most vulnerable. Understanding the local fishermen is crucial for this improvement.
The livelihood conditions and socio-economic status of the Bangladeshi fishing community, however, are not well understood, and studies on the socioeconomic conditions of the coastal fishermen of Bangladesh need to be conducted. In this regard, the current study focused on the socio-economic status of major fishing communities of Cox's Bazar district.
Methodology
Before selecting the study area, several visits were made to different regions of Cox's Bazar to become acquainted with the area and the fishermen community as well as their nature. The study was conducted from August 2022 to February 2023 at four regions of Cox's Bazar District (Figure 1). Data were collected from 140 respondents, including fishermen, fisherwomen, fish retailers, fish sellers, fish-related day laborers, and boat owners, through a well-designed questionnaire survey form. Primary data were collected through personal interviews with fishermen, focus group discussions, questionnaire surveys, direct observation, and field data collection. The data collected from the 4 different sites were finally compiled together to get a clear picture of the socio-economic status of Cox's Bazar district.
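As an illustration of this compilation step, a minimal Python sketch is given below; the file names, column name, and category labels are hypothetical, since the questionnaire form itself is not reproduced here:

```python
import pandas as pd

# Hypothetical per-site exports of the questionnaire responses
# from the four study regions of Cox's Bazar district.
site_files = ["site_1.csv", "site_2.csv", "site_3.csv", "site_4.csv"]

# Compile the four sites into a single table (140 respondents in total).
survey = pd.concat([pd.read_csv(f) for f in site_files], ignore_index=True)

# Percentage distribution of one categorical variable, e.g. family size class.
dist = survey["family_size_class"].value_counts(normalize=True) * 100
print(dist.round(1))  # e.g. medium 59.0, large 19.0, small 15.0, extra-large 7.0
```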
Family Size
According to this study, most of the families (59%) have 5-6 members, termed medium families. 19% of families belong to the large family group with 7-10 members, whereas 15% belong to the small family group (1-4 members) and only 7% belong to the extra-large family group with more than 10 members (Figure 2a). Most of the medium families here are nuclear families. Islam et al. (2021) [8] reported that, for the fishing community of the Padma River, Chapai Nawabgonj district, the majority (72%) of families had 4-6 people, whereas only 12% of the families consisted of just 1-3 members and the rest (16%) had more than 6. The reason behind the majority of medium-sized families is that the fishermen believed that the more children they had, the more hands there were for earning.
Age Structure
The majority of the fishermen (35%) belong to the 31-40 age group, whereas 25% belong to the 21-30 and 18% to the 41-50-year age group. Only 15% are older than 50 years and 7% are less than 20 years old (Figure 2b). The data reflect that most of the fishermen belonged to the middle-aged group. Islam et al. (2017) [9] found that 13% were in the age group 18-30 years, 40% were in the age group 31-45 and 47% were above 45 years.
Educational Status
Educational levels are the subdivisions of formal learning, covering early childhood education, primary education, secondary education, and tertiary education. The educational status of the respondents is classified into four groups: 1. No academic education, meaning they could only sign their name; 2. Primary education (class 1 to 5); 3. Secondary education (class 6 to 10); and 4. Higher study (above secondary level). The pie chart shows that 50% of the respondents have no academic education, whereas 24% and 21% received primary and secondary education, respectively. Only 5% of respondents have higher study above the secondary level (Figure 1c). The data analysis indicated that the majority of them have no formal learning, and only a few exceptions pursued higher study. The fishermen who completed education beyond the secondary level did not want to remain attached to this profession. Even illiterate fishermen want their children to be literate and to have a better life. Islam et al. (2021) [8] mentioned that 49% of the fishermen attended below class 5 and only 7% went beyond the secondary level at Dengar Beel under Melandah Upazila, Jamalpur. In another study, Islam et al. (2017) [9] found that 73% received no education, 13% attained primary education and 7% were at the secondary level at the Padma River in Chapai Nawabganj district.
Religious Status
The socio-economic profile of an individual is associated with individual religious affiliations and practices. The main religions of our country are Islam, Hinduism, Buddhism, and Christianity, but in Cox's Bazar district, people of three religions were found in the study area: Muslim, Hindu, and Buddhist. 83% of the fishermen were reported as Muslim, 12% as Hindu and 5% as Buddhist (Figure 1d). The result is similar to Ahamed (1999) [1], who conducted a study in coastal areas and found that the majority of fishermen were Muslim (68%), while Hindu fishermen accounted for 32% in the Sundarbans.
Marital Status
Marital status is the state of being married or not married, describing an individual's relationship with a significant other. It was found that 73% of the fishermen are married and 27% are unmarried. The unmarried are mostly school-going students under 18. Child marriage or early-age marriage was not observed in these areas. Bappa et al. (2014) [2] reported a similar result regarding the fishermen of the Marjat Baor at Kaligonj in Jhenidah district, Bangladesh.
Professions
A community that is involved in the harvest or processing of fishery resources to meet social and economic needs is called a fishing community. In the fishing community, most people are directly engaged in fishing activities, while some are involved in fishing-related professions such as shopkeepers, middlemen, day laborers, fish retailers, businessmen, net makers, etc. In the study area, 71% are fishermen, 7% day laborers, 4% retailers, 5% middlemen, 3% shopkeepers, 2% net makers, and 6% are engaged in other activities such as poultry rearer, farmer, cobbler, beggar, etc. (Figure 1e). Day by day, it becomes very difficult for fishermen to lead their lives on a minimum income; as a result, some of them are planning to leave this profession and start work as day laborers in metropolitan areas.
Daily Income
Fishermen's daily income is not enough for them, and it varies from season to season. In the peak season they earn a lot, which is satisfactory for them, but in the lean season they do not have any definite work to do; at that time, they earn on the basis of whatever work they find, and sometimes they get no work at all and pass their time with hardship. It was observed that the majority (48%) of people's income is in the medium range (400-600 BDT/day), whereas only 15% belong to the high-income range (600-1000 BDT/day). The data are presented in Figure 3a. According to Islam et al. (2021) [8], 50% of the fishermen earned 200-300 BDT/day during the full harvesting period, 32% earned 100-200 taka per day and the rest earned 100 or below 100 BDT/day.
Housing Condition
It is said that the social status of a community is reflected in the nature of its houses, and the houses of the fishermen indicate their low standard of livelihood. It was found that 55% had houses with mud walls and tin roofs and 31% had houses with tin walls and tin roofs (Figure 3b). According to Islam et al. (2021) [8], the majority of the fishermen had houses made of mud walls and tin roofs (71%), 17% had tin walls and tin roofs and 12% had other types of houses.
Sanitation
The sanitation system in the study area is not satisfactory, and the sanitary condition of the fishermen is not up to the mark. Not every family has personal toilet facilities; as a result, they use common toilets, which are not safe and hygienic. One positive observation is that they do not use any open toilets. Islam et al. (2017) [9] reported that the coastal fishermen used kacha, semi-paka and paka toilets.
Drinking Water Facilities
Drinking water facilities means a community water system or a nonprofit non-community water system. It was observed that fishermen used tube wells for drinking water: their own tube wells, government tube wells, or neighbors' tube wells. 51% of the fishermen depended on government tube wells for their drinking water, 36% depended on neighborhood tube wells and only 13% had their own tube well (Figure 3c). Das (2015) [4] reported that water has a direct effect on fishermen's health; among the total surveyed respondents, 89% of fishermen used tube well water for drinking purposes, 3% used pond water, 7% used river water and 1% used other sources of water for drinking and other daily activities.
Health Facilities
There is a proverb, "Health is wealth". The majority of the fishermen do not get proper health treatment, as government and private hospitals are far away from them. It was observed that 63% got health services from village doctors (non-MBBS), 25% were dependent on the Upazila Health Complex, and the remaining 12% visited the local Kabiraj (traditional healer) (Figure 3c). Islam et al. (2017) [9] mentioned that 47% of fishermen took treatment from village doctors, 33% went to the Upazila Health Complex and 20% went to MBBS doctors and others.
Electricity Facilities
All (100%) of the fishermen were found to be connected to the electricity supply; the Government has ensured this facility in recent years. Hasan et al. (2016) [7] reported that 95% of fishermen had access to electricity and 5% did not get the facility to use electricity.
Training Facilities
Training is provided by the government or NGOs to teach fishermen modern fishing techniques and increase their awareness about fishing. In the present study, it was found that only 7% had received training. This indicates that most fishermen lack training in fish catching and do not know modern fishing techniques; as a result, they unintentionally harm the natural fisheries resources of the Bay of Bengal.
Net and Boat Used by Fishermen
Several types of nets are used in Cox's Bazar district, such as gill nets, seine nets, cast nets, purse nets, etc. Two types of nets were observed: legal and illegal nets. Gill netting is the common fishing method in our country. The current jal (monofilament gill net) is prohibited by the government, but fishermen still use this illegal fishing gear extensively. In the study area, different types of fishing boats were observed, including wooden boats, balam boats, shampans, mechanized dingi noukas, and wooden trawlers. The fishermen use these kinds of boats for artisanal fishing. Table 1 lists the nets and boats used in the study area.
Credit Facilities
Fishermen's income is not enough for them. Only a few of them do not need any support to lead their lives; the majority have to borrow loans from different sources.
In the study area, different organizations are involved in providing credit to the fishermen, such as NGOs like BRAC and Proshika, Grameen Bank, etc. They have to take loans at high interest, and some of them accessed loans from the Dadon or Mohajon (local moneylenders). The purposes of receiving loans are to support their families, for agricultural activities, for repairing houses, and to repair fishing gear, nets and boats. Fishermen fall under the poverty line due to their underprivileged housing conditions, low income, limitations in different living facilities, little support from the Government, and limited access to credit facilities. Most of the fishers in the study area are in economic necessity. Due to poor socioeconomic conditions, a remarkable number of fishermen are willing to leave their profession. The fishermen have a low level of literacy, which has impacted every aspect of their lives, and they are deprived of many facilities in their day-to-day life. As the small-scale fishing community contributes hugely to the Marine Fisheries Sector of Bangladesh, the Government of Bangladesh should implement proper initiatives to enhance the livelihood standard and socio-economic status of fishermen of the coastal community.
Challenges for the Fishermen
Most of the fishermen face various problems during fishing. The main problems were:
▪ No work in the lean season
▪ No control over market strategies
▪ Political influences on market trends, which make the lives of the fishermen horrible
▪ Lack of leadership
▪ Lack of marketing facilities
▪ Lack of knowledge of modern fishing techniques
▪ Lack of appropriate gears
▪ Lack of initiative among fishermen
Fig 1: Map of the study area
Table 1: Net used and fish caught by fishermen of Cox's Bazar | 2023-10-15T15:20:38.416Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "8bdb65f23a04099f4c26af6929ebe24742133d2b",
"oa_license": null,
"oa_url": "https://www.fisheriesjournal.com/archives/2023/vol11issue5/PartC/11-5-23-101.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6a357ac68bbcc1e9f16dd7de000a003d6bfcfc10",
"s2fieldsofstudy": [
"Sociology",
"Economics"
],
"extfieldsofstudy": []
} |
208767947 | pes2o/s2orc | v3-fos-license | The natural history of solitary post-nephrectomy kidney in a pediatric population
ABSTRACT Introduction: Children with a solitary post-nephrectomy kidney (SNK) are at potential risk of developing kidney disease later in life. In response to the global decline in the number of nephrons, adaptive mechanisms lead to renal injury. The aim of this study was to determine the prevalence and time of onset of high blood pressure (HBP), proteinuria, glomerular filtration rate (GFR) disruption and renal tubular acidosis (RTA) in children with SNK. Materials and methods: After obtaining the approval of our institution's ethics committee, we reviewed the medical records of patients under 18 years of age who underwent unilateral nephrectomy between January 2005 and December 2015 in three university hospitals. Results: We identified 43 patients; 35 (81.4%) cases of unilateral nephrectomy (UNP) were due to a non-oncologic pathology, and Wilms' tumor was identified in 8 (18.6%) cases. In patients with non-oncologic disease, 9.3% developed de novo hypertension, with an average time of onset of 7.1 years, and 25% developed de novo proteinuria, with an average time of onset of 2.2 years. Regarding GFR, 21.8% presented deterioration of the GFR in an average time of 3.4 years. Ten (43.5%) patients developed some type of de novo renal injury after UNP. Patients with oncologic disease developed the conditions more slowly, and none of them developed proteinuria. Conclusions: Taking into account the high rate of long-term postoperative renal injury, it can be considered that nephrectomy does not prevent this disease. The follow-up of children with SNK requires a multidisciplinary approach and long-term surveillance to detect renal injury.
INTRODUCTION
When compared to the general population, patients with a solitary kidney have an increased risk of developing chronic kidney disease (CKD) throughout life (1). There are reports of long-term outcomes, but the results are variable and do not allow confirming causality of future kidney disease (1-3).
Some case series report rates between 30 and 50% of kidney disease in these patients (4-8). However, the etiological burden that leads to end-stage kidney disease is unclear, and there are no specific prognostic factors that can accurately predict it.
In response to the decreased number of nephrons, several adaptive mechanisms occur in the remaining ones, which can manifest clinically as arterial hypertension (AHT), decreased glomerular filtration rate (GFR) and proteinuria. There are intraglomerular hemodynamic changes due to the initial hyperfiltration: it starts with intrarenal vasodilatation and glomerular hypertension, which causes higher glomerular volume and surface area. This inflicts mechanical pressure on the hypertrophied podocytes, producing patches in the glomerular basement membrane, which leads to a scarred Bowman's capsule and segmental sclerosis. Histopathological findings suggest that focal and segmental sclerosis in these kidneys is what leads to long-term kidney disease (5, 6).
Currently, there are insufficient global and local statistics about the short, medium and long--term outcomes of the solitary kidney in the pediatric population.
The aim of this study was to determine the prevalence and time of presentation of hyperfiltration nephropathy, hypertension, proteinuria, decreased GFR and renal tubular acidosis (RTA) in pediatric patients with a solitary post-nephrectomy kidney. Thus, our goal is to expand the knowledge and evaluate the prognosis of these patients.
MATERIALS AND METHODS
After obtaining the approval of our institution's ethics committee, we performed a retrospective analysis of data from pediatric patients who underwent unilateral nephrectomy between January 2005 and December 2015 at Hospital Universitario San Ignacio, Hospital Militar Central and Fundación Santa Fe de Bogotá in Bogotá, Colombia. Preoperative and postoperative variables were analyzed: age, sex, weight, height, blood pressure, 24-hour urine total protein and/or urine protein-to-creatinine ratio, GFR by the Schwartz formula, RTA, exposure to nephrotoxic drugs, and the reason for performing the nephrectomy. BP and proteinuria values were adjusted according to age, and GFR values were adjusted according to age and height.
For GFR, we used the 2002 National Kidney Foundation Kidney Disease Outcomes Quality Initiative (KDOQI) classification of chronic kidney disease (CKD), from stages 1 to 5 (S I to S V) (11).
And finally, for glomerular hyperfiltration, we adjusted GFR according to age in preterm and full-term patients (10).
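As a concrete illustration of the two computations named above, here is a minimal Python sketch; the bedside Schwartz constant k = 0.413 and the KDOQI GFR cut-offs are standard published values, while the function names and the example inputs are our own assumptions:

```python
def schwartz_gfr(height_cm: float, creatinine_mg_dl: float,
                 k: float = 0.413) -> float:
    """Estimate GFR (mL/min/1.73 m^2) using the bedside Schwartz formula."""
    return k * height_cm / creatinine_mg_dl

def kdoqi_stage(gfr: float) -> int:
    """Map an estimated GFR to the 2002 KDOQI CKD stage (1-5).

    Note: stages 1-2 additionally require evidence of kidney damage
    (e.g. proteinuria), which is assessed separately.
    """
    if gfr >= 90:
        return 1
    if gfr >= 60:
        return 2
    if gfr >= 30:
        return 3
    if gfr >= 15:
        return 4
    return 5

# Example: a child 120 cm tall with a serum creatinine of 0.6 mg/dL.
gfr = schwartz_gfr(120, 0.6)            # ~82.6 mL/min/1.73 m^2
print(round(gfr, 1), kdoqi_stage(gfr))  # 82.6 2
```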
All of these variables were evaluated at the time of diagnosis, prior to unilateral nephrectomy, at the time of the surgical procedure, at 3, 6, 9, and 12 months, and annually thereafter for as long as possible.
Patients were classified as improved, worsened, or unchanged after surgery. Improved: partial or total amelioration of the preoperative disruption after surgery, a decrease in the stage of the disease, and no development of new disruptions. Worsened: an increase in the stage of the disease or development of new disturbances post-operatively. No changes: the stage of the disease remained the same and no new disruptions were observed.
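The classification rule just defined can be expressed compactly; the sketch below assumes each condition is tracked as an ordinal stage, with 0 meaning the condition is absent, so a de novo disturbance registers as an increase:

```python
def classify_outcome(pre: dict, post: dict) -> str:
    """Apply the study's Improved / Worsened / No changes rule.

    `pre` and `post` map condition names (e.g. "HBP", "proteinuria",
    "CKD", "RTA") to ordinal stages; 0 means the condition is absent.
    Any stage increase or de novo condition counts as worsening.
    """
    conditions = set(pre) | set(post)
    worsened = any(post.get(c, 0) > pre.get(c, 0) for c in conditions)
    improved = any(post.get(c, 0) < pre.get(c, 0) for c in conditions)
    if worsened:
        return "Worsened"    # stage increase or new post-operative disturbance
    if improved:
        return "Improved"    # stage decrease and no new disturbances
    return "No changes"

# Example: de novo proteinuria after unilateral nephrectomy.
print(classify_outcome({"HBP": 1}, {"HBP": 1, "proteinuria": 1}))  # Worsened
```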
The data of patients with oncologic nephrectomy indication and history of chemotherapy were evaluated separately since that factor could alter the outcomes. Patients who had stage 5 CKD and were undergoing dialysis were excluded from the post-operatively analysis since the natural history of their disease is already known.
Patients without follow-up were excluded. Statistical analysis was performed using the Microsoft Excel® 2016 program.
RESULTS
A total of 43 patients entered the study. The mean age at diagnosis was 5.53 years (antenatal to 17 years). 44% were male and 56% female.
Regarding the causes of nephrectomy, it was found that 35 (81.4%) were non-cancer patients and 8 (18.6%) were cancer patients.
NON-CANCER PATIENTS
The main congenital anomaly of the kidney and urinary tract (CAKUT) was reflux nephropathy (42.9%), followed by ureteropelvic junction obstruction and ureterovesical junction obstruction (20% each), then multicystic dysplastic kidney and ectopic ureter (5.7% each), and finally nephrolithiasis and nephroma (2.9% each).
Analysis by patients
46% of the patients had at least one preoperative disturbance in BP, proteinuria, GFR or RTA; 50% of them had just one condition and 50% had two. 28% of the patients with at least one preoperative condition had contralateral CAKUT.
The most frequent condition was proteinuria, present in 50% of patients; the second was HBP, present in 31% of patients, together with decreased GFR in 31%; the third was RTA in 25%; and the least frequent condition was increased GFR in 12.5%.
After unilateral nephrectomy, 72% of the patients presented with a disturbance in BP, proteinuria, GFR or RTA; 56.5% of them had a history of kidney disease and 43.5% developed it de novo, and 80% of this last group had a normal contralateral kidney.
In the group with de novo conditions, the most frequent disturbance was proteinuria, in 50% of the patients; the second was decreased GFR, in 30% of them; followed by HBP in 20%, increased GFR in 10% and RTA in 10% of the patients.
Overall, 43.7% of the patients worsened after surgery, of whom 71.4% developed some de novo condition in BP, proteinuria, GFR or RTA and 29% already had a history of kidney disease (Figure-1).
Post-operative evaluation
After excluding 3 patients with CKD stage V, we evaluated the 32 remaining patients.
The mean follow-up time was 67.5 months (12-129.4 months); 78% of the patients were followed for more than 3 years.
Post-operative evolution compared to the pre-operative state can be seen in Table-1. In relation to proteinuria, 6% improved and 25% worsened, with an average time of occurrence of 27.3 months (8,13). 37.5% of the patients who worsened had contralateral CAKUT. Additionally, 25% of the patients developed de novo proteinuria (Table-1).
When RTA was evaluated, we found that 6.2% worsened and none improved (Table-1).
As for GFR, 21.8% worsened, with an average time of occurrence of 41.3 months (8.06-94.4 months). Only one patient worsened to S III CKD or higher; that patient had contralateral CAKUT. Only 2 patients increased their GFR, and those who had pre-operative hyperfiltration remained the same (Table-1).
Cancer patients
There were 8 oncological patients; two of them had no follow-up, which is why they were excluded from the analysis.
Analysis by patients
Fifty percent of the patients were female and 50% male. As for laterality, 4 patients (66.6%) presented the tumor on the right side and 2 patients (33.3%) on the left side.
33.3% of the patients had at least one preoperative disturbance in BP, proteinuria, GFR or RTA; 33.3% of them had both HBP and proteinuria; and 33.3% had no condition.
Overall, 4 patients (66.6%) worsened, three of them with a pre-operative condition, and one developed a de novo condition, which was HBP. Among those who worsened, one developed S II CKD, one developed RTA, and one went from stage 1 to stage 2 HBP.
Pre-operative evaluation
Two patients (33.3%) had HBP stage 1 and one patient (16.6%) HBP stage 2. As for proteinuria, one patient (16.6%) had mild proteinuria and one moderate proteinuria; none of them received pre-operative chemotherapy. There were no patients with RTA. Regarding GFR, there were no patients with decreased GFR and just one with increased GFR.
Post-operative evaluation
The mean follow-up time for these patients was 87.6 months.
Post-operative evolution compared to the pre-operative state is shown in Table-2. Regarding HBP, two patients (33.3%) worsened: one presented with stage 1 HBP and the other with stage 2 HBP. Two patients (33.3%) improved with regard to this condition.
In relation to proteinuria, there were no patients with de novo proteinuria, and the two patients with pre-operative proteinuria improved: one of them no longer presented this condition and the other changed from moderate to mild proteinuria.
When RTA was evaluated, we found that one patient (16.6%) developed RTA, with an average time of 70.4 months.
As for GFR, one patient's GFR decreased to stage II CKD; the rest showed no change from their pre-operative condition, and the patient with increased GFR remained the same.
Non-cancer patients
The natural history of the single post-nephrectomy kidney has not been systematically described in our population, and there are not many studies about this in the literature.
In fact, in our population there is only one study about a single kidney, which evaluated unilateral multicystic renal dysplasia; nevertheless, it is a different condition from an acquired single kidney, so the results are not comparable (12).
In the world literature, the reference study on this topic is the KIMONO study, a retrospective study of renal injury markers performed in 206 children with congenital and acquired solitary functioning kidneys (1).
Our study describes the evolution of an acquired single-kidney pediatric population in Bogotá, Colombia, the conditions prior to unilateral nephrectomy, and the outcomes after the procedure.
We found that the most frequent cause of nephrectomy in the pediatric population is reflux nephropathy, followed by ureteropelvic junction obstruction, which is consistent with the world literature (1).
We found a significant number of patients with kidney disease during the follow-up phase after acquiring a solitary kidney: 43.7% worsened from their initial condition and 43.5% developed a de novo condition. At the time patients underwent unilateral nephrectomy, almost half of them already had kidney disease, which may indicate that this disease has a multifactorial component related to CAKUT.
However, post-operative changes play an important role in kidney disease, because 71.4% of those who worsened developed a de novo condition. Given that in some proportion of patients undergoing unilateral nephrectomy kidney function is compromised, and can actually get worse, strict guidelines and parameters should be followed when deciding to perform this procedure.
The emergence of any disturbance in BP, proteinuria, GFR or RTA during the post-nephrectomy phase in our study was 43.7%, within an average follow-up time of 5.6 years. This rate is higher than the one reported in the literature, where the average rate is 31-38.1% in studies with a mean follow-up of 10-14.9 years (1, 13, 14). It is possible that this difference is due to the way the health system works in our country, where administratively it is difficult to ensure strict and multidisciplinary follow-up for these patients.
HBP
In our study, 14% of patients had some degree of hypertension. None of these patients worsened and 50% improved; 9.3% developed de novo HBP. The literature describes HBP rates of 11% to 33.3% (1, 3, 5, 13-15). The high variability of the results could be explained by the lack of standardization of BP measurement in each study; when BP is taken with ambulatory blood pressure monitoring rather than a single office measurement, the diagnosis of hypertension can increase by as much as 17% (15). The mean time of onset of hypertension in our study was 7.1 years (86.3 months), similar to other studies that showed average times between 4.9 and 12 years (5, 13).
The large number of patients who developed hypertension, and its late onset, underscores the importance of monitoring BP throughout childhood in single-kidney patients.
Proteinuria
Twenty-three percent of our patients had proteinuria prior to the surgical procedure, 25% developed it de novo, and 25% worsened during follow-up. In our series, the occurrence of proteinuria is much higher than that found in some investigations, which report between 6.7% and 19% (1, 13, 14); however, previous studies have reported rates up to 70% (5). The large difference in the results may be secondary to cultural and demographic factors, since proteinuria is directly related to obesity and a sedentary lifestyle, among other factors. The average onset of de novo proteinuria was 2.2 years, which is faster when compared to another study that showed an average of 9.8 years (13). This difference is important; however, the larger studies and references such as the KIMONO cohort do not measure the onset time of this condition, and more studies are necessary to define the real time (1).
The short time to proteinuria onset in our patients can also be explained by the fact that an important percentage (37.5%) of those who worsened had CAKUT of the residual kidney.
GFR
Twenty-three percent of patients had decreased GFR before nephrectomy, and 21.8% worsened within an average time of 3.4 years. This value is higher than the one reported in the literature, which is 6% with an average time of 6.4 years (13).
The KIMONO study reports that GFR deterioration is more evident after puberty but our study does not have a follow-up phase long enough to perform this type of analysis (1).
No patient developed end-stage kidney disease, which can be attributed to the follow-up time of our study, while KIMONO and others report that between 20-50% of single-kidney patients can develop this condition by their 30s (1, 16).
RTA
Given the low rate of appearance of RTA after nephrectomy, we consider that this condition is not part of the natural history of the solitary post-nephrectomy kidney patient.
Cancer patients
As previously indicated, patients with oncological pathology deserve a separate analysis given their history of chemotherapy. Nephrotoxicity due to chemotherapy is a controversial issue: some maintain that the decrease in kidney function secondary to chemotherapy is temporary and reversible, present only during chemotherapy (17), whereas other studies indicate that nephrotoxicity due to chemotherapy is multifactorial and does not depend solely on the chemotherapeutic agent used (18). The risk factors for these children may be inherent to their biological and medical conditions, such as decreased circulatory volume (diarrhea, vomiting), liver dysfunction, fluid sequestration and acute kidney disease. Likewise, there may be direct effects of cancer such as tumor lysis, hypercalcemia, disseminated intravascular coagulation and paraneoplastic glomerulopathies, among others (18).
A follow-up of 7.3 years (87.6 months) was carried out, similar to other published case series that report a follow-up time of 9.1 years (19). During this time, some patients developed progressive kidney disease, but in a smaller proportion compared to patients undergoing nephrectomy for non-oncological pathology; this finding agrees with the literature (19).
In addition, we noticed that patients with oncological pathology developed a decrease in GFR later than patients with non-oncological pathology (63.5 vs. 41.3 months). Also, no patients with oncological pathology developed proteinuria, compared to the 27.39% of non-cancer patients who did.
This may suggest that those with oncological pathology have a healthy contralateral kidney with no underlying pathology to deteriorate it, a situation that differs from several cases of patients with non-oncological pathology.
Study strengths and limitations
A characteristic that differs from the rest of the literature on this topic is that our study describes both the pre-operative and post-operative evolution of patients who underwent unilateral nephrectomy.
We believe that the results shown in this study are very meaningful, since they demonstrate the importance of long-term follow-up of single-kidney patients, especially in developing-country health systems where it is not always possible to ensure it. It is necessary to raise awareness of the importance of strict and careful follow-up among the physicians and health institutions that manage single-kidney patients.
The limitations of this study are that it is retrospective, with a significant but small number of patients compared to some other studies, and that the follow-up does not extend into the adult age of the patients. Additionally, there is a selection bias, since it is a convenience sample of patients taken to nephrectomy, which was reduced by including multiple health institutions.
Another limitation is that the time of development of the conditions was defined as the time at which each was documented at follow-up visits, not when the patients actually developed it.
CONCLUSIONS
Given the high rate of post-operative long-term kidney disease, it can be considered that nephrectomy does not completely prevent this disease, which is the intention when the surgical procedure is performed; the surgical indications must be strict, and perhaps new surgical strategies should be considered.
Our case series reports the outcome of a significant number of single-kidney patients in our country, which contributes to statistics and allow us to give a prognosis to patients before performing a nephrectomy.
In the follow-up of children with a single kidney, a multidisciplinary approach is required, involving pediatricians, pediatric nephrologists and pediatric urologists, with strict long-term follow-up that includes the active search for conditions such as proteinuria, HBP and deterioration of GFR. | 2019-12-06T22:18:49.842Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "b8b4d90db9122cfca378c93a82847c229922b0ca",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/ibju/v45n6/1677-6119-ibju-45-06-1227.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92be154d3d91ad38d1435b69b408b1d857ff1967",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252542181 | pes2o/s2orc | v3-fos-license | Empathy and shame through critical phenomenology: The limits and possibilities of affective work and the case of COVID‐19 vaccinations
Abstract This paper begins by developing the critical phenomenologies of shame and empathy. It rejects that empathy is the supposed antidote to shame, and rather demonstrates the ways in which they function in parallel. The author contends that both shame and indeed empathy risk objectifying and fetishizing the other who is being shamed or empathized with. This argument and phenomenology about the relationship between shame and empathy is then applied and further developed through a case study of COVID‐19 vaccinations. The author explores whether empathy and shame ever “work” to increase vaccine uptake, and ultimately argues that both affects do and do not depending on the structures of power informing the specific context.
encounter between two bodies is needed. Instead, a whole complex world of language, culture and normative values must be in place, where certain behaviours, actions or modes of being are prohibited and seen as deviant and others are socially sanctioned and considered "normal" or "acceptable"'. 3 Ahmed 5 would similarly describe such an experience of shame as the 'affective cost of not following the scripts of normative experience'.
Another window into felt shame's phenomenology is by exploring shame's relationship to its supposed opposite: empathy. This thematic issue is on shame and respect, but when I think of respecting another's views, actions, or person, I think ultimately of striving to understand where another is coming from-hence I end up at empathy. Or, to be more precise, I end up at the popular colloquial understanding of empathy as fellow feeling through stepping into the feelings and perspectives of another, as marketed by popular figures like Brené Brown. 7 Returning to the initial tweet I quoted by Brown, she posits empathy not only as the antithesis of shame, but even its antidote: shame does not work, she asserts, but empathy does. In doing so, she implies the relationship between shame and empathy as opposites: the shaming subject judges and pushes away a shamed other, meanwhile the empathizing subject understands and moves close to others. This reflects a broader discourse on how empathy can be a cure for everything wrong in healthcare. Here's one example: medicine's paternalism and even shaming of patients can supposedly be cured if doctors are just trained to be more empathetic of their patients. This kind of empathy is often 'trained' through patient 'expert' panels, communication skills workshops, reflective writing, and even reading literature in groups. 8-10 Here's another example: healthcare worker burnout can supposedly be prevented and even cured if healthcare workers find ways to reconnect to the deep meaning and relationships that are central to their work by, for example, using techniques like reflective writing to preserve empathy. 11 This kind of empathy work has been critiqued by numerous scholars in the medical humanities, including Anne Whitehead and Angela Woods as they make the case to push the field forward in ways that are critical to the empathizing impulse of the 'first wave' of medical humanities. 12 Rebecca Garden does precisely that in her seminal essay 'On the Problem of Empathy', wherein she argues: 'Theories of empathy must address tendencies to objectify the patient as a spectacle of suffering through which physicians exercise their own virtue […] Further, theories of empathy that emphasize interpersonal relations should not obscure the larger social contexts that determine illness and disability, beginning with inequities in access to and quality of health care based on ethnicity, class, gender and sexual/affectional orientation'. 13 I too seek to problematize empathy, but through its relationship to shame. To reveal the ways in which shame and empathy may actually function in parallel, I will move from the colloquial understanding of empathy as fellow feeling back to empathy's phenomenological roots as an 'other-directed form of intentionality' that 'allows the other's experiences to disclose themselves as other'. 14 I understand empathy-as-fellow-feeling to be a narrowed view of empathy's broader phenomenological structure.
Thinking critically about empathy's broader structure, especially through Ahmed's work, will better help us see the fallacy of empathy-as-cure by revealing how empathy's phenomenology, like shame, can also risk othering, objectification, and fetishization that abstracts the other from its social contexts.
Parts of empathy's phenomenological structure are surprisingly similar to that of felt shame, albeit with some critical differences that help further nuance the phenomenology of both empathy and shame.
Ahmed theorizes: 'Shame requires an identification with the other who, as witness, returns the subject to itself. The view of this other is the view that I have taken on in relation to myself; I see myself as if I were this other'. 5 In other words, the shamed self takes in the view of real or imagined others' feelings, perspectives (often judgements of that other), and experiences, re-shaping themselves through the lens of that other. Similarly, the empathizing self takes in the view of real or imagined others' feelings, judgements, and experiences (or at the very least tries to) and re-shapes themselves through the lens of that other. The first critical difference between the phenomenologies of shame and empathy I am describing is that the 'I' or 'self' of shame is the object of the action of shaming where the shaming actually results in felt shame. That is: I am describing the phenomenology of shame from the perspective of one who is shamed, rather than the one who is shaming. The 'I' or 'self' of empathy in this parallel phenomenology, however, is the one enacting empathy rather than the object of that empathy. The reshaping of the shamed subject and the empathizing subject also takes different forms: the shamed subject is reshaped as if they are what they assume the other imagines them to be, whereas the empathizing subject is reshaped as if they are what they themselves imagine the other to be. As a result, the empathizing subject strives to reach towards the one being empathized with, whereas the shamed subject turns away from the one who shames (at least in the very first instance; the shamed subject may indeed be compelled to move toward the one who shames to realign with the norm).
But what can ultimately happen in both the cases of empathy and shame is turning the objects of the verb (the one who is shamed or the one who is empathized with) into an other in problematic ways. This is more obvious in the case of shame: the one who is shamed is cast as a deviant other, as failing the norm.
This also raises the question of whether shame also has such possibilities: perhaps if shaming is not levied towards individuals who transgress cultural-political norms, but rather levied towards institutions that create and contribute to reifying such norms, shame can be used in productive, progressive ways. When shame and empathy are simplistically viewed as working in opposition, rather than teasing out the nuances of where and how their phenomenologies are actually similar, shame remains an unquestioned 'bad' and empathy remains an unquestioned 'good.' Good as in the adjective: a good, even noble way to interact with others. But also a good as in the noun: empathy produces the object of its attention as an other-as a kind of fetishized good. Dolezal argues that shame 'is a permanent, necessary and structuring factor of identity. However, it is a double-edged force; it contains the potential for individual and social transformation, while also containing the potential for world-shattering personal and social devastation'. 3 Empathy, in both its colloquial and phenomenological understandings, is also critical and indeed funda- Empathy prevents responsible action precisely when it is perceived to be a completed, responsible action on its own, thus foreclosing the need for any further action. Ahmed 5 continues to explore how the Australian nation 'may bring shame "on itself" by its treatment of others; for example, it may be exposed as "failing" a multicultural ideal in perpetuating forms of racism'. She goes on to explain how 'those who witness the past injustice through feeling "national shame" are aligned with each other as "well-meaning individuals"; if you feel shame, you are "n" the national, a nation that means well […] In other words, our shame means that we mean well, and can work to reproduce the nation as an ideal' rather than actually bringing tangible justice to Indigenous peoples. 5 This also applies to empathy: those who witness the injustices faced by their patients through empathy can prevent responsible action when they become aligned with each other as 'well-meaning individuals' who reproduce healthcare as an ideal empathetic, helping enterprise at the individual-to-individual level without actually having to do any further work to make healthcare just at a systemic level. On the other hand, however, such empathy could alert one to injustices in healthcare and incite them to actually take further responsible action, for example through policy changes. Thus depending on how empathy or shame are 'taken up' in healthcare, they can end up working for or against oppression, for or against transformation. While Ahmed built her argument around an example of felt shame, the argument can also apply to shaming. Shaming unvaccinated individuals or groups can also align such 'wellmeaning individuals' in a way that reproduces themselves as an ideal group in such a way that does not necessitate and sometimes even prevents responsible action that actually changes vaccination uptake. This is where shaming individuals or groups for not getting vaccinated does not necessarily 'work' as it is only an unfinished political act.
However, in the opening example of shaming the government for allowing private corporations to maintain patents and profits off the COVID-19 vaccine, shame did indeed 'work' to incite a further political act (i.e., denying these private pharmaceutical companies the right to patent the vaccine) that will make vaccines more accessible globally and hence tangibly increase their uptake. In the words of Naomi Klein, shame 'worked' against oppression as just one (albeit small) step towards global vaccine equity. But returning to the context of shaming individuals or groups, rather than institutions like the government or private corporations, one can easily imagine or have experienced how such shaming can, in fact, prevent responsible action by alienating the person who feels shame, triggering defensiveness, and ultimately pushing them further from ever getting vaccinated. As such, shame's potential to 'work' depends on whether it is levied at an individual, an institution, or anything in between.
In her book Is Shame Necessary?, Jennifer Jacquet argues that 'shame's service is to the group, and when it is used well and at the right time, it can make society better off' and that in order 'to maximize effectiveness, it often can be better to focus on institutions, companies, or countries rather than individuals'. 18 The last of her seven habits of effective shaming is that shaming '(7) be implemented conscientiously'. 18 The above example of shaming the government for its role in global vaccine inequity meets all seven of Jacquet's criteria for effective shaming. The transgressor is a government whose policies allow pharmaceutical companies to patent and profit from life-saving vaccines, which is a transgression that concerns a global audience (habit 1), deviates widely from the generally desired behavior of saving lives through accessible healthcare (habit 2), and cannot realistically be expected to be formally punished (habit 3). 'When one experiences shame, one is seen (by oneself or others) to be doing something untoward or inappropriate'. 3 Pushed further, when one experiences felt shame, one is not just seen (by oneself or others) to do something untoward or inappropriate, but to actually be something untoward or inappropriate. To see an individual, such as someone who is not vaccinated, as being untoward, even morally inferior, is a very different thing than making visible an institution or system as such. This is where the empathetic view of unvaccinated groups comes in: many strive to empathize with, for example, unvaccinated Black folks, by claiming to understand how their hesitancy is reasonable given the history and present of racism in Western medicine. Here the capacity or willingness to empathize is dependent upon recognizing what systemic factors are at play with regards to COVID vaccinations. Just as it is critical to consider how the reasons for not getting vaccinated depend on social structures, it is critical to recognize that 'shame is not experienced in the same manner by all subjects. In fact, the propensity to shame, and its consequences, is very much dependent on one's position within a social group'. 3 It is not just the propensity to shame that is determined by power, but so too is the propensity to be shamed and to respond to it: 'Each body subject does not have equal power when it comes to returning or "receiving the Look." As a result, some bodies are more prone to shame than others'. 3 This is where a critical phenomenology of shame and empathy that accounts for their cultural politics comes in: is the shame around vaccines disproportionately levied towards and harming already oppressed subjects like Indigenous, Black, and disabled communities with valid reasons to mistrust Western medicine? And if empathy is to be effective in convincing such vaccine-hesitant subjects to get vaccinated, then it must not only attend to such systems at play, but also incite changes in these systems. In this way, empathy can indeed be a tool working towards social justice.
But here lurks a double-edged sword yet again: in such a move to empathize with unvaccinated people by attending to the cultural politics of their relationship to healthcare, such oppressed groups risk being thematized as victimized others of healthcare, without the more complex capacity to think or act beyond that trauma and their oppressed positions. In other words, through empathy, these communities may inadvertently become mere objects of their circumstances: fetishized objects of poverty, racism, ableism, and more.
In writing about shame, Dolezal draws upon Sartre's phenomenology to elucidate how encountering the other can be objectifying: 'to be objectified in this sense involves a process whereby one person sees or treats another person as a type of object (rather than as a transcendence, i.e., as a human being whose complexity eludes…'. 3

Here shaming healthcare workers themselves for being weak, fallible, burned out, and even angry prevents the kind of political action that is necessary to actually improve the working conditions for healthcare workers and thereby patient care. One such political action would be to mandate vaccination to decrease the COVID-19 rates that are pushing such healthcare systems, and the people who work in them, to the brink of collapse. In Canada, for example, the script of shame has flipped such that healthcare workers are shaming the government for its inaction regarding vaccine policies, leaving the burden of caring for those who get COVID-19 to already overworked healthcare workers and systems. Meanwhile, there have even been anti-vaccination protests outside hospitals across the country, where ambulances are blocked from reaching the hospital and individual healthcare workers are shamed and harassed upon entering and exiting the building. 25 Here shame is again levied towards individuals, and this is ultimately enabled by a government that is unwilling to take serious political action. Drawing upon Dolezal's words, shame has become 'a structural feature of cultural politics' and thus 'it is not enough to overcome shame individually, but it must be done collectively'. 3 On the other hand, there has been a rise in empathy towards healthcare heroes, most notably demonstrated through the #HealthcareHeroes movements across the globe. There is a renewed understanding, respect, and empathy for how difficult a healthcare worker's job is and how they rise to the challenge.
But here again, empathy forestalls political action by still putting the burden on individual healthcare workers to be overworked heroes without actual systemic support. Again in Canada, there has been significant backlash by healthcare workers against being called healthcare heroes, precisely because the label distracts from and absolves institutions of necessary action to improve the working conditions of such supposed heroes, significantly including vaccine mandates. 26 Many hospitals have by now instituted their own vaccine mandates, but the government has never made it a requirement even for hospital workers, let alone the population at large. In the case of healthcare worker burnout, a perhaps better approach to empathy than the healthcare heroes narrative would be one that makes visible the many failures of a healthcare system to actually support its workers, from the 26-hour shifts that medical students and residents work to the government's failure to mandate vaccines in hospitals at the very least.
Here we have circled back, in perhaps unexpected ways, to Garden's earlier caution that empathy must not obscure larger social contexts and especially power imbalances. Through the development of the phenomenologies of shame and empathy, and their application to the case of COVID-19 vaccination, I demonstrate how such affects work in more complex, vexed, and even contradictory ways than typically considered. Attending to such nuance is critical if one is to work towards effectively levying shame and empathy towards justice rather than further oppression. There are no tidy conclusions to make here, and that is precisely the point: to problematize these affects, not 'to hand you after an hour's discourse a nugget of pure truth to wrap between the pages of your notebooks and keep on the mantelpiece for ever,' in the words of Virginia Woolf from the opening of her seminal text A Room of One's Own. 27 In her chapter on 'Virginia Woolf and the Limits of Empathy,' Meghan Marie Hammond provides readings of several of Woolf's texts, notably including A Room of One's Own, and argues that Woolf rejects 'fellow feeling as a guiding principle for ethical action'. 28 But in my analyses of phenomenology and the case of COVID-19 vaccinations, I do not reject empathy or shame entirely. This is perhaps the closest I will come to offering a conclusion: while empathy and shame do not necessarily guide ethical actions or decisions, they still can if used in critically reflexive ways.
DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created or analyzed in this study. | 2022-09-28T06:18:26.185Z | 2022-09-25T00:00:00.000 | {
"year": 2022,
"sha1": "3d892688f31e4c07b59e43c07912b3ea677af29a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "432c81b2221d2d46326faa3f83c85189d8d6f619",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1698648 | pes2o/s2orc | v3-fos-license | Bounds for Bilinear Complexity of Noncommutative Group Algebras
We study the complexity of multiplication in noncommutative group algebras, which is closely related to the complexity of matrix multiplication. We characterize the semisimple group algebras of minimal bilinear complexity and show nontrivial lower bounds for the rest of the group algebras. These lower bounds are built on top of Bläser's results for semisimple algebras and algebras with large radical, and the lower bound for arbitrary associative algebras due to Alder and Strassen. We also show subquadratic upper bounds for all group algebras, turning into "almost linear" provided the exponent of matrix multiplication equals 2.
Introduction
We study noncommutative group algebras and the problem of computing the product of two elements of an algebra. We restrict ourselves to the so-called rank or bilinear complexity of multiplication, which, roughly speaking, counts only the bilinear multiplications used by an algorithm, i.e. multiplications where each of the operands depends on one of the input vectors. A quadratic (in terms of the dimension of an algebra) upper bound is straightforward, while all currently known general lower bounds are linear.

This research is motivated by the recent group-theoretic approach to matrix multiplication by Cohn and Umans [9] and the following group-theoretic algorithms for matrix multiplication [10]. It was shown that finite groups possessing some special properties can be used to design effective matrix multiplication algorithms. Our goal is to explore the structure of group algebras and investigate the structural and complexity relation between noncommutative group algebras and the matrix algebra. We investigate this approach and put it into a different light. In fact, we show that the group algebras for the most promising groups for the group-theoretic approach have essentially the same complexity as matrix multiplication itself. On the other hand, for a wide class of group algebras a lower bound holds which depends on the exponent of matrix multiplication (denoted in the literature by ω, see Sect. 3 for the definition). If one finds a more effective algorithm for multiplication in these group algebras, it would give a better upper bound for ω (but without necessarily proving ω = 2, which is the general conjecture [6]).

We also study the general bilinear complexity of noncommutative group algebras, and this paper extends the research in [22,23,7], where the problem for commutative group algebras over arbitrary fields was solved entirely. Our results also improve Atkinson's upper bound for the total complexity of multiplication in group algebras [2]. Using Bläser's theorem on the classification of all algebras of minimal rank (see Sect. 5) we formulate a criterion for a semisimple group algebra to be an algebra of minimal bilinear complexity. For some special cases we also show (5/2)·dimension lower bounds for the rank of group algebras. For other special cases we show a lower bound of up to 3 times the dimension of the algebra. For one special class of groups, having not "too many" different irreducible representations, we show a lower bound which depends on the exponent of matrix multiplication and turns out to be superlinear if the exponent of matrix multiplication does not equal 2. This employs Schönhage's τ-theorem (see Sect. 5). We show that this class is not empty; for instance, group algebras of symmetric groups of order n! and of general linear groups over finite fields have such a lower bound.

Another motivation for this work was the search for algebras of high bilinear complexity. It is known that over algebraically closed fields there exist families of algebras of arbitrarily high dimensions with the bilinear complexity of each algebra from the family strictly greater than (dim A)^2/27 [6, Exercise 17.20]. However, no concrete examples are known. This is in some sense similar to the situation in logical synthesis theory, where it is known that the circuit complexity (in a full basis) of almost all boolean functions of n variables is asymptotically c·2^n/n [21], where the constant c depends solely on the basis, e.g. for the classical circuit basis {∨, &, ¬}, c = 1.¹

¹ In fact, for a full circuit basis B = {f_1, . . . , f_n}, where each f_ν is of m_ν variables (with no fictitious dependencies) and has weight w_ν, the constant c = min_{1 ≤ ν ≤ n, m_ν ≥ 2} w_ν/(m_ν − 1).
But there is no explicit construction of a function of n variables with a superlinear lower bound on the number of gates in a full finite functional basis. We show that a broad class of group algebras has superlinear bilinear complexity if the exponent of matrix multiplication does not equal 2.

We then turn to upper bounds and show, by a simple technique, a general upper bound for the total complexity of multiplication in group algebras that depends on the total complexity of matrix multiplication. In fact, if the exponent of matrix multiplication equals 2, then the total complexity of multiplication in group algebras is always "almost linear". We indicate some special cases when this upper bound can be improved, provided a maximal irreducible representation of the group does not have too high a dimension.

For lower bounds we distinguish between the semisimple and the modular case. If the characteristic of the ground field is either zero or does not divide the order of the group, then the group algebra is known to be semisimple. In the other case, if the characteristic p divides the order of the group, then the algebra has a nontrivial radical. In some cases its structure inside the group algebra can be described exactly, but in general this introduces significant additional difficulties. If the radical has a relatively small nilpotence index, then it is possible to obtain relatively high lower bounds for the bilinear complexity of multiplication in the group algebra.

Finally, we show direct relations between the complexity of noncommutative group algebras and the complexity of matrix multiplication, and pose several open questions.

The paper is organized as follows. In Sect. 2 we give all necessary definitions and notions from algebra and representation theory. In Sect. 3 we introduce the model of computation we will be working with and formulate the related computational problems; we briefly discuss the tight relation between different algebraic notions and computational complexity, and introduce an important quantitative measure for the complexity of multiplication in families of algebras of growing dimensions which generalizes the well-known notion of the exponent of matrix multiplication. Classical structural results from the theory of finite-dimensional algebras and representation theory are presented in Sect. 4. Section 5 contains all necessary results from algebraic complexity theory to be employed for obtaining lower and upper bounds for the complexity of multiplication in group algebras. In Sect. 6 we prove the first part of our main result: we show that for any "complicated enough" group its corresponding group algebra is not of minimal rank. We also prove two different kinds of lower bounds for families of group algebras, depending on the representations of their groups, and show the general relation between the lower bound for the complexity of group algebra multiplication and the complexity of matrix multiplication. We show that the bilinear complexity of multiplication in group algebras of symmetric groups is superlinear in their dimension if the exponent of matrix multiplication does not equal 2. In Sect. 7 we turn to effective algorithms for multiplication in group algebras: we show a general upper bound for multiplication in any group algebra depending on the exponent of matrix multiplication, and some improvements based on particular properties of the group.
Basic Definitions
In what follows we always use the term algebra for an associative algebra with unity. For example, n × n-matrices over some field form an algebra, and so do univariate polynomials over some field modulo some fixed polynomial or multivariate polynomials modulo some system of polynomials.
A basis of an algebra is any basis of the underlying vector space. The dimension (dim A) of an algebra A is the dimension of the underlying vector space. The multiplication in an algebra is completely defined if it is defined for the vectors of any of its bases: let A be an algebra over k, n = dim A, and e_1, . . . , e_n be some basis of A; then

e_i · e_j = Σ_{ν=1}^{n} α^{ν}_{ij} e_ν,

where the α^{ν}_{ij} are the structural constants from the field k. We call a basis {e_i}_{i=1}^{n} of A a group basis if the vectors e_i form a multiplicative group with respect to the multiplication in the algebra. In this case A is called a group algebra. On the other hand, given a finite group G = {g_1, . . . , g_n} and a field k, we can define a group algebra k[G] as an n-dimensional vector space over k with basis {g_i}_{i=1}^{n} and multiplication in k[G] defined as

(Σ_i a_i g_i) · (Σ_j b_j g_j) = Σ_{i,j} (a_i b_j)(g_i g_j).

We call the direct product of the algebras A and B over one and the same field k the algebra A × B over k which consists of pairs of vectors (a, b), a ∈ A, b ∈ B, where all operations in A × B are performed component-wise: (a_1, b_1) • (a_2, b_2) = (a_1 • a_2, b_1 • b_2) for • ∈ {+, −, ·}, and λ · (a, b) = (λa, λb), where a_i ∈ A, b_i ∈ B, i = 1, 2, λ ∈ k.

We call B ⊆ A a subalgebra of A if B is a linear subspace of A and the product (in A) of any two vectors of B lies in B. A subalgebra I of A is called a left (right) ideal of A if for all a ∈ A, x ∈ I the product ax ∈ I (xa ∈ I, resp.). A left ideal that is at the same time a right ideal is called a two-sided ideal. A (left, right, two-sided) ideal is called maximal if it is not contained in any other proper (left, right, two-sided) ideal of the algebra. An ideal I is called nilpotent if I^m = {0} for some m > 0. The smallest m with this property is called the nilpotence index of I. The sum of all nilpotent left ideals of an algebra A is called the radical of A and is denoted by rad A. The intersection of all the maximal left ideals of the algebra A is called the Jacobson radical of A and is denoted by J(A).

Proposition 1. For a finite-dimensional algebra A, rad A = J(A); in particular, A/rad A is semisimple.

Proof. This follows from the fact that the descending chain condition for left ideals in A implies rad A = J(A), see [26]. It ensures that any family of left ideals in A contains at least one minimal ideal, i.e. an ideal that does not contain any other ideal of the family. In a finite-dimensional algebra this always holds, since we can map any family of ideals to a subset of the integers in [0, dim A], mapping each ideal to its dimension as a linear subspace. The resulting image will contain a minimal element, which corresponds to the set of ideals from the family having the minimal dimension. Obviously, any of these is minimal.
⊓⊔

The nilpotence index of rad A will be denoted by N(A). The set of all x ∈ rad A such that x · rad A = {0} is called the left annihilator of rad A and is denoted by L_A. The right annihilator R_A is introduced in a similar manner. An algebra A is called a division algebra if every nonzero element of A has an inverse in A with respect to the multiplication in A. A is called local if A/rad A is a division algebra, and A is called basic if A/rad A is a direct product of division algebras. Following Bläser [5], we call A superbasic if A/rad A ≅ k^t for some t ≥ 1.
An algebra A is called semisimple if rad A = {0}, and simple if it does not contain any proper two-sided ideals other than {0}. The structure of semisimple and simple algebras is described in Wedderburn's theorem, which can be found in [26].
Theorem 1. Every finite dimensional semisimple algebra over some field k is isomorphic to a finite direct product of simple algebras. Every finite dimensional simple k-algebra A is isomorphic to an algebra D^{n×n} for an integer n ≥ 1 and a k-division algebra D. The integer n and the algebra D are uniquely determined by A (the latter up to isomorphism).
Computational Model
Let U, V, and W be finite dimensional vector spaces over a field k. Let φ : U × V → W be a bilinear map. A bilinear algorithm for φ is a sequence (u_1, v_1, w_1; . . . ; u_r, v_r, w_r) with u_ρ ∈ U*, v_ρ ∈ V*, w_ρ ∈ W such that

φ(u, v) = Σ_{ρ=1}^{r} u_ρ(u) v_ρ(v) w_ρ for all u ∈ U, v ∈ V.

r is called the length of the bilinear algorithm, and the minimal length over all bilinear algorithms for φ is called the rank or the bilinear complexity of φ and is denoted by rk φ.
A sequence (f_1, g_1, w_1; . . . ; f_ℓ, g_ℓ, w_ℓ) with f_λ, g_λ ∈ (U × V)* and w_λ ∈ W such that φ(u, v) = Σ_{λ=1}^{ℓ} f_λ(u, v) g_λ(u, v) w_λ for all u ∈ U, v ∈ V is called a quadratic algorithm for φ. ℓ is called the length of the quadratic algorithm, and the minimal length over all quadratic algorithms for φ is called the multiplicative complexity of φ and is denoted by C(φ).
Obviously C(φ) ≤ rk φ. A straightforward argument implies also that rk φ ≤ 2C(φ), and except for trivial cases, rk φ < 2C(φ) [15]. Multiplication in an algebra A is a bilinear map. The rank and multiplicative complexity of multiplication in A are called the rank and multiplicative complexity of A and are denoted by rk A and C(A) respectively.
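To make the notion of a bilinear algorithm concrete, the following Python sketch (added here only for illustration; it is not part of the paper) verifies Strassen's classical bilinear algorithm of length 7 for the algebra k^{2×2}, witnessing rk k^{2×2} ≤ 7. Each of the seven products multiplies a linear form in the entries of A by a linear form in the entries of B, exactly as the definition requires; it is known that rk k^{2×2} = 7, so this small instance is already optimal.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 bilinear multiplications (Strassen)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Each m_i is a product of a linear form in A and a linear form in B.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

rng = np.random.default_rng(0)
A, B = rng.random((2, 2)), rng.random((2, 2))
assert np.allclose(strassen_2x2(A, B), A @ B)  # length-7 bilinear algorithm works
```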
It is easy to see that rk(A × B) ≤ rk A + rk B. However, it is not known if the converse also holds, which is known as the famous Strassen's Direct Sum Conjecture [6, p. 360]. Obviously, the rank (and therefore the multiplicative complexity) of any algebra A is at most (dim A)^2. Let A = {A_1, A_2, . . . } be a family of algebras over a field k. We define ω_A, the rank-exponent of multiplication in A, as

ω_A = limsup_{i→∞} log_{dim A_i} rk A_i.

Obviously, 0 ≤ ω_A ≤ 2. Note that this definition only makes sense if A contains algebras of arbitrarily big dimensions. In this case ω_A ≥ 1, since multiplication in an algebra is always faithful. This notion is very similar to the well-known exponent of matrix multiplication, which will be denoted simply by ω when the ground field is clear. The only technical difference is that the exponent of matrix multiplication is defined relative to the square root of the respective algebra dimension. In fact, it can be easily seen that the regular exponent of matrix multiplication equals double the rank-exponent of matrix multiplication. We acknowledge that the introduced rank-exponent provides quite a crude estimate, since it does not even indicate the growth order of the bilinear complexity as a function of algebra dimension. For example, if rk A_n = O(dim A_n), then ω_A = 1, but the opposite statement need not hold: if ω_A = 1, then the rank may potentially be superlinear, e.g. (dim A_n) · polylog(dim A_n). On the other hand, there are no known general upper bounds that are tight enough for the rank-exponent to be too rough. One of the most famous open problems in computational linear algebra and algebraic complexity theory is matrix multiplication, for which the exponent (and the rank-exponent) is only known to satisfy 2 ≤ ω ≤ 2.376 [11].
Structure of Group Algebras
Here we introduce some basic concepts from representation theory. For an extensive treatment we refer to [27]. Let G be a finite group and k be a field. Then k[G] is semisimple if and only if char k ∤ ♯G. Let G be a finite group and k be an algebraically closed field either of characteristic 0 or of characteristic p ∤ ♯G. Then k[G] decomposes into a direct product of matrix algebras:

k[G] ≅ k^{n_1×n_1} × · · · × k^{n_t×n_t},    (1)

where each matrix algebra is called an irreducible representation of G over k, and Σ_{τ=1}^{t} n_τ^2 = ♯G. The numbers n_1, . . . , n_t are called the character degrees of G in k.
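As a concrete (commutative) instance of the decomposition (1): for the cyclic group Z_n over the complex numbers all character degrees equal 1, the isomorphism C[Z_n] ≅ C^n is realized by the discrete Fourier transform, and multiplication in the group algebra (cyclic convolution) becomes pointwise multiplication. The following Python sketch, not part of the paper, checks this numerically.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
a, b = rng.random(n), rng.random(n)   # elements of C[Z_n] as coefficient vectors

# Direct multiplication in C[Z_n]: cyclic convolution of coefficients.
direct = np.zeros(n)
for i in range(n):
    for j in range(n):
        direct[(i + j) % n] += a[i] * b[j]

# Through the decomposition C[Z_n] ~ C^n: DFT, pointwise product, inverse DFT.
via_dft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
assert np.allclose(direct, via_dft)
```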
If k is not algebraically closed but again of characteristic either 0 or p ∤ ♯G, then

k[G] ≅ D_1^{n_1×n_1} × · · · × D_t^{n_t×n_t},    (2)

where the D_τ are division algebras over k of dimensions d_τ for 1 ≤ τ ≤ t, and Σ_{τ=1}^{t} d_τ n_τ^2 = ♯G.

Let k be a field of characteristic p and let G be a finite group of order np^s with p ∤ n and a normal Sylow p-subgroup. According to Proposition 1, k[G]/rad k[G] is semisimple (see [26]). This implies

k[G]/rad k[G] ≅ D_1^{n_1×n_1} × · · · × D_t^{n_t×n_t},

where the D_τ again are division algebras over k of dimension d_τ for 1 ≤ τ ≤ t, and Σ_{τ=1}^{t} d_τ n_τ^2 = n. In the case when the Sylow p-subgroups of G are not normal, the situation becomes more obscure. However, it is known that J(k[G]) contains all ideals generated by J(k[H]), where H is any normal p-subgroup of G. In particular, this holds when H is the intersection of all the p-Sylow subgroups of G.
Bounds for the Rank of Associative Algebras and Complexity of Matrix Multiplication
One general lower bound for the multiplicative (and therefore the bilinear) complexity of associative algebras is due to Alder and Strassen.
Theorem 2 ([1]). Let A and B be associative algebras over a field k and let t(A) be the number of maximal two-sided ideals of A. Then

rk(A × B) ≥ 2 dim A − t(A) + rk B.    (5)

Algebras for which the Alder–Strassen bound is tight (put B = {0} in (5)) are called algebras of minimal rank. All such algebras over arbitrary fields were characterized by Bläser.
The next two lower bounds are due to Bläser.
Theorem 4 ([3]). Let A be a finite dimensional algebra over a field k, with A/rad A ≅ A_1 × · · · × A_t and A_τ = D_τ^{n_τ×n_τ} for all τ, where D_τ is a k-division algebra. Assume that each factor A_τ is noncommutative, that is, n_τ ≥ 2 or D_τ is noncommutative. Let n = n_1 + · · · + n_t. Then

rk A ≥ (5/2) dim A − 3n.

We will show later how this can be combined with Theorem 2 for group algebras to obtain high lower bounds in cases when some A_τ are commutative. The next theorem gives a particularly good lower bound for algebras with a big radical and a small nilpotence index.
Theorem 5 ([3]). Let k be a field and A be a finite dimensional k-algebra. For all m, n ≥ 1, the rank of A is bounded from below in terms of the dimensions of the powers (rad A)^m and (rad A)^n of the radical.

The following fact is a simplified version of Schönhage's τ-theorem.
Theorem 6 ([24]). Let A = k^{n_1×n_1} × · · · × k^{n_t×n_t}, where n_τ > 1 for at least one τ, and let rk A ≤ r. Let ω_0 be a root of the equation n_1^x + · · · + n_t^x = r. Then the exponent of matrix multiplication over k does not exceed ω_0.
Lower Bounds
Let G = {G_1, G_2, . . . } be a family of finite groups of unbounded orders and let k be a field. We will distinguish between two different cases: 1. char k = 0, or char k = p and p ∤ ♯G_i for all i ≥ 1; and 2. char k = p and p | ♯G_i for some i ≥ 1.
We will call G in the first case a semisimple family of groups and in the second a modular family of groups. We will start with the semisimple case.
Semisimple Case
We will start with the case of algebraically closed k since all simple algebras over k are simply matrix algebras.
⊓⊔

Let G be a finite group and k be a field. We introduce the following notation: let t_i(G) be the number of irreducible character degrees of G over k equal to i, and let T_i(G) = Σ_{j=i}^{∞} t_j(G) be the number of irreducible character degrees of G over k not less than i. Obviously, T_1(G) ≥ 2; this follows from the fact that every nontrivial group has at least two different irreducible representations. Note that the number of maximal two-sided ideals of k[G] is exactly T_1(G) = t, where t is the number of multiplicands in (1).

Theorem 7. Let G = {G_1, G_2, . . .} be a family of finite groups with ♯G_n < ♯G_{n+1} for all n ≥ 1. Assume that the number of irreducible character degrees of G ∈ G over k is o(♯G). Then the following lower bound holds:

rk k[G] ≥ (5/2)♯G − o(♯G).

Proof. Consider the decomposition (1) for k[G]. Note that the number t is exactly the number of maximal two-sided ideals of k[G]. Assume w.l.o.g. that n_1 ≤ · · · ≤ n_t, let A be the direct product of all the matrix algebras from (1) of order 1 or 2, and let C be the remaining product:

k[G] ≅ A × C.    (10)

Both (10) and the fact that A is of minimal rank follow from Theorem 3. The number of maximal two-sided ideals in A is t_1(G) + t_2(G).
By using Lemma 1 for the dimensions of the factors of C and setting δ = 1/2, we obtain the required bound. On the other hand, the number t_1(G) of different irreducible representations of G of dimension 1 does not exceed t and is therefore also o(♯G); therefore, dim A = o(♯G).

The lower bound in case 2 can be improved further by employing the lower bound due to Bläser, rk k^{n×n} ≥ 2n² + n − 2 for n ≥ 3 [4]. However, the best we can achieve by now is to employ the Alder–Strassen lower bound for all multiplicands in (1) except for one (of the biggest dimension) and use 2n² + n − 2 for the last: if n_1 ≤ · · · ≤ n_t and n_t ≥ 3, then rk k^{n_1×n_1} × · · · × k^{n_t×n_t} ≥ 2♯G + n_t − t − 1.

Note that if the Direct Sum Conjecture were true, then from (1) it would immediately follow for the rank of multiplication in the group algebra k[G], for algebraically closed k, that rk k[G] = rk k^{n_1×n_1} + · · · + rk k^{n_t×n_t}.
It turns out that an insignificantly weaker version of the corresponding lower bound can be proved independently of the validity of the Direct Sum Conjecture.
Theorem 8. Let G = {G_1, G_2, . . .} be a family of finite groups and k be an algebraically closed field whose characteristic does not divide any of the orders of the groups from G. Let f(N) be a function such that for each G ∈ G the dimension of the largest irreducible representation of G is at least f(♯G). Let t(N) be a function such that for each G ∈ G the number of different irreducible representations of G does not exceed t(♯G). Then

rk k[G] ≥ max{ (f(♯G))^ω, (♯G / t(♯G))^{ω/2} },

where ω is the exponent of matrix multiplication over k.

Proof. The first statement trivially follows from the observation that for any algebras A, B over one field, rk A × B ≥ max{rk A, rk B}. Let k[G] have the decomposition according to (1). Consider the following equation: n_1^x + · · · + n_t^x = rk k[G]. Let ω_0 be a root of this equation. Then by Schönhage's τ-theorem, ω ≤ ω_0. In other words, using the fact that all n_τ ≥ 1,

rk k[G] = n_1^{ω_0} + · · · + n_t^{ω_0} ≥ n_1^ω + · · · + n_t^ω ≥ (f(♯G))^ω.

On the other hand, by employing Lemma 1, n_t ≥ (♯G / t(♯G))^{1/2}, which proves the theorem. ⊓⊔

2. In the same setting, if ω > 2, then the rank of the group algebras from the family described above is superlinear in their dimensions. 3. If ω > 2 and f(N) ≫ N^{1/ω}, then the group algebras from the corresponding family of groups have superlinear bilinear complexity. One promising family of finite groups which could help to achieve ω = 2 in [9] has f(N) = N^{1/2 − ε} for some fixed ε > 0. It follows that in general one should look for ε > 1/2 − 1/ω > 0.079, since otherwise the lower bound depends on ω and fails to be superlinear only if ω = 2.

2. The order of GL(n, q) is

♯GL(n, q) = (q^n − 1)(q^n − q) · · · (q^n − q^{n−1}) = q^{n²} Q, where Q = Π_{i=1}^{n} (1 − q^{−i}).

Note that 1 − 1/(q − 1) ≤ Q ≤ 1. GL(n, q) has an irreducible representation of order q^{n(n−1)/2} [13]. It follows that at least one irreducible representation of k[GL(n, q)] has the same order. Now the corresponding matrix algebra has dimension q^{n(n−1)} = N/(q^n Q). We will show now that q^n Q = o(N^ε) for any ε > 0. This will complete the proof, since then f(N) = (N/(q^n Q))^{1/2} ≥ N^{(1−ε)/2} for all groups of size N > N_0 and any ε > 0, where N_0 depends on the choice of ε.
Modular Case
Let k now be an algebraically closed field of characteristic p and let G be a finite group of order N = np^d, where p ∤ n. We will assume that G has a normal Sylow p-subgroup H of order p^d. In this case rad k[G] is generated by the augmentation ideal of k[H], and dim rad k[G] = n(p^d − 1). We will further be concerned with the case of abelian H, which is then a direct product of cyclic p-groups:

H ≅ Z_{p^{t_1}} × · · · × Z_{p^{t_s}}.

We will denote the elements of H by h_{i_1,...,i_s}, 0 ≤ i_σ < p^{t_σ} for all 1 ≤ σ ≤ s, assuming h_{i_1,...,i_s} · h_{j_1,...,j_s} = h_{(i_1+j_1) mod p^{t_1}, ..., (i_s+j_s) mod p^{t_s}}. The augmentation ideal of k[H] (and R = rad k[G]) is generated by r_1, . . . , r_s, where r_σ = h_σ − h_{0,...,0} and h_σ denotes the element with 1 in position σ and 0 elsewhere. It is easy to see that r_σ^{p^{t_σ}} = 0 and that the system of vectors

{ r_1^{i_1} · · · r_s^{i_s} | 0 ≤ i_σ < p^{t_σ} }

is linearly independent. The system { r_1^{i_1} · · · r_s^{i_s} | i_1 + · · · + i_s ≥ m, 0 ≤ i_σ < p^{t_σ} } is also linearly independent and generates R^m, so dim R^m = n(p^d − a_{m−1}), where a_m is the number of tuples (i_1, . . . , i_s), 0 ≤ i_σ < p^{t_σ}, with i_1 + · · · + i_s ≤ m.

Let ξ be a discrete random variable. We denote by Eξ the expectation of ξ, i.e. if ξ takes value a_i ∈ R with probability p_i ≥ 0 for 1 ≤ i ≤ n, Σ_{i=1}^{n} p_i = 1, then Eξ = Σ_{i=1}^{n} a_i p_i. We also denote by Dξ = E(ξ − Eξ)² the dispersion of ξ.
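The dimension count for R^m above rests on counting the monomials r_1^{i_1} · · · r_s^{i_s} of total degree at least m. A brute-force check of this count for one small abelian p-group (a Python sketch, not from the paper; the concrete parameters p = 2, H = Z_2 × Z_4 are arbitrary choices made for illustration):

```python
from itertools import product

def dim_Rm_over_n(p, ts, m):
    """Number of monomials r_1^{i_1}...r_s^{i_s}, 0 <= i_sigma < p^{t_sigma},
    of total degree >= m; per the text, dim R^m equals n times this count."""
    return sum(1 for i in product(*(range(p**t) for t in ts)) if sum(i) >= m)

p, ts = 2, (1, 2)                                   # H = Z_2 x Z_4, so p^d = 8
assert dim_Rm_over_n(p, ts, 1) == p**sum(ts) - 1    # dim R = n(p^d - 1)
print([dim_Rm_over_n(p, ts, m) for m in range(6)])  # descending chain R ⊇ R^2 ⊇ ...
```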
Theorem 9. Let G = {G_1, G_2, . . . } be a family of groups and k be a field of characteristic p. Let G ∈ G and ♯G = N = np^d, where p ∤ n. Assume that P = Z(G) (the center of G) is the Sylow p-subgroup of G and the parameter d is unbounded for groups in G. Let p^T be the order of the biggest cyclic factor of P and p^t be the smallest order, and let s be the total number of factors. Assume that for any ε > 0 the difference T − t < (1/2) log_p(εs) for all G ∈ G with ♯G > N_0 = N_0(ε). Then rk k[G] ≥ (3 − o(1)) ♯G.

Proof. The following proof is based on ideas by Chokayev and generalizes a similar result proven in [7] for one special case of commutative group algebras. We note that since P is abelian, it is a finite product of cyclic p-groups:

P ≅ Z_{p^{t_1}} × · · · × Z_{p^{t_s}},

where t_1 ≤ · · · ≤ t_s and the exponent of P is p^{t_s}. Since the exponent is o(♯P), the parameter s is unbounded among the groups from G. According to (7),

Upper Bounds

As (1) and (2) indicate, the complexity of multiplication in group algebras is closely related to the complexity of matrix multiplication. In particular, provided an effective algorithm for multiplication of square matrices, we immediately obtain an effective algorithm for multiplication in group algebras.
Proposition 2. Let n_1, . . . , n_t > 0 and α ≥ 1. Then

n_1^α + · · · + n_t^α ≤ (n_1 + · · · + n_t)^α.

Proof. The statement follows from the fact that x^α is convex for x ≥ 0 and α ≥ 1.
For any pair of monotonically growing functions f(n) and g(n) we will write f(n) ≾ g(n) if for every δ > 1, f(n) = O((g(n))^δ). Let G be a finite group and k be an algebraically closed field whose characteristic is either 0 or does not divide ♯G. Now we are ready to introduce the general upper bound for the rank of k[G].
Theorem 10. Let G be a group and k be an algebraically closed field of characteristic either 0 or coprime with ♯G. Then

rk k[G] ≾ (♯G)^{ω/2},

where ω is the exponent of matrix multiplication.
Proof. Consider the decomposition (1). By the definition of ω, for every δ > 1 one has rk k^{m×m} = O(m^{δω}), and rk k[G] ≤ Σ_{τ=1}^{t} rk k^{n_τ×n_τ}. By Proposition 2, applied with exponent δω/2 ≥ 1 to the numbers n_τ²,

Σ_{τ=1}^{t} n_τ^{δω} ≤ (Σ_{τ=1}^{t} n_τ²)^{δω/2} = (♯G)^{δω/2},

which gives rk k[G] ≾ (♯G)^{ω/2}. ⊓⊔

A refined upper bound also holds, where ω is the exponent of matrix multiplication and the minimum is taken over all functions h(N) such that at least one irreducible character degree of G is less than or equal to h(♯G).
Proof. Let n_1 ≤ · · · ≤ n_t be the irreducible character degrees of G over k. Let h(N) be as defined, and let j(N) be the number of the n_τ greater than h(N). Note that j(N) ≤ ♯G/h(♯G)², since each such degree contributes more than h(♯G)² to Σ_τ n_τ² = ♯G. The last equation holds for any h(N), so it holds also for the one minimizing the right side. ⊓⊔

Theorem 11. Let G = {G_1, G_2, . . . } be a family of finite groups and k be an algebraically closed field of characteristic either 0 or coprime with the order of each G_i. Let f(N) be a function which satisfies the following property: for each G ∈ G, all character degrees of G over k are less than or equal to f(♯G). Then for any G ∈ G

rk k[G] ≾ ♯G · (f(♯G))^{ω−2},

where ω is the exponent of matrix multiplication.
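The algorithmic content behind Theorems 10 and 11 can be made concrete on the smallest noncommutative example: by (1), C[S_3] ≅ C × C × C^{2×2} (character degrees 1, 1, 2, with 1 + 1 + 4 = 6). The Python sketch below (not from the paper; it realizes the standard 2-dimensional representation by rotations and a reflection, an arbitrary but standard choice) multiplies two elements of C[S_3] by transforming to the block-diagonal side, multiplying the blocks as matrices, and transforming back, then checks the result against direct convolution. Any fast matrix multiplication routine could be substituted for the block products, which is exactly how the upper bounds arise.

```python
import numpy as np
from itertools import product

def compose(p, q):                      # (p ∘ q)(i) = p(q(i)) on {0, 1, 2}
    return tuple(p[q[i]] for i in range(3))

# Images of the generators (a 3-cycle and a transposition) under the three
# irreducible representations: trivial, sign, and the standard 2-dim one.
w = 2 * np.pi / 3
R = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])  # rotation 120°
F = np.array([[1.0, 0.0], [0.0, -1.0]])                          # reflection
gens = {(1, 2, 0): (np.array([[1.0]]), np.array([[1.0]]), R),
        (1, 0, 2): (np.array([[1.0]]), np.array([[-1.0]]), F)}

# Build rho(g) for all 6 elements as words in the generators: rho(h∘g) = rho(h) rho(g).
rho = {(0, 1, 2): tuple(np.eye(d) for d in (1, 1, 2))}
frontier = [(0, 1, 2)]
while frontier:
    g = frontier.pop()
    for h, mats in gens.items():
        hg = compose(h, g)
        if hg not in rho:
            rho[hg] = tuple(m @ r for m, r in zip(mats, rho[g]))
            frontier.append(hg)

G, dims = list(rho), (1, 1, 2)
inv = lambda g: tuple(g.index(i) for i in range(3))   # inverse permutation

def multiply(a, b):
    """Multiply a, b in C[S_3] through the Wedderburn blocks of (1)."""
    Ah = [sum(a[g] * rho[g][i] for g in G) for i in range(3)]   # Fourier transform
    Bh = [sum(b[g] * rho[g][i] for g in G) for i in range(3)]
    Ch = [x @ y for x, y in zip(Ah, Bh)]                        # block products
    return {g: sum(d * np.trace(rho[inv(g)][i] @ Ch[i])
                   for i, d in enumerate(dims)) / 6 for g in G} # inverse transform

rng = np.random.default_rng(2)
a = {g: rng.random() for g in G}
b = {g: rng.random() for g in G}
conv = {g: 0.0 for g in G}
for g, h in product(G, G):              # direct multiplication: convolution on S_3
    conv[compose(g, h)] += a[g] * b[h]
fast = multiply(a, b)
assert all(abs(conv[g] - fast[g]) < 1e-9 for g in G)
```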
Let k now be an arbitrary field of characteristic 0 and G be a finite group. By the definition of a prime field, Q ⊆ k is the prime subfield of k. Let K ⊇ k be an algebraically closed extension of k. It is known (see [18, Theorem 11.4, Chapter XVIII]) that every representation of G over K is definable over Q(ζ_m), where m is the exponent of G and ζ_m is a primitive m-th root of unity. Therefore, it is definable over k(ζ_m) (if k does not already contain ζ_m). Now consider any irreducible representation of G over k. It is a simple k[G]-module by Maschke's Theorem [18, Theorem 1.2, Chapter XVIII]. Therefore, it is isomorphic to D^{n×n}, where D is a k-division algebra. ζ_m is algebraic over D since it is algebraic over k ⊆ D, and D ≅ D′ ⊆ k(ζ_m); the latter holds since there are no simple irreducible representations of G over k(ζ_m) other than those isomorphic to matrix algebras over k(ζ_m). Thus, D is a subalgebra of k(ζ_m), i.e. D ≅ k(ζ_ℓ) for some ℓ | m.

Theorem 12. Let G be a finite group and k an arbitrary field of characteristic 0. Then rk k[G] ≾ (♯G)^{ω/2}, where ω is again the exponent of matrix multiplication.
Proof. Since k[G] is semisimple, (2) holds. As mentioned above, each D_τ is actually an extension field of k; thus for all τ, rk D_τ ≤ 2d_τ − 1, since multiplication in D_τ can be implemented via polynomial multiplication over k and k is infinite.
Conclusion
Noncommutative group algebras appear to be closely connected with the matrix algebra. Studying the problem of the complexity of multiplication in group algebras may give us new algebraic insight into this classical problem of computer algebra and algebraic complexity theory. There are numerous open problems related to group algebras; we mention here only some of them.
1. It could be possible to obtain a general upper bound for the rank of group algebras, not depending on the matrix representations but based on the group structure, that is better than the upper bounds given by Theorems 10, 11, and 12. In this case it could improve the upper bound for matrix multiplication. 2. We would like to extend Theorem 12 to fields of arbitrary characteristic that does not divide any of the group orders from the family under consideration.
3. The radical of a group algebra in the modular case is tightly related to the Sylow p-subgroups. These groups are well studied, although their structure may vary very strongly. It is known that the rank of commutative group algebras with nontrivial radical is still linear, so the radical does not affect the order of the complexity. On the other hand, a commutative group algebra over an algebraically closed field of characteristic p is of minimal rank iff its Sylow p-subgroup is cyclic. An open question is whether similar effects also hold for noncommutative group algebras. | 2010-03-24T16:07:11.000Z | 2010-03-24T00:00:00.000 | {
"year": 2010,
"sha1": "7b7bcda9e6e6cc2985ae1cdc8dc90a90316bbe7d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cb726b84dea389b2b306713ea6355cf860bfd8f5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
115765715 | pes2o/s2orc | v3-fos-license | Manifestation of sub-Rouse modes in flow at the surface of low molecular weight polystyrene
The presence of a viscoelastic mechanism distinctly different from the segmental α-relaxation and the Rouse modes within the glass-rubber transition zone of polymers had been justified by theoretical considerations, and subsequently experimentally verified in different bulk polymers by various techniques and in several laboratories. Referred to in the literature as the sub-Rouse modes, they were also found in polymer thin films by the creep compliance measurements of McKenna and co-workers. Established by experiment and theoretical considerations is the enhanced mobility of sub-Rouse modes in thin PS films by the combination of effects from the free surface, finite size, and induced chain orientations, concomitant with the segmental α-relaxation. The induced chain orientation effect is present only when h is less than the end-to-end distance of the high molecular weight polymer chains. In this paper, the proven enhanced mobility of sub-Rouse modes at the surface of polymers is used to explain recent experimental investigations of viscous flow at the surface of low molecular weight PS by Chai et al. [Science, 343, 994 (2014)] and by Yang et al. [Science, 328, 1676 (2010)]. Viscous flow of polymers is by global chain motion; therefore the observed large reduction of viscosity at the surface of low molecular weight PS originates from the sub-Rouse modes, and not the segmental α-relaxation. This distinction is not commonly recognized in the current literature. The acceleration of the sub-Rouse modes at the surface explains the experimental findings.
Introduction
Theoretical considerations as well as experimental evidence in the glass-rubber transition zone of bulk amorphous polymers have shown that the segmental α-relaxation is not followed immediately by the Rouse modes [1-7]. In between these two better known viscoelastic mechanisms are the new modes, referred to as sub-Rouse modes [3-8], with length scale within one chain longer than that of the segmental α-relaxation but shorter than the Gaussian submolecule, the basic unit needed for the formation of the Rouse modes. A review of the history leading to the discovery of these intermediate viscoelastic mechanisms in various polymers by experiments was given in Ref. [5]. The best way to separate out the contributions from the three mechanisms within the glass-rubber transition zone is by shear compliance (creep) 3,4, precision dielectric, and internal friction measurements [10-16]. The measured shear compliance J(t) is rigorously the sum of the contributions from the segmental α-relaxation, Ĵ_α(t), the sub-Rouse modes, Ĵ_sR(t), and the Rouse modes, Ĵ_R(t). Experiments on various polymers including polystyrene and analyses of data have determined the extent of the contributions of these three viscoelastic mechanisms 1,17,18. A recent review has been given in Ref. [17], and here we go straight to the essential results. For entangled high molecular weight polystyrene (PS), it has been shown that Ĵ_α(t) lies within the range bounded by the glassy compliance J_g = 0.93 × 10⁻⁹ Pa⁻¹ and J_eα ≈ 4 × 10⁻⁹ Pa⁻¹, i.e., J_g ≤ Ĵ_α(t) ≤ J_eα. The sub-Rouse modes contribution, Ĵ_sR(t), exists within the bounds from J_eα ≈ 4 × 10⁻⁹ Pa⁻¹ up to approximately 10⁻⁷ Pa⁻¹. The Rouse modes contribution, Ĵ_R(t), falls within the range from approximately 10⁻⁷ Pa⁻¹ up to J_plateau ≈ 10⁻⁵ Pa⁻¹, where J_plateau is the entanglement plateau compliance of PS.
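As a compact restatement of these compliance ranges, the following Python sketch (added for illustration only; the ≈10⁻⁷ Pa⁻¹ boundary between the sub-Rouse and Rouse contributions is taken as the approximate value quoted above) assigns a measured recoverable compliance to the mechanism whose range it falls in:

```python
def mechanism_for_compliance(J):
    """Classify a recoverable shear compliance J (in 1/Pa) for entangled
    high-MW PS using the approximate bounds quoted in the text."""
    Jg, Je_alpha, J_subrouse_max, J_plateau = 0.93e-9, 4e-9, 1e-7, 1e-5
    if J < Jg:
        return "below the glassy compliance Jg"
    if J <= Je_alpha:
        return "segmental alpha-relaxation"
    if J <= J_subrouse_max:
        return "sub-Rouse modes"       # upper bound assumed, ~1e-7 1/Pa
    if J <= J_plateau:
        return "Rouse modes (up to the entanglement plateau)"
    return "beyond the entanglement plateau"

print(mechanism_for_compliance(5e-8))  # -> "sub-Rouse modes"
```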
The values of Ĵ_sR(t) contributed by the sub-Rouse modes span a considerable range. Hence, only the sub-Rouse modes with Ĵ_sR(t) closer to J_eα ≈ 4 × 10⁻⁹ Pa⁻¹ have properties closer to those of the segmental α-relaxation. The classical studies by Plazek and coworkers found thermorheological complexity of the compliance spectra and viscoelastic anomalies 5,6,18-24, which were confirmed over the years by other workers [25-31]. The cause is traced to the presence of the three viscoelastic mechanisms and the different temperature dependencies of their effective relaxation (or retardation) times, τ_α, τ_sR and τ_R [3-6,8]. The segmental α-relaxation time τ_α has the strongest, τ_sR the intermediate, and τ_R the weakest temperature dependence. The segmental α-relaxation is well known to be dynamically heterogeneous, involving cooperative or correlated motion of repeat units within a length-scale. The properties of the more recently discovered sub-Rouse modes also indicate cooperative dynamics, albeit to a lesser extent than the segmental α-relaxation [3-6,13]. Despite the fundamental nature of the findings of thermorheological complexity in the glass-rubber transition zone, no explanation has been given in the literature except the singular one by the Coupling Model (CM) 4,8,17,24,25,32. The explanation is based on the difference in the degrees of cooperativity of the three mechanisms. In the order of decreasing degree of cooperativity are the segmental α-relaxation, the sub-Rouse modes, and the Rouse modes.
Degree of cooperativity is characterized in the CM by the coupling parameters. Hence we have the relations between the coupling parameters of the three mechanisms, n_α, n_sR, and n_R, given by n_α > n_sR > n_R = 0. The sub-Rouse modes with Ĵ_sR(t) and τ_sR closer to J_eα ≈ 4 × 10⁻⁹ Pa⁻¹ and τ_α, respectively, have larger n_sR. The above is a short summary of the characteristics of the three mechanisms in the glass-rubber transition zone of bulk high molecular weight entangled PS.
The focus of the present work is on viscous flow at the surface of low molecular weight PS, for which experimental measurements have been made recently 33,34. At the free surface of supported and freestanding PS thin films, the mobility of the segmental α-relaxation is much higher than in the bulk. This was anticipated in the very first paper applying the CM to PS thin films in 1998 35 by making the statement: "In addition, polymer chains on or near the surface will have an increased mobility due to fewer interactions with neighboring chains; i.e. half of the neighboring chains are missing at the surface. Again, this reduction of intermolecular constraints leads to a decrease of the coupling parameter. As h decreases the surface to volume ratio increases and the reduction of T g becomes larger.". The emphasis on the free surface effect was repeated in 2002 36, where one can find the statement: "we pointed out that the reduction of intermolecular coupling in the film depends on the distance from the nearest surface and hence the same is true for the decrease in n or the resultant enhancement of local segmental mobility. The largest decrease of n from its bulk value occurs at the free surface and the change diminishes continuously when going towards the center of the film. This idea is consistent with the computer simulation results that the mobility near the surface is higher [22][23][24][25][26][27][28] and also the simplified three-layers model proposed later by Mattsson et al. [8].".
In high molecular weight (MW) PS films with thickness comparable to or less than the end-to-end distance of the chains, there is induced chain orientation which can also reduce intermolecular coupling, but this effect does not exist in the viscous flow at the free surface of low MW PS, the focus of the present paper. The free surface effect and the finite size effect (i.e., when the thickness h is less than the cooperative length-scale) act together to mitigate intermolecular constraints, which corresponds in the framework of the CM to a reduction of the coupling parameter from the bulk value n_α to a smaller value n_α(h, j). Here j is the j-th layer counting from the surface layer, which is the first. It was stated explicitly in the 2002 paper: "The largest reduction of n and τ occurs at the free surface layer and monotonically become less for layers located further into the interior", and repeated verbatim in the recent 2013 paper 17.

The key equation of the CM is the dependence of the segmental α-relaxation time τ_α on n_α, given by

τ_α = (t_c^{−n_α} τ_0)^{1/(1−n_α)},    (4)

where t_c = 1 to 2 ps for PS and τ_0 is the primitive relaxation time, with value independent of h and j. Based on Eq. (4), the 2006 paper 37 anticipated these effects. For the same reason, the effects of the free surface and the finite size of the thin film cause a reduction of the bulk coupling parameter n_α to smaller values of n_α(h, j). From this result and by applying Eq. (5) to both the bulk and the thin film, it can be easily verified that τ_α(h, j) is shorter than the bulk τ_α. Naturally the smallest value of n_α(h, j) and the shortest τ_α(h, j) are at the j = 1 free surface layer.
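To see how strongly a reduced coupling parameter shortens τ_α through Eq. (4), here is a small numeric illustration (Python; the values t_c = 2 ps and τ_0 = 10 ns, and the coupling parameters 0.6 in the bulk versus 0.4 at the surface, are hypothetical round numbers chosen for illustration, not values taken from the paper):

```python
t_c, tau_0 = 2e-12, 1e-8          # seconds; hypothetical inputs

def tau_alpha(n):
    """Coupling Model relation of Eq. (4): tau = (t_c**(-n) * tau_0)**(1/(1-n))."""
    return (t_c ** (-n) * tau_0) ** (1.0 / (1.0 - n))

n_bulk, n_surface = 0.6, 0.4      # hypothetical coupling parameters
speedup = tau_alpha(n_bulk) / tau_alpha(n_surface)
print(f"bulk tau = {tau_alpha(n_bulk):.2e} s, "
      f"surface tau = {tau_alpha(n_surface):.2e} s, speedup ~ {speedup:.0f}x")
```

Because τ_0 is many orders of magnitude larger than t_c, even a modest reduction of n_α produces orders-of-magnitude shorter τ_α, which is the qualitative mechanism invoked throughout for the enhanced surface mobility.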
In the following section we first briefly review the creep compliance experiments on nanobubble-inflated freestanding PS thin films by McKenna and co-workers [41-43] and their observation of the simultaneous acceleration of the segmental α-relaxation and the sub-Rouse modes. The retardation times τ_α and τ_sR both become shorter on decreasing the film thickness h, which will be used to address the results from the recent studies of the surface viscosity of a low molecular weight PS by Chai et al. 33 and Yang et al. 34. Explanation of the experimental data by the sub-Rouse modes, consistent with all our previous works, is the objective of this paper.

In Fig. 1, the tip and the end of the arrow indicate D_sR and D_eα, respectively. The sole purpose of Fig. 1 is to demonstrate the simultaneous observation of the sub-Rouse modes and the segmental α-relaxation in thin PS films. By considering the change of the biaxial compliance data on decreasing film thickness h, we have explained and concluded in Ref. [17] that both the sub-Rouse modes and the segmental α-relaxation are accelerated on decreasing h, but to a lesser extent for the former than the latter. Here we can use Fig. 1 to elucidate this fact simply. It is clear from Fig. 1 that the creep compliance D(t) data obtained at 72 and 75 C are contributed entirely by the sub-Rouse modes. In order for the sub-Rouse modes of the bulk PS with MW = 994,000 Da to be seen in the experimental time window of Fig. 1, the temperature would have to be much higher than T_g = 98.8 C. Since the sub-Rouse modes appear within the experimental time window at 72 and 75 C, the sub-Rouse modes have clearly been accelerated by the effect of the free surface, and possibly also the finite size effect, in the 36 nm thick high-MW PS film.
In Fig. 2 we compare the master curve D(t) of the film with the master curve J(t) of bulk PS of high MW = 600,000 Da. 22 The slope of the log-log plots of the data in the sub-Rouse regime of the thin film is about a factor of 2 smaller than that in the bulk. This significant change of slope in the thin film can be taken as evidence that the faster sub-Rouse modes contributing to higher compliance are accelerated less than the slower sub-Rouse modes contributing to lower compliance. These experimental findings had been explained by the Coupling Model (CM) equations (4) and (5).
Direct evidence of acceleration of sub-Rouse modes at surface of polymers from surface viscosity measurements
A novel investigation of enhanced surface mobility was reported by Chai et al. 33, using the geometry of a stepped PS film on a substrate. They measured the viscosity above and below the bulk T_gB of the low molecular weight PS with M_w = 3000 g/mol. Above the bulk T_gB = 343 K or 70 C, the entire film is involved in viscous flow. However, below T_gB, flow occurs only in the near-surface region, made possible by the high mobility at the surface. At temperatures sufficiently far below T_gB, the measured flow comes entirely from near the surface. From about 100 C down to T_g ≈ 70 C, they found the recoverable compliances, J_r(t) = J(t) − t/η, are all less than ≈ 10⁻⁷ Pa⁻¹, and hence the data are contributed by the sub-Rouse modes and the segmental α-relaxation. 5,17,22,46 This can be seen from Fig. 7 of Ref. [22], where the final increase of J_r(t) is due to the presence of a higher molecular weight tail in the polydisperse sample. The same was found in monodisperse poly(methylphenylsiloxane) with low molecular weight of 5000 g/mol 38. Thus, also in the case of the 3000 g/mol low molecular weight PS studied by Chai et al., at temperatures above and below the bulk T_g = 70 C, the only viscoelastic mechanisms present are the segmental α-relaxation and the sub-Rouse modes.
Nevertheless, viscous flow of the low molecular weight PS, in the bulk or at the surface, is performed exclusively by the sub-Rouse modes, and not by the segmental α-relaxation. Largely enhanced surface diffusion has been observed in indomethacin, a small-molecule glass-former 50. It is also found at the surface of shear bands of mechanically deformed metallic glasses 51. In both cases, the Coupling Model (CM) is able to explain 52,53 quantitatively the large enhancement of mobility at the surface found in experiments.
There is no doubt that the free surface is an important cause of enhanced mobility of both the segmental α-relaxation and the sub-Rouse modes of polymers. The latter is amply demonstrated by the surface flow experimental data considered in this work.
Notwithstanding, the finite size effect is another contributing factor in polymer thin films when the thickness is comparable to the cooperative length-scale of the segmental α-relaxation. It acts alone in causing a significant reduction of T_g in systems without a free surface, as shown by experiments. Notable examples include the confinement of PMPS in nanocomposites 54, in nanometer glass pores of PDMS and PMPS by Schönhals and coworkers, and the study by Simon and coworkers 58. | 2019-04-13T09:53:50.856Z | 2014-10-17T00:00:00.000 | {
"year": 2014,
"sha1": "b7ba6c343cdfa7a8574cd97c178982719766c692",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b7ba6c343cdfa7a8574cd97c178982719766c692",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
123936273 | pes2o/s2orc | v3-fos-license | POSITIVE SOLUTIONS TO ELLIPTIC EQUATIONS IN UNBOUNDED CYLINDER
This paper investigates positive solutions of a second order linear elliptic equation in an unbounded cylinder with zero boundary condition. We prove that there exist two special positive solutions with exponential growth at one end and exponential decay at the other, and that all positive solutions are linear combinations of these two.
1. Introduction. The structure of positive harmonic functions on a domain Ω in R^N (N ≥ 2) has been much studied. As early as 1941, Martin [13] gave a method for uniquely representing any positive harmonic function in an arbitrary domain in R³ by an integral on the minimal Martin boundary. His results have been extended to second order elliptic operators with a zero potential by Shur [16]. In the case where the closure of a domain is compact in the manifold, many mathematicians gave sufficient conditions for the corresponding Martin boundary to be equal to the relative boundary of the domain (see Hunt and Wheeden [9] and Taylor [17]). There are also investigations focused on positive harmonic functions in special unbounded domains, such as half-spaces, cones or cylinders. For example, Benedicks [1] established a harmonic measure criterion that describes when the cone of positive harmonic functions on Ω that vanish on the boundary ∂Ω is generated by two linearly independent minimal harmonic functions. Benedicks' criterion describes when a Denjoy domain behaves like the union of two half-spaces from the point of view of potential theory. Related work, based on sectors, cones or cylinders, may be found in [3,12,7,15]. Landis and Nadirashvili [11] showed that a positive solution to a uniformly elliptic equation in a cone of R^n which vanishes at the boundary is unique up to a constant multiple. Murata [14] established a method to construct the Martin boundary and Martin kernel for second order elliptic equations and gave a sufficient condition for an equation in R^n or a cone of R^n to have a unique (up to a constant multiple) positive solution vanishing at the boundary.
Assume C is the cylinder D × R, where D is a bounded Lipschitz domain in R^n, and let x = (x_1, · · · , x_{n+1}) = (x_1, · · · , x_n, y) = (x′, y) denote a typical point of R^n × R. This paper investigates positive solutions of a linear elliptic equation defined in C. When the cylinder is U = B × R (here B is the unit ball in R^n), it is known (see [6]) that the cone of positive harmonic functions vanishing on the boundary is generated by h_±(x′, y) = e^{±αy} φ(x′), where α is the square root of the first eigenvalue of the operator −∆ = −Σ_{j=1}^{n} ∂²/∂x_j² on B and φ is the corresponding eigenfunction, normalized by φ(0) = 1. We want to show that the set of positive solutions of elliptic equation (1) defined in C has a similar structure.
We consider the following elliptic equation:

Lu = 0 and u > 0 in C,  u = 0 on ∂C,    (1)

where L stands for a second order uniformly elliptic operator of one of the following two types:

Lu = Σ_{i,j=1}^{n+1} ∂_i (a_{ij}(x) ∂_j u)  (divergence form) or Lu = Σ_{i,j=1}^{n+1} a_{ij}(x) ∂_{ij} u  (non-divergence form).

We assume that a_{ij}(x) = a_{ji}(x) ∈ C^∞(C), and that L is uniformly elliptic: for some Λ > 0 and any ξ ∈ R^{n+1},

Λ^{−1}|ξ|² ≤ Σ_{i,j=1}^{n+1} a_{ij}(x) ξ_i ξ_j ≤ Λ|ξ|².
The assumption a_{ij} ∈ C^∞ is qualitative, in the sense that none of our estimates depend on the smoothness of the a_{ij}. By a standard approximation technique, all of our results remain valid for uniformly elliptic equations with measurable coefficients a_{ij}.
We are interested in the question of existence and uniqueness (to within a multiplicative constant) of a solution u of problem (1). We also study the precise asymptotic behavior of the solutions and show finer properties of these solutions in the cylinder. In particular, there are two special solutions with exponential growth at one end and exponential decay at the other, and all the positive solutions are linear combinations of these two.
Theorem 1.1. The solution sets S⁺ and S⁻ of problem (1) are not empty, and S consists of linear combinations of S⁺ and S⁻; that is, for any v ∈ S⁺ and w ∈ S⁻ we have

S = { av + bw | a ≥ 0, b ≥ 0, a + b > 0 }.

Asymptotic behavior of the solutions: there exist constants α, β, C, C′ > 0, depending only on n, Λ, D, such that any v ∈ S⁺ \ {0} grows exponentially (with rate controlled by α and β) as y → +∞ and decays exponentially as y → −∞, and symmetrically any w ∈ S⁻ \ {0} grows as y → −∞ and decays as y → +∞ (inequalities (3) and (4)). For any u ∈ S∨, we assume x* = (x*′, y*) ∈ C is such that u(x*) = m(u); then the corresponding two-sided exponential estimates (5) hold on either side of y*. In order to illustrate the results, we give a simple example, which is also a special case of Theorem 1 in Gardiner [6].
Consider the problem

∆u = 0 and u > 0 in (0, π) × R,  u = 0 on ∂((0, π) × R),    (6)

and define S_F to be the solution set of problem (6). It is easy to see that e^y sin x, e^{−y} sin x ∈ S_F; actually they are the only two nontrivial solutions in S_F in the sense of Theorem 1.1. We also see that the asymptotic behavior of the solutions is exponential.
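A quick symbolic check of this example (Python with sympy; not part of the paper) confirms that both functions are positive in the strip (since sin x > 0 on (0, π)), harmonic, and vanish on the lateral boundary:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
for u in (sp.exp(y) * sp.sin(x), sp.exp(-y) * sp.sin(x)):
    laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
    assert sp.simplify(laplacian) == 0                  # harmonic in (0, pi) x R
    assert u.subs(x, 0) == 0 and u.subs(x, sp.pi) == 0  # zero lateral boundary data
```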
In particular, writing C_I := D × I for an interval I ⊂ R, for any y ∈ R we use C_y := C_{\{y\}}, C_y⁺ := C_{(y,+∞)}, C_y⁻ := C_{(−∞,y)}, C⁺ := C_0⁺, C⁻ := C_0⁻. We also study the positive bounded solutions defined in the half cylinder C⁺. They can be approximated by the solutions in S⁻. That is, for w ∈ S⁻ there exist constants α > 0, depending only on n, Λ, D, and K, C > 0, depending only on û(1), n, Λ, D, such that the corresponding exponential approximation estimate holds. The rest of this paper is divided into two parts. In Section 2 we establish some auxiliary results. First we introduce some fundamental notions concerning the positive solutions to equation (1) vanishing at the boundary. The maximum principle for solutions in the cylinder is proved under a boundedness condition. We then introduce a so-called boundary Harnack principle, which is proved in Fabes et al. [5]. The main theorem will be proved in Section 3. We study the structure of the set of positive solutions. We will show the exponential growth and decay of the positive solutions. The existence of positive solutions is also proved. Any bounded positive solution in the half cylinder can be approximated by a solution in the whole cylinder.
2. Preliminaries. In this section, we collect some preliminary results. In Section 2.1, we give the maximum principle in the cylinder. In Section 2.2, on the basis of the boundary Harnack principle, we give some lemmas to compare the solutions.
2.1. Maximum Principle. According to the well-known maximum principle, a subharmonic continuous function on a domain Ω ⊂ R^n, n ≥ 1, which is bounded above and non-positive on the boundary ∂Ω, is in fact negative everywhere in Ω, and this result extends to subsolutions of a large class of linear elliptic equations [8]. It is an important tool for us to study the properties of the solutions. Now we want to investigate the validity of the maximum principle for solutions in the cylinder. We begin our investigation with a Harnack-type principle for the operator L.
By the classical maximum principle, u(x) ≤ u⁺(x) ≤ 1 for x ∈ C_{(0,2)}. By the boundary Hölder estimate in Gilbarg and Trudinger [8] (Theorem 6.19 in the non-divergence form and Theorem 8.25 in the divergence form), there exists a constant C_0 > 0 which depends only on n, Λ, D. Take ε_0 sufficiently small such that C_0 ε_0^α ≤ 1/2 and apply the Harnack principle.

When L is in non-divergence form in Lemma 2.1, there is an alternative proof using a barrier function. As a matter of fact, assume D ⊂ B_R ⊂ R^n and construct the barrier function accordingly. Now we can extend the maximum principle to the solutions in the cylinder C.
Lemma 2.2. Suppose Lu(x) ≤ 0, x ∈ C, u ≤ 0 on ∂C, and u is bounded from above. Then u ≤ 0 in C.

Proof. Without loss of generality, we assume sup_C u > 0 and derive a contradiction.

The solutions defined in the half cylinder C⁺ can be proved to satisfy the following maximum principle through the same method.
Now we are able to show the exponential decay of the bounded solutions in C + .
Proof. By the definition of m(u) and û(y), we may take a minimizing sequence {(x_j′, y_j)}_j ⊂ C such that u(x_j′, y_j) → m(u). We claim that there exists a subsequence of {(x_j′, y_j)}_j (we still denote it {(x_j′, y_j)}_j) such that lim_{j→∞} y_j = −∞ or lim_{j→∞} y_j = +∞.
We claim there exists a constant a > 0 such that av(x) ≤ u(x), x ∈ C. By contradiction, suppose there exist {x_j = (x_j′, y_j)}_{j=1}^{+∞} ⊂ C such that (1/j) v(x_j) > u(x_j). Then there is a subsequence of {x_j}_{j=1}^{+∞} (still denoted {x_j}_{j=1}^{+∞}) such that lim_{j→∞} y_j = +∞. It is easy to see there is a lower bound for {y_j}_{j=1}^{+∞} from (9). Actually, if {y_j}_{j=1}^{+∞} were bounded in R, we could assume that |y_j| < M, j ≥ 1.
With Lemma 2.8, there exists a constant k_M > 0 such that u(x) ≥ k_M v(x) on C_{(−M,M)}. If we take J ∈ N⁺ sufficiently large such that 1/J < k_M, we get a contradiction with (10).
Without loss of generality, we assume that {y_j}_{j=1}^{+∞} is a strictly monotone increasing sequence, that is, y_{j_2} > y_{j_1} > 0 for j_2 > j_1 ≥ 1 (we may pass to a subsequence to ensure this). Assume ε is the constant in Lemma 2.8. For any j ≥ 1, we consider u(x) and (1/j)u(x) in the region C_{(0,y_j)}, and there is a corresponding touching point for each j. Since lim_{j→∞} y_j = +∞, it follows from Proposition 1 that u(x) is not bounded in the region C_{(0,+∞)}, and we get a contradiction. So there is a positive constant a > 0 such that av ≤ u. With a similar method, we can show there is a positive constant b > 0 such that bw ≤ u.
Proposition 3. For any u, v ∈ S⁺, there exists a constant c > 0 such that

u(x) ≤ c v(x), x ∈ C.    (11)

Proof. With Lemma 2.8, for any y ∈ R there exists a constant c_y such that u(x′, y) ≤ c_y v(x′, y) for x′ ∈ D. We prove (11) by contradiction. Suppose there exist {x_j = (x_j′, y_j)}_{j=1}^{+∞} ⊂ C such that u(x_j) > j v(x_j). With Lemma 2.8, there exists a constant ε such that u(x′, y_j) ≥ ε j v(x′, y_j), x′ ∈ D. If we choose j = N big enough such that εN > c_0 + 1, then this contradicts (12).
3. Proof of the main theorem. With the preparations of the above section, we now finish the proofs of the main theorems.
Proof of Theorem 1.1. We divide the proof into three parts.

1° First we study the structure of S⁺ and S⁻. For any u, v ∈ S⁺, set

E = { a > 0 | u(x) ≤ a v(x), x ∈ C },  K = inf E.

By contradiction: if Kv − u ≢ 0, then Kv − u ∈ S⁺. With Proposition 3, there exists a constant K_1 > 0 such that v(x) ≤ K_1 (Kv(x) − u(x)), and hence u ≤ (K − 1/K_1) v. Then K − 1/K_1 ∈ E, and this contradicts the definition of K. Therefore we get u = Kv, and S⁺ = {av | a > 0}. With a similar method we can also get S⁻ = {bw | b > 0}.
2° Next, we study the structure of the solution set $S$.
For any $u \in S$, if $u \in S^+$ or $u \in S^-$, then there is $a > 0$ or $b > 0$ such that $u = av$ or $u = bw$. Next we suppose $u \in S^{\vee}$. Set $a^* = \sup E$. By Proposition 2, $E \neq \emptyset$, so $a^* > 0$. We also have $a^* < +\infty$; indeed, if not, we can follow the method of Proposition 3 to get a contradiction.
Consider the function $u - a^* v$. By continuity, $u - a^* v \ge 0$. Since $u \notin S^+$, we have $u - a^* v \not\equiv 0$, and by the Harnack principle $u - a^* v > 0$. As $u - a^* v \in S$, we claim that $u - a^* v \in S^-$. Then there exists $b > 0$ such that $u - a^* v = bw$. It is easy to see that $u = a^* v + bw$, so we get the conclusion. We prove the claim by contradiction: if $u - a^* v \in S^{\vee}$, then by Proposition 2 there is a constant $\bar{a} > 0$ such that $u - a^* v \ge \bar{a} v$, i.e., $u - (a^* + \bar{a}) v \ge 0$. So $a^* + \bar{a} \in E$, which contradicts $a^* = \sup E$.
3° The existence of the solutions. For $H \in \mathbb{N}^+$, consider equation (13) on the truncated cylinder $C_{(-H,+H)}$. By classical elliptic theory, there exists a unique positive solution $u_H(x)$ to equation (13), with $u_H(x) > 0$ for $x \in C_{(-H,+H)}$. Since (13) is linear, we can adjust the constant $C$ such that $u_H(0', 0) = 1$. There is a subsequence of $\{u_H\}$ converging in $C_{(-1,+1)}$; when $M = 2$, we can extract a further subsequence converging in $C_{(-2,+2)}$; performing the same operation for each $M \in \mathbb{N}^+$, we obtain by a diagonal argument a sequence converging on every $C_{(-M,+M)}$, $M \in \mathbb{N}^+$. We denote the limit by $u(x)$, which is defined in $C$. So $u(x)$ satisfies (1), and therefore $u(x) \in S$. Moreover, we can prove that $u(x) \in S^+$.
By a similar method we can prove that there exists $w(x) \in S^-$. It is easy to check that $(u + w)(x) \in S^{\vee}$, so $S^{\vee} \neq \emptyset$.
Proof of Theorem 1.2. We divide the proof into two steps. In Step 1, we prove that the growth and decay rates of positive solutions to (1) are at least exponential at infinity. In Step 2, we prove that the rate of the asymptotic behavior of these positive solutions is at most exponential at infinity.
Therefore we get one part of the inequality in (3). By a similar method we get one part of the inequality in (4).
By a similar argument we get the estimate in $(-\infty, y^*)$. Therefore we get the rest of (5).
We use Lemma 2.3 and obtain the required estimate. If (20) is satisfied, a similar argument also gives (21). | 2019-04-21T13:08:33.937Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "a28e50a0a8ea42a043f6500d1b12660fb477b329",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=8b56d81e-c98e-4acb-b856-17dd7c9a9d4e",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5f1efd4dfb44436554bd93916d725b8e59c6501d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
236640028 | pes2o/s2orc | v3-fos-license | Development of Engineered Cementitious Composites Using Sea Sand and Metakaolin
The present study investigates the possibility of using sea sand, instead of silica sand, in producing engineered cementitious composites (ECCs) and the optimal mix proportion, mechanical behavior, and erosive effect of chloride ions on sea sand ECCs (SECCs). Nine groups of SECC specimens were prepared based on the orthogonal test design, and these were cured for the uniaxial tensile, uniaxial compression, and fracture energy tests. The roundness and sphericity of sea sand and silica sand were quantified by digital microscopy. The microstructure and composition of SECCs were characterized by scanning electron microscopy (SEM) and X-ray diffraction (XRD). The mix proportions of SECCs with a tensile strain capacity more than 2% and a compressive strength more than 60 MPa were obtained. The factor analysis of these serial tests revealed that the contents of both fly ash and sea sand have a significant effect on the compressive strength and tensile strain capacity of SECCs. The fracture energy test revealed that the matrix fracture toughness of SECCs significantly increases with the increase in sea sand content. The XRD analysis revealed that the addition of metakaolin can enhance the ability of SECCs to bind chloride ions, and with the increase in chloride ion content, the ability of SECCs to bind chloride ions would improve. The results of the present study provide further evidence of the feasibility of using sea sand in the production of ECCs, in order to meet the requirements of diverse concrete components on ductility and durability.
INTRODUCTION
Engineered cementitious composites (ECCs) are a novel structural material with high resistance to cracking and damage and were originally proposed by Li and Leung (Li and Leung, 1992). With the addition of fibers, a tensile strain-hardening characteristic of more than 2% ultimate tensile strain can be obtained, which is approximately 200 times that of ordinary concrete. Excellent crack control and good resistance to wear and spalling provide ECCs great potential as a replacement for mainstream building materials (Li et al., 2003; Kojima et al., 2004; ECC Technology Network, 2005; Rokogo and Kanda, 2005; Li and Xu, 2009; Zhong et al., 2021). However, the aggregate used by ECCs is generally fine silica sand, which is priced at 20-30 times more than untreated sea sand (price varies by region). This high cost limits ECCs' large-scale production and further application in the construction industry. Therefore, it is a promising approach to use sea sand in the place of fine silica sand, in order to reduce the cost of production.
In addition, the preparation of ECCs with sea sand not only saves costs but also eliminates the time-consuming problem of transporting silica sand from inland to coastal areas, thereby shortening the construction period. Huang et al. (Huang et al., 2020a) discussed the feasibility of producing seawater sea sand ECCs (SS-ECCs) through compressive and direct tensile tests, and the results indicated that seawater and sea sand slightly increase the compressive strength by 12% and marginally decrease the tensile strength by 6% and tensile strain capacity by 18%. Furthermore, some scholars (Huang et al., 2020b; Huang et al., 2021; Yu et al., 2021) comprehensively investigated the influence of sea sand size, polyethylene fiber length, and fiber volume dosage on the mechanical performance and crack characteristics of SS-ECCs. They proposed a probabilistic method to analyze the reliability of the tensile strain capacity of SS-ECCs, a five-dimensional representation to assess the overall performance of SS-ECCs, and a probabilistic model to describe the stochastic nature and evolution of crack width. Yao et al. (Yao et al., 2022) used sea sand to partially replace silica sand, in combination with BFRP bars, in order to greatly improve the tensile strength of SECCs. Although the effect of sea sand on the mechanical behavior of ECCs has been examined, few data have addressed the importance of chloride ions and their negative effects on the mechanical properties of SECCs.
In terms of replacing silica sand with sea sand, the high content of chloride ions in sea sand would corrode the reinforcement. In the present study, metakaolin was added to alleviate the erosive impact of chloride ions, and the comprehensive effects of the amount of fly ash and sea sand on the mechanical properties of SECCs were investigated, according to the orthogonal test design method and the typical ECC design basis (Li, 2012). Analysis of variance (ANOVA) and range analysis were performed on the data through uniaxial tensile and uniaxial compression tests, in order to obtain the optimal fit ratio for tensile strain capacity and compressive strength. The roundness and sphericity of sand were quantified using a digital microscope to distinguish the grain morphology of sea sand from that of the silica sand matrix. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) were used to analyze the mechanism of factor action from a microscopic viewpoint and the chemical composition. In addition, the influence of matrix fracture toughness on the tensile properties of SECCs was investigated using the fracture energy test. The present study would provide reference for further research on the engineering applications of SECC proportion design, contributing to the establishment of the balance between the economy and material properties.
MATERIALS AND METHODS
In the replacement of silica sand with sea sand, there is concern that the high content of chloride ions in sea sand would corrode the steel reinforcement. There are two binding forms of chloride ions in cement-based materials. One is physical adsorption, that is, chloride ions are adsorbed on the surface of the C-S-H gel. However, this binding force is relatively weak, making it easy to be broken, with the adsorbed chloride ions converting back into free chloride ions (Wang et al., 2013). The other form is chemical binding. Previous studies (Ben-Yair, 1974) have revealed that, in the case of mixed-in chloride ions, chloride ions react with the cement hydration product Ca(OH)2 to generate CaCl2 and subsequently react with C3A in the cement to generate Friedel's salt. The chemical reactions are as follows:

Ca(OH)2 + 2Cl− → CaCl2 + 2OH− (1)

CaCl2 + C3A + 10H2O → C3A·CaCl2·10H2O (Friedel's salt) (2)

The above chemical reactions show that the binding of chloride ions is mainly due to the formation of Friedel's salt, and the amount of chloride ions that are bound usually increases with an increment in the effective Al2O3 content in the cementitious materials. Compared with various supplementary cementitious materials (silica powder, slag, and fly ash), metakaolin (45% Al2O3) has the highest binding rate of chloride ions (Thomas et al., 2012). As the content of Al2O3 in the matrix increases (i.e., the content of C3A in the matrix increases), the equilibrium of reaction (2) moves to the right. That is, as the metakaolin content increases, the binding rate of chloride ions also increases. Furthermore, metakaolin can refine the pore structure of concrete, reduce chloride ion transport channels, and improve chloride ion penetration resistance (Zeng et al., 2015). However, the Al2O3 content in cement is merely approximately 5-12%, while that of metakaolin can reach as high as 40%. Therefore, metakaolin was added in the present experiment to bind chloride ions and refine the pore structure.
In addition, after the replacement of silica sand with sea sand, which is generally a medium-size sand, the increase in aggregate particle size would lead to more tortuous crack paths (Li, 2012), and the resulting increase in matrix fracture toughness (K_m) would not be conducive to multiple cracking. Considering that K_m decreases with the increase in fly ash content (Turk and Nehdi, 2018), the investigators attempted to increase the fly ash content to reduce the fracture toughness of the matrix, thereby maintaining the ductility of the SECC.
Orthogonal Experimental Design
The orthogonal test is a mathematical method for conducting multifactorial tests based on statistical principles and the orthogonal theory (Statistics group of institute of mathematics , 1975). The orthogonal test selects sufficient representative samples from the full-scale test, according to orthogonality, in order to analyze the relationship between factors and results. The representative samples have the characteristics of uniform dispersion and neat comparison, which is the main method for the fractional factor analysis design. In the present study, the orthogonal test was used to determine the optimal mix proportion. The orthogonal test is widely used in research works and can handle complex issues with significantly lower cost and less time (Bai et al., 2009).
In the present study, the experimental factors to be considered were sea sand content (factor A), metakaolin content (factor B), and fly ash content (factor C). The basis of the orthogonal test is the orthogonal table, and according to the number of factors and levels, different orthogonal tables are selected. The test factors and levels are presented in Table 1. The L9(3^4) orthogonal table (Gao et al., 2019) was applied for the present study. The mix proportions used in the orthogonal test are presented in Table 2.
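To make the design concrete, the sketch below builds the standard L9(3^4) orthogonal array and maps its first three columns to the three factors; the fourth column is the null (error) column, and the level values are placeholders standing in for Table 1, which is not reproduced here:

# Standard L9(3^4) orthogonal array: 9 runs, 4 three-level columns.
# Columns 1-3 carry factors A, B, C; column 4 is the null (error) column.
# Level values are illustrative placeholders, not the values of Table 1.

L9 = [  # entries are level indices 0, 1, 2
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

factors = {
    "sea_sand":   [0.6, 0.8, 1.0],     # factor A (placeholder levels)
    "metakaolin": [0.12, 0.16, 0.20],  # factor B (placeholder levels)
    "fly_ash":    [1.2, 2.2, 3.2],     # factor C, FA/C (placeholder levels)
}

for run, (a, b, c, _null) in enumerate(L9, start=1):
    mix = {name: lv[i] for (name, lv), i in zip(factors.items(), (a, b, c))}
    print(f"run {run}: {mix}")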
Raw Materials and Specimen Preparation
SECCs comprise ordinary Portland cement [Class P.O 42.5, Guangxi, China (General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China, 2007)], fly ash (aggregate size 13 μm, Class I, Henan, China), metakaolin (aggregate size 6.5 μm, Henan, China), fine silica sand (aggregate size 106 μm, Jiangsu, China), sea sand (chloride content 0.21%, fineness modulus 2.43, and shell particle content 4.4%; the grading curve is presented in Figure 1; Philippines), water, polycarboxylate superplasticizer (water reducing rate 25%, Anhui, China), and 2% (volume %) polyvinyl alcohol fiber (the physical and mechanical characteristics are presented in Table 3; Kuraray, Japan). The average aggregate diameter used in this study is the average length diameter D_l, obtained by summing the diameters of all aggregate particles in the sample and dividing by the total number of particles, i.e., the arithmetic mean of the particle diameters (Zhang et al., 2000). The chemical composition of cement, fly ash, and metakaolin is presented in Table 4. The grading curves for fly ash, metakaolin, and silica sand were determined by particle size analysis (PSA), as shown in Figure 1. The sand-to-binder and water-to-binder ratios were 0.36 and 0.27, respectively. The dose of the water-reducing agent was adjusted according to the consistency of the mixture during mixing. The adjustment principle was that the slump of the mixture should reach 200 ± 10 mm.
The uniaxial compressive strength of SECCs was measured using cubes with a dimension of 70.70 mm × 70.70 mm × 70.70 mm, which has been widely used in China (Ministry of Industry and Information of the People's Republic of China, 2018). Then, a 400 mm × 100 mm × 15 mm rectangular plate was used for the tensile test (Li et al., 2001). Four 2 mm thick aluminum plates were pasted at the two ends of the plate specimen to reduce the potential stress concentration caused by the clamping part.
The mixing procedure was initiated by dry mixing of cement, metakaolin, sea sand, fly ash, and fine silica sand using a standard mortar mixer for 2 min (140 rpm). Afterward, the water and water reducer were added to the dry components (stirred at 285 rpm for 5 min), and the PVA fibers were slowly mixed while stirring at 285 rpm for 5 min. Then, the specimens were shaken for 1 min after pouring and allowed to stand for 24 h in an environment with a temperature of 25 ± 3°C and a relative humidity of 65 ± 2%. Next, the specimens were stored at a relative humidity of 95 ± 5% and a temperature of 20 ± 2°C, at the age of 28 days. Three specimens were prepared for each proportion used for the tensile test, and three specimens were prepared for each proportion used for the compressive test. The tensile tests were conducted using a microcomputer-controlled electrohydraulic servo universal testing machine. In the tensile test, the loading method was displacement controlled, and the loading rate was 0.20 mm/min. In the compression test, the loading rate was 0.30 MPa/s. In order to determine the influence of the fracture toughness of the matrix on the tensile property of SECCs, a three-point bending test on the notched beam was conducted according to ASTM E399-12 (ASTM, 2009). The specimen size was 354 mm × 75 mm × 40 mm. Before the test, a 30.0 mm deep notch was cut at the middle bottom of the specimen, as shown in Figure 2A. During the test, a clip extensometer was used to measure the crack mouth opening displacement (CMOD), with a loading rate of 0.05 mm/min. The loading details are presented in Figure 2B.
Test Results
The ultimate tensile stress and ultimate tensile strain of SECCs are listed in Table 5. The ultimate tensile strain of SECCs with different mix proportions ranges between 0.38 and 2.4%. Tensile specimens differing by more than 15% from the average value were eliminated based on the experimental standard (Ministry of Industry and Information of the People's Republic of China, 2018). The stress-strain curves of the two tensile tests are presented in Figure 3. It can be observed from the stress-strain curves that all mixtures exhibited distinct strain-hardening characteristics. Following cracking, the bridging capacity of the fiber restricts further crack propagation, and the fluctuation of stress on the curve reflects the generation of multiple cracking behaviors on the specimen surface. In order to scientifically analyze the influence of various factors on SECCs, range analysis and variance analysis were performed on the data in Table 5. The null column in the table is actually a comprehensive column of uninvestigated interactions and other unknown influencing factors, which can reflect the errors caused by random factors.
Range Analysis
Range analysis can quickly and intuitively analyze the primary and the secondary order of influencing factors and obtain the optimal mix proportion of SECCs. Range analysis was carried out, and the analysis results for ultimate tensile stress are presented in Table 6.
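Concretely, the computation behind such a table reduces to averaging the response over the three runs at each level of each factor and ranking the factors by the spread of those level means; a minimal sketch follows, with illustrative response values rather than the measured data:

# Range analysis for an L9(3^4) orthogonal test: for each factor, average
# the response over the runs at each level, then take the range of the
# three level means. A larger range means a more influential factor.
# Response values below are illustrative placeholders, not Table 5 data.

L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]
y = [4.6, 3.9, 3.5, 4.1, 3.6, 3.2, 3.8, 3.4, 2.6]  # e.g. tensile stress, MPa

names = ["sea sand (A)", "metakaolin (B)", "fly ash (C)", "null column"]
for f, name in enumerate(names):
    means = [sum(yi for row, yi in zip(L9, y) if row[f] == lv) / 3.0
             for lv in range(3)]
    R = max(means) - min(means)
    print(f"{name}: level means = {[round(m, 2) for m in means]}, range = {R:.2f}")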
The order of influence of each factor on ultimate tensile stress is fly ash content > sea sand content > metakaolin content, and the optimal mix proportion is A1B1C1. It can be observed from the optimal proportion that the ultimate tensile stress was the greatest when the content of sea sand and fly ash was the lowest. This is because a lower K_m is attained with the increase in fly ash (Li, 2012), and the decrease in K_m results in a decrease in tensile strength. The addition of sea sand also reduced the ultimate tensile stress, but its influence was less than that of fly ash. However, the matrix fracture toughness K_m of the ECC matrix increased with the increase in aggregate particle size. This can be attributed to the fact that an increase in aggregate size can lead to fiber agglomeration, which prevents fibers from performing well in bridging (Li and Li, 2013), ultimately leading to the reduction in ultimate tensile strength. In the present study, digital microscopy was used to quantify the particle morphology of sea sand and silica sand, in order to determine whether there are morphological factors other than particle size that affect the mechanical properties of SECCs. The reason for the minimal effect of metakaolin on the ultimate tensile stress of SECCs is that although metakaolin can refine the pore structure and improve the tensile strength of SECCs to a certain extent, it mainly plays the role of binding chloride ions in the matrix, and it is present in the smallest amount among the binders. Therefore, it has a minimal effect on the ultimate tensile stress of SECCs.
According to the range analysis of the ultimate tensile strain, the order of influence of each factor on the ultimate tensile strain is also fly ash content > sea sand content > metakaolin content, and the optimal mix proportion is A1B3C3. This is because as the content of fly ash increases, the chemical bonding of the fiber/matrix interface decreases, and the frictional bonding increases (Yang, 2007). In particular, the interface bond would be helpful for the fiber pull-out failure, rather than the fiber tensile failure, in the matrix. Furthermore, the increase in fly ash would reduce the matrix fracture toughness K m . Both trends are conducive to the development of multiple cracks in SECCs and improving the ductility. In order to further confirm this result, the destruction of fibers in specimens was observed using a scanning electron microscope after unloading. The range analysis results revealed that the enhancement amplitude for matrix fracture toughness by sea sand (when the content was 0.6) was less than the reduction amplitude for matrix fracture toughness by fly ash (when FA/C was 3.2). However, when the sea sand content was ≥0.8, the increase of matrix fracture toughness for sea sand was greater than the decrease of matrix fracture toughness for fly ash. In general, the ultimate tensile strain generally decreases with the increase in sea sand content. In addition, with the same amount of sea sand, the higher the content of fly ash, the better the ductility, which is consistent with the previous discussion.
Analysis of Particle Morphology
The critical issue to be addressed for the SECC mix proportion design is to satisfy both the strength and energy criteria (Li, 2012). The strength criterion must initially be met when microcracks are generated. That is, the first cracking strength σ_fc, controlled by the matrix fracture toughness, should be less than the fiber bridging capacity σ_0 at any given potential crack surface:

σ_fc < σ_0 (3)

In addition, the energy criterion must be satisfied. The flat crack propagation mode requires an energy balance, that is, the work performed by the tensile load applied on the matrix, σ_ss δ_ss, must be equal to the energy required to break down the toughness of the crack-tip material, J_tip, plus the energy required to open the fiber bridging crack from 0 to δ_ss:

σ_ss δ_ss − ∫₀^{δ_ss} σ(δ) dδ = J_tip (4)

where

J_tip = K_m² / E_c (5)

where E_c refers to Young's modulus of the matrix, and the left side of Eq. 4 is called the complementary energy. When σ(δ) reaches the bridging stress σ_0 (δ_0), the complementary energy reaches its maximum value:

J_b′ = σ_0 δ_0 − ∫₀^{δ_0} σ(δ) dδ (6)

In order to achieve steady-state cracking, the following is necessary:

J_tip ≤ J_b′ (7)

A numerical illustration of this check is sketched after this paragraph. As an important ingredient for typical ECCs, sand has an important impact on the basic mechanical properties, workability, shrinkage rate, and material cost of the composite (Li, 2011). A larger aggregate particle size increases the sinuosity of the crack propagation and the matrix fracture toughness K_m (Li, 2012). This is not conducive to the flat crack propagation necessary for ECCs and can cause fiber agglomeration, resulting in insufficient fiber bridging capacity across flat cracks (Li and Li, 2013). Wu et al. (Wu et al., 2019) explored the influence of the morphological parameters of sand on the mechanical performance of ECCs and determined the morphological parameters (including particle roundness and sphericity) of river sand using image analysis and computer algorithms.
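As referenced above, a quick numerical check of the energy criterion can be sketched in a few lines; the matrix toughness, modulus, and complementary energy below are illustrative placeholder values, not measurements from this study:

# Energy-criterion check for steady-state (flat) cracking in an ECC:
# the crack-tip toughness J_tip = K_m^2 / E_c must not exceed the maximum
# complementary energy J_b' of the fiber bridging law (Eq. 7 above).
# All numbers are illustrative placeholders.

K_m = 0.55e6      # matrix fracture toughness, Pa*m^0.5 (0.55 MPa*m^0.5)
E_c = 20e9        # Young's modulus of the matrix, Pa
J_b_prime = 30.0  # maximum complementary energy, J/m^2

J_tip = K_m ** 2 / E_c  # J/m^2
print(f"J_tip = {J_tip:.1f} J/m^2, J_b' = {J_b_prime:.1f} J/m^2")
print("steady-state cracking possible:", J_tip <= J_b_prime)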
Wadell (Wadell, 1932; Wadell, 1933) first proposed the use of roundness to describe the sharpness of particle corners. As shown in Figure 4, r_in refers to the radius of the maximum inscribed circle, r_i refers to the radius of the i-th corner circle, and n refers to the total number of corner circles. The roundness of sand particles is defined as the ratio between the average radius of curvature at the particle corners and the maximum radius of the inscribed circle in the following equation:

R = (Σ_{i=1}^{n} r_i / n) / r_in (8)

R ranges from 0 to 1; the rounder the sand, the closer the value of R to 1. The sphericity of sand particles describes the degree to which the projected area of a particle approximates a circle (Krumbein and Sloss, 1951) and is defined as the ratio of the particle width d_2 to the particle length d_1:

S = d_2 / d_1 (9)

For spherical particles, S is equal to 1; for elongated particles, S ≪ 1. Under a digital microscope, the sphericity and roundness of silica sand and sea sand were calculated, as shown in Figure 5. The yellow circle represents the inner circle of the sand corners, while the red circle represents the maximum inner circle of the sand particles. A total of 100 complete sand particles were randomly selected from the electronic images of these two kinds of sand, and the roundness and sphericity were calculated and averaged. It can be observed from the calculation results that the sphericity and roundness of these two kinds of sand are not significantly different, and the main difference lies in the particle size and main component. The average particle size of sea sand is 0.42 mm, while the average particle size of silica sand is 0.10 mm. In addition, the main components of sea sand are SiO2 and CaCO3, while the main component of silica sand is SiO2. Therefore, the difference in aggregate morphology between sea sand and silica sand has minimal impact on the fracture toughness of the SECC matrix. This is consistent with the previous discussion that the influence of aggregates on the ultimate tensile stress and ultimate tensile strain of SECCs mainly lies in the particle size.
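The two shape descriptors reduce to simple ratios once the corner-circle radii and particle dimensions have been measured on the micrographs; a minimal sketch of the computation, with made-up measurements rather than data from this study, follows:

# Wadell roundness (Eq. 8) and Krumbein-Sloss sphericity (Eq. 9) for one
# sand grain, from measurements taken on microscope images. The numbers
# below are made-up examples, not data from this study.

def roundness(corner_radii, r_inscribed):
    """R = (mean corner-circle radius r_i) / (max inscribed radius r_in)."""
    return (sum(corner_radii) / len(corner_radii)) / r_inscribed

def sphericity(d2, d1):
    """S = particle width d2 / particle length d1 (so S <= 1)."""
    return d2 / d1

corner_radii = [0.05, 0.07, 0.04, 0.06]  # r_i, mm (made-up)
r_in = 0.18                              # maximum inscribed circle, mm
d2, d1 = 0.35, 0.46                      # width and length, mm

print(f"R = {roundness(corner_radii, r_in):.2f}")  # -> 0.31
print(f"S = {sphericity(d2, d1):.2f}")             # -> 0.76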
Morphology Analysis
After the uniaxial tensile test, the sections of each group of specimens, with a size of 15 mm × 15 mm × 10 mm, were taken out using a small cutting machine. A Phenom scanning electron microscope was used for observation, under an accelerating voltage of 15 kV.
On the microscopic observation of SECC failure surfaces with different mix proportions, the destruction of fibers in different proportions could be clearly observed. In the present study, the tensile failure sections of A1B1C3, A3B2C1, and A2B2C3 were analyzed with the classical ratio of M45 (Singh et al., 2019) as the control. The SEM test results are presented in Figure 6.
According to the SEM observations, the fibers in the matrix had tensile failure, regardless of the content of sea sand for SECCs. This was attributable to the fact that, in the energy criterion, in order to fully play the role of the bridging fiber in the matrix, the bond strength of the fiber/matrix should be moderate. That is, if the bond strength is lower, the fiber tension-softening phenomenon would occur, while if the bond strength is higher, the possibility of fiber rupture would increase, thereby reducing J b ', which is not conducive to multiple cracks. Besides, the increase of fly ash content can reduce the chemical bonding strength at the fiber/matrix interface (Li, 2012). The fly ash content of the SECC matrix varies in different mix proportions; therefore, the chemical bond strength of the fiber/ matrix interface is different.
The ductility of SECCs is significantly influenced by the fracture toughness of the matrix. In order to determine the influence of different sea sand contents on the fracture toughness of the SECC matrix, the three-point bending test of the notched beam was carried out. The results are summarized in Table 7. The fracture toughness K_m is calculated according to Eqs. 10, 11 (Yu et al., 2018). The load-CMOD curve is presented in Figure 7A. As shown in Figure 7B, the content of sea sand has a significant influence on K_m: the matrix fracture toughness significantly increases with the increase in sea sand content. This is not conducive to the realization of the energy criterion and leads to an excessive number of fiber ruptures. In contrast, for M45 without sea sand, more fibers in the SEM images showed pull-out failure, while few fibers had tensile failure.
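Eqs. 10 and 11 themselves are not reproduced in this text; as an illustration of how a notched-beam fracture toughness of this kind is typically evaluated, the sketch below implements the standard ASTM E399 single-edge-notched bend expression with the specimen geometry described above. The peak load and loading span are assumed placeholder values, and this standard formula stands in for, rather than reproduces, the paper's own equations:

import math

# Fracture toughness of a single-edge-notched bend (SE(B)) specimen,
# evaluated with the standard ASTM E399 expression as an illustration.
# Loading span and peak load below are assumed placeholder values.

def f_geometry(x):
    """Dimensionless geometry function f(a/W) for the SE(B) specimen."""
    num = 3.0 * math.sqrt(x) * (1.99 - x * (1 - x) * (2.15 - 3.93 * x + 2.7 * x * x))
    den = 2.0 * (1 + 2 * x) * (1 - x) ** 1.5
    return num / den

def k_from_bend(P, S, B, W, a):
    """K = P*S / (B*W^1.5) * f(a/W), in Pa*m^0.5 for SI inputs."""
    return P * S / (B * W ** 1.5) * f_geometry(a / W)

P = 600.0            # peak load, N (placeholder)
S = 300e-3           # loading span, m (assumed for the 354 mm beam)
B, W = 40e-3, 75e-3  # specimen thickness and depth, m
a = 30e-3            # notch depth, m (30.0 mm, as cut above)

print(f"K_m = {k_from_bend(P, S, B, W, a) / 1e6:.2f} MPa*m^0.5")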
Fly ash has three effects in the matrix: microaggregate effect, activity effect, and morphological effect (Yan, 2007). In the SEM images, fly ash particles with a complete grain shape and smooth surface (not involved in the secondary hydration reaction) and fly ash particles attached to the C-S-H gel (already involved in the secondary hydration reaction) could be clearly observed. With the increase in fly ash content, the amount of fly ash without the secondary hydration reaction increases. This significantly improves the pore structure and compactness of the matrix. In addition, the C-S-H gel formed by fly ash participating in the secondary hydration reaction was closely bound to the fly ash. When the matrix was damaged, the fly ash took the hydration products away from the matrix, leaving spherical holes on the fractured surface. Although the increase in fly ash content can reduce the chemical bonds of the fiber/matrix and increase the friction bonds (Yang, 2007), it would still be difficult to balance the improvement of matrix fracture toughness caused by the addition of sea sand. Thus, in ECCs mixed with sea sand, all fibers had tensile failure. However, the ultimate tensile strain of A1B3C3 with the highest fly ash content and the least sea sand content could still reach 2%. This is consistent with the previous discussion that both the increase in interfacial bonds and decrease in fracture toughness of the matrix contribute to the improvement of the ductility of ECCs.
Variance Analysis
Variance analysis can distinguish the differences in test results due to the changes in factor levels from differences due to fluctuations in error. The variance analysis for the ultimate tensile stress and ultimate tensile strain test results for SECCs is presented in Tables 8, 9. It can be observed in Table 8 that fly ash content is the main factor that affects ultimate tensile stress, and sea sand content merely has a certain influence on ultimate tensile stress. Furthermore, it can be observed in Table 9 that the content of fly ash and sea sand had a significant influence on the ultimate tensile strain and that the content of fly ash had a more significant influence. However, the change in metakaolin content had no significant influence on the ultimate tensile strength and ultimate tensile strain of SECCs. This is consistent with the results of the range analysis, which shows that the orthogonal analysis result is reasonable.
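A simplified stand-in for this computation can be run with standard statistics tooling: group the nine responses by the level of each factor and apply a one-way F-test. This is rougher than the orthogonal-design ANOVA of Tables 8 and 9 (which pools the null column as the error term), and the response values below are placeholders:

from scipy.stats import f_oneway

# Simplified per-factor ANOVA on an L9(3^4) design: group the nine
# responses by factor level and run a one-way F-test for each factor.
# Response values are illustrative placeholders, not measured data.

L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]
y = [2.1, 2.4, 2.2, 1.5, 1.1, 1.9, 0.4, 0.9, 1.3]  # e.g. tensile strain, %

for f, name in enumerate(["sea sand (A)", "metakaolin (B)", "fly ash (C)"]):
    groups = [[yi for row, yi in zip(L9, y) if row[f] == lv] for lv in range(3)]
    stat, p = f_oneway(*groups)
    print(f"{name}: F = {stat:.2f}, p = {p:.3f}")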
Test Results
The compressive strength test results for SECCs are presented in Table 10. The compressive strength for SECCs in the nine groups with different mixing proportions ranged between 37.81 and 64.66 MPa. All tested SECCs met the requirements for the design strength grade.
Range Analysis
The range analysis for the SECC compressive test data was carried out, and the results are presented in Table 11. The order of factors that affect the compressive strength is fly ash content > sea sand content > metakaolin content, and the optimal proportion is A3B2C1. The content of fly ash was also the main factor that affected the compressive strength of SECCs. An appropriate content of fly ash can promote the secondary hydration of cement and fill pores, in order to improve the strength. However, if the fly ash is increased, the amount of cement per unit volume decreases, the effective water-to-cement ratio to control the hydration reaction increases, and the early compressive strength of the material decreases (Fu and Cai, 2019). Therefore, when the content of sea sand is the same, the compressive strength of SECCs decreases with the increase in fly ash content.
Furthermore, sea sand is a secondary factor that affects the compressive strength of SECCs. This can be attributed to the fact that sea sand has a higher density due to the presence of shell particles, which are composed of calcium carbonate. Although calcium carbonate has no gelation, shell particles are strong and durable, which reduces the porosity to some extent (Xiao et al., 2017). Therefore, the increase of sea sand content contributes to the improvement of the compressive strength of SECCs. The influence of the change in metakaolin content on the tensile and compressive performance of SECCs cannot be observed from the range analysis and variance analysis. Therefore, the present study analyzed the action of metakaolin from the chemical point of view.
X-Ray Diffraction Pattern Analysis
Considering that it is difficult to obtain the effect of metakaolin on the basic mechanical properties of SECCs with the orthogonal test and that it is also difficult to observe Friedel's salt in the SEM images, SECC samples that were aged for 28 days were used for the XRD analysis of the nine mixed proportions, in order to demonstrate chemical reactions (Eqs. 1, 2) and the change in metakaolin content. The samples were crushed and ground in mortar. Then, anhydrous ethanol was added during grinding to stop hydration. After grinding to cement fineness, the samples were collected and dried in an oven at 50 ± 5°C, in order to prevent the hydration products from decomposition at high temperature. Next, the powder samples were analyzed using an X-ray diffractometer (Rigaku D/MAX 2500V, Japan) with a scanning speed of 4°/min and a test range of 5°-60°. The generation of Friedel's salt was observed.
Effects of Sulfate Ions on Phase Composition
In the case of mixing chloride ions in sea sand, C 3 A in the matrix reacts with the chloride ion to form Friedel's salt in the cement hydration process, and the use of sulfate in the cementation material (the SO 3 content in cement was the highest, 3.16%) would cause competition between the sulfate ion and the chloride ion for C 3 A. However, in the hydration process of cement mixed with chloride ions, it has been generally considered that the C 3 A phase preferentially reacts with sulfate ions to form AFt until the sulfate ions are exhausted, and subsequently, Friedel's salt is generated (Wang et al., 2013). Hence, the peak value of AFt in the diagram was not obvious. Furthermore, the addition of various mineral admixtures reduced the absolute production amount of AFt. If the fly ash content is increased, the cement content decreases, and the sulfate ions in the matrix on the whole also decrease. Therefore, in the proportion with the least fly ash content (FA/C 1.2), the peak value of AFt was the most obvious.
Analysis of Causes Affecting the Production of Friedel's Salt
It can be observed from Figures 8A-C that the diffraction peak intensity of Friedel's salt increases with the increase in sea sand content. Therefore, the increase of sea sand content (i.e., the increase in chloride ion content) increases the content of Friedel's salt. This can be attributed to the fact that metakaolin and fly ash both contain a significant amount of C 3 A, which can fully react with the chloride ions in sea sand to form Friedel's salt (Wang et al., 2013). Therefore, if the chloride ion content is increased, the content of Friedel's salt also increases. In addition, in the initial stage of mixing, the chloride ion at the far end of the surface of sea sand is dissolved in the solution, at the interface between sea sand and cement paste. In the later stage of mixing, along with the hydration of cement, the chloride ions near the surface of sea sand would gradually spread outward, with sea sand as the center. Some of free chloride ions formed Friedel's salt with hydration products, while the other parts were distributed around the aggregate or adsorbed by the C-S-H gel (Xing et al., 2006). Therefore, the diffraction peak of Friedel's salt was not obvious for the proportion with less sea sand content (less chloride ion content). However, the chloride ion mainly reacts with metakaolin to produce Friedel's salt (Wang et al., 2013), thereby affecting the mechanical properties of SECCs. In the present study, the curing time was short, and Friedel's salt content generated by the reaction was small. Hence, it was difficult to observe the macro effect of metakaolin.
Variance Analysis
The variance analysis for the SECC compressive strength test results is presented in Table 12. As illustrated in Table 12, the content of both sea sand and fly ash had influences on the SECC compressive strength. Fly ash was still the main factor that affected the compressive strength, and the influence of sea sand content was less than that of fly ash. Furthermore, there was no significant difference in the influence of metakaolin content on compressive strength. The variance analysis result was consistent with the range analysis result. Hence, the orthogonal analysis result is reasonable.
CONCLUSION
In the present study, range and variance analysis, XRD analysis, and SEM analysis were applied to investigate the SECC uniaxial tensile and compressive performances. The following conclusions were obtained. The ultimate tensile strain of SECCs was 0.38-2.40%, the ultimate tensile stress of SECCs was 2.56-4.58 MPa, and the compressive strength of SECCs was 37.81-64.66 MPa. The content of fly ash and sea sand had a highly significant positive effect on ultimate tensile strain. Furthermore, the fly ash content had a highly significant negative effect on compressive strength, and the ultimate tensile stress and compressive strength were less affected by the sea sand content.
For the parameter ranges, the content of fly ash and sea sand almost completely determined the ductility of SECCs. The increase in fly ash content significantly increased the ductility of SECCs. The decrease of cement content and the increase of fly ash content led to the decrease of SECC compressive strength. Furthermore, the ultimate tensile strain decreased with the increase in sea sand content, and by observation and calculation using a digital microscope, sea sand and silica sand had similar sphericity and roundness, except for the difference in particle size and chemical composition. Moreover, the fracture energy test revealed that the matrix fracture toughness of SECCs significantly increased with the increase in sea sand content.
Although the content of metakaolin had minimal influence on the basic mechanical properties of SECCs in the test, which could be attributed to the short curing time and the incomplete reaction of C3A in metakaolin with the chloride ions in sea sand, Friedel's salt could be observed in the XRD analysis, and its content increased with the increase in sea sand content (chloride ion content). Therefore, metakaolin is indispensable for binding chloride ions in the SECC matrix to improve durability.
In the SECC uniaxial tensile test, it could be observed from the SEM test that the fiber failure mode was tensile failure. Finally, the mixed proportions of group 3 (FA/C was 3.2, the sea sand content was 0.6, and the metakaolin content was 0.2) and group 8 (FA/C was 1.2, the sea sand content was 1, and the metakaolin content was 0.16) were recommended for SECCs, based on the results of the present study.
In the present study, more economical, site-specific, and durable SECC proportions were obtained while maintaining the ductility of SECCs at more than 2%. Depending on the engineering application, the proportion of high ductility (ultimate tensile strain 2.4%, compressive strength 37.95 MPa, and ultimate tensile stress 3.48 MPa) or high compressive strength (compressive strength 64.66 MPa, ultimate tensile strain 0.42%, and ultimate tensile stress 3.36 MPa) can be selected, which can meet the tensile strain capacity, corrosion resistance, or strength requirements of practical constructions.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, and further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
QY designed the experiment and was involved in data analysis. QY and XT investigated the data and wrote the manuscript. QY, ZL, CL, and XT performed the methodology. QY, ZL, CL, and XT revised the manuscript. XT was involved in funding acquisition and project administration. | 2021-08-02T13:24:13.994Z | 2021-08-02T00:00:00.000 | {
"year": 2021,
"sha1": "1aafb367c987d17bf9a07bf34e4bb10f4f761a9d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmats.2021.711872/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "1aafb367c987d17bf9a07bf34e4bb10f4f761a9d",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
247335371 | pes2o/s2orc | v3-fos-license | Episodic ejection from a low-mass young stellar object traced by H$_2$O masers
We present the project of a VLBI study of the 22 GHz H2O maser in a prototypical low-mass protostellar system IRAS 16293-2422. The observation was conducted to characterise the cause of the newly discovered enhanced maser activity in the source and to study the source's ejection behaviour as traced by maser emission. Single-dish monitoring and analysis of archival data indicate that the activity of the H2O maser in IRAS 16293-2422 has a cyclic character and traces episodic ejection events in the source. A new maser flare was recently discovered in a spectral feature that has never shown such a significant increase in flux density before. The flare of this feature seems to indicate the beginning of a new cycle of activity.
Masers to trace star formation
The life cycles of stars follow patterns based mostly on their initial mass. Comparison of the physical parameters of protostar envelopes of different masses hints that the transition between them seems to be smooth, and the formation processes and triggers are similar [1]. Disk-mediated accretion accompanied by episodic accretion bursts is thought to be a common mechanism of star formation across the entire stellar mass spectrum. However, the accretion process itself is poorly understood due to scarce observational evidence.
Much progress in the study of high-mass star formation has been made recently with the study of masers, which have proven to be a powerful tool for locating massive young stellar objects (YSOs) undergoing accretion events (see [2], and other publications of the Maser Monitoring Organisation (M2O) -a global co-operative of maser monitoring programs).
During accretion events in massive YSOs, multiple maser species and transitions flare (e.g. [3]). Maser action occurs only over certain ranges of physical conditions (e.g. [4]), hence the spatial distribution of masers can reveal the temperature, density and radiation enhancements in the region, while the kinematics of the maser spots can indicate gas motions.
According to theory and observation [5], there is a correlation between ejection and accretion rates, so major outflow events can be linked to major accretion events (bursts). Under this hypothesis the accretion history of a star can be inferred by its ejection history traced by symmetric pairs of ejection bow shocks at ever increasing distances from the central object [6]. The morphology of the jets/outflows traced by H 2 O masers may indicate timescales of the accretion process, reveal the collimation properties of the jet and the density of the ambient material.
However, while the study of maser emission has shown excellent results for massive YSOs, adaptation of the approach for low-mass protostars is challenging. Low-mass YSOs show much less variety of associated maser species, and the detected masers are highly variable and (in most cases) weak.
IRAS 16293-2422
IRAS 16293-2422 (IRAS 16293, hereafter) is a low-mass protostellar system, showing bright and active H 2 O maser emission. The source is known for its very rich chemistry, several complex molecules have been discovered in it for the first time ever (e.g. [7,8]).
The source is a binary system, usually referred to as sources A and B [9]. The sources A and B have properties of Class 0 objects. Component B appears to be a single source, while component A was resolved into two subcomponents: A1 (an ionized region associated with an outflow) and A2 (a protostar powering a radio jet) [10,11]. An almost edge-on rotating disk around source A and signs of infall around source B were found in [12]. The latest multi-epoch continuum VLA observations of the system confirm that A2 is a protostar driving episodic mass ejections [13].
The source A2 shows a highly variable but bright 22 GHz H2O maser with a flux density of tens of kJy during flares and no less than 100 Jy in a stable state [14]. The flaring features appear alternately at blueshifted and redshifted velocities with respect to the cloud velocity of ∼4 km s⁻¹, typically in the velocity range of −5 to +10 km s⁻¹. Recently a new maser flare of tens of kJy was detected in a blueshifted feature at the velocity of −1.5 km s⁻¹; this is the brightest flare ever detected in this spectral feature.
VLBI studies of the spatial distribution of the 22 GHz H 2 O maser emission in the source showed that there are two separate clusters: maser spots with blueshifted velocities are found to the north, and redshifted spots are found to the south [15][16][17]. Most of the papers propose to associate the detected H 2 O maser clusters with outflows (e.g. [16]). Single-dish monitoring of maser emission in the source suggests that the H 2 O maser flares are most probably linked to motions of shocked gas [14]. Our analysis of the VLBI data available in literature indicates that H 2 O masers in IRAS 16293 could trace shocks excited by a precessing outflow system (see Fig.1). Although the general segmentation of maser emission into two spatial clusters persists from epoch to epoch, the distance and alignment between them vary. The elongated morphology and bipolar outward motion of water masers suggest that they are associated with an ejection event from the YSO. Particularly noteworthy is the fact that the distance between clusters increases. There is a correlation between kinematic age and outflow length (the angular separation between the lobes) of H 2 O maser jets: very young bipolar H 2 O maser jets/outflows are very compact.
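The quoted correlation rests on simple kinematics: dividing half the angular separation between the two maser clusters by their mean outward proper motion gives a dynamical age estimate for the ejection. A minimal sketch with made-up numbers, not the measured values for IRAS 16293:

# Kinematic (dynamical) age estimate for a bipolar maser outflow:
# t ~ (half the cluster separation) / (mean outward proper motion).
# Values below are illustrative placeholders only.

sep_mas = 120.0        # angular separation of the two clusters, mas
mu_mas_per_yr = 2.0    # mean outward proper motion per lobe, mas/yr

t_kin_yr = (sep_mas / 2.0) / mu_mas_per_yr
print(f"kinematic age ~ {t_kin_yr:.0f} yr")  # -> ~30 yr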
Note that observation of [18] was made during a quiescent state of the maser and blueshifted features were not detected. Monitoring of [14] indicated that the activity of the H 2 O maser in IRAS 16293 has a cyclic character, and the period of 2006-2008 was one of the longest maser emission minima. Thus, it is possible that the shocks traced by the H2O maser and presented in [15,16] and [17] no longer exist, and with VLBI observation of the maser emission associated with the new flare, we can catch the launch of young ejection bow shocks.
The EVN observation
The EVN observation of the source was conducted on March 11, 2021. The telescope array consisted of 19 antennas, including the longest NS baselines to the 26-m Hartebeesthoek telescope and the longest EW baselines to the KVN telescopes (see the obtained UV-coverage in Fig.2). The achieved spatial resolution was ∼1 mas.
By the date of the EVN observation, the flare flux density of the source had decreased from tens of kJy to ∼2 kJy (Fig.3). Nevertheless, such a high brightness of the source made it possible to study in detail the spatial distribution of the maser features. All of the spectral features were detected on the EVN baselines with a high signal-to-noise ratio. A detailed analysis of the obtained data will be presented in subsequent publications. | 2022-03-10T16:10:26.807Z | 2022-03-08T00:00:00.000 | {
"year": 2022,
"sha1": "6e39e3b3dd7a2e2a548ca4efdeecc7bd3f2ac135",
"oa_license": "CCBYNCND",
"oa_url": "https://pos.sissa.it/399/021/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "497fe621def0a7757b3208b866ecc60ad9214ea0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
211562496 | pes2o/s2orc | v3-fos-license | EVALUATION OF OVERLOAD ON FAMILY CAREGIVERS OF INDIVIDUALS WITH SCHIZOPHRENIA AVALIAÇÃO DA SOBRECARGA DE FAMILIARES CUIDADORES DE INDIVÍDUOS COM ESQUIZOFRENIA EVALUACIÓN DE LA SOBRECARGA DE LOS CUIDADORES FAMILIARES DE PERSONAS CON ESQUIZOFRENIA
Objective: to evaluate the overload (objective and subjective) experienced by family caregivers of individuals with schizophrenia. Method: this is a quantitative, descriptive study with 15 family members who directly or indirectly cared for the individual with schizophrenia in a Psychosocial Care Center. Data was analyzed using nonparametric statistics. Results: there was objective overload of family members in the preparation of meals (60%), patient follow-up in transportation (66.7%), administration of patient money (80%), follow-up at medical appointments (60%), supervision of problem behaviors (33.3%), suicidal behavior (33.3%) and excess cigarettes, food and liquids (33.3%). Conclusion: the main objective and subjective overloads experienced by caregivers of individuals with schizophrenia were evaluated, thus contributing to the reflection of the services on necessary interventions. Descriptors: Mental Health; Schizophrenia; Family Relationships; Caregivers; Mental Health Services; Psychosocial Impact.
Schizophrenia is known to affect all peoples and cultures with an estimated prevalence of 1% of the population. It is the main form of psychosis due to its frequency and clinical importance and is characterized by the presence of typical symptoms, such as hallucinations and delusions; disorganized thinking; bizarre behavior and inappropriate or dull affects. It occupies a place among the most incapacitating diseases, producing a high cost for society and becoming an overload for the affected individual, family and community. 1,2 It is considered that mental health care in Brazil has been undergoing several transformations resulting from the Brazilian Psychiatric Reform, initiated in the late 70's and legitimized by Law No. 10.216, of April 6, 2001. Among its strategies, the expanded conception of health, territory-based care, intersectoriality, networking and the focus on the family stand out. As a result, the responsibility for the care of people with mental disorders has also shifted to the family and the home. 3 In this sense, the family is requested to be a partner of the new services and to reaffirm itself as one of the possible spaces for the provision of care, starting to be conceived as necessary and allied with their family member with mental disorder. It is known that the family lives and suffers intensely due to psychological distress, feelings of distress, isolation, depression, anguish, fear, guilt and chronic sadness that can lead to overload. 4,1 It is considered that individuals with schizophrenia may present functional impairment and difficulties in performing activities of daily living (ADL) and instrumental activities of daily living (IADL), such as eating, grooming and taking care of personal hygiene, going to the toilet, transport, shopping, meal preparation and medication management, often requiring a caregiver, who is often a family member, to assist with their activities. 5,6 It is reaffirmed that the presence of people with mental disorders in the family and home generates a heavy overload for all its members. This overload is seen in its objective and/or subjective extension. 4 It is noteworthy that objective overload refers to changes that usually happen in the family member's routine due to the restriction of their social and family life, financial expenses and losses, and supervision of problematic behaviors that hinder their life projects. Subjective overload includes concerns, discomforts in the task of caring and concerns arising from care. 7 In clinical nursing practice, caregiver overload is identified through the nursing diagnosis "Tension of the caregiver role", defined by the difficulty in playing the role of family caregiver. Becoming a caregiver, in some situations, means taking care of the other and neglecting oneself. This situation can be aggravated as the mental disorder is usually not transient and has a long duration, requiring permanent adaptations. 8,9 Considering the transfer of responsibility for mental health care, previously provided entirely by psychiatric institutions, to family members, it is understood that this process has generated several changes in the dynamics of families and in their way of functioning, being very commonly accompanied by overload and suffering. It is believed that research on this reality can produce contributions to health services by facilitating the implementation of care for relatives of people with mental disorders.
• To evaluate the overload (objective and subjective) experienced by family caregivers of individuals with schizophrenia. This is a quantitative, descriptive study at a Psychosocial Care Center (CAPS) II in a municipality in the northern region of Minas Gerais (MG), Brazil. CAPS is a service that is part of the Specialized Psychosocial Care component of the Psychosocial Care Network (RAPS), which aims to assist people with severe and persistent mental disorders and those with needs arising from the use of crack, alcohol and other drugs in its territorial area, under intensive, semi-intensive and non-intensive treatment. 10 The convenience sampling technique was used; thus, 15 family caregivers of patients diagnosed with schizophrenia participated. Family members of both sexes, aged over 18 years, who directly or indirectly cared for the patient and consented to participate in the study were included. The number of participants was defined with the clinical staff of the service, consisting of family members of patients who were admitted during the second semester of 2017.
Data was obtained through a family member description questionnaire that provided information from the caregiver and application of the Family Overload Rating Scale (FBIS-BR).
The FBIS-BR scale was used to assess the degree of objective and subjective overload of family members of psychiatric patients, consisting of the following subscales: A) Assistance in the patient's daily life; B) Supervision of the patient's problematic behaviors; C) Financial expenses; D) Impact on daily family routines; E) Concerns with the patient. We evaluated the questions on the scale that referred to information arising from the last 30 days. The type and value of patient expenses, their contribution and permanent changes in the caregiver's life were analyzed.
Objective overload was assessed through the frequency of care and supervision by the family member in daily care of the patient. It was indicated how consistently the family member performed tasks for the patient, dealt with problematic behaviors and experienced changes in the routine of his or her life. A Likert-type scale was used, with the following response alternatives: 1 = at no time, 2 = less than once a week, 3 = once or twice a week, 4 = three to six times a week and 5 = every day.
In turn, subjective overload was assessed through the degree of discomfort felt by the family member when performing the role of caregiver and through their concerns with the patient. The degree of discomfort was analyzed using the response options: 1 = not at all, 2 = very little, 3 = a little and 4 = a lot. For the assessment of concerns, the scale response alternatives contain five points, where 1 = never, 2 = rarely, 3 = sometimes, 4 = often, 5 = always or almost always.
It is known that the FBIS-BR scale was applied by reading the questions to the family caregiver. The instrument was applied individually and without the patient's presence in a place determined by the caregiver: at home through home visits, at work in a private place provided by the caregiver or at CAPS, in a room provided by the institution.
Data was analyzed using the Statistical Package for the Social Sciences, version 20.0® for Windows. A descriptive analysis of the results regarding the family member description questionnaire was performed. Family overload was verified through the overload scale for family members of psychiatric patients, and the percentage of answers for each subscale item was analyzed, considering that: for questions related to objective overload, responses 4 and 5 indicate high overload; for subjective overload of the main caregivers, responses 3 and 4 indicate high overload; and in the assessment of the family member's concerns about the patient, responses 4 and 5 indicate high overload. Subscale C does not enter the percentage calculations; it provided data on expenses with the patient and information on the economic situation of the family group.
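A minimal sketch of the percentage computation described above follows; it counts, per item, the share of caregivers whose responses fall in the high-overload band (4-5 for the frequency and concern items, 3-4 for the four-point discomfort items). The response lists are made-up examples, not the study's records:

# FBIS-BR scoring sketch: percentage of caregivers whose answer to an
# item falls in the "high overload" band. Frequency/concern items count
# 4-5 as high; the 4-point discomfort items count 3-4 as high.
# Data below are made-up examples, not the study's records.

def pct_high(responses, high=(4, 5)):
    return 100.0 * sum(r in high for r in responses) / len(responses)

meal_prep = [5, 4, 1, 5, 2, 4, 5, 5, 3, 4, 1, 5, 4, 2, 4]       # 15 caregivers
discomfort_money = [3, 4, 2, 3, 4, 1, 3, 4, 3, 2, 4, 3, 1, 3, 4]

print(f"meal preparation, high objective overload: {pct_high(meal_prep):.1f}%")
print(f"managing money, high subjective overload: "
      f"{pct_high(discomfort_money, high=(3, 4)):.1f}%")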
The research followed Resolution No. 466, of December 12, 2012, in all its aspects, and was approved by an independent Research Ethics Committee under opinion number 2.255.076.
Family caregivers of individuals with schizophrenia were found to be male and female, with the majority being male (73.3%). Their ages ranged from 39 to 70 years, with a predominance of 64 to 70 years. The siblings predominated in relation to the degree of kinship, representing 40% of respondents, followed by parents 26.7% and of these, the majority were the main caregivers (93.3%).
Regarding financial expenses, higher expenses were identified with transportation, food, small amounts of money for minor expenses and medication, and lower expenses with mental health care and other items; 73.3% of the patients contributed to the expenses, 66% with the value of a minimum wage, while the average expense of family caregivers with the patient was $468.4. When questioned about how often expenses were significant, 6.7% answered rarely, 20% sometimes, 33.3% often and 40% answered always or almost always. Table 1 section A presents the objective overload of family members who provide assistance in daily life, and these tasks were performed more than three times a week by the caregiver. Caregivers were found to be responsible for preparing meals (60%), accompanying the patient in transportation (66.7%), administering the patient's money (80%), and following up on medical appointments (60%). In assessing subscale A (Table 2), there was a greater subjective overload on those who administered the patient's money (66.7%), those who accompanied the patient in transportation (60%) and those who accompanied the patients in medical appointments (53.3%).
Objective overload was evidenced in relation to the supervision of problematic behaviors, starting from section B (Table 1) in which the majority occurred for the factors related to this type of behavior (33.3%), excessive demand for attention (33.3%), suicidal behavior (33.3%) and excess cigarettes, food and liquids (33.3%). Subjective overload was verified through subscale B (Table 2). Regarding the supervision of problem behaviors, it was higher for aspects related to problem behavior (80%), disturbing people at night and aggressive behaviors (53.3%), suicidal behavior (53.3%) and excessive attention demand (40%).
Section D (Table 1) identified the impact on daily care routines: 40% of family caregivers had cancellations or delays in appointments, 46.7% had changes in social and leisure activities and 53.3% had to change their work and home routines to take care of the individual with schizophrenia. Table 3 shows the concerns of family caregivers about individuals with schizophrenia, according to the analysis of subscale E, in which family members worried about the patient, leading to greater overload. The biggest concerns were physical safety (66.7%), the future (60%) and finances (40%).
DISCUSSION
It is emphasized that schizophrenia is a serious mental disorder that usually develops in young adults, causing changes in the structure of their lives and of the individuals with whom they live, most prominently their family. 11 Most caregivers and family members of individuals with severe mental disorders, especially schizophrenia, are not prepared for care. This can be explained by the lack of knowledge about the disorder, the scarcity of resources in the community and the lack of knowledge about how to cope with crisis situations. Thus, there is both objective and subjective overload, which can also cause illness in caregivers. 12 The results of the study showed that most caregivers were elderly and male. Thus, new roles are redefined with the presence of men in care: in addition to contributing financial resources, the man begins to assume the caregiver role previously assigned to women. 8 This datum differs from other studies, which found a predominance of female caregivers. 13,14,7 The number of elderly people responsible for the care of family members with schizophrenia was also notable. The fact that all patients with schizophrenia lived with the family member who was the main caregiver contributes to the increased frequency of tasks and assistance provided to the sick individual. It is often the family who is the primary caregiver of patients with schizophrenia and is responsible for ensuring their well-being. However, the family is also the one who commonly deals with crises and problematic behaviors, while also being a source of social support and financial aid. 9 According to family members, most individuals with schizophrenia contributed to the expenses, which is explained by the receipt of a social benefit. The minority who could not contribute had their expenses funded by family members, with the largest expenses being transportation and food. With regard to expenses, 40% of caregivers reported that they were always or almost always significant, i.e., the individual with schizophrenia demands considerable financial resources; the lowest expenses were with mental health care, possibly because this service is offered by the public network.
Objective overload prevailed in the care activities of the patient's daily life, which showed high objective overload, whereas the supervision of problematic behaviors and the impact on daily care routines revealed mild overload. Assistance was most frequent, in descending order, for managing the patient's money, transportation, medical appointments, meal preparation and medication. These data corroborate the study by Reis et al. 7
Care involving money, transportation and medical appointments also showed high subjective overload. The precariousness of the social and cognitive skills affected by schizophrenia is known to produce poverty and joblessness, creating a demand for the income support offered by social assistance; these benefits are generally managed by family members. 15 Most caregivers feel uncomfortable living with the judgment of others about the correct or inappropriate use of the money of the individual with schizophrenia. 16 The limitations often present in individuals with schizophrenia create a need for constant supervision of these resources and of routine care, generating objective and subjective overload. Thus, dependence on others to take medication, prepare meals and be accompanied in transportation and consultations points to an objective and subjective overload on the caregiver, considering that these demands produce significant changes in the family context and way of life. 14 Regarding treatment, the caregiver may report distress about the correct administration of the drugs, because of the common fear that the individual with schizophrenia may make inappropriate use of them, becoming a social risk. 17 Non-adherence to medication may lead to crises or worsening of symptoms, which may lead to readmissions. About 50% of individuals with schizophrenia are known not to adhere to medication, which worsens the prognosis, increases hospitalization expenses and increases the risk of self- or hetero-aggressiveness. 18 The analysis of subscale B showed that the prevalence of objective overload was lower in relation to problematic behaviors, whereas subjective overload was high; overload related to problematic behavior, disturbing people at night, aggressive behaviors and suicidal behavior stood out. Subjective overload was also evident in relation to concerns about physical safety and the future. These data are corroborated by the study by Reis et al. 7 The family member worries that, in his or her absence, there will be no one to take care of the individual, who could be forced to live on the streets and without care. 12 Most people do not know how to act in the face of the strange and bizarre behaviors common in schizophrenia, which gives rise to concern, helplessness, fear, anxiety and anger, as well as doubts about how to act. 18-20 The family manifests difficulties in dealing with some behaviors, such as hetero-aggressiveness; thus, living with the individual with schizophrenia is marked by feelings of insecurity and discomfort in the face of unforeseen actions. 21 These data alert the mental health service network to the need to function as a support for the dimensions that cause greater overload to family members. It is therefore important to help family members cope with problematic behaviors through interventions and the management of these difficult behaviors. 7 Behavioral changes presented by individuals with schizophrenia, such as impulsive acts, strangeness, psychomotor agitation or slowness, bizarre behaviors, movement rigidity and depressive symptoms, added to idleness, are directly related to caregiver hypervigilance, affecting their quality of life and mental health. 15
Care for individuals with schizophrenia can represent, besides a change in the family routine, a change in the family's plans and projects; the diagnosis thus becomes the central element around which the family moves. Withdrawal, and sometimes exclusion, of the family from social contact is very frequent, and the unpredictability of the behaviors present in schizophrenia is one of the factors that generate this condition. 5 Care for individuals with schizophrenia changes the daily lives of family caregivers, limiting their opportunities for employment, leisure and rest, and producing emotional distress as a result of the overload and of having no one with whom to share the care activities. 13 This study found that family members are satisfied with the quality of the treatment received: individuals with schizophrenia enjoy attending the CAPS, receive medication and are welcomed, which brings relief and satisfaction to the family caregiver. The same satisfaction was not reported in relation to primary health care.
The CAPS comprises a multidisciplinary and interdisciplinary team, offers day care in a welcoming environment, treats people in crisis and attends to cases of severe mental disorder, aiming to avoid hospitalization. Among the therapeutic activities developed at the CAPS are therapeutic workshops, group or individual psychotherapy, artistic activities, guidance and monitoring of medication use, as well as family care. One of the service's objectives is to promote the reintegration of the person with a mental disorder into the community and the family environment. 9 This highlights the importance of the multidisciplinary team in preventing caregiver illness. It is up to the nurse to provide information about schizophrenia and its treatment, to encourage adherence to treatment and to support families through listening, assisting in times of crisis and encouraging the family during the rehabilitation process; this action helps the patient and family member to identify and manage the demands of the disease. 17 Among the multidisciplinary team, the nurse is the professional who maintains direct contact with the patient and should therefore understand the disease process of schizophrenia and support the family or caregiver, ensuring the effectiveness of care for individuals with schizophrenia and the quality of treatment by the mental health team. 17 Nurses working in mental health care need to be able to recognize the family member's overload, providing satisfactory reception and developing strategies and interventions aimed at family caregivers; the role of the professional nurse in the family approach in the context of mental health is thus highlighted. The identification of the nursing diagnosis 'Caregiver role strain', supported by instruments that assess objective and subjective overload, allows the nurse to take care of those who care, enabling interventions that reduce stress, depression and fear and improve the caregiver's quality of life. 8 The whole staff should offer humanized assistance to users, providing comprehensive mental health care and ensuring their rehabilitation and psychosocial reintegration. 11 The importance of holistic care is thus emphasized, with a focus on integrality expressed through networking and the articulation between its devices; interdisciplinarity; intersectoriality; contact and welcoming; therapeutic listening; psychosocial rehabilitation; the resources of therapeutic and educational workshops; the ambience; and the incorporation of the subjective component and the expansion of the clinic. 3
Most caregivers of individuals with schizophrenia had objective and subjective overload due to several factors. The evaluation and characterization of the overload experienced by family caregivers of individuals with schizophrenia proved essential, since the family is the main provider of continuous care and is legally responsible for the individual with schizophrenia. The family needs to be prepared and to receive satisfactory support to provide adequate care, and must be instructed by professionals to understand and deal with changes in the family member's behavior.
Thus, the role of health services in welcoming, accompanying and supporting these families is highlighted.
In view of this, the importance of seeking new mental health care strategies directed at the overload of family caregivers is highlighted. Welcoming, listening and guidance are fundamental so that family members can express their difficulties and be supported. It is important to work with a broad, family-focused approach, allowing the professional to perceive the main overloads experienced by caregivers of individuals with schizophrenia and thus enabling the implementation of the necessary interventions.
The results found are similar to those reported in the literature on the subject. Further qualitative studies are therefore suggested, in order to explore the discourses, singular experiences and feelings of family caregivers of individuals with schizophrenia. | 2020-02-28T20:08:50.507Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "1e250a49d0cd650032a715d479a9c19bf6ded51f",
"oa_license": "CCBY",
"oa_url": "https://periodicos.ufpe.br/revistas/revistaenfermagem/article/download/243361/34306",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e250a49d0cd650032a715d479a9c19bf6ded51f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
89604192 | pes2o/s2orc | v3-fos-license | Green Processes for Green Products: The Use of Supercritical CO2 as Green Solvent for Compatibilized Polymer Blends
Polycaprolactone-g-glycidyl methacrylate (PCL-g-GMA), a reactive interfacial agent for PCL-starch blends, is synthesized using supercritical carbon dioxide (scCO2) as reaction medium and relatively high molecular weight PCL (Mw = 50,000). Higher GMA and radical initiator intakes lead to higher functionalization degrees (FD) of the PCL-g-GMA samples. A mathematical model is developed to describe the correlation between monomer and initiator intake and FD values. The model shows an excellent R2-value (0.978), which implies a good fit of the experimental data. Comparison of this model with a similar one for the reaction in the melt clearly indicates a better reaction efficiency in scCO2. Furthermore, GPC results show that less degradation occurs for samples made in scCO2. Finally, the use of the PCL-g-GMA made in scCO2 as interfacial agent in ternary PCL/starch/PCL-g-GMA blends results in better mechanical properties than those obtained with the same graft copolymer prepared in the melt.
Introduction
Plastic products, with a global production of approximately 335 million tons annually and an annual growth of approximately 2.5% (in 2016), represent the largest field of application for crude oil outside the energy and transportation sectors. The plastic industry is so dependent on oil that an increase in the price of crude oil has a direct negative effect on the plastic market. Another important issue is the accumulation of plastic waste, which is not easily degraded [1]. A lot of research has been performed to overcome these problems, the general field of "bioplastics" being a paradigmatic example.
Starch is a biopolymer already widely researched for application in bioplastics. However, direct use of starch usually results in rather poor mechanical properties (e.g., tensile strength) and in relevant drawbacks coupled with the processability of the corresponding products [2]. A popular approach to solve these problems involves the blending of starch with a synthetic polymer such as polycaprolactone (PCL). The blend is expected to have good mechanical properties while maintaining its biodegradability [3,4]. However, phase separation takes place and a coarse morphology is obtained when simply blending starch with PCL [3,5]. In this context, the use of an interfacial agent represents a popular choice in order to improve the adhesion at the interface between the two polymers [4,[6][7][8][9][10].
Generally, interfacial agents based on chemically modified PCL itself have been used to improve the system. The interfacial agent can be synthesized by functionalizing PCL with different monomers (functional groups). Another option is to graft PCL to starch by in situ ring-opening polymerization of ε-caprolactone in the presence of starch particles and a catalyst [4,7,8]; the latter route is considered more complex than the functionalization of PCL. Monomers such as GMA [9,11-13], maleic anhydride (MAH) [9,10,14-16], pyromellitic anhydride [6], diethyl maleate (DEM) [13] and acrylic acid (AA) [10] have been successfully grafted onto the PCL backbone for compatibilization purposes. These studies also concluded that functionalization with GMA results in the highest efficiency. In general, the grafting reaction of GMA is performed in the melt because of its simplicity. However, grafting in the melt also results in unwanted side reactions, such as PCL degradation. This problem can be solved by using supercritical carbon dioxide (scCO2) as grafting medium. Moreover, due to the plasticizing effect of scCO2, a melting point depression is observed and thus a lower processing temperature can be applied. The supercritical fluid is also thought to act as a "transporting agent", improving mass transfer during mixing and leading to higher conversion [17]. Several monomers, such as GMA [18], MAH [19], methyl acrylate [20,21] and methyl methacrylate [22], have already been successfully grafted onto polypropylene using supercritical carbon dioxide as reaction medium. However, a systematic study linking the chemical composition of the feed to the reaction course (e.g., to the obtained FD values) is still missing in the open literature. To the best of our knowledge, the use of scCO2 as grafting medium for polyesters (PCL in this case) has also not yet been described, even if spectroscopic evidence clearly suggests a relevant solubility of scCO2 in this kind of polymer [23].
This research focuses on the development of interfacial agents (PCL-g-GMA from high molecular weight PCL) using scCO2 as grafting medium. The obtained graft copolymers are then used in PCL/starch blends and their influence on the mechanical properties is established. In both cases (functionalization and blending), the corresponding melt-based processes/products are taken as reference to establish the influence, if any, of scCO2 on the reaction efficiency and the final product properties.
Synthesis and Purification of PCL-g-GMA
Melt grafting. The reactive compatibilizers were prepared in a Brabender (Plasticorder PL2000) batch kneader with a chamber volume of 35 cm3. The intake volume of reagents was set at 24 cm3, about 69% of the chamber volume, to ensure proper mixing. The kneader was heated to 130 °C with a rotation speed of 80 rpm. PCL was added when the required temperature was reached. A solution of BPO in GMA was added dropwise over a period of 5 min. The materials were mixed for another 5 min before the chamber was opened to collect the samples. An overview of the prepared samples is provided in Table S1 (Supplementary Materials).
Grafting in scCO2. The interfacial agent was prepared in a Parr reactor (chamber volume 100 cm3) equipped with a heating mantle and a turbine impeller (Figure 1). A compressor (LEWA) was used to bring the reactor to the desired pressure. PCL, initiator and monomer were weighed and placed in the reactor. Intakes and results for each experiment are given in Table 1. The reactor was flushed with nitrogen (10-30 bar) for 10 min to remove oxygen from the system. After that, the nitrogen was released and the CO2 valve was opened to pressurize the reactor to approximately 50 bar. The heating mantle was set to the desired temperature and, when this was reached, the reactor was pressurized to the desired value. A stirrer (operated at 900 rpm) was used to achieve better mixing. The reaction started when the desired temperature and pressure were reached. After a given reaction time, the stirrer and the heating mantle were turned off and the valve was opened to depressurize the reactor. The reactor was opened to collect the samples, which were immediately quenched in liquid nitrogen.
Purification of PCL-g-GMA
To remove the unreacted GMA monomer, decomposed BPO, and GMA homo-polymers, a further purification of PCL-g-GMA is necessary [13]. PCL-g-GMA (5 g) was dissolved and stirred for about 1.5-2 h in THF (100 mL). Methanol (400 mL) was then added for precipitation at 6-8 °C (overnight). The suspension was decanted and the solid product was dried in a vacuum oven (40 °C, 5 mbar) until constant weight [13].
Ternary Blend of PCL/Starch/PCL-g-GMA
The blends were prepared in a Brabender (Plasticorder PL2000) batch kneader (chamber volume 35 cm3). The intake volume was set at 24 cm3 to ensure proper mixing. The kneader was heated to 170 °C with a rotation speed of 80 rpm. PCL, starch and PCL-g-GMA were premixed in a beaker before being added to the mixing chamber. The mixture was blended for 15 min, followed by sample collection [9,13].
1H-NMR
To characterize the reactive interfacial agent samples, 1H-NMR measurements were performed. The 1H-NMR spectra were obtained using a 400 MHz Varian AMX Oxford NMR apparatus with CDCl3 (99.8%, Aldrich) as the solvent.
The FD was calculated as follows: the amount of moles of GMA attached to the PCL backbone was obtained by dividing the area of the peak of a characteristic proton belonging to GMA (-CH< proton of the epoxide ring at δ 3.2 ppm, A_monomer) by that of a characteristic proton of the PCL backbone (-CH2- protons at δ 4.0 ppm, A_polymer) [13,15,24]:

FD (mol %) = (A_monomer / A_polymer) × 100 (1)

Furthermore, the efficiency of the grafting process, E, is defined by comparing the experimental FD value with the maximum FD theoretically attainable on the basis of the feed composition:

E (%) = (FD_experimental / FD_theoretical,max) × 100 (2)
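A short numerical sketch of these two definitions (peak areas and feed value are illustrative; the plain area ratio follows the description above, without additional proton-count corrections):

```python
# Sketch of the FD and E calculations described above (illustrative peak
# areas; the plain area ratio follows the definition given in the text).

def functionalization_degree(a_monomer: float, a_polymer: float) -> float:
    """FD (mol %) from the ratio of the GMA epoxide -CH< peak (d 3.2 ppm)
    to the PCL backbone -CH2- peak (d 4.0 ppm)."""
    return 100.0 * a_monomer / a_polymer

def grafting_efficiency(fd_experimental: float, fd_theoretical_max: float) -> float:
    """E (%): experimental FD relative to the maximum FD allowed by the feed."""
    return 100.0 * fd_experimental / fd_theoretical_max

fd = functionalization_degree(a_monomer=0.055, a_polymer=1.0)  # -> 5.5 mol %
e = grafting_efficiency(fd, fd_theoretical_max=10.0)           # 10 mol % GMA in feed
print(f"FD = {fd:.1f} mol %, E = {e:.0f} %")
```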
Gel Permeation Chromatography
Gel permeation chromatography was used for molecular weight and polydispersity (PDI) determination. The samples were dissolved in THF (10 mg/mL) with toluene as flow marker. The analysis was performed on a Hewlett-Packard 1100 system equipped with three PL-gel 3 µm MIXED-E columns in series, operated at 42 °C with an eluent flow rate of 1.0 mL/min, and with a GBC LC 1240 RI detector. The average molecular weight was calculated using a calibration curve constructed from two known PCL samples.
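As an illustration of how number- and weight-average molecular weights (and hence the PDI) follow from a sliced chromatogram after calibration, consider the following sketch (all values hypothetical; the actual calibration curve was built from two known PCL samples):

```python
import numpy as np

# Hypothetical sliced GPC trace: RI signal h_i (proportional to mass
# concentration) at molecular weights M_i obtained from the calibration.
M = np.array([2e4, 4e4, 6e4, 8e4, 1e5])       # g/mol per slice
h = np.array([0.10, 0.35, 0.30, 0.18, 0.07])  # baseline-corrected RI heights

Mn = h.sum() / (h / M).sum()   # number-average: sum(h) / sum(h/M)
Mw = (h * M).sum() / h.sum()   # weight-average: sum(h*M) / sum(h)
print(f"Mn = {Mn:.3g}, Mw = {Mw:.3g}, PDI = {Mw / Mn:.2f}")
```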
Tensile Tests
To characterize the mechanical properties of the samples, tensile tests were performed using an Instron 4301 pulling bench in accordance with ASTM D1708. The dumbbell-shaped microtensile specimens were prepared using a Fontijne Holland (TH 400) hot press. From one pressing sequence, eight specimens (17.5 mm length, 4.4 mm width and 2.0 mm thickness) were obtained. The press temperature was 150 °C and a force of 150 kN was applied for 3 min. A water flow of 30% was applied to cool the mold until room temperature was reached, while the pressure was maintained. For every specimen, strain at break, stress at break and elastic modulus were measured at a pulling rate of 50 mm/min. The corresponding value for every blend was calculated as the average of six measurements, with the standard deviation taken as the absolute error of the average value.
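A minimal sketch of the averaging described above (hypothetical stress readings for six specimens):

```python
import statistics

# Hypothetical stress-at-break readings (MPa) for six specimens of one blend.
stress = [14.2, 13.8, 15.1, 14.6, 13.9, 14.4]
mean = statistics.mean(stress)
sd = statistics.stdev(stress)  # sample standard deviation, used as the error
print(f"stress at break = {mean:.1f} ± {sd:.1f} MPa")
```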
Scanning Electronic Microscopy (SEM)
SEM was performed by using a Jeol 6320 F Scanning Electron Microscope. Before analysis, the samples were covered with a palladium/platinum conductive layer of 20 nm thickness, created by using a Cressington 208 sputter coater.
Selective Solvent Extraction
To characterize, albeit indirectly, the chemical reaction at the interface between the -OH groups on the starch and the epoxide ones on PCL-g-GMA, we performed a selective solvent extraction in chloroform. Three grams of finely ground sample were extracted at room temperature for 48 h in 250 mL of chloroform. The remaining solid (insoluble fraction) was then filtered off, while the solvent was evaporated from the soluble one. Both fractions were then dried in a vacuum oven and weighed. As PCL and PCL-g-GMA are fully soluble in chloroform under these conditions, an amount of insoluble fraction higher than the corresponding starch content indicates the presence of PCL coupled to the starch phase.
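A worked mass balance under the stated assumption (PCL and PCL-g-GMA fully soluble in chloroform; all masses hypothetical) might look as follows:

```python
# Hypothetical mass balance for the selective extraction described above.
sample_mass = 3.0       # g of ground blend extracted in chloroform
starch_fraction = 0.30  # nominal starch content of the blend (30 wt %)
insoluble_mass = 1.1    # g recovered as the chloroform-insoluble fraction

expected_starch = starch_fraction * sample_mass  # 0.9 g of starch expected
coupled_pcl = insoluble_mass - expected_starch   # excess = PCL bound to starch
print(f"~{coupled_pcl:.2f} g of PCL appears coupled to the starch phase")
```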
Differential Scanning Calorimetry (DSC)
DSC measurements were performed using a Q1000 TA Instruments apparatus equipped with a TA Instruments DSC cooling system. Each sample was initially heated from 0 to 100 °C (heating rate 10 °C/min) to remove the thermal history of the material. The transition temperatures of each sample were then determined by first cooling the sample from 100 to 0 °C and subsequently heating it back to 100 °C (cooling and heating rates of 10 °C/min).
Functionalization Reaction of PCL-g-GMA
Sixteen different graft copolymers (PCL-g-GMA) were prepared by reacting GMA with PCL (Mw = 50,000) using AIBN as radical initiator. The obtained FD and E values were calculated from the NMR analysis (Table 1); the corresponding melt samples are listed in Table S1.
Table 1. Overview of experiments for the PCL-g-GMA (T = 97 °C, P = 90 bar, reaction time = 40 min).
The obtained results show that higher GMA intakes lead to higher FD values (Figure 2). Similar results are observed for GMA grafting onto PCL in the melt, even though another radical initiator was used [11,13,25]. This trend is commonly explained by the higher GMA concentration, which leads to a faster grafting reaction rate. In addition, GMA is usually present as a homopolymer grafted onto the PCL backbone [9,11]. This homo-propagation reaction (of the first GMA molecule grafted on the PCL) is usually invoked to explain the relatively high GMA consumption rate and FD values [11]. In our case, the increase in FD is only slightly affected by the AIBN amount (Figure 2), except at the lowest employed intake (0.6%). A possible explanation for this behavior is connected to the reaction mechanism, as already described in the literature and reported in Figure 3 [9,11]. After hydrogen abstraction from the PCL backbone by the primary radicals generated from the initiator, a PCL macro-radical is formed. This has four available reaction pathways: it can couple with low molecular weight radicals present in the system (resulting in a termination step), with a growing GMA chain (yielding a grafted product) or with another macro-radical (yielding a crosslinked product); furthermore, it can add to the GMA monomer, which can in turn propagate to yield the desired graft copolymer or give H-abstraction (not shown for brevity). Two parallel reaction pathways (i.e., grafting of a single GMA molecule followed by homopolymerization, or GMA homopolymerization followed by coupling with a macro-radical) yield in principle the desired product. However, the first is clearly favored from a statistical point of view, since the chance of recombination of a growing GMA chain with a macro-radical (in principle a second-order kinetic process) is supposedly smaller than that of the addition of a GMA molecule to the same macro-radical. Degradation reactions (not shown for brevity) might also take place [13,15].
At low monomer intakes (5% and 10%), the FD increases with the amount of initiator used (Figure 4). This can be easily explained by the increased number of radicals generated at high AIBN intakes, which in turn can easily add to GMA. The trend is not monotonous: the FD seems to level off (or only slightly increase) for AIBN intakes larger than 1.5 mol %; in this case, cage effects can be invoked to explain the trend [26]. For the highest monomer intake (15%), the FD can be considered constant and independent of the initiator amount. To the best of our knowledge, this has not been previously observed for functionalization reactions in scCO2. One might speculate that the relatively high concentration of GMA in a radical-rich environment leads to GMA homopolymerization with only a slightly detectable effect on the FD ratio. This is not surprising if one considers the relatively low chance of GMA homopolymer recombination with a macro-radical (vide supra).
The discussion above clearly highlights the synergistic effect of monomer and initiator intake in determining the final FD. The combined influence can be conveniently described by using a multivariable regression procedure. The obtained model (Equation (3)) assumes that FD = f(n_m, n_i), where n_m is the amount of GMA and n_i the amount of AIBN in the feed.
The model is statistically, although qualitatively, validated by the random distribution of the residuals as a function of the employed variables and in a normal probability plot (both not shown for brevity). A quantitative validation is obtained by the analysis of variance (ANOVA) [27]. This procedure consists of calculating the sum of squares (SS) for the model and for the error. When the relative degrees of freedom (DF) are known, it is possible to calculate the mean square (MS) for the model and for the error. From the latter value, the F-value and the p-value can be determined for the model; the latter is a measure of the statistical significance of the model. In the present case, the very low p-value implies that the model is statistically significant (Table 2). Furthermore, the obtained model correctly describes the experimental data (as testified by the R² and adjusted-R² values) and has a very good predictive ability (R²-PRESS), at least within the range of variables studied here. The obtained model can be conveniently visualized in a 3D plot (Figure 5), which highlights the dependence of the FD values on the feed composition, as previously discussed.
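The following sketch illustrates this type of multivariable regression with ANOVA-style validation (illustrative data; the exact functional form and coefficients of Equation (3) are not reproduced here, and a quadratic response surface is only one common choice):

```python
# Sketch of a multivariable regression FD = f(n_m, n_i) with ANOVA-style
# validation (illustrative data; not the coefficients of Equation (3)).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "nm": [5, 5, 10, 10, 15, 15, 5, 10, 15],               # GMA intake, mol %
    "ni": [0.6, 1.5, 0.6, 1.5, 0.6, 1.5, 3.0, 3.0, 3.0],   # AIBN intake, mol %
    "FD": [1.2, 2.0, 2.5, 3.8, 4.9, 5.1, 2.3, 4.0, 5.0],   # measured FD, mol %
})

# Ordinary least squares fit of a quadratic response surface in nm and ni.
model = smf.ols("FD ~ nm + ni + I(nm*ni) + I(nm**2) + I(ni**2)", data=df).fit()
print(model.summary())  # reports the F-value, p-value, R2 and adjusted R2
```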
Particularly interesting is the comparison between the process in the melt and the one in scCO2, which can be performed on the basis of the corresponding statistical models and 3D plots (Figure 6) [25]. One should be aware that this comparison is limited to the intake values that were experimentally tested in both processes (initiator from 0.6% to 1% and GMA monomer from 6% to 15%); outside this range, the behavior of the two reactions might well differ from the one presented above. The use of two different initiators (BPO for the melt, AIBN for scCO2) at two different temperatures might hinder a direct comparison. However, this has been overcome by employing appropriate reaction conditions, in particular a reaction time equal to 7 times the half-life time of the initiator at the given temperature (all initiator expected to be decomposed). In general, the scCO2 plot lies above the one for the melt process (except at low initiator and GMA intake) within the investigated range. The lack of data under scCO2 at relatively high GMA intakes (particularly above 15 mol %; see Table 1 as compared to Table S1) does not allow extending this conclusion to higher FD values. Nevertheless, within the investigated range, this demonstrates the higher efficiency (in terms of FD values) of the grafting reaction in scCO2 as compared to the one in the melt. From a scientific point of view, a preliminary explanation for the observed trend might be related to the improved diffusion of GMA and initiator in scCO2 as compared to the melt.
Effect of Feed Composition on Molecular Weight
GPC analysis was performed to investigate the correlation between the functionalization degrees of the reactive interfacial agents and the corresponding molecular weight. Overall, the Mn values are generally constant (Figure 7). This trend contrasts with that reported by C.-H. Kim et al., who observed an increase in molecular weight, probably related to chain extension [11]. In our case, no chain extension is observed but rather a significant decrease in Mn, coupled with a broader distribution (PDI values), at high AIBN (2.4 and 3.0%) and GMA (15%) intakes. Such slight degradation is probably due to the radical-rich environment, which could lead to chain scission. The Mn values are in general the result of two competing processes: grafting (and chain extension) on one side and degradation on the other. In the present case, these two effects balance each other at relatively low AIBN intakes, while degradation seems to prevail at relatively high AIBN concentrations.
Also in this case, comparison with the corresponding data for the process in the melt (Figures 8 and 9) yields several interesting conclusions. In general, samples prepared in the melt display only slight differences in the observed Mn and larger PDI values compared to those processed using scCO2 (see, for example, the PDI values at about 6 mol % FD). This effect is even more relevant at relatively large FD values, even if such high FD values (above 12 mol %) were not investigated (nor attempted) under scCO2. This difference might be caused by slower degradation reactions under scCO2, as already stated by several authors [17,21,22,26]. This might constitute a relevant issue when employing the prepared graft copolymers as interfacial agents in PCL-starch blends.
Thermal and Mechanical Properties for PCL-Starch Blends
The thermal properties of binary and ternary blends (PCL/starch/PCL-g-GMA) show a substantial invariance with respect to the GMA intake (on PCL-g-GMA) and starch content (See Supplementary Materials).
The mechanical properties of the ternary blends (PCL/starch/PCL-g-GMA) were compared using two interfacial agents with similar functionality (FD 6%) but prepared according to the two different processes (vide supra). We start by noticing that the observed trends as a function of the starch content (namely, a decrease in stress and strain at break, Figures 10 and 11) are in agreement with previous studies [12,13]. Compared to the binary blends, all compatibilized blends using interfacial agents prepared in scCO2 show improved mechanical properties (higher modulus and stress at break, Figures 10 and 12), while the strain at break remains factually the same (Figure 11). On the other hand, blends with interfacial agents prepared in the melt show different trends: stress and strain at break values equal to those of the corresponding binary blends are observed at 10% and 20% starch intake, while lower values are detected at 30 wt %. Finally, the modulus of these blends is not affected by the starch addition and retains a value similar to that of virgin PCL. SEM images of all blends, independently of the PCL-g-GMA intake and the preparation method (Figure S1), display factually the same morphology, with a constant average starch particle size and distribution. This is not surprising considering that starch, used here without plasticizer, remains solid at the mixing temperature. One might speculate that a higher PCL-g-GMA intake results in better interfacial adhesion (lack of voids at the interface in Figure S1); however, this is factually impossible to quantify on the basis of the SEM images alone.
A comparison of the mechanical properties as a function of the process employed for the compatibilizer preparation (melt versus scCO2) shows that, for the melt-prepared compatibilizer, the stress and strain at break are clearly lower at a starch intake of 30% (Figures 10 and 11). The same is true for the modulus (Figure 12), except for the blends with 30 wt % starch (similar values). These results could be preliminarily explained by the (slight) difference in average molecular weight (Figure 8). However, it is debatable whether such a small difference could be the cause of this dramatic improvement in mechanical properties. Another possible reason might lie in different topological characteristics of the two graft copolymers, as already suggested in our previous work [13]. To gain deeper insight into the characteristics of the two graft copolymers (PCL-g-GMA from the melt and from scCO2), GPC measurements were carried out on the GMA homopolymers (polyGMA) recovered from the purification of PCL-g-GMA. A clear difference in the degree of polymerization (DP) was observed, with polyGMA from scCO2 displaying shorter chains (scCO2 DP = 9 vs. melt DP = 11). By assuming that the chain lengths of the polyGMA grafted on the PCL backbone are proportional to those formed as by-products of the grafting process, one might speculate that, at equal FD (as in this case), the PCL-g-GMA prepared in scCO2 carries a larger number of shorter GMA chains grafted onto the PCL backbone. This would in turn result in an easier (from a steric hindrance point of view) reaction of the GMA groups with the -OH ones on the surface of the starch particles [13]. Although speculative in nature, such a hypothesis has also been formulated in connection with polymer blends comprising other polymeric components [28].
Independently of the exact mechanism, and from a purely applicative point of view, one must stress the slightly superior performance of the scCO2-prepared compatibilizer with respect to its melt counterpart in terms of stress and strain at break at 30 wt % starch intake.
Conclusions
A series of reactive interfacial agents (PCL-g-GMA) has been synthesized in scCO2 and applied in ternary PCL/PCL-g-GMA/starch blends. The functionalization reaction in scCO2 clearly follows a different course from the melt one. Higher FD values (at comparable feed composition) are generally obtained, coupled with a substantial invariance (as opposed to thermal degradation in the melt) of the average molecular weight and polydispersity of the starting PCL. However, the melt process makes it possible to prepare grafted polymers with higher FD values (in absolute terms) than the scCO2 one under the employed experimental conditions. The scCO2-prepared interfacial agents also display a clear difference in performance when used in ternary blends with starch and PCL, significantly outscoring the melt-prepared ones in terms of stress, strain at break and modulus at a starch intake of 30 wt %. This has been preliminarily explained in terms of differences in molecular weight but also in topology (i.e., length and distribution of the grafted chains) of these graft copolymers.
To the best of our knowledge, this study offers for the first time a comprehensive overview of the advantages of scCO2 functionalization with respect to the classical melt process. As such, it paves the way towards the definition and study of new functionalization reactions in supercritical media and the application of the corresponding graft copolymers in industrially relevant products. | 2019-02-17T16:23:45.598Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "9886a76cac21bc05b707a78ebe7b5986b94a1ea0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/10/11/1285/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9886a76cac21bc05b707a78ebe7b5986b94a1ea0",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
254948377 | pes2o/s2orc | v3-fos-license | Metal‐rich soils increase tropical tree stoichiometric distinctiveness
Ultramafic soils have high metal concentrations, offering a key opportunity to understand if such metals are strong predictors of leaf stoichiometry. This is particularly relevant for tropical forests where large knowledge gaps exist. On the tropical island of Sulawesi, Indonesia, we sampled forests on sand, limestone, mafic and ultramafic soils that present a range of soil metal concentrations. We asked how variation in 12 soil elements (metals and macronutrients) influenced leaf stoichiometry and whether stoichiometric distinctiveness (the average difference between a species and all others in a multivariate space, the axes of which are the concentrations of each leaf element) is influenced by increasing soil metal concentrations. Positive correlations between corresponding elements in soils and leaves were only found for Ca and P. Noticeably, soil Cr had a negative effect upon leaf P. Whilst most species had low stoichiometric distinctiveness, some species had greater distinctiveness on stressful metal-rich ultramafic soils, generally caused by the accumulation of Al, Co, Cr or Ni. Our observation of increased stoichiometric distinctiveness in tropical forests on ultramafic soils indicates greater niche differentiation, and contrasts with the assumption that stressful environments remove species with extreme phenotypes.
Introduction
Relationships between the majority of elements found in soils and plant tissues are poorly understood (Kaspari and Powers 2016). Species' stoichiometry (the balance of elements in an organism) may match the environment, or species could preferentially take up key elements needed for plant function. Equally, if uptake of an element has negative consequences, it can be actively excluded by species (Kazakou et al. 2008). For species that tolerate and survive on contrasting soil types, the relationship between tissue stoichiometry and the mosaic of elements available in the environment is unlikely to be simple (Kaspari and Powers 2016). Tissue elements may be primarily influenced by that same element in the soil. However, elemental concentrations are often tied to a different element that plays an indirect role in uptake (Yuan et al. 2011; Liu et al. 2017). For instance, in Arabidopsis, nitrate uptake relies upon calcium-signalling pathways (Liu et al. 2017). Alternatively, elements may have a negative effect upon one another because of competition for identical membrane transporters. For instance, Fe and Co use the same membrane transporter, and when the latter element is available in the environment it limits the former's uptake (Lange et al. 2017). Whilst relationships between plants and soils for macronutrients are well studied, many essential elements have received little attention (Kerkhoff et al. 2006; Elser et al. 2010; Kaspari and Powers 2016). Elements such as Al, Ca and Mg are only occasionally examined (Tripler et al. 2006; Metali et al. 2015; Zemunik et al. 2018), whereas other metallic elements, e.g. Cu, Ni and Zn, are even less well understood (van der Ent et al. 2018b).
In most ecosystems, soil stoichiometry is a good predictor of plant stoichiometry (Thompson et al. 1997; Tuah et al. 2003; Metali et al. 2015). In tropical tree communities, however, there is little clarity about metal element uptake and its influence on co-uptake of macronutrients (Zemunik et al. 2018). Understanding the effects of metallic elements is important because they can be toxic for plants. Cu, Ni, Zn and other metals can damage the machinery behind plant processes responsible for growth (photosynthesis, respiration) and can therefore indirectly affect competitive interactions between individuals and species (Küpper and Andresen 2016; Mohiley et al. 2020). Without a greater understanding of metals in plants we cannot be sure how stoichiometry influences the functioning of tropical ecosystems and the huge number of species they support (Cleveland et al. 2011).
Plants with different traits build their tissues and organs with different concentrations of elements (Ågren 2008). For example, plant species with short-lived, delicate leaves have greater foliar P, whilst those with long-lived, tough leaves can have low P concentrations (Wright et al. 2004; Sardans and Peñuelas 2013). Likewise, below ground, fine root N concentrations are positively correlated with respiration but negatively with traits associated with toughness (Roumet et al. 2016). Therefore, trait shifts caused by the environment should lead to parallel shifts in stoichiometry (Meunier et al. 2017).
For plant function to change, the suite of elements in plants must also change. This suite can be represented by a multivariate space, the axes of which are based upon concentrations of each element. A species' position within this space can therefore be thought of as a measurement of plant function based upon stoichiometry (González et al. 2017;Ågren and Weih 2020). The differences between species within this space represents functional differences between them (Violle et al. 2017). These differences can be quantified using a measure known as 'functional distinctiveness' -the average distance between a species and all others in a multivariate space based upon functional traits (Grenié et al. 2017). Functional distinctiveness is important because it is linked to competition (MacArthur and Levins 1967), niche differentiation and abundance (Kraft et al. 2015;Umaña et al. 2017b). The ties between plant stoichiometry and function suggest that stoichiometric functional distinctiveness, with trait axes corresponding to tissue element concentrations, should be equally useful for predicting competition and niche differentiation within communities.
Stoichiometry is constrained by minimum element concentrations needed for cellular function and by maximum concentrations, beyond which a toxicity threshold is crossed (Meunier et al. 2014). Within these constraints, distinctiveness may vary dependent upon fitness differences associated with contrasting stoichiometry. If there is one point of optimum fitness we would expect a single mode in the distinctiveness distribution (Parker and Maynard Smith 1990). Conversely, there may be multiple optima due to trade-offs between different elements (Marks and Lechowicz 2006;D'Andrea et al. 2020;Worthy et al. 2020). For instance, one optimal strategy may require an element that shares a membrane transporter with another. This second element would be outcompeted at transporters if the first is found at high concentrations (Andresen et al. 2018). The second element at higher concentrations may offer an equal fitness benefit but would likewise outcompete the first element for transport/uptake. The two approaches, of equal fitness, would result in very different stoichiometry. This could lead to multiple fitness optima associated with multiple modes in the stoichiometric distinctiveness distribution (Laughlin et al. 2015). The environment should also be influential: for instance, a more variable environment may support a greater range of trait strategies (Kraft et al. 2008;Stark et al. 2017) as a result of there being more fitness optima (Levin and Muller-Landau 2000;Marks and Lechowicz 2006). If this is the case, we would expect an effect of soil element concentrations upon stoichiometric distinctiveness.
For some species, the hyperaccumulation of tissue metal beyond the thresholds of most other species can be a viable strategy to deal with high metal concentrations in soils (van der Ent et al. 2013a; Andresen et al. 2018). Most species exclude potentially detrimental metals. Some species do accumulate metals, however: metal chelators aid transport to the relatively metabolically inactive vacuole or cell wall for storage (Peng et al. 2020). Exogenous storage in high-concentration patches on leaves has also been observed (van der Ent et al. 2018a). To examine the accumulation of metals in tropical tree species we sampled the little studied, species-rich forests over ultramafic soils (van der Ent et al. 2018b; Lopez et al. 2019). Ultramafic soils derive from the weathering of mantle-derived geology rich in metals, e.g. Ni, Co and Cr (Moores 2011). The stoichiometry of the resulting soils reflects this weathering (high Ni, Co, Cr etc.), as can that of their plant communities, which feature the vast majority of metal hyperaccumulators (van der Ent et al. 2013a). Here we quantify multi-element soil-tree stoichiometry across tropical ultramafic and non-ultramafic sites.
This study focuses on Sulawesi, an island in central Indonesia, part of the Wallacea biodiversity hotspot that features the tropics' largest outcrop of ultramafic soils (van der Ent et al. 2013b; Galey et al. 2017). Forest plots were established on sand, limestone, mafic and ultramafic soils (Trethowan et al. 2020). We examine the relationships between Al, Ca, Co, Cr, Cu, Fe, K, Mg, Mn, Ni, P and Zn in soils vs. leaf tissue, and test how variability of these elements in soil affects tree species stoichiometry and stoichiometric distinctiveness.
Sample collection
Ten 50 × 50 m permanent primary forest plots were established across Sulawesi (Trethowan et al. 2020).
Study locations were centred around the tropics' largest mafic/ultramafic complex (van der Ent et al. 2013b). Two plots were located on ultramafic soils, at the complex's centre, in Morowali Nature Reserve. We established four plots at the complex's eastern periphery, in protected forests of the Bualemo peninsula. This consisted of two plots on mafic (basalt) soils, a single plot on a limestone hill and another in a limestone valley. Four plots were located at the south-eastern periphery, in protected forest of Wawonii Island. These consisted of two plots on ultramafic soils and two on sand.
We sampled all trees ≥ 10 cm diameter at breast height (1.3 m). Herbarium specimens were collected for species identification (Utteridge and Bramley 2015; Baker et al. 2017). Samples for tissue elemental analysis consisted of mature, shade leaves from all species in each plot. Shade leaves were collected because shade is the condition that most leaves experience (Keenan and Niinemets 2016) and, on a practical basis, were more accessible to tree climbers; this does, however, mean we did not examine sun-exposed leaves, which may show different responses. One or two leaves were collected from a single branch and heat-dried in the field; to avoid soot deposition, leaves were placed within envelopes during drying. Soil samples were taken from the upper 10 cm of topsoil at the centre of all 10 × 10 m subplots; all samples from each plot were then pooled for further analysis, giving a total of ten soil samples for analysis. We acknowledge that this pooling will have led us to miss fine-scale edaphic influences on foliar stoichiometry. All plots had 'upland' soils, so did not experience waterlogging. The upper 10 cm of topsoil is generally representative of the nutrients tropical trees are exposed to because this is where up to half of the total fine root mass is found, with an exponential decline in mass with soil depth (Brearley 2013; Lalnunzira et al. 2019).
Leaf tissue stoichiometry
A total of 723 leaf samples were collected from 283 species. The number of samples per plot ranged from 47 to 105. Species richness ranged from 38 to 53 across the plots. Generally, each species was sampled once in each plot it occurred within. When species were sampled more than once in a plot, the mean element concentration of that species within the plot was used. A subsample of each of the leaves (c. 100 mg) was digested in 10 ml of concentrated HNO3 using a CEM Mars Xpress microwave (1200 W with a 15-minute ramp and a 20-minute hold at 170 °C) and made up to 100 ml in ultrapure (18 MΩ) deionised water. Al, Ca, Co, Cr, Cu, Fe, K, Mg, Mn, Ni, P and Zn concentrations were quantified using a Thermo-Finnegan iCAP 6300 Duo inductively coupled plasma optical emission spectrometer.
For quality control, certified reference material (LGC 7162, Strawberry Leaves) was analysed alongside the samples. Reference sample measurements did not differ from certified values for any element (Wilcoxon P > 0.05). Additionally, 61 leaf samples were washed by sonicating for five minutes in deionised water to determine if there was any soil contamination. The sonicated samples did not differ from unwashed samples (Wilcoxon P > 0.05) indicating our samples were not contaminated; we therefore used unwashed samples for all analyses.
Soil stoichiometry data
Total soil Al, Ca, Co, Cr, Cu, Fe, K, Mg, Mn, Ni, P and Zn were quantified via digestion of 0.5 g of thoroughly ground and homogenised soil in 5 ml HNO3 and 1 ml HClO4 at 100 to 200 °C, ramping over 7 hours. Samples were diluted to 25 ml with deionised water and analysed on an Agilent Technologies 4100 microwave plasma atomic emission spectrometer (Co, Cr and Ni) or an Agilent Technologies 200 Series atomic absorption spectrometer (all other elements).
Edaphic effect upon species leaf stoichiometry
We explored the relationships between soil and leaf elements using partial least squares regression (PLS); this approach identifies the effects of multiple predictor variables while accounting for covariation amongst them (Wehrens and Mevik 2007). We square-root transformed all soil and leaf element concentrations to reduce the influence of outlying values, then scaled them (z-scores) prior to PLS to ensure regression coefficient estimates were comparable between elements. In a single model, we tested for the effect of each soil element upon each leaf element, whilst accounting for covariation in soil elements.
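As a rough illustration of this step, the sketch below fits a two-component PLS of z-scored, square-root-transformed leaf elements on soil elements with scikit-learn. The arrays are randomly generated placeholders, not the study's measurements, and the authors' own analysis followed Wehrens and Mevik (2007) in R rather than this code.

```python
# Minimal PLS sketch: effects of 12 soil elements on 12 leaf elements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
soil = rng.gamma(2.0, 1.0, size=(100, 12))  # placeholder soil concentrations
leaf = rng.gamma(2.0, 1.0, size=(100, 12))  # placeholder leaf concentrations

def sqrt_z(x):
    """Square-root transform, then z-score each column."""
    s = np.sqrt(x)
    return (s - s.mean(axis=0)) / s.std(axis=0)

pls = PLSRegression(n_components=2, scale=False)  # data already scaled above
pls.fit(sqrt_z(soil), sqrt_z(leaf))
coefs = pls.coef_  # regression coefficients of each soil element on each leaf element
```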
Edaphic effect upon species stoichiometric distinctiveness
Our distinctiveness measure was the mean distance of a species to all others in a multivariate space, the axes of which were each scaled (z-score) leaf element concentration (Violle et al. 2017). We did not reduce leaf stoichiometry axes down to a set of PC axes because there was little covariation between elements (nine PC axes were required to explain 90 % of the variation in the data).
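A minimal sketch of this distinctiveness computation, using randomly generated placeholder values rather than the study's leaf data (the array names are ours):

```python
# Stoichiometric distinctiveness: mean Euclidean distance of each species to
# all other species in the space of z-scored leaf element concentrations.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
leaf = rng.gamma(2.0, 1.0, size=(50, 12))          # placeholder: 50 species x 12 elements
z = (leaf - leaf.mean(axis=0)) / leaf.std(axis=0)  # scale each element axis
D = squareform(pdist(z))                           # pairwise species distances
distinctiveness = D.sum(axis=1) / (len(z) - 1)     # mean distance to all others
```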
To examine edaphic influence on leaf stoichiometry, we used phylogenetic generalised least squares regression (PGLS) following Metali et al. (2012). We incorporated phylogenetic distance into the analysis because related species are not independent samples (Hurlbert 1984; Verboom et al. 2017; Ives 2018). The square root of species stoichiometric distinctiveness was the response variable and soil principal component (PC) axes were the predictor variables.
Phylogenetic data were derived from a plant family resolved supertree provided by Gastauer et al. (2017), pruned to consist of the taxa identified across the plot series using Phylomatic (Webb and Donoghue 2005). The resolved phylogeny was then dated according to Magallón et al. (2015).
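A hedged sketch of the PGLS step: the phylogenetic covariance matrix V would normally be a Brownian-motion variance-covariance matrix derived from the dated tree, but here it is a placeholder identity matrix, and the predictor and response arrays are synthetic, so the snippet runs stand-alone.

```python
# PGLS as generalised least squares with a phylogenetic covariance matrix.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40
V = np.eye(n)                        # stand-in for the tree-derived VCV matrix
soil_pcs = rng.normal(size=(n, 5))   # species scores on five soil PC axes (placeholder)
distinct = rng.gamma(2.0, 1.0, size=n)

X = sm.add_constant(soil_pcs)
fit = sm.GLS(np.sqrt(distinct), X, sigma=V).fit()  # sqrt(distinctiveness) as response
print(fit.params)
```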
Edaphic effect upon species leaf stoichiometry
We found that the first two partial least squares axes explained 72 % of the variability in the relationship between soil and leaf element concentrations. These two axes showed large positive effects (regression coefficient > 0.1) of soil Al and P concentration upon leaf Ca, Cu, and P concentration, and soil Ca concentration upon leaf Ca and P concentration (Fig. 1a). We found large positive effects of soil Cr and Fe concentration upon leaf Ni concentration (Fig. 1a). Large negative effects (regression coefficients < -0.1) were found for soil Cr concentration upon concentration of leaf P (Fig. 1a). Generally, we did not find clear effects of soil elements upon the reciprocal element in leaves, except for Ca and P (Fig. 1b). All soil and leaf element pairwise relationships can be found in Fig. S1.
Edaphic effect upon species stoichiometric distinctiveness
Five PC axes explained > 90 % variation in the soil data. The first axis was responsible for a gradient of Co, Fe, Mn and Zn; the second, for Al, Ca and P; the third, for high Cr and Cu to high Co, Mg and Ni; the fourth, for high Co to high Mg; and the fifth for K (Table 1).
There was a significant effect of the first and third soil PC axes on species stoichiometric distinctiveness (P < 0.001, Fig. 2, Table S1). These axes, as detailed above, were responsible for a gradient in metals rather than macronutrients, which is indicative of a general positive relationship between soil metal concentration and stoichiometric distinctiveness. We also found a weak significant effect of the second PC axis which represents a gradient of Al and the macronutrients Ca and P (P < 0.05, Fig. 2; Table 1).
Discussion
We found that tropical tree leaf stoichiometry shows limited change in the face of soil heterogeneity -except for some species found on ultramafic soils that are distinct from all others. This warrants two points for discussion: (1) why do we find limited effects of soil on leaf stoichiometry in our study system? And (2) why do ultramafic soils increase stoichiometric distinctiveness of tropical trees?
Previously, log-linear relationships between soil and tissue stoichiometry, specifically N:P ratios, have been identified (Elser et al. 2010). In contrast, we found little correlation for many elements between soil and tissue stoichiometry. The likeliest explanation is that, when looking at a broad spectrum of elements across soils, most species' leaf stoichiometry remains similar, buffered from variation in soil stoichiometry. This is because stoichiometry within the upper and lower bounds of a range of concentrations is needed to support cellular processes, irrespective of soil type (Meunier et al. 2014; González et al. 2017). However, some of the elements that do not obey this rule are worth noting. We see strong positive effects of soil Al, Ca and P upon leaf Ca and P. Positive correlations between these elements in tropical forests have been observed before, both when looking at tissue stoichiometry alone (Masunaga et al. 1998; Metali et al. 2015) and also, as in our study, between tissue and soil (Asner et al. 2014). Added to that, we find evidence for metal impact, as soil Cr has a strong negative relationship with leaf P, possibly a result of competition between Cr and phosphate for shared cross-membrane transport proteins (Sinha et al. 2018). This could lead to a potentially influential deficiency in P, considering ultramafic soils are naturally low in P (Porder and Ramachandran 2013). Broadly, we found minimal effects of most soil element concentrations upon tissue element concentrations. When we did find effects, they are not necessarily intuitive, i.e. soil element X does not directly influence leaf element X. Macronutrients tended to have a positive relationship between soils and leaves whereas other metals did not. When soil metals (Cr and Fe) did covary with leaf elements it was not in the reciprocal elements but others (P and Ni). It appears that the complex relationship between elements required for plant function (Kaspari and Powers 2016) is reflected by an equally multi-faceted relationship between soils and plant tissues.
Why, in ultramafic communities, have we found some species with high stoichiometric distinctiveness (Fig. 3b)? It has been suggested that plants are generally adapted for competition, stress tolerance or rapid establishment after recent disturbance (Grime 1977). Previously, species adapted for competition or rapid establishment have been shown to have reduced distinctiveness (based upon traits) (Umaña et al. 2017a). Our data suggests that increased distinctiveness could be a result of stress tolerance. We find that species with high stoichiometric distinctiveness have high concentrations of metals not classified as macronutrients (Fig. 3a).
Metal accumulation is largely found in species that tolerate the stresses of metal-rich soils (van der Ent et al. 2013a). Furthermore, metal accumulator species are often outcompeted in more benign environments (Reeves et al. 1999), another signifier of stress tolerance (Grime 1977). Therefore, it appears that in diverse tropical forest, increased stoichiometric distinctiveness allows tree species to tolerate metal-rich ultramafic soils.
The observation of species becoming more distinct in response to stress contrasts with evidence gathered from other abiotic gradients. For communities exposed to stressful climatic change with elevation, or along a soil fertility gradient, the response is a reduction in species distinctiveness (Laughlin et al. 2015; Verboom et al. 2017; Umaña and Swenson 2019), where the environment removes species that are far from their optimum in stressful conditions. However, we show that under edaphic stress, the response from a stoichiometric perspective can be increasing distinctiveness. This is presumably because there are alternative strategies of optimum fitness within the metal-rich stressful environment (Marks and Lechowicz 2006; Worthy et al. 2020). The presence of alternative strategies indicates increasing difference in the biotic and abiotic environment occupied by species (i.e. niche differentiation) (Letten et al. 2017; Peñuelas et al. 2019). This tends to reflect species competing less for key resources, which is often touted for communities in stressful environments (Freestone 2006; Niu et al. 2020).
Stoichiometric distinctiveness may also be linked to niche differentiation. In the sites studied here, it is generally the accumulation of the metals Al, Co, Cr and Ni in tissue that causes distinctiveness (we also find a few cases of high distinctiveness due to high leaf Ca on limestone; Fig. 3a). Metal accumulation may offer a defence against herbivory by reducing palatability, in a similar way to polyphenols and proteases (Boyd 2004; Kazemi-Dinan et al. 2014; Volf et al. 2018; Coley et al. 2019). High leaf metal concentrations may also aid conspecific recruitment, because competitors are less able to deal with localised spikes in soil metal concentrations that result from leaf litter with high metal concentrations (Boyd and Martens 1998; Boyd and Jaffré 2001; Mohiley et al. 2020). These interactions should be mediated by intraspecific variability in stoichiometric distinctiveness, particularly considering the facultative nature of metal accumulation in many populations (Pollard et al. 2014). How, and if, the above mechanisms affect competition in tropical forests remains uncertain. If greater stoichiometric distinctiveness does reflect greater niche differentiation, we expect it to allow species to persist when they would otherwise be outcompeted and removed from communities (Levine and HilleRisLambers 2009). This should contribute to the coexistence of species in high-diversity ultramafic-rich tropical regions (Rahbek et al. 2019). Our results come with the caveat of not using soil pH or bioavailable soil element data, the former determining the latter. Although our finding that increasing soil metal drives increasing leaf metal makes biological sense, the addition of such data would reinforce the conclusions presented. Despite this drawback, examinations of remote ultramafic tropical forest communities are rare, and this study is a starting point for understanding these complex, diverse systems that are threatened by increasing human demand for metals and the mining it requires (van der Ent et al. 2013b). We found that the effect of metal-rich ultramafic soils upon tropical trees is not simply a change in leaf stoichiometry across all species. Species mostly retain similar leaf stoichiometry irrespective of substrate. However, some species on ultramafic soils accumulate metals (e.g. Cr, Ni), resulting in stoichiometry distinct from all other members of the community. This variable stoichiometric distinctiveness likely has implications for interspecific competition in highly diverse tropical forests.

[Fragment of the Fig. 3b caption: stoichiometric distinctiveness values of all species across tree communities in Sulawesi; red bars indicate the presence of species in ultramafic communities.]

Acknowledgements (partial): ...for leaf sample analysis, and the Indonesian Agricultural Research Agency (Badan Litbang Kementerian Pertanian) for conducting soil analysis. We thank Jennifer Rowntree, Tim Baker, Giacomo Sellan and three anonymous reviewers for comments that much improved the manuscript.
211049774 | pes2o/s2orc | v3-fos-license | Tribological Properties of Si3N4-hBN Composite Ceramics Bearing on GCr15 under Seawater Lubrication
This paper concerns a comparative study on the tribological properties of Si3N4-10 vol% hBN bearing on GCr15 steel under seawater lubrication, fresh-water lubrication, and dry friction, using a pin-on-disc tribometer. The results showed that the lower friction coefficient (around 0.03) and wear rate (10−6 mm3/Nm) of the SN10/GCr15 tribopair were obtained under the seawater condition. This might be caused by the combined effects of hydrodynamic lubrication and the boundary lubrication of surface films formed by tribochemical reaction. The SN10/GCr15 tribopair had a friction coefficient of 0.07 in the pure-water environment. Under dry friction, the wear mechanisms were dominated by adhesive wear and abrasive wear, and delamination, plowing, and plastic deformation occurred on the worn surface. X-ray photoelectron spectroscopy analysis indicated that the products formed by tribochemical reaction were Fe2O3, SiO2, and B2O3, together with small amounts of salts from the seawater, and it was these deposits on the worn surface under seawater lubrication conditions that served to lubricate and protect the wear surface.
Introduction
With land resources on the way to depletion, the development of marine resources has been gaining increasing attention. The specialized equipment used in marine engineering is the foundation of emergent marine resources and economic development. Equipment used in such environments requires special engineering, in which the friction-pair component is an important aspect [1]. Compared to traditional metal or alloy materials, which are susceptible to electrochemical corrosion, structural ceramics possess unique properties and are promising materials for use in frictional components and corrosion-resistant parts. Si3N4-based ceramics have found wide application in a variety of engineering fields due to their properties: they are high in hardness, low in density, excellent in thermal and chemical stability, and outstanding in corrosion resistance. These ceramics have seen wide-ranging industrial applications, including use in high-speed cutting tools, engine parts, sealing modules, bearings, and corrosion-resistant components [2-5]. Wu et al. [6] found a WC-10Co-4Cr/Si3N4 tribopair with a friction coefficient of 0.09 and a wear rate lower than 9 × 10−6 mm3 N−1 m−1 in natural seawater, attributing this to the Si3N4 tribochemical reaction. Liu et al. [7] researched the tribological behaviors of a Si3N4/AISI316 tribopair in seawater conditions, finding that the friction coefficient under seawater lubrication decreased to 0.16 and attributing this to the lubricating function of SiO2 gel. Accordingly, Si3N4 ceramics have promising prospects for wide application in marine engineering equipment. In previous studies, researchers just assessed the
Materials Prepared and Specimens Made
The detailed synthesis of Si3N4-hBN composite ceramics is presented elsewhere [16-18]. Briefly, the starting materials were ball milled, hot-press sintered at 30 MPa and 1800 °C, and then cut into pins. The sintering raw materials were composed of Si3N4 powder (content of 90% α-Si3N4 or more, average particle size 0.5 µm) and hBN powder (purity 99.6%, average particle size 1 µm). The sintering aids were composed of hBN powder (purity 99.6%, average particle size 1 µm), Al2O3 powder (purity 99.5%), and Y2O3 powder (purity 99.9%) [27]. In this study, the Si3N4-hBN specimens contained 10 vol% hBN and are denoted SN10. Figure 1 shows the phase analysis of the sintered Si3N4-10% hBN ceramic; its components are β-Si3N4 and hBN. Tables 1 and 2 show the physical and mechanical properties of the Si3N4-10% hBN specimens and of GCr15. The Si3N4-10% hBN ceramic was cut into 5 mm × 5 mm × 10 mm rectangular pins with a surface roughness lower than 0.08 µm for the friction and wear tests. A GCr15 disc, 44 mm in diameter and 6 mm in thickness, was used as the mating material; its friction surface was mechanically ground until the surface roughness reached 0.04 µm. Table 3 displays the major components of the disc. The seawater used in the experiments was artificial seawater prepared according to standard ASTM D1141-1998 [29], with the pH adjusted to 8.2 by a 0.1 mol/L NaOH solution; Table 4 displays its major components.
Test Methods
In the experiments, a pin-on-disc tribometer was used: an upper rotary pin contacts a stationary disc under three lubricating conditions: dry, pure water, and seawater. The schematic diagram of the testing apparatus is shown in Figure 2; a detailed overview of the apparatus can be found in our previous report [17]. As shown in Figure 2, the SN10 pins slide against the GCr15 disc. During the sliding friction tests, the disc was submerged in the lubrication medium (either fresh water or seawater). All experiments were conducted at ambient temperature with a set load of 10 N, a sliding speed of 1.73 m/s, and a sliding distance of 1000 m. Before the experiments, all samples were cleaned ultrasonically in acetone and ethanol for 15 min. The tribometer recorded the friction coefficient (f), and the wear rate w was derived from the formula

w = Δm / (ρFS),

in which Δm represents the mass loss measured with a microbalance with an accuracy of 0.1 mg, ρ is the density, F the normal load, and S the sliding distance. In the calculation of f and w, the initial running-in values were excluded, and the friction coefficients and wear rates reported are averages of three independent experiments. The phase constitution of the sintered Si3N4-10% hBN ceramic was determined by X-ray diffraction (XRD; D8 Advance, Bruker, Germany) using Cu Kα radiation. Scanning electron microscopy (SEM; FEI Apreo, Hillsboro, OR, USA) was used to analyze the morphologies of the worn surfaces, and X-ray photoelectron spectroscopy (XPS; AXIS Supra, Manchester, UK) was applied for chemical characterization of the worn surfaces.
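For concreteness, a small helper implementing this formula; the input values in the example are illustrative, not measurements from this study.

```python
# Wear rate w = Δm / (ρ·F·S) from mass loss, density, load, and sliding distance.
def wear_rate(delta_m_mg: float, rho_g_mm3: float, load_N: float, dist_m: float) -> float:
    """Wear rate in mm^3 N^-1 m^-1 from mass loss (mg), density (g/mm^3),
    normal load (N) and sliding distance (m)."""
    volume_mm3 = (delta_m_mg / 1000.0) / rho_g_mm3  # mg -> g, then g / (g/mm^3)
    return volume_mm3 / (load_N * dist_m)

# e.g. 0.3 mg loss, density ~3.2e-3 g/mm^3 (typical for Si3N4), 10 N, 1000 m:
print(f"{wear_rate(0.3, 3.2e-3, 10.0, 1000.0):.2e} mm^3/(N·m)")
```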
Characteristics of Friction and Wear

Figure 3 shows the friction coefficients of SN10 on GCr15 steel under dry friction, pure-water, and seawater conditions. As revealed in this figure, the friction coefficient of the SN10/GCr15 tribopair is greater than 0.5 in the dry condition, higher than under either lubricated condition. Under fresh-water lubrication the friction coefficient is approximately 0.07, whereas under seawater lubrication, with the experimental parameters otherwise unchanged, it is about 0.03 and its trace is quite stable. Evidently, the seawater environment promoted superior lubrication compared with the pure-water condition.

Figure 4 lists the wear rates of the SN10/GCr15 tribopair under the three lubrication conditions. In the aqueous environments, the tribopairs demonstrated much better wear resistance than in the dry condition. The wear rates of the pin and disc were as high as 10−5 mm3 N−1 m−1 under dry friction, while the pins displayed negligible wear rates (not higher than 10−6 mm3 N−1 m−1) in the pure-water and seawater environments. As is known, most materials exposed to corrosive aqueous environments inevitably suffer corrosion; during the sliding friction of the SN10/GCr15 tribopair in seawater, corrosive wear as well as mechanical wear affected the metal components. Nevertheless, the lowest wear rates of the pin and disc were obtained in the experiments with seawater, as opposed to pure water, indicating that Si3N4-10 vol% hBN possessed an excellent lubricating effect in seawater under the given experimental parameters.
Characterization of the Worn Surfaces with SEM
Figure 5 shows the worn-surface images of the SN10/GCr15 tribopair under dry friction, fresh water, and seawater conditions. Obviously, the aqueous environments produced relatively smoother surfaces than the dry condition, as the aqueous solutions provided lubrication and helped dissipate frictional heat during sliding; thus the friction coefficient and wear rate of the SN10/GCr15 tribopair were low and limited. As seen in Figure 5b, the worn surface of the GCr15 disc was subjected to severe plastic deformation, delamination, and plowing. Likewise, as seen in Figure 5a, debris found on the surfaces of the SN10 pins might be traced to the embedding of worn debris in the spalling pits. These results are likely due to the lower shear strength of the GCr15 steel in comparison with the SN10 ceramic, with debris from the formed adhesion spots torn from the GCr15 disc being transferred and adhered to the worn pins under dry friction. In the fresh-water environment, the worn surfaces of the SN10 pins showed significant wear debris in the transferred layer and slight furrows parallel to the sliding direction (Figure 5c). Furthermore, some discontinuous films were found covering the worn parts of the GCr15 disc (Figure 5d). This phenomenon might lie in the formation of worn debris at the grinding interfaces, as the water in the experiments failed to remove the debris completely, promoting mechanical abrasion at the interfaces. Figure 5e,f shows the relatively smooth worn surfaces of the SN10/GCr15 tribopair under seawater lubrication, with only a few scratches and pits; this manifests that the lubricating effect of seawater is superior to that of fresh water.

The observed adhesive and abrasive wear dominated the wear mechanisms under dry sliding friction, explaining the high friction coefficient and wear rate. Under water lubrication, the friction coefficients and wear rates of the SN10/GCr15 tribopair were low, owing to the tribofilm formed on the surface, which served to lubricate and protect the worn surface, and to the hydrodynamic lubrication conferred by the liquid medium. In the sliding friction tests, the relative motion of the SN10/GCr15 tribopair produced tribofilms between the contact interfaces, and these in turn generated boundary lubrication regimes that impeded adhesive wear and lowered the friction coefficient. One more phenomenon deserves notice: two distinct worn regions appeared on the SN10 pin, as shown in Figure 6. Region 1 was primarily composed of Fe, Si, O, and C, indicating that this region had a mixed composition of oxides. Furthermore, Ca and Cl from the seawater could be identified in region 2, indicating that some salt was deposited on, or incorporated into, the wear interface. The Fe element was removed from the GCr15 disc while the SN10 pin was grinding on it. However, in region 1, the Fe element and salt from the seawater were not found, but oxygen and boron were detected.
XPS Analysis on the Worn Surfaces
Figures 7-9 present the results of the XPS analysis of the worn surfaces of the GCr15 discs after grinding against the SN10 pins under the lubrication conditions. Under dry friction, the Si 2p, B 1s, and Fe 2p3/2 peaks of GCr15 could be subdivided: on the Si 2p spectra, peaks appeared at 98.6 eV, while the Fe 2p3/2 peaks at 707.2 and 710 eV could be assigned to metallic Fe, FeO, and Fe2O3. As in our previous report [1], the Si3N4 and BN underwent the following reactions with water:

Si3N4 + 6H2O → 3SiO2 + 4NH3,
SiO2 + nH2O → SiO2·nH2O,
2BN + 3H2O → B2O3 + 2NH3,
B2O3 + 3H2O → 2H3BO3.

The negative values of the Gibbs free energy of the reactions above reveal the possible reaction of Si3N4 and BN with moisture in the air or with water, generating SiO2, B2O3, and H3BO3 [30-32]. According to our previous study [20], spalling pits first appeared on the SN10 pins, since the hBN spalled first, and the worn debris was then embedded in the pits and reacted with water during the grinding friction process. Thus, a tribofilm made up of Fe2O3, SiO2, and B2O3 took shape on the worn surfaces, endowing the sliding pair with hydrodynamic lubrication. In summary, under the fresh-water and seawater conditions, the low friction coefficient and wear rate of the SN10/GCr15 tribopair were produced by the tribofilms developed on the worn surfaces.
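As a toy illustration of the Gibbs-energy argument, one can estimate the sign of ΔG for the Si3N4 hydrolysis from tabulated standard free energies of formation; the numbers below are approximate textbook reference values, not data from this paper.

```python
# Sign check for Si3N4 + 6H2O -> 3SiO2 + 4NH3 using approximate standard
# free energies of formation (kJ/mol); values are illustrative placeholders.
dGf = {"Si3N4": -647.0, "H2O(l)": -237.1, "SiO2": -856.3, "NH3(g)": -16.4}
dG = 3 * dGf["SiO2"] + 4 * dGf["NH3(g)"] - (dGf["Si3N4"] + 6 * dGf["H2O(l)"])
print(f"deltaG ~= {dG:.1f} kJ/mol")  # negative, so the hydrolysis is favourable
```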
Characterization of Seawater Lubrication
In this test, CaCO3 and Mg(OH)2 were found to be aggregated and enriched on the friction surface by the salt ions contained in the seawater, and they had a lubricating effect at the friction interfaces [33]. Figure 10 shows the XPS spectra of Ca 2p and Mg 2p under the seawater condition. The peaks of Ca 2p were at 347.6 eV and 350.7 eV, corresponding to those of CaCO3, and the peaks of Mg 2p were at 87.9 eV and 88.3 eV, corresponding to those of Mg(OH)2, indicating that some amount of salt from the seawater was deposited on, or incorporated into, the contact interface [34]. The XPS results supported the interpretation of the EDS analysis (see Figure 6) by revealing that CaCO3 and Mg(OH)2 were indeed generated. The related chemical reactions are:

Ca2+ + CO32− → CaCO3,
Mg2+ + 2OH− → Mg(OH)2.

Figure 11 shows a highly magnified SEM image of the worn surface of the GCr15 steel after grinding in seawater. A clear turtle-shell-like sludge was deposited on the GCr15 steel after bearing on the SN10 pin. The CaCO3 and Mg(OH)2 deposits have been reported to play an important role in the friction and wear of tribopairs in seawater [24]. The layer of deposited CaCO3 and Mg(OH)2 served to segregate the GCr15 disc from the seawater and prevent chlorine ions from sprawling over the disc, thus slowing the corrosion of the GCr15 disc. This might help to explain why the friction coefficient and wear rate of the SN10/GCr15 tribopair were lower in seawater and higher in fresh water.

In addition, it has been reported that the SiO2 formed via the tribochemical reaction can accumulate and aggregate into a SiO2 gel deposited on the grinding interface when either SiC or Si3N4 is ground in aqueous conditions [7,35]. The SiO2 gel itself can be an excellent lubricant, so friction and wear were lowered. Ions in the seawater, mainly Na+ and Cl−, could accelerate the aggregation of SiO2 colloids on the friction surface, where the colloids would turn into a SiO2 gel lubricant. Although colloidal SiO2 could also develop in fresh water, it would be easily removed by the surrounding water, so the deposited SiO2 gel would be too thin to function as an effective lubricant for the interface.
The XPS results presented in Figures 8 and 9 indicate that the SiO2 generated by the tribochemical reaction could remain in place and function as a good boundary lubricant, contributing to the low friction coefficient and wear rate observed in the present research. To further verify the hypothesis that SiO2 colloids would aggregate and turn into SiO2 gel in seawater, a SiO2 colloid solution was prepared by mixing tetraethoxysilane with ethanol and water. This solution was then added to fresh water and to seawater. As it turned out, the fresh water remained clear while the seawater turned milky white. This phenomenon is similar to the results reported by Liu [7]; however, a better understanding of the mechanism requires further exploration.

The schematic diagrams representing the wear models of the SN10 ceramic bearing on the GCr15 steel under seawater lubrication are depicted in Figure 12. As shown, owing to the relatively poor bonding of the 10 vol% hBN with the Si3N4 matrix, the hBN was easily extruded from the friction surface of the SN10 ceramic, resulting in spalling pits on the SN10 pin. In the course of the grinding friction experiments, worn pieces were found either embedded into the pits on the SN10 pin or deposited on the GCr15 disc (see Figure 5c,e), and they subsequently reacted with the available water. The resulting chemical products aggregated in the pits and spread over the surface (see Figure 5d,f), eventually forming a tribofilm made up of SiO2 and H3BO3. This helps to explain why the friction coefficient and wear rate of the SN10/GCr15 tribopair were low in aqueous conditions. The tribofilm protected the wear interface from abrasive wear and functioned as a lubricant on the worn surfaces of the pin and disc. In addition, salt ions from the seawater accelerated the accumulation of CaCO3 and Mg(OH)2 on the friction surfaces, functioning as a boundary lubricant, which in turn effectively countered corrosion by chlorine ions. The results of the XPS analysis showed that, under different lubrication conditions, the tribochemical products developed on the SN10/GCr15 tribopair were also different; that is to say, the corresponding wear mechanisms were also different. Table 5 shows that, under dry friction, despite some tribochemical products on the worn surface, ample amounts of worn pieces were also deposited (see Figure 5a,b) and a complete surface film was not formed, indicating that mechanical wear (adhesive and abrasive wear) was the dominant wear mechanism under these conditions.
In an aqueous environment, the film generated by the tribochemical reaction effectively protected the worn surface from abrasive wear, due to its contribution to the hydrodynamic lubrication and boundary lubrication.
Conclusions
The present paper concerns a study of the tribological properties of Si3N4-10 vol% hBN grinding on GCr15 under dry friction, fresh-water, and artificial seawater conditions. The findings are summarized as follows. Under dry friction, the friction coefficient and wear rate of the SN10/GCr15 tribopair are higher than those under aqueous conditions: the friction coefficient is around 0.5, and the wear rate is over 10−5 mm3/Nm. Mechanical wear (adhesive and abrasive wear) dominates the wear mechanism under dry friction, and the worn surfaces are prone to severe plastic deformation, delamination, and plowing.
In an aqueous environment, when SN10 grinds on GCr15, the friction coefficients and wear rates are lower than those under dry conditions. Their respective values are f ≈ 0.06 and w ≈ 10−6 mm3/Nm under fresh-water lubrication, and f ≈ 0.03 and w ≈ 10−6 mm3/Nm under seawater lubrication. The film generated by the tribochemical reaction prevents further abrasive wear of the worn surface and functions as an effective lubricant. The lubrication characteristics shown under the seawater conditions lie in the tribofilm formed in situ at the wear interface and in the layer of CaCO3 and Mg(OH)2 deposited on the worn surface.
7927952 | pes2o/s2orc | v3-fos-license | Half-isomorphisms of Moufang loops
We prove that if the squaring map in the factor loop of a Moufang loop $Q$ over its nucleus is surjective, then every half-isomorphism of $Q$ onto a Moufang loop is either an isomorphism or an anti-isomorphism. This generalizes all earlier results in this vein.
Introduction
A loop (Q, ·) is a set Q with a binary operation · such that for each a, b ∈ Q, the equations a · x = b and y · a = b have unique solutions x, y ∈ Q, and there exists a neutral element 1 ∈ Q such that 1 · x = x · 1 = x for all x ∈ Q. We will often write xy instead of x · y and use · to indicate priority of multiplications. For instance, xy · z stands for (x · y) · z.
In this paper we will only need the first Moufang identity, namely xy · zx = x(yz · x) . (1.1) Basic references to loop theory in general and Moufang loops in particular are [2,16]. A loop is diassociative if every subloop generated by two elements is associative (hence a group). By Moufang's theorem [15], if three elements of a Moufang loop associate in some order, then they generate a subgroup. In particular, every Moufang loop is diassociative. We will drop further unnecessary parentheses while working with diassociative loops, for instance in the expression xyx.
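To make the identity concrete, here is a minimal brute-force check of (1.1) on a finite multiplication table; the example table is the cyclic group Z4 (every group is a Moufang loop), and the function name is our own illustration, not from the paper.

```python
# Brute-force check of the first Moufang identity (1.1): xy . zx = x(yz . x).
def satisfies_moufang1(mul, Q):
    return all(mul[mul[x][y]][mul[z][x]] == mul[x][mul[mul[y][z]][x]]
               for x in Q for y in Q for z in Q)

Q = range(4)
mul = [[(a + b) % 4 for b in Q] for a in Q]  # Cayley table of Z4
assert satisfies_moufang1(mul, Q)
```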
A half-isomorphism ϕ : Q → Q′ of loops is a bijection such that ϕ(xy) ∈ {ϕ(x)ϕ(y), ϕ(y)ϕ(x)} for every x, y ∈ Q. The starting point for the investigation of half-isomorphisms of loops is the following result of Scott.

Proposition 1.1 ([17]). Every half-isomorphism of a group onto a group is either an isomorphism or an anti-isomorphism.
Scott gave an example of a loop of order 8 which shows that Proposition 1.1 does not directly generalize to loops. It is nevertheless natural to ask if the result generalizes to Moufang loops, since these are highly structured loops that are, in some sense, very close to groups. This question was first addressed by Gagola and Giuliani [5], who proved the following.

Proposition 1.2 ([5]). Let Q, Q′ be Moufang loops of odd order. Every half-isomorphism of Q onto Q′ is either an isomorphism or an anti-isomorphism.
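For small orders, Proposition 1.1 can be verified by exhaustive search. The sketch below (our own illustration, not from the paper) enumerates all bijections of the symmetric group S3 onto itself and checks that every half-isomorphism among them is an isomorphism or an anti-isomorphism.

```python
# Exhaustive check of Proposition 1.1 on S3, encoded as a Cayley table.
from itertools import permutations

elems = list(permutations(range(3)))  # S3 as permutations of {0,1,2}
idx = {g: i for i, g in enumerate(elems)}
def comp(g, h):  # composition g∘h
    return tuple(g[h[i]] for i in range(3))
mul = [[idx[comp(g, h)] for h in elems] for g in elems]  # Cayley table
n = len(elems)

def is_half_iso(p):  # p[i] = image of element i
    return all(p[mul[x][y]] in (mul[p[x]][p[y]], mul[p[y]][p[x]])
               for x in range(n) for y in range(n))
def is_iso(p):
    return all(p[mul[x][y]] == mul[p[x]][p[y]] for x in range(n) for y in range(n))
def is_anti(p):
    return all(p[mul[x][y]] == mul[p[y]][p[x]] for x in range(n) for y in range(n))

for p in permutations(range(n)):
    if is_half_iso(p):
        assert is_iso(p) or is_anti(p)  # no proper half-isomorphisms exist
print("every half-isomorphism of S3 is an isomorphism or an anti-isomorphism")
```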
We will call a half-isomorphism which is neither an isomorphism nor an anti-isomorphism a proper half-isomorphism. (In [5,6], the word "nontrivial" is used instead.) Gagola and Giuliani also showed that there exist Moufang loops of even order with proper half-automorphisms [6].
The next result in the same vein was by Grishkov et al. A loop is automorphic if all of its inner mappings are automorphisms [3,11].
Proposition 1.3 ([7]). Every half-automorphism of a finite automorphic Moufang loop is either an automorphism or an anti-automorphism.
Grishkov et al conjectured that the finiteness assumption can be dropped, and that the corresponding result holds for all half-isomorphisms.
Our main result simultaneously generalizes Propositions 1.1, 1.2 and 1.3, and as a byproduct answers the conjecture of [7] in the affirmative.
To state the main result, we first recall that the nucleus of a loop Q is defined by N(Q) = {a ∈ Q : ax · y = a · xy, xa · y = x · ay, xy · a = x · ya for all x, y ∈ Q}.

Theorem 1.4 (Main Theorem). Let Q be a Moufang loop such that the squaring map x ↦ x² is surjective in the factor loop Q/N(Q). Then every half-isomorphism of Q onto a Moufang loop is either an isomorphism or an anti-isomorphism.

We conclude this introduction with some motivational remarks. Scott's original result might seem at first to be a curiosity, but there is interest in it centered around the result of Formanek and Sibley [4] that the group determinant determines a group. A shorter and more constructive proof of [4] was later given by Mansfield [13]. Hoehnke and Johnson generalized this to show that the 1-, 2-, and 3-characters of a group determine the group [8]. A more explicit use of the fact that group half-isomorphisms are either isomorphisms or anti-isomorphisms can be found in [8], which cites the aforementioned exercise in (the 1970 French edition of) [1].
Loops have determinants as well [9], and all the results on half-isomorphisms of Moufang loops are motivated by the following open question: Let M be a class of Moufang
Proper Half-isomorphisms
Our goal in this section is Theorem 2.6, which describes necessary conditions for the existence of a proper half-isomorphism between Moufang loops. We start by expanding upon a lemma of Scott.
Since left and right divisions can be expressed in terms of multiplication and inverses in diassociative loops, every element of ⟨X⟩ is a word w involving only multiplications and inverses of elements from X, parenthesized in some way. Since (ϕx)⁻¹ = ϕ(x⁻¹) by (iii), we can assume that X = X⁻¹ and that no inverses occur in w. Suppose that w has leaves x₁, . . . , xₙ ∈ X, possibly with repetitions. Applying ϕ to w yields a term with leaves ϕ(x₁), . . . , ϕ(xₙ), hence an element of ⟨ϕ(X)⟩. For the converse, consider a word w in ϕ(x₁), . . . , ϕ(xₙ). We prove by induction on the height of w that w ∈ ϕ(⟨X⟩). If w = ϕx, there is nothing to prove. Suppose that w = ϕu · ϕv for some u, v ∈ ⟨X⟩. If w = ϕ(vu), we are done. Otherwise ϕ(vu) = ϕv · ϕu, and (ii) implies w = ϕ(uv).
A semi-homomorphism ϕ : Q → Q′ of diassociative loops is a mapping satisfying ϕ(1) = 1 and ϕ(xyx) = ϕx · ϕy · ϕx for all x, y ∈ Q. From Lemma 2.2, we immediately obtain that every half-isomorphism of diassociative loops is a semi-homomorphism.
Lemma 2.4. Let ϕ : Q → Q′ be a half-isomorphism of Moufang loops, and let A = {a ∈ Q : ϕ(ax) = ϕa · ϕx for all x ∈ Q} and B = {a ∈ Q : ϕ(ax) = ϕx · ϕa for all x ∈ Q}. Then A and B are subloops of Q.
Proof. By Lemma 2.2, it is clear that both A and B are closed under taking inverses a ↦ a⁻¹. Thus it remains to show that both A and B are closed under multiplication. Fix a, b ∈ A. Then for all x ∈ Q, a calculation shows that ϕ(ab · x) = ϕ(ab) · ϕx, and so ab ∈ A. On the other hand, if a, b ∈ B, then (2.1) similarly gives ϕ(ab · x) = ϕx · ϕ(ab) for all x ∈ Q, and so ab ∈ B.
Lemma 2.5. No loop is the union of two proper subloops.
Proof. This is a standard exercise in group theory and the same proof holds here. For a contradiction, suppose that A, B are proper subloops of a loop Q with Q = A ∪ B. Fix a ∈ A\B and b ∈ B\A. We have ab ∈ A or ab ∈ B since Q = A ∪ B. However, ab ∈ A implies b ∈ A since A is closed under left division, and similarly ab ∈ B implies a ∈ B.
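For concrete small loops, Lemma 2.5 can also be confirmed by exhaustive search. The sketch below (an illustration on the same toy Cayley table as above, not part of the paper) enumerates all subloops and verifies that no two proper ones cover the loop.

```python
from itertools import combinations

n = 4
mul = [[(a + b) % n for b in range(n)] for a in range(n)]  # (Z_4, +)
identity = next(e for e in range(n) if all(mul[e][x] == x for x in range(n)))

def subloops(mul):
    m = len(mul)
    found = []
    for r in range(1, m + 1):
        for cand in combinations(range(m), r):
            s = set(cand)
            # In a finite loop, a subset containing 1 that is closed under
            # multiplication is automatically closed under both divisions.
            if identity in s and all(mul[a][b] in s for a in s for b in s):
                found.append(s)
    return found

proper = [s for s in subloops(mul) if len(s) < n]
# Lemma 2.5: no union of two proper subloops equals the whole loop.
assert all(a | b != set(range(n)) for a in proper for b in proper)
```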
A version of the following result (without the third part of the conclusion) was proved in [5, Proposition 5]. Their proof used the finiteness assumption of Proposition 1.2 in an essential way. Our statement and proof make no reference to cardinality.
Theorem 2.6. Suppose that there is a proper half-isomorphism of Moufang loops Q, Q′. Then there is a proper half-isomorphism ϕ : Q → Q′ and elements a, b, c ∈ Q such that the following properties hold: (i) ϕ ↾ ⟨a, b⟩ is an isomorphism and ⟨a, b⟩ is nonabelian, (ii) ϕ ↾ ⟨a, c⟩ is an anti-isomorphism and ⟨a, c⟩ is nonabelian, (iii) ϕ ↾ ⟨b, c⟩ is an isomorphism and ⟨b, c⟩ is nonabelian.
Proof. Assume ϕ : Q → Q′ is a proper half-isomorphism, and let A, B be defined as in Lemma 2.4. Since ϕ is proper, both A and B are proper subloops by Lemma 2.4. By Lemma 2.5, Q ≠ A ∪ B, so there is an element a ∈ Q which is in neither subloop. Since a ∉ B, there exists b ∈ Q such that ϕ(ab) = ϕa · ϕb ≠ ϕb · ϕa. By Lemma 2.2, ϕ ↾ ⟨a, b⟩ is an isomorphism and ⟨a, b⟩ is nonabelian, proving (i). Similarly, since a ∉ A, there exists c ∈ Q such that ϕ(ac) = ϕc · ϕa ≠ ϕa · ϕc, so ϕ ↾ ⟨a, c⟩ is an anti-isomorphism and ⟨a, c⟩ is nonabelian, proving (ii).
Let J : x ↦ x⁻¹ denote the inversion permutation on Q. Now, ϕ ↾ ⟨b, c⟩ is either an isomorphism or an anti-isomorphism by Lemma 2.2. If the latter case holds, then ϕ ∘ J is still a proper half-isomorphism, (ϕ ∘ J) ↾ ⟨b, c⟩ is an isomorphism, (ϕ ∘ J) ↾ ⟨a, b⟩ is an anti-isomorphism and (ϕ ∘ J) ↾ ⟨a, c⟩ is an isomorphism. In particular, conditions (i) and (ii) hold for ϕ ∘ J with the roles of b and c reversed. Thus there is no loss of generality in assuming that ϕ ↾ ⟨b, c⟩ is an isomorphism.
Given a proper half-isomorphism ϕ : Q → Q′ of Moufang loops, we will refer to a triple (a, b, c) of elements a, b, c ∈ Q satisfying the conditions of Theorem 2.6 as a Scott triple, since the idea of considering triples satisfying conditions (i) and (ii) of the theorem goes back to Scott's original paper [17].
Theorem 2.7. Let ϕ : Q → Q′ be a proper half-isomorphism of Moufang loops and let (a, b, c) be a Scott triple. Then ⟨a², c⟩ and ⟨c², a⟩ are abelian groups.
Proof. Note that if (a, b, c) is a Scott triple then (c, b, a) is also a Scott triple. It therefore suffices to verify that ⟨a², c⟩ is an abelian group. For k ≥ 1, a calculation yields ϕ(b · aᵏc · b) = ϕ(bc) · ϕ(aᵏb).
Proof of the Main Theorem
In this section we prove Theorem 1.4. For a contradiction, suppose that Q, Q′ are Moufang loops, N = N(Q) is the nucleus of Q, the squaring map in Q/N is surjective, and let ϕ : Q → Q′ be a proper half-isomorphism. By Theorem 2.6, Q contains a Scott triple (a, b, c). By the assumption on Q/N, there is d ∈ Q such that d²N = (dN)² = aN, and so there is also n ∈ N such that d² = an.
Throughout the proof we will use the observation that ⟨n, x, y⟩ is a subgroup of Q for any x, y ∈ Q, thanks to Moufang's theorem.
Lemma 3.1. In the above situation, (d, b, c) is a Scott triple, and ⟨n, d⟩ and ⟨n, c⟩ are abelian groups.
Proof. Since ⟨n, d, b⟩ is a group, ⟨a, b⟩ is nonabelian, ⟨a, b⟩ ≤ ⟨n, d, b⟩ and ϕ ↾ ⟨a, b⟩ is an isomorphism, we have that ϕ ↾ ⟨n, d, b⟩ is an isomorphism, and therefore ϕ ↾ ⟨d, b⟩ is an isomorphism.
Since ⟨n, d, c⟩ is a group, ⟨a, c⟩ is nonabelian, ⟨a, c⟩ ≤ ⟨n, d, c⟩ and ϕ ↾ ⟨a, c⟩ is an anti-isomorphism, we have that ϕ ↾ ⟨n, d, c⟩ is an anti-isomorphism, and therefore ϕ ↾ ⟨d, c⟩ is an anti-isomorphism.
Since ⟨n, b, c⟩ is a group, ⟨b, c⟩ is nonabelian, ⟨b, c⟩ ≤ ⟨n, b, c⟩ and ϕ ↾ ⟨b, c⟩ is an isomorphism, we have that ϕ ↾ ⟨n, b, c⟩ is an isomorphism. Now ϕ ↾ ⟨n, d⟩ is both an isomorphism and an anti-isomorphism, and so ⟨n, d⟩ is an abelian group. Also, ϕ ↾ ⟨n, c⟩ is both an isomorphism and an anti-isomorphism, and so ⟨n, c⟩ is an abelian group.
If ⟨c, d⟩ were abelian, then from the above it would follow that ⟨n, c, d⟩ is abelian, contradicting the fact that ⟨a, c⟩ is nonabelian. Thus ⟨c, d⟩ is nonabelian.
Let us now finish the proof of Theorem 1.4. By Lemma 3.1, ⟨n, c⟩ is abelian, ⟨n, d⟩ is abelian, and (d, b, c) is a Scott triple. By Theorem 2.7, ⟨d², c⟩ = ⟨an, c⟩ is abelian. Finally, na = nd²n⁻¹ = d² = an because ⟨n, d⟩ is abelian, so ⟨n, a⟩ = ⟨n, an⟩ is abelian. Altogether, ⟨n, an, c⟩ is an abelian group. But then ⟨a, c⟩ ≤ ⟨n, an, c⟩ is abelian, a contradiction with (a, b, c) being a Scott triple.
Remarks and open problems
In this section we examine hypotheses and generalizations of Theorem 1.4. The somewhat technical assumption of Theorem 1.4 that squaring in Q/N(Q) is surjective can be replaced by the assumption that squaring in Q/N(Q) is bijective. We do not know if these two assumptions are equivalent in Moufang loops. It is easy to show that the kernel Ker(ϕ) = {a ∈ Q : ϕa = 1} of a half-homomorphism ϕ of loops is a subloop:
Proof. Let K = Ker(ϕ) and a, b ∈ K. Then ϕ(ab) ∈ {ϕa · ϕb, ϕb · ϕa} = {1}, so ab ∈ K. Denote by a/b the unique element of Q such that (a/b)b = a. Then 1 = ϕa = ϕ((a/b)b) is equal to ϕ(a/b) · ϕb = ϕ(a/b) or to ϕb · ϕ(a/b) = ϕ(a/b). In either case, a/b ∈ K follows. Similarly for the left division.
However, we do not know the answer to the following problem, and hence whether Scott's result on kernels can be generalized. The present paper and all proofs in this context rely rather heavily on the assumption that one is working with Moufang loops; the analogous question for wider classes of loops therefore suggests itself. Finally, the following example shows that the target Q′ of a half-isomorphism Q → Q′ need not be a diassociative loop even if Q is a group. Here, (Q, ·) is the symmetric group S₃, and (Q, ∗) is an automorphic loop that is not diassociative, as witnessed by 3 ∗ (3 ∗ 1) = 3 ∗ 5 = 2 ≠ 1 = 0 ∗ 1 = (3 ∗ 3) ∗ 1.
"year": 2015,
"sha1": "5e08ee82634c856976ad9a9eb3cfda013e61f4e0",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jalgebra.2015.10.022",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "5e08ee82634c856976ad9a9eb3cfda013e61f4e0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Immunoproteomic analysis of the excretory-secretory products of Trichinella pseudospiralis adult worms and newborn larvae
The nematode Trichinella pseudospiralis is an intracellular parasite of mammalian skeletal muscle cells and exists in a non-encapsulated form. Previous studies demonstrated that T. pseudospiralis could induce a lower host inflammatory response. Excretory-secretory (ES) proteins as the most important products of host-parasite interaction may play the main functional role in alleviating host inflammation. However, the ES products of T. pseudospiralis early stage are still unknown. The identification of the ES products of the early stage facilitates the understanding of the molecular mechanisms of the immunomodulation and may help finding early diagnostic markers. In this study, we used two-dimensional gel electrophoresis (2-DE)-based western blotting coupled with matrix-assisted laser desorption/ionization time of flight mass spectrometry (MALDI-TOF/TOF-MS/MS) to separate and identify the T. pseudospiralis adult worms ES products immunoreaction-positive proteins. In total, 400 protein spots were separated by 2-DE. Twenty-eight protein spots were successfully identified using the sera from infected pigs and were characterized to correlate with 12 different proteins of T. pseudospiralis, including adult-specific DNase II-10, poly-cysteine and histidine-tailed protein isoform 2, serine protease, serine/threonine-protein kinase ULK3, enolase, putative venom allergen 5, chymotrypsin-like elastase family member 1, uncharacterized protein, peptidase inhibitor 16, death-associated protein 1, deoxyribonuclease II superfamily and golgin-45. Bioinformatic analyses showed that the identified proteins have a wide diversity of molecular functions, especially deoxyribonuclease II (DNase II) activity and serine-type endopeptidase activity. Early candidate antigens from the ES proteins of T. pseudospiralis have been screened and identified. Our results suggest these proteins may play key roles during the T. pseudospiralis infection and suppress the host immune response. Further, they are the most likely antigen for early diagnosis and the development of a vaccine against the parasite.
Background
Trichinellosis is an important food-borne parasitic zoonosis that infects humans and other mammals, with outbreaks in many parts of the world [1,2]. Humans acquire the disease by consuming raw or undercooked meat of pigs and other animals containing the infective larvae of Trichinella spiralis [3]. Nine different species and three genotypes have been identified to date. Among them, T. nativa, Trichinella T6, Trichinella T9, T. murrelli, Trichinella T8, T. britovi, T. patagoniensis, T. nelsoni and T. spiralis are encapsulated in the host muscle tissues with the formation of collagen layer. Trichinella pseudospiralis, T. papuae and T. zimbabwensis do not induce formation of the collagen layer in the nurse cell [4]. Trichinella spiralis and Trichinella pseudospiralis are independent and typical species in the genus Trichinella. These two species are similar but differ in certain host responses, such as capsule morphology, gene expression, immunological responses and ES products.
After being ingested with infected muscle tissue, the muscle infective larvae (ML) are released and invade the small intestinal epithelium, where larvae complete four moults in 30-40 h and develop into adult worms. The female begins to release the newborn larvae (NBL) over a period of 5-10 days. The NBL penetrate the intestinal wall through the blood and lymphatic circulation into striated muscle, where they grow and form encapsulated and non-encapsulated forms [5]. Trichinella pseudospiralis has a worldwide distribution in Europe, Asia, North America and Australia. It has been detected to infect sylvatic predators such as pigs and rats [6,7], lynx [8] and red foxes [9]. Moreover, this species can infect humans [10] and is the only species that infects birds [11].
Trichinella spiralis and T. pseudospiralis ES products are very similar but are not identical in cDNA sequence, molecular mass, antigenicity and peptide maps [12][13][14][15]. ES products are considered to be directly exposed to the host's immune system, which induces the host immune responses. Consequently, ES products may play a crucial role in the invasion and development of Trichinella larvae [16,17]. The ES products of T. spiralis include some functional proteins, such as heat shock proteins [18], endonucleases [19], proteinases [20], protein kinases [21,22], proteinase inhibitors [23], DNA-binding proteins [24], and 5′-nucleotidase [25]. Only a few ES products of T. pseudospiralis have been described to date, including gp 38, TppSP-1, the 45 kDa antigen, TpSerP and the 21 kDa ES protein [17]. The study of the ES products of T. pseudospiralis that modulate the host environment to allow parasite development and survival is of fundamental importance to identify the mechanisms leading to immunosuppression and relief of the host inflammatory response in the T. pseudospiralis-infected host, and may provide good markers for diagnosis and candidates for drug and vaccine development.
Recent advances in technology, such as western blotting, indirect immunofluorescence, enzyme-linked immunosorbent assay (ELISA) and proteomics, have been utilized to identify the ES proteins of Trichinella spp. [26,27]. Proteomics-based analyses involve the simultaneous separation, visualization and quantification of thousands of proteins. More importantly, the combination of proteomics with immunoblotting assays may discover more species-specific antigens than one-dimensional analysis can. 2-DE and western blotting combined with MALDI-TOF/TOF-MS/MS are an effective approach for the high-resolution analysis and identification of complex groups of ES products.
Parasites and animals
Trichinella pseudospiralis isolates preserved in the Food-Borne Parasitology Laboratory of the Key Laboratory for Zoonoses Research, Ministry of Education, Institute of Zoonoses, Jilin University, were genotyped and confirmed by the OIE Collaborating Center on Foodborne Parasites in the Asian-Pacific Region in August 2014. Trichinella pseudospiralis (ISS13) ML were isolated by pepsin-HCl digestion from infected mice at 30 days post-infection (dpi). Adult worms and NBL were isolated from the infected mice intestines at 6 dpi [28].
Collection and preparation of T. pseudospiralis-infected animal sera
Trichinella pseudospiralis-infected pig sera were collected from 4 pigs orally infected with 10,000 worms/pig for 26 days. Uninfected sera from the same pigs before infection were collected as negative controls.
Collection of T. pseudospiralis adult worms and NBL proteins sample preparation
The collected adult worms and NBL were washed several times with sterile PBS and then were cultured in pre-warmed RPMI-1640 supplemented with 100 U penicillin/ml and 100 μg streptomycin/ml. The adult worms and NBL were incubated at 5000 worms/ml for 20 h at 37°C in 5% CO₂. After incubation, the media containing the ES proteins were centrifuged at 1000 rpm at 4°C for 5 min, and the supernatant containing the ES products was filtered through a 0.2 μm membrane into an ultrafiltration device. The ES products were centrifuged at 5000 rpm at 4°C and were concentrated to 100 μl. The total protein concentration was determined by the Bradford assay [29].
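As a side note on the Bradford step, a minimal sketch of the standard-curve back-calculation is given below; all concentrations and absorbance readings are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical BSA standard curve: absorbance at 595 nm vs. concentration.
std_conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5])     # mg/ml BSA standards
std_abs = np.array([0.00, 0.12, 0.24, 0.47, 0.70])  # A595 readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit

def bradford_conc(a595, dilution_factor=1.0):
    """Back-calculate protein concentration (mg/ml) from a sample's A595."""
    return (a595 - intercept) / slope * dilution_factor

print(f"{bradford_conc(0.33):.2f} mg/ml")
```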
Two-dimensional gel electrophoresis
Three replicates of the ES proteins samples were run in parallel on three immobilized pH gradient (IPG) strips. Normally, 100 μl of ES proteins were diluted to 360 μl in rehydration buffer and were loaded into the precast IPG strips (pH 4-7 NL, 17 cm, GE Healthcare, Fairfield, USA). ES proteins were gradient separated by isoelectric focusing (IEF) (GE ETTAN IPGPHOR3). The three IPG strips were rehydrated overnight for 12 h at 20°C, followed by IEF under a running parameter (a gradient at 500 V and 1 h for 500 Vh, a gradient at 1000 V and 1 h for 1000 Vh, a gradient at 8000 V and 3 h for 24,000 Vh, a step and hold at 8000 V and 2.4 h for 19,200 Vh, and a step and hold at 500 V and 0.5 h for 250 Vh) to achieve a final level of approximately 45,000 Vh and 8 h (using a limit of 50 μA/strip). After IEF, the IEF strips were first equilibrated in an 8 ml reducing buffer (6 M urea, 2% sodium dodecyl sulfate (SDS), 50 mM Tris-HCl pH 8.8, 30% glycerol and 100 mM dithiothreitol) approximately 15 min, followed by an 8 ml alkylation buffer (6 M urea, 2% SDS, 50 mM Tris-HCl, pH 8.8, 30% glycerol and 250 mM iodoacetamide) for approximately 15 min. Then, the IPG strips were processed by the second-dimensional electrophoresis.
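For readers following the volt-hour bookkeeping, the sketch below re-derives the stated ~45,000 Vh total from the phase list above; treating each phase's nominal voltage as held for its full duration is an assumption implied by the per-phase Vh figures.

```python
# IEF program from the text: (voltage in V, duration in hours) per phase.
program = [(500, 1.0), (1000, 1.0), (8000, 3.0), (8000, 2.4), (500, 0.5)]

# Accumulate volt-hours, assuming the nominal voltage is held per phase.
total_vh = sum(volts * hours for volts, hours in program)
print(total_vh)  # 44950.0, i.e. approximately the 45,000 Vh target
```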
The three equilibrated IPG strips were loaded onto 12% SDS-PAGE gels prepared by mixing 16 ml of 400 g/l acrylamide/bisacrylamide (29:1 by weight), 10 ml of 1.5 mol/l Tris-HCl (pH 8.8), 13.48 ml of distilled and deionized water, 100 μl of ammonium persulfate (Amersham, Fairfield, USA) and 20 μl of tetramethylethylenediamine (TEMED, Amersham). Next, 10 g/l low-molecular-weight agarose in SDS electrophoresis buffer was boiled to seal the equilibrated IPG strips to the top of the resolving gel. Gels were run at 2 W/gel for 60 min and then at a constant 17 W/gel until the dye front reached the bottom. The proteins of one gel were stained with Coomassie brilliant blue G-250 (Bio Basic, Amherst, USA) for 6 h. Images of the gels were captured using a MICROTEK ScanMaker i800.
Western blotting
The separated protein spots by 2-DE were transferred to a polyvinylidene fluoride (PVDF) membrane with a wet transfer cell (400 mA, 2.5 h). The PVDF membranes with the ES proteins were blocked with 5% skim milk in TBST (80 ml) for 1 h at room temperature. The TBST-blocked PVDF membrane was incubated (overnight, 4°C) with T. pseudospiralis-infected swine pooled sera diluted 1:1000 in TBST. After completion of the incubation, the PVDF membrane was washed with the TBST solution (5 min × 3), and then the membrane was again incubated with the horseradish peroxidase-conjugated goat anti-mouse IgG (BioRad, Hercules, USA) (1:8000, 1 h, room temperature). The membrane was washed with TBST solution (5 min × 3) and visualized. Uninfected sera from the same pigs were used as a parallel negative-control. The negative-control experiment used the same method as mentioned above.
The scanned images of the Coomassie brilliant bluestained 2-DE gels combined with the visualized western blot membranes were input to Image Master 2D Platinum 5.0 (GE) to identify species-specific spots.
Proteins identification using MALDI-TOF/TOF-MS/MS analysis
The 2-DE gel spots corresponding to the T. pseudospiralis positive-serum western blot spots were excised, and the proteins were digested in-gel with trypsin (Promega, Madison, USA). The samples, mixed with an equivalent matrix solution (α-cyano-4-hydroxycinnamic acid), were applied for further MALDI-TOF/TOF-MS/MS analysis using a fuzzy logic feedback control system (Ultraflex III TOF/TOF mass spectrometer, Bruker, Karlsruhe, Germany). MS spectral data were acquired from the samples, and an automatically generated MS/MS list was further analyzed. All mass spectra were recorded in positive reflector mode and generated by accumulating data from 1000 laser shots. The following threshold criteria and settings were used: mass range of 800-4000 Da for detection, UV wavelength of 355 nm, laser frequency of 50 Hz, repetition rate of 200 Hz and accelerating voltage of 20,000 V. Peptide mass fingerprint (PMF) data were matched to the UniProt Trichinella and NCBInr T. pseudospiralis databases using the ProFound program with a 50 ppm mass tolerance. Data were processed, and proteins were unambiguously identified by searching against a comprehensive non-redundant sequence database using the MASCOT search engine (http://www.matrixscience.com). From the Mascot search results, the protein probability score for the match, molecular weight (MW), isoelectric point (pI), number of peptide matches and percentage of the total sequence covered by the peptides were recorded. Protein scores greater than 23 (P < 0.05) were considered significant.
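To make the score-filtering rule concrete, a minimal sketch follows; the records are placeholders rather than real Mascot output, and only the threshold (score > 23, P < 0.05) is taken from the text.

```python
# Illustrative search-result records (placeholders, not real Mascot output).
hits = [
    {"spot": 1, "protein": "adult-specific DNase II-10", "score": 87},
    {"spot": 4, "protein": None, "score": 15},           # unmatched spot
    {"spot": 15, "protein": "serine protease", "score": 54},
]

SCORE_THRESHOLD = 23  # scores above this correspond to P < 0.05 in the text
significant = [h for h in hits if h["protein"] and h["score"] > SCORE_THRESHOLD]
for h in significant:
    print(h["spot"], h["protein"], h["score"])
```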
Bioinformatics analysis
Gene ontology (GO) analysis was used to further uncover the molecular functions and biological processes of T. pseudospiralis ES proteins with the QuickGO online software (http://www.ebi.ac.uk/QuickGO/WebServices.html). Trichinella pseudospiralis ES proteins were divided into different clusters according to molecular function and biological process.
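A minimal sketch of this clustering step is shown below; the protein-to-GO-term mapping used here is illustrative (only the GO identifiers quoted elsewhere in the text are real), not the study's annotation.

```python
from collections import Counter

# Illustrative molecular-function assignments (mapping is hypothetical).
go_mf = {
    "adult-specific DNase II-10": "GO:0004531 deoxyribonuclease II activity",
    "deoxyribonuclease II superfamily": "GO:0004531 deoxyribonuclease II activity",
    "serine protease": "GO:0004252 serine-type endopeptidase activity",
    "chymotrypsin-like elastase family member 1": "GO:0004252 serine-type endopeptidase activity",
}

# Cluster proteins by term and report each category's share.
counts = Counter(go_mf.values())
total = sum(counts.values())
for term, k in counts.most_common():
    print(f"{term}: {k}/{total} ({k / total:.0%})")
```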
Statistical analysis
All mass spectra data were analysed using Mascot software. Mascot adopts a probabilistic scoring algorithm for the identification of proteins, which was adapted from the MOWSE algorithm. P < 0.05 was considered statistically significant.
Trichinella pseudospiralis ES proteins analysis by 2-DE
In an attempt to identify species-specific parasite antigens, adult worm ES proteins were separated by 2-DE on a 17 cm, pH 4-7 IPG strip (preliminary experiments showed that protein spots were mainly concentrated in the pH 4-7 range) and stained with Coomassie brilliant blue G-250 (Fig. 1a). More than 400 spots were detected, with pI varying from 4 to 7 and MW from 10 to 170 kDa. Spots with a significant decrease (or increase) in their relative abundance were considered differentially expressed proteins if P < 0.05 and the two spots showed at least a 1.5-fold difference in volume.
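The differential-expression rule above can be expressed compactly as follows; the spot volumes in the example calls are hypothetical.

```python
# Flag differentially expressed spots per the stated criteria:
# P < 0.05 and a >= 1.5-fold difference in relative spot volume.
def is_differential(p_value, vol_a, vol_b, fold=1.5, alpha=0.05):
    ratio = max(vol_a, vol_b) / min(vol_a, vol_b)
    return p_value < alpha and ratio >= fold

print(is_differential(0.03, 1200.0, 2000.0))  # True: 1.67-fold, P = 0.03
print(is_differential(0.03, 1200.0, 1500.0))  # False: only 1.25-fold
```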
Immunoblot analysis of ES proteins of adult worms and NBL of T. pseudospiralis
The results of the immunoblot of adult worm ES proteins are shown in Fig. 1b. Approximately 28 immunoreactive protein spots were identified by T. pseudospiralis-positive serum collected at 26 dpi and matched to the corresponding protein spots in Coomassie brilliant blue-stained gels. These matched spots were named spots 1 to 28 and were selected for further identification by MALDI-TOF/TOF-MS/MS. These immunoreactive protein spots were observed to have MW ranging from 20 kDa to 130 kDa and pI values between 4 and 7. Most of these spots had observed MW ranging from 20 kDa to 40 kDa and pI values between 5 and 6. No proteins reacted to uninfected pig sera (Fig. 1c).
Identification of T. pseudospiralis ES proteins by MALDI-TOF/TOF-MS/MS
Twenty-four of the 28 differentially expressed proteins were successfully identified by PMF, corresponding to 12 species-specific proteins. These proteins are listed in Table 1. Three criteria were used to identify species-specific proteins. First, several specific peptides for a given protein were found. Secondly, protein scores greater than 23 (P < 0.05) were considered significant. Finally, the observed MW and pI of the protein measured by 2-DE were in agreement with the calculated values. Spots 1, 12, 21, 23 and 27 were identified as adult-specific DNase II-10; spots 5, 7, 13, 16 and 26 were identified as poly-cysteine and histidine-tailed protein isoform 2; spots 15, 17 and 24 were identified as serine protease; spots 3, 19 and 28 were identified as serine/threonine-protein kinase ULK3; other spots were identified as enolase, golgin-45, putative venom allergen 5, deoxyribonuclease II superfamily, chymotrypsin-like elastase family member 1, uncharacterized protein, peptidase inhibitor 16 and death-associated protein 1. Spots 4, 6, 11 and 14 were unmatched to any T. spiralis or T. pseudospiralis sequence currently in the database. Fewer peptide matches and a lower percent coverage were tolerated in making a putative assignment of ES protein identity.
Functional categories of ES proteins from adult worms and NBL of T. pseudospiralis by gene ontology
To further understand the functions of the ES proteins identified by early infection sera in this study, gene ontology annotation was performed. We utilized the UniProt database, and these 12 species-specific proteins were classified by molecular function and biological process according to the GO hierarchy (http://www.uniprot.org/). For the molecular function ontology, the proteins are related to deoxyribonuclease II activity (GO: 0004531, 24%), serine-type endopeptidase activity (GO: 0004252, 16%), DNA binding (GO: 0003677, 12%), nuclease activity (GO: 0004518, 12%), peptidase activity and protein kinase activity, among others (Fig. 2a); the biological process categories included Golgi to plasma membrane protein transport (GO: 0043001, 5.56%), among others (Fig. 2b).
Discussion
Previous reports have indicated that the ES products of parasites play important roles in the development, adhesion, proteolysis and extracellular matrix organization of the organism [30]. During infection, the ES proteins may control host immune reaction and recognition, acting as virulence factors or immune regulators [31]. Non-encapsulated species induce less inflammation than do encapsulated species [32], indicating that the ES proteins differ between these organisms. Thus, studies identifying the ES proteins of adult worms and NBL of T. pseudospiralis are meaningful for revealing intestinal-stage host-parasite interactions and understanding the phenomenon of reduced intestinal inflammation. Also, the ES proteins identified in our study might be candidates for use in developing early diagnostic tests and effective vaccines.
In this study, we successfully employed 2-DE and western blotting combined with MALDI-TOF/TOF-MS/MS to identify specific ES proteins of adult worms and NBL of T. pseudospiralis. Approximately 24 matched protein spots were recognized by PMF and were characterized to correlate with 12 different proteins. To further elucidate the functions of these 12 different proteins, we utilized the QuickGO online software, and the 12 proteins were categorized based on the GO annotation of biological process and molecular function. Most of the molecular functions of the ES proteins were DNase II activity and serine-type endopeptidase activity. Several proteins had multiple spots with different pI and MW, such as adult-specific DNase II-10, poly-cysteine and histidine-tailed protein isoform 2, serine/threonine-protein kinase ULK3 and serine protease. These proteins might undergo alternative splicing, post-translational modifications, binding to different co-factors or protein processing [17,33], possibly involving biological regulation, the glycolytic process, metabolic processes, protein folding and proteolysis.
Previous studies have indicated that parasitic ES products are composed of many functional proteins involved in host-parasite interactions. In this study, many functional proteins were also identified, especially multiple isoforms of DNase II and serine proteases. DNase II and serine proteases have been proved critical for the invasion of the host and for modulating host immune responses [34]. DNase II is a well-known acidic endonuclease that fulfils a variety of functions, from degrading DNA associated with apoptosis and dietary DNA to modulating host immune responses. The multiple isoforms of DNase II identified in T. pseudospiralis ES products suggest that DNase II may function as a self-protective molecule, alleviating host intestinal inflammation by cleaving DNA from apoptotic host cells during the larval invasion of epithelial cells. Furthermore, a novel microbicidal mechanism beyond cell death, recently described as extracellular traps (ETs), was identified in many innate effector cells. ETs are web-like structures composed of chromatin and granular and cytoplasmic proteins, which ensnare and kill microorganisms including bacteria, fungi and parasites [35][36][37]. However, some microorganisms such as Staphylococcus aureus, Streptococcus pneumoniae, Vibrio cholerae and Leishmania amazonensis have been proved to express endonucleases that efficiently degrade DNA filaments from ETs, allowing these microorganisms to escape the toxic effects of ETs and to invade or spread throughout the host [38][39][40][41]. The DNase II secreted by T. pseudospiralis may have a similar function, allowing escape from the fatal attraction of ETs.
Fig. 2 Gene ontology categories of proteins of the adult worm excretory-secretory products of Trichinella pseudospiralis. The identified proteins were classified into molecular function (a) and biological process (b) by QuickGO according to their gene ontology signatures.
Serine proteases reportedly are expressed at different stages and have different functions in establishing the parasitism of T. spiralis [42]. Indeed, serine proteinases partially purified from the ES products of T. spiralis adult worms exhibit strong biological activity, which plays a vital role in the degradation of intestinal tissues and helps parasites penetrate a diverse range of host tissue barriers, acquire nutrients and evade the host immune response [43]. Meanwhile, serine proteases are also involved in mediating apoptosis-like cell death and phagocytosis [44], which may contribute to parasite-induced immunosuppression. In this study, multiple serine proteases including a chymotrypsin-like elastase were identified by sera at 26 dpi of T. pseudospiralis infection, which might be correlated with the invasion of host enterocytes and the suppression of inflammation. However, the functions of serine protease families are strongly context-dependent in parasite infection, and further experimental analyses are necessary to improve the reliability of the functional interpretation of our results.
Compared with the ES products of T. pseudospiralis ML, many stage-specific molecules, such as adult-specific DNase II-10, poly-cysteine and histidine-tailed protein isoform 2, serine protease, serine/threonine-protein kinase ULK3, enolase, putative venom allergen 5, chymotrypsin-like elastase family member 1, an uncharacterized protein, peptidase inhibitor 16, death-associated protein 1, deoxyribonuclease II superfamily and golgin-45, were identified in adult worms. Apart from the DNase II and serine protease families, the poly-cysteine and histidine-tailed protein (PCHTP) was also identified in several spots. PCHTP is a metalloprotein belonging to a new nematode ES-specific family of poly-cysteine proteins, which are unique to the order Trichocephalida [45,46]. In previous studies, PCHTP sequence analysis showed typical metal-binding residues, suggesting that multiple potential metal-binding sites may be formed [47,48]. These metal-binding sites consist of cysteine-rich and poly-histidine regions that can bind different bivalent metal ions such as Fe, Ni, Cu, Co and Zn, and for the first time they have been identified in the ES products of non-encapsulated larvae; they most likely play a role in transporting or storing metal ions [45,46]. Another three proteins, namely golgin-45, venom allergen 5 and enolase, were also identified in this study. Golgin-45 is located on the surface of the medial cisternae of the Golgi complex and maintains the structure of the Golgi and secretory protein transport [49]. The venom allergen 5 gene is identified in many metazoans as a major allergen and is often associated with allergic responses in humans [50,51]. Venom allergen 5 proteins are part of a cocktail of salivary proteins believed to function either in suppressing the host immune system or in preventing clotting to prolong feeding [52]. The putative venom allergen 5 secreted by T. pseudospiralis may help adult worms and NBL evade the host immune response. Enolase is a multifunctional protein that catalyzes the reversible dehydration of 2-phospho-D-glycerate (2PGA) to phosphoenolpyruvate (PEP). Parasitic enolase can enhance the activation of plasminogen [53]. Therefore, as a component of ES products, enolase has been confirmed to be a very important virulence factor during invasion into the host [54]. Furthermore, Wang et al. recently reported invasion, immune evasion and pathogenesis mechanisms through which Clonorchis sinensis enolase (Csenolase) from ES products, participating in plasminogen acquisition and proteolysis, may enhance extracellular matrix degradation and control parasite growth [55]. As a potential vaccine, Csenolase has shown high immunogenicity and has exhibited considerable protective efficacy [56,57]. In the same way, enolase from T. pseudospiralis may contribute to the tissue migration of NBL and host-parasite interactions and may be a vaccine candidate or diagnostic protein.
Conclusions
In summary, 2-DE and western blotting combined with MALDI-TOF/TOF-MS/MS were used to screen early candidate antigens from the ES proteins of T. pseudospiralis adult worms in this study. In total, 12 different stage-specific proteins were identified, including adult-specific DNase II-10, poly-cysteine and histidine-tailed protein isoform 2, serine protease, serine/threonine-protein kinase ULK3, enolase, putative venom allergen 5, chymotrypsin-like elastase family member 1, an uncharacterized protein, peptidase inhibitor 16, death-associated protein 1, deoxyribonuclease II superfamily and golgin-45. The identification of ES products is critical to understanding the host-parasite interaction and may have broader implications for research on the mechanisms of immunosuppression.
"year": 2017,
"sha1": "b4a990203d4e1d8faeccaebcef23102c732e0604",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13071-017-2522-9",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "b4a990203d4e1d8faeccaebcef23102c732e0604",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Extent of surgical trauma may not be a key factor in Medication-related osteonecrosis of the jaw – a pilot study
The pathogenesis of Medication-Related Osteonecrosis of the Jaw (MRONJ) is not fully understood; however, surgical trauma is thought to play a role. Therefore, the aim of the current pilot study was to compare the incidence and characteristics of MRONJ following single or multiple molar tooth extractions in a rat model. To this aim, twenty male Lewis rats were treated with subcutaneous injections of zoledronic acid (ZA), an established bone anti-resorptive agent (7.5 µg/kg), and dexamethasone (Dex) (1 mg/kg), or saline, once a week, for 11 weeks. At three weeks, the first or both the first and second maxillary molar teeth were extracted. Eight weeks following extraction, rats were sacrificed and extraction sites were evaluated. Clinical macroscopic examination showed MRONJ-like lesions, with exposed bone, in all single-extraction ZA/Dex-treated rats. In the control and multiple-extraction ZA/Dex-treated groups, none of the rats showed visible signs of MRONJ. Histological characteristics of MRONJ were found in all ZA/Dex-treated rats (both single and multiple extractions), whereas rats treated with saline showed almost no empty lacunae or necrotic bone. In conclusion, the extent of the surgical field may not be the key factor in MRONJ development, since only rats with single tooth extraction displayed exposed bone. However, histological characteristics were identified in both models. Therefore, preclinical studies that aim to evaluate histological features of MRONJ may use both models, whereas when clinically exposed bone is required, the single tooth extraction model appears to be preferable. Further large-scale studies are warranted to corroborate the present findings.
Introduction
Bisphosphonates (BSP) are used clinically to manage cancer-related conditions including hypercalcemia of malignancy and skeletal-related events associated with bone metastases in the context of solid tumors, including breast cancer, prostate cancer and lung cancer, as well as for the management of multiple myeloma [1]. Furthermore, BSP are widely used for the treatment of osteoporosis and other metabolic disorders by increasing bone mineral density, decreasing fracture risk, and inhibiting bone resorption [2].
BSP-related osteonecrosis of the jaw (BRONJ), which was first described by Marx in 2003, is a serious adverse effect of BSP therapy [3]. Most BRONJ cancer patients were treated with concurrent I.V. medication of nitrogen-containing BSP (such as zoledronate and pamidronate) and steroids (dexamethasone) [4]. Recently, with the discovery of osteonecrosis of the jaw in patients taking Receptor Activator of Nuclear Factor kappa-B Ligand (RANKL) antibodies and Vascular Endothelial Growth Factor (VEGF) antagonists, the definition of BRONJ has been broadened to Medication-Related Osteonecrosis of the Jaw (MRONJ) [1]. Although rare, MRONJ is a debilitating disorder that is usually associated with pain, bone sequestration, tooth loss, intraoral and extraoral fistulae, and jaw fracture [3,5].
The prevalence of MRONJ ranges from 0.7% to 18.6% among cancer patients [1,6], 0.017-0.04% in patients with I.V. BSP therapy [7] and 0.00038-0.21% in patients receiving long-term oral BSP therapy [8]. In recent years, the incidence of MRONJ has decreased owing to early screening and initiation of appropriate dental care [1]. The American Society for Bone and Mineral Research task force considers that systemic risk factors associated with chemotherapy affect the occurrence and aggravation of MRONJ, and that concurrent use of chemotherapeutic regimens and steroids has a synergistic effect on MRONJ [1].
Although the first MRONJ case was reported over a decade ago, the pathophysiology of the disease has not been fully elucidated. However, several risk factors have been reported, such as age, periodontal disease, smoking, diabetes, steroid therapy and immunosuppression [1]. Several hypotheses have been proposed in an attempt to explain the confined localization of MRONJ exclusively to the jaws. These include altered bone remodeling or over-suppression of bone resorption [9][10][11]; inhibition of angiogenesis [12]; constant micro-trauma; suppression of innate or acquired immunity; vitamin D deficiency [13]; soft tissue BSP toxicity [14]; dental disease or bacterial infection alone [15,16] or in combination with fungal and viral infections [17,18]. Dentoalveolar surgery, especially tooth extraction, is considered a major risk factor for developing MRONJ. Several studies report that among patients with MRONJ, tooth extraction is a common predisposing event (52-61% of patients with ONJ underwent tooth extraction) [19]. Owing to the development of MRONJ in patients with multiple confounding factors, it is very difficult to identify the underlying pathogenic determinants of the disease. Therefore, it is imperative to develop animal models with a high incidence of MRONJ and minimal environmental and genetic variance.
In a recent study, Jang et al. found that the combination of zoledronic acid (ZA) and dexamethasone (Dex) increased the occurrence of MRONJ in a rat model, and that a surgical stimulus, such as extraction, plays an important role as a trigger factor, increasing the incidence of MRONJ [4]. Thus, we hypothesized that increasing the surgical stimulus (i.e., extraction of two adjacent molar teeth compared with a single tooth extraction) would increase the prevalence of MRONJ. The aim of the current pilot study was to compare the incidence and characteristics of MRONJ following single or multiple tooth extractions in a rat model. The results of this study could enable further investigations in the field of MRONJ.
Materials and Methods
The study was performed in accordance with the guidelines laid down by the National Institutes of Health (NIH) in the USA regarding the care and use of animals for experimental procedures, with the European Communities Council Directive of 24 November 1986 (86/609/EEC), and in accordance with the ARRIVE guidelines and with local laws and regulations. The study protocol was approved by the Committee for the Supervision of Animal Experiments at the Faculty of Medicine, Technion, I.I.T. (approval # IL0580514).
Establishment of a MRONJ model in Lewis rats
In order to increase the prevalence of MRONJ, combined treatment with both ZA and Dex, in addition to molar extraction, was used to induce MRONJ [4]. Male Lewis inbred rats (n = 20, 13 weeks, ∼300 g) were used in the experiment. Rats were treated with subcutaneous (s.c.) injections of ZA (Hospira, Almere, Holland) at 7.5 µg/kg and Dex (Kern Pharma, Barcelona, Spain) at 1 mg/kg once a week, for 11 weeks (n = 10); control rats were treated by s.c. injection of saline in the same volume and for the same duration (n = 10) [20]. At the third week (in addition to the s.c. injections), rats were anaesthetized by intramuscular injection of 100 mg/kg bw ketamine (Ketaset, Fort Dodge, Iowa, USA) and 5 mg/kg bw xylazine (Eurovet, Cuijk, Holland), and all animals underwent unilateral tooth extraction: in the single tooth extraction group (ZA/Dex = 5; saline = 5), the first maxillary molar was extracted, while in the multiple tooth extraction group (ZA/Dex = 5; saline = 5), the first and second maxillary molars were extracted. Three days post tooth extraction, rats were treated with 0.3 mg/kg bw buprenorphine (Vetamarket, Shoham, Israel) and 50 mg/kg bw cephalexin (Norbrook Laboratories, Newry, Ireland), injected s.c. Rats were fed water-soaked rat chow and water ad libitum. The ZA/Dex administration protocol was maintained until the animals were sacrificed. All rats were sacrificed by CO₂ asphyxiation 8 weeks after tooth extraction.
Evaluation of MRONJ occurrence was performed by clinical and histological analyses.
Clinical measurements
Eight weeks after extraction, the presence of exposed bone was identified, measured and recorded (clinical photos taken with a 105 mm lens digital camera).
Histology and histomorphometry
The part of the maxilla surrounding the extraction socket was sawed out and specimens were fixed immediately in 4% paraformaldehyde for 2 days. Fixed specimens were decalcified in 10% EDTA (Sigma-Aldrich, MS, USA) for 4 weeks, embedded in paraffin and sectioned (5 µm). For determination of soft tissue and bone morphology, sections were stained with hematoxylin and eosin (H&E). Two stained sections (∼20 µm apart) from each specimen were captured by a digital camera (Olympus DP70, Olympus, Tokyo, Japan) with a calibration scale and analyzed morphometrically using ImageJ software (NIH, Bethesda, MD, USA). The area of the extraction site was identified adjacent to the second molar in the single extraction model and adjacent to the third molar in the multiple extraction model. Histomorphometric measurements were performed at these sites. Epithelial thickness (microns) was measured at three points in the extraction site, and an average of these measurements was calculated for each specimen. In MRONJ cases, epithelial discontinuation was measured in the most coronal mesio-distal dimension (Fig. 1). In order to quantify the area of osteonecrosis, areas with empty lacunae were identified and the number of empty lacunae was counted manually. Bone necrosis was defined as three or more empty lacunae per 1000 µm² [21]. In order to calculate blood vessel density, 10 microscopic fields (at ×40 magnification) in the connective tissue as well as in the basal bone were randomly selected and blood vessels were manually counted (Fig. 1). Immunohistochemistry with a CD31 antibody, which recognizes endothelial cells, served as a marker for blood vessel counting. Briefly, antigen retrieval of the samples was performed, followed by blocking of non-specific binding sites (Background Buster, Innovex Biosciences). After washes with PBS, the sections were incubated for 1 hour with a primary antibody against CD31 (Novus Biologicals, Colorado, USA), diluted 1:100. After extensive washing, samples were incubated with horseradish peroxidase (Zytomed Systems, Berlin, Germany).
3,3'-Diaminobenzidine (DAB) (SuperPicture™, Thermo Fisher Scientific, Massachusetts, USA) was applied for 15 minutes and gently washed off. Finally, sample dehydration and mounting were performed. Slides were visualized with an Olympus CX31 microscope (Olympus Optical Co., Ltd., Philippines) equipped with an Olympus DP12 camera. Blood vessel density (BVD) was assessed both in the connective tissue (CT) and in the bone.
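The two quantitative rules described above (averaged epithelial thickness and the ≥3 empty lacunae per 1000 µm² necrosis criterion [21]) can be sketched as follows, with hypothetical input values.

```python
# Histomorphometric helpers mirroring the definitions in the text.
def mean_epithelial_thickness(measurements_um):
    """Average of the three per-site thickness measurements (microns)."""
    return sum(measurements_um) / len(measurements_um)

def is_necrotic(empty_lacunae, area_um2):
    """Bone necrosis: >= 3 empty lacunae per 1000 square microns [21]."""
    return empty_lacunae / area_um2 * 1000.0 >= 3

print(mean_epithelial_thickness([180.0, 210.0, 195.0]))  # 195.0 microns
print(is_necrotic(empty_lacunae=7, area_um2=2000.0))     # True: 3.5 per 1000 um^2
```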
Estimation of sample size and power
According to the literature, MRONJ-like lesions occur in 50% of animals after administration of bisphosphonates and tooth extraction. In contrast, we expected to find no MRONJ occurrence in rats that were treated with saline and underwent tooth extraction. Therefore, 5 animals in each group were considered sufficient.
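As a rough, assumption-laden illustration of how such a design could be probed (this is not the authors' calculation), one can simulate the two-group comparison with a Fisher exact test under the stated incidences.

```python
import numpy as np
from scipy.stats import fisher_exact

# Simulated power for n = 5 per group, assuming 50% lesion incidence in the
# ZA/Dex group and 0% in the saline group, two-sided Fisher exact test.
rng = np.random.default_rng(0)
n, p_test, p_ctrl, alpha, trials = 5, 0.5, 0.0, 0.05, 5000

significant = 0
for _ in range(trials):
    a = rng.binomial(n, p_test)  # lesions in the ZA/Dex group
    b = rng.binomial(n, p_ctrl)  # lesions in the saline group
    _, p = fisher_exact([[a, n - a], [b, n - b]])
    if p < alpha:
        significant += 1
print(f"simulated power ~ {significant / trials:.2f}")
```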
Statistical analysis
StatPlus (AnalystSoft, Vancouver, BC, Canada) and JMP 10.0 (SAS Institute, Cary, NC, USA) statistical packages were used. Descriptive statistics, including means, medians, ranges and standard deviations (SD), were initially tabulated. Comparisons between control (saline) and test (ZA/Dex) groups and between one and two extractions were performed using the unpaired t-test and two-way ANOVA. The significance level was set at 5%.
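A minimal sketch of the unpaired comparison is shown below, using hypothetical per-animal values; the actual analyses were run in StatPlus and JMP as stated.

```python
from scipy import stats

# Hypothetical per-animal histomorphometric values (e.g. empty-lacunae counts).
za_dex = [14, 11, 17, 9, 13]
saline = [2, 4, 1, 3, 2]

t_stat, p_value = stats.ttest_ind(za_dex, saline)  # unpaired two-sample t-test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")      # significant if P < 0.05
```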
Clinical evaluation (Macroscopic examination)
Three rats died during anesthesia and were therefore not included in the study. Surviving animals demonstrated good hemostasis and gained body weight; overall, 10 rats in the single tooth extraction group (ZA/Dex = 5; saline = 5) and 7 rats in the multiple tooth extraction group (ZA/Dex = 4; saline = 3) were included in the analysis. MRONJ-like lesions (i.e., exposed bone) were evident clinically in all the single-extraction ZA/Dex-treated rats (Fig. 2B). In contrast, all saline-treated rats (single and multiple extraction groups) failed to show clinical signs of MRONJ (Fig. 2A, 2C). In the ZA/Dex multiple extraction group, minimal evidence of incomplete healing was observed only under magnification (Fig. 2D). In addition, in all single tooth extraction cases (both saline and ZA/Dex), we found open contacts between the 2nd and 3rd molars that were associated with food impaction (Fig. 3).
Histological and histomorphometric analyses
Tissue sections obtained from the saline groups (single and multiple tooth extraction models) showed normal soft and hard tissue healing. The oral mucosa presented continuous epithelium with developed rete ridges and wide, non-inflamed underlying connective tissue. Underneath, the basal bone exhibited features of mature bone with normal bone remodeling and cellular lacunae (Fig. 4A, 4B). Tissue sections obtained from the ZA/Dex-treated animals that demonstrated exposed bone clinically (single tooth extraction group) showed discontinuity of the epithelium with exposed fragments of necrotic bone and sequestra surrounded by an extensive inflammatory infiltrate consisting of mononuclear cells (Fig. 4C, 4D).
Histomorphometric analysis was performed for epithelial thickness, epithelial discontinuation, inflammatory infiltrate, blood vessel density (BVD), empty lacunae and area of necrotic bone. When comparing all ZA/Dex-treated to all saline-treated rats, epithelial thickness was not significantly different between the groups and ranged between 157.5 µm and 325.1 µm. Epithelial discontinuation (ulceration) was evident solely in ZA/Dex rats. The inflammatory infiltrate grade was higher in the ZA/Dex group (2.22 ± 0.833) compared with the saline group (1.125 ± 1.356). The number of empty lacunae and the area of necrotic bone were higher in the ZA/Dex-treated group (p = 0.0007 and p = 0.04, respectively), while blood vessel density in the connective tissue and bone was decreased in the ZA/Dex-treated group (p = 0.0054 and p < 0.0001, respectively) (Table 1).
When comparing single versus multiple tooth extraction groups (Fig. 5), clinically exposed bone was evident in the ZA/Dex-treated single-extraction rats only; however, histological and histomorphometric analyses revealed evidence of necrotic bone and empty lacunae in both the single- and multiple-extraction rats that were treated with ZA/Dex. In the multiple extraction cases, epithelial ulceration (epithelial discontinuation) was too small to be detected clinically. Accordingly, several histomorphometric parameters (empty lacunae, area of necrotic bone and BVD in the CT) showed significant differences between ZA/Dex and saline groups in the multiple extraction model that were not significant in the single extraction model (Fig. 5A, 5B, 5C). However, BVD in bone was lower in the ZA/Dex versus saline groups in both the single (p = 0.0008) and multiple (p = 0.0039) extraction models (Fig. 5C).
Discussion
MRONJ was described for the first time by Marx in 2003, as exposed bone in the oral cavity of patients taking BSP [3]. Most of the literature in this field is limited and based on human clinical case reports showing that MRONJ emerged after tooth extraction or surgery. Nevertheless, several case reports have presented "spontaneous development of MRONJ" (without prior surgical intervention) that could be attributed to the presence of chronic infection around teeth or dental implants [22]. In order to gain insight into the pathogenesis of the disease, we aimed to establish a MRONJ model in the rat. We hypothesized that, similar to humans, increasing the extent of the surgical trauma would elevate the incidence of MRONJ in this rat model. The results of the present study showed that all rats that underwent single tooth extraction in the ZA/Dex-treated group exhibited clinical evidence of exposed bone and histological characteristics of MRONJ. Unexpectedly, rats that were treated with ZA/Dex and underwent multiple extractions showed minimal clinical signs of MRONJ that could not be detected with the naked eye. However, evidence of the disease was found histologically. Therefore, our findings suggest that the degree of surgical trauma may not be the main risk or trigger factor for MRONJ development.
Even though MRONJ diagnosis is based on clinical examination of the patient and identification of exposed bone persisting for at least eight weeks, there are several histological characteristics associated with MRONJ lesions. Histologically, MRONJ is characterized by diverse tissue changes, including necrotic bony trabeculae with empty osteocyte lacunae and granulation tissue. The inter-trabecular space is infiltrated by inflammatory cells including neutrophils, lymphocytes and plasma cells, together with decreased vascularization and a reduced number of osteoblasts [23,24]. In the present study, none of the rats in the multiple extraction groups showed exposed bone clinically; however, histological characteristics were displayed. One explanation for this surprising result may be the micrometer-scale ulceration of the epithelium, which can only be detected microscopically. However, since MRONJ diagnosis requires the clinical appearance of exposed bone, this multiple extraction model is inferior to the single extraction model. ZA/Dex-treated rat groups showed decreased blood vessel density in the oral mucosa and alveolar bone, an increased number of empty lacunae and higher grading of the inflammatory infiltrate compared with control rats treated with saline. In general, most of the histological parameters were similar between the single tooth and multiple tooth extraction models. However, the mean area of necrotic bone in the ZA/Dex-treated rats that underwent single tooth extraction was 4-fold higher in comparison to the multiple tooth extraction treatment (0.235 mm² vs 0.0545 mm², p < 0.05). These histological findings are in accord with the clinical observations, in which we found a higher incidence of exposed bone in the single tooth extraction model.
To interpret the results of the current study, the mechanisms and risk factors for MRONJ should be discussed. Interestingly, MRONJ is restricted to the jaw bone. Unlike long bones, the jaw is covered with a thin oral mucosa that separates the underlying bone from the oral flora and protects the bone from mechanical trauma caused by food impaction. Injury or ulceration of the oral mucosa exposes the underlying jaw to bacterial and fungal contamination that may contribute to MRONJ development. Furthermore, in a study by Duzan et al., mechanical local damage to the gingiva promoted Th17 cell migration and contributed to the potentiation and exacerbation of local oral immunity, which contributed to pathogenic bone loss [25]. In the current research, clinical examination of rats in the single tooth extraction group showed open contact points between the second and third molars that were associated with food impaction, soft tissue ulceration and significant bone loss between the second and third molars. Based on this observation, we hypothesize that chronic local trauma to the oral mucosa, caused by the food impaction, may play an important role in MRONJ development. Alternatively, the neighboring teeth with their bacterial load, or the presence of neighboring teeth that keep the wound open and enable bacteria to enter the socket, may aggravate MRONJ.
It is unclear whether MRONJ is induced by tooth extraction or by the surgery itself. Alternatively, as an unproven hypothesis, MRONJ may already exist as "microlesions" in the alveolar socket prior to tooth extraction or surgery, e.g. in periodontally involved teeth requiring extraction, and become visible after extraction or surgery. BSP are known to bind to bone at neutral pH and dissociate from the bone in an acidic environment. During bone resorption, the acidic pH in the resorption lacunae increases the release of BSP from hydroxyapatite, resulting in high local BSP concentrations and therefore MRONJ development [25]. Local acidic milieus are common in infections and in wound healing after surgical procedures. Furthermore, Marx (2014) described that osteoclastic resorption of BSP-loaded bone results in osteoclast cell death in which the cell lyses, releasing the BSP drugs to reenter the local bone or bone marrow in a re-dosing effect [27]. In the single tooth extraction model, the presence of open contact points indicates that tooth migration occurred. Since tooth migration occurs through osteoclastic resorption, this "re-dosing effect" may further explain the difference in MRONJ occurrence between the two extraction models.
The restriction of MRONJ to the jaw may be due to the high bone turnover rate and limited vasculature of the jaw. Our findings support this hypothesis, as we found reduced vasculature in the oral mucosa and in the alveolar bone of rats that were treated with ZA/Dex in comparison to control rats treated with saline [1]. Previous studies noted that suppression of angiogenesis can result in the development of MRONJ, and that serum vascular endothelial growth factor (VEGF) levels might be a predictive marker of MRONJ [27]. Furthermore, these findings are supported by studies of cancer patients treated with ZA who exhibited decreased circulating VEGF levels [29].
To explore preventive and future treatment strategies for MRONJ, there is a pressing need to establish reliable preclinical animal models that mimic the clinical and histological characteristics of MRONJ. Moreover, being able to establish MRONJ-like lesions at a high incidence is a prerequisite for further research in this field.
Several animal models for the induction of MRONJ have been described, using different animal species, medications and surgical triggers [30]. Overall, the incidence of MRONJ-like lesions varied widely, between 0% and 100%, among the studies. In accordance with our results, Marino et al. (2012) showed that I.V. injection of ZA at 20 µg/kg and extraction of the first mandibular molar resulted in clinical MRONJ-like lesions in 60% of the rats [31]. Unlike our results, other studies that combined ZA with multiple teeth extractions found a higher percentage of exposed bone. In a miniature pig model, extraction of three molars and I.V. administration of ZA caused MRONJ in 80%-100% of the pigs [32]. A possible explanation for this discrepancy is the heterogeneity between the studies, for example, large versus small animals and differences in medication dosages and modes of treatment. In large animal models (e.g. sheep or pig), the dimensions of MRONJ-like lesions are on a centimeter scale, enabling detection of lesions with the naked eye. Nevertheless, the advantage of choosing small animal models in research is obvious, especially in studies that can rely on histological characteristics of MRONJ to meet study aims. However, clinical detection of necrotic bone is crucial for MRONJ diagnosis, treatment and prevention studies.
Conclusions
Within the limits of the current pilot study, particularly its small sample size, the extent of the surgical field may not be the key factor in MRONJ development, since only single tooth extraction resulted in exposed bone. However, histological characteristics were identified in both the single and multiple teeth extraction models. Therefore, preclinical studies that aim to evaluate histological features of the disease may use both models, whereas when clinically exposed bone is required, the single tooth extraction model is preferred. The effects of chronic micro-trauma, food impaction and tooth migration should also be considered.
"year": 2018,
"sha1": "1b56e96e89f355287c149f2711c97475b30620d0",
"oa_license": null,
"oa_url": "https://doi.org/10.31083/j.jmcm.2018.02.005",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "598dfc0306f469f53240673c3470be3e198346ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270981633 | pes2o/s2orc | v3-fos-license | Antimicrobial management of dental infections: Updated review
Dental infections, which range from routine dental caries to severe periodontal disease and abscess formation, are a major public health risk. This review article focuses on the pathophysiology and treatment of dental infections. A narrative review was conducted based on several published articles, relevant journals, and books in Google Scholar and PubMed using the keywords dental caries, periodontal disease, gingivitis, and related diseases; we excluded duplicated information. Our review illustrates the types of dental infections and the appropriate antimicrobial drugs for these diseases. Drawing from recent research findings and clinical evidence, we explore the spectrum of bacteria commonly associated with dental infections and their susceptibility profiles to various antibiotics. Emphasis is placed on understanding the mechanisms of antibiotic action and resistance in the context of dental pathogens, shedding light on optimal treatment regimens and potential challenges in clinical management. Additionally, we discuss the clinical implications of antibiotic therapy in dentistry, taking into account factors such as patient selection, dosing guidelines, and side effects. The management of dental infections through antimicrobial strategies has undergone significant advancements, as evidenced by this updated review. Besides conventional methods, emerging technologies such as 3D printing for drug delivery of antibiotics and disinfectants hold promise in enhancing treatment efficacy and patient outcomes. By leveraging the precision and customization afforded by 3D printing, dentistry can tailor antimicrobial interventions to individual patient needs, optimizing therapeutic outcomes while minimizing adverse effects.
Introduction
Dental caries, periodontal disorders, and pulpal necrosis can cause dental infections. These infections can seriously harm the soft and hard structures of the oral cavity. Pain, fever, and edema are frequent signs of dental disease. The early treatment of infected teeth involves surgery and endodontics, followed by antibiotic medication. [1] In dental practice, dentists prescribe antibiotics, especially during dental treatment and as a preventive measure. For instance, in patients with prosthetic joints, infectious endocarditis, metabolic disorders like diabetes, [2] and other conditions, antibiotics are given to prevent acute attacks; they are also administered before tooth extraction. [3] There are few conditions in dentistry where systemic antibiotics may be used, since oral hygiene practices, including topical antiseptics, antibiotics, and surgical intervention, [4] are the best ways to treat most dental and periodontal disorders. [5] The use of antibiotics in dentistry is typified by the empirical use of a limited number of short-term, broad-spectrum antibiotics, such as amoxicillin, metronidazole, or clindamycin. [6] In dentistry, the main work is surgery and endodontic treatment; the dental procedure is the primary intervention, and antibiotics are only sometimes indicated. [7] Only around 12% of dentists were found to prescribe antibiotics appropriately and accurately, demonstrating the need for thorough guidelines. Misuse and overuse of antibiotics in dentistry lead to side effects, and the emergence of antimicrobial resistance (AMR) results in treatment failure. This may harm a vast number of people all over the world. The creation of prescription guidelines and educational programs to promote the responsible use of antibiotics and discourage their irrational use is necessary to prevent the onset of AMR. [8] Despite continuous education about essential antibiotic guidelines, the World Health Organization reported that the emergence of resistance is increasing. [9] For example, resistance against clindamycin increased by 46.7%, against amoxicillin by 39.2%, against doxycycline by 25%, and against metronidazole by 21.7%, but against the combination of amoxicillin and metronidazole by only 6.7%.
Misuse and overuse of antibiotics have led to the development of AMR in a wide range of microorganisms, with the consequent inefficacy of commonly used antibiotics. [10] Antibiotics are often used during dental procedures to treat prophylactic, local, focal, odontogenic, and non-odontogenic infections. [11] Odontogenic infections are the most frequent, affecting periodontal and dental structures, as in dental caries, pericoronitis, periodontitis, pulpitis, or pulpal necrosis. [12] Non-odontogenic infections start in extra-dental structures such as the mucous glands, tongue, and paranasal sinuses, and include sialadenitis. They can cause severe infections that can migrate to deep neck spaces and sometimes lead to serious complications and death. [13] Specific dental procedures, such as extractions, surgical periodontal procedures, implant placement, re-implantation of teeth, endodontic procedures or surgeries, subgingival placement of antibiotic fibers or strips, and intraligamental local anesthetic injections, increase the risk of dental infection in susceptible patients. Antibiotic prophylaxis is required to avoid dental and extra-dental complications. [14] Dental infections can be cured by surgical intervention; the earlier the surgical management of the infected tooth, the better the clinical outcome. In severe cases, surgical intervention includes debridement, irrigation, and incision and drainage. Furthermore, in patients with signs of systemic involvement, administration of intravenous antibiotics according to bacterial cultures and sensitivity testing is suggested. [15] Present guidelines indicate that antibiotics should be prescribed for 2 to 3 consecutive days after surgical treatment. [16] The surgeon incises the soft tissue and drains the pus, as antibiotics do not work in the presence of pus (pus contains antibiotic inhibitors). A root canal (RC) can help remove the source of infection: the dentist accesses the cavity in the tooth, removes the diseased and infected central tissue (pulp), disinfects the canal using an irrigation solution, applies an antibiotic dressing in the canal, and restores the access cavity. [17] If the affected tooth cannot be saved, the dentist will extract it and prescribe antibiotics after extraction to prevent infection (in addition to antibiotics before extraction, particularly in immunosuppressed patients).
However, prescribing antibiotics is not recommended if the infection is limited to the abscessed area (localized). If the patient has no systemic manifestations but the disease has spread to nearby teeth or the jaw, the dentist will likely prescribe antibiotics to stop the spread of the infection. Dentists prescribe broad-spectrum antibiotics with bactericidal effects, or two bactericidal antibiotics, in immunosuppressed patients.
Low-level laser therapy [18] and photodynamic therapy [19] are two alternative antiseptic techniques. Low-level laser therapy has been used to treat infected wounds to lessen bacterial growth and inflammation. Photodynamic therapy, on the other hand, has effectively treated localized infections, including burns, abscesses, wounds, and periodontal infections, and eliminated bacteria. Dental infections should be treated as soon as possible because they can cause severe and irreversible complications such as meningitis, thrombosis, orbital abscess, osteomyelitis, brain abscess, airway obstruction, carotid sinus involvement, septicemia, and loss of vision. [20] Administration of antibiotic prophylaxis is advised before routine invasive dental surgical procedures. Additionally, a new challenge for collaboration between dental and medical researchers has emerged from the link between oral infections and myocardial/cerebral infarction.
This review illustrates the types of causative microorganisms, the types of dental infections, and the appropriate antibiotics for treating dental diseases, and discusses new technologies for improving the delivery of antibacterial medication through nanotechnology and 3D printing systems.
Method
A narrative review was conducted based on several published articles, relevant journals, and books in Google Scholar and PubMed using the keywords dental caries, periodontal disease, gingivitis, and related diseases. We excluded duplicated information and articles for which we could not access the full text.
Results
Dental caries occurs when bacteria convert residual sugars and carbohydrates in the mouth into acid that destroys the tooth enamel and its underlying dentin layer. [21] If dental decay is not treated promptly, it can spread from the tooth crown to the root. However, if caries is detected early, it can be reversed. Dental caries results from Lactobacilli, Actinomyces, and Streptococcus mutans. [22] Dental caries can be treated with amoxicillin, metronidazole, co-amoxiclav alone, or clindamycin alone; some studies report the use of secnidazole as a treatment for dental caries. [23] Gum disease, known as gingivitis, is brought on by plaque accumulation on the teeth, which inflames the surrounding gums. Bacteria are found naturally in plaque, a film that clings to the teeth and releases chemicals that cause gum irritation. If left untreated, bleeding, puffiness, and redness of the gums can develop into periodontitis.
Actinomyces, Prevotella intermedia, Streptococcus anginosus, and Campylobacter rectus are possible culprits in gingivitis. Amoxicillin, metronidazole, and tetracyclines (tetracycline, doxycycline, or minocycline) can treat gingivitis. [24] A dangerous gum condition called periodontitis weakens the bone supporting the teeth and affects the soft tissue. Plaque accumulation results in gradual degradation if left untreated and may lead to tooth loss. [25] Bacteria from periodontitis can enter the bloodstream and damage other body parts if treatment is not received. In cases of periodontal disease, oral bacteria can create an ulcerated epithelium that allows the bacteria to enter the circulation. This can lead to transient bacteremia, considered a concern particularly in immunocompromised patients or those who wear prosthetics. [25] Porphyromonas gingivalis, Bacteroides forsythus, Lactobacillus, Prevotella intermedia, and Fusobacterium nucleatum are a few possible bacteria that cause periodontitis. [28] Periodontal infections can be treated with tetracyclines such as tetracycline itself, doxycycline, and minocycline. Prevotella oralis, Prevotella melaninogenica, Streptococcus anginosus, Porphyromonas gingivalis, and Peptostreptococcus micros cause periapical abscess. Treatment options for periapical abscess include erythromycin, amoxicillin, and cefoxitin. Acute necrotizing ulcerative gingivitis, sometimes called Vincent angina, is caused by the spirochete Borrelia vincentii and fusiform bacteria. It is an acute bacterial infection due to overgrowth of normal oral flora when the oral microbiota is disturbed. There are about 700 bacterial species living in the oral cavity as normal flora, most of which are anaerobic. [29] This type of dental infection can be treated with metronidazole or clindamycin, or sometimes with penicillins such as penicillin V or penicillin G, tetracyclines, amoxicillin, and co-amoxiclav. [30]
Pericoronitis
Peptostreptococcus, Porphyromonas gingivalis, and Fusobacterium species cause pericoronitis. It is typically related to the third molar and can be treated with amoxicillin or metronidazole.
Peri-implantitis is caused by Peptostreptococcus micros, Fusobacterium nucleatum, Prevotella intermedia, Pseudomonas aeruginosa, and Staphylococcus species. It can be treated by surgical decontamination, including chemical methods (use of citric acid, ethylenediaminetetraacetic acid, hydrogen peroxide, or saline) or laser. Surgical treatment comprises air-powder abrasion, resective surgery (implantoplasty), and regenerative surgery. Non-surgical treatment includes mechanical methods, antiseptics, and topical antibiotics such as tetracycline or doxycycline. [31] Locally administered antibiotics, alone or as an adjunct to surgical and non-surgical treatments for peri-implantitis, showed favorable outcomes, albeit with limited evidence. The benefit of systemically delivered antibiotics in combination with non-surgical or surgical treatment remains questionable. [32] Endodontic infection (pulpitis or RC treatment) is caused by Peptostreptococcus micros, Porphyromonas endodontalis, Prevotella intermedia, Prevotella melaninogenica, and Fusobacterium nucleatum; it can be treated (non-surgically) with amoxicillin, clindamycin, or azithromycin. [33] Pulpitis with a periapical abscess is caused by Prevotella intermedia, Peptostreptococcus micros, Capnocytophaga species, Porphyromonas endodontalis, and Streptococcus species. It can be treated with amoxicillin, azithromycin, co-amoxiclav, clarithromycin, cefoxitin, clindamycin, or metronidazole. [34] Endodontic therapy consists of a series of treatments, including removal of the pulpal tissue, cleaning, shaping, obturation, and placement of a permanent restoration for the tooth. [7] RC treatment comprises a sequence of procedures for the infected pulp of a tooth intended to eliminate the infection, decontaminate the tooth, and protect it from future microbial invasion. The RCs and their associated pulp chamber are the physical hollows within a tooth, naturally inhabited by nerve tissue, blood vessels, and other cellular entities, which can cause pain. These dental procedures alleviate pain and prevent future infections of the RC and pulp chamber. [35] Ludwig angina is a bacterial infection (a rare type of cellulitis) that affects the neck and the floor of the mouth. [36] It is not contagious and typically starts from an abscessed tooth; it can spread rapidly, causing life-threatening swelling that can compromise the ability to breathe (due to edema, which makes breathing difficult).
A dry socket (alveolar osteitis) can occur after dental extraction. When a tooth is removed, a blood clot forms in the socket (the hole in the bone where the tooth was). A dry socket happens when the blood clot dislodges or does not form. Without the clot, the bone and nerves are exposed to the oral environment, leading to more pain than before the extraction. A dry socket can be painful and delay healing of the surgical site. [37] A dry socket is better prevented, by pre-administration of metronidazole, than treated with this antimicrobial. [38] Although it is challenging to identify the causative bacteria in dry sockets, it has been noted that anaerobic bacteria are primarily responsible for their formation. Treponema denticola is known as a putative microorganism in the development of periodontal disease. [39] Dry socket can be treated in the dental office with local antibacterials, antibiotics, antifibrinolytic agents, and steroids; the dentist may also use obtunding dressings, such as eugenol-containing dressings. However, oral antibiotics used to treat dry socket include penicillins (such as penicillin V or penicillin G), clindamycin, metronidazole, and tetracyclines.
Deep neck space infections, including parapharyngeal abscess, peritonsillar abscess, and retropharyngeal abscess, commonly arise from an odontogenic or upper aerodigestive tract origin. Both aerobic and anaerobic bacteria may cause these infections.
This condition may be life-threatening if not diagnosed and treated promptly, as it may lead to airway compromise and spread to adjacent compartments.
Treatment should include broad coverage of beta-lactamase-producing bacteria such as Staphylococcus aureus; other involved bacteria include Streptococcus pyogenes and viridans streptococci. Anaerobic gram-negative bacilli and Peptostreptococcus species also take part. Until culture results are obtained to help direct treatment, empirical treatment includes clindamycin plus levofloxacin as injections, because this is a life-threatening condition. [40] For cases with a rhizogenic source of infection, use vancomycin plus ampicillin-sulbactam, or vancomycin plus ceftriaxone plus metronidazole, or clindamycin plus levofloxacin.
Treatment of jaw osteomyelitis is complicated by the presence of teeth and persistent exposure to the oral environment. Antibiotic therapy tends to be prolonged, often lasting weeks or months. Clindamycin, dicloxacillin, and moxalactam have excellent bioavailability in bone tissue (bone concentrations of about 90% of those in blood). Other antibiotics with good bone penetration include ciprofloxacin, clindamycin, metronidazole, chloramphenicol, rifampin, and others.
Antibiotics effective in the treatment of anaerobic infections include metronidazole, meropenem, co-amoxiclav, piperacillin-tazobactam, ticarcillin-clavulanic acid, ampicillin-sulbactam, clindamycin, and chloramphenicol. [42] Anything that creates a pathway for bacteria to the tooth or surrounding tissues can lead to a tooth abscess. Dental caries and chipped or cracked teeth allow bacteria to seep into the tooth and spread to the pulp. [43] Periodontal disease is an infection and inflammation of the tissues around the teeth; as periodontal disease progresses, the bacteria gain access to deeper tissues, causing abscesses. Injury to the teeth, such as trauma, can damage the inner pulp even if there is no visible crack, making the tooth susceptible to infection. [44] A tooth abscess is a pocket of pus caused by a bacterial infection. An abscess usually looks like a red, swollen bump, boil, or pimple. It affects the involved tooth, but the infection can also spread to the surrounding bone, neighboring teeth, or soft tissue. [45] Dental abscesses can be treated by drainage of pus and the use of one or more of the following antibiotics: ampicillin-sulbactam, penicillin G, co-amoxiclav plus metronidazole, or cefoxitin. [46]
Non-surgical procedures include mechanical methods, antiseptics, and topical antibiotics such as tetracycline or doxycycline. [47] All of the above is summarized in Table 1.
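Because the body of Table 1 is not reproduced here, the sketch below restates, as a simple lookup structure, some of the infection-to-antibiotic pairings stated in the text above. It is purely an illustration of the table's content drawn from this review, not a prescribing reference, and the grouping of entries is our own.

```python
# Illustrative only: first-line options as stated in the text above.
TREATMENT_BY_INFECTION = {
    "dental caries": ["amoxicillin", "metronidazole", "co-amoxiclav", "clindamycin"],
    "gingivitis": ["amoxicillin", "metronidazole", "tetracyclines"],
    "periodontitis": ["tetracycline", "doxycycline", "minocycline"],
    "periapical abscess": ["erythromycin", "amoxicillin", "cefoxitin"],
    "pericoronitis": ["amoxicillin", "metronidazole"],
    "peri-implantitis (topical)": ["tetracycline", "doxycycline"],
    "dry socket": ["penicillin V/G", "clindamycin", "metronidazole", "tetracyclines"],
    "dental abscess": ["ampicillin-sulbactam", "penicillin G",
                       "co-amoxiclav + metronidazole", "cefoxitin"],
}

# Example lookup:
print(TREATMENT_BY_INFECTION["pericoronitis"])  # ['amoxicillin', 'metronidazole']
```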
The uses of nanotechnology and biomaterials for dental implant medication delivery
Nanostructured scaffolds and matrices aid in better controlling cell differentiation in regenerative dentistry. Compared to traditional autologous and allogeneic tissues or alloplastic materials, nanomaterials better mimic the natural architecture and structure of teeth and generate functional tissues. Furthermore, some metals are used in medication complexes to improve transport. Different methodologies are available for biomaterial-based delivery systems, although each biomaterial has pros and cons that can greatly affect complex transport, such as solubility in physiological settings or dispersion in tissues. Biomaterials can prolong contact time, have antibacterial and anti-inflammatory properties, and even strengthen the effects of antibiotics for treating oral infections. [50,51]
3D printing designs unique drug delivery systems
3D printing has applications in medical devices, films, liquids, and gastroretentive, colonic, transdermal, and intrauterine drug delivery systems. Owing to its unique characteristics and originality, 3D printing is inherently capable of resolving several formulation and drug delivery issues, many of which are connected to medications that are poorly soluble in water. The recent FDA technical guidance on additive manufacturing in relation to medical devices and the approval of Spritam® have sparked a great deal of research in the fields of bioengineering and drug delivery systems. From the pre-clinical stage to first-in-human trials and on-site manufacture of bespoke formulations with exceptional dosage flexibility at the point of treatment, 3D printing technology can be effectively applied. The pharmaceutical manufacturing industry will undergo a rapid transition with the adoption of new regulatory rules and the advent of breakthrough 3D printing machines that provide built-in quality and flexibility. [52]
Discussion
Despite the desire to reduce antibiotic use in dental infections, antibiotics are effective in the treatment of the following dental infections: acute necrotizing ulcerative gingivitis, stage III grade C/incisor-molar pattern periodontitis, acute periapical abscess, cellulitis, local or systemic spreading infection, pericoronitis, peri-implantitis, infections of the deep fascial layers of the head and neck, Ludwig angina, and anaerobic infections. [53] Amoxicillin is a broad-spectrum penicillin that is destroyed by bacterial penicillinase. It treats dental caries, gingivitis, periapical abscess, pericoronitis, endodontitis, and pulpitis with apical abscess. Amoxicillin is a first-line antibiotic in dental infections and the most frequently prescribed antibiotic in dental practice (approximately 50% of prescriptions). Some dentists prescribe metronidazole with amoxicillin to cover the most likely pathogens, particularly the anaerobic microorganisms that usually inhabit the oral cavity. [54] Other dentists prefer using co-amoxiclav to broaden the spectrum to include penicillinase-producing bacteria. This preparation combines amoxicillin and clavulanic acid, a β-lactamase inhibitor. The disadvantages of clavulanic acid are hepatotoxicity, and it is better avoided during pregnancy. It has been shown that all bacteria isolated from odontogenic abscesses were susceptible to co-amoxiclav. [50] Co-amoxiclav is helpful in the treatment of dental caries, dental abscesses, and anaerobic infections. Other penicillins, such as penicillin V and penicillin G, are used in the treatment of Vincent angina, cellulitis, and abscesses. Ampicillin-sulbactam is used to treat dental abscesses and Ludwig angina. Meropenem plus piperacillin-tazobactam is used when an infection is caused by problematic microorganisms such as Pseudomonas aeruginosa and Staphylococcus aureus. [51] The third most commonly used antibiotic is metronidazole, which is primarily effective against anaerobic bacteria and is used in the treatment of dental caries (where the causative microorganism is Streptococcus mutans), gingivitis, acute necrotizing ulcerative gingivitis, pericoronitis, dental abscess, and dry or infected socket. [52] Clindamycin has two favorable characteristics: it is effective against most anaerobic infections, and it penetrates bone well. However, the most severe and dangerous side effect of clindamycin is the overgrowth of Clostridium difficile, causing pseudomembranous colitis, a dangerous complication.
Unfortunately, there is antagonism between clindamycin and metronidazole, although this was shown in only one study [53]; most researchers have nevertheless used clindamycin with metronidazole to prevent dry socket and, at the same time, pseudomembranous colitis (i.e., to prevent this side effect) [53].
Clindamycin is helpful in the treatment of dental caries, Vincent angina, endodontitis, cellulitis, jaw osteomyelitis, dental abscess, Ludwig angina, and anaerobic infections. It has been found that clindamycin is as effective as amoxicillin and metronidazole in treating periodontitis in diabetic patients. [55] Doxycycline (a tetracycline) is used in the treatment of gingivitis and periodontitis, and minocycline, also a tetracycline, is used topically during surgical work in the treatment of peri-implantitis.
Dentists have prescribed many cephalosporins to treat dental infections in penicillin-allergic patients, provided the patient has not had severe reactions to cephalosporins. [56] However, it is well known that there is about 10% cross-allergenicity between penicillins and cephalosporins, which necessitates avoiding cephalosporins in patients allergic to penicillin; it is better to use azithromycin, particularly in the treatment of periapical abscess, dental abscess, and dry socket. [57] Cefoxitin is used to treat some dental infections when anaerobic bacteria are suspected. Other cephalosporins are also used in dentistry, such as cephalexin, cefazolin, and ceftriaxone. Cefepime is needed when the presence of Pseudomonas aeruginosa is suspected.
Fluoroquinolones are used in different dental diseases; for example, ciprofloxacin is used in treating peri-implantitis, and moxifloxacin is used in treating jaw osteomyelitis. Levofloxacin is indicated in the treatment of deep head and neck space infections. Some other antibiotics, such as rifampin, have little place in dentistry; rifampin is used in the treatment of osteomyelitis of the face. Several studies link some types of cancer with bacterial infection, [58,59] and there is an association between oral cancer and infection; for that reason, this review is important for summarizing all these factors and related illnesses.
Conclusions
The most commonly used antibiotic in dental infections is amoxicillin (approximately 50% of prescriptions); the second most commonly used antibiotic to treat dental infections is co-amoxiclav. Cephalosporins are better avoided in penicillin-allergic patients because there is about 10% cross-allergenicity between penicillins and cephalosporins due to the similarity in their primary structure (the beta-lactam ring).
It has been found that combining amoxicillin and metronidazole is effective in stage I to III grade C periodontitis. This combination is also effective in the treatment of aggressive periodontitis.
The second combination found to be effective in dental infections is co-amoxiclav and metronidazole.
On the other hand, the combination of clindamycin and metronidazole shows some antagonism, but it can be used in some important situations to prevent the dangerous pseudomembranous colitis resulting from the use of clindamycin. Clindamycin inhibits the growth of most intestinal flora, and the resistant Clostridium difficile overgrows. These bacteria secrete toxins that cause necrosis of intestinal cells, leading to pseudomembranous colitis.
For empirical use of antibiotics in most dental infections, a combination of clindamycin and metronidazole can be used in penicillin-allergic patients. Most previous work agrees with this use, except for one report indicating the presence of drug resistance to this combination that decreases its effect. Besides conventional methods, emerging technologies such as 3D printing for the delivery of antibiotics and disinfectants hold promise in enhancing treatment efficacy and patient outcomes. By leveraging the precision and customization afforded by 3D printing, dentistry can tailor antimicrobial interventions to individual patient needs, optimizing therapeutic outcomes while minimizing adverse effects. Furthermore, the integration of nanotechnology and biomaterials into dental implant medication delivery represents a paradigm shift in preventive and therapeutic approaches. Nanotechnology enables the development of novel drug delivery systems with enhanced bioavailability and targeted delivery, potentially reducing the risk of infection and improving implant success rates. Moreover, biomaterials tailored for controlled-release mechanisms offer prolonged and localized antimicrobial activity, fostering tissue regeneration and implant integration.
Clindamycin acts on bacterial protein synthesis, and metronidazole acts on bacterial nucleic acid synthesis; the clinical relevance of their interaction needs to be clarified in future clinical trials in dentistry.
Dean of Al-Manara College for Medical Sciences, Amarah, Iraq, b Al-Manara College for Medical Sciences, Amarah, Iraq, c Department of Dentistry, Alturath College, Baghdad, Iraq, d Head of Dentistry Department, Alturath College, Baghdad, Iraq, e Head of the Dentistry Department, Al-Manara College for Medical Sciences, Amarah, Iraq, f Department of Pharmacy, Bilad Alrafidain University College, Baqubah, Iraq, g Dr Hany Akeel Institute, Iraqi Medical Research Center, Baghdad, Iraq.
Table 1
Type of infection and treatment agent. | 2024-07-07T05:10:19.183Z | 2024-07-05T00:00:00.000 | {
"year": 2024,
"sha1": "6a77af5a3ede0ab9953974d8c41ae81db9c40e25",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6a77af5a3ede0ab9953974d8c41ae81db9c40e25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222301787 | pes2o/s2orc | v3-fos-license | Compliance to secondary prevention strategies for coronary artery disease: a hospital-based cross-sectional survey from Ernakulam, South India
Objectives The primary objective of the study was to report the compliance to secondary prevention strategies for coronary artery disease (CAD), such as smoking cessation, weight management, low-density lipoprotein (LDL) cholesterol control, blood pressure control, glycaemic control, physical activity and cardiovascular drug therapy, from a resource-limited setting. Design Analytical cross-sectional survey with data collection using a questionnaire administered by study personnel. Setting Institutional—two tertiary care hospitals and two cardiology clinics. Participants Patients in the age group of 30–80 years with documented CAD and with a minimum of 1 year and a maximum of 6 years of follow-up after diagnosis. Main outcome measures The main outcome measures were the prevalence of individual compliance to secondary prevention strategies for CAD such as smoking cessation, weight management, LDL cholesterol control, blood pressure control, glycaemic control, physical activity and cardiovascular drug therapy. The secondary outcomes were the association of secondary prevention strategies with age, sex, domicile, socioeconomic status, insurance and type of treatment. Results We recruited a total of 1206 patients, among whom 879 (72.9%) were males. The median age of patients was 62 (14) years. Compliance was 93.86% (95% CI 91.66% to 96.06%) for smoking cessation, 63.76% (95% CI 61.05% to 66.47%) for ideal body mass index, 65.11% (95% CI 62.42% to 67.80%) for blood pressure control, 36.50% (95% CI 33.18% to 39.82%) for LDL control, 51.23% (95% CI 46.10% to 56.36%) for diabetes control and 39.22% (95% CI 36.46% to 41.98%) for adequate physical activity. Reported compliance for cardiovascular drug therapy was 96% for antiplatelets, 89.4% for statins, 68.2% for beta blockers, 37.7% for renin angiotensin aldosterone system blockers, 81.28% for oral hypoglycaemic agents and 22% for insulin therapy. Conclusion Compliance to secondary prevention strategies for CAD in a resource-limited setting is moderate. This needs further improvement for better CAD-related outcomes in future.
The overall benefit for the patient under treatment for CAD is often suboptimal due to poor implementation of secondary prevention strategies across the world, and especially in South Asia.
Selection and description of study participants
The study was conducted in two hospitals and two cardiology clinics in Ernakulam district, Kerala state, and was coordinated by Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala, India. The study period was 24 months (January 2017 - January 2019). The study design was an analytical cross-sectional survey.
We used the study by Kosteva et al. to calculate the sample size. 19
The Euroaspire study reported a compliance of 19.5% for the LDL target (<70 mg/dl) in accordance with the guidelines published by the joint European societies. 20 We used the LDL target to compute the sample size, as this was the target applicable to all patients and the one deemed most important by cardiology consultants. The minimum sample size required was 683, with a desired CI of 95% and 3% absolute precision. We inflated the sample size to 1200 to adjust for withdrawals from the study.
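As a rough check on the reported figure, the standard single-proportion sample size formula n = z²p(1−p)/d² can be applied; the sketch below is illustrative only, since the paper does not state the exact software or rounding used.

```python
import math

def sample_size_single_proportion(p, d, z=1.96):
    """Minimum n to estimate a proportion p with absolute precision d,
    at the confidence level implied by z (z = 1.96 for a 95% CI)."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# Using the Euroaspire LDL compliance estimate (p = 19.5%) and 3% precision:
print(sample_size_single_proportion(p=0.195, d=0.03))
# -> 671, close to the reported 683 (software defaults and rounding may
# differ); the authors then inflated the figure to 1200 for withdrawals.
```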
Patients with documented CAD were recruited by consecutive sampling from among the patients under care at the study institutions. We plan to disseminate the study findings through patient support groups to improve compliance to secondary prevention strategies.
Study Tool
The study used a structured questionnaire to collect the information of interest; it was administered in Malayalam for ease of data collection. All interviews were done by study personnel who were trained by the principal investigator before the commencement of data collection. The English version of the study tool is presented as Appendix 1. The tool was administered in-hospital/in-clinic for enrolled patients.
Statistical Analysis
Statistical analyses were conducted using SAS. Prevalence estimates for the compliance outcomes are reported with 95% confidence intervals. The regression coefficients were tested using the Wald statistic. We used the Bonferroni correction to account for multiple comparisons in the subgroup analysis, and a threshold of 0.008 was used for testing the significance of associations.
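For reference, the binomial confidence intervals reported for each compliance outcome can be reproduced under the usual normal (Wald) approximation; the sketch below is generic, not the study's actual code, and the 769/1206 split used in the example is inferred from the reported 63.76% BMI compliance. The 0.008 threshold is consistent with dividing α = 0.05 by the six stratification variables (0.05/6 ≈ 0.0083).

```python
import math

def wald_ci_95(k, n, z=1.96):
    """Two-sided 95% Wald CI for a proportion: p +/- z*sqrt(p(1-p)/n)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Example: 769 of 1206 patients (63.76%) met the BMI target.
lo, hi = wald_ci_95(769, 1206)
print(f"{lo:.2%} to {hi:.2%}")  # ~61.05% to 66.48%, matching the reported CI
```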
Baseline Data
We approached 1230 patients who were eligible for recruitment as part of the study. A total of 1206 patients provided consent and participated in the study, giving a response rate of 98%. The remaining 24 patients were not interested in participating in the survey.
The final analysis included data from 1206 patients with CAD who were under follow-up at the four study centers. The baseline details of the study population are presented in Table 1.
Among the patients, 879 (72.9%) were males and 767 (63.6%) were from rural areas. The majority of patients were in the age group of 61-80 years (647, 53.6%), followed by those in the age group of 41-60 years (537, 44.5%). Only 22 (1.8%) patients were in the age group of 30-40 years. The mean age of the participants was 61.3 (9.6) years; for males and females it was 60.4 (9.6) and 63.5 (9.3) years respectively. The overall mean age at occurrence of the primary event was 58.6 (9.6) years; for males and females it was 57.7 (9.6) and 61.1 (9.2) years respectively. The median follow-up was 2.0 (3.0) years.
Subgroup analysis: Compliance profile stratified by selected variables
We did a subgroup analysis stratifying the study sample into subgroups based on selected variables, including age, sex, SES, place of domicile, insurance and type of treatment taken. The adjusted prevalence ratios along with 95% CIs for the same are presented in Table 4.
Age showed a significant association with BP control (adjusted prevalence ratios in Table 4). Compliance to LDL control did not differ significantly between patients who underwent revascularization and those on medical therapy; more generally, the type of treatment as well as the place of domicile showed no significant association with any of the variables tested in the subgroup analysis.
Discussion
The current study is probably the first of its kind from India looking at compliance to accepted secondary prevention strategies among patients with documented CAD. The study data shed light on current practices and missed opportunities that may be important in improving secondary prevention care. Our study reports that the individual secondary prevention targets for CAD are adopted differentially in the study population. Smoking cessation showed excellent compliance, followed by moderate compliance to the BMI target, BP control and diabetes control. Compliance to physical activity requirements as well as to LDL control paints a dismal picture, with tremendous scope for improvement. The study suggests that regular screening for LDL control is suboptimal. The monitoring of diabetic control among those with diabetes is also suboptimal, despite the fact that one in two patients in the study has diabetes.
Several registries/studies set up to study secondary prevention of CAD across the globe allow us to critically examine the findings of the current study from a global perspective. 4,23,24,25,26
Among the secondary prevention strategies, the best response in the current study was seen in adiposity management. Among study patients, two out of three kept their BMI in the recommended range as per guidelines. This is much higher than the corresponding figures reported in the international registries.
Keywords
Compliance, secondary prevention, coronary artery disease, risk factors, guidelines implementation, resource limited setting
Strengths and Limitations
The strengths of the current study include (i) a large sample size, (ii) a high response rate and (iii) the availability of information on several confounders. The study limitations include (i) limited geographic/ethnic variation among study subjects and (ii) the probability of social desirability bias in patient responses.
Ethical Approval
The study was approved by the institutional ethics committee (IRB-AIMS-2017-125). Written informed consent was obtained from study participants before collecting the data. The consent included title, purpose, methods, benefits and right to withdraw from the study at any point of time. Confidentiality was maintained throughout the study.
Competing interest
None declared.
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. All study expenses were covered by the principal investigator (RS).
Data statement
De-identified raw data can be made available for research-related requests after institutional clearance. | 2020-10-13T13:05:48.772Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "521551320a23442daf54459df19900b54dda164b",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/10/e037618.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d176ad86a7be1030bab2d903bea44c15051c9dee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233820546 | pes2o/s2orc | v3-fos-license | EFFECT OF SILICA ON ALKALINE BAGASSE CELLULOSE AND SOFTWOOD CELLULOSE
This study investigates the effect of silica on sugarcane bagasse (SCB) and softwood (SW) cellulose. Cellulose was extracted from raw SCB and SW chips using a three-step process, namely thermal pre-treatment, alkaline treatment and bleaching treatment. The alkali-treated cellulose was then subjected to silica surface modification using the solvent exchange method. The effect of silica modification on SCB and SW cellulose was investigated using X-ray diffraction (XRD) analysis, Fourier transform infrared (FTIR) spectroscopy and optical microscopy (OPM) techniques. Both the FTIR and XRD results confirm successful extraction of cellulose from both raw fibers and the addition of silane functional groups on the cellulose surface. XRD patterns of all samples revealed typical spectra for natural fibers, corresponding to the crystalline peaks of cellulose and to undissolved amorphous hemicellulose, respectively. SCB and SW showed similar increases in crystallinity with nanosilica surface modification. The surface morphology results showed that both SCB and SW cellulose modified with silica were swollen and displayed small particles agglomerating on the surface of the fibers. The solvent exchange method proved to be a successful method for modifying SCB and SW cellulose with nanosilica. It also proved to be cost-efficient and time-efficient.
INTRODUCTION
There is significant research interest in the application of natural fibers in the field of polymer composites due to their many advantages. Natural fibers are abundant, renewable, non-abrasive, non-toxic and biodegradable compared to synthetic fibers. They also possess outstanding mechanical properties with varying morphology and good surface properties (Sequeira et al. 2009, Jacob et al. 2005, Eichhorn et al. 2001, Sibuya et al. 2018). Amongst the natural fibers, sugarcane bagasse (SCB) is one of the major agricultural residues that has gained popularity lately, and it is an exceptional fiber for composite reinforcement because of the low modification cost and the high quality of the composites attained. SCB is a versatile fibrous agricultural residue obtained after extraction of juice from sugarcane, and it can be converted to paper, feedstock and biofuel, amongst others. It can also be used as a substrate for microbial processes to produce electricity, chemicals, enzymes, and other valuable products. SCB contains about 40-50% cellulose, 25-30% hemicellulose and about 20-25% lignin. It has been used as reinforcement for thermoplastics in the automotive, construction and food packaging industries (Loh et al. 2013, Ahmed et al. 2012).

Softwood (SW) is one of the most used natural fibers in thermoplastic reinforcement. The main components in SW are cellulose, lignin and hemicellulose, which account for 55%, 11% and 26% respectively. Wood elements employed in polymer composites vary in shape and can be used in combination or alone. The shape and size of the wood fibers determine the properties of the final product, such as its surface chemistry. The strength of wood polymer composites depends on factors like chemical composition, density, thickness, fiber content, and the type of bonding agent (if any). Softwood has applications in architectural woodwork, composite materials, construction and furniture (Terrett et al. 2019, Ashori 2008).

Cellulose is one of the most abundant natural polymers on earth. It can be extracted from several sources, including bagasse, wood, cotton, pineapple leaves and sisal fibers, amongst others. Structurally, it consists of D-anhydroglucopyranose units joined to form a linear molecular chain. Cellulose extraction is normally a three-step process, i.e. thermal pre-treatment, alkaline treatment and bleaching treatment. The alkalization step removes non-cellulosic components such as lignin, hemicellulose, waxes and pectin. The treatment increases the roughness of the fiber surface, resulting in improved adhesion between the fibre and the matrix.

Silica (SiO2) is mainly found as quartz in nature and in various living organisms. Silica as a filler is known for enhancing mechanical strength, thermal stability and transparency. It makes cellulose composites hydrophobic and resistant to structural deformation (Litschauer et al. 2011, Ha et al. 2019). Cellulose modification results in morphology changes and an increase in hydroxyl groups. Hence, cellulose surface modification enhances surface tension, wettability, swelling, adhesion and compatibility with polymers (Ashori et al. 2008, Wei et al. 2015). The preparation of cellulose-silica composites can be achieved in various ways, such as acid-catalyzed hydrolysis, the sol-gel method, or the use of precursors like tetraethoxysilane (TEOS) (Cerchiara et al. 2018, Maleki et al. 2014), amongst others.
One of the cheap and simple methods of preparing cellulose-silica composites is the solvent exchange method, which obviates the need for surfactants when incorporating cellulose fibers into non-polar polymers. This method uses a percolating approach to prepare the cellulose surface for effective interfacial interaction with hydrophobic silica, and it allows composite formation without the use of catalysts and crosslinking agents. The cellulose fibers assemble into a three-dimensional template, and the percolating structure is then filled with nanosilica. Cellulose-silica composites normally take days and considerable energy to synthesize; however, the solvent exchange method takes only a few hours because it is a one-step, energy-efficient method. This method yields a composite with reduced moisture absorption, enhanced thermal properties and dimensional stability (Rodríguez-Robledo et al. 2018). In a study by Barra et al. (2006), the treatment of sisal fiber with silica showed improvement in tensile strength, impact strength and tensile modulus. There are also changes in the morphology and porosity of cellulose-silica composites depending on the silica content. Silica-based composites can be used to coat implants and in medical products as biosensors, biocatalysts, and matrices for the controlled release of drugs (Hou et al. 2010, Xie et al. 2009). Due to their antifungal activity, such composites can be used to prevent the growth of Aspergillus versicolor, which degrades paper artwork such as books, manuscripts, paintings, etc.
MATERIALS AND METHODS
Sugarcane bagasse was obtained from the Tongaat Hulett sugar mill in Felixton, South Africa. Softwood (Pinus patula) chips were obtained from a nearby farm in Empangeni, South Africa. Silica (SiO2), sodium hydroxide (NaOH), sodium chlorite (NaClO2), glacial acetic acid (CH3COOH), acetone (C3H6O), and ethanol (C2H5OH) were purchased from Laboratory Consumables, South Africa. All chemicals were used without further purification.
Thermal pre-treatment
The SCB and SW feedstocks were separately subjected to thermal pre-treatment. The feedstock was boiled in water for an hour on a hot plate. The mixture was removed from the hot plate and rinsed with distilled water. The process was repeated four times to ensure that impurities and any trapped dirt were effectively removed.
Alkaline treatment
The thermally pre-treated SCB and SW were treated with an alkaline solution (2% NaOH) prepared by dissolving 100 g NaOH in 5L distilled water. The mixture was boiled for an hour and rinsed with distilled water. The process was repeated four times.
Bleaching
The buffer solution was prepared by adding 54 g NaOH, 150 mL acetic acid, and distilled water to a 2 L volumetric flask. The sodium chlorite solution (3.5%) was prepared by dissolving 70 g NaClO2 salt in 2 L distilled water. The buffer and sodium chlorite solutions were mixed in a 1:1 volume ratio before use. The alkali-pre-treated fibers were boiled in the solution for an hour before being rinsed with distilled water. The same process was repeated four times.
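As a quick check on the stated concentrations, a weight/volume basis (g per 100 mL, an assumption since the paper does not state the basis) reproduces both figures:

```python
def percent_wv(mass_g, volume_l):
    """Weight/volume concentration in % (grams per 100 mL of solution)."""
    return mass_g / (volume_l * 1000) * 100

print(percent_wv(100, 5))  # 2.0 -> the 2% NaOH solution (alkaline treatment)
print(percent_wv(70, 2))   # 3.5 -> the 3.5% sodium chlorite solution
```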
Solvent exchange method
Firstly, water was added dropwise to the cellulose fibers while stirring for 15 min to form a gel. The gel was added to ethanol in a 1:1 volume ratio and stirred for about an hour before acetone was added dropwise at a 1:2 acetone-water volume ratio. Stirring was continued for a further 3 hours before nanosilica, previously immersed in acetone, was added. The mixture was stirred for 10 more minutes. Lastly, the mixture was sonicated for 20 min in a 40 kHz ultrasound bath, maintaining the temperature below 40°C, before the resultant product was dried at 60°C in an oven.
Optical microscope (OPM)
The powdered samples of cellulose and its silica composites were analysed using a Zeiss optical microscope. The morphology was captured using a digital imaging system. A small amount of each sample was spread on a glass slide and covered with a coverslip.
X-Ray diffraction analysis (XRD)
The samples were analysed using an X-ray diffractometer (D8-Advance, Bruker AXS GmbH) at room temperature (RT) with a monochromatic CuKα radiation source (λ = 0.1539 nm) in step-scan mode, with the 2θ angle ranging from 0° to 60°, a step of 0.04° and a scanning time of 5.0 min.
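Given the Cu Kα wavelength stated above, the interplanar spacings behind the diffraction peaks discussed later follow from Bragg's law, λ = 2d sin θ. The sketch below is a small illustrative computation (not from the paper), using the approximate peak positions reported in the results:

```python
import math

WAVELENGTH_NM = 0.1539  # Cu K-alpha, as stated for the diffractometer above

def d_spacing_nm(two_theta_deg, wavelength_nm=WAVELENGTH_NM):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta_rad = math.radians(two_theta_deg / 2)
    return wavelength_nm / (2 * math.sin(theta_rad))

# Approximate 2-theta positions of the cellulose peaks discussed below:
for two_theta in (16.0, 23.0, 35.0):
    print(f"2theta = {two_theta:4.1f} deg -> d ~ {d_spacing_nm(two_theta):.3f} nm")
# -> roughly 0.553, 0.386 and 0.256 nm respectively
```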
Fourier transform infrared (FTIR) spectroscopy
The spectra of all samples were recorded on a Perkin-Elmer FTIR spectrophotometer using a standard ATR cell. The force gauge was adjusted to 90 for sufficient contact. All samples were scanned over the wavenumber range 450-4000 cm-1.
Spectral analysis
The FTIR spectra of the SCB and SW celluloses as prepared are shown in Fig. 1. Both spectra showed the common peaks associated with cellulosic materials, such as the band at 3336 cm-1 (O-H stretch). The -OH of silanol absorbs around 1030 cm-1 and 3300 cm-1, which might be the reason for the greater intensities observed in the modified celluloses. These peaks are more pronounced in SW cellulose than in SCB cellulose. This may be due to the reduction in hydrogen bonding of the cellulosic O-H groups, thereby increasing the free -OH concentration owing to the high energy of O-H bonds. Fig. 2 shows the diffractograms of the SCB and SW celluloses, as prepared and modified respectively. Both the prepared and modified fibers display typical spectra for natural fibers, with peaks around 16°, 23°, and 35° (2θ) corresponding to amorphous cellulose I, crystalline cellulose II and undissolved amorphous hemicellulose respectively (Pothan et al. 2002, Xie et al. 2009). With the introduction of silica to SCB cellulose, minor changes in peak positions and an increase in peak intensities were observed. The minor peak shifts for SCB may be due to disorder caused by modification of the fiber and might indicate an increase in the interplanar distance. The intensity increase suggests that the nanosilica modification improved the crystallinity of SCB. The same trend was observed with SW cellulose. Tab. 1 shows the crystallinity index (CI) estimated using the deconvolution and peak-height methods (Ciolacu et al. 2011, Johar et al. 2012, Kim et al. 2013). For both SCB and SW cellulose, there was an increase in crystallinity with the introduction of silica, as expected. The differences in crystallinity index values between SCB and SW might be due to differences in chemical composition and in the exposure of cellulose after alkali treatment. It is clearly evident that the addition of nanosilica particles improves the crystallinity of both SCB and SW cellulose.
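The peak-height method referred to above is commonly the Segal method, which computes the crystallinity index from the height of the crystalline peak (near 2θ ≈ 22-23° for cellulose) and the amorphous intensity minimum (near 2θ ≈ 18°). A minimal sketch with hypothetical intensity values, for illustration only (the paper's actual intensities are not reproduced here):

```python
def segal_ci(i_200, i_am):
    """Segal peak-height crystallinity index (%):
    CI = (I200 - Iam) / I200 * 100, where I200 is the height of the
    crystalline peak (~22-23 deg 2theta) and Iam the amorphous
    intensity minimum (~18 deg 2theta)."""
    return (i_200 - i_am) / i_200 * 100

# Hypothetical counts read off a diffractogram (not the paper's data):
print(segal_ci(i_200=1500, i_am=600))  # -> 60.0 (% crystallinity)
```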
Optical microscope
The optical microscope images of the unmodified and modified SCB and SW celluloses are displayed in Fig. 3. It can be seen that both the unmodified SCB and SW fibers are thin and long compared with their modified counterparts. Both SCB/SiO2 and SW/SiO2 are swollen and display small dusty particles agglomerating on the surface of the fibers (see arrows). Similar results were also reported for the modification of natural fibres (Maeda et al. 2006). According to Pothan et al. (2002), swelling of the fiber leads to a transfer of the electrochemical double layer and the shear plane of the fiber to the electrolyte solution. Moreover, the fiber length of both SCB and SW was not affected by the modification. In fact, longer fibers (5 mm) allow high stress to be transferred to the reinforcement, which contributes to superior mechanical properties (Loh et al. 2013).
CONCLUSIONS
The study investigated the effect of silica on the properties of sugarcane bagasse (SCB) and softwood (SW) cellulose. FTIR and XRD results confirmed that cellulose was successfully extracted from SCB and SW using alkali treatment. The surface modification of both SCB and SW cellulose was performed successfully using the solvent exchange method. FTIR analysis confirmed that silica functional groups were successfully added onto the surface of SCB and SW cellulose. There were evident shifts in peak positions and intensities with the introduction of silica. New peaks were detected at 1367 cm-1 and 435 cm-1, signaling that nanosilica functional groups were added on the cellulose surface. XRD patterns showed minor changes in peak positions and an increase in peak intensities with the introduction of silica. There was also an increase in the crystallinity index, estimated using the deconvolution and peak-height methods, for both modified SW and SCB cellulose. The surface morphology displayed fiber swelling with the introduction of silica, which has an impact on the mechanical properties of the fiber and the resultant composites. The solvent exchange method proved to be cheap, simple and time-efficient for cellulose modification. | 2021-05-07T00:02:52.853Z | 2021-03-08T00:00:00.000 | {
"year": 2021,
"sha1": "9e961b087096deb615f022df2622690aa0381ddd",
"oa_license": null,
"oa_url": "https://doi.org/10.37763/wr.1336-4561/66.1.8594",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "514ce408907d0ddb402fbed123164e2215254c65",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
237378862 | pes2o/s2orc | v3-fos-license | A simple human cell model for TAU trafficking and tauopathy-related TAU pathology
The microtubule (MT)-associated protein TAU is highly abundant in the axon of human brain neurons, where it binds to and stabilizes MT filaments. Thereby, TAU regulates the dynamic (dis)assembly of MT strands and is involved in a wide range of neuronal functions. In Alzheimer’s disease (AD) and other tauopathies, TAU is missorted into the somatodendritic compartment. TAU missorting is accompanied by (or leads to) abnormal TAU phosphorylation, MT destabilization, and loss of dendritic spines and mitochondria, eventually resulting in TAU aggregation, neuronal dysfunction and cell death (Arendt et al., 2016). Strikingly, the mechanisms of TAU sorting, and the detrimental cascade upon its failure, are still not fully understood.
Which neuronal cell models are available for studying TAU sorting? Primary rodent neurons are often used (Zempel and Mandelkow, 2017), but have considerable limitations, including the need for animals and species-specific TAU-intrinsic (e.g., different isoforms) and TAU-extrinsic differences (e.g., different interactomes). Recently, human induced pluripotent stem cell (iPSC)-derived and neural progenitor cell (NPC)-derived neuronal models became relevant for mimicking human disease conditions in vitro, also for TAU trafficking (Sohn et al., 2019) and aggregation (Choi et al., 2014). Human iPSC- and NPC-derived neurons have many benefits, but the neuronal differentiation of these cells is complex, time- and resource-consuming, and often results in heterogeneous cultures with moderate differentiation efficiency. In contrast, SH-SY5Y neuroblastoma cells are human-derived, robust and cheap in maintenance, highly proliferative, and, in contrast to post-mitotic neurons, accessible to all forms of stable genetic manipulation, including of the TAU-encoding MAPT gene (Bell and Zempel, 2021). SH-SY5Y cells can be differentiated into neuronal cells (SH-SY5Y-derived neurons) with various procedures.
In this perspective, we outline the suitability of these cells (i) for studying the TAU-intrinsic regulation of TAU trafficking and (ii) for assessing the impact of cellular interaction partners. Further, we discuss the limitations in comparison to other cell models for studying (i) TAU-induced postsynaptic spine loss, (ii) end-stage TAU pathology-related aggregation, (iii) neuronal subtype-specific susceptibility to TAU pathology, and (iv) tauopathies caused by traumatic axon injury (TAI).
General features of SH-SY5Y cells: SH-SY5Y neuroblastoma cells have been used for decades to study general principles of neurobiology and tauopathies, despite the fact that they harbour several chromosomal abnormalities, complex rearrangements, and a copy number gain of the MAPT locus on chromosome 17 (Bell and Zempel, 2021). The pathological relevance of abnormal MAPT overexpression remains unclear. While Mapt-knockout mice with roughly 1.5-fold human TAU overexpression show no abnormalities, an increase of TAU levels is correlated with faster disease progression in sporadic AD patients. Clinically, only very few cases with MAPT microduplications have been described, showing a large phenotypic variety with both neurodevelopmental and neurodegenerative disorders.
The differentiation of naïve SH-SY5Y cells into neuronal cells (SH-SY5Y-derived neurons) within one to two weeks is well established with various substances, such as retinoic acid (RA), brain-derived neurotrophic factor (BDNF) or nerve growth factor (Kovalevich and Langford, 2013). SH-SY5Y-derived neurons show important features of mature neurons, including pronounced neuronal polarity, axonal outgrowth, the neuron-typical separation of axonal TAU and somatodendritic MT-associated protein 2, expression of neuronal maturation markers (like neuronal nuclei, synaptophysin or the synaptic vesicle protein SV2), and neuronal excitability (Bell and Zempel, 2021). Of note, the differentiation efficiency is highly variable and depends on the procedure used (see below).
In SH-SY5Y-derived neurons, all six major human brain TAU isoforms are expressed. The reported isoform ratios differ from those of the adult human brain, as the 0N3R isoform is most abundant (Bell and Zempel, 2021). Nevertheless, the synchronous expression of all six TAU isoforms outclasses the situation in rodent cell culture models, where only four of the six human isoforms are expressed. iPSC-derived neuron cultures express all six isoforms, depending on the differentiation method, at the earliest after one month of cultivation, and also at a non-human-brain-like ratio.
The phosphorylation state of TAU directly regulates its MT-binding affinity and is thought to play a role in the process of axonal enrichment (Arendt et al., 2016). In SH-SY5Y cells, several known residues are phosphorylated, including the epitopes AT8 (S199, S202), AT180 (T231, S235), 12E8 (S262), and PHF1 (S396, S404). Many known TAU-related kinases (MAPK, CDC2, CDK5), phosphatases (PP1, PP2A) and likely other PTM-modifying enzymes regulate TAU in SH-SY5Y cells (Bell and Zempel, 2021). Thus, the presence of major TAU PTM-modifying enzymes in SH-SY5Y-derived neurons is plausible, which allows studying the interplay of TAU trafficking and PTM regulation, including isoform-dependent differences in phosphorylation state and MT-binding affinity.
TAU-intrinsic factors involved in TAU trafficking:
We recently discovered that SH-SY5Y-derived neurons sort endogenous TAU with a slightly lower efficiency than mouse primary neurons and iPSC-derived neurons. However, SH-SY5Y-derived neurons show and tolerate transient overexpression of transfected TAU much longer and achieve endogenous-like sorting efficiency, in contrast to primary rodent neurons (Figure 1A-D) (Bell et al., 2021). This enables us to study TAU trafficking of truncated, modified or otherwise engineered TAU constructs. Initial data with a truncated, C-terminus-lacking TAU construct show similar sorting behaviour in SH-SY5Y-derived neurons, mouse primary neurons and iPSC-derived neurons (Bell et al., 2021). Hence, we consider a comprehensive analysis of domains, motifs, or interaction sites required for successful sorting feasible in SH-SY5Y-derived neurons.
Cellular and axon initial segment-specific factors involved in TAU trafficking:
The axon initial segment (AIS), a highly specialized region at the proximal axon with ankyrin G (ANKG) as a master organizer (Rasband, 2010), is critical for developing neuronal polarity and action potential generation. In rodent primary neurons, ANKG and the tripartite motif-containing protein 46 (TRIM46) are critical for successful axonal TAU sorting (Rasband, 2010; Van Beuningen et al., 2015). Surprisingly, SH-SY5Y-derived neurons show efficient TAU sorting without any detectable accumulation of ANKG and TRIM46 at the proximal axon (Figure 1E and F). This lack of a classical AIS in SH-SY5Y-derived neurons could be the result of neuronal immaturity (primary rodent neurons develop TRIM46/ANKG accumulation at DIV3/4) and has to be considered a potential limitation for mimicking the in vivo situation. However, this cell system bears potential for future studies, like addressing the importance of TRIM46-mediated MT polarization for TAU trafficking, and the general necessity of TRIM46 or ANKG for MT polarization at the AIS.
The (largely) ANKG/TRIM46-independent TAU sorting hints at the presence of unidentified mediators of neuronal (and TAU) polarity. Recent peroxidase- or biotinylation-based proximity labelling methods could be helpful to assess TAU-AIS interactions in SH-SY5Y-derived neurons, also in comparison with other neuronal cell models (Cho et al., 2020). However, AIS-specific proximity labelling requires high sensitivity to detect transient interactions and a system enabling site-specific labelling without affecting the TAU trafficking process.
Synaptotoxicity and spine loss due to TAU missorting: Elevated levels of dendritic TAU result in mitochondrial mislocalization and postsynaptic spine loss via TAU-induced recruitment of the excitotoxicity-mediating kinase Fyn, or tubulin tyrosine ligase-like proteins that induce microtubule breakdown (Ittner and Ittner, 2018). The cascade from TAU missorting to spine degradation, however, is under debate. Suitable neuronal cell models must exhibit functional synapses and dendritic spine formation.
In SH-SY5Y-derived neurons, different pre- and postsynaptic markers as well as vesicle proteins are expressed (Bell and Zempel, 2021). However, the spatial distribution of these markers along axonal or dendritic processes does not faithfully recapitulate the in vivo situation. Robust co-localization of pre- and postsynaptic markers, as seen in rodent primary neurons or in iPSC- and NPC-derived neurons, is not observed. The obvious limitation of SH-SY5Y-derived neurons is the short lifespan of the cultures, with maximum growth periods of four to five weeks after RA/BDNF treatment. Thus, despite the reported excitability and the presence of functional synaptic vesicles in SH-SY5Y-derived neurons (Bell and Zempel, 2021), these cells might lack the degree of maturity that is necessary to mimic the synaptotoxic effects of pathological TAU in disease-burdened human neurons.
Tauopathy-related TAU aggregation:
While many features of tauopathies like TAU missorting, hyperphosphorylation, or postsynaptic degradation can be induced in cell culture systems with external stressors, the formation of insoluble TAU aggregates is not inducible in most systems, including SH-SY5Y cells. Only with overexpression of pro-aggregant TAU mutants can TAU aggregates or inclusions be obtained (Figure 1A, right panels). However, this artificial way of TAU aggregation, also performed in SH-SY5Y cells (Bell and Zempel, 2021), may generate aggregates reminiscent of certain genetic forms of tauopathy, but different from the insoluble TAU aggregates found in AD brains. Strikingly, NPC-derived neurons were generated that successfully developed profound amyloid-β (Aβ) pathology, Aβ-induced TAU aggregation, and AD brain-like neuronal morphology after two to three months of cultivation (Choi et al., 2014). This AD-like pathology was achieved with lentiviral transduction of amyloid precursor protein and presenilin variants known from familial AD cases, and by using a three-dimensional Matrigel-based culture matrix. This indicates that NPC- or iPSC-derived neuron cultures are more promising for studying TAU aggregation, primarily due to the much longer culture lifespan and their demonstrated ability to form aggregates composed of endogenous physiological TAU, as seen in AD patients.
Neuronal subtype-specific susceptibility for TAU pathology: In many tauopathies including AD, the progression of TAU pathology is brain region-specific, e.g. the locus coeruleus is usually affected very early and even in asymptomatic individuals. This suggests that specific inter-neuronal differences are critical for the susceptibility to TAU pathology. However, the underlying neuron subtype-specific features remain enigmatic.
For SH-SY5Y-derived neurons, the reported neuronal identities include, depending on the differentiation treatment and the analysed biochemical markers, primarily noradrenergic, dopaminergic, or cholinergic neuronal subtypes (Kovalevich and Langford, 2013). Interestingly, these neuron types are found in subcortical nuclei that are affected early in many tauopathies including AD: the locus coeruleus, the nucleus basalis, and the substantia nigra pars compacta. Since the underlying pathomechanisms are still unclear, steerable generation of distinct SH-SY5Y-derived neuron subtypes would allow the study of TAU-based toxicity in different neuronal subpopulations.
However, the reported identity of SH-SY5Y-derived neurons after specific differentiation procedures is inconsistent. Hence, the neuronal identities of SH-SY5Y-derived neurons might not be distinctive cellular subtypes, but rather accentuations of a spectrum of the same entity (Bell and Zempel, 2021). We conducted comparative analyses of four differentiation procedures but did not observe clearly distinct expression patterns of key enzymes commonly used to define neuronal subtypes (Figure 1I and J). Another principal obstacle is that general age-related changes in cellular functionality, and major features of locus coeruleus, nucleus basalis or substantia nigra pars compacta neurons thought to have a large impact on their increased susceptibility, are certainly difficult to recapitulate in cell culture (Bell and Zempel, 2021).
Taken together, the resulting neuronal identities are ill-defined and might not recapitulate sufficient features of cells affected in AD and related tauopathies to be useful for studying neuronal subtype-specific susceptibility to TAU pathology.
Traumatic brain injury modelling with induced axon lesion:
In one subgroup of tauopathies, including traumatic brain injury and chronic traumatic encephalopathy, mechanically evoked traumatic axon injury (TAI) precedes TAU pathology and NFT formation (Blennow et al., 2012). The underlying pathological cascade is still under debate. Although TAI mouse models exist, an in vitro laser-inducible axotomy cell model bears potential for several approaches. It allows the induction of precise lesions at a single-cell and even compartment-specific scale. Available live-cell imaging tools (e.g., photoconvertible TAU constructs, live AIS cytoskeleton markers and biosensors) would allow monitoring of TAI-dependent alterations of TAU trafficking, phosphorylation, or the AIS architecture.
We tested whether SH-SY5Y-derived neurons are suitable for UV laser-induced axotomy. For this, we measured the somatic levels of transfected mTAU mCitrine after axotomy, in comparison to uncut neurons (Figure 1G and H). Unfortunately, many neurons detached several hours after axotomy, impeding downstream analyses. Of note, preliminary experiments with primary mouse neurons and iPSC-derived neurons suggested a higher degree of attachment and viability (data not shown).
Although the experimental setup could still be improved to enable the use of SH-SY5Y-derived neurons, the data and experience suggest that alternative neuronal cell culture models are more suitable for laser-mediated axotomy.
Conclusion: SH-SY5Y-derived neurons are of human origin, capable of expressing all six human brain TAU isoforms, and exhibit many features of disease-relevant mature neurons. They are suitable for studying TAU sorting, as they show sorting of endogenous and transfected (physiological and truncated) TAU constructs similar to often-used neuronal cell models. The absence of classical AIS formation allows the study of TAU-sorting factors that are independent of ANKG or TRIM46 enrichment.
In contrast, clear limitations of this cell model exist in comparison to other human and rodent neuronal cell models for studying (i) TAU missorting-induced spine loss, (ii) AD brain-like TAU aggregates, (iii) TAI-induced TAU pathology, and (iv) neuronal subtype-specific susceptibility to TAU pathology. | 2021-09-02T14:05:25.300Z | 2021-08-30T00:00:00.000 | {
"year": 2021,
"sha1": "042d81e605215b478f9e31982918d1a67ce86d55",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc8530135?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e0baeac2adf1b29a647d147a2ff26629e449275",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252846677 | pes2o/s2orc | v3-fos-license | Stochastic Constrained DRO with a Complexity Independent of Sample Size
Distributionally Robust Optimization (DRO), as a popular method to train robust models against distribution shift between training and test sets, has received tremendous attention in recent years. In this paper, we propose and analyze stochastic algorithms that apply to both non-convex and convex losses for solving the Kullback-Leibler (KL) divergence constrained DRO problem. Compared with existing methods solving this problem, our stochastic algorithms not only enjoy a competitive, if not better, complexity independent of the sample size, but also require only a constant batch size at every iteration, which is more practical for broad applications. We establish a nearly optimal complexity bound for finding an $\epsilon$-stationary solution for non-convex losses and an optimal complexity for finding an $\epsilon$-optimal solution for convex losses. Empirical studies demonstrate the effectiveness of the proposed algorithms for solving non-convex and convex constrained DRO problems.
Introduction
Large-scale optimization of DRO has recently garnered increasing attention due to its promising performance in handling noisy labels, imbalanced data and adversarial data (Namkoong & Duchi, 2017; Zhu et al., 2019; Qi et al., 2020a; Chen & Paschalidis, 2018). A variety of primal-dual algorithms can be used for solving DRO problems (Rafique et al., 2021; Nemirovski et al., 2009). However, primal-dual algorithms inevitably suffer from additional overhead for handling an n-dimensional dual variable, where n is the sample size. This is an undesirable feature for large-scale deep learning, where n could be in the order of millions or even billions. Hence, a recent trend is to design dual-free algorithms for solving various DRO problems (Qi et al., 2021; Jin et al., 2021; Levy et al., 2020).
In this paper, we provide efficient dual-free algorithms for solving the following constrained DRO problem, which are still lacking in the literature:

$$\min_{w\in\mathcal{W}}\ \max_{p\in\Delta_n:\,D(p,1/n)\le\rho}\ \sum_{i=1}^{n} p_i\,\ell_i(w) - \lambda_0\, D(p, 1/n), \tag{1}$$

where w denotes the model parameter, W is a closed convex set, Δ_n = {p ∈ R^n : Σ_{i=1}^n p_i = 1, p_i ≥ 0} denotes the n-dimensional simplex, ℓ_i(w) denotes the loss function on the i-th data point, D(p, 1/n) = Σ_{i=1}^n p_i log(p_i n) represents the Kullback-Leibler (KL) divergence between p and the uniform probabilities 1/n ∈ R^n, ρ is the constraint parameter, and λ_0 > 0 is a small constant. A small KL regularization on p is added to ensure that the objective in terms of w is smooth, which is needed for deriving fast convergence.
There are several reasons for considering the above constrained DRO problem. First, existing dual-free algorithms are not satisfactory (Qi et al., 2021; Jin et al., 2021; Levy et al., 2020; Hu et al., 2021). They are either restricted to problems with no additional constraints on the dual variable p except for the simplex constraint (Qi et al., 2021; Jin et al., 2021), or restricted to convex analysis, or have a requirement on the batch size that depends on the accuracy level (Levy et al., 2020; Hu et al., 2021). Second, the Kullback-Leibler divergence is a more natural metric for measuring the distance between two distributions than other divergence measures, e.g., the Euclidean distance. Third, compared with the KL-regularized DRO problem without constraints, the above KL-constrained DRO formulation automatically determines a proper regularization effect, depending on the optimal solution, by tuning the constraint upper bound ρ. In other words, solving the constrained DRO with ρ offers the capability of optimizing the temperature parameter λ in Eq. (2); the corresponding log-sum-exponential form with a temperature parameter λ is widely used in many ML/AI methods, e.g., contrastive self-supervised learning (Yuan et al., 2022; Qiu et al., 2023b). Empirical studies have demonstrated that selecting an appropriate value for λ is crucial for achieving good performance (Goel et al., 2022; Li et al., 2021a; Radford et al., 2021). Therefore, solving the constrained distributionally robust optimization problem provides the added advantage of identifying an optimal temperature during the training process.
The question to be addressed is the following: can we develop stochastic algorithms whose oracle complexity is optimal for both convex and non-convex losses, and whose per-iteration complexity is independent of the sample size n, without imposing any requirement on a (large) batch size?
We address the above question by (i) deriving an equivalent primal-only formulation of a compositional form; (ii) designing two algorithms for non-convex losses and extending them to convex losses; and (iii) establishing optimal complexities for both convex and non-convex losses. In particular, for a non-convex and smooth loss function ℓ_i(w), we achieve an oracle complexity of O(1/ϵ^3) for finding an ϵ-stationary solution; and for a convex and smooth loss function, we achieve an oracle complexity of O(1/ϵ^2) for finding an ϵ-optimal solution. We emphasize that these results are on par with the best complexities that can be achieved by primal-dual algorithms (Huang et al., 2020; Namkoong & Duchi, 2016). However, our algorithms have a per-iteration complexity of O(d), which is independent of the sample size n. The convergence comparison of different methods for solving (1) is shown in Table 1.
To achieve these results, we first convert problem (1) into an equivalent problem. By considering x = (w^T, λ)^T ∈ R^{d+1} as a single variable to be optimized, the objective function is a compositional function of x of the form f(g(x)), where

$$g(x) = \Big(\lambda,\ \frac{1}{n}\sum_{i=1}^{n}\exp\big(\ell_i(w)/\lambda\big)\Big)\in\mathbb{R}^2, \qquad f(g) = g_1\log(g_2) + g_1\rho.$$

However, several challenges must be addressed to achieve optimal complexities for both convex and non-convex loss functions ℓ_i(w). First, the problem F(x) is non-smooth in terms of x given the domain constraints w ∈ W and λ ≥ λ_0. Second, the gradient of the outer function f(g) is non-Lipschitz continuous in terms of the second coordinate g_2 if λ is unbounded, while Lipschitz continuity is essential for all existing stochastic compositional optimization algorithms. Third, to the best of our knowledge, no optimal complexity in the order of O(1/ϵ^2) has been achieved for a convex compositional function except in Zhang & Lan (2021), which assumes f is convex and component-wise non-decreasing and hence is not applicable to (2).
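To make the compositional structure concrete, the following minimal Python sketch (our illustration, not the authors' code; `losses` is an arbitrary example vector) evaluates F(x) = f(g(x)) once the per-sample losses ℓ_i(w) have been computed:

```python
import numpy as np

def compositional_objective(losses, lam, rho):
    """Evaluate F(x) = f(g(x)) with g(x) = (lam, (1/n) * sum_i exp(l_i / lam))
    and f(g) = g1 * log(g2) + g1 * rho."""
    g1 = lam
    g2 = np.mean(np.exp(losses / lam))   # inner component g_2(x)
    return g1 * np.log(g2) + g1 * rho    # outer map f(g)

losses = np.array([0.1, 0.5, 2.0])
# Large lam: objective ~ mean(losses) + lam * rho (near-uniform weighting).
print(compositional_objective(losses, lam=50.0, rho=0.1))
# Small lam: the log-sum-exp term ~ max(losses) (worst-case weighting).
print(compositional_objective(losses, lam=0.05, rho=0.1))
```

The example also illustrates why λ acts as a temperature: it interpolates between the average loss and the worst-case loss.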
To address the first two challenges, we derive an upper bound for the optimal λ assuming that ℓ_i(w) is bounded for w ∈ W, i.e., λ ∈ [λ_0, λ̄], which allows us to establish the smoothness of F(x) and f(g). Then we consider optimizing F̄(x) = F(x) + δ_X(x), where δ_X denotes the indicator function of the constraint set X = W × [λ_0, λ̄]. By leveraging the smoothness of F and f, we design stochastic algorithms that utilize a recursive variance-reduction technique to compute a stochastic estimator of the gradient of F(x), which allows us to achieve a complexity of O(1/ϵ^3) for finding a solution x̄ such that E[dist(0, ∂̂F̄(x̄))] ≤ ϵ. To address the third challenge, we consider optimizing F̄_µ(x) = F̄(x) + µ∥x∥^2/2 for a small µ. We prove that F̄_µ(x) satisfies a Kurdyka-Łojasiewicz inequality, which allows us to boost the convergence of the aforementioned algorithm to an optimal complexity of O(1/ϵ^2) for finding an ϵ-optimal solution of F̄(x). Besides the optimal algorithms, we also present simpler algorithms with a worse complexity, which are more practical for deep learning applications as they do not require two backpropagations at two different points per iteration, unlike the optimal algorithms.
In the existing analyses of compositional optimization algorithms, either (i) the problem is assumed to be unconstrained, e.g., Qi et al. (2020a; 2021), or (ii) the complexity is sub-optimal, e.g., Ghadimi et al. (2020), or (iii) the problem is restricted, e.g., the outer function f is convex and non-decreasing as assumed in Zhang & Lan (2021). To the best of our knowledge, this is the first result for stochastic compositional optimization with a domain constraint that enjoys optimal complexities for both convex and non-convex objectives.
Related Work
DRO springs from the robust optimization literature (Bertsimas et al., 2018; Ben-Tal et al., 2013) and has been extensively studied in machine learning and statistics (Ahmadi-Javid, 2012; Namkoong & Duchi, 2017; Duchi et al., 2016; Staib & Jegelka, 2019; Deng et al., 2020; Qi et al., 2020b; Duchi & Namkoong, 2021) and operations research (Rahimian & Mehrotra, 2019; Delage & Ye, 2010). Depending on how the uncertain variables are constrained or regularized, there are constrained DRO formulations, which specify a constraint set for the uncertain variables, and regularized DRO formulations, which use a regularization term in the objective (Levy et al., 2020). Duchi et al. (2016) showed that minimizing constrained DRO with an f-divergence constraint, including a χ²-divergence constraint and a KL-divergence constraint, is equivalent to adding variance regularization to the Empirical Risk Minimization (ERM) objective, which is able to reduce uncertainty and improve the generalization performance of the model.
Primal-Dual Algorithms.
Many primal-dual algorithms designed for min-max problems (Nemirovski et al., 2009; Juditsky et al., 2011; Yan et al., 2019; Namkoong & Duchi, 2016; Yan et al., 2020; Song et al., 2021; Alacaoglu et al., 2022) are applicable to solving (1) when ℓ is a convex function. For non-convex loss functions, Rafique et al. (2021) and Yan et al. (2020) recently proposed non-convex stochastic algorithms for solving non-convex strongly concave min-max problems, which are applicable to solving (1) when ℓ is a weakly convex function or smooth. Many primal-dual stochastic algorithms have been proposed for solving non-convex strongly concave problems with a state-of-the-art oracle complexity of O(1/ϵ^3) for finding a stationary solution (Huang et al., 2020; Luo et al., 2020; Tran-Dinh et al., 2020). However, primal-dual algorithms require maintaining and updating an O(n)-dimensional vector for the dual variable.
Constrained DRO. Wang et al. (2021) study Sinkhorn-distance-constrained DRO, a variant of the Wasserstein distance based on entropic regularization. An efficient batch gradient descent method with a bisection search was proposed to obtain a near-optimal solution with an arbitrarily small sub-optimality gap. However, no non-asymptotic convergence results are established in their paper. Duchi & Namkoong (2021) developed a convex DRO framework with f-divergence constraints to improve model robustness. The authors developed finite-sample minimax upper and lower bounds and a non-asymptotic convergence rate of O(1/√n), and provided empirical studies on real distribution-shift tasks using an existing interior-point solver (Udell et al., 2014) and gradient descent with backtracking Armijo line searches (Boyd et al., 2004). However, no stochastic algorithms that directly optimize the considered constrained DRO with non-asymptotic convergence rates are provided in their paper.
Recently, Levy et al. (2020) proposed algorithms with complexity independent of the sample size, based on gradient estimators, for solving a group of DRO problems in the convex setting. More specifically, they achieved a convergence rate of O(1/ϵ²) for the χ²-constrained/regularized and CVaR-constrained convex DRO problems, with a batch size depending only logarithmically on the inverse accuracy level, O(log(1/ϵ)), with the help of a multi-level Monte-Carlo (MLMC) gradient estimator. For the KL-constrained DRO objective and other more general settings, they achieve a convergence rate of O(1/ϵ³) under a Lipschitz continuity assumption on the inverse CDF of the loss function, using a mini-batch gradient estimator with a batch size of the order O(1/ϵ) (please refer to Table 3 in Levy et al. (2020)). In addition, Levy et al. (2020) also proposed a simple stochastic gradient method for solving the dual expression of the DRO formulation, called Dual SGM. In terms of convergence, they only discussed convergence guarantees for the χ²-regularized and CVaR-penalized convex DRO problems (cf. Claim 3 in their paper). However, there is still a gap in proving the convergence rate of Dual SGM for non-convex KL-constrained DRO problems, due to challenges similar to those mentioned in the previous section, in particular establishing the smoothness condition in terms of the primal variable and the Lagrangian multipliers (denoted as x, ν, η, respectively, in their paper). This paper makes unique contributions in addressing these challenges by (i) removing η in Dual SGM and deriving the box constraint for our Lagrangian multiplier λ to prove the smoothness condition; and (ii) establishing an optimal complexity of the order O(1/ϵ³) in the presence of non-smooth box constraints, which, to the best of our knowledge, is the first such result for solving a non-convex constrained compositional optimization problem.
Furthermore, it is noteworthy that the KL-constrained DRO formulation (2) offers a distinct advantage over the KL-regularized DRO problem without constraints. Specifically, the proposed algorithms enable automatic determination of an optimal regularization effect for the constrained DRO (2) by optimizing λ, through the tuning of the constraint upper bound ρ. This approach has been empirically demonstrated to yield significant efficacy in the realm of contrastive learning, as substantiated by the findings of Qiu et al. (2023b).
Regularized DRO. DRO with a KL-divergence-regularized objective has shown superior performance in addressing data imbalance problems (Qi et al., 2021; 2020a; Li et al., 2020; 2021b). Jin et al. (2021) proposed a mini-batch normalized gradient descent method with momentum that can find a first-order ϵ-stationary point with an oracle complexity of O(1/ϵ⁴) for KL-regularized DRO and χ²-regularized DRO with a non-convex loss. They address the challenge that the loss function can be unbounded. Qi et al. (2021) proposed online stochastic compositional algorithms to solve KL-regularized DRO. They leveraged a recursive variance-reduction technique (STORM (Cutkosky & Orabona, 2019)) to compute a gradient estimator for the model parameter w only. They derived a complexity of O(1/ϵ³) for a general non-convex problem and improved it to O(1/(µϵ)) for problems that satisfy a µ-PL condition. Qi et al. (2020a) report a worse complexity for a simpler algorithm for solving KL-regularized DRO. Li et al. (2020; 2021b) studied the effectiveness of the KL-regularized objective in different applications, such as enforcing fairness between subgroups and handling class imbalance.
Compositional Functions and DRO. The connection between compositional functions and DRO formulations has been observed and leveraged in the literature. Dentcheva et al. (2017) studied the statistical estimation of compositional functionals with applications to estimating conditional-value-at-risk measures, which is closely related to CVaR-constrained DRO. However, they do not consider stochastic optimization algorithms. To the best of our knowledge, Qi et al. (2021) were the first to use stochastic compositional optimization algorithms to solve KL-regularized DRO problems. Our work is different in that we solve KL-constrained DRO problems, which is more challenging than the KL-regularized case. The benefits of using compositional optimization for solving DRO include: (i) we do not need to maintain and update a high-dimensional dual variable as in primal-dual methods (Rafique et al., 2021); and (ii) we do not need to worry about the batch size as in MLMC-based stochastic methods (Levy et al., 2020; Hu et al., 2021).
Optimizing the Temperature Parameter. Our formulation and algorithms can be applied to optimizing the temperature parameter in the temperature-scaled cross-entropy loss, which has wide applications in machine learning and artificial intelligence, e.g., knowledge distillation (Hinton et al., 2015) and self-supervised learning (Chen et al., 2020). Recently, Qiu et al. (2023a) leveraged the optimization technique proposed in this paper for optimizing the individualized temperature parameters in the global contrastive loss of self-supervised learning.
Preliminaries
Notations: Let ∥·∥ denote the Euclidean norm of a vector or the spectral norm of a matrix. For x = (w^T, λ)^T, let g_i(x) = (λ, exp(ℓ_i(w)/λ)) denote the stochastic estimator of g(x), where D denotes the training set and i denotes the index of a sample randomly drawn from D. Let f_λ(·) = λ log(·) + λρ, and let ∇f_λ(g) = λ/g denote the gradient of f_λ in terms of g. Let Π_X(·) denote the Euclidean projection onto the domain X. Let [T] = {1, . . . , T} and let τ ∼ [T] denote a randomly selected index. We make the following standard assumptions regarding problem (2).

Assumption 1. There exist constants R, G, C, and L such that: (a) the domain of the model parameter W is bounded, i.e., there exists R > 0 such that ∥w∥ ≤ R for any w ∈ W; (b) ℓ_i(w) is L-smooth, i.e., ∥∇ℓ_i(w) − ∇ℓ_i(w′)∥ ≤ L∥w − w′∥ for all w, w′ ∈ W; (c) ℓ_i(w) is a G-Lipschitz continuous function bounded by C, i.e., ∥∇ℓ_i(w)∥ ≤ G and |ℓ_i(w)| ≤ C for all w ∈ W and i ∈ D; (d) there exist a positive constant ∆ < ∞ and an initial solution x_1 such that F(x_1) − min_{x∈X} F(x) ≤ ∆.

Assumption 2. Let σ_g, σ_∇g be positive constants and σ² = max{σ_g, σ_∇g}. For i ∈ D, assume that E[|g_i(x) − g(x)|²] ≤ σ_g and E[∥∇g_i(x) − ∇g(x)∥²] ≤ σ_∇g.

Remark: Assumption 1(a), i.e., the boundedness of W, is also assumed in Levy et al. (2020) and is mainly used for the convex analysis. Assumptions 1(b) and (c), i.e., the Lipschitz continuity and smoothness of the loss function, and the variance bounds for g_i and its gradient in Assumption 2, can be derived from Assumption 1. However, F(w, λ) is not necessarily smooth in terms of x = (w^T, λ)^T if λ is unbounded. To address this concern, we prove that the optimal λ is indeed bounded.

Lemma 1. The optimal solution of the dual variable λ* to problem (2) is upper bounded by λ̄ = λ_0 + C/ρ, where C is the upper bound of the loss function and ρ is the constraint parameter.
Thus, we can constrain the domain of λ in the DRO formulation (2) by the upper bound λ̄ and obtain the following equivalent formulation:

$$\min_{w\in\mathcal{W}}\ \min_{\lambda\in[\lambda_0,\bar\lambda]}\ \lambda\log\Big(\frac{1}{n}\sum_{i=1}^{n}\exp\big(\ell_i(w)/\lambda\big)\Big) + \lambda\rho. \tag{3}$$

The upper bound λ̄ guarantees the smoothness of F(w, λ) and the smoothness of f_λ(·), which are critical for the proposed algorithms to enjoy fast convergence rates.

Lemma 2. F(w, λ) is L_F-smooth for any w ∈ W and λ ∈ [λ_0, λ̄], where L_F = λ̄L_g² + 2L_g + λ̄L_∇g + 1 + λ̄. L_g and L_∇g are constants independent of the sample size n and are explicitly derived in Lemma 7.
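To see where the scaling λ̄ = λ_0 + C/ρ comes from, here is a one-line sketch (assuming, for simplicity, nonnegative losses 0 ≤ ℓ_i(w) ≤ C; the general case |ℓ_i(w)| ≤ C is handled in the proof of Lemma 1). Since the log-sum-exp term then lies in [0, C] and λ* minimizes the objective over λ,

$$\lambda^*\rho \;\le\; F(w,\lambda^*) \;\le\; F(w,\lambda_0) \;\le\; C + \lambda_0\rho \quad\Longrightarrow\quad \lambda^* \;\le\; \lambda_0 + \frac{C}{\rho}.$$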
Since F̄ is non-smooth, we define the regular subgradient as follows. Definition 1 (Regular Subgradient). Consider a function Φ: R^d → R ∪ {+∞} and a point x̄ with Φ(x̄) finite. A vector v is a regular subgradient of Φ at x̄, written v ∈ ∂̂Φ(x̄), if Φ(x) ≥ Φ(x̄) + ⟨v, x − x̄⟩ + o(∥x − x̄∥). Since F(x) is differentiable, we use ∂̂F̄(x) = ∇F(x) + ∂̂δ_X(x) (see Exercise 8.8 in Rockafellar & Wets (1998)) in the analysis. Recall the definition of the subgradient of a convex function F̄, which is denoted by ∂F̄. When F̄(x) is convex, we have ∂̂F̄(x) = ∂F̄(x) (see Proposition 8.2 in Rockafellar & Wets (1998)). The quantity dist(0, ∂̂F̄(x)) measures the distance between the origin and the regular subgradient set of F̄ at x. The oracle complexity is defined below. Definition 2 (Oracle Complexity). Let ϵ > 0 be a small constant. The oracle complexity is defined as the number of processed samples z needed to achieve E[dist(0, ∂̂F̄(x))] ≤ ϵ for a non-convex loss function, or E[F̄(x) − F̄(x*)] ≤ ϵ for a convex loss function.
Equivalence Derivation
Before we move to the proposed algorithms in the next section, we derive the equivalence between equations (1), (2), and (3). Recall the original KL-constrained DRO problem (1), where D(p, 1/n) is the KL divergence and λ_0 is a small positive constant.
In order to tackle this problem, let us first consider the robust loss

$$\max_{p\in\Delta_n:\,D(p,1/n)\le\rho}\ \sum_{i=1}^{n} p_i\,\ell_i(w) - \lambda_0\, D(p, 1/n).$$

We then invoke a dual variable for the constraint D(p, 1/n) ≤ ρ and transform this primal problem into the form

$$\max_{p\in\Delta_n}\ \min_{\lambda\ge\lambda_0}\ \sum_{i=1}^{n} p_i\,\ell_i(w) - \lambda\, D(p, 1/n) + (\lambda-\lambda_0)\rho,$$

where λ = λ_0 + µ′ combines the regularization weight λ_0 and the Lagrangian multiplier µ′ ≥ 0 of the constraint. Since this problem is concave in terms of p given w, by the strong duality theorem we can exchange the max and the min. Then the original problem is equivalent to a joint minimization over w ∈ W and λ ≥ λ_0 of the inner maximum over p.

Next we fix x = (w^T, λ)^T and derive an optimal solution p*(x), which depends on x and solves the inner maximization problem

$$\max_{p\in\Delta_n}\ \sum_{i=1}^{n} p_i\,\ell_i(w) - \lambda\sum_{i=1}^{n} p_i\log(p_i n).$$

There are three constraints to handle, i.e., p_i ≥ 0 ∀i, p_i ≤ 1 ∀i, and Σ_{i=1}^n p_i = 1. Note that the constraint p_i ≥ 0 is enforced by the term p_i log(p_i); otherwise the above objective becomes infinite. As a result, the constraint p_i ≤ 1 is automatically satisfied due to Σ_i p_i = 1 and p_i ≥ 0. Hence, we only need to explicitly tackle the constraint Σ_i p_i = 1. To this end, we define the Lagrangian function

$$L(p,\mu) = \sum_{i=1}^{n} p_i\,\ell_i(w) - \lambda\sum_{i=1}^{n} p_i\log(p_i n) + \mu\Big(1-\sum_{i=1}^{n} p_i\Big),$$

where µ is the Lagrangian multiplier for the constraint Σ_i p_i = 1. The optimal solutions satisfy the KKT conditions ∂L/∂p_i = 0 and Σ_i p_i = 1. From the first equation, we can derive p*_i(x) ∝ exp(ℓ_i(w)/λ). Due to the second equation, we can conclude that p*_i(x) = exp(ℓ_i(w)/λ)/Σ_{j=1}^n exp(ℓ_j(w)/λ). Plugging this optimal p*(x) into the inner maximization problem, and dropping the constant λ_0ρ, which does not affect the minimizer, we get the following problem equivalent to the original one:

$$\min_{w\in\mathcal{W}}\ \min_{\lambda\ge\lambda_0}\ \lambda\log\Big(\frac{1}{n}\sum_{i=1}^{n}\exp\big(\ell_i(w)/\lambda\big)\Big) + \lambda\rho,$$

which is Eq. (2).
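As a quick numerical sanity check of this derivation (a standalone sketch with an arbitrary example vector, not code from the paper), the inner maximizer is a softmax of the losses at temperature λ, and plugging it back recovers the log-sum-exp value:

```python
import numpy as np

def inner_max_weights(losses, lam):
    """Closed-form inner maximizer: p*_i proportional to exp(l_i / lam)."""
    z = np.exp(losses / lam)
    return z / z.sum()

losses = np.array([0.3, 1.2, 0.7, 2.5])
lam, n = 0.8, 4
p = inner_max_weights(losses, lam)

# Inner objective: sum_i p_i l_i - lam * KL(p, 1/n), with KL = sum_i p_i log(n p_i).
inner_value = p @ losses - lam * np.sum(p * np.log(n * p))
# Closed form obtained above: lam * log((1/n) * sum_i exp(l_i / lam)).
closed_form = lam * np.log(np.mean(np.exp(losses / lam)))
assert np.isclose(inner_value, closed_form)
print(p, inner_value)
```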
Stochastic Constrained DRO with Non-convex Losses
In this section, we present two stochastic algorithms for solving (4). The first algorithm is simpler yet practical for deep learning applications. The second algorithm is an accelerated one with a better complexity, which is more complex than the first algorithm.
Basic Algorithm: SCDRO
A major concern in the algorithm design is computing a stochastic estimator of the gradient of F(x). At iteration t, the gradient of F(x_t) is given by

$$\nabla F(x_t) = \big(\nabla f_{\lambda_t}(g(x_t))\,\nabla_w g(x_t),\ \ \nabla f_{\lambda_t}(g(x_t))\,\nabla_\lambda g(x_t) + \log(g(x_t)) + \rho\big).$$

Both ∇_λ g(x_t) and ∇_w g(x_t) can be estimated by an unbiased estimator denoted by ∇g_i(x_t). The concern lies in how to estimate g(x_t) inside ∇f_{λ_t}(·). The first algorithm, SCDRO, applies existing techniques for two-level compositional functions. In particular, we estimate g(x_t) by a sequence s_t, which is updated by the moving average s_{t+1} = (1 − β)s_t + β g_{i_t}(x_t), and we invoke analogous moving averages to obtain the gradient estimators v_{t+1} and u_{t+1} in terms of w_t and λ_t, respectively. Finally, we complete the update step x_{t+1} = Π_X(x_t − η z_t) with z_t = (v_t^T, u_t)^T. We would like to point out that the moving-average estimator for tracking the inner function g(w) is widely used for solving compositional optimization problems (Wang et al., 2017; Qi et al., 2021; Zhang & Xiao, 2019; Zhou et al., 2019). Using a moving average for computing a stochastic gradient estimator of a compositional function was first used in the NASA method proposed in Ghadimi et al. (2020). The proposed method SCDRO is presented in Algorithm 1. It is similar to NASA but with a simpler design for the update of x_{t+1}: we directly use a projection after an SGD-style update. In contrast, NASA uses two steps to update x_{t+1}. As a consequence, NASA has two parameters for updating x_{t+1}, while SCDRO only has one parameter η. It is this simple change that allows us to extend SCDRO to convex problems in the next section. Below, we present the convergence rate of SCDRO for a non-convex loss function. Theorem 1. Suppose Assumptions 1 and 2 hold. Then, with suitably chosen constants β and η (specified in the appendix), the output x_R of Algorithm 1 satisfies E[dist(0, ∂̂F̄(x_R))] ≤ ϵ after T = O(1/ϵ⁴) iterations. Remark: Theorem 1 shows that SCDRO achieves a complexity of O(1/ϵ⁴) for finding an ϵ-stationary point, i.e., E[dist(0, ∂̂F̄(x_R))] ≤ ϵ, for a non-convex loss function. Note that NASA (Ghadimi et al., 2020) enjoys the same oracle complexity but for a different convergence measure, and our convergence measure is more intuitive. In addition, we are able to leverage our convergence measure to establish convergence for convex functions by using the Kurdyka-Łojasiewicz (KL) inequality and a restarting trick, as shown in the next section. In contrast, such convergence for NASA is missing in their paper. Compared with stochastic primal-dual methods (Rafique et al., 2021; Yan et al., 2020) for the min-max formulation (1), their algorithms are double-looped and have the same oracle complexity for a different convergence measure. Our convergence measure is stronger, as we directly bound E[dist(0, ∂̂F̄(x_τ))²] on a returned solution x_τ. This is possible because we leverage the smoothness of F(·).
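A minimal Python sketch of one SCDRO iteration as just described (our illustration under simplifying assumptions, not the authors' implementation: `loss` and `grad_w_loss` are hypothetical per-sample oracles, W is taken to be the Euclidean ball of radius R, and the constants should follow Theorem 1 rather than this toy code; a numerically stable version would use log-sum-exp):

```python
import numpy as np

def scdro_step(w, lam, s, v, u, batch, loss, grad_w_loss,
               beta, eta, lam0, lam_bar, rho, radius):
    """One SCDRO iteration: moving-average tracking of g(x_t) and of the two
    gradient blocks of F, followed by a projected SGD-style update of x."""
    ell = loss(w, batch)                          # per-sample losses l_i(w)
    e = np.exp(ell / lam)
    s = (1 - beta) * s + beta * e.mean()          # s_{t+1} tracks g_2(x_t)

    # Minibatch gradients of g_2 w.r.t. w and lam (chain rule on exp(l/lam)).
    gw = (e[:, None] * grad_w_loss(w, batch)).mean(axis=0) / lam
    glam = -(e * ell).mean() / lam ** 2

    # grad_w F = (lam/g_2) grad_w g_2; grad_lam F = (lam/g_2) grad_lam g_2
    # + log(g_2) + rho, with the tracked s plugged in for g_2.
    v = (1 - beta) * v + beta * (lam / s) * gw
    u = (1 - beta) * u + beta * ((lam / s) * glam + np.log(s) + rho)

    w = w - eta * v
    w = w * min(1.0, radius / (np.linalg.norm(w) + 1e-12))  # project onto ||w|| <= R
    lam = float(np.clip(lam - eta * u, lam0, lam_bar))      # project onto [lam0, lam_bar]
    return w, lam, s, v, u
```

Note that the per-iteration cost is O(d) plus the minibatch oracle calls, independent of n, which is the point of the dual-free design.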
Accelerated Algorithm: ASCDRO
Our second algorithm, presented in Algorithm 2, is inspired by Qi et al. (2021) for solving the KL-regularized DRO, leveraging a recursive variance-reduction technique (i.e., STORM) to estimate g(x_t) and ∇g(x_t) for (5). In particular, we use v_t for tracking ∇_w g(x_t), u_t for tracking ∇_λ g(x_t), and s_t for tracking g(x_t), which are updated by STORM-style recursions of the form d_{t+1} = h_{i_t}(x_{t+1}) + (1 − β)(d_t − h_{i_t}(x_t)). A similar update for s_t has been used in Chen et al. (2021) for tracking the inner function values in two-level compositional optimization. However, they do not use similar updates for tracking the gradients as in v_t, u_t. Hence, their algorithm has a worse complexity.
Then we invoke these estimators in ∇_w F(x_t) and ∇_λ F(x_t) to obtain the gradient estimator z_t in (8). Below, we show that ASCDRO achieves a better convergence rate for non-convex loss functions.
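The STORM recursion used for s_t, v_t and u_t can be sketched generically as follows (a schematic illustration; `est_new` and `est_old` denote the same minibatch estimator evaluated at x_t and x_{t-1}, which is why ASCDRO needs two stochastic evaluations per iteration):

```python
def storm_update(track_prev, est_new, est_old, beta):
    """d_t = est(x_t; i_t) + (1 - beta) * (d_{t-1} - est(x_{t-1}; i_t)).
    Compared with a plain moving average, the correction term
    (1 - beta) * (track_prev - est_old) cancels stale bias and yields the
    recursive variance reduction behind the near-optimal O(1/eps^3) rate."""
    return est_new + (1 - beta) * (track_prev - est_old)

# In ASCDRO the same recursion is applied, on one shared minibatch i_t, to
#   s_t (tracking g(x_t)), v_t (tracking grad_w g), u_t (tracking grad_lam g),
# and the estimator z_t is then assembled via the chain rule as in (8).
```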
Remark:
Theorem 2 implies that, with a polynomially decreasing step size, ASCDRO is able to find an ϵ-stationary solution such that E[dist(0, ∂̂F̄(x_R))] ≤ ϵ with a near-optimal complexity of O(1/ϵ³). Note that the complexity O(1/ϵ³) is optimal up to a logarithmic factor for solving non-convex smooth optimization problems (Arjevani et al., 2019). State-of-the-art primal-dual methods with variance reduction for min-max problems (Huang et al., 2020) have the same complexity but for a different convergence measure.
Stochastic Algorithms for Convex Problems
In this section, we present restarted algorithms for solving (3) with a convex loss function ℓ_i(w). The key is to restart SCDRO and ASCDRO with a stagewise step size scheme. We define a new objective F̄_µ(x) = F̄(x) + µ∥x∥²/2, where µ is a constant to be determined later. With this new objective, we have the following lemma.
Lemma 3 allows us to obtain convergence guarantees for convex losses. The idea of the restarted algorithm is to apply SCDRO and ASCDRO to the new objective F̄_µ(x), by adding µx_t to the gradient estimator z_t in (1) of Algorithm 1 and in (8) of Algorithm 2, and to restart SCDRO or ASCDRO with a stagewise step size to enjoy the benefit of the KL inequality of F̄_µ(x). It is notable that stagewise step sizes are widely and commonly used in practice. The multi-stage restarted versions of SCDRO and ASCDRO are shown in Algorithm 3, to which we refer as restarted-SCDRO (RSCDRO) and restarted-ASCDRO (RASCDRO).
Algorithm 3 RSCDRO or RASCDRO
1: Input: initial solution x_0, initial step size η_1, initial iteration number T_1, number of stages K
2: for k = 1, . . . , K do
3: Run SCDRO (for RSCDRO) or ASCDRO (for RASCDRO) on F̄_µ for T_k iterations with step size η_k, starting from x_{k−1}
4: Let x_k be the returned solution
5: Change η_k, T_k according to Lemma 4 or Lemma 5
6: end for
7: return: x_K
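In code, the restart scheme of Algorithm 3 is a thin outer loop (a sketch; `run_stage` stands in for one stage of SCDRO or ASCDRO on F̄_µ, and the halving/doubling schedule is a placeholder for the exact constants of Lemma 4 or Lemma 5):

```python
def restarted_solver(x0, run_stage, eta1, T1, K):
    """Warm-started restarts with stagewise step sizes: each stage halves the
    objective gap, so K = O(log(1/eps)) stages give an eps-optimal point."""
    x, eta, T = x0, eta1, T1
    for k in range(K):
        x = run_stage(x, eta=eta, iters=T)  # one stage of (A)SCDRO on F_mu
        eta = eta / 2                       # geometrically decreasing step size
        T = T * 2                           # growing per-stage iteration budget
    return x
```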
Restarted SCDRO for Convex Problems
In this subsection, we present the convergence rate of RSCDRO for convex losses. We first present a lemma stating that F̄_µ(x_k) decreases stagewise.
Lemma 4. Suppose Assumptions 1 and 2 hold. Then, with the stagewise step sizes η_k and iteration numbers T_k of Algorithm 3, the objective gap E[F̄_µ(x_k) − min_x F̄_µ(x)] is decreased by a factor of 2 after each stage.

The above lemma implies that the objective gap is decreased by a factor of 2 after each stage. Based on the above lemma, RSCDRO has the following convergence rate.
Remark: Corollary 1 shows that RSCDRO achieves an oracle complexity of O(1/ϵ³) for finding an ϵ-optimal solution, i.e., E[F̄(x) − F̄(x*)] ≤ ϵ, for convex loss functions with a geometrically decreasing step size in a stagewise manner.
Restarted ASCDRO for Convex Problems
In this subsection, we establish a better convergence rate of RASCDRO for convex losses.
Lemma 5. Suppose Assumptions 1 and 2 hold. Then, with the stagewise parameters of Algorithm 3, the objective gap E[F̄_µ(x_k) − min_x F̄_µ(x)] is decreased by a factor of 2 after each stage.

Hence, we have the following convergence rate for RASCDRO.
Theorem 4. Under the same assumptions and parameter settings as Lemma 5, after K = O(log₂(ϵ₁/ϵ)) stages, Algorithm 3 (RASCDRO) finds a solution x_K with E[F̄_µ(x_K) − min_x F̄_µ(x)] ≤ ϵ.
By the same derivation as for Corollary 1, the following corollary of Theorem 4 holds.
Corollary 2. Let µ = ϵ/(2(R² + λ̄²)). Then, under the same assumptions and parameter settings as Lemma 5, RASCDRO finds an ϵ-optimal solution with an oracle complexity of O(1/ϵ²). Remark: Corollary 2 shows that RASCDRO achieves the claimed oracle complexity O(1/ϵ²) for finding an ϵ-optimal solution, which is optimal for solving convex smooth optimization problems (Nemirovsky & Yudin, 1983). Finally, we note that a similar complexity was established in Zhang & Lan (2021) for constrained convex compositional optimization problems. However, their analysis requires each level function to be convex, which does not apply to our case, as the outer function f_λ(·) is non-convex.
Experiments
In this section, we verify the effectiveness of the proposed algorithms in solving imbalanced classification problems. We show that the proposed methods outperform baselines in both the convex and non-convex settings in terms of convergence speed and generalization performance. In addition, we study the influence of ρ on the robustness of different optimization methods in the supplement. All experiments were conducted on Tesla V100 GPUs.
Baselines.
For the comparison of convergence speed, we compare with different algorithms for optimizing the same objective (1). Datasets. The original CIFAR10 and CIFAR100 are balanced datasets, where CIFAR10 (resp. CIFAR100) has 10 (resp. 100) classes and each class has 5K (resp. 500) training images. For constructing CIFAR10-ST and CIFAR100-ST, we artificially construct imbalanced training data, where we keep only the last 100 images of each class for the first half of the classes, and keep the other classes and the test data unchanged. ImageNet-LT is a long-tailed subset of the original ImageNet-2012, obtained by sampling a subset following the Pareto distribution with power value 6. It has 115.8K images from 1000 categories, ranging from 4,980 images for the largest (head) class to 5 images for the smallest (tail) class. iNaturalist 2018 is a real-world dataset whose class frequency follows a heavy-tailed distribution. It contains 437K images from 8142 classes.
Convergence comparison between different baselines.
In the convex setting, we compare RSCDRO and RASCDRO with the SPD, FastDRO and Dual SGM baselines. We report the training accuracy and testing accuracy in terms of the number (#) of processed samples. We denote one pass over the training data as one epoch. We run a total of 3 epochs for CIFAR10-ST and CIFAR100-ST and decay the learning rate by a factor of 10 at the end of the 2nd epoch. Similarly, we run 60 epochs and decay the learning rate at the 30th epoch for ImageNet-LT, and run 30 epochs and decay the learning rate at the 20th epoch for iNaturalist2018. In the non-convex setting, we compare SCDRO with two baselines, PG-SMD2 and FastDRO. We run 120 epochs for CIFAR10-ST and CIFAR100-ST, and decay the learning rate by a factor of 10 at the 90th epoch. And we run 30 epochs for ImageNet-LT and iNaturalist2018, and decay the learning rate at the 20th epoch.
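A sketch of the stagewise learning-rate decay used across these runs (our illustration with placeholder arguments; the per-dataset epoch counts and decay points are as stated above):

```python
def stagewise_lr(epoch, base_lr, decay_epochs, factor=0.1):
    """Piecewise-constant schedule: multiply the LR by `factor` at each
    epoch listed in `decay_epochs` (e.g., [90] for the CIFAR-ST
    non-convex runs, [30] for the ImageNet-LT convex runs)."""
    lr = base_lr
    for e in decay_epochs:
        if epoch >= e:
            lr *= factor
    return lr

# Example: a 120-epoch CIFAR-ST run with a single decay at epoch 90.
assert stagewise_lr(89, 0.1, [90]) == 0.1
assert stagewise_lr(90, 0.1, [90]) == 0.1 * 0.1
```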
Results.
We first report the results for the convex setting in Figure 1. It is obvious that RSCDRO and RASCDRO are consistently better than the baselines on CIFAR10-ST, CIFAR100-ST, and ImageNet-LT. PG-SMD2 and Dual SGM have results comparable to our proposed algorithms on iNaturalist2018 in terms of training accuracy, but are worse in terms of testing accuracy. FastDRO has the worst performance on all the datasets. RSCDRO and RASCDRO achieve comparable results on all datasets; however, the stochastic estimator in RASCDRO requires two gradient computations per iteration, which incurs more computational cost than RSCDRO. Hence, in the non-convex setting, we focus on SCDRO. Figure 2 reports the results for the non-convex setting. We can see that SCDRO achieves the best performance on all the datasets. The margin increases on the large-scale ImageNet-LT and iNaturalist2018 datasets. Among the baselines, Dual SGM has better testing performance than FastDRO and PG-SMD2 on CIFAR10-ST and CIFAR100-ST. On the large-scale ImageNet-LT and iNaturalist2018 data, however, Dual SGM has the worst performance in terms of testing accuracy. Furthermore, SCDRO is more stable than FastDRO and Dual SGM across different settings, as the training performance of Dual SGM and FastDRO is comparable to SCDRO in the convex setting but much worse in the non-convex setting.
Comparison with ERM and KL-regularized DRO. Next, we compare our method for solving KL-constrained DRO (KL-CDRO) with (1) ERM+SGDM and KL-regularized DRO (KL-RDRO) optimized by RECOVER and ABSGD in the non-convex setting, and (2) CVaR-constrained DRO, χ²-regularized DRO and χ²-constrained DRO optimized by FastDRO in the convex setting. We conduct the experiments on the large-scale ImageNet-LT and iNaturalist2018 datasets. The results shown in Tables 2 and 3 vividly demonstrate that our method for constrained DRO outperforms the ERM-based method and other popular f-divergence constrained/regularized DRO formulations in different settings.
Sensitivity to ρ. We study the sensitivity of different methods to ρ. The results on CIFAR10-ST and CIFAR100-ST are shown in Table 4 in the supplement, which demonstrates that the testing performance is sensitive to ρ. However, our method SCDRO is better than the baselines PG-SMD2 and FastDRO for different values of ρ.
Conclusions
In this paper, we proposed dual-free stochastic algorithms for solving KL-constrained distributionally robust optimization problems for both convex and non-convex losses. The proposed algorithms have nearly optimal complexity in both settings. Empirical studies vividly demonstrate the effectiveness of the proposed algorithm for solving non-convex and convex constrained DRO problems.
Then by this lemma we have ∥∇f_λ(g(x))∥ ≤ λ ≤ λ̄. Proof. For any q ≥ 1, and for any q_1, q_2 ≥ 1, the corresponding bounds follow from the definition of f_λ. This completes the proof.
Lemma 7. Let L_g and L_∇g denote the Lipschitz constants of g_i and ∇g_i, respectively.

Proof. The gradient of g_i(w, λ) is obtained by the chain rule on exp(ℓ_i(w)/λ). Then, by Assumption 1, for all (w, λ), (w′, λ′) ∈ X the stated Lipschitz bounds hold, where equality (a) is due to the property of the norm of a rank-one symmetric matrix and inequality (b) is due to the Cauchy-Schwarz inequality.
Lemma 10. Under Assumption 1, run Algorithm 1 with ηL_F ≤ 1/4; then the output x_R of Algorithm 1 satisfies the stated bound. Proof. The proof of this lemma follows the proof of Theorem 2 in Xu et al. (2019).
Recall the update of x_{t+1}; then by Exercise 8.8 and Theorem 10.1 of Rockafellar & Wets (1998) we know the first-order optimality condition of the projection step, which implies the first inequality. By the update of x_{t+1}, we also have the second inequality. Since F(x) is smooth with parameter L_F, combining the above two inequalities we get the bound, where the last inequality uses Young's inequality ⟨a, b⟩ ≤ ∥a∥² + ∥b∥²/4. Then, rearranging the above inequality and summing it across t = 1, · · · , T, we have the telescoped bound. By the same method used in the proof of Theorem 2 in Xu et al. (2019), we have the following inequality. Recalling ηL_F ≤ 1/4 and combining Eq. (15) and Eq. (16), we obtain the result, where inequality (a) is due to (2L_F² + 3L_F/η) ≤ 5L_F/η and inequality (b) is due to 1/(1/4 − ηL_F/2) ≤ 8. Recalling Eq. (14) and the output rule of Algorithm 1, we conclude. Proof. To facilitate the proof, we define the following notations. For every iteration t, by simple expansion we have the decomposition, where inequality (a) is due to 2β(1 − β)⟨A, C⟩ ≤ (1 − β)²∥A∥² + β²∥C∥² and 2β²⟨C, D⟩ ≤ β²∥C∥² + β²∥D∥². Therefore, noting (1 − β) < 1 and 1/β > 1, we can obtain the simplified bound. Thus, recalling the definitions of G(x_t), ∇F(x_t), ∇̂F(x_t) and applying the smoothness and Lipschitz continuity of f_λ and g, we have the next bound, where inequality (a) is due to |log(g(x_t)) − log(s_t)| ≤ |s_t − g(x_t)|, since g(x_t) ≥ 1 and s_t ≥ 1 for all t ∈ {1, · · · , T} by the definition and initialization of g_i(x_t) and s_t, and inequality (b) is due to L_g²L_{∇f_λ}² + 1 ≤ L_F². By a similar method, we also have the analogous bound for u_t. Thus, combining Eqs. (19), (20), (21) and applying Assumption 2, we can obtain the variance recursion. Taking the summation of E[∥z_{t+1} − ∇F(x_{t+1})∥²] from 1 to T and invoking Lemma 9, we have the summed bound. Taking Eq. (15) into the above inequality, we have the final estimate, where inequality (a) is due to ηL_F ≤ 1/4 and inequality (b) is due to 8(4L_F² + 20L_F²L_g²)η² ≤ β²/2. Rearranging terms and dividing both sides of Eq. (22) by T, we complete the proof.
B.2 Proof of Theorem 1
We set η ≤ 1/(4L_F), which satisfies the assumptions on η in Lemma 10 and Lemma 11. Therefore, combining Lemma 10 and Lemma 11, we obtain the claimed rate. By the definition of s_1 and Assumption 2, it holds that the initial tracking error is bounded, where inequality (a) is due to ∥a + b∥² ≤ 2∥a∥² + 2∥b∥².
Applying the smoothness and Lipschitz continuity of f_λ and g, and combining Eqs. (26) and (27), we obtain the stated bound. This completes the proof.
and after running T iterations, Algorithm 2 satisfies
In addition, where inequality (a) uses the inequality (x + y)^{1/3} − x^{1/3} ≤ yx^{−2/3}/3, inequality (b) is due to w ≥ 2σ², and inequality (c) is due to the parameter choice, with the last inequality following from the step-size setting. In the same way as Eq. (15), and since η_t ≤ η_1 for all t ≥ 1, we also have the analogous bound. Noting η_1 L_F ≤ 1/4 and invoking Lemma 12, we obtain the desired inequality. Combining Eqs. (34) and (36), we have the claim. This completes the proof.
C.2 Proof of Theorem 2
Proof. Noting the monotonicity of η_t and dividing both sides of Eq. (35) by η_1/(1/4 − η_1 L_F/2), we have the first bound. By the same method used in the proof of Theorem 2 in Xu et al. (2019), we have the following inequality. Multiplying both sides of the above inequality by η_t and summing from 1 to T, we obtain the second bound, where inequality (a) is due to the bound on 2L_F². Noting the monotonicity of η_t and dividing both sides of Eq. (40) by Tη_T, we obtain the third bound. Combining Eqs. (18) and (41), and noting Σ_{t=1}^T η_t³ ≤ O(log T), we get the conclusion. This completes the proof.
D.1 Technical Lemmas
Lemma 15. If ℓ_i(w) is convex for all i, then F(w, λ) is jointly convex in terms of (w, λ).
Proof. We have F(w, λ) = max_{p∈Δ_n} G(w, λ, p), where G(w, λ, p) = Σ_{i=1}^n p_i ℓ_i(w) − λ Σ_{i=1}^n p_i log(np_i) + λρ. Since G(w, λ, p) is jointly convex in terms of (w, λ) for every fixed p, F(w, λ) is jointly convex in terms of (w, λ) as a pointwise maximum of jointly convex functions.
(1) of Algorithm 1, where µ is a small constant to be determined later. Without loss of generality, we assume 0 < µ ≤ 1/2, and then we have the following. Proof. To facilitate the proof, we define the following notations. Since F(x) is L_F-smooth, F_µ(x) is L_{F_µ}-smooth, where L_{F_µ} = L_F + µ. Noting L_F > 1 and µ ≤ 1/2, we obtain L_F + µ ≤ (3/2)L_F. For every iteration t, by simple expansion we have the decomposition. The above inequality shows that the only difference between I_t in the proof of Lemma 11 and I_t in the proof of Lemma 16 is the term A.
Therefore, by the same method used in the proof of Lemma 11, we have the analogous bound. By L_{F_µ} ≤ (3/2)L_F and ηL_{F_µ} ≤ (3/2)ηL_F ≤ 1/4, it holds that the bound simplifies, where the last inequality is due to 8(9L_F² + 20L_F²L_g²)η² ≤ β²/2. Rearranging terms and dividing both sides of Eq. (42) by T, we complete the proof of this lemma.
Lemma 17. At the k-th stage of RASCDRO, let
Proof. Recall the definition of ∥κ_t∥² and, by the same proof as Lemma 12, we have (44). Denote κ_t at the k-th stage as κ_t^k; then, by Lemma 13, at the k-th stage of RASCDRO we have (45). Combining Eqs. (44) and (45), we obtain the bound, where the last inequality is due to the parameter choice at stage k; applying c = 576L_F⁴ to the above inequality, we get the conclusion.
D.2 Proof of Lemma 3
Proof. Since ℓ_i(w) is convex for all i, by Lemma 15 we know that F(x) is convex. Thus, by the definition of F̄_µ(x), F̄_µ(x) is a strongly convex function. Then, by strong convexity, we have the following.
Since F_µ(x_k) ≤ F̄_µ(x_k) and inf_{x∈X} F_µ(x) = inf_{x∈X} F̄_µ(x), applying Lemma 3 we have the stated bound. This completes the proof of this lemma.
Since F_µ(x_k) ≤ F̄_µ(x_k) and inf_{x∈X} F_µ(x) = inf_{x∈X} F̄_µ(x), applying Lemma 3 we have the stated bound. This completes the proof of this lemma.
D.7 Proof of Theorem 4
Proof. Invoking Lemma 5, after K = O(log₂(ϵ₁/ϵ)) stages, we obtain E[F̄_µ(x_K) − min_x F̄_µ(x)] ≤ ϵ, which completes the proof. | 2022-10-13T01:15:47.951Z | 2022-10-11T00:00:00.000 | {
"year": 2022,
"sha1": "a1382290160a39a7211d8291faa21f3833818061",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a1382290160a39a7211d8291faa21f3833818061",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
252846677 | pes2o/s2orc | v3-fos-license | Social media and supply chain risk management: improving risk detection and supply chain resilience
The introduction of social media has changed the methods by which many individuals, communities, and organizations communicate and interact. The increasing popularity of social media within a business context has forced executives to rethink how they operate their businesses. Chae (2015) observed that the field of supply chain management (SCM) has been lagging in identifying the potential role and use of social media in both research and practice. Recently, greater attention is being given to social media and its potential uses within the supply chain. This paper investigates the potential use of social media as a technology to help with supply chain risk detection and supply chain resilience.
INTRODUCTION
Ever-increasing competitive pressures, including escalating customer demand expectations and requirements and greater competition from international markets, have forced organizations to operate on a global basis (Manuj and Mentzer, 2008).
The increasing complexity of global supply chains necessitates that the flow of goods, services, information, and cash, both within and across national boundaries, be highly coordinated. With increasing complexity, supply chains have become much more susceptible to disruption (Craighead et al., 2007). The more globalized the firm, the greater its risk exposure due to the increased length of the supply chain network. Numerous recent incidents, including natural disasters, various industrial and societal disputes, and other supply chain "glitches", have revealed the vulnerability of modern global supply chains. Modern supply chains increase the likelihood of potential delay points, creating greater uncertainty and the need for improved coordination and communication. As a result, the modern supply chain must be continuously monitored and managed (Mentzer, 2001), and innovation is critical. Now more than ever, the supply chain and the innovations within it are closely linked to some of the newest technologies. Blockchain is the latest technology that, in various use cases, has the potential to revolutionize supply chains by creating opportunities for improved processes. Innovative supply chain performance improvements demand technology. An additional emerging area of technology which holds much promise for innovative improvement in supply chain management is social media.
Social media is defined as collaborative online applications and technologies that enable participation, connectivity, user-generated content, the sharing of information, and collaboration amongst a community of users (Henderson and Bowley, 2010). The introduction of social media has changed the means by which many individuals, communities, and/or organizations interact and communicate (Kaplan and Haenlein, 2010). In a business context, social media is used in a business-to-consumer (B2C) environment to allow companies to promote their brands and market products to consumers (Howells, 2011). The field of supply chain management has been slow in identifying the potential role and use of social media for research and practice (Chae, 2015). However, social media could provide many benefits for supply chain management, such as greater visibility, improved communication, increased control, and potentially reduced operational and labor costs. Social media could allow supply chain participants to monitor supply chain events and transactions to keep everyone up to date with current situations, such as a delay in shipping or a carrier failing to pick up a shipment. Social media may provide companies with more timely and insightful information about risks and events, enabling organizations to take corrective action sooner, thus minimizing the impact of any supply chain disruption and increasing supply chain resilience (Rusch, 2014). It is this potential use of social media that leads to the following research questions: (1) Can the use of social media improve an organization's ability to sense and recover from potential disruptions?
(2) How can supply chain managers use social media to adjust to changes in the supply chain environment?
This paper discusses the use of information technology to achieve supply chain innovation. A discussion of supply chain risk management and supply chain resilience follows. We then provide background on Dynamic Capabilities (Teece et al., 1997) and describe the connection to the use of social media for improved supply chain resilience. Principles related to disaster recovery and social media are then applied in a supply chain context and propositions are offered. Finally, managerial implications along with conclusions from this examination are discussed.
INFORMATION TECHNOLOGY AND THE SUPPLY CHAIN
Value is created within the supply chain by matching supply and demand through both reliability and responsiveness. Reliability is defined as delivering the right product in the right quantity at the right time to the right place at the lowest cost. Responsiveness is defined as the ability to quickly respond to changing market conditions (Hendricks and Singhal, 2003). To be both reliable and responsive, organizations have formed sophisticated supply networks and management structures that allow materials to be sourced from around the world while still delivering on reliability and responsiveness (Autry and Moon, 2016). The task of managing those supply networks necessitates coordination both within and across organizational boundaries, including the integration of business processes and functions across the supply chain (Cooper, Lambert, and Pagh, 1997). Some scholars maintain that it is impossible to achieve both reliability and responsiveness, and to create an efficient, collaborative supply chain, without information technology, noting that "IT is like a nerve center in supply chain" (Gunasekaran and Ngai, 2004). The business processes associated with supply chain management are deemed mission critical for many organizations (Bala, 2013), and the reliance on IT to help achieve mission-critical processes is generally accepted. Some scholars have referred to supply chain management as "a digitally enabled inter-firm process capability" (Rai et al., 2006).
The sharing of information is at the heart of the modern supply chain concept (Thomas, Esper, and Stank, 2010), and the advantages of increased information sharing through greater technology linkages have been discussed in much of the prior supply chain research (Lee and Whang, 2000). Cachon and Fisher (2000) detailed a reduction in supply chain costs with the sharing of both demand and inventory information among supply chain partners. Fawcett et al. (2007) reviewed two facets of information sharing, connectivity and willingness to share, and determined that both are not only critical to an information sharing capability but also positively impact operational performance. Zhou and Benton Jr. (2007) explored the effect of information sharing and supply chain practice on supply chain performance; their conclusions indicated that both are crucial to attaining greater supply chain performance. Klein et al. (2007) found that firms realized better performance when information is shared among supply chain partners. Information sharing improves the coordination of supply chain processes, enabling the flow of material and reducing inventory costs, leading to greater collaboration and increased levels of supply chain integration (Li and Lin, 2006).
Supply chains comprise vast numbers of products or commodities that are sourced, manufactured, or stored in multiple locations throughout the world, increasing complexity (Chopra and Sodhi, 2014). Events often occur that threaten to disrupt supply chain operations and jeopardize the ability to perform effectively and efficiently (Melnyk et al., 2015). Natural disasters, political instability, terrorist attacks, equipment failure, and human error have all contributed to various supply chain disruptions. Irrespective of the type of disruption, the sharing of information is an essential component within any supply chain to quickly respond to a disruption (Datta, 2017). Supply chain disruptions can be costly and, if not properly managed, can result in significant delays and an inability to meet customer demand (Blackhurst et al., 2005). Supply chain managers and practitioners understand the necessity of protecting their supply chains from disruptions; unfortunately, few take the necessary action (Chopra and Sodhi, 2014). The most obvious solutions, increasing capacity, boosting inventory levels, and having multiple suppliers, can undermine efforts to improve supply chain cost efficiency and responsiveness to demand changes. Consequently, supply chain risk management has emerged as a top priority for companies (Chopra and Sodhi, 2014).
SUPPLY CHAIN RISK MANAGEMENT AND RESILIENCE
Supply chain risk is defined as the likelihood and impact of unexpected events or conditions that adversely influence any part of a supply chain, leading to operational, tactical, or strategic level failures or irregularities (Ho et al., 2015). Supply chain risk management (SCRM), defined as an inter-organizational collaborative endeavour utilizing quantitative and qualitative risk management methodologies to identify, evaluate, mitigate and monitor unexpected macro and micro level events or conditions, which might adversely impact any part of a supply chain (Ho et al., 2015), is rapidly evolving into a preferred area of research for both academicians and practitioners (Rao and Goldsby, 2009). Although scholars understand that SCRM is a necessary part of a holistic supply chain management philosophy, researchers have also argued that managing risks in the current environment continues to be an increasingly challenging task (Christopher and Lee, 2004). The essence of SCRM is to make decisions that concurrently take advantage of opportunities and minimize risk (Narasimhan, 2009). Scholars have noted that a firm should have a cost-effective risk management strategy for monitoring and detecting supply chain disruptions (Autry and Moon, 2016), and managers can reduce risk by designing supply chains to contain risk rather than allow it to proliferate throughout the entire supply chain (Chopra and Sodhi, 2014). An organization can substantially increase its resilience, that is, the ability to resist disruptions and recover operations capability after disruptions occur, by improving its ability to detect and respond quickly to such events (Sheffi, 2015). Despite this, executives have been hesitant to address supply chain risk. There is a perception among executives that providing for risk reduction will lessen any cost efficiencies and other benefits of their existing global supply chains (Chopra and Sodhi, 2014). Trade-off decisions between managing risk and delivering value are important factors for building resilience into the supply chain (Juttner et al., 2003). SCRM is considered to be the principal method for enhancing supply chain resilience (Datta, 2017).
Supply chain resilience is a concept which has received increased attention within the supply chain domain. It is a complex construct, regarded as a dynamic process of directing actions so that organizations stay out of trouble should a disruptive event occur. The system then initiates a very swift and efficient response to minimize the consequences and maintain or regain a dynamically stable state, which then allows the firm to adapt operations to the new requirements of the changed environment (Datta, 2017). For this research, resilience is defined simply as the ability of the supply chain to both resist disruptions and recover operational capability after disruptions occur (Melnyk et al., 2015). Melnyk et al. (2015) note: "The resilient supply chain requires two critical capacities: the capacity for resistance and the capacity for recovery" (p. 35). Organizations throughout the world have reported incidents that underscore the significance of supply chain resilience. Datta (2017) detailed the well-known example of Nokia's ability to adapt quickly to disruption by using alternate suppliers following a fire at a key component manufacturer in 2000. The same disruption also affected Ericsson; however, its lack of resilience resulted in a loss of $400 million in revenue. In another example, Melnyk et al. (2015) discussed the ability of General Motors to quickly recover from the Thailand floods of 2011 despite having suppliers in the affected area.
A great deal of the literature concerning supply chain resilience has examined recommendations for structuring a resilient supply chain (Datta, 2017). In his seminal work The Resilient Enterprise, Sheffi (2005) illustrates how organizations can decrease the likelihood of a supply disruption by building both redundancy and flexibility into their supply chain. The author notes that using practices such as standardization, modular design, developing collaborative relationships and creating a culture of flexibility can help build a more resilient enterprise.
Detailing the importance of managing the efficiency of resilience enhancement interventions, Collicchia et al. (2010) proposed a simulation model specifying the impact of different risk management procedures. Christopher and Peck (2004) specified what they termed the five broad enablers of supply chain resilience: supply chain understanding (implying knowledge about supply chain structures), a supply base strategy (selecting the right number of suppliers), supply chain collaboration, agility, and creating a risk management culture. The fundamental principle of supply chain collaboration is that the sharing of information can reduce uncertainty (Martha and Subbakrishna, 2002). The construction of a supply chain that will facilitate the exchange of information between supply chain partners is a key priority for SCRM and improving supply chain resilience (Christopher and Peck, 2004). Autry and Moon (2016) note that a strategy for detection is needed to allocate limited management resources to monitor the supply network to more quickly detect and disseminate information about any disruption. Social media has emerged as a technology and a business tool that can capture and share information, enable collaboration, and improve supply chain resilience through better SCRM. Thus, social media has the potential to help improve resiliency.
SUPPLY CHAIN RESILIENCE AND DYNAMIC CAPABILITIES
The dynamic capabilities perspective (Teece et al., 1997) was selected to explicate the necessity of using social media platforms like Twitter to improve effectiveness and efficiency in supply chain risk management. Dynamic capabilities are defined as 'the ability to integrate, build, and reconfigure internal and external competencies to address rapidly-changing environments' (Teece et al., 1997, p. 516). Dynamic capabilities are considered a response to the need for change, and those changes may take many different forms, including the transformation of organizational processes and the allocation of resources. The changing allocation and utilization of resources is an essential part of dynamic capabilities. These resources can include human capital (managers and employees), technological capital, knowledge-based capital, and tangible-asset-based capital, among others (Easterby-Smith and Prieto, 2008).
Organizations find themselves resource constrained and are forced to take steps to manage key resources more effectively. In this model, the organization's need to innovate and integrate is critical, even when there is no guarantee of a sustained competitive advantage (Wade and Hulland, 2004). Technologies like e-business have proved to have a dramatic impact on operational efficiencies; Zhu et al. (2006) examined this area from the technology diffusion perspective. Social media, likewise, is proving to provide both opportunities and challenges in a dynamically changing business environment.
Traditionally, new technologies are introduced into the workplace and accepted and integrated at varying rates, depending upon numerous factors like need and competition (Winter 2003). Social media platforms like Twitter are already pervasive allowing for little to no transition in organizations. In addition, even late adopters and laggards can appear in the marketplace with no apparent long-term effects.
Dynamic Capabilities, originally proposed for information system resources (Wade and Hulland, 2004), is process-based and assumes adaptation between an organization's resources and a dynamic business environment. Social media seems to be a natural fit in this sphere due to its almost instantaneous response capabilities and the mobile nature of the devices that are now common.
SOCIAL MEDIA AND SUPPLY CHAIN RESILIENCE
Social media has gradually become an increasing part of the fabric of society and human social interaction. According to Statista, a provider of market and consumer data, in the first quarter of 2018, Twitter and Facebook, two of the most popular social media platforms, were reported to have 336 million and over 2.19 billion users, respectively (Statista, 2018). With access to such an enormous number of prospective customers, business disciplines such as marketing have made widespread use of social media. The field of supply chain management has been lagging in identifying the potential role and use of social media in both research and practice (Chae, 2015; O'Leary, 2011). However, social media has the potential to impact the supply chain in several different ways, including increased productivity, reduced operating costs, marketplace intelligence, better risk detection, improved risk management, and increased resilience.
Fronetics (2014) conducted a survey on the use of social media within logistics and supply chain management. The results indicated Twitter as the first-preference social media tool for supply chain improvement. Social media can serve as a tool to facilitate intra- and inter-organizational activities and provide for greater information sharing within the supply chain (Ngai et al., 2015; O'Leary, 2011). In addition to using social media to recruit drivers and market their services, some companies are finding innovative ways to provide for the movement of freight. MercuryGate International Inc. and Con-way Inc. are two such organizations; both use social media to move freight. In 2010, Con-way Multimodal, a division of Con-way Inc., initiated a service called "TweetLoad." TweetLoad allows carriers to access available loads from Con-way Multimodal via Twitter. Carriers who follow @ConwayTweetLoad on Twitter can see the latest available shipments as well as links to additional information on the company's load board. Load information is updated on Twitter every 15 minutes, thus allowing carriers who follow @ConwayTweetLoad to have real-time information on available loads. The former president of the American Trucking Associations (ATA), Bill Graves, was quoted as saying, "With this novel use of Twitter, Con-way Multimodal is leading the industry in maximizing the best features of new technology to improve their processes. This is a great example of how innovative transportation companies can make it easier for carriers to do business with them, which will be a benefit to our industry overall." (Fronetics, 2014).
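As an illustration of how a carrier might consume such a feed programmatically, the minimal sketch below polls the public timeline of a load-posting account at the 15-minute cadence described above. It assumes the tweepy library with Twitter API v2 access; the bearer token is a placeholder, and the polling logic is illustrative rather than any vendor's actual integration.

```python
import time

import tweepy  # assumes tweepy >= 4.x and Twitter API v2 credentials

BEARER_TOKEN = "..."  # placeholder; supplied by the carrier's developer account


def poll_load_feed(handle: str = "ConwayTweetLoad", interval_s: int = 15 * 60) -> None:
    """Print new load postings from a public feed, polling at a fixed cadence."""
    client = tweepy.Client(bearer_token=BEARER_TOKEN)
    user = client.get_user(username=handle)
    since_id = None
    while True:
        resp = client.get_users_tweets(user.data.id, since_id=since_id, max_results=10)
        for tweet in resp.data or []:
            print(f"New load posting: {tweet.text}")
            since_id = max(since_id or 0, tweet.id)
        time.sleep(interval_s)  # matches the 15-minute update cycle described above
```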
In 2011, MercuryGate International Inc. launched Freight Friend, a relationship-based load and truck internet posting service for shippers, brokers, and carriers. Freight Friend creates a private network between transportation partners and utilizes technology to automatically identify appropriate matches. The combination of the technology utilized and the relationship-based nature of Freight Friend allows companies to have real-time visibility to book trucks and find freight with companies they trust. According to Mr. Graves, "FreightFriend is perfect for carriers, shippers, brokers, 3PLs and freight management firms who only want to share information with companies they trust. They can keep their current information in one place, knowing that friends -and only friends -will have constant access. While public load boards fill a real need, they come at a cost -a lot of unknown companies bidding to carry the freight. Private boards are often useful too, but they're inconvenient to carriers with multiple clients asking them to check their bid portals. FreightFriend solves the dilemma with a single service where carriers can easily communicate with all of their clients and brokers can find available capacity from carriers they trust." (Fronetics, 2014).
Alexander (2014) discussed the actual and potential use of social media in emergency, disaster, and crisis situations, noting that just-in-time information can be provided on how to cope with developing situations. He documented how social media may be used in seven different ways within the emergencies field for disaster response, recovery, and risk reduction: listening, monitoring, integration into planning and crisis management, collaborative development, creating cohesion, furthering causes, and enhancing research. Alexander (2014) further details the need for emergency managers to adapt organizational practices and embrace the use of social media in crisis management. Some supply chain disruptions, by their very nature, can make detection problematic. The concepts of information sharing, collaboration, and integration between organizations could rest at the center of building the continuity and resiliency necessary to detect and manage supply chain disruptions (Autry and Moon, 2016).
LISTENING AND MONITORING
Social media is often referred to as the new "newswire." According to Fronetics (2014), a digital content and marketing firm focused on the supply chain, social media has supplanted traditional news organizations such as the Associated Press and Bloomberg for breaking news. Major events such as the recent earthquake in China, the Boston Marathon bombing, the death of Osama bin Laden, and the engagement of Prince William to Kate Middleton were all stories that broke on the social media website Twitter. Twitter is a micro-blogging application allowing users to "tweet" a message of up to 280 characters. Because of the nature of its quick bursts of information, Twitter may be particularly useful where supply chain risk detection and disruption recovery is concerned. Quick detection is considered an essential element in the effort to mitigate the impact of most supply chain disruptions (Sheffi, 2015). For example, the United States Geological Survey currently monitors Twitter to detect earthquakes (Sheffi, 2015). "In some cases, it gives us a heads-up that it happened before it can be detected by seismic wave," according to Paul Earle, a seismologist with the US Geological Survey (Sheffi, 2015).
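A self-contained sketch of the burst-detection idea behind such monitoring is shown below. It flags a potential disruption when keyword-bearing messages in a trailing time window exceed a baseline threshold; the keywords, window length, and threshold are illustrative assumptions, not parameters of the USGS system.

```python
from collections import deque
from dataclasses import dataclass

# Illustrative keyword list; a production system would tune this per supply chain
DISRUPTION_KEYWORDS = {"earthquake", "flood", "fire", "strike", "closure"}


@dataclass
class Message:
    timestamp: float  # seconds since epoch
    text: str


def detect_burst(messages, window_s: float = 600.0, threshold: int = 20) -> bool:
    """Return True if keyword-bearing messages within the trailing window
    ever reach the threshold, signalling a possible disruption event."""
    hits = deque()
    for msg in sorted(messages, key=lambda m: m.timestamp):
        if any(kw in msg.text.lower() for kw in DISRUPTION_KEYWORDS):
            hits.append(msg.timestamp)
            while hits and msg.timestamp - hits[0] > window_s:
                hits.popleft()  # drop hits that have aged out of the window
            if len(hits) >= threshold:
                return True
    return False
```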
According to Alexander (2014), listening is the sampling of varied output on social media. Whereas listening is passive, monitoring is conducted to improve reactions to better manage an event by learning what people are thinking and doing. Firms have the ability to "listen in" using social media, but they also must be vigilant with rapid and targeted responses (Crawford, 2009). Crawford (2009) noted that the value of organizations listening using social media could be considered in three ways. The first is being seen to participate within a community, the second is utilizing a rapid and lower-cost form of customer support, and the third is gaining global awareness of how a brand is considered and the patterns of both consumer use and satisfaction. For instance, O'Leary (2011) noted that Best Buy uses Twitter to listen, monitor and respond to customer inquiries. Dell employs staff to listen and monitor more than 130 Twitter feeds (Soller, 2009). As supply networks can be extensive and only a limited amount of management resources may be available to commit to the purpose of risk detection, a firm should have a cost-effective strategy for detecting and monitoring disruptions (Autry and Moon, 2016). Listening and monitoring could allow firms to be proactive instead of reactive by providing for quicker reaction and improved response to a disruption. Thus, the following proposition is offered: P1. The use of social media for listening and monitoring is positively linked to improved supply chain resilience.
The use of social media listening and monitoring for risk management will foster increased communication and significantly help with improved decision-making during a disruption. As supply chain professionals are continuously communicating with a broad community of partners and consumers, the use of social media to improve communication may lead to increased information sharing and improved collaboration. In this rapidly changing and competitive environment, the widely accepted use of social media by individuals globally speaks to the application of Dynamic Capabilities, where resources may be used most effectively and with little training.
SOCIAL MEDIA AND COLLABORATIVE DEVELOPMENT
The philosophy of supply chain management is based upon the collaboration of supply chain partners (Stank et al., 2001). Collaboration in a supply chain relates to the capability of firms to work effectively together in both planning and executing supply chain operations toward shared goals (Cao et al., 2010). Higher-level collaboration that brings the resources of diverse supply chain members together in both innovative and distinct ways promises a heightened level of uniqueness and lasting success (Lavie, 2006). The supply chain literature details specific collaboration-driven benefits, including faster new product development cycles, shorter delivery lead times, better quality, lower inventory levels, higher productivity, lower materials and manufacturing costs, and improved relationship quality among partners (Ferdows, Lewis, and Machuca, 2004; Lee, 2004; Fawcett et al., 2012). Furthermore, effective supply chain collaboration has also been associated with higher levels of customer satisfaction (Frohlich and Westbrook, 2001), differential firm performance (Frohlich and Westbrook, 2001), and the development of new competencies (Nooteboom, 2004). Supply chain collaboration between organizations is a core concept of supply chain management and is considered an important part of current SCRM practices (Scholten et al., 2014; Scholten and Schilder, 2015). Hammer (1990, 2004) contended that information technology can be employed to dramatically rethink and redesign the core processes responsible for the creation of value within the supply chain. An organization's ability to use IT to collect, analyze, and disseminate the information needed to synchronize decision-making is referred to as supply chain connectivity (Fawcett et al., 2010). When supply chain partners are connected, improved decision-making, along with higher levels of coordination and thus collaboration, is possible (Fawcett et al., 2010). Collaboration supports the development of synergies among partners, enables joint planning, and fosters the real-time exchange of information (Scholten and Schilder, 2015) necessary for firms to prepare for, respond to, and recover from supply chain disruptions while reducing their impact. Pettit et al. (2013) revealed that low collaboration, lack of excess capacity, and minimal flexibility are the major causes of poor supply chain resilience. Wieland and Wallenburg (2012) identified that communicative and cooperative (i.e., collaborative) relationships have a positive effect on resilience.
Information technology is considered an important enabler of supply chain collaboration, allowing organizations to share resources and coordinate efforts. Social media is a technology which can allow participants to join forces and connect on a larger scale than most traditional communication methods. This larger network brings greater potential for increased supply chain connectivity and value added to those who are attached through the network. Given the risks inherent in the global supply chain, especially with sourcing, the use of social media can lead to closer supplier relationships, moving beyond collaboration. The continued need for improved visibility necessitates increasingly closer relationships with key suppliers. Creating a "community" of suppliers, where crucial information, including information about disruptions, can be shared in real-time, could provide for increased resilience. Social media platforms such as Twitter are suitable to be the foundations for such supplier communities. Therefore, we propose the following: P2. The use of social media for collaborative development is positively linked to improved supply chain resilience.
Collaboration is a precursor to integration. The integration of social media into supply chain management has required firms to better understand the characteristics of integration and the potential effects and impacts for improved supply chain resilience. The motivation for increased collaboration and information sharing is at the heart of the application of Dynamic Capabilities.
Organizations that collaborate will find that their resources, especially their human capital, are free to focus on core competencies when using an already familiar technology.
SOCIAL MEDIA INTEGRATION
According to Autry and Moon (2016), a prerequisite for creating and maintaining a resilient supply chain is IT integration, which is considered a chief catalyst for competitive advantage within the context of supply chain management. Moreover, an integrated IT infrastructure is the foundation upon which all modern supply chain activities and processes are built (Autry and Moon, 2016). Access to information from anywhere at any time is critical for effective and timely responses to environmental changes within the supply chain, and IT infrastructure integration is considered especially important to ensure that access.
The corporate sector was quick to realize the many advantages of using social media to promote closer relationships with customers, to gain information about products and services, and to enhance its public image (Crawford, 2009). As Skylar (2009) noted, social media is seen as a relationship tool. Many firms, including companies such as Dell, have used social media to deliver news and provide special offers to customers. However, social media is now becoming integrated into all business areas. The world's leading enterprise resource planning suite, SAP, currently provides organizations with the capability to integrate with social media platforms. This integration affords social capabilities both where and when they are required within a firm's business processes while keeping the connection to the working environment. Using SAP Jam, the social collaboration platform from SAP, the social collaboration tools provide structure to social exchanges and work to quickly drive actions, make essential decisions, or solve crucial business problems (SAP, 2018).
Radio Frequency Identification (RFID) can also be used to generate Twitter messages (O'Leary, 2011). RFID has long been used in logistics and supply chain management to track the movement of products. Alexander (2014) notes an example of a project at the University of Waterloo: RFID-marked cows are robotically milked, and Twitter messages summarizing a variety of variables are then generated and sent once the milking process is completed. Based upon RFID events, Twitter can be used to facilitate supply chain transparency and the speed of information flow (O'Leary, 2011).
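The RFID-to-Twitter pattern described above can be sketched as an event handler that turns a tag read into a short, character-limited status line. The event fields and the `publish` hook are hypothetical placeholders for whatever scanner middleware and posting client an implementation would actually use.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class RFIDEvent:
    tag_id: str    # identifier read from the RFID tag
    location: str  # reader location, e.g., a dock door
    status: str    # e.g., "loaded", "departed", "delivered"


def format_status(event: RFIDEvent) -> str:
    """Build a status line from an RFID read, capped at Twitter's 280 characters."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return f"{ts} | shipment {event.tag_id} {event.status} at {event.location}"[:280]


def on_rfid_read(event: RFIDEvent, publish) -> None:
    """Handler to wire into scanner middleware; `publish` posts the message."""
    publish(format_status(event))
```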
As previously noted, there is evidence within the literature that integration through information sharing and collaboration provides for improved resilience (Ambulkar et al., 2016;Scholten et al., 2014;Scholten and Schilder, 2015;Harland et al., 2003). Esper et al. (2010) note that an integrated supply chain decision making capability can be paramount when it aids supply chain partners in more effectively managing disruptions. Supply chain integration can be a dynamic capability that assists the firm in overcoming supply chain disruptions in its upstream supply chain (Autry and Moon, 2016). Thus, the following proposition is offered: P3. The integration of social media for supply chain risk management is positively linked to improved supply chain resilience.
Risk is a variable that can only be mitigated. The nature of risks is that they are often unknown or unforeseen events. The effective and efficient use of resources, such as freely available social media technology to quickly adapt to such events, may provide for improved risk mitigation.
MANAGERIAL IMPLICATIONS
The inclusion and integration of any new technology presents organizational challenges. The introduction of social media applications into supply chains may seem less intrusive due to the general acceptance of their use. However, any new process or procedural change is likely to impact the resiliency of a supply chain. The listening and monitoring capabilities are basically a different form of instant messaging, the differences being the platform and the general acceptance of social media communication (Iacovou et al., 1995; Young et al., 1999). Collaboration within the supply chain affords involved parties efficiencies and perhaps potential solutions to ineffective supply chain resilience. It is a certainty that managers must be adept and ready to address the new opportunities, and the new challenges.
While seemingly a minor issue, determining whether to use personal or business devices must be addressed. Most people already carry smartphones with the ability to access social media in its various forms, like Twitter® and Facebook®. Should businesses require employees to use their personal devices? Would separate business-only devices be more secure but add additional expense? How should lost or stolen business devices be handled in terms of potential confidential data being exposed? These questions can be addressed by comprehensive policies, not unlike those required with the introduction of laptop computers and flash memory drives.
Regardless of built-in safeguards, people remain instrumental in the success or failure of any system. The use of a mobile device and social media introduces potential points of failure as well as opportunities for improvement. While it is impossible to list all potential failure points, all mobile device users have experienced issues as simple as a discharged battery. Cellular network outages or lack of coverage may also be a hindrance, potentially at key points in communication. The question remains: what additional potential risk areas might occur, especially when dealing with instant communication?
O'Leary (2011) discusses building relationships with customers. These relationships, built largely on mutual trust, extend to supply chain partners. Goolsby (2010) discussed the fear of inaccurate information as being one of the critical factors in the success or failure of these relationships. General acceptance requires an understanding of what employees are thinking (O'Leary, 2011). Further, this may include groups formed outside the purview of the organization that allow workers to criticize management; monitoring these may be viewed as spying on employees, and data may become scarce or even tainted. Developing bonds of trust with employees is the first step in any successful system. Anonymization of data, and perhaps sharing summarized results with employees, may be a step in the right direction.
Strategic alignment with any "system" is key to successful implementation and sustainable use. The use of social media for supply chain resiliency will require management to align that use with the strategic mission of the organization. This topic is pervasive across the literature related to information system implementations (Goepp and Avila, 2015; Velcu, 2010; Schniederjans and Cao, 2009). There may be more questions than answers at this point. Does the use of social media offer some new, innovative approach to communications across the supply chain, or does it simply replace current forms already in existence? Simply replacing one form of electronic message with another does not address the efficiency or the effectiveness of a supply chain process; the replacement must afford reasonable opportunities for improvement to be justified. The further intrusion of the human element into the process may also introduce data errors or degrade efficiency. The introduction of technologies like the Internet of Things (IoT) may mitigate the risk of human error. Because this technology is not reliant upon third-party logistics sources, the inherent higher speeds and accuracy of smart embedded devices may offer solutions to management in relation to integration. As more devices become capable of listening, monitoring, and collaborating automatically, the integration of IoT solutions is almost a certainty.
Yet another area of technological innovation is the explosion of big data and analytics. Ittmann (2015) insists that supply chain managers embrace the reality of big data analytics and its impact on identifying value in data. Supply chain analytics means using the data collected from within the supply chain and performing appropriate analysis to provide fast, accurate results to improve decision-making (Ittmann, 2015). Because of the variety of data, the increasing volume of available data, and the requirements for veracity and velocity (Minelli et al., 2013), big data analytics techniques and technologies are critical to ensuring that efficiency and effectiveness gains from using social media for supply chain resiliency are not lost. A key factor for the use of big data and analytics is the potential for enhanced visibility of data across the supply chain (Ittmann, 2015; Milliken, 2014, 2015). Milliken illustrates the "transformation of big data into supply chain analytics," from the use of descriptive analytics to the construction of decision modelling. It is important to remember an important concept first offered by Peter Drucker (1973): "Innovation is not a technical term. It is an economic and social term. Its criterion is not science or technology, but a change in the economic or social environment, a change in the behaviour of people as consumers and producers, as citizens, as students or as teachers…" (p. 785).
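As a minimal example of the descriptive end of such an analytics pipeline, the sketch below aggregates a log of social media mentions into daily counts per keyword, the kind of summary a monitoring dashboard might surface. The column names are assumptions about the log format, not a prescribed schema.

```python
import pandas as pd


def daily_mention_counts(log: pd.DataFrame) -> pd.DataFrame:
    """Aggregate a mention log with 'timestamp' and 'keyword' columns
    into a table of daily counts, one column per keyword."""
    log = log.assign(timestamp=pd.to_datetime(log["timestamp"]))
    return (
        log.set_index("timestamp")
           .groupby("keyword")
           .resample("D")
           .size()
           .unstack(level=0, fill_value=0)
    )
```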
According to Gallouj et al. (2018), the traditional model is for technological change to drive service and social innovation; interestingly, here the adoption and use of social media technology by individuals is driving the technological innovation in supply chain resiliency applications. As organizations introduce emerging technologies into the strategic flow, it is always important to remember that the rationale is not to use the latest software or gadget; the intent must always be to improve the profitability of the business. In this case, improving the channels of communication, arming managers with instantaneous information, and providing visibility across the supply chain are key criteria in the strategic alignment of social media as a tool to enhance supply chain resiliency.
LIMITATIONS AND FUTURE RESEARCH
The potential extensibility of any research findings is an exciting attribute of the widespread use of social media in its various forms. Social media is so widely accepted globally that repeating research studies should be possible. Understanding various cultural norms, carefully ensuring model constructs are valid, and other common practices will remain necessary. The main limitation of this research is that no empirical data was collected to assist in determining the validity of our propositions. The propositions should be further studied with not only quantitative research but also qualitative studies to assist in developing themes and additional propositions. As the IoT expands, additional work is needed to understand how to best integrate technology and where human intervention is still required.
Future research could examine how information, leveraged through the collaboration capability social media provides, could be used to increase competitive elements beyond productivity, brand management, and customer satisfaction. Additionally, an under-explored area within supply chain management is that of small and medium-sized enterprises (SMEs); research on the potential use of social media for improved resilience in SMEs could prove fruitful. Finally, additional case studies related to social media and its use within the supply chain would provide valuable insight.
CONCLUSION
Supply chains are no longer simply a cost of doing business; they have become a platform for growth, allowing organizations to reach new markets and touch new customers. To be successful, companies must innovate to compete. Social media has the potential to be an instrumental tool for supply chain managers looking to recognize new innovations, identify new trends, collaborate with stakeholders, and improve relationships with partners and suppliers. Supply chain disruptions are an inevitable occurrence in today's tumultuous business environment (Skipper and Hanna, 2009). According to a report in the Financial Times from May 2015, supply risks have more than tripled since 1995. An organization can and should attempt to mitigate potential risks via traditional supply chain risk management practices but cannot prevent all disruptions from occurring.
When it comes to supply chain risk management, having information about what is happening in real time is essential. Whether it is learning about a natural disaster that happened near a manufacturing plant, information that may alter planned travel routes, or observing the path and intensity of an oncoming hurricane, real-time information is critical and will enable an organization to make more informed and timely decisions on how to manage or mitigate risk. Alexander (2014) examines the use of social media in the mitigation of disaster risk and improving the management of crisis response; the concepts of a "listening function" and a "monitoring function" (p. 720) are discussed. Social media has the potential to be an invaluable tool for supply chain professionals attempting to collaborate with stakeholders, improve existing processes, increase efficiencies, mitigate risk, and promote recovery following a supply chain disruption. The ideas of listening and monitoring, collaborative development, and integration between organizations could be at the core of creating a resilient supply chain (Autry and Moon, 2016). Social media could be an effective tool to add to an organization's risk management toolkit.
"year": 2019,
"sha1": "ac8441ab8a58c76bd062565f2270af2a003c1222",
"oa_license": null,
"oa_url": "https://digitalcommons.wayne.edu/cgi/viewcontent.cgi?article=1417&context=jotm",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5f602ee20dd8386591fb1f4f014ca57c2885a55f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Low-carbon Lithium Extraction Makes Deep Geothermal Plants Cost-competitive in Energy Systems
Lithium is a critical material for the energy transition, but conventional procurement methods have significant environmental impacts. In this study, we utilize regional energy system optimizations to investigate the techno-economic potential of the low-carbon alternative of direct lithium extraction in deep geothermal plants. We show that geothermal plants will become cost-competitive in conjunction with lithium extraction, even under unfavorable conditions, and partially displace photovoltaics, wind power, and storage from energy systems. Our analysis indicates that if 10% of municipalities in the Upper Rhine Graben area in Germany constructed deep geothermal plants, they could provide enough lithium to produce about 1.2 million electric vehicle battery packs per year, equivalent to 70% of today's annual electric vehicle registrations in the European Union. This approach could offer significant environmental benefits and has high potential for mass application also in other countries, such as the United States, United Kingdom, France, and Italy, highlighting the importance of further research and development of this technology.
Introduction
Lithium is crucial for the transition to greenhouse gas neutral energy systems. In 2019, over 60% of lithium produced was utilized for the manufacturing of lithium-ion batteries, the compact and high-density energy storage devices for low-carbon-emission electric vehicles, and secondarily as a storage medium for renewable energy sources like solar and wind [1,2]. Prior studies have characterized lithium-bearing geothermal brines in Germany (Molasse Basin [27], Upper Rhine Graben [28,29], and North German Basin [30-32]) and the environmental impacts of lithium extraction [33,34].
In this article, we investigate for the first time the techno-economic impacts of installing and operating deep geothermal systems with lithium extraction in decentralized energy systems. For this purpose, we focus on the Upper Rhine Graben in Germany, whose brine lithium deposits are comparable to currently exploited evaporative brine and hard rock mining lithium operations [29,32,35,36]. An integrated energy system model, based on the open-source framework ETHOS.FINE [37], is extended to include hybrid geothermal plants (Section 2) and applied to optimize greenhouse gas-neutral energy systems of municipalities located in the Upper Rhine Graben in Germany for the year 2045 from a macroeconomic perspective (Section 3). Thus, based on expert evaluations of the key parameters of lithium extraction plants and through distinctive sensitivity analyses, we show the conditions under which deep geothermal energy with DLE will become an indispensable component of future energy systems. In Section 4, we discuss our findings in the context of the global energy transformation and derive conclusions.
Methods
In the methodology section, we first describe the energy system optimization framework used, on which the regional model for individual municipalities is based (Section 2.1). Subsequently, we address the key equations used to represent the geothermal plant (Section 2.2), as well as how hydrothermal temperatures and drilling are incorporated in the model (Section 2.3). The implementation of the DLE plant is shown in Section 2.4 along with key cost assumptions. Finally, in Section 2.5, we describe the municipalities considered in our case studies.
ETHOS.FINE optimization
This study utilizes a municipal energy system optimization model, which is based on the opensource Framework for Integrated Energy System Assessment (ETHOS.FINE) Python package [37]. The model provides a framework for modeling, optimizing, and assessing regional energy systems using high-resolution generation and consumption data. The objective of the model is the minimization of total annual costs (TAC) for supplying all demand sectors of a municipality while considering the technical and environmental constraints for a greenhouse gas-neutral renewable energy system in 2045. The costs are composed of the total annual costs of all built renewable power generation technologies, conversion technologies, and storage technologies, as well as sources/sinks (e.g., photovoltaic panels or lithium demand), and are determined using each technology's per unit capital costs, annuity factor, number of built installations, and operation and maintenance costs. The total costs of components may be negative, as revenues from sources/sinks are included in the operational costs (e.g., through electricity or lithium carbonate sales). The optimization is performed from the perspective of a central planner with perfect foresight.
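A generic formulation of this objective, consistent with the description above (the exact ETHOS.FINE formulation may differ in detail), is:

```latex
\mathrm{TAC} \;=\; \sum_{c \in \mathcal{C}} \Big( \mathrm{AF}_c \cdot \mathrm{CAPEX}_c \cdot K_c \;+\; \mathrm{OPEX}_c \Big),
\qquad
\mathrm{AF}_c \;=\; \frac{i\,(1+i)^{n_c}}{(1+i)^{n_c}-1},
```

where $K_c$ is the built capacity of component $c$, $i$ the interest rate, $n_c$ the economic lifetime, and $\mathrm{OPEX}_c$ the annual operational costs, which may be negative when they include revenues (e.g., from electricity or lithium carbonate sales).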
Although the model can also be used for analyses at the NUTS-3 administrative level or higher, those presented in this work take place at the municipal level. The application of a hierarchical clustering approach with the Time Series Aggregation Module (TSAM) [38] with 60 periods and 16 segments enables the analysis of a high number of energy systems at an hourly resolution (8760 h) without significant accuracy losses (mean deviation in optimized total annual costs: 0.3%).
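A sketch of this aggregation step with the tsam package is given below; the keyword names follow recent tsam releases, and the input layout (8760 hourly rows, one column per profile) is an assumption.

```python
import pandas as pd
import tsam.timeseriesaggregation as tsam  # assumes a recent tsam release


def aggregate_profiles(raw: pd.DataFrame) -> pd.DataFrame:
    """Cluster hourly profiles into 60 typical periods with 16 segments each."""
    aggregation = tsam.TimeSeriesAggregation(
        timeSeries=raw,
        noTypicalPeriods=60,
        hoursPerPeriod=24,
        clusterMethod="hierarchical",
        segmentation=True,
        noSegments=16,
    )
    return aggregation.createTypicalPeriods()
```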
The optimization model includes onshore wind, rooftop photovoltaics (PV), open-field photovoltaics (OFPV), biomass, biogas, and waste, and is extended by deep geothermal plants and the commodities of lithium and lithium carbonate (Li2CO3) (see Figure 1). Regional potentials for rooftop and open-field PV, as well as wind, are determined using the Tool for Regional Renewable Potentials (TREP) [39]. Energy demand sinks are households, the trade, commerce and service sector (TCS), and industry, as well as their respective commodities. Industrial energy demand consists of the demand for electricity, heat, and process heat. Process heat is implemented in three different forms: low-temperature for up to 100 °C, medium-temperature for between 100 and 500 °C, and high-temperature for processes above 500 °C. For the regional demand time series, top-down demand data [40] is regionalized based on employment, population, and CO2 emissions data.
Deep geothermal plant model
A geothermal plant utilizes thermal energy in deep hydrothermal aquifers to produce heat and/or electricity (see Figure 2). The power generation $P_{el}$ of the Organic Rankine Cycle plant and the heat generation $\dot{Q}_{heat}$ of the district heating plant per time step $t$ are determined as follows [12]:
$$P_{el}(t) = \dot{V} \cdot \rho \cdot c_p \cdot \big(T_{prod} - T_{inj}\big) \cdot \eta_{ORC}$$
$$\dot{Q}_{heat}(t) = \dot{V} \cdot \rho \cdot c_p \cdot \big(T_{prod} - T_{inj}\big)$$
where $\dot{V}$ is the volumetric flow rate of the geothermal brine in l/s, $\rho$ the mean density of the geothermal water in kg/l, $c_p$ the mean heat capacity of the geothermal water in kJ/(kg·K), $T_{prod}$ and $T_{inj}$ the production and injection temperatures in °C, and $\eta_{ORC}$ the net conversion efficiency of the Organic Rankine Cycle.
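A direct numerical transcription of these energy balances is given below; the sample values for flow rate, temperatures, and net ORC efficiency are illustrative assumptions, not results from this study.

```python
def geothermal_output(flow_l_s: float, rho_kg_l: float, cp_kj_kg_k: float,
                      t_prod_c: float, t_inj_c: float, eta_orc: float):
    """Return (electric power, heat output) in kW from the brine energy balance.
    Unit check: l/s * kg/l * kJ/(kg*K) * K = kJ/s = kW."""
    q_thermal_kw = flow_l_s * rho_kg_l * cp_kj_kg_k * (t_prod_c - t_inj_c)
    return q_thermal_kw * eta_orc, q_thermal_kw


# Illustrative example: 50 l/s of brine cooled from 131 degC to 60 degC at 10% net efficiency
p_el_kw, q_heat_kw = geothermal_output(50.0, 1.05, 4.0, 131.0, 60.0, 0.10)
```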
Hydrothermal temperatures and drilling
Drilling costs account for the majority of geothermal plant investment costs, with a share of up to 70% [12]. As these cost functions are non-linear (see Eq. 3 [12]), the optimization model must select one drilling depth from amongst a set of up to 400 discrete options in steps of 10 m, from 1000 m up to 5000 m. The lower limit of 1000 m is used, as lithium reserves are only present at greater depths. It is assumed that economies of scale apply to these drilling costs, with the cost of the second well being 90% of those of the first. The drilling costs are calculated as a non-linear function (Eq. 3) of the drilling depth $z_D$ in meters and the distance $d_D$ in meters between the production well and the injection well.
Figure 3: Achievable hydrothermal temperatures [41] and measured lithium contents [28,31,32,42,43].
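To illustrate how the model treats drilling as a discrete choice, the sketch below enumerates the depth options and applies the 90% economies-of-scale factor for the second well. The quadratic cost curve is a placeholder standing in for Eq. 3, whose exact coefficients are not reproduced here.

```python
def doublet_drilling_capex(z_d_m: float, single_well_cost) -> float:
    """Total drilling CAPEX for a production/injection doublet: the second
    well is assumed to cost 90% of the first (economies of scale)."""
    return single_well_cost(z_d_m) * (1.0 + 0.9)


# Placeholder non-linear cost curve; NOT the coefficients of Eq. 3 in the text
placeholder_cost = lambda z: 1000.0 * z + 0.1 * z ** 2

# Discrete depth options available to the optimizer: 1000 m to 5000 m in 10 m steps
depth_options_m = range(1000, 5001, 10)
capex_by_depth = {z: doublet_drilling_capex(z, placeholder_cost) for z in depth_options_m}
```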
The selected drilling depths then dictate the maximum achievable hydrothermal temperature in the optimization, the theoretical maxima of which can be found for German municipalities [13] up to a depth of 5000 m in Figure 3. The assumed mean temperature gradients for the major geothermal basins, the Molasse Basin, the North German Basin, and the Upper Rhine Graben, are 32 °C/km, 35 °C/km, and 43 °C/km, respectively. Locally, however, the temperature gradient for the Upper Rhine Graben may be much higher [44], particularly at depths of up to 3 kilometers, with average values of up to 110 °C/km. Therefore, for the Upper Rhine Graben the assumed average temperature gradient has been divided into three sections, with 47 °C/km until a depth of 1900 m, 41 °C/km between 1900 m and 3250 m, and 33 °C/km from 3250 m and above.
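The piecewise URG gradient described above translates into the following helper; the 10 °C surface temperature is an added assumption for illustration.

```python
def urg_temperature_c(depth_m: float, t_surface_c: float = 10.0) -> float:
    """Mean hydrothermal temperature in the Upper Rhine Graben from the
    piecewise gradients above: 47 degC/km to 1900 m, 41 degC/km to 3250 m,
    and 33 degC/km below that."""
    zones = [(1900.0, 47.0), (3250.0, 41.0), (float("inf"), 33.0)]
    temperature, top = t_surface_c, 0.0
    for bottom, gradient_c_km in zones:
        temperature += gradient_c_km * (min(depth_m, bottom) - top) / 1000.0
        if depth_m <= bottom:
            break
        top = bottom
    return temperature
```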
Direct lithium extraction
After the heat exchange with the Organic Rankine Cycle and district heating network, the cooled brine is transported to the lithium extraction plant and brought in contact with a lithium-selective adsorbent that binds with the lithium ions. The lithium is then separated from the adsorbent and upgraded to lithium carbonate, and the cooled lithium-depleted brine is returned underground via the injection well (Figure 2). In the optimizations, a mean lithium concentration of 175 mg/l is assumed based on measured data for the Upper Rhine Graben (Figure 3). The quantity of lithium extracted from lithium-bearing geothermal brines is determined using Eq. 4:
$$\dot{m}_{Li} = \dot{V} \cdot c_{Li} \cdot \eta_{ext} \qquad (4)$$
where $\dot{V}$ is the brine flow rate measured in l/s, $c_{Li}$ the concentration of lithium in the brine measured in mg/l, $\eta_{ext}$ the extraction efficiency, and $\dot{m}_{Li}$ the resulting flow of elemental lithium measured in mg/s. After the extraction, the lithium is processed with a conversion factor of 5.324 [23] into lithium carbonate, a widely traded raw material used to produce, e.g., lithium-ion batteries [6].
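Eq. 4 and the subsequent conversion step can be combined into a short annual-yield calculation; the flow rate and extraction efficiency in the example are illustrative mid-range assumptions, while the 175 mg/l concentration and the 5.324 conversion factor are taken from the text.

```python
SECONDS_PER_YEAR = 8760 * 3600
LI2CO3_PER_LI = 5.324  # Li -> Li2CO3 conversion factor [23]


def annual_li2co3_t(flow_l_s: float, c_li_mg_l: float, eta_ext: float) -> float:
    """Annual lithium carbonate output in tonnes from Eq. 4 plus conversion."""
    li_mg_s = flow_l_s * c_li_mg_l * eta_ext        # Eq. 4: elemental lithium in mg/s
    li_t_per_a = li_mg_s * SECONDS_PER_YEAR / 1e9   # mg/s -> t/a
    return li_t_per_a * LI2CO3_PER_LI


# Illustrative example: 50 l/s brine at 175 mg/l with 70% extraction efficiency
yield_t_a = annual_li2co3_t(50.0, 175.0, 0.7)  # roughly 1030 t Li2CO3 per year
```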
Economic and technical data on lithium extraction from geothermal brines is scarce and therefore subject to major uncertainties. Whilst we were able to find literature values for all needed parameters, we assessed the impact of each of these in extensive sensitivity analyses (see main text). Furthermore, we assume fixed contract prices for the lithium carbonate market prices of between 8500 €/t and 25,500 €/t. The average annual lithium carbonate price for fixed contracts has more than doubled since 2020, reaching 17,000 €/t in 2021 [5]. Typically, such fixed contracts for lithium carbonate last three to five years [4]. More recently, spot prices have shown even greater volatility, rising from roughly 5500 €/t lithium carbonate in September 2020 to over 76,000 €/t in September of 2022 [5]. However, spot prices are typically higher than contract prices, and studies anticipate that in the long-term, the market price will be significantly lower than the current spot market price [45,46]. Lithium carbonate market volatility has been observed in the past, with fixed contract prices increasing from 2015 to 2018 and then decreasing sharply until 2020. The 2015 and present spikes in pricing can be attributed to "unexpected and explosive EV market growth" [46], while the latter is also attributable to the COVID-19 pandemic. Future market prices will largely be determined by available reserves, as well as the growth of electric vehicle sales. In the long-term, lithium carbonate pricing could decrease to as low as 10,000 €/t [46].
Case studies
A total of 330 municipalities in the Upper Rhine Graben in Germany (see Figure 3) have achievable hydrothermal temperatures of 60 °C or more. We investigate the optimal energy systems of these municipalities with and without the DLE option in the Mean URG scenario (see Table 1 and Section 3). The total electricity demand is roughly … GWhel and the total heating demand is roughly 465 GWhth. Residential heating demand comprises roughly 45% of the total heat demand, while industry electricity demand makes up the largest portion of the total electricity demand at about 31%. The stated total electricity demand also includes the optimization results of ca. 183 GWhel for storage losses and electricity conversion to heat, process heat, and hydrogen (H2).
Direct lithium extraction benefits deep geothermal plants
The Bruchsal geothermal well in the Upper Rhine Graben is currently being investigated in pilot projects to identify qualified lithium-selective adsorbents, determine reservoir sustainability, assess environmental impacts, and evaluate whether lithium extraction from geothermal brines can be economically competitive with lithium sourced from South America and Australia using conventional methods. Bruchsal has a favorable lithium content (159 mg/l), temperature gradient (on average 43 °C/km), and reservoir temperature (131 °C) for such a project [29,41] and is therefore investigated here as a first case study in four scenarios (Table 1). Further information on the demand and supply structure of the municipality can be found in the Methods section.
Deep geothermal plants for power and heat generation alone are only cost-competitive under very favorable conditions and thus are not installed in optimal energy systems due to the low achievable flow rate in Bruchsal. This finding is in line with previous analyses using different energy system optimization models [12,17]. If no geothermal plant is built, most of the electricity or heat will be provided by onshore wind, rooftop and open field photovoltaic, or heat pumps, respectively; see the worst case scenario in Figure 4, which results in the same energy system as the baseline scenario without DLE.
However, depending on the geological characteristics of the geothermal source, the option of lithium extraction and sale makes deep geothermal plants cost-competitive (see the baseline, optimistic, and best case scenarios in Figure 4). In this assessment, it is important to keep in mind that average parameters were assumed to ensure the applicability of the developed model for every municipality in Germany, e.g., for temperature gradients and efficiencies. However, the most uncertain aspect of a geothermal project, the drilling costs, cannot be estimated very accurately using our model: here, the model result of 11.4 M€ is 41% higher than the real costs of 8.1 M€ [12]. A safe, conservative cost estimate had to be made in our model, as geothermal projects can become more expensive than initially estimated due to unexpected costs arising. This means that the valuation of geothermal plants in this study could be slightly underestimated for specific regions.
Figure: Bruchsal energy system (wellhead temperature of 115 °C; for the other parameters, see the Mean URG scenario in Table 1).
When conducting this study, many questions also arose surrounding the economics of DLE, its efficiency, and the market price of lithium carbonate. The extraction efficiency rates reported in the literature vary from 50-90% [10,49] and have a significant impact on total system costs (Figure 5). The same applies to the market price of lithium carbonate, which has increased substantially in recent months. The U.S. Geological Survey estimates an average annual lithium carbonate price of 17,000 €/t for fixed contracts in 2021, more than double the 2020 value [5]. However, the spot market price for September 2022 was up to roughly 76,000 €/t and is forecast to increase [52].
Cost-competitiveness even under pessimistic conditions
Sustainable low-carbon lithium may also command a premium price compared to lithium from conventional extraction due to growing demand for low-carbon products. This demand is present in the automotive sector, with a push for electric vehicle manufacturers to decarbonize supply chains, including Volkswagen and Toyota, which have set the lofty goal of eliminating carbon emissions from their value chains [53]. The commercial interest in low-carbon lithium has already been proven in the form of offtake agreements for geothermal lithium signed by Renault, Volkswagen, Umicore, LG Energy Solutions, and Stellantis [54]. As the lithium market price has a significant impact on overall costs, such premium pricing could further improve the economics of energy systems, including combined geothermal-lithium plants.
The operating expenses (OPEX) and capital expenses (CAPEX) of DLE plants have a negligible effect on the energy system design and costs. The operating expenses identified during the literature review vary from just under 2000 €/t, per Vulcan Energy [49], to roughly 4000 €/t, as reported by the US Department of Energy [23], up to roughly 8000 €/t according to discussions with experts. CAPEX values are also quite uncertain: although we utilized a value of 20,800 M€, the actual CAPEX for such a project could differ significantly.
Large-scale impacts of geothermal plants with lithium extraction
In contrast to the previous sensitivity analyses, [...] (-64%). The tendency to displace more photovoltaics, even though its cost of electricity generation is lower, can be explained by its higher system integration costs compared to wind power [55]. If every municipality in the URG were to install a hybrid geothermal plant with lithium extraction, ca. 510 kt of lithium carbonate could be produced annually, which lies well within the range of current estimates. With a typical electric vehicle lithium-ion battery pack (NMC523 type) containing ca. 8 kg of lithium [56], this would be enough to manufacture over 11.9 million battery packs annually, greatly exceeding the 1.7 million new electric vehicle registrations recorded in 2021 in the entire European Union [57]. However, given the significant barriers to the future development of hybrid deep geothermal projects, including exploratory risks, financial uncertainty, and public opposition (see Discussion), it is unlikely that 100% of the municipalities would deploy combined geothermal-lithium plants. Nevertheless, if only 10% of the municipalities in the URG were to deploy such a plant, this could still yield substantial benefits (see Figure 7).
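The battery-pack figure follows directly from the lithium mass fraction of lithium carbonate; a minimal sketch reproducing the arithmetic:

```python
# Reproducing the pack arithmetic: 510 kt/a of lithium carbonate (full URG
# deployment) vs. ca. 8 kg of lithium per NMC523 battery pack [56].

LI_FRACTION = 13.88 / 73.89            # mass fraction of Li in Li2CO3 (~18.8%)

lce_t_per_year = 510_000               # tonnes of Li2CO3 per year
li_t_per_year = lce_t_per_year * LI_FRACTION
packs_per_year = li_t_per_year * 1000 / 8          # 8 kg Li per pack

print(f"{li_t_per_year/1000:.1f} kt Li/a -> {packs_per_year/1e6:.1f} M packs/a")
# ~95.8 kt Li/a -> ~12.0 M packs/a, consistent with the >11.9 million packs
# quoted above versus 1.7 M new EU electric-vehicle registrations in 2021 [57].
```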
Discussion and Conclusions
Research on the extraction of lithium from geothermal brines dates to the early 1980s, while DLE technology has been in use for over 20 years at Livent Corporation's mine in Argentina [58].
Although the technology has been proven technically feasible with salar brines, uncertainties exist as to its application with geothermal brines, and its commercial efficacy remains to be proven.
While presenting enormous potential, it is important to acknowledge that there has been a recent surge of hype with regard to geothermal lithium extraction that may exaggerate this potential [26].
One such example is that of Vulcan Energy's Zero Carbon Lithium project in the Upper Rhine
Graben, which anticipates operating expenses roughly half those for geothermal-lithium operations in the Salton Sea area, despite having a significantly lower flow rate and lithium concentration [23,49]. Additional concerns regarding the sustainability of such lithium extraction are not without merit, as the geological source and refresh rate of these lithium deposits are not fully understood. Furthermore, social opposition, induced seismicity risks, and financial uncertainty could present major barriers to future development.
As noted above, the geological source and refresh rate of these lithium deposits are not yet fully understood, and these factors may significantly impact results [29]. Geothermal brines are rich in minerals such as magnesium, potassium, and sodium and contain significant quantities of total dissolved solids, which may cause scaling in the geothermal plant, leading to the degradation of plant components and increased maintenance and cleaning costs [25,59]. It is unknown how the addition of a lithium extraction facility would affect scaling and corrosion. In addition, the capital-intensive drilling phase is associated with considerable risk, which we accounted for in the model with conservative assumptions regarding exploration costs. Subsurface geothermal resources are often not fully characterized, and drilling may fail to locate a hydrothermal resource with favorable characteristics for geothermal exploitation. Germany is generally considered a high-cost country for geothermal development, with drilling costs exceeding those in the U.S., for example. Unsuccessful drilling can create significant financial losses and delays [60]. Past incidents of induced seismicity in Germany also gave rise to an anti-geothermal protest movement [62]. Since then, incidents of subsidence and injection-induced seismicity with magnitudes of up to 2.6 in some German towns have solidified concerns about geothermal energy use [62,63]. The importance of social acceptance is illustrated by the now-abandoned Brühl geothermal site in the Upper Rhine Graben, where construction of the planned plant was halted due to a lack of public acceptance, despite drilling success and high achieved flow rates [64]. Beyond established strategies for improving social acceptance, including preventing and minimizing undesirable effects, compensating local communities when damages occur, creating benefits for these communities, and enhancing community engagement [65], combined lithium extraction may itself have a positive effect: marketed as "green lithium", it has received significant positive media coverage recently and provides an attractive talking point for geothermal plant operators to present to the public.
If combined geothermal-lithium technology does not prove commercially successful for one of the above-mentioned reasons, the demand for, and environmental impacts of, conventional lithium procurement will likely increase further. With the current lithium supply insufficient to meet the anticipated 60-fold increase in lithium needed by 2050 to fulfill European Union demand, dependence on lithium imports from countries such as China, Australia, and Chile will likely grow, which could in turn affect the security of energy supply and the transition to carbon-neutral energy systems.
In addition, the environmental and climate impacts associated with conventional lithium extraction will likely increase, and lithium markets may become increasingly volatile due to highly concentrated supply [3]. If lithium market prices continue to rise, this could lead to new lithium resources being developed, especially carbon-intensive hard-rock deposits in Australia with a carbon footprint of about 15.8 kg CO2,eq per kg of lithium carbonate equivalent (LCE) [66]. This compares with an estimated carbon footprint of 0.3 kg CO2,eq per kg for brine deposits in South America [67]. Further research found that brine extraction has a carbon footprint of 3.2 kg CO2,eq per kg and predicted that this will increase to 3.3 kg CO2,eq by 2100 [68]. The impacts are exacerbated by lithium's estimated end-of-life recycling rate of less than 1% [69]. Assuming a carbon abatement potential of 15.8 kg CO2,eq per kg of LCE when compared with conventional hard-rock procurement, the implementation of approximately 30 such geothermal-lithium plants in the Upper Rhine Graben could lead to an abatement of 800 kt CO2 annually. Therefore, combined geothermal-lithium projects could present one of the best opportunities to decarbonize the lithium supply chain and could have a net negative carbon impact if the generated power/heat is sold to the grid and displaces coal-fired generation [64].
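A quick consistency check of the stated abatement figure; the per-plant output below is inferred from the text's numbers rather than given directly:

```python
# Checking the abatement claim: ~30 URG plants abating 800 kt CO2/a at a
# saving of 15.8 kg CO2,eq per kg LCE relative to hard-rock lithium [66].

abatement_t_co2 = 800_000                  # t CO2 per year
saving_kg_per_kg_lce = 15.8
n_plants = 30

lce_kg_total = abatement_t_co2 * 1000 / saving_kg_per_kg_lce   # kg LCE per year
lce_t_total = lce_kg_total / 1000

print(f"implied output: {lce_t_total/1000:.1f} kt LCE/a in total, "
      f"~{lce_t_total/n_plants:.0f} t/a per plant")
# -> ~50.6 kt LCE/a in total, i.e. roughly 1700 t of Li2CO3 per plant and year.
```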
Given the numerous ongoing pilot projects demonstrating the potential of DLE from geothermal brines and the rapid advancement of the technology in recent years, the assumption of commercial success appears increasingly plausible. With a total technical potential in Germany of 4155 TWhel/a, deep geothermal energy could play a key role in achieving climate goals [70]. These geothermal plants could reduce CO2 emissions from the energy sector and provide a much-needed baseload supply of renewable heat and electricity that is unaffected by weather and has a low land-use intensity [70]. Baseload heating is highly relevant in light of the energy crisis and the desire to phase out imports of Russian natural gas [71]. Lithium extraction in combination with geothermal energy use could also increase and diversify lithium supply, reduce the environmental and climate impacts of lithium extraction, and aid the energy transition by promoting the development of low-carbon technologies such as electric vehicle batteries and lithium-ion batteries for grid-scale energy storage. Hybrid geothermal plants could also provide significant economic benefits in the form of stable jobs and a new domestic lithium industry in Germany, which possesses abundant lithium resources in the Upper Rhine Graben [5,29]. This potential is not limited to Germany: significant lithium-bearing geothermal brine deposits have also been identified in the U.S., France, the U.K., and Italy [26,29], suggesting that the utilization of combined geothermal-lithium plants in future transformation strategies is essential.
Data and Code Availability. The ETHOS.FINE framework used is publicly available on GitHub | 2023-04-17T01:15:07.440Z | 2023-04-14T00:00:00.000 | {
"year": 2023,
"sha1": "4e5d50f0812b8fdb9d32fb7890dae9368f21b414",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4e5d50f0812b8fdb9d32fb7890dae9368f21b414",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
231877045 | pes2o/s2orc | v3-fos-license | Incidence and risk factors of COVID-19-like symptoms in the French general population during the lockdown period: a multi-cohort study
Background Our main objectives were to estimate the incidence of illnesses presumably caused by SARS-CoV-2 infection during the lockdown period and to identify the associated risk factors. Methods Participants from 3 adult cohorts in the general population in France were invited to participate in a survey on COVID-19. The main outcome was COVID-19-Like Symptoms (CLS), defined as a sudden onset of cough, fever, dyspnea, ageusia and/or anosmia, that lasted more than 3 days and occurred during the 17 days before the survey. We used delayed-entry Cox models to identify associated factors. Results Between April 2, 2020 and May 12, 2020, 279,478 participants were invited, 116,903 validated the questionnaire and 106,848 were included in the analysis. Three thousand thirty-five cases of CLS were reported during 62,099 person-months of follow-up. The cumulative incidences of CLS were 6.2% (95% Confidence Interval (95%CI): 5.7%; 6.6%) on day 15 and 8.8% (95%CI 8.3%; 9.2%) on day 45 of lockdown. The risk of CLS was lower in older age groups and higher in French regions with a high prevalence of SARS-CoV-2 infection, in participants living in cities > 100,000 inhabitants (vs rural areas), when at least one child or adolescent was living in the same household, in overweight or obese people, and in people with chronic respiratory diseases, anxiety or depression or chronic diseases other than diabetes, cancer, hypertension or cardiovascular diseases. Conclusion The incidence of CLS in the general population remained high during the first 2 weeks of lockdown, and decreased significantly thereafter. Modifiable and non-modifiable risk factors were identified. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-021-05864-8.
Introduction
Following the identification of a novel coronavirus (SARS-CoV-2) in Wuhan, China in December 2019 and its worldwide spread [1], the first imported COVID-19 cases were reported in France on January 24, 2020 [2]. Less than 2 months later, the French government declared a nationwide epidemic (phase 3) and a generalized lockdown was set up on March 17, 2020 [3]. The lockdown included the banning of all non-essential public gatherings, closure of educational and public/cultural institutions, and orders for people to stay home apart from exercise and essential tasks. Children and their parents were required to stay at home as much as possible [4]. Public health reports have shown that lockdown had a marked impact on the dynamics of the pandemic, with a clear downward trend in new hospitalizations from April 1, 2020, and a consecutive decrease in the number of deaths from April 7, 2020 [4,5]. Thus, the French government eased these restrictions on May 11, 2020 [3]. Although lockdown appeared to successfully alleviate the burden of severe COVID-19 [6], estimates of its impact on mild-to-moderate COVID-19 are based on modelling studies [7] and are not yet supported by clinical evidence.
Our main goals were 1) to estimate the incidence of illnesses presumably caused by SARS-CoV-2 infection during the lockdown period; 2) to identify the associated risk factors. We also described associated symptoms, preventive behaviors and healthcare in relation to these illnesses.
Participants and methods Design
The SAPRIS ("SAnté, Perception, pratiques, Relations et Inégalités Sociales en population générale pendant la crise COVID-19") survey was began in March 2020 to evaluate the main epidemiological, social and behavioral challenges of the SARS-CoV2 epidemic in France in relation to social inequalities in health and healthcare. SAPRIS is based on a consortia of prospective cohort studies involving two child-cohorts (not presented in this study) and three general population-based adult cohorts: -1) CONSTANCES, a "general population" cohort including 204,973 adults aged 18 to 69 at inclusion and randomly selected from 2012 to be a representative sample of the French adult population affiliated to the General Health Insurance Fund (the source population, that is, approximately 85% of the total French population) [8]. Among CONSTANCES participants, 66,881 are followed by internet, the rest through mailed questionnaires.
-2) E3N / E4N, a multigenerational adult cohort based on a community of families with 113,000 participants (including women recruited in 1990 and still actively followed up, their offspring and the fathers of these offspring), among whom 89,606 are followed by internet, the rest through mailed questionnaires [9].
Ethics and public involvement. Ethical approval and written informed consent was obtained from each participant before enrolment in the original cohort. According to French law, the present nested survey did not require specific additional written consent from the participant. It was approved by the Inserm ethics evaluation committee (approval #20-672 dated March 30, 2020). Volunteer participants were involved in testing the readability, the comprehension and acceptability of the questions as well as the time required to complete the questionnaires, but they did not contribute to other aspects related to the design, conduct, reporting or dissemination of the research.
All participants from the original cohorts followed using electronic (internet) questionnaires and who were still under active follow-up on April 1, 2020 (n = 279,478) were invited to participate in the current SAPRIS survey (Fig. 1). There were no restrictions on inclusion criteria in the survey. A first self-administered questionnaire covered the lockdown period and was sent from April 1, 2020 and returned before May 12, 2020. A second questionnaire covered the postlockdown period and was sent between May 5, 2020 and June 15, 2020. The present study used the data from the first self-administered questionnaire, which included questions on socio-demographics, household size and composition, SARS-CoV2 diagnosis, a detailed description of the subject's symptoms in the 2 weeks before the questionnaire, comorbidities, healthcare use and treatment, employment, daily life, child care, alcohol, tobacco and cannabis use, social and sexual life, preventive measures, risk perception and beliefs.
Additional specific socio-demographic and clinical characteristics were extracted from the original cohort databases. Symptoms were reported if they had been present at least once in the last 14 days. If a symptom had been present but was no longer present when the questionnaire was completed, its duration was noted on a scale (less than 1 day, 1-3 days, 4-7 days, 8-14 days, more than 14 days). Finally, the total time (in days) between the onset of the first symptoms and the questionnaire was reported. All visits outside the home and the use of preventive measures in the 7 days before the questionnaire were also reported.
Outcome
The main outcome was COVID-19-Like Symptoms (CLS), defined according to the European Centre for Disease Prevention and Control as at least one of the following: cough, fever, dyspnea, or sudden onset of anosmia, ageusia or dysgeusia [11], lasting more than 3 days and occurring during the at-risk period. Participants were also requested to report the occurrence of cough, fever or dyspnea before March 1, 2020 or between March 1 and the 2 weeks before the questionnaire, and whether they or any other household member had tested positive for SARS-CoV-2 before the questionnaire. The primary "at-risk period" was defined as the 17 days before the self-administered questionnaire for each participant, corresponding to the 14 days for reporting the presence of symptoms plus 3 days for the minimum duration in our definition of CLS. In a first sensitivity analysis, no restriction was placed on the minimum duration of symptoms, extending our primary case definition of CLS to illnesses that lasted less than 4 days. In a second sensitivity analysis, the at-risk period was defined as the interval between March 16, 2020 and the date of the questionnaire for all participants. This definition made it possible to include all CLS that occurred during the lockdown period.
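As a minimal sketch, the primary case definition can be written as a predicate over one participant's answers; the field names below are hypothetical placeholders, since the study's actual variable coding is not described at this level of detail.

```python
# Sketch of the primary CLS case definition as a boolean predicate.
# Symptom labels and argument names are hypothetical, not SAPRIS variables.

CLS_SYMPTOMS = {"cough", "fever", "dyspnea",
                "sudden_anosmia", "sudden_ageusia", "sudden_dysgeusia"}

def is_cls_case(symptoms, duration_days, days_before_questionnaire,
                at_risk_window=17, min_duration=4):
    """CLS: >=1 qualifying symptom, lasting more than 3 days,
    with onset inside the 17-day at-risk period."""
    return (bool(CLS_SYMPTOMS & set(symptoms))       # any qualifying symptom
            and duration_days >= min_duration        # "more than 3 days"
            and days_before_questionnaire <= at_risk_window)

# Example: cough + fever for 5 days, starting 10 days before the questionnaire
print(is_cls_case({"cough", "fever"}, duration_days=5,
                  days_before_questionnaire=10))     # True
```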
Statistical methods
We determined that 100,000 subjects were needed to achieve a power of at least 80% to identify associations (odds ratio, OR < 0.9 or OR > 1.1) between covariates and CLS in a wide range of situations, assuming a 10% event rate and exposure prevalences of 10 to 90%.
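The paper does not state the exact method behind this calculation; as a hedged illustration, a two-proportion normal approximation reproduces the stated power of roughly 0.8 for the least favorable configuration (10% exposure, OR = 1.1):

```python
# Approximate power check via a two-proportion z-test (illustrative only;
# the study's actual sample-size method is not reported).
from math import sqrt
from scipy.stats import norm

n, event_rate, exposed_frac, target_or = 100_000, 0.10, 0.10, 1.1

# Solve event probabilities in unexposed (p0) and exposed (p1) groups so that
# the odds ratio is 1.1 and the overall event rate is 10%.
p0 = event_rate
for _ in range(50):                                  # fixed-point iteration
    odds1 = target_or * p0 / (1 - p0)
    p1 = odds1 / (1 + odds1)
    p0 = (event_rate - exposed_frac * p1) / (1 - exposed_frac)

n1, n0 = n * exposed_frac, n * (1 - exposed_frac)
se0 = sqrt(event_rate * (1 - event_rate) * (1 / n1 + 1 / n0))  # SE under H0
se1 = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)            # SE under H1
power = norm.cdf((p1 - p0 - norm.ppf(0.975) * se0) / se1)
print(f"p0={p0:.4f}  p1={p1:.4f}  power={power:.2f}")          # ~0.79
```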
We used inverse probability weighting to correct for selection bias (when only a subgroup of the whole cohort was invited to participate by internet) and inverse probability weighting to correct for non-participation bias in those who were invited. Weights were estimated using logistic regression models, with selection or participation as the response variables, and participants' characteristics as covariates (see supplementary Table 1). Unweighted and weighted daily incidence rates of CLS and 95% confidence intervals were estimated with an exact method based on the Poisson distribution. Estimates of unweighted and weighted cumulative incidences on days 15 (March 30, 2020), 30 (April 14, 2020) and 45 (April 29, 2020) of lockdown were obtained as one minus the estimated probability of survival free of CLS at that time.
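A minimal sketch of the weighting step, assuming a logistic participation model; the covariates and column names are hypothetical stand-ins for those listed in supplementary Table 1.

```python
# Inverse-probability weighting for non-participation: model participation
# with logistic regression, then weight each participant by 1 / P(participate).
import pandas as pd
import statsmodels.formula.api as smf

def participation_weights(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per invited subject, with a 0/1 'participated' column."""
    model = smf.logit("participated ~ age + C(sex) + C(education)",
                      data=df).fit()
    p = model.predict(df)                    # estimated participation prob.
    out = df.copy()
    out["ipw"] = 0.0
    mask = out["participated"] == 1
    out.loc[mask, "ipw"] = 1.0 / p[mask]     # weights for participants only
    return out[mask]                         # weighted analysis set
```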
To account for potential heterogeneity between the cohorts, left truncation and censoring in the data, factors associated with the occurrence of CLS were identified using unweighted data and delayed-entry Cox models with stratification on the source cohort. The start of the at-risk period was defined according to the calendar day for each participant, and survival time was calculated as the time between that day and the day the questionnaire was filled out for participants with no symptoms, or the day the first symptoms were reported for CLS cases. Multivariable analysis was performed including all factors associated with CLS on univariable analysis. All analyses were performed with SAS 9.4 software (SAS Institute Inc., Cary, North Carolina, USA). A P-value < .05 was considered statistically significant.
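The study used SAS; purely as an illustration of the delayed-entry (left-truncated), cohort-stratified Cox setup, here is an equivalent sketch in Python with the lifelines library on simulated data (all variables hypothetical):

```python
# Delayed-entry Cox model stratified on source cohort, on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
entry = rng.integers(0, 10, n).astype(float)       # delayed entry (calendar day)
time_to_cls = entry + rng.exponential(40, n)       # latent symptom-onset time
questionnaire = entry + rng.uniform(10, 30, n)     # administrative censoring
df = pd.DataFrame({
    "entry": entry,
    "exit": np.minimum(time_to_cls, questionnaire),
    "cls": (time_to_cls <= questionnaire).astype(int),
    "age": rng.integers(20, 80, n),
    "cohort": rng.choice(["CONSTANCES", "E3N-E4N", "NutriNet"], n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="exit", event_col="cls",
        entry_col="entry",          # left truncation (delayed entry)
        strata=["cohort"])          # stratification on source cohort
cph.print_summary()
```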
Results
A total of 116,903 of the 279,478 participants (42%) who were invited to participate in the survey completed the questionnaire. The participation rate was 69% in the CONSTANCES cohort, 51% in the E3N-E4N cohorts and 26% in the NutriNet-Santé cohort (Fig. 1). Table 1 presents the characteristics of the included participants. Median age was 59 years (Q1-Q3: 46 to 71 years), and 66% of the participants were women. Twenty-six percent were residents of the Ile-de-France or Grand Est regions, the two regions with the highest rates of SARS-CoV-2 infection in metropolitan France, while 23% lived in rural areas and 44% lived in cities of more than 100,000 inhabitants. At least one child or adolescent was present at home in 25% of households. Forty-three percent were retired and 50% were working adults, but only 8% worked outside the home during the lockdown period. Ten percent of the participants were obese and a chronic disease was reported by 34% of participants.
The primary daily incidence rate peaked on day four of lockdown (March 19, 2020; unweighted estimate 5.57 per 1000 person-days, 95%CI 4.45; 6.89; Fig. 2) and then showed a sharp, constant decrease, reaching less than 1 per 1000 person-days after day 25 (April 9, 2020). Similar findings were observed for the weighted incidence rates and in the sensitivity analysis considering a different at-risk period (supplementary Figs. 1 & 2). Daily incidence rates were higher but showed a similar temporal pattern when the case definition of CLS included illnesses that lasted less than 4 days (supplementary Fig. 3).
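The exact Poisson (Garwood) interval mentioned in the methods can be sketched as follows; the event count and person-time are illustrative values chosen only to roughly reproduce the reported peak rate, as the actual daily counts are not given in the text.

```python
# Exact Poisson confidence interval for an incidence rate (events / person-time).
from scipy.stats import chi2

def exact_poisson_rate_ci(events, person_time, alpha=0.05):
    lo = chi2.ppf(alpha / 2, 2 * events) / (2 * person_time) if events else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_time)
    return lo, hi

k = 81                              # hypothetical event count at the peak
pt = k / 0.00557                    # person-days implied by the peak rate
lo, hi = exact_poisson_rate_ci(k, pt)
print(f"rate={1000*k/pt:.2f}  95%CI ({1000*lo:.2f}; {1000*hi:.2f}) per 1000 pd")
# -> roughly (4.4; 6.9) per 1000 person-days, close to the reported interval.
```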
Eighty out of 189 participants who experienced CLS and were tested reported a positive (RT-PCR) test result (supplementary Table 2). Headaches, rhinorrhea and fatigue were frequently reported in addition to the symptoms defining CLS. Eight hundred and forty-eight (28%) participants with CLS had a GP or hospital visit, and a diagnosis of COVID-19 was considered to be very likely or likely by the physician in 62% of cases. Paracetamol was taken by 62% and antibiotics by 6% of participants with CLS. Only 8 participants used chloroquine or hydroxychloroquine. Forty percent of participants stayed strictly confined at home following symptom onset. Table 2 presents the unweighted incidence rates of CLS and the hazard ratios obtained from the univariable Cox models with stratification on source cohort. Almost all tested factors were found to be associated with CLS. A positive RT-PCR in another household member was strongly associated with CLS in the participant; this variable was not included in the multivariable analysis to avoid overfitting. On multivariable analysis (Table 3), the risk of COVID-19 was lower in older age groups and was higher in the Ile-de-France and Grand Est regions (compared to other French metropolitan regions), in those living in cities > 100,000 inhabitants (vs rural areas), when at least one child or adolescent was living in the same household, in overweight or obese participants, and in people with chronic respiratory diseases, anxiety or depression, and chronic diseases other than diabetes, cancer, hypertension or other cardiovascular diseases. The observed associations were confirmed in the sensitivity analyses, except that male gender, living in a household of size 2 and being retired were negatively associated with the risk of CLS in addition to the factors identified in the primary analysis (supplementary Tables 3 & 4).
Discussion
Lockdown was associated with a strong decrease in the incidence of CLS in the French adult population that participated in this survey. This study shows that the cumulative incidence of CLS on day 45 of lockdown ranged from 7.7 to 10.2% depending on the estimation method, that more than 60% of new cases occurred within the first 2 weeks, and that the daily incidence remained at a sustained low level from 1 month after the start of lockdown onwards. In addition, we identified several risk factors for CLS during this period and described the immediate consequences of these syndromes in terms of access to healthcare and treatment. To our knowledge, this is the first study to report the signs and symptoms of COVID-19 on a nationwide scale during a lockdown.
Only 28% of the participants with CLS had a medical visit. This result is in line with estimates based on a digital participatory system in France during the same period, in which 31% of COVID-19 patients sought medical advice [13]. Forty percent of the participants with symptoms remained strictly confined without leaving their homes, following the government's recommendations. Considering the estimated 5-day median incubation time of COVID-19 and the appearance of symptoms within twelve days after infection [14], a large proportion of participants who developed CLS in the first two weeks were probably infected before lockdown, most of them in the community or at the workplace. It is therefore not surprising to find CLS in adults associated with younger age [15], with living in urban versus rural environments [16], and with living in high-prevalence French regions [17], all factors that were reported in other studies performed before lockdown.
A lower infection rate with increasing age was reported in several population-based serological studies [18], which is consistent with our findings, although the risk of severe illness or death increases exponentially with age among those infected [19]. As in other studies, univariable analysis identified the size of the household (including children), but only living with at least one child or adolescent remained associated with CLS on multivariable analysis, indicating that this age group could play an important role in household-related transmission [20]. We also identified other factors indicating potential secondary household-related transmission, such as living with another person with a positive diagnosis of SARS-CoV-2 [21]. However, it was impossible to determine a timeframe for this factor and to identify whether the participant was the source of infection or was infected by a household member. Being a healthcare worker was associated with CLS in univariable analysis, as reported in other studies [22,23], but the association did not remain significant in the multivariable model, potentially due to a lack of statistical power. Obesity has been found to be linked with the risk of severe CLS in young patients [24], and is also suspected to increase susceptibility to infection [25]. Different theories suggest that asthma, COPD and other respiratory diseases may be negatively or positively associated with susceptibility to SARS-CoV-2 infection due to up- or down-regulation of angiotensin-converting enzyme-2 expression. However, all of these respiratory diseases have been shown to be associated with the severity of illness in infected persons [26][27][28]. Since 30 to 60% of SARS-CoV-2 infections are asymptomatic [29][30][31][32] and, by definition, were not included in our CLS cases, it is not surprising to find these conditions, which are known to be associated with more severe disease, in subjects with symptomatic SARS-CoV-2 infection. Finally, we found a strong association between CLS and anxiety or depression, which may be related either to a direct (i.e. causal) impact of these comorbidities on the risk of CLS, or to over-reporting of CLS caused by increased anxiety or stress in this specific subgroup. Although psychiatric disorders have been reported during the acute phase of the infection [33], the risk of reverse causality explaining this association should be limited, as comorbidities were collected prior to the survey. Consistent results were obtained in the sensitivity analyses. An association of CLS with being retired (compared with working) was found, with a strength of the same magnitude as that estimated in the main analysis. This result may reflect the higher power of the sensitivity analyses due to the higher number of events, and may be explained by a lower rate of social contacts in retired persons compared with working people of the same age.
Our study has several limitations. The most important is the lack of virological confirmation of CLS and the resulting risk of misclassification between SARS-CoV-2 infection and disease of another etiology. During lockdown, French health authority recommendations limited SARS-CoV-2 RT-PCR testing to patients with severe symptoms requiring hospitalization or to specific situations (e.g. healthcare workers with symptoms). Thus, testing was not available to most participants. Nevertheless, the influenza season peaked on week 6 and ended on weeks 10 to 12, just before lockdown, which limits the risk of acute respiratory infection caused by an influenza virus. In addition, 42% of the small group of participants who were tested for SARS-CoV-2 infection in our study reported a positive RT-PCR result. This positive rate was higher than that reported in the community (30% at its highest, between March 23 and March 29, 2020) [34]. However, a 15 to 20% seroprevalence of SARS-CoV-2 was reported in Spain in individuals from the general population who presented symptoms compatible with COVID-19 [32]. It is therefore likely that the cause of illness was not SARS-CoV-2 infection in a significant proportion of our CLS cases, and only studies using sensitive and specific virological methods can accurately quantify the extent of the SARS-CoV-2 epidemic. To limit recall bias, another potential limitation of our study, we restricted the questionnaire to symptoms present in the past 14 days. To avoid a selection bias induced by different questionnaire completion dates, which resulted in 'at-risk periods' that varied from one subject to another, we used a Cox model with delayed entry. Finally, although participation bias was accounted for with an appropriate weighting method, our findings should not be considered strictly representative of the general adult population in France. Nevertheless, the large number of subjects from all social categories allows us to draw robust conclusions on the factors associated with the occurrence of CLS in France.
Conclusion
In conclusion, to our knowledge this is the first study to quantify the incidence of CLS in the general population on a nationwide scale and during a lockdown, and it identified several modifiable and non-modifiable risk factors.
Availability of data and materials
In regards to data availability, data of the study are protected under the protection of health data regulation set by the French National Commission on Informatics and Liberty (Commission Nationale de l'Informatique et des Libertés, CNIL). The data can be available upon reasonable request to the corresponding author (fabrice.carrat@iplesp.upmc.fr), after a consultation with the steering committee of the Sapris study. The French law forbids us to provide free access to Sapris data; access could however be given by the steering committee after legal verification of the use of the data.
Ethics approval and consent to participate Ethical approval and written informed consent was obtained from each participant before enrolment in the original cohort. The study was approved by the Inserm ethics evaluation committee (approval #20-672 dated March 30, 2020). According to French law, the present nested survey did not require specific additional written consent from the participant. Representatives of the participants tested and validated the questionnaires, but they did not contribute to other aspects related to the design, conduct, reporting or dissemination of the research. | 2021-02-11T14:19:23.896Z | 2021-02-10T00:00:00.000 | {
"year": 2021,
"sha1": "c7494bb1cc3f52ab22f876835a7356b848774d3b",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-021-05864-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c744bb1b70b7f35bbf7945f3f52dc2a85d68595",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
197421368 | pes2o/s2orc | v3-fos-license | Exploring perceptions of consanguineous unions with women from an East London community: analysis of discussion groups
Consanguineous unions are relationships between blood relatives. This study explores the perceptions of consanguineous unions and risk of childhood disability and illness through the reported views and experiences of women in an ethnically diverse London community. This qualitative study utilised group discussions to elicit women’s views and experiences. Field notes were recorded by independent note-takers in four group discussions. Field notes were coded manually and independently by two researchers who identified common themes for thematic analysis. Thirty-six women attended, of whom 20 identified as Asian Pakistani. Identified themes included variation in participants’ views of consanguineous unions and associated health risks, the value of informed decisions and preferences for information distribution. Although participants had diverse opinions and experiences, they considered risk awareness to be vital for encouraging informed decisions in younger generations. This study highlights the importance of involving the community in efforts to increase awareness around consanguineous unions and genetic risk, emphasising the need for enabling educated choices and the value of co-developing educational efforts with the community. Electronic supplementary material The online version of this article (10.1007/s12687-019-00429-4) contains supplementary material, which is available to authorized users.
Background
Clinical genetics considers a relationship between blood relatives who are second cousins or closer as consanguineous (Hamamy et al. 2011;Ng 2016). Consanguinity describes the state of being related by blood and the terms consanguineous relationships or unions are used to describe relationships between blood relatives. Consanguineous unions are prevalent in many communities worldwide and it is estimated that, globally, 15% of all neonates have consanguineous parents (Bennett et al. 2002;Bittles and Black 2010;Darr 2016). In recent years, migration has led to increasingly multi-ethnic societies with a cultural milieu of diverse traditions and social norms. This has contributed to the spread of global awareness of the genetic implications of customary consanguineous marriages (Bennett et al. 2002;Modell and Darr 2002).
Consanguineous marriages have been linked to genetic disease due to an increased risk of autosomal recessive disorders and infant mortality (Bennett et al. 2002;Modell and Darr 2002;Hamamy 2012). Evidence suggests that the risk of inheriting a genetic disorder is doubled in the children of consanguineous parents, compared to children of unrelated parents (Bennett et al. 2002;Shaw 2009;Hamamy 2012;Darr et al. 2013). Congenital birth defects, such as sensorineural hearing loss and heart disease, and neurodevelopmental disorders, such as autism spectrum disorder and unexplained learning difficulty, also seem to occur in children born to consanguineous parents at high rates (Lyons et al. 2009;Strømme et al. 2009;Shieh et al. 2012;Ng 2016;Al-Mubarak 2017;Best et al. 2017;Sanyelbhaa et al. 2018). Although these disorders often have a complex aetiology which cannot be directly linked to genetics alone, this phenomenon can be partly explained by the increased likelihood of inheriting two recessive alleles, and hence manifestation of genetic disease (Modell and Darr 2002). Despite the potential health risks, consanguineous marriage is favoured in some populations due to social, cultural, and economic benefits, including the strengthening of family ties, confidence in finding a compatible spouse, and protecting property (Khlat et al. 1986;Bittles et al. 1991;Bittles 1994;Hussain 1999;Modell 2002;Khan et al. 2011).
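The elevated risk has a simple population-genetic core: with inbreeding coefficient F (1/16 for offspring of first cousins), the probability that a child is homozygous for a recessive allele of frequency q rises from q^2 to q^2 + Fq(1-q). The sketch below illustrates this with an arbitrary allele frequency; it is a single-locus illustration, not a reproduction of the cited studies' estimates.

```python
# Recessive-disease risk with inbreeding (standard Wright formula):
# P(affected) = q^2 + F*q*(1-q), where F is the inbreeding coefficient.

def recessive_risk(q, F=0.0):
    return q**2 + F * q * (1 - q)

q = 0.01                                   # example carrier allele frequency
unrelated = recessive_risk(q)              # random-mating risk: 1e-4
first_cousins = recessive_risk(q, F=1/16)  # offspring of first cousins
print(f"unrelated: {unrelated:.2e}, first cousins: {first_cousins:.2e}, "
      f"ratio: {first_cousins / unrelated:.1f}x")
# For a single rare allele the relative risk is large, but absolute risks
# remain small; aggregated across many loci this yields the roughly doubled
# overall risk cited above.
```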
In the United Kingdom (UK), studies involving families of Pakistani descent indicate higher rates of consanguineous unions (Darr 2016) and a threefold increase in child mortality when compared to Caucasians (Bundey and Alam 1993; Khan 2010). A cohort study in the London Borough of Tower Hamlets found a significantly increased risk of autosomal recessive disorders in children of consanguineous parents (33.6% versus 21.6%, p value = 0.011) (Best et al. 2017). Since the demographics of Tower Hamlets closely resemble those of the neighbouring borough of Newham (Office for National Statistics 2016), the results of this study raised awareness of the need to develop a public health response to address the needs of consanguineous families and marginalised communities in Newham. The UK has not established nationwide action to support these families, and only local approaches have been documented to date. These documented local approaches emphasise the importance of community engagement and co-design, to ensure that they are respectful of local beliefs and can be effectively implemented in the community (Salway et al. 2016;Ali et al. 2018). To this end, the present study sought to explore perceptions of consanguineous unions in a district of Newham and contribute to the growing body of evidence on local initiatives.
Aims
The primary aim of the study was to explore perceptions of consanguineous unions and associated genetic risks, indirectly assessing genetic literacy at the community level. The secondary aim of the study was to examine proof of concept for future collaborative interventions involving genetic literacy in marginalised communities, such as ethnic minorities.
Methods
Qualitative research, using facilitated small group discussion, was identified as the most suitable method for investigating community perceptions. This method combines interview-style questioning with group interactions to explore opinions, beliefs and experiences within a supportive and social framework, whilst allowing researchers to observe shared language and knowledge within a group (Hughes and DuMont 1993;Krueger 1994). The ongoing conversation café initiative in Newham (Newham London n.d.) aims to engage, empower, and develop women and families in the community, presenting a good opportunity for hosting these group discussions. Topic guides were developed with input from a female community facilitator in order to structure group discussions (Table 1).
Recruitment of participants
Participants were recruited by the community facilitator using snowballing through purposive sampling for gender and ethnicity, focusing on females of South Asian and Middle Eastern descent. Potential participants were approached in street talks, a ladies' Arabic group session, local libraries, schools, mosques and beauty parlours in Newham, London. Approximately 200 women were approached by the community facilitator, with 36 participants ultimately attending. Participants were not asked about their own marriage or relationship status for recruitment purposes or during the discussion groups to avoid stigmatisation. Demographic data and reasons for non-participation for those who did not agree to attend were not collected.
Group discussions
Following a brief presentation on genetic disease in children of consanguineous parents and an explanation of the study aims, participants were divided into four sub-groups of nine participants (Morgan 1997), each coordinated by an independent female facilitator. The discussions were hosted at East Ham Town Hall on 11/09/2017 and lasted 90 min, with a 30-min catered lunch break following question 5 (Table 1).
The facilitators were impartial mediators recruited from outside the local community to reduce any bias caused by pre-existing knowledge of community perceptions. All four facilitators were female and three of these facilitators had additional language skills, allowing for translations in Urdu, Hindi, Malayalam, Punjabi and Bengali. Ultimately, all participants felt comfortable carrying out discussions in English. Each facilitator guided discussion by offering prompts based on a topic guide (Table 1).
Data collection
Data collection was conducted by each facilitator as field notes on a laptop, with verbatim quotations where possible. Voice recorders were not utilised for this data collection to encourage candidness, following advice from the community facilitator.
Demographic data on participants were collected using a questionnaire piloted by authors (Manikam et al. 2016). Confidentiality was maintained through coding of participant responses with assigned numbers corresponding to their anonymised demographic information.
Data analysis
Responses from transcripts were reviewed and coded independently by MAC and MA to derive common themes and subthemes from the data through subsequent thematic analysis. Conflicts in data analysis were resolved by discussion with EA. In this study, we use thematic analysis to understand fundamental themes and their relationships within the participant group, including the range of individual attitudes, opinions and beliefs expressed (Guest et al. 2010;Bowling 2014).
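Independent double coding is often accompanied by a formal agreement statistic such as Cohen's kappa; the study reports conflict resolution by discussion rather than a kappa value, so the following is purely an illustrative sketch with made-up codes.

```python
# Inter-coder agreement check for independent qualitative coding.
# The code labels below are hypothetical, not the study's actual codebook.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["risk", "choice", "info", "risk", "info", "choice"]
coder_2 = ["risk", "choice", "info", "choice", "info", "choice"]

print(f"Cohen's kappa: {cohen_kappa_score(coder_1, coder_2):.2f}")  # 0.75
```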
Participant characteristics
Of the 36 women included in the discussion groups, most (47%, n = 17) were between 30 and 39 years of age (Table 2) with a mean age of 39 years. The majority of participants were of Asian Pakistani descent (55%, n = 20), with the most common birthplace of participants being Pakistan (42%, n = 15), followed by the UK (25%, n = 9). Urdu was the most common native language (39%, n = 14) and Islam was the most commonly reported faith (97%, n = 35). The majority of participants (67%, n = 24) had lived in the UK for over 10 years, and all but two participants had children.
Emergent themes
A number of themes were identified by authors' iteration within and across the transcripts: (1) variation in perception of consanguineous unions and associated health risks, (2) the importance of informed choice and (3) preferences for information and sources of information (Fig. 1).
Theme 1: Variation in perception of consanguineous unions and associated health risks
Participants represented a variety of cultures, religions and ages, expressing a wide range of views on and experiences of consanguineous unions. This variation also informed participants' views of health risk associated with consanguineous unions.
Subtheme 1.1: Variation in participants' views on consanguineous unions Participants' overall views on consanguineous unions highlighted both the associated benefits and disadvantages, with views ranging from supportive to sceptical. Variations in opinion were marked by differences in religious beliefs and age. For example, a participant explained that consanguineous marriage is permitted in her religious beliefs and health risks are not a paramount concern. However, she was also aware of significant variation in opinion within the Islamic faith.
There are different religions, in Islam God did not forbid it. For my daughter, she will like her cousin, I will say go for it, it is not forbidden... Not all Muslim is the same, some say no. (Age 46).
The generational gap also appeared to divide opinions, with participants suggesting that the older generation may show more support for consanguineous unions than the younger generation, and that it is important to understand perspectives from different generations.
New generation and old is completely different, so it is good to talk to both. The new generation, they are against this cousin marriage. (Age 46).

Subtheme 1.2: The role of personal experiences

Participants with experience of disability in children of consanguineous marriages were more aware of the potential health risks and more critical of consanguineous unions, whilst participants with experience of consanguineous unions leading to healthy progeny, or of disability in progeny of unrelated parents, believed that parentage was unjustly associated with disability. Many women detailed experiences within their own families, with one participant giving an account of her experience with physical disability in a child.
I knew [about the risks] before I came here because I saw it with my own eyes. A cousin married, and baby born with just one eye, it's horrible... There is more chance [of disability] and I don't like that. (Age 40).
Other participants considered the association between consanguineous unions and genetic disability to be overemphasised. These participants cited cases where healthy children were born in consanguineous unions and children with disabilities were born in non-consanguineous unions. For example, a participant with experience of disability occurring a child of unrelated parents expressed her belief that consanguineous unions should not be associated with a definite risk of disability.
It is not a 100% chance a child will be ill. A cousin married outside [the family] and has autism in family. It's a risk to take regardless. Need to change people's views. (Age 64).

At the same time, participants also discussed some of the benefits of marrying outside the family, including avoidance of family conflict, extending family networks and experiencing new cultures. For some women, autonomy in marriage decisions was paramount, with love, happiness and choice said to be the most important considerations. One participant voiced their support of autonomy in marriage very clearly, stating: Parents should not interfere and force cousin marriages, the people getting married should think for themselves. (Age 64).
Subtheme 2.2: Desire for information

Participants requested information on the risks associated with consanguineous unions to enable educated decision-making, inform choices for their children and invest in the future, with most participants citing this as their motivation for joining the discussion.
Participants were generally aware of the link between consanguineous unions and illness, but were unclear on the nature of the association and on which conditions had a genetic component. Many participants expressed difficulty understanding the mechanisms of Mendelian inheritance, leading to difficulties in comprehending genetic risk and the risk of disability that might be associated with consanguineous unions. The low risk of disability in children of both consanguineous and unrelated parents also contributed to difficulty in understanding the level of risk attributed to consanguineous unions. I knew there's a low risk of disability, I don't know exactly how much. (Age 35).
Fig. 1 Concept map of emergent themes from thematic analysis of discussions
Discussions also reflected limited understanding of the genetic mechanisms behind disease and of which conditions arise from recessive genetic disorders. For example, one participant believed that infectious conditions, such as meningitis, were due to consanguineous unions but was corrected by another participant, who explained that this was not the case and went on to elaborate that some disorders have a complex aetiology and are not associated with genetics alone: … meningitis does not come from internal marriage. Also, in UK there is many autistic children, maybe because of food they are eating or something around. [It] can be when women are pregnant. (Age 46).
Subtheme 2.3: Awareness and educated decision-making
Most participants agreed that marriage decisions ultimately belong with the couple and advocated for increased awareness of the risks associated with consanguineous unions to promote educated decision-making. One participant clearly articulated this view by stating: If there is higher genetic risk in cousin marriages then before getting married the two people could consider their genetics and so they can find out their genetic risk before getting married. (Age 35).
Another participant warned that whilst raising awareness and increasing community discussion is important, overemphasis of the risks may generate anxiety for consanguineous couples and families: Want to emphasise that it's low risk, because you don't want to scare all those that are already married within cousins but say the risk is there and just make sure they know. Also tell them about the genetic tests available if they like a cousin because not everyone knows. (Age 35).
Discussions reflected the belief that education on genetic risk is necessary to inform marriage choices. However, it was clear that other motivators for marriage could take precedence despite knowledge of the associated risks: Many people have their own purpose of cousin marriages and to fulfil the purpose they do not think about the long-term risks such as increased risk of genetic disease. (Age 35).
Theme 3: Preferences for information and recommended sources
In addition to variation in the perception of consanguineous unions and a desire for information to inform decision-making, preferences for the distribution of information about consanguineous unions was a key area of discussion.
Subtheme 3.1: Education with respect for health and social factors

Participants' discussions highlighted the need for education to promote informed decision-making. Participants were widely accepting of question 8 of the topic guide (Table 1), which enquired about the acceptability of educating children on genetic literacy. The benefits of education included raising awareness, encouraging open discussion and passing information on to future generations.
A course should be created to raise awareness about cousin marriage to inform people of the benefits and risks associated. This should be a life skill. (Age 61).
Participants emphasised the importance of a universal approach in education which is culturally sensitive and considerate of that of the social factors associated with consanguineous unions. This universal approach was highlighted by a participant who stated: Secondary school and college definitely it should be integrated with science not made a separate topic, so they don't feel targeted. (Age 35).
Subtheme 3.2: Accessible information

Participants identified the need for widespread dissemination of information about genetic risk through media readily accessible to the community, such as posters, advertisements and local media outlets. The use of printed media for information sharing, such as newspapers, posters and leaflets, was suggested for community spaces, religious centres and GP offices. Participants emphasised the value of an accessible approach, highlighting the need for local resources to be accessible to women and children in the community.
Local library is good. Family resource centre is really good … [you] can bring children there is small creche. It is important, majority of women cannot go to talk because of their children... The women would be learning for their children (Age 46).
Medical practitioners were also recognised for their role in disseminating information about health risks. However, participants acknowledged that medical guidance is often limited since discussions usually occur after a woman becomes pregnant, rather than during prenatal counselling. This highlights the need for a multi-faceted approach to information dissemination to ensure information is accessible.
Cousin marriage will only be raised when the wife is pregnant and at this stage it could be too late (Age 35).

Subtheme 3.3: Information from influencers

Many women noted that authority figures may influence marriage decisions within their families, indicating a need to engage with community leaders, health professionals and religious leaders. Participants also considered discussions between parents and children as an important source of influence and an opportunity for discussion, emphasising that education across multiple generations would be mutually reinforcing.
Parents with friendly children, the children can understand their parents. So, if you encourage parents with children, they can give the choice then to the children (Age 32).
Improving awareness in men was highlighted as an important area for improving community awareness. Participants felt that men had little engagement with the topic of consanguineous unions, even though they were important decision makers in marriage arrangements. Lack of engagement from men was considered a missed opportunity for increasing awareness, but one participant suggested this was changing due to education on the topic: Men have different types of discussions about cousin marriages and they do not go into such detail. Educated men have more awareness of this than uneducated men … (Age 61).
Emergent themes
Analysis of the findings generated three emergent themes on consanguineous unions and the associated health risks. Variation in perception is the central theme, influencing the subsequent themes of informed choice and preferences for information. The varying opinions on consanguineous unions amongst participants can be explained by their diverse backgrounds and experiences (see Table 2). The discussions also highlighted a gap in genetic literacy, which is reflected in our themes of variation and the importance of informed choice.
Participants made several recommendations for the dissemination of information, emphasising the need for a multi-faceted approach. Parents and men in the community were also identified as potential influencers for spreading information. It is evident that future efforts to reach communities with health information about consanguineous unions should involve identifying and engaging with influencers within those communities.
In light of participants' acceptance of children receiving information about genetic risk, an educational intervention for genetic literacy holds potential for success in a diverse community, such as Newham. However, such an intervention must be carefully co-developed with community members to account for variation in views and the myriad of factors which play into marriage decisions, whilst avoiding stigmatisation of the community. The information requested by participants centred around the genetic mechanisms of disease and which conditions may be genetic in nature, highlighting a desire for improved genetic literacy. However, this study highlights the importance of recognising that consanguineous unions do not just present a simple "health risk", but have wider social, economic and political dimensions in the complex context of marriage. Any service which is developed to address genetic literacy and enable informed choice must be respectful of the themes for variation in views and preferences for information, taking special care to avoid the stigma which was a concern for some participants.
Some participants felt consanguineous unions are overemphasised in their community, a finding which has been linked to alienation and stigma in previous studies (Ali et al. 2012;Ajaz et al. 2015). This suggests that efforts to improve genetic literacy should take a universal approach to avoid stigmatising a particular group and should be informed through community engagement. Further to this, it became clear that the term "consanguinity" was novel to some participants. Many participants were aware of the terms "cousin marriage" or "internal marriage", but few reported awareness of the term "consanguinity" prior to the discussion group. Awareness of the terms used within communities will be vital for future engagement efforts.
Findings in context
A large body of evidence supports an association between consanguineous unions and an increased risk of genetic disease (Bennett et al. 2002;Modell and Darr 2002;Strømme et al. 2009;Lyons et al. 2009;Shaw 2009;Shieh et al. 2012;Hamamy 2012;Darr et al. 2013;Ng 2016;Al-Mubarak 2017;Best et al. 2017;Sanyelbhaa et al. 2018). This association has become particularly concerning in the UK, with several studies focusing on consanguineous unions in Pakistani communities (Sanderson et al. 2006;Sheridan et al. 2013;Best et al. 2017). Some studies have focused on community perceptions of genetic risk (Ali et al. 2012;Ajaz et al. 2015;Darr 2016) and sought to inform interventions on how best to improve genetic literacy in consanguineous populations (Salway et al. 2016;Ali et al. 2018).
Previous research on perceptions of consanguineous unions support our findings that personal experiences may shape individual opinions on the risks associated with consanguineous unions, with some members of British Pakistani communities disputing the risks (Ajaz et al. 2015). The confusion around quantifying genetic risks in consanguineous unions is also a theme identified by several qualitative studies (Ajaz et al. 2015;Darr 2016). Our findings on services for genetic literacy also reflect those of similar research conducted in the UK and Netherlands, highlighting the risk of adding to perceptions to stigma in communities where consanguineous unions are common and reaffirming that health risks may not be the primary drivers in marriage decisions (Salway et al. 2016;Ali et al. 2018).
To our knowledge, this is the first study of women's perception of consanguineous unions in a London Borough. This study provides unique insight into perceptions of consanguineous unions and genetic risk and indicates the acceptability of educational interventions to improve genetic literacy in children. Findings highlight the potential for co-design to navigate the variations in opinion throughout the community whilst addressing the desire to seek knowledge for informed choice, expanding on the findings of similar research on health literacy in the UK (Ali et al. 2018). This highlights the importance of co-design for developing services, increasing community awareness and making services and information accessible.
Limitations
This study has several limitations, primarily due to biases in the self-selected study population. Our study sample was recruited by invitation from a community facilitator and is therefore likely to be composed of individuals who are interested in health outcomes in their community. These individuals may have a level of knowledge about consanguineous marriage which differs systematically from others in their community. This recruitment limitation is further illustrated by the fact that all participants were able to participate in the discussion groups in English, indicating that strong English speakers may have been more inclined to attend the discussion. The limited study population consisted of only women, with the majority being of Asian Pakistani background and 30 to 44 years of age (Table 2). Future qualitative research should aim to engage a wider demographic for data triangulation, including male participants and individuals of broader age ranges. Including men in future research should be a priority, since gender differences were highlighted in the present study and previously published literature (Buunk 2017).
The presentation at the start of the discussions was intended to start discussion but may have introduced some biases regarding awareness of genetic risk. A further limitation comes from the nature of group discussions, whereby strong opinions from outspoken participants may overshadow the responses of others. This has been mitigated by the use of trained facilitators. Although the facilitators were trained to avoid leading questions and affirmative responses, reporter bias cannot be ruled out as a potential limitation due to the reliance on assisted discussion. Furthermore, variable proficiency in English may have created barriers to discussion by some members, especially where conversation was fast paced or complex in detail. However, participants were aware of facilitators' ability to translate into various languages if needed.
Despite some debate over the suitability of group discussions for exploring sensitive topics such as consanguineous unions, group discussions have proven efficacy in research on sensitive topics, including family planning and reproductive health (Linhorst 2002;Van Teijlingen and Pitchforth 2006;Bowling 2014) and provide insight into community beliefs through their interactive nature (Gothberg et al. 2013). This methodology has proven success in understanding group perspectives, particularly around health issues, and can lead to improved candidness in responses when compared to individual interviews due to a perceived "safe space" and the ability to build on ideas through discussion (Bowling 2014). We also did not collect details on participants' marriage status or personal experience with consanguineous unions, which could have been useful in characterising the influence of personal experience on their opinions.
Conclusion
Overall, this study emphasises the need for awareness, educated decision-making and co-developing educational materials regarding consanguineous unions to support marginalised communities. Participants were widely receptive to and engaged by the subject matter presented for discussion and requested additional community engagement.
Conflict of interest Meghan A Cupp, Mary Adams, Michelle Heys, Monica Lakhanpaul, Emma Alexander, Yasmin Milner, Tausif Huq and Iram Shazia Mirza have no conflicts of interest to disclose. Meradin Peachey was Director of Public Health in the London Borough of Newham for the study period. Lakmini Shah is an elected Councillor in the London Borough of Newham. Logan Manikam is a public health consultant funded to undertake this project by the London Borough of Newham. Although the study was funded by the London Borough of Newham, the council was not involved in the conduct of the study and have not influenced the study in any way.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-07-18T14:22:06.279Z | 2019-07-16T00:00:00.000 | {
"year": 2019,
"sha1": "c06436c07a01028fedecf13336455c9752828ed6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12687-019-00429-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "af54bdee3f36acfe997ea1ed905ff3f2b821653b",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
258322828 | pes2o/s2orc | v3-fos-license | Algorithms and Techniques for the Structural Health Monitoring of Bridges: Systematic Literature Review
Structural health monitoring (SHM) systems are used to analyze the health of infrastructures such as bridges, using data from various types of sensors. While SHM systems consist of various stages, the feature extraction and pattern recognition steps are the most important. Consequently, signal processing techniques in the feature extraction stage and machine learning algorithms in the pattern recognition stage play an effective role in analyzing the health of bridges. At the same time, there exists a plethora of signal processing techniques and machine learning algorithms, and the selection of the appropriate technique/algorithm is guided by the limitations of each technique/algorithm. The selection also depends on the requirements of SHM in terms of damage identification level and operating conditions. This has provided the motivation to conduct a systematic literature review (SLR) of feature extraction techniques and pattern recognition algorithms for the structural health monitoring of bridges. The existing literature reviews describe the current trends in the field with different focus aspects. However, a systematic literature review that presents an in-depth comparative study of different applications of machine learning algorithms in the field of SHM of bridges does not exist. Furthermore, there is a lack of analytical studies that investigate the SHM systems in terms of several design considerations including feature extraction techniques, analytical approaches (classification/regression), operational functionality levels (diagnosis/prognosis) and system implementation techniques (data-driven/model-based). Consequently, this paper identifies 45 recent research practices (during 2016–2023) pertaining to feature extraction techniques and pattern recognition algorithms in SHM for bridges through an SLR process. First, the identified research studies are classified into three different categories: supervised learning algorithms, neural networks and a combination of both. Subsequently, an in-depth analysis of various machine learning algorithms is performed in each category. Moreover, an analysis of the selected research studies (total = 45) in terms of feature extraction techniques is made, and 25 different techniques are identified. Furthermore, this article also explores other design considerations like analytical approaches in the pattern recognition process, operational functionality and system implementation. It is expected that the outcomes of this research may facilitate the researchers and practitioners of the domain during the selection of appropriate feature extraction techniques, machine learning algorithms and other design considerations according to the SHM system requirements.
Introduction
The construction of cities, villages and related infrastructure is among the unavoidable necessities due to the acceleration of population growth in recent decades. To conduct social, welfare, political and economic activities, a city, nation, or region uses a variety of systems, equipment and services together referred to as infrastructure [1][2][3][4][5]. One of the relatively expensive infrastructures is a bridge, which deteriorates with time for a variety of reasons, including creep, corrosion and cyclic loads. However, they can last for hundreds of years. Machine learning algorithms of two broad types (supervised and neural network) are deployed for different SHM schemes for bridges. Each scheme requires a different analytical approach (classification, regression and clustering). For example, supervised learning algorithms are deployed for classification and regression problems, whereas unsupervised learning algorithms are deployed for clustering problems. Moreover, signal processing techniques are becoming popular in the feature extraction stage of the SHM process, especially with the advancement in the areas of the Internet-of-things (IoT) and big data. Consequently, a systematic study is required to classify and describe the existing design implementations. Furthermore, a systematic study can reveal the need for future studies to overcome the challenges of the existing systems. Table 1 summarizes the existing literature reviews on the deployment of machine learning algorithms in SHM of bridges [50][51][52][53][54][55][56]. It can be observed from Table 1 that existing reviews describe the current trends in the field with different focus aspects. However, a systematic literature review that presents an in-depth comparative study of different applications of machine learning algorithms in the field of SHM of bridges does not exist. Furthermore, there is a lack of analytical studies that investigate the SHM systems in terms of several design considerations including feature extraction techniques, analytical approaches (classification/regression), operational functionality levels (diagnosis/prognosis) and system implementation techniques (data-driven/model-based). Table 1 overviews the focus and the limitations of the existing literature reviews on the deployment of machine learning algorithms in SHM of bridges.
Contributions
In order to address the research gap identified in Section 1.2, we conducted an SLR to perform an in-depth analysis of different SHM applications in terms of various algorithms and techniques. Through an SLR approach, exploring the answers to the following formulated research questions constitutes the key contribution of this work: Research question 1: What important research has been reported from 2016 to 2023 where machine learning algorithms have been utilized in the pattern recognition process for SHM in bridges?
Research question 2: Which of the machine learning algorithms and the analytical approaches are more frequently utilized in the pattern recognition process for SHM in bridges during the 2016-2023 research?
Research question 3: Which of the signal processing techniques are more frequently utilized in the feature extraction process for SHM in bridges during the 2016-2023 research?
Research question 4: Which of the system implementation techniques and operational functionality approaches are more frequently utilized in the process of SHM in bridges during the 2016-2023 research?
Layout of the Research Approach
In order to find responses to the constructed questions (Q1-Q4), the layout/framework of the SLR approach used in this article is shown in Figure 1.
Using three scientific databases (i.e., IEEE, Springer, Elsevier), the research studies were carefully selected through some inclusion and exclusion rules. The details of the employed research methodology for the selection of research studies are provided in Section 2. The selected studies were classified into three types: supervised learning algorithms (16 research studies), neural network algorithms (25 research studies) and combined algorithms (4 research studies). To carry out an inclusive examination and synthesis of the selected studies (total = 45), the aforesaid types (supervised, neural network and combined) were further organized according to different mechanisms. Consequently, Section 3 analyzes the selected studies in terms of different design considerations including feature extraction techniques (25 techniques), analytical approaches (classification/regression), operational functionality (diagnosis/prognosis) and system implementation techniques (data-driven/model-based). While a brief overview of the synthesis results of machine learning algorithms is provided in Section 3, an in-depth exploration is provided in Section 4. It includes four algorithms in the supervised category including decision tree (DT), random forest (RF), support vector machines (SVM) and K-nearest neighbors. The neural network category includes artificial neural network (ANN) and convolutional neural network algorithms. Subsequently, responses against the constructed queries are discussed in Section 5. In Section 6, we discuss the results and the limitations of the research. Finally, the conclusions of this article are provided in Section 7.
Table 1. The existing literature reviews on the deployment of machine learning algorithms in SHM of bridges.
Ref. [50] (2022). Focus: Investigates the use of artificial intelligence to enhance the operation of data-driven SHM systems for bridges. Limitations: not systematic; limited to data-driven systems; feature extraction step not explored; analytical approaches not investigated.
Ref. [51] (2022). Focus: Investigates the feature extraction and pattern recognition processes of SHM systems for buildings and bridges. Limitations: not systematic; not focused on bridges; analytical approaches not investigated.
Ref. [52] (2022). Focus: Targets the study of vibration-based systems and machine learning techniques in bridges. Limitations: not systematic; limited to vibration-based systems; feature extraction step not explored; analytical approaches not explored.
Ref. [53] (2022). Focus: Explores the latest trends and limitations of the use of deep learning algorithms in SHM for bridges.
Research Methodology
The systematic literature review process described in [57] was used to carry out this research. It is a formal, replicable process for documenting pertinent details of a precise research area and for reviewing and investigating all existing research related to the research questions. Consequently, this research incorporates six stages: (1) categories definition, (2) review protocol development, (3) selection and rejection criterion, (4) search process, (5) quality assessment, (6) data extraction and synthesis.
Categories Definition
We defined three categories to organize the selected research studies. This categorization significantly improves the accuracy of the answers to our research questions. The details of the categories are given below.
Supervised Learning Algorithms
Supervised algorithms utilize a labeled dataset. These algorithms are used for classification and regression analytical processes. Consequently, the "supervised learning algorithms" category includes all those research articles in which one or more supervised learning algorithms (random forest, decision tree, support vector machine and K-nearest neighbor) are used for the pattern recognition process of the SHM system for bridges.
Neural Network Algorithms
Neural network algorithms such as the artificial neural network (ANN) and convolutional neural network (CNN) can perform either supervised or unsupervised learning [49]. Unsupervised learning utilizes unlabeled data for training and is used for clustering and dimensionality reduction in analytical processes. The research articles in which one or more neural network learning algorithms (ANN and CNN) are used for the pattern recognition process of the SHM system for bridges are included in the neural network learning algorithm category.
Combined Algorithms
The research articles in which a combination of supervised and neural network learning algorithms is used for the pattern recognition process are included in the combined category.
Review Protocol Development
Once the categories were defined, we developed a review protocol for our research on the basis of predefined SLR standards [57]. The developed protocol defines the selection and rejection criterion, search process, quality assessment, data extraction and synthesis of the extracted data. The details of the review protocol are given in subsequent sections.
Selection and Rejection Criterion
We defined a concrete criterion for the selection and rejection of research works. Six parameters were defined to ensure the correctness of the answers to our research questions. The research work is selected based on these parameters as given below: Subject relevant: Select the research work only if it is relevant to the research context. It must support the answers to our research questions and must be relevant to one of the three predefined categories (Section 2.1). Reject irrelevant research that does not belong to any of the three predefined categories.
2016-2023: Selected research work must be published from 2016 to 2023. Reject all research articles published before 2016 to ensure the inclusion of the latest research works.
Publisher: Selected research work must be published in one of the three renowned scientific databases, i.e., IEEE, SPRINGER and ELSEVIER.
Crucial effects: Selected research work must have crucial positive effects regarding the deployment of machine learning algorithms in the pattern recognition process of SHM for bridges.
Results-oriented: Selected research work must be results oriented. The proposal and ultimate outcomes of the research must be supported by solid facts and experimentation. Reject the research work if its proposal is verified through a weak validation method.
Repetition: All the research in a particular research context cannot be included. Consequently, studies that are identical in the given research context are rejected, and only one of them is selected.
Search Process
The selection and rejection criterion, presented in Section 2.2.1, shows that we selected three scientific databases (i.e., IEEE, ELSEVIER and SPRINGER) in order to carry out this SLR. These scientific databases contain high-impact journals and conference proceedings. Furthermore, we also explored related books and technical reports to enhance our knowledge. To accomplish the search process, we used different search terms like SHM, bridges, machine learning, etc. The search terms along with the results for each scientific database are summarized in Table 2.
We used the "2016-2023" filter to get research articles published during 2016-2023. The results obtained through the AND operator do not guarantee the relevance of our research context. Therefore, we also used the OR operator to obtain some potential search results required for our research. However, the OR operator provides too many search results to scan all. Consequently, we also used filters to refine the results such as "content type = article", subject area = Engineering.
The steps performed during the search process are depicted in Figure 2. It can be observed from Figure 2 that various search terms are specified in three scientific databases. For this review, we analyzed approximately 7832 search results. In the next step, 4861 research articles were discarded by reading their titles. We further discarded 1824 research articles by reading their abstracts. Subsequently, we performed a general study of 1147 articles by reading different relevant sections of each research. Based on our general study, we discarded 771 articles that did not meet the selection and rejection criterion. We selected the remaining 376 relevant research articles for a detailed study. We performed a detailed study of 376 research articles and discarded 331 research articles. Finally, we selected 45 research articles fully compliant with our pre-defined selection and rejection criteria.
Quality Assessment
We developed quality criteria to understand the important outcomes of selected research studies. The developed criteria also define the credibility of each selected research and its decisive findings: (1) The data appraisal of the research is based on concrete facts and theoretical perspectives without any vague statements. (2) The validation of research has been performed through proper validation methods (case study, etc.). (3) The research provides information about the implementation of the SHM systems. (4) As we intend to investigate the latest machine learning algorithms and trends, the objective is to include the most recent research as much as possible. Therefore, 78% of research articles are from 2020 to 2023. Moreover, 91% of the research articles included are from 2018 to 2023 as shown in Figure 3.
(5) The originality of the research is another important factor. Therefore, we only included articles that are published in at least one of the three renowned and globally accepted scientific databases, i.e., IEEE, SPRINGER and ELSEVIER. In addition, we excluded conference papers from our scope.
Data Extraction and Synthesis
Data extraction and synthesis, as shown in Table 3, were performed to get the answers to our research questions. For data extraction, defined from serial numbers 2 to 4, we extracted important details of each research to ensure its compliance with the selection and rejection criterion. For data synthesis, defined from serial numbers 5 to 9, we performed a detailed analysis of each research. For example, all selected articles were thoroughly studied and analyzed in order to assign them to the corresponding category. Similarly, each selected research was thoroughly analyzed to extract accurate information regarding operational functionality, system implementation, feature extraction, analytical approach and learning algorithm.
Results
This section first classifies the selected studies in terms of pattern recognition algorithms (Section 3.1). Another important aspect in the pattern recognition stage of SHM is the utilization of analytical approaches (Section 3.2). Subsequently, the feature extraction and the associated signal processing algorithms are discussed (Section 3.3). In addition to various steps in the SHM process (such as pattern recognition and feature extraction), there are two additional design parameters that are required to be analyzed. These design parameters are the operational functionality approach (Section 3.4) and the system implementation scheme (Section 3.5).
Pattern Recognition Algorithms
Pattern recognition is the last step of the SHM process in which the machine learning algorithms are deployed to automatically identify patterns and regularities in data. In this stage, the decision is taken on the state of the structure. The scope of this SLR is limited to the supervised learning algorithms (DT, RF, SVM, KNN) and the neural network learning algorithms (ANN and CNN) as they are the most used algorithms in the given research context. Table 4 shows the distribution of the selected studies into the predefined categories in Section 2.1. It can be seen from Table 4 that neural network learning algorithms are the most widely used. This is expected because the neural network learning algorithms can be deployed to perform supervised and unsupervised learning processes. An in-depth comparative study of various machine learning algorithms employed in the selected studies for each category is presented in Section 4 of this SLR.
Utilization of Analytical Approaches
Another important aspect of pattern recognition is the selection of analytical methods applied to recognized patterns. The analytical classes most used in SHM of bridges are classification and regression. Regression techniques are used to predict output numeric values based on the input characteristics found in the dataset. In contrast, the classification technique produces a class rather than a numeric value. It can be observed from Table 5 that the classification method is frequently employed in the process of SHM for bridges (38 studies out of 45). This is expected because the SHM process is a classification problem from the machine learning point of view to compare damaged and undamaged states of the structure. Only six studies deploy the regression approach, where the SHM system is applied to predict the behavior of the bridge under different environmental conditions. For example, Dunwen et al. [66] deploy the regression approach to predict the hydration heat of mass concrete to apply a temperature control method in advance to reduce the possibility of thermal cracks.
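To make the distinction between the two analytical approaches concrete, the sketch below (our illustration, not taken from any reviewed study) fits a classifier and a regressor from scikit-learn on the same synthetic feature matrix; the data, feature counts and labels are invented for the example.

```python
# Hypothetical illustration: the same vibration-feature matrix can feed either
# analytical approach. All values below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # 200 samples, 8 modal/spectral features
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)        # damaged (1) vs undamaged (0)
y_reg = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # e.g., a deflection amplitude

X_tr, X_te, yc_tr, yc_te, yr_tr, yr_te = train_test_split(
    X, y_class, y_reg, test_size=0.3, random_state=0)

clf = SVC().fit(X_tr, yc_tr)   # classification: discrete damage state
reg = SVR().fit(X_tr, yr_tr)   # regression: continuous response value
print("state accuracy:", clf.score(X_te, yc_te))
print("amplitude R^2:", reg.score(X_te, yr_te))
```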
Feature Extraction Techniques Utilization
As can be seen in Table 6, the most frequent signal processing techniques used in the selected studies in this SLR are the FRF, PCA, FFT and GMM methods. Gordan et al. [99] deploy the FRF technique to collect the first four natural frequencies from the experimental dataset to implement data-mining damage identification on a slab-on-girder bridge. Zhang et al. [86] deploy the PCA method to reduce the dimensionality of the calculated features in the dataset to a small number of features.
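As an illustration of how FFT and PCA can be chained in a feature extraction pipeline, the following sketch transforms synthetic acceleration records to the frequency domain and compresses the spectra to a few components. The sampling rate, mode frequencies and component count are assumptions, and NumPy/scikit-learn stand in for whatever tooling the reviewed studies actually used.

```python
# Sketch of an FFT + PCA feature extraction chain; the signals are synthetic.
import numpy as np
from sklearn.decomposition import PCA

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
# 50 acceleration records, each a noisy two-mode response
signals = np.array([
    np.sin(2 * np.pi * 3.2 * t) + 0.5 * np.sin(2 * np.pi * 11.7 * t)
    + 0.1 * rng.normal(size=t.size)
    for _ in range(50)
])

spectra = np.abs(np.fft.rfft(signals, axis=1))   # FFT: time -> frequency domain
pca = PCA(n_components=4)                        # PCA: reduce spectral dimension
features = pca.fit_transform(spectra)            # 50 x 4 damage-sensitive features
print(features.shape, pca.explained_variance_ratio_)
```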
Operational Functionality Investigations
As mentioned in Section 1, the damage identification process can be classified into five levels: identification, localization, classification, assessment and life-time prediction. The first two levels are used for the diagnosis process of the structural health of the bridge, whereas the latter three are considered prognosis levels to predict the impact of the damage on the structure.
It can be observed from Table 7 that the diagnosis process is frequently applied in the SHM for bridges (26 studies out of 45). For example, Bing et al. [68] propose a climbing robot to automatically collect impact echo (IE) signals from concrete bridges. These signals are then analyzed to detect the damage. Only 19 selected studies deploy the prognosis process in which the SHM process is deployed in order to classify, assess or predict the damage. For example, Ghiasi et al. [91] develop damage classification systems using vibration-based deep learning approaches. This system can classify different extents of cross-section losses due to corrosion damage. Yanez-Borjas et al. [96] develop a vibration-based methodology in which the autocorrelation of the vibration data is used to detect, locate and assess the corrosion damage in steel truss bridges.
System Implementation Investigations
From an implementation point of view, SHM systems can be model-based or data-driven [23]. It can be observed from Table 8 that data-driven is the more commonly used implementation method in the SHM for bridges (27 studies out of 45).
In the model-based category, the undamaged condition model of the structure is created using finite element analysis (FEA) [24]. Model computational complexity is the major limitation of model-based systems [25][26][27][28]. For example, Entezami et al. [61] validate the methodology of bridge condition assessment by a numerical concrete beam modeled with 4-node linear 2D elements with reduced integration. The simulation process was carried out using the ABAQUS Explicit finite element code. In contrast, data-driven techniques are very practical in handling ambiguity and unexpected cases [29]. Various mechanisms to detect and locate the damage in data-driven implementations are vibration-based anomaly detection, vision-based surface crack detection and sub-surface rebar detection. For example, Jie et al. [102] develop a bridge SHM system by monitoring different factors, which include strain, temperature, traffic flow and heavy vehicle number. The output of the framework is the health degree of the bridge, depending on the classification of the different monitoring factors.
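A minimal data-driven sketch, assuming a Mahalanobis-distance baseline model (our choice for illustration; the reviewed systems use a variety of mechanisms): statistics of the undamaged state are learned from measured data alone, and new observations are flagged when they deviate beyond a percentile threshold.

```python
# Data-driven anomaly flagging against an undamaged baseline; the data,
# feature dimension and 99th-percentile threshold are assumptions.
import numpy as np

rng = np.random.default_rng(2)
baseline = rng.normal(loc=0.0, scale=1.0, size=(500, 6))   # healthy-state features
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    # Squared Mahalanobis distance of one observation from the baseline mean
    d = x - mu
    return float(d @ cov_inv @ d)

threshold = np.percentile(
    [mahalanobis_sq(x) for x in baseline], 99)              # baseline cut-off

new_obs = rng.normal(loc=1.5, scale=1.0, size=6)            # possibly damaged state
print("anomaly" if mahalanobis_sq(new_obs) > threshold else "healthy")
```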
Machine Learning Algorithms Investigations
Section 3 provides the synthesis/classification results from the selected research studies in terms of pattern recognition algorithms, utilization of analytical approaches, feature extraction signal processing algorithms, operational functionality approach and the system implementation scheme. However, an in-depth comparative study of the machine learning algorithms employed in the pattern recognition process is one of the core objectives of this SLR. Therefore, this section provides a comprehensive analytical comparison of machine learning algorithms in terms of various performance attributes. A comparison is made for all three categories, as defined in Section 2.1. Firstly, Section 4.1 analyzes the studies in the supervised learning algorithms category, which includes RF, DT, SVM and KNN algorithms. Then, the neural network category is investigated in Section 4.2, which includes ANN and CNN algorithms. Lastly, the studies in the combined category are discussed in Section 4.3.
Supervised Learning Algorithms
In the first category (supervised learning algorithms), 16 studies are selected, which cover the following algorithms: DT, RF, SVM and KNN. The 16 selected studies for supervised learning algorithms are further categorized as: 2 studies for the RF category, 2 studies for the DT category, 7 studies for the SVM category, 3 studies for the KNN category and 2 studies for the mixed category in which three algorithms are applied in the same study (RF, SVM, KNN) [72,73].
Decision Trees (DT)
Decision trees are one of the most widely used algorithms in data mining. Decision tree-based predictive models can be applied to both classification and regression problems [103]. Moreover, decision trees can take many forms such as classification and regression trees (CART), Chi-squared automated interaction detection (CHAID), C4.5 and ID3 [104,105]. Table 9 presents an overview of selected research studies in which the DT algorithm has been used. The third column in Table 9 identifies the number of nodes that have been used in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the fourth column. The accuracy of the proposed system in targeted applications is expressed in the last column. As can be seen in Table 9, the decision tree algorithm has been deployed in many SHM applications that include damage detection, damage severity prediction and condition assessment of bridges. The accuracy of the algorithm in solving both classification (damage detection tool) and regression (decision-making tool) problems is satisfactory. Furthermore, it can manage numerical and categorical data with minimal data preparation. The main disadvantage of this algorithm is that it is sensitive to specific features in the dataset, and therefore small changes to the data result in significant changes to the tree.
For example, Entezami et al. [61] propose a novel SHM methodology to evaluate structural conditions and detect potential damage in civil structures. To examine system reliability, two types of damage-sensitive features were introduced. These are derived from the autoregressive (AR) and principal component analysis (PCA) models. The classification error percentages of the DT algorithm based on the AR and PCA coefficients are 24.37% and 37.5%, respectively.
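For readers who want to experiment, a minimal scikit-learn sketch of a decision-tree damage classifier is given below; the simulated "AR coefficients", the labels and the max_depth setting are assumptions for illustration and do not reproduce the system of Entezami et al.

```python
# Illustrative-only decision tree for a binary damage-detection task.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))               # e.g., AR-model coefficients per record
y = (X[:, 2] > 0.3).astype(int)              # 1 = damaged, 0 = undamaged (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
print("nodes in the fitted tree:", tree.tree_.node_count)  # cf. node counts in Table 9
```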
Random Forest (RF)
The random forest algorithm is a nonparametric tree-based ensemble technique. It was first proposed by Breiman [106]. Random forests employ a range of understandable decision tree models. Data from several decision tree models can be combined to produce more precise forecasts [107,108]. When splitting a node, the algorithm searches for the best feature among a random sample of features rather than among all attributes. Consequently, a larger range of possibilities and a better model can be achieved [109][110][111]. Table 10 presents a review of selected studies in which the RF algorithm was used. The third and the fourth columns in Table 10 identify the number of trees and the minimum leaf size that have been deployed in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the fifth column. The accuracy of the proposed system in targeted applications is expressed in the last column.
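Before turning to the applications summarized in Table 10, the following sketch shows how the two design parameters tabulated there (number of trees and minimum leaf size) appear in a scikit-learn random forest; the parameter values and the synthetic data are assumptions for illustration.

```python
# Random forest damage classifier parameterized along the Table 10 columns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 12))
y = (X[:, 0] - X[:, 5] > 0).astype(int)

forest = RandomForestClassifier(
    n_estimators=100,        # "number of trees" column (assumed value)
    min_samples_leaf=2,      # "minimum leaf size" column (assumed value)
    random_state=0,
)
scores = cross_val_score(forest, X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```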
As shown in Table 10, the random forest algorithm has been deployed in many SHM applications that include damage detection, condition assessment and early damage prediction of bridges. The operation of the RF algorithm is time-consuming and more complex than that of the decision tree algorithm, as it merges individual trees. In addition, the RF algorithm can manage large datasets efficiently, which, on the other hand, increases the number of calculations and the memory overhead. For instance, Dawei et al. [58] propose a non-destructive magnetic flux leakage detection system to precisely identify damage in cable wires. Modeling and simulation approaches were used to design the system. The maximum detection errors of the random forest algorithm in the vertical climbing mode for width and cross-sectional area loss were 0.64 mm and 0.46%, respectively, whereas the maximum detection errors in the spiral climbing mode were 0.21 mm and 0.1%, respectively. These results show that the spiral climbing mode provides higher classification accuracy.
Support Vector Machine (SVM)
One of the most efficient supervised machine learning algorithms is the support vector machine, which was proposed by Cortes and Vapnik [112]. One of the crucial factors in the success of the SVM process is the selection of the kernel function in different conditions. This function is responsible for transforming the dataset into the appropriate form [113]. Table 11 presents a review of selected studies in which the SVM algorithm was used. The third column in Table 11 identifies the type of kernel function that has been used in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the fourth column. The accuracy of the proposed system in targeted applications is expressed in the last column. As shown in Table 11, the SVM algorithm has been deployed in many SHM applications that include damage identification, localization, condition assessment and damage prediction of bridges. This algorithm works well on small datasets. The SVM algorithm is the most deployed supervised algorithm as it exhibits a relatively high accuracy in solving both classification (damage detection tool) and regression (early prediction tool) problems. The main disadvantage of the SVM algorithm is that the training time increases significantly with the dataset size. Yifu et al. [64] propose a data-driven SHM system based on Optimized AdaBoost-Linear SVM. The main objective of the system is to identify damage by means of vibration signals received from a vehicle passing over the bridge. In the implementation stage, the AdaBoost-SVM methodology increases the accuracy of the results by 5% to 16.7% compared to other algorithms such as SVM and RF.
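Since kernel choice is highlighted above as crucial, the sketch below performs a small cross-validated kernel/regularization search with scikit-learn; the kernel grid, the C values and the synthetic data are assumptions for illustration.

```python
# Hedged sketch of kernel selection for an SVM damage classifier.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(250, 8))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(int)   # nonlinear boundary

search = GridSearchCV(
    SVC(),
    {"kernel": ["linear", "rbf", "poly"], "C": [0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print("best kernel/C:", search.best_params_, "cv accuracy:", search.best_score_)
```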
K-Nearest Neighbors (KNN)
The K-nearest neighbors (KNN) algorithm is a supervised learning algorithm that is considered one of the simplest algorithms in terms of application complexity. The most important parameter that affects the operation of the KNN algorithm is the number of neighbors, which is highly influenced by the noise in the dataset. Furthermore, the size of the dataset needed for training the algorithm is directly proportional to the dimension of the feature under investigation, which, in turn, increases the computational cost of the system [114,115]. Table 12 presents a review of selected studies in which the KNN algorithm was used. The third column in Table 12 identifies the number of neighbors that has been used in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the fourth column. The accuracy of the proposed system in targeted applications is expressed in the last column. As can be seen in Table 12, the KNN algorithm has been deployed in many SHM applications that include damage identification, localization and condition assessment. This algorithm is used only to solve classification problems in the selected studies, with acceptable accuracy. The KNN algorithm does not work well with imbalanced data, and the prediction process runs quite slowly when working with large datasets. For example, Sarmadi et al. [69] propose a novel anomaly detection system for SHM under variable environmental conditions. The system deploys a mechanism called adaptive Mahalanobis-squared distance and one-class KNN (AMSD-kNN). The main objective of the system is to identify the appropriate nearest neighbors for training and testing datasets in order to eliminate the effect of the variable environmental conditions. In the implementation stage, the total error is 0.25% for 90% of the learning sample.
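The sensitivity of KNN to the number of neighbors, noted above, can be seen in a few lines of scikit-learn; the values of k and the synthetic data below are assumptions for illustration, echoing the "number of neighbors" column of Table 12 rather than any specific study.

```python
# Minimal KNN sketch sweeping the neighbor count k.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 6))
y = (np.linalg.norm(X[:, :2], axis=1) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for k in (1, 5, 15):                        # noise sensitivity decreases as k grows
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(f"k={k}: accuracy={knn.score(X_te, y_te):.3f}")
```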
Svendsen et al. [72] introduce a data-based SHM approach for damage identification in steel bridges. Two groups of sensors were utilized for acquiring both the local and global responses of the considered bridge. Four supervised machine learning algorithms including the k-nearest neighbors (kNN), the support vector machine (SVM), the random forest (RF) and the Gaussian naïve Bayes (NB) algorithms were applied to assess the capabilities of these algorithms to identify and classify structural damage. The system accuracy was satisfactory for SHM applications, with a combined type 1 and type 2 error of 19.7%.
Wang et al. [73] present an effective label ranking method for bridge condition assessment (LR-BCA). The mechanism of the system operation investigates the natural order relationship between bridge condition ratings. Furthermore, the feature extraction process is accomplished by means of a heuristic data cleaning (HDC) technique. The HDC method was applied for cleaning the bridge condition dataset by recognizing all the label conflict samples, then iteratively filtering out the noise. Then, three supervised machine learning algorithms including RF, SVM and KNN were deployed to evaluate the effectiveness of HDC in LR tasks. The proposed LR-BCA approach attains an accuracy of 99% in predicting different bridge condition ratings.
Neural Network Learning Algorithms
In the second category (neural network algorithms), 25 studies were selected, which cover the ANN and CNN algorithms. The 25 selected studies for neural network learning algorithms are further categorized as: 12 studies for the ANN category and 13 studies for the CNN category.
Artificial Neural Network (ANN)
The operation of the artificial neural network (ANN) depends on decomposing the input into several levels of abstraction. This network can be trained using datasets to recognize patterns in images or sounds. The optimal behavior of the network can be achieved through the pattern of connections between the different components in the network and the weights of those components. An automated process is normally utilized to modify the components' weights during the training process [116,117]. Table 13 presents a review of selected studies in which the ANN algorithm was used. Columns 3, 4, 5 and 6 in Table 13 identify the number of neurons, the number of hidden layers, the training algorithm and the activation function that have been used in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the seventh column. The accuracy of the proposed system in targeted applications is expressed in the last column.
As can be seen in Table 13, the ANN algorithm has been deployed in many SHM applications that include damage identification, localization, condition assessment and damage prediction. The ANN algorithm is one of the most deployed neural network algorithms as it exhibits a relatively high accuracy in solving both classification (damage detection tool) and regression (early prediction tool) problems. The ANN algorithm requires a large dataset during training, which in turn increases the computational cost of the proposed system. For instance, Hooman et al. [78] propose an ANN-based two-level damage identification approach for damage localization and severity estimation in steel girder bridges. The reliability of the system was examined by means of a finite element (FE) model of the I-40 Bridge. In the validation stage of the system, the accuracy of the results was sufficiently high, with a maximum modal strain energy-based damage index error of 1.2%.
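As a minimal counterpart to the design columns of Table 13 (neurons, hidden layers, training algorithm, activation function), the sketch below instantiates a small multilayer perceptron in scikit-learn; every concrete value is an assumption for illustration.

```python
# Small ANN (multilayer perceptron) parameterized along the Table 13 columns.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 10))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = MLPClassifier(
    hidden_layer_sizes=(32, 16),   # two hidden layers of 32 and 16 neurons
    activation="relu",             # "activation function" column
    solver="adam",                 # "training algorithm" column
    max_iter=2000,
    random_state=0,
).fit(X_tr, y_tr)
print("test accuracy:", ann.score(X_te, y_te))
```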
Convolutional Neural Network (CNN)
The convolutional neural network (CNN) is a feed-forward neural network, which is the most commonly used deep learning approach in the area of image and object classification. A CNN consists of many layers including a convolutional layer, pooling layer, ReLU correction layer and fully connected layers. It is widely used for 2D image classification [118]. XAI (eXplainable Artificial Intelligence) techniques can be used to interpret the obtained results from the CNN algorithm in a form that is humanly explainable and directly implementable in new tools for bridge inspections. This facilitates the observation of the activation zones and nearly perfectly highlights the type of specific defect in a given image [119]. Table 14 presents a review of selected studies in which the CNN algorithm was used. Columns 3, 4 and 5 in Table 14 identify the convolution layer size, pooling layer size and the activation function that have been used in the development of the algorithm. The dataset size that has been used in the training and testing process is presented in the sixth column. The accuracy of the proposed system in targeted applications is expressed in the last column. As shown in Table 14, the CNN algorithm has been deployed in many SHM applications that include crack detection, anomaly detection, damage localization and crowd estimation. The CNN algorithm is one of the most deployed deep learning algorithms as it is computationally efficient with large datasets and exhibits a relatively high accuracy in solving both classification (crack detection tool) and regression (crowd estimation tool) problems. For example, Dinh et al. [98] develop an automated rebar localization and identification approach. The proposed system integrates conventional image processing methods and convolutional neural networks (CNN). In the validation stage, the overall damage identification accuracy of the system was found to be 99.60% ± 0.85%.
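To make the layer vocabulary above concrete (convolutional, pooling, ReLU and fully connected layers), the following PyTorch sketch defines a toy CNN for, e.g., 64x64 grayscale crack-image patches; the architecture, input size and class count are assumptions for illustration and do not reproduce any reviewed system.

```python
# Toy CNN showing the layer types named in the text.
import torch
import torch.nn as nn

class CrackCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # ReLU correction layer
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)                  # 64x64 -> 16x16 after two poolings
        return self.classifier(torch.flatten(x, start_dim=1))

model = CrackCNN()
logits = model(torch.randn(4, 1, 64, 64))     # batch of 4 grayscale 64x64 patches
print(logits.shape)                           # torch.Size([4, 2])
```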
Combined Category
In the third category (combined algorithms), four studies have been selected. Two of them combine the CNN and SVM algorithms [101,102]. Similarly, the work in [99] combines the ANN, SVM and DT algorithms. Finally, the work in [100] combines the CNN and KNN algorithms. The review of these studies is covered in Sections 4.1 and 4.2.
The proposed systems in the combined category utilize different machine learning algorithms for different purposes. For example, the ANN, SVM and DT algorithms have been used in [99] as a tool of comparison to validate the performance of the proposed model, which is the Cross Industry Standard Process for Data Mining (CRISP-DM) model for damage severity assessment. On the other hand, the KNN algorithm has been used as a feature extraction technique in [100] to facilitate the pattern recognition process executed by the CNN algorithm. Similarly, the CNN algorithm has been used as a feature extraction technique in [102] to facilitate the pattern recognition process executed by the SVM algorithm. The proposed system in [101] deployed both SVM and CNN algorithms to solve a classification problem (crowd attribute classification for motion speed and load designation) and a regression problem (load estimation). This integration of machine learning algorithms increases the efficiency of the proposed SHM systems in damage detection.
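The CNN-as-feature-extractor plus SVM-as-classifier pattern described above can be sketched as follows; the untrained CNN, the image sizes and the labels are placeholders for illustration and do not reproduce the systems in [101,102].

```python
# Combined pipeline sketch: a CNN front-end produces feature vectors,
# an SVM back-end performs the pattern recognition. Everything is synthetic.
import torch
import torch.nn as nn
from sklearn.svm import SVC

extractor = nn.Sequential(                    # CNN front-end -> feature vectors
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),
)

images = torch.randn(100, 1, 32, 32)          # synthetic sensor "images"
labels = (images.mean(dim=(1, 2, 3)) > 0).long().numpy()

with torch.no_grad():
    feats = extractor(images).numpy()         # 100 x (8*8*8) feature matrix

svm = SVC(kernel="rbf").fit(feats, labels)    # SVM back-end for classification
print("training accuracy:", svm.score(feats, labels))
```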
Parisi et al. [100] propose a damage identification approach for steel truss railway bridges deploying machine learning classification algorithms. The proposed method eliminates the need for separate hand-crafted feature extraction in the analysis of the strain sensor signal. The proposed system integrates the K-nearest neighbors and the convolutional neural network algorithms for feature extraction and classification purposes, respectively. The system accuracy for damage detection is 93%.
Samir et al. [101] deploy a module for simultaneous crowd and structural monitoring, which implements the integration of sensing technologies (Fiber Bragg Grating (FBG) and Fiber Optic Sensors (FOSs)) with wearable sensing devices incorporating Inertial Measurement Units (IMUs). The proposed system integrates CNN and SVM for the pattern recognition stage. The accuracy of the system is sufficiently high, with peak testing accuracy for single-class motion speed classification at 98%, multi-class motion speed and load characterization classification at 91%, and percentage error for load estimation regression reaching a minimum of 9%.
Research question 1: What important research has been reported from 2016 to 2023 where machine learning algorithms have been utilized in the pattern recognition process for SHM in bridges?
Answer: Forty-five research studies fully compliant with the selection and rejection criteria have been identified and classified as follows:
• Sixteen research studies have been recognized in the supervised learning algorithms category (Section 4.1).
• Twenty-five research studies have been recognized in the neural network algorithms category (Section 4.2).
• Four research studies have been recognized in the combined algorithms category (Section 4.3).
Research question 2:
Which of the machine learning algorithms and the analytical approaches are more frequently utilized in the pattern recognition process for SHM in bridges during the 2016-2023 research?
Answer: On the basis of this SLR, the accuracy of the neural network learning algorithms including the ANN and the CNN techniques in anomaly and crack detection is above 80%. Furthermore, one of the most important features of neural networks is that they can be deployed in supervised (classification, regression) and unsupervised (clustering) learning contexts. Consequently, they are the most used algorithms in the pattern recognition process for SHM in bridges. Further details are available in Tables 13 and 14. Furthermore, from the selected studies, it can be seen that the classification method is the most commonly used technique applied in the process of SHM for bridges (38 studies out of 45), as shown in Table 5. This is expected because the SHM process is a classification problem from the machine learning point of view to compare damaged and undamaged states of the structure. Only six studies deploy the regression approach, in which the SHM system is applied in order to predict the behavior of the bridge under different environmental conditions.
Research question 3: Which of the signal processing techniques are more frequently utilized in the feature extraction process for SHM in bridges during the 2016-2023 research?
Answer: On the basis of this SLR, the signal processing techniques most deployed in the feature extraction process for SHM in bridges are the FRF, PCA and FFT methods. These methods enhance the system's efficiency by reducing the processing time required for damage recognition. Further details are available in Table 6. There are other approaches that have been deployed in bridge SHM applications such as the wavelet transform (discrete and continuous methods), Kalman filter-based techniques and the autocorrelation method.
Research question 4: Which of the system implementation techniques and operational functionality approaches are more frequently utilized in the process of SHM in bridges during the 2016-2023 research?
Answer: From the selected studies, it can be seen that diagnosis is the most frequently used operational functionality in the SHM for bridges (26 studies out of 45), as shown in Table 7. Only 19 selected studies deploy the prognosis process, in which the SHM process is deployed in order to classify, assess or predict the damage in the bridge structure. In addition, from the selected studies, it can be seen that data-driven is the most commonly applied implementation style in the SHM for bridges (27 studies out of 45), as shown in Table 8. These studies deploy different mechanisms to detect and locate the damage such as vibration-based anomaly detection, vision-based surface crack detection and sub-surface rebar detection. The model-based implementation was applied in 18 selected studies.
Discussion and Limitations
Discussion on learning algorithms for pattern recognition process: In this SLR, we focused on the applications of machine learning algorithms in the pattern recognition process of SHM systems for bridges. In addition, we restricted our focus to the supervised and neural network learning algorithms as unsupervised learning algorithms are less likely to be used and their applications are limited to clustering problems only. The supervised learning algorithms, discussed in this SLR, include RF, DT, SVM and KNN. It is important to note that there are some other supervised learning algorithms such as Bayesian [120][121][122][123] and ensemble methods [124] that have been used in SHM systems. However, they have not been frequently deployed in the pattern recognition process of SHM systems for bridges. Consequently, the scope of this SLR focuses only on the most widely used learning algorithms including RF, DT, SVM and KNN. In addition, the neural network algorithms (ANN and CNN) are presented in this SLR. From the results of this SLR, it can be argued that the ANN and CNN (55% of the selected studies) are the most widely deployed algorithms due to their wide range of functionality in different SHM applications such as crack detection and anomaly detection. Furthermore, the accuracy of neural network algorithms in damage detection is considerably high as compared to other algorithms as shown in Tables 13 and 14.
Discussion on analytical approaches: The supervised machine learning algorithms are capable of two main analytical approaches: classification and regression. The classification approach can be used in SHM systems to cover the first four levels of damage identification including identification, localization, classification and assessment. On the other hand, the regression approach is limited to prediction studies only. It can be argued that classification is the analytical approach most frequently implemented in the SHM system for bridges, as can be seen in Table 5, which is expected because the SHM process is a classification problem from the machine learning point of view to compare between damaged and undamaged states of the structure. In the selected studies for this SLR, only 15.5% of the selected studies deploy regression for prediction operations [59,60,65,66,76,82,101]. For example, Xiao-Wei et al. [59] propose a data-driven approach to predict the vibration amplitudes of girders and towers for early warning SHM. The regression module of the RF learning algorithm was used in the prediction process. A cable-stayed bridge with a main span of 1088 m is taken as the case study. The prediction accuracy of the proposed system according to the validation process is 92.5%.
Discussion on feature extraction techniques: The feature extraction process is an important stage in the SHM procedure. Here, the datasets from different sensors are processed to extract the relevant features from the design perspective. In other words, feature extraction is implemented by signal processing techniques such as wavelet-based techniques, frequency-response functions (FRF), soft computing-based techniques and Kalman filter-based techniques. As can be seen in Table 6, 38% of the selected studies deploy the FRF, PCA, FFT and GMM methods for the feature extraction process. Other techniques identified in this SLR are particle swarm optimization, cuckoo search and heuristic data cleaning. Tran et al. [74] develop a novel approach to detect structural deterioration. The system integrates the ANN algorithm and the evolutionary cuckoo search (CS) method. In the validation process, two numerical models were used to assess the reliability of the system (a steel beam calibrated using experimental measurements and a large-scale truss bridge). The proposed technique exhibited high accuracy for damage identification (location and severity), with a learning coefficient (R) higher than 0.99 in all test cases. In addition, the system significantly reduced the computational time.
Discussion on operational functionality: The damage identification process is classified into five levels including identification, localization, classification, assessment and lifetime prediction. The first two levels are used for the diagnosis process of the structural health of the bridge, whereas the latter three are considered prognosis levels to predict the impact of the damage on the lifetime of the structure. As can be seen in Table 7, 58% of the selected studies for this SLR deploy diagnosis operations, while 42% of the selected studies perform prognosis methods for SHM for bridges. Thanh Cuong et al. [63] develop an SHM system that integrates the particle swarm optimization method and the support vector machine algorithm (PSO-SVM) for structural damage identification, location and severity. The damage location classification accuracy is enhanced by the effective searching capability of PSO, which eliminates the redundant input parameters. The proposed system attains a root mean square error (RMSE) of 0.0461 and a correlation coefficient (R²) of 0.957.
Discussion on system implementation: The physical implementation of the SHM system for bridges can be classified into two main categories: data-driven and model-based approaches. The operation of model-based systems has many computational limitations; therefore, data-driven systems that incorporate different sensing capabilities have recently been deployed more often in SHM systems for bridges. As can be seen in Table 8, 60% of the selected studies for this SLR deployed the data-driven approach, whereas the model-based approach was used in only 40%. For instance, Rageh et al. [82] propose an SHM system for automated damage identification (location and intensity) by means of a continuous stream of data. The system deployed a number of strain sensors on the structure of the steel truss bridge. The focus of the study was to detect the stringer-floor beam connection deterioration. An integration of the ANN algorithm and the proper orthogonal modes (POMs) approach is presented in the study. The system showed improved accuracy in damage detection for damage intensities higher than 40%.
Limitations of research: Even though, we have strictly followed the guidelines of SLR presented by Kitchenham [57] and completely observed the developed review protocol, there are some minor limitations:
•
We utilized the relevant search terms and systematically scanned the search outcomes.
Nevertheless, a few search terms obtained thousands of outcomes that we could not scan comprehensively. In addition, we excluded several studies on the basis of their titles in accordance with the search process. Therefore, there is a possibility that the scope of the article is not appropriately clear in the title. Subsequently, we do not claim the comprehensiveness of our research in this SLR.
• We used three prestigious scientific databases, i.e., IEEE, Elsevier and Springer, which contain a huge number of journal and conference publications. Nevertheless, other databases also host many relevant publications, so there is a fair possibility that we have missed recent research indexed elsewhere. However, we firmly believe that the final results of this SLR are not considerably affected, because high-quality recent research is well represented in the selected databases.
Conclusions
This research presents the applications of machine learning algorithms used for the pattern recognition process in SHM for bridges. To accomplish this objective, an SLR has been performed, identifying 45 research articles. On the basis of the machine learning techniques used, the selected studies are classified into three categories, and six learning algorithms are discussed. A comprehensive analysis is then performed on the identified algorithms, considering various important parameters according to the type of algorithm. Furthermore, the selected studies have been analyzed in terms of different design considerations, including feature selection approaches, operational functionality, analytical approaches and system implementation. Thus, this SLR presents and analyzes the latest applications of machine learning algorithms to pattern recognition for SHM in bridges, an overview that, to the best of our knowledge, is rarely available. This will facilitate the selection of an appropriate machine learning algorithm according to the SHM system requirements.
On the basis of this SLR, it can be claimed that the classification method and neural network learning algorithms, including the ANN and CNN, are the techniques and algorithms most used in the pattern recognition process for SHM in bridges in recent studies. Furthermore, the signal processing techniques most frequently deployed in the feature extraction process for SHM in bridges are the FRF, PCA and FFT methods. In addition, diagnostic techniques and the data-driven approach are, respectively, the most widely used operational functionality and system implementation for SHM in bridges.
The learning algorithms and signal processing techniques considered in this SLR exhibit limitations, including computational complexity, memory requirements and time consumption, in the pattern recognition and feature extraction processes. Accordingly, further investigation of novel combined algorithms and techniques to overcome these limitations should be considered in future research. Furthermore, most recent studies in the field focus only on the damage diagnosis levels, i.e., damage identification and localization. Consequently, future research should further investigate the implementation of the prognosis levels, in order to classify and assess damage and predict the lifetime of the bridge. This will increase the efficiency of SHM systems in damage detection and extend the lifetime of bridges. | 2023-04-26T15:04:49.145Z | 2023-04-24T00:00:00.000 | {
"year": 2023,
"sha1": "7f0dec60abe8933513c4826f22a38b74ede7e658",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/9/4230/pdf?version=1682315591",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a51f7e4f54524a007e90db0617cecb07d18e21b3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3659494 | pes2o/s2orc | v3-fos-license | Computational design of environmental sensors for the potent opioid fentanyl
We describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment.
Introduction
Fentanyl is a potent agonist of the μ-opioid receptor (MOR), with an affinity of approximately 1 nM and a potency 100 times that of morphine (Volpe et al., 2011). It is used both pre- and post-operatively as a pain management agent. The fast-acting nature and strength of fentanyl have been attributed to its high degree of lipophilicity (Peckham and Traynor, 2006). Fentanyl has become a widespread drug of abuse and has played a central role in the growing opioid epidemic. Reports of illegal manufacturing and fentanyl-related deaths across the country and other parts of the world have increased significantly in recent years (Drug Enforcement Administration, 2017).
Custom-designed ligand-binding proteins offer the possibility of both detecting and counteracting toxins such as fentanyl. Antibodies raised against small molecules generally require mammalian expression systems and conjugation of the compound (hapten) to an immunogenic carrier protein. In addition, elicitation of antibodies by immunization does not provide control over the interactions that the protein makes with the ligand. In contrast, computationally designed proteins can be readily expressed in bacterial and other low-cost expression systems, and specific interactions can be directly programmed. However, computational design of precise protein-ligand interactions for flexible, predominantly hydrophobic compounds is challenging. As these molecules are in some sense 'featureless', owing to their overall hydrophobic character, binding depends heavily on the protein-ligand shape complementarity. We previously reported a method for generating binders for relatively rigid molecules containing hydrogen bonding functional groups, where the focus was on solutions with optimal hydrogen bonding geometry (Tinberg et al., 2013). However, this approach is not well suited for flexible, nonpolar compounds such as fentanyl.
Results
We pursued a two-step approach to designing fentanyl binders (Figure 1-figure supplement 1). Fentanyl contains 6 rotatable bonds, which increases the combinatorial complexity of possible protein-ligand interactions to be considered. Starting from the structure of a fentanyl-citrate toluene solvate (Peeters et al., 1979), we generated 11 conformers plus an additional hydrated model of fentanyl, based on the small-molecule structure, with non-covalently bound waters at both the tertiary amine (3 Å nitrogen-to-water distance, 109° carbon-nitrogen-water angle) and the carbonyl oxygen (3 Å oxygen-to-water distance, 120° carbon-oxygen-water angle) (Figure 1a). For each fentanyl conformer, we identified a large number of shape-complementary placements of fentanyl within protein scaffolds from the MOAD database (Hu et al., 2005) using the fast docking algorithm PatchDock, which identifies shape-complementary interactions between binding partners (Duhovny et al., 2002). Multipose binding has been observed in many naturally occurring protein-ligand complexes (Kulp et al., 2012; Blum et al., 2011; Barelier et al., 2015), but for our approach we sought precise control of the fentanyl pose by considering only a single conformer per protein scaffold.
In the second design step, we selected the top 20 scoring docks from PatchDock for each scaffold and optimized the identities and rotamer conformations of amino acids within 8 Å of fentanyl for shape complementarity and specific protein-ligand interactions. Similar to other MOR agonists, fentanyl possesses a charged tertiary amine, one of only two sites capable of making electrostatic interactions. We sought to exploit the tertiary amine to confer directionality and allow atomic-level control over the placement of the otherwise hydrophobic molecule. Two design strategies were pursued: (1) the introduction of specific side chain-fentanyl interactions, either acidic (Asp or Glu) or cation-pi (Phe, Tyr, Trp), with the tertiary amine; and (2) the use of the hydrated fentanyl, as described above, for bridging indirect fentanyl-protein interactions. Designs were filtered based on shape complementarity, protein-fentanyl interface energy and solvent-accessible surface area (SASA), and 62 were selected for experimental characterization.
eLife digest Many small molecules, including toxins and some medicines, have flexible structures, which makes it difficult to detect and/or neutralize them. The pain medication fentanyl, for example, can rotate to adopt many shapes. In recent years, fentanyl drug abuse has become increasingly common, and the drug is often illegally produced. The number of deaths caused by fentanyl has risen greatly, which provides a strong reason to find new ways to detect this and other drugs. Now, Baker et al. have created new sensors that are able to detect fentanyl. First, the 11 most likely shapes that fentanyl could adopt were identified based on known information about the structure of the molecule. Then, a computer program was used to design proteins that were predicted to strongly bind to these most common shapes. Next, genes that coded for these proteins were synthesized in the laboratory and introduced into bacteria, which read the genes to build the proteins.
Similar to a well-fitted lock and key, the shape of the newly designed protein had to complement a likely shape of the fentanyl molecule. Baker et al. used a technique called X-ray crystallography to visualize the proteins in atomic detail and confirm that these fentanyl-binders matched their corresponding computational models. Those proteins that bound fentanyl best were then engineered into plant cells, and later into whole plants, together with reporter systems that gave signals when the sensors detected fentanyl.
In future, these specifically synthesized proteins could be integrated into entire panels of plants or other systems to detect toxins and other harmful chemicals. Such systems would be of interest in a medical setting and for detecting environmental contamination.

Figure 1 (caption): (a) The sugar bound to the native scaffold (2QZ3), xylotetraose (left), is much more polar than fentanyl (middle), a predominantly hydrophobic compound. The right panel shows the 11 non-solvated fentanyl conformers used in design. (b) Cartoon representation of the 2QZ3 crystal structure (left) and the Fen49 computational model (right). Amino acid side chains colored grey represent the computationally introduced mutations in Fen49, and their native counterparts in 2QZ3. Designed fentanyl-associated waters are also shown.

The designs were expressed using yeast surface display and probed for binding with a bovine serum albumin-fentanyl (Fen-BSA) conjugate. Sixty-one of the 62 designs expressed well, and three bound fentanyl with low-micromolar to high-nanomolar affinities. Fen49, the strongest binder on yeast (500 nM affinity for Fen-BSA) (Figure 2a), and Fen21 (10 μM) were chosen for further experimental characterization, as they represent two different scaffold classes. Of these two designs, recombinantly expressed Fen49 proved to be more stable and amenable to crystallization (see below).
Following the placement of the hydrated fentanyl into the binding pocket via PatchDock, RosettaDesign introduced 9 mutations to the Fen49 scaffold to optimize the protein-ligand interactions (Figure 1b). Yeast-binding experiments on individual Fen49 point mutants corresponding to the computationally substituted positions showed that most are crucial for recognizing fentanyl (Figure 2b). Fentanyl does not bind the unmodified Fen49 scaffold (Figure 2a), a glycoside hydrolase (PDB 2QZ3). Purified Fen49 displayed an affinity of 6.9 μM for a fentanyl-Alexa-488 conjugate by fluorescence polarization (Figure 2c). We chose to conjugate the Alexa-488 fluorophore at the 4-phenyl position, as this site is compatible with the designed binding mode and is also the site of fentanyl conjugation to the BSA probe used in our initial yeast display experiments (see Materials and Methods). 2QZ3 was cocrystallized with xylotetraose (only 3 of the 4 xylose molecules were placed in the final 2QZ3 model), a sugar molecule with a high degree of polarity compared with fentanyl (Figure 1b) (Vandermarliere et al., 2008). Such a dramatic repurposing of a sugar-binding protein is possible because the initial low-resolution docking step is agnostic to the polar character of the scaffold-binding cavity, as shape complementarity is the primary focus.
We solved an atomic resolution (1.00 Å) X-ray crystal structure of Fen49 in the apo state, one of the first examples of an original (non-optimized) computational design that has been structurally characterized (Figure 3a). The structure reveals a highly preorganized binding cavity (28 of 30 non-alanine/non-glycine side chains within ~8 Å of fentanyl adopt the designed rotamer) and an overall structure in very close agreement with the design model; the r.m.s.d. of the design model to the parent structure is 0.26 Å over 184 of 185 residues (TM_align (Zhang and Skolnick, 2005) score of 0.99). The Fen49 apo crystals were obtained from a condition containing 25% polyethylene glycol (PEG) 3350 as the precipitant. During model building, a well-ordered portion of PEG was observed in the binding cavity (Figure 3-figure supplement 1). Soaking experiments with fentanyl tended to crack the crystals and destroy X-ray diffraction, likely as a result of PEG being displaced from the binding cavity. This, coupled with a lack of alternate crystal forms, prevented us from obtaining a structure of the parent Fen49-fentanyl complex.
To obtain a detailed map of the sequence determinants of folding and binding, we carried out site-saturation mutagenesis (SSM) on 184 of the 185 Fen49 residues, with the exception of the start methionine. At each position, each of the 20 amino acids was allowed, resulting in 3680 unique, single-mutant sequences (184 × 20 = 3680). Next-generation sequencing (millions of sequence reads) was carried out after each of 4 rounds of affinity enrichment (Figure 2-figure supplement 1). The majority of the binding site residues were preserved during selection, suggesting that Fen49 was designed with a near-optimal binding cavity (Figure 2b). Exceptions to this were three alanine residues, A67, A78 and A172, at the base of the binding pocket, which were frequently substituted with larger hydrophobic residues that provide additional packing for fentanyl. Two positions above the binding cavity enriched to amino acids that could reduce steric hindrance (Arg112 to smaller aliphatic amino acids) or function as a hydrophobic lid over the binding site (Pro116 to larger side chains). Charged amino acids, which might be expected to destabilize the hydrophobic cavity of Fen49, were disfavored during selection. However, a modest enrichment for glutamate at position 37 was observed in the second round of selection, suggesting an E37-tertiary amine salt bridge.

From the SSM experiments, we identified a Fen49 Y88A point mutant, termed Fen49*, that proved to be more suitable for complex structure determination (Figure 3 and Figure 3-figure supplement 2). The 1.79 Å Fen49*-apo structure again revealed a highly preorganized binding site and an overall structure in close agreement with the Fen49 design (r.m.s.d. of 0.72 Å for Fen49* compared with the design model over 184 of 185 residues; TM_align score of 0.98). The majority of Fen49* side chains adopt the designed conformations (25 of 30 non-alanine/non-glycine residues within ~8 Å of fentanyl are correct) and the structure shows minimal backbone rearrangements. The only significant deviation from the parent Fen49 is in the loop region Thr87-Thr93, which contains the Y88A substitution (Figure 3).

Unlike the parent Fen49, Fen49*-apo produced crystals with an empty binding cavity that proved to be useful for soaking experiments. We solved a 1.67 Å Fen49*-fentanyl complex structure, which exhibits a high degree of similarity both with the designed model (r.m.s.d. of 0.64 Å over 184 of 185 residues; TM_align score of 0.99) and with the Fen49*-apo structure (r.m.s.d. of 0.42 Å over all 185 residues; TM_align score of 0.99). The Thr87-Thr93 loop adopts the same structure found in Fen49*-apo. With the exception of Trp63, which is flipped nearly 180° in the complex, fentanyl does not induce any significant changes to the active site upon binding (Figure 3-figure supplement 5). Fentanyl appears to stabilize the binding site; in Fen49*-apo, Trp63 and the Thr87-Thr93 loop exhibit higher than average B-factors, both when compared with the Fen49*-apo structure overall and when compared with the corresponding residues in the Fen49* complex (Figure 3-figure supplement 6). Despite the divergent Thr87-Thr93 loop, the parent Fen49 and Fen49* have virtually identical affinities for fentanyl, suggesting that this loop, and more specifically the differential Trp63 interaction with fentanyl, do not substantially lower the free energy of fentanyl binding. Instead, preorganization of the inner binding cavity residues appears to be the main determinant of binding.
Fen49 was designed to bind a solvated fentanyl. The water modeled at the fentanyl tertiary amine was introduced to bridge an indirect protein-ligand interaction with Tyr80. During structure refinement, a strong electron density peak was observed at this location (3 Å distance and 109.2° angle). Refinement with water at this position produced a strong positive signal in the Fo-Fc difference map, and it became clear that the density corresponded instead to a chloride ion (Figure 3). The chloride occupies the site of the designed water; it is coordinated by the tertiary amine, which is protonated at the crystallization pH (7.5), by Tyr80 and by a nearby water, a trigonal planar arrangement for chloride. To address the role of the chloride in binding, we carried out binding experiments using potassium phosphate (pH 7.4), free of chloride, as the assay buffer (the protein was prepared in KPi as well). Nearly identical affinities were observed for Fen49.1, while Fen49* showed a modest 3-fold (~20 μM) reduction in affinity (data not shown), suggesting that chloride may be preferred, but is not required, for binding. We speculate that in the absence of chloride, a water molecule takes its place, as in the design model. The Tyr80-chloride interaction observed in Fen49* is mimicked by a Tyr80-PEG hydrogen bond in the Fen49 parent structure (Figure 3-figure supplement 1). A second water molecule is observed bound to the fentanyl carbonyl oxygen at the designed position (2.7 Å distance, 135.2° angle).

Figure 2 (caption, continued): green and red represent depletion and enrichment, respectively; positions for which insufficient data were obtained in the naive library to make a comparison are colored grey, and Fen49 design amino acid identities are colored yellow. Binding site side chains are represented by sticks in the accompanying cartoon model of the Fen49*-complex structure, colored according to enrichment (green = no enrichment away from the designed residue; red = strong enrichment away from the designed residue; olive = enrichment in early rounds of selection with depletion in later rounds). (c) Binding affinities (Kd) determined by equilibrium fluorescence anisotropy, using Fen-PEG-Alexa488 and Fen49 (left), Fen49* (middle) or Fen49.1 (right). DOI: https://doi.org/10.7554/eLife.28909.005
A fentanyl detector would have applications in both medicine and public health. To this end, we incorporated our fentanyl binders into a transcription factor (TF)-based biosensor system, which couples ligand binding to transcription activation by stabilizing the protein against degradation (Banaszynski et al., 2006; Feng et al., 2015). This sensor can be placed in any system; we chose plants, as they offer inexpensive on-site sensors for public health workers. We first tested our sensors in isolated plant cells (Arabidopsis protoplasts). The sensors were cloned into a plasmid containing a Gal4-responsive promoter driving expression of a luciferase reporter, and expressed transiently in protoplasts. Fentanyl was added to the liquid media in which the protoplasts were incubated. In the presence of fentanyl, both the Fen21 and Fen49 TFs activated luciferase expression (Figure 4a). The Fen21-based sensor, which proved to be the better of the two, produced an 8-fold increase in luciferase expression over background when treated with 250 μM fentanyl. Fen49-expressing protoplasts displayed a modest background signal in the absence of ligand, suggesting that Fen49 is too stable in its apo form to function effectively as a sensor without additional engineering. To demonstrate that our computationally designed fentanyl sensor could function in a multicellular organism, we stably transformed Arabidopsis plants with the Fen21 TF directing luciferase expression. Plants were incubated in liquid plant culture media supplemented with 100 μM fentanyl (250 μM fentanyl was toxic to plants; data not shown). As early as 24 hr, the Fen21 TF transgenic plants showed an approximately 5-fold increase in luciferase activity, which increased to 10-fold at 72 hr post incubation (Figure 4b).

Figure 4 (caption): (a) Protoplasts expressing a Fen TF together with a firefly luciferase construct respond to treatment with fentanyl; control cells did not receive fentanyl. Fen21 (~8-fold luciferase expression over background) was found to be more responsive to fentanyl than Fen49, and was used to generate the intact transgenic plants shown in (b). Fen21 was refractory to crystallization, and therefore we were unable to obtain structural information for this design. (b) Fentanyl-dependent induction of luciferase expression in intact transgenic plants.
Discussion
Neutralization of toxic compounds, either through binding or enzymatic breakdown, is an area of great interest for medical and environmental purposes. Our computational approach to designing environmental detectors lays the foundation for engineering practical plant-based sensors that are, in theory, able to detect and respond to any given small molecule. Unlike previous computationally designed ligand binders (Tinberg et al., 2013), the method that generated Fen49 did not involve any manual intervention, and hence could be rapidly applied to many ligands.
Fen49 does not possess any of the conserved sequence elements of MOR, a membrane-bound GPCR, that are required for binding of agonists and antagonists (Surratt et al., 1994). Fentanyl is likely to make a direct salt bridge with MOR via its tertiary amine and a conserved aspartate in the third transmembrane helix of the receptor (Manglik et al., 2016; Manglik et al., 2012). In contrast, we have designed an entirely orthogonal, soluble protein that exploits indirect protein-ligand interactions and shape complementarity as the primary drivers of binding. With Fen49, we have expanded the repertoire of small molecules that are amenable to computational design to include predominantly hydrophobic, flexible ligands. Binders targeting toxic small molecules such as fentanyl should find useful applications as environmental sensors and antidotes.
Generation of fentanyl conformers
Eleven conformers of fentanyl were generated based on an earlier investigation (Subramanian et al., 2000). To model solvation of the positive charge of the fentanyl tertiary amine and the polar carbonyl, explicit oxygen atoms were added at idealized distances and angles (3.0 Å N-O distance, 109° C-N-O angle for the tertiary amine; 3.0 Å O-O distance, 120° C-O-O angle for the carbonyl). These values were chosen based on the small-molecule crystal structure of a fentanyl-citrate toluene solvate (Peeters et al., 1979).
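As a hedged, modern alternative for readers who want to regenerate such a conformer set (the original study took its 11 conformers from Subramanian et al. (2000); nothing below is part of that protocol), RDKit can enumerate and minimize fentanyl conformers from its SMILES string:

    from rdkit import Chem
    from rdkit.Chem import AllChem

    # Fentanyl: N-phenyl-N-[1-(2-phenylethyl)piperidin-4-yl]propanamide
    fentanyl = Chem.AddHs(Chem.MolFromSmiles("CCC(=O)N(c1ccccc1)C1CCN(CCc2ccccc2)CC1"))
    cids = AllChem.EmbedMultipleConfs(fentanyl, numConfs=11, randomSeed=42,
                                      pruneRmsThresh=0.5)     # drop near-duplicate poses
    results = AllChem.MMFFOptimizeMoleculeConfs(fentanyl)     # (converged, energy) per conformer
    for cid, (_, energy) in zip(cids, results):
        print("conformer %d: MMFF energy %.2f kcal/mol" % (cid, energy))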
Scaffold selection
The scaffold proteins comprised primarily the 2010 MOAD set of high-resolution protein-ligand structures (Hu et al., 2005) (2454 PDBs), as well as a curated set of homologous proteins that had been shown to express well in the laboratory or that were suitable for computational design (399 PDBs). Also included was a set of 83 PDBs from the Pfam group of ketosteroid isomerases. The full list of PDB scaffolds used in this study is given in Supplementary Table 1 (Supplementary file 1).
Generation of fentanyl parameters
To generate the fentanyl parameters for use in Rosetta (RRID:SCR_015701), an auxiliary script distributed with the Rosetta software package was used to convert the mol2/mol format to Rosetta parameters, using the command shown below. Execution of the command generates a parameter file in full-atom as well as in centroid mode, with a formal charge of +1 for fentanyl.

python ~/Rosetta/main/source/src/python/apps/public/molfile_to_params.py Fentanyl_cambridge.mol -n CFN -c --recharge=1

Geometric placement of fentanyl in scaffolds using PatchDock and matching

Scaffold proteins were used for initial docking of the set of fentanyl conformers. Docking was restricted to binding pockets identified by preexisting ligands in the crystal structure, or by RosettaHoles (Sheffler and Baker, 2009), which was used to define a position file of residues in the pocket. The 2.0 drug module of PatchDock (Schneidman-Duhovny et al., 2005) was used with the default settings for docking. The top 20 scoring poses for each scaffold were selected for subsequent RosettaDesign.
/Rosetta/main/source/bin/gen_apo_grids.linuxgccrelease -s PDBFILE -database ~/Rosetta/main/database @flags

Where the flags file contained the following (text following # are comments):

-mute all
-unmute apps.pilot.wendao.gen_apo_grids
-chname off
-constant_seed
-ignore_unrecognized_res
-packstat:surface_accessibility
-packstat:cavity_burial_probe_radius 3.0 # if the cavity ball can be touched by a probe of r > 3, it is not in a pocket
-packstat:cluster_min_volume 90 # minimum size of a pocket; smaller voids are not considered
-packstat:min_cluster_overlap 1.0 # cavity balls must overlap by this much to be clustered
-packstat:min_cav_ball_radius 1.0 # radius of the smallest void ball to consider
-packstat:min_surface_accessibility 1.4 # void balls must be at least this exposed

These positions were set via the receptorActiveSite entry in the PatchDock parameter file, after first converting the position file into PatchDock format:

python splitfile.py PDBPOSITIONFILE.pos

Next, the parameters were generated using the supplied script buildParams.pl:

perl buildParams.pl PDBFILE FENTANYL_CONFORMATION 2.0 drug

The receptor active site was added to the parameter file:

echo "receptorActiveSite POSITIONFILEPATCHDOCK" >> params.txt

Lastly, fentanyl was docked into each scaffold protein:

patch_dock.Linux params.txt patchdock.out

For a small subset of the final 62 fentanyl binder designs, the RosettaMatch algorithm (Zanghellini et al., 2006) was used to introduce specific protein-fentanyl interactions into the subset of ketosteroid isomerase scaffold proteins (83 PDBs). Polar residues were used to introduce hydrogen bonds to the fentanyl carbonyl, while acidic (Asp and Glu) and aromatic residues (Phe, Trp and Tyr) were used to make charge-charge or dipole-quadrupole interactions, respectively, with the tertiary amine. A summary of the designs, their sequences and the method used to generate them is given in Supplementary Table 2 (Supplementary file 2).
Rosetta design
For each docked or matched pose, residues within 8 Å of fentanyl were designed using the RosettaDesign (Leaver-Fay et al., 2011) algorithm to optimize the sequence around fentanyl for shape complementarity (SC) and protein-ligand interface energy. Initially, the designs were filtered based on the orientation of the ligand, to allow egress of the chemical linker from the binding cavity for yeast display. For poses where the ligand had been placed according to the matching procedure, restraints were added and minimized in the context of an alanine backbone. For the initial round of designs, the catalytic residues were fixed and not allowed to change rotamer or amino acid identity. This was repeated 10 times and the lowest restraint score was kept for further design. The native scaffold residues were given a bonus of 1.5 REU, and the sequence was optimized with the matched residues fixed for 10 iterations. Again, the designs with the lowest interface energy (IFE) were kept. In order to limit the number of designs, we applied filters to remove poses that did not meet the following thresholds: SC > 0.5, IFE < −10 REU, dSASA ≥ 0.8 of the ligand, a geometrical restraint score ≤ 5, packing of the rotamers in the binding pocket with an RMSD < 1 Å, a ddG between the protein and the ligand of < −10 REU, and, finally, a negative electrostatic field at the charged hydrogen and the oxygen of the carbonyl. We furthermore carried out a greedy optimization of the residues in the interface, requiring each to contribute at least an average energy to the interface energy, the full-atom Dunbrack score, the total score of the residue, and its solvation score, which we had computed from the CSAR 2010 high-quality docking set using the enzdes score function. During the design process, we alternated between a linear version and an r6-r12 version of the Lennard-Jones potential.
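The filtering step can be pictured as a simple threshold check; the sketch below uses the cutoffs quoted above, but the score dictionary, key names and function are illustrative inventions, not part of Rosetta:

    def passes_filters(scores):
        # Threshold values quoted in the text; key names are hypothetical.
        return (scores["shape_complementarity"] > 0.5
                and scores["interface_energy"] < -10.0      # REU
                and scores["ligand_dsasa"] >= 0.8           # fraction of ligand surface buried
                and scores["restraint_score"] <= 5.0
                and scores["pocket_rotamer_rmsd"] < 1.0     # Angstrom
                and scores["ddG"] < -10.0)                  # REU

    design = {"shape_complementarity": 0.62, "interface_energy": -14.2,
              "ligand_dsasa": 0.91, "restraint_score": 2.1,
              "pocket_rotamer_rmsd": 0.4, "ddG": -12.8}
    print(passes_filters(design))                           # True: this design would be kept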
Gene synthesis
Fen21, Fen49 and 2QZ3 were purchased from Genscript, with the coding sequences cloned into the NdeI and XhoI sites of the vector pETCON, a modified version of pCTCON2 that contains a C-terminal fusion to the c-myc epitope.
Site-directed mutagenesis
Fen49 variants were generated by PCR using the megaprimer method (Ke and Madison, 1997), with the primers listed in Supplementary Table 3 (Supplementary file 3). Oligos were ordered from Integrated DNA Technologies, Inc.
Biotinylation of Fen-BSA
Fen-BSA (10 mg/mL in dH2O) was first diluted to 2 mg/mL in PBS pH 7.4. Biotinylated Fen-BSA was prepared by reacting 14.3 μl of a 10 mM EZ-link-Sulfo-NHS-LC-Biotin solution (prepared in PBS pH 7.4, 10 eq) with 500 μl of Fen-BSA in an Eppendorf tube shielded from light, on ice. After 4 hr the solution was dialyzed at 4 °C against 500 mL of PBS in order to remove unreacted biotin reagent. The dialysis buffer was exchanged for an additional round. The extent of biotinylation was determined using a Pierce biotinylation quantitation kit. Reactions resulted in 1-3 molecules of biotin per BSA.
Yeast surface display binding assays
All designs, derivatives and controls were transformed into S. cerevisiae EBY100 cells following the protocol outlined by Gietz and Schiestl, but without the single-stranded carrier DNA (Gietz and Schiestl, 2007). Transformants were plated on selective media (C -ura -trp) and incubated at 30 °C for 48 hr. Colonies were picked and grown overnight in 1 mL of SDCAA (Chao et al., 2006) at 30 °C, 225 RPM. The following day, 1e7 cells were harvested by centrifugation at 1000 x g for 2 min at RT in an Eppendorf microcentrifuge. The supernatant was removed, and the cells were resuspended in 1 mL of SGCAA induction media supplemented with 0.2% glucose. Protein expression was carried out at 18-22 °C. Following 36-48 hr of protein expression, 2e6 cells were collected into 96-well plates (Corning #3363). Cells were pelleted at 1000 x g for 2 min at 4 °C and washed twice with 200 μl of ice-cold PBSF pH 7.4 (PBS supplemented with 1 g/L of BSA). Cells were resuspended in a 20 μl PBSF solution containing Fen-BSA at various concentrations. 1 μl of anti-fentanyl antibody (CalBioreagents) was added to positive control cells expressing two tandem Z domains of protein A (ZZ domain) (Mazor et al., 2007; Nilsson et al., 1987). Plates were incubated at 4 °C for 4 hr on a Heidolph Titramax 1000 plate shaker at 1350 RPM.
Unbound Fen-BSA was removed by centrifugation and washing the cells once with ice-cold PBSF. Cells were labeled with 0.5 μl of anti-c-myc-FITC conjugated antibody (1 mg/mL) and 0.5 μl of streptavidin-phycoerythrin (SAPE, 3.3 μM) in a 20 μl volume of PBSF for 10 min at 4 °C with shaking.
Cells were washed once with 200 μl of ice-cold PBSF to remove unbound anti-c-myc antibody and SAPE. Cell pellets were resuspended in 100 μl of ice-cold PBSF immediately prior to use. Protein expression and binding were measured on an Accuri C6 flow cytometer (488 nm excitation, 575 nm emission) by monitoring the FITC and PE fluorescence contributions, respectively.
Fen49 Site-Saturation Mutagenesis (SSM) Library Generation and Selection
A single site-saturation mutagenesis (SSM) library was generated for the entire Fen49 coding sequence, with the exception of the start methionine, using a 2-step overlapping PCR method and pETCON-Fen49 as the template. The first step involved 2 separate PCR reactions to generate the 5' and 3' fragments flanking the site of interest. The reaction conditions were as follows: 16.125 μl ddH2O, 5 μl of HF Buffer (5X), 0.125 μl of Phusion High-Fidelity DNA Polymerase (NEB), 1.25 μl of 10 mM dNTPs, 0.5 μl of template DNA at 10 ng/μl, and 1 μl each of either the 3'-MCS primer (5'-GTACGAGCTAAAAGTACAGTGGGAAC-3') plus the forward NNK-containing primer, or the 5'-MCS primer (5'-TGACAACTATATGCGAGCAAATCCCCTCAC-3') plus a reverse primer designed to have a partial overlap with the NNK primer. The second PCR step reconstituted the full Fen49 gene containing a single SSM site, plus 5'- and 3'-flanking sequences derived from the pETCON vector. The reaction conditions were as follows: 33.25 μl ddH2O, 10 μl HF Buffer (5X), 0.25 μl DNA Polymerase, 2.5 μl dNTPs, 1 μl each of the 5'- and 3'-MCS primers, and 1 μl each of both PCR products described in step one. All primers were at 20 μM in dH2O. All amplifications were carried out by 30 cycles of PCR (98 °C 15 s, 54 °C 30 s, 72 °C 60 s), with an initial 30 s melting step at 98 °C and a final 5 min extension step at 72 °C. The NNK degenerate primers used to generate the SSM library are listed in Supplementary Table 4 (Supplementary file 4).
All full-length Fen49 PCR products were pooled and purified by gel extraction (Qiagen) using ddH2O. Library DNA was used to electroporate S. cerevisiae EBY100 cells in triplicate, following a slightly modified version of the protocol detailed by Benatuil et al. (2010), using 2 μg of NheI/XhoI/BamHI-linearized pETCON and 6 μg of Fen49-SSM library DNA. Cells were electroporated using a Gene Pulser Xcell (Bio-Rad) at 2.5 kV, 25 μF and 200 Ω. The library complexity was determined to be 5e8. Following the recovery step in YPD-sorbitol, cells were grown in 100 mL C -Trp -Ura media plus 100 μg/mL carbenicillin at 30 °C for 24 hr, 225 RPM. Cells were pelleted at 1000 x g for 3 min, resuspended in 100 mL of fresh C -Trp -Ura and incubated for another 24 hr. The three 100 mL library cultures were pooled, and 4e8 cells were pelleted and resuspended in 25 mL of SGCAA media supplemented with 0.2% glucose, 100 μg/mL carbenicillin and 50 μg/mL kanamycin. The library was expressed overnight at 22 °C, 225 RPM. In order to identify Fen49 variants with a greater affinity for fentanyl than the parent design, a fentanyl-SAPE tetramer label was used in place of Fen-BSA (20 molecules of fentanyl per BSA) to reduce the dependency of the system on avidity. One hundred million cells from the naïve library were pelleted at 1000 x g, washed twice with 1 mL of ice-cold PBSF and labeled for 3 hr at 4 °C, shielded from light, with 4 μM Fen-SAPE in a 1 mL volume of PBSF plus 20 μg/mL FITC-conjugated anti-c-myc. Labeled cells were pelleted, washed once with 1 mL of ice-cold PBSF and resuspended in 4 mL PBSF. Cells with the strongest signal in the PE channel (488 nm excitation, 585/30 nm optical filter) were collected with a SONY SH800 series cell sorter. Collected cells were grown in C -trp -ura.
Next generation sequencing and site saturation mutagenesis analysis
Fen49 SSM library DNA was analyzed on an Illumina MiSeq, using the v3 kit (600 cycles), which produces 300-base reads. For full sequence coverage, the Fen49 library was sequenced as 305- and 295-base paired-end reads, respectively, where the overlapping sequence of the two portions was 42 base pairs. Next-generation sequencing of the SSM libraries produced 5,017,520 forward and 5,093,932 reverse reads. The paired-end reads were assembled and filtered for quality (average Phred score ≥ 18 and a minimum per-position Phred score ≥ 12). The paired ends were further filtered based on a minimum overlap of 11 bases. This resulted in 3,992,873 full-length DNA reads (Supplementary file 5) that were translated to their corresponding protein sequences, and mutation frequencies were determined using the Enrich software package (Fowler et al., 2011). The relative enrichment values, calculated as described below, were determined for mutations with >15 counts.
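A stripped-down illustration of the read-level quality filter follows (the actual analysis used the Enrich package; the function and values below are illustrative only):

    def passes_quality(phred_scores, mean_cutoff=18, min_cutoff=12):
        # Keep a read only if both its average and per-position quality pass.
        mean_q = sum(phred_scores) / len(phred_scores)
        return mean_q >= mean_cutoff and min(phred_scores) >= min_cutoff

    print(passes_quality([30, 28, 25, 14, 33, 29]))   # True
    print(passes_quality([30, 28, 25, 10, 33, 29]))   # False: one base below 12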
The frequencies of an observed mutation at position i of mutation type j in the selected (f_ij^select) and naive (f_ij^naive) libraries were determined from the full-length reads. Mutation frequencies over all positions and mutation types were summed to determine the total frequency of mutants in the selected and naive libraries over the full protein sequence length (N). The possible mutations include the 19 standard amino acids (wild-type amino acid identities were not considered) and a sequence termination encoded by a stop codon. The probabilities of a mutation in a sequence for the selected (p_ij^select) and naive (p_ij^naive) libraries were calculated as shown in Equation 1 and Equation 2, respectively. The enrichment value (E_ij) was determined as shown in Equation 3; the equations are reconstructed below. SSM heat maps were generated using the MatrixPlot function in Mathematica.
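Equations 1-3 did not survive document conversion. Based on the definitions above and on the log-ratio convention commonly used for enrichment scores (e.g., by the Enrich package), a plausible reconstruction is the following; the exact form of Equation 3 in the original article may differ:

\[ p_{ij}^{\mathrm{select}} = \frac{f_{ij}^{\mathrm{select}}}{\sum_{i=1}^{N}\sum_{j} f_{ij}^{\mathrm{select}}} \qquad (1) \]

\[ p_{ij}^{\mathrm{naive}} = \frac{f_{ij}^{\mathrm{naive}}}{\sum_{i=1}^{N}\sum_{j} f_{ij}^{\mathrm{naive}}} \qquad (2) \]

\[ E_{ij} = \log_{2}\!\left( \frac{p_{ij}^{\mathrm{select}}}{p_{ij}^{\mathrm{naive}}} \right) \qquad (3) \]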
Synthesis of biotinylated and Alexa488 fentanyl derivatives
All chemical reagents and anhydrous solvents for synthesis were purchased from commercial suppliers (Sigma-Aldrich, Fluka, Acros) and were used without further purification or distillation. The composition of mixed solvents is given by the volume ratio (v/v). 1H and 13C nuclear magnetic resonance (NMR) spectra were recorded on a Bruker DPX 400 or a Bruker AVANCE III 400 Nanobay (400 MHz for 1H and 100 MHz for 13C in both cases), with chemical shifts (δ) reported in ppm relative to the residual solvent signals of CDCl3 (7.26 ppm for 1H, 77.16 ppm for 13C), CD3OD (3.31 ppm for 1H, 49.00 ppm for 13C) or DMSO-d6 (2.50 ppm for 1H, 39.52 ppm for 13C). Coupling constants are reported in Hz. High-resolution mass spectra (HRMS) were measured on a Micromass Q-TOF Ultima spectrometer with electrospray ionization (ESI) or on a Bruker MicroTOF with ESI-TOF (time-of-flight). LC-MS was performed on a Shimadzu MS2020 connected to a Nexera UHPLC system equipped with a Waters ACQUITY UPLC BEH C18 1.7 μm, 2.1 × 50 mm column (Buffer A: 0.05% HCOOH in H2O; Buffer B: 0.05% HCOOH in acetonitrile; analytical gradient from 5% to 95% B within 5.5 min at a 0.5 mL/min flow rate). Preparative RP-HPLC was performed on a Dionex system equipped with a UVD 170U UV-Vis detector for product visualization, on a Waters SunFire Prep C18 OBD 5 μm, 10 × 150 mm column (Buffer A: 0.1% TFA in H2O; Buffer B: acetonitrile; typical gradient from 0% to 90% B within 30 min at a 4 mL/min flow rate). After lyophilization of HPLC-purified compounds, the solid residue was generally dissolved in dry DMSO.
Bacterial protein expression and purification
Expression and purification methods refer to Fen49 and all derivatives. Coding sequences were subcloned from their pETCON constructs into the NdeI and BamHI sites of a modified version of pET28a (Novagen), which replaces the N-terminal thrombin cleavage site with a PreScission Protease (GE Healthcare Life Sciences) site. Expression clones were transformed into BL21 (DE3) cells and grown overnight in 2 mL of Terrific Broth II (TB-II, MP Biomedicals) supplemented with 150 μg/mL carbenicillin, without first plating for colony selection. Overnight cultures were used to inoculate 1 L of TB-II and subsequently grown at 37 °C until an OD600 of 0.8-1.0, at which point the shaker temperature was dropped to 18 °C and protein expression was carried out for 16-20 hr following the addition of IPTG to a final concentration of 0.1 mM. Cells were harvested by centrifugation at 4 °C, 7500 x g for 20 min, and the pellets from 2 L of culture were resuspended in ~30 mL of Nickel Buffer A (500 mM NaCl, 20 mM Tris pH 8.0, 30 mM imidazole and 5% glycerol) and stored at −80 °C. Purifications were performed from 6 L of cells. Cells were lysed, while in an ice-water bath, by sonication using a Sonic Dismembrator Model 505 (Fisher Scientific) at 70% amplitude (4 × 1 min cycles of 5 s pulses followed by 10 s rest, with 1 min in an ice-water bath between cycles). Lysates were clarified by centrifugation at 4 °C, 43,000 x g for 30 min.
The supernatant was loaded onto a 5 mL HisTrap FF column, charged with NiSO4, at 2.5 mL/min using an ÄKTA Pure fast protein liquid chromatography (FPLC) system (GE Healthcare Life Sciences). The column was washed with Nickel Buffer A until a baseline absorbance was achieved. Fen49 was eluted from the column with a linear gradient to 100% Nickel Buffer B (500 mM NaCl, 20 mM Tris pH 8.0, 200 mM imidazole and 5% glycerol) over 25 mL, and 2 mL fractions were collected.
Fractions containing Fen49 were pooled, PreScission Protease was added at a 1:20 ratio relative to Fen49, and the protein was dialyzed at 4 °C against 2 × 1 L of 50 mM NaCl, 20 mM Tris pH 8.0, 5% glycerol. Cleavage with PreScission Protease leaves a vector-derived Gly-Pro-His sequence on the N-terminus of the Fen49 sequence. Cleaved Fen49 was passed over a 5 mL GST HiTrap column and a Q HiTrap column in series at 1 mL/min. The flow-through, containing Fen49 free from contaminating proteins, was collected. Fen49 purity was estimated to be >95%. All FPLC steps were carried out at RT.
Fen49 was concentrated at 4000 x g using an Amicon Ultra-15 10K centrifugal filter (EMD Millipore) to ~200 μl. The protein buffer was exchanged into 10 mM NaCl by five rounds of dilution to 15 mL and reconcentration to 200 μl.
Fluorescence anisotropy equilibrium saturation binding assays
Fluorescence polarization experiments were performed as previously described (Rossi and Taylor, 2011). All experiments were conducted at 25 °C in a SpectraMax M5e microplate reader (Molecular Devices) at excitation and emission wavelengths of 485 nm and 538 nm, respectively, using a 515 nm emission cutoff filter. Experiments were carried out in 40 μl reaction volumes using high-efficiency 96-well black opaque microplates (Molecular Devices). Fentanyl-PEG-Alexa488 (Fen-A488) was used as the fluorescent ligand in all experiments. All protein and ligand dilutions were made in PBS, pH 7.4. For all experiments, the concentration of Fen-A488 was held at 500 nM, while the protein concentration was varied from 60 μM to 20 nM. Anisotropy values were collected over a period of 15 min, and the equilibrium dissociation constants (Kd) were determined as previously described (Tinberg et al., 2013).
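For illustration, a one-site binding isotherm can be fit to such anisotropy data as sketched below (synthetic numbers and a simple hyperbolic model that neglects ligand depletion; the cited references describe the rigorous treatment actually used):

    import numpy as np
    from scipy.optimize import curve_fit

    def one_site(P, r_free, r_bound, Kd):
        # Anisotropy as a weighted average of free- and bound-ligand signals.
        return r_free + (r_bound - r_free) * P / (Kd + P)

    P = np.array([0.02, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20, 60])   # protein [uM]
    r = one_site(P, 0.05, 0.25, 6.9)                            # synthetic "measurements"
    r += np.random.default_rng(2).normal(0, 0.003, P.size)      # assay noise

    popt, _ = curve_fit(one_site, P, r, p0=[0.05, 0.25, 5.0])
    print("fitted Kd = %.1f uM" % popt[2])                      # recovers ~6.9 uM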
Protein crystallization
Protein was spun at 4 °C, 20,817 x g for 20 min to remove insoluble material prior to crystallization; typically none was observed. Protein concentration was determined using the Bradford protein assay (Bio-Rad), with BSA used to generate the standard curve. Crystallization trials were conducted using a variety of 96-condition sparse-matrix suites from Qiagen and Hampton Research. All crystallization trials were conducted with the sitting drop vapor diffusion method at 20 °C, using 3-well MRC crystallization plates (Swissci).
Fen49 crystallization trials were conducted using a Mosquito Crystal nanoliter robot (TTP Labtech). Fen49 at 30 mg/mL was mixed at 1:2, 1:1 and 2:1 protein-to-crystallization-solution ratios in 400 nl drops. Crystals displaying a shard-like cluster morphology were obtained after ~1 week from a solution containing 0.1 M citric acid pH 3.5 and 25% (w/v) PEG-3350. Microseeding was employed in order to obtain crystals suitable for diffraction experiments. A drop containing Fen49 crystals was added to 50 μl of crystallization solution in a microfuge tube containing a Seed Bead (Hampton), and this solution was vortexed for 30 s. Fresh drops were set up using 1 μl of Fen49 at 20 mg/mL plus 1 μl of crystallization solution, to which 0.2 μl of a 1:100 dilution of the seed stock was added. Large, single crystals were observed overnight, which grew to a maximum size of over 300 μm in length within ~3 days. Crystals were briefly dipped in a solution of 0.085 M citric acid pH 3.5, 21.25% (w/v) PEG-3350 and 15% glycerol, and flash-frozen in liquid nitrogen.
Crystals of Fen49*-apo were obtained manually by mixing 0.5 μl of protein at 10 mg/mL with 0.5 μl of 0.8 M sodium phosphate, 0.8 M potassium phosphate and 0.1 M HEPES pH 7.5. Rod-like crystals appeared after ~3 days and grew to a maximum of 200 μm in length. Crystals were briefly soaked in a solution of 0.6 M sodium phosphate, 0.6 M potassium phosphate, 0.075 M HEPES pH 7.5 and 25% glycerol, then flash-frozen in liquid nitrogen.
The Fen49*-fentanyl complex was obtained by soaking Fen49*-apo crystals overnight in mother liquor plus 20 mM fentanyl citrate (from a stock solution prepared in dH2O at ~50 mM). The soaked crystals were sealed in a well containing mother liquor to allow excess water from the added fentanyl citrate to diffuse out. Fen49*-complex crystals were cryo-protected and flash-frozen in the same way as Fen49*-apo.
Data collection and processing
All datasets were collected at the Advanced Light Source (Berkeley, CA), beamline 8.2.2, using an ADSC Q315R CCD area detector. The Fen49 and Fen49*-apo datasets were processed in HKL2000 (Otwinowski and Minor, 1997). The Fen49*-complex dataset was processed in XDS (Kabsch, 2010).
Fen49 - Diffraction data were collected over 220° with 1° oscillations and 1 s exposures, at 100 K, a wavelength of 0.75141 Å and a crystal-to-detector distance of 156 mm. Images were processed to 1.00 Å in space group P2₁.
Fen49*-complex - Diffraction data were collected over 135° with 0.5° oscillations and 1 s exposures, at 100 K, a wavelength of 0.999878 Å and a crystal-to-detector distance of 190 mm. Images were processed to 1.67 Å in space group P2₁2₁2₁.
Structure determination and refinement
All structures were solved by molecular replacement (MR) using PHASER (McCoy et al., 2007) in the PHENIX software suite (Adams et al., 2010). Iterative rounds of manual building and refinement were conducted in Coot (Emsley et al., 2010) and Phenix.refine (Afonine et al., 2012), respectively. Hydrogens were added for all refinement jobs. The geometric quality of the final models was verified using the MolProbity server (Chen et al., 2010). Resolution cutoffs were determined by monitoring the refinement statistics in the context of the reflection data completeness and the CC1/2 and I/σI values (Karplus and Diederichs, 2012).
Fen49 -The Fen49 design model, with residues 63, 85-95 and 116-122 omitted, was used as search model for MR. Two copies of Fen49 were placed in the asymmetric unit (AU). An initial model was generated using the PHENIX Autobuild module. All defaults were used, with the following exceptions: 'Build-in-place' was set to 'False', simulated annealing was used for refinement, and prime-and-switch maps were used during model building to remove search model bias. All atoms except hydrogen were refined with anisotropic atomic displacement parameters.
Fen49*-apo -PDB 2QZ3, the parent scaffold from which Fen49 was designed, was used as the MR search model. Three copies were placed in the AU. Manual rebuilding was conducted directly from the MR solution. Poor electron density was observed for the third Fen49* copy in the AU, suggesting a large degree of disorder for this domain.
Fen49*-complex -Fen49*-apo was used as the MR search model. Manual rebuilding was conducted directly from the MR solution. Restraints for fentanyl were generated in Phenix.elbow (Moriarty et al., 2009) from the SMILES string using the eLBOW AM1 geometry optimization option.
Data collection and refinement statistics are given in the Supplementary materials.
Arabidopsis thaliana transcription factor reporter plasmid construction
The Fen49 and Fen21 transcription factors were engineered by N-terminal fusion of the yeast MATa gene degron and the Gal4 DNA-binding domain, and C-terminal fusion of the VP16 transcriptional activator, to either Fen49 or Fen21. The resulting gene sequence was codon-optimized for expression in Arabidopsis thaliana and cloned downstream of the CaMV35S promoter, to drive constitutive expression in plants, and upstream of the octopine synthase (ocs) transcriptional terminator sequence. To quantify the transcriptional activation function of the Fen49 and Fen21 transcription factors, the luciferase gene from Photinus pyralis (firefly) was placed downstream of a synthetic plant promoter consisting of five tandem copies of a Gal4 upstream activating sequence (UAS) fused to the minimal (−46) CaMV35S promoter sequence. Transcription of luciferase is terminated by the E9 terminator sequence. These sequences were cloned into a pSEVA 141 plasmid and used for transient expression assays in Arabidopsis protoplasts.
TF-Biosensor assays in protoplasts and intact plants
We next inserted the genetic circuit for Fen21 transcription and luciferase reporting into the pCAMBIA 2300 plant transformation vector and stably transformed it into Arabidopsis thaliana ecotype Columbia plants using a standard Agrobacterium tumefaciens floral dip protocol. Primary transgenic plants were screened in vivo for fentanyl-dependent luciferase production using a Stanford Photonics XR/MEGA-10Z ICCD camera and the Piper Control software system, and responsive plants were allowed to set seed for further testing. Second-generation transgenic plants (T1, heterozygous) were tested for fentanyl-dependent induction of luciferase expression using the same system described above. | 2018-03-03T23:52:39.580Z | 2017-09-19T00:00:00.000 | {
"year": 2017,
"sha1": "a53f03a28bbd5388255c84dce1fadf3a4c3442f7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.28909",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a53f03a28bbd5388255c84dce1fadf3a4c3442f7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
210844915 | pes2o/s2orc | v3-fos-license | Glucose Determination by Means of Steady-state and Time-course UV Fluorescence in Free or Immobilized Glucose Oxidase.
Changes in steady-state UV fluorescence emission from free or immobilized glucose oxidase have been investigated as a function of glucose concentration. Immobilized GOD has been obtained by entrapment into a gelatine membrane. Changes in steady-state UV fluorescence have been quantitatively characterized by means of optokinetic parameters, and their values have been compared with those previously obtained for FAD fluorescence in the visible range. The results confirmed that larger calibration ranges are obtained from UV signals, both for free and immobilized GOD, with respect to those obtained under visible fluorescence excitation. An alternative method to the use of UV fluorescence for glucose determination has been investigated by using time-course measurements to monitor the differential fluorescence of the redox forms of the FAD in GOD. Also in this case, quantitative analyses have been carried out and a comparison with different experimental configurations has been performed. Time-course measurements could be particularly useful for glucose monitoring in complex biological fluids, in which the intrinsic UV fluorescence of GOD may not be specific, considering the presence of numerous proteins.
Introduction
In the last years the employment of glucose oxidase (GOD) in glucose optical sensing has been widely investigated for clinical and industrial applications [1-8]. Different immobilization procedures have been adopted [9-11], aiming to extend the linear range of optical sensors and improve their sensitivity, specificity, reproducibility and time stability. Recently, new approaches to "in vivo" glucose measurements by means of fluorescence-based systems have been critically reviewed by Pickup et al. [12,13]. As far as glucose determination by means of GOD endogenous fluorescence is concerned, two different approaches have been followed. The former is based on changes in the steady-state fluorescence of the flavine (FAD) region during the enzymatic reaction [14-16]. This approach is very simple and highly specific to glucose, and the use of visible light (λexc = 420 nm; emission range = 480-580 nm) keeps the cost of the optical components low. However, it requires a large amount of enzyme, owing to the low quantum yield of flavine fluorescence. Moreover, the fluorescence changes are not very strong, and only particular immobilization procedures allow a widening of the linear calibration region for sensors operating in this wavelength range. The second approach exploits the intrinsic UV fluorescence of some GOD amino acids, mainly tyrosine and tryptophan. This fluorescence is generally characterized by an excitation spectrum with two maxima, at 224 and 278 nm, and an emission around 340 nm, and it is usually employed to obtain information about the enzyme configuration and binding positions [17]. UV intrinsic fluorescence offers some advantages in comparison with flavine fluorescence: a higher quantum yield and a larger linear calibration range [2]. These features make optical glucose measurements in this region interesting, even if UV fluorescence has been considered not highly specific for glucose when biological samples are investigated, owing to the presence of many proteins [18].
In 1997, some researchers [19] proposed an alternative method for using GOD UV fluorescence for glucose determination, exploiting the differential fluorescence of the redox forms of FAD bound to the enzyme; the method was applied to the determination of glucose concentration in blood. It is based on the evidence that the addition of glucose to a GOD solution does not immediately change the UV fluorescence signal, which remains constant for a certain amount of time before increasing to a determined level. This fluorescence level remains stable for some time before slowly decreasing. The experiments [19] ruled out that this behaviour is due to inner-filter or oxygen-quenching effects; rather, it has to be ascribed to a different energy transfer from tryptophan to reduced and oxidized FAD [20]. In this way, by monitoring the strong but non-specific UV fluorescence signal, it is possible to characterize the weaker but highly specific glucose-FAD fluorescence [19]. The time needed to reach the stable fluorescence level depends on the glucose concentration and can therefore be used for its determination. In addition, by changing the oxygen and enzyme concentrations it is possible to modulate the linear calibration range. In this paper we investigated the feasibility of using these temporal fluorescence changes to quantify the glucose concentration when GOD is no longer free, as in reference 19, but immobilized by entrapment in a gelatine membrane. The latter system is more appropriate for sensor applications. The performance of this approach has been quantified by means of optokinetic parameters, as in our previous paper [16]. These parameters have also been evaluated for the temporal fluorescence changes of free GOD and for the steady-state UV fluorescence changes in free and immobilized GOD.
Materials
Glucose oxidase (GOD, EC 1.1.3.4) from Aspergillus niger (154 U mg⁻¹) was employed for our study. GOD catalyses the oxidation of glucose to gluconic acid through the reactions sketched below. The reaction mechanism is the following: glucose reduces the FAD of glucose oxidase to FADH2, with formation of gluconolactone, which is rapidly hydrolysed to gluconic acid. At this point the dissolved oxygen reoxidizes FADH2 back to FAD and produces H2O2.
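The reaction equations themselves appear to have been lost in conversion; from the mechanism just described, the standard GOD reaction scheme can be reconstructed as follows (a reconstruction, not a verbatim restoration of the original typography):

\[ \text{glucose} + \text{GOD-FAD} \rightarrow \text{gluconolactone} + \text{GOD-FADH}_2 \]

\[ \text{gluconolactone} + \text{H}_2\text{O} \rightarrow \text{gluconic acid} \]

\[ \text{GOD-FADH}_2 + \text{O}_2 \rightarrow \text{GOD-FAD} + \text{H}_2\text{O}_2 \]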
The enzyme was immobilized by entrapment into bovine gelatine (average molecular weight 100 kDa). Gelatine was a gift of Deutsche Gelatine-Fabriken Stoess, Eberbach, Germany.
All chemical products, including the enzyme, were purchased from Sigma (Sigma-Aldrich, Milano, Italy) and used without further purification.
Preparation of the catalytic membranes
A 10% (w/v) gelatine aqueous solution was heated in a water bath at 90 °C for 15 min; then the solution was gradually cooled to 40 °C before adding a 50% (v/v) ethanol/formaldehyde solution to give a 1% final HCOH concentration. After 20 min of treatment at this temperature, GOD (final concentration 5 mg mL⁻¹) was added under vigorous stirring, and the mixture was poured into a Plexiglas square frame, 4 × 4.5 cm in size and 5 mm in depth. The preparation was quickly put into a freezer at −24 °C and, after 16 h, brought back to room temperature, thus obtaining a flexible gelatine membrane, which was extensively washed with distilled water. At this point, rectangular pieces of the same size (40 mm × 13 mm) were cut and used for fluorescence measurements. In this way, catalytic membranes comparable in dimensions and in amount of entrapped GOD were obtained. The gelatine membranes had a lattice structure, which efficiently held the biocatalyst and allowed free diffusion of the substrate and reaction products. When not in use, the membranes were stored at 4 °C in 0.1 M acetate buffer, pH 5.0.
Intrinsic fluorescence emission measurements
GOD is an oxidase and at pH 6.5 exhibits a very intense UV fluorescence with an emission maximum at 334 nm and two absorption maxima at 224 nm and 278 nm due to tryptophan. GOD is also a typical flavoprotein. GOD from A. niger is a dimer with two very tightly bound FAD molecules per dimer. Like all flavoproteins, GOD shows absorption maxima at about 380 and 450 nm and an intrinsic fluorescence with an emission maximum at about 530 nm, at pH 7.0. As previously reported, changes in the fluorescence of free and immobilized GOD have been found during its interaction with glucose, since the oxidized and reduced flavins exhibit different fluorescence [12,14,18].
In this research the emission fluorescence spectra were collected by means of a spectrofluorimeter (Perkin-Elmer, model LS55) equipped with a Xenon discharge lamp with an emission spectrum ranging from 200 to 800 nm. Sample excitation was performed at 295 nm, while the emission spectrum was recorded in the range 310-400 nm. Spectra were acquired with entrance and exit slits fixed at 5 nm and with a scan speed of 100 nm s⁻¹. As an example, Figure 1a reports the normalized emission fluorescence spectra of free GOD in the presence (2 mM) or in the absence of glucose. Figure 1a shows a fluorescence increase (about 20% for both peak and integral values) when glucose is present in the aqueous solution. In Figure 1b the normalized emission fluorescence spectra of GOD entrapped in the gelatine membrane in the presence (20 mM) or in the absence of glucose are reported. Also in this case a fluorescence increase (nearly equal to 20% for both peak and integral values) is evident in the presence of glucose. We checked that the changes in fluorescence were not due to GOD diffusion from the gelatine to the solution; in fact, no fluorescence spectra were detected after removing the catalytic gelatine membrane. In both Figure 1a and Figure 1b, normalization has been carried out by taking as unity the maximum intensity of the spectrum recorded in the presence of glucose. From a comparison between curves (i) of Figures 1a and 1b it is also evident that the immobilization procedure does not significantly alter the UV fluorescence spectra, in agreement with ref. 21.
For analytical purposes the emission fluorescence spectrum can be quantified either through the peak value at 340 nm or through the integral area under the spectrum in the region 310-400 nm. Both experimental parameters have been used in the elaboration of our figures.
UV fluorescence time course measurements
Also in this case, the changes in fluorescence during the enzymatic reaction were followed with the same spectrofluorimeter mentioned above. At the beginning, the fluorescence at 340 nm of either free or immobilized GOD was measured; this measurement constituted the baseline for the subsequent ones. After the addition of 200 μl of glucose solution at different concentrations, the time course of the fluorescence intensity was monitored. Figure 2a shows the UV fluorescence signal obtained upon addition of 200 μl of 1 mM glucose solution to the GOD solution, giving a final glucose concentration of 0.2 mM. As is evident, the fluorescence intensity initially remains constant at I_0, except for a small decrease resulting from dilution. After some time, referred to as the "appearance time" t_app, the fluorescence intensity increases gradually to a final value I_1 at t_1. I_1 remains constant for some time and then gradually decreases. Similar measurements were performed for GOD entrapped in a gelatine membrane, and the results are shown in Figure 2b. At t = t_0, 200 μl of 10 mM glucose solution were added to the buffered solution in which the gelatine membrane was positioned, giving a final glucose concentration of 2 mM. After some time, t_app, the UV fluorescence signal started to increase. No initial decrease of the fluorescence signal was observed, since the signal comes from immobilized GOD and therefore there is no dilution effect on the GOD concentration. Also in this case it was checked that the changes in fluorescence were not due to the presence of GOD in solution. According to ref. 19, the determination of glucose has been carried out by means of two different parameters: t_app and the linear slope of the rise of the intensity signal.
Calibration curves of glucose concentration through emission spectra
3.1.1. Free GOD

In Figure 3a the peak intensities, P_c,free, of the emission spectra of free glucose oxidase are reported as a function of glucose concentration. The subscripts "c" and "free" indicate the glucose concentration and the enzyme form, respectively. The P_c,free values in Figure 3a have been decreased by P_0,free = 403 arbitrary units, i.e. the peak value of free GOD in the absence of glucose (c = 0). The experimental conditions were: temperature 25 °C and glucose in 0.1 M acetate buffer solution, pH 5.0. Data in Figure 3a exhibit a Michaelis-Menten behaviour and are well fitted by an equation of the type

P_c,free = P_sat,free · c / (K_P,free + c)    (1)

where the subscript "sat" indicates the peak value at saturation and K_P is a "pseudo" Michaelis-Menten constant. A Lineweaver-Burk plot of the results reported in Figure 3a allows us to derive the K_P,free and P_sat,free values reported in Table 1. As in reference 16, K_P and P_sat are the optokinetic parameters.
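As an illustration of the fitting procedure, the following minimal Python sketch estimates K_P and P_sat from (c, P) data via the Lineweaver-Burk linearization 1/P = (K_P/P_sat)(1/c) + 1/P_sat; the numerical values are invented placeholders, not our measured data.

```python
import numpy as np

# Hypothetical calibration data: glucose concentration (mM) and
# background-subtracted peak fluorescence (arbitrary units).
c = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
P = np.array([55.0, 98.0, 160.0, 232.0, 300.0, 350.0])

# Lineweaver-Burk: 1/P = (K_P / P_sat) * (1/c) + 1/P_sat
slope, intercept = np.polyfit(1.0 / c, 1.0 / P, 1)

P_sat = 1.0 / intercept   # peak value at saturation
K_P = slope * P_sat       # "pseudo" Michaelis-Menten constant

print(f"P_sat = {P_sat:.1f} a.u., K_P = {K_P:.2f} mM")
```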
The inset of Figure 3a shows the range where a linear relationship (R = 0.99) exists between the intensity of the emission peak and the glucose concentration:

P_c,free = S_P,free · c    (2)

An analogous Michaelis-Menten treatment applies to the integral area values, A_c,free, reported in Figure 3b as a function of glucose concentration (equation (3)); the K_A,free and A_sat,free values, obtained by a Lineweaver-Burk plot of the results of Figure 3b, are reported in Table 2.
In the inset in Figure 3b the linear range between the integral area values and the glucose concentration is reported. Data are well fitted (R = 0.98) by the linear equation

A_c,free = S_A,free · c    (4)

where S_A,free is the sensitivity of the method. The S_A,free value is reported in Table 2 together with the extension of the linear range of the calibration curve.
In Tables 1 and 2 we have also reported, for comparison, the analogous parameters obtained by us in the visible region in a previous paper [16]. Data reported in Tables 1 and 2 show values of the optokinetic parameters in the UV spectral range higher than those obtained under visible conditions, the ratio ranging from 3.3 for the peak to 4 for the area. The extension of the linear calibration range is also improved for the measurements performed in the UV spectral range, while, as expected, the sensitivities calculated for the visible spectral range are higher than those obtained in the UV range. This circumstance confirms the general observation that the extension of the linear calibration range and the sensitivity are not directly proportional quantities.
3.1.2. GOD - gelatine membrane
In Figure 4a the values of the peak intensity of the intrinsic fluorescence emission spectrum of GOD entrapped in the gelatine membrane are reported as a function of glucose concentration. The experimental points in the figure have been obtained by taking into account the intensity of the fluorescence peak of the GOD-gelatine membrane in the absence of glucose, i.e. P_0,gel = 900 arbitrary units. The experimental conditions (temperature, pH, and buffer solution) were the same as in Figure 3. Also in this case a Michaelis-Menten behaviour is observed; it follows that all the considerations made for free GOD are still valid for immobilized GOD. The inset in Figure 4a shows the linear range between the fluorescence peak intensity of the GOD-gelatine membrane and the glucose concentration, as deduced from Figure 4a. The values of K_P,gel, P_sat,gel and S_P,gel, calculated by means of equations identical to (1) and (2), are reported in Table 1. The subscript "gel", in this case, indicates gelatine. When reference is made to the integral area of the emission fluorescence spectrum of the GOD-gelatine membrane, one obtains the results reported in Figure 4b. Also in this case, the data in the figure have been obtained by subtracting from the measured values of the integral area of the emission spectra in the presence of glucose the integral area of the emission spectrum of the GOD-gelatine membrane in the absence of glucose, i.e. A_0,gel = 50,000 arbitrary units. In the inset in Figure 4b the linear range between the integral area of the emission spectrum and the glucose concentration is reported. Also in the case of Figure 4b the considerations made for the free enzyme still hold, so that through equations similar to (3) and (4) one obtains the values of K_A,gel, A_sat,gel and S_A,gel listed in Table 2.
Data listed in Tables 1 and 2 show that, under UV conditions, GOD entrapped in the gelatine membrane exhibits a larger calibration range and a higher sensitivity with respect to the analogous values obtained under visible emission.
Calibration curves of glucose concentration through fluorescence time course

3.2.1. Free GOD
According to reference 19, the dynamic changes in the emission spectra can be used for the determination of glucose concentration by utilizing two different parameters: the linear slope (Sl = dI/dt) of the intensity rise of the fluorescence signal and the appearance time (t_app). Reference 19, moreover, allows us to use the parameter (t_m - t_0) in place of the appearance time t_app, which is more difficult to determine. (t_m - t_0) is equal to the time required for the fluorescence intensity signal to reach a value equivalent to 10% of the overall increase. In the present study we used both parameters, Sl and (t_m - t_0).
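The extraction of these two parameters from a recorded trace can be sketched as follows; this is a minimal illustration on synthetic data (the trace shape, the 25-75% fitting window and the sampling step are invented for the example), not the acquisition code actually used.

```python
import numpy as np

# Synthetic fluorescence trace: baseline I0, sigmoidal rise to I1.
t = np.arange(0.0, 300.0, 0.5)               # time (s)
I0, I1, t_half, tau = 100.0, 140.0, 120.0, 15.0
I = I0 + (I1 - I0) / (1.0 + np.exp(-(t - t_half) / tau))

# (t_m - t_0): time to reach 10% of the overall intensity increase
# (t_0 = 0 here, the moment of glucose addition).
threshold = I0 + 0.10 * (I1 - I0)
t_m = t[np.argmax(I >= threshold)]           # first crossing
print(f"t_m - t_0 = {t_m:.1f} s")

# Sl = dI/dt: slope of the quasi-linear part of the rise,
# here taken between 25% and 75% of the overall increase.
lo, hi = I0 + 0.25 * (I1 - I0), I0 + 0.75 * (I1 - I0)
mask = (I >= lo) & (I <= hi)
Sl = np.polyfit(t[mask], I[mask], 1)[0]
print(f"Sl = {Sl:.3f} a.u./s")
```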
In Figure 5a the values of the linear slope of the fluorescence intensity increase at different glucose concentrations are reported. A Michaelis-Menten behaviour is observed and the curve through the experimental points is well fitted by the equation

Sl_c,free = Sl_sat,free · c / (K_Sl,free + c)

where Sl indicates the slope of the initial increase of the fluorescence signal, and the subscripts c, sat and free have the same meaning as in the previous sections.
In the inset in Figure 5a the linear calibration range relative to the experimental points of Figure 5a is reported. The corresponding linear relationship can be written as

Sl_c,free = S_Sl,free · c

where S_Sl,free is the sensitivity of the method. Values of Sl_sat, K_Sl,free and S_Sl are reported in Table 3 (optokinetic parameters for free and immobilized GOD evaluated from the fluorescence time course and from the analysis of the slope of the signal intensity rise).

Coming now to the (t_m - t_0) parameter, the model [19] predicts that this time is influenced by both the oxygen concentration [O_2] and the glucose concentration [G], according to a linear relationship between (t_m - t_0) and ln([G]/([G] - 2[O_2])). Obviously this relationship is valid for glucose concentrations greater than twice the [O_2] concentration. Following the authors of reference 19, the experimental points of Figure 5a have been re-elaborated in this way and are reported in Figure 5b. The results in Figure 5b indicate that by using this approach it is possible to extend (from 0.6 mM to 1.2 mM) the linear range in which a new calibration curve can be used. It is not clear whether it is fortuitous that the new calibration curve starts from the glucose concentration (0.6 mM) which represents the upper limit of the calibration curve related to the linear slope of the intensity rise. If this is not fortuitous, it should be feasible to use different GOD and oxygen concentrations to modulate the whole range in which it is possible to perform accurate measurements of glucose concentration by means of technologies based on the fluorescence time course.
The characteristic parameters relative to this approach are indicated by TC (for Time Course) and are reported in Table 4. S_TC (measured in seconds) is the slope of the best-fitting line through the experimental points reported in Figure 5b.
3.2.2. GOD - gelatine membrane
Our main goal in this work was to investigate the possibility of broadening the field of application of the method proposed in reference 19 to immobilized GOD, since no industrial applications can be devised for the free enzyme. For this reason we investigated the changes in the fluorescence signals of GOD immobilized in a gelatine membrane in the presence of different glucose concentrations. As shown in Figure 2 and as reported in reference 21, the immobilization procedure does not significantly alter the emission spectrum of the enzyme in the UV region.
In Figure 6a the values of the linear slope dI/dt, i.e. the values of Sl, are reported as a function of glucose concentration when GOD is immobilized in the gelatine membrane. Also in this case a Michaelis-Menten behaviour is observed, and the values of K_Sl,gel and Sl_sat,gel can accordingly be derived from a Lineweaver-Burk linearization. These values are reported in Table 3. Since we are interested in a linear calibration curve, the inset in Figure 6a reports the linear range, fitted by

Sl_c,gel = S_Sl,gel · c

where the variables have the usual meaning and S_Sl,gel is a constant giving the sensitivity of the method. The value of S_Sl,gel is also reported in Table 3.
When the parameter (t_m - t_0) is considered, the plot of Figure 6b is obtained. As for Figure 5b, the data in Figure 6b also show that by plotting (t_m - t_0) versus ln([G]/([G] - 2[O_2])) it is possible to find a new calibration curve beyond the 10 mM glucose concentration, which represents the upper limit for the calibration curve obtained by plotting the rate of intensity increase as a function of glucose concentration. All the considerations made for the free enzyme are still valid for the immobilized one.
Before concluding, let us make some observations emerging from the data in Figures 5b and 6b. The straight lines fitting the experimental points allow us to define a "pseudo-sensitivity" of the method. Values of S_TC and the corresponding extensions of the linear range, for free and immobilized GOD, are reported in Table 4. It is interesting to observe that when the approach of reference 19 is followed, the pseudo-sensitivity parameters and the extension of the linear calibration range for the immobilized GOD are higher than those obtained for the free GOD.
Conclusions
From the data reported in the Tables it is possible to conclude that also for the UV fluorescence signal, either for steady-state or for time course measurements, the use of immobilized GOD yields an increase of more than one order of magnitude in the linear calibration range. In a previous paper [16] we noticed a linear relationship between the K optokinetic values and the corresponding extensions of the linear range. The K optokinetic values evaluated for steady-state UV fluorescence emission follow the same relationship, as shown in Figure 7, in which all the values for visible and UV emission are reported. Looking at the sensitivity of the measurement method, it is evident that for UV steady-state fluorescence the sensitivity decreases when GOD is immobilized, but not by as large a factor as for visible emission: a factor of ten for the visible and a factor of three for the UV. For the method based on the slope of the intensity increase, the sensitivity decrease is still evident when immobilized GOD is considered, but for the (t_m - t_0) method the sensitivity for the immobilized enzyme is higher than the corresponding value for free GOD. This is particularly encouraging for applying this method to complex biological fluids in which many proteins are present and UV steady-state fluorescence is claimed to be not very specific to glucose [14]. In our previous paper we already noticed that our systems operating in the visible region were more sensitive than other GOD sensors based on absorption changes [22]. Presently the use of UV fluorescence allowed us to obtain better results, especially when the time course method with immobilized GOD is adopted. In particular, this method can allow us to reach sensitivities higher than those shown in reference 23, where exogenous luminescent probes were used in sol-gel based glucose biosensors. Furthermore, our method appears to be competitive also with the most recent fluorescence-based systems [12,13]. The results reported here indicate that it would be possible to design an optical biosensor for glucose determination based on GOD entrapped in a gelatine membrane coupled to optical fibers.
"year": 2007,
"sha1": "890137d228c4cfd2afc95a26bf7dd656af3de79d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "CiteSeerX",
"pdf_hash": "ac09aca4f927e79a0a780d4a50ee46ceaac27ff0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Topological Optimization of a Complex Shape Forming Stamp
The scientific research is devoted to the mathematical modeling of the optimal topology of stamps with a complex forming surface. Topological optimization is based on the SIMP method, creating a field of pseudo-densities and minimizing the compliance of the structure under load. When setting up the problem, it is proposed to take into account the fatigue strength of polymers through restrictions on the stress state. According to the results of the calculation in the ANSYS software package, an optimal redistribution of the stamp material and a reduction in volume are obtained by removing elements that have little effect on the rigidity of the structure. The results of the study can be further applied in the field of hot and cold stamping by creating stamping tools of minimal volume.
Introduction
This study is devoted to modeling the optimal topology of dies of complex shape, in view of the large number of non-standard parts, in particular half-pipes associated with a change in diameter or channel branching. Such features distinguish complex dies from dies for the manufacture of "cup"-type parts, where there is a precise flange zone and a "stream" of shaping. For these complex dies, the most effective method of material redistribution is topological optimization based on maximizing rigidity, where for a given decrease in volume the optimal configuration of elements to be deleted and retained in the stamp is selected.
In the study of dies, one of the important issues is assessing their wear by the criteria of low-cycle fatigue and plastic crushing, which is reflected in [1]. Since dies experience long-term force impact, questions of cyclic loading in relation to the optimization of die topology were investigated in [2,3]. The reduction of deformations of simple-shaped stamps during cold stamping and the design of "cup"-type stamps were considered in [4,5]. The parameters of shaping the "hemisphere" part were studied in [6]. The problems of designing complex systems have been studied in sufficient detail in [7][8][9][10].
Since the problems of shaping are contact problems, important questions concern the study of the uneven loading of the punch, which is determined by the loading rate from the press, the stiffness of the workpiece and the elasticity of the punch. The mechanics of the contact interaction of the punch, the workpiece and the die was investigated in [11].
In view of the development of additive technologies, the use of polymers in the manufacture of dies has gained widespread acceptance. The limiting state of polymers and composites under high-cycle loading was considered in [12]. Methods for non-destructive testing of polymers were studied in [13]. Stamping processes using polymers were studied in articles [14].
In view of the complexity of the forms and formulations of optimization problems for dies, it is very important to apply the methods of mathematical modeling and numerical analysis, which was studied in [15][16][17]. Optimization models taking into account power and thermal loads were considered in articles [18][19][20]. The problems of using the finite element method in assessing the stress-strain state in the problems of shaping were considered in the works [21,22]. The application of optimization methods in the PAM-STAMP 2G software package was considered in [23]. Finite-element models of the optimal topology of shaping dies in the ANSYS program were considered in [24][25][26].
Optimization problems of contact interaction with uncertainties and the use of a probabilistic approach were considered in [27][28][29]. In addition, methods for optimizing the technology of manufacturing power aircraft structures were investigated in [30]. Optimization of operational reliability of stamping equipment was investigated in [31]. A mathematical model of the redistribution of the stamp material with the help of rod elements was constructed in [32]. Thus, according to the analysis of literature sources, the issues of die wear have begun to be considered from an optimization standpoint in recent years in view of the introduction of topological optimization methods into the finite element method, as well as the possibility of using composites in the manufacture of dies.
Numerical calculation method
The topological optimization of a complex shape forming stamp is considered in the paper on the basis of the SIMP method. The SIMP method is based on minimizing the compliance of the structure, which is divided into finite elements over which a field of virtual pseudo-densities x_i is created [33]:

c = {F}^T {u} → min,

where c is the work of external forces (compliance), {F} is the vector of external forces and {u} is the displacement vector. The removal of an element is determined by its minimal stiffness, obtained by assigning a small value E_min of the elastic modulus E_i to the element [33]:

E_i(x_i) = E_min + (x_i)^p (E_0 − E_min),

where E_0 is the elastic modulus of the solid material and p is the penalization exponent. The presence or absence of material is determined by the pseudo-density parameter x_i, which takes the values 0 or 1.
Thus, the mathematical formulation of the problem is determined by the following relations, where a condition on the fatigue strength of the polymer, taking into account the duration of the force action, is added to the main system of equations of topological optimization:

[K]{u} = {F},  c = {F}^T{u} → min,  σ_i ≤ σ_adm(T, t),

where [K] is the stiffness matrix, {u} the vector of nodal displacements, σ_i the intensity of the stressed state, T the temperature and t the time; the admissible stress follows from a kinetic (Zhurkov-type) durability relation, t = t_0 exp[(U_0 − γσ)/(kT)], in which U_0 and γ are experimental constants for the polymer and k is the Boltzmann constant. As a result, the search for the optimal redistribution of the material satisfying the required restrictions on the volume and the stress state is carried out on the basis of the presented relations.
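For readers unfamiliar with SIMP, the following minimal Python sketch illustrates the pseudo-density update step in an optimality-criteria style, which is how SIMP problems of this kind are typically iterated; it is a schematic of the general method, not the ANSYS implementation used in the study, and the sensitivity values are placeholders.

```python
import numpy as np

def simp_oc_update(x, dc, volfrac, move=0.2, eta=0.5):
    """One optimality-criteria update of SIMP pseudo-densities.

    x       -- current element pseudo-densities in [0, 1]
    dc      -- compliance sensitivities dc/dx_i (negative values)
    volfrac -- target volume fraction (e.g. 0.6 to keep >= 60% volume)
    """
    l1, l2 = 0.0, 1e9
    while (l2 - l1) / (l1 + l2 + 1e-12) > 1e-4:  # bisection on the Lagrange multiplier
        lmid = 0.5 * (l1 + l2)
        x_new = x * (np.maximum(-dc, 0.0) / lmid) ** eta
        x_new = np.clip(x_new, x - move, x + move)  # move limit
        x_new = np.clip(x_new, 0.0, 1.0)
        if x_new.mean() > volfrac:
            l1 = lmid      # too much material: increase multiplier
        else:
            l2 = lmid
    return x_new

# Toy usage: 100 elements, synthetic sensitivities.
x = np.full(100, 0.6)
dc = -np.linspace(1.0, 2.0, 100)   # placeholder sensitivities
x = simp_oc_update(x, dc, volfrac=0.6)
print(f"mean density after update: {x.mean():.3f}")
```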
Research results
The formulation of the topological optimization problem is shown in Figure 1: red color marks the loading surface, an optimization-free zone; blue color marks the internal optimization area of the stamp. 25,653 elements were created when discretizing the stamp volume; the size of one element is 2 mm. The surface load p = 0.1 MPa was applied as pressure. Optimization limitations: the die volume should remain above 60% of the initial volume, and the stress state should not exceed 1 MPa.
The stress state of a solid die is shown in figure 2 based on the results of numerical calculation by the finite element method. The redistribution of the stamp material under the specified restrictions is shown in figure 3.
The discussion of the results
According to the results of mathematical modeling, the volume of the solid stamp was V_0 = 2.23·10^2 mm^3, while the volume of the stamp with optimized topology was V_* = 1.7·10^2 mm^3. As a result of the optimization, the stamp volume was reduced by 24%. According to the analysis of the stress state of the solid die, the highest stress according to the Mises criterion is 0.5 MPa. The search for the optimal distribution is performed iteratively and required 33 iterations for convergence. From the calculation results, the distribution of pseudo-densities is obtained. The least loaded elements are the elements of the side faces of the stamp, whose pseudo-density values lie in the range 0 ≤ x_i ≤ 0.4.
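For clarity, the quoted 24% reduction follows directly from the two volumes:

$$\frac{V_0 - V_*}{V_0} = \frac{2.23 - 1.70}{2.23} \approx 0.238 \approx 24\%.$$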
Conclusion
The formulation of the problem of topological optimization of a stamp of complex "tee"-type shape is presented according to the results of the study. The distribution of the material under the given restrictions on the volume and the stress state is obtained as a result of mathematical modeling. The presented approach is of great practical importance for reducing the material costs of producing all-metal stamps and their manual finishing. The use of polymers and the possibility of using additive technologies will make it possible to produce stamps with a complex internal structure and a minimized volume in practice.
"year": 2021,
"sha1": "fa83e00938a3e43d7545e5cedba1d05cf7942822",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2096/1/012115",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fa83e00938a3e43d7545e5cedba1d05cf7942822",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Surveillance and Correlation of Antibiotic Consumption and Resistance of Acinetobacter baumannii complex in a Tertiary Care Hospital in Northeast China, 2003-2011
This study investigated the changes in resistance of Acinetobacter baumannii complex and the association between carbapenem-resistant A. baumannii complex (CRAB) infection and hospital antimicrobial usage from 2003 to 2011 in a tertiary care hospital in northeast China. In vitro susceptibilities were determined by the disk diffusion test and susceptibility profiles were determined using zone diameter interpretive criteria, as recommended by the Clinical and Laboratory Standards Institute (CLSI). Data on consumption of various antimicrobial agents, expressed as defined daily doses/1,000 patients/day, were collected retrospectively from the hospital pharmacy computer database. Most of the 2,485 strains of A. baumannii complex were collected from respiratory samples (1,618 isolates, 65.1%) and from secretions and pus (465, 18.7%) over the years. The rates of antimicrobial resistance in A. baumannii complex increased significantly over the years. The rates of CRAB were between 11.3% and 59.1% over the years. The quarterly use of anti-pseudomonal carbapenems, but not of other classes of antibiotics, was strongly correlated with the increase of quarterly CRAB (β = 1.661; p < 0.001). Dedicated use of anti-pseudomonal carbapenems would be an important intervention to control the increase of CRAB.
Introduction
Acinetobacter baumannii complex has emerged as an important pathogen causing a variety of infections, including urinary tract infections, skin and soft tissue infections, pneumonia and bloodstream infections [1]. A. baumannii complex used to be an easy-to-treat pathogen because it was susceptible to a wide range of antibiotic agents [2]; however, over the last decade, A. baumannii complex has developed various resistance mechanisms to antibiotics [3,4]. Its ability to chronically colonize patients and cause outbreaks that are usually hard to eradicate poses significant challenges to infection control and increases healthcare expenditure [5]. The threat of antimicrobial resistance has extended from the hospital setting to the community setting. The trend of increased antimicrobial resistance among bacterial pathogens severely limits the choice of effective antimicrobial agents in both settings. Imipenem and meropenem were traditionally the most effective antimicrobials against A. baumannii complex [6], but carbapenem-resistant A. baumannii complex (CRAB) has become common worldwide [7]. Due to treatment failure, drug-resistant strains have been associated with higher mortality and prolonged hospital stay compared with susceptible ones. The major risk factors for the spread of multidrug-resistant organisms, including CRAB, are poor adherence to infection control measures and overuse of certain antimicrobials [8].
The misuse and overuse of antibiotics is widespread, not only in developing countries but also in the developed world, and this inappropriate use of antibiotics has led to the rise in antimicrobial resistance. The emergence and spread of antimicrobial resistance is a complex problem driven by numerous interconnected factors, such as under- or overuse of antimicrobials [9]. However, it is also known that the genetic mechanisms used by bacteria to acquire resistance to antibiotics not only promote their spread within hospital environments, but also confer stability on the resistance genes, even subsequently, in situations of absence of exposure to antimicrobials [10]. In recent years, an increased effort has been directed towards controlling antibiotic use and raising public awareness of the need for prudent use of antibiotics. Over the past decade, many surveillance efforts have drawn attention to this phenomenon [11,12].
This interaction between antibiotic consumption and the development of bacterial resistance to them is of particular interest with regard to A. baumannii complex. Given that new therapeutic options for treating infections caused by A. baumannii complex with high levels of resistance are not expected to become available in the near future, it becomes imperative to study measures that might be capable of improving the antimicrobial susceptibility of these bacterial agents, thereby making it possible to use the drugs that are available. Studies showing the effects of modifications to antibiotic prescription patterns are especially of interest in relation to combating the emergence of resistance in A. baumannii complex [9]. The objectives of this study were to give an overview of changes in antibiotic consumption and resistance of A. baumannii complex isolated from a tertiary care hospital in northeast China in nine consecutive years (2003 through 2011).
Bacterial Isolates
For Acinetobacter spp., isolates were subcultured to blood agar and MacConkey agar plates at this laboratory for purity checks and to confirm species identification. Identification was performed using the VITEK 2 system (bioMérieux, Marcy l'Etoile, France) in the microbiological laboratory of the hospital. In addition, conventional biochemical tests including oxidase, Triple Sugar Iron, growth at 42 °C, malonate, and hemolysis on sheep blood agar were used to aid in confirmation of the A. baumannii complex. Two thousand four hundred and eighty-five consecutive nonduplicate nosocomial isolates of A. baumannii complex were collected during the period from 2003 to 2011 in the hospital. Isolates of the same species from the same patient collected during the same in-patient stay were considered duplicate isolates, and only the first isolate was included in the analysis.
Antimicrobial Susceptibility Testing
In vitro susceptibilities of A. baumannii complex to 17 antimicrobial agents were determined by the disk diffusion method, and susceptibility profiles were determined using zone diameter interpretive criteria as recommended by the Clinical and Laboratory Standards Institute (CLSI) in 2011 (M100-S21). Breakpoints of cefoperazone/sulbactam were interpreted according to the supplier's recommendations. Mueller-Hinton agar (Oxoid) was used for all susceptibility tests. The proportion of resistant isolates was calculated by dividing the number of resistant isolates of A. baumannii complex by the total number of isolates tested against the corresponding antibiotic, multiplied by 100. Escherichia coli ATCC 25922, Escherichia coli ATCC 35218, Klebsiella pneumoniae ATCC 700603, and Pseudomonas aeruginosa ATCC 27853 were used as quality control strains for each batch of tests. Imipenem-resistant or meropenem-resistant A. baumannii complex was considered CRAB. For the analysis of susceptibility rates across different years and patient groups, we used the WHONET software.
Antimicrobial Utilization
We retrospectively obtained the antimicrobial utilization information for all patients by using the hospital pharmacy computer database. The evaluated period was from 2003 to 2011. The defined daily dose (DDD), developed in the World Health Organization (WHO) Anatomical Therapeutic Chemical (ATC)/DDD Index 2011 to standardize the comparison of drug usage between drugs or between different healthcare environments, is defined as the assumed average maintenance dose per day for a drug used for its main indication; it was applied here to all adult wards. The amount of antimicrobials used was calculated as DDD/1,000 patients/day as follows: total consumption measured in DDDs/(number of days in the period of data collection × number of patients) × 1,000 [9]. The six classes of antimicrobial agents analyzed in this study were: anti-pseudomonal penicillins (including mezlocillin, piperacillin, and ticarcillin), β-lactam/β-lactamase inhibitors with anti-pseudomonal effects (ampicillin/sulbactam, cefoperazone/sulbactam, piperacillin/tazobactam, and ticarcillin/clavulanate), anti-pseudomonal cephalosporins (ceftazidime, cefotaxime, ceftriaxone, cefoperazone, and cefepime), anti-pseudomonal carbapenems (imipenem/cilastatin and meropenem), anti-pseudomonal fluoroquinolones (ciprofloxacin, levofloxacin, and gatifloxacin), and aminoglycosides (amikacin, tobramycin, gentamicin, and netilmicin), modified from the classification suggested by CLSI in 2011.
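As a worked example of this consumption metric, the sketch below converts raw drug usage into DDD/1,000 patients/day; the drug amounts and patient counts are invented placeholders, and the WHO DDD value used for meropenem (3 g, parenteral) is our assumption based on the ATC/DDD index rather than a figure from this study.

```python
# DDD/1,000 patients/day = total_DDDs / (days * patients) * 1000
def ddd_per_1000_patient_days(grams_used, who_ddd_g, days, patients):
    total_ddds = grams_used / who_ddd_g
    return total_ddds / (days * patients) * 1000.0

# Hypothetical quarter: 5,400 g of meropenem (assumed WHO DDD = 3 g),
# 90 days, an average of 1,200 in-patients per day.
rate = ddd_per_1000_patient_days(5400.0, 3.0, 90, 1200)
print(f"{rate:.2f} DDD/1,000 patients/day")  # -> 16.67
```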
Statistical Analysis
A time series analysis model was used to analyze the relationships between the trend in quarterly antimicrobial consumption and the rates of CRAB over time, taking into account possible time lags (the delay before an effect of antimicrobial use is observed) and autocorrelation patterns. The Akaike information criterion (AIC) was used to check for possible autocorrelation. Logistic regression analysis was performed to analyze the trends in rates of susceptibility of A. baumannii complex to antimicrobials within the study period. All analyses were performed with the Statistical Package for the Social Sciences version 18.0 (SPSS, Chicago, IL, USA). All reported p values were two-sided, and values of p < 0.05 were considered statistically significant.
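The core of such a lagged regression can be sketched in Python as follows (using statsmodels OLS rather than the SPSS procedure actually used in the study; the quarterly series are synthetic placeholders):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
quarters = 36                                   # 2003-2011, quarterly
carbapenem_use = np.linspace(5, 40, quarters) + rng.normal(0, 2, quarters)
crab_rate = 10 + 1.2 * np.roll(carbapenem_use, 1) + rng.normal(0, 3, quarters)

# Regress the CRAB rate on consumption lagged by one quarter.
lag = 1
y = crab_rate[lag:]
X = sm.add_constant(carbapenem_use[:-lag])
model = sm.OLS(y, X).fit()
print(model.params)    # [intercept, beta]; beta estimates the lagged effect
print(model.pvalues)
```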
Management of multidrug-resistant A. baumannii complex infections is a great challenge for physicians and clinical microbiologists. Its ability to survive in the hospital milieu and to persist for extended periods of time on surfaces makes it a frequent cause of healthcare-associated infections and has led to multiple outbreaks [13,14]. In humans, A. baumannii complex has been isolated from all culturable sites. A. baumannii complex can form part of the bacterial flora of the skin, particularly in moist regions such as the axillae, groin, and toe webs, and up to 43.0% of healthy adults can have colonization of skin and mucous membranes, with higher rates among hospital personnel and patients [15]. The most common specimen types in the present study were respiratory samples, followed by secretions and pus, over the years in this hospital.
Changes in Resistance to Different Antimicrobial Agents over the Years
Antibiotic susceptibility testing showed that over 40.0% of A. baumannii complex isolates were resistant to all 17 antimicrobial agents in 2011. The rates of antimicrobial resistance in A. baumannii complex increased significantly over the nine years. The resistance rates of A. baumannii complex to piperacillin, piperacillin/tazobactam, ticarcillin/clavulanic acid, trimethoprim/sulfamethoxazole, ampicillin/sulbactam, ceftazidime, cefotaxime, ceftriaxone, cefepime and gentamicin were mostly above 60.0%, especially in the last five years. The resistance rates for imipenem, meropenem, amikacin, cefoperazone/sulbactam, ciprofloxacin, levofloxacin and gatifloxacin were relatively lower; however, they mostly exceeded 30.0%, with a substantial increase during the nine years in the hospital. Resistance to piperacillin, ticarcillin/clavulanic acid, cefoperazone/sulbactam, imipenem, meropenem, cefotaxime, ceftriaxone, amikacin, ciprofloxacin, levofloxacin, and gatifloxacin increased slowly before 2006 and then increased significantly thereafter (Table 2). The resistance rates of these antimicrobials did not increase significantly from 2003 to 2006, perhaps owing to the small number of cases in this period. In contrast, the rates of resistance to piperacillin/tazobactam, ampicillin/sulbactam, ceftazidime, cefepime, and trimethoprim/sulfamethoxazole increased steadily over the years (Table 2).
The secular trend of resistance to piperacillin, ticarcillin/clavulanic acid, piperacillin/tazobactam, imipenem, ceftazidime, gentamicin, amikacin, and gatifloxacin over the years is shown in Figure 1.
Due to long-term evolutionary exposure to soil organisms that produce antibiotics, A. baumannii complex can develop antibiotic resistance at a much faster pace than other Gram-negative organisms [16]. The emergence of antimicrobial-resistant A. baumannii complex is due both to the selective pressure exerted by the use of broad-spectrum antimicrobials and to the transmission of strains among patients, although the relative contributions of these mechanisms are not yet known [17]. Carbapenems such as imipenem and meropenem are drugs of last resort for the treatment of multidrug-resistant pathogens including A. baumannii complex. However, the incidence of carbapenem resistance in A. baumannii complex increased steadily in the 2000s [7,18]. This study of 2,485 A. baumannii complex isolates revealed a continuous increase of antimicrobial resistance over the years. Resistance to carbapenems, which is often accompanied by resistance to multiple other agents, has increased in all parts of the world. Our study revealed a rapid increase in the prevalence of CRAB over the years in the hospital, from 11.3% in 2003 to 59.1% in 2011. The results of the present study indicate a heavy burden of CRAB in the hospital.

Table 3. Correlation between quarterly consumption of antimicrobial agents and rates of CRAB in the First Hospital of Jilin University, 2003-2011.
Association of Hospital Antimicrobial Usage and the Rates of CRAB
Overall, the consumption of anti-pseudomonal carbapenems significantly increased during the 9-year study period. In contrast, the annual use of β-lactam/β-lactamase inhibitors with anti-pseudomonal effects and of fluoroquinolones significantly decreased. Use of anti-pseudomonal aminoglycosides and cephalosporins remained stable, whereas penicillin use fluctuated over the years (Table 3). The associations between the rates of CRAB and the quarterly consumption of antimicrobial agents of different classes from 2003 to 2011 are shown in Table 3. The results indicate that the quarterly use of anti-pseudomonal carbapenems was strongly correlated with the increase of quarterly CRAB (β = 1.661; p < 0.001). None of the other classes of antimicrobial agents was significantly associated with the increase in quarterly CRAB (Table 3).
Previous studies on the influence of antibiotic exposure on the risk of acquiring CRAB demonstrated that prior usage of carbapenems [8,19,20] and cephamycins [21] might play a role. Carbapenems may be considered the treatment of choice for empirical treatment of patients with bacteraemia due to ESBL-producing Enterobacteriaceae in China; following the increase of infections due to ESBL-producing Enterobacteriaceae, the use of carbapenems increased substantially in China. We found that prior exposure to imipenem and meropenem was associated with CRAB acquisition in this study. Imipenem and meropenem are broad-spectrum antibiotics with activity against most Gram-negative bacteria, including many nonfermentative Gram-negative bacilli. It is therefore understandable that carbapenem usage could change the bacterial flora in patients and facilitate the colonization and/or infection with resistant bacteria such as CRAB. We found that the quarterly use of anti-pseudomonal carbapenems was strongly correlated with the increase of quarterly CRAB, whereas none of the other classes of antimicrobial agents was significantly associated with the increase in quarterly CRAB.
Environmental contamination was also found to be important in the outbreaks of CRAB. Implicated items included respiratory equipment, ventilator tubing, suctioning equipment, bed rails, curtains, ambu bags, washbasins, trunking, peak flow meters, intravenous catheters, etc. [22]. Contaminated hands of healthcare workers were found to be involved in a significant number of cases [22]. It is obvious that multiple factors result in the outbreaks of CRAB in hospital.
The limitation of this paper is that we have no pulsed-field gel electrophoresis data for A. baumannii complex, so we could not determine whether outbreaks of A. baumannii complex occurred in different wards. Our research is an ecological study that uses aggregated population-level data. Indeed, multilevel analysis would make it possible to explore the joint effects of exposure to antimicrobial agents at the individual and group levels on the acquisition of resistant bacteria.
Conclusions
Thus, to decrease the spread of A. baumannii complex infections and reduce the pace of emergence of resistance in multidrug-resistant (MDR) A. baumannii complex, it is important to promote the rational use of antimicrobials, with microbiology laboratory support, in hospitals. Dedicated use of anti-pseudomonal carbapenems would be an important intervention to control the increase of CRAB. Hand hygiene and barrier nursing are important to keep the spread of infection in check. Surveillance is therefore important in providing useful information for physicians choosing empirical antibiotics. It also helps to address specific resistance issues within a region and to identify targeted intervention measures.
"year": 2013,
"sha1": "67b7940be07531e86a9139020875c68e49f2ef26",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1660-4601/10/4/1462/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67b7940be07531e86a9139020875c68e49f2ef26",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Contributions to the synthesis of the windshield wiper mechanism with one rocker-slider blade
The paper deals with the synthesis of the windshield wiper mechanism with one rocker-slider blade. Starting from the kinematic scheme of the constructive model designed by the Mercedes-Benz motor vehicle company, a structural analysis of this complex mechanism, consisting of three kinematic chains, of which two are articulated bar mechanisms and one is a planetary mechanism, has been accomplished. The novelty of this paper consists in the presentation of an optimum synthesis method for the planar bar mechanism in two variants for driving the wiper blade. The first variant is a planar 4-bar mechanism of crank-rocker type, where the cosine theorem has been applied in order to obtain the lengths of the coupler and the crank. The second analyzed variant is a bi-contour linkage consisting of two 4-bar mechanisms linked in series. Further, the calculation of the planetary gear angular velocity in the complex mechanism with dyadic kinematic chains has been carried out; for this, the method of immobilizing the planetary carrier has been applied. Finally, the geometry of the driven linkage of the wiper mechanism of crank-slider type, which has two mobilities, has been studied. A vector equation has been written in order to obtain the length of the variable blade.
Introduction
The windshield wiper mechanism of the special type analyzed in the paper was developed by Mercedes-Benz® in the 1980's [1]. It consists of several bars, including a tetradic chain, and an annulus planetary gear which drive a rocker-slider blade (figure 1). Two types of windshield wiper mechanisms are the most used on present automobiles: with two "parallel" blades, which is still the most used, and with two opposite blades, which are increasingly used. The mechanism with one rocker-slider blade is special due to the fact that the motion of the blade is composed of both rotation and translation, which makes it more efficient than the classical types mentioned, from the point of view of the covered area.
The topological structure and geometry of the windshield wiper mechanism
The kinematic scheme of the windshield wiper mechanism of Mercedes-Benz® type, with a complex structure consisting of articulated bars and an annulus planetary gear, is presented in figure 2 [2,4].
The geometry of the kinematic scheme of the mechanism is defined by linear and angular parameters (figure 2). The mobility of the mechanism is calculated by the formula [5]:

M = Σ_m m·C_m − Σ_r r·N_r,    (1)

where m is the mobility of a kinematic joint, C_m is the number of kinematic joints of mobility m, r is the rank of the associated space, and N_r is the number of closed contours of rank r. By analyzing the kinematic scheme (figure 2), the numbers of kinematic joints and of independent kinematic contours of rank 3 (planar) are identified. By substituting these numerical values in formula (1), the mobility of the mechanism is obtained (relation (2)). The driving element 1 defines the "actuator mechanism". We notice that the first dyadic chain LD(2,3) defines the first closed kinematic chain Lc(0,1,2,3) of crank-rocker type, and then the dyadic chain LD(4,5) defines Lc(0,3,4,5) of rocker-rocker type.
The third dyadic chain LD(6,e60) defines Lc(0,5,e60,6), where the bar e60 lies along the common normal to the two tangent profiles at point H (figure 2). The fourth dyadic chain LD(7,8) defines Lc(5,6,7,8), in which there are four planar articulations and the joint (8,5). The point M belongs to the slider 8, which translates with respect to the bar 5; the latter executes an oscillating rotation with respect to the fixed point D0. For each sub-mechanism (figure 3) the mobility is calculated by formula (1), the independent parameter of sub-mechanism (a) being the angular displacement φ10 of the driving element. The linking mechanism consists of the kinematic elements shared by two neighbouring sub-mechanisms; its mobility is obtained from the open kinematic chain (0, 5, 6) and the kinematic chain (0, 5). We verify the mobility of the complex mechanism by means of the linking formula of the mechanisms (relation (3)). By means of the two chained 4-bar sub-mechanisms (figure 3c) we can obtain an oscillating rotation of the planetary carrier 5 by an obtuse angle of about 145°. This angle results from the condition that the wiper blade (fixed to the slider 8) must cover 80% of the entire windshield surface.
The 4-bar planar mechanism type crank-rocker
The oscillating rotation movement by the obtuse angle of 145° can be obtained by means of the first analyzed variant of the planar articulated mechanism type crank-rocker (figure 4b) [2].
For the geometric synthesis of this mechanism we consider as known parameters the base length and the rocker length l3, together with the extreme positions of the rocker. In order to calculate the lengths l1 and l2, the cosine theorem is applied in the triangles formed at the two extreme positions, where the crank and the coupler are collinear; see the reconstruction after this paragraph. From the two resulting linear equations (4), the lengths of the coupler and the crank are obtained (relations (5)). The disadvantage of this solution (figure 4b) lies in the fact that the crank length l1 turns out too large, and the transmitting angle in the extreme positions D1 and D5 is too small, under 20°. This is due to the geometry of the solution and could cause the mechanism to stall.
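The two linear equations can be written explicitly under the standard limit-position argument; this is a hedged reconstruction of the garbled relations (4)-(5), with l0 denoting the base length and φ± the rocker angles at the two extreme positions, not necessarily the authors' exact notation:

$$d_{\pm}^{2}=l_{0}^{2}+l_{3}^{2}-2\,l_{0}l_{3}\cos\varphi_{\pm},\qquad l_{2}+l_{1}=d_{\max},\qquad l_{2}-l_{1}=d_{\min},$$

so that

$$l_{1}=\tfrac{1}{2}\left(d_{\max}-d_{\min}\right),\qquad l_{2}=\tfrac{1}{2}\left(d_{\max}+d_{\min}\right).$$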
The 4-bar bi-contour planar mechanism
The second analyzed variant for the blade driving linkage consists of two 4-bar planar mechanisms linked in series (figure 5), of which the first is a crank-rocker mechanism (0, 1, 2, 3) and the second a rocker-rocker mechanism (0, 3, 4, 5), which allows the multiplication of the oscillating rotation angle (from the angular displacement of link 3 to the imposed angular displacement of link 5) [2]. This solution has been studied in section 2 from the mobility point of view. The geometric synthesis of the mechanism with two kinematic contours in series (figure 5) requires two stages: a) the design of the 4-bar rocker-rocker mechanism (0, 3, 4, 5), in which, in order to obtain the rotation of rocker 5 by the imposed angle, the corresponding rotation angle of rocker 3 is determined; b) the design of the 4-bar crank-rocker mechanism (0, 1, 2, 3), by choosing the length l3 and the position of the fixed articulation A0, taking into account the condition that the transmitting angle be at least the minimum admissible value. We use the procedure applied in the case of the first driving linkage variant (figure 4b), using formulas (4), by imposing the minimum angle of 30°. From the two linear equations (7) the corresponding solutions are obtained (relations (8)). In this case (figure 5) we notice that the crank length is smaller and the minimum transmitting angle is maintained above the limit of 30° [5].
The angular velocity calculus of the planetary gear for the complex mechanism with dyadic kinematic chains
For the windshield wiper blade drive we consider the second variant with two dyadic chains (figure 6), in which the bar 5 is fixed to the planetary carrier p(5), on which the planetary gear 6 is articulated. This gear 6 meshes with the fixed annulus gear 0 [2]. We notice that in the initial position D0D1, the rocker 5 is positioned at the angle φ50 with respect to the direction of the fixed articulations B0 and D0.
Together with the rocker 5 (having the angular velocity ω5) rotates the planetary carrier p(5), geometrically defined by the segment D0E and articulated at E to the planetary gear 6, which has the angular velocity ω6. With respect to the point F1 (the instantaneous centre of rotation), the planetary gear 6 rolls on the inside of the pitch circle of the fixed annulus gear 0.
For the calculation of the angular velocity ω6 of the planetary gear 6, we apply the method of immobilizing the planetary carrier p(5), resulting in the transmission ratio given below.
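The ratio itself was lost in extraction; the standard Willis relation with the carrier notionally immobilized gives, for a fixed annulus (ω0 = 0) and with z0, z6 the tooth numbers of the annulus 0 and the planet 6, the following reconstruction under these assumptions (not necessarily the authors' exact expression):

$$i_{60}^{(5)}=\frac{\omega_{6}-\omega_{5}}{\omega_{0}-\omega_{5}}=\frac{z_{0}}{z_{6}}\;\;\Rightarrow\;\;\omega_{6}=\omega_{5}\left(1-\frac{z_{0}}{z_{6}}\right).$$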
The geometry of the driven mechanism type crank-slider
The driven mechanism is of crank-slider type with the wiper blade (figure 7) and represents the third kinematic chain in the configuration of the complex windshield wiper mechanism with bars and gears (figure 2) [2]. Point M (the end of the wiper blade) executes a rotation with respect to the fixed articulation D0 (figure 7), simultaneously with a sliding movement with respect to bar 5, through the rotation of the crank 6 with respect to the mobile articulation E.
Basically, the segment GM represents the wiper blade, which is fixed to the slider 8 and performs a rotation-sliding movement on the windshield surface. The bar 5 is a rocker with the fixed articulation D0 (figure 7a), positioned by the angle φ50. The geometry of the crank-slider mechanism is defined by the constant and variable parameters of the vector contour EFG (figure 8b).
The rotation angle of the crank 6 (fixed together with the planetary gear 6) is obtained by means of formula (14). We choose the mobile coordinate system attached to the bar 5, along which the slider 8 translates, in order to express the vector contour.
Conclusions
The windshield wiper mechanisms are safety and comfort systems on any vehicle. These spatial mechanisms are assimilated to planar linkages with articulated bars situated in parallel planes. The advantage of the windshield wiper mechanism with one rocker-slider blade is that it covers a wiping area about 20% larger than the classical mechanism with two parallel blades. However, its disadvantage is the decreased speed due to the use of only one blade.
Only two of the four studied solutions for the driving linkages of this wiper mechanism have been presented here: one 4-bar mechanism of crank-rocker type and two 4-bar mechanisms linked in series, of crank-rocker and rocker-rocker type. The other two solutions, studied in another paper, are: a mechanism with a triadic chain and a mechanism with a tetradic chain.
Figure 1. The windshield wiper mechanism with one rocker-slider blade.
Figure 3. The three sub-mechanisms (independent kinematic chains): crank-slider with oscillating guide (a), annulus planetary gear (b) and two chained 4-bar linkages (crank-rocker and rocker-rocker) (c).
Figure 4. a) The discrete positions of the wiper blade on the windshield; b) the 4-bar mechanism of crank-rocker type for achieving the angular displacement.
Figure 5. The kinematic scheme regarding the optimum synthesis of the articulated mechanism in the second variant with two dyadic chains.
Figure 6. Kinematic scheme of the windshield wiper mechanism in two positions.
Figure 7. Kinematic scheme of the bi-mobile driven mechanism.
Figure 8. Kinematic scheme of the crank-slider mechanism (a) with the vector contour (b).
"year": 2024,
"sha1": "b864b6cfcf54496d1a16d566dfcdf849257a156c",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/1303/1/012042/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "218973d5d4b330db5095134ab540fa5bac4cdae9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Quasiparticle Interference, quasiparticle interactions and the origin of the charge density-wave in 2H-NbSe$_{2}$
We show that a small number of intentionally introduced defects can be used as a spectroscopic tool to amplify quasiparticle interference in 2H-NbSe$_{2}$, that we measure by scanning tunneling spectroscopic imaging. We show from the momentum and energy dependence of the quasiparticle interference that Fermi surface nesting is inconsequential to charge density wave formation in 2H-NbSe$_{2}$. We demonstrate that by combining quasiparticle interference data with additional knowledge of the quasiparticle band structure from angle resolved photoemission measurements, one can extract the wavevector and energy dependence of the important electronic scattering processes thereby obtaining direct information both about the fermiology and the interactions. In 2H-NbSe$_{2}$, we use this combination to show that the important near-Fermi-surface electronic physics is dominated by the coupling of the quasiparticles to soft mode phonons at a wave vector different from the CDW ordering wave vector.
In many complex materials including the two dimensional cuprates, the pnictides, and the dichalcogenides the electronic ground state may spontaneously break the translational symmetry of the lattice. Such density wave ordering can arise from Fermi surface nesting, from strong electron-electron interactions, or from interactions between the electrons and other degrees of freedom in the material, such as phonons. The driving force behind the formation of the spatially ordered states and the relationship of these states to other electronic phases such as superconductivity remains hotly debated.
Scanning tunneling spectroscopy (STS) has emerged as a powerful technique for probing the electronic properties of such ordered states at the nanoscale [1][2][3] due to its high energy and spatial resolution. The position dependence of the current I-voltage V characteristics measured in STS experiments maps the energy dependent local density of states ρ(r, E) [4,5].
Correlations between the ρ(r, E) at different points at a given energy reveal the pattern of standing waves produced when electrons scatter off of impurities [6]. These quasiparticle interference (QPI) features may be analyzed to reveal information about the momentum space structure of the electronic states [7]. The intensity of the QPI signals as a function of energy and momentum also contains information about the electronic interactions in the material [8,9].
In this work, we take the ideas further, showing how impurities can be used intentionally to enhance QPI signals in STS experiments and how the combination of the enhanced QPI signals with electronic spectroscopic information available in angle-resolved photoemission (ARPES) measurements can be used to gain insight into the physics underlying electronic symmetry breaking and quasiparticle interactions. By observing the electronic response to the addition of dilute, weak impurities to the charge density wave material 2H-NbSe 2 we directly measure the dominant electronic scattering channels. We show conclusively that Fermi surface nesting does not drive CDW formation and that the dominant quasiparticle scattering arises from soft-mode phonons.
Our theoretical analysis begins from a standard relation between the current-voltage characteristic dI/dV at position r and voltage difference V = E and the electron Green's function G, valid if the density of states in the tip used in the STS experiment is only weakly energy dependent:

dI(r, E)/dV ∝ −(1/π) Im Tr[M G(r, r; E) M†].    (1)

Here M is a combination of the tunneling matrix element and wave functions (see supplementary material); M and G are matrices in the space of band indices.
To calculate the changes in dI/dV induced by impurities we observe that in the presence of a single impurity placed at position R_a the electron Green's function is changed from the pure-system form G to G̃, given by

G̃(r, r'; E) = G(r, r'; E) + G(r, R_a; E) T(R_a, R_a; E) G(R_a, r'; E).    (2)

Here T(r, r', E) is the T-matrix describing electron-impurity scattering as renormalized by electron-electron interactions. It is a matrix in the space of band indices, and we suppress spin indices, which play no role in our considerations.
Assuming (see supplementary material) that M is structureless (couples all band indices equally), Fourier transforming and assuming that interference between different impurities is not important gives for the impurity-induced change in the tunneling current

$$\delta\frac{dI}{dV}(\mathbf{k},E) \propto -\operatorname{Im}\Big[\sum_a e^{i\mathbf{k}\cdot\mathbf{R}_a}\,\frac{1}{v}\sum_{mn}\sum_{\mathbf{p}} G_m(\mathbf{p},E)\, T^{mn}(\mathbf{p},\mathbf{p}+\mathbf{k};E)\, G_n(\mathbf{p}+\mathbf{k},E)\Big], \qquad (3)$$

with v the volume of the system and the scattering function of complex argument z given in the band basis in which G is diagonal as

$$\Lambda_{mn}(\mathbf{k},z) = \frac{1}{v}\sum_{\mathbf{p}} G_m(\mathbf{p},z)\, G_n(\mathbf{p}+\mathbf{k},z). \qquad (4)$$

At this stage no assumption has been made about interactions.
From Eq. 3 we see that structure in δdI(k, E)/dV can arise from structure in the combination $G_{\mathbf{p}}G_{\mathbf{p}+\mathbf{k}}$ of electron propagators (Fermi surface nesting) or from structure in the T-matrix, the latter arising either from properties of the impurity or from interactions involving the scattered electrons. Combining an STS measurement with an independent determination of G (for example by ARPES) allows the two physical processes to be distinguished. However, a direct analysis of Eq. 3 requires precise measurement of the positions of all of the impurities so that the $\sum_a e^{i\mathbf{k}\cdot\mathbf{R}_a}$ factor can be divided out. This is impractical at present, so we focus on |δdI(k, E)/dV|, where for dilute randomly placed impurities the prefactor can be replaced by the square root of the impurity density. Eq. 3 can be further simplified if one assumes that the T-matrix depends primarily on the momentum transfer k and has negligible imaginary part (i.e. scattering phase shift near 0 or π). Such an assumption is particularly appropriate when the scattering arises from weakly scattering uncharged point impurities. We find

$$\left|\delta\frac{dI}{dV}(\mathbf{k},E)\right| \propto \sqrt{n_{imp}}\;\left|T_{\mathbf{k}}(E)\right| \sum_{mn} B_{mn}(\mathbf{k},E), \qquad (5)$$

with

$$B_{mn}(\mathbf{k},E) = -\frac{1}{\pi}\operatorname{Im}\Lambda_{mn}(\mathbf{k},E+i0^{+}). \qquad (6)$$

An integral of B over the occupied states yields the components of the noninteracting (Lindhard) susceptibility (see supplementary material),

$$\chi_{mn}(\mathbf{k}) = \int_{-\infty}^{\infty} dE\, f(E)\, B_{mn}(\mathbf{k},E), \qquad (7)$$

where f is the Fermi function. This observation permits an interesting analysis. If the impurity scattering potential $V_{imp}$ is structureless and weak, a measurement of the QPI then directly yields the Lindhard susceptibility. Conversely, if the impurities are known to be weak, differences between the measured QPI intensity and the Lindhard susceptibility reveal the effects of interactions, which appear formally as a "vertex correction" of the basic impurity-quasiparticle scattering amplitude $V_{imp}$ (see supplementary material).
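To make Eqs. 4-7 concrete, the sketch below evaluates B(k, E) and the Lindhard integral on a discrete grid for a single hypothetical band. The square-lattice dispersion, hopping t, broadening eta, and grid size are illustrative choices, not the NbSe$_2$ parameters used in the paper.

```python
import numpy as np

# Illustrative single-band toy model on a 2D momentum grid.
N, t, eta = 256, 1.0, 0.02
p = 2 * np.pi * np.fft.fftfreq(N)
PX, PY = np.meshgrid(p, p, indexing="ij")
eps = -2 * t * (np.cos(PX) + np.cos(PY))   # hypothetical dispersion eps(p)

def Lambda(E):
    """Lambda(k, E+i0) = (1/v) sum_p G(p) G(p+k), cf. Eq. 4, evaluated with
    the convolution theorem (exact for a single band on a periodic grid)."""
    G = 1.0 / (E - eps + 1j * eta)
    return np.fft.fft2(np.fft.ifft2(G) * np.fft.fft2(G)) / (N * N)

def B(E):
    """B(k, E) = -(1/pi) Im Lambda(k, E+i0+), cf. Eq. 6."""
    return -Lambda(E).imag / np.pi

# Lindhard susceptibility, Eq. 7, with the Fermi function replaced by a
# step (T = 0): integrate B over the occupied states up to E_F = 0.
energies = np.linspace(-4.5, 0.0, 200)
chi = np.trapz([B(E) for E in energies], energies, axis=0)
```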
We apply these concepts to 2H-NbSe$_2$, a quasi-2D transition metal dichalcogenide that displays a charge density wave (CDW) phase transition below $T_{CDW}$ ≈ 33 K [10][11][12]. The physics of this ordered state is still under debate. While some experiments point to an important role of Fermi surface (FS) nesting [13,14], perhaps accompanied by a van Hove singularity [15,16], an alternative scenario argues that the nesting of the FS is not strong enough to produce the CDW instability [17,18], and proposes instead that a strong electron-phonon coupling [19,20] drives the transition. To enhance the QPI signal we study samples lightly doped with isovalent S impurities, whose concentration we estimate to be approximately 1% from STM topographic images. In Fig. 1(a), we show a typical topographic image taken at 27 K (T < $T_{CDW}$) that displays the S defects as well as a few Se vacancies. In Fig. 1(b), we show a topographic image of pristine NbSe$_2$ in the CDW state for comparison. The CDW persists in the S-doped material, as evidenced by its coverage across the entire sample, although the doped material is clearly less homogeneous than the pristine sample. This is also evident in the 2D Fast Fourier transform (2D-FFT) of the topographic images for the doped (Fig. 1(c)) and pristine (Fig. 1(d)) samples. Well-defined CDW peaks are visible in both. The Fourier transforms of our STS conductance maps show peaks at k ≈ $k_{Bragg}$/3 (black arrows in Fig. 2(b)) at all energies measured by STS. This feature has been seen before in the pristine sample [21] and is a consequence of the CDW order. A second feature occurs along the same direction as the CDW wavevector but at an energy-dependent position. Since this feature disperses in k as E is changed, we identify it as a QPI signal. Thus, the light doping introduced in the system successfully enhances the QPI signal while not altering the electronic structure of NbSe$_2$.
From Fig. 2(b) we see that the QPI peaks are located at wavevectors close to the Brillouin zone edge for E = -110 meV and move towards the zone center with increasing energy. We see however that for all energies presented in this paper the QPI peaks remain far from the CDW wave vector. This is illustrated more clearly in Fig. 3(a), which presents a line-cut of the STS data along the Γ-M direction for each of the energy slices of the STS maps.
At the Fermi energy, the QPI signal is separated from the CDW signal by Δk ≈ (1/3)$k_{CDW}$. Extrapolation to higher energies suggests that $k_{QPI}$ would reach $k_{CDW}$ only at E ≈ 300 meV above the Fermi level.
Combining ARPES and STS measurements allows us to extract important additional information about the nature of scattering near the Fermi level in the CDW state of NbSe$_2$.
Representative ARPES measurements are presented in Fig. 3(b). Comparison to similar data obtained on the pristine material [14,22] revealed no significant changes in the band dispersion, further confirming that study of the lightly S-doped system reveals information relevant to pristine NbSe$_2$. We fit our ARPES measurements in Fig. 3(b) to a tight-binding model (see supplementary material), finding a renormalization of the observed bands relative to the calculated [17,18] bands, as previously noted [23]. Using this band structure, we then calculate G and hence $B_{mn}$(k, E) from Eq. 6; the result is shown in Fig. 4(a). To further characterize the differences between the quasiparticle band structure and the QPI, we assume that the T-matrix couples all states equally ($T^{mn}_{\mathbf{k}}(E) = T_{\mathbf{k}}(E)$, independent of band indices mn) and construct an experimental estimate of $T_{\mathbf{k}}(E)$ from Eq. 5 by dividing the measured |δdI(k, E)/dV| by the calculated $\sum_{nm} B_{nm}$(k, E). The resulting |$T_{\mathbf{k}}(E)$| is shown in Fig. 4(b). The strong and non-dispersing peak seen in $T_{\mathbf{k}}(E)$ at the CDW wave vector (indicated by the green arrows in Fig. 4(b)) is similar to the structure factors seen in X-ray diffraction experiments [24,25]. It is caused by the deformation of the band structure due to the periodic potential arising from the CDW ordering. Its lack of dispersion shows directly that this feature in our STS signal does not arise from quasiparticles.
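The division step just described is straightforward to implement numerically. Below is a minimal sketch assuming hypothetical arrays qpi (the measured |δdI/dV(k, E)| stack) and B_sum (the band-summed B computed from the ARPES-fitted bands) on the same grid; the regularization floor is an illustrative safeguard, not part of the published analysis.

```python
import numpy as np

def estimate_T(qpi, B_sum, floor=1e-3):
    """Estimate |T_k(E)| = |delta(dI/dV)(k, E)| / sum_nm B_nm(k, E), cf. Eq. 5.

    qpi   : array (n_E, n_kx, n_ky) of measured QPI amplitude maps.
    B_sum : array of the same shape, band-summed B from the fitted bands.
    floor : relative cutoff masking points where B is too small for the
            division to be meaningful (set to NaN there).
    """
    mask = B_sum > floor * B_sum.max()
    T = np.full_like(qpi, np.nan)
    T[mask] = qpi[mask] / B_sum[mask]
    return T

# Hypothetical usage with random stand-in data.
rng = np.random.default_rng(1)
qpi = rng.random((5, 64, 64))
B_sum = rng.random((5, 64, 64)) + 0.1
T_abs = estimate_T(qpi, B_sum)
```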
We now consider the structure highlighted as a strong peak in T near the zone edge in the Γ-M direction, indicated by the purple arrows in Fig. 4(b). All available evidence suggests that the potential induced by the S-dopants is weak and structureless, so that the enhancement is an interaction effect. The strong momentum dependence of |T| indicates that the intensity variation of the STS signal is not explained by the quasiparticle band structure. However, it is significant that at all measured energies, the strong peak in T lies within the |k| region delineated by the group of approximately concentric circles seen in the calculated B (denoted by the black boxes in Fig. 4(a)). The main contribution to these circles arises from $2k_F$ backscattering across each of the Fermi surfaces. This suggests that the observed QPI arises from an enhancement of backscattering [26] by a strongly direction-dependent interaction [27]. Available calculations [19,28] suggest that soft acoustic phonons with wavevector along the Γ-M direction are strongly coupled to electrons for a wide range of |k|. By contrast, the high-intensity regions in B near the K point arise from approximate nesting of the Fermi surfaces centered at Γ and K; that these are not seen in the measured QPI again confirms that nesting is not enhanced by interactions and is not important in this material. We therefore propose that the observed QPI signal arises from a renormalization of a structureless impurity potential by the electron-phonon interaction.
In summary, we used dilute doping of NbSe$_2$ with isovalent S atoms to enhance the QPI signal and, by combining STS and ARPES measurements, were able to show that the QPI signal measures more than just the fermiology of the material. We were able to confirm that the CDW does not arise from Fermi-surface nesting, and we identified an important quasiparticle interaction, most likely of electron-phonon origin. Our approach reveals that the response to deliberately-induced dopants is an important spectroscopy of electronic behavior. We expect it can be extended to many other systems.

SUPPLEMENTARY MATERIAL

I. RELATION BETWEEN QPI INTENSITY AND ELECTRONIC PROPERTIES

Here we present specifics of the relation between the measured QPI intensity and basic electronic properties including the bare charge susceptibility.
A. Derivation of Eq. 3 of main text
We start from the basic tunneling Hamiltonian connecting the tunneling tip to a state of the system of interest:

$$H_{tun} = t\, c^{\dagger}_{tip}\,\psi(\mathbf{r}) + \mathrm{h.c.} \qquad (8)$$

Expressing the system operator at position r in terms of the operators $\psi_{n\mathbf{p}}$ that annihilate electrons in band state n and momentum p in the first Brillouin zone as

$$\psi(\mathbf{r}) = \sum_{n\mathbf{p}} u_{n\mathbf{p}}(\mathbf{r})\, e^{i\mathbf{p}\cdot\mathbf{r}}\, \psi_{n\mathbf{p}}, \qquad (9)$$

performing the usual second order perturbative analysis of the tunneling transition rate and differentiating with respect to the voltage difference between tip and sample gives

$$\frac{dI}{dV}(\mathbf{r};E) \propto -\operatorname{Im}\sum_{mn;\mathbf{pq}} u^{*}_{m\mathbf{p}}(\mathbf{r})\, u_{n\mathbf{q}}(\mathbf{r})\, e^{i(\mathbf{q}-\mathbf{p})\cdot\mathbf{r}}\, G_{mn}(\mathbf{p},\mathbf{q};E). \qquad (10)$$

Writing a position r in unit cell j (central position $\mathbf{R}_j$) as $\mathbf{r} = \mathbf{R}_j + \boldsymbol{\xi}$ and averaging over the in-unit-cell coordinate ξ gives

$$\frac{dI}{dV}(\mathbf{R}_j;E) \propto -\operatorname{Im}\sum_{mn;\mathbf{pq}} M^{tun}_{mn;\mathbf{pq}}\, e^{i(\mathbf{q}-\mathbf{p})\cdot\mathbf{R}_j}\, G_{mn}(\mathbf{p},\mathbf{q};E), \qquad (11)$$

with

$$M^{tun}_{mn;\mathbf{pq}} = \int_{\mathrm{unit\ cell}} d\boldsymbol{\xi}\; u^{*}_{m\mathbf{p}}(\boldsymbol{\xi})\, u_{n\mathbf{q}}(\boldsymbol{\xi})\, e^{i(\mathbf{q}-\mathbf{p})\cdot\boldsymbol{\xi}}. \qquad (12)$$

Finally, assuming that the combination of the tunneling matrix element and the atomic wave functions has no interesting spatial structure (M independent of p, q) and evaluating the momentum sums in Eq. 11 gives

$$\frac{dI}{dV}(\mathbf{r};E) \propto -\operatorname{Im}\operatorname{Tr}\left[M\, G(\mathbf{r},\mathbf{r};E)\right], \qquad (13)$$

which is Eq. (1) of the main text.
B. Relation between QPI and Lindhard function
The static Lindhard or particle-hole bubble susceptibility representing transitions between bands n and m, $\chi_{mn}(\mathbf{k}, \nu = 0)$, may be written

$$\chi_{mn}(\mathbf{k},\nu=0) = \frac{T}{v}\sum_{i\omega_l,\,\mathbf{p}} G_m(\mathbf{p},i\omega_l)\, G_n(\mathbf{p}+\mathbf{k},i\omega_l). \qquad (14)$$

Evaluating the sum in the usual way, by converting to a contour integral in the complex plane which is evaluated in terms of the discontinuity across the branch cut along the real axis, gives Eq. 7 of the main text.
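For completeness, the contour step can be made explicit. A minimal derivation, using the standard Matsubara summation identity for a function h(z) analytic off the real axis, with conventions chosen to be internally consistent with Eqs. 4-7:

$$T\sum_{i\omega_l} h(i\omega_l) = -\frac{1}{2\pi i}\oint_{\mathcal{C}} dz\, f(z)\, h(z) = -\frac{1}{\pi}\int_{-\infty}^{\infty} dE\, f(E)\, \operatorname{Im} h(E+i0^{+}).$$

Applying this with $h = \sum_{\mathbf{p}} G_m G_n / v$ gives

$$\chi_{mn}(\mathbf{k}) = -\frac{1}{\pi}\int_{-\infty}^{\infty} dE\, f(E)\, \operatorname{Im}\Lambda_{mn}(\mathbf{k},E+i0^{+}) = \int_{-\infty}^{\infty} dE\, f(E)\, B_{mn}(\mathbf{k},E),$$

which is the stated result.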
C. Interactions and the T-matrix
The basic electron-impurity vertex is shown diagrammatically in Fig. 1(a). Multiple scattering off the impurity is shown diagrammatically in Fig. 1(b). A general vertex correction (interaction of the incoming and outgoing electron) is shown in Fig. 1(c). The particular case of an electron-phonon renormalization is shown in Fig. 1(d).
II. TIGHT BINDING FIT TO ARPES DATA
Energy distribution curves (EDCs) and momentum distribution curves (MDCs) for the ARPES spectra are shown in Fig. 2(a) and Fig. 2(b). We fit these two bands to a previously-proposed [23] five-nearest-neighbor tight-binding model to extract the band dispersions (red solid curves). The bands of the tight-binding model are given by the following expression:

$$E_i(k_x, k_y) = t_{0,i} + t_{1,i}\,(2\cos\eta_x\cos\eta_y + \cos 2\eta_x) + t_{2,i}\,(2\cos 3\eta_x\cos\eta_y + \cos 2\eta_y) + t_{3,i}\,(2\cos 2\eta_x\cos 2\eta_y + \cos 4\eta_x) + t_{4,i}\,(\cos\eta_x\cos 3\eta_y + \cos 5\eta_x\cos\eta_y + \cos 4\eta_x\cos 2\eta_y), \qquad (15)$$

with

$$\eta_x = \tfrac{1}{2}k_x a, \qquad \eta_y = \tfrac{\sqrt{3}}{2}k_y a. \qquad (16)$$

These expressions model the quasi two-dimensional Nb-derived bands that are observed in ARPES experiments. A Se-derived band with strong $k_z$ dispersion is also found in DFT calculations [18] but is typically not seen in ARPES [29]. The strong $k_z$ dispersion of this band also means that it will contribute less to the QPI. We do not consider it here. The parameters of the model are given in Table I. From the tight binding parameters we calculate the components of B via the computationally efficient convolution expressions [18]

$$\Lambda_{mn}(\mathbf{k},E) = \sum_{\mathbf{r}} e^{-i\mathbf{k}\cdot\mathbf{r}}\, g_m(-\mathbf{r},E)\, g_n(\mathbf{r},E), \qquad (17)$$

$$g_n(\mathbf{r},E) = \frac{1}{v}\sum_{\mathbf{p}} \frac{e^{i\mathbf{p}\cdot\mathbf{r}}}{E - E_n(\mathbf{p}) + i\delta}, \qquad (18)$$

from which B follows via Eq. 6.
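As a sanity check of the dispersion above, here is a small sketch evaluating $E_i(k_x, k_y)$ on a grid. The hopping parameters below are placeholders (the fitted values are in Table I of the supplement, not reproduced here), and the lattice constant a is set to 1.

```python
import numpy as np

def tb_band(kx, ky, t, a=1.0):
    """Five-nearest-neighbor tight-binding dispersion E_i(kx, ky), Eq. 15.

    t : sequence (t0, t1, t2, t3, t4) of hopping parameters for band i.
        Placeholder values are used below; the fitted values are in Table I.
    """
    ex = 0.5 * kx * a                 # eta_x
    ey = 0.5 * np.sqrt(3.0) * ky * a  # eta_y
    t0, t1, t2, t3, t4 = t
    return (t0
            + t1 * (2 * np.cos(ex) * np.cos(ey) + np.cos(2 * ex))
            + t2 * (2 * np.cos(3 * ex) * np.cos(ey) + np.cos(2 * ey))
            + t3 * (2 * np.cos(2 * ex) * np.cos(2 * ey) + np.cos(4 * ex))
            + t4 * (np.cos(ex) * np.cos(3 * ey)
                    + np.cos(5 * ex) * np.cos(ey)
                    + np.cos(4 * ex) * np.cos(2 * ey)))

# Placeholder parameters, for illustration only.
t_demo = (0.1, -0.2, 0.05, -0.02, 0.01)
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 201),
                     np.linspace(-np.pi, np.pi, 201), indexing="ij")
E = tb_band(kx, ky, t_demo)
```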
III. PARTIAL SUSCEPTIBILITY CALCULATED FROM STS
Proceeding from Eq. 5 of the main text, we observe that if the T-matrix has negligible energy dependence and couples all bands equally, then the integral of the measured QPI signal over a range from $-E_0$ (chosen such that $E_0 \gg k_BT$) to the Fermi level is, up to a constant, just an approximation $\chi_0$ to the sum of all components of the static susceptibility:

$$\chi_0(\mathbf{k}) \equiv \sum_{mn}\int_{-E_0}^{0} dE\, B_{mn}(\mathbf{k},E) \propto \int_{-E_0}^{0} dE\, \left|\delta\frac{dI}{dV}(\mathbf{k},E)\right|. \qquad (19)$$

At the temperature of the experiment (27 K), the Fermi function can be replaced with a step function, and the integral of the dI/dV signal from $-E_0$ to 0 is simply the experimentally measured current $I(-E_0)$. We choose a cutoff $E_0$ = 150 meV ≫ $k_BT$ ≈ 2.5 meV, and plot the experimentally measured I(-150 mV) in Figure 5, where the portion of the signal coming from the CDW is highlighted with the blue rectangle while the dispersing QPI signal is indicated by the red rectangle. Also shown in Figure 5 is the calculated χ(k) obtained from Eq. 7 of the text using the B computed from the ARPES bands as in Eqs. 17 and 18.
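The shortcut of using the measured current follows because dI/dV integrated over bias simply reconstructs I. A brief sketch, assuming a hypothetical stack of QPI amplitude maps on a bias grid:

```python
import numpy as np

def integrated_qpi(qpi_stack, biases, E0=0.150):
    """Integrate QPI amplitude maps from -E0 (eV) to zero bias, cf. Eq. 19.

    qpi_stack : array (n_bias, n_kx, n_ky) of |delta(dI/dV)(k, E)| maps.
    biases    : 1D array of bias values in eV, ascending.
    For the raw real-space signal the same integral is just the measured
    current I(-E0), which is how the data in Fig. 5 were obtained.
    """
    sel = (biases >= -E0) & (biases <= 0.0)
    return np.trapz(qpi_stack[sel], biases[sel], axis=0)
```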
The broad peaks in the χ calculated from ARPES (Fig. 5(a)) are located at k ≈ 0.74 $k_{CDW}$, as has been noted before [22]. The clear disagreement between the two figures points to the key role played by the momentum dependence of the T-matrix in enhancing certain scattering wave vectors in the observed QPI (Eq. 19).
IV. REAL SPACE DI/DV MAPS
We present here a sequence of dI/dV measurements for different energies. In Figure 7 we present an expanded view of the comparison between the Fourier transform of the measured STS data and the B calculated from ARPES at energies ranging from well below the Fermi level to well above. We zoom in on a particular region of k-space that shows dispersing features. The left half of each subfigure is the FT-STS data while the right half is the calculated B. The dispersing QPI feature is located along the Γ-M direction at wavevectors larger than $k_{CDW}$. Zooming in to the region of k-space where QPI is observed, we see from Figure 7 that the FT-STS signal is located near the edge of the Brillouin zone at energies well below $E_F$, and disperses steadily inwards at higher energies. Within this restricted region of k-space, the dispersion of the FT-STS data matches very well with the B calculated from ARPES at all energies. | 2014-08-22T23:50:34.000Z | 2014-08-19T00:00:00.000 | {
"year": 2014,
"sha1": "7f4771addfafc2c85800afc3ca00bade934611f8",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.114.037001",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "7f4771addfafc2c85800afc3ca00bade934611f8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
173990508 | pes2o/s2orc | v3-fos-license | Interlayer Charge Transfer and Defect Creation in Type I van der Waals Heterostructures
Van der Waals heterostructures give access to a wide variety of new phenomena that emerge thanks to the combination of properties brought in by the constituent layered materials. We show here that, owing to an enhanced interaction cross section with electrons in a type I van der Waals heterostructure made of single-layer molybdenum disulphide and thin boron nitride films, electrons and holes created in boron nitride can be transferred to the dichalcogenide, where they form electron-hole pairs yielding luminescence. This cathodoluminescence can be mapped with a spatial resolution far exceeding what can be achieved in a typical photoluminescence experiment, and is highly valuable for understanding the optoelectronic properties at the nanometer scale. We find that in heterostructures prepared following the mainstream dry transfer technique, cathodoluminescence is locally extinguished, and we show that this extinction is associated with the formation of defects, which are detected in Raman spectroscopy and photoluminescence. We establish that, to avoid defect formation induced by low-energy electron beams and to ensure efficient transfer of electrons and holes at the interface between the layers, flat and uniform interlayer interfaces are needed, free of trapped species, whether airborne or associated with sample preparation. We show that heterostructure fabrication using a pick-up technique leads to superior, intimate interlayer contacts associated with significantly more homogeneous cathodoluminescence.
INTRODUCTION
The development of deterministic stacking of individual or few layers from layered materials such as graphite, boron nitride, or transition metal dichalcogenides (TMDCs) in the last few years now makes it possible to build van der Waals heterostructures at will [1]. Even though such structures are produced in conditions far from the ultra-high-vacuum conditions usually required to obtain very high quality heterostructures, careful preparation yields interfaces clean enough to observe phenomena requiring efficient electronic coupling between layers. For instance, long-lived interlayer excitons were observed [2,3], interlayer exciton-phonon coupling was reported in WSe$_2$/hexagonal boron nitride (h-BN) [4], and interlayer phonon coupling was demonstrated in MoS$_2$/graphene [5] as well as in TMDC-based van der Waals heterostructures [6]. These achievements demonstrate that both interlayer electronic and structural coupling can be obtained in such heterostructures. Moiré superlattices, which are the van der Waals (soft) counterpart of dislocation networks in heteroepitaxial three-dimensional semiconductors, enrich the electronic and optical properties [7][8][9][10][11].
Direct observations with high-resolution transmission electron microscopy indeed revealed locally perfect crystalline interfaces between two-dimensional materials, with no apparent defects and only a van der Waals gap a few Ångström thick, devoid of impurity species [12]. Even though beam-induced damage can be reduced by lowering the energy of the electron beam to a few tens of keV in the transmission electron microscope column [13], such analysis is destructive (the samples need to be cut and thinned down to a few nanometers), and it is tedious to extend it to the scale of the entire heterostructure. On the contrary, optical hyperspectral microscopies, mapping excitonic (photoluminescence) or vibrational (Raman) interlayer modes of the heterostructures, require no additional sample preparation and provide indirect information on the quality of interfaces [6,14]. Their downside is their limited spatial resolution, which conceals information on, e.g., strain or electronic doping variations at scales below a few 100 nm.
Cathodoluminescence (CL) is a powerful tool to study optoelectronic properties at the nanometer scale when implemented in a scanning electron microscope. Here, an electron beam of adjustable energy in the keV range excites electrons and holes, which can form electron-hole pairs (excitons) and recombine radiatively, giving local spectroscopic information. In that case the spatial resolution is linked to the size of the excitation source, i.e. the electron beam, which is as small as a few nanometers, rather than being limited by the optical diffraction limit. In fact, the spatial resolution in such an experiment could ultimately be set by the diffusion of the excitons. It can reach several hundreds of nanometers at room temperature in monolayer TMDCs but is expected to be strongly quenched at low temperature due to the enhancement of the radiative recombination rate [15]. Due to the low interaction with the electron beam, atomically thin layers are expected to produce a small signal, below the detection limit of most instruments. In fact, no CL could be measured so far on a free-standing TMDC single layer. Van der Waals heterostructures can be used to artificially enhance the interaction by encapsulating the active layer into an electronic barrier, very much like semiconductor quantum wells are built: a small band gap material (well) is surrounded by a larger band gap material (barrier). Electron beam irradiation generates hot electrons and holes inside the barrier, which can be arbitrarily thick and hence produce a significant population of hot charge carriers that relax and can be transferred into the low band gap material. This approach has been recently demonstrated with h-BN as a barrier and a single-layer TMDC (MoS$_2$, WS$_2$, WSe$_2$) as the active layer, and the decisive effect of h-BN capping was demonstrated by varying its thickness [16].
So far, cathodoluminescence was observed only in limited areas of van der Waals heterostructures, and the absence of cathodoluminescence was ascribed to a locally poor contact between the h-BN and TMDC surfaces [16]. In this scenario, it is implied that charge carrier transfer between the barrier and the active material is inefficient, and hence no exciton can be formed in the latter material. Poor-contact regions are indeed very common in heterostructures. They correspond to blisters trapped at the interfaces, where contaminants associated with the manipulation of the two-dimensional materials gather [17].
Here we demonstrate that an intimate contact between the materials is indeed key to observing efficient cathodoluminescence. Conversely, we find that in the presence of contaminants at the interfaces cathodoluminescence is quenched. This quenching is also observed in photoluminescence performed after the cathodoluminescence. It is hence not only the signature of an inefficient transfer of electrons and holes, as previously thought, but proves the creation of crystal defects inside MoS$_2$ that presumably strongly promote non-radiative exciton recombination. Such defects are detected in Raman spectroscopy. We trace back the origin of the defects to a possible chemical reaction between trapped species and the pristine MoS$_2$, promoted by the electron beam. We show that the spatial uniformity of the cathodoluminescence response of the heterostructures can be greatly enhanced by reducing the amount of contaminants at interfaces (in particular, the amount of trapped blisters), using polypropylene carbonate (PPC) to pick up and release the different materials of the heterostructure [17].

FIG. 2. Optical characterization using photoluminescence (PL) and cathodoluminescence (CL). A comparison of spatially-resolved spectra at 5 K obtained in PL and CL is presented in (a). The spectra are acquired within less than 1 µm of each other. In both cases, the signal is dominated by neutral A-exciton emission. The process of luminescence excitation in a van der Waals heterostructure with an electron beam is presented in (b); the vertical axis represents energy (band diagram). The spatially-resolved integrated CL intensity of the exciton is shown in (c), showing some strong inhomogeneities. This is not the case for the integrated PL intensity measured before CL at room temperature (d). In contrast, the integrated PL intensity measured after CL (e) shows a strong inhomogeneity presenting a spatial correlation with the CL mapping.
EXPERIMENT

Van der Waals heterostructure preparation
The van der Waals heterostructures studied in this work were assembled by two methods (Figure 1a,b). A dry viscoelastic transfer method using polydimethylsiloxane (PDMS) [18], involving two successive stamping steps, of MoS$_2$ and h-BN, and a pick-up technique using PPC [17,19], allowing direct positioning of an h-BN/MoS$_2$ stack, were both employed to encapsulate MoS$_2$ single layers between h-BN layers (∼20 nm thick). The details are provided in [20]. Optical micrographs of the h-BN/MoS$_2$/h-BN heterostructures are displayed in Figures 1c,d,e. The MoS$_2$ regions encapsulated between the top and bottom h-BN layers extend across tens of micrometers. We will first focus on the sample prepared using dry viscoelastic stamping (Figure 1a,c,d) [20]. Due to the thickness of the h-BN layers (top: 18 nm, bottom: 22 nm), a significant number of electrons and holes can be generated in h-BN. Using Monte-Carlo simulations, we have shown that at 1 keV the absorption length in h-BN is of the order of several tens of nm [20]. When the contact with single-layer MoS$_2$ is intimate, electrons and holes can be transferred into the latter material, which has a much smaller band gap than h-BN. They can then form an exciton (EX) that will eventually contribute to luminescence when recombining radiatively (A-exciton emission in Figure 2a). Alternatively, an EX can be formed directly in h-BN and either recombine radiatively or relax into MoS$_2$. The full CL process is illustrated in Figure 2b.

FIG. 3. Raman spectroscopy and spatial mapping of defects. Representative spectra of the different regions are presented in (a): pristine (i.e. before CL), PL active and PL inactive/defective. In addition to a broader A$_1$ peak, the defective regions show the emergence of a Raman signal around 230 cm⁻¹, which we refer to as the LA area. Such signal in this area is also a signature of defects. The spatially-resolved width of the A$_1$ peak is used as a metric for the presence of defects created during the CL experiment, as discussed in the main text. The map shown in (b) presents a strong spatial correlation with both CL and PL maps measured after CL (see Figure 2c,e). A map with a better spatial resolution of a smaller area is presented in (c). The black and red dots in (c) correspond to the defective and PL active spectra shown in (a), respectively. The histogram of A$_1$ widths is presented in (d), in which two peaks attributed to the absence (blue, small width) or presence (green, large width) of defects appear. The color scale meaning in (c) is provided in (d). Note that the exact same color scale is used in (b), (c) and (d) for comparison. (e) shows the spatially-resolved integrated intensity in the LA area (see (a)) at the same position as in (c). The color scale used in (e) is detailed in (f), which also shows the histogram of integrated area for the LA region.
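As a rough order-of-magnitude cross-check of the Monte-Carlo absorption length quoted above, one can use the empirical Kanaya-Okayama electron range. The sketch below applies it to h-BN with approximate material parameters (the mean atomic weight and density are rounded estimates); this is not the simulation used in [20].

```python
def kanaya_okayama_range_nm(E_keV, A, Z, rho):
    """Empirical Kanaya-Okayama electron penetration range.

    R [um] = 0.0276 * A * E^1.67 / (Z^0.89 * rho), with E in keV,
    A the (mean) atomic weight in g/mol, Z the (mean) atomic number,
    rho the density in g/cm^3. Returns the range in nm.
    """
    R_um = 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho)
    return 1e3 * R_um

# Approximate h-BN values: mean A ~ 12.4 g/mol, mean Z = 6, rho ~ 2.1 g/cm^3.
# At 1 keV this gives ~30 nm, consistent with "several tens of nm".
print(f"{kanaya_okayama_range_nm(1.0, 12.4, 6, 2.1):.0f} nm")
```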
For comparison with CL, a PL spectrum taken in the same area is presented in Figure 2a. We see that the spectra are very similar, proving their common origin, that is, radiative recombination (photon emission) of the A exciton of single-layer MoS$_2$. We also want to stress that the linewidths are both of the order of 10 meV. This shows that no additional broadening is brought by using a high-energy electron beam (in the CL experiment) as the excitation instead of light (PL experiment). This is a very important point, and rather unexpected when referring to the literature on CL. Large broadenings, associated with the large number of free charges generated, can indeed often be observed [21]. The lack of broadening demonstrated here is a strong asset for the potential application of CL in mapping properties of van der Waals heterostructures at the nanoscale. We note that the excitation energy used in our PL experiment is not sufficient (unlike in the CL experiment) to excite electrons and holes directly in h-BN, so we do not observe luminescence from this material in these conditions. The spatial mapping of the CL intensity measured at 1937 meV (±30 meV) is presented in Figure 2c. Bright and dark regions are observed, in accordance with a previous report [16]. In areas of the sample where MoS$_2$ is not encapsulated with h-BN, no CL is detected; CL is only detected in localized regions of the h-BN-encapsulated MoS$_2$ [20]. Encapsulation hence appears necessary, because h-BN is the source of the electrons and holes that eventually recombine in MoS$_2$, but not sufficient. An absence of CL in the presence of h-BN is in principle surprising, and this was proposed as evidence of a poor contact between h-BN and MoS$_2$ [16]. Atomic force microscopy (AFM) reveals that the corresponding regions show a significant roughness and bubble-like features at the surface [20]. We relate these observations to the presence of blisters which are filled with species (airborne, contaminants from the polymer stamp) and are trapped at the interface between h-BN and MoS$_2$. The regions exhibiting CL actually appear very flat in AFM [20], with a root mean square roughness in the range of 2 Å, typically several times lower than for other regions. This is indicative of very smooth and flat (buried) interfaces between the materials.
Prior to the measurement of CL maps, we measured PL maps. Figure 2d displays the integrated PL intensity of the EX (the sum of neutral and charged exciton contributions, the latter appearing as a low-energy shoulder). Unlike for CL, the MoS$_2$ layer appears essentially bright, except at locations where it is physically cracked. There are variations of the PL intensity and position, which are attributed to local changes (strain, doping, coupling to h-BN), but no quenching. A straightforward interpretation for the only partial spatial correlation between CL and PL maps on one hand, and the clear correlation between dark regions as found with CL and rough regions as found in AFM on the other hand, relates to the effectiveness of electron and hole transfer at the interface between MoS$_2$ and h-BN [16]. Indeed, it seems reasonable to expect that the presence of species intercalated between the two materials, in the form of blisters or in other forms, hinders charge transfer. However, we will now see that other effects prevail.
Quenching of luminescence by defects.
While before exposure to the electron beam (used for the CL measurement) PL revealed essentially bright regions, the PL map measured after CL strongly correlates with the CL map, showing the same dark regions (compare Figures 2c,e). It thus appears that the irradiation by the electron beam locally quenches luminescence, regardless of the source of excitation used to observe it. This observation questions the common conception that CL is quenched only by charge transfer hindrance at interfaces; instead, it suggests a more invasive effect of the electron beam, damaging MoS$_2$.
To address this possibility, we analyzed the vibrational properties of MoS$_2$ before and after electron beam irradiation. We performed Raman spectroscopy measurements at room temperature using an excitation wavelength of 532 nm (see [20]). In the regions showing CL, we always observe Raman spectra with two prominent peaks (Figure 3a), corresponding to the intralayer in-plane E and out-of-plane A$_1$ modes [22,23]. These modes correspond to first-order Raman processes and arise from single phonons at the center of the Brillouin zone. Small deviations from the single Lorentzian lineshapes expected for such processes can nevertheless be extracted from the experimental spectra of pristine monolayer MoS$_2$ [20].
Such deviations likely come from the additional contributions of doubly resonant Raman (DRR) processes. For the E mode, a low-energy shoulder was already reported in pristine monolayer MoS$_2$ [24,25]. As the E mode is doubly degenerate, such a shoulder could be attributed to a degeneracy lifting induced by strain or doping [23].
Electron beam irradiation has no effect on the spectra in regions showing CL. In regions showing no CL, on the contrary, electron irradiation has strong effects. Prior to irradiation, we also exclusively observe signatures of the E and A$_1$ modes. After irradiation, the corresponding peaks appear broader and show some structure, and new modes are found at lower frequency, around 230 cm⁻¹ (Figure 3a). They form a band that is referred to as LA(M) in the literature [24][25][26]. While an exact fitting procedure would require a full theoretical description of Raman intensities including DRR contributions, such complex spectra have been fitted with a sum of Lorentzians in the literature [24,25,27]. Within this approximation, we can associate a peak with a given phonon in the band structure. The band around 230 cm⁻¹ was shown to be defect-induced. A detailed, excitation-energy-dependent study showed that this band arises from DRR processes involving one phonon, with elastic scattering by a defect ensuring momentum conservation [25]. This band has three main contributions: a van Hove singularity in the phonon density of states between the K and M points, the LA branch in the vicinity of the M point (LA(M)), and the LA branch in the vicinity of the K point (LA(K)) [25]. In addition, we observe another defect-activated contribution near 250 cm⁻¹ which was also reported by Mignuzzi et al. [24] but left unassigned.
We now discuss the evolution of the spectra in the vicinity of the E and A$_1$ modes upon electron irradiation in regions showing no CL. Defect-induced contributions are visible close to both E and A$_1$. They have been attributed to phonons in the vicinity of the M points on the TO, LO and ZO branches [24]. We have quantitatively analyzed the weight of those defect-induced contributions in order to spatially map the occurrence of defects. The details of the analysis are presented in [20]. In Figure 3b-d, we image the presence of defects in the sample using this approach. We see that there is a remarkable spatial correlation between defect mapping using Raman spectroscopy and CL/PL mapping (Figure 2c,e). In addition, to strengthen the validity of our method, we performed a similar defect-mapping analysis based on the LA(M) band (Figure 3e,f). We see here that regions with a measurable contribution around 230 cm⁻¹ are the ones that are defective and thus CL/PL inactive. The quenching of luminescence by electron beam irradiation is hence related to defect creation in MoS$_2$.
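The A$_1$-width metric used in Figure 3 amounts to fitting a Lorentzian to each spectrum in the map and recording its full width at half maximum. A minimal sketch of such a per-pixel fit is shown below; the spectral window bounds are illustrative values around the A$_1$ mode of monolayer MoS$_2$ (~405 cm⁻¹), not taken from the paper, and the arrays `shift` and `spectrum` are hypothetical inputs.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp, offset):
    """Lorentzian lineshape; gamma is the full width at half maximum."""
    return amp * (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2) + offset

def a1_width(shift, spectrum, window=(395.0, 415.0)):
    """Fit the A1 peak within a spectral window and return its FWHM (cm^-1)."""
    sel = (shift > window[0]) & (shift < window[1])
    x, y = shift[sel], spectrum[sel]
    p0 = (x[np.argmax(y)], 4.0, y.max() - y.min(), y.min())
    popt, _ = curve_fit(lorentzian, x, y, p0=p0)
    return abs(popt[1])
```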
Strong luminescence in case of clean interfaces
We now turn our attention to the origin of defect creation and to the possibility of locating regions with an intimate contact between h-BN and MoS$_2$ prior to CL and the creation of defects.
In Figure 4, we present a doping analysis of the sample realized prior to CL. We used Raman spectroscopy and tracked local shifts in the positions of the E and A$_1$ modes (see the distribution in Figure 4a) to retrieve the spatial dependence of doping (Figure 4b). This is possible by discriminating the effect of strain, which alters the energy of the two Raman modes in a different way than doping does. This kind of analysis was already outlined in the literature on monolayer MoS$_2$ [28,29]. We observe that regions exhibiting CL have a smaller n doping, by typically several 10¹² cm⁻², compared to regions presenting no CL. Noting that the analysis concerns measurements acquired before the CL measurements, i.e. prior to electron beam irradiation, we conclude that regions tightly coupled to h-BN are less electron doped than regions with a loose coupling. While the presence of an h-BN substrate has been shown to reduce electron doping compared to SiO$_2$ [29], in which case doping is created by charge traps at the interface [30], the exact mechanism at play here to explain the coupling-dependent doping is more complex. Species trapped at the interface and gathered in the form of blisters are likely candidates to explain the observed n doping. But the effect of, for instance, oxygen, which was present during the sample preparation in ambient atmosphere, is still debated and might depend on the substrate [31]. While the identification of the exact mechanism responsible for the observed reduction of n doping in tightly coupled h-BN/MoS$_2$ regions will require further investigations, it is nevertheless possible to identify those regions using Raman spectroscopy, as shown in Figure 4b.
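The separation of strain and doping works because the E and A$_1$ shifts respond with different sensitivities to each perturbation, so the two measured shifts can be inverted through a 2×2 linear system. A sketch is given below; the sensitivity coefficients are placeholder values that would have to be taken from calibration studies such as [28,29], not from this paper.

```python
import numpy as np

# Placeholder sensitivities (cm^-1 per % biaxial strain, cm^-1 per
# 10^13 cm^-2 electrons); actual values must come from calibrations [28,29].
K = np.array([[-4.0, -0.3],   # dE  = K[0,0]*strain + K[0,1]*doping
              [-1.0, -2.2]])  # dA1 = K[1,0]*strain + K[1,1]*doping

def strain_and_doping(dE, dA1):
    """Invert measured (E, A1) Raman shifts into (strain, doping).

    dE, dA1 : shifts relative to an unstrained, undoped reference, in cm^-1.
    Returns (strain in %, doping in 10^13 cm^-2) under the linear model above.
    """
    shifts = np.stack([np.atleast_1d(dE), np.atleast_1d(dA1)])
    return np.linalg.solve(K, shifts)

strain, doping = strain_and_doping(-0.5, -1.1)  # hypothetical pixel values
```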
We note that we could not observe low-energy interlayer vibrational modes in Raman spectroscopy, which could be another signature, besides the electronic doping level, of an intimate contact between MoS$_2$ and h-BN. Although such modes have indeed been observed between TMDC layers [32][33][34], we are not aware of any reports of such modes between h-BN and MoS$_2$.
Cause of defect creation by electron-beam irradiation
Having established that defects are induced by irradiation with the electron beam used during the cathodoluminescence measurements, and that these defects lead predominantly to non-radiative recombination of EXs, we now discuss their possible origin. We stress that all the signatures of the defects that we have presented here differentiate them from the ones that can be induced in h-BN upon electron irradiation to create single photon emitters [35]. Besides being created at much lower energy (1 keV here vs 15 keV in Ref. [35]), they do not show any optical activity and present a Raman signature. Regarding their nature, the energy of the electron beam is well below the threshold for knock-on displacement of individual S or Mo atoms in MoS$_2$, which is several tens of keV and a few hundred keV, respectively [36][37][38]. This effect can hence be ruled out as a source of defects in MoS$_2$ here. The dose in a typical CL experiment is estimated to be of the order of tens of mC/cm² (tens of electrons/nm²), which is several orders of magnitude below the dose of typical transmission electron microscopy experiments [39]. Another possible origin, also related to the electron beam, is a chemical reaction rather than a scattering effect. MoS$_2$ has indeed been shown to act as an active catalyst for hydrogen evolution reactions (HER). Following a Volmer-Heyrovsky type of mechanism, the electron beam used in our experiment might thus promote a multistep chemical process proceeding for instance through the formation of Mo-H adducts [40,41]. In our case, it is reasonable to assume that the blisters located at the MoS$_2$/h-BN interface contain airborne species such as water and/or oxygen that naturally adsorb on surfaces in ambient pressure conditions and are trapped during the assembly of the heterostructure. It has been shown that the presence of oxygen can enhance catalytic reactions in MoS$_2$ [42]. The trapped blisters may hence behave as aqueous-solution micro- or nano-reactors. The chemical environment of the MoS$_2$ atoms bonded to hydrogen is different from that of the pristine material, which may allow the electron scattering processes needed to activate the above-discussed defect-induced Raman signatures. The exact nature of the defects is still an open question at this point and will require further studies. We note that the buried character of the interface makes traditional chemically sensitive surface probes (X-ray photoemission spectroscopy, local microscopy) unsuited to such investigations.
Cathodoluminescence with improved spatial homogeneity
The nanometer-scale spatial resolution of cathodoluminescence, as implemented in a scanning electron microscope, together with the tendency for defect formation under electron-beam irradiation of the blisters, represents a high-resolution probe of the quality of the MoS$_2$/h-BN contact. Employing this probe allowed us to conclude that the PDMS stamping technique does not yield extended clean contacts beyond a few µm.
We then used an alternative transfer technique based on pick-up and drop-down with a PPC stamp. This technique reduces the amount of blisters trapped at the interface [17]. Figure 1e shows an optical micrograph of such an h-BN/MoS$_2$/h-BN heterostructure. The CL map of this sample is much more uniform than that of heterostructures prepared with PDMS stamping, as expected from the more uniform contact (compare Figures 2c and 5a). Also, PL measured before and after CL is very similar (see Figures 5b,c), in contrast to the sample prepared using PDMS (see Figures 2d,e). This shows that electron beam irradiation in CL has not created extended defective regions, because of the uniform coupling in that sample. The complete suppression of bubbles should allow further optimization of the process [17].
CONCLUSIONS
Our work shows that clean interfaces between TMDCs (here MoS$_2$) and h-BN are required to allow efficient charge transfer between the barrier and active material. Assembly techniques that are commonly employed to prepare heterostructures often trap blisters of contaminants at the interface between the TMDC and h-BN surfaces. Contrary to what may have been thought, the detrimental effect of such blisters is not only a hindrance of charge transfer at the interface. Cathodoluminescence is quenched there also because of electron-beam-induced damage of the TMDC crystal. Defects are generated and induce non-radiative charge carrier recombination. We ascribe the formation of defects to an electron-promoted chemical reaction. We find that the cleanliness of the interface is of superior spatial uniformity, and the generation of defects is greatly avoided, when a pick-up/drop-down assembly technique with PPC is employed.
The narrow emission linewidth observed in CL and the localized electron beam should allow spatial mapping of strain and doping profiles with nanometer resolution by analyzing the exciton peak position and the presence of trion emission. CL could also be used to study, with unprecedented spatial resolution, the single photon emitters that were reported in several TMDCs [43][44][45][46][47].
This work was supported by the French National Research Agency (ANR) in the framework of the J2D project (ANR-15-CE24-0017), the 2DTransformers project under OH-RISQUE program (ANR-14-OHRI-0004), and of the "Investissements d'avenir" program (ANR-15-IDEX-02). J.R. acknowledges support from Grenoble Alpes University community (AGIR-2016-SUGRAF). G.N., A. B. and V.B. thank support from CEFIPRA. Growth of hexagonal boron nitride crystals was supported by the Elemental Strategy Initiative conducted by the MEXT, Japan and the CREST (JP-MJCR15F3), JST. We thank the Nanofab group at Institut Néel for help with van der Waals heterostructures preparation setup. We thank C. Bucher for fruitful discussions. | 2019-06-03T14:08:18.000Z | 2019-06-03T00:00:00.000 | {
"year": 2019,
"sha1": "d1f7852d88260a3fef920737b94cf831c269af97",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1906.00824",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cfab06bd2fc7997fb2ed351aaa26604638b9f85f",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
56450316 | pes2o/s2orc | v3-fos-license | Effect of Preceding Crops and Nitrogen Rates on Economic Studies of Winter Hybrid Maize (Zea mays L.)
A field experiment was conducted at the Agronomy research farm of IAAS, Rampur, Chitwan, Nepal during the summer and winter seasons of 2010 and 2011 to study the effect of crop sequence and nitrogen rates on hybrid maize. The research findings revealed that the maximum gross return (Rs 53660/ha in 2010 and Rs 60450/ha in 2011) was obtained from maize grown after greengram, while the maximum net return (Rs 30170/ha in 2010 and Rs 33440/ha in 2011) was obtained from maize under the greengram-maize sequence. The benefit:cost ratio was maximum (1.22 in 2010 and 1.18 in 2011) for maize under the greengram-maize sequence, while it was minimum (0.50 in 2010 and 0.47 in 2011) under the maize-maize sequence. Maximum maize equivalent yield (11516 kg/ha in 2010 and 12710 kg/ha in 2011) was obtained under the maize-maize sequence, while it was minimum (4310 kg/ha in 2010 and 4624 kg/ha in 2011) under the clusterbean-maize sequence. Maize equivalent yield was maximum (10824 kg/ha in 2010 and 11923 kg/ha in 2011) with 200 kg N/ha, while it was minimum (7384 kg/ha in 2010 and 8206 kg/ha in 2011) without nitrogen. Grain production efficiency was maximum (32.0 kg/ha/day) under the greengram-maize sequence in 2010, but in 2011 the maximum grain production efficiency (34.8 kg/ha/day) was recorded under the maize-maize sequence, which was comparable to the grain production efficiency under the greengram-maize sequence. Maximum grain production efficiency (29.2 kg/ha/day in 2010 and 32.8 kg/ha/day in 2011) was obtained with 200 kg N/ha.
Introduction
Winter maize has the highest production potential among crop plants and, owing to wide variability in plant morphology, extremely wide adaptability as well. It is more efficient than rice, wheat or barley. It is a heavy feeder of fertilizer nutrients, particularly nitrogen, whose effect is manifested quickly in plant growth and productivity. Among cereals, maize is an important food and feed crop which ranks second after rice, followed by wheat, whereas in the global context it ranks third after wheat and rice. It is the second most important staple food crop, both in terms of area and production, after rice in Nepal. It is grown on 870,166 hectares of land with an average yield of 2159 kg/ha, occupying about 28.15% of the total cultivated agricultural land. Winter maize has an important place amongst the winter crops of the country, the others being wheat, gram, lentil, pea, etc. Under upland rainfed conditions, summer maize, greengram, blackgram, cowpea and clusterbean are grown in the rainy season, and after the harvest of these crops wheat, lentil, gram, mustard and winter maize are grown during winter.
Being a C4 plant, maize is generally regarded as the most photosynthetically efficient cereal, and among the three maize seasons, i.e. winter, spring and rainy, winter maize is the most efficient from physiological, biotic and abiotic points of view. Hence, from the point of view of maximum production, winter maize stands at the top among winter season crops such as wheat and barley. Although maize is grown in all seasons, i.e. spring, rainy and winter, the productivity of winter maize is much higher than that of maize grown in the other seasons (Sherchan et al., 2004). For many crop plants of the temperate zone, the optimum temperature for photosynthesis is lower than that for respiration. This has been suggested as one of the reasons for the higher yields of starchy crops such as maize and potatoes in cool climates, as contrasted with the yields of these crops in warmer regions. In the inner Terai, the winter season temperature is more favourable for photosynthesis than for respiration during the winter maize period. For best growth, the mean day temperature is 24 °C, which is more likely to be available for maize in the winter season than in the spring or rainy seasons.
Maize-wheat and maize-toria are widely adopted crop sequences and are popular under upland conditions. Besides the higher production potential for grain, higher amounts of feed and fodder are also obtained under these sequences. But the continuous adoption of these sequences on the same piece of land may have adverse effects on the physical, chemical and biological properties of the soil, as continuous cropping of cereals impoverishes the yield of succeeding crops, whereas inclusion of legumes in the rotation benefits the succeeding crops (Bains, 1962).
Nitrogen is the most limiting nutrient for maize production. Maize is an exhaustive crop and requires high quantities of nitrogen. The practice of fertilizer recommendation on the basis of an individual crop is becoming less relevant, because an individual crop is a component of a cropping system and cannot be grown in isolation. Therefore, fertilizer recommendations should be made by giving due consideration to the nature of preceding crops, or in other words to the cropping system as a whole, besides the soil condition. The main objective of this research is to analyze the benefit:cost ratio of maize crop sequences with various nitrogen-fixing legumes and the economic returns from maize at different nitrogen rates.
Materials and Methods
A field experiment under an upland ecosystem was conducted in a split plot design with three replications at the Institute of Agriculture and Animal Science (IAAS) agronomy research farm, Rampur, Chitwan during 2010 and 2011, keeping crop sequences in main plots and nitrogen rates to maize in sub plots. The main plot treatments consisted of six crop sequences, i.e. fallow-maize, maize-maize, greengram-maize, cowpea-maize, blackgram-maize and clusterbean-maize. The sub-plot treatments consisted of five nitrogen rates to maize, i.e. 0, 50, 100, 150 and 200 kg N/ha. The experiment was thus laid out in a split plot design with thirty treatment combinations and three replications. The soil of the experimental field was free from any kind of salinity/sodicity hazard and suitable for a variety of crops of tropical and subtropical regions; it was a loamy sand with neutral pH (7.0). The climate of the experimental farm was characterized as subtropical humid. The economics of the different summer legume crops was studied along with Rajkumar, an Indian hybrid maize variety popularly grown in Chitwan and the Terai region of Nepal, sown at a row-to-row distance of 60 cm and a plant-to-plant distance of 20 cm; it is a semi-dent, orange flint type with relatively long ears, high disease resistance and good response to fertilizer and water.
After harvest, economic return studies were carried out: gross return from maize, net return and benefit:cost ratio of maize, gross return, net return and benefit:cost ratio of the different crop sequences as a whole, productivity of the different crop sequences (maize equivalent yield) and grain production efficiency were calculated, and statistical analysis was done. The cost of cultivation was calculated on the basis of prevailing local charges for the different agro-inputs, viz. price of seed, labor wages, fertilizer, machinery, chemicals and other necessary materials.
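A minimal sketch of how these economic indicators are commonly computed is given below. The formulas are the standard agronomic definitions; the example prices, yields, costs and durations are hypothetical, not the study's data, and the benefit:cost convention assumed here (net return over cost of cultivation) is one common choice that may differ from the authors'.

```python
def economics(gross_return, cost_of_cultivation):
    """Net return (Rs/ha) and benefit:cost ratio from gross return and cost."""
    net_return = gross_return - cost_of_cultivation
    bc_ratio = net_return / cost_of_cultivation
    return net_return, bc_ratio

def maize_equivalent_yield(maize_yield, other_yield, other_price, maize_price):
    """Maize equivalent yield (kg/ha): the preceding crop's yield expressed
    in maize terms through the price ratio, added to the maize yield."""
    return maize_yield + other_yield * other_price / maize_price

def grain_production_efficiency(total_grain_yield, total_duration_days):
    """Grain production efficiency (kg/ha/day) of the whole sequence."""
    return total_grain_yield / total_duration_days

# Hypothetical example for a greengram-maize sequence.
net, bc = economics(gross_return=53660.0, cost_of_cultivation=23490.0)
mey = maize_equivalent_yield(maize_yield=6500.0, other_yield=800.0,
                             other_price=60.0, maize_price=15.0)
gpe = grain_production_efficiency(total_grain_yield=mey, total_duration_days=300)
```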
Economic studies
Gross Return from Maize

Different crop sequences brought significant variation in the gross return obtained from maize during both years (Table 1). Maximum gross return was obtained under the greengram-maize sequence during both years, and it was significantly higher than under all other crop sequences. In 2010, gross return under the cowpea-maize, blackgram-maize and clusterbean-maize sequences was significantly more than under fallow-maize and maize-maize. Further, gross return under the fallow-maize sequence was significantly higher than under the maize-maize sequence.
In 2011, gross return obtained under the cowpea-maize and blackgram-maize sequences was significantly more than under clusterbean-maize, fallow-maize and maize-maize. However, gross return under clusterbean-maize and fallow-maize was significantly more than under the maize-maize sequence.
Nitrogen rates resulted in significant increases in the gross return obtained from maize during both years (Table 1). Maximum gross return (Rs 63250/ha in 2010 and Rs 72800/ha in 2011) was obtained with 200 kg N/ha.
Successive increases in nitrogen rate from 0 to 200 kg N/ha caused significant increases in gross return during both years. However, gross returns under 150 kg N/ha and 200 kg N/ha were statistically at par.
Net Return from Maize
Various preceding crops caused significant variation in net return from maize during both years (Table 1). Maximum net return was obtained under the greengram-maize sequence, which was significantly more than under the other crop sequences, except cowpea-maize and blackgram-maize, during both years. In 2010, net return under the cowpea-maize, blackgram-maize and clusterbean-maize sequences was significantly more than under fallow-maize and maize-maize; further, net return under the fallow-maize sequence was also significantly more than under maize-maize. In 2011, net return under the cowpea-maize and blackgram-maize sequences was significantly more than under the clusterbean-maize, fallow-maize and maize-maize sequences, and net return under clusterbean-maize and fallow-maize was significantly more than under maize-maize. Nitrogen rates had a significant effect on net return during both years (Table 1). Maximum net return (Rs 35850/ha in 2010 and Rs 40490/ha in 2011) was obtained with 200 kg N/ha. A significant increase in net return with each successive increase in nitrogen rate from 0 to 200 kg N/ha was noted during both years.
Benefit:Cost Ratio of Maize
The benefit:cost ratio of maize was affected significantly by the various crop sequences during both years (Table 1). The maximum benefit:cost ratio was obtained under the greengram-maize sequence, and it was significantly higher than under all the other sequences during both years.
In 2010, the benefit:cost ratio under cowpea-maize, blackgram-maize and clusterbean-maize was significantly higher than under the fallow-maize and maize-maize sequences. Further, the benefit:cost ratio under fallow-maize was significantly more than under maize-maize.
In 2011, the benefit:cost ratio under cowpea-maize and blackgram-maize was significantly more than under the clusterbean-maize, fallow-maize and maize-maize sequences, while the benefit:cost ratio under clusterbean-maize and fallow-maize was significantly more than under the maize-maize sequence.
The rate of nitrogen application had a significant effect on the benefit:cost ratio during both years (Table 1). The maximum benefit:cost ratio (1.28 in 2010 and 1.25 in 2011) was recorded with 200 kg N/ha. Increasing the rate of nitrogen from 0 to 50, 50 to 100, 100 to 150 and 150 to 200 kg N/ha resulted in consistent and significant increases in the benefit:cost ratio during both years.
Gross Return from Different Crop Sequences
Gross return from the crop sequences as a whole varied significantly under the different crop sequences during both years (Table 2). Maximum gross return was obtained under maize-maize, and it was significantly higher than under the other crop sequences during both years. In 2010, gross return obtained under the greengram-maize and cowpea-maize sequences was significantly more than under the fallow-maize, blackgram-maize and clusterbean-maize sequences.
Gross return under the fallow-maize sequence was significantly more than under the blackgram-maize and clusterbean-maize sequences, and gross return under blackgram-maize was significantly higher than under clusterbean-maize. In 2011, gross return under blackgram-maize was likewise significantly more than under the clusterbean-maize sequence, as in the first year.
Gross return obtained from the crop sequences as a whole was significantly influenced by nitrogen rates during both years (Table 2). Successive increases in nitrogen rate up to 200 kg N/ha resulted in significant increases in gross return.
Net Return from Different Crop Sequences
Net return from the crop sequences as a whole was significantly influenced by the different crop sequences (Table 2).
During both years, net return under the greengram-maize sequence was maximum (Rs 60340/ha in 2010 and Rs 66880/ha in 2011) and significantly higher than under the other crop sequences. Net return under the cowpea-maize sequence was significantly higher than under the maize-maize, fallow-maize, blackgram-maize and clusterbean-maize sequences. Net return under maize-maize and fallow-maize was significantly more than under blackgram-maize and clusterbean-maize. Similarly, net return under blackgram-maize was significantly more than under the clusterbean-maize sequence.
Net return obtained from the crop sequences as a whole was significantly influenced by nitrogen rates during both years (Table 2). Maximum net return (Rs 71700/ha in 2010 and Rs 80980/ha in 2011) was obtained with 200 kg N/ha. Net return increased significantly with each successive increase in nitrogen rate from 0 to 200 kg N/ha.
Benefit:Cost Ratio of Different Crop Sequences
The benefit:cost ratio of the crop sequences as a whole differed significantly among the crop sequences during both years (Table 2). The maximum benefit:cost ratio was obtained under the greengram-maize sequence (1.28 in 2010 and 1.25 in 2011), and it was significantly more than under all other crop sequences during both years. The benefit:cost ratio under cowpea-maize was significantly more than under the clusterbean-maize, fallow-maize, maize-maize and blackgram-maize sequences. In 2010, a significantly higher benefit:cost ratio was noted under clusterbean-maize than under the fallow-maize, maize-maize and blackgram-maize sequences.
Further, a significantly higher benefit:cost ratio was recorded under fallow-maize than under the maize-maize and blackgram-maize sequences. In 2011, the benefit:cost ratio under the fallow-maize and clusterbean-maize sequences was significantly more than under the blackgram-maize and maize-maize sequences, and the blackgram-maize sequence was significantly superior to the maize-maize sequence. Nitrogen rates brought significant variation in the benefit:cost ratio of the crop sequences as a whole during both years (Table 2). In 2010, a significant increase in the benefit:cost ratio was noted with each successive increase in nitrogen rate up to 100 kg N/ha, but in 2011 the benefit:cost ratio increased successively up to 200 kg N/ha.
Productivity of Different Crop Sequences
Maize equivalent yield varied significantly among the crop sequences during both years (Table 3). Maize equivalent yield under the maize-maize and greengram-maize sequences was significantly more than under all the other crop sequences in 2010, and more than under the fallow-maize, blackgram-maize and clusterbean-maize sequences during 2011. Maize equivalent yield under the cowpea-maize sequence was significantly more than under the fallow-maize, blackgram-maize and clusterbean-maize sequences during both years. Significantly more maize equivalent yield was obtained under the fallow-maize sequence than under blackgram-maize and clusterbean-maize, and under the blackgram-maize sequence than under clusterbean-maize.
Various nitrogen rates resulted in significant variation in maize equivalent yield during both years (Table 3). Maximum maize equivalent yield (10824 kg/ha in 2010 and 11932 kg/ha in 2011) was recorded with 200 kg N/ha to maize. Successive increases in nitrogen rate from 0 to 200 kg N/ha brought significant and simultaneous improvement in maize equivalent yield during both years.
Maize equivalent yield was significantly influenced by the interaction of crop sequences and nitrogen rates during 2010-2011 only (Table 4). Maximum maize equivalent yield (14514 kg/ha) was obtained under the maize-maize sequence at 200 kg N/ha, which was significantly more than under all the sequences at all the nitrogen rates except the greengram-maize and cowpea-maize sequences at 200 kg N/ha. At 0, 50, 100, 150 and 200 kg N/ha, maize equivalent yield obtained under the maize-maize, greengram-maize and cowpea-maize sequences was significantly more than under the fallow-maize, blackgram-maize and clusterbean-maize sequences. Similarly, maize equivalent yield under the fallow-maize sequence was significantly higher than under the blackgram-maize and clusterbean-maize sequences. Further, maize equivalent yield under the blackgram-maize sequence was significantly more than under the clusterbean-maize sequence. Under the maize-maize and clusterbean-maize sequences, successive increases in nitrogen rate resulted in simultaneous and significant increases in maize equivalent yield. Under the fallow-maize and blackgram-maize sequences, maize equivalent yield obtained at 100, 150 and 200 kg N/ha was significantly more than at no nitrogen and 50 kg N/ha, and at 50 kg N/ha more than at the no-nitrogen treatment. Under the greengram-maize sequence, maize equivalent yield obtained at 200 kg N/ha was significantly more than at all other nitrogen rates; further, maize equivalent yield at 100 and 50 kg N/ha was significantly more than at the no-nitrogen level. Under the cowpea-maize sequence, maize equivalent yield obtained at 100, 150 and 200 kg N/ha was significantly more than at no nitrogen, and at 50 and 100 kg N/ha more than at the no-nitrogen treatment.
Grain Production Efficiency
Grain production efficiency varied significantly among the crop sequences during both years (Table 3). Grain production efficiency under the maize-maize and greengram-maize sequences was significantly higher than under all other crop sequences in 2010, and higher than under the fallow-maize, blackgram-maize and clusterbean-maize sequences in 2011. Significantly higher grain production efficiency was noted under the cowpea-maize sequence than under the fallow-maize, blackgram-maize and clusterbean-maize sequences during both years. Grain production efficiency under the fallow-maize sequence was significantly higher than under the blackgram-maize and clusterbean-maize sequences; further, it was significantly higher under blackgram-maize than under clusterbean-maize.
The rate of nitrogen application significantly influenced grain production efficiency during both years (Table 4). The maximum grain production efficiency (39.6 kg/ha/day in 2010 and 38.8 kg/ha/day in 2011) was recorded with 200 kg N/ha applied to maize. Each successive increase in nitrogen rate raised grain production efficiency significantly during both years.
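Given its units (kg/ha/day), grain production efficiency is evidently the sequence's total maize-equivalent grain yield divided by the duration of the cropping period; D (days occupied by the sequence) is an assumed symbol:

$$ \mathrm{GPE} = \frac{\mathrm{MEY}}{D} \quad [\mathrm{kg\,ha^{-1}\,day^{-1}}]. $$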
The effect of the interaction between crop sequences and nitrogen rates on grain production efficiency was significant in 2010 only (Table 4). The maximum grain production efficiency (34.8 kg/ha/day) was obtained under the maize-maize sequence, which was significantly higher than under all the sequences at all the nitrogen rates, except under the greengram-maize and cowpea-maize sequences at 200 kg N/ha.
At no nitrogen, grain production efficiency under the maize-maize sequence was significantly higher than under all other sequences, and grain production efficiency under the greengram-maize and cowpea-maize sequences was significantly higher than under the fallow-maize, blackgram-maize and clusterbean-maize sequences; that under the fallow-maize sequence was significantly higher than under the blackgram-maize and clusterbean-maize sequences. Further, grain production efficiency under the blackgram-maize sequence was significantly higher than under the clusterbean-maize sequence.
At 50, 100, 150 and 200 kg N/ha, grain production efficiency under the maize-maize, greengram-maize and cowpea-maize sequences was significantly higher than under all other sequences. Grain production efficiency under fallow-maize was significantly higher than under the blackgram-maize and clusterbean-maize sequences, while grain production efficiency under the blackgram-maize sequence was significantly higher than under the clusterbean-maize sequence.
Under the maize-maize and clusterbean-maize sequences, the grain production efficiency obtained at 200 kg N/ha was significantly higher than at all other nitrogen rates; grain production efficiency at 100 kg N/ha was significantly higher than at 50 kg N/ha and at no nitrogen, while that at 50 kg N/ha was significantly higher than at the no-nitrogen level.
Under the fallow-maize and blackgram-maize sequences, the grain production efficiency obtained at 100, 150 and 200 kg N/ha was significantly higher than at no nitrogen and at 50 kg N/ha, while that at 50 kg N/ha was significantly higher than at the no-nitrogen level. Under the greengram-maize sequence, grain production efficiency at 200 kg N/ha was significantly higher than at all other nitrogen rates, while that at 100 and 50 kg N/ha was significantly higher than under the no-nitrogen treatment.
Under the cowpea-maize sequence, grain production efficiency at 200 kg N/ha was significantly higher than at no nitrogen and at 50 kg N/ha, while that at 50 and 100 kg N/ha was significantly higher than under the no-nitrogen treatment.
Economic consideration is one of the best criteria for choosing the most appropriate cropping system for an area. Singh et al. (1967) reported that the inclusion of legumes in rotation on sandy soils low in nitrogen proved most profitable, and the gross income was in the order cowpea-maize-wheat, guar-maize-wheat, fallow-maize-wheat, bajra-maize-wheat, jowar-maize-wheat. Khybri et al. (1973) reported that in the Doon valley, out of several double cropping systems, the highest net profit (Rs. 1622/ha) was obtained from the maize-pea rotation, followed by the maize-wheat rotation. Faroda and Singh (1983) reported that among the crop rotations, i.e. blackgram-wheat, greengram-wheat, cowpea-wheat, pigeonpea-wheat, pearlmillet (fertilized with 60 kg N/ha)-wheat and pearlmillet (fertilized with 90 kg N/ha)-wheat, the rotation blackgram-wheat gave the maximum net return (Rs. 7662.79/ha), whereas pearlmillet (fertilized with 60 kg N/ha)-wheat gave the minimum net return. Ramshe and Patil (1987) obtained the maximum net return from the groundnut-wheat rotation, which gave an additional profit of Rs. 1924, Rs. 2470, Rs. 2008 and Rs. 1268 over moong-wheat, bengalgram-wheat, bajra-wheat and cowpea-wheat, respectively. Singh and Faroda (1987) reported that the pigeonpea + moong-wheat rotation gave an additional return of Rs. 1758/ha as compared to the pigeonpea-wheat rotation. Sharma and Thakur (1988) found the maximum gross return (Rs. 26610/ha) and net return (Rs. 16325/ha), with an economic efficiency of Rs. 77.67/day in terms of gross return and Rs. 47.33/day/ha in terms of net return, under the maize-pea (grain pod)-wheat-moong system; however, the maize + pigeonpea-wheat-greengram sequence had greater stability in production and economics than the maize-pea (pods)-wheat-greengram sequence. Deka et al. (1984) observed the maximum net return of Rs. 6345/ha from the rice-wheat-maize + cowpea (fodder) sequence, closely followed by rice-berseem (Rs. 5426/ha), and it was lowest (Rs. 2919/ha) for the rice-lentil sequence. Ramteke et al. (1986) reported the highest net profit from the berseem-maize sequence, followed by legume (pea, lentil)-maize sequences, and the lowest from the wheat-maize sequence.
Summary and Conclusion
The maximum gross return (Rs 53660/ha in 2010 and Rs 60450/ha in 2011) was obtained from maize grown after greengram, while it was minimum (Rs 35880/ha in 2010 and Rs 40700/ha in 2011) when grown after maize. Gross return was maximum (Rs 63250/ha in 2010 and Rs 72800/ha in 2011) from maize with 200 kg N/ha and minimum (Rs 26720/ha in 2010 and Rs 30280/ha in 2011) with no nitrogen.
The maximum net return (Rs 30170/ha in 2010 and Rs 33440/ha in 2011) was obtained from maize under the greengram-maize sequence, and it was minimum (Rs 12390/ha in 2010 and Rs 13700/ha in 2011) under the maize-maize sequence. Net return was maximum (Rs 35850/ha in 2010 and Rs 40490/ha in 2011) from maize with 200 kg N/ha and minimum (Rs 8680/ha in 2010 and Rs 9670/ha in 2011) with no nitrogen.
The benefit:cost ratio was maximum (1.22 in 2010 and 1.18 in 2011) from maize under the greengram-maize sequence, while it was minimum (0.50 in 2010 and 0.47 in 2011) under the maize-maize sequence. The maximum benefit:cost ratio (1.28 in 2010 and 1.25 in 2011) from maize was obtained with 200 kg N/ha and the minimum (0.47 in 2010 and 0.46 in 2011) with no nitrogen.
The maximum maize equivalent yield (11516 kg/ha in 2010 and 12710 kg/ha in 2011) was obtained under the maize-maize sequence, while it was minimum (4310 kg/ha in 2010 and 4624 kg/ha in 2011) under the clusterbean-maize sequence. Maize equivalent yield was maximum (10824 kg/ha in 2010 and 11923 kg/ha in 2011) with 200 kg N/ha and minimum (7384 kg/ha in 2010 and 8206 kg/ha in 2011) with no nitrogen.
Grain production efficiency was maximum (32.0 kg/ha/day) under the greengram-maize sequence in 2010, but in 2011 the maximum grain production efficiency (34.8 kg/ha/day) was recorded under the maize-maize sequence, which was comparable to that under the greengram-maize sequence. Grain production efficiency was minimum (11.18 kg/ha/day in 2010 and 12.6 kg/ha/day in 2011) under the clusterbean-maize sequence. The maximum grain production efficiency (29.2 kg/ha/day in 2010 and 32.8 kg/ha/day in 2011) was obtained with 200 kg N/ha, while it was minimum (20.2 kg/ha/day in 2010 and 34.8 kg/ha/day in 2011) with no nitrogen application.
Table 1 .
Effects of crop sequences and nitrogen rates on economic return from maize.
Table 2 :
Gross return, net return and benefit: cost ratio under different crop sequences and nitrogen rates
Table 3 :
Productivity (maize equivalents) and grain production efficiency under different crop sequences and nitrogen rates
Table 4 :
Interaction effects between crop sequences and nitrogen rates on productivity (maize equivalents) and grain production efficiency during 2010-2011 | 2018-12-18T20:13:24.115Z | 2017-06-29T00:00:00.000 | {
"year": 2017,
"sha1": "6d40f0e7325773c15ae57df289c9ce835b0558e2",
"oa_license": "CCBY",
"oa_url": "https://www.nepjol.info/index.php/IJASBT/article/download/17612/14301",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6d40f0e7325773c15ae57df289c9ce835b0558e2",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
225364235 | pes2o/s2orc | v3-fos-license | An Electromagnetic–Thermal Coupling Numerical Study of the Synchronous Generator with Second-Generation High-Temperature Superconducting Armatures
Generators with high-temperature superconducting armatures have an advantage in the fact that they can carry high currents. However, the AC loss of high-temperature superconducting (HTS) armatures is difficult to calculate precisely because HTS coils exist in a complex and time-varying electromagnetic environment. In addition, when the HTS coil is carrying a short circuit fault overcurrent, an electromagnetic–thermal simulation study of this process is required to ensure that the HTS coil is not damaged. In this paper, first, a fully coupled T-A formulation model is used to calculate the AC loss of HTS armatures. Then, the current and temperature distributions are simulated, considering the intrinsic characteristic of superconducting coated conductors, when the generator suffers the worst short circuit fault accidentally. It is found that the turn with the lowest critical current quenches after 0.01 s, but the temperature rise cannot damage the coil if the circuit breaker can clear the fault quickly. The effects of the copper stabilizer thickness on the thermal stability of the HTS coil during the worst short circuit fault are also investigated. A thicker copper stabilizer improves the thermal stability of the HTS coil in the event of a short circuit fault, but the use of a simulation model is needed to make trade-offs between the engineering current density and the thermal stability of the HTS tapes. The work in this paper is necessary and can provide an important reference for manufacturing superconducting generators.
Introduction
Second-generation (2G) high-temperature superconducting (HTS) generators are promising because they can achieve a higher power density compared to conventional generators [1]. For synchronous generator applications, HTS coils placed in the stator have an advantage in terms of static cooling, but HTS coils will suffer AC loss because they are under alternating magnetic fields and currents.
The AC loss of an isolated HTS coil has been studied by many groups [2][3][4][5][6]. To calculate the AC loss of HTS coils in the actual generator environment, a two-stage segregated model consisting of a machine model and an AC loss model has been proposed [7] and used [8,9]. The rotating machine is first simulated by the A-formulation, and then the HTS coil is simulated by the H-formulation [10], with the boundary condition calculated from the A-formulation model.
The T-A formulation model [11,12] can fully couple the A-formulation model of the generator and the HTS coil model. The efficiency of the T-A formulation model is higher than the H-formulation model due to the decrease in the degrees of freedom [11]. Moreover, the T-A formulation model can directly achieve the fully coupled simulation of the HTS coils under the generator's electromagnetic environment because rotating generators are usually simulated by the A-formulation [13].
When a short circuit fault happens, the magnetic field on the surface of the HTS coils and the current flowing in the coils will change drastically and cause a greater AC loss. Ohmic loss also occurs when the current exceeds the critical current of the coils. If the cooling capacity of the cryocooler is not enough to remove the heat immediately, the HTS coils may be permanently damaged [14]. To understand whether a short circuit fault will cause permanent damage to HTS coils, conducting an electromagnetic–thermal coupling simulation is of great importance. The effects of the magnetic field, the current and the temperature on the HTS materials should all be considered.
To date, some numerical simulations of the short circuit faults of HTS generators have been done [15][16][17][18][19]. However, none of these simulations achieves fully coupled simulations, considering the dependence of the critical current density on the temperature and the magnetic field.
The main novelty of this paper is to conduct an electromagnetic-thermal coupling numerical study of a synchronous generator with 2G HTS armatures. In Section 2, the field-circuit thermal coupled model is introduced. In Section 3, the parameters of the HTS generator and the HTS coils are introduced. In Section 4, the model is first used to calculate the AC loss of the stator HTS coils under the rated condition of the generator. Then, the transient electromagnetic-thermal behavior of the HTS coils under a sudden three-phase short circuit fault at the no-load operation will be studied. This situation causes a maximum peak short circuit current, which is the worst case for the HTS coil. Moreover, the effects of the copper stabilizer thickness on the thermal stability of the HTS coil at the worst short circuit fault will also be investigated. Finally, our conclusions are drawn in Section 5.
Formulation and Model
To consider the complex anisotropy of YBCO tapes in our numerical model, the dependence of the critical current density on the magnetic flux density B and the temperature T is expressed as [4,[20][21][22]: where B_par and B_per represent the magnetic field components parallel and perpendicular to the tape surface, respectively. J_c0 is the self-field critical current density at the operating temperature. b, k and B_c are the curve-fitting parameters. T_c = 92 K is the critical temperature. T_0 = 77 K is the operating temperature. It is important to note that Equations (2) and (3) are only an approximate expression of the dependence of the critical current density on the magnetic flux density and the temperature; the actual properties of the HTS tapes will not follow these expressions completely.
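For reference, a common form of this dependence, consistent with the parameters just defined (B_par, B_per, J_c0, b, k, B_c, T_c, T_0) but not necessarily identical to the authors' fitted expressions, is the anisotropic Kim-type law with linear temperature scaling:

$$ J_c(B,T) \;=\; \frac{J_{c0}}{\left(1+\dfrac{\sqrt{k^{2}B_{par}^{2}+B_{per}^{2}}}{B_c}\right)^{\!b}} \cdot \frac{T_c-T}{T_c-T_0}. $$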
The superconducting layer resistivity is calculated by the E-J power law relation [23]:
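(in standard form, with E_c the electric-field criterion, commonly 1 μV/cm, and n the power-law index)

$$ E = E_c\left(\frac{J}{J_c(B,T)}\right)^{n} \quad\Rightarrow\quad \rho_{sc} = \frac{E}{J} = \frac{E_c}{J_c(B,T)}\left(\frac{|J|}{J_c(B,T)}\right)^{n-1}. $$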
The Field Model
The current density vector potential T is defined in the superconducting layer to solve the current density distribution in HTS-coated conductors. As can be seen in Figure 1, the current density vector potential of the endpoint on one side of each HTS tape is set to zero, and the boundary condition for the 2-D T-formulation is then fixed by I_su, the superconducting layer current of the HTS coil.
In all domains of the generator, the magnetic vector potential A is defined and the A-formulation is used, ∇ × (μ⁻¹ ∇ × A) = J, where J = ∇ × T in the superconducting layers and J = J_cu in the copper layers. J_cu is the copper layer current density of the HTS coil, calculated by coupling with the circuit part (introduced in Section 2.2). μ is the permeability of the material. The relative permeability for all the materials in the model is one.
The boundary condition for the A-formulation is set as:
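(a sketch, assuming the conventional 2-D T-A choices; δ, the superconducting layer thickness, is an assumed symbol, and the authors' exact expressions may differ)

$$ T\big|_{\text{edge 1}} = \frac{I_{su}}{\delta}, \qquad T\big|_{\text{edge 2}} = 0, \qquad A\big|_{\partial\Omega} = 0, $$

i.e., the difference of T across each tape, multiplied by the layer thickness, enforces the transport current I_su, and the magnetic vector potential vanishes (magnetic insulation) on the outer boundary of the model.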
The Circuit Model
The equivalent circuit model of the whole superconducting synchronous generator is shown in Figure 2. For the stator windings, since the three-phase stator windings are symmetrical, only the variables related to the phase-A winding are explained.
The circuit equation of phase A involves U_A, L_LA, R_LA and i_A: the output voltage, the load inductance, the load resistance and the current of the phase-A winding, respectively. For the HTS winding, it can be seen as a series of turns (right part of Figure 2), and the circuit equations are given in [24], where the subscript k represents the k-th turn of the HTS coil. N is the total number of turns of the coil. e_k is the generated electromotive force. E_z is the electric field strength in the coil domain. S_k is the cross-sectional area of the conductor. L is the core length. i_k is the total current. i, R and T represent the current, the resistance and the temperature, respectively. The subscripts su and cu represent the superconducting layer and the copper layer, respectively. B_su,k is the magnetic flux density in the superconducting layer. l_k is the tape length. ρ_cu is the copper layer resistivity, taken from [25]. S_cu,k is the cross-section area of the copper layer.
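A minimal sketch of how such a field-circuit coupling is typically closed, assuming a series R-L load on the phase winding and parallel superconducting/copper current paths in each turn (the authors' exact equations may differ):

$$ U_A = R_{LA}\, i_A + L_{LA}\,\frac{di_A}{dt}, \qquad i_k = i_{su,k} + i_{cu,k}, \qquad E_z\, l_k = R_{cu,k}(T)\; i_{cu,k}, $$

so that the superconducting layer (through E_z from the field model) and the copper layer (through its temperature-dependent resistance) see the same per-turn voltage, while the electromotive forces e_k drive the loop together with the load.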
The Thermal Model
The governing equation of the heat transfer model in the HTS coil is [25,26]: where d, C and k are the mass density, the heat capacity and the thermal conductivity of the materials of each layer of the HTS coil, respectively. The material properties of the Kapton layer are from [26] and those of the other layers are from [25]. The magnetization loss density Q_su is generated in the superconducting layers. The ohmic loss density Q_cu is generated in the copper layer, and it is caused by the shunting of the copper layer, which occurs after the total current exceeds the critical current of the HTS tapes.
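Presumably this is the standard transient heat-conduction form (a reconstruction consistent with the quantities defined above; the exact equation may differ):

$$ d\,C\,\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + Q_{su} + Q_{cu}, $$

with Q_su and Q_cu applied as per-turn heat sources in the superconducting and copper layers, respectively.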
The distribution of the same kind of loss in each turn is non-uniform because the current distribution, the magnetic field distribution and the temperature distribution of each turn are different. The values of these heat sources for each turn are calculated by coupling with the field model and the circuit model introduced previously. During the operation of the HTS machine, the cryocooler will take away a certain amount of heat in the coils. However, in order to conservatively estimate the most serious consequences of short circuit faults, an adiabatic boundary condition is set on the surface of the coils.
The coupling relationship among the field, circuit and thermal model of the HTS coils is summarized in Figure 3: where d, C and k are the mass density, the heat capacity and the thermal conductivity of the materials of each layer of the HTS coil, respectively. The material properties of the Kapton layer are from [26] and other layers are from [25]. The magnetization loss density Qsu is generated in the superconducting layers. The ohmic loss density Qcu is generated in the copper layer, and it is caused by the shunting of the copper layer, which will occur after the total current exceeds the critical current of the HTS tapes.
The distribution of the same kind of loss in each turn is non-uniform because, in the current distribution, the magnetic field distribution and the temperature distribution of each turn are different. The values of these heat sources for each turn need to be calculated by coupling with the field model and the circuit model introduced previously: (14) During the operation of the HTS machine, the cryocooler will take away a certain amount of heat in coils. However, in order to estimate the most serious consequences of short circuit faults conservatively, the adiabatic boundary condition is set on the surface of the coils.
The coupling relationship among the field, circuit and thermal model of the HTS coils is summarized in Figure 3: The field model uses the temperature distribution calculated from the thermal model, and the current values of the superconducting layers and copper layers calculated from the circuit model, to calculate the magnetization loss of the superconducting layers, the magnetic field distribution, and the generated electromotive force of the HTS coils. The circuit model uses the temperature distribution calculated from the thermal model, and the magnetic field distribution calculated from the field model, to calculate the current values of the superconducting layers and copper layers and the copper layer loss.
All these models are implemented in the COMSOL Multiphysics software.
Parameters
The topology of the studied HTS generator is shown in Figure 1; each phase is composed of two coils in parallel. For example, the phase A winding consists of two coils: A1 and A2. A1+ and A1− are the positive current domain and the negative current domain of the A1 coil, respectively.
The studied generator has four poles and six slots. Each slot has one double racetrack coil with 84 turns. The rated current of these HTS field coils is about 37 A and the rated output power is 30 kW. The generator is designed to operate at 1500 rpm, corresponding to a stator armature frequency of 50 Hz.
Other design parameters for the generator can be found in a previous paper [27]. The parameters of the HTS coils are summarized in Table 1. b, k and B_c are calculated by fitting the experimental data of a sample HTS tape. The surface plot of the normalized critical current density for the HTS tape under a parallel and perpendicular magnetic field is shown in Figure 4. Since the six coils are symmetrical, only the results of the A1 coil are shown in this paper. The numbering sequence of each turn of the studied coil is shown in Figure 1, and this arrangement will be used throughout Section 4. Although this paper is only a numerical simulation study, it is worth mentioning that our group has now manufactured the stator of the generator, which is shown in Figure 5.
AC Loss of HTS Coils under the Rated Operation
The induced AC loss on HTS armatures will directly affect the efficiency of the generator and thereby increase the cost of cooling. Hence, accurately and quickly estimating the AC loss of the HTS generator is urgent and beneficial. The T-A formulation model has offered a convenient tool to solve this problem. In this section, the AC loss of HTS coils in the generator during the rated operation will be calculated and discussed.
The calculated AC loss values of several representative turns, which are placed at different locations in the HTS coil, are summarized in Table 2. It can be seen that different turns of the coil have significantly different AC loss values. The calculated current waveform of the A1 coil in one cycle at the rated condition is shown in Figure 6, and four typical time moments, A, B, C and D, are chosen to show the calculation results.
Figure 6. Current waveform of the studied A1 coil in one cycle.
The critical current of the HTS tapes is mainly influenced by the magnetic field perpendicular to the tape surface. Figure 7 shows the perpendicular magnetic flux density magnitude distributions of the area around the HTS tapes. First, due to the lower coil being closer to the rotor than the upper coil, its surface magnetic field is significantly larger than that of the upper coil. The turn in the lower coil has a greater AC loss value than the corresponding turn with the same distance to the stator iron teeth in the upper coil (e.g., turn 42 and turn 84). In addition, the perpendicular magnetic field in region A1+ is much greater than that in the A1− region. Although the average perpendicular magnetic field of turn 42 in the A1− region is less than turn 21, the average perpendicular magnetic field of turn 42 in the A1+ region is much greater than turn 21, which makes the AC loss of turn 42 greater than turn 21.
Figure 8 shows the normalized current density distribution of the HTS tapes. The normalized current density distribution is similar to the perpendicular magnetic field distribution. The critical current degradation is more severe in each turn of the lower coil due to the greater perpendicular magnetic field, so the current penetration into the interior of the HTS tapes is more severe, making the AC loss greater than in the upper coil. Turn 42 in the lower coil suffers the maximum surface perpendicular magnetic field and has the minimum critical current because it is nearest to the iron teeth, which attract the flux lines. Its current penetration is also the most severe and it has the greatest AC loss value.
The generator is three-phase and symmetrical: the current and magnetic field of each coil differ only in phase angle, so the AC loss is the same for all six coils. The calculated total AC loss of the six HTS coils is 229.38 W. Considering that the cooling penalty is about 10 [28], the loss ratio is about 5.29%.
Transient Electromagnetic-Thermal Behavior of HTS Coils under the L-L-L Fault
In this section, the worst short circuit fault at the generator output terminal is studied: a sudden three-phase short circuit fault during no-load operation. When a short circuit fault occurs at the moment the electromotive force of phase A passes through zero, the maximum fault current occurs in phase A [16], which is the worst scenario for the A1 HTS coil studied in this paper.
When the fault happens, the generator short circuit fault protection system will act to cut off the fault current to remove the fault. Therefore, it is only necessary to pay attention to whether the short circuit fault will cause permanent harm to the superconducting coil within a short period of time after the fault occurs.
The short circuit fault occurs at t = 0.04 s of the simulation and the circuit breaker can clear the fault in 0.08 s [17], which means that longer simulation is unnecessary. Figure 9 depicts the variations in the total current, the critical current (CC), the superconducting layer current (SLC) and the copper layer current (CLC) of turn 42 and turn 43 of the A1 coil before and after the occurrence of the worst short circuit fault. The scale of the x-axis has been changed to make it easier for the reader to observe the waveform at 0.04-0.06 s. Figure 10 depicts the variations in the ohmic loss values and temperature values of turn 42 and turn 43 after the occurrence of the worst short circuit fault.
Turn 42 and turn 43 are chosen because the former has the lowest critical current and the most obvious temperature rise of all the turns of the coil, while the latter has the highest critical current. As will be shown below, although the total current is the same for both of them, during a short circuit fault their critical current values differ because of their different magnetic field environments. This initial difference in critical current leads to their different subsequent copper layer shunting and temperature rise phenomena in each HTS tape.
It can be clearly seen that the total current increases rapidly after the occurrence of the short circuit fault. The critical current of turn 42 is much lower than that of turn 43 because of the different magnetic field environments on their surfaces. When the total current first exceeds the CC of turn 42 (0.0428 s), the excess current flows through the copper layer, producing ohmic loss and raising the temperature. However, turn 43 is still in the superconducting state. As the short circuit current increases further, the total current exceeds the CC of turn 43 (0.0436 s); due to its high critical current, the copper layer current of turn 43 is less than that of turn 42, resulting in less ohmic loss and a lower temperature rise. For turn 43, due to the non-periodic current component in the short circuit transient process, a peak value (0.0471 s) appears in the first half-cycle after the short circuit fault happens. With this component gradually decaying to zero, the waveform eventually settles into a steady state. Since there is not much of a temperature rise in turn 43 after the first and also the highest peak current (0.0471 s), the subsequent smaller current peaks (for example, at the 0.0671 s moment) also cannot induce much of a temperature rise in turn 43.
As for turn 42, after the 0.0471 s moment, although the total current is decreasing, the CC is still decreasing as well and the CLC can still increase for a short time. This phenomenon is caused by the temperature rise due to the ohmic loss: a higher temperature means a lower CC, making the copper layer shunting more severe and accelerating the temperature rise. After the 0.0495 s moment, the temperature of turn 42 rises to the critical temperature. Turn 42 loses its superconducting characteristic, and all of the short circuit current flows through the copper layer.
A large amount of ohmic loss in a short time may damage the HTS coil, and this phenomenon deserves vigilance. One solution is to design an HTS coil with a greater margin to ride through the short overcurrent transient; however, this method is not economical. Another is to allow the current to exceed the critical current value for a short time, as long as the temperature rise is limited and the HTS coil can return to the superconducting state after the fault disappears [29].
For the whole HTS coil, although the temperature rise in the turn with the lowest critical current is obvious, the highest temperature at t = 0.12 s is not high enough to exceed the melting points of the materials of the HTS coil. As long as the machine short circuit protection device can remove the fault, the HTS coil will not be damaged.
Effects of the Copper Stabilizer Thickness on the Thermal Stability of the HTS Coil
Copper stabilizer can improve the thermal stability of the 2G HTS tapes [30]. In this section, the effects of the copper stabilizer thickness on the thermal stability of the HTS coil during the worst short circuit fault are numerically studied. Figure 11 depicts the variations in the total current, the critical current and the copper layer current of turn 42 after the occurrence of the worst short circuit fault, with copper stabilizer thicknesses of 10 µm and 20 µm. Figure 12 depicts the ohmic loss values and the temperature values of turn 42 after the occurrence of the worst short circuit fault with copper stabilizer thicknesses of 10 µm and 20 µm.
It is found that, as the copper stabilizer thickens, the temperature rise becomes weaker. The temperature rise in the copper layer over a short period can be considered approximately proportional to the ohmic loss volume density (ignoring, to simplify the explanation, the heat transfer from the copper layer to the other layers). The ohmic loss volume density is proportional to the square of the current density of the copper layer, so the temperature rise in the copper layer can be regarded as approximately inversely proportional to the square of the thickness of the copper stabilizer. Thus, although the time at which the total current exceeds the critical current is almost the same in the two cases, the final temperature is lower for the thicker copper stabilizer.
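Making the scaling argument above explicit (a rough estimate that ignores inter-layer heat transfer; w, the tape width, and t_cu, the copper stabilizer thickness, are assumed symbols):

$$ \Delta T \approx \frac{Q_{cu}\,\Delta t}{d\,C}, \qquad Q_{cu} = \rho_{cu}\,J_{cu}^{2} = \rho_{cu}\left(\frac{i_{cu}}{w\,t_{cu}}\right)^{2} \;\;\Rightarrow\;\; \Delta T \propto \frac{1}{t_{cu}^{2}} $$

for a fixed shunted current i_cu, which is why doubling the stabilizer thickness roughly quarters the short-time temperature rise.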
However, though the thicker copper stabilizer improves the thermal stability of the HTS coil in the event of a short circuit fault, the engineering current density also decreases. For the HTS generator designer, this requires the use of an electromagnetic-thermal coupling numerical simulation model to make trade-offs between the engineering current density and the thermal stability of the HTS tapes. One noteworthy phenomenon is that the total currents in the two cases eventually diverge after the short circuit fault occurs. The total current in both cases is the same at the beginning of the short circuit fault because the coil is initially in the superconducting state and the coil resistance is almost zero; the short circuit impedance is then mainly the inductive reactance of the coil, which is the same for the two cases. Then, as the temperature rises, the coil resistance is gradually dominated by the copper layer resistance and the short circuit impedance becomes greater. For the thicker copper stabilizer case, the copper layer resistance is smaller, so the short circuit impedance is smaller, causing a greater short circuit current. Moreover, since the copper layer loss is proportional to the square of the current, at t = 0.12 s the ohmic loss of the 20-µm copper stabilizer case even slightly exceeds that of the 10-µm case.
Conclusions
In this paper, an electromagnetic-thermal coupling numerical study of a synchronous generator with 2G HTS armatures is conducted. The main results in this paper are summarized as follows:
First, by coupling the field part model with the circuit part model, the AC loss of HTS coils during rated operation is calculated.
Second, the transient electromagnetic-thermal behavior of the HTS coil under the worst short circuit fault is studied. The results show that although the total current is the same for each turn of the HTS coil during the short circuit fault, the critical current value of each turn is different due to their diverse surrounding magnetic fields. This difference in the critical current value at the beginning leads to their subsequently different copper layer shunting and temperature rise phenomena. The turn with the lowest critical current will eventually quench completely, and the temperature rise is obvious, but it is not high enough to exceed the melting points of the materials of the HTS coil. It is concluded from this work that, if the generator short circuit protection device can remove the fault as soon as possible, the HTS coil in this high-speed HTS synchronous generator will not suffer unrecoverable quench and be damaged.
Last but not least, the simulation results show that a thicker copper stabilizer improves the thermal stability of the HTS coil in the event of a short circuit fault, but for the HTS generator designer, this requires the use of a simulation model to make trade-offs between the engineering current density and the thermal stability of the HTS tapes. | 2020-08-06T09:05:31.356Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "c85d31b27dc06a4040dc2743a344964bde7972e7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/15/5228/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5e683f1cfc91fb95731e8f72da1982c1516b9294",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
36066422 | pes2o/s2orc | v3-fos-license | Disorders of cerebrovascular angioarchitectonics and microcirculation in the etiology and pathogenesis of Alzheimer ’ s disease
There have recently appeared many reports dedicated to cerebral hemodynamics disorders in AD. However, certain specific aspects of cerebral blood flow and microcirculation during this disease are not fully understood. This research focuses on the identification of particular features of cerebral angioarchitectonics and microcirculation at preclinical and clinical AD stages and on the determination of their importance in AD etiology and pathogenesis. 164 patients participated in the research: Test Group—81 patients with different AD stages; Control Group— 83 patients with etiologically different neurodegenerative brain lesions with manifestations of dementia and cognitive impairment but without AD. All patients underwent: assessment of cognitive function (MMSE), severity of dementia (CDR) and AD stages (TDR), laboratory examination, computed tomography (CT), magnetic resonance imaging (MRI), brain scintigraphy (SG), rheoencephalography (REG) and cerebral multi-gated angiography (MUGA). All Test Group patients, irrespective of their AD stage, had abnormalities of the cerebral microcirculation manifested in dyscirculatory angiopathy of Alzheimer’s type (DAAT), namely: reduction of the capillary bed in the hippocampus and frontal-parietal regions; development of multiple arterio-venous shunts in the same regions; early venous dumping of arterial blood through these shunts with simultaneous filling of arteries and veins; development of abnormally enlarged lateral venous trunks that receive blood from the arterio-venous shunts; anomalous venous congestion at the border of frontal and parietal region; increased loop formation of distal intracranial arterial branches. Control group patients did not have combinations of such changes. These abnormalities are specific for AD and can affect amyloid beta metabolism contributing to its accumulation in the brain tissue and thereby stimulating AD progression.
INTRODUCTION
According to the Alzheimer's Association in 2013, one of eight Americans older than 60 has memory impairments [1].The number of patients suffering from Alzheimer's disease (AD) has been constantly growing in different parts of the world.In the US alone, the number of patients with AD aged 65 and older is expected to increase from 5.4 million to 13.8 -16 million by 2050 [2].
AD has for long been considered a purely neurodegenerative disease, so the research has mainly focused on structural changes in the brain tissue during this disease [3][4][5].The introduction of such radiological methods as CT, MRI and PET has made great progress in neuroimaging allowing to study in vivo the changes that occur in the brain tissue during various neurodegenerative processes as well as to differentiate various structural lesions [6][7][8][9].The use of biomarkers in the diagnosis of AD has recently allowed visualizing the accumulation of amyloid-beta and tau [10][11][12][13].
Compared to research aimed at understanding morphological lesions, the study of the brain vascular system in AD is much less developed.Back in the 1930s, F. Morel, using the material of postmortem autopsy, revealed the presence of cerebral vascular changes in AD and described dysoric or drusoidal angiopathy [14].His extremely important study went unnoticed, and there has been practically no research in this area [15].Only recently there have appeared significant studies aimed at investigating cerebral blood flow abnormalities in AD [16][17][18][19][20][21] which has resulted in the overall recognition of the fact that hypoperfusion and changes in the morphology of capillaries are involved in the etiology and pathogenesis of AD [22][23][24][25][26][27][28].
As a result, several hypotheses have appeared concerning the role of microvascular changes in the etiopathogenesis of the disease [28][29][30][31][32][33][34][35][36], which points to the need for further research in this area. This has recently been repeatedly stated in the guidelines of the National Institute on Aging/Alzheimer's Association [37,38].
All these methods have their pros and cons, but they do not reflect the true antemortem state of the arterial, venous and microvascular systems of the brain in AD.
Pathomorphological research allows histological, cytological and cytochemical analysis of the brain. Research conducted on transgenic animals allows the exploration of models of the disease. Antemortem study of cerebral blood flow abnormalities is difficult enough: SPECT, PET and perfusion MRI demonstrate averaged results and show the perfusion of the whole brain, being unable to visualize the cerebral vascular system.
The present research focuses on visualizing, by means of MUGA, the features of cerebral angioarchitectonics and the microcirculatory disorders occurring both at the preclinical AD stage and during its progress, as well as on comparing these disorders with the changes in the vascular system of the brain in patients suffering from other common neurodegenerative diseases. The objective of this research is to systematize cerebrovascular disorders in AD.
Patient Selection
The whole research has been carried out with the approval of the Ethics Committee and with the consent of the examined patients and their relatives.
The research involved 164 patients from 28 to 79 years old (average age 67.5), 76 (46.34%) male and 88 (53.66%) female patients, suffering from various neurodegenerative brain lesions accompanied by the development of dementia and cognitive impairment of varying severity.
Patient Examination
The examination plan included the following methods:
• Assessment of cognitive functions was conducted by means of the Mini-Mental State Examination (MMSE) [44].
• Clinical determination of the severity of dementia was made according to the Clinical Dementia Rating scale (CDR) [45].
• Tomographic identification of AD stages was performed among Test Group patients using the Tomography Dementia Rating scale (TDR) during CT and MRI examination [46][47][48][49]. This method allows the determination not only of clinical but also of pre-clinical AD stages through grading the severity of atrophic changes in the temporal lobes of the brain.
• Laboratory examination was performed according to the schemes generally accepted in interventional neuroangiology, including coagulologic, biochemical and clinical tests.
• Scintigraphy (SG) of the brain was carried out on a gamma camera (Ohio Nuclear, US) following the classical method in dynamic and static mode with Tc-99m pertechnetate, 555 MBq [20,21,30,32].
• Rheoencephalography (REG) was conducted by means of "Reospektr-8" (Neurosoft, Russia) in accordance with the standard automated method, with identification of abnormalities of pulse blood flow in the cerebral hemispheres [22,32].
• CT and MRI of the brain were performed on "Somatom" (Siemens), "Hi Speed" (GE), "Tomoscan" (Philips) and "Apetro Eterna" (Hitachi) following the ATAA (Advance Tomo Area Analysis) procedure, which allows determining the volume of the temporal lobes of the brain with subsequent grading of the severity of atrophy as a percentage of the total natural weight of the unaffected lobe tissue [21,[46][47][48].
• Cerebral multi-gated angiography (MUGA) of the brain was performed on an "Advantx" (GE) unit following the classical method via transfemoral access. Synchronously, taking into account the start and rate of administration, 10-12 ml of Omnipaque 350 was introduced intra-carotidally and 7-8 ml intra-vertebrally. The registration was carried out in direct and lateral projections in constant subtraction mode at a speed of 25 frames per second. Further on, frame-by-frame analysis of the angiograms received in each contrast phase was conducted [20,21,32,42]. Capillary density contrast analysis was performed at the corresponding phase by an automatic method using the computer program "Angio vision", based on the determination of the degree of blackening of the corresponding part of the image [20,21,42].
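A minimal sketch of the kind of ROI densitometry the "Angio vision" step performs, i.e., the mean "degree of blackening" of a region on a subtracted angiogram frame. The program itself is proprietary; the function and parameter names here are illustrative assumptions, not its actual API:

```python
import numpy as np

def capillary_phase_density(frame: np.ndarray, roi: tuple) -> float:
    """Return mean darkening of an ROI, normalized to 0..1.

    frame -- 8-bit grayscale subtraction frame (0 = black, 255 = white)
    roi   -- (row_slice, col_slice) selecting e.g. the hippocampal region
    """
    region = frame[roi].astype(float)
    # Contrast agent darkens the image, so invert: higher value = denser filling.
    return float((255.0 - region).mean() / 255.0)

# Example: compare a hippocampal ROI against a reference ROI on one capillary-phase frame.
frame = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)  # stand-in image
hippocampus = (slice(200, 260), slice(150, 230))
reference = (slice(60, 120), slice(150, 230))
ratio = capillary_phase_density(frame, hippocampus) / capillary_phase_density(frame, reference)
print(f"relative capillary contrast: {ratio:.2f}")  # <1 would suggest reduced filling
```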
Test Group
According to CT and MRI:
• AD-specific neurodegenerative changes in the brain, manifested in temporal lobe atrophy, which at different stages of the disease lead to a 4%-62% reduction in tissue mass, were detected in 81 (100%) cases (Table 1);
• individual cerebral neurodegenerative changes were observed in a limited number of cases (Table 1).
According to SG, the slowing of blood flow in the cerebral hemispheres was detected in 81 (100%) cases.
According to REG, pulse blood volume reduction in the carotid system was detected in 81 (100%) cases.
Elevated level of lipids in the blood was detected in 34 (41.98%) cases.
Hypercoagulation was observed in 37 (45.68%) cases. An analysis of 2 × 2 contingency tables for each of the parameters under study was made using the chi-square test, which showed statistically significant differences for each of the parameters under study except for unocclusive hydrocephalus.
According to MUGA:
• development of abnormally enlarged lateral venous trunks that receive blood from the arterio-venous shunts in the temporal and fronto-parietal region - 73 (90.12%) patients (Figure 5);
• anomalous venous congestion at the border of the frontal and parietal lobes caused by the increased blood flow from the arterio-venous shunts - 74 (91.36%) patients (Figures 5 and 6);
• increased looping of distal intracranial arterial branches - 64 (79.02%) patients (Figure 3).
Control Group
According to CT and MRI:
• changes in the brain manifested in the local atrophy of the temporal lobes (specific to AD) were not identified in any case (Table 1);
• cerebral neurodegenerative changes were found in almost all cases (Table 1).
According to SG, the slowing of blood flow in the cerebral hemispheres was detected in all 83 (100%) cases.
According to REG, pulse blood volume reduction in the carotid system was detected in all 83 (100%) cases.
Elevated level of lipids in the blood was detected in 71 (85.54%) cases.
Hypercoagulation was also observed in Control Group patients. According to MUGA:
• reduction of the capillary bed in the temporal and fronto-parietal regions was not detected in any case;
• multiple arterio-venous shunts in the basin of the front villous artery and in the basin of the arterial branches supplying the fronto-parietal cerebral cortex were not detected in any case;
• some scattered areas of low capillary contrast at the level of the white matter of the brain - 36 (43.37%) patients;
• multiple scattered arterio-venous shunts at the level of the white matter of the brain - 37 (44.58%) patients;
• scattered early venous dumping, mainly in the white matter - 38 (45.78%) patients;
• development of abnormally enhanced venous trunks and anomalous venous congestion was not detected in any case;
• increased loop formation of distal intracranial arterial branches - 5 (6.02%) patients.
Thus, Control Group patients did not have any vascular and microcirculatory disorders of the brain similar to those detected among Test Group patients (Figures 7 and 8).
DISCUSSION
For antemortem studies of the brain vascular system, cerebral MUGA has been used. Due to its specificity and high image resolution, the method allows obtaining high-quality vascular imaging and provides an opportunity for stepwise study of the state of the arterial, capillary and venous beds and of the architectonics of the existing arterial and venous shunts and blood flows [20,21,30,32,50]. By means of this method, we were able to detect AD-specific disorders of blood circulation and microcirculation in the temporal and fronto-parietal brain regions among Test Group patients.
We have named these disorders "dyscirculatory angiopathy of Alzheimer's type" (DAAT) [32,50]. They do not occur among Control Group patients; they are specific for AD and non-specific for other neurodegenerative diseases accompanied by the development of dementia and cognitive impairment.
DAAT is the combination of the following:
• reduction of the capillary bed in the temporal and fronto-parietal brain regions;
• development of multiple arterio-venous shunts in the basin of the front villous artery supplying the hippocampus and in the basin of the arterial branches supplying the fronto-parietal brain regions;
• early venous dumping of arterial blood through these shunts with simultaneous filling of the arteries and veins in the temporal and fronto-parietal regions;
• development of abnormally enlarged lateral venous branches that receive blood from the arterio-venous shunts in the temporal and fronto-parietal region;
• anomalous venous stasis at the border of the frontal and parietal lobes due to excessively high blood influx from the arterio-venous shunts;
• increased loop formation of distal intracranial arterial branches.
In fact, DAAT is a vascular sign of AD and is an important criterion in the differential diagnosis of neurodegenerative diseases [21,32,50].
The obtained data concerning the capillary disorders in AD are confirmed by morphological studies conducted by S. J. Baloiannis and I. S. Baloiannis [51]. These authors used electron microscopy to reveal capillary degeneration and a significant decrease in the number of capillaries per cubic centimeter of hippocampus tissue in patients with AD, compared to the hippocampus tissue of people of the same age but without the disease.
In our opinion, DAAT progression begins with abnormalities of the cerebral microcirculation, manifested in capillary bed reduction, which leads to the reduction of arterial blood flow to the cerebral tissues. The result is chronic hypoperfusion of the temporal and fronto-parietal regions, which causes AD-specific brain tissue hypoxia; this is also supported by other authors' studies [28]. The process of capillary bed reduction is accompanied by a compensatory opening of arterio-venous shunts, which relieve the arterial bed by dumping blood to the venous bed.
Such compensatory opening of arterio-venous shunts is observed in various human organs and tissues during the reduction of arterial blood flow, for example in peripheral arterial occlusions [52]. The opening of arterio-venous shunts causes arterio-venous dumping and makes it possible to balance the inflow and outflow of blood at a site with reduced arterial or capillary permeability.
With AD, the overloading of the venous bed with arterial blood leads to abnormal enlargement of the lateral veins of the temporal and fronto-parietal region and to subsequent blood congestion.
These hemodynamic changes may in their turn affect the metabolism of amyloid-beta and cause its deposition and accumulation in cerebral tissue, thereby stimulating AD progression [21,30,32]. Our hypothesis is confirmed by the work of B. V. Zlokovic et al. [53,54], in which the authors, carrying out research on genetically modified mice, showed that an experimental model of AD is characterized by the accumulation of vasculotoxic and neurotoxic molecules in the brain tissue, which causes hemodynamic instability, reduces capillary blood flow and promotes the development of specific cerebral hypoxia. This process leads to an increase in accumulation and a decrease in removal of beta-amyloid, with subsequent dysfunction and neurodegeneration.
(Fragment of Table 2: scattered arterio-venous shunts at the level of the white matter of the brain, Test Group 0 vs. Control Group 37, p < 0.005; early venous dumping in the white matter of the brain, 0 vs. 38, p < 0.005. An analysis of 2 × 2 contingency tables for each of the parameters under study was made using the chi-square test, which showed statistically significant differences for each of the parameters under study.)
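For illustration only, a minimal sketch of the 2 × 2 chi-square comparison described in the Table 2 fragment above, assuming, as the subgroup percentages reported in the text suggest, group sizes of 81 (Test) and 83 (Control); this is not the authors' original computation:

```python
from scipy.stats import chi2_contingency

# 2x2 table for one parameter: rows = Test Group (n = 81) and Control Group (n = 83),
# columns = finding present / absent. Counts follow the Table 2 fragment:
# scattered arterio-venous shunts in the white matter, 0 Test vs. 37 Control patients.
table = [[0, 81],        # Test Group: present, absent
         [37, 83 - 37]]  # Control Group: present, absent

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction applied for 2x2
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")          # p falls far below the reported 0.005
```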
The data obtained resonate with research by A. Dorr et al. [26] conducted on transgenic mice with an experimental model of AD. In their study of the material, the authors revealed degeneration, diameter reduction and increased tortuosity of microvessels in the cortex of the tested animals, which was interpreted as the result of beta-amyloid deposition in the vascular wall and paravasal tissue.
According to our results obtained among Test Group patients, the severity of arterial, venous, and microvascular abnormalities does not depend on the timing of the onset of AD symptoms, the severity of dementia or the severity of cognitive impairment [21,30,32]. These abnormalities are almost equally observed among patients with clinical AD stages (TDR-1, TDR-2, TDR-3) and among those with a preclinical AD stage (TDR-0). Moreover, similar changes occur among 8-12-year-old children of AD patients [55,56]. This suggests that microvascular changes in the brain are likely to develop before the process of beta-amyloid deposition. It seems unlikely that, in genetically determined and sporadic forms of AD, beta-amyloid deposition in the vascular wall and brain tissue starts early, decades before any clinical manifestations of the disease.
Interestingly, there are studies showing that a high content of amyloid-beta in the brain tissue can be observed in healthy people and does not always lead to dementia and AD [57].
For antemortem studies of CBF and cerebral perfusion abnormalities in AD, SPECT, PET and MRI technologies are usually used; at present, however, they are not sensitive enough to determine vascular and microcirculatory abnormalities, and it is therefore difficult to identify these abnormalities in 100% of cases [27,58]. In contrast to MUGA, these technologies do not make it possible to visualize the vascular system of the brain and to explore its parts locally, determining the state of the arteries, capillaries and veins; they only determine total perfusion in a certain area, a lobe or the whole brain.
However, studies using SPECT and PET have shown that progression of AD and cognitive impairment is characterized by a progressive decline of CBF in the temporal, parietal and frontal regions [27,59]; we have obtained similar results not only by means of MUGA but also by means of SG and REG.
When using Perfusion MRI technologies to determine cerebral perfusion, it should be noted that this method, though more advanced than SPECT and PET, also does not make it possible to determine the state of the local vascular system in certain areas of the cerebral tissue. As a result, as with SPECT and PET, total perfusion is determined in some area or lobe of the brain. This particularity of Perfusion MRI technology has led to the appearance of reports describing compensatory enhancement of cerebral perfusion at preclinical and early AD stages but persistent hypoperfusion in the later stages of the disease [27], which potentially confirms our data.
As we have already noted, DAAT progression leads to a natural compensatory opening of arterio-venous shunts with sufficiently powerful dumping of arterial blood to the venous bed. Preclinical and early clinical AD stages progress at a younger age, when the compensatory mechanisms regulating CBF are better expressed and the deposition of amyloid-beta in the brain tissue is still quite small. In this case, when determining the perfusion by means of Perfusion MRI technology, total perfusion is visualized in the temporal lobes of the brain, including the compensatory powerful dumping of arterial blood into the venous bed as well. As a result, the obtained numbers may exceed the norm. This is apparently what the authors observed and interpreted as enhanced perfusion due to more active work of arteries and capillaries. Late AD stages occur in old age, when the compensatory mechanisms of CBF regulation decline and there is a high accumulation of amyloid-beta in the brain tissue, which helps to reduce CBF. As a result, the authors observed the phenomenon of cerebral hypoperfusion, which confirms our data.
Vascular and microvascular changes in AD are always associated with a decrease in size and with atrophic phenomena in the temporal and fronto-parietal brain regions [32,47-50]. It is interesting to note that a tendency for temporal lobe size reduction has been reported in newborn babies who have a high potential risk for the disease, from which the authors conclude that these changes begin to progress in utero [60].
These data indirectly confirm our hypothesis that DAAT is likely to develop before the deposition of amyloid-beta, affecting its excretion process, which may possibly lead to its accumulation. DAAT does not cause AD but only contributes to it, and may perhaps be congenital in nature [55,56].
Thus, we can conclude that the reduction of the capillary bed, abnormal cerebral microcirculation, as well as the accumulation of amyloid-beta are interrelated processes that occur early enough, proceed for a long time, lead to hypotrophic and atrophic changes in the tissue of the temporal and fronto-parietal regions and finally cause AD.
The combination of these changes must be taken into consideration in the examination of patients with AD, the monitoring of the disease course and, naturally, in the development of new methods for treating AD [21,28,30,32,36,42].
Figure 2. Patient O., 72 years old. Angiogram of the left internal carotid artery; TDR-2; lateral projection, capillary phase. Absence of atherosclerotic changes of intracranial vessels. 1: development of a hypovascular area; 2: multiple arterio-venous shunts in the fronto-parietal and temporal regions; 3: development of early venous discharge in the temporal and fronto-parietal region, with simultaneous filling of arteries and veins.
Figure 5. Patient P., 75 years old. Angiogram of the right internal carotid artery; TDR-3; lateral projection, venous phase. 4: development of pathologically enlarged veins that receive blood from arterio-venous shunts in the temporal and fronto-parietal region; 5: blood congestion on the border of the fronto-parietal region.
Figure 6. Patient S., 45 years old. Angiogram of the right internal carotid artery; TDR-1; lateral projection, venous phase. 5: blood congestion on the border of the fronto-parietal region.
Figure 7. Patient P., 61 years old. Angiogram of the left internal carotid artery; lateral projection, arterial phase. Diagnosis: atherosclerosis of cerebral vessels, chronic cerebrovascular insufficiency, CDR-1. Mild atherosclerotic changes; good opacification of capillaries, absence of hypovascular areas in the temporal and fronto-parietal regions; absence of multiple arterio-venous shunts in the temporal and fronto-parietal regions; absence of simultaneous filling of arteries and veins.
Figure 8. Patient P., 61 years old. Angiogram of the right internal carotid artery; lateral projection, venous phase. Diagnosis: atherosclerosis of cerebral vessels, chronic cerebrovascular insufficiency, CDR-1. Absence of pathologically enlarged veins that receive blood from arterio-venous shunts in the temporal and fronto-parietal region; absence of blood congestion on the border of the fronto-parietal region.
The Test Group was divided into the following stages:
• Early AD stage (TDR-1): a group with mild dementia and mild cognitive impairment, previously diagnosed with AD, with a history of the disease not exceeding 2 years; the atrophy of the temporal lobes was 9%-18%, which corresponds to CDR-1 (20-25 MMSE points): 24 (29.63%) patients;
• Middle AD stage (TDR-2): a group with mild dementia and sufficiently persistent cognitive impairment, previously diagnosed with AD, with a medical history of 2 to 6 years; the atrophy of the temporal lobes was 19%-32%, which corresponds to CDR-2 (12-19 MMSE points): 31 (38.27%) patients;
• Late AD stage (TDR-3): a group with fairly severe dementia and gross cognitive impairment, previously diagnosed with AD, with a medical history of 7 to 12 years; the atrophy of the temporal lobes was 33%-62%, which corresponds to CDR-3 (7-11 MMSE points): 17 (20.99%) patients.
The Control Group included:
• a group whose patients had MCI and the symptoms of mild beginning dementia: 18 (21.69%) patients, of whom 13 had CDR-1 and 5 had CDR-2;
• a group with multiple atherosclerotic lesions of the brain, severe vascular dementia and cognitive impairment;
• a group whose patients had individual complaints revealing abnormalities of cerebral hemodynamics: 19 (22.89%) patients, of whom 7 had CDR-1;
• a group with sufficiently severe chronic cerebrovascular insufficiency of atherosclerotic genesis, without gross occlusive vascular lesions of the brain.
Table 1. CT and MRI data in Test and Control Group patients.
• simultaneous filling of the arteries and veins in the temporal and fronto-parietal brain regions: 81 (100%) patients (Figure 2);
• development of abnormally enhanced lateral venous trunks that receive blood from the arterio-venous shunts in the temporal and fronto-parietal region.
Table 2. Vascular disorders of the brain identified among patients of the Test and Control Groups during MUGA. | 2017-08-28T13:42:33.103Z | 2013-12-03T00:00:00.000 | {
"year": 2013,
"sha1": "079950af40683dd114939eff480b83a4c29c5cfe",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=40410",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "079950af40683dd114939eff480b83a4c29c5cfe",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260700841 | pes2o/s2orc | v3-fos-license | What are the digestion and absorption models used to reproduce gastrointestinal protein processes?
Abstract Background: Animal, cell, and in vitro studies have been applied to simulate the human gastrointestinal tract (GIT) and evaluate the behavior of biomolecules. Understanding the stability of peptides and/or proteins when exposed to these physiological conditions of the GIT can assist in the application of these molecules in the treatment of diseases such as obesity. This study describes a protocol of systematic reviews to analyze the methodologies that mimic the digestive and absorptive processes of peptides and/or proteins. Methods: The protocol follows the guidelines described by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P). The search strategies will be applied in the electronic databases PubMed, ScienceDirect, Scopus, Web of Science, Evidence portal, Virtual Health Library, and EMBASE. The intervention group will be formed by in vivo, cell, and in vitro (gastrointestinal simulating fluids) studies of digestion and absorption of peptides and/or proteins presenting a schedule, duration, frequency, dosages administered, concentration, and temperature, and the control group will consist of studies without peptides and/or proteins. The selection of studies, data extraction, and assessment of the risk of bias will be carried out independently by 2 reviewers. For animal studies, the risk of bias will be assessed by the instrument of the Systematic Review Centre for Laboratory animal Experimentation (SYRCLE), and the Office of Health Assessment and Translation (OHAT) tool will be used to assess the risk of bias in cell studies. Results: This protocol contemplates the development of 2 systematic reviews and will assist the scientific community in identifying methods related to the digestive and absorptive processes of peptides and/or proteins. Conclusion: Both systematic reviews resulting from this protocol will provide subsidies for the construction of research related to the clinical application of bioactive peptides and/or proteins. In this context, they will make it possible to understand the gastrointestinal processes during the administration of these molecules, as the gastrointestinal environment can affect their functionality. Therefore, validating the effectiveness of these protocols is important, as it mimics biological conditions in vitro, reducing the use of animals, consistent with the reduction, refinement and replacement (3Rs) program.
Introduction
Several bioactive molecules have emerged as candidates for clinical application in the treatment of obesity and associated diseases, such as bioactive proteins from animals and vegetables, phenolic acids, anthocyanins, flavonoids and tannins, among others. [1][2][3][4] It is known that experiments with animal models, especially rodents, are essential for understanding the mechanisms that trigger diseases, in addition to discovering methods aimed at prevention and/or treatment. [5] Rats and mice stand out for their small size, short biological cycle, lower maintenance cost, and similarity to humans, from anatomical features to shared genes (95%), [6] making it possible that, in the future, the results obtained from these experiments can be reproduced in human trials. [5] However, in addition to animal studies, cell and in vitro experiments have been applied to mimic the human gastrointestinal tract (GIT) and show how biomolecules, especially proteins, behave when exposed to these physiological conditions. [7][8][9][10][11][12][13] In this perspective, it is important to carry out systematic reviews focused on this theme to increase the accuracy of the results obtained in in vivo tests, decreasing the risk of false-negative outcomes and improving the methodological quality of the experiments, among other benefits. [14] The registration of study protocols is thus considered an excellent tool for reducing the accumulation of unreported research and reducing publication bias. [15][16][17][18] Adopting in vitro test protocols is another way to obtain quick and adequate information regarding the application of biomolecules. In the scientific literature, a large number of in vitro protocols is available, with emphasis on digestion and absorption systems. Among their applications, these systems can be used to investigate effects on cells or in models that mimic biological conditions in humans.
Through these protocols, it is possible to generate important data, such as data on the application of the bioactive compound, since the main route of administration in vivo is oral. Several in vitro models are available to mimic the GIT, for example models based on primary cells, monoculture or co-culture systems, and in vitro digestibility systems. [19][20][21] Thus, validating the effectiveness of these protocols is essential, as it allows human biological conditions to be mimicked through in vitro studies, decreasing the use of animals, in compliance with international guidelines such as the 3Rs program (replacement, reduction and refinement), which opposes the unnecessary use of animals and encourages minimizing this practice. [22] Furthermore, according to Smith et al, [23] protocol registration can help standardize clinical research. Thus, this proposed review protocol aims to elaborate 2 systematic reviews to identify the analysis protocols used in animal, cellular, and in vitro standardized models to mimic digestive and absorptive processes of peptides and/or proteins.
Protocol and registration
This protocol was prepared according to the guidelines described by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P). [24] A 17-item checklist was used to improve the quality of the systematic review data. The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on August 25, 2020 (protocol number: CRD42020198709), available at: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020198709.
Eligibility criteria
Peer-reviewed journal articles that meet the eligibility criteria based on the study population, interventions, control, outcomes, and study design (PICOS) [25] will be included in the review. There will be no restriction on the language and year of publication. Review studies and gray literature will not be included.
2.2.1. Inclusion criteria
2.2.1.1. Participants. For this review, original articles resulting from in vivo studies carried out with rats and mice of both sexes and varied ages (pups, young, adult, or aged animals), without restriction of water or diet, will be included, as well as cell and in vitro studies (gastrointestinal simulating fluids).
Types of intervention.
Studies in which the intervention group has been submitted to the administration of peptides, proteins, or gastrointestinal simulator fluids to mimic digestive and absorptive processes of peptides and proteins.
Types of controls.
Studies will be included that present a control group composed of animals, cells, or in vitro experiments without administration of peptides and/or proteins.
2.2.2. Exclusion criteria
2.2.2.1. Types of intervention.
Studies that mimic the processes of digestion and absorption with the application of non-protein molecules associated with peptides and/or proteins will be excluded.
Types of controls.
Studies without a control group.
Outcome measures.
Articles that do not describe the protocol used to simulate gastrointestinal conditions, and studies that do not report a schedule, time of experiment, frequency, dosages administered, concentration, and temperature, will be excluded.
Information sources and literature search
Research strategies will be adopted based on keywords indexed in the Medical Subject Headings (MeSH) (Tables 1 and 2). Two reviews will be elaborated, one related to gastrointestinal digestion and the other to intestinal absorption. The equations will be defined considering the combinations of descriptors and their synonyms related to each review. The descriptors will be accompanied by the Boolean operators OR and/or AND.
The search strategies will be applied in the following electronic databases: PubMed; ScienceDirect; Scopus; Web of Science; Evidence portal; Virtual Health Library; and EMBASE. Two researchers will independently analyze the search keys (preliminary analysis) and will retrieve the studies found at this stage. The results will assist in the assembly of the definitive equation for each electronic database. All articles will be imported into the Rayyan application (version 0.1.0); [26] the migration of articles to this platform will facilitate the removal of duplicate studies.
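As an illustration only, a hypothetical search equation of the kind described above, combining MeSH descriptors with the Boolean operators OR and AND; the definitive search keys of this protocol are those defined from its Tables 1 and 2, not this sketch:

```python
# Hypothetical PubMed-style search equation for the digestion review;
# the descriptors below are illustrative examples, not the protocol's actual search keys.
digestion_query = (
    '("Peptides"[MeSH] OR "Proteins"[MeSH]) '
    'AND ("Digestion"[MeSH] OR "gastrointestinal simulation"[Title/Abstract]) '
    'AND ("Models, Animal"[MeSH] OR "In Vitro Techniques"[MeSH] OR "Cell Line"[MeSH])'
)
print(digestion_query)
```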
Our preliminary evaluation will be conducted based on the search for articles that mimic the digestion and absorption processes with peptides and/or proteins. For this purpose, 2 researchers will independently read the titles and abstracts, respecting all eligibility criteria.
The secondary evaluation will be carried out through the complete reading of the articles considered eligible for inclusion, with emphasis placed on the methods in order to identify analysis protocols that comply with the objectives of the systematic review (Fig. 1). Discrepancies will be resolved with the support of a third researcher. The references of the included articles will also be reviewed to identify potentially eligible studies not found in the database search (manual search). For the management of references, the Mendeley software [27] will be used.
Data extraction
Two reviewers will independently extract the data from each article. A spreadsheet will be created, and the data of interest for each review (gastrointestinal digestion and intestinal absorption) will be inserted; for this purpose, the Microsoft Excel program will be used. The reviewers will extract data related to the protocol adopted to simulate gastrointestinal processes (experiment time, frequency, dosages administered, concentration, and temperature) from all studies.
For animal studies, the following will also be extracted: animal species, animal lineage, stage of life, and sex; for cell studies: cell origin, the genetic background of the cells (normal or cancerous), and cell line types (intestinal epithelium); and for in vitro articles: composition of the gastrointestinal simulator fluids, type of substance used, enzymes used, and time and temperature of the gastrointestinal processes. For any relevant data missing from the manuscripts, the reviewers will attempt to contact the study authors. If the necessary information is not obtained, the data will be excluded from the analysis and addressed in the discussion section.
Risk of bias
Two reviewers will independently assess the risk of bias for each study entered. A third reviewer will resolve the discrepancies. For animal studies, the risk of bias assessment will be performed using the instrument of the Systematic Review Centre for Laboratory animal Experimentation (SYRCLE). [28] The Office of Health Assessment and Translation (OHAT) tool will be used to assess the risk of bias in cell studies (U.S. Department of Health and Human Services, 2019). [29] The reviewers will be previously trained and calibrated to ensure uniformity in the evaluation of the criteria.
Data analysis and synthesis
For both reviews, summaries of the results will be provided, together with an assessment of the similarity of the protocols in the simulation of digestion and/or absorption in vivo. The data will be presented in summary tables and in narrative form to describe the characteristics of the included studies. These data will be structured by the type of protein used to simulate digestion and absorption, animal species and lineage, dose, type of administration, treatment time, and mode of experimentation. Other experimental conditions will also be considered, including volume of simulator fluids, types of substances used in each stage of digestion and/or absorption, cultivation conditions, pH, time and temperature, and the techniques and tools used to study digestion and absorption in vivo and in vitro. It is noteworthy that no meta-analysis will be applied for the type of study proposed.
Discussion
This protocol article aims to describe the development of 2 systematic reviews. In the first review, it is expected to identify the protocols related to protein digestion processes; in the second, the absorptive processes of peptides and/or proteins with emphasis on protocols that mimic these processes in humans involving studies with animals, cells, and in vitro (gastrointestinal simulating fluids).
From the studies, which will have their content and methodological quality critically evaluated, it will be possible to assist the scientific community in using models that can assess any protein, for any purpose, whenever digestion and/or absorption processes need to be analyzed.
Researchers highlight the bioactive potential of hydrolyzed proteins. [30][31][32] Protein hydrolysis procedures involve chemical and enzymatic methods, among which changes in pH and temperature and the use of enzymes can be highlighted. [32] Therefore, it is crucial to develop studies to understand the behavior of orally administered proteins, since oral administration is better tolerated. [33] However, some drugs have reduced bioavailability and bioaccessibility, and parts of these drugs are peptides and proteins. There are limitations in applying proteins due to their low solubility, low permeability, rapid degradation in the GIT, and inability to permeate the mucus barrier. [34] Therefore, it is essential to carry out studies to understand the behavior of plant protein administration in the GIT, as their functionality can be modified. [35] Methods based on in vitro systems and cell cultures are frequently used for simulation studies of the gastrointestinal environment. Through them, it is possible to mimic exclusive characteristics of the GIT, which helps in understanding various effects, in particular in the identification of peptides and drugs that can be absorbed by intestinal epithelial cells. [36][37][38] Although in vitro studies have a great impact on assessing efficacy, stability, and biological activity, the availability of such peptides and proteins for oral delivery in humans must also be verified in in vivo experiments. [38,39] Several types of research with animal models, mainly rodents, have shown that the degree of genetic similarity with human beings allows extrapolating the scientific results obtained to potential treatment effects in humans. [6] In this sense, numerous models are available to mimic the GIT and show how these biomolecules, especially proteins, behave when exposed to this scenario/environment. [40,41] Bringing together studies that adopt these protocols is essential, since a vast series of protocols mimicking the gastrointestinal processing of different food components can be found in the literature. However, there is a scarcity of systematic studies of models that mimic the digestion and absorption of proteins.
Thus, the reviews in question intend to share with the scientific community which protocols have been used to simulate the digestion and absorption of proteins in models in vivo, with cells, or in vitro (gastrointestinal simulating fluids). These studies are promising tools because they can help in understanding how bioactive proteins, after digestion and absorption, act in the management of obesity and its comorbidities. It will be important to guide researchers on the feasibility of applying new bioactive proteins to humans. | 2021-07-28T01:49:51.985Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "224b80370983ae5c50d3ed826c10c07396e600e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "224b80370983ae5c50d3ed826c10c07396e600e4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15150408 | pes2o/s2orc | v3-fos-license | Major dietary patterns and risk of acute myocardial infarction in young, urban Pakistani population
Objective: To investigate the role of dietary intake in the development of premature acute myocardial infarction (AMI) in a hospital-based Pakistani population in Karachi. Methods: In a case-control study, 203 consecutive patients (146 males and 57 females) with their first AMI and age below 45 years were enrolled with informed consent. Similarly, 205 gender- and age-matched (within 3 years) healthy adults were also included as controls. Dietary intake of both cases and controls was assessed by using a simple 14-item food frequency questionnaire. Using factor analysis, 3 major dietary patterns (prudent dietary pattern, combination dietary pattern and western dietary pattern) were identified. Fasting plasma/serum of both cases and controls were analyzed for homocysteine, folate, vitamin B12, blood Pb, ferritin, cholesterol, LDL-cholesterol, HDL-cholesterol and triglycerides. ANOVA and conditional logistic regression were used to predict the association of dietary patterns with AMI. Results: Consumption of a prudent diet, characterized by high consumption of legumes, vegetables, wheat, chicken and fruits, is protective against the risk of premature AMI. Moderate to high consumption of a combination diet, characterized by high intake of eggs, fish, fruits, juices and coffee, was associated with a decreased risk of AMI. No association was observed between the western diet, characterized by high intake of meat, fish and tea with milk, and the risk of AMI. Conclusions: Consumption of a prudent dietary pattern and a combination dietary pattern is protective against the risk of AMI in a Pakistani population.
INTRODUCTION
There is growing evidence that coronary artery disease (CAD) is seen in a relatively younger Pakistani population. In a published report, 28.3% of patients with acute myocardial infarction (AMI) in 17 coronary care units (CCU) in all 4 provinces of Pakistan were found to be younger than 45 years of age. 1 In another study in Faisalabad, 35% of AMI patients admitted to the CCUs of the Divisional Headquarter Hospital and the Faisalabad Institute of Cardiology were found to be below the age of 45 years. 2 These reports are indicative of the early onset of this disease in Pakistan. An unhealthy dietary intake has been shown to increase the risk of AMI globally. [3][4][5] Low consumption of fruits and vegetables and a high intake of dietary fat have been identified (among others) as risk factors for the development of premature CAD. 6 Since Pakistanis are known to consume a high-fat diet and low amounts of green leafy vegetables, 7-9 we embarked on investigating the role of dietary intake and its association with premature AMI in a hospital-based study conducted in Karachi, Pakistan.
Most researchers examine the relationship between nutrient intake or the intake of individual food items (such as fish) and AMI, while food consumption is a complex phenomenon, with most individuals consuming in their daily diet a mix of food items with both protective and harmful effects with respect to the development of diseases such as AMI. 10 The study of dietary patterns has emerged as an important component of nutrition science, as it allows researchers to look at the clustering of food items in the diet. It also enables easy communication of health messages to the population, as these are based on dietary patterns rather than on nutrients or individual food items, which are less meaningful. This study is a relatively large, hospital-based study conducted in Karachi, Pakistan to assess the association between dietary patterns and AMI in this population.
Participants:
Two hundred and three consecutive patients (146 males and 57 females) with their first AMI and below the age of 45 years, admitted to the National Institute of Cardiovascular Diseases (NICVD), Karachi from June 2010 to July 2011, were included in this study with informed consent. Criteria for premature AMI were: age 18-45 years; both males and females; a confirmed diagnosis of AMI on the basis of clinical examination, ECG and biochemical data; and no history of consuming B-vitamins (B6, B12 and folate) during the last 4 months.
Individuals who were found to be pregnant, having malabsorption syndrome, or suffering from tuberculosis, liver disease, uremia or cancer were not included in the study, because these chronic diseases/conditions may lead to compromised dietary consumption and therefore could function as confounders of diet-AMI relationships. Similarly, 205 gender- and age-matched (within 3 years) healthy controls from the personnel of the Aga Khan University and other health-care facilities in Karachi were also included in the study as controls. All these controls were not suffering from any of the above mentioned diseases or conditions and were not taking B-vitamin supplements. Both cases and controls belonged to a low socio-economic group, as 83% of controls and 90% of cases had a monthly household income of less than US$ 150.
Determination of food frequency: Using a simple 14-item food frequency questionnaire (FFQ), the eating habits of both cases and controls were determined. This food frequency questionnaire has been used previously in a population-based study. 11 Information about the number of times each commonly used food item was consumed per month, per week or per day was recorded, and then the frequency of each food/drink item was converted to its consumption per day, as described previously. 11 To clarify further, if there was a response of 6 servings per month of a food item, then it was converted to 0.2 serving per day. The study had been approved by the Ethics Review Committees of the Aga Khan University as well as NICVD. Since no nutrient analysis was done, a food composition table was not used.
Measurements of biochemical parameters: Ten mL of fasting blood was obtained within 24 hours of admission and analyzed for serum folate, homocysteine, glucose, total cholesterol, low density lipoprotein (LDL)-cholesterol, high density lipoprotein (HDL)-cholesterol, triglycerides and ferritin using commercially available kit methods (Roche Diagnostics, USA), while serum vitamin B12 was assayed using a radioassay. 12 The minimum limits of detection for serum/plasma folate, homocysteine, glucose, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglycerides, ferritin and vitamin B12 were 0.64 ng/mL, 4 mmol/L, 2 mg/dL, 9.7 mg/dL, 3.9 mg/dL, 3.0 mg/dL, 8.9 mg/dL, 0.5 ng/mL and 50 pg/mL, respectively.
Statistical analysis: Using factor analysis, major dietary patterns were developed, and then conditional logistic regression analysis was used to predict the association of dietary patterns with AMI.
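A minimal sketch of the frequency-to-daily-servings conversion described above (e.g., 6 servings per month becomes 0.2 serving per day); the 30-days-per-month convention is an assumption consistent with that worked example:

```python
def servings_per_day(count, period):
    """Convert an FFQ response to servings per day.

    count  : reported number of servings
    period : 'day', 'week' or 'month'
    """
    days = {'day': 1, 'week': 7, 'month': 30}  # 6/month -> 0.2/day, as in the text
    return count / days[period]

print(servings_per_day(6, 'month'))  # 0.2
```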
Factor analysis was used to identify common dietary patterns from dietary intake data. For the generation of uncorrelated factors, the factors were rotated orthogonally. Determination of the number of factors to be retained in the model was carried out on the basis of the eigenvalues (>1), the scree plot and factor interpretability. 13 The analyses were run in the Statistical Package for Social Sciences® (SPSS, version 16 for Windows; SPSS Inc., Chicago, IL, USA) using the data reduction procedure. As a result, three major dietary patterns were identified in this population, which were similar to the dietary patterns identified earlier in a Pakistani population-based study in Karachi. 11 The three dietary patterns were then divided into quartiles. All cases and controls were matched for age and gender, and conditional logistic regression analysis was carried out to determine the association between each dietary pattern and AMI while adjusting for BMI, ferritin, total cholesterol, triglycerides, LDL-cholesterol and HDL-cholesterol. Values have been presented as OR (95% CI). Continuous variables have been presented as means ± SD. Analysis of variance (ANOVA) was used for comparing means of quartiles across each dietary pattern. A p-value of <0.05 was considered significant.
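An illustrative sketch of this workflow (orthogonal varimax rotation, the eigenvalue > 1 retention rule, and quartile scoring), using hypothetical column names and random data in place of the 14 food items; it is not the authors' SPSS procedure:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# One row per subject (203 cases + 205 controls = 408), one column per food item;
# the values here are random placeholders, not study data.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((408, 14)), columns=[f"food_{i}" for i in range(14)])

# Retention rule: count eigenvalues of the correlation matrix that exceed 1.
eigvals = np.linalg.eigvalsh(df.corr().to_numpy())
n_factors = int((eigvals > 1).sum())

fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(df)
scores = fa.transform(df)                                  # per-subject pattern scores
quartiles = pd.qcut(scores[:, 0], 4, labels=[1, 2, 3, 4])  # quartiles of pattern 1
print(n_factors, pd.Series(quartiles).value_counts().sort_index().to_dict())
```

In practice the scree plot and factor interpretability would also guide the choice of the number of factors, as the paragraph above notes.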
RESULTS
Eating habits of both cases and healthy controls were assessed using a 14-item food group frequency questionnaire (Table-I). Mean intake per day of all food items, with standard deviations, is reported in Table-I. Some of the food items used in this questionnaire have been previously studied for association with B-vitamins, plasma homocysteine and cardiovascular disease. 11,14 Factor analysis revealed 3 major dietary patterns. These were labeled as "prudent dietary pattern", which was characterized by high consumption of legumes, cooked and uncooked vegetables, wheat, chicken and fruits; "combination dietary pattern", which was characterized by high consumption of eggs, fish, fruits, juices and coffee; and "western dietary pattern", which was characterized by high intake of meat, fish and tea with milk (Table-I). Each of these 3 dietary patterns was further classified into quartiles.
The descriptive information presented in Table-II shows that mean concentrations of serum ferritin and LDL-cholesterol were lower in the highest quartile of the prudent dietary pattern compared to the lowest quartile (p-value <0.01). On the other hand, mean concentrations of total cholesterol, HDL-cholesterol and triglycerides were higher in the highest quartile of the prudent diet compared to the lowest quartile (p-value <0.01). Regarding the western dietary pattern, individuals in the highest quartile (quartile 4) appear to have increased concentrations of serum ferritin, LDL-cholesterol and triglycerides compared to individuals in the lowest quartile; however, the values were not found to be statistically significant. It was observed that consumption of the prudent diet was protective against the risk of premature AMI; however, the p-value for trend was non-significant (p-value for trend > 0.05) (Table-III). It was also observed that moderate to high consumption of the combination diet was protective against the risk of premature AMI after adjusting for BMI, ferritin, total cholesterol, triglycerides, LDL-cholesterol and HDL-cholesterol (p-value < 0.05). Compared to the first quartile, the adjusted odds ratios were 0.19 (95% CI, 0.07-0.54) and 0.27 (95% CI, 0.10-0.77) for the third and fourth quartiles, respectively (p-value for trend < 0.05). Consumption of the western diet was not found to be associated with the risk of premature AMI.
DISCUSSION
Three major dietary patterns, labeled as prudent dietary pattern, combination dietary pattern and western dietary pattern, were obtained after factor analysis of the 14 food items. The western dietary pattern was similar to the dietary pattern defined in an earlier study in Karachi, 15 while the prudent diet was similar to the patterns generated from the INTERHEART study, the Health Professional Follow-up Study and the Nurses' Health Cohort. 3,14,16 The high loading of eggs in the combination diet in the present study is, however, different from the afore-mentioned research studies, because eggs have been part of the western diet in those studies. Other investigators have also reported a combination dietary pattern where a mix of food items belonging to different food groups has been observed. 17 In a previous communication from our laboratory, an association of 3 dietary patterns with hyperhomocysteinemia has been reported. 11 However, in the present study an association of some of these dietary patterns has been found with premature AMI in a Pakistani population.
Findings of this study are consistent with previous results showing that consumption of a prudent diet is protective against the development of heart disease after adjusting the model for BMI and other covariates, as reported in the INTERHEART study 3 and by Hu et al.; 16 however, no adverse association was observed between consumption of a western diet and the risk of developing premature AMI after adjusting for the above-mentioned covariates. For the combination dietary pattern, moderate to high consumption was associated with a reduced risk of AMI. This is indicative of a protective effect for the individuals in the third and fourth quartiles of consumption of this dietary pattern. A combination dietary pattern in a Pakistani population is a unique finding of the present study. It is possible that moderate to high consumption of animal-source food items combined with plant-source food items exerts a beneficial effect, while no protective effect is observed when the intake is small.
In the present investigation, some biomarkers of myocardial infarction were associated with the prudent dietary pattern, while other biomarkers and physical measures were not significantly associated. Furthermore, no association was found between the combination dietary pattern and the western dietary pattern and any of the biomarkers and physical measures. Significantly low level of serum ferritin observed for individuals in the highest quartile of prudent dietary pattern compared to the lowest quartile of intake is consistent with the fact that dietary iron from plant sources is less bio-available compared to iron from animal sources, 18 consequently individuals consuming more of the prudent dietary pattern were likely to have low serum ferritin levels.
A lower mean level of LDL-cholesterol and a higher mean level of HDL-cholesterol in individuals in the highest quartile of prudent dietary intake, compared to those in the lowest quartile of intake, was an important observation in the current study. This is consistent with the findings of a study in Korea in which individuals in the lowest quartile of a similar dietary pattern, which they termed the "Korean Healthy pattern", had higher concentrations of total cholesterol and triglycerides compared to individuals in the highest quartile of intake. 19 Relatively high serum ferritin levels observed in the highest quartile of western dietary intake compared to the lowest quartile of intake (though not statistically significant due to large variance) suggest that consumption of red meat could be one of the major contributors of iron to body stores. 20,21 High body iron status has been reported to be associated with premature AMI in the Pakistani population. 22 Therefore, it is conjectured that increased consumption of a western diet rich in red meat could be contributing to high body iron stores, thereby increasing the risk of AMI. However, results of the present investigation did not show an association of the western dietary pattern with the risk of AMI in this population. Results obtained in this investigation must be viewed within the context of certain limitations of this study. This was a case-control study, and there might be a possibility of recall bias between the cases and controls in the dietary intake assessment. Moreover, it is likely that the cases may have changed their diets due to other preceding conditions such as hypertension or an abnormal lipid profile, which would minimize the association between AMI and diet. However, such a situation would lead to attenuation in the association between dietary patterns and AMI, suggesting that our results might be conservative estimates. The FFQ that was used for the collection of dietary data has not been validated; however, similar FFQs have been used in other studies in Pakistan and they appear to have face validity. 11 Furthermore, as a short (14-item) food intake questionnaire was used, it was not possible to estimate and consequently adjust our analysis for total energy intake. However, adjustment was made in the analysis for significant determinants of energy intake, i.e. age, sex and BMI, 23 and this should lend credence to our findings. Matching of cases and controls for age and sex in the design of the study has been its main strength.
This was a hospital-based study conducted at a large cardiovascular diseases hospital, but larger, community-based studies that would include participants from urban as well as rural areas of Pakistan are required for further confirmation of our findings.
CONCLUSIONS
Consumption of a prudent diet, which is rich in legumes, vegetables, wheat, chicken and fruits, is protective against the risk of premature AMI in a Pakistani population. Moderate to high intake of the combination diet, which is rich in eggs, fish, fruits, juices and coffee, is associated with a reduced risk of AMI. The western dietary pattern, which is characterized by high intake of meat, fish and tea with milk, does not appear to be associated with the risk of AMI in the Pakistani population.
ACKNOWLEDGEMENT
The study has been supported by a grant from Pakistan Academy of Sciences/Higher Education Commission to Mohammad Perwaiz Iqbal. Analysis and interpretation by Mohsin Yakub were supported by a grant # 614 of the Bill and Melinda Gates Foundation. | 2018-04-03T05:44:36.633Z | 2015-09-01T00:00:00.000 | {
"year": 2015,
"sha1": "2df0c9952f503e27874e740c473c5c1f4922c38e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12669/pjms.315.7690",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2df0c9952f503e27874e740c473c5c1f4922c38e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118464765 | pes2o/s2orc | v3-fos-license | Energy-pressure relation for low-dimensional gases
A particularly simple relation of proportionality between internal energy and pressure holds for scale-invariant thermodynamic systems, including classical and quantum Bose and Fermi ideal gases. One can quantify the deviation from such a relation by introducing the internal energy shift as the difference between the internal energy of the system and the corresponding value for scale-invariant gases. We discuss general thermodynamic properties associated with the scale invariance and provide criteria under which the internal energy shift density is a bounded function of temperature. We then study the internal energy shift and deviations from the energy-pressure proportionality in low-dimensional models of gases interpolating between the ideal Bose and the ideal Fermi gases, focusing on the Lieb-Liniger model in 1d and on the anyonic gas in 2d. In 1d the internal energy shift is determined from the thermodynamic Bethe ansatz integral equations, and an explicit relation for it is given at high temperature. Our results show that the internal energy shift is positive, that it vanishes in the two limits of zero and infinite coupling (respectively the ideal Bose and the Tonks-Girardeau gas), and that it has a maximum at a finite, temperature-dependent value of the coupling. Remarkably, at fixed coupling the energy shift density saturates to a finite value for infinite temperature. In 2d we consider systems of Abelian anyons and non-Abelian Chern-Simons particles: as can be seen also directly from a study of the virial coefficients, in the usually considered hard-core limit the internal energy shift vanishes and the energy is just proportional to the pressure, with the proportionality constant being simply the area of the system. Soft-core boundary conditions at coincident points for the two-body wavefunction introduce a length scale and induce a non-vanishing internal energy shift.
I. INTRODUCTION
At thermodynamic equilibrium, pure homogeneous fluids (found in regions of the phase diagram hosting a single phase) are characterized by an equation of state f(V, P, T) = 0 relating pressure, volume and temperature. In general, an equation of state can be solved with respect to any of the three quantities V, P, or T, thus providing different ways to characterize the equilibrium properties of the system: for example, solving for V = V(T, P), the partial derivatives have the physical meaning of the thermal expansion coefficient α_V ≡ (1/V)(∂V/∂T)_P and of the isothermal compressibility β_T ≡ −(1/V)(∂V/∂P)_T [1][2][3][4].
The equations of state for classical and quantum ideal gases are the starting point for understanding the thermodynamics of interacting gases [1][2][3][4]: in particular, the equation of state for the classical ideal gas is approximately valid for the low-density region of any real gas. In general, the internal energy of an interacting gas is a function of both temperature and pressure as a result of forces between the molecules. If such forces did not exist, no energy would be required to alter the average intermolecular distance, i.e. no energy would be required to implement volume and pressure changes in a gas at constant temperature.
It follows that in the absence of molecular interactions, the internal energy of a gas would depend on its temperature only. These considerations lead to the definition of an ideal gas as the one whose macroscopic behavior is characterized by the two equations: P V = N k B T and E = E(T ), where E is the internal energy.
The determination of the deviation of thermodynamic properties of non-ideal gases from the ideal behavior is in general a long-standing problem: a commonly used approach to quantify such a deviation is to define the shift of thermodynamic quantities as the difference with respect to the corresponding value of the same quantities in the ideal case. Historically, several techniques have been developed in order to encode deviations from the ideal gas law.
Equations of state which are cubic in the volume feature a simple formulation together with the ability to represent, for instance, both liquid and vapor behavior. The first cubic equation of state was the Van der Waals equation [5],
(P + a/v^2)(v − b) = k_B T (v denotes the volume per particle),
accounting for attractive intermolecular (or Van der Waals) forces and a finite excluded volume through its positive constants a and b, respectively.
In the high-temperature regime, the deviations from the ideal equation of state can be expressed in a more general way, called the virial expansion, obtained by expressing the pressure as a power series in the density ρ in the form
P/(k_B T) = ρ + B_2(T) ρ^2 + B_3(T) ρ^3 + · · · ,   (1)
where B_n(T) is the n-th virial coefficient [1][2][3][4]. Many other similar functional forms have been proposed in various contexts for the equation of state of interacting gases, with the virial equations among the first to receive a firm theoretical foundation. In fact, virial expansions can be derived from first principles of statistical mechanics, and such a derivation has also the merit of enlightening the physical significance of the various coefficients: the second virial term written above arises on account of interactions between pairs of molecules and, more generally, the n-th term depends upon the interactions among k bodies, with k ranging from 2 to n. In this paper we focus on the study of the energy-pressure relation in low-dimensional systems: for ideal gases, the internal energy is simply proportional to the product P V of pressure and volume, with the proportionality constant depending on the dimensionality d of the system. As we discuss in Section II, this simple relation between energy and pressure holds for any scale-invariant thermodynamic system, i.e. for systems having an N-body Hamiltonian H_N that scales as H_N → λ^{−α} H_N under a dilatation with λ-linear scaling factor, and boundary conditions at coincident points on the wave-function ψ_N(x_1, · · · , x_N) which are true also for any rescaled wave-function ψ̃_N(x_1, · · · , x_N) ≡ ψ_N(λx_1, · · · , λx_N). The first condition means that H_N is a homogeneous function of the coordinates: ideal classical and quantum gases are particular cases of this class of systems, since their Hamiltonian is scale-invariant with α = 2.
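To make the structure of Eq. (1) concrete, the following sketch evaluates a virial series truncated at a chosen order; the coefficient values used below are placeholders, not those of any model discussed in this paper:

```python
def virial_pressure(rho, kB_T, B):
    """Pressure from the truncated virial expansion of Eq. (1).

    rho   : number density
    kB_T  : k_B * T
    B     : dict {n: B_n(T)} of virial coefficients for n >= 2
    """
    series = rho + sum(Bn * rho**n for n, Bn in B.items())
    return kB_T * series

# Placeholder coefficients, for illustration only.
print(virial_pressure(rho=0.1, kB_T=1.0, B={2: -0.3, 3: 0.05}))
```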
To quantify deviations from the ideal energy-pressure relation, in the following we introduce the internal energy shift as the difference between the internal energy of the system and the corresponding value of the scale-invariant (including ideal) gases. Low-dimensional quantum systems provide a natural playground for the study of the internal energy shift, since in 1d and 2d systems it is possible to naturally interpolate from the thermodynamic properties of an ideal Bose gas to those of an ideal Fermi gas, and determine how deviations from the ideal gas behavior affect thermodynamic quantities. We will consider the Lieb-Liniger (LL) model in 1d and the anyonic gas in 2d. For these two systems the physical nature of the interpolation between the Bose and Fermi statistics seems to be formally different:
• in the LL model (a 1d model of interacting bosons), the interpolation between ideal bosonic and fermionic behavior is driven by the increase of the repulsive interaction among the particles.
• in 2d anyonic gases, one can instead explicitly interpolate between the two canonical bosonic and fermionic statistics by tuning the statistical parameter.
However, the anyonic statistics incorporates the effects of interaction in microscopic bosonic or fermionic systems (statistical transmutation) and, from this point of view, it is again the variation of the underlying microscopic interactions that induces the interpolation between Bose and Fermi ideal gas.
So, our first paradigmatic example of interpolating behavior between ideal Bose and Fermi gases will be the LL model of one-dimensional bosons interacting via a pairwise δ-potential: the equilibrium properties of this model can be exactly solved via Bethe ansatz both at zero [6] and finite temperature [7]. In the exact solution of this model, a crucial role is played by the coupling γ, which turns out to be proportional to the strength of the two-body δ-potential: the limit of vanishing γ corresponds to an ideal 1d Bose gas; on the other side, the limit of infinite γ corresponds to the Tonks-Girardeau (TG) gas [8][9][10], having (local) expectation values and thermodynamic quantities of a 1d ideal Fermi gas [11][12][13][14]. Two features make the LL model attractive for the purposes of studying the internal energy: first, its integrability [15,16], crucial for getting non-perturbative exact results all along the crossover from weak to strong coupling regimes; second, its experimental realization by means of ultracold atom set-ups [11][12][13][14][17], where bosons are confined within 1d atom waveguides which freeze almost all transverse degrees of freedom [18][19][20]. The coupling strength of the LL system can be tuned through the Feshbach resonance mechanism [21].
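For orientation, the δ-interacting Hamiltonian underlying the LL model and the standard definition of the dimensionless coupling γ are usually written as below; this is the textbook convention (with g the δ-potential strength and n = N/L the density), not an equation quoted from this paper:

```latex
H_{\mathrm{LL}} = -\frac{\hbar^{2}}{2m}\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}}
                  + g\sum_{i<j}\delta(x_{i}-x_{j}),
\qquad
\gamma = \frac{m g}{\hbar^{2} n},
```

so that γ → 0 reproduces the ideal 1d Bose gas and γ → ∞ the TG limit, consistently with the discussion above.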
Our second paradigmatic example will be the 2d ideal anyonic gases in which we will study the energy-pressure relation in the interpolation between 2d Bose and Fermi gases induced by the pure statistical Aharonov-Bohm interactions. We will consider Abelian and non-Abelian Chern-Simons particle systems, and both models admit a soft-core generalization that can be understood as the result of an additional contact interaction besides the pure statistical one. As it is well known, quantum two-dimensional systems of indistinguishable particles have the peculiarity of admitting generalized braiding statistics, because of the nontrivial topological structure of braiding transformations defined over the space-time ambient manifold. Ordinary bosonic and fermionic quantum statistics in 2d admit the generalization represented by Abelian anyons, where an elementary braiding operation is encoded in terms of a multiplicative phase factor acting on the multi-anyonic scalar wavefunction [22][23][24][25][26]. A different generalization of the standard quantum statistics is represented by non-Abelian anyons, described by a multi-component many-body wavefunction and corresponding to higher-dimensional representations of the braid group: non-Abelian anyons generalize the parastatistics, exactly in the same manner in which Abelian anyons generalize Bose and Fermi statistics.
Thermodynamic properties of ideal Abelian anyonic gas (assuming hard-core boundary conditions for the wavefunction at coincident points) were studied in the low-density regime [27]: the exact expression therein obtained for the second virial coefficient is periodic and non-analytic as a function of the statistical parameter. Different approaches have been subsequently used in order to approximate the values of a few higher virial coefficients, including the semiclassical approximation [28] and Monte Carlo computations [29] (for more references see [25,30]). The thermodynamics of a system of free non-Abelian anyons appears as a harder task and, so far, only results about the second virial coefficient are available [31][32][33][34]. In Section VI we also study the shift of the internal energy of soft-core anyonic gases: a family of models for "colliding" anyons (featuring generalized soft-core boundary conditions) can be introduced as the set of well-defined self-adjoint extensions of the Schrödinger anyonic Hamiltonian. The mathematical arguments underlying the possibility of such a generalization were discussed in [35], and the second virial coefficient of soft-core Abelian anyons was studied in [36][37][38]. The corresponding self-adjoint extensions for the non-Abelian anyonic theory have been as well discussed [39][40][41][42]. The model of soft-core anyons is here considered as an explicit example of scaling symmetry breaking due to the presence of an intrinsic length scale.
Among all thermodynamic properties of ideal classical and quantum gases, the linear relation between internal energy and pressure is particularly simple, and in this paper we study how it is affected by the various interactions represented by the low-dimensional models mentioned above. For extensive computer simulations of the energy-pressure relation in 3D classical systems of interacting particles, see [43][44][45][46][47][48] (and [49] for the definition of "Roskilde systems").
The paper is organized as follows: in Section II we show that a simple relation of proportionality between internal energy and pressure holds for scale-invariant thermodynamic systems, including classical and quantum (Bose and Fermi) ideal gases, and we discuss some simple consequences of scale-invariance in generic dimensionality, including some useful properties of isoentropic transformations. In Section III we set criteria under which the internal energy shift per particle of an imperfect gas at fixed density saturates towards a finite value as the temperature becomes very large. These criteria are expressed in terms of the second virial coefficient and by distinguishing the different dimensionalities. Section IV is devoted to defining the models we are going to study in the next Sections: the LL model and the different anyonic models. The internal energy shift of the LL model is studied in Section V by using thermodynamic Bethe ansatz integral equations: the comparison with the 1d hard-core bosons is also discussed. Section VI deals with the internal energy shift of anyonic gases, and we present results for both the hard- and the soft-core anyonic gases.
Our conclusions are drawn in Section VII, while more technical material is presented in the Appendices.
II. SCALE-INVARIANT SYSTEMS
A proportionality between internal energy E and pressure P holds for any scale-invariant thermodynamic system. Indeed, let us consider a (classical or quantum) system of N particles in a volume V with Hamiltonian H_N(V). It is intended that in this Section and the next, we denote by V the length L in 1d and the area A in 2d. We define a classical system to be scale-invariant when the Hamiltonian transforms as
H_N → λ^{−α} H_N   (2)
under a dilatation of a λ-linear scaling factor, such that the coordinate x of the particles transforms as x → λx (the momentum p transforms correspondingly as p → λ^{−1} p): therefore the Hamiltonian is a homogeneous function of its spatial coordinates. For quantum systems, we define them to be scale-invariant if they fulfill condition (2) and respect at the same time scale-invariant boundary conditions for the N-body wave-function ψ_N at contact points, i.e. conditions on the wave-function ψ_N(x_1, · · · , x_N) which are true also for any rescaled wavefunction ψ̃_N(x_1, · · · , x_N) ≡ ψ_N(λx_1, · · · , λx_N), where λ ≠ 0 is a real constant. A typical example of scale-invariant boundary conditions for the N-body wave-function at contact points is given by the hard-core condition.
In the canonical ensemble the pressure P and the internal energy E are defined as
P = (1/β) ∂ log Z/∂V ,  E = −∂ log Z/∂β ,   (3)
where as usual β = 1/k_B T and Z is the partition function:
Z = Tr e^{−β H_N(V)} .   (4)
For any d-dimensional scale-invariant system of volume V, the map
(V, β) → (λ^d V, λ^α β)   (5)
leaves log Z invariant in the thermodynamic limit (since βH_N → βH_N). With the notation
log Z(λ^d V, λ^α β) = log Z(V, β) ,   (6)
relations (3) imply
E = (d/α) P V .   (7)
Notice that Eq. (7) is valid both for classical and quantum scale-invariant systems, and follows from the invariance of the partition function under the map (5): therefore the scale-invariance of boundary conditions at contact points is required in the quantum case. So, for instance, in Section VI, we will show that, for the 2d ideal anyonic gas, Eq. (7) only holds in the case of hard-core boundary conditions, while it is violated in the soft-core case.
From Section IV onwards, we study some low-dimensional quantum systems, since we are primarily interested in interpolating between the two ordinary quantum statistics.
From the considerations above, it follows that any quadratic scale-invariant Hamiltonian fulfills the scaling property H_N → λ^{−2} H_N under a dilatation of a λ-linear scaling factor and therefore enjoys the property
E = (d/2) P V .   (8)
A few examples of systems for which property (8) is known are the following: i) d-dimensional ideal classical and quantum (Bose and Fermi) gases have quadratic dispersion relations, and they all obey the well known relation E = (d/2)P V, as can also be deduced from the virial theorem [50]; ii) as reviewed in Section V, the 1d LL Bose gas has total internal energy E = P L/2 (since d = 1 and α = 2) for any temperature in both its scale-invariant limit regimes: the non-interacting limit (γ → 0) and the fermion-like Tonks-Girardeau limit (γ → ∞), which correspond respectively to the zero and infinite coupling associated with the δ-like contact interactions; iii) for the 3d Fermi gas at the unitary limit the relation E = (3/2)P V holds as well [51] (see the discussion in [52]).
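To make point i) concrete in d = 2, where Eq. (8) reduces to E = P A, the following minimal numerical check (our sketch, not part of the paper) computes E and P A for the 2d ideal quantum gas in the grand canonical ensemble, assuming units in which 2πℏ²/m = 1 so that λ_T² = β:

```python
# Sanity check of E = (d/2) P V for a 2d ideal quantum gas (d = 2, alpha = 2), i.e. E = P A.
import mpmath as mp

def ln_Xi(beta, z, A=1.0, eta=+1):
    """ln of the grand partition function; eta=+1 bosons, eta=-1 fermions.
    Units chosen so that lambda_T^2 = beta (i.e. 2*pi*hbar^2/m = 1)."""
    return (A / beta) * eta * mp.polylog(2, eta * z)

beta, z = 1.3, 0.6
PA = ln_Xi(beta, z) / beta                   # beta * P * A = ln Xi
E  = -mp.diff(lambda b: ln_Xi(b, z), beta)   # E = -d(ln Xi)/d(beta) at fixed fugacity
print(PA, E)                                 # the two numbers coincide: E = P A in 2d
```

The same check with eta = -1 (fermions) gives the identical agreement, as expected from the statistics-independence of (8).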
As discussed in Section VI, 2d hard-core ideal anyonic gases also obey Eq. (7) for general values of the statistics parameters.
We derive now some scaling properties for scale-invariant d-dimensional systems undergoing adiabatic reversible thermodynamic processes (as above, the argument is carried out in the quantum case for the sake of generality). Let us consider the scale-invariant thermodynamic system confined in a region subjected to a quasi-static scaling transformation of the volume and the temperature (V, T) → (λ^d V, λ^{−α} T), under which the ratios E_i/k_B T are left invariant (the proof is the same as for (67)), as long as the N-particle Hamiltonian H_N gains a λ^{−α} factor under a λ-factor scaling of its spatial coordinates. The total entropy S of the system remains invariant under such a process: indeed the energy scales proportionally to λ^{−α} [because of the transformation (T, E_i) → (λ^{−α} T, λ^{−α} E_i) of temperature and energy levels], exactly as required for any isoentropic process fulfilling relation (7). This last statement results from E = (d/α) P V and P = −∂_V E(N, S, V), which give (α/d) E_isoentr/V = −dE_isoentr/dV and therefore
E_isoentr ∝ V^{−α/d} ,   (10)
i.e. E ∝ λ^{−α}, along the series of equilibrium states of a given isoentropic process. We conclude that adiabatic reversible expansions and compressions (as well as arbitrary isoentropic processes followed by thermal relaxation) of scale-invariant systems are characterized by the following transformations for internal energy and temperature:
(E, T) → (λ^{−α} E, λ^{−α} T) .   (11)
It is worth to point out three immediate consequences of (11): • as ideal gases do, scale-invariant systems undergoing an isoentropic process comply with the invariance of P V^γ̃, where γ̃ ≡ 1 + α/d.
• isoentropic transformations of scale-invariant systems leave the dilution parameter x ≡ ρ λ_T^d invariant, by taking into account a generalized definition of the thermal wavelength depending on the dispersion relation and the dimensionality [53].
• the internal energy associated with equilibrium states of an isoentropic process is proportional to the temperature:
E = y N k_B T ,   (12)
where y remains constant along the isoentropic curve. Notice that the factor y would depend solely on the dilution parameter x = ρ λ_T^d for a given system if Eq. (12) is considered over the entire phase diagram.
It is worth to point out that for scale-invariant Hamiltonian systems, the dependence of the virial coefficients upon the temperature is very simple, i.e.
B_k(T) = b_k λ_T^{d(k−1)} ,   (13)
with temperature-independent coefficients b_k.
As will be discussed in Section VI, the fact that the B_k(T) respect Eq. (13) implies the validity of the relation (7) at all orders of the virial expansion (within its radius of convergence).
We conclude this Section by showing that it is also possible to deduce a property of the internal pressure for scale-invariant systems. The internal pressure π_T is defined in general as the volume derivative of the internal energy in isothermal processes [54]:
π_T ≡ (∂E/∂V)_T ;   (14)
the internal pressure is a measure of attractiveness for molecular interactions and is related to the (thermodynamic) pressure P by the expression
π_T = T (∂P/∂T)_V − P .   (15)
Eq. (15) is usually referred to as the thermodynamic equation of state, because it expresses the internal pressure just in terms of fundamental thermodynamic parameters P, V, T. For general scale-invariant systems, plugging (7) and (13) into the definition (14) [or equivalently Eq. (13) into Eq. (15)] leads to the following result in the dilute regime:
π_T = −(d/α) (P − P_ideal) ,   (16)
where P_ideal is the pressure of the ideal Boltzmann gas having the same V, T.
Relation (16) shows that, in the dilute limit, the internal pressure (14) can be exactly regarded as the interaction contribution acting in favor of (or against) the external pressure P and counterbalancing the thermal contribution P_Boltzmann.
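To see relation (16) emerge, one can plug a single scale-invariant virial correction into the thermodynamic equation of state; the following symbolic sketch (ours, not the paper's) checks the algebra:

```python
# Symbolic check: with B_2(T) ~ T^(-d/alpha), as dictated by scale invariance (Eq. (13)),
# pi_T = T dP/dT - P reduces to pi_T = -(d/alpha) (P - P_ideal) at first order in rho.
import sympy as sp

T, rho, kB, b2, d, alpha = sp.symbols('T rho k_B b_2 d alpha', positive=True)
B2 = b2 * T**(-d/alpha)             # scale-invariant second virial coefficient
P_ideal = rho * kB * T
P = rho * kB * T * (1 + B2 * rho)   # pressure to first order in the density
pi_T = sp.simplify(T * sp.diff(P, T) - P)
print(sp.simplify(pi_T + (d/alpha) * (P - P_ideal)))   # -> 0
```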
III. ENERGY-PRESSURE RELATION FOR IMPERFECT GASES AT HIGH TEMPERATURE
Hereafter we denote by ρ the number density, by E_res the internal energy shift and by e_res(ρ, T) the internal energy shift density (per particle) of a generic classical or quantum imperfect gas, defined as
E_res ≡ E − (d/α) P V ,  e_res(ρ, T) ≡ E_res/N .   (17)
The internal energy shift represents a measure of the deviation from the relation (7) derived in Section II for scale-invariant systems whose Hamiltonians are homogeneous functions of the coordinates.
In many textbooks, deviations from ideal gas behavior are quantified by introducing the so-called departure functions (or residual thermodynamic quantities) (see, for instance, [55]). Such departure functions are obtained by taking the difference between the considered quantity and the corresponding value for the ideal gas, when two among the P, V and T parameters are kept fixed, typically P and T [55]. The quantity defined in (17) does not coincide with such departure functions (notice that the only scale-invariant system whose conventionally defined departure internal energy vanishes is the ideal gas). An example of a system which is scale-invariant but with non-vanishing (conventionally defined) departure internal energy is the hard-core anyonic gas, as discussed in Section VI: on the contrary, the quantity defined in (17) can be considered as the correct residual quantity measuring deviations from scale-invariance. To avoid possible misunderstandings, we decided to refer to E_res [defined in (17)] as the internal energy shift rather than the departure internal energy.
In the low-density regime, the thermodynamic quantities can be associated with the virial coefficients {B_n(T)} of the equation of state P = P(ρ, T). Following statistical mechanics textbooks [1][2][3][4], in the d-dimensional case the following virial expansions for the pressure P, the Helmholtz free energy A_H, the Gibbs free energy G, the entropy S, the internal energy E and the enthalpy H are obtained:
Pressure : P = ρ k_B T [1 + B_2(T) ρ + B_3(T) ρ² + ⋯] ,
Helmholtz free energy : A_H = N k_B T [log(ρ λ_T^d) − 1 + B_2(T) ρ + ⋯] ,
Gibbs free energy : G = N k_B T [log(ρ λ_T^d) + 2 B_2(T) ρ + ⋯] ,
Entropy : S = N k_B [1 + d/α − log(ρ λ_T^d) − (B_2 + T dB_2/dT) ρ + ⋯] ,
Internal energy : E = (d/α) N k_B T − N k_B T² ρ dB_2/dT + ⋯ ,
Enthalpy : H = (1 + d/α) N k_B T + N k_B T ρ (B_2 − T dB_2/dT) + ⋯ .   (18)
Below we state the necessary conditions (proven in Appendix A) under which the energy shift of a (classical or quantum) gas remains bounded in the limit of high temperatures, i.e.
lim sup_{T→∞} |e_res(ρ, T)| < ∞ .   (19)
For simplicity, hereafter, we limit ourselves to the case of quadratic dispersion relation α = 2, for which such conditions are (with c_1 and c_2 real coefficients): • For d = 1:
B_2(T) = c_1 β^{1/2} + c_2 β + o(β) .   (20)
In Section V the explicit expression of B_2 for the LL model as a function of the coupling constant γ [56] is reported: for finite γ, it is in general c_2 ≠ 0, so that e_res is bounded but non-vanishing at high temperature.
• For d = 2:
B_2(T) = c_1 β log β + c_2 β + o(β) .   (21)
In Section VI the 2D anyonic gas is studied, and shown to have vanishing internal energy shift in the high-temperature limit. This is in agreement with (21) because, denoting by α the statistical parameter, both the hard-core and the soft-core second virial coefficients scale linearly in β, B_2^{h.c.}(α, T) = c_1(α) β and B_2^{s.c.}(α, T, ε) = c'_1(α, ε) β + o(β) [34,38], where c_1, c'_1, c_2 are suitable functions; hence both are subleading w.r.t. β log β in the β → 0 limit.
• For d > 2:
B_2(T) = c_1 β^{d/2} + c_2 β + o(β) ,   (22)
and in this case lim_{T→∞} e_res(ρ, T) = ρ c_2 (1 − d/2), which vanishes when c_2 = 0.
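The conditions above rest on the expansions (18); as a cross-check of the internal-energy line of (18), the following sketch (assuming quadratic dispersion α = 2, truncation at B_2, and dropping the constant prefactor of λ_T, which does not affect E) derives E from the virial-expanded Helmholtz free energy:

```python
# Derive E of Eq. (18) from A_H = N kB T [log(rho lambda_T^d) - 1 + B_2 rho],
# with lambda_T^d ~ T^(-d/2) for quadratic dispersion (alpha = 2).
import sympy as sp

T, V, N, kB, d = sp.symbols('T V N k_B d', positive=True)
B2 = sp.Function('B_2')(T)
rho = N / V
A_H = N*kB*T*(sp.log(rho * T**(-d/2)) - 1 + B2*rho)
S = -sp.diff(A_H, T)                  # S = -(dA/dT)_{V,N}
E = sp.simplify(A_H + T*S)            # E = A + T S
print(sp.simplify(E - (d/2*N*kB*T - N*kB*T**2*rho*sp.diff(B2, T))))   # -> 0
```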
IV. THE MODELS
In this Section we recall the main properties of the LL and anyonic models studied in the next Sections: in Subsection IV A we introduce the 1d Lieb-Liniger model, in Subsection IV B we outline the main thermodynamic properties of an ideal gas of Abelian anyons (and its soft-core generalization), while in Subsection IV C we briefly introduce the system of non-Abelian Chern-Simons (NACS) particles, i.e. a model of non-Abelian anyons.
A. Lieb-Liniger model
The LL Bose gas is described by a Hamiltonian for N non-relativistic bosons of mass m in one dimension interacting via a pairwise δ-potential [6], having the form
H_LL = −(ℏ²/2m) Σ_{i=1}^N ∂²/∂x_i² + λ Σ_{i<j} δ(x_i − x_j) ,   (23)
where λ is the strength of the δ-like repulsion (we consider here only positive or vanishing values of λ: λ ≥ 0). The effective coupling constant of the LL model is given by the dimensionless quantity
γ ≡ m λ/(ℏ² ρ) ,   (24)
where ρ = N/L is the density of the gas. We also use the notation
c ≡ m λ/ℏ² ,   (25)
so that γ = c/ρ. The limit γ ≪ 1 corresponds to the weak coupling limit: in this regime the Bogoliubov approximation gives a good estimate of the ground-state energy of the system [6]. For large γ one approaches instead the Tonks-Girardeau limit [9].
In the LL model temperatures are usually expressed in units of the quantum degeneracy temperature T_D as
τ ≡ T/T_D ,  T_D ≡ ℏ² ρ²/(2 m k_B) .   (26)
The thermodynamic Bethe ansatz (TBA) integral equations relate, at temperature T, the pseudoenergies ε(k) to the density f(k) of the occupied levels [7,15,16]. One has the following set of coupled equations:
ε(k) = ℏ²k²/2m − μ̃ − (k_B T/2π) ∫_{−∞}^{+∞} dk′ [2c/(c² + (k − k′)²)] log(1 + e^{−ε(k′)/k_B T}) ,   (27a)
2π f(k) [1 + e^{ε(k)/k_B T}] = 1 + ∫_{−∞}^{+∞} dk′ [2c/(c² + (k − k′)²)] f(k′) ,   (27b)
ρ = ∫_{−∞}^{+∞} f(k) dk ,   (27c)
where μ̃ is the chemical potential. At T = 0 the energy level density gets a compact support, so that Eq. (27c) becomes
ρ = ∫_{−K}^{K} f(k) dk ,   (28)
where the boundary value K has to be determined from the condition
ε(±K) = 0 .   (29)
If one measures energies in units of k_B T_D and wave-vectors in units of ρ, by defining the scaled wave-vector K ≡ k/ρ, the scaled pseudo-energies E(K) ≡ ε(k)/k_B T_D and the scaled potential µ ≡ μ̃/k_B T_D, Eqs. (27) read
E(K) = K² − µ − (τ/2π) ∫_{−∞}^{+∞} dK′ [2γ/(γ² + (K − K′)²)] log(1 + e^{−E(K′)/τ}) ,   (30)
together with the correspondingly rescaled versions of (27b) and (27c). One sees that the scaled quantities depend only on γ and τ.
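A minimal numerical sketch of how Eq. (30) can be solved in practice (an illustrative fixed-point iteration with hypothetical discretization parameters, not the authors' code):

```python
# Fixed-point solution of the scaled TBA equation (30) for the pseudo-energy E(K).
import numpy as np

def solve_tba(gamma, tau, mu, kmax=30.0, n=2001, n_iter=300):
    K = np.linspace(-kmax, kmax, n)
    dK = K[1] - K[0]
    kernel = 2.0*gamma / (gamma**2 + (K[:, None] - K[None, :])**2)  # Lorentzian kernel
    E = K**2 - mu                                                   # free-particle guess
    for _ in range(n_iter):
        E = K**2 - mu - (tau/(2.0*np.pi)) * (kernel @ np.log1p(np.exp(-E/tau))) * dK
    return K, E

gamma, tau, mu = 1.0, 1.0, 0.5
K, E = solve_tba(gamma, tau, mu)
# scaled pressure, cf. Eq. (50) below: P/(rho k_B T_D) = (tau/2pi) Int dK log(1 + e^(-E/tau))
print(tau/(2.0*np.pi) * np.trapz(np.log1p(np.exp(-E/tau)), K))
```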
Once the TBA integral equations (30) are solved, thermodynamic quantities such as the free energy can be computed. In Section V we report both the expressions of internal energy and pressure, and we study the internal energy shift.
B. Abelian Anyons
The dynamics of a system of N identical Abelian anyons is expressed by [25]
H_N = (1/2m) Σ_{i=1}^N [p_i − ℏ α a_i]² ,  a_i = Σ_{j≠i} ∇_i θ_{ij} ,   (31)
with θ_ij the relative angle between the particles i and j. The study of the thermodynamics for a system of identical Abelian anyons has been developed starting with [27], in which the exact quantum expression for the second virial coefficient has been derived:
B_2(α, T) = −(λ_T²/4) (1 − 4|δ| + 2δ²) .   (32)
Eq. (32) holds provided that hard-core wavefunction boundary conditions are assumed, i.e. lim_{x_i→x_j} ψ_N(x_1, · · · , x_N) = 0 for any 1 ≤ i < j ≤ N, where ψ_N is the N-body wavefunction in the bosonic gauge [25].
In (32) α = 2j + δ, where α represents the statistical parameter of anyons [25], j is an integer and |δ| ≤ 1. We remind that α = 1 and α = 0 correspond respectively to free 2d spin-less fermions and bosons, and that λ_T is the thermal wavelength defined as
λ_T = √(2πℏ²/(m k_B T)) .   (33)
The virial expansion is expressed in powers of the number density ρ; in the dilute regime, the second virial coefficient gives the leading contribution to the deviation of the energy-pressure relation from the non-interacting case, as a result of rewriting the grand canonical partition function as a cluster expansion [1,3]. As for the higher virial coefficients of the ideal anyonic gas, only numerical approximations of the first few are available so far [25], and they are limited to the hard-core case.
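For concreteness, the snippet below (our sketch, evaluating Eq. (32) as reconstructed above) shows the periodicity in α and the interpolation between the bosonic and fermionic values of B_2:

```python
# Hard-core anyonic second virial coefficient of Eq. (32), periodic and
# non-analytic in the statistical parameter alpha = 2j + delta (|delta| <= 1).
import numpy as np

def B2_over_lambdaT2(alpha):
    delta = (alpha + 1.0) % 2.0 - 1.0          # reduce alpha to delta in [-1, 1]
    return -0.25 * (1.0 - 4.0*abs(delta) + 2.0*delta**2)

print(B2_over_lambdaT2(0.0))   # -0.25 : 2d ideal bosons
print(B2_over_lambdaT2(1.0))   # +0.25 : 2d ideal spin-less fermions
print(B2_over_lambdaT2(2.0))   # -0.25 : periodicity, back to the bosonic value
```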
The relative two-body Hamiltonian for a free system of anyons with statistical parameter α, written in the bosonic description, is of the form [25]
H_rel = (1/M) (p − α A)² ,   (34)
where A = (A_1, A_2) and A_i ≡ ε_{ij} x_j/r² (i = 1, 2, and ε_{ij} is the completely antisymmetric tensor).
By relaxing the regularity condition on the wavefunctions at contact points, it is possible to obtain the one-parameter family of soft-core boundary conditions (35), according to the method of self-adjoint extensions [57]. The corresponding s-wave solutions (36) of the radial Schrödinger equation form a one-parameter family of boundary conditions [36,38], parametrized by a sign σ = ±1 and by a momentum scale κ introduced by the boundary condition.
We refer to
ε ≡ κ²/(M k_B T)   (37)
as the hard-core parameter of the gas. If σ = −1, in addition to the solution (36), there is a bound state with energy E_B = −ε k_B T = −κ²/M and wavefunction (38). The second virial coefficient for Abelian anyons in this general case has been computed through different approaches in [37,38,58], and is given by
B_2(α, T, ε, σ) = B_2^{h.c.}(α, T) − 2λ_T² [ e^{ε} θ(−σ) + (σ/π) sin(π|α|) ∫_0^∞ dt e^{−εt} t^{|α|−1}/(1 + 2σ cos(πα) t^{|α|} + t^{2|α|}) ] ,   (39)
where θ(x) is the Heaviside step function and B_2^{h.c.}(α, T) is the hard-core expression (32).
C. Non-Abelian Anyons
The SU(2) non-Abelian Chern-Simons (NACS) spin-less particles are point-like sources mutually interacting via a topological non-Abelian Aharonov-Bohm effect [62]. These particles carry non-Abelian charges and non-Abelian magnetic fluxes, so that they acquire fractional spins and obey braid statistics as non-Abelian anyons.
Details on NACS statistical mechanics [39,[63][64][65][66]] are given in Appendix B for general soft-core boundary conditions [40,41]. For non-Abelian anyons, the independence on the statistics of the virial coefficients in a strong magnetic field has been established in [67], while the theory of non-relativistic matter with non-Abelian Chern-Simons gauge interaction in (2+1) dimensions was studied in [68]. The N-body Hamiltonian for ideal non-Abelian Chern-Simons quantum particles can be written as [39]
H_N = −Σ_{α=1}^N (1/M_α) [∇_{z̄_α}(∇_{z_α} + Ω_α) + (∇_{z_α} + Ω_α)∇_{z̄_α}] ,   (40)
where M_α is the mass of the α-th particle, ∇_{z_α} = ∂/∂z_α, ∇_{z̄_α} = ∂/∂z̄_α and
Ω_α ≡ (1/2πκ) Σ_{β≠α} Q̂_α^a Q̂_β^a/(z_α − z_β) .   (41)
In Eq. (40) α = 1, . . . , N labels the particles, (x_α, y_α) = (z_α + z̄_α, −i(z_α − z̄_α))/2 are their spatial coordinates, and the Q̂^a's are the isovector operators in a representation of isospin l. From a field-theoretical viewpoint, the quantum number l labels the irreducible representations of the group of the rotations induced by the coupling of the NACS particle matter field with the non-Abelian gauge field: as a consequence, the values of l are of course quantized and vary over all the non-negative integer and half-integer numbers; l = 0 corresponds to a system of ideal bosons. As usual, a basis of isospin eigenstates can be labeled by l and the magnetic quantum number m = −l, −l + 1, · · · , l − 1, l.
The thermodynamics depends in general on the value of the isospin quantum number l, the Chern-Simons coupling κ, and the temperature T. In order to enforce the gauge covariance of the theory, the parameter κ in (40) has to fulfill the condition 4πκ = integer [69]. Therefore we adopt the notation
ω_j ≡ [j(j + 1) − 2l(l + 1)]/(4πκ)   (42)
for the effective statistical parameter of the two-particle isospin channel (j, j_z). Similarly to the Abelian anyons case, the s-wave general solution (43) of the radial Schrödinger equation (B12), derived from the projection of (40) over a generic two-particle isospin channel (j, j_z), belongs to a one-parameter family accounting for the range of possible boundary conditions, parametrized by σ = ±1 and by a momentum scale κ_{j,j_z} introduced by the boundary condition.
We refer to the (2l + 1)² quantities
ε_{j,j_z} ≡ κ_{j,j_z}²/(M k_B T)   (44)
as the hard-core parameters of the system [34], with the hard-core limit corresponding to σ = +1, ε_{j,j_z} → ∞ for all j, j_z.
We conclude this Section by observing that, according to the regularization used in [27,70], the second virial coefficient is defined as
B_2(l, T) = B_2^{(n.i.)}(l, T) − 2λ_T² ΔZ(l, T) ,   (45)
where B_2^{(n.i.)}(l, T) is the second virial coefficient for the system with particle isospin l and without statistical interaction (κ → ∞), and ΔZ(l, T), computed in Appendix C, is the (convergent) variation of the divergent partition function for the two-body relative Hamiltonian, between the interacting case under exam and the non-interacting limit (κ → ∞).
V. INTERNAL ENERGY SHIFT FOR THE LIEB-LINIGER BOSE GAS
Before studying the energy shift of the Lieb-Liniger model, we consider by comparison the 1d hard-core bosons model described by the Hamiltonian
H = −(ℏ²/2m) Σ_{i=1}^N ∂²/∂x_i² ,   (46)
where the N-body wavefunction is subjected to the hard-core constraint of diameter a:
ψ_N(x_1, · · · , x_N) = 0  for |x_i − x_j| ≤ a .   (47)
The thermodynamics of the 1d hard-core Bose gas has been determined and studied by thermodynamic Bethe ansatz [71,72]: the relation between pressure and internal energy is a Bernoulli equation [73],
P (L − N a) = 2 E ,   (48)
from which it follows that
E_res = E − P L/2 = −N P a/2 .   (49)
In this case the internal energy shift is negative, due to the fact that the pressure increases for the effect of the excluded volume. Furthermore E_res vanishes for a → 0, as it should.
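The Bernoulli relation (48) can be made plausible by the free-fermion mapping: at T = 0 the hard-core gas behaves as free fermions in the reduced volume L − Na. A short symbolic check (our sketch, with an overall constant C absorbing the ℏ, m and π factors):

```python
# Check of P (L - N a) = 2 E from the ground-state energy of free fermions in L - N a.
import sympy as sp

N, L, a, C = sp.symbols('N L a C', positive=True)   # C collects hbar, m, pi factors
E = C * N**3 / (L - N*a)**2        # ground-state energy of N free fermions in L - N a
P = -sp.diff(E, L)                 # pressure P = -dE/dL at fixed N
print(sp.simplify(P * (L - N*a) - 2*E))   # -> 0
```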
With regard to the low-dimensional models considered in Sections IV, V, VI, the reader will notice that in 2d the hard-core condition results in a vanishing internal energy shift, while it does not do likewise in 1d (46)-(49); on the other hand, non-hard-core boundary conditions, both in 1d (23) and in 2d (36), (43), result in a positive energy shift. Furthermore, unlike the non-hard-core case, the dependence (49) of the internal energy shift on the temperature is given only by the T-dependence of the pressure.
Let us now address the internal energy shift of the LL model: using the results of Section IV A, the pressure and the energy are given by
P = (k_B T/2π) ∫_{−∞}^{+∞} dk log(1 + e^{−ε(k)/k_B T}) ,  E/L = ∫_{−∞}^{+∞} dk (ℏ²k²/2m) f(k) .   (50)
At T = 0 the energy per particle is given by
E/N = k_B T_D E(γ) ,   (51)
where E(γ) is given by
E(γ) = (γ/ℓ)³ ∫_{−1}^{1} g(t) t² dt ,   (52)
while the function g(t) is solution of the linear integral equation
g(t) = 1/2π + (1/π) ∫_{−1}^{1} ds ℓ g(s)/[ℓ² + (t − s)²] ,
with ℓ ≡ c/K determined from the condition ℓ = γ ∫_{−1}^{1} g(t) dt. It is well known that E → 0 for γ → 0 and E → π²/3 for γ → ∞; furthermore E ≈ γ for γ ≪ 1 and E ≈ (π²/3)(1 − 4/γ) for γ ≫ 1 [16].
At T = 0 the pressure P = −(∂E/∂L)_N is then
P = (ℏ² ρ³/2m) [2 E(γ) − γ E′(γ)] ,
so that
e_res(T = 0) = (γ/2) E′(γ) k_B T_D .
It is immediately seen that the shift is positive and that it vanishes for γ = 0 (1d ideal Bose gas) and for γ → ∞ (TG gas, having the equation of state of the 1d ideal Fermi gas). Furthermore e_res(T = 0) ≈ (γ/2) k_B T_D for γ ≪ 1 (53), and a maximum of the shift appears at a finite value of γ (at γ ≈ 4.7). The plot of e_res(T = 0) in units of k_B T_D is the black lowest curve in Fig. 1.
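A sketch of the numerical evaluation (not the authors' code): Lieb's integral equation is solved by fixed-point iteration on Gauss-Legendre nodes, and the zero-temperature shift is evaluated through the relation e_res(T = 0)/k_B T_D = (γ/2) E′(γ) quoted above:

```python
# Solve Lieb's equation for g(t), build E(gamma), and locate the maximum of the shift.
import numpy as np

def lieb_E(gamma, n=400, n_iter=300):
    t, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    g = np.full(n, 1/(2*np.pi))
    ell = gamma * np.dot(w, g)
    for _ in range(n_iter):
        kern = (ell/np.pi) / (ell**2 + (t[:, None] - t[None, :])**2)
        g = 1/(2*np.pi) + kern @ (w * g)
        ell = gamma * np.dot(w, g)                 # enforce ell = gamma * Int g
    return (gamma/ell)**3 * np.dot(w, g * t**2)    # E(gamma), Eq. (52)

gam = np.linspace(0.5, 12.0, 60)
E = np.array([lieb_E(x) for x in gam])
e_res = 0.5 * gam * np.gradient(E, gam)            # (gamma/2) E'(gamma)
print(gam[np.argmax(e_res)])                       # maximum near gamma ~ 4.7
```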
At finite temperature one gets, from Eqs. (50) and the definition (17),
e_res(γ, τ) = E/N − P L/2N .   (54)
A plot of Eq. (54) as a function of the coupling γ for different scaled temperatures is again in Fig. 1. Even though the shift depends (rather weakly) on the temperature, the same structure occurring at zero temperature is seen: a maximum appears at a finite value of γ, i.e. between the two ideal bosonic and fermionic limits.
The high-temperature limit can be explicitly studied: indeed the second virial coefficient [16,56] written in scaled units is
B_2 ρ = √(2π/τ) [1/2 − e^{γ²/2τ} (1 − Erf(γ/√(2τ)))] ,   (55)
where λ_T = √(2πℏ²/(m k_B T)) is the thermal De Broglie wavelength (33). Using the virial expansion (18) (valid for τ ≫ 4π) one gets at the first non-trivial order (i.e., B_2):
e_res = −k_B T ρ [T dB_2/dT + B_2/2] .   (56)
Using the relation ρ λ_T = 2√(π/τ), from (55)-(56) one then gets
e_res/k_B T_D = γ [1 − γ √(π/2τ) e^{γ²/2τ} (1 − Erf(γ/√(2τ)))] ,   (57)
where we have introduced the error function Erf(x) = (2/√π) ∫_0^x dy e^{−y²} [74]. Using the asymptotic expansion
e^{x²} (1 − Erf(x)) ≃ (1/(√π x)) (1 − 1/(2x²) + ⋯) ,  x ≫ 1 ,   (58)
one explicitly sees that e_res → 0 in the two ideal limits γ → 0 and γ → ∞ and that there is a maximum between them (roughly γ_max ∼ √τ). From (58) we also see that if one fixes a finite coupling γ and increases the temperature (i.e., τ), then the internal energy shift density approaches the value k_B T_D γ. Remarkably, the internal energy shift is finite also for infinite temperature: this is shown in Fig. 2, where e_res/k_B T_D is plotted as a function of the scaled temperature τ for different values of γ, showing that the asymptotic value γ is reached for large temperatures.
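The two limits quoted above can be checked directly from Eq. (57) as reconstructed here; a short numerical sketch (using the scaled variables γ, τ and scipy's overflow-safe erfcx):

```python
# Evaluate Eq. (57): saturation e_res -> k_B T_D * gamma at fixed gamma for tau -> inf,
# and a maximum located roughly at gamma_max ~ sqrt(tau).
import numpy as np
from scipy.special import erfcx    # erfcx(x) = exp(x^2) * (1 - erf(x)), overflow-safe

def e_res_over_kTD(gamma, tau):
    x = gamma / np.sqrt(2.0*tau)
    return gamma * (1.0 - gamma*np.sqrt(np.pi/(2.0*tau)) * erfcx(x))

print(e_res_over_kTD(1.0, 1e6))                 # -> ~1.0 : saturation at gamma
tau = 400.0
g = np.linspace(0.1, 100.0, 5000)
print(g[np.argmax(e_res_over_kTD(g, tau))])     # maximum of order sqrt(tau) = 20
```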
VI. ENERGY-PRESSURE RELATION FOR ANYONIC MODELS
In this Section we study the Abelian and non-Abelian anyonic gases introduced in Section IV and we discuss their internal energy shift: we show that in the hard-core case the energy and pressure obey (7), and therefore in this case the gases have vanishing internal energy shift. The soft-core condition instead introduces a scale, and this gives rise to a positive internal energy shift.
In the first part of this Section we treat together the Abelian and non-Abelian gases: the Hamiltonians for Abelian/non-Abelian anyons are defined respectively in (31) and (40): they are homogeneous with respect to the particle coordinates and they scale as H_N → λ^{−2} H_N under the dilatation x → λx. The hard-core condition at coincident points (in both the Abelian and the non-Abelian models) is a particular case of a scale-invariant boundary condition, because all finite-λ scalings of the N-body eigenfunctions ψ̃ are hard-core eigenfunctions too, vanishing whenever any coordinate sits on the boundary of the rescaled volume. Denoting by A the area of the system, in the hard-core case the coordinate scaling results in a dilatation of the energy spectrum: then the map (A, β) → (λ²A, λ²β) leaves log Z invariant. As a consequence of (3) and (6) we obtain the exact identity
E = P A ,   (62)
in agreement with the more general relation (7). The validity of (62) for the particular cases represented by 2D Bose and 2D (spin-less) Fermi ideal gases is remarked in [25]. Thermodynamic relations (62) can alternatively be derived through a harmonic regularization, obtained by adding to H_N a confining term proportional to ω² Σ_i r_i²; the regularized Hamiltonians H_{N,ω} and H_{1,ω} transform according to
H_{N,ω}(λ r_i, λ^{−1} p_i) = λ^{−2} H_{N,λ²ω}(r_i, p_i)
under the canonical scaling transformation (r_i, p_i) → (λ r_i, (1/λ) p_i); as a consequence, the scaling of the spatial coordinates can be traded for a scaling of the regularizing frequency. Notice that the harmonic regularization breaks the scale-invariance, which is retrieved in the ω → 0 limit. Now we apply the hard-core condition. For any eigenfunction ψ_n(r_i) of H_{N,ω}(r_i, p_i) fulfilling the hard-core boundary condition, we correspondingly get ψ̃_n(r_i) ≡ ψ_n(√γ r_i) (also fulfilling the hard-core boundary condition) as eigenfunction of H_{N,γω}(r_i, p_i), so that, denoting the hard-core condition by the superscript "h.c.", the frequency acts barely as a dilatation for the energy spectrum:
E_n^{h.c.}(γω) = γ E_n^{h.c.}(ω) ,
whence the N-body partition function Z_N^{h.c.} depends on ω and β only through the product βω. Assuming the existence of the virial expansion, the thermodynamic relations then imply that the coefficients B_{N+1}(T) λ_T^{−2N} have to be independent of temperature, thus hard-core ideal anyonic gases fulfill
B_{N+1}(T) = b_{N+1} λ_T^{2N} ,  b_{N+1} = const .   (72)
From this relation it follows that for these systems the last three identities of (18) take the form
Entropy : S = N k_B [2 − log(ρ λ_T²)] + O(ρ²) ,
Internal energy : E = N k_B T [1 + B_2 ρ + ⋯] ,
Enthalpy : H = 2 N k_B T [1 + B_2 ρ + ⋯] .   (73)
We point out that the corresponding entropy and heat capacity at constant volume are unaffected by the statistical interaction at the lowest order of the virial expansion (being independent of B_2). Using formula (C2) for B_2, one can obtain the leading deviation of the various thermodynamic quantities from their ideal gas value.
An important consequence of (72) is that from (18) and (73) one gets again Eq. (62) at all orders of the virial expansion for hard-core Abelian and non-Abelian anyonic gases (within the convergence radii of these expansions) [75], in agreement with the general relation (7). The scaling properties for isoentropic processes derived in Sec. II apply in particular to Abelian and non-Abelian anyonic gases with hard-core conditions. Isoentropic processes between initial and final states at equilibrium of hard-core anyonic gases are characterized by the following relation between internal energy and temperature:
E ∝ T .   (74)
As a consequence, for hard-core anyonic gases subjected to an isoentropic transformation one gets P ∝ A^{−2}, in agreement with (62) and (74), and the dilution parameter x ≡ ρλ_T² remains invariant along isoentropic curves. Furthermore, according to (12), the internal energy associated with equilibrium states of an isoentropic process takes the form E = y × N k_B T, where y remains constant along the isoentropic curve, while it depends solely on the dilution parameter x = ρ λ_T² over the entire phase diagram. Since we are in two dimensions, y is just the compressibility factor.
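The all-orders validity of E = P A under (72) can be checked symbolically; a sketch (ours, truncating the expansion at a few orders):

```python
# With B_{n+1}(T) = b_{n+1} * lambda_T^(2n) and lambda_T^2 ~ 1/T, every kept order of
# the 2d virial expansion satisfies E = P A.
import sympy as sp

T, V, N, kB = sp.symbols('T V N k_B', positive=True)
rho, nmax = N / V, 5
b = sp.symbols('b1:7', positive=True)            # b[n] multiplies lambda_T^(2n) ~ T^(-n)
A_H = N*kB*T*(sp.log(rho/T) - 1 + sum(b[n] * T**(-n) * rho**n / n for n in range(1, nmax)))
S = -sp.diff(A_H, T)
E = sp.simplify(A_H + T*S)
P = -sp.diff(A_H, V)
print(sp.simplify(E - P*V))                      # -> 0 at every kept order
```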
Remarkably, Eq. (12) traces the study of free anyonic thermodynamics back to the determination of how the compressibility factor y(x) depends on the dilution parameter x, and this is a genuine consequence of the scaling symmetry, valid therefore also beyond the radius of convergence of the virial expansion.
For the family of systems represented by Abelian anyon gases, where α will denote henceforth the statistical parameter as in Subsection IV B, the factor y can be parametrized as y = y(x, α). The cases y(x, 0) (2D Bose gas) and y(x, 1) (2D Fermi gas) can be traced back to the analysis in Chapter 4 of Ref. [25]; as the dilution parameter x is swept from 0 to ∞ the gas moves from ideality to an increasingly dense regime, and y(x, 0) monotonically decreases from 1 to 0, while y(x, 1) monotonically increases from 1 to ∞; the low/high density limit behaviors immediately follow from [25]:
y(x, 0) ≈ 1 − x/4  (x ≪ 1) ,  y(x, 0) ≈ π²/(6x)  (x ≫ 1) ,
y(x, 1) ≈ 1 + x/4  (x ≪ 1) ,  y(x, 1) ≈ x/2  (x ≫ 1) ,   (75)
where the coefficients of the full expansions involve the Bernoulli numbers B_n [74]. As expected, the general behavior at intermediate α is non-trivial, while the basic qualitative statements about y(x, α) are that y(0, α) = 1 for any α (limit of ideal gas) and
y(x, α) ≈ 1 + [B_2(α, T)/λ_T²] x ,  x ≪ 1 ,   (76)
given the dominance of the second virial coefficient in a very dilute regime. This approximate behavior interpolates the curves y(x, 0) and y(x, 1), and the sign of its slope at x = 0 switches at α = 1 − 1/√2, i.e., within the dilute regime approximation the statistical energy is negative for 0 ≤ α < 1 − 1/√2, positive for 1 − 1/√2 < α ≤ 1.
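For the two reference curves y(x, 0) and y(x, 1), closed expressions follow from the exact 2d ideal-gas relations; the sketch below (ours, not the paper's) evaluates them numerically and reproduces the dilute and dense limits quoted above:

```python
# Compressibility factor y(x) = P/(rho k_B T) for the 2d ideal Bose/Fermi gases, from
# rho*lambda^2 = -eta*ln(1 - eta*z) and beta*P*lambda^2 = eta*Li_2(eta*z)
# (eta = +1 bosons, eta = -1 fermions).
import numpy as np
from scipy.special import spence   # spence(1 - u) = Li_2(u) for the real dilogarithm

def y_of_x(x, eta):
    z = eta * (1.0 - np.exp(-eta * x))        # invert x = rho*lambda^2 for the fugacity
    return eta * spence(1.0 - eta*z) / x

for x in (0.1, 1.0, 10.0):
    print(x, y_of_x(x, +1), y_of_x(x, -1))    # y(x,0) decreases, y(x,1) increases
# dilute limits: 1 - x/4 and 1 + x/4; dense limits: pi^2/(6x) and x/2
```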
A remarkable perturbative result about the ground state energy of Abelian anyons is argued in Eq. (22) of [76]; assuming the continuity of E(N, A, T) at T = 0, it reads here
y(x, α) ∼ α² x ,  for x ≫ 1 and α ≪ 1 .
Let us pause here for a comment about the classical limit of the hard-core anyonic system.
In the picture of anyons as charge-flux composites, written for instance in the bosonic basis, one is free to consider arbitrarily large magnetic fluxes Φ = α h/q, q being the charge of the particles. The kinetic terms alone would yield the Bose statistics for the quantum case, and the Boltzmann statistics for its classical limit. The quadratic terms in α should be regarded as self-energies of the vortices (in both cases). Finally, the momentum-flux terms correspond to the magnetic inter-particle interaction, and they are responsible for the non-trivial anyonic thermodynamics, which is periodic in the flux variable α = qΦ/h. Correspondingly, the statistical interaction enters [25] as a uniform upward spectral shift whose α-dependence is periodic with finite period Δα = 2; therefore this shift becomes irrelevant (to the density of states) for any value of the flux in the classical limit k_B T ≫ ℏω_0.
A. Soft-core anyons
The scale-invariance in force for hard-core anyons does not apply in the presence of soft-core boundary conditions, in which case we will compare internal energy and pressure within the dilute regime (up to the first order in the dilution parameter ρλ_T²). Let us define the relative internal energy shift density e_rel as the dimensionless quantity
e_rel ≡ e_res/(k_B T) .   (78)
For Abelian anyons, Eqs. (39)-(78) give
e_rel = −ρ [B_2(T) + T dB_2/dT] ,
where B_2 is the soft-core second virial coefficient (39); the hard-core part drops out of the bracket, so that only the scale-breaking (ε-dependent) terms contribute. The resulting shift, at leading order in ρλ_T², is illustrated in Fig. 3. The plot of the shift e_rel(α, T, ε, σ = 1) exhibits a smooth behavior at the bosonic points and a cusp at the fermionic ones, as soon as the hard-core condition is relaxed. We observe that the restriction of e_rel(α, T, ε) to the interval α ∈ [0, 1] is not a monotonic function of α for every ε. The proportionality E = P A remains valid at the bosonic points also in the soft-core case [i.e. e_rel(α = 2n, T, ε) = 0], and the monotonicity of the shift e_rel as a function of α occurs for any ε ∈ [ε_−, ε_+], ε_− ≈ 0.13, ε_+ ≈ 3.0; in particular the relative shift is maximal at the fermionic points for any ε ∈ [ε_−, ε_+], while outside of this interval the shift due to the soft-core boundary conditions reaches its maximum at an internal point α_max(ε), featuring the properties α_max(ε) → 1⁻ for ε → ε_−^− or ε → ε_+^+, and α_max(ε) → 0⁺ for ε → 0 or ε → ∞.
This feature can be contrasted with the decay of e_rel(α = ±1, ε, T) as log ε → ±∞ for Abelian anyons in the fermionic limit: in this case, the energy shift is a power law in ε (and precisely linear) for ε ≪ 1, while instead it decays exponentially in ε for ε ≫ 1, as seen in Fig. 6. The shift e_rel(α = ±1, ε, T) reaches its maximum 2/e ≈ 0.74 (in units of ρλ_T²) at ε = 1.
VII. CONCLUSIONS
As discussed in Section VI, an example of a system which is scale-invariant but has non-vanishing conventionally-defined departure internal energy is the hard-core anyonic gas: summarizing, the quantity (17) can be regarded as a good measure of the deviation from scale-invariance for non-ideal gases.
In particular, we have provided criteria under which the internal energy shift density of an imperfect (classical or quantum) gas is a bounded function of temperature. We have also shown that for general scale-invariant systems the dependence of the virial coefficients upon the temperature is very simple, being expressed by Eq. (13).
We have considered deviations from the energy-pressure proportionality in low-dimensional models of gases which interpolate between the ideal Bose and the ideal Fermi gases, focusing the attention on the Lieb-Liniger model in 1d and on the anyonic gas in 2d.
In 1d the internal energy shift is determined from the thermodynamic Bethe ansatz integral equations and an explicit relation for it is provided at high temperature: the internal energy shift is positive, it vanishes in the two limits of zero and infinite coupling (respectively the ideal Bose and the Tonks-Girardeau gas) and it has a maximum at a finite, temperature-dependent, value of the coupling. Remarkably, at fixed coupling the internal energy shift density saturates to a finite value for infinite temperature.
In 2d we have considered systems of Abelian anyons and non-Abelian Chern-Simons particles and we have shown that the relation between the internal energy and the pressure of the anyonic gas is exactly the same found for 2D Bose and Fermi ideal gases as long as the hard-core case is considered. Soft-core boundary conditions introduce a length scale and determine a non-vanishing internal energy shift: we have provided details about this shift in the dilute limit. Asymptotic expressions with respect to the hard-core parameter ε are derived for both Abelian and non-Abelian soft-core anyonic gases.
Appendix A
We use Eqs. (18) in order to write the internal energy shift in the dilute limit for a quadratic dispersion relation of the particles,
e_res(ρ, T) = ρ [f′(x) − (d/2) f(x)/x] ,
where f(x) denotes the second virial coefficient as a function of its unique variable x ≡ 1/(k_B T). We are interested in the boundedness of e_res as the high-temperature limit x → 0 is approached, i.e. lim sup_{x→0} |e_res(ρ, x)| < ∞.
Appendix B
In the holomorphic gauge, the N-body wavefunction of the NACS particles is related to the anyon-gauge one by Ψ_A(z_1, . . . , z_N) = U(z_1, . . . , z_N) Ψ_H(z_1, . . . , z_N), where Ψ_H(z_1, . . . , z_N) stands for the wavefunction of the N-body system of the NACS particles in the holomorphic gauge. Ψ_A(z_1, . . . , z_N) obeys the braid statistics [65] due to the transformation function U(z_1, . . . , z_N), while Ψ_H(z_1, . . . , z_N) satisfies ordinary statistics: Ψ_A(z_1, . . . , z_N) is commonly referred to as the NACS particle wavefunction in the anyon gauge.
The statistical mechanics of the NACS particles can be studied in the low-density regime in terms of the cluster expansion of the grand partition function Ξ. The virial expansion (with the pressure expressed in powers of the density ρ = N/A) is given as
P = ρ k_B T [1 + Σ_{n≥2} B_n(T) ρ^{n−1}] ,
where B_n(T) is the n-th virial coefficient, which can be expressed in terms of the first cluster coefficients b_1, · · · , b_n. The second virial coefficient B_2(T) turns out to be [3]
B_2(T) = A [1/2 − Z_2/Z_1²] ,
where A is the area and Z_N = Tr e^{−βH_N} is the N-particle partition function. We assume that the NACS particles have equal masses and belong to the same isospin multiplet {|l, m⟩} with m = −l, . . . , l. The quantity Z_1 = Tr e^{−βH_1} is then given by
Z_1 = (2l + 1) A/λ_T² .
The second virial coefficient collects bosonic and fermionic contributions from the two-body isospin channels (j, j_z), where B_2^B(ω_j, T, ε_{j,j_z}) is the soft-core expression entering Eq. (39):
B_2^B(ω_j, T, ε_{j,j_z}) = B_2^{h.c.}(δ_j, T) − 2λ_T² [ e^{ε_{j,j_z}} θ(−σ) + (σ/π) sin(π|δ_j|) ∫_0^∞ dt e^{−ε_{j,j_z} t} t^{|δ_j|−1}/(1 + 2σ cos(πδ_j) t^{|δ_j|} + t^{2|δ_j|}) ] ,
with δ_j ≡ (ω_j + 1) mod 2 − 1, and B_2^F(ω_j, T, ε_{j,j_z}) is the previous expression evaluated for ω_j → ω_j + 1, i.e. with δ_j replaced by Γ_j ≡ ω_j mod 2 − 1.   (C10)
The depletion of B_2 in Eq. (C10), with respect to its hard-core value −λ_T²/24, arises from the anyonic collisions allowed by the soft-core conditions. | 2014-08-28T13:59:07.000Z | 2014-06-30T00:00:00.000 | {
"year": 2014,
"sha1": "904fe2a0a1cd4edf4b05bc384b090db13655e2ec",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.nuclphysb.2014.08.007",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2f5f55cbf7eac060048cfb849785e9bf8dcfcf8e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253081583 | pes2o/s2orc | v3-fos-license | Nucleocapsid-specific antibody function is associated with therapeutic benefits from COVID-19 convalescent plasma therapy
Summary Coronavirus disease 2019 (COVID-19) convalescent plasma (CCP), a passive polyclonal antibody therapeutic agent, has had mixed clinical results. Although antibody neutralization is the predominant approach to benchmarking CCP efficacy, CCP may also influence the evolution of the endogenous antibody response. Using systems serology to comprehensively profile severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) functional antibodies of hospitalized people with COVID-19 enrolled in a randomized controlled trial of CCP (ClinicalTrials.gov: NCT04397757), we find that the clinical benefits of CCP are associated with a shift toward reduced inflammatory Spike (S) responses and enhanced nucleocapsid (N) humoral responses. We find that CCP has the greatest clinical benefit in participants with low pre-existing anti-SARS-CoV-2 antibody function and that CCP-induced immunomodulatory Fc glycan profiles and N immunodominant profiles persist for at least 2 months. We highlight a potential mechanism of action of CCP associated with durable immunomodulation, outline optimal patient characteristics for CCP treatment, and provide guidance for development of a different class of COVID-19 hyperinflammation-targeting antibody therapeutic agents.
In brief
Viral neutralization is presumed to be essential for the activity of COVID-19 convalescent plasma (CCP). Herman et al. use high-dimensional antibody profiling to interrogate the effects of CCP on the recipient's humoral immune response and how its modulation could affect COVID-19 clinical outcomes.
INTRODUCTION
The coronavirus disease 2019 (COVID-19) pandemic has claimed more than 4.5 million lives to date. 1 Despite the development and deployment of vaccines to prevent severe COVID-19 and hospitalization, a significant portion of the world's population still remains unvaccinated. The evolution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants of concern that are more infectious and more evasive of prior immunity fuels an urgent need for more effective therapeutic agents for hospitalized individuals with severe COVID-19.
Because of its immediate availability and safety profile, COVID-19 convalescent plasma (CCP) was one of the first treatments for COVID-19. 2 However, evidence of CCP clinical efficacy has been mixed. Smaller clinical trials have shown a benefit of high-titer CCP in patients early in the course of COVID-19. [3][4][5][6][7][8][9] However, larger trials have not found an overall benefit of CCP, with the caveat that many of these trials treated patients with severe COVID-19 at later stages of disease. 10,11 Along these lines, the CONCOR-1 trial, a large, randomized controlled trial of CCP in hospitalized patients with COVID-19, did not find a clinical benefit of CCP but found that antibody-dependent cell cytotoxicity (ADCC) was associated with a lower risk of intubation or death by day 30. 12 This suggests that the efficacy of CCP may in part depend on antibody Fc-effector functions and needs to be further investigated.
Previous studies have highlighted the remarkable heterogeneity of SARS-CoV-2-specific antibody titers and antibody-effector functions. 13 However, whether particular functions or antibody qualities, including ADCC, are associated with differential therapeutic outcomes remains incompletely understood. We applied systems serology to an open-label randomized clinical trial that had shown evidence of a mortality benefit from treatment with receptor-binding domain (RBD) ELISA-selected CCP treatment. 14 We found that CCP treatment delayed the evolution of Spike (S)-specific inflammatory antibody responses and induced stronger nucleocapsid (N)-specific antibody responses. Both of these changes were associated with improved outcomes in CCP-treated patients. We found that participants with lower pre-existing antibody function rather than low antibody levels experienced the greatest clinical benefit from CCP. It is clear that CCP modulated humoral immunity during acute disease and months thereafter, leading to more anti-inflammatory S-specific Fc glycans and persistent N-specific immunodominance.
RESULTS
Global SARS-CoV-2 humoral profiles of CCP-treated and control participants
With the emergence of novel SARS-CoV-2 variants that can escape vaccine-induced neutralizing antibody responses and monoclonal antibody therapeutic agents, CCP has regained attention as a potential therapeutic strategy to treat COVID-19. [15][16][17] However, clinical trials evaluating the efficacy of CCP have had mixed results. The striking heterogeneity of CCP and our incomplete understanding of the mechanisms of action of this natural therapeutic agent are contributing factors. 12,13,[18][19][20] To attain a more granular understanding of the CCP properties that contribute to therapeutic efficacy, we profiled the SARS-CoV-2-specific antibody response across a group of patients enrolled in a randomized controlled trial of CCP conducted at the University of Pennsylvania. 14 The University of Pennsylvania (UPenn) CCP2 trial enrolled 80 individuals hospitalized with COVID-19 pneumonia, which is defined as a positive SARS-CoV-2 PCR assay, saturation of oxygen (SaO2) of less than 93% on room air or supplemental oxygen use, and radiological evidence of pneumonia (Figure 1A). Seventy-nine participants were included in our final analysis, 40 of whom were randomized to receive two units of CCP plus standard of care treatment and 39 of whom received standard of care treatment alone. One patient declined CCP treatment and withdrew from the study early. Participants' median age was 63 years (interquartile range [IQR] [52,74]), 54% were female, 13% were on immunomodulatory treatments at baseline, and 26% had a prior cancer diagnosis. Prior to CCP randomization, 81% of participants had been treated with remdesivir, and 83% of participants had been treated with corticosteroids.
The UPenn CCP2 trial enrolled participants early in their disease course, the median of which was 6 days after symptom onset and 1 day of hospitalization. Mortality and the clinical severity score (CSC) were the two prespecified outcomes of this trial. 14 The CSC is a composite score that aims to effectively rank patients based on their disease severity, taking into account multiple endpoints in a prioritized manner. 21 The CSC in this trial took into consideration a participant's survival time, recovery time, and disease course while in the hospital, including the 8-point World Health Organization (WHO) ordinal score (WHO8), use of supplemental oxygen, and adverse events. The CSC was found to be significantly different between CCP recipients and control individuals (median [IQR] 7 [2.75, 12.5] vs. 10 [5.5, 30], p = 0.037 by Wilcoxon rank-sum test) (Figure 1B). 14 The study also found a mortality benefit associated with CCP administration on day 28 (odds ratio [OR] 0.156, p = 0.013), with 5% (2 of 40) vs. 25.6% (10 of 39) mortality in CCP-treated vs. control participants. These clinically meaningful outcomes provided an opportunity to comprehensively examine the immunological profiles across CCP-treated and control participants to define potential biomarkers of immunity.
As expected, the humoral immune response to SARS-CoV-2 evolved across all participants ( Figures 1C and 1D). Nearly all S-and N-specific antibody features increased in the first 2 weeks of SARS-CoV-2 infection ( Figure 1C). Multivariate uniform manifold approximation and projection (UMAP) visualization highlighted the similarities of the two profiled groups at the start of the study, with most day 1 samples (red) at the top in Figure 1D and most day 60 samples (green) at the bottom. These data support our assertion that timing of COVID-19 illness resulted in significant changes in humoral immune responses over time in this cohort. Samples from the CCP-treated and control arms of the study were intermixed throughout the UMAP visualization ( Figure 1E), necessitating a more detailed analysis to identify whether the evolution of the SARS-CoV-2 specific humoral immune response differed between CCP-treated and control participants.
Figure 1. Global anti-SARS-CoV2 response in CCP-treated and control individuals. (A) Schematic of the UPenn CCP2 randomized clinical trial of CCP and the Ab profiling performed in this paper. In total, we profiled 302 samples from 79 patients. Patients were randomly assigned to CCP treatment (n = 40) or standard of care treatment (n = 39). Patient serum samples were collected on day 1 (n = 79), day 3 (n = 59), day 8 (n = 37), day 15 (n = 44), day 29 (n = 38), and day 60 (n = 45). Because patients experienced symptomatic COVID-19 for a variable number of days prior to presenting to the hospital, we organized patient serum samples by day of the trial (day 1 = enrollment in clinical trial) and by day of symptom onset (day 1 = first day of COVID-19-associated symptoms). (B) Clinical severity score in the CCP-treated and control groups. Significance corresponds to two-sided Wilcoxon test p values (p = 0.0333; *p < 0.05). (C) Heatmap of the SARS-CoV-2-specific Ab profiles of all patient time points, arranged by time point, arm of the trial, and patient age. Each bar represents the average of Ab measurements taken in technical duplicates (Ab level and FcR binding assays) and biological duplicate (ADCP, ADNP, and ADNKA). (D) This heatmap shows the Akaike weighted average parameter differences between the two groups. Each column shows a parameter, which is normalized across the features. The color intensity indicates whether the parameter is higher in the CCP-treated (blue) or control (orange) model.
CCP results in a delay in development of SARS-CoV-2 anti-S inflammatory antibody profiles
First, we confirmed that CCP-treated and control participants had similar pre-existing (day 1) anti-SARS-CoV-2-specific antibody profiles using UMAP plots (Figure 2A), local inverse Simpson's index (LISI) score analysis (Figure S1A), and univariate statistical testing (Figure S1B). Next, we focused on the early evolutionary differences across the groups, over the first 2 weeks of the trial, to understand how the trajectories of the humoral immune response differed across the two groups. When we looked at the distribution of how long patients had symptomatic COVID-19 prior to enrollment, we found that it varied greatly from 1-20 days (Figure S1C). Thus, we aligned all participant humoral profile data by the time from onset of symptoms prior to randomization to adjust for heterogeneity in each participant's time from COVID-19 symptom onset. Using this approach we found that, by week 3 after symptom onset, CCP-treated individuals had lower S-specific titers, FcR binding, and antibody (Ab)-dependent functional activity (Figure 2B). This delay in the evolution of the S-specific response was also evident on day 8, when data were analyzed agnostic of day of symptom onset (Figure S1C). To gain more granular insight into the specific humoral immune responses that evolved differentially across the two groups, four-parameter logistic regression models were generated for each Ab feature across each group from week 1 to week 4 from symptom onset. 23 This modeling approach allowed us to quantitatively define how CCP treatment led to differences in (1) initial levels of Ab features, (2) the initial speed of developing an Ab feature, (3) the time it took for seroconversion, or (4) final Ab feature plateau levels (Figure 2C).
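A minimal sketch of a four-parameter logistic fit of this kind (toy data and hypothetical parameter names; not the study's actual pipeline):

```python
# Four-parameter logistic (4PL) model of one antibody feature over time.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(t, bottom, top, t50, slope):
    """bottom/top = initial/plateau levels, t50 = time to half-maximal response,
    slope = initial speed of the rise."""
    return bottom + (top - bottom) / (1.0 + np.exp(-slope * (t - t50)))

weeks = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
titer = np.array([0.2, 0.4, 1.1, 1.5, 2.8, 3.1, 3.3, 3.4])   # toy MFI-like values
p0 = [titer.min(), titer.max(), 2.0, 1.0]
params, _ = curve_fit(four_pl, weeks, titer, p0=p0, maxfev=10000)
print(dict(zip(["bottom", "top", "t50", "slope"], params)))
```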
Although initial quantities and initial conversion speeds were mixed for RBD and N features, final RBD-specific titers (4, plateau level) and FcR binding (FcgR2a, FcaR, and FcgR3b) were largely higher in the control population (Figures 2D, S1F, and S2). In contrast, N-specific titers and Ab plateau levels were similar in the two groups or slightly higher in CCP-treated individuals. Specifically, N-specific IgM, IgG2, and FcgR3b binding levels were elevated in CCP-treated participants (Figures 2D and S1F). These data suggested that CCP treatment was associated with blunting of the inflammatory anti-S-specific humoral immune profiles in a manner distinct from N-specific humoral immunity.
To stratify individual humoral characteristics that differed most across the CCP-treated and control groups over time, we used the Akaike information criterion (AIC) of the paired models ( Figures 2E and S2E). We found that S-, RBD-, and S1-specific FcgR2a binding differed most between the two treatment arms ( Figures 2E and S2). CCP-treated individuals exhibited lower levels and delayed evolution of S-specific FcgR2a binding Abs ( Figures 2E and S2). Conversely, N-specific ADCD, IgG3, and IgM differed between the two models and were enhanced in CCP-treated individuals ( Figures 2E and S2). Specifically, N-specific IgG3 and N-specific ADCD developed earlier in CCP-treated individuals, and N-specific ADCD reached higher levels in CCP-treated individuals ( Figure S2). By using a population-based logistic regression model, we found strong evidence that CCP treatment resulted in attenuated inflammatory anti-S immune evolution. Dampened anti-S profiles were also linked to selectively enhanced N-specific humoral immune features.
CCP-induced blunting of S-specific inflammatory Ab features is associated with improved clinical outcomes
Given the differences observed in S- and N-specific humoral immune evolution between the CCP-treated and control groups, we then sought to understand whether Ab properties enriched and depleted in CCP-treated participants were associated with improved clinical outcomes (measured by CSC) in CCP-treated and control participants. 14 Specifically, we selected the 30 Ab features with the greatest |ΔAIC| values (Figure 2E). A least absolute shrinkage and selection operator (LASSO) was then applied to identify the minimal features that differed most across the CSC score at week 3 after symptom onset, and a partial least-squares regression (PLS-R) was applied to evaluate the association between CSC and the set of LASSO-selected features (Figure 2F). The PLS-R model identified differences between the groups that were statistically significant (Figures S1F and S1G). Only six of the top 30 AIC-selected features were sufficient to separate all participants based on CSC scores, including S1-specific FcgR2a binding Ab levels, RBD-specific IgG1 levels, S-specific ADNP, RBD-specific FcaR binding levels, S-specific C1q binding levels, and S1-specific FcRn binding. These six features were enriched in controls (Figure 2G) and in those with the most severe disease, defined as participants with a CSC of greater than 20 (Figure 2I).
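A compact sketch of a LASSO-then-PLS workflow of this kind (synthetic data and assumed scikit-learn estimators; not the study's exact analysis):

```python
# LASSO feature selection followed by PLS regression on the retained Ab features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(79, 30))            # 79 participants x 30 |dAIC|-selected features
csc = X[:, :6] @ rng.normal(size=6) + rng.normal(scale=0.5, size=79)   # toy outcome

Xz = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xz, csc)
selected = np.flatnonzero(lasso.coef_)   # minimal feature set
pls = PLSRegression(n_components=2).fit(Xz[:, selected], csc)
print(len(selected), pls.score(Xz[:, selected], csc))
```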
To identify the particular Ab properties associated with treatment benefits, we then investigated the associations of the minimal LASSO-selected Ab features with other Ab qualities within the larger humoral immune response using co-correlation networks. A large co-correlation network connected three of the LASSO-selected features: S1-specific FcgR2AH, S ADNP, and RBD IgG1 (Figure 2H). This co-correlation network contained a broad and highly inflammatory S/RBD/S1-specific humoral profile, including more functional Ab subclasses (IgG1 and IgG3), S-specific neutrophil activity (ADNP), and S-specific monocyte responses (FcR2A and ADCP). A second tight co-correlation network linked RBD-specific FcaR binding levels with S-, S1-, and RBD-specific IgA/FcaR features, confirming prior observations that RBD/S-specific IgA responses are associated with worse disease severity. [24][25][26][27] A third network consisted of RBD- and S1-specific binding to the FcRn. These three co-correlation networks consistently highlight the expanded and highly inflammatory S-specific humoral immune responses in individuals with the most severe COVID-19 (highest CSC).
We next investigated whether features associated with poor clinical outcomes in non-CCP-treated individuals were generalizable. To this end, we tested whether a PLS-R model based only on CCP recipients could predict poor clinical outcomes for all trial participants. Specifically, we used the above PLS-R to predict whether participants, regardless of CCP treatment, could be classified into (1) high-severity COVID-19 outcome (CSC > 20) or (2) low-severity COVID-19 (CSC ≤ 20). The model was highly predictive of disease severity, achieving an average area under the receiver operating characteristic (ROC) curve (AUC) of 77% (Figure 2I). This demonstrated that these six inflammatory S Ab features predicted worse COVID-19 clinical outcomes and reinforces that the inflammatory S Ab features in control participants are associated with more severe outcomes. Thus, CCP modulation of S humoral immunity and dampening of inflammation are linked to improved disease outcomes.
Correction for co-morbidities points to a robust N-specific Ab signature of CCP treatment
A major challenge in understanding the effect of CCP is the heterogeneity of COVID-19 clinical disease. Co-morbid conditions, including obesity, diabetes, cardiovascular disease, chronic kidney disease, concomitant immunosuppression, and cancer, have been associated with more severe COVID-19. 28 Age and obesity have been associated with decreased B cell responses and lower Ab responses to pathogens and vaccines. 29 To account for these covariates in our analysis of CCP-induced humoral immune evolution, we used a nested mixed-linear modeling approach of the Ab profiles of CCP-treated and control participants over the first 15 days of the study (days 1-15 of the clinical trial). Age, sex, race, ethnicity (Latinx vs. non-Latinx), blood type, quarter of enrollment, diabetes, cardiovascular disease, hypertension, obesity, chronic kidney disease, cancer, prior immunosuppression, concomitant treatment with remdesivir at study entry, concomitant treatment with steroids at study entry, and time of symptom onset were included in the models. For each Ab feature, we generated two mixed linear models. The full model incorporated treatment group (CCP treatment vs. control) as a fixed effect; the null model, on the other hand, did not. Then we compared the two nested models with the likelihood ratio test (LRT) to identify Ab features whose trajectories were affected by CCP treatment. We then extracted the T values (normalized coefficients) of the treatment group variable in the full mixed linear model to quantify the CCP treatment effect on Ab features. Ab features significantly affected by CCP treatment were defined as having an absolute T value greater than 2 and a two-sided p value of less than 0.05. Most Ab features that significantly differed between CCP-treated and control individuals were enriched in CCP-treated individuals (Figure 3A), suggesting that many of the S-specific features that increased in control individuals (Figure 2F) were influenced by known COVID-19 disease severity risk factors. Most features enriched in CCP-treated individuals were N-specific Ab features, including N-specific FcgR2B and FcgR3B binding strength and N-specific ADCD. The only feature enriched in control participants (T < −2) was RBD Ab binding to the IgA FcR FcaR (Figure 3A), also identified in our modeling based on days from symptom onset (Figure 2F). To ensure the validity of the model results, we next confirmed that the N feature levels prior to randomization were balanced across the two arms (Figures 3B and S3A). Thus, using a multivariate mixed-effects model, a robust and unexpected N-specific humoral signature emerged in CCP-treated participants.
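A sketch of the nested mixed-model comparison described above (synthetic toy data, hypothetical column names; not the study's code):

```python
# Full model: Ab feature ~ covariates + treatment; null model omits treatment.
# The likelihood ratio test (LRT) flags features whose trajectories CCP affects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def lrt_for_feature(df):
    covars = "age + sex + obesity + diabetes + steroids + day"
    full = smf.mixedlm(f"ab_feature ~ treatment + {covars}", df,
                       groups=df["patient_id"]).fit(reml=False)   # ML, needed for LRT
    null = smf.mixedlm(f"ab_feature ~ {covars}", df,
                       groups=df["patient_id"]).fit(reml=False)
    stat = 2 * (full.llf - null.llf)
    p = chi2.sf(stat, df=1)                       # 1 extra parameter in the full model
    return p, full.tvalues["treatment"]           # T > 0: enriched with CCP treatment

rng = np.random.default_rng(1)
n_pat, n_tp = 10, 4
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_pat), n_tp),
    "day": np.tile([1.0, 3.0, 8.0, 15.0], n_pat),
    "treatment": np.repeat(rng.integers(0, 2, n_pat), n_tp),
    "age": np.repeat(rng.normal(60, 10, n_pat), n_tp),
    "sex": np.repeat(rng.integers(0, 2, n_pat), n_tp),
    "obesity": np.repeat(rng.integers(0, 2, n_pat), n_tp),
    "diabetes": np.repeat(rng.integers(0, 2, n_pat), n_tp),
    "steroids": np.repeat(rng.integers(0, 2, n_pat), n_tp),
})
df["ab_feature"] = 0.1*df["day"] + 0.8*df["treatment"] + rng.normal(0, 0.5, n_pat*n_tp)
print(lrt_for_feature(df))
```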
N features are associated with improved outcomes in CCP-treated and control participants
Given the enrichment of N-specific Ab features in CCP recipients, we next sought to understand the relationship between N-specific Abs and clinical outcomes in all study participants. To define whether certain N-specific Ab features were associated with specific clinical outcomes, we applied a linear effects model to the N-specific Ab profiles of CCP recipients and controls over the first 2 weeks of the trial (days 1-15). Data were corrected for co-morbidities associated with COVID-19. N-specific features explained 30% of the variation in clinical outcome across the cohort (Figure 3C). The association of individual N-specific Ab features and clinical outcome (CSC) (Figure 3D) pointed to an association between most N-specific Ab features and better clinical outcomes. Specifically, N-specific ADCD, the most strongly CCP-enriched Ab feature (Figure 3A), was also one of the most strongly associated with better clinical outcomes (Figure 3D). N-specific FcgR2B, FcgR3B, C1q binding, and IgM titers were also enriched in CCP-treated individuals and associated with improved clinical outcomes (Figures 3A and 3D). On the other hand, N-specific NK cell CD107a and MIP1b expression were most strongly associated with better outcomes (Figure 3D) but not differentially enriched between CCP-treated and control participants (Figures 2F and 3A), suggesting that N-specific ADCC may be beneficial in COVID-19 but not affected by CCP treatment. Not all N-specific Ab responses are beneficial: N-specific IgA and IgG4 levels were not enriched in CCP-treated individuals and were associated with worse outcomes. These analyses suggest that particular CCP treatment-associated N-specific humoral immune responses are associated with better clinical outcomes.
COVID-19 participants with low functional Abs benefitted most from CCP treatment
Emerging data from clinical trials suggest that participants who have not yet generated an Ab response to SARS-CoV-2 may benefit the most from monoclonal Ab therapy. 3,10,30,31 We therefore sought to understand whether seronegative individuals also benefitted from CCP treatment and, more broadly, which participants benefitted the most from CCP therapy in this study. Participants were clustered based on their day 1 SARS-CoV-2 Ab profiles using a Spearman correlation distance-based neighborhood clustering approach (Figure 4A). Four clusters of participants with similar pre-existing Ab profiles appeared (Figures S4A and S4B). Cluster 1 contained participants with the highest S- and N-specific humoral responses, and clusters 2, 3, and 4 had more varied Ab profiles. Cluster 4 included individuals with the lowest S and N titers across all Ab features (Figure 4A). Principal-component analysis and co-correlation network structure demonstrated that clusters 1 and 4 were most distinct in their SARS-CoV-2 Ab profiles (Figures 4B and S4B). CCP-treated participants in cluster 4 exhibited the greatest benefits (lower CSC) compared with control participants (Figures 4C and 4F). To gain a granular sense of how cluster 4 individuals differed from the other clusters, we performed univariate testing comparing cluster 4 Ab profiles with the Ab profiles of non-cluster 4 participants, i.e., clusters 1, 2, and 3 (Figures 4D, 4E, and S5). Cluster 4 participants possessed lower S- and N-specific Ab functions: they exhibited the lowest S- and N-specific ADCP, low S- and N-specific Ab-mediated NK cell MIP1b production, and lower S- and N-specific IgA1 and IgG1 titers (Figures 4D, 4E, and S5). Based on these observations, we created the CCP benefit signature: the set of features that best distinguished cluster 4 from clusters 1, 2, and 3. The CCP benefit signature included all N- and S-specific Ab functional measurements and all Ab titers with |log fold change (FC)| > 0.75 (N-IgG1 and N-IgA1). Our results suggested that participants with lower functional Ab responses were more likely to benefit from CCP treatment.
Additional comparisons of clinical factors across the four clusters pointed to a relatively balanced symptom duration prior to trial enrollment (Figure S4C). However, participants in cluster 4 were less likely to have chronic kidney disease (CKD), be obese, or be African American (Table S1). Cluster 4 control individuals were significantly older than the individuals in the cluster 4 CCP-treated group (Figure 4G; Table S2). To define whether the CCP response signatures identified in cluster 4 could predict benefits from CCP across the whole trial, we re-clustered participants based on the CCP benefit signature. Ab profiles clustered into 2 groups (Figures 4H, S4D, and S4E). Cluster A consisted of a heterogeneous mix of participants with overall lower S- and N-specific Ab features (Figure 4H) and with a statistically significantly lower CSC (and better clinical outcome) in CCP-treated participants (Figure 4F). In contrast, cluster B consisted predominantly of participants with higher levels of S- and N-specific Ab functions and titers (Figure 4H) and nearly identical CSC across CCP-treated and control participants (Figure 4I). Clinical characteristics were equally distributed across the two-cluster model (Table S3) as well as across cluster A CCP-treated and control groups (Table S4). These data suggest that the quality of the pre-existing humoral immune response to SARS-CoV-2 infection, rather than patient demographic factors or COVID-19 severity risk factors, largely explained the benefit individuals received from CCP. We next sought to identify specific pre-existing Ab functions or levels that were predictive of benefits from CCP treatment. We created three linear models that predicted the clinical severity, measured by the CSC, of CCP-treated participants based on their pre-existing Ab levels, unadjusted Ab functions, or IgG1-normalized Ab functions. For this comparison, we used the top 12 Ab functions and the top 12 Ab levels that differed between cluster 4 and clusters 1, 2, and 3 (Figure 4D). Ab isotype and subclass levels alone predicted only 32% of the variation in CSC, whereas Ab functions predicted 57% of the variation in CSC (Figure 4K). When we normalized the Ab functions by IgG1 to eliminate the influence of Ab titer differences, we continued to explain 60% of the variation in CSC (Figure 4F). Although S IgG1 level was predictive (Figure S4), IgG1-normalized S- and N-specific humoral features, such as N-ADCP, N-ADNP, and S- as well as N-ADNKA MIP-1b, were more predictive (Figures S4G and S4H). This suggests that the magnitude of the pre-existing functional humoral response was more predictive than sero-status alone.
SARS-CoV-2-specific Ab titers are tightly associated with disease severity. [32][33][34] Thus, to understand whether lower Ab levels simply represented a surrogate of lower viral burden and, thus, a higher likelihood of surviving disease, we compared SARS-CoV-2 viral loads across cluster A and cluster B prior to CCP treatment. We found that, prior to CCP treatment, cluster B CCP recipients had lower nasopharyngeal swab viral loads than their cluster A counterparts (Figure S6A). SARS-CoV-2 viral loads were significantly anti-correlated with many S protein Ab titers but not with many S and N Ab functions (Figure S6B). In a linear regression model, Ab function, Ab titer, and SARS-CoV-2 viral load accounted for 41.5%, 15%, and 5.3% of the variation in clinical severity score, respectively, making Ab function and titer far better predictors of clinical severity than viral load (Figures S6C and S6D). A single functional Ab measurement, N-ADNP, was 3-fold more predictive of clinical severity than SARS-CoV-2 viral load. Although higher viral loads have been shown to predict clinical response to CCP therapy in prior work, here we found that pre-existing anti-SARS-CoV-2 functional humoral responses are stronger predictors of response to CCP.
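As a minimal sketch (not the study's code) of how the explained variance of competing predictor sets can be compared in R, assuming hypothetical data frames `func_df`, `titer_df`, and `vl_df` that each pair the clinical severity score `CSC` with the corresponding predictors, and a hypothetical column `log_viral_load`:

```r
# Compare how much CSC variation each predictor set explains via R-squared
r2 <- function(formula, data) summary(lm(formula, data = data))$r.squared

c(functions  = r2(CSC ~ ., data = func_df),            # Ab functional measurements
  titers     = r2(CSC ~ ., data = titer_df),           # Ab isotype/subclass titers
  viral_load = r2(CSC ~ log_viral_load, data = vl_df)) # viral load alone
```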
Two months later, CCP treatment resulted in a sustained shift in the inflammatory status of S-specific Ab via glycosylation changes
Based on the pharmacokinetics of intravenous immunoglobulin (IVIG) in secondary immunodeficiencies, it is unlikely that Ab from 2 units of CCP (~400 mL) would continue to circulate for more than a month after therapy. 35 Thus, we next examined whether CCP had long-lasting effects on recipients' SARS-CoV-2 humoral immune response. First, we found that CCP-treated and control participants did not have significantly different S IgG1 levels (Figure 5A). In addition to changes in the overall levels of Abs, the functional and inflammatory properties of Abs are regulated by changes in IgG Fc glycosylation at asparagine 297. [36][37][38] Given the importance of the Fc glycan in severe COVID-19, 39,40 we profiled Fc glycan differences across CCP-treated and control participants 2 months after treatment (day 60). CCP-treated individuals exhibited selective enrichment of S-specific disialylated and digalactosylated peaks, such as G2S2F, G2S2B, and G2S1F (Figures 5B and 5C). A LASSO/PLS-DA model, using S-specific Fc glycan profile features only, was able to separate CCP-treated from control participants (Figures 5D and S7A). Among the Fc glycan structures, digalactosylated and sialylated structures were selectively enriched in CCP-treated participants, whereas the asialylated G1FB.G2 was enriched in control participants (Figure 5E). A co-correlational network was constructed to gain deeper insights into the collection of Fc glycans that may co-evolve in the setting of CCP treatment (Figure 5F). G2S2F, enriched in CCP-treated participants, was strongly correlated with sialylation, disialylation, and digalactosylation, as well as with individual digalactosylated Fc glycan species, including G2S1FB, G2S2, and G2S2FB. Conversely, G2S2F was strongly anti-correlated with monogalactosylation and asialylated features such as G2F, G1F.1FB, and G1FB.G2, pointing to enrichment of heavily sialylated and galactosylated S Abs in CCP-treated individuals. Because high sialylation 41,42 and galactosylation 43 have been linked to anti-inflammatory Ab activity, these data point to the evolution of anti-inflammatory S-specific Ab profiles after CCP therapy. A second network was observed, linking the CCP-treatment-enriched feature G2S1B with another bisected feature, G2B, pointing to a potential role of bisecting GlcNAc in CCP-treated individuals. G2S2FB and disialylated Abs were individually significantly enriched in CCP-treated individuals compared with non-treated control individuals (Figures 5G and 5H). These data point to a longer-term effect of CCP on shaping the inflammatory profile of the evolving SARS-CoV-2 humoral immune response.
N immunodominance persists 2 months after CCP treatment
Given the presence of a persistent anti-inflammatory humoral signature on S-specific Abs 2 months after treatment, we finally aimed to define whether early signatures of response to therapy persisted over time. Thus, we investigated the SARS-CoV-2-specific Ab profiles of CCP-treated and control participants 2 months after treatment. Two months after therapy, CCP-treated individuals continued to exhibit enhanced N-specific Ab titers and FcR-binding Abs. Control participants still had higher S1- and RBD-specific Ab titers and FcR binding (Figures 6A and 6B), pointing to persistence of the immunodominant shift associated with CCP therapy. Using a LASSO/PLS-DA, we found that CCP-treated individuals continued to exhibit a unique overall humoral immune profile compared with control participants (Figures 6C-6E). Only 4 of the total 70 features were sufficient to separate Ab profiles across the 2 groups 2 months after therapy. Two features, NTD-specific IgA1 and NTD-specific FcgR3A binding Ab levels, were enriched in control participants in our model (Figure 6D). N-specific IgM and C1q binding Abs were selectively enriched in CCP-treated individuals (Figure 6D). The LASSO-selected feature co-correlation network highlighted the presence of additional S-specific features associated with NTD-specific FcgR3AV binding levels in control individuals. These features were inversely correlated with N-specific ADCP, highlighting the dichotomous response represented by an S- or N-focused Ab profile (Figure 6F). N-C1q was tightly co-correlated with 15 other N-specific Ab features that were all selectively enriched among CCP-treated individuals. The tight correlation of N-specific ADCD with C1q (Figure 6F), as well as the LASSO selection of N-specific C1q and N-specific IgM (Figure 6D), suggested that CCP treatment may contribute to a durable, classical complement pathway response. These data point to durable effects of CCP that result in long-lived attenuation of S-specific inflammatory responses in favor of a durable, N-specific, complement-focused humoral response, and suggest that the benefits of CCP in hospitalized patients with COVID-19 are in part due to an immunodominance shift in humoral immune evolution. CCP treatment is marked by a reduced S-specific humoral immune response and augmented N-specific humoral immunity, resulting in durable changes in Ab profiles months after treatment.
DISCUSSION
Since the start of the COVID-19 pandemic, many clinical trials have studied the efficacy of CCP. Many large studies of hospitalized patients with COVID-19 have not demonstrated benefits from CCP. However, select studies of high-titer CCP earlier in disease have shown a mortality benefit and improvement in clinical status. [3][4][5][6][7][9][10][11]14 We used systems serology to study a randomized study showing clinical benefits of CCP treatment early in hospitalization for COVID-19 pneumonia 14 to understand the signatures of protective immunity provided by CCP. Insight into the specific components of a polyclonal Ab therapy that are associated with improved patient outcomes could inform how we choose and design monoclonal Abs (mAbs) for future SARS-CoV-2 treatment therapies. We found that CCP shifted immunodominance to SARS-CoV-2 by diminishing the S-focused evolution in exchange for expanded N-specific activity. The clinical benefits associated with this immunodominance shift support three major findings: (1) the importance of blunting the inflammatory S-targeted humoral response in severe COVID-19 disease, (2) the critical role of N-specific immune complexes in CCP benefits, and (3) the long-lasting nature of the anti-inflammatory effects on the S and N humoral response. These findings expand our previous study earlier in the COVID-19 pandemic, which was not designed to assess the clinical benefits of CCP but found that CCP enriched in N-specific Abs blunted development of the inflammatory anti-SARS-CoV-2 host response. 13 Here, in the UPenn CCP2 study, we found that CCP treatment led to slower development and lower levels of FcR2A-binding Abs, the predominant FcR on monocytes, as well as lower levels of S-, S1-, and RBD-specific Abs, suggesting that CCP may actually blunt development of monocyte-activating S Abs. Emerging work has shown that the afucosylated inflammatory S Abs found in participants with severe COVID-19 promote macrophages to produce pro-inflammatory cytokines 44 and that inflammatory monocytes/macrophages are central to the hyperinflammatory state in severe COVID-19. 45 This strongly supports the possibility that part of the therapeutic benefit of a polyclonal Ab therapy occurs via immunomodulatory effects of the Abs rather than via antiviral activity alone. Thus, in some instances, CCP therapy may acutely dampen the Ab-induced macrophage/monocyte hyperinflammatory host response, temper the cytokine storm, and potentially produce long-lasting anti-inflammatory effects after resolution of COVID-19 viremia. The underlying immunologic mechanisms of how CCP leads to long-lasting immunomodulation remain unclear but may include several non-mutually exclusive possibilities. Blockade of viral spread by CCP-derived neutralizing Abs, coupled to the opsonophagocytic activity of CCP-derived functional Abs, may lead to attenuation of inflammation at the time of infection, permitting the immune system to develop a balanced adaptive immune response. 46,47 Notably, CCP is derived from convalescent individuals months after acute infection, whose Abs carry rested, less inflammatory Fc domains. It is also plausible that CCP-formed immune complexes may drive uptake via type 2 receptors, 48-50 germinal center activation, and clearance of the virus in the setting of anti-inflammatory signals that lead to epigenetic programs resulting in longer-lived anti-inflammatory responses. 51
Our results show that N-focused immunodominance in COVID-19 disease is associated with improved clinical outcomes. Emerging work suggests that freely circulating N protein can activate complement via the alternative pathway 52,53 and is likely involved in the hyperinflammatory lung damage seen in people with severe COVID-19 that leads to acute respiratory distress syndrome. 52,54 mAbs targeting N can inhibit free N-induced MASP-2 activation in an in vitro system. 53 In this work, we found that CCP induces stronger N-specific humoral responses that were associated with improved clinical outcomes. This suggests that N immunodominance may be a mechanism to attenuate the inflammatory activity of N-specific immune complexes in the lung while allowing the rest of the immune system to control and clear the infection. N-specific Ab function, in addition to binding, is important for the effects we see with CCP: of the 19 N-specific Ab features that were associated with CCP treatment, the two most strongly associated with clinical benefits were ADCD and FcR2B binding. The long-lasting immunodominance shift associated with CCP treatment identified in this work highlights the potential importance of clearing N immune complexes early in severe COVID-19 and suggests that N could be a unique target for mAb therapies modifying severe COVID-19. The observed immunodominance shifts associated with CCP treatment were linked to improved clinical outcomes. Using orthogonal analytical approaches, we found that the diminished S features and enhanced N features in CCP-treated individuals were associated with better outcomes in control as well as in CCP-treated participants. Our data suggest that the benefits of CCP in hospitalized patients with COVID-19 may not only be due to neutralizing Ab but also to shifting the immunodominance of CCP recipients' immune response via Ab functional activity. This observed immunomodulatory effect suggests that passive polyclonal Ab therapy may have distinct benefits in patients with COVID-19 compared with anti-RBD mAbs and antiviral agents such as nirmatrelvir/ritonavir 55 and molnupiravir, 56 both of which target viral invasion/replication to provide clinical benefits. Most anti-RBD monoclonal agents are not designed to be immunomodulatory and likely contribute to control of infection by limiting viral spread. Thus, monoclonal capture of the virus may limit the inflammatory properties of the virus but not temper host-driven inflammation. Even sotrovimab, a newer-generation IgG1 anti-RBD Ab with a half-life-extending LS mutation (M428L/N434S), is decorated with the same Fc glycans as standard monoclonal agents 57,58 and likely mediates protection via a similar mechanism of action. The continued evolution of new variants of concern (VOCs), most recently Omicron, has led to loss of activity for many of the RBD-targeted mAbs. [59][60][61][62] Polyclonal Ab therapies such as CCP, hybrid convalescent/vaccinated plasma, 15 COVID-19 hyperimmunoglobulin, 63 equine COVID-19 hyperimmunoglobulin, and transchromosomic COVID-19 hyperimmunoglobulin such as SAB-185 64 may contain the breadth of Abs needed to combat perpetually evolving pathogens by targeting multiple epitopes to bind, clear, and attenuate inflammation.
The emergence of the Omicron variant has rendered most of our mAb therapeutic agents inactive. [59][60][61][62] As a result, there is renewed interest in the use of polyclonal Ab therapies like CCP. This class of Ab therapeutic agents is less likely to lose efficacy to new variants because it targets multiple sites in the virus, and plasma from survivors of recently circulating variants can be procured relatively quickly. By using a systematic approach in our study of the factors contributing to the therapeutic benefits of CCP, we found untapped targets for future severe COVID-19-modifying treatments. Our findings contribute to a burgeoning literature showing the promise of anti-N mAbs as a disease-modifying treatment for severe COVID-19-induced hyperinflammation. Identifying biomarkers that will predict who will respond to a passive Ab therapy like CCP will be essential to streamline COVID-19 therapy and improve outcomes. Our findings show that, by choosing CCP based on high S titers alone and selecting patients based on low pre-existing S titers, we are likely incorrectly matching patients with therapies. Finally, our research confirms the importance of the functional S and N Ab response in the treatment of COVID-19 and should guide development of COVID-19 mAb and polyclonal Ab therapeutic agents that focus not only on neutralization but also on Fc-directed functionality.
Limitations of the study
In this work, we studied a randomized controlled clinical trial of hospitalized patients with severe COVID-19 in which CCP treatment led to a significant decrease in mortality and improvement in disease severity. Although the UPenn CCP2 trial enrolled fewer participants than multicenter CCP trials, it used local, single-sourced plasma, which may in part explain its positive results. 65 By focusing our analysis on this single center, we were able to use time-to-event analysis as part of the CSC, which helped us better parse the continuum of COVID-19 outcomes. Because the trial was designed as a randomized comparison of CCP with standard of care, our analysis of the UPenn CCP2 trial cannot rule out effects of non-Ab proteins present in CCP. However, Sullivan et al. 9 found that CCP reduced the risk of hospitalization in a trial of CCP vs. fresh-frozen plasma (FFP), suggesting that serum proteins present in FFP and CCP are not responsible for the clinical benefits found in their trial and ours. We were not able to seek CCP-specific Ab features that drove COVID-19 clinical outcomes because the majority of participants in our clinical trial received CCP from two separate plasma donors. It is important to note that this study was conducted before there was widespread vaccination in the United States; we do not know how vaccination status will affect the response to CCP. Because the majority of participants in this study were already being treated with corticosteroids and remdesivir, we cannot address whether the activity of CCP we found here is independent of or contingent on combination treatment. Despite these limitations, we were able to use deep humoral immune profiling to understand how CCP modulates host immunodominance and affects clinical outcomes.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
RESOURCE AVAILABILITY
Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Galit Alter (galter@mgh.harvard.edu).
Materials availability
This study did not generate new unique reagents.
Data and code availability
- The processed dataset generated and analyzed during the current study has been made available in Table S5 and deposited at Mendeley Data: https://doi.org/10.17632/zc5dzbn9tb.1. Antibody class, subclass, FcR-binding, and functional assay measurements are included in the first sheet of the table, and Spike-specific Fc-glycan data are included in the second sheet.
- Custom code was used in this manuscript and has been made available at Zenodo.org: 6110200. The R packages used for data analysis are described in more detail in the STAR methods section, and more information is available upon request.
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Clinical studies and human serum samples
The cohort described here participated in a randomized controlled trial of convalescent plasma in hospitalized patients with severe COVID-19, as described in Bar et al. 14 Briefly, the study enrolled hospitalized adults with RT-PCR-confirmed SARS-CoV-2 infection, radiographic documentation of pneumonia, and abnormal respiratory status, defined as room air oxygen saturation (SaO2) <93%, requiring supplemental oxygen, or tachypnea with a respiratory rate ≥30 breaths per minute. Participants were excluded if they had a contraindication to transfusion, were participating in other clinical trials of investigational COVID-19 therapy, if there was clinical suspicion that the etiology of acute illness was primarily due to a condition other than COVID-19, or if ABO-compatible CCP was unavailable. Between May 2020 and January 2021, a total of 80 eligible participants were randomized to receive either 2 units of CCP plus standard of care (treatment arm) or standard of care alone (control arm). Participants were assigned to treatment or control in a 1:1 ratio. 41 participants were randomized to treatment, but two declined CCP administration, and 40 were included in our analysis; 39 participants were randomized to control. 39 participants in the treatment arm received up to 2 units of convalescent plasma on study day 1, with 4 participants receiving 2 units from the same donor and 35 receiving units from two distinct donors. Participants were enrolled a median of 6 days (IQR 4-9) after the onset of COVID-19 symptoms. None of the participants were on mechanical ventilation at enrollment. The majority of participants received steroids (83%) and remdesivir (81%) at enrollment. The median age of participants was 63 (IQR 52-74); 41% had diabetes, 67% had hypertension, 45% had obesity, 32% had chronic kidney disease, 27% had cancer, and 14% had immunodeficiencies. Of the enrolled participants, 54% were female and 45% were male. The majority of participants identified as African American (53%), with 5% identifying as Asian, 4% identifying as Latino/a, 34% identifying as non-Latino/a Caucasian, and 4% without an identified race or ethnicity.
The clinical cohort is described in detail in Bar et al. 14 All assays were acquired via flow cytometry with an iQue (Intellicyt) and an S-Lab 384-well plate-handling robot (PAA). For ADCP, events were gated on singlets and bead-positive cells. For ADNP, neutrophils were defined as CD66b-positive events, followed by gating on bead-positive neutrophils. A phagocytosis score was calculated for ADCP and ADNP as (percentage of bead-positive cells) x (MFI of bead-positive cells) divided by 10,000. For ADCD, complement deposition was reported as the median fluorescence intensity of C3 deposition on Spike-coupled beads. For ADNK, NK cells were defined as CD3− and CD56+ events. NK cell activation was quantified as the percentage of NK cells positive for the degranulation marker CD107a 73 and for two markers of NK cell activation, MIP-1b and IFNg. 74 In the text, we refer to these readouts as CD107aNK, MIP-1bNK, and IFNgNK.
Fc glycan analysis
Capillary electrophoresis was conducted as previously described. 75 Briefly, recombinant S was biotinylated and coupled to 1 μm neutravidin-coated magnetic beads; 5 μg of protein was coupled to 50 μL of beads for each sample. Heat-inactivated sample (100 μL) was incubated with 50 μL of un-coupled magnetic beads for 30 min to clear non-specific bead binding. Pre-cleared plasma was incubated with 50 μL of protein-coupled beads for 1 h at 37°C and washed, and the captured antibody Fc was cleaved off by incubating with 1 μL of IdeZ at 37°C for 1 h. The isolated Fc fragments were deglycosylated, and the freed glycans were fluorescently labeled and purified using a GlycanAssure APTS Kit according to the manufacturer's instructions. Glycans were analyzed by capillary electrophoresis on a 3500xL genetic analyzer (Applied Biosystems). Samples were run with N-glycan fucosyl, afucosyl, bisecting, and mannose N-glycan libraries to enable identification of twenty-two discrete glycan species. The glycan profile of each labeled and purified participant sample was measured in technical duplicate. The relative frequencies of each glycan peak were plotted as a percentage of total glycans, calculated using GlycanAssure software.
QUANTIFICATION AND STATISTICAL ANALYSIS
Data pre-processing
Duplicate measurements of antibody isotypes, subclasses, FcR-binding levels, and ADCD were averaged for each sample and then log10 transformed. Duplicate measurements of ADNK, ADCP, and ADNP were averaged for each sample. In order to remove antibody features with low-magnitude signals, we used the variation in the control samples as a cutoff. More specifically, we removed antibody features whose maximum signal in the CCP recipients was less than four standard deviations over the negative control PBS wells (Mean PBS + 4x PBS SD).
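A minimal sketch of this filtering step in R, assuming hypothetical objects (not from the study's code): `luminex`, a samples-by-features matrix of averaged duplicate readings; `pbs`, the matching PBS control wells; and `is_ccp`, a logical vector marking CCP recipients:

```r
# Low-signal filter: keep features whose maximum signal among CCP recipients
# exceeds Mean(PBS) + 4 * SD(PBS), then log10-transform the retained features
# (in the study, only titer/FcR/ADCD features were log10 transformed)
cutoff <- colMeans(pbs) + 4 * apply(pbs, 2, sd)
keep   <- apply(luminex[is_ccp, ], 2, max) >= cutoff
luminex_filtered <- log10(luminex[, keep])
```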
Visualization
The heatmaps were created with the function pheatmap in the R package 'pheatmap' (version 1.0.12). To eliminate the effect of extreme values and to visualize the predominant differences clearly, the color ranges were divided equally into 100 intervals over the quantile range of the percentage-adjusted values across all the measurements. The UMAP visualization was performed on the principal components that cumulatively explained more than 90% of the variance, using the umap function in the R package 'umap' (version 0.2.7.0) with fine-tuned parameters (n_neighbors = 8, min_dist = 0.1), and visualized with the ggplot function in the R package ggplot2 (version 3.3.5).
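A sketch of this PCA-then-UMAP step in R, assuming a hypothetical scaled feature matrix `X` and group label vector `arm` (neither name comes from the study's code):

```r
library(umap)
library(ggplot2)

pca     <- prcomp(X, center = TRUE, scale. = TRUE)
cum_var <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
n_pc    <- which(cum_var > 0.9)[1]              # PCs explaining >90% of variance

cfg <- umap.defaults                            # fine-tune the UMAP parameters
cfg$n_neighbors <- 8
cfg$min_dist    <- 0.1
emb <- umap(pca$x[, 1:n_pc], config = cfg)

ggplot(data.frame(emb$layout, arm = arm),       # emb$layout gives X1 and X2 coordinates
       aes(X1, X2, colour = arm)) +
  geom_point()
```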
Polar plots
Polar plots were used to visualize the mean percentile of groups in Figures 2C, 3C, and 5A. Percentile rank scores were determined for each feature across all considered samples using the function 'percent_rank' of the R package 'dplyr' (version 1.0.5).
Polar plots for Figure S1C were used to visualize the S-specific individual antibody profile of CCP-treated and control participants over the course of the clinical trial. Each feature across the respective populations was scaled by min-max normalization.
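A minimal sketch of these two scaling steps in R, assuming a hypothetical long-format data frame `df` with columns `feature`, `value`, and `group` (not the study's code):

```r
library(dplyr)

scaled <- df %>%
  group_by(feature) %>%
  mutate(pct    = percent_rank(value),                                   # percentile rank per feature
         minmax = (value - min(value)) / (max(value) - min(value))) %>%  # min-max scaling
  group_by(group, feature) %>%
  summarise(mean_pct = mean(pct), .groups = "drop")                      # group means for the polar plot
```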
Multivariate models
Four-parameter logistic regression model
The details of the four-parameter logistic regression model were explained in our previous paper. 23 Briefly, all the measurements were normalized so that the minimal values across groups were zero and the maximum values were one. To determine the difference between the fitted models for each antibody feature in the control and treatment groups, the dynamics along days since symptom onset were described for each group (CCP-treated and control) at the population level using a four-parameter logistic growth curve. Furthermore, to detect differences explained by specific parameters between the control and CCP-treated groups, we built two paired models simultaneously, allowing combinations of parameters to differ between the two groups while the others were shared between the groups. The 16 models defined by the combinations of the four parameters were evaluated by the Akaike Information Criterion (AIC) to balance model fit and model complexity, and the best model was picked as the one with the lowest AIC. Additionally, to analyze the overall difference in parameters across the groups (Figure 2E), the maximum likelihood estimates for all the models were combined by weighing the contribution of individual models by their Akaike weights.
Regression with clinical severity score
The regression model in Figure 2H was trained to associate the Clinical Severity Score with a minimal set of the top 30 features suggested by the four-parameter logistic regression model. First, we applied the least absolute shrinkage and selection operator (LASSO) feature selection algorithm to extract significant features. Here, we ran the LASSO feature selection 10 times on the whole dataset and picked the set of features selected more than seven times. The details are implemented in the function 'select_lasso' in the systemseRology R package (v.1.0). Then, a partial least squares regression model was trained using the LASSO-selected features. Model performance was evaluated by 5-fold cross-validation, and negative control models were constructed from permuted labels with multiple iterations: the permuted control models were generated 20 times by shuffling labels randomly for each iteration. The coefficient of determination, R2, was used to evaluate regression performance. For PLS-R, we used the 'opls' function in the R package 'ropls' (v.1.22.0) for regression and functions in the systemseRology R package for visualization.
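A sketch of the repeated-selection step, using glmnet's LASSO as a stand-in for the systemseRology 'select_lasso' wrapper; `X` (feature matrix) and `csc` (clinical severity scores) are hypothetical object names:

```r
library(glmnet)

selected <- replicate(10, {
  fit  <- cv.glmnet(as.matrix(X), csc, alpha = 1)   # LASSO: alpha = 1
  beta <- coef(fit, s = "lambda.min")[-1, 1]        # drop the intercept
  names(beta)[beta != 0]                            # non-zero coefficients
}, simplify = FALSE)

counts <- table(unlist(selected))
stable_features <- names(counts)[counts > 7]        # kept in more than 7 of 10 runs

# PLS regression on the stable features with the ropls package
library(ropls)
pls_fit <- opls(as.matrix(X[, stable_features]), csc, predI = 2)
```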
To further investigate the predictive performance for the clinical severity score using the features selected by LASSO, all samples were divided into two groups (higher severity and lower severity) using a threshold of 20. Then, clinical severity scores for all samples were predicted using 5-fold cross-validation for 100 repetitions. After that, the averaged ROC curve, together with the ROC curve from each repetition, was visualized with the roc function in the R package pROC (version 1.18.0), as depicted in Figure 2K.
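A minimal sketch of this ROC analysis in R, assuming `pred` is a hypothetical matrix of out-of-fold predicted severity scores (one column per repetition) and `obs` the observed scores:

```r
library(pROC)

severe   <- factor(obs > 20, labels = c("lower", "higher"))     # dichotomize at CSC = 20
roc_list <- apply(pred, 2, function(p) roc(severe, p, quiet = TRUE))
aucs     <- sapply(roc_list, auc)
mean(aucs)                                                      # average AUC across repetitions
plot(roc_list[[1]])                                             # one repetition's ROC curve
```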
Network analysis
Correlation networks were used to visualize the additional immune measurements significantly associated with the LASSO-selected features, providing additional insight into the underlying biological mechanisms. Measurements that were significantly (p value <0.05) correlated with the selected features after a Benjamini-Hochberg correction were defined as co-correlates. Significant Spearman correlations above a threshold of |r| > 0.7 were visualized within the networks. In detail, the Spearman correlation coefficients were calculated using the 'rcorr' function in the 'Hmisc' package (v4.4.2), and the p values were corrected by the Benjamini-Hochberg procedure in the 'stats' package (v.4.0.3). For the purpose of visualization, the correlation networks were drawn using the 'ggraph' (v.2.0.4) and 'igraph' (v.1.2.6) packages.
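A sketch of this network construction in R, assuming a hypothetical feature matrix `X` and a character vector `sel` of LASSO-selected feature names:

```r
library(Hmisc)
library(igraph)

rc      <- rcorr(as.matrix(X), type = "spearman")    # Spearman r and p matrices
padj    <- rc$P
padj[]  <- p.adjust(rc$P, method = "BH")             # Benjamini-Hochberg correction

edge <- (padj < 0.05) & (abs(rc$r) > 0.7)            # significant, strong correlations
edge[is.na(edge)] <- FALSE
# keep only edges touching at least one LASSO-selected feature
edge[!(rownames(edge) %in% sel), !(colnames(edge) %in% sel)] <- FALSE

g <- graph_from_adjacency_matrix(edge * 1, mode = "undirected", diag = FALSE)
plot(g, vertex.size = 5)                             # or use ggraph for nicer layouts
```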
Mixed linear model
We used two nested mixed linear models (null and full model) without/with treatment group information to assess the significance of the association between measured antibody levels and treatment groups while controlling for potential confounding clinical characteristics. We fit two mixed linear models and estimated the improvement in model fit by likelihood ratio testing to identify the associated measurements for participant timepoints from the first two weeks of the trial (D1, 3, 8, and 15).
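A sketch of the nested comparison with lme4, using a reduced covariate set for brevity (the full set is listed below); `df`, `value`, `day`, and `subject` are hypothetical names, while the covariate codes are those used in the study:

```r
library(lme4)

# Null model: no treatment-arm term (REML = FALSE is required for the LRT)
m0 <- lmer(value ~ day + AGE_CAT + GENDER + TOTALCMB + CM_STEROIDS + (1 | subject),
           data = df, REML = FALSE)
# Full model: adds the treatment arm (CP_vs_Control) as a fixed effect
m1 <- lmer(value ~ day + CP_vs_Control + AGE_CAT + GENDER + TOTALCMB + CM_STEROIDS +
             (1 | subject), data = df, REML = FALSE)

lrt  <- anova(m0, m1)                    # likelihood ratio test of the nested models
p    <- lrt[["Pr(>Chisq)"]][2]
tval <- coef(summary(m1))["CP_vs_ControlCCP", "t value"]  # adjust to the actual coefficient name
```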
Here, the subject clinical information included age-at-enrollment category, gender, race, ethnicity, blood type, and enrollment period (May-Jun 2020, July-Aug 2020, Sep-Oct 2020, Nov 2020-Jan 2021). In addition, we included the total number of COVID-19 disease severity-modulating co-morbidities (TOTALCMB), diabetes (type 1, type 2) (DM), obesity (OBESITY), hypertension (HTN), cardiovascular disease (CVD), pulmonary disease, chronic kidney disease (CKD), chronic liver disease, cancer (CANCER), and immune deficiency (IMMDF). Additionally, the model included whether the patients were treated with remdesivir (CM_RMDSVR) or steroids (CM_STEROIDS) at baseline and how long they had been symptomatic from COVID-19 (SymOnSet). The R package 'lme4' was used to fit the mixed linear model to each measurement and to test for differences in antibody features depending on whether a patient received CCP or not. The p value from the likelihood ratio test and the t value (normalized coefficient) associated with the variable representing the two arms of the clinical trial (CP_vs_Control in the full model) were visualized in a volcano plot using the ggplot function in the R package 'ggplot2' (version 3.3.5).
Linear model to identify the percentage of explained variance
We built a linear model to identify the association of nucleocapsid-related antibody features with clinical outcomes in CCP-treated and control individuals. The linear model used the same clinical characteristics involved in our previous models and N-related measurements from the second week (day 8 and day 15 measurements) of the clinical trial to predict the clinical outcomes as measured by the CSC. The model formula was as follows: CLINICAL_SEVERITY_SCORE ~ 1 + N_C1q + N_FCAR + N_FCGR2AH + N_FCGR2B + N_FCGR3AV + N_FCGR3B + N_FCRN + N_IgA1 + N_IgG1 + N_IgG2 + N_IgG3 + N_IgG4 + N_IgM + N_ADCD + N_ADCP + N_ADNP + N_107a._ADNK + N_IFNg_ADNK + N_MIP1b_ADNK + AGE_CAT + GENDER + RACE + ETHNICITY + POOL_BLOOD + ENROLLQTR + TOTALCMB + DM + CVD + HTN + OBESITY + CKD + CANCER + IMMDF + CM_RMDSVR + CM_STEROIDS + SymOnSet.
Then, the percentage of explained variance in the CSC attributed to each antibody feature was calculated from the sum of squares in an ANOVA decomposition of the model.
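A minimal sketch of this decomposition in R (sequential, Type I sums of squares), assuming a hypothetical data frame `df` holding the CSC together with the N features and covariates named in the formula above:

```r
fit <- lm(CLINICAL_SEVERITY_SCORE ~ ., data = df)
tab <- anova(fit)                                   # sequential ANOVA table
pct_explained <- 100 * tab[["Sum Sq"]] / sum(tab[["Sum Sq"]])
names(pct_explained) <- rownames(tab)               # % of CSC variance per term
```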
Unsupervised clustering to identify patterns at Day 0
Using all the measurements or a selected subset of measurements, Spearman correlation coefficients across the measurements were calculated to represent sample-sample similarities. Then, we applied a community detection method to the similarity matrix to identify groups with more homogeneous immune profiles. First, we built a K-nearest-neighbor graph based on the similarity distance. Second, we calculated the adjacency matrix and identified the communities using the R package 'igraph'. Here, the parameter K was searched exhaustively from a low value (2) up to the value at which all the samples were grouped into one cluster. The number of clusters was selected as the one with the largest average silhouette value across all the clustering results.
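A sketch of this procedure in R, assuming `S` is a hypothetical sample-by-sample Spearman correlation matrix; the paper does not name the community detection algorithm, so Louvain is used here as one common choice:

```r
library(igraph)
library(cluster)

d <- as.dist(1 - S)                               # correlation distance
best <- list(sil = -Inf)
for (k in 2:20) {
  # k-nearest-neighbor graph: each sample linked to its k closest samples
  nn <- apply(as.matrix(d), 1, function(x) order(x)[2:(k + 1)])
  el <- cbind(rep(seq_len(ncol(nn)), each = k), as.vector(nn))
  g  <- simplify(graph_from_edgelist(el, directed = FALSE))
  cl <- as.integer(membership(cluster_louvain(g)))
  if (length(unique(cl)) < 2) break               # all samples in one cluster: stop
  sil <- mean(silhouette(cl, d)[, "sil_width"])   # average silhouette for this k
  if (sil > best$sil) best <- list(sil = sil, k = k, clusters = cl)
}
```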
To identify the measurements that distinguished cluster 4 from the other clusters (clusters 1, 2, and 3), we compared the samples inside cluster 4 with those outside it using the Wilcoxon rank-sum test implemented in the wilcoxauc function in the presto R package. This process was repeated with only the 14 measurements that made up the CCP benefit signature to create clusters A and B; the same clustering procedure described above was followed to determine clusters A and B.
To evaluate whether antibody levels or functions were the more important determinant of response to CCP therapy, we created three linear models based on the following measurements that most distinguished cluster 4 from clusters 1, 2, and 3: 1) the 14 antibody titers, 2) the 14 antibody functions without IgG1 normalization, and 3) the 14 antibody functions with IgG1 normalization. These models were fitted, and the percentage of explained variance was determined with the R package 'lme4'.
Discriminant analysis at day 60
The log2 fold change of the average value of each measurement was calculated between the CCP treatment arm and the control arm. The p values were estimated by a permutation test, shuffling the arm labels. In detail, we randomly shuffled the arm labels and recalculated the log2 fold change of the mean values in the two groups 1,000 times, and the p values were then estimated from the rank of the actual fold change among the shuffled fold changes.
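A minimal sketch of this permutation test for one measurement in R, assuming hypothetical vectors `x` (the measurement) and `arm` (treatment labels); the two-sided p value below is one standard way to operationalize the rank-based estimate described above:

```r
obs_fc <- log2(mean(x[arm == "CCP"]) / mean(x[arm == "Control"]))

perm_fc <- replicate(1000, {
  a <- sample(arm)                                   # shuffle the arm labels
  log2(mean(x[a == "CCP"]) / mean(x[a == "Control"]))
})

# two-sided permutation p value from the rank of the observed fold change
p_perm <- mean(abs(perm_fc) >= abs(obs_fc))
```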
Then, classification models were trained to distinguish the control and treatment groups with a minimal set of measurements. We first applied LASSO feature selection and then trained a partial least squares discriminant analysis (PLS-DA) classifier on the selected features, as described above. Model performance was evaluated by 5-fold cross-validation. Finally, network analysis was used to investigate the features correlated with the selected features among the non-selected features. Significant Spearman correlations above a threshold of |r| > 0.3 were visualized within the networks. | 2022-10-24T13:19:54.367Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "08ca25e1e3e4b84d49010274fbf6f25acca69ee0",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2666379122003706/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1bacccd9db2233dc67d1367f9e117bfd5b5e38b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226650050 | pes2o/s2orc | v3-fos-license | Diversity of polyketide synthase (PKS) genes in the metagenomic community of epilithic biofilms from the littoral zone of Lake Baikal 1035
Introduction
Multi-domain enzymatic "mega-synthases", including polyketide synthases (PKSs), nonribosomal peptide synthetases (NRPSs) and their NRPS/PKS hybrid complexes, synthesize a wide range of secondary metabolites of bacterial origin. Polyketides have diverse chemical structures and functional activities and include antibiotics, statins, tumour growth inhibitors, and other pharmaceutically important compounds (Staunton and Wilkinson, 2001). Biofilms are complex, highly dynamic, structured ecosystems in which severe chemical competition occurs; therefore, bacteria in these communities must have well-equipped chemical mechanisms to survive and establish themselves in such a competitive environment.
Materials and methods
During fieldwork onboard the RV "G. Titov" in August 2019, scuba divers from the Limnological Institute SB RAS sampled rocky substrates from depths of 12-20 m in the littoral zone of Lake Baikal near the settlements of Bolshiye Koty, Listvyanka and Bolshoye Goloustnoye. Total DNA was extracted from natural samples of epilithic biofilms using phenol-chloroform extraction by standard methods (Sambrook, 2001). The resulting DNA was used as a template in PCR with degenerate primers for PKS genes (Ehrenreich et al., 2005). The resulting amplicons were cloned into the pJET1.2/blunt vector (CloneJET PCR Cloning Kit, Fermentas, Lithuania) and then transformed into competent E. coli DH-5α cells. Nucleotide sequences were determined on a genetic analyser at the Syntol research and production company (Moscow, Russia). A comparative analysis of the obtained sequences was carried out using the BLASTX and BLASTP software packages.
Results
A molecular genetic analysis of the samples identified 65 amplicons that belonged to type I PKSs as well as to PKS/NRPS hybrids. Moreover, the identified sequences showed similarity of 40-80% with the closest homologues from the NCBI database, which indicates the presence of new PKS genes in the epilithic biofilm community. The closest homologues of the detected PKS genes belonged to unclassified bacteria as well as to the phyla Cyanobacteria, Proteobacteria, Acidobacteria, Planctomycetes, and Verrucomicrobia. The bulk of the obtained PKS gene sequences belonged to the phylum Verrucomicrobia. Among the homologous nucleotide sequences, there were genes responsible for the biosynthesis of toxins (nodularin, microcystin, nostocine, and phthiocerol) and antibiotics (thuggacin, erythromycin and curacin A). The closest homologues of the PKS genes were obtained from the northern lakes of Canada, poplar phyllosphere, soil, Lake Baikal and marine sponges.
ABSTRACT. The secondary metabolite genes detected in the metagenomic community of epilithic biofilms of Lake Baikal belong to representatives of well-known producers of various metabolites, e.g. Cyanobacteria, which produce toxins and antibiotics (nodularin, nostocine, microcystin, and curacin), and Proteobacteria, which produce antibiotics (erythromycin and thuggacin). Additionally, among the producers of natural substances, we detected PKS genes of new representatives: Acidobacteria, Planctomycetes and Verrucomicrobia. Most of the obtained PKS gene sequences belonged to the phylum Verrucomicrobia, which corresponds to the gene diversity typical of soil communities. The investigated biotopes are a source for the isolation and study of new producers that may possess unique active substances.
Discussion
In one study of meadow soils, genomes of microorganisms were obtained in which various clusters of biosynthesis genes for polyketides and nonribosomal peptides were found (Crits-Christoph et al., 2018). These biosynthetic loci are encoded by recently identified members of the Acidobacteria, Verrucomicrobia and Gemmatimonadetes as well as the candidate phylum Rokubacteria. Bacteria of these groups are widespread in soils but were not previously associated with the production of secondary metabolites. In particular, a large number of biosynthesis genes was characterised for recently identified members of the Acidobacteria, the most common bacterial phylum among soil biomes (Crits-Christoph et al., 2018). In another study, a correlation analysis between the main phyla and the diversity of A and KS domains revealed a relationship between NRPS or PKS genes and less typical phyla, such as Bacteroidetes and Verrucomicrobia, especially in Antarctic soils (Borsetto et al., 2019). These two phyla are additional microbial contributors to metabolite diversity together with the well-known producers: Actinobacteria, Proteobacteria, Firmicutes, and Cyanobacteria. Only a small number of Verrucomicrobia genomes have been analysed, and possibly new NRPS and PKS genes were detected in them. Unused or unstudied taxa, such as Verrucomicrobia and Bacteroidetes, are potential sources of new NRPSs and PKSs (Borsetto et al., 2019). Members of another phylum, Planctomycetes, isolated from biofilms of macroalgae are promising producers of biologically active compounds because they share characteristics, such as large genomes and complex life cycles, with the most biologically active bacteria, i.e. actinobacteria (Graça et al., 2016). The 13 analysed genomes of Planctomycetes showed the presence of genes or clusters for secondary metabolites. Gene screening revealed that 65% of Planctomycetes potentially have one or both types of secondary biologically active genes; 85% were amplified with the PKS-I primers and 55% with the NRPS primers (Graça et al., 2016).
Conclusion
Therefore, the epilithic biofilm community of Lake Baikal contains producers of secondary metabolites with both known activities (toxins and antibiotics) and new ones; i.e., this biotope is a good source for the isolation and study of new producers that may possess unique active substances. | 2020-10-28T18:05:53.056Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "8dd6de351bbf35c48e017b406110ea065e72775a",
"oa_license": "CCBYNC",
"oa_url": "http://limnolfwbiol.com/index.php/LFWB/article/download/755/504",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce25434323efdaf43a3a6c822f0cd3e9cc64c366",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
18777117 | pes2o/s2orc | v3-fos-license | Congenital upper eyelids ectropion in Down’s syndrome
Congenital bilateral ectropion of the upper eyelids is a rare, benign condition reported in ophthalmic literature. It is more frequently associated with Down’s syndrome, ichthyosis, and sporadic cases in newborns from black population. We report three cases of congenital bilateral upper eyelid ectropion associated with Down’s syndrome. Management of these patients usually requires medial and lateral canthoplasties, full-thickness pentagonal resection of the upper eyelids and placement of skin grafts. We present herein the evolution of one of these patients and we will discuss the mechanism of the eyelid ectropion and its treatment.
Introduction
Congenital bilateral ectropion of the upper eyelids is a rare, benign condition [1], [2], [3]. The eversion usually presents at birth and resolves spontaneously within two weeks of birth [1], [3], [4]. Its etiology is unknown and several possible mechanisms have been proposed; however, it is frequently associated with Down's syndrome, ichthyosis, and newborns in the black population [1], [2], [4]. We present three cases of this very rare condition and discuss the mechanism of the eyelid ectropion and its management.
Case descriptions
Case 1
An eight-month-old boy with clinical signs and a diagnosis of Down's syndrome was brought to consultation because of bilateral ectropion of the upper eyelids that had been noted at birth (Figure 1). He was born at term to a 33-year-old mother following an uneventful pregnancy and delivery. Chromosome analysis showed trisomy 21 (karyotype 47, xy + 21). In addition to the typical phenotypic facial changes of Down's syndrome, he was diagnosed with congenital heart disease, polydactyly of the left hand and an adducted right foot. His ophthalmic examination revealed horizontal nystagmus, strabismus, epicanthal folds and ectropion of both upper eyelids as a result of severe anterior lamella shortening. With gentle digital pressure the upper eyelids could be repositioned, but they everted again while the patient was crying or with forced lid closure. The corneas remained clear in both eyes. Visual acuity could not be recorded, and the rest of his ophthalmic exam was within normal limits. Given these findings, the patient was taken to surgery for bilateral correction of the upper eyelid ectropion. A horizontal skin incision was performed 2 mm above the upper eyelid margin, and the edges of the wound were undermined and separated. Once the skin was loose, the eyelid returned to its normal position. Severe laxity of both upper eyelids due to canthal laxity was noted, which was corrected at this point with medial and lateral canthopexies using 5-0 Vicryl sutures for bony attachment, together with a full-thickness pentagonal resection of 3 mm at the junction of the lateral one-third and the medial two-thirds of the lid. The margins were closed with three 6-0 black silk sutures, and the tarsus was closed with interrupted 5-0 Vicryl sutures. This helped to improve the horizontal laxity. Finally, two elliptical retroauricular skin grafts measuring 12 mm long and 5 mm wide were grafted onto the upper eyelids and sutured with interrupted 6-0 black silk sutures (Figure 2).
Postoperatively, there were no complications, and the final outcome was satisfactory (Figure 3).
Case 2
A 10-month-old boy was brought to consultation because of bilateral ectropion of the upper eyelids present since the neonatal period. He was born to a 35-year-old mother following an uncomplicated pregnancy and delivery. Initial management was conservative, consisting of frequent application of topical lubricants and ointments and patching of the eyelids. Ocular examination revealed an upward slant of the eyelid fissures, epicanthal folds, and ectropion of both upper eyelids, which could be repositioned easily but returned to the everted position spontaneously with crying and forced closure (Figure 4). Chromosome analysis showed trisomy 21 (karyotype 47, xy + 21). Pediatric evaluation revealed congenital heart disease, cryptorchidism and umbilical herniation. We proposed surgical correction of both eyelids, but the parents refused surgery and the patient was lost to follow-up.
Case 3
A boy with Down's syndrome was first brought to our attention at the age of 11 months because of ectropion of the upper eyelids since birth that accentuated while crying. His 33-year-old mother reported that the pregnancy was uneventful, the labour was not prolonged, and the delivery was a normal vaginal delivery. As in the previous cases, the eyelids could be returned easily to their normal positions but immediately turned out again with forced closure (Figure 5). His karyotype was 47, xy + 21. Both corneas were clear and showed no fluorescein staining. We proposed surgical correction, but the parents refused surgery and the patient was lost to follow-up.
Discussion
Although the physiopathology of congenital upper eyelid ectropion is unknown, multiple factors have been implicated, including the absence of effective lateral canthal ligaments, lateral elongation of the eyelid, hypotonia of the orbicularis, vertical shortening of the anterior lamella, and failure of the orbital septum to fuse with the levator aponeurosis [1], [2], [4], [6], [11], [12], [13], [14]. Treatment of congenital upper eyelid ectropion is controversial, and different options have been suggested. Some believe that simple and conservative management with lubricant ointments and moist chambers may be enough to prevent desiccation of the exposed conjunctiva, to reduce conjunctival edema and to allow spontaneous inversion of the eyelid within 2 to 3 weeks [3], [7], [11], [13]. Surgical treatments for more severe cases that do not respond to conservative treatment include subconjunctival injection of hyaluronic acid [4], [8], [13], tarsorrhaphy [2], [3], [6], [7], tarsorrhaphy with excision of redundant conjunctiva [5], [7], fornix sutures [3], [13], full-thickness skin grafts [1], [2], [5], [11], full-thickness horizontal lid shortening [2], [6], and attachment of the orbital septum to the levator aponeurosis [2]. Most cases of congenital eversion of the eyelids without Down's syndrome respond to patching or taping of the eyelids and the use of ointments [3], [6], [7], [11], [13]; however, surgical intervention may be necessary in patients with Down's syndrome [6], [12], [14]. In our cases, subconjunctival hyaluronic acid injection was not available at the time, and it was not likely to be effective given the lack of conjunctival chemosis. We believe that correct management of congenital upper eyelid ectropion in these cases should include correction of the underlying anterior lamella shortening with full-thickness skin grafts, which should be extended beyond the horizontal limb of the canthal tendon to compensate for subsequent contraction of the graft. In addition, horizontal lid laxity needs to be addressed with lateral and medial tarsal strip procedures and a full-thickness pentagonal lid resection.
In our three cases, the patients did not meet the criteria for conservative treatment because of the congenital eyelid abnormalities that may occur in Down's syndrome.
In conclusion, congenital ectropion of the upper eyelid is a rare abnormality that can threaten the cornea and visual acuity if not treated early. In cases where the eyelids can be repositioned mechanically but continue to evert with forced closure or crying, we recommend surgical intervention to prevent further complications. The goals of management are to protect the cornea, improve cosmesis, and prevent amblyopia.
Notes
Competing interests
The authors declare that they have no competing interests.
Informed consent
Written informed consent was obtained from the patients' parents for the publication of these case reports and accompanying images. | 2018-04-03T05:16:29.650Z | 2017-02-03T00:00:00.000 | {
"year": 2017,
"sha1": "dfd3baec36a8e3acf97349aaebea2ae3342e03b3",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dfd3baec36a8e3acf97349aaebea2ae3342e03b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246212295 | pes2o/s2orc | v3-fos-license | Multi-temporal analysis of past and future land cover change in the highly urbanized state of Selangor, Malaysia
This study analysed the multi-temporal trend in land cover and modelled a future scenario of land cover for the year 2030 in the highly urbanized state of Selangor, Malaysia. The study used a Decision Forest-Markov chain model in the land change modeller (LCM) tool of TerrSet software. Land cover maps of 1999, 2006 and 2017 were classified into 5 classes, namely water, natural vegetation, agriculture, built-up land and cleared land. A simulated land cover map of 2017 was validated against the actual land cover map of 2017. An Area Under the Curve (AUC) value of 0.84 for the Total Operating Characteristic (TOC) and a higher percentage of components of agreement (hits + correct rejections) compared to components of disagreement (misses + false alarms + wrong hits) indicated successful validation of the model. The results showed that between 1999 and 2017 there was an increase in built-up land cover of 608.8 km2 (7.5%) and in agricultural land of 285.5 km2 (3.5%), whereas natural vegetation decreased by 831.8 km2 (10.2%). The simulated land cover map of 2030 showed a continuation of this trend: built-up area is estimated to increase by 723 km2 (8.9%) and agricultural land by 57.2 km2 (0.7%), leading to a decrease in natural vegetation of 663.9 km2 (8.1%) over the period 2017 to 2030. The spatial trend of land cover change shows built-up areas mostly located in central Selangor, where the highly urbanized and populated cities of Kuala Lumpur and Putrajaya and the Klang valley are located. The future land cover modelling indicates that built-up expansion mostly takes place at the edges of existing urban boundaries. The results of this study can be used by policy makers, urban planners and other stakeholders for future decision making and city planning.
Introduction
Land use and land cover (LULC) change is the human modification of the earth's terrestrial surface. Over the past 50 years, the driving force behind LULC change has been the increase in agricultural land, and since 1992 rapid urbanization has also added to the ever-growing change of land use and land cover (IPBES 2019); worldwide, half of the habitable land is used for agriculture (Ritchie and Roser 2019). The changes from one LULC to another can affect the biogeochemical cycles and biogeophysical processes occurring between the surface and the atmosphere (Zhou et al. 2020). LULC change can impact biogeochemical cycles like the carbon cycle by altering carbon sinks and carbon dioxide (CO2) emissions (Li et al. 2020a, b). LULC change affects biogeophysical processes like surface albedo, roughness, and evapotranspiration (Lejeune et al. 2017; Winckler et al. 2017; Hirsch et al. 2018), and hence alters the energy budget, the water budget, and atmospheric variables like temperature and rainfall.
Many studies have shown that urbanization can result in increased surface temperature, which leads to the Urban Heat Island (UHI) effect (Chao et al. 2020; Hafoud et al. 2020; Qiu et al. 2020; Son et al. 2020; Sultana and Satyanarayana 2020); this can furthermore alter rainfall over urban areas (Pielke et al. 2002; Liang and Ding 2017; Schmid and Niyogi 2017; Liu and Niyogi 2019; Singh et al. 2020; Yu et al. 2020). Urbanization can also affect the hydrological cycle, where impermeable urban surfaces increase runoff and lead to an increased risk of flooding (Li et al. 2018; Ohana-Levi et al. 2018; Hu et al. 2020).
Besides affecting biogeochemical and biogeophysical processes, LULC change also has a great impact on biodiversity and ecosystem services. The conversion of natural vegetation to agricultural land and urban areas can lead to the degradation of the ecosystem services and loss of biodiversity, particularly in biodiversity-rich hotspots like South East Asia. Forests and protected areas play an essential part in providing a healthy environment that benefits both people and nature, by protecting biodiversity, culture, and livelihoods of indigenous people and local communities within these areas (Shadeed and Almasri 2010;UNEP-WCMC and IUCN 2016), and providing important ecosystem services such as carbon sequestration, landscape value, and regulation of major element cycles (Ronchi and Salata 2017). Deforestation can amplify and increase the severity of floods, hence forests and protected areas are important in flood mitigation (Bhattacharjee and Behera 2018; Hasyim et al. 2020;Tembata et al. 2020).
Furthermore, LULC change modelling can be used to estimate and predict future LULC change scenarios and their effects on the environment, by analysing past and current LULC change trends and the variables that bring about these changes. Moreover, these LULC change models can be integrated with other models, like climate models (Fattah et al. 2021), hydrological models (Shirmohammadi et al. 2020; Tankpa et al. 2020; Galleguillos et al. 2021), and ecosystem models (Krause et al. 2019; Li et al. 2020a, b; Rocca and Milanesi 2020), to better understand the impact of LULC change. Modelling future urban land use change in particular has been of interest to the scientific community in recent times, due to the growing global population and urbanization. In the past few years many studies have used various LULC models to model future urban expansion and to study its effects on the environment. For example, several studies (Rimal et al. 2020; Grigorescu et al. 2021; Okwuashi and Ndehedehe 2021) modelled future urban land use change, and in all of them urban land was estimated to increase while other land uses decrease.
In the past 25 years, the largest conversion of forest area to other land uses occurred in the tropics, with South East Asia having the highest rate of deforestation since 1990 (Masum et al. 2017). In Malaysia, agriculture, especially palm oil plantations, was the main contributor to the economy until 1987, when manufacturing took over as the main contributor as the government shifted its development policy to the manufacturing sector; as a result, by the year 2000 other sectors like infrastructure and commercial development started growing, giving rise to urbanization (Abdullah and Hezri 2008). However, palm oil is still a major part of the Malaysian economy and continues to expand. Between 1990 and 2017 agricultural land increased by 55.7%, with 98.2% of this area being plantations (Yan et al. 2020). As of 2017, palm oil plantations occupy 17.62% of the landmass of Malaysia, which has led to a 20% loss of forest land (Ezechi and Muda 2019).
Urbanization, on the other hand, has seen a rapid increase, and Malaysia is one of the most urbanized countries in East Asia (Plecher 2020). 50.4% of the population lived in urban areas in 1991; this number reached 65% in 2010, and as of 2020 the share of the population living in urban areas has reached 75%, with projections that it will reach 85% by 2040 (Samat et al. 2020). This has resulted in the expansion of urban areas at the expense of other land covers: urban areas increased from 1793.2 ha in 1992 to 3235.4 ha in 2002, and in 2010 urban areas reached 3987.8 ha; on the other hand, agricultural land decreased from 6171.3 ha (53.8%) in 1992 to 3883 ha (35%) in 2010 (Mohammed et al. 2016). At the state level, Selangor and Penang have historically been the most urbanized states in Malaysia, and the rate of urbanization has continued to rise over the years. The federal territories of Kuala Lumpur and Putrajaya, located within Selangor, have been 100% urbanized since 2010 (Hasan and Nair 2014). Overall, the level of urbanization in Malaysia as of 2020 stands at 77.16% (O'Neill 2021).
In Malaysia, there have been few studies that have used LULC models, GIS, and remote sensing to study LULC change, and these studies have a varying spatial scope. For example, Gambo et al. (2018) and Rafaai et al. (2020) used LULC change modelling to study the changes within and around protected areas, whereas Verburg et al. (2002), Memarian et al. (2012), Ibrahim and Ludin (2016), Kamarudin et al. (2018), and Majid et al. (2018) studied LULC change at the basin level, and others like Boori et al. (2015), Almdhun et al. (2018), and Samat et al. (2020) studied land use change in cities and towns. In Selangor, several studies have carried out LULC change modelling; for example, Boori et al. (2015) and Nourqolipour et al. (2015a, b, 2016) analysed LULC change for certain parts of Selangor. Therefore, there are still gaps in knowledge of the trends and changes in LULC in Malaysia, and only a handful of studies have modelled future LULC changes. LULC change models are great tools for researchers and professionals to explore the dynamics and drivers that bring about change in LULC (Agarwal et al. 2002). LULC change models are capable of capturing (reproducing) these complex dynamics of LULC change and can be used to extrapolate future land use scenarios (Soesbergen 2016), which can help to inform policies affecting such change. A broad array of models and modelling methods are available to researchers, and each type has certain advantages and disadvantages depending on the objective of the research. There are statistical and empirical models like logistic regression and Markov chain, dynamic models like Cellular Automata (CA), and integrated models (Al-sharif and Pradhan 2014). The Markov chain and CA are the most commonly used methods in LULC change modelling, and many studies use an integration of the CA-Markov method (Hamad et al. 2018; Karimi et al. 2018; Huang et al. 2020; Khawaldah et al. 2020; Mansour et al. 2020).
In recent years, several machine learning algorithms, among them the Random Forest (RF), have been applied to LULC change modelling. The RF algorithm has several advantages over other machine learning methods: it is faster and easier to understand and interpret, the algorithm is completed in a fixed number of operations, it can process large volumes of data, only a small number of parameters need to be adjusted during modelling, and it has higher accuracy compared to other machine learning algorithms (Kamusoko and Gamba 2015; Legdou et al. 2020; Mao et al. 2020).
LULC change is increasing and surpassing climate change as having the most significant impact on environmental change dynamics. Selangor has been experiencing rapid LULC change and urbanization, with the federal territories of Kuala Lumpur and Putrajaya being 100% urbanized. This has therefore had a significant impact on the environmental dynamics in Selangor. The LULC change and urbanization have increased the risk of flooding and air pollution, increased urban temperatures, and resulted in the degradation of the natural ecosystem within Selangor. With Kuala Lumpur and Putrajaya being 100% urbanised, the continuous expansion of these urban centres could affect the surrounding areas and result in urban sprawl.
Therefore, for local decision-makers and urban planners to mitigate the future impacts of the urban expansions and LULC changes, and to implement better land use policies, it is important to know what the future scenarios of LULC change are, determine the spatial changes, and quantify these changes. A few studies have focused on LULC change modelling in Selangor; however, there has not been any study that has analysed the trend and changes of LULC for Kuala Lumpur, Putrajaya, and Selangor as a whole, and what the future scenario could be. Therefore, the objective of this study is to determine what future land-use change scenarios in Selangor are and how the expansion of the urban centres could affect the surrounding areas.
The Decision Forest algorithm, which is a model rarely used in LULC change modelling, has shown to have high accuracy in past studies, and has several advantages over other models. An integrated Decision Forest algorithm and Markov Chain model in the LCM tool of the TerrSet2020 software developed by Clark Labs (Clark Labs 2021) was used in this study. The DF algorithm is a modification of the original RF algorithm by Leo Breiman and Adele Cutler (Breiman 2001;Cutler et al. 2012).
Study area
The state of Selangor, with an area of 8200 km², is located in the western part of Peninsular Malaysia and lies near the equator (Fig. 1). Selangor is one of the 13 states of Malaysia. It is located at latitudes 2°35′-3°60′ N and longitudes 100°45′-102°00′ E on the west coast of Peninsular Malaysia. It is bordered by Perak to the north, Pahang to the east, Negeri Sembilan to the south, and the Strait of Malacca to the west. The state capital is Shah Alam, while Klang serves as the royal capital. It is the most populated state in Malaysia, with a population of about 6.5 million (Department of Statistics Malaysia 2021), and it is highly urbanized.
Data
The study used Landsat 5 TM (1999 and 2006) and Landsat 8 OLI (2017) satellite images obtained from the United States Geological Survey (USGS) website (http://earthexplorer.usgs.gov/). The SRTM digital elevation model was also obtained from the USGS website. Ancillary data like road networks and rivers, used for land use change modelling, were obtained from the OpenStreetMap website (https://www.openstreetmap.org). The slope map was created from the DEM using the slope tool in ArcGIS 10.2.2 software developed by ESRI (2014).
Land cover classification and accuracy assessment
For the land cover classification, ERDAS IMAGINE 2020 software developed by Hexagon Geospatial (2020) was used. First, image mosaicking was carried out to merge the images covering the full extent of the study area; this was followed by a haze reduction process to remove or reduce haze in all the images. For the 2017 image, the panchromatic band of Landsat 8 was used to pan-sharpen the image and improve its spatial resolution from 30 m to 15 m for better interpretation and classification of land cover. The maximum likelihood algorithm under supervised classification, a commonly used classification method, was applied to classify the Landsat images of 1999, 2006, and 2017. The following 5 land cover classes were generated: water, natural vegetation, agriculture, built-up land, and cleared land.
For accuracy assessment, the accuracy assessment tool in ERDAS software was used. The accuracy assessment tool allows the comparison of random sample points in the classified map with reference pixels of known class labels. For each map, 150 sample points were generated using stratified random sampling, and an error matrix was constructed for each land cover map. The reference data were obtained using Google Earth and its historical imagery, as we were unable to obtain onsite ground-truth data. This was then followed by the calculation of the producer accuracy, user accuracy, and overall accuracy.
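For readers who want to reproduce this step, a minimal sketch of how producer, user, and overall accuracy follow from an error matrix is given below. The 5×5 matrix shown is hypothetical: it merely mirrors the 150-point sampling design and the two-sample water class described in the Results, not the paper's actual counts.

```python
import numpy as np

def accuracy_from_error_matrix(error_matrix):
    """Producer, user, and overall accuracy from an error matrix.

    Rows are assumed to hold classified (map) labels and columns the
    reference labels, so row sums drive user accuracy and column sums
    drive producer accuracy.
    """
    m = np.asarray(error_matrix, dtype=float)
    correct = np.diag(m)
    producer = correct / m.sum(axis=0)   # column-wise: reference totals
    user = correct / m.sum(axis=1)       # row-wise: classified totals
    overall = correct.sum() / m.sum()
    return producer, user, overall

# Hypothetical 5-class error matrix (water, vegetation, agriculture,
# built-up, cleared land) built from 150 stratified random points;
# water has only 2 samples, one of them misclassified.
matrix = [[1, 1, 0, 0, 0],
          [1, 54, 3, 0, 1],
          [0, 4, 40, 1, 0],
          [0, 0, 1, 30, 1],
          [0, 1, 0, 1, 10]]
prod, user, overall = accuracy_from_error_matrix(matrix)
print(prod, user, overall)   # water ends up at 50% producer and user accuracy
```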
Land cover change modelling
To simulate a future land cover map, a Decision Forest-Markov chain model is used. The Land Change Modeller (LCM) in TerrSet2020 software developed by Clark Labs (Clark Labs 2021) is used for the modelling. The LCM is based on historical land cover data, transition potential maps, and Markov matrices to simulate future land cover change. The LCM consists of 3 main steps: change analysis, transition potential modelling, and change prediction.
Change analysis
The change analysis step calculates the nature and extent of land cover change between two land cover maps at time 1 and time 2. The changes that are identified are transitions from one land cover state to another. The change analysis evaluates gains and losses, detects net gains, and creates change maps.
Transition potential modelling
In this step, the potential of land to transition is identified, and transition potential maps for each transition are created. The transition potential maps that have the same underlying driver variables are grouped within an empirically evaluated transition sub-model. A transition sub-model can consist of a single land cover transition or a group of transitions that are thought to have the same underlying driver variables. These driver variables are used to model the historical change process.
The driver variables used in this study are: distance to rivers, distance to roads, distance to urban area, DEM, and slope (Figs. 2, 3). The driver variables were selected based on the literature review (Camara et al. 2020;Rafaai et al. 2020) and the author's knowledge of the study area. The transition potential maps are created using the Decision Forest algorithm, which is an implementation of the Random Forest method.
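A minimal sketch of this step is given below, assuming scikit-learn's RandomForestClassifier as a stand-in for TerrSet's Decision Forest implementation. The raster values here are randomly generated placeholders; the real workflow would read the five driver rasters and the observed transition map instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical driver-variable rasters flattened to 1D arrays of equal
# length (one value per pixel).
n_pixels = 10000
rng = np.random.default_rng(0)
drivers = np.column_stack([
    rng.random(n_pixels),  # distance to rivers
    rng.random(n_pixels),  # distance to roads
    rng.random(n_pixels),  # distance to urban area
    rng.random(n_pixels),  # DEM (elevation)
    rng.random(n_pixels),  # slope
])
# 1 where the pixel transitioned (e.g. vegetation -> built-up) between
# the two historical maps, 0 otherwise.
transitioned = (rng.random(n_pixels) < 0.1).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(drivers, transitioned)

# Per-pixel transition potential: probability of the "change" class,
# reshaped back to the raster grid when needed.
potential = rf.predict_proba(drivers)[:, 1]
```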
Change prediction
In the final step, the historical change of rates calculated in the change analysis step and the transition potential maps, are used to predict a future scenario for a specified future date. The Markov Chain determines the amount of change using the earlier and later land cover maps along with the date specified. The procedure determines exactly how much land would be expected to transition from the later date to the prediction date based on a projection of the transition potentials into the future and creates a file of transition probabilities. The file of transition probabilities is a matrix that records the probability that each land cover category will change to every other category.
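The sketch below illustrates the Markov-chain step under simplifying assumptions: a transition matrix is cross-tabulated from two class-coded maps and applied once to project class areas. The maps are random placeholders, and TerrSet additionally rescales the probabilities to the exact prediction date, which this sketch omits.

```python
import numpy as np

def transition_matrix(lc_t1, lc_t2, n_classes):
    """Row-normalized Markov transition matrix between two maps."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(lc_t1.ravel(), lc_t2.ravel()):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical class-coded maps: 0 water, 1 vegetation, 2 agriculture,
# 3 built-up, 4 cleared land.
rng = np.random.default_rng(1)
lc_2006 = rng.integers(0, 5, size=(100, 100))
lc_2017 = rng.integers(0, 5, size=(100, 100))

P = transition_matrix(lc_2006, lc_2017, n_classes=5)

# Expected class areas (pixel counts) projected one calibration
# interval ahead, i.e. roughly toward 2028 here since the matrix was
# calibrated on the 11-year span 2006-2017.
area_2017 = np.bincount(lc_2017.ravel(), minlength=5)
area_projected = area_2017 @ P
```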
In this study, the 1999 and 2006 land cover maps are first used for the calibration and validation stages of the model, and a land cover map of 2017 is simulated and validated against a reference land cover map of 2017. In the next step, a future land cover map of 2030 is simulated using the 2006 and 2017 maps. The gains and losses and net gains for the periods 1999-2006 and 2006-2017 from the change analysis step are shown in Fig. 4.
Validation
For the validation of the model, the area under the curve (AUC) of the Total Operating Characteristic (TOC) and the 3-map comparison cross-tabulation method, which measures components of agreement and disagreement, are used. The TOC method indicates how well the model predicts change, while the 3-map cross-tabulation matrix provides detailed information on the accuracy of predicted change and persistence of each land cover class. The 3-map cross-tabulation method uses a reference map of time 1, a reference map of time 2, and a simulated map of time 2 to create the cross-tabulation matrix. In this study, reference map 2006 (t1), reference map 2017 (t2), and simulated map 2017 (t2) are used. There are 2 components of agreement, called Hits and Correct rejections, and 3 components of disagreement, called Misses, False alarms, and Wrong hits. These metrics of agreement and disagreement are recommended by Pontius and Millones (2011) as an alternative to Kappa statistics, since Kappa indices attempt to compare accuracy to a baseline of randomness, but randomness is not a reasonable alternative for map construction; hence Kappa statistics can give an illusion of high accuracy.
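A minimal sketch of the 3-map cross-tabulation is shown below; the component definitions follow the Pontius-style descriptions above, with all three maps assumed to be integer-coded NumPy arrays of identical shape.

```python
import numpy as np

def three_map_components(ref_t1, ref_t2, sim_t2):
    """Percentage of pixels in each agreement/disagreement component."""
    ref_change = ref_t1 != ref_t2          # observed change
    sim_change = ref_t1 != sim_t2          # simulated change
    hits = ref_change & sim_change & (sim_t2 == ref_t2)
    wrong_hits = ref_change & sim_change & (sim_t2 != ref_t2)
    misses = ref_change & ~sim_change
    false_alarms = ~ref_change & sim_change
    correct_rejections = ~ref_change & ~sim_change
    n = ref_t1.size
    return {name: 100.0 * comp.sum() / n for name, comp in [
        ("hits", hits), ("wrong hits", wrong_hits), ("misses", misses),
        ("false alarms", false_alarms),
        ("correct rejections", correct_rejections)]}
```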
Land cover classification and accuracy assessment
The spatio-temporal land cover classification maps of 1999, 2006, and 2017 are shown in Fig. 5. In the period 1999 to 2017, built-up land increased by 608.8 km² (7.5%), agricultural land increased by 285.5 km² (3.5%), and water bodies increased by 21.1 km² (0.3%), whereas natural vegetation decreased by 831.8 km² (10.2%) and cleared land decreased by 83.7 km² (1%) (Table 1). The majority of the urban expansion took place in central Selangor, where the federal territories of Kuala Lumpur and Putrajaya are located. The agricultural expansion from 1999 to 2006 took place mostly in the north of Selangor.
The overall accuracy for the land cover map 1999, 2006, and 2017 are 84%, 92.74%, and 88.67% respectively, and the producer accuracy and user accuracy are shown in Table 2.
For the 1999 and 2017 land cover maps, the water class has a low producer and user accuracy of 50%. In the accuracy assessment of both maps, there were only 2 random sample points for water, and in both cases one sample point was misclassified as another class, hence the 50% producer and user accuracy, as shown in the error matrix in Table 3.
Land cover change modelling and validation
The simulated land cover map of 2030 is shown in Fig. 6. The future simulation of land cover change shows an increase of 723 km² (8.9%) in built-up land and an increase of 57.2 km² (0.7%) in agricultural land from 2017 to 2030; on the other hand, natural vegetation decreases by 663.9 km² (8.1%) (Table 4).
Model validation
The area under the curve (AUC) of the Total Operating Characteristic (TOC) is shown in Fig. 7. The validation had an AUC of 0.84.
The hits, misses, false alarms, wrong hits, and correct rejections of the cross-tabulation matrix are shown in Fig. 8. The total components of agreement were 71.1% which is the sum of Hits (3.1%) and Correct rejections (68%), and the total components of disagreement were 28.9% which is the sum of Misses (14.9%), False alarm (9.4%) and Wrong hits (4.7%).
Discussion
This study developed a DF-MC model to analyse the trend in land cover and to estimate future change in Selangor. The trend in land cover change for the period 1999 to 2017 shows an increase of 608.8 km² (7.5%) in built-up area. The majority of the built-up areas are located in central Selangor, where the highly populated federal territories of Kuala Lumpur and Putrajaya are located. This increase in built-up areas in Selangor can mostly be attributed to the expansion of Kuala Lumpur, Putrajaya, and the surrounding Klang Valley, as previous studies have shown these cities have expanded and resulted in urban sprawl (Rosni et al. 2016; Almdhun et al. 2018). The AUC value of 0.84 and the higher percentage of total components of agreement (71.1%) compared to total components of disagreement (28.9%) show that the model is adequately calibrated and validated and is suitable for simulating future land use maps. This is comparable with the study by Samardžić-Petrović et al. (2015), which reported a similar AUC value and accuracy, indicating that the DF-MC is a suitable model for land cover change modelling.
The projected 2030 land cover map shows this urban expansion continuing, with the model estimating that built-up land cover will increase by 723 km² (8.9%) in the period 2017 to 2030 (Fig. 9). The model shows the urban expansion taking place mostly at the edges of existing urban boundaries and road networks, an indication that distance to roads and distance to existing urban areas are the most impactful variables on urban expansion.
The increase of transportation networks has played a major role in the expansion of urban areas in Malaysia. For example, in George Town, Northern Malaysia, urban land is estimated to expand from 925.77 to 1253.95 km² by 2030, with the North-South highway and the second bridge between George Town and Penang Island playing a major role in this expansion (Samat et al. 2020). Similarly, a study in the northern part of Selangor shows an estimated urban expansion of about 33% from 2015 to 2033, with distance to roads and distance to built-up areas being the major variables (Camara et al. 2020).
Similar to built-up areas, agricultural land increased by 285.5 km² (3.5%) from 1999 to 2017, and the model estimated that agricultural land will increase by 57.2 km² (0.7%) from 2017 to 2030. The increase of built-up and agricultural land has, in turn, resulted in a decrease in natural vegetation of 831.8 km² (10.2%) from 1999 to 2017, where natural vegetation is converted into built-up and agricultural land cover. The hotspots of this agricultural land use change are the peat swamps in the northeast of Selangor and the coastal regions, where distance to transportation infrastructure plays a major role: as distance to transport increases, agricultural land decreases (Olaniyi et al. 2015). The model estimates a loss of 663.9 km² (8.1%) of natural vegetation from 2017 to 2030. Water and cleared land both had very minimal changes, showing that the majority of the land for urban development and agriculture comes from natural vegetation as the population and economy of Selangor continue to grow.
The rapid increase in urbanization and uncontrolled urban expansion can have a negative impact on the environment. Selangor, and in particular the cities of Kuala Lumpur and Putrajaya, has experienced regular flooding events due to urban development (Bhuiyan et al. 2018; Ravindran and Rajendra 2020). As the demand for urban and agricultural land increases, more of the protected areas and urban green spaces will be used to meet these demands. The model shows that urban and agricultural expansion by 2030 will result in further loss of natural vegetation and urban green spaces, and that the expansion of the major urban centres will affect their surrounding areas. Therefore, the results of this study can help local policymakers and urban planners to visualise and compare the impacts of future land cover change, which can support the formulation and implementation of better land use policies and structural plans that limit or prevent urban sprawl and uncontrolled urban expansion into surrounding areas. The future land cover change projection can also help local governments to mitigate the effects of flooding and develop a more resilient structural plan. The results of this study show that, at the current rate of development and urbanization, large parts of natural vegetation will be lost by 2030. The findings can therefore help the Selangor state government, policymakers, urban planners, and other stakeholders to better understand and manage future development in Selangor and to achieve the objectives of the Selangor State Structural Plan 2035, one of which is to maintain and preserve 32% of forest areas. Furthermore, the results of this study can be incorporated into other studies, such as climate and hydrological studies, where the effects of land cover change and increasing urbanization on temperature and flood risk can be examined.
There are some limitations in using the DF-MC model. The Markov chain is essentially a projection model and is not policy sensitive, making it difficult to include policy variables such as socioeconomic and population variables in the modelling (Iacono et al. 2015). Overcoming this limitation would require integrating the MC model with other models that can incorporate socioeconomic variables (Hamad et al. 2018). Moreover, the Decision Forest is a complex model that requires high computational power and is more time-consuming. The complexity of the model can be reduced by reducing the number of decision trees and lowering the number of variables used; it is therefore advisable to carry out a variable selection process during modelling (Samardžić-Petrović et al. 2015). Overall, the model is capable of estimating future land cover change and can be used in future studies.
"year": 2022,
"sha1": "9fab74114a2dd9eeed895b133a9e1b4e35391715",
"oa_license": "CCBY",
"oa_url": "https://ecologicalprocesses.springeropen.com/track/pdf/10.1186/s13717-021-00350-0",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "9fab74114a2dd9eeed895b133a9e1b4e35391715",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
Detecting Danger: AI-Enabled Road Crack Detection for Autonomous Vehicles
The present article proposes the deep learning technique termed "Faster Region-based Convolutional Neural Network" (Faster-RCNN) to detect cracks on roads for autonomous cars. Feature extraction, pre-processing, and classification techniques have been used in this study. Several types of image datasets, such as camera images, laser images, and real-time images, have been considered. With the help of a GPU (graphics processing unit), the input image is processed; thus, the density of the road is measured and information regarding the classification of road cracks is acquired. This model aims to detect road cracks more precisely than existing techniques.
INTRODUCTION
Nowadays, to ensure driving safety, road crack detection techniques have become very popular in the transportation industry. Road cracking occurs due to heavy rainfall, transportation, weather changes, cyclones, heavy vehicles, etc. For detecting road cracks, a deep-learning-based approach is introduced, which uses computer vision to identify each image from the collected dataset; this paper employs a Faster Region-based Convolutional Neural Network. Road cracks are a significant problem for autonomous driving. With the help of Faster-RCNN, images can be classified and road cracks can be detected. The increase in the number of accidents in our country makes it necessary to develop a road crack detection system. Realistically speaking, transportation is the main cause of road cracking. It causes traffic jams, noise and air pollution, unwanted fuel waste, and wastage of time. Hence, a well-maintained road transportation system benefits a country's economy. Many studies based on Faster-RCNN for road crack detection have been conducted, and they are discussed in this paper. The purpose of this paper is to develop a highly sophisticated road crack detection system in the context of Faster-RCNN. In the modern industrialized world, the number of heavy vehicles is increasing, which is a reason behind road cracks. In Indian cities, such as Hyderabad and Mumbai, as the demands for the optimization and efficient operation of social systems rise, many awareness programs have been conducted for efficient and safe driving. This includes more sophisticated transportation systems, fewer accidents, and reduced fuel consumption with the help of modern technology in the transportation field. A road crack detection system can reduce road accidents due to road cracks; consequently, transportation will become easier. This research aims to propose a more advanced technique that can detect damage on the road efficiently. However, existing approaches have several drawbacks, including high costs and the fact that dedicated machines are required for all geographical areas; they are therefore only an optional solution for reducing road crack issues. The proposed methodology reliably identifies cracks on highways with the aid of an algorithm for road division, with pre-processing and feature extraction techniques used to generate candidates.
The primary goal of this work is to analyse road crack detection methods, which include image processing with a GPU and a camera. In the proposed work, input images are extracted from a collection of normal and cracked road images. Afterward, it is decided whether there is a road crack or not based on the prediction and learning rate. The images are converted from RGB (red-green-blue) to grey-scale, which allows the system to process the digital data. This technique is very versatile, accurate, and cost-effective. This paper presents an image pre-processing technique for road crack detection focusing on the Faster Region-based Convolutional Neural Network method. Figure 1 shows the flow of the considered road crack detection model. At first, the normal and cracked road images are collected. Pre-processing of those images is then conducted to remove irrelevant data, so that all images are of enhanced quality and the same size. After training the model, image classification is carried out to determine whether there is a crack or not, and the result provides better accuracy. The paper concludes with a brief analysis of future directions.
LITERATURE REVIEW
Mei et al. [1] presented a method that takes pixel connectivity into account and has the potential to supplement the costly, inefficient, and time-consuming optical inspection practice currently in use. Transposed convolution layers are employed for multiple properties, convolutional layers are tightly coupled in a feed-forward manner to remodel the properties from several layers, and a novel loss function that takes pixel connectivity into account is proposed; however, the approach is comparatively costly to implement. Dung et al. [2] demonstrated a crack identification method based on semantic segmentation of concrete defect images using deep fully convolutional networks (FCN); the encoder was assessed for image classification using the pre-trained VGG-16 model, and potholes were reasonably well recorded by the suggested method.
Li et al. [3] used traditional computer vision and a deep-learning-based convolutional neural network. The authors proposed pyramid pooling to improve the accuracy of lane detection and to address the detection problem. A feature map was used for binary output, and a clustering technique was used to separate the pixels of the images and classify the output. Bang et al. [4] suggested a methodology that is reliable for detecting road lane markings; 2D images help measure and evaluate pavement distress along roads, such as pavement crack detection and classification and rutting measurement.
Bello et al. [5] observed cracks in pavement and road bumps. Their paper surveys the body of work in the field of Vehicular Ad-hoc Network Technology (VANET), focusing on pothole or road defect identification using image processing techniques, a paradigm that has recently attracted interest in this field. It discusses the advantages and disadvantages of specific image processing techniques and particular areas for development as part of an ongoing study; the primary goal is to provide an overview of this developing field of image processing applications. Future work aims to increase the effectiveness and performance of the image processing and road defect identification approach, and future research will consider these elements during the design phase; after the study, the ad-hoc-network-based approach was found to need further adaptation for detection. In another work, a dataset of 1500 images of Indian highways was developed; the YOLO (You Only Look Once) algorithm was used to train on the dataset, and in the future the authors plan to use a Raspberry Pi equipped with a camera to implement the system in real time on a car dashboard. Additionally, the system may be integrated with GPS to track the location of detected road cracks. Using YOLO, they achieved 78 percent accuracy [6].
Dharneeshkar et al. [7] used a Convolutional Neural Network (CNN) and 3D asphalt surface data; investigations using test 3D images show that CrackNet can simultaneously achieve a precision of 90.13 percent, recall of 87.63 percent, and F-measure of 88.86 percent. GPR analysis and sweep rectification were used for crack detection, and the proposed demonstration, drawn from a sizable sample, displayed high repeatability, proving that the methodology can be regarded as competent to assess damaged roads with cut-and-fill portions [8]. The commercial solutions covered in the evaluation were also subjected to a gap analysis, which determined that considerably more research is required for cost-effective implementation, particularly regarding the distresses associated with pavement microtexture; however, there is always the possibility that new methods will result in gains in both accuracy and efficiency [7].
The authors of [9] concluded that there are two families of commonly used algorithms, based on minimal-cost path analysis and image percolation, and drew attention to their drawbacks in this situation. They additionally provide an enhanced method based on an a-contrario model that can resist large motion blur without the several thresholds that are often needed to deal with different crack appearances and levels of degradation; the approach eliminates the requirement to define several thresholds to identify crack segments and reconnects them under various scenarios of image and structure (road or concrete) degradation [9]. Iyer et al. [10] described a three-step process for extracting crack-like patterns from pipe photos with increased contrast. The suggested approach uses curvature evaluation and mathematical morphology to find crack-like features in noisy environments. Careful examination reveals that the cracks typically have a tree-like geometry, which can be used as a feature to help register photos of the same location acquired at different depths along the thickness of the buried pipe (3D visualization); finally, alternating filters are used to create the final segmented binary crack map. Another work presents a low-cost method for road tunnel inspection based on a straightforward approach that can be applied to tunnels with regular traffic flow and does not necessitate extensive preparation work; it proposes the design, creation, and testing of a low-cost prototype for automatic crack identification on surfaces inside road tunnels. The initial findings are essentially encouraging, and a test accuracy of 94.5 percent was achieved with a small dataset and a GPU [11].
Guo et al. [12] proposed a study that compares the perceptions of occasional drivers (ODs) with those of frequent drivers (FDs) to examine how consumers perceive autonomous driving. Content analysis was used, and respondents' comments were divided into thematic groups; the topics were organized using the core-periphery paradigm, and ten topics were used to group respondents' understanding of autonomous driving. There were notable variances between ODs and FDs in the subjects and their connections. In [13], to maintain vehicle safety in these situations, an Advanced Driver Assistance System (ADAS) is required to evaluate the driving space and warn of impending road cracks; the focus is a novel framework to detect road cracks [13].
Rastogi et al. [14] demonstrated a road crack detection technique based on mobile sensing; to obtain pothole information, the accelerometer data are normalized using Euler angle computation and used in the pothole identification algorithm. Additionally, a spatial interpolation technique is employed to minimize location errors in GPS (Global Positioning System) data. Findings from trials demonstrate that the suggested approach performs with higher accuracy and can precisely detect road cracks without producing false-positive results [15]. As a result, the suggested real-time pothole detection method can increase ITS (Intelligent Transportation System) traffic safety. Another work trains a pothole detector using recent VR (virtual reality) technology and creates a road crack simulation system that can produce holes of different depths, widths, and shapes; the training dataset is expanded with the virtual pothole images, and the detector's performance is assessed using actual data [16].
Fan et al. [17] described road crack detection based on estimation and segmentation of the road disparity map. To generalize the perspective transformation, the stereo rig roll angle is first incorporated into the shifting distance computation. After that, semi-global matching is used to efficiently estimate the road disparities, and the proposal is implemented using CUDA on an NVIDIA RTX 2080 Ti GPU. The experimental outcomes show the state-of-the-art accuracy and effectiveness of the suggested road pothole identification system. Dhital et al. [18] described crack detection as the process of detecting cracks in a structure due to heavy snowfall, poor drainage, or heavy-weighted vehicles; for that purpose, pre-processing techniques are used with the CNN algorithm and related models such as ResNet, VGG-16, and VGG-19. Kheradmandi and Mehranfar [19] conducted a literature review to establish the development and interpretation of previous studies. They concentrated heavily on the three major approaches in the field of image segmentation, namely thresholding-based, edge-based, and data-driven methods. This research compared and analysed different image segmentation methods, giving researchers working on improved segmentation strategies useful information that could eventually lead to a fully automated distress identification process for pavement photographs under varied settings.
Previously, many techniques, such as ground-penetrating radar (GPR), the Internet of Things (IoT), Vehicular Ad-hoc Network Technology (VANET), a-contrario models, and YOLO with GPS, were used for detecting road cracks, and they were also costly. This paper aims to develop a robust and adaptable model that could be easily implemented on any hardware. We studied the most effective algorithms for building robust models and adopted the Faster-RCNN deep learning technique, which is both affordable and accurate when used for road crack detection in autonomous vehicles [20].
PROPOSED WORK
Traditional CNNs extract only the highest-valued features from the max-pooling layers to produce the output; however, the exact positions of the cells with the highest values are sometimes lost in max-pooling. We propose a road crack detection method that is flexible, reliable, and cost-effective, with object detection based on Faster-RCNN. In this paper, we present an image pre-processing technique that focuses on the Faster Region-based Convolutional Neural Network. Initially, we collect datasets of normal road images and cracked road images; we then pre-process the images so that all images are the same size. The proposed work identifies cracks on the road and examines how accuracy can be improved by a depth image surface. The prediction is based on a deep learning concept subject to Faster-RCNN. It comprises a convolutional neural network that captures spatial information from the images and a surface-encoding layer for recognition; segmentation is used as a tool for delineating boundaries and separating an image, from a cognitive perspective, into meaningful units.
Methodology
This research aims to review road crack detection methods, in which we take images with a camera and monitor road cracks.
The steps to implement the model are as follows:
Data Collection: The images are selected from open-source Kaggle datasets and gathered to train the CNN. Such image datasets consist of different kinds of images, e.g., camera images and laser images.
Image Pre-processing: The architectural diagram above consists of several phases, which are described in the flowchart of road detection and separation for self-driving vehicles.
Data Augmentation: In order to accomplish balanced training, a patch is created for each sample image by utilizing sampling procedures, for example a random rotation at an arbitrary angle between 0° and 360° and controlled overlap between two positive patches. In the image labeling step, an image with a crack is marked as True (1) and, correspondingly, an image without a crack as False (0); a minimal sketch of this step is given below.
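The sketch below illustrates the random-rotation augmentation and binary labeling described above, assuming SciPy's ndimage module; the 64×64 patch size and the zero-filled placeholder patch are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_patch(patch, rng):
    """Rotate an image patch by a random angle in [0, 360) degrees."""
    angle = rng.uniform(0.0, 360.0)
    return rotate(patch, angle, reshape=False, mode="nearest")

# Hypothetical labeling step: crack patches -> 1, normal road -> 0.
rng = np.random.default_rng(42)
crack_patch = np.zeros((64, 64))   # placeholder for a real image patch
augmented = augment_patch(crack_patch, rng)
label = 1                          # 1 = crack present, 0 = no crack
```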
Fine Tuning and Normalization: The photos are normalized after converting them into an n-dimensional array, and the range of pixel-intensity values is altered. Example applications include photos with low contrast due to glare. Normalization is often referred to as histogram stretching or contrast stretching; in more general data-processing disciplines, like digital signal processing, it is called dynamic range expansion. A sketch of this step follows below.
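The sketch below shows a plain min-max contrast stretch of the kind described above; the output range [0, 1] is an assumption, and a real pipeline might stretch to [0, 255] instead.

```python
import numpy as np

def contrast_stretch(img, out_min=0.0, out_max=1.0):
    """Min-max normalization (histogram/contrast stretching) of pixel
    intensities into a fixed output range."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:                     # flat image: avoid division by zero
        return np.full_like(img, out_min)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min
```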
Class Balancing: To check the classes, predict the output, and train a model, we need balanced classes. If the classes are not balanced, we must first employ a class-balancing strategy. Therefore, class balancing is explained in this article, and its implementation in Python is exhibited (see the sketch below).
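As a hedged illustration of one simple class-balancing strategy, the sketch below oversamples the minority class by random duplication; libraries such as imbalanced-learn offer more sophisticated options, and the exact strategy used in this work is not specified.

```python
import numpy as np

def random_oversample(images, labels, rng):
    """Duplicate minority-class samples until both classes are equal.

    images: NumPy array of samples; labels: binary array in {0, 1}.
    """
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    minority = counts.argmin()
    deficit = counts.max() - counts.min()
    idx = np.flatnonzero(labels == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    keep = np.concatenate([np.arange(len(labels)), extra])
    return images[keep], labels[keep]

# With the 300 normal / 700 crack split mentioned in the results, this
# would duplicate normal-road images until both classes hold 700 samples.
```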
Model Training: The model architecture is created, then compiled and fitted.
Prediction: A supervised learning-based classification procedure is utilized to sort the pictures into two classes, images with cracks and images without cracks, for autonomous driving cars. To apply the proposed approach to real-time videos and locate road cracks during the test process, this project collects frames from the videos, as sketched below.
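A rough sketch of frame-by-frame inference is given below, assuming torchvision's pretrained Faster-RCNN, which would need fine-tuning on the crack dataset before its class labels become meaningful; the video filename and the 0.5 score threshold are hypothetical, and the weights argument varies across torchvision versions.

```python
import cv2
import torch
import torchvision

# A generic pretrained Faster-RCNN; in practice it would be fine-tuned
# on the crack/no-crack dataset described above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("dashcam.mp4")      # hypothetical test video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]          # dict with boxes/labels/scores
    cracks = pred["boxes"][pred["scores"] > 0.5]   # confident detections
cap.release()
```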
RESULT DISCUSSION
In this section, we discuss the classification results obtained using the Faster-RCNN algorithm to predict cracks on the road and elaborate on the outcome of the proposed work.
Deep-learning-based outcomes
We used several different algorithms in order to get the best possible result from the categorization. Since we only had a limited amount of data to work with, we began with the smaller dataset: initially, we captured 100 images of normal roads and 300 images of cracked roads. After confirming the reliability of the results, we expanded the image dataset to 700 images of cracks and 300 images of typical road conditions. After adding more images to the dataset, the accuracy reached its highest level.
Comparative results of classification
Several different algorithms are discussed and described in Table 1. In addition, the accuracy of each model is evaluated and used to make predictions. It was found that the proposed F-RCNN-based method produces higher precision compared with the YOLO, RCNN, and CNN models [14], although they address the same problem.
Analysis and Validation
First Analysis: This work was performed with varying numbers of epochs, and different results were obtained. Initially, we trained for 15 epochs and observed a model accuracy of 0.85 and a loss of 0.38 (see Fig. 2). Second Analysis: We then trained for 25 epochs and obtained an accuracy of 0.89 and a loss of 0.34, still below 0.9; the model achieves very high accuracy at 25 epochs, so its accuracy increased (Fig. 3).
Fig. 3. Classification performance of the loss and accuracy functions (second analysis).
Third Analysis: We then trained for 40 epochs and obtained an accuracy of 0.92 and a loss of 0.22, as shown below (Fig. 4).
CONCLUSIONS
This research has provided a novel and effective method that makes use of artificial intelligence (AI) for detecting road cracks for autonomous vehicles. The suggested system displays a high level of accuracy and reliability in spotting road cracks by utilizing advanced computer vision techniques and machine learning algorithms; as a result, autonomous driving systems will become safer and more efficient. The incorporation of AI-enabled road crack detection is anticipated to offer tremendous potential for preventing accidents, reducing infrastructure maintenance expenses, and improving the overall transportation infrastructure. More research and development in this area is necessary to improve the algorithms, increase the efficiency of the computational resources, and tackle real-world problems. The continuous development of AI-based road crack detection has the potential to play a significant role in enabling autonomous vehicles to operate more safely and dependably, ultimately having a transformative effect on the future of transportation.
Fig. 1. An architecture diagram of the overall workflow of this work.
Fig. 2. Classification performance of the loss and accuracy functions for the first analysis (15 epochs).
Table 1. Comparison among the models. From the table, it is shown that our proposed model gave the best result.
"year": 2023,
"sha1": "c26f329f9601ff0d362d8884c2efee2a6c3c36ba",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/67/e3sconf_icmpc2023_01160.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce203849a48d2d975456c1ea8dc1ae32b9141bef",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Feature optimization based on improved novel global harmony search algorithm for motor imagery electroencephalogram classification
Background: Effectively decoding electroencephalogram (EEG) patterns for specific mental tasks is a crucial topic in the development of brain-computer interfaces (BCI). Extracting common spatial pattern (CSP) features from motor imagery EEG signals is often highly dependent on the selection of frequency band and time interval. Therefore, optimizing the frequency band and time interval would contribute to effective feature extraction and accurate EEG decoding.
Objective: This study proposes an approach based on an improved novel global harmony search (INGHS) algorithm to optimize frequency-time parameters for effective CSP feature extraction.
Methods: The INGHS algorithm is applied to find the optimal frequency band and temporal interval. Linear discriminant analysis and support vector machines are used for EEG pattern decoding. Extensive experimental studies are conducted on three EEG datasets to assess the effectiveness of the proposed method.
Results: The average test accuracy obtained with the time-frequency parameters selected by the proposed INGHS method is slightly better than that of the artificial bee colony (ABC) and particle swarm optimization (PSO) algorithms. Furthermore, the INGHS algorithm is superior to PSO and ABC in running time.
Conclusion: These experimental results demonstrate that the optimal frequency band and time interval selected by the INGHS algorithm can significantly improve decoding accuracy compared with the traditional CSP method. This method has the potential to improve the performance of MI-based BCI systems.
Introduction

A brain-computer interface (BCI) system is utilized to sense and transform the electroencephalogram (EEG) signal from the scalp into commands to control external devices and help users accomplish tasks (Wolpaw, 2002; Cervera et al., 2018; Lazarou et al., 2018; Xu et al., 2018, 2021; Mudgal et al., 2020; Rashid et al., 2020). The EEG is commonly used for brain analysis (Nicolas-Alonso and Gomez-Gil, 2012). A large number of researchers pay attention to research on BCIs based on motor imagery (MI). The mechanism of an EEG-based MI-BCI is that the subject can autonomously modulate the sensorimotor rhythm (SMR) by performing the MI task (Pfurtscheller et al., 1993, 2006). The SMR is characterized by power changes in specific frequency bands (8-30 Hz) over the sensorimotor cortex. The modulation of SMR generates contralaterally preponderant event-related desynchronization (ERD) and synchronization (ERS), which are short-lasting attenuations and enhancements of the SMR. It is generally accepted that ERD/ERS occurs in different spatial-frequency-temporal regions when different subjects execute an MI task, causing difficulty in extracting effective features (Hamedi et al., 2016; Li et al., 2019; Jiao et al., 2020). A standard BCI system comprises a signal acquisition unit, a signal processing unit, a controlling unit, and an application or feedback unit. The signal processing unit further includes three parts, namely, preprocessing, feature extraction, and feature classification. An effective feature extraction method is very important for the recognition of MI intention (Rasheed, 2021). Various feature extraction techniques are used for MI EEG, such as Principal Component Analysis (PCA) (Mirzaei and Ghasemi, 2021), Wavelet Transform (WT) (Sreeja et al., 2017), Fast Fourier Transform (FFT) (Chaudhary et al., 2019), and Common Spatial Pattern (CSP) (Miao et al., 2017b). Currently, CSP is one of the most popular feature extraction methods, as it can effectively extract the spatial information of ERD/ERS (Siuly and Li, 2015). However, due to the nonstationarity of EEG and inherent defects of the CSP objective function, the spatial filters and their corresponding features are not necessarily optimal in the feature space used within CSP. On the one hand, an internal feature selection method for CSP based on the L1-norm and Dempster-Shafer theory was proposed, resulting in a significant increase in the performance of MI-based BCI systems (Jin et al., 2021). On the other hand, the selection of frequency band and time interval has a great influence on the CSP features. Under the same experimental paradigm, the most reactive frequency band and responsive time interval differ across subjects performing MI (Ramoser and Muller-Gerking, 2000). It has been demonstrated that the classification performance of a BCI system can be enhanced through the selection of a discriminable frequency band, the most discriminative time interval, and high-separability channels for a specific participant (Ince et al., 2009).
The main studies on the selection of frequency band and temporal interval focus on the following aspects. (1) Frequency band optimization: the sub-band common spatial pattern (SBCSP) was reported (Quadrianto et al., 2007). A mutual information-based feature selection method was employed to select distinguishable pairs of frequency bands in the filter bank common spatial pattern (FBCSP) algorithm, which yields superior classification performance compared with CSP and SBCSP (Kai et al., 2008). The discriminative filter bank common spatial pattern (DFBCSP) was reported to extract the optimal frequency band by means of the Fisher ratio and achieved better classification accuracy (Thomas et al., 2009). A sparse filter band common spatial pattern (SFBCSP) was introduced to optimize the frequency domain (Zhang et al., 2015). (2) Temporal domain optimization: the novel correlation-based time window selection (CTWS) algorithm was applied to MI-based BCIs, and the results indicate that, compared to the classical CSP method, the CTWS algorithm significantly enhanced the average classification accuracy of healthy participants and stroke survivors (Feng et al., 2018). (3) Frequency-temporal optimization: a frequency-time synthesis optimization method for the MI-based BCI system was reported to adapt to individual differences (Tao et al., 2004). The local discriminant bases algorithm was proposed to find the starting time of the ERD/ERS in the sub-bands of the EEG (Ince et al., 2007). A Fisher discriminant analysis-type F-score approach was developed to simultaneously optimize the frequency-time domain for multiclass classification (Yang et al., 2017).
All these works have demonstrated that optimizing the frequency band or time interval can contribute to better classification results. However, most of the studies aim to find the optimal time-frequency parameters among multiple sub-bands and time intervals with the same bandwidth and time window length. A fixed bandwidth and time window length does not account for individual variability. Furthermore, although most of the proposed algorithms can automatically optimize the frequency band and time interval, the two selections are performed independently of each other. Since the optimal CSP features are determined by the mutual influence of both frequency and time parameters, such a sequential selection procedure is not optimal for finding the best frequency band and time interval. In essence, it might be the best choice to select the frequency-temporal parameters simultaneously in the optimization process, so that the CSP features obtained with the optimal frequency-time parameters can enhance the classification performance of MI-based BCI systems. Recently, some meta-heuristic algorithms have been introduced to optimize frequency-temporal parameters. The particle swarm optimization (PSO) algorithm was utilized to select the optimal frequency and time parameters to extract effective CSP features (Xu et al., 2014). The artificial bee colony (ABC) algorithm was proposed to solve the frequency-temporal optimization problem (Miao et al., 2017a). However, most of these algorithms require complex operations when creating an offspring. Moreover, these algorithms have many parameters to tune and need a relatively long run time to find the global optimal solution.
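Whatever metaheuristic drives the search, its core is a fitness evaluation of one candidate (f1, f2, t1, t2) tuple. A minimal sketch of that evaluation is given below, assuming MNE-Python's CSP implementation and scikit-learn's LDA; the filter order, number of CSP components, and fold count are illustrative assumptions rather than the settings used in this work.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def fitness(params, epochs, labels, fs=100):
    """Cross-validated accuracy for one candidate (f1, f2, t1, t2).

    epochs: array (n_trials, n_channels, n_samples); params hold the
    band edges in Hz and the window edges in seconds.
    """
    f1, f2, t1, t2 = params
    b, a = butter(5, [f1, f2], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    s, e = int(t1 * fs), int(t2 * fs)
    windowed = filtered[:, :, s:e]
    clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
    return cross_val_score(clf, windowed, labels, cv=5).mean()
```

An INGHS, PSO, or ABC loop would then repeatedly propose new parameter tuples and keep the one that maximizes this fitness.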
The harmony search (HS) algorithm was first proposed in 2001 (Geem et al., 2001). Since then, HS and its variants have been reported and widely applied to various optimization problems (Mahdavi et al., 2007; Omran and Mahdavi, 2008; Zou et al., 2010). An improved novel global harmony search algorithm (INGHS) was proposed by Ouyang et al. (2015), and the results indicate that the INGHS algorithm performs better than the PSO and ABC algorithms in solving the reliability optimization problem. The INGHS algorithm has been successfully applied to data clustering and engineering design optimization problems (Ouyang et al., 2018; Talaei et al., 2020). To sum up, the iterative updating principle of INGHS is simpler than those of PSO and ABC, with faster convergence and better performance. The PSO and ABC algorithms can find good time-frequency parameters in BCI applications, but their time cost is high. Therefore, the INGHS algorithm is introduced in this article to solve the combined frequency-time optimization problem for more accurate MI-related EEG classification. Extracting CSP features from MI EEG signals is often highly dependent on the selection of frequency band and time interval, and CSP features obtained with a fixed frequency band and time interval might limit the classification performance of MI-based BCI systems. To address the above drawbacks, the contributions of this work are: 1. Propose an approach based on INGHS to optimize frequency-time parameters for effective CSP feature extraction.
2. Conduct a set of experiments validating the effectiveness of the proposed method. 3. Compared with PSO and ABC, INGHS algorithm can converge to the global optimal solution faster, so it is helpful for specific subjects to find the optimal time interval and frequency band in the actual offline experiment in a shorter time, thus shortening the offline calibration time.
The rest of the article is organized as follows. The applied datasets and methods are described in the section "Methods and materials." Then, in the section "Results and discussion," we describe the results of channel selection, the test classification comparison, the analysis of frequency-temporal parameter optimization, and the computational time comparison. Finally, this study is summarized in the section "Conclusion."
Electroencephalogram data description
(1) Data 1: The first dataset is from BCI Competition IV dataset 1. The EEG signals of seven subjects ("a" to "g") were recorded at 59 electrodes. The calibration data, consisting of 200 trials per subject, are used in this study. In each trial, a cue was shown for 4 s, during which the subject performed the corresponding MI task (left hand and right hand, or foot). The original data are downsampled to 100 Hz. The timeline of a trial is illustrated in Figure 1A. More details can be found at the following website: http://www.bbci.de/competition/iv/. (2) Data 2: We use BCI Competition III dataset IVa for the experimental study. EEG signals of five healthy participants ("aa" to "ay") were recorded at 118 electrodes. The data are downsampled to 100 Hz. In each trial, a cue was shown for 3.5 s, during which the subject performed the corresponding MI (right hand and right foot) tasks. The timeline of a trial is shown in Figure 1B.
More details can be found at the following website: http://www.bbci.de/competition/iii/. (3) Data 3: The third dataset is from BCI Competition III dataset IIIa. The EEG signals of three subjects were recorded at 64 electrodes, but only 60 electrodes are provided in the competition data. Only the left-hand and right-hand EEG data are employed in this study due to the binary classification. During each run, the first 2 s were quiet, a cross was displayed at t = 2 s, and from t = 3-7 s the subject executed the imagery task. The sampling rate is 250 Hz, and the number of trials differs per subject: "k3b" (90), "k6b" (60), and "l1b" (60). The timeline of a trial is illustrated in Figure 1C. More details can be found at the following website: http://www.bbci.de/competition/iii/.
Data preprocessing and channel selection
First, the continuous EEG data of the three datasets are divided into single-trial data, and then the common average reference (CAR) is applied as a spatial filter to enhance the signal-to-noise ratio (McFarland et al., 1997). Moreover, the EEG data are band-pass filtered from 5 to 40 Hz with a fifth-order Butterworth filter (Miao et al., 2017a).
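A minimal sketch of this preprocessing step is given below (Python/NumPy/SciPy; the array layout, the function name, and the use of zero-phase filtering are assumptions, since the text does not specify them):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(trial, fs):
    """Common average reference (CAR) followed by a fifth-order
    Butterworth band-pass filter from 5 to 40 Hz, as described above.
    trial: array of shape (n_channels, n_samples); fs: sampling rate."""
    car = trial - trial.mean(axis=0, keepdims=True)      # CAR spatial filter
    b, a = butter(5, [5.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, car, axis=1)                   # zero-phase choice, not stated in the text
```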
The channel selection method not only removes irrelevant and redundant channels but also reduces the computational cost of the subsequent time-frequency parameter optimization, yielding better classification performance (He et al., 2013). The discriminative power of each channel is measured by the Fisher discriminative criterion (FDC) value between the two classes. First, time segmentation is conducted using rectangular windows of 100 points for dataset 1 (100 Hz × 4 s) and dataset IVa (100 Hz × 3.5 s), and of 250 points for dataset IIIa (250 Hz × 4 s). Neighboring segments overlap by 50% for all three datasets. For a single channel, the log-power P_{ch,t} = log(var(x_{ch,t})) is computed as the feature of each segment, where x_{ch,t} is the signal of segment t of channel ch. Then, the FDC value between the two classes is φ_{ch,t} = (m_1 − m_2)² / (var(P¹_{ch,t}) + var(P²_{ch,t})), where m_1 and m_2 are the means of P_{ch,t} over all trials of the two classes, and P¹_{ch,t} and P²_{ch,t} denote the log-powers of the two classes, respectively. Finally, the maximum FDC over all segments is taken as the FDC value of the channel. The FDC values of all channels are sorted in descending order, and the first K corresponding channels are taken as the optimal channels in this study; K denotes the number of selected channels.
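The FDC-based channel ranking described above can be sketched as follows (the array shapes and the binary label encoding are assumptions):

```python
import numpy as np

def fdc_channel_selection(trials, labels, win_len, K, overlap=0.5):
    """Rank channels by the Fisher discriminative criterion (FDC) and
    return the indices of the K best channels.
    trials: (n_trials, n_channels, n_samples); labels: values in {0, 1}."""
    n_trials, n_channels, n_samples = trials.shape
    step = int(win_len * (1 - overlap))                  # 50% overlap of neighboring segments
    fdc = np.zeros(n_channels)
    for ch in range(n_channels):
        best = 0.0
        for s in range(0, n_samples - win_len + 1, step):
            seg = trials[:, ch, s:s + win_len]
            p = np.log(np.var(seg, axis=1))              # log-power P_{ch,t}
            p1, p2 = p[labels == 0], p[labels == 1]
            phi = (p1.mean() - p2.mean()) ** 2 / (p1.var() + p2.var())
            best = max(best, phi)                        # maximum FDC over all segments
        fdc[ch] = best
    return np.argsort(fdc)[::-1][:K]                     # K channels with largest FDC
```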
Feature extraction and classification
The CSP is a feature extraction method that projects multichannel EEG signals from the two classes into a subspace and decomposes them into different spatial patterns (He et al., 2010; Alvarez-Meza et al., 2015; Nicolas-Alonso et al., 2015; Wang et al., 2020; Mladenović et al., 2022). The CSP algorithm maximizes the difference between classes by simultaneously diagonalizing the class covariance matrices and is described as follows. The e-th MI EEG trial is represented as X_e = [x_1(t), x_2(t), …, x_n(t)]ᵀ, t = t_0, …, T, where n is the number of electrodes, and X_d, d ∈ {1, 2}, denotes the EEG data of class 1 or class 2. The normalized average covariance matrices C_1 and C_2 of the two classes are computed by averaging X_d X_dᵀ / trace(X_d X_dᵀ) over the N trials of each class. The composite covariance matrix C_c = C_1 + C_2 is then formed; its eigendecomposition is C_c = U_c λ_c U_cᵀ, from which the whitening matrix P = λ_c^{−1/2} U_cᵀ is obtained. Writing R_d = P C_d Pᵀ, one has R_1 = B λ_1 Bᵀ and R_2 = B λ_2 Bᵀ, where B is an orthogonal matrix, λ_d is a diagonal matrix, and λ_1 + λ_2 = I. When λ_1 is closer to I, λ_2 is closer to 0; thus, the difference between the two classes is largest. The projection matrix is W = Bᵀ P, and the trial is projected onto Z = W X_e. The number of features is 2m, with m = 1 in this study; the features are computed as f_p = log(var(Z_p) / Σᵢ var(Zᵢ)). In this article, linear discriminant analysis (LDA) and a radial basis function (RBF) kernel-based support vector machine (SVM) are used as classification methods (Jin et al., 2019; Jin et al., 2020; Mladenović et al., 2022). The MATLAB toolbox LIBSVM is used in this study for classification (Chang and Lin, 2011).
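A compact NumPy sketch of the CSP computation and the log-variance features follows (variable names are assumptions; numerical regularization is omitted):

```python
import numpy as np

def csp_filters(trials_1, trials_2, m=1):
    """Compute the CSP projection matrix from two classes of trials,
    each of shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        covs = [X @ X.T / np.trace(X @ X.T) for X in trials]   # normalized covariances
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(trials_1), avg_cov(trials_2)
    lam, U = np.linalg.eigh(C1 + C2)                     # C_c = U diag(lam) U^T
    P = np.diag(lam ** -0.5) @ U.T                       # whitening matrix
    lam1, B = np.linalg.eigh(P @ C1 @ P.T)               # class eigenvalues sum to 1
    W = B.T @ P                                          # projection matrix W = B^T P
    idx = np.r_[np.arange(m), np.arange(-m, 0)]          # first and last m filters
    return W[idx]

def csp_features(W, X):
    """Features f_p = log(var(Z_p) / sum_i var(Z_i)) for a single trial X."""
    v = np.var(W @ X, axis=1)
    return np.log(v / v.sum())
```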
Improved novel global harmony search algorithm
Inspired by the music improvisation process, Geem proposed a meta-heuristic optimization algorithm called harmony search (HS) (Geem et al., 2001). The novel global harmony search (NGHS) algorithm was later proposed based on the swarm-intelligence idea of particle swarms (Zou et al., 2010). The NGHS algorithm first initializes the problem and its parameters, including the genetic mutation probability (P_m), the maximum iteration number, and the harmony memory size (HMS). Then, position updates and low-probability genetic mutations are used to produce a new harmony. Finally, the new harmony replaces the worst harmony in the harmony memory (HM), regardless of whether it is better. If the predetermined termination criterion is not met, the above process is repeated. However, the purpose of the position update in NGHS is to move the worst harmony in the HM toward the best harmony in each iteration, which can easily result in premature convergence. Moreover, the algorithm never considers that harmonies other than the worst one might improvise better harmonies. Therefore, an INGHS algorithm was proposed to improve the quality of the solutions and to keep the NGHS algorithm from getting trapped in local optima (Ouyang et al., 2015). The INGHS algorithm introduces a coefficient of optimization opportunity, which is adjusted dynamically to balance exploitation and exploration, enhancing the local search ability and accelerating the convergence of the algorithm. Figure 2 presents the flow chart of the INGHS algorithm.
The INGHS works as follows.

Step 1: Initialize the problem and parameters. The optimization problem is defined as minimize (or maximize) f(x) subject to x_iL ≤ x_i ≤ x_iU (i = 1, 2, …, n), where f(x) denotes the objective function and x is a candidate solution composed of n decision variables x_i. This step also determines the parameters, which include the HMS, the genetic mutation probability P_m, and the number of iterations (Ni).
Step 2: Initialize the harmony memory (HM). The initial HM is generated from a uniform distribution over the intervals [x_iL, x_iU], where x_iL and x_iU are the lower and upper bounds of x_i, respectively: x_i = x_iL + r·(x_iU − x_iL), where r ∼ U(0, 1). The objective value f(x) of each harmony vector is then computed.
Step 3: Improvise a new harmony. Generating a new harmony is called improvisation; x′ = (x′_1, x′_2, …, x′_n) denotes the new harmony vector. O(u) is defined as the coefficient of optimization opportunity, which is adjusted dynamically with the iteration counter u. Further, x^best_i represents the i-th component of the best harmony (minimum fitness value) in the HM, x^worst_i represents the i-th component of the worst harmony (maximum fitness value) in the HM, and x^s_i is the i-th component of a stochastically chosen harmony in the HM; r is a random number generated uniformly in [0, 1]. The specific procedure is as follows: for each i ∈ [1, n], if r < O(u), the i-th component is generated by a position update based on x^best_i and x^worst_i; otherwise, it is inherited from the stochastic harmony x^s_i; in addition, a genetic mutation replaces the component by a uniformly random value with probability P_m. The objective value f(x′) of the new harmony vector is then calculated.

Step 4: Update HM. If the objective value of the improvised harmony vector x′ is better than that of the stochastically selected harmony x^s, we replace the stochastically selected harmony in the HM with x′.
Step 5: Check the stopping criterion. The iteration terminates when the maximum number of iterations Ni is reached; otherwise, Steps 3 and 4 are repeated.
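The following Python sketch mirrors Steps 1-5. Since the exact expressions for O(u) and for the position update are not reproduced above, a linearly decreasing O(u) schedule and the NGHS-style update are used here as assumptions:

```python
import numpy as np

def inghs(objective, lower, upper, hms=10, pm=0.15, n_iter=100, seed=0):
    """Minimal INGHS sketch; `objective` maps a candidate vector to a
    fitness value (smaller is better). lower/upper: arrays of bounds."""
    rng = np.random.default_rng(seed)
    n = len(lower)
    hm = rng.uniform(lower, upper, size=(hms, n))        # Step 2: initialize HM
    fit = np.array([objective(h) for h in hm])
    for u in range(n_iter):                              # Step 5: stopping criterion
        best, worst = hm[fit.argmin()], hm[fit.argmax()]
        s = rng.integers(hms)                            # stochastic harmony
        o_u = 1.0 - u / n_iter                           # assumed O(u) schedule
        new = hm[s].copy()                               # inherit x^s by default
        for i in range(n):                               # Step 3: improvise
            if rng.random() < o_u:
                # NGHS-style position update toward the best harmony (assumed form)
                x_r = np.clip(2.0 * best[i] - worst[i], lower[i], upper[i])
                new[i] = worst[i] + rng.random() * (x_r - worst[i])
            if rng.random() < pm:                        # genetic mutation
                new[i] = rng.uniform(lower[i], upper[i])
        f_new = objective(new)
        if f_new < fit[s]:                               # Step 4: update HM
            hm[s], fit[s] = new, f_new
    return hm[fit.argmin()], fit.min()
```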
An improved novel global harmony search-based frequency-temporal parameter optimization scheme
First, the original EEG datasets are preprocessed by CAR and band-pass filtering. Then, the FDC-based method is applied to the raw EEG data to select the optimal channel sets. Ten-fold cross-validation is employed to verify the effectiveness of the proposed INGHS method: the channel-selected EEG data are randomly divided into 10 parts, 9 of which are used as training data and the remaining one as test data. Figure 3 presents the flow chart of the INGHS-based frequency-temporal parameter optimization. The proposed method comprises a training phase and a test phase. In the training phase, the INGHS algorithm searches the training data for the optimal frequency band and time interval; the CSP projection matrix is then computed from the training data extracted with these optimal frequency-temporal parameters, and the resulting CSP features are used to train the SVM model. In the test phase, the optimal frequency-temporal parameters are applied to the test samples to extract the EEG segment; the projection matrix then extracts the CSP features, which are fed into the SVM model for classification.
It should be noted that the objective function of the INGHS algorithm is defined to calculate the fitness value that evaluates the quality of a solution; it is the mean error rate of fivefold cross-validation.
The INGHS-based approach works as follows: (1) Initialize the problem and algorithm parameters.
The optimization problem is defined as minimizing H(f, t) subject to h_iL ≤ h_i ≤ h_iU (i = 1, …, 4). Here, h_1 and h_2 denote the start frequency (f_start) and bandwidth (f_width), while h_3 and h_4 denote the start of the time interval (t_start) and its length (t_length). A solution is therefore expressed as {f_start, f_width, t_start, t_length}. In this study, the INGHS algorithm is applied to the training data to find the globally optimal solution. The search ranges of the variables are listed in Table 1. For all subjects, if f_start + f_width ≥ 40, then f_width = 40 − f_start is enforced. For dataset 1, if t_length + t_start ≥ 4 × 100, then t_length = 4 × 100 − t_start; for dataset IVa, if t_length + t_start ≥ 3.5 × 100, then t_length = 3.5 × 100 − t_start; for dataset IIIa, if t_length + t_start ≥ 4 × 250, then t_length = 4 × 250 − t_start. The INGHS algorithm parameters (HMS, P_m, and Ni) are also determined in this step.
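These coupling rules can be enforced by clipping each candidate solution; a sketch (names follow the text, units assumed to be Hz and samples):

```python
def clip_solution(h, fs, mi_seconds, f_max=40.0):
    """Enforce the dependencies between the four decision variables
    h = [f_start, f_width, t_start, t_length] (time values in samples)."""
    f_start, f_width, t_start, t_length = h
    if f_start + f_width >= f_max:                  # e.g., 40 Hz upper band edge
        f_width = f_max - f_start
    t_max = mi_seconds * fs                         # e.g., 4 s x 100 Hz for dataset 1
    if t_start + t_length >= t_max:
        t_length = t_max - t_start
    return [f_start, f_width, t_start, t_length]
```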
(2) Initialize the HM and calculate the fitness values. The initial HM is generated from a uniform distribution over the ranges [h_iL, h_iU], as shown in Figure 4. Each harmony vector (solution) is applied to the training data, and CSP features are extracted from the resulting data. Subsequently, the obtained features are input to the LDA (or SVM) classifier to calculate the fitness value H(f, t), and the harmonies are sorted by their fitness values.
(3) Improvise a new harmony. A new harmony is improvised according to Step 3 of the INGHS algorithm, and its fitness value is calculated.
(4) Update HM and (5) Check the stopping criterion: these steps are the same as in the INGHS algorithm. Finally, the optimal frequency band and time interval are obtained. The test data are then processed with the optimal frequency band {f_start, f_start + f_width} and time interval {t_start, t_start + t_length}, features are extracted by the CSP filters obtained with the INGHS algorithm, and the LDA (or SVM) classifier is used to recognize the MI task.
To remove irrelevant and redundant channels, enhance the classification accuracy, and reduce the computational complexity, channel selection methods for MI-based BCI have been widely studied (Miao et al., 2017a; Jin et al., 2019, 2020). The FDC method is adopted for channel selection here because, compared with other channel selection methods, it is widely used and has low complexity. We investigated the effect of the number of selected channels (K) on the test accuracy. The CSP algorithm is used as the feature extraction method without frequency-time parameter optimization; the frequency band is 5-40 Hz, and the time interval is the MI period of the paradigm. The LDA method is used for classification. K is tuned from 8 to 59 for Data 1 and from 8 to 118 for Data 2. Figure 5 presents the test classification accuracies of the subjects from Data 1 and Data 2 as K changes; the test accuracy obtained for each K is the average of 10-fold cross-validation. According to the results in Figure 5, the test accuracies of the subjects of Data 1 and Data 2 develop differently with increasing K. However, we note that the average test accuracy gains little once K exceeds a moderate value; accordingly, K = 16 is used in the subsequent experiments.
Parameter analysis of improved novel global harmony search
The selection of HMS depends on the specific problem. A larger HMS enlarges the search space of the algorithm, which makes it easier to find the global optimum, but it inevitably increases the running time. Therefore, considering the efficiency and diversity of the algorithm, an appropriate HMS value for the INGHS algorithm is vital. Mutation in INGHS is an auxiliary search operation whose main purpose is to preserve the diversity of the population. Generally, a small P_m value may lead to rapid convergence of the algorithm, which easily produces local optima; a high P_m value, however, makes the INGHS algorithm tend toward a purely random
search, resulting in slow convergence and greatly reduced efficiency. Therefore, an appropriate P_m value not only prevents the algorithm from falling into local optima and keeps the diversity of solutions, but also lets the algorithm converge in time. To select the optimal HMS and P_m, this study analyzed the effects of different HMS and P_m values on the three datasets; HMS was set to 5, 10, 20, and 30. If the total number of variables satisfies 1 ≤ x_n ≤ 4, P_m is reasonably selected from the range [0.2 × (1 − 50%), 0.2 × (1 + 50%)]; otherwise, P_m is selected from [(1 − 50%)/x_n, (1 + 50%)/x_n] (Zou et al., 2010). The most reasonable P_m values in this study thus lie between 0.1 and 0.3, so P_m was set to 0.1, 0.15, 0.2, 0.25, and 0.3. The number of channels was set to K = 16, and the LDA classifier was used. Table 2 shows the mean test accuracy of all subjects for Data 1 under different HMS and P_m values: with HMS = 10 and P_m = 0.15, the average test accuracy reaches its maximum of 78.43%; therefore, for Data 1, HMS and P_m are set to 10 and 0.15, respectively. Table 3 shows the corresponding results for Data 2: with HMS = 10 and P_m = 0.2, the average test accuracy is highest, reaching 87.78%; therefore, for Data 2, HMS and P_m are set to 10 and 0.2. Table 4 shows the results for Data 3: with HMS = 20 and P_m = 0.25, the average test accuracy reaches its maximum of 81.76%; therefore, for Data 3, HMS and P_m are set to 20 and 0.25.
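This parameter study amounts to a small grid search; a sketch is shown below, where mean_test_accuracy is a hypothetical evaluator standing in for a full INGHS run per setting:

```python
def mean_test_accuracy(hms, pm):
    # Placeholder: in practice this runs the full INGHS pipeline for all
    # subjects of a dataset with the given HMS and P_m and returns the
    # mean test accuracy; the surrogate below only keeps the code runnable.
    return -(abs(hms - 10) + abs(pm - 0.15))

hms_grid = [5, 10, 20, 30]
pm_grid = [0.10, 0.15, 0.20, 0.25, 0.30]
best = max(((h, p, mean_test_accuracy(h, p)) for h in hms_grid for p in pm_grid),
           key=lambda t: t[2])
print("selected HMS, P_m:", best[:2])
```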
In this study, all experimental simulations were implemented in MATLAB R2019b on a Windows personal computer with a Core i5-9500H 3.00 GHz CPU and 8.00 GB RAM. For fair comparison, PSO (Xu et al., 2014) and ABC (Miao et al., 2017a) with the recommended parameter settings were also employed to find the optimal frequency-temporal parameters; the training data, test data, and preprocessing methods are identical. The parameter values of all algorithms are listed in Table 5, and the number of iterations (Ni) is 100 for all algorithms.
Results and discussion
Channel selection
In the topographic maps of Figure 6, a darker blue indicates a larger FDC value of the corresponding channel; on the contrary, a lighter blue indicates a smaller FDC value. As shown in Figure 6, for the subjects of Data 2, the selected channel sets are distributed in the left motor cortex area. Moreover, for subjects b, c, d, e, and g, the selected channels are mainly located in the right and left motor cortex areas, which corresponds to MI of the left hand and right hand. For subjects a and f, the selected channels tend to be located in the right and central motor cortex areas, which is consistent with the neurophysiological principle that MI of the left hand and foot corresponds to the right and central motor cortex areas. Furthermore, for subjects k3b to l1b, the channels corresponding to higher FDC values are distributed in the areas adjacent to the C3 and C4 positions. However, the selected channels differ between subjects due to individual differences.
Test classification comparison
Aiming to evaluate the effectiveness of the proposed INGHS method for frequency-time optimization, we compare in Table 6 the test classification of the proposed INGHS method with ABC, PSO, and the traditional CSP method. For the PSO, ABC, and INGHS feature optimization methods, the number of channels K is 16. The CSP method with 16 channels uses fixed time windows (0-4 s for datasets 1 and IIIa, and 0-3.5 s for dataset IVa) and the 5-40 Hz frequency band. Note that all methods in Table 6 are evaluated by the standard competition procedure, and each value in Table 6 is the 10-fold cross-validation mean test accuracy. Table 6 shows that, for the LDA and SVM classifiers, the average accuracy improvements achieved by INGHS over the classical CSP method are 12.9 and 11.6%, respectively. Thus, the proposed INGHS method achieves higher average classification accuracies than the traditional CSP method. In addition, the average accuracy improvements achieved by INGHS with LDA are 1.6 and 4.1% over the ABC and PSO methods, respectively; with the SVM classifier, the average classification accuracy of the proposed INGHS method is 0.1% (ABC) and 3.9% (PSO) higher. The average test accuracy of INGHS is thus slightly higher than that achieved by ABC and PSO with both classifiers. Furthermore, the Wilcoxon signed-rank test was used to analyze the statistical difference between the CSP method and the proposed INGHS method; the classification result of INGHS is significantly better than that of the CSP method (p < 0.001).
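The significance test can be reproduced with SciPy; the per-subject accuracies below are placeholders, not the values from Table 6:

```python
from scipy.stats import wilcoxon

# Paired per-subject test accuracies (placeholder numbers).
acc_csp   = [65.0, 58.5, 71.0, 60.5, 77.0, 69.5, 63.0]
acc_inghs = [79.5, 70.0, 83.5, 72.0, 88.0, 80.5, 75.5]

stat, p = wilcoxon(acc_inghs, acc_csp)     # paired, two-sided by default
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4g}")
```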
Among swarm intelligence optimization algorithms, PSO adjusts all variables of each solution in every iteration and ABC adjusts one variable of each solution, whereas the INGHS algorithm adjusts each variable independently based on all existing vectors. This increases the flexibility of the algorithm and produces better solutions. In addition, the steps and structure of the INGHS algorithm are relatively simple. In summary, on the one hand, the proposed INGHS method effectively optimizes the time and frequency parameters and achieves better test accuracy than the CSP method; on the other hand, the test accuracy obtained with the time-frequency parameters selected by the proposed INGHS algorithm is slightly better than that obtained by ABC and PSO.
Frequency-time analysis
The optimal frequency-time region found by the INGHS method based on LDA for Data 1 is shown in Figure 7. The figure shows the optimal frequency-time parameters corresponding to the highest test accuracy in the 10-fold cross-validation of each subject. It should be noted that, for all subjects, the optimal frequency band covers the µ (8-12 Hz) and β (13-30 Hz) rhythms; nevertheless, the bands still vary considerably between subjects. Furthermore, we observe that the starting times differ within a small range across subjects and that the optimal time lengths are different. Meanwhile, the optimal time segment of most subjects starts at 0.5 s after the visual cue, which is consistent with the starting time point used in most of the literature (Thomas et al., 2009). For brain signal analysis, some common frequency bands, including µ (8-12 Hz) and β (13-30 Hz), have been popular in various EEG studies. Here, we also compare the classification performance of the commonly used fixed frequency-band configurations with the frequency band and time configuration optimized by the INGHS method. The time segment was 0-4 s, and the classifier was LDA; it should be emphasized that all comparisons were made with the number of channels K set to 16. Figure 8 presents the comparison of classification accuracies obtained by the standard competition procedure with INGHS and the common frequency settings for Data 1. The results show that the optimal frequency-time parameters found by INGHS achieve a better average test accuracy than the common frequency-band settings: the average test accuracy of INGHS is 78.43%, which is 10.79, 10.43, and 10.72% higher than that of the fixed µ (8-12 Hz), β (13-30 Hz), and 8-30 Hz bands, respectively. This suggests that INGHS can adaptively select the optimal time-frequency parameters and achieve better classification performance.
Comparison of spatial patterns
To better interpret the experimental results, we visualize the spatial patterns derived by the traditional CSP method and the INGHS-based method. A pair of spatial patterns consists of the first and last columns of W⁻¹ (W is the spatial filter from the section "Feature extraction and classification"). For the traditional CSP method, the fixed frequency band (5-40 Hz) and time segment (0-4 s) are applied to the training data to obtain the spatial filter W; the INGHS-based method instead applies the optimized frequency band and time period to the training data to obtain W. Both methods use the optimal 16-channel set obtained by the channel selection approach, and the training data used by CSP and INGHS are identical. Figure 9 compares the spatial patterns of the traditional CSP method with 16 channels and the INGHS-based method for subject "l1b". The results indicate that, compared with the traditional CSP method, the spatial pattern of the INGHS-based method shows pronounced ERD, concentrated around C3 and C4. During unilateral MI, significant ERD of the sensorimotor rhythm (SMR) occurs in the contralateral hemisphere (Pfurtscheller et al., 2006). Moreover, the information features of the sensorimotor areas are closely related to ERD, which provides important discriminative information for decoding motor imagery tasks (Blankertz et al., 2007). The clear ERD in the SMR obtained by the INGHS-based method thus explains the better decoding accuracy in Table 6. Additionally, these analyses provide explicit evidence for the superior decoding performance of our proposed INGHS method over the traditional CSP method.
Computational cost comparison
The computational times of the different methods on the three datasets are shown in Table 7. The running time refers to the iteration time of the algorithm, excluding preprocessing and channel selection, and denotes the average running time of all subjects in a dataset. As shown in Table 7, compared with PSO and ABC, the average time spent by the LDA-based INGHS method over all datasets is reduced by 78.2 and 85.2%, respectively. The results show that the proposed method takes less time than PSO and ABC. The main reason is that, in contrast to INGHS, which generates a unique solution at each iteration, the population-based meta-heuristics (PSO and ABC) take more time to maintain a set of solutions that evolves at each iteration. Although our proposed INGHS method already takes less time than the other methods, we aim to reduce the computational cost further to speed up the training phase. Table 8 shows the test classification accuracy of the proposed INGHS method and the existing FBCSP and SFBCSP methods. The time window was 0-3.5 s for Data 2 for FBCSP and SFBCSP. For FBCSP, the sub-frequency bands are 4-8, 6-10, 8-12, …, 36-40 Hz; CSP features are extracted over the whole time window in each sub-band, and the Mutual Information-based Best Individual Feature (MIBIF) selection algorithm then automatically selects the CSP features of the frequency bands: based on the mutual information value of each single feature, the features of the first four sub-bands are selected for subsequent training and testing. For SFBCSP, the sub-band division is the same as for FBCSP, and LASSO is used for feature optimization. The 16-channel mode for FBCSP and SFBCSP is the same as for INGHS. Each value in Table 8 is the fivefold cross-validation mean test accuracy, and LDA is used as the classification method. The proposed INGHS method achieves higher average classification accuracies than the FBCSP and SFBCSP methods: the average accuracy improvements are 3.5 and 2.5%, respectively.
Limitations and extensions
The proposed frequency-temporal parameter optimization method based on INGHS yields better test accuracy than the traditional CSP method. Furthermore, compared with PSO and ABC, the proposed method takes less computation time because of the simple iterative principle of the INGHS algorithm, which quickly converges to the global optimum. However, the channel selection step precedes the time-frequency optimization; we will further explore the impact of simultaneous spatial-frequency-temporal optimization on classification performance in the future. Moreover, the proposed INGHS method has only been validated on the binary classification of MI-BCI systems, and further research is needed to apply the proposed time-frequency parameter optimization algorithm to multiclass problems.
Conclusion
In this study, a frequency-time feature optimization approach based on INGHS is proposed for MI EEG decoding. Three EEG datasets are used to verify the effectiveness of the proposed INGHS method. The proposed method improves classification accuracy in comparison to the classical CSP method. Moreover, the average test accuracy achieved by INGHS is slightly better than that obtained by ABC and PSO with both LDA and SVM, and the INGHS algorithm is superior to PSO and ABC in running time. The results demonstrate that the optimal frequency band and time interval provided by the INGHS algorithm can indeed improve the classification accuracy. Future studies will investigate the performance of our proposed INGHS method on other types of BCI systems.
Data availability statement
The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
"year": 2022,
"sha1": "cc02f47624c34fc07af0b6328c01dd72ccb73610",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "cc02f47624c34fc07af0b6328c01dd72ccb73610",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
249053776 | pes2o/s2orc | v3-fos-license | Optimal connected subgraphs: Integer programming formulations and polyhedra
Connectivity is a central concept in combinatorial optimization, graph theory, and operations research. In many applications, one is interested in finding an optimal subset of vertices with the essential requirement that the vertices are connected, but not how they are connected. In other words, it is not relevant which edges are selected to obtain connectivity. This article is concerned with the exact solution of such problems via integer programming. We analyze and compare (mixed) integer programming formulations with respect to the strength of their linear programming relaxations. Along the way, we also provide a tighter (compact) description of the connected subgraph polytope—the convex hull of subsets of vertices that induce a connected subgraph. Furthermore, we give a (compact) complete description of the connected subgraph polytope for graphs with no four independent vertices.
INTRODUCTION
In many clustering and network analysis applications, one is interested in finding an optimal subset of vertices, with the main requirement being that the vertices are connected, but not how they are connected. In other words, one looks for a subset of vertices such that the subgraph induced by these vertices is connected. Which edges are selected to obtain connectivity is not relevant.
Applications of such an induced connectivity span a diverse set of areas: Computational biology [13], wildlife conservation [11], computer vision [9], social network analysis [30], political districting [19], wireless sensor network design [6], and even robotics [4]. Besides this practical relevance, connectivity is also a central and well-studied theoretical concept.
From an optimization perspective, a fundamental prototype for induced connectivity problems is the maximum-weight connected subgraph problem (MWCSP); see, for example, [1]. Given an undirected graph G = (V, E) and vertex weights p : V → R, the task is to find a connected subgraph S = (V(S), E(S)) ⊆ G such that Σ_{v∈V(S)} p(v) is maximized. The literature also describes variations of the MWCSP such as the rooted and the budget-constrained problem, see [2]. Another well-known optimization problem that is based on induced connectivity is the unweighted (as well as uniformly weighted) Steiner tree problem: any solution (i.e., Steiner tree) consisting of n nodes will be of weight n − 1; its weight is thus determined solely by the set of selected vertices, so only the induced connectivity of this set matters.
Definitions and notation
For the vertices and edges of an undirected graph G we write V(G) and E(G), respectively. For a directed graph D, we write A(D) for its set of arcs. For a subset of vertices U ⊆ V, we define E[U] := {{u, v} ∈ E | u, v ∈ U}. Further, the notation n := |V| and m := |E| will be used. For U ⊆ V define δ(U) := {{u, v} ∈ E | u ∈ U, v ∈ V∖U}, and for a subgraph G′ ⊆ G and U′ ⊆ V(G′) define δ_G′(U′) accordingly. A corresponding notation is used for directed graphs (V, A): for U ⊆ V define δ⁺(U) := {(u, v) ∈ A | u ∈ U, v ∈ V∖U} and δ⁻(U) := δ⁺(V∖U). For a single vertex v, we use the shorthand notation δ(v) := δ({v}), and accordingly for directed graphs. We define the neighborhood of a vertex set U ⊆ V as N(U) := {v ∈ V∖U | ∃u ∈ U, {u, v} ∈ δ(U)}.
For a single v ∈ V, we set N(v) := N({v}). For directed graphs, we define N⁺(U) := {v ∈ V∖U | ∃u ∈ U, (u, v) ∈ δ⁺(U)}, and N⁻(U) analogously. We denote by α(G) the maximum number of independent vertices in graph G. Let (V, A) be a directed graph, let r, t ∈ V, and consider an r−t flow f in (V, A). We denote the net flow value of f by |f| := f(δ⁺(r)) − f(δ⁻(r)). Given an MWCSP instance (V, E, p), we define T_p := {v ∈ V | p(v) > 0}.
Let v and w be two distinct vertices of G. A subset C ⊆ V∖{v, w} is called a (v, w)-separator, or (v, w)-node-separator, if there is no path from v to w in the graph (V∖C, E[V∖C]). The family of all (v, w)-separators is denoted by S(v, w). Note that S(v, w) = ∅ if and only if {v, w} ∈ E. For directed graphs, we say that C ⊆ V∖{v, w} is a (v, w)-separator if all directed paths from v to w contain a vertex from C.
For any function x : M → R with M finite, and any M′ ⊆ M, define x(M′) := Σ_{i∈M′} x(i). Given an IP formulation F, we denote its optimal objective value by v(F). Further, we denote the optimal objective value and the set of feasible points of its LP relaxation by v_LP(F) and P_LP(F), respectively. If we want to emphasize a specific problem instance I, we also write F(I).
Preliminaries: MWCSP and related problems
The MWCSP is NP-hard; see, for example, [23]. It is even NP-hard to approximate the MWCSP within any constant factor, as shown in [1]. Note that in the case of only non-negative vertex weights, the MWCSP reduces to finding a connected component of maximum vertex weight; in the case of only non-positive vertex weights, the empty set constitutes an optimal solution.
Rooted MWCSP
A close relative of the MWCSP is the rooted maximum-weight connected subgraph problem (RMWCSP); see, for example, [2], which incorporates the additional condition that a non-empty set T f ⊆ V needs to be part of any feasible solution. For simplicity, we usually assume that p(t) = 0 for all t ∈ T f .
Unweighted Steiner tree problem
Given an undirected connected graph G = (V, E) and a set T ⊆ V of terminals, the unweighted Steiner tree problem in graphs (USPG) is to find a tree S ⊆ G with T ⊆ V(S) such that |E(S)| is minimized. The USPG can also be seen as a Steiner tree problem with uniform edge weights. Moreover, the USPG can be formulated as an RMWCSP by setting T f ∶= T and assigning each nonterminal vertex a weight of −1. Many of the hardest Steiner tree benchmark instances are unweighted; see [25] for an overview. Moreover, many theoretical articles consider just the unweighted case; see, for example, [17], who describe an exact polynomial-space algorithm for the USPG.
Steiner arborescence problem
Several results of this article rely on the Steiner arborescence problem (SAP), which is defined as follows: We are given a directed graph D = (V, A), costs c ∶ A → R ≥0 , a set T ⊆ V of terminals and a root r ∈ T. The SAP requires an arborescence (i.e., directed tree) S ⊆ D with T ⊆ V(S) that is rooted at r, such that c(A(S)) is minimized.
FORMULATIONS FOR ROOTED CONNECTED SUBGRAPHS
This section is concerned with connected subgraph problems where a predefined, non-empty set of vertices needs to be part of any feasible solution. We start with formulations for the SAP. While the SAP is not based on induced connectivity itself, it forms the basis of several other results in this article.
Formulations for the Steiner arborescence problem
Consider an SAP instance (V, A, T, r, c). Associate with each arc a ∈ A a binary variable y(a) indicating whether a is contained in the Steiner arborescence (y(a) = 1) or not (y(a) = 0). A natural formulation by [35] (i.e., one in the original variable space) can thereupon be stated as follows.

Formulation 1. Directed cut formulation (DCut)

min cᵀy
y(δ⁻(U)) ≥ 1 for all U ⊆ V∖{r} with U ∩ T ≠ ∅, (2)
y(a) ∈ {0, 1} for all a ∈ A.

One verifies that the constraints (2) ensure the existence of (directed) paths from the root to each terminal in a feasible solution. We note that a feasible but not optimal solution to DCut is not necessarily the incidence vector of a Steiner arborescence. Indeed, the convex hull of all y ∈ ℕ₀^A that satisfy (2) is of blocking type, that is, its recession cone equals ℝ^A_{≥0}.
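Although (2) comprises exponentially many constraints, they can be separated by maximum-flow computations; a minimal sketch using networkx (the data structures are assumptions):

```python
import networkx as nx

def separate_dcut(D, y, root, terminals, eps=1e-6):
    """Search for a violated cut constraint y(delta^-(U)) >= 1.
    D: nx.DiGraph; y: dict mapping arcs to LP values.
    Returns a violating set U (root not in U, U meets T) or None."""
    H = nx.DiGraph()
    H.add_nodes_from(D.nodes())
    for (u, v) in D.edges():
        H.add_edge(u, v, capacity=y.get((u, v), 0.0))
    for t in terminals:
        if t == root:
            continue
        cut_val, (_, U) = nx.minimum_cut(H, root, t)   # min root-t cut
        if cut_val < 1.0 - eps:                        # constraint (2) violated for U
            return U
    return None
```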
Another well-known formulation, see, for example, [35], is based on flows. It affiliates with each terminal t ∈ T∖{r} an r−t flow f_t.

Formulation 2. Directed multicommodity flow formulation (DF)

min cᵀy
f_t(δ⁺(v)) − f_t(δ⁻(v)) = 0 for all t ∈ T∖{r} and v ∈ V∖{r, t},
f_t(δ⁺(r)) − f_t(δ⁻(r)) = 1 for all t ∈ T∖{r},
0 ≤ f_t(a) ≤ y(a) for all t ∈ T∖{r} and a ∈ A,
y(a) ∈ {0, 1} for all a ∈ A.

By using the max-flow min-cut theorem, one shows that DF is an extended formulation of DCut, that is, proj_y(P_LP(DF)) = P_LP(DCut); see, for example, [14]. Both formulations can be strengthened by the so-called flow-balance constraints from [24]:

y(δ⁻(v)) ≤ y(δ⁺(v)) for all v ∈ V∖T. (9)

We will refer to the extensions of the above formulations that additionally include (9) as DCut_FB and DF_FB, respectively. We end this section with a (new) result for the SAP, which will be used several times in the following.

Lemma 1. For any SAP instance with |T| ≤ 3 it holds that v_LP(DCut_FB) = v(DCut_FB).
Proof. For the cases |T| = 1 and |T| = 2, the lemma holds already without the flow-balance constraints: the case |T| = 1 is clear, and the case |T| = 2 (corresponding to the shortest-path problem with non-negative weights) results in a totally unimodular constraint matrix. So let (V, A, T, c, r) be an SAP with two terminals t, u besides the root r. We additionally require that a feasible solution does not have any leaves apart from r, t, u. For this so-called two-terminal Steiner tree problem, a complete polyhedral description is given by [3]. This description is based on the following observation: any feasible arborescence for the two-terminal Steiner tree problem consists of a path from r to a splitter node s, as well as an s−t and an s−u path; note that any of these paths can be a single node. Accordingly, the flow from r to the terminals t and u is split into a common part f̄ and two separate parts f̄_t and f̄_u. Let (f_t, f_u, y) be an optimal LP solution to DF_FB, and assume that this solution is minimal, that is, no entry can be decreased without losing feasibility or optimality. We show that the point obtained from (f_t, f_u, y) by defining, for all a ∈ A,

f̄(a) := min{f_t(a), f_u(a)}, f̄_t(a) := f_t(a) − f̄(a), f̄_u(a) := f_u(a) − f̄(a), (15)

is contained in the polyhedron described above. First, we show (10). Let v ∈ V. Because of the assumed minimality of (f_t, f_u, y), we have y = max{f_t, f_u}; together with (15), this yields f̄ = f_t + f_u − y. If v = r, then (19) implies that (10) holds. If v ∈ {t, u}, (10) follows analogously. Finally, if v ∈ V∖{r, t, u}, the flow-balance constraints imply that (19) is non-negative.

Next, consider (11) and, equivalently, (12). By definition, it holds that f̄ ≤ f_t and f̄ ≤ f_u, which implies (11). Likewise, (13) follows from the definitions of f̄, f̄_t, and f̄_u. ▪

Note that the lemma is best possible in the sense that there exist SAP instances with |T| = 4 such that v_LP(DCut_FB) ≠ v(DCut_FB); see, for example, [28,31].
Rooted maximum-weight connected subgraphs
This section discusses the directed variant of the RMWCSP, see [2]: Given a directed graph D = (V, A), vertex weights p ∶ V → R, a non-empty set T f ⊆ V and an r ∈ T f , find a connected subgraph S ⊆ D containing T f such that any v ∈ V(S) can be reached from r on a directed path in S, and p(V(S)) is maximized. Any undirected RMWCSP can be formulated in directed form by choosing an arbitrary r ∈ T f and replacing each edge by two anti-parallel arcs.
Note that any solution to the directed RMWCSP can be represented as an arborescence. This observation leads to the following IP formulation, see, for example, [2], which is based on a well-known formulation for SAP, see, for example, [21]. Define for each v ∈ V a variable x(v) ∈ {0, 1} that is equal to 1 if and only if vertex v is part of the solution. Analogously, define for each a ∈ A a variable y(a) ∈ {0, 1}.
Formulation 3. Rooted Steiner arborescence formulation (RSA)

max pᵀx
y(δ⁻(v)) = x(v) for all v ∈ V∖{r}, (26)
y(δ⁻(U)) ≥ x(v) for all U ⊆ V∖{r}, v ∈ U, (27)
x(t) = 1 for all t ∈ T_f, (28)
x(v) ∈ {0, 1} for all v ∈ V, y(a) ∈ {0, 1} for all a ∈ A.
Constraints (26) establish the relation between the arc variables and the (actually redundant) vertex variables. Constraints (27) correspond to constraints (2) of the DCut formulation: they ensure that in a feasible solution S, for any v ∈ V(S), there is an r−v path in S. Finally, constraints (28) ensure that all fixed terminals are contained in any feasible solution.
In [2], a new formulation for the directed RPCSTP based on node-separators is introduced. Note that the use of node-separators for modeling connectivity is already suggested in [18].
Formulation 4. Rooted node separator formulation (RNCut)

max pᵀx
x(C) ≥ x(v) for all v ∈ V∖({r} ∪ N⁺(r)), C ∈ S(r, v), (32)
x(t) = 1 for all t ∈ T_f, (33)
x(v) ∈ {0, 1} for all v ∈ V.

Constraints (32) ensure that connectivity is fulfilled, by enforcing that for any solution vertex v, every (r, v)-separator contains at least one solution vertex as well. Constraints (33) ensure the inclusion of all fixed terminals.
Besides the two IP models introduced above, several other formulations for the RMWCSP (sometimes including a budget constraint) have been introduced in the literature; see, for example, [2,11]. However, one can show that these formulations are weaker with respect to their LP relaxations than both of the above models; see [2] for some such results. Another example is the formulation from [10], which is based on single-commodity flows. Also this formulation can be shown to be weaker than Formulation 3 by max-flow/min-cut arguments, similar to corresponding results for minimum spanning tree and Steiner tree problems, which can be found, for example, in [22].
In [2], it is stated that the LP relaxations of the RNCut and RSA models yield the same optimal value. Unfortunately, this claim is not correct, as the following proposition shows. Appendix B gives a counterexample and furthermore provides some insight into how the node-separator constraints fail to capture structures that are accurately described by edge-cut constraints.

Proposition 2. It holds that proj_x(P_LP(RSA)) ⊆ P_LP(RNCut), and the inclusion can be strict.
Proof. The inclusion is essentially shown in [2]; for the sake of completeness, we nevertheless prove it in the following. Let (x, y) ∈ P_LP(RSA). We will show that x ∈ P_LP(RNCut). Consider any v ∈ V∖({r} ∪ N⁺(r)) and a non-empty C ∈ S(r, v). Let V_r be the set of vertices that are reachable from r in the graph (V∖C, E[V∖C]). Since every arc of δ⁻(V∖V_r) has its head in C, we obtain x(C) = Σ_{c∈C} y(δ⁻(c)) ≥ y(δ⁻(V∖V_r)) ≥ x(v), where the first equation follows from (26) and the last inequality from (27); this shows that (32) is satisfied. Thus, x ∈ P_LP(RNCut).
Finally, an RMWCSP instance for which proj_x(P_LP(RSA)) ⊊ P_LP(RNCut) holds is given in Figure B1 in Appendix B.
For the instance in Figure B1, the depicted LP point assigns value 0.5 to a vertex whose only entering arcs come from a and b; hence either y((a, ·)) < 0.5 or y((b, ·)) < 0.5 must hold, and either case contradicts (27). In particular, the depicted point lies in P_LP(RNCut) but not in proj_x(P_LP(RSA)). One can strengthen the RSA formulation by inequalities similar to the flow-balance constraints:

y(δ⁻(v)) ≤ y(δ⁺(v)) for all v ∈ V∖{r} with p(v) ≤ 0. (35)

However, these constraints depend on the objective vector (i.e., they are only valid for specific node-weight assignments), so they cannot directly be used for polyhedral results.
We refer to the strengthened formulation as RSA_FB. One readily obtains the following result from Lemma 1.

Lemma 3. For any directed RMWCSP instance I with |T_p| ≤ 2 it holds that v_LP(RSA_FB(I)) = v(RSA_FB(I)).

Proof. From I we construct an SAP instance I′ = (V′, A′, T′, r, c′): for each t ∈ T_p, we add a new terminal t′ to T′ and arcs (r, t′) of weight p(t) and (t, t′) of weight 0 to A′; recall that we assume T_p ∩ T_f = ∅. Any optimal LP solution (x, y) to RSA_FB can be extended to a feasible LP solution y′ to DCut_FB, defined by y′((t, t′)) := x(t) and y′((r, t′)) := 1 − x(t) for all t ∈ T_p, as well as y′(a) := y(a) for all a ∈ A. Thus, one obtains the inequality chain (36) relating the LP values of the two formulations. Because I′ has at most three terminals, Lemma 1 and (36) imply that the above inequalities are satisfied with equality. Consequently, v_LP(RSA_FB(I)) = v(RSA_FB(I)). ▪
Unweighted Steiner tree problems
This section analyzes and compares two formulations for the USPG. First, we state the node-separator formulation from [16]. Note that [16] uses a more general version for the prize-collecting USPG; however, the prize-collecting USPG is essentially an MWCSP. The results of this section can partly be extended to this more general variant (which is done in Section 3 for the non-rooted case), but for simplicity we consider only the USPG in the following.
Formulation 5. Terminal node separator formulation (TNCut)

min x(V) − 1
x(C) ≥ 1 for all t, u ∈ T with t ≠ u, C ∈ S(t, u), (39)
x(t) = 1 for all t ∈ T,
x(v) ∈ {0, 1} for all v ∈ V.
Second, we look at the well-known bidirected cut formulation (BDCut) for (U)SPG. This formulation corresponds to the DCut formulation for the SAP obtained by replacing each edge of the SPG by two anti-parallel arcs of the same weight, and choosing an arbitrary terminal as the root.
Exactness of the bidirected cut formulation
This section formulates conditions under which the bidirected cut formulation has no integrality gap. We start with a direct consequence of Lemma 1, which also applies to the weighted SPG.

Corollary 4. For any SPG instance with |T| ≤ 3 it holds that v_LP(BDCut_FB) = v(BDCut_FB).
A simple reduction technique for the USPG is to contract adjacent terminals (and delete one edge from each resulting pair of multi-edges). The following proposition shows that the absolute integrality gap of BDCut is invariant under this operation. This property will be exploited in (the subsequent) Theorem 6 to reduce the size of instances with a small number of independent vertices.

Proposition 5. Let I be a USPG instance with adjacent terminals t, u, and let I′ be the USPG instance obtained by contracting t and u. It holds that

v_LP(BDCut(I)) = v_LP(BDCut(I′)) + 1. (42)

Proof. Throughout the proof, we assume that u is the root of the BDCut formulation, that is, r = u. It is well known that the choice of the root does not affect v_LP(BDCut); see, for example, [21] (this result also follows from the proof of Theorem 6). Furthermore, let D′ = (V′, A′) be the bidirected graph obtained by contracting r and t, and let r′ be the new vertex. Let y be an optimal LP solution to BDCut(I). The optimality of y implies that y(δ⁻(t)) = 1, see [31]. Create an optimal solution ỹ (which can possibly be equal to y) as follows.
With this result at hand, we obtain the following theorem (recall that α(G) denotes the independence number of graph G).

Theorem 6. Let I be a USPG instance on a graph G with α(G) ≤ 3. Then v_LP(BDCut(I)) = v(BDCut(I)).

Proof. Let I′ be the instance obtained from I by exhaustively contracting adjacent terminals. By repeated application of Proposition 5, v_LP(BDCut(I)) = v(BDCut(I)) if and only if v_LP(BDCut(I′)) = v(BDCut(I′)). Furthermore, because of α(G) ≤ 3, it holds that |T′| ≤ 3. For |T′| < 3, the BDCut formulation is well known to have no integrality gap. So assume |T′| = 3. By construction of I′, the terminals form an independent set. Further, let y be an optimal LP solution to BDCut(I′) with an arbitrary r ∈ T′ being the root.

Suppose that v_LP(BDCut(I′)) ≠ v(BDCut(I′)). By Lemma 1, there is a v ∈ V′∖T′ such that

y(δ⁺(v)) < y(δ⁻(v)). (43)

Because of α(G) ≤ 3, at least one of the terminals needs to be adjacent to v. We may assume that this property holds for r; otherwise, we can readily create another optimal LP solution ỹ that satisfies (43) and has a root adjacent to v: assume that a t ∈ T∖{r} is adjacent to v and let f_t be a unit flow from r to t such that f_t ≤ y; define ỹ((q, w)) := y((q, w)) − f_t((q, w)) + f_t((w, q)) for all (q, w) ∈ A′. Define a new LP solution y′ from y as follows: for a_0 := (r, v), set y′(a_0) := y(δ⁺(v)); for any a ∈ δ⁻(v)∖{a_0}, set y′(a) := 0; for all a ∈ A′∖δ⁻(v), set y′(a) := y(a). Note that because of (43) it holds that y′(A′) < y(A′). It remains to be shown that y′ is feasible. Suppose that there is a U ⊆ V∖{r} with U ∩ T′ ≠ ∅ and y′(δ⁻(U)) < 1. Because y is feasible, it must hold that v ∈ U. Let Ũ := U∖{v}. By the construction of y′ it holds that y(δ⁻(Ũ)) = y′(δ⁻(Ũ)) = y′(δ⁻(Ũ)) + y′((r, v)) − y′(δ⁺(v)) ≤ y′(δ⁻(U)) < 1, which contradicts the feasibility of y. Consequently, we have shown that v_LP(BDCut(I′)) = v(BDCut(I′)) and, thus, v_LP(BDCut(I)) = v(BDCut(I)). ▪
2.3.2 Comparison of edge- and node-based formulations

Formulation 5 (TNCut) was used within a branch-and-cut algorithm by the most successful solver at the 11th DIMACS Challenge [12]. Furthermore, this solver was able to solve several USPG benchmark instances to optimality that had been unsolved for more than a decade. Thus, one might wonder how this formulation compares theoretically with the better-known bidirected cut formulation. As the next proposition shows, BDCut is always at least as strong as TNCut, and the relative gap can be quite large.
Proposition 7. It holds that v_LP(TNCut) ≤ v_LP(BDCut) and

sup v_LP(BDCut) / v_LP(TNCut) ≥ 2, (44)

where the supremum is taken over all USPG instances.
Proof. For the first inequality, consider an optimal LP solution y to BDCut. Define x ∈ R^V by x(v) := y(δ⁻(v)) for all v ∈ V∖{r} and x(r) := 1. The optimality of y implies x(v) ≤ 1 for all v, see [31]. Let t, u ∈ T with t ≠ u and C_tu ∈ S(t, u). We will show that C_tu satisfies (39). If C_tu ∩ T ≠ ∅, then x(C_tu) ≥ 1, because x(q) ≥ 1 for all q ∈ T due to (2) and the definition of x; thus, (39) holds. If C_tu ∩ T = ∅, let U_r be the connected component of the graph induced by V∖C_tu with r ∈ U_r. By definition of C_tu, either t ∉ U_r or u ∉ U_r. Therefore, y(δ⁺(U_r)) ≥ 1, which implies y(δ⁻(C_tu)) ≥ 1 because of δ⁺(U_r) ⊆ δ⁻(C_tu). Now we obtain from the definition of x that x(C_tu) ≥ y(δ⁻(C_tu)) ≥ 1. Finally, by construction of x we have x(V) − 1 = Σ_{v∈V∖{r}} y(δ⁻(v)) = y(A); note that y(δ⁻(r)) = 0 because y is optimal. For (44), we construct the following family of USPG instances. For any k ≥ 3, let I_k be the USPG instance with k + k² nodes, k + k² edges, and k terminals defined as follows: let t_i, i = 1, …, k, be the terminals, and connect each pair of consecutive terminals t_i and t_{i+1} (indices modulo k) by a path of k Steiner vertices, so that I_k forms a single cycle; Figure 1 shows I_3 (terminals are drawn as squares). A feasible (and indeed optimal) LP solution x to TNCut(I_k) is given by x(t) := 1 for all terminals t and x(v) := 0.5 for any Steiner node v; its objective value is k²/2 + k − 1. On the other hand, it holds that v_LP(BDCut(I_k)) = v(BDCut(I_k)), because I_k consists of a cycle (and is thus in particular series-parallel); see, for example, [20]. Any optimal Steiner tree in I_k contains all edges except those on the path between two adjacent terminals t_i and t_{i+1}. Thus, v_LP(BDCut(I_k)) = v(BDCut(I_k)) = (k + 1)(k − 1) = k² − 1. Consequently, v_LP(BDCut(I_k)) / v_LP(TNCut(I_k)) = (k² − 1)/(k²/2 + k − 1) → 2 for k → ∞, which concludes the proof. ▪
Corollary 8. The (relative) integrality gap of TNCut is at least 2.
Note that one can strengthen TNCut by constraints that correspond to the flow-balance constraints for BDCut; see [16]. However, when compared to BDCut_FB, the results of Proposition 7 remain the same for this stronger version of TNCut.
FORMULATIONS FOR NON-ROOTED CONNECTED SUBGRAPHS
In this section, we consider the undirected MWCSP. Some of the following results can also be extended to the directed case. However, the undirected MWCSP is the more common (and, arguably, also more natural) problem.
Node-based formulations
This section considers formulations for the MWCSP that use only node variables. Probably the best-known one, see, for example, [34], is given below.
Formulation 6. Node separator formulation (NCut)

max pᵀx
x(C) ≥ x(v) + x(w) − 1 for all distinct v, w ∈ V with {v, w} ∉ E, C ∈ S(v, w), (47)
x(v) ∈ {0, 1} for all v ∈ V.

Constraints (47) guarantee that in a feasible solution S, for any two distinct v, w ∈ V(S), at least one node of each (v, w)-separator is also contained in S.
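Constraints (47) can be separated with a minimum vertex cut in a node-split digraph; the same splitting idea reappears in the proof of Proposition 10 below. A sketch, assuming {v, w} ∉ E and a networkx graph representation:

```python
import networkx as nx

def separate_ncut(G, x, v, w, eps=1e-6):
    """Look for a (v, w)-separator C with x(C) < x(v) + x(w) - 1.
    G: nx.Graph; x: dict vertex -> LP value; assumes {v, w} not in E."""
    H = nx.DiGraph()
    for u in G.nodes():                                # vertex capacities via splitting
        cap = float("inf") if u in (v, w) else x.get(u, 0.0)
        H.add_edge((u, "in"), (u, "out"), capacity=cap)
    for a, b in G.edges():                             # original edges get infinite capacity
        H.add_edge((a, "out"), (b, "in"), capacity=float("inf"))
        H.add_edge((b, "out"), (a, "in"), capacity=float("inf"))
    cut_val, (S, T) = nx.minimum_cut(H, (v, "out"), (w, "in"))
    if cut_val < x.get(v, 0.0) + x.get(w, 0.0) - 1.0 - eps:
        # vertices whose split arc crosses the cut form the separator
        return [u for u in G.nodes() if (u, "in") in S and (u, "out") in T]
    return None
```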
In this section, we are interested in cases where NCut has no integrality gap. Recalling the invariance of the BDCut integrality gap under the contraction of terminals, one might wonder whether a corresponding property holds for the MWCSP and the NCut formulation. As the next proposition shows, the answer is yes. As before, we will exploit this property to show an integrality condition based on the independence number, by exhaustively contracting all adjacent positive-weight vertices. Note that when contracting adjacent vertices t, u ∈ T_p into a new vertex t′, we set p(t′) := p(t) + p(u).
Proposition 9. v_LP(NCut) is invariant under the contraction of adjacent vertices of positive weight.
Proof. Let I be an MWCSP instance with an edge {t, u} ∈ E such that t, u ∈ T_p. Let I′ = (V′, E′, p′) be the instance obtained from I by contracting {t, u} into a new vertex t′. It holds that v_LP(NCut(I′)) ≤ v_LP(NCut(I)), because any x′ ∈ P_LP(NCut(I′)) can be mapped to an x ∈ P_LP(NCut(I)) of the same objective value by setting x(t) := x(u) := x′(t′) and x(v) := x′(v) for all v ∈ V∖{t, u}. For the opposite direction, let x be an optimal LP solution to NCut(I). The optimality of x and the fact that {t, u} ∈ E imply that x(t) = x(u); define x′ ∈ R^{V′} by x′(t′) := x(t) and x′(v) := x(v) otherwise. Assume that x′(t′) ∈ (0, 1); otherwise, the proof is already complete. It remains to be shown that x′ ∈ P_LP(NCut(I′)). Suppose this is not the case. Then there are a, b ∈ V′ and an (a, b)-separator C′_ab ⊂ V′ such that

x′(C′_ab) < x′(a) + x′(b) − 1. (50)

Because x is feasible, t′ ∈ C′_ab. Thus, we obtain from (50) the inequalities (51) and (52). Now we return to the original instance I. Because x is optimal and x(t) = x(u) < 1, there are a q ∈ V∖{t, u} and a C_qt ∈ S(q, t) such that

x(t) + x(q) − x(C_qt) = 1. (53)

Similarly, there are an s ∈ V∖{t, u} and a C_su ∈ S(s, u) with x(u) + x(s) − x(C_su) = 1. At least one such combination q, C_qt or s, C_su satisfies u ∉ C_qt or t ∉ C_su, since otherwise we could increase x(u) and x(t). Assume w.l.o.g. that u ∉ C_qt. From (53), we obtain (54). Thus, (53) and (52) imply a, b ∉ C_qt. We note that C_qt ∉ S(a, q), because (52) and (53) imply (55); likewise, C_qt ∉ S(b, q). Consequently, any path from {t, u} to a or b needs to cross C_qt, since otherwise the latter would not separate q and t. Therefore, C̃_ab := (C′_ab∖{t′}) ∪ C_qt separates a and b in the original graph. However, from (50) and (54) we obtain a violation of (47) for C̃_ab, which contradicts the feasibility of x. ▪

Furthermore, one obtains the following optimality criterion:
Proposition 10. For any MWCSP instance with |T_p| ≤ 2 it holds that v_LP(NCut) = v(NCut).

Proof. Consider an MWCSP instance I = (G, p) with |T_p| ≤ 2. The case |T_p| ≤ 1 is clear. Let {a, b} := T_p and assume p(a) ≥ p(b). Thus, there is a minimal optimal LP solution x with x(a) = 1. Let (V, A) be the bidirected equivalent of G. Create a new directed graph (V′, A′) by replacing each node v ∈ V∖{a, b} by two nodes v_1, v_2 and arcs (v_1, v_2), (v_2, v_1); further, all ingoing arcs of v become ingoing arcs of v_1, and all outgoing arcs of v become outgoing arcs of v_2. Define arc capacities k for each pair of these new arcs by x(v); for any remaining arc e ∈ A, set k(e) := ∞. By the max-flow/min-cut theorem, there is an a−b flow f with |f| = x(b) in this extended network. Define the directed MWCSP instance I_r := ((V, A), T_f, r, p) with T_f := {a} and r := a, and set y := f↾_A. Because of the optimality and minimality of x, it holds that (x, y) ∈ P_LP(RSA(I_r)). Thus, v_LP(NCut(I)) ≤ v_LP(RSA(I_r)). Furthermore, y satisfies constraints (35). Because of v(NCut(I)) = v(RSA(I_r)), Lemma 3 implies that v_LP(NCut(I)) = v(NCut(I)). ▪

Figure 2 shows an MWCSP instance with |T_p| = 3 and v_LP(NCut) ≠ v(NCut): it holds that v(NCut) = 1, but v_LP(NCut) = 1.5 (set the values of all negative-weight node variables to 0.5 and the remainder to 1).
Finally, by combining the previous two propositions, we obtain a significantly shorter proof of a main result from [34].

Theorem 11 ([34]). If α(G) ≤ 2, then P_LP(NCut) equals the connected subgraph polytope of G.

Proof. Let p ∈ R^V. If α(G) ≤ 2, then Proposition 9 implies that the MWCSP (G, p) can be transformed into an MWCSP with at most two positive-weight vertices without changing v_LP(NCut). Now, Proposition 10 yields v_LP(NCut) = v(NCut). Because p can be chosen arbitrarily, every vertex of P_LP(NCut) is integral, which proves the claim. ▪
Indegree constraints
Given an undirected graph G = (V, E), a vector a ∈ Z^n is an indegree vector if there is an orientation of the edges E such that each vertex v ∈ V has indegree a(v). For each indegree vector, the corresponding indegree inequality is given as

Σ_{v∈V} (1 − a(v)) x(v) ≤ 1, (57)

where x ∈ R^V_{≥0} are the node variables. [26] shows that the indegree inequalities describe the connected subgraph polytope if G is a tree. Furthermore, [34] gives conditions for (57) to be facet-inducing and shows that the constraints can be separated in linear time. It is further shown that the constraints (57) can strengthen the NCut formulation.
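For inequalities of the form (57), the linear-time separation works by orienting every edge toward the endpoint with the smaller LP value, which minimizes Σ_v a(v) x(v) and hence maximizes the left-hand side. A sketch (plain containers assumed):

```python
def separate_indegree(nodes, edges, x, eps=1e-6):
    """Separate sum_v (1 - a(v)) x(v) <= 1 in linear time.
    nodes: iterable of vertices; edges: iterable of vertex pairs;
    x: dict vertex -> LP value. Returns a violating indegree vector
    (as a dict) or None."""
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        head = u if x[u] <= x[v] else v     # orient toward the smaller x value
        indeg[head] += 1
    lhs = sum((1 - indeg[v]) * x[v] for v in nodes)
    return indeg if lhs > 1.0 + eps else None
```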
Edge-based formulations
An edge-based formulation for the directed MWCSP is introduced in [1], based on a transformation to the prize-collecting SPG. We will use essentially the same formulation for the undirected MWCSP, but without the transformation to the prize-collecting SPG, and thus with a different objective function. Consider the bidirected equivalent D = (V, A) of the given undirected graph. Let (V_r, A_r) be the directed graph obtained by adding a new node r, with V_r := V ∪ {r} and A_r := A ∪ {(r, v) | v ∈ V}. Define the following extended MWCSP formulation based on the new graph (V_r, A_r).
Formulation 7. Extended Steiner arborescence formulation (ESA)

max pᵀx
y(δ⁻(v)) = x(v) for all v ∈ V,
y(δ⁻(U)) ≥ x(v) for all U ⊆ V, v ∈ U, (60)
y(δ⁺(r)) ≤ 1, (61)
x(v) ∈ {0, 1} for all v ∈ V, y(a) ∈ {0, 1} for all a ∈ A_r.

ESA is almost the same as RSA (Formulation 3). The additional constraint (61) ensures that at most one arc incident to the (artificial) root node r is selected; otherwise, a solution could consist of several connected components in the original graph (V, E).
The remainder of this section aims to prove an integrality condition for proj_x(P_LP(ESA)) based on the independence number. Our approach can be divided into two parts. First, we show that for any MWCSP instance with 1 ≤ |T_p| ≤ 3 there is an optimal LP solution (x, y) with x(v) = 1 for some v ∈ T_p. In the second part (following Lemma 15), we use this v as a root node and apply the same principal ideas already used in Section 2.3.1 for the USPG: we show the invariance of the integrality gap under edge contraction and reduce any MWCSP instance with bounded independence number to an MWCSP instance with a bounded number of positive vertices. We start with an easy technical result.
Lemma 12.
Let (x, y) be an optimal LP solution to ESA, and let v ∈ V. There is a ỹ ∈ R^{A_r} with ỹ((r, v)) = x(v) such that (x, ỹ) is an optimal LP solution to ESA.
Lemma 13.
Let (x, y⁺) be an optimal LP solution to ESA⁺. Then (x, y) ∈ R^{V+A_r} with y(a) := y⁺(a) for a ∈ A_r⁺ and y(a) := 0 for a ∈ A_r∖A_r⁺ is an optimal LP solution to ESA.
Proof. Let ESA′ be the reduced version of ESA where constraints (60) are only enforced for vertices t ∈ T_p. In this proof, we only consider minimal optimal LP solutions, that is, solutions for which no entry can be reduced without losing either feasibility or optimality. First, we show that any optimal LP solution to ESA⁺ is also optimal for ESA′. To this end, we show the existence of an optimal LP solution (x′, y′) to ESA′ such that y′((r, v)) = 0 for all v ∈ V∖T_p. Assume there is an optimal LP solution (x′, y′) to ESA′ with y′((r, v)) > 0 for a v ∈ V∖T_p. Because (x′, y′) is optimal, there is an r-t flow f_t with f_t ≤ y′ for a t ∈ T_p with |f_t| = y′((r, v)). We can now proceed as in Lemma 12 to revert the flow going to t. The resulting optimal solution (x, ỹ) satisfies ỹ((r, v)) = 0 and ỹ((r, u)) ≤ y′((r, u)) for all u ∈ V∖{t}.
Second, we show that any optimal LP solution (x′, y′) to ESA′ with y′((r, v)) = 0 for all v ∈ V∖T_p satisfies constraints (60) also for v ∈ U with v ∉ T_p. We use essentially the same line of argumentation used in [21] for the SPG bidirected cut formulation. Suppose there is a U ⊆ V and a u ∈ U such that (65) holds. Choose such a U with |U| as small as possible. Because of (65), there is an e ∈ δ⁻(u)∖δ⁻(U) such that y′(e) > 0. Because of the minimality of (x′, y′), there is a W ⊆ V and a t ∈ W ∩ T_p such that e ∈ δ⁻(W) and (66) holds. Because of e ⊆ U and |e ∩ W| = 1, one obtains |U ∩ W| < |U|. We will show that U ∩ W satisfies (65), which contradicts the minimality of |U|. By standard graph theory we have that y′(δ⁻(U)) + y′(δ⁻(W)) ≥ y′(δ⁻(U ∩ W)) + y′(δ⁻(U ∪ W)).
With (66), it follows that y′(δ⁻(U)) ≥ y′(δ⁻(U ∪ W)), which leads to a contradiction. ▪ Further, we require the following result. The (quite lengthy) proof is given in the Appendix.
Lemma 15.
If |T_p| ≤ 3, then there is an optimal LP solution (x, y) to ESA such that x(t) ∈ {0, 1} for all t ∈ T_p.
As the last piece, we have the now familiar contraction result (with a slight generalization).
Proposition 16. v_LP(ESA) is invariant under the contraction of adjacent vertices of non-negative weight.
The proposition can be proven in a similar way as Proposition 5, with a few additional technical details. We now reach the main result of this section.

Theorem 17. If α(G) ≤ 3, then proj_x(P_LP(ESA)) is integral.

Proof. Given an MWCSP instance I = (G, p), exhaustively contract adjacent vertices of non-negative weight to obtain an instance I′ = (G′, p′) with v_LP(ESA(I)) = v_LP(ESA(I′)) (Proposition 16). Also, I′ satisfies |T′_p| ≤ 3 and the vertices T′_p are independent. By Lemmas 12 and 15, there is an optimal LP solution (x, ỹ) to ESA(I′) such that x(u) ∈ {0, 1} for all u ∈ T′_p, and ỹ((r, t)) = 1 for one t ∈ T′_p. For simplicity, we deviate from the assumption that fixed terminals have 0 weight. It holds that v(ESA(I′)) = v(RSA(I′_t)) and v_LP(ESA(I′)) = v_LP(RSA(I′_t)). We will show that v_LP(RSA(I′_t)) = v(RSA(I′_t)), which concludes the proof. Let (x, y) be the restriction of (x, ỹ) to (V′, A′). Note that (x, y) is an optimal LP solution to RSA(I′_t). Suppose that (67) does not hold. Thus, there is a v ∈ V′ with y(δ⁺(v)) < y(δ⁻(v)). The case |T′_p| < 3 can be readily ruled out by a flow argument. So assume |T′_p| = 3. Because of α(G) ≤ 3, at least one vertex u ∈ T′_p is adjacent to v. Recall that x(u) ∈ {0, 1}. If x(u) = 0, we reduce the problem to the support graph of (x, y), which corresponds to the case |T′_p| < 3. So assume x(u) = 1. Further, construct an optimal solution (x, ỹ) to I′_u with root u analogously to Lemma 12. In this way, y(δ⁺(v)) < ỹ(δ⁻(v)) holds again (for the same v as above). In the following, assume u = t. Define a new LP solution (x′, y′) from y as follows. For a_0 := (t, v) set y′(a_0) := y(δ⁺(v)). For any a ∈ δ⁻(v)∖{a_0} set y′(a) := 0. For all a ∈ A′∖δ⁻(v) set y′(a) := y(a). Set x′(v) := y(δ⁺(v)), and x′(w) := x(w) for all w ∈ V∖{v}. By construction of I′_t it holds that p(v) < 0 (otherwise, v would have been contracted into u). Thus, p′ᵀx′ > p′ᵀx, contradicting the optimality of (x, y). The feasibility of (x′, y′) can be seen as in the proof of Theorem 6. ▪

Note that there are graphs with α(G) = 4 such that proj_x(P_LP(ESA)) is not integral. For an example, extend the graph in Figure 2 as follows. Add a new vertex v and edges between v and the (three) vertices of negative weight shown in the figure.
Comparison of the formulations
A result from [1] states that the directed equivalents of ESA and (a slight generalization of) NCut induce the same polyhedral relaxation of the directed connected subgraph polytope. This result suggests that the same relation holds for the undirected case. Unfortunately, the result from [1] is not correct (the proof suffers from a similar problem as that discussed in Appendix B for the rooted case). The strict inclusion result given in the next proposition can indeed also be extended to the directed case.

Proposition 18. The following inclusion holds and can be strict: proj_x(P_LP(ESA)) ⊆ P_LP(NCut).

Proof. Let (x, y) ∈ P_LP(ESA) and let a, b ∈ V, a ≠ b. Let C ∈ 𝒞(a, b) and let U_a be the connected component in the graph (V∖C, E[V∖C]) with a ∈ U_a. Define Ū_b := V∖U_a and Ū_a := U_a ∪ C. Because of Ū_a ∩ Ū_b = C, one obtains (where δ⁻ is taken with respect to D_r) that the combined cut values of Ū_a and Ū_b are at most y(δ⁺(r)) + y(δ⁻(C)) (72), from which the node-separator inequality for a, b, and C follows. An example for a strict inclusion is given in Figure 2. Consider the following point that is in P_LP(NCut), but not in proj_x(P_LP(ESA)): Set the x values of all negative-weight node variables to 0.5 and the remainder to 1. To see that this point is indeed not in proj_x(P_LP(ESA)), consider arc variables y such that y(δ⁻(v)) = x(v) for all vertices v. Because of x(d) = 1, we can proceed as in Lemma 12 and assume that y((r, d)) = 1. First, we have y(δ⁻({a, b, c})) ≥ x(a) = 1. Together with y((r, d)) = 1 and the root-degree constraint (61), this leads to a contradiction.

Next, we consider the indegree constraints. Following [34], we define P_IND as the set of points x ∈ [0, 1]^V that satisfy (57) for every indegree vector. While P_IND ⊈ P_LP(NCut), see, for example, [34], the indegree constraints cannot improve the ESA formulation, as the following proposition shows.
Proposition 19.
The following inclusion holds and can be strict: proj_x(P_LP(ESA)) ⊆ P_IND.

Proof. Consider an undirected graph G, and let D be its bidirected equivalent. Furthermore, let D_r be the extended, directed graph on which ESA is defined. Let (x, y) ∈ P_LP(ESA). First, note that constraints (59) and (60) imply, for all {v, w} ∈ E, a bound on the arc values of the two orientations of the edge in terms of x(v) and x(w). Let q be an indegree vector. Summing these bounds over the corresponding orientation shows that ∑_{v∈V} (1 − q_v) x(v) ≤ 1, which implies that (57) is satisfied by x; thus, x ∈ P_IND. For a strict inclusion consider the graph in Figure 2 and the point x as defined in the proof of Proposition 18. To see that this point satisfies all indegree constraints, consider an indegree vector q that minimizes ∑_{v∈V} q_v x(v). Because all adjacent vertices of a, d, and f have value 0.5, we obtain q_a = q_d = q_f = 0. Thus, ∑_{v∈V} (1 − q_v) x(v) ≤ 1, which shows that the indegree constraint is satisfied. ▪ Summarizing the results of this section, one obtains proj_x(P_LP(ESA)) ⊆ P_LP(NCut) ∩ P_IND, and the inclusion can be strict.
Finally, note that by using one flow for each vertex, similar to the DF formulation, it is also possible to obtain a compact extended formulation for the connected subgraph polytope that is equivalent to ESA, and thus (strictly) stronger than the combined node-separator and indegree formulation.
CONCLUSION
This article has analyzed node and edge-based formulations for combinatorial optimization problems based on induced connectivity. Furthermore, we have shown conditions for the LP-relaxations to be tight. In particular, a (compact) complete description of the connected subgraph polytope for graphs with less than four independent vertices has been given. Overall, it has been demonstrated that the edge-based formulations consistently provide stronger LP-relaxations than their node-based counterparts. For MWCSP, the considered edge cut formulation has been shown to be strictly stronger than the combination of the well-known node-separator and indegree formulations.
Finally, we note that the theoretical predominance of edge-based formulations over node-based ones is complemented by recent computational results: In [32], an MWCSP solver that uses the ESA + FB formulation is shown to significantly outperform all other (and in particular node-based) MWCSP solvers from the literature.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.

APPENDIX A: PROOF OF LEMMA 15

Because of (A8) and (A9), we obtain the following construction. Define ỹ ∈ R^{A_r⁺} by ỹ(e) := max{y(e), f_a(e), f_b(e)} for all e ∈ A_r⁺, and define x̃ ∈ R^V accordingly. It holds that pᵀx̃ = pᵀx, and x̃(c) = 1. ▪
APPENDIX B: NODE SEPARATORS AND REJOINING OF FLOWS
Consider the directed RMWCSP instance (G, T, p, r) with G = (V, A) depicted in Figure B1. A proof from [2] intends to show that v_LP(RNCut) ≤ v_LP(RSA) holds. For this purpose, the authors consider an arbitrary solution x ∈ P_LP(RNCut) and construct an auxiliary graph G′ by replacing each node v ∈ V∖{r} with an arc (v_1, v_2). All ingoing arcs of v become ingoing arcs of v_1, and all outgoing arcs of v become outgoing arcs of v_2. Moreover, (non-negative) capacities k′ on G′ are introduced for each arc (v′, w′) of G′ by k′((v′, w′)) := x(v) if (v′, w′) = (v_1, v_2) for some v ∈ V∖{r}, and k′((v′, w′)) := 1 otherwise. Figure B2 shows an auxiliary support graph of the instance illustrated by Figure B1. It is possible to send a flow with flow value x(v) from root node r to each arc (v_1, v_2) with v ∈ V∖{r} because of constraints (32). Let f_v(j, l) be the amount of a flow with source node r, sink node v ∈ V∖{r}, and flow value x(v) sent along arc (j, l). Define the arc variables ŷ(j, l), (j, l) ∈ A, of the RSA formulation as follows: ŷ(j, l) := max_{v∈V∖{r}} f_v(j_2, l_1) for j, l ∈ V∖{r}, and ŷ(j, l) := max_{v∈V∖{r}} f_v(j, l_1) for j = r, l ∈ V∖{r}.

FIGURE B1: Directed RMWCSP instance. FIGURE B2: Illustration of an auxiliary support graph G′ corresponding to the instance in Figure B1, for the optimal solution x(v) = 0.5, v ∈ V∖T, and x(t) = 1, t ∈ T, to the RNCut formulation.
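As an aside, the over-counting that this construction can produce is easy to reproduce numerically (a minimal sketch in plain Python; the arcs and flow values are invented placeholders in the spirit of Figure B2, not the actual instance): two commodities each route 0.5 units through a node d, but on different incoming arcs, so taking arc-wise maxima inflates the node value, which is precisely the rejoining effect discussed at the end of this appendix.

    # Per-commodity flow values on the two arcs into node d (hypothetical):
    flows = {
        "commodity_1": {("u", "d"): 0.5, ("w", "d"): 0.0},
        "commodity_2": {("u", "d"): 0.0, ("w", "d"): 0.5},
    }
    arcs_into_d = [("u", "d"), ("w", "d")]
    y_hat = {arc: max(f.get(arc, 0.0) for f in flows.values())
             for arc in arcs_into_d}
    x_hat_d = sum(y_hat.values())  # = y_hat(delta^-(d))
    print(x_hat_d)  # 1.0, although each commodity only sends 0.5 through d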
Hence, the arc variables of the instance in Figure B2 are given by ŷ(j, l) = 0.5 for each (j, l) ∈ A. Moreover, define the node variables as x̂(v) := ŷ(δ⁻(v)). Thus, in our case, it holds that x̂(a) = x̂(b) = x̂(c) = x̂(e) = 0.5, and x̂(d) = 1. The proof from [2] claims that x(v) = x̂(v), v ∈ V, follows from this definition of the variables. However, this claim is not true because of 0.5 = x(d) ≠ x̂(d) = 1, and therefore, no solution can be constructed from the solution x to the RNCut model. In summary, and somewhat broadly speaking, the weaker LP-relaxation can be explained as follows. The RNCut formulation can be interpreted as a multi-commodity flow problem in an enlarged graph. However, enlarging the graph opens new possibilities for what is sometimes called rejoining of flows [31]: Flows for different commodities enter a node on different arcs, but leave on the same arc. Such a rejoining can lead to an increased integrality gap. | 2022-05-26T15:01:23.649Z | 2022-05-23T00:00:00.000 | {
"year": 2022,
"sha1": "e8225851e40fbdbb3b18f7f68d3cda9a954192a8",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/net.22101",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "cc9f4efd3d4bace14c2a692a5e696aad786617c2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
247403565 | pes2o/s2orc | v3-fos-license | DEVELOPMENT AND CHARACTERIZATION OF PEGYLATED CAPECITABINE LIPOSOMAL FORMULATIONS WITH ANTICANCER ACTIVITY TOWARDS COLON CANCER
Objective: Capecitabine is widely used in colorectal cancer treatment and has a first-pass metabolism problem. Despite its promising anticancer potential, the use of capecitabine has been limited by its poor solubility in water. The purpose of this study was to develop colon-targeting capecitabine-loaded stealth liposomes, a promising technique to avoid first-pass metabolism and achieve the desired bioavailability profile, increased water solubility and sustained release. Methods: The thin film hydration method was used to prepare capecitabine stealth liposomes. The prepared liposomes were characterized for drug release kinetics and stability, cell viability studies were performed to determine the cytotoxic effect, and in vivo studies in mice bearing colon carcinoma were performed to evaluate the antitumor potential. Results: In vitro release of the liposomes was best fitted by the Higuchi matrix kinetic model, with n values from 0.868-0.964 indicating non-Fickian release diffusion. Stability data indicated that the liposomes were stable for at least 6 mo at 5±3 °C. The inhibiting activity against colon carcinoma cells was increased, and a significant improvement in AUC, MRT and t1/2 was observed, with an AUC of 29.65±5.08 µg h/ml for the stealth liposomes compared with pure capecitabine and the conventional liposomes. Conclusion: The results suggest that capecitabine-loaded stealth liposomes can be an effective delivery system for targeting colon cancer.
INTRODUCTION
Colon cancer is considered a leading cause of cancer death in the world. It is a very lethal malignant tumor with an increased incidence rate at 40-50 y of age, associated with high morbidity and mortality worldwide [1,2]. Colon cancer arises from the epithelial cell lining of the colon or rectum in the gastrointestinal tract (GIT), most often as a result of a mutation in the Wnt signaling pathway that falsely increases signaling activity [3,4]. Chemotherapy, radiotherapy, and surgery are the clinical therapeutic strategies for colon cancer. Chemotherapeutic approaches often suffer from multidrug resistance, poor bioavailability, and high systemic toxicity, which may result in poor efficacy and significant adverse effects [5,6]. To overcome these problems, different approaches have been attempted to provide "selective" delivery to the affected area. Targeting the drug only to those tissues, cells or organs affected by the disease would be a better solution. Presently, considering the response of cancer chemotherapy to drug delivery, the use of nanoscale systems (liposomes, micelles and nanoparticles) is growing steadily [7].
Nanoscale delivery has enormous applications, such as increased drug uptake by cancer-affected cells, controlled drug release, and the capacity to boost drug stability and solubility [8,9]. The desirability of liposomes lies in their composition, which makes them biodegradable and biocompatible [10,11]. Liposomes have been considered widely as drug delivery systems of potential importance ever since the observations of Bangham and coworkers were published. Liposomes are biocompatible, biodegradable, non-immunogenic and nontoxic [12,13]. Liposomes made up of phospholipids are weakly immunogenic and biologically inert, with low intrinsic toxicity. Drugs of different lipophilicities can be encapsulated in liposomes: strongly lipophilic drugs can be entrapped almost completely inside the lipid bilayer, while strongly hydrophilic drugs are located specifically in the aqueous compartment [14,15]. Liposomes, which are composed of a lipid bilayer, are used as drug delivery vehicles and carry both non-polar and polar groups on the same molecule [16]. On in vivo administration, conventional liposomes are rapidly cleared from the blood circulation by macrophages and monocytes. Unlike conventional liposomes, PEGylated liposomes avoid rapid hepatosplenic uptake [17]. Stealth liposomes delay opsonization because of their biocompatible PEG surface coating and hence have comparatively longer blood circulation times, giving the possibility of targeting intracellular pathogens and infected macrophages outside the spleen and liver. After long-term blood circulation, PEGylated liposomes extravasate into diseased tissues and thus act as site-specific drug delivery systems [18,19].
Poly(ethylene glycol)s are extensively used in the derivatization of therapeutic peptides and proteins, increasing drug stability and solubility, prolonging half-life, lowering toxicity, and decreasing immunogenicity and clearance. The presence of PEG on the liposomal surface prevents vesicle aggregation and helps to improve formulation stability [20,21].
The drug capecitabine was approved by the Food and Drug Administration (FDA) for colorectal cancer treatment in 2005. It is a pro-drug that is enzymatically converted into 5-fluorouracil in tumor cells. 5-Fluorouracil inhibits deoxyribonucleic acid (DNA) synthesis and gradually slows tumor cell growth. Capecitabine has a half-life of 38-45 min, requiring frequent dose administration, and causes numerous adverse effects, such as angina, hand-foot syndrome, myocardial infarction, diarrhea, stomatitis, nausea, anemia, thrombocytopenia, and hyperbilirubinemia, when used in the conventional dosage form. These problems can be overcome by delivering capecitabine in stealth liposomes, which can deliver the drug in a controlled manner with a much-reduced dosing schedule to increase the therapeutic efficiency [22].
Cell lines
HCT116 and HT-29 cell lines were obtained from NCCS (Ganeshkhind, Pune). RPMI supplemented with 10% FBS, 1% penicillin and 0.16% kanamycin was used to culture the cells, which were grown in a humidified CO2 incubator at 37 °C.
Animals
Male mice (20±3 g) were purchased from Venkateswara Enterprises, Bangalore. All animal experiments were performed at the Animal Experimental Center of Aditya BIPER. Protocol approval for the animal studies was obtained via proforma B submitted to the IAEC of Aditya Bangalore Institute of Pharmacy Education and Research, Bangalore (approval no. 1611/PO/Re/S/12/CPCSEA). The animals were fed a standard diet with access to water and food ad libitum for a week, and were kept in a laboratory environment at 25±2 °C before the experiment was started.
Preparation of liposomes
The thin film hydration method was used for the preparation of PEGylated capecitabine liposomes using varied combinations of phospholipids. Weighed quantities of drug, phospholipids and cholesterol were dissolved in a mixture of anhydrous ethyl acetate and ethanol (2:1) in a sterile round-bottom flask, which was attached to a rotary evaporator and subjected to evaporation to obtain a thin, dry lipid film. The lipid film was thoroughly dried and then hydrated with phosphate buffer saline, pH 7.4, above the transition temperature, and subjected to sonication. The non-entrapped drug was removed by centrifugation; this step is called liposome purification. After centrifugation, the liposomal dispersion was filled into glass vials and covered with special stoppers for lyophilization [23,24].
In vitro release studies
The in vitro release of capecitabine from the PEGylated liposomes was determined by the dialysis method. The liposomal dispersion was placed in a dialysis tube (donor compartment); the tube was then immersed in a beaker containing release medium, i.e. phosphate buffer saline pH 7.4, and mixed with a magnetic stirrer at a speed of 100 rpm to maintain sink conditions. Samples (1 ml) were taken from the release medium at fixed time intervals at the 1st, 2nd, 4th, 6th, 12th, 24th, 28th, 30th and 36th hours, with each withdrawal replaced by an equal volume of fresh dissolution medium. Drug concentrations in the dissolution medium were determined by UV spectrophotometry [25,26].
Drug release kinetic study
The mechanism of drug release from the dosage forms was analyzed by fitting the data obtained for the formulations to different kinetic equations: zero-order, first-order, Higuchi, and Korsmeyer-Peppas models. The best-fitting model was selected based on the maximum correlation coefficient value [27].
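A sketch of how such fitting is commonly carried out is given below (assuming Python with numpy/scipy; the cumulative-release values are invented placeholders, not data from this study):

    import numpy as np
    from scipy.stats import linregress

    # Hypothetical cumulative %-release data at the sampling times used here
    t = np.array([1, 2, 4, 6, 12, 24, 28, 30, 36], dtype=float)    # hours
    Q = np.array([8, 12, 18, 23, 34, 52, 57, 60, 66], dtype=float) # % released

    higuchi = linregress(np.sqrt(t), Q)            # Higuchi: Q = kH * sqrt(t)
    peppas = linregress(np.log10(t), np.log10(Q))  # log Q = log k + n log t

    print(f"Higuchi r2 = {higuchi.rvalue**2:.3f}")
    print(f"Korsmeyer-Peppas n = {peppas.slope:.3f}, r2 = {peppas.rvalue**2:.3f}")
    # n > 0.5 indicates non-Fickian diffusion and n > 0.89 super case II
    # transport, the classifications applied in the results below.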
Stability studies
Stability studies were performed for the optimized formulations according to ICH guidelines. The formulations were divided into sets of 2 samples each and stored at 5±3 °C and at 25±2 °C/60±5% RH in amber-colored, sealed glass vials for 6 mo. The liposomal suspensions were observed visually for appearance, ease of redispersion, and sedimentation. The samples were evaluated for particle size, drug release, and drug entrapment at specified time intervals, viz. 0, 1, 3 and 6 mo, in triplicate [28,29].
Cell viability studies
The in vitro antitumor activity of the capecitabine-loaded liposomes and the pure drug was determined by MTT assay. The MTT assay was used to evaluate cellular viability and to determine the cytotoxic effects of free and liposomally entrapped capecitabine on the human colorectal carcinoma cells HCT116 and HT-29. Cell viability was evaluated by estimating the quantity of colored formazan crystals formed during the assay. 1.6 × 10^3 cells/100 μl were transferred aseptically into each well of a 96-well plate in triplicate and incubated at 37 °C. Cells were treated with varying amounts of capecitabine and capecitabine stealth liposomes and incubated for 24 h at 37 °C in a CO2 incubator. After incubation, 20 μl of MTT (5 mg/ml dissolved in PBS) was added to each well, and the plates were incubated for a further 3 h. The supernatant was then removed from the wells and 200 μl of dimethyl sulfoxide was added to dissolve the formazan crystals. The 96-well plates were then shaken slowly and the absorbances of the samples measured using an ELISA microplate reader at 295 nm. The cell viability percentage was calculated as cell viability (%) = (absorbance of treated cells/absorbance of control cells) × 100 [30,31].
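The viability calculation above reduces to a ratio of background-corrected absorbances; a minimal sketch (the absorbance values are placeholders):

    def percent_viability(a_treated, a_control, a_blank=0.0):
        """Cell viability (%) = (A_treated - A_blank)/(A_control - A_blank)*100."""
        return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

    # e.g., mean triplicate absorbances from the microplate reader (placeholders):
    print(f"{percent_viability(0.42, 0.80, a_blank=0.05):.1f} %")  # 49.3 %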
In vivo anti-tumor efficacy
Pharmacokinetic studies were performed using male albino mice (20-25 g). The animals were arranged randomly into four groups of six animals each. 2.5 × 10^4 HT-29 cells were suspended in phosphate buffer solution and injected subcutaneously into the right flank of the mice, and the tumor was allowed to grow. After 7-10 d of tumor implantation, free capecitabine, conventional liposomes, and stealth liposomes were administered to the tumor-bearing mice through the tail vein at 10 mg/kg body weight. Group I was given normal saline buffer solution via the tail vein; groups II, III and IV were administered a 10 mg/kg dose of the pure drug solution in saline buffer, the conventional liposomes, and the stealth liposomes, respectively. After 10 d of tumor (HT-29) implantation, when the tumor had developed and grown to a specific volume, blood samples were drawn at 1 h, 6 h, 12 h, 24 h and 48 h from the retro-orbital plexus, and the amount of capecitabine in each blood sample was measured by HPLC analysis. The albino mice were sacrificed by euthanasia (ketamine 90 mg/kg, IP route, and xylazine 10 mg/kg, IP route) and the colon region with the tumor was removed, washed with normal saline solution, homogenized, and analyzed by HPLC to estimate capecitabine. Distribution profiles of capecitabine in the different organs, including plasma, were analyzed by HPLC with a C18G stationary phase (250 × 4.6 mm, 5 μm) and a mobile phase of acetonitrile:methanol (55:45) at a flow rate of 1.0 ml/min, an injection volume of 20 μL, and a detection wavelength of 295 nm. Capecitabine was quantified using a standard curve; the solution was injected into the unit and the chromatogram recorded. For the in vivo antitumor activity, the anticancer activity of capecitabine was assessed from its cytotoxic effect on the tumor by measuring tumor dimensions in a suitable animal model, using tumor volume and tumor weight as parameters [32].
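Parameters such as AUC, MRT and t1/2 reported in the results are conventionally obtained by non-compartmental analysis of such concentration-time points; a hedged numpy sketch (the concentrations are placeholders, not the measured HPLC values):

    import numpy as np

    t = np.array([1, 6, 12, 24, 48], dtype=float)           # h, sampling times above
    c = np.array([6.0, 3.1, 1.8, 0.9, 0.25], dtype=float)   # ug/ml (placeholders)

    dt = np.diff(t)
    auc = np.sum(0.5 * (c[1:] + c[:-1]) * dt)               # linear trapezoidal AUC
    aumc = np.sum(0.5 * (c[1:] * t[1:] + c[:-1] * t[:-1]) * dt)
    mrt = aumc / auc                                        # mean residence time
    lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]       # terminal slope
    t_half = np.log(2) / lam_z                              # terminal half-life
    print(f"AUC = {auc:.2f} ug*h/ml, MRT = {mrt:.1f} h, t1/2 = {t_half:.1f} h")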
Tissue distribution study
The aim was to estimate the distribution pattern of capecitabine in the biological organs, which indicates either localization of the drug at the required tumor site via prolonged circulation, or drug uptake by the RES-rich organs, such as the liver and spleen, which prevents the desired localization. Hence, the distribution profiles of capecitabine from both conventional and stealth liposomes were examined using the tumor-bearing animal model. In a manner similar to the pharmacokinetics section, after a 10 mg/kg dose of the pure drug solution in buffer saline, conventional liposomes, or stealth liposomes following tumor implantation, and once the solid tumor had grown to a specific volume, mice were sacrificed and the major organs (liver, spleen, kidneys and lungs) and the tumors were removed, washed with normal saline solution, and subjected to centrifugation at a speed of 25000 rpm for a duration of 10 min. The aliquots were then analyzed by HPLC to estimate the capecitabine content in the various organs over time, using a standard curve of capecitabine [33,34].
Effect on solid tumor volume
The colon carcinoma cell line HT-29 was diluted in phosphate buffer solution and injected subcutaneously into the right flank of the mice, and tumors were allowed to develop. After 10 d of tumor implantation, free capecitabine, conventional liposomes, and stealth liposomes were injected into the tumor-bearing mice through the tail vein at a dose of 10 mg/kg. The tumor size and the weight of each individual mouse were monitored from then on. The anticancer effect of the capecitabine-loaded formulations was evaluated on the basis of the changes observed in tumor volume and weight at the chosen time intervals, i.e. after the tumor acquired a particular size following implantation of the HT-29 cell line (at the 10th day) and administration of the samples. On the selected days, mice were sacrificed and tumors harvested for determination of tumor volume; the two bisecting diameters of each tumor were measured with a slide caliper and calculations were performed using the formula below.
V = 0.5 × a × b², where a = largest diameter of the tumor (mm) and b = smallest diameter of the tumor (mm) [35,36].
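As a worked example of this formula (illustrative diameters, not measured values):

    def tumor_volume_mm3(a_mm, b_mm):
        """V = 0.5 * a * b**2; a = largest, b = smallest diameter (mm)."""
        return 0.5 * a_mm * b_mm ** 2

    v = tumor_volume_mm3(16, 12)            # a 16 mm x 12 mm tumor
    print(v, "mm^3 =", v / 1000.0, "cm^3")  # 1152 mm^3, i.e. ~1.15 cm^3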
Effect on the solid tumor weight
At the end of the study, the tumor weights after treatment with the different forms of capecitabine (pure capecitabine, the optimized conventional liposomes, and the stealth liposome formulation) were compared as an index of capecitabine anticancer activity [37].
In vitro drug release studies
The in vitro dissolution study was performed using the dialysis method. The release profiles of all formulations are presented in table 1 and shown in fig. 1. The maximum percentage of capecitabine release was observed for formulations F3 CAP and F7 CAP. As expected for liposomes, fast drug release behavior was observed owing to enhanced dissolution and to the small size of the lipid vesicles [25].
Release kinetic studies
To study the mechanism of drug release, the data obtained from the in vitro drug release studies were fitted to kinetic models. The correlation coefficient (r²) was used as the criterion for best fit; regression values for the formulations were between r² = 0.744 and 0.991, and all formulations F1 CAP to F8 CAP were best fitted by the Higuchi matrix kinetic model, with n values from 0.868-0.964 indicating non-Fickian release diffusion. The n value was higher than 0.5 for the stealth liposomes containing capecitabine. The kinetic data of all formulations are shown in table 2, and the kinetic plots of the respective batches are shown in fig. 2. The drug release pattern obeyed the Higuchi diffusion model, which gave the highest correlation coefficient (r² = 0.991) among all models. The drug release profiles of the capecitabine stealth liposomes therefore follow a diffusion mechanism, and release was sustained over the time frame studied. The Korsmeyer-Peppas power-law equation indicates the type of diffusion through the n value; since n was higher than 0.89, drug release from the system follows super case II transport [38].
Stability studies
Stability studies of the optimized formulation, F7 CAP PEGylated liposomes, at 25±2 °C/60±5% RH showed no significant changes in the drug release profile. Alteration in the drug release profile of the optimized formulation stored at 5±3 °C was negligible, and the entrapment efficiency at 5±3 °C did not change significantly [29]. The formulation was stable at 5±3 °C, with no significant changes observed in drug entrapment efficiency or liposome size, as presented in table 3. No significant changes in physical appearance, particle size, or size distribution were observed for the formulations during the stability studies at 5±3 °C. However, when the liposomal formulation was subjected to 25±2 °C and 60±5% RH, there was a loss of liposomal structure and entrapment efficiency.
In vitro anticancer activity
The biological efficacy of capecitabine entrapped in the PEGylated formulation was tested on the human colorectal carcinoma cells HCT116 and HT-29 using the MTT assay. A significant improvement in anticancer activity, with respect to the free drug, was obtained with the PEGylated capecitabine-loaded liposomes [37]. The inhibiting activity against HCT116 and HT-29 cells was increased for the PEGylated stealth liposomes compared with pure capecitabine, as represented in table 4. The improvement in the anticancer efficiency of capecitabine on colorectal carcinoma cells provided by the PEGylated formulation suggests its protective and long-circulation properties. At the same concentration, the modified PEG-liposomal group showed very strong inhibition of HCT116 and HT-29 cells. The results indicated that the prolonged circulation of the delivery system can be useful in producing the strongest cytotoxicity against HCT116 and HT-29 cells, and that PEG-mediated endocytosis promotes cellular uptake, which can enhance the cytotoxic effect of the modified PEGylated liposomes [39].
Pharmacokinetic study
To assess the pharmacokinetics, capecitabine-loaded optimized conventional liposomes and stealth liposomes were administered i.v. at a dose of 10 mg/kg to mice bearing HT-29 tumors. The plasma profiles of free capecitabine, conventional liposomes, and stealth liposomes are shown in fig. 3, and the pharmacokinetic parameters are given in table 5. A statistically significant improvement in total AUC was observed for the stealth liposomes, at 29.65±5.08 µg h/ml. Considering the pharmacokinetic profile after i.v. injection in the animal model, the AUC, MRT and t1/2 of the stealth liposomes were much greater than those of pure capecitabine and conventional liposomes. This showed the improved residence time and sustained release of drug from the stealth liposome formulation, as a result of the decreased clearance of capecitabine-loaded stealth liposomes. Rapid removal of conventional liposomes by the RES represents one of the major drawbacks in drug delivery; this problem was addressed by using long-circulating liposomes. Grafting conventional liposomes with a biocompatible and inert polymer such as PEG led to the formation of a protective, hydrophilic layer on the liposome surface. The increased t1/2 and MRT of the stealth liposomes compared with conventional liposomes showed that the prolonged circulation half-life of stealth liposomes reduced the chance of rapid uptake by elements of the mononuclear phagocytic system (MPS), as incorporating PEG residues on the vesicles makes the liposome formulations more hydrophilic and physiologically more stable.
The relative percent bioavailability of capecitabine was found to be 100%, 72.1±0.2% and 86.4±3.5% for pure capecitabine, conventional liposomes and stealth liposomes, respectively. Compared with the pure capecitabine solution, the bioavailability of both liposomal formulations was decreased, possibly because the conventional liposomes are rapidly cleared from the systemic circulation; the stealth liposomes showed somewhat higher relative percentage bioavailability than the conventional liposomes (F7 CAP) owing to their longer time in the systemic circulation. The stealth liposomes altered the pharmacokinetic profile of capecitabine, and the serum levels of capecitabine were significantly higher for the stealth liposomes in comparison to free capecitabine [30].
Tissue distribution study
Tissue distribution of the pure drug, conventional liposomes and stealth liposomes was examined by inoculating the HT-29 cell line into mice. The biodistribution of capecitabine was evaluated following i.v. administration of 10 mg/kg capecitabine injection, conventional liposomes, and stealth liposomes in the mouse model, as shown in table 6. The capecitabine AUC0-t and Cmax (µg/ml) of the stealth liposomes were lower in the spleen and liver, and higher in the plasma and tumor tissue, when the formulations were compared. The results showed that stealth liposomes decrease capecitabine uptake in the RES-containing organs (liver and spleen) when compared with conventional liposomes. The longer circulation time and slower release of capecitabine from the stealth-liposomal formulation offered a fair chance for capecitabine to reach the tumor through the enhanced permeability and retention (EPR) effect, and to maintain the desired effective therapeutic dose level for a long period through depot effects. The distribution pattern of the stealth liposomes to the spleen differed markedly from that of the conventional capecitabine liposomes: spleen uptake was dynamically changed because of the steric stabilization from the grafted PEG, which avoided spleen uptake. In the case of free capecitabine, it was interesting to note its rapid appearance in the kidney after 1 h.
The above phenomenon may be due to the metabolism of capecitabine and its rapid elimination through the urine; entrapment of the drug in the vesicles, however, protected it against metabolism, with only a small appearance in the kidney. Grafting PEG onto the stealth-liposome formulations was most promising for avoiding uptake of capecitabine in the RES-rich organs and enhanced the circulation time and half-life of capecitabine; the small vesicular size and steric stabilization promoted enhanced permeability and retention (EPR) by favorably promoting extravasation of the stealth liposomes into the tumor interstitial space, maximizing localization of the drug in the tumor cells. This kind of accumulation of long-circulating liposomes carrying encapsulated drugs via the EPR effect represents a mechanism of passive targeting, increasing drug delivery and the therapeutic potential of the drug. The biodistribution studies showed that the higher uptake per gram of tissue of pure capecitabine and conventional liposomes was in the spleen and kidneys, followed by the liver. The high uptake in the spleen and liver is due to the fact that these organs are part of the mononuclear phagocyte system (MPS), which is responsible for filtering foreign particles from the blood circulation [32].
Effect on tumor volume
Mice bearing HT-29 tumors were parenterally given free capecitabine, conventional liposomes, or capecitabine-loaded stealth liposomes for cancer therapy at a 10 mg/kg dose; control mice were given saline solution. The pure form of capecitabine was not very effective in preventing tumor growth in comparison with the conventional liposomal treatment; the conventional liposomes displayed stronger tumor inhibition, with a tumor volume of 2.7±0.21 cm³, versus 3.2±0.23 cm³ for pure capecitabine-treated tumors, as presented in table 7. When the tumor was treated with the stealth liposomes, they provided cellular advantages in terms of tumor-site accumulation of capecitabine because of the PEG coating. Distribution of the stealth liposomes to the tumor cells induced interaction with the tumor cell membranes and consequently promoted effective drug delivery, reducing the tumor volume to 1.1±0.12 cm³ after 30 d of the study, notably lower than with the conventional liposomes and the free drug [37,39].
Effect on tumor weight
As shown in table 8, the influence of the formulations on tumor weight indicated that the tumor weight in the stealth liposome group (1.4±0.21 gm) was about 5 times lower than in the control group (7.41±1.22 gm); hence, tumor growth was retarded over the 30 d of the study. Similarly, the tumor weights after treatment with pure capecitabine and the optimized conventional liposomal formulation were 6.74±1.35 gm and 4.38±0.85 gm, respectively. Additionally, the capecitabine concentration from the stealth liposomes in the tumor was notably higher than that from the conventional liposomes, mostly owing to the targeting nature of the stealth liposomes, which caused much greater accumulation of the carrier inside the tumor and subsequently increased drug delivery [37,39].
CONCLUSION
The results demonstrated that the modified liposomes possessed a notably prolonged circulation time, with higher drug concentrations in the plasma than free capecitabine and conventional capecitabine liposomes. The PEGylated liposomes showed higher uptake by the tumor, but lower toxicity in organs such as the liver, kidneys, and spleen, in mice with HT-29 colon carcinoma. Capecitabine stealth liposomes showed prolonged circulation of the drug in plasma, increased tumor targeting, and improved therapeutic efficiency.
ACKNOWLEDGEMENT
I wish to extend my thanks to Mr. Rahil M Patait for the generous gift of capecitabine (pure drug). I would like to heartily thank my guide, Dr. B A Vishwanath, for his continuous, enormous support and encouragement.
AUTHORS CONTRIBUTIONS
This work was carried out jointly among all authors. Author MP carried out the experiments and analyzed the data. Author BV supervised the experimental design and laboratory analysis and was a major contributor in writing the manuscript. All authors read and approved the final manuscript. | 2022-03-12T16:12:29.108Z | 2022-03-07T00:00:00.000 | {
"year": 2022,
"sha1": "d29dd1fe9ee3390834fabe2c4e8f5bef21d744d3",
"oa_license": "CCBY",
"oa_url": "https://innovareacademics.in/journals/index.php/ijap/article/download/43658/26126",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "08364bbb7f331d72663292d33aad852872201ad2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
19189803 | pes2o/s2orc | v3-fos-license | THz Time-Domain Spectroscopy of Mixed CO2-CH3OH Interstellar Ice Analogs
The icy mantles of interstellar dust grains are the birthplaces of the primordial prebiotic molecular inventory that may eventually seed nascent solar systems and the planets and planetesimals that form therein. Here, we present a study of two of the most abundant species in these ices after water: carbon dioxide (CO2) and methanol (CH3OH), using TeraHertz (THz) time-domain spectroscopy and mid-infrared spectroscopy. We study pure and mixed ices of these species, and demonstrate the power of the THz region of the spectrum to elucidate the long-range structure (i.e. crystalline versus amorphous) of the ice, the degree of segregation of these species within the ice, and the thermal history of the species within the ice. Finally, we comment on the utility of the THz transitions arising from these ices for use in astronomical observations of interstellar ices.
Introduction
Cometary bombardment and meteoritic impacts have long been known to deliver substantial quantities of water and organic molecules to Earth, which may well have been the primordial prebiotic seeds of life. 1 This raises the question: What is the ultimate origin of this material? While some chemical evolution can certainly occur in situ in these icy bodies, a substantial portion of the molecular material is inherited directly from the parent molecular cloud. 2 Thus, a thorough understanding of the primordial origins of our prebiotic molecular reservoir necessitates an examination of the genesis of this material in star- and planet-forming interstellar clouds. 3 Generally, simple, unsaturated molecules, as well as a number of long-chain hydrocarbons and fullerene species, can efficiently form via gas-phase ion-molecule reactions, 4 whereas more complex, saturated species are generally attributed to chemistry in and on icy grain mantles. [5][6][7][8][9][10] For example, the presence and abundance of methyl formate, one of the most prevalent interstellar complex organic molecules, has been argued to be explainable only through formation via radical-radical recombination reactions in these icy bodies. 11 Indeed, a recent laboratory study has shown that three abundant complex molecules (methyl formate, glycolaldehyde, and ethylene glycol) are efficiently formed in the solid phase through recombination of free radicals formed via H-atom addition and abstraction reactions that occur during the hydrogenation of CO ice at 15 K under dense molecular cloud conditions. 12 Despite their origins in molecular ices, the most complex molecule yet detected in the condensed phase of the interstellar medium (ISM) is CH3OH. 13 Indeed, only six species (H2O, CO, CO2, CH3OH, NH3, and CH4) have been securely identified observationally, although there is strong evidence for the additional presence of H2CO, OCN−, and OCS. 13 Thus, while characterizing these ices is critical for understanding the genesis of complex prebiotic material, we are currently limited in our ability to constrain models of chemical evolution in these condensed-phase environments where it occurs.
Much attention in the laboratory has been focused on the formation, destruction, and reaction of species within interstellar ice analogs, primarily using mid-infrared (mid-IR) spectroscopy. 14 These studies, while crucial, have difficulty unambiguously measuring a critical component of the equation: the physical structure of the ice, which can have profound effects on reactions within the bulk material. 5 Indeed, mid-IR spectroscopy is not the most powerful tool for examining this long-range structure, as, in general, the signals observed in the mid-IR only probe intramolecular modes which are characteristically perturbed by the surrounding ice structure. In the far-IR, or TeraHertz (THz, 0.1-10 THz, 30-3000 µm), region of the spectrum, however, it is the softest degrees of freedom of the ice (i.e. inter-molecular modes) that are probed. 15 These inter-molecular modes offer a unique probe of ice structure (i.e. crystalline vs. amorphous ice). [17][18][19][20][21] Recent observations of crystalline water ice have suggested this may be a powerful tool in studies of the evolution of planetary systems from the initial collapse phase through planet formation. 22 The extreme sensitivity of the THz region to these structural modes opens the door to the study of species less abundant than water that are just as critical to our understanding of both physical and chemical evolution within forming systems. The Far Infrared Field-Imaging Line Spectrometer (FIFI-LS) aboard the Stratospheric Observatory for Infrared Astronomy (SOFIA) offers bandwidth that is well-matched to these THz modes, covering 51-203 µm (1.5-5.9 THz) across two spectral bands.
The THz region of the spectrum has historically been challenging to access. Recent advances in generation and detection techniques for THz photons, however, have allowed us to construct a broadband, sensitive, and coherent spectrometer whose spectral resolution is ideally matched to the modes arising from the bulk motion of interstellar ice analogs. We have previously reported on THz time-domain spectroscopy (THz-TDS) of pure, mixed, and layered ices of simple species (CO2, H2O), 16 as well as more complex species (HCOOH, CH3COOH, CH3CHO, CH3OH, and (CH3)2CO). 17 Here, we present a comprehensive study of CO2-CH3OH mixtures in crystalline ices. We examine the role of segregation within the ices on the spectra at various mixing ratios, and discuss the possible impacts on the utility of these spectra for comparisons to observations.
Experimental Methods
The underlying principles of the experiment, as well as the technical details of the instrument, have been described in detail elsewhere; 16,17 a schematic is shown in Figure 1. Briefly, a 35 fs, pulsed Ti:Sapphire regenerative amplifier at 800 nm drives an optical parametric amplifier (OPA) producing 1745 nm radiation in the idler beam. A portion of this radiation is co-linearly doubled in a beta-barium borate (BBO) crystal, and the two pulses are focused in a dry N2 purge, sparking a two-color plasma which produces intense, broadband THz radiation. 23 The THz light is then focused through the sample, recombined with a portion of the original 800 nm pulse in a gallium phosphide (GaP) crystal, and detected via free-space electro-optic sampling. 24 In this arrangement, the spectrometer provides coverage from ∼0.3-7.0 THz. Data were collected for 30 ps, producing an experimental resolution of ∼0.03 THz (1 cm−1) when Fourier-transformed.
To prepare the ices, gas-phase samples of CH3OH and CO2 were first mixed in the desired ratios in a 1 L glass bulb attached to the dosing line. The pressures of each gas were monitored by a mass-independent pressure gauge. Gas-phase CH3OH was obtained by allowing a liquid sample of ≥99.9% CH3OH (Sigma-Aldrich), which had been subjected to several freeze-pump-thaw cycles, to volatilize. High-purity CO2 from Air Liquide was used without further purification. Once prepared and mixed, the samples were introduced into the chamber via an all-metal leak valve, typically at a rate of ∼3.5 mTorr s−1, to the desired total pressure (P_tot), where they were frozen onto a high-resistivity Si substrate held at T_dep = 80 K. In this high-vacuum system, our ices are typically of order 10^4 monolayers (ML) thick. After deposition, the samples were immediately cooled to a substrate temperature of T_sub = 10 K, and spectra collected at 10 K, 20 K, and 30 K, followed by annealing, typically for ∼5 minutes, to T_ann = 90 K, 120 K, and 140 K. After each annealing, the samples were cooled to 10 K and spectra collected before the next annealing. A detailed list of experiments is given in Table 1.
Results
We have previously reported on the temperature-dependent spectra of pure end-member crystalline methanol (hereafter c-CH3OH), 17 but have re-measured the spectra for this study under identical temperature and annealing conditions for consistency. Figure 2 shows that while c-CH3OH ice is characterized by a series of sharper bands between 2-6 THz, amorphous methanol (a-CH3OH) is largely characterized by a broad feature around 4.3 THz, and the beginning of a second, broad signal around 6 THz. Like the a-CH3OH shown in Figure 2, the a-CH3OH ice generated in this study via deposition at 80 K displays only a single, broad absorption across the 0.5-7 THz window. An instrumental artifact around ∼2.1 THz is also seen in some scans. The sharper, characteristic signals from c-CH3OH become evident after annealing at 140 K, when sufficient energy is available to enable crystallization. In a few cases, most prominently those where CH3OH strongly dominates the CO2, some c-CH3OH features are seen at 120 K. Notably, there does appear to be some weak signal from c-CH3OH after the 120 K annealing step in the most dilute (10:1) CO2:CH3OH mixture.
Our first study of pure crystalline carbon dioxide (c-CO2) was reported in Allodi et al., 16 but a lack of sensitivity and resolution in these initial experiments showed no clear features, despite a reported prior observation of a feature at ∼3.3 THz in the literature. 25 More recently, Giuliano et al. 26 reported an observation of the 3.3 THz feature, as well as an additional feature around 2.1 THz. However, they attribute these signals to amorphous carbon dioxide (a-CO2) rather than c-CO2. We have recently conducted a thorough investigation of a-CO2 and c-CO2 features in this frequency range. In brief, we find unambiguous evidence that these two features are due solely to c-CO2, and that a-CO2 shows no distinct features within the coverage of our spectrometer. For the purposes of this study, we have repeated the measurements of pure CO2 under the same experimental conditions used for the mixtures. For comparison, we also present a spectrum of a-CO2 from the forthcoming Ioppolo et al. publication (Fig. 2). We note that the sensitivity and resolution of the Caltech spectrometer have significantly improved in the two years since the publication of Allodi et al. 16 through a combination of instrumental upgrades. Spectra collected for this work are shown in Fig. 3. The reduction procedure has been described in detail previously. 17 Briefly, a fast Fourier transform of the time-domain data is performed after applying an asymmetric Hann window to the data. This converts the spectra to the frequency domain. Baselines were then removed from the spectra by fitting the line-free regions to either a static offset or a 1st-order linear fit, although the latter was rarely required.
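A hedged numpy sketch of this reduction step is given below; the falling half-Hann window is one plausible reading of the asymmetric window, and the line-free regions are placeholders:

    import numpy as np

    def to_spectrum(trace, dt_ps):
        """Windowed FFT of a THz time-domain trace; dt_ps is the step in ps,
        so the frequency axis comes out directly in THz (1/ps = THz)."""
        n = trace.size
        window = np.hanning(2 * n)[n:]   # falling half-Hann tapers the tail
        spec = np.abs(np.fft.rfft(trace * window))
        freq_thz = np.fft.rfftfreq(n, d=dt_ps)
        return freq_thz, spec

    def remove_baseline(freq, spec, line_free_regions):
        """Fit an offset/1st-order baseline to line-free regions and subtract."""
        mask = np.zeros(freq.size, dtype=bool)
        for lo, hi in line_free_regions:
            mask |= (freq >= lo) & (freq <= hi)
        coeffs = np.polyfit(freq[mask], spec[mask], 1)
        return spec - np.polyval(coeffs, freq)

    # A 30 ps record gives df = 1/(30 ps) ~ 0.033 THz (~1 cm^-1), as quoted above.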
CO2-Dominated Mixtures
Signals from the 2.1 THz and 3.5 THz c-CO2 modes are clearly seen in the 1:1, 3:1, and 10:1 mixtures until annealing at 140 K. In the case of pure c-CO2, these features disappear after annealing at 120 K, unlike in the mixed cases. Signal from c-CH3OH is clearly seen after annealing to 140 K in all three mixtures, at which point no c-CO2 is apparent, although the near-coincidence of the c-CO2 features with two c-CH3OH features makes this determination somewhat ambiguous. For the 3:1 and 10:1 mixtures, features from c-CH3OH, particularly at ∼5.2 THz and ∼2.6 THz, do begin to appear after annealing at 120 K. The same features are possibly present in the 1:1 mixture, but would be just above the noise floor (baseline noise) if real.
Figure 4 (top) shows a quantitative analysis of the linewidths of the observed c-CO2 transitions, as determined by a Gaussian fit to the features. Due to the relatively low signal-to-noise ratio (SNR) of the lines, fits to the 10 K, 30 K, and 60 K signals were averaged to determine the linewidth, with the standard deviation of this average given as the error bars. While some broadening of the transitions was observed with temperature, this broadening was not linear over the entire range of mixing ratios. Instead, the linewidth appears to depend heavily on mixing ratio. The results show a significant increase in linewidth between pure CO2 and mixed ices (Fig. 4). For the pure ice, the 2.1 THz transition is substantially narrower than the 3.5 THz transition. The mixed ices, conversely, have a largely uniform linewidth regardless of mixing ratio (within the uncertainties). They are also all significantly broader than either transition in pure CO2. The increase in linewidth between the pure and mixed 2.1 THz transitions, however, is markedly greater on average compared to the 3.5 THz transition: factors of 4.5 vs 2.0, respectively.

Fig. 2 (Top) Spectra of amorphous CO2 (a-CO2) deposited at 10 K (red) and crystalline CO2 (c-CO2) deposited at 80 K (black). Both spectra were acquired at 10 K, and have been vertically offset for clarity. (Bottom) Spectra of a-CH3OH deposited at 10 K (red) and c-CH3OH deposited at 140 K (black). Both spectra were taken at 10 K, and vertically offset for clarity.
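A sketch of the Gaussian linewidth determination described above, assuming Python with scipy; synthetic data stand in for the measured spectra:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(f, amp, f0, fwhm, offset):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return amp * np.exp(-0.5 * ((f - f0) / sigma) ** 2) + offset

    # Synthetic c-CO2-like feature near 2.1 THz (placeholder, not measured data)
    rng = np.random.default_rng(0)
    f = np.linspace(1.8, 2.4, 200)
    y = gaussian(f, 1.0, 2.1, 0.05, 0.0) + 0.02 * rng.standard_normal(f.size)

    popt, _ = curve_fit(gaussian, f, y, p0=[1.0, 2.1, 0.1, 0.0])
    print(f"center = {popt[1]:.3f} THz, FWHM = {popt[2]:.3f} THz")
    # Averaging such fits over the 10, 30, and 60 K scans and taking the
    # standard deviation reproduces the error bars described above.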
Figure 4 (bottom) shows a quantitative analysis of the linewidths of the observed CO2 transitions in the FTIR spectra shown in Figure 5 (middle), as determined by a Gaussian fit to the features. Because of the much higher SNR, only the 10 K scan with no annealing was used for the determination, and the errors are purely due to uncertainty in the Gaussian fit. As in the THz, the mid-IR linewidths show a mixing-ratio dependence. The magnitude of the change is about half that in the THz, and unlike the THz, seems to increase linearly for both transitions from pure CO2 to the 3:1 mixture. Finally, while the linewidths for the 1:1 mixture in the THz remain similar to those of the 10:1 and 3:1 mixtures, in the FTIR the 1:1 and 1:3 mixtures are significantly narrower.
CH3OH-Dominated Mixtures
The THz signatures of c-CO2 appear to be strongly suppressed in both CH3OH-dominated mixtures. A very weak indication of the 2.1 THz c-CO2 feature may be visible in a handful of scans, but it is difficult to distinguish from both the weak artifact (introduced by the HDPE beam block) in this region, and the overlapping c-CH3OH mode which begins to appear after crystallization starts at 120 K. Of the mixtures studied, only pure CH3OH and the 3:1 CO2-dominated mixture show a clear separation of the two c-CH3OH transitions around 2.6 THz. The others show only a single blended peak in this region.
Discussion
After H2O and CO, CO2 is one of the most abundant ice species in the ISM, with abundances of nearly 20% that of H2O ice. 27,28 In dense molecular clouds, CO2 is formed in the solid phase primarily through the CO + OH reaction, which has been experimentally found to be 10 times more efficient than the CO + O channel. 29 Therefore, although CO is its parent molecule, CO2 tends to reside primarily in polar ices (i.e. H2O-rich rather than CO-rich), where OH radicals are more abundant and available for reaction with CO. Thus, mid-IR and THz spectroscopic studies of CO2 in polar environments are important to our understanding of the origin and evolution of interstellar CO2 ices. 16,30 Outside of mixtures with H2O, CH3OH is the most abundant polar ice constituent at ∼6-9% of the abundance of H2O, or ∼20-50% of the abundance of CO2. 13 CO and CO2 have been shown to be the products of UV and cosmic-ray irradiation of CH3OH-containing ices. 31 Therefore, in later stages of star formation, when ices are extensively exposed to heating, UV photons, and cosmic rays, CO2 is thermally processed and mixed with CH3OH. 32 As the segregation of CO2 into ordered crystalline micro-domains is thought to be a powerful probe of thermal processing in astrophysical environments, 30,33 it is interesting to explore that segregation in the laboratory, and its observational implications.
Structure and Segregation
The segregation of CO2 ice upon deposition at warm temperatures has been previously reported in mixed CO2-H2O 34 and CO2-H2O-CH3OH 28 ices in studies with mid-IR spectroscopy. One of the many advantages of THz-TDS in studying ices, however, is that the features arising from these species in the THz regime result from the collective motion of many molecules (inter-molecular modes) and thus serve as a direct probe of the structure of the ice. 16,17 This is in contrast to mid-IR spectroscopy, where the ice structure must be indirectly inferred from changes in the lineshape of intramolecular modes. This utility is immediately obvious from the observations of the linewidths of the c-CO2 features within the various mixtures. While the features from pure c-CO2 are relatively sharp, any amount of contamination from CH3OH significantly broadens the transitions. This demonstrates how the THz transitions of CO2 are useful as a probe of local structure. As these are solid-phase materials that do not have structural rearrangement happening on a timescale faster than our measurement, the broadening of the observed spectral features results from the different ice environments experienced by the CO2 molecules, and can be correctly characterized as inhomogeneously broadened. Even a 10% CH3OH contamination is apparently sufficient to create a variety of local environments within the ice, and inhomogeneously broaden these transitions.
Since the spectral features at THz frequencies are intermolecular in nature, the amount of inhomogeneous broadening offers a direct measure of the number of unique structural environments present in the ice. While the inhomogeneous broadening of intra-molecular modes provides insight into the number of different local environments of individual molecules, inhomogeneous broadening of inter-molecular modes can only happen when there are many different structural environments, leading to differences in the frequency of the collective motion of many molecules.
Interestingly, there is a lack of a clearly increasing trend in linewidth in the THz observations, unlike those in the FTIR. If we assume that the linewidth from pure c-CO2 represents the homogeneous value, this suggests that the CO2-CH3OH interaction is somewhat uniform across mixing ratios. It is possible that the data show a trend of increasing linewidth with increasing CO2 concentration within the ice, but this cannot be claimed definitively given the uncertainties in the measurements. Follow-up studies will examine this effect, as well as providing an in-depth examination of the degree of segregation and the size of the c-CO2 domains under various conditions. Finally, the larger degree of broadening observed in the 2.1 THz transition, relative to the 3.5 THz transition, may provide insight into the nature of these motions in the bulk ice.
Hodyss et al.34 observed that features of c-CO2 in their CO2-H2O mixtures initially appear at 60 K, are distinct by 70 K, gradually lessen as the ice is heated further to 80 K, and disappear at 100 K as the CO2 sublimates. In initially-amorphous CO2-CH3OH ices studied by Ehrenfreund et al.28, CO2 persists in the mixture at temperatures as high as 125 K, and additional spectroscopic features emerge. These features are likely due to interactions between the CO2, which has crystallized to a degree at lower temperatures, and the CH3OH, which does not begin to crystallize until ∼120 K. This is supported in our data, especially in the 3:1 mixture after annealing to 120 K, where strong c-CO2 features remain visible while the c-CH3OH signal begins to emerge. Indeed, although the THz signals of c-CO2 are obscured by those of c-CH3OH, c-CO2 is still present in the ice even after annealing to 140 K, as indicated by FTIR spectra recorded contemporaneously (Fig. 6).
Interestingly, three sharp features are observed in the THz spectra of the 3:1 mixture around 2.2-2.8 THz, after the mixture has been annealed to 140 K. While these features might be present as broad shoulders in the pure CH3OH ice after the same heating process, they are not nearly as distinct. Perhaps, as was the case with the CO2 stretch at 2340 cm−1 observed by Ehrenfreund et al.28, this is an indication of a coupling between the CO2 and CH3OH motions.
In the CH3OH-dominated mixtures, no clear signal from c-CO2 is observed in any of the THz spectra, and while the bright ν3 band is always visible (and saturated) in the FTIR, the combination modes are heavily suppressed in the CH3OH-dominated mixtures (Fig. 5 middle and bottom). While the FTIR spectra can distinguish between crystalline and amorphous CO2 under most circumstances (Fig. 5 top), the THz spectra offer a more powerful probe of segregation within the ice. The presence of c-CO2 features would unambiguously indicate the presence of significantly-sized micro-domains of ordered CO2. The lack of any such features indicates that little to no aggregation of CO2 domains from within the CH3OH, beyond the initial segregation at deposition, has occurred. This agrees with the conclusion of Ehrenfreund et al.28 that the CH3OH-dominated ices are more thermally-stable, and less subject to reorganization with heating.
Finally, the degree of inhomogeneous broadening of the c-CO2, relative to a pure, bulk ice, is an indicator of the variety of size scales of the crystalline micro-domains within the bulk ice. As the segregated c-CO2 approaches a uniform size, and nears the formation of a single bulk crystal, this broadening should reduce to the narrower profiles of the pure c-CO2. A follow-up study could potentially monitor the change in broadening as a function of time while the ice is gently heated and CO2 segregation occurs, offering a direct probe of migration and crystallization timescales and dynamics within the ice. Further, as the ratio of the broadening of the 3.5 THz transition to that of the 2.1 THz transition does not appear to be unity, this ratio could be used in astronomical observations to probe the level of segregation, even without a baseline, purely c-CO2 signature within the source to set the intrinsic width.
Mixture-Dependent Frequencies
As discussed above, the change in lineshape with mixing ratio in the THz spectra is the most obvious indicator of the effect of the bulk environment on the spectra. In the FTIR spectra, however, this effect is more readily apparent in the observed frequencies of the transitions. This is most clear in the case of CO2. For example, the ν3 band of CO2 shows a distinct blue-shift as the mixing ratio tends toward pure CO2 (Figure 5 bottom).
Fig. 6 FTIR spectra of pure c-CO2 at 10 K (bottom), the 3:1 mixture at 10 K after annealing to 140 K (middle), and pure CH3OH at 10 K after annealing to 140 K (top).
While the 12CO2 feature is too saturated, due to the thickness required for this experiment, for a quantitative analysis, the 13CO2 peak presents a large, 8.7 cm−1 shift from the 1:10 to pure c-CO2 ices. An analogous shift has been previously observed for clusters of CO2 in the gas phase. Here, the increasingly large clusters are essentially a shift toward the purely crystalline CO2 domain, and even larger blue shifts (∼21 cm−1) are observed as the cluster grows from the monomer to N = 13.37 We also note the possibility that signals could arise in the FTIR spectra from combination bands of the long-range, intermolecular modes in the THz region and the intra-molecular motions in the infrared. These modes would in theory be distinguishable from simple isotopically-shifted intra-molecular modes by their frequency shifts: such isotopic shifts would be to the red, while combination bands would lie primarily to the blue of the monomer features. Because our ices are by necessity so thick, and the resulting monomer features saturated, it is likely that the lowest-lying of the combination bands will be buried beneath the overly-wide monomer signal. Nevertheless, the relatively sharp transitions observed in the THz should have counterparts in the IR, although the frequencies will be shifted and the lineshapes altered due to the effects of vibrational excitation and different coupling to the bulk environment. Isotopic labeling, which would differentially affect the intra- vs inter-molecular modes and thus make combination bands distinct from monomer features, is a promising avenue to exploring this potential interaction. Such studies are beyond the scope of the current work, however.
Observational Implications
In terms of detectability, it seems likely that features of c-CO2 will be present in astronomical observations, assuming it is sufficiently segregated both from smaller contaminant species, such as the CH3OH studied here, and from the polar H2O ices where it is formed. This segregation has already been observed astronomically.30,33,38 It follows that the THz signals of c-CO2 would be excellent targets for interstellar observations, as they are unambiguous evidence of segregated c-CO2.
Fig. 7 FTIR of the 3:1 and 10:1 mixtures after annealing to 120 K showing the remaining CO2 in the ice. The transitions are saturated in our spectrometer, and the spectra have been vertically-offset for clarity. A pure c-CO2 spectrum at 10 K is provided for reference.
Given the extensive broadening of the THz modes when the c-CO2 is in a mixture, it is also possible that such a width could be used as an indicator of the degree of segregation within these interstellar ices. Further work will certainly be needed to determine whether this is truly a useful probe, especially at mixing ratios higher than those used in this initial study (i.e. > 10:1). Interestingly, the THz features of CH3OH are also distinct after annealing at 120 K, even in both mixtures (10:1 and 3:1) that clearly still contain substantial fractions of CO2 (see Fig. 7). Indeed, the c-CH3OH features are far stronger than the c-CO2, although broader. Their presence in an interstellar observation would therefore indicate the thermal history of the ices, while the peak positions of the features have already been shown to be dependent on the observed temperature of the ice.16,17
An important consideration is the role of dust grain size and composition underlying interstellar ices, and the impact these have on the ice structure and subsequently their spectra. Indeed, recent modeling work by Pauly and Garrod39 has shown that while the chemical makeup of the ices does not greatly vary with grain size, there can be significant stratification in which size-class of grains is the dominant ice carrier. For example, during cloud collapse, they find that small grains dominate as ice carriers, resulting in ice thicknesses of <40 ML, versus those of order ∼10^2 ML if the model includes only a single grain size. This raises the important question: to what degree can species segregate within these ices, and how will that affect the spectra they present?
Due to the sensitivity of the second-generation THz spectrometer presented here, we could only study thick laboratory ices (of order ∼10^4 ML). The current-generation spectrometer is now capable of studying ices of order ∼10^3 ML, and the next-generation instrument should push into the 10^1-10^2 ML regime with the ability to directly measure the optical constants of the ices under investigation. This is because the electro-optic sampling technique employed here directly measures both the amplitude and phase of the THz pulse. This presents an experimental approach which is far simpler than complementary techniques with an FTIR, which require an asymmetric configuration where the sample resides in an arm of the interferometer.40 Combined with molecular dynamics simulations, radiative transfer, and scattering models over a realistic range in dust particle size, these optical constant measurements will allow for far more accurate modeling of observational spectra. Such models will in turn provide even more precise information on not just the ice composition and structure, but that of the underlying grain substrate. With the critical role that the size of icy grains has on the growth of macroscopic grains and gaps in protoplanetary disks,41 this information will be essential for understanding physical evolution in star and planet formation.
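As a sketch of how the measured amplitude and phase translate into optical constants, the following outline (our illustration, not the authors' pipeline) applies the common single-pass, normal-incidence slab approximation, neglecting etalon echoes; the sign conventions depend on the FFT definition used.

```python
import numpy as np

C = 2.99792458e8  # speed of light (m/s)

def optical_constants(freq, E_ref, E_sam, d):
    """Extract n(f) and kappa(f) from complex THz-TDS spectra.
    freq  : frequency axis in Hz (f > 0)
    E_ref : FFT of the reference pulse (no ice)
    E_sam : FFT of the pulse transmitted through ice of thickness d (m)
    Single-pass, normal-incidence slab; etalon echoes neglected."""
    H = E_sam / E_ref                       # complex transfer function
    omega = 2.0 * np.pi * freq
    phi = np.unwrap(np.angle(H))            # extra phase added by the sample
    n = 1.0 + C * phi / (omega * d)         # real index from the phase delay
    fresnel = 4.0 * n / (n + 1.0) ** 2      # amplitude loss at the two faces
    kappa = -C / (omega * d) * np.log(np.abs(H) / fresnel)
    return n, kappa
```

Fed into radiative transfer and scattering models, such n and kappa spectra are exactly the inputs the modeling described above requires.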
Indeed, factors such as composition, physical structure and segregation, temperature, and thermal history all play crucial roles in the evolution of molecular complexity both in molecular clouds and in evolving planetary systems.5 Observations of ices in the THz regime offer the potential to shed light on these factors, and in the case of crystallinity and segregation, to do so unambiguously due to the nature of the THz modes. Molecular ices are the birthplace of the prebiotic complexity which will eventually be incorporated into nascent solar systems, and thus understanding the environments in which they form, and the mechanisms of formation, is critical to understanding the genesis of this primordial material.
Fig. 1 Schematic overview of the Caltech THz-TD spectrometer, and its application to the study of astrochemical ice analogs. The 800 nm output of the Legend oscillator is split, with a portion of the light passing through an optical parametric amplifier (OPA). This light is then doubled in a beta-barium borate (BBO) crystal and focused with an off-axis parabolic mirror (numbered optics) to spark a plasma. A high-density polyethylene (HDPE) beam block filters out the visible light, and after passing through the sample, the THz beam is recombined with the other 800 nm beam in an indium tin oxide (ITO) dichroic beamsplitter and focused onto a gallium phosphide (GaP) crystal for detection. The infrared spectrometer signal is detected by a mercury cadmium telluride (MCT) detector.
Fig. 3 THz-TDS spectra collected for this study. All ices were deposited at 80 K, and spectra were collected at the temperatures indicated in the caption (10 K, 30 K, and 60 K). In cases where the ices were annealed, the annealing temperature is indicated. Ice compositions are given in the upper right of each panel. Spectra are vertically-offset for clarity, and, when noted, are scaled to show detail. The positions of the c-CO2 features are marked by asterisks (*) when present.
Fig. 4 (Top) Average FWHM values of the 2.1 THz and 3.5 THz c-CO2 transitions observed in this work at 10 K, 30 K, and 60 K for pure CO2 and the indicated mixing ratios. Values are normalized to the pure, 2.1 THz width. Error bars are 1σ standard deviations in the averages. (Bottom) FWHM values of the 3599 cm−1 and 3710 cm−1 (2ν2 + ν3 and ν1 + ν3, respectively) CO2 transitions at 10 K for pure CO2 and the indicated mixing ratios.
Fig. 5 (Top) FTIR spectra of the ν1 + ν3 and 2ν2 + ν3 combination bands of a-CO2 (black) and c-CO2 (red) collected in our laboratory at 10 K. (Middle) The same modes collected during the 10 K experiment (no annealing) for each of the mixtures studied in this work. (Bottom) The ν3 band of CO2 and 13CO2 collected during the 10 K experiment (no annealing) for each of the mixtures studied in this work. Spectra are vertically-offset for clarity, and several additional features due to isotopologues are indicated.35,36
Table 1
Mixing ratios, total deposition pressure, deposition temperature, substrate temperatures, and annealing temperatures for ices described in this work.
"year": 2016,
"sha1": "a98c1fca51add815318bd4a7070de42ca62b0c58",
"oa_license": null,
"oa_url": "https://europepmc.org/articles/pmc6842323?pdf=render",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a98c1fca51add815318bd4a7070de42ca62b0c58",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
Aerodynamics and Characteristics of a Spinner Anemometer
A spinner anemometer is a wind measurement concept in which measurements of the wind speed in the flow over a wind turbine spinner are used for determination of the free wind. Analogies to the concept are the flow around a sphere and a five-hole pitot tube. But, instead of measuring pressure differences on the surface, the spinner anemometer measures directional air speeds in the flow above the spinner surface. A spinner anemometer, based on a modified 300 kW wind turbine spinner, was mounted with three 1D sonic wind speed sensors. The flow around the spinner was calculated with the EllipSys3D CFD code. Calculations were made for varying wind speeds and yaw angles, and the air speed within the sonic sensor path was determined during rotation. The calculated air speeds were used as "calibration" data for an analogue spinner anemometer algorithm. The algorithm converts, by inclusion of a measured rotor position, the measured sonic sensor air speeds to free wind speed, wind direction relative to the spinner and flow inclination angle. A wind tunnel concept test and a full scale field experiment with a comparison to a 3D sonic anemometer were made. The results indicate that the 300 kW spinner anemometer characteristics are comparable to the 3D sonic anemometer with respect to time traces and average and standard deviation of wind speeds.
Introduction
A spinner anemometer is a wind measurement concept intended for horizontal axis wind turbines [1]. The spinner anemometer integrates the spinner, which is the aerodynamically formed glass-fibre cover over the rotor hub, with active wind sensors on the spinner into the wind measurement concept. The concept is best explained by an analogy to the five-hole pitot tube or pressure-sphere anemometer. But instead of measuring pressure differences as on the five-hole pitot tube, the spinner anemometer measures directional flow speeds in the flow above the spinner surface, and above the spinner boundary layer.
Acknowledgments to Siemens Wind Power Systems for construction, building and provision of the 300 kW modified spinner and tripod support for the experiments and analysis.
Description of a spinner anemometer
The shape of the spinner nose on most wind turbines is spherical and similar to the nose of a five-hole pitot tube or a pressure-sphere anemometer, though the size of a spinner is somewhat larger. A pressure-sphere anemometer nose is spherical [2,3]. The spinner anemometer presented here also has a spherical nose, but it has a transition to a cone rather than to a cylinder as for the five-hole pitot tube. One way to measure the wind speed on a wind turbine spinner could therefore be to measure the pressure differences from five holes in the spinner. There are some disadvantages to this, though. One is the sensitivity to rotation, and the other is the sensitivity of pressure taps to rain and icing.
Principles of a spinner anemometer
A better sensing method is to measure directional air speed over the spinner, and to use sonic anemometry, which is a conventional and robust measurement principle [4]. A spinner anemometer can thus typically consist of three 1D sonic wind speed sensors, mounted symmetrically at the front of the spinner, as shown in Figure 1. The sonic sensors are mounted with the sensor paths in plane with the rotor axis and with the sonic paths tilted somewhat backwards to allow wind to come undisturbed into the sensor paths and to avoid the sensor head shadow effect [4]. In this way, the wind component on the sensors due to rotation is cancelled out, and only the wind along the axis direction is measured. The tilted-back configuration of the 1D sonic sensor also allows the sensor to be sufficiently narrow to be mounted from the inside of the spinner through a hole in the mounting fitting on the spinner.
The physical principle of the spinner anemometer is based on the air flow over the spinner, see Figure 2. The air flow, coming from the right with an inflow angle of 10º, accelerates from zero at the stagnation point to an air speed above the free wind speed on the upper side of the spinner. On the lower side the air flow accelerates less. This systematic variation of air speed over the spinner with variation of inflow angle is used in reverse to determine the flow angles.
Figure 2 Axial wind speed contours around a rotationally symmetric spinner. Wind is from the right and the air inflow angle is 10º from below.
A spinner anemometer for theoretical and experimental investigations
The specific S300 spinner anemometer shown in Figure 1 is used for theoretical and experimental investigations. The spinner length is 0.92 m and the diameter at the back of the spinner is 1.10 m. The spinner nose has a radius of 0.40 m. The 1D sonic sensors and electronics are based on ordinary 3D sonic sensor technology, but the sensors themselves are specifically designed and built for spinner anemometry. The sensor path lengths are 0.162 m, and the sensor path angle to the rotor axis is 11º. Two different types of sonic sensors were used, both having the same supporting geometry, but with different sensor head diameters: 20 mm for the "classic" type and 12 mm for the "new" type.
Wind tunnel concept tests
A test of the measurement concept was made in a large 4x4 m2 wind tunnel. The S300 spinner was mounted horizontally on a shaft on a tripod. The tripod was mounted on a turn-table in the floor which could be yawed to change the angle of attack of the wind on the spinner. A cup anemometer was mounted at the lower left corner of the inlet to the test section. Unfortunately, this was close to a contraction wedge on the floor, which locally increases the outlet wind speed, see Figure 3. At first, measurements were made without rotation in order to measure the basic flow characteristics. The spinner was positioned so that one sonic sensor was horizontal for the static test. Figure 4 shows the sensor wind speed relative to the cup wind speed, measured at continuous yaw sweeps and at three different tunnel wind speeds for the "classic" type of sonic sensors. Figures 5 and 6 show measurements during rotation with the "classic" type of sensors at 20º and 10º yaw and a wind speed of 9 m/s. Index c indicates that data have had an "internal" calibration. This means that all data of sensors 1, 2 and 3 have been sorted individually according to ascending values, and sensors 2 and 3 have been regressed on sensor 1. This "internal" calibration is possible because all sensors statistically measure the same wind. In this way the influence of imperfections of the spinner, sensors and sensor mounting can be minimized.
Figure 5 Scatter of measurements of 1D sonic sensor wind speeds relative to cup anemometer wind speed during rotation at a yaw angle of 20º and with "classic" sensor heads.
Figure 6 Scatter of measurements of 1D sonic sensor wind speeds relative to cup anemometer wind speed during rotation at a yaw angle of 10º and with "classic" sensor heads.
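One plausible reading of this "internal" calibration is a quantile-matching linear regression, sketched below in Python/NumPy; the exact regression used in the paper is not specified beyond the description above, so the details here are our assumptions.

```python
import numpy as np

def internal_calibration(v1, v2, v3):
    """Map sensors 2 and 3 onto the scale of sensor 1.
    Because all three sensors statistically sample the same wind, the
    sorted (quantile) values of each sensor are fitted linearly to the
    sorted values of sensor 1, and the fitted gain/offset is then
    applied to the raw time series."""
    ref = np.sort(v1)
    calibrated = [np.asarray(v1)]
    for v in (v2, v3):
        gain, offset = np.polyfit(np.sort(v), ref, 1)  # least-squares line
        calibrated.append(gain * np.asarray(v) + offset)
    return calibrated  # [v1, v2 calibrated, v3 calibrated]
```

The fit uses sorted values rather than paired samples because the three sensors never see the same wind at the same instant; only their distributions are comparable.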
Aerodynamics of a spinner anemometer
The aerodynamics of the S300 spinner anemometer was investigated by a detailed CFD analysis.
CFD calculations on the S300 spinner
The in-house flow solver EllipSys3D is used in all computations presented in the following. The code is developed in co-operation between the Department of Mechanical Engineering at DTU and The Department of Wind Energy at Risø National Laboratory, see [5], [6] and [7]. The EllipSys3D code is a multiblock finite volume discretization of the incompressible Reynolds Averaged Navier-Stokes (RANS) equations in general curvilinear coordinates, and is second order accurate in space and time. The code uses a collocated variable arrangement, and Rhie/Chow interpolation [8] is used to avoid odd/even pressure decoupling. As the code solves the incompressible flow equations, no equation of state exists for the pressure, and in the present work the SIMPLE algorithm of [9] is used to enforce the pressure/velocity coupling. The EllipSys3D code is parallelized with MPI for executions on distributed memory machines, using a non-overlapping domain decomposition technique. In the present work the turbulence in the boundary layer is modeled by the k-ω SST eddy viscosity model [10].
Computational grid
When modeling the spinner in the computations, the geometry was as shown in Figures 1 and 7. In contrast with the measurements in the tunnel, the support structure for the spinner was not included, and the base of the spinner was modeled as a closed surface. Additionally, only the spinner was modeled, and the flow speeds in the sonic sensor paths were calculated. No attempt was made to include the sonic anemometer geometry in the calculations. Before the computations can be performed, a computational mesh is needed. For the present computations, the geometry of the spinner was given as a solid body in the form of an IGS file, see Figure 7. Based on the IGS file, the commercial GridGen mesh generation code was used to generate the surface mesh on the cube.
Finally, the 3D volume mesh was generated using the in-house Risø enhanced hyperbolic mesh generator HypGrid3D. For the S300 spinner, 256 cells are used around the spinner in the cross-stream direction and 64 cells in the flow direction, and two additional square blocks of 64x64 cells are placed at the nose and rear of the spinner, respectively. The number of cells in the normal direction is 64, giving a total cell count of approximately 1.6 million cells. The cells are stretched towards the surface in the normal direction to assure that the y+ value is below 2. To avoid influence of the location of the outer boundary, the boundary is located more than 10 diameters away from the spinner.
Influence of wind speed
In the original wind tunnel setup, the 'free stream' velocity was measured with a cup anemometer placed near the floor at the entrance of the tunnel. Due to local flow distortion by the contraction wedge on the floor, the velocity measured by the cup anemometer was not representative of the actual speed in the test region of the tunnel. To investigate this phenomenon, a series of computations was performed for the 10º and 20º yaw settings for varying wind speeds. These computations showed that good agreement with the value of 9 m/s measured by the cup anemometer could be obtained by adjusting the free wind speed to 6.5 m/s for the 20º case, see Figure 9. For the 10º yaw case an adjustment to 7.5 m/s was made, as shown in Figure 8. These findings indicate that the cup measurements were influenced by the local flow distortion, and that the tunnel blocking was influenced by the presence of the spinner. Afterwards, the experiment in free wind confirmed that the change in wind speed predicted by the CFD computations agrees well with the actual behavior of the flow near the spinner.
Besides establishing the actual flow velocity in the test section, it could be seen from the computations with varying inflow velocity that it is possible to directly scale the velocity to obtain the same good agreement as by adjusting the inflow velocity. This can be seen as a proof that the main effect of the spinner can be represented by an inviscid flow assumption.
Influence of rotation
To investigate the effect of the spinner rotation on the flow field, computations with zero rotational speed and with a finite rotational speed corresponding to 15 RPM were compared for the 0º, 10º and 20º yaw cases, see Table 1. From Figure 9 it can be seen that the rotation of the spinner has a very weak influence on the measured velocity profiles, and can easily be ignored for practical purposes. Even though the results do not show any dependence on the spinner rotation, we still include the spinner rotation when computing the velocity characteristics at different yaw angles, as the additional computational overhead is unimportant.
Deviation from the measurements
Due to the physical design of the sonic sensors, flow from some directions will be distorted by the supporting structure and the sensor heads; see the measurements in Figure 9. In the computations, the sensors are not modeled, and the computations thus do not include any distortion of the flow by the sonic sensor itself. Looking at Figure 9 for the 20º yaw situation, a narrow region with large deviations between measured and computed velocities is observed. This is due to the sensor head shadow effect. A similar observation is not made for the 10º yaw situation shown in Figure 8, if the wind is scaled by the same factor as in Figure 9.
Variation of the velocity signal with yaw setting
Having verified that the CFD method is capable of predicting the correct behavior of the flow around the spinner, when excluding the sensor head shadow effect, a series of computations for different yaw settings was performed, see Table 1. Here we use a wind speed of 10 m/s, as the previous computations have shown that the curves can easily be scaled to fit other wind speeds. The azimuth behaviour of the velocity signal as a function of the yaw setting of the spinner can be seen in Figure 10, where the behaviour is shown for yaw settings of 0º, 10º, 20º, 30º, 40º, 60º and 80º. The calculated average wind speeds in the sonic sensor paths are seen to be very close to sinus-shaped, and they appear to be of a family of curves in which the average sensor wind speed over one rotation decreases with increasing yaw setting angle while the amplitude increases. In fact the average sensor wind speed is reduced with the cosine of the yaw setting angle while the amplitude increases with the sine. The signal of one sonic sensor can thus be expressed as a function of the flow angle α, the azimuth position φ, the free wind speed U, and two constants k1 and k2:

V1 = U (k1 cos α + k2 sin α cos φ)     (1)

With this expression for one sensor we can now express the air speeds for the three sensors positioned symmetrically on the spherical nose:

V1 = U (k1 cos α + k2 sin α cos φ)
V2 = U (k1 cos α + k2 sin α cos(φ − 2π/3))     (2)
V3 = U (k1 cos α + k2 sin α cos(φ + 2π/3))

From these equations the azimuth position φ, the flow angle to the rotor axis α, and the free wind speed U can be recovered. The azimuth angle is expressed by one out of three formulations (cyclic permutations of the sensors) as:

tan φ = √3 (V2 − V3) / (2V1 − V2 − V3)

The flow angle to the rotor axis is expressed by one out of three formulations as:

tan α = k1 (2V1 − V2 − V3) / (k2 (V1 + V2 + V3) cos φ)

And the wind speed as:

U = (V1 + V2 + V3) / (3 k1 cos α)

The azimuth angle φ depends on the three air speeds alone, and is independent of the constants k1 and k2. The wind speed is proportional to the average of the three sensor wind speeds, and is only dependent on the flow angle to the rotor axis.
Figure 10 The computed velocity distribution around the S300 spinner for the 0º, 10º, 20º, 30º, 40º, 60º, 80º yaw cases (the velocities are not scaled), and the values from formula (2).
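A direct implementation of this conversion from the three sensor speeds to the wind parameters, following relations (1)-(2), could look as follows (a sketch in Python; the function and symbol names are ours, and k1 and k2 are the calibration constants introduced above).

```python
import math

def spinner_conversion(v1, v2, v3, k1, k2):
    """Invert V_i = U*(k1*cos(a) + k2*sin(a)*cos(phi - i*2*pi/3)),
    i = 0, 1, 2, for the free wind speed U, the flow angle a to the
    rotor axis, and the azimuth position phi, given the three
    measured sensor air speeds."""
    mean = (v1 + v2 + v3) / 3.0              # = U*k1*cos(a)
    s = (v2 - v3) / math.sqrt(3.0)           # = U*k2*sin(a)*sin(phi)
    c = (2.0 * v1 - v2 - v3) / 3.0           # = U*k2*sin(a)*cos(phi)
    amp = math.hypot(s, c)                   # = U*k2*sin(a)

    phi = math.atan2(s, c)                   # azimuth position
    alpha = math.atan2(k1 * amp, k2 * mean)  # flow angle to rotor axis
    u = math.hypot(mean / k1, amp / k2)      # free wind speed
    return u, alpha, phi
```

Note that phi comes out of the three speeds alone (k1 and k2 cancel in the ratio s/c), consistent with the statement above.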
Free field comparison to 3D sonic anemometer
The S300 spinner anemometer was tested under field conditions at the Risø test site. In this case only the "new" sensors were used, in order to reduce the sensor head shadow effect.
5.1. Free field experimental setup
A 3D Gill Windmaster sonic anemometer was mounted with the sensor heads at the same height, about three meters to the side and a little in front of the spinner anemometer, for comparison tests. Measurements were made while rotating at 15 rpm. The whole arrangement of the spinner and sonic sensors is shown in Figure 11. Results are shown in Figures 12, 13 and 14. Figure 12 shows a comparison of scalar winds, while Figure 13 shows the yaw angle, and Figure 14 the flow inclination angle. The wind speeds and wind directions measured by the two anemometers seem to follow each other quite well. The most significant variations are the same for the two sensors. The 10 min average absolute wind speed of the 3D sonic is only 2% lower than the spinner anemometer wind speed for the time trace, of which 100 seconds are shown in Figure 12. Figures 15 and 16 show compared average wind speeds and standard deviations over a 3 hour measurement period. Average values have a slope of 0.975, and the standard deviations 1.045.
Conclusions
A 300 kW wind turbine spinner anemometer has been investigated by wind tunnel tests, field tests, and theoretical CFD calculations. Two types of sonic sensors were tested, with 20 mm "classic" and 12 mm "new" sensor head diameters, respectively. The wind tunnel tests revealed that the "classic" sensor heads exhibited flow distortion, which produced a drop in measured wind speed at azimuth positions where the flow is parallel to the sonic sensor paths.
A full CFD analysis of the spinner anemometer was made. The results showed that the calculations were almost insensitive to rotation and to wind speeds. For all flow or yaw angles up to 60º the azimuth variation was a pure cosine. At 80º there was a significant deviation from the cosine form.
The shape of the responses of the sonic sensors to a flow or yaw angle can be described with a simple function in which, over one revolution, the average value decreases with the cosine of the flow angle and the amplitude increases with the sine of the flow angle. The function can be described for all three sonic sensors, and from the equations, the wind speed, the flow angle to the rotor axis and the azimuth position on the spinner can be determined and used in a conversion algorithm for the spinner anemometer.
Field measurements were made with the spinner anemometer and the "new" sonic sensors with reduced sensor head shadow effect. The conversion algorithm was used, and no sensor head corrections were applied. The spinner anemometer was compared to a standard 3D sonic anemometer, and the results showed that the time traces of the two instruments follow each other quite well, considering the distance between the two instruments and the low height of 2.5 m above the ground. Over a period of three hours, the average values of the spinner anemometer were 2.5% higher than the sonic anemometer, and the standard deviation values were 4.5% lower.
"year": 2007,
"sha1": "4d555d3c691a7275451fc588a26f7baa6fcd460d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/75/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4d555d3c691a7275451fc588a26f7baa6fcd460d",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Fog Juice Poisoning
A 19-year-old male was admitted with upper abdominal pain and vomiting following ingestion of a fluid he thought to be beer. After repeated inquiry it was found that the clear fluid was "fog juice", used to produce artificial fog or smoke on stage. About 36 hours after ingestion the patient developed severe respiratory distress and itching. Investigation reports confirmed acute renal failure and hyperkalaemia. He was sent to the Nephrology Unit for dialysis, but the victim expired there about 48 hours after ingestion. DOI: http://dx.doi.org/10.3329/jom.v14i2.19687 J Medicine 2013, 14(2): 195-197
Introduction:
Fog juice is a fluid that is used to make artificial fog or cloud on stage during a concert or drama, to create a distinctive, ethereal atmosphere. In most smoke machines a glycol-based fluid is pumped into a heated chamber. The fluid evaporates very quickly, so vapour exits under pressure and, upon contact with the cool air, forms a dense cloud that is very similar to real fog. Commonly used ingredients to make fog juice include: 1. distilled water, 2. glycerine, 3. diethylene glycol, 4. dipropylene glycol, 5. propylene glycol, and 6. triethylene glycol. There are numerous kinds of glycol; most of them are poisonous, but some are safe and are used in the food and medicine industries. Just a little change in name can make a big difference, and using the wrong chemical can kill people.
Case Report:
A 19-year-old male was admitted to Medicine Unit-IV of Sir Salimullah Medical College and Mitford Hospital (SSMC & MH) on 25th December 2010 at 12:30 AM, with complaints of pain in the upper abdomen and vomiting a couple of times within one hour, following ingestion of about 100 ml of a colorless fluid he thought to be beer. A stomach wash was given in the emergency department. The patient was conscious and vital signs were normal. He was kept nil by mouth with intravenous fluid, thiamine and ranitidine. The following day was uneventful.
On the next morning (26th December), about 34 hours later, the patient was found to be dyspnoeic, which gradually worsened over time; he also developed intense itching. He was conscious but anxious, and said that he had not passed any urine for the last 16 hours. His respiratory rate was rapid, pulse 100/min, and BP 150/90 mm Hg. Examination of his chest and abdomen revealed no abnormality. A bedside ECG showed tall, peaked T waves. The patient's blood sample showed: urea 136 mg/dl, creatinine 8 mg/dl, Na+ 126 mmol/L, K+ 5.5 mmol/L, Cl− 97 mmol/L, HCO3− 18 mmol/L. After repeated questions, the patient and his attendant said that the clear fluid he took was fog juice. The patient was urgently transferred to the Nephrology Department. Intermittent peritoneal dialysis was initiated, but the patient died that night (about 48 hours after ingestion of the fluid).
Discussion:
There are numerous types of glycols. Examples of some glycols in common use are mentioned below.
Ethylene glycol (ethane-1,2-diol) is an organic compound; in its pure form, it is an odorless, colorless, syrupy, sweet-tasting liquid. Ethylene glycol is not to be confused with diethylene glycol, a heavier ether diol, or with polyethylene glycol, a nontoxic polyether polymer. The major uses of ethylene glycol are in the manufacture of PET, in automotive antifreeze, and as a precursor to polymers. Minor uses of ethylene glycol include the manufacture of capacitors, shoe polish, and some inks and dyes. Ethylene glycol is commonly used as a preservative for biological specimens, as a safer alternative to formaldehyde. It can also be used in killing jars.1 Ethylene glycol is moderately toxic.2 Upon ingestion, ethylene glycol is oxidized to glycolic acid, which is, in turn, oxidized to oxalic acid, which is toxic. It and its toxic byproducts first affect the central nervous system, then the heart, and finally the kidneys. Ingestion of sufficient amounts can be fatal if untreated.3
Diethylene glycol (DEG)4 is an organic compound. It is a colorless, practically odorless, hygroscopic liquid with a sweetish taste. It is miscible in water, alcohol, ether, acetone and ethylene glycol. DEG is a widely used solvent. Diethylene glycol is used in the manufacture of unsaturated polyester resins, polyurethanes and plasticizers. It is a humectant for tobacco, cork, printing ink, and glue. It is also a component in brake fluid, lubricants, wallpaper strippers, artificial fog solutions, and heating/cooking fuel. A dilute solution of diethylene glycol can also be used as a coolant; however, ethylene glycol is much more commonly used. It is very poisonous. Some authors estimate the minimum toxic dose at 0.14 mg/kg of body weight and the lethal dose between 1 and 1.63 g/kg of body weight. Because of its adverse effects on humans, diethylene glycol is not allowed for use in food and drugs. Its use in adulterated consumer products has resulted in numerous epidemics of poisoning since the early 20th century.
Triethylene glycol5 is a colorless, odorless, viscous liquid. It is used as a plasticizer for vinyl. Because of its exceptionally low toxicity and low odor, combined with its antimicrobial properties, it is used in air sanitizer products. Triethylene glycol is also used as a liquid desiccant for natural gas and in air conditioning systems. It is an additive for hydraulic fluids and brake fluids.
Polyethylene glycol (PEG)6,7 is a polyether compound; it has low toxicity and is used in a variety of products, from industrial manufacturing to medicine. PEG is used as an excipient in pharmaceutical products. Lower-molecular-weight variants are used as solvents in oral liquids and soft capsules, whereas solid variants are used as ointment bases, tablet binders, film coatings, and lubricants. It is the basis of many skin creams, sexual lubricants, laxatives and lubricant eye drops. Polyethylene glycol is the primary component in a type of antifreeze solution used in automobiles and boats as a low-toxicity alternative to the traditional, highly poisonous ethylene glycol solutions used in standard antifreeze products.
Propylene glycol,8 also called 1,2-propanediol or propane-1,2-diol, is an organic compound. It is a colorless, nearly odorless, clear, viscous liquid with a faintly sweet taste, hygroscopic and miscible with water, acetone, and chloroform. It is used as a solvent in many pharmaceuticals, including oral, injectable and topical formulations; as a moisturizer in medicines, cosmetics, food, toothpaste, shampoo, mouthwash, hair care and tobacco products; in hand sanitizers, antibacterial lotions, and saline solutions; in smoke machines to make artificial smoke; and as a solvent for food colors and flavorings.
Dipropylene glycol9,10 finds many uses as a plasticizer, an intermediate in industrial chemical reactions, and as a solvent. Its low toxicity and solvent properties make it an ideal additive for perfumes and skin and hair care products. It is also a common ingredient in commercial fog fluid, used in entertainment industry smoke and haze machines.
Tripropylene glycol is a viscous, colorless liquid featuring low toxicity, low volatility, slow evaporation and water miscibility. It and its ethers are not toxic and can be used in food, soaps and personal care products, as well as in industrial applications.
Commercially available glycol-based fog fluids are safe, but they may be costly and are not widely available in our country, so locally manufactured fog juice is used in our entertainment industry. Most fog juice manufacturers present their fluid as a top-secret formula, and most local manufacturers prepare their fluid in their own way. We know there are numerous kinds of glycols and most of them are poisonous. Using the wrong chemical, or the right chemical in the wrong concentration, may make the fluid toxic. Illiteracy, ignorance and the tendency to maximize profit may lead to the production of fog fluid that is poisonous. The death of the above-mentioned patient was due to the drinking of such a toxic fluid. We collected the fluid and sent it to the Bangladesh Council of Scientific & Industrial Research (BCSIR, Science Laboratory) for chemical analysis, but the laboratory failed to analyze it and determine its chemical composition. The physical appearance of the ingested fluid and the mode of death of the patient suggest that it contained a substantial amount of DEG. Similar deaths from ARF following accidental drinking of fog fluid have also been noticed by other observers in other parts of the country.
Conclusion:
Diethylene glycol poisoning is not unknown to us. Between 1990 and 1992, 339 children developed kidney failure, and most of them died, after being given paracetamol (acetaminophen) syrup contaminated with diethylene glycol.11 In 2009, similarly, 29 deaths were reported from Brahmanbaria district due to DEG in paracetamol syrup. Fog juice poisoning is a new addition to this older issue. Physicians have to become alert to its occurrence, so that appropriate and timely intervention can save lives. It is also time to take measures to create public awareness to prevent such unwanted deaths. Nevertheless, we have only noticed the fatal effect of the poison after ingestion; we do not know its effect on the performers and other people working on stage who inhale the vapour chronically.
The Design and Development of an Expert System Prototype for Enhancing Exam Quality
This paper discusses the development of an expert system prototype for use in college institutions. Our aim is to enhance exam quality and student performance by obtaining metrics pertaining to assignments, study materials, textbooks, and lecture quality, then learning dynamically from this information to create a human-readable course evaluation. The goal is to obtain a model which can be applied to courses in which students struggle, so we can identify ways to enhance the most determining factor of their grade: the quality of the exam. This expert system will serve as a prototype for a larger, more comprehensive automated system which will be proposed to enhance curricula.
INTRODUCTION
Exam quality relates to how well an examination of learned material reflects the information provided in course learning materials. Learning materials include study guides, homework, books, lecture information, course videos, and exams themselves, should a comprehensive exam be given. Courses may suffer from a lack of coherence between the offered materials. If a material is utilized, it should be relevant and effective in preparing students for an exam. In a case of minimal coherence, students could have a potentially difficult time deciding exactly what to prepare for regarding an examination. The idea here is to capture the areas of the course which are deficient in providing the preparatory knowledge needed for exam success, so that they can be modified to achieve a better synergistic effect on overall exam success. We also wish to keep or enhance effective course materials. A system which reports these findings to the instructor would benefit the course as a whole by providing the instructor information to support modification of the learning framework, which would guide students to optimal course performance.
Our work achieves this goal using a concept from data mining known as association rule learning. This technique is used to discover relations between variables in large example sets. This concept is particularly interesting because the rules this technique produces can be interpreted as easily as they can be read. For example, a typical rule may take the form {studies daily} => {has a high GPA}. This would mean that the feature 'studies daily' implies that a given example {has a high GPA}. The left-most side of a rule is called the antecedent, whereas the result on the right-hand side is known as the consequent. In order to generate meaningful and useful rules, support and confidence thresholds are supplied, which can limit the generation of weak rules. Support is defined as the probability that an example contains a subset X when randomly chosen from the total set of responses. The support of an association rule 'A=>B' is defined as the support of (A union B). Confidence refers to how likely it is that a transaction containing A also contains B. The confidence of an association rule 'A=>B' is defined as the probability that an example contains both A and B divided by the probability that an example contains A; this is the same as the support of (A union B) divided by the support of (A). Our study uses the algorithm proposed by Agrawal [1,2]. This method has become popular in market basket analysis, intrusion detection and bioinformatics. It is also extensible to discovering correlations within data [3]. In the realm of education, association rules have been used to generate efficient learning workflows [4], aid in academic advising [5], and provide learning insights in the Moodle Course Management System [6].
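To make these definitions concrete, the following is a minimal sketch in plain Python (the transaction encoding and thresholds are illustrative, not the RapidMiner implementation used in the study) that counts itemsets and emits one-to-one rules passing the supplied support and confidence thresholds.

```python
from itertools import combinations

def one_to_one_rules(transactions, min_support, min_confidence):
    """Mine rules A => B from transactions (each a set of items).
    support(A => B)    = P(A and B)
    confidence(A => B) = P(A and B) / P(A)"""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for item in t:
            counts[(item,)] = counts.get((item,), 0) + 1
        for pair in combinations(sorted(t), 2):
            counts[pair] = counts.get(pair, 0) + 1

    rules = []
    for key, c in counts.items():
        if len(key) != 2 or c / n < min_support:
            continue
        for a, b in (key, key[::-1]):       # try both rule directions
            if c / counts[(a,)] >= min_confidence:
                rules.append((a, b, c / n, c / counts[(a,)]))
    return rules

# The toy rule from the text: {studies daily} => {has a high GPA}
demo = [{"studies daily", "has a high GPA"},
        {"studies daily", "has a high GPA"},
        {"watched review video"}]
print(one_to_one_rules(demo, min_support=0.5, min_confidence=0.6))
```

Lowering min_support and min_confidence admits more (and weaker) rules, which is exactly the trade-off the thresholds are meant to control.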
This paper reports on a pilot study which was performed as a proof of concept to support the building of an automated system which can be utilized by instructors.
We based our design on the W-CAT model proposed by Rizk [7]. This model is composed of four modules. A base module contains student background information, course material, and teacher methods. This provides input into a reasoning module. Here we use association rule learning to find combinations of attributes contained in the base module which lead to high or low exam performance. These results are then analyzed through an inference module which searches for specific course deficiencies related to communication and pedagogical factors. A final expert module then produces an assessment by recognizing strong and weak course materials as well as study combinations which lead to poor or high exam performance.
This study was undertaken at the University of Houston. The evaluation focused on students' exam success as determined by the letter grade received. The study investigated student perceptions of study material in terms of its preparatory significance and effectiveness in order to generate the rule sets. It discusses the findings and limitations of using association rule mining in a computer science course to determine the factors which contribute to exam quality, and provides evidence to support the creation of an automated course evaluation system.
The first purpose of our study is to identify students' perceptions of course materials, so as to support decision making with regard to whether adoption would likely enhance exam quality and thus affect student performance positively. This focus may help leaders and faculties determine whether to pursue a particular solution in improving a specific course curriculum. The second purpose of this study is to acquire knowledge for the development of an automated system to aid instructors in future course development by dynamic evaluation. The next section describes the method used to gather our data.
II. METHOD
The first phase of analysis began with data collection. Participants in this study included 52 students enrolled in an introduction to computer organization and design course (COSC2410). The majority of students were full-time and aged between 18 and 23 years old. The sample comprised students of different levels (freshman, sophomore, junior, senior) and different degrees.
We utilized the open-source LimeSurvey software to generate an expert survey to collect information from students in COSC2410. The idea was to use a pre-survey before examination 2 and a post-survey after completing examination 2, and then evaluate these against the students' actual exam results. The responses can be considered valid, as invitations to the survey were distributed using a secure token system which would allow only a finite number of tokens to be used in the survey; these were e-mailed to students individually. Students were encouraged to participate in the surveys, as we offered extra credit on the exam in return. Anonymity of responses was achieved by matching data to student identification numbers, with no need to ever ask for identifying information beyond that.
The surveys assessed the students' perception of the efficacy of certain course materials in preparing for the topics on the examination. The evaluation of survey information was based on statements regarding the course materials, as well as yes/no and multiple choice questions. The primary dependent variable is the exam performance measured by the letter grade received. Additionally, we asked students what type of what materials they would prefer to see more of.
A. Students' perceptions
This study considers including students as stakeholders in the evaluation process [8]. Thus, information regarding students' attitudes and preferred learning styles should be known (Table I).
B. Course Material Usage
A series of questions about materials employed in the course was asked to gain insight into which materials students were using to prepare for the exam (Table II).
C. Course Material Effectiveness
Students rated the effectiveness of course material items in regards to how well they prepared them for the exam. This was done using a 5-point scale (Table III).
D. Student Suggestion
Students were asked if they recommended more programming assignments concerning the information on exam 2. 62% supported this recommendation.
E. Exam Performance
The information in A-C was to be evaluated against actual exam performance, indicated by letter grade received as determined by raw score (Table II).
To generate our rule sets we used the open-source data mining tool RapidMiner. The following describes the steps used in our RapidMiner process tree. The survey data was cleansed by converting the numerical grades to the nominals A-F. These were then converted to binomial data. Questions which used a 5-point ranking scale were discretized into bins and processed as binomial data. We processed the data to find the frequent itemsets and proceeded to generate one-to-one association rules.
III. RESULTS
We used two strategies to obtain our results in the rule generation step. The first sought to lower the support threshold, so that rules could be generated which would contain a specific letter grade in the consequent. This would give us rules that apply to the final letter grade received, and thus insight into exam performance. The second method sought to find strong association rules using high values for support and confidence. This would produce meaningful one-to-one associations among the independent variables, or intercorrelations.
IV. LIMITATIONS
Our findings from the pilot study have aided in discovering inherent limitations which we will attempt to control better in the design of our automated expert system.
We must first acknowledge the limitations present in our choice of using association rules. In many cases where association rules are used, very large numbers of rules can be generated, which can prove to be fruitless and/or time-consuming when trying to discover valid correlations. We did not have this issue, as our small sample size, thresholds, and single-pass generation of one-to-one rules limited this effect. This explanation raises more issues, though. We have not used a large sample size. Given that our method applies to a specific course, the sample may never become large unless we perform a longitudinal study, which, due to the nature of course curriculum changes, would likely prove to be irrelevant over time. Our design is unable to check for negative correlations, i.e. a rule of the form 'does not contain A => B'. Mining for such rules necessitates the examination of an exponentially large search space. Although potentially useful, we have neglected to use an algorithm which extends to generate these rule types [3,9]. Due to our use of low support and confidence thresholds in results [A], we must guard against reporting contradictory rules. This did not occur in this study, but it remains a possibility so long as support and confidence thresholds below 0.51 are used. In the pilot study we did not generate rules greater than one-to-one. There may be useful antecedent pairings present in our data, but we have not reported them.
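One standard workaround for the negative-rule limitation noted above is to add explicit negated items to each transaction before mining, so a positive-rule miner can surface rules such as {not:completed programming} => {grade F}. The sketch below (ours, reusing the set-of-items transaction representation from the earlier example) also makes the cost visible: the full item universe must be enumerated up front, which is part of what makes general negative-rule mining expensive.

```python
def with_negations(transactions, universe):
    """Augment each transaction (a set of items) with 'not:<item>'
    tokens for every item in `universe` that the transaction lacks.
    Doubling the effective item count roughly squares the number of
    candidate one-to-one rules."""
    return [t | {"not:" + item for item in universe - t}
            for t in transactions]
```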
We do not feel that a pre-survey will be necessary in future endeavors due to two factors. First, we wish to limit the total number of questions and surveys to a bare minimum. We would expect higher survey completion rates and more honest data from students if we reduce the frequency of survey delivery. Second, the data appears to be most relevant after the exam has been taken due to the fact that students have now seen the exam and are not answering questions based on expectation, but rather actual experience. In the future we could bring back the pre-survey to predict success based on responses from a previous course semester.
The teaching method used by courses which adopt this method for evaluation may have an impact on whether the results are useful. We assume that most courses with a pre-determined syllabus would be able to benefit from an association -rule based system which allows the instructor to specify the learning materials present in the course. Some courses may only have a textbook, lecture, and final project. In this case, we could evaluate against the final project grade rather than exam grades, but may not capture other factors that would lead to success on the project. For example, team size, student background, and student classification may prove to be better indicators. We may need to develop a different survey for different pedagogical styles.
Furthermore, the validity of the questions used in the survey has not been determined. We feel this should be done in the future when other courses have applied our system. We could ask instructors to rate the perceived usefulness of the results and identify the teaching style used in a final survey to obtain validity measures. We could also obtain suggestions from users to better aid in capturing relevant success indicators.
V. CONCLUSION
The results of this pilot study have aided in the development of an automated course/exam evaluation system. We performed this study with the intent of gaining the domain knowledge necessary to build an expert system of this type. A description of this system follows: instructors will be able to log in to the system and define a course to evaluate. They simply need to provide information concerning the materials present in the course and student names/identification numbers. The system then generates dynamic surveys, and a link is sent to students to collect responses. Upon completion of the surveys, the software will generate the association rules. From these rules, we still need to develop an inference engine which can provide dynamic feedback to course instructors. Finally, we parse rule sets into a human-readable paragraph which will provide the patterns followed by students who are earning high or low exam marks and a summary of the course material efficacy.
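The final parsing step could be as simple as templating each retained rule into a sentence. A sketch (ours, reusing the (antecedent, consequent, support, confidence) tuples from the earlier example; the phrase table is hypothetical):

```python
PHRASES = {  # hypothetical mapping from item codes to readable phrases
    "completed programming": "completed the programming assignment",
    "grade A": "received a high mark on exam 2",
    "grade F": "received a failing mark on exam 2",
}

def rules_to_paragraph(rules):
    """Template mined (antecedent, consequent, support, confidence)
    tuples into instructor-readable feedback sentences, strongest
    (highest-confidence) rules first."""
    sentences = []
    for a, b, s, c in sorted(rules, key=lambda r: -r[3]):
        sentences.append(
            "Students who %s also %s (support %.0f%%, confidence %.0f%%)."
            % (PHRASES.get(a, a), PHRASES.get(b, b), 100 * s, 100 * c))
    return " ".join(sentences)
```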
An example of what instructor feedback would look like in this pilot study is given in sections 1)-4) below. From what we have obtained in our results, it is indeed possible to create an automated system which could produce results similar to this for any course. The 'Regarding exam performance' section is taken directly from the results in A; provided the data are captured and parsed properly, we would expect this to be repeatable. Intercorrelations are taken from B; again, this could also be repeated assuming proper data acquisition and handling. The summary generates information directly from the survey and is based on response frequency. Recommendations are also determined directly from response frequency.
It should be noted that intercorrelations may not always be useful, as the results may be too vague or unrelated. For example, in this study we would generate 'students who thought the lectures correlate to the knowledge expected for exam 2 => watched the review video'. This information does not seem very useful. On the other hand, intercorrelations may give support to conclusions arrived at in A. For example, 'students who viewed the video => expected to do well on exam 2' and 'students who received low grades => studied primarily using the review video'. An instructor may infer that the review video inspires false confidence in students.
We took what we learned from this study's survey and applied it to the course in the following semester. We noticed students were not using the book and did not think it was effective, which prompted further inquiry from the professor to the students. Students explained that the course covers both programming and logic design, that the programming is not covered well in the current textbook, and that the logic design portion is contained in an appendix. We therefore replaced the old textbook with a smaller one focused on MIPS programming and another textbook dedicated to logic design. With a textbook that better models the information on the exams, we expect students to use it more, and more effectively, as a study aid. We have also added more programming assignments to supplement exam 2. We believe this will increase the frequency of high marks on the exam, as completing the programming assignment was one of the indicators in our study.
We currently have a web interface where instructors can enter the information necessary to create dynamic surveys for their courses, so we no longer rely on LimeSurvey. Responses have already been collected from two computer science courses this semester using this system. We are currently developing the inference engine and parser that produce human-readable feedback. Once the software is complete, we hope more courses will adopt it so that we may begin evaluating its efficacy. We believe the final product will be able to actively evaluate physical and electronic classrooms that are based on examinations, with the benefit of improving teaching and student learning.
1) Regarding exam performance
Students receiving high marks on exam 2 completed the programming assignment; they studied using the exam review video and found its content relevant to the exam material. Students with low marks on exam 2 studied primarily using the exam review video.
2) Intercorrelations
Students who thought the lectures correlated to the knowledge expected for exam 2 watched the review video. Those who viewed the video expected to do well on exam 2. Both before and after the exam, students thought the lectures correlated to the exam information.
3) Summary
Students are not using the textbook to study. Only 62% of students have the textbook. The homework exercises, review video, and programming assignments are considered effective learning tools.
4) Recommendations
Students support the recommendation of assigning more programming assignments based on the exam 2 information.
Treatment of hepatoma with liposome-encapsulated adriamycin administered into hepatic artery of rats.
AIM
To observe the therapeutic effects of liposome-encapsulated adriamycin (LADM) on hepatoma in comparison with adriamycin solution (FADM) and adriamycin plus blank liposome (ADM + BL) administered into the hepatic artery of rats.
METHODS
LADM was prepared by the pH gradient-driven method. Normal saline, FADM (2 mg/kg), ADM + BL (2 mg/kg), and LADM (2 mg/kg) were injected via the hepatic artery in rats bearing liver W256 carcinosarcoma, which were randomly divided into four groups. The therapeutic effects were evaluated in terms of survival time, tumor enlargement ratio, and tumor necrosis degree. Differences were assessed with ANOVA, the Dunnett test, and the log-rank test.
RESULTS
Compared to FADM or ADM + BL, LADM produced a more significant tumor inhibition (tumor volume ratio: 1.243 +/- 0.523 vs 1.883 +/- 0.708 and 1.847 +/- 0.661, P < 0.01) and more extensive tumor necrosis. The increase in life span was significantly greater in rats receiving LADM than in those receiving FADM or ADM + BL (231.48 vs 74.66 and 94.70) (P < 0.05).
CONCLUSION
The anticancer efficacy of adriamycin against hepatoma can be strongly improved by liposomal encapsulation combined with hepatic arterial administration.
INTRODUCTION
Adriamycin (ADM) is extensively used to treat patients with hepatocellular carcinoma (HCC). In vitro studies have verified that the cytocidal effect of ADM depends on the concentration and duration of exposure [1]. Unfortunately, intravenous (iv) administration of high-dose ADM is often associated with acute toxicity, such as myelosuppression, immunosuppression, and dose-cumulative cardiotoxicity. As a result, drug concentrations in hepatomas are limited. During the past few years, many researchers have reported on drug delivery systems for cancer chemotherapy that aim at the specific targeting of antitumor agents to tumor cells or tumor tissues, thus enhancing the efficacy of chemotherapy as well as reducing its toxicity [2]. Among them, liposomes have drawn much attention for their excellent bioavailability, biodegradability, and characteristic targeting of the reticuloendothelial system (RES), especially the liver and spleen [3].
On the other hand, liver primary tumors receive approximately 90% of their blood supply from the hepatic artery. Therefore, transhepatic arterial chemotherapy (TAC) is now widely recognized as an effective means for the treatment of HCC. With TAC, a much higher drug concentration in the tumor-residing zone can be achieved. It was reported that administration of ADM via the hepatic artery (ia) was able to increase the tumor ADM concentration three-fold compared to intravenous (iv) administration [4]. In patients with cancers, the ia administration of ADM reduced the plasma AUC by about 30% [5]. On the basis of this evidence, we hypothesized that a further significant therapeutic effect could be expected by combining TAC and liposome techniques. The current study was carried out to observe the therapeutic effects of liposome-encapsulated adriamycin (LADM) on hepatoma in comparison with the treatment results of adriamycin solution (FADM) and adriamycin plus blank liposome (ADM + BL) administered into the hepatic artery in rats.
Liposomal adriamycin (LADM) preparation and characterization
Large unilamellar vesicles (LUVs) were prepared by the extrusion method described by Hope et al [6] . Appropriate amounts of lipid mixtures (EPC/Chol, 55:45 mol/mol) in chloroform were dried under a stream of nitrogen gas to form a homogeneous lipid film. The trace amount of solvent was then removed under vacuum overnight. The lipid film was hydrated in a low pH citrate buffer (pH 4.0, 300 mmol/L) by vortex mixing. The resulting multilamellar vesicles (MLVs) were frozen/thawed (liquid nitrogen/55℃) 5 times and extruded 10 times at 55℃ through two stacked 100 nm polycarbonate filters (Nuclepore) employing an extrusion device (Lipex Biomembranes, Inc., Vancouver, BC, Canada). ADM was then encapsulated by pH gradient-driven method as described previously [7] .
The final product was a reddish, semitransparent colloidal solution. Under the electron microscope, LADM showed a globular, regular contour with homogeneous size and distribution. The average diameter of the liposomes, 120 nm, was determined by quasi-elastic light scattering (QELS) using a Nicomp 370 submicron particle sizer (Santa Barbara, CA, USA), as shown in Figure 1. EPC and Chol were quantified by a high performance liquid chromatography (HPLC) method (Table 1). HPLC analysis was performed on a Perkin Elmer system with a PE 200 LC pump, a PE ISS 200 Advanced LC Sample Processor, a PE 900 Series Interface, and an evaporative light scattering detector (ELSD, Shimadzu). A YMC PVA 5 μm normal-phase column was used in this study. The chromatograph was run in gradient mode at a flow rate of 1 mL/min, where solution A contained chloroform-isopropanol (50:50, v/v) and solution B contained chloroform-isopropanol-water (36:55:9, v/v).
The drug-to-lipid ratio was 0.25. The drug-embedding ratio was more than 98%. Encapsulation efficiency was calculated as the percentage of ADM incorporated into liposomes relative to the initial amount of ADM in solution as shown in the equation below. The free ADM was separated by elution over a Sephadex G-50 column. Then the free ADM and liposomal ADM were determined by HPLC.
Encapsulation efficiency (%) = [(adriamycin/lipid)encapsulated / (adriamycin/lipid)total] × 100

The liposomal particle sizes were identical before and after drug loading, and the ADM-encapsulated liposomes were very stable at room temperature for three days; there was no significant leakage of ADM from the liposomes. Just before use, normal saline was added to adjust the final concentration of total ADM to 1.0 mg/mL. Free ADM (FADM) was prepared by dissolving ADM in normal saline at the same concentration.
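A minimal sketch of the encapsulation-efficiency formula above follows; the drug-to-lipid values are illustrative placeholders, not measured data.

```python
# Encapsulation efficiency as defined in the text; inputs are hypothetical.
def encapsulation_efficiency(drug_to_lipid_encapsulated: float,
                             drug_to_lipid_total: float) -> float:
    """Percentage of ADM incorporated into liposomes."""
    return 100.0 * drug_to_lipid_encapsulated / drug_to_lipid_total

# With a total drug-to-lipid ratio of 0.25 and, say, 0.246 recovered in the
# liposomal fraction after Sephadex G-50 separation, the efficiency is >98%.
print(encapsulation_efficiency(0.246, 0.25))  # 98.4
```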
Animals and anesthesia
Sixty male Wistar rats weighing 230-270 g (mean, 250 g) were provided by Laboratory Animal Center of Fourth Military Medical University and randomly divided into 4 groups, 15 in each. Sumianxin (Changchun Agricultural Pastoral University) was used as anesthetic.
Establishment of hepatoma model
One milliliter of suspension containing 10^7 Walker-256 (W256) carcinosarcoma cells (Shanghai Medical Industrial Institution) was injected into the thigh muscle of a carrier rat (not included in the experimental rats). One week after inoculation, a palpable tumor was found at the injection site. Viable tumor tissue was excised under sterile conditions and soaked in 20 mL of Hanks balanced salt solution. The tissue was cut into approximately 1 mm × 1 mm × 1 mm fragments. Experimental rats were anesthetized with an intramuscular injection of Sumianxin at 0.2 mL/kg. A median incision beneath the metasternum was made and the liver was exposed. A tumor fragment was implanted into the left liver lobe.
Administration through hepatic artery
On the 7th day after tumor implantation, all animals underwent laparotomy again. The longest (a) and shortest (b) diameters of the tumor were measured, and the tumor volume was calculated as a × b^2 / 2. By the cannulation method described previously [8], normal saline (NS), FADM, LADM, or ADM mixed with blank liposomes (ADM + BL) was injected into the hepatic artery of rats in groups 1-4, respectively. The dose of ADM in each formulation was 2.0 mg/kg body weight, at a concentration of 1.0 mg/mL.
Assessment of therapeutic effect
Tumor growth inhibition: Seven days later, all rats underwent a third laparotomy. The longest (a) and shortest (b) diameters of the tumor were measured again and the tumor volume after drug administration was calculated. The tumor volume ratio (TVR) was calculated as TVR = (tumor volume after administration) / (tumor volume before administration). Tumor necrosis degree: Seven randomly chosen rats in each group were killed and necropsied. The hepatoma was removed completely and fixed in 10% formalin. Three 5 μm thick sections from each tumor were cut on the maximal transverse plane and mounted on glass slides overnight at room temperature. After HE staining, the sections were examined under the microscope. According to the percentage of necrotic area, tumors were graded by the following criteria: Ⅰ, 0%-30%; Ⅱ, 30%-70%; Ⅲ, 70%-100%. Statistical significance was tested by ANOVA and the Dunnett test for tumor growth inhibition, and by the log-rank test for survival time. P < 0.05 was considered statistically significant.
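The outcome measures above reduce to a few lines of arithmetic; the sketch below encodes the volume formula, the TVR, and the necrosis grading, with hypothetical diameters used only for illustration.

```python
# A minimal sketch of the outcome measures defined above; the diameters are
# hypothetical, and the necrosis grades follow the criteria in the text.
def tumor_volume(a_mm: float, b_mm: float) -> float:
    """Volume from the longest (a) and shortest (b) diameters: a * b^2 / 2."""
    return a_mm * b_mm ** 2 / 2.0

def tumor_volume_ratio(volume_after: float, volume_before: float) -> float:
    """TVR = tumor volume after administration / tumor volume before."""
    return volume_after / volume_before

def necrosis_grade(percent_necrosis: float) -> str:
    if percent_necrosis < 30:
        return "I"
    if percent_necrosis < 70:
        return "II"
    return "III"

v_before = tumor_volume(8.0, 6.0)   # at drug administration
v_after = tumor_volume(9.0, 6.5)    # 7 days later
print(tumor_volume_ratio(v_after, v_before), necrosis_grade(75.0))
```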
Tumor growth inhibition
No difference in tumor volume was found among the four groups before treatment (P > 0.05, Table 2). After treatment, the tumor grew rapidly in rats that received NS; the mean tumor volume was 31.55 times greater than before treatment, and metastases were observed in about half of these rats. In contrast, the rate of tumor growth was apparently lower in rats that received FADM or ADM + BL (P < 0.01), with no difference between FADM and ADM + BL (P > 0.05). The slowest tumor growth was found in rats that received ia administration of LADM: the mean tumor volume ratio on the 7th day after LADM infusion was 1.243. Statistics indicated that LADM produced a further significant tumor inhibition compared with FADM or ADM + BL (P < 0.01). In addition, no metastasis was found in rats receiving LADM.
Tumor necrosis degree
Under the microscope, W256 tumor cells of rats that received NS showed extensive proliferation and active mitoses. Small areas of necrosis were observed in the center of the tumor tissue, accompanied by a few inflammatory cells (Table 3). Moderate to severe necrosis was found in tumors of rats that received FADM or ADM + BL. Grade Ⅲ tumor necrosis was found in 4 of 7 tumors after administration of LADM, including 2 cases of complete tumor necrosis.
DISCUSSION
Hepatocellular carcinoma (HCC) is the third most common cause of cancer mortality throughout the world, especially in the Asian-Pacific region, and its incidence continues to increase. On the other hand, HCC has a high expression rate of the multi-drug resistance gene (MDR1) and consequently high levels of P-glycoprotein (P-gp), resulting in its insensitivity to most chemotherapeutic drugs [9]. Few agents show response rates (RRs) above 20%. ADM is one of the most successful anticancer agents to date for the treatment of HCC. Early phase Ⅱ trials and case series
"year": 2006,
"sha1": "d05b9ed5a31024422bbf1246ebf16f2650794966",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v12.i29.4741",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1075f24ef5da87f08fbdc07249b5beb6a716ab13",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hierarchy of coherent structures in turbulent channel flow
By analyzing a database of fully developed turbulent channel flow at the friction Reynolds number Reτ = 4179, we investigate the sustaining mechanism of a hierarchy of coherent structures in the wall-bounded turbulence. For this purpose, we decompose the turbulent fields into different scales by a band-pass filter. Using the filtered velocity and velocity gradients, we identify the hierarchy of coherent structures to observe that the largest-scale structures at each distance from the wall are composed of quasi-streamwise vortices and low-speed streaks. Since these are similar to well-known coherent structures in the buffer layer, they are likely to be maintained by the self-sustaining process. In contrast, structures smaller than the distance from the wall distribute isotropically. These observations are also confirmed by using a conditional sampling method. Moreover, quantifying the scale-dependent contributions to the vortex stretching and energy transfer, we show that the largest-scale coherent structures are strongly affected by the mean shear, whereas smaller-scale vortices are generated by the energy cascading events. Incidentally, in large-scale ejection regions (i.e. large-scale low-speed streaks), the cascading events are stronger than in large-scale sweep regions.
Introduction
Near-wall turbulence is sustained by the so-called self-sustaining process (SSP) [1,2], namely, streamwise vortices induce low- and high-speed streaks by the advection of the streamwise momentum, while an instability induces the meandering of streaks, causing the regeneration of streamwise vortices. It is also known that these quasi-streamwise vortices are inclined in the wall-normal and spanwise directions, and they are located in a staggered array [3]. When the Reynolds number is low, there is no scale separation and the SSP explains the sustaining mechanism of the turbulence very well. However, as the Reynolds number increases, larger-scale vortices appear in addition to the near-wall coherent vortices. We emphasize that we cannot identify larger-scale vortices in terms of the velocity gradients. For example, the yellow objects in figure 1 are the positive isosurfaces of the second invariant of the velocity gradient tensor in turbulent channel flow at the friction Reynolds number Reτ = 4179 (see §2.1 for the details of the database). The visualized vortices are at the smallest scale. On the other hand, looking at the (blue) isosurfaces of the fluctuating streamwise velocity, we can see larger-scale motions. In other words, the quantities related to the velocity are appropriate for extracting the largest-scale structures, whereas those related to its gradients are appropriate for extracting the smallest-scale structures. The large-scale velocity structures have been extensively investigated in relation to large-scale vortices. For example, in a turbulent boundary layer, hairpin vortex packets form the source of bulge structures in the outer layer, which are one type of large-scale motions (LSM) [4,5]. Lee et al. [6] also investigated the relation to the hairpin vortices and showed that the creation of very-large-scale motions (VLSM) is associated with the merging events of hairpin vortex packets. Incidentally, Kevin et al. [7] found that large-scale quasi-streamwise vortices exist along the side of large-scale low-speed structures.
In our previous study [8], we extracted the hierarchy of multiscale vortices in a turbulent boundary layer by coarse-graining the simulated velocity fields. Since coarse-graining enables us to quantify scale-dependent contributions, by evaluating the scale-dependent contributions of the vortex stretching we revealed the generation mechanism of multiscale vortices: vortices as large as the height are stretched by the mean shear, whereas smaller-scale vortices are stretched by the larger-scale vortices. The latter generation mechanism is consistent with a picture of the energy cascade in turbulence in a periodic cube [9,10]. Incidentally, by using over-damped large-eddy simulations (LES), Hwang and his coworkers [11,12,13] extracted the attached eddies in the log and outer layers and showed that they were self-similar structures composed of quasi-streamwise vortices and streaks. The most important ingredient in both our study [8] and the over-damped LES studies [11,12,13] is the analysis of coarse-grained turbulent fields.
In the present study, we investigate the hierarchical structures in turbulent channel flow at Re τ = 4179 [14]. The Reynolds number of the flow is much higher than in the turbulent boundary layer examined in our previous study [8]. The purposes of the present study are (i) to show the similarity in the sustaining mechanism of the hierarchy of vortices in these two wallbounded turbulent flows, and (ii) to show the relationship between vortex generation processes and the energy cascade. For these purposes, we extract the hierarchy not only of vortices but also of the velocity. We showed, for a turbulent boundary layer [8], that small-scale vortices in the log layer were stretched predominantly by the twice-larger-scale vortices, but we did not show how the energy was transferred. In the rest of the present paper, we show the mechanism of the generation of coherent vortices and of energy transfer in the log and buffer layers of fully developed wall-bounded turbulence.
Numerical databases
To reveal hierarchical structures near a wall, we investigate data of a direct numerical simulation of turbulent channel flow at the friction Reynolds number Reτ = 4179 [14]. The simulation was conducted by integrating the Navier-Stokes equations for an incompressible fluid in terms of the wall-normal components of the vorticity and of the Laplacian of the velocity [15]. For the spatial discretization, the Fourier spectral method was used in the wall-parallel directions, whereas seven-point compact finite difference schemes [16] were used in the wall-normal direction. The sides of the computational domain are Lx = 2πh, Ly = 2h and Lz = πh, where x, y and z denote the streamwise, wall-normal and spanwise directions, respectively, and h denotes the channel half-width. This computational domain is large enough to obtain one-point statistics [14] of fully developed turbulence, and the Taylor-length-based Reynolds number at y/h ≈ 0.4 is approximately 200.
Scale-decomposition
To explicitly extract the hierarchy of flow structures in turbulence, we employ a filter corresponding to the Fourier band-pass filter for the velocity. First, we apply a Gaussian filter with scale σ to the fluctuating velocity ǔi to obtain the low-pass filtered velocity ūi^(σ); here, σ denotes the filter scale and C is the coefficient that ensures that the integral of the Gaussian kernel is unity. For the wall-normal direction, we use the method proposed by Lozano-Durán et al. [17], in which the filtering operation is extended beyond the walls by reflecting the fields at the walls and reversing the sign of the wall-normal component. Since ūi^(σ) contains the information for all scales larger than σ, the filter corresponds to a low-pass filter of the Fourier modes of the velocity. We take the difference between the low-pass filtered fields at two different scales, i.e.
ui^[σ] = ūi^(σ) − ūi^(2σ),   (2)

which was also used in our previous study [18] to examine the scale-dependent contributions of the enstrophy production rates in a turbulent boundary layer. This filter corresponds to a band-pass filter of the Fourier modes in the sense that ui^[σ] has contributions from only around the scale σ. We evaluate a scale-decomposed quantity ·^[σ] (for example, the scale-decomposed strain-rate tensor Sij^[σ]) from the band-pass filtered velocity ui^[σ].
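For illustration, a minimal sketch of this band-pass filter follows, assuming it is the difference between Gaussian low-pass filters at σ and 2σ on a uniform grid; a periodic box is used for brevity, and the wall-normal reflection of Lozano-Durán et al. [17] is not implemented here.

```python
# A minimal sketch of the band-pass filter (2) under the stated assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def band_pass(u: np.ndarray, sigma: float, dx: float) -> np.ndarray:
    """u^[sigma] = low-pass(u; sigma) - low-pass(u; 2*sigma)."""
    low_1 = gaussian_filter(u, sigma=sigma / dx, mode="wrap")      # scale sigma
    low_2 = gaussian_filter(u, sigma=2.0 * sigma / dx, mode="wrap")  # scale 2*sigma
    return low_1 - low_2

# Example on a synthetic fluctuating velocity component.
u = np.random.randn(64, 64, 64)
u_band = band_pass(u, sigma=0.2, dx=2.0 * np.pi / 64)
```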
Hierarchy of vortices and low-speed structures
As mentioned in the introduction, when we visualize the positive isosurfaces of the second invariant Q of the velocity gradient tensor, only the smallest-scale vortices are captured (see figure 1a). This is because quantities related to the velocity gradient are predominantly determined by the smallest-scale flow structures. On the other hand, when we visualize, for example, the negative isosurfaces of the fluctuating streamwise velocity, structures as large as the distance from the wall are captured (see figure 1b). This is because quantities related to the velocity are predominantly determined by the largest-scale structures. Figure 1 shows an obvious scale separation between the structures associated with vortices and velocity. Therefore, to extract arbitrary-scale structures, we employ the band-pass filter defined by (2). The band-pass filtered fields visualized in figure 2 show that the structures are hierarchical. We emphasize that we cannot extract them from the simulated field (figure 1) without the filtering. Looking at the largest scale (figure 2a), we notice that vortices are quasi-streamwise but they are also inclined to the wall-normal direction, and that the largest-scale streaks are located alongside these quasi-streamwise vortices. This combination of quasi-streamwise vortices and meandering streaks is reminiscent of the coherent structures near the wall that were found by Jeong et al. [3] for low-Reynolds-number turbulence. The observed largest streaks correspond to a VLSM and the quasi-streamwise vortices to LSM, which were observed by Hwang [11] in the over-damped LES. Moreover, not only for the largest scale but also for smaller scales (figure 2b,c), we observe similar structures. Evidence is shown in figure 3, where we crop them in rectangular boxes whose faces are parallel to the computational domain. We see that the relationship between the vortices and streaks is quite similar irrespective of the scale. In other words, the largest-scale structures at each height (σ ∼ y), i.e. attached structures, are the coherent structures composed of quasi-streamwise vortices and low-speed streaks irrespective of the height y.
Although it is difficult to observe such hierarchical structures in the real space of simulated turbulent fields without scale-decomposition (or coarse-graining), we can easily find the self-similar hierarchy of structures (quasi-streamwise vortices and low-speed streaks) with it. To investigate whether the observed structures are dominant, we evaluate averaged distributions of Q^[σ] and u^[σ] with a given scale σ at a given height yr around the intense vortical structures. For this purpose, under the conditions that Q^[σ] at a fixed height yr is larger than the standard deviation Q^[σ]rms at the same height yr and that the streamwise vorticity ωx^[σ] is positive, we take averages of Q^[σ] and u^[σ] around the points satisfying the condition. Here, the latter condition, ωx^[σ](yr) > 0, is imposed to break the spanwise symmetry. We emphasize that, since this conditional sampling corresponds to a moving average over the points satisfying the condition, the shape of the structures obtained by this conditional average does not correspond to the shape of typical coherent structures. In other words, they are the averaged distributions of Q^[σ] and u^[σ] around the intense structures (Q^[σ](yr) > Q^[σ]rms(yr) and ωx^[σ](yr) > 0), shown in figure 5 (in this figure, the black arrows indicate the center (0, yr, 0) of the reference frame in which the conditional average is taken, the grid width on the wall indicates σ+ (= yr+ = 960), the flow is from lower left to upper right, and the blue arrow indicates the direction of positive ωx). Since the conditionally averaged structure for σ = yr is identical to the one in figure 4(c), we reconfirm that the vortex is inclined to both the wall-normal (see figure 5h) and spanwise directions (see figure 5i). In contrast, detached structures are more spherical (see figure 5a,b,c). However, this observation does not imply that the vortical structures themselves are spherical but implies that they are distributed isotropically. This is consistent with the conclusion by Jiménez [19] that smaller-scale vortices away from the wall decouple from the mean shear and become isotropic. In our previous study [8], we also showed that smaller-scale vorticity became less aligned to the mean-flow stretching direction in a turbulent boundary layer.
Before closing this subsection, we investigate the spatial relationship between small-scale vortices and large-scale structures. For this purpose, we take a sampling of the large-scale (σ+ = 960) fields under the condition that small-scale (σcond+ = 60) vortices away from the wall (yr+ = 960) exist. Here, we set Q^[σcond](yr) > Q^[σcond]rms(yr) as the condition so that we can see the correlation between large-scale structures and intense small-scale vortices. Note that, since the imposed condition cannot break the spanwise symmetry, the obtained structures are always symmetric in the spanwise direction. Figure 6 shows the isosurfaces of the conditionally averaged quantities ωx^[σ], u^[σ] and v^[σ]. Looking at the yellow (positive ωx^[σ]) and green (negative ωx^[σ]) objects and the dot (0, yr, 0) indicated by the black arrow where the small-scale vortices exist, we can see that small-scale vortices away from the wall are more likely to exist in the strong upflow (red) induced by these large-scale streamwise vortices (yellow and green). Note again that this does not necessarily mean the existence of counter-rotating vortices. In addition, since a large-scale low-speed streak exists in the upflow region, large-scale ejection events occur in this region. Thus, intense small-scale vortices are likely to be in large-scale ejection regions. This is consistent with the observation by previous authors [20,21,22] that a cluster of small-scale vortices resides in a large-scale attached ejection.
There are two main results in this subsection. One is that the largest-scale structures at each height (i.e. attached structures) are composed of quasi-streamwise vortices and a low-speed streak (figures 3 and 4). The other is that the orientation of small-scale vortices away from the wall is isotropic (figure 5), and small-scale vortices are more likely to exist in the large-scale ejection regions (figure 6). In other words, in terms of the momentum transfer, intense small-scale vortices are in the large-scale ejection. However, this does not necessarily imply that they are carried from the wall. In the next subsection, we investigate the sustaining mechanism of these small-scale vortices and its relation to the energy cascade.
Sustaining mechanism: vortex stretching and energy cascade
The transport equation for the enstrophy ωi²/2 is expressed by

(1/2) Dωi²/Dt = ωi Sij ωj + ν ωi ∇²ωi,   (3)

where ωi is the vorticity, Sij is the strain-rate tensor and ν is the kinematic viscosity. Only when the vortex stretching term ωi Sij ωj is positive is the enstrophy amplified, because the viscous term ν ωi ∇²ωi weakens the enstrophy. Therefore, to investigate the generation mechanism of vortices, we focus on the vortex stretching term. Since strain rates at all scales simultaneously contribute to the stretching of vorticity at a given scale, we decompose ωi and Sij into various scales and introduce the quantity

gf(σS → σω) = ωi^[σω] Sij^[σS] ωj^[σω].   (4)

Here, ωi^[σω] is the fluctuating vorticity filtered at the scale σω, and Sij^[σS] is the fluctuating strain rate filtered at σS. Hence, gf(σS → σω) indicates the contribution to the production rate of the enstrophy at σω from strain rates at σS. If gf(σS → σω) is positive (or negative), it implies the stretching (or contraction) of the vorticity. Similar quantities were defined by Goto et al. [10] for periodic turbulence and by ourselves [8] for a turbulent boundary layer. We also define the contribution from the mean shear to the stretching of vortices at σω by

gm(σω) = ωi^[σω] S̄ij ωj^[σω],   (5)

where S̄ij is the mean rate of strain, in which S̄12 and S̄21 are the only non-zero components for channel flow. Note that (4) and (5) are evaluated with the band-pass filtered fluctuations (ωi^[σω] and Sij^[σS]) and the mean rate of strain S̄ij. We have quantitatively evaluated these contributions for filtered fields of a turbulent boundary layer [8], and have concluded that vortices smaller than approximately one-fifth of the height are predominantly stretched by strain rates at a scale twice as large as themselves, whereas larger vortices are stretched directly by the mean shear. Here, we examine the ratio

Γω(σS → σω) = ⟨gf(σS → σω)⟩ / ⟨gm(σω)⟩   (6)

between the averages ⟨gf⟩ and ⟨gm⟩ to investigate their respective dominance in the turbulent channel flow. When Γω > 1 (and gm > 0), the vortices with the scale σω are predominantly stretched by strain rates with the scale σS, whereas, when 0 < Γω < 1, the vortices with the scale σω are stretched by the mean shear more significantly than by strain rates at the scale σS. When Γω < 0, they are contracted by strain rates at σS on average. Note that Γω is a function of y (as well as of σS and σω). Here, when taking the average at a fixed y, we impose two conditions. One is the condition that the stretched vorticity is in rotational regions (Q^[σω](y) > Q^[σω]rms(y)). The other is the condition on the sign of the large-scale (σcond+ = 960) wall-normal velocity v^[σcond](y). The latter condition is imposed to examine the origin of the observation that smaller-scale vortices away from the wall are more likely to exist in the largest-scale upflow regions (figure 6).
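A minimal sketch of how gf could be evaluated on gridded fields follows, assuming the band-pass filtered velocities are available (e.g. from band_pass above); the np.gradient differencing and the field shapes are illustrative assumptions, not the scheme used in the actual analysis.

```python
# Scale-dependent enstrophy production (4) on a uniform grid (sketch only).
import numpy as np

def strain_and_vorticity(u, v, w, dx, dy, dz):
    """Return S_ij (shape (3,3)+grid) and omega_i (shape (3,)+grid)."""
    grads = [np.gradient(f, dx, dy, dz) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
    S = np.empty((3, 3) + u.shape)
    for i in range(3):
        for j in range(3):
            S[i, j] = 0.5 * (grads[i][j] + grads[j][i])
    omega = np.stack([grads[2][1] - grads[1][2],
                      grads[0][2] - grads[2][0],
                      grads[1][0] - grads[0][1]])
    return S, omega

def g_f(omega_sw, S_ss):
    """g_f(sigma_S -> sigma_w) = omega_i^[sw] S_ij^[sS] omega_j^[sw]."""
    return np.einsum("i...,ij...,j...->...", omega_sw, S_ss, omega_sw)

# Usage: filter (u, v, w) at sigma_w and sigma_S, then
#   S_ss, _ = strain_and_vorticity(*velocity_at_sigma_S, dx, dy, dz)
#   _, omega_sw = strain_and_vorticity(*velocity_at_sigma_w, dx, dy, dz)
#   production = g_f(omega_sw, S_ss)   # positive: stretching at sigma_w
```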
We show, in figure 7, Γω in the upflow case (black solid lines) and the downflow case (blue dashed lines) for (a) y+ = 960, (b) 240 and (c) 60. We confirm that gm is always positive, and the sign of Γω is determined by the sign of gf. Since the contributions in the upflow and downflow cases are quantitatively similar, we first describe observations common to both cases. Looking at the open circles in figure 7(a) for a small scale (σω+ = 30) at a location (y+ = 960) near the upper boundary of the log layer, the contributions from fluctuating strain rates at scales 1-8 times larger than σω+ (= 30) are dominant (Γω > 1). In particular, the contribution Γω(2σω → σω) from the twice larger scale is the largest. This is also the case for the other small scales σω+ = 60 (open squares) and 120 (open triangles). Moreover, the fact that Γω < 0 for σS ≤ σω/2 implies that the vortices are likely to be contracted by smaller-scale strain rates on average. On the other hand, for vortices (σ ∼ y) whose size is of the order of the distance from the wall, the contributions from the mean shear are more significant than those from the fluctuating strain rates at any scale (Γω < 1). In addition, the results for a location (y+ = 240) around the lower boundary of the log layer (figure 7b) show a similar tendency: small-scale vortices are stretched by the twice larger scale the most, whereas large-scale vortices are stretched directly by the mean shear.
Next, let us look at the contributions in the buffer layer (y+ = 60; figure 7c), where the hierarchy of vortices does not exist. If we refer to σ ∼ y as "large scale", there are only large-scale vortices in the buffer layer. Although, among the contributions from the fluctuating strain rates, the twice larger scale contributes most significantly, the contribution from the mean shear is always more important (Γω(2σω → σω) < 1).
Next, we discuss the difference between the large-scale upflow and downflow. The values of Γω(2σω → σω) indicated by the black solid lines (in the upflow case) for small-scale vortices in the log layer (i.e. σω+ = 30 in figure 7b and σω+ = 30, 60 and 120 in figure 7c) are larger than those indicated by the blue dashed lines (in the downflow case). The results show that the contributions from the fluctuations (rather than the mean shear) in upflow regions are larger than those in downflow regions. This is reasonable because, in upflow regions, the source of the vorticity is more likely to be carried from the wall. In other words, small-scale vortices in the large-scale low-speed streak (i.e. the large-scale ejection region) are not simply advected by the largest-scale vortices but are stretched by 1-8 times larger vortices.
We have so far evaluated the scale-dependent enstrophy production rates and have shown that small-scale vortices away from the wall are stretched predominantly by the twice larger-scale vortices rather than by the mean shear. This generation mechanism of the hierarchy of vortices seems consistent with the notion of the energy cascade, namely, the scale-by-scale energy transfer from larger to smaller scales. However, the creation of smaller-scale vortices does not necessarily correspond to the energy transfer to smaller scales. This issue was investigated in Refs. [9,23] for periodic turbulence, but here we examine a different approach. The transport equation of the (mean) turbulent kinetic energy K = ⟨ǔiǔi⟩/2 is given by

∂K/∂t = −∂/∂xj { ūj K + (1/2)⟨ǔjǔiǔi⟩ + (1/ρ)⟨ǔj p̌⟩ − 2ν⟨ǔi Šij⟩ } − ⟨ǔjǔi⟩ ∂ūi/∂xj − 2ν⟨Šij Šij⟩,   (7)

where Šij is the fluctuating strain-rate tensor (∂ǔj/∂xi + ∂ǔi/∂xj)/2. The terms in ∂/∂xj{···} in (7) are the mean-flow advection, turbulent advection, velocity-pressure correlation, and viscous diffusion terms, respectively. The last term, −2ν⟨ŠijŠij⟩, is the viscous dissipation term. These terms do not contribute to the production of the turbulent energy. On the other hand, −⟨ǔjǔi⟩ ∂ūi/∂xj represents the production of K due to the mean flow. To investigate inter-scale energy transfer, we must decompose the velocity in (7) into scales. Recently, Kawata and Alfredsson [24] used such a decomposition to analyze the inter-scale transfer of the Reynolds stress in plane Couette flow. They decompose the velocity into large-scale ui^(L) and small-scale ui^(S) parts by sharp Fourier filtering in the spanwise wavenumber. Since the Fourier filter is orthogonal, the cross-correlation ⟨ui^(L) ui^(S)⟩ vanishes, and the transport equation of the energy can be derived separately for the large- and small-scale parts. In the transport equation of the small-scale energy K^(S), two inter-scale interaction terms appear: one is related to the spatial redistribution of K^(S), and the other expresses the energy transfer between the large and small scales; in particular, −ui^(S) uj^(S) Sij^(L) is the energy transfer from the large to small scales, while −ui^(L) uj^(L) Sij^(S) is that from small to large scales. Since we use the Gaussian filter in the present study, our decomposition is not orthogonal. Therefore, the transport equation for the turbulent energy at a given scale can be complicated. However, taking into account that −ui^(S) uj^(S) Sij^(L) implies the energy transfer from the large to the small scale, we introduce the quantity

tf(σS → σu) = −ui^[σu] uj^[σu] Sij^[σS]   (11)

and interpret this as the energy transfer rate from strain rates with the scale σS to the σu-scale energy. If tf is positive, the σS-scale flow transfers energy to the σu-scale flow; otherwise, the σS-scale flow reduces the energy at the scale σu. Similarly to the investigation of the scale-dependent enstrophy production rates, we also define the energy transfer from the mean flow to a given scale σu, i.e.
tm(σu) = −ui^[σu] uj^[σu] S̄ij,   (12)

and the ratio

Γu(σS → σu) = ⟨tf(σS → σu)⟩ / ⟨tm(σu)⟩   (13)

between the two averages. With a similar interpretation to (6), when Γu > 1 (and tm > 0), the σS-scale flow transfers more energy to the σu-scale flow than the mean flow does, whereas, when 0 < Γu < 1, the mean flow transfers more energy to σu-scale structures. When Γu < 0, the σu-scale energy is reduced by the σS-scale flow on average. Therefore, by evaluating the scale-dependent energy transfer rates, we can discuss the inter-scale and mean-flow energy transfer. Similarly to the enstrophy production rate, we take the averages at a fixed y conditioned by the sign of the large-scale (σcond+ = 960) wall-normal velocity v^[σcond](y). We show, in figure 8, Γu evaluated for (a) y+ = 960, (b) y+ = 240 and (c) y+ = 60. We have confirmed that tm for the parameters shown in this figure is always positive. The results for y+ = 960, which is in the log layer, show that Γu is similar to Γω shown in figure 7. For example, for the small-scale energy (σu+ = 60, 120 and 240) in both the upflow and downflow cases, the twice larger-scale strain rate contributes the most. Here, for σu+ = 30, we cannot observe the trend that the contribution from the twice larger scale is the largest because this scale is too small (σu/η < 10, where η is the Kolmogorov scale). Moreover, the contributions from smaller scales (σS < σu) are negative. This implies that the smaller scale (σS) reduces the energy at the larger scale (σu); in other words, the direction of the energy transfer is forward on average, which implies that the creation of small-scale vortices indeed corresponds to energy cascading events. On the other hand, for "large scales" (σu+ = 480), although the contributions from the twice larger scale are the largest among the fluctuations, the energy transfer from the mean flow is more important (Γu < 1). The results for the other heights, y+ = 240 (in the lower log layer) and y+ = 60 (in the buffer layer), are similar to the results for y+ = 960, if we refer to the scales comparable to the distance from the wall as "large scales". Namely, at any height in turbulent channel flow, the large-scale energy is transferred directly from the mean flow, whereas smaller-scale energy is generated by the energy cascading process.
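Analogously to the enstrophy production sketch above, the quantities (11)-(13) reduce to tensor contractions of band-pass filtered fields; the sketch below assumes fields shaped (3, Nx, Ny, Nz) and uses a plain volume mean in place of the conditional average at fixed y.

```python
# A minimal sketch of the energy-transfer quantities (11)-(13).
import numpy as np

def t_f(u_su, S_ss):
    """t_f(sigma_S -> sigma_u) = -u_i^[su] u_j^[su] S_ij^[sS]."""
    return -np.einsum("i...,j...,ij...->...", u_su, u_su, S_ss)

def t_m(u_su, S_mean):
    """t_m(sigma_u) = -u_i^[su] u_j^[su] mean-S_ij; S_mean has shape (3, 3)."""
    return -np.einsum("i...,j...,ij...->...", u_su, u_su, S_mean)

def gamma_u(u_su, S_ss, S_mean):
    """Ratio (13): <t_f> / <t_m> (plain mean stands in for the average)."""
    return t_f(u_su, S_ss).mean() / t_m(u_su, S_mean).mean()
```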
Incidentally, in the log layer (for example, the open circles in figure 8a), the comparison between the black solid lines and the blue dashed lines shows that the energy cascading process is more active in large-scale upflow regions than in downflow ones. This is similar to the observation in figure 7, which shows the contributions of the enstrophy production rates. These results also support the idea that the energy cascading process sustains small-scale structures in the region away from the wall.
It is also important to observe again that the balances of the scale-dependent contributions to the vortex stretching (figure 7) and to the energy transfer (figure 8) are similar to each other. This result gives direct evidence that the vortex-stretching process plays a major role in the energy cascade [9,10,23,25,26]. Although we have interpreted gf and tf as the enstrophy production and the energy transfer between two scales, this aspect of our work needs more careful consideration. In particular, it is an important future issue to examine spatial correlations between the enstrophy production at a given scale and the energy transfer to that scale.
Conclusions
To reveal the hierarchy of coherent structures in wall-bounded turbulence and to understand its sustaining mechanism, we have analyzed the database of fully developed turbulent channel flow at Re τ = 4179 [14]. The key to our analyses is the scale decomposition using a band-pass filter, which is composed of the combination (2) of the Gaussian filters at two different scales.
Using the band-pass filtered velocity and its gradients, we visualized the hierarchy of vortices and low-speed structures (figure 2). An important observation is that the largest-scale structures at each height are composed of low-speed streaks and quasi-streamwise vortices located at the sides of the streak in a staggered manner (figure 3). The structures (figure 4) obtained by the simple conditional average are consistent with this observation. These observations are also similar to those in the over-damped LES [11,12,13]. Moreover, since the observed structures (figure 3) are similar to the coherent structures in the buffer layer [3], the present result suggests that the hierarchy of quasi-streamwise vortices and low-speed streaks is maintained by the SSP. Here, we reemphasize that these quasi-streamwise vortices are the largest vortices at each height in the sense that their size is comparable to the distance from the wall. By evaluating the scale-dependent contributions of the vortex stretching (6) and the energy transfer (13), we have shown that these largest-scale vortices are stretched predominantly by the mean shear (figure 7) and that the largest-scale energy is transferred directly from the mean shear (figure 8). These contributions of the mean shear may be incorporated in the SSP. In contrast, smaller-scale vortices (i.e. smaller than the distance from the wall) are stretched by strain-rate fields at scales one to eight times larger (for example, see the open symbols in figure 7a). In particular, the contribution from the twice larger scale is most significant. These trends are similar to the results for the energy transfer (for example, see the open symbols in figure 8a). Incidentally, vortices at a given scale are contracted by smaller scales (for example, see the open symbols for σS < σω in figure 7a), and at the same time, the energy at the given scale is reduced by the smaller scales (for example, see the open symbols for σS < σu in figure 8a). These results in terms of the vortex stretching and the energy transfer are consistent with the picture of the energy cascade. In addition, we have shown that the cascading events are stronger in the large-scale upflow (ejection) regions. This is because the source of the vorticity, rather than the small-scale vortices themselves, is carried from the wall.
Thanks to the analysis using the database of high-Reynolds-number turbulent channel flow [14], we have developed our understanding of the sustaining mechanism of wall-bounded turbulence. However, several issues remain for the future. (i) Although the analyses of the vortex stretching are reasonable and consistent with the picture of the energy cascade, the arguments regarding the inter-scale energy transfer are preliminary. We need further examination, in particular, of the physical meaning of the quantity defined by (11). It is also unclear in the present analysis whether or not the energy transfer to smaller scales occurs at the same spatial locations where smaller-scale vortices are stretched and amplified. (ii) The largest-scale structures at each height are likely to be affected by each other. We need to investigate the interactions between them. (iii) We have shown the generation mechanism of the hierarchy of multi-scale vortices in the log and buffer layers. The results shown for the channel flow are similar to those for a turbulent boundary layer [8]. However, we have not yet shown the sustaining mechanism for vortices in the outer layer. This is an interesting problem because there would appear to be a difference between channel flow and turbulent boundary layers.
From pre-COPD to COPD: a Simple, Low cost and easy to IMplement (SLIM) risk calculator
Extract
COPD is defined as a heterogeneous lung condition characterised by chronic respiratory symptoms (dyspnoea, cough, expectoration and/or exacerbations) due to abnormalities of the airways (bronchitis, bronchiolitis) and/or alveoli (emphysema) that cause persistent, often progressive, airflow obstruction [1]. COPD has a long latency period and, once the diagnosis is established, modifying disease progression has proven difficult [2].
Introduction
COPD is defined as a heterogeneous lung condition characterised by chronic respiratory symptoms (dyspnoea, cough, expectoration and/or exacerbations) due to abnormalities of the airways (bronchitis, bronchiolitis) and/or alveoli (emphysema) that cause persistent, often progressive, airflow obstruction [1]. COPD has a long latency period and, once the diagnosis is established, modifying disease progression has proven difficult [2]. Characterising this transition requires longitudinal clinical spirometric data in young and middle-aged adults. Chronic airflow limitation (CAL) is often used as a surrogate marker of COPD incidence in long-term epidemiological studies [4-8]. From these studies, we have learned that the estimated incidence in ever-smokers ranges between 13% and 22%. Predicting who will develop CAL can provide an opportunity to inform those subjects at risk and potentially design effective disease-modifying interventions at early disease stages [4].
Different studies have shown that the risk for CAL is increased among subjects with poor lung growth and a low maximal forced expiratory volume in 1 s (FEV1) attained by the fourth decade of life [5,9,10], with the amount of smoking [10], the presence of chronic mucous production [10,11], a low baseline value of the diffusing capacity of the lung for carbon monoxide (DLCO) [12], a rapid rate of FEV1 decline [13] or abnormalities on chest computed tomography (CT) [14]. However, except for chronic mucous production and smoking, which are easily obtainable, clinicians would need to perform yearly spirometry to determine the FEV1 slope, obtain a chest CT or measure DLCO to identify possible cases [3].
We hypothesised that a more practical alternative could be integrating variables easily obtainable by any clinician and building a predictive model to identify ever-smokers with baseline normal spirometry more likely to develop CAL. We tested this hypothesis using the prospectively collected data from the Lovelace Smokers' Cohort (LSC). This longitudinal, well-characterised, observational cohort studies the factors associated with CAL development among ever-smokers. Analysis of data from the LSC has helped identify lung function trajectories [5,13], spirometric variability over time [15] and individual risks for CAL development [16-18].
Study design, setting and population
The LSC recruited 2273 individuals from the Albuquerque area (NM, USA) aged 40-75 years with ⩾15 pack-years of smoking from 2001 to 2015. Subjects are followed every 18 months with questionnaires, anthropometric measurements and pulmonary function tests [18]. The Western (20031684) and Mass General Brigham (protocol 2020P003513) Institutional Review Boards approved the study, and all participants provided informed consent.
Eligibility criteria
To study CAL incidence, we included all current and ex-smokers with a post-bronchodilator FEV1/forced vital capacity (FVC) ratio ⩾0.7 and FEV1 ⩾80% predicted, who were followed for at least 3 years and performed at least three spirometries measured at least 1 year apart. We used the 3-year and three-spirometry criteria because the progression to COPD based on FEV1 measurements can vary [19,20] and three spirometries help establish a reliable progression pattern [15,21].
Study measurements
Demographics, anthropometrics and smoking status were obtained by trained personnel at baseline and each visit. Body mass index (BMI) was calculated in kg·m−2. Dyspnoea was evaluated using the modified Medical Research Council dyspnoea scale (mMRC) [22], and quality of life using the St George's Respiratory Questionnaire (SGRQ) [23] and the adult American Thoracic Society Division of Lung Disease-78 questionnaire [24]. Chronic bronchitis was defined as the persistence of cough and phlegm for 3 months for at least 2 years. Self-reported comorbidities, including asthma, were ascertained during the initial visit.
Outcomes
To determine incident CAL, we included only those subjects with normal spirometry at baseline. Lung function trajectories were determined using all available spirometries rather than just the first and last measurements. Four possible trajectories were defined based on each visit's spirometric classification: 1) sustained normal spirometry included subjects with normal spirometry at every visit; 2) incident CAL included those who transitioned to and remained in any spirometric GOLD stage in the subsequent visits; 3) incident PRISm (preserved ratio impaired spirometry) included those who transitioned to and remained in the PRISm classification; and 4) an "Unstable" category referring to subjects with fluctuating spirometric patterns without a clear trajectory (see supplementary table E2 for a graphic representation).
Statistical analysis
Summary statistics included means, standard deviations, medians and interquartile ranges for continuous variables and proportions for categorical variables. Chi-squared and Fisher's exact tests were used to analyse categorical variables, while the two-sample t-test was used for continuous variables.
Derivation of a prediction model for incident CAL
A multivariable logistic regression model was used to evaluate parameter estimates and odds ratios with 95% confidence intervals for factors associated with the incidence of CAL. Based on prior knowledge and clinical plausibility, we included age [10], gender [10], Hispanic ethnicity [16], pack-years of smoking [10], current smoking status [10,29], level of education, chronic bronchitis [11], SGRQ scores, mMRC score [30], history of asthma [10,29,31], having received a diagnosis of COPD, BMI, baseline post-bronchodilator FEV1/FVC, FEV1 % pred and FVC % pred as candidate predictors. The least absolute shrinkage and selection operator (LASSO) with cross-validation was used as the variable selection method for our final models. We calculated that a conservative sample size of at least 368 subjects was needed to minimise model overfitting and to target precise estimates, based on the use of 15 candidate variables, an estimated incidence of 15% for the primary composite outcome and an expected Nagelkerke's R² of at least 0.57 [32].
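As an illustration of the selection step, the sketch below uses scikit-learn's cross-validated L1-penalised logistic regression as a stand-in for LASSO; the synthetic predictors and outcome are placeholders for the 15 candidate variables and incident CAL.

```python
# Illustrative LASSO-style variable selection (sketch, synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(400, 15)),
                 columns=[f"predictor_{i}" for i in range(15)])
y = (X["predictor_0"] + 0.5 * X["predictor_1"]
     + rng.normal(size=400) > 1.0).astype(int)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="saga", cv=5,
                         scoring="roc_auc", max_iter=5000),
)
model.fit(X, y)

# Predictors with non-zero coefficients are the "selected" variables.
coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
print([c for c, b in zip(X.columns, coefs) if abs(b) > 1e-8])
```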
To increase clinical applicability, a second model was derived. Continuous variables were dichotomised at threshold values calculated using a recursive partitioning decision tree with incident CAL as the outcome. Recursive partitioning splits continuous predictors by optimising the cutting value based on the LogWorth statistic [33]. Each model was evaluated for discrimination and calibration, and the Youden index [34] was applied to choose the optimal probability threshold used to define a case.
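A minimal sketch of the Youden-index threshold choice follows, assuming predicted probabilities from a fitted model such as the one sketched above.

```python
# Choose the case-defining probability threshold by the Youden index
# (J = sensitivity + specificity - 1 = tpr - fpr).
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, p_pred):
    fpr, tpr, thresholds = roc_curve(y_true, p_pred)
    return thresholds[np.argmax(tpr - fpr)]

# e.g. threshold = youden_threshold(y, model.predict_proba(X)[:, 1])
```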
External validation
The external validation population was drawn from eligible participants in the COPDGene study (www.COPDGene.org) [35], and none of the subjects were represented in both cohorts. COPDGene (ClinicalTrials.gov: NCT00608764) recruited 10 371 smokers with at least 10 pack-years of cigarette smoking at 21 US clinical centres between 2007 and 2011. COPDGene collects longitudinal data on study participants at 5-year intervals, with the 10-year study visit (Visit 3) ongoing. For this analysis, we included the 830 subjects who completed Visit 3 (having three spirometries) (supplementary figure E1). All coefficients from the derivation model in the LSC were precisely applied in this sample. We estimated the discrimination and calibration in the external validation cohort by setting the same probability threshold chosen in the derivation cohort. COPDGene was approved by Institutional Review Boards at all study centres and all subjects signed written informed consent.
Sensitivity and secondary analysis
We repeated the analyses using the lower limit of normal (LLN) to define CAL, using the NHANES III values as the reference (supplementary table E1b) [28].
To better inform CAL evolution, we calculated each subject's FEV1 and FVC slopes using linear regression and expressed them in mL per year [19]. The slopes of FEV1 and FVC were compared between incident CAL and those with persistent normal spirometry. Also, we determined longitudinal changes in BMI, smoking quitting and relapse rates, symptom progression by the mMRC, and SGRQ scores over the observation period using a mixed model for repeated measures. For these models, the group membership (incident CAL and persistent normal spirometry), time in years and their interaction were entered as the fixed effects, while each subject was the random effect. Previous reports have demonstrated that missingness in the LSC and COPDGene cohorts is completely at random; in this analysis, missingness was <25% and we used single imputation to complete the dataset. For hypothesis testing, p⩽0.05 was deemed statistically significant. We used SAS JMP Pro version 16.1.0 (SAS Institute, Cary, NC, USA) and R version 4.0.3 (R Foundation for Statistical Computing, Vienna, Austria).
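The per-subject slope calculation is a simple least-squares fit; the sketch below shows it for a hypothetical subject with five spirometries, returning the FEV1 slope in mL per year.

```python
# Ordinary least-squares slope of FEV1 (mL) against visit time (years).
import numpy as np

def fev1_slope_ml_per_year(times_yr, fev1_ml):
    slope, _intercept = np.polyfit(times_yr, fev1_ml, deg=1)
    return slope

# Hypothetical subject: five spirometries over about 6 years.
times = np.array([0.0, 1.5, 3.0, 4.5, 6.0])
fev1 = np.array([3100.0, 3060.0, 3010.0, 2980.0, 2930.0])
print(fev1_slope_ml_per_year(times, fev1))  # about -28 mL per year
```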
Results
Among the 1087 LSC participants with at least three spirometries and at least 3 years of follow-up, 677 (30%) had normal baseline spirometry and were included in this analysis (supplementary figure E2). These subjects had a mean age of 54 years and 35 pack-years of smoking; 51% were current smokers, 82% were female and 18% were Hispanic (table 1). On average, these subjects had five spirometries measured over 6.3 years of observation. For the COPDGene validation cohort, 830 participants met the inclusion criteria (supplementary figure E1). These subjects had a mean age of 57 years, 52% were female and 33% were African American (table 1).
Incidence of CAL in the LSC
Over the observation period, 110 subjects (16%) developed incident CAL, 489 (72%) maintained normal spirometries, 15 (2%) transitioned to PRISm and 63 (9%) demonstrated an "Unstable" pattern (figure 1). Thus, the incidence rate for CAL in the LSC was 26 cases per 1000 person-years. 19% of the symptomatic (n=70) and 13% of the asymptomatic (n=40) participants evolved to CAL (figure 1), and by the end of the observation period, 76 (69%) of the 110 with incident CAL were symptomatic, reflecting that six asymptomatic participants developed new symptoms during the observation period. The comparison of the baseline characteristics between subjects with and without incident CAL is presented in table 2 for the LSC cohort. As seen in supplementary figure E3, most of the subjects who developed CAL (red dots) were clustered close to the FEV1/FVC 0.70 and FEV1 80% predicted thresholds.
Derivation and validation of a prediction model for incident CAL
From the 15 candidate predictors of CAL incidence (supplementary table E4a), FEV1/FVC, pack-years, age, BMI, FEV1 % pred, FVC % pred, a diagnosis of COPD, a history of chronic bronchitis and education level were the candidate predictors selected by LASSO. In the multivariate model, FEV1/FVC, BMI, age, FEV1 % pred and a history of chronic bronchitis were the best predictors of incident CAL in the derivation cohort. This model's discrimination and calibration characteristics are presented in table 3 (model 1). The prediction formula, graphic profiler for each predictor and predictor contribution index table are shown in supplementary material C. To facilitate its practical use, we built a model in which the six continuous candidate variables (age, BMI, pack-years, FEV1/FVC, FEV1 % pred and FVC % pred) were dichotomised at thresholds determined by recursive partitioning. The resulting optimal split values were 55 years for age, 25 kg·m−2 for BMI, 30 pack-years for smoking, 0.75 for FEV1/FVC, 100% for FEV1 % pred and 95% for FVC % pred. We applied LASSO to select the new transformed variables (supplementary table E4b). We found that FEV1/FVC 0.70-0.75, ⩾30 pack-years of cumulative smoking, BMI ⩽25 kg·m−2, FEV1 80-100% predicted and a history of chronic bronchitis were significant predictors of incident CAL (table 4: model 2); the model's characteristics are listed in table 3. However, a more parsimonious model was evaluated by excluding FEV1 80-100% predicted from the previous model due to collinearity with FEV1/FVC (table 4: model 3). In this simpler model, the 6-year probability of developing CAL in a subject with all four predictors is 85%, compared with only 2% for those without any predictors (figure 2). As seen in table 3, the area under the receiver operating characteristic curve (AUC ROC) is 0.84 (95% CI 0.81-0.89). With the optimal threshold classifying as a case those with a probability ⩾16% based on the probability formula presented in supplementary material D, the sensitivity is 0.72, specificity 0.85, positive predictive value 0.52 and negative predictive value 0.93, with a misclassification rate of 0.17.
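To illustrate how the simplified four-predictor model could be applied in a calculator, the sketch below scores a subject with a logistic formula; the intercept and coefficients are hypothetical placeholders chosen only to reproduce the 85%/2% endpoints quoted above, and the published formula is in supplementary material D.

```python
# Illustrative scoring of the simplified SLIM model (hypothetical coefficients).
import math

def slim_probability(fev1_fvc_070_075: bool, packyears_ge_30: bool,
                     bmi_le_25: bool, chronic_bronchitis: bool) -> float:
    """Estimated 6-year probability of incident CAL (illustrative only)."""
    beta0 = -3.9                      # hypothetical intercept
    betas = (2.4, 1.1, 0.9, 1.2)      # hypothetical coefficients
    flags = (fev1_fvc_070_075, packyears_ge_30, bmi_le_25, chronic_bronchitis)
    logit = beta0 + sum(b for b, on in zip(betas, flags) if on)
    return 1.0 / (1.0 + math.exp(-logit))

# A subject is flagged as high risk when the probability is >= 0.16.
p = slim_probability(True, True, False, True)
print(round(p, 3), p >= 0.16)
```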
External validation of the prediction model
The LSC's simpler model with dichotomised variables was validated in the COPDGene cohort (n=830). Over the 10 years of follow-up, 146 subjects (18%) met the criteria for incident CAL (incidence rate of 18 cases per 1000 person-years). The comparison of the baseline characteristics between subjects with and without incident CAL in the COPDGene cohort is presented in supplementary table E3. We tested the model performance in this cohort using the derived prediction formula and the estimated probability of ⩾16% for a positive case. The results are displayed in table 3.
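Operationally, the validation step amounts to scoring each COPDGene participant with the derivation formula and classifying those with predicted probability ⩾16% as cases. A minimal sketch of the confusion-matrix metrics reported in table 3 (names are illustrative; edge cases with empty cells are ignored for brevity):

```python
def classification_metrics(y_true, p_hat, threshold=0.16):
    """Sensitivity, specificity, PPV, NPV and misclassification rate
    for binary outcomes y_true given predicted probabilities p_hat."""
    pred = [p >= threshold for p in p_hat]
    tp = sum(y and yh for y, yh in zip(y_true, pred))
    tn = sum((not y) and (not yh) for y, yh in zip(y_true, pred))
    fp = sum((not y) and yh for y, yh in zip(y_true, pred))
    fn = sum(y and (not yh) for y, yh in zip(y_true, pred))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "misclassification": (fp + fn) / len(y_true),
    }
```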
Sensitivity and secondary analyses
Analyses using spirometric classification by the LLN offered similar results to those obtained using the GOLD spirometric classification (supplementary material E). Finally, the analyses of longitudinal changes in lung function, smoking status, BMI and symptoms are presented in supplementary material F.
Discussion
The present study shows that a combination of simple spirometry-derived parameters (FEV 1 /FVC) with cumulative pack-year history of smoking, BMI and a history of chronic bronchitis provides a reliable estimate of the risk of developing CAL within 6 years in ever-smokers. The prediction model was first derived from the LSC; many of its individual components are known to increase the risk of CAL [10, 29, 36], and here we replicated and combined them to build a single practical risk calculator able to identify high-risk subjects likely to develop CAL. In addition, we then validated the model in a different cohort of at-risk smokers, the COPDGene study, obtaining comparable results.
Currently, most patients diagnosed with COPD are in their seventh or eighth decade of life, when management offers a limited impact on the natural progression of the disease. In COPD, as in other non-communicable chronic diseases [37-39], efforts are being made to identify patients with earlier stages of the disease. This concept has been adopted recently by the GOLD initiative to identify those individuals with a high likelihood of developing poorly reversible airflow limitation, labelled as "pre-COPD" [1, 40]. Pre-COPD is defined by the presence of respiratory symptoms, structural lung lesions or physiological abnormalities (including low-normal FEV 1 , gas trapping, hyperinflation, reduced D LCO or rapid FEV 1 decline) without airflow limitation (FEV 1 /FVC ⩾0.7) [1, 40]. In the LSC cohort, only 54% of the participants at baseline met the criteria for pre-COPD, albeit without data from chest CT, D LCO or exacerbation-like events (table 1). In our study, FEV 1 /FVC between 0.70 and 0.75 was the strongest predictor for incident CAL (table 3), making a strong case to add this criterion to the pre-COPD definition. When FEV 1 /FVC between 0.70 and 0.75 is combined with a cumulative smoking history of ⩾30 pack-years, BMI ⩽25 kg•m −2 and a history of chronic bronchitis, the probability of developing CAL reaches 85%. Using the colour-coded, easy-to-use chart (figure 2), or using the logistic regression formula in supplementary material C and D integrated into an online or clinical decision calculator, the risk can be easily computed in many different settings.
Evidence from cohorts of smokers at risk has shown a relationship between incident CAL and the existence of symptoms (cough and phlegm), low FEV 1 % pred, rapid rate of lung function decline, low D LCO , low BMI and emphysema detected with chest CT scans [10, 41, 42]. Currently, a chest CT is hard to justify if the subject does not meet lung cancer screening criteria, establishing the rate of FEV 1 decline with repeated spirometries over time is time- and resource-consuming, and the D LCO may not be readily available, particularly in low- or middle-income countries with resource-limited healthcare systems [43]. We recognise that the addition of chest CT and D LCO could improve the accuracy of our model and expand case detection not only for CAL but also for more structural phenotypes. Nevertheless, our model offers a practical and economical clinical tool for prognostication in ever-smokers, or a case-finding tool for recruitment into the much-needed trials to prevent further progression to CAL, as exemplified in the algorithms presented in figure 3 and supplementary figure E4.
Having FEV 1 /FVC between 0.70 and 0.75 suggests a dissociation between expiratory airflow (FEV 1 ) and lung volume (FVC), as described in the Tasmanian Longitudinal Health Study [44], where individuals with lower FEV 1 /FVC trajectories from age 7 to 53 years had lower BMI, higher cumulative smoking and chronic sputum production [44]. Our observation could represent a snapshot of the evolution of some of these trajectories over 6 years in those who reach middle age and beyond.
Our results also support the concept of a dose-response relationship for cumulative smoking (pack-years), as well as the benefit of quitting smoking early, which decreases the risk for CAL [9, 45]. A novel and interesting finding was the risk conferred by a lower BMI, a known predictor of worse outcomes in patients with established COPD [46-48] but less acknowledged as a risk factor for incident CAL [49]. Importantly, BMI did not change over the 6.3 years of observation, suggesting that, on average, lung disease progression was not accompanied by cachexia (supplementary material F). This indicates that the subjects were already in the lower BMI range at an earlier stage of life, an observation supported by longitudinal studies that included BMI measurements at a younger age [50-52].
A history of chronic cough and phlegm, asthma or being an active cigarette smoker did not hold in our final model as strong predictors of CAL. Although this differs from the findings of the CARDIA, SAPALDIA, UK MRC cohort and Copenhagen City Heart studies, the baseline spirometric values were not included in the prediction models of those studies [11, 53-55]. Our study agrees with that of the TESAOD cohort, where baseline spirometry was included in the model, and there, FEV 1 /FVC at baseline was a strong predictor of future CAL [56].
There are several strengths to our study. The model is derived from a well-phenotyped cohort of at-risk subjects, with multiple spirometries over a median of >6 years. The final CAL predictors were externally validated in a second independent cohort with similar model performance. However, we also acknowledge important limitations. First, our model is far from perfect, with 40% of variance explained and acceptable performance in predicting this low-probability event (26 cases per 1000 person-years); it is possible that the addition of chest CT imaging (emphysema, air trapping or dysanapsis) or low D LCO could improve the model, but at the expense of more complexity and cost. Moreover, SMITH et al. [57], using data from CanCOLD, showed that participants with dysanapsis assessed by chest CT had a higher incidence of CAL; interestingly, they also had lower baseline FEV 1 /FVC, a finding indicating that perhaps these are surrogate markers of the same process. Second, we cannot assume that those who maintained non-obstructed spirometry over 6 years of observation will remain normal if followed for a more extended period. Third, in addition to not having chest CT and D LCO , we did not capture "exacerbation-like" events in our questionnaires; therefore, our estimate of 54% of participants meeting the pre-COPD definition is perhaps an underestimation (table 1). Fourth, the LSC cohort is mostly comprised of females (82%), with an important representation of Hispanics of Mexican origin (18%) and no African Americans, all of which affects external validation. We observed a drop in the AUC ROC in the validation cohort, which could reflect these differences in baseline characteristics (table 1). However, an AUC ROC of 0.77 in the external validation cohort is good, particularly considering that we are predicting a low-incidence event.
Importantly, the confidence intervals of both derivation and validation AUC ROC have some overlap, suggesting the truth is somewhere in the vicinity of the reported values (table 3).
In conclusion, in two different cohorts of at-risk smokers, we found that FEV 1 /FVC 0.70-0.75, smoking history ⩾30 pack-years, BMI ⩽25 kg•m −2 and FEV 1 80-100% predicted provide a reasonable estimate of the risk of developing chronic airway obstruction. The variables included in the model are simple to obtain and provide an objective estimate to identify pre-COPD subjects.
FIGURE 2 Prediction estimates for the incidence of chronic airway limitation (CAL). The probabilities were calculated with the prediction formula derived and presented in supplementary material D. The dashed line represents the cut-off threshold of >16% for case (CAL) assignment (see text for details). FEV 1 : forced expiratory volume in 1 s; FVC: forced vital capacity; BMI: body mass index.
FIGURE 3 Proposed algorithm to apply our model to calculate the risk of incident chronic airway limitation (CAL) in the clinical setting. BMI: body mass index; CT: computed tomography; FEV 1 : forced expiratory volume in 1 s; FVC: forced vital capacity. [Flow chart: for a current or ex-smoker, quantify cumulative smoking (pack-years), check for symptoms of chronic bronchitis, measure BMI, obtain spirometry, counsel for smoking cessation and evaluate appropriateness for lung cancer screening CT; if spirometry shows 0.70 ⩽ FEV 1 /FVC < 0.75, calculate the risk using figure 2.]
TABLE 1
Baseline characteristics of the Lovelace Smokers' Cohort (LSC) (derivation cohort) and the subjects of the COPDGene cohort (validation cohort). Data are presented as mean±SD, n (%) or median (interquartile range). BMI: body mass index; FEV 1 : forced expiratory volume in 1 s; FVC: forced vital capacity. All subjects in both cohorts had normal lung function at baseline, performed at least three spirometries, were followed for at least 3 years and had ⩾15 pack-years of cumulative smoking.
Pre-COPD is defined as having normal spirometric results and either chronic sputum production, cough or modified Medical Research Council dyspnoea scale (mMRC) score >2 points at baseline [41]; chest computed tomography or exacerbation-like events were not available. During the baseline evaluation, there were missing data for cough (n=69) and mMRC (n=43).
TABLE 2
Comparison of baseline characteristics between subjects with incident chronic airflow limitation (CAL) and those who maintained normal lung function at the end of the observation period in the Lovelace Smokers' Cohort. Data are presented as mean±SD, n (%) or median (interquartile range). BMI: body mass index; FEV 1 : forced expiratory volume in 1 s; FVC: forced vital capacity; mMRC: modified Medical Research Council dyspnoea scale; SGRQ: St George's Respiratory Questionnaire. # : applies to those who quit smoking at baseline evaluation; ¶ : baseline data were missing in 15 subjects with incident CAL and 100 subjects who remained within the normal lung function range.

FIGURE 1 Cumulative incidence of subjects with chronic airflow limitation (CAL), preserved ratio impaired spirometry (PRISm), normal spirometry or those who fluctuate between these spirometric classifications ("Unstable") at the end of the observation period for those symptomatic and asymptomatic subjects who entered the study with preserved lung function in the Lovelace Smokers' Cohort. # : symptomatic is defined as having either chronic sputum production, cough or modified Medical Research Council dyspnoea scale (mMRC) score >2 points at baseline; chest computed tomography or exacerbation-like events were not available. During the baseline evaluation, there were missing data for cough (n=69) and mMRC (n=43).
TABLE 3
Performance comparison of the derivation models (with continuous, dichotomic and dichotomic parsimonious variables) and their performance in the external validation cohort. CAL: chronic airflow limitation; AUC ROC: area under the receiver operating characteristic curve.
TABLE 4
Estimates (odds ratios) and weight of predictors for the incidence of chronic airway limitation analysed by multivariate logistic regression in the Lovelace Smokers' Cohort | 2023-09-09T06:17:43.164Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "a8bfcd9c9b3ca770085794f2764c5927087f0fc4",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5919f37f9aeddd47203575f8e813bf69e9371283",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261964794 | pes2o/s2orc | v3-fos-license | Culture and Compliance: Evidence from the European Union Emissions Trading Scheme
I study the role of culture in firms’ compliance decisions in the context of the EU Emissions Trading Scheme, an international regulation implemented in multiple countries with different levels of cultural indicators. To probe causality, I look within countries and exploit the differences in the locations of central headquarters of multinational firms. Using trust as a main cultural indicator, this exercise reveals that installations owned by firms headquartered in high-trust countries were more likely to comply with the regulation than those owned by firms headquartered in low-trust countries, even when they operated in the same geographic area. Using other relevant indicators of culture such as morality and civic virtue yields similar results, which suggests that culture, measured by several indicators, exerts influence on the compliance behavior of firms.
Introduction
There exists a growing literature investigating the influence of culture on the design and stringency of regulation (Algan and Cahuc 2009; Aghion et al. 2010; Aghion, Cahuc, and Algan 2011; Alesina et al. 2015). Related questions that have not received as much attention are whether and to what extent cultural traits affect the compliance of regulated entities under a given regulation. These questions are empirically challenging, as culture tends to vary mostly at the country level, and each country has its own regulations, which makes it difficult to study how compliance with respect to the same regulation may differ across countries because of differences in culture.
To study these research questions, I take advantage of the European Union Emissions Trading Scheme (EU ETS), the world's largest carbon-trading market.
This paper was previously circulated as Jo (2019). I thank two referees and the editors for valuable comments. I also thank Tim Besley, Antoine Dechezleprêtre, Steve Gibbons, Stefania Lovo, Daire McCoy, Antony Millner, Henry Overman, Olmo Silva, Thomas Stoerk, Ulrich Wagner, and participants in various seminars and conferences for helpful discussions. All errors are my own. I gratefully acknowledge financial support from the Grantham Foundation for the Protection of the Environment through the Grantham Research Institute and the Marshall Institute for Philanthropy and Social Entrepreneurship, London School of Economics.

This setting offers a number of advantages. First, it provides an ideal environment in which the same legislation is implemented in multiple countries, which thus allows me to investigate the systematic differences in compliance behavior caused by cultural traits measured at the country level. Second, the penalty for noncompliance is set at the EU level, and formal enforcement was generally weak across countries (European Court of Auditors 2015). This feature substantially reduces the problem of having different levels of severity of punishment for violation. Finally, the European Union Transaction Log (EUTL), a system harmonized at the EU level, provides detailed installation-level compliance data comparable across countries. Existing papers that study the compliance of firms under environmental regulations use data on a single industry or several industries in a single country (for example, Gray and Deily 1996; Ward 2005, 2008; Dasgupta, Hettige, and Wheeler 2000; Nyborg and Telle 2006; Duflo et al. 2018; Evans 2016). I address this lacuna by taking advantage of this unique international data set that contains over 16,000 installations across Europe.
To measure cultural traits relevant to compliance, I use generalized trust, the expectation that a random member of society is trustworthy, as a proxy. Previous studies emphasize the role of trust as a fundamental cultural value in a variety of economic outcomes. 1 In particular, a number of papers show its effect on the design (Algan and Cahuc 2009; Aghion, Cahuc, and Algan 2011) and the stringency (Aghion et al. 2010) of formal regulation. I believe this documented importance of trust with respect to regulation makes trust an appropriate proxy for culture in studying how culture affects compliance, a key operational aspect of formal regulation. Later I examine additional measures of culture such as morality and civic-mindedness to explore other cultural traits conceptually relevant to compliance. I hypothesize that trust has a positive influence on compliance, given the long-held view in sociology that trust shared in a society reflects cooperative norms and civic attitudes (Putnam 1994; Fukuyama 1995; Portes 1998). A growing literature on corporate culture suggests that such norms and values are also present at the firm level. Guiso, Sapienza, and Zingales (2015b) observe that firms have clearly defined corporate cultures: principles and values that inform the behavior of the firm's employees. 2 Another motivation for the hypothesis relates to social motivations for compliance such as reputation and fear of social sanctions (Cialdini and Goldstein 2004; Banerjee and Shogren 2010; Qin and Shogren 2015). In the context of the EU ETS, the name-and-shame sanction whereby member states "ensure publication of the names of operators who are in breach of requirements to surrender sufficient allowances" (Directive 2003/87, of the European Parliament and of the Council of 13 October 2003 Establishing a Scheme for Greenhouse Gas Emission Allowance Trading within the Community and Amending Council Directive 96/61/EC, art. 16 [2], 2003 O.J. (L 275), 32-46) clearly embodies the threat of social punishment for noncompliance. Trust may also affect the compliance decisions of firms through the high expected compliance rate in society, because social sanctions for violators tend to be stronger when the overall compliance rate is higher (Cialdini and Goldstein 2004). 3 Firms in high-trust countries are thus more likely to be cautious about violating the regulation because of their expectation that most of their neighboring firms will be in compliance and the strong expected social punishment from violation.
For identification, the main specification uses the subsample of installations that are owned by multinational enterprises (MNEs), 40 percent of which are owned by foreign MNEs whose central headquarters are located in a country different from the country where the installations operate. This subsample allows me to include country-of-operation fixed effects and to exploit the differences in the location of headquarters as the main source of variation, similar to the approach in Bloom, Sadun, and Van Reenen (2012b), where the authors investigate the effect of trust on the degree of decentralization within firms. I then compare compliance decisions of firms that are exposed to the same external environment (for example, formal enforcement, stringency of other related regulations, and so on) but have different levels of trust in their countries of origin.
Consistent with the hypothesis, I find that there is a negative association between noncompliance and trust prevalent in the country where the installation is located. More important, exploiting the differences in the locations of global headquarters of MNEs reveals that installations owned by firms headquartered in high-trust countries are more likely to comply with regulations than installations owned by firms based in low-trust countries, even when they operate in the same geographic area: for example, in Germany, an installation operated by an MNE headquartered in Norway (a high-trust country) is more likely to be in compliance with the EU ETS than an installation owned by a firm whose global headquarters are in Greece (a low-trust country). The magnitude of the estimated effect is economically meaningful: a change in ownership from an MNE based in the lowest-trust country in my sample (Philippines) to an MNE headquartered in the highest-trust country (Norway) is associated with a 1.2-percentage-point decrease in the probability of noncompliance. Given the average noncompliance rate of 3.2 percent in the sample, this effect implies a 38 percent treatment effect, which is comparable to the effect of formal enforcement measures (Gray and Shimshack 2011; Evans 2016).
Although the main empirical strategy successfully mitigates omitted-variable bias at the country-of-operation level, there might remain omitted-variable bias related to institutional factors in the headquarters countries or firm-specific characteristics that are difficult to measure. I attempt to check the extent to which the estimated effect of trust is driven by these remaining factors by including additional controls such as EU ETS-related enforcement practices or general institutional quality in the headquarters countries. The estimated effect of trust appears to be robust to the inclusion of additional controls. (This exercise is discussed in detail in Section 4.3.) Once I establish how trust as the main proxy for culture affects compliance, I also examine additional measures of culture. The challenge of empirically measuring culture is acknowledged in the literature (Tabellini 2008; Algan and Cahuc 2013), and this measurement issue is exacerbated by the existence of related and overlapping concepts such as morality, moral hazard, and civic-mindedness that could also explain compliance. 4 Thus, in an attempt to explore other cultural traits conceptually relevant to compliance, I use two additional indicators, namely, nonpecuniary motivations behind tax compliance (tax morale) and civic-mindedness. Using these two measures, I find that the tax morale prevalent in a country can also predict the compliance decisions of firms headquartered there in the EU ETS context, even when they operate abroad. The association is weaker when civic-mindedness is used, although I observe a strong correlation between civic-mindedness and compliance across countries without country fixed effects. I believe that this exercise, together with the strong evidence of the effect of trust on compliance, provides support for the argument that culture, measured by several indicators, exerts influence on compliance over and above institutional factors.
The findings from this study provide a microempirical foundation for a number of papers that investigate how culture affects the stringency of regulation in a cross-section of countries, implicitly assuming that individuals and firms are more law-abiding in countries with higher levels of trust and civic-mindedness (Algan and Cahuc 2009; Aghion et al. 2010; Tabellini 2008; Aghion, Cahuc, and Algan 2011). For instance, Aghion et al. (2010) find a strong negative association between trust and a variety of regulations in a cross-section of countries and provide a model that shows that formal regulation is weaker in high-trust countries because of lower demand for state regulation, assuming that individuals behave in a more law-abiding manner in those countries. Similarly, Algan and Cahuc (2009) argue that governments in high-trust countries tend to insure their workers against unemployment through more generous unemployment benefits rather than strong labor protection laws, since individuals are less likely to cheat on government benefits. The positive relationship between civic culture and compliance at the micro level presented in this paper adds strong empirical support for these macro-level studies.
This paper also relates to a group of studies that investigate the influence of informal, as opposed to formal, institutions on compliance in different contexts. This paper closely relates to the seminal paper of Fisman and Miguel (2007) in terms of the empirical setting and the findings. Fisman and Miguel show the importance of cultural norms in home countries on the corrupt behavior (unpaid parking tickets) of individuals who are subject to the same external environment (diplomats from different countries stationed in New York City). The main finding of the present paper, that cultural traits of a country can predict the compliance decisions of firms headquartered in that country even when the firms operate abroad, is consistent with their findings but arguably concerns the institutionally more important case of a particular international environmental regulation. There is also a large literature in public economics documenting the strong presence of nonpecuniary motivations behind individuals' tax compliance (surveyed in Torgler 2007; Luttmer and Singhal 2014). Finally, in environmental economics, the effectiveness of informal enforcement measures on compliance is recognized (Ostrom 1990, 2000), but rigorous empirical evidence is lacking. 5 This paper adds to the literature by providing empirical evidence of the effect of informal institutions such as culture on compliance in environmental regulation.
The paper is organized as follows. Section 2 provides background information on the institutional setting, and Section 3 describes data used for the analysis. Section 4 presents the empirical analysis, and Section 5 concludes.
The European Union Emissions Trading Scheme
Launched in 2005, the EU ETS is the world's first and largest carbon trading market and operates in 31 countries (all 28 EU countries plus Iceland, Liechtenstein, and Norway). It limits emissions from heavy-energy-using installations (including power stations and industrial plants) and airlines operating between the countries, covering around 45 percent of the EU's greenhouse gas emissions. Its geographic coverage, as large as all of Europe, offers a unique setting for investigating the extent to which compliance behavior with respect to the same regulation may differ across countries because of differences in trust and civic-mindedness of the populations.
As the central climate policy in the EU, the EU ETS is designed to facilitate the reduction of greenhouse gas emissions by at least 80 percent relative to 1990 levels by 2050. The regulation is currently in its third phase (2013-20) under the EU-wide cap on emissions that is scheduled to decline indefinitely at an annual rate of 1.74 percent. 6 A number of empirical studies show that the EU ETS has had robust negative effects on greenhouse gas emissions, while it does not seem to have had strong detrimental effects on economic performance (for a review of the literature, see Martin, Muûls, and Wagner 2016).

5 On the other hand, the effect of formal enforcement actions is well documented. See, for example, Gray and Deily (1996), Deily and Gray (2007), Ward (2005, 2008), Dasgupta, Hettige, and Wheeler (2000), Nyborg and Telle (2006), Telle (2013), Duflo et al. (2018), and Evans (2016).
6 Originally, the European Union Emissions Trading Scheme (EU ETS) was proposed as a means for the EU and its member states to meet their obligations under the Kyoto Protocol, and the design of the phases reflects this consideration. After the relatively short first phase, the second phase was chosen to overlap with the first Kyoto commitment period (2008-12). The third phase bridges the gap between the end of the first Kyoto period and the start of the new global climate agreement (the Paris Agreement) in 2020. The fourth phase will run from 2021 to 2030.
Formal Enforcement
According to the annual compliance cycle, operators of industrial installations and aircraft operators (henceforth, installations) are required to report their emissions from the previous year verified by third-party accredited verifiers by March 31 of each year. 7 Installations should then surrender a quantity of allowances equal to the volume of their verified greenhouse gas emissions from the previous year by April 30. An installation is considered out of compliance if the number of allowances surrendered by April 30 is lower than its verified emissions. A penalty harmonized at the EU level (40 euros in phase 1 and 100 euros in phases 2 and 3 for each metric ton of carbon dioxide for which the installation failed to surrender allowances) is levied on noncompliant installations. The shortfall in compliance is then added to the compliance target of the following year. In other words, paying a fine does not exempt noncompliant installations from their obligations to surrender sufficient allowances.
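A small sketch of the settlement rule just described, assuming only the two features stated above (a per-tonne fine and carry-over of the shortfall into the next year's obligation); the function and variable names, and the phase-3 default rate, are illustrative conveniences:

```python
def settle_compliance(verified_tco2: float, surrendered: float,
                      penalty_rate: float = 100.0) -> tuple[float, float]:
    """EU ETS settlement: fine plus carry-over of the shortfall.

    penalty_rate is euros per tCO2 (40 in phase 1, 100 in phases 2-3).
    Paying the fine does not extinguish the obligation: the shortfall is
    added to the following year's surrender requirement.
    """
    shortfall = max(verified_tco2 - surrendered, 0.0)
    fine = penalty_rate * shortfall
    return fine, shortfall

# An installation 1,000 tCO2 short in phase 3 owes a 100,000-euro fine
# and must still surrender the missing 1,000 allowances the next year.
print(settle_compliance(10_000, 9_000))  # (100000.0, 1000.0)
```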
To have a sense of the strength of formal enforcement, I analyze the annual reports on the monitoring and enforcement activities of the regulatory body that each national government is required to submit to the European Commission. From these reports, I find that enforcement was generally weak across countries. For instance, of the countries that submitted a report for compliance year 2005 (four countries did not), only three (Portugal, Spain, and the United Kingdom) issued penalties to violators, although most countries had violators that year. 8 A policy report prepared by the European Commission (European Court of Auditors 2015) confirms that most countries are not successful in implementing EU ETS-related penalties. Countries are often limited in their legal and administrative capacity for the successful implementation of EU ETS penalties (European Court of Auditors 2015). Institutions under the regulation are either not empowered to impose sanctions themselves (for example, Italy) or need to await the outcome of lengthy court procedures and appeals (for example, Germany). 9 The report adds that on-the-spot inspections to assess the implementation of the self-monitoring plans submitted by installations are also very limited. 10 This lack of strong institutional enforcement may explain the existence of noncompliance despite the fact that the cost of purchasing allowances was well below the penalty for not surrendering sufficient allowances throughout the sample period (Figure OA1 in the Online Appendix). It also makes the EU ETS a suitable context in which to study the influence of culture on compliance, which would be minimal in the presence of perfect monitoring and enforcement.

7 Third-party verifiers are accredited at the national level. However, verifiers who are accredited in one country can work directly in another country without being accredited there. Furthermore, the authorities cannot impose additional conditions on foreign verifiers that are not imposed on domestic verifiers.
8 The reports were downloaded from the European Environmental Agency Reporting Obligation Database in May 2017. Table OA1 in the Online Appendix shows whether there were any penalties imposed on violators in each country in the first phase.
9 Indeed, the German authority mentions in its 2007 report that penalties for violators in 2005 were not issued until 2007, and a majority of the cases (11 of 16) are being appealed.
10 Other types of visits to installations were often performed in the context of the Integrated Pollution Prevention and Control Directive (2008/1/EC) or other environmental legislation that was considered to be of higher priority (for example, in France, Germany, and Poland) without specifically addressing EU ETS-related issues (see European Court of Auditors 2015).
For other forms of noncompliance such as failing to report changes in the installation's capacity or monitoring plans, each national government is required to establish its own enforcement rules and penalties (Directive 96/61/EC, art. 16[1]). The presence of these country-specific enforcement rules may also affect the installation's decision to purchase sufficient allowances to cover its verified emissions. 11 Below I propose identification strategies that overcome this obstacle.
Compliance
Data on compliance are provided by the EUTL, a system harmonized at the EU level that publishes information on compliance status, permit allocation, verified emissions, and surrendered allowances at the installation level. Papers that study compliance behavior of firms have focused on a single industry or several industries in a single country (see, for example, Gray and Deily [1996] on the US steel industry; Ward [2005, 2008]; Duflo et al. [2018] for industries in India). While providing valuable insights into the motivations behind compliance decisions, these studies are unable to shed light on the systematic differences in compliance behavior caused by cultural traits such as trust, which largely varies at the country level. I address this lacuna by taking advantage of this unique international data set that contains installations operating in multiple industries and multiple countries. 12 I use information on compliance status from 2005 to 2015, which includes all three phases so far. There are five possible compliance codes that installations can be given: A, when the number of allowances and permits surrendered by the deadline (April 30) is greater than or equal to verified emissions; B, when the number of allowances and permits surrendered by the deadline is lower than verified emissions; C, when verified emissions are not entered until the deadline; D, when a competent authority corrects verified emissions after the deadline and decides that the installation is not in compliance; and E, when a competent authority corrects verified emissions after the deadline and decides that the installation is in compliance. The distribution is reported in Table OA2 in the Online Appendix. Using this categorization, I construct a binary noncompliance variable that equals one if an installation is given code B or code D and zero if an installation is given code A or code E. In my preferred specification, I treat code C as missing in order to be conservative. 13 Alternative specifications such as considering A and B only or treating C differently yield similar results.
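A minimal pandas sketch of the coding rule described above (column names are illustrative, not the EUTL's):

```python
import numpy as np
import pandas as pd

def code_noncompliance(status: pd.Series) -> pd.Series:
    """Map EUTL compliance codes to a binary noncompliance indicator:
    A, E -> 0 (compliant); B, D -> 1 (noncompliant); C -> missing,
    following the paper's conservative treatment of unreported emissions."""
    return status.map({"A": 0.0, "E": 0.0, "B": 1.0, "D": 1.0, "C": np.nan})

df = pd.DataFrame({"compliance_code": ["A", "B", "C", "D", "E"]})
df["noncompliance"] = code_noncompliance(df["compliance_code"])
print(df)  # C is left missing rather than coded as a violation
```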
The cross-country compliance rates in Figure 1 reveal startling variation across countries. 14 It is noteworthy that the distribution is highly right skewed, with a majority of countries close to full compliance and several countries with very high levels of noncompliance. Some countries such as Bulgaria, Italy, and Slovakia have close to or over 10 percent noncompliant installation-year observations. However, the average noncompliance rate across all regulated installations is very low, at 3.2 percent, and half the countries have noncompliance rates of less than 1 percent during the sample period. 15

13 Although failing to report verified emissions is, strictly speaking, noncompliance, two observations call for a more cautious approach. First, among observations with compliance status C, around 80 percent have incomplete information on permit allocation, either missing or 0, even in the first two phases when most permits are given for free on the basis of historical emissions. Second, these installations tend to have missing verified emissions for multiple periods followed by missing compliance status in the following periods. Taken together, it is plausible that these installations were no longer regulated (or active) and therefore did not have reporting obligations.
14 The noncompliance rate of a country is calculated as the share of noncompliant installation-year observations between 2005 and 2015.
15 The occurrence of noncompliance was higher in the first phase (over 60 percent of all noncompliance occurred in the first phase) and fell in subsequent phases (Figure OA2). This is likely to be because the EU-level fine for violation increased by 2.5 times beginning with phase 2 (40 euros per metric ton of carbon dioxide (tCO 2 ) in phase 1 to 100 euros per tCO 2 in phases 2 and 3). I include year fixed effects to deal with such year-specific factors that affect all regulated firms.
Measuring Culture
To measure cultural traits relevant to compliance, I use generalized trust as a main indicator of culture. The influence of trust as a fundamental cultural value on a variety of economic outcomes is well documented in the literature (see note 1). In particular, given the strong effect of trust on the design (Algan and Cahuc 2009; Aghion, Cahuc, and Algan 2011) and the stringency (Aghion et al. 2010) of formal regulation, the use of trust as a proxy for culture seems appropriate in the context of studying how culture affects compliance, a key operational aspect of formal regulation.
I build trust measures from the World Value Survey (WVS) by pooling data from six waves (1984, 1993, 1999, 2004, 2009, and 2014) that cover 19 European countries and 33 non-European countries in my sample where global headquarters of the EU ETS-regulated firms are located. The WVS measures generalized trust, the expectation that a random member of the society is trustworthy, by asking the classic question, "Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people?" Respondents' answers are given by a binary choice of zero or one, where zero implies "You need to be careful," and one means "Most people can be trusted." 16 The variable that I use in the econometric regression is the average score of this answer in each country. Figure 2 plots the average level of trust by country in Europe. Two points are noteworthy. First, as shown in previous studies, there exists substantial variation in trust across countries. The average level of trust ranges from a minimum of .15 in Romania to a maximum of .7 in Norway (with 15 percent and 70 percent of people saying that most people could be trusted, respectively). Second, it is readily observable that there are differences across regions of Europe: for instance, Nordic countries (Norway and Sweden) display the highest levels of trust in the sample. On the other hand, Eastern European and Mediterranean countries appear to have lower levels of trust. Table OA3 presents the level of trust for a larger group of countries (52) where the global headquarters of regulated firms are located. Below I examine additional measures of culture such as morality and civic-mindedness to explore other cultural traits conceptually relevant to compliance.

16 A number of papers confirm in experimental settings that survey-based trust measures are indeed correlated with trusting behavior (Fehr et al. 2003; Fehr 2009; Glaeser et al. 2000; Sapienza, Toldra-Simats, and Zingales 2013).
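Constructing the country-level trust measure is then a simple aggregation of the binary WVS answers pooled across waves. A sketch, assuming a hypothetical respondent-level extract (file and column names are illustrative):

```python
import pandas as pd

# Hypothetical pooled WVS extract: one row per respondent, with `trust`
# coded 1 for "Most people can be trusted" and 0 for "You need to be
# careful", plus the respondent's country and survey wave.
wvs = pd.read_csv("wvs_pooled.csv")  # columns: country, wave, trust

# Country-level trust = share of respondents answering 1, pooled over waves
trust_by_country = wvs.groupby("country")["trust"].mean()
print(trust_by_country.sort_values())  # e.g. ~.15 for Romania, ~.7 for Norway
```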
Other Variables
To control for the quality of formal institutions, I use governance indicators that measure the rule of law and regulatory quality at the country level, which are also used in prior studies that attempt to isolate the effect of trust from that of formal institutions (for example, Bloom, Sadun, and Van Reenen 2012b). 17 In addition, I use information on log gross domestic product (GDP) per capita, percentage of population with tertiary education, and log population from Eurostat. Descriptive statistics for these variables are reported in Table OA4.
Data on firms' characteristics come from Bureau Van Dijk's Orbis Database. I obtain key financial variables that may affect compliance decisions including the number of employees, operating revenue, and total assets for the sample period; firms' ownership structure in 2015; and the number of installations run by each firm. These controls also account for firm-level heterogeneity more generally. Table OA5 reports the descriptive statistics of these variables.

17 The governance indicators come from the World Bank (Kaufmann, Kraay, and Mastruzzi 2011). Rule of law captures "perceptions of the extent to which agents have confidence in and abide by the rules of society, and in particular the quality of contract enforcement, property rights, the police, and the courts, as well as the likelihood of crime and violence." Regulatory quality measures "perceptions of the ability of the government to formulate and implement sound policies and regulations that permit and promote private sector development."

Trust and Compliance

Figure 1 shows that there is indeed a negative correlation between trust and noncompliance rates in the EU ETS across countries. To investigate this relationship further, I start with micro-level regressions of the following form:

Noncomp i,t = α + β Trust c(i) + γ′C c(i),t + δ′F j(i),t + Y t + I k(j(i)) + ε i,t , (1)

where Noncomp i,t is a binary variable that equals one if installation i is out of compliance in year t and Trust c(i) is the average trust of country c where installations are located. It is reasonable to suppose that Trust c(i) does not vary over time during the 11-year period I study, given the persistent nature of trust across generations. 18 Most empirical analyses in the trust literature follow this approach by taking the average of trust in surveys conducted since the 1980s (for example, Tabellini 2010; Bloom, Sadun, and Van Reenen 2012b). 19 Therefore, I estimate a pooled regression despite the panel nature of the dependent variable. To avoid understating the standard errors from repeated observations, the errors are clustered at the country level over all years. The terms C c(i),t and F j(i),t represent country-level controls and firm-level controls, where firms are indexed by j. I further include year dummies Y t and industry dummies I k(j(i)) , where industries are indexed by k and based on information on the main activity type provided by the EUTL.
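A sketch of how equation (1) might be estimated in Python with statsmodels. The paper reports probit (and IV probit) estimates, so the linear probability model below is only a simplified stand-in, and all variable names are illustrative:

```python
import statsmodels.formula.api as smf

# `data`: installation-year panel with country-level trust and controls
# merged in. smf.probit with the same formula would be the closer
# analogue of the paper's probit estimates.
model = smf.ols(
    "noncompliance ~ trust + log_gdp_pc + log_pop + tertiary_share"
    " + rule_of_law + reg_quality + log_assets + log_revenue"
    " + log_employees + n_installations + C(year) + C(industry)",
    data=data,
).fit(cov_type="cluster", cov_kwds={"groups": data["country"]})
print(model.summary())
```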
I begin by simply regressing Noncomp on the trust measure of the country where the installation is located, without any controls (column 1 of Table 1). The coefficient on Trust is negative and statistically significant. I include an increasingly extensive set of controls that may affect compliance at the country level and at the firm level. Country-level controls include log per capita GDP, log population, the share of population with tertiary education, rule of law, and regulatory quality; firm-level controls include total assets, revenue, employees, and the number of installations. The correlation between trust and noncompliance seems to exist independently over and above these factors.

18 To formally test if there is time variation over the period of my study, I check whether there is overlap in the 90 percent confidence intervals of Trust for the start and end years using the 2000 and 2014 waves, respectively. Only two of 25 countries in my sample have nonoverlapping confidence intervals over that period.
19 Few studies exploit time variation in trust, with a notable exception being Algan and Cahuc (2010). The authors suggest a methodology to recover long intertemporal variation in trust by comparing immigrants who moved to America at different points in time and generate a trust measure for 25 countries with time variation over 60 years, which arguably covers multiple generations. Their trust variable measures trust at two points, 1935 and 2000, to allow sufficient time for the evolution of trust. Algan and Cahuc (2009) also exploit time variation in trust over 20 years in one of their specifications, using the end points of their data (1980 and 2000) to get enough variation.

Table 1 note. The dependent variable is a noncompliance measure that equals one if the installation is out of compliance and zero otherwise. The instrumental variable (IV) in column 5 is a measure of inherited trust that instruments trust in each country by the average level of trust held by second-generation immigrants from that country. Standard errors, in parentheses, are clustered at the country-of-operation level. * P < .05. ** P < .01.
I also try a measure of inherited trust as an instrument to further test the relationship between trust and noncompliance. This epidemiological approach has gained recognition in the literature (Fernandez 2008) and is used in several papers that attempt to isolate the causal effects of trust on economic outcomes (Algan and Cahuc 2009, 2010; Butler, Giuliano, and Guiso 2016; Jo and Carattini 2018). The insight is based on the evidence that trust is highly persistent across generations through the transmission of values within families (Rice and Feldman 1997; Guiso, Sapienza, and Zingales 2009). Inherited trust observed in second-generation immigrants is expected to be correlated with the level of trust in the countries where their parents came from and yet is less likely to directly affect the compliance behavior of firms operating in their source countries because of geographical disconnect (that is, they are born and reside in their adopted countries). Column 5 reports an instrumental variables probit estimate, which is negative and statistically significant (with a first-stage F-statistic of 77).
Exploiting Differences in the Locations of Headquarters
The estimates in Table 1 suggest that there is a negative correlation between trust and noncompliance. However, it is possible that trust might be picking up the effect of the country-specific regulatory environment or institutional capacity that might be correlated with trust, given the documented influence of trust in shaping institutions and regulations (Algan and Cahuc 2009; Aghion et al. 2010; Aghion, Cahuc, and Algan 2011). I control for the quality of formal institutions using variables for the rule of law and regulatory quality, but they may not be perfect. 20 In the main specification that follows, I look within countries by restricting my sample to installations owned by MNEs and exploiting the differences in the locations of their central headquarters. This subsample offers a chance to probe the causality of the relationship between trust and compliance using country-of-operation fixed effects. Country fixed effects remove any bias associated with unobservable national characteristics that may be spuriously correlated with trust and compliance. I then compare the compliance behavior at installations that are exposed to the same external environment, such as formal enforcement and stringency of other related regulations, but that have different levels of trust from the country of origin.
There are 10,692 installations in this subsample, and 4,310 of them are operated by foreign MNEs whose central headquarters are located in a country different from where the installations operate. Summary statistics of firm-level variables for the full sample and the sample of MNEs are presented in Table OA5. Not surprisingly, MNEs tend to operate more installations, be larger, and have more revenue and total assets. The mean noncompliance rate of installations owned by MNEs is 2.9 percent, lower than the noncompliance rate of the full sample (3.2 percent) and that of installations owned by non-MNEs (4 percent). 21

The importance of country-of-origin characteristics in MNEs' management and organizational structure has long been recognized in the literature. A study most relevant to my analysis is Bloom, Sadun, and Van Reenen (2012b), which provides evidence that the level of trust prevalent in the country where the MNE is headquartered has a strong positive effect on the degree of decentralization in the affiliate's foreign location (for instance, a Swedish affiliate operating in the United States is typically more decentralized than a French affiliate in the United States). Furthermore, Bloom, Sadun, and Van Reenen (2012a) show that US MNEs operating in Europe displayed higher productivity because of the use of information technologies (IT) than non-US MNEs in Europe during a period when the US experienced a rapid productivity growth in sectors that intensively use IT. Burstein and Monge-Naranjo (2009) and Bloom and Van Reenen (2007) also document the transmission of knowledge and management practices in MNEs across countries. Given this evidence of the influence of source-country characteristics in MNEs' operations abroad, it seems legitimate to investigate whether there might be different patterns in compliance behavior across MNEs based in different countries. 22

For this purpose, I estimate the following equation:

Noncomp i,t = α + β Trust h(i) + γ′C c(i),t + δ′F j(i),t + Y t + I k(j(i)) + CO c(i) + ε i,t , (2)

which is similar to equation (1) except for two main differences. First, the main explanatory variable of interest is now Trust h(i) , which measures the average trust in the country of headquarters h, which may or may not be the same as the country of operation c. Second, the specification includes CO c , which are country-of-operation fixed effects. 23

The results of this analysis are reported in Table 2. Column 1 shows the relationship between compliance and the level of trust in the country where the central headquarters are located, without any controls. Standard errors are clustered at the country-of-operation level. The coefficient is negative and significant at the 1 percent level, which suggests that trust prevalent in source countries is positively correlated with the affiliates' compliance decisions. The influence of trust in the country of headquarters remains strong even when I include year and industry fixed effects (column 2) and control for individual firm-level characteristics (column 3). Next I include country-of-operation fixed effects. The magnitude of the coefficient falls sharply with an extensive set of fixed effects, but the coefficient in column 4 is still negative and statistically significant. This implies that installations owned by firms based in high-trust countries are less likely to violate the regulation than those owned by firms in low-trust countries, even when they operate in the same institutional environment. One might still worry that time-varying omitted-variable bias confounds the estimate. For instance, after the 2011 nuclear accident in Japan, Germany decided to dramatically reduce its dependence on nuclear power plants while increasing the share of renewable sources in producing electricity. This led to changes in the regulatory environment and energy prices that might have affected firms' compliance behavior under the EU ETS. To deal with this concern of time-varying country-specific confounders, I include country-year fixed effects in column 5. The sample size decreases, as there are country-year pairs with perfect compliance and thus no variation. However, the coefficient remains qualitatively similar. The findings for a strong influence of trust in the country of headquarters on firms' compliance are in line with previous studies that document the importance of country-of-origin characteristics in MNEs' operations abroad (for example, Bloom, Sadun, and Van Reenen 2012a, 2012b).

20 Using inherited trust as an instrument (column 5 in Table 1) may not sufficiently address the omitted-variable bias in this context. The approach is based on the persistent nature of trust, and the main source of omitted-variable bias in my context, which is institutional quality, also tends to be persistent over time (Aghion et al. 2010; Aghion, Cahuc, and Algan 2011). Thus, I believe that a country fixed effects approach that allows me to compare installations operating in the same institutional environment is likely to be more robust than the instrumental variables approach.
21 I reestimate the previous specification using this subsample to ensure that the motivating evidence in Table 1 is not sensitive to the composition of the sample. The results are reported in Table OA6.
22 In a descriptive analysis, I check if there is a shared pattern in compliance behavior across installations owned by the same multinational firm (MNE). To do so, I investigate if the probability of compliance at other installations belonging to the same MNE can predict an installation's probability of compliance. For instance, if there are three installations owned by the same MNE and one of them is out of compliance in a given year, the compliance probability of the others would be 1 for the violating installation (because both other installations are in compliance), while the probability would be .5 for the two compliant installations. I find that the compliance probability of other installations belonging to the same MNE is a very strong predictor of an installation's own compliance probability, with a coefficient of .89 (P-value < .001) in a linear probability model.
23 Firms that experienced changes in ownership through mergers and acquisitions just before or while being subject to the EU ETS (264 firms) are dropped from the sample to reduce potential measurement error.
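Under the same illustrative naming as before, the within-country specification of equation (2) only swaps in trust measured at the headquarters country and adds country-of-operation fixed effects, with standard errors still clustered by country of operation:

```python
import statsmodels.formula.api as smf

# `mne_data`: the MNE subsample; `trust_hq` is trust in the country of
# the global headquarters, which may differ from `country_op`.
model_fe = smf.ols(
    "noncompliance ~ trust_hq + log_assets + log_revenue + log_employees"
    " + n_installations + C(year) + C(industry) + C(country_op)",
    data=mne_data,
).fit(cov_type="cluster", cov_kwds={"groups": mne_data["country_op"]})
```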
In column 6, I attempt to compare the effect of trust in the location of operation with trust in the location of headquarters. To do so, I add the trust variable measured in the country of operation to the specification in column 3 without fixed effects. I find that the coefficient on trust in the country of headquarters becomes marginally insignificant, while the coefficient on local trust measured in the country of operation is negative and statistically significant. Two explanations are possible. First, since 60 percent of the installations are owned by domestic MNEs, there is a large overlap in the variation in the two variables. The local trust variable seems to dominate, as the variable for trust in the country of headquarters turns on only for installations owned by foreign MNEs, which account for 40 percent of the sample. Second, without country-of-operation fixed effects, the endogeneity concern explained at the beginning of this section might be reflected in the strong negative estimate of the local trust variable. To check this possibility, I include a region-level local trust variable with country-of-operation fixed effects in column 7. The coefficient on trust in the region of installation is insignificant, while the effect of trust in the country of headquarters is negative and statistically significant, with a magnitude similar to that in column 4, which is consistent with the second explanation.
Not only is the estimated effect of trust on compliance statistically significant, it is also economically meaningful. The marginal effects calculated from the specification in column 4 in Table 2 imply that a change in ownership from an MNE based in the Philippines (the lowest-trust country in my sample) to an MNE headquartered in Norway (the highest-trust country) is associated with a 1.2-percentage-point decrease in the probability of noncompliance. How large is this effect relative to that of formal enforcement on compliance? To provide a sense of the magnitude, I compare this effect with other estimates of the effectiveness of formal enforcement actions reported in previous papers. Estimates for the effect of traditional regulatory measures (for example, inspections and fines) range between 42 and 52 percent (Gray and Shimshack 2011). 24 Evans (2016) documents that an information-based enforcement tool such as the watch list for the Clean Air Act is associated with a 21-percentage-point decrease in the probability of noncompliance, which indicates a 29 percent treatment effect given the average noncompliance rate of 72 percent. Compared with these estimates, the effect of trust still seems large: given the average noncompliance rate of 3.2 percent in my sample, the predicted decrease in the probability of noncompliance of 1.2 percentage points caused by the change in ownership from a Filipino firm to a Norwegian firm implies a 37 percent treatment effect.
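The treatment-effect magnitude is simple arithmetic on the two reported numbers:

$$\frac{1.2\ \text{percentage points}}{3.2\ \text{percent average noncompliance}} = 0.375,$$

which rounds to the 37-38 percent figures quoted in the text.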
Discussion
Although the approach so far has dealt with omitted-variable bias at the country-of-operation level arguably well, it is still possible that the compliance behavior of firms is influenced by enforcement practices or general regulatory quality in the countries where their headquarters are located. Thus, I check the extent to which the estimated effect of trust is driven by such institutional factors in the countries of headquarters by adding additional controls. I use the qualitative data from the annual reports on the monitoring and enforcement activities each government submitted to the European Commission in phase 1 (discussed in Section 2.2). For phase 2 (2008 onward), most reports are not open to the public, so I restrict the sample to the first phase for this exercise. Using this information, I create a dummy that equals one if any penalties were imposed on installations with excessive emissions (that did not buy enough permits) in the country of headquarters and zero otherwise. For example, if there were such fines in France but not in Germany in year t, the dummy equals one for all French subsidiaries regardless of their country of operation and zero for all German subsidiaries for that year. This dummy corresponds to enforcement that is narrowly defined to match the dependent variable. I also create an additional dummy that equals one if there were penalties for any type of infringement related to the EU ETS such as not reporting changes in capacity or not submitting monitoring plans. This dummy reflects more widely defined enforcement activities related to the EU ETS. In addition, I use the measures of rule of law and regulatory quality explained in Section 3.3 that are also used in prior studies to isolate the effect of informal institutions (Bloom, Sadun, and Van Reenen 2012b). Finally, I use an indicator of corruption control to further account for the degree to which corruption prevails in the headquarters countries. 25 Table 3 reports the results from this exercise. The enforcement variables constructed using the annual reports do not appear to have any explanatory power (columns 1 and 2). This is not surprising, as discussed in Section 2.2, because enforcement was generally weak in the EU ETS, and countries that did impose penalties for violations were not necessarily high-trust countries. I then include the measures of the quality of formal institutions and corruption control in the countries of headquarters as controls. The additional controls do not show statistical significance, but in column 6, where all additional controls are included, the regulatory quality variable has a negative and statistically significant coefficient. It is noteworthy that the coefficients on trust remain relatively stable and significant when various controls are added. 26

24 Deily and Gray (2007) study the deterrent effects of regulatory measures on compliance with the Clean Air Act using compliance data on large steel mills in the United States. They find that being subject to an enforcement activity in the prior 2 years decreased the probability of noncompliance by 32 percentage points. Given the overall noncompliance rate of 62 percent, the estimate suggests a 52 percent treatment effect. In a similar context using compliance data on pulp and paper mills, Gray and Shadbegian (2005) find that a typical regulatory action decreased the probability of violation by 10 percentage points, which implies a 42 percent treatment effect (with the average violation rate of 24 percent in the sample).
Although this exercise does not rigorously deal with the omitted-variable bias coming from the countries of headquarters in the absence of credible instruments, I believe it at least assures that the estimated effect of trust is not entirely driven by perceived enforcement or general regulatory quality in the countries of headquarters. I further note that firm-level omitted-variable bias might still remain. For instance, Bloom, Sadun, and Van Reenen (2012b) show that MNEs with headquarters in high-trust countries are larger and more decentralized than those with headquarters in low-trust countries. If compliance is correlated with firms' characteristics related to trust in the country of headquarters, this may lead to biased estimates. In the analysis so far, I have directly controlled for firm size to address this concern. However, it is worth keeping in mind that there might be other firm-level omitted factors that are difficult to measure and control for.
Robustness Checks
In this section I report the results from a number of robustness checks. Table OA8 reports robustness checks for the cross-country analysis using all firms (as in Section 4.1), where I exclude late joiners of the EU ETS, exclude Scandinavian countries with very high levels of trust, use alternative specifications for noncompliance, and use an alternative measure of trust. Here I focus on the main results from the specification using MNEs that includes country-of-operation fixed effects. First, I add region-level economic controls (log GDP per capita, log population, and the percentage of population with tertiary education) in addition to country-of-operation fixed effects (column 1 in Table 4). I also exclude Bulgaria and Romania, which joined the EU ETS later, in case there were technical difficulties arising from immature infrastructure that affected compliance. Bulgaria and Romania became subject to the regulation in 2007 when they joined the European Union. Excluding these late joiners does not affect the relationship between trust and compliance (column 2).
I also restrict the sample to MNEs headquartered in Europe, since European firms might be more aware of the EU ETS and perhaps more sensitive about violating it than non-European MNEs. I find a qualitatively similar estimate using this restricted sample (column 3). Furthermore, I exclude the Scandinavian countries in the sample (Finland, Norway, and Sweden), which have very high levels of trust, in case firms headquartered in those countries are driving the results. The estimate remains robust and qualitatively similar (column 4).
Next I try alternative specifications for the binary noncompliance variable. In my preferred specification, I drop installations with compliance status C that did not report their verified emissions (the step before they surrender corresponding amounts of permits) to be conservative because there is suggestive evidence that such installations are no longer regulated or active (see note 13). Alternatively, I treat those installations as noncompliant when they can be reasonably presumed to be active by two standards: when they have nonmissing information on permit allocation in the current period and when they have nonmissing compliance status other than C in the following period. The regression in column 5 of Table 4 uses this alternative measure of noncompliance. The coefficient is still negative but turns insignificant, which confirms the ambiguous nature of status C. Next I drop installations whose verified emissions were corrected later by the competent authority (that is, those with codes D and E) and find a coefficient similar to one from the main specification (column 6).
Finally, I use an alternative measure of trust to get a sense of potential measurement error in the trust variable. To do so, I construct a measure that takes into account year-specific shocks since I pool multiple waves conducted in different years to calculate the average level of trust in each country. Following Guiso, Sapienza, and Zingales (2009), I regress trust on year dummies, form residuals, and then compute the means of these residuals by country. Column 7 shows that the estimated relationship between trust and compliance is not sensitive to this alternative trust measure.
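The residualization procedure for this alternative trust measure is mechanical enough to sketch. The snippet below follows the description in the text (regress trust on year dummies, form residuals, average by country); the toy data and column names are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative individual-level survey records: respondent country,
# survey wave year, and the binary trust answer.
wvs = pd.DataFrame({
    "country": ["LV", "LV", "SE", "SE", "DE", "DE"],
    "year":    [1999, 2008, 1999, 2008, 1999, 2008],
    "trust":   [0, 1, 1, 1, 1, 0],
})

# Regress trust on year dummies (C(year) creates them), keep residuals,
# then average the residuals by country, following Guiso, Sapienza, and
# Zingales (2009).
resid = smf.ols("trust ~ C(year)", data=wvs).fit().resid
trust_adj = resid.groupby(wvs["country"]).mean().rename("trust_residualized")
print(trust_adj)
```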
Additional Measures of Culture
Throughout the analysis, I have used generalized trust as a main indicator of culture relevant to compliance. However, the challenge of empirically measuring culture is acknowledged in the literature (see, for example, Tabellini 2008; Algan and Cahuc 2013), and this measurement issue is exacerbated by the existence of many related and overlapping concepts such as morality, moral hazard, and civic-mindedness that could also explain compliance. Thus, I explore other conceptually relevant cultural traits by using two additional measures of culture in this section.

Note to Table 4. The dependent variable is a noncompliance measure that equals one if the installation is out of compliance and zero otherwise. Standard errors, in parentheses, are clustered at the country-of-operation level. Column 1 includes region-level controls; column 2 excludes countries not covered by the regulation until 2007 (Bulgaria and Romania). Column 3 includes firms with central headquarters in Europe; column 4 excludes Scandinavian multinational firms. Column 5 uses an alternative specification that treats installations with missing verified emissions as noncompliant. Column 6 excludes installations with verified emissions corrected by the competent authority. Column 7 uses an alternative measure of trust that accounts for year-specific shocks. All regressions include firm controls and country, year, and industry fixed effects. + P < .10. * P < .05. ** P < .01.
First, I use the question in the WVS that is often used in the tax compliance literature to measure tax morale: "Please tell me for each of the following statements whether you think it can always be justified, never be justified, or something in between . . . : Cheating on taxes if you have a chance." The answer is given on a scale of 1 to 10, with 1 implying "never justifiable" and 10 implying "always justifiable." I rescale this variable so that higher values imply higher levels of tax morale. The second additional measure is based on a related scenario with similar responses: "Claiming government benefits to which you are not entitled," which has been used to measure civic attitudes (Algan and Cahuc 2009). This variable is rescaled like tax morale. The two variables are highly correlated with each other (significant at the 1 percent level). As expected, both measures are also positively correlated with the trust variable across European countries, although neither correlation coefficient is statistically significant (Figure OA3). Table OA9 reports these two additional measures of culture for the larger group of countries where the global headquarters of regulated firms are located.

Column 1 of Table 5 shows a negative correlation between tax morale in the country of operation and noncompliance in the EU ETS (as in Section 4.1), after controlling for various firm and country characteristics and year and industry dummies. The estimate from the main specification that exploits the differences in the location of headquarters is reported in column 2. The negative coefficient indicates that even when I compare firms operating in the same geographic area, the strength of nonpecuniary motivations behind tax payment measured in the headquarters countries can predict firms' law-abiding behavior in another context in their operations abroad. To provide an economic sense of the coefficient in column 2, I calculate the marginal effect of a change in ownership from an MNE based in the country with the lowest tax morale in the sample (Latvia) to an MNE headquartered in the country with the highest tax morale (Japan), as in Section 4.2. The hypothetical change is associated with a .9-percentage-point decrease in the probability of noncompliance, with an implied 28 percent treatment effect. This effect is smaller than, but comparable to, the 37 percent treatment effect implied by a similar ownership change from a firm in the lowest-trust country to a firm in the highest-trust country. Column 3 includes additional controls that measure the enforcement practices and general regulatory environment in the headquarters countries, as in Table 3. The coefficient remains qualitatively similar. Table 5 repeats the specifications with civic-mindedness as an alternative measure of culture. Using civic-mindedness as a cultural indicator also reveals a negative association between culture and noncompliance, although the relationship is weaker, especially in the more demanding specifications that exploit the differences in civic-mindedness measured in the headquarters countries. I believe that this exercise, together with the strong evidence of the effect of trust on compliance, provides support for the argument that culture, measured by various indicators, plays an important role in the compliance decisions of firms.

Note to Table 5. The dependent variable is a noncompliance measure that equals one if the installation is out of compliance and zero otherwise. Columns 3 and 6 are estimated using the data from phase 1. All estimations are from a probit model that includes firm controls and country, year, and industry fixed effects. Standard errors, in parentheses, are clustered at the country-of-operation level. + P < .10.

Footnote 28. Tax morale can be considered part of culture to the extent that it reflects internalized beliefs and values that persist over long periods of time. A number of studies have approached tax morale from a cultural perspective by conducting similar laboratory experiments across two or three countries. See Luttmer and Singhal (2014, p. 160) for a related discussion and references.

Footnote 29. The country coverage of these questions is very similar to that of the trust question. Only Israel and Saudi Arabia are missing.
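The "implied treatment effect" figures quoted above and in footnote 24 are simply marginal effects expressed relative to an average outcome rate. A minimal sketch of that arithmetic follows, reproducing the two numbers cited from the prior studies; the paper's own EU ETS baseline noncompliance rate is not restated here (the .9-percentage-point and 28 percent pair would imply a baseline of roughly 3 percentage points under the same formula).

```python
def implied_treatment_effect(marginal_effect_pp: float, base_rate_pp: float) -> float:
    """Return the marginal effect as a share of the baseline outcome rate."""
    return marginal_effect_pp / base_rate_pp

# Deily and Gray (2007): 32 pp drop against a 62 percent noncompliance rate.
print(round(implied_treatment_effect(32, 62), 2))  # ~0.52, i.e., 52 percent

# Gray and Shadbegian (2005): 10 pp drop against a 24 percent violation rate.
print(round(implied_treatment_effect(10, 24), 2))  # ~0.42, i.e., 42 percent
```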
Conclusion
In this paper, I attempt to provide rigorous empirical evidence of the effect of culture on compliance. Using trust as a main indicator of culture, I find strong evidence that trust positively affects compliance, and more important, that there exist systematic differences in firms' compliance patterns depending on the country in which they are headquartered even when they operate in the same geographic area and have the same external environment.
One important implication of my findings is related to the idea of using corporations as a lab in which to study the role of culture. Although the role of culture in economic activities has long been recognized, economists' attempts to develop a deeper insight into the specific workings of culture have not been straightforward: it is difficult to know where culture comes from, culture is sticky and rarely changes drastically, and even when cultural changes occur, they take place over a long period with many other circumstances changing at the same time. Guiso, Sapienza, and Zingales (2015a) note this problem and suggest corporations as an alternative environment in which to study the role of culture. This is indeed promising since, with corporate culture, we know when, how, and on what values corporations are founded; corporate culture is subject to more frequent changes (for example, through hiring, firing, and mergers and acquisitions); and performance is more easily measured (Guiso, Sapienza, and Zingales 2015a). There is an increasing interest in this line of reasoning that sheds light on specific mechanisms behind the documented effect of culture at the macro level. For instance, Bloom, Sadun, and Van Reenen (2012b) provide evidence of the influence of trust in firms' decisions to decentralize, which allows more efficient resource allocation within and across firms and leads to greater productivity and economic growth. This serves as micro evidence for the long-held belief that trust facilitates economic growth through lower transaction costs (Arrow 1972). Similarly, this paper provides micro empirical evidence of the role of trust in compliance and, by doing so, validates the documented effect of trust on the design of formal regulation, which operates through how law-abiding people are (Tabellini 2008). I concur with Guiso, Sapienza, and Zingales (2015a) that these approaches substantially enhance our understanding of how cultural norms affect economic behavior and relate to formal institutions.
Identification of Enterprise Social Network (ESN) Group Archetypes in ESN Analytics: Metrics Selection and Case Application
With the proliferation of Enterprise Social Networks (ESN), the measurement of ESN activity becomes increasingly relevant. The emerging field of ESN analytics aims to develop metrics and models to measure and classify user activity to support organisational goals and outcomes. In this paper we focus on a neglected area of ESN analytics, the classification of activity in ESN groups. We engage in explorative research to identify a set of metrics that divides an ESN group sample into distinct types. We collaborate with Sydney-based service provider SWOOP Analytics who provided access to actual ESN meta data describing activity in 350 groups across three organisations. By employing clustering techniques, we derive a set of four group types: broadcast streams, information forums, communities of practice and project teams. We collect and reflect on feedback from ESN champions in fourteen organisations. For ESN analytics research we contribute a set of metrics and group types. For practice we envision a method that enables group managers to compare aspirations for their groups to embody a certain group type, with actual activity patterns.
Clustering with these metrics resulted in four distinct ESN group types: 1) broadcast streams, 2) information forums, 3) communities of practice and 4) project teams. A brief evaluation with feedback solicited from ESN champions across 14 SWOOP client organisations shows that these group types are a good reflection of ESN activity more broadly. We present some ideas for extending the typology.
For the emerging field of ESN analytics our study serves to demonstrate the feasibility of group type identification with typical ESN-specific and more general social network analysis (SNA) metrics. We do not claim generalisability for our group archetypes, though we would expect them to be somewhat typical of ESN group proliferation more broadly and thus of use to broader research into understanding the role of groups in ESN. For practitioners, such as network managers and group leaders, we outline the scaffold of a method for visualising the discrepancies between aspiration and actual activity in ESN groups. This will help group leaders understand how their group is tracking against the patterns of a particular group type that they envision their group to embody. Given its pioneering nature, the study points to various promising avenues for future research.
Definitions and characteristics
Common to all social media is that they facilitate user participation, interaction, and the generation of content by users (Boyd and Ellison 2007). Specifically, ESNs are services, accessed through a web browser or mobile app, that allow people to (1) communicate with their co-workers or broadcast messages to everyone within the organisation; (2) explicitly indicate or implicitly reveal particular co-workers as communication partners; (3) post, edit, and sort text and files linked to themselves or others; and (4) view the messages, connections, text, and files posted, edited and sorted by anyone else in their organisation at any time of their choosing (Leonardi et al. 2013).
Another defining characteristic of ESN is their malleability (Richter and Riemer 2013b). Unlike more traditional information systems that are employed to solve a concrete problem and are thus associated with a concrete task or purpose, ESN are best understood as infrastructures that are intended to create potentials for new ways of communicating and working (Riemer et al. 2009). Hence, the proliferation of ESN often follows a bottom-up approach of implementation, a more inclusive and egalitarian process (Schneckenberg 2009). As a result, ESN have been associated with a variety of organisational practices such as communication, collaboration (Riemer et al. 2010), knowledge management (Levy 2009) crowdsourcing (Schlagwein and Bjorn-Andersen 2014), open innovation (Dahlander and Gann 2010), or open strategy (Tavakoli et al. 2015). This renders ESN both an interesting and important context for IS research, and a challenge for organisations as they have to keep track of the emerging activity in their ESN. The research presented here aims to help organisations understand their own enterprise social networks.
Prior research
Prior research on ESN typically falls within one of four existing streams. The first stream captures conceptual work outlining typical ESN characteristics (Leonardi et al. 2013; Treem and Leonardi 2012), comparing ESN with traditional ways of relationship building in organisations (Kane et al. 2014), envisioning high-level benefits, such as for knowledge sharing (Fulk and Yuan 2013; Majchrzak et al. 2013) or interacting in the workplace more generally (DiMicco et al. 2008; Zhang et al. 2010).
The second stream comprises concrete, explorative, usually qualitative case studies investigating usage patterns of ESN. Typical findings reveal benefits of ESN for information sharing and discovery (Zhao and Rosson 2009), for creating awareness within the organisation (Zhang et al. 2010), or for knowledge creation and sharing in professional service contexts (Riemer and Scifleet 2012).
The third stream represents studies that measure the benefits of ESN from different angles and in different contexts; these studies are typically quantitative in nature, employing either survey-based approaches, e.g. to measure individual ESN benefits for knowledge workers (Mantymaki and Riemer 2016), or they use data obtained from the ESN services directly, such as for studying the connection between social capital and employee performance (Riemer et al. 2015).
Finally, the fourth stream, to which this study contributes, contains generative works that aim to identify metrics and develop analytics frameworks for measurement of ESN activity in organisations. Studies in this context focus on the application of social network analysis (SNA) to reveal emerging informal networks between ESN users (Behrendt et al. 2014a), identify particular value-adding users (Berger et al. 2014), characterise individual user activity profiles (Holtzblatt et al. 2013), or develop a comprehensive framework for identifying user types in knowledge work contexts (Hacker et al. 2017b).
ESN analytics and metrics development
As an emerging stream of research, ESN analytics is a sub-field of social media analytics (Stieglitz et al. 2014; Stieglitz et al. 2018). Sometimes called social collaboration (or social software) analytics, it is "a specialized form of examination of log files and content data, to gain a better understanding of the actual usage of ESS" (Schwade and Schubert 2017, 401). Here, we define ESN analytics as methods and practices for the identification and utilisation of metrics and models for measuring different aspects of user activity in enterprise social networks, including user activity levels and user profiles, network activity levels, structural network characteristics, and network health indicators, in support of organisational goals and outcomes.
ESN analytics becomes relevant for organisations for two main reasons. First, due to their malleability, concrete implementations of ESN in organisations typically emerge in quite different ways, supporting a range of different use cases and activities (Richter and Riemer 2013a). It is thus important for organisations to be able to track and understand user activity on their own particular ESN. Second, ESNs are becoming more integral to organisational communication and collaboration practices, which renders gaining an understanding for the activity on the platform and ways in which to support individuals and social groups within the ESN more important (Schwade and Schubert 2017).
At the same time, there are opportunities in ESN analytics for both academia and industry. First, the particular nature of ESN affords detailed analysis of user activity, because all (or most) user interactions are logged and, in principle, available for analysis. Second, most ESN platforms currently do not provide sophisticated analytics capabilities, leaving room for new offerings by third-party providers and for academia to explore new ways of measuring and accounting for various aspects of ESN activity.
Given the newness of this field, the number of studies contributing to establishing metrics or models to support ESN analytics is still limited (cf. Schwade and Schubert 2017). So far two main areas of application for analytics exist. The first area focuses on metrics characterising the social network as such. This is where traditional social network analysis (SNA) techniques are brought to bear (Wasserman and Faust 1994). For example, Riemer et al. (2015) have shown how social capital metrics can be utilised to link certain network characteristics to employee performance. Behrendt et al. (2014b) provide an overview of SNA metrics and studies for use in ESN contexts. The second area aims to develop new dedicated ESN metrics to characterise individual user behaviours and to generate models that classify user populations into distinct user types. Most notable is the research program by Hacker and colleagues (e.g. Hacker et al. 2017b). Other works include a study by Cetto et al. (2018), who classified users by knowledge sharing and seeking behaviours, and one by Frank et al. (2017), who utilised log data from Exchange, Microsoft Office 365 and SharePoint to identify user roles.
Analytics of ESN groups
What is lacking so far are works that engage with ESN groups, the intermediate level of analysis between network and individuals. Groups play an important role in ESN as they allow for the creation of dedicated spaces for conversation and information exchange between a subset of users. Given their usefulness, many companies find that the number of groups tends to proliferate over time, with some groups very active and many others abandoned. At the same time groups are used for different purposes, and they exist in different shapes and forms, from very small ones to large behemoths. We suggest that a better understanding of different group types, their structural features and activity patterns, will be useful for decision-makers in better harnessing their ESN for value.
However, we are aware of only one study engaging in detail with ESN user activity at the group level, classifying groups in the context of knowledge work (Riemer and Tavakoli 2013). That study is not useful in the context of ESN analytics, since its classification was based on a manual coding of user messages, which is impractical as the basis for analytics practices. Accordingly, we investigate the following question: Which set of metrics best discriminates a population of ESN groups such that it results in a set of meaningful group types characterised by different activity patterns?
Study overview
We utilise ESN activity meta-data from three Yammer networks, obtained from Australian analytics company SWOOP Analytics Pty Ltd (in the following just: SWOOP). We collect a range of ESN metrics from extant literature and operationalise these based on the SWOOP data set. We then test each metric to see which ones divide the sample of groups in our data set into distinct types. We briefly introduce our research setting and data set, before we outline our method and research approach, utilising actual ESN meta data describing activity in 350 groups across three organisations.
Research setting and ESN data set
SWOOP offers a cloud-based platform that provides analytics for organisations' Yammer, Facebook Workplace and Microsoft Teams networks. When given permission by an organisation to integrate with its network, SWOOP "provide[s] access to more than 30 measurement indicators giving organisations and individuals deep insights into collaboration across the enterprise." It uses these metrics to provide user profiles, in the form of a typology that classifies each user.
Generally, any action performed by an ESN user is stored in the backend database of the ESN system and available in the form of digital traces, "digitally stored, event-based, chronological records of activities of actors, which result in direct or indirect actor relations or content in different data formats" (Behrendt et al. 2014a, 4). We distinguish usage data, or meta-data from user-generated data, or content, which contains what was posted. In order to ensure confidentiality SWOOP does not collect any content from organisations, only meta-data. Metadata is data about activities or interactions that indicates how, when and where an ESN activity was performed, what kind of interaction was performed, and who was involved.
Whereas the Yammer data model is organised around messages, SWOOP provides ESN activity data already organised as interactions between users. Moreover, SWOOP collects from an organisation's ESN information that is not included in the Yammer database, such as information on 'Likes' or 'Mentions' of other users (tagging), each of which are represented in the SWOOP data model as particular interactions. SWOOP distinguishes the following interaction types: Post, Reply, Notification, Mention or Like. Table 1 shows what meta data is available for each interaction. For this study we had access to data from Yammer networks of three firms (two financial service firms and one professional service firm). The data provided by SWOOP (with the firms' permissions) contained meta-data of all interactions in the various groups across these networks for a representative 10-week period. To protect user privacy SWOOP only shared anonymised meta-data, which was stripped of user and group names. Users, groups and all interactions remain traceable however through their unique IDs. In total, the data set contained 683,733 interactions by 40,304 users in 350 groups.
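To make the structure of such meta-data records concrete, the following sketch defines one possible record type covering the fields discussed above (interaction type, anonymised "From"/"To" IDs, group and timestamp). The field names are our assumptions for illustration; SWOOP's actual schema (Table 1) may differ.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class InteractionType(Enum):
    POST = "Post"
    REPLY = "Reply"
    NOTIFICATION = "Notification"
    MENTION = "Mention"
    LIKE = "Like"

@dataclass(frozen=True)
class Interaction:
    """One anonymised interaction meta-data record (field names assumed)."""
    interaction_id: str
    kind: InteractionType
    from_user: str           # anonymised user ID ("From")
    to_user: Optional[str]   # anonymised user ID ("To"); None for plain posts
    group_id: str
    thread_id: str
    timestamp: datetime
```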
Data preparation: construction of the ESN social graph
Any analytics of ESN user activity has to begin with the construction of the social network graph. Generally, a social network "consists of a finite set or sets of actors and the relation or relations defined on them" (Wasserman and Faust 1994, 20). Whereas in public social networks, such as Twitter or Facebook, networks can be inferred from explicit friend or follower relationships, in ESNs relationships have to be constructed from user activity, as follower relationships either do not exist or are inconsequential to communication on the platform (Behrendt et al. 2014a).
At the most basic level a dyadic relationship between two individuals is said to exist when one user responds to another's message (Ahuja et al. 2003). This is in line with social network theory, which asserts that relationships emerge from interactions (Granovetter 1973;Krackhardt 1992). ESN meta-data can thus be utilised to infer the ensuing network (Behrendt et al. 2014b). For our study, SWOOP provided various types of interactions between users that can be utilised to construct network graphs for each group in our sample. At the same time, the inclusion of different interaction types in graph creation has implications for calculating and interpreting metrics; for example, does liking someone's post constitute a relationship with that person, or should a relationship only be considered based on a reply to a message, as this suggests that the respondent has actually read (and not merely seen) the message and found it stimulating enough to interact?
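A minimal sketch of this graph-construction step is given below, building on the record type sketched earlier. Which interaction types count as relationship-forming is exposed as a parameter, reflecting the analytical choice just discussed (for instance, whether a Like constitutes a relationship); the function name and defaults are illustrative, not SWOOP's implementation.

```python
import networkx as nx

def group_graph(interactions, relationship_types=frozenset({"Reply", "Mention"})):
    """Directed graph for one group: a node per active user, and an edge
    u -> v whenever u directs a relationship-forming interaction at v."""
    g = nx.DiGraph()
    for i in interactions:
        g.add_node(i.from_user)  # every active user becomes a node
        if i.to_user is not None:
            g.add_node(i.to_user)
            if i.kind.value in relationship_types:
                g.add_edge(i.from_user, i.to_user)
    return g
```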
Iterative research process utilising cluster analysis
Our aim was to identify those metrics that best discriminate the sample of ESN groups in a way that results in certain archetypes describing groups regarding their activity patterns. Much like individual user profiles and archetypes already provided by SWOOP, the question we explore in this study is thus, can we identify a set of metrics that provides a similar set of group archetypes?
Given the explorative nature of this question, our research approach needed to be 'creative' and iterate between the identification and calculation of metrics and the clustering of groups based on varying sets of metrics. A tool was implemented using the software package Matlab to facilitate iterating on steps 2 to 6 of this process until a result emerged that (a) discriminated well into distinct group clusters, and (b) was interpretable in a way that corresponds with typical ESN use, i.e. that made sense from a practical point of view.
A cluster analysis is a method for semi-automated grouping of large numbers of objects based on their similarity, described by a vector of quantified characteristics (Hartigan 1975). Previous research has already demonstrated that clustering techniques are useful for classifying complex networks of different kinds (Newman and Girvan 2004; Strogatz 2001). For this study we experimented with a number of clustering algorithms (Song et al. 2012). Ultimately, agglomerative clustering, in particular the complete-linkage algorithm (Defays 1977; Krznaric and Levcopoulos 1998) with a standardised Euclidean distance measure (Pandit et al. 2011), produced the most useful results.
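Although the original tool was implemented in Matlab, the clustering choices described here (complete-linkage agglomerative clustering over standardised Euclidean distances) translate directly into a few lines of Python with SciPy. The sketch below uses placeholder data and is not the study's code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# One row per group, one column per metric; random placeholders here.
rng = np.random.default_rng(0)
X = rng.random((350, 3))

# Standardised Euclidean distances, then complete-linkage clustering.
d = pdist(X, metric="seuclidean")
Z = linkage(d, method="complete")

# Cut the tree into k clusters; k is chosen after inspecting the
# dendrogram (scipy.cluster.hierarchy.dendrogram can plot Z).
labels = fcluster(Z, t=5, criterion="maxclust")
```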
Cluster analysis is 'semi-automated' because it is up to the researchers to determine whether or not a clustering was successful. According to Everitt (1993) success is given when the researcher, who is familiar with the data, can sensibly interpret the resulting clusters. A good set of clusters shows homogeneous and clearly separable clusters.
To identify clusters we used dendrograms, plotting of metrics and a three-dimensional plot of group locations according to their metric values. In turn, the requirement to judge and interpret the clustering result in each instance meant that it was not feasible to include more than three metrics in each clustering attempt. Each clustering was thus done on the basis of triplets of metrics. This allowed us to surface, first, which individual metrics and, second, which metric combinations discriminated the group sample most distinctively (given that some metrics correlate and did not discriminate in distinct ways). A large number of individual clustering runs were performed in a semi-automated way using Matlab, from which suitable metrics first emerged and which, in a second step, converged on the most suitable group clustering. We refrain from reporting the details of the individual cluster analyses here, as this goes beyond the scope of this paper.
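The semi-automated triplet screening can be expressed as a simple loop over all three-metric combinations, with the judgement of cluster quality left to visual inspection as described above. A sketch under those assumptions:

```python
from itertools import combinations
import numpy as np

def screen_triplets(metric_table, cluster_fn):
    """Cluster the group sample once per triplet of candidate metrics.
    metric_table: dict of metric name -> per-group value array.
    cluster_fn:   callable mapping an (n_groups, 3) array to labels.
    Returns {triplet: labels} for subsequent visual inspection."""
    results = {}
    for triplet in combinations(sorted(metric_table), 3):
        X = np.column_stack([metric_table[m] for m in triplet])
        results[triplet] = cluster_fn(X)
    return results
```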
Findings: key metrics and four ESN group archetypes
In this section we present our findings in three steps. First, we briefly introduce a list of key metric candidates derived from the existing ESN analytics literature; second, we discuss in more detail the three metrics that were shown to best discriminate our data set into distinct and interpretable groups. Third, we introduce and discuss the four ESN group archetypes that resulted from the cluster analyses.
Metrics that best discriminate the groups sample
Our explorative analysis 'tested' varying triplet combinations of metrics by running separate cluster analyses on the sample of 350 groups each time. The analyses surfaced a set of three metrics that not only discriminate well within the group sample, but also differentiate the groups into four distinct clusters that are well interpretable and that correspond with known uses of ESN groups in organisations (a computational sketch of the three metrics follows the list):

1. Density of directed graph: for each group a directed graph is created by adding a node for each active user and a directed edge between all node pairs whose user-IDs appear as "From" and "To" in one or more transactions inside the group; the edge points to the node whose user-ID appears as "To". The density of this graph is defined as the number of existing edges divided by the number of possible edges. Density is a measure of the degree to which members of the group are connected, resulting from people talking directly to each other.

2. Gini coefficient: this metric stems from economics and was originally intended to measure wealth inequality, that is, the unevenness of wealth distribution. In the ESN context, it measures how evenly activity in a group is distributed. The higher the Gini coefficient, the more unevenly activity is distributed in a group. A Gini of 1 means that only one person is responsible for all activity; a Gini of 0 means everyone contributes exactly the same amount of activity.

3. Thread reciprocity: thread reciprocity measures the share of all posts with at least one reply. It is thus akin to a response rate measure. Groups with a high thread reciprocity are more conversational. Note that a Like is not regarded as a Reply; rather, a genuine response post is required.
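As flagged above, here is a computational sketch of the three metrics. The Gini implementation uses the standard sorted-rank formula, and thread reciprocity is computed from a hypothetical mapping of posts to the response kinds they received; these are our renderings of the definitions, not SWOOP's code.

```python
import numpy as np
import networkx as nx

def density(g: nx.DiGraph) -> float:
    """Existing directed edges divided by possible ordered node pairs."""
    return nx.density(g)

def gini(activity_counts) -> float:
    """Gini of per-user activity counts: 0 = perfectly even participation,
    values near 1 = activity dominated by a single user."""
    x = np.sort(np.asarray(activity_counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return float(np.dot(2 * ranks - n - 1, x) / (n * x.sum()))

def thread_reciprocity(responses_by_post) -> float:
    """Share of posts that received at least one genuine Reply (Likes do
    not count). Input maps post ID -> iterable of response kinds."""
    if not responses_by_post:
        return 0.0
    replied = sum(1 for kinds in responses_by_post.values() if "Reply" in kinds)
    return replied / len(responses_by_post)
```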
Group types resulting from the cluster analysis
From these metrics the clustering algorithm derived a total of initially five clusters (chosen after visual inspection of the resulting dendrogram). After a further detailed analysis of the five clusters we decided to merge the two smallest of the clusters (shown as clusters 3 and 5 in figure 1, and in red and green in figure 2) as they turned out to be quite similar in terms of metrics. Figure 1 demonstrates for each of the three metrics separately how they discriminate between the clusters; figure 2 provides a three-dimensional plot which visually locates all 350 groups; and table 3 names and summarizes the metrics for each of the four clusters. In the following we interpret each of the clusters.

Broadcast streams: These groups are quite large in terms of active users (those who interacted at least once in the 10-week period), yet they show only low levels of interaction and participation across the user population. Rather, they feature many single messages written by a small number of participants, and a large number of people who mostly read and only occasionally post. In addition, people are not well-connected with each other. Such characteristics are typical of groups used for announcements and the broadcasting of information. Typical uses are corporate communications or HR departments and business divisions pushing information to users in ways that resemble one-to-many 'Intranet' use. Such communication does not require responses from (reciprocity), or interaction among, users (density). The relatively large number of active users is explained by 'Likes' acknowledging posts.
Information forums: What is significant about this group type is that, while it shows rather even participation among users posting into the group, these posts do not solicit many replies from other users, or lead to interactions among users to build relationships. Such properties are typical of information forums, in which people post information, questions or requests for other users, but which are not home to many conversations or actual work interactions.

Community of practice: These groups show uneven participation but high reciprocity. This means that, while many posts receive replies from other users, these initial messages are written by a core group of members. In addition, the overall network density is low in that people are not well connected among each other. The latter is partly explained by the fact that these groups are the largest on average in our sample. We term these groups 'Communities of Practice' (CoP). CoPs are groups of loosely connected members which often congregate around a particular topic and a core group of leaders or experts in the context of organisational learning and knowledge exchange, while a rather large number of group members follow the conversation as an audience and only occasionally participate.
Project teams: These groups are by far the smallest in our sample and show significantly higher levels of connection between the group members than groups in the other three clusters. They are also highly interactive and conversational with even participation. Such properties are typical of project teams in which all group members are actively involved in performing joint work and all group members interact and converse with each other on a daily basis.
User feedback on group archetypes
We had the opportunity to present this research and the resulting ESN group archetypes to a selection of SWOOP client organisations utilising Microsoft Yammer, in order to solicit feedback. A brief online questionnaire was set up for participants of one of SWOOP's user group events to fill in. The survey comprised two free-text questions:

1. What are your thoughts on the groups presented? Do they make sense? Are they useful? Why/why not?

2. If you think there is a particular type of group missing, please tell us about it.
We solicited responses from Yammer champions across fourteen organisations.
Overwhelmingly the responses were in support of the group typology with 13 of the respondents clearly signalling that they were useful or mostly useful, commenting that they "make sense", or "resonate" with their own networks in that they themselves "have a mix of them all" or that the typology covers the groups that they had seen in their own Yammer network. Given the overall agreement, we suggest that the archetypes exhibit a certain universality or applicability across organisations.
Additionally, a few respondents named further group types as complements to the typology; the most relevant ones were:

1) Non-work or social groups (4 mentions), for conversations not directly related to work;

2) Groups for distributing corporate resources (1 mention), akin to an Intranet site that allows users to ask questions and discuss content;

3) Large temporary groups formed around short-term engagement events.
We note firstly that we did not investigate and discriminate our group sample by content, which means that a distinction between work and non-work content cannot be made, but we would expect that such groups could show interaction patterns of all four archetypes above, as one can imagine non-work initiatives of all different shapes. This would not justify adding a special group type on those grounds. Secondly, we would expect that groups for distributing corporate content which are quite interactive will manifest as outliers of the broadcast cluster, in that reciprocity and density are slightly higher, but not at the levels that would warrant classification as one of the other group types. Finally, large temporary groups with high levels of density, reciprocity and even participation might well exist (as in project teams), but did not show up in our sample, as all groups of this type were quite small, likely because they are rare and thus none fell into our time-bounded sample.
Discussion
We set out to investigate which set of metrics discriminates best in a sample of ESN groups, using cluster analysis, to derive a set of meaningful group types characterised by different activity patterns. Our explorative analysis, utilising activity meta-data from 350 groups from three organisations, converged on three metrics that measure reciprocity in terms of the proportion of messages eliciting replies, evenness of user participation, and density in terms of user connectedness in the group. Those metrics in turn distinguish four distinct group types, which we named broadcast streams, information forums, communities of practice and project teams. Our brief evaluation with client organisations lends further support to the typology. We note that these groups correspond to, but also extend, the group types that ESN providers such as Microsoft Yammer or Facebook Workplace provided as templates for their users when creating new groups (see figure 3).
Comparison of our group typology with ESN provider templates
In the case of Yammer, two of our group types, project teams ('Project') and broadcast streams ('My Organisation') have direct equivalents, while Yammer subsumes all other use cases under a broad category 'community', intended for users to "share best practices, learn new skills and connect around shared interests." Yammer's recent decision to suspend the group classification feature, after feedback from users, indicates that the typology was not granular enough and thus unhelpful.
Our finding suggests that a distinction should be made between communities of practice and information forums to differentiate those groups that are intended to focus on learning and sharing best practices from those that revolve around common interests and information sharing. The former, communities of practice, require more interaction and conversations between users (as measured by reciprocity), but at the same time will show a certain unevenness in participation (as measured by Gini), given that sharing of best practices and learning come with a differentiation in roles between experts/teachers and a broader audience of learners. This distinction is further supported by earlier, content-based studies that classify ESN use cases, where a strong distinction is made between communication genres that generate 'discussion and conversation' and those that are mainly one-way for 'providing input' for others.
In the case of Facebook Workplace, we note that this ESN similarly operates with three main categories, 'Teams & Projects', 'Open Discussions' and 'Announcements'. In addition, Workplace uses a group category 'Social & More' to separate out non-work-related communication, much like recommended by four of the SWOOP client organisations, and also adds two more specialised groups ('Multi-Company' and 'Buy & Sell') that are out of scope in our context as they stray from the purpose of an ESN. We note that separating 'social' conversations from work-related ones, while appealing to certain executive managers, might send the questionable signal to employees that non-work-related conversations, while tolerated, are somehow 'second rate'. Prior research has shown, however, that healthy ESNs exhibit about 40% communication that is not necessarily work-related but forms the basis for any network community to exist in the first place.
Usefulness of the ESN group typology
The feedback from SWOOP and its client base suggests that our typology will be helpful for ESN group leaders and community managers for managing their ESN networks. Specifically, we suggest that measurement of group characteristics will allow group leaders to compare their aspiration for what a particular group intends to become with actual usage patterns. For example, a group that intends to support a project team might, upon application of our metrics, be classified as a community of practice, indicating a lack of density, which comes with unhelpful network fragmentation in the project team. Similarly, an intended CoP might be classified as a broadcast stream, indicating a lack of engagement (reciprocity) among its members. Finally, an intended information forum that lacks even participation becomes lopsided with a lack of diversity in contributions and perspectives (see figure 4). We suggest that knowledge of such discrepancies will allow group leaders to manage and counteract accordingly. (Figure 4 contrasts aspirations and measured activity; its annotation for information forums, for instance, notes that such groups will be content with rather low engagement, but should ask how even participation is, to ensure diverse input.)
Conclusion
Our study contributes to ESN research in general, and the emerging field of ESN analytics more specifically, by extending ESN analytics approaches to the group level. Specifically, we contribute initial metrics and a typology of ESN groups according to activity patterns, as the basis for broader research into understanding the role of groups in ESN networks. Furthermore, our study contributes to ESN practice a method for ESN group leaders and network managers to measure group activity in a meaningful way, to visualise discrepancies between group aspiration and actual user activity, as measured by our metrics, and thus to improve group communication to achieve intended communication patterns. We envision that our metrics and classification could suitably be implemented in platforms such as that provided by SWOOP.
Every research study is circumscribed by certain design choices and limitations and ours is no exception. Firstly, our study was driven by curiosity and a practical interest in learning about user behaviour in ESN groups. As theorising in this space is still in its infancy, future studies will aim to consolidate empirical findings into more generalisable insights. Secondly, our study is merely a first, necessarily limited step in a broader research endeavour to extend analytics to the ESN group level. While we had access to a unique and relevant data set, we only derived one typology. Without doubt, future research is needed to corroborate our findings. We envision that future analyses will apply similar explorative analyses to different ESN networks to replicate our results, unearth additional useful metrics for discriminating group activity and extend our typology. Thirdly, we focused on the group level alone, but did not measure any interaction effects between individual behaviour and collective group behaviour. Hence, it will be worthwhile investigating the link between group-level and individual-level metrics and types, such as those identified by Hacker et al. (2017b). For example, will groups of certain types benefit from the presence of certain individual user types among its members? Fourthly, while we did not set out to explore the best clustering technique to identify group types, we acknowledge that there might be clustering methods that could potentially provide equally useful or better results. Finally, we only carried out a limited survey-based study to corroborate our results with user organisations. Additional qualitative research, utilising interviews with network managers and group leaders, might investigate the usefulness of our proposed typology, whether it captures all intended use cases for groups, and how it might be used to support decision-making. It would be particularly interesting to see how the implementation of group metrics in platforms such as SWOOP will shape user behaviour.
Appendix. Survey responses by respondent (Q1: Do the group types presented make sense / are they useful? Q2: Is a particular group type missing?).

Respondent 1. Q1: "I thought the presentation was amazing and I really got a better understanding of SWOOP and just how many businesses use it!" Q2: No.

Respondent 2. Q1: Yes, they make sense. Q2: N/a.

Respondent 3. Q1: Yes, they resonate, but there is a cross-over type between project and community of practice, based off networks working with broader bandwidth than one project or topic; similarly, several stronger contributors but lots of conversation to purpose (example: culture and strategy networks across teams). Q2: Maybe define the prior mix between CoP and project.

Respondent 4. Q1/Q2: Yes; I think you need a work social info group. There is an employee discount group at Westpac that is both broadcast and also questions on discounts.

Respondent 5. Q1: Mostly useful. Don't include non-business groups. Otherwise they cover most of the groups fairly well. Q2: Non-work groups.

Respondent 6. Q1: Yes, they cover the groups we have seen in our organisation. Q2: Not really.

Respondent 7. Q1: They make sense, but I think there might be some groups that do not fit into these definitions.

Respondent 12. Q1: Yes, they are the most commonly useful groups in a Yammer network. Q2: Corporate Resource. Our organization has some very interactive corporate resource groups where users can comment on current initiatives and ask questions. Health and Wellness is an example.

Respondent 13. Q1: Yes, am witnessing similar structures. Q2: The short-term team related to large engagement events like hackathons.

Respondent 14. Q1: Seems sensible. Q2: Perhaps there is another category of persistent team, e.g. call centre team, as opposed to project team, which has a defined beginning and end.
Natural Compound Resveratrol Attenuates TNF-Alpha-Induced Vascular Dysfunction in Mice and Human Endothelial Cells: The Involvement of the NF-κB Signaling Pathway
Resveratrol, a natural compound in grapes and red wine, has drawn attention due to potential cardiovascular-related health benefits. However, its effect on vascular inflammation at physiologically achievable concentrations is largely unknown. In this study, resveratrol at concentrations as low as 1 µM suppressed TNF-α-induced monocyte adhesion to human EA.hy926 endothelial cells (ECs), a key event in the initiation and development of atherosclerosis. Low concentrations of resveratrol (0.25–2 µM) also significantly attenuated TNF-α-stimulated mRNA expression of MCP-1/CCL2 and ICAM-1, which are vital chemokine and adhesion-molecule mediators of EC-monocyte adhesion and cardiovascular plaque formation. Additionally, resveratrol diminished TNF-α-induced IκB-α degradation and subsequent nuclear translocation of NF-κB p65 in ECs. In the animal study, dietary resveratrol supplementation significantly diminished TNF-α-induced increases in circulating levels of adhesion molecules and cytokines, monocyte adhesion to mouse aortic ECs, and F4/80-positive macrophage and VCAM-1 expression in mouse aortas, and restored the aortic elastin fibers disrupted by TNF-α treatment. The animal study also confirmed that resveratrol blocks the activation of NF-κB in vivo. In conclusion, resveratrol at physiologically achievable concentrations displayed protective effects against TNF-α-induced vascular endothelial inflammation in vitro and in vivo. The ability of resveratrol to reduce inflammation may be associated with its role as a down-regulator of the NF-κB pathway.
Introduction
Cardiovascular disease (CVD) is the number one cause of death in the United States and one of the top leading causes of death worldwide, mostly due to the westernization of traditional diets [1][2][3]. Atherosclerosis, a major cause of CVDs, is an inflammatory vessel disorder commonly characterized by plaque formation as a result of monocyte-derived macrophages that ultimately develop into lipid-laden foam cells [4][5][6]. Previous studies have reported that endothelial dysfunction following chronic inflammation is essential in the initiation and development of atherosclerosis [7][8][9]. In the early stages of atherosclerotic plaque development, circulating monocytes are recruited by activated endothelial cells (ECs), followed by EC-monocyte adhesion and subsequent transmigration into the intima [5]. Accumulating evidence suggests that these processes are driven by proinflammatory chemokines, such as interleukin-8 (IL-8) and monocyte chemoattractant protein-1 (MCP-1), and adhesion molecules, such as intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1) [6,10,11].
It is well-established that tumor necrosis factor-alpha (TNF-α), a major pleiotropic proinflammatory cytokine, plays a pivotal role in endothelial dysfunction and subsequent damage to vascular function [12,13]. Indeed, elevated levels of circulating TNF-α were found in the plasma of humans with vascular diseases [14,15], while in TNF-α knockout mouse models, decreased endothelial adhesion and atherogenesis have been reported [16]. TNF-α is also known to induce apoptosis in aortic endothelial cells [17] and demonstrate a high presence in atherosclerotic lesions [18], indicating its critical role in developing vascular disease. In research, TNF-α has been commonly used as an inflammation trigger due to its ability to increase expression of other proinflammatory cytokines, chemokines, such as MCP-1, and adhesion molecules, including VCAM-1 and ICAM-1 [19,20]. Previous studies reported that TNF-α-induced up-regulation of chemokine and adhesion molecule gene expression is mediated largely by nuclear factor-kappa B (NF-κB) [21,22]. NF-κB can be activated upon phosphorylation of inhibitors of NF-κB following TNF-α-stimulated activation of the IκB kinase (IKK) complex [23]. The p65 heterodimer, also known as RelA, is a member of the NF-κB family of transcription factors and shows increased nuclear translocation in the thickened intima of human atherosclerotic lesions [24,25]. Since inflammation-driven endothelial dysfunction is a prime trigger in atherosclerosis initiation and exacerbation, compounds that attenuate TNF-α-induced NF-κB activation and subsequent expression of inflammatory markers are potential therapies for vascular endothelial dysfunction.
Resveratrol (3,5,4′-trihydroxy-trans-stilbene) is a stilbenoid phytoalexin compound found in the skins of grapes and berries and naturally present in high concentrations in red wine [26]. Resveratrol has drawn wide attention due to potential cardiovascular-related health benefits potentially stemming from its role as an antioxidant and anti-inflammatory agent [27][28][29][30]. Indeed, results from a study done in a cell-free system indicated resveratrol as a scavenger for superoxide anion radicals [29]. Additionally, data from in vitro studies suggest resveratrol inhibits LPS-induced ROS generation and Nox1 expression, protecting vasculature by reducing oxidative stress [30]. Animal studies demonstrated that resveratrol treatment decreased neutrophil infiltration into myocardial ischemia/reperfusion tissue [27] and reduced cardiac hypertrophy [28], indicating a cardioprotective role. While these data shed light on protective effects of resveratrol against vascular disease, they do not reflect physiological effects of resveratrol, as the concentrations used in these studies exceed the plasma resveratrol levels (≤5 µM) that are attainable in animals and humans after consumption of resveratrol-containing food or supplements [31][32][33]. In a study done with 12 healthy males aged 25-45 years, depending on whether 25 mg resveratrol was delivered by vegetable juice, wine or grape juice, the peak serum concentration of free and conjugated resveratrol was 1.8-2 µM [31,32]. In another phase I study, up to 2.4 µM of unmetabolized resveratrol was found in the plasma of human participants who orally ingested a single dose of 5 g of resveratrol [33]. When 5 g of resveratrol was ingested daily for 29 consecutive days, peak plasma concentrations of trans-resveratrol reached up to 4.2 µM [34]. In both studies, oral intake of high doses (5 g) of resveratrol was demonstrated to be safe, as evidenced by the lack of any serious adverse events [33,34]. Since most of the previous studies used resveratrol concentrations well above those that were nutritionally relevant, the biological significance of previous findings is largely unclear, and the cellular and organismal action of resveratrol at physiologically achievable concentrations in the plasma (≤5 µM) needs to be examined further. In this study, we investigated whether resveratrol at physiologically achievable concentrations attenuates TNF-α-induced adhesion of monocytes to endothelial cells and its underlying mechanisms. We also analyzed the effect of dietary intake of resveratrol on TNF-α-induced vascular inflammation in C57BL/6 mice.
Resveratrol Reduced TNF-α-Induced Monocyte Adhesion to ECs
Adhesion of monocytes to ECs is a crucial step in propagating endothelial dysfunction in inflammatory diseases. We investigated whether resveratrol would have an anti-inflammatory effect by affecting monocyte adhesion to ECs. Exposure of EA.hy926 ECs to TNF-α resulted in at least a twofold increase in THP-1 monocyte adhesion to ECs (Figure 1). However, 1 h pretreatment with resveratrol at concentrations as low as 1 µM significantly suppressed TNF-α-induced monocyte binding, and 20 µM resveratrol reduced monocyte adhesion to levels seen in the control group that was not treated with TNF-α. The inhibitory effect of resveratrol on monocyte adhesion was found to be concentration-dependent.
Resveratrol Suppressed Gene Expression of TNF-α-Induced Chemokine and Adhesion Molecules in ECs
Prior to adhesion, monocytes are recruited to ECs through chemokines and adhesion molecules [35,36]. Real-time PCR determined that exposure of ECs to TNF-α for 1 h significantly increased mRNA expression of the monocyte chemoattractant protein CCL2 (MCP-1) and the intercellular adhesion molecule ICAM-1 (Figure 2A,B). Pretreatment with resveratrol at concentrations as low as 0.25 µM markedly suppressed TNF-α-induced expression of these genes. These results indicate that pretreatment with resveratrol has an anti-inflammatory effect.
Resveratrol Inhibits TNF-α-Induced NF-κB Activation in HUVECs
NF-κB activation through nuclear translocation of the p65 subunit is an essential step in TNF-α-induced transcription of chemokines and adhesion molecules [23][24][25]. Thus, we investigated the role of resveratrol in TNF-α-stimulated activation of NF-κB signaling. Immunofluorescence-stained images of NF-κB p65 nuclear translocation showed that cells pretreated with resveratrol exhibited a significant reduction in positive nuclear fluorescence compared to cells treated only with TNF-α (Figure 3A,B). These results suggest that resveratrol has a potent anti-inflammatory effect that is partly mediated through inhibition of the NF-κB signaling pathway.
Dietary Ingestion of Resveratrol Suppresses TNF-α-Induced Vascular Inflammation In Vivo
We further examined whether resveratrol could affect TNF-α-induced vascular inflammation in C57BL/6 mice. First, monocyte binding to mouse aortic endothelia was evaluated ex vivo using WEHI 78/24 monocytic cells. TNF-α treatment significantly increased monocyte adhesion to the endothelia of aortic cross-sections, and this increase was largely reversed in mice fed 0.4% resveratrol in the diet (Figure 4A-E).
Previous studies indicated that the chemokines MCP-1 and CXCL1/KC play a key role in monocyte recruitment, while the adhesion molecules ICAM-1 and VCAM-1 ensure firm adhesion of monocytes to the endothelial layer and their subsequent transmigration into the intima of the artery [37,38]. As seen in Figure 4B-D, the chemokines MCP-1/JE and CXCL1/KC (mouse homologs of human MCP-1 and IL-8, respectively) and the soluble adhesion molecules sICAM-1 and sVCAM-1 were present at significantly higher concentrations in mice treated with TNF-α compared to the control group. However, in mice fed dietary resveratrol, serum concentrations of MCP-1/JE, CXCL1/KC, sICAM-1 and sVCAM-1 were markedly reduced. Based on these results, we conclude that resveratrol attenuates endothelial inflammation partly by reducing chemokine and adhesion molecule production. During inflammation, monocytes undergo sub-endothelial transmigration and differentiate into macrophages [35][36][37][38][39]. F4/80 is one of the most commonly used monocyte-derived macrophage markers [40]. To further corroborate the hypothesis that resveratrol suppresses inflammation in vivo, immunohistochemistry was employed to assess the expression of the vascular adhesion molecule VCAM-1 and the monocyte-derived macrophage marker F4/80 in mouse aortic cross-sections (Figure 5A-D). As shown in Figure 5A-D, TNF-α treatment increased VCAM-1 and F4/80 staining, whereas dietary resveratrol markedly reduced both.
Resveratrol Prevents TNF-α-Induced Disruption of Aortic Elastin Fiber in Mouse Aortic Cross-Sections
Histopathological examination of aortas using Verhoeff-Van Gieson staining revealed severe vascular structural abnormalities, primarily disruption and discontinuity of elastin fibers, in mice treated with TNF-α (Figure 6). Dietary ingestion of resveratrol significantly inhibited these structural abnormalities in the aortas and aided in the maintenance of the delicate organization of elastin fibers, comparable to that of the control group (Figure 6). (Figure 6 caption: C57BL/6 mice were fed AIN-93G rodent diets with and without 0.4% resveratrol for one week, followed by 25 µg/kg/day of TNF-α injected intraperitoneally for 7 days; after the treatment period, the animals' aortas were harvested for sectioning. T, TNF-α; R, resveratrol; T + R, TNF-α + resveratrol.)
Resveratrol Diminishes TNF-α-Induced NF-κB Activation in Aortic Cross-Sections
Immunohistochemistry was used to identify activation of the NF-κB p65 subunit in mouse aortas. As displayed in Figure 7A,B, strong NF-κB staining was present in the aortic cross-sections of the TNF-α-only treatment group, indicating inflammation in the aortic vessels. However, dietary supplementation with resveratrol significantly diminished the intensity of the staining, suggesting an inhibitory role of resveratrol in NF-κB signaling in vivo. (Figure 7 caption: C57BL/6 mice were fed AIN-93G rodent diets with and without 0.4% resveratrol for one week, followed by 25 µg/kg/day of TNF-α injected intraperitoneally for 7 days; after the treatment period, the animals' aortas were harvested for sectioning. Representative photomicrographs of immunohistochemical staining for NF-κB p65 (A); quantitative analysis of NF-κB p65 (B). T, TNF-α; T + R, TNF-α + resveratrol. *, p < 0.05 vs. control; #, p < 0.05 vs. TNF-α-alone-treated mice.)
Discussion
Extensive studies have demonstrated that resveratrol, primarily consumed through grapes and red wine, exerts cardioprotective effects through its antioxidant and anti-inflammatory properties [23][24][25][26][27]. Its importance was first recognized in the early 1990s with the introduction of the "French paradox", the observation that the French population, despite a regularly high-fat diet, had a lower susceptibility to heart disease, attributed partially to regular consumption of resveratrol-containing red wine [41]. However, to the best of our knowledge, the protective role of resveratrol against vascular inflammation, and the underlying mechanism, at physiologically achievable concentrations in a TNF-α-induced inflammatory model remains largely unknown.
Numerous clinical studies reported low bioavailability of resveratrol in the body due to the rapid metabolism of trans-resveratrol into glucuronide and sulfate conjugates [31,[42][43][44]. Nevertheless, many previous studies examined the anti-inflammatory action of resveratrol in ECs using concentrations far beyond what is physiologically achievable through dietary intake [45,46]. In this research, we demonstrated that resveratrol at physiologically achievable concentrations (<5 µM) attenuates TNF-α-stimulated monocyte-EC adhesion. Clinical studies revealed that free and conjugated forms of resveratrol were present in plasma and urine samples of human subjects who orally consumed resveratrol [32,33], but only up to 2.4 µM of resveratrol was found in the plasma of human participants who orally ingested a high dose of resveratrol (5 g) [33]. Adhesion molecules such as VCAM-1 and ICAM-1 and chemokines such as MCP-1/JE and CXCL1/KC are important modulators of monocyte recruitment, rolling, and adhesion to the vascular endothelium and play a fundamental role in the pathogenesis of atherosclerosis [35,36,47]. We report that resveratrol suppressed TNF-α-induced increases in adhesion molecules and chemokines in ECs. Additionally, resveratrol reduced TNF-α-stimulated activation of NF-κB by inhibiting IκB-α degradation and thereby preventing nuclear localization of NF-κB p65 subunits in ECs, suggesting that resveratrol may exert its anti-inflammatory effect by interfering with the NF-κB signal transduction pathway. Mice fed a diet containing 0.4% resveratrol showed suppressed serum concentrations of adhesion molecules and chemokines as well as attenuated expression of VCAM-1 and F4/80-positive macrophages in the vascular tissue of aortic cross-sections. Overall, our results suggest that resveratrol is an easily attainable, naturally occurring, low-cost compound that could be used to ameliorate atherosclerosis.
Endothelial dysfunction and monocyte recruitment are essential in the initiation and exacerbation of atherosclerosis [48]. Previous studies implied that up-regulated expression of adhesion molecules and chemokines is involved in endothelial dysfunction and chronic endothelial inflammation, hence aiding the development of atherosclerosis and other cardiovascular diseases [8,49,50]. Adhesion molecules such as ICAM-1 and VCAM-1 are mediators that aid monocytes' transition from rolling to firm arrest and subsequent transmigration into inflamed tissue [35]. In fact, elevated expression of these leukocyte adhesion molecules has been reported in vascular-lesion-prone sites and in human coronary atherosclerotic plaques [51,52]. Additionally, C-C and C-X-C chemokines such as MCP-1 and IL-8 play a vital role in monocyte recruitment, rolling, and adhesion to vascular endothelial monolayers [35,37,47]. Here, we demonstrated that resveratrol significantly reduced TNF-α-activated mRNA expression of ICAM-1 and MCP-1 in ECs, suggesting that the anti-inflammatory effect of resveratrol may be partially due to a reduced production of proinflammatory adhesion molecules and chemokines. The in vitro results were recapitulated in the animal study, in which the up-regulated serum levels of MCP-1/JE, CXCL1/KC, sVCAM-1, and sICAM-1 after TNF-α treatment were markedly attenuated in mice that were fed resveratrol. Mice do not have IL-8, but the chemokine CXCL1/KC can act as its functional homolog [53]. These results suggest that resveratrol may exert its cardioprotective effects against vascular inflammation partially by preventing the production and/or secretion of chemokines and adhesion molecules. Since adhesion molecules and chemokines are secreted by various cell types, these results alone are insufficient to pinpoint ECs as the target of resveratrol's anti-inflammatory effects [54,55].

NF-κB is well-recognized as a key regulator of inflammation and has been implicated as essential for the pathogenesis of atherosclerosis [56,57]. One of the ways in which NF-κB exerts its proatherogenic effects is by up-regulating the transcription of adhesion molecules (e.g., VCAM-1 and ICAM-1) and chemokines (e.g., MCP-1) [23]. As mentioned previously, these proinflammatory adhesion molecules and chemokines are critical in inducing inflammation due to their involvement in monocyte attraction and adhesion to the endothelium [35,36,47]. In fact, a previous in vivo study reported that inhibition of NF-κB activation in ECs reduced the formation of atherosclerotic plaques in atherosclerosis-prone mouse models [57]. Activation of the NF-κB pathway is mediated by diverse extracellular stimuli, including cytokines such as TNF-α [19,58]. In a normal unstimulated state, NF-κB is kept inactive in the cytoplasm by being bound to inhibitors such as IκB-α [23]. In the classical (canonical) pathway, the stimuli induce a signal transduction cascade that activates NF-κB by IKK complex-mediated phosphorylation and degradation of IκB-α [59]. As a result, NF-κB p50/p65 heterodimers translocate into the nucleus, where they bind to the promoters of NF-κB-induced proinflammatory genes such as MCP-1, TNF-α, and IL-6 [60]. Our immunofluorescence staining results suggested that resveratrol inhibited TNF-α-induced NF-κB p65 nuclear translocation in ECs.
To the best of our knowledge, this is the first time that resveratrol at physiologically achievable concentrations has been shown to prevent IκB-α degradation and the subsequent NF-κB translocation into the nucleus in ECs. Immunohistochemical analyses of mouse aortic cross-sections additionally confirmed resveratrol's inhibitory effect on NF-κB activation in vivo. The aortic cross-sections of mice treated with TNF-α showed high-intensity NF-κB p65 staining, indicative of TNF-α-induced inflammation in the aortic vascular wall. Mice given dietary resveratrol showed greatly reduced NF-κB p65 staining compared to the TNF-α-only treatment group, suggesting that resveratrol may exert its anti-inflammatory properties by targeting NF-κB signaling, in line with the in vitro results discussed above. However, the exact mechanism by which resveratrol interferes with the canonical NF-κB pathway is still unclear.
F4/80 is a well-characterized murine macrophage marker. In previous studies, F4/80-positive macrophages were present at elevated levels in highly inflamed mouse aortas, suggesting F4/80 as a potential inflammatory marker [61,62]. Our immunohistochemical examination showed a high abundance of F4/80-positive monocyte-derived macrophages and elevated VCAM-1 staining in aortic cross-sections of mice treated with TNF-α, suggesting that TNF-α treatment induced inflammation in the aortic walls. However, mice that were fed dietary resveratrol showed a significant decrease in both F4/80-positive macrophages and VCAM-1 staining, suggesting that resveratrol may target the vascular wall to exert its protective effects against inflammation. Based on hematoxylin and eosin staining, TNF-α triggered extensive structural changes in the intima layer of the artery, implying endothelial injury. However, resveratrol supplementation significantly reduced this structural damage. Verhoeff-Van Gieson staining revealed the ability of resveratrol to prevent the TNF-α-induced disruption of aortic elastin fibers in mouse aortas. Although the exact mechanism is unknown, this protective property may be linked in part to the ability of resveratrol to down-regulate cytokines and adhesion molecules and to inhibit the NF-κB signaling pathway, as discussed above.
In summary, this study demonstrates for the first time that dietary ingestion of resveratrol reduces vascular endothelial inflammation in mouse models by inhibiting NF-κB activation and reducing VCAM-1 and F4/80 expression in aortic tissue after TNF-α stimulation. Resveratrol at concentrations as low as 1 µM significantly suppressed TNF-α-induced EC-monocyte adhesion and endothelial expression of chemokines and adhesion molecules. We suggest a possible link between the ability of resveratrol to protect against vascular inflammation and its down-regulation of the NF-κB signaling pathway, but further studies are required to determine the exact mechanism. Our findings highlight resveratrol as a potential novel therapeutic agent that can provide protection against inflammation and inflammatory diseases.
Monocyte Adhesion Assay
Monocyte adhesion to ECs was determined using THP-1 cells as previously described [63]. EA.hy926 cells were grown to confluence in 96-well plates and treated with various concentrations of resveratrol (1 µM, 5 µM, and 10 µM) for 1 h before the addition of 10 ng/mL of TNF-α. Cells were then incubated in medium containing TNF-α, in the continued presence or absence of resveratrol, for 24 h. EA.hy926 cells were gently washed with serum-free medium and then incubated with calcein-AM-labeled THP-1 cells (1 × 10⁶ cells/mL in RPMI 1640 medium containing 1% FBS) for 1 h. To remove unbound monocytes, the EC monolayer was gently washed with medium. Monocyte adhesion was determined from fluorescence measured using a BioTek Synergy 2 Multi-Mode Microplate Reader (Winooski, VT, USA) at excitation and emission wavelengths of 496 nm and 520 nm, respectively.
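For clarity, the sketch below shows how such plate-reader readings are typically converted into a fold change in adhesion over the untreated control. It is a minimal illustration only: the fluorescence values are hypothetical placeholders, and the study does not specify this exact calculation.

```python
# Minimal sketch: expressing monocyte adhesion as fold change over control.
# All fluorescence readings (RFU) below are hypothetical placeholders.
import statistics

def fold_adhesion(sample_rfu, control_rfu, blank_rfu=0.0):
    """Background-correct raw fluorescence and express adhesion
    as fold change over the untreated control wells."""
    sample_mean = statistics.mean(sample_rfu) - blank_rfu
    control_mean = statistics.mean(control_rfu) - blank_rfu
    return sample_mean / control_mean

control = [1200, 1150, 1230]      # no TNF-alpha
tnf = [2600, 2480, 2550]          # 10 ng/mL TNF-alpha
tnf_resv = [1500, 1430, 1480]     # TNF-alpha + resveratrol pretreatment

print(f"TNF-alpha alone: {fold_adhesion(tnf, control):.2f}-fold")
print(f"TNF-alpha + resveratrol: {fold_adhesion(tnf_resv, control):.2f}-fold")
```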
Reverse Transcription and RT-PCR
EA.hy926 endothelial cells were pretreated with various concentrations of resveratrol (0.25 µM, 0.5 µM, 1 µM, and 2 µM) for 1 h. TNF-α (10 ng/mL) was then added in the continued presence or absence of resveratrol for 1 h. TRIzol reagent was used to extract total RNA per the manufacturer's protocol. Complementary DNA was generated by reverse transcription using 1 µg of total RNA. Each well contained a reaction mixture of 10 µL of SYBR Green master mix, 5 µL of distilled autoclaved H2O, and 2 µL each of forward and reverse oligonucleotide primers. SYBR Green RT-PCR Master Mix was used (Life Technologies, Grand Island, NY, USA). The primers were ICAM-1 (forward, 5′-CTC CCT CTC GGG TCT CTC TC-3′; reverse, 5′-ACT GTG GGG TTC AAC CTC TG-3′) and MCP-1 (forward, 5′-CCC CAG TCA CCT GCT GTT AT-3′; reverse, 5′-TGG AAT CCT GAA CCC ACT TC-3′). The amplification profile was 50 °C for 2 min, then 95 °C for 10 min, followed by 40 cycles of 94 °C for 15 s and 60 °C for 1 min. Expression levels of each chemokine and adhesion molecule were normalized to those of the housekeeping control gene GAPDH.
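This normalization step is commonly carried out with the comparative Ct (2^-ΔΔCt) method; a minimal sketch is given below, assuming that method applies here (the text does not name it explicitly) and using purely illustrative Ct values.

```python
# Hedged sketch of relative gene expression by the 2^(-ddCt) method,
# normalizing each target (e.g., ICAM-1, MCP-1) to GAPDH.
# The Ct values below are illustrative, not measurements from this study.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Return fold change of the target gene versus the control condition."""
    d_ct_sample = ct_target - ct_gapdh            # normalize to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: TNF-alpha raises ICAM-1 expression ~8-fold versus control
print(relative_expression(ct_target=22.0, ct_gapdh=18.0,
                          ct_target_ctrl=25.0, ct_gapdh_ctrl=18.0))  # -> 8.0
```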
Confocal Immunofluorescence Study of NF-κB p65 Nuclear Translocation
HUVECs were pretreated with 1 µM of resveratrol for 1 h in eight-well chamber slides. The cells were then incubated with 10 ng/mL of TNF-α for 15 min in the continued presence or absence of resveratrol. Cells were washed with PBS and then fixed with 100% ice-cold methanol. Blocking was carried out at room temperature for 30 min using 10% normal goat serum (Sigma, St. Louis, MO, USA). Rabbit anti-NF-κB p65 primary antibody was added and incubated for 2 h at 4 °C. After three consecutive PBS washes, cells were incubated for 1 h with goat anti-rabbit IgG DyLight™ 488-conjugated secondary antibody. After a final PBS wash, the chamber slides were mounted with Fluoroshield with DAPI mounting medium (Sigma Chemicals, St. Louis, MO, USA), and NF-κB p65 was visualized using an Olympus Fluoview FV500/IX81 confocal microscope (Waltham, MA, USA). The localization of the p65 signal with respect to the nucleus was scored as follows: cytoplasm only (score 0); distributed evenly between cytoplasm and nucleus (score 1); mostly nuclear with faint cytoplasmic signal (score 2); nucleus only (score 3). DAPI was used to determine the approximate location of the nucleus. Scores were averaged and compared with the control group.
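As a small illustration of the scoring scheme just described, the sketch below averages per-cell translocation scores; the per-cell values are hypothetical examples, and the helper function is not part of the original study.

```python
# Sketch of the 0-3 nuclear translocation scoring described above.
# Per-cell scores below are hypothetical examples of blinded visual scoring.
from statistics import mean

SCORE_MEANING = {0: "cytoplasm only",
                 1: "even cytoplasm/nucleus",
                 2: "mostly nuclear",
                 3: "nucleus only"}

def mean_translocation_score(per_cell_scores):
    """Average the per-cell p65 localization scores for one condition."""
    assert all(s in SCORE_MEANING for s in per_cell_scores), "scores are 0-3"
    return mean(per_cell_scores)

tnf_only = [3, 2, 3, 3, 2, 3]        # strong nuclear translocation expected
tnf_plus_resv = [1, 0, 1, 2, 1, 0]   # resveratrol-pretreated cells

print(mean_translocation_score(tnf_only))       # -> 2.67
print(mean_translocation_score(tnf_plus_resv))  # -> 0.83
```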
Animal and Experimental Design
Male C57BL/6 mice (age 10 weeks) purchased from the Jackson Laboratory were housed in microisolator cages in a pathogen-free animal facility. All animal procedures were approved by the Institutional Animal Care and Use Committee and performed in accordance with the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals. The mice were randomly separated into three groups (control, TNF-α, TNF-α + resveratrol), with 6-8 mice per group. Mice were fed an AIN-93G rodent diet or a basal-modified AIN-93G rodent diet containing 0.4% resveratrol (Dyets, Inc., Bethlehem, PA, USA) depending on their allocated group. The resveratrol dosage was based on previous publications [31][32][33][34][64][65][66][67][68]. After one week, the mice were injected intraperitoneally (i.p.) with 25 µg/kg/day of TNF-α (PeproTech Inc., Rocky Hill, NJ, USA) for 7 consecutive days. Previous studies indicated that this dosage of TNF-α results in markedly elevated expression of intracellular adhesion molecules and vascular barrier dysfunction [69,70]. Control mice were injected i.p. with PBS for the same period. Throughout the i.p. administration period, mice continued to receive either the control or the resveratrol diet. For the entire duration of the study, body weight and feed intake were recorded weekly. Two hours after the last i.p. injection, all mice were euthanized, and blood samples were collected. Serum was frozen at −80 °C for ELISA analysis.
Ex Vivo Monocyte Adhesion Assay
Aortas were isolated from euthanized mice. The surrounding connective tissue and fat were removed, and the aorta was gently washed twice with ice-cold PBS. After being placed in DMEM at 37 °C for 10 min, the aorta was opened longitudinally and pinned with needles onto 4% agar in 35 mm plates. The aortic strip was placed in 1 mL of DMEM containing 1% heat-inactivated FBS. WEHI 78/24 monocytes were fluorescently labeled with calcein-AM following the manufacturer's instructions. Fluorescence-labeled WEHI monocytes (1 × 10⁶) were added to the aortic strip and incubated for 30 min. Non-adherent cells were washed away, and the number of bound monocytes was examined using a confocal microscope. Data were quantified using ImageJ software (version 1.48k, 2013, National Institute of Mental Health, Bethesda, MD, USA).
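The ImageJ counting step can also be reproduced in script form. The sketch below is an assumption-laden illustration: it uses scikit-image rather than ImageJ, and 'aorta.tif' is a hypothetical filename; it counts labeled monocytes by Otsu thresholding and connected-component labeling.

```python
# Hedged sketch of counting calcein-labeled monocytes in a fluorescence
# micrograph, analogous to the ImageJ quantification described above.
# scikit-image and the filename are assumptions, not the study's tooling.
from skimage import io, filters, measure, morphology

img = io.imread("aorta.tif", as_gray=True)        # fluorescence micrograph
mask = img > filters.threshold_otsu(img)          # cells vs. background
mask = morphology.remove_small_objects(mask, min_size=20)  # drop noise specks
labels = measure.label(mask)                      # connected components = cells
print(f"bound monocytes: {labels.max()}")
```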
Measurements of Chemokines and Adhesion Molecules
Serum concentrations of adhesion molecules (sVCAM-1 and sICAM-1) and chemokines (MCP-1/JE and CXCL1/KC) were measured using Quantikine ELISA kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. Serum concentrations were determined by plotting sample readings against standard curves.
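Back-calculating sample concentrations from an ELISA standard curve is typically done with a four-parameter logistic (4PL) fit. The sketch below assumes a 4PL model applies; the standard concentrations and optical densities are invented for illustration and are not data from this study.

```python
# Hedged sketch: reading ELISA concentrations off a 4PL standard curve.
# Standards and OD values are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = min asymptote, b = slope, c = EC50, d = max asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000])   # pg/mL standards
std_od = np.array([0.08, 0.15, 0.29, 0.55, 0.98, 1.60])   # measured OD450

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 250.0, 2.0])

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL to map a sample OD back to concentration."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.70, *popt))   # hypothetical serum well -> pg/mL
```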
Histology
The thoracic aorta was isolated, and adherent fat was removed. The aorta was fixed overnight in 10% buffered formalin solution. After overnight fixation, a 5 mm segment of the proximal artery was sliced off and placed in 200-proof ethyl alcohol for 24 h. The sectioned aorta was embedded in paraffin and then stained with Verhoeff-Van Gieson stain for elastin and with hematoxylin-eosin. Staining was performed at AML Labs (Baltimore, MD, USA) following standard protocols. Aortic sections were visualized under a bright-field EVOS XL microscope (AMG, Bothell, WA, USA).
Analysis of VCAM-1, F4/80, and NF-κB p65 in Mouse Aortas
Paraffin-embedded tissue sections of 5 µm were deparaffinized in xylene and rehydrated through graded ethanol washes. Sections were then boiled in 10 mM sodium citrate buffer (pH 6.0) and cooled at room temperature for 30 min. The tissue sections were incubated in 3% H2O2 for 10 min and then placed in 5% normal goat serum (Vector Laboratories) in TBST for an additional 30 min. The tissue sections were then incubated in primary antibodies overnight at 4 °C. For VCAM-1, a rabbit anti-VCAM-1 primary antibody (1:1000 dilution, Santa Cruz Biotechnology) and the Vectastain Elite Rabbit IgG kit (Vector Laboratories) were used. For F4/80, a rat monoclonal anti-F4/80 primary antibody (1:50 dilution, Bachem) and the Vectastain Elite Rat IgG kit (Vector Laboratories) were used. For NF-κB p65, a rabbit monoclonal anti-NF-κB p65 primary antibody (1:50 dilution, Santa Cruz) and the Vectastain Elite Rabbit IgG kit (Vector Laboratories) were used. Afterward, tissue sections were incubated in the corresponding secondary antibodies from the rabbit/rat Vectastain ABC-AP kit (Vector Laboratories). Immunohistochemistry was visualized using 3,3′-diaminobenzidine (Dako), and Harris hematoxylin was used to counterstain the nuclei. Photomicrographs of stained mouse aortas were captured using an AMG EVOS XL digital inverted bright-field and phase-contrast microscope (Bothell, WA, USA). Quantitative analysis of VCAM-1- and F4/80-positive areas in the aortas was carried out using ImageJ software.
Statistical Analysis
All data are expressed as mean ± SEM. Statistical analyses were performed by ANOVA using GraphPad Prism® software (La Jolla, CA, USA). Significant treatment differences were subjected to Tukey's multiple comparison tests. The level of statistical significance was set at p < 0.05.
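For reference, the same workflow (one-way ANOVA followed by Tukey's test at α = 0.05) can be reproduced outside Prism. The sketch below uses SciPy and statsmodels in place of the commercial software, with placeholder group values rather than the study's data.

```python
# Hedged sketch of the statistical workflow: one-way ANOVA followed by
# Tukey's multiple comparisons at alpha = 0.05. Group values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.0, 1.1, 0.9, 1.05])
tnf = np.array([2.4, 2.6, 2.5, 2.3])
tnf_resv = np.array([1.4, 1.5, 1.3, 1.45])

f_stat, p_val = stats.f_oneway(control, tnf, tnf_resv)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate([control, tnf, tnf_resv])
groups = ["control"] * 4 + ["TNF"] * 4 + ["TNF+R"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise differences
```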
Author Contributions: P.N. and Z.Y.K. contributed equally. Z.J. and D.L. designed the experiments; P.N. and Z.J. completed the experiments. P.N., Z.J. and Z.Y.K. processed the experimental data; Z.Y.K. and Z.J. drafted the manuscript; Z.Y.K., X.S., P.V.A.B. and D.L. contributed to analysis and interpretation of results. All authors have read and agreed to the published version of the manuscript.
Funding:
The work was supported in part by a grant from the U.S. National Institutes of Health (1R15AT005372).
Institutional Review Board Statement:
The experimental procedures were conducted according to National Institutes of Health guidelines and were approved by the University of North Carolina at Greensboro Animal Care and Use Committee.
Informed Consent Statement:
Not applicable, as the study did not involve human subjects.
Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors have no conflicts of interest to declare.
"year": 2021,
"sha1": "73d14df341e59cd4d635387a6a29e89d8460ceaf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/22/12486/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03e0ac747ac483ac83ca9179ab5c813d084b0784",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
The effect of moisture on the properties of cement-bonded particleboards made with non-traditional raw materials
The paper presents research into the changes of properties in cement-bonded particleboards caused by moisture saturation over the course of 504 h. Three particleboard variants were tested, all at the age of 18 months. The first is a standard production-line board manufactured by CIDEM Hranice, a.s. (identified as CP-R). The other two variants were modified with by-products of the particleboard manufacturing process: dust (CP-D) and a particulate mixture (CP-P). The experiment observed changes in the boards' dimensions, volume, and mass. The effect of moisture on their basic material properties was also investigated. While the boards were being saturated with water, changes in their structure were examined using an optical microscope. It was found that the boards behave differently depending on their composition, and the dynamics of the property changes differ as well. The modified particleboards are more susceptible to dimensional and volume changes. Both volume and mass undergo the most significant changes during the first 24 h. Cracks and air voids inside the wood chips begin to close upon contact with water as a result of swelling. Optical microscopy showed that this process occurs within 3 to 5 min of immersion in the water bath. Between 24 and 96 h, the rate at which the air voids and pores close begins to decrease, and the dynamics of mass and volume changes diverge as well. Wet-dry cycling of the boards was analysed as well. Temperature and moisture fluctuations negatively affected particleboard behaviour and properties; strength dropped by up to 50%. Wider cracks in the structure of the particleboards were detected by optical microscopy, namely in the ITZ (interfacial transition zone) between the cement matrix and the spruce chips.
Introduction
Particleboards are composite materials consisting of small wood particles bonded by a matrix [1]. Many binders are used, including cement-based ones. Wang et al. [2] investigated a mixture that consisted of waste wood bonded by magnesium phosphate cement. Miranda de Lima et al. [3] presented research into the modification of binders by metakaolin and calcined ceramics. Caprai et al. [4] dealt with the substitution of cement by MSWI (municipal solid waste incineration) bottom ash. The wood particles may come from many different kinds of wood, as documented by the findings of Odeyemi et al. [1] (African balsam tree), Nadhari [5] (banana trunk), A. N. Papadopoulos [6] (Carpinus betulis L.), Sotannde et al. [7] (Afzelia africana wood residues), Fuwape et al. [8] (tropical wood), Borysiuk et al. [9] (sugar beet pulp), Taha et al. [10] (tomato stalk), Amiandamhen and Izekor [11] (Gmelina arborea wood), Hossain et al. [12] and Wang [13,14] (construction wood waste), Cabral et al. [15] (stalk particles of Jerusalem artichoke), Karade et al. [16] (lignocellulosic wastes), Sassoni et al. [17] (hemp) and Schwarzova et al. [18,19] (hemp fibres).

The properties of wood and the cement matrix are markedly different. Wood is a heterogeneous material consisting mostly of structural components (cellulose, hemicellulose, and lignin) and non-structural components (polysaccharides of starch, extractives, proteins, some water-soluble organic compounds, and inorganic compounds). Wood is susceptible to volume changes due to moisture. Wood can hold water in the cell walls as bound water or in the cell cavities as free water [20]. In theory, most, if not all, of the hydroxyl groups in hemicelluloses are accessible to moisture [21]. Christensen and Kelsey [22] estimated that cellulose, hemicelluloses and lignin in Eucalyptus regnans are responsible for approximately 47%, 37% and 16% of the total water sorption of this wood species. The use of wood in cement composites is also strongly influenced by the leaching of hemicellulose, which slows the hydration rate of the cement matrix; cf. Janusa et al. [23].
However, with proper treatment, the properties of wood can be at least partially stabilized against the effects of moisture and the unwanted extraction of hemicellulose. The modification of wood in cement-bonded particleboards has already been the subject of research by Sotannde et al. [7] (modification by CaCl2, MgCl2, and AlCl3), Makarona et al. [24] (nanostructured ZnO coating), Ahmed et al. [25] (oil impregnation, thermal modification), and Pelaez-Samaniego et al. [20] (thermally modified acacia and sesendok). Lee et al. [26] conclude that a particleboard made with a higher proportion of oil palm trunk particles has better dimensional stability than one with a higher proportion of rubberwood particles. Nasser et al. [27] analysed the pre-treatment of the particles with either cold or hot water, and the addition of 3% of CaCl2, Al2(SO4)3, or MgCl2 (by cement weight), in terms of the influence on the properties of particleboards made with the pruning wastes of six wood species. Interactions between wood species and the W/C (water/cement) ratio were highly significant for all mechanical properties and dimensional stability characteristics. Treatment with hot water and CaCl2 was investigated by Amiandamhen and Izekor [11]. Their research also aimed to determine how the treatment of the wood (flakes and sawdust of Gmelina arborea) influences the strength, modulus of elasticity, water absorption, and swelling in thickness of the particleboards after 24 h in a water bath [11]. Li et al. [28] concluded that the vapour diffusivity of wood-cement composites decreases when the water content increases as a result of capillary condensation, whereas liquid diffusivity dramatically increases, because at high relative humidity the liquid water tends to saturate the pores, allowing the water to move freely and thus diffuse more easily [28]. Lee [26], drawing on Sulaiman [29], found that rubberwood is more hygroscopic than oil palm trunk. Pelaez-Samaniego [20] confirmed that the main factor contributing to the improved dimensional stability of wood-composite particleboards is the removal of hemicelluloses from the wood.
Currently, several by-products of cement-bonded particleboard production lack any further utilization and are therefore discarded as waste. The most promising of these materials are the dust produced by the cutting and grinding of the boards (6500 t/year) and a leftover particulate mixture consisting of cement, wood chips, admixtures, etc. (1000 t/year). Both the dust (D) and the particulate mixture (P) contain cement and spruce chips. These materials could potentially be re-used in the production of new cement-bonded particleboards. A survey of available literature and particleboard manufacturers has revealed that the re-use of the above-mentioned materials is still a largely unexplored area. Similarly, authors who have investigated the water absorption of particleboards have rarely produced a more complex and detailed analysis; studies usually limit themselves to testing the compliance of materials with the requirements of technical standards [30][31][32][33][34][35][36][37][38][39].
Materials
Cement-bonded particleboards were manufactured by CIDEM Hranice, a.s. The batch for each test mixture was 11 m³. The reference mixture represents a standard market-available particleboard. Two more mixtures were designed, both modified by alternative components produced as by-products of the particleboard manufacturing process. The modification took into account prior research conducted in collaboration with CIDEM Hranice, a.s.; see [40,41]. Components D and P were not treated (milling, crushing, sifting, etc.). The composition of the designed mixtures is shown in Table 1 (CP-R, reference particleboards; CP-D, particleboards modified by dust D; CP-P, particleboards modified by particulate mixture P). The mixtures are based on blended cement CEM II/A-S 42,5 R (Českomoravský cement, a.s., Mokrá, Czech Republic; specific surface area 458 m²/kg; density 3124 kg/m³; initial set 215-250 min; 28-day strength 59 MPa). The structure and microstructure of the wood chips are shown in Figs. 1 and 2. Their particle size distribution curve is shown in Fig. 3 along with those of the other materials. Besides these components, the mixtures also contained water and hydration-control admixtures.
The dust D is formed during cutting by a format saw, from where it is suctioned away via a cyclone that separates it from coarser particles. The particulate mixture P is created, for instance, by changes made to the manufacturing process, such as when the mixture is adjusted in response to weather conditions. Analysis of the microstructure of D and P (Fig. 2) has revealed a very thorough mineralization of a large number of the spruce chips. Cement matrix residues (hydration products) are present on the wood chip surfaces. It is therefore evident that the wood chips are to a large extent sealed against the ingress of moisture. The properties of the alternative components D and P have already been analysed by Melichar et al. [40][41][42]. The particle size distribution of P and D is shown in Fig. 3.
The chemical and mineralogical composition of D and P corresponds to the composition of a production-line particleboard. The dust D has a higher wood content (determined by TOC analysis; Table 2).
Experimental methods
The particleboards, with a density of 1300-1400 kg/m³ and a thickness of 12 mm, were made and cut into specimens on the production line at CIDEM Hranice, a.s. After 28 days, the boards were transported to the laboratories of the Institute of Building Materials and Components, Faculty of Civil Engineering, BUT. The specimens matured for roughly 17 months in the laboratories. All the tests and analyses were planned so that their end would coincide with the specimens reaching 18 months of age.
Specimen preparation and dimensions
Specimens with the dimensions of 50 mm × 50 mm × 12 mm (water saturation dynamics up to 30 min), 290 mm × 50 mm × 12 mm (testing of mechanical properties), and 300 mm × 100 mm × 12 mm (dimensional and mass changes) were made. All the specimens were stored in a climate chamber until they reached a stable mass. Afterwards, the specimens were placed in a desiccator for temperature stabilization. Next, all the dimensions (length, width, and thickness) as well as mass were measured. Detailed images of the specimen structure were also taken using an optical microscope. Each individual parameter was determined using a set of 6 specimens, with 3 specimens cut from the boards in the longitudinal direction and 3 in the transverse direction.
Analysis of structure-computed tomography
Before water saturation, the three-dimensional structure of the CP-R particleboard was analysed. This non-destructive analysis focused on the identification of the prevailing direction and orientation of the spruce chips in the boards. The orientation of the wood chips is important because the chips behave differently, during the dimensional changes related to water saturation, depending on direction. An X-ray computed tomography device, phoenix v|tome|x m 300, was used for this purpose.
Water saturation, swelling
Standard EN 317 [34] determines swelling in the thickness of a specimen after complete submersion under water at (20 ± 1) °C. In principle, the experiment followed EN 317, but while the standard only observes changes in thickness, the presented research examined volume changes in greater detail, including changes in length and width. The specimens had dimensions of 300 mm × 100 mm × 12 mm (see Fig. 4), which is similar to those of particleboards used in real-life applications. Besides dimensional changes, mass increases were observed as well. The standard EN 634-2 [32] only examines the value of swelling in thickness 24 h after immersion. The specimens were kept in a laboratory environment after their removal from the water and dried gradually until stable mass was reached. Subsequently, the irreversible changes in the monitored parameters were determined.
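The quantities reported in the results sections reduce to simple relative changes against the pre-soaking state. The sketch below illustrates the arithmetic; the dimensions and masses are hypothetical placeholders, not measured values from this study.

```python
# Minimal sketch of the EN 317-style change calculations: every quantity is
# expressed as a percentage of its value before soaking. Hypothetical values.

def pct_change(before, after):
    """Relative change in percent."""
    return 100.0 * (after - before) / before

def volume(length, width, thickness):
    return length * width * thickness

dims_dry = (300.0, 100.0, 12.00)     # mm, before immersion
dims_wet = (300.7, 100.3, 12.19)     # mm, after 504 h under water

swelling_thickness = pct_change(dims_dry[2], dims_wet[2])
volume_change = pct_change(volume(*dims_dry), volume(*dims_wet))
mass_change = pct_change(100.0, 125.0)   # g, hypothetical masses

print(f"thickness: {swelling_thickness:.2f} %, volume: {volume_change:.2f} %,"
      f" mass: {mass_change:.1f} %")
```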
Wet-dry cycling
Conditions characterized by cyclic wetting and drying were simulated in a water bath and a drying oven. One cycle (48 h in total) consisted of two consecutive phases: immersion in water at 20 °C for 24 h and drying in an oven at 103 °C for 24 h. A total of 7 such cycles were performed. Mass and dimensions were determined after each cycle, which made it possible to evaluate reversible changes, irreversible changes, and macroscopic failures.
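A minimal sketch of how reversible and irreversible components can be separated from such cycle data is given below; the thickness readings are hypothetical, and the decomposition used (drift of the dried state = irreversible, wet-minus-dried swing = reversible) is one straightforward convention, not necessarily the exact evaluation applied here.

```python
# Hedged sketch: splitting thickness change over wet-dry cycles into an
# irreversible part (dried state vs. initial) and a reversible part
# (wet-minus-dried swing within a cycle). All readings are hypothetical.
t0 = 12.00                                   # initial thickness, mm
cycles = [(12.25, 12.04), (12.30, 12.06),    # (wet, dried) per cycle, mm
          (12.33, 12.08)]

for i, (wet, dried) in enumerate(cycles, start=1):
    irreversible = 100.0 * (dried - t0) / t0
    reversible = 100.0 * (wet - dried) / t0
    print(f"cycle {i}: irreversible {irreversible:+.2f} %, "
          f"reversible {reversible:+.2f} %")
```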
Leachability of sugars
Cement-bonded particleboards contain sugars, mainly hemicellulose, which have a negative effect on cement matrix hardening. Hemicellulose is soluble in water. Thus, the water from the bath (in which the specimens were stored) was sampled after 168 and 504 h and tested for sugar content. The sugar content determination (Fig. 5) followed the principle of reducing sugars reacting with potassium permanganate (KMnO4) in an alkaline environment. The reduction of Mn(VII) to Mn(IV) changes the colour of the solution to yellow or yellow-brown. Brown MnO2 precipitate may form at high permanganate concentrations.
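If the colour change is quantified photometrically, the sugar concentration of an eluate can be read off a linear calibration against glucose standards. The sketch below assumes such a photometric readout and Beer-Lambert linearity over the working range; all calibration values are invented for illustration.

```python
# Hedged sketch: converting an absorbance reading to sugar concentration via
# a linear calibration against glucose standards. Assumes photometric readout
# and linearity; the calibration data below are hypothetical.
import numpy as np

std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])   # g/L glucose standards
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # absorbance readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # least-squares line

def abs_to_conc(a):
    """Invert the calibration line to get concentration from absorbance."""
    return (a - intercept) / slope

print(f"{abs_to_conc(0.33):.3f} g/L")                # hypothetical eluate sample
```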
Density, bending and tensile properties
Three sets of specimens were tested. The first was a reference set. The second set contained particleboards that were saturated with moisture (up to 504 h) and subsequently dried in laboratory conditions. The last set consisted of particleboards that were exposed to wet-dry cycles. The specimen dimensions were 50 mm × 50 mm × 12 mm (density, tensile strength) and 290 mm × 50 mm × 12 mm (bending). Before the tests of physical and mechanical properties were conducted, the particleboard specimens were stored at a relative humidity of (65 ± 5)% and a temperature of (20 ± 2) °C to allow them to reach stable mass.
The method for determining the density of particleboards is described in EN 323 [37]. Bending strength and modulus of elasticity in bending were determined according to EN 310 [33] (the three-point bend test). The strength of the specimen is calculated as the ratio between the bending moment M at maximum load Fmax and the section modulus of the full cross-section. The distance between the centres of the two support rollers is set at 20 × the thickness of the board. The maximum load (specimen failure) must occur within (60 ± 30) s at a constant loading rate. Tensile strength perpendicular to the plane of the board was determined from the maximum force acting upon the specimen surface according to EN 319 [36]. The loading rate (3 mm/min) was set so that the specimens would fail within (60 ± 30) s.
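For orientation, the EN 310 evaluation reduces to two closed-form expressions; the sketch below implements them with hypothetical specimen readings (the formulas follow the standard, but the numbers do not come from this study).

```python
# Sketch of the EN 310 three-point-bending evaluation. Formulas follow the
# standard; the specimen readings below are hypothetical placeholders.

def bending_strength(f_max_n, span_mm, width_mm, thickness_mm):
    """f_m = 3*F_max*l1 / (2*b*t^2), in N/mm^2."""
    return 3.0 * f_max_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

def bending_modulus(df_n, da_mm, span_mm, width_mm, thickness_mm):
    """E_m from the slope (F2-F1)/(a2-a1) of the elastic part of the
    load-deflection curve: E_m = l1^3 * dF / (4*b*t^3 * da)."""
    return span_mm ** 3 * df_n / (4.0 * width_mm * thickness_mm ** 3 * da_mm)

t, b = 12.0, 50.0              # thickness and width of the specimen, mm
l1 = 20 * t                    # span = 20x thickness, per the test setup

print(bending_strength(f_max_n=250.0, span_mm=l1,
                       width_mm=b, thickness_mm=t))   # -> 12.5 N/mm^2
print(bending_modulus(df_n=120.0, da_mm=0.9, span_mm=l1,
                      width_mm=b, thickness_mm=t))    # -> ~5300 N/mm^2
```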
Microstructural analysis-optical microscope
The structural changes were analysed with a Keyence VHX-950F optical microscope (up to 200× magnification). Special attention was paid mainly to areas where cracks had formed, as these parts make it possible to see how moisture saturation influences the dimensional and volumetric changes of the boards. The analysis focused mainly on the moisture saturation dynamics of the cement-bonded particleboards and its role in the closing and subsequent re-occurrence of microcracks. Moisture saturates the cell walls of the wood chips, causing them to swell and microcracks to form in the cement matrix. A total of 3 sets of specimens were subjected to a detailed analysis. The first set was examined from 0 through 30 min after immersion and consisted of specimens with the dimensions of 50 mm × 50 mm × 12 mm. The microstructural analysis of these specimens took place while they remained immersed in water (see Fig. 6). The second set comprised specimens of 300 mm × 100 mm × 12 mm, which were examined at set time intervals together with weighing and dimension measurement, i.e. before water saturation and after 1, 4, 6, 8, 12, 24, 48, 72, 96, and 168 h under water. The third set comprised specimens of 300 mm × 100 mm × 12 mm, which were analysed during the wet-dry cycles, i.e. every 24 h (after water saturation and also after drying).
3D structure of the particleboards
The following figures (Figs. 7 and 8) show the CP-R board with emphasis on the orientation of the spruce chips. The orientation of the chips in the particleboards is important in terms of dimensional changes because wood is an anisotropic material: significant dimensional and volume changes occur in the radial and tangential directions of wood [24,43].
CT analysis proved that the axial direction of the chips is mostly parallel to the plane of the particleboard. The radial and tangential directions of the wood chips are oriented perpendicular to the length (linear direction) of the particleboards. Thus, the most apparent dimensional changes can be expected in the thickness direction.
Dynamics of water saturation
The charts (Fig. 9) show a detailed evaluation and comparison of the dimensions, mass, and volume of the particleboard specimens (development up to 504 h). The irreversible changes in the tested parameters after gradual drying in a laboratory environment are shown in Table 3. The results confirmed the diverse behaviour of the boards during soaking, depending on their composition. This relates to the different ratios of cement and chips in CP-R, CP-D and CP-P. The ratio of cement, chips and alternative raw materials also affects the structure of the ITZ between the matrix and the spruce chips. Sorption in the boards not only occurs separately in the spruce chips and the cement matrix, but is also complicated by the interfacial region [44]. The sorption dynamics, characterized by rapidly changing dimensions and mass, are apparent up to 96 h of soaking. The most rapid change occurred within the first hour after immersing the boards. This indicates a relatively open porous structure of the boards, as the first water penetrates through the capillary system of the cement matrix into the cell structure of the spruce chips. Penetration becomes more difficult with increasing water content within the boards, because the porous structure is gradually filled by the water and less free space remains. Therefore, further water penetration into the boards slows down. When the fibre saturation point (FSP; fully saturated cell walls of the wood) is reached, the dimensional and volumetric changes are no longer significant. However, it is quite difficult to determine the FSP of the spruce chips in the boards exactly, given the heterogeneous structure of this composite material (cement matrix, spruce chips, ITZ). The spruce chips are also stabilized (modified) by water glass, Ca ions of the cement binder and the thermal treatment applied during the production of the boards. This stabilization of the chips perceptibly changes their sorption dynamics. The reference boards CP-R show the most stable properties in terms of dimensional, volume and mass changes. Irreversible changes in the tested parameters were determined for all types of boards. The irreversible mass changes of CP-R correspond to the findings of Fan et al. [45].
The linear direction is characterized by the smallest dimensional changes (0.39% for CP-P; Fig. 9a). This relates to the orientation of the spruce chips in the particleboards and the direction of pressing during production of the boards (Figs. 7 and 8), including the residual stress in the chips. The residual stress did not strongly affect linear expansion; the length changes were 0.23% (CP-R), 0.28% (CP-D) and 0.29% (CP-P). Radial and tangential swelling occurs mostly in the thickness direction of the boards due to the chip orientation. In addition, the residual stress (from the production of the boards) in the chips is relaxed during soaking. This stress increases the expansion pressure of the spruce chips and thus strongly affects the thickness change. Therefore, compared to the linear changes, much higher values of swelling in thickness were determined (see Fig. 9b): CP-P reached 1.58% after 504 h. After 24 h, the CP-R specimens swelled by 0.3%, which corresponds to the value declared by the manufacturer [46]. The water saturation dynamics of CP-P are most apparent up to 96 h of soaking, while CP-R shows a more significant increase between 168 and 504 h. After gradual drying, the irreversible change in thickness was in the range of 0.26% to 0.35%. Particleboards CP-D and CP-P suffered much smaller swelling in thickness after 24 h than particleboards containing other available agricultural waste (various plants and woods). This is supported by published results: Nasser et al. [27] (4.51% to 4.87%), Odeyemi et al. [1] (2%), Sotannde et al. [7] (0.8% to 2.19%), Fuwape et al. [47] (4.44%), Okino et al. [48] (1.1 to 1.8%) and Zhou et al. [49] (0.53%). A strong dependence of swelling in thickness on particleboard composition was demonstrated. However, a relationship between the dimensional changes (see Fig. 9a,b) and the density (see "Physical and mechanical parameters" section) of the particleboards was not proved, in contrast with, for example, Nasser et al. [27].
Volume changes (see Fig. 9c) roughly follow the trend of swelling in thickness (see Fig. 9b). Particleboard specimens CP-P appear the most susceptible to volume change, with 1.52% (24 h) and 2.51% (504 h) during water saturation. The volume increases most rapidly during the first 48 h of immersion. This is partially caused by the dissolution of soluble substances from the spruce chips, which may increase stress in the remaining wood substance and the surrounding solids of the cement paste. This leads to a corresponding volume change in both the wood chips and the adjacent cement paste [44]. Particleboards CP-D and CP-P exhibit the same volumetric behaviour over the first hour of soaking (0.76% and 0.77%). After gradual drying, the irreversible volumetric change was in the range of 0.69% to 1.38%. These values are not negligible considering the maximal volumetric changes after 504 h. The results show that the differences between the masses (Fig. 9d) of the CP-R, CP-D and CP-P boards are smaller than the differences in the dimensional and volume changes. The most dramatic increase occurs within 1 h of initial immersion (approximately 10%). After 504 h of immersion, the mass of all the specimens had increased to a similar degree, ranging from 24.40% to 25.79%. After gradual drying, the irreversible change in mass was in the range of 1.20% to 3.14%. The tested particleboards exhibit different water absorption depending on the type of alternative material used, and the dynamics of the mass change vary as well. This is consistent with the water absorption after 24 h of immersion reported by Sotannde et al. [7], while Okino et al. [48] report a more dynamic water saturation during the first two hours of immersion.
CP-D boards appear more resistant to moisture-induced changes than CP-P. This is partly due to the nature of the alternative components: component P contains larger particles and agglomerations of cement and wood chips. Another crucial factor is that dust D comes from particleboards that have passed through the entire manufacturing process, including two thermal curing cycles. In contrast, particulate mixture P accumulates in the production line after mixing and layering into a continuous sheet, which means that component P has not completed its full mineralization cycle. The performed tests confirmed that the origin of the alternative component plays a significant part in the dimensional and volumetric changes. However, the effect on the water absorption (mass changes) of the particleboards is less evident.
Wet-dry cycling
The effect of cyclic water soaking and drying on the parameters of the particleboards was analysed (Fig. 10). The irreversible and reversible changes in the tested parameters after wet-dry cycling are apparent from Table 4. The linear changes are the smallest, with an irreversible component of approximately −0.1%; the differences between CP-R, CP-D and CP-P are not significant. On the other hand, the thickness change is more pronounced: an irreversible increase was noticed, while the irreversible linear change is characterized by shrinkage. The differences in thickness between the individual types of particleboards are also more evident. The reference boards CP-R show the smallest dimensional, volume and mass changes, while the CP-P boards are the most susceptible to changes in the analysed parameters due to the wet-dry cycles.
Irreversible swelling is caused by the release of the residual compressive stresses imparted to the board during the pressing process [47]. Relief of residual stresses within the wood due to the hot pressing process was also presented by Rowell [51]. The composition of the particleboards also has a significant effect on their changes during wet-dry cycles. A comparison of the results of [47] with the outcomes of this study makes the very strong dependence on the composition of the particleboards obvious.
Boards CP-R, CP-D and CP-P show irreversible changes in the range of 0.40 to 0.74%, whereas the authors of [47] determined irreversible changes of their particleboards in the range of 15 to 21%. Interestingly, these different values occur despite the very similar density and cement/particle ratio of the particleboards. Another key factor affecting the rate of change is the de-bonding of the cement matrix and wood particles due to chip shrinkage during drying [47]. Failures between the cement matrix and the spruce chips are apparent from the microscopic observation (see "Microstructure" section). Given the very low changes of CP-R, CP-D and CP-P, it is evident that the de-bonding effect is very slight in comparison with boards based on sawdust and wastepaper [47]. Irreversible changes in particleboard mass (growing trend) and length (downward trend) were determined by Fan et al. [45] during wet-dry cycles (90-65-35-65-90% RH); the length and mass trends are also obvious from the hysteresis loops (isotherms) [45].
The results (Fig. 10 and Table 4) are supported, among others, by Ahmed et al. [25]. The mass loss of the particleboards is connected with the removal of water-soluble carbohydrates or extractives from the spruce chips, especially after the first cycle (see Fig. 10d). Nevertheless, the mass loss is very low, i.e. 0.76%. The mass and length trends (Fig. 10a, d) are also confirmed by Fan et al. [44]. Degradation of the wood chips in cement-bonded particleboards can occur due to the highly alkaline environment of the cement matrix, causing the loss of mass and the dimensional changes of the chips.
Sugar leachability
The negative effect of sugars on cement matrix hardening has been analysed and described by, among others, [52][53][54][55]. Therefore, the water (in which the specimens resided) was sampled after 168 and 504 h and tested for sugar content.
By this time, the water had changed colour and taken on a yellow hue. Sugars inhibit the hydration of the cement matrix; Janusa et al. [23] claim that sugars at concentrations as low as 0.03-0.15% (originating from hemicellulose) will delay initial setting and strength gain in cement composites. Table 5 lists the measured values of sugar content in the eluate; selected representative samples during the analysis are shown in Fig. 11. Each of the specimen sets was placed in a separate water bath.
The results show that over 168 h, only a very small amount of sugars had leached. Given the findings of Janusa et al. [23], this amount is too insignificant to have any particular influence on cement hydration. A slightly higher amount of sugar was found in the water where the specimens remained for 504 h. In the case of CP-R and CP-P, this amount was at the bottom threshold of the sugar concentration given by Janusa et al. [23]. A higher amount was found only in the eluate of CP-D. The results of sugar concentration in the water bath correspond to the material composition of each mixture.
The results show that 504 h of moisture contact will cause sugar leaching that is capable of affecting the hydration of the cement matrix. C3A reacts very rapidly; it can therefore be assumed that sugar leaching at the rate observed in this experiment (see Table 5) is too slow to affect the C3A reaction. It is more likely that the sugars would affect the C3S reaction. However, this clinker mineral also reacts relatively quickly. Especially when a large amount of water is present, it is very likely that C3S will have reacted before enough sugars have leached to slow it down.
Physical and mechanical parameters
The density values of all the specimens are shown in Fig. 12a (ranging between 1320 and 1350 kg/m³). Immersion in the water bath increased these values to 1350-1380 kg/m³. A slight decrease in density was determined for the boards subjected to the wet-dry cycles, with values in the range of 1300 to 1320 kg/m³.
Bending strength tests revealed a slight benefit of the water immersion, but a very significant deterioration due to wet-dry cycling (see Fig. 12b). The bending strength and density of the reference and immersed boards follow a similar trend. The bending strength of specimens matured in a normal environment ranges between 12.3 and 12.7 N/mm². Prolonged immersion in water increased these values by 1.3 to 4.1%. An interesting finding is that the modified particleboard specimens showed higher strength than the reference, which is probably related to the higher density of the modified boards. It is obvious that the minor volume changes do not reduce the final bending strength, and that the modification by alternative components brings a slight benefit. In addition, the bending strength results prove that the sugars leached during immersion did not hinder the continuing hydration of the cement matrix. The boards subjected to the wet-dry cycling show a significant decrease in bending strength, by 34 to 37%, which confirms all the aforementioned phenomena (release of residual stress in the chips, de-bonding of the chips and the cement matrix, etc.).
Regarding the modulus of elasticity in bending (see Fig. 13a), the tested particleboards behave a little differently than in the case of bending strength. Immersed particleboards CP-D show a slight decrease in the bending modulus, specifically 2.2%, while the other mixtures show an increase of between 2.7 and 7.6%. In contrast, particleboards subjected to wet-dry cycles show a decrease in modulus in the range of 47.3 to 49.4%. The modulus does not follow the density trend strongly.
Transverse tensile strength perpendicular to the plane of the board (see Fig. 13b) indicates the bond and coherence of the particleboards in the direction of their compacting, i.e. thickness. A strong decrease in tensile strength was determined after wet-dry cycling (39.4 to 48%). The immersed boards show increased tensile strength. The smallest degree of swelling was found in CP-R, which also showed a slightly higher increase in tensile strength than the modified boards CP-D and CP-P. This reflects the negative effect of swelling in thickness (release of stress and the pressure of the swollen chips on the adjacent matrix). All the phenomena discussed above (release of residual stress from pressing the boards, orientation of the chips, de-bonding of chips and matrix, etc.) have a strong effect in the lateral direction (thickness).
Microstructure
First, selected areas of the specimens were observed for 30 min (Figs. 14 and 15). During the examination, the specimens remained in the water bath. The structure of the particleboards was also analysed over the longer term, i.e. 1 through 504 h, during which the specimens were removed from the water bath to be examined; images of the microstructure were taken immediately after removal. The most interesting changes were observed after 0, 1, 24, and 168 h. It was possible to record the movement of the chips in all three directions by 3D analysis (Figs. 16, 17 and 18). Some publications examine the volume changes of wood as such; however, wood that is part of a composite behaves differently. Specifically, in cement-bonded particleboards the wood chips are surrounded by the cement matrix, which prevents the wood from dilating normally. Moreover, the wood is chemically and thermally treated. Observing the areas where cracks occurred in the wood/cement interfacial zone was particularly interesting. The examination with an optical microscope showed that the dynamics of the volume (dimensional) changes vary rapidly. Cracks situated in the interfacial zone close within several minutes as a result of the wood chips' swelling upon contact with water (Figs. 14 and 15). The initial width of a chip (Fig. 14) was determined at 552 µm, after 3 min 609 µm (a 10.3% increase), and after 10 min 621 µm (a 12.5% increase). However, it must be emphasized that the adjacent crack allowed the spruce chip to move in this direction. The crack closes rather rapidly, already after 3 min (Fig. 14). A crack in the interfacial zone of a CP-D board also closed within 3 min of the board's immersion in water (Fig. 15). The reason is that the cell walls are filled by water in the first stage; saturation of the cell walls by liquid water before reaching the FSP is very fast. The saturation dynamics are intensive up to 5 min after immersion in the water bath. All free gaps, pores or cracks situated in the contact zone of the chips and the matrix close within several minutes due to the swelling of the spruce chips. Spruce wood is subjected to macroscopic swelling, whereas, at the microscopic scale, the cell walls absorb bound water through the hemicellulose [56]; water is bound to the hemicellulose and the amorphous component of cellulose.
After reaching the FSP, there is no obvious macroscopic swelling: the cell walls are already saturated with bound water, and the free water is located in the lumens. The swelling of the spruce chips is evident in all directions in the microscopic pictures, especially where there is free space around the chips. In general, however, given the orientation of the chips, swelling is more pronounced in the thickness direction (Fig. 9). An important factor when the boards are immersed in water is the release of the stresses induced during production of the particleboards. Some chips are characterized by a lower degree of swelling; this could mean that such a spruce chip comes from the alternative component (the particulate mixture). The wood contained in these components had been treated by a mineralization process, which is why these particles should be more resistant to moisture-induced swelling.
Analysing the specimens' structural changes in 2D and 3D makes it possible to study the profile changes in greater depth, mainly in terms of the closure of cracks and the volume changes. Selected outcomes are shown in the following images (see Figs. 17 and 18), which illustrate that the pictured crack closed within the first hour as the wood swelled.
It is evident that not all the spruce chips observed in the area undergo the same volume changes. This is caused either by their orientation, damage to their structure, or by their origin (whether they are primary spruce chips or part of the alternative components). However, a detailed microscopic analysis can be of great use in verifying, confirming, or explaining certain observations and findings obtained during the determination of dimensional changes in the cement-bonded particleboards. The combination of the 2D and 3D cross-section profile is particularly useful, as it enables an easy yet thorough analysis of these changes.
Irreversible widening of the crack was observed during wet-dry cycling (Fig. 19). It is also evident that the chip itself deteriorated, i.e. its damaged structure is apparent. Failures formed mostly in the interfacial transition zone (ITZ) between the spruce chips and the cement matrix. The reason is that the chips and the matrix undergo different volume changes during temperature and moisture fluctuations. During wetting, the chips swelled and exerted pressure on the adjacent matrix (supported by the results and findings of Rowell [57]); this pressure could contribute to damage of the surrounding cement matrix of the boards. During the drying phase, the wood chips shrink more quickly than the matrix, and separation of the chips and matrix therefore occurs. The fluctuations in moisture and temperature caused a decrease in the mechanical properties of the particleboards (Figs. 12 and 13).
Apparent irreversible changes were determined in the case of CP-P (Fig. 19), which is in accordance with previous results and findings (Fig. 10 and Table 4). A significant aspect is also the structure of the cement matrix, which is affected by the amount of substituent (particulate mixture P or dust D). An increased amount of substituent P resulted in a less compact and rigid structure; the less coherent structure of these boards is evident from the tensile strength results (Fig. 13b). The loss of coherence of the tested boards was more apparent after wet-dry cycling. The irreversible changes of the cement-bonded particleboards also relate to deformation of the cement matrix, which undergoes irreversible changes during swelling and subsequent drying of the boards. The chips, by contrast, are more flexible and return to almost their original dimensions after drying. Therefore, widened cracks in the ITZ of the analysed boards are apparent after 7 wet-dry cycles (Fig. 19b, d).

Fig. 18 Detailed picture of a crack in particleboard CP-D (after 1 h of immersion in water), taken with an optical microscope: a 3D view of the surface with the crack; b 2D view of the cross-section; c profile curve of the cross-section.
An interesting avenue for further research would be the study of water transport in the boards and their separate components (matrix and chips), in terms of the different ways water enters through the edges and into the inner structure of the boards. However, this requires a more complex and detailed study, which is beyond the scope of the present work.
Conclusions
All the parameters were evaluated in terms of the effect of the particleboards' different compositions.
Cement-bonded particleboards behave differently during water saturation (up to 504 h under water) depending on their composition, and the changes in the observed parameters follow different dynamics. Particleboards with a modified composition are more susceptible to volume and dimensional changes. Most volume and dimensional changes occur roughly within the first 24 h; very fast changes were detected by optical microscopy within 3 to 5 min of immersion in water. Between 24 and 96 h, the rate at which pores and air voids close begins to decrease. Volume and mass changes occur at different rates, with mass increasing in larger increments than volume.
Mechanical properties (strength and modulus of elasticity) were not harmed by swelling. In fact, the 504 h of water saturation caused them to increase by a small degree: the continuing cement hydration had a stronger effect than the structural damage caused by the swelling of the wood within the particleboard matrix. On the other hand, boards subjected to wet-dry cycling deteriorated significantly, with an apparent decrease in strength (up to 50%) and modulus. Their microstructure also contained more failures and flaws.

Fig. 19 Development of failure in the contact zone of matrix and spruce chip within the particleboard during wet-dry cycling: a CP-R before cycling; b CP-R after 7 cycles; c CP-P before cycling; d CP-P after 7 cycles.
Despite the fact that the properties of the wood chips had been stabilized as part of the manufacturing process, sugar leaching occurred during the water saturation. A slightly higher sugar concentration was found only after 504 h of water saturation. However, the strength tests show that the leached sugars did not affect the properties of the particleboards.
"year": 2021,
"sha1": "eb4aa2f76c1327cb58d04e52e39eb203de871efa",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s10086-021-02008-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "eb4aa2f76c1327cb58d04e52e39eb203de871efa",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Characteristics and engineering properties of residual soil of volcanic deposits
Knowledge of the residual soil derived from volcanic-sedimentary rocks provides important information on soil bearing capacity and engineering properties. Residual soil is the product of weathering, is commonly found in unsaturated conditions, and has geotechnical characteristics that vary with the level of weathering. This paper summarizes research results on the basic engineering properties of residual soil of volcanic-sedimentary rocks from several different locations. The main engineering properties of residual soil, such as specific gravity, porosity, grain size, clay content (X-ray test) and soil shear strength, were determined on volcanic rock deposits. The results show that the variation in the index and engineering properties and the microstructural properties of residual soil correlates with the depth of the weathering levels. Pore volume and pore size distribution in weathered rock profiles can be used as an indication of weathering levels in the tropics.
Introduction
The existence of residual soil at the land surface can cause geological engineering problems, especially those related to the strength and bearing capacity of the soil. The most frequent land movement disasters have been linked to residual soil on hills. Soil movement is the movement of a soil/rock mass down or out of a slope that occurs when the materials composing the slope, such as soil mass, rock, or weathered material, lose their stability [1]. This is largely related to several key factors, such as the geological, climatic, vegetation and land-use conditions. Generally, ground motion occurs on slopes where the residual soil has not been well consolidated. It has been argued that the factor triggering soil motion can originate within the slope itself, such as the weathering of rocks [2].
Ground motion generally occurs in the rainy season, when rain of a certain intensity triggers landslides as rainwater seeps into the slope and pushes the ground to move. The types of rain that trigger ground motion are heavy rainfall and prolonged normal rain. Heavy rain, for example, is rain that reaches 70 mm per hour or more than 100 mm per day. This will only effectively trigger a landslide on slopes whose soil easily absorbs water [2], such as clayey and sandy soils. It has been concluded that rain absorbed into the soil may cause an increase in pore water pressure, which disturbs slope stability, especially where residual soil has formed [3].
Residual soil has been studied by several earlier researchers, among them [4][5][6][7][8], who delimit residual soil for engineering purposes as weathering grades VI, V and IV. Those studies discussed soil characteristics in terms of index properties, engineering properties and the soil development index. However, there has been little research on the soils of tropical volcanic rocks in relation to soil movement. Reference [4], based on several samples from Java, showed that breccia-derived residual soils had higher triaxial test values than those derived from sedimentary rocks.
Weathering profile
There are various definitions of residual soil from several researchers. Residual soil is defined as a soil material derived from the underlying bedrock that has not undergone transportation; it is usually found in tropical climates with relatively high temperatures and rainfall. McLean & Gribble state that residual soil is rock material that has been turned into soil, losing the rock's structure and texture and changing in unit weight, without having been transported. In general, residual soil can be defined as a soil material resulting from the weathering and decomposition of rocks that has not been transported from its original place. Residual soils from the weathering of Quaternary volcanic breccia and lapilli tuffs generally have the texture of a mixture of clay, silt, and fine sand [6,9]. The hilly area is occupied by volcanic sediment with soil classified in the CH group, that is, clay with high plasticity, classified as heavy clay. The level of development in residual soils of this type is also high; a higher rate of development causes a decrease in the soil strength parameters. Factors that control weathering include rock characteristics, climate, topography, hydrology, and vegetation (biology). Moreover, human cultivation of land for agriculture or plantations can accelerate weathering. Taha states that tropical climates play a very important role in the formation of residual soil; rainfall, temperature, and chemical processes cause the formation of relatively thick residual soil compared with mechanical weathering.

In the weathering process, there is a gradual transition from rock to soil, and it is difficult to determine where rock becomes soil. In the weathering profile shown in Fig. 1, the upper three layers (grades VI, V and IV) are more soil-like, while the three layers below are more rock-like. Residual soils are weathered rocks, and the soil relevant for engineering purposes corresponds to grades VI, V and IV [10]. Residual soils have engineering properties that differ from those of sedimentary soil deposits with uniformly distributed grains. Residual soil characteristics vary both laterally and with depth because of differences in weathering intensity. Land movement often occurs in residual soil, especially during the rainy season, owing to the decreasing shear strength of the soil.
Grade I is fresh rock that has not undergone changes in either composition or colour. A slight change in composition occurs in grades II and III, where colour change begins and the material size becomes smaller. Grades IV and V tend to be classified as soil, as more than 50% of the rock mass has changed in composition. Grade VI is the residual soil group, the result of both physical and chemical weathering of the rock. The level of weathering affects the composite material of a residual soil: the higher the weathering grade, the greater the proportion of clay material. The degree of weathering is usually proportional to depth; the greater the depth of a residual soil, the higher its weathering level, and the more the clay component predominates. Therefore, the grain size varies greatly, influenced by the saturation level as well as by the minerals formed. Sand content decreases with increasing degree of weathering, which affects the void ratio and density: both values decrease as the degree of weathering increases. The parent rock contributes to the residual soil composition, while the fine grains present are the result of weathering.
Quaternary volcanic residual soil in Indonesia
As a tropical region, Indonesia has residual soil covering more than two-thirds of its area [1]. This residual soil is derived from Quaternary volcanic rock and generally has the texture of a mixture of clay, silt, and fine sand. The hilly areas occupied by volcanic deposits possess soils of high plasticity [6]. This residual soil also has swelling properties. The expansive (swell-shrink) behaviour is influenced by changes in groundwater content, resulting in volume changes. The effect of soil water content on the engineering properties is indicated by the decrease/change in the shear strength of the soil (internal friction angle and cohesion). The cohesion and friction angle of the soil correlate with the clay content and its capacity to absorb water. In addition, with increasing clay content, the soil mass has a lower density and liquid limit, and the plasticity index increases with increasing water content.
A geotechnical problem arises because changes in the degree of soil saturation lead to low values of shear strength or soil cohesion. The engineering properties of residual soil may differ from those of transported soils with uniformly distributed grains, and can vary greatly, either laterally or with depth, owing to differences in the weathering processes. The weathering process weakens the bonds between the clay particles in the andesite tuff. As a result, the shear strength of the andesite tuff decreases until the weathered material slips.
Technical properties of volcanic breccias
Sampling of breccia residual soil from several regions yielded cohesion values of 0.15–27.4 kN/m² and internal friction angles of 1.36°–21.36°, while the physical property tests show that the soil can be classified as inorganic silt (MH) with low-to-moderate plasticity and low-to-moderate expansion potential, with an activity value <1. The test results can be seen in Table 1.
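The reported cohesion and friction-angle bounds translate into shear strength through the Mohr-Coulomb criterion, τ = c + σₙ tan φ. A minimal sketch (Python; the 100 kPa normal stress is an illustrative assumption, not a value from the study):

```python
import math

def mohr_coulomb_tau(c_kpa: float, phi_deg: float, sigma_n_kpa: float) -> float:
    """Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi)."""
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

# Lower and upper bounds of the breccia residual soil reported above
# (1 kN/m^2 = 1 kPa); sigma_n = 100 kPa is assumed for illustration only.
for c, phi in [(0.15, 1.36), (27.4, 21.36)]:
    tau = mohr_coulomb_tau(c, phi, sigma_n_kpa=100.0)
    print(f"c = {c:5.2f} kPa, phi = {phi:5.2f} deg -> tau = {tau:5.1f} kPa")
```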
Based on the laboratory test results above, it can be seen that the weathering process on the breccia forms residual soil, allowing easy water infiltration owing to the increased porosity and causing a decrease in soil shear strength. The data analysis shows significant differences in the effective cohesion values as well as in the effective internal friction angle of the soil. This can occur because of several influencing factors, such as the degree of weathering and the mineral composition of the breccia, which turns into clay minerals in different proportions, resulting in varied soil engineering properties. In addition, groundwater content, grain distribution and sampling strongly affect the shear strength of the soil.
Conclusions
Based on the research results above, the following conclusions can be drawn. Research on residual soils associated with the weathering of volcanic rock deposits in Indonesia has not yet been widely reported. Soil mass behaviour describes how the soil material responds to an activity; the response of the soil mass is influenced by the geological conditions, especially the hydrological aspects, the type of soil material, and the grain shape and its distribution. The physical properties of the soil mass that contribute most to its shear strength are grain size, gradation, moisture content, porosity, and permeability. The geological aspects that play a role are the type of soil/rock, the direction and slope of the soil/rock layers, the rock structure, the hydrological conditions, and the morphology. Rainfall that seeps into the soil increases the water content of the soil, thereby causing a decrease in its shear strength.
"year": 2018,
"sha1": "0450fa397de4dffd3c1833e63843b2bb291300ed",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/118/1/012041",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "36566c2396b4ed7403f820f211d9f3956556e18d",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
Treatment of Thoracolumbar Spinal Fracture Accompanied by Diffuse Idiopathic Skeletal Hyperostosis Using Transdiscal Screws for Diffuse Idiopathic Skeletal Hyperostosis: Preliminary Results
Study Design This retrospective case series enrolled 13 patients who underwent posterior fixation with both transdiscal screws for diffuse idiopathic skeletal hyperostosis (TSDs) and pedicle screws (PSs) to treat spinal injury accompanied by diffuse idiopathic skeletal hyperostosis (DISH). Purpose To describe the usefulness, feasibility, and biomechanics of the TSD. Overview of Literature Vertebral bodies accompanied by DISH generally have lower bone mineral density than normal vertebral bodies because of the stress shielding effect, which tends to make screw fixation challenging. To our knowledge, solutions for this issue have not previously been reported. Methods Patients were assessed using data on surgical time, estimated intraoperative blood loss, mean number of stabilized intervertebral segments, number of screws used, perioperative complications, union rate, and the three-level EuroQol five-dimensional questionnaire (EQ5D-3L) score at the final follow-up. The Hounsfield unit (HU) values of the screw trajectory area and the actual intraoperative screw insertion torque of TSDs and PSs were also analyzed and compared. Results The surgical time and estimated intraoperative blood loss were 165.9±45.5 minutes and 71.0±53.4 mL, respectively. The mean number of stabilized intervertebral segments was 4.6±1.0. The number of screws used was 4.9±1.3 for TSDs and 3.0±1.4 for PSs. One death occurred after surgery. The union rate and EQ5D-3L scores were 100% and 0.608±0.128, respectively. The HU value and actual intraoperative screw insertion torque of TSDs were significantly better than those of PSs (p<0.001, p=0.033). Conclusions We were able to achieve stable surgical outcomes using the combination of TSDs and PSs. The HU value and actual intraoperative screw insertion torque were significantly higher for TSDs than for PSs. Based on these results, when treating thoracolumbar spinal fractures accompanied by DISH in elderly populations, the TSD could be a stronger anchor than the PS.
Introduction
Diffuse idiopathic skeletal hyperostosis (DISH) is characterized by spinal fusion between the vertebral bodies, including the anterior and posterior elements, resulting from bone hyperplasia and involving the osteophytes and bone bridges that arise from the anterior and posterior elements of the vertebral bodies. Morphologically, this results in the spine appearing as a single pillar. Accordingly, spinal fracture in an ankylosing spine, such as that seen in DISH, results in a three-column injury that is likely to cause greater instability of the fracture site and is more likely to lead to spinal cord injury than spinal fracture in a normal spine. Therefore, early diagnosis of the injury and rigid spinal fixation are recommended [1,2]. Although it is commonly thought that the bone mineral density of vertebral bodies in the spine with DISH is elevated because of the osteophytes and bone bridges that develop outside the vertebral bodies, stress shielding arising from the ankylosing spine can reduce the bone mineral density within the vertebral column [3]. To solve this problem, we propose a new concept for a screw insertion technique for posterior spinal fixation. The goal of this technique is to secure rigid fixation of the screw by penetrating two vertebral endplates, even in the osteoporotic spine accompanied by DISH in elderly patients. Here we report the usefulness and preliminary clinical results of this screw insertion technique, referred to as insertion of a transdiscal screw for diffuse idiopathic skeletal hyperostosis (TSD).
Patient population and clinical and biomechanical outcome analyses
A total of 212 patients who were admitted to Kagawa Prefectural Central Hospital for the treatment of thoracolumbar spine fracture from April 2016 through March 2018 were assessed retrospectively based on their medical records. Patients aged ≥65 years with spinal fracture in the form of a three-column injury accompanied by DISH, who underwent posterior fixation using a combination of TSDs and pedicle screws (PSs) in the range of two vertebral bodies superior and inferior to the fracture level, and who could be followed up for at least 1 year postoperatively, were enrolled. Thirteen patients were finally identified and investigated in this study. All surgeries in this series were performed by one senior spine surgeon. Bone grafting of the posterior or anterior portion and vertebroplasty were not performed in this series. We evaluated the patients' demographic data, including age, sex, injury level, preoperative Frankel grade, and mean follow-up period; the surgery-related parameters of operative duration, estimated intraoperative blood loss, number of spinal segments stabilized, number of screws used, and perioperative complications were also recorded. Postoperative follow-up was performed once every 3 months. At the final follow-up, we examined patient status by assessing the Frankel grade, bone union rate, and health-related quality of life (three-level EuroQol five-dimensional questionnaire [EQ5D-3L]) [4]. Bone union was assessed using computed tomography (CT) images (sagittal, coronal, and transverse planes) obtained once every 6 months, and was considered present when trabecular bone continuity and sufficient bone bridge formation were observed at the fracture site.
To evaluate the degree of screw fixation, we measured the Hounsfield unit (HU) value of the screw trajectory using the method recently reported by Matsukawa et al. [5], who showed that fixation stability can be predicted by calculating the HU of the planned screw trajectory using preoperatively obtained CT data. The maximum intraoperative screw insertion torque was also measured using a dedicated measuring instrument.
Transdiscal screw for diffuse idiopathic skeletal hyperostosis
A transdiscal screw (TS) that passes through the vertebral endplate was first reported in 2003 by Minamide et al. [6] in a biomechanical study for L5/S spondylolisthesis. The TS construct was 1.6-1.7 times stronger than the conventional PS construct, thereby demonstrating the usefulness of TSs [6]; unlike TSs, TSDs are used for the osteoporotic spine accompanied by DISH occurring from the thoracic to the lumbar spine. The aim of TSD insertion is to achieve better screw fixation by passing through two vertebral endplates, even in patients who have an osteoporotic spine. The vertebral endplate can offer improved screw fixation because of the subchondral bone around the vertebral endplate.
Transdiscal screw for diffuse idiopathic skeletal hyperostosis insertion technique
The entry point of the TSD is just below the pedicle (toward the 5 o'clock position for the right pedicle and toward the 7 o'clock position for the left pedicle) and can be confirmed in the anteroposterior view using fluoroscopy. For the lumbar spine in particular, the entry point of the TSD is around the accessory process. Regarding the trajectory of the TSD, the screw passes through the pedicle in a cranial direction to penetrate two vertebral endplates: the cranial endplate of the vertebra the screw entered through and the caudal endplate of the cranially adjacent vertebral body (Fig. 1). When making the screw trajectory in the pedicle, the surgeon always needs to take care not to penetrate the caudal side of the pedicle, which would cause radiculopathy. The length of the TSD is chosen with the aim of penetrating the two vertebral endplates as far as possible without penetrating the anterior surface of the vertebral body. We utilized a percutaneous pedicle screw (PPS) for the spinal implant to minimize surgical invasiveness in all study subjects. If a guidewire is prepared for screw insertion in advance, it can enable the PPS to pass through the two vertebral endplates. We commonly used intraoperative CT navigation for TSD insertion because most patients with DISH tend to have severe spinal kyphosis and ossification surrounding the vertebral body, making it difficult to determine the entry point and trajectory for TSD insertion using fluoroscopy alone. In cases with a small amount of spinal kyphosis and ossification surrounding the vertebral body, TSD insertion under fluoroscopy alone may be possible. Lateral-view fluoroscopy was used to check for navigation error during surgery in all study subjects.
Hounsfield unit calculations
Using CT image data, we calculated the HU values of the TSD and conventional PS trajectories. To perform image analysis on the preoperative CT scans, we used ZedView VEGA (LEXI Co. Ltd., Tokyo, Japan), in which a virtual screw was placed according to the TSD and PS trajectories. The PS adopted a trajectory along the anatomical axis of the pedicle, reaching the area just before the anterior wall of the vertebral body. The trajectory of the TSD passed through the pedicle to the posterior third of the cranial endplate of the vertebral body as well as the caudal endplate of the adjacent vertebral body, reaching the area just before the anterior wall of the adjacent vertebral body. Thereafter, the screw trajectory was selectively extracted on the CT image, and the HU value of the screw trajectory was calculated as the sum of the mean HU values of each 1-mm segment along the screw axis within the largest circle that could be delineated within the pedicle (Fig. 2). Using this method, the HU measurements of the TSD and PS trajectories were calculated and compared, and the maximum screw diameter and maximum screw length capable of insertion were measured on the CT images.
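A minimal sketch of the trajectory scoring described above (Python/NumPy; the voxel data below are synthetic placeholders, since the per-segment HU values are not given in the paper, and the function name is ours):

```python
import numpy as np

def trajectory_hu_score(segments: list) -> float:
    """Sum of the mean HU values of each 1-mm segment along the screw axis.
    Each entry holds the HU values sampled within the largest circle that
    fits inside the pedicle for that 1-mm slice, as described above."""
    return float(sum(np.mean(seg) for seg in segments))

# Hypothetical 45 mm trajectory -> 45 one-millimetre segments of HU samples.
rng = np.random.default_rng(0)
segments = [rng.normal(loc=250.0, scale=40.0, size=60) for _ in range(45)]
print(f"trajectory HU score: {trajectory_hu_score(segments):.0f}")
```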
Maximum intraoperative screw insertion torque measurements
We measured and compared the maximum torque during TSD and PS insertion in the same patient in our case series. We used Precept (NuVasive Inc., San Diego, CA, USA) as the TSD, inserted percutaneously in all patients. Torque measurements were performed using a DTDK-CN500REV (Nakamura Mfg. Co. Ltd., Sagamihara, Japan). After a specialized screwdriver for measuring screw torque was attached to the screw, we measured the maximum torque using the continuous mode of the DTDK-CN500REV. Measurements were taken from the point of screw insertion until the screw head came into contact with the bone surface and the screw could no longer rotate; the maximum value up to this point was recorded as the maximum torque. Furthermore, we calculated the mean diameter and mean length of the screws actually inserted.
Statistical analysis
The screw torque and HU measurements were statistically examined using unpaired t-tests. A p-value <0.05 was considered statistically significant. In the present study, we used y-stat 2006 for Macintosh (Igakutosho Shuppan Co. Ltd., Toda, Japan) to perform the statistical analyses. This investigation was performed in accordance with our institutional guidelines, which comply with international laws and policies (Institutional Review Board of Kagawa Prefectural Central Hospital, IRB approval no. 802). Informed consent was obtained from all individual participants included in this study.
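As a minimal sketch of the unpaired t-test comparison described above (Python with SciPy rather than the y-stat package used in the study; the torque values below are hypothetical, since per-screw measurements are not listed in the paper):

```python
from scipy import stats

# Hypothetical per-screw maximum insertion torques (N*m), for illustration
# only; the paper reports only the group-level comparison.
tsd_torque = [1.9, 2.3, 2.1, 2.6, 2.4, 2.0, 2.2]
ps_torque = [1.4, 1.7, 1.5, 1.9, 1.6, 1.3, 1.8]

t_stat, p_value = stats.ttest_ind(tsd_torque, ps_torque)  # unpaired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at the 5% level" if p_value < 0.05 else "not significant")
```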
Clinical outcomes
The demographic data of the patients showed that their mean age was 81.2±7.5 years; the study population included 11 men and two women. The mean follow-up duration was 17.08±4.4 months, and the mean body mass index was 23.2±4.1 kg/m². The preoperative neurological status was classified as Frankel E for all patients. The injury mechanism was most commonly minor trauma, such as a fall at the same level, and there was a high incidence of injuries at the thoracolumbar junction (Table 1). The mean surgical time and mean estimated intraoperative blood loss were 165.9±45.5 minutes and 71.0±53.4 mL, respectively. The mean number of spinal segment levels stabilized was 4.6±1.0, and the mean number of screws used was 8.0±0.2 (4.9±1.3 for the TSDs and 3.0±1.4 for the PSs). The perioperative complications included one death caused by chronic heart failure 1 month after surgery; however, no implant-related complications were observed during the follow-up. At the time of the final follow-up, the Frankel grade was E in all patients, all patients had achieved bone union, and the mean EQ5D-3L score was 0.608±0.128. Over the course of the follow-up, there was no clear loss of correction, screw loosening, or screw pull-out (Tables 2, 3).
Hounsfield unit values
For the initial six patients (five men and one woman, with a mean age of 82.6±3.7 years) in our series, we used preoperative CT to measure the HU values of the TSD trajectories (96 screws) and PS trajectories (96 screws) in 48 vertebral bodies (37 thoracic vertebrae and 11 lumbar vertebrae) affected by DISH. The mean HU value of the TSD trajectory was 13,277±3,734, while the mean HU value of the PS trajectory was 9,114±3,331, indicating a higher mean value for TSDs (p<0.001). The maximum diameter for screw insertion was 5.7±0.9 mm for the TSDs and 5.7±0.9 mm for the PSs, indicating no significant difference. The maximum screw length was 50.6±4.0 mm for the TSDs and 44.8±5.3 mm for the PSs, indicating that significantly longer TSDs than PSs could be inserted (p<0.001) (Table 4).
Maximum intraoperative screw insertion torque
We were able to measure the maximum torque at the time of screw insertion during surgery for 33 screws; the maximum insertion torque was significantly higher for the TSDs than for the PSs (p=0.033).
Case presentation
The subject was a 78-year-old man (patient number 2) who was injured after being hit by a compact car while walking. He had severe back pain at the first visit. Disc space opening was found at T5/6 on the CT scan. The thoracic spine, including the injured level, showed extensive DISH both cranially and caudally, and the amount of trabecular bone in the anterior to middle columns around the fracture site was significantly reduced. Additionally, magnetic resonance imaging (MRI) revealed signal intensity changes in the disc space at T5/6 and around the posterior elements at the same level. A three-column injury was therefore assumed to be present at T5/6. Furthermore, spinal canal stenosis was found on MRI at the injury level; however, no clear neurological deficit was observed. Posterior fixation from T4 to T7 (three levels) using a combination of TSDs and PSs was performed in the lateral decubitus position to maintain spinal alignment. Ten months after surgery, sufficient bone bridging and union were achieved, and no loosening of the implants was observed. At the final follow-up (21 months after surgery), he had no back pain and no difficulties with daily living (Fig. 3).
Discussion
Spinal fracture accompanied by DISH often occurs with severe spinal instability because of three-column injury. Therefore, displacement of the fracture site can easily occur, resulting in delayed spinal injury [3]. Furthermore, the concentration of stress on the fracture site because of the long lever arm of the ankylosed spine can delay bone union and eventually cause pseudoarthrosis [7]. Thus, spinal fixation using spinal instrumentation should be performed as early as possible for spinal fractures accompanied by DISH [1,2]. For PS insertion into the vertebrae accompanied by DISH, especially in elderly populations, adequate PS stability cannot be achieved because of the considerable loss of bone quality. According to Reinhold et al. [8], spinal columns accompanied by DISH, particularly in cases involving elderly populations, exhibit continuous bone bridging hyperplasia on the outside of the vertebral bodies, while cancellous bone progressively weakens inside the vertebral body. Moreover, Westerveld et al. [9] reported that stress shielding occurs in areas with ossified ligaments outside the vertebral body caused by hyperplasia and thickening. Therefore, the bone density within these vertebral bodies is often significantly lower than that within general vertebral bodies [9]. Furthermore, in spinal columns with many segments affected by ankylosing spondylitis, the lever arm becomes longer; thus, bone fracture can easily occur with trivial injury mechanisms. Therefore, the spinal instrumentation around the fracture site can create a stress concentration following spinal fixation [10]; this should be kept in mind by the spine surgeon.
Spinal instrumentation for spinal fracture accompanied by DISH is generally performed via the posterior approach rather than the anterior approach because of the aforementioned problems of the osteoporotic vertebral body and the lever arm. The posterior approach offers the following advantages over the anterior approach: various fixation methods can be selected, depending on the spinal instrument used, and posterior spinal fixation can be performed simultaneously with nerve decompression [2]. Many reports investigating the use of screws have focused on PSs, with which the extent of fixation is determined by stress concentration and bone fragility and ranges from one vertebra up and one vertebra down to at least three vertebrae up and three vertebrae down. In recent years, an increasing number of studies have reported on screw pullout and loosening occurring with a fixation range of up to two vertebrae up and two vertebrae down, which may ultimately cause construct failure; many of these reports have recommended posterior fixation with PSs over a range of at least three vertebrae up and three vertebrae down [11,12].
The TSD is characterized by the screw passing through the endplates of two adjacent vertebral bodies. Therefore, we think that TSD insertion is a specialized technique only for spinal columns accompanied by DISH, wherein the adjacent segments are ankylosed and the flexibility of the spinal column is remarkably reduced. Our results showed that the HU value of the TSD trajectory was significantly higher than that of the PS trajectory and that the maximum intraoperative screw insertion torque for TSDs was significantly higher than that for PSs. Therefore, we believe that TSDs can be a stronger anchor than conventional PSs even in cases of the osteoporotic spines.
According to anatomical studies of the vertebral endplates from the thoracic spine to the lumbar spine, the central to anterior portion is the weakest region, while the posterolateral portion is the strongest [13,14]. Therefore, it is important to avoid an excessively large screw insertion angle in the axial direction and to insert the TSD along the axis of the pedicle as much as possible, enabling the screw to hold the posterolateral portion of the vertebral endplate. In recent years, there have been increasing reports of the use of the posterior approach and PPSs for minimizing surgical invasiveness. In many of these reports, although posterior spinal fixation with PPSs was performed over an extensive range of three vertebrae up and three vertebrae down, the estimated intraoperative blood loss and rate of perioperative complications were lower than those with conventional open surgery [10]. The TSD is essentially based on the use of the PPS, with which we could reduce surgical invasiveness, achieving a mean surgical time and estimated intraoperative blood loss of 165.9±45.5 minutes and 71.0±53.4 mL, respectively. According to the literature, spinal injury accompanied by DISH often occurs in the elderly population, with a mean age of 68-81.5 years, and the perioperative mortality rate is relatively high, at approximately 10%-30% [9,10,15]. Caron et al. [11] performed posterior fixation over at least three vertebrae up and three vertebrae down with conventional open surgery and reported good surgical results; however, the perioperative mortality rate was approximately 30%, and surgical wound complications occurred in 12% of the patients. Our patients also represented an elderly population, with a mean age of 81.2±7.5 years; one patient (7.6%) died after surgery due to preexisting heart disease. The mortality rate remained high even though a minimally invasive technique was used for these elderly patients. Therefore, careful attention is required for the successful treatment of these patients.
The TSD can pass through the disc space to reach the adjacent vertebral body; therefore, the adjacent vertebral body is simultaneously fixed with one screw. Thus, by inserting TSDs continuously into the adjacent vertebral bodies, a total of four screws can be inserted into the cranially adjacent vertebral body: two screws from the caudally adjacent vertebra are inserted into the anterior column, and two other screws from the involved vertebra are inserted into the posterior to middle columns. This can be considered a characteristic feature of the TSD. It means that the local stability of the craniocaudal levels around the fracture site can be secured even with a small number of spinal instruments. Therefore, compared with conventional posterior fixation with PSs, the TSD has the potential to reduce the number of spinal instruments per case (Fig. 1). This ability of the TSD can help decrease surgical invasiveness and the mortality rate. Although there is a possibility that the local stress may be increased by the insertion of four screws into one vertebral body, we believe that TSDs inserted into the craniocaudal vertebrae around the involved vertebra can decrease this local stress. If there is no vertebral endplate injury at the level of the vertebral body fracture, this endplate can also be used for TSDs.
This study has certain limitations. The relatively low incidence of spinal injury accompanied by DISH in clinical practice resulted in the small sample size of this study. In the future, additional examinations with a larger study population and longer follow-up periods are warranted. Furthermore, as our biomechanical study only examined the maximum intraoperative screw insertion torque, further studies might be warranted to examine the TSD pullout strength, the strength of a construct with a rod connected, and the stress loading on the adjacent segments. The combination of TSDs and PSs was evaluated in this study; when we started to use the TSD, we considered that a claw-like configuration combining TSDs and PSs would give better pullout strength for the screw-and-rod construct. However, given our evaluation of the HU measurements of the screw trajectories and the screw insertion torque during surgery, there is the potential that an all-TSD construct could be recommended, and this issue requires continued investigation in the future. Nevertheless, we found that TSDs can be a stronger anchor than conventional PSs for fractures of the thoracolumbar spine accompanied by DISH, especially in elderly patients with severe osteoporosis.
Conclusions
We reported the surgical results of 13 elderly patients who underwent posterior fixation using TSDs combined with PPSs for the treatment of fractures of the thoracolumbar spine accompanied by DISH, and performed a biomechanical study of TSDs. Good surgical results were achieved using TSDs combined with PPSs for posterior fixation, and utilizing PPSs with TSDs could reduce surgical invasiveness. The HU value of the screw trajectory and the maximum intraoperative screw insertion torque were significantly higher for TSDs than for PSs. These results suggest that TSDs could be a stronger anchor than PSs for spinal columns with low bone density and a loss of flexibility because of DISH.
"year": 2020,
"sha1": "937a44d077934fe953efa05f9f4a59af833e3233",
"oa_license": "CCBYNC",
"oa_url": "https://www.asianspinejournal.org/upload/pdf/asj-2020-0089.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8134b9c87d2bdc9ac78128ed2f24fbe8c2e3d11b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A novel approach to develop wheat chromosome-specific KASP markers for detecting Amblyopyrum muticum segments in doubled haploid introgression lines
Many wild relative species are being used in pre-breeding programmes to increase the genetic diversity of wheat. Genotyping tools such as single nucleotide polymorphism (SNP)-based arrays and molecular markers have been widely used to characterise wheat-wild relative introgression lines. However, due to the polyploid nature of the recipient wheat genome, it is difficult to develop SNP-based KASP markers that are codominant for tracking introgressions from the wild species. Previous attempts to develop KASP markers have involved both exome- and PCR-amplicon-based sequencing of the wild species, but chromosome-specific KASP assays have been hindered by homoeologous SNPs within the wheat genome. This study involved whole genome sequencing of the diploid wheat wild relative Amblyopyrum muticum and the development of a SNP discovery pipeline that generated ∼38,000 SNPs in single-copy wheat genome sequences. New assays were designed to increase the density of Am. muticum polymorphic KASP markers. With a goal of one marker per 60 Mbp, 335 new KASP assays were validated as functional. Together with assays validated in previous studies, 498 well-distributed chromosome-specific markers were used to recharacterize previously genotyped wheat-Am. muticum doubled haploid (DH) introgression lines. The chromosome-specific nature of the KASP markers allowed clarification of which wheat chromosomes were involved in recombination events or substituted by Am. muticum chromosomes, and the higher density of markers allowed the detection of new small introgressions in these DH lines. Key Message: A novel methodology to generate chromosome-specific SNPs between wheat and its wild relative Amblyopyrum muticum, and their use in the development of KASP markers to genotype wheat-Am. muticum introgression lines.
Declarations
Funding

… Institute while this work was carried out. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Availability of data and material
Raw reads data for Am. muticum has been made available through the Grassroots data repository hosted by the Earlham Institute and funded by the DFW programme (https://opendata.earlham.ac.uk/wheat/under_license/toronto/Grewal_et_al_2021-09-13_Amybylopyrum_muticum/). All DH lines used in this study are available through the …
INTRODUCTION
Bread wheat (Triticum aestivum L., 2n = 6x = 42, AABBDD) is one of the most widely grown and consumed crops worldwide. After two spontaneous interspecific hybridisation events (Dvořák et al. 1993) …

Genotyping of introgression lines is more efficient when using wild-relative genome-specific SNPs. The Axiom® Wheat-Relative Genotyping SNP Array was developed … The genotyping procedure was as described by Grewal et al., performed using a QuantStudio 5 (Applied Biosystems), and the data analysed using the QuantStudio™ Design and Analysis Software V1.5.0 (Applied Biosystems).
Multi-colour Genomic in situ Hybridisation (mc-GISH)
Preparation of the root-tip metaphase chromosome spreads, the protocol for mc-GISH and the image capture were as described in Grewal et al. (2020b). Briefly, genomic DNA from T. urartu (to detect the A-genome), Aegilops speltoides (to detect the B-genome), Aegilops tauschii (to detect the D-genome) and Am. muticum was isolated as described above. The genomic DNA of (1) T. urartu was labelled by nick translation with ChromaTide™ Alexa Fluor™ 488-5-dUTP (Invitrogen; C11397; coloured green), (2) Ae. … for counterstaining all slides. Metaphases were detected using a high-throughput, fully automated Zeiss Axio ImagerZ2 upright epifluorescence microscope (Carl Zeiss Ltd., Oberkochen, Germany). Image capture was performed using a MetaSystems Coolcube 1m CCD camera, and image analysis was carried out using Metafer4 (automated metaphase image capture) and ISIS (image processing) software (Metasystems GmbH, Altlussheim, Germany). … robust, and their positions on the wheat chromosomes are indicated in Fig. 2d.
Results
In total, 498 well-distributed, chromosome-specific KASP markers (Online Resource 3), polymorphic between wheat and Am. muticum, were used for downstream genotyping of introgression lines. Fig. 2e shows a line plot of the physical distance between these markers in wheat, where each gridline of the y axis represents a 10 Mb physical distance on a chromosome. The distance between the markers ranged from just 3 bases to ~82.5 Mb, with an average distance of 26 Mb. The average distance between the tip of the short arm and the first marker on the arm was 2.9 Mb, while that from the last marker to the end of the long arm was 2.3 Mb. There were only seven instances where the gap between two KASP markers exceeded the desired 60 Mb, and these are shown with a red stroke in the line in Fig. 2e. All these gaps were due to the poor availability of SNPs within the desired bin, as shown by the corresponding SNP density plot (Fig. 2b).
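A minimal sketch of the gap check behind Fig. 2e (Python; the marker coordinates below are hypothetical, chosen only to produce one gap above the 60 Mb target):

```python
def flag_large_gaps(positions_bp, max_gap_bp=60_000_000):
    """Return (start, end, size) for consecutive marker pairs whose spacing
    exceeds the target density of one marker per 60 Mb."""
    ordered = sorted(positions_bp)
    return [(a, b, b - a) for a, b in zip(ordered, ordered[1:])
            if b - a > max_gap_bp]

# Hypothetical marker positions (bp) on one chromosome:
markers = [2_900_000, 28_000_000, 55_000_000, 140_000_000, 168_000_000]
for start, end, size in flag_large_gaps(markers):
    print(f"gap of {size / 1e6:.1f} Mb between {start:,} and {end:,}")
# gap of 85.0 Mb between 55,000,000 and 140,000,000
```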
Validation of KASP markers through genotyping of introgression lines
… specific cases of aneuploidy in some DH lines, supporting the GISH observations but also suggesting disparities with previously reported results.

Table 1 Details of the type of introgression and its code (as indicated in Fig. 2g)

… still present in these lines (Fig. 3a). However, GISH indicated that the 2T was potentially introgressed as a whole chromosome, due to the presence of Am. muticum telomeric repeat signals on both ends of this introgression (Fig. 3d). If the 2AS segment had recombined with 2T or translocated onto another wheat chromosome, it would not be visible via GISH due to its small size.

The markers also showed that DH lines 16-21 had a small 6T segment (up to 10 Mb) on the distal end of 6DL (6T.D2; Fig. 3a), which was not previously detected by the Axiom array and is not visible by GISH. Genotyping analysis of four other DH lines, 62, 71, 74 and 348, showed that in addition to the 4T.D2 segment, a very small segment (up to 20 Mb) from 7T was present at the distal end of 7AS (7T.A1; Fig. 3b), which had not been detected before in these lines. This very small segment on the distal end of chromosome 7AS was also detected by GISH in this study (Fig. 3e). The KASP markers were also able to detect another small segment (between 20 and 30 Mbp) from chromosome 5T in DH lines 121 and 122 (5T.D1; Fig. 3c). Due to its slightly bigger size, this Am. muticum segment can be viewed by GISH on the distal end of chromosome 5DL in DH-122, as shown in Fig. 3f.
Discussion
Previous studies have reported chromosome-specific KASP markers between wheat and Am. muticum (Grewal et al. 2020a) and other wild relative species (Grewal et al. 2020b; Grewal et al. 2021), which have been used for genotyping wheat-wild relative introgression lines. The objective of this work was to fill in the gaps with more KASP markers to increase the efficiency of genotyping, using an approach that involved faster SNP discovery and a more robust, chromosome-specific assay design than the ones reported in previous studies. In this work, we produced ~38K SNPs between wheat and its wild relative Am. muticum in single-copy regions of the wheat genome and then converted some of these into wheat chromosome-specific KASP markers. In combination with previously designed chromosome-specific KASP markers, a new set of well-distributed markers was obtained and used to re-genotype wheat-Am. muticum DH introgression lines, to validate the functionality of these markers as efficient genotyping tools and to detect as many Am. muticum introgressions as possible.

A recently developed set of KASP markers (Set 2) was tested on Am. muticum accessions in this study, but only 5.8% of the 224 assays were found to be polymorphic with wheat. This was as expected, since this set of markers was originally developed to detect T. urartu introgressions in a wheat background (Grewal et al. 2021). When the 13 KASP markers were added to the 150 Am. muticum KASP markers developed during the original study (Grewal et al. 2020a), numerous gaps between markers were still present (Fig. 2c), preventing a uniform spread of markers able to detect Am. muticum introgressions across the whole of the wheat genome.
SNP Discovery
A major bottleneck at this stage was the lack of SNPs between wheat and Am. muticum that could be converted to KASP markers in regions that lacked an existing assay. With the advent of cheaper sequencing costs, it was possible to sequence the wild relative species to gain abundant SNPs for KASP assay design, some of which would be polymorphic between the species. However, in polyploid crops like bread wheat, it is challenging to generate chromosome-specific KASP assays able to distinguish heterozygous from homozygous individuals (co-dominant SNPs), and this requires extensive validation (Allen et al. …). The approach here was to find SNPs in single-copy regions of the wheat genome using bioinformatic tools, resulting in ~38K SNPs, each specific to a wheat chromosome (Fig. 1). When the wheat genome assembly RefSeq1.0 was published …

… (Fig. 3a-c). The chromosome-specificity of the KASP markers allowed detection of the wheat chromosome that was involved in the recombinant chromosome or that had been substituted. Thus, it was observed that in DH lines 15 and 16, 2T.A1 was a whole chromosome that had replaced both 2A chromosomes, rather than having recombined with B genome chromosomes as previously reported. Where possible, given the size of the introgression, some of these results were validated by mcGISH in this work (Fig. 3d-f).

The chromosome-specificity of these KASP markers also allowed the detection of a number of wheat chromosome deletions in the DH lines, as shown in Table 1. However, these were limited to the detection of homozygous deletions, which included whole wheat chromosomes as well as segments (both large and small) of wheat chromosomes. In this context, one of the main observations involved the 15 sister DH lines (codes between DH-124 to 147 and DH-355 to 357), which showed that the pair of 1B chromosomes had been deleted in these lines except for a small segment at the distal end of 1BL. We propose that it was this 1BL segment that had translocated/recombined with a pair of A genome chromosomes, most likely chromosomes 1A. These lines also have 16 A genome chromosomes (King et al. 2019), so it is possible that the pair of 1A-1BL recombinant chromosomes is present in addition to the pair of 1A chromosomes, since the KASP markers at the distal end of 1A do not indicate the absence of any of the 1A wheat alleles.
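The core filtering idea of keeping only SNPs that fall inside single-copy intervals can be sketched as follows (Python; the interval and SNP data are hypothetical, and the real pipeline used dedicated bioinformatic tools rather than this toy intersection):

```python
import bisect

def snps_in_single_copy_regions(snps, regions):
    """Keep only SNPs that fall inside single-copy intervals.
    `snps`    : list of (chrom, pos) tuples
    `regions` : dict chrom -> sorted, non-overlapping list of (start, end)
    A sketch of the filtering idea only, not the paper's actual pipeline."""
    kept = []
    for chrom, pos in snps:
        starts = [s for s, _ in regions.get(chrom, [])]
        i = bisect.bisect_right(starts, pos) - 1  # candidate interval index
        if i >= 0 and regions[chrom][i][0] <= pos <= regions[chrom][i][1]:
            kept.append((chrom, pos))
    return kept

regions = {"1A": [(100, 500), (1_000, 2_000)]}       # hypothetical intervals
snps = [("1A", 150), ("1A", 800), ("1A", 1_500), ("2B", 10)]
print(snps_in_single_copy_regions(snps, regions))    # [('1A', 150), ('1A', 1500)]
```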
Conclusion
Unlike previous work that relied on PCR-based amplicon sequencing (Grewal et al. 2020a), this method of generating SNPs between wheat and Am. muticum in single-copy regions of the wheat genome, made possible by whole genome sequencing of the wild species, is rapid and allows for the development of chromosome-specific KASP assays. A variety of wild relative species are being used to increase the genetic diversity in hexaploid wheat. This approach can therefore be applied to other wheat wild relative species for SNP discovery, highlighting the need for greater investment in whole genome sequencing of these wild species. These KASP markers have greatly increased our capability to characterise, screen and identify both introgressions and wheat chromosomal aberrations in wheat-wild relative introgression lines. However, it is important to note that their efficiency depends on their density across the wheat genome, and small introgressions lying between two KASP markers could have gone undetected. With the reducing cost of DNA sequencing, we envisage that the next improvement in the characterisation of such introgressions, with the potential to give higher resolution, will be low-coverage whole genome resequencing of wheat-wild relative introgression lines.
"year": 2021,
"sha1": "13af90f2f386a2b40fe107c115e90accec60fc3d",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/10/01/2021.09.29.462370.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "13af90f2f386a2b40fe107c115e90accec60fc3d",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Quantum delocalization of strings with boundary action in Yang-Mills theory
A. S. Bakry, M. A. Deliyergiyev, A. A. Galal, A. M. Khalaf, M. N. Khalil, and A. G. Williams

Institute of Modern Physics, Chinese Academy of Sciences, Gansu 730000, China
Institute of Physics, The Jan Kochanowski University in Kielce, 25-406, Poland
Department of Physics, Al Azhar University, Cairo 11651, Egypt
Department of Mathematics, Bergische Universität Wuppertal, 42097, Germany
Department of Physics, University of Ferrara, Ferrara 44121, Italy
Research and Computing Center, The Cyprus Institute, Nicosia 2121, Cyprus
Center for Excellence for Particle Physics at the Tera Scale, University of Adelaide, 5005 SA, Australia

(Dated: January 9, 2020)
The width of the quantum delocalization of QCD strings is investigated in effective string models beyond the free Nambu-Goto (NG) approximation. We consider two Lorentzian-invariant boundary terms in the Lüscher-Weisz string action, in addition to the self-interaction terms equivalent to two-loop order in the NG string action. The geometrical terms which encode the possible rigidity of the QCD string are scrutinized as well. We perform the numerical analysis on the 4-dim pure SU(3) Yang-Mills lattice gauge theory at two temperature scales near the deconfinement point. The comparative study with this QCD string model targets the width of the energy profile of the static quark-antiquark system for colour source separations 0.5 ≤ R ≤ 1.2 fm. We find that including the symmetry effects of the boundary action and the rigidity properties in the string model reproduces a good match with the profile of the Monte Carlo data of the QCD flux tube.
I. INTRODUCTION
Understanding the confining force in elementary particle physics is essential for modeling hadron structure. It remains, however, intractable to derive an analytic form for the binding forces between quarks in the nonperturbative region of QCD from first principles.
Perturbative QCD provides a good description of the short-distance aspects of the QQ potential as a two-body (Coulombic) one-gluon-exchange (OGE) interaction potential [1]. Discussions concerning intermediate and large distances, however, are usually carried out either on phenomenological grounds, making use of the strong-coupling expansion [2,3], or via lattice simulations.
The effective description is expected to hold over distance scales of order 1/Tc [21], where the effects of the intrinsic thickness of the flux tube diminish. Many gauge models have accurately verified the Lüscher correction to the potential [22-27].

Many features of the fine structure of the profile of the QCD flux-tube at high temperature and relatively short distances [68-72] are hoped to be compatible with modifications dictated by considering effects beyond the free approximation. The delicate effects emerging from the symmetry breaking near the edges, or from the string's resistance to bending, in other words its possible rigidity [73-75], suggest, respectively, boundary terms and/or geometrical terms suppressing the sharp fluctuations in the LW action.

The broadening profile of the energy field ought to receive similar corrections from these Lorentz-invariant boundary terms in the action. The contribution of the boundary action to the width profile was calculated recently in Ref. [81]. The modification to the mean-square width around the free NG string is evaluated using a perturbative expansion of two boundary terms at the orders of four and six derivatives.
Recent observations concerning the simulation of baryonic flux tubes [50-52] indicate resistance to bending, or repulsion among the flux lines, at a junction of the network, such that the angles between the three flux tubes are kept equally divided into 120°. This would suggest that sharply-creased worldsheet configurations are energetically unfavorable. This interpretation in terms of self-repulsion, or a resistance-to-bending nature, appears within the vortex-line picture [82,83] of the confining string as well.

The smooth string model rigorously preserves the fundamental properties of QCD of ultraviolet (UV) freedom and infrared (IR) confinement [73,74]. The model is consistent with glueball formation [84,85] and a real QQ potential [75,86,87], with a possibly tachyon-free spectrum [88] above some critical coupling [85].

The assumption of smooth QCD strings has consequences that could clearly manifest at the intermediate distances at high temperatures or for the excited states. Though not dominating the IR region, rigidity effects have been reported [69,70,76] in recent numerical simulations of abelian and non-abelian gauge groups.
Recently, a set of analytic solutions targeting the profile of the flux-tube has appeared in the literature [41,81,89-91]. The target of the present paper is to examine in detail the physical implications of each string model against the lattice numerical data.

The outline of the paper is as follows: Section II reviews the string models most relevant to QCD, corresponding to the energy width in different approximation schemes. Section III lays out the numerical discussion of the lattice data with various combinations of the effective string models. In the last section, we provide concluding remarks.
II. STRING ACTIONS AND ENERGY WIDTH
It is a long-standing conjecture [92,93] that the gluonic field may condense into a thin string-like object that admits a long-string description. The string may follow from the intuitive picture that the chromo-electric field is squeezed by the dual Meissner effect, similar to an Abrikosov line in the superconductivity scenario [24,94-99] of the QCD vacuum.

The dynamical description of the effective string is based on a low-energy effective Lagrangian respecting the symmetries of the system. The classical long-string solution, however, breaks the translational invariance of the Yang-Mills vacuum, leading to the generation of transverse massless Goldstone bosons [100,101].

The Lüscher-Weisz effective action includes all massless fields, which are necessarily derivatives so as to impose the translational invariance, and are expressed in the physical gauge [102,103]. The Lorentz invariance of the LW action is realized nonlinearly, since the worldsheet gauge diffeomorphism is fixed to the static/physical gauge.
The Lüscher and Weisz [38] (LW) effective action up to the four-derivative term is given by Eq. (1), where S_cl is the classical term and the operators X^µ(ζ0, ζ1) define the mapping from C ⊂ R² into R⁴, taken with a Euclidean signature. The geometrical invariants R and K are the Ricci scalar and the extrinsic curvature [73,74] of the worldsheet configuration, respectively. The LW action Eq. (1) encompasses built-in surface/boundary terms to account for an open string with boundaries. The boundary action S_b is located at the boundaries ζ1 = 0 and ζ1 = R. The kinematic couplings c1, c2 are not independent: they are subject to the constraint Eq. (2), which follows from requiring the action to be invariant under SO(1, D − 1) Lorentz transformations of the string collective variables X^i [78,104].

The Nambu-Goto action, however, is the simplest form of string action and is proportional to the area of the worldsheet. With the above condition Eq. (2), the first two terms in the LW action coincide with the leading-order and next-to-leading-order terms of the NG action, Eqs. (3) and (4), respectively.
The quantum delocalization of the string-like flux-tube around its classical configuration results in an energy-distribution profile along the line connecting the two color charges. The second moment of the transverse fluctuations typically characterizes the mean-square width of the string, Eq. (5), where ζ = (ζ1, iζ0) is a complex parameterization of the cylindrical worldsheet of surface area R × L, and S_eff denotes a general effective string action. Dirichlet boundary conditions, and periodic boundary conditions in ζ0 with period L_T, correspond to Eq. (6). The leading-order perturbative solution of Eq. (5), subject to the boundary conditions Eq. (6), reveals the famed logarithmic divergence of the width at the middle of the string at zero temperature, Eq. (7), a property shown by Lüscher, Münster and Weisz [28] long ago,
where R0 is an ultraviolet (UV) scale. Allais and Caselle [41,42], using point-split regularization [30] and conformal-mapping techniques, have evaluated the expectation value of the quadratic operator Eq. (5) at any temperature and in any plane, in accord with Eq. (8), with q1 = e^(−π²τ), where τ = L_T/R is the modular parameter of the cylinder, L_T = 1/T is the temporal extent governing the inverse temperature, and R0(ζ) is the UV cutoff, which has been generalized to depend on the distance from the sources.
At high temperature, the long-string limit R > L_T of Eq. (8) implies a linear broadening pattern in the string's width [41,42]. The second logarithmic term in Eq. (8) implies a different width at each plane around the middle of the string. This curved form becomes more pronounced with increasing temperature and string length.
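As a concrete numerical illustration of the leading-order behavior, the sketch below evaluates the zero-temperature logarithmic broadening per transverse direction, W²(R) = (1/(2πσ)) ln(R/R0), a minimal reading of the Lüscher-Münster-Weisz result quoted above; the values of σ and R0 are illustrative placeholders, not parameters fitted in this work.

```python
import numpy as np

# Minimal sketch of the leading-order (zero-temperature) broadening,
# W^2(R) = (1/(2*pi*sigma)) * ln(R/R0) per transverse direction.
# sigma and R0 below are illustrative placeholders, not fitted values.
sigma = 5.0   # string tension in fm^-2 (roughly (440 MeV)^2 in physical units)
R0 = 0.04     # UV scale in fm (placeholder)

R = np.linspace(0.5, 1.2, 8)              # source separations in fm
W2 = np.log(R / R0) / (2.0 * np.pi * sigma)

for r, w2 in zip(R, W2):
    print(f"R = {r:.2f} fm  ->  W^2 = {w2:.4f} fm^2")
```

The slow logarithmic growth with R, and the constant shift induced by a change of R0, are directly visible in the printed values.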
F. Gliozzi, M. Pepe and U.-J. Wiese [39,42] extended the calculation of the width to two-loop order in the perturbative expansion of the NG action Eq. (4); the next-to-leading-order width reads as Eq. (9), with the leading-order term W²0 in accord with Eq. (8) and the NLO term given by Eq. (10), where q = e^(−πL/R). The form of W²0 in terms of the Dedekind η function given in Ref. [42] is equivalent to Eq. (8) through the standard relations of elliptic functions. Here the Eisenstein series E2 is defined as E2(τ) = 1 − 24 Σ_{n≥1} n qⁿ/(1 − qⁿ) (Eq. (12)), and the Dedekind eta function as η(τ) = q^(1/24) Π_{n≥1} (1 − qⁿ) (Eq. (13)), with q = e^(2πiτ).
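The two special functions entering the NLO width are cheap to evaluate numerically from the q-series just quoted. The snippet below is a generic numerical helper based on direct truncation of the series in the nome q, not code from the analysis of this paper.

```python
import numpy as np

def eisenstein_E2(q, nmax=200):
    # E2 as a series in the nome q: E2 = 1 - 24 * sum_{n>=1} n*q^n / (1 - q^n)
    n = np.arange(1, nmax + 1)
    return 1.0 - 24.0 * np.sum(n * q**n / (1.0 - q**n))

def dedekind_eta(q, nmax=200):
    # eta as a product in the nome q: eta = q^(1/24) * prod_{n>=1} (1 - q^n)
    n = np.arange(1, nmax + 1)
    return q**(1.0 / 24.0) * np.prod(1.0 - q**n)

# Example with the nome q = exp(-pi*L/R) appearing in the NLO width above,
# for an illustrative aspect ratio L/R = 2:
q = np.exp(-np.pi * 2.0)
print(eisenstein_E2(q), dedekind_eta(q))
```

For nomes well inside the unit disk, as in the fit ranges considered here, a truncation at a few hundred terms is far beyond the needed accuracy.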
As mentioned above, an interesting generalization of the Nambu-Goto string [105-107] has been proposed by Polyakov [73] and Kleinert [108] to stabilize the NG action in the context of fluid membranes. The Polyakov-Kleinert string is a free bosonic string with an additional Poincaré-invariant term proportional to the extrinsic curvature of the surface, entering as a next-order operator after the NG action [73,108]. That is, the surface representation of the Polyakov-Kleinert (PK) string depends on the geometrical configuration of the sheet embedded in spacetime: the bosonic free-string action is equipped with additional terms of the extrinsic curvature as a next-order operator after the NG action [73,74].
Many properties have been rigorously worked out by Kleinert and German, such as the exact potential in the large-dimension limit [109,110], the dynamical generation of the string tension [111], and the perturbative stability in critical dimensions [112]. This is in addition to various thermodynamical characteristics of the geometric string gas, including the partition function [113], the free energy and string tension at finite temperature [114-116], and the deconfinement transition point [117].

The smooth configurations of quantum fluctuations swept out in Euclidean space-time by the Nambu-Goto string are favored in the string's partition function by adding a new term proportional to the geometrical second fundamental form, or simply the extrinsic curvature of the worldsheet.

The second fundamental form (or shape tensor), in differential-geometric notation, defines a quadratic form on the tangent plane of a smooth surface in three-dimensional Euclidean space. With a smooth choice of the unit normal vector at each point, this quadratic form generalizes to a smooth hypersurface in a Riemannian manifold.
The action of the Polyakov-Kleinert (PK) string consists of the NG terms supplemented with the extrinsic-curvature contribution S_R. The extrinsic curvature K is defined through the Laplace operator Δ acting on the embedding, and M² = σ0/(2αr) is the rigidity parameter. The term satisfies the Poincaré and parity symmetries and can also be considered [76] within the general class of LW actions Eq. (1).

The perturbative expansion [86] of the rigidity term Eq. (15) has a leading term and a next-to-leading-order term, Eq. (19). The rigidity parameter weighs the smooth worldsheet configuration favorably over the creased one. In non-abelian gauge theories this ratio is expected to remain constant in the continuum limit [76].

The numerical simulations [76] of the confining potential in U(1) gauge theory first addressed the rigidity of the effective bosonic string. A possible manifestation in SU(N) gauge theories in 3D [70] has been reported as well. The rigidity effects in the confining potential of SU(3) at high temperature manifest as a necessary ingredient to retrieve the correct dependence of the string tension on the temperature [68,69].
The mean-squared width of the Polyakov-Kleinert string can be calculated by expanding the squared width of the string around the free-string action Eq. (3), where ⟨·⟩0 represents the vacuum expectation value with respect to the free-string action. Equation (21) gives the modification to the mean-square width by virtue of the leading term in the rigidity. In the following, we evaluate the correlator Eq. (21) of the rigid string up to one-loop order, using the Green function Eq. (23) as the two-point propagator. On a cylindrical sheet of surface area R × L, with Dirichlet boundary conditions and periodic boundary conditions in ζ0 with period L_T, Eq. (23) is the Green propagator of the free string. Equation (21), representing the perturbation of the width due to the rigidity around the free NG string, can then be expressed in terms of the corresponding Green functions. The modification to the mean-square width of the string for smoothed fluctuations is calculated in detail in Ref. [118] using the ζ-function regularization technique and turns out to be Eq. (25). The mean-square width of the rigid string at the next-to-leading term in the perturbative expansion of the extrinsic curvature Eq. (19) is explicitly calculated in Ref. [118]; the two-loop version of Eq. (25) reads as Eq. (26). Apart from the possible stiff structure of QCD strings, the symmetry breaking of the action at the boundaries can have detectable effects on the energy density along the QCD flux tube as well. This perturbation away from the free bosonic-string behavior has been discussed for the numerical data of the static potential [69,70,104,119].
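As a numerical aside on the structure of the free Green propagator Eq. (23) used in these width evaluations, the sketch below accumulates the mode sum for ⟨X²(ζ1)⟩ per transverse direction, with a hard mode cutoff standing in crudely for the ζ-function regularization of the text; the normalization assumes the free Gaussian action S = (σ/2)∫(∂X)², and all parameter values are placeholders.

```python
import numpy as np

def w2_mode_sum(zeta1, R, LT, sigma, nmax=400, mmax=400):
    # <X^2(zeta1)> per transverse direction from the free-string mode sum on
    # a cylinder: Dirichlet at zeta1 = 0, R; periodic in zeta0 with period LT.
    # The hard cutoffs (nmax, mmax) act as a crude UV regulator; the
    # logarithmic divergence shows up as slow growth with nmax.
    n = np.arange(1, nmax + 1)[:, None]          # spatial (Dirichlet) modes
    m = np.arange(-mmax, mmax + 1)[None, :]      # thermal (periodic) modes
    eig = (n * np.pi / R) ** 2 + (2.0 * np.pi * m / LT) ** 2
    weight = np.sin(n * np.pi * zeta1 / R) ** 2
    return (2.0 / (sigma * R * LT)) * np.sum(weight / eig)

# Placeholder parameters: sigma in fm^-2, lengths in fm.
print(w2_mode_sum(zeta1=0.5, R=1.0, LT=0.6, sigma=5.0))
```

Doubling nmax shifts the result by a roughly constant amount, which is the lattice analogue of the UV-scale ambiguity absorbed into R0 in the regularized formulas.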
A generic boundary action can be defined as Eq. (27), with the Lagrangian density L_i associated with the corresponding effective low-energy parameter b_i. Dirichlet boundary conditions X^i = 0 at both ends mean that the ζ0-derivatives vanish on the boundary (∂0^n X = 0). At the lowest order, the only possible term [49] in the Lagrangian is therefore the two-derivative b1 term. The leading-order correction due to the second boundary term, with coupling b2, appears at the four-derivative term in the bulk; on the boundary, the general term is of the form ∂³X², Eq. (29). One should note that possible terms proportional to the equation of motion can be set to zero by a field redefinition.

The third term, of coupling b3, is given by Eq. (32). The Lorentz symmetry is a crucial aspect of Yang-Mills theory that ought to be preserved. Applying the Lorentz transformation to the boundary action Eq. (27) and requiring S_bi to vanish, we obtain constraints [79,120] on the values of the couplings, or else higher-order derivatives must be realized in the choice of the action for a given coupling.
The variation of the boundary actions at first and third orders under an infinitesimal nonlinear Lorentz transformation [104] of the Lagrangian densities in Eq. (27) entails the vanishing values b1 = 0 and b3 = 0. It can also be shown [38] that b1 = 0 based on the duality between the two different channels of the open-closed string.

The nonlinear realization of the Lorentz transformation on the Lagrangian densities Eq. (29) and Eq. (32) generates higher-order terms at the same scaling [79,104]. The invariance of Eq. (29) leads to a recursion relation which, when solved [104], gives rise to a more general form of the Lagrangian density encompassing the naive construct Eq. (29) as a special case, Eq. (31). The next Lagrangian density, L4 of coupling b4, is the leading general effective Lagrangian on the boundary, Eq. (33); the first two terms are derived in Ref. [104]. The boundaries do affect the average width of the delocalization along the string; the mean-square width is similarly expressed as a perturbation around the free NG string. The evaluation of the leading correlator involves cumbersome manipulations; we presented the detailed calculus in Ref. [81], using ζ-function regularization of the divergent sums appearing after the evaluation of the Green propagator Eq. (23). The result is Eq. (36), where E2(τ) is the Eisenstein series defined by Eq. (12). The expectation value of the six-derivative-order boundary term is evaluated similarly by substituting the free propagator Eq. (23) (see Ref. [81] for details); the next non-vanishing expectation value appears at the coupling b4, Eq. (38). Equations (36) and (38) lay out the perturbative expansion of the boundary action and estimate the subsequent augmentation or lessening of the mean-square width of the effective string in D dimensions at any temperature.
A. Width measurements of the Action Density
In the following we construct a color-averaged, infinitely-heavy static quark-antiquark QQ state by means of two Polyakov lines, P_2Q(r1, r2) = P(r1) P†(r2).
We measure the mean-square width of the action density in SU(3) gluonic configurations. The action density is related to the chromo-electromagnetic fields via (1/2)(E² − B²) and is evaluated via a three-loop improved lattice field-strength tensor [121].
A scalar field characterizing the action-density distribution in the Polyakov vacuum, or in the presence of color sources [122], can be defined as Eq. (39), with the vector ρ referring to the spatial position of the energy probe with respect to some origin, and the bracket ⟨...⟩ standing for averaging over gauge configurations and lattice symmetries. We make use of the symmetry of the four-dimensional torus; that is, the measurements taken at a fixed color-source separation R are repeated at each point of the three-dimensional torus and time slice, and then averaged. The lattice size is sufficiently large to avoid mirror effects or correlations from the other side of the finite, periodic lattice. The characterization Eq. (39) yields C → 1 away from the quarks, by virtue of the cluster decomposition of the operators.

To eliminate statistical fluctuations while leaving the physical observables intact, only 20 sweeps of UV filtering using an over-improved algorithm [123,124] have been applied to all gauge configurations.

Different UV-filtering schemes can be calibrated [45,125] in terms of the corresponding radius of the Brownian motion. The above-prescribed number of stout-link sweeps would be the equivalent of 10 sweeps of the APE [126] algorithm [45,125] with an averaging parameter α = 0.7.

A careful analysis that we have performed in Refs. [45,127] ensured that, with this number n_sw of improved cooling sweeps [123,124], no smearing effects are detectable on either the quark-antiquark QQ potential or the energy-density profile for color-source separation distances R ≥ 0.5 fm, which is the distance scale under scrutiny in this investigation.
To estimate the mean-square width of the gluonic action density along each plane transverse to the quark-antiquark axis, and taking into consideration the axial cylindrical symmetry of the tube, we choose a double Gaussian function with the same amplitude A and mean value, Eq. (40). In this form, the constraint σ1 = σ2 corresponds to the standard Gaussian distribution. Table III compares the returned values of χ² for both optimization ansätze, namely the constrained form σ1 = σ2 and the unconstrained form σ1 ≠ σ2. The fits of the double-Gaussian form return acceptable values of χ² at the intermediate distances.
Good χ² values are returned as well when fitting the action-density profile to a convolution of a Gaussian with an exponential [50,128]; however, considering the statistical uncertainties at large distances (see Fig. 1), we opt for the form Eq. (40) with σ1 = σ2 for stable fits.

TABLE III. The mean-square width of the action density W²(z) and the corresponding χ² at the temperature T/Tc = 0.9, in the middle transverse plane intersecting the QQ line, z = R/2. The width estimates and the relative differences are obtained in accord with Eq. (40), with σ1 = σ2 corresponding to the standard Gaussian.
A measurement of the width of the string's action density may be taken by fitting the density distribution C(ρ; z) to Eq. (40), with r² = x² + y², in each selected transverse plane ρ(r, θ; z). The second moment of the action-density distribution with respect to the cylinder's axis z joining the two quarks is

W²(z) = ∫ dr r³ G(r, θ; z) / ∫ dr r G(r, θ; z),   (42)

which defines the mean-square width of the tube on the lattice. The loci of the color sources correspond to z = 0 and z = R, respectively.
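A minimal version of this fit-and-moment pipeline can be sketched as follows; the double-Gaussian form below is an assumed stand-in for Eq. (40), which is not reproduced here, and the data points are synthetic placeholders rather than lattice measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def double_gauss(r, A, s1, s2):
    # Assumed double-Gaussian profile with a common amplitude A
    # (a stand-in for Eq. (40); the exact published form may differ).
    return A * (np.exp(-r**2 / (2 * s1**2)) + np.exp(-r**2 / (2 * s2**2)))

# Toy "action density" samples on one transverse plane (placeholder data).
r_data = np.linspace(0.0, 1.5, 30)
g_data = double_gauss(r_data, 0.8, 0.25, 0.45)
g_data += 0.01 * np.random.default_rng(0).normal(size=r_data.size)

popt, pcov = curve_fit(double_gauss, r_data, g_data, p0=[1.0, 0.2, 0.5])

# Mean-square width from the second moment, Eq. (42):
# W^2 = int dr r^3 G(r) / int dr r G(r).
num, _ = quad(lambda r: r**3 * double_gauss(r, *popt), 0.0, np.inf)
den, _ = quad(lambda r: r * double_gauss(r, *popt), 0.0, np.inf)
print("W^2 =", num / den)
```

Constraining s1 = s2 in the fit recovers the standard single-Gaussian estimate discussed above.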
In Table III, the numerical values of the mean-square width of the string at the middle plane between the two color sources are listed. The percentage differences between the widths measured with the two ansätze in Table III indicate an almost constant shift, amounting to approximately 22% of the width measured using the unconstrained optimization σ1 ≠ σ2.

Further measurements of the mean-square width at consecutive transverse planes z = 1 to z = 4 are listed in Table I of Appendix A. The width is estimated in accord with Eqs. (40) and (42) at each selected plane z_i, fixed with respect to one color source. We found that the unconstrained optimization Eq. (40) returns σ1 ≠ σ2 at all color-separation distances.

The numerical values in Table I indicate a broadening of the mean-square width of the string at all transverse planes z_i as the color sources are pulled apart. The plot of the width at consecutive planes in Fig. 4 depicts more clearly an increasing slope in the pattern of growth as one considers planes farther from the quark sources, up to a maximum slope at the middle plane.
B. Pure Nambu-Goto String
The broadening of the width at each selected transverse plane can be compared to the corresponding width of the quantum string, Eqs. (8) and (11). Our discussion in Refs. [68,69] of the fit analysis of the two-Polyakov-loop correlator shows that the LO and NLO approximations are substantially different when the temperature T/Tc ≈ 0.9 is close to the deconfinement point.

Table IV summarizes the resultant values of the fits considering various ranges of source separations. For this fit procedure, the string tension is fixed to its value returned at T/Tc ≈ 0.8 from fits of the QQ data.
The free-string (LO) Eq. (8) and self-interacting (NLO) Eq. (11) solutions are one-parameter fit functions in the ultraviolet cutoff R0(ζ). While in the LO formula the ultraviolet cutoff has the effect of a constant shift in the flux-tube width, in the NLO formula of the NG string the UV cutoff alters the slope.
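Operationally, such a one-parameter fit can be sketched with a standard least-squares routine. The LO form below uses the zero-temperature logarithm as a stand-in for Eq. (8), with σ held fixed as described above; the data arrays and the uniform error estimate are placeholders rather than the lattice measurements of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

sigma = 5.0  # string tension in fm^-2, held fixed (placeholder value)

def w2_lo(R, R0):
    # One-parameter LO form: only the UV cutoff R0 is fitted.
    # Zero-temperature logarithm used as a stand-in for Eq. (8).
    return np.log(R / R0) / (2.0 * np.pi * sigma)

# Placeholder width data (R in fm, W^2 in fm^2).
R = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2])
W2 = np.array([0.072, 0.078, 0.083, 0.087, 0.091, 0.094, 0.097, 0.100])

popt, pcov = curve_fit(w2_lo, R, W2, p0=[0.05])
resid = W2 - w2_lo(R, *popt)
chi2 = np.sum(resid**2 / 0.002**2)   # assumed uniform error of 0.002 fm^2
print(f"R0 = {popt[0]:.4f} fm, chi2/dof = {chi2 / (len(R) - 1):.2f}")
```

Restricting the fit window, as done in the tables, amounts to slicing the R and W2 arrays before calling curve_fit.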
The leading-order approximation shows a strong dependency on the fit range if the data points at small source separations are considered. The first three entries in Table IV compare the values of χ² for both approximations at source separations from R = 0.5 fm up to R = 0.9 fm, that is, excluding the last three points. The free-string picture poorly describes the lattice data at short distances.

With the data points at short distances excluded from the fit, the values of χ² decrease gradually. For example, with the first four points excluded, the returned χ² is smaller, indicating that only the data points at large source separation are parameterized by the string-model formula. With the consideration of the next-to-leading-order solution of the NG action, the values of χ² are reduced. Nevertheless, the values of χ² are still significantly too large to precisely match the numerical data at intermediate distances. Large values of χ² are retrieved if source separations from R = 4a to R = 12a are included, for either the LO or the NLO approximation, even though the two-loop approximation yields a relative improvement. The fits in Table IV are in line with previous observations [41,45,46] at this distance scale. The analysis of the lattice data has revealed curvatures along the planes transverse to the quark-antiquark line at large distances [45,46]. At intermediate distances, the profile along the transverse planes is geometrically flatter than the free-string picture would imply.
Re-rendering the mean-square width of the lattice data together with the fits to Eqs. (8) and (11) discloses the geometrical effects of the inclusion of the NLO terms. The width W²(z)a⁻² at the middle plane z = R/2 is subtracted from that at each plane, W²(z_i); the resulting differences display features due to the higher loops in the string interactions. Although the statistical fluctuations increase with the decrease of the temperature (see Fig. 2), the width estimates obtained by fitting the action density to Eq. (40) can be stabilized by using the standard Gaussian form σ1 = σ2 in Eq. (40) at the temperature T/Tc = 0.8 instead.
Our expectation from the fit behavior of the QQ potential at T/Tc = 0.8 to both the LO and NLO formulas is that higher-order effects are negligible at this temperature scale.

Most of the considerations concerning the validity of both approximations to the QQ potential at the temperature T/Tc = 0.8 seem to hold for the string profile as well. Considering the same fit range, the solid and dashed lines corresponding to the two approximations, (LO) Eq. (8) and (NLO) Eq. (11), in Fig. 5 almost coincide, with the exception of a subtlety at the end point R = 0.5 fm. The mismatch at 0.5 fm is less obvious when considering fits at transverse planes other than the middle one, as can be seen in Fig. 6. This can be attributed to the high value of χ²_dof (only at R = 0.5 and R = 0.6 fm) when measuring the width through the standard Gaussian distribution, i.e., σ1 = σ2 in Eq. (40). We will report a numerical analysis with other fit functions that depend on the separation between the quarks in a future version.
At the temperature T/Tc = 0.8, the fit results of the LO and NLO forms of the NG string, summarized in Table V, return very similar parameterization behavior in both the asymptotic and intermediate distance regions, regardless of the selected fit range. Indeed, higher-order effects are almost suppressed at this temperature scale.

In Table V, the fit to the LO approximation unveils good values of χ² for color-source separations up to R = 0.6 fm; the next-to-leading-order fits, however, improve with respect to the fit range when including the source separations R = 0.5 fm and R = 0.6 fm. This manifests at the middle plane and at the other consecutive planes z as well.

At the same temperature, T/Tc = 0.8, Fig. 7 compares the width difference δW² = |W²(z_i) − W²(R/2)| of the lattice data and the corresponding fits to the string model. In the display, the width at each plane is subtracted from that at the middle plane z = R/2. This unveils an almost constant width along the planes transverse to the line joining the color sources. The curvatures induced by thermal effects [45,46] only manifest at temperatures closer to the deconfinement point and at large distances. This shows that, regardless of the diminishing of the thermal form factors, the flux-tube density lines assume the same shape.
C. Polyakov-Kleinert/Rigid String
Our expectations, based on the fit analysis of the QQ potential data for the ordinary Nambu-Goto string, Eqs. (3) and (4), and for the Polyakov-Kleinert action Eq. (18), are of substantial improvements at temperatures very close to the deconfinement point, T/Tc ≈ 0.9. These improvements have been observed in the compact abelian U(1) gauge model as well [24].

Let us again set the broadening of the width at each selected transverse plane into comparison, this time with that of the rigid/smooth string, Eqs. (25) and (26). We summarize the resultant χ² of the fits in Tables VI and VII. In the first entries, the returned parameter is the renormalization scale R0 for the fits of the width to the leading-order and next-to-leading-order terms of the NG string, Eq. (8) and Eq. (11), respectively. The following entries are the rigidity parameters α from the fits to the mean-square width at the relevant order for both the NG and PK strings, Eqs. (43) and (44).

Equations (43)-(45) give, respectively, the formula used for each corresponding model depicted in the first column of the tables. The resultant fits to the smooth string, consisting of the rigidity terms Eq. (43) added to the next-to-leading-order solution of the NG action Eq. (11), are listed in Tables VI and VII.
The values of χ² in Tables VI and VII indicate a significant reduction at the temperature T/Tc = 0.9 compared to the residuals returned (Table IV) when considering only the NG string Eq. (11). Moreover, the fits to the stiff string return good values of χ² over the whole intermediate source-separation range, at all transverse planes along the tube.

The solid and dashed lines in the plots of Figs. 8 and 9, corresponding to the NG string in the interacting approximation Eq. (11) and the rigid strings Eqs. (43) and (44), show the dramatic improvement in the fits with respect to the stiff strings when considering the whole fit range R ∈ [0.5, 1.2] fm.

The χ² values returned from the fit to the string model Eq. (43), with only the LO perturbation from both the NG and PK strings, indicate an improvement as well compared to the free NG string. However, we find that considering both corrections returns improved values of χ²_dof = 11.09/6 over the source-separation interval R ∈ [0.5, 1.2] fm.

Nevertheless, drawing a comparison between the fit behavior of the leading-order formula and the formula at NLO in the extrinsic curvature, Eq. (45), reveals only a subtle difference in the returned χ² on the fit interval R ∈ [0.5, 1.2] fm, with corrections within the uncertainties of the measurements. Effects of the NLO perturbation Eq. (45) could be relevant when considering smaller distances, finer lattices, or much higher resolutions in general.
The improvement in the fit with respect to the rigid strings, compared to that obtained merely on the basis of the pure NG string, is displayed in Fig. 8. For the source-separation range R ∈ [0.5, 0.7] fm, the rendering in Fig. 8 of the fitted width of the pure NG string at either LO or NLO exhibits significant deviations from the data, compared to the corresponding fits in Fig. 3 over the fit interval R ∈ [0.7, 1.2] fm. The plots in Fig. 8 indicate the inadequacy of the pure NG string as a physical description integrating out the properties of the QCD flux tubes.

Apart from a mitigated deviation at short distances when describing the flux-tube's width over planes other than the middle one, i.e., the z = 2, 3, 4 planes, the assertion still holds true that the match is enhanced with respect to the rigid string models over the larger intermediate separation distances, as evidently displayed in Fig. 9. The fit to the NG approximation Eq. (44) returns good values of χ² for the mean-square width of the string in the middle plane. However, the fit to the stiff string Eq. (43) exhibits improvements at the planes near the color sources; this matches the intuitive picture that rigidity effects, that is, resistance to bending, may be more stringent near the string's ends.
D. Lüscher-Weisz string with two boundary terms in the action
The corrections provided by the boundary action to the static QQ potential seem to explain, to some extent, the deviations appearing when constructing static mesonic states with Polyakov-loop correlators [69]. At higher temperature, the inclusion of the boundary corrections up to the fourth-order coupling b4, together with the string rigidity, has been found [69] to describe the QQ potential well, providing good fits for distances as small as R = 0.5 fm.

The goal in this section is to compare the analytic estimate of the mean-square width resulting from the boundary terms in the Lüscher-Weisz (LW) effective string action Eq. (27) with the energy fields set up by static mesonic configurations. We consider the perturbative expansion of the two boundary terms at the orders of four and six derivatives, given by Eq. (31) and Eq. (33), respectively.
In the following we select possibly interesting combinations of the boundary terms, Eqs. (36) and (38), with the LO and NLO Nambu-Goto and rigid strings. We first consider combinations which exclude the rigidity structure of the string. Similar to the foregoing sections, the broadening of the width at each selected transverse plane can be compared with the corresponding width of the various string models. We summarize the observations on the resultant χ² and fit parameters in Tables VIII, IX and X.

• Tables VIII and IX: In the first columns, the returned parameter is the UV cutoff R0, and the following entries are for the boundary-action coupling b2.
The first observation is that the fit over short distance ranges in Table VIII improves once the leading boundary term is included.

• Tables X and XI: The first table, Table X, covers fit results over selected short intervals. The immediate observation is that the consideration of the next, six-derivative term in the boundary action improves the fits over these intervals, in particular for distances commencing from R = 0.4 fm, even without considering the NLO term of the NG string.
For longer distances (Table XI), the inclusion of the NLO term of the NG string seems to take effect; nevertheless, the fits at the planes z = 2 and z = 3 still show sensible deviations from the lattice data. Figure 10(a) illustrates the resultant fitted curves of the NLO width of the NG string at the middle plane together with the leading boundary term W²_b2, Eq. (47). The fit of the string model which employs the next non-vanishing boundary term W²_b4, Eq. (49), is depicted in Fig. 10(b). Here we evidently see the match with the LGT data down to surprisingly small distances, R = 0.4 fm.

The plot in Fig. 11 shows the contribution of the boundary action to the width profile at two consecutive planes from the quark, namely z = 1 and z = 2. The lines are the fits of the string model with two boundary terms (b2, b4), Eq. (49), over the interval R ∈ [0.5, 1.2] fm. For the planes apart from the middle plane, the mismatch is evident at the large source-separation distances.
More variants of the string models can be attained by switching on the rigid properties of the QCD string model; in the above, only the first boundary correction W²_b2 is encompassed.

FIG. 9. The mean-square width of the string W²(z) versus QQ separations, measured in the planes z = 1, z = 2, z = 3, and z = 4, respectively from top to bottom, at T/Tc ≈ 0.9. The solid and dashed lines are the fits to the Nambu-Goto and rigid strings, Eqs. (11) and (25).

These variants produce a very good match with the numerical data at all planes, down to source-separation distances as small as R = 0.4 fm. However, subtle differences in the values of χ² are observed when considering the next-surviving boundary term W²_b4 in the models of Eq. (50) and Eq. (51), as inspection of Table XIII divulges. The good fit in these models is nevertheless remarkable at the planes z = 3 and z = 4 from the quarks. This seems to suggest restoring forces, by virtue of the inclusion of the rigidity, that pull the string down towards its classical configuration. The expectation of an almost flat width geometry along the transverse planes is consistent with the analysis shown in Fig. 12, which indicates that the thermal effects strongly diminish near the QCD plateau region [129]. In Fig. 7, the rendering of the action densities corresponding to the two temperatures T/Tc = 0.8 and T/Tc = 0.9 unveils a prolate-shaped action density in the color maps that is largely independent of the temperature.
These are two typical instances where the string's width profile exhibits a constant width along the tube. The first is due to the diminishing of the thermal effects near the end of the QCD plateau; the second manifests at the intermediate color-source separations and at temperatures close to the deconfinement point, as a result of the role played by the string's self-interactions, which culminate in a squeeze/suppression along the transverse planes. Figures 12 and 7 disclose the fact that the geometry of the density isolines is quite independent of the width profile.
IV. SUMMARY OF NUMERICAL RESULTS
The corrections received from the Nambu-Goto (NG) action, expanded up to next-to-leading-order terms, have been set into comparison with the corresponding SU(3) Yang-Mills lattice data in four dimensions. The region under scrutiny is where the free-string picture poorly describes the energy profile. The considered source separations are R = 0.5 fm to R = 1.2 fm, for two temperature scales near the end of the QCD plateau and just before the critical point, T/Tc = 0.8 and T/Tc = 0.9.
The theoretical predictions laid down by both the LO and the NLO approximations of the Nambu-Goto string show good fit behavior for the data corresponding to the QQ potential near the end of the QCD plateau region at T/Tc = 0.8. The fit returns almost the same parameterization behavior, with negligible differences in the measured zero-temperature string tension σ0a². The returned value of this fit parameter is in agreement with the measurements at zero temperature [130]. On the other hand, at the higher temperature near the deconfinement point, T/Tc = 0.9, the values of χ² indicate improvements of the fits to the NLO width profile of the pure NG string Eq. (11) compared to the leading-order approximation. Nevertheless, the NLO approximation does not provide an accurate match to the numerical data except at R > 0.8 fm. The fits of the QQ potential data to the Nambu-Goto string model, considering either of its approximation schemes, return large values of χ² if the fit region spans the whole source-separation range R = 0.5 fm to R = 1.2 fm. The fits at the next-to-leading-order approximation of the NG string show some improvement on each corresponding fit interval. In general, the values of the residuals decrease upon exclusion of the data points at short distances, for both approximations. The inclusion of the leading boundary term of the Lüscher-Weisz action, W²_b2, in the approximation scheme improves the fits at all the considered source separations. The fit of the string model employing the next non-vanishing boundary term W²_b4, Eq. (49), displays an evident match with the LGT data down to surprisingly small distances, R = 0.4 fm. Although large-distance deviations reappear for planes away from the middle plane of the flux-tube, we see that a good match is recovered when considering the rigid properties of the string.
We found that the rigid-string width profile accurately matches the width measured from the numerical lattice data near the deconfinement point. This suggests that rigidity effects can be very relevant to the correct description of the Monte-Carlo data of the field density, and it motivates scrutinizing the stiffness physics of the QCD flux-tube in other frameworks [131].

For the considered fit interval R ∈ [0.5, 1.2] fm, the corrections received from the two-loop term in the extrinsic curvature are very small and within the uncertainties of the measurements. The next-to-leading order in this perturbation, Eq. (45), could be very relevant when considering smaller distances, finer lattices, or other gauge models.
Although the rigidity effects seem to be very similar to considering higher-order terms in the NG string, we find that considering both terms returns improved values of χ²_dof = 10.58/6 for the shorter distances, R ∈ [0.5, 1.2] fm. Drawing a comparison between the fit behavior of the LO and the NLO formulas for the extrinsic curvature reveals a subtle difference in the returned χ² and a change of around 20 to 30 percent in the value of the rigidity parameter.
At the higher temperature, T/Tc = 0.9, the color tube exhibits a suppressed growth profile in the intermediate region. The fits considering both intermediate and asymptotic color-source separation distances show noticeable improvement with respect to the string self-interacting picture (NLO) compared to that obtained on the basis of the free-string approximation. Nevertheless, the next-to-leading approximation does not provide an accurate match to the numerical data; this manifests as significantly large values of the returned χ² when considering distances R < 0.8 fm.

The oscillations of a free NG string fixed at the ends by Dirichlet boundaries trace out a nonuniform width profile with a geometrically curved fine structure. This is detectable [45] at source separations R > 1.0 fm and near the critical temperature. However, in the intermediate region the lattice data are not consistent with the curved width of the freely fluctuating string. The fits to the mean-square width extracted from the NLO expansion of the NG string, however, indicate that the self-interactions flatten the width profile in the intermediate region. The string's self-interactions account for the constant width along consecutive transverse planes of the tube, in addition to the decrease in slope of the suppressed width broadening.

At the end of the QCD plateau region, at temperature T/Tc = 0.8, the constant-width property manifests at all source-separation distances and is consistent with the pure NG action. These results indicate not only the fading of the thermal effects at this temperature, but also a form of the action-density map that is independent of the geometrical changes induced by the temperature. That is, the main features of the density map would persist at lower and zero temperature.
V. CONCLUSION
In this investigation, we discussed the effective bosonic string model of confinement in the vicinity of the critical phase-transition point [119]. We conclude that the free Nambu-Goto string can be a good description of the energy profile of QCD flux-tubes up to temperatures on the QCD plateau. With the gradual decrease of the string tension, the pure NG string does not precisely describe the lattice Monte-Carlo data, even at two-loop order. Nevertheless, we find evidence that the effective bosonic string model competently integrates out the physical properties of the flux-tube when the symmetry-breaking effects of the boundary action and the rigidity properties are included in its paradigm. | 2020-01-08T06:36:27.000Z | 2020-01-08T00:00:00.000 | {
"year": 2020,
"sha1": "e4d817163bdc3705cb1f010eda411793fad9fc2a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "03c8b1d69e7f4ede4d4f3f2f27c2121789189174",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235299222 | pes2o/s2orc | v3-fos-license | Severity indexes of blunt trauma victims in intensive therapy: prediction capacity for mortality*
* Extracted from the dissertation: “Índices de gravidade em vítimas de trauma internadas em Unidade de Terapia Intensiva”, Programa de Pós-Graduação em Enfermagem na Saúde do Adulto, Universidade de São Paulo, 2018. 1 Universidade de São Paulo, Escola de Enfermagem, Programa de Pós-Graduação em Enfermagem na Saúde do Adulto, São Paulo, SP, Brazil. 2 Faculdade dos Carajás, Marabá, PA, Brazil. 3 Universidade Federal do Rio de Janeiro, Campus Professor Aloísio Teixeira, Macaé, RJ, Brazil. ABSTRACT Objective: To identify the predictive capacity for mortality of the indexes Revised Trauma Score, Rapid Emergency Medicine Score, modified Rapid Emergency Medicine Score, and Simplified Acute Physiology Score III in blunt trauma victims hospitalized in an intensive care unit and compare their performance. Method: Retrospective cohort of patients with blunt trauma in an intensive care unit from medical records. Receiver Operating Characteristic and a 95% confidence interval of the area under the curve were analyzed to compare results. Results: Out of 165 analyzed patients, 66.7% have received surgical treatment. The mortality in the intensive care unit and in the hospital was 17.6% and 20.6%, respectively. For the mortality in the intensive care unit, the area under the curve varied from 0.672 to 0.738; however, better results have been observed in surgical patients (0.747 to 0.811). Similar results have been observed for in-hospital mortality. In all analyses, the areas under the curve of the indexes presented no significant difference. Conclusion: The accuracy of the severity indexes was moderate, with an improved performance when applied to surgical patients. The four indexes presented a similar prediction for the analyzed outcomes.
INTRODUCTION
Trauma is an important public health problem in Brazil and worldwide due to its high death rate or severe consequences which lead to temporary or permanent disabilities of its victims. According to information of the Informatics Department of Brazil's Unified Health System (Departamento de Informática do Sistema Único de Saúde do Brasil -DATASUS), in 2008, there were 135,936 deaths by external causes in Brazil; in 2017, the mortality due to this occurrence was 158,657 cases (1) , leading to a 17.0% increase in only one decade, whereas the population growth in that period was 9.5% (2) .
The data of DATASUS of the last 20 years still point external causes as having the main responsibility for the deaths in the age group from 1 to 49 years; in 2017, 72.1% of deaths of individuals aged 15 to 29 were due to trauma (1) .
Guaranteeing better care for trauma victims depends on the efficiency of all professionals involved, from care at the trauma scene to the complete treatment in the intra-hospital environment, concluded with the rehabilitation and reinsertion of the individual into society (3). The professionals involved in all these steps must understand the severity of the clinical condition of the trauma victim in order to adopt immediate intervention and screening conducts, plan care, and qualify the services involved in this care (4)(5).

To meet this purpose, mortality and prognosis scores, commonly named severity indexes, were developed (6)(7). There are many severity indicators which may be used for trauma victims: those specific to the traumatic occurrence, those elaborated for individuals receiving care in emergency services and, for severe cases referred to the Intensive Care Unit (ICU), indexes particular to the patients of these units.

Trauma victims are frequent in the ICU, and their risk of death characterizes them as a group whose clinical severity is of great interest. However, choosing an appropriate instrument to this end constitutes a challenge. In the literature, the Revised Trauma Score (RTS) (8), the Rapid Emergency Medicine Score (REMS) (9), the modified Rapid Emergency Medicine Score (mREMS) (10), and the Simplified Acute Physiology Score III (SAPS III) (11) stand out as physiological indexes to estimate the severity of patients admitted to emergency services and the ICU.
One of the most used trauma severity indexes is the prognostic index RTS, developed in 1989, whose coefficients derive from a logistic regression analysis applied to the broad Major Trauma Outcome Study database (8).

The REMS is a physiological severity index used in emergency care, derived from the Acute Physiology and Chronic Health Evaluation II (APACHE II); it requires only a rapid evaluation to obtain the physiological parameters and provides for immediate calculation, with no use of laboratory or complementary exams (9).
Although studies have considered the use of REMS as appropriate for trauma victims, researchers have identified that this score should be adjusted to provide a better prediction of these patients' mortality. Thus, mREMS was developed and published in 2017 (10) .
When validating this new version of the index, researchers identified that the mREMS provided precise predictions of in-hospital mortality, surpassing the Injury Severity Score (ISS) and the Shock Index (SI), and paralleling the RTS and the Mechanism, Glasgow Coma Scale, Age, and Arterial Pressure (MGAP) score. Consequently, the mREMS is considered a simple, objective, and valuable tool for trauma victims in the emergency setting (10).

To evaluate the severity of ICU patients, the SAPS III index (11) is currently applied; this is a uniform system, internationally accepted for this purpose, which resulted from the improvement of APACHE and of the earlier versions of SAPS.
The use of the severity indexes, in addition to trauma records, holds diverse possibilities of clinical and scientific application to provide a better description and classification of trauma victims. In this sense, the improvement of indexes used to measure the severity of trauma victims is under ongoing development (4,7) . There are many severity indexes which may be employed in care of trauma victims in different moments of their care; however, it is important to identify those which offer appropriate precision.
Given the presented aspects, this study aims to identify the capacity for mortality prediction, both in the ICU and in-hospital, of the RTS, REMS, mREMS, and SAPS III in blunt trauma victims admitted to the ICU, and to compare their performance.
Design of study
This is a retrospective cohort study which computed information from the medical records of trauma patients, from their admission in the emergency department to hospital discharge.

Scenario

This study was conducted in a reference ICU with 24 beds, specialized in surgical emergencies and trauma, which provides care for high-complexity trauma cases in the state of São Paulo, Brazil.
Selection criteria
The sample comprised all patients meeting the following eligibility criteria: victims of blunt trauma, aged 18 or older, admitted to the ICU from August 1, 2014 to July 31, 2016. Blunt and penetrating trauma have different etiologies, clinical manifestations, treatments, and mortality, circumstances which call for distinct analyses for these types of trauma. During data collection, 90% of the people receiving care at this study's site presented blunt trauma; consequently, we opted for an analysis of the indexes in victims of this type of lesion.

Excluded from the sample were individuals who reached the emergency service more than 24 hours after the trauma, as well as victims of hanging, choking, drowning or near-drowning, poisoning, burns, and electrocution. The exclusion of those who arrived at the emergency service more than 24 hours after the trauma was established taking into account that the initial clinical conditions of trauma victims are used to calculate the indexes and are pointed out as important indicators of mortality or survival. The victims of the external causes previously cited were excluded considering the important specificities of the physiopathology of these traumas in comparison with the others.
Data collection
From a consultation of the admission records of the study's ICU, a list of patients who met the study's eligibility criteria was produced. Based on this list, the institution's Medical and Statistical Archive Service was asked to locate the medical records for consultation and compilation of the data of interest to this study.

The analysis of the medical records enabled verifying whether patients met the eligibility criteria, as well as filling in the two forms used for data collection. The data included in the first instrument provided for the calculation of the RTS, REMS, and mREMS from the admission registers of the victims in the emergency department, the identification of surgical and non-surgical patients, and of deaths and survivals during hospitalization, in addition to the sample characterization (sex, age, external cause, type of prehospital support, and length of hospital stay). The SAPS III, a severity index regularly applied in the ICU by the medical team, had its score transcribed to the second data-collection instrument, which also included information from admission until discharge from the ICU.
Considering that the analyzed severity indexes were elaborated for predicting ICU (11) or in-hospital (8)(9)(10) mortality, analyses which considered mortality in the ICU and during hospitalization as a dependent variable were performed.
The independent variables for this investigation were the RTS, REMS, mREMS, and SAPS III. The calculation of the RTS is based on the Glasgow Coma Scale (GCS), the systolic blood pressure (SBP), and the respiratory rate (RR). To estimate the probability of survival of the trauma victim, values from zero to 4 are attributed to each physiological parameter measured at hospital admission, which are subsequently multiplied by their weights (0.9368 for GCS, 0.7326 for SBP, and 0.2908 for RR) and summed (8). The RTS may range from zero to 7.8408, and the higher the final value, the better the victim's prognosis. The probability of survival of trauma victims corresponds to the RTS score, as proposed by the authors of this index (8).
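As an illustration, the RTS can be computed directly from the coded parameter values and the weights quoted above. A minimal sketch follows; the coding bands used below are the standard RTS coded values from the original publication of the index, included here as an assumption since this article does not reproduce the coding table.

```python
def rts_code_gcs(gcs):
    # Standard RTS coded values for the Glasgow Coma Scale (assumed bands).
    if gcs >= 13: return 4
    if gcs >= 9:  return 3
    if gcs >= 6:  return 2
    if gcs >= 4:  return 1
    return 0

def rts_code_sbp(sbp):
    # Coded values for systolic blood pressure in mmHg (assumed bands).
    if sbp > 89:  return 4
    if sbp >= 76: return 3
    if sbp >= 50: return 2
    if sbp >= 1:  return 1
    return 0

def rts_code_rr(rr):
    # Coded values for respiratory rate in breaths/min (assumed bands).
    if 10 <= rr <= 29: return 4
    if rr > 29:        return 3
    if rr >= 6:        return 2
    if rr >= 1:        return 1
    return 0

def rts(gcs, sbp, rr):
    # Weighted sum with the coefficients given in the text (8).
    return (0.9368 * rts_code_gcs(gcs)
            + 0.7326 * rts_code_sbp(sbp)
            + 0.2908 * rts_code_rr(rr))

# A patient with normal physiology scores the maximum, 7.8408:
print(rts(gcs=15, sbp=120, rr=16))   # -> 7.8408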
The REMS comprises the GCS, heart rate (HR), mean arterial pressure (MAP), RR, oxygen saturation (SaO2), and age. According to the values observed at admission to the emergency service, these variables receive a score from zero to four, except for age, which ranges from zero to six; the total REMS score is the sum of the scores obtained for these variables (9). The calculation of the mREMS includes age, SBP, HR, RR, SaO2, and GCS. Scores from 0 to 4 are attributed to the values observed in these parameters, except for the GCS, whose score ranges from 0 to 6; the mREMS value is the sum of these scores (10). The scores of the REMS and mREMS vary from zero to 26, and higher scores indicate a higher risk of death (9)(10).
In the application of the SAPS III, the values of three groups of variables are used: age and information on previous health status (comorbidities, length of hospital stay and intra-hospital location before ICU admission, and use of vasoactive drugs); circumstances of ICU admission (reasons for ICU hospitalization, anatomic site of surgery, if applicable, type of ICU admission, planned or unplanned, surgical status, and presence of nosocomial and/or respiratory infection); and physiological variables (body temperature, SBP, HR, oxygenation, arterial pH, creatinine, bilirubin, hematocrit, leukocytes, platelets, and GCS). To score this index, the worst values of the physiological variables within the first hour of the patient's hospitalization in the ICU are considered (11). Each SAPS III item has a specific score, and the final score is the sum of these values. The lowest score which can be attributed by the index is 16 and the highest is 217; the higher the score, the more severe the patient's health status. When this score is converted by a logistic regression equation, the index provides the probability of hospital mortality (4,11).
Data analysis and treatment
This study's computerized database was built with the Statistical Package for the Social Sciences (SPSS) software, version 22, which was used for the statistical tests under the guidance of a specialist in this area. Except for the SAPS III, which was transcribed from the medical records, the other indexes were calculated in an electronic spreadsheet.
Inferential analyses were performed to evaluate the performance of the severity indexes (RTS, REMS, mREMS, and SAPS III) and to compare their capacity for predicting the death of victims during hospitalization in the ICU and in the hospital, considering separately the total cases and the patients submitted or not to surgical treatment. The Receiver Operating Characteristic (ROC) curve was used to analyze the performance of these indexes. The cut point was identified through the Youden index, and the values for sensitivity, specificity, predictive positive value (PPV), and predictive negative value (PNV) were calculated. Differences in index performance were identified through analysis of the area under the curve (AUC) and its 95% confidence interval (95%CI).
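A minimal sketch of this analysis pipeline, with scores oriented so that higher values indicate death (for the RTS the orientation is reversed, as discussed later), could look as follows; the data below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Toy data: 1 = death, 0 = survival; "score" is a severity-index value
# oriented so that higher values indicate a higher risk of death.
y = rng.integers(0, 2, size=165)
score = y * rng.normal(8.0, 3.0, 165) + (1 - y) * rng.normal(4.0, 3.0, 165)

auc = roc_auc_score(y, score)
fpr, tpr, thresholds = roc_curve(y, score)

# Youden index J = sensitivity + specificity - 1 = tpr - fpr;
# the optimal cut point maximizes J.
j = tpr - fpr
cut = thresholds[np.argmax(j)]

pred = score >= cut
tp = np.sum(pred & (y == 1)); fn = np.sum(~pred & (y == 1))
tn = np.sum(~pred & (y == 0)); fp = np.sum(pred & (y == 0))
sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
pnv = tn / (tn + fn)   # "predictive negative value" in the article's terms
print(f"AUC={auc:.3f} cut={cut:.2f} sens={sens:.2f} spec={spec:.2f} "
      f"PPV={ppv:.2f} PNV={pnv:.2f}")
```

The same routine would be run once per index and per outcome (ICU death, in-hospital death), mirroring the stratified analyses reported below.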
Ethical aspects
The study was approved by the Research Ethics Committee of the institution (protocol n. 2.490.677). All patient data were saved and protected in a computer, with access restricted to this study's researchers, ensuring the safety and anonymity of the collected information.
RESULTS
The study population comprised 165 blunt trauma victims admitted to the ICU, with a mean age of 38.5 years and a standard deviation (SD) of 15.4 years. Concerning sex, men were prevalent, representing 81.2% of the total sample. The external cause with the most occurrences was motorcycle accident (33.3%), followed by being run over (27.3%) and falls (20.6%). In 43.0% of cases, the victims reached the emergency unit through basic life support; however, more than half of the victims (53.4%) received care through air (35.8%) or ground (17.6%) advanced life support.

Out of the analyzed sample, 110 patients (66.7%) were submitted to surgical treatment. The mean length of stay of the patients in the ICU was 16.8 days (SD = 33.4) and, in the hospital, 24.6 days (SD = 40.6). A total of 29 patients (17.6%) died in intensive care, and the in-hospital mortality rate was 20.6%; five patients died in the hospital after ICU discharge.

The survival rate estimated by the RTS in the cases ranged from 98.8% to 2.7%, and a survival probability lower than 50% was estimated in 18.7% of the cases. The mean REMS was 4.8 (SD = 3.42); scores ≥ 6 and ≤ 13 were observed in 35.7% of the cases, and scores > 13 in 1.2%. The mean mREMS was 5.1 (SD = 3.7); 41.2% of the victims had scores ≥ 6 and ≤ 13, whereas 1.8% had a score > 13. The mean SAPS III was 48.6 (SD = 17.1), and most victims had a score between 32 and 67.
The AUC/ROC values for ICU mortality of the blunt trauma victims were close to 0.70 for all the analyzed indexes (varying from 0.672 to 0.738), as observed in Figure 1. The results provided in Table 1 show that the AUC/ROC values for ICU mortality were similar to those for in-hospital mortality for all indexes, according to the observed 95%CI. Also, in the comparison of index performance, the 95%CI indicated no statistical difference among the AUC/ROC results at the 0.05 level, as they present overlapping values, according to the results in Table 1.
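One common way to obtain such confidence intervals and check their overlap is a patient-level bootstrap, sketched below on synthetic data; this is an illustration of the comparison logic, not a reproduction of the exact interval estimator used by the authors.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y, score, n_boot=2000, seed=2):
    # Percentile-bootstrap 95%CI for the AUC, resampling patients.
    rng = np.random.default_rng(seed)
    aucs = []
    n = len(y)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if y[idx].min() == y[idx].max():
            continue  # a resample must contain both deaths and survivals
        aucs.append(roc_auc_score(y[idx], score[idx]))
    return np.percentile(aucs, [2.5, 97.5])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 165)                        # 1 = death, 0 = survival
saps3 = y * rng.normal(60, 15, 165) + (1 - y) * rng.normal(45, 15, 165)
rems = y * rng.normal(7, 3, 165) + (1 - y) * rng.normal(4, 3, 165)

lo1, hi1 = bootstrap_auc_ci(y, saps3)
lo2, hi2 = bootstrap_auc_ci(y, rems)
overlap = (lo1 <= hi2) and (lo2 <= hi1)
print(f"SAPS III CI [{lo1:.3f}, {hi1:.3f}], REMS CI [{lo2:.3f}, {hi2:.3f}], "
      f"overlap: {overlap}")
```

Overlapping intervals, as found here for all four indexes, indicate no performance difference at the 0.05 level.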
The SAPS III presented a higher AUC than the other indexes when in-hospital mortality was analyzed, and an AUC similar to that of the mREMS for ICU mortality. However, the SAPS III presented lower sensitivity and PNV than the other indexes for the two analyzed outcomes.

Table 2 shows that, for surgical patients, the RTS, REMS, and mREMS presented their highest AUC/ROC when ICU mortality was analyzed; the SAPS III, however, presented similar values for ICU and in-hospital mortality. In the 95%CI analysis, the other indexes, RTS, REMS, and mREMS, also presented results that did not differ significantly between discharge from the ICU and from the hospital.

The data in Table 2, with the ROC curves of the indexes presented in Figure 2, show an AUC between 0.747 and 0.811 in the prediction of ICU mortality of the trauma victims submitted to surgical treatment. The SAPS III and the mREMS presented the highest AUC, with values above 0.80; however, the lowest values of sensitivity and PNV were found for these indexes. In this analysis, the 95%CI also indicated similarity in the performance of the four indexes. For in-hospital mortality in this group, the SAPS III presented the highest AUC value (0.818), with no significant difference from the other indexes, since the 95%CI overlap. Identical values were observed for in-hospital mortality. No blunt trauma victim without surgical treatment died after ICU discharge; therefore, the unsatisfactory performance of the indexes was equivalent for estimating the ICU and in-hospital mortality of these patients.
DISCUSSION
To analyze the performance of these indexes regarding their capacity to differentiate deaths from survivals in the ICU and in the hospital, the ROC curve was applied, prioritizing the AUC/ROC, given its ability to indicate the overall prognostic capacity of the instruments. The analysis of this statistic in the total cases of the investigation showed a reasonable performance of the instruments for clinical practice, with a more satisfactory prognostic capacity for mREMS and SAPS III in the ICU and for SAPS III in the prediction of in-hospital mortality (AUC/ROC higher than 0.70) (12), although this difference did not reach the statistical significance level established in this study.
With the cut point established by the Youden index, SAPS III presented the worst sensitivity and NPV among the instruments, identifying approximately half of the deaths: 49.3% of ICU deaths and 50.4% of in-hospital deaths (sensitivity values); also, when SAPS III indicated survival, it correctly prognosed only 27.4% of the ICU cases and 31.6% of the in-hospital cases (NPV).
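For readers who wish to reproduce this type of analysis, the sketch below illustrates, on synthetic data, how an ROC curve, its AUC, and a Youden-index cut point (with the resulting sensitivity, specificity, PPV, and NPV) can be computed. The simulated scores, group sizes, and variable names are illustrative assumptions, not the study data.

```python
# Illustrative sketch (synthetic data): ROC/AUC and Youden-index cut point
# for a severity score. Not the study data; scores and outcomes are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
# Simulated severity scores: non-survivors tend to score higher (as for SAPS III).
score_surv = rng.normal(45, 15, size=130)   # hypothetical survivors
score_died = rng.normal(60, 15, size=35)    # hypothetical non-survivors
y_true = np.concatenate([np.zeros(130), np.ones(35)])   # 1 = death
y_score = np.concatenate([score_surv, score_died])

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Youden index J = sensitivity + specificity - 1; the cut point maximises J.
j = tpr - fpr
cut = thresholds[np.argmax(j)]

pred_death = y_score >= cut
tp = np.sum(pred_death & (y_true == 1))
fp = np.sum(pred_death & (y_true == 0))
fn = np.sum(~pred_death & (y_true == 1))
tn = np.sum(~pred_death & (y_true == 0))

sens = tp / (tp + fn)   # proportion of deaths identified
spec = tn / (tn + fp)   # proportion of survivors identified
ppv = tp / (tp + fp)    # correct prognoses among predicted deaths
npv = tn / (tn + fn)    # correct prognoses among predicted survivals
print(f"AUC={auc:.3f} cut={cut:.1f} sens={sens:.3f} spec={spec:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f}")
```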
On the other hand, SAPS III presented high specificity and PPV; it was thus effective at identifying survival in the cases (it indicated 92.9% and 90.9% of ICU and in-hospital survivals, respectively) and, when indicating death, it correctly prognosed most cases (97.1% in the ICU and 95.6% in the hospital).
Differently from the other indexes, RTS presented higher values of sensitivity and NPV than of specificity and PPV; however, when interpreting these results, it is crucial to consider that higher scores of this index indicate a higher probability of survival, whereas for the other instruments higher scores indicate death.
Taking this aspect into account, RTS and SAPS III had a higher capacity to identify survivals than deaths, whereas REMS identified these outcomes in a similar way and mREMS identified the individuals who died better in the hospital than in the ICU (74.8% and 52.9% sensitivity, respectively).
Severity indexes are instruments developed for clinical practice, aiming at evaluating the quality of care provided to patients and at the planning of emergency care (13)(14). In this sense, their capacity to correctly identify individuals with a high probability of dying or surviving is of particular interest. Consequently, the AUC/ROC, which quantifies the general capacity of the indexes to correctly identify death and survival, provided a synthesis of the results of interest for this investigation.
According to the AUC/ROC, the indexes performed better in predicting ICU mortality for the victims submitted to surgical treatment than for those who did not receive such treatment: the AUC/ROC varied between 0.747 and 0.811 for surgical cases and between 0.528 and 0.612 for the other ones. In the surgical cases, SAPS III and mREMS achieved a positive differentiating capacity, with AUC/ROC above 0.80. In non-surgical cases, however, the indexes had an unsatisfactory performance (AUC/ROC < 0.70); in addition, in Figure 3, the REMS curve falls below the reference diagonal of the ROC plot in certain segments. This diagonal line represents the behavior of an index whose scores provide no information on victim prognosis. The REMS curve thus suggests that, for some scores, the index has no differentiating capacity for ICU mortality of trauma victims not submitted to surgical treatment.
Concerning in-hospital death in victims with surgical treatment, results similar to those related to ICU mortality were found and, again, the AUC/ROC values improved when only surgical patients were analyzed.
The AUC/ROC values presented in the original articles of the indexes selected for this investigation were 0.852 (SD = 0.014) for the REMS (9), 0.967 (95%CI: 0.963-0.971) for the mREMS (10), and 0.83 for SAPS III (11). The screening RTS, which includes the values of the Glasgow Coma Scale (GCS), systolic blood pressure (SBP), and respiratory rate (RR) of the victims at the scene of the traumatic event, identified more than 97% of the non-survivors in the investigation presented in its first publication, in the Journal of Trauma in 1989 (8).
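As an illustration of how the RTS combines the coded physiological values, the sketch below implements the coding intervals and weights usually cited for the 1989 revision (8). The cut-offs and coefficients shown here are taken from the commonly published version and should be checked against the original publication before any clinical use.

```python
# Illustrative RTS calculation using the coding intervals and weights
# usually cited for the 1989 revision (8); verify against the original
# publication before any clinical use.
def code_gcs(gcs: int) -> int:
    if gcs >= 13: return 4
    if gcs >= 9:  return 3
    if gcs >= 6:  return 2
    if gcs >= 4:  return 1
    return 0

def code_sbp(sbp: float) -> int:
    if sbp > 89:  return 4
    if sbp >= 76: return 3
    if sbp >= 50: return 2
    if sbp >= 1:  return 1
    return 0

def code_rr(rr: float) -> int:
    if 10 <= rr <= 29: return 4
    if rr > 29: return 3
    if rr >= 6: return 2
    if rr >= 1: return 1
    return 0

def rts(gcs: int, sbp: float, rr: float) -> float:
    # Weighted sum of coded values; higher RTS indicates a higher
    # probability of survival.
    return 0.9368 * code_gcs(gcs) + 0.7326 * code_sbp(sbp) + 0.2908 * code_rr(rr)

print(rts(gcs=15, sbp=120, rr=16))  # ≈ 7.84, the maximum score
```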
In the current research, the AUC/ROC values of the indexes were similar when ICU and in-hospital mortality were analyzed in the total cases and when patients submitted or not to surgery were investigated separately. Thus, the prognostic capacities of RTS, REMS, mREMS, and SAPS III were equivalent. In the literature review, no articles comparing SAPS III with the other indexes analyzed in this investigation were found. Concerning mREMS, a comparison between indexes was found only in the original publication, commented on in this study's introduction.
The REMS was compared with RTS in three studies, in which other indexes were also included among the comparisons (15)(16)(17). In these studies, REMS and RTS had a similar performance; in two of these investigations, the indexes presented an excellent prognostic capacity, with AUC/ROC of 0.91 and 0.9 for REMS and 0.89 and 0.924 for RTS (15,17). Acceptable values, of 0.72 and 0.77 of AUC/ROC, were observed in another study (16).
In the studies comparing SAPS III to other indexes, analyses including SAPS II and APACHE II were frequent. In general, SAPS II presented a better performance than SAPS III when their AUC/ROC were compared, and SAPS III outperformed APACHE II (18)(19)(20)(24)(25)(26).
Given the high in-hospital mortality identified in this study (20.6%), the frequency of cases with REMS and mREMS scores > 13 was small (below 2%); this cut point is considered a warning of a high risk of death (9)(10). Therefore, the scores of these indexes did not reflect the severity of the analyzed cases.
Results more consistent with the observed mortality were found for RTS and SAPS III. Regarding RTS, 18.7% of the cases presented an estimated survival probability of 50% or lower, and thus a high probability of death. The mortality observed in the ICU (17.6%) also converged with the mean SAPS III value of the victims: when converted by the regression equation, the mean score of the index indicates a 15.9% probability of death in the ICU.
This research has as limitations the retrieval of data from medical records and its conduction with information from a single ICU, specialized in the care of trauma victims and located in a hospital that served only patients referred by its own emergency department. The absence of a systematized registry database containing relevant data on trauma victims, as observed in developed countries, made the conduction of this study more difficult and less reliable.
The characteristics of the study site have certainly added specificities to the analyzed cases: 53.4% of the participants arrived at the emergency department by air or ground advanced life support. This high percentage of advanced pre-hospital care indicates that severe clinical conditions were identified in most victims at the scene of the occurrence. Also, the presence of nurses and physicians in these units allowed the early performance of invasive and complex procedures that influence victim survival, such as advanced respiratory procedures and the use of medications related to cardiopulmonary resuscitation (27). These procedures, when started in pre-hospital care, may stabilize the circulatory and ventilatory conditions of the victims and attenuate the severity indicated by RTS, REMS, and mREMS, which use in their calculation the vital signs measured in the emergency department.
CONCLUSION
The prognostic capacity of RTS, REMS, mREMS, and SAPS III was moderate and similar, with no preferential indication of any of these scores for use in clinical practice. Also, the better performance achieved by the indexes when applied to surgical patients suggests that this group of victims will benefit more from the use of these indexes than non-surgical patients. | 2021-06-29T11:14:08.813Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "aa6f574b479915e80630033c70e9919bd5bee24d",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/reeusp/a/qHTKYhRWqNZscgHhBNDk4fF/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2a2ffe2128da3cc03ad57565635bfd11e6eea848",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
249926882 | pes2o/s2orc | v3-fos-license | A stochastic hierarchical model for low grade glioma evolution
A stochastic hierarchical model for the evolution of low grade gliomas is proposed. Starting from a description of cell motion as a piecewise diffusion Markov process (PDifMP) at the cellular level, we derive an equation for the density of the transition probability of this Markov process based on the generalised Fokker-Planck equation. A macroscopic model is then derived via a parabolic limit and Hilbert expansions in the moment equations. After setting up the model, we perform several numerical tests to study the role of the local characteristics and the extended generator of the PDifMP in the process of tumour progression. The main aim is to understand how variations of the jump rate function of this process at the microscopic scale and of the diffusion coefficient at the macroscopic scale relate to the diffusive behaviour of the glioma cells and to the onset of malignancy, i.e., the transition from low-grade to high-grade gliomas.
Introduction
Gliomas are the most common type of primary brain tumours, accounting for 78% of all malignant brain neoplasia [43]. They originate from mutations of the glial cells in the central nervous system and are classified by the World Health Organisation (WHO) into four grades according to the degree of malignancy (see [85] for a more detailed description). In this work, we mainly focus on low grade gliomas (LGGs), a class of rarely curable diseases often resulting in the premature death of the patient. Since in recent years some medical interventions have been shown to improve the median survival time of patients, the study of this class of tumours has become of great importance for clinicians.
The development, growth, and invasion of gliomas in the brain is a very complex phenomenon, involving many interrelated processes over a wide range of spatial and temporal scales. Individual cell behaviours and intracellular dynamics described at the microscopic scale often manifest themselves as functional changes at the cellular and tissue levels. This multiscale nature of glioma evolution therefore requires modelling techniques able to deal with different levels of description.
The first mathematical models for the study of brain tumours started to emerge in the early 1980s (see [26,28,27,80,81] for further details). Since then, the mathematical modelling of glioma evolution has evolved considerably and several different approaches have been proposed, going from discrete or hybrid microscopic models to macroscopic and multiscale frameworks. Discrete models at the microscopic scale, also called agent-based models, have been used to describe the dynamics of individual cells moving on a lattice (for some examples we refer the reader to [84,58,45], or, specifically, to [4] for cellular automata models and [39] for cellular Potts models). Further, stochastic discrete models for cell motion have also been proposed, e.g., describing 2D persistent random walks or 3D anomalous diffusion [29,57,5,72]. In particular, recently in [72], the authors presented an analysis of 3D cell tracking data, based on a persistent random walk model adapted to the context of glioma cell migration. At the macroscopic scale, several phenomenological models for glioma evolution stated in the form of reaction-diffusion-advection equations have been proposed and studied [78,44,80,77], also including patient-specific data (e.g., in the form of diffusion tensor imaging (DTI) information). This has allowed for a comparison between the real and the virtual tumour evolution [51,53,17,60]. Concerning multiscale models, a broad and rich literature has been developed for the integration of microscopic and macroscopic dynamics (for some examples see [47,49,7,66,32,33,55,52,34]). In particular, in [32], a more detailed description of the migration process of individual cells, involving the dynamics of cell receptors and the interaction with the tumour microenvironment, is discussed.
A key aspect of modelling tumour evolution concerns cell movement, which is based on a combination of complex processes involving motility and migration: motility refers to the random movement from one location to another, while migration also involves the interactions between cells and the microenvironment [59].
The first description of particle movement using a stochastic Markov process, combining deterministic ordinary differential equations (ODEs) for the continuous motion with Poisson-like jumps for the random change of direction, was introduced in 1974 by Stroock [75] on the basis of the biological observations illustrated in [1]. The concept of piecewise deterministic Markov processes (PDMPs) was introduced in 1984 in [24]. An extension of [24] was then provided in [13,15], where the authors developed the extended generator and the differential formula for piecewise diffusion Markov processes (PDifMPs), showing that all the classes of proposed stochastic hybrid processes can be seen as special cases of their concept of a general stochastic hybrid system (GSHS). Further, in [10], a general class of continuous-time stochastic hybrid systems in which the continuous flow is the solution flow of a stochastic differential equation (SDE) was presented. These processes have been widely applied in different contexts, e.g., interacting particle systems [11], air traffic management [14], or gene networks [61], and especially in biological modelling (for some examples, see [82,36,18,67,41,68,22,70]). However, it seems that the use of PDifMPs in the context of tumour growth, motility, and migration has not yet been investigated. In this article we extend the description of cell movement based on velocity jump processes with the use of PDifMPs in the context of glioma progression. In particular, we build a multiscale model, starting with a contact-mediated description of cell motion at the microscopic scale using PDifMPs. We use the extended generator of such processes to derive a generalised Fokker-Planck equation, including the description of the tumour-microenvironment interactions. The solution of this equation provides the joint density of the transition probabilities of this Markov process for all the involved variables. As the variables involved in these interactions are fast-acting compared to the macroscopic scale, we make use of a scale separation variable and the Hilbert expansion method to derive the corresponding macroscopic scale equation for the time and space variables (for a more general discussion of multiscale modelling and moment closure techniques, we refer the reader to [8,54,50]).
The paper is organised as follows. Section 2 contains a brief introduction to PDifMPs. In Section 3, we derive a stochastic multiscale model for glioma progression. Numerical simulations in a 2D scenario for the resulting macroscopic equation for the tumour cell density are presented in Section 4, including several studies on the effect of parameter variations. Finally, in Section 5, we review our results and discuss further directions of research.
Definition and notation
In this section, we provide a brief introduction to PDifMPs and the construction of their paths. We refer the reader to [15] and [61] for a general description of stochastic hybrid systems.
For the couple of non-exploding processes (St, Vt), we assume that the first stochastic component (St)t∈[0,T] possesses continuous paths in E1 and that the second component (Vt)t∈[0,T] is a jump process with right-continuous paths and piecewise constant values in V. The times (Ti)i∈N at which the second component jumps form a sequence of randomly distributed grid points in [0, T]. The motion of the PDifMP (Ut)t∈[0,T] on (E, B(E)) is defined by its characteristic triple (φ, λ, Q) as follows:

• φ : [0, T] × E → E1 is the stochastic flow of the continuous first component of (Ut)t∈[0,T]. Starting at T0 = 0 with initial value u0 = (s0, v0) ∈ E, the process φ(t, u) represents the solution of a sequence of SDEs over the consecutive intervals [Ti, Ti+1) of random length. At each random point Ti ∈ [0, T], i ≥ 1, there are newly updated initial values ui = (si, vi) ∈ E, where si serves as the initial value and vi as a parameter in the following SDE defined on the interval [Ti, Ti+1):

dSt = b(St, vi) dt + σ(St, vi) dWt, with S_{Ti} = si. (1)

At the end point Ti+1 of each interval, si+1 is set to the current value of φ( · , ui) to ensure the continuity of the path. Further, a new value vi+1 is chosen as a fixed parameter for the next interval according to the jump mechanism described below. Here, b is a function with values in R^{d1}, representing a family of drift coefficients, and σ is a d1 × m matrix with real coefficients.
Assumption 2.1. We assume that b : E → R^{d1} and σ : E → R^{d1×m} are linearly bounded and globally Lipschitz continuous for all s ∈ E1.
For any vi ∈ V, this assumption ensures the existence and uniqueness of the solution to (1) (see Theorem 5.2.1 in [62]). Moreover, the stochastic flow satisfies the semi-group property, i.e., φ(t + r, u) = φ(t, (φ(r, u), v)) for all t, r ≥ 0 and u = (s, v) ∈ E. The remaining two local characteristics are:

• λ : E → R+, the jump rate, which determines the frequency at which the second component of (Ut)t∈[0,T] jumps;

• Q : E × B(E) → [0, 1], the transition kernel, which determines the new values of the second component after a jump occurs. For all u ∈ E, it satisfies Q(u, {u}) = 0, meaning that the process cannot have a no-move jump.
Moreover, for all t ∈ [Ti, T], i ≥ 0, we define the survival function of the inter-jump times as

S(t, ui) = exp( −∫_{Ti}^{t} λ(φ(r − Ti, ui)) dr ). (2)

This function states that there is no jump in the time interval [Ti, t), conditional on the process being in the initial state ui. Let U be a uniformly distributed random variable on [0, 1]; the generalised inverse of S,

ζ(U, ui) = inf{ t ≥ Ti : S(t, ui) ≤ U },

then yields the next jump time.

Assumption 2.2. Let λ : E → R+ be a measurable function such that, for all ui ∈ E and T > 0, ∫_{0}^{T} λ(φ(r, ui)) dr < ∞. Moreover, there exists a measurable function ψ : [0, 1] × E → E such that, for ui ∈ E and A ∈ B(E), P(ψ(U, ui) ∈ A) = Q(ui, A); ψ represents the generalised inverse function of Q. For fixed t, ψ(U(ω), Ut(ω)) is a random variable describing the post-jump location of the second component of Ut.
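Since S is non-increasing in t, the next jump time can be obtained numerically by inverting the survival function. A minimal sketch of this inverse-transform step is given below; the placeholder jump-rate values and the trapezoidal quadrature are illustrative assumptions, with λ(φ(s, u)) standing for the rate evaluated along the flow.

```python
# Minimal sketch: sample the next inter-jump time by inverting the survival
# function S(t, u) = exp(-cumulative hazard). The rate values along the flow
# are placeholders (assumptions); in the model they come from the SDE/ODE system.
import numpy as np

def next_jump_time(lambda_along_flow, t_grid, rng):
    """lambda_along_flow: array of the jump rate evaluated on t_grid."""
    # Cumulative hazard via trapezoidal quadrature.
    hazard = np.concatenate([[0.0], np.cumsum(
        0.5 * (lambda_along_flow[1:] + lambda_along_flow[:-1]) * np.diff(t_grid))])
    survival = np.exp(-hazard)
    u = rng.uniform()
    # Generalised inverse: first t with S(t, u0) <= u (survival is non-increasing).
    idx = np.searchsorted(-survival, -u)
    return t_grid[min(idx, len(t_grid) - 1)]

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 10.0, 1001)
lam = 0.8 * np.ones_like(t_grid)   # constant rate: exponential waiting times
print(next_jump_time(lam, t_grid, rng))
```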
Assumption 2.3. For all A ∈ B(E), Q( · , A) is measurable, while for all u ∈ Ē the function Q(u, · ) is a probability measure.
Summarising, the first component of the triple (φ, λ, Q) describes the continuous evolution of the trajectories of the process (Ut)t∈[0,T] between jumps, in time intervals defined by the survival function S, while the couple (λ, Q) yields the jump mechanism. All three components of (φ, λ, Q) are coupled.
Construction
From the local characteristics (φ, λ, Q), it is possible to iteratively construct the sample path Ut as follows. Let (Un)n≥1 be a sequence of iid random variables with uniform distribution on [0, 1] and let u0 = (s0, v0) ∈ E be the initial value of (1) at T0 = 0, such that u0 can be either an F0-measurable random variable (independent of the Wiener process) or a deterministic constant, for some ω ∈ Ω. We apply the survival function S(t, u0) defined in (2) and use its generalised inverse ζ with the first element U1 to determine T1 = ζ(U1, u0), i.e., the first jump time of the second component of Ut. We then define the sample path Ut up to the first jump time as

Ut = φ(t, u0), t ∈ [0, T1).

The trajectory of Ut follows the stochastic flow φ given in (1), starting from U0 = u0, until a first jump occurs at the random time t = T1. The post-jump state U_{T1} is determined through the measurable function ψ. For all A ∈ B(E), the distribution of ψ(U2, u0) is given by the kernel Q evaluated at the pre-jump state, P(ψ(U2, u0) ∈ A) = Q((φ(τ1, u0), v0), A), where τ1 is the waiting time until the first jump occurs, i.e., τ1 = T1.
Restarting the process from the post-jump location U_{T1}, we define the next waiting time before a jump occurs from the survival function (2). In this way, we find the next jump time T2 = T1 + τ2.
Consequently, the state of the process in the interval [T1, T2) is given by

Ut = φ(t − T1, U_{T1}), t ∈ [T1, T2).

We proceed recursively to obtain a sequence of jump times (Ti)i≥1, such that the generic sample path of Ut, for t ∈ [Ti, Ti+1), is defined accordingly by Ut = φ(t − Ti, U_{Ti}). The number of jump times that occur between 0 and t is denoted by Nt = Σ_{i≥1} 1_{{Ti ≤ t}}.

Assumption 2.4. For all t > 0 and for every starting point ui ∈ E, E[Nt | u = ui] < ∞.
This assumption ensures the non-explosion of the process Ut. Under Assumptions 2.1-2.4, the piecewise diffusion process can be constructed as a strong càdlàg Markov process (see [15] for further details), called a piecewise diffusion Markov process (PDifMP).
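To make the recursive construction concrete, the following sketch simulates a toy PDifMP in one dimension: Euler-Maruyama integration of the SDE between jumps, jump times drawn by thinning against a dominating rate, and a post-jump kernel that excludes no-move jumps. All coefficients, the velocity space, and the rate function are illustrative placeholders, not the glioma model of Section 3.

```python
# Toy PDifMP in 1D (illustrative coefficients, not the glioma model):
# drift depends on a piecewise-constant mode v, jump times are drawn by
# thinning, and the post-jump kernel excludes the current mode (Q(u,{u}) = 0).
import numpy as np

rng = np.random.default_rng(1)
V = np.array([-1.0, 1.0])            # finite mode space (assumption)
lam0, lam1 = 0.8, 0.5
lam_max = lam0 + lam1                # dominating rate used for thinning

def lam(s, v):                       # state-dependent jump rate <= lam_max
    return lam0 + lam1 * np.tanh(abs(s))

def simulate(T=10.0, dt=1e-3, s0=0.0):
    s, v, t = s0, rng.choice(V), 0.0
    path = [(t, s, v)]
    while t < T:
        # Euler-Maruyama step of ds = v dt + 0.3 dW between jumps.
        s += v * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()
        t += dt
        # Thinning: propose from lam_max, accept with probability lam/lam_max.
        if rng.uniform() < lam_max * dt and rng.uniform() < lam(s, v) / lam_max:
            v = rng.choice(V[V != v])
        path.append((t, s, v))
    return np.array(path)

print(simulate()[-1])                # final (time, position, mode)
```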
Extended generator of the PDifMP
The notion of infinitesimal generator is an extremely important tool for the study of Markov processes [9,24]. In the following, we adopt the definition in [61,15] and, for the reader's convenience, we recall the theorem that fully characterises the extended generator (see [9] and references therein for further details about the difference between extended and classical generators).
Theorem 2.1. Let Ut be a PDifMP with characteristics (φ, λ, Q), and let the domain D(A) of the extended generator A consist of all bounded, measurable functions f on E ∪ ∂E satisfying certain regularity and integrability conditions specified in [15]. Then, for f ∈ D(A) and u = (s, v) ∈ E, the extended generator A is given by

A f(u) = A_dif f(u) + λ(u) ∫_E ( f(u′) − f(u) ) Q(u, du′),

with

A_dif f(u) = ∇s f(s, v) · b(s, v) + (1/2) Tr( σ(s, v) σ^T(s, v) ∇s ∇s^T f(s, v) ),

for s = (s1, . . . , s_{d1}). Here, ∇s f(s, v) · b(s, v) is the inner product in R^{d1}, σ^T is the transpose matrix of σ, and ∇s^T is the transpose operator of ∇s. We refer to [15] for the definition of L1^{loc}(p) and for the proof of this theorem.
Generalised Fokker-Planck equation
The adjoint of the generator is used to derive the generalised Fokker-Planck equation describing the time evolution of the probability distribution g(t, s, v) of the process. The equation is given by

∂g/∂t (t, s, v) = A* g(t, s, v),

where the adjoint of the diffusion part reads

A*_dif g = −∇s · ( b(s, v) g ) + (1/2) Σ_{i,j} ∂²/(∂si ∂sj) [ (σ σ^T)_{ij}(s, v) g ].

We refer to [6,40] for further details on the derivation of Fokker-Planck equations for general Markov processes.
Application to tumour modelling
Gliomas can be considered as dynamical ecosystems where cells undergo constant changes due to many cellular processes, e.g., migration, proliferation, death, or the creation of new blood vessels [79,3]. We focus on the process of cell movement, which is responsible for the global diffusive features that characterise glioma evolution. Cell movement can be divided into motility and migration. Motility refers to the random or spontaneous motion of cells from one location to another, while cell migration involves many interconnected biological aspects, such as the environmental cues driving it. Thus, methods that take into consideration the stochastic nature of this phenomenon (i.e., motility) while accounting for the environmental cues influencing it (i.e., migration) are important for providing a more complete understanding of the entire process. Following [52,50], we model the process of cell movement under the influence of subcellular scale interactions, considering the effects of the amount of bound receptors located on the cell membrane. Specifically, we consider the role of integrins in these dynamics [23,30]. Referring to cell migration, we take into account the alignment of the tissue as a cue enhancing the efficiency of cell invasion [83,25], as cells tend to attach to the fibers and crawl along them, a phenomenon referred to as contact guidance. However, since the direction that cells decide to follow remains random, there is a need for a stochastic description of the motility component.
Inspired by particle movement models [75,63,64], we propose piecewise diffusion Markov processes for the modelling of cell movement. In the context of persistent random motion, the continuous stochastic component of the PDifMP describes the contact guidance phenomenon, while its second component describes the random motility dependent on the velocity jump process. This approach makes it possible to describe the cellular migratory response to environmental signals while keeping the random aspect of cell motility. Moreover, it also allows us to show how several well-established methods proposed in the literature (e.g., see [63,64,50]) can be cast into a rigorous PDMP framework.
Interactions between cells and microenvironment
In order to migrate through the complex brain structure, glioma cells must adapt quickly to the physical characteristics of the environment. Their interactions with the extracellular matrix (ECM) [37] are mediated by the binding between integrins and the ECM fibrillar proteins. These bindings allow the cells to exert the forces necessary for migration [69,30]. As these processes happen at a subcellular level, we describe the mechanism behind cell motion by modelling the dynamics of the receptors on the tumour cell membrane.
Let y(t) ∈ (0, 1) be the concentration of bound integrins and let us assume that the binding between integrins and tissue occurs in areas of highly aligned fibers [32]. The binding process can be described by the general reversible reaction between free receptors and tissue, with attachment rate k+ and detachment rate k−, where R0 defines the total number of cell surface receptors and Q(x) the macroscopic volume fraction of tissue (including ECM and brain fibers), depending on the position x ∈ X ⊂ R³ [34,32]. Within this framework, denoting by x = x0 + vt, we look at the path of a single cell moving from an initial position x0 with velocity v ∈ V ⊂ R³. Here, V = αS² is the closed set of cell velocities, where S² denotes the unit sphere in R³ and α the mean speed of a tumour cell, which is assumed to be constant. Since we are interested in the interactions between cell surface receptors and the ECM, and this binding process takes place for fixed position x, we ignore any type of randomness resulting from the velocity change. The mass action kinetics for the concentration y(t) is governed by the following ODE:

ẏ(t) = k+ Q(x) (R0 − y(t)) − k− y(t). (10)

Since the integrin dynamics are much faster than the macroscopic time scale phenomena, we assume that they equilibrate rapidly [33,50,19]. Thus, after rescaling y/R0 → y, we consider the unique steady state y* of (10), given by

y*(x) = k+ Q(x) / ( k+ Q(x) + k− ),

and we define a new internal variable z := (y − y*) ∈ Z = (y* − 1, y*) ⊂ R, which measures the deviation of y from its steady state [32,50]. Considering the piecewise location of a single cell x = x0 + vt through the density field Q(x), z satisfies

ż = −( k+ Q(x) + k− ) z − B(Q(x)) v · ∇xQ(x), (11)

where B(Q) = k+ k− / ( k+ Q + k− )². The internal variable z is bounded as long as ∇xQ(x) is bounded, and its sign depends on the current orientation of the cell w.r.t. the gradient of Q(x).
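A small numerical sketch of these subcellular kinetics, in the rescaled form ẏ = k+ Q (1 − y) − k− y along a straight cell path, is given below. The functional form of Q, the parameter values, and the explicit Euler step are illustrative assumptions consistent with the references cited above.

```python
# Sketch of the rescaled integrin-binding kinetics along a straight cell path
# x(t) = x0 + v t: dy/dt = k_plus * Q(x) * (1 - y) - k_minus * y.
# Functional form of Q and all parameter values are illustrative assumptions.
import numpy as np

k_plus, k_minus = 0.1, 0.05
v = 0.02                                   # cell speed along the path

def Q(x):                                  # smooth tissue volume fraction in (0,1)
    return 0.5 + 0.4 * np.tanh(x)

def y_star(x):                             # steady state of the rescaled kinetics
    return k_plus * Q(x) / (k_plus * Q(x) + k_minus)

# Explicit Euler integration of y(t); z(t) = y(t) - y*(x(t)) measures the
# deviation from the (position-dependent) steady state.
dt, T, x0 = 0.01, 200.0, -1.0
t = np.arange(0.0, T, dt)
y = np.empty_like(t); y[0] = 0.0
for i in range(1, len(t)):
    x = x0 + v * t[i - 1]
    y[i] = y[i - 1] + dt * (k_plus * Q(x) * (1 - y[i - 1]) - k_minus * y[i - 1])
z = y - y_star(x0 + v * t)
print(z[0], z[-1])   # z relaxes towards zero on the fast receptor time scale
```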
PDifMP description for glioma cell movement
To model cell movement under the influence of external signals, we assume that the sample path of an individual cell starting in position x0 and moving in a certain direction due to contact guidance for a random period of time is given by

dXt = Vt dt + σ dWt, (12)

where the second term on the r.h.s. represents the stochastic variability in the velocity, with σ ∈ R being the diffusion coefficient and Wt the standard Wiener process. During the movement, due for instance to collisions with other cells in its surroundings [56,65], a cell stops for a negligible duration and reorients its path [56]. This causes the cell to adopt a new velocity and continue migrating in the new direction until another obstacle is encountered. To describe this process, we rely on the introduced PDifMP framework. We set E1 = X × Z ⊂ R⁴ and denote by St := (Xt, Zt) the continuous component describing cell motion. Its evolution is characterised through the SDE (12) for cell motility and the ODE (11) for the interactions with the microenvironment. Both processes are affected by spontaneous velocity changes induced by the jump process Vt. We then denote by E = E1 × V the state space of the piecewise process Ut = (Xt, Zt, Vt) for cell motility and migration and by φ : [0, T] × E → E1 the solution to the coupled system (11)-(12).
As the duration of reorientation is negligible, we describe the direction of a cell at a given instant. Moreover, under the additional assumption that the motion is Markovian in the state space, we state that cell direction is described by an inhomogeneous Poisson-like process [38], whose intensity depends on time, position on the scaled sphere V, and internal state. Thus, the cell reorientation rate, corresponding to the jump rate function λ : [0, T] × E → R+ of the stochastic process Ut, depends on the integrin state z. This means that the binding process is seen as the onset of reorientation. In particular, following [76], we assume that, if many integrins are bound, cells tend to change direction frequently in order to escape the densely packed areas, resulting in an increased rate λ. Thus, following [33,34], we set λ(Ut) := (λ0 − λ1 zt) ≥ 0, with λ0 and λ1 positive constants. In particular, λ0 refers to the basal turning frequency of an individual cell [74], accounting for the "spontaneous" cell motility, while the term λ1 z represents the variation of the turning rate in response to environmental signals.
Following the construction described in Section 2.2 with initial state u0 = (x0, z0, v0), we use the jump rate function λ in (2) to determine the duration of movement before any reorientation occurs. Moreover, since the velocity jump process vt is of Markovian type, cells retain no memory of their velocities before the reorientation. Thus, we define the Markov transition kernel Q, determining the post-velocity-jump state of the process Ut, using a kernel density K(x, v, v′) for the newly chosen velocity v′ which, by the Markov property, does not depend on the pre-jump velocity, i.e., K(x, v, v′) = K(x, v′).

Definition 3.1. Let ν be the standard Lebesgue measure on (V, V) and K : X × V → [0, ∞] be a measurable function with respect to the σ-algebra X ⊗ V such that ∫_V K(x, v) ν(dv) = 1. Then, the mapping

Q(u, A) = ∫_A K(x, v) ν(dv), A ∈ V,

defines a Markov transition kernel over V, where ν(dv) = dv.
Denoting by q(x, v̂) the fiber distribution function over V, with v̂ = v/|v| ∈ S², and by ω = ∫_V q(x, v̂) dv a scaling constant [49,48], we assume that the dominant directional cue leading cell migration is given by the fiber network. Thus, the transition probability kernel is given by

K(x, v) = q(x, v̂)/ω. (14)

For the fiber distribution function q(x, v̂), different expressions can be found in the literature, such as the von Mises-Fisher distribution, the peanut distribution function, or the orientation distribution function (ODF) [66,2]. A comparison among these distributions has been proposed in [20], in both 1D and 2D scenarios. We rely on this analysis and choose the ODF for describing q(x, v̂), i.e., we set

q(x, v̂) = 1 / ( 4π |D(x)|^{1/2} ( v̂^T D(x)^{−1} v̂ )^{3/2} ). (15)

Here, v̂ stands for the fiber direction, x for the spatial position within the brain, while D is the diffusion tensor taking into account information about the water diffusivity in the brain [20]. We also assume that fibers are not polarised, i.e., q(x, v̂) = q(x, −v̂) for all v̂ ∈ S². It is straightforward to verify that q is a probability distribution on S² [33,32,34]. From (2), it is possible to construct the sequence of jump times (Tn)n≥1, with Tn = τ1 + · · · + τn for all n ≥ 1 (and T0 = 0 by convention), such that the process Ut describing cellular movement is piecewise constructed on each interval [Ti, Ti+1), i = 1, . . . , n, via the characteristics (φ, λ, Q), where φ is the flow of the coupled system (11)-(12), λ(Ut) = λ0 − λ1 zt, and Q is given by (14); we collectively label these characteristics (16). Here, zt is the solution of (11) and vt is piecewise constant over each interval of random length Ti+1 − Ti. As proven in [15], this construction leads to a càdlàg strong Markov process describing cell motion in an anisotropic environment. In summary, the overall microscopic system (17) for the contact-mediated movement of glioma cells consists of (11)-(12) together with the jump mechanism (λ, Q). Its solution is a triple (Xt, Zt, Vt); hereafter we write (xt, zt, vt), as we refer to the sample path of Ut.
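The sketch below shows one way to evaluate a tensor-based ODF of the form q(v̂) ∝ (v̂ᵀD⁻¹v̂)^(−3/2) and to draw a new direction from it by rejection sampling against the uniform distribution on the sphere. The example tensor D and the sampling scheme are illustrative assumptions, not the patient-specific DTI pipeline used later in the paper.

```python
# Sketch: evaluate an ODF-type fiber distribution q(v) ∝ (vᵀ D⁻¹ v)^(-3/2)
# for unit vectors v, and sample a post-jump direction by rejection sampling.
# The example tensor D is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
D = np.diag([1.5, 0.5, 0.5])              # anisotropic water-diffusion tensor
D_inv = np.linalg.inv(D)

def q_unnorm(v_hat):
    return float(v_hat @ D_inv @ v_hat) ** (-1.5)

def sample_direction(n_max=10_000):
    # Envelope: uniform directions on S^2; the bound of q is attained along
    # the leading eigenvector of D (smallest value of vᵀ D⁻¹ v).
    q_max = min(np.linalg.eigvalsh(D_inv)) ** (-1.5)
    for _ in range(n_max):
        v = rng.standard_normal(3)
        v /= np.linalg.norm(v)            # uniform point on the unit sphere
        if rng.uniform() < q_unnorm(v) / q_max:
            return v
    raise RuntimeError("rejection sampling failed")

print(sample_direction())                  # biased towards the fiber axis e1
```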
Derivation of the mesoscopic equation and its macroscopic limit
We rely on the definition of the extended generator of (Ut)t∈[0,T] given in Section 2.3 to obtain a mesoscopic equation describing the evolution of the joint probability density function of all microscopic variables. Specifically, for all test functions f ∈ D(A), the extended generator A of the above-defined process Ut reads

A f(x, z, v) = v · ∇x f + (σ²/2) Δx f + G(x, z, v) ∂z f + λ(z) ∫_V ( f(x, z, v′) − f(x, z, v) ) Q(u, dv′), (18)

where G(x, z, v) denotes the right-hand side of (11) and λ and Q are given in (16). Notice that the integral term in (18) is defined over V, as the transition kernel Q has a density defined on V.
Let g(t, x, z, v) be the joint pdf of the microscopic variables at time t ∈ [0, T], position x ∈ X, internal state z ∈ Z, and velocity v ∈ V. In this context, we refer to g as the glioma density function. Following the analysis of Section 2.4 and using the adjoint operator A*, the generalised Fokker-Planck equation for the evolution of g(t, x, z, v) reads

∂t g + v · ∇x g + ∂z ( G(x, z, v) g ) = (σ²/2) Δx g + L[λ] g, (20)

where, from (14), the turning operator reads [63,46,66,33]

L[λ] g = −λ(z) g + λ(z) ( q(x, v̂)/ω ) ∫_V g(t, x, z, v′) dv′.

Remark 3.1. Note that, for σ = 0, (20) coincides with the kinetic transport equation derived in [33,50,32]. This means that the PDMP resulting from setting σ = 0 in (20) is the formally defined mathematical model underlying the description in [33,50,32].
We introduce the notations

Eq(x) = ∫_{S²} v̂ q(x, v̂) dv̂, Vq(x) = ∫_{S²} ( v̂ − Eq )( v̂ − Eq )^T q(x, v̂) dv̂,

for the mean fiber orientation and the variance-covariance matrix of the fiber orientation distribution, respectively. Notice that the symmetry of the fiber distribution implies Eq = 0.
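These orientation moments can be approximated numerically, e.g. by Monte Carlo integration over the unit sphere as in the sketch below; the tensor D and the ODF form are the same illustrative assumptions as in the previous sketch.

```python
# Sketch: Monte Carlo approximation of the mean fiber orientation E_q and
# the variance-covariance matrix V_q of a tensor-based ODF (illustrative D).
import numpy as np

rng = np.random.default_rng(3)
D_inv = np.linalg.inv(np.diag([1.5, 0.5, 0.5]))

v = rng.standard_normal((200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)        # uniform points on S^2
w = np.einsum("ij,jk,ik->i", v, D_inv, v) ** (-1.5)  # unnormalised q(v)
w /= w.sum()                                          # quadrature weights

E_q = (w[:, None] * v).sum(axis=0)                    # ≈ 0 by symmetry
V_q = np.einsum("i,ij,ik->jk", w, v - E_q, v - E_q)   # anisotropic covariance
print(np.round(E_q, 3)); print(np.round(V_q, 3))
```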
Following [33,50], we model proliferation as an effect of cell-tissue interactions via integrin binding. Here, M(t, x) denotes the macroscopic cell density, that is, the marginal distribution of g(t, x, z, v) over all possible velocities and internal states, i.e.,

M(t, x) = ∫_V ∫_Z g(t, x, z, v) dz dv.

Moreover, μ(M) is the growth function and the kernel X(x, z, z′) is a probability density in the second variable z, characterising the transition from z′ to z during the proliferation process at position x. For X we only assume that the proliferation operator P(g) is uniformly bounded in the L²-norm, a reasonable biological condition related to the space-imposed limits on cell division. Adding this operator to (20) yields the evolution equation (23) for g(t, x, z, v). Due to the high dimensionality of (23), numerical simulations of this equation would be too expensive. Moreover, clinicians are more interested in the macroscopic dynamics of the tumour than in the lower scale interactions. Thus, we derive the macroscopic equation for the evolution of the tumour density, based on the moments of g with respect to v and z. Notice that we do not consider higher order moments of g with respect to z, as the subcellular dynamics are much faster than the events taking place on the other scales, so that the deviation z stays close to zero. Dropping the (t, x) notation for simplicity, we write the corresponding moment equations and, following [32,33], consider a parabolic scaling of these equations, setting x → εx and t → ε²t for the space and time variables, respectively. In particular, we scale the growth rate function μ(M) with ε², as it accounts for slower dynamics. We then apply the Hilbert expansion method [31,50], expanding the moments of g in powers of ε. By equating the same powers of ε in the scaled moment equations, we derive the equation for the leading order coefficient M0 of the Hilbert expansion of M. With classical scaling arguments (see [32] for more details), we obtain M0^z = m0^z = 0 and m0 = (q(x, v̂)/ω) M0. On account of that, and using the symmetry assumption Eq = 0, we obtain M1^z = 0 and, following the analysis in [63,32], M1 = 0. Replacing these into the second-order equations and integrating over V, we obtain the evolution equation for M0, in which one term carries the information about the influence of the subcellular dynamics, D_T(x) denotes the macroscopic tumour diffusion tensor (35), and the tumour drift velocity is identified. In view of the results obtained in [32], the ε-correction terms for M can be left out and, after ignoring the higher order terms and discarding subscripts, we obtain the evolution equation (37) characterising the macroscopic glioma density. Using the theory of monotone operators for nonlinear parabolic equations and following the approach in [73,71], it is possible to prove the existence, uniqueness and non-negativity of the solution of the corresponding parabolic problem (38) with homogeneous Neumann boundary conditions. We refer the reader to Appendix A.1 for more details about the necessary assumptions on the operators and for an outline of the proof of the well-posedness of the macroscopic problem (38).
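For orientation, a compact form of the resulting macroscopic problem, consistent with the scaling discussed above and with the cited references, can be written as below. Since the displayed equations were not recoverable from the source, the precise tensor expressions are stated here as an assumption (note that the statement in Section 4, that the overall diffusion coefficient is proportional to 1/λ0 + σ²/2, is consistent with this form).

```latex
% Hedged reconstruction of the macroscopic problem (assumed form):
\partial_t M \;=\; \nabla\nabla : \bigl(D_T(x)\,M\bigr)
              \;-\; \nabla\!\cdot\!\bigl(a(x)\,M\bigr)
              \;+\; \mu(M)\,M ,
\qquad
D_T(x) \;\approx\; \frac{\alpha^2}{\lambda_0}\int_{\mathbb{S}^2}
        \hat v\,\hat v^{\mathsf T}\, q(x,\hat v)\,d\hat v
        \;+\; \frac{\sigma^2}{2}\, I ,
```

with a(x) the λ1-dependent tumour drift velocity and homogeneous Neumann boundary conditions on the brain slice.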
Numerical simulations
We perform 2D simulations of the macroscopic equation (37) for the tumour cells to study the impact of both the subcellular dynamics and the stochastic parameter σ on the overall tumour evolution. To this aim, we first specify the parameters and coefficient functions involved in the equation. Concerning the tumour diffusion tensor D_T(x) in (35), we compute it numerically using the orientation distribution function given in (15), where D(x) represents the water diffusion tensor obtained from processing (patient-specific) DTI data. Taking advantage of this DTI information, for the macroscopic tissue density Q(x) we assume the expression

Q(x) = FA(D(x)), (39)

where FA refers to the fractional anisotropy of the tissue; we refer to [32] for its definition. This choice is motivated by the fact that the fractional anisotropy represents a measure of fiber alignment and, since in this setting fiber alignment guides cell migration, it is reasonable to assume that the function Q(x) takes higher values where the tissue is more anisotropic. Following several previous works (e.g., see [33,20]), for the growth rate μ(M) we employ a logistic growth term defined as

μ(M) = μ0 ( 1 − M/K_M ),

with μ0 the constant growth coefficient and K_M the tumour carrying capacity. Finally, we report in Table 1 the ranges of the constant parameter values involved in the macroscopic setting (37). The values for the stochastic parameter σ are proposed based on the ranges of the other parameters. We present 2D numerical simulations performed with a self-developed code in Matlab (MathWorks Inc., Natick, MA). The computational domain is a horizontal brain slice reconstructed from MRI scans. The DTI dataset used to compute D_T(x) was acquired at the Hospital Galdakao-Usansolo (Galdakao, Spain) and approved by its Ethics Committee; all the methods employed were in accordance with approved guidelines. A Galerkin finite element scheme is used for the spatial discretisation, together with an implicit Euler scheme for the time discretisation. For the initial condition, we consider a Gaussian-like aggregate of tumour cells centered at (x0, y0) = (−35, −41), situated in the left-bottom part of the brain slice. To be specific,

M0(x, y) = exp( −((x − x0)² + (y − y0)²)/8 ).

Moreover, Figure 2 shows the initial tissue density estimated with (39). In particular, yellow areas refer to regions where the fibers are highly aligned and, thus, the value of FA(D(x)) is closer to one, while black-red areas refer to more isotropic regions, where the fibers are randomly distributed [32].
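As a minimal reproducibility aid, the sketch below integrates a strongly simplified, isotropic surrogate of the macroscopic equation (constant scalar diffusion plus logistic growth) with explicit finite differences on a square grid. This is not the anisotropic Galerkin FEM/implicit Euler scheme used in the paper, and all numerical values are placeholders.

```python
# Simplified 2D surrogate of the macroscopic model: constant scalar diffusion
# plus logistic growth, explicit finite differences. Placeholder for the
# anisotropic FEM/implicit-Euler scheme actually used in the paper.
import numpy as np

n, h, dt = 128, 1.0, 0.2
d_eff, mu0, K_M = 0.2, 0.05, 1.0           # illustrative parameter values
assert dt <= h**2 / (4 * d_eff)            # explicit-scheme stability bound

x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
M = np.exp(-((X - 40) ** 2 + (Y - 40) ** 2) / 8)   # Gaussian-like aggregate

def laplacian(u):
    # 5-point stencil with homogeneous Neumann BCs via edge padding.
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u) / h**2

for _ in range(5000):
    M += dt * (d_eff * laplacian(M) + mu0 * M * (1 - M / K_M))
print(M.max(), M.sum() * h * h)            # density saturates towards K_M
```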
We present different sets of simulations to gain insight into several features characterising the proposed approach. In detail:

(A) we consider the model for σ = 0 and evaluate the effects of the variation of λ1 and λ0 on tumour evolution;

(B) we fix the values of λ0 and λ1 and assess the effects of the variation of σ on tumour evolution, i.e., the role of the stochastic parameter in the overall dynamics;

(C) we consider different combinations of λ1 and σ and show how their respective effects combine;

(D) following the approach proposed in [12], we discuss the effects of λ0, λ1, and σ on the estimation of the onset of the malignant transformation from low grade to high grade gliomas.
Starting with numerical test (A), we analyse the effects of varying λ1 (experiment A.1) and λ0 (experiment A.2). These experiments are motivated by the fact that obtaining a clear biological estimate for λ0 and, especially, for λ1 is quite difficult; understanding the impact of their variation thus becomes a fundamental point to address. As described in Section 3.1.2, λ0 refers to the basal turning frequency of an individual cell, while λ1 takes into account the role of the receptor dynamics in the evolution. Recalling the expression of the turning rate λ(z), we can describe the constant parameters λ0 and λ1 as the weights of the receptor-independent and receptor-dependent cell turning, respectively. Starting with the analysis of the parameter λ1, and in line with some studies concerning the effects of its variability on tumour evolution [50], we consider the range λ1 ∈ [−5, 5] (s−1) and assess the effects of changes in both its sign and its modulus. Considering that the turning rate λ(z) = λ0 − λ1 z has to be non-negative, we should ensure that λ0 ≥ λ1 z, meaning that

• if λ1 ≥ 0, the non-negativity is ensured for λ0 ≥ λ1/2;

• if λ1 ≤ 0, the non-negativity is ensured for λ0 ≥ −λ1.
Thus, to obtain reasonable values of the turning rate, we should assume |λ1| ≤ λ0. Although we are aware that negative values of these parameters are not supported by biological observations, we also include them in our analysis because we want to assess the sensitivity of our results to these parameter changes. In Figure 3, we first show the evolution of the tumour density over time in the limit case λ0 = λ1 = 0.8 (s−1).

Figure 3: Numerical simulations of equation (37) with the parameters listed in Table 1 and for λ0 = λ1 = 0.8 (s−1). The tumour evolution is shown after 200, 400, and 600 days.
We notice how cell spreading is highly influenced by the underlying fiber structure. Cells clearly tend to move along preferential directions, determined by the fiber bundles, and this gives rise to a heterogeneous tumour mass with an irregular shape, which is a common characteristic of this kind of brain tumour.
Referring to the tumour situation at the last time step, i.e., after 600 days, we compare the tumour evolution for different values of the parameter λ1 ∈ [−5, 5] (s−1), as described in experiment A.1. Results are shown in Figure 4. The main effect of varying λ1 consists in a greater or lower level of heterogeneity in the distribution of the tumour cells inside the tumour mass. The external border of the neoplasia, in fact, does not seem to be particularly affected, while the internal dissemination of the cells shows evident changes when λ1 varies from large negative to large positive values. In particular, clear differences with respect to the case λ1 = 0 can be observed for quite large values of the parameter (|λ1| > 1), while the evolution is qualitatively similar in the cases |λ1| < 1. Such differences can be better observed in Figure 5, where the differences between the solution of system (37) for λ1 = 0 (s−1) and the solutions for the other values of λ1 used in Figure 4 are shown. The impact of varying λ1 can be immediately grasped: there is a clear difference in the spreading inside the tumour mass and in the cell response to the anisotropy of the brain tissue. The impact becomes stronger as λ1 increases in modulus, especially for |λ1| > 1. In this case, in fact, the haptotactic component of the dynamics is stronger (in an attracting or repelling way, depending on the sign of λ1) and, thus, the heterogeneity of the underlying brain tissue has a larger impact on the dynamics. The mechanism that drives cell migration along the tissue structure can be visualised in detail in Figure 6, where the leading eigenvector of the tensor D_T(x) (related to the fiber direction) is plotted together with the differences in the tumour density at 600 days for λ1 = 5 (s−1) and λ1 = −5 (s−1). Recalling the expression given for the tissue density (39), from the left plot of Figure 6 we notice that, where the fibers are strongly aligned (e.g., along the central vertical bundle), we obtain negative values of the difference M_{λ1=−5} − M_{λ1=0}. Here, in fact, the gradient of the tissue density Q driving the haptotactic movement is larger and, due to the negative value of λ1, cells tend to avoid this area, moving away from it. Conversely, looking at the right plot of Figure 6, we obtain exactly the reverse behaviour: the positive value of λ1 leads to a much stronger haptotactic movement towards these fiber bundles, and the difference shows positive values in the same regions described above.
We then test the effect of varying the parameter λ0, as described in experiment A.2. Results of this test are shown in Figure 7, where the differences between the solution of (37) for λ0 = 0.8 (s−1) and the solutions obtained for λ0 varying in the interval [0.25, 5] (s−1) are illustrated. We observe two different trends for λ0 ≥ 0.8 and λ0 ≤ 0.8. Smaller values of the parameter lead to a larger spreading of the tumour cells with respect to the case λ0 = 0.8, while larger values lead to a reduced invasion of the tumour mass. In fact, smaller values of λ0 mean a reduced random turning of the cells, thus a greater persistence in their migration, which macroscopically translates into a larger spread. Instead, larger values of λ0 imply a larger frequency of cell turning and, thus, a macroscopically lower degree of persistence and spread in the tissue. In particular, the main difference is in the region of the outer rim of the neoplasia.

Figure 4: Experiment (A.1). Numerical simulations of equation (37) with the parameters listed in Table 1, λ0 = 0.8 (s−1), and different values of λ1 (s−1). The tumour evolution is shown after 600 days.

Concerning numerical test (B), we fix λ0 = λ1 = 0.8 (s−1) and vary the value of the parameter σ, which describes the variability of the cell velocity in the microscopic model (12) and, thus, drives the additional diffusion term appearing in the macroscopic model (37). Results of the simulations for σ ∈ [0.01, 0.2] (mm²·s−1) are shown in Figure 8. As expected from equation (37), the effect of the parameter σ consists in a larger spread of the tumour cells inside the brain tissue. In particular, the larger the value of σ, the stronger the diffusion phenomenon characterising the glioma cells appears.

Figure 5: Details of experiment (A.1). Differences between the solution of system (37) with λ1 = 0 and the solutions obtained for λ1 varying in the interval [−5, 5] (s−1). Results are shown after 600 days. Here λ0 = 0.8 (s−1), while the remaining parameters are taken from Table 1.
For large values of σ, we observe more regular tumour borders and a more isotropic cell migration, because the additional diffusion term does not depend on the diffusion tensor (35). These features can be better appreciated in Figure 9, where the differences between the solution of equation (37) for σ = 0 and for σ ≠ 0 are shown. This figure clearly depicts an extensive and more homogeneous diffusion of the tumour mass for large values of σ. We obtain, in fact, negative values of the differences only in areas inside the tumour core (due to the balance between a faster spread and the same cell proliferation rate), and positive differences in the areas around the tumour border. In particular, comparing the first rows of Figures 9 and 7, we notice that increasing σ has an effect similar to decreasing λ0, i.e., a larger tumour spread in the area of the tumour outer rim. It is interesting to observe how the same macroscopic cell behaviour is obtained from two different microscopic processes.
In fact, increasing σ allows for a stronger effect of the stochastic component related to the variation of cell velocity, while decreasing λ0 reduces the random turning of the cells and determines a greater persistence in their direction of migration.
Referring to test (C), we analyse the interplay between the effects of the parameters λ1 and σ.

Figure 6: Details of experiment (A.1). Differences between the solution of system (37) with λ1 = 0 and the ones obtained for λ1 = −5 (s−1) (left plot) and λ1 = 5 (s−1) (right plot) after 600 days. The differences are plotted against the fiber direction.

This is still an effect of the higher value of λ1, here set at λ1 = 5 (s−1). Finally, the combination of low values of both σ and λ1 used for scenario (C.3) determines a smoother internal distribution of the tumour cells as well as a reduced cell spread in the healthy tissue.
In the last test (D), we discuss the onset of the malignant transformation from low grade glioma (LGG) to high grade glioma (HGG) in relation to possible variations of the parameters λ0, λ1, and σ.
LGGs are usually slowly growing, infiltrative tumours with a very unpredictable clinical course. Most LGG patients face transformation of their tumour into a higher grade one, with a worse prognosis. This process is known as malignant transformation and is usually defined on the basis of contrast enhancement on MRI scans or histopathological evidence. In line with the approach proposed in [12], we estimate the time instant τOSM of the onset of the malignant transformation of cells into a more aggressive high grade tumour. The main aim of experiment (D) is to show that our approach is able to replicate the same qualitative behaviours as [12] (where a comparison with patient data is proposed), but with a more detailed and precise description of the microscopic processes related to cell migration. Specifically, τOSM is defined as the first time instant at which the LGG cell density becomes greater than a certain threshold Mcrit, which we set to 0.6 KM [12]. We run several numerical tests varying one parameter at a time and estimating the resulting time of onset of malignancy. Table 2 collects the results of these experiments. We observe that the parameter λ1 does not seem to have an evident impact on the time of onset of malignancy: τOSM varies by only ±28 days. Instead, both λ0 and σ strongly affect the estimation of τOSM. In Figure 11, the estimated values of τOSM with respect to λ0 and σ are plotted together with the corresponding interpolating curves, showing the trends of τOSM = τOSM(λ0) (left plot of Figure 11) and τOSM = τOSM(σ) (right plot of Figure 11). Increasing the value of λ0 leads to a reduction of the time τOSM at which the LGG turns into a HGG, while increasing σ has the reverse effect, i.e., it leads to an increase of τOSM. The parameter λ0 is, in fact, related to the tumour responsiveness to the tissue structure, and large values of this parameter refer to a loss of responsiveness, which is a common characteristic of HGG. Moreover, observing that the overall diffusion coefficient of the tumour cells in equation (37) is proportional to 1/λ0 + σ²/2, increasing σ (or, equivalently, decreasing λ0) corresponds to an increase of this diffusion coefficient. Thus, comparing these results with the ones shown in [12] (e.g., see Figure 7 therein), we notice a good qualitative agreement between them and a similar behaviour for the evolution of τOSM. We would like to remark that this is only a first possible approximation for the estimation of τOSM, and we are aware that several other factors are involved in the definition of the transformation from LGG to HGG, apart from the increase in tumour density. The tumour density values certainly have an evident impact on the definition of τOSM; however, from a mathematical point of view, it is difficult to provide a formal definition for it. Thus, as a first attempt, we rely on the definition given in [12] for τOSM, leaving its possible extensions to future works.
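The malignancy-onset criterion can be extracted from any simulated density history with a few lines of post-processing, e.g. as in the sketch below; the density trajectory here is synthetic, standing in for the maximum of the simulated field M at each saved time step.

```python
# Sketch: estimate the onset of malignant transformation tau_OSM as the
# first saved time at which the peak tumour density exceeds 0.6 * K_M.
# The density history below is synthetic, standing in for simulation output.
import numpy as np

K_M = 1.0
M_crit = 0.6 * K_M
t_days = np.arange(0, 1500, 10)
M_max = 1.0 / (1.0 + 99.0 * np.exp(-0.005 * t_days))   # synthetic logistic rise

above = np.nonzero(M_max > M_crit)[0]
tau_OSM = t_days[above[0]] if above.size else None
print(tau_OSM)   # first time (in days) at which M exceeds the threshold
```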
Discussion
To the best of our knowledge, this is the first hierarchical stochastic model in which piecewise diffusion Markov processes are used to describe glioma cell motion within a multiscale framework. We start with the description of glioma cell movement at the microscopic scale using a PDifMP, which combines a stochastic model for cell motility with a deterministic one for cell migration; the latter describes the response of glioma cells to external and environmental cues. The extended generator of the formulated PDifMP takes the form of an integro-differential equation in all the involved variables, and its solution yields the density of the transition probability of the Markov process. Using scaling arguments, we then obtain the equation describing the evolution of the tumour density at the macroscopic level. In this way, our approach allows us to take into account the macroscopic level properties as well as the features characterising the microscopic processes. Using numerical simulations of the macroscopic setting, we analyse the role and influence of both the parameters involved in the jump rate function λ of the PDifMP and the parameter σ related to the stochastic variability of the cell velocity. In particular, we observe how the parameter λ0 at the microscopic scale promotes a major spreading of the tumour mass inside the brain tissue, regardless of the specific brain structure, while λ1 relates to the cell responsiveness to the guided movement along the brain fibers. The fully detailed formulation of glioma cell motion with the PDifMP allows us to observe that the jump rate function determines the distribution of the waiting times of the process in a particular state. Thus, for a constant jump rate (λ = λ0) there is no influence of the microenvironment on the motion, and a larger frequency of cell turning determines, at the macroscopic scale, a reduced migration along the fibers. Instead, including the term λ1 z results in an increase of reorientations in response to the brain structure and, thus, in a visible heterogeneity inside the tumour bulk. A particularly interesting result is obtained by comparing the numerical experiments A.2 and B. In fact, we show how a similar macroscopic behaviour, namely a large cell spreading around the outer rim of the tumour, can result from two different sources at the microscopic level: either from increasing the value of σ, and thus the diffusion of the cells, or from reducing the value of λ0, and thus the random cell reorientations, resulting in a higher cell persistence.
With respect to well-known multiscale models of this type [32,34,33], in the present work we also include a further novel aspect concerning the transition to malignancy of the tumour mass. In particular, accepting the hypothesis that the loss of responsiveness of glioma cells to the tissue structure can be seen as a sign of the transition from LGG to HGG, we numerically show that the time at which this transition happens can be estimated with our approach and that it is highly influenced by the parameters σ and λ0. The obtained results are fully in line with the ones presented in [12], confirming the reliability of the proposed approach.
With our work, we aim at emphasising how the use of PDMPs or PDifMPs for the description of the phenomena driving cell movement is of paramount importance for rigorously modelling the cellular scale processes. An interesting next step would be a numerical comparison of the cell behaviours at the different scales (microscopic and macroscopic) under either the deterministic or the stochastic formulation. Moreover, in the present work, glioma cell motion is described in relation to the binding with the tissue, but the proposed approach can be extended to incorporate other biologically relevant aspects of tumour progression. For instance, following [21], the influence of microenvironmental acidosis on glioma cell migration and the consequent pH-repellent chemotactic process could be considered. This could be done by assuming different expressions for the jump rate function of the PDifMP, e.g., allowing its dependence on different interactions between cells and the microenvironment, or relating it to the tumour response to treatments. Another interesting direction for future development concerns the modification of the jump process using stochastic differential equations to model not only jumps in the cell velocity, but also jumps in the position, trying to recover the typical feature of tumour recurrence in regions of the brain different from, and possibly quite far from, the original tumour location. Finally, here we propose a first possible way to analyse the transition to malignancy. However, as stated in the section above, this process is much more complex, and we are working towards an interdisciplinary study in which an extension of our approach could be used to shed light on the intricate biological processes underlying this transition.
Figure 8: Experiment (B). Numerical simulations of equation (37) with the parameters listed in Table 1 and for different values of σ. The tumour evolution is shown after 600 days. Values of σ are expressed in mm²·s−1. The figures referring to the cases σ = 0.15 and σ = 0.2 are shown on a less zoomed region to better assess the tumour invasion in the tissue.
Figure 9: Details of experiment (B). Differences between the solution of system (37) for σ = 0 and the one obtained for σ ∈ [0.01, 0.2] (mm² · s⁻¹). Results are shown after 600 days. The remaining parameters are taken from Table 1. The figures referring to the cases σ = 0.15 and σ = 0.2 are shown on a less zoomed region to better observe the tumour invasion in the tissue.
Figure 11: Experiment (D). Estimation of the time of onset of malignant transformation τ_OSM for different values of the turning rate λ_0 and the free parameter σ. The remaining parameters are taken as in Figure 3.
Table 2: Estimations of the onsets of malignant transformation τ_OSM for different values of λ_1, σ, and λ_0. | 2022-06-23T06:43:27.684Z | 2022-06-20T00:00:00.000 | {
"year": 2023,
"sha1": "88a7a9ea36c04a2f2b8a4f22d33d02962b019463",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00285-023-01909-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "88a7a9ea36c04a2f2b8a4f22d33d02962b019463",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
196520470 | pes2o/s2orc | v3-fos-license | Random blood glucose on admission as prognostic factor for assessment of severity of acute myocardial infarction
Background: Assessment of risk factors or prognostic markers is essential to determine the adverse outcomes related to acute myocardial infarction (AMI). The aim of the present study was to examine the role of random blood glucose as a prognostic marker for assessment of the severity of AMI. Methods: This prospective study was conducted on 79 patients with onset symptoms of AMI. All patients, both diabetic and non-diabetic, underwent serum blood glucose estimation in the hospital. The primary endpoint of the study was all-cause mortality up to day 90 of follow-up. The secondary endpoints were a composite of death, reinfarction and heart failure up to day 90. Results: The mean age was 55.9 years. Males (86%) outnumbered females (14%). The mean BMI was 22.3±2.83. The mean random blood glucose in the study population was 138±92.9 mg/dl (7.7±5.15 mmol/l). Of the total 79 patients, 5 were diabetic, of whom 2 (40%) died; the mortality rate was thus higher in diabetics than in non-diabetics. Among the 79 patients, 16 died during the 3 months following the qualifying event, 7 had heart failure and 4 had reinfarction. Conclusions: In patients with AMI, hyperglycemia should be considered an important prognostic marker for determining adverse cardiovascular events.
INTRODUCTION
The role of hyperglycemia in the occurrence of cardiovascular events in acute myocardial infarction (AMI) is still a major concern. 1 Hyperglycemia can also be seen in non-diabetic patients when hormonal control of the blood glucose level is disturbed by any acute or major illness such as AMI. 2 There are two hypotheses in the literature to explain the relation between hyperglycemia and AMI. The first is that hyperglycemia in patients with AMI is induced by activation of adrenergic receptors. The second is that an increase in blood glucose in patients with AMI is a marker of pre-existing carbohydrate metabolism disorders. 3 A previous study by Meier et al showed an association between high blood glucose levels in AMI patients and an increased overall mortality rate. 4 Changes in random blood glucose levels caused by adrenergic stimulation are implicated in recurrent MI and can initiate the development of ventricular arrhythmias. Furthermore, hypoglycemia inhibits metabolic processes in myocardial cells and causes apoptosis in cardiomyocytes, leading to MI. The increased release of anti-insulin hormones also results in the development of hyperglycemia, especially in patients with diabetes mellitus. 5 Hence, it is important to estimate blood glucose levels in patients with AMI, regardless of diabetic status, during the in-hospital period, and hyperglycemia should be considered an important metabolic marker of adverse outcomes.
The present study aimed to evaluate the role of random blood glucose on hospital admission for the assessment of the severity of acute myocardial infarction.
METHODS
This prospective cohort study was conducted on 189 patients admitted to the emergency department of Maharaja Yeshwant Rao Hospital, Indore, with suspected acute coronary syndrome, during the period from November 2004 to July 2005. All patients were screened thoroughly, and 79 patients with evolving STEMI within 12 hours of symptom onset were included in the present study. The diagnostic criteria for acute myocardial infarction (AMI) were typical ischemic chest pain/symptoms of ≥30 minutes' duration, or onset of symptoms within 12 hours of presentation. Patients with advanced neoplastic or concomitant life-threatening disease that might limit life expectancy to less than 3 months, patients with psychiatric illness, those under legal custody, patients who refused to give consent and patients with anticipated poor compliance with follow-up were excluded from the study.
The study protocol was approved by the institutional ethics committee. Written informed consent was obtained from all patients for participation in the study. If a patient was in severe pain or not fully conscious, consent was taken from close relatives. Complete details of patient history were collected in a predesigned format. A blood sample was withdrawn from all patients for laboratory investigations. Serum blood glucose was estimated by the standard glucose oxidase-peroxidase (GOD-POD) method. Standard 12-lead electrocardiography was performed for all patients. All patients were also evaluated for major conventional risk factors, including hypertension, smoking, and previous cardiovascular events such as prior MI and stroke. Patients received the usual line of management as per the guidelines set by the treating unit.
Statistical analysis
All collected data were analysed using SPSS version 10 software. Continuous data are expressed as mean±SD. Comparisons between two groups were performed using the unpaired t-test for continuous variables, and the Chi-square test was used to compare categorical (non-continuous) variables. A p value <0.05 was considered significant.
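As a hedged illustration of these tests (not the authors' actual SPSS workflow), the same comparisons can be reproduced with SciPy; the glucose values below are placeholders, while the contingency table uses the counts reported in this study (5 diabetics of whom 2 died, 74 non-diabetics of whom 14 died):

```python
import numpy as np
from scipy import stats

# placeholder data: random blood glucose (mmol/l) in survivors vs non-survivors
survivors     = np.array([6.8, 7.2, 5.9, 8.1, 7.0, 6.5])
non_survivors = np.array([9.5, 11.2, 8.8, 12.4])

# unpaired (independent-samples) t-test for a continuous variable
t_stat, p_val = stats.ttest_ind(survivors, non_survivors)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Chi-square test for a categorical variable: diabetic status vs mortality
table = np.array([[2, 3],      # diabetic:     died / survived
                  [14, 60]])   # non-diabetic: died / survived
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```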
RESULTS
Demographic and clinical characteristics of the study population are given in Table 1. A total of 79 patients were included in the study. The mean age was 55.9 years. Males (86%) outnumbered females (14%). BMI ranged from 16.5 to 28.1, with a mean of 22.3±2.83. The mean Killip class for all patients at presentation to the hospital was 1.4±0.8. The mean random blood glucose in the study population was 138±92.9 mg/dl (7.7±5.15 mmol/l). A previous history of hypertension was present in 17 (22%) patients, of myocardial infarction in 11 (14%) patients, and of CVA in 7 (9%) patients. As shown in Table 3, the secondary outcome events reported in our study were a composite of death, reinfarction and heart failure up to day 90. Out of 79 patients, 27 either had reinfarction or heart failure, or died, by day 90 after the qualifying event. Of them, 16 died, 7 had heart failure and 4 had reinfarction. No events of severe recurrent ischemia, stroke or major hemorrhage were observed. Age and Killip class significantly affected the occurrence of the events, with p values of less than 0.05. Random blood glucose, which was a significant factor in determining mortality, was of lesser significance for the secondary outcome events (p = 0.141).
DISCUSSION
Elevated blood glucose levels at admission during acute illness are common and are associated with poor outcomes in acute cardiopulmonary events such as AMI, heart failure, and stroke. [6][7][8] In recent years, the association between hyperglycemia and the outcome of acutely ill patients has received considerable attention because of the potential benefits and risks of tight glycaemic control. 9 Considering postprandial blood glucose levels helps in determining the prognosis of patients with MI with respect to the development of adverse cardiovascular events. 10,11 Many studies have suggested that postprandial blood glucose levels are more important for the risk assessment of cardiovascular complications than fasting blood glucose levels. 12 In the current study, the prevalence of diabetes in AMI patients was 6.3%. This rate was comparatively lower than previous findings, which suggested a 14.3-40.9% prevalence of diabetes mellitus in patients with MI, depending on national population characteristics. 13,14
Irrespective of diabetic status, hyperglycemia is considered a major predictor of survival and of an increased risk of development of cardiovascular events in AMI patients. 3 Our study showed that, at baseline, the mean random blood glucose level was higher in non-survivors of AMI (9.91±8.1 mmol/l) than in survivors (7.09±4.04 mmol/l). These findings confirm that patients with hyperglycemia had a significantly higher mortality rate (p=0.05). Our findings are consistent with those of Capes et al. 8 They found that patients without diabetes who had glucose concentrations at or above the 6.1-8.0 mmol/L range had a 3.9-fold (95% CI 2.9-5.4) higher risk of death than patients without diabetes who had lower glucose concentrations. Similarly, in the present study, the mortality rate in diabetic patients was higher (40%) compared to non-diabetics (19%), which is comparatively higher than the findings of Karetnikova et al. 1 In their study, the mortality rate was 14.82% in diabetic patients and 10% in non-diabetics (p>0.05).
In our study, out of 79 patients, recurrent MI was seen in 4 patients during the follow-up period. This was lower than in the study of Karetnikova et al. 1 In their study, 8 patients with diabetes mellitus and 10 patients without diabetes mellitus had recurrent MI during the one-year follow-up period.
CONCLUSION
In conclusion, the findings suggest that hyperglycemia at the time of admission in patients with AMI was significantly associated with a risk of mortality. Hence, future studies should focus on the development of therapeutic strategies addressing the association between hyperglycemia and AMI. Further studies are needed to determine whether glucose-lowering treatments could improve outcomes in hyperglycemic patients with AMI. | 2019-07-15T22:29:38.692Z | 2019-06-28T00:00:00.000 | {
"year": 2019,
"sha1": "d29673377dc8bb8b0c409087a35451c9c4c8e92d",
"oa_license": null,
"oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/6261/4857",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7f0ed56e250b705b74c68a3a744ffdc42a159afc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233629390 | pes2o/s2orc | v3-fos-license | Unfolding jellyfish bloom dynamics along the Mediterranean basin by transnational citizen science initiatives
Science is addressing global societal challenges and, due to limitations in research financing, scientists are turning to the public at large to jointly tackle specific environmental issues. Citizens are therefore increasingly involved in monitoring programs, appointed as citizen scientists with the potential to deliver key data at nearly no cost to address environmental challenges, thereby fostering scientific knowledge and advising policy- and decision-makers. One of the first and most successful examples of marine citizen science in the Mediterranean is represented by the integrative and collaborative implementation of several jellyfish-spotting campaigns in Italy, Spain, Malta and Tunisia, started in 2009. Altogether, in terms of time coverage, geographic extent, and number of citizen records, these represent the most effective marine citizen science campaigns so far implemented in the Mediterranean Sea. Here we analyzed a collective database merging records across the above four countries, featuring more than 100,000 records containing almost 25,000 observations of jellyfish specimens, collected over a period of 3 to 7 years (from 2009 to 2015) by citizen scientists participating in any of the national citizen science programs included in this analysis. Such a wide citizen science exercise proves to be one of the most valuable and cost-effective tools available so far for understanding the ecological drivers of jellyfish proliferation over the Western and Central Mediterranean basins, and a powerful contribution to developing tailored adaptation and management strategies, mitigating jellyfish impacts on human activities in coastal zones, and supporting the implementation of marine spatial planning, Blue Growth and conservation strategies.
Introduction
Jellyfish have been acknowledged as a natural component of marine ecosystems since antiquity, and the fossil record shows that gelatinous blooms were already occurring several hundred million years ago [1][2]. There is evidence to suggest that jellyfish blooms may occur following decadal cycles [3][4], presenting interannual fluctuations over climate-related cycles [5][6][7][8][9][10], and that their blooms may be driven by physical forcing in the sea [11]. In past years, evidence has also suggested that jellyfish blooms are fostered in many coastal marine ecosystems around the world by increasing human-related pressures, such as overfishing, eutrophication, climate change, habitat modification ("ocean sprawl") and species translocation (reviewed in [7,[12][13][14][15]).
At a global scale, there is a paucity of historical datasets covering large temporal and spatial scales, such that it is not possible to conclude whether a global increase of gelatinous zooplankton has occurred or not [4,[16][17]. However, some coastal marine ecosystems may have been undergoing a long-term increase in jellyfish biomass in recent decades (e.g., the Mediterranean Sea: [4,16,[18][19][20]). This lack of data on gelatinous zooplankton is partly due to the fact that jellyfish were not targeted species in fisheries or oceanographic research, as they were poorly known by scientists and considered ecologically unimportant. Another reason behind such data deficiency is that the fragility of most gelatinous species and their patchy distribution make conventional plankton nets inadequate for sampling this taxon: gelatinous organisms may be damaged beyond recognition, thereby critically limiting the resulting information on species composition, abundance and distribution [21][22]. Therefore, recent surveys and research campaigns on jellyfish have also relied on non-traditional data collection methods, including citizen science, to assess patterns of jellyfish diversity, abundance, seasonality, and distribution (e.g. shoreline surveys and stranding observations, aerial surveys, or interviews with stakeholders, among others; [22][23][24][25][26][27]).
In recent years, citizen science has expanded the scope of data collection on the presence/absence of jellyfish, either in open waters or stranded along coastal areas [28][29][30][31][32]. Long-term and broad-scale data are necessary to determine jellyfish population dynamics, and citizen science is considered one of the best methods to contribute beyond the local scale. Citizen science programs are, in fact, generally cost-effective and allow the establishment of monitoring programs at broader geographic scales and for longer periods [33][34][35][36]. Moreover, citizen science has the potential to increase biodiversity awareness and contribute to the understanding of the spatial and temporal distribution of species, especially when their biogeographical ranges are broad [37][38]. Jellyfish blooms, in turn, can have detrimental effects on marine ecosystem services and cause important economic losses within the affected human activities and enterprises [14,42,[45][46][47][48][49].
As a result, the development of adaptation and management strategies to address jellyfish blooms is imperative, in order to mitigate the costs and effects of a massive jellyfish presence within coastal areas [49][50]. Management of jellyfish blooms needs to address various disparate aspects, including information and education campaigns, citizen science-based programs, systems for the removal/recovery of jellyfish in extreme cases of infestation, short-term beach closures, the use of temporary anti-jellyfish nets, training of lifeguard personnel and provision of medical aid in case of stings, monitoring of stranded jellyfish, and different methodological alternatives for forecasting the occurrence of future blooms [50][51][52][53]. In fact, a forecasting capacity can become a great management tool in the form of an "early warning" system [54][55][56]. This allows coastal managers to identify high-risk days and to be aware of the possible arrival and occurrence of jellyfish blooms in coastal areas, enabling them to take action and mitigate the potential negative impact of jellyfish [53]. This capacity is especially important in coastal areas afflicted by life-threatening species, where reliable forecasting capabilities can counsel the closure of beaches in order to prevent stings during high-risk days [47].
Within this context, the present work focused on compiling and analyzing the jellyfish abundance data originating from integrated, collaborative, citizen science-based programs conducted in four Mediterranean countries (Italy, Malta, Spain and Tunisia), in order to understand the dynamics of jellyfish blooms at a sub-basin scale and to find tools to mitigate the impact of jellyfish blooms in coastal areas. To achieve this objective, a thorough analysis of the four corresponding databases was conducted in order to standardize the data and effort. In addition, the spatial and temporal dynamics and prevalence of the targeted jellyfish species were characterized through a habitat-modeling approach, which was applied to the coastlines of the monitored countries. This was done in order to understand the operating environmental forcing and to predict high-probability areas for massive beach stranding of jellyfish. The jellyfish bloom forecasting system output is presented as a prevention and mitigation tool for citizens and coastal stakeholders, aiming to reduce the socio-economic impact of jellyfish blooms in coastal areas through a feasible and powerful management strategy.
Study area and jellyfish citizen science-based programs
The study was conducted along the coastlines of the central and western Mediterranean within the framework of the ENPI CBC MED MED-JELLYRISK Project, where participatory (citizen science-based) data on the occurrence of jellyfish species were gathered in four countries: Italy, Malta, Spain and Tunisia (Figure 1).
Figure 1. Map of the four countries involved in the study (Italy, Malta, Spain and Tunisia), where the citizen science jellyfish data were gathered.
To ensure a high degree of accuracy and quality of the data being collected, from the early stages onwards, species recognition tools such as identification guides and posters (Figure S1) were mass-distributed within the MED-JELLYRISK project to all salient coastal stakeholders.
Training sessions were also held in Spain and Tunisia for coastal lifeguard services, and the information was also included within different digital platforms that facilitated access in the field, including mobile apps (Focus Meteo Meduse App and iMedJelly App) and social networks such as Facebook (https://m.facebook.com/Spot-the-Jellyfish, https://m.facebook.com/Meduses.Tunisie/, @meteomeduse in Italy). All relevant information about the species most likely to be encountered along the targeted Mediterranean coastlines (at least 29 species) was included, i.e.: a photo or illustration, the most recognizable characteristics, the frequency and season of appearance, and the stinging potential. These materials were updated whenever necessary, for instance through the inclusion of additional gelatinous species and the updating of existing information following the release of additional knowledge on the subject. A compilation of all this information was included in an identification guide (available online at: http://193.188.45.233/jellyfish/docs/Englishguide.pdf) and in a jellyfish sting treatment booklet (available online at: http://193.188.45.233/jellyfish/docs/firstaid.pdf), specifically developed within the framework of the MED-JELLYRISK project. The devised data collection protocols were simple and consistent within the four targeted coastal regions, with participants being asked to submit jellyfish records containing basic attributes through the on-line (mail, Facebook, app) citizen science report submission form: species observed, abundance (quantitative and categorical scales), and date, time and location of the observation.
Once the citizen science reports were gathered, experts from the different institutions involved in the project validated all data. Data were filtered to eliminate error and bias (e.g. erroneous reports, duplicates and records with missing information were excluded). The emerging databases were then treated to standardize the sampling effort, the abundance data (by adopting common categories ranging from 0 to 3), the days of observation and the sampling sites, in order to be able to compare the data among the four citizen science programs.
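A minimal pandas sketch of this filtering step, assuming an illustrative record schema (the projects' actual database fields are not reproduced here):

```python
import pandas as pd

# illustrative raw citizen science reports; column names are assumptions
raw = pd.DataFrame({
    "species": ["Pelagia noctiluca", "Pelagia noctiluca", None],
    "date":    ["2012-07-01", "2012-07-01", "2012-07-02"],
    "beach":   ["A", "A", "B"],
    "abundance_category": [2, 2, 1],
})

# exclude records with missing information, then duplicate submissions
clean = (raw
         .dropna(subset=["species", "date", "beach"])
         .drop_duplicates())
print(clean)
```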
In general, citizen science-based data followed two different monitoring approaches. The first approach, followed by Italy and Malta, reported only presence records of jellyfish. Data from Italy were gathered through a citizen science-based program called "MeteoMeduse", coordinated by the University of Salento in Lecce. It covers the shallow and deep waters of the entire Italian coast: the Adriatic, Ionian, Tyrrhenian and Ligurian seas (Boero 2013). The data analyzed covered the period from 2009 to 2014 and were validated by experts from the University of Salento. Data from Malta originated from a citizen science-based program launched in 2010 called "Spot the Jellyfish", run by the International Ocean Institute (IOI) and the University of Malta, and included coastal sightings made between 2011 and 2014 across the entire Maltese archipelago. Experts from the University of Malta validated the data. For these two countries, the total number of reports equals the number of positive (presence) reports. The 0 (zero) values representing the absence of jellyfish within coastal areas were not considered for the Italian and Maltese citizen science report databases.
Conversely, the second citizen science-based monitoring approach, followed by Spain and Tunisia, gathered information on both the presence and absence of jellyfish, so the 0 (zero) values representing the absence of jellyfish were also considered in these two databases. This information was available due to the higher effort invested by the public in surveying the respective coastal areas. In order to evaluate medium-term arrivals of jellyfish along the Spanish Catalan coast, people (mainly from rescue services and from coastal municipalities) were trained to implement a citizen science-based monitoring program that has been carried out since 2007. The main objective was to evaluate the presence of jellyfish along the Catalan coast from May to September, from 2007 until 2013, through a daily sampling scheme involving a total of 243 beaches. Data from Tunisia for the 2013-2015 period originated from a citizen science campaign established during the project. Experts from the Faculty of Science of Bizerte validated the corresponding data.
To compare abundance data across the different countries, data were grouped into three distinctive categories following Canepa et al. [30]: low (1) as < 10 individuals per beach, medium (2) as < 1 individual m⁻², and high (3) as > 1 individual m⁻². A comparison among countries in terms of the temporal frequency of jellyfish sightings was achieved through an intensity index. This index incorporates the number of positive reports (reports with jellyfish presence in any of the three abundance categories) over the total number of reports and the number of surveyed beaches, as follows:

Intensity index = (Positive Reports × Total Reports⁻¹ × Beach surveys⁻¹) × 100

Higher intensity index values represent conditions of high prevalence of jellyfish in particular coastal areas, since the index values do not only consider the positive records over the total records, but also incorporate the monitoring effort invested, by including the number of beaches surveyed.
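The category mapping and the intensity index can be written compactly as follows (a sketch with hypothetical function names and example figures; only the two formulas themselves come from the text):

```python
def abundance_category(count_per_beach: int, density_per_m2: float) -> int:
    """0-3 abundance scale following Canepa et al. [30], as defined above."""
    if count_per_beach == 0:
        return 0                               # absence
    if count_per_beach < 10:
        return 1                               # low: < 10 individuals per beach
    return 2 if density_per_m2 < 1 else 3      # medium (< 1 m^-2) / high (> 1 m^-2)

def intensity_index(positive_reports: int, total_reports: int,
                    beach_surveys: int) -> float:
    """(Positive Reports / Total Reports / Beach surveys) x 100, as above."""
    return positive_reports / total_reports / beach_surveys * 100

# illustrative Malta-like figures giving a value close to the reported 3.57
print(round(intensity_index(500, 700, 20), 2))   # -> 3.57
```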
Predictor (environmental) data
The oceanographic variables (i.e., SST, salinity, chlorophyll, nitrate, phosphate, oxygen and temperature) were obtained through the European Copernicus Marine Service (CMEMS - http://marine.copernicus.eu/). All the oceanographic variables were downloaded as separate datasets, with a resolution of six degrees of latitude, and were uploaded to the server on a weekly basis. The motuclient framework (developed by Copernicus as a tool for tapping into their database) was used to recall the data when necessary. For each oceanographic variable, the point nearest to a reported stranded jellyfish coordinate was extracted from the dataset to obtain the value inputted into the machine-learning algorithm (see next section). Following Benedetti-Cecchi et al. [57], the distance from the nearest marine canyon was included as a predictor in the models, as a "slope index", calculated as the distance (in km) to the nearest 1000 m depth isobath (except for Malta, where 800 m was the deepest value) for each site, using the marmap package in R [58] (Figure S2).
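The slope index was computed with the marmap package in R; a rough, hedged Python equivalent, assuming the 1000 m isobath is available as a list of latitude/longitude vertices (a simplification of the actual isobath geometry), might look like this:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

def slope_index(site, isobath_points):
    """Distance (km) from a site to the nearest vertex of the 1000 m isobath."""
    lat, lon = site
    return min(haversine_km(lat, lon, p_lat, p_lon)
               for p_lat, p_lon in isobath_points)

# placeholder isobath vertices off the Catalan coast
isobath = [(42.3, 3.6), (41.9, 3.4), (41.2, 2.9)]
print(slope_index((41.4, 2.2), isobath))   # e.g. a beach near Barcelona
```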
Data analysis
The oceanographic variables used as potential explanatory variables inside the machine learning algorithms were analyzed a priori using the Pearson correlation coefficient "rho" to avoid collinearity (more important for the GAM and ANN models; [59]); for pairs of variables with a correlation rho higher than 0.7, one was discarded from the analysis (Figure S3). These "dropped" variables were those with less well-known effects, or only indirect, ecologically mediated effects, on jellyfish ecophysiology [60][61]. The response variable was defined as the number of jellyfish blooms (the sum of categories 2 and 3, as these categories represent incidences of 'blooming') per sampled beach. The machine learning workflow (averaging protocol and evaluation) closely followed the routine proposed by Thuiller et al. [62]. Different machine learning methods were applied to the (participatory) jellyfish data, so as to model the environmental forcing on selected species. In addition, in order to identify potential areas of massive jellyfish occurrence and/or stranding, and as a way to create a management tool to assist in the mitigation of the negative impacts of jellyfish blooms (i.e. people getting stung), an ensemble potential distribution "predictive" map was also generated and made available through the iMedjelly App. The algorithms used in the machine learning approach were: generalized additive models (GAM), artificial neural networks (ANN) and random forest (RF). All these models were fitted using the ensemble platform for species distribution modeling, the "biomod2" package [63].
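A minimal sketch of this collinearity screening, dropping one variable from each pair with |rho| > 0.7 (in the text, the choice of which variable to drop was made on ecological grounds; here it is mechanical, and the example predictor table is synthetic):

```python
import numpy as np
import pandas as pd

def drop_collinear(env: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Drop one variable from each pair with |Pearson rho| > threshold."""
    corr = env.corr(method="pearson").abs()
    # keep only the upper triangle so each pair is inspected once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return env.drop(columns=to_drop)

# synthetic predictor table (SST, chlorophyll, slope index, salinity)
rng = np.random.default_rng(1)
env = pd.DataFrame(rng.normal(size=(100, 3)), columns=["sst", "chl", "slope"])
env["salinity"] = env["slope"] * 0.9 + rng.normal(scale=0.1, size=100)  # rho > 0.7
print(drop_collinear(env).columns.tolist())   # salinity is dropped
```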
The Generalized Additive Model (GAM) works in a similar fashion to a linear model; however, in GAMs the linear predictor involves a sum of smooth functions of covariates (explanatory variables), allowing this method to model (fit) more complex relationships between the response variable (jellyfish data) and the selected covariates [64]. This flexibility is achieved because the relationships are defined as 'smooth functions' rather than detailed parametric relationships, reducing the over-fitting problem. In addition, the "generalized" part of the model allows for a family distribution of the errors different from the Gaussian distribution, making it ideal for non-negative and/or continuous data including, as in this case, count-like data. A second approach considered here belongs to a different family of models, based on decision trees: the Random Forest (RF) approach, an alternative able to cope with the non-linear relationships commonly found in natural systems. This model is based on the generation of a large number of regression trees, each fitted to a (random) subset of the original data obtained by means of permutation [65]. For the prediction of the response value (abundance category), the covariates are included in each tree, producing a set of response values whose final value is taken as the average among the trees considered in the forest [66]. Finally, we used an Artificial Neural Network (ANN) statistical modeling approach. This model is based on an analogy with the way the human brain learns [67], from the relationships between an input layer, hidden layers and an output layer. At each layer, the input values are processed independently in nodes. The output is reached by iteratively connecting pairs of nodes, with the back-propagation algorithm being one of the most used in the training process. In general, ANNs have proved robust for noisy data, given their ability to handle both linear and nonlinear relationships [68]. In order to compare modeling outputs, only significant variables were retained, and the importance of each environmental variable was calculated through a permutation process, in which the model was fitted by excluding (in a stepwise fashion) one environmental variable at a time and calculating how much accuracy decreased when that variable was excluded. Thus, when an "important" variable is excluded, the model's accuracy decreases to a greater degree than when a "non-important" variable is removed. The estimation of variable importance is given on a zero-to-one (0-1) range, where zero represents a variable with no influence and one represents a variable that is essential.
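The permutation-based importance described here can be sketched generically as follows (a random forest stand-in with synthetic data; the actual biomod2 routine differs in its details, and all variable names and values below are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                      # sst, salinity, wind, slope
y = 3 * X[:, 0] + X[:, 3] + rng.normal(size=300)   # blooms driven by sst, slope

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
baseline = model.score(X, y)                       # R^2 with intact predictors

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])           # destroy variable j's signal
    importance.append(baseline - model.score(Xp, y))

# rescale to the 0-1 range used in the text (0 = no influence, 1 = essential)
importance = np.array(importance) / max(importance)
print(dict(zip(["sst", "salinity", "wind", "slope"], importance.round(2))))
```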
Jellyfish Dynamics
The jellyfish bloom data varied in the way they were collected and in their temporal coverage. In the case of Spain and Tunisia, all data collection was conducted by trained people, and reports were submitted both when jellyfish were present (positive reports) and when they were absent (negative reports). On the other hand, reports from Italy and Malta came from on-line citizen science platforms where people submitted only images of jellyfish species encountered within the coastal area, equivalent to positive records only (without negative, i.e. absence, records). In order to use these data, experts from the different institutions involved in the project validated the sightings. The temporal coverage of the jellyfish bloom data varied across countries: Spain, Italy, Malta and Tunisia conducted seven, six, four and three years of data collection, respectively. Similarly, the average (per year) numbers of sampling days and surveyed beaches were 194.5 and 965 for Italy, and 111 and 193 for Spain, respectively. Malta and Tunisia surveyed to a much lesser extent, for an average of 89.8 and 48 days, and on 32 and 33 beaches, respectively (Table 1). A linear model showed a positive effect of an increase in the number of days surveyed on the spatial coverage of the reports, visualized as the number of beaches surveyed (r-square = 0.6, t-value = 4.73, p-value < 0.01), but this was not consistent across countries (Figure S4). In Table 1, TOTAL represents the sum of the variables, except for PPR and the intensity index, for which it represents the average (and standard error) values (*--: due to the small sampling effort in this year, these data were not considered in the analysis).
In the case of Italy, data from MeteoMeduse included reports from 2009 until 2014, comprising a total of 15,731 jellyfish presence records. No absence data were registered, since the data originated from citizens who are accustomed to reporting only the presence of jellyfish; hence, the percentage of positive reports was always 100% (Table 1). Data from Spain covered the 2007-2013 period and were collected exclusively from the Catalonian coast. All the reports (98,373) were made during the summer season, of which only 7.9% were positive reports (Table 1). The year with the highest number of jellyfish presence records was 2012 (2,982 positive reports). For Tunisia, most citizen science records came from fishers. The Tunisian data covered the years 2013 and 2014, with a total of 639 records collected, of which 21% corresponded to jellyfish presence and 80% to jellyfish absence reports (Table 1).
The relationship between the positive reports and the total reports received, considering the monitoring effort as the total number of surveyed beaches, was analyzed through the intensity index. This index showed that, among the four targeted countries, Malta had the highest value (3.57), reflecting the higher frequency of jellyfish reporting on its beaches during the study period, especially during 2013. Even though its dataset was the shortest in duration, Tunisia showed the second-highest intensity (0.80), followed by Italy (0.51) and Spain (0.43) (Table 1). An inverse, albeit anecdotal, relationship thus emerged between the length of the dataset and the intensity index values, as well as a high degree of inter-annual variability in the frequency of jellyfish occurrence. For instance, for Italy, 2009 was the most important year in terms of the intensity of jellyfish sighting reports, with a clear decline in the following years until a minimum was registered in 2012, after which a positive (but weak) trend is evident. In the case of the Spanish coastal areas, 2011 and 2012 were the two most important years in terms of the intensity of jellyfish reports.
The jellyfish species reported from the four target countries included the scyphozoans Rhizostoma pulmo, Pelagia noctiluca, Cotylorhiza tuberculata, Aurelia spp. and Chrysaora hysoscella, as well as the hydrozoans Velella velella, Aequorea forskalea and Physalia physalis. As far as Aurelia spp. is concerned, it should be noted that, following Scorrano et al. [69], the only open-water Aurelia species recorded in the Western and Central Mediterranean (including the Adriatic Sea) is A. solida Browne 1905, whereas the Aurelia moon jellyfish occurring in sheltered coastal lagoons and harbours (such as the Varano lagoon in Italy, or Empuriabrava in Cataluña) should be referred to Aurelia coerulea von Lendenfeld 1884.
Within the target countries, different species dominated the report cohort. In general, for the countries in the northern half of the Mediterranean (Italy and Spain), R. pulmo was the most commonly reported species, followed by P. noctiluca. Conversely, reports from the central Mediterranean countries (Malta and Tunisia) were dominated by P. noctiluca, with few (or no) sightings of R. pulmo.
An analysis of the temporal dynamics of jellyfish reports per country and per species showed that, for Italy, the frequency of reports for Rhizostoma pulmo increased consistently from 2009 to 2013 and then decreased to minimum levels, even lower than those reported in 2009. Reports of the species in abundance category 1 were the most numerous, followed by categories 2 and 3, respectively. This observed hierarchy in the prevalence of the different abundance categories was evident for the different species, with the exception of Velella velella, for which abundance category 3 has been the most commonly reported category since 2010, showing a progressively increasing trend in abundance until the end of the dataset (Figure 2). For P. noctiluca, reports showed two periods of high abundance, in 2010 and in 2013.
In the case of Malta, reports were broadly dominated by P. noctiluca, for which the maximum number of records was reported during 2012, with abundance category 2 being reported almost as commonly as abundance category 1. An increase in the number of reports of V. velella, similar to that observed for Italian coastal areas, is also evident (Figure 3).
Along the Spanish (Catalan) coast, the reports were dominated by R. pulmo, whose reports increased from 2007 to a maximum in 2012, when abundance category 3 reached its maximum level, with more than 100 reports (Figure 4). However, numbers seemed to decrease during 2013, when no report was submitted for abundance category 3 (Figure 4). The second most commonly reported species, P. noctiluca, showed a maximum during 2008, with a subsequent decrease in the number of reports until a minimum registered during 2012. Afterwards, during 2013, the reports of this species seemed to recover, at least for abundance category 1, which reached the maximum number of reports registered during the entire 7 years (Figure 4). In 2009 and 2010, the reports were broadly dominated by V. velella and A. forskalea, two species that were almost absent in other years (Figure 4).
For the Tunisian coastal areas, the reports were dominated by P. noctiluca, which showed an increase in the number of records for abundance categories 2 and 3 from 2013 to 2014; however, the species was not reported during 2015. Aurelia (cfr. A. solida; see also Gueroun et al. [70]) was a commonly reported species in Tunisian waters, with a similar abundance pattern reported for 2013 and 2014, and no record of the species during 2015. The few reports submitted for R. pulmo showed a marked decrease in category 1 since 2013, with a peak of abundance registered during 2014, the only year in which abundance category 2 was registered (Figure 5).
Jellyfish Spatial Dynamics
From a spatial (horizontal) perspective, the distribution of the "low" (1) abundance category, which represents an abundance of fewer than 10 jellyfish individuals per beach, showed no clear patterns. The "medium" and "high" (categories 2 and 3) abundance categories showed similar horizontal patterns, and thus their values were combined in the analysis and used to represent massive jellyfish sightings and strandings. For the two most commonly reported jellyfish species (i.e. Rhizostoma pulmo and Pelagia noctiluca), the horizontal distribution of the highest abundance categories showed that, for Italy, P. noctiluca blooms were persistent throughout the study period and that the most affected areas were those in the southern and northern Tyrrhenian Sea and the Ligurian Sea, with the central Adriatic also contributing several reports of P. noctiluca blooms (Figure 6). In addition, R. pulmo blooms were mainly reported from the Ligurian Sea, the northern Tyrrhenian Sea and the northern Adriatic Sea, and occasionally even from inside the Gulf of Taranto (northern Ionian Sea) (Figure 7).
For Malta's coastal areas, P. noctiluca blooms were reported from all around the island, with a higher number of reports being submitted for NW and SE coastal areas (Figure 8). R. pulmo was largely absent from these waters, with the exception of a few records made in 2013.
In the case of Spain, the greatest abundances of P. noctiluca and R. pulmo showed contrasting patterns of distribution. P. noctiluca showed its greatest abundance along northern sections of the coast, showing a clear association with the presence of submarine canyons. Along southern parts of the Catalan coast, high numbers of P. noctiluca were recorded to the north of the Ebro River delta, where circulation eddies are common (Figure 9). On the other hand, R. pulmo showed its greatest abundance along the central area of the Catalan coast, near the city of Barcelona, especially for 2011 and 2012, when records for this species were dominated by the high abundance category (Figure 10).
In the case of the Tunisian coastal areas, the northern part (near Bizerte) was the area from which most reports originated. Thus, blooms of P. noctiluca (Figure 11) and R. pulmo (Figure 12) were mostly recorded in this area. In the figures, abundance categories are shown as: category 1 ("low") in yellow, category 2 ("medium") in green and category 3 ("high") in red.
Effects of environmental variables and potential distribution
An analysis of the putative effects of environmental variables on the observed distribution of stranded jellyfish revealed a high degree of inter-specific and inter-country variation. In order to analyze these effects, we focus on the average variable importance among the three machine learning algorithms. For the species Pelagia noctiluca, the most important (average value > 0.4) variables in explaining the observed distribution of the species were sea surface temperature (SST), salinity, wind direction and the slope index (Table 2). In Italy, salinity and the slope index were the two most explanatory variables (0.44 and 0.62, respectively), with all the other environmental variables scoring 0.1 or less in terms of importance estimates. The predicted potential areas for high numbers of jellyfish, considering average environmental conditions, included areas near the Gulf of Genoa (Ligurian Sea), the southern Tyrrhenian Sea (near the Aeolian Islands), around the island of Sardinia, in the Strait of Sicily and one area in the central Adriatic Sea (Figure 13-A). In the case of Malta, the observed distribution of the species could be mostly explained in terms of SST values, with an average importance estimate of 0.43. The predicted important areas for blooming of the species within Maltese coastal waters, in concordance with its intensity index, showed high values around the entire archipelago, with the northern and south-eastern areas scoring the highest values (Figure 13-B). In Spain, the variables with the highest average importance values were SST, wind direction ("north wind") and the slope index (0.6, 0.47 and 0.4, respectively). The predicted important areas for future blooming of the species consisted of two main areas: i) those associated with the presence of submarine canyons (which reflects the influence of high slope index values) along the northern Catalan coast near the Cap de Creus National Park, and ii) the coastal area contiguous to the northern Ebro river delta (Figure 13-C). In Tunisia, the two main explanatory variables were SST and the slope index, with values of 0.41 and 0.78, respectively. The predicted areas with high potential for P. noctiluca blooming were all circumscribed to the north-western coastal area, near Bizerte (Figure 13-D).

For the species Rhizostoma pulmo, the relative importance of the environmental variables in explaining the observed distribution is similar to that observed for P. noctiluca, albeit in a different variable combination (Table 3). For Italy, salinity and the slope index were the two most important variables (0.4 and 0.62, respectively). The predicted important areas for blooming of the species showed a different spatial distribution than those for P. noctiluca. In this case, these areas were restricted to the Levante Riviera and the Tuscan Archipelago (eastern Ligurian Sea), the central area of the Tyrrhenian Sea north of Naples, the southern coastal areas of the island of Sicily, the coastal areas of the Gulf of Taranto, the southern part of the Adriatic Sea and the coastal areas of the Gulf of Venice (Figure 14-A). In the case of Spain, SST and the slope index were the two most important environmental variables (0.46 and 0.45, respectively), which significantly explained the observed distribution of blooms. The spatial prediction of the future occurrence of blooms of R. pulmo in these waters contrasted with that for P. noctiluca, showing high probability values in the central area of the Catalan coast, mainly south of the city of Barcelona (Figure 14-B). In Tunisia, the distribution of blooms of R. pulmo was explained mostly by SST, nitrate, wind (speed and direction) and the slope index, variables which featured within all the model algorithms. The predicted high-probability blooming areas for R. pulmo in these waters showed a similar pattern to those for P. noctiluca, with the coastal area near Bizerte showing the highest values (Figure 14-C).
Discussion
Our results demonstrate, at a regional, sub-basin scale, the potential of citizen science data for unraveling jellyfish temporal dynamics and, through habitat modeling, for identifying potential blooming areas of the most abundant jellyfish species. To our knowledge, this is the first time that the spatial and temporal dynamics, the role of the environmental variables and the characterization of the most probable presence areas have been described for the species P. noctiluca and R. pulmo on a broad scale across the Western and Central Mediterranean, from Italy to Tunisia to Spain.
Citizen science has been widely recognized as a great tool for producing large amounts of data, especially data covering broad spatio-temporal extents, without which the corresponding data acquisition and analysis might be constrained or even impossible [33,35,37,[71][72][73][74][75]. The effort invested in the current study in collating the jellyfish sighting databases from four different citizen science programs, implemented at different levels in different Mediterranean countries, allowed us to generate a standardized, large-scale database. This, in turn, has contributed to addressing the lack of knowledge about jellyfish and their spatio-temporal distribution, at least in the central-western Mediterranean.
One of the big challenges in utilizing data from citizen science programs is the generation of a good-quality dataset [76][77], and the current study was not exempt from such a challenge. However, in the present study, training sessions and/or teaching tools (e.g. species identification guides and posters, mobile apps and social media platforms) were implemented in order to facilitate participation and data collection, and to increase the capacities and experience of volunteers. This early-stage strategy helped minimize error and increase accuracy [34,37,75,[77][78]. Thorough data management and the validation of every single jellyfish sighting report by experts also contributed to database quality [34,78].
Although the most common jellyfish species in the Mediterranean are easily recognizable, the poor condition of an observed specimen (missing parts such as tentacles, only the umbrella remaining, or very degraded mesoglea) or poor photographic quality may generate confusion in the reports, making the validation and data treatment processes essential to reduce bias.
Citizen science monitoring campaigns can provide data about the presence and/or absence of jellyfish and their relative abundance, which can be a preventive tool in itself by acting as an early warning system. However, the applicability of the same data can be extended, as in the delivery of predictive models that can act as a management tool [50,55,79]. The analyzed database, featuring more than 100,000 records containing almost 25,000 observations of jellyfish individuals, collected over a period of 3 to 7 years by citizen scientists participating in any of the citizen science programs included in this analysis, is a valuable contribution to the understanding of jellyfish ecology. Using different machine learning methods and the available environmental data, we were able to identify explanatory environmental variables and to develop species distribution models. In so doing, we inferred considerable inter-specific variability in occurrence and a well-defined inter-annual variability in abundance for each species. Our data cover a short period of time (from 3 years in Tunisia to 7 years in Spain); hence, the observed trends may not be representative of larger (decadal) oscillations, although these results are definitely a key step towards understanding jellyfish dynamics and a starting point for future related work. Jellyfish are known for their highly variable spatio-temporal dynamics and for the periodicity of their blooms [3,28,80]; only large temporal databases can aspire to identify the correct temporal trends of jellyfish blooms [4,16,81]. Regarding the spatial distribution, the low abundance category showed no clear species-specific pattern, reflecting the observation that the presence of small numbers of jellyfish along coastal areas is a common phenomenon. The medium and high abundance categories (considered together as "blooms") were analyzed for P. noctiluca and R. pulmo in all the study areas due to their public hazard importance, with P. noctiluca being considered the most important Mediterranean jellyfish from the toxicological point of view [82].
Pelagia noctiluca is a holoplanktonic species with an extensive range of distribution in all warm and temperate waters [83]. Within the Mediterranean, the species inhabits coastal and oceanic waters [84][85], and its prominent occurrence all year round, albeit with interannual differences in intensity, emerges in the current study from the collected citizen science data. Rhizostoma pulmo is a meroplanktonic species widely distributed in the whole Mediterranean basin (reviewed in [86]) and the second most abundant species along western Mediterranean coasts [87]. In the Adriatic, P. noctiluca was considered a rare species [88] before its anomalously large blooms recorded in 1977 [89][90][91]. According to previous studies on the occurrence of jellyfish along the Italian coasts [15], in 2010 P. noctiluca dominated the western basin but was much less observed in the Adriatic. Over the 4-year period 2010-2013, P. noctiluca had its greatest abundance along the western coast of Italy, but it was increasingly recorded at bloom levels in the Adriatic Sea. Pelagia noctiluca was still not as abundantly recorded as R. pulmo, especially in the northern Adriatic, where the latter species is considered native and is thus observed regularly [18], and where its blooms were reported during the entire duration of the current six-year study. Along the rest of the Italian coast, records of P. noctiluca were notably more abundant than in the Adriatic, especially those belonging to the high abundance category. The pattern was similar throughout the six-year study, except for 2009, when the fewest observations were reported. In 2009, P. noctiluca abundance peaked early (January-March), according to studies carried out in the Strait of Messina [92]. Given that citizens' participation is lower during winter months than during the bathing season, this might explain the lower number of reports for the species during 2009, at least for the Tyrrhenian and Ionian Seas. In the Ligurian Sea, P. noctiluca has been reported as present along most of the coastline, but exhibiting alternating years of presence and absence [3,28], in full agreement with the data collected in the current study. Regarding R. pulmo, the general distribution pattern shown in the present study is consistent with that reported by Boero et al. [15] and Leoni et al. [93]. For P. noctiluca, one of the environmental variables with the highest importance was salinity, in agreement with Piccinetti Manfrin & Piccinetti [94] and Canepa et al. [30]. Although previous studies in different areas of the Adriatic suggested a direct correlation between sea-water temperature and the abundance of P. noctiluca [89,[95][96], with temperature being identified as a major environmental factor affecting population densities [30,85], this factor was identified as only the third most important for this species in this study. Another important explanatory factor was the slope index, corroborating the association between jellyfish stranding events and their proximity to marine canyons reported in previous studies, including the one for the Strait of Messina [92]. In the case of R. pulmo, the same factors as for P. noctiluca were relevant, with SST also identified as the third in importance in this study, having been correlated with high abundances of Rhizostoma spp. in Europe [97].
One of the scyphomedusae known for considerable coastal aggregations in Malta is P. noctiluca and, since its massive blooms in the early 1980s [98], this species has been considered one of the most common species around the island. As previous studies have shown (based on the same citizen science data), the species is present throughout the year, along both flanks of the island, disappearing only sporadically in autumn [99]. Over the years 2011-2015, Gatt et al. [99] highlighted 2012 as the year with the highest number of reports submitted for the most commonly reported jellyfish species, with P. noctiluca having the highest abundance values. Conversely, other species such as Rhizostoma pulmo, Aurelia cfr. solida and C. hysoscella were very rarely reported during the same period. The results of this study showed that the outstanding environmental forcing factor for P. noctiluca was sea surface temperature, which is consistent with the results of previous studies [30,99]. In addition, for this species, hydrodynamic forcing (e.g., winds, currents and tidal effects) has been implicated in the formation of coastal and offshore aggregations in the Adriatic [90,[100][101] and in Maltese waters [98]. In the current study, however, these variables did not emerge as important ones, except for the currents, which may explain the local aggregations in coastal areas of Malta due to the island effect [98,101]. Malta's island status and the relative importance of sea currents within this context could also be the reason why the intensity index for P. noctiluca in Malta was so high compared to the rest of the countries included in the study.
The spatial distribution of P. noctiluca along the Catalan coast in Spain covered the entire coastline but was highly concentrated at the extremes, especially in the northern area, as previously described by Canepa et al. [30], whose analysis was based on the same data series collected up to 2010. This distribution can be explained in terms of the proximity to the canyons in the north, which can enhance circulation, making the seasonal occurrence of this species more likely, as hypothesized in previous works [30]. Moreover, in the current study, the slope index was identified as an important environmental factor, consistent with the findings of Benedetti-Cecchi et al. [57], who reported fewer jellyfish outbreaks with increasing distance from canyons. According to Fuentes et al. [102], in data for the years 2007-2009 based on citizen science reports and supplemented with monthly coastal surveys, R. pulmo was the second most observed jellyfish after P. noctiluca, a result that agrees with our findings. In that same study, the authors indicated that, by 2010, important impacts caused by this species had been reported by fishermen as a result of more frequent and intense blooms, as also shown in Leoni et al. [93]. The spatial distribution observed in this study is also in agreement with Fuentes et al. [102], showing the occurrence of this species along the entire Catalan coast, with the highest concentration in the central area, around the Barcelona province. This is supported by the findings of previous coastal surveys that recorded the presence of ephyrae mainly in the central area of the Catalan coast [102]. To our knowledge, this is the only work describing the spatio-temporal distribution of R. pulmo, together with the role of the environmental variables, along the Spanish coast while integrating a seven-year-long database. For both species, P. noctiluca and R. pulmo, the most important explanatory environmental factors were SST and the slope index. For P. noctiluca, the wind direction was also identified as an important variable, in agreement with the findings of Canepa et al. [30], although in the latter study the explanatory variable behind the stranding of this jellyfish species was southeasterly winds, whereas northwesterly winds were identified in the current study.
Pelagia noctiluca was the most sighted species in Tunisian coastal areas (blooms of the high abundance category), mostly in the northern part of the country, in agreement with the findings of previous studies [103][104][105]. Some of the most important explanatory environmental variables for blooms of this species within this region were SST, the slope index and nitrate, consistent with the findings of Touzri et al. [105]. Another environmental variable positively associated with the presence of P. noctiluca in the bay of Bizerte, identified by Touzri et al. [106], was salinity, a variable that was highly correlated (rho = 0.85) with the slope index. R. pulmo is a characteristic species of the Tunisian coasts [106], and its blooms have been described under conditions of high temperature and coastal eutrophication [107]. However, there is no ample information about the spatio-temporal distribution of this species in Tunisian waters, except for the recent work by Leoni et al. [93] that documents a year-round presence and large blooms for this species, nor on the key environmental variables that determine its presence, making the current study the first of its kind on R. pulmo within these waters.
Several studies along the Mediterranean have shown that high abundances of gelatinous zooplankton are generally related to variations in water mass hydrodynamic variables, in particular salinity and sea surface temperature [3,85,103,105,108]. The different Mediterranean seas and coastal areas have their own hydrodynamic and physico-chemical peculiarities, as do different jellyfish species in terms of preferred conditions. Hence, it is fundamental to know those peculiarities in order to plan and implement adequate, region- and species-specific management measures. In this study, we explored those peculiarities and applied the identified key variables within further analysis, testing a predictive model as a potential prevention and mitigation tool, considering that forecasting systems may be useful for anticipating events and facilitating management [50,55]. Our forecasting models appear to be consistent with the data of P. noctiluca (Figure 13) and R. pulmo (Figure 14) collected through a participatory approach, since coastal areas identified as carrying a high probability of bloom occurrence largely coincide with areas reporting the highest frequency of abundance category 2 and 3 citizen science records.
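As an illustration of the kind of forecasting tool tested here, the minimal sketch below fits a logistic bloom-probability model on the key environmental predictors; the input file and column names (sightings_env.csv, sst, slope_index, current_speed, bloom) are hypothetical placeholders, and this is not the study's actual modelling pipeline.

```python
# Illustrative bloom-probability forecast (not the study's actual model).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sightings_env.csv")                  # citizen science records joined to env. layers
X = df[["sst", "slope_index", "current_speed"]]        # key predictors highlighted in this study
y = df["bloom"]                                        # 1 = abundance category 2-3 record, 0 otherwise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]                # probability of bloom occurrence per record
print("AUC:", roc_auc_score(y_te, proba))
```

Such per-record probabilities can then be mapped onto coastal cells to flag areas with a high likelihood of bloom occurrence.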
Jellyfish blooms can have a serious socio-economic impact, especially on tourism, aquaculture and fisheries. Blooming species like P. noctiluca and R. pulmo are among the most important species within the western Mediterranean as a result of their sheer impact. Investing in citizen science may therefore be an effective approach to provide useful information about jellyfish blooms and to explore potential mitigation tools for jellyfish coastal management purposes. Similarly, in light of a Blue Growth vision, the capacity to predict areas with dense jellyfish proliferation may turn into a positive perspective for the potential exploitation of commercially valuable jelly biomasses [109,110]. Herein, we demonstrate the importance and usefulness of citizen science programs for determining jellyfish dynamics at broad spatial scales. In this case, four Mediterranean countries, covering the Adriatic, Ionian, Tyrrhenian, Ligurian and Balearic Seas, were able to characterize the spatio-temporal dynamics of the most commonly occurring jellyfish species and to propose a mitigation and prevention tool for coastal jellyfish bloom management through the data coming from their national citizen science initiatives. Well-designed, implemented and evaluated citizen science programs, conducted with committed participants, can efficiently generate quality-controlled data, contribute to effective management strategies and help mitigate negative impacts [33,72,111]. In this framework, the implementation of well-structured citizen science programs should be recommended across large marine regions to increase knowledge on jellyfish proliferation mechanisms and distribution, thus contributing to the development of adaptive management strategies in coastal areas.
Supplementary Materials: The following are available online. Table S1: Summary statistics for all the predictor (environmental) data. Figure S1: Poster "Watch for Jellies" (citizen science campaign) distributed within the MED-JELLYRISK project.
"year": 2021,
"sha1": "fe7d0ce544e2df6a3a382a04ca9b1a992e01fd41",
"oa_license": "CCBY",
"oa_url": "https://www.um.edu.mt/library/oar/bitstream/123456789/77704/1/Unfolding_jellyfish_bloom_dynamics_along_the_mediterranean_basin_by_transnational_citizen_science_initiatives.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "056f5f589c5c2fa3cb375be8a7bd56f63c084c05",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
Cycloaddition of 4-Acyl-1H-pyrrole-2,3-diones Fused at [e]-Side and Cyanamides: Divergent Approach to 4H-1,3-Oxazines
4-Acyl-1H-pyrrole-2,3-diones fused at [e]-side with a heterocyclic moiety are suitable platforms for the development of hetero-Diels–Alder-reaction-based, diversity-oriented approaches to series of skeletally diverse heterocycles. These platforms are known to react as oxa-dienes with dienophiles to form angular 6/6/5/6-tetracyclic alkaloid-like heterocycles and are also prone to decarbonylation at high temperatures, resulting in the generation of acyl(imidoyl)ketenes, bidentate aza- and oxa-dienes, which can react with dienophiles to form skeletally diverse products (angular tricyclic products or heterocyclic ensembles). Based on these features, we have developed an approach to two series of skeletally diverse 4H-1,3-oxazines (tetracyclic alkaloid-like 4H-1,3-oxazines and 5-heteryl-4H-1,3-oxazines) via a hetero-Diels–Alder reaction of 4-acyl-1H-pyrrole-2,3-diones fused at [e]-side with cyanamides. The products of these transformations are of interest for drug discovery, since compounds bearing a 4H-1,3-oxazine moiety are extensively studied for inhibitory activities against anticancer targets.
Introduction
Diversity-oriented synthesis (DOS) is a strategy to access structurally diverse libraries of small molecules from a single set of reagents [1,2]. This approach allows efficient exploration of the chemical space for the development of new drugs [3,4].
Initially, we tested the reaction of FPD 1a with cyanamide 2a in acetonitrile at room temperature (Table 1). According to UPLC-UV-MS data of the reaction mixture, the reaction proceeded very slowly. Within a week, several unidentified side products were observed along with unreacted starting materials (degree of conversion of FPD 1a of ~20%). The UPLC-UV-MS yield of the desired product 3a was ~10%. However, upon elevating the reaction temperature to 95 °C, the test reaction of FPD 1a with cyanamide 2a in acetonitrile proceeded smoothly and afforded the desired tetracyclic alkaloid-like 4H-1,3-oxazine 3a in an isolated yield of 85% (Table 1, Entry 3a). The reaction progress was monitored visually by the change of colour of the reaction mixture (FPD 1a has a deep violet colour, and product 3a is yellow). According to UPLC-UV-MS data of the reaction mixture, compound 3a was formed as a single product, and no side products were observed. Product 3a was isolated by simple filtration directly from the reaction mixture. Since the test results were satisfactory, we examined the substrate scope of this reaction by involving FPDs 1a-i, bearing various acyl substituents R1 and heteroatoms X, and cyanamides 2a-f, bearing various substituents at the amino nitrogen atom (Table 1). Quinoxaline derivatives 3a-k were prepared using acetonitrile as the reaction solvent and isolated by simple filtration directly from the reaction mixture. For the synthesis of 1,4-benzoxazine derivatives (X = O), toluene was used as the reaction solvent, since compounds 3l,m were readily soluble in acetonitrile and no precipitate was formed. In toluene, compounds 3l,m formed precipitates after cooling of the reaction mixtures to room temperature, which eased their isolation.
It was found that the studied reaction proceeded well with both 5-oxa (X = O) and 5-aza (X = NH, NPh, NMe) FPDs 1. The reaction also worked well with various aryl and tert-butyl groups as the acyl substituent R1 of FPDs 1. Expectedly, the reaction of the methoxy-bearing FPD 1h did not result in cycloadduct 3k, since the methoxycarbonyl group COOMe is not electrophilic enough to participate in cycloaddition as the C=O part of the heterodiene system. The examined substituents in N,N-dialkylcyanamides 2a-d did not affect the reaction noticeably. However, our attempts to involve N-arylcyanamides 2e,f in HDA with FPDs 1a,l were not successful. In this case, the reaction proceeded with formation of insoluble, hard-to-purify compounds, whose structure we did not succeed in identifying. We assume that in this case another reaction course could have occurred instead of the formation of the desired compounds 3o-q, since N-arylcyanamides 2e,f have lower nucleophilicity at the C≡N nitrogen than N,N-dialkylcyanamides 2a-d.
It is worthy of note that some of products 3 had very low solubility in all organic solvents available to us. There were problems with the acquisition of NMR spectra of such products, which is why, in some cases, we had to record solid-state NMR (ssNMR) spectra.
It should be mentioned that in the case of the reaction of the 4-nitrophenyl-substituted FPD 1d with 4-morpholinecarbonitrile 2b, the desired product 3n was observed only in trace amounts by UPLC-UV-MS of the reaction mixture. Prolongation of the reaction time (up to 14 days) and increasing the temperature (up to 120 °C) did not yield any positive results. We suppose that this was caused by the very low solubility of product 3n, which, under the examined conditions (FPDs 1 were used as suspensions in acetonitrile), possibly formed a protective insoluble layer on the surfaces of solid particles of FPD 1d and thus prevented the reaction. It should also be mentioned that our attempts to perform the reaction of FPD 1d with carbonitrile 2b in DMSO were unsuccessful. This experiment was complicated by the fact that DMSO is a highly hygroscopic solvent and facilitated hydrolysis reactions of the starting FPDs 1 and the products 3 (for hydrolysis studies of analogs of products 3, see [22]). In the case of compound 1d, the NO2 substituent makes FPD 1d very electrophilic and very reactive towards water.
Moreover, in the case of 1,4-benzoxazine products 3l,m (X = O), there were problems with monitoring them by UPLC-UV-MS and HPLC-UV (acetonitrile-water as eluents). Chromatograms of the reaction mixtures and of the individual compounds 3l,m (pure according to the NMR spectra) contained many overlapping broad peaks, and the mass detector data showed signals of the desired products 3l,m only in trace amounts. Such problems were never observed with quinoxaline products 3a-j (X = NH, NMe, NPh). We think this could be explained by hydrolysis of compounds 3l,m on the LC column due to the presence of an ester moiety in their structures, which is a common feature of such compounds [22].
The study of melting of compounds 3a-i,l,m in a capillary revealed that under such conditions 5-heteryl-4H-1,3-oxazines 4a-i,l,m (Table 2) were formed as the sole products, and no regioisomeric pyrimidines G (Scheme 2) were observed (monitoring by UPLC-UV-MS). This transformation was then easily scaled up to 0.4 mmol (~200 mg) under solvent-free conditions. When scaling up, we found that the addition of small amounts (about 0.1 equiv.) of the corresponding cyanamides 2a-d was required to increase the isolated yields of compounds 4a-i,l,m by reducing the side reactions leading to compounds H (monitoring by UPLC-UV-MS) (Scheme 2), which are characteristic of transformations involving in situ generation of acyl(imidoyl)ketenes C [24,32]. Compounds 4a-i,l,m were readily isolated by simple recrystallization of the crude reaction mixtures. No effect of the examined substituents on the formation of compounds 4a-i,l,m was observed. In the case of compound 3j (X = NH), no compound 4j was formed; instead, furoquinoxaline I was detected (monitoring by UPLC-UV-MS) (Scheme 2) [24,33].
Scheme 2. Plausible pathway of formation of compounds 4 and G.
We assume that the formation of compounds 4a-i,l,m proceeded through three stages (Scheme 2). First, compounds 3a-i,l,m underwent a thermally initiated retro-HDA that afforded FPDs 1a-i and cyanamides 2a-d. Second, the formed FPDs 1a-i decarbonylated (the evolution of carbon monoxide was indicated by a gas detector tube) to generate acyl(imidoyl)ketenes C. Finally, acyl(imidoyl)ketenes C reacted as oxa-dienes with cyanamides 2a-d to produce the desired 4H-1,3-oxazines 4a-i,l,m. We suppose that ketenes C reacted with cyanamides 2a-d exclusively as oxa-dienes, since this cycloaddition reaction proceeded via a charge-controlled polar transition state, as was observed earlier in the reaction of ketenes C with carbodiimides [25].
To validate the proposed pathway of formation of compounds 4 (Scheme 2), we tested the one-pot solvent-free reaction of FPD 1a with cyanamide 2b. Upon heating compound 1a with cyanamide 2b (reaction scale of 0.4 mmol, 1a:2b reagent ratio of 1:1.1) at 235-240 °C, we found that compound 4b was formed in a yield of only ~45% (monitoring by UPLC-UV-MS), which was much lower than in the case of decomposition of compound 3b. We think this was because of impaired heat and mass transfer during the solvent-free reaction of compounds 1a and 2b. These disturbances promoted the thermolytic side reactions leading to compounds H [24,32] (monitored by UPLC-UV-MS) and decreased the yield of compound 4b. Thus, the development of a procedure to compounds 4 from the direct reaction of compounds 1 and 2 without isolation of compounds 3 is quite possible, but it requires additional optimization.
Then, to further validate the proposed pathway of formation of compounds 4 (Scheme 2), we performed the decomposition of compound 3b in the presence of FPD 1b at 240 °C and the decomposition of compound 3a in the presence of cyanamide 2b at 240 °C, and studied the obtained reaction mixtures by HPLC-UV. As a result, the decomposition of compound 3b (R1 = Ph, R2 = morpholino) in the presence of FPD 1b (R1 = 4-ClC6H4) at 240 °C afforded a mixture of compounds 4b (R1 = Ph, R2 = morpholino) and 4e (R1 = 4-ClC6H4, R2 = morpholino) along with a mixture of the corresponding side products H. The decomposition of compound 3a (R1 = Ph, R2 = NEt2) in the presence of cyanamide 2b (R2 = morpholino) at 240 °C afforded a mixture of compounds 4a (R1 = Ph, R2 = NEt2) and 4b (R1 = Ph, R2 = morpholino) along with the corresponding side product H. These crossover experiments indirectly confirm that the proposed pathway of formation of compounds 4 (Scheme 2) includes a retro-HDA stage and the formation of acyl(imidoyl)ketenes C.
General Information
1H and 13C NMR spectra (Supplementary Materials) were acquired on a Bruker Avance III 400 HD spectrometer (Switzerland) (at 400 and 100 MHz, respectively) at 313 K in CDCl3 (stab. with Ag) or DMSO-d6, using the TMS or HMDS signal (in 1H NMR) or solvent residual signals (in 13C NMR, 77.00 for CDCl3, 39.51 for DMSO-d6; in 1H NMR, 7.26 for CDCl3, 2.50 for DMSO-d6) as internal standards. 13C ssNMR spectra were acquired on a Bruker Avance III 400 WB NMR spectrometer (Switzerland) (at 100 MHz). Melting points were measured on a Mettler Toledo MP70 apparatus (Switzerland). Elemental analyses were carried out on a Vario MICRO Cube analyzer (Germany). The reaction conditions were optimized using UPLC-UV-MS (Waters ACQUITY UPLC I-Class system (USA); Acquity UPLC BEH C18 column, grain size of 1.7 µm; acetonitrile-water (water containing 0.1% formic acid) as eluents; flow rate of 0.6 mL/min; ACQUITY UPLC PDA eλ Detector (wavelength range of 230-780 nm); Xevo TQD mass detector; electrospray ionization (ESI); positive and negative ion detection; ion source temperature of 150 °C; capillary voltage of 3500-4000 V; cone voltage of 20-70 V; vaporizer temperature of 200 °C) and HPLC-UV (Hitachi Chromaster system (Japan); NUCLEODUR C18 Gravity column (particle size of 3 µm); acetonitrile-water as eluents; flow rate of 1.5 mL/min; Hitachi Chromaster 5430 diode array detector (λ 210-750 nm)). CO was detected by Gazoopredelitel GH-4 gas detector tubes (USSR) (specifications 12.43.20-76). The single-crystal X-ray analyses of compounds 3a, 3i, 4b, 4f, 4g, and 4i were performed on an Xcalibur Ruby diffractometer (Agilent Technologies, UK). The empirical absorption correction was introduced by the multi-scan method using the SCALE3 ABSPACK algorithm [34]. Using OLEX2 [35], the structures were solved with the SHELXS program [36] and refined by full-matrix least-squares minimization in the anisotropic approximation for all non-hydrogen atoms with the SHELXL program [37]. Hydrogen atoms were positioned geometrically and refined using a riding model. Thin-layer chromatography (TLC) was performed on Merck silica gel 60 F254 plates using EtOAc/toluene (1:5 v/v), toluene, or EtOAc as eluents. Starting compounds 1a-j were obtained according to reported procedures [25,33,38,39]. Toluene for procedures involving compounds 1 was dried over Na before use. Acetonitrile for procedures involving compounds 1 was dried over 4 Å molecular sieves before use. All other solvents and reagents were purchased from commercial vendors and used as received. Procedures involving compounds 1 and 3 were carried out in oven-dried glassware.
Synthetic Methods and Analytic Data of Compounds
3.2.1. General Procedure for Compounds 3a-j,l,m

A suspension of the corresponding FPD 1 (0.76 mmol) [25,33,38,39] and the corresponding cyanamide 2 (0.84 mmol) in 4 mL of a solvent (anhydrous acetonitrile (for 1a-h) or anhydrous toluene (for 1i)) was stirred and heated at 95 °C for 16 h (until the disappearance of the dark violet color of compound 1) in an oven-dried capped vial. The reaction mixture was then cooled to room temperature, and the resulting precipitate was filtered off to afford the desired compound 3. Compound 3 was pure enough to be used further without additional purification. For the trione 3i, no precipitate was formed after cooling the reaction mixture. Therefore, the reaction solvent (acetonitrile) was removed on a rotary evaporator. The resulting solid was dissolved in toluene (2 mL). Then, petroleum ether (bp 70-100 °C) (6 mL) was added to the toluene solution, and the resulting precipitate was filtered off to afford compound 3i.
3.2.2. General Procedure for Compounds 4a-i,l,m
A mixture of the corresponding compound 3 (0.4 mmol) and the corresponding cyanamide 2 (0.04 mmol) was placed in an oven-dried tube, pressed slightly, and then heated in a metal bath at 190-245 °C (the temperature for each compound is given in Table 2; caution: CO evolves during the reaction) for 3 min. The reaction mixture was cooled to room temperature and recrystallized from about 3 mL of a solvent (acetonitrile (for 3a-h) or toluene (for 3l,m)) to give the appropriate compound 4. In the case of compound 3i, the reaction mixture was cooled to room temperature and dissolved in 1 mL of ethyl acetate. Then, 5 mL of n-hexane were added, and the resulting precipitate was filtered off to afford compound 4i.
"year": 2022,
"sha1": "a758f957649c173fa65011a776c9dceb5cd02ae7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/16/5257/pdf?version=1660744585",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "246232c50f1968e476e797a21ddf6a463a227124",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
Economic Cooperation in the Greater Mekong Sub-Region
This paper investigates the impact of regional economic cooperation between Vietnam and its partners during the 2000-2015 period, focusing on the Greater Mekong Sub-region (GMS). Overall, the GMS represents a significant portion of Vietnam's trade portfolio. China is the dominant trading partner in the GMS, exhibiting strong influence over Vietnam's trade, especially its imports. However, using the gravity model of trade, we find that Vietnam has not benefited from GMS cooperation, as exemplified by its significant trade deficit, particularly with China. We further find that human capital enhancement and financial development are key factors that facilitate Vietnam's trade and mitigate its trade deficit with other GMS member countries. This study thus provides important policy implications for the Vietnamese government, as well as for policymakers in other countries trading with larger partners in the context of regional economic cooperation.
I. Introduction
It is widely assumed that regional economic cooperation can facilitate trade and bring economic gains to member states by deterring violent conflicts and misconceptions (Hoekman, 2004; Schmidt, 2004), calling for a more in-depth investigation of the cooperation-trade nexus, as well as of the potential positive channels and/or circumstances that could affect trade.
The remainder of this paper is structured as follows. Section II overviews the literature on regional economic cooperation and trade. Section III discusses current economic GMS cooperation, as well as Vietnam's participation. Section IV discusses the data and model. Section V reports the empirical results. Section VI discusses potential channels affecting trade between Vietnam and the other GMS countries. Section VII concludes and offers policy recommendations.
II. Regional Economic Cooperation and Trade - A Review of the Literature
The literature documents a positive relationship between regional economic cooperation and trade gains (Aitken, 1973; Khazeh and Clark, 1990; Fernandez and Portes, 1998; Iyoha, 2005). Aitken (1973) empirically assessed the impact of the European Economic Community and the European Free Trade Association on member trade and found increased trade growth. Fernandez and Portes (1998) investigated the impact of regional trade agreements on their partners, finding that in addition to the traditional gains of trade, a number of non-traditional benefits, including credibility, signaling, bargaining power, insurance, and coordination, could be achieved when joining a regional trade agreement. Iyoha (2005) discussed how regional economic integration affects intra-African trade, agreeing with Fernandez and Portes (1998) that economic cooperation positively affects trade, primarily by increasing market size, competition, and investment. Yang and Martinez-Zarzoso (2014) found a positive relationship between regional cooperation and trade between China and the ASEAN Free Trade Area, mostly due to the removal of tariff barriers between intra-bloc and extra-bloc countries. Nevertheless, regional economic cooperation may not always benefit members. Yang and Gupta (2008) found that African regional economic cooperation does not significantly promote bilateral trade, possibly because of high external trade barriers and low resource complementarity among members. Cooperation can also bring negative effects if trade diversion outweighs trade creation (Todaro & Smith, 2015).
The literature on the GMS is sparse. Yu (2003), Krongkaew (2004), and Poncet (2006) are among the few papers focusing on the Mekong area; they investigated the GMS but were not able to definitively assess the effects of Mekong cooperation on its members. Specifically, Yu (2003) investigated the benefits and drawbacks of constructing a regional power-grid market in the GMS, arguing that institutional barriers, national interest, and differing levels of energy development are major obstacles that challenge the potential success of this common energy market. Thus, GMS countries should accelerate regional cooperation with a flexible approach to energy policy and energy reforms. Krongkaew (2004) asserts, with a more nationalist attitude, that smaller countries are more likely to suffer a patron-client relationship, in which the larger countries accrue all benefits through size and market power. Subsequently, Poncet (2006) analyzed the evolution of Yunnan's economic integration with the GMS, finding that bilateral trade between Yunnan and the GMS decreased over the 1988-1999 period.
Despite little investigation of the impact of Mekong cooperation on Vietnam's trade, researchers have posited that the long-standing issues of a lack of experience in international business and overlapping bureaucratic administrative systems can obscure any benefits from cooperation (Schmidt, 2004). Pham (1999) even posited that regional economic cooperation could negatively impact Vietnam's economy because domestic firms lack competitiveness and become vulnerable when Vietnam economically cooperates with more advanced nations. In addition, the removal of tariffs may increase non-tariff barriers that Vietnamese firms usually fail to satisfy, such as the Sanitary and Phytosanitary Measures and the Technical Barriers to Trade. This makes it difficult for Vietnam's goods to be exported (OECD, 2005), resulting in accelerated trade deficits as imports exceed exports, hampering the country's economy. To this end, while the Vietnamese government has provided support for GMS cooperation, whether or not the country can expect trade gains is still an open empirical question that warrants investigation.

III. Regional Cooperation in the GMS

A. The Mekong basin and regional cooperation

The Mekong is the twelfth largest river in the world and the longest in Southeast Asia.
The river rises in the Tibetan highlands and flows through the Chinese province of Yunnan; Myanmar, Laos, Thailand, and Cambodia; and finally Vietnam. The GMS, an area of 2.3 million square kilometers with a population of 242.8 million, is an important geopolitical region.
The GMS has to date employed up to 15 mechanisms, including both intra-bloc and extra-bloc cooperation. As for the former, cooperation in the Mekong Basin began in 1957, when the Mekong Committee was launched at the initiative of the United Nations Economic Commission for Asia and the Far East (ECAFE). The Committee's principal objectives were to solve the rising problems of poverty and political instability along the lower river basin and to promote peace and prosperity through effective joint exploitation of the river's resources.
However, Mekong cooperation was interrupted by subsequent wars and conflicts. The process regained momentum in 1992, when the six riparian countries of the Mekong River, assisted by the Asian Development Bank (ADB), formalized the sub-regional cooperation that came to be known as the Greater Mekong Sub-region. Since then, the ADB has been the main sponsor of most cooperative initiatives. The GMS program has been directed toward facilitating sustainable economic growth and improving living standards in the Mekong region through factor input specialization and greatly expanded trade and investment. Since 1992, the ADB has loaned US$ 280 million to priority projects and disbursed US$ 7.6 million for technical assistance to study suitable programs and projects and for project consultation activities.
In addition, the UN, the World Bank, and sponsor countries such as Japan, France, Australia, and the US provide support, broadening the context of GMS cooperation. For example, the Japanese government has recently promised JPY 750 billion in the form of Official Development Assistance to the GMS in order to promote infrastructure and sustainable development, and the United States has pledged to assist the development of the region by sending its experts and scholars.
B. Vietnam's participation in regional and economic cooperation in the Greater Mekong Sub-region

The Mekong river plays a pivotal role in Vietnam's economic activities: twenty percent of Vietnam lies within the Mekong Basin, which produces half the country's agricultural products, including eighty percent of rice, ninety percent of rice exports, and roughly fifty percent of seafood exports. Thus, Vietnam has contributed to Mekong cooperation, mostly through promoting trade, facilitating collaboration initiatives, and providing financial assistance to neighboring countries.
The Vietnamese government has simplified customs procedures for goods and for people crossing borders.
It grants travel rights to vehicles of the GMS members, and has invested in large infrastructure projects, including a number of expressways and harbors, to link Vietnamese provinces with other GMS cities in order to facilitate cooperation and trade. Ultimately, this has accelerated trade volume between Vietnam and GMS members.
IV. Data and Model

A. Data and sample overview
The data used for our analysis were extracted, processed, and merged from a number of sources. Specifically, import and export volumes were retrieved from the World Integrated Trade Solution. Our sample comprises 188 countries with whom Vietnam has a trading relationship. Our specification builds on the earliest gravity model, proposed by Tinbergen (1962):

$$T_{ij} = G\,\frac{Y_i Y_j}{D_{ij}} \qquad (1)$$

Equation (1) explains the volume of bilateral trade flows ($T_{ij}$) between country i and country j (where i is Vietnam and j is its trading partner country), depending on: the GDP of country i ($Y_i$); the GDP of country j ($Y_j$); and the geographical distance between the two countries ($D_{ij}$). Additionally, the gravity model contains a gravitational constant, G, which is independent of both i and j and captures country-independent effects.

Based on (1), we estimate the following equation:

$$\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij} + \varepsilon_{ij} \qquad (2)$$

Equation (2) is the simplest form of the gravity model. However, this specification has been criticized for being incomplete, since it disregards multilateral resistance factors to trade such as trade agreements, common language, common colonial base, remoteness, and adjacency (Anderson and van Wincoop, 2003). We therefore estimate the augmented specification

$$\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln y_i + \beta_4 \ln y_j + \beta_5 \ln DIFGDP_{ij} + \beta_6 \ln D_{ij} + \beta_7 OPEN_j + \beta_8 CONFLICT_{ij} + \gamma' \mathbf{Z}_{ij} + \varepsilon_{ij} \qquad (3)$$

where $T_{ij}$ is the trade flow between countries i and j. Here, we separately consider trade in terms of the total volume of Vietnam's exports to partner countries and the total volume of Vietnam's imports from partner countries. $Y_i$ and $Y_j$ are the GDP of countries i and j, respectively, and $y_i$ and $y_j$ indicate the GDP per capita of country i and country j, respectively. $DIFGDP_{ij}$ is the absolute value of the difference in GDP between country i and country j. $D_{ij}$ is the geographical distance between the two countries. $OPEN_j$ reflects the level of openness in terms of trade, measured by the sum of total exports and imports divided by total GDP. $CONFLICT_{ij}$ denotes an historical conflict between countries i and j, equal to 1 if conflict has existed and 0 otherwise. $\mathbf{Z}_{ij}$ collects the multilateral resistance dummies (e.g., contiguous borders, trade agreements, and regional membership), and $\varepsilon_{ij}$ denotes the residual term.
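As an illustration of how such a specification can be estimated, the sketch below uses the RandomEffects estimator of the Python linearmodels package, matching the panel random-effects approach adopted in Section V; the input file and column names (vietnam_trade_panel.csv, exports, gdp_vn, gdp_partner, distance, openness, gms) are hypothetical placeholders, not the authors' data or code.

```python
# Hedged sketch of a random-effects gravity estimation (illustrative only).
import numpy as np
import pandas as pd
from linearmodels.panel import RandomEffects

df = pd.read_csv("vietnam_trade_panel.csv")    # one row per partner-year (hypothetical file)
df = df.set_index(["partner", "year"])         # entity-time MultiIndex required by linearmodels

y = np.log(df["exports"])                      # ln T_ij
X = pd.DataFrame({
    "const": 1.0,
    "ln_total_income": np.log(df["gdp_vn"] * df["gdp_partner"]),
    "ln_distance": np.log(df["distance"]),     # time-invariant, retained under random effects
    "ln_difgdp": np.log((df["gdp_vn"] - df["gdp_partner"]).abs()),
    "openness": df["openness"],
    "gms": df["gms"],                          # time-invariant GMS membership dummy
})

res = RandomEffects(y, X).fit()
print(res.summary)
```

Under this setup, the coefficient on the gms column plays the role of the regional dummy discussed in the results.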
V. Empirical Results - Baseline Regressions
The gravity model has traditionally been estimated cross-sectionally. This approach has been criticized for obtaining unreliable results, since it might exclude or mis-measure trading-pair-specific variables (Baldwin, 2006) and might be unable to capture the relevant relationships between variables over time. To mitigate this problem, we estimate our gravity model using time-series cross-sectional (panel) data.
The two common methods used in panel data analysis are fixed effects and random effects. The former is used to surmount the inability of the traditional pooled cross-section method to cope with bilateral (exporter and/or importer) heterogeneity (Cheng & Wall, 2005; Anderson & van Wincoop, 2003). However, this approach does not allow for the inclusion of key exporter- and importer-invariant factors, and consequently excludes important, economically relevant trade variables (Prehn et al., 2016). As our main focus is on time-constant variables, such as Greater Mekong Sub-region cooperation, dummies indicating GMS member countries, and the geographical distance from Vietnam to its trading partners, the random-effects approach is more appropriate. Therefore, we follow the literature (Kavallari et al., 2008; Keum, 2010; Prehn et al., 2016) and employ a panel random-effects approach to evaluate the trade flows between Vietnam and other countries.

A. Exports

Overall, the estimated coefficients of the explanatory variables Ln (Total income) and Ln (Distance) are consistent with the expected signs. Exports from Vietnam increase with the GDP of Vietnam and its trading partners (p < 0.01) and decrease with distance. In addition, the GDP per capita of the destination and regional countries is negatively associated with export volume, because the estimated coefficients on Ln (Regional per capita income) and Ln (Destination per capita income) are negative and statistically significant at p < 0.05 and p < 0.01, respectively. We further find that a larger difference in economic size (DIFGDP) between countries is positively related to export flows, since the estimated coefficient on Ln (DIFGDP) is positive and statistically significant at p < 0.1. Moreover, the relationships between trade openness and contiguous borders, as well as trade agreements, and Vietnam's exports are positive and significant.
Regarding the regional dummies, the estimated coefficient on GMS is negative and statistically significant (Column 1), indicating that the level of goods and services exported from Vietnam to GMS member countries is below the normal level. Specifically, the export flow from Vietnam to GMS members is 62.4 percent [exp(-0.977) - 1 = -0.624] lower than the normal level predicted by the countries' GDP, the distance between them, and multilateral resistance factors.

B. Imports

The regression in Column 1 displays results with the incorporation of the GMS regional dummy to examine the import flow into Vietnam from GMS members. To attain a more detailed result, we replicated the procedure in the Exports sub-section and separately incorporated dummy variables indicating China, Laos, Cambodia, Myanmar, and Thailand into the specification.
Columns 2-6 present the specification including only one of the GMS countries at a time.
Overall, the signs of the coefficients of the two basic explanatory variables Ln (Total income) and Ln (Distance) comport with the classical results of the gravity model. We further find that a large difference in GDP between Vietnam and its trading partners resulted in an increase in Vietnam's import volume. In addition, the effects of a contiguous border and trade agreements on Vietnam's imports are positive and statistically significant.
The estimated coefficients of the GMS regional dummy, although positive, are not statistically significant. The estimated coefficients of China and Thailand are positive and significant, indicating that Vietnam's imports from China and Thailand are higher than the normal level. These results attest that China and Thailand account for a large proportion of Vietnam's total imports, and the import volume from these countries increased significantly from 2000 to 2015. On the other hand, we find that Vietnam's imports from Cambodia are below the normal level, as the estimated coefficient on Cambodia is negative and statistically significant.
Taken together, our results comport with Krongkaew (2004), who argued that when entering into cooperation with GMS members, smaller countries like Vietnam may not benefit from trade gains, since the larger trading partners (e.g., Thailand and China) can take advantage of their size and market power to accrue all the gains from trade.
VI. Channels That Can Facilitate Trade
In this section, we investigate channels that help to facilitate Vietnam's trade with its partners, especially with the GMS, principally human capital enhancement and financial development.
A. Human capital
A large number of studies consider human capital as a critical factor in determining the growth of trade flows (Contractor & Mudambi, 2008; Levin & Raut, 1997; Gomez-Mejia, 1988). Levin and Raut (1997) found significant complementarities between export performance and human capital. In addition, Contractor and Mudambi (2008) examined the human capital investment and export performance of the top 25 service-outsourcing countries, finding that human capital significantly affects the exports of both services and goods. Based on the literature, we expect human capital enhancement to promote Vietnam's imports and exports.
To test this hypothesis, we incorporate into the baseline regression (3) a variable capturing the human capital level in Vietnam, as well as its interaction term with the GMS dummy.
Specifically, based on the literature, we include the natural logarithm of the total number of university and college students in Vietnam to proxy for human capital. Next, we follow the procedure conducted in earlier sections of this study and examine whether human capital enhancement yields a significant impact on the bilateral trade of Vietnam with each GMS member. We first estimate our specification for export volumes and then replicate this exercise for imports. The results are reported in Table 8.

B. Trade openness and financial development

Beck (2002) and Shahbaz and Rahman (2014) have asserted that development of the financial sector could facilitate trade flows. According to Rajan and Zingales (1998) and Svaleryd and Vlachos (2005), the development of the financial sector allows enterprises to access external funds more conveniently by increasing the amount of accessible funds and reducing the cost of external financing. As a result, it could reduce the problem of liquidity shortage and ultimately promote trade.
Another strand of the literature evidences the positive impact of trade openness on trading activities, especially exports (Arslan & Wijnbergen, 1993; Ahmed, 2000; Krueger, 1997), partly because trade liberalization diminishes anti-export bias and helps exports become more competitive in the international market. Hence, in this sub-section, we conduct another robustness check, examining whether trade openness and financial development affect Vietnam's trade.
We follow Lee and Chang (2009) and use the ratio of liquid liabilities to GDP to proxy for financial development (Financial development), and incorporate it into Eq. (3) along with its interaction term with Trade openness. The results (Table 9) suggest that financial development could be an important channel that facilitates Vietnam's exports and helps reduce its reliance on imports, thus mitigating the trade deficit.
The signs of the coefficients of the other variables largely comport with our prior findings when we include Financial development and its interaction term with Trade openness in Eq. (3).
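For illustration, the channel variables and interaction terms described in this section could be constructed as follows, continuing the hypothetical file and column names of the earlier sketch; liquid_liabilities, students, and gdp are likewise placeholders.

```python
# Illustrative construction of the Section VI channel and interaction variables.
import numpy as np
import pandas as pd

df = pd.read_csv("vietnam_trade_panel.csv").set_index(["partner", "year"])

df["fin_dev"] = df["liquid_liabilities"] / df["gdp"]      # liquid liabilities / GDP
df["fin_dev_x_open"] = df["fin_dev"] * df["openness"]     # Financial development x Trade openness
df["ln_students"] = np.log(df["students"])                # human-capital proxy
df["ln_students_x_gms"] = df["ln_students"] * df["gms"]   # human capital x GMS dummy
# These columns are then appended to the exogenous set and Eq. (3) is re-estimated
# with RandomEffects exactly as in the previous sketch.
```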
VII. Conclusion and Policy Implications
The nexus between regional economic cooperation and trade gains has been studied by many researchers. Although researchers and policy makers have investigated this nexus theoretically (Fernandez and Portes, 1998; Iyoha, 2005) and empirically (Aitken, 1973; Yang & Gupta, 2008; Yang & Martinez-Zarzoso, 2014), their findings have not reached a consensus.
China's recently proposed Belt and Road Initiative (BRI) has revived the debate on the effectiveness of regional cooperation. Since the impact of regional economic cooperation on trade outcomes is contingent upon country-specific features (e.g., economic size) and capacities (the level of financial development and human capital), it is vital that economists and policymakers conduct more in-depth investigations into whether, and through which channels, cooperation facilitates or impedes trade. Our paper responds to this need by empirically assessing the trade relations between Vietnam and its GMS partners.
Using the gravity model of trade on a large sample of Vietnam and its trading partners during the period 2000-2015, we found that regional economic cooperation in the GMS did not facilitate Vietnam's trade as expected, as evidenced by perennial trade deficits. Specifically, Vietnam's exports to GMS members are far below the level predicted by distance, relative economic size, and development. For example, we found that Vietnam's exports to China (its largest trading partner) and Myanmar are much lower than the expected level, whereas the total value of Vietnam's imports from China and Thailand is above the normal level. We also found that human capital enhancement and financial development are two important channels that facilitate Vietnam's trade and help mitigate its trade deficits.
Our study recommends policies to stimulate trade flows. The empirical results may help the Vietnamese government identify the benefits and costs of participating in GMS cooperation, subsequently developing an appropriate trade strategy. In addition, we recommend that governments of small and developing countries like Vietnam should devote more effort to improving their financial systems, and invest in human capital in order to encourage domestic production, improve the quality of the labor force, and lessen their dependence on imports from other countries.
"year": 2020,
"sha1": "e038054f96eedabe5a6079a20389510ea79b1502",
"oa_license": null,
"oa_url": "https://www.e-jei.org/upload/JEI_35_2_240_263_2013600216.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fca32142f538082655773d90c2d07e781683f792",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Accuracy of delirium risk factors in adult intensive care unit patients
Abstract Objective: To assess the accuracy measures for predisposing and precipitating risk factors for delirium in an adult Intensive Care Unit. Method: Prospective cohort study with patients over 18 years old who had been hospitalized for over 24 hours and were able to communicate. The patients were assessed once a day until the onset of delirium or the end of their stay in the Intensive Care Unit. Instruments were employed to track delirium, characterize the sample, and identify the risk factors. Descriptive statistics were employed for sample characterization, and accuracy tests for the risk factors. Results: 102 patients were included, 31 of whom presented delirium. The predisposing predictive risk factors were hypoalbuminemia, American Society of Anesthesiologists score over three, severity, altered tissue perfusion, dehydration, and being male, whereas the precipitating predictive factors were physical restraint, infection, pharmacological agents, polypharmacy, anemia, altered renal function, dehydration, invasive devices, altered tissue perfusion, and altered quality and quantity of sleep. Conclusion: Accurate identification of predisposing and precipitating risk factors may contribute to planning preventive measures against delirium.
INTRODUCTION
Delirium is described as a disturbance in attention, consciousness, and cognition, with a brief period of development and a severity that oscillates throughout the day; it is related to physiological changes in the individual (1). Its pathophysiology is not completely established, and the main hypothesis refers to a change in the concentration of neurotransmitters such as acetylcholine, serotonin, dopamine, melatonin, noradrenaline, and gamma-aminobutyric acid (GABA). The increased secretion of cytokines and their high release under chronic stress result in inflammation and increase the permeability of the blood-brain barrier, changing neurotransmission. Delirium is thus a likely result of different pathogenetic mechanisms, which may lead to reduced oxidative metabolism in the brain (2).
The incidence of delirium varies according to the studied population. It is identified in up to 83.3% of patients on mechanical ventilation (MV) in general Intensive Care Units (ICU) and corresponds to the most frequent neurological dysfunction (3). It is held responsible for increased length of ICU stay, functional decline, higher institutionalization, long-lasting hospitalization, loss of invasive devices, higher costs, and a higher mortality rate (4).
To track delirium in the ICU, nurses may use the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), conceived for patients in a severe state or on MV, with an overall sensitivity of 72.5% and specificity of 96.2% for Brazilian Portuguese (5). In addition to its identification, preventive actions must be prioritized to decrease the risk of developing delirium, given its deleterious consequences for patients and the health system. Understanding the factors leading to its development is thus essential in this process (4,6).
Delirium may be triggered by only one Risk Factor (RF) but is frequently considered a multifactorial condition. In most cases, there is an inter-relation between predisposing (vulnerability of the individual) and precipitating factors (damaging events during hospitalization) (6) .
Some RF are described in the literature, such as being over 65 years old, patient severity, smoking, drinking, hypertension, dehydration, previous cognitive impairment, a high number of hospitalization days, use of sedatives and analgesics, mechanical restriction, invasive devices, MV, and pain (6-11). However, the nurse must identify the most accurate ones for their patients' profile, with an emphasis on RF with independent effects, supporting the early determination of preventive interventions. In this context, the essential work of the nursing team is emphasized, since it maintains bedside surveillance, which enables early identification of RF (4,6).
The use of accuracy measures to determine the RF most likely to lead to the diagnosis shows the weight of each factor in delirium as a phenomenon, directing nursing practice to be based on scientific evidence (12). That said, the nurse, who is responsible for planning patient care, must identify the priority interventions for solving nursing problems. Recognizing RF related to delirium is insufficient; identifying those which are more accurate for the outcome is a necessity. To date, there are no studies providing accuracy measures for both predisposing and precipitating RF for the development of delirium (4,6).
In this context, this study aims to assess the accuracy measures of predisposing and precipitating RF of delirium in adult ICU patients. The expectation is thus to identify RF that can support nurses' future planning of care for patients susceptible to delirium.
METHOD

Design of study
Prospective cohort study guided by the instrument Standards for Reporting Studies of Diagnostic Accuracy (STARD) (13) .
Setting
This study was performed in the ICU of a university hospital in the inland of São Paulo state, Brazil. The hospital had 410 inpatient beds, 51 of which belonged to the adult ICU.
Sample definition
The population comprised patients hospitalized in the adult ICU for over 24 hours. The sample was obtained by convenience over a five-month collection period (September 2018 to January 2019).
Selection criteria
Patients over 18 years old who were capable of responding to the CAM-ICU verbally or through gestures were randomly included. Patients who, due to changes in their clinical conditions, were unable to respond to the items of this instrument were excluded (5). For statistical analysis, both patients who developed delirium during the assessment period and those who did not were included.
Data collection
The participants were assessed once a day by the researchers. The main researcher was responsible for clinical assessments, whereas the other researchers were responsible for medical record data collection. This was performed daily (morning, afternoon, or evening) from September 2018 to January 2019. Cases were followed up until the development of delirium. Patients who did not present this outcome continued to be assessed until ICU discharge, death, or transfer. The main researcher completed the case studies suggested by the CAM-ICU before starting clinical assessments.
In addition, an instrument elaborated by the researchers was employed for data collection. It aimed at characterizing the population and identifying the RF, as well as their conceptual definitions (theoretical meaning) and operational definitions (how a given concept is applied and measured in practice). To identify all RF available in the literature, regardless of the clinical profile of the assessed patients, an Integrative Literature Review (IR) was performed, comprising the phases of theme identification, literature search, study categorization, assessment of included studies, interpretation of results, and synthesis of the knowledge available in the analyzed articles (14).
The content of this instrument was analyzed and assessed by three judges of the Study and Research Group on Nursing Care Technologies (Grupo de Estudos e Pesquisa sobre Tecnologias do Cuidar em Enfermagem).
The following were obtained through clinical assessment or direct interview with the patients: MV, defined as artificial ventilation by the application of positive pressure to the airways, which may be invasive or non-invasive (7,11); physical restraint (mechanical restriction to bed, multiparametric monitoring, prescribed movement inhibition with or without orthotics and external fixators) (7); functional impairment (Barthel index) (7,9); pain (visual analog pain scale) (11); invasive devices (number and types of catheters and drains) (7,19); altered visual acuity (use of glasses or contact lenses, total vision loss, or report of difficulty seeing) (6); alcohol abuse (volume and type of consumed alcoholic beverage) (6,21); smoking (reporting having smoked on one or more days in the last 30 days) (6,8); altered quantity and quality of sleep (how they slept and the sensation of rest) (7); and comorbidity (Charlson index) (26).
After statistical analysis, and for better comprehension, the RF were subdivided into predisposing and precipitating. During the phase of daily patient follow-up, the CAM-ICU was employed for the identification of delirium. This instrument assesses four patient features: 1 - "fluctuating mental status"; 2 - "inattention"; 3 - "disorganized thinking"; and 4 - "altered level of consciousness". For a positive assessment of delirium, the presence of features one, two, and three or of features one, two, and four was considered, according to the present/absent response pattern of the patient. The mean assessment time was five minutes (5).
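As a plain restatement of that decision rule, the following minimal sketch (illustrative only, not part of the study's instruments) encodes a positive CAM-ICU screening:

```python
# CAM-ICU decision rule as described above: delirium is positive when
# features 1 and 2 are present together with feature 3 or feature 4.
def cam_icu_positive(fluctuating_mental_status: bool,
                     inattention: bool,
                     disorganized_thinking: bool,
                     altered_consciousness: bool) -> bool:
    return (fluctuating_mental_status and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: features 1, 2 and 4 present -> positive screening
print(cam_icu_positive(True, True, False, True))  # True
```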
Data analysis and treatment
The data were stored in Microsoft Excel® spreadsheets for the application of descriptive statistics to characterize the sample and to obtain frequencies, measures of central tendency (mean, median, minimum, and maximum) and dispersion (standard deviation, SD) (12). The accuracy of the RF was assessed through the calculation of Sensitivity (SE), Specificity (SP), Positive Predictive Value (PPV), Negative Predictive Value (NPV), Positive Likelihood Ratio (LR+), Negative Likelihood Ratio (LR-), and Diagnostic Odds Ratio (DOR) (12,27).
The test's SE refers to the probability of the test being positive in the presence of the outcome; SP refers to the probability of the test being negative in the absence of the outcome; PPV is the probability of the outcome when the test is positive; NPV is the probability of the absence of the outcome when the test is negative. LR+ refers to the probability of a positive result for sick individuals over the probability of a positive result for healthy individuals; LR- refers to the probability of a negative result for sick individuals over the probability of a negative result for healthy individuals; DOR represents the odds of the outcome among those exposed divided by the odds of the outcome among the non-exposed (12,27).
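Expressed in terms of the standard 2 × 2 table of a risk factor against the outcome (true positives TP, false positives FP, false negatives FN, true negatives TN), these measures correspond to the usual formulas:

$$SE = \frac{TP}{TP + FN}, \qquad SP = \frac{TN}{TN + FP}, \qquad PPV = \frac{TP}{TP + FP}, \qquad NPV = \frac{TN}{TN + FN}$$

$$LR^{+} = \frac{SE}{1 - SP}, \qquad LR^{-} = \frac{1 - SE}{SP}, \qquad DOR = \frac{LR^{+}}{LR^{-}} = \frac{TP \cdot TN}{FP \cdot FN}$$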
To determine accuracy, the following cut points were adopted: SE higher than 0.6 (given that there is no established theoretical framework for this cut point), LR+ over one, LR- lower than one, and DOR over one. The more sensitive a factor, the higher the chance of the event in the presence of the RF; considering the high variation in delirium incidence, this cut point was chosen because, at this point, the chance of developing delirium is higher than that of not developing it in the presence of a given RF (27). When sensitivity was equal to 1.00, the term (1 - SE) was zero, so the DOR could not be calculated.
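As an illustration of how these measures and the adopted cut points can be computed, the sketch below uses made-up counts, not study data:

```python
# Accuracy measures from a 2x2 table of a risk factor (exposed/unexposed)
# against delirium (present/absent), with the cut points adopted above.
def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    measures = {
        "SE": se,
        "SP": sp,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": se / (1 - sp) if sp < 1 else float("inf"),
        "LR-": (1 - se) / sp,
        # DOR is undefined when SE = 1.00 (fn = 0), matching the note above.
        "DOR": (tp * tn) / (fp * fn) if fp * fn > 0 else float("inf"),
    }
    measures["predictive"] = (se > 0.6 and measures["LR+"] > 1
                              and measures["LR-"] < 1 and measures["DOR"] > 1)
    return measures

print(accuracy_measures(tp=25, fp=30, fn=6, tn=41))  # illustrative counts only
```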
Finally, all analyses considered a 5% significance level. The analysis was assisted by statistical software Statistical Analysis Software® version 9.4 and Statistical Package for the Social Sciences® 22.0.
Ethical aspects
The study abided by the Brazilian National Health Council's Resolution n. 466/12, referring to research involving human beings, and was approved in 2018 by the Ethics Committee of Universidade Estadual de Campinas on opinion 2.502.946. To start the assessments, the patient and/or responsible were previously approached for research clarifications and so that signature of the Informed Consent Form (ICF) could be requested. The assessments were started only upon authorization.
During data collection, the patient was approached and oriented concerning the procedure on all occasions. When delirium or any complaint was identified, it was reported to the health team in charge of the patient's care.
RESULTS
The study included 102 patients during the data collection period. No individuals were excluded. Among the included patients, 31 (30.4%) presented delirium. Those who did not were assessed until ICU discharge, death, or transfer, with a mean of 7.4 days of assessment per individual. The sample had a mean age of 54 years (SD 15.4). The other sociodemographic data are described in Table 1.
Other causes of hospitalization included myasthenia gravis, porphyria, diabetic ketoacidosis, and grade IV hepatic encephalopathy. Among the RF identified in the literature, it was not possible to assess dementia, since this diagnosis did not appear in the analyzed medical records. In this study, altered renal function was analyzed only as a precipitating factor, given that the obtained data were not sufficient to differentiate between chronic and acute alteration.
The accuracy measures of the predisposing RF are presented in Table 2.
The RF alcohol abuse, functional impairment, and history of delirium had a high specificity, showing that, in the absence of these factors, delirium has a 94-97% probability of not being present.
The accuracy measures of the precipitating RF are presented in Table 3. The analysis of the RF "invasive devices" identified a mean of two devices per individual. The analysis was thus built based on this result.
The RF blood transfusion and MV presented a high specificity, showing that, in the absence of these RF, delirium has a 95-98% probability of not being present.

Table 3 - Measures of accuracy of the precipitating RF for the development of delirium regarding sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio.

After assessing the accuracy of the RF pharmacological agent, each agent class was analyzed separately (proton-pump inhibitors, analgesics, opioid analgesics, moderate cholinesterase inhibitors, antipsychotics, corticosteroids, hypnotics/anxiolytics, very strong cholinesterase inhibitors, antidepressants, and general anesthetics). Only proton-pump inhibitors (SE = 0.6875) and analgesics (SE = 0.6875) were predictive of the development of delirium. No patient with delirium received general anesthetics during the assessment period, and the sensitivity of this item was thus zero.
The RF "invasive devices" was analyzed separately and the Indwelling Bladder Catheter (SE -0.8750) was the most predictive of the development of delirium, followed by the Central Venous Catheter (SE -0.7500) and the Nasoenteral Catheter (SE -0.7500).
DISCUSSION
In this study, 30.4% of the patients presented delirium, which corroborates the incidence reported in the intensive care environment, which may reach up to 83.3% (3) and is influenced by the characteristics of the population, the presence of specific tracking methods, and the preventive measures implemented for this outcome (2,4,5). In the study unit, there were no systematic methods for delirium identification or risk assessment, nor were preventive measures conducted. The incidence found was within that described by the literature, particularly for a setting with no specific measures for its management (20).
This study's sample was predominantly male, which is a predisposing factor described in the literature and predictive of delirium in this study (20). Half of the included patients were 60 years old or older. In Brazil, the elderly population is defined by the World Health Organization as comprising individuals over 60 years old (28). In this study, the RF age was not accurate for the development of delirium. However, old age may be related to neuronal apoptosis, reduction of cerebral blood flow, and alteration of the neurotransmitter system, which elucidates, from a physiological point of view, the relationship with increased risk of delirium discussed in other works (21).
The main cause of hospitalization was cardiac, and the most incident chronic disease among Brazilians is cardiovascular disease, which justifies this result (29). Relatedly, surgical treatment was the most frequently conducted, since 25 of the investigated beds were postoperative, and some patients of the coronary and clinical units were not exempt from possible surgical interventions during hospitalization.
The predisposing RF hypoalbuminemia presented a significantly lower value in patients with delirium, which makes it possible to state its relation to the outcome. This may indicate a poor nutritional state prior to hospitalization, loss of protein through urine, or altered production by the liver. With the resulting reduction of osmotic force, it becomes difficult to maintain intravascular volume, which may reduce brain perfusion. In addition, plasma proteins are responsible for the transport of some medications; with hypoalbuminemia, the free plasma concentration of these drugs may become high, increasing the risk of delirium (25).
The subjective measure of comorbidities and of preoperative patient condition may be assessed through the ASA classification, in which a score higher than three is predictive of delirium (21). A higher number and severity of previous comorbidities, which cause chronically altered tissue perfusion, mean that even small events during hospitalization that intensify this alteration may be determinant for the reduction of brain perfusion and the development of delirium (21).
Patient severity, defined as the intensity and extension of the organic dysfunction and of the disease presented by the individual, which influences prognosis, was identified through an APACHE score over 16 and a SOFA score of five or higher (7,15). This RF presented a high predictive power for delirium in this study, probably due to the lower physiological reserve of severely ill individuals; in this context, fewer precipitating RF are necessary for the development of delirium (6).
In addition, reduced tissue blood flow that compromises health may be identified through a previous diagnosis of systemic arterial hypertension, diabetes mellitus, mean blood pressure lower than 55 mmHg, cerebrovascular accident, or brain hemorrhage, all of which lead to altered tissue perfusion (7-8,10-11,20-21,23). These parameters must be analyzed at the moment of hospitalization and related to the likely alteration in brain tissue perfusion during the hospital stay.
Dehydration, i.e., the reduction of extracellular volume secondary to hydroelectrolytic loss, identified through a urea nitrogen/creatinine ratio over 18, was also a predisposing RF, probably due to contraction of the intravascular volume, reduction of tissue perfusion, and global reduction of the brain's oxidative metabolism (2,9). This factor may be identified upon patient admission and may act as a predisposing or precipitating RF during hospitalization.
Therefore, prevention must target these factors through therapeutic measures aimed at reducing cognitive decline and sensory loss and at ensuring adequate hydration and nutrition. In addition, the presence of relatives in the ICU should be considered, since it provides sensory stimuli familiar to the patient, in addition to reducing cognitive decline (2,6).
The precipitating RF physical restraint, validated in this study, may act through patient stress and the reduction of sensory stimulation familiar to the individual. Mechanical restraint in bed must be the last resort to maintain safety, and early mobilization is a preventive method (2,6).
Similarly, the RF infection may be related to delirium, since the inflammation accompanying this process changes the permeability of the blood-brain barrier and, consequently, neurotransmission. For this reason, actions to prevent infection may also prevent delirium (2).
Proton-pump inhibitors and analgesics (all types of medication used for pain management) were the most predictive for the outcome. However, long-term use of proton-pump inhibitors does not yet have a consistent relation with the development of delirium. The hypotheses include: (a) increased risk of infection (pneumonia and Clostridium difficile), which is an RF for delirium; (b) vitamin B12 deficiency, which increases cognitive decline; and (c) hypomagnesemia and interference in the pharmacokinetics of benzodiazepines and antidepressants, which may cross the blood-brain barrier (22).
In addition, the administration of opioids may interfere with the regular functioning of the neurological system, particularly among the elderly. In these individuals, the pharmacodynamics and pharmacokinetics of medications are altered by aging, which involves modifications in body composition and reductions in renal and liver function. They are therefore susceptible to more intense adverse or therapeutic effects, which may be related to the result obtained in this study (21,28).
Polypharmacy, defined as the administration of five or more medications within 24 hours (19), was a precipitating RF predictive of delirium, and its isolated value was significant for this outcome. This may be because it is related to patient severity, which requires a high number of medications, and/or to the stress caused by the constant manipulation needed for their administration (2).
In addition, reduced hemoglobin compromises oxygen transport to tissues, altering brain oxidative metabolism and increasing the risk of delirium. The RF anemia was validated for this population; thus, the correction of hemoglobin must be prioritized whenever patients meet transfusion criteria (2).
Altered renal function, defined as the loss of the kidneys' capacity to perform their basic functions, may be related to delirium due to the accumulation of toxins (urea and creatinine) in the organism and the consequent alteration in neurotransmission, including alterations in dopamine and serotonin, both of which are implicated in the physiopathology of delirium (2,16,20).
Also, the presence of invasive devices was one of the precipitating RF most predictive of delirium and may be related to patient severity, increased risk of infection, and physical restraint. Thus, these devices must be removed whenever possible, which may occur through daily reassessment of their indication. In this study they were categorized according to type, and the IDBC was the most predictive of this outcome, demonstrating the need for constant assessment of its indication (6,16).
The lack of scheduled care and excessive noise and light may be sources of stress for patients and may alter melatonin production and the quality and quantity of sleep, which can already be altered in the presence of delirium. Thus, schedules must be planned for medication administration, vital-sign measurement, and procedures, to allow adequate sleep (2). Patients also require actions that provide orientation in time and space, which may be achieved through calendars, clocks, familiar objects, noise reduction, and adequate illumination; such measures provide a calm and comfortable environment. The presence of family is likewise pointed out as a preventive measure against delirium, reducing patient stress and anxiety while promoting better participation in care and an improved connection with the environment (2,6,30).
Concerning this study's limitations, the number of patients included for each RF could have been higher; however, the low representation of some RF in the sample did not interfere with the outcome analysis. In addition, cognitive impairment was not assessed, since the high severity of this population did not allow an objective assessment, and the motor subtypes of delirium (hyperactive, hypoactive, mixed) were not identified, which precluded relating the RF to these subtypes.
Thus, identifying the most predictive RF for the development of delirium in a specific population has shown that not all factors identified in the literature are sensitive to the outcome. Nurses therefore need to recognize the demands of their population, the accurate RF, and their physiological relation with delirium in order to direct specific and efficient nursing interventions.
The identification of these RF may support the planning and implementation of preventive actions for the studied population, particularly given the evidence on which RF are most predictive of delirium in the context of intensive care. In addition, this study shows that the RF identified in the literature are not always present in all populations, and accuracy studies are needed to determine which are the most predictive, so that nurses can plan their interventions.
Preventive actions against RF aim at the non-intensification of predisposing factors and the prevention of precipitating factors during hospitalization. The nurse thus plays a vital role in the early identification of these RF and the subsequent direction of care.
CONCLUSION
In this study, the predisposing RF most predictive of delirium were hypoalbuminemia, an ASA score over three, patient severity, altered tissue perfusion, dehydration, and male sex. Precipitating RF included physical restraint, infection, pharmacological agents, polypharmacy, anemia, altered renal function, dehydration, more than two invasive devices, altered tissue perfusion, and altered quality and quantity of sleep. Considering that altered tissue perfusion and dehydration are both predisposing and precipitating factors, the need to keep identifying these factors throughout hospitalization is emphasized.
"year": 2022,
"sha1": "f62d678c4c379b64c66b9c9b09d0e113e61df536",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/reeusp/a/XzTdCbXFkRc9ZGZNcPZYsxm/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05d32106dcf7dd4b9736d903eae12c429f0a6b6f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dark Matter that Interacts with Baryons: Density Distribution within the Earth and New Constraints on the Interaction Cross-section
For dark matter (DM) particles with masses in the 0.6 - 6 m_p range, we set stringent constraints on the interaction cross-sections for scattering with ordinary baryonic matter. These constraints follow from the recognition that such particles can be captured by, and thermalized within, the Earth, leading to a substantial accumulation and concentration of DM particles that interact with baryons. Here, we discuss the probability that DM intercepted by the Earth will be captured, the number of DM particles thereby accumulated over Earth's lifetime, the fraction of such particles retained in the face of evaporation, and the density distribution of such particles within the Earth. In the latter context, we note that a previous treatment of the density distribution of DM, presented by Gould and Raffelt and applied subsequently to DM in the Sun, is inconsistent with considerations of hydrostatic equilibrium. Our analysis provides an estimate of the DM particle density at Earth's surface, which may exceed 10^14 cm^-3 for the mass range under consideration. Based upon our determination of the DM density at Earth's surface, we derive constraints on the scattering cross-sections. These constraints are placed by four considerations: (1) the lifetime of the relativistic proton beam at the Large Hadron Collider (LHC); (2) the orbital decay of spacecraft in low Earth orbit (LEO); (3) the vaporization rate of cryogenic liquids in well-insulated storage dewars; and (4) the thermal conductivity of Earth's crust. As an example application of our results, we show that for the scattering cross-sections that were invoked recently in Barkana's original explanation for the anomalously deep 21 cm absorption reported by EDGES, DM particle masses in the 0.6 - 4 m_p range are ruled out.
Introduction
In recent years, there has been considerable interest in the possibility that dark matter (DM) might have astrophysically-significant non-gravitational interactions with ordinary baryonic matter, prompted in part by the suggestion that DM may be self-interacting (e.g. Spergel & Steinhardt 2000). The possible effects of such interactions have been discussed in a wide variety of astrophysical contexts, including the heating of X-ray clusters (Qin & Wu 2001; Chuzhoy & Nusser 2006); big bang nucleosynthesis (BBN; e.g. Cyburt et al. 2002); the power spectrum of the cosmic microwave background (CMB) and the Lyman-alpha forest (e.g. Dvorkin et al. 2014; Xu et al. 2018; Gluscevic & Boddy 2017); and the thermal balance of neutral atomic gas during the "Dark Ages" (Muñoz et al. 2015) and at "Cosmic Dawn" (Barkana 2018; Fialkov et al. 2018; Muñoz & Loeb 2018). Additional effects result if DM can annihilate with ordinary matter (e.g. Farrar & Zaharijas 2006; Mack et al. 2007). Such considerations set constraints on the allowable cross-sections for scattering with ordinary baryonic matter, as a function of the particle mass. For cross-sections large enough to have astrophysical effects, direct-detection searches for WIMP dark matter (e.g. Abrams et al. 2002; Aguilar-Arevalo et al. 2016; Angloher et al. 2017) do not generally apply, because DM particles lose too much energy via collisions in the atmosphere or Earth's crust to trigger the detector (e.g. Starkman et al. 1990). Numerous studies have been performed to place limits in the "moderately interacting" range of cross-sections, starting with Wandelt et al. (2001); an extensive discussion of the limits from direct-detection experiments has been presented very recently by Mahdawi & Farrar (2018). Each of the various constraints thereby obtained typically applies to specific baryonic nuclei and to a specific range of collision velocities. A wide variety of interactions have been considered, ranging from interactions involving hadronic dark matter (e.g. a stable sexaquark; Farrar 2017), characterized by a Yukawa (or double-Yukawa) potential, to long-range interactions, involving as-yet undiscovered forces or milli-charged DM, with a cross-section that is posited to decrease rapidly with collision velocity (e.g. Muñoz et al. 2015), as in the case of Rutherford scattering.
In this paper, we consider additional constraints that can be derived in the circumstance that DM particles are captured by -and concentrated within -the Earth. The concentration of DM within the Earth can be significant within the range of parameters to be considered in the present paper: DM masses, m DM , between 0.2 and 10 m p , and scattering cross-sections in the range 10 −30 to 10 −20 cm 2 for typical nuclei in the crust or atmosphere. To avoid confusion with "self-interacting dark matter" (SIDM, introduced by Spergel & Steinhardt), we adopt the acronym HIDM, for "hadronically-interacting dark matter," to refer to particles in this region of parameter space.
In Section 2, we discuss the capture of HIDM particles by the Earth, which follows the scattering and thermalization of HIDM in Earth's crust or atmosphere. In particular, we discuss the probability that a HIDM particle intercepted by the Earth will be captured, the number of HIDM particles thereby accumulated over Earth's lifetime, the fraction of such particles retained in the face of evaporation ("Jeans loss"), and the density distribution of such particles within the Earth. These considerations lead to an estimate of the DM particle density at Earth's surface, which we use in Section 3 to determine constraints on the scattering cross-sections. These constraints are placed by four considerations: (1) the lifetime of the relativistic proton beam at the Large Hadron Collider (LHC); (2) the orbital decay of spacecraft in low Earth orbit (LEO); (3) the vaporization rate of cryogenic liquids in well-insulated storage dewars; and (4) the thermal conductivity of Earth's crust. In Section 4, we present a summary of these combined constraints.
The variation of the scattering cross-section, σ^A_v, from one nucleus to another depends upon the nature of the scattering potential. For an interaction involving a long-range (1/r) potential proportional to nucleon number, the dependence is

σ^A_v ∝ A^2 / (μ_A^2 v^4),

where μ_A = m_A m_DM / (m_A + m_DM) is the reduced mass. This behavior is exactly analogous to that obtained for scattering in a Coulomb potential (i.e. Rutherford scattering), for which σ_v ∝ Z^2 / (μ^2 v^4), where Z is the nuclear charge. But for a short-range interaction, such as that described by a Yukawa potential, the situation is more complicated. In the Born approximation, the cross-section for this case is given by a simple dependence that has been widely used in previous studies:

σ^A_v = σ^H_v A^2 (μ_A / μ_H)^2

(e.g. Kurylov & Kamionkowski 2004). While this simple A-dependence might apply at high collision energies or if the coupling is weak, very substantial deviations could occur at low energies and might result in a scattering cross-section that shows a strong and non-monotonic dependence on A. Such is indeed the case for the scattering of thermal neutrons by atomic nuclei. In particular, an attractive Yukawa potential can give rise to resonances associated with quasi-bound states; this behavior opens the possibility that σ^A_v might be greatly enhanced or reduced for specific values of A (Farrar & Xu 2018). In this study, we allow for the possibility that σ^A_v varies rapidly with A in a manner poorly described by the Born approximation. Accordingly, we will present limits on σ^A_v for a variety of different nuclei and for a variety of mixtures of nuclei. In the notation we adopt here, the superscript on σ^A_v is either the chemical symbol for the nucleus in question, or denotes a medium (e.g. the crust, denoted by a superscript "cr") for which we constrain the average scattering cross-section for the various nuclei it contains. Where a number appears alone in the subscript, it is the collision velocity in km s^-1. Alternatively, the subscript may denote a collision energy (e.g. "6.5 TeV") or a temperature (e.g. "300 K") for which the velocity-averaged cross-section is constrained.
Number of HIDM particles captured by Earth
We consider first the capture of halo HIDM by the Earth. During its lifetime, the number of HIDM particles intercepted by the Earth is

N_int = (ρ_DM / m_DM) v_⊕ π R_⊕^2 t_⊕,   (3)

where t_⊕ = 4.55 Gyr (Manhes et al. 1980) is the age of the Earth, ρ_DM is the dark matter mass density in the Galactic plane, v_⊕ is the average velocity of the Earth relative to the HIDM particles, and R_⊕ = 6371 km is the radius of the Earth. In evaluating equation (3), we normalize v_⊕, m_DM, and ρ_DM relative to typical values of interest for the rotational velocity of the Sun around the Galactic Center, the HIDM mass, and the DM density in the Galactic plane. In an extensive review of the local density of dark matter, Read (2014) found that determinations of ρ_DM in the two years prior to the review were in the range 0.2 - 0.8 (GeV/c^2) cm^-3 (excluding one non-detection). Subsequent analyses have led to values ρ_DM ≈ 0.5 (GeV/c^2) cm^-3 (Bienayme et al. 2014; Piffl et al. 2014; McKee et al. 2015; and Sivertsson et al. 2017). Nonetheless, in order to be conservative, we adopt a lower value, ρ_DM = 0.3 (GeV/c^2) cm^-3, for purposes of our analysis. All the upper limits obtained in Section 3 below on the cross-sections for the interaction of HIDM with ordinary matter are inversely proportional to the value adopted for ρ_DM, while the lower limits obtained in Section 3.4 are proportional to ρ_DM. Thus, larger values of ρ_DM than the conservative value we adopt would strengthen all the constraints obtained in our study. Equation (3) is based upon the assumption that the Earth presents a cross-section π R_⊕^2 to halo DM; this neglects small enhancements that might result from the thickness of the atmosphere (in the case where DM first scatter in the atmosphere), and from gravitational focusing effects (which are small because, in typical models for the DM halo, v_⊕ is much larger than the Earth's escape velocity). Equation (3) also neglects the effects of random DM motions, which would lead to modest enhancements in the rate at which DM hit the Earth, and ignores possible evolution in the local DM density and velocity distribution over the lifetime of the Earth.
Not every intercepted particle is captured, however, because a significant fraction of the particles is reflected from the Earth's atmosphere with a speed larger than Earth's escape velocity, v_es = 11.2 km s^-1. The fate of any incoming HIDM particle depends upon the number of scatterings it suffers before emerging again from the atmosphere: a particle that suffers a sufficient number of scatterings will lose enough energy to emerge with a speed smaller than v_es and will thus be captured. When a fast DM particle scatters off a much slower atom of mass m_A in Earth's atmosphere, the fraction of kinetic energy transferred is

f_KE = [2 m_A m_DM / (m_A + m_DM)^2] (1 - cos θ),

where θ is the scattering angle in the center-of-mass frame. In this expression for f_KE, the relevant mass is that of the scattering nucleus, not the atmospheric molecule that contains it, because energy transfer to the scattering nucleus occurs on a short timescale relative to the rotational and vibrational periods of the molecule. If the scattering cross-section has a forwards-backwards symmetry, then the mean kinetic energy transfer is

f̄_KE = 2 m_A m_DM / (m_A + m_DM)^2,

and the mean number of scatterings needed to reduce the particle speed below the escape velocity is N_0 = ln(v_es^2 / v_⊕^2) / ln(1 - f̄_KE). For example, a particle of mass m_DM = 2 m_p and velocity 200 km s^-1 scattering off a pure nitrogen atmosphere (m_A = 14 m_p) has f̄_KE = 7/32 and N_0 = 23.3.
The problem to be addressed here, determining the fraction of particles that suffer a given number of scatterings before reflection from a scattering slab, is almost exactly analogous to an astrophysical problem that was discussed more than three decades ago: the reflection of X-rays by a cloud of cold electrons (Lightman & Rybicki 1979; Lightman et al. 1981, hereafter L81). In this case, when the number of scatterings exceeds ∼ 5, the fraction of reflected X-ray photons to have suffered N scatterings is well-approximated by π^(-1/2) N^(-3/2) (L81, their equation 14)^1. Thus, by analogy, the fraction of incident HIDM particles that suffer enough scatterings (i.e. N_0 or more) to have their speed reduced below v_es and be captured is

f_cap ≈ Σ_{N ≥ N_0} π^(-1/2) N^(-3/2) ≈ 2 π^(-1/2) N_0^(-1/2).

For the example considered previously (i.e. 2 m_p particles at velocity 200 km s^-1 scattering in the atmosphere), f_cap = 0.23.

(Footnote 1: A simple argument can explain the N^(-3/2) dependence obtained by L81. If τ represents the penetration depth of incident particles in units of the mean free path, then the average depth reached by an incident particle before the first scattering is τ ∼ 1. After scattering at τ = 1, roughly one-half of the particles are reflected (i.e. reach τ = 0) prior to reaching depth τ = 2, and roughly one-half reach depth τ = 2. Of those particles that reach depth τ = 2, we may argue (by symmetry) that roughly one-half are reflected without ever reaching depth τ = 4, and roughly one-half reach depth τ = 4 or deeper. This argument then implies that the fraction of particles that penetrate to depth τ or deeper is of order τ^-1. But, in a random-walk process, the number of scatterings suffered by particles that penetrate to depth τ or deeper is of order τ^2 or larger. Thus the fraction of particles that suffer N or more scatterings is of order N^(-1/2), and the fraction that suffer exactly N scatterings is of order N^(-3/2).)
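As a check on the worked example above, the short sketch below (Python) evaluates the mean fractional energy loss, the number of scatterings N_0, and the capture fraction; the tail-sum approximation 2 π^(-1/2) N_0^(-1/2) follows the reconstruction given above.

    import math

    def capture_fraction(m_dm, m_a, v_inf=200.0, v_es=11.2):
        """Fraction of intercepted HIDM captured, per the text's approximation.
        m_dm, m_a in proton masses; velocities in km/s."""
        f_ke = 2.0 * m_a * m_dm / (m_a + m_dm) ** 2               # mean energy loss
        n0 = math.log(v_es**2 / v_inf**2) / math.log(1.0 - f_ke)  # scatterings needed
        f_cap = 2.0 / math.sqrt(math.pi * n0)                     # tail of N^-3/2 law
        return f_ke, n0, f_cap

    # Example from the text: m_DM = 2 m_p scattering in a pure-N atmosphere.
    print(capture_fraction(2.0, 14.0))   # -> (~0.219, ~23.3, ~0.23)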
Density of captured particles within the Earth
We defer to Section 2.4 a discussion of the loss of HIDM from the atmosphere or the crust. Under conditions where the loss rate is negligible, the average number density within the Earth is

n̄_DM = f_cap N_int / (4π R_⊕^3 / 3) = 3 f_cap ρ_DM v_⊕ t_⊕ / (4 m_DM R_⊕) = 2.46 × 10^14 (f_cap / 0.23) cm^-3.

Provided that the mean free path within the Earth, λ, is much smaller than the length scale on which the temperature varies, the HIDM particles will reach thermal equilibrium with local material in the Earth and will acquire a Maxwell-Boltzmann velocity distribution with a mean speed v̄ = (8kT/[π m_DM])^(1/2) = 2.51 (T/300 K)^(1/2) (m_DM/m_p)^(-1/2) km s^-1. The HIDM particles captured by the Earth will assume an equilibrium density distribution on a characteristic diffusion timescale, t_diff ∼ R_⊕^2/(λ v̄), that is typically short compared to the age of the Earth. For the value of v̄ in the Earth's core, where the product λ v̄ is smallest, we may obtain an upper limit t_diff ≤ 1.1 × 10^4 (m_DM/m_p)^(-1/2) (λ/cm)^(-1) yr.
Provided t_diff ≪ t_⊕, the number density of HIDM particles within the Earth, n_DM, is governed by the Jeans equation (e.g. Binney & Tremaine 2008), which, for a static, steady-state distribution of particles, simplifies to

∂(n_DM σ_ij^2)/∂x_j = -n_DM ∂Φ/∂x_i,   (7)

where Φ is the gravitational potential and σ_ij^2 = ⟨v_i v_j⟩ is the velocity dispersion tensor. Here, we may neglect the effects of Earth's rotation about its axis and its motion around the Sun, because the accelerations associated with those motions are much smaller than the acceleration due to Earth's gravity. Equation (7), which applies to both collisionless and collisional gases, may be derived from the collisional Boltzmann equation, as shown by Chapman & Cowling (1970, hereafter CC70; their Chapter 8), who present a derivation of the divergence of the pressure tensor (their equation 8.1,7) for a single component within a gas mixture. For t_diff ≪ t_⊕, the HIDM particles have reached their final density distribution and have no net motion relative to the Earth, i.e. their average velocity, C̄_1 (in the notation of CC70), is zero. In this case, CC70 equation (8.1,7) reduces to equation (7) above, after division by the particle mass.
If the length scale on which the temperature changes, -(d ln T/dr)^(-1), is large compared to the thermalization length, λ*, then the velocity distribution is isotropic and in thermal equilibrium with local material in the Earth: σ_ij^2 = (kT/m_DM) δ_ij. Here, the thermalization length (e.g. Rybicki & Lightman 1986, p. 38) is the root-mean-square radial distance traveled by an HIDM particle before reaching thermal equilibrium with its surroundings. If the mean fractional energy transfer is f̄_KE per scattering, then of order f̄_KE^(-1) scatterings are required for thermalization. During thermalization, an HIDM particle will therefore undergo a random walk that takes it a radial distance λ* ∼ λ (3 f̄_KE)^(-1/2) from where it started. In the Earth's crust, the mean atomic mass is 21.5 m_p, and the mean free path is λ = 35.9 (σ^cr_300K/10^-24 cm^2)^(-1) (ρ_cr/g cm^-3)^(-1) cm, where σ^cr_300K is the mean cross-section for the scattering of HIDM by atoms in the crust and ρ_cr is the density. For an assumed density of 2.7 g cm^-3 (Dziewonski & Anderson 1981), we obtain λ = 13.3 (σ^cr_300K/10^-24 cm^2)^(-1) cm. The length scale on which the temperature changes in the crust, -(d ln T/dr)^(-1), is ∼ 10 km (Fridleifsson et al. 2008). For m_DM = 2 m_p, we obtain f̄_KE = 0.21 and find that the criterion -(d ln T/dr)^(-1) ≫ λ* is satisfied for σ^cr_300K ≫ 1.7 × 10^-29 cm^2. In the atmosphere, the mean atomic mass is 14.5 m_p, and the mean free path is λ = 0.53 (σ^atm_300K/10^-24 cm^2)^(-1) (ρ_atm/10^-3 g cm^-3)^(-1) km, where σ^atm_300K is the mean cross-section for the scattering of HIDM by atoms in the atmosphere and ρ_atm is the atmospheric density (1.3 × 10^-3 g cm^-3 at sea level).
For a spherically-symmetric potential, Φ(r), where r is the distance from the center of the Earth, we then obtain

dp_DM/dr = -n_DM m_DM g(r),   (8)

where p_DM = n_DM kT is the partial pressure of HIDM particles, g(r) = G M_r/r^2 is the local gravitational acceleration, T(r) is the temperature, and M_r is the mass enclosed within radius r. Equation (8) is exactly equivalent to the expression given by Gilliland et al. (1986; their equation 5) for the density distribution of DM in the Sun if the scattering cross-section is large. A modification to the density distribution given by Gilliland et al. (1986) was subsequently proposed by Gould & Raffelt (1990, hereafter GR90) but, as discussed in Appendix A, would appear to violate considerations of hydrostatic equilibrium and is inconsistent with CC70 equation (8.1,7).
We integrated equation (8) numerically to obtain the HIDM partial pressure and density as a function of r, adopting the density profile for the Earth's interior given by Dziewonski & Anderson (1981; the PREM model). For the lowest particle masses considered, the pressure scale height is largest and the r-dependence is weakest (red curve in the accompanying figure). As discussed above, we may assume here that the HIDM particles are in thermal equilibrium with the local material, because the mean free path is small compared to the length scale on which the local temperature varies.
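A minimal sketch of such an integration is given below (Python). The two-layer density profile and the linear temperature profile are crude placeholders for the PREM density model and geotherm actually used, so the output is illustrative only.

    import numpy as np

    # Sketch: integrate equation (8), dp/dr = -n m_DM g(r), for m_DM = 2 m_p.
    M_P, K_B, G = 1.67e-24, 1.38e-16, 6.674e-8       # cgs units
    R_E = 6.371e8                                     # Earth radius (cm)

    r = np.linspace(1e5, R_E, 20000)
    dr = r[1] - r[0]
    rho = np.where(r < 0.55 * R_E, 13.0, 5.0)         # g cm^-3 (placeholder profile)
    T = 300.0 + 5000.0 * (1.0 - r / R_E)              # K (placeholder profile)

    m_r = np.cumsum(4.0 * np.pi * r**2 * rho * dr)    # enclosed mass M_r
    g = G * m_r / r**2                                # local gravity g(r)

    m_dm = 2.0 * M_P
    ln_p = np.cumsum(-m_dm * g / (K_B * T) * dr)      # integrate d ln p / dr outward
    n = np.exp(ln_p) / T                              # n ~ p/T (k_B cancels below)
    n_mean = (n * 4.0 * np.pi * r**2 * dr).sum() / (4.0 / 3.0 * np.pi * R_E**3)
    print("n_DM(R_earth) / <n_DM> ~", n[-1] / n_mean)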
The bottom panel of the same figure shows the corresponding HIDM particle density, normalized relative to the mean value within the Earth, n̄_DM. The HIDM density increases sharply toward the Earth's surface, owing to the decrease in the local temperature^2, and can even exceed the average density for m_DM < 1.8 m_p. For m_DM = 2 m_p, which is a natural mass for HIDM of the type proposed by Farrar (2017), the density at the Earth's surface, n_DM(R_⊕), is of order 10^14 cm^-3. The upper panel of Figure 2 shows n_DM(R_⊕)/n̄_DM as a function of m_DM, while the lower panel shows the density at the Earth's surface, n_DM(R_⊕). The latter was computed under the condition that every HIDM particle captured by the Earth is retained (the validity of which will be considered in Section 2.4 below). For large m_DM, the density at the Earth's surface is a strongly decreasing function of m_DM, primarily because the scale height of the DM particles within the Earth is inversely proportional to m_DM. Other factors affecting the number density at the Earth's surface are the number density of HIDM particles in the Galactic plane, which, for a given mass density ρ_DM, is inversely proportional to m_DM, and the capture fraction, f_cap, which is an increasing function of m_DM for m_DM ≲ 15 m_p.

(Footnote 2: Analogous behavior may be observed in a household "top freezer" refrigerator: here, the temperature in the upper freezer compartment might be 7-8% smaller than in the lower compartment, and the density there must be correspondingly larger so that both compartments are very close to pressure equilibrium with each other and their surroundings, as we infer they must be, because any small fractional pressure difference would cause the compartment doors to fly open or would render them unopenable, atmospheric pressure at sea level being ∼ 1000 kg force per m^2.)
Atmospheric loss
HIDM particles can evaporate from the Earth's atmosphere if a sufficient fraction of those present at or above the last scattering surface (LSS) have velocities larger than the escape velocity, v_es = 11.2 km s^-1. In this subsection, we present an approximate treatment of this process with the purpose of obtaining an upper limit on the loss rate as a function of m_DM and scattering cross-section. A more precise treatment would entail a Monte-Carlo simulation that is beyond the scope of the present study.
Jeans escape from the LSS
In an isothermal atmosphere, the flux of escaping particles, F_es, may be approximated by the Jeans escape formula (Jeans 1904):

F_es = [n_LSS v_LSS / (2 π^(1/2))] (1 + v_es^2/v_LSS^2) exp(-v_es^2/v_LSS^2),   (9)

where n_LSS is the particle density, v_LSS = (2k T_LSS/m_DM)^(1/2), and T_LSS is the temperature, each evaluated at the LSS. As discussed by Gross (1974), the Jeans escape treatment yields a slight overestimate of the loss rate for an isothermal atmosphere because it assumes that the tail of the particle velocity distribution is replenished instantaneously as particles escape.
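A direct transcription of equation (9) (Python; sketch) illustrates how strongly the escaping flux is suppressed as m_DM increases at fixed temperature:

    import math

    K_B, M_P = 1.38e-16, 1.67e-24        # cgs units

    def jeans_flux(n_lss, t_lss, m_dm_mp, v_es=11.2e5):
        """Jeans escape flux (cm^-2 s^-1) from the last scattering surface.
        n_lss in cm^-3, t_lss in K, m_dm_mp in proton masses, v_es in cm/s."""
        v_lss = math.sqrt(2.0 * K_B * t_lss / (m_dm_mp * M_P))  # most probable speed
        lam = (v_es / v_lss) ** 2                               # escape parameter
        return n_lss * v_lss / (2.0 * math.sqrt(math.pi)) * (1.0 + lam) * math.exp(-lam)

    # The escape parameter grows linearly with m_DM, so the flux is
    # exponentially suppressed for heavier particles at fixed temperature.
    for m in (1.0, 2.0, 4.0):
        print(m, jeans_flux(1.0, 300.0, m))   # flux per unit n_LSS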
The location of the LSS depends upon the cross-section for scattering by nuclei in the atmosphere or crust. If the LSS lies in Earth's atmosphere, the relevant cross-section is

σ^atm_11 = Σ_A n_A σ^A_11 / Σ_A n_A,   (10)

where n_A is the number density of element A and the sum is taken over all elements present in the atmosphere. The escape of HIDM is strongly dominated by particles with velocities just above v_es = 11.2 km s^-1; thus, in the circumstance that the cross-sections are velocity-dependent, the values for a collision velocity of v_es are the appropriate ones for use in equation (10). The LSS is located at the τ = 1 surface, at height z_LSS, for which

σ^atm_11 ∫_{z_LSS}^∞ n(z) dz = 1,

where n(z) is the total number density of atoms at height z. For σ^atm_11 ≲ 2.5 × 10^-26 cm^2, the optical depth, τ, at the Earth's surface falls below unity, and the LSS drops into the crust. In this case, the relevant cross-section is σ^cr_11, which is defined in a manner exactly analogous to σ^atm_11 (the only difference being that the values of n_A on the right-hand side of equation (10) reflect the elemental composition of the crust, not the atmosphere). We introduce the quantity σ^es_11, which is the cross-section that determines the location of the LSS and thereby the escaping flux, F_es.
Because the escape rate depends strongly on the gas temperature, calculating it requires accurate knowledge of the atmospheric temperature profile. In our treatment of atmospheric loss, we have made use of the NRLMSIS-00 model (Picone et al. 2002) for the density, temperature, and composition of the atmosphere at heights z up to 1000 km. This well-tested model, which was motivated primarily by the need to model atmospheric drag on satellites in low Earth orbit, provides results as a function of latitude, longitude, height, day of year, hour of day, and solar activity, the latter being characterized by the 10.7 cm solar radio flux, F10.7, its 81-day average, F10.7A, and the A_p index of geomagnetic activity. Two example temperature and density profiles are shown in Figure 3, one for a low level of solar activity (dashed curves) and one for a high level of solar activity. The temperature profile below z ∼ 100 km is not strongly dependent on the solar activity level, and the temperature lies in the range ∼ 200 - 300 K. Within the thermosphere (z ≳ 100 km), however, the temperature rises rapidly and is strongly affected by solar activity. For each of these profiles, the location and temperature of the LSS are shown in Figure 4 as a function of σ^es_11.
Based on the temperatures and densities plotted in Figure 3, we have evaluated the Jeans escape formula as a function of m_DM and σ^es_11. The results are conveniently represented as a fractional loss rate, which is plotted in Figure 5. For large σ^es_11, the LSS lies in the atmosphere. As σ^es_11 falls below ∼ 2.5 × 10^-26 cm^2, the LSS moves down into the crust, where the temperature increases with an assumed gradient of ∼ 25 K per km of depth (Fridleifsson et al. 2008). This leads to a rapid increase in the fractional loss rate for σ^es_11 ≲ 10^-28 cm^2.
The red dashed line in Figure 5 represents 1/t_⊕; when the fractional loss rate exceeds ∼ 1/t_⊕, the effects of HIDM loss are important.
Thermospheric escape from above the LSS
In the Jeans treatment of evaporation from planetary atmospheres, the loss rate is determined solely by the temperature and density at the LSS (equation 9). In the case considered here, however, the atmospheric temperature increases rapidly above the LSS, within the thermosphere at z = 100 - 200 km (Figure 4). Because the loss rate depends very strongly on the temperature, this rapid increase raises the question of whether the loss rate might be enhanced significantly above the value given by equation (9). Such an enhancement might result from collisions between hot atmospheric molecules in the thermosphere and HIDM, but its importance is mitigated by two effects: (1) the gas density declines rapidly with z; and (2) a single collision between an HIDM particle and a hot thermospheric gas molecule transfers only a fraction f_KE of the hot molecule's energy. In Appendix B, we present an analysis of the effects of thermospheric escape. Because of the second consideration above, thermospheric escape is controlled by collisions with the minor atmospheric constituents H and He; in the m_DM range where thermospheric escape is important, f_KE is largest for these lightest atmospheric constituents. Our analysis is complicated by the fact that the thermospheric temperature depends strongly on solar activity: this results in large variations during the 11-year solar cycle, which we account for with the NRLMSIS-00 model. For the range of parameters under present consideration, the result of our analysis is that thermospheric escape is always negligible relative to 1/t_⊕ or to the standard Jeans loss rate if (1) σ^He_11 and σ^H_11 are both smaller than 10^-24 cm^2, or (2) m_DM is smaller than 0.7 m_p. As we shall see in Section 3.3 below, upper limits on σ^He_300K and σ^H_300K may be obtained from the low vaporization rates for liquid He and H2 achievable in well-insulated dewars. Under the assumption that the cross-sections at v_es are no larger than those at 300 K and are no larger than 10^-20 cm^2, the combination of these constraints implies that thermospheric escape is also negligible over the remainder of the m_DM range considered here. An important caveat pertains to that result, however. Our model for the time-averaged loss rate is severely limited by the short historical record of solar activity. We have only computed loss rates over the past ∼ 50 years, for which measurements of the F10.7A and A_p activity indicators are available. Sunspot observations, which are available back into the 17th century, suggest that solar activity varies on multiple timescales that can be much longer than the 11-year solar cycle. Most notably, sunspots were exceedingly rare during the Maunder minimum (1645 - 1715), implying an unusually low level of solar activity, and the particle loss rate was presumably very low as a result. Equally, we cannot exclude the possibility that extended periods of very high solar activity, with correspondingly high thermospheric loss rates, occurred prior to the 17th century. Thus, because of the exponential dependence of the loss rate on the level of solar activity, the average loss rate over geological timescales could differ greatly from the average we determined for the last 50 years.
Density of HIDM at the surface of the Earth
If HIDM particles are being captured at a rate f_cap ρ_DM v_⊕ π R_⊕^2 m_DM^(-1) and are being lost at a fractional rate f_loss, the number within the Earth varies according to

dN/dt = f_cap ρ_DM v_⊕ π R_⊕^2 m_DM^(-1) - f_loss N.

If the capture and fractional loss rates have been constant over the history of the Earth, the current number of HIDM particles is

N = f_cap ρ_DM v_⊕ π R_⊕^2 t_⊕ m_DM^(-1) F_loss, where F_loss = [1 - exp(-f_loss t_⊕)] / (f_loss t_⊕)

represents the correction that is needed to account for atmospheric loss. It is equal to unity in the limit f_loss ≪ 1/t_⊕, and tends to [f_loss t_⊕]^(-1) in the limit f_loss ≫ 1/t_⊕.
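The limiting behaviors quoted above are easily verified numerically (sketch, Python):

    import math

    def retention_factor(f_loss_t):
        """F = (1 - exp(-f_loss * t_earth)) / (f_loss * t_earth); see above."""
        return -math.expm1(-f_loss_t) / f_loss_t if f_loss_t > 0 else 1.0

    for x in (1e-3, 1.0, 1e3):
        print(x, retention_factor(x))   # -> ~1, ~0.632, ~1/x in the two limits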
In Figure 6, we plot contours of the number density at the Earth's surface, n DM (R ⊕ ), in the m DM − σ es 11 plane. Unlike in Figure 2, these results include the effects of atmospheric loss. Particle densities in excess of 10 14 cm −3 can be achieved for m DM in the range 1 − 2 m p .
Atmospheric escape becomes increasingly important at very small σ^es_11, because the LSS falls deep into the crust or upper mantle, where the temperature is higher. The particle density at Earth's surface also drops rapidly for m_DM > 3 m_p, because the particles become increasingly concentrated toward the center of the Earth. The results shown in Figure 6 apply under conditions (see Section 2.4.2 above) where thermospheric loss can be neglected.

3. Limits on the HIDM density at the surface of the Earth

Figure 6 suggests that the density of captured HIDM particles at the Earth's surface could exceed the interstellar particle density by ∼ 15 orders of magnitude for particle parameters (m_DM, σ^es_11) within the range considered here. These thermalized particles would have typical kinetic energies kT ∼ 0.025 eV, significantly below the threshold for detection in current DM searches. Nonetheless, their presence could have detectable effects in experiments that are not specifically designed for DM detection. In this section, we discuss the constraints on the particle parameters that are placed by measurements of the beam lifetime in the Large Hadron Collider (LHC), the orbital decay of spacecraft in LEO, the vaporization rate of liquid helium (LHe) and other cryogens, and the temperature gradient in the Earth's crust. Except where otherwise stated, the limits we derive apply under conditions where the Jeans escape formula (eqn. 9) is applicable and the thermospheric loss process discussed in Appendix B can be neglected.
The LHC beam lifetime
Within a significant region of parameter space, n_DM(R_⊕) significantly exceeds the gas density in the LHC beam pipe, which is evacuated to high vacuum (∼ 10^-7 Pa, equivalent to a gas density ∼ 10^9 cm^-3) in order to achieve a long mean free path for the relativistic protons. Inelastic collisions with particles within the beam pipe lead to a reduction of the beam intensity on a timescale that can be as large as 100 hr (Lamont & Johnson 2014), requiring a mean free path λ_LHC > 1.1 × 10^16 cm for inelastic scattering of protons traveling close to the speed of light. Elastic collisions are not important here, because the magnetic multipoles correct for momentum transfer that is unaccompanied by significant energy loss. The mean free path for inelastic collisions with HIDM is [n_DM(R_⊕) σ^{p,inel}_{6.5 TeV}]^(-1), where σ^{p,inel}_{6.5 TeV} is the cross-section for the inelastic scattering of 6.5 TeV protons by stationary HIDM.
An LHC beam lifetime of 100 hr places an upper limit on the high-energy inelastic scattering cross-section: σ^{p,inel}_{6.5 TeV} < 9 × 10^-17 [n_DM(R_⊕)/cm^-3]^(-1) cm^2. In Figure 7, this constraint is plotted in the m_DM - σ^{p,inel}_{6.5 TeV} plane. Here, we adopted the values of n_DM(R_⊕) plotted in Figure 6; for the cross-sections under present consideration, the pumping of atmospheric gases out of the beam pipe does not alter the density of HIDM, because the latter are constantly diffusing through the pipe walls into the evacuated region. Where HIDM escape is significant, n_DM(R_⊕) depends on σ^es_11 as well as m_DM; thus, the limits obtained on σ^{p,inel}_{6.5 TeV} depend on what is assumed for σ^es_11. In Figure 7, we have plotted results applying for five values of σ^es_11 for which the LSS lies in the crust (10^-29.0, 10^-28.8, 10^-28.5, 10^-28, and 10^-27 cm^2), and two values for which the LSS lies in the atmosphere (10^-22 and 10^-20 cm^2).
The orbital decay of spacecraft in low Earth orbit
(Fig. 7. Upper limits on σ^{p,inel}_{6.5 TeV} as a function of m_DM; the curves are labeled with log10(σ^es_11/cm^2) = -29.0, -28.8, -28.5, -28.0, -27.0, -22.0, and -20.0.)

Spacecraft orbiting within the Earth's thermosphere can experience a significant drag force, F_drag, which results from collisions with atmospheric molecules and leads to orbital decay. Within a significant region of the parameter space considered here, the mass density of HIDM can greatly exceed the mass density of atmospheric molecules. Above the last scattering surface (LSS), the HIDM partial pressure continues to decline according to equation (8), but with T_LSS replacing T(r) in that equation. Thus, in the limit z_LSS ≤ h ≪ R_⊕, the particle density at altitude h is

n_DM(h) = n_DM(R_⊕) exp(-m_DM g h / k T_LSS).

For m_DM = 2 m_p and σ^cr_11 in the range 10^-28 to 2.5 × 10^-26 cm^2, the HIDM number density at 600 km is 8.5 × 10^11 cm^-3, corresponding to a mass density of 2.8 × 10^-12 g cm^-3. For comparison, the mass densities of atmospheric gases at 600 km implied by Figure 3 are 3.3 × 10^-15 g cm^-3 and 5.9 × 10^-17 g cm^-3, respectively, for the low and high solar activity cases. In the regime where the HIDM mass density greatly exceeds these atmospheric values, the orbital decay of spacecraft in low Earth orbit would be greatly accelerated unless the scattering cross-sections are low enough that most HIDM particles pass through the spacecraft.
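The exponential falloff above the LSS can be evaluated directly (Python sketch, assuming a constant surface gravity; this roughly reproduces the 8.5 × 10^11 cm^-3 quoted above):

    import math

    K_B, M_P, G = 1.38e-16, 1.67e-24, 6.674e-8
    M_E, R_E = 5.97e27, 6.371e8                  # Earth mass (g) and radius (cm)

    def n_at_altitude(n_surface, h_cm, m_dm_mp, t_lss=300.0):
        """Exponential falloff of the HIDM density above the LSS (sketch);
        uses a constant g evaluated at the Earth's surface."""
        g = G * M_E / R_E**2
        scale_height = K_B * t_lss / (m_dm_mp * M_P * g)
        return n_surface * math.exp(-h_cm / scale_height)

    # m_DM = 2 m_p, n_DM(R_earth) = 1e14 cm^-3, altitude 600 km:
    print(n_at_altitude(1.0e14, 600e5, 2.0))   # -> ~8.7e11 cm^-3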
In the limit of small scattering cross-section, such that the probability of more than one scattering is negligible, the drag force due to HIDM is

F_drag = (M_sc / m̄_A) n_DM(R_orb) σ^sc_8 μ̄ v_orb^2,   (15)

where M_sc is the spacecraft mass, R_orb is the radius of the orbit, v_orb = 7.9 (R_orb/R_⊕)^(-1/2) km s^-1 is the orbital velocity, m̄_A is the mean atomic mass for the material constituting the spacecraft, μ̄ is the mean reduced mass, and σ^sc_8 is the average cross-section per nucleus at a collision velocity of v_orb ∼ 8 km s^-1. Henceforth, we assume that the HIDM mass is small compared to the mass of typical nuclei in the spacecraft, so the scattered HIDM have an average momentum of zero in the spacecraft frame, and μ̄ may be replaced by m_DM in equation (15). The quantity M_sc/m̄_A is simply the number of nuclei within the spacecraft.
The total energy, E_tot, of the orbiting spacecraft decreases at a rate dE_tot/dt = -F_drag v_orb. Noting now that E_tot = -E_kin = -(1/2) M_sc v_orb^2, in accord with the virial theorem, and that E_tot is inversely proportional to R_orb, we may write the orbital decay rate in the form

dR_orb/dt = -2 F_drag R_orb / (M_sc v_orb).   (17)

In numerical evaluations of equation (17), we normalize the mean atomic mass relative to the value for aluminum.
The orbital decay rate, dR_orb/dt, scales linearly with σ^sc_8 until the spacecraft becomes opaque to HIDM. In the limit of large σ^sc_8, the drag force becomes independent of the cross-section:

F_drag = (1/2) C_d Σ_eff n_DM(R_orb) m_DM v_orb^2,   (18)

where Σ_eff is the effective cross-section presented by the spacecraft and C_d is the drag coefficient. The orbital decay rate then becomes

dR_orb/dt = -C_d Σ_eff n_DM(R_orb) m_DM v_orb R_orb / M_sc.   (19)

Thus the actual orbital decay rate can be approximated by computing the values given by equations (17) and (19) and adopting whichever has the smaller magnitude.
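The prescription of adopting whichever of equations (17) and (19) gives the smaller decay rate can be sketched as follows (Python; the transparent-limit force follows the reconstruction of equation (15) above, and the input values in the example are illustrative only):

    # Minimal sketch of the two drag limits discussed above; all inputs cgs.
    G, M_E, M_P = 6.674e-8, 5.97e27, 1.67e-24

    def decay_rate(n_dm, m_dm, sigma_sc, r_orb, m_sc, mbar_a, sigma_eff, c_d):
        v_orb = (G * M_E / r_orb) ** 0.5
        f_thin = (m_sc / mbar_a) * n_dm * sigma_sc * m_dm * v_orb**2   # eq. (15)-like
        f_thick = 0.5 * c_d * sigma_eff * n_dm * m_dm * v_orb**2       # eq. (18)-like
        f_drag = min(f_thin, f_thick)     # adopt whichever limit gives smaller drag
        return -2.0 * f_drag * r_orb / (m_sc * v_orb)                  # eq. (17)/(19)

    # HST-like parameters from the text; n_DM and sigma_sc values are illustrative.
    print(decay_rate(n_dm=8.5e11, m_dm=2 * M_P, sigma_sc=1e-30, r_orb=6.971e8,
                     m_sc=1.11e7, mbar_a=27 * M_P, sigma_eff=7.0e5, c_d=2.47))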
A full discussion of the Σ_eff/M_sc values, elemental composition, orbital parameters, and observed dR_orb/dt for the fleet of spacecraft in LEO is beyond the scope of this paper. Here, we will focus on a single example, the Hubble Space Telescope (HST), which orbits in a nearly circular orbit at an altitude of ∼ 600 km.

(Fig. 9. Upper limits on σ^sc_8 implied by a 0.8 km yr^-1 orbital decay rate for a satellite at an altitude of 600 km. Results are shown for five values of σ^es_11 for which the LSS lies in the crust (10^-29.0, 10^-28.8, 10^-28.5, 10^-28, and 10^-27 cm^2), and two values for which the LSS lies in the atmosphere (10^-22 and 10^-20 cm^2). The curves are labeled with log10(σ^es_11/cm^2); above the horizontal line HST is opaque to HIDM, and below it HST is transparent.)

The limits plotted in Figure 9 assume that HIDM dominated the drag on HST during this period. The results were obtained for an estimated m̄_A of 27 and for the spacecraft parameters relevant to HST (Σ_eff = 7.0 × 10^5 cm^2; M_sc = 1.11 × 10^7 g; and C_d = 2.47). Above the horizontal line, HST is opaque to HIDM and the orbital decay rate due to HIDM is given by equation (19); below the horizontal line, HST is transparent and equation (17) applies. In the limit where HST is opaque to HIDM, equation (19) places an upper limit of 2 × 10^7 (m_DM/m_p)^(-1) cm^-3 on n_DM(R_orb), which is indicated by the dashed blue contour in Figure 8 and excludes a large portion of the parameter space under consideration.
The vaporization of liquid cryogens
Next, we consider the heating effects of thermal HIDM particles on liquid cryogens within a well-insulated storage dewar. Such dewars, which are widely used to store cryogenic liquids for periods of several months without the use of active cooling, rely on multiple layers of reflective material, under vacuum, to minimize the entry of heat into the cryogenic chamber through conduction or radiation. If the HIDM mean free path is large compared to the size of the chamber, so that the dewar is transparent, and the temperature of the cryogen, T_cry, is much smaller than the temperature of HIDM in the laboratory, T_DM ∼ 300 K, the heating rate per atom is

H_td = (1/2) f̄_KE m_DM n_DM(R_⊕) σ^A_T ⟨v^3⟩_MB,   (20)

where the angled bracket denotes an average over the Maxwell-Boltzmann distribution, the subscript "td" denotes the "transparent dewar" limit, and σ^A_T = ⟨v^3 σ^A_v⟩_MB / ⟨v^3⟩_MB is the appropriately-weighted mean cross-section for thermal HIDM particles at temperature T. If the cross-section has a power-law velocity-dependence of the form σ^A_v = σ^A_{v_0} (v/v_0)^(-j), then σ^A_T = [Γ(3 - j/2)/Γ(3)] (m_DM v_0^2 / 2kT)^(j/2) σ^A_{v_0}. For the case where j = 4 (the velocity-dependence for Rutherford scattering) and v_0 = 1 km s^-1, we obtain σ^A_T = 0.0204 (T/300 K)^(-2) (m_DM/m_p)^2 σ^A_1. In the case where T_cry is not much smaller than T_DM, we may estimate H_td by replacing T_DM by (T_DM - T_cry) in equation (20).
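The numerical coefficient quoted above can be checked directly (Python sketch, using the power-law weighting as reconstructed above):

    from math import gamma

    # sigma_T / sigma_v0 = [Gamma(3 - j/2)/Gamma(3)] (m_DM v0^2 / 2kT)^(j/2)
    K_B, M_P = 1.38e-16, 1.67e-24

    def sigma_t_over_sigma_v0(j, v0_kms, t_k, m_dm_mp):
        v_th_sq = 2.0 * K_B * t_k / (m_dm_mp * M_P)   # (2kT/m), cm^2 s^-2
        v0_sq = (v0_kms * 1e5) ** 2
        return (gamma(3.0 - 0.5 * j) / gamma(3.0)) * (v0_sq / v_th_sq) ** (0.5 * j)

    # j = 4 (Rutherford-like), v0 = 1 km/s, T = 300 K, m_DM = m_p:
    print(sigma_t_over_sigma_v0(4.0, 1.0, 300.0, 1.0))   # -> ~0.0204, as quoted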
For a cryogenic liquid composed of atoms of mass m_A = A m_p, for which the specific latent heat of vaporization is L_vap, this heating rate will result in vaporization at a fractional rate

-Ṁ_cry / M_cry = H_td / (L_vap m_A),   (21)

where M_cry is the mass of cryogen remaining in the dewar; below, cross-sections are quoted in millibarns (1 mb = 10^-27 cm^2). Even cross-sections in the millibarn range can lead to significant vaporization. As an example, let us consider a case in which liquid He (LHe) is heated by HIDM with a mass m_DM = 2 m_p and a density n_DM(R_⊕) = 10^14 cm^-3 at Earth's surface. For the parameters applicable to LHe (viz. A = 4, L_vap = 21 J g^-1, T_cry = 4 K, f̄_KE = 4/9), equation (21) yields a fractional vaporization rate of 3.3 × 10^-7 (σ^He_300K/mb) s^-1 = 3.0 (σ^He_300K/mb) % per day.
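The LHe example can be reproduced to within the approximations made here (Python sketch, based on equations (20) and (21) as reconstructed above; the modest difference from the quoted value reflects the neglected T_cry correction):

    from math import gamma, pi, sqrt

    M_P, K_B = 1.67e-24, 1.38e-16

    def fractional_boiloff(n_dm, m_dm_mp, sigma_t, f_ke, a, l_vap, t_dm=300.0):
        """Transparent-dewar fractional vaporization rate (s^-1);
        l_vap in erg/g, sigma_t in cm^2, n_dm in cm^-3."""
        m_dm = m_dm_mp * M_P
        # <v^3> for a Maxwell-Boltzmann distribution at temperature t_dm:
        v3 = (2.0 * K_B * t_dm / m_dm) ** 1.5 * (2.0 / sqrt(pi)) * gamma(3.0)
        h_td = 0.5 * f_ke * m_dm * n_dm * sigma_t * v3   # erg/s per cryogen atom
        return h_td / (l_vap * a * M_P)

    # LHe example (m_DM = 2 m_p, n_DM = 1e14 cm^-3, sigma = 1 mb = 1e-27 cm^2):
    print(fractional_boiloff(1e14, 2.0, 1e-27, 4.0 / 9.0, 4, 21e7))  # ~ few 1e-7 s^-1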
Equation (21) relies on the assumption that the mean free path in the cryogen, λ_cry, exceeds the diameter, D, of the dewar, so that the entire volume of cryogen is exposed to HIDM at 300 K. In the opposite limit, where λ_cry ≪ D, HIDM deposit their heat in the outer part of the cryogen and the interior is unheated. The flux of warm HIDM entering the dewar is (1/4) n_DM(R_⊕) v̄, and the resultant energy flux is (1/4) n_DM(R_⊕) ⟨(1/2) m_DM v^2 v⟩_MB.
The average fractional energy transfer is at least f̄_KE per entering particle, since in this limit each particle scatters at least once before leaving. In this "opaque dewar" limit, we may set a lower limit on the heating rate per cryogen atom:

H_od ≥ f̄_KE (1/4) n_DM(R_⊕) ⟨(1/2) m_DM v^2 v⟩_MB Σ / N_A,   (22)

where Σ is the surface area of the cryogenic volume and N_A is the number of atoms in the dewar. The latter is given by N_A = ρ_cry V_D / m_A, where V_D is the volume of the cryogen and ρ_cry is its density. Thus,

H_od ≥ f̄_KE (1/8) n_DM(R_⊕) m_DM ⟨v^3⟩_MB (Σ m_A) / (ρ_cry V_D).   (23)

For a spherical volume, which minimizes the surface-area-to-volume ratio, Σ/V_D = 6/D = (36π/V_D)^(1/3).
For small σ^A_300K, the dewar is transparent and H is given by H_td, which increases linearly with σ^A_300K. But once the dewar becomes opaque, H stops increasing linearly with σ^A_300K and approaches H_od asymptotically. The transparent-dewar limit applies for H_td < H_od. Given the lower limit on H_od obtained using (23) for a spherical cryogenic volume, and comparing it with equation (21), we find that H_td < H_od whenever σ^A_300K ≤ 3 m_A / (2 D ρ_cry), or equivalently λ_cry ≥ 2D/3. We may obtain a conservative estimate of H by adopting H_td for σ^A_300K < 3 m_A / (2 D ρ_cry) and assuming H to be constant for larger σ^A_300K. The corresponding fractional vaporization rate is then obtained by dividing this estimate of H by L_vap m_A.
Figure 10 shows the upper limits that are implied for σ^He_300K. As was the case for σ^{p,inel}_{6.5 TeV} (see Section 3.1 above), the limits on σ^He_300K depend on what is assumed for σ^es_11; once again, we have plotted results obtained for several assumed values of σ^es_11. The results shown here are for a 1000-liter dewar containing LHe with a density of 0.125 g cm^-3.
Figure 10 also assumes that HIDM particles can enter the LHe within the dewar without scattering first within the multilayer insulation (MLI) that is typically used to minimize the radiative heat load. A very well-insulated dewar might employ several tens of layers of ∼ 10 µm-thick aluminized mylar, (C10H8O4)_n, corresponding to a total mylar thickness of several hundred µm (with a negligible additional thickness of aluminum).
Thus, our analysis is only valid if the HIDM mean free path in mylar, λ_mylar, exceeds that thickness, or equivalently if the mean cross-section per atom in mylar is less than ∼ a few × 10^-22 cm^2. Because orbiting spacecraft, including HST, are often covered with similar multilayer insulation for thermal control, a cross-section any larger than this value would mean that the opaque-spacecraft limit applies (Section 3.2); this in turn would rule out a large portion of the available parameter space anyway (dashed blue contour in Figure 8). One caveat applies to this argument: the HIDM striking a spacecraft have a relative velocity of ∼ 8 km s^-1, while those incident on the dewar insulation have a velocity of 2.51 (m_DM/m_p)^(-1/2) km s^-1. Thus, if the cross-section were a sufficiently strongly decreasing function of collision velocity, this argument would not necessarily apply.
Additional limits may be obtained for other nuclei by considering the vaporization of other cryogens. Figure 11 shows analogous limits for H, N, O, and Ar, obtained from limits on the boil-off rates of storage dewars for liquid H2, N2, O2, and Ar.
The relevant parameters for these cryogens are tabulated in Table 1. For nuclei where no liquid cryogen is available, constraints on σ A 300 K could be obtained by experiments in which solid materials containing the nucleus in question are placed in a storage dewar and immersed in a liquid cryogen. Any heat deposited by HIDM within the solid material would be transferred to the cryogen; this effect could be detected by means of a differential measurement in which the boil-off rates were compared for cases with and without an immersed sample of the material to be tested.
The limits presented in Figures 10 and 11 are very conservative, in that they assume the observed boil-off rates to be entirely attributable to scattering of thermal HIDM.
Improved limits could be obtained by measuring how the mass-loss rate of cryogen, Ṁ_cry, depends on the mass of cryogen, M_cry, present within the dewar. The expected mass-loss rate of cryogen is

Ṁ_cry = Q / L_vap + B_DM M_cry,

where Q is the heat leak in the dewar and B_DM is the fractional boil-off rate due to HIDM heating. Figures 10 and 11 are conservatively based upon B_DM ≤ Ṁ_cry/M_0, where Ṁ_cry is determined for a full dewar containing a mass M_0 of cryogen. To the extent that the inner vessel within the dewar is isothermal and at the boiling point of the cryogen, the heat leak, Q, may be expected to be independent of M_cry, provided any cryogen remains within the dewar. Thus, a limit can be placed on B_DM by comparing the mass-loss rates for a full dewar containing a cryogen mass M_0 and a nearly-empty dewar. If the difference in the two mass-loss rates is less than δṀ_cry, then the limit on B_DM becomes B_DM < δṀ_cry/M_0 instead of B_DM ≤ Ṁ_cry/M_0. If the mass-loss rates for full and nearly-empty dewars could be demonstrated to differ by less than 1%, for example, the limit would improve by a factor of 100.
The thermal conductivity within the Earth's crust
The effects of HIDM on cryogenic experiments are one manifestation of heat transport.
(Table 1. Relevant parameters of the cryogens considered.)

A related consequence of captured HIDM would be to enhance the thermal conductivity of the Earth's crust. Similar effects have been considered for the case of weakly-interacting massive particles within the Sun (Spergel & Press 1985; Faulkner & Gilliland 1985). In the regime of present interest, the HIDM constitute a monoatomic gas that is "Lorentzian" in the terminology of CC70 (Section 10.5), meaning that m_DM is small compared to the mean atomic mass of crustal materials, 21.5 m_p, and n_DM is much smaller than the number density, n_cr, of atoms in the crust. For a mean scattering cross-section, σ^cr_0, that is independent of the collision velocity, the additional thermal conductivity associated with such a gas was first computed by Lorentz (1904) and may be written

k_DM = C_L n_DM k (kT/m_DM)^(1/2) / (n_cr σ^cr_0),   (25)

where C_L is a numerical coefficient of order unity obtained from Lorentz's solution. An important feature of this expression is that k_DM is a decreasing function of the cross-section; for a given value of n_DM(R_⊕), an upper limit on the allowable value of k_DM therefore sets a lower limit on σ^cr_0.
(Fig. 12. Lower limits on σ'^cr_300K, obtained from the requirement k_DM ≤ 4.3 × 10^5 erg s^-1 cm^-1 K^-1. Results are shown for several values of σ^es_11; the curves are labeled with log10(σ^es_11/cm^2), and the allowed region is above the curves.)

If the cross-section is velocity-dependent, equation (25) still applies with

σ'^cr_T = ⟨v^3⟩_MB / ⟨v^3 / σ^cr_v⟩_MB

as an effective cross-section that replaces σ^cr_0 in equation (25). This cross-section is analogous to the Rosseland mean cross-section in the theory of radiative diffusion (Rosseland 1925), being an appropriately-weighted harmonic mean, and reduces to σ^cr_0 in the velocity-independent case (j = 0). In evaluating equation (25) numerically, we adopted a mean density of 2.7 g cm^-3 for the crust (Dziewonski & Anderson 1981) and a mean atomic mass of 21.5 m_p, yielding an atomic number density n_cr = 7.5 × 10^22 cm^-3. The mean free path, which is determined by n_cr, not n_DM, is therefore λ = 13.3 (σ'^cr_T/10^-24 cm^2)^(-1) cm.
Comparing this to the analogous expression obtained for $\sigma^A_T$ in Section 3.1 above, we obtain the following ratio of the cross-section that determines the conductivity to the mean cross-section that is relevant to the vaporization of cryogens: $\sigma'^A_T/\sigma^A_T = 4/[\Gamma(3+\tfrac{1}{2}j)\,\Gamma(3-\tfrac{1}{2}j)]$. For $j = 0$ (i.e., with no velocity-dependence), $\sigma'^A_T/\sigma^A_T = 1$, as required. For $j = 4$ (the velocity-dependence for Rutherford scattering), $\sigma'^A_T/\sigma^A_T = 1/6$.
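The numerical coefficients quoted above can be checked directly; the short script below reproduces the 13.3 cm mean free path and the Gamma-function ratios for $j = 0$ and $j = 4$.

```python
# Verify the mean free path normalization and the Gamma-function ratios quoted above.
from scipy.special import gamma

# Mean free path lambda = 1/(n_cr * sigma) for the adopted crustal number density.
n_cr = 7.5e22                       # atoms per cm^3 in the crust
sigma = 1e-24                       # reference cross-section [cm^2]
lam = 1.0 / (n_cr * sigma)          # [cm]
print(f"lambda = {lam:.1f} cm for sigma = 1e-24 cm^2")   # -> 13.3 cm

# Ratio sigma'_T / sigma_T = 4 / [Gamma(3 + j/2) * Gamma(3 - j/2)]
def ratio(j):
    return 4.0 / (gamma(3.0 + 0.5 * j) * gamma(3.0 - 0.5 * j))

print(f"j = 0: ratio = {ratio(0):.3f}")   # -> 1.000
print(f"j = 4: ratio = {ratio(4):.4f}")   # -> 0.1667 = 1/6
```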
Heat flow through the Earth's crust has been probed extensively through measurements of the varying temperature gradients in boreholes. Such measurements have led to an estimate of $47 \pm 2$ TW for the total power transported through the Earth's crust (Davies & Davies 2010). A value of $k_{\rm DM}$ any larger than $3.7 \times 10^5\,{\rm erg\,s^{-1}\,cm^{-1}\,K^{-1}}$ would more than double current estimates for the global heat flow from the Earth, and would require significant modifications to the standard understanding of the Earth's internal heat budget (e.g. Lay et al. 2008).
Moreover, in cases where a borehole probes a stratified series of rock formations, changes in the temperature gradient can sometimes be discerned at the boundaries of the different rock types (e.g. Harris & Chapman 1995; hereafter HC95). The observed behavior is found to be in quantitative agreement with the ratios of conductivities that are measured in the laboratory for the different rocky materials involved, placing limits on any anomalous conductivity associated with HIDM. A more detailed discussion of such measurements appears in Appendix C, where we derive an upper limit of $4.3 \times 10^5\,{\rm erg\,s^{-1}\,cm^{-1}\,K^{-1}}$ on $k_{\rm DM}$ for cross-sections such that the thermalization length, $\lambda_*$ (see Section 2.3 above), is less than the characteristic length scale $\sim 100$ m probed in HC95. Figure 12 shows the lower limits on $\sigma'^{\rm cr}_{300\,{\rm K}}$ implied by this constraint, which are very similar to the limits imposed by the global heat flow constraint, $k_{\rm DM} < 3.7 \times 10^5\,{\rm erg\,s^{-1}\,cm^{-1}\,K^{-1}}$, under the assumption that heat transport by HIDM does not increase the latter by more than a factor of 2. As in Figures 7, 9, and 10, the results depend on what is assumed for $\sigma^{\rm es}_{11}$; once again, we have plotted results obtained for several values of $\sigma^{\rm es}_{11}$. The criterion for the validity of the heat conduction equation, $\lambda_* < (d\ln T/dz)^{-1} \sim 10$ km, is met above the dashed line in Figure 12. A sufficient condition for the validity of the treatment in Appendix C, $\lambda_* < 100$ m, is met for cross-sections more than two orders of magnitude above the dashed line.
Summary and conclusions
The various considerations described in Section 3 provide constraints on the properties of the HIDM. These constraints are based upon a model for the number of HIDM particles that have been captured and retained over the lifetime of the Earth and for their resultant density distribution. Our treatment of the latter makes use of a differential equation for the partial pressure of HIDM particles (eqn. 8) that may be derived from the collisional Boltzmann equation (CC70) and was adopted by Gilliland et al. (1986) in their study of WIMP dark matter in the Sun. As we have discussed in Appendix A, a modification to that equation proposed subsequently by GR90 violates considerations of hydrostatic equilibrium and presents other pathological behaviors.
We note that our discussion of the density of HIDM particles within the Earth, and of the resultant constraints presented in Section 3, assumes that HIDM does not annihilate with nucleons in the Earth (a possibility limited by Farrar & Zaharijas 2006 and Mack et al. 2007) and is not destroyed by self-annihilation. The latter assumption is valid provided the self-annihilation cross-section is smaller than a critical value, $\sigma^{\rm sa}_{\rm crit}$, which depends on the number of captured particles, $N_C$. This critical value, above which self-annihilation reduces $n_{\rm DM}$ and weakens the limits set in Section 3, ranges from $1.0 \times 10^{-36}\,{\rm cm^2}$ to $1.0 \times 10^{-34}\,{\rm cm^2}$ as $m_{\rm DM}$ ranges from 1 to 5 $m_p$.
Because the constraints we derive involve cross-sections for multiple baryonic nuclei at a variety of collision energies, and because the cross-sections may show strong non-monotonic variations with nucleon number, $A$ (Farrar & Xu 2018), the constrained parameter space has a high dimensionality. Several different cross-sections appear in the various constraints obtained in Section 3: $\sigma^{\rm cr}_{11}$, $\sigma^{\rm atm}_{11}$, $\sigma^{p}_{6.5\,{\rm TeV}}$, $\sigma^{\rm HST}_{8}$, $\sigma^{\rm He}_{300\,{\rm K}}$, $\sigma^{\rm H}_{300\,{\rm K}}$, $\sigma^{\rm N}_{300\,{\rm K}}$, $\sigma^{\rm O}_{300\,{\rm K}}$, $\sigma^{\rm cr}_{300\,{\rm K}}$, $\sigma'^{\rm cr}_{300\,{\rm K}}$, $\sigma^{p}_{1}$, $\sigma^{p}_{11}$, $\sigma^{\rm He}_{11}$. Any proposal for the HIDM that makes a specific prediction for the $A$-dependence and velocity-dependence of the scattering cross-section with baryons can be evaluated with respect to the multiple constraints given here.
Figures 7 and 9-12 provide the full set of constraints that we have obtained, and the key results are summarized in Table 2. These results rely on the assumption that our estimates of the HIDM accumulation and evaporation rates apply throughout Earth's history. However, they are conservative in the sense that we have derived them under the assumption that HIDM are entirely responsible for limiting the LHC beam lifetime, for the vaporization of liquid cryogens, and for the decay of spacecraft orbits. Stronger limits could be derived, for example, by modeling LHC beam losses due to conventional effects such as beam particle scattering off residual atmospheric gases in the beam pipe and interactions within the beam or with components of the beam-line. Alternatively, the boil-off of liquid cryogens could be monitored as described in the last paragraph of Section 3.3. All the upper limits presented in Table 2 are inversely proportional to the value adopted for $\rho_{\rm DM}$, the mass density of the dark matter in the Galactic plane, while the lower limits obtained for $\sigma'^{\rm cr}_{300\,{\rm K}}$ are proportional to $\rho_{\rm DM}$. Thus, if $\rho_{\rm DM}$ were larger than the conservative value of 0.3 (GeV/c$^2$) cm$^{-3}$ that we adopted, all the constraints we obtained would be strengthened.
As an illustrative example of the application of our constraints, we consider a case where the cross-section has a $v^{-4}$ dependence on collision velocity, $v$. Such a velocity dependence was considered by Muñoz et al. (2015) in their analysis of the cooling of hydrogen atoms at high redshift, an analysis that was recently invoked (Barkana 2018) as the explanation for an anomalously deep 21 cm absorption feature reported at $z \sim 17$ (Bowman et al. 2018).8 In this example, we assume that the scattering cross-sections have a Rutherford-like behavior, i.e., are proportional to $(A/[\mu_A v^2])^2$, as discussed in Section 2.1. Then the scattering cross-sections depend only on a single parameter, $\sigma^p_1$, via $\sigma^A_v = (A\mu_p/\mu_A)^2 (v/{\rm km\,s^{-1}})^{-4}\,\sigma^p_1$, where $\sigma^p_1$ is the cross-section for collisions with protons at $v = 1$ km s$^{-1}$. For $m_{\rm DM}$ in the range $0.5-5\,m_p$, $\sigma^{\rm cr}_v/\sigma^p_v$ varies from 237 to 20.8 and $\sigma^{\rm atm}_v/\sigma^p_v$ varies from 102 to 10.7, due to the variation of $\mu_p/\mu_A$ with $m_{\rm DM}$. To yield a significant cooling effect on hydrogen atoms at high redshift, $\sigma^p_1$ values of order $10^{-19}-10^{-18}\,{\rm cm^2}$ are required (Muñoz et al. 2015; Barkana 2018); in this example, we therefore focus on cross-sections in this range.

8 Although this proposed interpretation of the absorption feature has since been debated (see, e.g., Barkana et al. 2018), this example remains a useful illustration of how the constraints we have presented here may be applied in the context of a specific DM candidate.
HIDM particles arriving at $v = 200$ km s$^{-1}$ are first scattered in the crust at depths $\le 1$ km and may therefore be captured. Furthermore, the LSS (for DM particles with $v \sim v_{\rm es} = 11.2$ km s$^{-1}$) is below an altitude of 100 km, and HST is opaque to HIDM at $v_{\rm orb} \sim 8$ km s$^{-1}$. In this region of parameter space, equation (19) may be used to estimate the orbital decay rate, $-dR_{\rm orb}/dt$, of HST. In Figure 13, we plot $-dR_{\rm orb}/dt$ for HST as a function of $m_{\rm DM}$, with the horizontal green line indicating the value, $-dR_{\rm orb}/dt = 0.8$ km yr$^{-1}$, experienced by HST between servicing missions SM1 and SM2. The results plotted here include the effects of thermospheric loss. Because the $A$- and velocity-dependences are specified in this example, the relevant cross-sections for thermospheric escape ($\sigma^{\rm He}_{11}$ and $\sigma^{\rm H}_{11}$) have a fixed relationship to $\sigma^p_1$, as do all the cross-sections of relevance in our model. For $\sigma^p_1$ in the range proposed by Barkana (2018), $8 \times 10^{-20}-10^{-18}\,{\rm cm^2}$, masses in the range $0.55-3.9\,m_p$ are ruled out.9
To summarize, we have shown that for dark matter in the $0.60-6\,m_p$ mass range, having a momentum-transfer cross-section $\gtrsim 10^{-29}\,{\rm cm^2}$ for scattering with material in the crust, the Earth will have a significant atmosphere of dark matter extending throughout the interior and far above the surface. We determined the density and structure of this DM atmosphere from first principles, finding that the density can exceed $10^{14}\,{\rm cm^{-3}}$ at the Earth's surface. Given this high density, we infer upper limits on scattering cross-sections that are generally stronger than those from direct detection experiments, using bounds on the orbital decay of HST and the evaporation of liquid cryogens. These upper limits are complemented by lower limits from the thermal conductivity of the Earth's crust, which provide a stringent constraint on models, especially when the Born approximation can be used to relate the cross-sections for different nuclei.

9 In his treatment of scattering by primordial material at $z \sim 17$, Barkana (2018) assumed a $\sigma^A_v \propto A$ dependence that was different from that adopted here (personal communication). For this $A$-dependence, the range of excluded masses is very similar: $0.60-3.9\,m_p$.
A. Density distribution of DM
Based on an analysis of the Boltzmann collision equation for a dilute gas of particles of mass $m_X$ within a "background" gas of atoms of mass $m_n$, Gould & Raffelt (1990; hereafter GR90) proposed a modification to the density distribution adopted by Gilliland et al. (1986) for DM in the Sun in the limit of large scattering cross-section. Gilliland et al. (1986) adopted a particle density profile that is the integral form of our equation (8) and is in agreement with that implied by CC70 equation (8.1,7); GR90, by contrast, obtained a different expression (their equation 2.30). Here, the dimensionless quantity $\alpha$, which was computed numerically and tabulated by GR90, was found to be a monotonically increasing function of $m_X/m_n$, having values of 2, 2.32, and 2.5, respectively, for $m_X/m_n = 0$, 1, and $\infty$. This modification amounts to the addition of a term $(\tfrac{5}{2} - \alpha)\,d\ln T(r)/dr$ to the right-hand side of our equation (8), yielding their equation (2.29). In the limit of large $m_X/m_n$, the additional term $(\tfrac{5}{2} - \alpha)\,d\ln T(r)/dr$ is zero and the DM density distribution is identical to that adopted by Gilliland et al. (1986); this is also apparent from the form of equation (A2) given by GR90 for the case of constant $\alpha$. However, outside the limit of large $m_X/m_n$, the partial pressure gradient $d\ln p_{\rm DM}/dr$ given by GR90's treatment is different from that given by equation (8); as we discuss below, this would appear to present an inconsistency with the hydrostatic equilibrium equation, $dp_{\rm tot}/dr = -g\rho_{\rm tot}$, where $p_{\rm tot}$ and $\rho_{\rm tot}$ are the total pressure and density.
As a thought experiment, let us consider a single-component gas in hydrostatic equilibrium, label one in every million particles, and consider the labeled particles to comprise the dilute gas under consideration. The partial pressure, $p_X$, and density, $\rho_X$, of that dilute gas are everywhere a factor of one million smaller than the total pressure and density, and must therefore also obey the hydrostatic equilibrium equation, i.e., $dp_X/dr = -g\rho_X$. But this equation is exactly equation (8), without the additional term proposed by GR90, and thus the required value of $\alpha$ is 5/2, not the 2.32 given by GR90 for the case $m_X/m_n = 1$. Similar inconsistencies arise for $m_X/m_n \ne 1$. To demonstrate this point, let us return to the binary gas mixture consisting of a dilute gas of particles of mass $m_X$ within a "background" gas of atoms of mass $m_n$, and now consider the case where there is a temperature gradient but no gravitational field. In equilibrium, the total gas pressure must be constant, and thus the total particle number density, $n_X + n_n$, must be inversely proportional to temperature. However, GR90 equation (2.29) tells us that the density of the dilute gas is $n_X \propto T^{(\alpha - 3/2)}$, and the concentration of the dilute gas, $n_X/(n_X + n_n)$, is therefore proportional to $T^{(\alpha - 5/2)}$. Thus, if $\alpha < 5/2$, as claimed by GR90 except when $m_X/m_n \gg 1$, the concentration of the dilute gas is a decreasing function of temperature.
This leads to the pathological result that a medium containing a binary gas mixture with a temperature gradient will undergo segregation, even in the absence of a gravitational field; moreover, the sense of segregation is the same (dilute gas concentration largest where the temperature is smallest) regardless of whether the dilute gas has a larger or smaller molecular mass than the background gas. Such behavior is neither observed nor understandable on thermodynamic grounds. Finally, we note that in a gas containing multiple constituents with different molecular masses, the sum of the differential equations for the individual components is guaranteed to yield the hydrostatic equilibrium equation when equation (8) is adopted for the derivatives of the partial pressures, whereas it does not if the GR90 modification is included. Accordingly, we adopt the density distribution implied by equation (8) in this study.
B. Loss of HIDM from the thermosphere
To compute the loss rate of HIDM, we may write the escaping flux as
$$\Phi_{\rm es} = \int L(z)\,dz, \qquad {\rm (B1)}$$
where $L(z)$ is the loss rate per unit volume. The latter may be written
$$L(z) = \int_{v_{\rm es}}^{\infty} R_v\,\beta\,dv, \qquad {\rm (B2)}$$
where $R_v\,dv$ is the rate per unit volume at which scattering events produce HIDM particles with speeds of $v$ to $v + dv$, and $\beta$ is the probability that a scattered particle with velocity $v > v_{\rm es}$ actually escapes (instead of suffering an additional scattering). For a particle traveling in the upward direction, the probability of escape is $\exp(-\tau)$, where $\tau = \int_z^{\infty} \lambda^{-1}\,dz'$. Because the escape of HIDM is dominated by particles at or just above the escape velocity, the relevant cross-sections are those for collision velocities of $v_{\rm es}$.
Since $z \ll R_\oplus$, we may treat the atmosphere as having a plane-parallel geometry. For a scattered particle traveling at angle $\cos^{-1}\mu$ to the upward direction, the escape probability is reduced to $\exp(-\tau/\mu)$. The angle-averaged escape probability is therefore $\beta(\tau) = \frac{1}{2}E_2(\tau)$, where $E_2(\tau) = \int_0^1 \exp(-\tau/\mu)\,d\mu$ is the exponential integral function of order 2.
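The angle-averaged escape probability can be evaluated directly with the standard exponential integral; a minimal check using scipy:

```python
# Angle-averaged escape probability beta(tau) = (1/2) E_2(tau),
# with E_2 the exponential integral of order 2.
from scipy.special import expn

def beta(tau):
    return 0.5 * expn(2, tau)

for tau in [0.0, 0.5, 1.0, 2.0]:
    print(f"tau = {tau:.1f}: beta = {beta(tau):.4f}")
# At tau = 0, E_2(0) = 1, so beta = 0.5: half of the scattered particles
# move upward and escape freely from an optically thin layer.
```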
We may estimate $R_v$ by observing that the fraction of HIDM particles, $f_v\,dv$, with speeds in the range $v$ to $v + dv$ obeys the relation $n_{\rm DM} f_v = R_v t_v$, where $t_v$ is the mean time between scatterings. Particles with $v > v_{\rm es}$ are moving faster than the typical velocities of molecules in Earth's atmosphere. In this limit, $t_v$ may be estimated as $\lambda/v$, and thus $R_v$ may be approximated by $n_{\rm DM} f_v v/\lambda$. With the use of this expression for $R_v$, and substituting equation (B2) into equation (B1), we obtain an expression (B3) for the escaping flux, with the sum taken over all atmospheric nuclei. Observing now that $dz = \lambda\,d\tau$, we may rewrite equation (B3) as an integral over $\tau$. Below the LSS, $f_v$ is well-approximated by the Maxwell-Boltzmann distribution function at the temperature of the atmosphere,
$$f_{\rm MB}(v) = 4\pi v^2 \left(\frac{m_{\rm DM}}{2\pi k T}\right)^{3/2} \exp\!\left(-\frac{m_{\rm DM} v^2}{2kT}\right). \qquad {\rm (B6)}$$
We consider first the case of an isothermal atmosphere at temperature $T_0$. The Jeans approximation consists of assuming $f_{\rm MB}(T_0)$ above the LSS as well. With this assumption, if $n_{\rm DM}$ varies slowly with $\tau$, then the integral over $\tau$ may be approximated by $\frac{1}{2}n_{\rm DM}(\tau = 1)$, and we recover the Jeans escape formula (eqn. 10).
Because the gas temperature rises rapidly in Earth's thermosphere (i.e., above an altitude of $\sim 100$ km; see Figure 3), we have investigated whether the isothermal treatment given above is applicable. In particular, we have considered the possibility that collisions between HIDM and hot gas particles in the thermosphere might significantly enhance the escape rate. Because the gas density there is very small, escape from the thermosphere proceeds almost invariably by the single scattering of a HIDM particle by an atmospheric molecule or atom. The HIDM have a characteristic energy $kT_{\rm LSS}$, while the atmospheric molecules and atoms, which have a much larger cross-section for collisions among themselves, still have a Maxwell-Boltzmann distribution at the local thermospheric temperature, $T(z)$.
We consider the scattering of HIDM particles with a thermospheric nucleus of mass $m_A$. If the scattering angle is $\cos^{-1}\mu_{\rm sc}$ in the center-of-mass frame, the mean fractional energy transfer, $\bar f_{\rm KE}$, to an HIDM particle that is initially at rest is given by equation (4), with an angular dependence $f_{\rm KE}(\mu_{\rm sc}) = (1 - \mu_{\rm sc})\,\bar f_{\rm KE}$. The HIDM particle will acquire enough energy to escape if the kinetic energy of the atomic nucleus on which it scatters exceeds a minimum value, $\frac{1}{2}m_A v_{\rm min}^2$, where $v_{\rm min}$ is the minimum velocity corresponding to that energy. One-half of the HIDM thereby produced are moving in the upward direction, so the rate per unit volume at which scattering events produce escaping HIDM (equation B9) is proportional to $n_A$, the number density of the colliding gas particles.
Using the expression given by equation (B6) for $f_{\rm MB}$, and integrating equation (B9) over velocity, we obtain an expression (B10) involving the thermal velocity $v_T = (2kT/m_A)^{1/2}$. Finally, integrating over angle with the assumption that $\sigma^A_{11}$ is independent of $\mu_{\rm sc}$ (i.e., that the scattering angles are isotropically distributed in the center-of-mass frame, as appropriate for short-range interactions in the low-velocity limit), we obtain the loss rate given by equation (B11). In Figure 14, we show the loss rates per unit volume, $L(z) = R_{\rm es}\,\beta(\tau)$, resulting from collisions with each constituent of the atmosphere. The example results shown here were obtained for the two atmospheric profiles plotted in Figure 3 in Section 2.4.1, and for $m_{\rm DM} = 2\,m_p$ and $\sigma^A_{11} = 10^{-23}\,{\rm cm^2}$. For $m_{\rm DM} = 2\,m_p$, collisions with He are the dominant loss process; despite the relatively small He abundance in the thermosphere, the energy transfer is more efficient for He ($\bar f_{\rm KE} = 4/9$) than for N ($\bar f_{\rm KE} = 7/32$), leading to a larger $T_{\rm eff}$.
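The quoted energy-transfer efficiencies can be reproduced with the standard two-body formula; a small check, assuming $\bar f_{\rm KE} = 2\,m_{\rm DM} m_A/(m_{\rm DM}+m_A)^2$ (the isotropic-scattering mean implied by the values above):

```python
# Mean fractional kinetic-energy transfer per scattering for isotropic
# center-of-mass scattering: f_KE = 2 m_DM m_A / (m_DM + m_A)^2.
# (This form reproduces the values 4/9 and 7/32 quoted in the text.)
from fractions import Fraction

def f_ke(m_dm, m_a):
    return Fraction(2 * m_dm * m_a, (m_dm + m_a) ** 2)

m_dm = 2                       # HIDM mass in units of m_p
for name, m_a in [("H", 1), ("He", 4), ("N", 14), ("O", 16)]:
    print(f"{name:>2}: f_KE = {f_ke(m_dm, m_a)}")
# He: 16/36 = 4/9, N: 56/256 = 7/32 -- He transfers energy more efficiently.
```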
B.1. Variation of the loss rate during the solar cycle
The results shown in Figure 14 indicate that the HIDM loss rate can depend strongly on the level of solar activity. Moreover, there are also significant dependences on latitude, local solar time, and day-of-year. To compute the globally-averaged loss rate over an extended period, we have run a separate grid of NRLMSIS models for each day over a complete solar cycle, obtaining temperature and density profiles as a function of latitude and local solar time. Here, we considered solar cycle 21 (1976-1985), for which the maximum monthly SSN (smoothed sunspot number) was the largest of any solar cycle in the $\sim 50$ years for which F10.7 indices have been available, and the second largest over the $\sim 400$ years for which sunspots have been observed. Figure 15 shows the corresponding variations in the solar activity indices. Here, we adopted a cross-section $\sigma^A_{11} = 10^{-23}\,{\rm cm^2}$ for all nuclei; as indicated by equation (B11), the loss rate scales linearly with cross-section. Figure 16 shows the average loss rate over the full solar cycle, as a function of $m_{\rm DM}$.
Fig. 16.- Average loss rate over the full solar cycle, as a function of $m_{\rm DM}$. Green, red, magenta, and cyan curves show the individual contributions due to collisions with N, O, He, and H nuclei for an assumed $\sigma^A_{11}$ of $10^{-24}\,{\rm cm^2}$ (top panel) and $10^{-20}\,{\rm cm^2}$ (bottom panel). For comparison, the Jeans loss rates (from Figure 5) are shown by dotted curves for $\sigma^{\rm es}_{11} = 10^{-28}\,{\rm cm^2}$ (green), $10^{-27}\,{\rm cm^2}$ (red), and $10^{-20}\,{\rm cm^2}$ (blue); the latter case (blue dotted line) represents the smallest Jeans loss rate for any value of $\sigma^{\rm es}_{11}$ considered here. The horizontal red dashed curve indicates the value of $1/t_\oplus$.

Figure 16 indicates that if $\sigma^{\rm H}_{11}$ and $\sigma^{\rm He}_{11}$ are both $\le 10^{-24}\,{\rm cm^2}$, then thermospheric loss is never significant: the thermospheric loss rate is either smaller than $1/t_\oplus$ or smaller than the Jeans loss rate.
If either $\sigma^{\rm He}_{11}$ or $\sigma^{\rm H}_{11}$ exceeds $10^{-24}\,{\rm cm^2}$, then thermospheric loss can be important. For $\sigma^{\rm He}_{11}$ and $\sigma^{\rm H}_{11}$ as large as $10^{-20}\,{\rm cm^2}$, Figure 16 shows that the thermospheric loss rates due to collisions with H or He can exceed both the Jeans loss rate and $1/t_\oplus$ over a range of DM masses. However, the constraints obtained for liquid H$_2$ and He (Section 3.3) place stringent limits on these cross-sections and imply that thermospheric loss is only significant over a narrower range of DM masses. To illustrate this point, let us consider the case where $\sigma^{\rm es}_{11} = 10^{-20}\,{\rm cm^2}$, which maximizes the importance of thermospheric loss relative to Jeans loss. We first consider the results that are obtained in the transparent dewar limit when thermospheric loss is neglected (Figures 10 and 11). As $m_{\rm DM}$ increases from 1.0 to 2.1 $m_p$, the limit on $\sigma^{\rm He}_{300\,{\rm K}}$ increases from $0.3 \times 10^{-28}$ to $1.3 \times 10^{-28}\,{\rm cm^2}$ (Figure 10), and the corresponding limit on $\sigma^{\rm H}_{300\,{\rm K}}$ increases from $0.6 \times 10^{-28}$ to $3.7 \times 10^{-28}\,{\rm cm^2}$ (Figure 11). Throughout this mass range, these limits are below the cross-sections at which the dewars become opaque by a factor of at least 4000 (for He) or 1000 (for H). This indicates that in the opaque dewar limit (i.e., $\sigma^{\rm He}_{11} \gtrsim 6 \times 10^{-25}\,{\rm cm^2}$ or $\sigma^{\rm H}_{11} \gtrsim 4 \times 10^{-25}\,{\rm cm^2}$), the values of $n_{\rm DM}$ are constrained to lie at least 3 orders of magnitude below the values that are obtained without the inclusion of thermospheric escape. Referring now to Figure 16, we see that the thermospheric loss rates for H and He never exceed $1/t_\oplus$ by a factor as large as 1000 for $m_{\rm DM} \ge 1.2\,m_p$. Thus, for the mass range $m_{\rm DM} = 1.2-2.1\,m_p$, the effects of thermospheric escape fail to reduce $n_{\rm DM}$ by a factor that is sufficient to evade the constraints implied by the vaporization rates for liquid H$_2$ and He, provided that the cross-sections for H and He at $v_{\rm es}$ are no larger than those at 300 K and are both $\le 10^{-20}\,{\rm cm^2}$. Our conclusion, then, is that thermospheric loss is potentially important only within a fairly narrow range of HIDM masses: $0.7\,m_p \lesssim m_{\rm DM} \lesssim 1.2\,m_p$.
C. Measurements of temperature gradients in boreholes
Temperature gradients have been measured within the crust in more than 30,000 boreholes widely distributed over the surface of the Earth. Combined with laboratory measurements of the thermal conductivity of crustal rocks, these temperature gradient measurements have been used to obtain an estimate of the total power transported upwards through the crust: $P_\oplus = (47 \pm 2)$ TW (Davies & Davies 2010).
In certain cases, boreholes cross stratified rock formations in which rock type changes.
For such boreholes, discontinuities in the measured temperature gradient can be detected at the rock formation boundaries and have been attributed to differences in the conductivity of the rock types. As an example, we considered the study of such effects presented by HC95 for 9 boreholes in the Colorado plateau of Eastern Utah. Changes in the temperature gradient at rock type boundaries are most obvious for the borehole designated WSR-1, in which a layer of lower-conductivity rock (Jurassic Carmel, Jca) lies sandwiched between two rock types of high conductivity (Jurassic Entrada, Je; and Jurassic Navajo, Jna).
The temperature profile for this borehole is shown in Figure 17 (top panel). Using a simultaneous fit to all the data acquired for the nine boreholes that they investigated, HC95 obtained estimates of the different rock conductivities needed to account for the observed changes in temperature gradient at the rock formation boundaries, and compared them with laboratory-measured values. Because the actual heat flux is not measured directly, only the relative conductivities are constrained. For the best-fit thermal conductivities obtained by HC95, the bottom panel of Figure 17 shows a "Bullard plot" (Bullard 1939), in which the temperature is plotted as a function of the thermal resistance, $R(y) = \int_0^y dy'/k(y')$, where $y$ is the distance below the surface and $k$ is the thermal conductivity. The absence of slope discontinuities at the layer boundaries in the lower panel of Figure 17 indicates that the relative conductivities have been computed correctly. Moreover, the absence of significant curvature implies that the flux is constant and that there is little radiogenic heat production. The deviation from the constant-flux behavior at smaller depths ($\le 100$ m) was interpreted by HC95 as providing a record of surface temperature changes over the past several hundred years.

Fig. 17.- Top panel: temperature profile for borehole WSR-1, with the rock formation boundaries taken from Table 1 in HC95. Lower panel: Bullard plot for WSR-1 (see the text). The dotted blue line is the behavior expected for a constant flux of 68 erg s$^{-1}$ cm$^{-2}$.
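To make the construction concrete, the sketch below builds a Bullard plot for a layered conductivity model; the layer thicknesses, conductivities, and surface temperature are hypothetical placeholders, not the HC95 values.

```python
# Sketch of a Bullard plot for a layered conductivity model.
# Layer thicknesses and conductivities below are hypothetical placeholders.
import numpy as np

thickness = np.array([80.0, 120.0, 100.0])     # layer thicknesses [m]
k = np.array([2.5e5, 1.2e5, 2.8e5])            # conductivities [erg/s/cm/K]
F = 68.0                                       # constant heat flux [erg/s/cm^2]
T_surface = 285.0                              # surface temperature [K]

z = np.linspace(0.0, thickness.sum(), 400)     # depth grid [m]
edges = np.cumsum(thickness)
# conductivity as a function of depth (piecewise constant)
k_of_z = k[np.searchsorted(edges, z, side="right").clip(max=len(k) - 1)]

dz_cm = np.gradient(z) * 100.0                 # convert m -> cm
R = np.cumsum(dz_cm / k_of_z)                  # thermal resistance R(y) = int dy/k
T = T_surface + F * R                          # temperature for a constant flux

# In (R, T) coordinates the profile is a single straight line of slope F,
# with no slope breaks at the layer boundaries: the Bullard-plot signature
# of correctly determined relative conductivities.
slope = np.polyfit(R, T, 1)[0]
print(f"fitted slope = {slope:.1f} erg/s/cm^2 (input flux {F})")
```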
In Figure 18, we plot the laboratory-measured rock conductivities, $k_{\rm lab}$, as a function of the values inferred from the temperature profile within the sample of boreholes, $k_{\rm bh}$. These values are taken from Table 1 in HC95, where they are denoted $k_{pr}$ and $k_a$ respectively, and they apply to the San Rafael Swell region, in which four of the nine boreholes are located (including WSR-1). The error bars on $k_{\rm lab}$ represent the standard deviations obtained for laboratory measurements performed on multiple samples of a given rock type. While the ratios of the $k_{\rm bh}$-values reported by HC95 are determined by the borehole measurements, the overall scaling adopted by HC95 is unconstrained. (It was chosen by HC95 to minimize the differences between $k_{\rm lab}$ and $k_{\rm bh}$ when averaged over all rock types.) Accordingly, we performed the linear regression shown by the black line with both the slope and y-intercept unconstrained, and indeed recovered a best-fit slope of 1.0. For conductivities in the range of interest, the mean free path for HIDM, $\lambda$, is larger than the thickness of the rock samples for which the laboratory measurements were obtained ($\sim 2$ cm; Roy et al. 1968). Any anomalous conductivity, $k_{\rm DM}$, associated with HIDM would therefore increase $k_{\rm bh}$ but not $k_{\rm lab}$. Thus, in this regime, $k_{\rm lab} = (k_{\rm bh} - k_{\rm DM})$, and a thermal conductivity associated with HIDM would reveal itself as a negative intercept for the best-fit linear regression. The linear regression yields a y-intercept of $(-0.04 \pm 1.44) \times 10^5\,{\rm erg\,s^{-1}\,cm^{-1}\,K^{-1}}$, placing a 3$\sigma$ upper limit of $4.3 \times 10^5\,{\rm erg\,s^{-1}\,cm^{-1}\,K^{-1}}$ on $k_{\rm DM}$. The corresponding upper limit on $\lambda$ is $250\,(m_{\rm DM}/m_p)^{1/2}(T_{\rm DM}/300\,{\rm K})^{-1/2}(n_{\rm DM}/10^{14}\,{\rm cm^{-3}})^{-1}$ cm, and the resultant lower limit on $\sigma'^{\rm cr}_{300\,{\rm K}}$ is $5.4 \times 10^{-26}\,(m_{\rm DM}/m_p)^{-1/2}(T_{\rm DM}/300\,{\rm K})^{1/2}(n_{\rm DM}/10^{14}\,{\rm cm^{-3}})\,{\rm cm^2}$.

Fig. 18.- Laboratory-measured conductivities, $k_{\rm lab}$, plotted against the conductivities $k_{\rm bh}$ inferred from the borehole temperature profiles, taken from Table 1 in HC95 for the different rock types. | 2018-07-30T20:28:06.000Z | 2018-05-22T00:00:00.000 | {
"year": 2018,
"sha1": "f4589e770748a9a7a66f9e1ff81f3fbf63a9768a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.08794",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f4589e770748a9a7a66f9e1ff81f3fbf63a9768a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255668270 | pes2o/s2orc | v3-fos-license | Improving Generalizability of Spectral Reflectance Reconstruction Using L1-Norm Penalization
Spectral reflectance reconstruction for multispectral images (e.g., via Wiener estimation) may perform sub-optimally when the object being measured has a texture that is not represented in the training set; without such training samples, the accuracy of the reconstruction is significantly lower. We propose an improved reflectance reconstruction method based on L1-norm penalization to address this issue. By using the L1-norm, our method provides a transformation matrix with a favorable sparsity property, which helps to achieve better results when measuring unseen samples. We verify the proposed method by reconstructing the spectral reflectance of four types of materials (cotton, paper, polyester, and nylon) captured by a multispectral imaging system. Each material has its own texture, and there are 204 samples of each material/texture in the experiments. The experimental results show that when the texture is not included in the training dataset, the L1-norm achieves better results than existing methods under a colorimetric measure (i.e., color difference) and shows consistent accuracy across the four kinds of materials.
Introduction
Spectral reflectance reconstruction in the multispectral imaging system (MIS) has attracted a lot of attention in recent years [1][2][3][4][5][6][7][8]. The objective is to obtain a full spectral reflectance image of the objects (e.g., fabric) such that accurate color reproduction can be performed. Multispectral imaging has an advantage over conventional three-channel color imaging because it can provide the full spectral information in the visible band (i.e., 400 nm-700 nm), which can be used for accurate color measurement [9][10][11]. Applications of MIS also include fruit classification [12], art archiving [13], and color constancy determination [14], among many others. In our study, multispectral imaging refers to using 16 narrowband channels to estimate the full spectrum, which consists of 31 channels, as similarly defined in [2].
In MIS, spectral reflectance reconstruction refers to the process of reconstructing spectral reflectance from the responses of multispectral images at different narrow-band wavelengths [3,13]. The transmission rates of a typical set of narrowband filters are shown in Figure 1. In most cases, one needs to find a mathematical mapping that transforms a camera's response vector (with dimension c) into a reflectance vector (with dimension m), where c is less than m.
In the literature of multispectral imaging, several reflectance reconstruction techniques have been proposed, including Wiener estimation [3][4][5], the least-square estimation method [6,13,15,16], and kernel-based methods [7,8]. These methods usually involve many parameters in estimating the mathematical mapping between the response and the reflectance. Take the pseudo-inverse as an example: it has m × c parameters, where m is the dimension of the reflectance vector and c is the dimension of the response vector, so the number of parameters grows linearly with c. In color measurement applications [10], it is not uncommon for the numbers of channels m and c to be as large as 31 and 16, respectively, giving 31 × 16 = 496 parameters. Because of the large number of parameters, many training samples are needed for parameter estimation; otherwise, such a large number of parameters may cause overfitting in the reconstruction process. In recent years, there has been a lot of work on spectral reconstruction using only 3-channel Red-Green-Blue (RGB) images from off-the-shelf commercial cameras such as Digital Single-Lens Reflex (DSLR) cameras [1,[17][18][19][20]. The main methods include regression, sparse coding, and deep neural networks. While the proposed work also addresses spectral reconstruction, there is a fundamental difference between our work and the aforementioned recent works, namely the focus on stringent color accuracy. The recent works try to reconstruct 31-dimensional spectral data from 3-dimensional response data through a massive amount of training samples (each pixel is a sample, and one image can have millions of pixels), with the focus on spectral difference. It has been shown that a small spectral difference does not translate to a small color difference [1]. In our work, rather than treating each pixel as a sample, each color patch is treated as a sample. This difference means our method is not directly comparable to these recent works; the issue is further discussed in the experiments section.
In this paper, an L1-norm penalization term is added to the least-square estimation to address the overfitting issue. The L1-norm term helps the target parameter to achieve a sparse structure and overcome overfitting in training. Taking the pseudo-inverse as an example: if only 5 of the 16 channels contribute to the final reconstruction of each reflectance value, the number of parameters decreases from 496 to 31 × 5 = 155, less than a third of the parameters required by the pseudo-inverse. To verify the results, we prepared four kinds of materials (cotton, paper, polyester, and nylon) with a total of 816 samples. The evaluation results verify that the L1-norm penalization method can help to improve color reproduction accuracy compared with traditional methods.
The paper is organized as follows: Section 2 introduces the basic formulations of spectral reflectance reconstruction; Section 3 presents the current reconstruction algorithms; Section 4 discusses the proposed L1-norm method; Section 5 describes the experiments and compares the results of our method with the other methods; Sections 6 and 7 discuss why the L1-norm works and present the conclusions of our work.
Formulation of Multispectral Imaging
In our study, a multispectral imaging system is built as illustrated in Figure 2. In the system, a monochrome camera is used for capturing the response images at each narrow-band wavelength $\lambda_i$ ($1 \le i \le n$) using the corresponding filter in the filter wheel. Narrow-band wavelength filters (transmission rates illustrated in Figure 1) and CCD cameras are commonly used in multispectral systems for color measurement [9,10,21]. The filter wheel with $n$ filters is placed between the lens and the camera to filter the light entering the camera. The measured response of the camera is proportional to the intensity of light entering the sensor, which we formulate as Equation (1). Denote by $l(\lambda)$ the spectral power distribution of the imaging illumination, $r(\lambda)$ the spectral reflectance of the samples being imaged, $s(\lambda)$ the sensitivity of the CCD camera, $b_c$ the bias response caused by dark current, and finally $n_c$ the noise. In the spectral characterization of the imaging system, the spectral sensitivity and bias are recovered from a training dataset with known reflectance. The response $u_c$ of the $c$th channel can then be represented as
$$u_c = \int l(\lambda)\,s(\lambda)\,r(\lambda)\,d\lambda + b_c + n_c. \qquad (1)$$
The objective in reflectance reconstruction is to recover $r(\lambda)$. Note that $l(\lambda)$ and $s(\lambda)$ can be merged into a single term $m_c(\lambda)$ in Equation (1). In practice, the filters are narrow-band filters, so we can replace the continuous variables with their discrete counterparts, and the integral can be replaced by a summation. If $N$ uniformly spaced samples are used over the visible spectrum, Equation (1) can be rewritten in vector and matrix notation as
$$\mathbf{u} = \mathbf{M}\mathbf{r} + \mathbf{b} + \mathbf{n},$$
where $\mathbf{u}$ is the $c$-dimensional vector of responses, $\mathbf{r}$ is an $m$-dimensional vector of reflectances, $\mathbf{M}$ is a $c \times m$ matrix encoding spectral responsivity and illumination, and $\mathbf{b}$ and $\mathbf{n}$ are vector representations of the biases and noises, respectively.
Preliminaries
To make this paper self-contained, we briefly summarize the formulations of typical reflectance reconstruction methods, including least-square estimation (the pseudo-inverse method), ridge regression (L2-norm penalization), Wiener estimation, and kernel methods. Our proposed L1-norm based solution builds on least-square estimation, and we will compare our method with all the other methods mentioned in this section.
3.1. Least-Square Estimation (Pseudo-Inverse) and Ridge Regression (L2-Norm Penalization)

This subsection provides a brief review of the least-square estimation method and ridge regression [13]. The estimation of reflectance amounts to finding an $m \times c$ matrix $\mathbf{W}$ that transforms the response $\mathbf{u}$ into the estimated reflectance $\hat{\mathbf{r}}$,
$$\hat{\mathbf{r}} = \mathbf{W}\mathbf{u}.$$
A natural approach is to minimize the difference between the measured reflectance and the reconstruction $\mathbf{W}\mathbf{u}$, so we can formulate the cost function as
$$\min_{\mathbf{W}} \|\mathbf{R} - \mathbf{W}\mathbf{U}\|_F^2. \qquad (4)$$
In this equation, $\mathbf{R}$ is the matrix form of $\mathbf{r}$ and $\mathbf{U}$ is the matrix form of $\mathbf{u}$. Note that the matrix $\mathbf{U}$ in Equation (4) is of size $c \times$ (number of samples). The subscript $F$ refers to the Frobenius norm.
Ridge regression can be viewed as adding an L2-norm penalization to the least-square estimation of Equation (4); the cost function of ridge regression can be written as
$$\min_{\mathbf{W}} \|\mathbf{R} - \mathbf{W}\mathbf{U}\|_F^2 + \beta\|\mathbf{W}\|_F^2. \qquad (5)$$
The closed-form solution for $\mathbf{W}$ can be obtained by differentiating with respect to $\mathbf{W}$ and setting the derivative equal to 0. The solution is
$$\mathbf{W} = \mathbf{R}\mathbf{U}^{\top}(\mathbf{U}\mathbf{U}^{\top} + \beta\mathbf{I})^{-1}.$$
When $\beta = 0$, this reduces to the solution of least-square estimation (the pseudo-inverse).
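A minimal numpy sketch of the pseudo-inverse and ridge solutions is given below; variable names and sizes are ours, with R an m × N reflectance matrix and U a c × N response matrix, following the formulation above.

```python
# Minimal sketch: least-square (pseudo-inverse) and ridge solutions for W
# such that R ~ W U, with R of size (m, N) and U of size (c, N).
import numpy as np

def ridge_fit(R, U, beta=0.0):
    """W = R U^T (U U^T + beta I)^(-1); beta = 0 gives the pseudo-inverse."""
    c = U.shape[0]
    return R @ U.T @ np.linalg.inv(U @ U.T + beta * np.eye(c))

# Hypothetical sizes: 31-channel reflectance, 16-channel response, 204 samples.
rng = np.random.default_rng(0)
U = rng.random((16, 204))
R = rng.random((31, 204))

W_pi = ridge_fit(R, U)              # least-square / pseudo-inverse
W_l2 = ridge_fit(R, U, beta=5e-4)   # ridge, with the beta reported for cotton
r_hat = W_l2 @ U[:, 0]              # reconstruct one sample
print(W_pi.shape, W_l2.shape, r_hat.shape)   # (31, 16) (31, 16) (31,)
```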
Wiener Estimation
In Wiener estimation [3], the transform matrix is
$$\mathbf{W} = \mathbf{K}_r \mathbf{M}^{\top} (\mathbf{M}\mathbf{K}_r\mathbf{M}^{\top} + \mathbf{K}_n)^{-1},$$
where $\mathbf{K}_r$ and $\mathbf{K}_n$ are the autocorrelation matrices of the reflectance and the noise, respectively:
$$\mathbf{K}_r = E[\mathbf{r}\mathbf{r}^{\top}], \qquad \mathbf{K}_n = E[\mathbf{n}\mathbf{n}^{\top}].$$
The noise is assumed to be independent across channels, so the matrix $\mathbf{K}_n$ is diagonal in Wiener estimation. The noise variance $\sigma_c^2$ can be estimated as
$$\sigma_c^2 = E\left[(u_c - \mathbf{m}_c\mathbf{r})^2\right],$$
where $u_c$ is the response of the $c$th channel and $\mathbf{m}_c$ is the spectral responsivity of the $c$th channel. $E$ denotes the expectation operation.
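The following sketch implements the Wiener transform matrix above, estimating K_r from training reflectances and assuming a diagonal noise covariance; the system matrix, data, and sigma values are hypothetical placeholders.

```python
# Sketch of Wiener estimation: W = K_r M^T (M K_r M^T + K_n)^(-1).
# M (c x m) is the system matrix; R_train (m x N) holds training reflectances.
import numpy as np

def wiener_matrix(M, R_train, sigma2):
    """sigma2: length-c array of per-channel noise variances (diagonal K_n)."""
    K_r = (R_train @ R_train.T) / R_train.shape[1]   # autocorrelation of reflectance
    K_n = np.diag(sigma2)                            # diagonal noise autocorrelation
    return K_r @ M.T @ np.linalg.inv(M @ K_r @ M.T + K_n)

rng = np.random.default_rng(1)
M = rng.random((16, 31)) * 0.1          # hypothetical responsivity matrix
R_train = rng.random((31, 204))         # hypothetical training reflectances
sigma2 = np.full(16, 1e-4)              # hypothetical noise variances

W = wiener_matrix(M, R_train, sigma2)
print(W.shape)                          # (31, 16)
```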
Kernel Method
The kernel method is also widely used in spectral reflectance reconstruction [7,8]. It regularizes the least-square regression in a Reproducing Kernel Hilbert Space (RKHS). The kernel can be viewed as a function that maps the vectors in the least-square method to a new space. Many kernels can be used; in the work [7], the authors applied the Gaussian kernel, polynomial kernel, spline kernel, and Duchon kernel.
For example, the Gaussian kernel can be defined by
$$k(\mathbf{x}, \mathbf{z}) = \exp(-\gamma\|\mathbf{x} - \mathbf{z}\|^2), \qquad (11)$$
where $\gamma > 0$ is a super-parameter. The Gaussian kernel is invariant to rotation and translation, so $k(\mathbf{x}, \mathbf{z}) = k(\|\mathbf{x} - \mathbf{z}\|)$. The corresponding RKHS is infinite-dimensional.
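For completeness, a Gaussian kernel matrix as in Equation (11) can be computed as in the sketch below; the data and the choice to evaluate it over all pairs of response vectors are ours.

```python
# Gaussian kernel of Equation (11): k(x, z) = exp(-gamma * ||x - z||^2),
# evaluated for all pairs of columns of a response matrix U (c x N).
import numpy as np

def gaussian_kernel(U, gamma):
    sq_norms = np.sum(U ** 2, axis=0)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * U.T @ U   # pairwise ||x-z||^2
    return np.exp(-gamma * np.clip(d2, 0.0, None))

U = np.random.default_rng(2).random((16, 204))
K = gaussian_kernel(U, gamma=0.006)     # gamma value reported for cotton training
print(K.shape, K[0, 0])                 # (204, 204) 1.0
```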
Proposed Method
In this work, we propose to apply the L1-norm penalized linear regression method to reflectance reconstruction. To the best of our knowledge, this is the first study to use L1-norm penalized linear regression for this kind of application. The L1-norm endows the constrained variable (in our study, $\mathbf{W}$) with sparsity, which helps to overcome overfitting [22]. The cost function of L1-norm penalized linear regression in reflectance reconstruction is
$$\min_{\mathbf{W}} \|\mathbf{R} - \mathbf{W}\mathbf{U}\|_F^2 + \alpha\|\mathbf{W}\|_1. \qquad (12)$$
In this equation, $\alpha$ is a super-parameter (or regularization parameter) and can be estimated by cross-validation, and $\mathbf{W}$ is the weight matrix that transforms responses to reflectances. In our work, the responses are acquired by placing narrowband filters before the camera, which means that each reflectance channel is related to only some of the response channels. By constraining the weight $\mathbf{W}$, some entries of the weight matrix are forced to equal zero. Because the L1-norm is not smooth, we use the Alternating Direction Method of Multipliers (ADMM) [22] to solve it. A dummy variable $\mathbf{Z}$ can be introduced into Equation (12), which transforms it into
$$\min_{\mathbf{W},\mathbf{Z}} \|\mathbf{R} - \mathbf{W}\mathbf{U}\|_F^2 + \alpha\|\mathbf{Z}\|_1 \quad \text{s.t.} \quad \mathbf{W} = \mathbf{Z}.$$
This is a standard lasso (least absolute shrinkage and selection operator) problem, and we can solve it by the following iteration [22]:
$$\mathbf{W}^{(k+1)} = \big(\mathbf{R}\mathbf{U}^{\top} + \mu(\mathbf{Z}^{(k)} - \mathbf{T}^{(k)})\big)\big(\mathbf{U}\mathbf{U}^{\top} + \mu\mathbf{I}\big)^{-1},$$
$$\mathbf{Z}^{(k+1)} = \mathrm{soft}\big(\mathbf{W}^{(k+1)} + \mathbf{T}^{(k)},\ \alpha/(2\mu)\big),$$
$$\mathbf{T}^{(k+1)} = \mathbf{T}^{(k)} + \mathbf{W}^{(k+1)} - \mathbf{Z}^{(k+1)},$$
where the matrices $\mathbf{Z}$ and $\mathbf{T}$ are intermediate variables, which can be initialized with zero matrices, $\mu$ should be set larger than zero, and $\mathbf{I}$ is a unit matrix. The operation soft is a soft-thresholding function:
$$\mathrm{soft}(x, \theta) = \mathrm{sign}(x)\max(|x| - \theta,\ 0).$$
Equation (12) can be efficiently solved by using the toolbox in [22]. The pseudo code can be found in Algorithm 1.
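The iteration translates directly into code; the following is a minimal ADMM sketch of the lasso solve above, with our own choice of μ, iteration count, and synthetic data rather than the toolbox of [22].

```python
# Minimal ADMM sketch for min_W ||R - W U||_F^2 + alpha * ||W||_1.
# mu, iteration count, and data here are illustrative choices.
import numpy as np

def soft(X, theta):
    """Soft-thresholding: sign(x) * max(|x| - theta, 0), applied elementwise."""
    return np.sign(X) * np.maximum(np.abs(X) - theta, 0.0)

def l1_reflectance_fit(R, U, alpha=0.074, mu=1.0, n_iter=500):
    m, c = R.shape[0], U.shape[0]
    W = np.zeros((m, c))
    Z = np.zeros((m, c))           # dummy variable enforcing W = Z
    T = np.zeros((m, c))           # scaled dual variable
    RUt = R @ U.T
    inv = np.linalg.inv(U @ U.T + mu * np.eye(c))
    for _ in range(n_iter):
        W = (RUt + mu * (Z - T)) @ inv
        Z = soft(W + T, alpha / (2.0 * mu))
        T = T + W - Z
    return Z                       # sparse estimate of the transform matrix

rng = np.random.default_rng(3)
U = rng.random((16, 204))
R = rng.random((31, 204))
W = l1_reflectance_fit(R, U)       # alpha = 0.074 is the value reported for cotton
print(W.shape, f"fraction of zero entries: {np.mean(W == 0):.2f}")
```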
Data Preparation
Four kinds of materials are prepared for testing: polyester, nylon, paper, and cotton. We use one particular kind of sample (e.g., polyester) as the training set and the remaining three kinds of materials as the testing set (i.e., nylon, paper, and cotton). They are selected following Ref. [23]. The objective is to test whether the accuracy of spectral reflectance reconstruction depends on the type of material used for training/testing. Each texture includes 204 patches, and the reflectance of the color patches was measured using a DataColor 650 spectrophotometer with an interval of 10 nm. The spectrophotometer is used because it is the standard instrument for color measurement [24]. The multispectral images of the 816 samples (4 materials × 204 patches per material) are acquired by a custom-built system as shown in Figure 2. We use a xenon lamp and an integrating sphere as the illumination source to make the light more uniform. Moreover, a high-resolution monochromatic camera is employed to capture the multispectral images.
The L*a*b* space scatter plots of each texture are shown in Figure 3. The values for these samples are computed using computational color science tools [25]. The reflectance of the samples is in the range of 400-700 nm, sampled at 10 nm intervals; in the reflectance plots, the x-axis indicates the wavelength and the y-axis indicates the reflectance measured by the spectrophotometer.
Evaluation Metric
The color accuracy of the reflectance reconstruction is evaluated using both spectral and colorimetric error. The spectral root-mean-square (RMS) error between the actual reflectance $\mathbf{r}$ and its estimate $\hat{\mathbf{r}}$ is calculated as
$$\mathrm{RMS} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}(r_i - \hat{r}_i)^2},$$
where $m$ is the dimension of the vector $\mathbf{r}$. The color difference is evaluated by $\Delta E_{CMC(2:1)}$ [24,25], which is widely used in many industries such as textile and paper production.
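The spectral RMS metric is straightforward to implement; a minimal sketch with hypothetical vectors:

```python
# Spectral RMS error between measured and reconstructed reflectance vectors.
import numpy as np

def rms(r, r_hat):
    r, r_hat = np.asarray(r), np.asarray(r_hat)
    return np.sqrt(np.mean((r - r_hat) ** 2))

r = np.linspace(0.2, 0.8, 31)           # hypothetical 31-channel reflectance
r_hat = r + 0.01                        # hypothetical reconstruction
print(f"RMS = {rms(r, r_hat):.4f}")     # -> 0.0100
```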
Super-Parameter Estimation
There are three super-parameters in our experiments that need estimation: $\beta$ in Equation (5), $\alpha$ in the proposed method in Equation (12), and $\gamma$ in Equation (11). As an example, $\gamma$, $\alpha$, and $\beta$ are set to 0.006, 0.074, and 0.0005, respectively, when using cotton for training, as illustrated in Figure 4. In Figure 4, 70% of the 204 cotton samples are used for training and the remaining 30% are used for validation. The complete set of values of the super-parameters is listed in Table 1. From the table, the super-parameters are quite stable across different materials.

Figure 4. Super-parameter estimation of α, β, and γ in our experiment; the y-axis is the colorimetric difference between the reconstructed spectral reflectance and the ground truth. The black point is the minimum of ∆E, i.e., the value adopted in the reconstruction.

Results

Figure 5 illustrates the results when one kind of texture (e.g., cotton) is used as the training set (204 samples) and the others (204 × 3 samples) as the testing set. The color difference values under D65 are shown in the figure. In Figure 5, the L1-norm method outperforms the pseudo-inverse and other estimation methods in all cases when the training set is different from the testing set. The results are consistent when using the mean, the median, and the maximum of the color differences after reflectance reconstruction for the comparison. The mean and median results reveal the overall performance, while the worst-case performance is shown in the maximum color difference results. Specifically, the results of the L1-norm consistently outperform those of the pseudo-inverse method under the mean color difference when the training material is different from the testing material. When using the median for the comparison, the L1-norm is better than the pseudo-inverse method on the nylon material for all the testing sets. Similar results are also obtained when using the maximum color difference for comparison on the nylon material. Overall, when the testing material is unseen (i.e., not present in the training set), which is common in practice, using the L1-norm is better than using the pseudo-inverse and Wiener estimation for spectral reflectance reconstruction.
Figure 5. ∆E results under illumination D65 using different materials for training. "Proposed" refers to our L1-norm penalization method, "L2" refers to ridge regression (L2-norm penalization), "Wiener" refers to the Wiener method, and "PI" refers to the least-square estimation (pseudo-inverse method). Sub-figures (a-d) show the results of using polyester, paper, nylon, and cotton as training samples, respectively; panels report the mean, median, and maximum ∆E. Results show that the proposed method consistently outperforms the other methods in the color space; a more detailed description is provided in the text.

Figure 6 shows the values of the color difference with illumination F2. The results tend to be similar to those of Figure 5. Figure 7 shows the spectral difference between the reflectance measured by the spectrophotometer and by the MIS using RMS, which is not in the color space. From the results, it is interesting to note that the L1-norm method does not show a significant advantage over the pseudo-inverse and Wiener estimation when using RMS to measure the difference. In practice, however, color difference is measured in the color space (D65 and F2 in Figures 5 and 6, respectively). This suggests that the L1-norm is well suited to situations focused on colorimetry, such as the fabric industry.

Figure 6. ∆E results under illumination F2 using different materials for training. "Proposed" refers to our L1-norm penalization method, "L2" refers to ridge regression (L2-norm penalization), "Wiener" refers to the Wiener method, and "PI" refers to the least-square estimation (pseudo-inverse method). Sub-figures (a-d) show the results of using polyester, paper, nylon, and cotton as training samples, respectively. Results show that the proposed method consistently outperforms the other methods in the color space; a more detailed description is provided in the text.
Figure 7. RMS results of spectral reflectance reconstruction using different materials for training. "Proposed" refers to our L1-norm penalization method, "L2" refers to ridge regression (L2-norm penalization), "Wiener" refers to the Wiener method, and "PI" refers to the least-square estimation (pseudo-inverse method). Sub-figures (a-d) show the results of using polyester, paper, nylon, and cotton as training samples, respectively. Results show that the proposed method is comparable with the other methods in the reflectance space; a more detailed description is provided in the text.

Figure 8 shows the reconstructed reflectance of two randomly selected cotton samples, where the training set used was the 204 paper samples. The ground truth is the spectral reflectance measured by a spectrophotometer, and the results are compared with the spectral reflectance reconstructed by the different algorithms. The accuracy is measured both with a color metric and with a spectral metric. We select five typical illumination sources (i.e., A, C, D50, D65, and F2), and the color difference is measured by $\Delta E_{CMC(2:1)}$. The table in the figure reports the color difference and spectral difference individually. In the color metric, the proposed L1-norm method outperforms the traditional methods consistently; in the spectral metric, however, the differences for these two samples are still larger than for the traditional methods. These two samples illustrate how the L1-norm penalization can outperform state-of-the-art algorithms in spectral reflectance reconstruction when accuracy is measured colorimetrically.

Figure 8. Reflectance reconstruction for two samples by the proposed L1-norm estimation and the traditional estimations. Tables inside the plots give the color difference and spectral difference. A, C, D50, D65, and F2 represent different illuminations; the unit of these items is $\Delta E_{CMC(2:1)}$. RMS represents the root-mean-square spectral difference metric. In the tables, method "P" abbreviates the pseudo-inverse method, "Ours" is the proposed method, "W" is Wiener estimation, and "K" is the kernel method.
Time Analysis
This section compares the running times of the tested methods in the experiments. All the methods were run on an Intel i7-4790 machine with 32 GB RAM using Matlab. The training and testing times are listed in Table 2.
Compared with the ridge regression (L2) and pseudo-inverse methods, the proposed method requires much more training time; however, they share comparable testing times. As the training can be done offline, the testing time is more important in measuring the efficiency of the methods. From the results, the testing time of the proposed method is comparable to the others, except for the kernel method, which performs much worse in terms of running time.
Comparison With RGB-based Methods
We also compared three recent RGB-based spectral reconstruction methods, namely, polynomial methods [18,26], the RBF (radial basis function) network method [17], and the Gaussian process method [20]. Sparse coding [1] and deep learning [19] methods require many more training samples and are therefore not included in this work. In this set of experiments, cotton samples are used for training while polyester samples are used for testing. The data are first transformed to RGB with D65 as the light source and the 1964 observer. The mean color difference is listed in Table 3. From the results, the average color difference using RGB-based methods is significantly higher than the values in Figure 5, which are in the range of 0.4 to 0.9. However, it should be noted that the comparison is not on equal footing, as the results in Figure 5 are obtained using 16-channel data as input, which contains much more information than the 3-channel RGB data. The results in this subsection show that recently proposed RGB-based methods cannot be directly applied to reflectance reconstruction tasks with stringent color difference requirements.
Discussion
This section discusses the reasons for the superior results of the proposed method (i.e., L1 penalization). The advantages of L1 penalization are that (1) it can alleviate the overfitting problem and (2) its sparsity characteristic is well suited to the underlying spectral reconstruction task.
Overfitting Problem
Overfitting refers to the problem that a prediction model works only on the training data but not on the testing data [27]; as a result, the model is not generalizable (i.e., it works poorly on unseen data). This problem arises from biased training data (e.g., using only one particular material for training) and the lack of a penalization term in the model. To understand whether the training data are biased, the feature vectors of the four materials used in the experiments are investigated. Each material used in the experiments has its specific reflectance properties [28], as shown in Figure 9, which plots the first four feature vectors of the four materials. From the figure, one can see that the samples differ considerably, especially the paper samples. When using traditional methods with no penalization term to reconstruct the spectral reflectance, the prediction model tends to fit the (biased) training data only. The L1 penalization method can prevent the model from being dominated by the training data, and thus makes it more generalizable.

Figure 9. The first 4 feature vectors of the 4 kinds of samples (cotton, polyester, nylon, and paper).
Sparsity Characteristic
Sparsity refers to the characteristic of data such that most of the entries (in a vector or matrix) have a value of zero. If the underlying problem (i.e., reflectance reconstruction) possesses the sparsity characteristic, a prediction model that can accommodate sparse data will perform better than those that cannot. To show that the reflectance reconstruction problem is indeed sparse, Figure 10 shows the correlation between the response (16-channel input) and the reflectance (31-channel output) in the form of a heatmap. Sub-figure (a) shows the correlation for the proposed L1-norm penalization method, (b) shows the L2-norm penalization method, and (c) shows the pseudo-inverse method. From the figure, one can see that the correlation is constrained to be zero (yellow color) for most of the entries except near the diagonal. As we use narrow-band filters in our system, it is intuitive that each target value (of the 31-channel output) is mostly affected by its neighboring channels only, not by channels far away. All 16 filters are shown in Figure 1. Taking the 450 nm filter as an example, the filter blocks most of the light in the spectral domain, passing only light from 430 nm to 470 nm. So if we reconstruct the reflectance around 450 nm in our target results, it should have a high correlation only with the responses in the 430 nm-470 nm range and little to no association with the responses from the 600 nm range and beyond, which are too far away. By adopting the L1 penalization, the weights of these unrelated, distant channels can be reduced to 0, as shown in the highlighted blue area in Figure 10. However, for the L2-norm penalization (L2) and pseudo-inverse (PI) methods, there are still non-zero correlations (in light red colors), which impact their results.
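The sparsity pattern shown in Figure 10 can be visualized for any fitted W; the following matplotlib sketch uses a placeholder matrix rather than the fitted matrices from our experiments, and the colormap choice is ours.

```python
# Visualize the magnitude of a fitted transform matrix W (31 x 16) as a heatmap,
# analogous to Figure 10; W here would come from the L1, L2, or PI fit.
import numpy as np
import matplotlib.pyplot as plt

W = np.random.default_rng(4).normal(size=(31, 16))   # placeholder for a fitted W
W[np.abs(W) < 1.2] = 0.0                             # mimic L1-induced sparsity

plt.imshow(np.abs(W), aspect="auto", cmap="viridis")
plt.xlabel("response channel")
plt.ylabel("reflectance channel")
plt.colorbar(label="|W| entry magnitude")
plt.title("Sparsity pattern of the transform matrix")
plt.savefig("w_sparsity.png", dpi=150)
```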
This sparsity characteristic can also be verified in Figure 11: the reflectance curves measured by the spectrophotometer and the response curves captured by the MIS share a similar shape. This confirms that each point on the reflectance curve should be reconstructed from its neighboring channels only. Given the sparsity characteristic of reflectance, the superior performance of the L1 penalization method is reasonable and understandable: it suppresses the noise introduced by similar textures and focuses on the information actually captured by the MIS.
Material Dependence
From the results of Figures 5 and 6, when polyester and cotton are used as training samples, the results outperform the others. That is, even though L1 penalization improves generalizability, the results are still sensitive to the training set. To verify this, we compare the similarity between the training set (one material, 204 samples) and the testing sets (the remaining 3 materials, 612 samples) by the KL divergence of Equation (19):
$$D_{\rm KL}(P\,\|\,Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}. \qquad (19)$$
$P(x)$ and $Q(x)$ here are the distributions of the $L$ values (from the L*a*b* color space) in the training and testing sets, respectively. The results are shown in Table 4. The distances when polyester and cotton are used as training samples are smaller than those when paper and nylon are used, and the corresponding mean color differences are also smaller.
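The KL divergence of Equation (19) can be estimated from histograms of the L values; a sketch follows, in which the bin count, smoothing constant, and synthetic data are our own choices.

```python
# Estimate the KL divergence between the L-value distributions of a training
# set and a testing set, using histograms (number of bins is our choice).
import numpy as np

def kl_divergence(train_L, test_L, bins=20, eps=1e-12):
    lo = min(train_L.min(), test_L.min())
    hi = max(train_L.max(), test_L.max())
    p, _ = np.histogram(train_L, bins=bins, range=(lo, hi))
    q, _ = np.histogram(test_L, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps           # normalize; eps avoids log(0)
    q = q / q.sum() + eps
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(5)
train_L = rng.normal(60, 15, 204)   # hypothetical L values, one material
test_L = rng.normal(55, 20, 612)    # hypothetical L values, remaining materials
print(f"D_KL = {kl_divergence(train_L, test_L):.3f}")
```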
Conclusions and Future Work
We propose an L1-norm method for reflectance reconstruction; under certain practical conditions (when the testing texture is unavailable in the training samples), the accuracy of the reconstructed reflectance is higher than that of conventional methods such as the pseudo-inverse and Wiener estimation methods. Note that our study is mainly focused on color measurement; therefore, other metrics such as shape-distance sensitivity are not included.
In this paper, we also find a very interesting phenomenon: while we optimize in the spectral domain, the results of the proposed method are better in the color domain. This does not affect the practical application of the proposed method, because color difference is measured mainly in the color domain. This phenomenon can be investigated in future work. Data Availability Statement: All data, models, or codes that support the findings of this study are available from the link: https://github.com/tendence/refl_L1. | 2023-01-12T16:32:59.772Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "430ca68a50762df06579b3acca1bb855f93ac6e8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/2/689/pdf?version=1673428220",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb15bc757a4d41a9bbf51dd5ac126af3d37b7cf2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
117870418 | pes2o/s2orc | v3-fos-license | Reheating, Dark Matter and Baryon Asymmetry: a Triple Coincidence in Inflationary Models
A scenario in which primeval black holes (PBHs) form at the end of an extended inflationary period is capable of producing, via Hawking radiation, the observed entropy, as well as the observed dark matter density in the form of Planck mass relics. The observed net baryon asymmetry is produced by sphaleron processes in the domain wall surrounding the PBHs as they evaporate around the electroweak transition epoch. The conditions required to satisfy these three observables determine the PBH formation epoch, which can be associated with the end of inflation, at $t\sim 10^{-32}$ s.
I. INTRODUCTION
The possibility of primeval black hole (PBH) formation in the early Universe has been known for a long time [1,2]. The realization that quantum effects lead to their evaporation [3] led to investigations of possible constraints on the PBH mass spectrum from their consequences for various astrophysical backgrounds [4,7] and their possible dynamical role as dark matter [5]. The interplay between PBHs as a source of CDM affecting cosmological dynamics, and their evaporation as a source of entropy and particles [6] affecting nucleosynthesis and CMB observations, leads to constraints on their mass spectrum over a wide mass range [8,9]. Since black hole evaporation manifestly violates T-invariance, CP must be violated as well, so that baryon number is not conserved [11]; this led to the suggestion, in the context of GUTs, that the baryon asymmetry of the Universe could be thereby explained [12,13]. In the context of inflationary models, PBH formation can be triggered by large amplitude inhomogeneities caused by bubble nucleation over scales comparable to the horizon, which can collapse into black holes at the end of an extended inflationary period [15].
The evaporation of PBHs with mass less than $\sim 10^{15}$ g, on timescales less than a Hubble time [3], can lead either to complete evaporation, or may stop at a Planck scale mass $m_{Pl} \sim 10^{-5}$ g [16]. Much work has been done since then on the possibility of PBH formation at phase transitions, on the dynamics of their collapse and on their possible role in large scale structure formation, e.g. [17,18] and references therein.
In what follows, we concentrate on the potential of PBHs formed at the end of extended inflation in providing a mechanism for the production of the current observed entropy per baryon, the inferred dark matter density, and as a possible source for the baryon asymmetry in the Universe. We emphasize a remarkable triple coincidence for the conditions required to produce these three quantities, pointing towards a well defined epoch for PBH formation around $t \sim 10^{-32}$ s, which can be identified with the end of inflation.
In this model, the absolute entropy of the universe is given by the entropy of a gas of standard model particles at the initial temperature $T_{BH} \sim 300$ GeV, produced by PBHs created at $t \sim 10^{-32}$ s which evaporate at $t_{BH} \sim 10^{-12}$ s, identified with the reheating time. This same PBH evaporation leads also to a dark matter component, assumed to consist of approximately Planck mass remnants with a mass density approximately equal to the PBH density times the Planck mass. The net baryon density and asymmetry are also related to the PBH evaporation, through sphaleron processes in a domain wall structure surrounding the PBHs. The presently observed entropy, dark matter density and net baryon to entropy ratio are obtained for a unique value of the reheating time $t_{BH} \sim 10^{-12}$ s, which coincides with the electroweak timescale, and which defines a unique time for the end of inflation at $t_{end} \sim 10^{-32}$ s.
II. PBHS, REHEATING AND ENTROPY
At the Planck time $t_{Pl} = (\hbar G/c^5)^{1/2} \sim 10^{-43.3}$ s the Planck mass $m_{Pl} = \hbar/(t_{Pl}c^2) \sim (\hbar c/G)^{1/2} \sim 10^{-4.7}$ g, corresponding to the Planck energy $E_{Pl} \sim 10^{19.05}$ GeV, is within a particle horizon whose size is the Planck length $\ell_{Pl} \sim c t_{Pl} \sim 10^{-32.83}$ cm. Any PBHs formed before or during inflation would have had their energy density diluted by the exponential expansion of the scale factor to a negligible value, so that PBH formation is of interest mainly after the end of inflation, at $t \gtrsim t_{end}$. The difficulties of the original inflation model are resolved most simply in models of extended or hyperextended inflation [19], which is generally taken to end around an epoch $t_{end} \sim 10^{-32\pm6}$ s, at which time the energy scale has dropped to $E \sim \rho^{1/4} \sim 10^{13\pm3}$ GeV. At this time the Universe is cold, due to adiabatic cooling during the expansion of the scale factor by sixty or more e-foldings, so the pressure is essentially zero, and the equation of state is correspondingly soft. Primordial energy density fluctuations coming into the horizon at $t \sim t_{end}$ may be of the canonical inflationary (Harrison-Zeldovich) type, with relative amplitudes $\delta_{end} \equiv (\delta\rho/\rho)_{t_{end}} \sim 10^{-4}$, and/or may be large amplitude fluctuations caused by chaotic conditions associated with bubble nucleation at $t_{end}$, where one may expect $\delta_{end} \sim 1$, e.g. [15]. The latter fluctuations can cause PBHs to form almost immediately at $t_1 \sim t_{end}$, with a mass $M_{BH}$ which is a fraction $\eta \lesssim 1$ of the mass in the horizon $M_{hor} \sim m_{Pl}(t/t_{Pl})$ at that time,
$$M_{BH}(t_1) \simeq \eta\, m_{Pl}\,(t_1/t_{Pl}) \simeq 10^{6.6}\,\eta\, t_{1,-32}\ \mathrm{g}, \qquad (1)$$
or $M_{BH} \simeq 10^{30.3}\,\eta\, t_{1,-32}$ GeV, where $t_{1,-32} = (t_1/10^{-32}\,\mathrm{s})$. The temperature associated with a black hole of mass $M_{BH}$ is
$$T_{BH} = (m_{Pl}^2/8\pi M_{BH}) = 10^{6.4}\, M_{6.6}^{-1}\ \mathrm{GeV} = 10^{6.4}\,\eta^{-1} t_{1,-32}^{-1}\ \mathrm{GeV}, \qquad (2)$$
where $M_{6.6} = M_{BH}/10^{6.6}\,\mathrm{g}$. In the standard treatment, these PBHs evaporate on a timescale
$$t_{BH} \sim g_*^{-1}\,(M_{BH}/m_{Pl})^3\, t_{Pl} \sim 10^{-11.4}\, g_{*,2}^{-1}\, M_{6.6}^3\ \mathrm{s}, \qquad (3)$$
where $g_* = 10^2 g_{*,2} \simeq 106.75$ is the number of degrees of freedom in the early universe for the standard model [10].
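As a quick numerical check, the order-of-magnitude relations in equations (1)-(3) can be evaluated directly. The script below is only a back-of-the-envelope sketch: the prefactor of equation (3) is anchored to the quoted value $t_{BH} \sim 10^{-11.4}$ s rather than to a detailed greybody calculation.

```python
# Back-of-the-envelope check of the scalings in eqs. (1)-(3); values are
# order-of-magnitude only, and the prefactor of eq. (3) is anchored to the
# quoted t_BH ~ 10^-11.4 s rather than to a detailed greybody calculation.
import math

t_Pl = 10**-43.3      # Planck time [s]
m_Pl = 10**-4.7       # Planck mass [g]
g_star = 106.75       # standard model relativistic degrees of freedom
eta, t1 = 1.0, 1e-32  # collapse efficiency and formation epoch [s]

M_BH = eta * m_Pl * (t1 / t_Pl)          # eq. (1): PBH mass [g] ~ 10^6.6
T_BH = 10**6.4 / (M_BH / 10**6.6)        # eq. (2): PBH temperature [GeV] ~ 10^6.4
t_BH = (M_BH / m_Pl)**3 * t_Pl / g_star  # eq. (3): evaporation epoch [s] ~ 10^-11.4

for label, value in [("M_BH [g]", M_BH), ("T_BH [GeV]", T_BH), ("t_BH [s]", t_BH)]:
    print(f"{label} ~ 10^{math.log10(value):+.1f}")
```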
Most of the evaporated energy goes into radiated photons and particles. For evaporation times much longer than the epoch of formation, $t_{BH} \gg t_1$, the epoch (age of the Universe) at which PBHs of mass $M_{BH}$ evaporate is $t \sim t_{BH}$, as given by equation (3). Thus even if the perturbations coming into the horizon at the end of inflation had only the canonical amplitude $\delta_{end} \sim 10^{-4}$, they grow with the scale factor of the Universe $a$ as $\delta \propto a \propto t^{2/3}$ (for a matter dominated [MD] Universe, if the equation of state is cold). The collapse time $t_{col}$ at which the fluctuations achieve large amplitude $\delta \sim 1$ is much smaller than $t_{BH}$, and for fluidlike perturbations the epoch at which PBHs of mass $M_{BH}$ evaporate is again $t \sim t_{BH}$.
If $\beta(M_{BH})$ is the fraction of the energy density of the Universe which collapses into PBHs of mass $\sim M_{BH}$ at the epoch $t_1$, the radiation produced by the PBHs at the evaporation time $t_{BH}$, after having relaxed with the environment, results in a specific entropy per baryon [9], where $S_i$ is the initial entropy per baryon before PBH evaporation, assuming $\beta \ll 1$. This can be used to set constraints on the fraction $\beta$ of PBHs of mass $M$, and can also contribute to producing some or possibly most of the entropy of the universe. Generalizing this argument to an inflationary scenario [15], with $S_i \simeq 0$ as expected from adiabatic cooling at $t_1 \sim t_{end}$, and assuming that the PBH mass is a fraction $\eta M_{hor}$ of the mass in the horizon at the collapse time $t_1$, the initial energy density in the PBH component is the fraction $\beta$ of the total, while the remaining fraction $(1-\beta)$ goes into relativistic particles or radiation, $\rho_R(t_1)$. For plausible values of $\eta \lesssim 1$, $10^{-10} \lesssim \beta \lesssim 1$, the universe expansion is initially dominated by radiation, $a \propto t^{1/2}$, but after a time, when the PBH component comes to dominate [15], $a \propto t^{2/3}$. The PBHs evaporate at $t \simeq t_{BH} \gg t_1$, injecting into the universe a radiation energy density which can be re-expressed as a function of the initial mass of the PBHs that evaporate at $t_{BH}$. This newly injected radiation component is much larger than the diluted radiation produced at time $t_1$, and it provides henceforth the dominant energy form in the universe, which again expands as $a \propto t^{1/2}$ (until $t_{eq}$). At this time the universe acquires an entropy density $s(t_{BH}) = (2\pi^2/45)\, g_*\, T(t_{BH})^3$, being reheated to a temperature $T(t_{BH})$. Taking the standard value for the matter-radiation equality epoch $a_{eq} = 4.3\times10^{-5}(\Omega_{M,0}h^2)^{-1}$, the PBH evaporation epoch corresponds to $a_{BH} = 4.1\times10^{-16}\, M_{6.6}^{3/2}$, and from the entropy scaling $T^3 a^3 g_* = \mathrm{const}$ with $g_{*,BH} \simeq 106$ and $g_{*,0} = 3.9$ one obtains a present day radiation temperature $T_0 \simeq 3.0\times10^{-4}(1-\beta)^{1/4}$ eV, close to the observed value of $2.5\times10^{-4}$ eV (see Fig. 1). Notice that if one were only trying to explain the current entropy or the current radiation temperature, one could in principle also satisfy this with, e.g., earlier evaporation times or higher $T_{BH}$ values. However, if in addition one demands that the PBH evaporation should also lead to the currently observed dark matter density, the evaporation time becomes determined, as discussed below.
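The present-day temperature estimate can be reproduced with the same scalings. The sketch below assumes the reheating temperature $T(t_{BH}) \sim 300$ GeV quoted elsewhere in the text and recovers $T_0$ to the order-of-magnitude precision used here.

```python
# Sketch of the entropy-scaling estimate of T0; assumes the reheating
# temperature T(t_BH) ~ 300 GeV quoted in the text and a0 = 1.
T_reheat_eV = 300.0e9          # T(t_BH) in eV
a_BH = 4.1e-16                 # scale factor at PBH evaporation (M_6.6 = 1)
g_BH, g_0 = 106.0, 3.9         # entropy degrees of freedom then and now

# T^3 a^3 g_* = const  =>  T0 = T(t_BH) * a_BH * (g_BH / g_0)**(1/3)
T0_eV = T_reheat_eV * a_BH * (g_BH / g_0) ** (1.0 / 3.0)
print(f"T0 ~ {T0_eV:.1e} eV (text: 3.0e-4, observed: 2.5e-4)")
```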
III. DARK MATTER
The evaporation timescale (3) remains approximately the same even if mass loss stops after the PBH has shrunk down to a Planck mass $\sim 10^{-5}$ g, since by this time it will have radiated away most of its energy in the form of photons and particles. The semi-classical quantum evaporation treatment breaks down near $\sim m_{Pl}$, requiring taking quantum gravity effects into account, and several authors have argued that the process leaves behind stable relics of approximately a Planck mass [16]. These would behave as non-relativistic matter, and in order not to exceed limits on the current dark matter density, they imply constraints on the epoch at which they evaporated, and therefore also on the epoch at which they formed [15,18]. At the epoch $t_{BH}$ when they evaporate, each PBH leaves a relic of mass $\kappa m_{Pl}$, giving a relic matter density $\rho_M$. At this time $t_{BH}$ the PBH-contributed radiation density (5) is dominant, and the ratio of radiation (including relativistic particles) density to relic (dark) matter density is $\rho_R/\rho_M \simeq 2\times10^{11}\,\kappa^{-1} M_{6.6} = 2\times10^{11}\,\kappa^{-1}\eta\, t_{1,-32}$. This ratio decreases as $a^{-1}$, and at $t_{eq}$ its value is close to unity, as needed for dark matter. The evaporation at $t_{BH} \sim 10^{-11.4}$ s of PBHs formed at $t \sim 10^{-32}$ s leads therefore to a plausible model for explaining the reheating of the Universe after the end of inflation, leading to the right amount of present day entropy, as well as providing a source for the present day dark matter density. The latter could be in the form of stable Planck mass relics, or possibly stable, weakly interacting decay products of such relics, with the same total mass. This is achieved if (i) the end of inflation occurs at $t_{end} \sim 10^{-32}$ s, (ii) PBH collapse is dominated by fluctuations coming into the horizon at $t_1 \sim t_{end}$, aided by the soft equation of state after the end of inflation, and (iii) the fraction of the energy density of the Universe collapsing into PBHs at that epoch is $10^{-10} \lesssim \beta \lesssim 1$ (see previous section). This range is mostly unconstrained by current observational restrictions on PBH mass spectra [18]. The choice of $t_1 \sim 10^{-32}$ s is then essentially determined, if we want to explain both the current entropy and the current dark matter.
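A similar sketch verifies that the radiation-to-relic ratio quoted above indeed dilutes to order unity by the equality epoch; $\kappa$, $\eta$ and $\Omega_{M,0}h^2$ are set to 1 purely for illustration.

```python
# Sketch of the radiation-to-relic dilution check; kappa, eta and
# Omega_M h^2 are set to 1 for illustration.
kappa, M66 = 1.0, 1.0          # relic mass in Planck units; M_BH / 10^6.6 g
ratio_BH = 2e11 * M66 / kappa  # rho_R / rho_M at evaporation (from the text)
a_BH = 4.1e-16 * M66**1.5      # scale factor at evaporation
a_eq = 4.3e-5                  # matter-radiation equality (Omega_M0 h^2 = 1)

ratio_eq = ratio_BH * a_BH / a_eq  # the ratio dilutes as 1/a
print(f"rho_R/rho_M at equality ~ {ratio_eq:.1f}")  # close to unity
```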
IV. BARYON ASYMMETRY
It is generally thought that the baryon asymmetry must have been generated by the epoch at which the Universe cooled below the electroweak energy scale $T_{ew} \sim 300\, T_{ew,300}$ GeV [13,15]. This corresponds, extrapolating back from the present epoch, to a scale factor $a_{ew} \sim 2.6\times10^{-16}\, T_{ew,300}^{-1}$ and an epoch $t_{ew} \sim 10^{-11.9}(\Omega_{M,0}h^2)^{-2}\, T_{ew,300}^{-2}$ s. This electroweak transition epoch $t_{ew}$ is, numerically, essentially the same as the evaporation epoch $t_{BH} \sim 10^{-11.4}$ s defined in equation (3), for PBHs of mass $M_{BH} \sim 10^{6.6}$ g [eq. (1)] which formed at the epoch $t_1 \simeq 10^{-32}$ s.
The coincidence between the PBH evaporation timescale $t_{BH}$ and the electroweak timescale $t_{ew}$ is remarkable. The first is derived from a requirement to explain the reheating and the observed entropy/DM ratio starting from an inflationary early universe scenario, while the second is determined from a particle physics energy scale and the more recent dynamics of the Universe. The fact that the Universe is baryon asymmetric provides, in fact, an additional constraint on the epoch $t_{BH}$ at which PBHs evaporate, if the baryon asymmetry arises from baryonic decay products of PBH evaporation, which is a manifestly CP-violating process.
The requirement that any net baryon number $n_B = n_b - n_{\bar b}$ produced in PBH evaporation should not be washed out by CP-violating currents expected at the electroweak transition epoch $t_{ew}$ imposes the requirement $t_{BH} \gtrsim t_{ew}$. This additional requirement is in fact satisfied by the entropy to dark matter constraint (9), requiring $t_{BH} \sim 10^{-11.4}$ s, which in turn constrains, through equations (1) and (3), the horizon entrance time of the PBH perturbations to be $t_1 \simeq 10^{-32}$ s. In addition, in order for the energy density of these PBHs not to be diluted to a negligible value, they must be born after the end of inflation, $t_1 \gtrsim t_{end}$. The double constraint on $t_1 \simeq 10^{-32}$ s from the reheating/entropy/DM requirement on the one hand, and on $t_{BH} \gtrsim t_{ew}$ to preserve any baryon asymmetry on the other hand, in turn constrains the end of inflation to occur at $t_{end} \simeq t_1 \simeq 10^{-32}$ s. In fact, either one of the previous two constraints acting individually (as long as PBHs are responsible either for the entropy/dark matter ratio or the baryon asymmetry) is enough to require $t_{end} \simeq 10^{-32}$ s. The fact that both independent constraints acting simultaneously require the same value of $t_{end}$ is again remarkable.
The baryogenesis mechanism of [14] included mechanisms occurring at GUT temperatures, assuming that PBHs with $T_{BH} > 10^{14}$ GeV radiate bosons which decay into a net baryon number. (A different PBH baryogenesis mechanism in ekpyrotic/cyclic models [22] has been discussed by [23].) However, any GUT scale baryon asymmetry can be washed out by sphaleron processes during the electroweak phase transition. Sphalerons are non-trivial topological field configurations which generate a net $B+L$ number (while conserving $B-L$). It was shown by Cohen, Kaplan and Nelson [21] that it is possible to use the electroweak sphaleron (instantons) to generate baryon asymmetry through the well known ABJ anomaly equation
$$\partial_\mu j_B^\mu = \frac{N_f}{32\pi^2}\left(g^2 W^a_{\mu\nu}\tilde{W}^{a\,\mu\nu} - g'^2 B_{\mu\nu}\tilde{B}^{\mu\nu}\right), \qquad (10)$$
where $N_f$ is the number of families, $W_{\mu\nu}$ is the weak field strength, $B_{\mu\nu}$ is the hypercharge field strength and $g$ and $g'$ are the gauge couplings. The CKN mechanism states that baryogenesis can be spontaneous, in the sense that a derivative coupling between a scalar field $\phi$ and the baryon number current is induced in general; substituting from eq. (10) then gives the net asymmetry. However, the coincidence of PBH formation during and before the electroweak phase transition gives us a clue as to the origin of this field $\phi$. If the field $\phi$ is associated with the phase of the Higgs field, we may be able to naturally generate the baryon asymmetry. Indeed such a mechanism was made concrete by Nagatani [24]. In this mechanism the Higgs field forms a spherical domain wall around the PBH due to spontaneous electroweak symmetry breaking. The gradient in the domain wall is the CP violating phase, which also acts as the chemical potential to generate the net baryon asymmetry through sphaleron processes. The domain wall configuration is expressed through a profile function $f(r) = 1 - (T(r)/T_{weak})^2$, where $T(r)$ is the local temperature measured at a radius $r$ from the black hole center. This temperature gradient is determined by the radiation energy density gradient produced by the radiation outflow from the black hole.
In this configuration of the Higgs vacuum expectation value, the width of the domain wall $d_{DW}$ is equal to the depth of the symmetric region. The Hawking radiation (particles) emanating from the black hole traverses this domain wall, and the energy gradient in the wall induces a sphaleron transition which creates a net baryon number from the Hawking radiation [28]. The net baryon number $n_B$ resulting from this process can be calculated directly [24], and for PBH temperatures $T_{BH} \sim 10^{6.5}$-$10^{7.5}$ GeV the resulting ratio of net baryon number to entropy (where the latter is as calculated in the previous section) is $n_B/s \simeq 10^{-10}$, satisfying the BBN constraints. Remarkably, as shown in the previous two sections, this temperature is essentially the same temperature (2) corresponding to PBHs formed at $t_1 \sim 10^{-32}$ s and evaporating at $t_{BH} \sim 4\times10^{-12}\,\mathrm{s} \sim t_{ew}$, which can produce both the observed entropy and the observed dark matter density. These same PBHs can therefore also produce the right net baryon number and the entropy per baryon of the universe.
V. DISCUSSION
The possibility that three major observational parameters of the universe, namely the entropy density, the dark matter density and the net baryon to entropy ratio, may be simultaneously explained by a single mechanism is remarkable. In this scenario the reheating and the entropy are produced by the evaporation of primordial black holes. If, as has been widely surmised, these leave relics whose mass is of order the Planck mass per evaporating black hole, these can provide the dark matter density. For one or both of the above to come out right, the PBH mass must be in the ton range ($1\times10^3$ kg). A newer element, in addition to the above, is that the difference between the PBH temperature and the temperature of the universe provides a temperature gradient, through which evaporating particles can undergo CP-violating transitions leading to a net baryon number. A specific PBH evaporation domain wall mechanism can give the observed net baryon number and baryon to entropy ratio, in agreement with BBN constraints, when the PBH temperature is in the PeV range, corresponding again to the ton mass range.
The entropy, by itself, could in principle be produced by a range of PBH collapse times $t_1 \lesssim 10^{-32}$ s. However, the additional requirement of relating also the dark matter density, the net baryon number, or both, to the evaporation process narrows the PBH formation time to the epoch $t_1 \sim 10^{-32}$ s. Since PBHs with a significant energy density must arise at or after this epoch, this can be identified with the end of inflation. This same epoch occurs independently in hyperextended models of inflation where the non-minimal coupling is something other than quadratic in the scalar field, leading also to a prediction [25] of a gravitational wave background.
The PBH formation epoch $t_1 \sim t_{end} \sim 10^{-32}$ s also determines the reheating temperature $T \sim 300$ GeV, caused by the PBH evaporation at the epoch $t_{BH} \sim 10^{-12}\,\mathrm{s} \sim t_{ew}$. The triple coincidence discussed here, based on the physics of PBH evaporation, provides a strong incentive for identifying PBHs as responsible for three of the key parameters of cosmological models, namely the current entropy, the dark matter, and the net baryon asymmetry with the right baryon to entropy ratio. It also provides an upper limit for the end of inflation at the epoch $t_{end} \sim 10^{-32}$ s. | 2019-04-14T02:55:34.584Z | 2007-03-08T00:00:00.000 | {
"year": 2007,
"sha1": "5fae57fcbdadf3bc15fffc31a1d73b9722497452",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5fae57fcbdadf3bc15fffc31a1d73b9722497452",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56477557 | pes2o/s2orc | v3-fos-license | Vitamin D status during pregnancy: The importance of getting it right
In this issue of EBioMedicine Enkhmaa et al. [1] conducted a randomized clinical trial (RCT) that demonstrates the amount of vitamin D that must be supplemented to achieve adequate vitamin D status in a vitamin D deficient pregnant population. As had been demonstrated in similar populations [2,3], the amount of vitamin D to be supplemented far exceeds the amount currently recommended by the National Academy of Medicine [4].
The amount of vitamin D supplemented during pregnancy has long been a contentious subject due to the false association of vitamin D being a teratogen leading to Williams Syndrome [5]. This association, which is now known to be false, has caused generations of obstetricians who fear vitamin D [6]. The current paper as well as those before it have clearly demonstrated vitamin D to be safe at up to 4000 IU/d [1-3,6-9]. In all of these studies, NOT A SINGLE ADVERSE EVENT has been associated with vitamin D supplementation [1-3,6-9]. Thus, it is now known that this level of vitamin D can be safely administered during pregnancy. However, what would be the advantage of doing so? A lack of vitamin D has long been known to have adverse effects on skeletal development [6]. In fact, we used vitamin D and its ability to improve calcium homeostasis as the primary specific aim to obtain funding from NIH to conduct a clinical trial in 2001 [6]. Ultimately, this trial was funded, but not before we obtained investigational drug approval from the FDA to study the administration of up to 4000 IU/d during pregnancy [6]. As stated earlier, our primary goals were to determine how much vitamin D was required to achieve adequate vitamin D status, as defined by circulating 25(OH)D, and the effects it would have on skeletal homeostasis. Of course, another objective of the study was to prove that this amount of vitamin D during pregnancy was safe. What this study ultimately showed us is that vitamin D supplementation during pregnancy could reduce birth complications. Now this benefit was not a primary end point of this original study because in 2001 we did not know enough to even ask the question.
When the prior study was completed and the results presented at a vitamin D workshop in Brugge, Belgium, to put it simply, nobody believed the data nor the positive outcomes on birth complications. However, the results spurred a large field of investigation that led to many more studies along this avenue [2-4,7-10], including the current paper [1]. Most of these clinical trials have validated the positive effects of prenatal vitamin D on birth outcomes [3,7-9], while some have not [4]. There are major differences in these trials with respect to when vitamin D was administered during pregnancy and the amount administered. Studies that have been negative have not supplemented adequate amounts of vitamin D and/or administered vitamin D too late in the pregnancy cycle [4]. Recent studies have demonstrated that vitamin D needs to be administered as early in the pregnancy as possible. In fact, it appears to be critical to provide vitamin D in the preconception period to ward off preeclampsia and/or preterm birth [8-10]. This is likely due to the fact that vitamin D is essential in the first trimester of pregnancy to ensure proper placental development, and lung development during this important period, to prevent childhood asthma [7]. Providing vitamin D after this critical developmental period appears not to correct these developmental deficiencies [9], hence the failure of some clinical trials [4]. The problem is that the preconception studies have only been observational. Ideally, these preconception studies need to undergo RCT; however, this may never happen because of the expense such a study would incur. It is possible therefore that we will be forced to make decisions on this matter using only observational data.
We remain at a crossroads with respect to using vitamin D during pregnancy to improve the birth complication rate. Naysayers continue to ask for more studies on the basis of safety concerns even though not a single adverse event has been observed in previous studies. Conversely, many potential benefits have been observed in these studies. The fact remains that vitamin D is the only substance shown to decrease preeclampsia rates and subsequent preterm birth. If vitamin D were a pharmaceutical it would be worth billions of dollars, and that is probably a major factor in the non-acceptance of vitamin D for this purpose: it competes with pharmaceutical products while being essentially free.
Disclosure
The author declared no conflicts of interest. | 2018-12-20T14:03:07.365Z | 2018-12-15T00:00:00.000 | {
"year": 2018,
"sha1": "eb269154d8410dc6aabef08a1ea68fe2d2617752",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thelancet.com/article/S235239641830598X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb269154d8410dc6aabef08a1ea68fe2d2617752",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216454029 | pes2o/s2orc | v3-fos-license | Overtourism and Medium Scale Sporting Events Organisations—the Perception of Negative Externalities by Host Residents
The main purpose of this study is to investigate the influence of non-mega sporting events on the perception of negative externalities by host residents. The detailed aim of the study was to examine whether the inhabitants of the city feel the negative effects of organizing sporting events (communication problems or inappropriate behavior of supporters), whether they believe that these events increase the level of crime in the city, and whether, despite these inconveniences, they are satisfied with the organization of sporting events in their place of residence. The case study covers the city of Poznan and two well-known events in this agglomeration. The first one is the Poznan Half Marathon, a medium scale mass event; the second one is Cavaliada, an elite international equestrian event. The theoretical part of this article presents the significance of sporting event organization for the tourism industry and indicates the positive and negative effects this kind of tourism brings to host cities. The discussion refers to the theoretical foundations of the term "overtourism". The second part of the manuscript presents the results of empirical research among 774 active and passive participants, conducted using the method of a diagnostic survey. The results of this research show that both athletes and fans of the Half Marathon said that the Poznan Half Marathon causes bothersome communication problems in the city and some other social problems. The inhabitants experience only minor inconveniences as a result of the organization of sporting events in the city. The negative impact of Cavaliada was very low. To check the differences between the two examined groups of respondents, Half Marathon fans and Cavaliada fans, the Chi-square test and the Mann-Whitney U test were used. Half Marathon participants feel the bothersome communication problems caused by the event, and their average level of dissatisfaction is higher than that of Cavaliada participants. Moreover, the participants in the Half Marathon have an average level of satisfaction with the organization of sports events in Poznan significantly lower than that of Cavaliada participants. Therefore, an elite equestrian sporting event is less burdensome for residents and gives them more satisfaction.
Negative Implications of Sporting Events and Overtourism
Sports tourism is a deeply interdisciplinary phenomenon. As mentioned above, it affects both residents and tourists in the economic, ecological, social and cultural dimensions. It also has positive and negative impacts on psychological, institutional, political and planning levels. However, the larger the event, the greater the impact. Weed and Bull [1] claim that three key components interact to create value in sports tourism: the places involved, the activities undertaken, and the motivations to participate. Places where tourists stay and their activities have specific characteristics (beautiful scenery, attractive landscapes, monuments, etc.) which are a subject of various interpretations [2-4]. On the other hand, the number of tourists and their activities can destroy sociocultural, physical or economic resources and reduce the quality of tourists' satisfaction. That is why determining and respecting the carrying capacity of destinations becomes a necessity in tourism planning [5,6]. Popular tourist destinations around the world have reached a tourism tipping point. To describe these tourism disturbances (not only in sports tourism), terms such as overtourism, tourism phobia, overcrowded locations or visitor pressure have rapidly been popularized [7-11]. The perception of city tourism has changed dramatically. Destinations are being saturated with visitors. The critical point begins where there is an imbalance between the perception of positive and negative effects of tourism for the inhabitants [11,12]. Infrastructure like roads, public transportation, cultural attractions and other services, which were created primarily for local use, suffers under the increasing number of tourists. The growing popularity of transport services and online accommodation, and a desire to see authentic, everyday city life, have meant that tourism activities become further intertwined with local life, also outside of the main tourist areas in cities [13,14]. Such developments have led to calls from residents to deal with tourism growth, and protests have been observed, for example, in popular destinations like Venice or Barcelona. Although this problem is most evident in European cities, similar sentiments have also been reported elsewhere, such as tropical islands, backpacker ghettos or even slums [13,15]. Despite its growing popularity, "overtourism" is still not clearly defined [13]. The term describes destinations where hosts or guests, locals or visitors, feel that the quality of life and the quality of the experience in the area has deteriorated unacceptably. Moreover, "overtourism" describes the situation in which the impact of tourism exceeds ecological, physical, social, psychological, political and economic capacity thresholds, causes the loss of authenticity and implies a significant risk to the future attractiveness of a destination. There are many examples where the cultural and natural heritage of a place is at risk or where costs of living and real estate have substantially increased and caused a decline in quality of life. It is the opposite of responsible tourism, which is about using tourism to make better places to live in and better places to visit [16]. The uncontrolled development of tourism can cause significant damage to air and water quality, landscapes, seascapes, as well as the living conditions of residents, causing economic inequalities and social exclusion, as well as many other issues [17].
Dissatisfaction with overtourism on the part of local residents might mobilize forces to prevent tourism from developing and increasing at its destination. The dissatisfaction of visitors can reduce the number of visits to the destination, thus harming its economic sustainability [11].
Sports tourism does not always manifest itself in the mass movement of large numbers of tourists. Many authors (Hautbois, Djaballah, Desbordes; Hall; Hallmann; Barclay; Taks; Lee; Taylor; Preuss; Kim, Jun, Walker, Drane) have studied the influence of mega sporting events on all the above-mentioned planes of human life, both for residents of reception areas and for passive participants of sporting events, fans and athletes. They describe sports event organization as a mechanism used to tackle social problems. Of course, sports tourism contributes significantly to the development of society [18-26]. Unfortunately, to the authors' knowledge, there are no in-depth studies that would indicate a significant negative impact of organizing non-mega sporting events on the environment, the social life of residents or the culture of the place. Residents are an important part of the success of a sporting event and their opinion is significant, even for a small sporting venture. However, according to Kim and Petrik or Ohmann [27-29], there is a certain degree of inconsistency in the use of residents' perceptions to measure the impact of an event, because residents' views are inherently subjective. Jönsson [30], moreover, questions the credibility of local opinions in the field of social impact assessment: he finds them difficult to investigate because perceptions may change over time. Performing a longitudinal study would allow assessment over a period of time, thus recording any changes in residents' perceptions. It is also important, for the commercial success of any touristic region, to monitor the satisfaction of visitors, or so-called tourist social ability [6].
The Benefits of Sporting Events Organisation
Sports tourism is not only a sum of sport and tourism. It is a complex phenomenon, similar to and different from sport and tourism individually. It is multifaceted and exists under a variety of forms and names [1,31-34]. Sports tourists travel to observe sporting activities, to participate in sport and to visit sports attractions (stadiums, sports museums, recreation areas, etc.). Depending on active or passive participation and motivational factors, sports tourists encounter different experiences as the ultimate value they are seeking [35-38]. Both mega and small scale sports tourism have the potential to contribute to the social, cultural, economic and infrastructural development of the host country or city. Visitors generate tremendous activity through different forms of expenditure on sporting and non-sporting activities. Cities provide them with a number of multifunctional, complex, multiuser environments, able to simultaneously receive not only domestic and international tourists but also business tourists and people visiting friends and relatives (VFR) [13].
The organization of sporting events is widely recognized as a method of promoting touristic cities and addressing seasonality in destinations [39,40]. Such events are the most obvious manifestation of sporting activities, gathering two groups of participants: competitors and spectators. Events usually offer a lot of entertainment opportunities for residents and visitors [41] and are seen as one of the most sustainable economic growth strategies for cities, a driver for economic recovery of great value [42]. The fact that cities tend to have good infrastructure facilities and already host a diverse and dynamic population suggests that they will cope better with increasing tourist numbers than other, well-known destinations [13]. Most of the literature holds that the relevance of sporting events stems directly from their impact on local, regional and national economies [43-45], and distinguishes between the economic, sociocultural and environmental impacts of sporting events [46-50]. They stimulate the dynamic development of tourism in cities. The phenomenon is very wide, and many researchers are trying to carry out scientific analyses to check whether the positive or the negative effects of organizing a sports event in a given tourist destination prevail [51-53]. According to Hall [19], the impact of sports tourism includes changes in the value systems of individuals, local communities or entire societies caused by sports travel, changes in the behavior of tourists and the local population, and changes in their social structures, lifestyle and quality of life. The effects of a given sporting event are also called its "legacy", that is, what will remain after it, especially for the local community [19,54-56].
The organization of sporting events brings numerous benefits on an economic, social, cultural and environmental level, such as new investments, new employment, increased tourism figures and tax revenues [48,57-62] (Table 1). A positive example is the creation of new material assets: new roads and highways, ultramodern sports stadiums or the development of small sports and tourist infrastructure in smaller towns. We can also observe non-monetary effects, like improvements to a country's or destination's image abroad and among the fans coming to events [20,63], which can translate into tourist attractiveness or the promotion of sport and a healthy lifestyle among citizens. An important aspect is also the organizational competence acquired during the preparation of the event [64,65]. Sporting events can have a positive influence on local residents' quality of life: they can strengthen the belief in the importance of physical activity and in the ability to shape one's own social environment, increase sports participation, enhance social cohesion or generate interest in a foreign culture [25,28,66,67]. Positive environmental impacts can occur only when new sports infrastructure is built on devastated land [58,68]. Table 1. Potential effects of sport tourism events.
Type of Impact: Economic
Positive: financial benefits of organizing major sporting events (supporters' expenses, sports sponsorship, advertisers, etc.); financial benefits resulting from the development of tourism after the event (e.g., thanks to improving the image of the region); development of public as well as tourist and sports infrastructure.
Negative: excessively high costs of organizing large sporting events; crowding out phenomenon; too high investment costs in infrastructure, the problem of "white elephants".

Type of Impact: Sociocultural
Positive: increased sense of national pride; integration of the local community; development or strengthening of regional identity; intercultural communication; diffusion of forms of sports cultures; the impact of sports tourism on the development of different forms of tourism; sports heritage, e.g., development of sports volunteering; the possibility of changing to a healthy lifestyle as a result of observing others; opportunity for entertainment (increase in the level of happiness and quality of life); development of sports museums, potential tourist attractions in the future; the opportunity to present regional cultural heritage.
Negative: difficulties in the normal operations of the local community (traffic jams, congestion, price increases, acts of intentional vandalism, reduced quality of life, etc.); resettlement of the local population; globalization of sport (loss of regional sports cultures due to the domination of others); improper behavior of supporters, e.g., presenting nationalist attitudes; the impact of sports tourism on the development of different forms of tourism, e.g., sex tourism.

Type of Impact: Ecological
Positive: the possibility of implementing sustainable development programs and increasing public awareness (assuming their promotion by competition organizers); revitalization of urban space (parks, health paths, etc.).
Negative: noise; littering of areas of outstanding natural value, sometimes protected areas; transformation of natural areas for sports infrastructure; increased emission of toxic substances.

Source: Malchrowicz-Mośko [8].
Such effects have been observed at a number of Olympic Games. The "Barcelona effect" is worth mentioning here as an example of a positive legacy of sporting events. Due to the excellent organization and promotion of the Olympic Games, the city became recognizable across the whole world as a business center and, above all, a tourist destination. Although the Olympic Games left Barcelona with a deficit, the capital of Catalonia benefited from the event in the long term [64,68]. Over the years this impact has been both positive and negative; specifically, the city's residents experience inconvenience related to the influx of tourists [67,69-72]. Barcelona has clearly become a major urban tourist destination and a cultural tourism center [73,74]. The great event of 1992 led the city to present the many dimensions which make up its personality and at the same time served to modernize them and open them to the future [74]. Currently, the city is the main recipient of international tourism income in the country. Barcelona's shops and stores receive over 15% of the total expenditure of foreign visitors in Spain, which is the second-largest tourist destination in the world, after France [66-76].
The Costs and Negative Impacts of Sporting Events Organisation
In most cases, negative legacies (Table 1) are neglected when planning and evaluating an event [77]. Sporting events can also produce excessive spending, increased taxes and higher costs of living for residents [25,78,79]. Even if social and cultural impacts are more difficult to measure and manage [53,80,81], cultural conflicts between residents and tourists are seen. Moreover, security risks, hooliganism and traffic problems seem to be among the most relevant negative impacts for residents [19,25,46,49,51,79]. Another issue is the well thought-out and planned construction of new sports tourism infrastructure. If this is not possible, it can cause environmental damage to the host community [46,51], and the many people gathered at an event generate air and water pollution, increased amounts of waste and higher noise levels [46,79,82].
The opposite of Barcelona's example is the 1976 Summer Olympic Games in Montreal, whose most often cited legacy is debt. The Olympic Stadium was supposed to cost $250 million but ultimately cost $1.4 billion. The city did not pay off the debt until November 2006, 30 years after the closing ceremony [73,75]. Haynes [83] gives examples (not positive for tourism) of the 1984 Los Angeles Games and the 2000 Sydney Olympic Games. In Los Angeles, although hotels were occupied at that time, the Disney resort, Universal Studios and the Six Flags Magic Mountain all reported reduced interest from tourists [83]. During the 2000 Olympic Games in Sydney, hotel occupancy in Sydney and Adelaide was high, but occupancy in hotels elsewhere in Australia was significantly lower [84]. The British media presented this trend as a major problem for tourist attractions and hotels in central London in 2012, when, a few weeks before the event, it turned out that a third of hotel rooms in London were unsold [83,84]. It is estimated that during the 2002 South Korea FIFA World Cup, the number of foreign tourists was the same as the number of tourists who visited during the same period the previous year [85]. The Atlanta Olympic Games experienced crowding out effects: in parts of the city a short distance from the Olympic Park, many hotels and restaurants reported a significant reduction in income [85]. The hosts of the Singapore Formula One Grand Prix have noted the same problem: retailers and restaurateurs near the track have complained about a fall in custom as residents avoid the area [86].
Wilson and Liu [84] conducted a factor analysis which revealed six negative impact factors: travel inconvenience (the most negative), price inflation, security and crime concerns, risk of disease, pollution and, lastly, damage to the environment. Respondents did not have a clear opinion on the impact of the event on the deterioration of the quality of services. It was found that travel inconvenience and price inflation were significantly but negatively related to the intention to travel [84]. Of course, the organization of major sporting events carries the risk of price inflation, vandalism, terrorism, pollution and environmental problems. Therefore, the negative effects of great events cannot be ignored. The above-mentioned impacts can be very intense but depend on the size of the event. Large-scale sporting events are globally attractive to tourists as well as the media [47,87], but their negative impacts are more visible.
Most research on this problem has focused on mega sporting events (MSEs). There is little research on organizing smaller non-mega sporting events (NMSEs) that reflects on how these smaller types of events can potentially bring benefits and losses to local residents [22]. For example, Djaballah, Hautbois and Desbordes checked how local governments make sense of small scale sporting events' social impacts. Their case study covered local sports officials from 25 medium-sized French cities [88]. The analyzed small scale events are mainly perceived by researchers as a stimulator of tourism development and a chance for the general development of cities and regions [88,89]. Many studies are primarily concerned with identifying motivations and benefits for active or inactive participants of events (and less often for residents) [90-95], while in the context of impact on destinations the authors write about protected areas [96]. Recently, there have also been publications in which attention is devoted not only to modern sporting events but also to historical sporting events in terms of their impact on local society [97,98]. Sports events are very often the most important goal in the tourism strategies of many cities. Major sports events not only attract participants and spectators but also have the capacity to change the image of cities and encourage future tourism, influencing economies, local communities, the sociocultural context and ecology for many years after the event has been staged [99]. In the last two decades, there has been a lot of research addressing a variety of results [100-103] of mega spectator sporting events [18].
The subject of the presented manuscript closely refers to this special issue, especially since overtourism in connection with the organization of sporting events has been discussed in the tourism literature. The effect of excessive tourism is an increase in aggressive commercialization, in the prices of services, rental fees and real estate, and depopulation in cities and districts exploited by sports tourism. In cities with a long history, it causes the gentrification of historical areas. Overtourism, as a negative phenomenon observed during the organization of sports events, results in the socio-psychological capacity limits of not only residents but also tourists being exceeded. That is why we see the link between our research and this special issue, especially as so many cities are seeking the right to organize big events such as the Summer or Winter Olympic Games, the FIFA World Cup or the Formula One Grand Prix, hoping for far-reaching changes that benefit their host community [104]. Small scale sporting events, if appropriately designed and implemented in practice, also have the potential to benefit hosting communities [105-109] and are not so burdensome for residents. However, the larger the event, the greater the impact, which is why in the event planning process consideration should be given to developing contingency solutions for major risks [110].
The purpose of this manuscript is to draw attention to the negative effects of organizing medium and small scale sporting events from the perspective of a participating resident, since such events have not yet been studied in this respect. Indeed, what impact do they have on the perception of participating inhabitants? Do the inhabitants of the city feel the negative effects of organizing sporting events (communication problems or inappropriate behavior of supporters)? Do they believe that these events increase the level of crime in the city or, despite these inconveniences, are they satisfied with the organization of sporting events in their place of residence? How are these impacts perceived by participants who are also residents of the city?
The structure of the paper is as follows. The first part presents a literature review of the negative implications of sporting events and overtourism, and of the costs and benefits of organizing sporting events. The second part describes the method: a diagnostic survey using an author-designed questionnaire administered to host-resident participants (n = 774) of two well-known events in Poznan, the Half Marathon (a medium scale mass event) and Cavaliada (an elite international equestrian event). We then present the results of this empirical research and our findings, and discuss their theoretical and managerial implications.
Aim of the Study
The main purpose of this study is to investigate the impact of non-mega sporting events on the perception of negative externalities by host residents. The detailed aim of the study was to examine whether the inhabitants of the city feel the negative effects of organizing sporting events (communication problems or inappropriate behavior of supporters), whether they believe that these events increase the level of crime in the city, and whether, despite these inconveniences, they are satisfied with the organization of sporting events in their place of residence. Two sporting events of different ranks organized in Poznan (one of the largest Polish cities and one of the most important sports centers in Poland) were selected for the empirical research. The first one is the Poznan Half Marathon, a medium scale mass event; the second one is Cavaliada, an elite international equestrian event.
Research Design and Data Collection
The authors of the presented research selected two sporting events of different sporting ranks, which took place in the city of Poznan-the capital of the Greater Poland region. The first of the surveyed events, the 6th Poznan Half Marathon, was an event in the field of mass sports, in which both amateur and professional athletes participated. The event is mainly national in nature but in recent years it has become international. It is also an event that has become a permanent feature in the sports calendar of the city of Poznan. The second surveyed event was the third edition of Cavaliada, which is an international equestrian event. The event consists of three parts: Cavaliada Sport, for top-level professionals; the Cavaliada Show, which also featured numerous amateur riders; and the Cavaliada Fair. This event has been successfully organized in the capital of Greater Poland for several years and has an international reach. The study was attended by residents of the Poznan agglomeration.
Research Tool
The method of a diagnostic survey was applied, using the standardized interview technique with a questionnaire tool during the selected events. An author-designed questionnaire was prepared for the study. The division of Freyer and Gross (2002), who distinguished four types of orientation among the motives of participation in sporting events [111], was the basis for the development of the questionnaire. Based on the existing literature and the results presented, it was recognized that the influence of non-mega sporting events on the perception of negative externalities by host residents remains an unexplored area. The questionnaire had 25 questions. The first part of the questionnaire focused on socio-demographic variables (Table 2). The second part of the survey focused on motives for participating in the researched events. The third part of the questionnaire was designed for people who were residents of Poznan. The last part was designed for sport tourists. For the purpose of this study, we have focused only on two parts of the questionnaire (the first and the third).
The authors of the article received official permission from the organizers to conduct research as the runners were finishing the race; the questionnaires were filled out personally during conversations with the runners. In the case of the Cavaliada equestrian competition, permission was obtained only for research among fans. The authors personally talked with residents of the city: 6th Half Marathon fans and runners, and Cavaliada fans. The questions examined the impact of sporting events on residents' living conditions and concerned the inhabitants' opinions on the negative influence of the selected events on broadly understood quality of life (communication problems, noise, behavior of supporters, increase in crime). The research instrument was validated before the examined event, during the 5th Poznan Half Marathon.
Data Analysis
When determining the number of recipients, information from the organizers on the expected number of participants in the event was used to make the sample selection in a way that ensured the best possible representativeness of the results obtained. The scheme of simple random sampling without replacement was used. In the calculations, the formula for sample size for a finite population was used, as sketched below. The assumption was made that the maximum error of estimate (e) at the 95% confidence level should not exceed 4%.
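The paper does not print the formula itself; a minimal sketch assuming the standard finite-population expression with the conservative p = 0.5 is given below, with hypothetical participant counts in place of the organizers' figures.

```python
# Minimal sketch of the finite-population sample-size formula assumed here
# (the paper does not print it): n = N z^2 p(1-p) / (e^2 (N-1) + z^2 p(1-p)),
# with the conservative p = 0.5; the N values are hypothetical.
import math

def sample_size(N, e=0.04, z=1.96, p=0.5):
    num = N * z**2 * p * (1 - p)
    den = e**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

for N in (5000, 20000):  # hypothetical expected participant counts
    print(f"N = {N}: required n = {sample_size(N)}")
```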
Descriptive statistics (percentages, means and standard deviations) were calculated. In order to further analyze the obtained results, respondents were asked to define the intensity level of the inconvenience associated with the organization of a sporting event in their city on a 10-point Likert scale (Tables 3-5). The differences between responses were tested among the groups with a Chi-square test for independence. Statistical significance was set at p < 0.05. Because the distribution of the analyzed feature in both groups differed significantly from normal (p < α), the nonparametric Mann-Whitney U test was used. All statistical analyses were conducted using Statistica Software 10.0 (StatSoft Inc., Cracow, Poland, 2011).
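The following is a minimal sketch of the two tests named above, using scipy rather than Statistica. The contingency counts follow the percentages reported later in the results (160 of 256 Half Marathon fans vs. 97 of 308 Cavaliada fans reporting bothersome traffic); the 10-point ratings are synthetic stand-ins.

```python
# Sketch of the two tests named above, using scipy rather than Statistica.
# Contingency counts follow the reported percentages; the ratings are synthetic.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

table = np.array([[160,  96],   # Half Marathon fans: yes / no
                  [ 97, 211]])  # Cavaliada fans:     yes / no
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

rng = np.random.default_rng(2)
hm_ratings = rng.integers(1, 11, 160)   # hypothetical severity ratings
cav_ratings = rng.integers(1, 11, 97)
u, p = mannwhitneyu(hm_ratings, cav_ratings, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}")
```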
Socio-Demographic Characteristics of Surveyed Participants (Athletes and Supporters).
The survey was attended by residents of Poznan: active (athletes) and passive (sports fans) participants of the events. A total of 774 respondents took part in the survey: 210 Half Marathon athletes, 256 Half Marathon fans and 308 Cavaliada fans, who were simultaneously inhabitants of the city of Poznań. Respondents were deliberately selected from among participants in the events. The questioned population was chosen for its knowledge of the event, so as to be able to identify both its positive and negative aspects. However, we excluded residents who are uninterested in the event, who may express a more negative outlook but are not necessarily able to testify to the positive effects of the event. It was considered that the participants' opinions would be objective, from the perspective of both residents and tourists. The table below presents the socio-demographic profile of the respondents. A sample of 774 respondents, 315 men and 459 women, participated in the survey voluntarily and completed the questionnaire. The participants of the research were mainly between 18 and 25 years old (39.7%; 307) and 26-35 years old (30.7%; 238). Among the surveyed residents of Poznan, people with higher education constituted the largest group at 41.8% (324), 27.9% (216) possessed secondary education and 18.1% (140) had incomplete higher education. A greater percentage of them, 48.8% (378), was professionally active, and 29.7% (230) were students. The socio-demographic characteristics of respondents are presented below (Table 2). The results of the research were divided and presented in three groups. The first two groups of results are presented in Tables 3 and 4. They contain the opinions of the participants of the 6th Poznan Half Marathon: athletes (210) and fans (256). The research has shown that more than every tenth person (13.3%; 28) stated that the Poznan Half Marathon causes inadequate behavior of fans in the city, rating this inadequate supporter behavior on average at 4 points on a 10-point scale. However, the majority, 86.7%, had a different opinion; according to 182 participants, the fans behave properly. Exactly every tenth person (10%; 21) said that the Poznań Half Marathon causes an increase in crime in the city, while as many as 9 out of 10 researched people had a different opinion on this subject. These 21 respondents rated the increase in crime caused by the Half Marathon at an average of 3.1 points on a 10-point scale. Moreover, almost 99% of the surveyed resident Half Marathon athletes stated that they were satisfied with the organization of the Poznan Half Marathon in their place of residence. Only 1% (2 people) had a different opinion and stated that they were not satisfied with the organization of the Half Marathon in Poznan, at the level of 9.2 points on a 10-point scale. This proves high social support for this sporting event among the examined group of people, and even if the disadvantages are perceived by the local community, they are most likely not very onerous and short-lived (of which the respondents are probably aware).
It turned out that as many as 67.1% of the surveyed athletes said that the Poznan Half Marathon causes bothersome communication problems in the city; only 32.9% had a different opinion on this subject. In order to analyze the results further, the 141 respondents who answered YES were asked to rate the intensity of the discomfort caused by the communication problems accompanying the 6th Poznan Half Marathon on a 10-point Likert scale (answer scale: 10-very high, 1-very low). These respondents concluded that the Poznań Half Marathon causes communication problems in the city at an average level of 5.5 points.
The research has shown that slightly more than every tenth athlete (13.3%-28) stated that the Poznan Half Marathon causes inadequate behavior of fans in the city, rating it on average at 4 points on a 10-point scale. However, the majority, 86.7%, had a different opinion: according to 182 participants, the fans behave properly.
Exactly every tenth athlete (10%-21) said that the Poznań Half Marathon causes an increase in crime in the city; as many as 9 out of 10 respondents had a different opinion on this subject. These 21 respondents rated the increase in crime caused by the Poznań Half Marathon at an average of 3.1 points on a 10-point scale. Moreover, almost 99% of the surveyed Half Marathon athletes were satisfied with the organization of the Poznan Half Marathon in their place of residence; only 1% (2 people) stated that they were not satisfied. The athletes' satisfaction was rated at the level of 9.2 points on a 10-point scale. This proves high social support for this sporting event among the examined group: even if disadvantages are perceived by the local community, they are most likely not very onerous and are short-lived (of which the respondents are probably aware).
Research on the 6th Poznan Half Marathon supporters (n = 256) found that 62.5% (160) said that the Half Marathon caused bothersome traffic jams in the city, while 37.5% thought differently (Table 4). The supporters rated the traffic jams caused by the Half Marathon at an average severity of 5.5 points on a 10-point Likert scale. Moreover, less than 10% of the surveyed fans found that the Half Marathon caused inadequate behavior of the fans in the city, while 90.6% of supporters had a different opinion. The scale of inappropriate fan behavior was estimated at 3.8 points on average (n = 24).
Only 7% of surveyed fans found that the Half Marathon causes an increase in crime in the city; most of them (93%) had a different opinion. The fans who did perceive a rise in crime due to the Half Marathon (n = 18) rated it at an average of 3.3 points on a 10-point scale. The supporters' responses were almost unanimous: 98.4% (n = 252) attested to their satisfaction with the organization of the Half Marathon in Poznan, and only 1.6% (4 people) were not satisfied. The fans' satisfaction with the organization of the Half Marathon in Poznan was on average at the level of 8.8 points.
The authors also surveyed 308 supporters of Cavaliada, an international equestrian event. The studies have shown that 31.5% of them (97 people) think that the event causes onerous communication difficulties, rated on average at 4.5 points on the Likert scale (Table 5); among Half Marathon fans, 62.5% of respondents answered yes (average level 5.5 points). Therefore, the Half Marathon made public transport much more difficult than Cavaliada. Respondents were asked a question: "Does Cavaliada cause inappropriate behavior of fans in the city (e.g., loud behavior, fights)?" and 6.2% of them said that Cavaliada caused inappropriate supporter behavior in the city (on average 4.2 points on the Likert scale); in the case of the Half Marathon, 9.4% of supporters said yes. Only 5.2% of Cavaliada fans said that this event increases crime in the city (on average 3.3 points); in the case of the Half Marathon, 7% of supporters said yes. The supporters declared their satisfaction with the organization of the Cavaliada event in Poznan; only 2% (6 persons) were not satisfied. The fans' satisfaction with the organization of Cavaliada in Poznan was on average at the level of 9.4 points.
From the point of view of the conducted analysis, it proved important to check, among those who answered YES, whether the average level of dissatisfaction (Likert scale 1-10) differed significantly between the two examined groups of respondents: Half Marathon fans and Cavaliada fans. For this purpose, the Chi-square test and the Mann-Whitney U test were used:
Bothersome Communication Problems (e.g., Additional Traffic Jams in the City) Caused by Sporting Events
Checking whether the fractions of people who answered YES to this question differ significantly between the two analyzed groups (Chi-square test): the fractions differ in a statistically significant way, p-value = 0.000 (<0.05). There is a relationship between the type of event and the answer to the analyzed question. The distribution of the analyzed feature in both groups differs significantly from the normal (p < α), therefore the nonparametric Mann-Whitney U test was used. The average level of dissatisfaction differs significantly between the analyzed groups: p = 0.004 (the sample averages were 5.53 for the Half Marathon and 4.54 for Cavaliada). For the one-sided test: p = 0.004/2 = 0.002. It can therefore be assumed that the Half Marathon participants have a higher average level of dissatisfaction than the Cavaliada participants.
Increase in Crime in the City Caused by Sporting Events
Checking whether the fractions of people who answered YES to this question differ significantly between the two analyzed groups (Chi-square test): the fractions do not differ in a statistically significant way, p-value = 0.362 (>0.05). It cannot be assumed that there is a relationship between the type of event and the answer to the analyzed question. The distribution of the analyzed feature in both groups again differs significantly from the normal (p < α), therefore the nonparametric Mann-Whitney U test was used. The average level of dissatisfaction is not significantly different between the analyzed groups, p = 0.971 (the sample averages were 3.28 for the Half Marathon and 3.31 for Cavaliada).
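The two-step procedure above can be reproduced with standard tools; the following is a minimal Python sketch using scipy instead of Statistica. The 2x2 counts are reconstructed from the fractions reported above (160 of 256 Half Marathon fans vs. 97 of 308 Cavaliada fans answering YES to the communication question); the Likert vectors are placeholders generated at random, since the raw individual responses are not published.

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Step 1: do the YES fractions differ between the two groups?
table = np.array([[160, 256 - 160],    # Half Marathon fans: YES / NO
                  [97, 308 - 97]])     # Cavaliada fans:     YES / NO
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi2:.4f}")  # p << 0.05, as reported

# Step 2: among YES answers, compare Likert severity ratings with the
# nonparametric Mann-Whitney U test (the distributions are non-normal).
# Placeholder 1-10 ratings; the reported means were ~5.5 and ~4.5.
rng = np.random.default_rng(0)
half_marathon = rng.integers(1, 11, size=160)
cavaliada = rng.integers(1, 11, size=97)
u_stat, p_u = mannwhitneyu(half_marathon, cavaliada, alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat:.0f}, two-sided p={p_u:.4f}, one-sided p={p_u/2:.4f}")
```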
Satisfaction with the Organization of Sports Events in Poznan, Bearing in Mind the Disadvantages of Organizing Events in the City
Checking whether the fractions of people who answered YES to this question differ significantly between the two analyzed groups (Chi-square test): the fractions do not differ in a statistically significant way, p-value = 0.730 (>0.05). It cannot be assumed that there is a relationship between the type of event and the answer to the analyzed question. The distribution of the analyzed feature in both groups again differs significantly from the normal (p < α), therefore the nonparametric Mann-Whitney U test was used. The average level of satisfaction differs significantly between the analyzed groups: p = 0.000 (the sample averages were 8.82 for the Half Marathon and 9.25 for Cavaliada). For the one-sided test: p = 0.000/2 = 0.000. It can therefore be assumed that the Half Marathon participants have an average level of satisfaction significantly lower than that of the Cavaliada participants.
Discussion
A review of the literature shows that there is little research on the impact of small sporting events on the quality of life of residents. In addition, studies on the relationship between mega sporting events and the place where they are organized have a relatively short history; the first studies appeared after 1984 under the influence of the Summer Olympic Games in Los Angeles [108,109]. Over the next 30 years, research results appeared showing the relationship between events and their hosts [23,[111][112][113][114]. In Poland, the first such research and reports began in 2007, when the result of the selection of the host of Euro 2012 was announced [115]. This gave impetus to interest in the subject. A few studies referred to economic [115][116][117][118][119], tourist [24,120,121] and sociological [122] issues. All these studies analyze the relationship between a sporting event and the place where it takes place, including the effect, impact and influence of the mega event on the host [121]. Street running has already been analyzed from the side of runners' profiles and their motivation to participate in events [90,91,123]. Moreover, it was emphasized that the venue of a running event is increasingly important for sustainable development; for example, the effects of organizing running events in national parks were researched [96,97,124]. The impact of running events on local communities in the context of health promotion was also examined [100], but results of research on the negative impact of running events on the urban community have not previously appeared in the literature. There are not many studies that concern small and local events and their negative or positive impact on residents [22,105,125,126]. Many authors have taken up the issue of sports tourism, which is related to the organization of sporting events [107,[127][128][129][130]. Sports tourism, if well organized, has the potential to create positive economic, social and cultural benefits for the host community. This kind of tourism has recently been used to enhance a city's identity and its appeal to businesses and travelers. Most cities bid to host sporting events in order to achieve urban regeneration, with revenues being generated from TV licenses and other areas. However, cities also face difficulties when trying to assess the impacts of these events against the costs incurred. This applies to the organization of major sporting events in popular tourist destinations; that is why many authors dealt with issues in this area, especially the social impacts of hosting major sport events. Kim and co-authors [25] tried to develop a complex scale to evaluate six perceived social impacts of a large-scale sport tourism event: economic benefits; community pride; community development; economic costs; traffic problems; and security risks. Their questionnaire was tested among the host community's residents at the Formula One Korean Grand Prix in South Korea; they wanted to understand how residents view the impacts of a large sport tourism event [25]. Liu, using Shanghai Formula One as a case study, examined the impact of mega sporting events on host city image from the international students' perspective [131]. Leisure facilities and services had the most positive image impact in the opinion of the respondents. International students disagreed that Formula One would result in security problems or any crime.
They had doubts about any negative impact on their daily life or environment. Lunhua & Haiyan [132] investigated residents' perceptions of the social impact of the Formula One Chinese Grand Prix and examined the relationships between the perceptions of social impact and four sets of variables. The results showed one dimension of negative impact (environmental and cultural problems) which was significantly associated with involvement in the sports industry, community attachment and identification with the event. Moreover, Liu [133] developed a scale to measure the legacy of psychic income associated with the Olympic Games. The research, collected from Beijing residents during the 2008 Beijing Games, identified a seven-factor scale of psychic income (SPI) with 24 pertinent items retained. A study conducted by Balduck et al. [134] contributes to this line of inquiry by assessing the impact of the arrival of a stage of the 2007 Tour de France in Ghent. Exploratory factor analysis revealed seven impact factors; the most highly perceived benefits were cultural and image benefits, whereas the most highly perceived costs were excessive spending and mobility problems. Yi-De Liu [26] wrote that improving residents' quality of life (QoL) is one main reason to host major events, and that event legacy has been emerging as a key outcome associated with the hosting of an event. Based on a case study of Liverpool as the 2008 European Capital of Culture (ECOC), the research indicates that the most highly perceived benefits were image, identity and cultural legacies; however, respondents were less likely to perceive the legacy of economic and tourism development on their QoL. The study also underlines the importance of legacy planning as a holistic program from the early stages of the event process [26].
However, the presented examples of conducted research concern large sporting events and show that no one has studied the perception of negative externalities of less known, medium-scale sporting events by host residents. The authors of the present paper did not find research results that would assess the impact of the organization of non-mega-scale sports events on the quality of the hosts' lives. With reference to the Faulkner and Tideswell studies [135], the perceptions of community members are important and, furthermore, obtaining responses from a diverse group of residents is essential in representing the varied perceptions. For example, Bynner [136] stated that longitudinal data are needed to study the transition process involved, the effects of societal change and the policy impact.
Final Conclusions
The theoretical part of this article presents the meaning of sporting events for the tourism industry and indicates the negative and positive effects this kind of tourism brings to host cities, with reference to the theoretical foundations of the term "overtourism." The second part presents the results of empirical research conducted by the method of diagnostic surveys during two sporting events of different rank, which took place in Poland and represented different sports disciplines: running and horse riding. The case study is the city of Poznan and two well-known events in this agglomeration: the Poznan Half Marathon, a mass sports event, and Cavaliada, an elite equestrian event. A total of 774 respondents took part in the study: resident hosts who took part in the studied events. The main goal of this study was to investigate the impact of non-mega sporting events on the quality of the hosts' lives. The detailed aim was to examine whether the inhabitants of the city feel the negative effects of organizing sporting events (communication problems or inappropriate behavior of supporters), whether they believe that these events increase the level of crime in the city, and whether, despite these inconveniences, they are satisfied with the organization of sporting events in their place of residence. This phenomenon can also be referred to the social exchange theory (SET) to analyze the perceptions of residents. Many authors [135,[137][138][139] have continuously drawn on this theory. For example, the theory of Homans [137], the author of SET, has been applied to a variety of leisure disciplines to understand the views of local residents with regard to tourism. Harrill [139] states that SET involves the trading and sharing of resources between individuals and groups. Ap [140] not only highlights the exchange of resources but has also expanded the theory to include the mutual benefits that all exchange participants can obtain. In applying this to tourism, research by Teye et al. [141] indicates that perceived benefits associated with host community improvement led to the support of residents. Moreover, Waitt [142] found that the enthusiasm and support of residents varied according to whether tourism events were perceived positively, according to the benefits derived, or negatively, with respect to any costs incurred. It is essential that event organizers and affiliates consider the local voices about the sport tourism event.
The results of the presented research show that both athletes and fans of the Half Marathon said that the Poznan Half Marathon causes bothersome communication problems in the city. Cavaliada, an international equestrian event, causes onerous communication difficulties only in the opinion of 31.5% of the surveyed respondents. Therefore, the Half Marathon made public transport much more difficult than Cavaliada, which is situated in one place: the Poznan Trade Centre. It also turned out that, in the opinion of most of the surveyed fans and athletes, the studied events did not cause inadequate behavior of fans or an increase in crime in the city, and people are satisfied with the organization of these events. It turns out that the inhabitants, despite the minor inconveniences that are felt as a result of organizing sporting events in the city, must also see the benefits. The negative impact of Cavaliada was very low. To check the differences between the two examined groups of respondents, Half Marathon fans and Cavaliada fans, the Chi-square test and the Mann-Whitney U test were used. The participants feel the bothersome communication problems caused by the Half Marathon and have a higher average level of dissatisfaction than the Cavaliada participants. Moreover, the Half Marathon participants have an average level of satisfaction with the organization of sports events in Poznan significantly lower than that of the Cavaliada participants. Therefore, an elite sporting event is less burdensome for residents and gives them more satisfaction: they do not think the event will harm them, and they are aware of the social exchange and the profits for the city. The conclusions and reflections resulting from these studies can be used by organizers of non-mega sporting events and be their inspiration. Importantly, the inhabitants play an important role in the development of sustainable tourism, because they are cultural agents and the social group within which tourism is provided, and local hospitality is a key element of the tourism product [143].
The paper provides data that may be useful for supporting the marketing of events such as half marathons. The popularity of participation in sporting events fulfills a number of important sociocultural functions in the modern world; most importantly, it enables sports tourists to build a sense of connection and integration with other people, thanks to which sports events become a postmodern form of participation in social life. Further research should go towards recognizing the importance and impact of small sporting events on people's lives and their environment. These types of events are definitely less recognized, but their growing popularity indicates their great importance for the development of cities and regions, tourism and economics.
"year": 2020,
"sha1": "19a4fbc211168e5d610c3b590cd2b2fc3b1ca8fa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/7/2827/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1284ec5cb7eab18698e8f7ff3b19569db63d7e9b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
We show that any triple Massey product with respect to prime 2 contains 0 whenever it is defined over any field. This extends the theorem of M. J. Hopkins and K. G. Wickelgren, from global fields to any fields. This is the first time when the vanishing of any $n$-Massey product for some prime $p$ has been established for all fields. This leads to a strong restriction on the shape of relations in the maximal pro-2-quotients of absolute Galois groups, which was out of reach until now. We also develop an extension of Serre's transgression method to detect triple commutators in relations of pro-$p$-groups, where we do not require that all cup products vanish. We prove that all $n$-Massey products, $n\geq 3$, vanish for general Demushkin groups. We formulate and provide evidence for two conjectures related to the structure of absolute Galois groups of fields. In each case when these conjectures can be verified, they have some interesting concrete Galois theoretic consequences. They are also related to the Bloch-Kato conjecture.
Introduction
A major problem in Galois theory is the characterization of profinite groups which are realizable as absolute Galois groups of fields. This is a very difficult problem, and in general little is known. In our paper we provide a definite contribution valid for all fields.
In 1967 A. Weil in [Wei], describing Artin's first result in the theory of real fields, says "Even now, this is an altogether isolated result of great depth, whose significance for the future is not to be assessed lightly." In the classical papers [AS1,AS2] published in 1927, E. Artin and O. Schreier went on with developing a theory of real fields and showed in particular that the only non-trivial finite subgroups of absolute Galois groups are cyclic groups of order 2. In [Be], E. Becker developed some parts of Artin-Schreier theory by replacing separable closures of fields by maximal p-extensions of fields. Here, and below, by p we mean a prime number. In the remarkable paper [Koe] in 2001, J. Koenigsmann provided a classification of solvable absolute Galois groups. In [MS1] it was shown that orderings of fields can be detected already by much smaller Galois 2-extensions. In 1996, in [MS2], using Villegas' results in [Vi], a structural result was provided on the quotient of the absolute Galois group by the third term of its 2-descending series. These results were extended to analogous results for p-descending series and p-Zassenhaus series in [EM1,EM2]. These are a few fundamental results on the structure of absolute Galois groups of general fields.
However, in a recent spectacular development, the Bloch-Kato conjecture was proved by M. Rost and V. Voevodsky. (See [Voe].) These are very strong restrictions on the structure of absolute Galois groups, but these results do not directly give structural results on absolute Galois groups. However, in [MS2,EM1,EM2], the previous results by A. Merkurjev and A. Suslin [MeSu] on the Bloch-Kato conjecture in degree 2 were used. It is a challenging and important problem, both for the structure of absolute Galois groups as well as for understanding the Bloch-Kato conjecture better, to provide a direct precise translation of the Bloch-Kato conjecture into group-theoretical properties of absolute Galois groups. Building on work of a number of mathematicians ([Dwy,DGMS,Ef2,GLMS,EM1,EM2,HW,MS2,MeSu,Vi,Voe]) we formulate here two other fundamental and strong conjectures, which we call the "Vanishing n-Massey Conjecture" and the "Kernel n-Unipotent Conjecture". (The first author is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant R0370A01. The second author is supported in part by the National Foundation for Science and Technology Development (NAFOSTED).)
The main objective of this paper is to prove the Vanishing 3-Massey Conjecture for prime 2 for all fields and to derive strong consequences for the structure of relations in absolute Galois groups of all fields or their maximal pro-2 quotients. These consequences were out of reach prior to this work.
Let us first recall briefly the notion of triple Massey products (see Section 2 for more detail on Massey products). Let C^• be a differential graded algebra with differential δ : C^• → C^{•+1} and cohomology H^•. Suppose that a, b, c ∈ H^1 are such that ab = bc = 0. We can choose A, B, C in C^1 representing a, b, c respectively. Since ab = 0, there is E_ab such that δE_ab = AB; similarly there is E_bc such that δE_bc = BC. Note that δ(E_ab C + A E_bc) = 0, hence E_ab C + A E_bc represents an element of H^2. The set of all elements E_ab C + A E_bc obtained in this manner is defined to be the triple Massey product ⟨a, b, c⟩ ⊂ H^2. We say that the triple Massey product vanishes if it contains 0. Now let F be a field of characteristic ≠ 2 and let G = G_F(2) be the maximal pro-2 quotient of the absolute Galois group G_F of F. Let C^• = C^•(G, F_2) denote the differential graded algebra of F_2-inhomogeneous cochains in continuous group cohomology of G (see e.g., [NSW, Chapter I, §2]). For any a ∈ F^* = F \ {0}, let χ_a denote the corresponding character via the Kummer map F^* → H^1(G, F_2). In the work of M. J. Hopkins and K. G. Wickelgren [HW], the following result was proved.
Theorem 1.1 ([HW, Theorem 1.2]). Let F be a global field of characteristic ≠ 2 and a, b, c ∈ F^*. The triple Massey product ⟨χ_a, χ_b, χ_c⟩ contains 0 whenever it is defined.
In our paper we show that triple Massey products with respect to the prime 2 vanish over any field F. It follows from Example 4.1 and from Witt's Theorem (see [Wi], [Ko2, Theorem 9.1]) that n-fold Massey products vanish with respect to 2 if char(F) = 2. So we can assume that the characteristic of F is not 2.
Theorem 1.2. Let F be an arbitrary field of characteristic ≠ 2, and let a, b, c ∈ F^*. The triple Massey product ⟨χ_a, χ_b, χ_c⟩ contains 0 whenever it is defined.
This has remarkable consequences for the structure of absolute Galois groups G_F and their maximal pro-2-quotients G_F(2). We state our results for finitely generated pro-2-groups, but our methods can also be used in the case of infinitely generated pro-2-groups with several relations. In Section 7 we also consider pro-p-groups for p possibly not equal to 2. The reason for our restriction to p = 2 in the remainder of the paper is that we do not yet have complete results for triple Massey products for p > 2. This is work in progress. (See [GMTT].) The results on the shape of relations of finitely generated pro-2-groups of the form G_F(2) for some field F are fundamental results extending the classical results of S. P. Demushkin, K. Iwasawa, U. Jannsen, H. Koch, J. Labute, I. Shafarevich and K. Wingberg. (See e.g., [De1,De2,I,JaWi,Ko1,Ko2,La,Se1,Sh].) Thus we provide strong restrictions on the structure of the groups G_F(2). Before stating the results we illustrate them with an example. Examining the classification of Demushkin groups by Labute in [La], one sees that G_F(p) always has a presentation where the generating relation is a product of commutators between generators and p-powers of generators. (If G_F(p) for a local field is not a Demushkin group, then it is free pro-p.) Already in the paper [CEM], Section 9, it was shown that certain quotients S/⟨r⟩, where S is a free pro-2-group on generators x_1, …, x_n, n ≥ 3, cannot be absolute Galois groups of any field. One can also deduce, for example, that such a group cannot be isomorphic to G_F(2) for any field F. However, relations where simple commutators are combined with triple ones, like r = [x_4, x_5][[x_2, x_3], x_1], are much harder to exclude, and one could not show that G = S/⟨r⟩, with S a free pro-2-group on n generators x_1, …, x_n, n ≥ 5, is not isomorphic to G_F(2) for any field F until this work. In Examples 7.2 and 7.3, we deal with this group in a detailed way, and in particular we show that G ≇ G_F(2) for any field F. The next Theorem 1.3 and Theorem 1.4 are a vast generalization of this example.
To see that some conditions are necessary, consider the following example. Consider a free pro-2-group S on generators x_1, x_2, x_3, and then consider three new generators y_1, y_2, y_3, where the relation is modified by an element r′ in the 4th term S_(4) of the 2-Zassenhaus filtration of S defined in Section 3 after the proof of Lemma 3.6. Observe first that the group S_1/⟨[x_1, x_3]⟩, where S_1 is a free pro-2-group on generators x_1, x_3, is realizable as G_{F_1}(2) over the field C((X_1))((X_2)) of iterated power series (see [Wa, Corollary 3.9, part (2)]). Also G_2 := S_2, the free pro-2-group on x_2, is realizable as G_{F_2}(2), where F_2 = C((X_2)). By [JW, Theorem 3.6], their free product in the category of pro-2-groups is also realizable as G_F(2) for some field F. Hence the resulting group, where S is a free pro-2-group on generators y_1, y_2, y_3, is of the form G_F(2). We therefore see that conditions as in our Theorems 1.3 and 1.4 are necessary to guarantee the truth of these theorems, and that they are natural conditions. It is clear that they are very strong conditions, and they extend some results on the shape of relations of G_F(2) from local fields to all fields.
In the theorems below we use the following notation. Let (I, <) be a well-ordered set. Let S be the free pro-2-group on generators x_i, i ∈ I. Let S_(i), i = 1, 2, …, be the 2-Zassenhaus filtration of S. (See Section 3 for the definition of the Zassenhaus filtration.) Then any element r in S_(2) may be written uniquely as

(1)   r = ∏_{i∈I} x_i^{2a_i} · ∏_{i<j} [x_i, x_j]^{b_ij} · ∏ [[x_i, x_j], x_k]^{c_ijk} · r′,

where a_i, b_ij, c_ijk ∈ {0, 1}, the last product runs over the basic commutators of weight 3, and r′ ∈ S_(4). For convenience we call (1) the canonical decomposition modulo S_(4) of r (with respect to the basis (x_i)), and we also set u_ij = b_ij if i < j, and u_ij = b_ji if j < i.

Theorem 1.3. Let R be a set of elements in S_(2). Assume that there exists an element r in R and distinct indices i, j, k with i < j, k < j such that: (i) in the canonical decomposition (1) modulo S_(4) of r, a_k = a_j = u_ij = u_kj = u_ki = u_kl = u_jl = 0 for all l ≠ i, j, k, and c_ijk ≠ 0; and (ii) for every s ∈ R which is different from r, the factors [x_k, x_i] and [x_i, x_j] do not occur in the canonical decomposition modulo S_(4) of s.
Then G = S/⟨R⟩ is not realizable as G_F(2) for any field F.
Theorem 1.4. Let R be a set of elements in S_(2). Assume that there exists an element r in R and distinct indices i < j such that: (i) in the canonical decomposition (1) modulo S_(4) of r, a_i = a_j = u_ij = u_il = u_jl = 0 for all l ≠ i, j, and c_iji ≠ 0 (respectively, c_ijj ≠ 0); and (ii) for every s ∈ R which is different from r, the factors [x_i, x_j] and x_i^2 (respectively, [x_i, x_j] and x_j^2) do not occur in the canonical decomposition modulo S_(4) of s. Then G = S/⟨R⟩ is not realizable as G_F(2) for any field F. Theorem 1.3 (respectively Theorem 1.4) follows immediately from Theorem 1.2 and Theorem 7.8 (respectively Theorem 7.12).
Remarks 1.5. 1) Notice that any pro-2-group which is realizable as G_F for some field F is also realizable as G_F(2). Hence the above two theorems also provide pro-2-groups which cannot be realized as the absolute Galois group of any field F.
2) One can also use Theorems 7.8 and 7.12 to obtain profinite groups which are not realizable as the absolute Galois group of any field F. For simplicity we consider only the following example. Let S be a free profinite group on 5 generators x_1, …, x_5 and let r = [x_4, x_5][[x_2, x_3], x_1]. Then G := S/⟨r⟩ cannot be realized as G_F for any field F. In fact, one can check that the pro-2 quotient G(2) of G has a presentation G(2) = S′/⟨r′⟩, where S′ is a free pro-2-group on 5 generators y_1, …, y_5 and r′ = [y_4, y_5][[y_2, y_3], y_1]. By Theorem 7.8 (or Example 7.2), we see that G(2) does not have the vanishing triple Massey product property with respect to F_2. Hence neither does G, by Corollary 3.4. Therefore G is not realizable as G_F, by Theorem 6.2.

Motivated by the theorems above, we formulate the Vanishing n-Massey Conjecture for n ≥ 3. See Definition 3.2 for the definition of the vanishing n-fold Massey product property.
Conjecture 1.6. Let p be a prime number and n ≥ 3 an integer. Let F be a field, which contains a primitive p-th root of unity if char(F) ≠ p. Then the absolute Galois group G_F of F has the vanishing n-fold Massey product property with respect to F_p.

Theorem 1.2, more precisely Theorem 6.2, shows that Conjecture 1.6 holds true for n = 3, p = 2 and any field F. In [MT2], we show that the conjecture is true for any n ≥ 3, p > 2 and any p-rigid field F. In [MT3], the conjecture is verified for n = 3, p > 2 and F an algebraic number field. Note also that Theorem 4.2 shows that the conjecture is true for any n ≥ 3, any prime number p and any local field F. Further results related to Conjecture 1.6 are Proposition 4.4 and Proposition 4.5, as well as additional results in [MT2]. In Section 8, we also formulate a related conjecture, the Kernel n-Unipotent Conjecture (see Conjecture 8.3).
As will be explained in Section 8, the Kernel n-Unipotent Conjecture evolved over a number of years through work contained in [Vi], [MS2], [GLMS], [EM1], [EM2] and [Ef2]. This conjecture has significant value because it describes specific pro-p-groups which are images of unipotent representations of absolute Galois groups as building blocks of quotients of absolute Galois groups by various terms in their p-Zassenhaus filtrations.
The Vanishing n-Massey Conjecture can be used to construct these building blocks from much smaller p-groups inductively. (See Theorem 3.1, due to B. Dwyer, and our use of it in Section 6.) Thus these two conjectures together provide us with valuable tools for telling us which Galois p-groups we should be able to construct automatically from smaller Galois groups, and how we can proceed to build entire maximal p-extensions of any field. Our paper opens new directions in studies of Galois p-extensions of fields. It complements methods in current research in abelian birational geometry ( [BT1], [BT2] and [Pop]).
In retrospect we now understand the initial Artin-Schreier results from this new point of view, and we now appreciate A. Weil's intuition about the significance of these results for future developments in Galois theory. (See Remark 4.7.) It seems that our use of triple Massey products for detecting higher commutators is the first time that the rather restrictive assumption that all cup products have to vanish was removed. (See e.g., [Ef2,Gä,Mor,Vo,Vo2].) In fact this suggests that there is a comprehensive extension of the theory described in [Vo2, Appendix], where the assumption that the relations of G are contained in a large enough weight of the free group mapping onto G can be considerably weakened if G = G_F(p) for some prime p. (Here G_F(p) is the maximal pro-p quotient of the absolute Galois group G_F.) Work on this theory is in progress. (See [GMTT].) In the following discussion, we refer to [DGMS] for the definitions of formality of differential graded algebras, the motivation for studying formality, and the connections with Massey products. (To recall the notion of differential graded algebras, abbreviated as DGAs, see Section 2.) Let C^• := C^•(Spec F, Z/2) = C^•(G_F, Z/2) be the DGA of inhomogeneous continuous cochains of G_F with coefficients in Z/2. In the paper [HW], the following extremely interesting question was posed: is C^•(Spec F, Z/2) formal? It is known that if C^•(Spec F, Z/2) is formal, then all higher Massey products vanish. Therefore the vanishing property of Massey products makes the question above a natural one.
The structure of our paper is as follows. In Sections 2 and 3, basic facts on Massey products are reviewed. Some examples on groups satisfying the vanishing Massey product property are discussed in Section 4. In Section 5 we provide the first proof of Theorem 1.2 using splitting varieties [HW]. In Section 6 we present the second proof of Theorem 1.2 using Galois theory and some results in [GLMS]. In Section 7 we apply our results to show some strong restrictions on the shape of relations of G F (2), for a field F . In the last section we point out certain notions related to our results and possibly interesting directions for further research.
Review of Massey products
In this section and the next section, we review some basic facts about Massey products and we use as main resources [Dwy], [Ef2], [HW] and [Wic1]. For other references on Massey products, see e.g., [Fe, Kra, May, Mor, Vo].
Let A be a unital commutative ring. Recall that a differential graded algebra (DGA) over A is a graded A-algebra C^• = ⊕_{i≥0} C^i equipped with a map δ : C^• → C^{•+1} of degree 1 such that (1) δ is a graded derivation, i.e., δ(ab) = (δa)b + (−1)^{deg a} a(δb); and (2) δ^2 = 0. Then as usual the cohomology is H^• = ker δ / im δ. We shall assume that a_1, …, a_n are elements in H^1.
Definition 2.1. A collection M = (a_ij), 1 ≤ i < j ≤ n+1, (i, j) ≠ (1, n+1), of elements of C^1 is called a defining system for the n-th order Massey product ⟨a_1, …, a_n⟩ if the following conditions are fulfilled: (1) a_{i,i+1} represents a_i.
(2) δa_ij = Σ_{l=i+1}^{j−1} a_il ∪ a_lj for i + 1 < j. Then Σ_{k=2}^{n} a_1k ∪ a_{k,n+1} is a 2-cocycle. Its cohomology class in H^2 is called the value of the product relative to the defining system M, and is denoted by ⟨a_1, …, a_n⟩_M.
The product ⟨a_1, …, a_n⟩ itself is the subset of H^2 consisting of all elements which can be written in the form ⟨a_1, …, a_n⟩_M for some defining system M. The product ⟨a_1, …, a_n⟩ is uniquely defined if it contains only one element.
When n = 3 we will speak about a triple Massey product. For n ≥ 2 we say that C^• has the vanishing n-fold Massey product property if every defined Massey product ⟨a_1, …, a_n⟩, where a_1, …, a_n ∈ H^1, necessarily contains 0.
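For n = 3, the smallest case used in this paper, the definition unwinds as follows: a defining system consists of cochains a_12, a_23, a_34 representing a_1, a_2, a_3, together with a_13, a_24 ∈ C^1 satisfying

δa_13 = a_12 ∪ a_23,   δa_24 = a_23 ∪ a_34,

and the corresponding value is the class of

a_12 ∪ a_24 + a_13 ∪ a_34 ∈ H^2.

(With a_13 = E_ab and a_24 = E_bc, this recovers, up to sign conventions, the description of the triple Massey product given in the introduction.)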
Remark 2.2. If ⟨a_1, a_2, a_3⟩ is defined, then it is a coset of the subgroup a_1 ∪ H^1 + H^1 ∪ a_3 of H^2. In particular, ⟨a_1, a_2, a_3⟩ is uniquely defined if and only if a_1 ∪ H^1 = H^1 ∪ a_3 = 0.
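Before proceeding, here is the one-line cocycle check behind the triple product from the introduction, written with F_2 coefficients (the main case of this paper) so that all signs can be ignored: since A, B, C are cocycles,

δ(E_ab C + A E_bc) = (δE_ab)C + E_ab(δC) + (δA)E_bc + A(δE_bc) = (AB)C + A(BC) = ABC + ABC = 0.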
Massey products and unipotent representations
Let G be a profinite group and let A be a finite commutative ring considered as a trivial discrete G-module. Let C • = C • (G, A) be the DGA of inhomogeneous continuous cochains of G with coefficients in A [NSW,Ch. I,§2]. We write H i (G, A) for the corresponding cohomology groups. As observed by Dwyer [Dwy] in the discrete context (see also [Ef2,§8] in the profinite case), defining systems for this DGA can be interpreted in terms of upper-triangular unipotent representations of G, as follows.
Let U_{n+1}(A) be the group of all upper-triangular unipotent (n+1) × (n+1)-matrices with entries in A. Let Z_{n+1}(A) be the subgroup of all such matrices with all off-diagonal entries equal to 0 except possibly at position (1, n+1). We may identify the quotient U_{n+1}(A)/Z_{n+1}(A) with the group Ū_{n+1}(A) of all upper-triangular unipotent (n+1) × (n+1)-matrices with entries in A with the (1, n+1)-entry omitted.
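The following minimal Python sketch (our own illustration; the helper names are not from any library) enumerates U_4(F_2) and confirms two facts used repeatedly below: it has 2^6 = 64 elements, and Z_4(F_2) is central, so the quotient Ū_4(F_2) makes sense.

```python
import itertools
import numpy as np

def unipotent_matrices(n=4):
    """Yield all n x n upper-triangular unipotent matrices over F_2."""
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for bits in itertools.product([0, 1], repeat=len(positions)):
        m = np.eye(n, dtype=int)
        for (i, j), b in zip(positions, bits):
            m[i, j] = b
        yield m

def mul(a, b):
    return (a @ b) % 2  # matrix product over F_2

U4 = list(unipotent_matrices(4))
assert len(U4) == 64                      # |U_4(F_2)| = 2^6

# Z_4(F_2): identity except possibly at position (1, 4) = index (0, 3).
z = np.eye(4, dtype=int); z[0, 3] = 1
assert all((mul(z, g) == mul(g, z)).all() for g in U4)   # z is central
print("|U_4(F_2)| =", len(U4), "and Z_4(F_2) is central.")
```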
Theorem 3.1 ([Dwy, Theorem 2.4]). Let α_1, …, α_n be elements of H^1(G, A). There is a one-to-one correspondence M ↔ ρ̄_M between defining systems M for ⟨α_1, …, α_n⟩ and group homomorphisms ρ̄_M : G → Ū_{n+1}(A). Moreover ⟨α_1, …, α_n⟩_M = 0 in H^2(G, A) if and only if ρ̄_M lifts to a group homomorphism ρ : G → U_{n+1}(A), i.e., if and only if the dotted arrow exists in the commutative diagram formed by ρ̄_M and the projection U_{n+1}(A) → Ū_{n+1}(A).

Explicitly, the one-to-one correspondence in Theorem 3.1 is given as follows: for a defining system M = (a_ij), the homomorphism ρ̄_M has entries (ρ̄_M)_ij = −a_ij.

Definition 3.2. Let the notation be as above. We say that G has the vanishing n-fold Massey product property (with respect to A) if the DGA C^•(G, A) has the vanishing n-fold Massey product property.
Corollary 3.3. The following conditions are equivalent.
(i) G has the vanishing n-fold Massey product property with respect to A.
(ii) For every representation ρ̄ : G → Ū_{n+1}(A) there exists a representation ρ : G → U_{n+1}(A) such that ρ̄ and the composition of ρ with the projection U_{n+1}(A) → Ū_{n+1}(A) agree at all entries (i, i+1), i = 1, …, n.

Corollary 3.4. Let G(p) be the maximal pro-p quotient of G and assume that A = F_p. Then the following conditions are equivalent.
(i) G has the vanishing n-fold Massey product property with respect to F p .
(ii) G(p) has the vanishing n-fold Massey product property with respect to F p .
Proposition 3.5 ([Dwy, p. 182, Remark], see also [Ef2, Proposition 8.3]). Let ρ̄_M : G → Ū_{n+1}(A) correspond to a defining system M = (c_ij) for ⟨α_1, …, α_n⟩ as in Theorem 3.1. Then the central extension associated with ⟨α_1, …, α_n⟩_M is the pullback

0 → A → U_{n+1}(A) ×_{Ū_{n+1}(A)} G → G → 1.

Now assume that G = S/R is the quotient of some profinite group S by some normal subgroup R. Then we have the transgression map ([NSW, Chapter I, Prop 1.6.6])

trg : H^1(R, A)^G → H^2(G, A).

Let ρ̄ : G → Ū_{n+1}(A) be a representation of G and let ⟨−ρ̄_12, …, −ρ̄_{n,n+1}⟩_ρ̄ be the n-fold Massey product value relative to the defining system corresponding to ρ̄. Suppose that ρ : S → U_{n+1}(A) is a lift of ρ̄, i.e., ρ is a homomorphism such that the composition of ρ with the projection U_{n+1}(A) → Ū_{n+1}(A) coincides with the composition of S → G with ρ̄. Then ρ(τ) ∈ Z_{n+1}(A) ≅ A for τ ∈ R, and we write Λ(ρ) : R → A for the resulting homomorphism. Then by the same argument as in [Sha, Lemma 2.3] and by Proposition 3.5, we obtain the following result. We include a proof here for the convenience of the reader.

Lemma 3.6. With the notation as above, trg(Λ(ρ)) = ⟨−ρ̄_12, …, −ρ̄_{n,n+1}⟩_ρ̄ in H^2(G, A).
Proof. We consider a diagram whose first row is the exact sequence 1 → R → S → G → 1, whose second row is its pushout 0 → A → E → G → 1 along Λ(ρ) : R → A, and whose third row is the central extension 0 → A → U_{n+1}(A) ×_{Ū_{n+1}(A)} G → G → 1; we read the diagram from the top to the bottom. The equivalence class of the second row as an element of H^2(G, A) is trg(Λ(ρ)).
On the other hand, by Proposition 3.5 the equivalence class of the third central extension in H^2(G, A) is ⟨−ρ̄_12, …, −ρ̄_{n,n+1}⟩_ρ̄. In order to prove the lemma, we only need to prove that there exists a dashed arrow E → U_{n+1}(A) ×_{Ū_{n+1}(A)} G making the above diagram commute. But this follows from the universal properties of the pullback U_{n+1}(A) ×_{Ū_{n+1}(A)} G and the pushout E.

Now let A = F_p, with p a prime number. As shown for example in [Ef2,Gä,Mor,Vo], Massey products in C^•(G, F_p) are also intimately related to the p-Zassenhaus filtration G_(n), n = 1, 2, …, of G. Recall that this filtration is defined inductively by

G_(1) = G,   G_(n) = G_(⌈n/p⌉)^p ∏_{i+j=n} [G_(i), G_(j)]  for n ≥ 2,

where ⌈n/p⌉ is the least integer which is greater than or equal to n/p.

Lemma 3.7. Every homomorphism ρ : G → U_{n+1}(F_p) (respectively ρ̄ : G → Ū_{n+1}(F_p)) factors through G/G_(n+1).
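As an illustration of the inductive definition and of Lemma 3.7, the following brute-force Python sketch (our own; the function names are not from any library) computes the 2-Zassenhaus filtration of G = U_4(F_2). It outputs |G_(2)| = 8, |G_(3)| = 2 and |G_(4)| = 1, so any representation into U_4(F_2) indeed kills the fourth filtration step.

```python
import itertools, math
import numpy as np

I4 = np.eye(4, dtype=int)
key = lambda m: tuple(map(tuple, m))      # hashable form of a matrix
mul = lambda a, b: (a @ b) % 2            # product over F_2

def inv(a):
    """Inverse in a finite group: keep multiplying until we reach a^-1."""
    b = I4
    while not np.array_equal(mul(b, a), I4):
        b = mul(b, a)
    return b

def subgroup(gens):
    """Closure of the generating set under multiplication (BFS)."""
    elems, frontier = {key(I4)}, [I4]
    while frontier:
        nxt = []
        for a in frontier:
            for g in gens:
                p = mul(a, g)
                if key(p) not in elems:
                    elems.add(key(p)); nxt.append(p)
        frontier = nxt
    return [np.array(k) for k in elems]

# All 64 elements of G = U_4(F_2): 6 free entries above the diagonal.
pos = [(i, j) for i in range(4) for j in range(i + 1, 4)]
G = []
for bits in itertools.product([0, 1], repeat=6):
    m = I4.copy()
    for (i, j), b in zip(pos, bits):
        m[i, j] = b
    G.append(m)

Z = {1: G}                                # G_(1) = G
for n in (2, 3, 4):
    gens = [mul(g, g) for g in Z[math.ceil(n / 2)]]   # p-th powers, p = 2
    for i in range(1, n):                 # commutators [G_(i), G_(n-i)]
        for a in Z[i]:
            for b in Z[n - i]:
                gens.append(mul(mul(inv(a), inv(b)), mul(a, b)))
    Z[n] = subgroup(gens)
    print(f"|G_({n})| =", len(Z[n]))      # prints 8, then 2, then 1
```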
Lemma 3.8. The profinite group G has the vanishing n-fold Massey product property with respect to F p if and only if G/G (n+1) has this property also.
Proof. This follows from Corollary 3.3 and Lemma 3.7.
Proposition 3.9. Let N, N′ be closed normal subgroups of a free pro-p-group S such that N S_(n+1) = N′ S_(n+1). Then G = S/N has the vanishing n-fold Massey product property with respect to F_p if and only if G′ = S/N′ has the vanishing n-fold Massey product property with respect to F_p.
Proof. Because surjective homomorphisms take nth p-Zassenhaus filtration terms onto nth p-Zassenhaus filtration terms, using our assumption we have G/G_(n+1) = S/N S_(n+1) = S/N′ S_(n+1) = G′/G′_(n+1). Therefore our result follows from Lemma 3.8.
First examples
Example 4.1. If G is a free pro-p-group, then it has the vanishing n-fold Massey product property for every n ≥ 2, because H^2(G, F_p) = 0. Alternatively, this follows from the universal property of G and condition (ii) of Corollary 3.3.
Theorem 4.2. Let n ≥ 3 be an integer and p a prime number. Then every pro-p Demushkin group has the vanishing n-fold Massey product property with respect to F p .
The following proof is adapted from that of [HW,Lemma 3.5].
Proof. Let G be a pro-p Demushkin group. Let χ_1, …, χ_n be in H^1(G, F_p). Assume that the n-fold Massey product ⟨χ_1, …, χ_n⟩ is defined. If χ_1 = 0 then by [Fe, Lemma 6.2.4], which is valid in the profinite case as well, ⟨χ_1, …, χ_n⟩ contains 0. So we may assume that χ_1 ≠ 0. In this case, to show that ⟨χ_1, …, χ_n⟩ contains 0, we only need to show that the map χ_1 ∪ (−) : H^1(G, F_p) → H^2(G, F_p) is surjective, by Remark 2.2. From the definition of Demushkin groups, one has dim_{F_p} H^2(G, F_p) = 1. So it is enough to show that the map χ_1 ∪ (−) is non-zero. But this follows from the non-degeneracy of the cup product pairing H^1(G, F_p) × H^1(G, F_p) → H^2(G, F_p).

Remark 4.3. If F is a finite field extension of Q_p containing a primitive p-th root of unity, then G_F(p) is a pro-p Demushkin group. In [Sha], Shafarevich showed that if F is as above, but F does not contain any primitive p-th root of unity, then G_F(p) is a free pro-p-group. Demushkin groups, along with free pro-p-groups, abelian torsion-free pro-p-groups, and cyclic groups of order 2, play a dominant role in the current investigation of finitely generated subgroups of maximal pro-p quotients G_F(p) of absolute Galois groups. The elementary conjecture predicts that the groups above are all the "building blocks" for G_F(p). (See [Ef1,Mar,LLMS,JW].)

Proposition 4.4. Let G_1, G_2 be two pro-p-groups. Then the free pro-p product G_1 * G_2 has the vanishing n-fold Massey product property with respect to F_p if and only if both G_1 and G_2 have this property as well.
Indeed, in one direction, G_1 is a retract of G_1 * G_2, so if the free product has the vanishing n-fold Massey product property then so does G_1; similarly, G_2 has the vanishing n-fold Massey product property.
Let p > 2 be an odd prime and G a pro-p-group. Let χ be an element in H^1(G, F_p). In [Kra, Section 3], Kraines defined a restricted n-fold Massey product ⟨χ⟩^n. If a restricted n-fold Massey product ⟨χ⟩^n is defined then so is the n-fold Massey product ⟨χ, …, χ⟩, and the latter contains the former. Kraines showed that ⟨χ⟩^n = 0 for n = 2, …, p−1 and that ⟨χ⟩^p is defined ([Kra, Theorem 15]). In fact ⟨χ⟩^p = −βχ, where β : H^1(G, F_p) → H^2(G, F_p) is the Bockstein homomorphism, i.e., the connecting homomorphism induced by the exact sequence 0 → Z/p → Z/p^2 → Z/p → 0. Using Kraines' results mentioned above, we obtain the following result.
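To make the Bockstein explicit (a standard fact, recorded here for orientation): βχ measures the obstruction to lifting χ along the quotient map Z/p^2 → Z/p, i.e., βχ = 0 if and only if χ : G → Z/p lifts to a homomorphism G → Z/p^2. For example, if G = Z/pZ and χ is the identity, no lift exists, since every homomorphism Z/p → Z/p^2 has image in pZ/p^2 and hence reduces to 0 modulo p; thus βχ ≠ 0 and ⟨χ⟩^p = −βχ ≠ 0, in accordance with Example 4.6 below.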
Proposition 4.5. Let n be an integer with 2 < n ≤ p. Let F be any field containing a primitive p-th root of unity if char(F) ≠ p. Let G be the absolute Galois group G_F of F or its maximal pro-p quotient G_F(p). Then for any χ ∈ H^1(G, F_p), the n-fold Massey product ⟨χ, …, χ⟩ is defined and contains 0.
Proof. It is enough to consider the case G = G_F(p). If char F = p, then since G_F(p) is a free pro-p-group, ⟨χ, …, χ⟩ = {0}. So we may assume that char F ≠ p, and let us fix a primitive p-th root of unity ξ. Then χ = χ_a for some a ∈ F^*, where χ_a ∈ H^1(G_F, F_p) = H^1(G_F(p), F_p) is the character associated to a via the Kummer map F^* → H^1(G_F, F_p) = H^1(G_F(p), F_p).
Example 4.6. Let p be an odd prime number and G = Z/pZ. Let χ ∈ H^1(G, F_p) be the identity map. Then the p-fold Massey product ⟨χ, …, χ⟩ is defined but does not contain 0. Suppose, on the contrary, that the p-fold Massey product ⟨χ, …, χ⟩ contains 0. Then by Theorem 3.1 there exists a representation ρ : G → U_{p+1}(F_p) with ρ_{i,i+1} = −χ for i = 1, …, p. Let B be the image under ρ of a generator of G. Then all entries of B at positions (i, i+1), i = 1, …, p, are non-zero. Writing B = 1 + M with M strictly upper-triangular, we get B^p = 1 + M^p in characteristic p, and the (1, p+1)-entry of M^p is the product of the superdiagonal entries of M, hence non-zero. Thus B^p ≠ 1, and this contradicts the fact that B is the image of an element of order p.
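A quick numerical sanity check of the last step, for p = 3 (the script and variable names are ours, purely for illustration):

```python
import numpy as np

p = 3
B = np.eye(p + 1, dtype=int)
for i in range(p):
    B[i, i + 1] = 1               # all superdiagonal entries equal to 1

Bp = np.linalg.matrix_power(B, p) % p
assert Bp[0, p] == 1              # the (1, p+1)-entry of B^p is non-zero
assert not np.array_equal(Bp, np.eye(p + 1, dtype=int))   # so B^p != 1
print("B^p != 1 in U_{p+1}(F_p), as claimed")
```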
Remark 4.7. Proposition 4.5 and Example 4.6 immediately give an explanation of a part of the well-known Artin-Schreier theorem [AS1,AS2] (respectively, Becker's theorem [Be]), which says that the absolute Galois group G_F (respectively, its maximal pro-p-quotient G_F(p)) of any field F cannot have an element of odd prime order. (Note also that if G_F ≃ Z/pZ then F contains a primitive p-th root of unity.) In [MT2], using automatic realizations of Galois groups, we shall prove a more general result than Proposition 4.5, in which the condition n ≤ p can be omitted provided that if p = 2 then −1 is a square in F. One can then use this generalized result to deduce the full Artin-Schreier theorem (respectively, Becker's theorem). (See [MT2].)
Splitting variety and vanishing property
Let F be a field of characteristic ≠ 2. Let G = G_F(2) be the maximal pro-2 Galois group of F. Let a, b, c be elements in F^× and let χ_a, χ_b, χ_c ∈ H^1(G, F_2) be the characters corresponding to a, b, c via the Kummer map F^× → H^1(G, F_2). Let X_{a,b,c} be the splitting variety in G_m × A^4 defined in [HW].

First proof of Theorem 1.2. The first proof was inspired by the second proof in Section 6.
If a (or b, or c) is in (F^*)^2, then the corresponding character χ_a (or χ_b, or χ_c) is the trivial character, and hence the Massey product ⟨χ_a, χ_b, χ_c⟩ contains 0 by [Fe, Lemma 6.2.4]. So we may assume that a, b and c are not in (F^*)^2. The following well-known fact will be used frequently: for u, v ∈ F^*, the cup product χ_u ∪ χ_v vanishes if and only if v is a norm from the extension F(√u)/F (see [HW, Introduction, p. 4], [Se2, Chapter XIV], or [Sri, Lemma 8.4]). There are two cases to consider.
Field theory and vanishing property
In this section we present another approach to prove Theorem 1.2 using Galois theory and [GLMS].
Notation: For a, b in a field F of characteristic ≠ 2, (a, b)_F is the quaternion algebra generated by a and b. For x, y, z in a group, [x, y] = x^{−1}y^{−1}xy and [x, y, z] = [[x, y], z].
Second proof of Theorem 1.2. As in the first proof, we may assume that a, b and c are not in (F * ) 2 .
Assume that the triple Massey product ⟨χ_b, χ_a, χ_c⟩ is defined; we show that it contains 0. (Note that the order in the triple Massey product here is different from the one in the first proof, because we want to be consistent with the notation in [GLMS].)

Case 1: a ≡ b ≡ c mod (F^*)^2. Then ⟨χ_b, χ_a, χ_c⟩ = ⟨χ_b, χ_b, χ_b⟩, and since this product is defined we have χ_b ∪ χ_b = 0, so there exists a Galois extension L/F containing F(√b) which is cyclic of order 4. Its Galois group is generated by an element σ with σ(√b) = −√b. One has a homomorphism ϕ : Gal(L/F) → U_4(F_2) by letting ϕ(σ) be the matrix with 1 at each entry (i, i+1), i = 1, 2, 3 (an element of order 4 in U_4(F_2)). Let ρ be the composite homomorphism ρ : Gal_F → Gal(L/F) → U_4(F_2). Then one can check that ρ_{i,i+1} = χ_b for all i = 1, 2, 3. Hence by Theorem 3.1, the triple Massey product ⟨χ_b, χ_b, χ_b⟩ contains 0.
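The matrix written out in Case 1 above is our explicit choice; that it really has order 4 in U_4(F_2), so that ϕ is well defined on a cyclic group of order 4, can be checked directly:

```python
import numpy as np

I4 = np.eye(4, dtype=int)
u = I4.copy()
for i in range(3):
    u[i, i + 1] = 1                       # ones on the superdiagonal

powers = [I4]
for _ in range(4):
    powers.append((powers[-1] @ u) % 2)   # successive powers of u over F_2

assert not np.array_equal(powers[2], I4)  # u^2 != 1
assert np.array_equal(powers[4], I4)      # u^4 == 1
print("u has order 4 in U_4(F_2)")
```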
Case 2: a ≡ b mod (F^*)^2 and a ≢ c mod (F^*)^2. This case can be treated in a similar way to Case 3 below.
Case 3: a ≢ b mod (F^*)^2 and c ≡ a mod (F^*)^2. Then ⟨χ_b, χ_a, χ_c⟩ = ⟨χ_b, χ_a, χ_a⟩. Since (a, a)_F = (a, b)_F = 0 in the Brauer group Br(F), by the construction in [GLMS, Section 3] we have a Galois extension L/F which contains F(√a, √b), with Galois group G_1 described below. Also there exist σ_a, σ_b ∈ Gal(L/F) such that σ_a(√a) = −√a, σ_a(√b) = √b, and σ_b(√a) = √a, σ_b(√b) = −√b. Let G_1 be the group generated by two symbols x, y subject to the relations: x^4 = y^2 = 1 = (x, y)^2 = (x, y, x)^2, and (x, y, x) commutes with x and y. Then it is shown in [GLMS] that σ_a, σ_b generate Gal(L/F) and that Gal(L/F) is isomorphic to G_1 by letting σ_a ↦ x and σ_b ↦ y. One can choose matrices u, v ∈ U_4(F_2), with u of order 4 and v of order 2, satisfying the relations of G_1; in particular [[u, v], u] is central and of order 2 in U_4(F_2). Hence one has a homomorphism ϕ : Gal(L/F) → U_4(F_2) by letting σ_a ↦ u, σ_b ↦ v. (The homomorphism ϕ is in fact injective, so that ϕ induces an isomorphism between Gal(L/F) and the subgroup generated by u, v. This follows from the fact that Z(G_1) = Z/2Z is the smallest non-trivial normal subgroup of G_1 and [[u, v], u] ≠ 1.) Let ρ be the composite homomorphism ρ : Gal_F → Gal(L/F) → U_4(F_2). Then one can check that ρ_12 = χ_b and ρ_23 = ρ_34 = χ_a.
Hence by Theorem 3.1, the triple Massey product ⟨χ_b, χ_a, χ_a⟩ contains 0.
Case 4: a ≢ b mod (F^*)^2 and c ≡ b mod (F^*)^2. Then ⟨χ_b, χ_a, χ_c⟩ = ⟨χ_b, χ_a, χ_b⟩, and one argues as in Case 3, with the roles of a and b suitably interchanged. Hence by Theorem 3.1, the triple Massey product ⟨χ_b, χ_a, χ_b⟩ contains 0.
Case 5: a ≢ b mod (F^*)^2 and c ≡ ab mod (F^*)^2. Then ⟨χ_b, χ_a, χ_c⟩ = ⟨χ_b, χ_a, χ_ab⟩. By assumption, we have (a, b)_F = (a, ab)_F = 0; hence (a, b)_F = (a, a)_F = 0. As in Case 3, we can construct a Galois extension L/F with Galois group isomorphic to the group G_1. Let ρ be the composite homomorphism ρ : Gal_F → Gal(L/F) → U_4(F_2), for a suitable choice of the second map. Then one can check that ρ_12 = χ_b, ρ_23 = χ_a and ρ_34 = χ_ab. Hence by Theorem 3.1, the triple Massey product ⟨χ_b, χ_a, χ_ab⟩ contains 0.
Case 6: a, b, c are F_2-independent in F^*/(F^*)^2. Because ⟨χ_b, χ_a, χ_c⟩ is defined, (b, a)_F = (a, c)_F = 0. As in [GLMS], we have the following construction. There exists β ∈ F(√a) whose norm to F is b, up to squares. Then [Wad, Lemma 2.14] implies that there exist δ ∈ E and d ∈ F^* with the required properties. It is shown in [GLMS, Proof of Proposition 4.6] that there exist automorphisms σ_a, σ_b, σ_c ∈ Gal(L/F) such that σ_a fixes √b, √c and σ_a(√a) = −√a, and similarly for σ_b, σ_c.
By direct computation one checks that X, Y, Z satisfy the two conditions (1)-(2) of [GLMS, Definition 4.4]; in particular the relevant iterated commutator is in the center and of order dividing 2. Hence there is a natural homomorphism ϕ from Gal(L/F) ≃ G_2 to U_4(F_2), where G_2 is defined in [GLMS, Definition 4.4] as the group generated by x, y, z satisfying the two conditions (1)-(2). As X, Y, Z generate U_4(F_2), ϕ is surjective and hence an isomorphism, because |Gal(L/F)| = |U_4(F_2)| = 64. Also from [GLMS, Proof of Proposition 4.7] one deduces that ϕ maps σ_a to X, σ_b to Y and σ_c to Z.
(Note that χ_a is the composition Gal_F → Gal(L/F) → Gal(F(√a)/F) ≃ F_2, where the last map sends σ_a|_{F(√a)} to 1, and similarly for χ_b, χ_c. Since all the maps ρ, χ_a, χ_b, χ_c factor through Gal(L/F), it is enough to check the required identities on σ_a, σ_b, σ_c.) Hence by Theorem 3.1, the triple Massey product ⟨χ_b, χ_a, χ_c⟩ contains 0.
Remark 6.1. Because the Galois extensions L/F with Galois group isomorphic to U_4(F_2) play a fundamental role in the theory of triple Massey products, and for their use in Galois theory, we shall describe the structure of these extensions. For further related results see [GLMS], where U_4(F_2) is called G_2. Let X, Y, Z be the matrices defined as in Case 6 of the previous proof, and consider the associated subgroups W of Gal(L/F) and M of square classes. Since W is a 2-elementary group, Gal(L/E) = W, and by Kummer theory one obtains the corresponding group M of square classes; one sets the corresponding values for σ_c. Now, since M is 4-dimensional, we have a ∈ (E^*)^2. Hence we have shown one implication; one can see that the converse also holds. Namely, if L/F is a normal closure of E(√δ)/F, where E/F and δ satisfy the two conditions (1)-(2) above, then L/F is a U_4(F_2)-Galois extension.
Theorem 6.2. Let G be the absolute Galois group G_F of a field F or its maximal pro-2 quotient G_F(2). Then G has the vanishing triple Massey product property with respect to F_2.
Proof. It is enough to consider the case G = G F (2) by Corollary 3.4.
If F is of characteristic 2, then G is free and hence G has the vanishing triple Massey product property.
If F is of characteristic ≠ 2, then G has the vanishing triple Massey product property by Theorem 1.2.
Groups without the triple vanishing property
In this section, we construct pro-p-groups G which do not have the vanishing triple Massey product property. In particular, when p = 2, they are not realizable as G F (2) for any field F .
First we verify a computational fact (Lemma 7.1) and construct a basic example (Example 7.2): the quotient of the free pro-p-group S on generators x_1, …, x_5 by the relation r = [x_4, x_5][[x_2, x_3], x_1] does not have the vanishing triple Massey product property.
Example 7.3. Let G be as in the previous example with p = 2. Then by Theorem 1.2 (or more precisely, Theorem 6.2), G is not realizable as G_F(2) for any field F. For this statement we will give another proof, using [GLMS], which avoids Theorem 1.2 and the Massey product formalism.
Consider a field extension L/F attached to the triple a_2, a_3, a_1 (see [GLMS, Proposition 4.6]). Let σ_{a_i} be constructed as in [GLMS, Proof of Proposition 4.6], with a, b, c there replaced by a_3, a_1, a_2, respectively. Then [[σ_{a_2}, σ_{a_3}], σ_{a_1}] is a non-trivial element in Z(Gal(L/F)) ≃ Z/2. For each i, σ_i and σ_{a_i} act in the same way on K = F(√a_1, √a_2, √a_3). Therefore σ_i|_L = σ_{a_i} γ_i for some γ_i ∈ Φ(Gal(L/F)) (here Φ(Gal(L/F)) is the Frattini subgroup of Gal(L/F)).
Remark 7.4. As noted in [CEM,EM2], one can use [CEM, Proposition 9.1] (or [EM2, Corollary 6.3]) to show that various pro-2-groups do not occur as G_F(2) for any field F of characteristic ≠ 2. For the convenience of the reader, we recall this result for pro-2-groups below.
To show that a pro-2-group G_1 cannot be isomorphic to G_F(2) for a field F of characteristic ≠ 2, we choose a group G_2 such that the two conditions of Proposition 7.5 are satisfied and G_2 does occur as G_L(2) for some field L of characteristic ≠ 2, and we are done. Now we consider the pro-2-group G =: G_1 defined as in the previous example; i.e., G is the quotient of the free pro-2-group S on generators x_1, …, x_5 by the relation r = [x_4, x_5][[x_2, x_3], x_1]. One might then wonder whether we can use Proposition 7.5 to show that G = G_1 is not realizable as G_F(2) for some field F of characteristic ≠ 2. One very natural candidate for the group G_2 is the following: G_2 is the quotient of the free pro-2-group S by the relation [x_4, x_5], so that G_2 is the free pro-2 product of the free pro-2-group on the 3 generators x_1, x_2, x_3 with the group Z_2 × Z_2. And it is known, see [JW, Theorem 3.6], that G_2 is isomorphic to G_F(2) for some field F of characteristic ≠ 2. However, H^*(G_1, F_2) ≃ H^*(G_2, F_2). In fact, write H^1(G_i, F_2) = U_i ⊕ V_i, where for each i = 1, 2, U_i is spanned by the images of χ_1, χ_2, χ_3, χ_4 in H^1(G_i, F_2) and V_i is spanned by the image of χ_5 in H^1(G_i, F_2). Then, using the usual transgression-relation pairing, one computes the cup products in both cases and sees that the G_i are mild groups (see for example [Fo, Gä, LM]). In particular, cd G_1 = cd G_2 = 2 and H^*(G_1, F_2) ≅ H^*(G_2, F_2). Therefore we cannot easily apply Proposition 7.5 to this example. Our discussion above shows that our techniques provide genuinely new cases of pro-2-groups which cannot occur as G_F(2) over any field F. Theorems 7.8 and 7.12 below exhibit large families of pro-2-groups which are not of the form G_F(2).
It is easy to provide examples as above with more relations. For example, let G = S/R, where S is the free pro-2-group on generators x_1, x_2, ..., x_7 and R is its normal subgroup generated by r_1 = [x_4, x_5][[x_2, x_3], x_1] and r_2 = [x_6, x_7]. Then the proof above showing that G ≇ G_F(2) for any field F goes through word-for-word, the only exception being the choice of the candidate example of F.

Let G be a pro-p-group and let 1 → R → S → G → 1 be a minimal presentation of G, i.e., S is a free pro-p-group and R ⊆ S_(2). Then the inflation map inf: H^1(G, F_p) → H^1(S, F_p) is an isomorphism, by which we identify both groups. Since S is free, we have H^2(S, F_p) = 0, and from the 5-term exact sequence we obtain that the transgression map trg: H^1(R, F_p)^G → H^2(G, F_p) is an isomorphism. Therefore any element r ∈ R gives rise to a map tr_r: H^2(G, F_p) → F_p, defined by α ↦ trg^{-1}(α)(r), which is called the trace map with respect to r.
Let (x_i)_{i∈I} be a basis of S, where I is a well-ordered set, and let χ_i, i ∈ I, be the dual basis of H^1(S, F_p). Let r be any element in S_(2). Then r may be uniquely written, modulo S_(4), in the form (*): a product of powers x_i^{pa_i}, of commutators [x_i, x_j]^{b_ij} with i < j, and of triple commutators [[x_i, x_j], x_k]^{c_ijk}, times an element r′ ∈ S_(4), where a_i, b_ij, c_ijk ∈ {0, 1, ..., p − 1} [Vo, Prop. 1.3.2 and Prop. 1.3.3]. For convenience we call (*) the canonical decomposition modulo S_(4) of r (with respect to the basis (x_i)), and we also set u_ij = b_ij if i < j, and u_ij = b_ji if j < i.
Lemma 7.6. Let the notation be as above. Assume that R = ⟨r⟩ and that the triple Massey product ⟨−χ_k, −χ_i, −χ_j⟩ is defined for some distinct i, j, k with i, j < k. Then there exists an α ∈ ⟨−χ_k, −χ_i, −χ_j⟩, which can be given explicitly, such that tr_r(α) is expressed in terms of the coefficients in (*). Moreover, since the Massey product is defined, by [Vo, Proposition 1.3.2] (see also [NSW, Proposition 3.9.13]) we have u_ki = u_ij = 0.
Proposition 7.7. In (*) suppose that there exist distinct i, j, k such that i < j, k < j, u_ij = u_kj = u_ki = 0, and u_kl = u_jl = 0 for all l ≠ i, j, k. If p = 2 we assume further that a_k = a_j = 0. Let G = S/⟨r⟩ and let χ_1, ..., χ_n be the F_p-basis of H^1(S, F_p) = H^1(G, F_p) dual to x_1, ..., x_n. Then ⟨−χ_k, −χ_i, −χ_j⟩ is uniquely defined. In particular, if we assume further that c_ijk ≠ 0, then ⟨−χ_k, −χ_i, −χ_j⟩ does not vanish.
The following theorem generalizes Example 7.2.
Theorem 7.8. Let R be a set of elements in S_(2). Assume that there exists an element r in R and distinct indices i, j, k with i < j, k < j such that: (1) in (*), the canonical decomposition modulo S_(4) of r, we have u_ij = u_kj = u_ki = 0 and u_kl = u_jl = 0 for all l ≠ i, j, k, and c_ijk ≠ 0 (if p = 2 we assume further that a_k = a_j = 0); and (2) for every s ∈ R which is different from r, the factors [x_k, x_i] and [x_i, x_j] do not occur in the canonical decomposition modulo S_(4) of s. Then G = S/⟨R⟩ does not have the vanishing triple Massey product property.
Proof. Let G′ = S/⟨r⟩ and let f: G′ = S/⟨r⟩ → G = S/⟨R⟩ be the canonical map. We shall identify the three groups H^1(S, F_p), H^1(G, F_p) and H^1(G′, F_p) via inflation maps. We also use the subscript ⟨·,·,·⟩_G (respectively, ⟨·,·,·⟩_{G′}) to denote Massey products in the cohomology of G (respectively, G′).
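For instance, here is a worked check of Theorem 7.8 against the one-relator example above (this illustration relies on the reading of the canonical decomposition (*) adopted in this presentation). Take p = 2, S free on x_1, ..., x_5, and R = {r} with r = [x_4, x_5][[x_2, x_3], x_1]. The only non-zero coefficients of r in (*) are
\[
u_{45} = b_{45} = 1 \quad\text{and}\quad c_{231} = 1,
\]
and all a_i = 0. Choosing (i, j, k) = (2, 3, 1), we have i < j and k < j, u_{23} = u_{13} = u_{12} = 0, and u_{1l} = u_{3l} = 0 for all l ≠ 1, 2, 3 (the only non-zero u-coefficient, u_{45}, involves neither 1 nor 3), while c_{231} ≠ 0 and a_1 = a_3 = 0. Condition (2) is vacuous since R = {r}. Hence Theorem 7.8 applies, and G = S/⟨r⟩ does not have the vanishing triple Massey product property, in agreement with the discussion preceding Remark 7.4.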
Lemma 7.10. Let the notation be as in Lemma 7.6, and assume that R = ⟨r⟩. Assume moreover that the triple Massey product ⟨−χ_i, −χ_j, −χ_j⟩ is defined for some i < j.
Theorem 7.12. Let R be a set of elements in S_(2). Assume that there exists an element r in R and distinct indices i, j with i < j such that: (1) in (*), u_ij = 0 and u_il = u_jl = 0 for all l ≠ i, j, and c_iji ≠ 0 (respectively, c_ijj ≠ 0); if p = 2 we assume further that a_i = a_j = 0; and (2) for every s ∈ R which is different from r, the factor [x_i, x_j] does not occur in the canonical decomposition modulo S_(4) of s, and if p = 2 we further assume that x_i^2 (respectively, x_j^2) does not occur in the canonical decomposition modulo S_(4) of s. Then G = S/⟨R⟩ does not have the vanishing triple Massey product property.
Proof. Let G′ = S/⟨r⟩ and let f: G′ = S/⟨r⟩ → G = S/⟨R⟩ be the canonical map. We shall identify the three groups H^1(S, F_p), H^1(G, F_p) and H^1(G′, F_p) via inflation maps. We also use the subscript ⟨·,·,·⟩_G (respectively, ⟨·,·,·⟩_{G′}) to denote Massey products in the cohomology of G (respectively, G′).
We only treat the case c_iji ≠ 0; the other case is treated similarly. By [Vo, Proposition 1.3.2] (see also [NSW, Proposition 3.9.13]) and by assumption, we have tr_s(χ_j ∪ χ_i) = tr_s(χ_i ∪ χ_i) = 0 for all s ∈ R.
By Proposition 7.11 applied to the group G′, ⟨−χ_j, −χ_i, −χ_i⟩_{G′} does not vanish. Therefore ⟨−χ_j, −χ_i, −χ_i⟩_G does not vanish, and we are done.
Further directions
Let p be a prime number and let F be a field of characteristic ≠ p which contains a primitive p-th root of unity. Let G = G_F(p) be the maximal pro-p quotient of the absolute Galois group G_F of F. Denote by G^(i), i = 1, 2, ..., the p-Zassenhaus filtration of G. Let F^(i) be the fixed field F(p)^{G^(i)} of the group G^(i), where F(p) is the maximal p-extension of F.
When p = 2, F^(3) is the compositum of all C_2-, C_4-, and D_4-extensions K/F inside F(2). This fact was proved by Villegas [Vi] and in [MS2, Corollary 2.18] (see also [EM1, Corollary 11.3] for a more general result). Inspired by this beautiful fact, and by the second proof of Theorem 1.2, we would like to propose the following conjecture.
Let C_n be the cyclic group of order n and D_4 the dihedral group of order 8, and let G_1 and G_2 be the groups defined as in [GLMS] (see Cases 3 and 5 of the second proof of Theorem 1.2 for the definition). Explicitly, G_2 ≃ U_4(F_2), and G_1 is isomorphic to the subgroup of U_4(F_2) consisting of upper unitriangular 4 × 4 matrices (a_ij) with a_23 = a_34.
We define the field F_ω as the compositum of all C_2-, C_4-, D_4-, G_1-, and G_2-extensions K/F inside F(2). Then F_ω ⊆ F^(4), and the conjecture says that in fact F_ω = F^(4).
Definition 8.2. Let G be a pro-p-group and let n ≥ 1 be an integer. We say that G has the kernel n-unipotent property if G^(n) = ⋂ ker(ρ: G → U_n(F_p)), where ρ runs over the set of all representations (continuous homomorphisms) G → U_n(F_p).
It is easy to see that for n = 1, 2, every pro-p-group G has the kernel n-unipotent property. It was shown that for G = G_F(p), where F is a field containing a primitive p-th root of unity, G has the kernel 3-unipotent property. (See [MS2, Vi, EM1] for the case p = 2 and [EM2, Example 9.5(1)] for the case p > 2.) For any fixed integer n ≥ 3, in [MT2] we also give an example of a torsion-free pro-p-group G such that G does not have the kernel n-unipotent property.
The following conjecture is a generalization of the above conjecture.
Conjecture 8.3 (Kernel n-Unipotent Conjecture). Let F be a field containing a primitive p-th root of unity and let G = G_F(p). Let n ≥ 3 be an integer. Then G has the kernel n-unipotent property.
In a subsequent paper [MT1], we show that every pro-p Demushkin group has the kernel 4-unipotent property. In [MT2], we also show that pro-p Demushkin groups of rank 2 have the kernel n-unipotent property for all n ≥ 4. It is shown in [Ef2, Theorem A] that every free pro-p-group has the kernel n-unipotent property for all n ≥ 3. (In [MT2] we provide an alternative direct short proof.) The results of this paper are also relevant in determining strong automatic realizations of canonical quotients of absolute Galois groups. (See [MST].) Finally, it is very interesting to extend the main theorems in this paper also to the case p > 2. (See [GMTT].)
"year": 2013,
"sha1": "f8acffe36853fb2c591aff17242421e863a3021f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1307.6624.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f8acffe36853fb2c591aff17242421e863a3021f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Cervical Cancer Screening Programs in Europe: The Transition Towards HPV Vaccination and Population-Based HPV Testing
Cervical cancer is the fourth most frequently occurring cancer in women around the world and can affect them during their reproductive years. Since the development of the Papanicolaou (Pap) test, screening has been essential in identifying cervical cancer at a treatable stage. With the identification of the human papillomavirus (HPV) as the causative agent of essentially all cervical cancer cases, HPV molecular screening tests and HPV vaccines for primary prevention against the virus have been developed. Accordingly, comparative studies were designed to assess the performance of cervical cancer screening methods in order to devise the best screening strategy possible. This review critically assesses the current cervical cancer screening methods as well as the implementation of HPV vaccination in Europe. The most recent European Guidelines and recommendations for organized population-based programs with HPV testing as the primary screening method are also presented. Lastly, the current landscape of cervical cancer screening programs is assessed for both European Union member states and some associated countries, in regard to the transition towards population-based screening programs with primary HPV testing.
Introduction
Cancer of the cervix uteri, more commonly known as cervical cancer, is an important public health concern. It was reported as the fourth most frequently occurring cancer in women, with an estimated worldwide incidence of 528,000 cases and 266,000 deaths in 2012 [1]. In Europe, an estimated 58,373 women are diagnosed annually with cervical cancer, and 24,404 of those die from this illness [2].
The incidence and mortality of cervical cancer, however, have been declining in developed countries since the invention of the Pap test in the 1940s, which enabled the prompt identification of morphological changes in the cervical epithelium [3]. The use of the Pap test in national screening programs dates back to the 1960s and 1970s [4], and it is still a cornerstone of the majority of current programs. Moreover, the International Agency for Research on Cancer (IARC) determined that the incidence of invasive cervical cancer can be reduced by at least 80% with the implementation of cervical cancer screening programs based on the Pap test performed every three to five years for women aged 35 to 64 [5][6][7][8][9].
Cervical cancer screening was revolutionized in the early 1980s by the discovery of human papillomaviruses (HPV) as the single causative agents of the disease. In 1983, HPV type 16 (HPV16) was first identified in DNA from a biopsy sample of invasive cancer of the cervix, and in the following years, HPVs were reported as the main causative agents of cervical cancer [10][11][12][13]. HPVs are small non-enveloped double-stranded DNA viruses with 221 officially characterized types, as of June 2018 [14]. These viruses have a genome of 8 kb that encodes early regulatory proteins (E1, E2, E5, E6, and E7) and late structural proteins (L1 and L2). HPVs are the most common sexually transmitted viruses [15][16][17]. According to estimates, approximately 80% of sexually active women will acquire the infection in their lifetime, and in the majority of cases (>90%), it will be a transient, asymptomatic infection cleared by the immune system in six months to two years [17][18][19]. Only after a persistent infection can HPV lead to low- and/or high-grade cervical intraepithelial neoplasia (CIN), which may eventually evolve to cervical cancer [17,20,21]. However, not all HPV types have been linked to cervical cancer. At least 12 types of HPV are epidemiologically classified as oncogenic, high-risk (hr) types (HPV16/18/31/35/39/45/51/52/56/58/66/68), which cause more than 97% of cervical cancer cases, while low-risk (lr) types (HPV6/11/40/42/43/44/54/61/72) are linked to anogenital warts and laryngeal papillomas [16,[22][23][24]. The aforementioned HPV16 and HPV18 are the most commonly occurring hrHPV types, and cause approximately 70% of cervical cancers (~50% HPV16, 20% HPV18) [17,25,26]. The elucidation of the etiological role of HPV has altered the landscape of cervical cancer screening in more ways than one. The fact that cervical cancer is primarily attributable to a single infectious agent enabled the development of new, more sensitive HPV-based screening tests for secondary prevention of cervical cancer and three vaccines against HPV, which are utilized for primary prevention.
This review focuses on the available tests and strategies currently employed for the screening and prevention of HPV infection and cervical cancer. Furthermore, in accordance with recommendations specified in the recent European Guidelines, important aspects of screening programs necessary for the success and efficiency of such systems are highlighted. Finally, the current landscape of cervical cancer screening programs of member states of the European Union (E.U.) and some associated countries is reviewed.
Conventional Pap Test and Its Alternatives
Testing to identify anomalies in the cervix can be dated as far back as the early 19th century, when anatomists and pathologists of the time observed and studied the cytological changes derived from cervical and other genital neoplasms, as well as the woman's menstrual cycle [27]. In the mid-1800s, the Irish physician Walter Hayle Walsh was the first to show that cancerous cells could be identified by microscopy [28,29]. In the early 20th century (1927), the Romanian physician Aurel Babeş detected the presence of cervical cancer by collecting cells from a woman's cervix using a platinum loop and then observing them under a microscope. This process was the predecessor to what is known today as the Pap test [29].
With the invention of the Pap test in the 1940s, by George N. Papanicolaou and H.F. Traut, cervical cytology gained a robust and low-complexity method of screening for cervical cancer [30]. This process entails the exfoliation of cells from the cervix, which are then fixed, viewed under a microscope, and are subsequently morphologically interpreted. The staining method developed for this test offered a polychromatic definition of the nucleus and the features of the cytoplasm. The Pap test allows the assessment of nuclear chromatin alterations to discern whether necrosis occurred, the observation of the degree of cellular degeneration, and the distinction of the maturity of squamous epithelial cells [28,30,31].
Despite its widespread use as the primary cervical cancer screening method, the Pap test has some important limitations. The staining procedure of the conventional Pap test requires a considerable amount of time (20-30 min) and consumables [32]. The smearing process of the Pap test is also characterized by poor reproducibility and is vulnerable to obscuration by blood and mucus, imperfect fixation, and a non-uniform distribution of cells, thus causing errors in the detection and interpretation of the results. These issues can be attributed partly to the quality of sampling and can explain the broad range of sensitivity (30-87%) reported for the Pap test [33,34].
Consequently, to address the shortcomings of the Pap test, a number of derivative methods were developed, such as the UltraFast staining technique, the short-duration Papanicolaou stain, the REAP stain and the Enviro-Pap method [32,[35][36][37][38][39]. These modifications significantly improved upon the conventional Pap smear performance in terms of speed and cost, and are also more environmentally friendly. The guiding principle of these enhancements was to improve at least one aspect of the smear without compromising the quality of the results [32,[35][36][37][38].
Liquid-Based Cytology
Another alternative method developed to address the shortcomings of the conventional Pap smear is liquid-based cytology (LBC). The ThinPrep® Pap test (Hologic, Inc., Marlborough, MA) was the first LBC technique to be approved by the United States Food and Drug Administration (FDA) [34]. This method entails the collection of cells from the cervix, which are then transferred to a vial containing preservative solution instead of being fixed on a slide, thus enabling uniform distribution of the collected clinical material. Since only a portion of the sample is used for cytology, the rest can be employed for further testing, including HPV testing [33,40]. Presently, ThinPrep and SurePath (Becton Dickinson) are the two most frequently used LBC techniques. Several studies have shown significantly reduced numbers of unsatisfactory smears that would require repeat testing when LBC is used, and some studies have also shown higher CIN detection rates compared to the conventional Pap test [34,41,42]. Conversely, other studies have questioned the advantages of LBC over the conventional Pap test and reported sensitivity less than or equal to that of the conventional test [41,[43][44][45][46].
Visual Inspection with Acetic Acid and Visual Inspection with Lugol's Iodine
Visual inspection with acetic acid (VIA) and visual inspection with Lugol's iodine (VILI) are two inexpensive screening methods frequently used in low-resource settings, with VIA being the more widely used. These techniques are based on the fact that upon the application of acetic acid or Lugol's iodine directly to the cervix, precancerous cervical lesions become discernible to the naked eye by both clinicians and nonclinicians. Although not perfect, both VIA and VILI have been reported to have acceptable specificity and sensitivity in low-resource settings [47][48][49].
Cervical cytology has undoubtedly played an important role in cervical cancer screening and continues to do so. However, it has inherited the limitation of being a morphological method requiring subjective interpretation by well-trained cytologists [11]. Despite continuous efforts to improve the performance of cervical cytology, its sensitivity is not optimal, and the method still produces high numbers of borderline results, such as atypical squamous cells of undetermined significance (ASCUS, or ASC-US after the 2001 Bethesda Workshop), which require further testing and tight follow-up and raise constant uncertainty about false-negative results, leading to over-referral to colposcopy and overtreatment [11,[50][51][52].
Advantages and Limitations
In contrast to screening methods based on cytology, HPV testing does not rely on morphological interpretation and is based on the detection of HPV DNA, HPV mRNA or other viral markers. In the last two decades, HPV testing has become, in several countries, an invaluable part of clinical guidelines for cervical carcinoma screening, triage and follow-up after treatment [53]. As a general rule, HPV testing must be performed in appropriate, evidence-based contexts to maximize the benefit and reduce over-diagnosis. HPV testing for the identification of women at higher risk of developing cervical cancer differs significantly from molecular testing for other medically relevant viruses, in that analytic sensitivity for the detection of HPV is not the prime driver of test performance. Unfortunately, the great majority of HPV tests currently on the market have excessively high analytic sensitivity. Consequently, when they are used for agreed clinical indications they can yield a large number of clinically insignificant positives, resulting in more false referrals for colposcopy and biopsy, decreased correlation with the histological presence of disease, unnecessary treatment of healthy women and a consequent distrust of a positive result by the treating physician. Another important peculiarity of HPV testing for the identification of women at higher risk for the development of cervical cancer is the need for a balanced and deliberately restricted range of HPV types covered by the test.
Clinical Validation of HPV Tests
Taking into consideration these characteristics of HPV testing, when designing an HPV test to be used for agreed clinical indications, the gain in ultimate sensitivity for the detection of precancerous lesions obtained by including HPV types that are rarely associated with cervical cancer must be carefully weighed against the potentially dramatic loss of clinical specificity when a particular HPV type (e.g., HPV53 or HPV66) is frequent in low-grade disease or in women without disease. In addition, it should always be taken into consideration that absolute reassurance following a negative cervical cancer screening test result is not achievable at any analytic sensitivity, because of a myriad of factors that are independent of the actual screening test performance, including operator error and poor cervical sampling. Thus, a cervical cancer screening program should adopt an HPV test for use as a screening tool only if it has been validated by demonstrating reproducible and consistently high sensitivity for CIN2+ and CIN3+ lesions, as well as minimal detection of clinically irrelevant, transient HPV infections [54,55]. There is a consensus in the HPV community that HPV tests, whether commercial or in-house, that have not been clinically validated should not be used in clinical practice. HPV testing should be performed only on samples processed and analyzed in qualified laboratories, validated by authorized accreditation bodies and in compliance with international standards [54,55]. Laboratories involved in HPV-based screening should perform a minimum of 10,000 HPV tests per year [54,55].
Several comprehensive inventories of commercially available HPV tests were published in the last decade [56][57][58]. As of July 2018, at least 250 distinct commercial tests for detection of alpha HPVs and at least 230 variants of the original tests are available on the global market. Unfortunately, only a subset of commercial HPV tests has documented clinical performance for agreed indications for HPV testing in current clinical practice. For more than half of the HPV tests on the global market, not a single publication in the peer-reviewed literature can be identified [58]. In contrast to commercial kits for "classical" molecular microbiology targets, the great majority of commercial HPV tests currently on the market do not contain a sample extraction step, and a number of them do not even mention a recommended nucleic acid extraction methodology in their manufacturer's instructions. Only a minority of HPV tests on the market have internal controls [58].
As a multitude of hrHPV tests are available, regular evaluation updates are essential to ensure their suitability for primary cervical cancer screening. A recent systematic review [59] listed the hrHPV DNA tests that were either validated through randomized trials showing a very low incidence of cervical cancer after a negative hrHPV DNA test [53,60] or fulfilled consensus-based international equivalence criteria based on cross-sectional data [8]. The international equivalence criteria are based on the non-inferior cross-sectional accuracy of a new HPV test versus one of the two benchmark comparator tests (GP5+/6+ PCR-EIA and/or the Qiagen Hybrid Capture 2 HPV DNA Test) that have been validated in clinical trials and detect the same molecular targets, i.e., DNA of hrHPV types [61]. To fulfill the necessary criteria, the candidate test should demonstrate a relative sensitivity and specificity to detect CIN2+, compared to the standard comparator tests, of more than 0.90 and 0.98, respectively, and show high inter- and intra-laboratory reproducibility [61]. Other potential cervical cancer screening tests based on other target molecules, such as HPV mRNA, proteins or methylation markers, cannot directly be considered equivalent and require additional evidence regarding their longitudinal effects, i.e., long-term safety [59]. The proper validation of HPV DNA tests according to the international equivalence criteria can be problematic due to difficulties with obtaining an appropriate set of clinical specimens. The recently launched international framework "Validation of HPV Genotyping Tests (VALGENT)" facilitates the comparison and validation of HPV DNA tests by providing a set of samples obtained from women attending routine screening (1,000-1,300 samples) enriched with cytologically abnormal samples (300 samples) [62]. In order to allow comparison with other HPV tests, each VALGENT panel includes a comparator assay that was previously clinically validated for cervical cancer screening purposes [62]. As of July 2018, only 14 commercial HPV assays (out of more than 480 HPV assays on the global market) can be considered completely or partially validated for primary HPV-based cervical cancer screening [59,62]. The list includes four of the five HPV assays approved by the US FDA: the Hybrid Capture 2 (hc2) HPV DNA Test (Qiagen), the cobas 4800 HPV Test (Roche), the APTIMA HPV Assay (Hologic) and the BD Onclarity HPV Assay (Becton Dickinson).
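To make the equivalence thresholds concrete, the following is a minimal sketch in Python of how the relative sensitivity and specificity of a candidate test versus a validated comparator can be computed and compared against the 0.90 and 0.98 cut-offs cited above. The function name and the study counts are entirely hypothetical, and the sketch uses simple point estimates, whereas the formal criteria are assessed with a dedicated non-inferiority test.

    def relative_accuracy(tp_new, fn_new, tn_new, fp_new,
                          tp_cmp, fn_cmp, tn_cmp, fp_cmp):
        """Relative sensitivity and specificity of a candidate HPV test
        versus a clinically validated comparator on the same panel."""
        sens_new = tp_new / (tp_new + fn_new)  # sensitivity for CIN2+
        sens_cmp = tp_cmp / (tp_cmp + fn_cmp)
        spec_new = tn_new / (tn_new + fp_new)  # specificity in women without CIN2+
        spec_cmp = tn_cmp / (tn_cmp + fp_cmp)
        return sens_new / sens_cmp, spec_new / spec_cmp

    # Hypothetical verification panel: 100 women with CIN2+, 900 without.
    rel_sens, rel_spec = relative_accuracy(tp_new=94, fn_new=6, tn_new=820, fp_new=80,
                                           tp_cmp=96, fn_cmp=4, tn_cmp=810, fp_cmp=90)
    print(f"relative sensitivity {rel_sens:.3f}, relative specificity {rel_spec:.3f}, "
          f"point estimates meet thresholds: {rel_sens > 0.90 and rel_spec > 0.98}")

In practice the candidate assay must also demonstrate high intra- and inter-laboratory reproducibility, which this sketch does not address.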
Since the performance of an HPV test may vary depending on the sample collection procedures and medium, regulatory approval in some settings requires validation of performance based on the choice of sample collection medium. Importantly, the validation of a pre-approved assay for use with a specific medium is a simpler process than de novo clinical validation of an HPV assay. It can be expected that several previously approved tests will eventually be validated for use with the most commonly used collection media [58].
It is worth mentioning that although we have an increasing understanding of which HPV tests are valid for HPV-based primary cervical cancer screening, given an internationally accepted and applied validation framework and published professional guidance, we do not have widely accepted equivalent metrics to judge the validity of HPV tests in other clinical settings, including post-treatment surveillance and the triage of low-grade abnormalities [58,59]. International efforts to create such validation guidelines will be of great benefit, since existing data show significant variation in commercially available tests being used in clinical settings that are not part of HPV-based primary cervical cancer screening programs.
HPV Vaccines
The identification of HPV as the main etiological agent of cervical cancer presented novel opportunities for the development of preventative modalities against cervical cancer [10]. With this knowledge, it became clear that preventing hr types of the virus from ever infecting should be explored as an option, in addition to the preexisting cervical cancer screening tests. Two decades of effort culminated in 2006 with the approval of the first safe and efficacious prophylactic HPV vaccine [63]. The first vaccine that was approved was the quadrivalent Gardasil/Silgard, which targets HPV6, 11, 16, and 18 [64]. A year later, the bivalent Cervarix vaccine targeting HPV16 and 18 was approved, and more recently, the nonavalent Gardasil 9 vaccine, which targets HPV6, 11, 16, 18, 31, 33, 45, 52, and 58, was also approved [65]. All three of these vaccines target HPV16 and 18, and contain HPV L1 protein virus-like particles (VLPs) expressed in different cell types [16]. VLPs are morphologically and antigenically similar to native HPV virions, and because of the genomic similarity between different types of the virus, a certain degree of protection against HPV types not targeted by the vaccine, so-called cross-protection, is also achieved. HPV vaccines elicit immunity through the production of high titers of anti-HPV IgG neutralizing antibodies, which block the entrance of the virus into the host cells [15]. The quadrivalent and nonavalent vaccines contain VLPs of two lrHPV types, 6 and 11, which are responsible for more than 90% of anogenital warts and laryngeal papillomas. Moreover, the nonavalent vaccine is targeted against the five types (HPV31, 33, 45, 52, 58) most frequently identified in cervical cancer after HPV16 and 18 [16,66,67]. Nonetheless, even with cross-protection and the increased number of HPV types covered by the nonavalent vaccine, HPV vaccines do not protect against all HPV types that cause cervical cancer [68,69].
Improving HPV Vaccination Coverage
Initially, all three HPV vaccines had been approved for a 3-dose series in order to generate sufficient and long-lasting protective immunity [70]. Currently, for all three vaccines, two doses are recommended for persons starting the series before their 15th birthday, and a three-dose schedule is recommended for those who start the series on or after their 15th birthday and for persons with certain immunocompromising conditions [71]. Decreasing the number of doses not only leads to reductions in overall cost, which is a concern (especially in low-income countries), but it also increases adherence to the program [71][72][73].
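As a minimal illustration of the dosing rule just summarized (a sketch only; the function and its parameters are hypothetical and do not substitute for the official recommendations):

    def recommended_doses(age_at_first_dose, immunocompromised=False):
        """Number of HPV vaccine doses per the schedule summarized above."""
        if immunocompromised or age_at_first_dose >= 15:
            return 3  # three-dose schedule
        return 2      # two-dose schedule for a series started before age 15

    print(recommended_doses(12))                          # 2
    print(recommended_doses(16))                          # 3
    print(recommended_doses(12, immunocompromised=True))  # 3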
Despite their potency in providing protection against HPV infection, HPV vaccines are not therapeutic, as they are not effective in curing preexisting HPV infections [16]. Hence, current HPV vaccination programs are mainly targeted at both genders prior to coitarche, aiming to reduce the burden of cervical cancer and other HPV-related tumors not only in vaccinated but also in unvaccinated individuals, thanks to herd immunity [69]. As both genders are responsible for HPV transmission, both genders should be vaccinated to share the burden in reducing the risk of HPV-related disease, as well as to have equal access to direct vaccine benefits. It is becoming evident that only gender-neutral vaccination will lead to substantial control of HPV-related diseases in both women and men, as well as maximize the prevention of cervical cancer, especially if vaccination coverage for girls in a particular program is not high. The current failure to implement gender-neutral HPV vaccination with high coverage in the great majority of countries represents a unique missed public health opportunity [74,75]. However, even with the high protection against de novo HPV infections provided by HPV vaccines, successful cervical cancer prevention will still rely on screening for years to come [69], but future strategies will require substantial changes: longer screening intervals, exclusive use of HPV-based screening strategies, and vaccination of older cohorts. An innovative strategy with the purpose of accelerating the reduction of cervical cancer incidence and mortality, named "HPV-FASTER", has recently been proposed, with a generalized HPV vaccination campaign aimed at girls and women aged 9-45, paired with at least one HPV-based screening test at any age over 30 and eventual triage and diagnostic assessments among women who screen HPV-positive [76].
Organization of Screening
With extensive knowledge of the biology of cervical cancer and with an arsenal of screening and prevention tools, the disease can be detected at an early enough stage to be curable. As a concept, the fundamental principles of cervical cancer screening can be dated back as far as the 1940s, before organized screening programs existed [4,77]. However, it was not until 1968 that Wilson and Jungner defined a set of criteria (comprehensively reviewed by Basu et al. [47]) that not only helped to define whether a disease, such as cervical cancer, is eligible for screening but also influenced the development of better-thought-out screening programs. Undertakings of such magnitude, however, are no trivial tasks, since a number of prerequisites have to be accounted for before embarking on the implementation of such programs. The nature and parameters of the program, which are directly influenced and supported by scientific progress, must be established [47,78].
To this end, the first edition of the European Guidelines for Quality Assurance in Cervical Cancer Screening, published in 1993 [79], designated the principles for organized, population-based screening, with a number of countries adhering to this recommendation [80]. The supplements of the second edition of the European Guidelines for Quality Assurance in Cervical Cancer Screening of 2015 (the original volume of the second edition was published in 2008) emphasize the importance of the implementation of an organized, population-based cervical cancer screening program with a call/recall invitation system in order to take full advantage of the benefits of screening and discuss the key aspects of this type of organization in considerably increased detail [55,80]. Such a program should have a national/regional team that directs the implementation of guidelines, rules and procedures. This team would also be responsible for quality assurance to monitor and to guarantee that all levels of the process are performed sufficiently. This responsibility includes the management and coordination of the call/recall system, testing and diagnosis, as well as follow-up after positive test results. Furthermore, quality assurance procedures call for attention to training personnel, evaluating performance, auditing and monitoring, and reviewing the impact of the program on the burden of disease. The latter is facilitated by the population-based nature of the program, which is characterized by the identification and personal invitation of each member of the targeted population eligible for screening [6,81,82].
In contrast to organized population-based screening, opportunistic screening depends on the initiative of the individual woman and/or her doctor. This type of screening often results in high coverage only in certain parts of the population, which are screened frequently, while other parts of the population, usually with a lower socioeconomic status, exhibit lower coverage. This situation results in uneven coverage with heterogeneous quality, limited effectiveness, and reduced cost-effectiveness, as well as difficulty in monitoring the population [81].
Thus, as the European Guidelines recommend, a program with an organized population-based nature may substantially improve the accessibility and equity of screening access while simultaneously improving effectiveness and cost-effectiveness [6,81]. The key factors to be specified within such a program are the target age, screening intervals, and screening algorithm. The latter refers to the primary screening test and the subsequent management of results at each step of the algorithm.
Primary Cytology Testing
Three options currently in use for primary cervical cancer screening are cytology, HPV testing, and cotesting [83]. Cytology-based testing has been used for primary screening for more than half a century and is currently employed by the majority of screening programs in Europe. However, it was implemented in screening programs in the 1960s-70s without being assessed in randomized controlled trials (RCTs) [47]. As described earlier in this review, cytology-based testing has various technical characteristics that affect its standing at the forefront of screening. It has undoubtedly proven its impact on reducing cervical cancer morbidity and mortality, especially in organized settings [84]. However, the low sensitivity of the technique, the requirement for high-quality diagnostic facilities, the high costs needed to sustain the infrastructure, and the need for highly trained personnel are important issues that have brought primary cytology screening under intense scrutiny for the past twenty years [85,86]. To maintain the accuracy and performance of cervical cytology, short intervals between screenings are required, which implies an increased number of tests and, as such, higher costs [83]. Another factor that is already affecting the performance of cytology as a tool for primary screening is a reduced population burden of HPV due to HPV vaccination. The specificity of cytology, the main hallmark of the method, is decreasing in countries with high HPV vaccination coverage due to the dramatic population reduction of high-grade lesions as a result of HPV vaccination. Furthermore, since the current vaccines do not cover all HPV types causing cervical abnormalities, an increase in the proportion of minor abnormalities caused by less carcinogenic HPV types is also expected, which in turn will further lower the once very high positive predictive value (PPV) of cytology [84,87]. As the population prevalence of hrHPVs, and consequently of CIN2/3, decreases, screening modalities with higher sensitivity, like HPV testing, will clearly perform better at the population level.
Primary HPV Testing
The development of clinically validated HPV tests, which are more accurate and sensitive than primary cytology testing, has recently caused a paradigm shift. According to the European Guidelines, as well as the World Health Organization (WHO), HPV testing is now proposed as the primary screening tool for cervical cancer [55,88]. HPV testing is characterized by high clinical sensitivity, a high negative predictive value (NPV), objectiveness, low training requirements, reproducibility and a high throughput capacity [47,88,89]. HPV-based screening requires longer screening intervals than cytology-based screening since progression to cancer occurs years after an infection with hrHPV. Based on these facts, the European Guidelines recommend a five-year screening interval for HPV testing, which may be extended up to 10 years depending on the age and screening history of the patient [47,55,90,91]. Longer screening intervals contribute to less expensive programs, as well as providing a longer duration of "peace of mind" when women test negative in comparison to cytology-negative women [47,91]. Another factor that is expected to help establish primary HPV screening as a more cost-effective option is HPV vaccination. In a study performed to evaluate the effectiveness and cost-effectiveness of cervical cancer prevention scenarios, the most cost-effective strategy was the combination of preadolescent vaccination with an organized screening program, using primary HPV testing every five years with cytology triage [92]. In this regard, partial HPV genotyping may be worth employing either as part of primary HPV screening, which would entail using an HPV assay with genotyping capabilities, or as triage. This approach would not only help with the management of positive HPV cases but would also enable the direct monitoring of the downstream effects of vaccination [83,84,91,93].
When deciding at what age to start HPV-based screening it is important to take into account the natural history of HPV infection in order to avoid unnecessary follow-up and/or overtreatment of women with only transient HPV infections [47,94]. Thus, the European Guidelines recommend against primary HPV screening before the age of 30 and are in favor of screening starting at the age of 35, especially in a setting without prior cytology screening implemented. However, there is insufficient evidence to promote or restrict the start of HPV-based screening between the ages of 30-34. Conversely, in a region or country where primary cytology screening is running well, the policy-makers of the program may decide to implement primary HPV testing beginning at the recommended age of 30 or 35, while also maintaining their current cytology-based program from the ages of 20-30, at least until evidence shows otherwise. Nonetheless, the avoidance of screening prior to the age of 20 is recommended [53,55,91].
At the same time, setting the age to stop screening is important. The European Guidelines suggest that primary HPV screening could stop at the same age recommended for cytology, that is, at 60-65 years of age, provided that the most recent screening test was negative [55]. The reasoning for stopping screening at this age is the extremely low probability that an incident HPV infection will become persistent and that women will consequently develop cancer. Screening for a newly acquired HPV infection is therefore redundant and/or not cost-effective in women over 65 years with recent negative screen result(s) [95,96]. Furthermore, RCT data report significantly less CIN2/3 at ages 50-60 than at ages 35-49 [60,97]. However, the European Guidelines state that current data are insufficient to select the optimal age to stop HPV primary testing, which is why the recommended age to stop screening for cytology was also kept for primary HPV testing. Nonetheless, it is important to note that cytology performs relatively poorly at those ages, especially for postmenopausal women, in whom epithelial atrophy is commonly observed. Moreover, the cervical transformation zones of postmenopausal women are situated in the cervical canal, making the collection of material for cytological examination more difficult. Accordingly, cytology has low sensitivity for postmenopausal women, and screening can result in an elevated rate of false-positive results [95,98]. However, a recent Swedish study found that although HPV prevalence is relatively low in older women, there was still an increased risk for cervical dysplasia upon a second positive HPV screen test [98]. Furthermore, 30% of cervical cancer cases were still diagnosed in women older than 60, with a mortality as high as 70%. These findings, coupled with the non-optimal performance of cytology in older women, suggest the extension of the screening age as well as the need for more research [98].
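Since the recommended starting age, stopping age, and interval amount to a small set of program parameters, a program's call/recall eligibility logic can be sketched as follows. This is a deliberately simplified illustration with hypothetical parameter names and defaults; real programs would encode their own nationally agreed values and screening histories.

    def due_for_hpv_screening(age, years_since_last_negative,
                              start_age=30, stop_age=65, interval_years=5):
        """Very simplified call/recall rule for primary HPV screening."""
        if age < start_age or age > stop_age:
            return False
        # Women are re-invited once the screening interval has elapsed.
        return years_since_last_negative >= interval_years

    print(due_for_hpv_screening(34, 6))   # True
    print(due_for_hpv_screening(28, 10))  # False: below the starting age
    print(due_for_hpv_screening(66, 10))  # False: above the stopping age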
Primary HPV Cotesting
Cotesting combines the sensitivity of HPV testing with the specificity of cytology at the level of primary screening. Even though some non-European studies reported marginal superiority of cotesting over HPV-based screening alone, the European Guidelines recommend against cotesting at any given age because it is not substantially more effective than HPV testing and is considerably more costly [55,83,89].
Management of Women after a Positive HPV Primary Test Result
Having established HPV testing as the recommended primary screening method, the target age range, and the screening interval after a negative test, it is important to specify the management of positive results from primary testing. Triaging women with a positive HPV primary test result can compensate for the lower specificity that characterizes HPV testing. In this regard, the European Guidelines recommend the performance of cytology as the main triage test in order to manage the increased number of screen positives identified by primary HPV testing, which would otherwise lead to an excessive number of referrals to colposcopy. Thus, only women with both an HPV-positive result and cytological abnormalities are immediately referred for colposcopy. If the primary HPV test employs partial genotyping for HPV16 and HPV18, then direct referral to colposcopy (without cytology) is possible [91]. The same sample used for primary testing is recommended to be subsequently used for triage testing in order to reduce the risk of follow-up loss and maximize the efficiency of resources [47,55,91,99]. Furthermore, primary HPV testing improves cytology screening by eliminating HPV-negative ASC-US cases, which constitute a considerable portion of borderline cytology and pose essentially no elevated risk for underlying CIN2/3 or cancer [77,83,91]. Moreover, there is evidence that the predictive value of cytology readouts increases if the cytologist is aware of the HPV status of the sample [88].
To increase the specificity and improve the detection of precancerous lesions, other techniques, in addition to cytology, that could potentially be used for the triaging of women after a positive HPV primary test result are: partial HPV genotyping (HPV16/18 or extended), p16/Ki67 immunostaining, HPV E6/E7 mRNA detection, and cellular and viral methylation assays. However, at this time, there are insufficient data to favor such methods over cytology for triaging in Europe. The use of partial HPV genotyping for triage is based on the fact that there is substantial variation in risk depending on HPV type, but it is still a matter of debate for which HPV types other than HPV16 (HPV18, HPV31, HPV33, HPV45) a routine risk-stratification algorithm is worth implementing [100]. The p16/Ki67 dual stain and HPV mRNA testing have the potential to enable a more accurate distinction between transient HPV infections and those that will potentially progress to precancerous lesions/cancer. The p16/Ki67 dual stain has been described as a credible tool that compares favorably to cytology, but both the p16/Ki67 dual stain and HPV mRNA testing will need to become more cost-effective in order to compete with cytology. Methylation is in a similar predicament: it is still in the early stages but is displaying great potential as an accurate and promising molecular risk-stratification marker. The objectivity that this method offers, its consistency, and its high throughput potential will make methylation a strong candidate triaging method even if its performance is only equivalent to that of cytology [55,101,102].
Management of Women after a Positive HPV Primary Test Result and Negative Cytology Triage Results
HPV-positive, cytology triage-negative women are recommended to undergo a different path than women with triage-positive cytology and/or borderline cytological results. Cytology triage-negative women who are infected by hrHPV are still at risk for persistent infection and thus require repeat testing at shorter intervals than HPV-negative women [83]. The open issue is how to select the most appropriate follow-up test and intervals for repeat testing. The European Guidelines report that at present the evidence available is not sufficient to definitively recommend a single approach for all settings [55] and as such provide three strategies for repeat testing (Figure 1). It is important to note that HPV retesting may be performed after at least 12 months, while cytology retesting can be performed after 6-12 months [99,103,104]. As shown in Figure 1, the European Guidelines recommend that if HPV retesting is performed, a woman with a negative repeat HPV test should return to routine screening, while a woman with a positive result should be referred to colposcopy. If cytology retesting is performed, a woman with abnormal cytology should be referred for colposcopy, whereas a woman with negative cytology can return to routine screening. If HPV testing with cytology triage is performed as repeat testing, it can be managed as follows: a woman with a negative HPV result can return to routine screening; a woman with a positive HPV result and abnormal cytology should be referred immediately for colposcopy; and a woman with a positive HPV result and negative cytology can be referred to undergo repeat testing after 12 months, be referred for colposcopy, or return to routine screening [99,105,106]. A recent study, however, discourages the use of HPV repeat testing, since women who test repeatedly HPV-positive and cytology-negative still have an increased risk for CIN2+ even after a repeat HPV-negative test [88,107]. This finding also indicates the lack of sufficient evidence regarding repeat testing; thus, prior to implementing HPV-based repeat testing, the decision makers of each program have to consider the prevalence of HPV types in the target population as well as the quality of cytology in that region [55].
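The three repeat-testing strategies above form a small decision tree, sketched below in Python. This is an illustrative summary only; the strategy names, the Action labels, and the handling of the persistently HPV-positive/cytology-negative case are assumptions of this sketch, as the Guidelines leave that last choice to each program.

    from enum import Enum

    class Action(Enum):
        ROUTINE = "return to routine screening"
        REPEAT = "repeat testing after at least 12 months"
        COLPOSCOPY = "refer to colposcopy"

    def repeat_round(strategy, hpv_positive=None, cytology_abnormal=None):
        """Next step at the repeat-testing visit for an initially
        HPV-positive, cytology triage-negative woman."""
        if strategy == "hpv":           # HPV retesting alone
            return Action.COLPOSCOPY if hpv_positive else Action.ROUTINE
        if strategy == "cytology":      # cytology retesting alone
            return Action.COLPOSCOPY if cytology_abnormal else Action.ROUTINE
        if strategy == "hpv_cytology":  # HPV retesting with cytology triage
            if not hpv_positive:
                return Action.ROUTINE
            if cytology_abnormal:
                return Action.COLPOSCOPY
            # Persistently HPV-positive, cytology-negative: program-dependent
            # (repeat once more, refer to colposcopy, or routine screening).
            return Action.REPEAT
        raise ValueError("unknown strategy")

    print(repeat_round("hpv_cytology", hpv_positive=True, cytology_abnormal=False))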
Post-Treatment Follow-up
Following the referral of a patient for colposcopy, high-grade cervical lesions may be diagnosed after biopsy (in approximately a quarter of referred women), followed by surgical treatment. Various treatments of high-grade cervical lesions are available, including cryotherapy, laser, loop electrosurgical excision procedure (LEEP) or large loop excision of the transformation zone (LLETZ), and cone biopsy, which are all characterized by an overall high success rate. However, treatment may fail with regard to residual or recurrent precancer, with 5-15% of treated women being diagnosed again with CIN2+ and therefore requiring additional therapy. Indeed, women once diagnosed with high-grade lesions are characterized by an increased lifetime risk of developing cervical cancer [47,108]. Therefore, the increased risk of cancer highlights the importance of close post-treatment monitoring (follow-up testing) with the objective of early identification of residual/recurrent disease [108][109][110]. For many years, the Pap test has been the most widely employed follow-up test, despite having relatively low sensitivity in this setting. Since 2008, the European Guidelines have recommended the performance of cytology 6, 12, and 24 months after CIN2+ treatment as the main follow-up test [111][112][113]. Nonetheless, there is growing evidence for the use of HPV testing in post-treatment monitoring, either alone or as cotesting. Importantly, cotesting achieves only marginally higher sensitivity than cytology or HPV testing alone, implying that HPV testing can be safely used without cytology [109,113,114]. A study analyzing pooled data from 33 published studies argues in favor of follow-up hrHPV testing by noting that it had higher sensitivity for underlying CIN2+ and specificity comparable to that of cytology. In the same study it was also stated that women with positive surgical margins may benefit more from hrHPV testing due to its very high PPV and NPV [109]. Nevertheless, large-scale RCTs are required to establish the best follow-up algorithms after treatment of high-grade lesions [109].
The Implementation Status of Organized Population-Based Programs for Cervical Cancer Screening
Considering the pros and cons of all available cervical cancer screening tests, and despite the existence of evidence-based recommendations, it is clear that there is no "one size fits all" model for cervical cancer screening. There are various factors affecting the implementation of a screening program: the amount of healthcare funds available in each region/country, the preexisting medical and economic infrastructure, and the risk perception and tolerance of the society [87]. Table 1, which presents the data of each country regarding their cervical cancer screening programs, collected through meticulous bibliographical search, shows clearly that there is significant variation in the way members of the E.U., as well as some E.U. associated countries, address the matter of cervical cancer screening (Table 1). Thus, the most recent official survey of implementation status of cervical cancer screening in the E.U. showed that although substantial improvement in screening implementation was documented in last decade and that a total of 22 member states were implementing, piloting, or planning the population-based cervical cancer screening program in 2016, the roll-out of the screening programs was completed in only nine out of 28 member states: Denmark, Estonia, Finland, Latvia, Poland, Slovenia, Sweden, The Netherlands, and The United Kingdom [115], along with one E.U. associated country, Norway [116]. There are countries among them that do not yet have organized population-based programs, namely, Austria, Bulgaria, Cyprus, Germany, Greece, Luxembourg, Spain, Israel, and Switzerland. However, even though these countries lack the abovementioned screening program, some of them have in place programs with certain elements of organized programs, mostly as a result of recommendations issued by the country's government and/or national gynecological/medical societies. In Austria, for example, a nationwide opportunistic program was created in 1970 to screen for cervical cancer. The program has remained opportunistic and is loosely structured by recommendations from Austrian medical societies, and the expenses are covered by health insurance [117][118][119][120]. In Israel, screening is recommended and fully covered by the National Health Insurance Law, and furthermore, the Israeli Gynecological Society recommends the extension of screening ages from 35-54 to 25-65 [121,122]. A similar situation is noted in Switzerland, where recommendations are offered by the Swiss Gynecological Society, and Pap testing is covered by health insurance [123]. Conventional cytology; LBC = liquid-based cytology; GP = general practitioner; N/A = not available, GYN = gynecologists; "-" = HPV vaccine not in the national immunization program. 1 In Belgium an organized population-based program is in place only in the Flemish region [138]. 2 In Cyprus a regional pilot screening program was initiated in 2012, which is still in effect [152]. 3 In Finland, Italy, Portugal and Spain there is variation depending on the region. There are some regions in Spain that have population-based programs [92,175,184,220,251,269,278,312]. 4 In Greece, there are some regional cervical cancer screening programs that have been reported [206]. 5 In Romania an HPV vaccination program had started in 2008 but it was discontinued due to low uptake [256,257,259].
Some of these countries which are still lacking organized national screening programs have made attempts to implement national and/or regional screening programs (Table 1). In 2009, Bulgaria initiated the "Stop and Get Checked" cancer screening program, which ended in 2014 with no scaling up [115,322]. In Cyprus, the Ministry of Health, the Department of Medical and Public Health Services, assigned a temporary committee in 2008 with the intention of implementing a national screening program for cervical cancer in 2009 [118]; however, the program was not realized, and screening is currently opportunistic. Nonetheless, a regional pilot screening program in Cyprus, organized by a private organization of women in cooperation with governmental health services as well as the support of the Ministry of Health, was initiated in 2012 and is still in effect [152]. Similarly, in Greece, a number of regional cervical cancer screening programs have been reported, and there have also been efforts to establish a national organized population-based screening program for cervical cancer. These efforts have not been fruitful yet, reportedly due to the financial crisis [206]. In Luxembourg, a national cervical cancer screening program was initiated in 1962, and it is currently opportunistic, run by a single national cytology laboratory [234,236]. In Spain, screening at a national level is opportunistic, and there are variations in screening recommendations in different regions. In addition, some regions have their own population-based programs. Several scientific Spanish societies recommend the implementation of an organized screening program with HPV primary screening [92,269,278]. Germany, however, with the passing of the Cancer Screening and Registration Law of 2013, has planned for an organized population-based cervical cancer screening program, which was reported to be scheduled for implementation by 2018 [197]. In France, despite the existence of organized population-based programs, the country has been primarily characterized by opportunistic screening. National guidelines were published in 2010 for the initiation of a population-based cervical cancer screening program, and they are expected to be implemented nationwide in 2018 [185]. In Lithuania, the program is organized but still has some opportunistic qualities, since the general practitioners (GPs) are the ones instructing patients to attend cervical cancer screening instead of the process being governed by an organized call-recall system and the invitations being sent out by mail [232]. In Turkey, there are both organized and opportunistic programs, but the opportunistic approach is employed to a higher degree. An organized screening program implemented in 2004 was characterized by low coverage and redesigned in 2014 to include primary HPV testing, with the additional implementation of HPV vaccination being debated as well [312,[315][316][317][318]. Besides Turkey, other countries covered in this review that have yet to implement an HPV vaccination program are Bulgaria, Poland, Romania and Slovakia, indicating that HPV vaccination programs have been adopted by the majority of members of the E.U. and E.U. associated countries. As presented in Table 1 and depicted in Figure 2, out of the 32 countries covered in this review, only five countries do not have a national HPV vaccination program running [131].
The Implementation Status of Primary HPV Testing
As can be observed from the data in Table 1, which provides the implementation status of HPV primary testing in each country, there is a recent movement towards HPV-based primary screening, which has been embraced by some countries and is currently being strongly considered by others (Figure 3). Finland, Germany, Italy, The Netherlands, Sweden, The United Kingdom, Norway, and Turkey are all either in the process of implementing HPV primary screening on a regional or national level or have done so recently. Distinctions should be noted for Norway, where a regional pilot program for HPV primary testing is underway, and Finland, where HPV primary screening is implemented by some municipalities [175,184]. In France, primary HPV testing has been studied in regional pilot programs [6,115,186]. Romania is currently using cotesting in some regions, and the strategy is reportedly to change to HPV primary screening during the 2017-2020 National Cancer Control Plan [115,257]. Cotesting was also employed for a pilot study in two regions of Poland [248] and for some regions in Portugal [251]. Moreover, cotesting is also being performed in a pilot population-based program that is still ongoing in Malta [115]. Other countries, such as Denmark, which performs HPV testing for women in the age range of 60-64 [169,170], and Belgium [138], are still evaluating HPV primary screening for implementation in their national programs.
Figure 3. The implementation status of primary HPV testing in E.U. member states and some E.U. associated countries. The magnifying glass serves to enlarge the island of Malta. It is important to state that this is a rapidly changing field and that the status of implementation could not be confirmed for all countries from two independent sources.
The Importance of Coverage and Acceptance of Cervical Cancer Screening Programs
Despite all the efforts to implement screening programs, their success depends primarily on sufficient population coverage. Unfortunately, many countries report suboptimal participation in screening programs [210,211,229,232,315]. In an effort to increase coverage, in addition to educational campaigns and invitation reminders, many countries are also exploring or implementing self-sampling for nonparticipants [183,204,293,307,309,323,324]. This testing strategy is also mentioned in the European Guidelines; however, they recommend that successful self-sampling pilot projects precede implementation. Furthermore, it is important to emphasize that self-sampling should be performed for HPV testing and not cytology [55,325]. HPV self-sampling has been reported to have similar sensitivity and specificity as testing performed on samples taken by trained professionals. However, European Guidelines do not recommend self-sampling for all women, since, although they performed similarly, the results of self-collected samples are less accurate than those of samples collected by clinicians [55]. The acceptability of self-sampling for HPV testing was shown in an RCT, where 99% of the samples returned were adequate for analysis, indicating that self-sampling can be a valid alternative for nonparticipants [326].
Low coverage is directly affected by the targeted population, and accordingly, there have been numerous studies in various countries evaluating the awareness, perception, and knowledge of the population in regard to HPV, cervical cancer screening programs and vaccination programs [141,172,205,209,219,247,251,255,258,283,320,321]. These studies also highlight the importance of health care providers, general practitioners and gynecologists, both in opportunistic screening and in organized programs [128,207,327]. As indicated in Table 1 and illustrated in Figure 4, GPs and gynecologists tend to be the primary figures in opportunistic screening, performing the examinations and collecting the specimens, while in organized settings, the specimen can be collected by a variety of medically qualified individuals, such as nurses and midwives. These factors emphasize the importance of all affected parties in the movement towards organized population-based HPV primary screening. All parties must work together in order to achieve success, whether an already existing cytology-based organized program is upgraded to an HPV-based program or a new organized program is implemented in a country previously performing opportunistic screening only.
Figure 4. Health care providers that act as sample takers in cervical cancer screening programs in E.U. member states and some E.U. associated countries. The magnifying glass serves to enlarge the island of Malta. It is important to state that this is a rapidly changing field and that the status of implementation could not be confirmed from two independent sources. This figure was designed based on information available in Table 1.
Conclusions
Cervical cancer is an important health care problem in many parts of the world as well as in the E.U. It is a disease with a clearly defined natural history, caused by essentially one etiological agent and characterized by long clinical latency. These characteristics of the disease enabled the development of acceptable and valid testing, such as the Pap test that was invented in the 1940s, to identify the precursor lesions, which can be treated in a safe, effective and acceptable way. This subsequently led to the establishment of routine cervical cancer screening in the 1960s. Primary prevention of cervical cancer was implemented more recently with the release of the first prophylactic HPV vaccine in 2006. Currently, the European Guidelines recommend organized population-based screening with primary HPV testing. However, this paradigm shift requires either the reformation of currently existing cytology-based organized programs or the implementation of new programs for countries still relying on opportunistic screening, which also mainly use cytology as a screening tool. The existing cytology-based screening programs are in many instances inefficient and costly because of the subjective nature of cytology, threatening to strain the public health budget of many countries, an effect that is expected to be exacerbated further as population HPV vaccination coverage increases [84]. We are all fully aware that the implementation of functioning HPV-based organized cervical screening programs with accessible and effective treatment of precancerous lesions, coupled with universal gender-neutral HPV vaccination, is challenging for some of the E.U. member states; however, this is certainly the only way forward. When adequately combined, these two promising prevention options have the potential to dramatically reduce cervical cancer incidence and mortality. | 2019-01-22T09:01:17.996Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "ed009b6b252ee38f2ef9a07fe46757e1442aadc7",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/viruses/viruses-10-00729/article_deploy/viruses-10-00729.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed009b6b252ee38f2ef9a07fe46757e1442aadc7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268010427 | pes2o/s2orc | v3-fos-license | Revolutionizing Customer Experience through Innovative Digital Marketing Approaches
This study explores the transformative impact of innovative digital marketing approaches on revolutionizing customer experience in the contemporary business landscape. The study's main objectives are to examine the role of personalization, immersive technologies, and omnichannel integration in enhancing customer engagement and satisfaction, analyze the effectiveness of measurement and analytics in evaluating digital marketing initiatives' impact, and identify critical policy implications for businesses and policymakers. Utilizing a secondary data-based review approach, this study synthesizes insights from peer-reviewed academic journals, industry reports, and case studies to investigate emerging trends and best practices in digital marketing. Significant findings include the paramount importance of personalization in driving customer engagement, the considerable role of immersive technologies in creating memorable brand experiences, and the critical need for omnichannel integration to deliver seamless and cohesive customer experiences. Policy implications highlight the importance of privacy and data protection, digital inclusion, transparency in personalization efforts, and standardized measurement frameworks. Overall, this study underscores the transformative potential of innovative digital marketing approaches in shaping the future of customer experience.
INTRODUCTION
In the contemporary business landscape, the convergence of digital technologies and evolving consumer expectations has led to a paradigm shift in how companies approach customer experience (CX) management (Bolton et al., 2018). This transformation has been accelerated by innovative digital marketing approaches, which have revolutionized how businesses engage with their customers. This article explores how organizations leverage cutting-edge digital marketing strategies to enhance customer experience and drive competitive advantage. The concept of customer experience encompasses every consumer interaction with a brand, from initial awareness to post-purchase support. In today's hyper-connected world, consumers expect seamless and personalized experiences across all touchpoints, whether online or offline (Chisty et al., 2022). As a result, companies are under increasing pressure to deliver exceptional customer experiences that differentiate their brand and foster loyalty.
Digital marketing is pivotal in shaping the customer journey and influencing consumer perceptions. Marketers can reach their target audience with precision and relevance by leveraging diverse digital channels, such as social media, search engines, email, and mobile apps. Moreover, advancements in data analytics and artificial intelligence empower marketers to gather insights into consumer behavior and preferences, enabling them to tailor their messaging and offers to individual customers. One of the critical drivers of innovation in digital marketing is the relentless pursuit of personalization. Rather than adopting a one-size-fits-all approach, successful companies harness the power of data-driven personalization to deliver tailored experiences that resonate with their audience. By analyzing customer data and employing machine learning algorithms, marketers can create highly targeted campaigns that speak directly to the needs and interests of individual customers.
Furthermore, the rise of immersive technologies, such as augmented reality (AR) and virtual reality (VR), has opened up new possibilities for enhancing customer engagement (Surarapu et al., 2018). These technologies enable brands to create interactive, immersive experiences that captivate consumers' attention and foster deeper emotional connections. Whether allowing customers to visualize products in their environment through AR or transporting them to virtual brand experiences, immersive technologies are redefining the boundaries of digital marketing. In addition to personalization and immersion, another key trend shaping the future of digital marketing is the integration of omnichannel experiences. Today's consumers expect a seamless experience across multiple channels and devices, whether browsing online, visiting a physical store, or interacting with a brand's mobile app (Vadiyala, 2020). By unifying their marketing efforts across channels and providing a consistent experience at every touchpoint, companies can create a cohesive brand experience that enhances customer satisfaction and loyalty (Ahmed, 2022).
In summary, the digital revolution has ushered in a new era of customer experience management, where innovative digital marketing approaches are driving profound changes in how companies engage with their customers. By embracing personalization, immersion, and omnichannel integration, businesses can deliver exceptional customer experiences that set them apart. In the following sections of this article, we will delve deeper into specific digital marketing strategies and case studies that illustrate the transformative power of these approaches (Maleque et al., 2010).
STATEMENT OF THE PROBLEM
In today's rapidly evolving digital landscape, the intersection of customer experience and digital marketing presents opportunities and challenges for businesses seeking to stay competitive. While significant progress has been made in leveraging digital technologies to enhance customer engagement, a research gap exists in understanding the most effective strategies for revolutionizing customer experience through innovative digital marketing approaches (Surarapu, 2016).
Despite the growing importance of digital marketing in shaping customer experiences, comprehensive studies examining the specific strategies and tactics that drive success in this area remain scarce. While there is a wealth of literature on digital marketing and customer experience management individually, more research is needed to explore the synergies between these two disciplines and identify best practices for integrating them effectively.

METHODOLOGY

The selection of secondary data sources is based on their relevance to the study's objectives, including insights into the latest digital marketing trends, consumer behavior patterns, and advancements in technology that are shaping the customer experience landscape. Critical databases such as PubMed, Google Scholar, Scopus, and industry-specific databases are utilized to identify relevant literature and research studies. A systematic approach is employed to search, screen, and select relevant sources to ensure the comprehensiveness and validity of the secondary data (Ahmed, 2009). Search queries are constructed using keywords related to digital marketing, customer experience, innovation, and relevant theoretical frameworks such as the customer journey, personalization, and omnichannel marketing.
The collected secondary data is then analyzed thematically to identify emerging trends, common themes, and critical insights relevant to the study's objectives. This analysis involves synthesizing information from multiple sources to develop a coherent narrative that provides a comprehensive overview of the current state of digital marketing and its impact on customer experience. Limitations of the secondary data-based review approach include the reliance on existing literature, which may not always capture the latest developments or provide insights into industry-specific practices that are not widely documented. However, by drawing on a diverse range of secondary data sources and applying a systematic approach to analysis, this study aims to provide valuable insights and recommendations for businesses seeking to revolutionize their customer experience through innovative digital marketing approaches.
DIGITAL MARKETING TRENDS AND INNOVATIONS
In the ever-evolving digital marketing landscape, staying abreast of the latest trends and innovations is crucial for businesses looking to revolutionize their customer experience. This chapter delves into the most prominent digital marketing trends and innovations shaping today's customer experience landscape.
Artificial Intelligence and Machine Learning: Artificial intelligence (AI) and machine learning (ML) have emerged as game-changers in digital marketing, offering unprecedented opportunities for personalization and automation (Mahadasa, 2017). AI-powered algorithms analyze customer data to deliver highly targeted and relevant content, offers, and recommendations in real time. From chatbots and virtual assistants to predictive analytics and recommendation engines, AI and ML are revolutionizing how businesses engage with their customers, providing personalized experiences at scale.
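To make the idea concrete, the sketch below shows one minimal way such ML-driven targeting might look. The toy dataset, the feature names (site visits, cart additions, email opens), and the choice of scikit-learn's logistic regression are illustrative assumptions, not a method described in this article.

```python
# Minimal propensity-scoring sketch for targeted offers (assumed toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [site visits, cart additions, email opens]; label: purchased (1) or not (0).
X = np.array([[12, 3, 5], [2, 0, 1], [8, 2, 4], [1, 0, 0], [15, 5, 7], [3, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new visitor and target the offer only if purchase propensity is high.
new_visitor = np.array([[10, 2, 3]])
propensity = model.predict_proba(new_visitor)[0, 1]
if propensity > 0.5:
    print(f"Send personalized offer (propensity {propensity:.2f})")
```

In practice such a score would feed a recommendation engine or campaign tool rather than a print statement, but the targeting logic is the same.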
Voice Search Optimization:
With the rise of voice-enabled devices such as smart speakers and virtual assistants, voice search optimization has become increasingly crucial for businesses seeking to enhance their digital presence. Voice search queries are typically longer and more conversational than text-based searches, necessitating a shift in SEO strategies to accommodate natural language processing and optimize content for voice-based interactions. By optimizing their websites and content for voice search, businesses can improve their visibility and accessibility to voice-enabled consumers, enhancing the overall customer experience (Deming et al., 2018).
Augmented Reality (AR) and Virtual Reality (VR): Augmented reality (AR) and virtual reality (VR) technologies offer immersive and interactive experiences that blur the lines between the physical and digital worlds. AR enables users to overlay digital content onto the real world, while VR transports users to entirely virtual environments (Baddam, 2021). Businesses are leveraging these technologies to create engaging brand experiences, allowing customers to visualize products in their own space through AR or immerse themselves in virtual brand activations and storytelling experiences through VR. By integrating AR and VR into their digital marketing strategies, businesses can captivate audiences and create memorable experiences that drive brand engagement and loyalty.
User-Generated Content and Influencer Marketing: User-generated content (UGC) and influencer marketing have become powerful tools for brands to amplify their message and connect with their audience authentically. UGC, such as customer reviews, testimonials, and social media posts, serves as social proof and fosters trust and credibility among potential customers (Perakakis & Kopanakis, 2019). Influencer marketing, on the other hand, involves partnering with individuals with a large and engaged following on social media to promote products or services. By harnessing the reach and influence of UGC and influencers, businesses can extend their brand reach, spark conversations, and cultivate a community around the brand, enhancing the overall customer experience.
Data Privacy and Ethical Marketing: As consumers become increasingly concerned about data privacy and security, businesses must prioritize ethical and transparent marketing practices. GDPR and other data privacy regulations have ushered in a new era of data protection, requiring companies to obtain explicit customer consent before collecting and using their data for marketing purposes (Baddam, 2020). Ethical marketing practices encompass transparency, accountability, and respect for consumer privacy rights, fostering customer trust and loyalty. By adopting ethical marketing practices and respecting customer privacy preferences, businesses can build stronger relationships with their audience and enhance the overall customer experience.
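As a rough illustration of consent-gated data collection in the spirit of such regulations, the sketch below drops any event whose purpose the customer has not consented to. The ConsentRecord structure and the purpose labels are hypothetical, not drawn from GDPR text or from this article.

```python
# Hypothetical consent-gating sketch: events are stored only for consented purposes.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    customer_id: str
    purposes: set = field(default_factory=set)  # e.g. {"analytics", "marketing"}

def track_event(consent: ConsentRecord, purpose: str, event: dict, store: list) -> None:
    """Record the event only if the customer consented to this purpose."""
    if purpose in consent.purposes:
        store.append({"customer": consent.customer_id, **event})
    # Otherwise the event is silently dropped, respecting the customer's choice.

store = []
consent = ConsentRecord("c42", {"analytics"})
track_event(consent, "marketing", {"page": "/offers"}, store)  # dropped
track_event(consent, "analytics", {"page": "/home"}, store)    # stored
print(store)
```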
Digital marketing trends and innovations drive significant changes in how businesses engage with their customers, offering new opportunities for personalization, immersion, and engagement (Bujor & Avasilcăi, 2015). By embracing emerging technologies, leveraging user-generated content and influencer marketing, and prioritizing data privacy and ethical marketing practices, businesses can revolutionize their customer experience and gain a competitive edge in today's digital landscape.
PERSONALIZATION STRATEGIES FOR ENHANCED ENGAGEMENT
In the digital age, consumers expect personalized experiences catering to their preferences, interests, and needs. Personalization has become a cornerstone of effective digital marketing strategies, enabling businesses to deliver relevant and tailored content, products, and services to their audience (Siddique & Vadiyala, 2021). This chapter explores the importance of personalization in revolutionizing customer experience and examines critical strategies for implementing personalized marketing initiatives that drive engagement and loyalty.
Data-Driven Insights:
Central to effective personalization is collecting and analyzing customer data to gain actionable insights into their behavior, preferences, and purchase history. By leveraging advanced analytics tools and technologies, businesses can gather data from various touchpoints, including website interactions, social media engagement, and past purchases, to create detailed customer profiles (Fadziso et al., 2019). These insights enable businesses to segment their audience into distinct groups based on demographics, interests, and buying behavior, allowing for more targeted and relevant personalization efforts.
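One conventional way to operationalize such segmentation is RFM (recency, frequency, monetary) clustering. The sketch below is a minimal k-means version; the figures and the cluster count are invented for illustration rather than taken from this article.

```python
# Minimal RFM-style segmentation sketch with assumed toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: [recency in days, purchase frequency, monetary value].
rfm = np.array([[5, 20, 900.0], [90, 2, 50.0], [10, 15, 700.0],
                [120, 1, 30.0], [7, 25, 1200.0], [60, 4, 120.0]])

scaled = StandardScaler().fit_transform(rfm)  # put all features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

for row, label in zip(rfm, labels):
    print(f"customer {row} -> segment {label}")
```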
Dynamic Content Personalization: This approach involves tailoring website content, email campaigns, and digital ads to individual users based on their preferences and behavior (Surarapu & Mahadasa, 2017). Through dynamic content management systems and marketing automation platforms, businesses can deliver personalized experiences in real time, adapting content and messaging based on past interactions, location, and browsing history. For example, an e-commerce retailer may display product recommendations based on a customer's past purchases or show personalized offers to incentivize repeat purchases.
Behavioral Trigger Campaigns: Behavioral trigger campaigns are automated marketing campaigns triggered by specific actions or behaviors exhibited by customers. These campaigns deliver timely and relevant messages based on the customer's stage in the buying journey or their interactions with the brand. Common triggers include cart abandonment emails, welcome emails for new subscribers, and personalized recommendations based on browsing history. By capitalizing on these behavioral triggers, businesses can engage customers at critical touchpoints and drive higher conversion rates.
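A minimal sketch of one such trigger, cart abandonment, is shown below. The one-hour threshold, the event shape, and the reminder action are placeholder assumptions, not details from this article.

```python
# Toy event-driven trigger sketch for cart abandonment.
import time

ABANDON_AFTER_SECONDS = 3600  # assumed: one hour without checkout

def check_cart_abandonment(carts: dict, now: float) -> list:
    """Return customer ids whose carts have sat idle past the threshold."""
    return [cid for cid, last_touch in carts.items()
            if now - last_touch > ABANDON_AFTER_SECONDS]

open_carts = {"c1": time.time() - 7200, "c2": time.time() - 600}
for customer in check_cart_abandonment(open_carts, time.time()):
    print(f"trigger reminder email for {customer}")  # stand-in for a real send step
```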
Personalized Product Recommendations: Product recommendations are a powerful tool for driving sales and enhancing the customer experience. Businesses can generate customized product recommendations relevant to each individual's preferences and interests by analyzing customer data and purchase history. Whether through customized emails, website recommendations, or targeted ads, companies can leverage algorithms and machine learning to suggest products that align with the customer's past purchases, browsing behavior, and demographic profile, increasing the likelihood of conversion and customer satisfaction.
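The sketch below illustrates one common recommendation technique, item-based similarity over co-purchase patterns. The tiny user-item matrix is invented, and production systems typically use far richer signals.

```python
# Item-based recommendation sketch via cosine similarity (assumed toy matrix).
import numpy as np

# Rows = users, columns = products (1 = purchased).
purchases = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 1, 1]])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Similarity of every product to product 0, based on who bought what together.
target = purchases[:, 0]
scores = [cosine_sim(purchases[:, j], target) for j in range(purchases.shape[1])]
ranked = np.argsort(scores)[::-1]
print("products most similar to product 0:", [j for j in ranked if j != 0])
```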
Interactive Personalization: Interactive personalization involves engaging customers through interactive experiences that allow them to customize their interactions with the brand. This could include interactive quizzes, product configurators, or personalized assessments that provide tailored recommendations based on the customer's responses (Mahadasa & Surarapu, 2016). By empowering customers to personalize their experience, businesses can foster a sense of ownership and investment in the brand, leading to higher engagement and loyalty.
Personalized Customer Service: Besides customized marketing efforts, businesses can enhance the customer experience through personalized customer service interactions. By leveraging customer data and CRM systems, companies can provide tailored support and assistance to customers based on their past interactions and preferences. Whether through personalized email responses, proactive outreach based on customer feedback, or customized recommendations from customer service representatives, businesses can demonstrate their commitment to meeting the individual needs of their customers, fostering loyalty and advocacy (Neogy & Ahmed, 2015).
Personalization is a crucial strategy for revolutionizing customer experience in the digital age. By leveraging data-driven insights, dynamic content personalization, behavioral trigger campaigns, personalized product recommendations, interactive experiences, and personalized customer service, businesses can create tailored experiences that resonate with their audience and drive engagement, loyalty, and advocacy. As customer expectations evolve, companies must prioritize personalization as a fundamental component of their digital marketing strategy to remain competitive and deliver exceptional customer experiences.
IMMERSIVE TECHNOLOGIES IN CUSTOMER EXPERIENCE
In recent years, immersive technologies such as augmented reality (AR) and virtual reality (VR) have emerged as powerful tools for transforming the customer experience landscape. By providing immersive and interactive experiences beyond traditional marketing channels, these technologies enable businesses to engage customers innovatively, fostering deeper connections and driving brand loyalty (Homburg et al., 2017). This chapter explores the role of immersive technologies in revolutionizing customer experience and examines how businesses can leverage AR and VR to create memorable and impactful brand experiences.
Augmented Reality (AR)
Augmented reality (AR) overlays digital content onto the real world, blending virtual elements with the physical environment. AR technology enables customers to interact with products and experiences more immersively and engagingly, providing a unique opportunity for businesses to showcase their offerings compellingly (Baddam et al., 2018). For example, retail brands can use AR to allow customers to visualize products in their space before purchasing, while travel companies can use AR to provide virtual tours of destinations and attractions.
One notable example of AR in customer experience is beauty and fashion brands' use of AR try-on experiences. By leveraging AR technology, customers can virtually try on makeup products or clothing items using their smartphone or tablet, allowing them to see how the products look on themselves before making a purchase decision. This enhances the online shopping experience, reduces the likelihood of returns, and increases customer satisfaction.
Virtual Reality (VR)
Virtual reality (VR) creates entirely virtual environments that users can explore and interact with, offering a fully immersive and interactive experience. VR technology enables businesses to transport customers to virtual worlds where they can engage with products, services, and brand experiences in previously impossible ways. For example, automotive companies can use VR to provide virtual test drives of vehicles, while hospitality brands can offer virtual tours of hotels and resorts (Mandapuram et al., 2019).
One of the critical advantages of VR in customer experience is its ability to evoke strong emotions and create memorable experiences. By immersing customers in virtual environments that evoke excitement, awe, or inspiration, businesses can leave a lasting impression and build stronger emotional connections with their audience. This emotional resonance can increase brand loyalty and advocacy, as customers are more likely to remember and recommend brands that have provided memorable experiences.
Interactive Brand Experiences
Immersive technologies enable businesses to create interactive brand experiences that engage customers on a deeper level. Through interactive games, virtual tours, or immersive storytelling experiences, companies can captivate customers' attention and create memorable moments that leave lasting impressions. By allowing customers to actively participate in the brand experience, businesses can foster a sense of ownership and investment in the brand, leading to increased engagement and loyalty.
For example, IKEA's AR app allows customers to visualize how furniture will look in their homes before purchasing, while Marriott Hotels' VR Postcard experience enables guests to virtually explore destinations before booking a trip. These interactive brand experiences enhance the customer experience, differentiate brands from their competitors, and drive customer loyalty.
Integration with Marketing Campaigns
Immersive technologies can be seamlessly integrated into marketing campaigns to enhance their effectiveness and impact. Whether as part of a product launch, promotional event, or brand activation, AR and VR experiences can attract attention, generate buzz, and drive engagement. By incorporating immersive elements into marketing campaigns, businesses can create memorable and shareable experiences that resonate with their target audience.
For example, Coca-Cola's "Happiness Arcade" campaign used AR technology to transform a traditional arcade game into an immersive experience where players could interact with virtual elements using their smartphones. This innovative approach attracted attention, generated excitement, and increased consumer brand awareness and engagement.
Immersive technologies such as AR and VR are revolutionizing customer experience by providing innovative and engaging ways for businesses to interact with their audience (Tivasuradej & Pham, 2019). Whether through augmented reality try-on experiences, virtual reality brand activations, interactive games, or immersive storytelling, businesses can leverage these technologies to create memorable and impactful brand experiences that drive engagement, loyalty, and advocacy. As immersive technologies evolve and become more accessible, businesses must embrace them as a fundamental component of their digital marketing strategy to remain competitive and deliver exceptional customer experiences.
OMNICHANNEL INTEGRATION FOR SEAMLESS INTERACTIONS
In today's digitally connected world, consumers interact with brands through many channels, including websites, social media platforms, mobile apps, and physical stores. Omnichannel integration is a strategic approach that seeks to unify these channels to provide customers with a seamless and consistent experience across all touchpoints. This chapter explores the importance of omnichannel integration in revolutionizing customer experience and examines how businesses can leverage this approach to enhance engagement and drive loyalty.
Understanding Omnichannel Integration
Omnichannel integration goes beyond multichannel marketing, which involves using multiple channels to reach customers. Instead, omnichannel integration seeks to create a unified and cohesive experience that seamlessly connects all channels, allowing customers to move effortlessly between them without disruptions. Whether online or offline, customers should have access to the same products, information, and services, ensuring consistency and continuity throughout their journey. At the heart of omnichannel integration is the concept of customer-centricity, which prioritizes the needs and preferences of the customer above all else. By understanding the customer's journey and providing relevant and personalized experiences at every touchpoint, businesses can build stronger relationships with their audience and drive loyalty and advocacy.
Benefits of Omnichannel Integration
Omnichannel integration offers several benefits for both businesses and customers. For businesses, omnichannel integration provides valuable insights into customer behavior and preferences, enabling more targeted and personalized marketing efforts. By tracking customer interactions across channels, companies can gain a holistic view of the customer journey and identify opportunities for optimization and improvement.
Furthermore, omnichannel integration allows businesses to leverage data from one channel to personalize experiences on another. For example, a customer who browses products on a brand's website may receive personalized recommendations or promotions via email or social media based on their browsing history. This seamless integration enhances the customer experience, increases the effectiveness of marketing efforts, and drives sales and revenue.
For customers, omnichannel integration provides convenience, flexibility, and choice. Customers can choose the channel that best suits their needs and preferences at any given time, whether it's browsing products online, visiting a physical store, or interacting with a brand's mobile app. Moreover, omnichannel integration ensures consistency and coherence across channels, reducing customer friction and confusion as they navigate their journey.
Critical Strategies for Omnichannel Integration
Successful omnichannel integration requires careful planning, coordination, and execution across all channels. Critical strategies for achieving seamless omnichannel interactions include the following (a minimal data-consolidation sketch follows this list):

Unified Customer Data: Consolidate customer data from all channels into a centralized database to create a single view of the customer. This allows businesses to track customer interactions, preferences, and behavior across channels and provide personalized experiences accordingly (Crammond et al., 2018).

Cross-Channel Communication: Enable communication and collaboration between different channels to ensure consistency and coherence in messaging and branding. Whether through shared databases, integrated communication platforms, or cross-functional teams, businesses must facilitate seamless communication across channels to provide a unified experience for customers.

Seamless Transitions: Ensure customers can transition seamlessly between channels without disruptions or barriers. Whether starting a transaction online and completing it in-store or vice versa, customers should be able to pick up where they left off without repeating themselves or providing redundant information.

Personalized Experiences: Leverage customer data and insights to personalize experiences across all channels. Whether through targeted offers, personalized recommendations, or tailored messaging, businesses can create meaningful and relevant experiences that resonate with customers and drive engagement and loyalty.
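As noted above, here is a minimal sketch of consolidating per-channel records into a single customer view. The channel names and fields are assumptions for illustration only; a real implementation would also handle identity resolution across mismatched identifiers.

```python
# Toy single-view-of-customer consolidation across assumed channels.
from collections import defaultdict

web = [{"customer_id": "c7", "pages_viewed": 14}]
store = [{"customer_id": "c7", "visits": 2}]
email = [{"customer_id": "c7", "opens": 5}, {"customer_id": "c9", "opens": 1}]

profiles = defaultdict(dict)
for channel, records in (("web", web), ("store", store), ("email", email)):
    for rec in records:
        cid = rec["customer_id"]
        profiles[cid][channel] = {k: v for k, v in rec.items() if k != "customer_id"}

print(dict(profiles))  # one entry per customer, spanning all channels
```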
Case Studies and Examples
Numerous brands have successfully implemented omnichannel integration strategies to enhance the customer experience and drive business results. For example, Starbucks' mobile app seamlessly integrates with its loyalty program, allowing customers to earn rewards and make purchases across channels. Nike's "SNKRS" app provides a unified shopping experience for sneaker enthusiasts, allowing them to browse, purchase, and engage seamlessly with exclusive content and events. Omnichannel integration is a powerful approach for revolutionizing customer experience in the digital age. Businesses can enhance engagement, drive loyalty, and differentiate themselves in a competitive marketplace by unifying channels, leveraging customer data, and providing seamless and personalized interactions. As customer expectations evolve, businesses must prioritize omnichannel integration as a fundamental component of their digital marketing strategy to deliver exceptional customer experiences and drive business success.
MEASURING IMPACT: METRICS AND ANALYTICS
In the dynamic landscape of digital marketing, measuring the impact of innovative approaches on customer experience is essential for driving continuous improvement and achieving business objectives (Vadiyala, 2021). This chapter explores the importance of metrics and analytics in revolutionizing customer experience and examines key indicators that businesses can use to evaluate the effectiveness of their digital marketing initiatives.
Importance of Measurement
Effective measurement and analytics are critical for assessing the performance of digital marketing campaigns and determining their impact on customer experience. Businesses can gain valuable insights into customer behavior, preferences, and interactions across various channels by tracking key metrics and analyzing data insights. This enables enterprises to identify areas of strength and opportunity, optimize their marketing efforts, and drive meaningful outcomes that align with business goals.
Furthermore, measurement allows businesses to demonstrate their digital marketing initiatives' return on investment (ROI), providing tangible evidence of the value they deliver to the organization. By quantifying the impact of innovative digital marketing approaches on customer satisfaction, loyalty, and advocacy, businesses can justify investment in these initiatives and secure buy-in from stakeholders.
Critical Metrics for Measuring Impact
Several key metrics can be used to evaluate the impact of innovative digital marketing approaches on customer experience. These include the following (worked arithmetic for several of these metrics is sketched after the list):

Engagement Metrics: Engagement metrics such as click-through rates, time spent on site, and social media interactions provide insights into how customers interact with digital content and campaigns. High levels of engagement indicate that customers are actively interested and involved in the brand, contributing to a positive customer experience.

Conversion Metrics: Conversion metrics measure the effectiveness of digital marketing campaigns in driving desired actions, such as website sign-ups, purchases, or inquiries. By tracking conversion rates, businesses can assess the effectiveness of their marketing efforts and optimize their strategies to maximize conversions and ROI.
Customer Satisfaction Metrics: Customer satisfaction metrics, such as Net Promoter Score (NPS) and customer satisfaction surveys, provide insights into how customers perceive and feel about their experience with the brand. By collecting feedback and measuring satisfaction levels, businesses can identify areas for improvement and prioritize initiatives that enhance customer satisfaction and loyalty.
Customer Lifetime Value (CLV): CLV measures the total revenue a customer generates throughout their relationship with the brand. By calculating CLV, businesses can identify high-value customers, tailor their marketing efforts to maximize lifetime value, and prioritize retention and loyalty initiatives that drive long-term profitability (Nunes et al., 2013).
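For concreteness, the sketch below works through the plain arithmetic behind several of the metrics just listed. Every count and figure is invented; the NPS rule (promoters score 9-10, detractors 0-6) follows the standard convention, and the CLV formula shown is one common simplification (real models often add margins and discounting).

```python
# Worked arithmetic for CTR, conversion rate, NPS, and a simplified CLV (toy figures).
clicks, impressions = 420, 12_000
conversions, sessions = 95, 3_100
nps_scores = [10, 9, 8, 7, 6, 9, 3, 10, 8, 5]   # 0-10 survey responses

ctr = clicks / impressions                       # click-through rate
conversion_rate = conversions / sessions         # share of sessions that convert

promoters = sum(1 for s in nps_scores if s >= 9)
detractors = sum(1 for s in nps_scores if s <= 6)
nps = 100 * (promoters - detractors) / len(nps_scores)

# Simplified CLV: average order value x purchases per year x expected lifespan.
avg_order_value, purchases_per_year, lifespan_years = 60.0, 4.0, 5.0
clv = avg_order_value * purchases_per_year * lifespan_years

print(f"CTR {ctr:.2%} | conversion {conversion_rate:.2%} | NPS {nps:.0f} | CLV {clv:.0f}")
```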
Leveraging Data and Analytics
Innovative digital marketing approaches generate a wealth of data that can be leveraged to gain actionable insights into customer behavior and preferences. Advanced analytics tools and techniques, such as predictive analytics, machine learning, and customer segmentation, enable businesses to uncover patterns, trends, and correlations in data, providing deeper insights into customer needs and motivations (Wu et al., 2019). By analyzing data across multiple touchpoints and channels, businesses can gain a comprehensive view of the customer journey and identify opportunities for personalization and optimization. For example, by studying website traffic data and user behavior, companies can locate high-performing content and optimize their website for improved engagement and conversion rates.
Case Studies and Examples
Several brands have successfully leveraged metrics and analytics to measure the impact of innovative digital marketing approaches on customer experience. For example, Amazon uses advanced analytics and machine learning algorithms to personalize product recommendations for individual customers, driving increased sales and customer satisfaction. Similarly, Airbnb analyzes customer feedback and reviews to identify areas for improvement and optimize the guest experience. By leveraging data and analytics, Airbnb can identify trends and patterns in guest feedback, prioritize initiatives that enhance the guest experience, and drive higher satisfaction and loyalty levels (Vadiyala & Baddam, 2018).
Continuous Improvement and Optimization
Measuring the impact of innovative digital marketing approaches is an ongoing process that requires continuous monitoring, analysis, and optimization. By regularly reviewing performance metrics, experimenting with new strategies and tactics, and iterating based on insights and feedback, businesses can continuously improve the customer experience and drive meaningful outcomes that contribute to business success (Yuan et al., 2015).
Measuring the impact of innovative digital marketing approaches on customer experience is essential for driving continuous improvement and achieving business objectives. By leveraging key metrics and analytics, businesses can gain valuable insights into customer behavior, preferences, and interactions, enabling them to optimize their marketing efforts and deliver exceptional customer experiences that drive engagement, loyalty, and advocacy. As digital marketing continues to evolve, businesses must prioritize measurement and analytics as a fundamental component of their strategy to stay competitive and meet the changing needs of their customers.
MAJOR FINDINGS
Several key findings have emerged from this exploration of innovative digital marketing approaches aimed at revolutionizing customer experience, highlighting the significance of leveraging cutting-edge strategies and technologies to enhance engagement, drive loyalty, and differentiate brands in the competitive digital landscape.
Personalization is Paramount: One significant finding is the importance of personalization in modern digital marketing strategies. Personalized experiences tailored to individual preferences and behaviors significantly impact customer engagement and satisfaction. By leveraging data-driven insights, businesses can deliver targeted content, recommendations, and offers that resonate with customers personally, fostering deeper connections and driving loyalty.
Immersive Technologies Drive Engagement: Another key finding is the significant role of immersive technologies, such as augmented reality (AR) and virtual reality (VR), in enhancing customer experience. These technologies provide immersive and interactive experiences that captivate customers' attention and create memorable brand interactions (Vadiyala, 2017). By integrating AR and VR into marketing initiatives, businesses can provide customers with unique and engaging experiences that differentiate their brand and leave a lasting impression.
Omnichannel Integration Enhances Cohesion:
Omnichannel integration is critical in delivering seamless and cohesive customer experiences across multiple touchpoints. By unifying channels and providing consistent messaging and branding, businesses can streamline the customer journey and minimize friction, resulting in a more positive and cohesive customer experience. Omnichannel integration also enables enterprises to leverage data and insights from various channels to personalize interactions and drive engagement.
Measurement and Analytics Drive Optimization: Effective measurement and analytics are essential for optimizing digital marketing efforts and driving meaningful outcomes. By tracking key metrics such as engagement, conversion, customer satisfaction, and brand equity, businesses can gain valuable insights into the effectiveness of their marketing initiatives and identify areas for improvement. Leveraging data and analytics enables companies to make data-driven decisions, refine their strategies, and continuously improve the customer experience.
Continuous Improvement is Key: Finally, a significant finding is the importance of continuous improvement and optimization in digital marketing. The digital landscape is constantly evolving, and customer expectations are continually changing. Therefore, businesses must continuously monitor performance, experiment with new strategies and tactics, and iterate based on insights and feedback. By embracing a culture of continuous improvement, companies can stay ahead of the curve, adapt to changing trends, and deliver exceptional customer experiences that drive long-term success.
The significant findings highlight the transformative power of innovative digital marketing approaches in revolutionizing customer experience. Personalization, immersive technologies, omnichannel integration, measurement, and continuous improvement are essential to successful digital marketing strategies. By embracing these findings and leveraging cutting-edge techniques and technologies, businesses can differentiate their brand, drive engagement, and build long-lasting relationships with their customers in the digital age.
LIMITATIONS AND POLICY IMPLICATIONS
While innovative digital marketing approaches offer significant opportunities for revolutionizing customer experience, there are also several limitations and policy implications that businesses and policymakers need to consider.
Privacy and Data Protection Concerns: One of the primary limitations of digital marketing initiatives is the potential for privacy and data protection concerns. As businesses collect and analyze vast amounts of customer data to personalize experiences and target advertisements, there is a risk of infringing on individual privacy rights (Mahadasa, 2016). Policymakers must enact robust data protection regulations and ensure compliance with privacy laws to safeguard consumer privacy and protect against potential abuses of personal data.
Accessibility and Digital Divide: Another limitation is the accessibility of digital marketing experiences for all consumers. While digital technologies offer tremendous opportunities for engagement and interaction, there is a risk of exacerbating the digital divide and excluding specific segments of the population who may lack access to, or proficiency in using, digital technologies (Mahadasa et al., 2020). Policymakers must address digital inclusion and ensure equitable access to digital marketing experiences for all consumers, regardless of socioeconomic status or technological literacy.
Accuracy and Transparency in Personalization:
The accuracy and transparency of personalization efforts also present challenges. While personalized experiences can enhance customer engagement and satisfaction, there is a risk of misinterpreting customer data or making incorrect assumptions about individual preferences. Businesses need to prioritize accuracy and transparency in their personalization efforts, ensuring that customers understand how their data is being used and have control over their privacy settings (Surarapu, 2017). Policymakers can play a role in promoting transparency and accountability in digital marketing practices through regulatory oversight and consumer education initiatives.
Measurement and Attribution Challenges: Measuring the impact of digital marketing initiatives and attributing outcomes to specific strategies or channels can be challenging. The complexity of the digital landscape, combined with the multitude of touchpoints and interactions involved in the customer journey, makes it difficult to track and attribute conversions and outcomes accurately. Policymakers need to support efforts to develop standardized measurement frameworks and industry best practices for measuring the effectiveness of digital marketing campaigns, enabling businesses to better understand and optimize their marketing efforts.
While innovative digital marketing approaches hold immense potential for revolutionizing customer experience, they also present several limitations and policy implications that must be addressed. Policymakers play a critical role in ensuring consumer privacy and data protection, promoting digital inclusion, enhancing transparency and accountability in personalization efforts, and supporting efforts to improve measurement and attribution in digital marketing. By addressing these challenges and leveraging the opportunities presented by innovative digital marketing approaches, businesses and policymakers can work together to create a digital ecosystem that delivers meaningful and impactful customer experiences for all.
CONCLUSION
In conclusion, the revolutionization of customer experience through innovative digital marketing approaches represents a pivotal shift in how businesses engage with their audience in the digital age. From personalized experiences and immersive technologies to omnichannel integration and advanced analytics, enterprises are leveraging cutting-edge strategies and technologies to create meaningful and impactful customer interactions. Throughout this exploration, several key themes have emerged. Personalization is a cornerstone of effective digital marketing strategies, enabling businesses to deliver tailored experiences that resonate with individual preferences and behaviors. Immersive technologies such as augmented reality (AR) and virtual reality (VR) have transformed the customer experience landscape, providing immersive and interactive experiences that captivate customers' attention and drive engagement. Omnichannel integration is essential for delivering seamless and cohesive customer experiences across multiple touchpoints. Measurement and analytics are critical in optimizing digital marketing efforts and driving meaningful outcomes. By tracking key metrics and analyzing data insights, businesses can gain valuable insights into customer behavior, preferences, and interactions, enabling them to refine their strategies and deliver exceptional customer experiences.
However, limitations and policy implications also need to be addressed. Privacy and data protection concerns, accessibility issues, and challenges related to accuracy and transparency in personalization efforts highlight the importance of regulatory oversight and consumer education initiatives. Despite these challenges, the potential for revolutionizing customer experience through innovative digital marketing approaches is vast. By embracing personalization, leveraging immersive technologies, prioritizing omnichannel integration, and investing in measurement and analytics, businesses can differentiate their brand, drive engagement, and build long-lasting relationships with their customers in the digital age.
As digital marketing continues to evolve, businesses and policymakers must collaborate to address challenges, promote ethical practices, and create a digital ecosystem that delivers meaningful and impactful customer experiences for all. By doing so, businesses can position themselves for success and drive long-term growth in today's competitive marketplace.
| 2024-02-27T18:21:45.483Z | 2022-12-31T00:00:00.000 | {
"year": 2022,
"sha1": "82a58cd3da261dd5ffab0ad6555e5cc062cb4355",
"oa_license": "CCBYNC",
"oa_url": "https://i-proclaim.my/journals/index.php/gdeb/article/download/716/646",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3e4a0deb21cf263aac641108d390f71048052012",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": []
} |
635899 | pes2o/s2orc | v3-fos-license | An evaluation of the protective role of Ficus racemosa Linn. in streptozotocin-induced diabetic neuropathy with neurodegeneration.
OBJECTIVE
Ficus racemosa (FR) is one of the herbs mentioned in the scriptures of the Ayurveda as Udumbara with high medicinal value. The objective of this study was to evaluate the protective effect of FR against streptozotocin (STZ)-induced diabetic neuropathy with neurodegeneration (DNN).
MATERIALS AND METHODS
Diabetes was induced in Wistar rats with STZ, and the animals were divided into six groups, namely diabetic vehicle control, FR-treated (four), and glibenclamide-treated (one) groups, while one group consisted of normal control rats. After the 4th week of diabetes induction, treatment was started for a further 28 days (5th to 8th week) with FR aqueous extract (250 mg/kg and 500 mg/kg) and ethanolic extract (200 mg/kg and 400 mg/kg). Investigation of DNN was carried out through biochemical and behavioral parameter assessment in rats.
RESULTS
The study showed a significant fall in glycosylated hemoglobin (HbA1c) and blood glucose levels with FR treatment in diabetic rats. The antioxidant potential of FR was reflected in a marked rise in superoxide dismutase and catalase content and a reduction in serum nitrite level, while a significant fall in lipid peroxidation and C-reactive protein levels was observed in FR-treated diabetic rats. Furthermore, FR-treated diabetic rats also showed marked improvement in tail-flick latency, pain threshold, locomotion, and fall latency period.
CONCLUSION
Treatment with FR shows protection across the multiple pathways of DNN by improving blood glucose, HbA1c, and biochemical and behavioral parameters, which suggests a protective role of FR in the reversal of DNN.
Oxidative stress and inflammation play a crucial role in the development and progression of late-stage complications of diabetes, [4] and the level of C-reactive protein (CRP), an acute-phase reactant, rises dramatically during inflammatory processes. [5] The antioxidant defenses in humans are superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase. Lipid peroxidation (LPO) is one of the characteristic features of the chronic diabetic condition. Plants have been the major source of the treatment for diabetes in the Indian system of medicine and other ancient systems in the world. Ficus racemosa Linn. (FR) (Family Moraceae) is one of the plants mentioned in the ancient scriptures of Ayurveda. The aim of the present study was to investigate the protective role of FR Linn. in streptozotocin (STZ)-induced diabetic neuropathy and neurodegeneration (DNN), which has not been evaluated so far.
Drugs and Chemicals
All reagents were of analytical grade. Biochemical kits were purchased from Span Diagnostics, India. STZ was obtained from Sigma Chemicals, India. Glibenclamide (GL) was obtained as a gratis sample.
Extraction Methods of Ficus racemosa
FR stem bark was collected from the Directorate of Medicinal and Aromatic Plants Research, Gujarat. The stem bark was shade dried and converted into powder form by milling. An amount of 50 g of powder was dissolved in three-fourths volume of water, and the mixture was boiled, with the temperature not exceeding 50-60°C, until the volume of water came down to one-fourth. The mixture was cooled, and the concentrated portion was taken as the aqueous extract. A total of 200 g of powder was mixed well with two-thirds volume of 99.5% ethyl alcohol and refluxed for 3-4 days (3-4 h/day). The alcoholic portion was transferred to a rota-vapor (bath temperature 60°C, 110 RPM, vacuum 100) until the alcoholic extract was completely dry. The aqueous extract was prepared in 10% Tween 20 solution in double-distilled water, while the ethanolic extracts were prepared in 5% gum acacia solution for oral administration. Both vehicles used for the aqueous and ethanolic extracts were inert.
Animals and Experimental Protocol
The animal protocol was approved by the Institutional Animal Ethics Committee. Male Wistar rats (300 ± 50 g) were provided a standard pelleted diet and water ad libitum, and were kept at an environmental temperature of 23°C ± 3°C under a 12 h light-dark cycle. The animals were acclimatized to the experimental conditions for a week before the study. An acute toxicity study of FR was performed at a dose level of 2000 mg/kg, [6] which formed the basis for dose selection in the efficacy study.
Induction of Diabetic Complication with Treatment Schedule
Diabetes was induced in overnight fasted rats by a single intraperitoneal injection of fresh STZ [7,8] (60 mg/kg body weight) in citrate buffer (0.1 M, pH 7.4). After 48 h of STZ injection, fasting blood samples were withdrawn from the tail vein and the blood glucose level was measured using a glucometer (Accu-Chek, Johnson and Johnson, India). Animals with a fasting blood glucose level ≥230 mg/dl were randomized into groups. The development of DNN was confirmed by the basal nociceptive reaction at the 4th week after STZ injection, and all treatments were started thereafter, from the 5th to the 8th week. GL is widely used as second-line therapy in diabetes and can lower glycosylated hemoglobin (HbA1c) by 1-2%; [9] several research reports have mentioned the use of GL as a positive control. [8,10,11] At the end of the 8th week, behavioral analyses were done and blood samples were collected, while the brain and sciatic nerve were isolated and washed with ice-cold saline; homogenates were then prepared and stored at -80°C until further analysis. No mortality was observed during the study period.
The animals were divided into seven groups (n = 5): normal control (NC), diabetic control (DC), glibenclamide-treated (GL) diabetic rats, and diabetic rats treated with FR aqueous extract (250 mg/kg or 500 mg/kg) or FR ethanolic extract (200 mg/kg or 400 mg/kg). The inclusion and randomisation step is illustrated in the sketch below.
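As an illustration only, the following is a minimal Python sketch of the inclusion/randomisation step, assuming synthetic glucose values and hypothetical rat IDs and group labels (the study's actual records are not available); the normal control group comes from non-induced animals and is therefore not drawn from this pool.

```python
# Sketch of the inclusion/randomisation step: animals with fasting blood
# glucose >= 230 mg/dl are pooled and randomly assigned to the six diabetic
# groups (n = 5). Glucose values, rat IDs and group labels are synthetic
# placeholders; the normal control (NC) group uses non-induced animals.
import random

random.seed(1)
fasting_glucose = {f"rat{i:02d}": 150 + i * 8 for i in range(1, 41)}  # mg/dl

diabetic = [rat for rat, glucose in fasting_glucose.items() if glucose >= 230]
random.shuffle(diabetic)

groups = ["DC", "GL", "FR-aq-250", "FR-aq-500", "FR-et-200", "FR-et-400"]
n_per_group = 5
assignment = {group: diabetic[i * n_per_group:(i + 1) * n_per_group]
              for i, group in enumerate(groups)}
for group, rats in assignment.items():
    print(group, rats)
```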
Analysis of blood glucose and glycosylated hemoglobin level
The blood glucose level was analyzed with a glucometer using a glucose reagent strip, while the HbA1c level was measured as per the cited reference. [12]
Serum protein level and C-reactive protein
Serum protein (SP) level (g/dl) was estimated by the method provided with the assessment kit. [13] The CRP level (mg/l) was determined by the particle-enhanced immunoturbidimetry method. [14]
Serum antioxidant catalase and superoxide dismutase
CAT activity of serum samples was expressed in µmol H2O2 utilized per min/mg of protein. [15] The SOD level in serum was expressed as mU of SOD/mg protein. [16]
Serum nitrite level
The serum nitrite level was estimated using Griess reagent, which served as an indicator of nitric oxide production. [17]
Tissue homogenate level of malondialdehyde
The LPO product malondialdehyde (MDA) was estimated in tissue homogenate by the method cited in the reference. [18]
Behavioral Markers
Tail immersion (hot water) test
The tail of the rat was immersed in a hot water bath (55°C ± 0.5°C) until withdrawal or signs of struggle were observed (cutoff 10 s). Shortening of the tail-withdrawal time indicated hyperalgesia. [19]
Hot plate test
In this test, animals were individually placed on a hot plate with the temperature adjusted to 43°C ± 1°C. The latency to the first sign of paw licking or a jump response to avoid the heat was taken as an index of the pain threshold. [19]
Motor coordination activity
The motor coordination and performance of each rat was evaluated using a rota-rod apparatus. Latency to fall from the rotating bar was recorded in seconds. [20]
Locomotion activity
The photoactometer test was performed to study the effect of drug treatment on spontaneous motor activity; the number of photocell beam cutoffs was recorded. [21]
Statistical Analysis
Values in the results are expressed as mean ± standard error of the mean. Differences between group means were estimated using one-way analysis of variance. A paired t-test was applied for the comparison of groups at different time intervals. Results were considered statistically significant at P < 0.05, 0.01, and 0.001.
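As an illustration of the comparisons described above, the following is a minimal Python sketch of a one-way ANOVA and a paired t-test using SciPy; all numeric values are hypothetical stand-ins, not the study's measurements.

```python
# Sketch of the statistical tests described above, using SciPy. All values
# are hypothetical illustrative numbers, not the study's data.
import numpy as np
from scipy import stats

# One-way ANOVA across hypothetical group means (n = 5 per group)
normal_control   = np.array([4.9, 5.1, 5.0, 5.2, 4.8])
diabetic_control = np.array([8.0, 8.3, 7.9, 8.4, 8.1])
fr_treated       = np.array([5.6, 5.8, 5.5, 5.9, 5.4])
f_stat, p_anova = stats.f_oneway(normal_control, diabetic_control, fr_treated)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Paired t-test comparing the same animals at two time points
week_4 = np.array([310.0, 295.0, 305.0, 320.0, 300.0])  # e.g., glucose, mg/dl
week_8 = np.array([180.0, 170.0, 190.0, 175.0, 185.0])
t_stat, p_paired = stats.ttest_rel(week_4, week_8)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.4g}")

# Significance judged at the thresholds used in the study
for alpha in (0.05, 0.01, 0.001):
    print(f"p < {alpha}: {p_paired < alpha}")
```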
STZ-injected Wistar rats produced the cardinal signs of diabetes, i.e., weekly change in body weight [Figure 1], polyphagia, and polydipsia [Table 1], which persisted throughout the period of study.
Effect of Ficus racemosa on Body Weight, Food, and Water Intake
Chronic treatment with FR significantly prevented weight loss in diabetic rats as compared to DC rats [Figure 1]. FR-treated diabetic rats also showed a reduction in food consumption and increased water intake during the study period.
Effect of Ficus racemosa on blood glucose and glycosylated hemoglobin level
Sustained hyperglycemia was observed in STZ-induced diabetic rats during the study. There was a significant reduction in the blood glucose level of FR-treated diabetic animals as compared to DC rats [Figure 2]. The HbA1c level was significantly increased in diabetic animals (8.14 ± 0.34), while animals treated with the aqueous (5.66 ± 0.78, 5.44 ± 0.73) and ethanolic extracts of FR (6.38 ± 1.02, 5.47 ± 1.84) showed a significant reduction in HbA1c level [Table 2].
Effect of Ficus racemosa on Antioxidant Level
Antioxidant enzyme (CAT and SOD) levels were significantly reduced in diabetic animals (P < 0.05) as compared to normal animals, while FR-treated groups showed a significant rise in CAT and SOD enzyme levels relative to DC rats.
Effect of Ficus racemosa on Serum Protein, C-reactive Protein, and Nitrite Level
Protein present in the blood was significantly higher in diabetic rats as compared to NC animals [Table 2]. Treatment with FR showed a significant reduction in SP content as compared to DC rats. CRP levels were significantly increased in diabetic rats as compared to normal rats, while FR-treated diabetic rats showed a significant reduction in CRP level as compared to DC rats [Table 2]. Nitrite levels were significantly increased in diabetic rats as compared to NC rats, while FR-treated animals showed a significant reduction in nitrite level as compared to DC rats [Table 2].
Figure 1: Effect of Ficus racemosa aqueous and ethanolic extract administration on weekly changes in body weight (g). Each value is the mean ± standard error of the mean (n = 5); statistically significant differences are indicated for Ficus racemosa treated groups (P < 0.05) as compared to diabetic control animals.
Figure 2: Effect of Ficus racemosa aqueous and ethanolic extract administration on blood glucose before and after treatment. Each value is the mean ± standard error of the mean (n = 5); statistically significant differences are indicated for the Ficus racemosa, glibenclamide treated and normal control groups (**P < 0.01) compared to diabetic control animals, and for diabetic control (##P < 0.01) as compared to the normal control group.
Effect of Ficus racemosa on Tissue Malondialdehyde Level
Tissue MDA levels were significantly raised in the brain and nerve tissues of diabetic rats. A two-fold decrease in brain MDA level was observed in FR (aqueous extract) treated diabetic rats, while a four-fold decrease in brain MDA level was observed in FR (ethanolic extract) treated diabetic rats as compared to DC rats [Table 2]. Nerve MDA levels were reduced four-fold in FR (ethanolic extract) treated diabetic rats as compared to DC rats.
Effect of Ficus racemosa on Behavioral Markers
STZ-injected rats had a nociceptive threshold significantly lower than NC rats, as observed in the tail immersion test and hot plate assay. FR-treated diabetic rats exhibited a rise in tail flick latency as compared to DC rats [Table 3]. The FR aqueous extract treated group showed a significantly improved pain threshold, while the FR ethanolic extract treated group showed a two-fold rise in pain threshold as compared to DC rats [Table 3]. Diabetic animals showed reduced locomotion (lo) ability, with the number of photocell beam cutoffs significantly different from NC rats (100.2 ± 2.03). FR-treated diabetic animals showed a significant rise in lo activity as compared to DC rats [Table 3]. The rota-rod test demonstrated impairment of motor function and coordination in diabetic rats, with a significant reduction in fall-off time as compared to NC rats. FR extract treated diabetic rats showed a significant increase in fall-off time as compared to diabetic animals [Table 3].
Discussion
STZ causes direct DNA damage to the beta cells of the pancreatic islets, which leads to a hyperglycemic state. The increase in blood glucose and HbA1c levels following STZ treatment observed in our study is supported by other work. [8,22] Coscinium fenestratum stem and Catharanthus roseus brought the status of blood glucose and HbA1c back to the normal range in diabetic rats, [10] supporting our study results. Alteration of antioxidant defense in the diabetic rats was evidenced by a significant reduction in serum antioxidant enzyme activity. The decrease in antioxidant enzyme activity in the hyperglycemic rats could be due to oxidative stress induced inactivation, [23] which was observed in the present study. Antioxidant levels were increased with FR treatment in DNN; this is supported by the finding that carnitine and lipoic acid reduced free radical production and increased antioxidant status, thereby lowering oxidative stress, [24] while ginger significantly improved the levels of SOD and CAT in diabetic rats. [11] The excessive production of superoxide and peroxynitrite in the sciatic nerve has been linked with altered vaso-relaxation responsible for nerve perfusion irregularities. [25] In hyperglycemia induced oxidative injury, the key mediator is peroxynitrite, formed by the combination of superoxide with nitric oxide, which exerts detrimental effects on nerve tissue leading to neuropathic pain. [26] The peroxynitrite thus formed further initiates the pathways implicated in the development of diabetic neuropathy and degeneration. [27] The present study showed a reduced nitrite level with FR in diabetic rats, supported by work showing a reduced neural nitrite level in naringin-treated diabetic rats. [28] In another study, cannabis exhibited significantly lower levels of nitrite in the diabetic condition. [29] CRP, an acute phase reactant, is a highly sensitive marker of inflammation. [4] Treatment with FR extracts showed anti-inflammatory potential, which is supported by results for berberine, which reduced plasma CRP levels. [30] LPO is one of the characteristic features of chronic diabetes. The overall effect of LPO is to decrease membrane fluidity, deformability and viscoelasticity of tissues, which was improved by treatment with FR extracts in diabetic rats; other studies showed a decreased LPO level in diabetic rats treated with ginger, [11] and Erythrina variegata also showed a reduction in basal LPO as compared to DC. [31] The nociceptive threshold was significantly lower in diabetic animals than in nondiabetic animals, indicating that the diabetic rats exhibited thermal hyperalgesia. Diabetic rats showed a significant reduction in paw withdrawal threshold, which indicates the development of hyperalgesia. The present study revealed that treatment with FR prevents allodynia, which further reduces neuropathic pain in diabetic rats; in supporting evidence, naringin-treated diabetic rats showed attenuation of the reduced mean tail withdrawal latency as compared to diabetic rats. [28] The present study also evaluated the behavioral response in motor and locomotion performance of diabetic and NC rats through rota-rod and photoactometer tests. Diabetic rats showed a lower fall-off time from the rotating rod when compared to NC rats, suggesting impairment in their ability to integrate sensory input with appropriate motor commands to balance their posture, as shown in the present study and supported by the literature. [32] The present study showed reduced motor performance (mp) and locomotion (lo) in STZ-induced diabetic rats, and FR-treated rats showed improvement in mp and lo. In one study, fall latency was improved, motor incoordination was prevented, and lo counts were improved when diabetic rats were treated with the extract of Parkinsonia aculeata, [32] which supports our study results.
Conclusion
The data of the present study suggest that FR exhibits a protective effect by reducing the complications of DNN through preventing a rise in glycated hemoglobin content, reducing oxidative-nitrosative stress, and decreasing early inflammation. FR treatment also showed excellent antioxidant potential with a low level of LPO, thus providing protection to diabetic tissue. Behavioral parameters were improved by FR treatment in diabetic animals. FR may be considered a future option for the reversal of DNN due to its variety of pharmacological actions with proven results. However, further studies are required for a better understanding of the mechanism of action of FR. | 2018-04-03T04:39:29.687Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "aaf398af2fc7186cc3d8cff6cc93e4b82b22c0f3",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4689013",
"oa_status": "GREEN",
"pdf_src": "WoltersKluwer",
"pdf_hash": "5e8d24a6c6c2b4239ed9dec3f126fc50261212bb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270800125 | pes2o/s2orc | v3-fos-license | The Influence of the Diameter of Orthodontic Mini-Implants on Primary Stability: Bending Tests—An In Vitro Study
Orthodontic Mini-Implants have a high success rate, but it is crucial to assess the load that they bear in order to maintain their primary stability. Increasing the diameter can improve this stability, but there are limitations due to the proximity of the tooth roots. To avoid damage, smaller diameters are used, which can decrease resistance and cause permanent deformations. Objective: The objective of this study is to evaluate the influence of the diameter of Mini-Implants through bending force tests, taking into account primary stability after one and two insertions. Methods: Here, 40 Ti6Al4V alloy Mini-Implants of two different brands and diameters were divided into eight groups; half received one insertion into the artificial bone and the rest received two. All were subjected to a constant bending force using an INSTRON Electropuls E10000LT (Norwood, MA, USA) until fracture. Results: The smaller-diameter Mini-Implants were less resistant to fracture, but both diameters were able to withstand the loads produced by orthodontic movements. As for the insertions, there were no statistically significant differences. Conclusions: There is an advantage to using 1.6 mm Mini-Implants over 2.0 mm ones, as the smaller diameter does not lead to fracture under the forces used in orthodontic treatment. Having one or two insertions did not have a statistically significant effect.
Introduction
Orthodontic treatment requires stable anchorage, and some traditional orthodontic appliances depend considerably on the patient's cooperation to reinforce this anchorage. In order to improve this factor, Mini-Implants (MIs) have emerged as an alternative that is independent of the patient's cooperation.
The use of MIs for skeletal anchorage has increased in orthodontic practice, making treatment more efficient and faster, since it does not depend directly on the patient's cooperation [1,2]. The use of MIs as an aid in orthodontic treatment has had a success rate of 75% to 90% [3]. One of the crucial points when placing these devices is determining the torque required and assessing the load they can bear in order to preserve primary stability and absolute anchorage [3,4]. Even with a high success rate, the reinsertion of MIs is a common procedure. The need for this procedure may arise due to anatomical limitations or the loss of primary stability.
Stability is achieved through mechanical retention, since MIs do not undergo the process of osseointegration. Due to their great advantages, these devices have several indications, including intrusions, extrusions, medialisation, dental distalisation and the treatment of open and deep bites [4,5]. Primary stability (obtained immediately after placement in the bone) is crucial for MIs' success [4,6,7]. These devices can be applied to practically the entire oral cavity and used with different types of forces [6]. Primary stability can be improved by increasing the diameter and length of MIs, which translates to better performance. However, this increase is limited by the proximity of adjacent tooth roots and the risk of contact [3]. To ensure good primary stability of MIs, several factors need to be considered, such as the type of bone (cortical and trabecular), the characteristics of the Mini-Implant (MI) (diameter, length and shape), its position (placement angle), the condition of the gingival tissue around the MI, the patient's age (considering that the quantity and quality of bone increase with age) and the force applied. If a load is applied to the MI without it having sufficient stability, loss of stability may occur [4,7-9]. Studies indicate that titanium alloy MIs remain stable under forces of up to 250 g and are more effective when subjected to immediate forces [8].
The literature points out that, in orthodontic practice, the most commonly used MIs are those made of the titanium alloy Ti6Al4V due to their greater resistance to corrosion and greater biocompatibility; they are less likely to be rejected when placed in patients, leading to an increase in MI performance [10]. The minimum recommended length for MIs is around 6 mm and usually goes up to 10 mm, and the diameter usually varies between 1.3 mm and 2.0 mm [11].
Although one of the main advantages of MIs is the low risk of complications, it is possible for these to occur. To minimise the risk of injury to adjacent anatomical structures, smaller diameters are used. However, a significant issue is that reducing the diameter of an MI decreases its resistance to both the maximum fracture torque and the load it can support, which can lead to permanent deformation. Smaller diameters are therefore less resistant to bending forces, which can compromise primary stability [10].
Therefore, the first aim of this study was to evaluate the influence of the diameter and brand of MIs on their resistance through mechanical bending-force tests, thereby evaluating the MIs' primary stability. The second objective was to assess primary stability after one and two insertions. Two brands used in orthodontic practice were evaluated.
Materials
All materials and chemicals were used in accordance with the manufacturers' standards. This study used two brands of Mini-Implants (MIs), one of which was Fatscrew (Fts) from Air Orthodontics® (Barcelona, Spain). The other brand was a white-label brand (MB) produced by Worldtrade-center® (Beijing, China) and marketed via eBay® (San Jose, CA, USA). These MIs were inserted into an artificial bone with characteristics similar to those of the human jawbone, Sawbones® (Sawbone Europa AB, Malmö, Sweden).
Methods
A standard laboratory protocol was established and applied to test all selected samples at the Laboratory of Investigation in Oral Rehabilitation and Prosthodontics, UNIPRO-Oral Pathology and Rehabilitation Research Unit, University Institute of Health Sciences (IUCS), CESPU, Gandra, Portugal.
Sample Preparation
Forty Ti6Al4V alloy (grade V) self-drilling MIs were used in this study: twenty Fatscrew (Fts) brand MIs from Air Orthodontics® and twenty white-label (MB) MIs from Worldtrade-center®. For each brand, 10 MIs were 1.6 mm in diameter and the remaining 10 were 2.0 mm in diameter. The 40 MIs were divided into eight groups of 5 MIs each, with four groups having only one placement in the artificial bone and the remaining four groups having two insertions: Group 1 (5 Fts MIs of 1.6 mm Ø, placed once in the artificial bone), Group 2 (5 Fts MIs of 1.6 mm Ø, placed twice), Group 3 (5 Fts MIs of 2.0 mm Ø, placed once), Group 4 (5 Fts MIs of 2.0 mm Ø, placed twice), Group 5 (5 MB MIs of 1.6 mm Ø, placed once), Group 6 (5 MB MIs of 1.6 mm Ø, placed twice), Group 7 (5 MB MIs of 2.0 mm Ø, placed once) and Group 8 (5 MB MIs of 2.0 mm Ø, placed twice).
Elaboration of Artificial Bone Blocks
The MIs were placed in artificial bone with characteristics similar to those of a human jawbone. The material used was Sawbones® (Sawbone Europa AB, Malmö, Sweden). Sawbones' epoxy formulation is filled with short glass fibres and used to simulate cortical bone for structural testing. This simulated cortical bone has a density (2.0 g/cc), fracture toughness (6.0 MPa), tensile strength (150 MPa), tensile modulus (20 GPa), flexural modulus (20 GPa), flexural strength (225 MPa) and hardness all similar to cadaveric cortical bone. This grey/green material serves as the cortical bone in the manufacturer's composite bone models. The cellular, rigid polyurethane foam has larger pores to resemble cancellous bone; different densities can be chosen, and it is off-white in colour [12].
In order to better represent the human jawbone, a 2 mm-thick sheet of 4th-generation fibre-filled epoxy with the following specifications was used as the cortical bone layer: short-fibre-filled epoxy, direction of fibre parallel to width (120 mm), density ± 2.5%.
To represent the cancellous bone, we used a rigid cellular foam block of 20 PCF (0.32 g/cm3) with a thickness of 10 mm (Figure 1a), with the following specifications: cellular, rigid polyurethane foam, thickness parallel to the direction of rise, density ± 10%. The epoxy sheet was coupled to the rigid cellular foam block with cyanoacrylate, as indicated by Sawbones® (Figure 1b). This bone material was divided into 1.5 × 1.5 cm fragments (Figure 1c).
Insertion into Artificial Bone
The MIs were placed using a drill stand machine (Figure 2a), to which an Implantmed® Plus SI-1023 micromotor with an S-NW-W&H wireless foot pedal (REF: 30288000), made in Austria, was attached.
To place all the MIs, the micromotor was calibrated at 50 rpm with a maximum torque of 60 N·cm. Zetalabor and R&S Turbocclusion silicone, shown in pink and orange in Figure 2b, was used to ensure that the contra-angle of the micromotor remained firm during MI placement and did not move.
For the MIs that were placed twice in the artificial bone, the second insertion was made next to the first one. In this way, it was possible to retrace their placement so as to avoid the operator errors that often occur. To ensure uniform placement of the MIs in the artificial bone, the centres of all the bone blocks were determined (Figure 2c), and the MIs were then placed in a holder so that they were fixed in the same position as in the drilling machine. This ensured that there was no variability in the placement of the MIs in the artificial bone, so that no operator error would occur in the later bending-force tests.
Compression Test to Measure the Fracture Resistance of Different Mini-Implants
The 40 MIs were subjected to a single-load bending force at a constant speed of 10 mm/min on the INSTRON® Electropuls E10000 LT universal testing machine (Norwood, MA, USA). To provide fixed support for the sample and ensure compatibility with the INSTRON® machine, the artificial bone material was embedded in a 2.0 × 2.0 cm metal support with Probase Cold self-curing acrylic from Vivadent® (Madrid, Spain). This facilitated its connection to the Electropuls E10000 LT testing machine, a dynamic fatigue testing machine with a linear dynamic capacity of 10 kN, a linear static capacity of 7 kN, a linear stroke of 60 mm and a torque capacity of 100 N·m, which allows static and dynamic axial and torsional tests in accordance with the ISO 7500-1 standard [13]. It has an accredited calibration force of up to 5 meganewtons in accordance with ISO 7500-1 and ASTM E4 [14].
The fracture test was carried out via compressive loading applied to the neck of the Mini-Implant (MI), coupled to the load cell of the testing machine. Fracture was indicated by an audible click and confirmed by a sharp drop in the load-deflection curve; the test results were recorded using Bluehill Calculation Reference Software (Instron®, Norwood, MA, USA), which facilitated the definition and execution of tests and data acquisition. Subsequently, all values were statistically analysed. The loads required for fracture were recorded in Newtons (N). The bending force depended on the strength of the MIs and was detected by the load cell. The points of deformation initiation and fracture, determined by the load cell, were considered the key points of this test, as shown in Figure 3a-d. All this information was transferred directly to the computer connected to the machine.
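To illustrate how such a key point could be identified from a recorded load-deflection curve, the following is a minimal Python sketch; the synthetic curve and the 30% drop threshold are illustrative assumptions, not the algorithm used by the Bluehill software.

```python
# Sketch: locating the fracture point on a load-deflection curve as the
# last point before the load drops sharply. The synthetic curve and the
# 30% drop threshold are illustrative assumptions, not Bluehill's method.
import numpy as np

deflection = np.linspace(0.0, 5.0, 500)   # mm
load = 200.0 * np.tanh(deflection)        # rising load, plateau near 200 N
load[400:] *= 0.2                         # abrupt drop simulating fracture

def find_fracture(load_n: np.ndarray, drop_fraction: float = 0.3):
    """Return the index of the last point before the load falls, between
    consecutive samples, by more than drop_fraction of the peak load."""
    peak = load_n.max()
    for i in range(1, len(load_n)):
        if load_n[i] < load_n[i - 1] - drop_fraction * peak:
            return i - 1
    return None

idx = find_fracture(load)
if idx is not None:
    print(f"Fracture at {deflection[idx]:.2f} mm deflection, {load[idx]:.1f} N")
```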
Statistical Analysis
Data were analysed with R software, version 4.3.2 [15]. Three-way ANOVAs with type II sums of squares were used [16]. Type II sum of squares (SS) was chosen in place of type III because it compares the change in SS against a model with all other effects of equal or lower order (e.g., main effects and two-way interactions), whereas type III SS compares against a model containing all other effects regardless of order. We calculated and assessed the main, two-way and three-way effects of brand, insertions and diameter as fixed effects. Generalised eta squared (η2g) was calculated to measure the effect sizes of the main, two-way and three-way effects, according to Cohen's [17] benchmarks for small (0.01), medium (0.06) and large (0.14) effects.
Residual QQ plots and Shapiro-Wilk tests were used to assess the normality of the residuals. Levene's test was used to test for homoscedasticity of variances. For both tests, the null hypothesis was not rejected for p > 0.05. Statistical significance was set at 5%.
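The study performed this analysis in R 4.3.2; as an illustration only, the sketch below reproduces an equivalent pipeline in Python with statsmodels and SciPy on randomly generated stand-in data. The data frame layout and column names are assumptions; for a fully between-subjects design such as this one, generalised eta squared reduces to SS_effect / (SS_effect + SS_residual).

```python
# Sketch of an equivalent pipeline in Python (the study itself used R 4.3.2):
# three-way ANOVA with type II sums of squares, generalised eta squared and
# the residual diagnostics. Data are random stand-ins; column names are
# assumptions, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "brand":      np.repeat(["Fts", "MB"], 20),
    "diameter":   np.tile(np.repeat(["1.6", "2.0"], 10), 2),
    "insertions": np.tile(np.repeat(["one", "two"], 5), 4),
})
df["force"] = 120 + 60 * (df["diameter"] == "2.0") + rng.normal(0, 10, len(df))

model = ols("force ~ brand * insertions * diameter", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)  # type II sums of squares

# For a fully between-subjects design, generalised eta squared reduces to
# SS_effect / (SS_effect + SS_residual); the value on the Residual row
# itself is not meaningful.
ss_res = anova.loc["Residual", "sum_sq"]
anova["eta2_g"] = anova["sum_sq"] / (anova["sum_sq"] + ss_res)
print(anova)

# Residual normality and homoscedasticity checks
print("Shapiro-Wilk:", stats.shapiro(model.resid))
cells = [g["force"].to_numpy()
         for _, g in df.groupby(["brand", "insertions", "diameter"])]
print("Levene:", stats.levene(*cells))
```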
Primary Stability Loss
Tables 1-4 show the main effects (Table 1), two-way effects (Tables 2 and 3) and three-way effects of brand, insertions and diameter on force (N) at loss of primary stability (Table 4). The main effects of brand and diameter (Table 1) were statistically significant: F(1,32) = 9.06, p = 0.005, η2g = 0.22 and F(1,32) = 201.59, p < 0.001, η2g = 0.86, respectively, both with a high effect size. The use of Fatscrew (Fts) and a diameter of 2.0 mm led to increased force (N) at loss of primary stability. Having one or two insertions did not show a significant main effect, p = 0.207. Brand's interaction with diameter (Table 2) was statistically significant, F(1,32) = 35.57, p < 0.001, η2g = 0.53, with a high effect size. Using a diameter of 2.0 mm led to an adjusted mean of 178.75 for Fatscrew (Fts) and 198.01 for the white label, contradicting the results for the main effects. This was due to the brand effect at the lower diameter (1.6 mm), where the adjusted mean was 125.09 for Fatscrew (Fts) and 66.61 for the white label (p < 0.001).
Brand's interaction with insertions (Table 3) was statistically significant, F(1,32) = 5.25, p = 0.029, η2g = 0.14, with a high effect size. Differences between brands were statistically significant with two insertions (p < 0.001), with adjusted means of 163.58 for Fatscrew (Fts) and 129.04 for the white label, but there was no statistical difference for one insertion.
Three-way effects (Table 4) were not statistically significant. The highest results for force (N) at loss of primary stability were found for the white label at 2.0 mm diameter, closely followed by Fatscrew (Fts) at 2.0 mm, but only with two insertions.
Figure 4 shows boxplots for the 3-way interactions. Fatscrew adjusted means were very close to those exhibited by the white label for 2.0 mm. For 1.6 mm, the mean force (N) was higher in Fatscrew (Fts), particularly for two insertions, whilst for the white label this was the lowest result of all. Figure 5 shows that the normality assumption for residuals was met, with all points falling within the 95% normal bounds. The result of the complementary Shapiro-Wilk test was p > 0.05. Variances were homoscedastic, with F(7,15) = 0.74, p = 0.638.
Fracture
Tables 5-8 show the main effects (Table 5), two-way effects (Tables 6 and 7) and three-way effects of brand, insertions and diameter on force (N) at fracture (Table 8). The main effects of brand and diameter (Table 5) were statistically significant: F(1,32) = 9.39, p < 0.001, η2g = 0.50 and F(1,32) = 402.42, p < 0.001, η2g = 0.93, respectively, both with a high effect size. Fatscrew (Fts) and a diameter of 2.0 mm led to increased force (N) at fracture. Having one or two insertions did not lead to a significant main effect, p = 0.598.
Brand's interaction with diameter (Table 6) was statistically significant, F(1,32) = 49.32, p < 0.001, η2g = 0.61, with a high effect size. A diameter of 2.0 mm led to an adjusted mean of 202.15 for Fatscrew (Fts) and 209.13 for the white label; these very similar results do not explain the differences found in the main effect of brand. This was due to the effect of brand at the lower diameter (1.6 mm), where the adjusted means were 138.08 for Fatscrew (Fts) and 76.03 for the white label (p < 0.001).
Brand's interaction with insertions (Table 7) was statistically significant, F(1,32) = 6.87, p = 0.013, η2g = 0.18, with a high effect size. Differences between brands were statistically significant for two insertions (p < 0.001), where the adjusted means were 177.76 for Fatscrew (Fts) and 137.45 for the white label, and for one insertion (p < 0.001), where the adjusted means were 162.36 for Fatscrew (Fts) and 147.71 for the white label.
Three-way effects (Table 8) were statistically significant, F(1,32) = 6.87, p = 0.013, η2g = 0.18, with a high effect size. The three-way interactions showed that the highest results for force (N) at fracture were produced by the white label with a 2.0 mm diameter, closely followed by Fatscrew (Fts) with 2.0 mm. The white-label brand showed the worst performance of all with a 1.6 mm diameter.
Figure 6 shows boxplots for the three-way interactions. Figure 7 shows that the residual normality assumption was met, with all points falling within the 95% normal bounds. The complementary Shapiro-Wilk test gave p > 0.05. Variances were homoscedastic, with F(7,15) = 0.32, p = 0.356.
Discussion
The use of Mini-Implants (MIs) for temporary skeletal anchorage (TADs) has increased in orthodontic practice, making treatment faster and more efficient [1,2]. However, some studies report a failure rate of approximately 10-15% [3,18]. In order to reduce this failure rate, several parameters need to be assessed and analysed. The main one is the preservation of the primary stability of the MIs, for which it is necessary to study the insertion and removal torque or the load they can bear through assessment of the horizontal forces applied to them [3,4,6]. During orthodontic treatment, MIs are subjected to forces perpendicular to their axes, so in order to test the MIs in our study, horizontal resistance tests were carried out [16].
According to the American Society for Testing and Materials (ASTM), rigid polyurethane foam is an ideal material for testing MIs as well as other medical equipment (ASTM F1839-08(2012)) [19]. Although artificial foams have limitations and do not fully represent real human jawbone, they are widely used in biomechanical tests, simulation and the evaluation of dental implants [20-22]. According to numerous articles, Sawbones is one of the materials with characteristics most similar to human bone [6,7,18,23]. In order to represent the human jawbone, two artificial bone materials with different characteristics were needed to represent both the spongy part and the cortical jawbone, so that the similarity to human bone could be maximised. In our study, all the MIs were placed in a Sawbones® artificial bone block to better represent human bone, a procedure used in several similar studies [6,7,18,23]. In this study, a 2 mm-thick sheet of fibre-filled epoxy was used to represent the cortical bone and a 10 mm-thick rigid block of 20 PCF (0.32 g/cm3) cellular foam was used to represent the cancellous bone, a technique used by Hergel et al. when they evaluated primary stability through various tests and the effects of sterilisation [6].
It may be necessary to replace MIs due to factors such as anatomical limitations or loss of primary stability. In terms of patient safety and ethics, MIs can only be reinserted in the same patient, and the same Mini-Implant (MI) cannot be used in several patients. Another aspect to consider is the economic one, whereby one must try to minimise the medical costs associated with orthodontic treatment. Kim et al. [18] and Hergel et al. [6] concluded in their studies that MIs can be replaced twice, especially in cases of failure.
Many studies have shown that one of the characteristics of MIs that improves primary stability, as well as fracture resistance, is the chosen diameter [4,7,10,24]. However, it is important to note that this increase is restricted by the proximity of adjacent tooth roots [3]. The average inter-radicular space typically ranges from 2.5 mm to 3.5 mm, which means that using MIs with diameters larger than 2 mm poses a risk of unintended root contact [25]. Regarding the maximum diameter, microfractures are especially common when using MIs with a diameter of 2 mm; thus, diameters of 1.5 mm and 1.6 mm offer a good balance between MI durability and minimising cortical damage [25,26]. It is crucial to be cautious about the load applied, as a lack of sufficient stability in MIs can result in permanent displacement [4,7-9]. Wilmes et al. [7] observed that the diameter of the MIs was significantly associated with their stability; the authors reported that MIs with a diameter of 2.0 mm achieved greater primary stability compared to those with a diameter of 1.6 mm, although only torsional force tests were used in that study. Barros et al. [10], in addition to torsional strength, evaluated flexural strength, showing that it was significantly influenced by the diameter of the MI; the authors concluded that diameter accounted for 83.5% of the total variation in flexural strength. This was also confirmed in the study by Haghigh et al. [24], where diameter played a more important role than length in reducing tension and displacement. Haghigh et al. [24] reported a 53% contribution of diameter and advised increasing diameter and length first if stability is at risk. Chatzigianni et al. [4] determined that at low perpendicular forces (0.5 N) there were no significant differences in displacement according to MI diameter; at higher forces (2.5 N), 2.0 mm diameter MIs moved significantly less than 1.5 mm diameter MIs. It was also concluded that length and diameter had a statistically significant influence at forces above 1 N.
In our study, the 2.0 mm diameter MIs (groups G3, G4, G7 and G8) showed statistically significantly greater resistance to bending force, manifesting in both fracture and displacement, compared to the 1.6 mm MIs (G1, G2, G5 and G6). For the average fracture force, the 2.0 mm MIs withstood 205.64 N, while the 1.6 mm MIs withstood only 107.06 N. Fracture was detected by an audible pop during the investigation and confirmed by a sharp drop in the load-deflection curve.
According to Hergel et al., the repeated use of MIs can compromise their stability and performance [6]; however, in our study, no significant differences were found in the stability and performance of MIs with repeated insertion compared to single use, indicating that using them twice poses less of a problem.
As for placement in the bone with one or two insertions, there were no statistically significant differences in either the loss of primary stability or the fracture, with the 1.6 mm MIs placed once in the bone (groups G1 and G5) showing less resistance compared to the 1.6 mm MIs placed twice (groups G2 and G6). As such, the null hypothesis that there is no statistically significant difference in resistance to bending forces depending on the number of insertions in the bone using 1.6 mm MIs is accepted.
The same is true for the 2.0 mm MIs. Those placed once in the bone (groups G3 and G7) showed less resistance compared to those placed twice (groups G4 and G8). Thus, the null hypothesis that there is no statistically significant difference in resistance to bending force depending on the number of insertions in the bone using the 2.0 mm MIs is accepted. These results were also confirmed by Hergel et al. [6], who concluded that there was no statistically significant difference between MIs used twice and new MIs in terms of primary stability.
In our study, even though the results show statistically significant differences depending on the diameter of the MIs, MIs with a diameter of 1.6 mm can be used for temporary anchorage in orthodontic treatment regardless of the type of movement to be carried out. This is because the force required ranges from 10 g for dental intrusion to 120 g for individual tooth movements (bodily movement, translation), corresponding to approximately 0.098 N and 1.2 N (1 N ≈ 102 g). In group tooth movements, these forces can reach 250-300 g (2.45-2.94 N) [11].
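The unit conversion quoted above can be checked with a few lines of arithmetic; the snippet below uses only the figures stated in the text (1 N ≈ 102 g and the 10-300 g orthodontic force range).

```python
# Worked check of the conversion quoted above (1 N ~ 102 g-force), comparing
# typical orthodontic loads with the mean fracture forces reported here.
GRAMS_PER_NEWTON = 102.0  # approximation used in the text

def grams_force_to_newtons(grams: float) -> float:
    return grams / GRAMS_PER_NEWTON

for label, grams in [("intrusion", 10),
                     ("individual tooth movement", 120),
                     ("group movement, upper bound", 300)]:
    print(f"{label}: {grams} g ~ {grams_force_to_newtons(grams):.2f} N")

# Mean fracture forces reported above: 107.06 N (1.6 mm) and 205.64 N
# (2.0 mm), far above the ~0.1-2.9 N orthodontic loading range.
```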
Regarding the brands under study, we chose Fatscrew® (Fts) MIs [6], whose monetary value is high due to the characteristics of the material used (groups G1, G2, G3 and G4). We decided to compare these with white-label (MB) MIs available on one of the world's most popular websites, eBay®, because these were much more affordable (groups G5, G6, G7 and G8). Both brands used the titanium alloy Ti6Al4V (grade V). Ti6Al4V MIs have better mechanical properties for their small diameter, corrosion resistance, ease of removal and ease of manufacture compared to grade IV titanium, thus improving the performance of the MIs [25,27].
When comparing the two brands, there were statistically significant differences in both loss of stability and fracture. In both cases, the Fts brand (groups G1, G2, G3 and G4) showed greater average strength compared to MB (groups G5, G6, G7 and G8).
The interaction between brand and diameter was statistically significant for both brands in terms of loss of primary stability and fracture. In relation to the loss of primary stability for the Fts brand, MIs with a diameter of 2.0 mm (groups G3 and G4) showed a higher average force compared to MIs with a diameter of only 1.6 mm (groups G1 and G2). The null hypothesis that the diameter of the Fatscrew MIs has no influence on resistance to bending force is therefore rejected.
In terms of the loss of primary stability and the fracture of the white-label (MB) MIs, the highest average force corresponded to the 2.0 mm MIs (groups G7 and G8). Therefore, the null hypothesis that the diameter of the MB MIs has no influence on resistance to bending force is rejected. These results are not in line with those regarding the main effects (when only the brands are compared to each other). The reason for this is the effect of the MB MIs with the smaller diameter, 1.6 mm, whose adjusted average was much lower compared to the Fts brand at 1.6 mm.
These results are similar to those of studies analysing the influence of diameter on the loss of primary stability [7,10,24]. Wilmes et al. [7] observed that MI diameter was significantly associated with stability under torsional force. Barros et al. [10] concluded that flexural strength was significantly influenced by diameter, which was also confirmed by Haghigh et al. [24].
The interaction between brand and placement was statistically significant. The difference between the brands was statistically significant when there were two bone insertions, with a greater force being observed at the loss of primary stability as well as at fracture for the Fts MIs (groups G2 and G4) compared to the MB MIs (groups G6 and G8). Therefore, the null hypothesis that there are no differences between the MIs of the different brands in terms of resistance to bending force when placed twice in the bone is rejected.
When placed once, there were no statistically significant differences between the brands in terms of loss of primary stability. Therefore, the null hypothesis that there is no difference between the MIs of the different brands in relation to bending force after one placement in the bone is accepted. As for the fracture force of the MIs, there were statistically significant differences between the two brands when inserted once in the bone.
For the three-way interaction of brand, diameter and placement, no significant differences were found in the loss of primary stability. The highest average force was found for the white-label 2.0 mm MIs placed both once and twice in the bone. The worst result, with a large difference in loss-of-stability force, was given by the 1.6 mm MB MIs placed once and twice in the bone.
As for the fracture of the MIs, the three-way interaction of brand, diameter and placement was statistically significant. The highest average force was found for the white-label 2.0 mm MIs placed once and twice in the bone, and for the 2.0 mm Fts MIs placed once and twice. The worst result, with a large difference in fracture strength, was given by the 1.6 mm MB MIs placed once and twice. Some differences were observed in how the MIs fractured during the laboratory tests. The only MIs that fractured in the head area were the 2.0 mm diameter MB MIs placed twice in the bone (Group G8). In this group, four of five MIs fractured in the head area and another fractured in the spiral area. One hypothesis is that these are probably made of a less resistant material than the Fts MIs. Also, as they were reinserted into the bone, both the neck and the head of the MI may have become more fragile due to the placement forces and, therefore, tended to fracture more. The remaining MB MIs fractured at the level of the coils (internal fracture).
Among the Fts MIs, some showed internal fracture and others permanent deformation. From these results, we can conclude that one of the major differences between the two brands studied relates to the material the MI is made of. The Fts MIs are made of a stronger and more elastic material than the MB MIs, even though both are made of titanium alloy.
Although the results indicate a significant difference between the brands, both demonstrated the capacity for temporary anchorage in orthodontic treatment, regardless of the type of movement to be performed. Therefore, either of the brands studied can be used, but it is important to note that the Fatscrew brand shows superior resistance to bending force and a significant difference in terms of fracture. Considering the financial aspect, the Fatscrew brand, although more expensive, offers superior resistance and greater elasticity. Therefore, when deciding between the two options, it is crucial to weigh the cost against durability and the ability to withstand the necessary force.
While the cost of Fatscrew ® may initially be higher, its superior strength can result in savings in the long run, avoiding additional replacement or maintenance costs.
It is important to note that one of the limitations of our research is the fact that it was an in vitro study; although the artificial bone used, Sawbones®, is the most suitable for testing with standardised models in order to achieve consistent results, it is not fully equivalent to natural bone due to variations in chemical composition and physical integrity. In addition, it is crucial to consider that bone composition differs between individuals, with significant variations possible. It is also necessary to recognise the limitations of the study environment, which, because it does not faithfully replicate the oral cavity, can influence the performance of the MIs, especially the MB MIs, which showed less resistance. Another disadvantage is that individuals do not share the same proportions of cancellous and cortical bone in the jawbone, so results can differ from individual to individual. If the patient has periodontitis, the oral environment may become more acidic, leading to an increase in chemical reactivity, which may cause some modification of the MIs used [28]. One possible way to improve this resistance is to treat the surface of the MI with 2-methacryloyloxyethyl phosphocholine (MPC), as shown in the study by Chen et al., who concluded that MPC is a favourable tool for preventing infectious diseases and inhibiting microbial activity in the implants studied [29]. Another relevant aspect is the impact of the sterilisation of the MIs, which can affect their effectiveness and is an additional factor to be considered when interpreting the results.
Conclusions
Considering the results achieved and in accordance with the methodology described in this study, we can conclude that, although the results show significant differences between the brands used, both demonstrated the capacity for temporary anchorage. However, it is important to note that the Fatscrew brand and a 2.0 mm diameter confer greater resistance to bending force. MIs inserted once or twice did not show a statistically significant difference, so the operator can insert the MIs twice in the same patient without loss of stability; nevertheless, the Fatscrew MIs placed twice yielded better results than the white-label MIs. The only MIs that fractured in the head area were the 2.0 mm diameter MB MIs placed twice in the bone, showing significant differences in fracture between brands. Finally, the use of 1.6 mm diameter MIs is advisable, as they support the forces required for orthodontic tooth movement while reducing the risk of injury to anatomical structures, especially adjacent tooth roots.
Figure 2. Preparation of test specimens. (a) Drilling support machine; (b) fixing the motor; (c) calibration when placing the MIs.
Figure 3. Fracture toughness tests of the MIs. (a) Fitting the samples to the Instron®; (b) initiation of loading to fracture; (c) deformation under loading; (d) complete fracture of the MIs.
Figure 4. Boxplots for 3-way interactions of force (N) in the loss of primary stability model.
Figure 5. QQ plot for residuals of force (N) in the loss of primary stability model.
Figure 6. Boxplots for three-way interactions of force (N) at fracture.
Figure 7. QQ plot for residuals of force (N) at fracture.
Table 1. Main effects on force (N) at the loss of primary stability.
SE, standard error.
Table 2. Two-way effects on force (N) at loss of primary stability (brand and insertions × diameter).
Columns: Diameter = 1.6 mm; Diameter = 2.0 mm; 2-way effects.
Results are presented as adjusted means and standard errors.
Table 3. Two-way effects on force (N) at loss of primary stability (brand × insertions).
Results are presented as adjusted means and standard errors.
Table 4. Three-way effects on force (N) at loss of primary stability (brand × insertions × diameter).
Table 5. Main effects on force (N) at fracture.
SE, standard error.
Table 6. Two-way effects on force (N) at fracture (brand and insertions × diameter).
Results are presented as adjusted means and standard errors.
| 2024-06-29T15:24:44.456Z | 2024-06-27T00:00:00.000 | {
"year": 2024,
"sha1": "c21d702667176c828062bf4023f890eb6608c5ca",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ma17133149",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16e0ffd66b30d840a5264b8ac3633fac1c0600e5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
70876948 | pes2o/s2orc | v3-fos-license | The impact of plug-in vehicles on greenhouse gas and criteria pollutants emissions in an urban air shed using a spatially and temporally resolved dispatch model
With the introduction of plug-in vehicles (PEVs) into the light-duty vehicle fleet, the tail-pipe emissions of GHGs and criteria pollutants will be partly transferred to electricity generating units. To study the impact of PEVs on well-to-wheels emissions, the U.S. Western electrical grid serving the South Coast Air Basin (SoCAB) of California is modeled with both spatial and temporal resolution at the level of individual power plants. Electricity load is calculated and projected for future years, and the temporal electricity generation of each power plant within the SoCAB is modeled based on historical data and knowledge of electricity generation and dispatch. Due to the efficiency and pollutant controls governing the performance of the Western grid, the deployment of PEVs results in a daily reduction of greenhouse gases (GHGs) and tail-pipe emissions, especially in the critical morning and afternoon commute hours. The extent of improvement depends on charging scenarios, future grid mix, and the number and type of plug-in vehicles. In addition, charging PEVs using wind energy that would otherwise be curtailed can result in a substantial emissions reduction. Smart control will be required to manage PEV charging in order to mitigate renewable intermittencies and decrease emissions associated with peaking power production.
Introduction
It is projected that the world's energy consumption and electricity generation will increase 44 and 77 percent respectively from 2006 to 2030 [1] and that conventional vehicles will still be the dominant on-road fleet over the next two decades [2]. In 2006, the transportation sector accounted for 22 percent of worldwide energy consumption [1] and 20 percent of greenhouse gas emissions [3]. In California, transportation is responsible for roughly 50 percent of energy use and 40 percent of greenhouse gas emissions [4,5]. Another major contributor to greenhouse gases and criteria pollutants emissions is electricity generation that accounts for 28 percent of the total greenhouse gases in California, second only to transportation. The concerns regarding global climate change, air pollution, and high energy prices give rise to increasing demand for strategies to shift to alternative, low, or non-carbon based energy systems, from electricity generation to vehicles.
Plug-in Electric Vehicles (PEVs) represent one of the numerous strategies under consideration. These include both plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs). The use of PEVs can reduce tailpipe emissions but will impose an additional load on the electricity grid, resulting in increased emissions from electricity generation. Existing studies suggest that PHEVs have a net emissions benefit over both conventional [6] and (nonplug-in) hybrid electric vehicles (HEVs) [7], and that the extent of improvement depends on the electricity grid mix [8], and timing and pattern of charging [9]. To analyze emissions impacts, these studies have used one of the following three grid scenarios: 1. an average grid mix [10,11], 2. the marginal generation technology (i.e., assuming that the electricity required to charge the vehicles is provided by one technology that comes online last) [12,13], or 3. the temporal dispatch of generation resources based on historical data [14].
This research develops and applies a dispatch model which is both spatially and temporally resolved. The necessary inputs of the dispatch model are introduced and calculated first, followed by a detailed description of the methodology. The model developed is then used to (1) provide a base case for year 2050 and (2) establish the effects of deploying PHEVs and BEVs on the well-to-wheels pollutant emissions, especially NO x and CO 2, for a future year (2050). The major urban air shed selected for the study is the South Coast Air Basin in southern California (Fig. 1).
Electricity demand forecast
To study the air quality impacts (e.g., ozone, particulate matter) of deploying PEVs today or in the future, spatially and temporally resolved criteria pollutant emissions are required from both mobile and stationary sources, including power plants. The first step in modeling the grid for emissions is to determine how the electricity output of each power plant changes with respect to the electricity load. Time-resolved load data are not available for all generating entities within the SoCAB. As a result, it is assumed that the electricity demand is directly proportional to the population residing in the study area. This assumption is based on various California Energy Commission (CEC) reports projecting almost constant electricity consumption and peak demand per capita for the state of California [15].
Based on this population assumption, the hourly electricity demand for the entire SoCAB region can be calculated from the Southern California Edison (SCE) and San Diego Gas & Electric (SDG&E) hourly load, which is publicly available [16]. The results are illustrated in Fig. 2 for the year 2005.
High summer electrical loads generally correspond to heavy use of air conditioning in response to extreme heat. A high load results in an increase in power generation and can lead to a "peak" hour of generation for a given year. The electricity generation profiles for the peak and average days of 2005 are presented in Fig. 3.
In order to study the impact of PEVs in the future, the electricity demand for a future year (2050) is projected based on historical trends and the following assumptions: (1) the electricity consumption per capita in the SoCAB remains unchanged over the next four decades and is equal to that of the entire State of California [15]; and (2) the SoCAB population grows according to available projections. Based on these two assumptions, the annual growth in SoCAB's demand from 2005 can be deduced using the projected population. For example, in the year 2050, SoCAB's annual electricity load is projected to be almost 61 percent more than it was in 2005. Further, it is assumed that the load growth rate for each hour is constant and the same as the annual average (i.e., the SoCAB load for each hour for a specific day in 2050 is 61 percent more than the load at that same hour of the same day in 2005, as shown in Fig. 3).
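Because the projection is a single multiplicative factor applied uniformly to every hour, it reduces to a one-line scaling operation. The following Python sketch illustrates this under the stated assumptions; the 1.61 growth factor comes from the text, while the hourly load values are invented for illustration.

```python
# Minimal sketch of the uniform-scaling assumption: each hour of the 2050
# profile is the 2005 load at that hour times one constant growth factor.
GROWTH_2005_TO_2050 = 1.61  # ~61 percent growth deduced from projected population

def project_hourly_load(load_2005_mw):
    """Scale one day of hourly loads (MW) from 2005 to 2050."""
    return [hour_mw * GROWTH_2005_TO_2050 for hour_mw in load_2005_mw]

# Toy 4-hour slice of a 2005 SoCAB day (MW); values are illustrative only.
print(project_hourly_load([12000.0, 11500.0, 13800.0, 16200.0]))
```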
In-basin generation
In order to model future electricity generation, it is necessary to (1) establish the manner by which power plants are operated today, (2) establish a trend based on historical data, (3) model the electricity outputs of each power plant in the future, and (4) determine the important factors affecting the different modes of operation.
An emissions inventory, generated for the 2007 Air Quality Management Plan by the South Coast Air Quality Management District [18], includes emissions from both stationary and mobile sources for the year 2005, and CO, NO x , SO x , TOG and TSP emissions from each source for the entire year with a time resolution of 1 h. Using the Facility Identification (ID) codes, the name of each emission source can be determined [19] and, based on the SIC code corresponding to each source [20], those with the primary function of electricity generation can be selected. On-site self-generation facilities are excluded because they are not included in the electricity demands reported by SCE or CEC.
With the emissions from the emissions inventory and emission factors from the U.S. Environmental Protection Agency (EPA) which can be obtained from eGRID [21], the hourly generation of each power plant can be determined. The calculations are based on NO x emissions because it is amongst the most important pollutants and is monitored at the majority of the power plants. Fig. 4 is a flow chart summarizing the process of calculating the hourly electricity generation from the emissions inventory.
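In essence, the Fig. 4 procedure divides each hour's NO x mass by the plant's NO x emission factor. The sketch below shows that arithmetic; the plant emission factor and hourly NO x values are hypothetical placeholders rather than eGRID or inventory data.

```python
# Sketch of the Fig. 4 calculation: hourly generation backed out from the
# inventory's hourly NOx emissions and an eGRID-style NOx emission factor.
def hourly_generation_mwh(nox_kg_per_hour, nox_factor_kg_per_mwh):
    """Infer MWh generated in each hour from the NOx emitted in that hour."""
    return [kg / nox_factor_kg_per_mwh for kg in nox_kg_per_hour]

nox_profile = [5.2, 4.8, 7.1]  # kg NOx emitted in three successive hours (invented)
plant_factor = 0.05            # kg NOx per MWh for this plant (invented)
print(hourly_generation_mwh(nox_profile, plant_factor))  # -> [104.0, 96.0, 142.0]
```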
For each power plant in the inventory, the electricity generation for the peak day of 2005 is calculated on an hourly basis, and by adding together the electricity outputs of power plants in a specific hour, in-basin generation for that hour is determined. It should be noted that hydro power plants are included in the dispatch model in order to accurately account for all power sources, even though they do not contribute any emissions.
It is improbable that the in-basin power plants generate the same amount of electricity every day with the same daily profile as revealed by the emissions inventory. It is noteworthy that the purpose of this inventory, in support of the Air Quality Management Plan, is to model an "episode day," namely where the emissions and meteorological circumstances result in the worst air quality impacts. As a result, the inventory does not indicate how the in-basin plants actually operate throughout the year. In this study, a dispatch model is developed to provide the needed insight.
As a first step in the development of the dispatch model, a graph of capacity factor versus "total generation" is constructed for each in-basin power plant, including hydro plants, based on the data derived from Energy Central [22], which is an online database including power plants' generation data from 1998 to the present. Each power plant is identified as either a baseloading, peaking or intermediate (load-following) unit. The dispatch model is developed so the emissions on the peak day of 2005 are consistent with the AQMD's emissions inventory.
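One simple way to express this classification is a rule on annual capacity factor, as in the sketch below. The 10 and 30 percent cut-offs echo the historical trends cited later in the text for peaking and intermediate units; the 60 percent baseload threshold is an assumption introduced purely for illustration, since the paper does not state an exact rule.

```python
# Illustrative classification of a unit from its annual capacity factor.
def classify_unit(annual_capacity_factor):
    if annual_capacity_factor <= 0.10:   # peakers run 10 percent of the time or less
        return "peaking"
    elif annual_capacity_factor < 0.60:  # assumed boundary to baseload operation
        return "intermediate"
    else:
        return "baseload"

for cf in (0.05, 0.35, 0.85):
    print(cf, classify_unit(cf))
```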
Electricity imports to SoCAB
Having calculated the in-basin electricity demand and the in-basin generation, the electricity imports to the SoCAB can be derived from the difference between the two; the results are depicted in Fig. 5. The figure also shows the linear relationship (coefficient of determination 0.96) between the power imported to the SoCAB and the demand within the basin, which implies that the imports serve primarily to provide load-following power. The corresponding conclusion is that the generation within the SoCAB acts almost entirely as baseload, constant generation.
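The import calculation and the regression behind Fig. 5 reduce to a subtraction and a least-squares line. The sketch below uses NumPy; the four hourly demand and generation values are invented for illustration, not data from the paper.

```python
import numpy as np

demand = np.array([14000.0, 15500.0, 17200.0, 18900.0])  # MW, in-basin demand (invented)
in_basin = np.array([6000.0, 6100.0, 6050.0, 6150.0])    # MW, near-constant baseload (invented)

imports = demand - in_basin                 # imports provide the load-following power
slope, intercept = np.polyfit(demand, imports, 1)
r_squared = np.corrcoef(demand, imports)[0, 1] ** 2
print(imports, slope, intercept, round(r_squared, 2))
```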
Dispatch model
The electricity load increases significantly (61 percent) from 2005 to 2050 and Fig. 5 suggests a corresponding significant increase in imports. However, the State of California is currently facing transmission congestion, reliability challenges, and higher costs related to insufficient transmission infrastructure [24], all of which threaten the integrity of the electrical system and the health of the economy. As a result, new transmission infrastructure is required to transport the higher imports to the SoCAB in the future. This notwithstanding, several obstacles prevent building transmission infrastructure fast enough to keep pace with the demand. These obstacles include: securing environmental permits and rights-of-way, securing regulatory approval for publicly-owned utilities and federal agencies, and local opposition due to visual and environmental impacts, as well as concerns about property values.
Due to slow development of transmission infrastructure with respect to demand, as well as the goal to model a worst case scenario for the SoCAB air quality, it is assumed that no new transmission lines are added to the current system and, as a result, the extra electricity generation needed to support the future demand is generated within the basin.
On the peak day, the capacity factors of the majority of the power plants within the basin are higher than their annual averages indicating that the in-basin generation is also at its maximum on the peak day. During heavy summer peak load periods, critical transmission paths in the state are often constrained [22], indicating that the transmission system is near saturation on the peak day. As a result, the capacity of the transmission system in this modeling is set equal to the maximum amount of imports on the peak day, which can be derived from Fig. 5. This capacity is kept constant for future peak day scenarios.
The electricity generation of each individual in-basin power plant is calculated for each hour of the 2050 peak day using the projected demand for that day. Assuming that the transmission system capacity remains unchanged, the maximum dispatchable electricity based on 2005 data will be the sum of electricity outputs from all in-basin power plants for a specific hour and the maximum allowed imports (transmission constrained). The difference between this available electricity generation and the generation required to support the demand indicates the amount of electricity that will need to be provided by in-basin units installed after 2005.
Assuming a maximum capacity factor of 0.95 for the generating units, 14.5 GW of capacity is added to the in-basin power plants. This 14.5 GW consists of 12 GW of non-peaking and 2.5 GW of peaking units. This combination is chosen to ensure that the intermediate units have an annual average capacity factor of at least 30 percent, and the peaking units 10 percent or less, which matches historical trends. All of the 12 GW of non-peaking units are assumed to be combined cycle facilities (Fig. 6).
In order to add the newly installed power plants to the dispatch model, it is necessary to establish a strategy for operating these units. Peaking units in the future are assumed to be operated in the same manner as the peaking units are operated today. In particular, peaking units come online at times of peak demand or when the increase in demand occurs suddenly and other units are not capable of ramping up in time. As for the non-peaking units, these units are operated as intermediate power plants.
It is necessary to mention that in this model, generators are retired after they have been online for fifty years and are replaced by generators with the same power capacity but with adjusted emission factors for the time of replacement. Fig. 7 shows the generator dispatch strategy and order. Baseloading units are dispatched first, followed by intermediate units.
The older, existing intermediate units are dispatched before new ones are added, to ensure that the existing capacity is utilized first. Next, the model dispatches imports and in-basin peaking units if necessary to meet the electricity demand of the area. If the demand still outpaces generation, the model adds additional combined cycle facilities and restarts from the beginning.
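A compact way to express this merit order is a greedy loop over ordered capacity blocks. The sketch below is a simplification: the block names and capacities are invented, no ramp rates, outages or unit retirements are modeled, and the paper's step of adding combined cycle capacity and restarting is reported here only as a shortfall.

```python
# Greedy sketch of the Fig. 7 dispatch order for a single hour.
def dispatch_hour(demand_mw, blocks):
    """blocks: ordered list of (name, available_mw); returns MW taken per block."""
    remaining, schedule = demand_mw, {}
    for name, available_mw in blocks:
        taken = min(remaining, available_mw)
        schedule[name] = taken
        remaining -= taken
    if remaining > 0:
        # The paper's model would add combined cycle capacity and restart here.
        schedule["unserved"] = remaining
    return schedule

order = [("baseload", 6000), ("intermediate_old", 4000),
         ("intermediate_new", 5000), ("imports_max", 8000), ("peakers", 2500)]
print(dispatch_hour(21000, order))
```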
2050 base case results
Emission factors corresponding to different pollutants for existing units are available. To calculate the criteria pollutants and GHGs emitted from the newly installed plants, emission factors associated with these generators need to be determined first. Knowing the fuel, the emission factors associated with that fuel (kg kJ −1 ) and the heat rate of the system (kJ kWh −1 ), the emission factors of the whole system (kg MWh −1 ) can be derived. Natural gas is chosen as the primary fuel for all the new power plants. The emission factors associated with natural gas can be extracted from the EPA emission factors reports [25]. In order to include the advancements in technology that might occur in the future, and thus increase the efficiency of combined cycle systems and combustion turbines, a projected efficiency of 65 percent is used for combined cycle systems without carbon capture and sequestration and 57.5 percent for combustion turbines [26]. The efficiencies of today's state-of-the-art plants are 59 and 33 percent, respectively [26]. Fig. 8a and b illustrate the amount of NO x in kilograms emitted from each individual power plant at 5 am and 5 pm on the peak day of 2050 (base case), respectively, demonstrating the spatial resolution of the methodology.
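The emission-factor arithmetic described above can be checked in a few lines. In the sketch below, the natural gas CO 2 content (about 5.02 × 10 −5 kg CO 2 per kJ, roughly the EPA value of 53 kg CO 2 per MMBtu) is a typical figure used for illustration and is not taken from the paper.

```python
# Fuel EF (kg/kJ) x heat rate (kJ/kWh) -> kg/kWh; x 1000 -> kg/MWh.
def system_emission_factor(fuel_ef_kg_per_kj, efficiency):
    heat_rate_kj_per_kwh = 3600.0 / efficiency  # 1 kWh = 3,600 kJ
    return fuel_ef_kg_per_kj * heat_rate_kj_per_kwh * 1000.0

NG_CO2_KG_PER_KJ = 5.02e-5  # typical EPA-style value, assumed for illustration
print(system_emission_factor(NG_CO2_KG_PER_KJ, 0.65))   # 2050 combined cycle, ~278 kg/MWh
print(system_emission_factor(NG_CO2_KG_PER_KJ, 0.575))  # 2050 combustion turbine, ~314 kg/MWh
```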
Impacts of PEVs in 2050
Replacing light-duty conventional vehicles with PEVs reduces the tailpipe emissions related to the transportation sector; however, it imposes a new load on the electricity grid and gives rise to increased emissions from power plants. In order to assess the impacts of PEVs on criteria pollutant and GHG emissions, the changes in emissions both from the transportation and electricity generation sectors must be evaluated in combination.
To determine the electricity load associated with PEVs and concomitant emissions, for each case based on the charging scenario, vehicle type (BEV or PHEV) and the penetration in the light-duty fleet, the temporal electricity demand of the PEVs are calculated and added to the base-case electricity demand. This overall electricity demand is used as the input to the dispatch model for the year 2050.
To calculate the impact on emissions resulting from replacing conventional vehicles with PEVs, characteristics of the future vehicle fleet, including fleet size, emission factors for both conventional vehicles and PEVs, daily vehicle miles traveled, and the travel distribution throughout the day, must first be determined. The PEV electricity demand and the associated emissions are then calculated using a curve describing statistical driving behavior [27], which suggests that 70 percent of vehicles in Southern California are currently driven less than 60 km per day.
Two particular charging profiles are considered -"business as usual" and "off-peak" charging -which have been used in previous studies [14,28]. The "business as usual" scenario assumes that both workplace and home charging are available and no incentives are in place to shift the charging towards off-peak hours.
The amount of electricity consumed by PEVs depends on the type of vehicle and the penetration in the light duty vehicle fleet. Various studies [29][30][31] suggest that 40 percent penetration of PHEVs in the light duty vehicle fleet for the year 2050 would be reasonable for Southern California. Fig. 9 shows the electricity required for four separate 2050 scenarios, 40 percent PHEVs charging with the "business as usual" behavior, 40 percent PHEVs charging with an "off-peak" strategy, 40 percent BEVs charging with the "business as usual" behavior, and 40 percent BEVs charging with an "off-peak" strategy.
Figs. 10 and 11 illustrate the effects of different charging profiles for PHEVs and BEVs on the grid's NO x emissions, and well-to-wheels NO x emissions on the peak day of year 2050, respectively. These results show that deploying 40 percent PHEVs and 40 percent BEVs will result in a 6 and 22 percent reduction in NO x emissions on the peak day, respectively.
Discussion
This study has developed and applied a detailed dispatch model in order to characterize the hourly operation and emissions of power plants in the Western Grid. The goal was to establish the impact of PEVs, as a function of hour, on the overall emissions (tailpipe plus electricity grid) in a future year (2050) when a substantial population of PEVs would likely be deployed.
From the analysis above, the deployment of PEVs results in emission benefits at all hours of the day using the "business as usual" charging profile. For the "off-peak" charging scenario, the addition of PEVs results in an emission increase in the first 6 h of the day due to the large number of vehicles that are connected to the grid, and a small reduction from the transportation sector because of the low vehicle miles traveled at these hours. During the rest of the day, the net emissions decrease and the overall reduction is greater than the "business as usual" charging profile. Clearly, a further increase in the PEV penetration reduces the net emissions, especially in the critical morning and afternoon commute hours.
Following are the conclusions of this research:
• The deployment of PEVs reduces tail-pipe emissions and, for the Western Grid, reduces overall emissions per vehicle mile. The deployment of PEVs transfers emissions from the tail-pipe to the electric grid. Due to the relatively low carbon footprint of the U.S. Western Grid, the addition of PEVs results in a reduction in both GHGs and criteria pollutants, and in a reduction of emissions per vehicle mile.
• The reduction in GHG emissions depends on the charging scenario. For PHEV penetrations lower than 34.5 percent, the "business as usual" charging scenario is more effective in reducing the emissions of CO 2 . For PHEV penetrations higher than 34.5 percent, the "off-peak" charging profile is more effective in CO 2 reduction. This is observed because the average grid emission factor changes with the electricity load and time of day.
• The improvement in air quality depends on the time of day at which in-basin criteria pollutant emissions are reduced. The reduction in criteria pollutants is correspondingly lower in both charging scenarios. Due to the relationship between the emission of criteria pollutants and the resultant air quality, the reduction in criteria pollutant emissions between the commute hours of 6 and 9 am is expected to be especially effective in improving air quality.
• Smart communication and control will likely be required.
For grid stability and emission reduction, charging should be (1) limited during the late afternoon and early evening periods of peak electricity power demand and (2) encouraged between 11:00 pm and 6:00 am. The early deployment of PEVs will not significantly impact either emissions or the grid's ability to charge the vehicles at any time of the day. As the popularity of PEVs increases, a critical population will be reached where smart control with economic incentives will be required to (1) ensure that the majority of charging occurs overnight and off-peak, and (2) incentivize charging during periods when grid stability and efficiency would be enhanced (e.g., when wind resources would otherwise be curtailed).
Overall, the results show that with careful planning for both the transportation and power generation sectors, along with providing incentives to consumers to charge their plug-in vehicles at certain times, deployment of PEVs in the light-duty vehicle fleet will result in a reduction in criteria pollutant emissions and a reduction in greenhouse gas emissions, and will help the State of California to achieve AB32 goals. | 2019-03-07T14:05:38.125Z | 2011-12-01T00:00:00.000 | {
"year": 2011,
"sha1": "863a672a3bd56addafa71b00420dbebeacd0a974",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt05h92090/qt05h92090.pdf?t=odi5rx",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6c8c552487c48055908bea96930f9aac99ea61bc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering",
"Medicine"
]
} |
250596782 | pes2o/s2orc | v3-fos-license | Extracorporeal membrane oxygenation support for lung transplantation: Initial experience in a single center in China and a literature review
Background Extracorporeal membrane oxygenation (ECMO) is a versatile tool associated with favorable outcomes in the field of lung transplantation (LTx). Here, the clinical outcomes and complications of patients who underwent LTx with ECMO support, mainly prophylactically both intraoperatively and post-operatively, in a single center in China are reviewed. Methods The study cohort included all consecutive patients who underwent LTx between January 2020 and January 2022. Demographics and LTx data were retrospectively reviewed. Perioperative results, including complications and survival outcomes, were assessed. Results Of 86 patients included in the study, 32 received ECMO support, including 21 who received prophylactic intraoperative use of ECMO with or without prolonged post-operative use (pro-ECMO group), while the remaining 54 (62.8%) received no external support (non-ECMO group). There were no significant differences in the incidence of grade 3 primary graft dysfunction (PGD), short-term survival, or perioperative outcomes and complications between the non-ECMO and pro-ECMO groups. However, the estimated 1- and 2-year survival were superior in the pro-ECMO group, although this difference was not statistically significant (64.1% vs. 82.4%, log-rank P = 0.152; 46.5% vs. 72.1%, log-rank P = 0.182, respectively). After regrouping based on the reason for ECMO support, 30-day survival was satisfactory, while 90-day survival was poor in patients who received ECMO as a bridge to transplantation. However, prophylactic intraoperative use of ECMO and post-operative ECMO prolongation demonstrated promising survival and acceptable complication rates. In particular, patients who initially received venovenous (VV) ECMO intraoperatively with the same configuration post-operatively achieved excellent outcomes. The use of ECMO to salvage a graft affected by severe PGD also achieved acceptable survival in the rescue group. Conclusions Prophylactic intraoperative ECMO support and post-operative ECMO prolongation demonstrated promising survival outcomes and acceptable complications in LTx patients. Particularly, VV ECMO provided safe and effective support intraoperatively and prophylactic prolongation reduced the incidence of PGD in selected patients. However, since this study was conducted in a relatively low-volume transplant center, further studies are needed to validate the results.
Introduction
Lung transplantation (LTx) is the final therapeutic option for patients with end-stage pulmonary disease unresponsive to medical treatment (1). Pre-operative management, intraoperative manipulation, and post-operative management and recovery impact the success of LTx (2)(3)(4). Hence, suboptimal management during this complex surgery can jeopardize long-term survival of LTx recipients.
Extracorporeal membrane oxygenation (ECMO) is used with increasing frequency in LTx to provide prolonged cardiac and respiratory support (5)(6)(7)(8). After careful patient selection and the involvement of a multidisciplinary team, several single- and multi-center studies have reported successful use of ECMO as a bridge to transplantation (BTT) (9-12) as well as a postoperative rescue strategy for primary graft dysfunction (PGD) (13), which has prompted intraoperative use of ECMO during LTx (7). Encouraging outcomes of ECMO for both short- and long-term intraoperative support have been reported (14-16). Moreover, prophylactic use of ECMO intraoperatively and during the post-operative period in selected patients has been shown to improve perioperative and long-term outcomes of LTx recipients (15,17,18).
The increased frequency of perioperative ECMO support in recent years has improved the success of LTx as evidenced by improved survival and functional outcomes. Hence, the aim of the present study was to review the clinical outcomes and complications of LTx recipients who received ECMO support both intra-and post-operatively in a single center in China.
Patient population
The cohort of this single-center, retrospective study included 86 patients who underwent LTx at Shanghai Pulmonary Hospital affiliated with Tongji University (Shanghai, China) between January 2020 and January 2022. Of these patients, 54 received no external support (non-ECMO group) and 32 required ECMO support (ECMO group). Among the patients in the ECMO group, five received ECMO as a BTT (bridging ECMO group), 21 received prophylactic intraoperative use of ECMO with or without prolonged post-operative use (pro-ECMO group), and six received ECMO for rescue of PGD (rescue ECMO group) ( Figure 1). The demographics of the donors and recipients as well as LTx information are summarized in Table 1. The study protocol was approved by the Institutional Research Ethics Board of Shanghai Pulmonary Hospital affiliated with Tongji University (approval no. K22-217) and conducted in accordance with the ethical principles for medical research involving human subjects described in the Declaration of Helsinki.
ECMO management
The decision to perform ECMO was made by an experienced multidisciplinary team based on current center guidelines. The main indication for ECMO as a BTT was persistent hypercapnia and/or hypoxic respiratory failure, defined as PCO 2 >80 mmHg and a ratio of partial arterial oxygen pressure (PaO 2 ) to the fraction of inspired oxygen (P/F ratio) <70 mmHg. Following assessment of cardiac function, all five patients in the bridging ECMO group received femoral-jugular venovenous (VV) ECMO as a BTT. The circuits were coated with heparin and composed of Quadrox PLS oxygenators (Bioline®; Maquet Cardiopulmonary AG, Hirrlingen, Germany), a centrifugal pump, and an integrated heat exchanger. A 15-17 French (Fr) cannula was used for the jugular vein and a 21 Fr cannula for the femoral vein (Maquet Cardiopulmonary AG). All cannulas were inserted percutaneously using the Seldinger technique. The same ECMO system was maintained for intraoperative and prolonged post-operative support.
Intraoperatively, the surgical technique and handling of ECMO were consistent throughout the study period and among all transplant surgeons. Central cannulation was performed for most of the patients. After opening the chest, the patients received 2,000-3,000 IU of unfractionated heparin intravenously. The heparin dose was not repeated during surgery. Activated clotting time was routinely monitored. A 17 Fr arterial cannula was used for the ascending aorta and a 32 Fr curved-tip cannula for the right atrium. The ECMO flow was set to 50% of the predicted cardiac output and adapted according to hemodynamic and gas exchange demands. Prolonged post-operative ECMO was conducted in accordance with the Vienna protocol (15). Briefly, the function of the implanted graft was evaluated 10 min after decannulation and immediately after chest closure. If pulmonary function tests failed to meet the pre-defined criteria (i.e., oxygen tension/inspired oxygen fraction >100, mean pulmonary arterial pressure/mean systemic arterial pressure <2/3, and normal size-equivalent tidal volume) or if there was clear worsening of either measurement, the same ECMO system was reinserted in the femoral-femoral venoarterial (VA) configuration and the patient was transferred to the intensive care unit (ICU) with the use of a running system. For prolonged ECMO, the patient received a therapeutic dose of heparin and activated clotting time was monitored at 180-220 s. In the PGD subgroup, femoral-jugular VV ECMO was employed in the ICU as a rescue strategy after LTx.
PGD definition
PGD usually occurs within 72 h after LTx, as demonstrated by hypoxemia and non-cardiogenic pulmonary infiltrates on chest radiographs. The severity of PGD was graded at four time points, starting from reperfusion of the second lung (T0) to 24 h (T24), 48 h (T48), and 72 h (T72) after LTx, in accordance with the latest consensus conference criteria of the International Society for Heart and Lung Transplantation (19). PGD grade 0 was defined as the absence of infiltrate on chest X-rays. In the presence of pulmonary infiltrates, PGD grades 1-3 were determined based on the P/F ratio as follows: PGD grade 1, P/F ratio >300 mmHg; PGD grade 2, P/F ratio of 200-300 mmHg; and PGD grade 3, P/F ratio <200 mmHg. Patients receiving prolonged post-operative ECMO with chest X-rays showing pulmonary infiltrations were classified as PGD grade 3.
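The grading rules above translate directly into a small decision function. The sketch below mirrors the text, including this study's convention of classifying patients on prolonged post-operative ECMO with infiltrates as grade 3; it is illustrative only, not clinical software.

```python
def pgd_grade(infiltrate_on_cxr, pf_ratio_mmhg, on_prolonged_ecmo=False):
    """Grade PGD from infiltrate status and the P/F ratio (mmHg)."""
    if on_prolonged_ecmo and infiltrate_on_cxr:
        return 3                     # per-protocol classification in this study
    if not infiltrate_on_cxr:
        return 0
    if pf_ratio_mmhg > 300:
        return 1
    if pf_ratio_mmhg >= 200:
        return 2
    return 3

print(pgd_grade(True, 250))   # -> 2
print(pgd_grade(False, 450))  # -> 0
```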
Statistical analysis
Continuous variables are presented as the mean ± standard deviation or median [range or interquartile range (IQR)]. Independent continuous variables between two groups were compared with the non-parametric Mann-Whitney test, while categorical variables were compared using the chi-squared test. A probability (P) value of ≤0.05 was considered statistically significant. The 1-and 2-year survival rates were estimated using the Kaplan-Meier method. Differences between groups were quantified using the log-rank test. Overall survival was defined as the period from LTx to death due to any cause and patients were censored at the last date of follow-up. Baseline covariates were balanced by the method of propensity score matching. The following parameters were included: age, sex, body mass index, primary diagnosis and type of transplant. Matched groups were compared using the Mann-Whitney test or the chi-squared test. The difference in survival between the matched groups was compared by a stratified log-rank test. Statistical analysis was performed using IBM SPSS Statistics for Windows, version 27.0 (IBM Corporation, Armonk, NY, USA).
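For readers wishing to reproduce this style of analysis in code, the sketch below performs the survival comparison with the Python lifelines package; the study itself used SPSS, and the event times and indicators here are fabricated solely so the snippet runs.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

months_a = [3, 8, 12, 15, 24];  died_a = [1, 1, 0, 1, 0]   # e.g., non-ECMO (invented)
months_b = [6, 14, 20, 24, 24]; died_b = [0, 1, 0, 0, 0]   # e.g., pro-ECMO (invented)

km = KaplanMeierFitter()
km.fit(months_a, event_observed=died_a, label="non-ECMO")
print(km.survival_function_)          # Kaplan-Meier estimate over time

result = logrank_test(months_a, months_b,
                      event_observed_A=died_a, event_observed_B=died_b)
print(result.p_value)                 # log-rank comparison between groups
```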
Recipient characteristics
A total of 75 LTx recipients were included in the non-ECMO and pro-ECMO groups. The characteristics of the LTx recipients are summarized in Table 1. There were no significant differences in age, sex, indications for LTx, waiting time, lung allocation score, left ventricular ejection fraction, pulmonary artery systolic pressure, and follow-up duration between the two groups. However, body mass index (BMI) was significantly lower in the pro-ECMO group than the non-ECMO group (P = 0.027) and bilateral LTx was more common in the pro-ECMO group (P = 0.001). Accordingly, the median surgical duration was longer (345 vs. 248 min, P < 0.001), blood loss was greater (2,000 vs. 800 ml, P < 0.001), and the need for intraoperative transfusions of blood and fresh frozen plasma was greater (10 vs. 2 U, P < 0.001; 20 vs. 0 U, P < 0.001) in the pro-ECMO group as compared to the non-ECMO group.
Donor characteristics
The characteristics of the lung donors are detailed in Table 1. All lungs were retrieved from brain-dead donors. There were no differences in age, sex, and BMI between the two groups or in the partial pressure of oxygen (PaO 2 ) and partial pressure of carbon dioxide (PaCO 2 ) in pure oxygen at the time of retrieval. The cold ischemic time (CIT) for the first transplanted lung was comparable between the pro-ECMO and non-ECMO groups (419 ± 47 vs. 401 ± 88 min, P = 0.655), while the CIT for the second transplanted lung was slightly longer in the pro-ECMO group, although this difference was not statistically significant (P = 0.097).
Perioperative outcome
As listed in Table 2, the median mechanical ventilation time, median ICU stay, and length of hospital stay were comparable between the non-ECMO and pro-ECMO groups (2 vs. 4 days, P = 0.967; 17 vs. 20 days, P = 0.165; 41 vs. 45 days, P = 0.409; respectively). In terms of post-operative complications, patients in the pro-ECMO group were more likely to require revision surgery (14.3% vs. 1.9%, P = 0.064). However, there was no significant difference in the 30-and 90-day survival rate between the two groups (92.6% vs. 95.2%, P = 1.000; 81.5% vs. 95.2%, P = 0.251, respectively) or in the incidence of other postoperative complications, including post-operative hemodialysis, PGD 3 at 48 or 72 h, venous thromboembolism (VTE), airway complications, fungal infection, pulmonary infection, acute rejection, and chronic lung allograft dysfunction.
Mid-term outcome
Although the estimated 1-year survival rate was higher in the pro-ECMO group than the non-ECMO group, this difference was not statistically significant (82.4% vs. 64.1%, log-rank P = 0.152, Figure 2). Similarly, the estimated 2-year survival rate was higher in the pro-ECMO group than the non-ECMO group, which was also not statistically significant (72.1% vs. 46.5%, log-rank P = 0.182, Figure 2).
Propensity score matching (PSM)
A PSM was performed to balance baseline covariates between the non-ECMO group and the pro-ECMO group. The matching parameters included age, sex, BMI, primary diagnosis and type of transplant. As demonstrated in Table 3, PSM resulted in balanced baseline characteristics between the two matched groups.
ECMO subgroups
Having demonstrated the value of pro-ECMO for the prognosis of LTx recipients, all patients who received ECMO support were regrouped into the following four subgroups based on the stage of ECMO support: group I, bridging ECMO (n = 5); group II, prophylactic intraoperative ECMO (intraOp pro-ECMO, n = 11); group III, prophylactic intraoperative and post-operative ECMO (intra/postOp pro-ECMO, n = 10); and group IV, rescue ECMO (n = 6) (Table 4). As expected, the duration of ECMO support was shortest in group II, with a median duration of 3 (IQR, 2-5) h, and longest in group III, with a median duration of 82 (IQR, 47-95) h (P < 0.001). All patients in group I received VV ECMO as a bridge to LTx. All patients in group II received VA ECMO. Half of the patients in group III received VA ECMO, which was extended into the post-operative period. Similarly, half of the patients in group IV were rescued with VV ECMO and half with VA ECMO. Idiopathic pulmonary fibrosis (IPF) was the major indication among the 4 groups. Pneumosilicosis and idiopathic pulmonary arterial hypertension (IPAH) or pulmonary veno-occlusive disease (PVOD) only occurred in groups II and III, respectively. The 90-day survival rate was better in groups II and III than in groups I and IV (100% and 90% vs. 40% and 67%, log-rank P = 0.018). There were no significant differences in the other variables among the 4 groups, which included duration of mechanical ventilation, ICU and hospital stays, ECMO weaning rate (survived ECMO), survival to hospital discharge (survived to DC), and 30-day survival.
ECMO-related complications
Hemorrhage and thrombosis were the most common complications of ECMO support. As demonstrated in Table 4, both VTE and circuit-related thrombosis were identified in 10 (31.25%) patients who received ECMO support. Arterial thromboembolic events were observed in 2 (6.25%) patients, while bleeding events that required reoperation were experienced by 4 (12.5%) patients. All patients who developed arterial thromboembolic events and bleeding belonged to the prolonged ECMO group. The incidence of VTE associated with ECMO was comparable among the four groups (P = 0.561). However, the incidence of circuit-related thrombosis varied with the highest incidence in the prolonged ECMO and rescue ECMO groups (P = 0.013).
Discussion
Extracorporeal membrane oxygenation is an extremely versatile tool in the field of LTx as it can serve as a BTT before transplantation, as a support modality during transplantation, and as a rescue strategy after transplantation (3,(6)(7)(8). The data presented here confirmed the essential role of ECMO in LTx, especially its prominent contribution in the intra- and post-operative periods. These data demonstrate promising primary graft function and survival rates with prophylactic intraoperative use and post-operative prolongation of ECMO support. Furthermore, the incidences of ECMO-related complications were acceptable in the patient cohort. By optimizing gas exchange, pre-operative VV ECMO offers pulmonary support as a BTT. In this study, VV ECMO was used to successfully bridge LTx in five patients. Notably, 30-day survival was achieved in 4 (80%) patients, which is consistent with short-term survival (81.6%) in low-volume centers (20). However, 90-day survival was achieved in only 2 (40%) patients, which is lower than the 90-day survival rate in a previous report (12). There are several possible reasons why the early initial experience with ECMO as a BTT in our center was discouraging. First, the low volume of transplantation in our center may partially explain the inferior survival rate, since ECMO is a complex procedure and its use in LTx favors a volume-outcome association (20,21). Second, post-transplantation survival is lower for IPF than for other indications (22). In this series, ECMO support was used in 4 IPF patients whose conditions deteriorated rapidly despite maximal medical therapy. It is difficult to successfully rehabilitate critically ill patients, which was detrimental to transplantation outcomes. In addition, ECMO as a BTT has evolved over the last two decades from an acute rescue therapy to a semi-elective procedure in experienced high-volume transplant centers (23). However, our center is still in the stage of acute rescue therapy.
Aside from pre-operative VV ECMO support as a BTT, VA ECMO is preferred intraoperatively for both hemodynamic and respiratory support. The study conducted by the Hannover Group had a larger cohort of patients, but there were no differences in long-term outcomes and complications between patients who survived hospital discharge with intraoperative VA ECMO support and those without ECMO support, although ECMO recipients endured more complicated perioperative and early post-operative courses (14). Similarly, intraoperative VA ECMO resulted in lower PGD rates and superior 1-, 2-, 3-, and 5-year survival rates as compared to transplantation with no extracorporeal support based on two large cohorts of patients from the Vienna Group (15, 16). Furthermore, intraoperative VA ECMO support for LTx recipients with severe IPAH, a very difficult patient population, provides excellent outcomes as compared to the use of cardiopulmonary bypass (17). Due to the satisfying survival rates of patients who received intraoperative ECMO, recent studies have proposed routine or prophylactic use of intraoperative ECMO in LTx. In previous studies, routine use of ECMO during LTx improved early outcomes and postoperative lung function without increasing the incidence of extracorporeal-related complications (15, 16,24,25). Intraoperative ECMO can be extended into the early postoperative period if graft function failed to meet established quality criteria or even to maintain ECMO "prophylactically" for high-risk recipients, such as those with pulmonary hypertension (7,(26)(27)(28). The Vienna Group extensively investigated the concept of prophylactic post-operative ECMO prolongation, particularly in patients with pulmonary hypertension and questionable graft function at the end of LTx, and found that prolongation of ECMO support resulted in excellent primary graft function and survival rates, thereby demonstrating a survival benefit in patients both with and without pulmonary hypertension (15,16). Another independent study conducted by the same group (18) reported similar excellent survival data in a population with severe IPAH. Several other groups (17,29) have also reported superior outcomes.
In line with these reports, 21 of 86 (24.4%) LTx recipients in the present study received pro-ECMO support, including 16 (76.2%) who were supported with the VA configuration (11 in the intraOp pro-ECMO group and five in the intra/postOp pro-ECMO group). The remaining 5 (23.8%) patients were initiated with VV ECMO and the same configuration was maintained post-operatively (Table 4). The incidence of PGD grade 3 at 48 or 72 h and short-term survival were comparable between patients who survived hospital discharge with pro-ECMO support and those without ECMO support (95.2% vs. 92.6%, respectively). However, the estimated 1- and 2-year survival rates were superior in the pro-ECMO group as compared to the non-ECMO group, although this difference was not statistically significant, possibly due to the relatively small cohort and limited follow-up period. Furthermore, the significantly lower BMI in the pro-ECMO group was predictive of improved graft survival, as previously reported (14).
Although VV ECMO is typically the preferred configuration as a BTT, relatively few studies have evaluated the use of VV ECMO support during LTx (6). A 2018 study by Hashimoto et al. (30) of intraoperative extracorporeal support during LTx in patients bridged with VV ECMO reported that VV ECMO was maintained in 59% of bridged patients, whereas 32% were converted to central VA ECMO due to compromised hemodynamics. Post-operatively, 41.2% were extended with VV ECMO. Notably, there were no significant differences in 90-day mortality and 5-year survival between these two groups, indicating the feasibility of intraoperative and post-operative prolongation of VV ECMO.
In our center, after splitting the intra/postOp pro-ECMO subgroup from the pro-ECMO group, 5 of 10 (50%) patients were initiated with VV ECMO intraoperatively and remained on the same configuration post-operatively. All patients who received VV ECMO support were successfully weaned off and discharged from the hospital, and achieved excellent 30- and 90-day survival rates. In contrast, one patient who received VA ECMO support died of severe IPAH while on ECMO, which resulted in a lower survival rate in this group. The predominant baseline disease was chronic obstructive pulmonary disease in the VV ECMO group and IPF and IPAH in the VA ECMO group. In this study, patients with IPAH underwent LTx with the VA ECMO strategy, which was directly extended into the post-operative period, as described in previous reports (15,16,18). However, in patients with baseline disease that only affects oxygenation, VV ECMO is sufficient to provide safe and effective support intraoperatively and to reduce the incidence of PGD post-operatively in a relatively low-volume transplant center. Nonetheless, further studies are needed to validate these results.
Both VV ECMO and VA ECMO can be used postoperatively as a rescue therapy for hemodynamic instability or inadequate graft function, such as PGD. In the present study, 6.98% (6/86) of the cohort were rescued with ECMO for PGD post-operatively, which is within the reported range of 5.1% to 12.8% (31-33). Among these six patients, half required VA ECMO and half received VV ECMO. The 30-day survival was 84% in the rescue group, which is consistent with a previous report (34). The 90-day survival in this study was 67%, lower than in the intraOp pro-ECMO group and intra/postOp pro-ECMO group, but similar to several studies reporting 1-year survival rates after post-operative rescue ECMO of 59% to 78% (13, 33, 34).
Bleeding and thrombosis are major complications in patients supported with ECMO. In the current study, 14.3% (3/21) of patients in the pro-ECMO group developed bleeding events that required reoperations, which was comparable with the incidence in the non-ECMO group. No bleeding was observed in the intraOp pro-ECMO group, as all patients (4/10, 40%) who had bleeding events were in the intra/postOp pro-ECMO group, which was a higher incidence than in the prolonged ECMO group reported by Hoetzenecker et al. (15). Thromboembolic events, such as arterial thromboembolism, were observed in 20% of patients in the intra/postOp pro-ECMO group, and the incidences of both VTE and circuitrelated thrombosis were higher in each ECMO subgroup with the exception of the intraOp pro-ECMO group. However, there was no difference in the incidence of VTE between the pro-ECMO and non-ECMO groups.
The main limitations to this study were the single-center retrospective nature, relatively small sample size, and limited experience with ECMO as demonstrated by the slightly higher prevalence of related complications. Nonetheless, the estimated 1-and 2-year survival rates were relatively superior in the pro-ECMO group.
Conclusion
Taken together, these findings indicate that bridging strategies for LTx are sufficient as an acute rescue therapy; thus, appropriate patient selection (e.g., patients on a waiting list for LTx and well-rehabilitated patients) is important to achieve optimal results. Intraoperatively, prophylactic use of ECMO and prophylactic post-operative ECMO prolongation, particularly in patients with pulmonary hypertension and questionable graft function at the end of implantation, achieved satisfactory survival and acceptable complication rates. In addition, the VV ECMO strategy provided safe and effective support intraoperatively and reduced the incidence of post-operative PGD in selected patients in this relatively low-volume transplant center. Post-operatively, the use of ECMO as a rescue therapy to salvage a graft affected by severe PGD also provided acceptable survival.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Research Ethics Board of Shanghai Pulmonary Hospital affiliated with Tongji University. The patients/participants provided their written informed consent to participate in this study. | 2022-07-17T15:18:28.684Z | 2022-07-15T00:00:00.000 | {
"year": 2022,
"sha1": "a9fb58f95abcb527dd2a762526979545601eafba",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2022.950233/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59fdc6d0dbedd6191ed8c2d0ba67e27ccc13d20a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
213432664 | pes2o/s2orc | v3-fos-license | Methodology of short-term planning of work indices in the university department on basis of the reference model using web-portal
Today, the departments of many universities have implemented databases in the form of Web sites. The information environment of a department is a huge data warehouse. Modern database management systems make it possible to store and process data effectively, but, unfortunately, implementing these tasks is not enough for personnel management. The trends in the development of higher education in Russia indicate that it is necessary not only to store data, but also to manage the department's development in educational, scientific and other areas of activity. For this purpose, it is proposed to implement a system for calculating, maintaining and visualizing the department's rating and to introduce it into the existing Web portal of the department. This system will not only simplify the collection, calculation and storage of the required data on the department rating; it will also present the required indices in a visual and understandable way.
Domain analysis
The subject field of this paper is the planning of the department activity indices, using the reference model, based on the information management system of the department. A formalized mapping of the proposed process is presented in figure 1.
The objectives of this study are: to determine the types of indices, with specified performers, that form the department rating for the current year; to develop an algorithm to calculate the current and planned performance indices of the department; to develop a reference model; to implement the current model as part of the department's Web portal; and to define and compare the current rating indices of the department against the reference model via the Web portal in order to schedule the department's activities (a sketch of this comparison step is given below).
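As a hypothetical illustration of the comparison step named in the last objective, the sketch below computes per-indicator gaps between a reference model and the department's current rating; all indicator names, targets and values are invented, not taken from the paper.

```python
# Invented reference targets and current values for three rating indicators.
reference = {"publications": 20, "grants": 3, "conference_talks": 10}
current   = {"publications": 14, "grants": 3, "conference_talks": 6}

def plan_gaps(reference, current):
    """Return the indicators below target together with the shortfall to schedule."""
    return {k: reference[k] - current.get(k, 0)
            for k in reference if current.get(k, 0) < reference[k]}

print(plan_gaps(reference, current))  # -> {'publications': 6, 'conference_talks': 4}
```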
The annual rating of the scientific activities of the university departments is necessary and should fulfill the following functions: (1) information-specialized: to gather and to structure the information required to compile the official report on the scientific activities of the university according to the requirements of state structures, as well as the reports for other bodies of federal, regional and local control; (2) moral and material stimulation of scientific activity: the top places in the final rating should be rewarded with forms of moral and material stimulation; (3) analytical and administrative-managerial: a comparative analysis of the scientific activity results by the administration of the university, faculties, departments and other units included in the rating; identification of "weak and strong" positions of scientific activity and of negative and positive trends; formulation and planning of corrective measures; development of a mechanism to implement the long-term plans; administrative regulation of the competitive change of personnel and management structures of units (faculties, departments, research centers, laboratories, etc.) based on a comprehensive analysis of the scientific activity results within the corresponding period; and regulation of the distribution of the educational workload depending on the effectiveness of scientific activity [1] (faculty workload between departments and "intra-department" workload between employees of the corresponding unit).
The control loop is shown in figure 2.
System project for developing the method of short-term planning of the department's performance indices, based on the reference model using the Web portal
The basic principles for building and developing a promising system are based on modern, generally accepted approaches to the design of information systems and networks, as well as on the experience of creating and operating such systems in leading universities around the world.
The system project includes the development of a functional model referring to the IDEF0 methodology standards and an information model referring to the IDEF1X methodology standards [2].
Functional Model
To create any information system, it is necessary to survey the subject area. The survey was carried out in accordance with the SADT methodology. The subject area is the methodology for determining and obtaining the rating indices and for introducing a reference model.
To build a functional model, we used the IDEF0 methodology implemented in the AllFusion Process Modeler software product of Computer Associates.
The development of a functional model according to the IDEF0 methodology begins with setting a goal to narrow the subject area under consideration and to choose the point of view from which we will consider it.
The objective is to determine and obtain the rating indices and to introduce a reference model; the point of view is that of the head of the department. The context diagram "Defining indices and getting the department rating" is shown in figure 3. This diagram defines the boundary of the system and consists of one block and its arcs. The input arcs are the current department data and the rating data of university departments. The output arc is a common database with all indices and a single graph. The controls are documents, orders, and university contracts with teachers. The mechanisms are students, teachers, the dean's office, academic management, and department staff.
As a result of the context diagram decomposition, the following blocks will be obtained:
Information Model
Based on the functional model, an information model is built. It is realized according to the IDEF1X [3] methodology using the AllFusion Erwin Data Modeler package of Computer Associates. In this case, the information model is an adequate reflection of the information structure of the existing process. The entities depicted in the database blocks are the tables, interconnected by key fields. The information model presents two types of relationships between entities: identifying and non-identifying. According to the information model, a portal database has been developed, as sketched below.
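To make the key-field relationships concrete, the following hypothetical sketch builds a minimal portal-style schema in SQLite from Python; the table and column names are illustrative assumptions, not the schema actually used by the portal.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE indicator  (ind_id  INTEGER PRIMARY KEY, title TEXT, weight REAL);
CREATE TABLE rating_entry (                -- one row per department, indicator and year
    entry_id INTEGER PRIMARY KEY,
    dept_id  INTEGER NOT NULL REFERENCES department(dept_id),
    ind_id   INTEGER NOT NULL REFERENCES indicator(ind_id),
    year     INTEGER NOT NULL,
    value    REAL    NOT NULL
);
""")
con.execute("INSERT INTO department VALUES (1, 'Computer Science')")
con.execute("INSERT INTO indicator VALUES (1, 'Publications', 0.4)")
con.execute("INSERT INTO rating_entry VALUES (1, 1, 1, 2019, 14)")
print(con.execute("SELECT * FROM rating_entry").fetchall())
```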
Technology of model realization
The purpose of the development is to organize the process of collecting the necessary data and to ensure that all interested parties can promptly obtain information through access to a common database.
The functioning technology of the system to collect and to view the required information
The system for collecting and viewing the required information operates in a dialogue mode between the user and the interface screen forms. For each dialogue step (pressing a button, loading a screen form, preparing a print form), distinct procedures are defined for working with the database, the screen forms and the reporting forms [4,5]. The functioning of the method and its main steps are presented below in the form of a block diagram (figure 4).
Organization of the information system database
The main component of the system under consideration is the database. The database traditionally serves for the storage, processing and retrieval of the required information. The structure of the database is reflected in the information model built according to the rules of the IDEF1X methodology using the Erwin package of Platinum Technology. In the described solution, the database is a group of tables. A table is a collection of columns, called table fields, and of rows, each containing one element (field) of each column. Such rows are called table entries. | 2020-01-30T09:15:16.412Z | 2020-01-29T00:00:00.000 | {
"year": 2020,
"sha1": "6de19885fffe1a9901d9ba57a13aa6d2abdf586a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/734/1/012120",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8f02ad068877d4438e9f26de7213937b1848e2e4",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
211559095 | pes2o/s2orc | v3-fos-license | Policy Acceptance of Low-Consumption Governance Approaches: The E ff ect of Social Norms and Hypocrisy
Tackling over-consumption of resources and associated emissions at the lifestyle level will be crucial to climate change mitigation. Understanding the public acceptability of policy aimed at behaviour change in this domain will help to focus strategy towards effective and targeted solutions. Across two studies (n = 259, 300) we consider how policy approaches at different levels of governance (individual, community, and national) might be influenced by the inducement of hypocrisy and the activation of social norms. We also examine the influence of these experimental manipulations upon behavioural intention to reduce consumption (e.g., repair not replace, avoiding luxuries). Dynamic social norm framing was unsuccessful in producing an effect on policy acceptance or intentions to reduce consumption. Information provision about the impact of individual consumption on global climate change increased support for radical policies at the national level (banning environmentally harmful consumption practices) and the community level (working fewer hours, sharing material products, collaborative food cultivation), yet the inducement of hypocrisy had no additional effect. This is in contrast to individual-level behavioural intentions, where the inducement of hypocrisy decreased intentions to engage in high-consumption behaviour. This paper concludes with implications for low-consumption governance.
Introduction
The world is experiencing drastic changes in its climate system, with the scientific community in agreement on the severity of climate change risks [1] and the need to avoid these risks for human societies and ecosystems [2]. The Paris Agreement has set targets to limit global warming to 1.5 °C, requiring aggressive emissions reductions. If we continue on the current trajectory and exceed 3 °C of warming, the results are likely to be catastrophic [3]. In order to limit warming to 1.5 °C, future scenario pathways show that new technology and increased energy efficiency will not be sufficient, and significant lifestyle changes are required [4]. One-quarter of global emissions are linked to the consumption and production of material products such as clothing, vehicles, electronics and household items [5], and we will need to reduce our consumption of these material goods in order to help tackle this problem effectively as part of an emissions reduction strategy. At the same time, other environmental and social impacts of material production, processing and consumption are profound and increasing [6]. Mining and material production generate pollution, threaten ecosystems, and deplete scarce natural resources [7], whilst the extraction of metals finances armed conflicts and employs child labour [8]. Waste is also a growing concern, with the traditional waste management solution in many countries (including
From the environmental psychology literature, we know that social norms and hypocrisy have been found to encourage the uptake of some pro-environmental behaviours (PEB), such as recycling, reduced plastic bag use and water conservation (see [28,29] for meta-reviews on social norms; [30] for hypocrisy). However, these approaches have not been applied to material consumption choices or to policy preferences in relation to reduced consumption; the current research therefore represents a much-needed step in redirecting environmental psychology towards high-impact pro-environmental behaviours [31]. This paper thus aims to explore the framing effects of social norms and hypocrisy inducement upon support for low-consumption policies representing different levels of governance, and upon a number of behaviours representing low-consumption lifestyle choices.
Hypocrisy
Hypocrisy studies are grounded in cognitive dissonance theory [32], which describes the mental state of holding two or more contradictory beliefs or values. Whereas cognitive dissonance often concerns attitudes, beliefs and values that are inconsistent with each other, hypocrisy is specifically concerned with attitudes, beliefs or values that are inconsistent with behaviour. This inconsistency presents a threat to self-identity and self-integrity, and hypocrisy research has further explored how such incoherence in one's self-identity and self-integrity often results in negative affect [33,34]. Being made aware of an inconsistency between one's values and one's actions, i.e., hypocrisy, means the individual must either accept this inconsistency (along with any potential negative affect), or change their attitudes or behaviour in order to maintain more consistency in the future (and thus alleviate negative affect).
Previous research has utilized a hypocrisy paradigm in order to induce hypocrisy in participants, resulting in increased uptake of pro-environmental, ethical, and pro-social behaviours [30][31][32][33][34][35]. The hypocrisy paradigm was pioneered by Aronson, Fried and Stone [36] who designed an experiment that has since reliably induced hypocrisy and dissonance. Firstly, participants must advocate a pro-social (or pro-environmental) behaviour. Secondly, participants are asked to recall any past transgressions, the function of which is to increase the salience of any inconsistencies between past behaviour and the stance they have just advocated for. In previous experiments, the first stage of hypocrisy inducement has taken various forms, including signing petitions, posters or flyers [37][38][39][40], writing a list of reasons or a paragraph to promote the target behaviour [41][42][43], or being filmed giving a speech in support of a behaviour [33,36,41,[44][45][46][47][48]. Research has shown that alterations in the commitment stage can lead to different behavioural responses. Specifically, behaviour change is more likely where the advocacy is a public display of support, rather than a private disclosure of advocacy. This public commitment more reliably produces hypocrisy and subsequent effects on behaviour change and/or intentions [49][50][51].
The second step, transgression recall, is less variable, and prompts the participant to list or rate the frequency of past behaviour. This serves to raise awareness of a failure to adhere to one's own principles and is understood to increase the salience of hypocrisy in the respondent. This hypocrisy is unpleasant to the individual experiencing it, motivating them to reduce any associated psychological discomfort [36,49,52]; therefore, in the current experiment, we expect induced hypocrisy about high consumption to increase intentions to engage in low-consumption behaviours.
This current research aims to increase the salience of hypocrisy and measure its impact on individual decision making, specifically on policy support and behavioural intentions. Due to a lack of prior research on hypocrisy and policy support, our hypotheses here are more exploratory. If participants are made to feel hypocritical, they are confronted by the inconsistency between what they want to do and what they actually do. We predict higher support for policy measures that involve regulation and control, and lower support for policies that rely upon individual freedom and decision making. The hypocrisy arises from an inability to match one's autonomous decision making with one's ideal course of action, and there may therefore be less willingness to be wholly responsible for those decisions.
Social Norms
Social norms have been explored with relevance to pro-environmental behaviour (PEB) [27,28], but not with specific relevance to reduced-consumption lifestyles and the acceptance of policy; this research contributes to filling that gap in the literature. Social norms have long been the subject of psychological enquiry, and recent decades have yielded much research on their effect upon PEBs, most of it rooted in the theory of normative conduct [53], pioneered by Cialdini and colleagues [27].
A meta-review of social norms and PEB has shown that descriptive norms are more effective than injunctive norms in eliciting behaviour change [28]. Descriptive norms illustrate what most people actually do, whereas injunctive norms illustrate what most people should or ought to do. Descriptive norms around low consumption are unlikely to be evident in today's consumerist society, which raises questions about whether such an experimental manipulation would be received as plausible by participants. Additionally, research has found hypocrisy studies to be most effective when the norms are well accepted [49][50][51][52][53][54]. Relevant to this concern is more recent research into dynamic social norms, which have been influential on emergent and less well-established norms [55,56]. This manipulation presents a social norm as emergent, illustrating that there is uptake of the target behaviour or belief and that it is becoming a social norm. This specific type of norm framing has been effective for reducing water consumption in the home [55].
Social norms framed around relevant social groups are found to be more effective in provoking uptake of a behaviour than norms framed around a group with which one feels little affiliation or identification [57,58]. This study therefore emphasises a UK-specific dynamic descriptive norm, to ensure that all respondents (recruited from the UK) fall within a relevant social group. By framing the uptake of the desirable behaviour, rather than drawing attention to the transgressions of non-adopters, we avoid inadvertently portraying high-consumption lifestyles as normative. Social norms have been found to be most effective when they activate guilt within participants [59]. If the respondents in the social norm condition are aware that others in their community (in this case, other UK citizens) are reducing their environmental impact, they could be motivated to make commitments to low-consumption behaviours out of anticipated guilt or shame [60]. Therefore, activating social norms in conjunction with inducing hypocrisy is likely to produce an interaction effect, whereby social norms have a larger effect on decreasing consumption when hypocrisy has been induced.
It will be beneficial to consider how social norms can influence the public acceptance of policy measures, and whether or not they might affect support for the more radical social transformation that is being called for to address climate breakdown. There is contradictory evidence around the interaction of social norms and public policy, with some studies supporting the notion that public policy change can bring about social norms [61,62], and others the reverse, that social norms around an issue can provoke the creation of public policy [24]. The latter is more in line with the research on PEB. Understanding this relationship better, by ascertaining the effects of social norms upon policy acceptance and behavioural intention, can help inform effective policy making.

Hypothesis 1 (H1). The social norm framing will increase low-consumption behavioural intentions, increase support for national governance of radical and transformative policy measures (regulation from government level, consumption budgets, local community sharing economy), and decrease support for policies that operate at a level of personal governance (environmental taxation, deregulating markets).
Hypothesis 2 (H2). The hypocrisy manipulation will decrease high-consumption behavioural intentions, increase support for national governance of radical and transformative policy measures (regulation from government level, consumption budgets, local community sharing economy), and decrease support for policies that operate at a level of personal governance (environmental taxation, deregulating markets).
Hypothesis 3 (H3). An interaction effect will occur whereby the social norm framing will increase the influence of the hypocrisy framing.
Design
The current study employed a 2 × 3 between-participants experimental survey design that operationalised two independent variables: social norm framing (control, social norms) and induced hypocrisy (control, advocacy, and hypocrisy) (See Table 1). The effect of these conditions was measured against the two sets of dependent variables: acceptability of policies to support low-consumption lifestyles, and behavioural intentions to reduce one's own consumption.
Participants
The study recruited 259 participants from the UK using an online participant panel (Prolific) (see Appendix B for exclusion criteria). Participants were randomly assigned to one of the six conditions via a randomizer function in Qualtrics (see Table 2 for demographic information per condition). More participants were in the lower income brackets (the average [median] wage in the UK is ~£28,000, and the sample was skewed towards lower incomes, as is typical in such research), and more participants self-identified as left-wing than right-wing. Chi-squared tests showed that age, income and political affiliation did not differ significantly across the conditions; however, gender was significantly unbalanced across conditions. Income and political orientation were not correlated (r = 0.05, p = 0.42), and a one-way ANOVA showed that political orientation was not predicted by income (F 4, 254 = 1.17, p = 0.32).
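As a minimal illustration (the authors' own analysis was run in SPSS, so this is a sketch rather than their script), the reported balance checks could be reproduced in Python along the following lines; the DataFrame, file name and column names are all assumptions.

```python
# Sketch of the randomisation balance checks reported above.
# Assumes one row per participant with hypothetical columns:
# condition, age_band, income, income_bracket, political_orientation.
import pandas as pd
from scipy import stats

df = pd.read_csv("study1_responses.csv")  # hypothetical file name

# Chi-squared test of independence: is age band distributed
# evenly across the six experimental conditions?
contingency = pd.crosstab(df["age_band"], df["condition"])
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"age x condition: chi2 = {chi2:.2f}, p = {p:.3f}")

# Pearson correlation between income and political orientation
r, p_r = stats.pearsonr(df["income"], df["political_orientation"])
print(f"income x politics: r = {r:.2f}, p = {p_r:.2f}")

# One-way ANOVA: does income bracket predict political orientation?
groups = [g["political_orientation"].dropna().values
          for _, g in df.groupby("income_bracket")]
f_stat, p_f = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_f:.2f}")
```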
Materials
All materials used in this research were accessed by participants using Qualtrics, an online digital survey distribution software package. Analysis for descriptive and inferential statistics was conducted using IBM SPSS statistical analysis software (ver. 25).
Procedure
All participants first read information about the study and data protection and were required to give informed consent. Participants were then given factual information about climate change and consumption, in order to explain why policy support and behaviour change might be required (see Appendix A). The experimental manipulation of social norms was implemented through the provision of a graph and short statement (Figure 1; detailed below in Section 2.2.5), which was added to the end of the information statement. The experimental manipulation of hypocrisy was carried out in accordance with the established practice detailed in Sections 1.1 and 2.2.6, where participants in the advocacy and hypocrisy conditions were asked to sign a pledge stating their advocacy for a given course of action, in this case to reduce their consumption. Following this, participants in the hypocrisy condition were immediately asked to recall times in the recent past when their behaviour had transgressed from this pledged advocacy. After these manipulations, all participants completed the dependent variable items on policy support and behavioural intentions. Finally, a dissonance thermometer was used to measure hypocrisy in each group as a manipulation check (see Section 2.3.1). After the survey, participants were debriefed, including an explanation of the social norms deception.
Social Norms Framing
The first stage of the experimental design constituted an information statement with a number of facts about consumption and climate change. This stage had two conditions: one included only this information statement; the second also included a social norms statement with a descriptive norm about how many people recognise over-consumption as a problem, followed by a dynamic descriptive norm detailing the percentage of those people who are starting to take action to reduce consumption. The statement read as follows: "A recent survey found that the majority (72%) of people in the U.K. realise our consumption levels are too high, and of those people, 45% are already taking steps to live more sustainably by buying less." The figures used in the social norm manipulation were not factual and had been developed in order to present low consumption as a social norm. This method of deception is common in research, and participants were subsequently debriefed.
Hypocrisy
The second part of the experiment was the hypocrisy manipulation, which consisted of three levels. The first was a control group with no experimental manipulation. In the other two levels, participants were asked to sign a pledge and make a commitment to reduce their own consumption (see Figure 2). Participants were told that their signature would be publicly displayed. While in fact their signatures were never publicly displayed, it was important that participants believed the advocacy pledge would be public, as this is the most effective way of eliciting behaviour change [30].
In the hypocrisy condition, participants were also asked to sign the pledge (as in the advocacy condition) but were then asked to rate the frequency of any behavioural transgressions that contradicted their pledge to reduce consumption. This had the aim of making participants aware of the inconsistency between their behaviour and the pledged advocacy regarding consumption. Participants were asked "how many times in the past week have you bought…?" in relation to four items: 'an item made of (or packaged with) single use plastic', 'a product that is not ethically produced', 'food that you have ended up throwing away', and 'something that you didn't really need' (rated 1-6: 'not at all' (1), 'once' (2), 'a few times' (3), 'often' (4), 'every day' (5), and 'more than once a day' (6); an additional option of 'don't know' was excluded from analyses). Participants were also asked "how many times in the past year have you bought…?" in relation to two items: 'a new replacement product, instead of repairing an old one' and 'an expensive luxury item' (rated on the same 1-6 scale, again with an additional 'don't know' option excluded from analyses).
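A minimal sketch of how these transgression ratings could be coded for analysis, with 'don't know' responses excluded as described above; the column and file names are assumptions.

```python
# Map the verbal frequency labels to the 1-6 codes used in the paper;
# a 'don't know' response falls outside the map, becomes NaN,
# and is thereby excluded from analysis.
import pandas as pd

FREQ_CODES = {"not at all": 1, "once": 2, "a few times": 3,
              "often": 4, "every day": 5, "more than once a day": 6}

weekly_items = ["single_use_plastic", "unethical_product",
                "wasted_food", "unneeded_purchase"]   # hypothetical names
yearly_items = ["replace_not_repair", "luxury_item"]  # hypothetical names

df = pd.read_csv("hypocrisy_condition.csv")  # hypothetical file name
for col in weekly_items + yearly_items:
    df[col] = df[col].str.lower().map(FREQ_CODES)

# Per-participant transgression score: mean of the answered items
# (NaN items are skipped by default)
df["transgression_score"] = df[weekly_items + yearly_items].mean(axis=1)
```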
Hypocrisy Manipulation Check
A 'dissonance thermometer', a scale developed by Elliot and Devine [34], was used to measure cognitive dissonance; it has been used in some previous research as a proxy for measuring hypocrisy. The measure was reduced to an eight-item scale representing the four dimensions of dissonance: negative self-directed affect ('disappointed in myself' and 'disgusted in myself'), psychological discomfort ('uncomfortable' and 'bothered'), anxiety ('stressed' and 'worried'), and positive affect ('content' and 'happy'). A ninth item, labelled 'hypocritical', was added to assess self-reported feelings of hypocrisy; correlations between the dissonance thermometer and this hypocrisy item help gauge the effectiveness of the scale. All items were answered in response to the question "at the moment, to what extent do you feel…?" (1-7, with a label at each end of the scale: 'does not correspond to how I feel' (1) and 'completely corresponds with how I feel' (7)), as per the procedure of Pelt et al. (2018).
The dissonance thermometer showed very strong reliability (Cronbach's α = 0.91) and correlated significantly with the additional hypocrisy item (r = 0.53, p < 0.01). However, a one-way ANOVA found no significant difference in participants' dissonance across the conditions when using the dissonance thermometer (F 2, 259 = 1.01, p = 0.36), nor was there a significant difference between the hypocrisy conditions when using the 'hypocrisy' item (F 2, 259 = 0.584, p = 0.558).
To make further sense of this result, we can look at the subset of participants who completed the transgression items and see whether the degree to which a participant transgressed in their consumption behaviour led to a more hypocritical rating. Transgression scores were positively correlated with the self-reported hypocrisy item (r = 0.22, p = 0.024) and marginally correlated with the dissonance thermometer (r = 0.181, p = 0.053).
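As an illustration of the reliability statistic reported above (α = 0.91), Cronbach's alpha can be computed directly from its definition; the item names below are assumptions, and the two positive-affect items are assumed to have been reverse-scored before scale construction.

```python
# Cronbach's alpha from its definition:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("study1_responses.csv")  # hypothetical file name
cols = ["disappointed", "disgusted", "uncomfortable", "bothered",
        "stressed", "worried", "content_rev", "happy_rev"]  # assumed names
print(cronbach_alpha(df[cols].dropna()))  # reported value: 0.91
```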
Dependent Variables
The dependent variables were behavioural intentions and policy support items. The items used for the transgressions were included when asking about behavioural intentions, along with two additional PEB items ('Travel by foot, bike, bus or train instead of by car or plane' and 'Choose to eat vegetarian/vegan meals and cut down on meat'; both of these items were reverse coded for analysis). Participants were asked 'Think about the next month, and your intentions regarding the following actions. How often will you…?' on a five-point scale (rated 1-5: 'not at all' (1), 'once' (2), 'a few times' (3), 'often' (4), 'always' (5), plus a further option, 'don't know', which was excluded from analysis).
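A minimal sketch of the reverse coding described above, under assumed column names: on a 1-5 scale the reversed value is 6 minus the original, so that higher scores consistently indicate higher-consumption intentions.

```python
import pandas as pd

# Toy data standing in for the two reverse-coded PEB items
df = pd.DataFrame({"travel_low_carbon": [1, 3, 5],
                   "eat_less_meat": [2, 4, 5]})

# On a 1-5 scale: 1 <-> 5, 2 <-> 4, 3 unchanged
for col in ["travel_low_carbon", "eat_less_meat"]:
    df[col + "_rev"] = 6 - df[col]
```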
The policy support items were measured on a seven-point scale ('completely oppose' (1), 'strongly oppose' (2), 'slightly oppose' (3), 'neither oppose nor support' (4), 'slightly support' (5), 'strongly support' (6), and 'completely support' (7)). The six items represented the following policies:

• Consumption budget: "Government policy should introduce an individual consumption budget where calculations are made around the impact of the things you buy and you are individually responsible for your consumption footprint. You would have a limit to how much, and what you can buy."
• Environmental taxation: "Government should introduce a tax on activities and products that are damaging to the environment or people. This would make it more expensive to produce and buy products that are environmentally or socially damaging but they would still be available."
• Regulation to ban products: "Government policy should regulate businesses to produce and sell only sustainable and ethical products. For example, cheaply produced clothes and electronics will be banned."
• Reduce working hours: "There should be a reduction in working hours. This would mean we have more time to spend with family, and actively engaging in more activities (making things and growing food etc.). This would also mean we have less money to spend, and we wouldn't be able to buy as many things."
• Free-market deregulation: "Business should be given more freedom from government to meet demand. Products would only become more environmentally-friendly if people chose greener products. This 'deregulation' doesn't guarantee a reduction in consumption."
• Community sharing/fixing economy: "Local community groups should have support from the government to set up 'repair cafes' and 'library of things' so we can be better equipped to fix things we have, and share appliances and products we don't need to own. This would be cheaper but less convenient than owning them ourselves."
Policy Measure Support
A multivariate analysis of variance (MANOVA) was conducted (using Pillai's trace), finding a non-significant effect of social norms on policy acceptance measures, V = 0.022, F 6, 248 = 0.920, p = 0.48, a non-significant effect of hypocrisy inducement on policy acceptance measures, V = 0.071, F 12, 498 = 1.53, p = 0.11, and a non-significant interaction effect of social norms and hypocrisy inducement on policy acceptance measures, V = 0.043, F 12, 498 = 0.914, p = 0.53. However, separate univariate between-subjects tests showed marginally significant effects of hypocrisy inducement on the regulation of markets, F 2, 253 = 2.38, p = 0.095 (Figure 3), and on the deregulation of markets, F 2, 253 = 2.74, p = 0.067 (Figure 4). Hypocrisy marginally reduced willingness to support government regulation, and increased support for deregulation and free-market business solutions. Table 3 shows overall means and sub-group condition means for policy support.

Behavioural Intentions

Table 4 shows overall means (and standard deviations) and sub-group condition means for intention to engage in consumption behaviours. 1 Actual item was "how often will you choose to eat vegetarian/vegan meals and cut down on meat", so the label and figures have been reversed to maintain harmony in the scale. 2 Actual item was "how often will you travel by foot, bike, bus or train instead of by car or plane", so the label and figures have been reversed to maintain harmony in the scale.

There were also significant between-subject effects of hypocrisy inducement on behavioural intentions related to ethical purchasing (F 2, 126 = 6.95, p < 0.005) and replacing and repairing (F 2, 166 = 4.78, p = 0.014). For ethical purchasing, Bonferroni corrected post-hoc testing showed a significant difference between the hypocrisy control group and the advocacy only group (p = 0.012), with the advocacy only group reporting lower intentions to purchase unethical products than the control group. There was also a between-subject interaction effect of social norms and hypocrisy inducement on replacing and repairing (F 2, 126 = 3.53, p = 0.039), showing that when social norms were present the advocacy only group had lower intentions to engage in over-consumption and the hypocrisy group had higher intentions to over-consume (see Figures 5 and 6).
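The analyses above were run in SPSS; as a hypothetical sketch of the same kind of analysis in Python, a 2 × 3 MANOVA with Pillai's trace and a univariate follow-up could look as follows. The dependent variable and factor names in the formulas are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("study1_responses.csv")  # hypothetical file name

# 2 x 3 MANOVA across the six policy-support items; mv_test()
# reports Pillai's trace alongside the other multivariate statistics.
manova = MANOVA.from_formula(
    "budget + taxation + regulation + hours + deregulation + sharing"
    " ~ C(norms) * C(hypocrisy)", data=df)
print(manova.mv_test())

# Univariate between-subjects follow-up for one dependent variable
model = ols("deregulation ~ C(norms) * C(hypocrisy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```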
Discussion
The social norms framing was not found to influence support for different policy measures, nor did they appear to influence people's intentions to consume less. In light of these results we must reject H1. The inducement of hypocrisy did not influence support for policy and governance but did have an effect on behavioural intentions to reduce consumption. However, the results do not indicate that hypocrisy itself increases intention to lower one's consumption specifically, but the act of a commitment and advocacy to reduce consumption helps to reinforce this as a behavioural intention. These findings mean we must also reject H2. H3 can be partially retained due to a significant interaction effect illustrating that the effects of the hypocrisy manipulation were strengthened in the social norms condition when evaluating the behavioural intentions. No interaction effect was found between social norms and hypocrisy when evaluating the impact of the experiment on policy and governance support.
The failings of a social norm manipulation could be interpreted in light of recent research from Richter, Thøgersen and Klöckner [63] who reliably replicated the appearance of a 'Boomerang' effect on the social norms manipulation in their study on sustainable consumption choices in the supermarket. This effect is when the target behaviour of the social norm manipulation results in the increased uptake of the behaviour that one is seeking to reduce. This effect was also found in other research [64,65]. Richter et al. [63] suggest that a descriptive norm of an undesirable behaviour that is particularly common can have a negative impact on attempts to reduce the target behaviour. The use of a dynamic descriptive norm in this study could have reproduced this, as it acknowledges that a given behaviour is currently widespread, despite suggesting that there is a strong movement towards people changing this behaviour. In hindsight, this manipulation may have legitimised hypocrisy by showing that there are people who are thinking about changing their consumption patterns, but not translating this into behaviour.
The dynamic social norm statistics reported to participants, about how many people in the UK were reducing their consumption, were not true; in fact, UK figures show steady growth in household final consumption expenditure [66]. Therefore, it is possible that participants did not truly feel they would be following a social norm by adopting the low-consumption behaviours. The social norms manipulation check suggested that they were not sceptical of the information and believed it to be truthful; however, there is a possibility that the norm of consumption is so strong that it cannot easily be manipulated by a simple statistical graph. The lack of effect could also result from a failed activation of the dynamic descriptive social norm relevant to policy or governance. The norm activation was designed to make it salient that most people in the UK were beginning to take action to tackle their consumption levels, thereby leading people to conform to this apparent norm. It did not specify a policy preference or support for a type of governance; therefore, there was no social norm activation for policy support, only for behaviour change relevant to it.
The hypocrisy framing showed a marginal effect of advocacy on behavioural intentions to repair existing products and purchase ethical goods. These two items both represent a strong commitment to a lifestyle change that is relatively time-intensive and effortful. We can infer from this that making a pledge and committing to reduce one's consumption behaviour resulted in a stronger desire to follow this up with direct action. However, if hypocrisy was induced after this commitment stage, the effects were no different from receiving no message at all. This could be explained by research showing that increased salience of hypocrisy can also evoke a 'rebound' effect, where individuals harden their stance and refuse to change their habits. One reason for this might be that people find it easier to renege on their recent advocacy statement and change their beliefs to be in accordance with the transgression of the target behaviour, finding this a suitable way to relieve themselves of any negative affect arising from the arousal of dissonance [30,40,67,68]. The conditions of this rebound effect must be better understood, so that we know when activating the salience of hypocrisy is useful and when it is not [69].
The marginally significant effects on policy support hint at the possibility that hypocrisy reduces support for national governance and regulation and increases support for personal governance through deregulation and free-market mechanisms. This finding suggests that hypocrisy is not a useful mechanism for increasing support for national governance and regulation, and brings into question how appropriate it is for behaviour change programmes to focus on individual responsibility and decision making. Because hypocrisy reinforces a sense of personal moral responsibility for one's actions, it could be perpetuating an individualistic approach. This could help make sense of why it empowers individuals to change their intentions and reduces their willingness to absolve themselves of this responsibility. However, this would benefit from further exploration in a future study.
Because the hypocrisy manipulation check showed no difference between conditions, it is possible that the measure did not record levels of hypocrisy before participants had the opportunity to ameliorate and relieve their negative affect. This relief could be achieved by supporting a policy they felt would solve the problem, or by reinforcing their intention to consume less. Additionally, all participants were given information about consumption that may itself have induced hypocrisy in the control group. A pure control group and a revised placement of the hypocrisy manipulation check should be sought in future research. Another significant caveat regarding the effects of hypocrisy upon intentions to reduce consumption is the smaller sample size for the two significant items: because many participants selected 'don't know' for some of the items, there is a potential lack of statistical power.
Conclusions
Inducing social norms was not effective in this study, which could be due to a weak manipulation or a boomerang effect. Additionally, making the public feel hypocritical did not increase their willingness to accept stronger regulation. Asking people to commit to and pledge a change in their behaviour, however, increased support for regulation and more transformative, radical policy, and decreased intentions to live a high-consumption lifestyle.
Study 2
A replication study was devised to simplify and strengthen the research design. Alterations were made to create a more obvious distinction between the policy options and to make them more relevant to different levels of governance and policy approaches. The revised items more distinctly represent 'National Governance and Regulation', 'Personal governance with deregulation of free markets', and 'Local and community governance with radical change' (see Section 3.2.6 for the new items). The social norms manipulation was removed, as it did not produce significant findings in Study 1. A pure control was introduced in order to establish the effects of providing relevant information versus no information. This allowed the study to ascertain how influential it was to give participants information about the effects of their consumption behaviours with reference to global climate change, and, in turn, whether the hypocrisy manipulation increased or decreased this effect. Finally, amendments were made to the behavioural intention items to provide examples of what the behaviour changes involved (see Section 3.2.6).
Hypothesis 4 (H4). The hypocrisy manipulation will increase support for national governance and radical policy measures and decrease support for policies that operate at a level of personal governance.

Hypothesis 5 (H5). The hypocrisy manipulation will decrease intentions to engage in consumption behaviour.
Participants
The study recruited 300 participants from a UK sample, again using Prolific. Table 5 presents the socio-demographic characteristics of the Study 2 sample. Chi-squared tests showed that age, gender, and political orientation did not differ significantly across the conditions; however, income was significantly unbalanced across conditions. Income and political orientation were not correlated (r = 0.09, p = 0.13), and a one-way ANOVA showed that political orientation was not predicted by income (F 4, 295 = 1.16, p = 0.33).
Design
This study employed a 1 × 4 between-participants design. The four conditions were as follows:

1. A pure control group, where participants were given no information;
2. An information only group, where participants were provided with information on climate change and the role of consumption;
3. An advocacy only group, in which participants were shown the information on climate change followed by a commitment to reduce their consumption behaviours; and
4. A hypocrisy group, in which participants were provided with the information and asked to make a commitment to reduce their consumption behaviour, followed by listing their recent transgressions with regards to consumption behaviours.
The dependent variables consisted of two sets: three low-consumption policy approach items, and six consumption behaviour intention items.
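As a small illustration of how participants could be allocated to the four conditions above (the survey platform's own randomizer was used in practice, so this sketch only mimics its likely behaviour under assumed condition labels), block randomisation keeps the group sizes near-equal:

```python
import random

CONDITIONS = ["pure_control", "information_only", "advocacy", "hypocrisy"]

def assign_conditions(n_participants: int) -> list[str]:
    """Shuffle whole blocks of the four conditions so that group
    sizes never differ by more than one."""
    assignments: list[str] = []
    while len(assignments) < n_participants:
        block = CONDITIONS[:]
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

print(assign_conditions(300)[:8])  # first eight allocations
```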
Materials
Materials were the same as in Study 1.
Independent Variables
The social norms framing was removed for this study, and the hypocrisy level was expanded to include four conditions. The information given was introduced as an experimental condition, and was the same information from the first study. Materials for the advocacy pledge were also kept the same. A slight change was made to the transgression list, in order to make it clearer how each behavioural domain was relevant to actual actions, e.g., "...something made of, or packaged with, single use plastic (e.g., plastic wrapped salad/vegetables, crisp packets, soft drinks bottles)", "...a new product instead of fixing an old one (e.g., repairable shoes, electronic devices)", and "...a product that is not ethically produced (e.g., made with cheap labour, conflict metals, damaging to the environment)". These items were all amended to have the same 'polarity', instead of having reversed items as in Study 1, in order to reduce cognitive load.
Hypocrisy Manipulation Check
The dissonance thermometer was the same as in Study 1; however, it was moved to before the dependent variables. This was due to concerns in Study 1 that the placement of this measure at the end of the study meant that any negative affect or dissonance resulting from the hypocrisy had been ameliorated by support for policy and expressed intentions to consume less.
The dissonance thermometer (α = 0.89) was significantly positively correlated with the hypocrisy item (r = 0.54, p < 0.001). A one-way ANOVA showed that hypocrisy differed significantly across conditions (F 3, 296 = 5.38, p < 0.001), with Bonferroni corrected multiple comparisons showing that the control condition felt significantly less hypocritical than any of the experimental conditions. However, these post-hoc tests did not show any significant difference between the three experimental conditions. Transgression scores for those in the hypocrisy condition were marginally positively correlated with self-reported hypocrisy (r = 0.24, p = 0.08).
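A minimal sketch, under assumed column and file names, of this manipulation-check analysis: a one-way ANOVA on the dissonance-thermometer scores followed by Bonferroni-corrected pairwise comparisons.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import MultiComparison

df = pd.read_csv("study2_responses.csv")  # hypothetical file name

# One-way ANOVA across the four conditions
groups = [g["dissonance"].dropna().values
          for _, g in df.groupby("condition")]
f_stat, p = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p:.3f}")

# Bonferroni-corrected pairwise t-tests between all condition pairs
mc = MultiComparison(df["dissonance"], df["condition"])
result = mc.allpairtest(stats.ttest_ind, method="bonf")
print(result[0])  # summary table of corrected pairwise comparisons
```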
Dependent Variables
The policy approach items were re-written to be more focused and reduced to three items, to clearly illustrate different political strategies with minimal crossover. Each item was rated on the same seven-point scale as in Study 1 ('completely oppose' (1) to 'completely support' (7)). The items were:

• "There will be more control over businesses to produce and sell only sustainable, easily repairable and ethical products (e.g., cheaply produced clothes and single use plastics will be banned). Making these products unavailable is the best way to ensure we can reduce our material consumption.", representing 'National Governance and Regulation';
• "Business will be given more freedom to meet consumer demand. Products will become more environmentally-friendly if people buy more 'green' products and don't buy harmful or unsustainable products. If people don't want environmentally damaging products, they won't buy them. People can be trusted to make informed decisions about what they buy.", representing 'Personal governance with deregulation of free markets';
• "There will be an increase in local community led projects where people can borrow tools and appliances from a 'library of things', instead of individually owning products we don't use all the time. There will be more space made available for allotments and community growing. People will work fewer days a week, allowing for more time to be spent with friends and family, and making or repairing their own products instead of buying new things.", representing 'Local and community governance with radical change'.
Behavioural intention measures were kept largely the same as in Study 1 and mirrored the changes made to the transgressions, with small amendments made to try to reduce 'don't know' responses. Participants were asked 'Think about the next month, and your intentions regarding the following actions. How often will you…?' in relation to the six items (rated 1-5: 'not at all' (1), 'once' (2), 'a few times' (3), 'often' (4), 'always' (5), plus 'don't know', which was removed from analysis).
Policy Measure Support
A MANOVA was conducted (using Pillai's trace), finding a significant effect of experimental manipulation on policy support V = 0.143, F 9, 291 = 4.93, p < 0.001. Between-subject effects showed significant effects on national governance to regulate consumption (F 3, 297 = 14.03, p < 0.001) and radical change at local governance level (F 3, 297 = 3.74, p = 0.012). Deregulation was not significantly affected. Bonferroni corrected multiple comparisons showed a significant difference in support for regulation measures between control and all three experimental conditions (p < 0.001), but no significant difference between the three experimental conditions. Additionally, Bonferroni corrected multiple comparisons showed a significant difference in support for radical change measures between control and information-only condition (p < 0.01), but no significant difference between any other conditions (Figures 7-9).
Figure 7. Policy support for national governance.
Figure 8. Policy support for personal governance.
Figure 9. Policy support for community governance.
Behavioural Intentions
A MANOVA was conducted (using Pillai's trace), finding a significant effect of experimental manipulation on behavioural intentions, V = 0.197, F 18, 282 = 2.81, p < 0.001. Between-subject comparisons showed significant effects on reducing behavioural intentions for all of the items: intention to use plastic (F 3, 297 = 8.13, p < 0.001); intention to buy unethically (F 3, 297 = 7.25, p < 0.001); intention to waste food (F 3, 297 = 3.10, p < 0.05); intention to buy something you don't need (F 3, 297 = 9.90, p < 0.001); intention to buy new instead of repair (F 3, 297 = 9.91, p < 0.001); and intention to buy expensive luxuries (F 3, 297 = 6.61, p < 0.001). Bonferroni corrected multiple comparisons showed a significant difference in consumer intentions between the control and hypocrisy groups across all items (p < 0.001), but no significant difference between the control, information only, and advocacy groups, with two exceptions: intention to buy things you don't need (where information-only and hypocrisy also differed significantly, p < 0.001, in addition to the aforementioned control and hypocrisy conditions), and intention to buy new instead of repair (where all conditions differed significantly from hypocrisy, p < 0.01) (Figures 10-15).
Figure 10. Intention to use single use plastic.
Figure 11. Intention to buy unethical products.
Figure 12. Intention to waste food.
Figure 13. Intention to use single use plastic.
Figure 14. Intention to buy new instead of repair.
Figure 15. Intention to buy expensive luxuries.
Discussion
The hypotheses for Study 2 were adopted from Study 1 and sought to observe the effects predicted in light of the literature originally reviewed, rather than to replicate any findings from Study 1. We cannot fully accept H4, as the only significant differences we found were in comparison to the pure control group. We can accept H5, as we found that the hypocrisy condition showed significantly lower intentions to engage in consumption behaviour. Study 2 shows that the provision of information about the seriousness of climate change and the relevance of individual-level consumption behaviours can increase support for stronger regulation at a national governance level and for locally led, community-level governance of radical lifestyle changes. The addition of a control group allowed us to recognise the marked difference between those who were given the information statement and those who were not. However, the experimental manipulation levels were not significantly different from each other with regards to support for public policy and governance. Self-reported measures of hypocrisy and cognitive dissonance show that the control group felt significantly less hypocritical than the other groups, so perhaps a mechanism of hypocrisy was already activated by the mere presentation of information.
Hypocrisy was significantly more effective at reducing behavioural intentions for consumption, with the participants who rated their transgressions showing the lowest intentions for future consumption behaviour. We did not replicate the marginal rebound effect found in Study 1; however, with such a clear pattern of significance across all of the items (and an improved design), we can be more confident in the findings from Study 2. Making participants aware of their hypocrisy appears to have motivated them to ameliorate this discrepancy by intending to act in accordance with their recently signed advocacy statement.
General Discussion
Hypocrisy inducement yielded an interesting mix of effects upon policy acceptance. The introduction of a pure control group in Study 2 allowed us to demonstrate the effect of information provision upon support for more regulation at national governance level and radical change in local communities. It is possible that the provision of this information alone increased feelings of hypocrisy, as all conditions that were given the information self-reported higher hypocrisy. Further examination of how a process of information provision might provoke reflection and associated guilt might help to further understand this finding. Supporting stronger regulation on consumption could be seen to absolve the individual of responsibility, which is consistent with focus group research where we found that participants believed governments should do more to reduce consumption [23].
Hypocrisy may be best at influencing an intention to change behaviour. However, taking the findings of these two studies together with the results regarding policy options and governance levels, we suspect that hypocrisy reinforces a sense of the 'neo-liberal self', resulting in higher support for deregulation and free-market solutions. So entrenched is neo-liberalism that the environmental movement itself has often focused on individual agency and decision-making responsibility [70][71][72]. Governments and activists alike have often sought to influence individual actions that are 'simple and painless' rather than attempting more ambitious change [73].
The findings in this current study show that inducing hypocrisy reproduces support for policy that places responsibility on individual decision making, which could be what is causing hypocrisy in the first place. This interesting paradox should be further explored in order to unpack hypocrisy and its self-perpetuating impact on policy and governance.
Interestingly, the most effective hypocrisy manipulations of behavioural intentions were for the more challenging lifestyle changes. Consuming ethically and repairing consumer items require some effort from the individual, and yet intentions for these behaviours were the most significantly affected by the intervention. This suggests that hypocrisy may be a powerful tool for changing intentions for more difficult-to-change behaviours.
The dynamic social norms framing was not found to be effective in influencing low-consumption behavioural intentions in Study 1, which is most likely due to the dominance of consumption-based living. Therefore, it could be more useful to seek ways for social norms to be generated through policy and governance, rather than attempting to generate support for a policy through a descriptive social norm framing (which is misaligned with social reality). The implementation of public policy can contribute to the emergence of social norms and increase public support and acceptance, where this might have been lacking prior to launching. Nyborg et al. [74] explored how recycling was once unpopular; however, with the introduction of policy, regulation and infrastructure it has now become socially normalised and widespread. Similar effects could be explored in future research on social norms and low consumption.
A moderating effect of social norms on hypocrisy is noted for further investigation within a hypocrisy paradigm. Study 1 showed a 'boomerang effect' where advocating for a pledge after being primed with dynamic social norms framing increased intentions to consume less, but when hypocrisy was induced after pledge advocacy the intention to consume was higher. Study 2, which did not employ any social norms, had none of these rebound effects. The inducement of hypocrisy here only increased the intended effect of the manipulation in intentions to engage in low-consumption behaviours. Although this study removed social norms due to a lack of effect, the inconsistencies between findings in Studies 1 and 2 could be further explored and potentially explained by social norms interaction effects.
Limitations and Future Research
Due to recruitment methods, the sample here was slightly skewed towards left-wing and lower-income groups. With an ever-changing political climate in the UK, we cannot be sure how representative this sample is of the UK population at the time of data collection in 2019. The single-item measure of political orientation is fairly primitive and may not have been easily understood; the complexities of political identity in the UK in 2019 cannot be captured by it, and we therefore do not make claims based on this measure in our research. Critically, political affiliation was balanced across conditions; as such, any experimental findings were due to differences between conditions rather than sample bias. However, it would be useful for future research to examine more closely how political identity (particularly right-wing) and/or income levels (particularly higher income) might affect policy support and behavioural intentions in this domain. Another sample-based concern is that the participants in Study 2 were skewed towards females. Again, as this was not significantly different across groups, we can be confident of our research findings, but future research would be advised to redress this.
A possible limitation was that we chose to operationalize governance levels in terms of aggregate, but relatively crude, categories of policy which encompassed multiple elements; as such, we do not know whether there were particular aspects of the policies that respondents supported or opposed. This approach allows for a more holistic assessment (and arguably one that is more ecologically valid, as it captures more complexity and exposes trade-offs) of governance approaches across multiple scales, but at the expense of a fine-grained analysis of specific policy elements. Public responses to these policy elements could be examined in future work.
A further limitation is that this study examines only behavioural intentions, rather than actual behaviour. However, research on hypocrisy has found a greater effect upon behaviour than upon attitudes, so this work contributes towards understanding how hypocrisy might influence intentions to consume less. Future research would be well poised to explore the effect upon actual behaviour change in light of the findings here.
This paper details how hypocrisy can influence consumption-related behavioural intentions, providing a rationale for it to be explored as a mechanism in economic and moral decision making. This study did not collect data on the reflections of the participants, and as such we cannot infer how much agency participants felt they had in their decision making. Examining how hypocrisy relates to other psychological traits should form part of a new research agenda in this area and could help further explore agency and ability to change. Measuring other variables, such as self-efficacy and perceived behavioural control, could help explain further how a sense of responsibility and an inability to act manifest in those experiencing hypocrisy. These factors could be boundary conditions that moderate or mediate the effectiveness of hypocrisy inducement upon behaviour change.
Author Contributions: The research in this article was carried out by D.T. as part of a PhD research project. The project was supervised by L.W. and C.D., who therefore had significant input into the conception and design. The article was written by D.T. with drafting suggestions and edits from L.W. and C.D. All authors have read and agreed to the published version of the manuscript.
Appendix A Information Statement
Please read the following information carefully. You will be asked questions on some of the details to test your knowledge at a later stage.
The world is experiencing climate change, and changes need to be made in order to avoid levels of global warming that would threaten the existence of human societies and the living planet. The Paris Agreement has set targets to achieve the positive change needed to avert crisis. In order to meet these goals, the models and future scenario pathways show that new technology and increased energy efficiency will not be enough to help change this. The target is to limit global warming to 1.5 °C, and this would require 'Aggressive emissions reductions', however, if we continue as we are now ('Business as usual') we are more likely to reach 3-5 °C, which scientists report to be catastrophic. We will need to reduce our consumption in order to help tackle this problem effectively as part of an 'Aggressive emissions reduction' strategy.
At the moment our consumption levels exceed the sustainable amount of resources the earth can provide. Research has shown that if everyone lived like we do in the U.K. we would need 4 planet Earths to use the resources sustainably. We only have one planet, with limited resources, therefore it is not feasible to maintain this level of consumption; to keep burning fossil fuels, or digging up natural resources and materials, or using more land for agriculture. We now use 8 times as many resources compared to the start of the 20th Century, and reducing this will be key to avoiding catastrophic climate change. Research has shown that we need to reduce our carbon footprint associated with clothing, packaging, electronics, appliances, vehicles, and buildings. High consumption lifestyles have been linked to a variety of problems such as depression, anxiety, and generally a lower sense of life-satisfaction or individual well-being. Therefore, not only is it better for society and the planet to consume less, it is better for our own state of health.
Appendix B Exclusion Criteria
Any participants who failed both knowledge check questions, and those who failed to follow the required steps for effective experimental manipulation, were removed from the analysis (see below for details on knowledge checks). The final number of participants after these exclusion criteria were applied was 259; therefore, this is the number reported in the main body of the text.
Appendix B.1 Knowledge Check
All participants were asked two multiple choice questions regarding the information statement they had been presented with: "How many planet Earths would we need if everyone in the world consumed like the average UK citizen currently does?" ('1', '4', or '8', with '4' being correct), and "Since the start of the 20th Century we have increased our use of resources by . . . " ('two times', 'eight times' and 'twelve times', with 'eight times' being correct). Those participants in the social norms condition were asked two further multiple choice questions related to the extra information they received: "How many people agreed that the UK has a problem with over-consumption?" ('15%', '58%', or '72%', with '72%' being correct), and "Of those people, how many were starting to reduce their consumption?" ('10%', '45%' or '62%', with '45%' being correct). These were designed to test knowledge of the information-only control condition and of the social norms condition, and to check that participants had given an acceptable level of attention.
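For illustration, the exclusion rule described above (removal of participants who failed both knowledge-check questions) can be expressed as a short screening script. The sketch below is not the authors' analysis code; the column names and sample rows are hypothetical.

import pandas as pd

# Hypothetical column names; the actual survey export format is not described in the paper.
ANSWER_KEY = {"q_planets": "4", "q_resources": "eight times"}

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Remove participants who answered every knowledge-check question incorrectly."""
    correct = pd.DataFrame(
        {q: df[q].astype(str).str.strip() == key for q, key in ANSWER_KEY.items()}
    )
    failed_both = ~correct.any(axis=1)  # wrong on both knowledge checks
    return df.loc[~failed_both].copy()

# Invented example: the second respondent fails both checks and is dropped.
sample = pd.DataFrame({
    "q_planets": ["4", "1", "4"],
    "q_resources": ["eight times", "two times", "twelve times"],
})
print(len(apply_exclusions(sample)))  # prints 2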
Appendix B.2 Social Norms Manipulation Check
The manipulation checks were focused on establishing any significant differences (or lack thereof) in the perceived accuracy of the statistics between the conditions. The social norms manipulation was expected to produce no significant difference between the two groups in response to the question "Think about the information you were given about consumption and climate change at the beginning of this survey. To what extent do you agree that . . . ?" across three items: 'The information was overblown or exaggerated', 'The statistics were trying to manipulate my attitudes', and 'The statistics were accurate' (on a 1-7 response scale: Completely disagree (1), strongly disagree (2), slightly disagree (3), neither agree nor disagree (4), slightly agree (5), strongly agree (6), completely agree (7)). The social norms conditions were presented with statistics that were not true; therefore, if the manipulation was successful, their responses to these questions would not be significantly different from those of participants who only saw the factual information statement in the control condition. An independent t-test was used to compare the group means of those who were exposed to the dynamic social norm framing and those who were not. The t-tests showed non-significance for over-exaggerated statistics (t(257) = −0.88, p = 0.38), manipulating attitudes (t(257) = 1.03, p = 0.30), and accurate statistics (t(257) = 0.946, p = 0.35). This supports the notion that there was no difference between how each group perceived the accuracy of the information, whether they were exposed to only facts, or to both facts and falsified social norms.
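The reported manipulation checks are standard independent-samples t-tests, which can be reproduced with common statistical libraries. The following sketch uses invented 1-7 ratings purely to show the form of the test; it is not the study's data or code.

import numpy as np
from scipy import stats

# Hypothetical 1-7 agreement ratings for 'The statistics were accurate'.
control_ratings = np.array([5, 4, 6, 5, 3, 4, 5, 6])  # information-only condition
norms_ratings = np.array([4, 5, 5, 6, 4, 5, 3, 5])    # dynamic social norms condition

# Independent-samples t-test comparing the two group means.
t_stat, p_value = stats.ttest_ind(control_ratings, norms_ratings)
df = len(control_ratings) + len(norms_ratings) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.2f}")

# A non-significant result (p > 0.05) would indicate that the falsified
# social-norms statistics were perceived as no less accurate than the facts alone.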
Multiple cadherin extracellular repeats mediate homophilic binding and adhesion
The extracellular homophilic-binding domain of the cadherins consists of 5 cadherin repeats (EC1–EC5). Studies on cadherin specificity have implicated the NH2-terminal EC1 domain in the homophilic binding interaction, but the roles of the other extracellular cadherin (EC) domains have not been evaluated. We have undertaken a systematic analysis of the binding properties of the entire cadherin extracellular domain and the contributions of the other EC domains to homophilic binding. Lateral (cis) dimerization of the extracellular domain is thought to be required for adhesive function. Sedimentation analysis of the soluble extracellular segment of C-cadherin revealed that it exists in a monomer–dimer equilibrium with an affinity constant of ∼64 μM. No higher order oligomers were detected, indicating that homophilic binding between cis-dimers is of significantly lower affinity. The homophilic binding properties of a series of deletion constructs, lacking successive or individual EC domains fused at the COOH terminus to an Fc domain, were analyzed using a bead aggregation assay and a cell attachment–based adhesion assay. A protein with only the first two NH2-terminal EC domains (CEC1-2Fc) exhibited very low activity compared with the entire extracellular domain (CEC1-5Fc), demonstrating that EC1 alone is not sufficient for effective homophilic binding. CEC1-3Fc exhibited high activity, but not as much as CEC1-4Fc or CEC1-5Fc. EC3 is not required for homophilic binding, however, since CEC1-2-4Fc and CEC1-2-4-5Fc exhibited high activity in both assays. These and experiments using additional EC combinations show that many, if not all, the EC domains contribute to the formation of the cadherin homophilic bond, and specific one-to-one interaction between particular EC domains may not be required. These conclusions are consistent with a previous study on direct molecular force measurements between cadherin ectodomains demonstrating multiple adhesive interactions (Sivasankar, S., W. Brieher, N. Lavrik, B. Gumbiner, and D. Leckband. 1999. Proc. Natl. Acad. Sci. USA. 96:11820–11824; Sivasankar, S., B. Gumbiner, and D. Leckband. 2001. Biophys J. 80:1758–68). We propose new models for how the cadherin extracellular repeats may contribute to adhesive specificity and function.
Introduction
Cadherin-mediated cell-cell adhesion is essential for the morphogenesis of tissues and the maintenance of tissue function (Takeichi, 1995;Gumbiner, 1996). Adhesion results from the homophilic binding between extracellular domains of cadherins, which is controlled by the cytoplasmic domain and associated catenin polypeptides and the actin cytoskeleton. Cadherins make up a family of adhesion molecules, and the type of cadherin expressed in a cell can affect the specificity (Nose et al., 1990;Takeichi, 1995;Gumbiner, 1996) as well as the physiological properties (Levine et al., 1994;Kim et al., 2000) of cell interactions. Cadherin adhesive activity is also regulated by cytoplasmic signaling events, via the catenins and cytoplasmic domain. Ultimately, regulation of adhesion is mediated through the homophilic binding function of the extracellular domain, by modulation of either its binding strength or by its clustering (Yap et al., 1997). Therefore, an understanding of the molecular structure of the cadherin homophilic bond is fundamental to understanding the mechanism of cadherin-mediated adhesion, the specificity of adhesion, and the regulation of adhesion during tissue morphogenesis.
Recent findings about the structure of the cadherin extracellular domain have provided important clues about the molecular nature of the homophilic bond. Particularly important are the findings that the cadherin ectodomain forms a parallel, or cis, dimer that is required for homophilic binding and cell adhesion (Shapiro et al., 1995; Brieher et al., 1996; Chitaev and Troyanovsky, 1998; Takeda et al., 1999; Shan et al., 2000). The extracellular domain of classical cadherins consists of five cadherin repeats, or extracellular cadherin (EC) domains. The three-dimensional structure of the NH2-terminal EC domain (EC1) of N-cadherin determined by x-ray crystallography revealed an important element of dimerization, called the strand dimer (Shapiro et al., 1995). This parallel cis-dimer forms by reciprocal binding of the trp2 residue of each subunit in a hydrophobic pocket on the other subunit of the dimer, and the trp2 residue is crucial for cis-dimerization and adhesive function (Chitaev and Troyanovsky, 1998; Tamura et al., 1998; Shan et al., 2000). (An alternative model for cis-dimerization between EC1-2 domains has also been proposed [Nagar et al., 1996].) Ca2+ is also required for cadherin function, and in the presence of Ca2+, the cadherins form protease-resistant elongated rod structures (Hyafil et al., 1981; Takeichi, 1991; Pokutta et al., 1994; Sivasankar et al., 1999). The three-dimensional x-ray structures of fragments containing EC domains 1 and 2 reveal that Ca2+-binding sites link successive domains together in a fixed orientation (Nagar et al., 1996; Tamura et al., 1998). Thus, the basic structural unit capable of making a homophilic bond between cells appears to be a parallel dimer, mediated by EC1, of two rigid rodlike cadherin ectodomains.
The molecular structure of the homophilic bond is much less well understood. In particular, the identity of the actual binding site(s) for the homophilic interaction remains uncertain. Most studies have focused exclusively on the EC1 domain, mostly because of an elegant early study that attributed the specificity of adhesion to this domain (Nose et al., 1990). However, direct attempts to identify a specific homophilic binding site in EC1 have not been conclusive. The HAV sequence conserved in many cadherin EC1 domains was initially proposed to be a critical part of the binding site (Blaschuk et al., 1990; Nose et al., 1990; Williams et al., 2000), analogous to the role of the RGD sequence in integrin-binding substrates. However, unlike the RGD sequence, the HAV sequence does not form a specific loop or pocket typical of a binding site; indeed, the ala residue is not even on the surface of EC1 (Shapiro et al., 1995). Moreover, type II cadherins have a different sequence at this site, QAI, and mutation of either the HAV or QAI residues does not affect either homophilic binding or cadherin specificity (Kitagawa et al., 2000). The x-ray analysis of N-cadherin EC1 did reveal an antiparallel crystal packing interaction between subunits, which was interpreted to represent homophilic binding (Shapiro et al., 1995). However, the antiparallel packing interaction was quite different for crystals of the two-domain fragment, EC1-2, of N-cadherin (Tamura et al., 1998). Moreover, mutagenesis of many residues at the surface of EC1 has failed to reveal a role in cell adhesion (Kitagawa et al., 2000; Shimoyama et al., 1999), in contrast to the striking effects of mutating the trp2 that forms the parallel cis-dimer (Chitaev and Troyanovsky, 1998; Tamura et al., 1998; Shan et al., 2000). Thus, direct evidence for a specific homophilic binding site in EC1 remains elusive, and the role of EC1 in determining cadherin specificity needs to be reconsidered, especially in light of the importance of EC1 in establishing the lateral/cis-dimers required for adhesion.
Indeed, several findings in the literature provide evidence that the rest of the cadherin EC domain has a function in adhesion beyond serving as a simple spacer region. Although some adhesion blocking antibodies have been found to bind to EC1 (Nose et al., 1990; Amagai et al., 1992), many adhesion blocking antibodies and an adhesion activating antibody have been found to recognize other EC domains, including EC5 (Ozawa et al., 1990b; Zhong et al., 1999) and EC3 (mAb 6B6; unpublished data). In addition, naturally occurring missense mutations in the EC2 and EC3 domains of E-cadherin have been found in several human tumors (Berx et al., 1998a,b), and mutations in one Ca2+-binding site of E-cadherin between EC domains abolish adhesive function (Ozawa et al., 1990a). Furthermore, a biophysical study measuring direct molecular forces between cadherin ectodomains found evidence for multiple adhesive interactions, with maximal adhesive force developing when the ectodomains overlap entirely (Sivasankar et al., 1999, 2001). Although none of these studies identified specific binding sites, they do suggest that cadherin EC domains other than EC1 play important roles in the homophilic binding interactions between cadherin cis-dimers. In a previous study, we were able to express and analyze the biochemical and homophilic binding properties of the entire soluble ectodomain of Xenopus C-cadherin, CEC1-5, which exhibited functional activity only when dimeric (Brieher et al., 1996). This provided a starting point to begin to analyze the roles of all the cadherin EC domains in the homophilic binding function of the cadherin ectodomain. We have undertaken a systematic structure-function analysis of CEC1-5 using deletions of specific EC domains and assays for homophilic binding activity.
Analytical centrifugation
In a previous study of the purified soluble ectodomain of C-cadherin, CEC1-5, lateral dimerization was shown to be required for the homophilic binding activity (Brieher et al., 1996). However, the conditions for CEC1-5 dimerization were not well defined. Moreover, it has not always been possible to detect dimers of other soluble cadherin ectodomains (Pokutta et al., 1994;Tamura et al., 1998). Therefore, we wished to determine whether dimers of CEC1-5 exist in dynamic equilibrium with monomers and to measure the affinity of the dimer interaction. Furthermore, we wished to determine whether the formation of higher order oligomeric species of CEC1-5, which would result from homophilic adhesive binding interactions between dimers, could be detected. To measure these interactions in solution, equilibrium sedimentation analysis was performed using the analytical ultracentrifuge.
The stock CEC1-5 solution had an absorbance of 1.4975 at 280 nm and a concentration determined by fringe count of 2.12 mg/ml, resulting in a calculated extinction coefficient (1 mg/ml, 280 nm, 1-cm path) of 0.706. Calculated apparent weight average molecular weights from individual sedimentation equilibrium data sets collected over a loading concentration range of 2-25 μM varied from 78,570 to 102,660, whereas an apparent weight average molecular weight of 88,850 was determined from a global fit of all the data sets to a single species model. The best global fit was obtained for a monomer-dimer self-association model, using an assumed value of 75,000 for the monomer molecular weight, and allowing the Ka for each data set to float (Fig. 1 A). There was no obvious concentration-dependent trend in the determined Kas for the various data sets, which would have indicated possible heterogeneity or non-specific aggregation. Averaging all the individual raw Ka values resulted in a calculated molar Kd (monomer-dimer) of 64 μM. Using this value, CEC1-5 appears to consist of approximately 5-30% dimer in the concentration range at which the measurements were performed (Fig. 1 B).
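To make the quoted dimer percentages concrete, the dimer fraction implied by a simple monomer-dimer equilibrium can be computed directly from the fitted Kd. The short calculation below uses the reported 64 μM value over the 2-25 μM loading range; it is an illustrative check, not part of the original sedimentation analysis.

import numpy as np

KD_UM = 64.0  # reported monomer-dimer dissociation constant, in uM

def dimer_fraction(total_um: float, kd: float = KD_UM) -> float:
    """Fraction of protein chains present as dimers for a monomer-dimer equilibrium.

    Total concentration is expressed in monomer units, C = [M] + 2[D],
    with Kd = [M]^2 / [D]; solving the quadratic for [M] gives the fraction.
    """
    monomer = kd * (np.sqrt(1.0 + 8.0 * total_um / kd) - 1.0) / 4.0
    return 1.0 - monomer / total_um

for c in (2.0, 10.0, 25.0):
    print(f"{c:5.1f} uM total -> {100.0 * dimer_fraction(c):4.1f}% dimer")
# Over the 2-25 uM range this yields roughly 6-34% dimer, consistent with
# the approximately 5-30% estimate quoted above.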
The lack of evidence for any higher oligomeric species, which might result from homophilic binding between dimers, at the concentrations of protein used indicates that any potential binding between dimers could only occur with a significantly lower affinity than the monomer-dimer affinity (i.e., with a Kd ≫ 64 μM). Thus, the formation of adhesive bonds between cadherin dimers may involve multivalent low affinity interactions (see Discussion), and an analysis of the homophilic binding properties of CEC1-5 or domains of CEC1-5 requires the use of techniques that can assay this multivalent binding activity.
Expression and purification of cadherin-Fc fusion proteins
To examine the contribution of the different extracellular (EC) cadherin domains of C-cadherin, a series of C-cadherin mutants was designed (Fig. 2). First, we sequentially deleted the EC domains from the COOH terminus according to the described sequence of C-cadherin (Lee and Gumbiner, 1995) and the structures of the cadherin repeats observed by x-ray crystallography (Shapiro et al., 1995; Nagar et al., 1996; Tamura et al., 1998). After analyzing the first constructs, we decided to make the additional deletion constructs also shown in Fig. 2. Previous studies on the soluble C-cadherin ectodomain showed that dimerization was necessary for adhesive function (Brieher et al., 1996). During initial attempts to express C-cadherin with EC domain deletions, it was difficult to obtain active dimeric forms (not shown); therefore, chimeras having the IgG Fc domain (Fc) fused to the COOH terminus were constructed to force dimerization (the IgFc domain forms stable, parallel, disulfide-linked dimers). A similar approach has been used to produce functional soluble dimers of N-cadherin and VE-cadherin (Baumgartner et al., 2000; Lambert et al., 2000) and human E-cadherin (unpublished data). We also made a construct, CEC1-2FNFc, in which a linker was inserted between the COOH terminus of EC1-2 and the IgFc domain (Fig. 2). This linker consists of the two fibronectin-like domains found in the extracellular domain of chicken N-CAM, each of which is similar in size and shape to an EC domain and has not been found to have any adhesive activity (Cunningham et al., 1987; Ranheim et al., 1996). Additionally, constructs were made with deletions of either the first two NH2-terminal domains (CEC3-4-5Fc) or of domain 3 (CEC1-2-4Fc and CEC1-2-4-5Fc). We also tried to make a construct having only the EC1 domain fused to Fc (CEC1Fc), but it was poorly expressed and could not be recovered in reasonable quantities.

Figure 2 legend. The full-length C-cadherin molecule consists of the ectodomain (five EC domains), the transmembrane region (TM), and the cytoplasmic tail (CP). CEC1-5Fc consists of the ectodomain fused to the Fc part of human IgG (Fc). Domains were expressed as Fc chimeras to force dimerization, since dimerization of C-cadherin was shown to be crucial for adhesive function. CEC1-4Fc, CEC1-3Fc, and CEC1-2Fc consist of successively fewer cadherin repeats fused to Fc at the COOH terminus. CEC3-4-5Fc consists of the ectodomain with the NH2-terminal region deleted, fused to Fc at the COOH terminus. CEC1-2FNFc consists of the first two domains fused to the two fibronectin type III repeats of chicken N-CAM (FN III), again with Fc as the COOH-terminal region. CEC1-2-4Fc consists of domains 1-2 and 4 fused to Fc at the COOH terminus and, on the same scheme, CEC1-2-4-5Fc of domains 1-2 and 4-5.
Proteins were stably expressed in CHO cells and purified from conditioned media on a protein A column. These polypeptides are recognized by an anti-human Fc antibody, demonstrating that the Fc part of the IgG is present (Fig. 3 A), and by an anti-C-cadherin antibody (not shown). Allowing for modification by glycosylation or other posttranslational modifications (Lee and Gumbiner, 1995), the molecular weights are the expected sizes for the mature secreted proteins. We also confirmed that these proteins were dimeric by running them on a nonreducing gel (Fig. 3 B). Between 1 and 2 mg of each purified protein were obtained from two liters of conditioned media, as determined by Coomassie staining (Fig. 3 C). Minor bands of higher and lower molecular weight than the mature full-length protein probably correspond to precursor forms and breakdown products, respectively, since they are recognized by anti-human Fc and by anti-C-cadherin antibodies.
Cadherins are synthesized with a large proregion that is normally proteolytically cleaved to yield the mature cadherin, and functional activity depends on precise cleavage at the correct amino acid (Ozawa and Kemler, 1990). To make sure that CHO cells processed these proteins to the correct mature form, NH2-terminal sequencing of each purified protein was performed. The majority of the chimeric proteins (CEC1-5Fc, CEC1-4Fc, CEC1-3Fc, CEC1-2Fc, CEC1-2FNFc, CEC1-2-4Fc, and CEC1-2-4-5Fc) were processed correctly to yield the appropriate NH2-terminal residue of EC1. However, the CEC3-4-5Fc preparation contained a mixture of three proteins; one was cleaved at the proper site, and two were cleaved at different sites within the proregion. This construct is the only one lacking the NH2-terminal EC1 domain; presumably, the proper connection of the proregion to this domain is important for effective processing of the protein.
In all of the following experiments, we tested proteins produced by at least two different clones of secreting cells for each chimera and several protein preparations from each clone.
Analysis of the EC domain deletions by bead aggregation and cell adhesion assays
The low affinity of the homophilic binding interaction between cadherin dimers requires that assays for multivalent interactions are used to analyze the binding properties of deletion mutants. One such assay that has been frequently used for the analysis of cell adhesion molecules is a bead aggregation assay, which provides an in vitro mimic of cell aggregation assays for adhesion using purified proteins (Grumet and Edelman, 1988; Grumet et al., 1993; Ranheim et al., 1996; Retzler et al., 1996; Lambert et al., 2000). Indeed, bead aggregation can be used as a specific measure of calcium-dependent homophilic binding activity of the extracellular domain of C-cadherin (CEC1-5) (Brieher et al., 1996). Therefore, this assay was used to test the capacity of various deletion mutant proteins to mediate homophilic binding. The COOH-terminal IgFc domain allowed us to use protein A-coated beads in order to orient the chimeric proteins on the beads. The full-length cadherin ectodomain, CEC1-5Fc, induced aggregation of beads (Fig. 4 A), similar to CEC1-5, as described previously (Brieher et al., 1996). Thus, addition of Fc to the COOH-terminal EC domain did not interfere with the adhesive function of full-length C-cadherin. Aggregation of the coated beads was specific and dependent on cadherin activity, because CEC1-5Fc-coated beads failed to aggregate in the absence of calcium (Fig. 4 A, +EDTA) and aggregation was specifically inhibited by anti-C-cadherin mAb 6B6 (not shown).
We then tested the other chimeric proteins. CEC1-4Fc and CEC1-3Fc induced calcium-dependent aggregation of beads quite effectively (Fig. 4 A), and aggregation was specifically inhibited by anti-C-cadherin mAb 6B6 (not shown). Therefore, domains 4 and 5 do not seem to be essential for basic homophilic binding. They may enhance aggregation activity somewhat, because aggregation mediated by CEC1-3Fc was not quite as effective as that mediated by CEC1-5Fc. In contrast, CEC1-2Fc did not stimulate high rates of bead aggregation compared with CEC1-3Fc, CEC1-4Fc, and CEC1-5Fc, although it did aggregate beads above background levels (Fig. 4 A), suggesting that domains EC1 and EC2 are not sufficient for effective aggregation activity.
It was possible that the loss of bead aggregation activity by CEC1-2Fc was simply due to the lack of a spacer region necessary to provide sufficient distance from the bead surface, or due to conformational constraints on the normal dimerization of domains 1 and 2 forced by a proximal Fc dimer. Therefore, a spacer consisting of the two fibronectin-like domains of the chicken N-CAM (similar in size and folding to two EC domains) was inserted in frame between the COOH-terminal part of domain 2 and the Fc part of the IgG. Like CEC1-2Fc, CEC1-2FNFc failed to induce strong aggregation of beads (Fig. 4 B). Therefore, the EC1 and EC2 domains alone do not seem to be sufficient for effective homophilic binding activity.

Figure 4 B legend. Analysis of CEC1-2 with spacers inserted: CEC1-2FNFc compared with CEC1-5Fc. The number of aggregates of coated microspheres large enough to be detected by a Coulter counter is plotted as a function of time. Samples were incubated in the absence of calcium (EDTA) or in the presence of calcium (Ca). The experiment was performed with at least three different batches of protein and the mean ± SEM is shown.

Figure 5 legend. Adhesive activity of C-cadherin mutants assessed by a cell detachment assay. The adhesive strength is measured by the resistance to cell detachment under a laminar flow from surfaces coated with chimeric proteins. (A) Adhesion of CHO cells expressing C-cadherin (C-CHO cells) to CEC1-5Fc at different concentrations of CEC1-5Fc (100, 20, 10, and 5 μg/ml). The construct was attached to the tube surface via protein A, and the cells were allowed to bind to the substrate under static conditions. The flow was subsequently increased every 30 s, and the number of cells remaining within the field of view was counted. Assays were performed in the presence of calcium using C-CHO cells or control CHO cells. (B) Adhesion of C-CHO cells to surfaces coated with CEC1-5Fc, CEC1-4Fc, CEC1-3Fc, CEC1-2Fc (two different clones), and CEC1-2FNFc, all at 5 μg/ml. The experiments were performed in triplicate and the mean ± SEM is shown.
The in vitro bead aggregation assay analyzes the basic binding activity of each of the dimeric proteins. We also wished to determine the abilities of these proteins to mediate cell adhesion. We have previously described a flow assay that measures the strength of cell attachment under shear forces. This assay measures the capacity of CHO cells expressing full-length wild-type C-cadherin (C-CHO cells) to adhere to surfaces coated with different chimeric proteins. Similar to the CEC1-5 protein described previously (Brieher et al., 1996), the full-length Fc chimera (CEC1-5Fc) mediated strong adhesion of C-CHO cells (Fig. 5 A). Adhesion to CEC1-5Fc was specific because it required calcium (not shown) and because CHO cells not expressing C-cadherin did not adhere, even at the lowest shear stress (Fig. 5 A). Additionally, adhesion of C-CHO cells to CEC1-5Fc was inhibited by incubating the cells with Fab fragments of an anti-C-cadherin mAb, 6B6 (data not shown), confirming that the adhesive interaction between C-CHO cells and these substrates is C-cadherin specific.
The conditions of the flow assay were optimized to make the measured range of adhesion strengths sensitive to the adhesive activity of the chimeric protein coated on the substrate. The resistance of cells to detachment by increasing shear force was determined as a function of the concentration of CEC1-5Fc coated on the substrate (Fig. 5 A; see Materials and methods). At high concentrations of CEC1-5Fc (100, 20, and 10 μg/ml), cells remained strongly attached over the entire range of shear forces used. Only at 5 μg/ml did cells exhibit sensitivity to detachment at high shear force; therefore, a concentration of 5 μg/ml was used for all of the chimeric proteins to test their cell adhesion activities.
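Because detachment is driven by the wall shear stress generated by the flow, it can be helpful to relate flow rate to an approximate shear stress. The sketch below assumes fully developed laminar (Poiseuille) flow in a cylindrical tube; the paper does not report the chamber geometry, so the radius, viscosity, and flow rates used here are placeholder values for illustration only.

import math

def wall_shear_stress(flow_rate_ml_min: float, radius_cm: float,
                      viscosity_poise: float = 0.01) -> float:
    """Approximate wall shear stress (dyn/cm^2) for Poiseuille flow in a tube.

    tau_wall = 4 * mu * Q / (pi * R^3), with Q in cm^3/s, R in cm, and mu in
    poise (aqueous buffer at room temperature is roughly 0.01 poise).
    """
    q_cm3_per_s = flow_rate_ml_min / 60.0
    return 4.0 * viscosity_poise * q_cm3_per_s / (math.pi * radius_cm ** 3)

# Placeholder geometry and flow rates, not the values used in the experiments.
for q in (0.5, 1.0, 2.0, 4.0):  # ml/min
    print(f"{q:4.1f} ml/min -> {wall_shear_stress(q, radius_cm=0.05):5.2f} dyn/cm^2")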
CEC1-4Fc and CEC1-3Fc exhibited adhesive activity similar to that of CEC1-5Fc in the flow assay, with a similar resistance to cell detachment as a function of increasing shear force (Fig. 5 B). This suggests that domains 4 and 5 are not essential for strong adhesion, consistent with the bead aggregation assay. In contrast, CEC1-2Fc exhibited significantly weaker adhesive activity compared with the longer proteins, suggesting that EC1 and EC2 are not sufficient for full adhesive activity. Addition of the spacer domain in CEC1-2FNFc did not increase the adhesive activity of EC1-2. Thus, although domains 1 and 2 retain low levels of adhesive activity, they are not sufficient to mediate the high level of cell adhesion activity exhibited by the full-length protein.
The significantly greater bead aggregation and cell adhesion activities of CEC1-3Fc compared with CEC1-2Fc or CEC1-2FNFc could have several different explanations. EC3 alone may possess significant homophilic binding activity; three EC domains could be required for high binding activity; or EC1 or EC2 (or both) of one cadherin in the pair might need to bind to EC3, EC4, or EC5 of the other cadherin. Several experiments were designed to try to distinguish between these possibilities.
To determine whether EC3 is specifically required for effective homophilic binding, we made two constructs with EC3 deleted: CEC1-2-4-5Fc and CEC1-2-4Fc. Both were able to induce a high rate of bead aggregation (Fig. 6 A), which was inhibited by anti-C-cadherin Fab (not shown). The rates of aggregation were similar to the activity of CEC1-3Fc and significantly greater than aggregation due to CEC1-2Fc. Also, both CEC1-2-4-5Fc and CEC1-2-4Fc exhibited high cell adhesion activity in the laminar flow assay, similar to the activity of CEC1-5Fc (Fig. 6, B and C) and much better than CEC1-2Fc. Therefore, EC3 is not specifically required for effective homophilic binding or cell adhesion. Furthermore, the high binding and adhesion activity of CEC1-2-4Fc demonstrates that EC4 is interchangeable with EC3. Thus, there may not be any defined specificity to the binding interactions between EC domains, raising the possibility that multiple interactions occur in the homophilic bond.

Figure 6 legend. (A) Bead aggregation assay using a Coulter counter as described in the legend to Fig. 4. (B and C) Analysis of adhesion to chimeric proteins using the laminar flow assay as described in Fig. 5. Adhesion of C-CHO cells to CEC1-2-4Fc (B) and CEC1-2-4-5Fc (C) compared with CEC1-5Fc, all at 5 μg/ml. The experiments were performed in triplicate and the mean ± SEM is shown.
Although EC1 and EC2 are not sufficient for effective binding activity, we wanted to test whether they are required. Therefore, we analyzed whether a construct lacking domains 1 and 2, CEC3-4-5Fc, retains bead aggregation and cell adhesion activity. Most preparations of CEC3-4-5Fc (70%) failed to induce detectable bead aggregation (not shown). In approximately 30% of the preparations, there was some evidence of aggregation, but it was highly variable and irreproducible from day to day. Furthermore, CEC3-4-5Fc never exhibited detectable cell adhesion activity in the flow assay (Fig. 7), irrespective of the preparation or day of experiment. The lack of activity in the cell adhesion assay probably cannot be attributed to the variability in the NH2-terminal propeptide cleavage that we observed, because there was no detectable adhesion even at very high concentrations of the protein (100 μg/ml), which is 20 times more than needed for strong adhesion to CEC1-3Fc or for low but detectable adhesion to CEC1-2Fc. Thus, even if domains EC3, EC4, and EC5 possess some binding activity, domains 1 and 2 appear to be required for effective homophilic binding and cell adhesion. The requirement for EC1 and EC2 could be due either to the presence of a critical binding site in one or both of these two domains or to the requirement for the EC1 domain in the formation of normal cadherin cis-dimers (Shan et al., 2000). Although CEC3-4-5Fc dimerizes through the COOH-terminal Fc domain in the absence of EC1, such dimerization may be ineffective in creating the proper protein conformation and/or dimeric binding interface.
Since domains 1 and 2 are required but not sufficient for homophilic binding activity, it is possible that domains 1 and 2 need to bind to EC domains 3, 4, or 5 in the full C-cadherin ectodomain. To try to test this possibility, bead mixing experiments were performed. A flow cytometry assay with different color fluorescent beads (yellow and red) was used to determine whether CEC1-2Fc-coated beads and CEC3-4-5Fc-coated beads aggregate better with each other than they do by themselves (Fig. 8; Table I). A positive control for the assay is shown by an analysis of mixed aggregates formed by two sets of beads coated with full-length C-cadherin (CEC1-5Fc) in Fig. 8 A. Aggregates containing both fluorescent colors (yellow and red) appear along the diagonal of the fluorescence intensity graph. The formation of mixed aggregates was quite extensive at this time in the assay, since each point on the graph is a single fluorescent event that corresponds to a single aggregate, each of which can contain a large number of beads. For CEC1-5Fc, >90% of the detected events and approximately 790,000 beads were present in mixed aggregates (Table I). As expected, mixed aggregates between two sets of beads that both contained CEC1-2Fc were smaller and fewer (Fig. 8 B). Only 27% of the events contained mixed aggregates (i.e., 73% were either single beads or small unmixed aggregates), with only approximately 28,000 beads present in mixed aggregates (Table I). The negative control (i.e., background) is shown by analysis of CEC3-4-5Fc by itself, which formed even fewer and smaller mixed aggregates (Fig. 8 C), with <10% of events in mixed aggregates (i.e., >90% single or small unmixed) and <5,000 beads present in mixed aggregates (Table I). The experimental analysis of the mixing between CEC1-2Fc beads and CEC3-4-5Fc beads is shown in Fig. 8 D (one case) and Table I (both the case in Fig. 8 D and the reciprocal mixture). These samples formed mixed aggregates no better than CEC3-4-5Fc alone, and worse than CEC1-2Fc alone. Therefore, using this assay for mixed aggregation, it was not possible to detect a direct binding interaction between EC domains 1-2 and domains 3, 4, and 5.

Figure 7 legend. Lack of adhesion activity of a construct lacking EC domains 1 and 2 (CEC3-4-5Fc) by laminar flow assay. Attachment of C-CHO cells to high concentrations of CEC3-4-5Fc (100 μg/ml) compared with CEC1-5Fc. The experiment was performed in triplicate and the mean ± SEM is shown.

Figure 8 legend. Mixed bead aggregation assay to assess homophilic binding activity between different cadherin mutants. Flow cytometry was used to detect and quantify mixed aggregates formed between yellow fluorescent beads (Y) and red fluorescent beads (R) coated with different cadherin EC constructs. Mixed aggregates appear in the region to the right of and above the lines drawn on the graph. (Yellow-only singlets and small aggregates appear in the lower right region, but red-only singlets and small aggregates do not appear on the graph because they lie on the y axis.) (A) Analysis of aggregation between CEC1-5Fc on both sets of beads. (B) Analysis of aggregation between CEC1-2Fc on both sets of beads. (C) Analysis of aggregation between CEC3-4-5Fc on both sets of beads. (D) Analysis of aggregation between CEC1-2Fc-coated yellow beads and CEC3-4-5Fc-coated red beads.

Table I legend. Data were derived from the experiment shown in Fig. 8. The percentage of fluorescent events present in the mixed aggregate region (Fig. 8) and the total number of beads present in the mixed aggregates were calculated.
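The quantities summarized in Table I (the percentage of fluorescent events falling in the mixed-aggregate gate and the number of beads those events contain) amount to a simple gating calculation on the per-event fluorescence data. The sketch below uses invented intensities, thresholds, and bead counts to show the form of that calculation; it does not reproduce the original flow cytometry analysis.

import numpy as np

def mixed_aggregate_stats(yellow, red, beads_per_event,
                          y_threshold=100.0, r_threshold=100.0):
    """Gate events containing both colors and summarize them.

    An event is counted as a mixed aggregate when its yellow AND red
    intensities both exceed their thresholds (the region above and to the
    right of the gating lines). Returns (% mixed events, beads in mixed).
    """
    yellow = np.asarray(yellow, dtype=float)
    red = np.asarray(red, dtype=float)
    beads = np.asarray(beads_per_event, dtype=float)
    mixed = (yellow > y_threshold) & (red > r_threshold)
    return 100.0 * mixed.mean(), beads[mixed].sum()

# Invented example: five events with estimated bead content per event.
pct, n_beads = mixed_aggregate_stats(
    yellow=[500, 20, 300, 10, 250],
    red=[450, 15, 5, 400, 300],
    beads_per_event=[120, 1, 8, 6, 40],
)
print(f"{pct:.0f}% of events in mixed aggregates, containing {n_beads:.0f} beads")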
At face value, the above finding may seem to show that EC domains 1 and 2 do not interact with EC domains 3, 4, and 5 in the C-cadherin homophilic bond. However, there are alternative explanations for the absence of mixed aggregates between beads coated with these deletion mutant proteins. It is possible that the lack of dimerization at the NH2 terminus, which is normally provided by EC1, renders CEC3-4-5Fc aggregation incompetent towards all potential binding partners. Alternatively, effective and strong homophilic binding may require complete reciprocal binding interactions between binding partners (e.g., for the full-length protein, the EC1-2 domains of molecule A bind the EC3-4-5 domains of molecule B, plus the EC1-2 domains of molecule B bind the EC3-4-5 domains of molecule A). In contrast, the potential bond between CEC1-2Fc and CEC3-4-5Fc could only engage one half of the reciprocal binding interaction (although EC1-2 of molecule A could bind EC3-4-5 of molecule B, there would be no EC1-2 of molecule B or EC3-4-5 of molecule A available to interact). Although the contribution of binding reciprocity to the rate of bead aggregation is not known, reciprocal binding interactions between proteins in solution are known to result in a highly synergistic increase in affinity. Unfortunately, therefore, for homophilic binding proteins there is a theoretical limitation to interpreting mixing experiments between proteins with different binding site deletions.
Discussion
A thorough structure-function analysis of the homophilic binding properties of the soluble C-cadherin ectodomain reveals that multiple cadherin EC repeats contribute to a low affinity interaction between cadherin cis-dimers. Although the EC1 domain appears to be required for the formation of an effective adhesive bond, perhaps due to its role in cis-dimerization, it cannot account for the entire homophilic binding interaction as has been previously believed. A minimum of three of the EC domains is required for effective homophilic binding and adhesion, since domains EC1-2 are not sufficient. Although domains EC4 and EC5 do not seem to be absolutely required, they can contribute to the binding interaction. CEC1-4Fc and CEC1-5Fc do exhibit a somewhat better binding activity than CEC1-3Fc. Moreover, EC3 is not specifically required for binding, and EC4 is able to substitute for EC3, since CEC1-2-4-5Fc and CEC1-2-4Fc have high binding and adhesion activity. Together, these findings suggest that the homophilic bond formed between cadherins involves extensive overlap between the extracellular domains and may arise from multiple interactions or different combinations of interactions between EC domains (Fig. 9 A).
The homophilic binding interaction between individual cadherin cis-dimers appears to be of very low affinity, supporting the notion that multivalent interactions via a large number of cadherin dimers are required for the formation of the adhesive bond. Sedimentation analysis of purified CEC1-5 reveals only a monomer-to-dimer interaction. Of course, sedimentation analysis by itself cannot distinguish between cis- or trans-dimer interactions. However, previous work on CEC1-5 showed that this same dimer is required to mediate strong bead aggregation and adhesion; the monomeric species has little or no activity (Brieher et al., 1996). Moreover, we find that forcing parallel cis-dimerization through the COOH-terminal Fc domain results in molecules with adhesion activity similar to CEC1-5, and Fc-mediated dimerization was required for the adhesive activity of deletion constructs. The measured affinity of the monomer-dimer equilibrium of 64 μM should not be taken too literally, since anchorage of normal cadherins in the plasma membrane is likely to increase the effective affinity of cis-dimerization. Nonetheless, the lack of any detectable higher oligomeric species indicates that any interaction between cis-dimers will have a Kd significantly higher than 64 μM.
The concept that cadherin-mediated cell adhesion involves multivalent low affinity interactions is supported by other observations. Deletion of the cytoplasmic domain results in a cadherin with very poor adhesive activity even when it is expressed at high levels at the cell surface. Forced clustering of the cadherin into patches through an artificial oligomerization domain, independent of any interactions with the actin cytoskeleton, resulted in significant strengthening of adhesion (Yap et al., 1997). Also, the measurement of the trans-interaction between dimers of VE-cadherin by atomic force microscopy suggested a low affinity reaction (Kd = 10^-3 to 10^-5 M) (Baumgartner et al., 2000). Similarly, induction of integrin clustering, resulting from enhanced membrane mobility, is thought to underlie integrin activation in lymphocytes (Dransfield et al., 1992; Stewart and Hogg, 1996; Yauch et al., 1997; Bazzoni and Hemler, 1998). There are, however, some reports of potentially higher affinity binding interactions between cadherins, including electron microscopic detection of interactions between pentameric forms of E-cadherin (Tomschy et al., 1996) and the detection of interactions between cadherins present in neighboring cells by immunoprecipitation (Chitaev and Troyanovsky, 1998; Shan et al., 2000). The reason for this difference is not clear, but the actual molecular nature of the interacting cadherin pentamers or the coimmunoprecipitated cadherins is not yet well established. Our direct analysis of the interactions between functionally active purified C-cadherin ectodomains, along with the demonstrated contribution of clustering to adhesion (Yap et al., 1997), leads us to favor a model for the cadherin adhesive bond involving multivalent low affinity homophilic interactions. Our finding that the homophilic bond forms through the interactions of multiple EC domains is in agreement with a previous biophysical study of the adhesive forces that develop between opposing C-cadherin (CEC1-5)-covered lipid bilayers (Sivasankar et al., 1999, 2001). The surface force apparatus that was used allowed the measurement of both the magnitude of the forces that develop and the distance dependence of the forces between the full-length cadherin extracellular segments. The strongest interaction was detected when the antiparallel proteins were fully interdigitated, corresponding to extensive overlap involving multiple EC domains. Interestingly, two other weaker adhesive interactions were detected when the interdigitated proteins were separated by greater distances corresponding to additional EC domain lengths. The authors proposed a model for the cadherin adhesive bond in which successive rupture of distinct interactions along the length of the cadherin molecule occurs to impede the abrupt failure of cadherin-mediated contacts under the forces arising between cells.
Although our structure-function analysis demonstrates that the homophilic bond forms by the overlap/interaction of multiple EC domains, it has not been possible to discern exactly which specific EC domain interacts with which other EC domain in the bond, or even whether there are specific one-to-one domain interactions. Because EC1-2 is required for adhesive activity and exhibits only low levels of adhesive activity alone, it is possible that EC1 and EC2 preferentially bind to EC3, EC4, or EC5. We were not able to detect such preferential binding in bead mixing experiments, but this analysis may be limited by having only one of the two complete binding partners in the assay. The fact that EC4 and EC5 are not essential for binding and adhesion might be taken to suggest that they do not participate in binding. However, both the somewhat higher aggregation activity when EC4 (or EC4 and EC5) is present and the ability of EC4 to substitute for EC3 indicate that EC4 (and perhaps EC5) can participate in the formation of the bond. Indeed, the interchangeability of EC3 and EC4 suggests that the interactions between EC domains may not be entirely specific, and that the cadherins may be able to interact at multiple different sites or degrees of overlap (Fig. 9 A). Interactions at multiple sites would be consistent with the biophysical measurements by Sivasankar et al. (1999, 2001), showing that adhesive forces developed at multiple extents of overlap between cadherins on two surfaces.
Our findings challenge the prevailing model for the structure of the cadherin homophilic bond, which entails a direct interaction between EC1 domains at the distal tips of the cadherin molecules (Fig. 9 B) (Takeichi, 1995; Shapiro and Colman, 1998; Koch et al., 1999; Shan et al., 1999). In fact, direct binding between EC1 domains has never been demonstrated, nor has it ever been shown that EC1 alone is sufficient to form the homophilic binding site. Moreover, mutations in other EC domains of E-cadherin have been found to be associated with cancers and to affect adhesion (Ozawa et al., 1990a; Berx et al., 1998a,b), consistent with our findings of a requirement for additional EC domains in binding. Furthermore, the measurement of the adhesive force-distance profile with the surface force apparatus did not reveal a detectable interaction when the distal EC1 domains were brought into proximity (Sivasankar et al., 1999, 2001). All of these findings, together with our structure-function analysis of the C-cadherin ectodomain, argue strongly against the prevailing model of adhesive binding exclusively via the EC1 domain. The x-ray crystal structure of the EC1 domain of N-cadherin led to a very attractive model of the homophilic bond, called the zipper model (Fig. 9 B), which relies on direct antiparallel adhesive interactions between EC1 domains (Shapiro et al., 1995). However, this putative adhesive interaction could have resulted from simple crystal packing interactions rather than true adhesive interactions, and other potential adhesive interactions could not have been observed, since the other EC domains were not present in the crystallized protein. Nonetheless, one important concept from the zipper model may still be important for the structure of the homophilic bond: the idea that cis-dimerization could endow the cadherin on one cell with more than one adhesive binding site. Indeed, there is now considerable evidence that cis-dimers form the basic adhesive unit. With multiple EC domains, the binding interactions of each cis-dimer could potentially occur in multiple orientations, leading to the formation of a two-dimensional lattice instead of a linear zipper. Such a two-dimensional lattice might be a more reasonable structure for a zone of adhesive contact or cell junction, and would be consistent with the concept of multivalent low affinity interactions between cadherin dimers.
Until now, the strongest evidence that the homophilic binding site resides in EC1 came from the finding that adhesion specificity is determined by EC1. Cells expressing either E-cadherin or P-cadherin sort out from each other in aggregation assays, and the analysis of E-cadherin/P-cadherin chimeras showed that sorting out was determined entirely by the EC1 domain (Nose et al., 1988); similar findings have been obtained more recently for E-cadherin and N-cadherin (Shan et al., 2000). However, alternate models for the role of EC1 in cadherin specificity are possible in light of more recent findings on cadherin structure and function. First, it should be recognized that many different pairs of cadherins fail to exhibit adhesion specificity, including some that have fairly different amino acid sequences (Volk et al., 1987; Steinberg and McNutt, 1999; Shimoyama et al., 2000; unpublished data), and the level of expression of a single cadherin may be a more important determinant of cell sorting specificity (Steinberg and Takeichi, 1994). In these cases, there is no need to postulate a significant specificity-determining site in EC1. When specificity between cadherins is observed, the role of EC1 in determining specificity could be due to its role in the formation of cis-dimers. Indeed, in a recent study of E-cadherin/R-cadherin chimeras, EC1 was found to determine the specificity of cis-dimer formation (Shan et al., 2000). One theoretical model for how cis-dimerization specificity could lead to adhesive binding specificity is shown in Fig. 10 A. The model also depends on another documented structural feature of cadherins, the linking of successive EC domains together via calcium binding sites to form a rigid, rodlike protein. Because of this property, we postulate that the entire ectodomain behaves as a single structural unit, and any alterations in the orientations of the EC1 domain dimer interface will be propagated throughout the rest of the EC domains. Thus, small differences in the relative orientations of the EC1 dimerization interfaces for different cadherins would alter the orientations of other putative adhesive binding sites in the other EC domains (shown as large changes for emphasis), resulting in less compatible binding and/or in a reduced ability to form an extended two-dimensional lattice. Other models to explain how EC1 could determine adhesion specificity when other EC domains contribute to homophilic binding are also possible. For example, in the model shown in Fig. 10 B, an initial cadherin-specific interaction between EC1 domains could precede the formation of the final homophilic bonds between the other EC domains. For this to make sense physically, there would have to be some sort of repulsive barrier between cells to prevent interactions between EC2-5 from occurring directly, and an initial weak binding between EC1 domains would lower the energy barrier leading to the final binding state. For either of these models, there would be no cadherin-type specificity in the homophilic binding interactions between EC domains 2-5, which is consistent with the low adhesion specificity observed for many pairs of different cadherins. Irrespective of whether either of these two theoretical models is correct, this theoretical exercise demonstrates that determination of cadherin adhesion specificity by EC1 can be compatible with the participation of EC domains 2-5 in the homophilic binding interactions.
We favor a new model for the structure of the cadherin homophilic bond entailing the overlap of cadherin ectodomains and interactions between multiple EC domains. We propose that multivalent interactions between large numbers of individual low affinity and low specificity bonds lead to the formation of a two-dimensional lattice at sites of cell-cell contact. Future studies will be required to determine the exact structural basis of the molecular interactions that contribute to the homophilic bond and to understand how catenins and cytoplasmic signals regulate the formation and strength of the adhesive bond between cells.
Plasmid construction
Because dimerization is crucial for adhesive function (Brieher et al., 1996) but is not always obtained when soluble cadherins are expressed, we generated chimeric constructs having an IgG Fc domain (Fc) fused to the COOH terminus of each cadherin ectodomain fragment in order to force dimerization through the stable parallel interaction of the Fc domains. The IgG1 Fc domain was excised from the pIg plus vector (Novagen) by digestion with HindIII and BclI and subcloned into the expression vector pEE14. The vector pEE14 encodes the glutamine synthase minigene as a selectable marker for CHOK1 cells expressing the minigene in the absence of glutamine and in the presence of the glutamine synthase inhibitor, methionine sulfoximine (Davis et al., 1990).
DNA sequences containing the C-cadherin signal sequence (amino acids 1-155; sequence is numbered according to EMBL/GenBank/DDBJ accession no. UO4707; Levine et al., 1994), followed by either EC domains 1-5 (amino acids 1-697), EC domains 1-4 (amino acids 1-593), EC domains 1-3 (amino acids 1-487), or EC domains 1 and 2 (amino acids 1-376), were isolated by PCR (Roche Expand high-fidelity PCR System: Taq DNA and two DNA polymerases) using the cDNA encoding the full-length Xenopus C-cadherin (Levine et al., 1994) as a template.

Figure 10 legend. Two hypothetical models for the role of the EC1 domain in determining cadherin binding specificity. (A) Cis-dimerization specificity could influence adhesive binding specificity. Calcium binding causes the ectodomain to behave as a single structural unit. Therefore, alterations in the orientation of the EC1 domain dimerization interface are propagated to the binding sites throughout the rest of the EC domains. (B) An initial cadherin-specific interaction between EC1 domains could precede the formation of the final homophilic bonds between the other EC domains. A repulsive barrier between cells would be postulated to prevent interactions between EC2-5 from occurring directly, and an initial weak binding between EC1 domains would lower the energy barrier leading to the final binding state.
For cloning purposes, a HindIII cloning site was introduced at the 5′ end of the PCR fragment and an XbaI cloning site at the 3′ end. These different PCR products were then cloned by insertion into Fc-pEE14 digested with HindIII/XbaI.
For the cDNA construct CEC1-2FNFc, the two fibronectin-like domains of chicken N-CAM (Cunningham et al., 1987; Ranheim et al., 1996) were inserted in frame between the COOH-terminal part of domain 2 and the Fc part of the IgG. For this construct, we also used an overlapping PCR method using a chicken N-CAM cDNA (provided by Urs Rutishauser, Memorial Sloan-Kettering Cancer Center, New York, NY) and the cDNA encoding the full-length Xenopus C-cadherin (Levine et al., 1994) as templates, with the following primers: (SF1-5A) 5′-ccaagcttgggcaccatggggggcaccaggcttaga and (SF1-2FN) 3′-ctccactctgtcaatagaaggtcctccactaaaaattggagcattgtcgtttgc, (SF2FNf) 5′-aatgctccaatttttagtggaggaccttctattgacagagtggagccctac and (SFFNr) 3′-tagtctagagacagtaggctgagcagatgtccg. All the cDNA constructs were verified by sequencing in their entirety.
Cell lines
We used the mammalian CHO cell line expression-secretion system for the production of all our recombinant proteins (Davis et al., 1990) to ensure proper folding and posttranslational processing (misfolded proteins in the secretory pathway are usually degraded). CHOK1 cells were grown in complete Glasgow glutamine-free MEM with 10% dialyzed FCS. cDNA constructs encoding the different combinations of the five cadherin domains of Xenopus C-cadherin in the pEE14 expression vector, containing the glutamine synthase minigene, were transfected into CHOK1 cells by lipofection in serum-free Glasgow MEM using lipofectin (GIBCO BRL). Cells containing the transfected plasmid were selected by culturing the cells in the presence of 25 µM methionine sulfoximine (Sigma-Aldrich). CHOK1 cells can normally grow in the absence of glutamine, but growth in the absence of glutamine and in the presence of methionine sulfoximine requires expression of the glutamine synthase minigene.
Expression and secretion of the desired protein by CHO cells was determined by Western blotting of conditioned media from methionine sulfoximine-resistant cell lines. The C-CHO cells used for adhesion assays are CHO cells expressing the wild-type C-cadherin (Brieher et al., 1996).
Protein purification
CEC1-5 was purified from CHO cells in conditioned medium as described previously (Brieher et al., 1996). The Fc-containing chimeras were purified differently. The supernatant was harvested when cells became confluent to the point where they were no longer adherent (~10-14 d after seeding cells initially at 1 × 10^4 cells/ml). Conditioned media was filtered through a cellulose acetate low-protein-binding 0.45-µm pore membrane. The protein containing the IgFc domain was purified by applying the filtrate to a protein A column (1 ml bed volume; Pharmacia Fine Chemicals) at a drop rate of 1 ml/min at 4°C. The column was then washed with 100-150 ml of 20 mM Hepes, 50 mM NaCl, 1 mM CaCl2. The protein was eluted with 150 mM glycine, pH 2.0. Fractions of 1 ml were collected and buffered with 1 mM CaCl2, 1 M Tris, pH 8.0. Fractions containing the protein were combined, desalted on a PD-10 column (Pharmacia Fine Chemicals), and concentrated in a Microcon 10 (Amicon Corp.).
For sequence analysis, purified recombinant proteins were separated by SDS-PAGE and transferred to a polyvinylidene difluoride (Bio-Rad Laboratories) membrane. The membrane was first stained with Coomassie blue and then destained with 50% methanol. The desired protein bands were cut out and subjected to NH2-terminal Edman degradation by the microchemistry core facility at the Memorial Sloan-Kettering Cancer Center, New York, NY.
Bead aggregation assays
The beads used in this assay were protein A-coated polystyrene beads (carboxylate modified), fluorescent yellow, 0.9 µm (Bangs Laboratories Inc.). Before use, they were washed once in 100 mM sodium acetate, pH 3.9 (the pH at which any impurities coupled to protein A will be eluted) and twice in 10 mM Hepes, 50 mM NaCl, pH 7.2. Then, the Fc-cadherin protein was bound to the beads at a ratio of 40 µg of protein per 40 µl of bead suspension (2 × 10^10 beads/ml) in 10 mM Hepes, 50 mM NaCl, pH 7.2, 1 mM CaCl2 for 90 min at 4°C on an Eppendorf shaker (1,400 rpm). The coated beads were pelleted, washed twice, and resuspended in 400 µl of 10 mM Hepes, 50 mM NaCl, pH 7.2. The suspension was briefly sonicated to obtain single beads, as determined by microscopy, before the addition of either 1 mM CaCl2 to initiate aggregation or 1 mM EDTA as appropriate. The samples were incubated at room temperature, and at various time points 10 µl aliquots were removed. The number of particles large enough to be detected by a Beckman Coulter counter (parameters: aperture 100 µm, threshold 5-15 µm, count above 5 µm) was determined.
The amount of protein coupled to the beads was determined by taking an aliquot, pelleting it, and resuspending it in 2× SDS sample buffer containing 1 mM EDTA. The beads were subsequently pelleted, and the supernatant was immunoblotted with an anti-human IgG HRP conjugate (1 mg/ml; Promega) after SDS-PAGE.
To study the aggregation between different sets of beads coated with different cadherin EC constructs, a flow cytometry assay was developed with the help of the Memorial Sloan-Kettering Cancer Center flow cytometry facility. For this purpose, we also used another type of protein A-coated beads with a red fluorochrome to easily distinguish the two sets of beads. Two different Fc-cadherin proteins were coupled to the two different fluorescent beads at a ratio of 120 µg of protein per 30 µl of bead suspension (2 × 10^10 beads/ml) overnight at 4°C on an Eppendorf shaker (1,400 rpm). The coated beads were pelleted, washed twice, and resuspended in 150 µl of Fc at 1 mg/ml in PBS for 15 min (in order to block all the empty protein A sites) after a brief sonication. The final volume was brought up to 300 µl with 10 mM Hepes, 50 mM NaCl, pH 7.2. The suspension was sonicated to obtain single beads as determined by microscopy. Red beads and yellow beads were then mixed (equal volumes of each) as appropriate for a final volume of 300 µl, and 1 mM CaCl2 was added to initiate aggregation. The samples were incubated at 4°C on an Eppendorf shaker (1,400 rpm), and at various time points 10 µl aliquots were removed. The appearance of mixed aggregates as a function of time was determined.
Laminar flow cell adhesion assay
The laminar flow adhesion assay performed was a modification of the one described previously (Brieher et al., 1996). In brief, borosilicate glass capillaries (1.1 mm internal diameter) (Sutter Instrument) were precoated with protein A (Amersham Pharmacia Biotech) at 100 µg/ml in PBS++ for 5 h at 4°C, and nonspecific binding sites were then blocked with 0.5% enzymatic casein hydrolysate (ICN Biochemicals, Cleveland, OH) in PBS++ for 2 h at 4°C. These capillaries were then coated overnight at 4°C with the Fc-cadherin fusion proteins at various concentrations (100, 20, 10, and 5 µg/ml). To control the amount of Fc-cadherin fusion protein bound to protein A on the surface, the total Fc-containing protein concentration was maintained at 100 µg/ml by adding the appropriate amount of 1 mg/ml Fc (human IgG, Fc fragment, plasma; Calbiochem) in PBS. Nonspecific binding sites were then blocked with 5% milk (nonfat dry milk; Carnation, Nestle) in HBSS containing 1 mM CaCl2. CHO cells or the stable cell line C-CHO were grown under standard conditions, then harvested by a method that leaves cell surface cadherins intact (incubation with crystalline trypsin [0.01% wt/vol] in PBS++), washed, and resuspended in HBSS/1 mM CaCl2 or HBSS/1 mM EDTA. At that point, cells were infused into the coated capillary from a reservoir using a pump. After 1 min, the flow was stopped, and the cells were allowed to bind to the surface under static conditions for 10 min. Capillaries were observed with a phase microscope, and the number of cells attached to the substrate in a 20× field was counted. Flow was initiated, and the number of cells remaining in the field was counted after 30 s. Subsequently, the flow was doubled every 30 s, and the number of cells remaining in the field was counted at the end of each time point. Data were normalized to the number of cells present in the field before starting the flow.
Analytical ultracentrifugation
Protein concentration and extinction coefficient determinations were performed using a Beckman XLI analytical ultracentrifuge and a double sector capillary synthetic boundary sample cell after the fringe count procedures described by Babul and Stellwagen (1969). Before running in the ultracentrifuge, the sample was equilibrated with the buffer solution using a Microsep microconcentrator. The absorbance of the sample was then measured in a Perkin-Elmer Lambda 5 spectrophotometer. 150 µl of stock sample was then loaded into one sector of the sample cell, and 400 µl of buffer solution were loaded into the other sector. The run was performed at 8,000 rpm, and scans were taken when fringes could be resolved across the boundary region between the protein solution and buffer solution. The number of fringes produced across the boundary was then measured and converted to concentration using an average refractive increment of 3.31 fringes/mg/ml.
Sedimentation equilibrium experiments were carried out at 20°C in a Beckman XLI analytical ultracentrifuge using both interference and absorbance optics following the procedures described by Laue and Stafford (1999). 110 µl aliquots of sample solution, with loading concentrations ranging from 2-25 µM, were loaded into two six-sector CFE sample cells, allowing six concentrations of sample to be run simultaneously. Runs were performed at 10,000 and 14,000 rpm, and each speed was maintained until there was no significant difference between scans taken 2 h apart, to ensure that equilibrium was achieved.
The sedimentation equilibrium data were evaluated using the program NONLIN, which incorporates a nonlinear least-squares curve-fitting algorithm described by Johnson et al. (1981). This program allows the analysis of both single and multiple data files. Data can be fit to either a single ideal species model or to models containing up to four associating species, depending on which parameters are permitted to vary during the fitting routine. To fit all the data sets globally, the data collected with the absorbance optical system were converted from absorbance to fringe displacement using the extinction coefficient determined from the fringe count. To convert the raw Ka in fringes, determined from fitting to a self-association model, to a molar Ka, the following equation was used:
K_conc = (K_fringe / n) × ((∂n/∂c) · l · M1 / λ)^(n−1),
where K_conc is the association constant in molar concentration terms, K_fringe is the signal association constant, ∂n/∂c is the specific refractive increment, l is the pathlength of the centerpiece in cm, λ is the light-source wavelength in cm, M1 is the monomer molecular weight, and n is the stoichiometry of the larger associating species.
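As a concrete illustration of this unit conversion, the sketch below implements the relation above for a monomer-dimer fit. The function name, the default example values (a typical protein refractive increment, a 1.2-cm centerpiece, and a 675-nm light source), and the units convention are our assumptions, not part of the original protocol.

```python
def fringe_to_molar_ka(k_fringe, dn_dc, path_cm, wavelength_cm, m1, n=2):
    """Convert a signal (fringe) association constant K_fringe to molar units.

    Applies K_conc = (K_fringe / n) * (dn_dc * l * M1 / wavelength)**(n - 1),
    with dn_dc the specific refractive increment, path_cm the centerpiece
    pathlength (cm), wavelength_cm the light-source wavelength (cm), m1 the
    monomer molecular weight, and n the stoichiometry of the larger species.
    Units must be mutually consistent with the fringe signal.
    """
    fringes_per_molar = dn_dc * path_cm * m1 / wavelength_cm
    return (k_fringe / n) * fringes_per_molar ** (n - 1)

# Illustrative call for a 75-kD monomer-dimer equilibrium (values assumed):
print(fringe_to_molar_ka(k_fringe=1e-2, dn_dc=0.186, path_cm=1.2,
                         wavelength_cm=675e-7, m1=75000.0, n=2))
```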
Assuming 20% glycosylation, an estimated value of 0.706 ml/g was used for the partial specific volume, and a monomer molecular weight of 75,000 was assumed for fitting to a monomer-dimer model. The buffer solution density was estimated using the program SEDNTERP, which incorporates calculations detailed by Laue et al. (1991).
We are grateful to members of the Memorial Sloan-Kettering Cancer Center microchemistry facility, especially Hediye Erdjument-Bromage and Lynne Lacomis, and to the Memorial Sloan-Kettering Cancer Center flow cytometry facility, especially Thomas Delohery and Diane Domingo, for their important contributions to the experiments. We also thank Urs Rutishauser and members of the Gumbiner laboratory for valuable discussions during this project, and Jonathon Goldberg, Urs Rutishauser, Filippo Giancotti, Cara Gottardi, and Carien Niessen for critically reading the manuscript. This work was supported by a National Institutes of Health grant (GM52717) awarded to Barry M. Gumbiner, by the Dewitt Wallace Fund for Memorial Sloan-Kettering Cancer Center, and by a Cancer Support grant NCI-P30-CA-08784. Sophie Chappuis-Flament was a recipient of a fellowship from the Association pour la Recherche sur le Cancer. | 2014-10-01T00:00:00.000Z | 2001-07-09T00:00:00.000 | {
"year": 2001,
"sha1": "bb7df8f5dcf3db35de6e01c486ce11df9c1471dd",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/154/1/231/1299463/jcb1541231.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8eaccc6a579ab29ccd3a6fb07d2aa354614ad578",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
21720965 | pes2o/s2orc | v3-fos-license | Mining the Relationship between Spatial Mobility Patterns and POIs
Passengers move between urban places for diverse interests and drive the metropolitan regions, as the aggregation of urban places, to group into network communities. This paper aims to examine the relationship between the spatial patterns (represented by the network communities) of mobility flows and places of interest (POIs). Further, it intends to identify the categories of POIs that play the most significant role in shaping the spatial patterns of mobility flows. To achieve these purposes, we partition the study area into disjoint regions and construct the network with each partitioned region as a node and connections between them as links weighted by the mobility flows. The community detection algorithm is implemented on the network to discover spatial mobility patterns, and multiclass classification based on the logistic regression method is adopted to classify spatial communities featured by POIs. Taking the taxi systems of Shanghai and Beijing as examples, we detect spatial communities based on the movement strengths among regions. Then we investigate their correlations with POIs. It is found that the communities' modularity correlates linearly with POIs; in particular, governments, hotels, and traffic facilities are of the most significance for generating the mobility patterns. This study can provide valuable insight into understanding the spatial mobility patterns from the perspective of POIs.
Introduction
People move in a city, generating population mobility flows between places. Acquiring the volume of mobility flows in different places in a city is particularly important, as it benefits a convergence of applications, such as selecting the location of a retail store to allow increasing customers to shop around and casting advertisements to capture as many consumers as possible [1,2]. Technological advances allow for precise measurements of mobility flows on large datasets [3-13], including taxi trajectories [14-16], mobile phone trajectories [17,18], and transport smart cards [19].
By solving the privacy-preserving problem of mobility traces [20-24], retrospective studies of mobility flows which focus on modeling the mobility flows from one place to another, such as the universal model called the radiation model [25], are proposed and applied to predict human movements [26]. Though the model is parameter-free and requires only the population distribution as input, it disregards the spatial cluster features of mobility flows, which means that most people travel in a specific range of regions instead of the whole city and some of the citizens share a similar regional scope. To analyze the spatial variability of urban mobility flows, we construct the spatial network with the metropolitan regions as nodes and the connections between them as links weighted by the aggregated strengths of interregion movements [1,17,27]. The community in the spatial network is applied to further analysis of the spatial patterns of mobility flows, as it offers a visual representation of the spatial cluster features of mobility flows, where a spatial community is a set of nodes which have more connections among themselves than with the rest of the nodes [1,28]. The community in the spatial network is hereafter named the spatial community in this paper, representing the spatial patterns of urban mobility flows. Community detection allows one to identify the inner-community links, which play a very important role in understanding the travel pattern and interaction among urban regions [29,30]. For example, based on the mobility flows around the city area of Shanghai, Liu et al. [16] built the spatial network and adopted community detection to model spatial patterns around the city area.
Combined with the techniques of network science, applications based on mobility flows are widely developed in the field of urban computing [31,32]. For example, the centrality metrics of the network are used to estimate the importance of road segments [14], and network connectivity is applied to reveal new latent links among urban regions [33]. The studies mentioned above provide insights into using mobility flows in networks to reveal mobility patterns or urban structures. However, they have not addressed the underlying mechanisms that motivate urban mobility flows from land-use and socioeconomic perspectives.
Actually, urban mobility flows are rooted in people's traveling activities (e.g., work or entertainment) [19,34], which are reflected by specific POIs. Retrospective studies of spatial communities improve our ability to analyze mobility flows from the perspective of the network. However, they do not provide insight into the factors that motivate population mobility dynamics. As each urban movement contains an origin and a destination that is determined by the travel motivation [35], the regions acting as the origin and destination of a trip are the cause of mobility flows. In [19,36,37], POIs are collected to explain activity patterns and model the dynamic decision-making process that shapes individuals' movements. Besides, POIs are combined with mobility flows to discover functional regions [2], where the segmented regions of the city area carry socioeconomic functions as people live in the regions and POIs fall within them. In [38], POIs are applied to find the characteristics of resident trips based on a clustering method. It finds that the residents' travel pattern on a working day can be expressed as "spatial relative dispersion - spatial aggregation - spatial relative dispersion." The effectiveness of these proposed models indicates that mobility flows are related to the POI distribution among urban regions.
However, no existing research concentrates on the relationship between spatial communities and POIs, which should be taken into consideration in future urban planning of POIs for the prediction of community changes. In this study, we aim to study the relationship between spatial communities and POIs, and we intend to find the group of specific categories of POIs that explains the identified communities. Taking the large-scale and real-world datasets of Shanghai and Beijing in China as examples, we construct the networks from the urban regions and interregion movements, in which the communities are detected. We collect POIs for each node to characterize people's mobility motives and study the relationship between spatial communities and POI features by adopting stepwise logistic regression.
Researching the inherent relationship between spatial communities of mobility flows and POIs provides new insight for understanding the underlying mechanism of urban movements. In accordance with the research aim of our work, the rest of this paper is organized as follows: Section 2 presents the methods used in this paper, including the relationship estimation model and the significant POIs identification method. Experiments are implemented in Section 3. We discuss the experimental findings in Section 4. Finally, we briefly conclude the paper in Section 5.
Relationship Estimation Model.
A mobility record used in this paper is a 2-tuple ⟨(x_o, y_o), (x_d, y_d)⟩. Both (x_o, y_o) and (x_d, y_d) are geospatial positions, respectively representing the origin and destination of a trip. In detail, the OD pair represents a trip starting at (x_o, y_o) and ending at location (x_d, y_d).
As shown in Figure 1, to construct the spatial networks, the study area is segmented into disjoint grids, and each grid is set as a node. Trips between two nodes indicate the existence of an edge or a linkage. After extracting mobility flows from the travel trace dataset, the volume of mobility originating from node i and ending in node j is set as the weight of the edge from i to j. Thus a weighted and directed network is constructed.
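For concreteness, a minimal sketch of this construction step is given below. The grid-binning helper, the variable names, and the use of networkx are our illustrative choices rather than the authors' implementation; the 1-km default cell size is taken from the experimental setup in Section 3.

```python
from collections import defaultdict

import networkx as nx


def build_flow_network(od_pairs, cell_km=1.0):
    """Build a weighted, directed mobility network from OD pairs.

    od_pairs: iterable of ((x_o, y_o), (x_d, y_d)) coordinate tuples,
    assumed already projected to kilometres. Each grid cell becomes a
    node; the weight of edge i -> j counts trips starting in cell i
    and ending in cell j.
    """
    def cell(x, y):
        return (int(x // cell_km), int(y // cell_km))

    weights = defaultdict(int)
    for (xo, yo), (xd, yd) in od_pairs:
        weights[(cell(xo, yo), cell(xd, yd))] += 1

    g = nx.DiGraph()
    for (i, j), w in weights.items():
        g.add_edge(i, j, weight=w)
    return g
```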
As shown in Figure 1, some nodes indicate much stronger connections among themselves than with other nodes. By dividing the network into densely connected subnetworks, the urban area is partitioned into intensely interactive subregions. In network science, community detection methods can partition an entire network into tightly connected subnetworks, called communities, and reveal the network's clustering characteristics.
A community, also called a cluster or a module, is typically regarded as a group of vertices which probably share common properties or play similar roles within the network, and the metric of modularity is commonly used to estimate community detection results [39-41]. When applied to weighted and directed networks, the modularity is defined as [42]
Q = Σ_m (w_mm / w − w_m^in · w_m^out / w²).
Here w_mm is the total weight of links starting and ending in module m, w_m^in and w_m^out are the total in- and out-weights of links in module m, and w is the total weight of all links in the network.
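A short sketch of this computation is given below, assuming the per-module form of Q above and a networkx.DiGraph such as the one built in the previous sketch; the function name and interface are ours.

```python
from collections import defaultdict


def directed_modularity(g, communities):
    """Modularity Q of a weighted, directed graph under a node partition.

    g: networkx.DiGraph with 'weight' edge attributes.
    communities: dict mapping each node to its community label.
    """
    w = g.size(weight="weight")  # total weight of all links
    w_mm = defaultdict(float)    # weight of links inside each module
    w_in = defaultdict(float)    # total in-weight per module
    w_out = defaultdict(float)   # total out-weight per module
    for u, v, d in g.edges(data=True):
        wt = d.get("weight", 1.0)
        w_out[communities[u]] += wt
        w_in[communities[v]] += wt
        if communities[u] == communities[v]:
            w_mm[communities[u]] += wt
    modules = set(w_in) | set(w_out)
    return sum(w_mm[m] / w - (w_in[m] * w_out[m]) / w ** 2 for m in modules)
```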
To optimize Q, the vast majority of searching strategies take one of the following steps to evolve the starting network partition: merging two communities, splitting a community into two, or moving nodes between distinct communities. We employ Combo [43], a high-quality modularity-based community detection algorithm that adopts all three strategies, as the community detection technique.
In the spatial networks, partitioned regions are set as nodes, and the number of OD pairs is set as the weight on the directed edge. To explain how the spatial communities are generated by the mobility flows in the network, the ultimate proof of the hidden reason is to match the mobility patterns to the POIs distributed among regions. We match POIs of the studied area to nodes in accordance with their geolocation, using the process of map matching. We get the POI features of each node i in the network, denoted as x_i = (x_i^(1), x_i^(2), ..., x_i^(K)), where K is the number of POI categories (equal to 17 in our case studies), and x_i^(k) is the number of POIs of category k in node i. After applying community detection algorithms to the networks, the nodes are partitioned into disjoint sets (spatial communities). Nodes in the same community share the same value of the classification label y. Then the community label y is set as the dependent variable. Suppose that the value set of y is {1, 2, ..., C}; then the multinomial logistic regression is defined as
P(y = c | x) = exp(w_c · x + b_c) / (1 + Σ_{k=1}^{C−1} exp(w_k · x + b_k)), c = 1, ..., C − 1,
P(y = C | x) = 1 / (1 + Σ_{k=1}^{C−1} exp(w_k · x + b_k)),
where w_c = (w_c^(1), w_c^(2), ..., w_c^(K)) and b_c are parameters of the model. Given the training set T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, the MLE (maximum likelihood estimation) is applied to calculate the parameters. We adopt the stepwise strategy to select POI categories for the logistic regression, and the fitness metric of the R-square guarantees that no redundant predictors are selected. It means that we choose categories of POIs that make sense for distinguishing the spatial communities.
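As a hedged illustration of this classification step, the sketch below fits a multinomial logistic regression of community labels on 17-dimensional POI count vectors using scikit-learn. The synthetic data and the omission of the stepwise selection loop (which the paper performs but scikit-learn does not provide out of the box) are our simplifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_nodes, n_categories = 300, 17

# POI count features per node (17 categories) and a community label per node.
X = rng.poisson(lam=5.0, size=(n_nodes, n_categories))
y = rng.integers(0, 4, size=n_nodes)  # e.g., four detected communities

# Multinomial logistic regression of community labels on POI features.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
print("coefficient matrix shape:", model.coef_.shape)  # (n_classes, n_categories)
```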
Significant POIs Identification.
For problems of multiclassification based on logistic regression, one class is always set as the reference class, as shown by (2). To identify the categories of POIs that affect the spatial communities, in this paper each community is set as the reference class in turn, and the significance frequency of each significant POI is set as one element of the feature vector f_c of a community c. As shown in (4) and (5), each element f_c^(k) of the feature vector for a community c represents the frequency with which the k-th POI category is significant in community c. Then the feature value F_c of a community is calculated by its norm multiplied by its entropy. This guarantees that we select communities with more significant POI categories and more diversity in the significance frequencies.
f_c^(k) = freq(sig(POI_k)), (4)
F_c = |f_c| · H(f_c). (5)
Then we identify significant POIs by (5), where the top p percent of communities with the largest F_c are selected as the candidate set. From these candidates we identify the most significant categories as elements of the ultimate significant POI set.
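The scoring and selection rule in (4)-(5) can be sketched as follows; the choice of the L1 norm and base-2 entropy, as well as the function names, are our assumptions, made consistent with the worked example below.

```python
import numpy as np


def community_scores(freqs):
    """F_c = |f_c| * H(f_c) per community.

    freqs: 2-D array, rows = communities, columns = significance
    frequencies of POI categories.
    """
    freqs = np.asarray(freqs, dtype=float)
    norms = freqs.sum(axis=1)  # L1 norm |f_c|
    p = freqs / np.where(norms == 0.0, 1.0, norms)[:, None]
    logp = np.log2(np.clip(p, 1e-12, None))  # zero entries contribute 0
    h = -(p * logp).sum(axis=1)              # Shannon entropy H(f_c)
    return norms * h


def top_communities(freqs, percent=50):
    """Indices of the top `percent` of communities by F_c, largest first."""
    scores = community_scores(freqs)
    k = max(1, int(np.ceil(len(scores) * percent / 100.0)))
    return np.argsort(scores)[::-1][:k]
```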
For example, suppose that we got four communities, and the significance frequency of each POI for a community is shown in Table 1.
|f_c| selects communities that contain a larger total significance frequency over the significant POI categories, and the entropy tends to select spatial community candidates with more frequency variation. Specific to c_2 and c_3, H(f_2) = 3 and H(f_3) = 1; though |f_3| > |f_2|, the entropy of f_3 is smaller than that of f_2, meaning a smaller difference between the significance frequencies of the POIs. When p is set to 50, we select two communities to identify the significant POIs that feature the spatial communities: c_1 and c_2 are selected, and the POIs of traffic facility and enterprise are identified as the ultimate significant ones that make sense for forming this community snapshot.
To construct the spatial networks, we first divide the spatial area into grids of 1 km by 1 km using the open street map (OSM) (http://www.openstreetmap.org/copyright); then each grid is set as a node in the network. We extract mobility flows between nodes by matching origin or destination points to grids using the OSM. The mobility flow volume originating from grid i and ending in grid j is set as the weight on the directed edge. Disregarding grids visited by no OD pairs, 2926 nodes remain for the spatial network of Shanghai and 3995 nodes for Beijing. The datasets of mobility flows are shown in Table 2.
We use the Baidu APIs (Liu et al., 2015) to collect the POIs in the two cities. Seventeen categories of POIs are collected, as shown in Figure 4. As the categories of POIs will be set as the independent variables in the relation estimation model in this study, we label them as x^(k), as shown in Table 3. Each category of POIs is set as a specific dimension of the independent variable. The number of each POI category in the two cities is shown in Table 3 and Figure 4; in total, we collect 1,446,865 POIs for the network of Shanghai and 1,405,954 POIs for the network of Beijing. (Table 3, recoverable excerpt of per-city counts: Culture (9) 3,971/3,723; Scenic spot (10) 56,996/48,463; Auto service (11) 50,898/55,479; Living service (12) 158,121/149,576; Food (13) 86,301/82,021; Shopping (14) 208,...)
Relationship between Spatial Communities and POIs.
The spatial communities are affected by the travel distance.
Thus we add a distance threshold (DT) to the spatial community detection process. As shown in Figure 5, for the networks of Shanghai, the edge number and the mobility flow reach 90% as the distance threshold gradually increases to 20 km and 14 km, respectively; similarly, for the networks of Beijing, the critical distances are 25 km and 9 km.
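A minimal sketch of this pruning step is shown below, assuming the networkx graph built earlier; the abstract distance callback is our assumption, since the paper does not specify the distance metric used between grid cells.

```python
import networkx as nx


def apply_distance_threshold(g, dist_km, threshold_km):
    """Return a copy of the network keeping only edges whose
    origin-destination distance is within the threshold (km).

    dist_km: callable mapping a node pair (u, v) to their distance in km.
    """
    pruned = nx.DiGraph()
    pruned.add_nodes_from(g.nodes(data=True))
    for u, v, d in g.edges(data=True):
        if dist_km(u, v) <= threshold_km:
            pruned.add_edge(u, v, **d)
    return pruned
```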
The modularity of the community detection results for the two cities is shown in Figure 6, together with the stepwise logistic regression results (R-square). It can be found that the modularity changes as the distance threshold (DT) increases.
A larger DT means that more edges and more mobility flows are added to the networks, so that the modularity gets smaller for the networks of Shanghai and Beijing. It also shows that the modularity tends to converge as the distance threshold becomes larger, and the modularity variation trend is quite similar for both networks.
As shown in Figures 7(a) and 7(b), even nodes in the suburban area are connected to spatially close communities, and increasing the distance threshold by 1 km (from 16 km to 17 km) brings little variation to the spatial community detection results.
Note that the mobility flow density of the Beijing network is 466, while it is just 130 for the network of Shanghai. The modularity obtained for the spatial networks of Beijing is generally larger than that of Shanghai, as shown in Figure 6.
The results of regression fitness (R-square) for both networks also tend to be convergent. The median value of the adjusted R² obtained is 0.3 for the Shanghai networks and 0.48 for the Beijing networks, respectively. This verifies that the spatial communities are closely correlated with the POI features. The quantitative correlation between the modularity and the R-square is presented in Figure 8. It shows that the adjusted R² is positively and linearly correlated with the modularity. This indicates that the spatial communities are correlated with POIs and can be explained by POIs.
Identified POIs.
The community detection results obtained without distance threshold limitation for Shanghai and Beijing are depicted in Figure 9.
Table 4: Identified significant POIs (columns: POI; Beijing using [38]; Beijing using our model; Shanghai using our model).
As shown in Figure 9, we get seven spatial communities for the spatial network of Shanghai and thirteen spatial communities for Beijing. It can be observed that both cities are polycentric.
Then each community is set as the reference class in turn to conduct the stepwise regression method, and we use the R-square as the metric for estimating the regression results. The significance of the variables is adopted to identify the POI categories that are closely correlated to the spatial communities. Note that some POIs are identified as shared categories for both cities. When p is set to 50, which means that we select the half of the communities with the largest value of F_c, the significance frequency of each POI category in a community is as shown in Figures 10 and 11. The identified POIs for both cities are shown in Table 4.
To verify the effectiveness of the proposed method, the POIs identified using the method proposed in [38] are listed in Table 4. It can be found that the reference method also identified the POI categories of living service, government, and education in Beijing, which certifies the effectiveness of the proposed significant POI identification method. Compared with the findings in [38], which partitions the day into three time intervals (morning, evening, and night), the proposed identification method finds that traffic facilities play an important role in shaping the community pattern in urban transport networks for both Beijing and Shanghai. These findings fit the actual situation in daily life, as traffic facilities satisfy daily commuting needs. This further verifies the effectiveness of the proposed model.
The significant POIs for generating spatial communities in the network of Shanghai comprise shopping, enterprise, traffic facility, government, finance, and hotel, while those for the network of Beijing comprise living service, traffic facility, food, government, and hotel. It is found that the POIs of traffic facility, government, and hotel are identified as the common significant POIs distinguishing the communities in both networks.
Discussion
Understanding the spatial patterns and finding the driving factors of the urban mobility flows help planners to evaluate the urban construction plan.To study the drivers of communities of mobility flows, we propose to estimate the relationship between spatial communities and POIs.
Using the taxi systems of Shanghai and Beijing as case studies, the experimental results show that the communities in spatial networks generated by mobility flows correlate linearly with the POIs. To further recognize the specific factors that drive the spatial communities of mobility flows, stepwise logistic regression is used, and it is found that the POIs of governments, hotels, and traffic facilities are common features that play an important role in distinguishing communities for both cities.
From the socioeconomic perspective, the locations of governments in a city attract various types of facilities and improve the economic development of the surrounding area, which is reflected by the spatial communities of mobility flows. Similarly, hotels are always located in areas with numerous facilities; a small number of hotels can be a good representative of the regional features that attract mobility flows [44]. Traffic facilities play a role in forming the community pattern of mobility flows [45-48], which may be because these facilities satisfy the essential needs of daily traveling and life. Note that the mobility flows used in this paper are extracted from taxicabs only. Thus, another reason for the significance of these categories of POIs may be that citizens are more likely to choose taxicabs due to the flexibility of departure time. Possibly, taxicabs are also popularly preferred as the transfer tool for the public transport system, such as at train stations, subway stations, or bus stations. After all, most commuters are more likely to choose buses or the subway, and travelers less often take taxis for a long trip, especially in these two metropolises in China.
The computational complexity of the relationship estimation and POI identification model is mainly reflected in the community detection process, which has an upper bound on the execution time of O(N² log C), where N is the number of nodes and C the number of communities in the network.
This study has some limitations. Mobility flows are only extracted from the taxi trajectories, and other spatial community patterns may be found with various data sources. However, the same analysis methods could be used. In this study, we focus on the spatial communities generated by the taxi systems; future studies could consider the similarities and differences of the spatial communities in other public transport systems. Another limitation is that we just adopt the number of each category of POIs as the influencing factor, disregarding the scale of each POI, which should be considered in future works.
Conclusion
This paper proposes a model for estimating the relationship between the spatial communities of mobility flows and urban POIs, in order to identify the categories of POIs that drive mobility flows into network communities.
Taking the mobility flows in Beijing and Shanghai as case studies, we find that the spatial communities can be explained by the POIs. Specifically, it is found that the POIs of traffic facilities, government, and hotel are of great significance for dominating the spatial communities in both cities. It implies that experts could monitor the spatial distribution of urban mobility flows by observing the distribution of POIs, and urban planners could influence the spatial communities of mobility flows by changing the locations of these categories of POIs or adding new POIs of these categories.
In the future, we will further study the formation mechanism of the spatial communities of mobility flows. Meanwhile, we are going to employ other mobility data sources, such as cell-tower traces, and check-ins in location-based services.
Figure 1: Illustration of the network and communities (this is the prototype proposed by Liu et al., 2015). To construct a network based on mobility flows, the study area is divided into small regions (a), with each small region corresponding to a node in the network. A directed edge or linkage exists between two nodes if there are mobility flows from one node to the other. The weight of an edge equals the volume of mobility flows, represented in (b, c). Graphic (d) provides an illustration of the communities detected from a network, which is divided into four parts (depicted by four circles) in which the subnetworks have relatively dense connections. The community detection result corresponds to closely connected subregions (e).
Figure 2: Mobility extraction from taxi trajectories. The occupation state changing from unoccupied to occupied or from occupied to unoccupied is adopted to extract the origination and destination of an urban movement.
Figure 3: The extracted OD points for Shanghai and Beijing.
Figure 4: Categories of POIs for Shanghai and Beijing.
Figure 10: The significance frequency of each POI category of 13 communities for the network of Beijing is shown.
Figure 11: The significance frequency of each category in the seven spatial communities for the network of Shanghai is depicted.
Table 1: Illustration of the significant POIs identification.
(x, y) is a pair of spatial coordinates representing latitude and longitude. s = 1 means that the taxi is occupied by passengers; otherwise s = 0. The flag s bound to each trajectory position is essential for judging the taxi occupation state, which is used to extract the origin and destination (OD) points of a trip. All other GPS points between a pair of OD points share the same occupation state s = 1.
Table 2: Studied area and OD number for the networks of Shanghai and Beijing.
Table 3: Seventeen categories of POIs. | 2018-05-21T22:38:45.151Z | 2018-05-10T00:00:00.000 | {
"year": 2018,
"sha1": "b46399a9bfaf3fbd740fc7f5d177ebff7963842e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/wcmc/2018/4392524.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b46399a9bfaf3fbd740fc7f5d177ebff7963842e",
"s2fieldsofstudy": [
"Computer Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
116875999 | pes2o/s2orc | v3-fos-license | Describing Events: Changes in Eye Movements and Language Production Due to Visual and Conceptual Properties of Scenes
How can a visual environment shape our utterances? A variety of visual and conceptual factors appear to affect sentence production, such as the visual cueing of patients or agents, their position relative to one another, and their animacy. These factors have previously been studied in isolation, leaving the question about their interplay open. The present study brings them together to examine systematic variations in eye movements, speech initiation and voice selection in descriptions of visual scenes. A sample of 44 native speakers of German were asked to describe depicted event scenes presented on a computer screen, while both their utterances and eye movements were recorded. Participants were instructed to produce one-sentence descriptions. The pictures depicted scenes with animate agents and either animate or inanimate patients who were situated to the right or to the left of agents. Half of the patients were preceded by a visual cue – a small circle appearing for 60 ms on a blank screen in the place of patients. The results show that scenes with left- rather than right-positioned patients lead to longer speech onset times, a higher probability of passive sentences and looks toward the patient. In addition, scenes with animate patients received more looks and elicited more passive utterances than scenes with inanimate patients. Visual cueing did not produce significant changes in speech, even though there were more looks to cued vs. non-cued referents, demonstrating that cueing only impacted initial scene scanning patterns but not speech. Our findings demonstrate that when examined together rather than separately, visual and conceptual factors of event scenes influence different aspects of behavior. In comparison to cueing that only affected eye movements, patient animacy also acted on the syntactic realization of utterances, whereas patient position in addition altered their onset. In terms of time course, visual influences are rather short-lived, while conceptual factors have long-lasting effects.
INTRODUCTION
When people produce an utterance, they have a number of different linguistic options at their disposal. For instance, they could describe an event with a man kissing a woman by means of a simple transitive clause such as "The man is kissing the woman," with a cleft-construction "It is the man who is kissing the woman" or with a passive sentence "The woman is being kissed by the man," just to name some of the options feasible in English. Crucially, a number of factors appear to affect the way speakers choose a particular syntactic structure. One of the most well-documented factors influencing the choice of syntactic constructions is the animacy status of a referent (e.g., Bock et al., 1992; McDonald et al., 1993; van Nice and Dietrich, 2003; Tanaka et al., 2011). In the example above, both referents (the woman and the man) are animate. However, numerous studies have shown that in the case of an animate and an inanimate referent, animates are more likely to be realized as the subject of an utterance (e.g., McDonald et al., 1993; van Nice and Dietrich, 2003). For instance, in a sentence recall paradigm, McDonald et al. (1993) asked English-speaking participants to reproduce sentences involving an animate and an inanimate entity that they had heard (e.g., The music soothed the child). The authors found that participants were more likely to erroneously remember animate referents as subjects compared to inanimates, even if this resulted in the production of passive constructions (e.g., The child was soothed by the music). Similarly, Tanaka et al. (2011) showed that speakers of Japanese - like English speakers - were more likely to erroneously recall animate referents as sentence subjects, confirming an increase in patient-first structures (i.e., passives) when patients were animate. They also found that Japanese speakers were more likely to assign animate referents earlier positions in the sentence than inanimate referents, suggesting that animacy affected both syntactic structure (i.e., the rate of passivizations) and word order (see Tanaka et al., 2011). Beyond sentence recall paradigms, animacy has also been shown to affect the choice of syntactic structure when participants had to describe visual events (e.g., van Nice and Dietrich, 2003; van de Velde et al., 2014). For instance, van Nice and Dietrich (2003) found that German-speaking participants produced more passive constructions when the agent of a transitive action was inanimate rather than animate. That is, a passive construction was more likely to be produced when an inanimate entity exerted an action (e.g., a wheelchair pushing a pig) than when the action was performed by an animate referent (e.g., a bear). Thus, the number of passives was higher for the former event ("the pig is pushed by the wheelchair") compared to the latter ("the bear pushes the pig"; see van Nice and Dietrich, 2003). A similar effect was obtained when the animacy status of the agent was held constant but animacy of the patient varied (e.g., a suitcase vs. a pig being pushed by a bear). German speakers produced more passive constructions when they had to describe pictures in which the patient of an action was animate ("The pig is pushed by the bear") compared to inanimate patients ("The bear pushes the suitcase"; van Nice and Dietrich, 2003). The increase of passivizations for animate patients was also confirmed in speakers of Dutch (van de Velde et al., 2014), supporting the importance of animacy for speakers' structural choices in sentence production.
In addition to the conceptual factor of referent animacy, there is evidence that other factors, too, can exert similar effects on sentence formulation. For instance, drawing the attention of a speaker to a referent by means of a visual cue can likewise affect participants' structural choices (e.g., Tomlin, 1995, 1997; Gleitman et al., 2007; Myachykov et al., 2011, 2012). In his seminal "fish film," for instance, Tomlin (1995, 1997) presented English-speaking participants with video clips depicting two fish approaching each other from the left and the right side of the screen. Each scene ended with one of the fish swallowing the other. Visual attention was manipulated by an explicit arrow either pointing to the agent or to the patient fish on a given trial. Participants were more likely to describe the scene with a passive construction (The blue fish is being eaten by the red fish) when the patient fish had been the center of visual attention than when the agent fish had been cued. These results suggest that attention orienting can affect structural choices - similar to the conceptual factor animacy. That is, a visually salient referent is more likely to be realized as the more prominent subject of the utterance even if this requires the production of a more marked passive construction. While Tomlin's task was criticized for a number of reasons (e.g., Gleitman et al., 2007), other studies replicated Tomlin's original findings in more carefully controlled set-ups (Gleitman et al., 2007; Myachykov et al., 2011, 2012). For instance, Gleitman et al. (2007) presented participants with pictures of simple transitive events depicting two characters (agent and patient; e.g., a man kicking a boy). Before watching the events, participants' attention was manipulated by means of a subliminal visual cue that either drew the speakers' attention to the agent or to the patient of the event. When the cue directed participants' attention to the patient location, speakers were more likely to produce passive voice sentences compared to cued agents, thus confirming Tomlin's original findings.
While these studies suggest that visual attention may exert an effect on participants' structural choices that is comparable to the one demonstrated for animacy, so far both factors have been mainly studied in isolation (for one exception, albeit with a different focus, see van de Velde et al., 2014). That is, studies investigating effects of animacy on syntactic choice tended to ignore the visual saliency of a referent (e.g., McDonald et al., 1993; van Nice and Dietrich, 2003; Tanaka et al., 2011). Conversely, the majority of studies demonstrating effects of visual attention on sentence formulation exclusively included referents matched for animacy (e.g., Gleitman et al., 2007; Myachykov et al., 2011, 2012). As a consequence, whether or not the two factors really affect sentence formulation in similar ways is still unknown. To fill this gap, the present study sought to simultaneously examine effects of referent animacy and visual saliency (i.e., attentional cueing) on speakers' sentence production in an eye-tracking study. Do both factors have a similar effect on sentence formulation or is one more important than the other? It is possible that conceptual properties of a referent such as animacy are more relevant and exert stronger effects on sentence formulation than visual cues. While evidence from sentence production is missing so far, some demonstration in favor of this proposal comes from studies investigating visual scene perception. That is, participants' looking behavior in a free-viewing scene description task was affected more by conceptual aspects than by visually salient objects when meaning maps representing the spatial distribution of semantic features and saliency maps representing the distribution of image features were compared directly (e.g., Henderson et al., 2018). In contrast, a study by Rissman et al. (2018) focusing on participants' written descriptions of transitive events seems to indicate a reversed effect. In particular, these authors demonstrated that conceptual properties such as animacy of an agent can be overridden when the agent is visually backgrounded (i.e., by only presenting the agent's torso or hand; see Rissman et al., 2018). Asked to describe visual scenes of transitive events in which an animate agent performed an action on an inanimate entity, participants used more passive constructions when the animate agent was perceptually minimized than when not (Rissman et al., 2018). This was true despite the fact that participants judged both full and partial agents to have the same degree of animacy, suggesting that visual saliency may have the potential to override an effect as important as the animacy of the agent. Crucially, however, since participants in the study by Rissman et al. (2018) were asked to type their responses, it remains unclear which role both factors (animacy and visual saliency) play during spoken language production - a question addressed in the present study. Unlike previous studies, we not only focused on participants' structural choices (i.e., the rate of active vs. passive sentences) but also included analyses of speech onset times, as well as participants' looking behavior during the course of utterance planning and production (by means of eye tracking). This way, the present study offers a first comprehensive approach to how the two different factors (animacy and visual saliency) affect sentence production.
In addition to examining the effects of referent animacy and visual saliency, we also focused on another factor, which has not yet been investigated in sentence production despite its attested relevance for language comprehension. This factor concerns the relative positioning of referents in transitive events, i.e., whether a patient of an action is depicted to the right or to the left of an agent in a visual scene. The positioning of referents has been shown to persistently influence a number of behavioral responses, such as drawings (Maass et al., 2014), aesthetic judgments (McLaughlin and Murphy, 1994; Maass et al., 2007), and spatial memory recalls (Maass et al., 2014). For instance, when participants listened to simple transitive sentences like "The circle hits the square" and then subsequently had to draw the event, they located agents to the left of the patient rather than to the right (Chatterjee et al., 1999). A similar left-to-right preference was observed for speakers of Italian (e.g., Maass and Russo, 2003), as well as German speakers (Dobel et al., 2007). Despite these visual preferences for referents in transitive events, the effect of visual positioning has so far evaded the focus of language production studies. Thus, most of the studies examining sentence production during scene descriptions have counterbalanced this factor rather than systematically exploring its effect (e.g., Myachykov et al., 2018a). However, some authors observed that speakers of English produced more active sentences when describing pictures in which the agent was located on the left of the patient than when the agent was located on the right, indicating a preferred left-to-right mapping of the depicted referents (e.g., Bock, 1986; also see Hartsuiker and Kolk, 1998). Conversely, more passives were produced when the patient was presented to the left of the agent. Taken together, these findings indicate the possibility that the visual arrangement of a transitive event might likewise affect sentence formulation, similar to factors such as animacy and visual saliency of a referent.
To test this assumption, we explicitly added the positioning of visual referents (i.e., agent on the left of a patient vs. agent on the right of a patient) as an experimental factor.
Overview of the Present Study
The present study addressed the question of the relative importance of visual and conceptual factors for language production in the context of a scene description task. The considered factors thus included the conceptual factor animacy (animate vs. inanimate patient) and two visual factors: cueing (cue vs. no cue on patients) and positioning of patients relative to agents (left vs. right). The assessed behavioral aspects comprised language production (the type of utterances produced and their onset times) and visual behavior (eye movement patterns monitored over the course of an utterance). This combination of behavioral measures should provide complementary information about the influence of the manipulated factors on sentence planning. The methodological approach of our study thus goes beyond the analysis of utterance types, which has so far been the focus of the existing studies on language production using picture-description paradigms. The predictions made with regard to each of the manipulated factors are described below.
If animate entities are indeed more likely to be realized as subjects and take sentence-initial positions, as claimed by previous research (e.g., MacDonald et al., 1994; van Nice and Dietrich, 2003), we should observe a higher number of passive voice descriptions of scenes with animate rather than inanimate patients. Moreover, the presence of an animate patient in addition to an animate agent may result in a competition for the subject position between the two and lead to later utterance onsets than when only the agent is animate. In terms of visual behavior, if animate patients are perceived as conceptually more relevant than inanimate patients, this should be reflected in earlier and longer looks to them compared to their inanimate counterparts.
Drawing visual attention to patients via cueing should first of all affect the visual behavior, so that patients should be fixated before agents. If attention orienting also affects structural choices in sentence production (as shown in, e.g., Tomlin, 1995; Myachykov et al., 2012), then we should observe more passive descriptions of scenes following cueing of the patient than scenes where no cueing occurred. Deviation from the preferred active voice structure might require more processing time. We should, hence, also observe longer speech onset times in the patient cueing condition compared to the no cueing condition.
The positioning of elements in scenes has not yet been investigated in speech-production tasks. Generalizing the left-agent preference reported for language comprehension tasks to language production (e.g., Maass and Russo, 2003), we would expect patients positioned to the left of agents to elicit longer speech onset times, more passive voice utterances, as well as earlier and longer looks to them compared to patients positioned to the right of agents.
Since so far no study compared these conceptual and visual factors directly within one design, we cannot derive any predictions about possible interactions or the weight of factors relative to one another. On the one hand, it is possible that the factors may impact participants' behavior in a cumulative manner, simply adding up -in which case we should observe main effects but no interactions. On the other hand, it is also possible that the effect of one factor may depend on the effect of the other resulting in an interaction.
Additionally, since we not only study structural choices but also inspect participants' speech onset times as well as visual gaze patterns, it is possible that these measures are influenced to a similar degree by the tested experimental factors. Alternatively, the different aspects of verbal and visual behavior may be affected differently by the factors under study.
Participants
Forty-four students at the University of Cologne (36 female and 8 male; mean age 23.43 years, SD = 3.01) were offered a monetary compensation or a course credit for their participation in the experiment. All of them were native speakers of German who did not report any attention or language-related medical condition and had normal or corrected to normal vision.
Experimental Stimuli
A set of 56 black-and-white drawings depicting event scenes between two entities (e.g., a fisherman filming a clown) were used as experimental stimuli. Each event scene included an animate agent (e.g., "a fisherman") on the right- or left-hand side of the drawing together with either an animate (e.g., "a clown") or an inanimate (e.g., "a chair") patient on the opposite side (see Figure 1). Each animate agent appeared performing the same action twice, once in a scene with an animate patient and once with an inanimate one. Both agents and patients corresponded to grammatically masculine mono- and disyllabic nouns in German in order to control for the potential influence of morphological or prosodic factors that might obscure the effects of our experimental manipulation. The 14 monosyllabic and the 14 disyllabic animate and inanimate patient nouns chosen for the experiment did not include productive derivations or compounds and did not differ in lemma frequency, M animate = 158,594.50, M inanimate = 73,844.07, t(13) = 1.62, SE = 52,406.51, p = 0.130. Drawings of experimental stimuli were made in such a way that agents and patients were comparable in size, visual complexity (i.e., number of details), and the distance within which they were situated from each other across items. The portrayed transitive interactions between agents and patients involved no direct contact between them and could be recognized as dynamic actions. The verbs that corresponded to the depicted events were comparable in terms of their likelihood to occur in active and passive voice frames. In addition, two pictures of a red circle subtending an area of approximately 1° of visual angle and centered in the right or the left half of the screen were prepared to realize the cueing of patients.
Event Scenes Pre-test
An offline pre-test of the experimental stimuli was conducted in order to make sure that participants had similar visual preferences for the depicted scenes with left- and right-positioned agents irrespective of the particular event type. A sample of 36 native speakers of German (33 female, 3 male, mean age 24.2 years, SD = 1.8) participated in the pre-test. The pre-test consisted of a questionnaire with nine items corresponding to the following transitive events: angeln "to fish", filmen "to film", gießen "to water", messen "to measure", schieben "to push", schlagen "to hit", treten "to kick", wiegen "to weigh", and ziehen "to pull". Each item contained two mirror images of the same scene and three response options. Participants were asked to mark with a cross the picture they preferred (i.e., the one that - in their opinion - looked more conventional, natural or better) or the option "I have no preference". No time restriction was applied for completing the questionnaire, but participants were instructed to respond as quickly and spontaneously as possible. Two versions of the questionnaire alternated the order in which mirror images for each item were presented. The results showed a significant association between the depicted events and whether or not participants had a preference for left- or right-positioned agents, χ2(16) = 26.38, p = 0.048. This association was driven by the scene depicting the event ziehen "to pull": significantly fewer participants than expected preferred the left-agent depiction for ziehen, z = -2.1, p < 0.01, and significantly more participants preferred the right-agent depiction for this verb, z = 3.1, p < 0.001. When the item depicting ziehen was excluded, participants' preferences for left- or right-positioned agents were independent of the event type, χ2(14) = 10.84, p = 0.699. The verb ziehen was therefore excluded from the experimental materials, as was the verb treten "to kick", which would require a preposition in the inanimate patient condition.
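The reported analysis corresponds to a chi-square test of independence on a 9 (event) × 3 (response option) contingency table, with the z-values corresponding to standardized residuals. A minimal sketch in R with invented counts (each row summing to the 36 respondents), not the study's data:

# 9 events x 3 response options; counts are invented for illustration,
# with 'ziehen' skewed toward right-agent preferences as in the reported data.
prefs <- matrix(c(15, 12, 9,
                  14, 13, 9,
                  16, 11, 9,
                  13, 14, 9,
                  15, 12, 9,
                  14, 13, 9,
                  15, 12, 9,
                  14, 13, 9,
                   4, 26, 6),
                nrow = 9, byrow = TRUE,
                dimnames = list(event = c("angeln", "filmen", "giessen",
                                          "messen", "schieben", "schlagen",
                                          "treten", "wiegen", "ziehen"),
                                choice = c("left_agent", "right_agent", "none")))
chisq.test(prefs)                   # overall association, df = (9-1)*(3-1) = 16
round(chisq.test(prefs)$stdres, 1)  # per-cell z-values; the 'ziehen' cells stand out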
Fillers
A set of 56 drawings of animals and inanimate objects of masculine and feminine grammatical gender that were situated next to or on top of each other was used as fillers. They served to ensure that participants produced sentences with different syntactic structures (i.e., not involving the description of a transitive event) and did not develop preferences towards a specific sentence type due to repetition from trial to trial.
Design
The experimental design included three factors: patient animacy (animate vs. inanimate, within subjects and between items), attention cueing (cue on the patient vs. no cue, within subjects and within items), and patient position (to the right vs. to the left of the agent, within subjects and within items). Four randomized lists presented each item in one of the eight experimental conditions: (1) left-positioned animate patients preceded by a cue; (2) left-positioned animate patients preceded by no cue; (3) left-positioned inanimate patients preceded by a cue; (4) left-positioned inanimate patients preceded by no cue; (5) right-positioned animate patients preceded by a cue; (6) right-positioned animate patients preceded by no cue; (7) right-positioned inanimate patients preceded by a cue; (8) right-positioned inanimate patients preceded by no cue. Each participant was presented with one list and saw items in all 8 conditions, each item appearing in one condition only.
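One way to construct such counterbalanced lists is sketched below in R; the rotation scheme and column names are illustrative, not the authors' actual materials:

# Cueing and position vary within items (four versions per item), while
# patient animacy is a fixed between-item property; a Latin-square-style
# rotation distributes the four versions over four lists.
versions <- expand.grid(cueing   = c("cue", "no_cue"),
                        position = c("patient_left", "patient_right"))
items <- data.frame(item    = 1:56,
                    animacy = rep(c("animate", "inanimate"), times = 28))
lists <- lapply(1:4, function(list_id) {
  v <- (items$item + list_id) %% 4 + 1      # rotate version assignment per list
  cbind(items, versions[v, ], list = list_id)
})
# Each list presents every item once; across lists, each item appears in all
# four cue/position versions, and every participant encounters all eight
# animacy x cueing x position conditions.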
Procedure
Participants were seated at a distance of 60 cm from the computer screen on which the experiment was presented. They were asked to describe the scenes in the pictures they would see on the screen in one sentence and were given examples of possible descriptions, as well as several practice trials, to make sure they understood the task. The experiment consisted of seven blocks; each block contained eight experimental and eight filler items, which appeared in a random order. Before each block a familiarization phase took place in order to make sure that participants could easily recognize the objects and figures that would later appear in the block. During the familiarization phase, objects and figures were displayed individually on the top, bottom, left, and right of the screen and participants had to point at them by using the keyboard keys to answer the questions they heard via headphones, such as "Where is the clown?". During the experimental phase, participants saw a fixation cross in the middle of the screen (500 ms) and then - depending on the condition (cueing/no cueing) - either a cue placed where a patient would appear next or a blank screen, each for 60 ms. Finally, the scene was presented and participants had 7000 ms to produce its description (Figure 2). To ensure the quality of voice recordings, participants wore a Hama "Fire Starter" PC headset with stereo headphones and a boom microphone with a frequency range of 50-5000 Hz. Before the experiment began, nine-point calibration and validation procedures were performed to ensure the accuracy of eye movement recordings. This procedure was repeated whenever the experimenter detected significant deviations between participants' gaze and the fixation cross that appeared in each trial. Viewing was binocular but only the dominant eye, determined using the Miles test, was tracked. (In the Miles test, participants fixate a point on the wall through a small opening created by their hands with arms extended; they then draw the hands closer to the face while fixating the point and, depending on ocular dominance, move the hands to the left or the right side of the face in order to keep focusing on the point with the dominant eye.) At the end of the experiment participants were asked several questions that aimed at identifying whether they were aware of the presented cue or not. The experiment lasted approximately 45 min.
Data Analysis
The obtained behavioral data were analyzed with respect to three measures: the produced utterance type, speech onset times, and eye movements. Statistical analyses were conducted in R (RStudio, 2017) using the lme4 package (Bates et al., 2014). Linear mixed-effects modeling with the lmer function was applied to continuous data (e.g., speech onset times), whereas mixed-effects logistic regression with the glmer function was applied to binomial data (e.g., the probability of first saccades). The optimal transformation for continuous data was determined using the Box-Cox procedure (Osborne, 2010). The factors Position (right/left patient), Animacy (inanimate/animate patient), and Cueing (cued/non-cued patient) were assigned sum-coded contrasts as categorical predictors (e.g., Barr et al., 2013; Levy, 2014). Models included these factors and the interactions between them as fixed effects, as well as participants and items as random effects (see Baayen et al., 2008): DV ~ Position * Animacy * Cueing + (1 | participant) + (1 | item), fitted with lmer or glmer as appropriate. Trial order (centered) was initially included as a covariate but did not influence the results and was removed to simplify the model structure. Converging models were compared using the anova function. The results reported below are based on the best-fitting models with the lowest AIC value. The exact random-effect structure of the selected models is indicated in Tables 1-6.
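A minimal sketch of this pipeline in R, using simulated data with hypothetical column names (the authors' actual analysis script is not reproduced here):

library(lme4)

# Simulated long-format data: 44 participants x 56 items = 2464 trials
set.seed(42)
dat <- expand.grid(participant = factor(1:44), item = factor(1:56))
dat$Position <- factor(sample(c("left", "right"), nrow(dat), replace = TRUE))
dat$Animacy  <- factor(sample(c("animate", "inanimate"), nrow(dat), replace = TRUE))
dat$Cueing   <- factor(sample(c("cued", "non_cued"), nrow(dat), replace = TRUE))
dat$onset_transformed     <- rnorm(nrow(dat))           # e.g., 1/sqrt(onset)
dat$first_saccade_patient <- rbinom(nrow(dat), 1, 0.5)  # binary eye-movement DV

# Sum-coded contrasts for the three two-level factors
for (f in c("Position", "Animacy", "Cueing")) contrasts(dat[[f]]) <- contr.sum(2)

# Continuous outcome: linear mixed-effects model
m_onset <- lmer(onset_transformed ~ Position * Animacy * Cueing +
                  (1 | participant) + (1 | item), data = dat)

# Binomial outcome: mixed-effects logistic regression
m_saccade <- glmer(first_saccade_patient ~ Position * Animacy * Cueing +
                     (1 | participant) + (1 | item),
                   data = dat, family = binomial)

# Compare converging models with different random-effect structures;
# the best-fitting model is the one with the lowest AIC
m_onset_slopes <- lmer(onset_transformed ~ Position * Animacy * Cueing +
                         (1 + Cueing | participant) + (1 | item), data = dat)
anova(m_onset, m_onset_slopes)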
Utterance Types
A total of 2464 produced utterances were classified into three structural categories: active sentences (93.43%), passive sentences (6.17%) and other structures (e.g., sentences describing the location of patients; 0.40%). Table 1 shows the results of the mixed-effects model and presents regression estimates (b), standard errors (SE), z-values, and p-values for each main effect and interaction. The main effect of the factor position revealed that more passive utterances were produced to describe scenes where patients appeared on the left of the agent than scenes where they appeared on the right. A further analysis (Table 2) revealed a significant association between first saccades to patients and voice selection (χ2(1) = 18.52, p < 0.001). Based on the odds ratio, the odds of producing passive utterances were 2.1 times higher when first saccades landed on patients than when they did not. While more first saccades were made to cued rather than non-cued patients (see Table 4 and corresponding analyses), first saccades were related to the production of passive utterances in both cued and non-cued patient conditions (χ2(1) = 10.52, p < 0.01).
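To make the odds-ratio statement concrete, the sketch below uses invented cell counts, chosen only so that the ratio comes out near the reported 2.1 (the study's actual counts appear in its tables):

# 2x2 table: first-saccade target (rows) by produced voice (columns);
# counts are invented for illustration.
tab <- matrix(c(100, 1100,
                 52, 1212),
              nrow = 2, byrow = TRUE,
              dimnames = list(first_saccade = c("patient", "elsewhere"),
                              voice = c("passive", "active")))
chisq.test(tab)                                    # test of association
(tab[1, 1] / tab[1, 2]) / (tab[2, 1] / tab[2, 2])  # odds ratio, here ~2.1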
Speech Onset Times
Initial stages of data analysis involved identifying the exact time latencies from the onset of the scene picture on the screen until speech onset using the Praat software (Boersma and Weenink, 2017). Based on the Box-Cox procedure, the reciprocal square root transformation was identified as optimal and applied to speech onset times. The results are reported for speech onset times of all produced utterances irrespective of the utterance type. Statistical analyses of speech onset times for produced active utterances alone yielded the same patterns as described below. Speech onset times of passive utterances could not be analyzed due to the small number of observations. Table 3 summarizes the results of the corresponding mixed-effects model, which yielded a main effect of patient position: speech onsets were longer for scenes with left-positioned patients, while neither cueing nor patient animacy affected onset times (see Discussion).
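The Box-Cox step can be sketched as follows; a profile-likelihood maximum near lambda = -0.5 is what motivates the reciprocal square root. The latency vector here is simulated, not the study's data:

library(MASS)

set.seed(7)
onset_ms <- rgamma(500, shape = 4, rate = 1/400)            # placeholder latencies
bc <- boxcox(lm(onset_ms ~ 1), lambda = seq(-2, 2, 0.1), plotit = FALSE)
bc$x[which.max(bc$y)]                                       # estimated lambda
onset_transformed <- 1 / sqrt(onset_ms)                     # lambda ~ -0.5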
Eye-Movement Data
The probability of looks in all 8 conditions is presented in Figure 4, which gives an overall impression of eye-movement behavior during scene presentation. Statistical results are reported to reflect the time course from the earliest to later stages, covering the time window before speech onset, as well as the full trial duration. The described measures representing each of these stages include the probability of first saccades, the percentage of dwell time on patients until speech onset and the percentage of total time spent on patients. Other measures reflecting the initial stage in looking behavior (e.g., the percentage, duration and start time of first fixations on patients, the number of first saccades to patients), as well as the time windows until speech onset and offset of stimuli (e.g., the probability of saccades, number of runs, gaze duration), reflect the same eye-movement patterns as reported below and are therefore not reported for brevity. The probability of first saccades is a measure reflecting the earliest eye movements towards patients. Table 4 summarizes the results of the corresponding mixed-effects model: first saccades were more likely to land on cued, animate, and left-positioned patients, as illustrated in Figure 5. Heatmaps for each experimental condition visualizing the proportion of fixation duration relative to the trial total within the time window of 250-400 ms from the scene onset are provided in Figure 6.
The percentage of dwell time spent on patients until speech onset reflects gaze behavior from the scene onset until the average speech onset time (1584 ms). Table 5 summarizes the results of the mixed-effects model (regression estimates, standard errors, t- and p-values) that yielded significant main effects of animacy and position, which were consistent with initial stages of looking behavior. Prior to speech, there was more gazing time on animate than inanimate patients and on left-positioned than right-positioned patients. The percentage of total time spent on patients is a measure that represents the time spent on patients throughout the full trial irrespective of speech onsets. Table 6 summarizes the corresponding model, in which animacy was the only factor that remained significant: animate patients attracted more total looking time than inanimate ones across the full trial.
DISCUSSION
Unlike previous studies, which either focused on referent animacy or on referential cueing, here we examined both factors in one sentence-production experiment. Thus, for the first time, these rather diverse factors were considered within one experimental design rather than in isolation. In particular, the present study examined to what extent visual (cueing and patient position) and conceptual (patient animacy) factors lead to systematic variations in the syntactic choices speakers make in describing event scenes, in the speech onset times of produced descriptions, and in eye-movement patterns. As to syntactic choice, speakers were more likely to place patients into the prominent subject position, producing passive voice descriptions, for scenes where patients were animate or positioned to the left of the agent. Moreover, the production of passive voice descriptions was higher when speakers first looked at patients, irrespective of whether this was due to cueing or not. Speech onset times were also influenced by patient locations and reflected additional costs for scenes where patients appeared to the left of agents. At the same time, speech onset times remained unaffected by either cueing or animacy of patients in the scenes. In contrast to both syntactic choice and speech onset times, all of the manipulated factors had an immediate effect on speakers' eye movements, so that participants were more likely to look at cued, animate, and left-positioned patients than at their counterparts. Whereas cueing had an impact on the initial looks toward the patient, it did not influence later eye movements. Similarly, the position of patients affected the looks to patients until the initiation of speech but not later on. In fact, the animacy of patients was the only factor that modified both the earliest and later eye-movement patterns, such that animate patients were looked at more than inanimate ones. To summarize, visual and conceptual properties of scenes influenced different aspects of behavior, affecting both language and eye-movement responses. In comparison to cueing, which only affected eye movements, both patient animacy and position also modified language behavior. While the impact of referent position has been demonstrated in a variety of comprehension tasks, we provide the first evidence that this factor also impinges on sentence production. We will now discuss the effects of each of these factors individually and then turn to their time course relative to one another.
Patient Animacy
Voice selection was sensitive to the animacy of thematic roles, so that more passive utterances were produced for scenes where the patient was animate. Since passive voice structures in German require placing a thematic patient into a sentence-initial subject position, this means that animate patients were more likely to be verbalized as subjects and to occur at the beginning of an utterance than inanimate patients. This finding is in line with reported animate-first effects in language production and comprehension (e.g., McDonald et al., 1993; MacDonald et al., 1994; Trueswell et al., 1993; van Nice and Dietrich, 2003; Bornkessel-Schlesewsky and Schlesewsky, 2009; van de Velde et al., 2014). These effects are generally attributed to two separate processes that occur during utterance production. The first one relies on word order and consists in placing animate entities in the prominent position at the beginning of the utterance. The second one concerns the assignment of grammatical functions, so that animates are assigned subject functions. It cannot be determined from our data whether these processes occur in two separate stages - with animacy first determining the function and then the position of arguments (Bock and Levelt, 1994) - or simultaneously, as some would argue (e.g., Branigan et al., 2008).
Interestingly, although both object topicalizations (i.e., fronting the object, as in Den(ACC) Angler filmt der(NOM) Clown, literally "the fisherman(ACC) films the clown(NOM)", i.e., 'It is the fisherman that the clown is filming') and passivizations are grammatically equally valid options in German, no topicalizations were produced in our experiment. The complete absence of OS (object-before-subject) topicalizations among the utterances observed in our experiment may seem puzzling, especially given that these constructions are reported to make up just under 4% of all sentences in German corpora (3.7% - Hoberg, 1981; 3.3% - Kempen and Harbusch, 2005) and were successfully elicited in previous studies with sentence production tasks (e.g., Myachykov and Tomlin, 2008). This incongruity, however, may be explained if the ratios for specific object cases are considered: depending on the corpus, object topicalizations with accusative objects amount to only 0.2-0.5% of all utterances, while the remaining 3.1-3.2% of utterances occur with dative objects (see Bader and Häussler, 2010, for more details on the comparison of corpora in this respect). This bias toward dative objects in object-topicalized sentences may account for the lack of their occurrence in our experiment, where only the topicalization of accusative objects was possible. At the same time, the ratio of passive sentence occurrences in our experiment (6.2%) closely corresponds to ratios found in corpus studies of German (e.g., 7% - Brinker, 1971; 9% - Schoenthal, 1976). Our experimentally elicited utterances thus reflect quite accurately the naturally occurring syntactic variation in the language. Nonetheless, neither of the two accounts - grammatical function or word order - can be completely ruled out to explain the animacy effects observed in our findings. Both of these models of language production assume a higher conceptual accessibility of animate referents underlying functional and positional processing, and our data confirm this assumption, in that animate referents were both assigned subject roles and placed first in produced passive utterances more often than inanimate ones.
The higher conceptual accessibility of animate referents is often related to the inherent significance of animacy as an ontological category and its multifaceted influence on human cognition, including its prioritizing in language use (e.g., Yamamoto, 1999; Dahl, 2008). The priority of animate over inanimate entities in language is conceptualized as a prominence scale that organizes arguments of a thematic structure in terms of a hierarchy (e.g., Lamers and de Swart, 2012). The prominence hierarchy may map onto other hierarchies, for instance, that of syntactic functions (subjects ranking over objects) or thematic roles (agents ranking over patients), and thus influence argument linearization. According to the so-called principle of harmonic alignment (Aissen, 2003), higher-ranked entities on one scale should align with higher-ranked entities on another scale. In the case of the passive utterances produced in our experiment, it was the animacy hierarchy that aligned with that of syntactic functions, so that more prominent animate patients were given higher-ranked subject functions. On the one hand, this is consistent with theories about the higher prominence of animate versus inanimate entities, confirming a bias in the perception of animate roles as fitting subject functions better than inanimate ones. On the other hand, thematic roles are typically reported to align with syntactic functions, so that agent and not patient roles function as sentential subjects (e.g., Dik, 1978; Jackendoff, 1987; Grimshaw, 1990). In this respect, the production of passive utterances in our experiment provides an example of how the semantic prominence of animacy may override the prominence of thematic roles.
Taken together, our experiment confirms that animacy is an important conceptual factor that can affect speakers' structural choices. This finding is in line with previous studies that involved the manipulation of agent animacy. Crucially, we investigated the animacy status of patients, thereby corroborating the importance of animacy for structural choices even when less prominent patient arguments are considered.
Visual Cueing
In contrast to the manipulation of patient animacy, drawing attention to patients via visual cueing did not elicit the expected changes in language production. Nevertheless, visual behavior was affected as predicted, so that upon scene presentation gaze was first directed to cued rather than non-cued patients. Thus cueing was effective in altering eye movements, even though it had no impact on either the onset of produced utterances or their syntactic structure. Given that the effects of cueing only surfaced in the initial saccades and fixations, it should not be surprising that these short-lived effects did not affect speech production. Yet, this finding contradicts a number of previous studies (e.g., Gleitman et al., 2007; Myachykov et al., 2011, 2012) that did find a correspondence between the increased use of passive voice and the visual cueing of patients in scene description tasks. It is assumed that increasing the saliency of patients via cueing may make them more accessible for processing and therefore more likely to be assigned subject functions, which then results in a passive voice utterance (e.g., Myachykov et al., 2018b). Despite the apparent similarity between these studies and our experiment, however, there are important methodological differences that could be responsible for the discrepancy in results. Cueing manipulations in the aforementioned studies typically consisted in cueing both agents and patients, which perhaps created a starker contrast between the two cueing conditions compared to our experiment, where agents were never cued and only patients were either cued or not. Moreover, cueing in these studies targeted the visual salience of referents, whereas their conceptual characteristics (e.g., animacy) did not vary systematically. In our study, the conceptual prominence exerted by animacy also rendered patients more accessible, conceivably outweighing the increase in their saliency due to cueing. As a result, the shifts in visual attention towards patients following cueing in our experiment may have been shorter-lived than in previous studies. Unfortunately, this cannot be determined, as analyses of eye movements reflecting the time course of changes in visual attention were not reported in these studies.
While differences in the employed paradigms may have contributed to the absence of an effect of visual cueing on sentence production, other studies have likewise failed to observe effects of attentional cueing on structural choice (e.g., Myachykov et al., 2011; van de Velde et al., 2014; Hwang and Kaiser, 2015). For instance, Myachykov et al. (2011) did not observe significant effects of visual cueing on Finnish speakers' structural choices in a picture description task despite the fact that the visual cue effectively shifted participants' gaze to the cued entity. The same was true for speakers of Dutch (van de Velde et al., 2014) as well as for speakers of Korean (Hwang and Kaiser, 2015). A number of reasons could account for the discrepancies in results when it comes to cueing affecting (or not) speakers' structural choices. One reason might be cross-linguistic variability in the grammatical systems of different languages. In a language with a case system (like German), for instance, the accessibility of a patient increased by cueing may be interfered with by the necessity to produce a case-marked article (den "the(ACC/MASC)" or der "the(NOM/MASC)") before the noun. The choice of the case marking on the article determines the choice of syntactic structure. If the accusative case-marked article den has been chosen, only an object topicalization can follow (Den(ACC) Angler filmt der(NOM) Clown, 'It is the fisherman that the clown is filming'). In contrast, the choice of the nominative article der necessitates proceeding with a passive (Der(NOM) Angler wird vom(DAT) Clown gefilmt, 'The fisherman is being filmed by the clown'). As the corpus data described above show, accusative objects are almost never topicalized, suggesting that speakers would rather resort to the passive voice in order to produce a grammatically acceptable utterance. A related reason is the relative flexibility of word order in German as compared to English. Since the number of available structural options is higher in a language with flexible word order, it may be more challenging for speakers of that language to integrate their linguistic choices with shifts of visual attention. However, it is also possible that using longer cues may increase the accessibility of referents enough to overcome language-specific factors that interfere with structural choices. Cue duration does seem to play a role for speakers of English (Myachykov et al., 2018a), but its role for speakers of other languages remains to be clarified in future studies. In sum, our findings indicate that increasing the visual salience of referents by means of visual cueing may not be as effective in influencing utterance structure as previously reported.
Spatial Position of Patients
Although effects of referent position have often been observed in sentence comprehension (e.g., Maass and Russo, 2003; Dobel et al., 2007), so far no study has looked at such effects in language production. Our results show that the positioning of patients in space had a pervasive influence on participants' behavior, affecting early and later eye movements, as well as the initiation of utterances and voice selection. The effects of patient positioning across all of these behavioral measures were consistent with our predictions and revealed participants' bias to expect agents to the left of patients in visual scenes and to assign subject functions to left-positioned rather than right-positioned referents. Similar spatial biases have been documented for areas other than sentence production. There is converging evidence that the relatedness of the spatial positioning of referents and their thematic roles becomes evident in language comprehension. Dobel et al. (2007), for instance, asked participants to either draw or arrange transparencies of protagonists or objects in order to depict sentences they heard. Their findings suggest that the leftmost position in space is associated with agents rather than patients. Similar findings come from experiments by Chatterjee et al. (1995, 1999), where the recognition of agents was less effortful when agents appeared to the left than to the right of recipients. Moreover, the applied spatial schemata seem to also affect the direction in which the action evolves, i.e., from left to right (Chatterjee, 2002). In line with these findings, our results confirm a similar left-agent bias for sentence production. Crucially, this effect appears to be modulated by writing direction, as the reverse bias is found in speakers of languages with right-to-left scripts (e.g., Maass and Russo, 2003). For instance, a recent study that investigated spatial preferences for agent placement in scenes depicting transitive actions suggests its dependence on script direction (Esaulova et al., unpublished). The authors evaluated visual preferences for left- and right-positioned agents in a group of native German and a group of native Arabic speakers. The results showed that speakers' visual preferences were consistent with the script direction of their native languages: German speakers preferred pictures with left-positioned agents, while Arabic speakers preferred those with right-positioned agents. In addition to language-related effects of the visual positioning of referents, left-to-right spatial schemata have also been observed in a number of other areas that correlate with script direction (Zebian, 2005; Santiago et al., 2007; Pérez et al., 2011). One example of such a spatial bias is the so-called SNARC effect - a tendency to envisage numbers and magnitude along a horizontal line, starting with the smallest item and moving to the largest from left to right (Hubbard et al., 2009). Again, this tendency occurs in languages with a left-to-right script, while the opposite right-to-left pattern - known as the Reverse SNARC effect - is observed for languages with a right-to-left script, such as Hebrew or Arabic (e.g., Zebian, 2005; Shaki et al., 2009). Just like SNARC and time representations, schemata for linguistic agency change their direction in populations with a right-to-left script (e.g., Arabic - Maass and Russo, 2003; Esaulova et al., unpublished).
Furthermore, spatial mental schemata seem to be used to represent social psychological concepts, such as social agency (see Suitner and Maass, 2011, for an overview of the Spatial Agency Bias). Higher-status social groups (e.g., men) are typically mentioned before lower-status groups (e.g., women) and are therefore positioned to the left in left-to-right languages (Hegarty et al., 2011). Likewise, groups that are represented to the left are generally perceived as the "norm" and of higher status (Hegarty et al., 2010; Bruckmüller et al., 2012). Patients positioned on the left in the event scenes in our experiment could be perceived as more agentic than those on the right, facilitating both the assignment of subject functions to them and a word order in which they would be mentioned first. Thus, while the spatial orientation of patients may appear to be a merely visual factor, it could in fact reflect both visual and conceptual preferences. Moreover, it can be conceptualized as a prominence-lending factor, since - similar to animacy - it can be represented as a hierarchy, with left-positioned referents aligning more readily with subjects than right-positioned ones.
Our findings suggest that the positioning of figures and objects in event scenes influences sentence production in two ways, affecting both the structure and the onset times of produced utterances. The position of agents and patients relative to one another is thus not only relevant for language comprehension, as previously reported, but also for language production. Whether these effects in language production may be subject to cultural adaptation - as appears to be the case in language comprehension - is yet to be determined. In any case, considering that counterbalancing seems to be a common practice in most studies examining sentence production during scene descriptions (e.g., Myachykov et al., 2018a), our finding has important methodological implications. Since our experiment demonstrates that the positioning of referents exerts a strong effect on sentence production, counterbalancing may not be adequate or sufficient to account for its influences. Instead, materials should be controlled more carefully and/or the effects of positioning should be systematically explored and reported.
The Time Course and Interplay of Visual and Conceptual Influences
Our study addressed visual and conceptual factors within the same experiment targeting visual and language responses, which allowed us to explore the time course of influences related to each of these factors, as well as whether they interact and if so, on which behavioral level.
The time course of the influences exerted by each of the manipulated factors appears to depend on their relatedness to meaning. Patient animacy, as a conceptual property drawing on meaning, had not only immediate but also long-lasting effects, shaping both the initial and later visual inspection of scenes, as well as syntactic choices for their description. The impact of animacy is therefore both early and long-lasting. This observation is in line with the findings of an EEG study by Malaia and Newman (2015), who investigated the syntax-semantics interface during word-by-word reading of sentences in which subject animacy and verb telicity were varied (e.g., "The witness(animate)/mansion(inanimate) seized(telic)/protected(atelic) by the agent was in danger"). The authors found neural support for first-noun animacy affecting the online comprehension of not only that noun but also later parts of the sentence, demonstrating that animacy effects persisted during sentence comprehension. In contrast to the long-lasting effects of animacy, our findings revealed that visual cueing, which was unrelated to any conceptual interpretation, only impacted initial scene scanning patterns but no later changes in gaze or speech. The effect of visual cueing can thus be considered early but rather short-lived. At the same time, another visual factor - the spatial position of patients - seems to have an intermediate effect, as it influenced immediate gaze behavior, as well as syntactic choices and utterance initiation times. The impact of conceptual properties can thus be seen from the very onset of a visual scene until after the description is produced, while the influence of visual properties drawing on perceptual mechanisms reduces over time - be it immediately upon the onset of the scene (cueing) or once the syntactic choice is made (patient position).
Interestingly, the position of patients had an impact similar to that of animacy, in that it not only affected eye movements until speech onset but also influenced speakers' structural choices, leading to more passivizations when patients were located to the left of agents. Moreover, patient position affected not only the type of produced utterance but also its onset, leading to longer delays in the case of left-positioned patients. This could be related to the left-to-right bias we described earlier, as well as to a misalignment between the incrementally planned and structurally unmarked agent-first structure and the visual input. While both animacy and position effects emerged very early as judged by changes in gaze, the effect of patient position was relatively short-lived compared to that of patient animacy. In this sense, the spatial organization of thematic roles in a scene may also be given an additional meaning (i.e., of agency). This conceptual interpretation of an otherwise visual factor goes beyond its visual properties and confers on it an intermediate status compared to visual cueing on one side and conceptual animacy on the other.
Although different factors indeed affected different aspects of sentence production and looking behavior, none of them interacted. This could mean that rather than depending on or interfering with each other, visual and conceptual factors exert their influences independently. Along with this, their relative importance appears to differ from factor to factor. In this way, the conceptual factor of animacy seems to be more powerful in making entities prominent than changes in visual saliency due to cueing. This is in line with Henderson et al. (2018), who argued that conceptual aspects affect the perception of visual scenes more than visual ones. At the same time, however, this finding is at odds with Rissman et al. (2018), who suggest that visual saliency may override conceptual saliency related to animacy. It is worth noting, however, that disentangling with certainty the conceptual and visual saliency of animate vs. inanimate entities in depictions is highly problematic, since some of the characteristics that are intrinsic animate features (e.g., possession of a face) may also increase visual saliency. Whether animacy may be overridden by visual cues should be addressed in future research varying the saliency of visual cues by manipulating such parameters as cue duration or size. So far, however, our findings suggest that when a scene combines both conceptual information (i.e., animacy) and visually salient features (i.e., a cue), features loaded with meaning outrank perceptual saliency.
CONCLUSION
Unlike previous experiments, our study investigated the influence of visual and conceptual properties of scenes together rather than individually. We also considered a number of behavioral responses (gaze changes, speech onset, utterance structures) aiming at a fuller picture, as opposed to studies that targeted either speech or visual behavior. Applying this approach, we were able to demonstrate - for the first time - that the spatial positioning of patients in scenes manifests itself in language production. Moreover, our experiment suggests that the position of referents in an event scene may increase their prominence similarly to animacy. This is reflected in speakers' tendencies to assign left-positioned referents subject functions and to place them in initial sentence positions. Importantly, structural choices were affected by the manipulation of patients' and not agents' animacy status, indicating that features like animacy may increase the prominence of both agent and patient roles in a sentence. The relative weight of visual cueing and patient position appears to gradually reduce over time, while conceptual factors drawing on meaning have longer-lasting effects. Therefore, increasing the visual salience of referents by means of visual cueing may not be as effective in influencing sentence production as previously reported. Future studies are needed to clarify this discrepancy in findings by considering whether visual factors (e.g., duration or type of cue) and/or language-specific characteristics may be possible reasons for such differences.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Ethics Commission of Cologne University's Faculty of Medicine with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Ethics Commission of Cologne University's Faculty of Medicine.
AUTHOR CONTRIBUTIONS
YE designed the study, collected and analyzed the data, and wrote the manuscript. MP and SD designed the study, and wrote the manuscript.
"year": 2019,
"sha1": "10b76d23cb36a2c1625278241764cc97c9f176be",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00835/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "10b76d23cb36a2c1625278241764cc97c9f176be",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
The impact of SLMTA in improving laboratory quality systems in the Caribbean Region
Background: Past efforts to improve laboratory quality systems and to achieve accreditation for better patient care in the Caribbean Region have been slow.
Objective: To describe the impact of the Strengthening Laboratory Management Toward Accreditation (SLMTA) training programme and mentorship amongst five clinical laboratories in the Caribbean after 18 months.
Method: Five national reference laboratories from four countries participated in the SLMTA programme, which incorporated classroom teaching and implementation of improvement projects. Mentors were assigned to the laboratories to guide trainees on their improvement projects and to assist in the development of Quality Management Systems (QMS). Audits were conducted at baseline, six months, exit (at 12 months) and post-SLMTA (at 18 months) using the Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist to measure changes in implementation of the QMS during the period. At the end of each audit, a comprehensive implementation plan was developed in order to address gaps.
Results: Baseline audit scores ranged from 19% to 52%, corresponding to 0 stars on the SLIPTA five-star scale. After 18 months, one laboratory reached four stars, two reached three stars and two reached two stars. There was a corresponding decrease in nonconformities and development of over 100 management and technical standard operating procedures in each of the five laboratories.
Conclusion: The tremendous improvement in these five Caribbean laboratories shows that SLMTA coupled with mentorship is an effective, user-friendly, flexible and customisable approach to the implementation of laboratory QMS. It is recommended that other laboratories in the region consider using the SLMTA training programme as they engage in quality systems improvement and preparation for accreditation.
Introduction
Improving laboratory quality systems and attaining accreditation are important benchmarks in national health laboratory practice, as accreditation is a process that gives formal recognition of the technical competence of a laboratory to perform specific tests. 1 In many cases, the added value of accreditation far outweighs the necessary investment in human resources, finances and time, since it is an independent method of determining and monitoring laboratory performance, whilst assuring the validity of the results to the users. 2,3 Implementation of laboratory Quality Management Systems (QMS) and achievement of accreditation amongst laboratories in the Caribbean Region have been limited. Available data report only three accredited government-owned or public clinical laboratories in the Caribbean as of 2011. 4 Over the years, many Caribbean laboratory staff have been provided with information on QMS and accreditation in various forms, including training, conferences, meetings and printed material. However, using this knowledge collectively and developing a comprehensive plan in order to address quality gaps and begin the journey toward accreditation have been challenging. During a preliminary laboratory needs assessment survey conducted in 2009, laboratory managers and other stakeholders discussed the problems of an undertrained laboratory workforce, the lack of motivation and, most importantly, the perception that the quality improvement process was cumbersome. 4 The need to put strategies in place to eliminate these hindrances as soon as possible was emphasised. The recommendation was that a more user-friendly, stepwise approach to quality systems implementation, in combination with task-based training tools to improve staff knowledge, could lead to more substantial improvement in quality systems.
The Strengthening Laboratory Management Toward Accreditation (SLMTA) programme was launched in 2009 and has been implemented in 47 countries worldwide. 5 It is a management training programme that utilises a series of workshops interspersed with on-site projects designed to improve laboratory quality. Evidence from other settings has shown that the SLMTA training programme yields observable and measurable laboratory improvements. 6 Furthermore, the training empowers laboratory staff and enhances management's ability to improve their own laboratories by making use of existing resources. 7 A laboratory quality improvement mentorship intervention programme in Lesotho that incorporated the SLMTA training and a stepwise approach to accreditation preparedness has resulted in significant measurable improvements in the quality of enrolled laboratories over a period of 12 months. 8 The reauthorisation of the US President's Emergency Plan for AIDS Relief (PEPFAR II) in 2008 resulted in the establishment of the PEPFAR Caribbean Regional Program and the development of the PEPFAR Partnership Framework with 12 Caribbean countries (Barbados; Trinidad and Tobago; Belize; Suriname; Jamaica; the Bahamas; St. Lucia; St. Vincent and the Grenadines; Grenada; Antigua and Barbuda; St. Kitts and Nevis; and Dominica). Since then, the PEPFAR laboratory-strengthening working group has worked closely with the Ministries of Health (MOHs) in these countries to improve the quality and reliability of laboratory results and to offer basic testing services for persons living with HIV. The need to engage laboratories in these countries in quality improvement and accreditation was identified very early during this collaboration when it became apparent that laboratory services, systems and infrastructure in the region were weak, with various populations lacking access to timely, low-cost and high-quality laboratory services. 4 With the aim of improving laboratory quality in the region, the US Centers for Disease Control and Prevention (CDC) Caribbean Regional Office Laboratory Team, the International Laboratory Branch of the Division of Global HIV/AIDS at CDC Atlanta and the African Field Epidemiology Network (the laboratory implementing partner) collaborated to research options for effective laboratory quality improvement. The decision was made to use the SLMTA training programme, coupled with the World Health Organization Regional Office for Africa's (WHO AFRO) Stepwise Laboratory Quality Improvement Process Towards Accreditation (SLIPTA) checklist, along with mentorship, in order to improve the quality systems of five laboratories in four of the Caribbean Partnership Framework countries. This article discusses improvements in the laboratory quality systems during the 18-month implementation of the SLMTA training programme and mentorship in these laboratories.
Research method and design

Advocacy strategy with governments
At the initiation of the regional laboratory strengthening activities, following the signing of the PEPFAR Caribbean Regional Partnership Framework in 2010, key sensitisation meetings were held with policymakers and other stakeholders in each of the four countries to highlight the need, importance and advantages of improved laboratory quality systems and accreditation. These meetings included Chief Medical Officers, Permanent Secretaries, laboratory directors and other regional partners. In addition to discussing an overall strategy for collaboration and strengthening of the entire laboratory health system, a presentation was made highlighting the stepwise approach toward accreditation, the SLMTA training programme and the use of mentors as innovative approaches to implementing quality systems and eventually achieving accreditation.
The proposed strategy for laboratory strengthening began by engaging the national reference laboratories in each of the four selected countries. Although each laboratory was unique in its operation, size and workload, it was agreed that the challenges faced were similar and they would, therefore, all benefit from the proposed interventions. To ensure buy-in and to highlight the need for providing additional resources to address the deficiencies previously identified during the laboratory needs assessment survey in 2009 and the subsequent baseline audits in 2011, key senior officials from the human resources, procurement and maintenance departments of the MOHs and hospitals were invited to attend the audit debrief meetings in their respective countries.
Laboratory audits
Periodic audits spanning three to four days were conducted in each of the five national reference laboratories by experienced auditors using the SLIPTA checklist. The SLIPTA programme uses a stepwise accreditation preparedness scheme that recognises laboratories according to their level of compliance with the international standard ISO 15189 - Medical Laboratories - Particular requirements for quality and competence. The results of the laboratory audits were reported for each of the 12 sections of the checklist covering the 12 quality system essentials (CLSI GP 26-A3 [2004]), including 111 main items for a total of 258 possible points (Table 1). The score obtained by each laboratory indicates its level of performance, which determines the star rating from zero to five stars.
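As an illustration of this scoring logic, the R sketch below converts a checklist point total into a percentage and star rating. The band cut-offs follow the published WHO AFRO SLIPTA scheme (0 stars below 55%, then 55%-64%, 65%-74%, 75%-84%, 85%-94% and 95% or more for one to five stars); they should be verified against Table 1 before reuse:

# Hedged sketch: convert SLIPTA checklist points (maximum 258) into a
# percentage score and a 0-5 star rating; cut-offs per the WHO AFRO bands.
slipta_stars <- function(points, total = 258) {
  pct   <- 100 * points / total
  stars <- findInterval(pct, c(55, 65, 75, 85, 95))  # 0 to 5 stars
  c(percent = round(pct, 1), stars = stars)
}
slipta_stars(134)  # about 52% -> 0 stars, as at the baseline audits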
The audits were conducted in each of the five participating laboratories at baseline, after six months (mid-term audit), after 12 months (exit audit) and after 18 months (follow-up audit) to ensure continuous monitoring of the laboratories and their performance (Figure 1).
Each laboratory audit began with an introductory meeting convening the laboratory director and departmental heads in order to summarise the proposed audit plan which would be used to identify areas for improvement. At the end of the audit, a formal debrief meeting was held with laboratory management, technical staff and key persons from the MOH and hospital whose responsibilities affect the smooth functioning of the laboratories. After the baseline audit, a customised quality system implementation plan was developed in order to outline the nonconformities found, recommendations for follow-up actions, responsible persons, timeline for completion and status (Table 2).
Throughout the programme the laboratories were audited at approximately six-month intervals, which allowed them to monitor their continued progress and update the quality system improvement plans originally developed at the baseline audit. The list of nonconformities found at the previous audit was also comprehensively reviewed and updated to determine the number of completed corrective actions over the period. Open nonconformities were assigned for further follow-up by the laboratory and its management.
Exit audits were conducted using the SLIPTA checklist three months after the last SLMTA workshop concluded (12 months after baseline). A follow-up audit was then conducted six months later to evaluate the longer-term effectiveness and sustainability of the programme. These audits allowed laboratories to determine their level of progress from the baseline to exit of the SLMTA training and mentorship programme.
SLMTA workshops
The SLMTA training programme was implemented as a series of three workshops, which began in May 2011 and were conducted approximately three months apart (Figure 1). A total of 24 participants (three to five per laboratory) from across the five focus laboratories were chosen based on the size of their laboratory and the testing needs of each country. These included staff from the various departments (e.g., Chemistry, Blood Bank and Serology), as well as the quality manager or designee. Participants were required to develop improvement projects and complete them during the hiatus between workshops. The improvement projects were generally chosen based on the areas of nonconformity indicated in the laboratory's individualised quality systems implementation plan along with the needs of the laboratory at the time. Each participant presented a summary of their completed improvement projects at the subsequent workshop, including the baseline data collected, the measure of progress within the study period and the challenges experienced during project execution. Final improvement projects were presented orally by each participant and graduation certificates were awarded in the presence of officials from the MOH and the hospitals in order to highlight the importance of this event to the process of accreditation preparedness.
Mentorship for the laboratories
Each of the engaged laboratories was assigned a mentor to assist in developing and establishing their QMS by providing technical assistance and coaching on implementing the improvement projects from the SLMTA training. Three full-time mentors were used for this activity across the five laboratories. Each mentor had at least 10 years of experience in laboratory technology and the development of QMS.
During the first few months of the programme, the mentors spent approximately one week each month embedded in the assigned laboratory. After approximately six months, the length of each mentor's assignment was increased to two or three weeks, depending on the needs of the laboratories at that time.
Six-week mentorship action plans were developed to give direction to both the laboratory and the mentor, allowing for measurement of progress over the specific six-week period (Table 3). Since the mentor was physically on-site for only part of the six-week period, the laboratory had a period of self-management during which time they communicated with the mentor via email, internet conferencing and telephone. All management and technical procedures produced during the assignments were forwarded to the laboratory directors or department directors for final approval.
Results
At the baseline audits the laboratory scores ranged from 19% to 52%, corresponding to 0 stars (Figure 2; percentages and stars correspond to the SLIPTA scoring system in Table 1). Scores increased steadily throughout the programme and by 18 months each laboratory had improved, with three of the laboratories more than doubling their baseline scores. One laboratory reached four stars on the five-star scale, two attained three stars and the remaining two laboratories each attained two stars. Of this group, one laboratory achieved accreditation through the College of American Pathologists (CAP) in September 2013; meanwhile, three others have applied for accreditation and are preparing for assessment within the next few months. Figure 3 shows the average percentage improvement across the five laboratories for each of the 12 sections of the checklist (i.e., the 12 quality system essentials), measured as the difference between the baseline and follow-up scores after 18 months. The greatest improvements were in corrective action (66%), organisation and personnel (55%) and purchasing and inventory (54%). The sections showing the least improvement were process control (18%), occurrence management (25%), internal audits (30%) and equipment (36%). Average final absolute scores were > 60% for all areas except occurrence management and internal audits.
Overall, between 141 and 735 Standard Operating Procedures (SOPs) were completed and approved in each laboratory over the 18-month period (Table 4), leading to an average increase on the checklist of 40% from the baseline score in the area of documents and records (Figure 3). Improvement in each laboratory can also be measured by the change in the number of identified nonconformities (Figure 4). Nonconformities decreased by more than half during the intervention period; for each laboratory this translated into at least a 50% decrease in outstanding nonconformities over the entire implementation period.
Case studies
Each participant enrolled in the SLMTA training programme was required to choose, plan and execute at least three improvement projects over the duration of the programme. SLMTA trainers provided tools, techniques and examples in order to guide participants to design effective projects within their laboratory, whilst mentors provided implementation support. As a result of these projects, tangible improvements were observed in the QMS and overall operations of the laboratories. Two high-impact projects are presented here as case studies.

Case Study 1 - Inventory management

Laboratory 4 has three store rooms containing hundreds of supplies from various vendors. The baseline audit showed that management of stock was a challenge within this facility, with frequent stock-outs, lack of proper tracking forms in the storage areas and increased borrowing from other laboratories. Upon investigation, factors such as unpredictable patient-testing workload, delivery delays and back-order issues consistently affected the supply levels. These issues were exacerbated by the poor record keeping and lack of an organised inventory management system, preventing effective forecasting.
A key recommendation to the SLMTA trainee was to put a system in place to ensure sufficient stock levels of all supplies. Hence, an improvement project was designed to enhance inventory management in all areas of the system, with the overall objective of reducing stock-outs to less than 5% within a four-month period. To achieve this objective, all staff were briefed on the project, including their specific roles in the success of the intervention. During the improvement project, 15 quality indicators were monitored (Figure 5). The results showed that seven of the 15 areas either maintained or achieved 100% compliance, whilst two other areas achieved 90% and 80% compliance over the baseline results. Other areas achieved appreciable improvements (Figure 5). Overall stock-outs were reduced to 5% as a result of the general improvements in the system.
Case Study 2 - Improving documents and records management in the microbiology laboratory
Laboratory 2 has had problems managing quality system documents and associated manuals in its microbiology section. This has resulted in limited progress toward achieving accreditation and difficulty in training new staff in the department.
An improvement project was designed to address document and records management. A team of key organisational individuals was convened to work together on the development of the QMS. This critical step helped to gain support for the project throughout the various sections in the department. Section leaders had the ultimate responsibility of designating and distributing the assignments within their sections. The documents and records were grouped into four categories: Technical SOPs; Management SOPs; Logs and Checklists; and Equipment (including the Equipment list, Preventative Maintenance logs and SOPs for each item of equipment). Figure 6 depicts the level of improvement in documentation after three months of this intervention. Technical SOPs showed the highest level of improvement, from 0% to 67%, closely followed by Equipment documentation, from 0% to 63%; the least improvement was in the Management SOPs.
Discussion
Although diverse in its geography, people, size and economy, the Caribbean Region shares a common challenge in achieving accreditation of its medical laboratories. Previous didactic training programmes introduced laboratory staff to the basic quality management principles and the existence of the ISO 15189 standard. Despite this knowledge, limited progress was seen. An approach that encompassed SLMTA training, a stepwise evaluation process and mentorship has resulted in tremendous improvement in the quality systems of five national laboratories in four countries of the region within an 18-month period, one of them having attained accreditation. Several factors may have contributed to the successes:
Early engagement of key stakeholders
Key to the success of global health interventions is full engagement of decision makers in the process from the beginning. In particular, facilitating meetings of policy makers -Permanent Secretaries, Chief Medical Officers and top management officials of the hospital -along with technical staff, in order to identify challenges and opportunities to resolve nonconformities was important for this project, since these individuals subsequently provided the laboratories with the support and resources needed to ensure timely improvement of the quality systems. Endorsement by top management for laboratory systems strengthening activities has proven to be important for the success of this stepwise approach.
An implementation roadmap
The process of accreditation can appear to be daunting, as extremely high levels of compliance with the quality requirements are essential for a successful assessment and a passing score. For a laboratory without an effective QMS, identifying challenges and developing a quality improvement plan can seem like an insurmountable goal, which can lead to demotivation and subsequent inaction. The use of a stepwise improvement process, along with specialised guidance documents, has been shown to provide laboratory stakeholders with a clearer path toward quality systems improvement and accreditation. 1 Caribbean laboratory directors and managers emphasised that past laboratory assessments and training did not provide them with a structured roadmap to assist in implementation; as a result, the majority of these laboratories did not initiate the process of QMS development and implementation. 4 The SLIPTA checklist was used to conduct an initial gap analysis in the participating laboratories, leading to the development of an implementation plan, which provided direction for improving the laboratory QMS. This plan outlined the process to be taken and the indicators that would be used to measure tangible progress and outcomes over time. Everyone involved, including hospital management, was assigned specific tasks relating to their functions and roles, with key deliverables and solid deadlines. Use of the stepwise evaluation method enabled recognition of incremental improvements at each audit throughout the process, providing added motivation to all the staff. The scores achieved at each audit highlighted the status attained and the progress that the laboratories had made in building an effective QMS, in eliminating nonconformities and in their readiness for accreditation.
Structured improvement approach
Prior approaches to laboratory strengthening in the region focused mainly on mass sensitisation to and training on the ISO standards and quality management basics, but not on implementation. In some cases the persons trained had not previously been exposed to the principles of continuous quality improvement, total quality management, or development of a quality system specifically for the laboratory. The SLMTA programme taught the enrolled laboratories how to change the way they approached quality management and their daily operations. The programme also provided user-friendly tools that allowed staff to work more efficiently, as evidenced by their improved star ratings after 18 months.
An important component of the SLMTA training is the improvement projects developed and implemented by the trainees. This promoted a culture of systematic problem solving and a strategic approach to the application of quality system requirements. These projects and their measurable results served as a tool for the laboratory to advocate with management and policymakers for continued support. With the changing economic priorities and limited resources in these developing countries, it was critical to document the impact of any quality improvement and accreditation preparations, so as to demonstrate to stakeholders that the benefits outweigh the costs. 2 In the case of these Caribbean laboratories, nonconformities were drastically reduced, with corresponding improvement in each of the quality management systems. For example, a 66% improvement was observed in the laboratories' ability to perform corrective actions. A similar SLMTA intervention in Lesotho 6 reported a 34% improvement in corrective action application over an 11-month period.
Mentorship
According to Maruta, Rotz and Peter, 'a laboratory mentoring program can be an important way to establish and solidify quality management systems and to help laboratories achieve accreditation goals'. 9 The presence of the mentors in this programme served two main purposes. Firstly, mentors provided needed technical assistance in order to aid the laboratory in the development and finalisation of the QMS documentation. It has been documented that a strong foundation for quality assurance begins with development of a quality manual, SOPs and test methods, since they serve as a guide for both implementing and enhancing the quality system. 10 The mentors played a critical role in bridging the gap between what was learnt in the workshops and what was implemented within the laboratories, drawing the team together to develop a strategy and guiding them to address the existing issues. For example, the majority of laboratory staff initially reported that their quality documents were delayed in the process of development for six or more months. The reduction in nonconformities recorded in these laboratories can be directly linked to the increase in the number of documents developed, completed and implemented as a result of the technical assistance provided by the mentors.
Key challenges and recommendations
The Caribbean Region is made up of small island nations with most country populations in the range of hundreds of thousands. Ensuring a sufficient number of well-qualified laboratory workers is an ongoing challenge, exacerbated by high levels of attrition as staff that have benefitted from government-supported training leave the public sector for more lucrative jobs in the private sector, either locally or overseas. Thus the remaining staff are overworked, reducing the amount of time available for training and quality improvement activities. There is also a shortage of qualified mentors who can provide the needed support to laboratories engaged in quality improvement efforts and accreditation preparation. These personnel challenges limit the laboratories' opportunities for development of QMS and achievement of laboratory accreditation. Encouraging governments in the region to prioritise health system-strengthening strategies that lead to staff development and retention would benefit not only laboratories, but the health system overall.
One of the main logistical challenges faced in this programme stemmed from the use of mentors based in different countries, who were required to travel by air to provide on-site support. Thus, considerable funds needed to be invested and intervention was sometimes delayed because of travel issues. Establishment of a cadre of in-country or regional SLMTA trainers and mentors would build local capacity and help reduce programme costs, especially as the programme expands. The momentum achieved through success of the SLMTA programme in these five laboratories must now be directed to further improvements in these laboratories, as well as expansion of the programme throughout the region. One of the participating laboratories recently achieved accreditation from CAP and three more have subsequently applied for accreditation, as a direct result of the training and technical assistance received in the SLMTA programme.
The remaining laboratory will continue to be monitored by means of SLIPTA audits, whilst preparing actively for accreditation in the near future.
Introduction and implementation of the SLMTA programme in the Caribbean Region has been made possible by funding from the PEPFAR programme; however, there is now a need to internalise the programme and transition it to local governments and other donors in order to facilitate expansion and ensure sustainability.
Conclusion
Quality management interventions in the Caribbean over the past 10 years had resulted in few improvements in the overall laboratory quality infrastructure, as evidenced by the low performance scores achieved at baseline audits and the limited number of previously-accredited laboratories in the region. A change of approach was thus needed in order to increase these numbers and put more laboratories on the path to accreditation. Implementation of the SLMTA and mentorship approach in several laboratories in the region has achieved tangible improvements in QMS development and overall quality within a very short period. Continued improvement in these laboratories and expansion of this programme to other laboratories in the region are recommended.
Sustained improvement will require government funds to be invested in training resources, including development and establishment of local mentorship programmes. Our results strongly support the growing body of evidence indicating that the SLMTA training programme is an important tool to empower laboratory staff, enhance management competence and achieve observable and measurable results for improved laboratory quality. | 2017-04-08T19:15:14.058Z | 2014-03-11T00:00:00.000 | {
"year": 2014,
"sha1": "731b07b924b5fabfe95f68ea45d633ce86c990ba",
"oa_license": "CCBY",
"oa_url": "https://ajlmonline.org/index.php/ajlm/article/download/199/187",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "731b07b924b5fabfe95f68ea45d633ce86c990ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236933556 | pes2o/s2orc | v3-fos-license | Fluid management in patients undergoing neurosurgery
Fluid management is an important component of perioperative care for patients undergoing neurosurgery. The primary goal of fluid management in neurosurgery is the maintenance of normovolemia and prevention of serum osmolarity reduction. To maintain normovolemia, it is important to administer fluids in appropriate amounts following appropriate methods, and to prevent a decrease in serum osmolarity, the choice of fluid is essential. There is considerable debate about the choice and optimal amounts of fluids administered in the perioperative period. However, there is little high-quality clinical research on fluid therapy for patients undergoing neurosurgery. This review will discuss the choice and optimal amounts of fluids in neurosurgical patients based on the literature, recent issues, and perioperative fluid management practices.
INTRODUCTION
Fluid management is part of the basic care in many clinical situations. Perioperative fluid therapy in patients undergoing neurosurgery is a vital component of anesthetic practice and critical care. There is increasing evidence that intraoperative fluid therapy may influence postoperative outcomes [1][2][3].
The main purpose of fluid management in neurosurgical anesthesia is to prevent brain damage caused by inadequate cerebral perfusion and provide a good surgical environment. Therefore, it is essential to maintain hemodynamic stability and proper cerebral perfusion pressure during neurosurgery.
Hemodynamic alterations and electrolyte imbalances often occur during neurosurgery because of the frequent use of diuretics to relieve increased intracranial pressure and edema. In addition, fluid requirements depend on the type of surgery; in neurosurgery, the osmolarity of the fluid is the most important factor in preventing cerebral edema.
A crystalloid fluid contains small molecular substances without high molecular substances, and it is classified as hypotonic, isotonic, or hypertonic according to its osmolarity. Lactated Ringer's solution (LR), a commonly used crystalloid, is hypotonic at 273 mOsm/L. Low plasma osmolarity can cause cerebral edema. Therefore, hypotonic solutions, such as LR, are avoided, while normal saline (NS) has traditionally been used as the main fluid in patients with neurosurgery [4].
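To make these osmolarity figures concrete, the sketch below sums typical label compositions to reproduce the calculated osmolarities of NS and LR. The exact compositions vary slightly by manufacturer, so treat the numbers as illustrative rather than definitive.

```python
# Illustrative sketch: the calculated osmolarity of a crystalloid is roughly
# the sum of its osmotically active particle concentrations (mmol/L). Label
# compositions below are typical values and vary slightly by manufacturer.
fluids = {
    "0.9% saline (NS)":  {"Na+": 154, "Cl-": 154},
    "Lactated Ringer's": {"Na+": 130, "Cl-": 109, "K+": 4,
                          "Ca2+": 1.4, "lactate-": 28},
}

PLASMA_OSMOLARITY = 290  # mOsm/L, mid-range reference value

for name, ions in fluids.items():
    osm = sum(ions.values())
    label = "hypotonic" if osm < 280 else "roughly isotonic"
    print(f"{name}: ~{osm:.0f} mOsm/L ({label} vs plasma ~{PLASMA_OSMOLARITY})")
# NS: ~308 mOsm/L; LR: ~272 mOsm/L, consistent with the 273 mOsm/L cited above.
```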
Since a reduction in oncotic pressure without changing the osmolarity increases cerebral edema in animal models of brain injury [5], colloid solutions have been known to prevent the severe reduction of colloidal oncotic pressure when used appropriately. However, the European Society of Intensive Care Medicine (ESICM) task force recommended against the use of colloids in patients with brain injury [6], continuing the debate about the use of colloids in neurosurgery.
Crystalloid solutions
Hypotonic solutions, such as the LR solution, are avoided in neurosurgical patients to minimize cerebral fluid accumulation. In contrast, NS, an isotonic crystalloid, has been widely used in neurosurgery because it is thought to reduce the risk of cerebral edema [7]. However, since NS has equal amounts of sodium and chloride (154 mEq/L), hyperchloremic metabolic acidosis occurs when a large amount of NS is administered because its chloride concentration is higher than the normal plasma chloride concentration (96-106 mEq/L).
Numerous laboratory and clinical studies have reported a dose-dependent association between hyperchloremia and the use of NS [8][9][10]. Hyperchloremic acidosis is associated with acute kidney injury (AKI) during abdominal surgery [9]. In a large, propensity-matched retrospective study of 22,851 patients who underwent non-cardiac surgery, postoperative hyperchloremia resulted in acute metabolic acidosis, leading to increased 30-day mortality and length of hospital stay [10]. A large retrospective study on abdominal surgery showed that patients treated with balanced crystalloids had better outcomes, including mortality, postoperative infection, need for renal replacement therapy (RRT), need for transfusions, electrolyte imbalance, and acidosis, than those treated with NS [9].
Meanwhile, the adverse outcomes of NS were not observed in randomized controlled studies of critically ill patients [11,12], non-critically ill patients [13], and postoperative patients who underwent neurosurgery [14]. In a recent meta-analysis, the balanced crystalloid solution was beneficial in significantly reducing postoperative hyperchloremia and metabolic acidosis, but the evidence was insufficient to compare the effects of buffered and non-buffered crystalloids on mortality and organ failure [15].
In contrast, balanced salt solutions (BSSs) replace chloride ions with lactate, acetate, and gluconate, which prevents the occurrence of hyperchloremic metabolic acidosis [16]. A BSS is the most common choice of resuscitation fluid in clinical practice [17]. In patients who underwent craniotomy, the NS group had higher sodium and chloride levels and had more patients with marked acidosis than in the BSS group [18].
However, though LR is a balanced crystalloid solution, it is hypotonic. A decrease of 1 mOsm/L in the plasma osmolality generates an osmotic pressure gradient of approximately 19 mmHg driving fluid movement across the blood-brain barrier (BBB), and a 3% decrease in the plasma osmolarity results in cerebral edema with a 3% increase in the brain volume and a 30% decrease in the intracranial blood and cerebrospinal fluid volume [16,19]. Prehospital resuscitation with LR compared to NS was associated with increased mortality in patients with traumatic brain injuries (TBI) [20]. Therefore, LR is not suitable for neurosurgical patients. Instead, isotonic BSS, excluding hypotonic solutions such as LR, has emerged as a fluid of choice for patients undergoing neurosurgery [21].
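The roughly 19 mmHg per mOsm/L conversion is consistent with the van't Hoff relation for osmotic pressure. As a quick back-of-the-envelope check at body temperature, assuming a fully dissociated ideal solute (this derivation is ours, not taken from the cited sources):

\[
\pi = cRT \approx \left(10^{-3}\,\tfrac{\text{mol}}{\text{L}}\right)\left(62.36\,\tfrac{\text{L}\cdot\text{mmHg}}{\text{mol}\cdot\text{K}}\right)\left(310\,\text{K}\right) \approx 19.3\ \text{mmHg}
\]

so each 1 mOsm/L of osmolar gradient corresponds to roughly 19 mmHg of osmotic pressure, matching the figure cited above.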
An isotonic balanced solution reduces the incidence of hyperchloremic metabolic acidosis and electrolyte imbalances in patients with brain injury, but the intracranial pressure is not different compared with NS [22]. Although a balanced solution has a clear benefit of reducing hyperchloremic metabolic acidosis, its advantage of reducing morbidity and mortality is not clear and requires evaluation.
High-quality data comparing NS and balanced solutions in perioperative and neurosurgical patients are not yet available. Based on the above evidence, although evidence is still lacking, an isotonic balanced solution is preferred over NS in neurosurgical patients because of the lower risk of metabolic acidosis and renal injury.
Colloid solutions
Large insoluble molecules in colloid solutions increase the intravascular oncotic pressure. In an animal model of brain injury, oncotic pressure reduction without changing the osmolarity increased cerebral edema [5]. Colloid solutions have commonly been used to decrease cerebral edema and improve hemodynamics during neurosurgery [23].
Hydroxyethyl starch (HES)
Several randomized trials have shown that HES has adverse effects on kidney function. The routine clinical application of HES in patients with severe sepsis in the VISEP study [24] was associated with higher rates of acute renal failure and RRT than LR. Similarly, two large trials comparing colloids and crystalloids in patients with severe sepsis, the 6S trial [25] and CHEST trial [26], showed an increased incidence of AKI and need for RRT.
In contrast, there was no difference in the incidence of renal failure and mortality between saline and HES 130/0.4 in patients with severe sepsis in the CRYSTMAS trial [27]. Likewise, the CRISTAL study, a large, randomized trial, [28], compared the effects of colloids and crystalloids in critically ill patients with hypovolemia and found no significant differences in the 28-day mortality and need for RRT.
Due to the conflicting results, a systematic review and meta-analysis that included the above trials concluded that HES significantly increased the risk of mortality and AKI in critically ill patients [29]. The ESICM task force on colloid volume therapy in critically ill patients recommended against the use of 6% HES 130 in patients with severe sepsis or at risk of AKI. They also recommended not to use colloids in patients with head injuries [6]. Based on accumulating evidence, the European Medicines Agency has restricted the use of HES in critically ill patients, and the United States Food and Drug Administration has added a black box warning. A recent meta-analysis comparing colloids versus crystalloids for fluid resuscitation in critically ill patients showed little or no difference in mortality with moderate-certainty evidence, though starches slightly increased the need for blood transfusion and RRT [30]. However, the heterogeneity of protocols and results in the aforementioned research continues to fuel controversy over the recommended restrictions on HES.
There is some opposing evidence on the restricted use of HES in patients with neurosurgery.
Some animal models and in vitro studies have shown protective effects of HES on the BBB [31][32][33]. Two early randomized controlled trials comparing HES with crystalloid solutions in patients with ischemic stroke reported no differences in the safety, hemodynamic efficacy, and complication rates [34,35].
HES has sometimes been used to maintain an optimal volume status to prevent delayed cerebral ischemia (DCI) due to cerebral vasospasm following a subarachnoid hemorrhage (SAH), as a component of the triple-H therapy. Compared to the standard therapy group, the goal-directed fluid therapy (GDFT) with a HES bolus group showed reduced frequencies of vasospasm and cardiopulmonary complications [36]. A recent retrospective study compared SAH patients who received HES with those who received crystalloids and found no significant difference in RRT [37]. Another retrospective study showed no positive correlation between the cumulative doses of HES and serum creatinine in SAH patients who had normal renal function and concluded that the administration of 6% HES 130/0.4 is safe in SAH patients without pre-existing renal insufficiency. However, caution is warranted during periods of repetitive administration of contrast media [38]. It is noteworthy that the incidence of AKI did not increase despite the substantial amount of HES used in the above trials.
However, there is still no evidence of the superiority of the use of HES in patients undergoing neurosurgery. The possible negative effects, such as renal injury and coagulopathy, should be considered, and HES should be used with caution in neurosurgical patients, in line with the 'first, do no harm' principle.
Albumin
In animal studies, high-concentration albumin therapy improved local cerebral blood flow (CBF), reduced infarct size and brain swelling, and improved neurological function [39][40][41]. In a retrospective study of patients with SAH, there was a higher proportion of patients with good outcomes at 3 months in the albumin group than in the non-albumin group, although there was no significant difference in the incidence of symptomatic vasospasm [42].
However, the SAFE trial, a multicenter, randomized, double-blinded trial, compared 4% albumin and NS in critically ill patients and showed no significant difference in the outcomes, such as mortality, proportions of organ failures, duration of intensive care unit (ICU) stay, duration of hospital stay, duration of mechanical ventilation, and duration of RRT [43]. However, in the subgroup analysis, the relative risk (RR) of death of trauma patients in the albumin group compared to the saline group (RR = 1.36) was higher than that in the patients without trauma (RR = 0.96). This difference in the RR of death was because more brain injury patients were assigned to the albumin group than to the saline group.
A post-hoc analysis of a subgroup of patients with TBI in the SAFE trial, the SAFE-TBI study, showed that the 2-year mortality of patients with severe brain injury was significantly higher in the albumin group than in the saline group [44]. A post-hoc follow-up analysis of severe TBI suggested that increased intracranial pressure may have contributed to the high mortality in the albumin group [7]. The results of the SAFE trial and post-hoc analysis continue to influence albumin use in patients with TBI [45].
However, these results should be considered with caution. The SAFE-TBI trial has its own limitations in post hoc subgroup analysis. The mortality of TBI patients was not the primary endpoint of the SAFE trial, and the trial design was not randomized for TBI analysis. Furthermore, the 4% human albumin used in the SAFE study is a hypo-osmolar solution that may potentially increase the intracranial pressure and cause cerebral edema [46].
Experimental SAH models on animals have demonstrated the beneficial effects of albumin [39,47,48], and there has been some evidence on the beneficial effects of albumin in SAH patients [49,50].
The ALISAH trial [49], designed to determine the feasibility and safety of albumin administration in SAH patients, was terminated after two serious complications of pulmonary edema were reported. Patients receiving 1.25 g/kg/d of 25% albumin for 7 days demonstrated better neurological outcomes than those receiving a lower dose. Follow-up analysis of the ALISAH trial showed that higher doses of albumin were associated with a lower incidence of vasospasm, DCI, and cerebral infarction [50]. However, these results should be interpreted with caution. The trial had an inadequate sample size and insufficient power because it was not designed to study the beneficial effects of albumin.
The ALIAS pilot trial suggested that high-dose albumin therapy has potential neuroprotective effects after ischemic stroke [51]. However, the ALIAS part 1 trial was suspended after safety analysis revealed an increased incidence of pulmonary edema and mortality [52]. The ALIAS part 2 trial, which was modified by adding exclusion criteria and safety measures, was also suspended because of the high incidence of pulmonary edema in the albumin group [53]. The pooled analysis of the data from the ALIAS part 1 and 2 trials showed no difference in the 90-day neurological outcomes and mortality between the 25% albumin and saline groups. However, there was an increased risk of pulmonary edema and intracerebral hemorrhage in the patients administered with albumin 25% at 2 g/kg [54]. Based on this evidence, the ESICM recommends against the use of high-dose albumin in patients with acute ischemic stroke and the use of low-(4%) or high-dose (20-25%) albumin in neurointensive care patients [55].
Although controversies still exist based on the above evidence, the use of albumin in the perioperative period of neurosurgery remains questionable. The potential risks and benefits of albumin administration should be assessed on a case-by-case basis.
HOW TO ADMINISTER THE OPTIMAL AMOUNT OF FLUIDS IN NEUROSURGICAL PATIENTS
The primary goal of perioperative fluid management during neurosurgery is to maintain hemodynamic stability and an adequate CBF. There is a growing body of evidence that intraoperative fluid therapy influences postoperative outcomes [1][2][3].
Restrictive versus liberal fluid therapy in major surgeries
Traditional intraoperative fluid regimens, which add replacement volumes for preoperative dehydration, third-space loss, and insensible loss, tend to induce a positive fluid balance that is related to postoperative complications [1].
In the recent decade, several randomized controlled studies have compared restricted fluid therapy with liberal fluid therapy in patients undergoing major abdominal surgeries. Brandstrup et al. [2] showed that patients in the liberal group gained body weight and had more complications than the restrictive group.
After this trial, numerous studies on abdominal surgery showed positive results for restricted fluid therapy, leading to a gradual shift toward fluid restriction during surgery under the concept of zero balance. However, in two large observational studies, concerns were raised about the zero-balance concept because of the possibility of worse outcomes, including AKI associated with excessive restriction [56,57].
Recently, the RELIEF trial compared restrictive fluid therapy maintaining a perioperative zero balance with liberal fluid therapy [3]. The results showed that patients in the restriction group had increased rates of surgical site infection and a higher risk of AKI.
Based on this recent evidence, worse perioperative outcomes have been observed in patients with both overhydration and excessive fluid restriction. Therefore, fluid optimization is essential for perioperative fluid management. It should also be noted that the amounts of administered volume in the liberal and restricted volume therapies were inconsistent and slightly different for each study [58]. In particular, the postoperative weight gain of the restrictive group in an earlier study by Brandstrup et al. [2] was comparable to the liberal group of the RELIEF study [3]. As such, an excessive restriction can result in worse outcomes, such as AKI.
GDFT based on dynamic parameters
To achieve the optimal fluid volume status, it is essential to avoid overhydration and excessive restriction and develop individually optimized fluid regimens using objective parameters. These objective parameters should be targeted preoperatively and measured perioperatively.
GDFT, a recently emerging fluid regimen, is a type of fluid administration that optimizes pre-defined targets based on directly measured hemodynamic parameters (Fig. 1), such as the cardiac output, stroke volume (SV), stroke volume variation (SVV), pulse pressure variation (PPV), systolic pressure variation (SPV), pleth variability index (PVI), and other factors [1]. Favorable outcomes and decreased costs have been shown for patients who underwent GDFT during major abdominal surgery [59][60][61]. Although the certainty of the evidence was very low, a meta-analysis comparing GDFT and restrictive fluid therapy in major non-cardiac surgeries showed that mortality was slightly lower in the GDFT group, with no differences between the two groups in the complication rate and length of hospital stay [1]. Unlike other studies, including this meta-analysis, one study [62] found that the total infused volume was higher in the restrictive group (basal crystalloid infusion ranging from 4 to 10 ml/kg/h) than in the GDFT group. A limitation of this meta-analysis was the lack of a definition of restrictive fluid therapy. GDFT consists of a given basal infusion and repeated boluses of fluids (usually colloids) to achieve a predefined target. The basal infusion rate is particularly important when comparing GDFT with other fluid regimens.

(Fig. 1 caption: Dynamic parameters derived from the arterial pressure wave. Mechanical ventilation induces periodic changes in the arterial waveform, from which various parameters are derived. Pulse pressure (PP) is the difference between the systolic and diastolic pressures. The area under the curve of the arterial pressure wave represents the stroke volume (SV). Systolic pressure variation (SPV) is the difference between the maximum and minimum systolic pressures; it consists of two components, delta up (Δup) and delta down (Δdown), relative to a reference pressure (Pref), the systolic pressure measured at the end of expiration or during apnea. PPV: pulse pressure variation, SVV: stroke volume variation.)
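As an illustration of how these dynamic parameters are computed from beat-to-beat measurements, a minimal sketch is shown below. The formulas are the conventional (max - min) / mean definitions; the sample values are invented, and the thresholds in the comments are the ones cited later in this review.

```python
# Minimal sketch of the standard PPV/SVV definitions computed over one
# respiratory cycle. The beat-to-beat values below are invented for
# illustration only.
def variation_percent(values):
    """Generic (max - min) / mean * 100 used for both PPV and SVV."""
    v_max, v_min = max(values), min(values)
    v_mean = (v_max + v_min) / 2.0  # conventional definition uses this mean
    return (v_max - v_min) / v_mean * 100.0

pulse_pressures = [48, 52, 44, 40, 46]   # mmHg per beat (systolic - diastolic)
stroke_volumes  = [72, 78, 66, 63, 70]   # ml per beat

ppv = variation_percent(pulse_pressures)
svv = variation_percent(stroke_volumes)
print(f"PPV = {ppv:.1f}%  (values > 13% often taken to predict responsiveness)")
print(f"SVV = {svv:.1f}%  (thresholds near 9-10% reported in neurosurgery)")
```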
GDFT during neurosurgery
In two retrospective studies of patients with SAH, a positive net fluid balance was independently associated with poor outcomes [63,64]. However, as it is difficult to compare restrictive and liberal fluid therapies in neurosurgical patients who must maintain euvolemia, recent studies on GDFT have been conducted. There have been some studies to optimize fluid administration using continuously measured dynamic parameters, such as SVV, PPV, and PVI for patients undergoing neurosurgery.
The SVV is a sensitive predictor of fluid responsiveness before and during brain surgery [65][66][67]. After the induction of anesthesia and before the start of the surgical procedure, the SVV predicted an increase of more than 10% in the SV after LR solution infusion more sensitively than the mean arterial pressure, heart rate, cardiac output, and central venous pressure (CVP) in neurosurgical patients [65]. An SVV of 9.5% was identified as the optimal threshold (sensitivity: 78.6%, specificity: 93%) for predicting a > 5% increase in the SV after a 100-ml colloid solution infusion [66]. The SVV target used in GDFT can affect clinical outcomes in supratentorial brain tumor resection [67]. Comparing two GDFT regimens for supratentorial tumor resection (with threshold SVV values set at 10 for the low-SVV group and at 18 for the high-SVV group), the low-SVV group had lower postoperative serum lactate levels, a shorter length of ICU stay, and a lower incidence of postoperative neurologic events than the high-SVV group [67]. When a GDFT group whose fluids were managed according to hemodynamic parameters, including the SVV, was compared with a control group whose fluids were managed at the therapeutic discretion of the attending anesthesiologist, the GDFT group received less fluid and had a shorter length of ICU stay, lower ICU costs, and lower lactate levels [68].
The PPV and PVI have also been reported to be good predictors of fluid responsiveness during brain surgery [69][70][71][72]. In patients undergoing brain tumor surgery, a PPV group, in which the PPV was maintained below 13%, had better postoperative hemodynamic stability and a lower postoperative fluid requirement than a CVP group, in which the CVP was maintained at 5-10 cmH2O [69]. PPV-guided GDFT during supratentorial tumor resection achieved comparable brain relaxation scores, lower serum lactate levels, more intraoperative fluid, and higher urine output than standard care [70]. In the sitting position for neurosurgery, measuring the PPV and PVI with an ear sensor predicted fluid responsiveness well, but the PVI measured with a finger sensor did not. However, the PVI measured with an ear sensor was limited by an unreliable signal in 26% of the patients [71].
A study on children undergoing neurosurgery showed different results. Comparing the PVI, ΔVpeak (respiratory variation in aortic blood flow peak velocity), arterial pressure, CVP, heart rate, inferior vena cava diameter, SPV (including delta up [Δup] and delta down [Δdown]), and PPV in pediatric patients undergoing neurosurgery, the PVI and ΔVpeak predicted the fluid response well, but the PPV and the other static and dynamic parameters did not [72].
Considering that hemodynamic changes are relatively common in neurosurgery, GDFT, which provides individualized optimal fluid status, is a promising fluid management strategy.
CONCLUSION
Despite numerous studies on perioperative fluid management, there is insufficient evidence to draw definitive conclusions regarding fluid management in neurosurgical patients.
Although evidence is still lacking, isotonic balanced crystalloid solutions should be considered the first-choice fluid, while hypotonic solutions should be avoided. Furthermore, colloid solutions should be used with caution, and their potential risks and benefits should be considered.
To achieve an optimal fluid volume status while avoiding overhydration and excessive restriction, the amount and duration of fluid administration should be considered, and an individualized fluid strategy is recommended using GDFT based on dynamic fluid parameters.
CONFLICTS OF INTEREST
No potential conflict of interest relevant to this article was reported.
DATA AVAILABILITY STATEMENT
Not applicable. | 2021-08-07T06:18:11.395Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "ee1607acb74b544f2c059c6062a198cf2888562a",
"oa_license": "CCBYNC",
"oa_url": "https://www.anesth-pain-med.org/upload/pdf/apm-21072.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a70b45ef2238e0c7161a972737cf68c3f54af98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269770966 | pes2o/s2orc | v3-fos-license | Self-determined use of provided powered oral hygiene devices leads to improved gingival health after 1 year: a longitudinal clinical trial
Purpose: Our study aimed to evaluate the long-term concordance and acceptance when using powered devices for everyday oral hygiene routine and gingival health in patients showing papillary bleeding.

Patients and methods: Thirty-one participants were recruited at the dental clinic of the University Hospital of Cologne, Germany, over a 6-week duration. At baseline, a standard dental check-up was performed, including oral hygiene indices and documentation of oral hygiene devices used. The study consisted of two consecutive phases: the first (motivational trial) was designed to prove the effectiveness and safety of a microdroplet device and a powered toothbrush compared to dental floss and a manual toothbrush over a period of 4 weeks. The second (observational) phase began with all participants receiving the powered oral homecare devices. Participants were able to use their oral hygiene measures of choice over an unsupervised period of 1 year. All participants were then rescheduled for a routine dental check-up, where oral hygiene indices and oral hygiene devices used were reevaluated.

Results: After 1 year, 93.3% of participants stated they performed interdental cleaning on a regular basis (baseline 60.0%). The percentage using a powered toothbrush increased from 41.9% (baseline) to 90.0% after 1 year. Oral hygiene parameters had improved after both the motivational trial and observational phases compared to baseline (papillary bleeding index p = .000; Rustogi Modified Navy Plaque Index p < .05; Quigley-Hein Index p = .000).

Conclusion: In the long term, participants preferred using powered oral hygiene devices over the gold standard dental floss and manual toothbrush. Improved oral hygiene parameters after 1 year may indicate implementation of newly acquired oral-hygiene skills during the 4-week instruction phase.
Introduction
90% of the global population suffers from gingivitis [1][2][3]. Despite the availability of a broad range of oral hygiene products and increasing oral health and hygiene competency [3], the prevalence of gingivitis remains high [2]. A study observing toothbrushing efficacy in adolescents concluded that removal of plaque was poor despite the high frequency of daily toothbrushing, even though participants were asked to clean their teeth to the best of their ability [4]. Reasons for this were poor brushing methods and a lack of motivation, knowledge, or ability to carry out efficient brushing movements [4,5]. Recommendations regarding oral hygiene products and behavior consist of toothbrushing with a manual or powered toothbrush twice a day, with an even distribution of brushing time across all reachable surfaces [4,6], as well as efficient daily usage of interdental brushes or dental floss [1]. Flossing in addition to toothbrushing can lead to a reduction in gingivitis [7]. In cases where interdental flossing is not a realistic measure, other interdental cleaning devices may be useful in addition to the daily brushing routine. Interdental brushes are first choice if interdental tissues in narrow interdental spaces will not be damaged [1]. Alternatively, different water flossers and the more recent microdroplet devices (such as Philips AirFloss Pro®) are available, which aim to be more comfortable to use for interdental cleaning. Several authors have stated a reduction in gingival inflammation after using microdroplet devices [8][9][10].
Behavioral changes in daily oral hygiene routines are a first step to overcome these oral hygiene deficits. It is well known that many people find it difficult to adjust to recommended daily oral hygiene routines, and long-term adherence to these recommendations deteriorates quickly [7,11,12]. This difficulty particularly occurs when products are too complicated to use correctly [12] or recommendations are not made based on patients' individual preferences [13]. Additionally, a lower socioeconomic status often limits the possibility of affording high-priced oral hygiene products and can correlate with a lack of knowledge regarding oral hygiene products [3].
To understand behavioral changes in patients, several definitions have been described. Compliance is defined by Cramer et al. as "the extent to which a patient acts in accordance with the prescribed interval and dose of a dosing regimen" [14]. Adherence is a less stringent term that may be used instead of compliance. Compliance is often documented as good or poor, mainly using percentages, where 80% is the cut-off point [14,15]. In contrast, concordance describes a cooperative relationship between doctors and patients, to reach a set health goal together. In this way, patient preferences, fears, and concerns about a treatment choice are important [16,17].
Most diseases of the oral cavity are preventable, at least in their severity, but only if preventive measures are routinely implemented [18]. In the context of daily oral hygiene recommendations, continuous concordance with measures such as toothbrushing or interdental care is an important success factor regarding lifelong oral and dental health. We know that 30-65% of health information provided by medical professionals will be forgotten within one hour after the appointment [11,12]. For some medical conditions, non-adherence averages up to 50% [19]. Since adherence with daily brushing and interproximal care is the most essential factor for stable oral health, more effort must be made to investigate how concordance between patients and professionals can be achieved, and whether improved concordance leads to improved oral hygiene measures [1].
Convenient oral hygiene products, such as powered toothbrushes or a microdroplet device, show at least similar efficacy compared to use of a manual toothbrush or flossing with dental floss [1,20]. However, both interdental care and use of a manual or powered toothbrush are technique-sensitive procedures [21][22][23]. Evaluation of patient acceptance of short-term use found that a microdroplet device was superior to dental floss [24]. Understanding patient preferences in their daily oral hygiene routine is important to provide advice when selecting oral hygiene products. However, less is known about patient acceptance and efficacy of a combination of powered oral hygiene devices for interdental care and brushing. It is important to understand whether patients with poor oral hygiene would achieve concordance and improve oral hygiene parameters while using powered oral homecare products in the long term.
Thus, the purpose of our study was to evaluate the long-term concordance with and acceptance of powered devices for oral homecare, as well as gingival health in gingivitis patients using powered devices in their everyday oral hygiene routine. We hypothesized that an easy-to-perform oral homecare routine supported by powered oral hygiene devices would result in long-term concordance with use and an improvement in clinical oral hygiene parameters in gingivitis patients.
Study design and methodology
We carried out a prospective, observational study divided into two phases. Prior to the study, all participants underwent routine dental assessment where oral hygiene indices were evaluated. The study began with a motivational trial (MT) phase, where efficacy, safety, and short-term acceptance were evaluated. In this phase, participants were randomly assigned to three groups (group 1: Sonicare® powered toothbrush & AirFloss Pro® (both Philips Nederland B.V., Netherlands) filled with water; group 2: Sonicare powered toothbrush & AirFloss Pro filled with Listerine® mouth rinse; group 3: manual toothbrush & dental floss). For 4 weeks, participants used the oral hygiene combination to which they were assigned (Fig. 1). The MT phase was designed to evaluate whether combinations of powered devices were at least as efficient as the combination of dental floss and a manual toothbrush, as previously described by Stauff et al. [24].
AirFloss Pro is a microdroplet device designed to clean narrow proximal spaces. The integrated water tank has a capacity of 14 ml. Depending on the operating mode used, one, two, or three puffs of 110 µl can be shot through a proximal space, using a nozzle to correctly locate the proximal space. AirFloss Pro should be used at least once a day for effective proximal hygiene [19]. The Sonicare Philips FlexCare toothbrush with the "ProResults C1" brush head is a powered toothbrush. To clean teeth and gingiva effectively, it should be used at least twice a day for 2 min, based on current literature [25]. All participants were instructed to use the toothbrush in "clean" mode [22]. Waxed dental floss (OralB Essential floss waxed, Procter & Gamble Service GmbH, Germany) was used as the gold standard among interdental cleaning devices. The medium-hard toothbrush used (Friscodent, M + C Schiffer GmbH, Germany) is available for purchase at a German supermarket and thus is an often-purchased product.
Assigned products were demonstrated with instructions for use by trained staff at the baseline of the MT phase, to enable effective use and minimize any potential danger of self-harm in all oral areas (incisors, premolars, molars). All members of staff were members of the postgraduate periodontology program at the Polyclinic of Operative Dentistry and Periodontology, University of Cologne, Germany. The control group also received the same demonstration and training on use of the powered devices during reevaluation of the MT phase (reevaluation 1). Therefore, all participants were instructed on the potential use of a combination of powered devices before the observational trial (OT) phase began.
At the reevaluation of the MT phase, all participants received the sonic toothbrush and microdroplet device, and the unsupervised OT phase began. Participants were able to use their oral hygiene products of choice for 1 year. No specifications were made regarding the type or combination of products. At baseline, none of the participants had stated that they used any additional oral hygiene products (especially mouth rinse); therefore, any potential bias due to their ability to reduce plaque was minimized. After 1 year of unsupervised use, long-term clinical outcomes and concordance were investigated during a routine dental appointment by examination of oral hygiene indices and questionnaires (reevaluation 2). All oral examinations were performed at the Polyclinic of Operative Dentistry and Periodontology, University of Cologne, Germany.
The primary outcome was long-term concordance. Concordance and acceptance were evaluated using questionnaires about the patients' oral hygiene routine, which were completed by participants prior to the MT phase (baseline), after the MT phase (reevaluation 1), and at the end of the OT phase (reevaluation 2). Questions were asked regarding dental and interdental cleaning habits, based on frequently asked questions regarding patients' oral hygiene routine at check-up appointments at the Polyclinic of Operative Dentistry and Periodontology, University of Cologne (e.g., "Which type of toothbrush do you use?" or "Do you engage in interdental cleaning?") (Fig. 1). Secondary outcomes were acceptance, the Rustogi-modified Navy Plaque Index (RMNPI) [26], the Quigley-Hein Index (QHI) [27,28], and the papillary bleeding index (PBI) [26][27][28][29].
The clinical trial was approved by the local ethics review board of the University of Cologne, Germany (study number: 17-206) and registered (09.04.2021, DRKS00011619). The study design was in accordance with the Declaration of Helsinki (2001) and was carried out following Good Clinical Practice Guidelines (ICH-GCP). All participants gave written consent prior to baseline appointments after being informed about the contents, aims, and duration of the trial.
Study population
Screening for participants used an announcement poster in the dental clinic of the University Hospital of Cologne, Germany, over a duration of 6 weeks. Participants meeting the following criteria [24] were included: (i) self-reported irregular use of interdental hygiene products (questionnaire data regarding regular oral hygiene routine and QHI > 0); (ii) PBI ≥ 1; (iii) no caries lesions (International Caries Detection and Assessment System (ICDAS) > I 2) or restoration margins proximal to the first premolar (second if the first premolar was removed); (iv) no interdental clinical attachment loss and narrow interdental spaces; (v) interest in participation and written consent. Exclusion criteria were defined as: (i) periodontal disease (Community Periodontal Index (CPI) ≥ 3) or health (CPI 0); (ii) regular use of antiseptic mouth rinses; (iii) smoker (≥ 10 cigarettes/day); (iv) consumption of medication known to affect gingival health (antibiotic, calcium channel blocker, immunosuppressive) in the 3 months prior to the study; (v) dental professionals.
Randomization and allocation concealment
Randomization of all participants into three groups during the MT phase was carried out by the senior investigator (S.H.M.D.), using a random, computer-generated list in sealed envelopes (Sealed Envelope Ltd. 2018, available from https://www.sealedenvelope.com). The calibrated examiners were blinded regarding group allocation and the oral hygiene products used. A member of staff, who was not involved in clinical examinations during the study, carried out the randomization so that allocation concealment could be achieved.
Adherence and patient acceptance after 4 weeks
At baseline, all participants were asked to complete a questionnaire regarding their usual daily oral routine. Questions were set based on a questionnaire used in a previous study at our clinic [24]. To evaluate adherence, participants were asked to keep an oral hygiene diary during the 4-week duration of the MT phase. All diaries were collected at the first recall appointment and checked for completion. After the MT phase, participants were asked to complete a questionnaire regarding self-reported efficacy and acceptance of their assigned oral hygiene cleaning routine. Questions such as "How do you perceive the usage of AirFloss Pro®/dental floss?" (with multiple choice answers) or "Do you wish to continue the usage of your assigned proximal cleaning devices?" were answered by participants. These questions were set based on a questionnaire used in a previous study at our clinic [24].
Concordance with oral hygiene products and routine after 1 year
Concordance with the oral hygiene routine was evaluated using questionnaires at patient appointments 1 year after baseline (reevaluation 2). Patients were asked to name the type of oral hygiene products used (type of toothbrush and interdental care), as well as the frequency of usage (daily and/or weekly). Items of the questionnaire were based on questions regularly asked about the patient's oral hygiene routine during check-up appointments at the Polyclinic of Operative Dentistry and Periodontology, University of Cologne. Examples of questions asked include "Do you clean your interdental spaces?" or "Which type of interdental cleaning device do you prefer?". Similar questions were asked regarding toothbrushing routines (Fig. 1).
Clinical parameters
Oral health indices such as the PBI, RMNPI, and QHI, as well as safety, were evaluated and documented by I.S./D.D. in a case report form. All investigators were members of the postgraduate periodontology program and trained by the senior investigator S.H.M.D. in the section of periodontology. The PBI was documented buccally, mesial and distal of the tested premolar teeth, using a periodontal probe (PCPUNC15, Hu-Friedy Mfg. Co., LLC, Tuttlingen, Germany) [29]. Dental plaque was visualized mesial and distal of the tested premolar tooth, using a plaque-disclosing solution (Mira-2-Ton, Miradent, Hager & Werken GmbH & Co. KG, Germany). Biofilm was evaluated using the QHI and RMNPI in the proximal and gingival areas A/D and F/C. The amount of plaque at each area was documented photographically and in writing [26][27][28].
Safety
Routinely in all clinical studies, safety protocols are mandatory to assess and document potential study-induced harms. In this case, the expected unwanted side effects of using oral homecare devices were gingival lesions, i.e., gingival abrasion [30][31][32][33][34][35]. A case report form was designed to document these lesions if they occurred. These were documented at all oral examination appointments and characterized by localization and extent.
Sample size
Sample size calculation was based on the MT phase. Previous studies regarding changes in plaque indices over time showed an effect size of Cohen's d = 1.41 [36]. Assuming an effect size of 1.0, a power of 95%, and a beta error of 5% when comparing baseline values to measurements after 4 weeks, a sample size of 16 was estimated [37]. This sample size was needed to show that the chosen oral hygiene products were safe and effective to use. During the OT phase, high numbers of dropouts were expected due to the long-term appointments after 1 year. Therefore, subjects were initially recruited over a duration of 6 weeks, resulting in 31 participants.
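The reported figure is easy to check. Assuming a paired (one-sample) t-test with a two-sided alpha of 0.05 (the text states power 95%, i.e., beta = 5%, but does not name the alpha or the software used, so these are our assumptions), a standard power calculation reproduces a required sample size of about 16:

```python
# Re-derivation of the sample-size estimate described above: Cohen's d = 1.0,
# two-sided alpha = 0.05 (assumed), power = 0.95, paired/one-sample design.
# The original paper does not name its calculation tool.
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=1.0, alpha=0.05, power=0.95,
                             alternative='two-sided')
print(f"Required n = {n:.1f}; round up to {math.ceil(n)}")  # ~16, as reported
```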
Statistical analysis
Statistical analysis was carried out at the participant level (unit of analysis), using SPSS Statistics 27.0 software (SPSS Inc., Chicago, IL, USA). Statistical significance was indicated when p < .05 was reached.
Concordance of patients with their daily oral hygiene routine was derived from completed questionnaires and listed in descriptive tables. For all three groups (MT phase), mean values (standard deviations, SD) for PBI, QHI, and RMNPI were calculated. Differences between groups at baseline and recall appointments were investigated using a one-way ANOVA test. Within-group variations of the parameters between baseline, first, and/or second recall appointments were analyzed using the Wilcoxon signed-rank test. Missing values were processed using the last-observation-carried-forward principle.
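For readers who want to reproduce this style of within-group analysis, a minimal sketch is shown below. The original analysis used SPSS 27; the index values here are invented for illustration only.

```python
# Illustrative sketch of the LOCF handling and within-group Wilcoxon
# signed-rank test described above. All values are invented.
import numpy as np
from scipy.stats import wilcoxon

# PBI per participant; np.nan marks dropouts at the 1-year reevaluation.
pbi_baseline = np.array([2.1, 1.8, 2.4, 1.9, 2.6, 2.0])
pbi_reeval1  = np.array([1.2, 1.0, 1.5, 1.1, 1.6, 1.3])
pbi_reeval2  = np.array([0.8, 0.7, np.nan, 0.9, np.nan, 1.0])

# Last observation carried forward: replace missing follow-up values with
# the most recent available measurement (here, reevaluation 1).
locf = np.where(np.isnan(pbi_reeval2), pbi_reeval1, pbi_reeval2)

# Paired within-group comparison, baseline vs. 1-year follow-up.
stat, p = wilcoxon(pbi_baseline, locf)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```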
Results
All 31 participants included in the study finished the MT phase (Table 1). Twenty-seven of these (52% female, mean age 33 (SD 14) years) finished the OT phase (period of recruitment prior to MT phase: 12 April 2021 to 23 May 2021) (Fig. 2). Four participants did not attend the dental appointment after 1 year (pregnancy n = 2, relocation n = 2). Three of these patients returned the questionnaire (via email) regarding their daily oral hygiene routine.
At baseline, 74.2% of participants stated they used interdental cleaning devices less than once a week (Fig. 3). The main reason reported (38.7%) was "too hard to use".
Adherence and patient acceptance after 4 weeks
After the MT phase, all participants in the AirFloss Pro groups used the microdroplet device daily (control group: dental floss 70.0%) and said they would continue to use it after finishing the MT phase (control group: dental floss 80.0%). Overall, 85.7% of patients in the AirFloss Pro groups said they had a "comfortable" feeling while using AirFloss Pro (control group: dental floss 20.0%) (Table 2).
At baseline, 41.9% of participants used a powered toothbrush. After the MT phase, 61.9% of participants in the AirFloss Pro groups thought the experience of using a powered toothbrush was "very comfortable" and 95.2% would continue brushing with a powered toothbrush after the MT phase.
Concordance with oral hygiene products after 1 year (primary outcome)
During their annual check-up after 1 year, 93.3% of participants reported that they performed interdental cleaning on a regular basis (compared to 60.0% at baseline), and 63.3% stated that they cleaned their interdental spaces more than once a week (Table 3).
After 1 year, 53.3% of participants preferred AirFloss Pro and 30.0% used dental floss for their daily oral hygiene (Table 3). Frequency of usage of AirFloss Pro and dental floss was almost similar. The main reason for using AirFloss Pro was a "clean feeling" (33.3%). Reasons against using AirFloss Pro included "other products more effective" (23.3%) and other reasons (23.3%; for example, "nozzle location of device too complicated", "changing habits in oral hygiene not possible").
The percentage of patients using a powered toothbrush increased from 41.9% at baseline to 90.0% after 1 year. The frequency of usage twice daily increased from 71.0% at baseline to 80.0% at 1 year (Table 4).
Safety
No gingival injuries or abrasions were observed at any appointment.
Discussion
The purpose of our study was to evaluate long-term concordance with and acceptance of unsupervised use of powered devices for oral homecare, and the impact on gingival health in patients with papillary bleeding using powered devices in their everyday oral hygiene routine. Our results suggest that oral hygiene indices remained improved over a period of 1 year after providing powered oral homecare devices and oral hygiene training.
The randomized MT phase was scheduled for 4 weeks. As shown in previous studies, a duration of 4 weeks was a suitable period to evaluate short-term changes in patient motivation, as well as clinical bleeding indices and biofilm accumulation [38,39]. According to the guidelines of the American Dental Association, 4 weeks is long enough to evaluate the efficacy of oral hygiene devices such as a microdroplet device and to observe changes in gingival health [40,41]. The OT phase took place over an approximate duration of 1 year. We chose the participants' individual recall appointment to evaluate their actual daily oral routine with minimized disruptive influences such as the Hawthorne effect. In addition, it has been stated that instructions regarding oral hygiene routines could change patient behavior for up to 3 months [32]. As several previous studies lasted over 6 months, we doubled the duration to 1 year to mirror patients' oral homecare routine as precisely as possible [42].
In our investigation, patient acceptance of a microdroplet device was high after 4 weeks, and all participants in the AirFloss Pro groups stated continuous usage after the MT phase. These results reflect a previous evaluation of the use of a microdroplet device for 4 weeks [24]. In our study after 1 year, 93.3% of participants cleaned their interdental spaces and 53.3% used AirFloss Pro. These findings are supported by other studies, where patients rated the use of AirFloss Pro in their daily routine as a positive adjunct in the short term and after 6 months [24,42]. Our results are also supported by a recent mixed-methods study, where patients in focus group discussions reported a lack of motivation or knowledge of usage regarding interdental care products such as dental floss; patients recommended improvement of interdental devices such as floss or interdental soft picks to make the product easier to use and more convenient [43]. In our study, participants perceived greater comfort when using AirFloss Pro filled with Listerine mouth rinse, which may generate an even cleaner feeling that may be caused by the fresh taste. It needs to be considered that Listerine mouth rinse may lead to plaque reduction and reduction of gingival bleeding and thus a reduction of gingivitis [44]. However, our results did not reflect such alterations, as participants who used AirFloss Pro microdroplet devices filled with Listerine were able to reduce gingival bleeding after 4 weeks and 6 months [20,42]. This may be attributed to improved oral homecare, especially a regular interdental cleaning routine with the microdroplet device. The use of microdroplet devices may cause an alteration in the composition of dental plaque, a reduction of biofilm thickness after usage, an alteration in the host's immune response, or stimulation of the gingiva [9]. Furthermore, some participants may have used chemical plaque control in addition to their oral homecare routine, which is also able to reduce dental biofilm and therefore gingivitis [44]. All these mechanisms support the transition of an incipient dysbiosis to a healthy symbiosis [45]. Overall, 90.0% of our participants reported using a powered toothbrush after 1 year. Several other authors have stated that powered toothbrushes are superior to brushing with a manual toothbrush with respect to reduction of plaque and gingivitis [46,47]. Additionally, a long-term comparison of three nationwide, cross-sectional surveys over 17 years showed more caries-free tooth surfaces and more remaining teeth in patients who used a powered toothbrush and interdental care [48]. Another long-term observation found a correlation between use of a powered toothbrush and a reduction in pocket depths and less progression of clinical attachment loss after 11 years [49].
However, none of the previous long-term observations focused on the impact of either the toothbrush or the interdental cleaning aid used in terms of cleaning efficacy. This raises the question of whether use of a powered toothbrush or a microdroplet device alone would result in reduced bleeding and plaque indices. Possible answers may be found by considering the different types of plaque indices evaluated. The QHI covers the entire buccal site of the tooth, representing the ability of a toothbrush to reduce plaque [28]. The RMNPI areas A/D and F/C represent the interproximal marginal gingival space of a tooth, therefore mirroring the efficacy of proximal cleaning actions [26]. Both indices were significantly reduced after 4 weeks, but only in the AirFloss Pro groups using a powered toothbrush and microdroplet device; this indicates sufficient plaque control by both powered devices in their specific areas of the tooth. It should be noted that the ability of dental floss to clean the proximal spaces of premolars, especially in the approximal retraction, is reduced due to their anatomical design, even when interdental spaces are narrow.
After 1 year, at the participants' individual dental check-up, concordance with powered devices was high. The percentage of participants cleaning their interdental spaces had increased to 93.3% (baseline 60.0%), with 53.3% of patients preferring to use the microdroplet device (30.0% dental floss). Twenty percent of participants even stated that they cleaned their interdental areas daily. One possible explanation might be the implementation of adequate brushing and interdental cleaning skills into the patients' everyday dental cleaning routine. As shown before, (repeated) professional dental instruction can lead to an increased understanding and use of the methods instructed [50,51]. Planning actions, such as planning when, where, and how to use the dental hygiene method of choice, can result in increased patient adherence [52]. It should be noted that in some previous studies, adherence was defined only as the daily use of dental floss, in contrast to the definition of concordance [52,53]. Our results may indicate a long-term behavioral change, one of the highest goals in medical treatment, and especially in dentistry, because biofilm is a main risk factor for most oral diseases and can be reduced by an adequate oral homecare routine.
Our investigation has some limitations. The questionnaires evaluating concordance at reevaluation 2 were self-designed, based on questions regularly asked at dental appointments in our clinic. Tisnado et al. evaluated the concordance between medical records and patients' self-reports for multiple medical items [54]. They found a high concordance, and patients were able to report with good sensitivity. In contrast to that study, our participants chose their own preferred combination for oral homecare. It might be expected that reporting of a preferred product combination was high, even though the questionnaires were not validated. A supervised, individual, patient-centered, 4-week motivational phase is hard to implement in everyday dental care because it is time consuming and ties up human resources. This highlights the need for adjustment of prevention concepts in dental settings. For example, professional teaching of oral homecare may be a valuable addition during regular dental check-ups. Another limitation is that our results are not applicable to patients suffering from periodontal disease and therefore loss of papillae and open interdental spaces [1]. Patients who smoked fewer than ten cigarettes per day were eligible to participate in the study and were not distributed equally. Tobacco smoke induces microvascular vasoconstriction and causes fibrosis of the gums through systemic circulation of components of cigarette smoke, as well as local uptake. Such consequences may mask gingivitis indices in the short and long term [55]. Furthermore, the investigation was carried out with dental floss as the control. A local (German) guideline focuses on at-home mechanical biofilm management in the prevention and therapy of gingivitis [56]. Even in patients without clinical attachment loss, interdental brushes are more effective at biofilm reduction than dental floss. Dental floss should only be considered if narrow interdental spaces are present. In future studies, interdental brushes will serve as the control of choice. Moreover, other areas are harder to reach during oral homecare (such as areas with orthodontic retainers or molars), which might make our results less applicable. As stated in recent literature, patient-reported outcomes such as oral wellbeing or willingness-to-pay need to be taken into account when investigating treatments of oral diseases [57]. Until now, most short- and long-term investigations regarding oral hygiene measures have focused on clinical outcomes; recently, patient preferences have been gaining more attention in this area [57]. Actual changes in the daily routine of patients for the prevention of oral diseases can only take place if the barriers to achieving these goals are low or prevention measures are carefully elaborated. In our investigation, we showed how patient behavior can change when patients are provided with powered, convenient oral healthcare products such as a powered toothbrush and a microdroplet device after professional instruction. Prevention of an illness, or treatment at an early stage, is less expensive than treating the actual illness [2]. In particular, treatment of gum diseases such as gingivitis using adequate daily proximal care can prevent the development of periodontitis [1]. Future investigations should be carried out on a wider scope, exploring how professional advice on dental homecare can be combined most efficiently with oral healthcare products.
Conclusion
In this study, an initial 4-week motivational trial phase, which included oral hygiene instructions and individual support, led to improved interdental cleaning and brushing skills and to the implementation of newly acquired habits into the patients' everyday routine. In the long term, given a free choice among the different devices offered, patients with initial gingival bleeding preferred the unsupervised use of powered oral hygiene products over manual devices, including dental floss. This choice resulted in improved oral hygiene indices after 1 year.
Fig. 1 Questionnaire developed based on questions regularly asked regarding the patient's oral hygiene routine during check-up appointments at the Polyclinic of Operative Dentistry and Periodontology, University of Cologne, Germany
Fig. 2 Study flow-chart demonstrating the duration and different phases of the investigation
Fig. 3 Frequency of usage of interdental cleaning devices after the 1-year OT phase compared to baseline
Fig. 4 Papillary bleeding index (PBI), Rustogi Modified Navy Plaque Index (RMNPI), and Quigley-Hein Index (QHI) at baseline, after the MT phase at 4 weeks (reevaluation 1), and after the OT phase at 1 year (reevaluation 2). *p < 0.05; analyzed with ANOVA
Table 3 Preferred interdental cleaning devices after the 1-year OT phase, including the frequency of usage during the last 7 days and 4 weeks (n = 27) | 2024-05-16T06:17:55.234Z | 2024-05-14T00:00:00.000 | {
"year": 2024,
"sha1": "4eb26d191f554ba83729ef63102448bb35ebd8bf",
"oa_license": "CCBY",
"oa_url": "https://bmcoralhealth.biomedcentral.com/counter/pdf/10.1186/s12903-024-04313-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "711c7ff5d757f6c1df81a5d3acd03d7e66e47058",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225238403 | pes2o/s2orc | v3-fos-license | Difficult diagnosis of a neurogenic thoracic outlet syndrome and review of the current literature
Thoracic outlet syndrome (TOS) is an uncommon disorder with some controversies concerning its causes, diagnosis, and treatment, despite years of intense study of hundreds of patients. Thoracic outlet syndrome is characterized by compression of the structures at the thoracic outlet, such as the subclavian artery, the subclavian vein and the brachial plexus. The main causes are the presence of a cervical rib, trauma, scalene muscle hypertrophy and fibrous bands. A cervical rib is a congenital anomaly that originates from enlargement of the transverse process of C7. This anomaly occurs in about 1% of the population but induces symptoms in only about 5% of cases. The symptoms vary according to the compressed structure and include paleness, swelling, edema, numbness and pain. A normal electromyogram does not exclude its presence, as reported in our clinical case. As a result, the diagnosis of this syndrome is very challenging.
INTRODUCTION
Thoracic outlet syndrome (TOS) is an uncommon disorder with some controversies concerning its causes, diagnosis, and treatment, despite years of intense study of hundreds of patients. 1 Thoracic outlet syndrome is characterized by compression of the structures at the thoracic outlet, such as the subclavian artery, the subclavian vein and the brachial plexus. The main causes are the presence of a cervical rib, trauma, scalene muscle hypertrophy and fibrous bands. 2 A cervical rib is a congenital anomaly that originates from enlargement of the transverse process of C7. This anomaly occurs in about 1% of the population but induces symptoms in only about 5% of cases. 3 The symptoms vary according to the compressed structure and include paleness, swelling, edema, numbness and pain. 1 A normal electromyogram does not exclude its presence, as reported in our clinical case. 4 As a result, the diagnosis of this syndrome is very challenging.
CASE REPORT
A 20-year-old woman was referred to our hospital due to diffuse pain in the shoulder and right cervico-scapular region, evolving over 2 years. She had no history of trauma or prior surgery.
She initially attended the shoulder unit consultation, where a shoulder ultrasound was requested, which revealed no alterations, and a cervical X-ray, which revealed the presence of an accessory cervical rib on the right side (Figures 1-3).
In a second consultation, when specifically questioned, she reported sporadic paresthesia in the territory of the C7-T1 nerve roots. On physical examination, it was possible to reproduce these complaints during abduction and external rotation of the right arm (Wright's test). An electromyogram was performed, which revealed no evidence of brachial plexus injury, upper limb neuropathy, or motor cervical radiculopathy.
The CT revealed the presence of a right cervical rib, which was in immediate relation with the inferior trunk of the brachial plexus, but not with the subclavian artery (Figures 4 and 5). Nearly 2 years after the first consultation, given the persistent diffuse pain in the shoulder and right cervico-scapular region, a workup directed at the shoulder and at cervical radiculopathy that found no etiology, and the lack of improvement with conservative treatment, the patient underwent surgical decompression of the lower brachial plexus with partial resection of the cervical rib through a right supraclavicular approach (Figure 6 a-c).
About 24 months after surgery, the patient presented complete resolution of the pain and paresthesia in the C7-T1 territory.
DISCUSSION
Most individuals with neurogenic TOS (nTOS) are young females. According to the literature, there are several possible etiologies for TOS, both congenital and acquired. The most common are scalene hypertrophy, congenital cervical ribs, first rib and clavicle anomalies, and fibrotic bands.
Even so, the presence of a cervical rib is not synonymous with TOS. 5 The prevalence of cervical ribs in the population is 0.5-2%, whereas the prevalence of nTOS is about 1 per million, so the ratio of cervical ribs to nTOS is 5,000-20,000 to 1. Thus, the presence of a cervical rib in an individual with nonspecific symptoms of the upper extremities is probably only an incidental finding. 6 The vast majority of cases of TOS are delayed in diagnosis due to confounding factors such as associated comorbidities, like psychiatric disorders or disorders of the shoulder joint, and they may also be associated with a normal electromyogram, which is not sensitive at an early stage. 4 This case report highlights the importance of this syndrome in the differential diagnosis of cervical pain or radiculopathy, as well as the good results of its surgical treatment, with success rates above 90% being described. 7
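As a quick sanity check on that ratio, the arithmetic can be written out explicitly; this worked example simply takes the quoted prevalences (0.5-2% for cervical ribs, 1 per million for nTOS) at face value:

\[
\frac{P(\text{cervical rib})}{P(\text{nTOS})} = \frac{0.005}{10^{-6}} = 5\,000
\qquad \text{to} \qquad
\frac{0.02}{10^{-6}} = 20\,000
\]

In other words, on these figures at most roughly 1 in 5,000 people with a cervical rib would be expected to develop nTOS, which supports reading an isolated cervical rib as an incidental finding.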
CONCLUSION
Many patients with TOS benefit from surgical treatment to resolve their complaints. However, the difficulties in obtaining a clear diagnosis, as well as the socio-economic characteristics and associated diseases of these patients, can often delay the definitive treatment and thereby compromise the final clinical result. An adequate operative procedure performed by an experienced team is also crucial to success. 8 | 2020-09-03T09:13:25.059Z | 2020-08-26T00:00:00.000 | {
"year": 2020,
"sha1": "71ec2c5c5caf49c3e203ee8a2f3a1e9602ef9b11",
"oa_license": null,
"oa_url": "https://ijoro.org/index.php/ijoro/article/download/1638/961",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "939ef28c118ecfb055841f7a6b0e6eebaa8fb44e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
81392311 | pes2o/s2orc | v3-fos-license | Total plasma sulfide in mild to moderate diastolic heart dysfunction
Background. The early pathophysiological mechanisms of diastolic dysfunction are not understood well. Hydrogen sulfide is an important endogenous gaseous transmitter that can influence heart remodeling. The aim was to determine total plasma sulfide (TPS) levels, as a surrogate marker of hydrogen sulfide, in patients with mild diastolic dysfunction. Methods. Total plasma sulfide and N-terminal pro brain-type natriuretic peptide (NT-proBNP) levels were determined in ambulatory patients with arterial hypertension or diabetes mellitus and echocardiographically mild to moderate diastolic dysfunction. Results. Twenty-four patients were included: nine with normal diastolic function (Grade 0), eight with an impaired relaxation pattern (Grade 1), and seven with a pseudo-normalized pattern (Grade 2). TPS levels were highest in patients with normal diastolic function (Grade 0), and lowest in patients with Grade 2 diastolic dysfunction, with this difference between Grade 0 and Grade 2 showing statistical significance (p = 0.017). NT-proBNP levels showed the reverse behavior, with this difference again showing statistical significance (p = 0.042). Conclusions. Total plasma sulfide levels decrease with worsening of diastolic function from normal to moderate diastolic dysfunction.
INTRODUCTION
Heart failure with preserved ejection fraction is a condition that has only been described in recent years. (1) One of its defining criteria is diastolic dysfunction. It has gained a lot of attention, since its incidence has been rising and there are still no solid guidelines for its treatment or prevention. There remains a huge gap in our understanding of the pathological and pathophysiological processes during the primary and advancing stages of diastolic dysfunction. The key event is myocyte hypertrophy and collagen deposition, which causes problems with diastolic relaxation. Insufficient diastolic relaxation results in higher left ventricular filling pressure, left atrial stretch, and pulmonary congestion. Clinically, all of these manifest as the exercise dyspnea and signs of pulmonary congestion seen in cases of advanced heart failure. (2) Hydrogen sulfide (H2S) is a gas that has the characteristic odor of rotten eggs, and it is generally known for its toxicity. (3) H2S is toxic because it binds to cytochrome c oxidase, and therefore inhibits the mitochondrial respiratory chain. As well as showing toxicity, H2S has an important role in signaling at the cellular level. H2S can modulate vascular tone and neuronal function, and it can also be cytoprotective during ischemia. (4) H2S has recently emerged as an important gaseous transmitter in mammals, along with nitric oxide and carbon monoxide. (5) In mammals, H2S is synthesized from the sulfur-containing amino acid L-cysteine through the activities of either cystathionine-β-synthase or cystathionine-γ-lyase, both of which require vitamin B6 as a cofactor. (5) A further mitochondrial enzyme, 3-mercaptopyruvate sulfurtransferase, has recently been described to also synthesize H2S. (6) In conjunction with cysteine aminotransferase, this enzyme also contributes significantly to the generation of H2S. All three of these enzymes are expressed in smooth muscle and endothelium. Since H2S shows not only complex production but also a complex influence on various tissues, it has been studied extensively in recent years. Although H2S is a gas, it is very short-lived, because as a weak acid it dissociates into HS- and S2- (although S2- is negligible at physiological pH). Here, we will use the term H2S only for the gaseous form; in all other instances, the term sulfide will refer to the combined gas and anions, as the total plasma sulfide (TPS). (7) Due to the rapid turnover of H2S, it is difficult to obtain meaningful measurements under various clinical conditions. We therefore determined TPS, which comprises H2S plus the dissolved and protein-bound sulfide. The aim of the current study was to determine the levels of TPS and N-terminal pro brain-type natriuretic peptide (NT-proBNP) in ambulatory hypertensive or diabetic patients with mild to moderate diastolic dysfunction and no clinical signs of pulmonary congestion.
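To make the speciation convention concrete, the underlying acid-base equilibria can be written out; the pKa values below are approximate figures from the general H2S literature, not from this paper:

\[
\mathrm{H_2S} \rightleftharpoons \mathrm{H^+} + \mathrm{HS^-}, \qquad pK_{a1} \approx 7.0
\]
\[
\mathrm{HS^-} \rightleftharpoons \mathrm{H^+} + \mathrm{S^{2-}}, \qquad pK_{a2} \gtrsim 12
\]

At a plasma pH of about 7.4, the Henderson-Hasselbalch relation then gives roughly a 2-3:1 ratio of HS- to H2S, with S2- present only in traces, which is why a pooled quantity such as TPS is the practical analyte.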
Study population
Twenty-four consecutive patients who were referred for ambulatory echocardiographic examination at Celje General and Teaching Hospital (Celje, Slovenia) between May 2009 and May 2011 and who showed preserved left ventricular systolic ejection fraction were included in this study. Informed consent was obtained from each patient. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki, revised in 2010, as reflected in the prior approval by the Institutional Human Research Committee (N° 117bis/01/10). The study protocol was also approved by the National Ethical Human Research Committee. The exclusion criteria were age <18 years, valvular heart disease, congestive heart failure, liver failure, hypothyroidism or hyperthyroidism, pheochromocytoma, acute infection, malignancy, and psychiatric disorders that limited cooperation. The additional inclusion criteria were systemic arterial hypertension and/or diabetes mellitus type 1 or type 2, documented and treated for ≥5 years. All of the patients had to be in New York Heart Association (NYHA) class I or II heart failure, without signs of pulmonary congestion.
Estimation of diastolic function
Echocardiographic studies were performed (GE Vivid 7 or GE Vivid 6; GE Healthcare, USA), with trans-thoracic echocardiograms completed according to the laboratory protocol. All of the patients underwent a standard echocardiographic study to exclude other abnormalities. Echocardiographic images were read by three blinded investigators (N.G.P., D.K., M.P.) for the re-measurement of all of the relevant parameters. These included ejection fraction, end-diastolic volume (estimated using the Teichholz and Simpson methods), (8) peak early and atrial velocities of mitral inflow, early mitral inflow deceleration time, and septal and lateral mitral annular velocities (e´). Where possible, the mean of each measurement was taken over multiple cardiac cycles.
The diastolic function grading was based on the relevant guidelines. (9) In cases where the parameters were non-congruent, the diastolic dysfunction grade was established as that with the highest number of characteristic parameters, with the assumption of equal weighting. Thus, patients with normal diastolic function were classified as Grade 0. Patients were classified as having mild diastolic dysfunction (Grade 1) according to: mitral early/atrial (E/A) ratio <0.8; deceleration time >200 ms; isovolumic relaxation time ≥100 ms; predominantly systolic pulmonary venous flow (i.e., systolic > diastolic); annular e´ <8 cm/s; and mean E/e´ ratio <8 (septal and lateral). Patients were classified as having moderate diastolic dysfunction (Grade 2) according to: mitral E/A ratio 0.8 to 1.5 (pseudonormal), decreasing by >50% during the Valsalva maneuver; annular e´ <8 cm/s; and mean E/e´ ratio 9 to 12.
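To make the majority-vote rule explicit, here is a minimal Python sketch of the grading logic; the thresholds follow the criteria listed above, but the function name, parameter names, and the equal-weight tally are illustrative assumptions, not the authors' actual software:

def grade_diastolic_function(e_a_ratio, decel_time_ms, ivrt_ms,
                             annular_e_prime_cm_s, mean_e_over_e_prime,
                             valsalva_e_a_drop_pct=None):
    """Tally how many criteria each grade satisfies; return the grade
    with the most matching parameters (equal weighting)."""
    votes = {0: 0, 1: 0, 2: 0}

    # Grade 1 (impaired relaxation) criteria
    if e_a_ratio < 0.8:
        votes[1] += 1
    if decel_time_ms > 200:
        votes[1] += 1
    if ivrt_ms >= 100:
        votes[1] += 1
    if annular_e_prime_cm_s < 8 and mean_e_over_e_prime < 8:
        votes[1] += 1

    # Grade 2 (pseudonormal) criteria
    if 0.8 <= e_a_ratio <= 1.5:
        votes[2] += 1
    if valsalva_e_a_drop_pct is not None and valsalva_e_a_drop_pct > 50:
        votes[2] += 1
    if annular_e_prime_cm_s < 8 and 9 <= mean_e_over_e_prime <= 12:
        votes[2] += 1

    # Grade 0 (normal) if neither abnormal pattern is present
    if e_a_ratio >= 0.8 and annular_e_prime_cm_s >= 8:
        votes[0] += 1

    return max(votes, key=votes.get)

# Example: E/A 0.7, DT 220 ms, IVRT 105 ms, e' 7 cm/s, E/e' 7 -> Grade 1
print(grade_diastolic_function(0.7, 220, 105, 7, 7))

Ties between grades are resolved arbitrarily in this sketch; in the study, non-congruent cases were presumably adjudicated by the blinded readers.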
Total plasma sulfide and NT-proBNP measurements
Total plasma sulfide and NT-proBNP were measured in blood samples taken from the patients at the time of their echocardiographic study. TPS was measured by a modified spectrophotometric method, (10) first described in 1949 by Fogo. (11) This method was further refined in 1965 by Siegel, (12) and has since been used in other studies. (13-17) Immediately after collection, blood samples were briefly centrifuged at 3000 rpm for 10 min at 4°C to obtain the plasma. Two hundred microliters of each plasma sample was mixed with 100 μL of a pre-prepared solution of 10% (v/v) trichloroacetic acid and 60 μL of 1% (w/v) zinc acetate, to trap any dissolved H2S. The mixture was then frozen at -20°C until further analysis. Upon defrosting of the samples, 40 µL of 20 µM N,N-dimethyl-p-phenylenediamine sulfate prepared in 7.2 M HCl and 40 µL of 30 µM FeCl3 prepared in 1.2 M HCl were added. After vortexing, the samples were incubated for 20 min at room temperature to allow the color reaction to develop, and then centrifuged at 9000 rpm for 5 min at 4°C to remove the precipitate. The absorbance at 670 nm was then determined spectrophotometrically (Epoch microplate spectrophotometer, BioTek, VT, USA) for the resulting blue-colored supernatants. The TPS concentrations of the samples were then calculated from the absorbance calibration curve of known Na2S concentrations. To ensure accurate measurements, all of the samples were analyzed in triplicate, with the data expressed as median [range]. The NT-proBNP levels were determined using an electrochemiluminescence immunoassay (Elecsys; Roche Diagnostics, Switzerland) according to the manufacturer's protocol.
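The final back-calculation from absorbance to concentration is a standard linear calibration. The Python sketch below assumes a straight-line fit through Na2S standards and triplicate sample readings; the standard concentrations and absorbances are entirely made-up illustrative numbers, not the study's data:

import numpy as np

# Hypothetical Na2S calibration standards (µM) and their A670 readings
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_abs  = np.array([0.02, 0.10, 0.19, 0.37, 0.74])

# Least-squares line through the standards: A = slope * C + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def tps_from_absorbance(a670_triplicate):
    """Invert the calibration line and average the triplicate readings."""
    conc = (np.asarray(a670_triplicate) - intercept) / slope
    return conc.mean()

# Example: one sample measured in triplicate
print(f"TPS = {tps_from_absorbance([0.28, 0.30, 0.29]):.1f} µM")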
Statistics
The data are presented as means ± standard deviations, or as medians [range, minimum-maximum]. Categorical data were compared using χ2 tests. Wilcoxon rank tests were used to compare data within groups, and Kruskal-Wallis tests to compare data between groups. For all analyses, p <0.05 was regarded as statistically significant. The data were also tested for rank correlations between NT-proBNP and TPS. The database management and all of the statistical analyses were performed using the MedCalc 12 software (MedCalc, Belgium).

RESULTS

The data for the TPS and NT-proBNP levels according to the diastolic function of the different patient groups are presented in Table 3. The TPS levels decreased from Grade 0 to Grade 2, and were thus highest in patients with normal diastolic function (Grade 0) and lowest in patients with Grade 2 diastolic dysfunction, with statistical significance reached between Grade 0 and Grade 2 (p = 0.017) (Figure 1). The difference in the TPS levels between Grade 1 and Grade 2 did not reach statistical significance (p = 0.450). Conversely, the NT-proBNP levels increased from patients with normal diastolic function (Grade 0) to Grade 2 diastolic dysfunction (Figure 2), where statistical significance was again reached between Grade 0 and Grade 2 (p = 0.042). There were no statistically significant correlations found between TPS and NT-proBNP.
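For readers reproducing this style of analysis outside MedCalc, the named tests have standard SciPy equivalents; a minimal sketch with placeholder data follows (all values are invented, and the pairwise Mann-Whitney U test is shown as one plausible between-group follow-up comparison, not necessarily the authors' exact procedure):

from scipy import stats

# Placeholder TPS values (µM) per diastolic function grade
grade0 = [38.1, 41.5, 35.2, 44.0, 39.3, 40.8, 37.6, 42.2, 36.9]
grade1 = [33.0, 35.4, 30.2, 36.8, 31.5, 34.1, 29.9, 32.7]
grade2 = [27.4, 25.1, 29.0, 24.3, 28.2, 26.5, 23.9]

# Between-group comparison (nonparametric one-way test)
h, p_kw = stats.kruskal(grade0, grade1, grade2)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")

# Pairwise Grade 0 vs Grade 2 (Mann-Whitney U for independent groups)
u, p_mw = stats.mannwhitneyu(grade0, grade2)
print(f"Grade 0 vs Grade 2: U = {u:.1f}, p = {p_mw:.3f}")

# Rank correlation between TPS and NT-proBNP (paired per patient)
tps      = grade0 + grade1 + grade2
ntprobnp = [110, 95, 130, 88, 105, 99, 120, 92, 101,
            160, 150, 175, 140, 168, 155, 180, 149,
            260, 290, 240, 310, 255, 275, 320]
rho, p_rho = stats.spearmanr(tps, ntprobnp)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")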
DISCUSSION
The present study shows lower TPS levels in the plasma of patients with moderate diastolic dysfunction compared to those with normal diastolic function. Conversely, the NT-proBNP levels increased with increasing severity of diastolic dysfunction. Diastolic dysfunction is a pathophysiological concept that is defined by decreased relaxation of the left ventricular myocardium during diastole. It is often the first sign of ongoing pathological processes in the myocardium. Although the pathological mechanisms behind diastolic dysfunction are not completely understood, it is known to result in the accumulation of various proteins in the extracellular matrix, and to promote fibrosis. (18) In heart failure, the plasma levels of NT-proBNP are known to rise with worsening of the clinical signs of pulmonary congestion, NYHA class, and grade of diastolic dysfunction. (19) The echocardiographically defined grade of diastolic dysfunction correlates with invasively measured increased left-ventricular wall stress. (20) Higher plasma NT-proBNP levels correlate with higher ventricular wall tension and a higher grade of diastolic dysfunction. (21) NT-proBNP transcription and secretion are activated by left ventricular longitudinal strain. (22) The present study thus confirms significantly higher plasma NT-proBNP levels in patients with Grade 2 diastolic dysfunction, compared to patients with normal diastolic function. (23) This behavior was expected according to previous studies with NT-proBNP, (24) and is thus confirmed by our investigation. Hydrogen sulfide is an important gaseous transmitter that modulates vasodilatory effects in the body. (25) It also protects the endothelium through decreased oxidative stress, (26) inhibition of inflammation, (27) and activation of serine phosphorylation of endothelial nitric oxide synthase. (28) All of these are well-known mechanisms that promote normal endothelial function. Failure of these mechanisms can lead to endothelial dysfunction, which can cause atherosclerosis and arterial hypertension. Myocardial hypertrophy is a further consequence of endothelial dysfunction. This occurs partly as a reaction to the elevated afterload, and probably partly due to myocardial microcirculation dysfunction, myocyte remodeling, changes in the cellular matrix, and fibrosis. (29) H2S has been shown to intervene in the myocardial fibrosis pathway in hypertensive rats, although the precise mechanism has not yet been defined. (30) As myocardial hypertrophy and fibrosis are hallmarks of hypertensive heart failure and of the echocardiographic signs of diastolic dysfunction, we feel that our data fit well into the H2S puzzle. H2S production is down-regulated in hypertrophic and fibrotic myocardium, so we would assume that the lower levels of TPS seen in patients with arterial hypertension and diabetes mellitus represent an expression of this down-regulation. As H2S is a gas, this might cause difficulties for its measurement under various clinical conditions. A recent report has also confirmed that H2S is short-lived, and can even be undetectable in normal physiological states. (37) Several different methods have been used to measure H2S and/or sulfide concentrations in biological systems, including: headspace gas analysis; derivatization methods, such as pentafluorobenzyl bromide or N,N-dimethyl-p-phenylenediamine to form methylene blue; spectrophotometry; a monobromobimane-based assay; and direct measurements in solution with a silver sulfide or polarographic sensor. Due to this wide variety of experimental methods used,
highly variable data have been obtained regarding the absolute concentrations of sulfide in the blood and tissues. Thus, there appears to be no general consensus in the field as to which measurement(s) correctly define(s) the 'biologically available H2S/sulfide'. (38) In the present study, we used a method to determine the TPS levels (i.e., H2S, dissolved sulfide, and acid-labile sulfide), to estimate this larger pool of sulfur molecules. The main factor that might limit the interpretation of our findings is the low number of patients enrolled, which is partly attributable to recruitment from a single secondary heart failure clinic, as well as to the frequently newly diagnosed diabetes mellitus at baseline. Our data thus require confirmation in larger studies. Furthermore, studies are needed to evaluate the prognostic value of serial TPS measurements, and the effects of H2S therapy.
In conclusion, our study revealed significantly lower TPS levels and higher plasma NT-proBNP levels in patients with Grade 2 diastolic dysfunction compared to patients with normal diastolic function.
Figure 1. Patient total plasma sulfide levels according to diastolic function.
Figure 2. Patient plasma NT-proBNP levels according to diastolic function.
Twenty-four consecutive patients were included in this study. Their demographic and clinical characteristics are presented in Table 1. Echocardiography data are presented in Table 2. Nine patients had normal diastolic function (Grade 0), eight patients had Grade 1 diastolic dysfunction, and seven patients had Grade 2 diastolic dysfunction.
Table 1. Demographic and clinical characteristics of the patients included in this study.
Table 2. Echocardiographic data of the patients included in this study.
Table 3. Patient total plasma sulfide and NT-proBNP levels according to diastolic function. | 2019-03-03T18:06:34.505Z | 2018-10-24T00:00:00.000 | {
"year": 2018,
"sha1": "03f59b5a6120f36e540fa920e615e9ed57726561",
"oa_license": "CCBY",
"oa_url": "http://www.signavitae.com/wp-content/uploads/2018/10/SIGNA-VITAE-2018-142-35-40.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "03f59b5a6120f36e540fa920e615e9ed57726561",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |