id: 244223022 | source: pes2o/s2orc | license: v3-fos-license
Immunomodulation of Domestic Animals Using Conventional Methods and Panchgavya
With the increasing prevalence of disease, particularly newly emerging diseases, finding a cure has become an elusive pursuit, and the world must also contend with rising drug resistance among pathogens. Survival therefore depends not only on combating disease with expensive resources but, increasingly, on building immunity. This holds true for the livestock sector as well. A farmer's economy depends largely on the production performance of his livestock, and a major share of his earnings is spent on feed, fodder, feed additives and other production-enhancing drugs; a single sick animal can upset the entire planning of a small farmer. The main objective of this article is therefore to focus on herbal approaches that could aid the immunomodulation of livestock, conserving both the time and the money of farmers. Many villagers in India draw on inherited knowledge to combat disease and enhance the productivity of their livestock. Panchgavya, likewise, has emerged as a potent remedy against bodily ailments in humans. Research products derived from algae, used as immunomodulators and production enhancers, are now available in the market as a boon, but much remains to be explored. Relying entirely on drugs for relief causes losses in animal health, productivity and the farmer's economy alike; prevention through immunomodulators is therefore the best weapon for keeping our animals healthy and our farmers prosperous.
INTRODUCTION
The use of Tulsi (Ocimum tenuiflorum), Giloi (Tinospora cordifolia), Turmeric (Curcuma longa), Ginger (Zingiber officinale), Neem (Azadirachta indica), Ashwagandha (Withania somnifera), Aloe vera (Aloe barbadensis miller) and many other herbal plants in daily life has been passed down from generation to generation. This is the precious gift of our Ayurveda. Rather than focusing only on curing disease, Ayurveda believes in taking preventive measures to avoid its occurrence in the first place. A famous Sanskrit sloka clearly depicts the principles of Ayurveda along with its main motto.
SWASTHASYA SWASTHYA RAKSHANAM, ATURASYA VIKARA PRASHAMANAM CHA: Ayurveda aims at the protection and maintenance of the good health of the healthy, and the elimination or control of the ailments and health disorders of the unhealthy. Modulation of the immune response is now recognized as an alternative to conventional chemotherapy for various ailments, especially those related to the immune system; for example, in organ transplantation or autoimmune disorders, immunosuppression is needed to avoid complications. "Immunomodulation" combines two parent words and simply means modulation of immunity. It covers both immunostimulation (improving the host's defensive mechanism by stimulating the immune response) and immunosuppression (subduing the immune response) by using certain groups of biological and synthetic compounds. In this modern scientific era, immunomodulatory compounds of plant origin are used to evoke a non-specific immune response against pathogens. Because these immunomodulators are mainly plant derived, the chances of side effects are minimal, and Ayurvedists have been practising these concepts for centuries. In fact, one therapeutic strategy in Ayurvedic medicine is to enhance the body's overall natural resistance to disease-causing agents rather than neutralizing them directly. This is the basic difference between the chemotherapy-focused therapeutic approach of today's modern science and that of Ayurveda. Overall, this Ayurvedic approach is a boon for our industrializing society, with its many newly emerging diseases. We can relate this to the situation our country is currently facing: the havoc caused by the SARS-CoV-2 virus is dreadful, mainly for those with poor immunity. We do not yet have any vaccine, so the only measure that can protect us is the use of immune boosters. Our honorable Prime Minister has also said that "Prevention is the only vaccine against COVID", and Ayurveda is an easy, affordable and effective way to boost immunity. This pandemic has shown everyone the importance of Ayurveda.
ROLE OF THE MEDICINAL PLANTS ON INNATE AND ACQUIRED IMMUNITY
The medicinal plants contain various secondary metabolites that affect our immune system. Natural killer (NK) cells, NK-T cells, T cells, macrophages, granulocytes (neutrophils, eosinophils, basophils) and dendritic cells are the components of innate immunity involved in immunomodulation. Chlorophytum borivilianum root extract is an effective immunomodulator that not only potentiates the non-specific immune response but also improves humoral and cell-mediated immunity; it could therefore be used during infection to enhance the immunological response against foreign particles or antigens, or to boost the defensive response under normal circumstances. Another example is the ethanolic and aqueous extracts of Picrorhiza kurroa, which act at various levels of the immune response, such as the release of mediators of hypersensitivity reactions, tissue responses to these mediators in target organs, and antibody production [1]. The alkaloids, flavonoids, terpenoids, polysaccharides, lactones and glycosides present in plant extracts also contribute to these immunomodulatory properties. Immunomodulation does not affect microbes directly; these agents therefore have an advantage over chemotherapeutic agents in that there is little chance of resistance emerging against immunomodulators of natural origin.
Panchgavya therapy (COWPATHY): a powerful tool of ayurveda
The Panchgavya principle of Ayurveda constitutes a concoction of five products of the cow, namely urine, milk, curd, ghee and dung, as its main ingredients; hence the name PANCHGAVYA.
1. In research on cow's urine distillate (Kamdhenu ark), scientists found an increase in the phagocytic activity of macrophages and in the secretion of interleukin-1 and IL-2. The distillate also acts as a bioenhancer and is known for its synergistic properties with antibiotics; it is therefore used as a prophylactic and therapeutic tool for livestock and poultry as well as for humans.
2. Cow's ghee has immunomodulatory properties. It improves cell rejuvenation and boosts the body's healing processes, and it is a rich source of butyrate, a short-chain fatty acid that helps keep the digestive system healthy and improves immunity.
4. Cow's milk can be considered a storehouse of nutrients with antibacterial and antiviral properties. Casein (the milk protein) acts as an antiviral and immunoregulatory factor: it regulates innate immunity both by up-regulation, to enhance virus killing, and by down-regulation, to reduce detrimental conditions such as sepsis. It also activates B- and T-cell-mediated functions, thus linking innate immunity with adaptive immunity.
5. Indian cow's dung possesses superior antimicrobial activity and is therefore used to formulate drugs for several diseases [2].
Immunomodulation in livestock by ayurveda
Traditionally, herbal remedies have been used all over the country. Herbs such as Tulsi, Giloi, Aloe vera and turmeric are known to possess immunomodulatory properties. Although their use in livestock practice is currently limited, they are still employed in many rural settings.
1. One example is the use of medicine balls by a few communities in Tamil Nadu. These balls are composed of three medicinal plants, Veldt grape or Hadjod (Cissus quadrangularis), Aloe vera and Pergularia or Trellis-vine (Pergularia daemia), and are fed to livestock to boost their immunity [3].
2. Curcumin, the active principle in Curcuma longa (turmeric), has been found to increase serum levels of IgG and IgM and hence is known to have immunomodulatory action in animals [4].
3. Withania somnifera (Ashwagandha) and Ocimum tenuiflorum (Tulsi), when fed orally to animals alone or in combination, enhance the WBC count and the peripheral blood monocyte count respectively, thereby helping to fight infections [5,6].
4. Supplementation of Tinospora cordifolia (Giloi) in the diet of peripartum cows is known to increase the total leukocyte count and the neutrophil-to-lymphocyte ratio, preventing postpartum uterine infections [7].
5. Drenching with the juice of Morinda citrifolia (Noni fruit) enhanced the activation of CD4+ and CD8+ T cells in neonatal calves, which may stimulate the maturation of their immune system [8].
6. Aloe vera, as an additive to livestock feed, has great potential for improving nutrient utilization, intestinal health and immune response [9] .
Although commercial preparations of the above-mentioned plants are available in the market, the adoption of Ayurveda for livestock is still in its infancy; nevertheless, there is wide scope for this alternative system of medicine in veterinary practice. Macroalgal extracts have likewise been reported to modulate immune mediators such as IL-8 and TNFα [11]. Algimun is a combination of two biologically active macroalgal extracts: MSP IMMUNITY, a green algal extract that reinforces innate and adaptive immune responses, and MSP BARRIER, a red algal extract that enhances the barrier function of the intestinal mucosa. This product can be used as a feed additive for all animals to enhance their disease resistance and performance. Algimun can also be used to uplift the immune system of a pregnant sow and increases lactogenic immunity transfer, overall reducing our dependence on antibiotics [12].
Aleta™: a product made from Euglena gracilis that contains a high level of linear beta-glucan. It provides consistent bioavailability of linear beta-1,3-glucan, which improves gut health, acts as an anti-inflammatory agent, improves the animal's production performance, raises its immunity and increases vaccination efficacy [13]. Many microalgae are also used as active ingredients in pharmacological immunity boosters [14,15]. These products not only enhance immunity but also reduce the need for antimicrobial drugs by strengthening the body's disease resistance, thereby helping to combat the antibiotic resistance shown by pathogenic microbes.
CONCLUSION
This is the age of science and technology, where revolutionary technology has become an essential part of our daily lifestyle. No doubt, today almost every disease has its remedy, but in many cases the cure is worse than the disease, and many diseases still have no cure at all, so prevention becomes the best remedy. Overall, if our experienced researchers combine their knowledge with modern technologies, they can deliver advanced, high-quality research in the veterinary field, with the foresight of sustaining life without inflicting pain.
Remember, it's easier to stop something happening at its first stage than to repair the damage after it has happened. Hence "Better be Safe than Sorry".
added: 2021-10-19T15:28:43.691Z | created: 2021-09-10T00:00:00.000 | metadata:
{
"year": 2021,
"sha1": "1d8376803ecb2494b049288abf58ff55430b59a2",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/10-9-2021/Rinkal%20Sundriyal,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8a2979f2cb8502563dea8b9a05c7ce7c19e4c0cf",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
}
id: 30022228 | source: pes2o/s2orc | license: v3-fos-license
T-cell Ubiquitin Ligand Affects Cell Death through a Functional Interaction with Apoptosis-inducing Factor, a Key Factor of Caspase-independent Apoptosis*
The lymphoid protein T-cell ubiquitin ligand (TULA)/suppressor of T-cell receptor signaling (Sts)-2 is associated with c-Cbl and ubiquitylated proteins and has been implicated in the regulation of signaling mediated by protein-tyrosine kinases. The results presented in this report indicate that TULA facilitates T-cell apoptosis independent of either T-cell receptor/CD3-mediated signaling or caspase activity. Mass spectrometry-based analysis of protein-protein interactions of TULA demonstrates that TULA binds to the apoptosis-inducing protein AIF, which has previously been shown to function as a key factor of caspase-independent apoptosis. Using RNA interference, we demonstrate that AIF is essential for the apoptotic effect of TULA. Analysis of the subcellular localization of TULA and AIF together with the functional analysis of TULA mutants is consistent with the idea that TULA enhances the apoptotic effect of AIF by facilitating the interactions of AIF with its apoptotic co-factors, which remain to be identified. Overall, our results shed new light on the biological functions of TULA, a recently discovered protein, describing its role as one of very few known functional interactors of AIF.
We recently identified TULA among multiple proteins that co-purified with c-Cbl from T-lymphoblastoid cells (1). TULA contains an N-terminal UBA domain, a centrally positioned SH3 domain, and a region of homology to phosphoglyceromutases, which was initially termed HCD (Fig. 1) (1, 2). TULA binds to c-Cbl through its SH3 domain and to ubiquitin and ubiquitylated proteins through its UBA domain (1,3). Dimerization of TULA through its phosphoglyceromutase domain has also been shown (3). Analysis of cell and tissue expression of TULA demonstrates that this protein is expressed primarily in T and B lymphocytes and is localized both in the cytoplasm and the nucleus (1,4).
A mouse orthologue of TULA (Sts-2) was recently identified (4), as was a second member of the family, Sts-1 (5). Unlike TULA, Sts-1 is expressed ubiquitously (4,5). (In this report we will use the term TULA for consistency.) TULA has been implicated in the regulation of cell signaling mediated by protein-tyrosine kinases. On the one hand, TULA was reported to increase activity of receptor protein-tyrosine kinases by inhibiting c-Cbl-driven down-regulation of their activated forms. This appears to be mediated by preventing interactions between ubiquitylated forms of activated proteintyrosine kinases and proteins recruiting them to the degradation pathway and, possibly, by decreasing the level of c-Cbl (1,3). On the other, the lack of both proteins of the TULA/Sts family resulted in hyper-reactivity of T lymphocytes correlated with an increase in the activity of Zap-70, the molecular basis of which remained unclear (4). These results implied that the effect of TULA on protein-tyrosine kinases might not be the only mechanism through which TULA exerts its biological effect. Indeed, the presence in TULA of multiple functional domains and extensive stretches of amino acid sequences with unknown functions suggested that TULA might exert effects unrelated to either c-Cbl or protein-tyrosine kinases.
In an effort to discover novel functions of TULA, we purified proteins that interact with TULA and identified among them apoptosis-inducing factor (AIF). AIF is a key factor of caspase-independent apoptosis (6-8). In the absence of cellular stress signals, AIF is localized to the internal mitochondrial membrane, where it functions as a FAD-dependent NADH oxidase, which is required for normal oxidative phosphorylation (9) and maintenance of mitochondrial structure (10). Under conditions inducing apoptosis, AIF is released from mitochondria (11-14) and translocated to the nucleus, where it induces caspase-independent apoptotic events through binding to DNA (15). These two functions of AIF are mediated by distinct structural domains (15,16) and can be dissociated (6,10,17).
Overall, the molecular mechanism of the apoptotic effect of AIF remains poorly understood, and in particular, few functional interaction partners of AIF have been identified (18-21). Our work, presented here, demonstrates that TULA and AIF are interaction partners and establishes a functional link between them in inducing caspase-independent apoptosis. These results shed new light on the mechanism of the apoptotic effect of AIF and reveal a novel biological function of TULA.
EXPERIMENTAL PROCEDURES
DNA Constructs and Mutagenesis-cDNA encoding the full-length TULA or its N-terminal half (TULA-N1/2) was subcloned into the pFLAG 5a vector (Sigma) using the Advantage-Hf2 polymerase (Clontech). The forward primer (5′-CAGGATATCATGGCAGCGGGGGAG-3′) annealed to nucleotides at the N-terminal end of TULA and included a unique EcoRV restriction site. The reverse primers (5′-TAGGGTACCATCCGTGTAGTTTTCC-3′ and 5′-TAGGGTACCGTTGCCTGAGATCCAGTT-3′) annealed to nucleotides 893 to 908 (TULA-N1/2) or 1863 to 1880 (full-length TULA) within the TULA short (1) protein sequence and included a unique KpnI restriction site. These restriction sites were included to create compatible ends for ligating the fragments into the pFLAG 5a vector. The obtained constructs were confirmed by sequencing.
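As a quick sanity check on this cloning design (not part of the original protocol), the primer sequences quoted above can be scanned for the stated recognition sequences; the standard EcoRV (GATATC) and KpnI (GGTACC) sites are the only assumed facts in the sketch below.

```python
# Sketch: confirm that each primer contains the restriction site the text
# claims it carries. Sequences are copied verbatim from the methods.
SITES = {"EcoRV": "GATATC", "KpnI": "GGTACC"}

PRIMERS = {
    "forward":           "CAGGATATCATGGCAGCGGGGGAG",
    "reverse_TULA_N1_2": "TAGGGTACCATCCGTGTAGTTTTCC",
    "reverse_full_TULA": "TAGGGTACCGTTGCCTGAGATCCAGTT",
}

for name, seq in PRIMERS.items():
    hits = [enzyme for enzyme, site in SITES.items() if site in seq]
    print(f"{name}: {', '.join(hits) or 'no site found'}")
# Expected: forward carries EcoRV; both reverse primers carry KpnI.
```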
To introduce mutations, two synthetic oligonucleotides complementary to the opposite strands of double-stranded DNA containing the sequence to be mutated were designed to contain 15-18 nucleotides on either side of the mutation site. The oligonucleotides were gel purified (IDT Technologies, Coralville, IA). The mutagenesis reactions were performed using the QuikChange site-directed mutagenesis kit according to the manufacturer's recommendations (Stratagene, La Jolla, CA).
Cells-HEK293T and HeLa cells were cultured in Dulbecco's modified Eagle's medium supplemented with 2 mM L-glutamine, 100 IU/ml penicillin, 100 µg/ml streptomycin and 10% fetal bovine serum (FBS) (complete medium). HEK293T cells were plated 24 h before transfection so as to be 80% confluent on the day of transfection in antibiotic-free medium. Purified plasmid DNA was transfected into HEK293T cells (10-20 µg per 2 × 10⁶ cells) using Lipofectamine 2000 (Invitrogen) according to the manufacturer's recommendations. After a total of 48 h, transfected cells were harvested and washed with phosphate-buffered saline (PBS). Cells were lysed in CelLytic buffer (Sigma) for 15 min at room temperature, and cell debris was removed by centrifugation. HeLa cells were transfected in the same fashion, but using Lipofectin (Invitrogen) or FuGENE 6 (Roche Applied Science).
Jurkat tag cells were cultured in RPMI 1640 supplemented with 20 mM HEPES, 2 mM L-glutamine, 100 IU/ml penicillin, 100 µg/ml streptomycin, and 10% FBS (complete medium). The cells were grown in antibiotic-free medium for 24 h prior to electroporation. Cells were centrifuged and resuspended at a final density of 2 × 10⁷ cells/ml in antibiotic-free medium. DNA (10 µg) was added to a 4-mm cuvette followed by addition of 1 × 10⁷ cells in 500 µl of medium. The mixture was pulsed at 310 V for 10 ms in an electroporator (ECM 830, BTX, Holliston, MA). After electroporation, cells were cultured in complete medium for 48 h. The efficiency of electroporation was ~70%.
In several experiments Jurkat tag cells were transfected using DMRIE-C (3 µg of DNA per 5 × 10⁶ cells) according to the manufacturer's recommendations. Because the efficiency of DMRIE-C-mediated transfection did not exceed 10%, a GFP-encoding expression plasmid (pEGFP-C2, Clontech) was cotransfected in each sample at a ratio of 1:15 to the total DNA, and only GFP+ cells were analyzed using flow cytometry. Stable Jurkat cells with a reduced TULA expression level and the corresponding control cells were generated using the shRNA-encoding or empty control lentiviral vector (1).
Z-VAD-fmk and Z-IETD-fmk (Biomol, Plymouth Meeting, PA) and camptothecin and etoposide (Sigma) were added to final concentrations of 100, 4, 5, and 10 µM, respectively. Growth factor withdrawal of Jurkat tag cells was carried out in medium supplemented with 0.5% FBS. For anti-CD3 stimulation, wells of a 24-well plate were pre-coated with the mouse monoclonal antibody OKT3 at 10 µg/ml in PBS overnight at 4°C.
Isolation of TULA-associated Proteins-1-3 mg of total protein from FLAG-TULA-expressing or vector-transfected HEK293T cells was incubated with 20 µl of anti-FLAG M2 affinity gel (Sigma) at 4°C for 4 h. The beads were washed three times with lysis buffer, and anti-FLAG-bound proteins were eluted from the beads with 0.1 M glycine (pH 3). Proteins eluted from the anti-FLAG beads were separated on a one-dimensional BisTris minigel and stained with SimplyBlue Coomassie (Invitrogen). Each gel lane was divided and cut into 10 equal-sized gel slices. Proteins contained within each slice were equilibrated in 100 mM ammonium bicarbonate and reduced, alkylated, and digested with trypsin as previously described (22). One-tenth of each unfractionated tryptic digest was analyzed by LC-ES MS/MS using a micro-column (Zorbax C18, 75 µm × 12 cm) reverse-phase HPLC interfaced with an Agilent LC-MSD Ion Trap MS. ES MS/MS-based sequencing was performed on-line in a data-dependent manner, and two tandem mass spectra were taken per survey scan as peptides eluted from the HPLC (23). Uninterpreted mass spectra from each of the 10 individual liquid chromatography-tandem mass spectrometry (LC-MS/MS) runs were collated and searched as a single file against a human nonredundant protein database using the Mascot search engine (Matrix Science) (24). Mass tolerances were 2.0 Da on MS data and 0.8 Da on MS/MS data.
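The quoted mass tolerances (2.0 Da on MS data, 0.8 Da on MS/MS data) boil down to a simple absolute-difference filter applied during the database search. A minimal sketch of that test follows; the masses used are hypothetical placeholders, not values from this study.

```python
# Sketch: absolute mass-tolerance test of the kind applied by search
# engines such as Mascot when matching observed to theoretical masses.
def within_tolerance(observed_da: float, theoretical_da: float, tol_da: float) -> bool:
    """Accept a match if the mass error does not exceed the tolerance."""
    return abs(observed_da - theoretical_da) <= tol_da

# Hypothetical masses, for illustration only.
print(within_tolerance(1479.8, 1480.7, 2.0))  # True: precursor within 2.0 Da
print(within_tolerance(684.9, 683.4, 0.8))    # False: fragment off by 1.5 Da
```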
Immunoprecipitation and Immunoblotting-1-3 mg of total protein from whole cell lysate was immunoprecipitated with 1-3 µg of anti-TULA-N (GETQLYAKVSNKLKSRSSPS) (Proteintech Group Inc., Chicago, IL) in a total volume of 1 ml as described previously (1). Proteins were then separated using SDS-PAGE, transferred to nitrocellulose, and probed with 1:1000 anti-FLAG M2 (Sigma), 1:1000 anti-TULA-N, or 1:500 anti-AIF (Santa Cruz Biotechnology, Santa Cruz, CA). After the blots were washed, the appropriate peroxidase-conjugated secondary antibody was added, and proteins were visualized using the ECL Plus kit and the Typhoon fluorescent imager (GE Healthcare).
Annexin-V Staining-Electroporated Jurkat tag cells were washed and resuspended in 100 µl of annexin-V binding buffer (10 mM HEPES, 140 mM NaCl, 2.5 mM CaCl₂, pH 7.4). Then 5 µl of 0.1 mg/ml propidium iodide and 5 µl of annexin-V allophycocyanin conjugate (Molecular Probes, Eugene, OR) were added to the cells. After the cells were incubated for 15 min at room temperature, 400 µl of annexin binding buffer was added, and cells were analyzed using flow cytometry. DMRIE-C-transfected Jurkat tag cells and TULA-knockdown Jurkat cells were analyzed using an annexin V-Cy5 apoptosis kit from BioVision (Mountain View, CA).
Transfection of Small Interfering RNAs (siRNAs)-To deplete endogenous AIF and simultaneously overexpress TULA, a 21-mer annealed AIF-targeting siRNA and a scrambled control (Ambion, Austin, TX) were resuspended in water at a final concentration of 100 µM. The sense sequence of the AIF-specific siRNA corresponded to nucleotides 1540-1558 of the AIF sequence. (Several AIF-specific siRNAs were tested in pilot experiments, and this one was selected as the most efficient.) siRNA was electroporated into Jurkat tag cells (100 nM siRNA and 1 × 10⁵ cells in 75 µl of Opti-MEM (Invitrogen)) using 1-mm cuvettes in the BTX electroporator at 150 V for 100 µs. To simultaneously electroporate siRNA and DNA, FLAG-TULA expression or control plasmid (2 µg) was added to the siRNA. After recovery in complete medium for 48 h, transfected cells were either cultured in complete RPMI 1640 medium or subjected to serum deprivation in RPMI 1640 supplemented with 0.5% FBS for an additional 24 h. At that time overall cell death was measured using trypan blue exclusion. To deplete endogenous TULA, the same electroporation procedure was used with the TULA-specific siRNA SMARTpool L-008616-00 (Dharmacon, Lafayette, CO).
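The siRNA step implies a large dilution (100 µM stock to 100 nM final in 75 µl). A minimal C1·V1 = C2·V2 sketch of that arithmetic is shown below; the implied stock volume (0.075 µl) suggests an intermediate dilution in practice, a detail the methods do not spell out.

```python
# Sketch: C1*V1 = C2*V2 dilution arithmetic for the siRNA electroporation.
def stock_volume_ul(stock_uM: float, final_nM: float, final_vol_ul: float) -> float:
    """Volume of stock (µl) required: V1 = C2 * V2 / C1, units converted."""
    final_uM = final_nM / 1000.0  # nM -> µM
    return final_uM * final_vol_ul / stock_uM

# 100 µM stock, 100 nM final, 75 µl electroporation volume.
print(stock_volume_ul(100.0, 100.0, 75.0))  # 0.075 µl of stock
```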
Subcellular Distribution-To obtain immunofluorescence images, HeLa cells were seeded onto fibronectin-coated coverslips (BD BioCoat) at 50% confluence in Dulbecco's modified Eagle's medium containing 10% FBS without antibiotics. On the following day, the cells were transfected to express FLAG-TULA and/or Myc-AIF (3 µg of each construct per coverslip) using FuGENE 6 as per the manufacturer's recommendations. Forty-eight hours post-transfection the cells were washed, fixed with 4% paraformaldehyde in PBS, washed again, and permeabilized with 0.2% Triton X-100 in PBS for 5 min at room temperature. Cells were blocked with 1% bovine serum albumin and washed twice with PBS. Fluorescein isothiocyanate-conjugated anti-FLAG (5-10 µg/ml) and Cy3-conjugated anti-Myc (1 µg/ml) (Sigma) were added as appropriate. The antibodies were incubated with the cells overnight at 4°C in the dark. The cells were washed three times with PBS before the coverslips were mounted onto slides with anti-fade mounting solution containing 4′,6-diamidino-2-phenylindole (DAPI) stain (Molecular Probes). Cell images were obtained using a Leica DM IRE2 confocal microscope with a ×100 objective.
For subcellular fractionation, 293T cells were transfected with either empty or TULA expression vector (10 µg per 75-cm² flask) using Lipofectamine 2000. Subcellular fractions were obtained from transfected cells at 48 h post-transfection using a Qproteome Cell Compartment kit (Qiagen).
RESULTS
AIF Is a Novel TULA Interacting Protein-To search for novel functions of TULA we sought to identify TULA interaction partners via a proteomics approach. For this purpose, FLAG-tagged full-length TULA and TULA-N1/2 (1-299), a truncation mutant lacking the C-terminal half but containing both binding domains of TULA (UBA and SH3) (see Fig. 1), were transiently overexpressed in HEK293T cells and immunoprecipitated with anti-FLAG antibody. The eluted immune complexes were separated by SDS-PAGE, and proteins associated with these forms of TULA were identified using LC-ES MS/MS. Several proteins were identified in the TULA and TULA-N1/2 immunoprecipitates and not in the vector control; one of these was c-Cbl (8 unique peptides), a previously characterized TULA-interacting protein (1,3). A second protein, identified with 15 unique peptides, was AIF. Originally, we identified AIF only in the TULA-N1/2 immunoprecipitates. However, the molecular mass of AIF suggested that it co-migrates with full-length TULA, which forms a very large band on the SDS-PAGE gel. Because co-migration with TULA was likely to hinder identification of AIF in this system, we targeted four unique AIF peptides for mass spectrometry-based sequencing in the gel band corresponding to full-length TULA and identified AIF from all four peptide sequencing events. We performed these experiments using TULA and TULA-N1/2 in triplicate, and AIF was identified each time with more than 10 peptides in each trial (supplemental Table S1). Interestingly, c-Cbl was only identified in the immune complexes with full-length TULA.
To verify the association of TULA and AIF and to identify the region of TULA involved in AIF binding, we transiently overexpressed TULA and TULA mutants in HEK293T cells, immunoprecipitated them, and analyzed the obtained immune complexes using Western blotting. Consistent with the mass spectrometry results, co-immunoprecipitation of AIF was clearly detectable (Fig. 2A). Immunoblotting also showed that TULA-N1/2 binds to AIF better than full-length TULA. To ensure that the difference in the amount of co-immunoprecipitated AIF was not due to differences in the cellular levels of AIF in cells overexpressing full-length TULA and TULA-N1/2 (as well as other TULA mutant forms, see below), we immunoblotted AIF in whole cell lysates and demonstrated that its level did not vary significantly between samples (Fig. 2B). Because the TULA-N1/2 mutant contains an SH3 domain and because AIF has several putative SH3-binding motifs (PXXP), including 545-PSTPAVPQAP-554, we hypothesized that TULA binds to AIF through the SH3 domain. However, the mutant form of TULA lacking a functional SH3 domain as a result of the W279L point mutation bound AIF with the same efficiency as wild-type TULA (Fig. 2A). Furthermore, the SH3-deleted forms of both TULA-N1/2 and full-length TULA bound AIF to the extent characteristic of AIF binding by TULA-N1/2 (data not shown). Likewise, deletion of the UBA domain had no effect on AIF binding (Fig. 2A). Finally, the TULA-C1/2 truncated form (amino acids 300-623) did not bind to AIF (data not shown). Taken together, these findings indicate that the N-terminal half of TULA is necessary for AIF binding, but that neither the SH3 nor the UBA domain is critical.
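Since the SH3-binding hypothesis above rests on spotting PXXP motifs, a short motif scan makes the reasoning concrete. The sketch below scans only the ten-residue AIF stretch quoted in the text (residues 545-554); overlapping matches are found with a lookahead pattern.

```python
import re

# Sketch: find overlapping P-x-x-P (PXXP) motifs in a protein fragment.
def pxxp_motifs(seq: str, start_residue: int = 1):
    """Return (residue_position, motif) pairs for overlapping PXXP matches."""
    return [(m.start() + start_residue, m.group(1))
            for m in re.finditer(r"(?=(P..P))", seq)]

# The AIF stretch quoted in the text, residues 545-554.
print(pxxp_motifs("PSTPAVPQAP", start_residue=545))
# [(545, 'PSTP'), (548, 'PAVP'), (551, 'PQAP')]
```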
Because c-Cbl is well characterized as a TULA binding partner in T cells, we examined its binding to TULA relative to AIF. Interestingly, high binding of c-Cbl to the various forms of TULA was invariably linked to low AIF binding, and vice versa (Fig. 3), in agreement with the findings of our mass spectrometry-based experiments (see above). This mutual exclusion is unlikely to be due to direct competition of c-Cbl and AIF for the same binding site, because c-Cbl binds to the SH3 domain of TULA (1), which is dispensable for AIF binding (see Fig. 2). It is more likely that c-Cbl and AIF bind to alternative conformational states of TULA or induce such states upon binding.
We also evaluated the possibility that AIF binds to the TULA homologue Sts-1/TULA-2. Both proteins were co-expressed in 293T cells, Sts-1 was immunoprecipitated, and the obtained immune complexes were analyzed using Western blotting. Co-immunoprecipitation of AIF and TULA-2 was not detected (data not shown), despite their high levels of expression, indicating that the interaction of TULA with AIF is specific for this particular family member.
TULA Facilitates T-cell Apoptosis-Our multiple attempts to generate stable TULA-overexpressing T- or B-cell lines using lentiviral transduction have failed, despite the generally high success rate of the lentiviral system used (25,26) and the fact that vector control stable transductants were consistently generated (data not shown), suggesting that constitutive high expression of TULA is detrimental to cell viability. Because AIF has been shown to be a key factor of caspase-independent apoptosis, we decided to explore the effect of TULA expression on T-cell apoptosis and to analyze the functional link between TULA and AIF in this event.
First, we analyzed the effect of reducing endogenous levels of TULA on T-cell apoptosis. A stable variant of Jurkat cells with a reduced level of TULA ("TULA-knockdown") was generated using a shRNA-encoding lentiviral vector (1). In these cells, the level of TULA protein was reduced ~4-fold as compared with the parental cells and cells expressing a control shRNA (Fig. 4A). (A decrease in TULA mRNA in these cells was shown in our previous report (1).) Apoptosis in the TULA-knockdown and control cells was induced using anti-CD3 (a T cell-specific apoptotic stimulus mimicking biological TCR/CD3-mediated signaling), etoposide (an apoptosis-inducing drug), and growth factor withdrawal (serum deprivation was used, because Jurkat cells are interleukin-2-independent). Early apoptosis of Jurkat cells was assessed using annexin-V staining, indicative of phosphatidylserine exposure on the outer leaflet of the plasma membrane. Results from these experiments show that TULA expression is critical for apoptosis induced by serum deprivation, but not by anti-CD3 or etoposide treatment (Fig. 4B).
To ascertain that the effect of TULA shRNA was not due simply to clonal variability, we determined whether depletion of TULA achieved using transient transfection of TULA-targeting siRNA (Fig. 4C) would produce a similar effect. These experiments indicated that TULA-specific siRNA substantially reduces serum deprivation-induced cell death (Fig. 4D). The finding that the effect of transient transfection is somewhat lower than that of stable TULA-specific shRNA expression is likely to be explained by a higher residual level of TULA in siRNA-treated cells as compared with shRNA-expressing cells (Fig. 4, A versus C).
To further establish a role for TULA in T-cell apoptosis and compare the pro-apoptotic potential of mutant forms of TULA, we employed an independent experimental approach, transiently overexpressing wild-type TULA and its mutants lacking known functional domains in Jurkat cells and comparing the sensitivity of these cells to apoptosis induced with anti-CD3 or camptothecin. The percentage of cells undergoing apoptosis in the control and the camptothecin-treated cells increased substantially when wild-type TULA was overexpressed, although no synergism between TULA overexpression and camptothecin treatment was observed. In contrast, overexpression of TULA did not significantly modify the sensitivity of Jurkat cells to anti-CD3-induced apoptosis; they were highly sensitive regardless of TULA overexpression (Fig. 5A). These results, taken together with those shown in Fig. 4, indicate that the pro-apoptotic effect of TULA is not linked to TCR/CD3-mediated signaling.
These experiments also demonstrated that TULA mutants lacking the UBA domain or a functional SH3 domain were unable to facilitate T-cell apoptosis despite being expressed at a level comparable with that of TULA (Fig. 5A, bottom panel).
Considering that proteins of a single family may have similar functions, we evaluated the effect of Sts-1 (TULA-2) on apoptosis of Jurkat cells. Unlike TULA, its ubiquitous homologue did not facilitate apoptosis either in untreated cells or in cells treated with camptothecin (Fig. 5B). This result indicates that the pro-apoptotic effect of TULA is specific for this family member.
TULA Exerts Its Apoptotic Effect through AIF-The results shown in Figs. 4 and 5 indicate that the sensitivity of Jurkat cells to TCR/CD3-mediated apoptosis remains unchanged by either knockdown or overexpression of TULA, thus ruling out the possibility that TULA affects apoptosis by regulating TCR/CD3 signaling. Given that TULA and AIF bind in vivo and that AIF induces apoptosis, we considered the possibility that TULA exerts its pro-apoptotic effect through AIF.
First, we addressed this issue by evaluating the caspase dependence of the observed effect of TULA, because TCR/CD3-induced apoptosis requires caspase activation (29-32), whereas AIF has an established role in caspase-independent apoptosis (6-8). To determine whether caspases play a role in the pro-apoptotic effects of TULA, we examined the effects of Z-VAD, a pan-caspase inhibitor, on cell death in our experimental system. These experiments indicated that inhibition of caspases does not reduce death of TULA-overexpressing Jurkat cells either in complete medium (Fig. 6A) or under serum deprivation (Fig. 6B), while it does inhibit camptothecin-induced death of Jurkat cells under both conditions. Likewise, the effect of TULA was insensitive to Z-IETD, a caspase-8 inhibitor (data not shown). Therefore, the pro-apoptotic effect of TULA is largely caspase-independent, in agreement with the idea that TULA exerts this effect through AIF.
We then directly evaluated the possibility that the pro-apoptotic effect of TULA requires AIF. We transiently transfected Jurkat cells with AIF-specific siRNA and TULA-encoding vector, to simultaneously reduce the endogenous level of AIF and overexpress TULA, and subjected them to serum deprivation to stimulate apoptosis. As expected, a decrease in the level of AIF reduced cell death, whereas overexpression of TULA caused an increase. Consistent with a cooperative role of these proteins in apoptosis, a decrease in AIF expression significantly inhibited the effect of TULA overexpression on serum deprivation-induced cell death (Fig. 7). (It should be noted that in this system TULA overexpression exerted only a minor pro-apoptotic effect in complete medium (cell death was 15.8 ± 3.1 and 21.0 ± 1.1% for vector control and TULA-overexpressing cells, respectively); therefore, serum deprivation was essential for revealing the observed effects. This variation between the results shown in Figs. 5 and 7 is probably due to differences in the basal levels of cell death in the systems utilized, which is higher for electroporated cells (Fig. 7) than for transfected cells (Fig. 5).) Taken together, our results indicate that TULA physically interacts with AIF and exerts its pro-apoptotic effect in an AIF-dependent fashion. However, it remains unclear how the interaction of TULA and AIF facilitates apoptosis, because these proteins appear to reside in different cellular compartments; AIF is localized to mitochondria under normal physiological conditions (6-8, 15, 33, 34), but after apoptotic stimulation it is released to the cytosol and subsequently translocates to the nucleus (11-14), whereas no mitochondrial localization has been shown for TULA. The latter is localized primarily to the cytoplasm, with some amount present in the nucleus (1). To determine whether AIF and TULA can co-localize, we transiently co-expressed epitope-tagged constructs of both proteins in HeLa cells and examined their localization under normal and stressed culture conditions using confocal microscopy. Under normal conditions, no significant co-localization of TULA and AIF was detectable (Fig. 8). However, their co-localization became apparent in the cytoplasm when cells were subjected to staurosporine treatment, which induces apoptosis mediated by the release of pro-apoptotic factors from mitochondria (Fig. 8). Apoptosis induction under these conditions is visualized by considerable morphological disturbances of the nuclei in staurosporine-treated cells. The lack of co-localization of TULA and AIF prior to stress treatment argues that TULA is unlikely to facilitate apoptosis by promoting the release of AIF from mitochondria.
To address this issue directly, we determined the effect of TULA overexpression on subcellular localization of AIF. HEK293T cells were transfected with wild-type and mutant forms of TULA, and distribution of AIF was examined using subcellular fractionation followed by Western blotting. AIF is largely localized to the membrane in a manner consistent with its mitochondrial targeting, whereas a very small fraction of it is present in the cytosol and in the nucleus (Fig. 9A), probably due to the transfection-induced cell stress, and not to cross-contamination of fractions, which was minimal (Fig. 9B). This pattern of subcellular distribution of AIF was not affected by overexpression of any form of TULA (Fig. 9C). Therefore, the results shown in Figs. 8 and 9 argue that (a) the functional interaction of TULA with AIF occurs when AIF has already been released from mitochondria and (b) translocation of AIF to the nucleus is not the major target of the pro-apoptotic effect of TULA.
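The per-fraction percentages behind Fig. 9 come from band intensities corrected for the loaded sample volume (the membrane samples were loaded at one-tenth volume, per the figure legend). A minimal sketch of that normalization follows; the intensity values are invented placeholders, not densitometry from the paper.

```python
# Sketch: convert band intensities to percent-per-fraction, correcting
# for unequal loaded volumes before normalizing.
def percent_per_fraction(intensities: dict, volume_factors: dict) -> dict:
    """Scale each band by its volume correction factor, then normalize."""
    scaled = {f: intensities[f] * volume_factors[f] for f in intensities}
    total = sum(scaled.values())
    return {f: round(100.0 * v / total, 1) for f, v in scaled.items()}

bands = {"cytosol": 120.0, "membrane": 950.0, "nucleus": 40.0}  # hypothetical
factors = {"cytosol": 1.0, "membrane": 10.0, "nucleus": 1.0}    # 10x smaller load
print(percent_per_fraction(bands, factors))
# {'cytosol': 1.2, 'membrane': 98.3, 'nucleus': 0.4} -> AIF mostly membrane-bound
```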
DISCUSSION
Taken together, our results obtained using several independent approaches demonstrate that TULA is involved in T-cell apoptosis (Figs. 4 and 5). Furthermore, our results indicate that TULA affects apoptosis through a mechanism independent of either TCR/CD3-mediated signaling or caspase activation. Finally, we have shown that TULA binds to AIF (Fig. 2, supplemental Table S1), a key factor of caspase-independent apoptotic events. Binding of AIF to TULA, together with the caspase independence of the pro-apoptotic effect of TULA, provided an initial argument in favor of the idea that TULA affects apoptosis through its interaction with AIF.
Indeed, further experiments indicated that AIF is essential for the pro-apoptotic effect of TULA (Fig. 7). Notably, the effect of AIF-specific siRNA on the pro-apoptotic activity of TULA was higher than its effect on the protein level of AIF. This apparent discrepancy may reflect the fact that the pro-apoptotic effect of TULA is highly sensitive to the level of AIF or that siRNA differentially affects the level of AIF in various cellular compartments, i.e. the fraction of AIF that mediates the effect of TULA may be depleted more than others. It should also be noted that AIF depletion reduced not only TULA-facilitated apoptosis, but also apoptosis in the absence of overexpressed TULA. This may be due to the role endogenous TULA plays in the basal apoptosis in Jurkat cells.
The essential role of AIF in TULA-facilitated apoptosis and the physical interaction of TULA with AIF, taken together, strongly suggest that binding of TULA to AIF is crucial for the pro-apoptotic effect of TULA. Indeed, overexpression of the TULA mutant lacking the N-terminal domain (TULA-C1/2), which is incapable of binding to AIF, does not promote apoptosis in Jurkat cells and even decreases it (data not shown). This finding is consistent with the notion that TULA-AIF binding is essential for the pro-apoptotic effect of TULA, but may also be explained by the lack of the TULA UBA and SH3 domains, which are essential for this effect but not required for TULA-AIF binding (Fig. 2). However, because our mutational studies implicated multiple sites within the N-terminal half of TULA in binding to AIF, we were unable to obtain a specific TULA mutant defective in AIF binding but fully functional otherwise (data not shown); evidence for the essential role of TULA-AIF interactions based on mutational disruption of these interactions therefore remains to be provided.
Several findings presented in this report allow us to outline the molecular basis of the pro-apoptotic effect of TULA. First of all, the caspase-independent nature of the effects of TULA argues that TULA does not act by permeabilizing mitochondrial membrane, because this would result in the release of multiple apoptotic factors acting through caspases. Therefore, considering that the effect of TULA is mediated by AIF, TULA may facilitate (a) release of AIF from mitochondria, (b) transfer of AIF to the nucleus, and (c) interactions of AIF with its co-factors. Because TULA and AIF are not substantially co-localized in unstressed cells (Fig. 8) and because overexpression of TULA does not alter subcellular distribution of AIF in unstressed cells (Fig. 9), it is unlikely that TULA acts by inducing the release of AIF from mitochondria. Likewise, the lack of a significant effect of TULA overexpression on the nuclear localization of AIF ( Fig. 9) provides no support for the idea that TULA facilitates apoptosis by increasing the nuclear translocation of AIF. Therefore, it is more likely that TULA promotes AIF-dependent apoptosis by facilitating interactions between AIF and its apoptotic co-factors (6). This possibility is supported by the following findings: UBA-and SH3-deficient forms of TULA bind to AIF, but lack a pro-apoptotic effect (Figs. 2 and 5), suggesting that interactions of the UBA and/or SH3 domains of TULA with proteins other than AIF are required for this effect. Although the identity of co-factors whose binding to AIF is facilitated by TULA remains to be elucidated, our results argue that c-Cbl is unlikely to be essential for the AIF-TULA cooperation, because the ability of several forms of TULA to bind to AIF and c-Cbl showed a clear inverse correlation (Fig. 3).
Analysis of the mechanism by which TULA cooperates with AIF is hindered by the lack of clarity regarding the molecular mechanism of the apoptotic effect of AIF. It has been shown that the nuclear transfer of AIF is essential for this effect, that AIF exerts its effect through binding to DNA, and that the initial step of AIF-induced apoptosis is likely to be chromatin condensation (6-8, 15, 33, 34). However, little is known about proteins cooperating with AIF in apoptosis. Although it was suggested that endonuclease G and cyclophilin A might cooperate with AIF in apoptosis (19,21), the involvement of these two proteins in the effect of AIF remains to be established.
As noted above, the caspase-independent nature of the pro-apoptotic effect of TULA (Fig. 6), which is consistent with the body of data related to AIF, is one argument supporting the functional cooperation between TULA and AIF in apoptosis induction. It remains unclear what needs of the cell are served by the existence of the AIF-dependent apoptotic mechanism alongside the well-characterized caspase-based mechanisms. Possibly, AIF plays a critical role in cell death in response to specific stimuli, including CD2- and CD44-mediated signaling (34,35) and certain drugs (7, 36-43). For instance, comparison of the sensitivity of wild-type and AIF-null ES cells to various apoptogenic stimuli indicated that AIF is essential for some (growth factor withdrawal), but not other (etoposide, azide, UV), pathways of death induction (44). Similarly, TULA does not interfere with T-cell apoptosis induced by TCR/CD3 ligation, but plays a critical role in growth factor withdrawal-induced apoptosis (Fig. 4), suggesting that the pro-apoptotic effect of TULA is specific for certain types of cell death. It is also possible that AIF functionally cooperates with caspases in cell death cascades, being responsible for the early stages of apoptosis (34,39). In particular, a recent report indicates that large-scale DNA degradation, which was thought to be a hallmark of the apoptotic effect of AIF, may be caspase-dependent, but agrees with previous studies that AIF-dependent chromatin condensation is independent of caspases and represents an early step of cell death that precedes its caspase-dependent steps (45).
To summarize, we have demonstrated, for the first time, that TULA (Sts-2) exerts a pro-apoptotic effect and that this effect is mediated by AIF. The effect of TULA on cell death is specific for particular death stimuli; it is significant for serum deprivation-induced apoptosis, but negligible for apoptosis induced by TCR/CD3 ligation or several DNA-damaging drugs. The effect of TULA is largely caspase-independent, thus being entirely consistent with the idea that it is mediated by AIF, a key factor of caspase-independent apoptosis. It appears that the role of TULA is primarily to amplify the apoptotic events dependent on AIF rather than to act as an independent inducer of apoptosis. Our results suggest that TULA promotes AIF-dependent apoptosis primarily by facilitating the interactions between AIF and its co-factors. Further studies should reveal the molecular basis of the apoptotic effect of TULA in detail.
FIGURE 9. Subcellular localization of AIF. A, HEK293T cells were transfected to express wild-type or mutant TULA, as described in the legend to Fig. 2, and subjected to fractionation. The obtained fractions were analyzed using Western blotting (WB) as shown, and the proteins detected are indicated by arrowheads at the right. Left and right panels represent two different gels; to adjust the quantitative results, cross-reference samples were added to each gel. B, subcellular fractions of cells transfected to express wild-type TULA were used to characterize the fractionation procedure using glyceraldehyde-3-phosphate dehydrogenase (GAPDH), VLA-2α, and lamin as proteins located primarily in the cytosol, membrane, and nucleus, respectively. C, the percentage of AIF in the fractions analyzed in A was calculated based on the intensity of the AIF bands and the sample volumes. The sample volumes were equal, with the exception of those for immunoblotting of membrane-localized AIF, which were 10-fold smaller owing to the dramatic difference between the amounts of AIF in the fractions. For comparison, panel B shows anti-AIF immunoblotting of equal-size samples.
added: 2018-04-03T01:33:43.113Z | created: 2007-10-19T00:00:00.000 | metadata:
{
"year": 2007,
"sha1": "771f1c1c9d5f2286de916385cf7c46d8bfd3826a",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/282/42/30920.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "3f8ab80d3c912b30eea1c4815910219d1ad47dc8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
id: 204334 | source: pes2o/s2orc | license: v3-fos-license
Cervical cancer prevention in South Africa: HPV vaccination and screening both essential to achieve and maintain a reduction in incidence
Cervical cancer rates remain high in South Africa, despite a cytology-based national screening programme. Human papillomavirus (HPV) vaccination is an important intervention that will benefit the younger generation. A school-based programme has been demonstrated to be a cost-effective and efficient way to administer the vaccinations. Education about HPV and cervical cancer will benefit children and their caregivers. Linkage of HPV vaccination with opportunistic screening of mothers may increase total programme coverage.
Cervical cancer remains an important cause of morbidity and mortality in South Africa (SA). A national cervical cancer prevention programme exists that offers three cervical cytology smears per lifetime, starting after the age of 30 at 10-year intervals. Despite this programme the incidence remains unacceptably high, cases are often diagnosed late, and many patients respond poorly to treatment. Primary healthcare systems in many areas are poorly developed, and uptake of cytological screening is generally poor, with some metropolitan areas and regions doing slightly better. Health systems interventions are necessary to improve the quality of screening. [1] In addition, there is often significant loss to follow-up after the initial screening test among women identified with abnormal cytology. Determinants of the high cervical cancer rate and poor outcome of treatment are similar to those in other developing countries and include a low doctor/population ratio, a high prevalence of HIV infection, and competing healthcare needs. A lack of consumer (patient) knowledge and empowerment leads to a low degree of health-seeking behaviour.
Disease prevention
Disease prevention strategies can be broadly categorised as primary prevention and secondary prevention. Primary prevention aims to reduce the risk of an individual contracting a particular disease by eliminating the aetiological agents from the environment. In the case of cervical cancer, the most important risk factor for the development of premalignant and malignant disease is persistent infection with oncogenic types of human papillomavirus (HPV).
Since the approval of the HPV vaccines in SA in 2008, they have been available in the private market. Uptake has been slow because of factors such as the initial high costs of both vaccines, poor community knowledge of cervical cancer and the causal relationship between HPV and cervical cancer, and lack of population experience with and acceptance of vaccines targeting adolescents. [2,3] A school-based introduction was suggested for SA. [4] The national Department of Health introduced an HPV vaccine roll-out programme in April 2014. Introduction of the HPV vaccination programme in public schools is widely supported by scientists and healthcare workers involved in the prevention and treatment of cervical cancer, who emphasise the excellent efficacy and safety record of the vaccines. [5] Linking health interventions may achieve cost-effective ways of preventing disease. A qualitative study in SA concluded that HPV vaccination can be linked to other adolescent preventive health services. [6] The strong link between HIV infection and immunosuppression and HPV-associated disease is well established. By controlling HIV, the incidence of HPV-related disease will also be reduced. In addition, HIV treatment facilities can be used to monitor cervical screening and treatment.
Smoking is associated with an increased risk of development of squamous and other carcinomas of the cervix, and a national antismoking campaign, like the programme introduced in SA under health minister Dlamini-Zuma, would be highly effective in reducing smoking-related diseases including cervical cancer.
Secondary prevention by screening has been shown to reduce cervical cancer significantly where comprehensive population-based call-and-recall programmes have been introduced. Since the SA cervical cytology programme is not a programme of this sort and has low uptake rates, success has been limited. Opportunistic screening will continue to be an important part of our programme for the foreseeable future. Linking opportunistic screening of mothers to vaccination of their children is a potential way to increase disease awareness and screening uptake.
Adolescent vaccination programmes
The successful introduction of a school-based vaccination programme is a momentous task, especially in settings like SA where no national adolescent, adult or school-based vaccination programme existed before roll-out. Education of healthcare practitioners and the general public will be crucial to the success of such a programme, as no framework or culture for the immunisation of older youth and adult populations is established in SA. HPV vaccine implementation programmes in other countries have shown that a school-based approach is most successful when HPV vaccines are introduced in girls 9-14 years of age. [7] The first of a series of three articles describing the Vaccine and Cervical Cancer Screen (VACCS) project appears in this issue of SAMJ. [8] These pilot projects combined vaccination of adolescent girls against HPV with cervical cancer screening interventions offered to their female caregivers.
Lessons learned from such vaccination pilot projects are useful to guide nationwide implementation locally and in other African countries. Barriers that had to be addressed included the challenges associated with administering a three-dose vaccine in a busy school calendar. A two-dose schedule was investigated in phase 2 of the study in the light of new efficacy data in girls 9-14 years of age. [9] The relative lack of health infrastructure for adolescent vaccination programmes was overcome by using school-based infrastructure and dedicated roving vaccination teams.
Parental or caregiver consent procedures could affect vaccine uptake in school-based programmes and may be a barrier to new vaccine introduction. [7] Clear messages to parents will reduce the likelihood of negative publicity and address concerns about safety. Vaccine uptake among girls whose caregivers attended information evenings was significantly better (almost 90%) than among girls whose caregivers did not attend (around 50%), [8] underscoring the importance of information dissemination, creation of awareness and disease-specific education. Information must be distributed using multiple strategies, which can utilise health workers or teachers. [7] Providing written information only may not be enough in some communities. Overall, only around 50% of all invited girls were ultimately sufficiently vaccinated, mainly owing to lack of parental consent. After similar experiences, alternative strategies were employed in some countries: community-based consent strategies that negated individual parental consent were introduced in Vietnam and Uganda, and parent opt-out strategies were used in Tanzania and Rwanda. [7] The acceptability of mandatory vaccination in the SA context will probably be problematic.
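To make the arithmetic behind these figures concrete, the sketch below combines the reported uptake rates (~90% when caregivers attended information evenings, ~50% when they did not) across an assumed range of attendance rates; the attendance values are hypothetical, not data from the VACCS project.

```python
# Sketch: expected overall uptake as a weighted average of the two
# caregiver groups reported above. Attendance rates are assumptions.
def expected_uptake(p_attend: float,
                    uptake_attend: float = 0.90,
                    uptake_no_attend: float = 0.50) -> float:
    """Weighted-average uptake across attending/non-attending caregivers."""
    return p_attend * uptake_attend + (1.0 - p_attend) * uptake_no_attend

for p in (0.00, 0.25, 0.50, 0.75):
    print(f"attendance {p:.0%} -> expected uptake {expected_uptake(p):.0%}")
# Raising attendance from 0% to 75% lifts expected uptake from 50% to 80%.
```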
Vaccine course completion rates were also addressed. A limitation of the national HPV vaccine roll-out programme is the absence of alternative vaccination opportunities when vaccination is missed as a result of absenteeism. The VACCS trials showed that at least two opportunities may be needed per facility because of illness or school-related activities that could cause girls to miss the vaccination opportunity, especially if a two-dose schedule is used. HPV vaccine coverage and completion rates may also be increased by the introduction of an additional public health facility-based HPV vaccination programme, which currently does not exist. These data highlight the importance of grade-based, as opposed to age-based, eligibility criteria and of completion of the vaccination series in one calendar year, as is done in the national roll-out.
Linking vaccination and screening
An exciting and novel approach is the linking of cervical cancer screening of female caregivers to the vaccination of schoolgirls. Even though the uptake of self-screening for high-risk HPV was relatively low, a significant proportion (around 30%) of women at increased risk for cervical cancer were identified. It is important to understand that home-based cervical cancer screening is intended as an adjunct to existing healthcare facility-based screening programmes, to reach unscreened, high-risk women who do not access healthcare. An additional benefit of linking screening to vaccination is the ease of traceability of screened women, which decreases the loss to follow-up experienced in the current national screening programme.
Implementation of HPV vaccine programmes and extension of cervical cancer screening programmes in African countries are important steps towards reducing the high burden of cervical cancer in this region.
Future considerations
Screening with HPV testing will almost certainly replace traditional cervical cytology in the not-too-distant future.Studies to evaluate the validity of the various HPV tests in our setting are extremely important.Higher rates of HPV infection, possible differences in genotype distribution and the effects of a high background incidence will affect the performance characteristics of HPV-based screening.The possibility of patient-collected (self-sampling) specimens will cater for a large number of women who may not have access to healthcare facilities.
HPV vaccination will only make a significant impact if high enough vaccination rates can be achieved. Continuing education of the public and of healthcare workers is essential. Monitoring of the programme to evaluate vaccination rates needs to be in place from the start. A catch-up school-based vaccination campaign to include a larger cohort of girls, perhaps up to 18 years of age, will be cost-effective and may produce herd protection.
Conclusion
There is currently no comprehensive cervical cancer control programme in place. We urgently need to outline strategies to:
• maximise the number of individuals receiving HPV vaccines
• monitor the vaccination programme effectively
• commit to a screening policy, with consideration of HPV testing and self-sampling.
In primary and secondary prevention of cervical cancer, opportunities for linkage with existing infrastructure and services should be investigated.
MARIA M4: clinical evaluation of a prototype ultrawideband radar scanner for breast cancer detection
Abstract. A microwave imaging system has been developed as a clinical diagnostic tool operating in the 3- to 8-GHz region using multistatic data collection. A total of 86 patients recruited from a symptomatic breast care clinic were scanned with a prototype design. The resultant three-dimensional images have been compared "blind" with available ultrasound and mammogram images to determine the detection rate. Images show the location of the strongest signal, and this corresponded with the lesion location in both older and younger women, with sensitivity of >74%, which was found to be maintained in dense breasts. The pathway from clinical prototype to clinical evaluation is outlined.
Introduction
Breast cancer (BC) is the most frequently diagnosed cancer in women worldwide, with nearly 1.7 million new cases diagnosed in 2012, and more than half of BC cases and deaths occurring in economically developing countries. [1][2][3] Asian countries, which represent 59% of the global population, have the largest burden of BC, with 39% of new cases, followed by Europe at 28%. 3,4 In 2012, deaths from BC in the USA accounted for 783,000 years of potential life lost and an average of 19 years of life lost per death. 5 Early detection has been shown to be associated with reduced BC morbidity and mortality 6,7 and the goal of BC screening programs is to reduce both. Most BCs are detected due to clinical symptoms or by screening mammography (MMG). The standard way to assess suspicious lesions is with the so-called triple assessment: clinical examination, imaging by MMG and ultrasound (US), and image-guided needle biopsy. Magnetic resonance imaging is currently used for initial cancer detection in women at high risk of developing BC but is a complex investigation with high direct and indirect costs. [8][9][10][11] MMG is one of the most effective detection techniques, but suffers from relatively low sensitivity, entails exposure to ionizing radiation and also involves uncomfortable compression of the breast. MMG also performs less well in younger, denser breasts, which is pertinent as breast density is now established as an independent risk factor for developing BC irrespective of other known risk factors. [12][13][14][15][16] This, coupled with the increased risk from ionizing radiation in younger women, restricts the lower age for use based on the risk/benefit ratio. Limitations of MMG have resulted in research into alternative methods for imaging of breasts, with microwave detection of breast tumors being a potential nonionizing alternative. 17 Initial results of microwave radar-based imaging have been presented [17][18][19][20][21][22][23] and approaches rely on a difference in the dielectric properties (Dk) of normal and malignant breast tissues. [24][25][26][27][28][29][30][31] The breast as an organ is unique in the human body in that its basic structure consists of glandular tissue (high dielectric constant, high conductivity, and radio-opaque) in a fat-based matrix (low dielectric constant, low conductivity, and relatively radiolucent). Inclusions, such as a tumor, are also of high permittivity, enhanced by the angiogenic increase in vascularity, and cysts contain fluid, which also has very high permittivity.
Some early measurements at 3.2 GHz 26 indicate that the most common relative permittivity values were 4 to 4.5 for breast fat, 10 to 25 for normal glandular tissue, and 45 to 60 for malignant tissues, but overlaps occurred, with values up to 55 for normal and down to 10 for malignant tissues. Glandular tissue is distributed, whereas malignant and cystic tissue tends at diagnosis to be discrete and, therefore, much easier to image. Similar results were obtained using completely different measurement techniques by Sugitani et al., 32 showing overlap of tissue values.
Such inclusions alter the speed of propagation of radio waves passing through the tissue, and the higher conductivity results in radio wave absorption. These changes mean that the phase and amplitude of a signal are affected by inclusions. To image inclusions, each antenna in an array transmits a signal in turn, which is detected by all the other nontransmitting antennas: a so-called multistatic array. The choice of frequency for such a radar system is a compromise between absorption of radio waves (which increases with frequency) and resolution (which increases with decreasing wavelength). Availability of a suitable radio wave transmitter and receiver [in this case, a vector network analyzer (VNA)] is also a factor. An ultrawideband (UWB) signal from 3 to 8 GHz is used in this development.
Methods and Materials
A series of prototype MARIA radar scanners were constructed within the Electrical and Electronic Engineering Department of the University of Bristol with funding from Micrima Ltd. All systems were based on multistatic radar operation, originally proposed for land mine detection by Benjamin. 33 Prototypes have evolved from an initial 16-antenna array 34 through to a 31-element UWB slot antenna system (MARIA M3). 35 To increase the number of antennas, arrays have been redesigned with new smaller UWB antennas. For improved imaging performance and reduced scanning times, a new 60-element antenna array system has been designed (MARIA M4). 36 This system consists of 60 wide-slot antenna elements positioned in a hemispherical arrangement. 36,37 The antennas operate over a frequency range of 3 to 8 GHz in a cavity-loaded slot arrangement. 38 Each antenna is designed to couple into a dielectric constant environment of Dk = 10.
To interface the antenna into tissue and to provide a fixed spacer that places the imaged tissue volume in the antenna far field, a separate fixed coupling shell with a uniform Dk = 10 is employed between the antennas and the breast tissue. The space between the antenna face and the shell is filled with a water-/oil-based coupling fluid, also with a Dk = 10.
The coupling shell and coupling fluid not only allow the antenna array to match into its surrounding environment and provide maximum radiated power but, as importantly, provide a method for the antenna array to rotate underneath the fixed shell. The system signal source is a VNA operating in the range of 3 to 8 GHz employing standard stepped continuous wave mode. To couple the antennas to the VNA source/receiver, a low-loss high-isolation switch matrix allows a single signal to be connected to any one of the 60 antennas and groups of receiving antenna signals to be received simultaneously at the VNA. The system collects signal data from the finished array by serially energizing each antenna and collecting the scattering parameter values at each incident frequency from the receive signals collected at all remaining antennas. This method results in a set of signal data for each of the bistatic ray paths. Due to antenna reciprocity, we can reduce the number of bistatic signals collected to half of the 3540, i.e., 1770. This reduces the overall scan time.
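To make the reciprocity argument concrete, here is a short Python sketch (the antenna count comes from the text; everything else is generic counting):

```python
from itertools import combinations, permutations

n_antennas = 60

# All ordered transmit/receive pairs, excluding an antenna receiving itself.
ordered_pairs = list(permutations(range(n_antennas), 2))
assert len(ordered_pairs) == 60 * 59           # 3540 channels

# Antenna reciprocity: the (tx, rx) and (rx, tx) channels carry the same
# information, so only unordered pairs need to be measured.
unique_paths = list(combinations(range(n_antennas), 2))
assert len(unique_paths) == 3540 // 2          # 1770 bistatic ray paths
```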
As signals are transmitted from each antenna, every signal passes through the external coupling shell and into the tissue volume. Multiple reflections occur within the antenna array and its associated coupling shell and coupling fluid and at the interface between the coupling shell and the breast skin surface. The breast skin surface has an estimated Dk of 25, so a significant portion of the incident signal is reflected (depending on its incident angle at this interface). Signals that penetrate into tissue are then reflected at random angles from the surfaces of tissue dielectric discontinuities within the tissue volume.
The various tissue types found in the breast have clearly identifiable dielectric constants in the microwave frequency range, 27,32 which result in incident signals being reflected and attenuated differently at each interface between the tissue types. It is these signals that the system collects and accumulates at each point in the estimated tissue space. These intratissue response signals are very small in comparison to the hardware/skin reflection signals that also appear in the final signal set; thus, a method to remove the non-tissue-generated signals from the final set is necessary before image generation.
Each complete image scan of the breast is a result of two separate scans offset from one another by a fixed angle. Unwanted signals produced by hardware and skin reflections are almost identical and appear at the same time position in each scan; therefore, they can be eliminated. In contrast, a tumor response will appear at different time positions in these two measured sets (except on the axis of rotation).
During image generation the single scans are subtracted from one another. This leaves the "nonstationary" signals intact and significantly reduces the "stationary" signals so that the signals generated by the tissue volume reflections predominate.
Image generation from the resulting RF signal data makes a number of assumptions. We assume that within the angle of array rotation (a) the distance between antennas and skin remains unchanged, (b) skin properties and thickness are the same, (c) normal breast tissue properties do not change, and (d) a uniform dielectric constant of Dk = 10 exists within the hardware and breast tissue across all frequencies of interest. These assumptions allow an estimate to be made of the location of a received signal based on the time-of-flight to and from the target location and on the transition time of signals at each frequency in a medium whose Dk = 10.
To generate the image, the system uses a modified version of the classical delay-and-sum (DAS) beamforming algorithm. 23,39,40 First, we perform the preprocessing steps, consisting of extraction of the tumor response from measured data, 23 equalization of tissue losses, and then equalization of the radial spread of the spherical wavefront. Next, appropriate time delays for all received signals are computed. The time delay for a given transmitting and receiving antenna is calculated based on the antenna's position, the position of the focal point r = (x, y, z), and an estimate of the average wave propagation speed, which in our case is assumed to be constant across the band. During the focusing, the focal point moves from one position to another within the breast; at each location, all time-shifted responses are coherently summed and integrated. Integration is performed on the windowed signal; the length of the integration window is chosen according to the system bandwidth (50% longer than the synthetic pulse duration) and was set to 0.55 ns, 23 forming a three-dimensional (3-D) map of scattered energy. The main advantage of the DAS algorithm is its simplicity, robustness, and short computation time.
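The focusing loop can be sketched as follows. This is a schematic Python reconstruction of a generic DAS beamformer, not the authors' implementation: antenna positions, signals, imaging grid, and sampling rate are placeholder inputs, and the propagation speed follows the text's uniform Dk = 10 assumption.

```python
import numpy as np

C0 = 3.0e8                 # free-space speed of light, m/s
V = C0 / np.sqrt(10.0)     # assumed propagation speed in a uniform Dk = 10 medium

def das_image(signals, tx_pos, rx_pos, grid, fs, win_len=0.55e-9):
    """Delay-and-sum focusing.

    signals : (n_paths, n_samples) preprocessed time-domain responses
    tx_pos, rx_pos : (n_paths, 3) antenna coordinates per bistatic path
    grid    : (n_voxels, 3) focal-point coordinates within the breast
    fs      : sampling frequency, Hz
    win_len : integration window, s (0.55 ns in the text)
    """
    n_win = max(1, int(round(win_len * fs)))
    energy = np.zeros(len(grid))
    for k, r in enumerate(grid):
        # Round-trip time-of-flight from transmitter to focal point to receiver.
        dist = (np.linalg.norm(tx_pos - r, axis=1) +
                np.linalg.norm(rx_pos - r, axis=1))
        start = np.round(dist / V * fs).astype(int)
        # Coherently sum the time-shifted responses, then integrate the window.
        summed = np.zeros(n_win)
        for sig, i0 in zip(signals, start):
            if 0 <= i0 and i0 + n_win <= sig.shape[0]:
                summed += sig[i0:i0 + n_win]
        energy[k] = np.sum(summed ** 2)
    return energy
```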
The 3-D map of spatial energy is presented to the user as a colored image comprising slices along three axes: craniocaudal (CC), mediolateral, and physician point-of-view. The energy image is normalized to the maximum energy value within the image. The image presented to the reader is thresholded (calibrated) at 70% of the maximum, which corresponds to the significant scatterers within the breast as determined through extensive phantom experimental work. 35 Typically, energy values less than 70% of the maximum correspond to clutter. An isometric 3-D rotatable image is provided showing an iso-surface representation of the energy values, whose relative contrast is adjustable by the image reader.
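The display normalization and clutter threshold reduce to a few lines (a sketch; `energy` is the 3-D map produced by the focusing step):

```python
import numpy as np

def threshold_image(energy, threshold=0.70):
    """Normalize a 3-D energy map to its maximum and suppress values below
    the display threshold (70% of maximum in the text), treated as clutter."""
    norm = energy / energy.max()
    return np.where(norm >= threshold, norm, 0.0)
```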
Clinical Equipment
The microwave components and supporting mechanical parts are incorporated into a fully integrated bed/system cabinet design (Fig. 1). The antenna array position is adjustable with the patient in position on the bed. It can be raised and lowered and is provided with lateral and cranial/caudal adjustment to allow the operator to optimally position the breast within the scanning cup in terms of fit without the patient having to move during normal clinical application. The system cabinet can also be rotated out from under the bed to allow introduction of additional inserts designed to accommodate smaller breast cup sizes into the basic breast cup [Fig. 1(c)].
Patient Population
The MARIA M4 prototype system was initially tested for efficacy in women attending symptomatic breast care clinics. Eighty-six patients were identified by clinicians as meeting study inclusion criteria [symptomatic clinic, to be examined by US and MMG, able to lie prone and having breast size within the range of available cups (310 to 850 ml)] and, after giving informed consent, were recruited at either Frenchay or Southmead Hospital, Bristol, U.K., and included in the observational, prospective MARIA M4 clinical evaluation study [approved by Central and South Bristol Research Ethics Committee (REC) 06/Q2006/30]. The types of lesions included were mainly cysts and cancers, but a small number of "other" conditions for which mammogram and US results were available were included. These conditions were a mix of hematoma, lipoma, or fibroadenoma.
Procedure in Clinic
Patients had an US examination and MMG, and where possible a cytology or histology examination (if appropriate and for patient benefit) as part of normal clinical procedure. Patients were scanned using MARIA M4 prior to any surgical or biopsy intervention. Patients were required to lie prone with the breast inserted into a ceramic cup lined with a small amount of "coupling fluid" of dielectric constant 10 and attenuation of 0.8 dB/cm at 3 GHz. 41 The scan consisted of checks for goodness of fit of the breast inside the cup (lack of air gap), followed by at least two further scans of about 30 s each. Data were processed offline.
Data Collection
Data collected were Breast Imaging Reporting and Data System (BI-RADS) score, 42 age, menopausal status, and breast size. Evaluation of MARIA M4 scans consisted of two stages: a judgment of lesion(s) type, size, and location using all available clinical data by a researcher who had no knowledge of the MARIA image, and an assessment of the MARIA image by an engineer who had no access to the clinical data or image. The two observations were then compared jointly by the two observers to decide, on the available data, on a good correspondence, a failure to correspond, or a need to exclude. In this, the results from US with or without MMG were the "gold standard." Additionally, a nested evaluation was undertaken in which a blind read of all available MMGs was completed for MARIA M4 study patients (n = 66). All patients' identifiable information was removed from MMGs by a picture archiving and communication system (PACS) administrator, and a blind read of the MMG was conducted by an experienced radiologist. Blind read results were compared to the original clinical result using all available clinical information and to MARIA M4 detection (versus "gold standard" results).
Results
Of the 86 MARIA M4 patients included in the study, a sensitivity score of 74% (64/86) correspondence with the "gold standard" was obtained (mean age 51.4 years, age range 24 to 87; diagnoses: cysts n = 36 (57%), cancer n = 20 (31%), others n = 8 (12%)) (Table 1). Before reviewing a MARIA image, the location of the lesion within the breast was recorded on the basis of octant of breast, depth on US (allowing for degree of compression by the probe), and distance from the nipple as noted in clinical and imaging examinations. The sensitivity was judged by whether MARIA located an apparent lesion in the corresponding position, making subjective allowance for US probe compression and MMG breast compression. The MARIA image was produced by an engineer "blind" to the clinical status. On this basis there was 75% (45/60) sensitivity in pre-/peri-menopausal women and 73% (19/26) in postmenopausal women. An example of a MARIA M4 scan is given at 70% threshold within the image [Fig. 2(a)].
Of the initial 86 studies reviewed, 66 had a MMG available for comparison. Of these cases there was 74% (49/66) sensitivity for MARIA M4 compared to MMG (Table 2). Sensitivity in dense breasts was higher, at 86% (Fig. 3). For comparison, a negative example of MARIA is shown in which conventional methods (US and MMG) were successful (Fig. 4).
Discussion
Although the number of subjects analyzed here is too small to permit extensive statistical comparisons, some trends can nevertheless be demonstrated. A detection rate of 74% in all 86 breasts scanned compares very well to the 78% score in digital MMG reported in the digital mammographic imaging screening trial (DMIST) study. 43 Further improved results in dense breasts, at 86%, compare even more favorably to the DMIST dense breast group at 78%, and these MARIA results in dense breasts are important, as women with dense tissue in 75% or more of the breast have a risk of BC four to six times as great as the risk among women with little or no dense tissue. [13][14][15]44,45 Patients undergoing a MARIA scan reported that the procedure was acceptable and easily managed by those able to lie prone and still for about 2 min, and particularly appreciated the lack of breast compression. As the incidence of BC has increased and ∼25% of all deaths due to BC occur in the 40- to 49-year-old age group, 46,47 the MARIA system has the potential to make a major impact in improving BC screening. The MARIA system produces a high-contrast 3-D image of the breast and offers the provision of a safer, more comfortable, and inexpensive breast screening alternative compared to other modalities, which has been shown to be particularly effective at detecting cancer in younger, premenopausal women with dense breasts. MARIA may also overcome some of the challenges posed by trying to optimize the balance between benefit and harm of MMG screening in women of younger age. An improved MARIA M5 system with full CE marking is currently undergoing additional clinical evaluation (approved by Yorkshire & The Humber and South Yorkshire REC 15/YH/0084, ClinicalTrials.gov NCT02493595). Microwave imaging is a rapid, potentially diagnostic technology that is nonionizing, does not involve breast compression, and has been found to be able to identify regions of significant dielectric contrast, even in dense breasts. This suggests it has value in a routine diagnostic breast care clinic, where x-ray MMG is known to perform suboptimally in dense tissue. 13 Due to MARIA's completely benign radiation characteristic, the technique lends itself to future applications within a younger screening demographic, including women who are deemed to be at a high risk of developing BC.
Unraveling the molecular heterogeneity in type 2 diabetes: a potential subtype discovery followed by metabolic modeling
Background Type 2 diabetes mellitus (T2DM) is a complex multifactorial disease with a high prevalence worldwide. Insulin resistance and impaired insulin secretion are the two major abnormalities in the pathogenesis of T2DM. Skeletal muscle is responsible for over 75% of the glucose uptake and plays a critical role in T2DM. Here, we sought to provide a better understanding of the abnormalities in this tissue. Methods The muscle gene expression patterns were explored in healthy and newly diagnosed T2DM individuals using supervised and unsupervised classification approaches. Moreover, the potential of subtyping T2DM patients was evaluated based on the gene expression patterns. Results A machine-learning technique was applied to identify a set of genes whose expression patterns could discriminate diabetic subjects from healthy ones. A gene set comprising of 26 genes was found that was able to distinguish healthy from diabetic individuals with 94% accuracy. In addition, three distinct clusters of diabetic patients with different dysregulated genes and metabolic pathways were identified. Conclusions This study indicates that T2DM is triggered by different cellular/molecular mechanisms, and it can be categorized into different subtypes. Subtyping of T2DM patients in combination with their real clinical profiles will provide a better understanding of the abnormalities in each group and more effective therapeutic approaches in the future.
Background
T2DM is a complex multifactorial disorder. Impaired insulin secretion by pancreatic β-cells is the main cause of T2DM. This usually happens against a background of reduced sensitivity to insulin in target tissues [1]. Skeletal muscle, liver, and adipose tissues are the key insulin-sensitive tissues. Skeletal muscle takes a major role in lowering the blood glucose level and is responsible for over 75% of the glucose uptake [2,3]. Better prognostic signatures and therapeutic targets necessitate a better understanding of the molecular mechanisms underlying insulin resistance in skeletal muscle. Whereas considerable experimental and computational attempts have been made to determine the molecular mechanisms involved in insulin resistance [4][5][6][7][8], the exact underlying cause of this phenomenon is still unclear [4], and in some cases, failure of the current therapies has been reported. One possible reason for this failure may be the multifactorial nature of T2DM, which results in different groups of molecular mechanisms, all leading to insulin resistance. Precision medicine for each group may, therefore, help develop more effective treatments for T2DM.
In the present work, we attempted to better understand T2DM. We studied gene expression profiles of human skeletal muscle from healthy and newly diagnosed diabetic patients with two goals: 1) To identify a set of genes whose expression patterns can discriminate T2DM individuals from healthy ones using a machine-learning approach; and 2) To examine the potential existence of molecular subtypes based on the gene expression profile of diabetic individuals. For this purpose, unsupervised classification was used to find different possible subgroups of T2DM. We applied differential gene expression analysis and metabolic modeling to gain an in-depth insight into the molecular mechanisms leading to insulin resistance in each subgroup. This finding can be helpful in developing effective treatments for this disease in the future. The overall study design is shown in Fig. 1.
Data
Gene expression data were obtained from a sub-study of the Finland-United States Investigation of NIDDM Genetics project [9]. This is the largest dataset of human skeletal muscle transcriptome. The dataset contains gene expression data from participants with glucose tolerance ranging from normal to newly diagnosed T2DM, in which 91 and 63 individuals were healthy and diabetic, respectively. Data are available through the repository's data access request procedure in the database of Genotypes and Phenotypes (dbGaP) with the accession code phs001068.v1.p1. Data from healthy and diabetic individuals were downloaded and were used for subsequent analyses.
Differential gene expression
Detection of differentially expressed genes (DEGs) was done by employing DESeq2, which is a standard, well-known, and powerful method for RNA-Seq differential gene expression analysis and gives the highest power estimations even with a small sample size [10,11]. The analysis was conducted using the Bioconductor R package DESeq2 [12]. A pre-filtering stage was performed that removed genes whose expression levels were below a minimum cutoff level (< 5 read counts in less than 25% of samples). Following the DESeq2 manual, between-sample normalization was applied to account for differences in sequencing depth. DESeq2 employs one of the best between-sample normalization methods for detecting differentially expressed genes [13]. Since cross-sample analysis of RNA-Seq data is not reliable without between-sample normalization, we did not use FPKM/RPKM in this analysis; these measures are only suitable for the comparison of genes within one sample. In the context of differential analysis, RPKM [FPKM] is inefficient and should be abandoned [13].
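DESeq2 itself is an R/Bioconductor package; the Python sketch below only mimics the two pre-processing steps named here, namely the count cutoff and DESeq2's median-of-ratios size factors, with `counts` as a genes x samples array:

```python
import numpy as np

def prefilter(counts, min_count=5, min_frac=0.25):
    """Keep genes with at least `min_count` reads in at least `min_frac`
    of samples (the cutoff described in the text)."""
    keep = (counts >= min_count).mean(axis=1) >= min_frac
    return counts[keep]

def size_factors(counts):
    """DESeq2-style median-of-ratios size factors for between-sample
    normalization, computed over genes expressed in every sample."""
    expressed = (counts > 0).all(axis=1)
    log_geo_mean = np.log(counts[expressed]).mean(axis=1)
    log_ratios = np.log(counts[expressed]) - log_geo_mean[:, None]
    return np.exp(np.median(log_ratios, axis=0))   # one factor per sample

# filtered = prefilter(raw_counts)
# normalized = filtered / size_factors(filtered)
```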
DEGs between two states (e.g., healthy vs. diabetic) were assessed based on a negative binomial distribution. Multiple testing correction was applied by adjusting the P values using the Benjamini-Hochberg procedure, and a false discovery rate of 0.1 was considered significant. Moreover, KEGG pathway enrichment analysis of significant DEGs was performed using Enrichr [14]. For this enrichment analysis, we used the genes with an absolute log2 fold change of more than 0.9.
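Assuming per-gene p-values and log2 fold changes have already been computed (e.g., by DESeq2), the correction and filtering steps look roughly like this:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def select_degs(pvalues, log2fc, fdr=0.1, lfc_cutoff=0.9):
    """Benjamini-Hochberg correction at FDR 0.1; genes that also pass the
    |log2FC| > 0.9 filter are the ones sent to pathway enrichment."""
    significant, _, _, _ = multipletests(pvalues, alpha=fdr, method="fdr_bh")
    for_enrichment = significant & (np.abs(log2fc) > lfc_cutoff)
    return significant, for_enrichment
```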
Feature selection method: GA-SVM
To select a near-optimal feature subset, a wrapper feature selection algorithm that is a hybrid of a genetic algorithm (GA) and a support vector machine (SVM) was used. GA is a global optimal search algorithm inspired by Darwin's theory of evolution. In the algorithm, the candidate solution (feature subset) is encoded on a chromosome-like structure. A set of chromosomes constitutes a population in which crossover and mutation can occur to generate new feature subsets. For each chromosome, a fitness value is calculated representing how well a feature subset is adapted to the environment. The algorithm employs competitive selection, in which better feature subsets have a higher chance of being selected for reproduction and creating the next generation. This search process is repeated until a stopping criterion is satisfied.
In this analysis, a binary genetic algorithm was implemented. Each gene in this algorithm takes one of the binary values, 1 or 0, indicating the presence or absence of a particular feature on the corresponding chromosome. The chromosome length and the population size were set to the number of features and 500 chromosomes, respectively. The maximal number of generations was set to 100. SVM classification accuracy was used as the fitness score. The genetic algorithm was terminated when the fitness score reached at least 95% or the maximum number of generations was reached.
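A compact sketch of such a GA-SVM wrapper is given below; the population size, generation limit, and 95% stopping criterion follow the text, while the selection, crossover, and mutation details (and the 5-fold CV inside the fitness) are generic assumptions, since the paper does not specify them. Note that with 500 chromosomes and 100 generations this is computationally heavy.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated SVM accuracy of one candidate feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

def ga_svm(X, y, pop_size=500, n_gen=100, target=0.95, p_mut=0.01):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5       # random binary chromosomes
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        if scores.max() >= target:                   # early-stopping criterion
            break
        # Fitness-proportionate selection of parent pairs.
        probs = (scores / scores.sum() if scores.sum() > 0
                 else np.full(pop_size, 1 / pop_size))
        parents = rng.choice(pop_size, size=(pop_size, 2), p=probs)
        children = []
        for a, b in parents:
            cut = rng.integers(1, n_feat)            # single-point crossover
            child = np.concatenate([pop[a, :cut], pop[b, cut:]])
            child ^= rng.random(n_feat) < p_mut      # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]                    # best boolean feature mask
```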
Supervised classification
Supervised machine learning methods, including SVM, k-nearest neighbor (KNN), neural network (NN), naïve Bayes (NB), and random forest (RF), were employed for the classification of T2DM individuals versus controls. We used the Orange data mining toolbox for this analysis [15]. The classifiers were validated by 10-fold stratified cross-validation, and the area under the ROC curve (AUC), accuracy (ACC), F1 score, precision, and recall were reported.
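The same evaluation protocol can be reproduced with scikit-learn instead of Orange; the sketch below runs on synthetic data shaped like the study (154 samples, 26 selected genes):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data: 154 samples (91 healthy + 63 T2DM), 26 gene features.
X, y = make_classification(n_samples=154, n_features=26, random_state=0)

classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(),
}
scoring = ["roc_auc", "accuracy", "f1", "precision", "recall"]
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, clf in classifiers.items():
    res = cross_validate(clf, X, y, cv=cv, scoring=scoring)
    print(name, {m: round(res[f"test_{m}"].mean(), 3) for m in scoring})
```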
Unsupervised classification
Potential subtyping of diabetic patients was performed. The gene expression values were considered as the features for unsupervised classification. Lowly expressed genes were filtered out, and the remaining genes were normalized using the DESeq2 normalization method. This resulted in 21,826 genes as the features. Samples were categorized into potential subtypes based on the similarity in their gene expression patterns. Here, we used complete-linkage hierarchical clustering with the Euclidean distance metric.
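In Python, this clustering step corresponds to a few SciPy calls (a sketch with random stand-in data for the 63 diabetic transcriptomes):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
expr = rng.normal(size=(63, 21826))   # stand-in: samples x normalized genes

Z = linkage(expr, method="complete", metric="euclidean")  # complete linkage
labels = fcluster(Z, t=3, criterion="maxclust")           # cut into 3 clusters
print(np.bincount(labels)[1:])                            # cluster sizes
```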
Cluster-based genome-scale metabolic modeling
To reconstruct the personalized metabolic models, we need a generic genome-scale metabolic model (GEM) and gene expression data. A generic human GEM is reconstructed from all possible reactions, for which the relevant enzymes are encoded in the genome, and which can occur in different human cell types. By having gene-protein-reaction associations and mapping gene expression data to the generic metabolic model, active enzymes and subsequently active reactions are identified, and a context-specific metabolic model is reconstructed. These context-specific metabolic models can be employed for subsequent simulations to study metabolic reprogramming under specific conditions. Possible minimum and maximum flux through a specific reaction can be simulated using flux variability analysis (FVA). The readers are referred to [16] for a full description of the principal concept of this simulation.
Fig. 1 Graphical overview of the study design. This study included two sections, supervised and unsupervised classification. In the supervised classification part, we used a machine learning approach to identify a set of genes whose expression patterns could discriminate T2DM individuals from healthy ones. In the unsupervised section, clustering of T2DM patients was employed for potential subtyping of the disease.
Here, personalized metabolic models were reconstructed based on the Human Metabolic Reaction 2 (HMR 2) as the generic model [17,18]. The E-Flux method was applied to reconstruct the context-specific metabolic models using gene expression data [19]. Pre-processing of gene expression data, including pre-filtering of low expressed genes, between-sample normalization (DESeq2 normalization method with gene length adjustment), and log2 transformation, was applied. The myocyte biomass reaction was added to the model from the Bordbar model [6]. Body fluid metabolites were used as media conditions [20]. The objective function was set to maximize flux through the production of mitochondrial ATP. In addition, to ensure the viability of the cell, the lower bound of the biomass reaction was set to 0.8 of the maximum amount of biomass production in the healthy model [21]. FVA for each model was applied to obtain the minimum and maximum possible fluxes of each reaction using the COBRA Toolbox version 3.0 [22]. The personalized metabolic models (154 models) were categorized into three groups based on the clusters obtained from the previous section. Subsequently, to find perturbed reactions between each cluster and controls, a two-sample t-test was performed on the minimum and maximum fluxes obtained from FVA. Multiple testing correction was applied using the Benjamini-Hochberg procedure, and reactions with a false discovery rate of less than 0.1 were considered significantly perturbed. Figure 2 shows the workflow for this section.
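The paper ran these steps with HMR2 in the MATLAB COBRA Toolbox 3.0; the sketch below shows the analogous pattern in the Python cobrapy package, with the E-Flux idea reduced to its core (scaling reaction bounds by normalized expression) and the biomass constraint omitted. The model file name and reaction-level expression mapping are illustrative assumptions.

```python
import numpy as np
from cobra.io import read_sbml_model
from cobra.flux_analysis import flux_variability_analysis
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def eflux_bounds(model, rxn_expression):
    """E-Flux core idea: scale each reaction's flux bounds by its normalized
    expression so that lowly expressed enzymes constrain their reactions."""
    top = max(rxn_expression.values())
    for rxn in model.reactions:
        if rxn.id in rxn_expression:
            scale = rxn_expression[rxn.id] / top
            rxn.lower_bound *= scale
            rxn.upper_bound *= scale
    return model

# model = read_sbml_model("HMR2.xml")            # hypothetical file name
# model = eflux_bounds(model, rxn_expression)    # one model per individual
# fva = flux_variability_analysis(model)         # DataFrame with "minimum"
#                                                # and "maximum" per reaction

def perturbed_reactions(flux_cluster, flux_healthy, fdr=0.1):
    """Per-reaction two-sample t-test on FVA fluxes, BH-corrected at FDR 0.1;
    the flux arrays are models x reactions matrices of FVA min or max values."""
    pvals = np.array([ttest_ind(flux_cluster[:, j], flux_healthy[:, j]).pvalue
                      for j in range(flux_cluster.shape[1])])
    reject, _, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return np.flatnonzero(reject)
```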
Supervised classification
There were 57,820 gene expression values for each individual that could be regarded as features in the classification. Using all of these genes as features was not practical, as it leads to high-dimensional data and reduced performance of conventional machine learning approaches. To overcome this problem, we used differential gene expression analysis. We removed genes without any significant quantitative change in the T2DM versus healthy comparison from the feature list. Taking the remaining genes as features, we applied a feature selection method to find a near-optimal gene subset whose expression patterns can discriminate T2DM individuals from healthy ones. Thus, DEGs between healthy and T2DM were explored, which resulted in 247 differentially expressed genes. These 247 genes were used as the features of classification, and the classifiers' accuracy was investigated. SVM, KNN, NN, NB, and RF classifiers were evaluated, and SVM showed the best performance in our analysis, as shown in Table 1.
To achieve a near-optimal feature subset and to improve the classification accuracy, feature selection was applied based on a combination of GA and SVM. Different subsets of features were found that could distinguish T2DM from normoglycemic subjects with high accuracy. The GA-SVM procedure was repeated 100 times, and 100 feature subsets with prediction accuracies of around 95% were obtained. Features were ranked according to the frequency of their presence in these 100 subsets. Our analysis revealed that using the 26 top-ranked genes as the features could improve classification accuracy to 94%. This subset consists of important genes including CERK, FGFBP3, ETV5, E2F8, MAFB, and ten non-coding genes. The complete list of genes with Ensembl IDs can be found in Additional file 2. These top-ranked genes were selected as the final features. The performance of different classifiers with these features was assessed (Table 2).
To evaluate the SVM classifier using the final features, classification was repeated 100 times with 10-fold cross-validation, and accuracy, sensitivity, and specificity were calculated. Figure S1 in Additional file 1 shows the box plot of this evaluation.
Unsupervised classification
In this section, the objective was to assess the possible existence of different subtypes of the disease. We tried to answer the following questions: 1) Do the diabetic participants show different patterns of gene expression or not; and 2) Is it possible to categorize T2DM samples into distinct sub-groups with specific abnormalities in gene expression pattern? To answer these questions, an unsupervised hierarchical clustering algorithm was applied to the diabetic samples using the Euclidean distance measure and the complete linkage method. The top three clusters were selected and studied (Fig. 3). Clusters 1 to 3 consist of 18, 18, and 27 individuals, respectively.
To study biological differences between clusters, metabolic modeling of each cluster and differential gene expression analysis were applied. It was found that differences in gene expression patterns and pathways between healthy individuals and all newly diagnosed diabetic patients taken together are small. Clustering of patients and analysis between each cluster and healthy individuals helped to find more DEGs and more perturbed pathways. Results showed that each cluster has specific dysregulated genes and pathways, which do not exist in the other two clusters. A heatmap representation of the gene expression in the three clusters is shown in Fig. 4. In addition, pathway enrichment analysis of DEGs in each cluster was performed. The results can be found in Tables S1-3 of Additional file 1.
The analysis demonstrated that among these three clusters, the first cluster has the largest number of perturbed pathways and dysregulated genes. Dysregulation of several genes in cluster 1 was found, including down-regulation of DDIT4L, subunits of cytochrome c oxidase, several mitochondrial genes, and ADIPOQ, and up-regulation of several inflammatory genes such as GADD45G, TGFB1, CARD9, IGHA2, IGHG2, IGHA1, IGHD, and MIF. Down-regulation of several mitochondrial genes and subunits of cytochrome c oxidase (COX) can reflect mitochondrial dysfunction and oxidative stress. Down-regulation of the adiponectin gene was also found in cluster 1. At the metabolic modeling level, perturbations were observed in pathways related to inositol phosphate metabolism, pentose phosphate pathway, tyrosine metabolism, folate metabolism, acylglycerides metabolism, glutathione metabolism, ROS detoxification, glycerolipid metabolism, acyl-CoA hydrolysis, fatty acid activation, beta-oxidation of fatty acids, sphingolipid metabolism, glycerophospholipid metabolism, chondroitin/heparan sulfate metabolism, purine and pyrimidine metabolism, carnitine shuttle, TCA, oxidative phosphorylation, omega-3 and omega-6 fatty acid metabolism, and glycosphingolipid metabolism.
Fig. 2 Workflow for cluster-based metabolic modeling. The HMR2 model was used as the generic model. The personalized metabolic models were reconstructed by integrating gene expression data into the HMR2 using the E-Flux algorithm. Diabetic models were categorized into three groups based on the clusters obtained from the hierarchical clustering of T2DM patients. FVA was employed to obtain the maximum and minimum possible fluxes in each reaction. Perturbed reactions in each cluster in comparison to the healthy group were identified by applying a t-test to the obtained fluxes.
Cluster 2 displayed no significantly perturbed pathway, although changes in the expression of various genes were observed. Overexpression of SPP1, TNFRSF11B, and FRK and down-regulation of PRKAG3 and ATP2A1 are some examples. We speculated that people in this group may be closer to the control group with respect to blood sugar levels. Thus, we compared the phenotypic features of people in each cluster with controls. Table 3 shows the average value of each feature in the different clusters. In addition, the box plots of fasting glucose and fasting insulin values in each diabetic cluster and the normoglycemic group are shown in Figures S2 and S3 of Additional file 1. We also provide more information about differences in clinical features between each pair of clusters in Tables S4 and S5 of Additional file 1. This analysis revealed that this cluster is very close to the healthy state in terms of blood glucose and insulin levels.
Of these three clusters, cluster 3 has the fewest DEGs, although perturbations in the expression of various important genes like MSTN, ERBB3, EGR1, CIDEC, and HK2 were found in this cluster. At the metabolic level, perturbation in glucose metabolism was observed. Dysregulation of branched-chain amino acids (BCAAs) metabolism, glycolysis, pyruvate metabolism, the tricarboxylic acid cycle, glyoxylate/dicarboxylate metabolism, and several exchange and transport reactions was found. The complete list of DEGs and perturbed reactions in each cluster can be found in Additional file 2.
Supervised classification discriminates diabetic patients from healthy ones
In this study, gene expression data from newly diagnosed type 2 diabetic patients were analyzed using supervised and unsupervised machine learning approaches. At the supervised level, we aimed to identify a set of genes whose expressions were dysregulated in most patients and could potentially discriminate normoglycemic from T2DM individuals.
The gene set comprised genes such as FGFBP3, CERK, ETV5, E2F8, MAFB, and non-coding RNAs, which may be used to study and develop novel T2DM treatments in the future. Notably, the injection of FGFBP3 has been patented as a treatment for diabetes, obesity, and nonalcoholic fatty liver disease [23,24]. It has been demonstrated that a single injection of FGFBP3 regulates the blood glucose level and keeps it in the normal range for more than 24 h. CERK plays an important role in inflammation-associated diseases [25]. It has been observed that CERK deficiency in CERK-null mice suppresses the elevation of obesity-mediated inflammatory cytokines and improves glucose intolerance [26]. Studies have also indicated a relationship between diet, obesity, and expression of the ETV5 gene, which participates in food intake control mechanisms [27].
Moreover, it has been found that impaired glucose tolerance in obese individuals is associated with the upregulation of E2F8, which possibly is implicated in the progression of obesity, glucose intolerance, and its complications [28]. MAFB also has been linked to metabolism and the development of obesity and diabetes. MAFB-deficient mice have exhibited higher body weights and a faster rate of increase in body weight than control mice [29]. Up-regulation of MAFB expression in human adipocytes has been correlated with adverse metabolic features and inflammation, which may lead to the development of insulin resistance [30]. In addition to the protein-encoding genes, we found that about 40% of the top-ranked genes comprise non-coding RNAs, including pseudogenes and long non-coding RNAs. Recent studies have revealed that the deregulation of pseudogenes and lncRNAs can relate to diabetes [31,32]. In the present analysis, more non-coding candidates were found that support the role of lncRNAs in complex diseases like diabetes. These non-coding RNAs can be functionally analyzed to understand their biological roles in the pathology of T2DM.
Unsupervised classification of diabetic patients reveals the potential existence of molecular subtypes
The objective of the analysis at the unsupervised level was to identify different gene expression patterns among T2DM patients, potentially leading to insulin resistance through different mechanisms. In this part, the diabetic samples were categorized into three clusters, and specific dysregulated genes and pathways in each cluster were found. This analysis shows that, because of the heterogeneous and multifactorial nature of this disease, the gene expression dysregulations of all diabetic people are not necessarily the same. Thus, people can be clustered into different subgroups with different dysregulations in gene expression patterns. We attempted to model the subsequent effects of these gene expression dysregulations on their metabolism. However, we do not claim that these transcriptional differences lead to the manifestation of different clinical features such as fasting glucose and insulin levels in these clusters. Moreover, we only investigated the potential existence of molecular subtypes in T2DM, and we did not introduce specific subtypes. Accurate subtyping requires more data from additional individuals and validation with an independent data set and experimental verification.
Cluster 1: mitochondrial dysfunction, oxidative stress, and inflammation
In cluster 1, the perturbed pathways and dysregulated genes possibly represent perturbation of lipid and free fatty acids (FFAs) metabolism, inflammation, oxidative stress, and mitochondrial dysfunction. Perturbed pentose phosphate, folate metabolism, and glutathione metabolism, as well as dysregulated genes such as IGHA1 and IGHA2, GADD45G, and DDIT4, point to inflammation and oxidative stress. The up-regulation of IGHA1 and IGHA2 may trigger an inflammatory cascade involving a neutrophilic response, phagocytosis, the oxidative burst, and subsequent tissue damage. Also, GADD45G, which is overexpressed in this group, plays the role of a stress sensor [33]. DNA damage and energy stress can also activate DDIT4 expression; thus, this gene contributes to regulating reactive oxygen species [34]. Oxidative stress may impair mitochondrial function, which possibly leads to impairment of insulin sensitivity. Some evidence has supported the role of oxidative stress and mitochondrial dysfunction in the pathogenesis of insulin resistance and type 2 diabetes [35]. In diabetes mellitus, mitochondria are the major source of oxidative stress [35]. Free radicals can damage lipids, proteins, and DNA and play a role in diabetes complications. Down-regulated mitochondrial genes and perturbation in oxidative phosphorylation may demonstrate mitochondrial dysfunction in this cluster. Furthermore, MIF, which is a proinflammatory cytokine, is up-regulated in this cluster. A positive association has been reported between MIF plasma levels, FFAs concentration, and insulin resistance [36]. The perturbation of FFAs metabolism that possibly leads to an increase in FFAs was observed in this cluster. Evidence has demonstrated that FFAs can induce insulin resistance in skeletal muscle. FFAs may induce insulin resistance via mitochondrial dysfunction, increased ROS production and oxidative stress, and activation of inflammatory signals, which were observed in this cluster [37]. An increase in FFAs is associated with a decrease in adiponectin. ADIPOQ is mainly known as an adipokine, but the importance of adiponectin production in muscle cells has also been demonstrated [38]. This study also has reported an increased expression of adiponectin in response to rosiglitazone treatment in muscle cells and has confirmed the functional role of muscle adiponectin in insulin sensitivity. Adiponectin contributes to the glucose metabolism of muscle cells via increased insulin-induced serine phosphorylation of protein kinase B and inhibition of the inflammatory response [39]. Moreover, in this cluster, abnormalities in inositol phosphate metabolism with Myo-inositol deficiency were observed. Myo-inositol, one of the inositol isomers, participates in signal transduction and vesicle trafficking and is associated with glucose utilization. Clinical reports have suggested that the administration of inositol supplements is a therapeutic approach in insulin resistance and improves glucose metabolism [40]. Figure S4 in Additional file 1 shows an overview of abnormalities in this cluster.
Cluster 2: ER-stress and inflammation
Surprisingly, no significantly dysregulated pathway was found in the second cluster. Therefore, we compared the phenotypic features of people in each cluster with healthy individuals. Interestingly, this cluster was very similar to the healthy state with respect to blood glucose and insulin levels. Therefore, people in this group may be at an early stage of diabetes onset, with no apparent change in their metabolism yet. However, using differential gene expression analysis, changes in the expression of non-metabolic genes (e.g., overexpression of OPN, OPG, CHAC1, and ERN1, and down-regulation of SERCA1) were observed in this cluster. These genes are related to diabetes by promoting ER stress and inflammation. OPN and OPG play roles in inflammation, insulin resistance, prediabetes, and diabetes. A recent study has demonstrated that OPN and OPG levels in pre-diabetic subjects are increased, and alterations in OPN and OPG might be involved in the pathogenesis of prediabetes and T2DM [41,42]. Obese mice lacking osteopontin have shown improved whole-body glucose tolerance and insulin resistance, along with decreased markers of inflammation [43]. In addition, ER stress can induce the expression of OPN and OPG. Recent pieces of evidence have supported the presence and role of ER stress in muscle [44][45][46]. In this cluster, SERCA1, an intracellular membrane-bound Ca2+-transport ATPase enzyme encoded by the ATP2A1 gene, was down-regulated. Dysregulation of SERCA promotes ER stress [41]. SERCA1 resides in the sarcoplasmic or endoplasmic reticula of muscle cells and contributes to the modulation of cellular Ca2+ homeostasis within the physiological range. Lower SERCA expression may lead to reduced Ca2+ accumulation in the ER lumen and ER dysfunction. A high luminal calcium concentration is essential for proper protein folding and processing. Ca2+ depletion can result in the accumulation of unfolded proteins and can trigger the unfolded protein response (UPR) and cell death [47]. A high-fat diet and obesity induce ER stress in muscles and subsequently suppress insulin signaling [48]. Antidiabetic compounds such as azoramide and rosiglitazone have been demonstrated to induce SERCA expression and increased accumulation of Ca2+ in the ER [49,50]. A schematic representation of abnormalities in cluster 2 is shown in Figure S5 of Additional file 1.
Cluster 3: perturbation in IRS-mediated insulin signaling
In cluster 3, the differential gene expression analysis revealed perturbation of insulin signaling and inflammation. Results showed down-regulation of the insulin-responsive genes HK2, EGR1, and CIDEC, consistent with insulin resistance through deficient insulin signaling. Furthermore, overexpression of MSTN and ERBB3 was found. Myostatin has been shown to induce insulin resistance by degrading IRS1 proteins [51] and diminishing insulin-induced IRS1 tyrosine phosphorylation, thus interrupting the insulin signaling cascade [52]. In addition, treating HeLa cells with myostatin has suppressed HK2 expression [53]. Evidence has revealed that stress-induced transactivation of ERBB2/ERBB3 receptors triggers a PI3K cascade leading to the serine phosphorylation of IRS proteins [54,55]. Overexpression of ERBB3 may enhance PI3K activity, implicating ERBB proteins in stress-induced insulin resistance. Taken together, MSTN and ERBB3 can lead to serine phosphorylation of IRS proteins, reducing their tyrosine phosphorylation and promoting their degradation. Since the expression of insulin-regulated genes is positively correlated with insulin sensitivity, down-regulation of the HK2, EGR1, and CIDEC genes in this group possibly reflects insulin resistance through deficient insulin signaling. In addition, in the metabolic analysis, lower phosphorylation of glucose with subsequent perturbation in the glycolysis and TCA pathways was observed. Moreover, dysmetabolism of branched-chain amino acids was observed in the metabolic analysis. A mechanism involving leucine-mediated activation of the mammalian target of rapamycin complex 1 (mTORC1) has been proposed to link higher levels of BCAAs and T2DM [56]. This activation results in the serine phosphorylation of IRS1 and IRS2 and subsequent uncoupling of insulin signaling at an early stage. A brief representation of abnormalities in this cluster is shown in Figure S6 of Additional file 1.
The cluster-based study can improve understanding of T2DM
Our analysis showed that at the early stage of diabetes, the associated changes at the gene expression level in skeletal muscle are small compared to healthy subjects. Moreover, the clustering of patients leads to the identification of abnormalities that are usually hidden in cohort studies. For example, dysregulation of genes such as MIF, ATP2A1, GADD45G, EEF2, EGR1, CIDEC, and MSTN, and perturbations in several reactions implicated in BCAAs metabolism, folate metabolism, and the pentose phosphate pathway were only observed in our cluster-based analysis. In a cohort study, a sample consisting of several subjects is gathered and examined (Figure S7 of Additional file 1). This makes it possible to see only an approximate average of the features in the samples, and as a result, some of the abnormalities are masked. In a cluster-based study, a sample collected as in a cohort study is broken down into sub-groups so that the members within each subgroup are most similar to one another and differ from the members of other subgroups. Each sub-group is then analyzed individually (e.g., here we divided the diabetic group into three subgroups). The cluster-based analysis in this study led to the identification of more dysregulated genes and pathways that are specific to each cluster. Therefore, for a progressive and heterogenic disease like T2DM, applying a cluster-based study will enhance our understanding of the factors involved in disease pathogenesis. Focusing on homogeneous sub-groups in a heterogenic disease such as T2DM may improve the success of therapeutic strategies.
Conclusion
In this study, the changes in gene expression patterns of newly diagnosed diabetic patients were analyzed using supervised and unsupervised classification methods. Using only gene expression data, it is possible to discriminate T2DM individuals from healthy controls with approximately 90% accuracy. Clustering of diabetic patients according to their gene expression patterns and subsequent more in-depth analysis of each cluster unraveled specific abnormalities leading to insulin resistance in each cluster. Based on the observed results in this work, it seems that the disease has the potential to be subtyped based on the gene expression patterns. This is a pilot study, and further empirical analysis is still needed to confirm our findings. We propose that using the unsupervised clustering of diabetic patients in combination with their real clinical profiles helps to find significant molecular subtypes of T2DM with specific abnormalities. This approach potentially will lead to better therapeutic measures in each subtype in the future.
Additional file 1: Figures S1-S7 show the box plot of the SVM classification evaluation, box plots related to individuals' characteristics in each cluster, and schematic representations of abnormalities in each cluster. Tables S1-S3 present the KEGG pathway enrichment analysis of each cluster.
Additional file 2. The complete list of differentially expressed genes in each cluster, top-ranked genes with Ensemble ID, perturbed reactions obtained from metabolic modeling in each cluster.
Digital Finance, Environmental Regulation, and Green Technology Innovation: An Empirical Study of 278 Cities in China
Digital finance provides an enabling foundation for green technology innovation, and effective environmental regulation helps to achieve green and sustainable development. This article selects Chinese urban panel data from 2011 to 2019 to explore the mechanism through which digital finance and environmental regulation influence green science and technology innovation capacity. It is found that the extensive financing channels and strong information-matching ability of digital finance have a significant promoting effect on local green science and technology innovation. Moreover, government environmental regulation not only facilitates the development of green technology innovation locally and in nearby regions, but also strengthens the utility of digital finance in driving green science and technology innovation. Further research found that the influence of digital finance and environmental regulation on green science and technology innovation capacity exhibits regional heterogeneity, and only digital finance in Central China can promote green science and technology innovation in both local and adjacent areas. Therefore, the government should continue to promote the development of digital finance, optimize environmental regulations by increasing environmental protection subsidies and creating a green innovation environment, and further stimulate willingness to innovate green technologies. At the same time, it is also important to pursue coordinated development and governance with neighboring regional governments.
Introduction
Since the reform and opening up, China has created a number of "China miracles", but environmental protection has been neglected in the rush to boost economic growth. After entering the new normal, China's economy has begun to aim for high-quality development. This goal sets higher requirements for economic progress and environmental protection. However, China ranked only 120th out of 180 countries in the 2020 Environmental Performance Index, illustrating the imbalance between high-quality economic development and environmental protection. In order to balance "economic performance" and "environmental performance", since 2012, China has gradually introduced and improved laws and regulations to protect ecological civilization, aiming to promote green transformation through strict environmental regulations.
Traditional innovation considers only technological progress and economic development, while green technology innovation must also take ecological civilization into account. Therefore, in order to achieve green technology innovation, the innovating party faces higher technical standards, capital investment, financing costs, and risks [1]. Thus, realizing green technology innovation requires good financial services as a prerequisite, together with correct guidance from government environmental regulations [2]. However, it is difficult for traditional financial services to meet the capital needs of many enterprises for green technology innovation because of their high thresholds, high costs, and low efficiency [3]. Therefore, digital finance, built on the continuous development of digital technology, has gradually attracted attention from various circles. Digital finance can solve the capital problem in the process of green technology innovation by lowering thresholds, improving resource allocation, alleviating information asymmetry, reducing transaction costs, widening inter-regional spillover channels, and other methods [2,4,5]. So, as a new form of financial service, can digital finance drive green technology innovation to achieve sustainable development? Is digital finance affected by government environmental regulation in the process of influencing green technology innovation? Is there a spatial spillover effect of digital finance on green technology innovation?
To answer the above questions, the mechanism studied in this paper covers digital finance, environmental regulation, and green technology innovation. Based on panel data for 278 cities from 2011 to 2019, a spatial Durbin model is constructed to explore the impact of digital finance and environmental regulation on green technology innovation. The main research contents of this paper are as follows: first, the influence of digital finance and environmental regulation on green technology innovation is discussed; second, the moderating role of environmental regulation in the process of digital finance affecting regional green technology innovation is explored; third, the marginal effect of digital finance and environmental regulation on green technology innovation is analyzed; fourth, the 278 cities are divided into seven regions (East China, South China, North China, Central China, Southwest China, Northwest China, and Northeast China), and, from the perspective of regional heterogeneity, the influence mechanism of digital finance and environmental regulation on green technology innovation is discussed in depth.
The rest of this paper is arranged as follows: Section 2 reviews the relevant literature. Section 3 proposes the research hypotheses. Section 4 introduces the model setting and data description. Section 5 presents the empirical analysis, robustness testing, and regional heterogeneity analysis and discussion. Section 6 puts forward the research conclusions and relevant policy suggestions. Section 7 puts forward research limitations and prospects.
Literature Review
Green technology innovation is an innovation activity with high investment, high risk, and long cycles. Good financial services are key to enhancing science and technology capabilities [6,7]. Digital finance is a new financial service model that integrates the traditional financial industry with big data, the Internet, cloud computing, and other information technologies [8,9]. The G20 High-Level Principles for Digital Financial Inclusion adopted at the G20 Hangzhou Summit in 2016 advocated the development of inclusive finance relying on digital technologies and included relevant indicators of digital finance in the evaluation system of inclusive finance, which greatly promoted the development of digital finance. At present, there have been abundant discussions on the relationship between digital finance and technological innovation in academic circles. At the micro level, Lin [10] points out that fintech can reduce the financing risk that information asymmetry imposes on enterprise technological innovation. Subsequently, Tang et al. [11], taking Chinese listed companies as research objects, argued that digital finance could promote technological innovation output by broadening financing channels and reducing financing costs. At the macro level, scholars have reached a relatively consistent conclusion that digital finance has the advantages of low thresholds, wide coverage, low transaction costs, and a high resource allocation rate, which is of far-reaching significance for realizing technological innovation [12,13]. For example, Nie et al. [14], using a SYS-GMM model, found heterogeneity in the promotion effect of digital finance on regional technological innovation. Drawing on the trickle-down effect, Xu [15] found through a spatial econometric model that digital finance can also drive technological innovation in neighboring areas. In addition, some scholars have discussed the relationship between digital finance and green technology innovation. For example, Yu et al. [16] pointed out that digital finance can significantly promote green technology innovation on family farms and argued that promoting the development of digital finance is of great significance to the sustainable development of agriculture. Habiba et al. [17], taking 12 major carbon-emitting countries as research objects, found that green technological innovation is a key factor in reducing carbon emissions and achieving sustainable development, and that digital finance can effectively promote the progress of green technological innovation. When exploring the impact of digital finance on carbon emissions, Lee [18] concluded that green technology innovation plays an intermediary role.
Environmental regulations are environmental laws and regulations formulated by the government for the purpose of protecting the environment, aiming to guide economic agents to make decisions that improve the environment, reduce pollutant emissions while improving overall economic benefits, and achieve the goal of the sustainable development of technology and the environment [19]. In the 1960s, neoclassical economic theory, from a static perspective, pointed out that under government environmental supervision, enterprises must pay large environmental protection costs, which are bound to occupy the R&D funds originally intended for innovation activities, resulting in an innovation crowding-out effect; therefore, environmental regulation would inhibit economic development [20,21]. Porter put forward a different point of view in 1991. Porter believed that with economic development, the production technology and equipment of enterprises are constantly upgrading, and the key to environmental protection has shifted from process to result. Therefore, the environment in which enterprises operate should be regarded as dynamic, and the impact of environmental regulation on economic development should be studied from a dynamic perspective. On this basis, the Porter hypothesis was proposed. Porter [22], as well as Porter and van der Linde [23], believed that strict and effective environmental regulations can guide enterprises to voluntarily strengthen their investment in green technology R&D, enhance their competitive advantages, and achieve a win-win balance between economic performance and environmental performance. Since the Porter hypothesis was put forward, scholars have continued to discuss the relationship between environmental regulation and green technology innovation. However, opinion remains divided into three camps. The first camp, based on the Porter hypothesis, mainly believes that environmental regulation can effectively promote green technology innovation [24,25]. Li et al. [26] pointed out that the financing availability of large enterprises is relatively high; therefore, in the face of strict environmental regulations, such enterprises will reduce environmental costs and improve resource utilization through green technology innovation, so as to enhance their competitive advantages and achieve sustainable development. Zhang et al. [27] studied 33 countries and concluded that environmental regulation has a significant incentive effect on green patent output. The second camp, supported by neoclassical economic theory, holds that environmental supervision inhibits technological innovation ability [28,29]. Lanoie et al. [30] found that the benefits generated by enterprise green technology innovation could not cover the costs generated in the process of environmental compliance; therefore, compared with green technological innovation, with its high investment, high risk, and long cycles, enterprises are more inclined to pay environmental penalties. Dechezleprêtre [31] believes that the environmental costs caused by environmental regulations occupy the funds originally intended for enterprise innovation activities, thus hindering the development of green technological innovation.
The third camp holds that the relationship between the two depends on preconditions, emphasizing the role of environmental regulation intensity, executives' environmental awareness, regional economic development level, financing, and other factors [32,33].
At the same time, many scholars have paid attention to the interactive relationship between digital finance and environmental regulation. For example, Shi et al. [34] point out that the synergy between digital finance and environmental regulation can effectively reduce environmental pollution and plays an important role in environmental governance. Li et al. [35] showed, through a study of urban panel data, that the interaction between digital finance and environmental regulation is conducive to the upgrading of urban industrial structure. Wang et al. [36] point out that, in promoting county-level economic growth, digital finance cannot do without the regulatory role of government intervention. In addition, Feng et al. [37], taking the intensity of regional environmental regulation as the threshold variable when exploring the relationship between digital finance and green technology innovation, found that digital finance significantly promoted green technology innovation only in regions with stricter environmental regulation.
According to the above literature, scholars have made many achievements in research on the relationships between digital finance and technological innovation and between environmental regulation and green technological innovation. However, if we place digital finance, environmental regulation, and green technology innovation in one research framework, the existing research shows three characteristics. First, the literature pays more attention to the influence of digital finance on technological innovation and less to the influence mechanism of digital finance on green technological innovation. Second, scholars have conducted preliminary discussions on the relationship between digital finance and environmental regulation, but the discussions are few and scattered, focusing on economic development and environmental governance. Third, existing studies mostly adopt the perspective of spatial independence and thus neglect the spatial spillover effects of digital finance.
Compared with the existing research, this study makes three main contributions. First, in terms of research perspective, this paper constructs a research framework of digital finance, environmental regulation, and green technological innovation, in which environmental regulation is taken as a moderating variable, providing a new perspective for discussing the significance of digital finance. Second, in terms of research methods, considering the flow of financial elements, the migration behavior of enterprises, and the spillover effect of technological innovation, this paper chooses the spatial Durbin model to explore inter-regional interactions from the perspective of spatial correlation, further enriching the empirical research on digital finance, environmental regulation, and green technological innovation. Third, in terms of practical significance, this paper studies the heterogeneous impact of digital finance and environmental regulation on green technology innovation according to geographical location, providing a theoretical basis for the sustainable development of each region.
Research Hypotheses
As a high-risk, high-investment, long-cycle activity, green technology innovation is prone to being restricted by financing problems during its development [16,38,39]. A large amount of capital is needed to support the improvement of green technology innovation ability [40]. However, the problems of traditional finance, such as information asymmetry, high thresholds, and low service efficiency, all lead to its poor inclusiveness and its difficulty in effectively alleviating financing difficulties [41]. Therefore, with the integration of information technology, digital finance, with its strong inclusiveness, is gradually becoming known in all circles. Digital finance can increase the possibility of obtaining financing in a variety of ways, promote R&D investment, and strengthen green technology innovation so as to achieve high-quality economic development [42,43]. On the one hand, digital finance absorbs the investors in the market who are large in number, small in scale, and scattered, that is, the long-tail group [44,45], which brings more financial resources and can effectively broaden supply channels. Due to technical limitations and high service costs, traditional financial markets cannot effectively absorb these investors [46]. Supported by information technology, digital finance can process massive data at low cost and low risk, lower the service threshold, and draw broader long-tail groups into the financial market [47,48]. In addition, digital finance provides intelligent investment, supply-chain finance, consumer finance, and third-party payment, which broadens financing channels [49] and further provides the possibility of obtaining funds for green technological innovation.
On the other hand, the information-matching function of digital finance can alleviate information asymmetry and enhance the allocation efficiency of financial resources [50,51]. Most scholars believe that information asymmetry between the financial market and the innovation subject is one of the main reasons for inefficient resource allocation: the cost of information collection reduces investors' willingness to invest, so it is more difficult for the innovation subject to obtain external financing. Digital finance can evaluate credit through algorithms and big data, make credit information digitized and transparent, alleviate information asymmetry, improve the credit-resource mismatch, overcome external financing constraints, and help innovation subjects make reasonable and effective green technology innovation decisions [52], so as to comprehensively improve regional green technology innovation.
With the continuous improvement in the development level of digital finance, owing to the profit-seeking nature of capital and the liquidity of financial elements, digital finance can continuously radiate to neighboring areas through the "trickle-down effect", resulting in a spatial spillover effect [53]. In particular, with the support of digital technology, geographical distance is no longer a major obstacle to innovation subjects' access to financial services [54]. Therefore, the spatial spillover effect of digital finance strengthens the financial support and information exchange of neighboring regions and also promotes the green technology innovation ability of neighboring regions. Based on the above, this paper proposes research Hypothesis 1:
Hypothesis 1 (H1).
Digital finance can significantly enhance urban green technology innovation. At the same time, digital finance will also help improve the green technology innovation capacity of surrounding cities.
The Porter hypothesis holds that strict and effective environmental regulation can stimulate enterprises' willingness to innovate in green technology and gain competitive advantage by improving resource utilization, enhancing product performance, and meeting production emission standards [22,23]. When the government implements environmental regulations, enterprises need to invest in R&D personnel and funds and to purchase environmental protection equipment, emission permits, and the like, which can be collectively referred to as environmental protection costs. To avoid a decline in economic benefits, enterprises will pass environmental protection costs on into product prices. However, companies then lose customers as prices rise, resulting in a loss of profits. At this point, the government can force and guide enterprises to carry out green technology innovation through environmental regulation. With the upgrading of the technological structure, enterprises can improve resource utilization and reduce production costs and administrative penalty costs, thus obtaining a greater profit margin [55]. At the same time, an enterprise's environmental image can attract more green consumers, increase market share, and yield competitive advantages. In this process, the income from green technology innovation exceeds its cost, producing the "innovation compensation" effect [56,57]. Therefore, the government can further encourage enterprises' green innovation behavior through environmental regulation. Combined with imitative learning between governments, the relocation behavior of enterprises, and the technology spillover effect, this paper proposes research Hypothesis 2:
Hypothesis 2 (H2).
Environmental regulation can significantly improve urban green technology innovation. At the same time, environmental regulation also helps to improve the green technology innovation capacity of surrounding cities.
The good financial supply of digital finance provides a financial guarantee for the technological innovation activities of enterprises. However, whether the innovation results can improve the competitiveness of enterprises while also protecting the environment depends on the government's environmental regulation [58,59]. Under the constraints of environmental regulations, enterprises need to carry out green technological innovation to achieve environmental compliance. Both front-end green production innovation and back-end governance innovation require a large amount of capital [60]. At this point, if the investment in green innovation exceeds the enterprise's expectations and the financing cost is high, the enterprise will give up green transformation and turn to the negative behavior of reducing or stopping production [61]. Digital finance provides credit support to green transformation enterprises under the guidance of government environmental regulations and facilitates the green technological innovation of enterprises with low-cost and low-threshold financial services [40,62]. Therefore, while providing effective financial services, digital finance should align with the government's green development orientation to jointly promote green technology innovation ability and achieve the goal of high-quality economic development. Based on the above, this study proposes research Hypothesis 3:
Hypothesis 3 (H3).
Environmental regulation positively moderates the relationship between digital finance and urban green technology innovation capability.
Model Construction
This study constructs a spatial Durbin model to explore the mechanism of digital finance and environmental regulation on green technology innovation capability. The specific measurement model is as follows:

gt_it = ρ Σ_j W_ij gt_jt + α_1 df_it + α_2 er_it + β_1 Σ_j W_ij df_jt + β_2 Σ_j W_ij er_jt + γ X_it + θ Σ_j W_ij X_jt + v_it

where i represents a city (i = 1, 2, 3, ..., 278), t represents the year (t = 2011, 2012, 2013, ..., 2019), gt represents green technology innovation, df represents digital finance, er denotes environmental regulation, X denotes the control variables, α, β, γ, and θ are coefficients to be estimated, ρ stands for the spatial autoregressive coefficient, W stands for the adjacency spatial weight matrix, and v stands for the error term.
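No estimation code accompanies the paper; the sketch below only illustrates how the spatially lagged regressors of a Durbin-type specification can be built from a row-standardized adjacency matrix. The matrix W, the random data, and all names are assumptions for illustration; estimating the full model, with the spatially lagged dependent variable on the right-hand side, requires maximum-likelihood or GMM routines such as those in PySAL's spreg package.

# Illustrative construction of Durbin-type spatial lag terms.
# W: (n x n) row-standardized adjacency matrix; df, er: (n x T) panels.
import numpy as np

def row_standardize(A):
    # Divide each row of a 0/1 adjacency matrix by its row sum.
    s = A.sum(axis=1, keepdims=True)
    return np.divide(A, s, out=np.zeros_like(A, dtype=float), where=s > 0)

n, T = 278, 9
rng = np.random.default_rng(1)
A = (rng.random((n, n)) < 0.02).astype(float)   # placeholder adjacency
np.fill_diagonal(A, 0)
W = row_standardize(A)

df = rng.normal(size=(n, T))   # placeholder digital finance panel
er = rng.normal(size=(n, T))   # placeholder environmental regulation panel
W_df = W @ df                  # spatial lag of digital finance
W_er = W @ er                  # spatial lag of environmental regulation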
Explained Variable
Green Technology Innovation (lngt): Based on Lu's [63] approach, this study selects data on urban invention patent and utility model patent applications and uses the entropy weight method to construct a comprehensive index measuring the level of urban green technology innovation. The specific calculation process is as follows: first, the data indicators are normalized; second, the entropy weight method is used to calculate the weight of each indicator; finally, the comprehensive index of green technology innovation in each city is calculated.
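The entropy weight steps are summarized but not spelled out above; the following is a minimal sketch under common conventions (min-max normalization of "larger is better" indicators, Shannon entropy per column, weights from the degree of divergence); the patent data here are placeholders:

# Entropy weight method: composite index from a (cities x indicators) matrix.
import numpy as np

def entropy_weights(X, eps=1e-12):
    # Min-max normalize each indicator (assumed "larger is better").
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps)
    P = Z / (Z.sum(axis=0) + eps)          # column-wise proportions
    P = np.clip(P, eps, 1.0)               # avoid log(0)
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)   # Shannon entropy per indicator
    d = 1.0 - e                            # degree of divergence
    return d / d.sum()                     # entropy weights

# Placeholder: invention and utility model patent counts for 278 cities.
X = np.abs(np.random.default_rng(2).normal(size=(278, 2)))
w = entropy_weights(X)
index = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12) @ w  # composite index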
Core Explanatory Variable
Digital finance (lndf): Guo et al. [64], combining the characteristics of digital finance with data availability, constructed the Peking University Digital Inclusive Finance Index from three first-level dimensions, 12 second-level dimensions, and 33 specific indicators using micro-data. This index scientifically portrays the degree of development of digital inclusive finance in China. Therefore, this paper chooses its composite index as the measure of digital inclusive finance.
Environmental regulation (lner): Following Ye et al. [65], this study selected wastewater, sulfur dioxide, and smoke (powder) dust emissions to build the index system and comprehensively evaluated environmental regulation intensity through the entropy weight method. This is a positive indicator; that is, the greater the indicator value, the greater the intensity of environmental regulation.
Control Variables
In order to improve the scientific rigor of the empirical results on digital finance, environmental regulation, and green technology innovation, a series of control variables is added: (1) regional economic development level (lngdp), measured by gross regional product; (2) urban innovation environment (lnie), measured by the general budget of local finance; (3) degree of opening to the outside world (lnod), measured by the gross industrial output value of foreign-invested enterprises in the region; (4) urban environmental quality (lneq), measured by the harmless treatment rate of household garbage; and (5) urban industrial structure (lnis), measured by the proportion of the added value of the secondary industry in GDP.
In consideration of data integrity and reliability, panel data of 278 Chinese cities from 2011 to 2019 were selected in this study. The data come from The Research Center for Digital Finance of Peking University and The Statistical Yearbook of Chinese Cities. In this study, all data were logarithmically processed to mitigate the impact of heteroscedasticity, extreme values, and skewness on the estimated results. Statistical results of variable description are shown in Table 1.
Spatial Autocorrelation Test
Before the empirical analysis, the Moran index was used to analyze the spatial autocorrelation of digital finance and the green technology innovation ability of the 278 cities using the adjacent spatial weight matrix, in order to justify the spatial econometric model. Its calculation formula is as follows:

I = [n Σ_i Σ_j w_ij (x_i − x̄)(x_j − x̄)] / [(Σ_i Σ_j w_ij) Σ_i (x_i − x̄)²]

where n is the number of cities, w_ij is an element of the spatial weight matrix W, x_i is the observed value for city i, and x̄ is the sample mean. The Moran index is one of the most commonly used indicators of spatial correlation. The value of Moran's I generally lies in [−1, 1]. A Moran's I close to 0 indicates that the spatial distribution is random and there is no spatial autocorrelation; a value greater than 0 indicates positive correlation, and the larger the value, the more obvious the spatial correlation; a value less than 0 indicates negative correlation, that is, greater spatial heterogeneity. As can be seen from Table 2, the Moran's I of digital finance and green technology innovation from 2011 to 2019 lies between 0.060 and 0.126 and is significant at the 1% level, indicating a strong spatial correlation between digital finance and green technology innovation. To observe the spatial agglomeration of digital finance, this paper draws local Moran scatter plots of digital finance in 2011 and 2019, as shown in Figure 1. Figure 1 shows that digital finance has a spatial agglomeration effect and strong spatial correlation.
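For concreteness, a small sketch of the global Moran's I computation defined above; the weight matrix and data are placeholders, not the paper's:

# Global Moran's I for one cross-section.
import numpy as np

def morans_I(x, W):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = (W * np.outer(z, z)).sum()        # sum_i sum_j w_ij z_i z_j
    return len(x) / W.sum() * num / (z @ z)

rng = np.random.default_rng(3)
W = (rng.random((278, 278)) < 0.02).astype(float)  # placeholder adjacency
np.fill_diagonal(W, 0)
x = rng.normal(size=278)                           # placeholder variable
print(morans_I(x, W))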
Model Selection
In this study, the LM test and its robust version were used to judge the spatial distribution properties of each variable and to choose the spatial econometric model. As can be seen from the LM test results in Table 3, both tests pass the significance test and significantly reject the null hypothesis of no spatial effects, so a panel model with spatial effects should be selected. Second, the LR tests of spatial Durbin models (1) and (2) significantly reject the hypothesis that they degenerate into a spatial error model or a spatial lag model, which supports the choice of the spatial Durbin model. Meanwhile, the Hausman test strongly rejects the null hypothesis; that is, the Durbin model with fixed effects is more suitable for this study than the Durbin model with random effects. Therefore, this paper selects the fixed-effects spatial Durbin model for the spatial econometric analysis.
Spatial Model Results
This study examined the relationship between digital finance, environmental regulation, and green technology innovation using the spatial Durbin model with both time and city fixed effects. Model (1) mainly examines the impact of the two explanatory variables on green technology innovation, while Model (2) includes the interaction term of digital finance and environmental regulation and comprehensively considers the interaction among the three. As can be seen from the results of Model (1) in Table 4, the regression coefficient of digital finance on local green technology innovation is 2.721, which passes the significance test; however, the influence of digital finance on green technology innovation in neighboring areas is not significant, so Hypothesis 1 is only partially verified. This indicates that digital finance promotes only local green technology innovation: the wide financing channels and strong information-matching ability of digital inclusive finance stimulate the willingness of innovation subjects to innovate, so digital finance has a significant positive impact on local green technology innovation ability, while green technology innovation is affected only by the development of digital finance in its own region and not by that of other regions, which is consistent with the research conclusion of Zhang et al. [66]. The possible reason lies in differences in the inter-regional development level of digital finance and in the degree of government interaction, which produce regional heterogeneity in the spatial spillover effect of digital financial development. Combined with the regional heterogeneity analysis in Section 5.5, the significant inhibition effect in Southwest China and Northwest China may offset the significant promotion effect in Central China, resulting in an insignificant total-sample estimation coefficient. The regression coefficient of environmental regulation on local green technology innovation is 0.092, which passes the significance test: environmental regulation promotes local green technology innovation. Moreover, environmental regulation also has a positive impact on green technology innovation in neighboring areas. On the one hand, as the intensity of government environmental regulation increases, enterprises reduce environmental protection costs and improve resource utilization through green technological innovation, achieving common progress in economic and environmental benefits through the "innovation compensation" effect. On the other hand, to avoid excessive environmental costs, some small and medium-sized high-tech enterprises move to neighboring areas with relatively low environmental regulation intensity, and the resulting flow of capital, information, technology, and personnel promotes green technology innovation in those areas. Hypothesis 2 is therefore supported. The regression coefficient of the interaction term between digital finance and environmental regulation in Model (2) is significantly positive, indicating that environmental regulation plays a positive moderating role in the process of digital finance affecting local green technology innovation. Hypothesis 3 is supported.
That is, the government's environmental regulation plays a positive role in the process of digital finance promoting green technology innovation. The empirical study also shows that the level of economic development, the degree of urban openness, and the quality of the urban environment all have a significant promoting effect on green technology innovation, while the industrial structure has a significant negative effect: the higher the proportion of the secondary industry, the more unfavorable it is to the progress of the urban green technology innovation level. The spatial autoregressive coefficient of the spatial Durbin model passes the significance test at the 1% level, indicating that the level of local green technology innovation contributes to the improvement of the level of green technology innovation in neighboring areas; that is, green technology innovation has a spatial spillover effect.
Considering that digital finance and environmental regulation may have a lagged effect on green technology innovation, this study re-estimates Models (1) and (2) with digital finance and environmental regulation lagged by one period. The test results are shown in Models (3) and (4) in Table 4. Compared with Models (1) and (2), the results with a one-period lag are basically consistent with those of the current period. Therefore, the following robustness test adopts the one-period-lagged variables to further test the model.
Spatial Effect Decomposition
To further illustrate the marginal effects of digital finance and environmental regulation on green technology innovation, this study performs a spatial effect decomposition, dividing the effects into direct, indirect, and total effects. The direct effect includes the direct impact of an explanatory variable on local green technology innovation plus the feedback effect of the neighboring explained variable on local green technology innovation. The indirect effect reflects the influence of a local explanatory variable on green technology innovation in neighboring areas. Table 5 shows the decomposition results of the spatial effects of digital finance and environmental regulation. According to the direct-effect test results, digital finance has a significant positive influence on local green technology innovation; that is, every 1% increase in the development level of digital finance improves the local green technology innovation level by 2.725%. Environmental regulation plays an important role in promoting local green technology innovation; that is, when the intensity of environmental regulation increases by 1%, the level of local green innovation increases by 0.091%. Compared with the fixed-effect parameter estimates of the spatial Durbin model in Table 4, there are some differences in the parameter estimates for digital finance and environmental regulation. For example, the direct effect of digital finance on local green technology innovation is 2.725, while the regression coefficient estimated by the spatial Durbin model is 2.721; the difference between the two is caused by the feedback effect of digital finance on green technology innovation in nearby areas.
The estimation results of indirect effects show that the environmental regulation has a significant positive spillover effect, while the spillover effects of digital finance do not pass the significance test. Each 1% increase in the intensity of environmental regulation has a 0.612% promotion effect on the green technology innovation ability of neighboring areas.
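The paper does not show how Table 5 was computed; the decomposition is conventionally obtained from the LeSage and Pace partial-derivatives matrix, and the sketch below illustrates that calculation under assumed (not estimated) values of rho, beta, and theta:

# LeSage-Pace effect decomposition for one regressor in an SDM:
# partial-derivatives matrix S = (I - rho*W)^(-1) (beta*I + theta*W).
import numpy as np

def sdm_effects(rho, beta, theta, W):
    n = W.shape[0]
    S = np.linalg.solve(np.eye(n) - rho * W, beta * np.eye(n) + theta * W)
    direct = np.trace(S) / n               # average own-partial derivative
    total = S.sum() / n                    # average row sum
    return direct, total - direct, total   # direct, indirect, total

rng = np.random.default_rng(4)
A = (rng.random((278, 278)) < 0.02).astype(float)
np.fill_diagonal(A, 0)
W = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-standardized weights
print(sdm_effects(rho=0.1, beta=2.7, theta=0.3, W=W))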
Robustness Test
To check the reliability of the regression results, observations within roughly 3% of the sample maximum and minimum values were excluded for robustness testing, and the results for each indicator after excluding outliers are reported in the columns of Table 6. From the results in Table 6, the estimated coefficient values of the variables remain significant, the coefficients fluctuate only within a narrow range, and their signs do not change. The results are thus basically consistent with the previous spatial regression results, which further confirms the robustness of the empirical findings in this study.
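A minimal sketch of this kind of trimming; the exact cutoff rule is an assumption about how "around 3%" was implemented:

# Drop observations outside the 3rd-97th percentile of a variable.
import numpy as np

def trim(y, X, lower=0.03, upper=0.97):
    lo, hi = np.quantile(y, [lower, upper])
    keep = (y >= lo) & (y <= hi)
    return y[keep], X[keep]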
Heterogeneity Analysis
To further analyze the regional differences in the effects of digital finance and environmental regulation on green innovation, the 278 cities were divided into seven parts, namely East China, South China, North China, Central China, Southwest China, Northwest China, and Northeast China, and each region was tested; the results are shown in Table 7. Comparing the total effects of digital finance, we find that digital finance in Northeast, South, and Central China makes a significant contribution to green technology innovation, and judging from the elasticity coefficients, digital finance has the strongest promotion effect in South China. On the contrary, digital finance inhibits green technology innovation in North China and Southwest China; the elasticity coefficient for Southwest China is -0.985, indicating that the level of green technology innovation in Southwest China decreases by 0.985% when digital finance increases by 1%. To be specific, digital finance promotes local green technology innovation everywhere except North China, and digital finance has a spillover effect only in Central China, Northwest China, and Southwest China. In conclusion, the impact of digital finance on green technology innovation in China is regionally heterogeneous, so it is difficult for digital finance to comprehensively promote green innovation nationwide.
Table 7. Heterogeneity analysis of the impact of regional digital finance and environmental regulation on green technology innovation.
According to the results in Table 7, we find that the total effect of environmental regulation passes the significance test only in East China, North China, and Central China. Among them, the total effect of environmental regulation in East China and Central China is positive, indicating that environmental regulation there promotes green technology innovation, and the promotion effect in East China is greater than that in Central China. Xu et al. [67] believe that the role of environmental regulation is closely related to the degree of economic development. East China has a high-quality economy, so its environmental regulations are more scientific and complete; therefore, under strict and effective supervision, green technology innovation can be further promoted. Central China is at a middle level of economic development and largely undertakes the energy-intensive industries transferred from East China, so its regulation is limited; when environmental regulation is strengthened in Central China, its promotion effect on green technology innovation is smaller than in East China. On the contrary, environmental regulation inhibits the progress of green technology innovation in North China, with negative spillover effects that hinder the progress of green technology innovation in neighboring areas; that is, environmental regulation in North China not only hinders local innovation and progress but also inhibits neighboring regions. Xin [68] notes that North China is an important political center of China where a large number of technological enterprises gather; therefore, when the intensity of environmental regulation increases in North China, some heavily polluting enterprises move to neighboring areas, which ultimately suppresses the level of green technology innovation in both local and surrounding areas.
Discussion of Empirical Results
This paper examines the influence mechanism among digital finance, environmental regulation, and green technological innovation by constructing a spatial Durbin model. As seen from the robustness test results in Table 6, the model passes the test, indicating that the conclusions are reliable. Meanwhile, according to the empirical results, although digital finance significantly promotes local green technology innovation, its spatial spillover effect on neighboring areas fails the test. This conclusion is consistent with Xie's [69] research on digital finance and regional technological innovation based on provincial panel data: although digital finance has a significant spatial agglomeration effect, it fails to drive green technology innovation in neighboring areas. Secondly, the empirical results show that environmental regulation can not only promote local green technology innovation but also stimulate the improvement of green technology innovation levels in neighboring areas through a positive spatial spillover effect. This is consistent with the conclusions of Zheng et al. [70], who analyzed the impact of environmental regulation on industrial green innovation, and Zhang et al. [71], who examined environmental regulation and environmental governance. Thirdly, the study also shows that environmental regulation can strengthen the promotion effect of digital finance on green technology innovation; that is, in the process of digital finance affecting green technology innovation, environmental regulation plays a positive moderating role. Shi et al. [34], Li et al. [35], and Wang et al. [36] reached similar conclusions when exploring the impact of digital finance and environmental regulation on environmental pollution, industrial structure upgrading, and economic growth. Finally, the relationship among digital finance, environmental regulation, and green technology innovation is found to be regionally heterogeneous. Green technology innovation simultaneously considers technological progress, economic performance, and environmental performance; thus, spurred by digital finance and environmental regulation, companies can make more profit from cleaner methods of production and achieve sustainable economic development through green technology innovation. In summary, this paper further enriches the research on digital finance, environmental regulation, and green technology innovation, and provides a theoretical basis for the government to adopt relevant mechanisms and thus achieve regional green transformation and upgrading.
Conclusions and Suggestions
This paper selects panel data for 278 cities in China from 2011 to 2019 and builds a spatial Durbin model from a spatial correlation perspective to empirically investigate the relationship between digital finance, environmental regulation, and green technology innovation, together with robustness tests. Then, considering regional heterogeneity, the 278 cities were divided into seven parts according to geographical location, and the relationships among the three were discussed for each region. The results are as follows.
• Digital finance has an important role to play in promoting local green technology innovation. It is clear that the low threshold, low cost, high efficiency, and informatization of digital finance encourage local enterprises' green technology innovation through channels such as improving financing availability, reducing financing costs and transaction time, and improving the resource allocation rate.
• Government environmental regulation facilitates the development of green technology innovation in local and adjacent areas. For one thing, this shows that the Porter hypothesis is valid in China. For another, environmental governance also reflects the learning and competition relationships among local governments in China: when local governments force companies to innovate in green technologies by enforcing strict environmental regulations, neighboring governments also strengthen environmental regulations to achieve high-quality development.
• Environmental regulation plays a positive moderating role in the process of digital finance affecting green technological innovation. This shows that, in the process of digital finance promoting green technology innovation, government environmental regulation plays an important guiding role.
• There is regional heterogeneity in the relationship between digital finance, environmental regulation, and green technology innovation. Among them, environmental regulation in North China inhibits local green technology innovation the most, while digital finance in Central China can promote green technology innovation not only in the region but also, through a spillover effect, in neighboring regions.
• The development of the secondary industry hinders the progress of green industry and further inhibits the level of urban green technology innovation.
In summary, we put forward the following policy recommendations. First, the government should continue to promote the development of digital finance and accelerate the innovative integration of finance and technology by improving digital finance infrastructure, promoting the construction of a credit evaluation system, and guiding more practitioners to join; additionally, it is essential to standardize the financial market service system and strengthen information protection. Second, the government should fully consider regional heterogeneity when formulating environmental regulations, combining regional characteristics to guide enterprises toward green technology innovation through environmental subsidies and policy publicity, so as to coordinate environmental protection and economic progress. Local governments should also break the restrictions of administrative regions and strengthen inter-regional communication and cooperation when formulating and implementing environmental regulations, giving full play to the role of environmental regulation in improving green technological innovation and working together to achieve green upgrading and transformation. Third, the government should vigorously promote the transformation of the secondary industry; to achieve high-quality economic development, the government needs to create a good industrial innovation environment and stimulate the willingness of the secondary industry to innovate.
Research Limitations and Future Research
There are some limitations in this study. First, owing to limited data availability, this paper, like much of the existing literature, constructs the comprehensive evaluation index of green technology innovation using only green patent data, without incorporating related data such as R&D personnel, R&D funding, and sales of green products into the evaluation system. In the future, data will continue to be mined to further improve the comprehensive index of green technological innovation. Second, this study focuses more on the impact of digital finance on green technology innovation and therefore does not provide a detailed delineation of environmental regulation. In later research, environmental regulation should be divided into command-and-control, market-incentive, and voluntary types according to the different regulatory tools, so as to further explore its heterogeneous impacts.
Clustering and Coupled Gating Modulate the Activity in KcsA, a Potassium Channel Model*
Different patterns of channel activity have been detected by patch clamping excised membrane patches from reconstituted giant liposomes containing purified KcsA, a potassium channel from prokaryotes. The more frequent pattern has a characteristic low channel opening probability and exhibits many other features reported for KcsA reconstituted into planar lipid bilayers, including a moderate voltage dependence, blockade by Na+, and a strict dependence on acidic pH for channel opening. The predominant gating event in this low channel opening probability pattern corresponds to the positive coupling of two KcsA channels. However, other activity patterns have been detected as well, which are characterized by a high channel opening probability (HOP patterns), positive coupling of mostly five concerted channels, and profound changes in other KcsA features, including a different voltage dependence, channel opening at neutral pH, and lack of Na+ blockade. The above functional diversity occurs correlatively to the heterogeneous supramolecular assembly of KcsA into clusters. Clustering of KcsA depends on protein concentration and occurs both in detergent solution and, more markedly, in reconstituted membranes, including giant liposomes, where some of the clusters are large enough (up to micrometer size) to be observed by confocal microscopy. As in the allosteric conformational spread responses observed in receptor clustering (Bray, D., and Duke, T. (2004) Annu. Rev. Biophys. Biomol. Struct. 33, 53-73), our tenet is that physical clustering of KcsA channels is behind the observed multiple coupled gating and diverse functional responses.
During the last decades, the use of high-resolution electrophysiological techniques to study ion channels has provided a large amount of information on functional aspects of these important membrane proteins. Such detailed information on channel function, however, was not accompanied by structural knowledge until recently, when several structurally simpler homologues of mammalian ion channels, found in extremophile bacteria or Archaea and remarkably resistant to harsh experimental conditions, were purified and crystallized and their structures solved at high resolution by x-ray diffraction methods (1)(2)(3)(4). A K+ channel from the soil bacterium Streptomyces lividans named KcsA (1), a homotetramer made up of identical 160-amino acid subunits, was the first of such structures to be solved (5,6), and, although the x-ray structure corresponds to a closed channel conformation, it has contributed much to our current understanding of ion selectivity and permeation. Ironically, there was little or no functional information on KcsA by the time its structure was solved, and several groups then undertook the task of characterizing its single-channel properties, which has been surrounded by controversy. For instance, Schrempf's group, discoverers of KcsA in S. lividans, reported a strong dependence of channel opening on acidic pH, multiple conductance states with opening probabilities near 0.5, and unusual permeabilities to Na+, Li+, Ca2+, or Mg2+, along with K+ (7)(8)(9). In contrast, Miller's group (10,11), using purified KcsA reconstituted into planar lipid bilayers, found a single conductance state with a much lower opening probability, as well as orthodox ion selectivity and other properties that validate KcsA as a bona fide K+ channel and as a faithful structural model for these molecules. The above discrepancies were never fully explained but, still, it became generally accepted that KcsA behaves as a moderately voltage-dependent, K+-selective channel with a characteristic low opening probability and the peculiar property of opening only in response to very acidic pH conditions at the intracellular side of the membrane. More recently, however, it was found that KcsA also opens at neutral pH when subjected to an outward K+ gradient (12). Furthermore, it has been proposed that a more "physiological" version of KcsA might correspond to a supramolecular conductive complex in which the channel would coassemble with polyhydroxybutyrate and inorganic polyphosphates (13), which are abundant reservoir materials in many prokaryotes.
In this report we have used excised membrane patches from reconstituted giant liposomes containing purified KcsA. Through the analysis of a large number of patch clamp recordings, we found a clearly diverse functional behavior for the KcsA channel. The more frequent pattern of activity corresponds to the low opening probability, acidic pH-dependent channel referred to above, but other activity patterns have been detected as well, which are characterized by a high channel opening probability at both acidic and neutral pH. As an additional salient feature, the latter recordings show frequent coupled gating involving multiple channels. These observations are unprecedented, and we interpret them based on the additional finding that heterogeneous, "cluster"-like supramolecular assemblies of KcsA are formed, within which the channels adopt different, integrated behaviors.
EXPERIMENTAL PROCEDURES
Constructs and Mutants-The wild-type KcsA construct contained the kcsA gene of S. lividans cloned in-frame into the pQE30 vector (Qiagen), which provided ampicillin resistance and an N-terminal hexahistidine tag (14). The KcsA S22C mutant was obtained (QuikChange site-directed mutagenesis kit, Stratagene) by generating PCR fragments using pairs of complementary mutant primers, sense primer 5′-CTC GGG CGC CAC GGC TGT GCG CTG CAC TGG and antisense primer 5′-CCA GTG CAG CGC ACA GCC GTG GCG CCC GAG. The KcsA S22C mutant sequence was verified by dideoxynucleotide sequencing.
Protein Expression and Purification-Expression of the wild-type KcsA protein and the KcsA S22C mutant in Escherichia coli M15 (pRep4) cells, and their purification by affinity chromatography on a nickel-nitrilotriacetic acid-agarose column, were carried out as reported previously (14). The purified protein consisted primarily of the characteristic SDS-resistant tetramer, accompanied by monomeric KcsA as a minor component and, in sufficiently loaded SDS-PAGE gels, by higher molecular weight, SDS-resistant KcsA multimers. All of the above KcsA species were immunoreactive against commercial anti-His tag monoclonal antibodies (see the inset to Fig. 8A).
The expression yields and the SDS-PAGE profile of the KcsA S22C mutant were very similar to those exhibited by the wild-type KcsA. The protein concentration was determined by the DC-Protein colorimetric assay (Bio-Rad), relative to a bovine serum albumin standard. When expressed in molar terms, the protein concentration refers to KcsA tetramers. 1-125 KcsA was prepared by chymotrypsin hydrolysis of wild-type KcsA as described earlier (14).
Reconstitution of Proteins into Asolectin Lipid Vesicles and Preparation of Giant Liposomes-Batches of large unilamellar vesicles of asolectin (soybean lipids, type II-S, Sigma) were prepared at 25 mg/ml as described earlier (15) in 10 mM Hepes, pH 7.0, 100 mM KCl (reconstitution buffer) and stored in liquid N2. The purified DDM-solubilized protein (wild-type KcsA, 1-125 KcsA, or fluorescently labeled KcsA S22C derivatives, depending on the experiment) was mixed with the above asolectin vesicles previously resolubilized in 3 mM DDM. Reconstituted liposomes were formed by removing the detergent by gel filtration (14). The protein-containing reconstituted vesicles eluted in the void volume and were pooled, centrifuged for 30 min at 300,000 × g, resuspended at 1 mg of protein/ml in reconstitution buffer, divided into aliquots, and stored in liquid N2.
Multilamellar giant liposomes (up to 50-100 μm in diameter) were prepared by submitting a mixture of the reconstituted vesicles (usually containing 50 μg of protein) and asolectin lipid vesicles (25 mg of total lipids) to a cycle of partial dehydration/rehydration (15), except that the dehydration solution used here was 10 mM Hepes (potassium salt) buffer, pH 7, containing 5% ethylene glycol, and the rehydration solution was 10 mM Hepes (potassium salt) buffer, pH 7. As a control, each of the different batches of asolectin vesicles was also used to prepare protein-free giant liposomes. Liposome batches that posed difficulties in obtaining high-resistance seals (see below) or that showed erratic baselines in the patch clamp recordings, because of remaining detergent or other reasons, were discarded.
Electrophysiological Recordings-For patch clamp measurements of channel activity, aliquots (3-6 μl) of giant liposomes were deposited onto 3.5-cm Petri dishes and mixed with 2 ml of the buffer of choice for electrical recording (bath solution; usually 10 mM Mes buffer, pH 4, containing 100 mM KCl). Giga seals were formed on giant liposomes with borosilicate microelectrodes (Sutter Instruments) of 7-10 megohms open resistance, filled with 10 mM Hepes buffer, pH 7, 100 mM KCl (pipette solution). After sealing, excised inside-out patches were obtained by withdrawing the pipette from the liposome surface. Standard patch clamp recordings (16) were obtained using either Axopatch 200A (Axon Instruments, Union City, CA) or EPC-9 (Heka Electronic, Lambrecht/Pfalz, Germany) patch clamp amplifiers, at a gain of 50 mV/pA. The holding potential was applied to the interior of the patch pipette, and the bath was maintained at virtual ground (V = Vbath − Vpipette). An Ag-AgCl wire was used as the reference electrode through an agar bridge, and the junction potential was compensated when necessary. Routinely, the membrane patches were subjected to a protocol of pulses and/or voltage ramps. The protocol of pulses went from −200 to +200 mV, at 50-mV intervals, with 2 s of recording at each individual voltage, holding the patch back at 0 mV between the different voltage steps. The voltage ramps went from −200 to +200 mV during a 3-s scan. All measurements were made at room temperature. Recordings were filtered at 1 kHz, and the data were analyzed off-line with the pClamp9 software (Axon Instruments).
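As an illustration of the pulse protocol just described, the sketch below builds the command-voltage waveform; the sampling rate and the 0.5-s holding interval are assumptions, not values given in the text:

# Command-voltage waveform for the pulse protocol: steps from -200 to
# +200 mV in 50-mV increments, 2 s per step, returning to 0 mV between steps.
import numpy as np

fs = 10_000                                   # assumed sampling rate (Hz)
steps_mV = np.arange(-200, 201, 50)
hold = np.zeros(int(0.5 * fs))                # assumed 0.5 s at 0 mV
segments = []
for v in steps_mV:
    segments.append(np.full(int(2 * fs), v, dtype=float))  # 2-s test pulse
    segments.append(hold)                                   # back to 0 mV
command = np.concatenate(segments)            # mV, ready for D/A output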
Recordings from giant liposomes prepared from either 50 or 100 μg of wild-type KcsA protein and registered under identical experimental conditions (pH 4 in the bath and pH 7 in the pipette solution) exhibited qualitatively similar patterns of ion channel activity but differed in complexity (a larger number of events as the amount of protein increased) and in the percentage of silent patches, which went from only 9% (n = 23) when using 100 μg of protein to ~38% (n = 150) for 50 μg of protein. Thus, for practical purposes, we studied in more detail the giant liposomes made from 50 μg of protein, which became our "standard" experimental condition.
SDS-PAGE and Western Immunoblotting-For SDS-PAGE analysis, the protein-containing samples were mixed with an equal volume of electrophoresis sample buffer (20 mM Tris, pH 6.8, 20% glycerol, 0.1% bromphenol blue, and 4% SDS) and applied to a 13.5% acrylamide gel in the presence of 0.1% SDS (17). After electrophoresis, proteins were transferred onto a nitrocellulose membrane. Blots were incubated with 3% (w/v) bovine serum albumin in PBS-T (phosphate-buffered saline, pH 7.4, containing 0.05% Tween 20). The His-tagged KcsA was detected with a mouse monoclonal anti-Tetra-His antibody (1:1000 dilution, Qiagen) in PBS-T. After washing, the immunoblots were incubated with a secondary horseradish peroxidase-conjugated rabbit anti-mouse IgG (1:1000, Sigma) in PBS-T. Immunoreactive proteins were visualized with chemiluminescent ECL detection reagent (Amersham Biosciences).
Analytical Ultracentrifugation—Sedimentation velocity experiments were conducted in a Beckman Optima XL-I ultracentrifuge (Beckman Coulter) with an An50Ti eight-hole rotor and double-sector Epon-charcoal centerpieces. DDM-solubilized KcsA samples at protein concentrations ranging from 0.5 to 10 μM in 20 mM Hepes buffer, pH 7.0, containing 100 mM KCl and 5 mM DDM, were centrifuged at 40,000 rpm at 20 °C, and the absorbance at 280 nm was followed. Differential sedimentation coefficient distributions, c(s), were calculated by least-squares boundary modeling of the sedimentation velocity data using the program SEDFIT (18, 19).
Fluorescence Labeling of KcsA—Aliquots of the sulfhydryl-containing mutant KcsA S22C at 7 μM in 20 mM Hepes, pH 7, 100 mM KCl, and 5 mM DDM were treated for 1 h in the dark with a 10-fold molar excess of Tris(2-carboxyethyl)phosphine hydrochloride to keep the sulfhydryl groups in a reduced form. The maleimide Alexa probes (Alexa Fluor 546 C5-maleimide or Alexa Fluor 647 C2-maleimide; Molecular Probes) were dissolved in buffer and added in a 10-fold molar excess to the reduced KcsA samples. After a 2-h incubation at 4 °C, an excess of 2-mercaptoethanol was added to react with the remaining free probe. Finally, the fluorescently labeled KcsA was separated from the free fluorophores by gel filtration on Sephadex G-50 (medium), which also eliminates the minor population of monomeric KcsA present in the purified KcsA preparations. Monitoring of the absorbance at either 546 or 647 nm was used to define the elution profile, while the protein was detected by SDS-PAGE of the different fractions. Routinely, labeling yields of 20 to 30% of the available sulfhydryls were obtained.
Fluorescence Anisotropy Measurements—Stock solutions of Alexa 546-labeled KcsA in 10 mM Hepes buffer, pH 7.0, 100 mM KCl, and 5 mM DDM were subjected to successive dilutions with the same buffer to attain different protein concentrations. Steady-state fluorescence anisotropy ⟨r⟩ was determined at 25 °C in an SLM-8000C spectrofluorometer equipped with Glan-Thompson polarizers in the "L" format, by measuring the vertical (I_VV) and horizontal (I_VH) components of the fluorescence emission with excitation polarized vertically, as defined by ⟨r⟩ = (I_VV − G·I_VH)/(I_VV + 2·G·I_VH) (20), where the G factor (G = I_HV/I_HH) corrects for the transmission bias introduced by the detection system. Excitation and emission wavelengths were 525 and 574 nm, respectively. The protein and DDM concentrations were low enough to prevent scattering artifacts that could result in an artificial depolarization of the fluorescence. Similar measurements carried out using Alexa 647-labeled KcsA yielded essentially identical results.
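As a minimal sketch of this calculation (the standard L-format steady-state anisotropy; the intensity values in the usage note are purely illustrative):

    def anisotropy(I_vv, I_vh, I_hv, I_hh):
        # G = I_hv / I_hh corrects for the transmission bias of the
        # detection system; <r> = (I_vv - G*I_vh) / (I_vv + 2*G*I_vh).
        G = I_hv / I_hh
        return (I_vv - G * I_vh) / (I_vv + 2.0 * G * I_vh)

    # e.g. anisotropy(100.0, 40.0, 30.0, 32.0) -> ~0.36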
Fluorescence Resonance Energy Transfer Measurements—Alexa 546- and Alexa 647-labeled KcsA were used as the donor-acceptor pair for FRET measurements both in detergent solution (10 mM Hepes buffer, pH 7, 100 mM KCl, 5 mM DDM) and in reconstituted asolectin lipid vesicles. For the latter, the reconstituted vesicles were prepared at a fixed asolectin lipid to total protein weight ratio of 10:1 in 10 mM Hepes buffer, pH 7, 100 mM KCl. Because the donor concentration (≈6 μg of protein/ml) was not identical in the different samples, particularly in the reconstituted vesicles, FRET efficiency (E) was not calculated through the usual method of quenching of donor steady-state emission. Instead, two other approaches were used. In the first approach, E was determined by measuring the increase in fluorescence of the acceptor due to energy transfer and comparing this to the residual donor emission (21). For this, steady-state emission scans of the samples at different donor to acceptor ratios were recorded in an SLM-8000C spectrofluorometer at an excitation wavelength of 525 nm and at emission wavelengths from 540 to 750 nm at 1-nm intervals, and corrected for background and instrument response. The acceptor emission coming from its direct excitation at 525 nm was negligible in the reconstituted samples but not in detergent solution, where such contribution was always subtracted. Afterward, the spectra were normalized to the donor maximum at 574 nm, where there is no acceptor fluorescence. Then, the donor spectrum was subtracted from each of the donor-plus-acceptor spectra, and the integrated areas of the resulting curves were calculated (I_AD). The area under the donor spectrum was also calculated (I_D), and E then results from the quantum yield-weighted ratio of these areas, E = q_D·I_AD/(q_D·I_AD + q_A·I_D), where q_A (0.80) and q_D (0.85 (22)) are the experimentally determined quantum yields of the acceptor and donor, respectively.
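A minimal sketch of this enhanced-acceptor-emission estimate, following the photon-budget reasoning behind the expression above (I_ad and I_d are the integrated areas defined in the text; any additional corrections applied in ref. 21 are not reproduced here):

    def fret_efficiency_acceptor(I_ad, I_d, q_a=0.80, q_d=0.85):
        # Per photon absorbed by the donor, the acceptor emits E*q_a photons
        # via FRET and the donor emits (1 - E)*q_d photons, so
        # I_ad/I_d = E*q_a / ((1 - E)*q_d), i.e.
        # E = q_d*I_ad / (q_d*I_ad + q_a*I_d).
        return q_d * I_ad / (q_d * I_ad + q_a * I_d)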
The second approach estimates the transfer efficiency in samples with different donor concentrations from measurements of the donor fluorescence decay at different donor to acceptor ratios. Fluorescence decays were measured in a fluorescence lifetime instrument (Photon Technology International Inc.) using a proprietary stroboscopic detection technique (23, 24). The system used a GL-330 pulsed nitrogen laser pumping a GL-302 high-resolution dye laser. The dye laser output at 525 nm was fitted to the sample compartment via fiber optics. The emission wavelength was 574 nm. Fluorescence decays were analyzed using a non-linear least-squares regression method. The average decay times, which are proportional to the steady-state intensities, were calculated from the results of multiexponential fits using the expression ⟨τ⟩ = Σ_i a_i·τ_i / Σ_i a_i, where a_i and τ_i represent the pre-exponential factors and the lifetimes, respectively. From these, E was calculated using the expression E = 1 − ⟨τ⟩_DA/⟨τ⟩_D (20), where ⟨τ⟩_DA and ⟨τ⟩_D are the average fluorescence lifetimes of the donor in the presence and in the absence of acceptor, respectively. Finally, the theoretical contribution to FRET arising purely from the random distribution of labeled KcsA donors and acceptors within the two-dimensional membrane bilayer was estimated according to Capeta et al. (25). Such calculations take into account three parameters: the interplanar spacing between donors and acceptors, which was fixed to zero because both probes are located at cysteine 22 in different KcsA molecules; the ratio R_1/R_0, where R_1, the exclusion distance, represents the minimal distance between two probes, and R_0 is the Förster distance; and B, the relative enrichment factor for the acceptor in the proximity of the donor. R_1 was fixed as twice the protein radius (51.9 Å in the KcsA crystal structure), whereas R_0 was 68 Å as calculated from the spectral overlap and the donor quantum yield. B was fixed to 1.05, which assumes a random distribution of donors and acceptors in the bilayer. A similar theoretical curve was also obtained by applying other models, such as that of Wolber and Hudson (26).
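The lifetime-based estimate reduces to two short functions; a minimal sketch, in which a and tau stand for the fitted pre-exponential factors and lifetimes:

    import numpy as np

    def avg_lifetime(a, tau):
        # Amplitude-weighted average lifetime,
        # <tau> = sum(a_i*tau_i) / sum(a_i), which tracks the
        # steady-state intensity of the decay.
        a, tau = np.asarray(a), np.asarray(tau)
        return np.sum(a * tau) / np.sum(a)

    def fret_efficiency_lifetime(tau_da, tau_d):
        # E = 1 - <tau>_DA / <tau>_D, with the donor lifetime measured
        # in the presence (tau_da) and absence (tau_d) of acceptor.
        return 1.0 - tau_da / tau_d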
Confocal Fluorescence Microscopy—Aliquots (3–6 μl) of giant liposomes containing fluorescently labeled KcsA were deposited on a coverslip mounted on a custom-made chamber and mixed with 1 ml of 10 mM Hepes buffer, pH 7.0, 100 mM KCl. The samples were visualized without any further treatment by using an LSM 5 Pascal confocal laser scanning microscope (Axiovert 200M, Carl Zeiss) and a Plan-Neofluar 40×/1.3 objective. Giant liposomes containing Alexa 546-labeled KcsA were excited with the 543 nm line of an argon laser, and the emitted light was filtered through a 560–615 nm band pass filter. For giant liposomes containing Alexa 647-labeled KcsA, the 633 nm line of a He-Ne laser was used for excitation, whereas the emitted light was filtered through a 650 nm long pass filter. For FRET images, giant liposomes containing both Alexa 546- and Alexa 647-labeled KcsA, usually at a 1:1 donor/acceptor ratio, were excited at the above 543 nm line of the argon laser, and the emission was filtered through the 650 nm long pass filter. These conditions do not require any spectral bleed-through correction.
Giant liposomes containing the fluorescently labeled phospholipid NBD-DMPE, added (0.05%) to the asolectin lipids during reconstitution, were visualized by exciting the NBD-DMPE probe with the 488 nm line of the argon laser and using a 505–560 nm band pass emission filter.
Diverse Functional Behavior of KcsA: Low and High Channel Opening Probability Patterns—Unless stated otherwise, excised membrane patches from giant liposomes prepared from 50 μg of purified KcsA and 25 mg of asolectin lipids (see "Experimental Procedures") were always used in these experiments. The recording bath solution was 10 mM Mes buffer, pH 4, containing 100 mM KCl, whereas the pipette solution was 10 mM Hepes buffer, pH 7, 100 mM KCl. Despite the identical experimental conditions, different types of activity were distinguished in these patches (n = 93) and classified as "low" or "high" channel opening probability patterns based on the probability of finding channel opening events in the recordings (Fig. 1). These experiments used a large number of different batches of both purified KcsA and asolectin lipids to prepare the giant liposomes. However, we found no dependence on either the moment at which the experiments were carried out or the different batches used. Moreover, the different activity patterns illustrated in Fig. 1 were often observed in different patches from the same preparation of giant liposomes. Fig. 1 shows typical voltage ramps from protein-free patches used as a control (Fig. 1A), as well as from different KcsA-containing patches representative of the different opening probability patterns observed. Fig. 1B shows a few openings in the form of bursts of activity, but the channels are closed most of the time, which are well known, previously reported features of KcsA (10, 11). Accordingly, we named this pattern the "low channel opening probability" or LOP pattern, seen in 55% of the recordings (n = 51). On the contrary, Fig. 1 (C and D) shows examples of activity patterns in which the most salient feature is that the channels are open most of the time. These patches were named "high channel opening probability" or HOP patterns and were observed in 45% of the cases (n = 42), including some instances in which somewhat intermediate behaviors between those depicted in Fig. 1 (C and D) were detected in the recordings. The predominant HOP pattern corresponds to that in Fig. 1C (n = 26), in which similar current is conducted at either positive or negative potentials, following an almost symmetrical sigmoid-like voltage dependence. Characteristically, channel closings are observed at extreme voltages, and variable flickering may sometimes be present at any of the voltages studied. Fig. 1D shows a different HOP pattern encountered with a lower incidence (n = 10), in which more current is conducted at negative than at positive voltages, thus showing an inward rectifier behavior. These latter recordings do not show a predominance of channel closings at the extreme values in the voltage ramps, while variable flickering (from moderate to very intense) may also be present. This variability observed when KcsA is reconstituted into giant liposomes was not explicitly reported in the earlier characterization of KcsA in planar lipid bilayers, but it seems reminiscent of that reported more recently (27) using the latter reconstituted system.
Regardless of the pattern exhibited, additional experiments carried out under asymmetrical KCl concentrations (400 and 100 mM KCl in the bath and pipette solutions, respectively, under otherwise identical conditions) yielded very similar reversal potentials, near the potassium equilibrium potential expected under these gradient conditions (not shown).
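For reference, the expected potassium equilibrium potential under this gradient follows directly from the Nernst equation; a minimal sketch (only the magnitude is computed, since the sign depends on which side is taken as "intracellular"):

    import numpy as np

    def nernst_mV(c_out_mM, c_in_mM, z=1, T=295.0):
        # E = (R*T / (z*F)) * ln(c_out / c_in), returned in mV.
        R, F = 8.314, 96485.0
        return 1000.0 * R * T / (z * F) * np.log(c_out_mM / c_in_mM)

    # abs(nernst_mV(400.0, 100.0)) -> ~35 mV for the 400:100 mM KCl gradient.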
Curiously, a small population of patches (n = 6) was also found in which the main feature was that the number of open channels increased progressively in a "ladder"-like manner during the time course of the recordings as the protocol of pulses was applied repetitively. This latter behavior is discussed further below.

Analysis of the LOP Pattern—In agreement with previous reports, channel opening in this activity pattern requires acidic pH (8–11, 28). Fig. 2A shows that these excised membrane patches do not exhibit channel opening events when symmetrical solutions at pH 7 were maintained on the bath and pipette sides of the patch. On the contrary, changing the bath solution to pH 4 causes channel opening activity (Fig. 2A, lower traces), regardless of whether the pipette solution was maintained at pH 4 or 7. These pH-dependent changes in gating behavior are produced immediately upon perfusion of the bath solution and are also fully reversible. Finally, changing the pipette solution to pH 4, while leaving the bath solution at pH 7, results in only occasional channel openings (not shown). All the above indicates that the acidic pH-sensing sites in our excised patches are mostly exposed to the bath solution. Using point mutations in the KcsA structure, it was elegantly demonstrated that the characteristic pH sensitivity of this channel is confined to its intracellular portion (10); thus, it should be concluded that our excised patches are "inside-out," with the cytoplasmic portion of KcsA oriented toward the bath side. Fig. 2B shows representative recordings of channel activity at two different holding potentials. The recordings typically show rapid gating in bursts of activity within long-lived silent periods. Channel openings are somewhat variable in amplitude, particularly at positive voltages, and quite noisy, mostly at negative voltages. These recordings were used to calculate the open channel probability (Fig. 2C) as the fraction of time during which the channels are open. Such values were lower than 0.06 (n = 6), in fair accordance with reports by others (29), and exhibited a bell-shaped voltage dependence with a maximum at ≈+120 mV. The regions of these recordings showing bursts of activity were also used to study the voltage dependence of the channel current. Fig. 2D shows the channel I/V relationship obtained by averaging the current amplitudes at each of the different potentials from several different patches (n = 9). An estimated mean slope conductance of 75.5 ± 3.0 pS was obtained. Also, it was observed that KcsA showed open channel rectification, with mean chord conductances at +200 and −200 mV of 41.8 ± 1.2 pS and 28.4 ± 1.2 pS, respectively. These conductance values are comparable to those reported previously for KcsA reconstituted into planar lipid bilayers (8, 10, 11, 28, 30).
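For orientation, open probability and conductance follow from such records as in the sketch below (the half-amplitude threshold idealization is a simplification of the event detection actually performed in pClamp):

    import numpy as np

    def open_probability(i_pa, unitary_pa, thresh=0.5):
        # Fraction of time the current exceeds a half-amplitude threshold,
        # assuming a single channel in the patch.
        return float(np.mean(np.abs(i_pa) > thresh * abs(unitary_pa)))

    def slope_conductance_pS(v_mv, i_pa):
        # Slope conductance from a linear fit of the I/V relationship;
        # pA/mV equals nS, hence the factor of 1000 to convert to pS.
        slope, _intercept = np.polyfit(v_mv, i_pa, 1)
        return slope * 1000.0

    # Chord conductance at a given voltage is simply i/v, e.g.
    # 8.4 pA / 200 mV = 42 pS, of the order of the 41.8 pS quoted above.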
The routine averaging of the current size measurements used for the I/V plots, however, might be misleading, as a closer examination of individual patches reveals that single channel-like openings of clearly different sizes are present in the recordings. Fig. 3 illustrates such variability in recordings taken at +150 mV, in which either 4-pA (Fig. 3A) or 8-pA (Fig. 3B) currents can be observed as the almost exclusive gating events in each case. The latter 8-pA currents were predominant in most of the patches recorded, and in fact, I/V plots obtained from selected recordings showing almost exclusively such currents yielded conductance values very similar to those determined from the "average" measurements from above. Regardless of their frequency of appearance, both the 4- and 8-pA closing and opening events have a single channel-like appearance as far as instrumental resolution distinguishes. This, along with the fact that the larger currents have twice the amplitude of the smaller ones, suggests the possibility that, rather than variability in the channel currents, we might be dealing with a phenomenon of coupled gating involving two identical channels acting synchronously. To test this possibility, we proceeded as reported by Kenyon and Bauer (31) by analyzing the amplitude distributions in recordings having 0 (closed state), 4- and 8-pA events to calculate the so-called "coupling parameter," which in all cases yielded values higher than zero, indicating that the openings of the two single channels involved indeed occur cooperatively and are positively coupled. Moreover, we also encountered other recordings in which currents of either 12 or 16 pA were detected at +150 mV in the absence of the above 4- or 8-pA currents (not shown) during short but significant periods of time. This suggests that coupling could occasionally go beyond the association of two KcsA channels, providing an explanation for the apparently controversial finding of several subconductance states reported earlier (7, 8, 30), in which the possibility of coupled gating as a source of diversity was not considered. Other previously reported channel properties of KcsA, including its selectivity for K+ and its typical blockade by Na+ added to the bath solution, mainly at positive voltages, were also found in our experimental system and will not be described here.
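A minimal sketch of one way to quantify such coupling from the fractions of time spent at the 0-, 4- and 8-pA levels; this implements the generic logic of comparing observed double openings with the independence prediction, and Kenyon and Bauer's exact statistic may be parameterized differently:

    def coupling_parameter(p0, p1, p2):
        # p0, p1, p2: fractions of time with 0, 1 or 2 channels open
        # (they must sum to 1). For two identical channels with marginal
        # open probability p, independence predicts P(both open) = p**2;
        # normalizing the excess by the single-channel variance gives a
        # correlation-like coefficient that is > 0 for coupled gating.
        p = 0.5 * p1 + p2
        return (p2 - p * p) / (p * (1.0 - p))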
Analysis of the HOP Pattern—HOP patterns can be readily distinguished from the LOP patterns from above because, under identical experimental conditions, the channels now remain open most of the time and much more current is conducted through these patches (Fig. 1, C and D). Fig. 4A shows the currents observed at different voltages in the most frequently found HOP pattern, such as that in Fig. 1C. Estimates of the open channel probability yielded values as high as 0.9 within the +100 to −100 mV range, with a maximum near 0 mV and decreasing at both extreme negative and positive voltages (Fig. 4B). These extreme voltages, which allow for clearer recordings of both channel openings and closings, were used to analyze the gating properties of these HOP patterns. Fig. 5A shows in detail a representative long recording taken at +150 mV in which three main current levels were observed, each of ≈20 pA (equivalent to 133 pS at this voltage), closing successively to finally reach the zero current level. A closer view of the lower 20-pA current level (Fig. 5, B and C) reveals that, indeed, 20-pA currents were the most frequently observed events and, as far as instrumental resolution permits, appeared mostly as large, single channel-like openings or closings, going all the way from the closed state to the 20-pA current level or vice versa. These large 20-pA currents, however, were accompanied by much less frequent, smaller current levels of ≈4, 8, 12, and 16 pA, i.e. integer multiples of the smaller 4-pA currents. Disregarding the differences in the open channel probability, this is reminiscent of the coupled gating observed in the LOP pattern from above, except that in HOP patterns the coupling seemingly involves up to five 4-pA currents to give rise to the main 20-pA "single" channel events. Unfortunately, the high number of channels involved in this latter putative coupling process prevents the determination of the coupling parameter as done for the LOP patterns from above. The apparent coupling observed in HOP patterns, however, may occasionally be incomplete, as shown in Fig. 5B, in which the lower 0- to 4-pA current level remains open for some time, as if temporarily excluded from coupling with the other four current levels. Additionally, intense variable flickering, spanning several current levels such as those shown in Fig. 5C, was observed intercalated between regions of coupled gating, as if the ensemble of unitary currents went through periods of variable stability.
In addition to the differences in open channel probability and its voltage dependence, as well as in the degree of the apparent coupled gating, there were other striking differences between HOP and LOP patterns. For instance, channel activity in HOP patterns no longer depends on acidic pH and is present at either pH 4 or pH 7 in the bath solution, although the gating features are different (Fig. 6). Also, and most remarkably, HOP patterns are not blocked by Na+, which in fact becomes a conducting species (data not shown). Blockade by Na+ is considered a hallmark of potassium channels, although in KcsA in particular, a significant Na+ conduction has been reported previously in the reconstituted planar bilayer system at extreme voltages, in which a "punch-through" mechanism was invoked (32), and in protoplast-liposome vesicle preparations from Streptomyces mycelia, where a permeability ratio P_K+/P_Na+ of only three was estimated (7). The above observations on the different pH sensitivity and Na+ blockade between HOP and LOP patterns strongly indicate that some of the previously reported properties of KcsA may change quite dramatically when in a HOP pattern.
Ladder-like Openings: From No Activity to Building a HOP Pattern—We referred above to a minor population of patches (only 6 out of a total of 93 active patches) in which the main feature is that the number of open channels increases during the time course of the recording as the protocol of pulses is applied repetitively. Fig. 7 shows recordings from one such patch in which the first group of pulses (numbered '1' in Fig. 7) results in essentially no activity at any of the voltages studied. Subsequent recordings (numbered '2' to '4'), taken immediately afterward, however, show that channel openings begin to appear quite conspicuously on top of one another in a ladder-like manner, first at −200 mV, but also at most other voltages in subsequent groups of pulses. Estimation of the size of the current "steps" in the ladder-like ensemble allows distinguishing that, in three out of the six patches showing this behavior, currents corresponding to the unitary current level seen in the LOP or HOP patterns from above enter or leave the ladder mostly one by one, whereas in the remaining three patches, such as that shown in Fig. 7, entering or leaving of pairs of such current levels was observed. Therefore, the same elemental current events seen in the LOP or HOP patterns from above seem also responsible for building up these unusual ladder-like patterns. Eventually, these patches either broke up or became an inward-rectifier type HOP pattern similar to that depicted in Fig. 1D; more importantly, the observation of this ladder-like behavior suggests that the KcsA channels contained in these patches, which are initially inactive, are subjected to a dynamic process that somehow makes them interact with each other in response to the voltage pulses and acquire a much higher open channel probability.
Evidence for KcsA Clustering—Previous reports on ryanodine (33) or dihydropyridine receptors (34) correlated functional coupling with physical clustering of the channels by evidencing the appearance of a higher sedimentation coefficient species upon centrifugation of detergent-solubilized preparations of the channel protein. Here, we carried out analytical ultracentrifugation sedimentation velocity studies on DDM-solubilized KcsA samples at protein concentrations ranging from 0.5 to 10 μM. The results show that, regardless of the protein concentration, there is a major sedimenting species of 6.7 S which, depending on the different KcsA preparations, accounted for 80–90% of the total protein (Fig. 8A). According to the Svedberg equation (35) and assuming a spherical shape and a 0.73 cm³/g partial specific volume, the above sedimentation coefficient corresponds to an apparent molecular mass of 110 kDa. This fits fairly well to the theoretical molecular mass of 76 kDa for the KcsA tetramer (160 amino acids per monomer in the native protein, plus 12 additional N-terminal amino acids containing the polyhistidine tag) bound to a reasonable number of DDM molecules. Such putative tetrameric species is accompanied by a lighter species, whose sedimentation coefficient yields an apparent molecular mass similar to that expected for the KcsA monomer, 19 kDa, and by up to three heavier species with average sedimentation coefficients of 9.6, 12.4, and 15.0 S (corresponding to ≈190, 280, and 370 kDa, respectively), which altogether accounted for ≈10–15% of the total protein in these samples. These observations are somewhat reminiscent of those made using SDS-PAGE (14) in which, in addition to a major band of tetrameric KcsA (the major form in which the purified KcsA runs in SDS-PAGE from non-boiled samples), there is an accompanying band corresponding to the KcsA monomer and additional bands heavier than the tetramer, which are more apparent upon reconstitution of the purified KcsA into lipid vesicles and whose molecular weights correspond to the SDS-resistant association of two or more KcsA tetramers into multimers (see inset to Fig. 8A).
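For orientation, the sphere-approximation mass estimate can be inverted numerically, as in the sketch below (anhydrous smooth sphere in water at 20 °C; bound DDM and hydration, which the 110-kDa figure presumably incorporates, are neglected here, so the output is only indicative):

    import numpy as np
    from scipy.optimize import brentq

    def mass_from_s_kda(s_svedberg, vbar=0.73e-3, rho=998.0, eta=1.002e-3):
        # Invert the Svedberg relation for a smooth sphere of partial
        # specific volume vbar (m^3/kg) in water (rho in kg/m^3, eta in Pa*s):
        #   s = M*(1 - vbar*rho) / (N_A * 6*pi*eta*R),
        #   R = (3*M*vbar / (4*pi*N_A))**(1/3),  M in kg/mol (== kDa).
        NA = 6.022e23
        s = s_svedberg * 1e-13  # Svedberg -> seconds
        def residual(M):
            R = (3.0 * M * vbar / (4.0 * np.pi * NA)) ** (1.0 / 3.0)
            return M * (1.0 - vbar * rho) / (NA * 6.0 * np.pi * eta * R) - s
        return brentq(residual, 1.0, 1.0e4)

    # mass_from_s_kda(6.7) -> ~80 kDa for the bare sphere; the difference
    # from the quoted 110 kDa reflects the neglected detergent and shape
    # contributions.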
The possible protein concentration dependence of multimer formation could not be assessed accurately by analytical centrifugation because of the low 280-nm absorbance in the lower protein concentration range. For this reason, we turned to more sensitive fluorescence measurements to study this low protein concentration range. For these experiments, a cysteine-substituted mutant at a strategic site in the KcsA sequence (KcsA S22C) was reacted with sulfhydryl-reactive fluorophores (Alexa 546 or 647 maleimide derivatives) to attain fluorescently labeled protein. Fig. 8B shows fluorescence anisotropy measurements obtained from the Alexa 546-labeled KcsA (for reference, the anisotropy of the free Alexa probe was 0.069). At protein concentrations higher than 0.25 μM, the anisotropy values remain constant and fairly high, as expected from the limited rotational mobility of the relatively large protein species seen in solution. However, at lower concentrations there is a clear concentration dependence of the fluorescence anisotropy, which suggests that the assembly of the larger species from the predominant KcsA tetramers occurs within that concentration range. Such an assembly can be partly reversed by simple dilution of the samples to lower protein concentrations, as shown in Fig. 8. The anisotropy data receive apparent support from simple SDS-PAGE/Western blots (Fig. 8C), which indicate that, in detergent-solubilized samples, bands of KcsA multimers can hardly be seen at such low concentrations unless heavily overexposed, whereas they are more easily detected in reconstituted samples containing an identical amount of protein (Fig. 8D).
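As a sketch of how such a concentration dependence can be modeled, the following fits the anisotropy to a minimal two-state self-association (tetramer to dimer-of-tetramers) assuming equal brightness of both species; kd, r_free and r_bound are illustrative fit parameters, not values from this work:

    import numpy as np
    from scipy.optimize import curve_fit

    def assembled_fraction(c_total, kd):
        # For 2T <-> T2 with dissociation constant kd, solve
        # [T] + 2*[T2] = c_total and [T]**2 = kd*[T2] for the free tetramer
        # [T]; return the fraction of tetramers in the assembled species.
        t_free = (-kd + np.sqrt(kd * kd + 8.0 * kd * c_total)) / 4.0
        return 1.0 - t_free / c_total

    def anisotropy_model(c_total, kd, r_free, r_bound):
        # Population-weighted anisotropy (valid only if both species
        # contribute equally to the fluorescence per labeled tetramer).
        f = assembled_fraction(c_total, kd)
        return (1.0 - f) * r_free + f * r_bound

    # params, _ = curve_fit(anisotropy_model, conc_uM, r_obs,
    #                       p0=[0.1, 0.15, 0.25])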
The fluorescently labeled Alexa 546 and Alexa 647 KcsA derivatives were also used as donor-acceptor pairs for FRET experiments in detergent solution and in reconstituted vesicles made under conditions identical to those used to prepare the giant liposomes for patch clamp measurements. The R_0 (the distance at which there is 50% efficiency of transfer) was calculated from the spectral overlap as ≈68 Å, which seems a sensible distance to monitor supramolecular assembly of KcsA tetramers. Fig. 9 shows that, upon excitation of the donor, there is an increase in acceptor emission as the acceptor:donor ratio is increased, consistent with the occurrence of energy transfer. Such a process, however, is less efficient in the detergent-solubilized samples (Fig. 9A) than in the reconstituted vesicles (Fig. 9, B and C), suggesting that reconstitution of the tetrameric KcsA into lipids favors its supramolecular assembly into clusters.
Giant liposomes were also prepared from the reconstituted vesicles containing the fluorescently labeled KcsA in an attempt to visualize the clusters by confocal microscopy. The experimental conditions used in the preparation of such giant liposomes were essentially identical to those used for the recording of KcsA activity by patch clamp techniques. Fig. 10A illustrates that the fluorescently labeled protein is distributed heterogeneously in the giant liposomes, defining large (up to micrometer-sized), highly fluorescent array-like protein complexes on a more homogeneous background containing small fluorescent spots and punctuations of different sizes throughout. The observed heterogeneity in the distribution of the labeled protein truly reflects the specific association of KcsA molecules, because the distribution of an additional fluorescent phospholipid probe contained simultaneously in the giant liposomes (Fig. 10B) shows that the lipid is distributed much more homogeneously. Thus, the above observations strongly suggest that extensive, heterogeneous clustering of KcsA occurs upon reconstitution into the giant liposomes.
Giant liposomes containing both donor- and acceptor-labeled KcsA at different ratios are also amenable to "in situ" FRET measurements by exciting at the appropriate donor excitation wavelength and recording acceptor emission in the confocal microscope (Fig. 11). In agreement with the FRET measurements in the fluorometer cuvette from above, these in situ measurements show FRET in the individual giant liposomes between closely arranged donor- and acceptor-labeled KcsA. Moreover, these observations suggest that the larger KcsA clusters are heterogeneous, because they contain regions where energy transfer occurs along with adjacent regions within the same cluster that exhibit fluorescence from only the donor- or the acceptor-labeled protein.
DISCUSSION
The more we learn about the different ion channel families, the more we realize that their functional responses are sometimes exquisitely dependent on molecular interactions among themselves or with other cellular components, whose regulatory roles could not always be anticipated from the in vitro characterization of the purified channel proteins. Physical clustering of ion channels into closely packed assemblies is among the possible consequences of such intermolecular interactions, and because it is often accompanied by coupled channel gating (33, 36–42), it seems a sensible strategy to secure optimal ion channel-mediated signaling in response to the appropriate stimuli.
Herein we report that the purified, tetrameric KcsA assembles into clusters of different sizes. In detergent solution, supramolecular clustering occurs to some extent, affecting 10–15% of the total protein. Clustering in detergent solution has been demonstrated by analytical ultracentrifugation, fluorescence anisotropy of the covalently labeled protein, and fluorescence resonance energy transfer using a donor-acceptor pair of KcsA-bound probes. The analytical ultracentrifugation studies show that the KcsA multimers are conformed as discrete sedimenting species with defined stoichiometries, thus excluding the possibility of a nonspecific protein aggregation process. Additionally, the fluorescence anisotropy studies indicate that, even at fairly low protein concentrations, the detergent-solubilized KcsA is subject to a partly reversible, concentration-dependent equilibrium between cluster-assembled and unassembled forms. Likewise, the efficiency of fluorescence energy transfer between donor- and acceptor-labeled KcsA, which is lower in detergent solution than that seen upon reconstitution of the labeled proteins into lipid vesicles, indicates that the assembly process is clearly favored under the latter conditions. Moreover, confocal microscopy experiments on the reconstituted giant liposomes allow for the visualization of heterogeneous KcsA clusters, some of which reach micrometer size. Additional retrospective evidence to support the clustering of KcsA comes from SDS-PAGE analysis of the purified protein as prepared by most groups working in this field: bands of molecular weight higher than that corresponding to the characteristic tetramer of four identical subunits (the usual way in which the SDS-resistant KcsA runs in SDS gels) are present both in detergent-solubilized preparations and, even more markedly, in reconstituted membranes (14), attesting to the stability (SDS resistance) of some of the building blocks in the clustered forms.
All the above indicates that the tetrameric KcsA has an intrinsic tendency to assemble in vitro into heterogeneous supramolecular assemblies or clusters, particularly when reconstituted into membranes. This implies that when a membrane patch is excised from the reconstituted giant liposome for patch clamp recording purposes, there is a finite probability that it will contain KcsA organized in the form of different supramolecular entities, from the more complex large protein arrays to the individual tetrameric protein. According to the observations made on the clustering of receptors (43), clustered assemblies provide the means to convert conformational changes from a single origin into intermolecular allosteric behavior. Indeed, some of the best characterized cases of ion channel clustering, such as ryanodine receptors (33, 37, 38), serotonin 5-hydroxytryptamine 2C receptors (40), Kir 4.1 (36) or Kv 2.1 (39) potassium channels, or cystic fibrosis transmembrane conductance regulator chloride channels (41, 42), show that channel activity is dependent on clustering. Therefore, assuming that this would also be the case for KcsA, different channel activity patterns should be expected in the patch clamp recordings arising from the different supramolecular entities present in the giant liposomes. The experimental observations seemingly comply with such an expectation, because different channel activity patterns are in fact detected. The LOP pattern is the most frequently observed and likely represents the simplest mode of assembly of KcsA, because only single channel openings or, more often, gating of two positively coupled channels are detected as the predominant events. The features of the LOP pattern, i.e. an acidic pH- and voltage-dependent gating, moderate selectivity for potassium, blockade by sodium, and a very low channel opening probability, essentially coincide with those reported previously for KcsA reconstituted in planar lipid bilayers (8, 10, 11, 28). Such an apparent equivalence seems consistent with the present findings, because the concentration of protein that becomes incorporated into planar bilayers is usually very low, and therefore the protein concentration dependence of the clustering equilibrium reported here should be displaced to favor the less complex forms of assembly of KcsA. It should be emphasized, however, that the predominant gating event found in our LOP patterns corresponds to the positive coupling of two KcsA channels and that the estimated conductance is practically identical to that reported as the single channel conductance in planar lipid bilayers (see e.g. Ref. 11). Thus, it seems likely that similar coupling phenomena might serve to explain previous results (and subsequent discrepancies) on complex, multiple conductance levels reported occasionally for KcsA (7, 8, 28, 30).
HOP patterns have been detected in different forms and, besides their characteristic high channel opening probability, they all have in common frequent events of multiple coupled gating, mostly involving five single channels acting synchronously, and channel opening at both neutral and acidic pH. As to the former, regardless of how many channels might be involved in the concerted multiple openings seen in the HOP patterns, the minimal currents detected seem identical in size to those seen in the LOP pattern, suggesting that the same "building blocks" are used in all possible KcsA assemblies, regardless of their complexity and behavior. Moreover, because the existing reports on channel clustering show that it is often accompanied by coupled gating and increased activity, we interpret the multiple coupled gating and increased channel opening probability in our HOP patterns as a direct consequence of the clustering process. Indeed, the ladder-like HOP patterns (Fig. 7) show how the number of open channels in a given patch increases dramatically during the time course of the recording, likely as a consequence of clustering of the existing channels within the patch. Interestingly, a recent report on clustering of inositol 1,4,5-trisphosphate receptors (44) shows that a conformational change to the open channel state is required prior to assembly into clusters. We do not know whether this would also be the case for KcsA, but certainly a similar mechanism would have to be invoked to explain the much increased activity. Indeed, this possibility seems supported by a recent report (45) in which dimers of KcsA tetramers are formed in detergent solution at pH 5, which favors open channel forms of KcsA, but not at pH 7.
As to the change in pH sensitivity, our observations on the HOP patterns are not completely unprecedented, because opening of KcsA at neutral pH was found when the channel was simply subjected to a transmembrane ionic gradient (12). More strikingly, KcsA in the HOP patterns seems to have lost the ability to be blocked by Na+, which is exhibited only by LOP patterns. Such an apparently controversial finding, however, might be related to existing reports on an altered ionic selectivity and other properties of KcsA reconstituted under different conditions (7, 8, 13) or subjected to extreme voltages in planar bilayers (32).
The above unexpected changes in the known properties of KcsA when in a HOP pattern might arise from modification or direct involvement of their corresponding molecular determinants in the protein-protein interactions and/or conformational rearrangements involved in the clustering process. For instance, it is known that inward rectification in Kir channels (46) is partly determined by the relative proximity between acidic amino acid residues at strategic sites near the cytoplasmic channel mouth. Such an example provides a nice correlation between a channel feature (inward rectification) and the topology of specific residues; because KcsA too has acidic side chains (Glu-118 and Glu-120) near the inner channel mouth, whose relative positioning might be affected during clustering, it also provides a plausible hypothesis to explain the inward rectification observed in some of the HOP patterns reported here.
Concentration-dependent clustering in protein solutions and colloids has been attributed to a combination of short-range attractive and long-range electrostatic repulsive forces (47). In KcsA, both the C- and N-terminal regions of the protein sequence are rich in charged amino acid side chains, and therefore they appear as potential candidates to act as molecular determinants of cluster formation and integrated behavior. In this respect, preliminary experiments using 1-125 KcsA obtained by chymotrypsin cleavage of full-length 1-160 KcsA show clustering and HOP patterns indistinguishable from those of the wild-type protein, indicating that the large 126- to 160-amino acid C-terminal portion of the protein is not involved in these processes (not shown). The possible roles of the N-terminal or the transmembrane segments of the protein in these processes are presently still under investigation, and additional work is needed to reach more definitive conclusions on this issue.
The finding that KcsA, one of the structurally simplest ion channels known to date, shows such a complex clustering and coupled gating behavior "in vitro" is surprising. We do not know enough about the biology of Streptomyces to be able to say whether these phenomena would also happen "in vivo" and serve any physiological purpose. However, the molecular crowding expected in vivo, along with the protein concentration dependence observed in the in vitro process, makes it likely that clustering would also occur in the bacterial membrane. Nevertheless, it should be noted that our in vitro observations probably correspond to deregulated clustering processes, because the putative anchoring molecules needed in vivo, if any, would not be present in our purified, recombinant KcsA preparations. For KcsA in particular, polyphosphates and polyhydroxybutyrate, which are abundant reservoir materials in prokaryotic cells, have been reported to interact with the channel to form conductive complexes (13, 48). Therefore, these or similar compounds, such as prokaryotic alternatives to PDZ-domains or other anchoring proteins observed in the clustering of other channels (33, 41, 42, 49), might provide a clue as to where to start looking for potential cluster-inducing or cluster-stabilizing molecules. Thus far, however, it seems reasonable to assume that KcsA clustering, coupled gating, and channel opening at neutral pH with high probability may be more biologically meaningful than the currently established view of KcsA opening only at very acidic intracellular pH and with very low probability.
Professionalisation of International Medical Volunteer Work to Maintain Ethical Standards: A Qualitative Study Exploring the Experience of Volunteer Doctors in Relation to UK Policy
Doctors from the United Kingdom are increasingly involved in international medical volunteerism in low- and middle-income countries (LMICs). Although supported by government policy, this practice lacks infrastructure and coordination. Volunteer activities can have positive impact but also risk causing harm. Without external governance, the responsibility lies with volunteers and their organisations to self-evaluate their activities. This study aimed to explore influences affecting volunteer engagement with ethical standards and evaluative practice. Semi-structured interviews were conducted with seven doctors working in the Scottish National Health Service with volunteer experience in LMICs. Findings were analysed thematically to explore this issue in view of ongoing policy development. Although ethical standards were valued by participants, they were unaware of relevant government policy. Influences on volunteer development are unstructured and vary in quality. Evaluation lacks structure and framing. Volunteer physicians face a number of barriers to engaging in critical evaluation of their activities in LMICs. Development and professionalisation of medical volunteering in LMICs needs to address volunteer preparation and evaluative practice to maximise the benefits of volunteering, reduce the risk of harm and maximise learning and accountability. Further areas of research are suggested to inform professionalisation of this sector.
Introduction
The practice of healthcare professionals volunteering in low and middle-income countries (LMICs) has become a recognised part of the United Kingdom's (UK) global health contribution. It has been described in government policy as having a key role in international healthcare development [1] and predicted to become the 'norm not the exception' [2] (p. 5).
There are no formal statistics to quantify the number of physicians from the UK volunteering overseas. However, in a 2016 study of 911 National Health Service (NHS) staff, 42% reported they had experience overseas either volunteering or as a student [3]. The Tropical Health Education Trust reported involvement with over 2000 NHS workers as part of their volunteer partnership programmes [4]. A recent estimate of volunteer activity in LMICs by doctors from the United States suggested that this could represent an economic investment of $3.7 billion [5]. Although on a smaller scale, UK costs are likely to be significant, particularly when compared to the most recent total budget for bilateral overseas development aid for health of £1003 million in 2015 [6]. The scale of volunteer activities cannot be measured in economic terms alone; Caldron et al. [7] describe how these activities comprise one part of a country's wider global political engagement and highlight the fact that medical volunteerism carries diplomatic as well as economic value.
Despite the scale of investment, volunteer opportunities can vary widely, and the sector lacks infrastructure and coordination. A wide range of volunteer opportunities exists in surgery, dentistry and medicine, from large to small organisations [8]. The Academy of Medical Royal Colleges [9] reported that these opportunities are often fragmented and poorly coordinated, and that volunteers may lack information. There is an overall lack of standardisation in engaging volunteers, pre-departure training, support and debrief.
While medical volunteerism has potential for positive impact, the risk of harm to patients, institutions and communities in recipient countries is also well recognised [10,11]. Clinical benefit may be less than anticipated due to factors such as different diseases or patient demographics and limited follow-up [12]. Patients may even come to direct clinical harm through the acceptance of lower quality standards or volunteers acting beyond their competency. Critics have also described a risk of social harm, for example language barriers or cultural incompetency impacting on the patient-physician relationship [13,14] or, on a broader scale, the perpetuation of structural violence through reinforcement of pre-existing power imbalances [15]. This has resulted in a number of ethical standards being proposed in academic literature [16-18] and by the UK government [1]. Although developed within different contexts, broad themes are similar: partnership, sustainability, education, preparation and evaluation of impact.
Anecdotal evidence of unethical and harmful practice suggests there is potential for a gap to open between these proposed ethical standards and the reality of volunteer practice on the ground. To prevent this, there is a need for volunteers to develop an awareness and understanding of the ethical standards required. Developing a personal ethical framework is a complex process, likely to be influenced by multiple factors including previous educational and clinical experiences. Little research has been done on these influences or on the processes by which physicians develop their understanding of core ethical standards for volunteer work in LMICs.
Volunteer physicians must also be prepared to undertake critical self-appraisal to maintain the ethical quality of their activities. Government policy calls for active engagement in critical reflection and evaluation for learning and accountability [2]. Médecins Sans Frontières (MSF) agree that evaluation is necessary for transparency and accountability [19]. They maintain that it is a key mechanism to keep global health interventions on track both operationally and in terms of organisational values. Reflective and evaluative practice may also contribute to a volunteer physician's ethical framework as they learn from experience. Systematic reviews of the literature regarding short-term medical volunteerism have found that rigorous evaluation is scarcely published [7,20-22]. Frameworks have been developed to guide reflection [23] and evaluation [16]; however, there is little evidence that these are being actively used. There has been little research exploring potential barriers to, and opportunities for, participating in critical evaluation.
The Scottish government is planning to develop international medical volunteering in the wake of their refreshed International Development Strategy [24] which addresses Scotland's contribution towards the Sustainable Development Goals. They requested a report exploring the current state of volunteering, which was recently published by the Royal College of Physicians and Surgeons of Glasgow, entitled 'Global Citizenship in the Scottish Health Service' [25]. This report highlighted the need to develop policy and infrastructure to professionalise medical volunteerism in LMICs to maximise benefits to partners in LMICs as well as the NHS.
The current lack of infrastructure and standardised pathways for UK medical volunteers is no longer universal to all work in LMICs. The development of the UK International Emergency Trauma and Medical Registers (UKIETR) has made significant steps towards coordination and professionalisation of UK physicians in humanitarian disaster response [26]. This body has created a register of UK medical volunteers under one organisation to improve the quality, coordination and governance of the UK response. Through this they are able to deliver a more organised approach to training and pre-departure simulation, as well as focusing on team-based competencies to improve performance [27]. They also recognise the value of evaluation and accreditation, supervision of less experienced volunteers and post-trip debriefing as part of this professionalisation. Furthermore, Wall [15] and DeCamp [11] highlight a stark contrast between the rigour of governance in international medical volunteering and that in medical research in LMICs. In addition to being strongly advocated in UK government policy [28], familiarity with ethical standards in a research setting is a legal requirement [29]. Individuals can access appropriate training in person or online to learn about Good Clinical Practice standards. There is no equivalent training requirement or governance framework for international medical volunteering. This lack of professionalisation is also more widely relevant to the rest of the UK and to other countries supplying medical volunteers to LMICs. Lasker [30] studied 177 organisations involved in medical volunteering in the US. She found that volunteer organisations often have competing interests and incentives which may obscure and detract from their focus on optimally designing projects to meet the needs of LMICs. This further highlights the need for individual physicians to be able to exercise wise judgement when investing their time and expertise in a field which has been described as intrinsically ethical, not neutral, in nature [10].
A literature review was undertaken, using the Ovid MEDLINE®ALL database (1946 to 10 March 2017) (https://ovidsp.ovid.com/). The literature search used combinations of the following terms (with synonyms and closely related words): "medical," "mission," "trip," "brigade," "foreign" "overseas," "international," "volunteer," "short-term," "ethics." Further publications were identified by examining the reference lists of all included articles and searching relevant websites and grey literature.
This literature review established that key ethical quality standards for international medical volunteerism have been discussed in academic literature and at a policy level, although no universal guidelines currently exist. Furthermore, the application of these ethical standards in practice is not well documented. While self-evaluation of volunteer activities is an essential step to maintain ethical standards, it is rarely documented and may not be taking place. In the absence of external regulation, the process of upholding ethical standards requires effective engagement and collaboration between volunteers and their organisations. The framework in Figure 1 was developed by the author from the literature review to represent the range of processes involved.
This research aims to investigate some of the knowledge gaps around this issue. Qualitative interviews were conducted with volunteer doctors from the Scottish NHS to explore volunteer development and engagement with ethical standards as well as their experience of debrief and evaluation processes.
Materials and Methods
A qualitative approach was chosen to explore the perceptions and lived experience of volunteer physicians. Two main aims were identified to explore this theme of ethics of international medical volunteerism and self-evaluation from a volunteer perspective:

1. Volunteer development and engagement with ethical standards.
2. Issues influencing the evaluation of their activities.

Recruitment was through a mixture of purposive and snowball sampling. Participants were required to have had experience of volunteering in LMICs as a fully qualified clinician. Volunteers whose only experience was of military, expedition or acute disaster relief work were excluded, as these represented different contexts to those explored in the literature review. To mitigate against recall bias, only participants with experience within the last five years were included.
Semi-structured interviews lasting 30–50 min were conducted between April and May 2017. One interview was conducted via telephone; the rest were face-to-face. Interviews were audio recorded and transcribed verbatim. Written consent was obtained from each interviewee prior to the interview taking place. The framework in Figure 1 was used to structure an interview guide (see Appendix A) and address the main themes, although not all aspects could be addressed within the scope of this research. Questions were adapted to each interviewee. Participants were asked to provide context to their answers by describing specific examples of their experiences, as well as volunteer preparation and educational and clinical experiences which may have influenced them. For analysis, the four-stage approach outlined by Green et al. [31] of immersion, coding, categorisation and theme identification was used. NVivo was used to facilitate coding of the transcripts. Although coding was approached inductively to allow new and unexpected findings to emerge from the data, the framework from Figure 1 was considered during categorisation. The analysis involved comparing and contrasting data within individual transcripts as well as between different participants to establish conflict and agreement. Thematic analysis drew on secondary research on social theory to explore and interpret these findings and establish possible implications in the discussion.
Results
Seven participants were recruited and interviewed from across NHS Scotland. The participants represent a range of volunteer experience and clinical seniority (see Table 1), reflecting the variety of volunteer opportunities available and the diverse backgrounds of volunteer physicians. A variety of reasons motivated these doctors to volunteer in LMICs, ranging from clearly defined roles to a more general personal interest in volunteering. Participants #5 and #7 had been invited by colleagues to volunteer in specific roles based on their skillsets, which included teaching or service development. Participant #3 applied for his position as a training opportunity provided by his specialist postgraduate college. The remaining volunteers described motivations including the desire to experience different cultures (#1), to gain more clinical experience (#2) and to have impact and 'make a difference' (#4, #6).
Volunteer Engagement with Ethical Standards
The literature review provided anecdotal evidence of volunteers who were clearly lacking in awareness and/or engagement with the ethical risks and standards of international medical volunteerism. Subsequently, one of the research objectives was to explore whether volunteers were aware of the ethical principles involved in their volunteer activities and whether their understanding and interpretation of these principles reflected current academic and policy discourse. All of these interviewees demonstrated high regard for maintaining core ethical standards, describing them as 'critical,' 'crucial,' 'important' and 'highly relevant.' Their interpretation of these standards was in keeping with recognised definitions outlined in academic literature. For example, sustainability requiring mutual engagement and ownership:

"I suppose in terms of sustainability it's about getting people to buy into it and take ownership of it." #3

All of the participants also showed that they recognised the potential for lack of benefit or harm resulting from poorly designed or executed volunteer activities.
"If you look then [volunteer physicians] have been doing this for decades now and actually the situation's still pretty bad." #6 "We're not leaving half of [these donations] because it's not going to help, its actually just going to make things worse." #1 "[. . . ] you see the damage that does to places by people wanting to do good but not the end outcomes not being anything near as beneficial as people anticipate." #5 However, none of the participants were familiar with any of the frameworks for ethical engagement found in the literature review, including ethical standards for volunteer engagement published by the government [1].
Influences on Volunteer Development
A variety of possible influences on volunteer development of an ethical framework were discussed by participants. These included formal undergraduate and postgraduate training, preparation from volunteer organisations and informal influences.
None of the participants felt that their undergraduate educational experience had prepared them well for volunteering in LMICs. As postgraduates, two interviewees had completed the Diploma in Tropical Medicine & Hygiene, which they described as a valuable experience, and Participant #3 had also taken a short postgraduate course on his specialty in LMICs. This individual had received more extensive training prior to his volunteer placement, co-ordinated by his specialist college, which he described as a 'solid' preparation for his voluntary experience. Participant #7 felt that the 'Good Clinical Practice' clinical research training mentioned in the literature review had helped provide him with an ethical framework which he could then apply to volunteering.
Training received from volunteer organisations varied widely. Two participants noted time constraints which they felt limited the preparation they received. Participant #1 received on-the-job training from other volunteers who had only been in-country for a week or two and was quickly expected to provide this herself for subsequent volunteers. Participant #6 described his preparation as 'pretty shocking.' Both described training that focused exclusively on logistics rather than ethical standards or overall objectives. These experiences contrast with those of Participant #2, who had lengthy telephone discussions with one of the organisation's trustees, who used this opportunity to share the organisation's strategic vision and ethical standards with him.
Five of the participants mentioned more experienced colleagues who had a significant impact on their development as a volunteer. This had taken various forms: sharing attitudes and values, providing networking opportunities, or guiding them through cultural obstacles. Participants described these influential characters positively, as providing 'eye-opening' or 'valuable' insights or experiences and having a 'nurturing' role, and they often had long-term contact with their role models, either socially or in a work context.
A variety of informal social networks were described which had allowed interviewees to meet colleagues who had similar experiences. These included university global health departments, specialist colleges, friendship groups established from previous volunteer experience and professional courses. One interviewee even discussed how the technical department in his hospital represented a common point of contact for doctors who volunteered abroad as they would encounter each other while salvaging equipment for their missions.
Interviewees described different effects from these peer interactions. Some interviewees described 'bouncing ideas' off colleagues during their development of projects in LMICs. Some requested specific advice about a peer's experience in a certain role or organisation. Two participants demonstrated active and critical reflection on their colleagues' experiences which influenced their understanding of what makes a volunteer project ethical and effective: "[I discuss] with people and think 'Oh well, you know, what did I like the sound of, what did I not like the sound of.' And then you know I suppose from there, you then form realisation of well, you know, what is it about the things that sound good that make them sound good and what is it about the things that sound like a disaster that make them sound like a disaster." #2
Structure and Framing
Interviewees had encountered a wide range of evaluation processes. Some had regular meetings throughout their volunteer placement, whereas others gave feedback at the end. Often evaluation was multi-modal, in the form of both written and verbal feedback. Questions provided by organisations for written feedback were described as 'broad' and 'generic': "[...] it was just 'What went well, what didn't? What could be improved?' I don't think there was anything particularly... Yeah it didn't really ask a great deal of specifics." #7 This 'open' approach to evaluation was described by five of the participants. In contrast, Participant #3 described how a structured debrief forced him to confront more difficult questions.
"I think the structured stuff was really useful in that it asked questions that I had to answer and I wasn't able to shy away from, that I was just caught up for some time with." #3 Participants were unclear in many examples about the purpose of feedback or reports they had provided to their organisations.
"It wasn't sort of as 'This is a debrief' but I guess that's what it was." #1 The debrief experiences described by volunteers covered a range of issues as well as or instead of evaluating the quality standards of their activities. These included feedback on logistical factors (e.g., accommodation and transport), personal reflection on what they had learned from the opportunity, psychological or emotional support, clinical case reports and gathering evidence to support future funding opportunities.
Barriers to Participation in Critical Evaluation
Volunteers indicated that they valued reflective practice and critical evaluation but that this process was not prioritised compared to other commitments: "[...] things like that are important but non-urgent, so they never get done." #4 Two participants involved at a managerial level with their volunteer activities described 'moving forwards' to plan future activities as a high priority, which reduced the time available for reflection on ongoing or completed projects.
Another barrier frequently described was the perceived difficulty of assessing the impact of their activities: "Have I done anything sustainable, at all? And sometimes I look at things and just think 'What did I actually do?'" #3 "This qualitative stuff is much harder work and not really my skill, at all." #4 Interviewees described the challenge in evaluating outcomes that were 'indirect,' 'on a societal level' and 'hard to measure.' They compared this to quantitative evidence, which more than one interviewee described as more 'tangible': "Yeah so the clinical experience itself was doing good. But obviously it was on a much smaller, a much lower level than anything educational would." #2 Participants described difficulty in finding appropriate forums to share their critical evaluation findings to support wider organisational learning: "I think these experiences are so multifaceted and so... you know it's very difficult to present it all in a written document." #3 "It would be too soft in points [...] it's not really scientific, in inverted commas, what people want." #5
Discussion
This study aimed to address some of the knowledge gaps around how to maintain ethical standards of international medical volunteering from the perspective of volunteer doctors. This is a topical issue gaining increasing traction in political discourse and currently undergoing development in Scotland [25]. The findings from this research highlight a number of areas for consideration in the professionalisation of medical volunteerism in the Scottish NHS and on a wider scale. The broad spread of background and experience, even within a small sample, demonstrates the varied nature of volunteer work undertaken by physicians in LMICs.
It is reassuring that the volunteers in this study recognised and valued the need to maintain ethical standards in their practice in LMICs. However, they were unaware of existing policy regarding quality standards of volunteering. This included consultants with many years' experience of volunteering in LMICs, as well as doctors within a few years of graduation. In the absence of structured training or clear frameworks which effectively communicate policy objectives to a volunteer level, volunteers are developing their own personal ethical frameworks through a number of informal mechanisms, including social networks and role-models. These influences lack standardisation and are not necessarily aligned with Scotland's development policy. This may result in physicians participating in activities which have limited benefit to partners in LMICs or the NHS and may even risk harm. Mano-Negrin and Mittman [32] highlight the importance of informal and unstructured social networks on physician behaviour in the context of clinical decisions and suggest that these may be a powerful tool for the dissemination of clinical guidelines. The 'Global Citizenship' report [25] outlines plans emerging in Scotland to formalise and expand existing global health networks. This could represent an opportunity to disseminate structured ethical frameworks and offer an alternative to traditional educational methods for bridging the gap between policy and practice.
Influential role-models were also described by participants as having a key role in shaping their understanding of the ethical implications of volunteer work in LMICs. Role-modelling represents a well-recognised form of professional development in the medical field, although Paice et al. [33] describe how it is not a dependable way of imparting attitudes and values, as some senior clinicians may display poor attitudes and unethical behaviour. The development of mentorship programmes could offer another strategy to facilitate this transfer of knowledge and values from more to less experienced medical volunteers. These potential opportunities for standardisation and development of influences on volunteer development through alternative education and dissemination methods warrant further research.
There is little evidence in academic literature of active critical evaluation of medical volunteer activities in LMICs. This study identified that volunteer physicians are engaging in evaluative practice to some extent. However, this process may lack sufficient structure to focus on meaningful aspects of volunteer activity. Table 2 compares the debrief experiences of these participants with the key evaluation questions from MSF's evaluation handbook [19], which examine specific aspects of its programmes in relation to ethical standards. Participants described many potential areas for discussion as part of a 'debrief' experience which do not necessarily facilitate critical evaluation of quality standards. Without adequate structure and framing, evaluation processes risk being superficial and failing to address whether quality standards are fundamentally being upheld.
Table 2. MSF key evaluation questions [19] compared with participants' experiences of debrief.

A further barrier identified in the literature is a medical science paradigm which leads to an inappropriate focus on quantitative results and neglect of more complex social issues at stake. This was reflected in the experience of these participants. Efforts to increase evaluation and monitoring as a method to professionalise volunteering may prove ineffective if these barriers to engagement are not addressed. This research is on a small scale and in the particular context of NHS Scotland; findings should therefore be regarded as preliminary and exploratory in nature. Further research is needed to explore this issue, which could help establish the key factors involved in volunteer development and inform developing government policy, programmes and recommendations. The perspectives of LMIC partners, volunteer organisations and other cadres, such as volunteer allied health professionals, are also clearly relevant and would give a more comprehensive view of this issue.
Overall, this research has explored some of the less well understood areas and processes which underpin the ethical standards of international medical volunteering. Volunteer activities are occurring on a large scale with significant political and financial investment. Despite the potential for limited benefit or even harm, there is limited evidence of ethical engagement and evaluation from volunteers and their organisations in the literature. The general lack of standardisation and coordination of these activities contrasts with established governance systems in the UK for research in LMICs and more recent developments in professionalising the medical humanitarian response. The potential gap between policy and practice, the influence of role models, a lack of structure and framing for debrief, as well as barriers to engagement in evaluation, are all issues which may be generalisable to the medical volunteer force on a larger scale. These findings highlight the need to develop guidance on best practice in volunteer work in LMICs which is publicised effectively to volunteer physicians. Evaluation of volunteer activities also requires scrutiny to ensure aspects of ethical quality are appropriately addressed. Professionalisation is necessary to maximise benefits and avoid harm for both partners in LMICs and the NHS. Further research is needed to help guide this development to ensure ethical standards are upheld.

Appendix A. Interview Guide (Excerpt)

If you were volunteering with a colleague who didn't seem to recognise any need for ethical standards, like sustainability, how would you react?

Do you think your awareness or understanding of these was influenced by your last STMM?
Digital twins’ implications for innovation
ABSTRACT Digital transformation and emerging technologies create new business opportunities that differ from traditional requirements. Digital twins, referring to digital replicas of a physical entity, are rapidly developing and creating novel innovation opportunities. This study focuses on increasing the understanding of the implications of the use of digital twins for innovation, and case studies were used to gather evidence. The study offers an interesting contribution to the literature on digital twins, presenting a framework that demonstrates how digital twins are characterised, and how they contribute (impact) and are used (scope) in innovation processes in the context of innovation and technology management. In addition, the findings provide interesting implications for different practitioners interested in the utilisation of digital twins. Using the results of this study, managers can enhance their companies’ digital transformation by acknowledging the multiple uses of digital twins.
Introduction
Innovation and technology management in companies is increasingly being shaped by the business potential of distinct digital technologies, such as digital twins (Hilbolling et al. 2020; Blichfeldt and Faullant 2021; Muscio and Ciffolilli 2020; Urbinati et al. 2020). Digital twins are digital replicas of physical entities such as products, processes, or systems. Digital twins differ from other related concepts in the level of data integration between the physical and digital counterparts, as they are fully integrated with real-time data exchange (Kritzinger et al. 2018). Technical solutions offered by digital twins are rapidly developing, and from a technical point of view, their application is constantly becoming easier and more cost-effective. Research on digital twins has mainly been conducted in different fields of engineering science. However, a meaningful application of digital twins requires an understanding of the connection of the concept to innovation. Tao et al. (2019) argue that little effort has been devoted to exploring the applicability of digital twins for product design with respect to how communication, synergy, and coevolution between a physical product and its digital representation (virtual product) can lead to a more informed, expedited, and innovative design process.
Digital twins provide several innovation opportunities, as they offer great avenues for the interoperation and fusion of the physical world and the cyberworld of manufacturing (Liu et al. 2019). Digital twins can facilitate visualisation, promote collaboration, and further decision-making, among others (Bao et al. 2019). More specifically, Zhou et al. (2020) suggest that digital twins can help in understanding, predicting, or optimising the performance of manufacturing processes through an intelligent analysis and decision-making process enabled by dynamic knowledge and skills. Tao et al. (2019) report that digital twins are mostly used for fault diagnosis, predictive maintenance, and performance analysis, with only some effort devoted to more innovative design processes or innovations. Despite the growing trend toward digital twin-driven innovations and their commercialisation, the literature is deficient in several important ways. Business and management researchers have reported only a few studies on the utilisation of digital twins. Since digital twins play an important role in providing value for stakeholders and generating profit, a comprehensive analysis of the benefits of digital twins is required (Lim, Zheng, and Chen 2020). Hence, the implications of digital twin utilisation should be systematically studied.
There has been a significant rise in digital twin studies in recent years. Despite the profusion of digital twin studies published, most focus on technical aspects. Little is known about how digital twins are integrated into a firm's innovation development. This study contributes to previously identified research gaps by investigating the utilisation of digital twins in innovation. The research question is as follows: What implications do digital twins have for innovation? The study presents a framework that demonstrates how digital twins are characterised, and how they contribute (impact) and are used (scope) in innovation processes in the context of innovation and technology management. Thus, this study aims to enhance the understanding of the implications of using digital twins in the contexts of innovation and technology management.
The digital twin concept
The commonly agreed definition of a digital twin currently highlights two important features. First, a digital twin provides a connection between a physical entity and its virtual counterpart (He and Bai 2021). Second, the connection between the physical entity and its virtual twin is established by sensors to provide real-time information (Tao et al. 2019; Wang et al. 2017). As mentioned earlier, a digital twin is typically considered a digital replica of a living or nonliving entity (He and Bai 2021), with a two-way dynamic mapping between a real-life object and its digital counterpart, which has a structure of connected data and metainformation (Aheleroff et al. 2021; Shao and Helu 2020; Tao et al. 2019; Zhou et al. 2020). A digital twin enables real-time data transfer by connecting physical and virtual entities (such as processes, products, assets, and personnel), making it possible for virtual entities to occur simultaneously with physical ones (He and Bai 2021). Digital twins have various characteristics and can be utilised in many phases of organisation, production, sales, and innovation processes and activities. The next section presents the possibilities of digital twins in the context of innovation.
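To make the two-way mapping concrete, the sketch below mirrors a simulated sensor-equipped asset in a virtual counterpart and closes the loop by pushing a command back to the asset. It is a minimal illustration under invented names and thresholds (PhysicalAsset, DigitalTwin, the 70-degree limit), not an excerpt from any digital twin platform or from the case companies.

```python
# Minimal digital-twin loop: physical -> digital telemetry sync, plus a
# digital -> physical command, illustrating the two-way dynamic mapping.
import random


class PhysicalAsset:
    """Stands in for a sensor-equipped machine (hypothetical)."""

    def __init__(self) -> None:
        self.temperature = 60.0
        self.setpoint = 60.0

    def read_sensors(self) -> dict:
        # Noisy reading drifting toward the current setpoint.
        self.temperature += 0.3 * (self.setpoint - self.temperature)
        self.temperature += random.gauss(0.0, 0.5)
        return {"temperature": self.temperature}

    def apply_command(self, setpoint: float) -> None:
        self.setpoint = setpoint


class DigitalTwin:
    """Virtual counterpart: mirrors asset state and can act on it."""

    def __init__(self, asset: PhysicalAsset) -> None:
        self.asset = asset
        self.state: dict = {}

    def sync(self) -> None:
        self.state = self.asset.read_sensors()      # physical -> digital
        if self.state["temperature"] > 70.0:        # digital -> physical
            self.asset.apply_command(setpoint=55.0)


if __name__ == "__main__":
    twin = DigitalTwin(PhysicalAsset())
    for _ in range(5):
        twin.sync()
        print(twin.state)
```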
Innovation as a process
The rise of digital twins includes transformations in technology and production (Cimino, Negri, and Fumagalli 2019; Liu et al. 2019; Tao et al. 2019). This, in turn, results in significant organisational implications (Parmar, Leiponen, and Thomas 2020), such as emphasising decision-making support for optimising production or maximising profitability (Lim, Zheng, and Chen 2020). Thus, digital twins are changing the nature of innovation. Innovation, in general, can be viewed as 'a value-added novelty in economic and social spheres' (Crossan and Apaydin 2010, 1155). Innovation has two roles: innovation as a process and innovation as an outcome (Crossan and Apaydin 2010). Regarding innovation as a process, the current digital era especially highlights the challenges of companies in generating innovation in isolation. This emphasises the locus of an innovation process, which may be either a firm-only or a network-level process (Crossan and Apaydin 2010).
In this sense, digital twins have various applications that can be exploited in different business areas (Fuller et al. 2020; Qi et al. 2021; Rasheed, San, and Kvamsdal 2020). As an innovation process, a digital twin enables collaboration throughout the value chain, not only in the integration and sharing of data between upstream and downstream companies but also in collaborative product development, manufacturing, operation, and maintenance (Cheng et al. 2020). Users can take part in the innovation and development processes of future products, services, processes, and business innovations via cloud computing (Lim, Zheng, and Chen 2020; Zheng et al. 2018), enabling the development of innovation through cooperation both within the company and with external parties. Using digital twins in the design phase, users can monitor the progress of the design of a digital model, implement and test modifications to the model, and provide direct online feedback on product features to the company (Tao et al. 2019).
Innovation as an outcome
The other role of innovation, innovation as an outcome, considers innovation to be more than a creative process and includes utilisation (Crossan and Apaydin 2010). Prior research identifies different forms of innovation, namely product innovation, service innovation, process innovation, and business model innovation. Product innovation refers to novel products targeted at technological competitiveness to serve the markets (Carboni and Russu 2018). Some scholars have defined service innovation as a specific type of product innovation connected to the actions by which services are designed or improved to meet customer needs (Wang et al. 2015). Process innovation covers the application of novel production, management, and process-related approaches (Wang and Ahmed 2004). Business model innovation is about 'designing a new or modifying the firm's extant activity system' (Amit and Zott 2010, 2).
In this sense, the introduction of digital twins provides novel innovation and value creation opportunities for industrial companies (Chirumalla 2021; Blichfeldt and Faullant 2021). Digital twin applications can be utilised in the design, production, and use phases of the product life cycle, allowing designers to digitalise, visualise, and materialise complex systems such as ships, aircraft, and factories, enabling product innovations (Qi et al. 2021). Digital twins can be used, for example, in digital design, where they can provide a framework for product design or the optimisation of physical parameters (Tao et al. 2019). In addition, services are an important component of digital twins. Liu et al. (2021) mention that the possible applications of digital twins as a service are predictive maintenance, fault detection and diagnosis, state monitoring, performance prediction, and virtual testing. When digital twins are utilised in service operations, they provide opportunities, for example, for preventive maintenance and equipment status monitoring (Errandonea, Beltrán, and Arrizabalaga 2020; Kritzinger et al. 2018). In addition, according to Qi et al. (2021), the potential of digital twins as a service is related to simulation, verification, monitoring, optimisation, diagnosis, and prognostic and health management. By enabling the transmission of data between the physical and virtual worlds, digital twins have the potential to provide real-time information on equipment and production operations as well as potential actions, increasing production predictability and enabling process innovations leading to process and performance improvements (Aheleroff et al. 2021; Fuller et al. 2020). In addition, digital twins can be utilised in production optimisation through process simulation to support virtual production and productivity (He and Bai 2021; Zheng et al. 2018). By acting as a real-time monitoring and forecasting tool, digital twin technology can be utilised in the development phase of buildings, structures, and smart cities, as well as in their maintenance (Fuller et al. 2020). Digital twins' ability to synchronise the real and digital worlds also enables novel types of business model innovation, changing existing ways of operating. These innovations can be used to create new types of earning models, for example, to offer smart solutions in the construction, transportation, and energy sectors (Rasheed, San, and Kvamsdal 2020).
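As a hedged sketch of one of the service uses named above (predictive maintenance), the snippet below fits a linear degradation trend to twin telemetry and extrapolates it to a failure threshold to estimate remaining useful life. The signal, threshold and units are invented for illustration; production-grade prognostics would use far richer models.

```python
# Toy remaining-useful-life (RUL) estimate from twin telemetry: fit a
# linear trend to a degradation signal and extrapolate to a threshold.
import numpy as np

FAILURE_THRESHOLD = 8.0  # vibration level treated as failure (assumed)

def estimate_rul(hours: np.ndarray, vibration: np.ndarray) -> float:
    """Hours until the fitted trend crosses the failure threshold."""
    slope, intercept = np.polyfit(hours, vibration, deg=1)
    if slope <= 0:
        return float("inf")  # no degradation trend detected
    return max((FAILURE_THRESHOLD - intercept) / slope - hours[-1], 0.0)

# Example: 200 h of hourly telemetry with a slow upward drift plus noise.
t = np.arange(200.0)
signal = 2.0 + 0.02 * t + np.random.default_rng(0).normal(0.0, 0.2, t.size)
print(f"estimated RUL: {estimate_rul(t, signal):.0f} h")
```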
Summary
It can be argued that despite the promising possibilities that digital twins offer organisations, the phenomenon from the perspective of innovation has been underexplored. Previous studies have focused on how digital twins can support the optimisation of an organisation's internal operations and processes by streamlining existing activities through data collection and visibility. Although the use of digital twins is likely to contribute to the development of innovation (e.g. Cheng et al. 2020; Lim, Zheng, and Chen 2020; Zheng et al. 2018), and an increasing number of organisations are taking advantage of various forms of digital twins as part of their operations, understanding of the implications of their use is relatively limited (e.g. Aheleroff et al. 2021; Fuller et al. 2020; He and Bai 2021). The focus on streamlining and optimising internal operations diverts attention from innovation, in the form of new business models, services, and products, as an outcome. Thus, a systematic understanding of the implications of the use of digital twins, covering scope, characteristics, and impact and based on empirical evidence, is necessary.
Research approach
Digital transformation and emerging technologies create new business opportunities that differ from traditional requirements. We studied digital twins in the industrial environment to answer the following question: What implications do digital twins have for innovation? The research was conducted using a qualitative case study approach (Eisenhardt 1989). This is an appropriate research methodology when there are many variables in the study and the study consists of a specific, complex phenomenon in a real-world context (Yin 2014). A case study approach allows an in-depth and multi-faceted understanding of the researched phenomenon in its natural context (Crowe et al. 2011).
Case selection and data collection
Following Stake (1995), the study used a collective case study approach. A multiple (Yin 2003) or collective case study (Stake 1995) involves consideration of more than one case in order to allow researchers to explore differences within and between cases, providing a broad understanding. In addition, using multiple case studies constitutes an appropriate research methodology when the same phenomenon is thought to occur in multiple situations (Yin 1981). This research approach helped us identify the similarities and differences in how companies seek to leverage digital twins in their innovation activities. The research utilised multiple cases involving small- and medium-sized companies (SMEs) and large companies. SMEs are defined as companies with fewer than 250 employees and an annual turnover not exceeding EUR 50 million or a balance sheet total not exceeding EUR 43 million. In order to achieve higher external validity, multiple cases were used (Voss, Tsikriktsis, and Frohlich 2002) with the goal of identifying repeat findings across the different cases (Yin 2003).
Case selection is an important part of a successful case study (Eisenhardt 1989). Cases were carefully selected to allow comparison and replication (Yin 2003) using the following criteria: (1) the company must be taking concrete initiatives to develop and/or implement digital twins, (2) the company must allow access to relevant information concerning their use of digital twins and innovation processes, and (3) the interviewee(s) should be aware of their company's use of digital twins and the relevant organisational areas and processes. A total of six companies and 14 interviewees participated in the study (Table 1). Both SMEs and large companies were selected to provide a comprehensive picture of the development and use of digital twins, particularly in relation to innovation.
Furthermore, this research is based on the analysis of open-ended interviews conducted among digital twin solution providers and utilisers. The case companies are actively developing or using digital twin solutions and were therefore selected as cases for this study. To gain a complete view of the mechanisms of digital twins in innovation processes, representatives from different units, hierarchical levels, and job descriptions were interviewed. Because the utilisation of digital twins in innovation development is evolving, preliminary questions were constructed. Thus, the interview execution permitted unofficial discussions and allowed researchers to ask supporting questions. In particular, the interviews focused on how the company currently utilises digital twins to support its operation and how digital twins and their multiple characteristics could support companies in the future. These provided important insights into how digital twins can be leveraged to support innovation and development processes, enabling new types of products, services, processes, and business model innovations.
The interviews were conducted in 2020. They were carried out with the entire research team, each interview lasting between 60 and 80 min. The interview questions focused on predefined dimensions taken from a review of the literature (Scope, Table 2). Building interview questions based on these dimensions helped ensure the reliability of the data (Yin 2003). Given that the topic of digital twins is evolving, the interview protocol allowed for informal discussions and supporting questions from the researchers. The flexibility of the data collection made it possible to obtain a detailed description of each individual case (Crowe et al. 2011). The interviews were recorded and transcribed to facilitate the analysis phase.
Data analysis
The study can be classified as a theory-generating case study in which several cases were exploited and in which researchers looked for both similarities and differences between cases in pursuit of theoretical generalisations (Ketokivi and Choi 2014). Existent theoretical considerations guided the predefined dimensions of the interviews (Scope, Table 2), but empirical analysis informed and shaped the findings (Ketokivi and Choi 2014). The collected data were coded with the help of the predefined dimensions and were analysed using within-case and cross-case analyses (Eisenhardt 1989). The unit of analysis was companies, specifically companies' experiences with the use of digital twins and their potential for innovation. The within-case analysis provided more detailed information on how each company uses or develops digital twins and identified the related innovation mechanisms. Cross-case analyses were conducted between companies to identify repetitive patterns and to support the creation of the framework of the results (Table 2). The analyses were performed by a single researcher. After that, there was an open discussion among the four researchers about the results of the cases. Finally, the results were analysed during the discussion, and theory-generating proposals were formed (Figure 1).
Findings
In examining the impact of digital twins on innovation, we focused on the different scopes of digital twin utilisation. In analysing these scopes, the implications of digital twins were analysed in terms of their impact, characteristics, and the two roles of innovation: as a process and as an outcome. This section describes the main results and the propositions (P) derived from them. Table 2 provides a summary of the results.
The findings show the impacts of digital twins based on their characteristics in different areas of business, verifying the diverse possibilities of digital twins.
The same digital twin can be used throughout the product life cycle as a design aid, for validation and testing, and in part, also for production, training, marketing, maintenance, and service. (The chief technology officer, Company D) The results of this study show the ways that digital twins contribute to innovation processes (impacts) and how digital twins are used in innovation processes (scope). For example, digital twin exploration characteristics could be applied in the product design and development areas, enabling experiments, innovations, and new opportunities through digital twins' simulation capabilities.
It allows you to go through different operating situations without breaking anything. Different situations can be simulated, and customers also want to do different tests on operating situations that would be tricky on the real machine. (The customer project manager/production manager, Company D) P1: Making greater use of a digital twin facilitates product development, as it allows designing and testing products virtually, reducing errors and surprises, increasing collaboration, saving time, and enhancing information sharing for all parties.
On the other hand, digital twins offer opportunities for more efficient production processes, as they can be used to implement real-time production monitoring and control, enhancing the guidance of operations.
With the help of the digital twin, you have access to real-time production information, so it is very easy to develop the production and modify processes so that the entire production capacity can be utilized. … The benefit of real-time monitoring is that everyone in the organization has the same knowledge, which enables development and allows data-based decisions to be made. (The chief operating officer, Company F) P2: Making greater use of a digital twin facilitates production processes, as it enhances the quality of decision-making and information transparency for workers and management.
According to the study, digital twins serve as an excellent communication tool. Possessing interaction characteristics, digital twins enable information sharing and communication between different parties both within and outside the company.
The digital twin is a great channel of communication with partners and customers. It kind of allows us to speak the same language when we are looking at the digital twin together. It also allows you to try things out, look at the machine in action, see the work process, and much more. So, it is a good channel for dialogue with other parties. … And then within the company, it is, of course, a great communication channel in the sense that usually, the company has its own department which is responsible for electrical components, its own department which is responsible for hydraulics, for mechanics, and for the control system. And then all these departments now have a conversation around the digital twin. When changes are made, their effects on department-specific issues are immediately seen, and it helps to go through a wide range of problems in different situations. (The customer project manager/production manager, Company D) P3: Making greater use of a digital twin facilitates cooperation, as it increases the availability of information and improves information sharing.
The digital twin model has various implications for sales and marketing processes, enhancing their execution. At the sales stage, the customer can see and try out the features of the products, influencing the customer's purchasing decision and speeding it up.
If we have a good digital twin at the sales stage, then with the customer we can customize and optimize the operation of the real product as desired. (The product development director, Company B) With digital twins of the products, our customers have been able to streamline the early stages of the sales process. For example, two to four weeks of work can be condensed to 15 min, the solution can be presented in a much clearer form to the end customer, and the salesperson does not have to be an expert with 10 years of experience. (The chief operating officer, Company F) P4: Making greater use of a digital twin facilitates sales and marketing, as it promotes the making of offers in a timely manner and increases the flow of information to customers.
Also, the study showed that digital twins have various implications for the companies' maintenance services. By optimising the maintenance of products and spare parts, and using preventive maintenance, companies saw that savings in both money and time could be achieved. In addition, the companies found that digital twins will create new types of after-sales maintenance services in the future, such as education and remote support services.
With digital twins, we can get help with the timeliness and planning of maintenance, so that it is not necessary to stop a piece of equipment or the whole factory when a part breaks down unexpectedly; instead, the whole chain can be monitored so accurately and in such good time that it is known when any part will break down, and maintenance and repairs can then be planned. (The business director, Company F) P5: Making greater use of a digital twin facilitates product maintenance and preventative maintenance, as it saves money and time by optimising maintenance and enables the application of new types of maintenance services, such as training and remote support.
Digital twins offer completely new opportunities for business discovery and development. By learning more about customer needs and integrating information silos, new service opportunities can be found.
I want to emphasize that, in a way, the integration of these silos is one of the key issues. Because, firstly, it would allow a lot of new business at no extra cost if the world of information between these silos were kind of connected. (The CEO, Company A) One of the new possibilities was the utilisation of the digital twin for learning purposes, which was already taken advantage of by Company E.
The more technology evolves, the easier it is for it to be used for both internal and external learning purposes. For example, when we supply an entire factory and we have a good digital twin of it, then we can utilize the digital twin as a tool to help the customer learn how to operate the factory or the complex production lines. (The product development manager, Company C) Well, we utilize digital twins in driver training. We have two simulator chairs where drivers can try our systems and how they work in the virtual world. (The research team leader, Company E) P6: Making greater use of a digital twin facilitates finding new business, as it increases knowledge of the customer and their needs.
The findings demonstrate how businesses are moving from technology-centric to value-centric innovation processes to create value with digital twins. This was reflected in the fact that companies, both providers and users, were actively looking for new ways to leverage digital twins in their business, looking for new business opportunities and seeking answers to various challenges. The results showed how companies were seeking new opportunities and developing their operations with digital twins (innovation as a process); some of the companies' innovation activities were conducted as firm-only processes, while others were conducted as network-level processes. Overall, the development of networking was emphasised, as the digital twin will enable cooperation between different parties.
In my opinion, it would be good to have an open platform that hosts these digital twins, and it would also not hurt to reach out to external developers; external parties would bring more information and skills, innovations would probably increase, and it would then help the whole ecosystem to develop. (The development manager, Company C) P7: The characteristics of a digital twin are dependent on the understanding of innovation either as a process or as an outcome.
The results also demonstrate the importance of the distinct characteristics of digital twins. Distinct characteristics impact the extent to which digital twins can be utilised to achieve different benefits, for example, enhancing decision-making, improving information sharing and collaboration, optimising processes, and boosting sales.
P8: The characteristics of a digital twin determine its implications.
Discussion
This study enhances the understanding of how digital twins are integrated into a firm's innovation. The research question is as follows: What implications do digital twins have for innovation? The study offers an interesting contribution to the digital twin literature: we consider digital twins in the context of innovation and investigate the implications of digital twins in terms of different scopes of operation.
The results of the study show that companies are actively searching for possibilities to support innovation through the utilisation of digital twins. The findings further indicate that organisations are able to recognise the scope where the utilisation of digital twins can best support innovation and produce positive results. The possible and sought-after impacts of the utilisation of digital twins in innovation are also recognised from both the service provider's and the utiliser's perspectives. However, these recognised advantages seem to rely on boosting existing business elements and optimising processes. It is still uncommon to add digital twin support to a product during its early design phase, even though their supporting elements as part of innovation are understood.
Thus, the study reveals three distinct issues that describe the current state of digital twins' utilisation in innovation activities. First, the efforts of companies toward value-centric innovation through digital twin solutions were recognised (cf. Lim, Zheng, and Chen 2020). The technology- and product-oriented innovations around digital twins are considered just one aspect of innovation, and the focus is shifting rapidly toward the entire value chain to strive for new business opportunities for the entire network. This is in line with prior research suggesting that the use of digital twins provides novel innovation and value creation opportunities for industrial companies (Chirumalla 2021; Blichfeldt and Faullant 2021). Thus, this study proposes that the characteristics of digital twins are dependent on the understanding of innovation as either a process or an outcome.
Second, the findings indicate that innovations related to digital twins allow for a completely new kind of cooperation, for example, in the form of open platforms. This means simultaneous and mutual creation of all types of innovations around digital twins, where digital twins are considered as open platforms that in turn allow simultaneous upgrading of technologies, communication, and data sharing among partners. This study shows that these efforts require various characteristics of digital twins, such as exploration, experimentation, optimisation, interaction, discovery, and guidance. The result is in line with Cheng et al. (2020), who concluded that digital twins enable collaboration that is more than just the integration and sharing of data between upstream and downstream companies. As the rise in the use of digital twins includes transformations in technology and production (Cimino, Negri, and Fumagalli 2019; Liu et al. 2019; Tao et al. 2019), the organisational implications may also vary (Lim, Zheng, and Chen 2020; Parmar, Leiponen, and Thomas 2020). Thus, this study proposes that the characteristics of digital twins determine the implications of their use.
Third, the results support prior research concluding that digital twins have various applications in multiple scopes (Fuller et al. 2020; Qi et al. 2021; Rasheed, San, and Kvamsdal 2020). Some studies (e.g. Rasheed, San, and Kvamsdal 2020) have highlighted digital twins' ability to combine the real and digital worlds in a way that enables novel types of business model innovation, but this possibility has not been fully exploited in the studied companies. Although the companies were actively striving for new ways to leverage digital twins in their businesses, looking for new business opportunities and seeking answers to various challenges, the focus was still on developing existing operations. This development perspective can prevent companies from understanding the potential of twin-based innovations to create novel earning models. Thus, we propose that making greater use of a digital twin facilitates product development and design, sales and marketing, maintenance services, cooperation with the customer and within the company, finding new business, and production processes. However, the topic requires further research, because the findings may indicate a need for stand-alone development activities in the companies, focusing only on novel business model innovations around digital twins.
Finally, based on the findings, the study suggests a framework (Figure 1) for the innovation implications of the use of digital twins. The framework provides a foundation for researchers exploring the use of digital twins in innovation. The framework emphasises the need to understand the developmental scope before the type of digital twin innovation can be understood and defined. The type of innovation influences the desired characteristics of the digital twin. Demonstrating the characteristics of digital twins will also help in understanding the type of innovation required and indicate the related implications. In this way, the framework can also promote the exploitation of digital twins as an innovation process that enables collaboration throughout the value chain of product development, manufacturing, operation, and maintenance (cf. Cheng et al. 2020).
Conclusion
This study investigates the implications that digital twins have for innovation, presenting empirical evidence supporting a framework for the implications of digital twins. Thus, our study contributes to the existing literature by providing a systematic understanding of the implications of digital twins.
From a theoretical perspective, this study uniquely contributes to the field of innovation and technology management by investigating digital twins with respect to innovation. Our results emphasise the importance of first defining the scope of operation that the digital innovation targets. This typifies the nature of the innovation to be developed. Our findings also suggest that the use of digital twins requires not only an understanding of the nature of innovation but also knowledge of the characteristics of digital twin development. The characteristics of digital twins determine their implications. The developed framework builds on the recognised classification of innovation and specific characteristics of digital twins, which the more technically oriented digital twin studies tend to overlook. Second, the research is the first to systematically study the actual use cases of digital twins and to provide implications for further theoretical and empirical studies on innovation. The study shows that making greater use of digital twins has implications for various scopes of operation, including product development and design, sales and marketing, maintenance services, cooperation, finding new business, and production processes. This framework can be used in future studies investigating the implications of the use of digital twins.
From a practical point of view, the findings of this study provide interesting implications for different practitioners interested in the utilisation of digital twins. There is currently growing interest among innovation management professionals in the use of digital twins, and by studying their actual use, the study provides practitioners with examples of how digital twins can be utilised in different contexts. As such, the study findings offer practitioners possibilities for adopting digital twins to support their innovation processes. The framework can be used as a design tool to facilitate the implementation of digital twins. In summary, using the results of this study, managers can enhance their companies' digital transformation by acknowledging the multiple uses of digital twins.
While the results of this study reveal that both digital twin service providers and utilisers can recognise the scopes and impacts of the utilisation of digital twins in innovation processes, an understanding of their actual effectiveness is still rare. As such, further studies may be needed to explore the effectiveness of digital twins in the innovation processes of organisations. It may be valuable and important to understand more about the characteristics of a digital twin that generate value in innovation processes.
Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors

Juhani Ukko (D.Sc. Tech.) is a Professor at LUT University, School of Engineering Science, Department of Industrial Engineering and Management. He is also an adjunct professor at Tampere University. His current research focuses on performance measurement, operations management, digital transformation, digital services and corporate sustainability performance. In recent years, he has managed and participated in research projects related to digital transformation in companies and society. His work has been published in journals such as Information Systems Frontiers, Computers in Industry, International Journal of Operations and Production Management and International Journal of Production Economics.

Mira Holopainen (M.Sc. Tech.) is a Project Researcher and Doctoral Student in the School of Engineering Science at LUT University, Finland. Her research is related to performance measurement and management as well as the digital transformation of industrial companies.

Minna Saunila (D.Sc. Tech.) is an Associate Professor at LUT University, School of Engineering Science, Department of Industrial Engineering and Management. She received a D.Sc. degree from LUT in 2014 in the field of Industrial Management. Her research covers topics related to performance management, innovation, service operations, as well as sustainable value creation. Recently, her research projects have been related to the digitisation of services and production. She has previously published in Technovation, Computers in Industry, Journal of Engineering and Technology Management, and Technology Analysis and Strategic Management, among others. Since 2018 she has also been a docent of the University of Jyväskylä School of Business and Economics.

Tero Rantala (D.Sc. Tech.) is a Postdoctoral Researcher at LUT University, School of Engineering Science. His current research focuses on performance management and measurement of university-industry collaborations. In addition, his current research interests involve different areas of performance management in digital business environments and sustainable business contexts. He has previously published in journals such as European Journal of Operational Research, Journal of Cleaner Production, Information Technology & People, and Education and Work.
Table 1. Informants of the interviews.

Table 2. The implications of digital twins in different scopes.
Exclusive $f_{1}(1285)$ meson production for energy ranges available at the GSI-FAIR with HADES and PANDA
We evaluate the cross section for the $p p \to p p f_{1}(1285)$ and $p \bar{p} \to p \bar{p} f_{1}(1285)$ reactions at near-threshold energies relevant for the HADES and PANDA experiments at GSI-FAIR. We assume that at energies close to the threshold the $\omega \omega \to f_{1}(1285)$ and $\rho^{0} \rho^{0} \to f_{1}(1285)$ fusion processes are the dominant production mechanisms. The vertex for the $VV \to f_{1}$ coupling is derived from an effective coupling Lagrangian. The $g_{\rho \rho f_1}$ coupling constant is extracted from the decay rate of $f_{1}(1285) \to \rho^{0} \gamma$ using the vector-meson-dominance ansatz. We assume equality of the two coupling constants, $g_{\omega \omega f_1} = g_{\rho \rho f_1}$, based on arguments from the naive quark model and vector-meson dominance. The amplitude for the $VV \to f_{1}$ fusion, supplemented by phenomenological vertex form factors, is given. The differential cross sections at energies close to the threshold are calculated. In order to determine the parameters of the model, the $\gamma p \to f_{1}(1285) p$ reaction is discussed in addition, and the results are compared with the CLAS data. The possibility of a measurement by HADES@GSI is presented and discussed. We performed Monte Carlo feasibility simulations of the $p p \to p p f_1$ reaction for $\sqrt{s} = 3.46$ GeV in the $\pi^+ \pi^- \pi^+ \pi^-$ (not shown explicitly) and $\pi^+ \pi^- \pi^+ \pi^- \pi^0$ final states using the PLUTO generator. The latter is especially promising, as a peak in the $\pi^+ \pi^- \eta$ invariant mass distribution should be observable by HADES.
I. INTRODUCTION
The production of light axial-vector mesons with quantum numbers $I^G J^{PC} = 0^+ 1^{++}$ is very interesting and was discussed in a number of experimental and theoretical papers. For example, the $f_1(1285)$ meson was measured in two-photon interactions in the reaction $e^+ e^- \to e^+ e^- \eta \pi^+ \pi^-$ ($\eta \to \gamma\gamma$) by the Mark II [1], the TPC/Two-Gamma [2,3], and, more recently, by the L3 [4] collaborations. In such a process the $\gamma^* \gamma^* \to f_1(1285)$ vertex, associated with corresponding transition form factors, is the building block in calculating the amplitude. Different vector-vector-$f_1$ vertices and corresponding transition form factors were suggested in the literature [5][6][7][8][9][10][11][12][13][14]. It was suggested in [15] that a measurement of the $e^+ e^- \to e^+ e^- f_1(1285)$ reaction with double tagging at Belle II at KEK could shed new light on the $\gamma^* \gamma^* f_1$ coupling with two virtual photons.
The $f_1(1285)$ meson was also measured in the photoproduction process $\gamma p \to f_1(1285) p$ by the CLAS Collaboration at JLAB [16]. The differential cross sections were measured from threshold up to a center-of-mass energy of $W_{\gamma p} = 2.8$ GeV in a wide range of production angles. The $f_1(1285)$ photoproduction was studied extensively from the theoretical point of view; see [17][18][19][20][21]. There, the $t$-channel $\rho$ and $\omega$ exchange (either Regge trajectories or meson exchanges) is the dominant reaction mechanism for the small-$t$ behaviour of the cross section, that is, in the forward scattering region. The contribution of the $u$-channel proton-exchange term with the coupling of $f_1(1285)$ to the nucleon is dominant at backward angles [18][19][20][22]. In [20] the authors showed that the $s$-channel nucleon resonance $N(2300)$ with $J^P = 1/2^+$ may also play an important role in the $\gamma p \to f_1(1285) p$ reaction around $\sqrt{s} = 2.3$ GeV. As was shown in [20], other contributions, the $s$-channel proton-exchange term, the $u$-channel $N(2300)$-exchange term, and the contact term, are very small and can be neglected in the analysis of the CLAS data. The Primakoff effect from virtual-photon exchange in the $t$-channel was discussed in [21]. This mechanism is especially important in the forward region and at higher $W_{\gamma p}$ energies.
The $pp \to pp f_1(1285)$ reaction was already measured by the WA102 Collaboration at center-of-mass energies $\sqrt{s} = 12.7$ and 29.1 GeV [23][24][25][26]. There, the dominant contribution at $\sqrt{s} = 29.1$ GeV is most probably related to the double-pomeron-exchange (PP-fusion) mechanism; see [27]. In [27] the $pp \to pp f_1(1285)$ and $pp \to pp f_1(1420)$ reactions were considered in the tensor-pomeron approach [28]. A good description of the WA102 data [25] at $\sqrt{s} = 29.1$ GeV was achieved. A study of central exclusive production (CEP) of the axial-vector mesons $f_1$ at high energies (RHIC, LHC) could shed more light on the coupling of two pomerons to the $f_1$ meson [27]. As discussed in Appendix D of [27], at the lower energy $\sqrt{s} = 12.7$ GeV the reggeized-vector-meson-exchange or reggeon-reggeon-exchange contributions should be taken into account.
The ωω → f1 and ρ0ρ0 → f1 fusion processes are the most probable low-energy production mechanisms. We know how the ω and ρ0 couple to nucleons. However, the ωω → f1 and ρ0ρ0 → f1 couplings are less well known. We note that future experiments at HADES and PANDA will provide new information there. The ρ0ρ0 → f1(1285) coupling constant can be obtained from the decays f1 → ρ0γ and/or f1 → π+π−π+π−.
In the present analysis we obtain the g ρρ f 1 coupling constant from the radiative decay process f 1 (1285) → γρ 0 → γπ + π − using the vector-meson-dominance (VMD) ansatz; see Appendices A and B. We discuss briefly our results for the γp → f 1 (1285)p reaction and compare with the CLAS data in Appendix C. From this comparison we estimate the form-factor cutoff parameters.
The PANDA experiment (antiProton ANnihilations at DArmstadt) [29] will be one of the key experiments at the Facility for Antiproton and Ion Research (FAIR) which is currently being constructed. At FAIR, a system of accelerators and storage rings will be used to generate a beam of antiprotons with a momentum between 1.5 and 15 GeV/c. The design maximum energy in the center-of-mass (c.m.) system for antiproton-proton collisions is √ s ≃ 5.5 GeV. The exclusive production of the f 0 (1500) meson in antiproton-proton collisions via the pion-pion fusion mechanism was discussed for the PANDA experiment in [30]; see also Fig. 3 of [31]. The pion-pion fusion contribution grows quickly from the threshold, has a maximum at √ s ≃ 6 GeV and then drops slowly with increasing energy. The predicted cross section for the pp f 0 (1500) final state is σ f 0 = 0.3 − 0.8 µb for √ s = 5.5 GeV; see Sec. III C of [30]. At intermediate energies (e.g. for the WA102 and COMPASS experiments) other exchange processes such as the reggeon-reggeon, reggeon-pomeron and pomeron-pomeron exchanges are very probable; see e.g. [31].
A measurement at low energies, such as at HADES@GSI, would be interesting in order to impose constraints on the VV → f1(1285) vertices. In this paper we wish to make first estimates of the total and differential cross sections for the pp → pp f1(1285) and p̄p → p̄p f1(1285) reactions at energies relevant for the HADES and PANDA experiments. We shall present some differential distributions for the HADES energy of √s = 3.46 GeV and for the future experiments with the PANDA detector at √s = 5.0 GeV. The experimental possibilities of such measurements will be discussed in addition.
In [27] for the reaction (2.1) the pomeron-pomeron-fusion mechanism was considered, which seems to dominate at the WA102 energy of √s = 29.1 GeV. As discussed in Appendix D of [27], at lower energies other fusion mechanisms may be important. We shall take into account only the main processes at energies close to the threshold, the VV-fusion mechanisms, shown by the diagrams in Fig. 1. There can also be the a1^0(1260)π0-fusion mechanism, not discussed in the present paper. Note that due to the large width of the a1(1260) the decay f1(1285) → π±a1∓ can easily occur for off-shell a1(1260), and this is an important decay mode in the f1 → 2π+2π− channel, as will be discussed in [32].
FIG. 1. The VV-fusion mechanisms (VV stands for ωω or ρ0ρ0) for f1 production in proton-proton collisions.
The kinematic variables for the reaction (2.1) are defined as usual; for the kinematics see e.g. Appendix D of [31]. The amplitude for the reaction (2.1) includes two terms, the ρ0ρ0- and ωω-fusion contributions. The VV-fusion (VV = ρ0ρ0 or ωω) amplitude involves the following quantities: εα(λ) is the polarisation vector of the f1 meson, the Γ's are the Vpp and VVf1 vertex functions, respectively, and Δ̃(V)µν is the propagator for the reggeized vector meson V. At very low energies the latter must be replaced by Δ(V)µν, the standard propagator for the vector meson V. We shall now discuss all these quantities in turn.
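The explicit expression of the amplitude is not reproduced above. As a guide, a schematic structure built only from the quantities just listed (Vpp and VVf1 vertex functions, reggeized propagators, f1 polarisation vector) is sketched below; this is a reconstruction, and the normalisation and index conventions of the original equation may differ.

```latex
% Schematic VV-fusion amplitude for pp -> pp f_1; a reconstruction from the
% building blocks named in the text, not the paper's exact expression (2.4).
\mathcal{M}^{(VV\mathrm{-fusion})} =
\epsilon^{*\alpha}(\lambda)\,
\bar{u}(p_1,\lambda_1)\, i\Gamma^{(Vpp)}_{\mu_1}\, u(p_a,\lambda_a)\;
i\tilde{\Delta}^{(V)\,\mu_1\nu_1}(s_1,t_1)\;
i\Gamma^{(VVf_1)}_{\nu_1\nu_2\alpha}(q_1,q_2)\;
i\tilde{\Delta}^{(V)\,\nu_2\mu_2}(s_2,t_2)\;
\bar{u}(p_2,\lambda_2)\, i\Gamma^{(Vpp)}_{\mu_2}\, u(p_b,\lambda_b)
```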
First we discuss the VVf1 coupling. We start by considering the on-shell process of two real vector particles V fusing to give an f1 meson. The angular momentum analysis of such reactions was made in [31]. The spins of the two vectors can be combined to a total spin S = 0, 1, 2. Then S has to be combined with the orbital angular momentum l to give the spin J = 1 and parity +1 of the f1 state. From Table 8 of [31] we find that there is here only one possible coupling, namely (l, S) = (2, 2). A convenient corresponding coupling Lagrangian is given in (D9) of [27], with M0 ≡ 1 GeV and gVVf1 a dimensionless coupling constant. Uα(x) and Vκ(x) are the fields of the f1 meson and the vector meson V, respectively. For the Levi-Civita symbol we use the normalisation ε0123 = +1. The expression for the VVf1 vertex obtained from (2.6) is given in (2.8) [27]. Here the label "bare" is used for a vertex as derived from (2.6) without a form-factor function. The vertex function (2.8) satisfies the relations following from the structure of the coupling (2.6). For realistic applications we should multiply the 'bare' vertex (2.8) by a phenomenological cutoff function (form factor) FVVf1, which we take in the factorised ansatz (2.10). We make the assumption that F̃V(t) is parametrized as in (2.11), where the cutoff parameter ΛV, taken to be the same for both ρ0 and ω, is a free parameter (a sketch is given after this paragraph). For on-shell V and f1 mesons the form factor is normalised to unity. For the Vpp vertex (2.12) we use vector and tensor couplings, with the tensor-to-vector coupling ratio κV = fVNN/gVNN. We use the values (2.13) for these coupling constants. We give a short discussion of values for the ρpp and ωpp coupling constants found in the literature. For the ρNN coupling constants one finds gρpp = 2.63 − 3.36 [34,35], and κρ is expected to be κρ = 6.1 ± 0.2 [36]. There is a considerable uncertainty in the ωNN coupling constants. From Table 1 of [33] we see a broad range of values: gωpp ≃ 10 to 21 and κω ≃ −0.16 to +0.14. For example, in [36] it was estimated gωpp = 20.86 ± 0.25 and κω = −0.16 ± 0.01; see Table 3 of [36]. Within the (full) Bonn potential [37] values of gωpp = 15.85 and κω = 0 are required for a best fit to NN scattering data. In [38] it was shown that such a fairly large value of gωpp must be considered as an effective coupling strength rather than as the intrinsic ωNN coupling constant. They found that the additional repulsion provided by the correlated πρ exchange to the NN interaction allows gωNN to be reduced by about a factor 2, leading to an "intrinsic" ωNN coupling constant which is more in line with the value one would obtain from SU(3) flavour symmetry considerations, gωNN = 3 gρNN cos(∆θV) [35], where ∆θV ≃ 3.7° is the deviation from the ideal ω-φ mixing angle. Values of gωpp = 7.0 − 10.5 and κω ≃ 0 were found to describe consistently the πN scattering and π photoproduction processes [39]. The values gωpp = 9.0 and κω = −0.5 have been used in the analysis of the pp → ppω reaction to reproduce the shape of the measured ω angular distribution; see Fig. 7 of [40]. It was shown [41] that the energy dependence of the total cross section and the angular distribution for pp → ppω can be described rather reasonably even with a vanishing κω (gωpp = 9.0, κω = 0); see Fig. 4 of [41]. Finally we note that in [28] the couplings of the ωR and ρR reggeons to the proton were estimated from high-energy scattering data and found to be gρRpp = 2.02 and gωRpp = 8.65 (2.14); see (3.60) and (3.62) of [28].
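For orientation, the factorised ansatz and a per-leg form factor with the properties just described (normalised to unity on the V mass shell, usable for both spacelike and timelike momentum transfers) can be written as below; the explicit shape of F̃V is one common choice and should be treated as an assumption here.

```latex
% Factorised VVf_1 form factor; the explicit shape of \tilde{F}_V below is an
% assumed common choice satisfying \tilde{F}_V(m_V^2) = 1 for any sign of q^2.
F_{VVf_1}(q_1^2, q_2^2, p^2) = \tilde{F}_V(q_1^2)\,\tilde{F}_V(q_2^2)\,F_{f_1}(p^2)\,,
\qquad
\tilde{F}_V(q^2) = \frac{\Lambda_V^4}{\Lambda_V^4 + \left(q^2 - m_V^2\right)^2}
```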
Taking all this information into account, we think that our choice (2.13) for the coupling constants is quite reasonable. The form factor FVNN(t) in (2.12), describing the t-dependence of the V-(anti)proton coupling, is parametrized in monopole form (2.15), where ΛVNN > mV and t < 0. Note that the form factor FVNN(t) is normalized to unity at t = m²V. On the other hand, the reggeon-proton couplings (2.14) are defined for t = 0. Since FVNN(0) < 1 we expect that gρRpp < gρpp and gωRpp < gωpp, which is indeed the case; see (2.13) and (2.14).
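A monopole form with the stated properties (normalised to unity at t = m²V, with ΛVNN > mV and t < 0) is the standard choice; a sketch:

```latex
% Monopole V-nucleon form factor, normalised at the on-shell point t = m_V^2.
F_{VNN}(t) = \frac{\Lambda_{VNN}^2 - m_V^2}{\Lambda_{VNN}^2 - t}\,,
\qquad F_{VNN}(m_V^2) = 1\,, \quad \Lambda_{VNN} > m_V\,, \quad t < 0
```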
The coupling constant gVVf1 and the cutoff parameters ΛV and ΛVNN should be adjusted to experimental data. Examples are discussed in Appendices B and C. There, the form factor FVVf1 (2.10) is used with Ff1(m²f1) = 1 and for different kinematic conditions of F̃V(q²) (2.11), that is, for spacelike (q² < 0) and timelike (q² > 0) momentum transfers of the V meson, and also at q² = 0. In Appendix B we discuss the radiative decays of the f1(1285) meson in two ways, f1 → ργ (B1) and f1 → (ρ0 → π+π−)γ (B2), where we have Fρρf1(m²ρ, 0, m²f1) and Fρρf1(q² > 0, 0, m²f1), respectively. In Table III in Appendix B we collect our results for gρρf1 extracted from the decay rate of f1 → ρ0γ using the VMD ansatz. The process f1 → ρ0ρ0 → 2π+2π−, where both ρ0 mesons carry timelike momentum transfers, will be studied in detail in [32]. For the γp → f1 p reaction, discussed in Appendix C, we have Fρρf1(0, q² < 0, m²f1). This is closer to the VV → f1 fusion mechanisms shown in Fig. 1, where both V mesons have spacelike momentum transfers. From a comparison of the model to the f1-meson angular distributions of the CLAS experimental data [16] we shall extract the cutoff parameter ΛVNN in the V-proton vertex (2.15); see (C7)-(C12) and Fig. 14 in Appendix C.
In the following we shall use the VVf1 coupling (2.6) and the corresponding vertex (2.8)-(2.11) for our VV → f1 fusion processes of Fig. 1, both for normal off-shell vector mesons V and for reggeized vector mesons VR.
The standard form of the vector-meson propagator, Δ(V)µν in (2.16), is given e.g. in (3.2) of [28]. For higher values of s1 and s2 we must take into account reggeization. We do this, following (3.21) and (3.24) of [42], by making in (2.16) the replacements (2.17) for i = 1 or 2, where sthr is the lowest value of si possible here; see (2.18). We use the standard linear form for the vector-meson Regge trajectories (2.19) (cf., e.g., [43]). Our reggeized vector-meson propagator is denoted by Δ̃(V)µν(si, ti); a sketch of these ingredients is given after this paragraph. In the following we shall also consider the CEP of the f1(1285) with subsequent decay into ρ0γ, reaction (2.22), with p34 = p3 + p4. Here p3, p4 and λ3 = 0, ±1, λ4 = ±1 denote the four-momenta and helicities of the ρ0 meson and the photon, respectively.
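The standard ingredients referred to here take, up to conventions, the following form; the trajectory intercept and slope quoted below are typical textbook values, and the reggeization factor is sketched from the prescription of [42], so the precise conventions of (2.16)-(2.21) may differ.

```latex
% Vector-meson propagator, linear Regge trajectory, and reggeization factor;
% a sketch with typical parameter values (intercept ~0.5, slope ~0.9 GeV^-2).
i\Delta^{(V)}_{\mu\nu}(k) \propto
  \frac{-g_{\mu\nu} + k_\mu k_\nu / k^2}{k^2 - m_V^2}\,, \qquad
\alpha_V(t) = \alpha_V(0) + \alpha_V'\, t\,, \quad
\alpha_V(0) \simeq 0.5\,, \ \alpha_V' \simeq 0.9~\mathrm{GeV}^{-2},
\\[4pt]
i\tilde{\Delta}^{(V)}_{\mu\nu}(s_i, t_i) =
  i\Delta^{(V)}_{\mu\nu}(t_i)
  \left(\frac{s_i}{s_{\mathrm{thr}}}\right)^{\alpha_V(t_i) - 1}
```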
The amplitude for the reaction (2.22) can be written as in (2.4) but with the replacements (2.23). Here ε(ρ) and ε(γ) are the polarisation vectors of the ρ0 and the γ, respectively, and the f1 propagator entering here is its transverse part, which has a structure analogous to (2.16). The factor e/γρ comes from the ρ-γ transition vertex; see (3.23)-(3.25) of [28].
In practical calculations we introduce in the ρρf1 vertex the form factor Ff1(p²34) [see (2.10)] for the virtual f1 meson. In (2.23) we shall use a simple Breit-Wigner ansatz (2.25) for the f1-meson propagator (see the sketch following this paragraph). The mass and total width of the f1 meson from [44] are given in (2.26) and (2.27), respectively. We note that the mass of 1281.0 ± 0.8 MeV measured in the CLAS experiment [16] is in very good agreement with the PDG average value (2.26). The total width measured by the CLAS Collaboration (2.28) is, however, smaller than the value (2.27). The corresponding amplitudes for proton-antiproton collisions are as in (2.4) but with the appropriate replacement. The main decay modes of the f1(1285) are [44] 4π, ηππ, KKπ, and ρ0γ. If the f1 is to be identified and measured in CEP in any one of these channels one will have to consider background processes giving the same final state, for instance, pp4π. Therefore, in this section we discuss two background reactions: CEP of 4π via ρ0ρ0 in the continuum and CEP of ρ0γ in the continuum.
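The "simple Breit-Wigner ansatz" presumably amounts, up to the spin-1 tensor structure, to a fixed-width denominator of the following form; this is a minimal sketch, not necessarily the exact expression (2.25).

```latex
% Minimal fixed-width Breit-Wigner denominator for the virtual f_1;
% the full propagator carries in addition the spin-1 tensor structure.
\Delta^{(f_1)}(p_{34}^2) = \frac{1}{p_{34}^2 - m_{f_1}^2 + i\, m_{f_1} \Gamma_{f_1}}
```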
First we discuss the exclusive production of ρ0ρ0 in proton-proton collisions, reaction (2.32).
FIG. 2. Diagrams for exclusive continuum ρ0ρ0 production in proton-proton collisions. There are also the diagrams with p3 ↔ p4.
There can also be ρρ fusion with the exchange of an intermediate σ ≡ f0(500) meson and σσ fusion with ρ0 exchange. From the Bonn potential [37,45] we get for the squared coupling constant g²σpp/4π ≃ 6.0, which is smaller than g²πpp/4π ≃ 14.0. Moreover, we can expect that |gσρρ| ≪ |gρωπ|. Due to large form-factor uncertainties and the poorly known σρρ coupling we neglect these contributions in our present study. Other contributions may be due to the exchanges of the f2(1270) meson (f2-ρ0-f2 or ρ0-f2-ρ0) and the neutral a2(1320) meson (a2-ω-a2 or ω-a2-ω). For the f2ρρ and a2ωρ couplings one could use the rather well-known couplings from (3.55), (3.56), (7.29)-(7.34) and (3.57), (3.58), (7.38)-(7.43) of [28], respectively. Since the f2pp coupling, taking it equal to the f2Rpp one from (3.49), (3.50) of [28], is rather large, the f2-ρ0-f2 fusion may give a large background contribution. Since gωRpp > ga2Rpp, see (3.52) and (3.60) of [28], and the a2ωρ couplings have values similar to the f2ρρ couplings, the ω-a2-ω contribution may also be potentially important. However, we expect that the tensor-meson propagator(s) will reduce the cross section for these processes.
At higher energies the pomeron plus f 2 reggeon (P + f 2R ) fusion [(P + f 2R )-ρ 0 -(P + f 2R )] and ρ 0 fusion with P + f 2R exchange [ρ 0 -(P + f 2R )-ρ 0 ] will be important, probably the dominant processes; see [46]. We expect that these processes will give only a small contribution in the threshold region, of interest for us here. Therefore, we shall neglect also these mechanisms in the following.
With the assumption, motivated above, that the diagrams of Fig. 2 represent the dominant reaction mechanisms in the threshold region, the continuum amplitude for the reaction (2.32) can be written as the sum (2.33) of the ωω- and ππ-fusion amplitudes, which are given by (2.34) and (2.35). The εµ's denote the polarisation vectors of the outgoing ρ0 mesons. The standard pion propagator is used in the calculations. The reggeized vector-meson propagator, denoted by Δ̃(V), is obtained according to (2.17) and (2.18) with the relevant sij, sthr, and ti, the four-momentum transfer squared, in the pρ0 and ρ0ρ0 subsystems.
With k′, µ and k, ν the four-momentum and vector index of the outgoing ρ0 and of the incoming ω meson, respectively, and k′ − k the four-momentum of the pion, the ρωπ vertex, including a form factor, reads as in (2.36), where gρωπ ≃ ±10 [33,35,41]. We note that the value gρωπ = +10 has been extracted in [35] from the measured ω → π0γ radiative decay rate, and the positive sign from the analysis of the pion photoproduction reaction in conjunction with the VMD assumption. In [41] it was found that the data for the reaction pp → ppω strongly favour a negative sign of the coupling constant gρωπ. In our case, the sign of gρωπ does not matter as this coupling occurs twice in the amplitudes (2.34) and (2.35). We use a factorized ansatz (2.37) for the form factor. The form factor (2.37) should be normalised as F(0, m²ω, m²π) = 1, consistent with the kinematics at which the coupling constant gρωπ is determined. This is the ω → π0γ reaction, where the ω and π0 are on shell and the virtual ρ0 which gives the γ has mass zero. Following [35] we take the parametrization (2.38). We assume that the cutoff parameters are equal to a common value ΛM ≡ ΛMω = ΛMρ = ΛMπ. Following [35] we take ΛM = 1.45 GeV. Smaller values of the cutoff parameters, ΛMρ = ΛMπ = 1.0 GeV, were used in [40] (see Table I there). Also a dipole form factor F̃V(t) in (2.38) was considered; see [47,48] and Table II of [40].
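Schematically, the factorized ansatz for the ρωπ vertex form factor, with the common cutoff and the normalisation stated above, can be sketched as follows; the per-leg monopole shape is an assumption, not the original (2.37).

```latex
% Factorized rho-omega-pi form factor with a common cutoff Lambda_M;
% per-leg shapes are assumed, with the overall normalisation
% F(0, m_omega^2, m_pi^2) = 1 fixed by the omega -> pi0 gamma kinematics.
F(k'^2, k^2, q_\pi^2) =
  \tilde{F}_\rho(k'^2)\,\tilde{F}_\omega(k^2)\,\tilde{F}_\pi(q_\pi^2)\,,
\qquad
\Lambda_M \equiv \Lambda_{M\omega} = \Lambda_{M\rho} = \Lambda_{M\pi} = 1.45~\mathrm{GeV}
```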
Likewise, the monopole form factor (2.15) in the V pp vertex (2.12) is assumed with the cutoff parameter Λ V NN . We take Λ V NN = 0.9 GeV and 1.35 GeV in accordance with (C10) and (C7), respectively.
Taking into account the statistical factor 1/2 due to the identity of the two ρ0 mesons in (2.32), we get the amplitude squared for the ωω- and ππ-fusion contributions. Using (2.30) and (2.42) we obtain the corresponding ωω-fusion amplitude for proton-antiproton collisions. As will be discussed in the following, it may be rather difficult to extract the f1(1285) signal from the π+π−π+π− channel. Another decay channel worth considering is ρ0γ.
Therefore, we now discuss the exclusive production of the ρ0γ continuum in proton-proton collisions, reaction (2.45), with p4 and λ4 = ±1 the four-momentum and helicity of the photon.
In order to calculate the amplitude for the reaction (2.45) we use the standard VMD model with the γV couplings as given in (3.23)-(3.25) of [28]. We shall consider the diagrams shown in Fig. 3; the resulting amplitude is the sum of the corresponding contributions. We could also have πη- and πσ-fusion contributions. For these we have to replace in the left (right) diagram in Fig. 3(c) the lower (upper) particles (π0, ρ0) by (η, ω) or (σ, ω), respectively. Discussing first πη fusion, we note that the ηpp and ωωη couplings are smaller than those of π0pp and ρωπ [33]. In addition, the η exchange is suppressed relative to the π0 exchange because of the heavier mass occurring in the propagator. Another mechanism is the πσ fusion involving the σpp and σωω vertices. However, here gσωω ∼ 0.5 [33] is extremely small. Moreover, the ω → γ transition coupling is much smaller than the ρ → γ one; see (A5). Therefore, we neglect the πη and πσ contributions in our considerations. Thus, we are left with the (ω + ρ0)-π0-ω, ω-π0-(ω + ρ0), and π0-ω-π0 contributions, which we shall treat in a way similar to (2.34) and (2.35). As an example, the ωω-fusion amplitude for pp → ppρ0γ can be written as in (2.34) with the corresponding replacement. In the case of the diagrams with the ω → γ transition, the outgoing ω has four-momentum squared p² = 0. Since nothing is known about the form factor at the ρωπ vertex where both the π0 and ρ0 are off their mass shell, we assume in (2.36) the form factor (2.37) with F(m²ρ, 0, m²π) = 1, which is consistent with (2.38) and (2.39). The ργ-continuum processes in proton-antiproton collisions can be treated in a completely analogous way to those in proton-proton collisions, but with the appropriate replacements given by (2.30) and (2.42).
III. NUMERICAL RESULTS
We start by showing the integrated cross section for the exclusive reaction pp → pp f1(1285) as a function of the collision energy √s from threshold to 8 GeV. Note that due to (2.31) the cross sections and distributions for the VV-fusion mechanism are equal for pp and p̄p scattering at the same kinematical values.
In Fig. 4 we show results for the VV-fusion contributions (V = ρ, ω) for the different parameters given by (C7), (C9) and (C10) in Appendix C. We assume gωωf1 = gρρf1 ≡ gVVf1; see (A9). The cross section first rises from the threshold √s_thr = 2mp + mf1 up to √s ≈ 5 GeV (the PANDA energy range), where it starts to decrease towards higher energies. The region of fast growth of the cross section is related to the fast opening of the phase space, while the reggeization is responsible for the decreasing part. Without the reggeization the cross section would continue to grow. The reggeization, calculated according to (2.17)-(2.19), reduces the cross section by a factor of 1.8 already at the HADES c.m. energy √s = 3.46 GeV. For comparison we also show the high-energy contribution of the PP → f1(1285) fusion (see the red dashed line) with parameters fixed in [27]; see Eq. (3.7) there.
At near-threshold energies one should consider final state interactions (FSI) between the two produced protons; see e.g. [33,48]. But the effect is sizeable only for extremely small excess energies of tens of MeV: Q exc = √ s − √ s thr . In our case, we have Q exc > 300 MeV and this FSI effect can be neglected.
We remind the reader that our calculation of the VV-fusion processes should only be applied at energies √s ≲ 8 GeV. In the intermediate energy range also other processes like f2R f2R fusion must be considered; see the discussion in Appendix D of [27].
The salient feature of the results shown in Fig. 4 is the high sensitivity of the VV-fusion cross section to the different sets of parameters. In our procedure of extracting the coupling constant gVVf1 and the form-factor cutoff parameters from the CLAS data [see Appendices B and C] the dominant sensitivity is on gVVf1, not on the form factors. Also the form of reggeization used in our model, according to (2.17)-(2.21), affects the size of the cross section. With the parameter values of (C10) we get the cross sections (3.1) and (3.2); with the parameter values of (C7) we get (3.3) and (3.4). As mentioned above, the different numbers in (3.1) and (3.2) compared to (3.3) and (3.4) reflect mainly the different couplings gVVf1. Indeed, from (3.3) and (3.1) we get for the cross-section ratio 3.8, from (3.4) and (3.2) we get 5.0, and from (C7) and (C10) we get for the ratio of the coupling constants squared 5.6, not far from the two numbers above.
FIG. 4. Integrated cross sections for the VV-fusion contributions. We show also the pomeron-pomeron fusion mechanism (red dashed line). In the right panel, the solid line is for the parameters of (C10) and the reggeized propagators Δ̃(V).
In Fig. 5 we show the distributions in the four-momentum transfers squared for √s = 3.46 GeV (HADES) and 5.0 GeV (PANDA). One can observe that dσ/dt decreases rapidly at forward scattering, |t| → |t|min, where |t|min ≃ 0.3 GeV² at √s = 3.46 GeV. At near-threshold energies small values of |t1| and |t2| are not accessible kinematically. The maximum of dσ/dt appears at −t1,2 ≃ 0.65 GeV² for the parameter values of (C10) and at −t1,2 ≃ 0.77 GeV² for those of (C11). The close-to-threshold production of the f1 meson therefore probes the corresponding form factors, (2.10), (2.11) and (2.15), at relatively large values of |t1| and |t2|, far from the on-mass-shell point t1,2 = m²V where they were normalised. Thus, the VV-fusion cross section is very sensitive to the choice of the form factors, and the HADES and PANDA experiments have a good opportunity to study the physics of large four-momentum transfers squared. In Fig. 6 we present the contributions of the ωω- and ρρ-fusion processes separately and their coherent sum (total). The interference term is also shown (see the green solid line). Both processes play roughly similar roles. At large values of |t1| and |t2|, in spite of gρpp < gωpp (2.13), the spin-flip term of the ρ0-proton coupling is important. For √s = 5 GeV the ωω-fusion contribution is the dominant process for |t1,2| ≲ 0.5 GeV². There one can also see a large constructive interference effect.
In Figs. 7 and 8 we show several differential distributions for the reaction pp → pp f1(1285) at √s = 3.46 GeV, relevant for the HADES experiment, and for the reaction p̄p → p̄p f1(1285) at √s = 5.0 GeV, relevant for the PANDA experiment, respectively. We show the distributions in the transverse momentum of the f1(1285) meson, in xF,M, the Feynman variable of the meson, in cos θM, where θM is the angle between k and pa in the c.m. frame, and in φpp, the azimuthal angle between the transverse momentum vectors pt,1, pt,2 of the outgoing nucleons in the c.m. frame. We predict a strong preference for the outgoing nucleons to be produced with their transverse momenta back-to-back (φpp ≈ π). The distributions in cos θM for the energies √s = 3.46 GeV and √s = 5.0 GeV have different shapes. This is explained in Fig. 9. One can observe from Figs. 6 and 9 that the ωω- and ρρ-fusion processes have different kinematic dependences. With increasing energy √s the averages of |t1| and |t2| decrease (damping by the form factors), hence the ωω contribution becomes more important. Now we turn to the pp → pp(f1(1285) → ρ0γ) reaction and the discussion of background processes.
In Fig. 10 we compare the f1(1285) → ρ0γ signal, calculated with the CLAS value of the total width (2.28), with the continuum contributions. For the set of parameters (C10) the VV-continuum contribution, due to the small value of ΛVNN, turns out to be negligible. The situation changes when we use the parameter set of (C7). But still the ππ-continuum contribution is larger than the VV-continuum contribution. In both cases the f1(1285) resonance is clearly visible, even without the reggeization effects in the continuum processes. This result makes us rather optimistic that an experimental study of the f1 in the ρ0γ decay channel should be possible.
In our calculations we find practically no interference effects between the ππ- and VV-fusion contributions in the continuum. For our exploratory study we have neglected interference effects between the background ρ0γ and the signal f1 → ρ0γ processes. We have also neglected the background processes due to bremsstrahlung of γ and ρ0 from the nucleon lines. For an analysis of real data these effects should be included or at least estimated. But this goes beyond the scope of our present paper. Now we wish to discuss the integrated cross sections for the reactions pp → pp(f1 → ρ0γ) and p̄p → p̄p(f1 → ρ0γ), treated with exact 2 → 4 kinematics. In our calculation we took into account the reggeization effects according to (2.17)-(2.21) and the replacements given in (2.23). We consider two sets of parameters, (C10) and (C7), extracted from the CLAS data. With the parameter values of (C10) we get for √s = 3.46 GeV: σ(pp → pp(f1 → ρ0γ)) = 1.26 nb. Thus we predict a sizeable cross section for the exclusive axial-vector f1(1285) production compared to the continuum processes considered in the ρ0γ channel.
In Table I we have collected the integrated cross sections in nb for the continuum processes considered. These numbers were obtained for gρωπ = 10.0, ΛM = 1.45 GeV in (2.36)-(2.39), ΛVNN = 1.35 GeV in (2.15), and ΛπNN = 1.0 GeV in (2.40). The reggeization effects were included. We can observe very small numbers for the production of ρ0ρ0 at √s = 3.46 GeV, which is caused by the threshold behaviour of the process (the assumption of a fixed ρ0-meson mass of mρ = 0.775 GeV in the calculation) and the limited phase space.
TABLE I. The integrated cross sections in nb for the continuum processes in proton-(anti)proton collisions. We show results for the VV- and ππ-fusion contributions separately and for their coherent sum ("total").
Now we compare the cross section for the ρρ continuum from Table I to the cross section for the f1(1285) signal, estimated as the product of σ(pp → pp f1) from Fig. 4 and the branching ratio from [44]. These roughly estimated results show that, for the cases treated here, the background processes considered in the ρ0ρ0 channel (see Table I) can be important only for √s = 5.0 GeV in the pp case.
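As a back-of-the-envelope illustration of this signal estimate, the sketch below multiplies an assumed production cross section by a PDG-like branching ratio. All inputs are illustrative assumptions: the 150 nb value is the estimate quoted later for the C7 parameter set, the branching ratio is a PDG-like value, and the continuum number is a pure placeholder, not a value from Table I.

```python
# Rough signal-vs-background estimate for pp -> pp f1(1285) -> pp 2pi+ 2pi-.
# All inputs are illustrative assumptions, flagged below.

sigma_f1_nb = 150.0  # assumed sigma(pp -> pp f1) at sqrt(s) = 3.46 GeV (C7-based estimate)
br_4pi = 0.11        # assumed PDG-like BR(f1(1285) -> 2pi+ 2pi-)

sigma_signal_nb = sigma_f1_nb * br_4pi
print(f"estimated signal: {sigma_signal_nb:.1f} nb")  # ~16 nb

sigma_continuum_nb = 5.0  # placeholder continuum cross section, NOT from Table I
print(f"signal/continuum ~ {sigma_signal_nb / sigma_continuum_nb:.1f}")
```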
The reaction pp → ppρ 0 ρ 0 is treated technically as a 2 → 4 process. A better approach would be to consider the pp → ppπ + π − π + π − reaction, as a 2 → 6 process. This is however beyond the scope of the present study. In addition, as will be discussed in the following, the background for the pp → ppπ + π − π + π − reaction measured long ago by the bubble chamber experiment [50] was found to be much larger than the result for the continuum terms ("total") presented in Table I.
IV. HADES AND PANDA EXPERIMENTS
HADES (High Acceptance Dielectron Spectrometer) is a magnetic spectrometer located at the SIS18 accelerator of the Facility for Antiproton and Ion Research (FAIR) in Darmstadt (Germany) [51]. It is a versatile detector allowing the measurement of charged hadrons (pions, kaons and protons) and leptons (electrons and positrons) originating from various reactions on fixed proton or nuclear targets in the energy regime of a few A·GeV. The spectrometer covers the polar-angle region 18° < θ < 80° and features almost complete azimuthal coverage with respect to the beam axis. The detector has recently been upgraded with a large-area electromagnetic calorimeter and a forward detector (for a recent review see [52]) extending the coverage to the very forward region (0.5° < θ < 7.5°). These upgrades allow the measurement of hadron decays involving photons and significantly improve the acceptance for protons and hyperons, which at these energies are emitted to a large extent in forward directions.
The spectrometer is specialized for electron-positron pair detection, but it also provides excellent hadron (pion, kaon, proton) identification capabilities. It has a low material budget and consequently features an excellent invariant-mass resolution for electron-positron pairs of ∆M/M ≈ 2.5 % in the ρ/ω/φ vector-meson mass region.
The PANDA (antiProton ANnihilations at DArmstadt) detector is currently under construction at FAIR. PANDA will utilise a beam of antiprotons, provided by the High Energy Storage Ring (HESR), and with its almost full solid-angle coverage will be a detector for precise measurements in hadron physics. HESR will deliver antiprotons with momenta from 1.5 GeV/c up to 15 GeV/c (which corresponds to √s ≃ 2.25 − 5.47 GeV) impinging on a cluster-jet or pellet proton target placed in PANDA. The scientific programme of PANDA is very broad and includes charmonium and hyperon spectroscopy, elastic proton form-factor measurements, searches for exotic states, and studies of in-medium hadron properties (for a recent review of stage-one experiments see [53]).
The luminosities of the two experiments are comparable, at the level of L = 10^31 cm−2 s−1 (after the first years of operation and the completion of the detector, PANDA will increase this by one order of magnitude).
For the count-rate estimates and signal-to-background considerations for f1 meson production we will use the properties of the HADES detector. This presents a "worst case" scenario. As was shown in the previous sections, the cross sections for the meson production in proton-proton interactions are about a factor of 10 lower than for the proton-antiproton case. Furthermore, the PANDA detector also features a larger acceptance for multi-particle final states and thus presents better opportunities for the studies discussed in this work. On the other hand, HADES will measure proton-proton reactions at the c.m. energy √s = 3.46 GeV (proton beam energy Ekin = 4.5 GeV) already in 2021. Hence it will provide the first valuable experimental results to verify our model predictions.
A. Simulation for 2π+2π− and π+π−η decay channels
We have considered production of the f1(1285) meson in proton-proton reactions and its decay into final states with four charged pions reconstructed in the HADES detector. For the f1 production cross section we have assumed σf1 = 150 nb [an estimate using the C7 parameter set; see (C7) and (3.3)].
Two reaction channels were simulated: pp → pp f1(1285) → pp 2π+2π− and pp → pp f1(1285) → pp π+π−η. In the second case the η meson is reconstructed via the η → π+π−π0 decay channel; hence the final state also contains four charged pions. The neutral pion from the η decay can be reconstructed via the missing-mass technique or via the two-photon decay. However, the latter case has a smaller total reconstruction efficiency (see below for details). The f1(1285) meson decay into four charged pions has been simulated using the PLUTO event generator [54][55][56]. For the meson reconstruction, four pions from the decay and at least one final-state proton have been demanded in the analysis to establish exclusive channel identification. The HADES acceptance and reconstruction efficiencies for protons and pions have been parametrized as functions of the polar and azimuthal angles and the momentum. Furthermore, a momentum resolution of the spectrometer of ∆p/p = 2 % for charged tracks has been taken into account in the simulation, as described in [51].
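As an illustration of the resolution treatment, a toy Gaussian momentum smearing with ∆p/p = 2 % can be sketched as below. This is only a stand-in, not the actual HADES acceptance and efficiency parametrization (which depends on the polar and azimuthal angles and on the momentum).

```python
# Toy stand-in for the detector response used in the feasibility study:
# relative Gaussian momentum smearing with dp/p = 2% for charged tracks.
# NOT the actual HADES parametrization; purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)

def smear_momentum(p, dp_over_p=0.02):
    """Apply relative Gaussian smearing to an array of track momenta [GeV/c]."""
    p = np.asarray(p, dtype=float)
    return p * rng.normal(loc=1.0, scale=dp_over_p, size=p.shape)

# Example: smear 10^5 pion momenta of 0.5 GeV/c
p_true = np.full(100_000, 0.5)
p_reco = smear_momentum(p_true)
print(f"mean = {p_reco.mean():.4f} GeV/c, sigma = {p_reco.std():.4f} GeV/c")
```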
For the pp → pp2π+2π− reaction a total cross section σback = (227 ± 23) µb has been measured; see Table I of [50]. This reaction was measured in [50] at a slightly higher energy, Ekin = 4.64 GeV (corresponding to a proton beam momentum P = 5.5 GeV/c or √s ≃ 3.5 GeV). We tried to understand the large background in the π+π−π+π− channel. We analysed a few contributions due to double nucleon excitations, considering the processes pp → N(1440)N(1440), pp → N(1440)N(1535), and pp → N(1535)N(1535) [reactions (4.3)-(4.5)]. Both resonances have a considerable branching fraction to the Nππ channel, and the N(1535) to the Nη channel; see PDG [44]. In our estimation we used effective Lagrangians and relevant parameters from [57]. These parameters were found in [57] to describe the total cross section for the pp → pnπ+ reaction measured in the close-to-threshold region. The coupling constants and the cutoff parameters in the monopole form factors used in the calculation are given in (4.6). Similar values were also taken in [58] for the pn → dφ reaction. To describe the total cross sections of the pN → NNππ and p̄N → N̄Nππ reactions measured in the near-threshold region the cutoff parameters ΛN*NM = 1.0 GeV were assumed in [59,60]. Therefore, our estimates for the reactions (4.3)-(4.5) with the parameters given in (4.6) should be treated rather as an upper limit.
There is a question about the role of the η ′ exchange in the reaction (4.5). For example, in [61] sub-threshold resonance-dominance of the N(1535) was assumed with g 2 N(1535)Nη ′ /4π = 1.1 to describe both the πN → η ′ N and NN → NNη ′ cross section data. However, it was shown in [62] that the N(1535) contribution is not necessary in these processes (see Figs. 9-14 of [62]) or, at least, its significant role (significant coupling strength of N(1535) → η ′ N used in [61,63]) was precluded.
For √s = 3.5 GeV we get a cross section for the pp → N(1440)N(1440) reaction of the order of 0.8 mb. With the input from [64][65][66], g²N(1440)Nσ/4π = 1.33 and ΛN(1440)Nσ = 1.7 GeV, we get an even smaller cross section, by about 30 %. For the pp → N(1440)N(1535) reaction we get a cross section of 10 µb, and for the pp → N(1535)N(1535) reaction about 7 µb. We conclude, therefore, that the double excitation of N(1440) resonances via σ-meson exchange is probably the dominant mechanism of this type in the pp → pp2π+2π− reaction. This is due to the large N(1440)Nσ coupling. Taking BR(N(1440) → pπ+π−) = 0.1 we get σ(pp → N*N* → pp2π+2π−) ≃ 80 µb. This background is much higher than that for the ωω- and ππ-fusion mechanisms considered in Sec. III; see Table I.
So far we have considered the 5-pion background with all components (1, 2, 3) listed in Table II. The contribution (1) can, in principle, be eliminated by using a side-band subtraction method. We wish now to discuss separately the contribution (2), in the π+π−η mesonic state, to prove the feasibility of the f1(1285) measurement. In Fig. 12 we make such a comparison. The irreducible background contribution from double excitation of N* resonances has a broader distribution than the VV → f1 signal. With our estimate of the cross section for the pp → pp f1(1285) reaction (see Table II) we expect that the f1(1285) could be observed in the π+π−η (→ π+π−π0) channel.
FIG. 12. Invariant mass distribution of π+π−η observed in the ppπ+π−π+π−π0 final state, corresponding to the measurement of p + p reactions at Ekin = 4.5 GeV (√s = 3.46 GeV) with the HADES apparatus. Here, the two contributions (2) and (4) of Table II were included. The result includes the cut on the η meson mass, 0.54 GeV < Mπ+π−π0 < 0.56 GeV.
V. CONCLUSIONS
In the present paper we have discussed the possibility to observe the f1(1285) in the pp → pp f1(1285) reaction at energies close to the threshold, where the pomeron-pomeron fusion, known to be the dominant mechanism at high energies, is expected to give only a very small contribution. Two different mechanisms have been considered: (a) ωω → f1(1285) fusion and (b) ρ0ρ0 → f1(1285) fusion. We have estimated the cross section for √s = 3.46 GeV, for which a measurement will soon be possible at HADES@GSI. We have presented our method for the derivation of the VV → f1(1285) vertex for V = ρ0, ω. The coupling constant gρρf1 has been extracted from the decay rate of f1 → ρ0γ using the VMD ansatz. From naive quark-model and VMD relations we have obtained equality of the gρρf1 and gωωf1 coupling constants; see Appendix A. In reality this relation can be expected to hold at the 20 % level. Then, we have fixed the cutoff parameters in the form factors and the corresponding coupling constants by fits to the CLAS experimental data for the process γp → f1(1285)p. There, the ρ- and ω-exchange contributions play a crucial role in reproducing the forward-peaked angular distributions, especially at the higher energies, Wγp > 2.55 GeV.
The corresponding ρρ and ωω fusion amplitudes have been written out explicitly. The two amplitudes have been used to estimate the total and differential cross sections for c.m. energy √ s = 3.46 GeV. The energy dependence close to the threshold has been discussed. The distributions in t (see Fig. 6) and the distributions in cos θ M (see Fig. 9) seem particularly interesting. The shape of these distributions gives information on the role of the individual fusion processes.
We have discussed the possibility of a measurement of the pp → pp f1(1285) reaction by the HADES collaboration at GSI. For this, the π+π−π+π−, ρ0γ, and π+π−η channels have been considered. For the four-pion channel we have estimated the background using the cross section from an old bubble-chamber experiment [50]. We have found that the double excitation of N(1440) resonances via σ-meson exchange is probably the dominant mechanism in the pp → pp2π+2π− reaction. The mechanisms considered by us, the π0-ω-π0 and ω-π0-ω exchanges, give much smaller background cross sections. We conclude that it may be difficult to identify the f1(1285) meson in this channel. The ρ0γ channel should be much better suited as far as the signal-to-background ratio is concerned. There, however, the dominant background channel ppπ+π−π0 is of the order of 2 mb [50], and the ρ0 is so broad that it will not provide a sufficient reduction (as is the case for the η decay channel). In our opinion the π+π−η(→ π+π−π0) channel is especially promising. We have performed feasibility studies and estimated that a 30-day measurement with HADES should allow identification of the f1(1285) meson in the ppπ+π−η final state. No simulation of the π+π−η(→ π+π−π0) channel has been done for PANDA energies.
In [68] the f1(1285) decays into a0(980)π0 and f0(980)π0, and isospin breaking, were studied. An interesting proposal was also discussed in [69,70]: to study the anomalous isospin-breaking decay f1(1285) → π+π−π0 in central exclusive production of the f1. There is another important decay channel, KKπ, with a branching fraction of 9% [44], which can be used for f1 meson studies in CEP. See also [71] for a discussion of the KKπ decay and the nature of the f1(1285) meson.
Predictions for the PANDA experiment at FAIR, for the p̄p → p̄p f1(1285) reaction, have also been presented. The possibility to study the underlying reaction mechanisms has been discussed. For the VV → f1(1285) fusion processes at √s = 5.0 GeV we have obtained about 10 times larger cross sections than at √s = 3.46 GeV. Thus we predict a large cross section for the exclusive axial-vector f1(1285) production, compared to the background continuum processes via VV and ππ fusion, in the ρ0γ channel. The ρ0γ channel seems, therefore, also promising for identifying the f1(1285) meson.
To conclude: we have shown that the study of f1(1285) production at HADES and PANDA should be feasible. From such experiments we will learn more about the nature of the f1. For instance, is it a normal qq̄ state or a K̄K* molecule [71,72]? Can it be described in holographic QCD [18]? In particular, we shall learn from f1 CEP at low energies about the ρρf1 and ωωf1 coupling strengths. These in turn are very interesting parameters for the calculations of light-by-light contributions to the anomalous magnetic moment of the muon [10][11][12][13][14][15]. The final aim of studies of f1 CEP in proton-proton collisions should be to have a good understanding of this reaction, both from theory and from experiment, in the near-threshold region, in the intermediate energy region 8 GeV ≲ √s ≲ 30 GeV, and up to the high energies available at the LHC, as discussed in [27].
Appendix A: Quark-model and VMD relations for the VVf1 coupling constants
We assume for the f1(1285) the quark content of a normal qq̄ state (A1). Consider now a radiative decay of the f1. After the emission of the photon by the f1 the quark state should have the structure (A2), weighted with the quark charges eu = 2/3, ed = −1/3, es = −1/3. Therefore, this simple argument suggests for the f1Vγ coupling constants the relation (A3). This is the relation suggested, e.g., in [17]. Now we can combine this with VMD, which allows us to relate gVVf1 and gf1Vγ by the standard Vγ transition vertices; see e.g. (3.23)-(3.25) of [28]. This gives (A4), with e = √(4παem), γρ > 0, and γω > 0. In the naive quark model plus VMD the hadronic light-quark electromagnetic current is written as Jemµ = e (eu ūγµu + ed d̄γµd + es s̄γµs). Assuming m²ρ = m²ω (which is quite good) and m²ρ = m²φ (which is less good) we find from (A6) the "ideal mixing" coupling ratios (A7). From (A4) and (A7), with the "ideal" γV couplings, we obtain (A8). With (A3) plus (A8) we obtain, thus, the simple estimate (A9), gωωf1 = gρρf1, based on naive quark-model relations plus the simplest VMD ansatz.
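A sketch of the counting behind relation (A3), under the ideal-mixing assumption for the light vector mesons; conventions (signs, normalisations) may differ from the original appendix.

```latex
% Ideal mixing: rho^0 = (u\bar{u} - d\bar{d})/\sqrt{2},
%               omega = (u\bar{u} + d\bar{d})/\sqrt{2}.
% Weighting the photon emission by the quark charges suggests
\frac{g_{f_1\rho\gamma}}{g_{f_1\omega\gamma}}
  = \frac{e_u - e_d}{e_u + e_d}
  = \frac{2/3 - (-1/3)}{2/3 + (-1/3)} = 3\,.
% VMD with ideal-mixing gamma-V couplings (gamma_omega ~ 3 gamma_rho)
% then compensates this factor 3, giving g_{\omega\omega f_1} = g_{\rho\rho f_1}.
```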
That is, ideal mixing (A7) gives only an approximation, valid to within 15 %, compared to the experimental value (A11). We can, therefore, expect that also the relation (A9) may be violated in the real world by 15 to 20 %. We emphasize that the arguments presented in this appendix depend crucially on the assumption made in (A1) that the f1(1285) is a normal qq̄ state. The relation (A3) in particular could be quite different if this assumption is violated and the f1(1285) has another structure. In [71,72], for instance, the f1(1285) is described as a K*K̄ molecule, not as a qq̄ state.
Appendix B: Determination of the gρρf1 coupling constant from radiative decays
From the processes f1 → ργ (B1) and f1 → (ρ0 → π+π−)γ (B2) we will estimate the gρρf1 coupling constant and the cutoff parameter ΛV in the form factor Fρρf1 from experiment.
Then, the coupling constant g ρρ f 1 , occurring in Γ (ρρ f 1 ) in the amplitudes above, can be adjusted to the experimental decay width Γ( f 1 (1285) → γρ 0 ). For the 1 → 2 decay process (B1) this is straightforward. For the 1 → 3 decay process (B2) this will be done with the help of a new Monte Carlo generator DECAY [73] designed for a general decay of the 1 → n type.
Unfortunately the partial decay width Γ(f1(1285) → γρ0) appears to be not well known in the literature; see also the discussion in Sec. VII C and Table IV in [16]. The branching fractions quoted by the PDG [44] and by CLAS [16] differ. Using the corresponding values of the total width from the PDG (2.27) and from the CLAS experiment (2.28) we get the partial widths (B7) [PDG] and (B8) [CLAS]. We note that the CLAS result is in agreement with that found in [74], where the decay f1(1285) → ρ0γ was studied in the reaction π−N → π− f1 N. Theoretical estimates based on QCD-inspired models such as the covariant oscillator quark model [75] and the Nambu-Jona-Lasinio model [8], which assume that the f1(1285) has a quark-antiquark nature, suggest (B8) rather than (B7). We hope that future experimental measurements can clarify this issue. In the following we shall use both values, (B7) and (B8), to highlight the problem.
In Table III we collect our results for the two processes (B1) and (B2) obtained from (B7) and (B8). In the calculations we take mρ = 775 MeV. We show results for the cutoff parameter in (2.11) from Λρ = 0.65 GeV to 2 GeV. We expect the upper limit of the ρρf1 coupling constant to be not much larger than |gρρf1| ≃ 20. Otherwise one gets an unrealistically large cutoff parameter ΛVNN in the VNN vertex (see the discussion in Appendix C). It is also interesting to compare our results with those of [72]. In [72] the radiative decays f1(1285) → γV were evaluated with the assumption that the f1(1285) is dynamically generated from the K*K̄ interaction. In this model the partial decay widths depend strongly on the cutoff parameter Λ; for instance, Γ(f1(1285) → γρ0) = 560 keV or 1360 keV for Λ = 1.0 GeV or 2.5 GeV, respectively; see Table I of [72]. Moreover, the ratios R1 (B12) and R2 (B13) were also determined there. The dependence of both ratios on the cutoff parameter is rather weak. In the model of [72] the partial decay width Γ(f1 → γρ0) is much larger than those of the γω and γφ channels due to constructive (destructive) interference of the triangle loop diagrams for the ρ0 (ω and φ) production. Now we consider the decay f1 → ωγ in our approach. We use the formula (B3) with the replacements ρ → ω [γρ → γω (A5), gf1ρρ → gf1ωω, mρ → mω]. In the calculation we take mω = 783 MeV. We assume gωωf1 = gρρf1 (A9) and take gρρf1 corresponding to Λρ = 0.65 GeV and 2.0 GeV from Table III. With Λρ = 0.65 GeV (first line in Table III), we obtain Γ(f1 → γω) = 106.61 keV for |gf1ωω| = 27.37 and Γ(f1 → γω) = 34.90 keV for |gf1ωω| = 15.66. Using the central values of (B7) and (B8) these correspond to the ratios R2 = 12.98 and R2 = 12.99, respectively. With Λρ = 2.0 GeV (fourth line in Table III), we obtain Γ(f1 → γω) = 112.61 keV for |gf1ωω| = 9.27 and Γ(f1 → γω) = 36.81 keV for |gf1ωω| = 5.30. With the central values of (B7) and (B8) we obtain the ratios R2 = 12.31 and R2 = 12.30, respectively. These values for R2 are about 2 times smaller than (B13) estimated in [72].
The recent average for R1 given by the PDG [44] is R1 = 82.4 (+11.4, −23.8). This is about 1 s.d. away from the theoretical result (B12) of [72]. But we have to keep in mind the differences in the width of f1 → γρ0 given by the PDG and CLAS; see (B7) and (B8). There are currently no experimental data available for the f1(1285) → γω decay. Further experiments will hopefully clarify the situation.
Appendix C: Photoproduction of the f 1 (1285) meson and comparison with the CLAS experimental data
Here we discuss the photoproduction of the f1(1285) meson. Using VMD and the gVVf1 coupling constants introduced in (2.8) we have to calculate the diagram shown in Fig. 13. The differential cross section for the reaction γp → f1(1285)p will be compared with the CLAS data [16]. From this we will estimate the form-factor and cutoff parameters of the model.
FIG. 13. Photoproduction of an f1 meson via vector-meson exchanges.
The unpolarized differential cross section for the reaction γp → f1(1285)p is given by the standard 2 → 2 expression. Here we work in the center-of-mass (c.m.) frame, s is the invariant mass squared of the γp system, and q and k are the c.m. three-momenta of the initial photon and the final f1(1285), respectively. Taking the direction of q as the z axis, we denote the polar and azimuthal angles of k by θ and φ.
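In the conventions just described, the standard c.m. formula reads as follows (spin average 1/4 over the photon and proton helicities); this is the textbook form, quoted here since the explicit equation is not reproduced above.

```latex
% Standard 2 -> 2 unpolarized differential cross section in the c.m. frame.
\frac{d\sigma}{d\Omega} =
  \frac{1}{64\pi^2 s}\,\frac{|\mathbf{k}|}{|\mathbf{q}|}\,
  \frac{1}{4}\sum_{\mathrm{spins}}
  \left| \mathcal{M}_{\gamma p \to f_1(1285)\, p} \right|^2
```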
We use the standard kinematic variables. The amplitude for the γp → f1(1285)p reaction via vector-meson exchange includes two terms, the ρ0- and ω-exchange contributions. The generic amplitude, with V = ρ0, ω, for the diagram in Fig. 13 can be written as in (C4), where pb, p2 and λb, λ2 = ±1/2 denote the four-momenta and helicities of the incoming and outgoing protons.
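For completeness, the standard Mandelstam variables for γ(q) p(pb) → f1(k) p(p2) in this notation are:

```latex
% Mandelstam variables for gamma(q) p(p_b) -> f_1(k) p(p_2).
s = W_{\gamma p}^2 = (q + p_b)^2 = (k + p_2)^2\,, \qquad
t = (q - k)^2 = (p_b - p_2)^2
```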
We use the relations for the γ-V couplings (V = ρ0, ω) from (A4) and (A5). For the other building blocks of the amplitude (C4) see (2.8)-(2.21) in Sec. II. We perform the calculation of the total and differential cross sections with the cutoff parameter Λρ and the corresponding VVf1 coupling constant gVVf1 from Table III. We choose the values from the last column (CLAS). For instance, |gρρf1| = 8.49 corresponds to Λρ = 1.0 GeV and |gρρf1| = 20.03 corresponds to Λρ = 0.65 GeV. We assume gωωf1 = gρρf1 ≡ gVVf1; see (A9). For the Vpp coupling constants we take (2.13). For the V-proton form factor FVNN(t) we take the monopole form as in (2.15), with the parameter ΛVNN to be extracted from the CLAS data.
In the bottom right panel of Fig. 14 the individual ρ- and ω-exchange contributions at Wγp = 2.75 GeV are shown. Here we use the parameters given in (C10). The ρ-exchange term is larger than the ω-exchange term due to the larger coupling constants both in the γ → V transition vertex (A5), (A11) and for the tensor coupling in the V-proton vertex (2.12), (2.13). The differential distribution at Wγp = 2.75 GeV peaks at cos θ = 0.7, corresponding to −t = 0.66 GeV². The tensor coupling in the ρ-proton vertex with parameters κρ FρNN(t) plays the most important role there. One can also observe an interference effect between the ρ- and ω-exchange terms.
In Fig. 15 we show the integrated cross sections for the reaction γp → f1(1285)p together with the CLAS data. Results for −0.8 < cos θ < 0.9 are presented. In the calculation we take (C10). In the left panel we show the respective contributions of ρ and ω exchanges and their coherent sum, with the same V-proton coupling parameters as in the bottom right panel of Fig. 14. There, for Wγp ≃ 2.7 GeV, a large interference between the ρ-exchange term and the ω-exchange term can be observed. In the right panel we compare our reggeized model results with those of the model without this effect. We note that the form of reggeization used in our model, calculated according to (2.17)-(2.21), affects both the t-dependence of the V exchanges and the size of the cross section.
FIG. 14. The differential cross sections for the reaction γp → f1(1285)p → ηπ+π−p. Data are taken from Table V of [16]. The vertical error bars are the statistical and systematic uncertainties. Our results are scaled by a factor of 0.35 to account for the branching fraction of f1(1285) → ηπ+π− (C6). We take the Vpp coupling constants from (2.13) and the different values of gVVf1 corresponding to ΛV from the column "CLAS" of Table III. In the bottom right panel we show the individual contributions of ρ and ω exchanges and their coherent sum (total) at Wγp = 2.75 GeV. For the ρ-exchange contribution also the results for only one type of coupling, tensor or vector, in the ρ-proton vertex (2.12) are shown.
FIG. 15. The elastic f1(1285) photoproduction cross section as a function of the center-of-mass energy Wγp. Five data points are obtained by integrating the differential cross sections given in Table V of [16]. The experimental results have been scaled by the branching fraction BR(f1(1285) → ηπ+π−) = 0.35; see (C6). We take the same coupling parameters as in the bottom right panel of Fig. 14. We integrate over −0.8 < cos θ < 0.9. In the left panel the reggeized contributions of ρ and ω exchanges, their coherent sum (total), and the interference term are shown. In the right panel the solid line is the result from the reggeized model; the dotted line indicates the result without the reggeization.
Insights into Urologic Cancer
Collectively, urological malignancies account for a considerable proportion of cancer cases worldwide. Among them, prostate cancer (PCA) is the most frequently diagnosed cancer in men, while bladder cancer (BCA) and renal cancer (RCC) rank among the top 10 most prevalent cancers globally. The high incidence rates pose a significant public health problem. The treatment of urological malignancies often involves complex approaches such as surgery, radiation therapy, chemotherapy, and targeted therapies. However, a better understanding is needed to further enhance the therapeutic management and to improve outcomes for patients. In this editorial, we examine and analyze the key findings from the original articles published in the Special Issue "Insights into Urologic Cancer". These studies contribute to advancing our understanding of urological malignancies and hold significant implications for patient care and outcomes.
Jirásko et al. [1] explored altered profiles of sulfatides and sphingomyelins in plasma, urine, and tissue samples of patients with RCC. By investigating lipidomic changes, this study enhances our understanding of the molecular mechanisms underlying RCC. The findings highlight the potential of lipid profiling as a diagnostic and prognostic tool. Altered lipid profiles may serve as biomarkers for early detection, monitoring of treatment response, and the identification of novel therapeutic targets. This research will pave the way for personalized approaches and precision medicine in RCC management.
Histopathological discrimination of chromophobe RCC and oncocytoma may be challenging due to a similar appearance. Bin Satter et al. [2] developed a "Chromophobe-Oncocytoma Gene Signature" using a single-molecule counting assay and thereby achieved accurate discrimination between chromophobe RCC and oncocytoma. The assay's ability to provide a reliable and precise distinction is a significant breakthrough and may help to improve the classification of these renal tumors.
Metastasis is a major challenge in RCC, often associated with poorer outcomes. Sanders et al. [3] investigated the potential significance of immune cells in the metastatic process. They showed that a higher density of CD103+ cells and a higher ITGAE/CD103 expression were significantly correlated with poor overall survival in clear cell RCC. Understanding the role of tissue resident T-lymphocytes and their correlation with prognosis may lead to the development of novel predictive biomarkers or immunotherapeutic approaches targeting the immune microenvironment in metastatic RCC. This research highlighted the importance of exploring the tumor microenvironment to identify potential prognostic/predictive markers and therapeutic targets.
Tyrosine kinase inhibitors have revolutionized RCC treatment, but resistance remains a challenge. Ding et al. [4] highlighted a potential therapeutic strategy to overcome tyrosine kinase inhibitor resistance and thereby improve patient outcomes. They showed that the circular RNA circDGKD can be targeted to counteract the up-regulation of estrogen receptor β and vasculogenic mimicry in RCC, particularly in response to tyrosine kinase inhibitors. This improved survival in an orthotopic mouse model employing sunitinib treatment.
DJ-1 is involved in various cellular processes and has been implicated in cancer development and progression. Hirano et al. [5] studied DJ-1 expression in the serum of patients with BCA and control subjects. They observed higher DJ-1 levels in BCA patients using a simple ELISA test, indicating that DJ-1 could serve as a diagnostic biomarker for BCA. The immunohistochemical detection of DJ-1 in the cytoplasm was associated with poor prognosis.
The study by Gutierrez et al. [6] sheds light on the unique fluorescence patterns associated with urothelial tumor cells. Peri-membrane fluorescence patterns may be assessed to improve urine cytology for early diagnosis and monitoring of bladder cancer. The authors showed that the plasma membrane plays a major role in the maintenance of peri-membrane fluorescence and that stress decreases peri-membrane fluorescence.
The introduction of the antibody-drug conjugates enfortumab vedotin targeting Nectin-4 revolutionized the treatment of metastatic BCA. However, BCA exhibits diverse histological subtypes with varying prognoses, and the prevalence of Nectin-4 expression remained unclear. Rodler et al. [7] focused on the expression of Nectin-4 in variant histologies of BCA and its prognostic significance. They demonstrated that Nectin-4 expression is weak in sarcomatoid urothelial carcinoma, thereby probably limiting Nectin-4-directed antibody-drug conjugates in patients suffering from this specific subtype.
CD155 is expressed mainly in various cancer cells. Mori et al. [8] studied the expression of CD155 on the membrane and in the cytoplasm of urothelial cells. They confirmed a lack of CD155 in normal urothelial cells, whereas CD155 was identified in the cytoplasm and membrane of tumor cells. Membranous CD155 was associated with shortened recurrence-free survival and cancer-specific survival following radical cystectomy for BCA. A high CD155 expression on tumor cells may lead to tumor immune tolerance and may be targeted by treatment with an anti-TIGIT antibody.
Gemcitabine is a commonly used chemotherapy drug, but resistance often limits its effectiveness. Wang et al. [9] investigated the role of NXPH4 in BCA and showed that it plays a crucial role in BCA progression. NXPH4 contributes to the proliferation, migration, and invasion of BCA by maintaining the stability of NDUFA4L2, thereby enhancing reactive-oxygen-species production and activating glycolysis; together, this modulates gemcitabine resistance.
RNA-binding proteins play an essential role in post-transcriptional gene regulation, and their dysregulation has been implicated in cancer. Gu et al. [10] developed an RNA-binding protein risk score based on six genes (AHNAK, MAP1B, P4HB, FASN, LAMA2, and GSDMB) that was an independent predictor of overall survival and could be used for the development of a nomogram in patients with BCA. AHNAK was functionally validated, including its oncogenic role (proliferation, invasion, and migration) and its effect on immune cell infiltration.
Ferroptosis, a form of regulated cell death, has emerged as a potential therapeutic target in various cancers. Zhang et al. [11] explored survival- and therapeutic response-related ferroptosis regulators in bladder cancer through data mining and experimental validation. Ferroptosis regulators impact the BCA microenvironment and influence BCA survival, and a ferroptosis gene signature predicts the effect of chemo- and immunotherapy in BCA. Thus, this research may pave the way for the development of novel therapeutic strategies targeting this cell death pathway.
Immunotherapy has shown promise in the treatment of BCA, but response rates can vary. Shimizu et al. [12] investigated, in a multicenter retrospective analysis, the outcome of patients with urothelial cancer undergoing pembrolizumab therapy. They showed that bone metastases responded only infrequently to pembrolizumab. Furthermore, responses were prolonged in locally unresectable cancers and lymph node metastasis compared with lung and liver metastases. This study highlights the need for personalized treatment strategies based on individual patient characteristics.
Radical prostatectomy is a common treatment option for localized prostate cancer (PCA), and it is crucial to consider not only the surgical procedure itself but also the supportive measures provided to patients. Wolf et al. [13] aimed to identify gaps between patient expectations and the actual provision of supportive care in certified and non-certified PCA centers. Interestingly, patients rated the availability of most measures similarly in certified and non-certified PCA centers, with no statistically significant difference observed concerning the supportive measures rated most relevant by the patients.
In conclusion, the collection of articles in the Special Issue "Insights into Urologic Cancer" has made substantial contributions to our comprehension of urological malignancies. These studies have shed light on various aspects of urological cancers, including emerging therapeutic approaches such as targeting specific molecular pathways or exploring the role of circular RNAs in overcoming drug resistance.
Looking ahead, the insights gained from these studies open up new avenues for future functional, translational, and clinical research. Further investigations can build upon the knowledge obtained in this Special Issue to refine diagnostic methods, optimize treatment protocols, and develop novel therapeutic interventions. By addressing the gaps in our understanding of urological malignancies, future research endeavors hold the potential to improve patient outcomes and to ultimately contribute to the global fight against urologic cancers.
Conflicts of Interest:
The authors declare no conflict of interest.
|
How Do We Get from Cell and Animal Data to Risks for Humans from Space Radiations?
After four decades of human exploration in space, many scientists consider the medical consequences from radiation exposures to be the major biological risk associated with long-term missions. This conclusion is based upon results from a research program that has evolved over the past thirty years. Despite the diversity in both opinions and approaches that necessarily arise in research endeavors such as this, a commonality has emerged from our community. We need epidemiological data for humans, animal data in areas where no human data exist, and data on mechanisms to get from animal to humans. We need a programmatic infrastructure that addresses specific goals as well as basic research. These concepts might be deemed overly simplistic and even tautologous were it not for the fact that they are frequently underutilized and even ignored. This article examines the goals, premises, and infrastructures proposed by expert panels and agencies to address radiation risks in space. It is proposed that the required level of effort and the resources available demand a unified, focused international effort that is, at the same time, subjected to rigorous peer review if it is to be successful. There is a plan; let us implement it. “ I suppose we shall air-instead of sea-voyages, and at length find our way to the moon; in spite of the want of atmosphere.”
INTRODUCTION
The story of Phaethon and Helios in Greek mythology about Phaethon's abortive attempt to orbit the Earth might be interpreted as an example of early awareness on the part of our ancestors of the intrinsic risks associated with space travel. Similarly, the misfortune of Icarus when he ignored the risk limits set by Daedalus for flying toward the sun (not too high and not too low) might be one of the first written theses about the potential dangers from solar radiation in space. Today, after decades of relatively frequent shuttle trips and orbital missions, and the realization that interplanetary missions "toward a human presence in space" are technologically feasible, radiation has emerged as a major hazard, perhaps the major health hazard, for personnel in space. Nevertheless, despite an increasing concern about the potential health consequences and following decades of research, the uncertainties in those risks remain too high 1) . Moreover, there frequently appears to be a dichotomy between the types of information we purport to need to determine these risks in space and the types of biological data being sought. With continuous human presence in space but decreasing resources, it is imperative that we examine whether we are being as effective and as efficient as possible in determining the health risks from radiations in space.
At The 1st International Workshop on Space Radiation Research in Arona, Italy, the opening speakers searched for solutions. Dr. Juergen Kiefer 2) questioned whether the procedures for radiation protection on Earth are necessarily the best systems to be applied in space. Dr. Eric Hall 3) noted that cancer risks are not detectable at low doses through epidemiological investigations and proposed that mechanistic studies can lead us to the means to extrapolate appropriate data to that low-dose region. Dr. Francis Cucinotta 4) suggested that experimental models could lead to testable theories that ultimately could allow accurate risk projections. Dr. Gerhard Kraft 5) presented an example of such a theoretical model currently being used successfully in radiotherapy.
To define more precisely the means for moving from experimental cell and animal data to risk in humans in a space radiation environment, we briefly review expert opinions of what those potential risks are, how they might be best determined, and how the research outcomes compare with the proposed needs. Finally, we conclude with specific recommendations.
WHAT ARE THE RISKS?
It is difficult to ascertain exactly when radiation in space was recognized as a serious health hazard. Certainly, there was a strong interest in examining space radiations unattenuated by the Earth's atmosphere even before the space program for purposes of particle physics, astronomy and astrophysics. Radiation doses, however, were not monitored until the fifth Mercury mission 6). Nevertheless, by the mid-1960s, the potential consequences of radiation were recognized as a problem to the extent that NASA and the National Research Council of the U.S. National Academy of Sciences reported on the radiobiological factors associated with human flights 7,8). The level of concern has oscillated significantly during the last four decades as public and governmental interest in human presence in space has varied and as new information has been forthcoming. In 1989, the National Council on Radiation Protection and Measurements 9) evaluated radiation received in space activities. The group was relatively conservative in its evaluation of the risk, concluding only that it was expected that exposures in space would be greater than those for terrestrial radiation workers. Two recent NRC/NAS reports 10,11) led to the conclusion that radiation beyond low Earth orbit (LEO) potentially poses serious health effects that must be controlled before long-term missions are initiated. Most recently, we seem to be reaching unanimity that radiation is one of the major hazards in space, possibly the major unresolved biomedical issue. NASA's Critical Path Roadmap (CPR) 12) lists four type-I severe risks, its most severe classification. Two of these, Human Behavior and Clinical Capability, address the ability of crews to respond to changes in performance or acute medical problems rather than specific hazards or diseases. The remaining two, Radiation and Bone Loss, have direct and perhaps synergistic clinical consequences.
In the case of radiation, the major risk according to NASA's CPR is the increased likelihood of cancer, with the additional possibilities of damage to the central nervous system, synergistic effects with other hazards, acute responses to exposures, and effects on fertility, sterility, and heredity.
The NRC/NAS Space Studies Board strategy for research in space biology and medicine 10,11) enumerates seven higher priority recommendations with regard to risks from radiation and, again, carcinogenesis is the first item. Their higher priorities also include determining cell killing and chromosomal aberrations, better methods for extrapolation from rodents to humans, better error analyses, better designs for space vehicles, and better means of predicting solar events, as well as five lower priority recommendations.
The NASA-sponsored report on modeling human risks 13) infers from the existing literature that space radiation can cause cancers and organ damage. The report of the NCRP on radiation protection guidance for space activities 9) concluded that the major concerns about radiation in space are cancer and genetic effects. The more recent NCRP report on radiation protection for low Earth orbits 14) likewise concludes that the concern about radiation exposure is the possibility of late effects, the most important of which is cancer.
There is a consensus, then, that cancer is the number one issue to be resolved with regard to radiation exposures in space, with CNS damage, synergistic effects, and a few other key issues to be evaluated as well for their relevancy. Radiation is likely to be the major cause of long-term complications. Shorter-term bone loss and other acute responses may be enhanced in a radiation field, although little data are available to evaluate the risk.
It is appropriate at this point to quote the words from NCRP Report 98 9) in reference to the early NAS/NRC reports, namely, "It should be recalled that, with the other attendant and much greater risks involved in space activities, it seemed inappropriate to be unduly restrictive about radiation exposures." It is our inability to address in a satisfactory manner what appears to be a potentially manageable risk despite the dedicated efforts of our colleagues that suggests that we should reexamine the modus operandi. If cancer or CNS damage or other diseases pose a potential risk great enough to seriously jeopardize either crews or missions, how can we determine sufficiently the level of risk and evaluate potential countermeasures that might reduce the risk?
WHAT INFORMATION DO WE NEED?
Although there is general agreement that radiation is a major hazard, there would appear to be less agreement on what research is needed to obtain adequate risk values with acceptable uncertainties for low doses of radiations in space. Human epidemiological studies provide the most direct and, in principle, the most reliable information. For the case at hand, the human population exposed in space is small, and the likelihood of observing radiation-induced cancers or other diseases above background levels even over long periods of time is likewise small and accompanied by large uncertainties. Moreover, such confounding factors as microgravity, changes in diets, or changes in sleep cycles contribute to the risks for these diseases, making it necessary to infer the fraction of the total risk arising from radiation, increasing the uncertainties even further. There are basically three other sources of experimental data: epidemiological data for humans exposed in other scenarios (such as survivors of atomic-bomb or reactor incidents, patients given radiation therapy, or radiation workers), animal studies, and cellular or subcellular studies. The human data that we have obtained are generally for photons, not protons or energetic heavy ions, and at acute rather than protracted exposures. Although these data are essential for establishing absolute risks, they nevertheless must be extrapolated to the radiations and conditions in space. It is that extrapolation that introduces major uncertainties in the final results 1). Recently, there has been an emphasis on genetic and cytogenetic biomarkers in-vitro, which are more easily obtained and more easily quantified. Relevant genetic information is essential for developing mechanistic models, and mechanistic models, we claim, are essential for determining risk. Such data have already been useful in establishing relative susceptibilities of sub-populations with specific genetic characteristics 15,16). However, relative susceptibility and risk, while related, are not the same. It has been considerably more difficult to correlate observed genetic changes, particularly those observed in vitro, with cancer rates in animal models. In fact, recent data suggest that in-vitro genetic changes do not correspond directly to those observed in cells in vivo 17).
For decades, in-vivo studies have served as the mainstay for evaluating risks arising from environmental modifying factors. Over two decades ago, Fry 18) observed that "The existing human data cannot alone provide estimates of risk of exposures to very low doses…" further noting "the fact that some model for the dose-response relationships must be used makes it imperative to design animal experiments in order to test the model." We draw two important inferences from Fry's paper. The first is that we need animal experiments to measure relevant endpoints such as cancer to determine risks. The second is that we need theoretical models to establish dose response relations from the animal results in order to apply them (extrapolate) to humans. In practice, however, little work is being done to measure carcinogenesis or other diseases in vivo, and there is little direct support from agencies for developing theoretical biology models.
In view of the major advances in genetic and molecular biology, we might question whether Fry's approach is necessary or even applicable to today's situations. The importance of such research is poignantly summarized in the National Academies' review of NASA's Biomedical Research Program 19) , which concluded that "There is a good balance between dosimetry and molecular and cellular radiobiology. However, more emphasis should be placed on carcinogenesis and CNS end points, using animal models. Such experiments must be carried out in ground-based facilities so as to estimate the risks to astronauts of exposure to HZE particles and develop guidelines for limits on exposure to these particles." [Boldface in reference.] The dearth of such experiments was dramatically summarized in "The First Biennial Space Biomedical Investigators' Workshop (January 1999) which included descriptions (abstracts) of 23 relevant projects, of which only three dealt with research on vertebrates-one on mutations of an exogenously incorporated gene in mice; one on behavioral effects of HZE particles on rats; and one on the induction of breast cancer in rats by HZE particles and gamma rays" 19) . There was a similar distribution for the 10th Annual Space Radiation Health Investigators' Workshop (1999) 20) . These statistics are not much different than those for The Second International Workshop on Space Radiation Research in Nara, Japan, 2002.
Despite the repeatedly expressed consensus that animal-based studies of carcinogenesis are necessary to obtain the needed risk factors, my conclusion is that little such research has been taking place. Further, if we accept the premise that we need carcinogenesis, CNS damage, and other relevant endpoints in animal models, the question remains how we would use those data.
IS THERE AN ACCEPTED PROCEDURE THAT WILL GET US TO RISKS IN SPACE?
Despite the legitimate criticisms against indiscriminately extrapolating animal results to humans, animal research has been uniquely successful for determining risks from radiation and drugs as well as for evaluating countermeasures and treatments. In 1997, a NASA-sponsored panel 13) did an excellent job of reviewing the status of cell and animal research and the procedures for applying these types of data.
Their general conclusion was that "neither existing in vitro and culture models nor theoretical and computational biology can substitute for in vivo studies…" Equally important, they have summarized succinctly and precisely the process needed to go from cells to humans. We present a variation of that procedure in Figure 1, modified for the present application.
The important underlying premise is that a mechanistically based theoretical model of sufficient accuracy must be developed with which to calculate risks for cancer, CNS damage, or other diseases. The key point is that it is the theoretical model, not the experimental data, not even the human epidemiological data, that is used to determine risk in humans.
Genetic and molecular data are used to establish what important cellular and subcellular mechanisms must be incorporated into the model. However, care must be exercised to establish that the processes apply to the in-vivo case. For that, we need in-vivo genetic and molecular data to establish relative and absolute levels of importance of the different pathways and to establish what epigenetic, abscopal, and exogenic factors modify the cellular pathways and endpoints and, therefore, also must be incorporated into the model.
Changes in both the cellular processes and clinical endpoints with radiation type, energy, and dose rate must be both measured in biological systems and modeled.
Potential synergisms between different radiations and radiation and other factors such as bone loss, tissue atrophy, or nutritional changes must be examined and incorporated into the model, if they are significant. Finally, the model must be benchmarked with existing human data to establish the absolute magnitude of the risks.
In summary, a theoretical model or models must be constructed that can model cellular and subcellular processes leading to clinically relevant endpoints for protons and HZE particles. The cellular component of this model is tested by using it to model clinical endpoints including cancer and CNS damage in relevant animal models, iterating the process until sufficient accuracy is established for the calculations. That model is also used to calculate cellular responses for relevant human cells in vitro, compared with measurements, and again tuned for accuracy. In all cases, changes in responses, if any, with particle types, energies, and dose rates must be established. The model is then used to calculate expected clinical responses for humans for relevant situations where we have data for subcellular responses and for cancer and other clinical endpoints. Throughout the entire process, realistic error analyses must be carried out to provide the level of confidence for situations where there are no benchmarking data. Then, the model can be used to establish risks and risk uncertainties for humans in space where we have no data.

Figure 1. A matrix representation of a process for assessing risks in humans with theoretical models and cell and animal data. Adapted from LBNL Report 40278, Modeling Human Risk: Cell & Molecular Biology in Context (13).
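As a purely illustrative sketch of the benchmark-and-tune iteration described above, the toy code below adjusts a one-parameter dose-response model against a set of benchmark endpoints; the linear model and the data points are placeholders, not an actual risk model.

    # Illustrative toy of the benchmark-and-tune loop described in the text;
    # the linear "model" and the data points are placeholders, not a real risk model.
    from dataclasses import dataclass

    @dataclass
    class Benchmark:
        dose: float      # Gy
        endpoint: float  # e.g., excess tumor incidence in an animal study

    def tune_slope(benchmarks, slope=0.0, step=0.01, tolerance=1e-4):
        """Adjust a one-parameter dose-response slope until the mean
        prediction error over the benchmark set falls below tolerance."""
        while True:
            error = sum(slope * b.dose - b.endpoint for b in benchmarks) / len(benchmarks)
            if abs(error) < tolerance:
                return slope
            slope -= step * error  # move the parameter against the mean error

    animal_data = [Benchmark(1.0, 0.05), Benchmark(2.0, 0.11), Benchmark(3.0, 0.14)]
    print(f"fitted slope: {tune_slope(animal_data):.4f} per Gy")

In the actual program, the same loop would be applied at each tier (in-vitro data, animal clinical endpoints, human benchmarks), with full error propagation in place of the single tolerance.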
Obviously, this is a time-consuming program requiring a high degree of organization and coordination. However, in the long run, only a focused programmatic effort is likely to yield meaningful results in a reasonable period of time. National coordination and international cooperation are certainly already present, but there is room for improvement. There are obvious recommendations to be made, to remind us of the goal and to refocus the successful programs, and to question research endeavors that do not appear to fulfill the criteria.
RECOMMENDATIONS
Before stating the overall conclusion, I have five recommendations to make, each of equal importance, so the order of presentation is not meant as an indication of relative significance. These five recommendations are: 1. We must support low-dose, in-vivo studies of carcinogenesis and tumorigenesis as a function of dose rate. As costly and time-consuming as they are, without such data, we will almost certainly not achieve a scientifically meaningful conclusion.
2. We must support the development of relevant theoretical models. These models are necessary tools for determining risks in humans where there are no human data. We must be careful to differentiate between theoretical modeling, which uses existing models to interpret or interpolate experimental data, and the development of new theoretical models, which allow better and more accurate modeling.
3. We should reexamine the human epidemiological data in terms of low dose responses, both in terms of cancer and other diseases, but also in terms of genetic and molecular changes. This requires a correlation of specific responses with individuals and individual characteristics rather than global representations, a level of examination yet to be done.
4. There should be a small international cooperative group coordinating animal studies, particularly those for carcinogenesis, tumorigenesis, and neurotoxicity, to maximize resources, quality, and productivity. A good model for effective organizations of this type might be the cooperative groups organized to run clinical trials in medicine, such as the Quality Assurance Review Center (QARC), the Children's Oncology Group (COG), the Eastern Cooperative Oncology Group (ECOG), or the American College of Surgeons Oncology Group (ACOSOG).
5. There should be an independent review by scientists, overseen by a neutral organization such as a national academy of science or a council for radiation protection, of the types of in-vivo carcinogenesis and CNS studies that should be carried out. This report should be the basis for a programmatic, peer-reviewed research program.
Finally, we have a well-defined problem with a well-defined goal. That is, we have a radiation environment in space with an uncertain risk, uncertain both in terms of the nature of those risks and their magnitudes. We need to determine the types and levels of risk adequately and develop countermeasures to assure an acceptable level of risk for personnel in space. The methods for determining radiation risks have been developed and refined over the years and are well known to the scientific community, albeit resource and time intensive. We can hope for serendipitous alternatives that might solve the problem in a quick and simple way, but we must focus on a strategic, programmatic effort. An internationally coordinated, goal-oriented, peer-reviewed program should be formed that supports large, focused research projects for risk assessments and countermeasures, commonly called a top-down approach. In parallel, we should continue with the more typical investigator-initiated research, the bottom-up approach, to stimulate innovative ideas and to search for better methods.
The effect of resin thickness on polymerization characteristics of silorane-based composite resin
Objectives This study examined the influence of resin thickness on the polymerization of silorane- and methacrylate-based composites. Materials and Methods One silorane-based (Filtek P90, 3M ESPE) and two methacrylate-based (Filtek Z250 and Z350, 3M ESPE) composite resins were used. The number of photons transmitted through specimens of different thicknesses (1, 2 and 3 mm) was detected using a photodiode detector. The microhardness of the top and bottom surfaces was measured (n = 15) using a Vickers hardness tester with a 200 gf load and a 15 sec dwell time. The degree of conversion (DC) of the specimens was determined using Fourier transform infrared spectroscopy (FTIR). Scratched powder from the top and bottom surfaces of each specimen was dissolved in ethanol for transmission FTIR spectroscopy. The refractive index was measured using an Abbe-type refractometer. Polymerization shrinkage was measured using a linometer. The results were analyzed using two-way ANOVA and Tukey's test at the p < 0.05 level. Results The silorane-based resin composite showed the lowest filler content and light attenuation among the specimens. P90 showed the highest DC values and the lowest microhardness at all depths. P90 showed significantly lower polymerization shrinkage and a significantly lower refractive index than the other two resin products (p < 0.05). Conclusions DC, microhardness, polymerization shrinkage rate and refractive index decreased linearly as specimen thickness increased. P90 showed much less polymerization shrinkage than the other specimens and, even though it achieved the highest DC, showed the lowest microhardness and refractive index.
Introduction
Since Bowen introduced composites in 1962, composite resins have almost replaced amalgam for dental restorations because of their remarkably improved aesthetics and excellent mechanical properties. 1 However, the dimensional stability of the esthetic composite restorative material is compromised by the polymerization reaction of the matrix phase. The conversion of the monomer molecules into a polymer network is accompanied by a closer packing of the molecules, which leads to polymerization shrinkage. Polymerization shrinkage of composite resin leads to many clinical problems, such as marginal staining, recurrent caries and restoration failure at the restoration/tooth interface, and remains a major concern for the clinical performance of restorations using composite resins. 2,3 Considerable efforts have been made to slow or eliminate polymerization shrinkage in composite resins. Recently, a new silorane-based low-shrinkage composite resin was introduced to dentistry. [4][5][6][7] Silorane has the configuration of siloxane and oxirane molecules. Siloxane molecules are hydrophobic, so silorane-based composite resin is expected to exhibit reduced water sorption and water-mediated exogenous discoloration. A low-shrinkage property can be achieved by the oxirane molecules, which extend their linkage through ring-opening, flattening and extending toward each other. 8 The curing depth of composite resins is related directly to their thickness. [9][10][11] Thickness can also affect the number of photons from the light source received at the top and bottom surfaces of a resin composite restoration. Because the polymerization process is initiated by external light, variations in the transmission and attenuation of incident light between specimens of different thicknesses can have a range of outcomes. Research on how the transmission and attenuation of light passing through various thicknesses of silorane-based composite resin affects polymerization is quite limited.
Furthermore, evaluating the degree of polymerization at the specimen surface is important for the proper placement of restorative materials. Factors such as the organic matrix composition, the type and amount of filler particles and the refractive index of the polymeric matrix can affect the light transmittance and the degree of polymerization of composite resins. 12,13 The surface hardness was evaluated to verify indirectly the degree of conversion of the composite resins. The degree of polymerization of light-activated composite resins is important for their clinical success and directly affects their mechanical properties. Several studies have documented the degree of polymerization of Bis-GMA-based composite resins, making comparisons with silorane-based composite resins important. [9][10][11][12][13][14][15] The present study examined the influence of resin thickness on the degree of polymerization of silorane-based composite resin.
Light-curing unit (LCU) and photon count
For light curing, a quartz-tungsten-halogen (QTH)-based LCU (Optilux 501, Kerr, Danbury, CT, USA) was used with an intensity of 900 mW/cm², as measured using a built-in radiometer. The tip of the LCU was of the conventional type. To measure the number of photons, specimens of different thicknesses (diameter, 7 mm; thickness, 1, 2 and 3 mm) were prepared and placed over a 1 mm-thick stage with a 6 mm-diameter hole in it. Light was irradiated continually from the top surface of the hole. A photodiode detector (M1420, EG&G PARC, Princeton, NJ, USA) connected to a spectrometer (SpectroPro-500, Acton Research, Acton, MA, USA) was placed under the hole in a fixed position to consistently measure the photons.
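The attenuation coefficients later reported in the Results (Table 2) come from exponential-curve fitting consistent with the Beer-Lambert law noted in the Discussion; written explicitly, with I_0 the incident photon count, d the specimen thickness and μ the attenuation coefficient:

    I(d) = I_0 \, e^{-\mu d}, \qquad \mu = -\frac{1}{d}\,\ln\frac{I(d)}{I_0} \quad [\mathrm{mm}^{-1}]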
Vickers microhardness measurement
Disc-shaped specimens (diameter, 4 mm; thickness, 1, 2 and 3 mm) were prepared to evaluate the surface microhardness of the specimens (n = 5). A 200 µm-thin slide glass was placed on the table. A metal mold was placed over the glass and packed with composite resin. After packing, the top surface of the mold was covered with a thin slide glass, pressed firmly and light-cured using the LCU for 40 seconds by placing the end of the light guide in contact with the top surface of the slide glass. After light curing, the specimens were removed from the mold and kept in a dark chamber at 37℃ for 24 hours. The microhardness of the top and bottom surfaces was measured (n = 15) using a Vickers hardness tester (MVK-H1, Akashi Co., Tokyo, Japan) with a 200 gf load and a 15-second dwell time, with three measurements on each surface per specimen.
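For reference (not stated in the original), the Vickers hardness number follows from the applied load F (in kgf; here 200 gf = 0.2 kgf) and the mean indentation diagonal d (in mm) via the standard relation:

    \mathrm{HV} = \frac{2F\sin(136^{\circ}/2)}{d^{2}} \approx \frac{1.8544\,F}{d^{2}}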
Degree of conversion (DC)
The specimens (n = 5 for each condition) prepared for the microhardness measurement were also used to evaluate the DC (%). The DC of the specimens was determined by Fourier transform infrared (FTIR) spectroscopy (Nicolet 6700/8700, Thermo Fisher Scientific Inc., Waltham, MA, USA). Immediately after measuring the microhardness, the top and bottom surface of each specimen was scratched (to a depth of 100 - 150 µm) using a scalpel to obtain a powder. The collected powder was dissolved in ethanol for transmission FTIR spectroscopy. The spectra were taken from 7,800 to 350 cm⁻¹ after 32 scans at a 0.09 cm⁻¹ resolution. The DC of the cured resins was evaluated using a baseline technique. For the methacrylate-based composite resins, the peaks from the aliphatic C=C bonds (at 1,636 cm⁻¹) and the reference aromatic C-C ring bonds (at 1,608 cm⁻¹) were determined. For the silorane-based composite resin, the stretching vibrations of the epoxy ring C-O-C (883 cm⁻¹) and the reference C-H bond (1,257 cm⁻¹) were chosen. Uncured resins were tested in a similar manner.
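These peak assignments enter the usual baseline-technique expression for the degree of conversion, where A_func is the absorbance of the reactive peak (C=C for methacrylates, C-O-C for silorane) and A_ref that of the internal reference peak:

    \mathrm{DC}(\%) = \left[\,1 - \frac{(A_{\mathrm{func}}/A_{\mathrm{ref}})_{\mathrm{cured}}}{(A_{\mathrm{func}}/A_{\mathrm{ref}})_{\mathrm{uncured}}}\,\right] \times 100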
Refractive Index
The refractive index of the specimens was measured using a commercial Abbe-type refractometer (NAR-IT, ATAGO, Tokyo, Japan). For the measurement, a small amount of resin was sandwiched between two glass slides placed below the mold (thickness, 1, 2 and 3 mm), and light was irradiated from the top of the mold to the bottom for 40 seconds. The light-cured thin slabs were aged for 24 hours in a 37℃ dark chamber. One drop of monobromonaphthalene (nD = 1.64) was added to the specimens as a high-refractive-index interfacial contact agent. A milky-white reflector was then placed over the specimen to enhance diffuse scattering from the cured specimen; diffuse scattering at the front surface is necessary to improve the measurement accuracy. Unless otherwise noted, refractive index readings were performed at room temperature (22.5 ± 1.0℃). The system used in this study uses light from the sodium D-line (589 nm).
Polymerization shrinkage measurement
A linometer (RB 404, R&B Inc., Daejeon, Korea) was used to measure the polymerization shrinkage (n = 5) during and after light irradiation. The measurement system was composed of a specimen holder, curing light, shrinkage-sensing part, software and computer. A Teflon (PTFE, polytetrafluoroethylene) disc mold (inner diameter, 4 mm; thickness, 1, 2 and 3 mm) was placed over an aluminum disc (the specimen stage) and filled with resin. The Teflon mold was removed after being filled completely with resin. A slide glass was then secured over the resin, so the resin was placed between the covering slide glass and the aluminum disc on the specimen holder. The end of the light guide was placed in contact with the slide glass. Before light curing, the initial position of the aluminum disc was set to zero. Light was irradiated from the LCU for 40 seconds. As the resin polymerized, it shrank toward the light source and the aluminum disc under the resin moved toward the light source. The amount of disc displacement due to polymerization shrinkage was measured automatically for 130 seconds using an inductive gauge. A non-contacting shrinkage sensor was used in this study. The resolution and measurement range were 0.1 µm and 100 µm, respectively.
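The shrinkage percentages quoted later in the Discussion (e.g., 0.35 - 0.65% for P90) correspond to the measured displacement normalized to the specimen thickness, with ΔL the disc displacement and L_0 the specimen thickness:

    S(\%) = \frac{\Delta L}{L_{0}} \times 100

For example, a displacement of 6.5 μm over a 1 mm specimen corresponds to 0.65%.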
Statistical analysis
The results of each test were analyzed by two-way ANOVA for the different thicknesses and resin products. A post-hoc Tukey's test was performed for multiple comparisons. A p value < 0.05 was considered significant.

Results

Table 2 lists the number of photons detected at the specimens with different thicknesses and the attenuation coefficients (μ, mm⁻¹) after exponential-curve fitting. In the subsurface, the incident light (photons) decreased exponentially. Among the specimens, P90 showed less light attenuation than the other two resin products. Table 3 lists the DC of the specimens tested at the different depths. Among the specimens, P90 showed the highest DC at all depths. The results revealed a correlation between the specimen thickness and DC. Each resin product showed an inverse linear correlation between DC and depth (R = 0.98 - 0.99) with similar slopes. Table 4 presents the microhardness of the specimens at the surface of different depths. Among the specimens, P90 and Z250 showed the lowest (54.1 - 67.8 Hv) and highest (73.9 - 86.1 Hv) microhardness, respectively. According to curve fitting, the microhardness and specimen thickness showed an inverse correlation (R = 0.975 - 0.995) regardless of the resin products. Table 5 shows the polymerization shrinkage of the specimens with different thicknesses. Among the specimens, P90 showed significantly lower shrinkage (6.5 - 10.4 μm) than the other two resin products (p < 0.05). The polymerization shrinkage of the specimens increased linearly with increasing specimen thickness (R = 0.99 - 1.00). Table 6 lists the refractive index at the surface of specimens of different depths. Each resin product had a significantly different refractive index (p < 0.001). According to curve fitting, the refractive index and specimen thickness showed an inverse linear correlation (R = 0.98 - 0.99).
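As an illustration of the exponential-curve fitting used for Table 2, a minimal sketch follows; the photon counts below are hypothetical placeholders, not the study's data.

    # Fit I(d) = I0 * exp(-mu * d) to photon counts versus specimen thickness.
    # The counts below are hypothetical placeholders, not the study's data.
    import numpy as np
    from scipy.optimize import curve_fit

    def beer_lambert(d, i0, mu):
        """Photon count transmitted through thickness d (mm)."""
        return i0 * np.exp(-mu * d)

    thickness = np.array([1.0, 2.0, 3.0])     # mm
    counts = np.array([4.2e5, 1.8e5, 7.5e4])  # hypothetical detector counts

    (i0_fit, mu_fit), _ = curve_fit(beer_lambert, thickness, counts, p0=(1e6, 1.0))
    print(f"I0 = {i0_fit:.3g} counts, mu = {mu_fit:.3f} mm^-1")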
Evaluation of correlation between tested values
The correlations among the DC, microhardness and refractive index were evaluated. Figure 1 shows the correlation between the DC and microhardness. The DC showed a linear correlation with the microhardness (R = 0.92 -0.99) and refractive index (R = 0.97 -0.98) at different depths (Figure 2). A similar linear correlation was observed between the microhardness and refractive index (R = 0.93) of the tested resin products with different depths (Figure 3).
Discussion
The degree of polymerization of the silorane-based composite resin was examined in terms of the curing depth, and the results were compared with those of methacrylate-based composite resins. For light-curing composite resins, the polymerization process is initiated by activating the photoinitiator with an external blue light. In this process, the number of photons is important because it regulates the capacity to activate the photoinitiator. The number of photons is related to the intensity of incident light, where a high intensity implies a high quantity of photons. Within the specimen, the incident light was attenuated by scattering and absorption events with the ubiquitously distributed fillers, pigments and photoinitiator. The number of photons decreased exponentially with increasing specimen thickness; such an exponential decrease normally follows the Beer-Lambert law. Less frequent light scattering and absorption was observed in the subsurface of the specimens containing less filler, which increased the survival of the incident photons. Factors such as the polymeric matrix, monomer type, filler type and filler content can influence the light transmittance of composite resins, and there are differences in the filler and monomer components between methacrylate and silorane composite resins. In this study, among the specimens, P90 showed a slightly lower attenuation coefficient than the other two resin products, which might be due to its relatively lower filler content. A lower attenuation coefficient suggests that more photons survive and fewer photons are lost in the subsurface. With less photon loss, a higher degree of conversion can be expected compared to other resin products at the same depth. 16,17 Previous studies reported that the DC of methacrylate-based resins ranged from around 55 - 75% using the conventional curing technique. 18,19 In this study, the DC of the methacrylate-based resin specimens ranged from 45.3 to 63.9%. In contrast, the DC of the silorane-based resin specimens ranged from 66.8 to 81.9%, regardless of the subsurface position. Previous studies reported DC values of silorane composite resin ranging from 50 to 64.9%. 6,[20][21][22][23] The results of this study do not support those previous results for the DC of silorane-based composite resin; however, those studies used different methodologies to determine the DC of silorane-based resin. In this study, immediately after measuring the microhardness, scratched powder was dissolved in ethanol for transmission FTIR spectroscopy. 8,24 Recently, one study reported a DC of 72.85% for silorane-based resin at a depth of 2 mm, which is higher than the earlier results. 25 DC is related to the differences in the monomer system and in the filler size, volume, and type between methacrylate and silorane-based composite resins. Moreover, there are some differences in the photoinitiating components: methacrylate-based composite resin is initiated by a two-component system consisting of camphoroquinone and a tertiary amine, whereas silorane-based composite resin is photoactivated with a three-component initiating system consisting of camphoroquinone, an iodonium salt and an electron donor. 26 DC is influenced by a complex interaction of these factors. The high DC of P90 is due partly to its lower light attenuation and partly to oxygen. Oxygen can be an inhibitor in the free-radical-mediated polymerization process. It can inactivate the free radicals by scavenging, impeding further polymerization. 27
On the other hand, the cationic ring-opening process is probably insensitive to oxygen because of its cationic reaction, which explains the high DC of P90. Curve fitting revealed an inverse linear correlation between the DC and subsurface depth, regardless of the exponential decrease in light intensity, which was attributed to the three-dimensional crosslinking process. Irradiated photons immediately reach the subsurface and initiate polymerization at the subsurface by crosslinking monomer molecules three-dimensionally from the top to bottom. The intensity of these photons, however, decreases exponentially with depth. Nevertheless, the insufficient DC due to the exponential decrease in photons can be compensated by the three-dimensional crosslinking.
Figure 3. Correlation between the Vickers microhardness (Hv) and refractive index values for different depths and resin products (R = 0.93, p < 0.001).
Depth of cure for light-activated dental composites has often been evaluated by measuring the hardness of the material at specific depths. In general, higher hardness values are an indicator of more extensive polymerization. 29,30 In this study, the specimens showed significantly different microhardness at different depths. Among the specimens, P90 and Z250 showed the lowest (54.1 - 67.8 Hv) and highest (73.9 - 86.1 Hv) microhardness, respectively. A linear correlation was observed between the microhardness and specimen depth (R = 0.975 - 0.995) regardless of the resin product, similar to the correlation between the DC and depth. The degree of polymerization of the specimens can be assessed by both the DC and the microhardness, and in general a higher DC correlates with greater hardness. 13 However, as microhardness is only an indirect method of verifying the monomer conversion, hardness values do not always predict the DC in comparisons of different resin materials. Despite a similar DC, 3-D structures of polymerized composite with different concentrations of C=C bonds can coexist in the same polymer structure. 31 Microhardness can also be influenced by the monomer phase and the filler phase; as the filler phase is harder than the polymer phase, a low filler content leads to lower microhardness values. 13,32 In this study, even with the highest DC, P90 showed lower microhardness than the tested methacrylate-based composites, which could be attributed to the filler content (vol% / wt%: 55 / 76 vs. 59.5 / 84.5). The microhardness showed a decreasing gradient from top to bottom with increasing thickness in all tested specimens. It has been suggested that the microhardness ratio from top to bottom should not exceed 10 - 20% for proper polymerization of composite resin restorations. 15 In Z250, Z350, and P90, the microhardness ratios from top to bottom (3 mm thickness) were 15, 16 and 21%, respectively, which means that polymerization of P90 at the bottom surface (3 mm thickness) was insufficient to provide optimal mechanical properties.
Silorane-based composite resin achieves low shrinkage due to the ring-opening oxirane moieties, despite having the lowest filler content among the specimens tested. 33 The silorane monomer ring differs from the chain monomers of methacrylate composites. In contrast to methacrylates, which are crosslinked via radicals, silorane is polymerized by a cationic reaction. The cationic curing initiation process involves an acidic center. After addition to an oxirane monomer, the epoxy ring is opened to form a chain or, in the case of multifunctional monomers, a network. 7 The opening of the oxirane rings during polymerization compensates for the polymerization shrinkage to some extent. The oxirane rings are responsible for the physical properties and low shrinkage. The polymerization of silorane-based composites occurs through a photocationic ring-opening reaction, which results in less polymerization contraction compared to the methacrylate-based composites. 7 P90 exhibited less polymerization shrinkage and a slower shrinkage rate (ratio between polymerization shrinkage and specimen thickness: 0.35 - 0.65%) than Z250 and Z350 (0.55 - 1.13%), which are methacrylate-based composite resins. Regardless of the specimen, shrinkage increased linearly with increasing specimen thickness (R = 0.99 - 1.00), whereas the ratio between polymerization shrinkage and specimen thickness decreased. As the specimen thickness increased, the polymerization shrinkage rate decreased due to incomplete polymerization. The level of insufficient polymerization increased further in the deep subsurface due to the exponential decrease in photons.
The refractive index of a medium measures the speed of light in that medium and reflects the polymerization state. In the present study, the refractive index was significantly different among resin products and subsurface positions. For each resin product, the refractive index decreased linearly with increasing specimen thickness. Among the specimens, P90 showed the lowest refractive index. As shown above, the DC and microhardness decreased linearly with increasing specimen thickness. These results suggest that the contraction of the top surface due to polymerization shrinkage is greater than that of the bottom surface. The higher DC and microhardness on the upper surface than on the lower subsurface can be explained by the larger number of photons at the upper surface than in the subsurface. Therefore, the density might decrease gradually from the top to bottom, with a similar gradual decrease in refractive index from the top to bottom. [34][35][36] In the tested specimens, there was a linear correlation among the DC, microhardness and refractive index. Nevertheless, it is unclear if this correlation is common to other composite resins. Hence, further investigation is needed.
Conclusions
The silorane-based P90 achieved the lowest polymerization shrinkage compared to the methacrylate-based composite resins, independent of the specimen thickness. On the other hand, P90 had the lowest microhardness, despite having the highest DC among the specimens examined, because it has the lowest filler content. The DC, microhardness and refractive index of the tested specimens showed an inverse linear correlation (in the case of polymerization shrinkage, a positive linear correlation) with the position (depth, thickness) in the subsurface, despite the exponential decrease in incident photons within the specimens.
The GlueX Experiment: Search for Gluonic Excitations via Photoproduction at Jefferson Lab
Studies of meson spectra via strong decays provide insight regarding QCD at the confinement scale. These studies have led to phenomenological models for QCD such as the constituent quark model. However, QCD allows for a much richer spectrum of meson states, including exotics, hybrids, multi-quarks, and glueballs. First, the status of exotic meson searches is discussed, followed by an overview of the progress at Jefferson Lab to double the energy of the machine to 12 GeV, which will allow access to photoproduction of mesons in the search for gluonic excited states.
Status of Exotic Mesons
Discoveries of new phenomena in nuclear and particle physics have provided insight into the fundamental constituents of matter. In the past few decades we have seen a new picture emerge in which quarks form the building blocks of nearly all matter. Yet the gluon, which carries the force that binds quarks, can interact with other gluons to form a bound state, or interact as a fundamental constituent of matter along with the quarks. Thus new forms of gluonic or hybrid matter should exist.
Studies of meson spectra via strong decays of hadrons provide insight regarding QCD at the confinement scale. These studies have led to phenomenological models for QCD such as the constituent quark model. However, QCD demands a much richer spectrum of meson states, which includes extra states such as gluonic hybrids (qq̄g), multiquarks (qq̄qq̄), and glueballs (gg or ggg).
Results from lattice gauge theory studies suggest that the lightest gluonic hybrid mesons have J^PC = 1⁻⁺ and mass in the range of (1600 - 2000) MeV/c². In the flux-tube model the lightest 1⁻⁺ isovector hybrid is predicted to decay primarily to b1π [1]. The f1π branch is also expected to be large, and many other decay modes are suppressed. This suppression is consistent with recent calculations showing 1/Nc behavior for decays to spin-zero mesons in the large-Nc limit of QCD.
Few experiments have addressed the b1π and f1π meson decay channels. The VES collaboration reported a broad J^PC = 1⁻⁺ peak in b1π decay [2], and Lee et al. [3] observed a significant J^PC = 1⁻⁺ signal in f1π decay. In neither case was a definitive resonance interpretation of the 1⁻⁺ waves possible. Preliminary results from a later VES analysis show excitation of π1(1600) [4]. Significant b1π strength for π1(1600) was also reported [5]. BNL experiment E852 reported a measurement of f1π and b1π decays for π1(1600) and π1(2000) [6,7]. In addition, claims of J^PC = 1⁻⁺ exotic signals exist in decay channels which were not favored by the flux-tube model. The most controversial is the observation of π1(1400) → ηπ, whereas the strongest evidence for a J^PC exotic exists for π1(1600) → η′π.
In previous publications [8,9], Brookhaven E852 presented evidence for an exotic meson produced in the reaction π⁻p → ηπ⁻p at 18 GeV/c from an analysis of the 1994 E852 data set. A large asymmetry in the angular distribution was observed, indicating interference between l-even and l-odd partial waves. The a2(1320) was observed in the J^PC = 2⁺⁺ wave, as was a broad enhancement between 1200 and 1600 MeV/c² in the 1⁻⁺ exotic wave. The observed phase difference between these waves shows that there was phase motion in addition to that due to a2(1320) decay. An analysis of the mass dependence of the partial waves shows that the data are well described by the interference between the a2(1320) and an exotic 1⁻⁺ resonance with mass = (1370 ± 16 +50/−30) MeV/c² and Γ = (385 ± 40 +65/−105) MeV/c². E852 has also performed a partial wave analysis of the reaction π⁻p → π⁺π⁻π⁻p [10]. In summary, all expected well-known states (a1, a2, and π2) are observed. In addition, the natural parity exchange wave with manifestly exotic quantum numbers J^PC = 1⁻⁺ shows structure in the intensity and phase motion which is consistent with a resonance at 1600 MeV/c² decaying into the ρπ
channel. A mass-dependent fit results in a resonance mass of (1593 ± 8 +29/−47) MeV/c² and a width of (168 ± 20 +150/−12) MeV/c². A recent analysis of a larger 3π data set emphasizes discrepancies in the PWA systematics which make the claims of an exotic signal controversial [11].
A subsequent analysis studied the η′π⁻ system produced in the reaction π⁻p → ηπ⁺π⁻π⁻p. The data exhibit a clear signal for η′ → ηπ⁺π⁻ in the ηπ⁺π⁻ mass spectrum. The accepted η′π⁻ mass spectrum shows a peak in the region of the a2(1320) and a broad peak near 1600 MeV/c². The results of a partial wave analysis find two important partial waves: the 2⁺⁺ wave, consistent with an a2(1320) signal plus a broad higher-mass structure, and a dominant J^PC = 1⁻⁺ exotic wave which peaks at 1600 MeV/c².
A study of the mass dependence of the PWA results finds three Breit-Wigner (BW) poles in the 2⁺⁺ wave and two BW poles in the 1⁻⁺ wave. The 1⁻⁺ wave is dominated by a pole at 1600 MeV/c² with a small contribution from a pole at 1400 MeV/c². An alternative fit, one with a poorer χ²/DoF of 1.22, finds that the 1⁻⁺ wave can be described by the pole at 1600 MeV/c² only. The results for the 2⁺⁺ poles and the 1⁻⁺ pole at 1600 MeV/c² are stable and are not affected by the parameterization of the 1⁻⁺ low-mass region.
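For orientation, mass-dependent fits of this kind conventionally describe each pole with a relativistic Breit-Wigner amplitude; a generic form (the precise E852 parameterization may differ) is

    \mathrm{BW}(m) = \frac{m_{0}\,\Gamma_{0}}{m_{0}^{2} - m^{2} - i\,m_{0}\,\Gamma(m)}

with m_0 and Γ_0 the resonance mass and width and Γ(m) the mass-dependent width; interference between such amplitudes in different partial waves produces the observed phase motion.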
Recently, g12 at Jefferson Lab's CEBAF Large Acceptance Spectrometer (CLAS) acquired a high-statistics photoproduction dataset, using a liquid hydrogen target and tagged photons from a 5.71 GeV electron beam. The CLAS experimental apparatus was modified to maximize forward acceptance for peripheral production of mesons. The resulting data contain the world's largest 3π photoproduction dataset.
The latest results were presented on June 1st at the Eleventh Conference on the Intersections of Particle and Nuclear Physics (CIPANP 2012). They exhibit clear observation of the well-known meson states a1(1260), a2(1320), and π2(1670). The main feature is a non-observation of the 1⁻⁺ exotic state, in both the mass intensity and the phase motion relative to the π2(1670).
The GlueX Experiment
A recent effort at Jefferson Lab, in conjunction with the plans for the energy upgrade of CEBAF, is the development of a state-of-the-art hermetic spectrometer, the GlueX project (formerly known as the Hall D project). One of the scientific motivations for upgrading CEBAF is a high-statistics definitive study of the photoproduction of mesons with masses below the cc̄ threshold in a search for new forms of hadronic and gluonic matter. After many years of searching for gluonic excitations produced via hadronic probes, only now are we finding some promising candidates. An unexplored search area is the photoproduction of light-quark meson states. Since the photon can be thought of as a virtual vector meson with quark spins aligned, it is a probe distinct from the traditional hadronic probes of pions, kaons, and protons. Within the flux-tube model, the production of gluonic excitations via photon interactions is expected to produce a wealth of states with manifestly exotic J^PC's (qq̄-forbidden quantum numbers).
The goal of the GlueX experiment is a mapping of the spectrum of gluonic excitations, with the ultimate objective being a quantitative understanding of the nature of confinement in QCD. To achieve this goal a hermetic detector, the GlueX spectrometer, optimized for amplitude analysis, will be constructed in a new experimental hall (Hall D). A tagger facility will produce 9 GeV linearly polarized photons via coherent bremsstrahlung radiation of 12 GeV electrons through a diamond wafer.
To achieve 12 GeV electrons, CEBAF will be upgraded with additional cryomodules, modified arcs, and an additional arc. The GlueX detector uses a geometry based on a solenoidal magnetic field (Figure 1). The superconducting solenoid produces a 2.25 T field. A tagged, ≈ 9 GeV, linearly polarized photon beam is incident on a 30 cm long liquid-hydrogen target that is surrounded by a scintillator start counter which is used in triggering. Following that is a cylindrical tracking chamber, the CDC, and then a cylindrical electromagnetic calorimeter, the BCAL. Downstream of the CDC are four packages of circular planar drift chambers, the FDC, followed by a time-of-flight wall, the TOF. This is followed by a circular planar electromagnetic calorimeter, the FCAL. Space has been reserved between the downstream end of the magnet and the TOF for a future particle identification system upgrade. This design provides nearly 4π acceptance for both charged particles and photons. While the acceptance is not completely uniform in all variables, there are no holes in the kinematic variables of interest.
The GlueX collaboration was formed in 1998 [12]. The project has been reviewed externally and by the Jefferson Lab PAC. In September 2008, the U.S. Department of Energy gave Jefferson Lab Critical Decision 3 (CD-3) approval for the CEBAF upgrade and construction of Hall D and the GlueX apparatus.
At the present time, the civil and accelerator construction are 90% complete and the experimental equipment is 50-60% obligated. Accelerator commissioning is planned for Jan 2014.
Forbidden frozen-in dark matter
We examine and point out the importance of a regime of dark matter production through the freeze-in mechanism that results from a large thermal correction to the mass of a decaying mediator particle, induced by the hot plasma in the early Universe. We show that mediator decays to dark matter that are kinematically forbidden at the usually considered low temperatures can be generically present at higher temperatures and can actually dominate the overall dark matter production, thus leading to very distinct solutions from the standard case. We illustrate these features by considering a dark Higgs portal model where dark matter is produced via decays of a scalar field with a large thermal mass. We identify the resulting ranges of parameters that are consistent with the correct dark matter relic abundance and further apply current and expected future collider, cosmological, and astrophysical limits.
Introduction
Attempts to explain the presence and abundance of dark matter (DM) in the Universe often involve making various assumptions about the history of the very early Universe. The simplest and most natural one is to assume that, at high enough temperatures, a DM particle is in thermal equilibrium with the plasma of Standard Model (SM) particles, which ensures that its density is given by Maxwell-Boltzmann statistics. At some point in the expansion and cooling of the Universe, DM undergoes the well-known freeze-out mechanism, which determines its subsequent population in the Universe. The freeze-out mechanism has been particularly popular because it requires a minimal amount of rather natural assumptions and, for reasonable values of the parameters of specific particle candidates in the class of weakly-interacting massive particles (WIMPs), it is often able to produce the observed abundance of DM in the Universe. Furthermore, it does so in a manner that is insensitive to the condition of the Universe after inflation, thus effectively separating the high-temperature regime from the one responsible for dark matter production.
However, it has long been known that, in addition to freeze-out, some other DM production mechanisms exist and can in fact play a dominant role in achieving the observed relic density. One particularly well-motivated example involves sub-eV axions that, due to their tiny interactions, are mainly produced not thermally but via the well-known misalignment mechanism; for recent reviews see, e.g., [1,2]. This mechanism was later extended to the case of ultra-light vector bosons in [3,4]. Furthermore, extremely weakly interacting massive particles (usually referred to as E-WIMPs or super-WIMPs) are often predicted by many well-motivated extensions of the SM, for instance a gravitino in scenarios based on local supersymmetry (SUSY) or an axino in SUSY models of axions; see, e.g., [1] for a recent review. If stable, they are potential candidates for dark matter in the Universe. However, due to their exceedingly feeble interactions, their population after inflation is negligible, assuming that their decoupling temperature is higher than the reheating temperature $T_R$, since they never reach thermal equilibrium with the SM plasma, and the freeze-out mechanism is ineffective. Instead, they can be generated through so-called freeze-in [5] from scatterings and decays of some other particles.
A key feature of such "frozen-in" dark matter scenarios is that, while all SM particles remain in thermal equilibrium since the Universe reheats after inflation, the DM particle χ is absent in the early Universe and never reaches equilibrium with the SM plasma. Its production is mediated by some particles that typically remain in equilibrium with the plasma. Once the temperature drops below the mediator mass, DM production essentially stops and its relic density freezes-in.
In freeze-in scenarios, specific features and the final relic abundance of DM often depend on the details of a specific beyond-the-SM (BSM) model. In models with either the gravitino or axino as DM, their freeze-in production is typically dominated by non-renormalizable interactions at high temperatures in the case of scattering or at low ones in the case of decays [6,7]. On the other hand, in models where DM production involves for instance a light mediator, the low-temperature production dominates over the high-temperature one, thus separating again the physics of inflation from the one of dark matter [5,8,9].
In this article, we will consider a previously neglected case in which some mediator field S, which could be a scalar, a vector boson or a fermion, is not only in equilibrium with the thermal bath, but also develops a substantial thermal mass. That is, at sufficiently high temperatures the mass $m_{S,T}$ of the mediator deviates significantly from its "vacuum" value $m_S$, i.e., the mass is dominated by thermal effects. Such an effect has recently been studied, for instance, for thermal photon decays [10]. The population of DM particles χ is assumed to be initially absent when the Universe reheats after inflation, but is generated by the decays of S. If at high enough temperatures the thermal mass of the mediator becomes sufficiently large, the possibility opens up that, when $m_{S,T} > 2 m_\chi$, the decay $S \to \bar\chi\chi$ becomes allowed, while at T = 0 it was kinematically forbidden. As we will show, this opens up a new regime for DM production, which we will call "forbidden frozen-in dark matter".
We believe this kind of effect was first identified for gravitino [11] and subsequently for axino production [12]. The production rate of singlet fermions from the decay of scalar fields in a plasma, including thermal corrections, was calculated in [13] and applied to the case of right-handed neutrino DM. More recently it was also described in a more generic context in [14,15].
In this paper, we take a closer look at the "forbidden freeze-in" regime and identify its main phenomenological features. We further show that the equilibrium assumption for the mediator can be relaxed as long as S obtains a sizeable thermal correction to its mass, e.g., when it is chemically decoupled from the SM plasma but remains in kinetic equilibrium with itself via self-scatterings. Interestingly, albeit perhaps as expected, the ensuing phenomenology is found to depend strongly on the dimension of the operator controlling the mediator decay into a DM pair. For dimension-four operators, the production is dominated by the low-temperature regime and peaks at $m_{S,T} \sim 2 m_\chi$. Additionally, a striking feature is that in this regime the relic abundance is ultimately almost insensitive to the DM mass, while the coupling responsible for DM production typically takes significantly larger values than in the standard freeze-in case. For mediator decays through higher-dimensional operators, on the other hand, the production is dominant at high temperatures and therefore depends on the reheating temperature. Furthermore, we argue that, since the forbidden freeze-in regime is a generic property, it might be worth exploring it in models of the freeze-in mechanism of DM production, e.g. [16-27].
As a specific realisation of the case presented above, we examine an explicit Higgs portal scenario, where the dark Higgs boson, kept in equilibrium with the SM fields through a quartic mixing term with the SM Higgs boson, can decay to light (GeV-scale) Dirac fermion dark matter at a strongly suppressed rate. The thermal mass is predominantly generated by the dark Higgs boson self-coupling, enabling it to easily reach a thermal-mass-dominated regime. Since Higgs portal scenarios are typically constrained by a variety of limits, we briefly review them and apply them to the considered model in order to identify new regions that are allowed by forbidden freeze-in. We will assume that the dark Higgs boson is originally in equilibrium with the SM thermal bath. If the quartic mixing is low enough, it may also be produced through freeze-in; see, e.g., [28] for more details on this setup. This scenario is similar to case III of [14], which closely resembles our model, as the particle content is similar. The main difference, however, is the vacuum expectation value (VEV) structure. In our model the portal particle has a zero VEV ($\langle S \rangle = 0$) throughout the early Universe, and develops one only through its mixing with the Higgs boson after the electroweak phase transition. This allows us to isolate the pure forbidden freeze-in regime without the impact of the SM Higgs VEV, thus simplifying the analysis and exploring forbidden freeze-in independently. We additionally explore a different mass region than [14], which leads to a distinct phenomenology for the portal particle.
The paper has the following structure. In Sec. 2 we briefly review the calculation of the thermal mass of a scalar boson and proceed to describe in detail the mechanism of "forbidden freeze-in" through thermal-mass effects. In Sec. 3 we consider as an example an explicit Higgs-portal model in which the scenario can naturally be realised, and briefly examine various criteria to ensure its consistent implementation. We then proceed to a full numerical study of the predicted relic density and describe various aspects of our scans and results, as well as the effect of applying relevant astrophysical and collider constraints.
2 Freeze-in with a thermally induced mass

2.1 Thermal mass in the early Universe

As mentioned above, in this article we study the freeze-in production of DM via mediator decays that are energetically allowed solely in a thermal bath. We expect this to occur in general, since frozen-in DM is usually assumed to be produced by particle species which are in thermal equilibrium with the SM plasma, and which should therefore develop a thermal mass correction [29-31] in the early Universe, similarly to the SM particles [32]. Moreover, it is this effective mass that allows "forbidden" decays to occur, as is the case for instance for plasmons (thermally-dressed photons in a medium) that can decay to neutrinos [33].
Generally, at the high temperatures applicable to the early Universe the thermal mass of a particle is proportional to the temperature. As this effect will be critical in realizing our forbidden freeze-in scenario, below we briefly review the case of a scalar mediator field S. In general, a scalar field features a self-interaction term, which implies that it does not need to interact very strongly with the rest of the plasma in order to develop a sizeable thermal mass. In the following we assume a self-interaction term for S of the form
$$\mathcal{L} \supset -\frac{\lambda_S}{4!}\, S^4\,.$$
The self-energy diagram, shown in Fig. 1, can then be readily evaluated at a finite temperature T, leading to the self-energy term
$$\Pi_S = \frac{\lambda_S}{2}\, T \sum_n \int \frac{d^3 k}{(2\pi)^3}\, \frac{1}{\omega_n^2 + \omega_k^2}\,,$$
where $\Pi_S$ corresponds to the correction to the mass of S, i.e., $m_{S,T}^2 = m_S^2 + \Pi_S$, and we have denoted $\beta = T^{-1}$, $\omega_n = 2 n \pi \beta^{-1}$, and $\omega_k^2 = k^2 + m_S^2$. The sum over n is evaluated by a standard procedure: by transforming it to an integral over a complex quantity ω while introducing a function which has poles corresponding to $\omega_n$ and unit residue. One obtains a result in which the first term is the T = 0 one-loop correction to $m_S$, and the second one (denoted $\Pi_S^{(T)}$ henceforth) is the correction due to the finite temperature of the medium,
$$\Pi_S^{(T)} = \frac{\lambda_S}{2} \int \frac{d^3 k}{(2\pi)^3}\, \frac{f_B(\omega_k)}{\omega_k}\,,$$
with $f_B \equiv \left(e^{\omega_k \beta} - 1\right)^{-1}$ the Bose-Einstein phase-space distribution.
The appearance of the phase-space distribution function regulates this otherwise quadratically divergent integral, since it introduces a natural "cut-off" energy proportional to the temperature. The final result scales quadratically with temperature: $\Pi_S^{(T)} \sim T^2$. In the high-temperature limit we can therefore neglect the $m_S$ contribution to $\omega_k$ and arrive at
$$\Pi_S^{(T)} \simeq \frac{\lambda_S}{24}\, T^2\,.$$
In this limit, since the vacuum one-loop contribution is expected to be small compared to the tree-level one, we can neglect all T = 0 contributions and obtain an estimated form of the mass of S,
$$m_{S,T}^2 \simeq m_S^2 + \frac{\lambda_S}{24}\, T^2\,. \qquad (2.3)$$
It is well known, though, that naive perturbation theory does not work well when finite-temperature effects are included (for examples see [29-31]). This can be seen by calculating the thermal correction using $m_S^2 \to \frac{\lambda_S}{24} T^2$, i.e., by re-summing the so-called "daisy" diagrams, where one would expect to get a correction of order at least $\mathcal{O}(\lambda^2)$. However, this is not the case in finite-temperature calculations, since such diagrams induce corrections of order $\mathcal{O}(\lambda^{3/2})$, which may be important especially for larger values of the self-interaction coupling. We have explicitly checked that for $\lambda \lesssim 1$ this re-summation leads to at most a 20% variation in the thermal mass. We will thus use the approximate result eq. (2.3) throughout this paper.
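To make the interplay concrete, here is a minimal numerical sketch (not from the paper; the identification $\alpha^2 = \lambda_S/24$ follows from eq. (2.3), while all function names and input values are illustrative assumptions) of the thermally corrected mass and of the temperature at which a vacuum-forbidden decay $S \to \bar\chi\chi$ closes:

```python
# A minimal sketch of the thermal mass of eq. (2.3) and of the temperature
# below which the "forbidden" decay S -> chi chibar becomes closed again.
import numpy as np

def m_S_thermal(T, m_S, lam_S):
    """Thermally corrected mediator mass, m_{S,T}^2 = m_S^2 + lam_S/24 T^2."""
    return np.sqrt(m_S**2 + lam_S / 24.0 * T**2)

def T_closing(m_S, m_chi, lam_S):
    """Temperature at which m_{S,T} = 2 m_chi, i.e. where DM production stops.
    Returns None if the decay is already open in vacuum (m_S > 2 m_chi)."""
    if m_S >= 2.0 * m_chi:
        return None  # standard freeze-in: decay allowed even at T = 0
    return np.sqrt(24.0 * (4.0 * m_chi**2 - m_S**2) / lam_S)

# Example: a 10 GeV mediator and 50 GeV DM with a moderate self-coupling;
# the decay is forbidden in vacuum but open above T ~ 700 GeV.
print(T_closing(m_S=10.0, m_chi=50.0, lam_S=0.5))
```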
Freeze-in and mediator decay
We are interested in estimating the final relic density of a DM particle χ interacting extremely feebly with the Standard Model particles. The key assumption is that χ was never in thermal contact with the SM sector during the thermal history of the Universe, nor was it ever produced through some other means in the post-inflationary period, e.g., during reheating. Our assumed dominant dark matter production mechanism will be a suppressed decay of a bath particle S into a dark matter pair,
$$S \to \bar\chi\, \chi\,. \qquad (2.4)$$
More precisely, following the standard lore, we will assume the presence of a strongly suppressed decay channel with a small decay rate $\Gamma_\chi$ (i.e., small enough not to overproduce or thermalise the χs). Assuming a bosonic mediator and neglecting Pauli-blocking/Bose-Einstein enhancement factors, the Boltzmann equation governing the number density of dark matter particles in an expanding Universe is then (see, e.g., [34] for a complete recent treatment) given by
$$\dot n_\chi + 3 H n_\chi = 2 \int d\Pi_S\, d\Pi_\chi\, d\Pi_{\bar\chi}\, (2\pi)^4\, \delta^{(4)}(p_S - p_\chi - p_{\bar\chi})\, |\mathcal{M}|^2\, f_S\,,$$
where $\mathcal{M}$ is the amplitude (summed over all internal degrees of freedom, "idof") for the decay process (2.4), and the integration is over the standard phase space, $d\Pi_S = \frac{d^3 p_S}{(2\pi)^3\, 2 E_S}$, and similarly for $d\Pi_\chi$ and $d\Pi_{\bar\chi}$. Without loss of generality regarding the operator generating the decay $S \to \bar\chi\chi$, we can rewrite the squared amplitude in terms of the decay rate $\Gamma_\chi$ as
$$\sum_{\rm idof} |\mathcal{M}|^2 = (2 J_S + 1)\, \frac{16\pi\, m_S^3\, \Gamma_\chi}{\lambda^{1/2}(m_S^2, m_\chi^2, m_\chi^2)}\,,$$
where λ is the usual Källén (triangle) function and $J_S$ is the spin of the mediator. It can then be shown (see, e.g., [34]) that under some general assumptions (i.e., a negligible initial number of DM particles, entropy conservation, and Maxwell-Boltzmann distributions for the plasma) the evolution of the DM yield ($Y_{DM} = \frac{n_\chi + n_{\bar\chi}}{s}$) is given by
$$\frac{dY_{DM}}{dx} = \frac{(2 J_S + 1)\, \Gamma_\chi\, m_S^2\, T\, K_1(m_S/T)}{\pi^2\, s(T)\, H(T)\, x}\,, \qquad (2.7)$$
where h (g) are the relativistic degrees of freedom associated with the entropy (energy) density, and $K_1(x)$ is the modified Bessel function of the second kind. Defining $x \equiv m_S/T$ and focusing for simplicity on $J_S = 0$, the evolution of the yield becomes
$$\frac{dY_{DM}}{dx} = \frac{90}{4\pi^4\, (1.66)}\, \frac{\Gamma_\chi\, M_{\rm Pl}}{m_S^2}\, \frac{x^3\, K_1(x)}{h \sqrt{g}}\,. \qquad (2.11)$$
An important comment at this point is that, while in the standard freeze-in case $\Gamma_\chi$ can be considered a number which factors out of the x dependence, this is not the case for forbidden freeze-in, where the presence of a thermal mass $m_S(T)$ needs to be accounted for. Let us first review in the rest of this section the standard freeze-in case, where the thermal dependence of the mass can be neglected.
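As a cross-check of the scaling in eq. (2.11), here is a short numerical sketch (assuming the reconstructed prefactor above and constant degrees of freedom; all names and input values are illustrative, and this is not the authors' code). It uses the closed-form integral $\int_0^\infty x^3 K_1(x)\, dx = 3\pi/2$ implicitly, via numerical quadrature:

```python
# Sketch of the standard freeze-in yield from mediator decays, eq. (2.11).
import numpy as np
from scipy.special import k1
from scipy.integrate import quad

M_PL = 1.22e19  # Planck mass in GeV

def yield_today(gamma_chi, m_S, g_star=100.0, h_star=100.0):
    """Final DM yield for vacuum-mass freeze-in from S -> chi chibar."""
    pref = 90.0 / (4.0 * np.pi**4 * 1.66) * gamma_chi * M_PL / m_S**2
    integral, _ = quad(lambda x: x**3 * k1(x), 0.0, 50.0)  # ~ 3*pi/2
    return pref * integral / (h_star * np.sqrt(g_star))

# Omega h^2 ~ 2.74e8 * (m_chi/GeV) * Y_0 (standard entropy/critical-density
# conversion); illustrative inputs chosen to land near the observed value.
m_S, m_chi, gamma = 100.0, 1.0, 1e-21  # GeV, GeV, GeV
print("Omega h^2 ~", 2.74e8 * m_chi * yield_today(gamma, m_S))  # ~0.2
```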
Assuming that $m_S > 2 m_\chi$ and slowly varying relativistic degrees of freedom (which is the case for $T \gtrsim 1$ GeV), we can calculate the yield today ($Y_{DM,0}$) by integrating from the reheating temperature ($T_R \gg m_S$, $x \to 0$) down until today ($T_0 \ll m_S$, $x \to \infty$). We then obtain the relic abundance in the form
$$\Omega h^2 \simeq 2.2 \times 10^{27}\; \frac{m_\chi\, \Gamma_\chi}{h \sqrt{g}\; m_S^2}\bigg|_{\bar x}\,, \qquad (2.12)$$
with masses and width expressed in GeV, where we evaluate g and h at the "mean" value $\bar x$ of x during the DM production. In Fig. 2a we show $Y_{DM}/Y_{DM,0}$ as a function of x for various values of $m_S$, where we see that the production of DM essentially stops at the freeze-in temperature $T_{FI} \sim m_S/7$. That is, since typically S decouples (freezes out) at a temperature $T_{FO} \approx m_S/20$, the calculation holds. However, if S decouples earlier than expected, the relic abundance of χ can be considerably smaller (if S decays rapidly to SM particles) or larger (if S decays predominantly to DM particles). In both cases the coupled system of Boltzmann equations describing the evolution of both S and χ has to be solved.
A different behavior is expected, however, when DM particles are produced via non-renormalizable operators, since the corresponding production rate increases with the temperature [5,36]. As an example, consider DM production via a 2 → 2 process which occurs due to a dimension-d operator. At high temperatures all masses should be irrelevant, so the matrix element squared for the process can be written as a function of the center-of-mass energy $\sqrt{\hat s}$ as
$$|\mathcal{M}|^2 \propto \left(\frac{\hat s}{\Lambda^2}\right)^{n}\,, \qquad n \equiv d - 4\,,$$
which (assuming constant g and h) can be integrated to give
$$Y_{DM,0} \propto \frac{M_{\rm Pl}\, T_R^{2n-1}}{\Lambda^{2n}}\,, \qquad (2.13)$$
where it is apparent that the high-temperature contributions dominate for n > 0 (i.e., d > 4). In the case of d ≤ 4, we expect DM production to be dominated at low temperatures (around $m_S$, as noted previously). Thus, these features should be treated case by case, since the masses of the particles play an important role, and so the actual structure of the matrix element is needed.
Large thermal mass and forbidden freeze-in
Let us now turn to the case with a large thermal mass. In order to determine its effect on the freeze-in mechanism we shall assume for concreteness that the scalar mediator mass takes the form
$$m_{S,T}^2 = m_S^2 + \alpha^2\, T^2\,, \qquad (2.14)$$
where, for the quartic self-interaction of Sec. 2.1, one has $\alpha^2 = \lambda_S/24$. An important consequence of eq. (2.14) is that the decay $S \to \bar\chi\chi$ can become kinematically allowed at large temperatures even if $m_S < 2 m_\chi$. This feature will determine the forbidden freeze-in regime. Before discussing this regime it is worthwhile to note that in some models dark matter production at early times is dominated not by decays but by 2 ↔ 2 processes. In such cases the thermal effects typically introduce only a correction, the significance of which is very model-dependent. We will explicitly address the role of 2 ↔ 2 production processes for the Higgs portal model in Sec. 3, while for the following general discussion we restrict ourselves to the cases where they are subdominant.
As an example, let us consider the case $m_S = 0$, i.e., when $m_{S,T} = \alpha T$. Assuming that the temperature is large enough so that at some early time $m_{S,T} > 2 m_\chi$ is satisfied, our aim is to solve in this case the Boltzmann equation (2.11). Defining $z \equiv 2 m_\chi/(\alpha T)$, so that the decay is kinematically open for z < 1, we observe two very different types of behavior depending on the dimension d of the operator that mediates the decay of S. In the case when d > 4, the right-hand side of eq. (2.16) increases with temperature and is therefore dominant at high temperatures, close to the reheating temperature $T_R$. Thermal effects in this case only provide a modification to the standard freeze-in through higher-dimensional operators, as is the case for gravitino or axino DM produced in scatterings of particles in the thermal plasma [37]. On the other hand, when d ≤ 4 most of the production takes place at temperatures around the dark matter mass. Indeed, DM production in this case increases at low temperature but stops when the decay becomes kinematically forbidden at $\alpha T = 2 m_\chi$. The production is thus dominated by temperatures close to $m_\chi$ (or higher for small α).
In the following we will study both cases in more detail, to obtain a closed approximate form for the final dark matter abundance where possible.
Higher-dimensional case
Let us first assume that d > 4, in which case at high temperatures the thermal mass of S dominates and we can write its decay rate in the form
$$\Gamma_\chi = \gamma_{S\chi}\, m_{S,T} \left(\frac{m_{S,T}}{\Lambda}\right)^{2n}\,, \qquad (2.17)$$
where again n = d − 4, Λ is the scale suppressing the higher-dimensional operator, and $\gamma_{S\chi}$ is a dimensionless factor that depends on the nature of this operator. In the high-temperature regime where the approximation (2.17) is justified, the abundance equation, eq. (2.18), can be straightforwardly integrated between $z = z_R$ and z = 1 (above which the decays are kinematically forbidden), yielding eq. (2.19). It is clear that for d > 4 the dominant contribution comes from the regime of high temperatures ($z_R \to 0$). An important consequence of the thermal effects included here is that two-body decays can significantly alter the predictions of the scenario mentioned before (which was akin to the so-called ultraviolet freeze-in scenario advocated, e.g., in [36]). That is, even if the decays $S \to \bar\chi\chi$ are allowed in the vacuum, the appearance of the thermal mass of S still plays a dominant role at high enough reheating temperature, since in this case DM production is most efficient at high temperatures. Furthermore, comparing eq. (2.13) with eq. (2.19), we can see that, since α < 1, the latter tends to be generally less efficient. Therefore, we conclude that DM production via the forbidden freeze-in in general requires larger couplings in order to reproduce the observed relic abundance.
Four- or three-dimensional case
In the four- (or three-) dimensional case, most of the production is expected to take place at low temperatures, as can be seen from eq. (2.19), where the contribution from $z = z_R$ drops out (unless α is so small that the production happens close to the reheating temperature). More precisely, it takes place around the time when the decay $S \to \bar\chi\chi$ stops. Thus, we expect the production to be dominated at the time scale corresponding to the temperature at which $m_{S,T} \sim 2 m_\chi$. This actually implies that, up to an order-one function, the decay rate satisfies $\Gamma_\chi \propto m_\chi$. While it is therefore not possible to fully simplify the decay rate without specifying the details of the interaction, we can straightforwardly observe from eq. (2.18) that the abundance will be proportional to $1/m_\chi$, thus implying that, up to order-one corrections, the final relic density will be independent of the dark matter mass, as mentioned before.
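The scaling can be made explicit with a short back-of-the-envelope derivation (a sketch with order-one factors dropped, assuming $m_{S,T} \simeq \alpha T$ and a Yukawa-type width $\Gamma_\chi \propto y_\chi^2\, m_{S,T}$ of the kind introduced just below):

```latex
% Sketch of the d <= 4 scaling argument (order-one factors dropped).
% With m_{S,T} = \alpha T the ratio x = m_{S,T}/T = \alpha is constant, so
% the production rate density scales as
%   R \sim \Gamma_\chi\, m_{S,T}^2\, T \sim y_\chi^2\, \alpha^3\, T^4 ,
% and the yield accumulates as
%   dY/dT \sim R/(s\,H\,T) \sim y_\chi^2\, \alpha^3\, M_{\rm Pl}/T^2 .
% Integrating down to T_{\rm stop} = 2 m_\chi/\alpha, where the decay closes:
\begin{equation}
  Y_{DM,0} \sim \frac{y_\chi^2\, \alpha^3\, M_{\rm Pl}}{T_{\rm stop}}
            = \frac{y_\chi^2\, \alpha^4\, M_{\rm Pl}}{2\, m_\chi}\,,
  \qquad
  \Omega h^2 \propto m_\chi\, Y_{DM,0} \propto y_\chi^2\, \alpha^4\,,
\end{equation}
% i.e. the relic density is independent of m_\chi up to order-one
% corrections, consistent with eq. (2.23) and with Fig. 3b.
```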
As an example, and in order to obtain a closed form for the final relic density, let us assume that S is a scalar field, the dark matter candidate χ is a Dirac fermion, and the Lagrangian contains a Yukawa interaction between S and χ,
$$\mathcal{L} \supset -y_\chi\, S\, \bar\chi\, \chi\,. \qquad (2.20)$$
The decay width of the bath particle S to dark matter is then given by
$$\Gamma_\chi = \frac{y_\chi^2\, m_{S,T}}{8\pi} \left(1 - \frac{4 m_\chi^2}{m_{S,T}^2}\right)^{3/2}\,. \qquad (2.21)$$
The evolution of the yield then follows from inserting eq. (2.21) into the Boltzmann equation, giving eq. (2.22). In Fig. 2b we show the evolution $Y_{DM}/Y_{DM,0}$ of the number of DM particles as a function of z for the thermal-mass case. It is similar to the standard case, apart from the point when the production stops, i.e., at $m_{S,T} = 2 m_\chi$. Assuming that the relativistic degrees of freedom do not vary rapidly during the production of the χs, we can integrate eq. (2.22) to obtain a final yield scaling as
$$Y_{DM,0} \propto \frac{y_\chi^2\, \alpha^4\, M_{\rm Pl}}{h \sqrt{g}\; m_\chi}\bigg|_{z_\star}\,, \qquad (2.23)$$
where again g and h are evaluated at $z_\star$, the value of z around which the production is dominated. As we pointed out earlier, the relic abundance of χ becomes (mostly) independent of its mass, with any $m_\chi$ dependence coming from $z_\star$. This, together with the suppression due to the $\alpha^4$ factor, results in relaxed constraints on the Yukawa coupling with respect to the standard freeze-in, where $\Omega h^2$ scales predominantly linearly with the DM mass. Notice furthermore that in the case where the temperature correction never dominates (i.e., $\alpha T < m_S$), the relic abundance is given by eq. (2.12) with the decay width (2.21), which is the standard freeze-in case, as expected.
Finally, let us conclude this section by presenting some numerical results for the case where both $m_S$ and the thermal correction play an important role as the temperature varies. In this case one has to calculate $Y_{DM,0}$ by including both mass terms; that is, the evolution of $Y_{DM}$ as in eq. (2.7) needs to be solved numerically, with $m_{S,T}$ given by eq. (2.14).
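A compact numerical sketch of this combined calculation follows (again not the authors' code: it reuses the reconstructed prefactor of eq. (2.11), assumes constant degrees of freedom, and simply replaces $m_S \to m_{S,T}$ of eq. (2.14), switching the decay off when $m_{S,T} < 2 m_\chi$; all input values are illustrative):

```python
# Sketch: freeze-in yield with a temperature-dependent mediator mass,
# m_{S,T}^2 = m_S^2 + alpha^2 T^2 (eq. 2.14), interpolating between the
# standard (alpha -> 0) and forbidden (m_S < 2 m_chi) regimes.
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

M_PL, G_STAR, H_STAR = 1.22e19, 100.0, 100.0  # GeV; assumed constant dofs

def gamma_chi(mst, m_chi, y_chi):
    """Width S -> chi chibar for scalar S and Dirac chi (eq. 2.21)."""
    if mst <= 2.0 * m_chi:
        return 0.0  # decay kinematically closed
    return y_chi**2 * mst / (8.0 * np.pi) * (1.0 - 4.0 * m_chi**2 / mst**2)**1.5

def dY_dlnT(lnT, m_S, m_chi, y_chi, alpha):
    """Yield accumulated per logarithmic temperature interval."""
    T = np.exp(lnT)
    mst = np.sqrt(m_S**2 + alpha**2 * T**2)
    x = mst / T
    pref = 90.0 / (4.0 * np.pi**4 * 1.66) * M_PL / (H_STAR * np.sqrt(G_STAR))
    return pref * gamma_chi(mst, m_chi, y_chi) / mst**2 * x**4 * k1(x)

def relic(m_S, m_chi, y_chi, alpha, T_R=1e12):
    Y0, _ = quad(dY_dlnT, np.log(1e-3 * m_chi), np.log(T_R),
                 args=(m_S, m_chi, y_chi, alpha), limit=300)
    return 2.74e8 * m_chi * Y0  # Omega h^2

# Forbidden regime: m_S < 2 m_chi, decay opened only by the thermal mass.
print(relic(m_S=10.0, m_chi=50.0, y_chi=4e-11, alpha=0.3))  # O(0.1)
```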
An example of the typical dependence of $\Omega h^2$ on $m_\chi$ for the production of DM via the decay of S is shown in Fig. 3a. The two extreme cases of α = 0 (standard freeze-in) and $m_S = 0$ (dominance of the thermal corrections to the mass) are shown by dashed blue and orange lines, respectively, while the exact numerical result is shown in solid grey. Notice that the transition between the two limits happens suddenly at $m_\chi \approx m_S/2$, which is where the blue line terminates, since $S \to \bar\chi\chi$ becomes forbidden in the vacuum.
In Fig. 3b we present the values of the Yukawa coupling $y_\chi$ as a function of α that give the observed $\Omega h^2$ for the scanned range of masses 10 MeV ≤ $m_S$, $m_\chi$ ≤ 1 TeV; hence overlapping regions between the two regimes may correspond to completely different values of the masses. We observe two distinct regimes: the region of standard freeze-in, where $m_S > 2 m_\chi$, is marked in blue, while the forbidden freeze-in region of $m_S < 2 m_\chi$ is marked in orange. The shape of the forbidden freeze-in band in Fig. 3b is a simple consequence of the $\alpha^2 y_\chi$ dependence of $Y_{DM,0}$ in eq. (2.23). As already noted in the d > 4 case, in the forbidden freeze-in regime one requires either a larger self-interaction α of the mediator, to generate a larger thermal mass, or a stronger coupling between DM and the mediator, since the DM production is not as efficient as in the standard case (as also shown in Fig. 3a). An important comment is that the transition between the two regimes, which happens for $m_S \sim 2 m_\chi$, typically occurs in a mass range of order $(2 m_\chi - m_S) \sim \alpha m_S$, which becomes very narrow for small α. (This corresponds to the case where the mass difference preventing the decay of S into two DM particles is of the same order as the thermal contribution to $m_S$ at the typical scale $T \sim m_S$; in particular, in Fig. 3b the forbidden region shown in orange does not probe this tuned transition regime in detail for small α.)

3 Forbidden freeze-in and the Higgs portal

In this section we explore an explicit realisation of the general mechanism described above. We focus on a Higgs portal model, which is an archetype for a wide class of DM models where the dark sector is connected to the visible sector by a scalar mediator mixing with the SM Higgs boson.
3.1 The model
We introduce a real scalar "dark Higgs" boson field S, which is not protected by a $Z_2$ symmetry and hence can decay into Standard Model fields through its mixing with the SM Higgs boson. The dark matter candidate is taken to be a Dirac fermion χ that couples to the dark Higgs boson through a small Yukawa coupling $y_\chi$. The corresponding part of the Lagrangian thus reads
$$\mathcal{L} \supset -y_\chi\, S\, \bar\chi\, \chi - V(H, S)\,,$$
with the dark Higgs boson potential term defined as
$$V(H, S) = \frac{\mu_S^2}{2}\, S^2 + \frac{\lambda_S}{4!}\, S^4 + \lambda_{HS}\, |H|^2 S^2 + A\, |H|^2 S\,,$$
where H denotes the Standard Model Higgs boson doublet, and the total scalar potential is the sum of this term and the usual SM Higgs potential. Note that several other operators can be written within our symmetries, including a trilinear coupling $S^3$ and separate Yukawa couplings to the left and right components of the dark matter fermion; we neglect the trilinear term in the following and enforce an exact χ-number global symmetry to fix the latter to zero.
At low temperatures ($T \lesssim 160$ GeV), both the Higgs and dark Higgs fields develop a non-zero vacuum expectation value (VEV), so that $H = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ h + v \end{pmatrix}$ and $S \to v_S + S$. (Since S plays a crucial role in the production of DM both before and after the EW phase transition, we simply denote the VEV-shifted dark Higgs boson as S in order to avoid changing the notation when dealing with different temperature regimes.) In the limit where $A \ll v$ the calculation simplifies significantly and the minimization conditions for the scalar potential in terms of $\lambda_H$ and $v_S$ can be easily obtained, eq. (3.3). Furthermore, we can rotate the scalars to their mass-eigenstate basis, i.e. $(h, S) \to R\, (h, S)$, where R is a rotation matrix parametrised by the small angle θ, expressed through the T = 0 masses of h and S. The branching ratio of the Higgs boson decay to invisible particles is constrained to be smaller than 0.19 [38], which translates to $\lambda_{HS} \lesssim 10^{-2}$. Furthermore, note that while we have supposed that a trilinear term $\lambda_3 S^3$ was negligible in our original Lagrangian, the shift by $v_S$ re-introduces such a term as $\frac{\lambda_S}{3!}\, v_S\, S^3$. For consistency, we will therefore further require that this contribution is negligible with respect to $\mu_S$, leading to the condition (3.6). Notice that this also automatically ensures that the shift in the SM Higgs boson quartic coupling $\lambda_H$ is negligible in eq. (3.3). An interesting feature is that the dark Higgs boson is extremely long-lived at low mass: when only its decays into a lepton pair are kinematically allowed, and assuming $\mu_S \sim m_S$, we obtain the lifetime given in eq. (3.7). The relative importance of the various production and decay processes is to a large extent determined by the hierarchy of the couplings $y_\chi$ and $\lambda_{HS}$, as well as by the mixing angle θ.
As we will see in the next section, such long lifetimes are severely constrained by astrophysical and beam-dump limits. For simplicity, we will therefore typically restrict ourselves to $m_S > 100$ MeV in the following. The relevant processes determining the evolution of the number densities of S and χ in this model are: i) the direct mediator decay $S \to \bar\chi\chi$, ii) the mediator decay to SM particles due to its mixing with the SM Higgs boson, and iii) the annihilation of S to SM particles, as well as all the inverse reactions. The Feynman diagrams for these processes are given in Fig. 4. The direct $S \to \bar\chi\chi$ decay width is given by eq. (2.21) and is suppressed by the very small Yukawa coupling $y_\chi$. The decay of S to SM particles is given by $\Gamma(S \to {\rm SM}) = \theta^2\, \Gamma_{h \to {\rm SM}}(m_{S,T})$, where $\Gamma_{h \to {\rm SM}}(m_{S,T})$ is the total width of a SM-like Higgs boson with mass $m_{S,T}$. We implement it using results taken from [39-41] and a direct evaluation of leptonic decays at low masses.
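For orientation, below is a sketch of the leptonic width inherited from the SM Higgs boson through mixing (the standard $h \to \ell^+\ell^-$ expression rescaled by $\theta^2$; the precise normalization used in eq. (3.7) should be taken from the original work):

```latex
% Leptonic width of the dark Higgs boson via mixing (a standard-form sketch):
\begin{equation}
  \Gamma(S \to \ell^+ \ell^-) \simeq
  \theta^2\, \frac{m_\ell^2\, m_S}{8\pi\, v^2}
  \left(1 - \frac{4 m_\ell^2}{m_S^2}\right)^{3/2},
  \qquad
  \tau_S = \frac{\hbar}{\Gamma_S}\,, \quad
  \hbar \simeq 6.58 \times 10^{-25}\ {\rm GeV\, s}\,.
\end{equation}
% For m_S just above threshold and a tiny mixing angle this easily gives
% macroscopic lifetimes, which is why astrophysical, BBN, and beam-dump
% limits are the relevant probes at low mass.
```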
The S annihilation cross section to SM particles, a function of the Mandelstam variable s, can be of the order of the standard WIMP annihilation cross section, or smaller. This is because S is unstable, and therefore its number density right after freeze-out can be much larger than for a standard WIMP. We will assume that either $\lambda_{HS}$ or the mixing angle θ is large enough to ensure that S was in equilibrium at very early times (see the discussion in the next section).
Apart from the processes shown in Fig. 4, additional 2 ↔ 2 processes can in principle play a role in the production of χs and/or their early-time thermalization with the SM plasma. These are: $SS \leftrightarrow \bar\chi\chi$, $hh \leftrightarrow \bar\chi\chi$, and the co-annihilation process $Sh \leftrightarrow \bar\chi\chi$. The first one has s-, t- and u-channel contributions, proportional to $\theta^2 \lambda_{HS}^2 y_\chi^2$ and $y_\chi^4$, respectively. The second and third have only s-channel diagrams, proportional to $A^2 y_\chi^2$ and $\lambda_{HS}^2 y_\chi^2$, respectively. It is clear that all of these 2 ↔ 2 processes are strongly suppressed with respect to direct S decays, due to the additional coupling and phase-space suppression of the 2 → 2 channels. However, in the deeply forbidden regime (i.e., for very small $\lambda_S$), when the decay is kinematically allowed only at very high temperatures, all the aforementioned channels could in principle play some role in the evolution of χ. In light of this, we have implemented all of the above processes in the numerical approach presented in the next section and checked explicitly that, for the parameter ranges covered by our scan, these processes can indeed be safely neglected when solving the evolution equations for the S and χ number densities.
Relic density and numerical study
In light of the above discussion, the coupled computation of the freeze-out of S and the freeze-in of χ is performed under the assumptions that: i) χ had a negligible abundance after reheating and had not reached chemical equilibrium; ii) S was in chemical equilibrium at early times and remained in kinetic equilibrium at all temperatures relevant for the production of χs. In practice, the assumption made in the numerical code is that the above conditions are satisfied up to x = 0.1, where we define x ≡ $m_S/T$. For x < 0.1 it is assumed that S traces its equilibrium value, while the evolution of χ is given by eq. (2.11), starting from the reheating temperature $T_R$, assumed to be given by $x_R = 10^{-9}$. We checked explicitly that assuming a different $T_R$ does not change the result. For x > 0.1 the coupled system of Boltzmann equations for the number densities of S and χ is solved numerically, including all the relevant processes discussed above. (Kinetic equilibrium is an extremely good assumption in the parameter space studied in this work since, away from the Higgs boson resonance, elastic scatterings of S off particles of the SM plasma are much more frequent than annihilations of S. In a different model where this assumption is violated, one would also be required to solve for the temperature of S, or even its full phase-space density, see [42]. This would also bring additional complications to the forbidden freeze-in case, as the thermal mass of S would need to be computed out of equilibrium. In fact, even if S is still in kinetic equilibrium (with the SM plasma or with itself) but already chemically frozen out, the thermal mass would not be given by eq. (2.3). However, this caveat has no implications for our results, since in the studied model the forbidden freeze-in happens at large enough temperatures, where S is still in equilibrium. The coupled treatment ensures that the χ production from S decays takes into account possible deviations from chemical equilibrium of S. As stated before, this does not affect the forbidden freeze-in regime in our model, but it does affect part of the parameter space of the standard freeze-in. For a discussion and explicit forms of suitable Boltzmann equations see, e.g., [34].)
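A minimal sketch of the switching strategy described above follows (not the authors' BayesFITS/Mathematica code; the collision terms keep only the decay and inverse-decay contributions as structural placeholders, and all input values are illustrative):

```python
# Schematic structure of the coupled freeze-out/freeze-in computation:
# for x > 0.1, evolve (Y_S, Y_chi) together; decay terms only, as an example.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

M_PL, G_STAR, H_STAR = 1.22e19, 100.0, 100.0  # GeV; assumed constant dofs

def Y_eq(x, g_dof=1.0):
    """Equilibrium yield of S (Maxwell-Boltzmann), Y_eq = n_eq/s."""
    return 45.0 * g_dof / (4.0 * np.pi**4 * H_STAR) * x**2 * kn(2, x)

def hubble(T):
    return 1.66 * np.sqrt(G_STAR) * T**2 / M_PL

def rhs(x, Y, m_S, gamma_chi, gamma_sm):
    """dY/dx for (Y_S, Y_chi); <Gamma> = Gamma K1(x)/K2(x) (time dilation)."""
    Y_S, Y_chi = Y
    T = m_S / x
    avg = kn(1, x) / kn(2, x)
    dY_S = (-(gamma_chi + gamma_sm) * avg * Y_S
            + gamma_sm * avg * Y_eq(x)) / (hubble(T) * x)  # inverse SM decays
    dY_chi = 2.0 * gamma_chi * avg * Y_S / (hubble(T) * x)  # 2 chi per decay
    return [dY_S, dY_chi]

m_S = 100.0  # GeV
sol = solve_ivp(rhs, [0.1, 100.0], [Y_eq(0.1), 0.0],
                args=(m_S, 1e-22, 1e-15), method="LSODA", rtol=1e-8)
print("Y_chi today ~", sol.y[1, -1])
```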
Within this setup there are several possible regimes leading to the correct DM abundance. In the following we first show some representative examples of the evolution of the yields of S and χ for different regimes and then present and discuss the results of our scan of the parameter space of the model.
Evolution of number densities
In Figures 5-7 we present the yields of S and χ for some characteristic cases. In all following figures the green dashed lines correspond to Y S while the solid lines to Y DM with the blue color indicating standard (non-forbidden) regimes and the beige one forbidden regimes. For completeness, the light gray area highlights the evolution of the yields during the time before the electroweak phase transition (EWPT). In all the plots the different shadings of the lines correspond to the variation of the most relevant parameter for a given regime, as indicated in the figures.
The simplest case is the usual freeze-in, where $m_S > 2 m_\chi$ and $Y_{DM}$ gradually grows, with most of the production happening around $T \sim m_S$. This is shown in Fig. 5a. In this case the final relic abundance of χ is insensitive to variations in the self-coupling $\lambda_S$, due to the fact that the thermal effects are important only for $T \gg m_S$, which is a very short (in real time) period. Thus, the thermal mass of S has a very small impact on the result in the standard freeze-in regime, as expected. Additionally, note that the equilibrium number density of S is also affected only at early times by the thermal corrections, as they shift the value of $m_{S,T}$.
In Fig. 5b we show a typical case of forbidden freeze-in, where the opposite behaviour can be seen. The production is active only at small x, and is both stronger and terminates later for larger values of $\lambda_S$. In this forbidden regime the final DM abundance is therefore very sensitive not only to the value of $y_\chi$ but also to the self-coupling of the mediator. Another point worth stressing is that one does not need large values of $\lambda_S$ to get a sizable effect, so the opening of the forbidden decay due to thermal effects is in fact a generic feature of the freeze-in mechanism. Figure 6a shows a case of a transition between the standard and the forbidden regimes. For fixed $m_S = 100$ GeV we vary $m_\chi$ and see that, as expected, around the transition the result is very sensitive to the precise value of the DM mass. In the forbidden regime, increasing $m_\chi$ further leads to only a very mild change in the relic abundance, i.e., the yield $Y_{DM}$ is inversely proportional to $m_\chi$, in agreement with eq. (2.23). This approximate DM-mass independence of the relic density is a distinct feature of the forbidden freeze-in scenario.
In Fig. 6b a slightly different mechanism is shown. It occurs when nominally this would be a standard freeze-in case with $m_S > 2 m_\chi$ but, due to the EWPT and its effect on the mass of S (which arises when the SM Higgs boson gets its VEV, through the mixing quartic coupling $\lambda_{HS}$), there appears a temporary regime where $S \to \bar\chi\chi$ is not allowed and the χ production is blocked for a while. However, if the self-coupling $\lambda_S$ is large enough, the thermal mass overcomes the suppression due to the EWPT and re-opens the decay. This is an example of a situation where the thermal mass has a large impact on the relic abundance even in the standard freeze-in regime of $m_S > 2 m_\chi$. A scenario like this is close to what was studied, in a more general context, in ref. [14].
Finally, in Fig. 7 we show for completeness examples of cases where the χ production is dominated by the late-time decay of S. These cases are not directly related to the main focus of this work, but are present in some regions of the parameter space when we scan the full model and are therefore important in their own right. In these cases the complete evolution of both S and χ is crucial. In Fig. 7a the final DM abundance is determined by the branching fractions of the S decays to χ and to SM particles, which in the plot are parametrised by the value of the trilinear coupling A. For smaller values (corresponding to a weaker mixing with the SM Higgs boson), DM particles constitute a larger fraction of the S-decay products. Figure 7b shows a situation where the details of the freeze-out of S strongly affect its abundance, which is then transferred to the χs via (rare) decays. This also shows the potential impact that the choice of $\lambda_{HS}$ can have on the final relic abundance of DM. Note that in this plot different lines correspond to a different relation between x and T, due to the electroweak-symmetry-breaking contribution to $m_S$, which depends on $\lambda_{HS}$. Around the EWPT, the T-dependence of the VEV causes a temporary regime where in the standard case the decay $S \to \bar\chi\chi$ is forbidden and χ production is blocked; if the self-coupling $\lambda_S$ is large enough, the thermal mass overcomes the suppression of $m_{S,T}$ due to the EWPT and re-opens the decay.
Scan setup and results
A numerical scan of the model parameter space has been conducted using MultiNest [43] to direct the scan towards values of the relic density within 2σ of the standard result from the Planck Collaboration [44], $\Omega h^2 = 0.1198 \pm 0.0012$, which we set as the allowed range. The private code BayesFITS, automatically created using routines from SARAH [45-47], is used to interface the scan with the Mathematica code implementing the approach discussed above, which we use to evaluate the relic density. The details of the parameter ranges are given in Table 1.
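For illustration, a minimal sketch of the acceptance criterion used to steer such a scan (a Gaussian likelihood around the Planck value quoted above; names are illustrative, and this is not the BayesFITS code):

```python
# Sketch of a relic-density likelihood used to steer a nested-sampling scan
# toward points with Omega h^2 within 2 sigma of the Planck value.
OMEGA_H2, SIGMA = 0.1198, 0.0012  # Planck value used in the text

def log_like(omega_h2_pred):
    """Gaussian log-likelihood in the relic density."""
    return -0.5 * ((omega_h2_pred - OMEGA_H2) / SIGMA) ** 2

def within_2sigma(omega_h2_pred):
    """The allowed-range cut applied to scan points."""
    return abs(omega_h2_pred - OMEGA_H2) < 2.0 * SIGMA

print(within_2sigma(0.121), log_like(0.121))
```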
In Fig. 8 we show the points in the scan that satisfy the DM relic density constraint. As before, the blue colour indicates the standard freeze-in regime and the beige one the forbidden regime. It is apparent that these two regimes exhibit very distinct patterns. In particular, as discussed in the previous section, the standard freeze-in is in most cases not sensitive to the value of the self-coupling $\lambda_S$. It also requires very low values of the Yukawa coupling; otherwise DM is overproduced. In contrast, the forbidden regime is highly sensitive to $\lambda_S$, as expected. Indeed, the smaller the self-coupling, and therefore the thermal mass, the earlier the production stops, and therefore the larger $y_\chi$ needs to be to obtain the correct relic abundance of DM. Nevertheless, it is a new, interesting regime that is generically present in our scans and additionally leads to a freeze-in DM interacting more strongly than in the usually studied scenarios. An important comment is that, while we explicitly enforce the consistency condition (3.6) for all the scan-based plots, our choice of parameters implies that most of our points with a low dark Higgs boson mass also exhibit a small quartic mixing $\lambda_{SH}$. This is a direct consequence of eq. (3.5).

Figure 7. Examples of yield evolution when the χ production is dominated by the late-time decay of S. (a) Dependence on the trilinear coupling A, which (for fixed $\lambda_{HS}$) governs the branching ratios of the S decays to χ and to SM particles; here the freeze-out of S proceeds as for a usual WIMP, with decoupling at x ∼ 20. (b) Dependence on the portal coupling $\lambda_{HS}$ (for fixed A); lowering $\lambda_{HS}$ leads to a smaller mass, due to the EWSB contribution, and also to an earlier freeze-out with larger $Y_S$, which then translates into a larger χ population. Note that in this plot the relation between x and time/temperature is different for different lines.

Table 1. Parameter ranges of the scan (columns: parameter, description).

Figure 9. Experimental limits for our model, for points satisfying the observed relic density at 95% CL, in the plane $m_S$-$\tau_S$, for $m_S < 2 m_\chi$ (orange) and $m_S > 2 m_\chi$ (blue).
Experimental limits
In dark Higgs models, dark matter particles are largely out of reach of current experiments due to their extremely small interactions with the visible sector. The mixing of the scalars h and S induces, however, interactions of S with the SM particles which are proportional to θ, hence mediating the decay of S to SM particles (if kinematically allowed). Since θ is suppressed by powers of $v_S/v$, the dark Higgs boson S is typically long-lived, as shown in eq. (3.7), particularly at low masses. In this case, bounds from colliders and fixed-target experiments [48] and, for longer lifetimes, from astrophysics [40] apply. Such limits have traditionally been very well studied. We summarise them below and in Figure 9, which indicates the most relevant ones for our setup. First of all, apart from enforcing the proper dark matter relic density, the astrophysical bounds can be divided into two main categories; they typically set an upper bound on the dark Higgs boson lifetime, or equivalently a lower limit on its mixing angle with the SM Higgs boson.
• Cooling rate of the supernova SN1987A. This limit uses the fact that the core of the supernova is a thermal environment with temperature $T_{SN} \sim 30$ MeV where dark Higgs bosons can be produced and, if sufficiently feebly coupled, escape the core and lead to a faster cooling of the supernova. Standard bounds for the dark Higgs boson [49] are derived from the requirement that the cooling rate from dark-sector particles does not exceed that from neutrinos [50-52].

• Bounds from enforcing a successful big bang nucleosynthesis. We use the recent bounds from [40], which are derived from the same Lagrangian as in Sec. 3.1.
In the lower mass range (below the π-meson mass threshold) the dominant bounds are derived by constraining the entropy injections from the $e^+e^-/\mu^+\mu^-$ decays of the dark Higgs boson. Once dark Higgs boson annihilation/decay into hadrons becomes accessible, more stringent bounds arise from preventing the neutron-to-proton ratio from differing significantly from 1/6-1/7 due to the p ↔ n meson-mediated interactions. Finally, for a heavy enough dark Higgs boson, direct baryon/anti-baryon production becomes the dominant decay channel of S; the subsequent anti-baryon annihilation with the ambient proton and neutron population further modifies the proton-to-neutron ratio. This limit dominates above the di-b-quark threshold. An important comment is that these limits depend on the dark Higgs boson abundance $Y_S$; however, given our restriction eq. (3.6), the dark Higgs boson abundance typically freezes out earlier than in [40], which implies that the relativistic abundance is maintained for larger masses. Altogether, modifying $\lambda_{HS}$ only changes the limits by an O(1) factor, as can be seen in [40]. This is a simple consequence of the fact that, in order to avoid a significant modification of the p/n ratio, one relies on ensuring that the dark Higgs boson decays before BBN. The limit then roughly depends on the exponentially suppressed initial abundance $Y_S \exp(-t_{p/n}/\tau_S)$, where $t_{p/n} \sim 2.6$ s is the freeze-out time of the proton-to-neutron ratio. The second class of constraints arises from colliders and beam-dump experiments, and typically sets a lower bound on the dark Higgs boson lifetime.
• Limits from dark Higgs boson production and decay. Based on the original ALP searches at CHARM [53], these limits have recently been updated with a better modelling of the dark Higgs boson lifetime in the challenging region of $m_S$ around 1 GeV [41]. Note that we have included the projected limits from SHiP at $2 \times 10^{20}$ protons on target [54] as a long-term prospect. Similarly, and as an example of limits from LHC-based experiments, we have included a projection for FASER phase 2 at the HL-LHC from [55]. Notice that these next-generation experiments have the potential to start probing the relevant parameter space.
• Precision physics in meson decays. In the lower mass range, the dominant limits arise from the meson decay $K^+ \to \pi^+ \nu \bar\nu$ studied in the E949 experiment [56]. Finally, for the heavier mass range, corresponding to intermediate masses around 1 GeV, the main constraints come from searches for visible decays of B mesons by the LHCb collaboration [57]. In both cases, we use the recast bounds from [41].
Note that, in the long term, several planned experiments have the potential to greatly improve the limits in this mass range [48]. LHC-based experiments such as FASER, MATHUSLA [58] or CODEX-b [59] are particularly interesting in that the decay of the Higgs boson mediated through the quartic mixing $\lambda_{HS}$ can significantly enhance the detection prospects, as they are not tied to the mixing angle per se but only to $\lambda_{SH}$ (hence to the invisible branching ratio of the SM Higgs boson). Saturating the limits from invisible Higgs decays then leads to orders-of-magnitude improvements, particularly in the case of MATHUSLA or CODEX-b [48].
Conclusion
In this article we studied the forbidden freeze-in regime. Building on a standard decay-mediated freeze-in scenario, we focused on the case where the decaying mediator field couples strongly enough to the SM thermal bath to develop a significant thermal mass at high temperature. This strongly modifies existing predictions, and in particular leads to a particularly interesting regime of forbidden freeze-in, where the decay into DM particles is kinematically forbidden in the vacuum but is allowed to proceed in the thermal bath. In Sec. 2, we described in some detail the effect of including a sizeable thermal mass of the mediator. Assuming that the main production channel of DM is the decays of a bath particle into a pair of DM particles, we showed that freeze-in can be dominant at both high and low temperatures, depending on the dimension of the operators that couple the DM to the bath particle. Although the d > 4 operators show high-temperature dominance of DM production, this is different from the standard freeze-in case at high temperatures since the dominance does not happen due to the kinematics of the production process, but due to the thermal mass of the bath particle. Comparing the forbidden with the standard case of high-temperature freeze-in, we showed that the forbidden freeze-in is generally less efficient, leading to a stronger coupling between the DM particle and the mediator. For the case of operators with d ≤ 4 we showed that the production is dominant at lower temperatures close to the DM mass. In this case the scale of DM production is insensitive to the scale of inflation and reheating, similarly to the case of standard "freeze-out". Furthermore, the relic abundance is ultimately almost insensitive to the DM mass and the coupling responsible for the DM production can take significantly larger values than in the standard freeze-in scenario.
As a concrete example we studied a scalar portal model where the DM (assumed to be a Dirac fermion) is coupled only to a scalar, which in turn is coupled to the SM Higgs boson field. In Sec. 3 we showed the effect that the scalar thermal mass has on the production of DM. We studied in detail the solution of the coupled Boltzmann equations for the DM particle and the mediator, and discussed the various possible types of evolution of the DM relic density. We also performed a scan of the parameter space of the model at hand and presented the region where the observed relic abundance can be obtained. Focusing on the same model, we discussed its experimental search prospects. Since the DM coupling to the SM particles is expected to be extremely suppressed (due to the small Yukawa coupling and the small mixing angle between the portal and Higgs boson fields), this model can mostly be probed by searching for a long-lived scalar mediator. We showed the impact of all the relevant bounds on the parameter space, including BBN, LHCb, CHARM, as well as astrophysical bounds on the presence of a light scalar field coupled to the Higgs boson. We also discussed the reach of upcoming experiments (SHiP and FASER) and showed what part of the parameter space they will be able to probe.
As we have already pointed out, the forbidden freeze-in regime is a general feature of the freeze-in mechanism. It greatly expands the parameter space in models where otherwise the DM could not be produced by the decays of a bath particle. Therefore, the analysis performed in this work not only provides new interesting viable regions of the Higgs portal model but may also bring some insight into how the forbidden freeze-in works in general. Our results also strongly suggest that it would be interesting to re-examine the dark matter abundance in other types of freeze-in models in order to uncover their respective forbidden freeze-in regimes.
Regulatory Requirements and Financial Inclusion in FinTech Companies
The general objective of the study was to establish the influence of regulatory requirements on financial inclusion in FinTech companies. Specifically, the study assessed the effects of customer protection and investor protection regulations on financial inclusion. The study adopted a descriptive research design. The target population was 38 FinTech companies, within which 435 managers were targeted. Stratified random sampling was used to select a sample of 218 respondents. Primary data was collected using questionnaires. Pearson correlation coefficients and multiple regression were used in data analysis to establish the relationships between the variables. The study found that customer protection and investor protection regulations had significant relationships with financial inclusion. The study concluded that customer protection and investor protection were key attributes of financial inclusion in FinTech companies, and recommended that stakeholder regulations receive special focus in the management of FinTech companies.
Globally, financial inclusion has been facilitated by treasury departments through expanding access to financial services for all individuals regardless of their status. This has been pursued by addressing the digital divide, fostering community development, building financial capability, and encouraging partnerships (Baker & Wurgler, 2015). Regionally, financial inclusion has been a key policy objective for very many countries. Government agencies have shown interest by playing an active role in facilitating financial inclusion, with specific focus on rural finance availability, consumer protection, and easing access to credit facilities for Small and Medium Enterprises (SMEs) (Sarma & Pias, 2012). In the East African region, financial inclusion continues to deepen. With the high rate of financial inclusion, the focus is now on how the private and public sectors can facilitate digital financial services and products (Spratt, 2015). Among the key catalysts for financial inclusion are technology, innovation, and creativity.
A large percentage of people across the globe do not use formal financial services. According to the World Bank (2020), approximately 42% of adult Kenyans had a financial account of some kind in 2011. According to the Global Findex database, the number had risen to 75%, including 63 percent of the poorest two-fifths (World Bank, 2017). Poor regulation is a major obstacle to financial inclusion. This is because regulatory changes are often needed to enable the successful adoption and adaptation of innovations in digital finance, encourage their use, and increase competition among their providers, so that these new technologies can benefit the poor. Maina (2015) noted that progress in improving financial inclusion has to be compatible with the traditional mandates of financial regulation and supervision, namely safeguarding the stability of the financial system and protecting consumers. On the same note, Kumar, Rama and Rupayan (2019) noted that maintaining high integrity is one of the key roles of financial regulation.
Many studies have been done on financial inclusion. Zwedu (2014) studied financial inclusion, regulation, and inclusive growth in Ethiopia, and found that despite huge progress in the last ten years, financial inclusion is still very low. Odongo (2018) researched financial regulations, financial literacy, and financial inclusion, focusing on insights from Kenya. The study revealed that agency banking regulations and financial literacy could improve formal financial access, while know-your-customer rules, capital and liquidity requirements, and macro-prudential regulations could harm financial inclusion. Though some of these studies have focused on financial regulation and financial inclusion, they have failed to establish how regulatory requirements influence financial inclusion. There is limited empirical evidence on regulatory requirements and financial inclusion in FinTech companies. Therefore, this study sought to fill this knowledge gap and establish the influence of regulatory requirements on financial inclusion. Specifically, the study sought to assess the influence of customer protection regulation and investor protection regulation on financial inclusion in FinTech companies.
Customer Protection Regulation and Financial Inclusion
Customers are important stakeholders that help establish the firm's reputation and identification. Competition, asymmetry of information, the occurrence of transaction costs, and inequality of consumer agreements constitute the theoretical basis of contemporary consumer policy (Hirshleifer, 2008). The relationship between a customer and a firm exists because of mutual expectations built on trust, good faith, and fair dealing in their interaction. Not only is this an ethical requirement, but it has been legally enforced in some states. Market orientation focuses on an understanding of customers' expressed and latent needs and the development of superior solutions to those needs. Such an approach seeks to elevate the interests of the customer over those of others (Ferrell, 2004). Ethically questionable actions toward consumers can be addressed in civil litigation and are punishable by law under regulation of the business environment.
Information is a key asset in financial management. As far as transactions between buyers and sellers are concerned, information is a powerful tool for the customer (Mujeri, 2015). Furthermore, providers of financial services try to scrutinize financial information on their customers, including their credit history, their modes of financial decision-making, and their level of market assessment (Atakli & Agbenyo, 2020). The knowledge gap between the customers and the suppliers of financial services becomes wider when the financial products offered are more sophisticated. As a result of competition, information imbalance can be reduced, since consumers prefer financial institutions that offer clearer information, while financial entities try to please their customers by offering the best services, which entails the provision of adequate information (Bharat, 2014). However, in situations where market forces fail to produce a high level of information disclosure, the provision of information to borrowers facilitates the financial literacy of customers and makes it easier for new customers to enter the market (Terfa, 2015). By demanding financial information, consumers facilitate transparency in the sector and in business transactions. Nanziri (2020) noted that in turbulent market conditions, where important disclosures are not fully attained, it is only regulation that informs and protects the financial market.
Borrowers of financial services have less information concerning financial transactions than the financial institutions that provide the services. This asymmetry can lead to the payment of high interest rates, inadequate knowledge of the available financial options, and inadequate means of redress (Karpowicz, 2014); consumer protection seeks to restore equality between lenders and borrowers (McClure, 2015). Information asymmetry is most pronounced where the products offered are sophisticated and the beneficiaries inexperienced. The push for financial inclusion through reaching the unbanked brings more borrowers into the financial market every year. Even though most financial institutions have incorporated practices to ensure their customers are well served, others have taken advantage of this information asymmetry to benefit themselves at the expense of borrowers, who may end up under-insured or over-indebted (Lewis & Lindley, 2015).
There are a variety of financial challenges in the business environment. Financial harm ranges from the loss of savings to unscrupulous actors entering the market for short-term gain, to over-indebtedness caused by very high prices and predatory lending (Peake, 2016). Hard-to-serve markets may warrant higher prices, but pricing often goes unchecked, and some lending institutions overcharge. In free as well as regulated markets, over-indebtedness results from predatory lending, which ultimately produces high default rates. Less frequently, unscrupulous actors steal collateral or customers' money, which can result in the emotional and physical abuse of clients (Dabla-Norris, Yan & Filiz, 2015). The journey to customer-centricity for financial service providers begins with understanding how access to financial services can add value to the lives of lower-income customers. Kilara and Rhyne (2014) noted that well-tailored services can help customers meet daily needs, achieve personal and business goals, and build resilience against vulnerability.
Ethical responsibilities to consumers have a strong foundation of legal protection. Customers normally evaluate the ethical practices of companies, environmental issues, levels of service quality, and other responsibility issues that affect the purchase and consumption of products. The regulatory framework is the foundation for broad and complex consumer protection (Goodhart, 1998). It needs to be implemented because of the weaker position of consumers in economic relations with professional entities. Janine and Monica (2018) posited that in the financial service industry, the consumer protection framework is evolving alongside increasingly complex products and a growing number of people using financial services. Despite rising awareness of customers' rights and numerous activities focused on customer protection, there are many indications that, even as financial inclusion expands, customer interests are still threatened. Thus, we propose the first hypothesis as follows; H1: Customer protection regulation has no significant relationship with financial inclusion.
Investor Protection Regulation and Financial Inclusion
Agency theory highlights the problems encountered by investors who cannot avoid the services of agents in managing their businesses. Jensen and Meckling (1976) developed the classical agency theory, which views managers as agents of the shareholders who are driven by the motive to maximize their own interests and fail to act in the best interests of the shareholders. Mamun, Yasser and Rahman (2013) postulated that over the last four decades agency theory has been predominantly used to explain relationships between management and ownership and has served as the bedrock for the development and application of corporate governance principles. Léon and Zins (2020) noted that where financial statements do not disclose the true position of a firm, investors benefit from regulation as a mitigation against risk. Protection of investors is essential since, in most countries, there is a high rate of expropriation of creditors and minority shareholders by controlling shareholders. Cherednychenko (2015) observed that, owing to exploitation by controlling shareholders, investors outside finance firms face the risk of losing their return on investment. Investor protection refers to the means through which external investors protect themselves from exploitation by the investors within finance firms (Lagarde, 2015).
Expropriation occurs in different forms. On some occasions, insiders simply steal the realized profits from the organization. Other forms of expropriation include selling the firm's securities, assets, or output to other firms the insiders own at below-market prices. These actions (transfer pricing and investor dilution) are technically legal, but they have the same effect as theft. Further instances of expropriation include diverting corporate opportunities away from the organization, improperly installing family members in top management positions, and overpaying executives for meetings and business travel (Giannetti & Zhao, 2016).
A key benefit of legal protection is that it makes the technology of expropriation less efficient. Where there is little or no protection of external investors, insiders can steal the firm's realized profits effectively and efficiently, and external investors would not finance such firms unless the firms had a very strong reputation (Wardhani, 2021). As investor protection mechanisms improve, some insiders resort to more wasteful forms of diversion, for instance channelling realized profits through intermediary firms they set up. Where protection mechanisms are effective, the only options left to insiders are to pay themselves excessive salaries, fill the management with family members, and adopt wasteful projects; at a certain point it becomes better simply to pay dividends. By making the diversion technology less efficient, legal protection diminishes the benefits to insiders and narrows their options to expropriate.
Empirical studies conclude that, without close monitoring by shareholders, managers adopt self-seeking behaviour and make value-destroying decisions (Ramaswamy & Veliyath, 2002). Moreover, changes in the business environment and the presence of self-seeking managers force shareholders and regulators to develop codes of conduct and ethical-responsibility requirements to control the behaviour of managers and managerial incentives, following the stipulations of agency theory (Mamun et al., 2013). Thus, we propose the second hypothesis as follows; H2: Investor protection regulation has no significant relationship with financial inclusion.
METHODS
This study adopted a descriptive research design to analyse the effect of regulatory requirements on financial inclusion. Data were collected from Fintech companies in Kenya. The target population comprised the three cadres of managerial positions in the Fintech companies: top-level, middle-level, and lower-level managers. Managers were chosen for the study because they are the custodians of customer and investor regulation policies and documents. From the human resources registries of the companies it was established that there were 435 managers; the target population was therefore 435 respondents. From this population, stratified random sampling was applied to each stratum of managerial levels in each participating company, yielding a sample size of 218 respondents.
The study utilized primary data collected using structured questionnaires. The questionnaires were first pretested through a pilot study to determine the suitability of the tool; the pretesting was done by administering the questionnaire to 13 respondents selected from the population through simple random sampling. Data were analysed using both descriptive and inferential statistics. Pearson correlation was used to establish the existence, nature, and strength of the relationships between customer protection regulation, investor protection regulation, and financial inclusion. Regression analysis was conducted to establish the magnitude of variation in financial inclusion caused by customer protection regulation and investor protection regulation, using the coefficient of determination. P-values were used to determine the significance of the relationships between customer protection regulation, investor protection regulation, and financial inclusion.
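The study's dataset is not reproduced here, but the analysis pipeline just described (Pearson correlation followed by simple linear regression) can be illustrated with a minimal sketch. The data below are synthetic stand-ins generated for demonstration, and the variable names are hypothetical rather than the study's actual measures.

```python
# Minimal sketch of the reported analysis pipeline: Pearson correlation,
# then simple OLS regression. Synthetic data; illustrative names only.
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 218  # sample size reported in the study

# Hypothetical composite scores (e.g., averaged Likert items).
customer_protection = rng.uniform(1, 5, n)
financial_inclusion = 1.3 + 0.57 * customer_protection + rng.normal(0, 0.8, n)

# Step 1: existence, nature, and strength of the relationship.
r, p = pearsonr(customer_protection, financial_inclusion)
print(f"Pearson r = {r:.3f}, p-value = {p:.4f}")

# Step 2: magnitude of variation explained (R^2) and fitted coefficients.
X = sm.add_constant(customer_protection)  # adds the intercept column
model = sm.OLS(financial_inclusion, X).fit()
print(f"R^2 = {model.rsquared:.3f}")
print(f"Intercept = {model.params[0]:.3f}, slope = {model.params[1]:.3f}")
# Hypothesis test: reject H0 of no relationship when the p-value < 0.05.
```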
Correlation of Customer Protection Regulation and Financial Inclusion
The first objective of the study sought to establish the relationship between customer protection regulation and financial inclusion. This objective was achieved through both correlation and regression analysis, testing hypothesis H1: Customer protection regulation has no significant relationship with financial inclusion.
From the findings, customer protection regulation and financial inclusion had a correlation coefficient of 0.814 and a significance value of 0.000. This implies that customer protection regulation has a strong positive correlation with financial inclusion. The relationship was also significant, because the p-value was below the selected significance level (p = 0.000 < 0.01).
Regression of Customer Protection Regulation on Financial Inclusion
Regression analysis was done to determine the influence of customer protection regulation on financial inclusion. The findings were presented in three tables: the model summary, which shows the amount of variation in financial inclusion attributable to a change in customer protection regulation; the ANOVA table, which indicates the level of significance; and the coefficients table, which shows the specific rates of change in the variables. Table 2 shows that the R² value was 0.376, an indication that 37.6% of the variation in financial inclusion in the Fintech companies can be explained by customer protection regulation. The remaining 62.4% is explained by other factors not considered in the study and the error term. The findings also suggest that customer protection regulation and financial inclusion are strongly and positively related, as indicated by the correlation coefficient (R) value of 0.613.
ANOVA was used to test whether the model was significant, at a 5% level of significance. The findings indicate that the significance of the model was 0.002, showing that the model was significant. The null hypothesis was therefore rejected and the alternative adopted, and the study concluded that customer protection regulation has a significant relationship with financial inclusion. From the coefficients table, the following regression equation was fitted: Financial Inclusion = 1.309 + 0.574 Customer Protection Regulation + Error Term. From this equation it is observed that when customer protection regulation is held at zero, financial inclusion stands at a constant value of 1.309. The findings indicate that a unit increase in customer protection regulation will cause an increase of 0.574 units in financial inclusion in the Fintech companies.
Influence of Investor Protection Regulation on Financial Inclusion
The second objective focused on establishing the influence of investor protection regulation on financial inclusion. This was achieved through testing hypothesis two as follows.
H2: Investor protection regulation has no significant relationship with financial inclusion.
This hypothesis was tested through two statistical analyses: correlation and regression.
Correlation of Investor Protection Regulation and Financial Inclusion
The findings show that investor protection regulation and financial inclusion had a correlation coefficient of 0.835 and a significance value of 0.000. This suggests that investor protection regulation has a strong positive correlation with financial inclusion. The relationship was also significant, because the p-value was below the selected significance level (p = 0.000 < 0.01).
Regression of Investor Protection Regulation on Financial Inclusion
Regression analysis was done to determine the influence of investor protection regulation on financial inclusion. The findings were presented in three tables: the model summary, the ANOVA table, and the coefficients table.
The findings indicate that the R² value was 0.299, an indication that 29.9% of the variation in financial inclusion in Fintech companies is explained by changes in investor protection regulation. The remaining 70.1% is explained by other factors not considered in the study and the error term. The findings also suggest that investor protection regulation and financial inclusion are strongly and positively related, as indicated by the correlation coefficient (R) value of 0.547.
ANOVA was used to test whether the model was significant, at a 5% level of significance. The significance of the model was 0.002, an indication that the model was significant since this was less than the significance level (0.05). The null hypothesis was therefore rejected and the alternative adopted: investor protection regulation has a significant influence on financial inclusion. From the coefficients table, the following regression equation was fitted: Financial Inclusion = 1.418 + 0.502 Investor Protection Regulation + Error Term. From this equation it is observed that when investor protection regulation is held at zero, financial inclusion stands at a constant value of 1.418. A unit increase in investor protection regulation causes an increase of 0.502 units in financial inclusion in the Fintech companies in Kenya.
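Read together, the two fitted models are simple linear predictors. The sketch below evaluates both reported equations at a few hypothetical regulation scores; the input values are illustrative and assume a composite (e.g., Likert-based) score scale, which the text above does not specify.

```python
# Illustrative use of the two regression equations reported above.
# Input scores are hypothetical; the measurement scale is assumed.
def predict_inclusion_customer(cpr: float) -> float:
    """Fitted model: Financial Inclusion = 1.309 + 0.574 * CPR."""
    return 1.309 + 0.574 * cpr

def predict_inclusion_investor(ipr: float) -> float:
    """Fitted model: Financial Inclusion = 1.418 + 0.502 * IPR."""
    return 1.418 + 0.502 * ipr

for score in (2.0, 3.0, 4.0):
    print(f"score={score}: "
          f"customer model -> {predict_inclusion_customer(score):.3f}, "
          f"investor model -> {predict_inclusion_investor(score):.3f}")
```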
Customer Protection Regulation on Financial Inclusion
The study established that customer protection regulation has a strong and positive correlation with financial inclusion. This finding resonates with Kilara and Rhyne (2014), who concluded that a customer-centric organizational approach is critical for solving a core challenge in financial inclusion and filling the access-usage gap. Within the region, the results are also consistent with evidence from South Africa: Nanziri (2016) reported no significant differences in the welfare of financially included men and women, while financially included women, on average, enjoyed higher welfare outcomes than their excluded female counterparts. In that study, both the men and the women were protected customers of the institutions.
The study found that financial education provides information on consumer protection; it focuses on increasing the level of knowledge regarding financial products, services, and financial education programs. The findings concur with Feng and Wang (2018), who indicated that governments undertake massive financial education campaigns to help people manage money more effectively. To achieve financial well-being, regulation ensures access to appropriate financial products and services through regulated entities, with fair and transparent machinery for consumer protection and grievance redressal. Gardeva and Rhyne (2015) found that low levels of financial literacy are most likely to be considered a major barrier to financial inclusion; financial literacy is seen as an enabling factor that unlocks other key dimensions of financial inclusion. According to Remmele (2016), financial education can improve levels of financial literacy, help individuals to overcome financial vulnerability caused by personal circumstances, and potentially break down psychological barriers.
This study established a positive and significant relationship between customer protection regulation and financial inclusion. Lewis and Lindley (2015) noted that although many financial institutions adopt practices to ensure customers are well served, some have used their information advantage (often abetted by regulatory loopholes intended to promote financial inclusion) to increase profits at the expense of consumers, who may find themselves over-indebted, under-insured, or without a return on their investment; this adversely affects financial inclusion. It was also revealed that disclosure rules require providers to use clear language and present information in a format that is easily visible; to ensure transparency, regulators publish a list of all rates in newspapers or other freely accessible media, and financial institutions provide schedules of fees and charges to customers. According to Love and Pería (2015), disclosure rules requiring providers to use clear language and a clearly visible format can help ensure that disclosed information is comprehensible, particularly for low-income consumers, and thereby improve financial inclusion. Shukla (2015) indicated that where literacy rates are low, it is particularly important that information be communicated orally to consumers.
It was also established that financial service providers must ensure that sales promotion materials are not misleading and that the terms of contracts are fair to consumers. Brown (2016) explained that financial service providers must ensure that contract terms are fair to consumers and that market practices are sound. Even when activities are outsourced, financial service providers remain accountable and must ensure that outsourced agents perform their functions in a reliable and professional manner.
Investor Protection Regulation on Financial Inclusion
The study established that investor protection had a significant influence on financial inclusion. On the same note, Daehyun and Starks (2016) observed that the legal approach to inclusion holds that the key mechanism is the protection of outside investors, whether shareholders or creditors, through the legal system. Wardhani (2017) explains that one way to think about the legal protection of outside investors is that it makes the technology of expropriation less efficient: at the extreme of no investor protection, insiders can steal a firm's profits perfectly efficiently, and no rational outsider would finance such a firm unless it had a strong reputation. According to Mhlongo (2019), insider transactions that modify the ownership structure give investors relevant information regarding the firm's future prospects. Boot and Thakor (2017) concluded that investors in the market can be business experts, and having more information about variations in customer preferences and in industries is normally an added advantage.
It was also established that agency problems, which arise from conflicts of interest between the providers of funds and financial intermediaries, can be mitigated through disclosure. The agency framework presents a variety of mechanisms to address the agency problem, such as contracts, disclosure, financial intermediaries, corporate governance, and the market for corporate control (Cheong & Zurbruegg, 2016). Busch (2019) noted that optimal contracts, such as compensation agreements and debt contracts, seek to align the interests of insiders with those of external equity and debt claimants. The study found that some organizations mislead third parties by disclosing only good news about their company, even though financial institutions are required to disclose their financial information to shareholders. Chu and Nguyen (2019) found that an expanded flow of information is an essential requirement for free-market economics: it facilitates transparency and encourages competition, which leads to improvement of the work at hand. Without this flow of information, it would be impossible for companies to grow and survive in a competitive business environment.
CONCLUSIONS AND RECOMMENDATIONS
In a nutshell, the study concluded that customer protection regulation influenced financial inclusion in Fintech companies more strongly and significantly than investor protection regulation did. Customer protection was measured in terms of financial education, the enhancement of disclosure policies, the creation of an enabling environment, and the ethical environmental responsiveness of the company to the environment and its customers. Therefore, an enhanced regulatory framework that focuses on customers within the Fintech industry plays an important role in financial inclusion. The study contributes to the literature on agency theory by establishing the strength and direction of the relationship between the two regulations and financial inclusion. As a result of this study, agency theory can be viewed through the lens of financial inclusion.
The study recommends that the government ensure that customers are educated promptly and that disclosure policies are enhanced to create an enabling environment for financial inclusion. This can be achieved by punishing those who fail to adhere to customer protection regulations, ensuring that consumers are not harmed through excessively high prices, predatory lending, or loss of savings, and thereby supporting financial inclusion. For practitioners in the Fintech industry, since the study ties regulation to financial inclusion, it follows that for policymakers to achieve the sustainable development goals of poverty reduction and economic development, society needs empowerment through the protection of both consumers and investors.
FUNDING AGENCY
The publisher has waived the Open Access Processing fee for this article.
On religious and secular exemptions: A case study of childhood vaccination waivers
This paper analyses exemptions to general law through the prism of vaccine waivers in the United States. All US states legally require the vaccination of children prior to school or daycare entry; however, this obligation is accompanied by a system of medical, religious, and/or philosophical exemptions. Nonmedical exemptions became the subject of discussion after the 2015 Disneyland measles outbreak in California, which unequivocally brought to light what had been brewing below the surface for a while: a slow but steady decline in vaccination rates in Western societies, resulting in the recurrence of measles outbreaks. This can be traced back to increasing public questioning of vaccines by a growing anti-vaccination movement. In reaction to the outbreak and the public outrage it generated, several states proposed, and some already passed, bills to eliminate nonmedical exemptions. I analyze two questions. First, can legal exemptions from mandatory childhood vaccination schemes for parents who are opposed to vaccination (still) be justified? Second, should legal exemptions be limited to religious objections to vaccination, or should they also be granted to secular objections? Although the argument in the paper starts from the example of the US, it seeks to provide a more general philosophical reflection on the question of exemptions from mandatory childhood vaccination.
Introduction
The introduction of vaccines against infectious diseases has been one of the most important contributions to public health of the last century, ranking second only to the advent of clean water. Diseases like smallpox, polio, measles, mumps, rubella (MMR), and whooping cough were far and away the major killers of human beings until the beginning of the 20th century. Nowadays, these diseases have been dramatically reduced or even eliminated in the Western world as a result of large-scale vaccination programs. The major goal of such vaccination programs is the maintenance of herd immunity: when a critical portion of a community is immunized against a contagious disease, the pathogen can no longer circulate in the population, so the disease cannot gain a foothold in that society. Indeed, it is through herd immunity that large-scale vaccination programs are so much more effective than individual vaccination.
A large majority of parents is convinced of the beneficial effect of vaccination on the health of their children and voluntarily enroll their children in such programs. However, since the introduction of the first vaccination programs at the beginning of the 19th century, various groups of parents have refused to vaccinate their children. Traditionally, the most well-known objectors are members of religious groups, predominantly Protestant Christian congregations, who argue that vaccination interferes with divine providence. In recent years, however, we encounter a growing modern anti-vaccination movement, which argues that the dangers of vaccinations far outweigh their benefits. Unlike the more religious groups, which are primarily inwardly oriented, this new anti-vaccination movement actively and successfully reaches out to new parents through anti-vaccination websites and TV celebrities. An important factor here is the ''MMR vaccine causes autism'' controversy of a decade ago, in the wake of the publication of Andrew Wakefield's article in The Lancet (Wakefield et al., 1998).
By now, Wakefield's claim has been fully debunked. The Lancet retracted his article twelve years after its publication due to suspicions of fraud and the association of scientific interest with litigation-driven profit motives (Deer, 2011). As a result, Wakefield has been stripped of his medical license and academic reputation. The controversy generated a huge industry of peer-reviewed research, none of which could corroborate the alleged vaccination-autism link (Jain et al., 2015; Taylor et al., 2014). Still, the suggested vaccine-autism link remains ''the most damaging medical hoax of the last 100 years'' (Flaherty, 2011: 1302). The claim was widely reported in the media and went viral on anti-vaccination websites. After a long period in which the idea that vaccinations were beneficial and safe gained an ever-stronger foothold in Western societies, this new movement heralded a turning point in public trust in vaccines. This renewed public questioning of vaccines has led to a decline in vaccination rates, which ultimately culminated in the notorious 2015 Disneyland measles outbreak (see Majumder et al., 2015; Phadke et al., 2016). This was the first major measles outbreak in a decade; it spread throughout the US and Mexico and caused the death of a woman in Washington State (Izadi, 2015). 1 How should liberal-democratic governments deal with such opposition to vaccination when it leads to compromised herd immunity and the re-emerging risk of outbreaks of a disease that for decades was assumed to be under control? 2 Given their possibly devastating effects, the state has a compelling interest in preventing (major) outbreaks of infectious diseases. Indeed, although it remains contested whether the liberal state should promote public health through welfare state institutions, it is undisputed that it should protect society against major threats to public health. Fighting infectious diseases is generally considered a classic government task. As part of that task, several states legally require that children receive vaccinations against diseases like measles.
At the same time, many liberal political orders have endorsed a practice of rule-and-exemption as a way of dealing with legal obligations on morally sensitive issues, for example the exemption from compulsory military service for conscientious objectors. Childhood vaccination is a similarly sensitive issue: although a large majority of parents voluntarily consent to vaccination, a minority has strong objections to the practice. Historically, these legal exemptions originated as religious exemptions, available only to a very limited category of members of recognized religions. Over time, such exemptions became available to a wider category of parents, but up to today, many states explicitly distinguish religious from secular claims.
The issue of religious and secular exemptions has also gained considerable attention in current liberal political theory. On what grounds can exemptions to general law be justified? Why should conscientious objections justify dispensation from democratically adopted and generally applicable laws? In addition, if legal exemptions should be granted, should they be limited to religious convictions or should they also be granted to secular ''strong beliefs'' about the central importance and value of certain convictions, practices, and purposes? What feature of religion makes it so special that it deserves protection from state interference? This paper analyses exemptions to general law through the prism of the topical and highly relevant case study of vaccine waivers in the United States. Although there is no federal regulation, all US states legally require the vaccination of children prior to school or daycare entry, but this requirement is accompanied by a system of medical, religious, and/or philosophical exemptions. These exemptions became subject to scrutiny after it emerged that the Disneyland measles outbreak was caused by substandard vaccination compliance due to high numbers of nonmedical exemptions. In reaction to the outbreak and the public outrage it generated, the state of California accepted a bill that eliminated all nonmedical exemptions. Besides California, legislators in other states have introduced bills that would make it harder for parents to opt out of vaccinating their kids. 3 Thus these exemptions, though for a long time virtually undisputed, have now become the subject of intense public and political discussion.
This paper seeks to answer two questions. First, can legal exemptions from mandatory childhood vaccination schemes for parents who are opposed to vaccination (still) be justified? Second, should legal exemptions be limited to religious objections to vaccination, or should they also be granted to parents with secular objections? Although the argument in the paper starts from the US example, it seeks to provide a more general philosophical reflection on the question of exemptions from mandatory childhood vaccination and, consequently, a more general conclusion on exemptions in general. To focus the paper, I do not discuss mandatory childhood vaccination in general, but limit the argument to one disease, the measles, due to the turmoil the Disneyland outbreak generated and because it is a ''pure'' example in this context. The measles is an extraordinarily contagious disease; its effects are quite severe and outbreaks are common enough to pose a significant threat to public health. Moreover, over time the vaccine has proven to be effective and safe. Finally, measles is a predominant example of a childhood disease because the first vaccination must be administered long before the age of reason kicks in. 4 The paper is organized as follows. Section two explains why mandatory childhood vaccination is a necessity in current days of vaccine hesitancy to maintain herd immunity. Section three conceptualizes legal exemptions in general, and section four discusses exemptions in the context of mandatory childhood vaccination. Sections five (on free-riders) and six (on the distinction between religious and secular objections) argue that it is impossible for liberal-democratic governments to substantially separate deeply held objections to vaccination from more superficial preferences. Section seven argues why an alternative approach, employing proxies to separate deeply held objections from more superficial preferences, also fails. Section eight brings the various arguments together and concludes that a waiver system for mandatory childhood vaccination cannot be sustained: it either violates liberal-democratic tenets or it is incapable of limiting the number of exemptions in order to maintain robust herd immunity.
Justifying mandatory vaccination: The maintenance of herd immunity
The measles is a dangerous childhood disease for which there is no curative medicine; the only treatment available is prevention through vaccination. 5 Once infected, the patient has to endure the disease and during this period she is susceptible to various risks. Out of every 1000 individuals who become infected, one or two will die from the disease; approximately one will develop encephalitis (a swelling of the brain that can lead to convulsions and leave the person deaf or with an intellectual disability) and as many as 50 get pneumonia. Even an ''uncomplicated'' course of the measles results in a week with a high fever, cough, sore throat, and a rash covering the entire body. 6 Moreover, the measles is unusually contagious: an unvaccinated person exposed has a 90% chance of becoming infected with the disease. 7 This implies that a patient is not only a victim of the disease, but also a vector in its further spread. Infected persons (can) infect others and contribute to outbreaks. As a consequence, the measles should not merely be discussed in terms of parent-child responsibilities, but primarily in terms of public health. Vaccination reduces the number of potential hosts (and thus carriers) of the disease in the population. The higher the vaccination rate, the harder it is for a disease to spread. The threshold of herd immunity for measles is achieved at 92-94%, at which point major outbreaks are precluded (Orenstein et al., 2007: 1434).
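The 92-94% figure follows from the standard epidemiological relation between the herd immunity threshold and the basic reproduction number R0. The sketch below illustrates that textbook relation; the R0 range of 12-18 for measles is the commonly cited estimate, not a figure taken from the sources above.

```latex
% Herd immunity threshold (HIT) from the basic SIR model:
%   HIT = 1 - 1/R_0
% With the commonly cited R_0 range of 12--18 for measles:
%   R_0 = 12  =>  HIT = 1 - 1/12 ~ 0.92
%   R_0 = 18  =>  HIT = 1 - 1/18 ~ 0.94
% consistent with the 92--94% coverage threshold cited above.
\[
  \mathrm{HIT} = 1 - \frac{1}{R_0}, \qquad
  R_0 \in [12,\,18] \;\Longrightarrow\; \mathrm{HIT} \in [0.92,\,0.94].
\]
```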
As such, herd immunity not only protects the vaccinated but also several categories of persons who cannot be vaccinated for various medical reasons. The first category concerns infants and young children who have not yet completed the recommended childhood immunization schedule. Newborn babies have maternally derived antibodies that protect them against the measles and other diseases. Over time, however, the effect of these antibodies fades out and these children remain unprotected until their first vaccination. During this period, they can only be protected through the vaccination of the persons around them. The second category concerns persons for whom their vaccination turns out to be insufficiently effective because, in very rare cases, vaccinations do not mount an adequate immune response. There will always be a small percentage of vaccinated persons who remain unprotected; however, it is unclear who these individuals are until they get infected. The third category of persons concerns those who cannot undergo vaccination because they have certain forms of cancer, have a compromised immune system, or are likely to suffer from a serious allergic reaction.
Herd immunity thus not only protects individual vaccinated persons against the disease; it also provides a higher-order societal protection because it prevents diseases from breaking out altogether. In that sense it is an important collective good and a major contribution to public health. Vaccinating healthy toddlers protects them from falling ill and prevents them from becoming vectors in the further spread of the disease. This is the main reason why countries like the US have endorsed mandatory vaccination programs in order to guarantee sufficiently high vaccination levels that protect the safety of vulnerable co-members of society who cannot protect themselves. Moreover, we have arrived at a point in time where we can conclude that there can be no genuine controversy about the risks of vaccination any more. 8 A recent meta-analysis concluded that, after the administration of over 25 million vaccine doses, 33 cases of vaccine-triggered anaphylaxis, a potentially life-threatening allergic reaction, were confirmed (McNeil et al., 2016). Other research arrives at a similar conclusion: ''there is evidence that some vaccines are associated with serious adverse events; however, these events are extremely rare and must be weighed against the protective benefits that vaccines provide'' (Maglione et al., 2014: 325).
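To put the reported figure in perspective, a quick calculation using only the numbers cited above (McNeil et al., 2016):

```python
# Rate of confirmed vaccine-triggered anaphylaxis implied by the
# figures cited above: 33 confirmed cases after over 25 million doses.
cases = 33
doses = 25_000_000
print(f"{cases / doses * 1e6:.2f} confirmed cases per million doses")
# -> roughly 1.32 confirmed cases per million doses administered
```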
The state has a compelling interest in preventing (major) outbreaks of infectious diseases. The only way in which such outbreaks can be prevented is through the maintenance of robust herd immunity, and the only way herd immunity can be achieved is through mass vaccination. Until recently, outbreaks of the measles seemed abstract and remote in the western world. In that context it could seem to be an excessive use of governmental power to propose mandatory childhood vaccination. However, recent outbreaks have provided parents and the public with firsthand experience of the reality of these diseases and their harmful impact. When herd immunity cannot be taken for granted because of a growing vaccine denialism, the question arises whether a system of exemptions from mandatory childhood vaccination laws can be maintained.
Conceptualizing legal exemptions
Many liberal states have a cherished tradition of rule-and-exemption approaches as a way of dealing with legal obligations on morally sensitive issues. The most well-known example is that of exemption from compulsory military service for conscientious objectors. 9 Mandatory childhood vaccination is a similarly sensitive issue. As discussed in the previous section, states have a compelling interest in preventing (major) outbreaks of infectious diseases, and this has led certain states to implement mandatory childhood vaccination programs. A case in point is the United States. Although there is no federal regulation, all 50 states legally require vaccination of children prior to school or daycare entry. At the same time, this legal duty is accompanied by a system of exemptions. Three states (Mississippi, West Virginia, and, since 30 June 2015, California) only accept medical exemptions, e.g. for children who are immunocompromised, those who have allergic reactions to vaccine constituents, and those who have a moderate or severe illness. All other states also offer non-medical exemptions: 28 states accept religious exemptions and 19 states offer religious and secular exemptions. 10 How should the accommodation of exemptions be judged in the context of constitutional liberal democracies? At first sight, allowing exemptions seems to contradict a basic requirement of the idea of constitutional democracy. After all, clear application of the law, equal treatment, and the rule of law are paramount; law ought to be administered impartially and should have no favorites (Barry, 2001; Trigg, 2012). At the same time, the liberal state should acknowledge that facially neutral laws could nevertheless be disproportionately burdensome for certain citizens. Even though most parents comply voluntarily with the duty to vaccinate, some parents vehemently object to the practice. Mandatory vaccination implies that these parents have to go against their conscience or have to sacrifice deep commitments. And even though most other citizens do not share these convictions, or might even disagree with them, they might nevertheless understand the importance of these convictions for the individual person, and the pain it would inflict upon parents if they had to act against their deepest commitments.
The question, then, is when universal application of law is paramount and under which circumstances exemptions should prevail. This is also a central question in current political-theoretical debates on legal exemptions. So-called muscular liberals rally around the idea of universal egalitarian law and argue that law, as the outcome of democratic deliberation and political processes, should in principle be administered impartially and be binding on all; Barry's Culture and Equality (2001) is the notorious placeholder for this position. On the other hand, more tolerance-leaning liberals see legal exemptions to universal laws as the contemporary interpretation of the age-old liberal ideal of toleration (Dobbernack and Modood, 2013; Forst, 2012; Williams, 1996). Tolerance-leaning liberals argue that a blanket application of state law sometimes unduly burdens citizens who deeply disagree with the law because it squarely contradicts their conscience and deepest convictions. Allowing exemptions recognizes this fact by alleviating the particular burden on the members of these minority groups.
This paper does not aim to take a firm theoretical position in this more abstract political-theoretical debate. 11 I think that the idea of accommodation is, ipso facto, not inconsistent with the central tenets of constitutional liberal democracy, especially in cases in which granting exemptions does not directly violate the fundamental rights of others. 12 The fact that herd immunity can be maintained at a vaccination rate of 92-94% implies that there might be some room for exemptions from mandatory vaccination without endangering public health and the rights of others. I agree with Mahoney (2011: 311) that government should seek to accommodate minority practices in the most generous manner possible. We should be clear, though, what kind of right this is. It is not a straightforward and inviolable right of parents that nullifies the duty to vaccinate; instead, it is a toleration-based and conditional right to an exemption from a general legal duty, which can, and should, be revoked at the moment robust herd immunity is endangered.
In what follows, I accept that legal exemptions are, in principle, legitimate in liberal-democratic states, but that we should be sure that such exemptions are justified, all things considered, in two respects. First, normatively: allowing exemptions should not collide with other central liberal-democratic values. Second, exemptions should be practically feasible: government agencies should be able to distinguish sincere objections against vaccination from so-called exemptions of convenience (Calandrillo, 2004), and they should be able to make this distinction by employing relatively straightforward legal norms. Narrowing down this approach to the subject of this paper: the first-order priority of government policy is to maintain and protect robust herd immunity. The second-order priority is to allow for exemptions if, and only if, they are feasible, both normatively and practically. Given these priorities, can a waiver system for mandatory childhood vaccination against the measles be maintained?
Exemptions from mandatory vaccination
The US waiver system for mandatory childhood vaccination became the subject of public and political dispute after the notorious 2015 Disneyland measles outbreak, especially when it became clear that the outbreak resulted from substandard vaccination compliance caused by the high number of parents who had received nonmedical exemptions (Colgrove and Lowin, 2016: 349; Majumder et al., 2015: E1). 13 As a result, the Disneyland outbreak brought to the fore what had long been discussed in academic circles, namely, that most US states provide exemptions in a very generous, and rather unprincipled, way. In theory, exemptions should only be given to a limited and very specific set of parents with genuine objections to vaccination. In practice, however, opting out of vaccination turned out to be remarkably simple. In a large number of states parents can forgo vaccinations by simply ticking a box on a pre-printed form, no questions asked. In addition, the vast majority of states do not enforce any limitations on exemptions: 32 of the 48 states that allow exemptions have not denied a single claim (Calandrillo, 2004: 434). This lenient way of enforcing mandatory vaccination law did little to stop the rising vaccine hesitancy that endangered herd immunity and increased the risk of measles outbreaks. In the 2013-2014 school year, the US Centers for Disease Control and Prevention found that most states failed to reach the target of having 95% of children entering kindergarten complete the two-dose MMR vaccine sequence; in the state of Colorado, fewer than 85% had received both doses of MMR (US Centers for Disease Control and Prevention, 2014).
The Disneyland outbreak led to an outpouring of public indignation over the irresponsible behavior of non-vaccinating parents and the risks they present to public health. Popular media ironically emphasized that vaccination rates in wealthy Los Angeles schools were as low as those in South Sudan (Khazan, 2014). In reaction to the outbreak, the state of California discussed Senate Bill 277 to eliminate all nonmedical exemptions. The proposal led, predictably, to heated opposition but was in the end accepted by a significant margin in the State Legislature (Nagourney, 2015). An initiative for a referendum to overturn the bill fell far short of the number of signatures needed to put the issue on the ballot (McGreevy, 2015). In May 2016, the state of Vermont passed Bill H 98, which removed the philosophical exemption and requires those seeking religious exemptions to review evidence-based educational material regarding immunizations. In June 2015, the American Medical Association explicitly endorsed stringent state immunization requirements that allow exemptions for medical reasons only, and legislators and public health professionals in nearly 30 other states are similarly working to get such reforms passed. 14 At the same time, however, similar attempts in Oregon and Washington have failed, at least for the moment.
However, the fact that too many states have handed out exemptions too easily in the last decade does not imply that a waiver system is, ipso facto, doomed to fail. As I will conclude later in this article, though, it might be an indication that such regulations are very hard, if not impossible, to implement in practice. Such a rule-and-exemption scheme can only be successfully implemented when (at least) three conditions are met. First, only a relatively small subset of persons should have objections to the duty as prescribed by law (Vallier, 2016). After all, allowing exemptions should not undermine or nullify the goal for which the specific legal duty was introduced; this implies that the large majority of citizens must have sufficient reason to endorse and abide by the law. Second, government agencies should be able to distinguish sincere deep objections from exemptions of convenience by relatively straightforward legal norms. And third, given the limited number of exemptions available, the distribution of this scarce good should not violate basic notions of justice.
Applied to a waiver system for mandatory childhood vaccination, we can translate these conditions into the following terms. The first-order priority is to maintain and protect robust herd immunity, which implies that the number of exemptions should be limited. The second-order priority is that the process of distinguishing sincere objections from exemptions of convenience should not violate central liberal values by, for example, undermining state neutrality or the secular character of law, or by privileging or discriminating against certain religious or other comprehensive doctrines. 15 Only if these two priorities are met is a mandatory vaccination scheme with waivers feasible.
The aim of vaccination policies is to protect all persons against infectious diseases, but this does not require that all persons have to be vaccinated. As mentioned above, herd immunity requires an overall vaccination coverage of 92-94%, which implies that a limited practice of non-vaccination of 6-8% can be accommodated without sacrificing the rights of others, that is, the right of citizens to be protected against outbreaks of vaccine-preventable diseases. Since the risk of non-vaccination is cumulative in nature, herd immunity can be sustained, even if a certain percentage of parents refrain from vaccination.
However, the question remains how much room this leaves in practice for nonmedical exemptions. Firstly, a certain proportion of the 6-8% will consist of persons who are not (yet) protected for medical reasons, as described in section two: infants too young to be vaccinated, persons for whom the vaccination turns out to be insufficiently effective, and those who cannot undergo vaccination. There are good public health arguments to prioritize these medical exemptions over non-medical exemptions.
Secondly, even though it might be possible to achieve the average threshold vaccination rate in society overall, vaccination coverage is never spread evenly over the territory. Societies usually contain risk clusters in which vaccination rates fall below the level required to maintain herd immunity. The Dutch and US Bible Belts, for example, are well known for harboring undervaccinated religious communities. This reflects the more general phenomenon that people who share religious objections to vaccination usually live in close proximity to each other and interact extensively through churches, schools, and communal life (May and Silverman, 2003; Omer et al., 2008).
To sum up: the first-order priority is to prevent outbreaks, which requires robust state-wide herd immunity. This implies that the average vaccination rate must be higher than the standard 92-94% threshold in order to also assure herd immunity in pockets of under-vaccination. Moreover, a relatively large proportion of the exemptions is already taken up by the medical exemptions mentioned above (newborn babies, those who cannot undergo vaccination for medical reasons). Thus, although herd immunity for measles permits 6-8% of the population not to be vaccinated, the percentage thereof that can be allocated to non-medical exemptions is considerably smaller. This makes it even more urgent to restrict the number of non-medical exemptions.
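To make this arithmetic concrete, here is a minimal back-of-the-envelope sketch of the exemption ''budget'' these constraints imply. Only the 92-94% threshold comes from the text above; the medical-exemption share and the safety margin for under-vaccinated pockets are illustrative assumptions.

```python
# Back-of-the-envelope exemption budget implied by herd immunity.
# All inputs except the 92-94% threshold are illustrative assumptions.
HERD_IMMUNITY_THRESHOLD = 0.94   # strict end of the 92-94% range cited above
MEDICAL_SHARE = 0.03             # assumed share medically unvaccinable/unprotected
CLUSTER_MARGIN = 0.01            # assumed buffer for under-vaccinated pockets

max_unvaccinated = 1.0 - HERD_IMMUNITY_THRESHOLD          # 6% at the strict end
nonmedical_budget = max_unvaccinated - MEDICAL_SHARE - CLUSTER_MARGIN

print(f"Total room for non-vaccination: {max_unvaccinated:.1%}")
print(f"Remaining budget for nonmedical exemptions: {nonmedical_budget:.1%}")
# -> roughly 2% under these assumptions: considerably smaller than 6-8%.
```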
Limiting the numbers of nonmedical exemptions: Keeping out free riders
In order to limit the number of nonmedical exemptions, sincere objections to vaccination must be separated from exemptions of convenience. Before we can discuss more fine-grained arguments concerning religious and nonreligious exemptions, we first have to address the more mundane subject of free riders. As mentioned above, herd immunity is a collective good and, as such, it has two important characteristics. First, it is non-excludable: once achieved, it is impossible to exclude people from using the collective good; herd immunity protects vaccinated and non-vaccinated persons alike. Second, a collective good is non-rivalrous: one person's use of it does not limit the use of others. These characteristics of collective goods open the door to free riding: persons benefiting from a collective provision like herd immunity without contributing to the maintenance of the public good. As long as herd immunity is firmly established, children of vaccinating and non-vaccinating parents are equally protected against the disease, and this can explain the incentive to free ride. For one thing, taking one's kid to the doctor for a vaccination disrupts the daily routine, for which one might have to take a morning off from work. And knowing that kids are often slightly feverish for a night or two after a vaccination, it can seem rational to seek such an exemption of convenience. 16 Since vaccinations are not provided free of charge by the US government, 17 in some states it requires less effort to request an exemption than to fulfill the vaccination requirements (Salmon et al., 2006: 439). In addition, even though standard vaccines like MMR have proven to be very safe, there is no such thing as a 100% risk-free medical intervention. It is inevitable that adverse reactions will occur in some cases, even when all reasonable precautions are taken in the manufacture and delivery of vaccinations. The great majority of side effects are local and minor (a sore arm or a low-grade fever for a few days). As discussed above, more serious reactions to vaccines occur only in exceedingly rare circumstances, but it is still rational for parents to wish to avoid them if possible, as long as one can assume herd immunity to be maintained by others.
However, free riding is, of course, inherently self-defeating: the more parents follow suit, the more herd immunity will be endangered and the more unsustainable a waiver system becomes. Some anti-vaccinators are very aware that their free ride can only be guaranteed as long as herd immunity is maintained by others. Dr Bob (Sears), a well-known US anti-vaccination celebrity, is blatantly honest in his advice to non-vaccinating patients: ''I also warn them not to share their fears with their neighbors, because if too many people avoid the MMR vaccine, we'll likely see the disease increase significantly'' (Sears, 2007: 96-97, as quoted in Navin, 2016).
Needless to say, the putative immorality of free riding in this case is clear. Indeed, it is quite hard to justify the choice of parents to make use of a collective good that they value while refusing to contribute their fair share to its maintenance. However, morally condemning free riding is one thing; it is quite another to come up with legal tools that capture the distinction between free riders and conscientious objectors in workable legal criteria. Free riders know that their behavior is objectionable and are therefore not very likely to admit that they are free riding. They either follow Dr Bob's advice and seek to hide the fact that they do not vaccinate, or they present their refusal in terms of a conscientious objection and apply for philosophical or religious exemptions.
This demonstrates the first epistemic problem with waiver programs. They not only provide room for genuine grievances against vaccination but also for less genuine free-rider behavior, because free riders are forced to present their exemption claims in terms of genuine grievances in order to be taken into consideration. It is precisely this behavior that prevents laws and formal regulations from making a straightforward distinction between exemption claims based on genuine objections and exemptions of convenience.
Separating religious and secular objections
Since the introduction of the first smallpox vaccines, several groups of parents have refused to vaccinate their children. Traditionally, the most well-known objectors are members of religious groups. For example, Dutch Protestant-Christian congregations refuse vaccination because they consider it contrary to their religious convictions. They believe that God has predestined the fate of all human beings, including their health and the prevalence of diseases. They do not necessarily deny the effectiveness of large-scale vaccination programs, but nevertheless prioritize other values and conclude that vaccination is an ''inappropriate meddling in the work of God.'' In the US, we find religious groups including Christian Scientists, Mennonites, and the Amish, certain members of which also object to vaccination. For example, some Christian Scientists argue that disease is a spiritual phenomenon that should be healed through prayer instead of medication. They refuse vaccines because they believe that physical illness is an illusion of the material world and that prayer can help us to correct the false beliefs that give rise to illness.
These more traditional, religiously inspired objectors to vaccination should be distinguished from the current, more secular wave of vaccine hesitancy, which primarily mobilizes parents and activists who are convinced that the risks of vaccination outweigh the purported benefits. 18 In the last three decades, a vocal anti-vaccination movement has emerged, which conveys its message primarily through anti-vaccination websites. This is a multifaceted movement, including ''spiritual'' or ''holistic'' approaches, anthroposophists, homeopaths, and adherents of ''natural healing'' and ''alternative healing.'' They dispute the medical consensus that vaccines are safe and effective; moreover, they question the self-evidence with which governments provide and promote large-scale vaccination programs. Some believe that a disease like measles could, in the case of otherwise healthy children, contribute to growth, development, and immunity building, providing greater resilience against diseases like cancer and allergies later in life. Others seek to carve out ''all-natural'' lives for their children, to maintain their ''purity,'' or to avoid contamination, assuming that vaccines contain toxic preservatives. Still others argue that current programs overwhelm a child's immune system because it is forced to handle too many vaccines too early in life. Even though none of these claims has been corroborated by evidence-based academic research, such groups are usually not bothered by that lack of scientific confirmation. To the contrary: the anti-vaccination movement is typically characterized by an aversion to ''mainstream medical science'' (Navin, 2016).
Should religious objections have more weight in such debates than secular objections? Again, the development of US jurisprudence on the waiver system can help us make sense of the distinction. 19 Historically, the number of exemptions granted was limited because only a limited category was eligible: members of nationally recognized and established religious denominations. In 1971, several state courts widened the domain of exemptions "to everyone and anyone who claims a sincerely held religious belief opposed to vaccination-and not just those emanating from officially recognized religions." 20 Only in 1979 was this limitation to religion disputed in court, because religious exemptions "discriminate against the great majority of children whose parents have no such religious convictions." It makes sense to lift the distinction between religious and secular claims for exemptions, because it does not fit with current, more secular ideals that governments should be neutral toward various (religious and secular) ideas of the good life (Pierik and Van der Burg, 2014). Moreover, the original distinction led to many odd exceptions. For example, although many secular claims were not even taken into consideration, an exemption claimed by a Jewish parent was allowed by a US court, even though nothing in Judaism objects to vaccinations (Calandrillo, 2004: 414, n 388). Another example is the fact that thousands of parents have qualified for religious exemptions by joining sham mail-order religions such as the Congregation of Universal Wisdom, through a contribution of $75 and a $15 fee for the official notification necessary to qualify for the exemption. 21 Indeed, it is difficult for lawmakers and courts to come up with formal law and cogent court decisions to distinguish religious from more secular commitments, because these commitments are typically insulated from the ordinary standards of evidence and rational justification employed in common sense and science (Leiter, 2013: 34). It is the religion, or its non-theistic equivalent, that determines which commitments are legitimate causes for an exemption, not secular lawmakers or state judges (Macklem, 2008: 133). It is therefore quite an endeavor, if not impossible, for a liberal government to come up with a clear set of coherent conditions that separates legitimate, deep commitments from superficial preferences while remaining neutral to the various religious and secular philosophies of life.
More generally, the growing focus on state neutrality and secular law in recent decades affects the way such claims to exemptions are assessed. The more secular the assessment of exemption claims becomes, the harder it is to distinguish religious from secular convictions and, more importantly, to distinguish "strong beliefs" from "mere preferences." Within the liberal tradition, one so much determined by inter-Christian strife in Europe after the Reformation, such strong beliefs and the very concepts of "conscience" and "conscientious objection" were limited to the (indeed quite contingent) category of members of nationally recognized and established religious Christian denominations, and very much understood in Christian terminology and symbolism (Spinner-Halev, 2005;Waldron, 1987). In current, more secular times, we need a more inclusive conception of the "strong beliefs" and "deep commitments" that provides normative status to convictions that individuals closely identify with and recognize as theirs, on the grounds of their "deep," "serious," "spiritual" nature. After all, it is because these religious and secular commitments meet the criterion of deep commitments that they justify exemptions from universal law. 22 The transition to a more inclusive approach can be recognized in the way US and EU courts have assessed such claims. The European Court of Human Rights never provided a comprehensive definition of the term "religion" or "belief." Mainstream religions are readily accepted as belief systems. For other religions and personal belief systems, the Court merely employs formal criteria: the conviction must display "a certain level of cogency, seriousness, cohesion, and importance." 23 The latter terms have never been spelled out in case law, but Murdoch (2007: 11) explains that a specific act, i.e. objecting to vaccination, must relate to a weighty and substantial aspect of human life and behavior and be deemed worthy of protection in European democratic society. 24 But nothing in these formulations separates religious from secular convictions. In United States v. Seeger, the US Supreme Court has, in matters of conscientious objection to military service, abandoned the religious/secular distinction by holding that an objection could be understood as "religious" when it is based on a "sincere and meaningful belief which occupies in the life of its possessor a place parallel to that filled by the God of those admittedly qualifying for the exemption." 25 Following this jurisprudence, it is remarkable that several US states still only accept religious exemptions and deny secular exemptions; one would expect the distinction to collapse as soon as a secular parent in one of these states makes the case before the Supreme Court. However, it turns out that Seeger was an exception, because the Supreme Court was interpreting the narrow terms of a statute rather than addressing the constitutional question of what should count as protected belief for purposes of the Free Exercise Clause of the First Amendment. As a result, judges have been reluctant to extend the constitutional protection of nonreligious deep, serious, moral commitments beyond narrowly circumscribed cases of conscientious objection to military service (Laborde, 2014: 68).
For liberal governments to comply with the contemporary demands of state neutrality-and for the US government to comply with the free exercise clause of the first amendment-the earlier theistic and substantive interpretation of the term "religious" must be abandoned and replaced by a more inclusive and formal one (Navin, forthcoming). This is also clear when we analyze the myriad of claims to exemptions from childhood vaccination today. Should modern objectors who, in one spiritual way or another, still adhere to Wakefield's debunked claim that vaccination causes autism be treated differently from Christians who argue that vaccination is an inappropriate meddling in the work of God, from those who argue that diseases should be healed through prayer instead of medication, or from metaphysical thinkers who argue that vaccines undermine "purity" or hamper the "spiritual growth of the person"? Yes, the former is based on a factual claim that contradicts evidence-based medicine, while the latter cannot be refuted scientifically; but this is, ipso facto, not sufficient as a criterion that a neutral state can employ to distinguish the two claims, or to conclude that one justifies an exemption while the other does not.
Here the attempt to make meaningful distinctions between, on the one hand, exemption claims based on religious objections and those based on secular objections and, on the other hand, between sincere objections and mere exemptions of convenience, seems to end up in a free fall. The more law, policy, and adjudication have to rely on formal criteria like sincerity, cogency, or cohesion, the more a waiver system complies with the second-order priority, in that it does not privilege or discriminate against certain religious or other comprehensive doctrines. But when that distinction falls apart, it is also much harder to separate sincere objections from free-rider claims disguised as sincere objections. This, in turn, makes it much harder to fulfill the first-order priority: that a waiver system must be capable of limiting the number of exemptions to such an extent that herd immunity is not jeopardized. The more categories of exemption claimers are acknowledged, the larger the number of (potential) claimants. If a liberal government aims to maintain herd immunity and if there is no neutral way of distinguishing insurmountable objections to vaccination from more superficial preferences, it will become impossible to design a waiver system that is both neutral to the several religious and secular ideas about the good life and able to maintain robust herd immunity.
Employing proxies
In the previous section, I concluded that it is very hard to substantively identify genuine objections to vaccination and, consequently, to design law and policies that distinguish genuine objections from exemptions of convenience. One way to hold on to a waiver system is to give up the attempt to substantively assess parental convictions and instead employ proxies to determine who can legitimately claim exemptions. The alternative service for objectors to military service can serve as an example here. Recognized objectors have to contribute to the public good in another way, for example, by serving in educational or health care institutions. In addition, the alternative service usually takes a longer period than the military service, up to twice as long, in order to deter non-sincere objectors from taking the alternative route. In our case of exemptions from vaccination, a similar path can be taken. Vaccinating one's children contributes to the public good-it generates herd immunity-and is burdensome to the parents and the child. Alternative trajectories for vaccine objectors should contribute in a different way to the public good and/or should at least, in one way or another, be as onerous for parents as going through the vaccination procedure, in order to cut off the easy way out of vaccination. The question is to what extent such an approach can comply with the requirements formulated above: the first-order priority, limiting the number of exemptions in order to maintain and protect robust herd immunity; and the second-order priority, that the process of assigning exemptions should not privilege or discriminate against certain religious or other comprehensive doctrines.
Let me discuss three possible proxies. In the first, parents are required to follow a certain procedure before they are eligible for a vaccine waiver: to complete a set of educational sessions and to present their substantive opposition to vaccination before a formal review board. In this approach, the content of the objection is not substantially assessed; it is only marginally evaluated on whether it satisfies some basic formal requirements that qualify the parents for an exemption from mandatory vaccination. The basic idea is that, even though there is no substantive assessment of their actual arguments, the procedure forces parents to inform themselves about the dangers of non-vaccination, to formulate their objections against vaccination explicitly, and to defend them in a formal setting. Even though undergoing this procedure might not substantially alter the parents' beliefs about vaccination, at least it would make it harder for parents to forgo vaccination without being confronted with information on the possible dangers involved. Moreover, it would make the process of receiving a waiver more burdensome, which might at least deter some free riders.
A second proxy moves away from the problematic distinction between sincere objections and mere preferences by starting from an amoral conception of the public good of herd immunity in a specific community. It argues that, since herd immunity is such an essential public good, all members of society can be expected to contribute. The most obvious contribution is to vaccinate one's children (and oneself). Those with objections to vaccination have to contribute to the public good in another way, for example by paying a tax that finances vaccination schemes and supports vaccinations for low-income families. One advantage is that such a tax is much less intrusive and might therefore be more acceptable for those with religious or philosophical objections. A second advantage is that such an approach avoids the problem of assessing the true nature and depth of the objection. Your willingness-to-pay is taken as a proxy for the depth of your objection, and given the difficulty of determining sincere conscientious objections, willingness-to-pay might be the most neutral alternative. The level of taxation should yield at least the same burden as participating in a vaccination schedule, to make sure that opting out is not less burdensome than participating. Perhaps the charge could be based on the expected damage, according to the polluter-pays principle. Another calculation method links the tax rate to the extent to which herd immunity is assured in a certain area. If the number of objectors within a specific community is small, the tax rate can be low, only covering the administrative fee required to uphold the system of exemptions, monitoring levels of herd immunity and possible outbreaks of infectious diseases. However, the larger the number of objectors in a specific area, the more the tax rate will rise. Above the threshold percentage to maintain herd immunity, some objectors have to lose out. Willingness-to-pay could, in a way, be the most neutral way to separate the wheat from the chaff.
Lotteries might provide a third proxy. Stone (2011) argues that lotteries are appropriately employed when it is essential to prevent bad reasons from affecting decisions about allocation of a certain good. If we conclude from the discussion in the previous section that it is impossible to substantially distinguish sincere objections from superficial preferences by relatively straightforward legal norms, we could distribute exemptions among parents who seek them through a lottery.
These proxies have the advantage that government is discharged from the impossible task of substantially assessing the content or the depth of an objection against vaccination. But all three proxies have their own problems. The first proxy might not provide enough of a barrier to exemption claims to secure robust herd immunity (first-order priority). After all, if parents know they only have to meet formal requirements and are in the end not assessed substantially, they know they just have to go through the motions to succeed. In addition, this proxy seems to be biased in favor of educated people, for whom it will be easier to formulate their substantive opposition than for less-educated people (second-order priority). The second, tax-based proxy might have the ability to limit the number of exemptions by raising the tax to the threshold level, but it has the disadvantage that it is biased in favor of wealthier people (second-order priority). In an unequal society, a tax may not distinguish sincere from convenient objections, but rather pick out who is able to pay. If the tax is low, then we have done nothing to block the worries about exemptions of convenience for the better-off. If the tax is high, then only the well-off will be able to apply for exemptions. To the extent that we consider the current socio-economic inequality unjust, this proxy only reinforces these injustices. Moreover, it might be considered insulting: buying one's right to follow one's conscience. The third, lottery proxy has the disadvantage that it does not, in itself, solve the problem of free riding, since every parent can subscribe to the lottery. As such, it suffers from a reverse version of the second-order priority: it does not distinguish between groups that are relevantly different. Moreover, it does not take seriously the depth of the objections to vaccination. Genuine objectors seek an exemption because they want their convictions to be taken seriously, not because they have won a lottery. As such, the lottery might provide exemptions to people with deep objections, but in such an insulting way that it will be despised and, as a result, perhaps even rejected by them.
Conclusion
This paper discussed religious and secular exemptions to general law through the prism of waivers for mandatory childhood vaccination against the measles. Given that the measles is an extraordinarily contagious and quite severe disease, and given the fact that outbreaks are common enough to pose a significant threat to public health, government has a compelling interest in preventing outbreaks of vaccine-preventable contagious disease by maintaining robust herd immunity. The paper focused on legal regimes with mandatory measles vaccination schemes and discussed whether religious and secular exemptions can be maintained. The argument started from the idea that legal exemptions are not ipso facto inconsistent with the central tenets of constitutional liberal democracy, but that they can only be accommodated when they are justified, all things considered, from both normative and practical perspectives. These conditions were translated into two priorities: the first-order priority that allowing such exemptions should not endanger robust herd immunity, and the second-order priority that the distribution of the scarce good of exemptions should not violate central liberal-democratic values. A mandatory vaccination scheme with waivers is only justified when these two conditions can be met.
Since herd immunity against the measles only requires an average vaccination rate of 92-94%, the first-order priority leaves some leeway for exemptions. However, since vaccination coverage is spread unevenly over the territory and since a large proportion of the exemptions are already taken up by medical exemptions, the question is how much room the first-order priority leaves for non-medical religious and secular exemptions. Concerning the second-order priority: where earlier arrangements, which only provided exemptions to members of nationally recognized and established religious denominations, may have been very successful in limiting the number of exemptions, this came at the expense of privileging the claims of certain groups over others. After all, the exempted category was not selected on the basis of the content of their objections, but because they were well established as religious (read: Christian) groups in US society. Given the great importance attached nowadays to state neutrality and secular law, such a selective approach is outdated. A more egalitarian analysis of exemptions should be neutral to the various ideas of the good life, religious and secular, and should not privilege religious groups merely because they are historically well established in a specific society. Such a more inclusive approach makes it impossible to separate religious from secular objections to vaccination, which is also exemplified by the fact that the European Court of Human Rights and the US Supreme Court have abandoned substantive assessments of objections in favor of more formal criteria like sincerity, cogency, or cohesion.
But once the assessment in terms of substantive criteria has been abandoned, it becomes quite difficult to exclude many of the current secular convictions that are employed to object to vaccination. After all, most of them are not merely objections to vaccination but are embedded in wider religious, spiritual, or holistic ideas of the good life. As such, not only does the distinction between religious and secular objections collapse, but the distinction between genuine objections and less genuine free-rider objections might dissolve as well. After all, since free riders are forced to present their objections in terms of genuine grievances in order to be taken into consideration, it will be very hard to design law and policies that can straightforwardly distinguish those objections that warrant exemptions from those that do not.
This leads to a paradox for liberal-democratic exemption policies for mandatory childhood vaccination law: it seems to be impossible to satisfy the first-order priority and the second-order priority simultaneously. Either the number of categories of objectors must be limited (and this can only be done by privileging some ideas of the good life over others, thus violating the second-order priority), or the policy of accepting categories of objectors is neutral. However, the latter would most probably endanger robust herd immunity because it would open the door to too many exemption claims. This leads to the conclusion that it is very difficult to maintain vaccine waivers for the measles that are consistent with central liberal-democratic tenets and that are also able to maintain robust herd immunity.

Notes

6. http://www.cdc.gov/measles/about/complications.html (accessed 12 January 2016). The last major measles outbreak in the Western world was in France between 2008 and 2011, in which 10 patients died and almost 5000 patients were hospitalized, including 1023 for severe pneumonia and 27 for encephalitis/myelitis (Antona et al., 2013).

7. For this reason, Opel et al. (2016) have argued that the measles pose a more important problem than other vaccine-preventable infectious diseases. For a critique see: Byington et al. (2016).

8. This vaccine denialism is very similar to global warming denialism with one important difference: research on global warming emerged relatively recently and is confronted with many uncertainties, whereas research on the MMR vaccine is very robust and became well established during the last century.

9. For the (dis)analogy between conscientious objection against conscription and vaccination, see Salmon and Siegel (2001: 292).

10. For an overview of the regulation per state, see: http://www.vaccinesafety.edu/cc-exem.
|
2018-04-03T01:17:58.331Z
|
2017-03-09T00:00:00.000
|
{
"year": 2017,
"sha1": "9db8c756665c012211b9e1a51e7cb5e8fb2f4e6f",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5428064?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4713ccfd7263570d1cd2c07761595ab2831ce7c0",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Medicine",
"Political Science"
]
}
|
245882654
|
pes2o/s2orc
|
v3-fos-license
|
Gender and Ethnic Disparities of Acute Kidney Injury in COVID-19 Infected Patients: A Literature Review
Coronavirus disease 2019 (COVID-19) has become a public health emergency of worldwide concern. COVID-19 is a new infectious disease arising from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It has a strong transmission capacity and can cause severe and even fatal respiratory disease. It can also affect other organs such as the heart, kidneys, and digestive tract. Clinical evidence indicates that kidney injury is a common complication of COVID-19, and acute kidney injury (AKI) may even occur in severely ill patients. Data from China and the United States show that male sex, Black race, older age, chronic kidney disease, diabetes, hypertension, cardiovascular disease, and higher body mass index are associated with COVID-19-induced AKI. In this review, through literature search and analysis we found gender and ethnic differences in the occurrence and development of AKI in patients with COVID-19. By summarizing the mechanisms underlying these gender and ethnic differences, we found that male patients and Black patients progress to COVID-19-induced AKI more often than their counterparts.
INTRODUCTION
Coronavirus disease 2019 is an infectious disease caused by infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). As of May 23, 2021, there were 167 million confirmed cases and 3,475,086 deaths worldwide (https://www.worldometers.info/coronavirus/). SARS-CoV-2 uses the host cell's angiotensin-converting enzyme 2 (ACE2) as a receptor to invade cells, thereby affecting their normal physiological functions. As the "gateway" recognized by SARS-CoV-2, ACE2 is expressed not only in lung cells, but also in podocytes and proximal tubules of the kidney. After binding to ACE2, the spike protein of SARS-CoV-2 is hydrolyzed and cleaved by type II transmembrane serine protease (TMPRSS2), which promotes the virus's invasion of host cells (Hoffmann et al., 2020;Matsuyama et al., 2020). Clinical evidence also shows that kidney involvement is common during the course of COVID-19, and the incidence of acute kidney injury (AKI) in severe COVID-19 patients is also high (Nadim et al., 2020). Current evidence suggests that AKI caused by COVID-19 may affect more than 50% of patients in the ICU (Nadim et al., 2020). In addition, a consistent feature of the COVID-19 pandemic is that men are more susceptible and have poorer outcomes (Naicker et al., 2020). In several countries, men account for the majority of COVID-19 deaths: in China, men accounted for 73% of deaths; in South Korea, 59%; and in Italy, 70% (La Vignera et al., 2020). Data from China and the United States showed that male sex, Black race, older age, chronic kidney disease (CKD), diabetes, hypertension, cardiovascular disease, and higher body mass index are associated with COVID-19-induced AKI (Hirsch et al., 2020;Pei et al., 2020). These findings indicate an association between male sex and higher mortality. It is known that AKI is an indicator of negative prognosis and disease severity in patients with COVID-19 (Nadim et al., 2020;Varga et al., 2020). However, few reports have focused on gender differences in COVID-19 patients with kidney injury. By summarizing relevant research data published recently, this review identifies not only gender differences in AKI caused by COVID-19 but also ethnic differences in the occurrence and progression of AKI. By describing and discussing the mechanisms underlying these two differences, it may help in designing better preventive and therapeutic strategies.
GENDER DIFFERENCES IN THE INCIDENCE OF AKI AMONG PATIENTS WITH COVID-19
The gender-related COVID-19 mortality rate is among the most frequently reported epidemiological data. Current evidence indicates that the incidence rate is higher in males than in females; in addition, males show more serious outcomes than females (Mauvais-Jarvis, 2020; Peckham et al., 2020). Data of 59,254 patients from 11 different countries showed an association between male sex and high mortality (Borges do Nascimento et al., 2020). We identified the data by searching PubMed and references from relevant articles using the search terms "Coronavirus Disease 2019", "COVID-19", "SARS-CoV-2", "kidney disease", "acute kidney injury", "AKI", "risk factors", "gender", "clinical outcomes", and "Clinical Characteristics". We found that the SARS-CoV-2 infection rate in males is higher than that in females, and the incidence of AKI in males is also higher than that in females in most studies (Figures 1, 2) (Cheng et al., 2020b;Fisher et al., 2020;Hirsch et al., 2020;Kolhe et al., 2020;Pei et al., 2020;Sang et al., 2020;Zahid et al., 2020;Basalely et al., 2021;Chan et al., 2021;Cheng et al., 2021;Costa et al., 2021;Dai et al., 2021;Diebold et al., 2021;Gasparini et al., 2021;Martinez-Rueda et al., 2021;Mousavi Movahed et al., 2021;Ng et al., 2021;Ozturk et al., 2021;Russo et al., 2021;Xu J. et al., 2021;Yildirim et al., 2021;Zamoner et al., 2021). In these studies, all of the patients with COVID-19 had new-onset AKI during hospitalization. The COVID-19 mortality rate is related to comorbidities. The latest data suggest that male sex, older age, CKD, diabetes, hypertension, cardiovascular disease, obesity, genetic risk factors, immunosuppression, and smoking history may induce or increase the incidence and progression of AKI (Hirsch et al., 2020;Nadim et al., 2020;Pei et al., 2020). Recent studies have also shown that male sex is an independent predictor of AKI in patients with COVID-19 (Hirsch et al., 2020). This may be related to the higher rate of SARS-CoV-2 infection in males.
Mechanistically, compared with females, males are more susceptible to viral infections because of differences in innate immunity related to the sex chromosomes (Conti and Younes, 2020). Males experience infectious diseases more frequently and with increased severity (Crommelin, 2013), whereas females are more prone to the majority of autoimmune diseases (Beeson, 1994;Whitacre et al., 1999). Furthermore, ACE2 and TMPRSS2, the key factors for virus invasion (Shang et al., 2020), are more highly expressed in males than in females. Recent studies have shown that androgens (AR) regulate the expression of ACE2 and TMPRSS2, including in the lungs, which may explain the increased susceptibility of males to severe SARS-CoV-2 infection (Mikkonen et al., 2010;Sama et al., 2020). AR can also modulate the immune response and reduce the antibody response to viral infection, increasing the severity of SARS-CoV-2 infection in males (Klein and Flanagan, 2016). Interestingly, estradiol treatment can significantly reduce ACE2 mRNA levels in normal human bronchial epithelial cells (Stelzig et al., 2020). In addition, a study found that estradiol and the phytoestrogen genistein can reduce the mRNA levels of TMPRSS2, indicating that estrogen may play a protective role in COVID-19 infection by inhibiting SARS-CoV-2 entry into host cells (O'Brien et al., 2020). Therefore, gender differences in hormone levels may contribute to male susceptibility to SARS-CoV-2. Moreover, the presence of underlying diseases such as CKD, diabetes, hypertension, and cardiovascular disease increases the infection rate and mortality of SARS-CoV-2. Data from the Public Health Agency of Canada show that males are more likely than females to suffer from diabetes and heart disease (O'Brien et al., 2020). Evidence from epidemiological observations showed that diabetes patients with COVID-19 have a 50% increased risk of fatal outcomes compared with patients without diabetes. Single-cell RNA analysis has shown that male sex, older age, and smoking habits can increase the expression of ACE2 and TMPRSS2 (Muus et al., 2021), thereby promoting SARS-CoV-2 infection. These gender differences in behavior may also increase the risk of males suffering from underlying diseases, which provides a possible explanation for the higher infection rate in males (Ellison and Tasian, 2021). Finally, gender-related susceptibility may also be related to vitamin D. Vitamin D deficiency is an independent risk factor for viral acute respiratory infection (ARI). A study showed that men have lower vitamin D supplementation compared with age-matched females (La Vignera et al., 2020). This is another factor contributing to gender differences in susceptibility.
In addition to the higher proportion of males among AKI patients, the incidence of AKI in male patients with COVID-19 is also higher than that in female patients (Figure 2) (Cheng et al., 2020b;Fisher et al., 2020;Hirsch et al., 2020;Kolhe et al., 2020;Pei et al., 2020;Sang et al., 2020;Zahid et al., 2020;Basalely et al., 2021;Chan et al., 2021;Cheng et al., 2021;Costa et al., 2021;Dai et al., 2021;Diebold et al., 2021;Gasparini et al., 2021;Martinez-Rueda et al., 2021;Mousavi Movahed et al., 2021;Ng et al., 2021;Ozturk et al., 2021;Russo et al., 2021;Xu J. et al., 2021;Yildirim et al., 2021;Zamoner et al., 2021). According to the previously mentioned mechanism of SARS-CoV-2 directly invading kidney host cells to cause AKI, ACE2 and TMPRSS2 are expressed in proximal convoluted tubule cells and podocytes (He et al., 2020). Therefore, it is important to understand the gender differences in the expression of ACE2 and TMPRSS2 in the kidney. A study pointed out that AR could upregulate the expression of ACE2 in human glomerular epithelial cells, tubular cells, and podocytes (Yanes Cardozo et al., 2021), providing a theoretical basis for the high incidence of AKI in males. Furthermore, AR could increase efferent arteriolar resistance by increasing the level of Ang II, thereby exacerbating glomerular injury (Reckelhoff et al., 2005). The transcription of TMPRSS2 is also regulated by AR (Lucas et al., 2014). Thus, AR will not only increase the risk of SARS-CoV-2 kidney infection but also aggravate kidney damage. AKI caused by SARS-CoV-2 infection has many indirect factors, such as dysregulated immune responses, cytokine storm, endothelial dysfunction, and hypercoagulability. These indirect factors also show gender differences. Dysregulated immune responses can be observed in patients with severe COVID-19, in whom neutrophils, leukocytes, and the neutrophil-to-lymphocyte ratio (NLR) are significantly increased (Qin C. et al., 2020;Zhou et al., 2020). The increase in NLR reflects the depletion of lymphocytes and the increase in neutrophils that produce proinflammatory cytokines, and it can predict the severity of clinical outcomes. Compared with females, males are more likely to exhibit systemic inflammation, elevated ferritin, NLR > 6, and a higher percentage of monocytes. A recent study comparing the immune responses of male and female patients with COVID-19 found that men have higher levels of the circulating innate inflammatory cytokines IL-8 and IL-18 (Takahashi et al., 2021). In addition, AR can increase the number and function of circulating neutrophils and increase the production of IL-1β, IL-10, IL-2, and transforming growth factor-β by immune cells (Klein and Flanagan, 2016). These cytokines are related to AKI caused by cytokine storm.

FIGURE 1 | Data are extracted from 23 studies on COVID-19 patients, including the total number of patients included, the proportion of male patients, the incidence of AKI and the mortality of AKI. "*": The mortality of AKI is unavailable.
D-dimer is usually elevated in patients with COVID-19. It is related to the severity of the disease and is also one of the indirect factors leading to AKI (Cui et al., 2020). Several studies have shown that although there is no significant difference in the expression level of D-dimer between male and female patients with COVID-19, the platelet count of males is significantly lower than that of females (Su W. et al., 2020;Yoshida et al., 2021). Males also have a higher risk of venous thromboembolism (VTE) than females (Baglin et al., 2004;Eichinger et al., 2010). Sepsis and endothelial dysfunction are also indirect factors that cause AKI. In general, females recover better than males from illnesses caused by infectious diseases, sepsis, trauma, or injury (McClelland and Smith, 2011;Markle and Fish, 2014). Immunity, inflammation, and hypercoagulability markers all show gender differences, and the biological factors underlying them also deserve further investigation.
The baseline condition of patients with COVID-19 at the time of admission also affects the occurrence of AKI. Early epidemiological reports indicated that hypertension, diabetes, cardiovascular disease, and CKD are risk factors for AKI in SARS-CoV-2 infection (Mesropian et al., 2016;Del Sole et al., 2020). Although epidemiology also shows that females have a higher prevalence of CKD than males (Murphy et al., 2016), lifetime risk studies have found that females may have a slower decline in kidney function compared with males (Albertus et al., 2016). Gender differences in the progression of CKD could be attributed to a variety of factors, including sex hormones, kidney hemodynamics, and differences in kidney quality between males and females (Neugarten, 2002;Neugarten and Golestaneh, 2013). Moreover, the frequency of elevated markers of myocardial injury during hospitalization is much higher in males than in females, indicating that males are more susceptible to myocardial injury and heart failure after SARS-CoV-2 infection (Su W. et al., 2020). This impairs cardiac output and kidney perfusion, contributing to gender differences in AKI.
By summarizing the relevant mechanisms and reasons, we provide a possible explanation for the high incidence of AKI in male patients with COVID-19. These mechanisms and data will help in understanding the gender differences in kidney injury caused by SARS-CoV-2 and in designing better prevention and treatment strategies.

FIGURE 2 | Data are extracted from 23 studies on COVID-19 patients, including the total number of patients included, the proportion of males in AKI patients, the incidence of AKI in male patients and the incidence of AKI in female patients.
ETHNIC DIFFERENCES IN THE INCIDENCE OF AKI AMONG PATIENTS WITH COVID-19
Current epidemiological data indicate that African Americans or Blacks are more likely to be affected by SARS-CoV-2 and to have worse outcomes (https://coronavirus.jhu.edu/data/racial-data-transparency). U.S. national health statistics have documented extensive health disparities for Black patients with COVID-19: they suffer a threefold greater infection rate and a sixfold greater mortality rate than their White counterparts (Yancy, 2020). The high prevalence and poor health outcomes of Blacks reflect a complex set of factors, including income, education history, and occupational differences. Recent data show that only 16% of Hispanics and 20% of Blacks can work from home, and they account for an excessively high proportion of essential roles that require in-person work, resulting in frequent and prolonged exposure to hazardous environments (Dorn et al., 2020). Many Black communities reside in poverty areas with high housing density, high crime rates, and difficult access to healthy food. Low socioeconomic status is a risk factor for total mortality independent of other risk factors, and is related to the occurrence and development of a variety of underlying diseases, such as cardiovascular disease, diabetes, and CKD. It has been confirmed that the existence of underlying diseases can affect the infection rate and outcome of COVID-19 (Yancy, 2020). Furthermore, a large-scale survey found that Black and Latinx respondents had a serious lack of understanding of the symptoms and transmission of COVID-19 (Alsan et al., 2020). These social determinants of health put minorities who live in high-risk communities at greater risk, not just for underlying diseases but now also for COVID-19 infection and mortality. The occurrence of AKI during COVID-19 infection is associated with high mortality. Data from China and the United States indicate that male sex, older age, Black race, diabetes, CKD, hypertension, cardiovascular disease, congestive heart failure, and higher body mass index (BMI) are associated with AKI in COVID-19 (Nadim et al., 2020). Several studies have shown that Black race is a risk factor for COVID-19-induced AKI. Nimkar et al. pointed out that among patients with COVID-19, older age, CKD, hyperlipidemia, and African American descent showed higher odds of AKI (Nimkar et al., 2020). Charoenngam et al. compared the hospital outcomes of Black and White hospitalized patients with COVID-19 at Boston Medical Center, the largest safety-net hospital in New England. After adjusting for age, gender, BMI, underlying type 2 diabetes, hypertension, and baseline estimated glomerular filtration rate (eGFR), the odds of AKI in Black patients were statistically significantly higher (aOR 2.16, 95% CI, 1.57-2.97) (Charoenngam et al., 2021). In order to determine the risk factors related to the development of AKI in patients with COVID-19, Hirsch et al. used multivariate analysis and found that Black race was both an independent risk factor and an independent predictor for the occurrence of AKI (Hirsch et al., 2020). However, the differences in risk observed in patients with COVID-19 based on race may reflect disparities in social, economic, environmental, and other stressors, which may increase the risk of AKI and its adverse consequences. The complex interactions between the various factors that affect health outcomes require a better and deeper understanding.
Bowe et al. observed patients with COVID-19 in the Veterans Affairs (VA) health care system and found that Black race showed a strong association with AKI (1.9 times). Moreover, they pointed out that the percentage of Black race can explain the spatial and temporal variation in AKI rates (Bowe et al., 2021). The VA health care system is the largest nationally integrated health system in the United States; it aims to provide equitable access and reduce variations in care, and it has repeatedly been shown to deliver high-quality care (Asch et al., 2004). This research reduces the influence of socioeconomic factors on the results. In addition, Fisher et al. adjusted for the age, gender, race/ethnicity, socioeconomic status, and neighborhood crowding of patients with COVID-19 and found that Black race remained a significant risk factor for AKI. They identified male sex, Black race, and older age as risk factors for the development of AKI, regardless of COVID-19 status (Fisher et al., 2020). At present, the mechanism of the high incidence of AKI in Black patients with COVID-19 is still unclear. By consulting the relevant literature, we found that in addition to socioeconomic factors, the susceptibility of Blacks to AKI may also be related to biological factors.
SARS-CoV-2 infection can cause collapsing glomerulopathy in individuals. The autopsy results of multiple studies described collapsing focal segmental glomerulosclerosis (FSGS) in patients with COVID-19 who developed rapidly progressive renal function impairment (Minami et al., 2020;Wu et al., 2020;Akilesh et al., 2021). The increased risk of collapsing glomerulopathy in patients with COVID-19 is related to the high-risk apolipoprotein L1 (APOL1) genotypes (G1, G2). The APOL1 risk genotypes were originally identified in African Americans with FSGS and/or end-stage kidney disease (ESKD), and FSGS is usually thought to arise from podocyte dysfunction (Genovese et al., 2010;Reidy et al., 2018). About 13% of African Americans have the APOL1 high-risk genotype, and these individuals have a 3- to 30-fold increased risk of various forms of kidney disease (Friedman and Pollak, 2020). Reports of collapsing FSGS related to SARS-CoV-2 infection are common among Blacks. A study found that 6 of 7 COVID-19 patients with collapsing glomerulopathy were of Black ethnicity, and 1 Black patient was found to carry the high-risk G1 genotype (Akilesh et al., 2021). By contrast, Su et al. reported an autopsy study of patients who died of COVID-19 with multiple organ complications in China; in this study, no patients developed collapsing glomerulopathy (Su H. et al., 2020). The association between high-risk APOL1 genotypes and kidney damage is highly important for understanding the ethnic differences in the onset of AKI in patients with COVID-19. APOL1 is a constituent of high-density lipoprotein complexes and plays a vital role in lysing the trypanosomes that cause African sleeping sickness (Albertus et al., 2016). The two coding variants of APOL1 (G1 and G2) are present at high frequency in individuals of recent African descent. These two genetic variants cause amino acid changes that alter the function of APOL1. Inheritance of two risk alleles significantly increases the risk of kidney disease, including FSGS, ESKD caused by hypertension (Lipkowitz et al., 2013), HIV-associated nephropathy (Kopp et al., 2011;Kasembeli et al., 2015), lupus-associated kidney disease (Larsen et al., 2013;Freedman et al., 2014), and subtypes of membranous nephropathy (Larsen et al., 2014). However, individuals with the high-risk genotype of APOL1 do not universally suffer from kidney disease; other factors, such as viral infection, must be present to cause APOL1 nephropathy in high-risk groups. The link between viremia and APOL1 nephropathy also supports the idea that viruses can activate the APOL1 response to cause kidney damage (Divers et al., 2013;Freedman et al., 2018). In inflammatory settings, the expression of APOL1 is enhanced, with strong upregulation in response to interferons, lipopolysaccharide, Toll-like receptor agonists, TNF, and other cytokines (Zhaorigetu et al., 2008;Nichols et al., 2015), which may contribute to a cytokine storm. In addition, a study of community-dwelling Black adults found that patients with one or two risk alleles have a higher risk of sepsis than those carrying no APOL1 risk alleles (Chaudhary et al., 2019). As mentioned above, cytokine storm and sepsis are indirect factors of AKI caused by SARS-CoV-2 infection. These pieces of evidence provide a reasonable explanation for the ethnic differences in AKI among patients with COVID-19. If these mechanisms are confirmed, this phenomenon will have an important impact on public health.
Moreover, multiple studies have reported a higher prevalence of comorbidities, such as obesity, diabetes, hypertension, and CKD, in Black patients with COVID-19 (Chang et al., 2021;Wiley et al., 2021;Yoshida et al., 2021). The existence of comorbidities is a risk factor for the occurrence of AKI, which further explains why Black patients with COVID-19 suffer from AKI more than other races. The reason why Blacks are more affected by COVID-19 and have poorer outcomes is also related to vitamin D deficiency. Vitamin D has been proven to be protective against COVID-19 infectivity and severity. It is known that there are significant ethnic differences in the genes encoding vitamin D-binding protein (DBP). Blacks are more likely to have variants of this gene, leading to low DBP levels and impaired vitamin D synthesis and metabolism (Nizamutdinov et al., 2019). In addition, the increase in melanin in Black skin reduces the absorption of the sunlight required to produce vitamin D (Nair and Maseeh, 2012;Cashman et al., 2016). Vitamin D deficiency is more common in individuals with obesity: a systematic review and meta-analysis showed a 35% higher prevalence of vitamin D deficiency in individuals with obesity (Pereira-Santos et al., 2015). Among patients with COVID-19, Blacks have higher rates of obesity than other races. A study on the clinical aspects and outcomes of COVID-19 in Black patients found that the average BMI of hospitalized patients was in the "obese" range, higher than the national average (Gupta et al., 2021). Vitamin D modulates the immune system by suppressing the T helper 1 immune profile and upregulating the expression of regulatory T cells, thereby reducing the severity of cytokine storm. Therefore, vitamin D deficiency puts Black patients at a higher risk of cytokine storm and the resulting systemic and intrarenal inflammation (Charoenngam et al., 2021).

FIGURE 3 | AKI, acute kidney injury; AR, androgens; SARS-CoV-2, Severe Acute Respiratory Syndrome Coronavirus 2; COVID-19, coronavirus disease 2019; ACE2, angiotensin-converting enzyme 2; TMPRSS2, type II transmembrane serine protease; Ang II, angiotensin II; VTE, venous thromboembolism; CKD, chronic kidney disease; APOL1, apolipoprotein L1; ARI, acute respiratory infection.
CONCLUSION
AKI is a serious complication of COVID-19 and an indicator of poor prognosis (Cheng et al., 2020a;Nadim et al., 2020;Varga et al., 2020). Therefore, it is very important to understand the gender and ethnic differences in the development of AKI in COVID-19 patients. Through literature search and analysis, we found that male and Black patients with COVID-19 are more likely to progress to AKI, and we summarized the related mechanisms (Figure 3). The biological factors underlying these differences deserve further investigation. Proposing appropriate interventions based on these mechanisms is of great significance for improving the prognosis of COVID-19 patients.
AUTHOR CONTRIBUTIONS
WH, XL, and BH searched the literature and conceived and wrote the review. DL, LC, YL, KZ, YT, and SX revised the paper, tables and graphic abstract. GW and BF critically appraised the literature and made an intellectual contribution to the work. All authors read and approved the final manuscript.
|
2022-01-13T14:10:38.791Z
|
2022-01-13T00:00:00.000
|
{
"year": 2021,
"sha1": "b9659f6f1a0cc432a4332bf509eff2a6c5dc1aa7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcimb.2021.778636/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9c26c6ce1b86432fd6904c0cdb9314c40dd5ea4",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
154948097
|
pes2o/s2orc
|
v3-fos-license
|
Global Energy Development and Climate-induced Water Scarcity—physical Limits, Sectoral Constraints, and Policy Imperatives
The current accelerated growth in demand for energy globally is confronted by water-resource limitations and hydrologic variability linked to climate change. The global spatial and temporal trends in water requirements for energy development and policy alternatives to address these constraints are poorly understood. This article analyzes national-level energy demand trends from U.S. Energy Information Administration data in relation to newly available assessments of water consumption and life-cycle impacts of thermoelectric generation and biofuel production, and freshwater availability and sectoral allocations from the U.N. Food and Agriculture Organization and the World Bank. Emerging, energy-related water scarcity flashpoints include the world's largest, most diversified economies (Brazil, India, China, and USA among others), while physical water scarcity continues to pose limits to energy development in the Middle East and small-island states. Findings include the following: (a) technological obstacles to alleviate water scarcity driven by energy demand are surmountable; (b) resource conservation is inevitable, driven by financial limitations and efficiency gains; and (c) institutional arrangements play a pivotal role in the virtuous water-energy-climate cycle. We conclude by making reference to coupled energy-water policy alternatives including water-conserving energy portfolios, intersectoral water transfers, virtual water for energy, hydropower tradeoffs, and use of impaired waters for energy development.
Introduction
Globally, increasing demand for energy continues to outpace rates of population and economic growth [1]. The quest for sustainable energy futures will depend significantly on water-resource availability and quality impacts associated with energy development [2,3]. Both energy and water are inextricably linked to climate change, which tends to heighten the use of both resources [4] while increasing the variability of water availability for energy development, other human uses, and ecosystem processes. Drought and water scarcity in particular have direct effects for energy development [5,6], principally electrical power generation [7] but also the rapidly expanding production of biofuels [8]. The nexus between energy and water-both the water needed for energy development as described in this paper and energy for water pumping, conveyance, treatment, and other operations [9]-has important implications for climate change. For example, energy development and use generate greenhouse gases that significantly contribute to global warming. Additionally, adaptation to the effects of climate change [10] and mitigation of its anthropogenic causes are fundamentally centered on the use and management of energy and water-separately as resources and increasingly in tandem as the water-energy nexus [11].
Climate change and variability are now firmly linked to anthropogenic drivers via greenhouse gas emissions, in particular, carbon dioxide from a range of human activities including electricity generation and land use, the two processes we are concerned with in this paper. Policy-makers are increasingly called on to adopt and incentivize programs that mitigate CO2 while at the same time adapting to the effects of climate change [12]. In this context, the water-energy nexus plays a critical role in resource-use policy [13]. The availability and quality of water resources greatly influence energy options, and conversely, water management has an appreciable impact on CO2 emissions [14].
Water is required for a range of energy development processes. The environmental quality impacts of fossil-fuel development, e.g., petroleum, coal, and natural gas, are increasingly being factored into water-energy nexus assessments [15]. Here we focus on water use for: (a) hydroelectric and thermoelectric power generation, and (b) biofuel production (chiefly feedstock irrigation but also other life-cycle processes). Even with the technological shift from once-through cooling to evaporative cooling of thermoelectric generation, water consumption (depletion through evaporation) per unit of power generated represents an increasing demand on water resources. Additionally, irrigation is required for biofuel feedstocks (e.g., sugarcane or corn for ethanol and soy or rapeseed for biodiesel), and consumes significant amounts of water, although some feedstocks are raised under rainfed conditions. In river basins with physically stressed water resources or in locations where water is allocated for other human uses (often with secure water rights) or environmental flows, energy demands for water are of growing concern [16,17].
Several regional and national assessments of water requirements for energy development have been published [9,18-21]. Yet, limited work has addressed key components of the water-for-energy challenge at the global scale [8,22,23]. Most recently, Spang et al. [8] developed a metric for water consumption for energy production portfolios including various fuel types, then used it to calculate the water-for-energy footprint for 158 countries. These data were normalized based on several other indicators in order to rank countries according to different metrics [24]. While useful for developing a comprehensive assessment of the consumptive use of water for a given country's energy sector, such analyses are thus far temporally limited, providing a snapshot of water consumption for a given year. They should be augmented with current energy production trends including biofuels, analyses of technological innovation, and policy alternatives to address water-resource constraints. In order to develop a more complete picture, this paper quantitatively evaluates physical and sectoral (allocative) water scarcity resulting from thermoelectric generation and biofuels production trends at the global scale using current data, and identifies and assesses policy options to address these challenges. The goal of these analyses is to highlight current and future challenges with meeting water demands for energy generation.
This paper is organized as follows. Above, we have briefly framed the need to consider water and energy interlinkages in the context of climate change. While an increasing number of studies are available, most are constrained by regional or local focus or they are temporally limited. Next, we present our approach and methods for a global assessment of time-series trends of the water-resource use implications of electrical energy and biofuel feedstock irrigation. In the discussion section that follows, we consider the implications of climatic trends plus adaptive management and technological options to address these challenges. We identify and discuss "flashpoint" countries that are expected to face increasing constraints of water availability for energy development. Finally, we conclude with an assessment of policy alternatives for expanding energy requirements while also accounting for climate change and variability.
Methods and Data
The mapping and coupled energy-water resource analyses presented here are based on robust global datasets, specifically, 2010 electrical power and biofuel production and trends to 2020 from the U.S. Energy Information Administration [25], and freshwater availability and sectoral allocations from the U.N. Food and Agriculture Organization [26] and World Bank [27]. Newly available data on water consumption and life-cycle impacts of electrical generation [28] and biofuel production were also incorporated [29-31], including projections to 2020 for ethanol and biodiesel for several countries [32].
The cooling tower process for thermal electricity generation requires approximately 45 times lower withdrawals than once-through cooling [22]. Because cooling type for individual power plants globally is not widely reported, we estimated annual freshwater withdrawals for combined thermal and nuclear electricity based on both cooling tower and once-through technologies. We recognize that once-through cooling continues to be used in electricity generation, e.g., in some European countries, Canada, and the U.S. (where once-through cooling has become less common since 1970). However, the results reported here for all countries are based on assumed cooling tower technology, which we derived by multiplying EIA generation values by a median withdrawal intensity of 3.8 m³ per MWh. This assumption results in estimates of freshwater withdrawals that are lower than actual, i.e., if data existed to accurately account for once-through cooling.
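As an illustration, this withdrawal estimate reduces to a single multiplication per country-year. Below is a minimal sketch, assuming generation totals are already expressed in MWh; the function name and sample figure are illustrative placeholders, not values from the EIA dataset:

```python
# Freshwater withdrawals implied by thermal + nuclear generation,
# assuming cooling-tower technology throughout.
WITHDRAWAL_INTENSITY_M3_PER_MWH = 3.8  # median for cooling towers [22]

def estimate_withdrawals_m3(generation_mwh: float) -> float:
    """Annual freshwater withdrawals (m^3) for a given generation total."""
    return generation_mwh * WITHDRAWAL_INTENSITY_M3_PER_MWH

# Illustrative only: a hypothetical country generating 500 TWh/yr
print(f"{estimate_withdrawals_m3(500e6):,.0f} m^3/yr")  # 1,900,000,000 m^3/yr
```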
Projected future energy generation was based on compound annual growth rates (CAGR), computed as in Equation (1), where V(t0) is the earliest available electricity generation value and V(tn) is the most recent value within the period 2000-2010. No CAGR was calculated, for lack of adequate data, if fewer than five years of generation data were available for a given country.

CAGR(t0, tn) = (V(tn)/V(t0))^(1/(tn − t0)) − 1    (1)

Irrigation water applied for ethanol feedstock production was calculated by multiplying EIA ethanol production data by country-specific irrigation quantity coefficients for ethanol-producing nations obtained from de Fraiture et al. [33]. We also sought to include estimates of life-cycle water use associated with specific ethanol and biodiesel feedstocks. However, there is no robust global database on water use for the cultivation and processing of the many different biofuel feedstocks, although some estimates of "water consumption for energy production" [8] and life-cycle water use [29] have been derived for a few feedstocks. Use of water coefficients for biofuel production is further complicated by the fact that water consumption of biofuel feedstocks varies widely depending on the particular feedstock and climatic conditions [34]. For example, while soybeans under rainfed conditions consume no "blue water" as irrigation [34], in locations where they are irrigated, water consumption estimates can reach up to 844 m³/GJ [8]. Additionally, while some biodiesel feedstocks are grown under rainfed conditions, water is still used in the total life cycle during the processing stage. Because robust data on life-cycle water usage for the various biodiesel feedstocks are not available, we assumed a minimum of 0.031 L/MJ of water for biodiesel processing of all feedstocks under all climates, using the estimate reported in Spang et al. [8]. This represented biodiesel production using feedstocks grown under rainfed conditions, imported from other countries (as in the United Kingdom and South Korea), or from sources such as recycled cooking oils that are not primarily produced via large-scale agricultural production. Because the USA's soybean feedstocks for biofuels are increasingly produced under irrigated conditions, a higher coefficient of 21.71 L/MJ was used for USA biodiesel [29]. Utilizing water use coefficients obtained from Mulder et al. [29] and Spang et al. [8], we assumed ethanol feedstocks (corn and sugarcane) were irrigated, with the exception of Canada. For Brazil and the USA, we present a range of values (discussed below).
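To make the projection step concrete, the following minimal sketch applies Equation (1) and the median withdrawal intensity of 3.8 m³/MWh used above. The generation figures are hypothetical placeholders rather than EIA data.

```python
# Sketch of the projection procedure: fit a CAGR to 2000-2010 generation,
# extrapolate to 2020, and convert to freshwater withdrawals at the median
# cooling-tower intensity of 3.8 m^3/MWh. Input values are hypothetical.

WITHDRAWAL_INTENSITY = 3.8  # m^3 per MWh, median for cooling-tower technology

def cagr(v_t0: float, v_tn: float, years: int) -> float:
    """Equation (1): CAGR(t0, tn) = (V(tn)/V(t0))^(1/(tn - t0)) - 1."""
    return (v_tn / v_t0) ** (1.0 / years) - 1.0

def project_withdrawals(gen_2000_twh: float, gen_2010_twh: float):
    """Project 2020 thermal generation (TWh) and its withdrawals (km^3)."""
    rate = cagr(gen_2000_twh, gen_2010_twh, years=10)
    gen_2020_twh = gen_2010_twh * (1.0 + rate) ** 10
    # 1 TWh = 1e6 MWh; 1 km^3 = 1e9 m^3
    withdrawals_km3 = gen_2020_twh * 1e6 * WITHDRAWAL_INTENSITY / 1e9
    return rate, gen_2020_twh, withdrawals_km3

rate, gen_2020, w_2020 = project_withdrawals(gen_2000_twh=1200.0, gen_2010_twh=3500.0)
print(f"CAGR {rate:.1%}; 2020 generation {gen_2020:.0f} TWh; withdrawals {w_2020:.1f} km^3")
```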
The derived estimates of sectoral withdrawals for combined thermal and nuclear electricity generation and for biofuels production (from EIA data) were then taken as a percentage of total available freshwater for the industrial and agricultural sectors, respectively, as reported in the FAO and World Bank databases for years within one year of the EIA energy data. Electricity generation, water withdrawal and availability, and biofuel production data were compiled into a GIS geodatabase for mapping.
Spatial and Temporal Trends in Energy Generation
Presenting multiple dimensions of energy-generation analyses in a single graphic each for thermoelectric generation and biofuels production results in figures that require some explanation. Recent (2000-2010) growth rates in total electricity production comprising conventional thermoelectric, non-hydropower renewables, hydropower, and nuclear generation for 199 countries are presented in Figure 1, which also shows projected increases for the period 2010-2020 in total thermal electricity generation. Additionally, the percentage mix of fuel ethanol to total biofuel production for those countries producing more than 5000 barrels (795,000 liters) of biofuels per day in 2010 is shown in Figure 2, which also shows recent (2000-2010) growth rates in total biofuel production. The water requirements for current and future thermoelectric generation and biofuels production, as detailed in the Methods section above, are presented here. Hydropower was not assessed due to inherent methodological difficulties in attributing evaporative losses resulting solely from power generation by multi-purpose reservoirs [35]. Additionally, non-hydropower renewables such as solar and wind energy use minimal quantities of water (with the exception of concentrated solar and geothermal using steam cycles) and were not explicitly assessed here. Thus, our analysis focuses on water requirements for conventional thermal and nuclear generation.
Finally, water demand for biofuels was attributed to feedstock production (with irrigation volumes for sugarcane and ethanol reported separately by major producing countries), in addition to water for processing associated with life-cycle and consumptive water use analyses. Based on figures reported by de Fraiture et al. [33], fuel ethanol feedstocks were assumed to be irrigated except in the case of Canada, where corn and wheat are not typically irrigated; instead we applied an estimate of the non-irrigation water use for corn ethanol from Mulder et al. [29]. However, because the amount of water used for corn cultivation for ethanol varies widely from state to state in the USA [36], and between the major sugarcane growing regions in Brazil [37], we present a range of estimates for water consumption for ethanol feedstocks for those two countries that included lower bounds of zero irrigation (Table 1). Although not all sugarcane cultivation is irrigated in Brazil, the percentage is increasing, especially in the northeast region [37] and particularly as a result of drought and related climate effects. Because robust country-level data are not available for biodiesel feedstocks, we assumed a minimum amount of water use for all biodiesel processing for all countries except for irrigated soybeans in the USA [38], as mentioned.
Growth in Thermal Electrical Production
With increasing energy development comes increasing water demand. Results shown in Figure 1 indicate that most industrialized Western nations have exhibited low or flat growth rates in conventional thermoelectric power generation along with concurrent rapid growth in the development of non-hydropower renewable energy. This is especially true, for example, in Western European countries, consistent with renewable energy targets adopted in EU directives and individually by EU member states [39]. Most of the recent growth in thermoelectric power generation at the global level has come from countries in the Middle East, East and Southeast Asia, and South America. The BRIC countries show diverse energy portfolios; Brazil, Russia, India, and especially China show positive growth rates in all four categories of electricity generation. At 3595.5 billion kWh in 2011, China's generation is orders of magnitude greater than that of most other countries (data not shown). While China and the USA currently have comparable total net electricity generation, the USA has a CAGR of only 1% based on the period 2000-2010. By contrast, the CAGR for Chinese thermoelectric generation is 11.4%.
Increasing Water Demands for Conventional Electricity Generation
This anticipated global increase in electricity generation from conventional, i.e., non-renewable, sources will be accompanied by greater water demands. But while these nations are all expected to expand conventional thermal electricity generation capacity, they differ in the amount of overall industrial water usage that can and will be devoted to such development. The results shown in Table 1 indicate that all major countries, with the exception of the UK, are projected over 2010-2020 to increase the fraction of industrial water withdrawn for use in nuclear and conventional thermoelectric power generation. For example, in China, in 2010 water withdrawn for nuclear and conventional thermoelectric power generation accounted for an estimated 10.0% of all industrial water withdrawals and is projected to increase to 28.4% by 2020. India may also increase from 17% to almost 30% of its industrial water supply for conventional thermal electricity by that same future date. In contrast, water withdrawals for these same uses accounted for only 2.8% of Brazil's total industrial water withdrawals in 2010 and are expected to increase to about 6% by 2020. This is related to the significant contribution of hydropower to Brazil's overall portfolio.
Of the 16 major energy-producing nations included in Table 1, all but Brazil, Canada, and the USA were devoting at least 10% of total industrial water available to nuclear and conventional thermoelectric power generation, with four using more than 30%, and South Africa and Saudi Arabia each over 100%. In this context, it should be noted that seawater used for cooling is not included in the current definition of industrial withdrawals of (fresh) water. Nevertheless, seawater and other waters not suitable for irrigation or other human purposes, e.g., inland brackish water, "produced" water from oil and gas development, effluent, etc., will increasingly need to be used in energy generation and other industrial processes.
Growth Trends in Water for Biofuel Production
The major fuel ethanol producing countries shown in the map, USA and Brazil at 889.9 and 527.1 thousand BPD, respectively, were by far the largest producers of total biofuel (fuel ethanol and biodiesel) in 2010; at least 76% of their production is fuel ethanol, not biodiesel. The next highest is China at 43.0 thousand BPD. For the purposes of this analysis, our results quantify the relative proportions of agricultural water withdrawals used to irrigate ethanol feedstocks. In Table 1 we report the current (2010) withdrawals for irrigation for ethanol as a percentage of total agricultural water withdrawals for each country. None of the countries is above 10% (one of our thresholds) except for the USA, the world's largest ethanol producer, where large volumes of irrigation water are used for corn production as an ethanol feedstock [40], especially in the more arid western states [36]. Irrigation requirements for corn per liter of ethanol produced vary widely geographically, from 5 L L⁻¹ in Ohio to 2138 L L⁻¹ in California [36]; thus, the relative share of agricultural water devoted to corn may be much lower at a state or regional scale. The second largest producer of biofuels, Brazil, applies less water than the USA, 7.7%, for irrigating energy feedstocks because sugarcane is largely rain-fed. While growth in biodiesel production has been rapid in recent years in Brazil, ethanol still comprises by far the larger share of total biofuel production.
We also estimated the future amount of total agricultural water devoted to ethanol crop production based on recent trends. These estimates assume that total agricultural water withdrawals do not begin increasing. This is based on data showing that total freshwater withdrawals for agriculture have remained steady or have not increased appreciably during the period 2002-2011 for the countries shown in Table 2, with the exceptions of India and Saudi Arabia. We also assume that the recent rates of expansion of energy crop acreage requiring irrigation continue. While in 2010 the irrigation requirements for ethanol feedstock production were relatively low, fuel ethanol production in these major producing countries has increased, although less rapidly than in earlier decades. For example, in the USA the contribution of ethanol to the renewable fuels standard is near its maximum, while other biofuel feedstocks do not yet have appreciable market share. Additionally, the European Union has cut back its demand for biofuels in order to minimize impacts on developing countries. With uncertainty in energy security coupled with climate change impacts on energy demand and water availability, however, these feedstocks may demand a markedly greater proportion of the total water available for agriculture. Applying a 10% reduction in total water available for agriculture due to effects of climate change, while assuming the percentage of all available agricultural water applied for ethanol feedstock cultivation remains the same, Canada and the USA would devote 10% and 12% of all agricultural water to ethanol feedstock cultivation, respectively. If we apply a further reduction due to allocative scarcity (i.e., other sectors adapting to water scarcity by reallocating water currently used in agriculture), totaling a 25% reduction, the percentage of all agricultural water applied for ethanol feedstock production increases to 15% in the USA, 12% in Canada, and 10% in Brazil. It should be noted that drought conditions in California and Australia, for example, exemplify how reductions in water allocated to agriculture frequently result in such drastic cuts. We also combined 2010 water consumption for nuclear and thermoelectric electricity generation with life-cycle water use for all biofuels for each country and report it as a percentage of total internal renewable water (Table 1). Egypt and Saudi Arabia are highlighted as already using a relatively high percentage (>10%) of freshwater resources. As shown in the far right column of Table 1, assuming growth rates continue, these two countries, joined by Thailand and the USA, are projected to withdraw an increasing percentage of freshwater for these combined purposes.
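The scarcity scenarios above reduce to simple arithmetic: if the ethanol irrigation volume is held fixed while the agricultural water pool shrinks, the ethanol share of the pool rises proportionally. A minimal sketch follows; the baseline 2010 shares are illustrative values back-solved to be consistent with the percentages quoted above, not figures taken from Table 1.

```python
# Share of agricultural water devoted to ethanol feedstock irrigation after
# the total agricultural pool shrinks, with the ethanol volume held fixed.

def share_after_reduction(base_share: float, pool_reduction: float) -> float:
    """New share = old share / (1 - fraction by which the pool is reduced)."""
    return base_share / (1.0 - pool_reduction)

# Illustrative 2010 baseline shares (fractions of agricultural withdrawals).
baselines = {"USA": 0.11, "Canada": 0.09, "Brazil": 0.077}

for country, base in baselines.items():
    climate_only = share_after_reduction(base, 0.10)  # 10% climate-driven cut
    combined = share_after_reduction(base, 0.25)      # 25% climate + reallocation
    print(f"{country}: {base:.1%} -> {climate_only:.1%} -> {combined:.1%}")
# USA: 11.0% -> 12.2% -> 14.7%  (cf. the ~12% and ~15% quoted above)
```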
Discussion
It is evident that water withdrawals for energy production are increasing, a challenge that poses difficult policy questions for climate adaptation and carbon mitigation, as well as for the water-energy nexus as a management tool to meet future demands for these resources. While our analysis presents conservative estimates of water withdrawals, it is evident that: (a) water demands for energy are increasing, (b) few robust estimates exist, (c) climate impacts are expected to exacerbate current and future trends, and (d) water and energy planners have taken little notice of these trends, at least until very recently. We compare the results reported here to previous related work and then briefly demonstrate the implications of these results for several "flashpoint" countries.
Climate Adaptation in the Water and Energy Sectors
The principal climate-change processes that are projected to intensify globally, warming temperature and increasing variability of precipitation resulting in drought and flood extremes [12], drive increased demand for energy and water separately as resources, and via nexus effects that each exerts on the other. Urban adaptation to climate change, for example, tends to raise electricity requirements for (a) air-conditioning resulting from warming, (b) pumping and infrastructure management under conditions of both drought and flooding, and (c) redundant power supplies in transportation, emergency response, and medical systems planned for under conditions of power-grid tripping or more catastrophic failure. Cities are also implementing a range of green-infrastructure interventions to address urban heat island effects of warming, e.g., urban water bodies and landscaping vegetation, which tend to raise water diversions and consumption. In agricultural systems, climate change has a multiplier effect for water and electricity demand as well as adaptive response: warming temperatures significantly increase water requirements for crop growth that can be met through increasing irrigation applications, which in turn can increase power demand for pumping and reduce hydropower generation from storage reservoirs as infrastructure operators are forced to decide on tradeoffs among multiple uses of water. Alternatively, as considered above, agriculture may experience allocative water scarcity, resulting in lower yields, reduced area planted, and, in general, loss of output, financial returns, and farm labor.
Perhaps more significant, however, are the carbon implications of conventional fossil fuel-based generation of electricity. The first column in Table 2 shows rapidly escalating CO2 emissions at the country and global levels, for which a leading cause is the rising demand for electricity. The IPCC [12] indicates that economic growth is a more potent driver than population growth alone. Heightened emissions in turn translate into warming and a speeding up of the hydrological cycle with greater variability in drought and flood cycles. Carbon-mitigation efforts aiming to decarbonize economic activity and future growth consider alternative fuels, including hydropower and biofuels among other sources, all of which portend future increases in water consumption.
Relevance to Other Estimates of Intensity of Water Demand for Energy
As Spang et al. [8] point out, there is a global shortage of detailed estimates of the water consumption of energy generation. Still, to interpret the results reported here on geographic water availability on a per-country basis, it is helpful to consider how they relate to recent work in a similar vein. In particular, Spang et al. [8] developed the first country-level comparison of water consumption for fuels and electricity production using a derived metric of 'water consumption for energy production'. They calculated water consumption for production of various sub-types of fossil, nuclear, and biomass fuels and then applied them to the global scale, generating national energy portfolios for 158 countries. In their companion paper [24], they normalized these earlier per-country water consumption results by various other indicators (GDP, population, total energy production, and regional water availability). The results reported here expand on this approach, using more recent data as inputs (2010) and by examining temporal trends, in the form of compound annual growth rates, rather than a single snapshot in time, as was done in other previous studies [20,21,41,42]. Based on the results described above, we have identified several flashpoint countries that warrant further discussion.
Comparative Analysis of Flashpoint Countries
We observe increasing water demands for conventional and nuclear electricity generation at alarming levels for several countries. As shown, at least 13 countries are already using a relatively high percentage, 10% or more, of total industrial water withdrawals for these purposes. Many of the very countries projected to increase thermoelectric generation are arid and already using relatively high amounts of freshwater resources for these power sources. Surprisingly, we find that a few countries, e.g., Saudi Arabia and South Africa as shown above, already appear to be diverting more water for thermoelectric and nuclear power generation than the total reported industrial water withdrawals. Our analysis does not account, however, for dry cooling systems, e.g., for coal-based generation as increasingly implemented in South Africa. The results for Saudi Arabia may seem counterintuitive, but Spang et al. [24] made a similar observation that both the United Arab Emirates and Qatar were using over three times the total amount of water naturally internally available in those countries.
We find a rapid expansion in recent years of irrigation of ethanol crops in the U.S. and associated water use. Upper-bound estimates place Brazil, Canada, and the USA at close to 10% of total agricultural water applied to ethanol crop production. If recent growth rates in ethanol crop production under irrigation were to continue, the associated water withdrawals would escalate to unrealistic levels; therefore, planted acreage will not increase indefinitely. However, even a more modest gradual increase would be accompanied by an increase in the use of irrigation to intensify production in certain regions, depending on climatic conditions. This appears to be the case for Brazilian sugarcane production. Additionally, while we assumed soybean production for Brazilian biodiesel was cultivated under rainfed conditions, FAO (AQUASTAT) [26] reports that 624,000 ha were irrigated in 2006, 11.7% of all irrigated cropland. Increases in the production of biofuels based on irrigated feedstocks are highly concerning because, as others have pointed out, biofuel feedstock cultivation is the most water-intensive of all fuel sources [8]. Chiu et al. [36] observed that the continued expansion of corn cultivation for ethanol in the Great Plains and Western USA is likely to exacerbate the expected water challenges in those regions. Mulder et al.'s [29] analysis of water use efficiency led them to conclude that "the development of biomass energy technologies in scale sufficient to be a significant source of energy may produce or exacerbate water shortages around the globe and be limited by the availability of fresh water."
Conclusions
We have assessed current and future trends in energy production, specifically electricity generation and biofuel feedstocks and processing, in relation to the consumption of water under changing climatic conditions. Despite ongoing energy diversification, fossil fuels remain the principal energy source and will for some time to come. Policy options to address these challenges can be difficult and complex [43] and are often overlooked in sectorally focused planning [44]. Technology enhancement and the means to spur innovation are crucial choices [45]. Technological change tends to be most dynamic in countries with low installed capacity, which can allow for leap-frogging in the adoption of technologies. However, access to and the cost of new technologies can be formidable challenges that the global community must address through funding of adaptation tied to verifiable benchmarks. Particularly for gains in efficiency, technology substitution has already resulted in progress. While this allows for better input-output conversions, e.g., reducing the coefficient values used and cited above, rebound and take-back effects [46] that tend to increase, instead of limit, resource use must be explicitly addressed through programmatic interventions, incentives for conservation tied to efficiency, and low-carbon adaptive strategies.
Adaptation of water use under climate change and the implications this holds for energy demand are often not explicitly considered in climate or energy policy. Various coupled energy-water policy measures have been identified. These include water-conserving energy portfolios as described, e.g., in the United States by Scott et al. [13]. Such options will be increasingly adopted, given the financing and public-resistance pressures against large, new energy and water infrastructure. We have referred above to allocative water scarcity, yet intersectoral water transfers can be used to enhance energy production while intensifying agriculture (invariably the source of water transferred) and assuring food security. The long-distance conveyance of energy through electricity grids allows for generation that can be distant from the location of acute water scarcity, an example of virtual water for energy. Policy-makers must be cognizant that the reverse does not occur, i.e., locations with adequate water for power generation must not draw electricity from generation sources in water-scarce locations, even though financial advantages for such virtual exchange may exist. The use of impaired waters (effluent, saline and brackish waters) for energy production will become increasingly common, just as seawater is used for thermoelectric cooling. Finally, hydropower is a unique water-energy nexus technology and policy domain in which tradeoffs must be explored [47] and rights and regulations must be explicitly accounted for [48]. As with the other options discussed above, integrating technology and policy options to address water, energy, and climate challenges in an integrated manner is above all a question of institutional arrangements.
Figure 2. Share of ethanol in total biofuel production, 2010; and growth in total biofuel production.
Table 1. Water withdrawals for thermoelectric and nuclear power generation as fractions of industrial water withdrawals; and total water withdrawals for energy (thermoelectric and nuclear power generation, and biofuels feedstock irrigation), 2010 and 2020. Water withdrawals are defined as diversions from freshwater bodies (FAO AQUASTAT) [26], not depletion through evaporation. Color coding shown at bottom of table. * Lower values assume no irrigation of ethanol feedstock; upper values assume some irrigation based on estimates reported by de Fraiture et al. (2008) [33]. ** Lower values for Brazil and USA assume water consumption for ethanol processing but no irrigation of feedstock.
Table 2. Increases in carbon dioxide emissions, agricultural freshwater withdrawals, and irrigation freshwater withdrawals based on reported data (FAO AQUASTAT) [26]. Color coding shown at bottom of table.
Innovative strategies devised by Indian microfinance institutions to achieve cost efficiency
This study is a discussion of the ‘Non-Governmental Organization-Microfinance Institution Partnership Model’ and ‘Securitization Model’ used by Indian microfinance institutions to achieve cost efficiency. These two models are effective strategies devised and used by efficient and sustainable Indian MFIs to reduce their operating cost and financing cost. Achieving such cost efficiency is crucial for microfinance institutions to attain operational self-sustainability without levying high interest rates. Using the interview method, the study elicits information on these innovative strategies and recommends them as worthy of emulation by other microfinance institutions operating in the Indian microfinance industry.
Introduction
Microfinance refers to the provision of financial services to low-income clients. By providing financial access to poor clients, microfinance plays a decisive role in financial inclusion. It economically empowers the poor and integrates them into the mainstream of the economy. The institutions that provide such financial services to the poor are called Microfinance Institutions (MFIs). These MFIs act in an environment of high information asymmetric credit market risk, where there is a dearth of information about the credit history of poor clients. These information asymmetric credit market risks are mitigated by the MFIs using unconventional group lending models that work on the joint-liability principle, sans collateral. Though this unconventional group-lending model has the potential to mitigate risk and facilitate financial intermediation at the bottom of the pyramid, it has one major challenge associated with it: high intermediation costs. In order to cover these high intermediation costs and attain operational self-sustainability (OSS), it is imperative that MFIs remain cost efficient.
OSS denotes the ability of an MFI to earn revenue to cover its costs and reach the poor now and in the future (Schreiner, 1996). More specifically, it is the ability of an MFI to generate enough revenue from its operations to cover its financing costs, transaction costs and loan loss provisions. Attaining OSS is imperative for an MFI to perpetually operate in the sector. In order to attain OSS without resorting to the practice of levying high interest rates on the poor, it is essential that MFIs concentrate on achieving cost efficiency.
Considering the pertinence of achieving cost efficiency, this work aims to understand the innovative strategies used by efficient and sustainable MFIs to remain cost efficient. This is expected to provide a valuable learning experience for other MFIs which aim to improve their cost efficiency.
Literature Review
Information asymmetric risk arises in credit-lending transactions because the lender has less information about the creditworthiness of the borrower than the borrower himself. Such risks are all the more exacerbated in the microfinance market as poor borrowers lack credit history. Information asymmetric credit market risks denote the ex-ante risk of adverse selection 1, the interim risk of moral hazard 2 and the two ex-post risks of costly audits and enforcement 3 (Akerlof, 1970; Scholtens & Wensveen, 2003; Stiglitz & Weiss, 1981).
MFIs mitigate these information asymmetric credit market risks through their unconventional group lending models: adverse selection, by effecting group formation among the poor borrowers with joint liability; moral hazard, by inducing group members to influence the way other members select their projects; costly monitoring, by helping the lender avoid external audits; and enforcement problems, by encouraging borrowers to repay their loans without the lender having to impose sanctions (Ghatak & Guinnane, 1999; Ghatak, 2000).
But the group-lending model used by MFIs to mitigate these risks results in high operating costs for the MFIs (Thorat, 2006; Savita, 2007). The group lending model entails peculiar costs such as group formation costs, costs of training the borrowers on the procedures, and costs of a higher degree of supervision and a higher frequency of installment payments, all adding to the operating costs of the MFI. Moreover, since the average microfinance loan size is small, the transaction cost on a percentage basis for such microfinance loans tends to be higher. Adding to this, MFIs experience less control over their financing costs, as the cost of funds sourced from banks and financial institutions usually comes in fixed ranges of pricing. Thus the high intermediation costs incurred by MFIs are a major challenge at the stake of their sustainability.
In the Indian microfinance industry, average operating costs range from nearly 6 to 18 per cent and average financing costs range from nearly 10 to 14 per cent of the interest rates levied by the MFI (Chakraborty, 2010). The Malegam Committee, a special sub-committee appointed by the Reserve Bank of India during the post-microfinance-crisis period in India, cited that on average the interest rate charged by Indian MFIs came to 28-36 per cent in the year 2009-10. The Malegam Committee Report (2011) also cites a few large Indian MFIs as levying interest rates close to 50.53 per cent. This depicts that on average, Indian MFIs experienced operating costs and financing costs on the higher end of the average, which makes them charge higher cost-covering interest rates from their poor clientele so as to remain sustainable. But after undertaking an efficiency and sustainability assessment on a sample of 50 Indian MFIs for the year 2009-10, Nadiya & RadhaRamanan (2011) cite the presence of a few efficient MFIs which operate sustainably by levying a reasonable interest rate of 26 per cent 4 or lower from the poor. It is of interest to know how these efficient and sustainable MFIs manage their operating costs and financing costs, as the strategies used by them can be emulated by other MFIs aiming at cost efficiency.

1. Adverse selection risk arises when the lender has poor information about the borrowers while negotiating the credit-lending transaction. With limited information on the poor borrowers, the lender cannot screen the riskier borrowers from the safer ones. Therefore there is an adverse selection risk of lending to the more risky borrowers.
2. Moral hazard risk arises because the lender has difficulty in monitoring the behavior of the poor borrowers once the loans are disbursed. Therefore, the lender does not know whether the loan is being used optimally for the intended purpose for which it is sanctioned. The lender lacks information about the performance of the credit-lending transaction and the probability for the loans disbursed to be misused, results in the risk of moral hazard.
3. Costly audit and enforcement risks arise because it becomes too costly for the lender to audit and enforce payments on the small loans disbursed to the poor, which lack collateral support.

This study therefore dedicates efforts to understanding the innovative strategies used by two of these efficient and sustainable Indian MFIs. One MFI, which experiences lower operating costs than the market average, is approached to understand the strategy devised and used by it to reduce operating cost. Another MFI, which has lower financing costs than the market average, is approached to understand how it reduces its cost of funds.
Methodology
The method of semi-structured interviews is used to understand the strategies used by the MFIs. The interview method is chosen as it enables the participants (MFI managers) to freely express their views and discuss the strategies used for reducing cost. The names of the interviewed MFIs are not disclosed in this study for confidentiality reasons. Therefore the MFI with low operating cost 5 is proxied by the name 'MFI A' and the MFI with low financing cost 6 is proxied by the name 'MFI B'. The managers of MFI A and B were asked to explain the innovative strategies they use to reduce operating costs and financing costs, respectively. The strategies are documented for the reference of other Indian MFIs.
Non-Governmental Organization-Microfinance Institution Partnership Model to Reduce Operating Costs
The manager of MFI A uses the Self-Help Group (SHG) credit delivery model to disburse credit to poor clients. The SHG credit delivery model is the home-grown credit delivery model used by Indian MFIs. In this model, affinity groups of around fifteen to twenty poor individuals are formed. The SHGs mostly comprise women with a homogeneous socio-economic background, sharing the willingness to improve their living conditions. The SHGs serve as a platform for a range of welfare services that can empower poor women. The group members in an SHG provide financial support to one another through internal credit assistance made from their pooled savings. Later, after inculcating financial discipline among themselves, the SHGs borrow from MFIs for on-lending to the group members.
MFI A observes that there are huge group formation costs associated with the SHG model, prior to the commencement of financial intermediation activities. In order to ensure that the groups are mature enough to be linked to the MFI, the latter provides training and nurturing activities for the members prior to the issue of the first loan. Such initial group formation and nurturing activities were found imperative for empowering clients for whom credit is not the only missing link to development. For MFI A such activities usually spread over a period of 6 months, accounting for around INR 7000 per group. MFI A considers this a major chunk of its operating cost, which forms an inevitable part of its credit delivery model. In due course of its operations, it devised an innovative strategy to deal with these high group formation costs. MFI A found that it would be possible to considerably reduce the high group formation costs of SHGs by entering into Non-Governmental Organization (NGO)-MFI partnerships. This partnership model is regarded as an innovative strategy devised by MFI A, as it was the first Indian MFI to adopt this model in the Indian microfinance industry. MFI A partnered with an NGO and outsourced its group formation and nurturing activities to the latter at a nominal cost. The manager states:
"We pay 350 INR per linkage or in special cases 1 per cent of the loan amount lent to the NGOs as commission. The actual cost of group formation comes to 7000 INR per group. Cost savings for us on account of this partnership is around 6650 INR per group."
The NGO-MFI partnership model is depicted below in Figure 1.1.

Figure 1.1 Non-Governmental Organization-Microfinance Institution Partnership Model
As shown in Figure 1.1, the MFI enters into a partnership with an NGO, whereby the latter forms the SHGs and links them to the former. The MFI pays a commission to the NGO for facilitating and undertaking this group formation task. The NGO usually has affinity groups of the poor affiliated to it for activities related to its own welfare mission, and therefore it normally does not have to put in additional effort to form SHGs. The NGO interacts with the group members on a day-to-day basis, and therefore inculcating financial discipline among the members is done hand-in-hand with its normal activities. The commission received from the MFIs for undertaking these group formation and nurturing services serves as an additional income for the NGO, though it is usually a minimal amount. From the NGO's perspective, by linking the SHGs with the MFI, it is able to address the capital constraints faced by its poor clients who undertake income-generating activities. This advantage acts as an incentive for the NGO to form SHGs without compromising on the financial discipline of the members. Identifying such NGOs, which have a motivation to enter into this partnership, is crucial for the success of the model.
From the MFI's perspective, this partnership relieves it from undertaking the group formation and nurturing activities, thereby enabling it to concentrate more on its core activity of financial intermediation. Thus in the NGO-MFI partnership model, MFI A, which otherwise incurs INR 7000 on group formation, outsources this task to the NGO for a nominal fee of INR 350. This results in a saving of INR 6650 per group for the MFI. Therefore in this model the MFI merely lends to the SHGs, which are already formed and nurtured by the NGO. The SHGs then make the loan repayments directly to the MFI. The repayment of loans by the group members is not the NGO's responsibility. But in the MFI's experience, since loan delinquencies adversely affect the NGO's chances of sustaining the capital support for its clients, the NGO takes special care to ensure the quality and financial discipline of the groups formed. Therefore, the MFI has never experienced this partnership adversely affecting its portfolio quality. Overall, the NGO-MFI partnership is an effective strategy that MFI A recommends for minimizing operating cost.
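The cost arithmetic of the partnership is straightforward; the sketch below uses the figures quoted by MFI A's manager (INR 7000 in-house formation cost, INR 350 flat commission, or 1 per cent of the loan amount in special cases).

```python
# Per-group and aggregate operating-cost savings under the NGO-MFI
# partnership model, using the figures quoted by MFI A's manager.

IN_HOUSE_COST = 7000.0     # INR per group, formation and nurturing (~6 months)
FLAT_COMMISSION = 350.0    # INR per linkage paid to the partner NGO
SPECIAL_RATE = 0.01        # 1% of the loan amount, used in special cases

def saving_per_group(loan_amount: float = 0.0, special_case: bool = False) -> float:
    """Operating-cost saving (INR) for one SHG linked through the partnership."""
    commission = SPECIAL_RATE * loan_amount if special_case else FLAT_COMMISSION
    return IN_HOUSE_COST - commission

print(saving_per_group())                            # 6650.0 INR, as quoted
print(100 * saving_per_group())                      # 665000.0 INR across 100 SHGs
print(saving_per_group(50000.0, special_case=True))  # 6500.0 INR at 1% of a 50000 INR loan
```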
Securitization or Portfolio Buy-out Model for Reducing Financing Costs
The manager of MFI B observes that the financing costs of MFIs are almost uncontrollable in nature. This is because the rate at which MFIs source funds from banks is almost fixed in nature. The cost of funds always averages around 12-13 per cent for the majority of MFIs in the industry. Nevertheless, MFI B uses and recommends the adoption of the securitization model to reduce the financing costs of MFIs. MFI B was among the pioneers to have applied the securitization model in the Indian microfinance context. Though a popular model in the banking sector, it had fewer applications in the Indian microfinance industry until MFI B proved the potential of this strategy to reduce financing costs for MFIs. In this sense the strategy is novel and innovative in the microfinance context. The securitization process as explained by the manager is presented below in Figure 1.2.
Figure 1.2 Securitization Process
As shown in Figure 1.2, after issuing loans to the clients, the MFI transfers the loans to banks interested in a securitization deal. The bank which purchases the pool of assets then pays cash back to the MFI at a discounted rate of interest. The MFI will continue to service the sold-out loans on behalf of the bank and will pass on the collections periodically to the bank. The MFI will be financially responsible for any losses on the sold-out loans, up to a certain percentage agreed at the time of the securitization contract. This clause is termed the first loss default guarantee in the contract.
Just as the MFI gets its loans liquidated, the bank too has an advantage in entering into such a deal. The bank can use these purchased loans to fulfill its priority sector lending requirements. It can also pool these assets and redistribute them as securities to new investors. For the investor, securitized microfinance loans are attractive as they mature much faster than other industry investments. The maturity period ranges from 6 months to 3 years, and portfolio quality is generally high on microfinance loans. Thus securitization is a win-win deal for all the parties involved. But since there is no active secondary market for securitized microfinance instruments, the banks usually either use the loans to meet their priority lending requirements or resell them to other banks that face a similar need. If the redistribution element in this model is absent, it becomes a mere portfolio buy-out model between the MFI and the bank, with no issue of securities. MFI B suggests that such a portfolio buy-out model can be used by non-NBFC MFIs. So the MFI recommends the use of either the securitization or the portfolio buy-out model as a means for MFIs to reduce their cost of funds.
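To illustrate the deal structure described above, here is a minimal sketch of the cash flows in a portfolio buy-out with a first loss default guarantee. All figures (portfolio size, discount rate, guarantee cap, loss amount) are hypothetical, and the discounting is deliberately simplified; only the structure follows the text.

```python
# Simplified cash flows of a portfolio buy-out / securitization deal:
# the bank pays discounted cash upfront, the MFI keeps servicing the loans,
# and the MFI absorbs losses up to the agreed first loss default guarantee.
# All numbers below are hypothetical illustrations.

def buyout_cashflows(portfolio: float, discount_rate: float,
                     fldg_fraction: float, actual_loss: float):
    """Return (upfront cash to MFI, loss borne by MFI, loss borne by bank)."""
    upfront_cash = portfolio * (1.0 - discount_rate)
    mfi_loss = min(actual_loss, fldg_fraction * portfolio)  # capped by the guarantee
    bank_loss = actual_loss - mfi_loss
    return upfront_cash, mfi_loss, bank_loss

# Hypothetical: INR 10 million portfolio sold at a 9% discount (vs. the
# 12-13% cost of bank funds cited above), with a 5% first-loss guarantee.
cash, mfi_loss, bank_loss = buyout_cashflows(10e6, 0.09, 0.05, actual_loss=300_000)
print(cash, mfi_loss, bank_loss)  # 9100000.0 300000.0 0.0
```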
Implications for Indian Microfinance Managers
Based on the discussions in this paper, it is suggested that the NGO-MFI partnership model can be used for reducing the operating costs of Indian MFIs that use the SHG credit delivery model. MFIs which emulate this model are recommended to select NGOs which have the motivation to enter into and retain the partnership. This is crucial for ensuring the cost advantage without compromising on portfolio quality. NBFC-MFIs are recommended to use the securitization model for reducing financing costs, whereas non-NBFC MFIs are advised to use the portfolio buy-out model to reduce their cost of funds. As MFIs cannot rely on continued donor support for funding their operations, adopting strategies that reduce the cost of funds sourced from banks and financial institutions is crucial.
Tracing multiple scattering trajectories for deep optical imaging in scattering media
Multiple light scattering hampers imaging objects in complex scattering media. Approaches used in real practices mainly aim to filter out multiple scattering obscuring the ballistic waves that travel straight through the scattering medium. Here, we propose a method that makes the deterministic use of multiple scattering for microscopic imaging of an object embedded deep within scattering media. The proposed method finds a stack of multiple complex phase plates that generate similar light trajectories as the original scattering medium. By implementing the inverse scattering using the identified phase plates, our method rectifies multiple scattering and amplifies ballistic waves by almost 600 times. This leads to a significant increase in imaging depth—more than three times the scattering mean free path—as well as the correction of image distortions. Our study marks an important milestone in solving the long-standing high-order inverse scattering problems.
Introduction
In ordinary daily life, we visually perceive the world around us from the light scattered by objects. Our brain processes the received signals assuming that they are scattered only once from the surface of the objects 1. In situations where the objects are embedded deep within a scattering medium, almost all the received signals are scattered multiple times. Without knowing their travel history, our brain cannot process them properly, thereby perceiving the objects as obscured. Examples include automobiles moving on a foggy day, abnormal cells hidden under the skin tissues, and nervous systems under the cranial bone 2. For this reason, imaging modalities actively used in real practices aim to suppress multiple scattering in order to find the ballistic waves traveling straight through the scattering medium. However, the exponential attenuation of ballistic waves with depth sets a hard limit on their achievable imaging depth 3,4. To go beyond this limit, it is necessary to make use of multiple scattering for image reconstruction. This requires finding the trajectories of multiply scattered waves, which will allow for implementing the inverse scattering to see the objects clearly as if there were no scattering medium in the first place (Fig. 1). However, it is extremely difficult to trace individual light trajectories, especially when the objects are completely embedded in a thick and bulk scattering medium. In this general situation, we should select and trace only those backscattered waves that make roundtrips to the embedded target objects while ruling out the others that travel to shallower depths.

There have been many prior reports making use of multiple scattering for image reconstruction. However, almost all the previous works handled the problem in limited conditions. In most cases, the target object is in free space on the opposite side of either a scattering layer 5-7 or a wall 8,9 rather than embedded within a scattering medium. Some studies demonstrated wave focusing on a target in a scattering medium 10-12. However, wave focusing leads to imaging only for a thin scattering layer, where a focus can be scanned within the so-called memory effect range 10,13. The thicker the scattering medium, the narrower the memory effect range, which eventually becomes unusable for imaging.

Tracing multiple scattering trajectories in general situations of imaging an embedded target is the task of solving an inverse scattering problem 14. In this context, imaging modalities can be classified by the number of scattering events that they can trace. The first-order approach finds the light scattered only once by the object of interest 15,16. Almost all microscopic imaging modalities relying on ballistic waves fall into this category. They employ various gating operations 17-23 to filter out the multiply scattered waves. High-order approaches have been developed to trace multiple scattering for extending imaging depth; however, they can trace only a limited number of scattering events. For example, adaptive optics dealing with the perturbation of either the excitation and/or returning beams 24-31 can be considered second- or third-order approaches. Approaches based on the memory effect can be classified in a similar category 13,32,33.
In imaging an object embedded within a scattering medium, tracing beyond the third order has not been possible thus far, mainly because of insufficient sampling and a lack of a physics model to address the underdetermined nature of the problem, especially in the presence of strong multiple scattering with no interaction with the objects of interest. Here, we propose a method, the multiple scattering tracing (MST) algorithm, that can keep track of a number of scattering events responsible for reconstructing an object embedded deep within a thick scattering medium, using the experimentally measured reflection matrix of the scattering medium 29,30. The algorithm uses the intrinsic correlations of multiple scattering to find a set of complex phase plates that reproduce their trajectories in the scattering medium enclosing the target objects. Through the inverse of the transmission matrix of the phase plates, which is conceptually equivalent to placing an inverse scattering block counteracting the scattering medium, we can rectify the multiple scattering and obtain a diffraction-limited image of the object as if there were no scattering medium (as illustrated in Fig. 1). Essentially, this process realizes the deterministic use of multiple scattering for microscopic image formation by converting the multiply scattered waves to ballistic signal waves. We validated the proposed concept using numerical simulation and experimentally demonstrated its performance by conducting in vivo imaging of a mouse brain under an intact skull, an extreme form of a scattering medium. The MST algorithm could trace up to, but is not limited to, 17 scattering events and successfully identified a stack of phase plates equivalent to a thick skull. In these demonstrations, the ballistic signal waves were amplified almost 600 times by converting the multiply scattered waves that are otherwise considered noise. Our work marks an important breakthrough in solving the high-order inverse scattering problem in the general situation when strong multiply scattered waves exist without any interaction with the target object.

Figure 1. A scattering medium can be made transparent by placing a virtual inverse-scattering block counteracting the scattering medium. A target object, which is the letters "MST," located under the scattering medium becomes visible because of the inverse-scattering block. The red and blue arrows indicate the ballistic and multiply scattered waves, respectively.
Working principle
Let us consider probing an object embedded within a scattering medium using a light wave with a specific incidence angle θ_in (Fig. 2a). While there is a tiny fraction of the incident wave, termed the ballistic wave, that preserves its propagation direction in the scattering medium (red arrows), the majority are scattered multiple times on their way to and from the object of interest (blue arrows). This multiple scattering leads to random spreading of the propagation angles, thereby undermining the reconstruction of the image information. In the context of imaging, one can find the object spectrum from the momentum difference Δk(θ_o; θ_in) = k_0 sin θ_o − k_0 sin θ_in for a ballistic wave, where k_0 = 2π/λ, λ is the wavelength of the light source, and θ_o is the angle of the backscattered wave 21. However, Δk(θ_o; θ_in) of a multiply scattered wave differs from the object spectrum because the actual incident and reflected angles at the object are θ'_in ≠ θ_in and θ'_o ≠ θ_o, respectively. The multiple scattering process is random but deterministic. If one can keep track of the multiple scattering trajectories and thus find θ'_in and θ'_o, then the object spectrum can be obtained from Δk(θ'_o; θ'_in). However, this is a heavily under-determined problem because numerous possible trajectories generate the same θ_o. Therefore, in most studies, only the ballistic waves therein are exploited for image reconstruction 24-27,29.
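As a quick numerical illustration of the momentum-difference relation above, the sketch below evaluates Δk for a ballistic wave; the wavelength matches the 900 nm used in the simulation later, while the angles are arbitrary example values.

```python
import numpy as np

# Object-spectrum coordinate sampled by a ballistic wave:
# dk = k0 * (sin(theta_o) - sin(theta_in)), with k0 = 2*pi/lambda.
wavelength = 900e-9  # m
k0 = 2.0 * np.pi / wavelength

theta_in = np.deg2rad(10.0)  # example incidence angle
theta_o = np.deg2rad(35.0)   # example backscattered angle
dk = k0 * (np.sin(theta_o) - np.sin(theta_in))
print(f"dk = {dk:.3e} rad/m")

# For a multiply scattered wave the true angles at the object (theta_in',
# theta_o') differ from the measured ones, so the same detected theta_o
# would be assigned to the wrong object-spectrum coordinate.
```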
To keep track of the multiple scattering trajectories and use the multiply scattered waves for image reconstruction, we set up two strategies: experimental recording of a time-gated reflection matrix R, and the development of the MST algorithm that finds θ'_in and θ'_o from the measured reflection matrix. In the first strategy, we measure the electric fields of the backscattered waves, including both the ballistic wave and multiply scattered waves, whose flight times are identical to that of the ballistic wave. By repeating the same measurements for all the possible incident angles, we can construct a reflection matrix R that describes the interaction between a light wave and the scattering medium (see Methods and Supplementary Section 1 for a detailed experimental setup and the construction of R).
As a second strategy, we propose a powerful MST algorithm that exploits the wave correlation in the measured R. Using this algorithm, we can model the thick and inhomogeneous volumetric scattering medium as a discrete stack of thin transmissive phase plates that generates light trajectories similar to those generated by the original scattering medium (Fig. 2b). In this case, we assume that forward scattering is dominant and ignore the reflections and absorptions in the scattering medium. In fact, time-gated detection plays a critical role in guaranteeing the validity of this assumption, because waves experiencing multiple reflections back and forth inside the scattering medium tend to have much more elongated flight times than the ballistic waves and forward scattering components that interact with the target object. In this multilayer model, a light wave is assumed to undergo a spatially varying phase shift φ_k(r) at each kth layer located at a depth z = z_k, with r = (x, y) the lateral coordinate, and then propagate through free space to the adjacent layer. Therefore, the inward propagation of the wave from the surface of the scattering medium at z_s to the object plane at z_0 is described by a transmission matrix T = ∏_k P_k Φ_k, the ordered product of the phase-plate operators Φ_k and the free-space propagation operators P_k between adjacent layers (Fig. 2c). Considering the roundtrip travel of the light wave, the reflection matrix of a scattering medium is modelled in the space domain as

R^(c) = T^T O T,    (1)

where O is a diagonal matrix describing the amplitude reflection o(r, z_0) of the object of interest located at the object plane, and T^T is the transpose of T, which accounts for the outward propagation of the reflected waves from the object back to the surface of the scattering medium. Essentially, there are 2N phase layers (for N phase plates) and an object layer in R^(c) that describe the roundtrip of the waves entering the surface of the scattering medium at z_s, travelling all the way to the object, and returning back to the surface. Our task is to find the set of φ_k(r) and o(r, z_0) from the experimentally measured R. Finding φ_k(r) is identical to identifying T, which in turn provides information on θ'_in and θ'_o. Multiplying the inverses of T and T^T on the input and output sides of the measured R, respectively, we can compensate for the multiple scattering effects, which is equivalent to placing the virtual inverse-scattering block shown in Fig. 1.
Our MST algorithm consists of two major steps. The first step is to access each kth layer in R, and the second step is to find φ_k(r) asymptotically by taking advantage of the correlations of the wave fields interacting with the object. By repeating this process for all the 2N layers iteratively, we gradually reach the ground-truth φ_k(r). We describe the detailed process of the MST algorithm in the Methods section and Supplementary Section 2.2. As a result of the MST algorithm, we obtain the reconstructed phase map φ̂_k(r) of each layer. The correction transmission matrix T̂ is then constructed from φ̂_k(r), which yields the light trajectory. Multiplying the inverses of T̂ and T̂^T on the input and output sides of R produces a corrected reflection matrix R̂ that compensates for the multiple scattering effects. Finally, the object function o(r, z_0) is reconstructed from the diagonal elements of R̂ (see Supplementary Section 2.2 for the flow chart describing the detailed iteration procedures).
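The forward model of Eq. (1) and its inversion can be prototyped compactly. Below is a minimal one-dimensional toy sketch, not the paper's implementation: grid size, plate depths, and phase patterns are arbitrary demo values, and T is taken as exactly known so that only the algebra R̂ = (T^{-1})^T R T^{-1} = O is demonstrated.

```python
import numpy as np

# 1-D toy of the multilayer forward model, Eq. (1): R = T^T O T, with
# T = prod_k P_k Phi_k built from phase plates Phi_k and angular-spectrum
# free-space propagators P_k. Values below are arbitrary demo choices.

rng = np.random.default_rng(0)
n, dx, wl = 64, 0.5e-6, 900e-9            # samples, pixel pitch (m), wavelength (m)
k0 = 2 * np.pi / wl
kx = 2 * np.pi * np.fft.fftfreq(n, dx)    # transverse spatial frequencies
kz = np.sqrt(np.maximum(k0**2 - kx**2, 0.0))

F = np.fft.fft(np.eye(n), axis=0)         # DFT matrix (space -> k-space)
Fi = np.fft.ifft(np.eye(n), axis=0)       # inverse DFT matrix

def prop(dz):
    """Free-space propagation matrix over distance dz (angular spectrum)."""
    return Fi @ np.diag(np.exp(1j * kz * dz)) @ F

z_surf = 110e-6                            # surface height above the object plane
plate_z = [100e-6, 70e-6, 50e-6, 35e-6]    # plate heights above the object plane
phases = [rng.uniform(0, 2 * np.pi, n) for _ in plate_z]

# Inward transmission matrix T: surface -> plates -> object plane.
T = np.eye(n, dtype=complex)
z_prev = z_surf
for z_k, phi_k in zip(plate_z, phases):
    T = np.diag(np.exp(1j * phi_k)) @ prop(z_prev - z_k) @ T
    z_prev = z_k
T = prop(z_prev) @ T                       # final hop down to the object plane

o = np.zeros(n); o[28:36] = 1.0            # toy object reflectance o(x, z_0)
R = T.T @ np.diag(o.astype(complex)) @ T   # forward model, Eq. (1)

# Inverse scattering with the (here, exactly known) T: recover O from R.
T_inv = np.linalg.inv(T)
R_corr = T_inv.T @ R @ T_inv               # equals diag(o) up to numerical error
print(np.allclose(np.real(np.diag(R_corr)), o, atol=1e-8))  # True
```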
We would like to emphasize that our MST algorithm is specially designed for deep imaging in a thick scattering medium. The incident wave experiences substantial lateral spread throughout the thick and bulk scattering medium, whereas the backscattered waves are recorded only for a limited field of view. Therefore, there are substantial multiple scattering components in R that have no interaction with the object of interest and thus cannot be modelled by R^(c). This renders the direct minimization of the difference between R and R^(c), a strategy mainly employed in conventional approaches for addressing higher-order inverse scattering problems in isolated single cells 34,35, highly underdetermined and ill-posed. Instead, our algorithm aims to find those wave components in R that are responsible for the object image reconstruction. Based on wave correlation, the algorithm identifies the phase functions at multiple depths to coherently accumulate the object-information-carrying multiple scattering signal components. This novel approach is highly robust to the existence of multiple scattering that remains unaccounted for by the model and does not carry object information. Furthermore, its computational cost can be much lower than that of a minimization algorithm. As shown below, it takes only 10 min to trace nine scattering events generated by four phase plates and one object plane when the sampling area is 100 × 100 pixels (field of view (FOV) size: ~50 × 50 μm²).

Figure 2. a, Multiply scattered waves (blue arrows) change propagation directions multiple times, whereas the ballistic wave (red arrows) preserves its propagation direction when propagating through the scattering medium. b, Modelling of the scattering medium by a discrete stack of phase plates. Each phase plate is described by a phase function φ_k(r) at a depth z_k. The object of interest is located at z_0. c, Illustration of our forward model describing the reflection matrix with a stack of discrete phase plates. For better understanding, the reflection process by the target object is shown to the right of the target.
Proof of concept with numerical simulation
We performed numerical simulations to validate our proposed method of solving the inverse scattering problem (Fig. 3). Here, we consider a case where a target object is covered with multiple discrete phase plates whose phase functions are known a priori (Fig. 3a). Four phase plates are placed at depths of $\{z_{k=1\ldots4}\} = \{35, 50, 70, 100\}$ μm, respectively, from the target object shown in the inset of Fig. 3a. The phase functions $\phi_k$ of the four layers are shown in Fig. 3b. We set the size of each phase map to 200 × 200 μm², while that of the target object is set to 45 × 45 μm². Each phase map is filled with a random phase pattern to mimic a realistic scattering medium. In addition, different polygonal patterns and numbers are superimposed on each layer for ease of performance evaluation of the proposed algorithm. Then, we numerically generate the reflection matrix $\mathbf{R}$ of the sample at a wavelength of 900 nm and an angular coverage of 1.0 NA based on Eq. (1).
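A minimal sketch of how such random phase plates might be synthesized is given below; the smoothing scale and phase range are our assumptions, since the paper does not specify its exact recipe.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def random_phase_plate(n, smooth_px=3, max_phase=2 * np.pi, seed=None):
        """Random phase map with a finite correlation length, mimicking a
        realistic scattering layer; values scaled to [0, max_phase]."""
        rng = np.random.default_rng(seed)
        noise = gaussian_filter(rng.standard_normal((n, n)), smooth_px)
        noise = (noise - noise.min()) / (noise.max() - noise.min())
        return max_phase * noise

    plates = [random_phase_plate(512, seed=k) for k in range(4)]  # four layers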
By applying the MST algorithm to $\mathbf{R}$, we reconstructed the phase functions $\phi_k^c$ of the four scattering layers, as shown in Fig. 3c (see Supplementary Section 3 for the detailed iteration process of the MST algorithm). Essentially, our algorithm located nine scattering events, considering the roundtrip and the scattering at the object. Note that a circular sub-area of each phase function was recovered owing to the intrinsic cone-shaped imaging geometry (Methods and Supplementary Section 2.3). The numbers and polygonal patterns in each phase function were clearly identified, proving the efficacy of the proposed algorithm. By taking the diagonal elements of $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$, we could obtain a conventional confocal reflectance image of the object, as shown in Fig. 3d 29. Due to the multiple scattering by the upper scattering medium, the detailed object structure was invisible. We constructed $\mathbf{T}^c$ using the identified phase functions $\phi_k^c$ and obtained the corrected reflection matrix $\mathbf{R}^c$ by applying the inverses of $\mathbf{T}^c$ and $(\mathbf{T}^c)^{\mathsf T}$ to the input and output sides of $\mathbf{R}$, respectively. Then, the scattering-free object image was reconstructed from the diagonal elements of $\mathbf{R}^c_{\mathbf{r}_o,\mathbf{r}_o}$, which is referred to as an MST image. Figure 3e displays the reconstructed MST image, showing excellent agreement with the ground-truth object image depicted in Fig. 3a. To quantify the accuracy of the proposed algorithm, we calculated the Pearson correlation coefficients of each phase function $\phi_k^c$, the transmission matrix $\mathbf{T}^c$, and the MST image with their ground-truth counterparts. The average correlation was approximately 70–80 % for each phase function and the transmission matrix, while that of the object image was about 94 %. The discrepancy mainly arises because not all the scattering can be accounted for. In fact, the spatial resolution of mapping the phase functions is reduced with an increase in the distance from the object plane (Supplementary Section 2.4). The correlation of the object image was much higher than that of the others since the confocal gating suppressed the unaccounted multiple scattering.
The rectification of multiple scattering by the MST algorithm makes the scattering medium transparent. To quantify this effect, we displayed the angular spread function (ASF), which is the angular distribution of a normally incident plane wave with depth. Figure 3h shows the ASF measured after each layer before the multiple scattering rectification. The ASF was progressively broadened as the wave propagated through the scattering layers, and the ballistic wave (peak in the center) preserving the original incident angle was drastically attenuated (red circular dots in Fig. 3f). When we rectified the multiple scattering trajectories by cancelling $\phi_k$, the broadening of the ASF was largely removed (Fig. 3i), implying that a large fraction of the multiple scattering was converted into a ballistic signal wave. The ballistic wave intensity was attenuated much less after the multiple scattering rectification (blue square dots in Fig. 3f), with an increase of 18.5 times in its final intensity. Considering the roundtrip, there was an $\eta_B = 344$-fold enhancement of the ballistic signal. In terms of the scattering mean free path $\ell_s$, the four phase plates can be considered a scattering medium with an optical thickness of $3.3\,\ell_s$, but their effective thickness changed to $0.42\,\ell_s$ after the rectification.
The enhancement $\eta_B$ of the ballistic signal wave can be explained by the $\mathbf{T}^c$ identified by the MST algorithm. One can estimate the attenuation $\gamma$ of the ballistic wave on its way to the object from the diagonal elements $\tilde{T}^c_{ii}$, where $\tilde{T}^c_{ij}$ is the $(i, j)$-th matrix element of $\tilde{\mathbf{T}}^c$, the transmission matrix in the spatial frequency domain obtained by the Fourier transform of $\mathbf{T}^c$. Because the multiple scattering rectification is the action of multiplying the inverse of $\mathbf{T}^c$, there is an increase in the ballistic signal by a factor of $1/\gamma$ during the one-way propagation. Therefore, the roundtrip enhancement of the ballistic signal is given by $\eta_B = \gamma^{-2}$. The behaviour of $\eta_B$ is shown in Fig. 3g over the iteration process of the MST algorithm, indicating its steady increase and saturation to $\eta_B = 344$. This analysis confirms that our proposed method made deterministic use of multiple scattering for the image reconstruction; this is a clear distinction from approaches based on the Born approximation and conventional adaptive optics.
Multiple scattering rectification increases the ballistic signals and attenuates the multiple scattering background, which jointly enhance the signal-to-background ratio and, thus, the achievable imaging depth. Here, we propose another criterion that quantifies the ballistic signal enhancement based on the enhancement of the reconstructed MST image intensity, $\eta_I = \sum_i |R^c_{ii}|^2 / \sum_i |R_{ii}|^2$. Here, $R_{ii}$ and $R^c_{ii}$ are the diagonal elements of $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ and $\mathbf{R}^c_{\mathbf{r}_o,\mathbf{r}_o}$, respectively. The $\eta_I$ indicates the enhancement of the confocal signal, which is primarily related to the enhancement of the ballistic signal. However, the diagonal elements of $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ contain substantial multiple scattering components when the initial ballistic signal is excessively weak. Therefore, $\eta_I$ underestimates the actual enhancement of the ballistic signal. For a better estimation of the ballistic signal enhancement, we introduce another parameter $\beta$, representing the ballistic signal's contribution to the diagonal elements of $\mathbf{R}$. Then, $\eta_E = \eta_I/\beta$ is the ballistic signal enhancement obtained from the MST image analysis (see Supplementary Section 5 for details). The behaviour of $\eta_E$ is shown in Fig. 3g over the iterations; it first increased and then saturated to a finite value of 217, which means that the ballistic signal in the MST image shown in Fig. 3e was increased by $\eta_E = 217$ times with respect to the initial confocal image shown in Fig. 3d. Since there is an ambiguity in estimating $\beta$, especially when the initial ballistic signal is extremely weak, $\eta_B$ is a more reliable measure of the ballistic signal enhancement than $\eta_E$.

Fig. 3 | (caption, panels d–i) Color scales in d–e are normalized by the maximum amplitude in e. f, Ballistic wave intensity of the normally incident plane wave measured after each phase plate before (red circular dots) and after (blue square dots) the application of the MST algorithm. g, Performance of the MST algorithm evaluated via $\eta_E$ and $\eta_B$ over the iteration process. h, Angular spread function of a normally incident plane wave in the spatial frequency domain measured underneath each phase plate. i, Same as h, but after rectifying the multiple scattering trajectory. Angular spread functions are displayed in the spatial frequency coordinates $(k_x, k_y)$ with the center corresponding to (0, 0). Scale bar: 0.1 $k_0$.
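As a sketch, the image-based metrics could be computed from the matrix diagonals as follows; treating $\eta_I$ as a ratio of summed confocal intensities and $\beta$ as a given scalar is our reading of the definitions above (the paper's exact normalization is in its Supplementary Section 5).

    import numpy as np

    def eta_metrics(R, R_c, beta):
        """eta_I: gain of the summed confocal (diagonal) intensity after the
        correction; eta_E = eta_I / beta, with beta the ballistic fraction of
        the uncorrected diagonal intensity."""
        eta_I = np.sum(np.abs(np.diag(R_c)) ** 2) / np.sum(np.abs(np.diag(R)) ** 2)
        return eta_I, eta_I / beta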
Experimental validation of the MST algorithm
We experimentally validated the MST algorithm using multi-layered onion tissue as a scattering medium. As illustrated in Fig. 4a, we placed an 800-μm-thick onion tissue on a custom-made resolution target. The onion tissue contained layers of cellular structures, which allowed us to check whether the phase functions retrieved by the MST algorithm originated from the real structures. The resolution target was fabricated by depositing a 200-nm-thick layer of gold onto a glass substrate with a patterned photomask. The target pattern was composed of multi-scale Siemens star structures. Using laser scanning reflection-matrix microscopy 29,30, we recorded the time-gated reflection matrix $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ of the sample with the focal plane of the objective lens set to the axial position of the target ($z_o$) (see Supplementary Section 1 for the experimental setup and matrix construction). As shown in Fig. 4b, the time-gated confocal image reconstructed from the diagonal elements of $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ was highly distorted due to the multiple scattering and aberrations induced by the onion tissue. Then, we applied the MST algorithm by modeling the thick onion tissue as five discrete layers placed at $\{z_{k=1\ldots5}\} = \{250, 400, 600, 800, 1000\}$ μm from the target plane at $z_o = 0$. By rectifying the identified multiple scattering trajectories, we obtained the object function and reconstructed the MST image (Fig. 4c). We could identify the fine details of the target object with ~800 nm spatial resolution, close to the diffraction-limited resolution of 650 nm for the illumination wavelength of 1300 nm. The phase functions $\phi_k^c(\mathbf{r})$ of the five layers obtained by the MST algorithm are displayed in Fig. 4d. The area of the reconstructed phase functions increased with an increase in the distance from the target plane due to the cone-shaped imaging geometry. The walls of individual onion cells and their nuclei were clearly visible, especially in $\phi_1^c$, $\phi_2^c$, and $\phi_3^c$. This clearly demonstrates that the MST algorithm reconstructs valid trajectories of multiple scattering by the real structures of the scattering medium covering the target object. Cellular shapes in the layers at $z_4$ and $z_5$ were rather indistinct because the spatial resolving power for retrieving these layers was not high enough to identify the cell walls (see Supplementary Section 2.4 for a detailed discussion of the lateral and axial resolutions of the phase function reconstruction). The enhancement of the ballistic signal estimated from $\phi_k^c$ was $\eta_B = 343$.
Demonstration of MST algorithm with a highly scattering skull tissue
So far, we have validated the MST algorithm numerically and experimentally with a layered scattering medium. Next, we demonstrate our method with a thick bulk scattering medium that does not have well-defined layered structures. For the demonstration, we measured the reflection matrix $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ of the resolution target under a 180-µm-thick cranial bone of a mouse, as illustrated in Fig. 5a. As shown in Fig. 5b, the intensity map of the conventional confocal reflectance image was distorted due to the scattering and aberration by the thick skull tissue. Unlike the previous samples, the axial positions and the number of required phase plates were not well defined for this bulk cranial tissue. We first determined the approximate position of the skull tissue by measuring the ballistic enhancement $\eta_E$ of the MST algorithm assuming a single phase plate. We scanned the axial position of the phase plate from 50 μm to 400 μm and found that the skull tissue was positioned approximately 100–300 μm away from the resolution target (blue shaded region in Fig. 5c). Then, we applied the MST algorithm while increasing the number $N$ of phase plates, whose axial positions are shown in Fig. 5d. The resulting MST images and the maps of the phase plates are displayed in Fig. 5e. For a single phase plate (top row of the figure), only the lower half of the resolution target was visible, and the overall intensity was much lower than for the MST images with many phase plates. This is because only a small fraction of the multiple scattering trajectories could be corrected with a single phase plate. With the increase in the number of phase plates, the MST image became sharper over the entire field of view, and the overall intensity was substantially increased. For quantitative analysis, we estimated the ballistic enhancement $\eta_E$ with the increase of $N$ up to 8 layers (Fig. 5f). The ballistic signal rectified by the MST was enhanced by almost 580 times the initial ballistic intensity. The increase in $\eta_E$ saturated around $N = 5$. Considering that the computation time of the MST algorithm is proportional to $N$, the optimum number of phase plates without compromising the image contrast would be around $N = 5$. Note that the required computation time is for post-processing and, thus, does not affect the experimental recording of the reflection matrix. If necessary, we can speed up the computation by reducing the region of interest (ROI) (see Supplementary Section 2.5). In Fig. 5g, we display the final MST image with $N = 5$ phase plates. It shows fine details of the target with high contrast despite the strong multiple scattering. The corresponding phase functions in Fig. 5e show real cellular structures in the cranial tissue as multiple blue spots with negative refractive index contrast. In the following section, we prove that these structures are osteocytes of the skull via in vivo imaging of the mouse brain.
Application to in vivo imaging
Here, we demonstrate that the proposed MST algorithm works for natural objects with weak reflectance embedded in a heavily scattering medium. We conducted in vivo imaging of a living mouse brain with its skull intact. We then applied our MST algorithm to find the phase functions of the skull and reconstruct myelinated axons in the cortical brain. An adult mouse (five months old) was placed on a custom-built sample stage after removing its scalp and covering the center of the exposed parietal bone with a circular glass window (Fig. 6a). The skull thickness was about 200 μm. We measured the time-gated reflection matrix $\mathbf{R}_{\mathbf{r}_o,\mathbf{r}_o}$ at a depth of 270 µm from the surface of the skull over a 112 × 112 µm² ROI. Then, we applied the MST algorithm by modeling the skull as five discrete layers placed at distances of $\{z_{k=1\ldots5}\} = \{140, 180, 220, 260, 300\}$ μm from the object plane at $z_o$ (Fig. 6b). The diameters of the obtained phase functions were 280, 330, 380, 430, and 480 μm, respectively. Figure 6c shows the phase functions $\phi_1^c$, $\phi_2^c$, and $\phi_3^c$ retrieved by the MST algorithm. In the individual phase functions, many blue spots 10–15 μm in diameter are visible. In fact, they correspond to osteocytes in the skull, presenting negative phase delays relative to the background skull tissue due to their smaller refractive index. To validate this, we compared the phase functions with the MST images recorded directly at the same depths inside the skull (Fig. 6d). Volume segmentation was applied to the osteocyte cell bodies in the MST images for ease of comparison. As indicated by the numbers, the locations of osteocytes in the phase functions found by our MST algorithm agree well with their actual positions directly measured at the corresponding depths. The point spread function (PSF) in Fig. 6e is completely diffused owing to the severe multiple scattering induced by the skull. On the contrary, the PSF became single-peaked after applying the MST algorithm (inset in Fig. 6f). The full width at half maximum (FWHM) of the PSF after correction (~2 µm) was broader than the diffraction-limited resolution of 0.65 µm. However, this does not imply that the spatial resolution of the MST image is 2 µm. The PSF shown in the inset of Fig. 6f is the wide-field image under focused illumination, whereas the MST image is reconstructed from the diagonal part of $\mathbf{R}^c_{\mathbf{r}_o,\mathbf{r}_o}$. This acts as a confocal filter, which raises the spatial resolution. For instance, the FWHM of the cross-sectional profile of the MST image across a myelinated axon (white dashed line in Fig. 6f) was about 1.05 µm, which means that the resolving power is better than this myelin width.
We also investigated the performance of the MST algorithm and the behavior of the PSFs as a function of the number of phase plates (see Supplementary Section 4.1). This led us to prove that its performance is far better than that of any previously developed imaging modality relying on ballistic waves (see Supplementary Section 4.2).
Discussion
In this study, we proposed a method to trace the multiple scattering trajectories in situ and convert them into ballistic signal waves for imaging objects of interest embedded within a scattering medium as if there were no scattering medium. Conventional imaging considers multiple scattering as noise and intends to filter it out to obtain the ideal diffraction-limited image using the remaining ballistic waves. This strategy often fails when the ballistic signal is weaker than the multiple scattering recorded at the same time 3. To the best of our knowledge, our proposed method is the first of its kind that enables the use of multiply scattered waves for microscopic image formation of an embedded object by converting the waves into ballistic signal waves, which leads to a substantial increase in imaging depth. We demonstrated the tracing of 17 scattering events and enhanced the ballistic signal strength by almost 580 times. As a result, we could recover object images, completely obscured in conventional confocal imaging, with a microscopic spatial resolution better than 1.05 µm, even for in vivo through-skull imaging. The successful demonstration of our algorithm in this example supports that our work is well beyond the simple proof-of-concept level.
Our study marks an important milestone in solving high-order inverse scattering problems, considered the holy grail in the field of deep imaging 36. It is worthwhile to emphasize that the proposed method works in the most general condition: an object is embedded within a thick scattering medium, and one can only access the backscattered waves that undergo a roundtrip to the object. Under this condition, there exists a vast amount of multiply scattered waves that do not interact with the object as well as those that reach the object. The experimental recording of a time-gated reflection matrix is the first critical step to detect the multiple scattering carrying object information to the best possible degree while attenuating the unwanted multiple scattering. The proposed algorithm is a versatile and powerful technique that can selectively trace the multiple scattering events carrying the object information by exploiting their intrinsic wave correlations in the recorded reflection matrix. The algorithm is particularly robust because it processes only the experimentally recorded reflection matrix to rectify the multiple scattering rather than comparing the data with any theoretical model. Our approach is a general framework for solving the inverse problem of wave scattering. Therefore, it can also be applied to a wide range of wave imaging modalities, including ultrasonic imaging and microwave inspection 37,38. The concept was demonstrated in microscopic imaging in the present study, but it can be extended to macroscopic imaging as long as wave properties are used for image formation.
The MST algorithm finds a set of phase plates that generate a transmission matrix similar to that of the scattering medium covering the target object. Notably, our study shows that the identified phase plates are the depth-sectioned transmission phase images of the real structures constituting the scattering medium. Essentially, our method provides the 3D transmission phase map of the scattering medium itself as well as the reflectance image of the object. Transmission phase images provide better contrast in visualizing cell bodies and obtaining their refractive index, but they could previously be obtained only for thin-section tissues in a transmission-mode microscope 39. On the contrary, our approach finds them for a thick tissue in situ in reflection-mode imaging. The benefit of our MST approach is not limited to object image reconstruction. One can physically fabricate a multi-layered inverse scattering block whose transmission matrix is the inverse of the identified transmission matrix 40. By attaching the inverse scattering block to the surface of the scattering medium, optical clearing of the scattering medium can be realized without damaging the scattering medium itself. This could eliminate the need for the chemical processing used in tissue clearing 41. In the context of deep-tissue optical imaging, one can find a specific excitation wavefront that generates a sharp focus at a desired position in the object plane. By shaping the incident wave with a spatial light modulator, it will be possible to achieve a substantial depth increase in fluorescence imaging and super-resolution imaging, where tight focusing of light or precise control of illumination patterns inside a scattering medium is a prerequisite. The key advance of this approach with respect to previous adaptive optics is to control substantial multiple scattering to form a focus.
Our algorithm finds multiple scattering trajectories based on the wave correlation of the multiple scattering that has interacted with a target object. This approach is intuitive, robust, and cost-effective. However, it does not trace all the multiple scattering containing the object information. One may consider combining the MST algorithm with other computational approaches, such as compressive sensing and deep learning, to increase the traceable multiple scattering trajectories 42,43. Imaging geometry is another defining factor, and one can consider various collection geometries to better capture the multiple scattering of interest. The width of the time gating is shortened as much as possible in conventional imaging to better rule out the multiple scattering, but there may be an optimal time-gating window for collecting useful multiple scattering. The extension of the algorithm to incorporate backscattering within the scattering medium is another important direction. Future studies addressing all these factors will extend the degree of multiple scattering coverage.
Changes in saccharin preference behavior as a primary outcome to evaluate pain and analgesia in acetic acid-induced visceral pain in mice
Reflex-based procedures are important measures in preclinical pain studies that evaluate stimulated behaviors. These procedures, however, are insufficient to capture the complexity of the pain experience, which is often associated with the depression of several innate behaviors. While recent studies have made efforts to evidence the suppression of some positively motivated behaviors in certain pain models, they are still far from being routinely used as readouts for analgesic screening. Here, we characterized and compared the effect of the analgesic ibuprofen (Ibu) and the stimulant, caffeine, in assays of acute pain-stimulated and pain-depressed behavior. Intraperitoneal injection of acetic acid (AA) served as a noxious stimulus to stimulate a writhing response or depress saccharin preference and locomotor activity (LMA) in mice. AA injection caused the maximum number of writhes between 5 and 20 minutes after administration, and writhing almost disappeared 1 hour later. AA-treated mice showed signs of depression-like behaviors after writhing resolution, as evidenced by reduced locomotion and saccharin preference for at least 4 and 6 hours, respectively. Depression-like behaviors resolved within 24 hours after AA administration. A dose of Ibu (40 mg/kg) – inactive to reduce AA-induced abdominal writhing – administered before or after AA injection significantly reverted pain-induced saccharin preference deficit. The same dose of Ibu also significantly reverted the AA-depressed LMA, but only when it was administered after AA injection. Caffeine restored locomotion – but not saccharin preference – in AA-treated mice, thus suggesting that the reduction in saccharin preference – but not in locomotion – was specifically sensitive to analgesics. In conclusion, AA-induced acute pain attenuated saccharin preference and LMA beyond the resolution of writhing behavior, and the changes in the expression of hedonic behavior, such as sweet taste preference, can be used as a more sensitive and translational model to evaluate analgesics.
Introduction
Most studies on pain and analgesia use reflex-based procedures (eg, tail flick, licking, and guarding) induced by aversive stimulation through the application of particular mechanical, thermal, electrical, and chemical stimuli to identify analgesics. This approach has been evaluated critically because it overfocuses on reflex behaviors and consequently neglects the key affective component of pain phenomena. 1–4 Consequently, the development of relevant new dependent variables to increase the validity of animal models of pain is increasingly pursued. 4–8 Among them, the evaluation of innate behaviors suppressed – instead of enhanced – by pain has been highlighted. 6,7,9–12 A claimed advantage of selecting these behaviors as endpoints is that drugs with analgesic properties will be associated with increased behavior rates, and, as a result, analgesic effects would be readily dissociable from motor impairment. In addition, the study of pain-suppressed behaviors should allow outlining the role of behavioral depression, which is normally associated with pain syndromes 13,14 and with other aspects related to the mechanisms and determinants of the affective component of pain. 15 From this perspective, any behavior spontaneously performed by an animal can be selected as a target behavior to evaluate whether pain is or is not able to depress it. However, hedonically oriented behaviors, which are behaviors that have the ability to ensure a positive emotional state experienced as pleasure, 16,17 are expected to be rapidly expressed and maintained by the animals at relatively high rates, which helps reduce methodological problems such as using food or water deprivation during the behavioral tasks. Furthermore, decreases in rate, frequency, duration, or intensity of highly preferred behaviors ("hedonic behaviors") caused by pain (or other insults) can be suggestive of a deterioration of the animal's global welfare and/or quality of life, which makes hedonically oriented behaviors interesting in the testing of the beneficial effects of analgesics – which should restore the normal hedonic behavior of the animals.
In this study, two positively motivated behaviors, namely the natural rodent preference for sweet taste and rodent locomotor activity (LMA) in a novel environment, were selected as the main dependent variables to measure the presence of pain or analgesia. Preference for sweet taste is maintained at a high rate in mice and requires intact cognitive function as well as appetitive motivation. 18 A precise measurement of sweet taste preference is easy to conduct and can be determined in home cages without animal handling. This behavior has been shown to be sensitive to different pharmacological and environmental manipulations. It has been used to model anhedonia – the lack of interest or pleasure in response to hedonic stimuli or experiences – in the chronic mild stress animal model of depression. 19,20 LMA measures spontaneous, instinctive behaviors of rodents that are largely motivated by the exploration of a novel environment for means of escape. Decrease in locomotion as a consequence of pain has been consistently reported in both humans and rodents, 6,11,21–23 and psychomotor retardation – which includes motor impairment affecting gross locomotor skills – is also a central feature of depression. 24,25 The classical preclinical pain test of acetic acid (AA)-induced abdominal constriction was used to induce pain. In this test, AA injection causes inflammation of the abdominal cavity wall and evokes sustained writhing behavior and reduced motor activity. The occurrence of this writhing behavior (abdominal cramps or stretching) per unit of time is commonly evaluated. These behaviors are considered to be reflexes and to be evidence of visceral pain, 26 but the frequency of writhing decreases spontaneously with time.
The goal of the present study was to compare the analgesic sensitivity of two pain-suppressed behaviors with the AA-induced standard reflexive outcome (writhing behavior). For that purpose, the time course of the AA-induced behavior (writhing) and the AA-depressed behaviors (saccharin preference and LMA) was first studied. Secondly, the restorative effects on both LMA and saccharin preference behavior of a dose of ibuprofen (Ibu) devoid of efficacy against AA-induced writhing were evaluated. This was performed by administering the drug before (development protocol) and after (expression protocol) the induction of pain by AA. Finally, caffeine-induced behavioral activation was evaluated to assess the specificity of the different tests.
Methods

Animals
Female CD1 mice weighing 25-30 g were used in all experiments (Charles River, L'Arbresle, France). The study protocol was approved by the local Committee of Animal Use and Care of our institution (ESTEVE) and was in accordance with the guidelines for the Care and Use of Laboratory Animals of the European Community (European Directive 2010/63/EU) and with the International Association for the Study of Pain guidelines on ethical standards for investigation in animals. 27 Light/dark cycle (reversed 12/12 hours, lights on at 6 pm), temperature (22°C), and humidity (40%) were controlled. Animals had free access to food and water and were used after 14 days of acclimatization to housing conditions. All experiments were performed between 9 am and 6 pm.
Drugs
The drugs investigated were Ibu (40-320 mg/kg), supplied by Laboratorios Esteve (Barcelona, Spain), and caffeine (5-20 mg/kg), purchased from Sigma Chemical Co (Barcelona, Spain). A 0.5% hydroxypropyl methylcellulose (HPMC) solution (Sigma Chemical Co) dissolved in saline was used as vehicle. The drugs (or the vehicle in the control group) were administered intraperitoneally (IP) at a volume of 10 mL/kg. The time of administration was chosen in order to evaluate the putative preventive or restorative effect of Ibu on the target behaviors. To evaluate the preventive effect, the drug was administered 30 minutes
before AA challenge ("development protocol"). To evaluate a purely restorative effect, the drug was administered 120 or 150 minutes after AA challenge for saccharin preference and LMA, respectively ("expression protocol").
Assay of acetic acid-induced writhing
For the time course study, mice were injected with 10 mL/kg of AA (0.6%) or vehicle (distilled water) by the IP route. Each mouse was then placed in an individual, clear plastic observation chamber and the total number of writhes was counted for 1 hour after administration.
Based on the results of this protocol, the interval ranging between 5 and 15 minutes after AA injection was selected to evaluate the effects of Ibu and caffeine on the number of writhes. Separate groups of mice were administered vehicle (HPMC 0.5%), Ibu, or caffeine, IP, 30 minutes before 0.6% AA injection.
For scoring purposes, a "writhe" was defined as a contraction of the abdominal muscles accompanied by body elongation and hind limb extension. Data are expressed as the mean number of writhes over the 10-minute observation period.
Saccharin preference test
Mice were habituated to saccharin (0.1%, Sigma Aldrich Co, St Louis, MO, USA) consumption by offering saccharin solution diluted in tap water as the sole drinking fluid for 48 hours. After habituation, the baseline saccharin preference was measured for 6 hours 1 day before the test. During the saccharin preference test, fluid consumption was measured for 24 hours with a two-bottle protocol, whereby mice were exposed to one bottle each of tap water and 0.1% saccharin solution. Water and saccharin solution intake was estimated simultaneously in control and experimental groups by weighing the bottles at 2, 4, 6, 8, and 24 hours. The animals were not previously deprived of water and food, but had no access to food during the first 6-hour preference tests. For each mouse on each day, the ratio of solution preference was calculated according to the formula below:

Ratio (%) = Saccharin solution intake/(Saccharin solution intake + Water intake) × 100

Novelty-induced LMA evaluation

LMA was scored automatically in independent experiments. Eight standard actimeters (Linton Instrumentation Inc., Norfolk, UK) equipped with infrared beam motion detectors were used. On the day of the experiment, mice were evaluated in a dark environment. Mice were marked and weighed at the beginning of each experimental session. After administering AA, the compounds, or their vehicles, the animals were returned to their home cages and then placed in the LMA cages at the scheduled time. In the time course experiment, LMA was evaluated in separate groups of mice exposed to the chamber only once at the scheduled post-AA time (1, 2, 3, 4, or 5 hours post-AA). Moving time (seconds) was measured for 60 minutes in each separate group, with readings performed every 5 minutes.
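As a minimal sketch, the preference ratio defined above could be computed from bottle weights as follows; the bottle-weight bookkeeping is an assumption, since the paper only states that the bottles were weighed at fixed time points.

    def saccharin_preference(sacch_start_g, sacch_end_g, water_start_g, water_end_g):
        """Preference ratio (%) = saccharin intake / total fluid intake x 100,
        with intakes estimated from bottle weight loss in grams."""
        sacch = sacch_start_g - sacch_end_g
        water = water_start_g - water_end_g
        return 100.0 * sacch / (sacch + water)

    # Example: 3.2 g saccharin solution vs 1.1 g water consumed -> ~74.4 %
    print(round(saccharin_preference(50.0, 46.8, 50.0, 48.9), 1))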
Data analysis
Data are expressed as mean ± standard error of mean. For studies of LMA and saccharin preference, data were analyzed with two-way repeated measures analysis of variance (ANOVA), with pain and treatment drug as factors. One-way ANOVA was used for the area under the curve (AUC, from 0 to 24 hours) comparison. One-way repeated measures ANOVA was used to analyze writhing test data, and one-way ANOVA with Bonferroni's multiple comparison test as post hoc analysis was used to analyze drug treatment data. P<0.05 was considered statistically significant. Statistical analyses were carried out with the GraphPad Prism 5.00 program (GraphPad Software, San Diego, CA, USA).
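A minimal sketch of the AUC comparison step is given below, with numpy/scipy standing in for the GraphPad workflow; the preference values are hypothetical, and the AUC here spans the recorded time points.

    import numpy as np
    from scipy import stats

    hours = np.array([2, 4, 6, 8, 24])                     # measurement times (h)
    # Hypothetical preference curves (%), one row per mouse
    control = np.array([[75, 72, 70, 71, 74], [78, 74, 73, 72, 76]])
    aa_pain = np.array([[55, 50, 52, 60, 73], [58, 49, 51, 62, 75]])

    auc_control = np.trapz(control, hours, axis=1)         # AUC per mouse
    auc_aa = np.trapz(aa_pain, hours, axis=1)
    f_stat, p_value = stats.f_oneway(auc_control, auc_aa)  # one-way ANOVA on AUCs
    print(f"F = {f_stat:.2f}, P = {p_value:.3f}")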
Results
Acetic acid-induced stimulation of writhing and depressed sweet preference behavior

IP injection of 0.6% AA robustly induced the appearance of abdominal constrictions (writhing) in mice (Figure 1, left axis). The number of writhes peaked 5-20 minutes after AA administration (P<0.001). Then, a progressive decrease in this behavior was observed, and the effects of AA were no longer apparent after 60 minutes (P>0.05). The right axis of Figure 1 shows the preference for a saccharin solution (0.1%) in animals pretreated with AA (pain group) or its vehicle (control group). The baseline values for saccharin preference measured 1 day earlier did not vary significantly between the pain and the control groups (80.1±2.1 and 77.3±3.3%, respectively). Mice treated with the vehicle of AA demonstrated a preference for saccharin solution over water of 65%-75% at different times. Variations between baseline and test day values were observed across all experiments. These variations were attributed to the different baseline recording times – 6 hours on a continuous basis – and to mice handling on the test day, which included IP injection. Pretreatment with 0.6% but not 0.3% (data not shown) AA significantly decreased the expression of saccharin preference behavior as compared to control mice.

Figure 2 legend. Notes: Separate groups of animals were injected with AA 0.6% (black squares) or its vehicle (white squares) at 0, 1, 2, 3, and 4 hours before each LMA evaluation. Each group of mice was exposed to the chamber only once at the indicated post-AA time. Motion time (seconds) was measured between 0 and 60 minutes, every 5 minutes. Note that LMA depression in mice injected with AA 0.6% remained for at least 4 hours as compared to vehicle-treated mice. Data are mean ± SEM. Abbreviations: AA, acetic acid; LMA, locomotor activity; SEM, standard error of mean; h, hours; s, seconds.

The AA-induced deficit in the expression of this hedonic behavior was observed for at least 6 hours, and normal preference behavior was restored 24 hours after AA administration (Figure 1, right axis). Post hoc testing showed significantly reduced saccharin preference rates in AA-induced pain in mice at 2, 4, and 6 hours (P<0.01, P<0.001, and P<0.05, respectively), but not at 24 hours.
Acetic acid-induced decrease of LMA

LMA as a function of pretreatment interval at the same concentration of AA tested in the saccharin preference experiment is shown in Figure 2. Control mice showed peak activity during the first 5 minutes. After that, mice became habituated to the environment and their locomotion behavior progressively declined.

Effects of ibuprofen and caffeine on AA-induced writhing

Ibu at 160 and 320, but not 40 or 80 mg/kg, significantly inhibited AA-induced writhing behavior (P<0.01, Figure 3A). Caffeine administration, however, failed to significantly inhibit AA-induced writhing in mice at the doses of 5, 10, and 20 mg/kg IP (Figure 3B, NS).
Effects of ibuprofen on AA-induced deficit in saccharin preference behavior

Next, we aimed to determine whether an analgesic was able to revert the AA-induced deficit in the saccharin preference behavior of mice in two different administration protocols, the "development" and the "expression" protocols.
In the development protocol, 40 mg/kg of Ibu – a dose that failed to produce any analgesic effect as evaluated by AA-induced writhing – or vehicle was administered 30 minutes before AA challenge. Mice receiving vehicle (vehicle + AA group) before AA injection showed a significantly depressed saccharin preference behavior as compared to control mice (vehicle + vehicle group). Repeated measures two-way ANOVA (time × pain) showed a significant effect of time [F(5,80) = 27.17, P<0.001] and pain [F(1,80) = 4.81, P<0.05], and a significant time × pain interaction between these two factors [F(5,80) = 2.45, P<0.05]. Ibu did not affect the normal saccharin preference of vehicle-injected mice (Ibu + vehicle group) and did not prevent the decreased saccharin preference in AA-treated mice (Ibu + AA group) before 2 hours, but it was able to revert the AA-induced deficit in the preference for saccharin from 2 to 6 hours (Figure 4A). One-way ANOVA followed by Bonferroni's post hoc test of the AUC (from 0 to 24 hours) globally suggested total restoration of saccharin preference behavior in AA-treated mice (Figure 4B; P<0.01).
We took advantage of the long-term duration of the AA-induced decrease in saccharin preference to evaluate whether Ibu was able to revert the deficit once established ("expression protocol"). Thus, Ibu or vehicle was administered 2 hours after AA or vehicle challenge (arrow in Figure 4C). AA-injected mice treated with vehicle (vehicle + AA group) showed a significant decrease in saccharin preference behavior as compared to those injected with vehicle (vehicle + vehicle group), as confirmed by two-way repeated measures ANOVA (time × pain).

Effects of ibuprofen on the AA-induced deficit in LMA

We next aimed to determine whether Ibu was able to revert the AA-induced deficit in the exploratory behavior of mice, also using the two administration protocols ("development" and "expression" protocols).
In order to prevent the deficit, Ibu was again administered at 40 mg/kg (a dose inactive in the writhing assay) 30 minutes before AA challenge ("development protocol").
Effects of caffeine on the AA-induced depression in saccharin preference and LMA behavior
In order to study the specificity of the endpoints, we tested the effects of caffeine, a nonanalgesic stimulant producing behavioral increases, on AA-induced depression in both LMA and saccharin preference behaviors using the development protocol. The effects on LMA are shown in Figure 6A. Caffeine was administered at 10 mg/kg (IP) 30 minutes before AA challenge. As expected, AA-injected mice treated with vehicle (vehicle + AA group) showed a significantly depressed LMA behavior as compared to control mice (vehicle + vehicle group). Caffeine prevented the LMA decrease in AA-injected mice, but it was unable to revert the depressed saccharin preference behavior in AA-treated mice (Figure 6D).
Discussion
Efforts have recently been made to investigate pain and analgesia using novel paradigms that do not rely exclusively on reflex-based outcomes. 4 Decreases in burrowing, 28 nesting, 29 feeding, 10 intracranial self-stimulation, 30 wheel running, 31,32 and food-maintained operant responding 7 to evaluate the presence of pain and analgesia have also been reported. Also, decreased LMA as a consequence of pain has been consistently reported in both humans and rodents. 6,11,21–23 The present study provides evidence that the hedonic behavior of sweet taste preference using saccharin in mice was strongly depressed by AA and that it can be used to detect the analgesic effects of drugs. The characteristic pain writhing behavior induced by AA, which lasted less than 1 hour, was followed by a substantially longer "behavioral depression" manifested by a strongly decreased expression of both saccharin preference and LMA for at least 4 hours. The persistence of pain-suppressed behaviors long after the resolution of AA-induced writhing is consistent with the results of a previous study showing a similarly decreased LMA for 5 hours after treatment with 0.56% AA in male ICR mice. 11 However, to our knowledge, this is the first time that such a sustained depression (for at least 6 hours) of sweet taste preference after AA administration is described. Previously, the time of feeding suppression using a Liquid Ensure™ protein drink was determined 1 hour after 0.56% AA administration. 10 In the present study, a visceral noxious stimulus was selected to induce pain. Visceral pain presents important differences as compared to cutaneous somatic pain. Somatic and visceral pain are mediated, at least in part, through different neural pathways at spinal and supraspinal sites, and evoke different emotional responses. 33–38 Cutaneous somatic pain is escapable, can be controlled, and characteristically evokes active emotional coping responses such as agitation, hyperactivity, fight-flight, and hypertension. In contrast, visceral pain is inescapable, cannot be controlled by the subjects themselves, and usually evokes passive coping or "conservation-withdrawal" strategies, characterized by "disengagement from" the environment, ie, behavioral quiescence and immobility, decreased reactivity to the environment, hypotension, and bradycardia. 36,38 The behavioral inhibition observed after the visceral noxious stimulation in our study is consistent with this view. Recent data from our laboratory – where formalin administration to the paw, a somatic pain model, was unable to alter saccharin preference behavior in mice – further support this view (unpublished data).
Ibu started to produce a significant effect in the attenuation of the number of writhes at the dose of 160 mg/kg, but the pharmacological effect of 40 mg/kg of Ibu in the
saccharin preference paradigm was already consistent with analgesia, considering the whole 0–24 hours measurement period. This dose, however, was ineffective to prevent AA-depressed saccharin preference in the first 2 hours of the saccharin preference test (development protocol). A logical explanation for this is that the saccharin preference reductions observed during the first 2 hours may be caused by AA-induced writhing behavior, which is not inhibited by Ibu at 40 mg/kg. These two behaviors (writhing and saccharin preference) seem incompatible because mice cannot drink and writhe at the same time. Interestingly, Ibu clearly prevented the sweet preference behavior deficit after 2 hours. The fact that 40 mg/kg of Ibu – ineffective to block pain-induced writhing – was actually effective on the pain-depressed behavior of saccharin preference raises the possibility that the analgesic effects of drugs can be better observed with a pain-depressed endpoint than with a pain-stimulated endpoint. This conclusion agrees with those of several previous studies showing that some analgesics such as Ibu, morphine, pregabalin, or acetaminophen attenuate the affective component of pain more potently than its sensory component. 8,39–41 In the present study, where decreased saccharin preference behavior reflects the affective component and increased writhing behavior reflects the sensory component of AA-induced pain, Ibu was better against the affective component than against the sensory component of pain. Furthermore, this could indicate that a drug can have analgesic properties without inhibiting the writhing behavior. This may be of particular importance in a drug discovery context because possible analgesics may currently be discarded based on a lack of efficacy in sensory-based pain screening models.
The "expression protocol" allowed us to test the effect of the drug using a within-subject design in animals where the AA-induced deficit had already been established and once the AA-induced writhing behavior had disappeared. Before Ibu administration, AA-treated mice showed the expected depression in saccharin preference, as shown by the decrease observed during the first 2 hours as compared to control animals. When these animals were treated with Ibu, the preference for saccharin returned to that of vehicle-treated animals. This approach avoided the potential interference of AA-stimulated behaviors competing with the target depressed behavior, as hypothesized above to explain the lack of effect on saccharin intake during the first 2 hours of the development protocol, ie, when writhing occurs at a relatively high rate, mice cannot drink.
In the present study, Ibu administration 30 minutes before AA (development protocol) was able to only partially prevent AA-induced decrease in LMA. The results obtained during the entire hour period suggest that the dose of 40 mg/kg was not sufficient to completely restore pain-depressed LMA behavior. The partial efficacy of Ibu on AA-induced deficit in LMA is consistent with the lack of efficacy observed in the AA-induced writhing test and during the first 2 hours of the saccharin preference test.
The administration of Ibu after AA (expression protocol) restored LMA, with the activity of AA-injected animals returning to that of vehicle-injected animals. In this protocol, mice received Ibu or its vehicle 150 minutes after AA injection (30 minutes before the behavioral test). Animals pretreated with Ibu – but not animals treated with the vehicle – showed LMA restoration, which is consistent with the results observed in the expression protocol of the saccharin test.
Finally, caffeine was used as a nonanalgesic stimulant to evaluate the specificity in relation to pain of the two target behaviors. Caffeine prevented the LMA decrease in AA-injected mice, but also induced a strong LMA increase in vehicle-injected animals. In contrast, caffeine was unable to change the depressed saccharin preference behavior in AA-treated mice. Therefore, despite the fact that caffeine induced LMA normalization in AA-injected mice to the level of control animals, the deficit in saccharin preference behavior was not sensitive to this behavioral arousal induced by caffeine. In a previous study, Stevenson et al 11 did not find such an effect of caffeine on AA-depressed LMA. The reasons for this discrepancy are not clear. Similar to this study, Stevenson et al 11 found that caffeine significantly increased LMA in nondepressed mice. However, they only found a nonsignificant tendency of caffeine to revert acid-depressed LMA. The discrepancy may be sex related because we used female mice and the Stevenson et al 11 study used male mice. However, no sex-related differences in caffeine-induced LMA increase have been found. 42 The discrepancy might also arise from the different light/dark cycles in which the two behavioral experiments were performed. In order to favor the higher levels of LMA associated with the dark (active) phase of the animals' activity cycle, our experiments were conducted under dark conditions, while the study by Stevenson et al 11 was conducted under light conditions. While the effects of caffeine on LMA did not seem to be altered by ambient lighting, 43,44 circadian fluctuations in visceral sensory functions have been reported. 45 Finally, despite standardization, systematic differences in behavior across laboratories have been well documented. 46

In summary, saccharin preference and LMA behaviors were altered by a visceral noxious stimulus. AA-treated mice showed signs of depression-like behaviors after writhing resolution, as evidenced by reduced saccharin preference and locomotion for at least 6 and 4 hours, respectively. The decrease observed after AA administration in sweet taste preference was probably due to ongoing pain because it was specifically reverted by an analgesic drug such as Ibu but not by the stimulant drug caffeine. The decrease observed in novelty-induced locomotion after AA injection was probably also due to ongoing pain because it was reverted by Ibu. However, the AA-depressed LMA was also reverted by the stimulant caffeine, thus suggesting that this behavioral endpoint is not robust enough to evaluate analgesic drugs and should be complemented with another pain-depressed behavior endpoint. The affective and sensory components of pain were selectively affected by Ibu because the same dose of Ibu was ineffective to block writhing behavior but effective to improve pain-depressed behaviors (saccharin preference).
Consequently, hedonic behaviors are more sensitive and translational readouts to evaluate analgesics, and changes in the expression of hedonic behavior – such as the sweet taste preference described in this study – can be used as a primary outcome measure to evaluate pain in mice and may complement the more traditional procedures used to assess candidate analgesics.
Development of a Command Line Interface for the Analysis of Result Sets from Automated Queries to Literature Databases
The first step of a systematic review is the identification of publications related to a research question in different literature databases. The quality of the final review is mainly influenced by finding the best search query, resulting in high precision and recall. Usually, this process is iterative and requires refining the initial query and comparing the different result sets. Furthermore, result sets of different literature databases must be compared as well. The objective of this work is to develop a command line interface that supports the automated comparison of result sets of publications from literature databases. The tool should incorporate existing application programming interfaces of literature databases and should be integrable into more complex analysis scripts. We present a command line interface written in Python, available as an open-source application at https://imigitlab.uni-muenster.de/published/literature-cli under the MIT license. The tool calculates the intersection and differences of the result sets of multiple queries on a single literature database or of the same query on different databases. These results and their configurable metadata can be exported as CSV files or in Research Information System format for post-processing or as a starting point for a systematic review. Due to the support of inline parameters, the tool can be integrated into existing analysis scripts. Currently, the literature databases PubMed and DBLP are supported, but the tool can easily be extended to support any literature database providing a web-based application programming interface.
Introduction
Systematic reviews are important for the scientific community to provide an overview of the current state of the art of a given topic or an entire research field. Guidelines for performing systematic reviews have been proposed to ensure reproducibility. At the time of writing, most journals request the authors to follow the PRISMA guidelines [1]. As the first step of this guideline, a search string must be defined, which is used to query scientific articles from multiple literature databases. During the development of the search string, search terms are often exchanged, e.g., by synonyms, or different search terms are connected via logical operators like "and" or "or". After each iteration with different numbers of results, the main task is to identify the newly found articles and the articles that were missed by the changed query. Mathematically speaking, having two article result sets from different queries, A and B, the set differences A\B and B\A must be calculated and analyzed. When the final search string has been determined, it is used to query different literature databases and duplicates must be filtered out, i.e., the intersection of two result sets A∩B must be determined. These calculations involving thousands of articles are very time-consuming and error-prone. Even though most citation managers support the filtering of duplicates, the export of result sets and import into the manager is not feasible for rapid prototyping of queries.
The objective of this work is to provide a command line interface supporting the operations described above on result sets from search queries on literature databases. The tool should be capable of sending queries to literature databases automatically via their web-based application programming interfaces (APIs) and support the integration into existing analysis scripts via inline parameters. The tool should be flexible enough to allow the extension to any literature database providing an API.
Methods
Literature or academic databases are large collections of references to peer-reviewed scientific research articles. Besides the references themselves, these databases store structured metadata, like authors' names or journal information, to support detailed search queries and allow scientifically valid citations. Most literature databases provide their own online search engine on these metadata and an API for querying articles programmatically. Usually, these literature databases are focused on specific fields of research. In this work, we focus on PubMed (www.pubmed.gov) and DBLP (www.dblp.org) as examples.
PubMed is a free-to-use literature database covering life sciences and biomedical topics [2]. In 1997, the database was first made available to the public; it is currently maintained by the United States National Library of Medicine (NLM) at the National Institutes of Health (NIH). At the time of writing, PubMed contains over 34 million scientific references. Its search engine supports the search for phrases, i.e., an exact order of words, and any combination of the logical operators "and", "or", and "not", grouped by brackets. In addition, search terms can be limited to a certain metadata field, e.g., an author's name, by using a postfix notation. The search API is well documented and free to use.
The Digital Bibliography & Library Project (DBLP) is a literature database containing conference and journal articles from the field of computer science [3]. DBLP was founded in 1993 by Michael Ley at the University of Trier. Since 2018, the database has been hosted and maintained as a service by the Leibniz Center for Informatics Schloss Dagstuhl. At the time of writing, DBLP contains over 6 million references. Its search engine does not support the search for phrases; however, by using "-", as in "first-second", it can be enforced that "second" must appear anywhere after "first". By default, each word of the search string is implicitly connected via an "and". The logical "or" is only supported on the level of single words. The logical "not", as well as groupings and brackets, is not supported and will be ignored. Similar to PubMed, each search term can be limited to a certain metadata field by using a prefix notation. Again, the search API is well documented and free to use, but limited to 10000 references per query result.
The Command Line Interface
The application is available as open-source software under the MIT license. It is written in Python 3.11 and provides a command line interface for the programmatic integration into more complex analysis scripts. The tool can be called with "py literature-cli.py <parameters>". The parameter "-h" shows the instructions and a list of available parameters. The tool offers two modes, which support sending multiple queries to a single literature database or a single query to multiple databases. For the remainder of this section, we will focus on the second use case since both modes work similarly.
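For illustration, an invocation might look as follows; the long flag names below are hypothetical assumptions for this sketch (only "-h" is documented above, and it prints the actual parameter list):

    py literature-cli.py -h
    py literature-cli.py --query "Kawasaki disease AND machine learning" --databases pubmed dblp --export ris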
First, the query is transformed into the syntax of each supported database. Logical operations like "AND", "OR" and "NOT" are replaced according to the requirements of each literature database, e.g., in the case of DBLP the replacement of "OR" with "|" and the removal of all "AND" operators, which are implicit.
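A minimal sketch of such a transformation step (illustrative only; the tool's actual replacement rules live in its per-database configuration) could look like this:

```python
def to_dblp(query: str) -> str:
    """Illustrative rewrite of a generic boolean query into DBLP syntax.

    DBLP connects words implicitly with AND and supports OR ("|") only
    between single words; phrases, NOT, and brackets are unsupported
    and are therefore stripped or ignored here.
    """
    cleaned = query.replace('"', " ").replace("(", " ").replace(")", " ")
    out: list[str] = []
    for token in cleaned.split():
        upper = token.upper()
        if upper in ("AND", "NOT"):
            continue              # AND is implicit; NOT is ignored
        if upper == "OR" and out:
            out[-1] += "|"        # glue the next word onto the previous one
        elif out and out[-1].endswith("|"):
            out[-1] += token
        else:
            out.append(token)
    return " ".join(out)

print(to_dblp('"Kawasaki disease" AND ("machine" OR "deep") learning'))
# -> Kawasaki disease machine|deep learning
```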
Afterwards, the transformed search string is sent to the corresponding API endpoints. By default, only the metadata of the result set, containing the authors' names, title, journal, publication year and DOI, are further processed. If a DOI is present, it is used to identify a publication in different result sets. Otherwise, the heuristic of matching title and year can be applied, since both values are most consistent between different literature databases and are not affected by abbreviations. All combinations of intersections and differences of the result sets are calculated, which, for q queries sent, are 2^q − 1 sets. These subsets can be exported as separate CSV files containing all metadata for further post-processing, or in the Research Information System (RIS) format for the upload into a citation manager as the starting point of a systematic review. Furthermore, if defined in the parameter list, a Venn diagram is provided as visual feedback [4]. To provide a history of all requests and enable reproducibility, all query parameters and results are stored in a logging file as shown in Table 1.
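The identification heuristic can be sketched as follows (the record layout is invented for illustration; the tool's internal representation may differ):

```python
def record_key(record: dict) -> tuple:
    """Comparison key for a publication: prefer the DOI, otherwise
    fall back on a normalized title combined with the year."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = " ".join((record.get("title") or "").lower().split())
    return ("title-year", title, record.get("year"))

pubmed = [{"doi": "10.1/X1", "title": "A Study", "year": 2021},
          {"doi": "", "title": "Another  Study", "year": 2020}]
dblp = [{"doi": "10.1/x1", "title": "A Study", "year": 2021},
        {"doi": "", "title": "another study", "year": 2020}]

a = {record_key(r) for r in pubmed}
b = {record_key(r) for r in dblp}
print(a & b)          # duplicates across both databases
print(a - b, b - a)   # articles unique to one database
```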
Configuration and Extensibility
The internal structure of the tool is designed to enable easy extensibility to additional databases, as long as they provide a web-based API. For each new database, a class must be implemented inheriting from the connection base interface. The class implements the specific API endpoints, because structure and call order are highly dependent on the literature database. Besides the implementation, a configuration file must be provided, which contains the exact location of the metadata in the response. The associated XML tags in the response can be defined via XPath. Furthermore, a mapping to the tool's syntax must be provided, e.g., for the handling of logical operations like "AND", "OR" and "NOT".
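This extension mechanism can be pictured with a small sketch (class, field, and method names are illustrative, not the tool's actual interface):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class ConnectorConfig:
    """Per-database configuration: XPath locations of the metadata in
    the response and a mapping for the logical operators."""
    title_xpath: str
    doi_xpath: str
    operator_map: dict = field(default_factory=dict)  # e.g. {"OR": "|"}

class DatabaseConnector(ABC):
    """Base interface that every database connector inherits from."""

    def __init__(self, config: ConnectorConfig):
        self.config = config

    @abstractmethod
    def translate(self, query: str) -> str:
        """Map the tool's generic query syntax to the database syntax."""

    @abstractmethod
    def search(self, query: str) -> list[dict]:
        """Call the database-specific API endpoints and return records."""
```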
The metadata that identifies a publication if no DOI is provided can be configured application-wide. In addition, all metadata that should be exported or taken into consideration during the analysis can be specified and extended as well. These changes must be replicated in the individual configuration of each connected database, i.e., the path to the corresponding XML element in the response message.
Example Workflow
The tool is most useful when processing large quantities of results that can hardly be managed manually. For comprehensibility, a minimal example is considered here. Let us assume that a systematic review about machine learning approaches in the context of the rare Kawasaki disease should be performed [5]. Since it is an interdisciplinary topic between computer science and medicine, the literature databases DBLP and PubMed should be included in the systematic review. In the following, the process of finding the appropriate search string using the aforementioned tooling will be explained.
First, we directly experiment on the PubMed website by searching "Kawasaki disease" AND "machine learning", getting 16 results. Then, we try a word with a related meaning, replace "machine" by "deep", and receive 7 results. As shown in line 1 of Table 1, we use the tool to compare both results. There is only an overlap of 2 articles, so combining both terms is beneficial. Back on the PubMed website, we combine both terms with the logical "OR" operator and get 21 results, as expected. We are satisfied with the current result and want to apply it to DBLP. Unfortunately, DBLP does not support the phrase search and only supports OR for single tokens. Therefore, we restructure the query in a way that it can be interpreted correctly by DBLP. As shown in line 2, we verify by using the tool that our transformed query does not lose previous publications in PubMed. Instead, we are now getting 23 results while keeping all previous 21 publications. Finally, we apply the query to both literature databases and get our result of 26 unique publications for our systematic review, as shown in line 3. The corresponding RIS files can directly be loaded into our citation manager.

Table 1. Logging example of the command line interface. All used parameters and results are documented to track the progress and support reproducibility.
Discussion
All pre-defined requirements of the tool have been met. As illustrated by the example workflow, it can help identify publications during a systematic review. By providing it as open source, it can be used by other researchers freely. Tools like DistillerSR (www.distillersr.com) or Rayyan [6] also support systematic reviews. However, both are not open source. They use advanced artificial intelligence to filter duplicate references and guide the user through the entire PRISMA workflow. In the case of DistillerSR, even an API call to PubMed is supported. Nevertheless, these tools require a final query or already exported lists of references. Thus, our tool can be applied as a kind of preprocessing to determine the required references in a rapid-prototyping fashion before the main PRISMA workflow begins.
A few limitations need to be addressed, though. The tool was designed as a command line interface to integrate it programmatically into complex analysis scripts. The nature of the command line may be off-putting to technically unsophisticated users or even limit its usability. A graphical user interface could certainly promote acceptance in the context of future work. Until then, there is still the possibility to use the search engines of the literature databases directly, as shown in the example workflow.
Secondly, only two literature databases are currently integrated, primarily due to problems with not freely usable APIs. Well-known literature databases such as Web of Science (www.webofscience.com) or Scopus (www.scopus.com) have paid licenses or usage restrictions (number of hits per week), which make meaningful free use harder. Another example is the literature database Google Scholar (scholar.google.de), which requires a paid third-party provider license (www.serpapi.com).
The support of search tags and a general pagination approach are planned as future work. Many literature databases allow the restriction of individual search terms to specific metadata, such as searching only for authors' names, e.g., by using the postfix "[AU]" in PubMed or the prefix "author:" in DBLP. The usage of such terms is generally supported by the application. However, corresponding queries should only be sent to a single database, since the query can be misinterpreted by other databases due to the greatly varying syntax. This issue could be handled similarly to logical operators during query conversion by applying mappings from the configuration files. Some APIs limit the number of results returned per request. Currently, the maximum number of possible results of a single query is delivered, which is 10000 hits for PubMed and DBLP. Queries with more than 10000 hits are considered to be of too low precision and are therefore ignored. In the future, this should be addressed by dynamically loading all publications if pagination is supported by the API, or heuristically by adding the publication year to the search string and iterating over each year that contains at least one publication in the result set.
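The year-based heuristic could be sketched as follows (search_db stands in for any connector's search function and is hypothetical, as is the PubMed-style "[dp]" date tag used in the sub-queries):

```python
def paginate_by_year(search_db, query: str, first_year: int,
                     last_year: int, limit: int = 10000) -> list:
    """Work around per-request result limits by splitting a query into
    one sub-query per publication year (illustrative sketch only)."""
    records = []
    for year in range(first_year, last_year + 1):
        yearly = search_db(f"({query}) AND {year}[dp]")
        if len(yearly) >= limit:
            raise RuntimeError(f"year {year} alone exceeds the API limit")
        records.extend(yearly)
    return records
```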
Conclusions
In this work, we presented a command line tool for the calculation of intersections and differences of article result sets from automated queries to literature databases. The tool fulfills all pre-defined requirements and can help during the process of conducting a systematic review. Currently, the connection to PubMed and DBLP is implemented, but a connection to further databases providing an API can easily be added. The source code is available from https://imigitlab.uni-muenster.de/published/literature-cli.
Preparation and Analysis of Two-Dimensional Four-Qubit Entangled States with Photon Polarization and Spatial Path
Entangled states serve as the central resource for a number of important applications in quantum information science, including quantum key distribution, quantum precision measurement, and quantum computing. In pursuit of more promising applications, efforts have been made to generate entangled states with more qubits. However, the efficient creation of a high-fidelity multiparticle entanglement remains an outstanding challenge due to a difficulty that increases exponentially with the number of particles. We design an interferometer that is capable of coupling the polarization and spatial paths of photons and prepare 2-D four-qubit GHZ entangled states. Using quantum state tomography, an entanglement witness, and the violation of the Ardehali inequality against local realism, the properties of the prepared 2-D four-qubit entangled state are analyzed. The experimental results show that the prepared four-photon system is an entangled state with high fidelity.
Introduction
Quantum entanglement is the basic resource in quantum information processing. In recent years, a variety of entanglement schemes have been proposed and verified, such as multiphoton schemes [1], cold atom schemes [2,3], quantum dot schemes [4], etc. Quantum information processing theory and experiments have developed rapidly [5], especially multiphoton entanglement, which plays an important role not only in the basic tests of quantum nonlocality [6][7][8], but also in optical quantum computing [9,10], quantum teleportation [11,12], quantum key distribution (QKD) [13], and many other aspects. Over the past years, great efforts have been devoted to generating and manipulating more qubits. At the same time, many efforts are also being made to study the theoretical predictions of quantum mechanics based on polarization-entangled photons [14]. In experiments, entangled photon systems are usually prepared by the spontaneous parametric down-conversion (SPDC) process in a nonlinear crystal. In the conventional protocols for quantum information processing, the entanglement in one degree of freedom of the photon system is selected in the SPDC process. However, the difficulty of preparing a multiparticle entanglement increases exponentially with the number of particles because of the low multiphoton coincidence count rate and the high double-pair emission noise effect [15]. A single photon is able to carry more than just a qubit of quantum information, and when two photons are entangled in more than one degree of freedom, higher-dimensional (H-D) entanglement can be realized. In experiments, multi-degree-of-freedom entanglement can be generated by the combination of the techniques used for creating entanglement in a single degree of freedom [16]. With this method, many different types of H-D entangled states can be prepared [17], such as the polarization-spatial H-D entangled state [18], the polarization-orbital angular momentum H-D entangled state [19], etc. These H-D entangled systems have also attracted considerable attention. On the other hand, people are also very interested in the nature and property description of entanglement. To date, people have found a variety of methods to describe the entanglement of a system. The fidelity, measuring the extent to which the desired state is created, is the overlap of the experimentally produced state with the ideal one. The fidelity can be calculated using the density matrix in combination with full state tomography [20]. The Einstein, Podolsky, and Rosen (EPR) paradox stimulated Bell to propose an inequality to detect the existence of entanglement in a two-particle system [21][22][23]. It has been proved that the violation of an inequality against local reality (LR) is a sufficient condition for the confirmation of entanglement. Entanglement witnesses can also be used to detect the entanglement of a system; they can be obtained by measuring Pauli operators at different angles [24,25].
Many methods have been devised to prepare multiparticle entangled states through H-D entangled states. These works provide a good entanglement source for the research and application of quantum information technology [16][17][18][19]. However, some systems are complex [26], and some require harsh experimental conditions such as low temperature [27]. In this paper, motivated by facilitating the generation of multiqubit entangled states, we have designed an interferometer that can couple the polarization and spatial paths of photons and prepare an entangled state of the two modes. As a basis, two-photon entangled states with high brightness and high fidelity were prepared using SPDC technology. Next, we designed the composite interferometer and prepared two-dimensional (2-D) four-qubit Greenberger-Horne-Zeilinger (GHZ) entangled states of the polarization and spatial path of photons. Finally, the properties of the prepared four-qubit entangled state were analyzed by three methods: quantum state tomography, entanglement witness, and the violation of the Ardehali inequality against local realism (LR). The experimental results proved that the prepared 2-D four-photon system is an entangled state with high fidelity.
Experimental Preparation of a 2-D Four-Qubit Entangled State
The first step in preparing a 2-D four-qubit entangled state in this experiment is the preparation of a two-photon entangled state with high brightness and fidelity. We produce the polarization-entangled two-photon states using SPDC [28]. The experimental setup is shown in Figure 1. Using a CW 532 nm all-solid-state green laser (Millennia, Spectra-Physics, Palo Alto, CA, USA) as a light source, we pump a mode-locked Ti:sapphire femtosecond (fs) laser. The parametric conversion effect of the Ti:sapphire generates a mode-locked femtosecond pulsed laser and emits an infrared (IR) pulse laser beam with a central wavelength of 780 nm, a pulse width of 100 fs, and a repetition rate of 80 MHz. In the experiment, the power of the pump laser is 8.5 W, and the output power of the pulsed IR laser is about 1.4 W. We focused the IR pulsed laser through a LiB3O5 (LBO) frequency-doubling crystal to generate ultraviolet (UV) light with a central wavelength of 390 nm via parametric up-conversion. In order to improve the up-conversion efficiency, a lens with a focal length of 3.5 mm was inserted in front of the LBO crystal to form a small, focused beam on the crystal. Since the output beam of the femtosecond laser is elliptical, a combination of two cylindrical lenses is used to refocus the beam into a circular shape. Moreover, because the IR laser cannot be completely up-converted in the LBO crystal, the UV beam emitted after the LBO crystal is mixed with unconverted IR light. The mixed infrared light would degrade the fidelity of the two-photon entangled state, so it must be removed effectively. We use five dichroic mirrors (DMs) that reflect UV light and transmit IR light to form a high-efficiency filter, and the mixed IR light is effectively removed. The UV light pulse is focused by a suitable lens onto a 2 mm thick β-barium borate (BBO) nonlinear crystal. By choosing a specific direction of the incident pump laser, the type-II parametric down-conversion process can occur in the BBO crystal. In this process, a 390 nm UV photon is split into two 780 nm IR photons with a certain probability. Due to the conservation of energy and momentum, the horizontal and vertical polarizations of the photon pair are entangled, that is, an EPR entangled photon pair is formed. A compensator composed of a half-wave plate (HWP) and a 1 mm thick BBO crystal behind it is used to compensate the deviation between horizontally and vertically polarized photons. A pair of entangled photons (|HH⟩ + |VV⟩)/√2 in paths 1 and 2 is thereby prepared, where H and V represent horizontal and vertical polarization, respectively. It is experimentally found that the intensity of the UV laser strongly affects the brightness and fidelity of the EPR entangled pairs. When the average power of the UV laser pulse is reduced to about 100 mW, we obtain better fidelity of the output state. The coincidence count rate for two-photon EPR entangled pairs is about 6 × 10³/s. The visibility of the EPR entangled state is about 97% on the H/V basis, and about 95% on the +/− basis, where |±⟩ = (|H⟩ ± |V⟩)/√2.

Figure 1. The quartz plate ∆d is used to adjust the phase ϕ between the photons H and V and to ensure that the two photons reach the BS simultaneously. The coincidence time window is set to 5 ns, which ensures that accidental coincidences are negligible. Every output is spectrally filtered (∆FWHM = 3 nm) and monitored by fiber-coupled single-photon detectors. The state analyzer is composed of a PBS, a QWP, a filter, and a single-photon detector (SPCM-AQRH-13-FC, integrated detection efficiency 60%). (b) Optical device description.
Next, we generate the 2-D entangled state of photon polarization and spatial path. The two paths of the EPR entangled pair enter an interferometer composed of a polarizing beam splitter (PBS) and a beam splitter (BS), respectively, and the 2-D four-qubit entangled state of photon polarization and path is obtained in the 2-D entanglement generation system. In addition, the interferometer independently measures the polarization and spatial qubits simultaneously in the bases |H⟩/|V⟩ and |H⟩ ± e^(iϕ)|V⟩. In the experiment, the PBS separates the photon into two possible spatial modes H′ and V′, according to its polarization H or V, respectively. Here, the result H′ for every state analyzer represents the transmission path of the photon after the PBS before reaching the BS. The state of this single photon can now be written as an entangled state between its polarization and spatial degree of freedom. The interferometer combines the two paths of the spatial qubit on a BS. By adjusting the retardation phase ϕ (set via the thickness ∆d inserted in the optical path by a quartz plate), the interference contrast between the two paths can be adjusted, as shown in Figure 2, and the measurement of the spatial qubits can be achieved simultaneously. The maximum contrast of the interferometer is 83 ± 0.6%. Thus, 2-D entangled four-qubit GHZ states of polarization and spatial paths can be created. The intensity of the prepared GHZ state is about 220 coincidences per second [29,30]:

|GHZ⟩ = (|H⟩|H⟩|H′⟩|H′⟩ + |V⟩|V⟩|V′⟩|V′⟩)/√2    (1)
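The structure of this state can be checked numerically; the following sketch (a didactic aid, not part of the experiment) builds the four-qubit GHZ state of Equation (1) and verifies a characteristic correlation:

```python
import numpy as np

# Build the four-qubit GHZ state (|0000> + |1111>)/sqrt(2), encoding
# H, H' -> |0> and V, V' -> |1> for the polarization and path qubits.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron_all(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

ghz = (kron_all(ket0, ket0, ket0, ket0)
       + kron_all(ket1, ket1, ket1, ket1)) / np.sqrt(2)

# Joint sigma_x measurement on all four qubits: <GHZ|XXXX|GHZ> = +1.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
XXXX = np.kron(np.kron(sx, sx), np.kron(sx, sx))
print(np.round(ghz @ XXXX @ ghz, 6))  # -> 1.0
```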
Analysis of 2-D Four-Qubit Entangled State
We first analyze the 2-D entangled states produced by the above steps using quantum state tomography. The prepared quantum states were analyzed using a state analyzer composed of a PBS, a QWP, and filters. Using the estimated density matrix in combination with full state tomography, the GHZ state fidelity can be calculated as F = Tr(ρ|ψGHZ⟩⟨ψGHZ|) = 0.83 ± 0.01. Thus, with high statistical significance, a genuine four-qubit entanglement of the GHZ states created in our experiment is confirmed [31,32]. To obtain the state density matrix, a set of complementary measurements needs to be performed on the prepared GHZ states; each qubit is read out in the |H⟩/|V⟩ basis, the |±⟩ = (|H⟩ ± |V⟩)/√2 basis, and the circular basis (|H⟩ ± i|V⟩)/√2. Using these data and the maximum-likelihood technique, we reconstruct the density matrix of the 2-D entangled four-qubit GHZ states, as shown in Figure 3. In standard error models of entangled photon measurements and counts, the counts are generally assumed to follow a Poisson distribution; the quoted error is the standard deviation derived from the Poisson count statistics of the raw measurement counts. Here, we point out that the main factors affecting the GHZ state fidelity include the detector efficiency and double-pair effects in SPDC.

Second, to verify that |H⟩⊗2|H′⟩⊗2 and |V⟩⊗2|V′⟩⊗2 are indeed in a coherent superposition, we can use an entanglement witness operator to detect the GHZ entanglement [24,25], built from the measurement settings Mi⊗4 = (cos θ σx + sin θ σy)⊗4 with θ = kπ/4 (k = 0, 1, ..., 3). It can be obtained by measuring the Pauli operators at different angles. The measurement results of Mi are shown in Figure 3b. Therefore, we can further calculate the visibility of the prepared four-qubit 2-D GHZ states at about 0.72 ± 0.015, which greatly exceeds the minimum bound of 0.5 that proves the existence of entanglement.

Furthermore, studies have shown that entangled states can unambiguously display the conflict between quantum mechanics (QM) and LR. It is known that all bi-qubit pure states violate the bipartite Bell-type inequality (BTI), namely the Clauser-Horne-Shimony-Holt (CHSH) inequality [33]. Studies show that the violation of the inequality indicates the existence of entanglement in the system, and the amount of violation increases with the degree of entanglement of the state [34]. We consider the Ardehali operator A in a four-qubit entangled system [35,36], a linear combination of joint measurements of the Pauli operators σx and σy on the four qubits, which can be experimentally measured in the +/− and R/L bases, respectively. For a mere classical system, that is, one which obeys a local hidden-variable theory, it is well known that the correlation measure is bounded by the CHSH theorem [37]. The upper bound of operator A under the assumption of local realism is A_LR = 4. This bound can be violated by an entangled quantum mechanical state, and, in fact, for any pure quantum state the correlation measure is bounded by A_QM ≤ 8√2, which is also called Tsirelson's bound [38]. Clearly, the QM results contradict the predictions of LR. That is to say, for a four-qubit entangled system, the expected value of operator A given by LR is not greater than 4, while the result given by QM theory is not greater than 8√2. The region between 4 and 8√2 marks the violation of the LR predictions by QM, which is not only a hot issue in quantum entanglement and nonlocality research, but also an important means to characterize the properties of entanglement resources. We verify this by performing polarization measurements on the state. The measurement of the eigenvalue of operator A is actually a joint measurement of Pauli operators in the 2-D four-qubit system. For example, for the joint measurement operator σx(1)σx(2)σx(3)σy(4), sixteen sorts of polarization settings must be performed. For each measurement point, we collect the data of every setting for 60 s and repeat it three times. After all the joint measurements are completed, we calculate that the measured value of operator A is 8.32 ± 0.07, which shows a violation of LR by more than 61 standard deviations. This also indicates that the prepared state in the experiment is a genuine GHZ state.
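As a numerical aid (illustrative only), the reported fidelity can be mimicked by mixing the ideal GHZ state of Equation (1) with white noise and evaluating F = Tr(ρ|ψGHZ⟩⟨ψGHZ|); the mixing weight below is a placeholder, not a fitted experimental value:

```python
import numpy as np

# Ideal four-qubit GHZ state (|0000> + |1111>)/sqrt(2).
dim = 16
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1.0 / np.sqrt(2.0)
proj = np.outer(ghz, ghz)

# Toy noise model: ideal state mixed with the maximally mixed state.
p = 0.82  # illustrative mixing weight
rho = p * proj + (1.0 - p) * np.eye(dim) / dim

fidelity = np.trace(rho @ proj).real
print(round(fidelity, 3))  # ~0.83 for this choice of p
```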
Conclusions
In summary, using SPDC technology, a two-photon entangled state of high brightness and high fidelity was prepared. Next, we designed a composite interferometer that can couple the polarization and spatial path of photons, and two-dimensional four-qubit GHZ states were prepared. The properties of the prepared four-qubit entangled states were analyzed by three methods. First, the density matrix of the state was reconstructed by quantum state tomography, and the fidelity was calculated to be 0.83. We also introduced an entanglement witness operator to characterize the 2-D four-qubit state entanglement; the value of the entanglement witness operator was measured to be 0.75. Further, the measured value of the Ardehali operator violated the local-realism bound by 62 standard deviations. The experimental results prove that the prepared 2-D four-photon system is an entangled state with high fidelity. There are many reasons for imperfect data. Primarily, there may be defects in linear optical elements such as beam splitters or filters that allow photons to be absorbed or scattered. The implementation of a high-intensity entangled source is a significant step towards practical long-distance multiparty quantum communication in the future.
H-D entanglement can be used directly in some important applications of quantum information technology. For example, it can help us implement many important tasks in quantum communication with one degree of freedom of photons, such as quantum dense coding with linear optics, the complete Bell-state analysis for quantum states in the polarization degree of freedom, deterministic entanglement purification, and an efficient quantum repeater. In addition, it needs to be pointed out that it is very meaningful to further study the universality and generalization of new inequalities for multiqubit entangled states [39]. Given the important future of quantum information technology, we also need to understand the evolution of quantum correlations under the influence of decoherence. This is a research direction in which people are quite interested [40,41].
Asymptomatic COVID-19 infection in multiple trauma patients: should we obtain more CT-scans?
Background: There are studies that show a chest CT scan is superior to reverse transcription-polymerase chain reaction (RT-PCR) studies in the diagnosis of COVID. This study was designed to assess the prevalence of COVID-related lung involvement in patients admitted to a trauma center. Methods: In a retrospective study, data from a regional referral trauma center from February 21, 2020 to April 10, 2020, were reviewed. All patients admitted to the hospital for whom a chest CT scan was performed during the study period for any reason were included. Trained physicians screened all CT scans for findings suggestive of COVID-19. Next, blinded radiologists selected CT scans with findings highly suggestive of COVID involvement. The clinical course and outcome, and the results of PCR for SARS-CoV-2, were recorded and assessed. Results: A total of 4200 chest CT scans were reviewed. After multiple rounds of exclusion, 24 patients with highly suggestive findings were reviewed. Only three patients developed COVID symptoms during the course of admission. PCR results were positive in 22 patients (92.6%). Conclusions: We suggest having a lower threshold for ordering chest CT scans in trauma patients at a high risk of COVID infection, as well as those requiring extensive surgical interventions. Also, a thorough review of the available CT scans before invasive procedures, preferably with the help of an expert radiologist, is highly recommended, even when the results of the COVID laboratory tests are negative.
Background
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) that causes the Coronavirus disease 2019 (COVID-19) first emerged in China at the end of 2019 and soon spread throughout the world [1].
Presentation of SARS-CoV-2 infection ranges from asymptomatic infection to mild pneumonia and severe disease with dyspnea to critical disease with respiratory failure, shock, or multiorgan dysfunction [2]. As cases of COVID-19 increase globally, the knowledge around this virus is evolving. But so far, there are no reliable treatments on the close horizon to manage COVID-19 or a vaccine to prevent its spread. Also, clinicians face a significant challenge in dealing with complications related to this infection. Therefore, social distancing has been adopted globally to 'flatten the curve' of COVID infection. Iran was hit hard with an early outbreak and a high initial rate of infection. Following the national lockdown orders, the rate of infection has reduced substantially, which has also caused economic collateral damage. Countries around the world have already started to loosen the social distancing rules, which may lead to the spread of infection [3].
Trauma centers have a unique role in the healthcare system. The economic restart will increase the patient load of trauma centers. Combined with the anticipated increase in the incidence of COVID-19, trauma centers might be on the brink of an unanticipated resurgence of COVID, which they are unlikely to be prepared for.
Chest CT scan is highly sensitive and specific in the diagnosis of COVID, even in asymptomatic cases.
There are studies that show a chest CT scan is superior to reverse transcription-polymerase chain reaction (RT-PCR) studies in the diagnosis of COVID, with the added advantage of being able to follow the progression of the disease objectively [4,5].
This study was, therefore, designed to assess the prevalence of COVID-related lung involvement in patients admitted to a trauma center. We hypothesized that considering the high prevalence of asymptomatic COVID infections reported previously, we would encounter a high incidence of COVID-related chest CT scan changes in asymptomatic patients.
Methods
In a retrospective study, data from a regional referral trauma center from February 21, 2020 to April 10, 2020, were reviewed. Inclusion criteria included patients admitted to the hospital for whom a chest CT scan was performed during the study period for any reason. While admissions for all reasons were included, only multiple trauma patients are routinely evaluated with a chest CT scan, and therefore, there is a high probability of having orthopaedic injuries. Patients who were re-admitted and those with incomplete records were excluded. Also, for patients with >1 CT scans, only the first imaging after admission was reviewed. All data were accessed through the electronic health system, and CT scans were reviewed on the computer screen from the picture archiving and communicating system (PACS).
A physician was trained to screen all CT scans, looking for COVID-related findings, which were extrapolated from the literature [6][7][8]. This investigator was unaware of the clinical condition of the patients, as well as the status of COVID infection. A form was filled for each CT scan (Appendix 1), and the presence of ≥1 findings suggestive of COVID qualified the patient for the next round of readings.
Next, two experienced radiologists, also unaware of the clinical course and diagnosis of the patients, separately examined the CT scans from the previous round. In compliance with the national guidelines, the Iranian radiology society criteria for reporting COVID imaging were used, which report CT scans in three categories [4]: highly suggestive, inconsistent, and normal. A separate form was used by the radiologists to report their findings (Appendix 2), with details of the lobes involved and the patterns visible on CT scans.
The charts of the patients remaining after the second round of screening were extracted and thoroughly reviewed. Demographic data, presence or absence of clinical COVID symptoms, the results of deep nasal swab polymerase chain reaction (PCR) for SARS-CoV-2, and the reason for admission, as well as the clinical course and outcome, were recorded.
Descriptive statistics were used to report frequencies and means. All statistical analyses were performed using the IBM SPSS Statistics for Windows, Version 23.0 (Armonk, NY, IBM Corp).
Results
During the study period, 4200 patients underwent a chest CT scan at our institution. After the first round of readings, 320 studies (7.6%) were selected as having findings suggestive of COVID. After separate readings by two radiologists, 74 CT scans were selected. Next, patients having patterns with the highest specificity for COVID were selected by consensus between the two radiologists. The last round yielded 24 records (Figure 1).
Of the 24 patients, 20 (83%) were male. The mean age was 37.6 years (SD 3.5). Sixteen patients were admitted following a car accident, five after a falling accident, and three with blunt trauma from fights.
Sixteen patients (67%) had sustained a fracture or dislocation (Appendix 3). Three patients (12.5%) had respiratory symptoms compatible with COVID. The mean white blood cell count of the patients was 14,500 (range, 4,900-23,900, SD:4,542.23) with a lymphocyte count of 2500.
One of the patients developed ARDS during the hospital stay and died following admission to the ICU. The remaining 23 patients were discharged following recovery from their initial injuries. All patients who came to the hospital due to trauma and required treatment underwent surgery and were discharged after the operation with no early complications.
On the chest CT scans, all 24 patients had a rounded morphology pattern of ground-glass opacities, and 4 patients also had the crazy-paving pattern. The lobes involved were left upper and lower lobes (each in 15 patients), right lower lobe (15 patients), right upper lobe (14 patients), and right middle lobe (9 patients). Eight patients had universal involvement of all lobes (33%). Five patients had unilateral involvement (3 on the right, 2 on the left).
The results of PCR for SARS-CoV-2 were positive in 22 patients. When highly suggestive CT scan findings were considered the diagnostic gold standard, a positive PCR had a sensitivity of 92.6%. Both patients with a negative PCR result were asymptomatic and had limited lobar involvement.
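For transparency, sensitivity here is the proportion of PCR-positive patients among those with highly suggestive CT findings; the exact denominator behind the reported 92.6% is not stated, so the counts in the following sketch are assumptions taken from the text:

```python
# Sensitivity of PCR with highly suggestive CT findings as reference:
# true positives / (true positives + false negatives).
tp = 22              # PCR-positive among highly suggestive CT scans
fn = 24 - tp         # PCR-negative among highly suggestive CT scans
sensitivity = tp / (tp + fn)
print(f"{sensitivity:.1%}")  # 91.7% with these assumed counts
```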
Discussion
Several countries have already started to ease the social distancing interim laws. The re-opening of the economy will nevertheless result in the resurgence of COVID [9]. With neither a proven treatment nor a vaccine available, this may result in overloaded hospitals in a yet recuperating healthcare system. Trauma centers will be at the forefront of this crisis, as the trauma caseload will undoubtedly increase, and COVID might affect the treatment and prognosis of traumatic injuries drastically [10]. In this study, we retrospectively reviewed the available CT scans of patients admitted over a 45-day period, to determine the prevalence of highly suggestive findings of lung involvement due to COVID. Our findings are alarming, as we found 24 patients with lung involvement, most of which was multi-lobar. Only three patients were symptomatic on further review of the charts.
CT scan has been shown to be highly specific in the diagnosis of COVID [4,5]. We found ground-glass opacities of round morphology to be the most common findings, similar to previous studies [11]. Some studies have shown that the sensitivity and specificity of CT findings are higher than those of PCR studies [5]. In this study, PCR had a sensitivity of 92.6% in patients with highly suggestive CT findings. It should be noted that due to the overload of the laboratory facilities at our institution at the beginning of the pandemic, the results of the PCR tests were reported at least 1.5 days after the request, and were not available if the patients required an emergent surgical intervention.
A high rate of asymptomatic infection has been reported with COVID, and some studies have suggested that early screening of highly-suspected cases with CT scans may predict severe complications such as acute respiratory distress syndrome (ARDS) [12]. At a minimum, isolation measures could be undertaken earlier, contact tracing be commenced in regions with such measures, and PCR studies rechecked if negative [13]. With the high infectivity rate of COVID, these steps are necessary to break the chain of infection.
Patients in trauma centers are likely to require surgical intervention during their hospital stay. Surgery in patients with COVID has been shown to have a high complication rate, including ARDS, long ICU stay, and a high rate of postoperative mortality [14]. A proportion of surgical interventions could be postponed with no to minimal change in prognosis, including some orthopedic and reconstructive procedures. Therefore, during the current pandemic, proper screening and diagnosis of high-risk patients are absolutely essential to reduce mortality and improve prognosis.
This study has several limitations, including those inherent to a retrospective chart review. We did not assess the results of COVID PCR or antibody tests for the intermediately suggestive findings due to a high rate of inconclusive tests at the beginning of the study period. Also, some patients might have become symptomatic after discharge. We also did not include the less suggestive findings, to increase the specificity of our imaging findings.
Conclusions
In conclusion, in a retrospective review of 4200 chest CT scans of patients at a trauma center, we found 24 patients with highly suggestive findings of COVID, with all except three being asymptomatic. The sensitivity of PCR was 92% in the presence of highly suggestive CT findings. We suggest having a lower threshold for ordering chest CT scans in trauma patients at a high risk of COVID infection, as well as those requiring extensive surgical interventions. Also, a thorough review of the available CT scans before invasive procedures, preferably with the help of an expert radiologist, is highly recommended, even when the results of the COVID laboratory tests are negative.

Sciences. All consents to participate were obtained from the participants.
Consent for publication:
Written patient consent was obtained for publication of all aspects of the case, including personal and clinical details and images from the patient.
Availability of data and materials: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Figure: Asymptomatic COVID-19 infection in multiple trauma patients. (A) A 51-year-old patient with both-bone forearm and proximal humeral fractures following a car accident. The patient had no respiratory symptoms, despite diffuse involvement of both lungs, as evident on the CT scan. (B) Chest CT scan of an 18-year-old male admitted following a fall from a height, with bilateral calcaneal and lumbar spine fractures. The patient was also asymptomatic.
Heat-Polymerized Resin Containing Dimethylaminododecyl Methacrylate Inhibits Candida albicans Biofilm
The prevalence of stomatitis, especially that caused by Candida albicans, has highlighted the need for new antifungal denture materials. This study aimed to develop an antifungal heat-curing resin containing a quaternary ammonium monomer (dimethylaminododecyl methacrylate, DMADDM), and to evaluate its physical performance and antifungal properties. Discs were prepared by incorporating DMADDM into the polymer liquid of a methyl methacrylate-based, heat-polymerizing resin at 0% (control), 5%, 10%, and 20% (w/w). Flexural strength, bond quality, surface charge density, and surface roughness were measured to evaluate the physical properties of the resin. The specimens were incubated with a C. albicans suspension in medium to form biofilms. Then colony-forming unit (CFU) counting, the XTT assay, and scanning electron microscopy (SEM) were used to evaluate the antifungal effect of the DMADDM-modified resin. The DMADDM-modified acrylic resin had no effect on the flexural strength, bond quality, or surface roughness, but it increased the surface charge density significantly. Meanwhile, this new resin inhibited the C. albicans biofilm significantly according to the XTT assay and CFU counting. The hyphae in the C. albicans biofilm were also reduced in the DMADDM-containing groups, as observed by SEM. The DMADDM-modified acrylic resin was effective in the inhibition of C. albicans biofilm while retaining good physical properties.
Introduction
Heat-curing resins are frequently used in prosthodontics, particularly in complete denture bases and partial denture bases due to their esthetically acceptable color and availability at low cost. However, there is a high incidence of denture-induced stomatitis in denture wearers [1][2][3].
Candida-associated denture stomatitis is observed in approximately 11% to 67% of otherwise healthy denture wearers [1]. C. albicans can exist in two basic forms, the yeast phase and the mycelial phase [8]. The switch between the different forms of growth is one of its virulence factors, which can also lead to Candida biofilm formation [9,10]. A previous study indicated that the formation of C. albicans biofilms on dentures can not only assist the survival of fungal cells [5], but also increase inflammation by secreting the aspartic proteases Sap4/Sap6 and the mycelium protein Hwp1, and by expressing the adhesion genes ALS3/EPA1 [11]. Biofilms of C. albicans are usually difficult to remove by mechanical or chemical cleaning compared to planktonic cells [12,13]. Reducing the C. albicans biofilm on the surface of the prosthesis is a pragmatic strategy to control denture stomatitis [3,14,15]. Although adequate denture cleaning is imperative for the prevention of denture stomatitis, it is more beneficial and necessary to develop an antifungal denture surface [3,14,15].
To grant the acrylic resin antifungal ability, a large number of antimicrobial agents have been added to dental materials. All of them can be divided into two classes: releasing and non-releasing materials. For the releasing materials, early studies indicated that tea tree oil and chlorhexidine gluconate were effective in inhibiting C. albicans growth on heat-polymerized acrylic resin and denture soft liners [13,16]. A denture base resin containing nano-silver showed antifungal activity and an inhibitory effect on the adhesion and biofilm formation of C. albicans, especially at higher concentrations [17]. However, as the antimicrobial ingredients are released, the mechanical properties decrease and the antimicrobial effect is not sustainable [12,14]. The biosafety of the released ingredients is also unclear [14,[17][18][19][20].
Non-releasing antibacterial materials have been synthesized in many dental materials [21][22][23][24][25] and have demonstrated good antibacterial effects. Quaternary ammonium methacrylates (QAMs), such as 12-methacryloyloxydodecyl-pyridinium bromide (MDPB), can be copolymerized and covalently bonded in resins, being immobilized and exerting a contact-killing capability against oral bacteria and biofilms. Several other non-releasing antibacterial materials were recently reported, such as a methacryloxylethylcetyl dimethylammonium chloride (DMAE-CB) containing adhesive, antibacterial glass ionomer cements, antibacterial nanocomposites, and bonding agents using a quaternary ammonium dimethacrylate (QADM). Dimethylaminododecyl methacrylate (DMADDM), a new kind of QAM, was also added to composite resin, bonding agent, and glass-ionomer cement as a non-releasing agent, and demonstrated an antibacterial effect [25][26][27][28]. However, only a few articles have described QAMs (MDPB) as additives in acrylic resin to study the antibacterial activity of the new materials [29,30], and no antifungal investigation of a QAM-modified acrylic resin has been reported. In particular, DMADDM has not been added to acrylic resin to explore its antifungal effect [31][32][33].
The aim of this paper is to incorporate antibacterial DMADDM into heat-polymerized resin with a new process and further investigate the effects on both physical performance and the formation of C. albicans biofilms.
Synthesis of Antibacterial Monomer
Dimethylaminododecyl methacrylate (DMADDM) was synthesized according to a previously described process [27,34]. Briefly, 10 mmol of 2-(dimethylamino)ethyl methacrylate (DMAEMA), 10 mmol of 1-bromododecane (BDD), and 3 g of ethanol were mixed in a vial by capping and stirring at 70 °C for 24 h. The ethanol was evaporated after the reaction was completed. The clear liquid remaining in the vial was DMADDM, which was verified via Fourier transform infrared spectroscopy.
Specimen Fabrication
The commercial acrylic resin, Nature Cryl™ MC (GC America Inc., Alsip, IL, USA), was used for making samples. Acrylic resin was prepared by polymerizing heat-polymerizable powder and liquid following the manufacturer's instructions in a cavity die metal box (10 × 10 × 5 cm³). The acrylic resin was cut into small samples (11 × 11 × 3 mm³) with a diamond-coated band saw (Struers Minitom, Holstebro Kommune, Denmark). The control group was heat-polymerized with powder and liquid without DMADDM. We developed a new approach to making the double-decked acrylic resin (Figure 1), which can be manufactured as follows:

i. DMADDM was added to the heat-polymerizable liquid and blended to a certain mass fraction (5%, 10%, and 20%).
ii. Untreated heat-polymerizable liquid was mixed with untreated powder, reacting until the paste stage.
iii. Treated heat-polymerizable liquid was mixed with untreated powder, reacting until the paste stage.
iv. One-third treated acrylic resin and two-thirds untreated acrylic resin were placed into the upper and lower portions of the cavity die box, respectively, and filled with gypsum at the same time (Figure 1). Pressure was applied for polymerization by tightening the bolts on the cavity die box, and excess material was removed.
v. The box was put into an incubator, reacting at 72 °C for 90 min and then at 100 °C for 60 min.
vi. Once the die box had cooled down, the acrylic resin was taken out and cut into specimens of defined size (11 × 11 × 3 mm³) with a diamond-coated band saw (Struers Minitom).
vii. In turn, the treated acrylic resin surface was polished with standard metallographic sandpaper of different grit sizes (P400, P800, P1000, P1500, P2000, P2400, and P4000) (Struers Minitom).
After being polymerized, the upper third of samples showed slight loss of color. There was a natural color transition and no obvious dividing line from lower two-thirds to the upper third in the successful double-decked sample.
After immersion in distilled water at 37 °C for 24 h, the specimens were sterilized in an ethylene oxide sterilizer (Anprolene AN74i, Andersen, Haw River, NC, USA). The specimens were separated into four groups: acrylic resin with 0% DMADDM; acrylic resin with 5% DMADDM; acrylic resin with 10% DMADDM; and acrylic resin with 20% DMADDM. The first group was the control group while the others were the experimental groups.
Mechanical Testing
The bond quality of the interface between the two-thirds denture base layer and the one-third DMADDM layer was tested with a universal testing machine (5500R, MTS, Cary, NC, USA) [35,36]. The denture base layer was rigidly fixed to the holding arm of the machine. The DMADDM layer was in the middle and above the surface of the base layer. Shear force was applied with a screwdriver perpendicular to the vertical axis of the DMADDM layer at a distance of 0.2 mm from the bond interface (Figure 2). The crosshead speed was 1 mm/min. The fracture faces were recorded and the fracture strength was calculated.

The flexural strength of each acrylic resin specimen was measured via a three-point flexural test with a 15-mm span at a crosshead speed of 1 mm/min on a computer-controlled universal testing machine (5500R, MTS, Cary, NC, USA). The flexural strength of the material was calculated as S = 3P_max·L/(2bh²), where P_max is the maximum load on the load-displacement curve, L is the flexure span, b is the specimen's width, and h is the specimen's thickness [37,38].
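A minimal sketch of this calculation (the load value is a placeholder, not a measured result) is:

```python
def flexural_strength(p_max_n: float, span_mm: float,
                      width_mm: float, thickness_mm: float) -> float:
    """Three-point flexural strength S = 3*P_max*L / (2*b*h^2) in MPa,
    with the load in newtons and all lengths in millimetres."""
    return 3.0 * p_max_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

# Placeholder example: 400 N maximum load on a 15 mm span for an
# 11 mm wide, 3 mm thick specimen.
print(round(flexural_strength(400.0, 15.0, 11.0, 3.0), 1), "MPa")  # 90.9 MPa
```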
Surface Roughness Observation
An AFM (Atomic Force Microscopy, 5500SPM, Agilent, Palo Alto, CA, USA) was used at high resolution with a sharp silicon tip (0.5 N/m) in tapping mode. The surface topography of the treated acrylic resin disk was obtained over an area of 10 × 10 µm². The surface roughness of the samples was provided by the system software (SPIWIN 2.0, NSK, Tokyo, Japan), and the Ra data of the different groups were compared [37,38].
Charge Density Testing
The charge density present on the polymer disk surfaces was quantified using a fluorescein dye method as in a previous study [39]. Acrylic resin disks were put in a 24-well plate. Fluorescein sodium salt (200 µL of 10 mg/mL) in deionized (DI) water was added into each well. The specimens were left in the dark at room temperature for 10 min. After removing the fluorescein solution and rinsing with DI water, each disk was placed in a new 24-well plate, and 200 µL of 0.1% (by mass) cetyltrimethylammonium chloride (CTMAC) in DI water was added. The samples were shaken in the dark at room temperature for 20 min to desorb the bound dye. The CTMAC solution was supplemented with 10% (by volume) of 100 mM phosphate buffer at pH 8. Each sample's absorbance was read at 501 nm via a plate reader (SpectraMax M5, Molecular Devices, Sunnyvale, CA, USA). The fluorescein concentration was calculated by Beer's law and an extinction coefficient of 77 mM⁻¹·cm⁻¹. Using a ratio of 1:1 for fluorescein molecules to the accessible quaternary ammonium groups, the surface charge density was calculated as the total molecules of charge per unit of exposed surface area. Six replicates were tested for each group.
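A sketch of the underlying arithmetic (the absorbance reading, optical path length, and exposed disk area below are assumptions for illustration, not measured values):

```python
# Surface charge density from the fluorescein complexation assay.
AVOGADRO = 6.022e23

absorbance = 0.5      # placeholder A(501 nm) reading
epsilon = 77.0        # extinction coefficient, mM^-1 * cm^-1
path_cm = 0.55        # assumed optical path of the well volume

conc_mM = absorbance / (epsilon * path_cm)     # Beer's law: A = eps*c*l
volume_L = 200e-6                              # 200 uL of CTMAC eluate
moles = conc_mM * 1e-3 * volume_L              # mol of desorbed fluorescein

area_cm2 = 2 * 1.1 * 1.1                       # both 11 x 11 mm faces (assumed)
charges_per_cm2 = moles * AVOGADRO / area_cm2  # 1:1 dye-to-charge ratio
print(f"{charges_per_cm2:.2e} charges/cm^2")
```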
Biofilm Formation Assay
C. albicans SC5314 (ATCC MYA-2876) were recovered on YPD plate (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar) at 35 °C overnight. For the biofilm formation inhibition assay, the specimens were incubated with 2 mL of prepared C. albicans solution (final concentration: 1 × 10 5 cells/mL) at 37 °C in spider medium (10 g nutrient broth, 10 g mannitol, and 2 g K2HPO4 dissolved in 1 L distilled water) for 120 h. After biofilm formation, non-adhering cells were removed by washing three times with autoclaved phosphate-buffered saline (PBS) [40]. All the experiments were repeated three times. The morphological structure and the biomass of biofilm will be tested in the following experiments.
Surface Roughness Observation
An AFM (atomic force microscope, 5500SPM, Agilent, Palo Alto, CA, USA) was used at high resolution with a sharp silicon tip (0.5 N/m) in tapping mode. The surface topography of each treated acrylic resin disk was obtained over an area of 10 × 10 µm². The surface roughness of the samples was determined with the system software (SPIWIN 2.0, NSK, Tokyo, Japan), and the Ra values of the different groups were compared [37,38].
Charge Density Testing
The charge density present on the polymer disk surfaces was quantified using a fluorescein dye method, as described in a previous study [39]. Acrylic resin disks were placed in a 24-well plate, and fluorescein sodium salt (200 µL of 10 mg/mL) in deionized (DI) water was added to each well. Specimens were left in the dark at room temperature for 10 min. After removing the fluorescein solution and rinsing with DI water, each disk was placed in a new 24-well plate, and 200 µL of 0.1% (by mass) cetyltrimethylammonium chloride (CTMAC) in DI water was added. Samples were shaken in the dark at room temperature for 20 min to desorb the bound dye. The CTMAC solution was then supplemented with 10% (by volume) of 100 mM phosphate buffer at pH 8. The absorbance of each sample was read at 501 nm using a plate reader (SpectraMax M5, Molecular Devices, Sunnyvale, CA, USA). The fluorescein concentration was calculated using Beer's law and an extinction coefficient of 77 mM⁻¹·cm⁻¹. Assuming a 1:1 ratio of fluorescein molecules to accessible quaternary ammonium groups, the surface charge density was calculated as the total molecules of charge per unit of exposed surface area. Six replicates were tested for each group.
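To make the Beer's law arithmetic concrete, the sketch below walks through the concentration and per-area normalization steps; the path length, well volume, and exposed disk area are assumed placeholders, since the text does not report them.

```python
AVOGADRO = 6.022e23  # molecules per mole

def surface_charge_density(absorbance: float, path_length_cm: float,
                           volume_l: float, exposed_area_cm2: float) -> float:
    """Charges per cm^2 from the fluorescein absorbance at 501 nm.

    Beer's law A = epsilon * c * l with epsilon = 77 mM^-1 cm^-1
    (77,000 M^-1 cm^-1); a 1:1 fluorescein-to-quaternary-ammonium
    ratio is assumed, as in the protocol above.
    """
    epsilon = 77e3                                     # M^-1 cm^-1
    conc_mol_per_l = absorbance / (epsilon * path_length_cm)
    molecules = conc_mol_per_l * volume_l * AVOGADRO
    return molecules / exposed_area_cm2

# Placeholder values: A = 0.5, 0.5 cm path, 200 uL of dye solution, 1.5 cm^2 disk
print(f"{surface_charge_density(0.5, 0.5, 200e-6, 1.5):.2e} charges/cm^2")
```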
Biofilm Formation Assay
C. albicans SC5314 (ATCC MYA-2876) was recovered on a YPD plate (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar) at 35 °C overnight. For the biofilm formation inhibition assay, the specimens were incubated with 2 mL of prepared C. albicans suspension (final concentration: 1 × 10⁵ cells/mL) at 37 °C in spider medium (10 g nutrient broth, 10 g mannitol, and 2 g K₂HPO₄ dissolved in 1 L distilled water) for 120 h. After biofilm formation, non-adhering cells were removed by washing three times with autoclaved phosphate-buffered saline (PBS) [40]. All experiments were repeated three times. The morphological structure and biomass of the biofilms were assessed in the subsequent experiments.
C. albicans Biofilm Metabolic Activity and Biomass Assay
An XTT (2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide) assay was used to determine the metabolic activity of the biofilms, as described previously [41]. The XTT/menadione assay mix was made at 12.5 XTT/menadione (v/v) using stock solutions of 1 mg/mL XTT (Invitrogen X6493, Carlsbad, CA, USA) dissolved in PBS and menadione (reagent grade; Nutritional Biochemicals Corp., Cleveland, OH, USA) dissolved in acetone (reagent grade). After biofilms had formed on the disks, the disks were placed in a 24-well plate (with PBS) and washed three times to remove non-adherent cells. The washed disks were placed in a new 24-well plate with 100 µL PBS containing 50 µL of the XTT/menadione solution and incubated at 37 °C for 2 h in the dark. After incubation, 200 µL of the solution was transferred to a 96-well plate, and colorimetric changes were measured using a microplate reader (ChroMate, Awareness Technology, Palm City, FL, USA) at 490 nm.
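The assay yields one absorbance reading per disk; the sketch below shows how such readings are commonly normalized to the control group (the blank-subtraction step and all numeric values are assumptions, not data from this study).

```python
def relative_metabolic_activity(a490_sample: float, a490_control: float,
                                a490_blank: float = 0.0) -> float:
    """Blank-corrected XTT absorbance of a sample relative to the control."""
    return (a490_sample - a490_blank) / (a490_control - a490_blank)

# Hypothetical example: a 20% DMADDM disk vs. the 0% control, with a medium-only blank
print(f"{relative_metabolic_activity(0.32, 1.10, 0.05):.2f}")  # -> 0.26
```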
Biomass Calculation
Specimens with 120-h biofilms were transferred into tubes with 2 mL saline, and the biofilm on each disk was harvested by sonication and vortexing (Fisher, Pittsburgh, PA, USA) and then serially diluted in saline. A 100 µL aliquot of the final diluted cell suspension was spread on YPD agar plates and incubated at 37 °C for 24 h to recover the viable cells in the biofilms [40]. The colony-forming units (CFU) were counted.
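For clarity, here is a short sketch of how CFU per disk is typically back-calculated from a plate count under this protocol; the dilution factor and colony count are hypothetical.

```python
def cfu_per_disk(colonies: int, dilution_factor: float,
                 plated_volume_ml: float = 0.1,
                 harvest_volume_ml: float = 2.0) -> float:
    """Viable cells per disk from a spread-plate count.

    colonies          -- colonies counted on the plate
    dilution_factor   -- total serial dilution applied (e.g., 1e4 for 10^-4)
    plated_volume_ml  -- volume spread on the plate (100 uL per the protocol)
    harvest_volume_ml -- saline volume the biofilm was harvested into (2 mL)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * harvest_volume_ml

# Hypothetical example: 85 colonies on the 10^-4 dilution plate
print(f"{cfu_per_disk(85, 1e4):.2e} CFU/disk")  # -> 1.70e+07
```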
Observation of Biofilm Structure
Disk specimens on which C. albicans had been incubated for 120 h were prepared for examination with a scanning electron microscope (SEM) (Quanta 200, FEI Company, Hillsboro, OR, USA). Each specimen with adherent biofilm was rinsed with PBS and then immersed in 1% glutaraldehyde in PBS at 4 °C for 4 h. The specimens were rinsed with PBS, subjected to graded ethanol dehydration, and rinsed twice with 100% hexamethyldisilazane. The specimens were then sputter-coated with gold and examined via SEM [42].
Statistical Analysis
One-way analysis of variance (ANOVA) was performed to detect significant effects of the variables; when the data showed unequal variances, the Kruskal-Wallis test was used instead. A p-value < 0.05 was considered statistically significant.
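A minimal sketch of this decision rule using SciPy is shown below; the group readings are hypothetical, and Levene's test is assumed as the variance check, since the text does not name one.

```python
from scipy import stats

# Hypothetical flexural-strength readings (MPa) for the four groups
groups = [
    [72.1, 70.5, 73.2, 71.8],  # 0% DMADDM (control)
    [71.0, 69.8, 72.5, 70.9],  # 5% DMADDM
    [70.2, 71.1, 69.5, 70.8],  # 10% DMADDM
    [69.9, 70.4, 68.7, 71.2],  # 20% DMADDM
]

_, p_levene = stats.levene(*groups)      # assumed homogeneity-of-variance check
if p_levene >= 0.05:
    stat, p = stats.f_oneway(*groups)    # one-way ANOVA
    test = "ANOVA"
else:
    stat, p = stats.kruskal(*groups)     # non-parametric fallback
    test = "Kruskal-Wallis"

print(f"{test}: statistic={stat:.3f}, p={p:.3f}, significant={p < 0.05}")
```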
Physical Performance of Double-Decked Acrylic Resin
After bond quality testing, the fractured faces of the samples with different DMADDM concentrations were examined (Figure 3A). The fracture faces of the 0%, 5%, and 10% DMADDM samples occurred in a mixed mode, while that of the 20% DMADDM sample occurred in the base resin layer. None of the fracture faces occurred solely at the adhesive interface between the two-thirds base layer and the one-third DMADDM layer (Figure 3A). After polymerization, the upper third of the samples showed a slight loss of color, with a natural color transition and no obvious dividing line from the lower two-thirds to the upper third in successful double-decked samples. The acrylic resins with various DMADDM mass fractions (5%, 10%, and 20%) had fracture strengths (Figure 3B) and flexural strengths (Figure 4A) similar to those of the control group. Adding DMADDM to the acrylic resin significantly increased the surface charge density (Figure 4B); the charge density of the acrylic resin containing 20% DMADDM was about seven times that of the control group. Meanwhile, the acrylic resins with various DMADDM mass fractions (5%, 10%, and 20%) had surface roughness similar to that of the control group (Figure 5).
The Antifungal Properties of Double-Decked Acrylic Resin
The XTT assay results showed that the DMADDM-modified samples exhibited significantly greater antifungal activity than the control group (Figure 6A). The CFU counts showed that the DMADDM-containing groups significantly inhibited the growth of C. albicans in biofilms in a dose-dependent manner compared to the control group (Figure 6B). According to the SEM observations, the acrylic resin disks containing DMADDM also reduced biofilm formation on the surface at all DMADDM concentrations compared with the control group (Figure 7). Importantly, the mycelium of C. albicans decreased significantly in the DMADDM-containing groups (Figure 7).
Discussion
There is a strong need for new dental materials that can inhibit C. albicans growth and reduce its virulence, given the increasing prevalence of denture stomatitis. In this study, we developed a new method to synthesize a heat-polymerized acrylic resin containing DMADDM in order to confer antifungal ability on the resin. Recently, various coating approaches have been applied to denture base materials to increase surface hydrophilicity and reduce C. albicans adherence [43][44][45]. Nevertheless, the coating layers were not stable [43][44][45] and could easily form rough surfaces that increase fungal adhesion [46]. We therefore developed the new double-decked resin described in this article (Figure 1). This is an innovative approach to synthesizing an antifungal resin: the upper one-third of the resin, containing DMADDM, maintains the antimicrobial effect, while the two-thirds substrate of ordinary resin ensures decent mechanical properties. The bond quality test was used to measure the bond strength at the interface between the two-thirds denture base resin and the one-third DMADDM resin. None of the fracture faces occurred at the adhesive interface (Figure 3A), and the fracture strength was similar to that of the control group (Figure 3B). Both the methyl methacrylate composing the acrylic resin and DMADDM have double bonds in their molecular formulas. Once the upper resin layer contacts the ordinary resin layer in the paste stage, these double bonds open and combine with each other to form an inseparable interface. After polymerization, there was a natural color transition and no obvious dividing line from the lower two-thirds to the upper third in the successful double-decked sample. The bond quality allowed the strength of the materials to be fully utilized. The double-decked acrylic resin was effective in inhibiting C. albicans biofilm while retaining good mechanical properties. Compared with other methods, this way of manufacturing an antifungal acrylic resin is convenient and more effective.
Physical properties are an important aspect of acrylic resin; in this double-decked model, the resins containing DMADDM (5%, 10%, and 20%) showed no adverse effect on flexural strength compared to the control group. The surface roughness of the resin can influence fungal adhesion [47]; we therefore polished the double-decked samples to ensure that the roughness of the resin remained consistent before biofilm formation in the subsequent tests. Further investigations are needed to confirm whether the physical properties of acrylic resin containing DMADDM are affected by the oral microenvironment and by aging, since this in vitro study was conducted over only a short time.
C. albicans can grow in different morphological forms: yeast, pseudohyphal, and hyphal. Hyphal formation decreased significantly on the DMADDM-containing resin. DMADDM may have inhibited C. albicans filamentous growth because (i) DMADDM eliminated the C. albicans cells directly, or (ii) DMADDM participated in the inhibition of C. albicans hyphal development. We will investigate the underlying mechanism in future work.
There have been few reports on the antifungal mechanisms of QAMs compared to their antibacterial mechanisms. Beyth et al. indicated that the positively charged quaternary amine N⁺ of a QAM can attract the negatively charged cell membranes of bacteria, disrupting the cell membrane and causing cytoplasmic leakage [48,49]. Similar to its action on bacteria, a QAM can affect the fungal plasma membrane, causing leakage of mono- and divalent cations as well as ATP, strongly disrupting plasma membrane structure and decreasing the survival of fungal cells [10,50,51]. In this study, the surface charge density of the double-decked resin increased with DMADDM content (Figure 4). Therefore, the heat-polymerized resin with a higher concentration of DMADDM, which significantly increased the positive charge density, had stronger antifungal potency.
A hydrophobic surface can promote the adherence of C. albicans [52,53] and selectively increase the propensity of hyphal forms of C. albicans to colonize denture surfaces [54]. DMADDM has a hydrophilic group that modifies the surface of the acrylic resin after polymerization, reducing biofilm formation (adhesion) and hyphal development.
DMADDM is a kind of quaternary ammonium methacrylate (QAM). Good biosafety is indispensable for the clinical application of any material. Previous studies showed that human gingival fibroblasts and odontoblasts exhibit good biocompatibility with DMADDM [39,46,55]. In an in vivo histological evaluation, Zhang et al. showed that less than 20% DMADDM in a denture material did not increase the inflammatory response, suggesting good biocompatibility and biosafety of the newly synthesized material containing DMADDM as an antimicrobial additive [56]. Moreover, Beyth et al. showed that antimicrobial QAM compounds were stable and did not leach out of the material into saliva. QAMs can cause stress not only to the cells with which they come into contact but also to other cells in the surrounding environment; bacterial lysis by QAM on the resin surface may act as a stressful condition, triggering programmed cell death (PCD) in the surrounding bacteria [57]. Zhou et al. also found that resin containing DMADDM can kill the entire biofilm, not just the bacteria in contact with it, an outcome consistent with the findings of Beyth et al. [58]. Feng et al. added DMADDM to a glass ionomer cement (GIC) and measured the release of DMADDM; no released DMADDM could be detected in saliva [28]. The double-decked acrylic resin is therefore an antifungal material with good biosafety.
Conclusions
The current study developed a new double-decked acrylic resin containing DMADDM and investigated its physical properties for the first time. The double-decked acrylic resin containing DMADDM was effective in inhibiting C. albicans biofilm and showed good physical properties comparable to those of the control group.
Current progress in research focused on salt tolerance in Vitis vinifera L.
Soil salinization represents an increasingly serious threat to agronomic productivity throughout the world, as rising ion concentrations can interfere with the growth and development of plants, ultimately reducing crop yields and quality. A combination of factors is driving this progressive soil salinization, including natural causes, global climate change, and irrigation practices that are increasing the global saline-alkali land footprint. Salt stress damages plants both by imposing osmotic stress that reduces water availability and by inducing direct sodium- and chloride-mediated toxicity that harms plant cells. Vitis vinifera L. exhibits relatively high levels of resistance to soil salinization. However, as with other crops, grapevine growth, development, fruit yields, and fruit quality can all be adversely affected by salt stress. Many salt-tolerant grape germplasm resources have been screened in recent years, leading to the identification of many genes associated with salt stress and the characterization of the mechanistic basis for grapevine salt tolerance. These results have also been leveraged to improve grape yields through the growth of more tolerant cultivars and other appropriate cultivation measures. The present review was formulated to provide an overview of recent achievements in the field of research focused on grapevine salt tolerance from the perspectives of germplasm resource identification, the mining of functional genes, the cultivation of salt-tolerant grape varieties, and the selection of appropriate cultivation measures. Together, we hope that this systematic review will offer insight into promising approaches to enhancing grape salt tolerance in the future.
Introduction
Around 23% of cultivated arable land is saline, spanning over 100 countries on all continents, and 20% (45 million ha) of the world's irrigated land consists of human-induced salt-affected soils (secondary salinization) (Zaman et al., 2018). For instance, in China, 2.6 × 10⁷ hectares (ha) of the total land area are salt-affected, mainly in the northern and tidal coastal regions, and 6.7 million ha of irrigated land are affected by secondary salinization. Enhancing the productivity of salt-affected lands in irrigated areas is considered an important way to provide more food, fruit, feed, and fiber to the expanding population worldwide (Qadir et al., 2014).
Vitis L. is among the most highly valued genera in the Vitaceae family, as the fruits it produces are rich in polyphenols and resveratrol, which reportedly exhibit anti-aging activity. In addition to being consumed fresh, these grapes are also processed into raisins, juice, sauces, vinegar, and wine, all of which are important to the global fruit trade. Grapevines are plants with a relatively high level of salt tolerance. Despite this advantageous trait, progressive soil salinization can still adversely impact their growth, fruit yield, and fruit quality (i.e., flavor, sugar content, etc.), further negatively affecting wine quality. The most effective approaches to improving grape production in lightly salinized soil and enhancing the growth and fruit quality of grapevines cultivated under soil salinization have thus been a focus of intensive research interest in recent years. This review was compiled in an effort to gain insight into the most effective means of improving the salt tolerance of grape plants, offering a theoretical reference for efforts to better explore the mechanistic basis of such salt tolerance, cultivate salt-tolerant grape varieties, and improve overall grape growth in saline soil (Figure 1).
Studies of salt-tolerant grape germplasm resources
Germplasm resources serve as the basic materials for the breeding of novel cultivars while also enabling biotechnological research and germplasm innovation. Certain varieties of wild grapes exhibit high levels of adaptability and resistance to a variety of stressors, and thus represent valuable resources that can be used to broaden the gene pool as a means of expanding the genetic repertoire of cultivated grape varieties (Aradhya et al., 2003; Wan et al., 2008). Prior studies have established grapes as being moderately sensitive to salt stress (Maas and Hoffman, 1977). Salt-treated leaves generally exhibit higher chloride levels than sodium levels during various stages of growth, with this difference being elevated by an order of magnitude in leaves exhibiting stress symptoms. The ability of grape plants to minimize chloride accumulation has thus been used as a criterion when attempting to screen for salt-tolerant germplasm resources (Fort et al., 2013). The chloride exclusion abilities of V. acerifolia, V. arizonica, V. berlandieri, V. doaniana, and V. girdiana accessions gathered from the southwestern United States have been categorized and compared to the benchmark chloride excluder V. rupestris 'St. George' (Heinitz et al., 2014). Longii 9018/9035, NM:03-17, and GRN1, which showed lower chloride accumulation in leaves, were identified as chloride excluders from among 16 grape rootstocks treated with 75 mM NaCl for three weeks, highlighting a clear relationship between the degree of fine root production and chloride accumulation (Bent, 2017). Approximately 60 wild Vitis species have been identified in four main eco-geographic regions of China, but their relative levels of salt tolerance remain to be assessed (Wan et al., 2008). Currently, the Ramsey and Dogridge rootstocks of the V. champinii species, as well as the Fall grape, 140 Ruggeri, and 1103 Paulsen progenies of the V. berlandieri × V. rupestris cross, have been established as salt-tolerant germplasms, such that they are commonly used as rootstocks when seeking to breed novel salt-tolerant cultivars (Zhou-Tsang et al., 2021).
Given the long history of grape cultivation throughout the globe, together with the diverse range of complex hybrids and rootstocks generated through vegetative propagation, anywhere from 6,000 to 10,000 grape cultivars are thought to exist, providing ample opportunities for germplasm resource collection (Laucou et al., 2011). While researchers have tirelessly worked to study the relative ability of different grape varieties to tolerate saline conditions, these efforts have not been comprehensive or systematic. As no unified standards for identifying salt-tolerant cultivars have been established and cultivation conditions may vary across studies, comparing the results of these different analyses is often not possible. There is thus a clear need to establish a precise high-throughput platform for analyzing grapevine germplasm salt tolerance, as such a system would enable more effective identification of the raw materials needed to breed cultivars and rootstock varieties better equipped to tolerate soil salinization.
Identification of salt tolerance-related grape genes
Exposure to salt stress subjects plants to simultaneous ionic and osmotic stress, and the relevant grapevine research conducted to date has largely focused on osmotic responses, ion accumulation, and the physiological characteristics of tolerant tissues. Under high levels of salt stress, osmotic changes occur rapidly, with a half-time to the inhibition of Arabidopsis seedling root conductivity of just 45 min in response to 100 mM NaCl (Boursiac et al., 2005). Ion accumulation and osmoprotectant production can alter the ability of plant cells to balance water retention, and the aquaporin family of water channel proteins has been identified as a particularly important regulator of plant salt stress responses (Li et al., 2014a). Furthermore, plants induce salt stress signaling pathways to decrease the adverse effects of excess Na⁺ and other ions. For instance, Ca²⁺, phytohormones (abscisic acid, ethylene, salicylic acid, etc.), reactive oxygen species, and related cascade signal transduction reactions are activated to adapt to the saline environment (Zhao et al., 2021).
Grapevines exhibiting high levels of soluble sugar and proline accumulation reportedly exhibit less severe growth inhibition and higher leaf chlorophyll and carotenoid concentrations (Fozouni et al., 2012). A range of other genes have also been shown to be induced in cultivars exposed to salt stress, including glycine betaine-associated genes and genes encoding proteins that are abundant during the late phases of embryogenesis, such as VvDNHN1 and VvLEA-D29L (İbrahime et al., 2019; Haider et al., 2019). Multiple aquaporins have also been reported to play a role in osmotic regulation and salt tolerance in grapevines at the cellular and whole-plant levels (Galmés et al., 2007; Vandeleur et al., 2009). However, the fine regulatory mechanisms of the aquaporins are still unclear, and the expression patterns of some genes are ecotype-dependent. For instance, VvPIP2;2 expression has been shown to be induced in salt-sensitive Shirazi plants, yet its expression is inhibited in salt-tolerant Gharashani plants (Mohammadkhani et al., 2012).
While some grape varieties are better able than others to exclude salt ions, the ion concentrations in these plants will inevitably be higher when cultivated in saline soil as compared to non-saline soil.
Accordingly, these plants must engage a series of processes at the molecular level to adapt to or mitigate this ionic stress. Protein kinases and transcription factors have been shown to be particularly important proteins that can integrate inputs from multiple ion homeostasis- and stress signaling-related pathways in Vitis. VaCPK21 was reportedly significantly up-regulated in wild grape (Vitis amurensis Rupr.) plants exposed to salt stress, with improved salt tolerance following the overexpression of this gene in grape callus samples (Dubrovina et al., 2016). Ji et al. found that VvMAPK9 can serve as a positive regulator of Arabidopsis and grape callus adaptability through its ability to regulate antioxidant system activity. The highest levels of VvMAPK9 expression were observed in root and leaf tissues, with pronounced induction in grapes in response to abscisic acid (ABA) or abiotic stressors such as high temperatures, salt, or drought conditions. Arabidopsis seedlings overexpressing VvMAPK9 exhibited enhanced salt tolerance, and the germination rates of the transgenic lines were higher, with these plants exhibiting superior growth and longer roots under salt stress conditions as compared to wild-type plants. The expression of antioxidant enzymes (SOD and POD) and ion transporter-related proteins (NHXP, HKT1, HKT2) was also significantly elevated in these VvMAPK9-overexpressing grape callus samples under salt stress conditions (Ji et al., 2022).
Transcription factors are expressed in all eukaryotic species and are essential regulators of signaling activity in plants exposed to abiotic stress conditions, promoting the upregulation of stress resistance-related genes, such that they are an important focus of research exploring stress tolerance in plants. A transcriptomic analysis of grapes exposed to salt stress identified 52 transcription factors, including WRKYs, EREBs, MYBs, NACs, and bHLHs, among the 343 differentially expressed genes (Upadhyay et al., 2018). The C-repeat (CRT)/dehydration-responsive element (DRE) protein family comprises key regulators of plant abiotic stress tolerance, and the CRT/DRE binding factor VaCBF4 can be induced in V. amurensis in response to cold, drought, ABA, saline conditions, and other abiotic stressors, improving the ability of Arabidopsis seedlings to tolerate cold, drought, and salt stress when overexpressed (Li et al., 2013). VvWRKY30 is a transcription factor that is primarily expressed in leaves and shoot tips and that can be induced in response to salt stress, H₂S, and H₂O₂, enhancing the ability of plant seedlings to tolerate saline conditions through the enhanced elimination of reactive oxygen species and osmolyte accumulation. When VvWRKY30 is overexpressed, seedlings reportedly exhibit improved antioxidant activity and corresponding reductions in reactive oxygen species, together with increases in soluble sugar and proline content and the concomitant upregulation of genes associated with antioxidant biosynthesis, sugar metabolism, and proline biosynthesis under salt stress conditions (Zhu et al., 2019). The VviERF073 transcription factor is a member of the APETALA2/ethylene response factor (AP2/ERF) family and was first identified as a salt stress-inducible gene in a salt stress EST library, although subsequent reports of its functional role in grape plants exposed to salt stress have been lacking (Shinde et al., 2017). The helix-loop-helix transcription factor VvbHLH1 can significantly enhance flavonoid accumulation within Arabidopsis seedlings when overexpressed in a codon-optimized isoform, with further research suggesting that it can also shape drought and salt tolerance in Arabidopsis plants through the augmentation of ABA signal transduction and flavonoid accumulation (Wang et al., 2016). VpSBP16 encodes a SQUAMOSA promoter binding protein (SBP) box transcription factor cloned from the Chinese wild grape 'Baihe 35-1' variety, which was found to regulate SOS and ROS signaling cascades to improve salt and drought stress tolerance during seed germination. Consistently, transgenic Arabidopsis seedlings in which VpSBP16 was overexpressed exhibited increased root length and seed germination rates as compared to wild-type plants exposed to osmotic stress (Hou et al., 2018).
Under salt stress conditions, Na⁺/H⁺ antiporter (NHX) proteins can facilitate the ATP-dependent transport and sequestration of Na⁺, thereby effectively mitigating ionic stress in plants. The AtNHX1 gene reportedly conferred 'Thompson Seedless' grape plants with a level of salt tolerance similar to that observed for other salt-tolerant cultivars ('Pedro Gimenez' and 'Criolla Chica'). 'Thompson Seedless' seedlings expressing the Arabidopsis-derived AtNHX1 gene exhibited better growth than wild-type seedlings when treated for 7 weeks with 150 mM NaCl, including significant improvements in stem length, leaf area, and dry weight (Venier et al., 2018). When overexpressed in potato seedlings, grape-derived VvNHX1 was similarly able to improve salt tolerance. These transgenic seedlings also reportedly exhibited higher levels of soluble sugar, Mg²⁺, and K⁺, together with enhanced antioxidant enzyme activity and reductions in Na⁺ accumulation and oxidative stress (Li et al., 2014b; Charfeddine et al., 2019).
Cultivation practices that can improve the salt tolerance of grape plants
The application of optimized cultivation and management practices has the potential to improve the ability of grape plants to tolerate salt stress, and appropriately managing soil and water can help decrease the adverse effects of saline conditions on grape plants. Low ABA concentrations, for example, can render cells better able to adapt to saline conditions while reducing transpiration and the passive absorption of salt ions, supporting a link between ABA accumulation and the expression of functional genes including VvNHX1 and VvOSM1 (Saleh et al., 2020). Vineyard management technologies can help alleviate salinization by prolonging irrigation time to leach the Na⁺ and Cl⁻ accumulated in the root zone, or by using artificial ditches to channel rainfall inflow and leach salt from the soil to below a depth of 1.5 m. The partial root drying (PRD) technique was designed to optimize the efficiency of water use in viticulture by reducing irrigation while improving fruit quality (Dry et al., 2000). Degaris et al. explored the effects of moderately saline water on Shiraz and Grenache vines grown in pots using the PRD irrigation technique, which reduces the amount of water used by 50% relative to control conditions (Degaris et al., 2016). Their results suggested that PRD-irrigated vines exhibited higher levels of Cl⁻, Na⁺, K⁺, and Ca²⁺ ions, but that Cl⁻ can be partitioned away from leaves on a total content basis relative to controls. These results demonstrate that combining PRD irrigation techniques with saline water can alter ion levels and allocation within grapevines, underscoring the need to monitor field water during the growing season to promote long-term vine health and improved fruit composition.
ABA can reportedly induce phytochemical and morphological changes that enhance the ability of grape plants to tolerate salt stress. Grape rootstocks with superior tolerance characteristics have been found to accumulate higher levels of ABA when exposed to salt stress (Upreti and Murti, 2010). Relative to untreated seedlings, seedlings subjected to exogenous ABA treatment exhibited increases in plant height, leaf area, leaf number, and shoot dry matter, together with increases in leaf flavonoid, proline, soluble sugar, and phenol levels and enhanced activity of antioxidant enzymes including catalase, guaiacol peroxidase, and ascorbic acid peroxidase (Stevens et al., 2011). Acetic acid and oxalic acid irrigation can also promote significant increases in the root activity and leaf chlorophyll content of treated 'Fuke' cuttings exposed to salt stress while reducing root malondialdehyde levels and leaf H₂O₂ concentrations (Guo et al., 2018).
Discussion
Soil salinization represents an increasingly severe threat to agronomic productivity throughout the globe, endangering the reliable production of the foods necessary for human survival. Most popular grape varieties cultivated at present are of European provenance, but many of these exhibit relatively poor tolerance for soil salinity. Further efforts to leverage stress-resistant wild grape germplasm resources thus have the potential to provide the key genes and rootstocks needed to breed salt-tolerant grape varieties. V. riparia, V. champinii, V. berlandieri, and V. shuttleworthii are salt-resistant North American grape varieties that represent promising rootstock materials for the further breeding of salt-tolerant varieties. Further efforts to understand the physiological and ecological characteristics and mechanisms governing rootstock salt tolerance will offer a scientific basis for the more effective breeding and cultivation of grape varieties that can tolerate rising levels of soil salinity.
Recent studies of grape salt tolerance have largely centered around efforts to evaluate germplasm resources, characterize physiological salt tolerance mechanisms, and related topics, providing a foundation for the breeding of salt-tolerant grapes. However, insufficient work focused on the isolation, cloning, and regulated expression of salt tolerance-related grape genes has been performed to date. There is thus an urgent need to develop biotechnology-based approaches to enhancing the adaptability of grape plants to salinized soil. The ongoing development of proteomics and functional genomics platforms, together with the application of novel technologies such as expressed sequence tags, cDNA microarrays, transposon tags, and T-DNA tags, provides new opportunities for the more straightforward isolation and characterization of salt tolerance-associated genes in the future.
Based on the findings reported in this review, we believe that research evaluating salt-tolerant grape germplasm resources is still in its early stages, with a clear lack of systematic research efforts. It is therefore crucial that a salt-tolerant germplasm resource database be established to provide robust data that can support breeding efforts. Wild grape resources are widely distributed and represent a rich source of novel genetic elements that warrant a higher degree of attention. These ongoing efforts to identify key salt tolerance-related genes and to more fully outline the functional relationship between salt stress and signaling activity in wild grapes may provide a foundation for the targeted breeding of salt-tolerant grapes through the appropriate application of genomics and proteomics techniques.
To date, studies focused on identifying salt-tolerant grape germplasm resources have largely been restricted to laboratory settings, since NaCl irrigation is generally used to simulate soil salinization as a means of testing seedling or seed responses to a range of salt concentrations. These artificial conditions, however, differ from true soil salinity. Tests of salt tolerance-related gene functions have also primarily been conducted in Arabidopsis model plants, limiting efforts to comprehensively survey and validate salt tolerance in grape germplasm resources. As such, while these prior findings provide a valuable foundation for further research efforts, they must be interpreted with caution owing to these limitations, underscoring the need for additional systematic physiological research efforts to better expand the current understanding of salt tolerance in grape cultivars.
Figure 1 (A) Photograph of a grape germplasm after being watered with 100 mM NaCl for 4 weeks in pots. (B) The effects of salt stress on grape. (C) The screened salt-tolerant germplasms. (D) Identified salt tolerance-related genes.
Exploring the bidirectional causal link between household income status and genetic susceptibility to neurological diseases: findings from a Mendelian randomization study
Objectives Observational studies have revealed that socioeconomic status is associated with neurological disorders and aging. However, the potential causal effect between the two remains unclear. We therefore aimed to investigate the causal relationship between household income status and genetic susceptibility to neurological diseases using a bidirectional Mendelian randomization (MR) study. Methods An MR study was conducted on a large-sample cohort of the European population pulled from a publicly available genome-wide association study dataset, using a random-effects inverse-variance weighting model as the main standard. MR-Egger regression, weighted median, and maximum likelihood estimation were also performed concurrently as supplements. A sensitivity analysis, consisting of a heterogeneity test and horizontal pleiotropy test, was performed using Cochran’s Q, MR-Egger intercept, and MR-PRESSO tests to ensure the reliability of the conclusion. Results The results suggested that higher household income tended to lower the risk of genetic susceptibility to Alzheimer’s disease (odds ratio [OR]: 0.740, 95% confidence interval [CI] = 0.559–0.980, p-value = 0.036) and ischemic stroke (OR: 0.801, 95% CI = 0.662–0.968, p-value = 0.022). By contrast, higher household income tended to increase the risk of genetic susceptibility to Parkinson’s disease (OR: 2.605, 95% CI = 1.413–4.802, p-value = 0.002). No associations were evident for intracranial hemorrhage (OR: 1.002, 95% CI = 0.607–1.653, p-value = 0.993), cerebral aneurysm (OR: 0.597, 95% CI = 0.243–1.465, p-value = 0.260), subarachnoid hemorrhage (OR: 1.474, 95% CI = 0.699–3.110, p-value = 0.308), or epilepsy (OR: 1.029, 95% CI = 0.662–1.600, p-value = 0.899). The reverse MR study suggested no reverse causal relationship between neurological disorders and household income status. A sensitivity analysis verified the reliability of the results. Conclusion Our results revealed that the populations with a superior household income exhibit an increased predisposition of genetic susceptibility to Parkinson’s Disease, while demonstrating a potential decreased genetic susceptibility to ischemic stroke and Alzheimer’s disease.
Introduction
Neurological diseases can give rise to diverse physical, cognitive, and emotional impairments that can have a significant and detrimental impact on an individual's quality of life. The World Health Organization has reported that neurological conditions account for approximately 6.3% of the global disease burden, making them a primary cause of disability and mortality worldwide (1). As the global population continues to age, there has been a rise in the prevalence of neurological diseases, particularly those associated with aging, such as Alzheimer's and Parkinson's disease (2)(3)(4). Over the past few decades, the morbidity rates of certain neurological disorders such as stroke and Alzheimer's disease have decreased substantially in high-income populations, owing to progress in public health education and awareness of the risk factors for these ailments, as well as advancements in medical treatments and interventions (5,6). By contrast, individuals from low-income populations are at higher risk of developing some types of neurological diseases, such as stroke, due to a greater burden of risk factors and a lack of access to preventative and specialized stroke care. This can result in a higher risk of disability and a poorer prognosis (7)(8)(9). Understanding the link between disease risk and socioeconomic status (SES) holds significance for generating novel hypotheses regarding the influence of environmental and social factors on disease etiology, as well as for devising equitable social healthcare policies (10,11). There is limited evidence regarding the causal connection between household income status and neurological diseases, mainly due to the absence of large-sample cohort studies on the subject. Previous observational studies have reported a relationship between household income and neurological disorders; however, observational studies have limitations, such as a lack of randomization, potential confounding, and difficulty controlling variables, and they cannot establish causality because unaccounted factors can bias their results (12,13). Further research is therefore needed to fully understand the nature of this relationship.
Mendelian randomization (MR) is a statistical technique used in epidemiology and genetics to determine the causal relationship between a risk factor and an outcome (14,15). MR is based on the principles of Mendel's laws of inheritance, which describe how genetic variants are randomly allocated during meiosis (16). This method uses instrumental variables, specifically genetic variations such as single nucleotide polymorphisms (SNPs) linked to a risk factor of concern (e.g., blood pressure or cholesterol levels), to explore whether the chosen risk factor has a causal impact on the outcome of interest (e.g., heart disease or stroke) (17). In the absence of randomized controlled trials (RCTs), MR studies represent an alternative strategy for causal inference, because genetic variants are randomly assigned during meiosis and therefore add an additional layer of evidence compared to observational studies. As a result, MR has advantages over traditional observational studies: it reduces the risk of confounding and reverse causality, making it a superior tool for exploring causality in epidemiological studies (18). Multiple MR studies have effectively employed causal relationship analyses to investigate the links between behavioral exposure, education, socioeconomic conditions, and several diseases (19)(20)(21)(22).
This study aimed to utilize an MR approach to establish a bidirectional causal association between genetic susceptibility to common neurological diseases and household income status.
Study design and genome-wide association study (GWAS) dataset information
To achieve impartial results, an MR study depends on three fundamental assumptions: (1) the selected genetic instrumental variables (IVs) must be significantly associated with the exposure factor; (2) the IVs should be independent of potential confounders associated with the exposure factors and outcomes; and (3) the IVs should affect the outcomes only through the exposure factor (23). This study comprised 14 separate MR analyses designed to explore the bidirectional association between annual household income status and seven neurological diseases.
The study was conducted on data from a large-sample cohort of the European population, pulled from a publicly available GWAS dataset. The genetic information on the variables involved in this study was extracted from the Integrative Epidemiology Unit GWAS database 1 (24), which is a publicly available GWAS summary database; the requirement for ethical committee approval was therefore waived. The GWAS summary dataset "average total household income before tax" represented the household income status of 397,751 samples originally from the UK Biobank database. Annual household income was divided into five intervals: less than 18,000 pounds, 18,000-30,999 pounds, 31,000-51,999 pounds, 52,000-100,000 pounds, and greater than 100,000 pounds. The neurological diseases were represented by Alzheimer's disease, Parkinson's disease, ischemic stroke, intracerebral hemorrhage, cerebral aneurysm, subarachnoid hemorrhage, and epilepsy. Detailed information on all the GWAS datasets is listed in Table 1. We followed sample-size and timeliness priority to make the best choices whenever possible. The GWAS datasets of household income and neurological diseases were selected from different consortia to decrease the potential bias caused by sample overlap.
Selection criteria for IVs
The IVs were SNPs, which were filtered according to the three aforementioned pivotal assumptions of MR studies. First, the SNPs had to meet a genome-wide statistical significance threshold (p-value < 5 × 10⁻⁸). Second, linkage disequilibrium (LD) among the SNPs was assessed, and SNPs were clumped within a 10,000 kb window at a threshold of r² < 0.001 to confirm their independence. Third, to evaluate the assumption that the IVs affected the outcomes only through the exposure factor, potential phenotypes that might be relevant to the IVs were investigated by searching the human genotype-phenotype association database (PhenoScanner V2) (30). Fourth, the SNPs identified as IVs were further matched to those in the outcome GWAS dataset to establish genetic associations. The summary SNP-phenotype and SNP-outcome statistics were harmonized to ensure effect size alignment, and any palindromic SNPs were excluded. Finally, F-statistics (>10) were used to evaluate the strength of the IVs in order to avoid the influence of weak-instrument bias (31).
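As an illustration only, the sketch below applies the significance and instrument-strength filters to a summary-statistics table; the column layout and the single-SNP approximation F ≈ (beta/se)² are assumptions, and LD clumping is omitted because it requires an external reference panel.

```python
import pandas as pd

def select_instruments(gwas: pd.DataFrame,
                       p_threshold: float = 5e-8,
                       f_threshold: float = 10.0) -> pd.DataFrame:
    """Filter exposure GWAS summary statistics into candidate IVs.

    Expects columns 'snp', 'beta', 'se', and 'pval' (an assumed layout).
    LD clumping (r^2 < 0.001 within a 10,000 kb window) would follow,
    but it needs a reference panel and is not reproduced here.
    """
    hits = gwas[gwas["pval"] < p_threshold].copy()
    # Common single-SNP approximation of instrument strength
    hits["F_stat"] = (hits["beta"] / hits["se"]) ** 2
    return hits[hits["F_stat"] > f_threshold].reset_index(drop=True)

# Toy summary statistics
toy = pd.DataFrame({
    "snp": ["rs1", "rs2", "rs3"],
    "beta": [0.05, 0.02, 0.08],
    "se": [0.008, 0.009, 0.010],
    "pval": [4e-10, 3e-2, 1e-14],
})
print(select_instruments(toy)[["snp", "F_stat"]])  # keeps rs1 and rs3
```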
MR study and sensitivity analysis
The MR study was performed using a random-effects inverse-variance weighting (IVW) model (32) as the primary standard, with three other models [MR-Egger regression (33), weighted median (34), and maximum likelihood (35)] as supplements, to evaluate the potential causal relationships between household income status and the seven chosen neurological diseases. The IVW method uses a meta-analysis approach to combine the Wald estimates for each SNP into an overall estimate of the exposure's effect on the outcome. In MR-Egger, the IVW estimates are recalculated without the constraint on the intercept. The weighted median provides an alternative estimate that remains valid when at least 50% of the instruments are valid. The maximum likelihood model is similar to IVW in assuming the absence of heterogeneity and horizontal pleiotropy; when these assumptions hold, the results are unbiased, with smaller standard errors than IVW. The reverse MR study evaluated the potential causal relationship between the seven neurological diseases and household income status using the same methods. In addition, a sensitivity analysis was performed to measure the reliability and stability of the conclusions. The sensitivity analysis consisted of (1) a Cochran's Q test (according to the IVW or MR-Egger regression model); (2) a horizontal pleiotropy test using the MR-Egger intercept (36) and the MR-PRESSO test (37); and (3) a "leave-one-out" test (each SNP was dropped in turn and the IVW analysis repeated to identify whether any specific SNP drove the causal estimate). The results are reported as odds ratios (ORs) with corresponding 95% confidence intervals (CIs) and p-values, as well as scatterplots. The evidential threshold for the MR analysis was defined as p-value < 0.004 (0.05/14), according to the Bonferroni correction method. A p-value < 0.05 that was above the Bonferroni-corrected threshold was regarded as a potential association. A p-value < 0.05 was also considered significant in the sensitivity analysis. R v4.0.3 software, equipped with the "TwoSampleMR" (38) and "MR-PRESSO" (37) packages, was used to process and visualize the study.
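The analysis itself was run with the R "TwoSampleMR" package; purely to illustrate what the primary IVW estimator computes, a minimal self-contained sketch follows. The numbers are toy values, not study data, and the fixed-effect standard error shown is the simplest variant, whereas the study used a random-effects version that inflates it under heterogeneity.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance weighted MR estimate from harmonized summary stats.

    Each SNP's Wald ratio beta_out/beta_exp is weighted by the inverse
    variance of that ratio (first-order approximation se_out/|beta_exp|).
    Returns (log-odds estimate, standard error, odds ratio).
    """
    beta_exp = np.asarray(beta_exp, dtype=float)
    beta_out = np.asarray(beta_out, dtype=float)
    wald = beta_out / beta_exp
    se_wald = np.asarray(se_out, dtype=float) / np.abs(beta_exp)
    w = 1.0 / se_wald ** 2
    est = np.sum(w * wald) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # fixed-effect SE; random effects scales this up
    return est, se, np.exp(est)

# Toy example with three SNPs
est, se, odds_ratio = ivw_estimate(
    beta_exp=[0.05, 0.08, 0.04],
    beta_out=[-0.015, -0.022, -0.010],
    se_out=[0.006, 0.007, 0.005],
)
print(f"beta={est:.3f}, se={se:.3f}, OR={odds_ratio:.3f}")
```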
MR study
The sample overlap between the seven GWAS datasets and the UK Biobank database was as follows: Alzheimer's disease, 0%; Parkinson's disease, not available; ischemic stroke, 0%; intracranial hemorrhage, 0%; cerebral aneurysm, 0%; subarachnoid hemorrhage, 0%; and epilepsy, 0%. The sample overlap rates between household income and the neurological diseases were therefore extremely low.
The numbers of SNPs ultimately identified as IVs in the different outcome datasets were 42 (Alzheimer's disease, intracranial hemorrhage, cerebral aneurysm, subarachnoid hemorrhage, and epilepsy) and 44 (Parkinson's disease, ischemic stroke), respectively. The F-statistic scores of all the selected SNPs were over 10 (Alzheimer's disease: 57, ...). According to the results of the random-effects IVW model, as the primary standard, higher household income tended to lower the risk of genetic susceptibility to Alzheimer's disease (OR: 0.740, 95% CI = 0.559-0.980, p-value = 0.036) and ischemic stroke (OR: 0.801, 95% CI = 0.662-0.968, p-value = 0.022). By contrast, higher household income tended to increase the risk of genetic susceptibility to Parkinson's disease (OR: 2.605, 95% CI = 1.413-4.802, p-value = 0.002). However, no evidence was found of a potential causal relationship between household income status and intracranial hemorrhage (OR: 1.002, 95% CI = 0.607-1.653, p-value = 0.993), cerebral aneurysm (OR: 0.597, 95% CI = 0.243-1.465, p-value = 0.260), subarachnoid hemorrhage (OR: 1.474, 95% CI = 0.699-3.110, p-value = 0.308), or epilepsy (OR: 1.029, 95% CI = 0.662-1.600, p-value = 0.899). The weighted median and maximum likelihood estimation results supported these conclusions, whereas the MR-Egger regression results did not show significant differences. In summary, according to the Bonferroni correction standard, this MR study revealed that the population with a higher household income tended to have a greater risk of genetic susceptibility to Parkinson's disease; the results also suggested potentially negative associations of household income with Alzheimer's disease and ischemic stroke. Detailed information is displayed in the forest plot in Figure 1 and illustrated as scatterplots in Supplementary Figure 1.
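As a quick arithmetic check on how such estimates are reported, the sketch below converts a log-odds estimate into an OR with a 95% CI and reproduces the Parkinson's disease figures above; the standard error is back-solved from the reported CI rather than taken from the paper.

```python
import math

def or_with_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% CI from a log-odds estimate and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(2.605) ~ 0.957; se back-solved from the CI: ln(4.802/1.413)/(2*1.96) ~ 0.312
or_, lo, hi = or_with_ci(math.log(2.605), 0.312)
print(f"OR {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")  # ~ 2.605 (1.413-4.802)
```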
Sensitivity analyses
The results of our Cochran's Q test indicated a certain degree of heterogeneity among the IVs for Parkinson's disease and epilepsy (Table 2). The random-effects IVW model was therefore used to minimize the effect of heterogeneity in the MR study. No horizontal pleiotropy was detected using the MR-Egger intercept and MR-PRESSO tests (Table 2). In addition, the "leave-one-out" method indicated that no specific SNP among the IVs significantly affected the overall result (Supplementary Figure 2). In general, the sensitivity analysis verified the robustness of the conclusions.
Reverse MR study and sensitivity analyses
The numbers of SNPs that were ultimately identified as the IVs for different neurological diseases in the reverse MR study were 18 (Alzheimer's disease), 22 (Parkinson's disease), 7 (ischemic stroke), 1 (intracranial hemorrhage) and 0 (cerebral aneurysm/subarachnoid hemorrhage/epilepsy).
Based on the random-effects IVW model results, the reverse MR study suggested no reverse causal relationships between the neurological diseases and household income status. More detailed information on this analysis is displayed in Table 3.

Figure 1 The results of the MR study, illustrated as a forest plot. The causal relationship between household income status and neurological diseases was evaluated using an MR study. OR, odds ratio; CI, confidence interval; MR, Mendelian randomization; SNP, single nucleotide polymorphism.
Discussion
Epidemiological research has extensively investigated the impact of SES on neurological diseases (39). Household income, as a crucial component of SES, has consistently been associated with the probability of developing neurological diseases (40,41). However, a comprehensive investigation of the causal relationship between household income and neurological diseases is still necessary. This study aimed to address this research gap by conducting a bidirectional two-sample MR analysis to examine the causal relationship between household income and neurological diseases. To the best of our knowledge, this is the first study to investigate this question from the perspective of genetic risk. According to our results, individuals from households with higher income tended to have reduced genetic risks of ischemic stroke and Alzheimer's disease. In contrast, household income exhibited a potentially positive correlation with Parkinson's disease. We found no significant association between household income and the risk of developing epilepsy, intracranial hemorrhage, cerebral aneurysm, or subarachnoid hemorrhage.
Similarly, compelling evidence has previously suggested a correlation between low household income and the incidence of ischemic stroke, with the incidence of stroke decreasing as household income rises (5,42). The results of that study indicated that some of the known classical risk factors for stroke were overrepresented in groups with low SES (43). The heightened risk of stroke in low-income groups may be partly attributed to lifestyle factors, specifically smoking, high alcohol consumption, and obesity (40,44). The prevalence of diabetes is also considerably higher in this group, which may contribute to the increased risk (45). After factoring in these conventional risk factors, the association between low SES and elevated stroke risk was attenuated, yet the incidence of stroke in this group remained higher (46). Evidence has also suggested that individuals with lower incomes have more limited access to healthcare and preventative resources compared to those with higher incomes (47). This lack of access, coupled with neglect of essential health maintenance behaviors such as annual medical checkups and adherence to secondary prevention medications, may further exacerbate the risk of stroke. Lower household income is potentially associated with an increased risk of ischemic stroke, which can be attributed to a range of underlying molecular biological mechanisms. Chronic inflammation resulting from higher levels of chronic stress and limited healthcare access promotes atherosclerosis, leading to plaque formation and arterial narrowing (48)(49)(50)(51). Limited management of cardiovascular diseases such as hypertension, due to inadequate resources, contributes to vascular dysfunction through impaired regulation of blood vessel tone, endothelial dysfunction, and increased oxidative stress (52). Additionally, epigenetic influences shaped by socioeconomic factors may affect the expression and function of genes related to inflammation, vascular function, and coagulation (53).
According to our findings, which are similar to those of several previous studies, low-income status was associated with an increased risk of Alzheimer's disease (54, 55). However, previous research has been limited by selection bias and the heterogeneity of comparison groups. Compared to previous MR studies on the relationship between income and the incidence of Alzheimer's disease, our study utilized a bidirectional MR method to analyze the relationship between household income status and genetic susceptibility to Alzheimer's disease, an improvement over previous unidirectional MR studies that only examined the impact of household income on Alzheimer's disease (55). Additionally, the exposure dataset (household income) and the seven disease datasets used in this study were obtained from different databases with a low sample overlap rate, increasing the reliability of our conclusions. Low-income individuals may have a higher risk of developing Alzheimer's disease due to various factors, such as limited access to healthcare (resulting in untreated chronic conditions), lower levels of education (leading to less cognitive reserve), unhealthy lifestyle behaviors, and higher levels of chronic stress, all of which can cause inflammation and damage to brain cells (54, 56, 57). In a previous study that explored the connection between genetic factors and household income, researchers identified four genome-wide significant SNPs that exhibited a significant association with income levels (58). These SNPs pointed to two distinct genomic regions containing genes previously implicated in intellectual disabilities, synaptic plasticity, and schizophrenia, indicating potential shared genetic mechanisms underlying income disparities and Alzheimer's disease. Taken together, these findings suggest that individuals with lower household incomes may be more susceptible to Alzheimer's disease.
The association between SES and Parkinson's disease has not been studied extensively on a global level, and the existing findings on the subject have been inconclusive. In a Canadian population-based study that used census data, SES categories were determined by the average household income, and the results indicated an inverse relationship between SES and the incidence of Parkinson's disease (59). Specifically, the incidence and prevalence of Parkinson's disease were significantly higher in the lower quintile of urban areas (59). Another population-based study in Sweden explored the relationship between SES and Parkinson's disease (60). This study found that lower SES was associated with a lower incidence of Parkinson's disease, which is consistent with the findings of our study. Individuals in low-income households may have a reduced risk of Parkinson's disease due to household income-related factors such as smoking and physical activity, which are strongly associated with a lower risk (60). Higher levels of physical activity and smoking are more common in low-income groups, especially those with manual labor occupations (61). The biological functions of SNPs as IVs and their impact on Parkinson's disease warrant further investigation. Hill et al. identified 30 independent loci associated with individual income, which may be implicated in the biological processes underlying gamma-aminobutyric acid (GABA)ergic and serotonergic neurotransmission. GABA and serotonin are neurotransmitters that play critical roles in regulating brain function and behavior (62). While Parkinson's disease is primarily characterized by the degeneration of dopamine-producing cells in the substantia nigra, whether these GABAergic and serotonergic loci bear on its pathogenesis remains to be clarified.

The bidirectional MR study design carries a significant advantage in terms of effectively avoiding the influence of reverse causation and reducing residual confounding. However, certain limitations of this study need to be acknowledged as well. First, our heterogeneity test results revealed some heterogeneity among the IVs for Parkinson's disease and epilepsy. Although the random-effects IVW model was used to minimize the effect of heterogeneity in the MR study as much as possible, this heterogeneity should not be overlooked. Second, various MR study assumptions have distinct advantages and disadvantages, which may lead to inconsistent or contradictory results. Therefore, the results of our study need to be interpreted with some caution. Third, the GWAS datasets we used primarily drew from populations of European descent to avoid confounding due to population stratification. As a result, the current findings may not be generalizable to other ethnic groups, and additional research is necessary to comprehend how these outcomes may apply to diverse populations.
Conclusion
This study explored the causal relationship between household income status and neurological diseases using a bidirectional MR study based on datasets with millions of individual samples. Our results revealed that populations with a higher household income exhibit an increased genetic susceptibility to Parkinson's disease, along with a potentially decreased genetic susceptibility to ischemic stroke and Alzheimer's disease.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material.
Author contributions
WN was responsible for conception and article writing. GM was responsible for data mining. CL was responsible for scientific supervision. All authors reviewed and approved the final manuscript.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Diagnosing the performance of food systems to increase accountability toward healthy diets and environmental sustainability
To reorient food systems to ensure they deliver healthy diets that protect against multiple forms of malnutrition and diet-related disease and safeguard the environment, ecosystems, and natural resources, there is a need for better governance and accountability. However, decision-makers are often in the dark on how to navigate their food systems to achieve these multiple outcomes. Even where there is sufficient data to describe various elements, drivers, and outcomes of food systems, there is a lack of tools to assess how food systems are performing. This paper presents a diagnostic methodology for 39 indicators representing food supply, food environments, nutrition outcomes, and environmental outcomes that offer cutoffs to assess performance of national food systems. For each indicator, thresholds are presented for unlikely, potential, or likely challenge areas. This information can be used to generate actions and decisions on where and how to intervene in food systems to improve human and planetary health. A global assessment and two country case studies—Greece and Tanzania—illustrate how the diagnostics could spur decision options available to countries.
Introduction
Food systems include the people, places, and methods involved in producing, storing, processing and packaging, transporting, and consuming food; they can consist of either long or short supply chains and be global or local [1,2]. Food systems have the potential to yield multiple positive outcomes including delivering healthy diets that protect against multiple forms of malnutrition and disease; safeguarding environments, ecosystems, and natural resources; and supporting fair, equitable livelihoods [3][4][5]. However, food systems are currently managed and governed in ways that do not meet these outcomes as well as they could [6][7][8].
Identification of diagnostic indicators
The FSD includes indicators relevant to the food systems conceptual framework from the Food Systems Countdown Initiative, which was adapted from the UN High-Level Panel of Experts on Food Systems and Nutrition report (Fig 1) [1,29]. Not all the indicators available on the FSD (over 200) are useful in diagnosing challenges in achieving nutrition and environmental outcomes; many are purely descriptive without any causal relationship to outcomes (e.g., percent urban population). To select diagnostic indicators, the following criteria were applied:

1. the indicator has a clear target value or direction (i.e., higher is better, lower is better, or a certain range is better);
2. the target value is universal and not dependent upon context;
3. data for the indicator are available for the majority of countries;
4. data are recent (the indicator has been updated at least once since 2010, as older values may not be representative of the current status of a country); and
5. the indicator is globally acceptable and preferably available in the public domain.

A total of 39 diagnostic indicators were selected for the FSD diagnostic approach (Table 1). These indicators describe four major components of food systems illustrated in the conceptual framework (Fig 1): food supply chains; food environments; food security, diet, and nutrition outcomes; and environmental outcomes. All indicators and their sources are identified in Table 1.

For food supply chains, five indicators were chosen that describe crop biodiversity and food losses. Production indicators, such as cereal and vegetable yield, were not included because appropriate thresholds for these indicators may depend on a country's agroecological setting. For the food environment, 11 indicators met the diagnostic criteria, encompassing food availability, food affordability, and product properties. For nutrition and food security outcomes, 14 indicators were selected that describe food security, diets, nutritional status for adults and children, and diet-related noncommunicable diseases (NCDs). Few diet indicators have been included due to lack of data, despite dietary outcomes being of high interest and importance as outcomes of the food system and being closely related to food environments as well as other nutrition, health, and environmental outcomes. The only measures of dietary intake included were three indicators of diet quality among infants and young children, because they are the only diet quality indicators that are current and comparably collected across countries. These are collected by Demographic and Health Surveys (DHS) and are available mostly in low- and middle-income countries (LMICs). Dietary measures for other age groups (school-aged children, adolescents, and adults) do not currently meet the geographic distribution requirements to be included in the diagnostic approach, but diet quality data currently being collected by the Gallup World Poll and DHS will be added as soon as they are available, covering indicators of dietary adequacy and NCD risk factors in the general population [33]. For environmental outcomes, nine indicators met the diagnostic criteria and described production-level outcomes and consumption-level outcomes.
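To make the selection step concrete, a minimal sketch of how such criteria could be applied programmatically to an indicator metadata table follows; the field names (has_target, universal_target, pct_country_coverage, last_updated, public_domain) are hypothetical stand-ins, not the FSD's actual schema.

```python
from dataclasses import dataclass

@dataclass
class IndicatorMeta:
    name: str
    has_target: bool             # criterion 1: clear target value or direction
    universal_target: bool       # criterion 2: target not context-dependent
    pct_country_coverage: float  # criterion 3: share of countries with data
    last_updated: int            # criterion 4: year of most recent update
    public_domain: bool          # criterion 5: globally acceptable / public

def is_diagnostic(m: IndicatorMeta) -> bool:
    """Apply the five selection criteria described in the text."""
    return (m.has_target
            and m.universal_target
            and m.pct_country_coverage > 0.5
            and m.last_updated >= 2010
            and m.public_domain)

indicators = [
    IndicatorMeta("fruit supply (g/person/day)", True, True, 0.90, 2019, True),
    IndicatorMeta("percent urban population",    False, True, 0.95, 2021, True),
]
print([m.name for m in indicators if is_diagnostic(m)])
# -> ['fruit supply (g/person/day)']
```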
Establishing cutoffs for each indicator
To establish cutoffs for each indicator, there was a need to develop criteria for flagging values that would indicate a likely challenge associated with each indicator. In many applications, cutoffs are used to interpret continuous indicators, where a value on one side of the cutoff is diagnosed as problematic, while a value on the other side is diagnosed as acceptable. Because the severity of a condition is rarely tied to an exact value, but rather to a position of greater or lesser risk within a continuous range of values, setting cutoffs for diagnosis requires careful consideration. Each diagnostic indicator was categorized into three categories: green (unlikely challenge area), yellow (potential challenge area), or red (likely challenge area). Since different levels of evidence exist for each indicator, thresholds were established using four different methods, as follows.

First, when possible, pre-defined cutoff values representative of global consensus on public health significance (such as pre-defined low to high categories for the prevalence of stunting in young children) were used (S1 Table). However, for most indicators, such pre-defined cutoff values do not currently exist.

Second, where normative recommendations exist, these were used to establish cutoffs (S2 Table). For example, thresholds for fruit supply adequacy were based on globally recommended per capita intakes of fruit, with countries in the green category having a supply of fruit at or above the recommended intake and countries in the red category having a supply of less than half of the recommended amount.

Third, where no cutoffs have been published and no normative values exist, the relative values of country data points can be compared as relatively higher or lower. For each indicator, density plots, a variation of histograms, were used to examine the distribution of data, using the data assembled on the FSD (S3 Table). A density plot was chosen over a histogram to view a smoothed distribution of the data using kernel density estimation. Most indicators had an approximately normal distribution and were divided into tertiles, rounded to interpretable values. We prioritized retaining meaningful or more easily interpretable cutoffs over exact tertiles.

Fourth, some indicators had a bimodal or highly skewed distribution; in these cases, the peaks were bifurcated by the two cutoff points (low/medium; medium/high). An example of each of these is shown in Fig 2. The cutoffs for each indicator, as well as the method used to set them, are shown in Table 1.
Four example indicators are presented to demonstrate the methodology for determining the cutoffs. As mentioned above, the prevalence of stunting is an example of an indicator whose cutoffs are based on published consensus [50]. An example of an indicator where cutoffs are based on normative recommendations is vegetable supply. This indicator is included because vegetable supply is a precursor of vegetable consumption; thus, the cutoffs are set based on the World Health Organization's recommendation for vegetable consumption as part of a healthy diet. Vegetable losses, on the other hand, is an example of an indicator where no normative cutoffs or recommendations exist. Because the data for this indicator are normally distributed across countries, the cutoffs are set using rounded tertiles. The prevalence of adult obesity similarly has no published or accepted cutoffs for public health significance, but its distribution shows two large peaks, so bimodal curve-based binning is used to set the cutoffs.
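As a rough illustration of the tertile-based method and the resulting stoplight rating, the sketch below derives rounded tertile cutoffs from a vector of country values and classifies a single country; it is a simplified stand-in for the paper's R workflow (the analysis used R 3.6.2), and the numbers are invented.

```python
import numpy as np

def tertile_cutoffs(values, round_to=5):
    """Two rounded cutoffs splitting values into rough tertiles."""
    lo, hi = np.percentile(values, [100 / 3, 200 / 3])
    # Round to 'interpretable' values, as the paper prioritizes.
    return (round_to * round(lo / round_to), round_to * round(hi / round_to))

def stoplight(value, cutoffs, higher_is_worse=True):
    """Classify a country's value as green/yellow/red given two cutoffs."""
    lo, hi = cutoffs
    if higher_is_worse:
        return "green" if value < lo else "yellow" if value < hi else "red"
    return "red" if value < lo else "yellow" if value < hi else "green"

# Toy example: vegetable losses (%) across countries (invented numbers).
losses = [2, 4, 5, 7, 8, 9, 11, 14, 15, 18, 22, 30]
cuts = tertile_cutoffs(losses)
print(cuts, stoplight(12, cuts))   # -> (10, 15) yellow
```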
Analysis of food systems diagnosis across countries
The analysis of national-level data included 195 countries globally. The most recent data available for all countries was used. Countries for which the most recent value was prior to 2010 were excluded. For visualization and analysis, countries were stratified by the 2022 World Bank income classification [51]. Analysis, visualization, and data management were conducted using the R Statistical Computing Environment (version: 3.6.2) [52].
Identifying actions for addressing challenge areas
Diagnosing challenge areas across food systems raises the question, "then what?" The intention of the diagnostic approach is to spur policy debate and advocacy for possible solutions to the challenge areas. To aid this process, a menu of possible actions can be linked to each challenge area. While possible actions are primarily up to the users to deliberate and decide, and may be very context-specific, the diagnostic approach provides evidence to inform this deliberation, along with a selection of possible evidence-based policies and actions to consider toward improving outcomes for each challenge [53]. Each of the diagnostic indicators is matched with other indicators in the FSD (Table 2), providing a road map to other potential contributing factors upstream that may provide deeper understanding of the causal pathway. Some outcomes have multiple food and non-food causes (e.g., poor nutritional status); only the possible causes related to food (e.g., food insecurity and inadequate diets) are identified.
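To illustrate the road-map idea in code, the fragment below encodes a toy mapping from a flagged outcome indicator to candidate upstream indicators; the specific pairings paraphrase the logic of Table 2 and are illustrative only, not a complete or authoritative encoding.

```python
# Toy road map: flagged outcome -> candidate upstream FSD indicators.
# The pairings paraphrase Table 2 and are illustrative only.
ROAD_MAP = {
    "children with zero fruit/vegetable intake": [
        "relative cost of fruits and vegetables",   # food environment
        "fruit and vegetable supply",               # food availability
        "crop species richness",                    # supply chain
        "fruit and vegetable losses",               # supply chain
    ],
    "unaffordability of a healthy diet": [
        "cost of a healthy diet relative to food expenditure",
        "fruit and vegetable supply",
    ],
}

def upstream_factors(flagged_indicator):
    """Return candidate upstream factors to examine for a flagged area."""
    return ROAD_MAP.get(flagged_indicator, ["no mapping encoded"])

for factor in upstream_factors("children with zero fruit/vegetable intake"):
    print("-", factor)
```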
Case studies
To demonstrate the use of the diagnostic approach in specific settings, two country case studies are presented. Tanzania and Greece were chosen to demonstrate how the diagnostic approach can be applied to different types of food systems, Tanzania having a predominantly rural and traditional food system and Greece an industrial and consolidated food system [54]. Furthermore, diet quality data for the general population were available from these two countries, which allowed for a richer analysis of the problems that food systems may need to address. Comparable diet quality data are currently being collected by the Gallup World Poll and DHS and will soon be available for a growing number of countries [33].
Applying the diagnostics to national food systems
Of the 195 countries assessed in the analysis, the average country coverage for indicators was 158, or 81% of countries (Table 1). Five indicators had established prevalence thresholds for public health significance (Table 1). Taking a systems approach, Figs 3 and 4 bring the indicators together, highlighting patterns of challenge areas across the set of 39 indicators. Fig 3 shows the percentage of countries that have a likely challenge area for each indicator by country income classification [51]. Patterns in likely challenge areas are visible by income status, with some indicators moving more or less strongly with income, or in different directions. For example, supply of dietary energy and of fruits and vegetables are frequently flagged as likely challenge areas in lower-middle-income countries, but not often in upper-middle- or high-income countries. Meanwhile, pulse supply appears to be low across all income groups, though the relative cost of legumes is particularly a challenge in higher-income settings. The percentage of the population who are hungry, food insecure, or who cannot afford a healthy diet are challenges in low-income countries, reflected in the dietary outcomes of low dietary diversity and low consumption of fruits, vegetables, and animal source foods among infants and young children in low-income countries. Sales of UPFs and adult obesity are challenges particularly in high-income countries. The set of nutrition outcome indicators tends to show nutrition transitions that mirror the food environment and dietary patterns. While low-income countries are mainly grappling with child undernutrition and food insecurity and high-income countries are largely grappling with adult obesity [55], middle-income countries are dealing with double burdens of malnutrition challenges [56]. Notably, however, raised blood pressure among adults is more problematic the lower the income level, despite being an indicator of NCD risk. Moreover, diabetes presents the most significant challenge in upper-middle-income countries, not high-income countries. On the environmental side, eutrophication, GHGe, and consumption footprints are particular challenge areas in high-income countries, while threats to soil biodiversity, agricultural land change, and natural vegetation within agricultural landscapes are pressing challenge areas across countries of all incomes.

Each country faces a unique set of likely challenge areas across the food system or within a subsector of the food system. Fig 4 shows the diversity of country-level challenges within a randomly selected set of countries in each income classification. There are many countries which follow typical patterns seen by income classification, including greater challenge areas of undernutrition in low- and middle-income countries (e.g., anemia) and greater challenge areas of obesity and UPF sales in high-income countries. But there are also interesting country outliers for many indicators. For example, child wasting is an unlikely challenge area for several low-income countries, including Tanzania, Mozambique, and Liberia; UPF sales are atypically high in Costa Rica, Mexico, Russia, and Serbia compared to other low- and middle-income countries; and the low affordability of a healthy diet stands out in the Maldives. On the environmental side, the food supply chains of the Gambia, Liberia, and Mozambique have fewer challenge areas compared to other low-income countries. Few food supply chain indicators are flagged as challenges in high-income countries, but there are some notable exceptions on food losses in individual countries, such as high fruit losses in Japan and high vegetable losses in Greece and Korea. Positive deviants can also be identified. For example, Cyprus and Japan have relatively fewer food systems-related environmental challenge areas than other high-income countries.
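The quantity plotted in Fig 3 reduces to a grouped share of red flags; a minimal pandas sketch, with invented column names and data, might look like this:

```python
import pandas as pd

# Invented example of a long-format ratings table (country x indicator).
df = pd.DataFrame({
    "country":      ["Tanzania", "Tanzania", "Greece", "Greece"],
    "income_group": ["Low", "Low", "High", "High"],
    "indicator":    ["pulse supply", "adult obesity"] * 2,
    "rating":       ["red", "green", "red", "red"],
})

# Percentage of countries flagged as a likely challenge (red), per
# indicator and income classification: the quantity shown in Fig 3.
pct_red = (df.assign(is_red=df["rating"].eq("red"))
             .groupby(["income_group", "indicator"])["is_red"]
             .mean() * 100)
print(pct_red.round(1))
```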
Performance across indicators within a specific food systems component, within an individual country, is typically varied, rarely consisting of all likely challenge areas or no likely challenge areas. For example, Angola, a lower-middle-income country, has several likely challenge areas in the food environment related to the availability of food-including the supply of vegetables, pulses, and the overall dietary energy supply-and the cost of an energy sufficient diet is also a likely challenge. However, the premium consumers must pay for nutrient-dense foods, evident in the relative cost of fruits, vegetables, and pulses, and the relative cost of a healthy diet, is not a likely challenge area, as it is in many higher-income countries. Still, the cost of a healthy diet relative to household food expenditure (affordability) is a likely challenge area, which may indicate that the general cost of food, across all food groups, is still high.
To use the diagnosis to inform decision-making, one of the first steps is to explore the possible factors related to each challenge area. In Table 2, such factors are identified among indicators where data are available on the FSD, following the food systems conceptual framework (Fig 1). For example, the high prevalence of infants and young children with zero fruit and vegetable intake might trace back to the high cost of fruits and vegetables, and in turn to the low availability of fruits and vegetables, possibly linked to the supply chain issues of low crop biodiversity and/or high fruit and vegetable losses. Countries with high unaffordability of healthy diets tend to have a low supply of fruits and vegetables.
Applying the diagnostics in two country case studies
Tanzania. Tanzania is a low-income country with a food system that is predominantly rural and traditional [54]. The country has made steady progress in combating child stunting, which fell by approximately 10% from 2010 to 2018 [40]. However, 32% of children under five are stunted today, well above the 20% prevalence cutoff indicating a likely challenge area, and progress towards the elimination of stunting, a target within SDG 2, remains an unfinished agenda [57]. Though stunting is a multisectoral challenge with determinants beyond the food system, the diagnostic approach can help identify priority areas to be addressed in order to maximize the food system's contribution to ending stunting.
The FSD shows that Tanzania performs relatively well on breastfeeding, with nearly 60% of infants exclusively breastfed for the first six months of life and 92% still breastfed at one year, but complementary feeding still requires more attention [53]. Just 21% of children 6-23 months of age achieve minimum dietary diversity (MDD), making this a likely challenge area for Tanzania, and a probable cause of stunting. Unpacking MDD further, just 35% of children 6-23 months of age consume any meat, eggs, or fish, making this a likely challenge area, while consumption of fruits and vegetables is a potential challenge area, with 29% consuming zero fruits and vegetables in the previous day [39]. Animal-source foods (ASF) are important for child growth, due to their favorable amino acid profile and their high density of micronutrients such as iron and zinc [58,59].
The diagnostic approach can be used to trace further causal pathways through other areas of the food environment and food supply chains. Particularly relevant for MDD are the availability and affordability of diverse foods. Fifty-six percent of Tanzania's dietary energy supply is derived from cereals, roots, and tubers, which is a potential challenge area. The affordability of a healthy diet may be another area of concern, also flagged as a potential challenge area, though relative costs of fruits, vegetables, and pulses are low.
Recognizing the intergenerational nature of stunting, examining women's nutritional status and dietary intake may also shed light on possible causes of stunting. Nutritional status at the preconception stage and during pregnancy may influence intrauterine growth and birth outcomes [60]. The diagnostic approach indicates that anemia-which has both dietary and nondietary causes-is a significant problem in Tanzania, affecting 37% of women of reproductive age. Diet Quality Questionnaires (DQQ) collected in Tanzania from the Global Diet Quality Project provide more insights, including that only 63% of women consumed an ASF during the previous day compared with 71% of men. ASF consumption has been associated with reducing the risk for small-for-gestational age and low birthweight babies [61,62]. Looking at the sociocultural drivers of the food system, Tanzania's gender inequality index is high, which is consistent with this gender disparity in diets.
After identifying likely challenge areas that may be worth more in-depth, contextualized analysis, national stakeholders may be a step closer to selecting policies and actions that may be appropriate to address these challenges. In this example related to stunting in Tanzania, these could include investing in market infrastructure to enhance access to nutritious food and utilizing social protection platforms to enhance the purchasing power of women, especially around pregnancy.
Greece. Greece is a high-income country and its food system is indicative of an industrial and consolidated typology [54]. Countries associated with the Mediterranean Diet, like Greece, have historically consumed diets that are low in red meat and high in plant foods, including pulses, with high fat intake from olive oil [63,64]. Greece has 747 grams of fruits and vegetables available per person per day, an abundant supply making it likely that most people in Greece would be able to access at least 400 grams of fruits and vegetables per day, the WHO-defined minimum [65]. However, Greece's national pulse supply is just 14 grams per person per day, indicating a likely challenge area; other Mediterranean countries fare little better, with Italy and Spain at 14 and 15 grams, respectively, and France at just 4.7 grams per person per day, making this a likely challenge area for all of these countries. As this diagnostic exercise demonstrates in Fig 4, a common challenge for many countries is to provide a sufficient supply of pulses in their food environments, but this is especially problematic for high-income countries. Pulses could play a key role in transforming food systems for improved nutrition and environmental sustainability, as they are less intensive in their GHGe and use of water than other protein-rich foods, and their consumption has been associated with reductions in key NCD-related risk factors, including low-density lipoprotein (LDL) cholesterol concentration and blood pressure [6].
Recognizing the influence food environments have on consumer behavior and ultimately diet quality, a next step in this analysis might be to investigate whether diets are, in fact, also low in pulses. DQQ data from the Global Diet Quality Project indicate that in Greece, pulses are indeed a dietary gap, with just 18% of a nationally representative sample having consumed pulses in the day prior to the survey; this is coupled with relatively high consumption of red meat (44%) and processed meat (23%), in contrast to high consumption of fruits and vegetables (95%) [33]. These diet data indicate that higher pulse consumption could substitute for some red and processed meat consumption, with co-benefits for NCD risk and environmental impact. In addition to the low physical supply, low pulse consumption could be brought on by unaffordability of pulses; however, in Greece the cost of pulses relative to starchy foods is low, indicating that cost is less likely to be a contributor.
Examining its production-related indicators, Greece performs well on crop species richness, but has a likely challenge area related to average threats to soil biodiversity. Greece's average soil organic matter is also 47 tonnes per hectare, lower than the Southern Europe regional average of 59 tonnes per hectare [66].
A policy area for consideration to address these likely challenge areas may be to realign agricultural incentives towards increased production of pulses. Greater integration of pulses in agriculture may present an opportunity to improve environmental outcomes. Agroecological approaches emphasize agrobiodiversity as a means of enhancing the natural resources and ecosystem services that support sustainable yield gains, with low environmental impacts [67]. Inclusion of pulses in intercropping, cover cropping, and crop rotation strategies has been shown to improve soil structure, nitrogen fixing, and pest management [68][69][70].
These factors suggest that pulses could feature well in a dual strategy to shift diets and improve soil quality in Greece. Agriculture policy could incentivize pulse production to increase availability and environmental co-benefits. Consumer demand creation activities centered around the Mediterranean diet could also be considered to complement agriculture policy that includes or focuses on pulses.
Discussion
This paper is the first of its kind to develop a methodology to diagnose food systems' performance to help inform food systems governance and accountability. The results indicate certain clear and consistent trends across income groups. However, each country faces a unique set of likely challenge areas. While many trends observed by income classification may be intuitive, the diagnostic approach presented here adds numbers and nuances to these trends and supports the consideration of multiple likely challenge areas together. Jointly, this approach suggests a high potential for learning from different policy and programmatic interventions across countries, e.g., by identifying the positive deviants for a given indicator within a particular income classification or food system type, by connecting challenge areas, and by understanding the reasons behind successes and which ones could be replicated in other contexts.
As illustrated by the above case studies, this diagnostic approach can inform policy making. For countries where the diagnosis suggests unlikely challenge areas, policies can be encouraged to sustain success and share lessons learned. For likely challenge areas, policies can be encouraged to improve the highlighted sub-optimal outcomes. The diagnostic approach also helps identify bundles of challenge areas for policy action: for each nutrition outcome, a road map is provided to relevant indicators within the food supply chain and food environment. Diagnosis within these food supply and food environment indicators pinpoints areas of relatively poor performance upstream from diet outcomes, where attention can be focused on context-specific policy actions that could improve outcomes. In other words, the diagnostic approach identifies both the symptoms of a malfunctioning food system and potential contributing factors, providing evidence to then suggest an appropriate set of interventions or treatments to consider. This analysis will be further strengthened in future iterations of the FSD with additional dynamic tools that can use data to guide decision-making.
It is important to note that the diagnostic approach uses indicators to highlight likely challenge areas within food systems, but for many indicators the cutoffs were selected based on countries' relative performance, rather than absolute standards or targets. In addition, the indicators themselves are rarely an addressable problem, and should not be viewed as such. Rather, each indicator highlights one outcome of a complex causal chain of actions and interactions, along which there are several potential intervention points. For example, child stunting is a useful marker of delayed development and later chronic disease risk, and is indicative of multiple forms of deprivation occurring over a period of time, e.g., suboptimal nutrition, inadequate care, and regular infection [71]. From a policy perspective, the key concerns are the underlying determinants and associated developmental outcomes of stunting. A high level of stunting indicates multiple underlying problems and should lead policy makers to seek to address these determinants (and their determinants). A proper diagnosis can thus begin with the indicator but not end there, instead looking for the possible points of leverage along the causal chain to that outcome. These points of leverage will vary across contexts and need to be interpreted with that local insight. Other indicators available on the FSD and elsewhere can help with this analysis, as indicated in the case studies shown above, but will also need to be combined with qualitative knowledge about the local culture, political economy, and which actions are likely to be most impactful. It is thus a guiding tool, not a determinative algorithm.
Previous efforts have developed aggregate indices to assess food systems sustainability and performance [72,73]. Indices developed by Béné et al. and Chaudhary et al. encompass 25 and 27 indicators, respectively, which are used to calculate a composite score. Indicators and composite metrics used to describe food systems in these two papers are continuous, which is useful to avoid misclassification, but from a policy standpoint it is harder to identify areas within the food system in which policymakers and other stakeholders can intervene. To our knowledge, the present paper is the first attempt to undertake a systematic food systems diagnosis using a dashboard approach with a diverse set of indicators spanning food systems components and applying this across countries.
Strengths of this work include the use of a food systems framework (Fig 1) [29] to guide the identification of priority indicators and their interpretation, leveraging a uniquely broad dataset (both in terms of geographical coverage and food systems components) from the FSD. It is also highly transparent, with all data publicly available and all thresholds, and the approaches for setting them, presented here. The relative simplicity of the approach, which leverages the best available data and evidence from diverse sources but translates this into an easily understood 'stoplight' rating, is also an advantage, although it comes at a cost of masking complexity. When considering use for policy, this simplification is useful, as excess complexity can be paralyzing and difficult for non-specialists to interpret. The work has also helped to advance understanding of the development of actionable food systems indicators, that is, highlighting which indicators (among a large number available) can be used to inform real-world decisions.
There are also certain limitations to this work. First, narrowing focus to just a few dozen indicators was necessary to prioritize and make the diagnostic approach understandable and actionable, but it may leave out other indicators that are also meaningful, especially in specific country contexts. In addition, there are certain components and outcome areas of the food system, such as livelihoods and cultural identity, which are not well covered with high-quality, relevant indicators, and are thus necessarily excluded here. Dietary data are also an important gap: due to limited availability of robust dietary data for most countries, dietary outcomes (aside from MDD, the prevalence of infants 6-23 months consuming zero fruits or vegetables, and the prevalence of infants 6-23 months consuming no meat, fish, or eggs) are omitted until they become available across countries. In the future, the FSD will include more dietary outcomes to better assess diets as the critical link between food environments and nutrition and environmental outcomes. These outcomes will include the minimum dietary diversity for women of reproductive age (MDD-W); an indicator of consumption of all five recommended food groups (vegetables; fruits; pulses, nuts, and seeds; animal source foods; and starchy staples); and indicators of risk factors for NCDs defined within WHO and other global recommendations, including consumption of adequate fruits and vegetables; whole grains; pulses, nuts, and seeds; and fiber, and limited consumption of free sugar, salt, fat, saturated fat, and red and processed meat [33]. It is also recognized that the quality of data for certain indicators (e.g., GHGe) might differ between countries and that this might affect identified patterns. Second, this systems approach allows users to consider bundles of challenge areas and draw potential connections between those, but to make statements about causality, more in-depth analysis is needed. Third, the presented results focus at the global and national levels and do not consider subnational data, even though certain countries (e.g., India) have considerable subnational diversity within their food systems as well as locally devolved policymaking processes. Fourth, many of the indicators come from official global repositories, the most reliable and comparable data sources (e.g., FAOSTAT); however, these often poorly capture the role of wild or local foods in diets, the environment, and local economies [49]. Finally, for indicators where no cutoffs have been published and no normative values exist, the cutoffs are based on density plots and countries' relative performance. These cutoffs could be refined in the future with more evidence of meaningful normative values.
There are several opportunities to build on this work. First, identifying potential challenge areas through this quantitative approach can trigger and support in-depth, context-specific analysis, which includes stakeholder consultation and the integration of qualitative information to provide a more nuanced diagnosis and resulting decision options. National stakeholders may also enrich their analyses by supplementing the diagnosis with other data available at the country level, as demonstrated by the case studies' use of DQQ data for Tanzania and Greece. Second, each of the diagnostic indicators could be paired with relevant policy and programmatic innovations (be they technological, nature-based, or societal) to improve both diets and planetary health. While no single action can fix food systems, governments, non-governmental organizations, civil society, and businesses can each act to start to transform food systems. It is hoped that the diagnostics presented in this paper are a step towards better monitoring of food systems performance that can lead to stronger governance and accountability of food systems and their transformation.
Supporting information S1
Notes on Projective, Contact, and Null Curves
These are notes on some algebraic geometry of complex projective curves, together with an application to studying the contact curves in CP^3 and the null curves in the complex quadric Q^3 in CP^4, related by the well-known Klein correspondence. Most of this note consists of recounting the classical background. The main application is the explicit classification of rational null curves of low degree in Q^3. I have recently received a number of requests for these notes, so I am posting them to make them generally available.
Along the way, I explain a few other results of interest. Mostly these are consequences of the results in [6]. Some of this material has, in the meantime, been rediscovered by others [2,3].
For the convenience of the reader, I include some discussion of the algebraic geometry of projective curves. All of this material is classical [8].
Invariants of projective curves
Let V be a complex vector space of dimension n+1 ≥ 2, and let P(V) be its projectivization. When V is clear from context, I will write P^n for P(V).
Let S be a connected Riemann surface and let f : S → P^n = P(C^{n+1}) be a nondegenerate holomorphic curve, i.e., f(S) does not lie in any proper hyperplane H^{n−1} ⊂ P^n.
When S is compact, the degree of f, deg(f), is the number of points in the pre-image f^{−1}(H) ⊂ S, where H ⊂ P^n is any hyperplane that is nowhere tangent to f. When f : S → P^n is nondegenerate, one knows that deg(f) ≥ n.
2.1. Ramification. Given p ∈ S, one can write

(2.1)  f = [h_0 v_0 + h_1 v_1 + · · · + h_n v_n]

for some basis v_0, . . . , v_n of V, where the h_i are meromorphic functions on S that satisfy

(2.2)  0 = ν_p(h_0) < ν_p(h_1) < · · · < ν_p(h_n),

where ν_p(h_i) is the order of vanishing of h_i at p ∈ S. The numbers a_i(p) = ν_p(h_i) ≥ i for 0 ≤ i ≤ n depend only on f and p, not on the choice of basis v_i and meromorphic functions h_i satisfying (2.1) and (2.2). For all but a closed, discrete set of points p ∈ S, one will have a_i(p) = i for 0 ≤ i ≤ n. It is useful to define, for i ≥ 1,

r_i(p) = a_i(p) − a_{i−1}(p) − 1 ≥ 0,

which is known as the i-th ramification degree of f at p. When f is not clear from context, I will write r_i(p, f).
Since r_i(p) = 0 for 1 ≤ i ≤ n for all but a closed, discrete set of points p ∈ S, one can define the i-th ramification divisor of f to be the locally finite formal sum

R_i(f) = Σ_{p∈S} r_i(p) · p.

When S is compact, this is a finite sum, in which case R_i(f) is an effective divisor on S.
Remark 1 (Branch points). A point p ∈ S at which r_1(p, f) > 0 is said to be a branch point of f of order r_1(p, f). When R_1(f) = 0, f is said to be unbranched, which is equivalent to f being an immersion.
2.2. The associated curves. Since f is nondegenerate, there is a well-defined sequence of associated curves f_k : S → P(Λ^k(V)) for 1 ≤ k ≤ n, defined, relative to any local holomorphic coordinate z : U → C, where U ⊂ S is an open set, by

f_k = [F ∧ F′ ∧ · · · ∧ F^{(k−1)}],

where F : U → V is holomorphic and non-vanishing, f = [F] on U ⊂ S, and primes denote derivatives with respect to z. (It is easy to show that f_k is well-defined, independent of the choice of z or F.) Of course, f_1 = f.
Remark 2 (Wronskians). If h_1, . . . , h_k are meromorphic functions on a connected Riemann surface S and z : U → C is a local holomorphic coordinate on U ⊂ S, then the Wronskian differential of (h_1, . . . , h_k) is the expression

W(h_1, . . . , h_k) = det( h_i^{(j−1)} )_{1≤i,j≤k} (dz)^{k(k−1)/2},

where h^{(j)} denotes the j-th derivative with respect to z. It is not hard to show that W(h_1, . . . , h_k) does not depend on the choice of local holomorphic coordinate z and hence is a globally defined (symmetric) differential on S.
The Wronskian has two important (and easily proved) properties that will be needed in the rest of these notes: first, W(h_1, . . . , h_k) vanishes identically if and only if h_1, . . . , h_k are linearly dependent over C; second, W(g h_1, . . . , g h_k) = g^k W(h_1, . . . , h_k) for any meromorphic function g.
Note that, when f : S → P^n is described as in (2.1), the associated curves can be written in the form

(2.4)  f_k = [ Σ_{0≤i_1<···<i_k≤n} W(h_{i_1}, . . . , h_{i_k}) v_{i_1} ∧ · · · ∧ v_{i_k} ].

2.3. The canonical k-plane and line bundles. Since f_k(p) is the projectivization of a nonzero simple k-vector for all p ∈ S, it follows that there exists a flag of subspaces

E_1(p) ⊂ E_2(p) ⊂ · · · ⊂ E_n(p) ⊂ V,  with dim E_k(p) = k,

such that f_k(p) = [Λ^k E_k(p)], and it is easy to show that these subspaces vary holomorphically with p. Let B ⊂ S × V^{n+1} be the set of (n+2)-tuples (p, v_0, . . . , v_n) that satisfy the conditions p ∈ S, v_i ∈ E_{i+1}(p) for 0 ≤ i ≤ n, and (v_0, . . . , v_n) is a basis of V. This B is a holomorphic submanifold of S × V^{n+1}, the projection σ : B → S onto the first factor is a submersion, and the V-valued functions e_i : B → V defined by e_i(p, v_0, . . . , v_n) = v_i for 0 ≤ i ≤ n are holomorphic. Consequently, there are unique holomorphic 1-forms ω^j_i on B satisfying the structure equations

(2.8)  de_i = e_j ω^j_i,

together with the integrability conditions dω^j_i = −ω^j_k ∧ ω^k_i that follow from them. Moreover, since, by construction, e_0, . . . , e_i span E_{i+1} along the fiber over each p ∈ S, it follows that ω^j_i = 0 whenever j > i+1. For each b = (p, v_0, . . . , v_n) ∈ B, let ǫ_i(b) denote the linear function on E_{i+1}(p) that has E_i(p) as its kernel and satisfies ǫ_i(v_i) = 1. Thus, ǫ_i(b) can be regarded as a nonzero linear function on the line E_{i+1}(p)/E_i(p). With these definitions, it is not difficult to show that, for each i, there is a well-defined holomorphic section ρ_i of a line bundle over S constructed from these data. Moreover, following the definitions above, one finds that the section ρ_i vanishes to order r_i(p) at p ∈ S.
2.4. The compact case and divisors. Now suppose that S is compact, and fix a nondegenerate f : S → P^n, which will not be notated in the following discussion.
where D_i is a divisor on S, well-defined up to linear equivalence.
From (2.7) it then follows that the D_i satisfy a linear-equivalence relation (2.12), where '≡' means linear equivalence of divisors. Moreover, because the zero divisor of the holomorphic section ρ_ℓ is the ramification divisor R_ℓ, one obtains a relation of the form

D_{ℓ+1} − 2D_ℓ + D_{ℓ−1} ≡ K + R_ℓ,

where, again, K is the canonical divisor of S. In particular, for ℓ > 1, combining this with (2.12) expresses everything in terms of deg f and the R_i. Since deg D_1 = deg f, taking degrees of divisors, one has

(2.16)  (n+1) deg f + n(n+1)(k−1) = n r_1 + (n−1) r_2 + · · · + r_n,

where r_i = deg R_i ≥ 0 and k is the genus of S.
Example 1 (Rational normal curves). If S is compact and f : S → P^n is nondegenerate and satisfies r_i = 0 for all i, it follows from (2.16) that k = 0 and deg f = n, so that f(S) ⊂ P^n is the rational normal curve of degree n, i.e., up to projective equivalence,

f = [ v_0 + z v_1 + z^2 v_2 + · · · + z^n v_n ],

where z is a meromorphic function on S = P^1 with a single, simple pole.
To conclude this subsection, I list a few further useful facts. The dual curve f_n : S → P(Λ^n(V)) = P(V*) of f = f_1 is nondegenerate, and its ramification divisors are given by

R_i(f_n) = R_{n+1−i}(f)  for 1 ≤ i ≤ n.

Moreover, the dual curve of f_n is f_1, i.e., (f_n)_n = f_1 = f. Finally, one has the following relation between the first ramification divisor of f_i and the i-th ramification divisor of f:

R_1(f_i) = R_i(f).

(This follows immediately from (2.4) and the properties of the Wronskian.) However, note that, in general, for 1 < i < n, the higher ramification divisors of f_i cannot be computed solely in terms of the ramification divisors of f = f_1. In fact, the f_i in this range need not even be nondegenerate, as will be seen.
Contact curves in P^3
Now let V have dimension 4 and let β ∈ Λ^2(V*) be a nondegenerate 2-form on V, i.e., V is a symplectic vector space of dimension 4. (Since any two nondegenerate 2-forms on V are GL(V)-equivalent, the particular choice of β is not important.) Let Sp(β) ⊂ GL(V) denote the group of linear transformations of V that preserve β.
The choice of β defines a volume form Ω = ½ β^2 ∈ Λ^4(V*) on V and, because of the nondegenerate pairing Λ^2(V) × Λ^2(V*) → C, the 2-form β singles out the 5-dimensional subspace W ⊂ Λ^2(V) annihilated by β. Moreover, by the usual reduction process induced by the C*-action of scalar multiplication on V, the projective space P^3 = P(V) inherits a contact structure, i.e., a holomorphic 2-plane field C ⊂ T P^3 that is nowhere integrable and is invariant under the induced action of Sp(β) on P^3.
A connected holomorphic curve f : S → P^3 is said to be a contact curve with respect to β if f′(T_pS) ⊂ C_{f(p)} ⊂ T_{f(p)}P^3 for all p ∈ S. Equivalently, f is a contact curve if and only if either f is constant or else f_2(S) has image in P(W) ⊂ P(Λ^2(V)). If f(S) does not lie in a line in P^3, I will say that f is nonlinear.
Proposition 1. If f : S → P^3 is a nonlinear contact curve, then f is nondegenerate. Moreover, R_1(f) = R_3(f), and f_2 : S → P(W) ≃ P^4 is nondegenerate, with

R_1(f_2) = R_4(f_2) = R_2(f)  and  R_2(f_2) = R_3(f_2) = R_1(f).

Proof. If f were degenerate, then f(S) would be linearly full in some P^2 ⊂ P^3, and hence it would be expressible on a neighborhood of p ∈ S in the form

f = [v_0 + h_1 v_1 + h_2 v_2],

where the h_i are meromorphic functions on S with ν_p(h_1) = a_1 > 0 and ν_p(h_2) = a_2 > a_1, and with v_0, v_1, v_2 being linearly independent vectors in V. If z : U → C is a p-centered local holomorphic coordinate on an open p-neighborhood U ⊂ S, and we set dh_i = h′_i dz, then the contact condition would be a nontrivial constant-coefficient linear relation among h′_1, h′_2, and h_1h′_2 − h_2h′_1, whose vanishing orders at p are the three distinct integers a_1 − 1 < a_2 − 1 < a_1 + a_2 − 1; no such relation is possible, so f is nondegenerate. Write, then,

f = [v_0 + h_1 v_1 + h_2 v_2 + h_3 v_3]

for some meromorphic functions h_i on S that vanish at p, and select a local p-centered holomorphic coordinate z : U → C on some p-neighborhood U ⊂ S. After a suitable normalization of the basis (v_i), the condition that f be contact with respect to β is expressed as the equation

(3.2)  h′_3 = h_1 h′_2 − h_2 h′_1,

and, since f is nonlinear, the right-hand side does not vanish identically. Hence, by making a change of basis in (v_1, v_2), it can be assumed that 0 < ν_p(h_1) < ν_p(h_2), which, by (3.2) and the fact that ν_p(h_3) > 0, forces

a_3 = ν_p(h_3) = a_1 + a_2.

First, note that this implies that r_3(p) = a_3 − a_2 − 1 = a_1 − 1 = r_1(p). Since this holds for all p ∈ S, it follows that R_1(f) = R_3(f). Now, the sequence of orders of vanishing of the five coefficients of the basis elements of W appearing in f_2 are the five distinct numbers

a_1 − 1 < a_2 − 1 < a_1 + a_2 − 1 < 2a_1 + a_2 − 1 < a_1 + 2a_2 − 1.

Hence, f_2 : S → P(W) ≃ P^4 is nondegenerate and has the following ramification indices at p:

r_1(p, f_2) = r_4(p, f_2) = r_2(p, f),   r_2(p, f_2) = r_3(p, f_2) = r_1(p, f).

Remark 3. Proposition 1 was proved in [6], though it was known classically [1]. It also appears (in slightly different notation) in [2], the authors of which do not appear to have been aware of [6].
Proposition 1 also suggests a slightly more general notion of contact curve, which is described by the following result.
Proposition 2. Let f : S → P(C^4) ≃ P^3 be a nondegenerate holomorphic curve for which f_2 : S → P(Λ^2(C^4)) ≃ P^5 is degenerate. Then there exists a nondegenerate symplectic form β ∈ Λ^2((C^4)*), unique up to constant multiples, such that f is a contact curve with respect to β.
Proof. As before, fix a point p ∈ S and write f in the form

f = [v_0 + h_1 v_1 + h_2 v_2 + h_3 v_3]

for some meromorphic functions h_i on S that vanish at p and satisfy 0 < ν_p(h_1) < ν_p(h_2) < ν_p(h_3); set a_i = ν_p(h_i). Let z be a meromorphic function on S that has a simple zero at p and write dh_i = h′_i dz. Then f_2 takes the form

(3.5)  f_2 = [ h′_1 v_0∧v_1 + h′_2 v_0∧v_2 + h′_3 v_0∧v_3 + (h_1h′_2 − h_2h′_1) v_1∧v_2 + (h_1h′_3 − h_3h′_1) v_1∧v_3 + (h_2h′_3 − h_3h′_2) v_2∧v_3 ],

and the orders of vanishing at p of the meromorphic coefficients of these terms, in the order written in (3.5), are

(3.6)  a_1 − 1, a_2 − 1, a_3 − 1, a_1 + a_2 − 1, a_1 + a_3 − 1, a_2 + a_3 − 1.

If a_3 were not equal to a_2 + a_1, then these six integers would be distinct, and it would follow that the six coefficient functions were linearly independent as meromorphic functions on S. In this case, f_2 would be linearly full in Λ^2(C^4), contrary to hypothesis. Thus, we must have a_3 = a_2 + a_1, and the inequalities (3.6) become

(3.7)  a_1 − 1 < a_2 − 1 < a_3 − 1 = a_1 + a_2 − 1 < a_1 + a_3 − 1 < a_2 + a_3 − 1.

Now, in order for f_2 to be degenerate, these six coefficients must satisfy at least one nontrivial linear relation with constant coefficients. Because of the strict inequalities (3.7), this relation cannot involve h′_1 or h′_2, so it must be of the form

c_1 h′_3 + c_2 (h_1h′_2 − h_2h′_1) = 0.

Moreover, again because of the strict inequalities (3.7), neither c_1 nor c_2 can vanish, and, thus, there cannot be two independent linear relations of this kind. Now, consider the 2-form

β = c_1 ξ^0∧ξ^3 + c_2 ξ^1∧ξ^2,

where (ξ^0, ξ^1, ξ^2, ξ^3) is the basis of (C^4)* dual to (v_0, v_1, v_2, v_3). Since β∧β = 2c_1c_2 ξ^0∧ξ^1∧ξ^2∧ξ^3 ≠ 0, the 2-form β is nondegenerate and hence defines a symplectic structure on C^4. By construction, f_2 lies in the projectivization of W ⊂ Λ^2(C^4), the kernel of β. Hence, f is a contact curve in the projectivization of the symplectic space (C^4, β).
Since there is only one linear relation among the meromorphic coefficients appearing in f_2, it follows that f_2 lies linearly fully in P(W) ≃ P^4, which proves the uniqueness of β up to multiples.
Example 2 (Rational contact curves of arbitrary degree). Let p and q be relatively prime integers satisfying 0 < p < q, and consider the curve f : P^1 → P^3, where z is a meromorphic function on P^1 possessing a single, simple pole at P and a single, simple zero at Q, defined by

f = [ v_0 + z^p v_1 + z^q v_2 + ((q−p)/(q+p)) z^{p+q} v_3 ].

Thus, f is a contact curve for the symplectic structure

β = ξ^0∧ξ^3 − ξ^1∧ξ^2,

and one has R_1(f) = (p−1)(P+Q), while R_2(f) = (q−p−1)(P+Q). This example, for q = p+1, appears in [6].
Null curves in C^3 and Q^3
Endow C^3 with a nondegenerate (complex) inner product, which will be denoted v·w ∈ C for v, w ∈ C^3.
If S is a connected Riemann surface, then a non-constant meromorphic curve γ : S → C^3 will be said to be a null curve if the meromorphic symmetric quadratic form dγ·dγ vanishes identically on S.
In order to treat the poles of meromorphic null curves algebraically, it will be useful to introduce an algebraic compactification of C^3. The usual compactification that regards C^3 as an affine open set in P^3 is not useful in this context, since there is no natural way to extend the notion of 'null' to the hyperplane at infinity.
Instead, one embeds C^3 into P^4 as a quadric hypersurface Q^3 by identifying x ∈ C^3 with the point

[1 : x : ½ x·x] ∈ P^4.

The resulting image is an affine chart on the projective quadric Q^3 ⊂ P^4 defined by the homogeneous equation

(4.1)  x·x − 2 X_0 X_4 = 0,

where points of P^4 are written as [X_0 : x : X_4] with x ∈ C^3. A meromorphic null curve γ : S → C^3 completes uniquely to an algebraic curve g : S → Q^3 ⊂ P^4 that is also null, in the sense that the tangent lines to the curve lie in Q^3 as well.
Moreover, (4.1) is the quadratic form associated to an inner product ⟨ , ⟩ on C^5 with the property that a g : S → Q^3 that is the completion of a meromorphic null curve γ : S → C^3 is of the form g = [G], where G : S → C^5 is meromorphic and satisfies

(4.2)  ⟨G, G⟩ = ⟨G, dG⟩ = ⟨dG, dG⟩ = 0.
(In the last equation, ⟨dG, dG⟩ is to be interpreted as a symmetric meromorphic quadratic form.)

Proposition 3. If γ : S → C^3 is a meromorphic null curve with d simple poles and no other poles, then the completed null curve g : S → Q^3 has degree d (as a map to P^4). If, in addition, γ is an immersion away from its poles (i.e., γ is unbranched), then g : S → Q^3 is also an immersion.
Proof. This follows immediately from local computation.
4.1. The Klein correspondence. I now recall the famous Klein correspondence between nondegenerate contact curves f : S → P^3 and nondegenerate null curves g : S → Q^3 ⊂ P^4. As before, let V be a symplectic complex vector space of dimension 4 with symplectic form β ∈ Λ^2(V*), and with Ω = ½ β^2 ∈ Λ^4(V*) a volume form on V. Let W ⊂ Λ^2(V) be the 5-dimensional subspace annihilated by β. Then there is a nondegenerate symmetric inner product ⟨ , ⟩ on W defined by

⟨w_1, w_2⟩ = Ω(w_1 ∧ w_2)  for w_1, w_2 ∈ W,

where Ω is regarded as a linear function on Λ^4(V). The (connected) symplectic group Sp(β) ⊂ GL(V) acts on Λ^2(V) preserving W and preserving this inner product. Moreover, an element g ∈ Sp(β) satisfies g(w) = w for all w ∈ W if and only if g = ±I_V, thus defining a double cover Sp(β) → SO(W, ⟨ , ⟩), which is one of the so-called 'exceptional isomorphisms'.
Note that ⟨w, w⟩ = 0 for a nonzero w ∈ W if and only if w is a decomposable 2-vector, i.e., w = v_1∧v_2 for two linearly independent vectors v_1, v_2 ∈ V. Such an element w will be said to be a null vector in W. Define

Q^3 = { [w] ∈ P(W) : ⟨w, w⟩ = 0 }.

Then Q^3 is the null hyperquadric of ⟨ , ⟩. Since ⟨ , ⟩ is nondegenerate, Q^3 is a smooth hypersurface in P(W) ≃ P^4.
If f : S → P(V) is a nondegenerate contact curve, then g = f_2 has image in P(W) and, moreover, since, by construction, g(p) is the projectivization of a decomposable 2-vector for all p ∈ S, it follows that g(S) ⊂ Q^3.
In fact, more is true: Writing f = [F], where F : S → V is meromorphic, letting z : U → C be a local holomorphic coordinate on U ⊂ S, and writing dH = H′ dz for any meromorphic H on S, one obtains

G = F ∧ F′  and  G′ = F ∧ F″,

both of which are decomposable, so that ⟨G, G⟩ = ⟨G, G′⟩ = ⟨G′, G′⟩ = 0. Hence, g : S → Q^3 is a null curve, which, by Proposition 1, is nondegenerate as a curve in P(W).
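As a quick computational sanity check of this correspondence (not part of the original notes), the following Python/sympy sketch verifies that the Klein image of the twisted cubic, which is a contact curve for a suitable β, satisfies the null conditions (4.2); the pairing below implements ⟨w_1, w_2⟩ Ω = w_1 ∧ w_2 on all of Λ^2(C^4).

```python
import itertools
import sympy as sp

z = sp.symbols('z')

# The twisted cubic in P^3 (a contact curve for a suitable beta).
F = sp.Matrix([1, z, z**2, z**3])
Fp = F.diff(z)

def wedge2(a, b):
    """Pluecker coordinates of a ^ b, indexed by pairs i < j."""
    return {(i, j): sp.expand(a[i]*b[j] - a[j]*b[i])
            for i, j in itertools.combinations(range(4), 2)}

def pairing(w1, w2):
    """<w1, w2> defined by w1 ^ w2 = <w1, w2> e0^e1^e2^e3."""
    return sp.expand(sum(sp.LeviCivita(i, j, k, l) * c * d
                         for (i, j), c in w1.items()
                         for (k, l), d in w2.items()))

G = wedge2(F, Fp)                               # the Klein image g = f_2
Gp = {ij: sp.diff(c, z) for ij, c in G.items()}

# g lies in Q^3 and is null: all three pairings of (4.2) vanish identically.
print(pairing(G, G), pairing(G, Gp), pairing(Gp, Gp))   # expect: 0 0 0
```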
The interesting thing is the converse, which is due to Klein:

Proposition 4. If g : S → Q^3 is a null curve that is nonlinear, i.e., its image is not contained in a linear P^1 ⊂ P(W), then g = f_2 for a unique nondegenerate contact curve f : S → P(V).
Proof. The result is local, so write g = [G], where G : U → W is holomorphic and nonvanishing. By hypothesis, G∧G = 0, which, of course, implies that G∧G′ = 0. The condition that g : S → Q^3 be null is then equivalent to G′∧G′ = 0. Thus, G and G′ span a null 2-plane in W. Since G and G′ are each decomposable, while G∧G′ = 0, it follows that they can be written in the form G = F∧H and G′ = F∧K for some meromorphic F, H, K : U → V that are virtually linearly independent, i.e., F∧H∧K vanishes only at isolated points. The key point is to show that F∧F′ does not vanish identically, for then F∧F′ is a multiple of G, so that g = f_2 with f = [F]. Suppose, on the contrary, that F∧F′ did vanish identically. In that case, F = h v_0 for some vector v_0 ∈ V, unique up to multiples, and some meromorphic function h. Thus, we can assume that F = v_0, and so G = v_0∧H. Consequently, G′ = v_0∧H′ and G″ = v_0∧H″. If v_0, H, H′, and H″ were linearly independent on any open set, it would follow that the subspace of W spanned by G, G′, and G″ would be of dimension 3 and totally null (since all of its elements are decomposable), which is impossible since ⟨ , ⟩ is nondegenerate and W has dimension 5. Thus, H″ is a linear combination of v_0, H, and H′. Consequently, the 3-plane spanned by v_0, H, and H′ is constant, implying that the 2-plane in W spanned by G and G′ is constant, which implies that g(S) ⊂ Q^3 is a line in P^4, contrary to hypothesis.
It is now established that g = f 2 where f = [F ] : S → P(V ) is a contact curve, and the uniqueness of f is clear. Since g is not constant, f (S) does not lie in a line in P(V ) and hence, by Proposition 1, g : S → Q 3 is nondegenerate in P(W ).
4.2. Ramifications and degrees. Now suppose that S is a compact (connected)
Riemann surface and that f : S → P(V ) is a holomorphic contact curve that is not contained in a line and that g = f 2 : S → Q 3 ⊂ P(W ) is its Klein-corresponding null curve. The Plücker formula (2.16), coupled with the fact that r 3 (f ) = r 1 (f ), implies that

deg(f ) = 3(1 − k) + r 1 (f ) + (1/2) r 2 (f ),

where k is the genus of S. Note that, in consequence, r 2 (f ) is always even. Meanwhile, Proposition 1 and (2.16) imply

(4.4) deg(g) + 4(k − 1) = r 1 (f ) + r 2 (f ).
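Since these relations are used repeatedly below, here is a sketch of where they come from, assuming the standard generalized Plücker formulas for a nondegenerate curve in P^3 (the degree labels d_0 = deg f, d_1 = deg g, d_2 and the self-duality step are my reconstruction of the omitted computation):

```latex
\[
d_1 = 2d_0 + 2k - 2 - r_1(f), \qquad
d_2 = 2d_1 - d_0 + 2k - 2 - r_2(f).
\]
% For a contact curve, the symplectic form carries the osculating (dual) curve
% back to f itself, so d_2 = d_0 and r_3(f) = r_1(f).  Eliminating d_1 gives
\[
d_0 = \deg(f) = 3(1-k) + r_1(f) + \tfrac12\, r_2(f), \qquad
d_1 = \deg(g) = 4(1-k) + r_1(f) + r_2(f),
\]
% and the first relation forces r_2(f) to be even.
```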
Example 3. The case of most interest in these notes will be when g is unbranched, i.e., r 2 (f ) = 0, and S has genus k = 0, in which case the formulae above reduce to

(4.5) deg(f ) = 3 + r 1 (f ),   deg(g) = 4 + r 1 (f ) = deg(f ) + 1.

These relations will be useful in the sequel.
5. Rational null curves of low degrees
With the above preliminaries out of the way, I can now provide an analysis of the possibilities when f : P 1 → P 3 is a rational contact curve of low degree such that f 2 : P 1 → Q 3 is unbranched.
A contact curve f : P 1 → P(V ) of degree 1 is linear, and likewise a null curve g : P 1 → Q 3 of degree 1 is linear. These linear cases will be set aside from now on.
However, describing all the unbranched null curves in Q 3 of any given degree seems to be a harder problem.
5.1. Degree at most 4. The very lowest possible degrees are easy to treat.
Proposition 5. If f : P 1 → P(V ) is a nonlinear contact curve of degree at most 3, then f (P 1 ) ⊂ P(V ) is a rational normal curve. All contact rational normal curves are symplectically equivalent.
Corollary 1. If g : P 1 → Q 3 is a nonlinear null curve of degree at most 4, then deg(g) = 4 and g = f 2 where f : P 1 → P 3 is a contact rational normal curve. In particular, there are no nonlinear null curves in Q 3 of degree 2 or 3.
5.2. Degree 5. To begin, I classify the nonlinear rational contact curves of degree 4.
Proposition 6. Up to symplectic equivalence, there is only one nonlinear contact curve f : P 1 → P 3 of degree 4. It satisfies R 1 (f ) = 0 and R 2 (f ) = p + q where p, q ∈ P 1 are distinct.
Proof. Let f : P 1 → P 3 be a nonlinear contact curve of degree 4. Then, by (5.1), (r 1 (f ), r 2 (f )) is either (1, 0) or (0, 2), and f can be written in the form

f = [F ],   F (z) = v_0 + z v_1 + z^2 v_2 + z^3 v_3 + z^4 v_4 ,

for five vectors v_0 , . . . , v_4 in C 4 that satisfy one linear relation, where z is a meromorphic function on P 1 that has a single pole. Since f has degree 4, neither v_0 nor v_4 can be zero.
If r 1 (f ) = 1 and r 2 (f ) = 0, then f branches to order 1 at a single point of P 1 , which can be taken to be the pole of z. This implies that v_3 must be a multiple of v_4 . Hence, by replacing z by z + c for an appropriate constant c, it can be assumed that v_3 = 0. Thus, the vectors v_0 , v_1 , v_2 , v_4 are a basis of C 4 . However, when one computes

F ∧ F' = v_0∧v_1 + 2z v_0∧v_2 + z^2 v_1∧v_2 + 4z^3 v_0∧v_4 + 3z^4 v_1∧v_4 + 2z^5 v_2∧v_4 ,

one finds that the six coefficients are exactly the six basis bivectors v_i∧v_j of Λ 2 (C 4 ), so f 2 is linearly full in P(Λ 2 (C 4 )). Thus, such an f cannot be a contact curve with respect to any symplectic structure on C 4 .
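The span claims in this and the following normal-form computations are easy to verify by machine; here is a small sketch (mine, using sympy, with hypothetical basis labels) that encodes bivectors by their Plücker coordinates and checks that the coefficient matrix has full rank. Editing the line defining F lets one check the later degree-5 and degree-6 normal forms the same way.

```python
# Symbolic sanity check: for F(z) = v0 + z v1 + z^2 v2 + z^4 v4 with
# (v0, v1, v2, v4) a basis of C^4, the coefficients of F /\ F' span all of
# Lambda^2(C^4), so f_2 is linearly full and f cannot be a contact curve.
import sympy as sp
from itertools import combinations

z = sp.symbols('z')
e = sp.eye(4)  # columns model the basis (v0, v1, v2, v4)

F  = e[:, 0] + z*e[:, 1] + z**2*e[:, 2] + z**4*e[:, 3]
Fp = F.diff(z)

def wedge(x, y):
    """Pluecker coordinates of x /\\ y in Lambda^2(C^4)."""
    return [sp.expand(x[i]*y[j] - x[j]*y[i]) for i, j in combinations(range(4), 2)]

G = wedge(F, Fp)
# rows = coefficients of z^0, ..., z^8 of the six Pluecker coordinates
M = sp.Matrix([[g.coeff(z, k) for g in G] for k in range(9)])
print(M.rank())  # 6, i.e. the image spans Lambda^2(C^4)
```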
Hence r 1 (f ) = 0 and r 2 (f ) = 2, so either R 2 (f ) = 2·p for a single point p ∈ P 1 , or R 2 (f ) = p + q for two distinct points p, q ∈ P 1 . In the first subcase, assume, without loss of generality, that p is the zero of z. Since

F ∧ F' = v_0∧v_1 + 2z v_0∧v_2 + z^2 (3 v_0∧v_3 + v_1∧v_2) + · · · ,

where the unwritten terms vanish to order 3 or more at z = 0, the assumption that R 1 (f 2 ) = R 2 (f ) = 2·p implies that v_0∧v_2 and 3 v_0∧v_3 + v_1∧v_2 are multiples of v_0∧v_1 . This implies that both v_2 and v_3 lie in the linear span of v_0 and v_1 , which is impossible, since v_0 , v_1 , v_4 cannot span C 4 . Meanwhile, if R 2 (f ) = p + q, where p, q ∈ P 1 are distinct, then we can choose z so that p and q are defined by z = 0 and z = ∞. Since

F ∧ F' = v_0∧v_1 + 2z v_0∧v_2 + · · · + 2z^5 v_2∧v_4 + z^6 v_3∧v_4 ,

where the unwritten terms vanish to order at least 2 at z = 0 and have a pole of at most order 4 at z = ∞, it follows from R 1 (f 2 ) = R 2 (f ) = p + q that v_0∧v_2 is a multiple of v_0∧v_1 and v_2∧v_4 is a multiple of v_3∧v_4 . In particular, v_2 must be both a linear combination of v_0 and v_1 and a linear combination of v_3 and v_4 . Now, this can only happen if v_2 = 0, since v_0 , v_1 , v_3 , v_4 must be a basis of C 4 . Thus,

F (z) = v_0 + z v_1 + z^3 v_3 + z^4 v_4 ,

which implies

F ∧ F' = v_0∧v_1 + 3z^2 v_0∧v_3 + z^3 (4 v_0∧v_4 + 2 v_1∧v_3) + 3z^4 v_1∧v_4 + z^6 v_3∧v_4 .

Thus, f 2 is linearly full in P(W ) ≃ P 4 , where W ⊂ Λ 2 (C 4 ) is the 5-dimensional subspace annihilated by the symplectic form

β = v_0^*∧v_4^* − 2 v_1^*∧v_3^* .

Thus, f is a contact curve with respect to the contact structure on P 3 defined by β. The uniqueness of f up to symplectic equivalence is now clear.
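As a cross-check of the last step (the explicit β given above is my reconstruction; it is pinned down, up to scale, by the requirement that it annihilate the displayed bivectors):

```latex
% W is spanned by v_0\wedge v_1, v_0\wedge v_3, 4v_0\wedge v_4 + 2v_1\wedge v_3,
% v_1\wedge v_4, v_3\wedge v_4.  With \beta = v_0^*\wedge v_4^* - 2 v_1^*\wedge v_3^*:
\[
\beta\bigl(4\,v_0\wedge v_4 + 2\,v_1\wedge v_3\bigr) \;=\; 4\cdot 1 - 2\cdot 2 \;=\; 0,
\]
% while \beta pairs to zero with the other four spanning bivectors for index
% reasons; and \beta\wedge\beta = -4\, v_0^*\wedge v_4^*\wedge v_1^*\wedge v_3^*
% \neq 0, so \beta is indeed a nondegenerate symplectic form annihilating W.
```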
Corollary 2.
There is no nonlinear null curve g : P 1 → Q 3 of degree 5.
Proof. If such a curve g existed, it would be of the form g = f 2 where f : P 1 → P 3 would be a nonlinear contact curve with ramification degrees r 1 (f ) and r 2 (f ). Now

r 1 (f ) + r 2 (f ) = deg(g) − 4 = 1,

which, since r 2 (f ) must be even, implies that r 1 (f ) = 1 and r 2 (f ) = 0. Hence deg(f ) = 3 + r 1 (f ) + (1/2) r 2 (f ) = 4. However, Proposition 6 shows that the only nonlinear contact curve f : P 1 → P 3 of degree 4 has r 1 (f ) = 0 and r 2 (f ) = 2.
Thus, such a g does not exist.
5.3. Degree 6. Now, I will classify the nonlinear rational null curves of degree 6.
Proposition 7. Up to projective equivalence, there are only two nonlinear null curves g : P 1 → Q 3 of degree 6. One of these is unbranched, and the other has two distinct branch points, each of order 1.
Proof. Let g : P 1 → Q 3 be a nonlinear null curve of degree 6 and let f : P 1 → P 3 be the Klein-corresponding nonlinear contact curve. From the formulae above,

r 1 (f ) + r 2 (f ) = deg(g) − 4 = 2.

Since r 2 (f ) is even, there are two possibilities: (r 1 (f ), r 2 (f )) = (0, 2) or (2, 0). If (r 1 (f ), r 2 (f )) = (0, 2), then deg f = 3 + 0 + 1 = 4, and, by Proposition 6, this f is unique up to symplectic equivalence. In this case, since R 1 (g) = R 2 (f ) = p + q where p, q ∈ P 1 are distinct, g = f 2 has two branch points of order 1. If (r 1 (f ), r 2 (f )) = (2, 0), then deg f = 3 + 2 + 0 = 5 and g = f 2 is unbranched; in this case, either R 1 (f ) = 2·p for a single point p, or R 1 (f ) = p + q for two distinct points p, q ∈ P 1 .
In the special case when R 1 (f ) = 2·p, choose a meromorphic function z on P 1 that has a single pole at p, and write

F (z) = v_0 + z v_1 + z^2 v_2 + z^3 v_3 + z^4 v_4 + z^5 v_5 ,

where v_0 , . . . , v_5 span C 4 and v_0 and v_5 are not zero. The condition R 1 (f ) = 2·p implies that v_3 and v_4 are multiples of v_5 , and so, by replacing z by z + c for an appropriate constant, it can be assumed that v_4 = 0, so that

F (z) = v_0 + z v_1 + z^2 v_2 + (a z^3 + z^5) v_5 ,

where a is a constant. Thus,

F ∧ F' = v_0∧v_1 + 2z v_0∧v_2 + z^2 (3a v_0∧v_5 + v_1∧v_2) + 2a z^3 v_1∧v_5 + z^4 (5 v_0∧v_5 + a v_2∧v_5) + 4z^5 v_1∧v_5 + 3z^6 v_2∧v_5 .

By inspection, whatever the value of a, the seven coefficients of z^k in this expression span the entire 6-dimensional space Λ 2 (C 4 ). Thus, f is not a contact curve for any symplectic structure on C 4 .
Supposing, instead, that R 1 (f ) = p + q, where p, q ∈ P 1 are distinct, let z be a meromorphic function on P 1 with a simple pole at p and a zero at q. Then f takes the form

F (z) = (1 + a z) v_0 + z^2 v_2 + z^3 v_3 + (b z^4 + z^5) v_5 ,

for some constants a and b, where (v_0 , v_2 , v_3 , v_5 ) is a basis of C 4 . Thus,

F ∧ F' = (2z + a z^2) v_0∧v_2 + (3z^2 + 2a z^3) v_0∧v_3 + (4b z^3 + (5 + 3ab) z^4 + 4a z^5) v_0∧v_5 + z^4 v_2∧v_3 + (2b z^5 + 3z^6) v_2∧v_5 + (b z^6 + 2z^7) v_3∧v_5 .

By inspection, whenever either a or b is nonzero, g = f 2 is linearly full in Λ 2 (C 4 ), and, hence, f is not contact for any symplectic structure on C 4 . Meanwhile, if a = b = 0, then the formula for g simplifies to

G(z) = 2 v_0∧v_2 + 3z v_0∧v_3 + z^3 (5 v_0∧v_5 + v_2∧v_3) + 3z^5 v_2∧v_5 + 2z^6 v_3∧v_5

(after dividing F ∧ F' by the common factor z), so that g(P 1 ) is linearly full in the 5-dimensional subspace W ⊂ Λ 2 (C 4 ) that is annihilated by the symplectic form

β = v_0^*∧v_5^* − 5 v_2^*∧v_3^* .

Hence, f is contact and g : P 1 → Q 3 ⊂ P(W ) is an unbranched (since r 2 (f ) = 0) null curve of degree 6. This argument establishes the uniqueness up to projective equivalence of such an f : P 1 → P 3 of degree 5 with R 1 (f ) = p + q and R 2 (f ) = 0, and hence the uniqueness up to equivalence of an unbranched null curve g : P 1 → Q 3 of degree 6.
Remark 4 (Reducibility of a moduli space). Note that the two corresponding rational contact curves are

f (z) = [v_0 + z v_1 + z^3 v_3 + z^4 v_4],

which has g = f 2 branched at z = 0 and z = ∞, and

f (z) = [v_0 + z^2 v_2 + z^3 v_3 + z^5 v_5],

which has g = f 2 unbranched, though f itself is branched at z = 0 and z = ∞.
In each case, the projective subgroup H ⊂ SL(4, C) that stabilizes f has dimension 1 (and has two components). Consequently, the moduli space of such contact curves for a given symplectic structure β is of the form Sp(β)/H and hence has dimension 9.
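The dimension count behind this is the usual orbit computation (recalled here for convenience):

```latex
\[
\dim \mathrm{Sp}(\beta) = \dim \mathrm{Sp}(4,\mathbb{C}) = 10,
\qquad
\dim\bigl(\mathrm{Sp}(\beta)/H\bigr) = 10 - \dim H = 10 - 1 = 9 .
\]
```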
Thus, the moduli space of nonlinear rational null curves in Q 3 of degree 6 is disconnected. Even when compactified using geometric invariant theory, this moduli space will necessarily be reducible, being the union of two irreducible varieties of dimension 9.

5.4. Degree 7. Finally, we treat the unbranched case in degree 7.
Proposition 8. There is no unbranched nonlinear null curve g : P 1 → Q 3 of degree 7.
Proof. Suppose that an unbranched nonlinear null curve g : P 1 → Q 3 of degree 7 exists and let f : P 1 → P 3 be the Klein-corresponding contact curve. By the formulae (4.5) of Example 3, it follows that f has degree 6 and satisfies r 1 (f ) = 3 and r 2 (f ) = 0. There are three cases to consider, depending on the structure of R 1 (f ).
First, suppose that R 1 (f ) = 3·p for some p ∈ P 1 . Choose a meromorphic z on P 1 with a simple pole at p (and no other poles). Then f takes the form

F (z) = v_0 + z v_1 + z^2 v_2 + z^3 v_3 + z^4 v_4 + z^5 v_5 + z^6 v_6 ,

for some v_0 , . . . , v_6 ∈ C 4 with v_0 and v_6 nonzero. Since R 1 (f ) = 3·p, it follows that v_3 , v_4 , and v_5 are multiples of v_6 . By replacing z by z + c for some constant c, I can arrange that v_5 = 0, so I do that. Then f takes the form

F (z) = v_0 + z v_1 + z^2 v_2 + (a z^3 + b z^4 + z^6) v_6 ,

for some constants a and b, where (v_0 , v_1 , v_2 , v_6 ) are a basis for C 4 . Then

(5.6) F ∧ F' = v_0∧v_1 + 2z v_0∧v_2 + z^2 (3a v_0∧v_6 + v_1∧v_2) + z^3 (4b v_0∧v_6 + 2a v_1∧v_6) + z^4 (3b v_1∧v_6 + a v_2∧v_6) + z^5 (6 v_0∧v_6 + 2b v_2∧v_6) + 5z^6 v_1∧v_6 + 4z^7 v_2∧v_6 .

By inspection, the coefficients of the different powers of z span Λ 2 (C 4 ), no matter what the values of a and b. Hence, this curve is linearly full in P(Λ 2 (C 4 )), and this f is not contact for any symplectic structure on C 4 . Second, suppose that R 1 (f ) = 2·p + q for some p, q ∈ P 1 that are distinct. Choose a meromorphic z on P 1 with a simple pole at p and a simple zero at q. Then, because R 1 (f ) = 2·p + q, it follows that f can be written in the form

F (z) = (1 + a z) v_0 + z^2 v_2 + z^3 v_3 + (b z^4 + c z^5 + z^6) v_6 ,

for some constants a, b, and c and vectors v_0 , v_2 , v_3 , v_6 that form a basis of C 4 . Then, computing F ∧ F' and looking at the coefficients of the 0-th, 1-st, 7-th, and 6-th powers of z (after removing the common factor z), it follows that f 2 lies linearly fully in a space W ⊂ Λ 2 (C 4 ) that contains v_0∧v_2 , v_0∧v_3 , v_3∧v_6 , and v_2∧v_6 , no matter what the values of a, b, and c. The space W must also contain the elements in the set

{ 4b v_0∧v_6 , (6 + 4ac) v_0∧v_6 , 5a v_0∧v_6 , (5c + 3ab) v_0∧v_6 + v_2∧v_3 }.

No matter what the values of a, b, and c are, the first three elements will span the multiples of v_0∧v_6 , and this, combined with the fourth element, will force W to contain v_2∧v_3 as well. Thus W = Λ 2 (C 4 ), implying that f 2 is nondegenerate, which is impossible if f is to be a contact curve. Thus, this case is also impossible. Third, and finally, suppose that R 1 (f ) = p + q + s where p, q, s ∈ P 1 are distinct. Let z be the meromorphic function on P 1 that has a pole at p, a zero at q, and satisfies z(s) = 1. (This uniquely specifies z.) Then f can be written in the form

F (z) = v_0 + z v_1 + z^2 v_2 + z^3 v_3 + z^4 v_4 + z^5 v_5 + z^6 v_6 ,

where v_0 , . . . , v_6 span C 4 and v_0 and v_6 are nonzero. Moreover, because p and q are branch points of f , it follows that v_0∧v_1 = v_5∧v_6 = 0, so that we must actually have

F (z) = (1 + a z) v_0 + z^2 v_2 + z^3 v_3 + z^4 v_4 + (b z^5 + z^6) v_6 ,

for some constants a and b. Moreover, there can be only one linear relation among the 5 vectors v_0 , v_2 , v_3 , v_4 , v_6 . Also, because f must have degree 6, we cannot have F (z_0 ) = 0 for any z_0 ∈ C, since then F (z)/(z − z_0 ) would be a curve of degree 5, forcing f to have degree at most 5.
Now, it turns out that it greatly simplifies the argument below to make a change of basis so that F is written in a form normalized at the point z = 1, as can clearly be done. The reason this is useful is that the condition that f have a branch point at s, which is where z = 1, is equivalent to the condition that F (1)∧F'(1) = 0, and computation now shows that, in this normalization, F (1)∧F'(1) is a multiple of v_2∧v_4 . Hence v_2 and v_4 must be linearly dependent. Since there can only be one linear relation among the 5 vectors {v_0 , v_2 , v_3 , v_4 , v_6 }, it follows that v_2 and v_4 must be multiples, not both zero, of a single vector. Consequently, after a renaming and a choice of two numbers p and q, not both zero, we can write F in the form (5.12), where now v_0 , v_2 , v_3 , v_6 are a basis of C 4 , and a, b, p, and q are constants, with p and q not both zero. Now, let g = f 2 = [G(z)], where G(z) = G_0 + z G_1 + · · · + z^7 G_7 . We must determine the conditions on a, b, p, and q in order that g(P 1 ) not lie linearly fully in P(Λ 2 (C 4 )), which is that the eight vectors G_0 , . . . , G_7 in Λ 2 (C 4 ) should span a vector space of dimension at most 5. Let B_1 = v_0∧v_2 , B_2 = v_0∧v_3 , B_3 = v_0∧v_6 , B_4 = v_2∧v_3 , B_5 = v_2∧v_6 , and B_6 = v_3∧v_6 . Then B_1 , . . . , B_6 form a basis of Λ 2 (C 4 ). Thus, there is an 8-by-6 matrix M_aj such that G_a = Σ_j M_aj B_j . Calculation now yields M explicitly. Now, in order that g not be branched at z = 0, we must have G_0 and G_1 linearly independent, i.e., the first two rows of M must be of rank 2, and inspection shows that this requires that at least one of p and b+2 be nonzero. Similarly, because g is not branched at z = ∞, at least one of q and a+2 must be nonzero.
In order for the rank of M to be at most 5, all of the 6-by-6 minors of M must be zero. By computation, the determinant of the first 6 rows is −48((3p+q)b + 4p + 2q)^3, while the determinant of the last 6 rows is −48((p+3q)a + 2p + 4q)^3. Thus, we must have (3p+q)b + 4p + 2q = (p+3q)a + 2p + 4q = 0. Recall that p and q cannot simultaneously vanish. It is now apparent that 3p+q cannot be zero either, since the above equations would then imply that 4p+2q = 0, forcing p = q = 0, which cannot happen. Similarly, p+3q cannot be zero. Thus, we can solve for a and b in the form

a = −(2p + 4q)/(p + 3q),   b = −(4p + 2q)/(3p + q).

From these formulae, we can see that, if q were zero, then a would be −2, but q = a+2 = 0 is not allowed. Hence q is nonzero. Similarly, p must be nonzero. Finally, computing the determinant of the 6-by-6 minor of M obtained by deleting the third and sixth rows of M yields

8640 p q (p + q)^3 / ((p + 3q)(3p + q)) .
Consequently, since p and q cannot be zero, it must be that p + q = 0, which implies that a = −1 and b = −1. Further, by scaling v_2 , we can arrange that p = 1 and q = −1. Thus, the only possibility for f = [F ] is the curve (5.13) obtained by setting a = b = −1, p = 1, and q = −1 in (5.12). However, since F (1) = 0 for this curve, it follows that f = [F (z)/(z−1)] can only have degree 5 at most. This contradiction shows that the desired f does not exist. Hence, there is no unbranched null curve g : P 1 → Q 3 of degree 7, as claimed.
Remark 5. There does exist a branched nonlinear null curve g : P 1 → Q 3 of degree 7.
The corresponding contact curve f : P 1 → P 3 can be written explicitly in terms of a meromorphic parameter z on P 1 and a basis v 0 , v 1 , v 4 , v 5 of C 4 . This f satisfies R 1 (f ) = s, where z(s) = 1, and R 2 (f ) = p + q, where p is the pole of z and q is the zero of z.
It can be shown [2] that, up to projective equivalence, this is the unique contact curve f : P 1 → P 3 with r 1 (f ) = 1 and r 2 (f ) = 2.
Since any nonlinear null curve g : P 1 → Q 3 of degree 7 must satisfy r 1 (f ) + r 2 (f ) = 7 − 4 = 3 and since r 2 (f ) must be even, it follows that such a curve, which must be branched by Proposition 8, must have r 1 (f ) = 1 and r 2 (f ) = 2.
Hence all of the nonlinear rational null curves g : P 1 → Q 3 of degree 7 form a single Sp(2, C)-orbit of dimension 10.
Fermion masses and quantum numbers from extra dimensions
We study the localization of fermions on a brane embedded in a space-time with $AdS_n \times M^k$ geometry. Quantum numbers of localized fermions are associated with their rotation momenta around the brane. Fermions with different quantum numbers have different higher-dimensional profiles. Fermion masses and mixings, which are proportional to the overlap of higher-dimensional profiles of the fermions, depend on the fermion quantum numbers.
The problem of explaining the hierarchy of fermion masses and mixings is one of the long-standing problems of particle physics, which cannot be resolved within the standard model [1]. This problem becomes especially interesting in the light of recent results on nonzero neutrino masses [2]. There exist different approaches to explaining the hierarchy of fermion masses, such as models with an additional "horizontal" symmetry [3], or models where the hierarchy is generated by the "seesaw" mechanism [4]; the latter class of models is particularly popular for explaining the smallness of neutrino masses. Apart from the seesaw mechanism, the smallness of neutrino masses can be related to the possible existence of large extra dimensions [5]. The idea is to assume that the right-handed sterile neutrino can propagate in the higher-dimensional bulk while the left-handed neutrinos are confined to a four-dimensional brane, so that the higher-dimensional profiles of left and right neutrinos have small overlap. Actually, it is possible to relate the whole hierarchy of fermion masses and mixings to the differences in the overlaps of higher-dimensional fermion profiles [6,7].
What can be the reason for different fermions to have different higher-dimensional profiles? What mechanism can be responsible for the fact that a fermion which is neutral with respect to the SU(3) × SU(2) × U(1) group of the standard model is not localized on the brane?
If a brane is embedded as a surface in a higher-dimensional space, matter fields bound to the brane are naturally classified by the values of their momenta of rotation around the brane. Indeed, in quite general settings, the space-time around the brane possesses a rotation symmetry. For example, in the case of just two extra dimensions the metric around the brane has the form

ds 2 = e ν(ρ) η µν dx µ dx ν + e λ(ρ) dρ 2 + e µ(ρ) dy 2   (1.1)

where η µν = diag(−1, 1, 1, 1) is the four-dimensional Minkowski metric and the functions ν(ρ), λ(ρ), µ(ρ) are determined from the Einstein equations. The coordinate ρ measures the distance from the brane placed at ρ = 0 and the coordinate y is periodic, y ∈ [0, 2πR y ). The metric (1.1) possesses a U(1) y symmetry of rotations in the y direction,

y → y + const   (1.2)

As in usual quantum mechanics, matter fields Ψ bound to the brane are characterized by different values q of the rotation momentum in the y direction,

−i ∂Ψ q /∂y = (q/R y ) Ψ q .

From the four-dimensional point of view, this rotation momentum is a quantum number of observable four-dimensional particles. The profiles Ψ q (ρ, y) ("wave functions") of the bound states depend on the values of the rotation momentum. This means that particles with different quantum numbers q have different higher-dimensional profiles [8].
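Concretely, this is the familiar Kaluza-Klein mode expansion on the circle; the sketch below (with my normalization conventions) shows why q is an integer quantum number:

```latex
\[
\Psi(x,\rho,y) \;=\; \sum_{q\in\mathbb{Z}} \Psi_q(x,\rho)\, e^{iqy/R_y},
\qquad
-i\,\partial_y \Psi_q \;=\; \frac{q}{R_y}\,\Psi_q ,
\]
% single-valuedness of \Psi on the circle y \in [0, 2\pi R_y) forces the
% rotation momentum q to take integer values.
```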
In what follows we consider branes embedded into higher-dimensional spaces with (asymptotic) AdS n × M k geometry of the direct product of an n-dimensional anti-deSitter space and a compact manifold M k . Such spaces arise naturally in Freund-Rubin compactifications [9] of higher-dimensional supergravities [10] and have received considerable attention recently, after the discovery of the AdS/CFT correspondence [11] and the observation that gravity can be localized on a brane in five-dimensional [12] or higher-dimensional [13] anti-deSitter space.
In Section II we show that in the six-dimensional space (1.1) fermion zero modes which are charged with respect to the U(1) y group of rotations in the y direction (1.2) can be localized on the brane, while the state neutral with respect to U(1) y propagates in the bulk. In Section III we discuss the breaking of the rotation symmetry U(1) y (1.2) and derive a formula relating the effective four-dimensional mass of a localized fermion state to the charges q L and q R of its left- and right-handed components. We also consider the case when the brane is embedded in a seven-dimensional space with AdS 7 geometry. In this case localized fermions carry U(1)×U(1) charges. The extra U(1) symmetry can be associated with the "horizontal" symmetry [3] needed to distinguish between fermions from different generations. We show that in this model the masses of localized fermions are arranged hierarchically. In Section IV we consider a more complicated model where the localized fermions carry SU(2) × U(1) quantum numbers. In this case the brane is embedded into a space-time which becomes the direct product of an AdS space with a two-dimensional sphere S 2 far away from the brane. We find the higher-dimensional profiles of the SU(2) singlet and doublet states localized on the brane. If the rotation symmetry SU(2) is broken, effective four-dimensional mixing between singlet and doublet states becomes nonzero and one of the doublet components becomes massive. The other component ("left-handed neutrino") is not mixed with the states localized on the brane and, therefore, its mass is naturally small.

Let us consider a brane embedded in the space-time (1.1). Suppose that all the matter fields are localized in a region 0 ≤ ρ ≤ ρ 0 while the metric outside the brane, ρ 0 < ρ < ∞, is a solution of the vacuum Einstein equations, possibly with a cosmological constant term.
In the simplest case the bulk metric is isometric to the metric of the six-dimensional anti-deSitter space AdS 6 ,

ds 2 = 1/(κρ) 2 (η µν dx µ dx ν + dy 2 + dρ 2 ),   (2.1)

where κ is the inverse curvature radius of the anti-deSitter space. The massless modes of a higher-dimensional Dirac field can be naturally localized on the brane in a space-time (1.1) with nontrivial warp factors ν(ρ), λ(ρ), µ(ρ) [8]. In order to see this, let us consider the higher-dimensional Dirac equation

Γ A D A Ψ = 0.

The six-dimensional gamma matrices Γ A are defined with the help of the vielbein E Â B and the flat-space gamma matrices ΓÂ (the indexes with a hat are six-dimensional Lorentz indexes). The covariant derivative is defined as

D A = ∂ A + (1/2) ω A B̂Ĉ σ B̂Ĉ ,

where ω A B̂Ĉ is the spin connection expressed through the vielbein E Â B and σ B̂Ĉ = (1/4)[Γ B̂ , Γ Ĉ ]. We can expand the solutions over the states with fixed rotation momentum q y in the y direction, Ψ ∝ F (ρ) exp(iq y y/R y ) (F is a two-component spinor). Taking the two gamma matrices Γρ̂ and Γŷ in a suitable off-diagonal representation, we find that the two-component spinor F = (f, g) satisfies a pair of coupled first-order equations. Thus, if q y > 0 and the bulk metric is isometric to the anti-deSitter metric (2.1), the resulting normalizable solution (2.15) describes a fermion state bound to the brane. (We postpone the discussion of the behavior of the solutions of the Dirac equation in the core of the brane, 0 ≤ ρ ≤ ρ 0 , till Section IV.) We have normalized the solution (2.15) with respect to the natural scalar product (2.16). The rotation momentum q y is, in fact, a charge of the localized fermion mode with respect to the Kaluza-Klein gauge field A µ which corresponds to the U(1) y symmetry (1.2) of the metric (1.1). Indeed, the metric (1.1) possesses the Killing vector ∂/∂y, which, according to the Kaluza-Klein mechanism, leads to the existence of a gauge field A µ in the effective four-dimensional theory. This gauge field arises as a nondiagonal component of the higher-dimensional gravitational perturbations. The zero mode of the field A µ can be localized on the brane [8]. In this case A µ can correspond to an observable U(1) gauge field and the charge q y can be related to an observable quantum number of the standard model fermions. Note that the U(1) y -neutral state with q y = 0 ("sterile neutrino") is not localized on the brane.
III. BREAKING OF U(1) ROTATION SYMMETRY AND GENERATION OF FERMION MASSES.
We are interested in mixings of the form (3.1) between differently charged fermions Ψ q 1 and Ψ q 2 ; here O q 1 q 2 is a matrix with indexes which run through all fermion species (in our example through all possible q y ) and f is a constant. Let us consider (3.1) in more detail. The higher-dimensional profiles (2.15) of the localized fermion zero modes Ψ q depend on their charges q y . The mixing (3.1) between modes with charges q 1 and q 2 is proportional to the integral over the extra dimensions of the overlap of the profiles of Ψ q 1 and Ψ q 2 . This integral, in turn, includes the integral over the circle S 1 parameterized by the coordinate y. Substituting the profiles (2.12) into (3.1), we find that this integral vanishes if q 1 ≠ q 2 , since the modes with different q y are orthogonal to each other. Thus, when the symmetry of rotations around the brane is not broken, the mixings between the modes with different charges vanish. Suppose that the symmetry of rotations y → y + const is broken by some mechanism. We do not discuss here different possibilities for the particular mechanism of symmetry breaking in the context of theories with extra dimensions (see, for example, [14]). The symmetry breaking results in the appearance of a (fundamental or effective) Higgs field which has nonzero U(1) y charge p,

H = H p (ρ) exp(ipy/R y ).   (3.3)

The Higgs field is coupled to the higher-dimensional Dirac fields through a Yukawa term (3.4). Substituting the profiles (2.15), (3.3) into (3.4), we find that the mixing between the modes Ψ q 1 , Ψ q 2 with charges q 1 , q 2 such that

q 2 = q 1 + p

does not vanish.
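The selection rule follows from the y-integration alone; the following display (my schematic rendering, using the mode conventions above) makes it explicit:

```latex
\[
\int_0^{2\pi R_y} e^{-iq_2 y/R_y}\; e^{ip\,y/R_y}\; e^{iq_1 y/R_y}\, dy
\;=\; 2\pi R_y\;\delta_{q_2,\; q_1+p},
\]
% so the Yukawa coupling of the charge-p Higgs mixes \Psi_{q_1} with \Psi_{q_2}
% only when q_2 = q_1 + p; for unbroken U(1)_y (p = 0) different charges never mix.
```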
Using the decomposition of the fermion field (2.9), we can write the mixing (3.4) in the form of an overlap integral (3.7) over the extra dimensions. The mixing M q 1 q 2 can be naturally small if the higher-dimensional profiles of the fermions Ψ q 1 , Ψ q 2 have small overlap with each other, or with the Higgs profile H p . The Dirac mass of a fermion is the mixing between its left- and right-handed components Ψ L and Ψ R . If the charges of Ψ L , Ψ R are respectively q L and q R = q L + p, we get from (3.7)

M D = f ∫ dρ dy e (λ−ν)/2 H p (ρ) F q L (ρ) F q R (ρ).   (3.8)

Since we do not discuss the details of the symmetry breaking, we cannot calculate the Higgs profile H p (ρ). Let us consider two extreme possibilities. If the profile H p (ρ) is "smooth", we get, substituting the profiles (2.15) into (3.8) and performing the integral, an expression involving the incomplete Gamma-function Γ(0, x). (We have supposed that the space-time metric outside the brane is the anti-deSitter metric (2.1); ρ 0 is the thickness of the brane core.) If the brane thickness ρ 0 is much larger than R y , we get the approximate expression (3.11), for which the masses of fermions with different charges are quasidegenerate. If the profile H p (ρ) of the Higgs field is "sharp", that is, peaked at a distance ρ h from the center of the brane as in (3.12), we get from (3.8) the estimate (3.13). In this case, if ρ h − ρ 0 = (several) R y , the masses of particles with different q L , q R are arranged hierarchically. The fact that a sharp higher-dimensional profile of the Higgs field leads to a hierarchical structure of four-dimensional fermion masses was noted in [7,15]. As a simple generalization of the above model, let us consider a brane embedded in a seven-dimensional space with AdS 7 geometry,

ds 2 = 1/(κρ) 2 (η µν dx µ dx ν + dy 2 + dz 2 + dρ 2 ).   (3.14)

The seven-dimensional anti-deSitter space can be obtained after compactification of eleven-dimensional supergravity [9]. Suppose that, as in the above example (1.1), the coordinate ρ counts the distance from the brane and the coordinates y and z are periodic with the periods 2πR y , 2πR z . The symmetry group of rotations around the brane is now U(1) y × U(1) z , the product of rotations in the y and z directions. The fermions bound to the brane possess two quantum numbers q y , q z , which are the rotation momenta in the y and z directions. The Dirac equation for zero modes, Γμψ ,µ = 0, reduces to a pair of first-order equations analogous to the six-dimensional case, and it has normalized solutions labelled by (q y , q z ). The Dirac mass M D , which is the mixing between the modes Ψ q yL ,q zL and Ψ q yR ,q zR , is given by the integral over the extra dimensions of the overlap of the profiles of the left- and right-handed components with the profile of the Higgs field (see (3.7)). In the case of the sharp profile (3.12) of the Higgs field, M D is proportional to exponentially small overlap factors (3.19) (compare with (3.13)). If the scales R y , R z and (ρ h − ρ 0 ) are arranged as R y ∼ (several) R z , (ρ h − ρ 0 ) ∼ (several) R z , the masses of particles with different q zL , q zR go as different powers of a small parameter,

M D ∝ ǫ^(q zL + q zR ),   (3.20)

where ǫ is

ǫ = exp(−(ρ h − ρ 0 )/R z ).   (3.21)

The rotation symmetry U(1) z can be identified with the "horizontal" symmetry [3] which is introduced in some approaches to the fermion mass hierarchy and enables one to distinguish between fermions from different generations. The mass hierarchy of Eq. (3.20) is similar to the formula for the fermion masses derived in the models with the horizontal symmetry.
In order to get a realistic pattern of masses of the standard model fermions, the parameter ǫ must have the numerical value

ǫ ≈ 0.049,   (3.22)

which means that, if the thickness of the brane ρ 0 is negligibly small, the relation between the radius of the Higgs orbit r h and the size of the circle S 1 parameterized by the coordinate z must be

r h ≈ 3 R z .   (3.23)

From (3.19) one can see that the masses of fermions with different charges q yL , q yR are quasidegenerate.
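As a quick numerical illustration of the hierarchy (the profiles and numbers below are my toy choices consistent with the scales quoted in the text, not the paper's exact solutions):

```python
# With zero-mode profiles ~ exp(-q * rho / R) and a Higgs profile sharply
# peaked at rho_h, the overlap integral scales like eps**(qL + qR) with
# eps = exp(-(rho_h - rho0)/R), so masses are hierarchical in the charges.
import math

rho0, rho_h, R = 0.0, 3.02, 1.0           # brane core, Higgs peak, circle radius
eps = math.exp(-(rho_h - rho0) / R)
print(f"eps = {eps:.3f}")                  # ~0.049, the value quoted in (3.22)

for qL, qR in [(1, 1), (1, 2), (2, 2), (2, 3)]:
    # each unit of total charge costs one extra power of the small parameter
    print(qL, qR, eps**(qL + qR))
```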
Up to now we have considered models in which fermions carry only U(1) charges. If we want to include SU(2) ∼ SO(3) as a symmetry group of rotations around the brane, a natural generalization of the model of the previous sections would be to consider a brane embedded in an eight-dimensional space,

ds 2 = e ν(r) η µν dx µ dx ν + e µ(r) dy 2 + e λ(r) dr 2 + r 2 (dθ 2 + sin 2 θ dφ 2 ),   (4.1)

that is, to add two more extra dimensions (θ, φ) with the geometry of a two-dimensional sphere.
In the previous sections we have systematically abandoned the discussion of the behavior of the fermion zero modes in the core of the brane. At the same time, if the brane is considered as a topological defect in a higher-dimensional space-time, the analysis of the behavior of fermion zero modes in the core of the defect is important, because the requirement of regularity of the fermion profiles in the core can impose restrictions on the number of normalizable zero modes bound to the brane [16]. In this section we will extend the analysis of fermion zero modes to the brane core. The topology of the core of the brane embedded into the eight-dimensional space-time (4.1) depends on the behavior of the warp factors ν(r), λ(r), µ(r) in the limit r → 0 [17]. We consider the situation when

ν(r), µ(r), λ(r) → 0, r → 0,   (4.2)

so that the metric is regular at the location of the brane core. The solutions of the eight-dimensional Dirac equation can be expanded over the eigenstates of the total angular momentum

J = r × p + (1/2) σ.   (4.3)

Here r × p is the usual angular momentum in the three-dimensional space (r, θ, φ) and (1/2)σ is the spin operator (σ i are the Pauli matrices). If we relate the SU(2) symmetry of the space-time (4.1) to the SU(2) L gauge group of the standard model, we face immediately the following difficulty. It is known that the right-handed fermions of the standard model are singlet representations of SU(2) L . But the lowest possible value of the angular momentum (4.3) is j = 1/2, and the state with the lowest angular momentum is a doublet representation of SU(2). This difficulty can be resolved if we suppose that the space-time (4.1) is obtained as a result of a Freund-Rubin compactification [9] of a higher-dimensional theory. For example, the space-time with the geometry of the direct product AdS 6 × S 2 of anti-deSitter space with a two-sphere of radius R (4.5) can be obtained as a solution of the eight-dimensional Einstein-Maxwell equations if we introduce a vector field A M in a topologically nontrivial monopole-like configuration in the eight-dimensional bulk,

A φ = m (1 − cos θ).   (4.6)

The metric (4.5) can describe the space-time geometry outside the brane, r 0 < r, or asymptotically when r → ∞, while in the core of the brane, 0 ≤ r ≤ r 0 , the metric is a solution of the Einstein-Maxwell equations coupled to other matter fields, and the asymptotic behavior of the metric in the r → 0 limit is given by (4.2). If the higher-dimensional spin-1/2 field Ψ has nonzero charge e with respect to A M , the conserved total angular momentum is

J = r × p + (1/2) σ − e m r̂   (4.7)

rather than (4.3). The lowest possible value of the angular momentum is

j min = |em| − 1/2,   (4.8)

and if |em| = 1/2 the state with the lowest angular momentum is the SU(2) singlet. The properties of the fermion zero modes in the model with the monopole-like configuration (4.6) of the vector field A M on the two-sphere S 2 were first considered in [18]. In fact, the existence of a fundamental vector field A M in the eight-dimensional bulk is not a necessary condition for the existence of SU(2) singlets in the spectrum of fermionic modes. If we consider a slightly more complicated Ansatz (4.9) for the background metric in the eight-dimensional space-time, in which the Kaluza-Klein shift dy → dy + A φ dφ is introduced and A φ is given by (4.6), the conserved angular momentum is again given by (4.7).
The only difference between the above two ways of introducing the monopole-like configuration (4.6) in the higher-dimensional theory is that in the Kaluza-Klein case (4.9) the Dirac field has a nonminimal coupling to the field A M [19], see (4.10), and the charge e is expressed through the rotation momentum in the y direction,

e = q y R y .   (4.11)

In both cases (in the space-time (4.1) with the fundamental vector field A M or in the space-time (4.9) with the Kaluza-Klein vector field A M ) the Dirac equation for the fermion zero modes (ΓμΨ ,µ = 0) reduces to the problem (4.12) (we have introduced an arbitrary coefficient k in front of the nonminimal coupling term in order to be able to analyze the cases with and without the nonminimal coupling (4.10) simultaneously). It is convenient to write Ψ in the form (4.13), where F is a four-component spinor. From (4.12) we find that F is a zero-energy solution of the four-dimensional Dirac equation (4.14) with the Hamiltonian (4.15), where α and β are the standard Dirac matrices (4.16). The Hamiltonian (4.15) coincides with the Hamiltonian of a Dirac particle in a central field superposed with the field of a magnetic monopole (4.6). The last term in the Hamiltonian (4.15) describes an extra magnetic moment of the Dirac particle. The bound states of Dirac particles with an extra magnetic moment in the field of a magnetic monopole were studied by Kazama and Yang [20]. The state with the lowest value (4.8) of the total angular momentum (4.7) and its projection J 3 = m on some axis is described by the spinor (4.18), where η |em|,j,m is a two-component spinor and Y q,j,m (θ, φ) are the monopole harmonics [21]. Substituting (4.18) into (4.14), we find that the functions f (r), g(r) satisfy a system of two first-order equations, whose bound-state solution (4.21) we are interested in (C is a normalization constant). Suppose that at large r the bulk metric approaches the metric (4.5) of the AdS 6 × S 2 space. Then the asymptotic behavior (4.22) of (4.21) is essentially the same as the profile (2.15) if we relate the coordinates r and ρ by

ρ = r κR.   (4.23)

In the core of the brane the functions µ(r), λ(r) behave as in (4.2), and the profile (4.21) behaves as a power of r (4.24). If q y > 0, f and g vanish both when r → ∞ and when r → 0, and the profile (4.21) corresponds to a fermion state localized on the brane. Let us suppose that the brane thickness r 0 is small, so that the region r > r 0 gives the main contribution to the normalization (2.16) of the fermion modes. If the metric outside the brane is given by (4.5), the normalized solution of the Dirac equation which describes an SU(2) singlet state is given by (4.25) (see (4.18), (4.21)), while the SU(2) doublet state has the profile (4.26). If the monopole field (4.6) appears as a Kaluza-Klein field (4.9), then the charges q y and e are related through (4.11) and, instead of the whole tower of singlet and doublet states (4.25), (4.26) with different q y = 1, 2, . . ., we get just one singlet with q y = 1 and only one doublet with q y = 2.
The effective four-dimensional Dirac masses of the localized fermions can be calculated by the same procedure as in the previous section. The conserved angular momentum for the Higgs scalar in the field of the magnetic monopole (4.6) is given by (4.7) without the spin operator (1/2)σ. If the charge of the Higgs field is e h = 1/(2m), the lowest possible value of the angular momentum is j h = 1/2. Therefore the higher-dimensional profile of the Higgs field is

H p,j=1/2,m=1/2 = H p (r) e ipy/R y Y 1/2,1/2,1/2 (θ, φ),   (4.27)

where Y 1/2,1/2,1/2 is the corresponding monopole harmonic [21]. In the same way as in (3.8), the Dirac mass of a fermion is given by the integral over the extra dimensions of the overlap of the singlet (4.25) and doublet (4.26) profiles with the profile of the Higgs field (4.27),

M D = f ∫ dr dy dθ dφ e (λ−ν)/2 H p,1/2,1/2 F q R ,0,0 F q L ,1/2,m .   (4.28)

Integrating over the angles (θ, φ), we find that the mixing of the doublet state with projection of angular momentum m = −1/2 with the singlet vanishes. The integral (4.28) for the mixing of the m = 1/2 component of the doublet with the singlet reduces to (3.10). The mass M D is expressed through the charges q L and q R of the doublet (left) and singlet (right) components in the same way as in the previous section. Depending on the profile of the Higgs field H p , we can get either a hierarchical (3.13) or a quasidegenerate (3.11) pattern of fermion masses. It is straightforward to generalize the above model to the case when the brane is embedded in the nine-dimensional space-time with asymptotic AdS 7 × S 2 geometry. In this case the localized fermion modes are charged with respect to the SU(2) × U(1) × U(1) symmetry group. One of the U(1) factors can be related to the U(1) Y gauge group of the standard model, while the other will correspond to the horizontal symmetry U(1) G . There are SU(2) singlets and doublets in the spectrum of localized fermions. After the symmetry breaking, the upper components of the doublets become massive. The masses of these states are proportional to different powers of the small parameter ǫ (3.20), (3.21) and have a hierarchical structure. Correspondingly, these states can be identified with the charged leptons of the standard model. The lower components of the SU(2) doublets remain massless, since they are not mixed with the states localized on the brane. These states should be identified with neutrinos.
In our model the higher-dimensional profiles of the fermions are completely determined by their quantum numbers. In order to include quarks in the model we must equip the fermions with SU(3) quantum numbers. We leave the discussion of the relation between quark masses and quantum numbers for future work.
V. CONCLUSION.
In this paper we have pointed out that in the models with a brane universe embedded into a higher-dimensional space-time, the fermions localized on the brane always possess quantum numbers which are their rotation momenta around the brane. We have considered the cases where the localized fermions carry U(1) (Section II), U(1) × U(1) (Section III) or SU(2) × U(1) (Section IV) quantum numbers.
The higher-dimensional profiles of the fermions depend on their quantum numbers (2.15), and it turns out that the fermions which are neutral with respect to the group of rotations around the brane are not localized.
The effective four-dimensional masses of localized fermions are proportional to the overlaps of the profiles Ψ L , Ψ R of the left- and right-handed components of the fermion with the profile H p of the (effective or fundamental) Higgs field (3.8). If the Higgs field is sharply peaked at a distance ρ h from the center of the brane, as in (3.12), then the masses of fermions with different quantum numbers go like different powers of a small parameter ǫ, (3.20), (3.21). Even if there is no hierarchy in the length scales ρ h and R z in (3.21), the masses of localized fermions are arranged hierarchically.
VI. ACKNOWLEDGMENT.
I am grateful to A.Barvinsky, X.Calmet, V.Mukhanov, I.Sachs and S.Solodukhin for useful discussions of the subject of the paper. This work was supported by the SFB 375 Grant of Deutsche Forschungsgemeinschaft.
THE DEVELOPMENT OF LEARNING MODEL THROUGH VIDEO DOCUMENTARY TO IMPROVE ENVIRONMENTAL KNOWLEDGE OF COASTAL RESIDENTS OF PALOPO CITY, INDONESIA
This study aims to develop the Environmental Learning Model (ELM) using documentary videos to improve the knowledge of coastal residents of Palopo City about mangroves, wastewater, domestic waste, and liveable houses. Two phases were used in this research. The first phase examined the concept of the lesson plan, guided-book, video material, evaluation tests, and answer sheets, which were validated by experts. In the second phase, two reviewers examined the practicality and the effectiveness of the learning implementation. The validity result of the guided-book was in the very valid category. The lesson plan was in the very valid category, and the video material was also in the very valid category. This validation was carried out by an educational expert and a multimedia expert. Practicality: two educational reviewers stated that the syntax, social system, reaction principle, and support system were implemented entirely. Effectiveness: the learning result of environmental knowledge in the first trial was in the Medium category (47.5), and in the second trial it improved to the High category (60.8). So, it can be stated that the development of the learning model through documentary videos is valid, practical, and effective for improving the environmental knowledge of coastal residents and steering them away from environmentally harmful behaviour. © 2020 Science Education Study Program FMIPA UNNES Semarang
INTRODUCTION
Coastal residents experience health and safety threats every day due to environmental damage. Sembel (2015) noted several causes of health threats due to environmental pollution: (1) water pollution from industrial waste, domestic waste, chemicals, and chlorine from sewage disposal; (2) soil pollution from the seepage of septic tank water into wells or rivers through groundwater; septic tanks are sources of contaminants such as metals, microbial pathogens, and other compounds; (3) industrial and domestic waste pollution from insecticide residues, domestic waste, detergent residues, human waste, cans, plastic, glass, and drinking water bottles. Nasrun (2016) stated that the knowledge level of coastal residents is generally moderate to low, so it is difficult to improve conditions toward a clean and healthy environment. The health threat has increased every day due to poor environmental conditions in the coastal area of Palopo, as shown by the residents' frequent health problems in the form of fever, coughing, diarrhea, skin diseases, and other illnesses.
Palopo's coastal residents choose to live along the beach. In general, the houses are very crowded and disorganized, so the area looks like a slum. The condition of the houses in the coastal area of Palopo is slum-like and partly unfit for habitation (Nasrun, 2016; Idaman, 2017; Barrow, 2014). Some houses do not have a water closet, bedroom, kitchen, or dining room. The coastal residents have lived for a long time in houses that do not comply with health standards. Littering domestic waste directly into the sea damages the mangrove forests, and disposing of domestic wastewater increases the proliferation of coliforms containing pathogenic microorganisms that might cause various diseases.
Improving the environmental conditions of the coastal city of Palopo needs to be done by educating residents through their daily interaction with the environment and the concept of sustainable development. Identification of the current status of pre-service secondary teachers' knowledge, attitude, and practices about the environment is necessary to assess their level of readiness to integrate ESD in their teaching (Boubonari et al., 2013; Esa, 2010). Sustainable development focuses on the quality of human life in utilizing spatial areas, including coastal areas. Improving damaged environments is the responsibility of everyone who interacts with them every day. Therefore, coastal residents are urged to learn about improving the quality of the environment through documentary video as an information medium. The aim is to increase knowledge and change attitudes and behavior so that environmental conditions remain natural, healthy, and beautiful (Yustina et al., 2020). Asri et al. (2015) stated that the concept of sustainable development in coastal areas requires two kinds of improvement: (1) improvement of living needs, which must be met even though the average coastal population is constrained by low economic levels (poverty); (2) improvement of the knowledge and skills of coastal residents, which are still low, making it challenging to manage natural resources to increase income. One of the best ways to improve the standard of living of the coastal residents of Palopo City is to increase their knowledge and skills in managing the natural resources of the mangrove forests as tourist attractions and as protection against wave abrasion. The aim of educating the group of fishermen through nonformal education is to provide information about environmental management using documentary videos as information media.
The documentary video was designed around the environmental damage that has occurred in the coastal environment of Palopo City. It covers: (1) mangrove forest that is 80% damaged due to the construction of roads, cafes, and food stalls and the expansion of ponds; (2) domestic wastewater that is discharged directly onto the ground and into the sea as a source of pollution; (3) domestic waste that is disposed of directly into the sea or in house yards; (4) residential houses that do not comply with health standards. The coast and sea of the coastal area of Palopo City have changed: the mangrove forests are no longer beautiful, so the habitat of marine life is disappearing on its own, and the sea is used as a place to dispose of liquid waste from small and medium industries, domestic liquid waste, domestic solid waste, and wastewater from hospitals and modern and traditional markets. Waste is one of the causes of severe environmental damage that can be recycled (Lofrano & Brown, 2010; McKeown et al., 2002). A person will develop environmental awareness if he or she understands environmental science, yet most research on environmental education shows that the ability to understand environmental knowledge is low among elementary school, middle school, and college students and finds no significant correlation between environmental knowledge and attitudes on the one hand and behavior on the other (Levine & Strube, 2012; Levy et al., 2018; Nugraha et al., 2020; Prasetyo & Trisyanti, 2018; Ramadhan et al., 2019).
Interactive video affects learning results and learner satisfaction in an e-learning environment (Zhang, 2005). Learning with documentary videos stimulates memory (cognition) through images of the phenomenon of environmental damage and its impact. Iskandar (2011) argued that humans interacting with their environment form their own cognitive mapping; cognitive formation also occurs through development since childhood. With cognitive video mapping, it is intended that learners can learn in an enjoyable way by revisiting what has happened in the past, presented as recorded images containing colors, sounds, and movements that bring the material to life (Sakchutchawan, 2011; Stufflebeam & Coryn, 2014). The research question is whether the development of a learning model with documentary videos can improve environmental knowledge. What is measured is the level of validity of the device, the level of practicality, and its effectiveness. The level of validity was assessed by one education expert and one multimedia expert; the levels of practicality and effectiveness were assessed by two education experts using the observation assessment sheet.
The design of the Environmental Learning Model (ELM) utilizes computer technology to produce the documentary video software. The device of the Environmental Learning Model is a documentary video delivered by computer; a computer system can deliver instruction by allowing learners to interact with the lessons programmed into the system, which is referred to as computer-based instruction.
Instruction can be carried out by computers as learning media; documentary videos as instructional media contain the instructions for the learning activities.
This study measured the level of validity of the ELM, the level of practicality, and the effectiveness of the model. The level of validity was assessed by two educational model experts and multimedia experts; the levels of practicality and effectiveness were assessed by two education experts and learning model experts using observation assessment sheets. Gustafson (1991) stated that the model design follows the non-formal education pathway of Plan, Implement, and Evaluate (PIE). Practicality and effectiveness of the learning implementation refer to the theory of Nordyke. Nordyke (2011) stated that there are five essential components of a learning model: (1) syntax, a sequence of activities or activity phases; (2) social system, in which teachers and students each have roles and rules; (3) principles of reaction, the rules which must be met by both the teacher and the students in the classroom; (4) support system, the conditions required in learning to use tools or media; (5) instructional impact, the results achieved by students after learning. The effectiveness measured is the completeness of the learning outcomes and the response of the fishermen study groups to the implementation of the model.
Type of Research:
This is learning development research: a development through learning stages to produce a learning model to be used by coastal residents, called the Environmental Learning Model. The quality of this model can be assessed by several criteria, according to Nieveen (1997): validity, practicality, and effectiveness. The model's development is accompanied by learning tools and packaged in the form of a documentary video as an information medium.
Research Instruments: several instruments were used in the learning model, which is packaged in the form of a documentary video as an information medium, namely (1) an assessment sheet, (2) an observation sheet, (3) a questionnaire for the learning participants' responses, and (4) a validation sheet for each instrument.
Tools and Instruments of Validation: The model has been tested. The learning device is packaged in the form of a documentary video as an information medium. The instruments that met the validity criteria were administered to 25 residents of the coastal study group of the Palopo City fishermen.
Data Analysis Techniques: the data were analyzed in two ways: (1) data analysis for the validity of the model used descriptive analysis; (2) data analysis for model practicality used the average of the observations from each meeting, calculating the reliability of the Learning Model assessment sheets with the modified percentage-of-agreement formula of Grinnell (in Huda et al., 2017; Christensen et al., 2011). To determine the practicality of the model, the reliability of the model implementation observation sheets was calculated using the percentage-of-agreement formula (Borich, 2016):

R = A / (A + D),

where A = the frequency of agreement between the two observers' data, D = the frequency of disagreement between the two observers' data, and R = the reliability coefficient of the instrument. The model implementation observation sheet is said to be reliable if its reliability value (R) ≥ 0.75 (Borich, 2016).
The category of implementation of each aspect, or of all aspects, of the model is determined as follows:

3.5 < M ≤ 4.0: fully implemented
2.5 < M ≤ 3.5: partially implemented
1.5 < M ≤ 2.5: poorly implemented
M ≤ 1.5: not implemented (Salam et al., 2019)

Analysis of data on the effectiveness of the environmental learning model: the analysis of mastery of the mangrove forest, environmental sanitation, domestic waste, and livable house material describes the results of the statistical analysis of the ability to understand the material and determines the categories of learning outcomes based on Snowman & McCown (2011).
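For concreteness, a minimal sketch of these analysis rules in code (the formula R = A/(A+D) is my reading of the stated definitions of A and D, and the category labels follow the list above):

```python
def reliability(agreements: int, disagreements: int) -> float:
    """Percentage-of-agreement reliability coefficient R = A / (A + D)."""
    return agreements / (agreements + disagreements)

def implementation_category(m: float) -> str:
    """Map an average observation score M (on a 1-4 scale) to its category."""
    if m > 3.5:
        return "fully implemented"
    if m > 2.5:
        return "partially implemented"
    if m > 1.5:
        return "poorly implemented"
    return "not implemented"

r = reliability(agreements=36, disagreements=0)   # two observers in full agreement
print(r >= 0.75, implementation_category(3.6))    # True, 'fully implemented'
```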
Validation Analysis of Environmental Learning Models (ELM)
Analysis of the validity of the Environmental Learning model: two experts and practitioners assessed the validity of the model; they were provided with a simple guide book for the implementation of learning and an assessment sheet, with the following results. The aspect of Supporting Theory, namely the theories used in the model book in the form of learning theory, learning model theory, learning media theory, and documentary video theory, which serve as the supporting theories for the ELM book: the average score obtained was 3.8, which falls in the "Very Valid" category, so the ELM meets the validity criteria in terms of supporting theories.
The aspect of Syntax, the learning activities using a documentary video from the opening up to the stage of working on the questions by the group of fishermen: the average score obtained was 3.7, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the syntax aspect.
The aspect of the Social System, the pattern of interaction of the fishermen groups studying the documentary video material as an information medium, in the form of a one-way relationship pattern: the average score obtained was 3.5, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the aspect of the social system.
The aspect of the Reaction Principle, related to the learning process strategy used in the classroom with documentary videos, covering how to handle noisy activities in the classroom, a lack of seriousness in studying, and failure to respond to questions: the average score obtained was 3.7, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the aspect of the reaction principle.
The aspect of the Support System, the aspects supporting the learning activities in the form of lesson plans, teaching material, documentary video designs, and learning evaluations: the average score obtained was 3.8, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the aspect of the support system.
The aspects of Instructional Impact and Accompanying Impact, namely that the fishermen groups immediately felt the results of the learning process on the mangrove forest, domestic waste, and environmental sanitation material: the average score obtained was 3.8, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the aspect of instructional impact.
The aspect of the Learning Documentary Video, which provides motivational stimulation to understand the material more easily, in the form of images matching the environmental conditions, material delivered by mangrove and environmental sanitation experts, and narration: the average score obtained was 3.8, which falls in the "Very Valid" category, so the ELM meets the validity criteria in the aspect of the learning documentary video.
The educational expert and the media expert validated seven aspects of the model book; all component aspects are in the "Very Valid" category, with average scores of 3.5, 3.7, and 3.8.
The results of the model book validation are presented in Figure 1.
Validation of Learning Plan
Learning objectives contain indicators of the learning objectives for mangrove forests, environmental sanitation, domestic waste, and livable houses. The learning stages use a modified direct learning syntax: Phase 1, introduction: preparing the learning process, writing instruments, seating, laptop, LCD, and focusing attention on the material. Phase 2, core activity: explaining the learning strategies using videos and listening to the material for each subject with a duration of 30 minutes. Phase 3: working on the multiple-choice test. The expert judgment results gave an average of 4.0; confirmed against the validity criteria (3.5 ≤ X ≤ 4.0), this is in the "Very Valid" category. Presented Materials include material sourced from mangrove experts and environmental sanitation, domestic waste, and livable housing material sourced from public health experts. The average score obtained was 3.8, which is "very valid" against the validity criteria (3.5 ≤ X ≤ 4.0).
Learning Aids: the learning activities required aids in the form of the documentary video, a CD-ROM, a computer, and a Liquid Crystal Display (LCD) projector. The average score obtained was 4.0, which is "very valid" against the validity criteria (3.5 ≤ X ≤ 4.0).
The aspect of the Reaction Principle contains the learning activity phases: a preliminary phase preparing the fishermen study groups and instructional information, a core activity phase covering learning strategies and listening to material through the documentary video, and a final activity phase posing questions verbally and having participants work on multiple-choice questions. The average score obtained was 4.0, which is "very valid" under the validity criteria (3.5 ≤ X ≤ 4.0).
The education experts' validation of the Learning Plan covered four aspects. The component aspects are in the "Very Valid" category, with average scores of 3.5 and 4.0.
Validation of Video Documentary Material
Home (initial display) includes the design of the main menu display, the title menu display, the subject matter display, the narrative display of the documentary video learning objectives, and the narrative display of the learning achievements of the ELM using a documentary video. The average score obtained was 3.75, which can be concluded as "very valid". Video Documentary Display includes the colour quality display design, text quality display, image quality display, and audio quality display. The average score obtained was 3.62, which can be concluded as "very valid".
Material View includes the duration of time used in presenting the material and the displays of the mangrove forest, environmental sanitation, domestic waste, and livable house materials. The average score obtained was 3.8, which can be concluded as "very valid". The Display of Evaluation Questions used simple, easy-to-understand sentences, multiple-choice question forms, an appropriate time duration and number of questions, and answers to questions. The average score obtained was 3.66, which can be concluded as "very valid". The End View segment (closing) shows the acknowledgment display of funding sources, researcher names, video editors, and narrative voice actors. The average score obtained was 3.75, which can be concluded as "very valid" under the validity criteria.
The learning media experts' validation of the documentary video material covered five aspects. The component aspects are in the "Very Valid" category, with average scores of 3.62, 3.66, 3.75, and 3.8.
The results of the documentary video validation are presented in Figure 3.
ELM Practicality Analysis
The practicality of the ELM was measured as the feasibility of the learning components according to Nordyke (2011): the syntax component, social system, reaction principle, and support system. Two observers observed the learning components during a trial of two meetings. The results of the feasibility analysis of the ELM learning components are as follows. The Syntax Component concerned the main subject of the documentary video, the activities carried out in the learning process, individual presentation during the learning process, and reinforcement of the material. The two observers agreed on the Syntax Component of the ELM with a reliability percentage of agreement R (PA) of 100%. The average observed implementation of the syntax component was 3.5 in the first meeting and 3.6 in the second meeting, which means the syntax component was fully and properly implemented in both meetings of the two fishermen groups.
The Social System Component concerned documentary videos as a one-way information medium, the fishermen groups' active attention to the presentation of the material, their active listening to expert explanations, underlining and making essential notes, actively working on multiple-choice questions, and rewards given to active learning participants. The two observers agreed on the Social System Component of the ELM with a reliability percentage of agreement R (PA) of 100%. The average observed implementation of the social system component was 3.7 in the first meeting and 3.6 in the second meeting, which means the social system component was fully and properly implemented in both meetings of the fishermen group.
The Reaction Principle Component presents a conducive atmosphere for listening to the material. The material was responded to positively, supporting the learning process. The participants listened well to the material from the mangrove experts and public health experts, sitting in an orderly and easily managed manner. The two observers agreed on the Reaction Principle Component of the ELM with a reliability percentage of agreement R (PA) of 100%. The average observed implementation of the reaction principle component was 3.6 in the first meeting and 3.6 in the second meeting, which means the reaction principle component was fully and properly implemented in both meetings of the two fishermen groups.
The Support System Component concerned the condition of the room and the learning atmosphere, and the learning devices in the form of documentary videos, multiple-choice questions, computers, and an LCD. The two observers agreed on the Support System Component of the ELM with a reliability percentage of agreement R (PA) of 100%. The average observed implementation of the support system component was 3.7 in both the first and second meetings, which means the support system component was fully and properly implemented in both meetings of the two fishermen groups.
The practicality of the model was determined through assessment by two educational experts, who observed the implementation of the syntax component, the social system, the reaction principle, and the support system. The assessment results show R (PA) = 100%, which means the model was implemented in its entirety.
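The reliability figure R (PA) reported throughout this section can be illustrated with a minimal sketch; the article does not spell out its exact formula, so the common agreements/(agreements + disagreements) formulation is assumed, and the per-item ratings are hypothetical.

```python
# Sketch: inter-observer percentage of agreement. The article reports
# R (PA) = 100% but does not state its formula; a common formulation,
# assumed here, is agreements / (agreements + disagreements) * 100.
def percentage_of_agreement(obs_a, obs_b):
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    disagreements = len(obs_a) - agreements
    return agreements / (agreements + disagreements) * 100

# Hypothetical per-item ratings from the two observers for one component
observer_1 = [4, 4, 3, 4, 4]
observer_2 = [4, 4, 3, 4, 4]
print(f"R (PA) = {percentage_of_agreement(observer_1, observer_2):.0f}%")
```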
Effectiveness Analysis Trial of the First Meeting
The measured effectiveness was the ability of the fishermen to understand the material presented through documentary videos. In the first trial meeting, the fishermen study group comprised 25 participants, coded according to the answer sheet codes for the mangrove forest, domestic waste, environmental sanitation, and livable house materials. Waste is a factor that can severely damage the environment (Lofrano & Brown, 2010). The scores obtained by each participant across the four materials were averaged for descriptive statistical analysis, with the following results: (1) the average mastery score was 47.5, which broadly illustrates the passing grade of the fishermen groups' learning outcomes across the four materials; (2) the highest combined score obtained was 62.5 and the lowest was 35; (3) the median score was 47.5, indicating that 50% of participants scored at or below 47.5; (4) the mode was 45, indicating that 45 was the combined score for mangrove forest, environmental sanitation, domestic waste, and livable house knowledge most often obtained by participants. Scores obtained from the learning process are complex because they contain pedagogic, psychological, and didactic components (Snowman & McCown, 2011). Details can be seen in Figure 5. Following Arifin (2009), the learning outcomes of the first meeting can be grouped into five categories, as shown in the following table. In the first meeting, the very high category was 0%, the medium category was 84% (21 of 25 participants), and the low category was 16%. The educational level of the participants varied: 48% had an elementary school education, and the remaining 52% ranged from not graduating junior high school through high school to higher education. Kudryavtsev et al. (2012) stated that environmental education in Indonesia still needs improvement in both cognitive and affective aspects.
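A minimal sketch of the descriptive analysis above, using Python's standard statistics module; the score list is hypothetical, so its output will not exactly reproduce the reported values (mean 47.5, median 47.5, mode 45, maximum 62.5, minimum 35).

```python
# Sketch: descriptive statistics of the kind reported for the first
# meeting (mean, median, mode, minimum, maximum), computed with the
# standard library. The score list is hypothetical, not the study data.
import statistics

combined_scores = [35, 40, 45, 45, 45, 47.5, 47.5, 50, 55, 62.5]

print("mean  :", statistics.mean(combined_scores))
print("median:", statistics.median(combined_scores))
print("mode  :", statistics.mode(combined_scores))
print("min   :", min(combined_scores), " max:", max(combined_scores))
```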
Trial of the Second Meeting
The effectiveness measurement was repeated in the second meeting, using the same design as the first trial. Twenty-five fishermen participants learned and were coded according to the answer sheets for the mangrove forest, domestic waste, environmental sanitation, and livable house materials.
Learning outcomes were analysed descriptively, with the following results: the average material mastery score increased to 60.6, which broadly illustrates that the passing score for the combined four materials increased. The highest combined score achieved was 80 and the lowest was 37.5. The median score was 62.5, indicating that 50% of participants scored at or below 62.5 on the combined four materials. The mode was 55, the combined score for mangrove forest, environmental sanitation, domestic waste, and livable house knowledge most often obtained by the fishermen study group. In general, the fishermen group's material mastery was low because learning through documentary video was an unfamiliar method (Asri et al., 2019). Details are shown in Figure 6 (Learning Outcomes Mastery of the Second Meeting Material).
The frequency distribution of learning outcomes for the second meeting is presented in Table 2. The learning outcomes of the second meeting increased: the high category was 40%, the medium category was 52% (13 of 25 participants), and the low category was 8% (2 participants).
The implementation of the ELM components using documentary videos was observed by two observers to determine the percentage of agreement: 100% for the syntax component, the social system, the reaction principle, and the support system alike. This means the two observers agreed that all model components were properly implemented.
Validation Model
The assessment by the educational and multimedia experts rated the implementation of the ELM as Very Valid. This means the ELM and documentary video are suitable for use in increasing knowledge and changing the attitudes and behaviour of coastal residents in Palopo, whose poverty can lead them to damage and pollute the environment (Eggen, 2012).
The benefits of using documentary videos in the Environment Learning Model as a method of sharing information are as follows. (1) Documentary videos are suitable because participants experience them much as they would a TV broadcast. Individually, the material is easy to remember because it is accompanied by pictures of the participants' own environment; however, the varied educational levels of participants also create difficulty in understanding the material. (2) The fishermen group is shown a documentary video as the medium for sharing information; in the learning process, participants were challenged by the speed required for reading, memorizing, and working on problems. Asri et al. (2015) noted that an Environmental Education Model using computer media, implemented via local hosting and online, posed difficulties for students in understanding online material: they were limited by the time allotted for each subject while working through online questions on Environmental Education material in vocational schools.
Practicality
Practicality can be measured through two approaches. The first is the theoretical approach, based on the educational experts' assessment of the implementation of the ELM components using documentary videos as an information method; the model was declared feasible for use as a learning model that helps participants learn and obtain information and knowledge. Rusman and Cepi (2011) stated that the function of media in the learning process is to serve as learning aids and learning sources. Nordyke (2011) argued that models of teaching are really models of learning: as we help students acquire information, ideas, skills, values, ways of thinking, and means of expressing themselves, we are also teaching them how to learn.
The second is the empirical approach, based on observations of the implementation of the ELM components using a documentary video: the syntax component, social system, reaction principle, and supporting systems. The average observation score was 3.62 in the first trial and remained 3.62 in the second trial, with a reliability percentage of agreement R (PA) of 100%. This means all components of the model were well implemented, although several aspects still need functional improvement.
Effectiveness
The effectiveness of the model using documentary videos was assessed from the learning outcomes on mangrove forests, domestic waste, environmental sanitation, and livable houses. One supporting factor for effectiveness is that the learning tools were all rated in the very valid category. Descriptive statistical analysis of material mastery in the first trial yielded an average score of 47.5 from the 25 participants in the fishing group. This places mastery of the four materials in the "Medium" category, indicating that understanding was not yet evenly distributed across the fishermen study group and the learning had not yet been fully successful. In the second trial, the average material mastery score increased to 60.6 among the same 25 participants; mastery remained in the "medium" category overall, but ten people reached the high category, raising the hope that behavioural changes will follow in preserving mangrove forests, living healthily, and recycling waste. The participants were given various statements about personal environmental behavior (Kanuka, 2010; Levy et al., 2018). Their responses on a Likert scale indicate that most of the behaviors in the statements are not frequently practiced; the more frequently practiced behaviors concern cleanliness, recycling, electricity, and buying local produce. Two factors led to the increase in material mastery in the second trial: (1) documentary videos were a new method of information delivery, providing support, motivation, and encouragement to learn; and (2) the material presented in the documentary video did not change in the second trial, so answering the questions was easier because the material had already been studied in the first trial.
CONCLUSION
According to the research results, it is appropriate to use the ELM with documentary video media to teach coastal residents about mangrove forests, domestic waste, environmental sanitation, and livable houses. The ELM was assessed by educational experts and media experts to determine its validity.
The assessment results show that (1) the instrument's validity yielded a reliability coefficient of R = 1; overall, the instrument meets the validity and reliability requirements; (2) the validity of the ELM book, at 3.75, is in the "very valid" category with a reliability coefficient of R = 0.96; and (3) the validity of the learning plan, at 3.84, is categorized as "very valid".
The ELM implementation was conducted twice with the coastal residents. The results show that (1) in the first trial, the syntax component, the social system, the reaction principle, and the support system were implemented in their entirety, but still needed improvement.
(2) Improvements were made before the second trial, which again assessed the implementation of the components. The assessment results show 87.5 percent, indicating that the ELM is quite practical to use.
Material mastery in the learning process represents the effectiveness of the ELM. The results show that (1) the first trial covered four subjects: mangrove forests, environmental sanitation, domestic waste, and livable houses. The average score was 48.4 from 25 participants, which was still very low.
(2) In the second meeting, material mastery rose to an average score of 60.8. This demonstrates an improvement in learning outcomes, so the ELM can be categorized as effective.
OPENING EDITORIAL: INTERSECTIONAL APPROACHES, INCLUSIVITY, AND INTERDISCIPLINARITY
The current issue of Intersectional Perspectives: Identity, Culture, and Society reflects the responsiveness of the editorial board to necessary changes regarding the humanities, academia, and research approaches in general. The journal has been previously known as Assuming Gender and was founded in 2010 by postgraduates at the School of English, Communication, and Philosophy at Cardiff University, focusing on themes of gender and sexuality. Under the title Assuming Gender , the journal was hosted on a WordPress site, publishing eight issues up to the year 2017, before transferring to Cardiff University Press in 2021, publishing a special issue in the same year. All back issues of the journal under its previous title have been archived in the past issues section of our journal platform and can be viewed and downloaded by readers. 2020 was a pivotal year for the development of the journal as the upheaval of the COVID-19 global pandemic called for an examination of global, societal, economic, and academic inequalities that had a greater impact on marginalised groups and disenfranchised members of both local and international academic communities, in addition to the reading public. Global vaccine inequality prolonged the state of crisis as some wealthier nations reserved vaccine production and distribution, leaving lesser-income countries to face devastating health-impacts, which tore at the social and financial structure of their communities. 1 The pandemic further deepened inequalities for marginalised and vulnerable groups, uncovering gendered, classist, ageist, and ableist attitudes that left others at a disadvantage concerning remote work during lockdowns. 2
In academia, not only were academics 'firefighting' the abrupt changes to delivering education, but they and students alike were abruptly required to adapt to new learning technologies, bringing into discussion issues of accessibility and economic disadvantage. Personal and professional lives were disrupted, leading to many pandemic-related interruptions with authors and peer-reviewers across the world. University Presses were particularly affected through the sudden move of research and teaching online, with rising financial pressures on the higher education institutions to which they belong, and a 'mass movement towards creating a more equitable and anti-racist society'. 3 In the face of these building pressures, our publication responded by adapting to digital technologies and focusing our commitment to social justice.
In 2021, the editorial board made the decision to restructure the journal and transfer it to Cardiff University Press to realise its potential. We decided to change the name of the journal to Intersectional Perspectives: Identity, Culture, and Society (IPICS). This decision was made due to public response at the time regarding the outdatedness of the previous title Assuming Gender, as it had evolved into a transmisic term that was incongruous with the journal's commitment to equality, diversity, and inclusion. The new title of the journal also acknowledges the intersections of various identity markers beyond the gender and sexuality focus that our previous title implied.
Intersectional Perspectives: Identity, Culture, and Society reflects our interest in other markers of identity such as race, class, disability, and neurodiversity. This change simultaneously widened the scope of the journal to seek out publications that take into consideration additional identity factors and viewpoints. It is also a nod to Kimberlé Crenshaw's theory of intersectionality. 4 Intersectionality, as Patricia Hill Collins aptly summarises, is a way of understanding and analysing the complexity in the world, in people, and in human experiences. The events and conditions of social and political life and the self can seldom be understood as shaped by one factor. They are generally shaped by many factors in diverse and mutually influencing ways. When it comes to social inequality, people's lives and the organization of power in a given society are better understood as being shaped not by a single axis of social division, be it race or gender or class, but by many axes that work together and influence each other. 5 Hence, Intersectional Perspectives is our way of creating a platform for comprehending and examining the intricacies present within individuals, human encounters, and the world. Rarely can the circumstances that shape identity, social and political life, and cultural practices be reduced to a single factor. They are commonly shaped and compounded by multiple factors that interact and impact each other in numerous ways. While also hinting at Crenshaw's framework of intersectionality where it concerns identity and societal influences, 'intersectional', as we see it, is not limited to intersectionality and the social sciences. Where it concerns identity and social issues, IPICS encourages publications that move beyond a 'single axis' or category to better grasp the complexities involved in such discussions. 'Intersectional' further speaks to interconnecting viewpoints and multi-layered analyses and contributions. It also allows for apertures and ruptures, as well as leakages and slippages, within such paradigms and standpoints.
Accordingly, IPICS remains an open-access, double peer-reviewed, interdisciplinary, and multidisciplinary journal, as we encourage representations of identity and social categories not only in literature and society, but also in various cultural expressions, including multi-media and art.
Through our new journal platform, IPICS can publish sound-essays and art-based pieces, while keeping in mind the different accessibility needs of our readers. To accommodate the verve of these dynamic perspectives, the editorial board expanded publication types to include systematic reviews, special features, creative research, and commentaries. Our target readership has also been expanded to include members of the public, who are welcome to submit to the journal.
Not only this, but IPICS is also committed to addressing the exclusivity of some academic publishing circles, which have long been criticised for their elitism. 6 The academic publishing process often favours scholars who are affiliated with prestigious universities or who have access to resources and networks that give them an advantage. Consequently, the voices and perspectives of emerging scholars from less privileged backgrounds or researchers who receive little to no funding may be marginalised or excluded altogether from academic journals that charge publication fees. Hence, IPICS, in alignment with Cardiff University Press' commitment to ethical social awareness, does not charge journal authors for publishing through our platform and retains their rights through our application of Creative Commons licences to all our publications.
Publications are also Open Access and free of charge for readers. This provides many advantages for our authors, including increased visibility and impact for their publications, greater accessibility to their publications on both a local and international scale, long-term preservation of their contributions through archiving the work in digital knowledge repositories, and reduction of financial burdens on individual researchers and institutions by eliminating subscription fees for authors and readers. And so, IPICS encourages submissions that empower marginalised voices and perspectives, while actively addressing some of the barriers of the academic publishing industry.

Another commitment of the journal is its adherence to academic excellence, integrity, and purpose in accordance with Cardiff University Press' philosophy of 'Rigour, Diversity, and Relevance'. 7 We espouse high standards for assessing quality research and submissions, and where relevant, encourage authors to revise and resubmit papers if they do not meet our requirements during the first or second round of peer review. We accept academic articles upon merit, based on the assessment of our expert peer-reviewers who evaluate the articles through a rigorous two-stage anonymised peer-review process. In our own commitment to diversity, we, as an editorial board, reflect the diversity of the academic community at Cardiff University, as we are of various backgrounds and viewpoints and are constantly updating our inclusion practices and policies. Our commitment to relevance is seen in our open invitation for guest editors to process and publish a self-contained special issue, as was the case with the first issue published under our new title and affiliation in 2021. Relevance is also evidenced through the editorial board's upcoming work on a special feature that explores the intersections between identity and creative and performative spaces from our local and regional perspective. 8

We open our current issue with a research article by Aswathi Moncy Joseph and a book review by Dyuti Chakravarty. Joseph's 'It's Not About the Burqa: Transversing Heterotopia and Hypomnemata in Muslim Women's Life Narratives' challenges homogeneous and stereotypical representations of transcultural and displaced Muslim women. Joseph engages in an intersectional analysis of race, gender, and religion in displaced Muslim women's life narratives through selected readings of Mariam Khan's 2020 edited anthology It's Not About the Burqa: Muslim Women on Faith, Feminism, Sexuality and Race. 9 Joseph raises two crucial questions: How do these women's life-narratives challenge the oversimplified and reductionist perceptions of their identities, which solely focus on religion as a single axis of social and political analysis? And how do these women create unique stories that challenge these representations to reflect the many nuances of their identities, and the multiple factors that shape them? Joseph proposes answering these questions through Foucault's concepts of heterotopia and hypomnemata, arguing for the transversal potentialities of the two. Heterotopia within the context of Joseph's article refers to the 'worlds within worlds' that these Muslim women inhabit, whereas hypomnemata indicates the women's own inner worlds, expressed through their personal notes and meditation. Therefore, Joseph argues that the Muslim women's life-narratives in It's Not About the Burqa 'extend into a subjunctive space for reading and re-reading' their subjectivities, thus becoming a site for heterogenised meaning-making for 'transculturally scattered' Muslim women.

Joseph's article provides a much-needed analysis of Muslim women's life narratives, thus acknowledging their agency in telling their own stories at the intersections of gender, sexuality, religion, and race. Also engaging with the theme of meaning-making, Chakravarty provides an insightful review of Laura Engel's Women, Performance, and the Material of Memory: The Archival Tourist, 1780-1915, through her discussion of the book's conceptualisation of an 'archival tourist'. This figure, Chakravarty points out, provides a nuanced understanding of archival records and spurs generative meanings through the embodied existence of the archive. Chakravarty further brings attention to the interdisciplinary methodology of the book, which enables one to imagine the multifaceted relationship between the materials of the archive and the archive's spatial, theatrical, and visual dimensions. Through an analysis of the four case studies of The Archival Tourist, Chakravarty concludes her review by reflecting upon the interdisciplinary strategies of Engel's study, which she explains attempts to repair the difficult relationship that marginalised people had with the archive in terms of their identities that materialise through it or remain hidden.
Necrotizing Fasciitis of the Paraspinous Muscles
Necrotizing fasciitis (NF) is a rare and lethal soft tissue infection that requires urgent surgical intervention. It is most often found in the extremities, occurring with precipitating trauma or in immunocompromised states. Signs and symptoms are often vague or absent, making early diagnosis very difficult. Our patient presented with flank pain and altered mental status but no known precipitating factors. Computed tomography showed gas within and around the right paraspinous muscle, suspicious for NF. Given NF's high lethality, emergency physicians must maintain early suspicion for NF in patients with soft tissue infections or with systemic findings of unknown etiology.
INTRODUCTION
Necrotizing soft tissue infections (NSTIs) encompass a rare but highly lethal spectrum of infections of the subcutaneous tissue and fascia. NSTIs are often associated with trauma or immunocompromised states but can be seen in previously healthy people. Given the high mortality, reported at 25% in recent years 1,2 and up to 76% with involvement of the perineum or trunk, 1 early recognition and consultation for debridement is essential to decrease mortality.
CASE REPORT
A previously healthy 54-year-old man presented to the emergency department (ED) with a chief complaint of altered consciousness. Over the preceding five days, the patient's wife reported that he had worsening back and right-flank pain. He also had a productive cough and was seen by a physician three days prior and diagnosed with bronchitis and a lumbar strain. He was prescribed ciprofloxacin, baclofen, and hydrocodone 5mg/acetaminophen 500mg. Despite this, he worsened and developed fever to 39.2°C. On the day of presentation, he became confused and his wife brought him to the ED. Review of systems from the wife was otherwise negative.
Past medical history included hypertension and a brain angioma as a child. He had a 60 pack-year smoking history but denied intravenous (IV) or illicit drugs, recent trauma or illness. Physical exam showed temperature of 37.3°C, pulse 130 beats per minute, blood pressure 110/70 mm Hg and respiratory rate 16 breaths per minute. He appeared tired, diaphoretic, and older than stated age. His head and neck exam was noncontributory. Cardiac exam revealed tachycardia with normal S1 and S2. His lung exam was significant for diffuse rhonchi. The abdomen was soft and nontender to deep palpation. His skin exam, from his right flank to right paraspinal lumbar region, showed erythema and brawny edema with minimal elevation of the epidermis. This area was moderately tender and without crepitance. His mobility was limited by pain. Extremities were dry but skin was warm. There were no gross motor or sensory deficits. The patient was confused with speech limited mainly to incomprehensible sounds. He was easily aroused and complained of pain when he was turned to examine his back.
A metabolic panel revealed a blood urea nitrogen of 41 mg/dL (normal 8-22), creatinine 2.1 mg/dL (normal 0.5-1.3) and serum sodium 136 mEq/L. A complete blood count showed white blood cells (WBC) of 30.3 K/mm3 with a manual differential of 27.9 K/mm3 neutrophils. Serum lactate was 2.1 mmol/L (normal <2). The remainder of the laboratory results was unremarkable. An ECG showed sinus tachycardia without ST changes. No HIV test was done.
The patient's resuscitation included two liters IV normal saline without improvement in mental status or blood pressure. Given the area of skin elevation in his back and the clinical presentation of presumed sepsis, computed tomography (CT) scans of the head, chest, abdomen and pelvis were ordered. The CT of the head and chest were unremarkable; however, the CT scan of the abdomen and pelvis showed gas within and around the right paraspinous muscles with an adjacent large abscess measuring 28 (cranial to caudal) x 15 x 5 cm that extended from the gluteus muscle to the mid-thoracic level of the back concerning for necrotizing fasciitis ( Figure). General surgery was immediately consulted and the patient was given broad-spectrum IV antibiotics. Despite this, the patient became hypotensive with systolic blood pressure 92 mmHg. After the patient's surgical evaluation, he was intubated to protect his airway and for his impending surgery.
In the operating room (OR), extensive necrosis of the trapezius, quadratus lumborum, and paraspinous muscles from the base of the neck to two centimeters above his buttock were debrided. Involvement extended into portions of the retroperitoneum at the lumbar triangle. There was also bony involvement of the twelfth rib, cervical spine, lumbar spine, and iliac crest requiring removal of the twelfth rib, with no spinal canal or cord involvement. In total, an estimated 9% of total body surface area was affected without involvement of the perirectal area. Wound cultures grew Streptococcus viridans, peptostreptococcus and porphyromonas species.
The patient returned to the OR the next day for further debridement and wound vac placement. Over his 17-day hospitalization, he had four additional debridements and wound vac changes, with continuous IV antibiotics. The patient recovered and was discharged home in stable condition. He was seen in the general surgery clinic with no complications and no need for additional surgical or medical interventions.
DISCUSSION
Necrotizing fasciitis (NF), a NSTI that has invaded fascial planes, is a rare but potentially fatal infection that requires early diagnosis and surgical intervention due to its rapid progression and high mortality. The incidence of NF in the United States is estimated at 500-1500 cases per year with mortality of 20-60%. [1][2][3][4][5][6] The pathophysiology of NF involves release of bacterial toxins and enzymes resulting in rapidly progressing soft tissue necrosis. 2,7 Pathogens further block the lymphatic and vascular systems, impairing the immune system and antibiotic delivery. 2 Ultimately, if untreated, extensive inflammation and coagulation necrosis results in pathogen spread along fascial planes with eventual muscular and bony involvement. Mortality results from overwhelming sepsis and multiple organ failure.
There are many classification systems for distinguishing various types of NF, based on type of pathogen, location and/or extent of tissue involvement. While classification can be useful for refining antibiotic treatment or documentation purposes, there are no obvious distinguishing clinical features separating the various types, and initial treatment in the ED should be the same in all suspected cases of NF. 2,5 While many risk factors have been identified, including diabetes mellitus, chronic kidney disease, IV drug use and immune suppression, 3,9 up to 50% of NF cases occur in otherwise healthy patients of all age ranges. 2,3 Many cases have a precipitating factor, usually trauma. However, as in our patient, >20% present with unknown etiology. 4,10 NF is primarily a clinical diagnosis with a wide spectrum of presentations, making early diagnosis difficult. The high rate of initial misdiagnosis in the ED, reported at 42.6%-86.4%, has been attributed to lack of systemic and/or cutaneous findings. 5,9,11 The most common misdiagnoses are cellulitis or abscess, 9,11 but NF can also present similarly to erysipelas, phlebitis, arthritis, deep vein thrombosis and viral illness. 4 While NF most often affects the extremities, itcan affect any part of the body with the perineal area and trunk being the next most common (Table). 1,10,[12][13][14][15] Initial symptoms are vague and onset occurs over several hours to days. 1,5,10 Symptoms include tenderness, swelling, erythema and pain at the affected site. 3,10 Skin changes are usually heterogeneous 5,9 and can mimic cellulitis 4,5 or abscess. 1,7 Pain out of proportion to exam is the most specific early manifestation of NF, 5,6,10 while presence of bullae is a specific late finding indicating tissue necrosis. 5 These findings are fairly specific but insensitive (10-40%). 4 Systemic findings may include fever, tachycardia, diaphoresis, hypotension, extreme anxiety and vomiting/diarrhea. 2,5 Law et al.
Figure.
Computed tomography scan of the abdomen demonstrating gas (arrows) in the right paraspinous muscles. Laboratory results associated with poor outcomes from NF are WBC counts >14,000 cells/mm3, serum sodium <135 mEq/L and a BUN >15-18 mg/dl. 2,11 Given the lack of definitive clinical presentation, Wong et al. 16,17 developed the Laboratory Risk Indictor for Necrotizing Fasciitis (LRINEC) that uses six predictive factors to distinguish NF from other soft tissue infections, which in one prospective study was shown to have a negative predictive value of 95% and a positive predictive value of 40%. Wong et al. 18 thus argued that the LRINEC should be used to limit and target use of radiographic imaging rather than as an independent diagnostic tool for NF. However, utility of this instrument has not been validated in ED patients.
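Purely as an illustration of how the six-factor LRINEC score is tallied, a minimal sketch follows, using the thresholds published by Wong et al.; the CRP, hemoglobin, and glucose values in the example are assumptions, since they were not reported for this patient.

```python
# Sketch of the published LRINEC score (Wong et al.), for illustration
# only; as noted above, the tool has not been validated in ED patients
# and does not replace clinical judgment or surgical exploration.
def lrinec_score(crp_mg_l, wbc_per_mm3, hgb_g_dl, na_mmol_l,
                 cr_mg_dl, glucose_mg_dl):
    score = 4 if crp_mg_l >= 150 else 0
    score += 2 if wbc_per_mm3 > 25_000 else (1 if wbc_per_mm3 >= 15_000 else 0)
    score += 2 if hgb_g_dl < 11 else (1 if hgb_g_dl <= 13.5 else 0)
    score += 2 if na_mmol_l < 135 else 0
    score += 2 if cr_mg_dl > 1.6 else 0
    score += 1 if glucose_mg_dl > 180 else 0
    return score  # >= 6 raised suspicion for NF in the original derivation

# Values loosely based on this case; CRP, hemoglobin, and glucose were
# not reported for this patient and are assumed here.
print(lrinec_score(crp_mg_l=200, wbc_per_mm3=30_300, hgb_g_dl=12.0,
                   na_mmol_l=136, cr_mg_dl=2.1, glucose_mg_dl=140))
```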
CT and magnetic resonance imaging (MRI) are most commonly used and studied. While MRI is more sensitive than CT for soft tissue infections, availability often limits its use. Findings on CT suspicious for NF include asymmetric deep fascial thickening, fat stranding and presence of fluid or gas. 18 Plain films have a high specificity but low sensitivity in identifying subcutaneous gas. 4 Imaging should be an adjunct to diagnosis and not delay operative treatment if suspicion for NF is high, since an open look by a surgeon is the criterion standard for diagnosis and allows for immediate treatment. 5 Our patient initially presented to his primary medical doctor with low back pain and a productive cough that later progressed to fever and delirium. While these initial symptoms were vague, we suspected NF given the redness and exquisite tenderness of the back and right flank. Unusual features of this case include atypical location, lack of trauma and apparent previous healthy state of this patient. The emergency physician (EP) should consider NF even in these circumstances. In this, the diagnosis was made expeditiously with CT and early exploration.
Initial treatment in the ED for NF includes aggressive resuscitation, broad spectrum IV antibiotics, and immediate surgical consult. 10 The criterion standard of treatment is repeated surgical debridement to ensure removal of all necrotic tissue along with deep incisional biopsy, wound cultures and antibiotics. [2][3][4] Hyperbaric oxygen and IV immunoglobulin have also been used with mixed results and are seen as possible adjuvants, especially if risk of mortality is high. 4,6,19
CONCLUSION
As in this patient, NF can occur in any location without precipitating factors and with vague symptoms, making early diagnosis difficult. Misdiagnosis can lead to delay in surgical debridement, which is the only identified modifiable factor that decreases mortality. 1,6 Although NF is rare, the rapid progression and lethality warrants high clinical suspicion, early diagnosis by the EP and prompt treatment.
Influence of a Pediatric Fruit and Vegetable Prescription Program on Child Dietary Patterns and Food Security
Limited access to fresh foods is a barrier to adequate consumption of fruits and vegetables among youth, particularly in low-income communities. The current study sought to examine preliminary effectiveness of a fruit and vegetable prescription program (FVPP), which provided one USD 15 prescription to pediatric patients during office visits. The central hypothesis was that exposure to this FVPP is associated with improvements in dietary patterns and food security. This non-controlled longitudinal intervention trial included a sample of caregiver–child dyads at one urban pediatric clinic who were exposed to the FVPP for 1 year. Patients received one USD 15 prescription for fresh produce during appointments. A consecutive sample of caregivers whose children were 8–18 years of age were invited to participate in the study. Dyads separately completed surveys that evaluated food security and dietary behaviors prior to receipt of their first prescription and again at 12 months. A total of 122 dyads completed surveys at baseline and 12-month follow-up. Approximately half of youth were female (52%), and most were African American (63%). Mean caregiver-reported household food security improved from baseline to 12 months (p < 0.001), as did mean child-reported food security (p = 0.01). Additionally, child-reported intake of vegetables (p = 0.001), whole grains (p = 0.001), fiber (p = 0.008), and dairy (p < 0.001) improved after 12 months of exposure to the FVPP. This study provides evidence that pediatric FVPPs may positively influence food security and the dietary patterns of children.
Recent efforts to simultaneously address food insecurity and poor dietary behaviors among children and adolescents include the introduction of pediatric fruit and vegetable prescription programs. These programs vary widely in design and approach; however, most involve physician-issued prescriptions that may be exchanged for fresh fruits and vegetables at local farmers' markets, mobile markets, or food stores. Although evidence suggests that prescriptions for fruits and vegetables may address barriers to healthy food access among young patients and their families [22][23][24][25], reproducible implementation strategies that consistently demonstrate effectiveness are absent.
In August 2015, a large university-affiliated pediatric office, in a low-income, urban area, moved to a downtown farmers' market building. Shortly after this move, the clinic developed and implemented a successful pediatric fruit and vegetable prescription program (FVPP) that provided a pediatrician-issued prescription for fresh produce to patients (birth to 18 years of age) at every office visit [22,26]. This FVPP was expanded to a second pediatric clinic located several miles from the downtown farmers' market in August 2018. The current study sought to examine the preliminary effectiveness of the expanded FVPP, which provided one USD 15 prescription to all pediatric patients at the conclusion of office visits. Prescriptions were redeemable only for fresh produce at either the downtown farmers' market or local mobile market.
Study Population
Flint, Michigan, the birthplace of General Motors, is home to approximately 100,000 residents. Following the American automobile industry's decline, the city fell into an extreme recession [27]. The child poverty rate in Flint is nearly 60% [28], and the city lacks resources and dietary options. Local stores are likely to offer low-quality foods and few healthy food options [26,29,30], and grocery stores are limited within the city [31].
Pediatric Fruit and Vegetable Prescription Program
Hurley Children's Center, a residency-training clinic with approximately 11,000 visits annually, launched Michigan's first pediatric FVPP in February 2016. The program was intentionally designed to facilitate ease of implementation within busy pediatric offices while sending a ubiquitous message to patients and families regarding the importance of regular consumption of fruits and vegetables. Prescriptions for fresh fruits and vegetables were built into the existing electronic medical record (EMR) system and stored in patient records. This allowed for ease of distribution as well as monthly tracking of prescription distribution rates. Pediatricians ordered prescriptions through the EMR system, printed on prescription paper, and distributed to all patients (birth to 18 years of age) at the conclusion of office visits.
Following the success of the FVPP at Hurley Children's Center [22,26], an identical program was introduced in August 2018 at a private practice pediatric clinic in Flint. This second clinic, Akpinar Children's Clinic, serves approximately 3000 patients. Most patients are residents of Flint and receive public health insurance. Modeled after the original program, fruit and vegetable prescriptions at Akpinar Children's Clinic were ordered via EMR, printed, and given to patients (birth to 18 years of age). All patients, regardless of income or health status, received one USD 15 fruit and vegetable prescription at every clinic visit to be exchanged for fresh produce at either the year-round, downtown farmers' market or a mobile market that also offered free delivery of fresh produce boxes. Prescriptions were treated as vouchers redeemable only for fresh fruits and vegetables and were valid for 90 days from prescription receipt.
Study Design
This was a non-controlled longitudinal intervention trial with a consecutive sample of 122 caregiver-child dyads exposed to the pediatric FVPP for 1 year. Dyads completed in-person assessments with a trained research assistant at baseline and at approximately 6-month and 12-month follow-up. Descriptions of the evaluation tools are also available in an earlier article reporting baseline data and a 6-month follow-up on fruit intake [23,32]. The current study was approved by Michigan State University's Institutional Review Board.
Participants and Data Collection
Beginning in August 2018, pediatric patients at Akpinar Children's Clinic received one USD 15 prescription for fresh produce during each office visit. A consecutive sample of caregivers whose children were between 8 and 18 years of age was invited to participate in this study. Exclusion criteria included the following: caregiver or child not English speaking, legal guardian not present at enrollment, child assent refused, or sibling previously enrolled (study enrollment was limited to 1 caregiver and 1 child per household).
Caregiver-child dyads provided consent and assent before separately answering demographic questions and survey questions that evaluated food security and dietary behaviors. Approximately 12 months after baseline data collection, caregivers returned to the clinic with their children to complete follow-up surveys. All data were collected from August 2018 through March 2020 using a secure digital platform (Michigan State University Qualtrics), accessed through iPads.
Food Security
To measure household food insecurity and hunger, caregivers completed the National Center for Health Statistics' US Household Food Security Module: Six Item Short Form [33]. Food security status can be understood using the household's raw score (0-1 = high/marginal food security; 2-4 = low food security; 5-6 = very low food security), calculated by counting affirmative responses ("often", "sometimes", "yes", "almost every month", "some months but not every month").
Children 12 years of age and older (n = 67) completed the 9-question Self-Administered Food Security Survey Module for Youth. Because the module's internal validity is adequate for children ages 12 years and older but is not recommended for younger children, the tool was not used with children younger than 12 years of age [34]. The sum of affirmative responses ("a lot" or "sometimes") served as the child's raw score. Food security status can be understood using the raw score (0-1 = high/marginal food security; 2-5 = low food security; 6-9 = very low food security).
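As a minimal sketch of the scoring just described, the function below counts affirmative responses to the six household items and buckets the raw score into the stated categories; the example answers are hypothetical.

```python
# Sketch: scoring the six-item household module as described above,
# counting affirmative responses and bucketing the raw score. The
# example answers are hypothetical.
AFFIRMATIVE = {"often", "sometimes", "yes",
               "almost every month", "some months but not every month"}

def household_food_security(responses):
    raw = sum(r.strip().lower() in AFFIRMATIVE for r in responses)
    if raw <= 1:
        status = "high/marginal food security"
    elif raw <= 4:
        status = "low food security"
    else:
        status = "very low food security"
    return raw, status

answers = ["sometimes", "no", "yes", "never", "no", "no"]  # hypothetical
print(household_food_security(answers))  # (2, 'low food security')
```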
Dietary Behaviors of Children
The Block Kids Food Screener (BKFS), a 41-item food frequency questionnaire with relatively low administration burden, assessed usual and long-term eating behaviors. Prior research demonstrates it has good relative validity for children and adolescents [35]. The BKFS documented the frequency and quantity of foods and beverages consumed during the previous week and was completed by children with the help of a trained research assistant. Dietary analysis, using the Block Online Analysis System, produced nutrient estimates and number of servings by food group.
Statistical Analyses
Demographic data were analyzed using descriptive statistics, specifically means with standard deviations and frequencies with percentages. To compare change at 12 months from baseline, a series of paired t-tests assessed change in mean daily intake of key food groups as well as child-reported food security and household food security. Additionally, independent t-tests examined whether mean change in daily intake of vegetables, fruits, fiber, whole grains, and dairy differed by key child demographics (age group, gender, and race/ethnicity). Change in daily intake of vegetables, fruits, fiber, whole grains, and dairy was calculated by subtracting each value at baseline from the daily intake value at 12 months. McNemar analysis was used to determine whether there was a significant increase in the number of children who reported daily consumption of at least 1/4 cup, 1/2 cup, and 1 cup of vegetables and fruits at the 12-month follow-up. Finally, multiple logistic regression examined the relationship between key child demographics and those who reported an increase of at least 1/4 cup of vegetables and fruits at the 12-month follow-up. In relation to logistic regression analyses, for age group, younger age was coded as 1; for gender, male was coded as 1; and for race/ethnicity, African American was coded as 1. Data were analyzed using IBM SPSS Statistics version 27.
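A minimal sketch of this analysis pipeline, using SciPy and statsmodels rather than SPSS; every array, table, and predictor below is a hypothetical stand-in, not study data.

```python
# Sketch of the analysis pipeline described above, using SciPy and
# statsmodels rather than SPSS; all inputs are hypothetical stand-ins.
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.5, 122)             # daily cups at baseline
followup = baseline + rng.normal(0.2, 0.4, 122)  # daily cups at 12 months

# Paired t-test for change from baseline to 12 months
t_stat, p_val = stats.ttest_rel(followup, baseline)

# McNemar test on a 2x2 table of meeting a 1/4-cup threshold at each visit
table = [[30, 12], [34, 46]]                     # hypothetical counts
mc = mcnemar(table, exact=False, correction=True)

# Logistic regression: increase of >= 1/4 cup ~ age group + gender + race
increased = ((followup - baseline) >= 0.25).astype(int)
predictors = sm.add_constant(rng.integers(0, 2, (122, 3)).astype(float))
logit = sm.Logit(increased, predictors).fit(disp=False)

print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}; McNemar p = {mc.pvalue:.4f}")
print(np.exp(logit.params))  # odds ratios, analogous to Exp(B) in SPSS
```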
Demographics
A total of 122 caregiver-child dyads (244 participants) completed surveys at baseline and 12-month follow-up, with the majority (70%) reporting residency in Flint. As shown in Table 1, approximately half of youth were female (52%), and most were African American (63%). Age of children ranged from 8 to 18 years (mean age 12.42 ± 2.78). Most caregivers were female (93%) and African American (59%). Thirty-seven percent of caregivers reported having a high school degree or less.
Distribution Rate
From August 2018 through September 2019, a total of 7827 patients (birth to 18 years of age) visited Akpinar Children's Clinic, and 5953 prescriptions for fresh fruits and vegetables were ordered via EMR and distributed to patients. This reflects a 76% prescription distribution rate during the program's inaugural year.
Food Security
The US Household Food Security Module was completed by 122 caregivers at baseline and 12-month follow-up. Mean household food security score decreased significantly (p < 0.001) from baseline (1.96 ± 2.20) to the 12-month follow-up (0.87 ± 1.25), indicating an improvement in food security. There was no difference in change in caregiver-reported household food security score by caregiver age group (p = 0.47), caregiver-reported race/ethnicity (p = 0.85), or caregiver education level (p = 0.45). A total of 67 children (≥12 years of age) completed the Food Security Survey Module for Youth. Mean child-reported food security score also decreased significantly (p = 0.01) from baseline (1.88 ± 2.06) to 12-month follow-up (1.04 ± 1.97), indicating an improvement in food security. There was no difference in change in child-reported food security score by child age group (p = 0.55), gender (p = 0.85), or between those who reported race/ethnicity as African American or white (p = 0.12).
Dietary Behaviors of Children
The BKFS was completed at baseline and 12-month follow-up by 122 children. As shown in Table 2, children reported improvements in mean daily intake of vegetables (p = 0.001), whole grains (p = 0.001), fiber (p = 0.008), and dairy (p < 0.001) after 12 months of exposure to the FVPP. There were no significant differences in change in mean daily intake of vegetables, whole grains, fiber, or dairy by child age group, child gender, or child race/ethnicity (Table 3). Although change in mean daily intake of total fruit in the entire sample of children was not significant, there was a significant difference in mean change of daily fruit intake when comparing consumption of total fruits by race/ethnicity, with African American children reporting a mean decrease of 0.3201 cups of fruit per day and white children reporting a mean increase of 0.209 cups of fruit per day (p = 0.04). There was no difference by child age group or gender in reported intake of total fruit (Table 3). Given the program's focus on improving intake of fruits and vegetables, an examination of the percentage of children who achieved dietary recommendations for mean daily intake of fruits and vegetables was considered. Unfortunately, very few children met current dietary recommendations for fruit and vegetable intake at either timepoint. As a result, we examined the number of children who reported mean daily consumption of at least 1/4 cup, 1/2 cup, and 1 cup of vegetables and fruits at baseline and 12-month follow-up (Table 4). There was a significant increase in the number of children who reported daily consumption of at least 1/4 cup (p < 0.001), 1/2 cup (p < 0.001), and 1 cup (p = 0.001) of vegetables at the 12-month follow-up. Additionally, there was a significant increase in the number of children who reported daily intake of at least 1/2 cup (p = 0.02) and 1 cup (p = 0.003) of total fruits at the 12-month follow-up. Using a logistic regression, the impact of child race/ethnicity, gender, and age group on an increase of at least 1/4 cup in intake of fruits and vegetables was examined. For vegetables, the model was significant (X2 = 8.14, p = 0.04), indicating that these demographic variables were predictive of who reported an increase of at least 1/4 cup in intake of vegetables at 12 months. In this model, age group (p = 0.06, 95% CI 0.97-4.87) and gender (p = 0.51, 95% CI 0.59-2.87) were not significant, while race/ethnicity was significant. Children who reported their race/ethnicity as African American were less likely (Exp(B) = 0.39, 95% CI 0.16-0.93) to have increased their daily intake of vegetables by 1/4 cup at the 12-month visit when compared to children who reported their race/ethnicity as white (p = 0.03). For total fruits, the model was also significant (X2 = 10.92, p = 0.01). Here, age group (p = 0.43, 95% CI 0.61-3.23) was not significant, while gender (Exp(B) = 0.36, 95% CI 0.15-0.82, p = 0.02) and race/ethnicity (Exp(B) = 0.37, 95% CI 0.15-0.88, p = 0.03) were significant. Male children and those who reported their race/ethnicity as African American were less likely to have increased their daily intake of fruit by at least 1/4 cup at 12 months when compared to female children and those who reported their race/ethnicity as white, respectively.
Discussion
Impacting approximately 6.5 million US children, food insecurity is associated with serious consequences that include poor diet quality [36][37][38], negative health and behavioral outcomes [39][40][41][42], and low academic achievement [41,43]. Even intermittent food insecurity, which causes occasional or modest undernutrition among youth, is likely to have long-term neurocognitive and developmental implications [41,[44][45][46]. Pediatricians have a responsibility to screen households with children for food insecurity [47], but resources to address the underlying issues are frequently lacking [48,49]. Previous research has suggested that caregivers whose children were exposed to the pediatric FVPP at Akpinar Children's Clinic perceived the program to be effective in improving household food security, with many describing how they held onto prescriptions to redeem for fruits and vegetables when food resources were depleted [26]. Consistent with this qualitative finding, caregivers and children in the current study reported significant improvements in measured food security following 1 year of exposure to the identical FVPP. Implemented within two distinctly different clinic environments, the FVPP successfully addressed not only food insecurity but also nutrition security, specifically the provision of healthy foods.
Unique in design, the current FVPP provided prescriptions for fruits and vegetables to all pediatric patients, regardless of health condition or socioeconomic status, to emphasize the important role fruits and vegetables have in health promotion and disease prevention among all children [1][2][3]. This primary prevention approach is markedly different from previous efforts that have employed produce prescriptions for adults with diet-related chronic health conditions as a disease-management strategy [50][51][52][53][54]. With fruit and vegetable consumption continually falling short of national goals among US children and adolescents [13,[55][56][57], the current program was intentionally designed to send a consistent and straightforward message regarding the importance of fruits and vegetables in a healthy diet. Pediatricians actively promoted that message through the provision of prescriptions that enabled children to purchase fresh, high-quality fruits and vegetables. Previous research has indicated that simply giving fruits and vegetables to children is likely to have an important influence on dietary intake through familiarization [21]. Repeated exposure is, in fact, a key mechanism through which youth food acceptance occurs [58,59].
Although participants in the current study failed to meet current dietary recommendations, noteworthy improvements in mean daily consumption of vegetables, whole grains, fiber, and dairy were consistently reported across all age, gender, and ethnic categories. Because greater fruit and vegetable intake during childhood is associated with reductions in chronic disease in adulthood [8,9,14-16], this particular finding, which is consistent with previous research [23,25,26], highlights the potential long-term implications of pediatric prescriptions for fruits and vegetables.
Although exposure to the FVPP was associated with important dietary improvements among all participants, change in mean daily intake of total fruits differed by race/ethnicity. African American children reported a decrease in total fruit consumption over 12 months, while white children reported an increase in consumption. Further analysis also suggested that, when compared to white children, African American youth were less likely to increase their intake of fruits and vegetables by at least 1/4 cup. These findings support previous research that has noted similar differences in consumption by race/ethnicity [11,60]. The current FVPP may benefit from tailored nutrition education that carefully considers racial/ethnic factors that may influence dietary choices among youth. Additionally, the results highlight the continued need to actively address health disparities when implementing similar nutrition interventions.
Recent evidence illustrates parental desire for healthcare systems to not only focus on interventions at the clinic and community levels, but to also advocate for more expansive policies for alleviating barriers to healthy foods [61]. Pediatric prescriptions for fruits and vegetables are increasingly demonstrating effectiveness in combatting food insecurity [24,26] and poor dietary patterns [23,25,26] among children. The current study provides evidence of an effective and reproducible model for widespread prescription distribution within various clinic settings. Although the current initiative was supported by foundation funding through a competitive grants process, opportunities for federal grant support are emerging. Produce prescription programs were added to the US Farm Bill in 2018 through the Gus Schumacher Nutrition Incentive Program. This program allocated USD 25 million toward FVPPs and committed to increasing funding to USD 56 million by 2023 [62].
Limitations of the current study include the lack of a control group. However, pediatricians and community partners questioned whether the inclusion of a control group was ethical when children in Flint are facing hunger and food insecurity. The sample was small and specific to one low-income, urban community. As a result, findings may not be generalizable. However, the current fruit and vegetable prescription program could be modeled in similar communities confronted with enduring obstacles to healthy food access and affordability. Additionally, there may have been selection bias as responses from dyads who chose not to participate may have differed from those who voluntarily enrolled. However, characteristics of the study population closely match those of the source population of predominantly low-income, minority families receiving public health insurance. Finally, the accuracy of the BKFS may be limited by recall bias, but a trained research assistant was consistently available when children completed this instrument to minimize this limitation.
Conclusions
Pediatricians have a primary role not only in identifying children who are at risk for food insecurity or poor diet and connecting them to community resources, but also in advocating for policies that support access to healthy foods for children of all income levels [47]. The current study provides evidence that fruit and vegetable prescriptions, easily ordered through EMR systems and provided to all pediatric patients, may have a significant influence on the food insecurity and dietary patterns of children living in a low-income, urban community. In addition to monitoring the long-term impact of the FVPP, future research will investigate the influence of prescription distribution and redemption patterns on the food security and dietary behaviors of children.

Informed Consent Statement: Informed consent/assent was obtained from all subjects involved in the study.
Data Availability Statement: Requested data may be provided after IRB approval and appropriate data use agreements have been obtained.
Quantifying proximity-induced superconductivity from first-principles calculations
Proximity-induced superconductivity with a clean interface has attracted much attention in recent years. We discuss how the commonly employed electron tunneling approximation can be hybridized with first-principles calculation to achieve a quantitative characterization starting from the microscopic atomic structure. Using the graphene-Zn heterostructure as an example, we compare this approximated treatment to the full ab initio anisotropic Eliashberg formalism. Based on the calculation results, we discuss how superconductivity is affected by the interfacial environment.
I. INTRODUCTION
Since the establishment of the BCS theory 1,2, the occurrence of superconductivity (SC) in non-superconducting materials (N) placed in proximity to a superconductor (S) has been studied by various semi-empirical approximations 3-5. Historically, the dirty interface was better modelled, for which the detailed interfacial structure is less important and the motion of the superconducting electrons can be described by a simplified diffusion equation 4,6. In contrast, for a clean NS heterostructure, an atomic characterization of the interfacial coupling is more complicated.
In the past two decades, first-principles calculation within the framework of density functional theory (DFT) 7,8 has reached a status where it reliably describes not only the normal states of a wide range of materials 9,10, but also conventional superconductivity mediated by phonons 11,12. Calculating electron-phonon couplings (EPCs) from first principles is rapidly reaching maturity, thanks to the development of density functional perturbation theory (DFPT) 13,14. In addition, the computational cost of evaluating EPCs on a dense mesh of the Brillouin zone is significantly reduced by the efficient first-principles interpolation technique based on maximally localized Wannier functions (MLWF) 12. This progress makes quantitative and predictive calculations of interface superconductivity possible.
This Article aims to demonstrate a general strategy to quantify proximity-induced superconductivity from first principles. We use the graphene-Zn heterostructure as an example to calculate and compare the performance of different treatments of the proximity effects, such as the electron tunneling approximation and complete DFT+DFPT calculations.
II. THEORETICAL FRAMEWORK
Let us consider three atomic structures for first-principles calculations: (i) an N slab; (ii) an S slab; and (iii) an NS heterostructure combining the two. For practical purposes, we expect that the two slabs have commensurate or nearly commensurate surfaces, so that the heterostructure can be constructed in a computationally feasible supercell.
Applying DFT and DFPT calculations to these structures renders the following description:

H_α = H^e_α + H^{ph}_α + H^{e-ph}_α,    (1)

with α = N, S and NS. The electron and phonon Hamiltonians (H^e_α and H^{ph}_α) are readily diagonalized by DFT and DFPT, and the EPC Hamiltonian (H^{e-ph}_α) is parameterized in momentum space:

H^{e-ph}_α = Σ_{nmν,kq} g^{αν}_{nk,mk+q} c†_{α,mk+q} c_{α,nk} (b_{αν,q} + b†_{αν,−q}),    (2)

in which ε^α_{nk} (ω^α_{νq}) and c_{α,nk} (b_{αν,q}) denote the electronic (phonon) eigenenergies and annihilation operators acting on the eigenstates indexed by the in-plane lattice momentum k (q) and an additional band label n (ν). g^{αν}_{nk,mk+q} is the EPC coefficient. It is understood that when magnetism and spin-orbit coupling are taken into account, n should also index the spin degree of freedom.
All the proximity effects (PEs) are in principle encoded in

H^{PE} = H_{NS} − H_N − H_S,    (3)

which can be divided according to Eq. (1) into

H^{PE} = H^{PE}_e + H^{PE}_{ph} + H^{PE}_{e-ph}.    (4)

Among the three terms, H^{PE}_e is in many cases treated as the main driver 3,4,20, which not only helps simplify the calculation, but also provides a pedagogical understanding of the superconductivity induced in N via electron tunneling. We will first discuss how to hybridize first-principles calculation with this electron tunneling approximation, and then switch to a full ab initio description.
We note that a prerequisite for the discussions below is that the DFT and DFPT descriptions are adequate for the constituent slabs. The only effect of the residual electron-electron interaction is presumed to be an isotropic reduction of the phonon-mediated e-e attraction, as parameterized by a single dimensionless Coulomb pseudopotential μ*. We do not consider cases violating the Migdal theorem either.
A. Electron tunneling approximation
Many model studies on the proximity effect directly start from coupling a pure electron Hamiltonian for the N layer to a Bogoliubov-de Gennes Hamiltonian for the S layer. Referring back to the first-principles formalism, this treatment can be rephrased as:

(i) Select electron tunneling as the dominant PE:

H^{PE} ≈ H^{PE}_e.    (5)

(ii) For H_N, the phonon effect is assumed to be negligible:

H_N ≈ H^e_N.    (6)

(iii) H_S is simplified into a single-band BCS Hamiltonian:

H_S ≈ Σ_k ε_{S,k} c†_{S,k} c_{S,k} − V_S Σ_{k,k′} c†_{S,k′↑} c†_{S,−k′↓} c_{S,−k↓} c_{S,k↑},    (7)

in which V_S is the averaged pairing potential on the Fermi surface (FS). V_S multiplied by the FS density of states (DOS) N_F gives the dimensionless EPC strength λ routinely computed from DFPT:

λ = N_F V_S.    (8)

We first consider the effect of H^{PE}_e [Eq. (5)]. The electronic eigenstates of the heterostructure (|NS, nk⟩) form a new basis by hybridizing |N, nk⟩ and |S, k⟩. Then, rotating the EPCs and the effective pairing potential to this new basis gives

g^{NS,ν}_{nk,mk+q} = ⟨NS, nk|S, k⟩ g^{S,ν}_{k,k+q} ⟨S, k+q|NS, mk+q⟩    (9)

and

V^{NS}_{nk,mk} ≈ w^S_{nk} V_S w^S_{mk}.    (10)

The projection weight appearing in the last line is defined by w^S_{nk} ≡ |⟨NS, nk|S, k⟩|². Since we have assumed the commensurate condition, all the momenta refer to a common super Brillouin zone. There is no overlap between states with different k's.
For a multiband SC, we extend this formula by approximating w^S_{nk} as the total weight of |NS, nk⟩ projected onto the S slab. We can also include the EPC contributions from the N slab, writing

V^{NS}_{nk,mk} ≈ w^S_{nk} V_S w^S_{mk} + w^N_{nk} V_N w^N_{mk},  V_N ≡ λ_N / N^N_F,    (11)

in which λ_N is the dimensionless EPC strength defined on the FS of the N slab. The approximated V^{NS}_{nk,mk} can thus be organized into a 2×2 block matrix according to the projection weights:

V^{NS} ≈ ( V_{N→N}  V_{N→S} ; V_{S→N}  V_{S→S} ).    (12)

When the coefficients of V^{NS} do not vary drastically within each block, it is plausible to perform a block average. In analogy to the method used for a two-gap superconductor, e.g. MgB2 22-25, a 2×2 dimensionless EPC matrix can be defined:

Λ_{αβ} = N^β_F V_{αβ},  α, β ∈ {N, S},    (13)

in which N^α_F is the contribution to the FS DOS from the corresponding block. The largest eigenvalue of Λ, denoted λ_max, can then be plugged into the semi-empirical McMillan-Allen-Dynes formula 26 to predict the T_c of this hybrid system:

T_c = (ω_log / 1.2) exp[ −1.04(1 + λ_max) / (λ_max − μ*(1 + 0.62 λ_max)) ],    (14)

where ω_log is a logarithmic average of the phonon frequencies. The ratio between the superconducting gaps in the N (proximitized gap) and S (intrinsic gap) layers can be estimated from the eigenvector corresponding to λ_max. We will elaborate on these details in Sec. III based on a concrete example. The great advantage of the electron tunneling approximation is that Eq. (5) refers to the DFT data only, while the expensive EPC calculation is restricted to the isolated S and N slabs, which significantly reduces the computational complexity.
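As a concrete illustration of this block-average-and-eigenvalue procedure, the sketch below computes λ_max, the associated gap ratio, and the McMillan-Allen-Dynes T_c from a 2×2 Λ matrix. The matrix entries and ω_log value are placeholders (the paper's actual matrices appear later as Eqs. (17)-(19)), so the printed numbers are illustrative only.

```python
# Minimal sketch of the block-averaged EPC analysis of Sec. II A.
# The Lambda entries and omega_log below are placeholders, not the
# values computed in this work.
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K


def allen_dynes_tc(lam, mu_star, omega_log_meV):
    """McMillan-Allen-Dynes Tc of Eq. (14); omega_log in meV, Tc in K."""
    omega_log_K = omega_log_meV * 1e-3 / K_B  # convert meV -> K
    return (omega_log_K / 1.2) * np.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )


# Hypothetical 2x2 dimensionless EPC matrix (N block first, S block second).
# Lambda is generally non-symmetric because Lambda_ab = N_F^b * V_ab.
Lam = np.array([[0.05, 0.10],
                [0.15, 0.45]])

evals, evecs = np.linalg.eig(Lam)
i_max = np.argmax(evals.real)
lam_max = evals.real[i_max]
v = np.abs(evecs[:, i_max].real)  # leading eigenvector (N, S) components

print(f"lambda_max = {lam_max:.3f}")
print(f"proximitized/intrinsic gap ratio ~ {v[0] / v[1]:.2f}")
print(f"Tc ~ {allen_dynes_tc(lam_max, mu_star=0.115, omega_log_meV=10.0):.2f} K")
```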
B. Full ab initio treatment
If a complete DFT+DFPT calculation on the heterojunction is attainable, V^{NS}_{nk,mk} can be obtained without approximation.
Plus, the semi-empirical McMillan-Allen-Dynes formula can be replaced by the anisotropic and frequency-dependent Migdal-Eliashberg equations 11,12:

Z(nk, iω_n) = 1 + (πT/(N_F ω_n)) Σ_{mk′n′} [ω_{n′}/√(ω²_{n′} + Δ²(mk′, iω_{n′}))] λ(nk, mk′, n−n′) δ(ε_{mk′} − E_F),

Z(nk, iω_n) Δ(nk, iω_n) = (πT/N_F) Σ_{mk′n′} [Δ(mk′, iω_{n′})/√(ω²_{n′} + Δ²(mk′, iω_{n′}))] [λ(nk, mk′, n−n′) − μ*] δ(ε_{mk′} − E_F),    (15)

with

λ(nk, mk′, n−n′) = N_F Σ_ν |g^ν_{nk,mk′}|² 2ω_{qν}/((ω_n − ω_{n′})² + ω²_{qν}),  q = k′ − k,    (16)

in which Δ, Z, and ω_n represent the superconducting gap, the renormalization function, and the fermionic Matsubara frequencies. Solving Eq. (15) self-consistently for the heterostructure reduces the approximations to a minimum within the DFT and DFPT formalism. It is worth mentioning that besides H^{PE}_e, H^{PE}_{ph} and H^{PE}_{e-ph} may also play an important role, e.g. via phonon renormalization and interfacial phonon scattering, which is captured by the full ab initio treatment. The relative importance of these different mechanisms could be strongly system dependent and is hard to decide a priori without microscopic calculations. Whenever possible, a cross-check between the electron tunneling approximation and the full ab initio treatment will be helpful for understanding the origin(s) of the proximity-induced superconductivity.
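To make the self-consistency requirement of Eq. (15) concrete, the sketch below iterates a deliberately simplified isotropic version of the Matsubara-axis equations with a single Einstein phonon of frequency ω_E and coupling λ0. The paper itself solves the full anisotropic, k-resolved equations with EPW; this is a toy illustration of the iteration loop only, with made-up parameters.

```python
# Toy isotropic Eliashberg solver on the Matsubara axis (illustration of
# the self-consistency loop only; the paper uses the full anisotropic,
# k-resolved formalism). Energies are in meV, with k_B = 1.
import numpy as np


def solve_gap(T, lam0=0.8, w_E=10.0, mu_star=0.115, n_max=256):
    """Return the gap Delta(i*omega_0) in meV at temperature T (in meV)."""
    n = np.arange(-n_max, n_max)
    wn = np.pi * T * (2 * n + 1)                 # fermionic Matsubara freqs
    # lambda(n - n') for a single Einstein mode:
    #   lam(nu) = lam0 * w_E^2 / (w_E^2 + nu^2),  nu = wn - wn'
    nu = wn[:, None] - wn[None, :]
    lam = lam0 * w_E**2 / (w_E**2 + nu**2)

    delta = np.full_like(wn, 1.0)                # initial gap guess, 1 meV
    for _ in range(5000):
        denom = np.sqrt(wn**2 + delta**2)
        Z = 1.0 + (np.pi * T / wn) * (lam @ (wn / denom))
        new = np.pi * T * ((lam - mu_star) @ (delta / denom)) / Z
        if np.max(np.abs(new - delta)) < 1e-9:
            break
        delta = 0.5 * delta + 0.5 * new          # linear mixing for stability
    return delta[n_max]                          # gap at the lowest frequency


kB = 8.617333e-2                                 # meV per K
for T_K in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"T = {T_K:.1f} K  ->  Delta = {solve_gap(T_K * kB):.4f} meV")
```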
III. CASE STUDY: GRAPHENE ON ZN
As an example of applying the general framework, we consider the graphene-superconductor heterojunction, which has led to useful applications, such as photon detectors 27 and Cooper pair splitters 28, and motivates a variety of theoretical proposals to achieve exotic superconducting phases 29. We note that while experiments usually apply an external voltage and measure the supercurrent injected into graphene, here we focus on the equilibrium SC state in the heterojunction.
A. Numerical setup
A 6-layer Zn (001) slab is chosen as the superconducting substrate. The experimental critical temperature of Zn is reported to be T_c = 0.79 K 30. We choose Zn mainly for its good lattice match to graphene. Fixing the in-plane lattice constants of the computational supercell to the fully relaxed Zn bulk parameters a = b = 4.97 Bohr (cf. the experimental value a = b = 5.04 Bohr 31) introduces about 7% tensile strain in the graphene. According to our previous works on graphene 32,33, tensile strain tends to soften the phonons and enhance the EPC strength, but 7% is not sufficient to induce intrinsic superconductivity within a reasonable carrier density range. The atomic structure of the graphene-Zn heterojunction is shown in Fig. 1, including a 28 Å thick vacuum layer normal to the 2D surface. The Zn slab is cleaved from a fully relaxed bulk structure, and the Zn-C interfacial spacing is determined by minimizing the total energy. We do not consider surface corrugation or additional structural reconstruction, so a minimal unit cell containing two C atoms and six Zn atoms can be used with periodic boundary conditions.
We perform the first-principles calculations by using Quantum Espresso (QE) 9,10 with norm-conserving pseudopotentials 34 and the Perdew-Burke-Ernzerhof exchange-correlation functional 35. The D3-type van der Waals correction is included to improve the description of the C-Zn interfacial coupling 36. The plane-wave energy cutoff is set to 80 Ry. The electronic convergence criterion is 10^−10 Ry. The EPC is first calculated by the EPW code on a 24 × 24 × 1 (6 × 6 × 1) k (q) mesh, and then interpolated onto a 180 × 180 × 1 (90 × 90 × 1) k (q) mesh. The anisotropic Eliashberg equation is solved by setting the k and q meshes both to 60 × 60 × 1.
B. Results
A first impression of H^{PE} defined in Eq. (3) can be obtained by comparing several key properties before and after the junction is formed. Figure 2 plots the electronic band structures, Fermi surfaces, and phonon dispersions of graphene, the Zn slab, and the heterojunction. The junction properties can be well traced back to the two constituent parts, owing to a relatively weak interfacial coupling in this case. Nevertheless, electron tunneling effects can be observed from the small hybridization gaps that open wherever a graphene band and a Zn band cross. The Zn FS can be divided into Γ-centered sheets and K-centered sheets. The latter are most relevant to hybridization with the graphene Dirac bands. The Fermi level of the heterojunction is determined self-consistently during the DFT loop, which indicates 0.026 electrons per unit cell transferring from Zn to graphene spontaneously. For the freestanding graphene, we manually adjust the Fermi level to the same electron filling as in the junction. In the phonon spectra, the size of the blue markers reflects the phonon-resolved dimensionless EPC constant (λ_qν). It is found that in the heterojunction [Fig. 2(i)] the EPC is dominated by the long-wave out-of-plane acoustic vibration of the Zn atoms, which inherits the feature of the pure Zn slab [Fig. 2(f)]. Interestingly, forming a heterojunction leads to a slight enhancement of these λ_qν.

Figure 3(a) plots the ab initio V^{NS}_{nk,mk} matrix when graphene and Zn are separated, and thus there is no scattering between them. We select the EPCs from scatterings between pairs of electronic states within a ±200 meV energy window around the FS, and a 10 meV wide smearing function of the Gaussian type is used to numerically replace δ(ε_nk − E_F). Note that the phonon-induced attraction in graphene is not weak, but since the FS density of states is low, the dimensionless EPC constant is small. Figure 3(b) is an estimation of the V^{NS}_{nk,mk} matrix in the heterostructure by using the electron tunneling approximation [Eq. (10)], which introduces V^{NS}_{N→S} and V^{NS}_{S→N}, giving rise to the structure of a 2 × 2 block matrix as expected in Eq. (12). Figure 3(c) plots the full ab initio V^{NS}_{nk,mk} matrix. These FS electronic states are sorted in descending order of the projection weight in graphene, i.e., the upper left corner corresponds to the graphene-dominated block. The block boundary [black solid lines in Fig. 3(c)] is chosen to be where w^S_{nk} = w^N_{nk}. The full ab initio V^{NS}_{nk,mk} matrix displays a richer structure within the Zn block, indicating that the interface modulates the attraction potential on different sheets of the Zn FS. This type of PE is clearly beyond the scope of the electron tunneling approximation.
We reduce the V^{NS}_{nk,mk} matrix derived from the electron tunneling approximation [Fig. 3(b)] to a 2 × 2 dimensionless EPC matrix according to Eq. (13), yielding Eq. (17). The largest eigenvalue is λ_max = 0.532, dominated by the EPC of the Zn part. The eigenvector associated with λ_max is (0.266, 0.964).

Reducing the first-principles V^{NS}_{nk,mk} matrix [Fig. 3(c)] to a 2 × 2 dimensionless EPC matrix [Eq. (18)] gives the largest eigenvalue λ_max = 0.803, with associated eigenvector (0.504, 0.864). We can also preserve the additional structure within the Zn block, partitioning the V^{NS}_{nk,mk} matrix into a 4 × 4 dimensionless EPC matrix. The block average [Eq. (19)] leads to the largest eigenvalue λ_max = 0.900, with eigenvector (0.243, 0.364, 0.720, 0.538). Figure 3(d) shows the SC gap on the FS at 1 K, derived from the first-principles anisotropic Eliashberg equations.
The gap size varies dramatically on the different sheets of the Zn FS, consistent with the structure of the first-principles V^{NS}_{nk,mk} matrix. A proximity-induced gap (0.1∼0.3 meV) can be found on the graphene FS. Figure 3(e) plots the distribution of the gap size. Tracing the temperature evolution of the two marked peaks (Δ1 and Δ2) determines a T_c = 3.6 K [Fig. 3(f)].
C. Discussion
For the solution of the Eliashberg equations we chose μ* = 0.115. An exact determination of the μ* value is beyond the scope of the present work. In the discussions below, we will always use this fixed μ* value without further fine-tuning, and the Eliashberg results can be regarded as a benchmark for the performance of the other approximated treatments.
By feeding λ_max and μ* into Eq. (14), the estimated T_c's based on the λ_max values from Eqs. (17), (18), and (19) are 0.9 K, 3.2 K, and 4.0 K, respectively. The last two numbers, from the 2 × 2 and 4 × 4 partitioning of the first-principles V^{NS}_{nk,mk} matrix, are in reasonable agreement with the Eliashberg result T_c = 3.6 K. The first one, from the electron tunneling approximation, is significantly lower, but very close to the intrinsic T_c of bulk Zn (0.79 K) as determined in experiment 30. This result is understandable: in the electron tunneling approximation, the SC is essentially inherited from the isolated Zn slab, while all the other treatments include extra interfacial effects in addition to electron tunneling.
The enhancement of SC in the heterojunction, as predicted by both the first-principles V^{NS}_{nk,mk} matrix and the Eliashberg results, is attributed to some long-wave out-of-plane acoustic phonons [cf. Figs. 2(f,i) and 3(d)]. This result is interesting, but should be interpreted with caution, because this type of vibration is sensitive to the interfacial environment. Just as the gap variation predicted by the Eliashberg equations can be easily washed out by defects and disorder in a real sample, the interface-enhanced SC might only occur in the ideally clean limit. Nevertheless, this result vividly demonstrates that it is possible to tune SC by controlling the interfacial structure.
According to the eigenvector associated with λ_max, the electron tunneling approximation estimates the proximity-induced SC gap in graphene to be 0.266/0.964 ≈ 28% of the intrinsic Zn SC gap. The estimation from the first-principles V^{NS}_{nk,mk} matrix in the 2 × 2 partitioning is 0.504/0.864 ≈ 58%. For the 4 × 4 partitioning, the ratio varies between 34% and 67%, depending on which Zn block is used as the reference. At the semi-quantitative level, these estimations predict an order of magnitude consistent with the Eliashberg results [cf. Fig. 3(e)].
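These percentages follow directly from the eigenvector components quoted above; a quick numerical check is sketched below. The assignment of the 4 × 4 eigenvector entries to the graphene and Zn blocks is inferred from the quoted 34-67% range rather than stated explicitly in the text.

```python
# Gap-ratio estimates from the reported eigenvectors (Sec. III B).
v2 = (0.266, 0.964)                    # 2x2 tunneling approximation
v4 = (0.243, 0.364, 0.720, 0.538)      # 4x4 first-principles partitioning

print(f"2x2 tunneling: {v2[0] / v2[1]:.0%}")            # -> ~28%
# Assume the first 4x4 entry is the graphene block and the rest are Zn
# blocks (an inference, not stated explicitly in the text).
graphene, zn_blocks = v4[0], v4[1:]
ratios = [graphene / z for z in zn_blocks]
print(f"4x4 range: {min(ratios):.0%} - {max(ratios):.0%}")  # -> 34% - 67%
```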
IV. CONCLUSION
In summary, we show that the power of first-principles calculation can be extended to quantify proximity-induced superconductivity. The electron tunneling approximation can be employed to significantly reduce the computational cost, putting forth a quick and convenient way to semi-quantitatively estimate the proximity-induced SC gap. A full EPC calculation on the heterostructure captures further interfacial effects, such as phonon renormalization and interfacial phonon scattering, providing useful information for interfacial SC engineering. By block averaging the EPC matrix as for a multi-band superconductor, a simple eigenvalue analysis is found to give quantitative predictions comparable to the much more time-consuming Eliashberg equations. This methodology is expected to find general applications in studies of interfacial SCs.
Navigating the Evolution of Digital Twins Research through Keyword Co-Occurrence Network Analysis
Digital twin technology has become increasingly popular and has revolutionized data integration and system modeling across various industries, such as manufacturing, energy, and healthcare. This study aims to explore the evolving research landscape of digital twins using Keyword Co-occurrence Network (KCN) analysis. We analyze metadata from 9639 peer-reviewed articles published between 2000 and 2023. The results unfold in two parts. The first part examines trends and keyword interconnection over time, and the second part maps sensing technology keywords to six application areas. This study reveals that research on digital twins is rapidly diversifying, with focused themes such as predictive and decision-making functions. Additionally, there is an emphasis on real-time data and point cloud technologies. The advent of federated learning and edge computing also highlights a shift toward distributed computation, prioritizing data privacy. This study confirms that digital twins have evolved into complex systems that can conduct predictive operations through advanced sensing technologies. The discussion also identifies challenges in sensor selection and empirical knowledge integration.
Introduction
The concept of digital twins has evolved beyond its original role in product lifecycle management [1] and become an essential element in the digital transformation across various sectors. Digital twin applications typically involve the creation, utilization, and sustainment of a virtual counterpart of a physical system, facilitating real-time, two-way data exchanges [2]. Digital twins enhance human-machine interactions and inter-machine communications. They dynamically and behaviorally mirror their physical counterparts, integrating both raw and processed data to reflect real-world conditions accurately. While proactive development of digital twins is advocated for optimal integration, retrofitting remains a common practice for existing systems [3].
Digital twins are categorized into four functional levels: representation, replication, reality, and relational [4], as shown in Figure 1. The foundational level, representation, focuses on data collection and physical system representation. The form of digital twins at this level is usually real-time data connectivity and visualization. A virtual model is created at the replication level to duplicate the physical system and produce the same outputs as the physical system. With the aid of cutting-edge simulation approaches, virtual models have become capable of monitoring and controlling industrial systems with more complex configurations [5]. Digital twins at this level are usually equipped with basic analytical models that can analyze and predict system conditions under existing scenarios. The reality level expands the digital twin's capabilities to exploratory "what-if" analyses, enabling predictions for hypothetical changes and scenarios. The most advanced level, the relational function, equips digital twins with machine learning models, providing insights that can be acted upon to optimize the physical system's performance. Digital twins at this level achieve a seamless, closed-loop, bidirectional data flow and integration between the physical and virtual realms. Smart technologies such as the Internet of Things (IoT) and Cyber-Physical Systems (CPSs) were thoroughly studied before the emergence of digital twins. The IoT is a network of physical objects with embedded sensors and other technologies that connect and exchange data with other devices [6]. CPSs, on the other hand, are complex systems that integrate the cyber world and the physical world through computing, communication, and control [7]. Although CPSs and digital twins seem similar in definition, they differ in their primary focuses. Digital twins focus on creating a comprehensive virtual model that mimics and predicts the behavior of its physical counterpart [8]. In contrast, CPSs focus on real-time control and the ability to respond to physical states, often through direct sensor and actuator involvement.
Digital twins rely on the same or similar enabling technologies as the IoT and CPSs. Because of this, digital twins, like the IoT and CPSs, have expanded beyond manufacturing into various other industries. The sensing ecosystem, which includes traditional physical sensors, advanced data analytics, and processing platforms, is central to the expansion of digital twins [9]. The sensing ecosystem captures and interprets the vast streams of data generated by the IoT and monitored by CPSs, forming the backbone of digital twin functionality and setting the stage for the vital role of sensors in digital twin-supported integrated systems.
Sensors, commonly represented by physical devices such as accelerometers and temperature gauges, are the fundamental interface between the physical and digital worlds. The basic structure of a sensor module serves to convert measurable physical phenomena into data streams for analysis and application in digital systems [10]. Yet the definition of sensors has expanded in today's interconnected environment. Today, a sensor can be anything that translates real-world variables into data, ranging from social media posts that gauge public sentiment to medical tests that provide insights into a patient's health. This broader interpretation enables sensors to serve as the primary medium for data flow and analysis across various applications.
Table 1 features a collection of papers that review digital twin applications in various areas, highlighting their contributions and identifying gaps and opportunities. This work adds quantitative research trend analysis to the current digital twin review landscape with Keyword Co-occurrence Network (KCN) analysis. KCN analysis is a tool to analyze the research landscape from the metadata of the literature [11]. This method allows us to thoroughly examine and interpret the extensive range of digital twin research.
Table 1. Reviews of digital twin applications, their key contributions, and identified gaps/opportunities.

Application Area | Key Contributions | Gaps/Opportunities Identified | Ref.
General Applications | Reviews covering concepts, key enabling technologies, and implementation of digital twins, including challenges and prospects across multiple domains | The need for standardization, data availability, processing power, interdisciplinary collaboration, and development of reference frameworks and performance metrics | [12-15]
Smart Manufacturing | Reviews digital twin integration in Industry 4.0 and digital supply chains and their optimization potential | The need for design frameworks, early detection of design flaws, clarity in research focus, and an organized research environment | [16,17]
Smart Grid and Smart City | Review of digital twins in energy management and infrastructure durability | Challenges in data management, analysis, real-time interaction, and effective distributed sensing updating | [18,19]
Agriculture | Current trends, roadmap, and open questions in digital twins for agriculture | The need for automated decision-making support and complex examples | [20-22]
Smart Healthcare | Review of digital twin applications in precision medicine, clinical trial design, hospital operations, and platforms supporting mobile health applications | Technical, regulatory, and ethical challenges in healthcare digital twins | [23,24]
Education | Review of digital twins in remote and virtual laboratories | Integration of digital twin concepts into educational systems | [25]

This review paper serves digital twin researchers and architects. The KCN method reveals the interconnectedness of knowledge components, concepts, technologies, and methodologies in digital twin research, aiding researchers in identifying emerging trends and under-researched areas. For digital twin architects, the KCN analysis provides information on the practical application of sensors and other advanced digital twin technologies. It can assist them in making informed architectural decisions and understanding the evolving landscape of digital twin applications in relation to Cyber-Physical Systems.
The remainder of this work is structured as follows. In the Methods section, we explain the process of KCN analysis and its implementation in the context of the digital twin literature. In the Results section, we present the analysis results, which include the temporal analysis of research trends, the mapping of sensing technology to application fields, and a detailed analysis of digital twin applications in various fields. These insights are presented with a series of visualizations and tables that illustrate the interconnected landscape of digital twin research. In the Discussion section, we explain the implications of these findings and conclude by reflecting on the challenges currently being faced and the potential paths for the future development of digital twins.
Methods
This study applies KCN analysis to investigate research trends in digital twin technology. The methodology includes a temporal KCN analysis to identify trends over time and a detailed review of the principal application categories. This section outlines the process for article collection, keyword extraction, KCN construction, and network evaluation metrics.
Article Collection and Screening
This study began with a thorough search for literature related to recent advancements in digital twins. We queried literature from Engineering Village, IEEE Xplore, and PubMed. Engineering Village and IEEE Xplore ensure comprehensive coverage of engineering-related subjects, and PubMed provides additional coverage of the medical literature. We selected articles that contain the terms "digital twin" OR "digital twins" in the metadata (title, keywords, and abstract). After narrowing the search to peer-reviewed journal articles and conference proceedings published in English from 2000 to 2023, we identified a total of 9639 papers and downloaded the metadata of this article collection.
We classified the papers according to their respective application categories to analyze the research trends of digital twins and their applications in different fields. We identified six primary application categories. Figure 2 displays the subtopics under each primary category. Although a paper may fall under multiple categories, it is assigned to the most relevant category based on its content.
Keyword Co-Occurrence Network Construction
After collecting the metadata from the publications identified in the initial screening, we converted this unstructured data into a structured format suitable for quantitative temporal analysis. To achieve this, we started by extracting keywords and key phrases from the abstract, the keywords section, and the title of each paper. We then used a Natural Language Processing (NLP) toolkit to extract essential information while minimizing language biases [26]. The toolkit automatically broke the title and keyword strings into phrases, eliminated common words such as "a" and "the", reduced words to their basic form, and reconciled different terminologies referring to the same concept, such as "cyber-physical systems" and "CPS".
To construct a KCN from the structured data, each keyword is treated as a node, and the co-occurrence of a pair of keywords in the same paper is treated as an edge connecting the co-occurring keyword pair (node pair). The resulting KCN is undirected and weighted, with edge weights indicating co-occurrence frequencies. The KCN, which consists of n unique keywords, is stored in an n × n adjacency matrix a. The value of each cell in the matrix, a_ij, is set to 1 if a connection exists between keywords i and j, and 0 otherwise.
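As an illustration, a KCN of this kind can be assembled in a few lines; the sketch below (with three toy papers standing in for the 9639 collected here) builds the weighted, undirected network from per-paper keyword lists using networkx.

```python
# Sketch of KCN construction from per-paper keyword lists. The three
# `papers` entries are toy stand-ins for the cleaned metadata records.
from collections import Counter
from itertools import combinations

import networkx as nx

papers = [
    ["digital twin", "internet of things", "smart manufacturing"],
    ["digital twin", "machine learning", "predictive maintenance"],
    ["digital twin", "internet of things", "machine learning"],
]

edge_counts = Counter()
for keywords in papers:
    for u, v in combinations(sorted(set(keywords)), 2):  # unordered pairs
        edge_counts[(u, v)] += 1                         # co-occurrence count

G = nx.Graph()
G.add_weighted_edges_from((u, v, w) for (u, v), w in edge_counts.items())

print(G.number_of_nodes(), "keywords;", G.number_of_edges(), "links")
print(G["digital twin"]["machine learning"]["weight"])   # -> 2
```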
This study conducts two types of analyses using the KCN. The first is a temporal research trend analysis, in which we segment publications into distinct time windows: 2000-2020, 2021, 2022, and 2023. We build a separate KCN for each time window to capture the evolving trends over time. The second analysis focuses on capturing research highlights within each application field. Therefore, we divided the publications by application category and constructed an individual KCN for each category. After constructing the networks, we calculated various network metrics for the subsequent analyses.
Network Metrics
This study applies five network metrics to evaluate the KCN. These metrics help identify important keywords, understand their interconnections, and determine the overall structure and trend of the research field. They measure node centrality, connections, and the local topology of node groups.
Node centrality is measured by degree and strength. The degree of a node refers to the number of unique nodes it directly connects to. The degree can be calculated from the adjacency matrix a. As shown in Equation (1), the degree of node i is the sum of a_ij over the nodes j in the group N_i that directly connect to node i:

d_i = Σ_{j∈N_i} a_ij.    (1)
The strength of a node counts the number of connections it has, taking into account the frequency of co-occurrences. As shown in Equation (2), the strength of node i is a weighted sum of a_ij, where w_ij is the number of co-occurrences between nodes i and j:

s_i = Σ_{j∈N_i} w_ij a_ij.    (2)
Average weight as a function of endpoint degree quantifies the relationship between the connectivity of nodes and the strength of their co-occurrences. To calculate this, we define the endpoint degree of an edge connecting nodes i and j as d_i d_j. We then examine the relationship between w_ij and d_i d_j for every edge in the network. Because many edges share identical endpoint degree values in large networks (e.g., 1 × 50 and 5 × 10 both equal 50 and hence have the same endpoint degree), we aggregate edges into a set E when they share the same endpoint degree. The average weight for edges in set E is then calculated by Equation (3), where |E| is the number of edges in set E:

w_E = (1/|E|) Σ_{(i,j)∈E} w_ij.    (3)

By plotting w_E against the endpoint degree of edge set E, we can visualize the patterns in keyword connectivity. A positive correlation indicates that highly connected keywords tend to co-occur more, while a negative correlation suggests that less connected keywords tend to co-occur more.
The local topology of the network is measured by the average weighted nearest neighbor degree and the weighted clustering coefficient. The average weighted nearest neighbor degree indicates the strength of a node's connection with its high- or low-degree neighbors. As shown in Equation (4), the average weighted nearest neighbor degree d^w_i of node i is obtained by summing the weighted connections to the node's direct neighbors and dividing by the node's strength:

d^w_i = (1/s_i) Σ_{j∈N_i} w_ij d_j.    (4)

A higher value indicates that a keyword is typically associated with highly connected keywords. By plotting d^w_i against d_i, we can visualize how nodes of different degrees behave in terms of connectivity.
The weighted clustering coefficient measures the level of interconnectedness among a node's neighboring nodes, taking into account the weight of each connection. As shown in Equation (5), the coefficient C^w_i is calculated by averaging the weights w_ij and w_ih of the connections that node i shares with its neighbors j and h over all closed triangles around i. The calculation incorporates a normalization factor s_i(d_i − 1), which adjusts for the number of potential connections and the strength of each:

C^w_i = (1/(s_i(d_i − 1))) Σ_{j,h} ((w_ij + w_ih)/2) a_ij a_ih a_jh.    (5)

A high C^w_i value indicates that a node's neighbors are not only interconnected but also connected through stronger ties; this suggests a more cohesive and tightly knit structure around the node. In the context of a KCN, a high weighted clustering coefficient for a keyword indicates robust thematic clustering.
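A compact implementation of Equations (1)-(5) directly from the weighted adjacency matrix is sketched below; the small example matrix is illustrative, not drawn from the actual KCNs.

```python
# Sketch of the network metrics of Eqs. (1)-(5), computed from a weighted
# adjacency matrix W (w_ij = co-occurrence count; 0 means no link).
from collections import defaultdict

import numpy as np


def kcn_metrics(W):
    A = (W > 0).astype(float)                # binary adjacency a_ij
    n = len(W)
    degree = A.sum(axis=1)                   # Eq. (1): d_i
    strength = W.sum(axis=1)                 # Eq. (2): s_i

    # Eq. (3): average edge weight grouped by endpoint degree d_i * d_j
    groups = defaultdict(list)
    for i, j in zip(*np.triu_indices(n, k=1)):
        if A[i, j]:
            groups[degree[i] * degree[j]].append(W[i, j])
    avg_weight = {k: float(np.mean(v)) for k, v in groups.items()}

    # Eq. (4): average weighted nearest-neighbor degree
    knn_w = (W @ degree) / np.where(strength > 0, strength, 1.0)

    # Eq. (5): weighted clustering coefficient (Barrat et al. form)
    C = np.zeros(n)
    for i in range(n):
        norm = strength[i] * (degree[i] - 1)
        if norm <= 0:
            continue
        acc = sum(
            (W[i, j] + W[i, h]) / 2.0
            for j in range(n) for h in range(n)
            if A[i, j] and A[i, h] and A[j, h]
        )
        C[i] = acc / norm
    return degree, strength, avg_weight, knn_w, C


W = np.array([[0, 2, 1],
              [2, 0, 3],
              [1, 3, 0]], dtype=float)
d, s, w_E, knn, C = kcn_metrics(W)
print("degree:", d, " strength:", s)
print("avg weight by endpoint degree:", w_E)
print("weighted nn degree:", knn, " clustering:", C)
```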
Results
In this section, we first analyze the evolution of digital twins research over time, featuring visualizations of emerging and declining topics. Then, we examine the application of sensor technology across different fields, highlighting the principal keywords in each area. Finally, we present specific case studies from each application field, showcasing practical implementations of digital twin technology.
Research Landscape Evolution Over Time
From the KCN analysis results, we see a clear growth and diversification trend in the digital twin field. Table 2 presents the statistics of the KCNs from 2000 to 2023. Here, we observe a substantial rise in the number of articles, keywords, and links, particularly after 2020. The increase in articles indicates a surge in research activity, while the growth in links points to an expanding web of interconnected topics. The development stage of this research field can also be assessed with the K value, based on Kuhn's model of scientific progression [27]. The K value is calculated by dividing the number of unique keywords by the total frequency of those keywords within a discipline. Derived from Table 2, the K values for the four time windows are 0.174, 0.155, 0.119, and 0.105, respectively. The declining trend in the K value, inversely proportional to the growing number of publications, suggests that the field of digital twins is in the midst of an evolution, aligning with Kuhn's pre-revolution or revolution stage.
Figure 3 reinforces this observation by showing the distribution of articles, keywords, and links across the four time periods, with significant growth in the latter two years. This suggests not only an increase in research volume but also an expansion in the complexity of the field. Figure 4 expands on these data by comparing the average network strength and the maximum weight of the network, both of which indicate an increase in inter-article and inter-topic connections.
Figure 5 offers a distribution of keyword degrees, strengths, and link weights. The upward trends in average and maximum network degrees from Figures 4 and 5 hint at a broadening scope of individual topics and articles, suggesting an increasingly collaborative research environment where topics are more interconnected. Notably, the outliers represent keywords that are highly connected and central to this research field. We will present and discuss these topics in the following sections. Figure 6 provides various insights into the dynamics of the network. Figure 6a shows the probability density function of keyword degree. A shift toward the right over time indicates that certain keywords are becoming increasingly prominent within the network. Figure 6b examines the average weight as a function of endpoint degree. The positive linear trend suggests that keywords with a higher degree tend to form stronger connections with other keywords. However, it is not clear from this graph alone whether these connections tend to be with other highly connected keywords or with emerging keywords. In addition, the subtle shift toward the right over time means that the combination of keyword degrees associated with a given average weight has been growing, suggesting that popular keywords are starting to act as hubs that connect newer topics into the network, facilitating the network's growth. Figure 6c presents the relationship between the average weighted neighbor's degree and the node degree. In all four time windows, there is no clear correlation between the degree of a node and its neighbor's degree. This complements the insights from Figure 6b and shows that highly connected keywords connect with a diverse range of nodes rather than only with other highly connected nodes. To accompany this insight, Figure 6d shows a decreasing trend in the weighted clustering coefficient, indicating that highly connected nodes act more as bridges than remaining within isolated clusters, pointing to an expanding and diversifying field.
These visualizations and metrics depict the characteristics of a rapidly growing, complex field, with foundational research expanding and certain topics gaining prominence. However, the consistent average weight across years also suggests that the additional links may not always contribute to the foundational research, raising questions about the depth and influence of recent publications. This nuanced view of the field's evolution indicates both robust growth and areas requiring further investigation to understand the research impact.
Emerging and Declining Research Topics
Figures 7 and 8 trace the changes in keyword relevance over time, from the earliest time window of 2000-2020 to the most recent time window of 2023. In each time window, we ranked keywords based on their strength, which is determined by the number of connections each keyword has. We then compared the rankings of keywords from both time windows to assess emerging or declining trends.
Because there was a substantial increase in overall keyword strength from the earlier to the later time window, we used rank as a proxy for a keyword's relevance within its specific period. Additionally, we categorized the keywords into two groups: those relating to digital twin applications and those relating to the sensing ecosystem, which includes sensors, machine learning methods, and computational systems.
It is important to note that a slight decline in a keyword's rank does not necessarily indicate a decrease in its importance or research focus. Instead, it may indicate a natural transition of the keyword from a novel research area to a more established topic that no longer occupies the forefront of emerging research themes. This shift can be seen as a maturation process within the research landscape, where once-novel concepts become foundational elements of the field. The Internet of Things is significant for providing the sensor data that feed digital twins. Cyber-Physical Systems are essential as they constitute the framework in which digital twins operate, integrating computation with physical processes to enable automated decision making. Industry 4.0 represents the current trend of automation and data exchange in manufacturing technologies, including Cyber-Physical Systems, the IoT, and cloud computing, which are inherently linked to the concept of digital twins. In addition, simulation serves as the analytical engine that enables the virtual representation to predict the behavior and performance of its physical counterpart.
There are two types of keywords that indicate emerging trends: application fields (areas where digital twins are being applied) and functions (what digital twins help achieve). The emerging application fields for digital twins include smart cities, energy consumption, healthcare, the construction industry, power systems, smart grids, and autonomous vehicles. The increasingly digitalized and intelligent infrastructure in these areas enables the implementation of digital twins. The increasingly diversified application fields for digital twins also explain the slight decline of smart manufacturing and the manufacturing industry in the right panel.
The emerging functions include digital transformation, decision making, resource allocation, predictive maintenance, fault diagnosis, and real-time monitoring. This trend can be attributed to advancements in machine learning and sensor technologies. As machine learning algorithms have become more sophisticated, digital twins are now able to not only replicate physical systems but also transform and optimize them. Digital twins also have enhanced decision-making capabilities, enabling automated and informed decisions based on predictive analytics and real-time data. On the sensing side, these functions create demand for sensors that can deliver immediate, interconnected, and diverse data types. Regarding the computation architecture that supports digital twins and machine learning functions, we notice a rising trend in edge computing and the metaverse and a declining trend in cloud computing. This points to a research area pivoting toward distributed computing paradigms, suggesting a move to bring processing closer to the data source for quicker insights. This trend implies that while cloud computing has become a well-established field, the frontier of research is moving toward systems that can handle analytics at the edge of networks.
As for the machine learning-related keywords, emerging models include deep learning, reinforcement learning, federated learning, surrogate models, and convolutional neural networks. This emergence corresponds to the need for sophisticated analytical tools capable of processing complex, multimodal sensor data. These methods are particularly suited to the demands of digital twins, offering enhanced capabilities for privacy preservation and data security.
Mapping Keywords to Application Fields
The previous analysis focused on the temporal characteristics of the research field and the trends in the relevance of the top keywords. In this section, we delve deeper and examine how digital twin technologies, specifically sensing, machine learning, and computation technologies, are being applied in different fields.
In the Methods section, we mentioned that we classified the literature into six application categories. In this section, we draw insights from each category and visualize the insights using Sankey diagrams. The left column of a Sankey diagram lists the keywords of interest, while the right column represents the application fields. The numbers on the left represent the number of papers that contain the keyword of interest. The number on the right is the sum of all streams of numbers from the left. Since we only selected the top keywords in each category to visualize, the number on the right should not be confused with the total number of papers in each category.
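For reference, diagrams of this kind can be generated with plotly's Sankey trace; the keyword-field links and paper counts in the sketch below are illustrative placeholders rather than the counts behind Figures 9-11.

```python
# Sketch of a keyword-to-application-field Sankey diagram with plotly.
# The link values are illustrative placeholders, not the paper counts
# behind Figures 9-11.
import plotly.graph_objects as go

keywords = ["real-time data", "point cloud", "LiDAR"]
fields = ["Manufacturing & supply chain",
          "Infrastructure & transportation",
          "Healthcare & human-centric"]
labels = keywords + fields

# (keyword index, field index, illustrative paper count)
links = [(0, 0, 120), (0, 2, 40), (1, 0, 80), (1, 1, 60), (2, 1, 70)]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=15, thickness=12),
    link=dict(
        source=[k for k, f, v in links],
        target=[len(keywords) + f for k, f, v in links],
        value=[v for k, f, v in links],
    ),
))
fig.update_layout(title_text="Sensing keywords mapped to application fields")
fig.show()
```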
Figure 9 displays the mapping from sensor technology to the different digital twin application fields. Real-time data and point cloud emerge as the most prevalent keywords, which validates the trend from the slope graph. From this graph, there are two types of sensor keywords: focused sensing technologies and cross-domain technologies. As for focused keywords, process data and vibration have found their place in manufacturing settings, as they are common practice for machine and equipment monitoring. Electrocardiograms and cardiac electrophysiology in healthcare may be linked to their potential to create high-fidelity visualizations for cardiac twins. LiDAR shows a strong association with infrastructure applications, likely due to its precision in capturing environmental data for smart city applications. Cross-domain keywords such as point cloud, data acquisition, and human-robot interaction point toward the versatility of these technologies. Point cloud data, with their high-resolution spatial information, are crucial not only in manufacturing and logistics but also in infrastructure for transportation and urban planning. Data acquisition stands out as a foundational element in the sensing ecosystem to ensure the quality of high-frequency and multimodal sensing data. Human-robot interaction emphasizes the increasing collaboration between humans and automated systems. In healthcare, this could translate to robotic surgery or patient care systems, while, in manufacturing, this can pertain to collaborative robots working alongside human operators. The emergence of federated learning points to a growing concern for data privacy and distributed computation, enabling collaborative model training without centralized data storage. This approach aligns well with digital twins, which often require the synthesis of distributed data while sustaining confidentiality, particularly in healthcare and business settings.
The strong connection between optimization techniques and manufacturing and supply chain applications underlines the role of digital twins in process improvement and efficiency gains.Meanwhile, the intersection of convolutional neural networks with infrastructure and transportation highlights their importance in image and video processing tasks relevant to these fields.
Interestingly, the relatively modest numbers attached to healthcare and human-centric technology may reflect the nascent integration of machine learning into these regulated domains, where safety and validation are paramount.
Overall, Figure 11 displays the mapping from computational technologies to the different digital twin application fields. Blockchain's notable presence across multiple fields, especially in business and asset management, highlights its role in enhancing security, transparency, and traceability. Its application within the manufacturing and supply chain domains indicates its potential to revolutionize how data across the digital twin lifecycle are securely managed and shared.
The Metaverse, often associated with immersive virtual environments, shows a substantial intersection with infrastructure and transportation. This could point toward the Metaverse's capacity for sophisticated simulations and virtual testing environments, which are crucial for planning and managing large-scale infrastructural projects. As suggested by the slope chart above, cloud computing displays a slightly declining influence, indicating a shift toward distributed computing paradigms such as edge computing. The big data and data-driven keywords maintain a steady connection with fundamental research, reflecting the ongoing need to process and analyze large datasets within the digital twin sphere to extract meaningful insights. In addition, semantic interoperability and data fusion, though not as dominant, indicate niche but vital areas in ensuring that digital twins can communicate effectively across systems and synthesize information from disparate sources.
Specific Cases in Each Application Field
Guided by the insights from the previous section, we select and review specific instances of digital twin research in this section. The tables in this section are a curated collection of publications based on the highlighted keywords from our Sankey analysis. This section aims to transition from high-level trends to individual research efforts, providing examples of how sensing, machine learning, and computation technologies are implemented within various application areas.
Table 3 presents a selection of studies in fundamental research on digital twins. The study on sensor calibration within building systems [28] demonstrates the ongoing effort to synchronize physical and virtual sensor data, which is a crucial step for accurate digital twin simulations. Research into sensor reliability [29] tackles the challenge of predictive maintenance by using redundant digital sensors to foresee potential sensor failures. Both studies emphasize the significance of sensor calibration in maintaining the operational integrity of digital twins. Challenges remain in improving the accuracy of a virtual model while maintaining a complex system built upon multiple sensors. Wearable ECG sensors [30] have been studied for low-latency signal analysis, enhancing the responsiveness of digital twins. This research resonates with the need to make digital twins interactive and user-centric. In the future, digital twins will serve not only as tools for simulation and monitoring but also as end-to-end platforms for interaction, providing intuitive feedback to users. The integration of tactile sensors in tactile devices [31] offers insight into the sensory augmentation possibilities within digital twins, while the use of LiDAR for user interface design [32] highlights the importance of high-resolution spatial data in creating intuitive teleoperation systems. Both studies suggest that as the digital twin user base grows, the user experience will become an essential factor, particularly in how objects are identified and interacted with within these virtual environments. Researchers should recognize that the usability of digital twins is as important as their technical accuracy [9].
Table 3 (excerpt): Point cloud | Visualization of the physical asset | Evaluate the recognition of physical objects and its implications on UI design for teleoperation systems | [32]

Table 4 presents a selection of research efforts showcasing the application of digital twin technology in the manufacturing and supply chain areas. In CNC machining, force sensors monitor the cutting torque in end milling processes, supplying data to a comprehensive dashboard that integrates real and simulated torque signals for condition monitoring [35]. The predictive maintenance capacity enables real-time adjustments and reduces machine downtime. Another study develops nonlinear multi-variant dynamic models of multi-axis machine tools with onboard CNC sensing data and visualizes the servo system's dynamics [36]. Digital twins in the form of real-time visualization can help optimize machine tool performance and reduce production errors.

Further, in CNC machining, the fusion of tool, workpiece, and process monitoring data is visualized on a dashboard, providing a complete view of the manufacturing process [37]. This digital process twin supports operators in making informed decisions by simulating part geometry and process analytics. The use of optical sensors in a cyber-physical production cell to create an interactive visual replica [38] signifies the importance of high-fidelity models for understanding and optimizing complex production systems. The above approaches to building digital twin models have made significant progress in unveiling the relations between key indicators and tool performance in the machining process. The sensor-based digital twins allow autonomous monitoring and troubleshooting within smart manufacturing environments. Future work could investigate the scalability of these methods to consistently deliver accurate responses and optimize processes as the number of machine types and operational parameters scales. Additionally, exploring the integration of machine learning across different manufacturing environments would be valuable.
In additive manufacturing, embedded distributed fiber sensors are used for Finite Element Analysis (FEA) simulations of temperature and strain [39]. The ability to model these parameters with high precision is indicative of the move toward high-fidelity simulations in digital twins, ensuring product quality and process reliability.
Lastly, the production planning process benefits from the fusion of CPS indicators, production data, and LiDAR-generated point clouds to create a 3D model of a production plant [40]. This example demonstrates the potential of digital twins in providing a comprehensive three-dimensional context for production planning, facilitating better spatial understanding and resource allocation.
Table 5 presents the selected applications of digital twin sensor technology in the energy and power grid sector. The energy equipment monitoring example showcases a condition monitoring digital twin of a small hydro turbine, enabled by a wireless sensor network, including accelerometers, temperature, and inductive current sensors. The digital twins operating on sensor readings and environmental data provide a condition indicator visualization [41]. This approach can detect faults early and reduce downtime, which is crucial in the energy sector where continuity is essential.
In electric power conversion, the photovoltaic (PV) dc-dc converter's efficiency is augmented by thermal cameras and scanning electron microscope imagery. FEM simulations predict temperatures at critical converter components, enabling fast estimations of device conditions under various operational stresses [42]. The two studies highlight the importance of digital twin predictive maintenance in infrastructure reliability.
Wind engineering research utilizes wind pressure sensors to develop an optimal sensor placement algorithm [43]. This algorithm aims to reconstruct wind pressure fields accurately, which is indispensable for assessing the structural integrity of wind turbines and optimizing their design for maximum energy capture. The reconstruction of the wind pressure field can also be utilized for creating digital twin infrastructure.
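The cited work's algorithm is not spelled out here, but one standard way to pair sensor placement with field reconstruction is sketched below, assuming a modal basis (for example, from proper orthogonal decomposition of simulation snapshots): column-pivoted QR ranks candidate grid points, and least squares recovers the full pressure field from the sparse readings. All names and the synthetic example are illustrative.

import numpy as np
from scipy.linalg import qr

def place_sensors_qr(modes, n_sensors):
    # Greedy placement: column-pivoted QR on the transposed mode matrix
    # ranks grid points by how well they condition the estimation.
    _, _, piv = qr(modes.T, pivoting=True)
    return piv[:n_sensors]

def reconstruct_field(modes, sensor_idx, readings):
    # Least-squares fit of modal coefficients to sparse readings,
    # then expansion of the field back onto the full grid.
    coeffs, *_ = np.linalg.lstsq(modes[sensor_idx], readings, rcond=None)
    return modes @ coeffs

# Illustrative use with a synthetic 3-mode pressure field on 500 points:
x = np.linspace(0, 1, 500)
modes = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)], axis=1)
true_field = modes @ np.array([1.0, -0.5, 0.2])
idx = place_sensors_qr(modes, 3)
print(np.allclose(reconstruct_field(modes, idx, true_field[idx]), true_field))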
For hydropower generation, pressure sensors are deployed within the hydraulic network, informing the development of a control system that maximizes hydropower production while adhering to hydraulic constraints [44]. Such digital twin applications ensure the harmonization of power generation with environmental and infrastructural considerations.
Lastly, in the field of smart grids, event loggers are utilized to develop digital twins with autonomous proactive agents [45]. These agents interact within a coordination platform to manage the complex dynamics of energy demand and supply, thus enhancing grid stability and operational resilience. This research points out that integrating an agent-coordination model into digital twins can address complex energy management issues at the microgrid level. The findings provide an example of how to create resilient and user-centric energy networks.
Table 6 presents the applications of digital twin sensor technology in the healthcare and human-centric area, where sensors are broadly referred to as any device or system that detects events or changes in a given environment, transmitting the information to other devices. In the cardiology field, ECG sensors are integral in developing a digital twin of the human heart [46]. This innovative approach merges ECG data with medical records to construct a "Cardio Twin", a proof of concept that offers heart condition visualizations for both local and remote diagnosis. In another example, clinical 12-lead ECGs and Magnetic Resonance Imaging (MRI) create biophysically detailed digital twins for cardiac electrophysiology [47]. These models simulate intricate heart structures, including Purkinje networks, paving the way for in silico clinical trials and advanced cardiac care. For rural healthcare, IoT sensors and devices are leveraged to bring medical services to remote areas [48]. Here, sensors encompass a variety of medical devices that collect health-related data, which, when coupled with blockchain technology, ensures secure data management and analysis in resource-limited settings. In space medicine, sensors include mixed reality devices such as HoloLens and haptic systems, which create a digitized interactive training environment [49]. This expands the sensory experience by providing real-time feedback and immersive scenarios for astronaut medical training. The above research highlights the potential and success of digital twin technology in the field of personalized and predictive healthcare.
In the educational sector, sensors refer to the instrumentation of a remote lab, where equipment control and monitoring are critical [50]. These sensor systems enable a hybrid remote laboratory for various learning scenarios, fostering interactive and multimodal educational experiences. This also encourages future researchers to explore digital twin solutions for better learning outcomes and operational safety [52].
Lastly, in the context of human-robot collaboration, force/torque sensors on a battery pack assembly line provide data for a digital twin that visualizes and analyzes the collaborative environment [51]. This digital twin assists in designing, developing, and operating a safe and efficient human-robot interactive system. Future studies can revolve around the safety and optimization of these systems with the aid of digital twins [53].
Table 7 presents examples of the use of digital twins in the optimization of infrastructure and transportation systems. For infrastructure modeling, LiDAR sensors are utilized to capture detailed point cloud data of campus buildings [54]. This technology enables the creation of accurate digital replicas of large structures, facilitating efficient maintenance planning and historical preservation.
In transportation infrastructure, the fusion of 2D images from cameras and 3D point clouds from LiDAR leads to a comprehensive digital twin of a magnetic levitation track [55]. This detailed representation bridges the gap between macroscopic project management and microscopic engineering analysis, underscoring the capacity of digital twins to offer multiscale insights into transportation systems. This study points out the importance of having efficient and automated processes for managing large LiDAR datasets to enhance the scalability of digital twins in civil engineering. Future studies should include developing advanced algorithms that can automate the conversion of point cloud data into information models.
Urban logistics can benefit from the integration of sensors and actuators within the infrastructure. This can be achieved through a platform architecture for digital twins that informs policy making via interactive dashboards [56]. This approach allows real-time sensor data and logistics system documentation to drive simulation models that can pinpoint gaps and opportunities for transformation within city ecosystems. A remaining challenge is converting this framework into physical systems that city planners and logistics stakeholders can use to improve urban logistics. Factory logistics can be revolutionized by incorporating Automated Guided Vehicles (AGVs) that track and monitor the movement of goods on the assembly line [34]. The development and application of a multi-objective AGV scheduling method based on digital twins reflect a shift toward intelligent and efficient logistics systems.
The planning of long-distance freight flows can be analyzed by integrating IoT sensors, GPS, and GIS into a virtual infrastructure and transportation model [57]. This digital twin serves as a powerful tool for analyzing and synchronizing transport, demonstrating the potential of digital twins to streamline logistics operations across vast distances. Future studies can focus on achieving interconnection between real-time data and virtual models for different transport modes in an operational context.
The predictive maintenance of agricultural equipment can be improved by digital twin technologies integrated with sensors and data pipeline systems [58]. This study streamlines computational fluid dynamics (CFD) simulation data, sensor readings, and historical information to replicate a virtual cyclone bag filter system in grain milling plants. The digital twin of the system can monitor the filter status and precisely predict the system's remaining useful life. This research demonstrates the potential of digital twins in improving operational efficiency for smart agriculture through monitoring and predictive analytics.
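As a hedged illustration of the remaining-useful-life idea (the cited study fuses CFD simulations with sensor data; the sketch below shows only the simplest threshold-extrapolation step, with hypothetical names and numbers):

import numpy as np

def remaining_useful_life(times, health_indicator, failure_threshold):
    # Fit a linear degradation trend and extrapolate to the threshold.
    slope, intercept = np.polyfit(times, health_indicator, 1)
    if slope <= 0:
        return float("inf")  # no measurable degradation trend yet
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Hypothetical pressure-drop history across a bag filter (arbitrary units):
t = np.array([0.0, 10.0, 20.0, 30.0])
dp = np.array([1.00, 1.20, 1.35, 1.55])
print(remaining_useful_life(t, dp, failure_threshold=3.0))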
Table 8 explores digital twin applications in business and asset management. Here, sensors are not limited to traditional physical devices but also include digital and social data sources. In production management, the term "sensor" encompasses product documentation throughout production [59]. This documentation acts as a sensor by providing continuous feedback on the product lifecycle, enabling the development of a digital twin for efficient tracking in high-volume production environments. This study sets an example of using the Asset Administration Shell to standardize and simplify the digital twin representations of manufacturing assets. Another study in this domain proposes a hybrid digital twin approach that integrates traditional onboard sensors with telemetry data sources to create virtual production line properties [60]. Its innovative usage of Apache StreamPipes for handling high-volume data streams offers a solution to data preprocessing for digital twins. The notion of sensors expands further in environmental monitoring, where social media posts on platforms such as Twitter become inputs. These "digital sensors" capture real-time data on the spread of invasive species [61], offering a novel approach to environmental monitoring by harnessing crowd-sourced information. This study manifests the versatile nature of digital twins by bridging it with ecological management and leveraging Natural Language Processing to model the spread of an invasive species. Additionally, in social issue alleviation and network sentiment analysis, chat rooms and business intelligence data act as sensors by providing communication data and social network dynamics [62,63]. These data allow for real-time sentiment analysis and conversation facilitation via chatbots. The concept of semantic digital twins for simulating human behavior for analytical purposes is an innovative idea. Future studies must address challenges such as modeling complex human behavior, protecting privacy and data security, and ensuring the ethical use of personal data.
Discussion
Our analysis of digital twin research from 2000 to 2023 shows that the field has been growing and diversifying rapidly. This trend, particularly notable post-2020, offers digital twin researchers avenues for new research opportunities and gives digital twin architects insights into the evolving applications in areas such as smart cities, healthcare, and energy. Emerging functions such as decision making and predictive maintenance also demonstrate the field's advancement.
Sensor-related keywords, such as real time, point cloud, and sensor network, are becoming more important. This emphasizes the demand for real-time, interconnected, and multimodal sensor data. This trend aligns with the computational architecture shift from cloud to edge computing, indicating a move toward distributed computing for faster and more efficient data processing. Moreover, the emergence of advanced machine learning models such as deep learning and federated learning reflects the increasing complexity of sensor data processing. It also highlights the growing importance of privacy and security for digital twins.
According to the keyword trends, digital twins are advancing beyond conventional simulations such as FEA. The research suggests that there is a shift toward developing systems that not only replicate physical entities but also evolve with them. The importance of keywords such as human-robot interaction and predictive maintenance has been growing, which indicates the emergence of interactive and preemptive digital twins.
The Sankey diagrams demonstrate the wide range of sensor technologies used in digital twins. These sensors go beyond traditional physical devices and include medical equipment, clinical tests, and even social media platforms. This approach allows digital twins to provide a comprehensive and accurate representation of real-world scenarios. Moreover, the literature highlights the rise of virtual and soft sensors, which mirror physical sensor functions, offering preemptive insights and facilitating proactive sensing system maintenance.
Our study also revealed a connection between sensor selection and the functionality levels of digital twins, namely representation, replication, reality, and relational. From the review of specific digital twin examples, we observe that the choice of sensors directly influences the level of digital twin functionality, particularly in retrofit designs. As digital twin capacity increases, we also observe that many applications need sensor data fusion. For example, in the manufacturing setting, a proactive machine tool digital twin will need the input of low-frequency production data and high-frequency sensor data. Incorrect sensor choices can lead to a mismatch between sensor capacity and the expected functionality of digital twins, eventually impacting the performance of the digital twins.
In addition, we believe that relying heavily on data-driven insights without substantial domain expertise can be risky. With limited domain knowledge, there is a possibility of creating brittle and underwhelming digital twins that respond inadequately to real-world variables. Therefore, future research should aim at integrating domain knowledge and robust empirical knowledge into digital twins to enhance their reliability and accuracy.
Conclusions
This study analyzed the research trends in the field of digital twins by examining metadata from 9639 peer-reviewed articles published between 2000 and 2023. We processed the metadata using an NLP-based toolkit and manually labeled each article with its most relevant application field. Using the KCN methodology, we performed temporal research trend analysis, mapping popular sensing technologies to six application fields and identifying representative examples of digital twins in each field. For researchers, this analysis provides a comprehensive view of the field's development, identifying key areas for future exploration. For architects, the findings highlight technological applications and examples essential for informed decision making in digital twin system design.
This study found that the field of digital twins is rapidly growing and diversifying. We used network metrics to analyze the temporal changes in the field and identified emerging and declining keywords over time. We also identified emerging application fields, functions, and enabling sensing technologies. The findings suggest that digital twins are moving toward predictive tasks while ensuring system integrity and security across many sectors beyond manufacturing.
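For readers unfamiliar with the KCN machinery, a minimal sketch of how keyword degree and strength can be computed from article keyword lists is shown below; it is a toy illustration, not our exact pipeline, and the sample keywords are invented.

import itertools
import networkx as nx

def build_kcn(article_keywords):
    # Nodes are keywords; an edge's weight counts co-listing articles.
    g = nx.Graph()
    for kws in article_keywords:
        for a, b in itertools.combinations(sorted(set(kws)), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

articles = [
    ["digital twin", "internet of things", "simulation"],
    ["digital twin", "internet of things", "edge computing"],
    ["digital twin", "simulation"],
]
g = build_kcn(articles)
degree = dict(g.degree())                   # number of distinct co-keywords
strength = dict(g.degree(weight="weight"))  # total co-occurrence count
print(degree["digital twin"], strength["digital twin"])  # 3 5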
We used a Sankey chart to visualize the mapping from popular sensing technologies to six application fields. We found that real-time data, point cloud data, and human-robot interaction are increasing trends. Additionally, we noticed an extension of the traditional sensor definition to include novel sensors such as medical tests and social media posts. We identified neural networks and reinforcement learning as crucial for autonomous decision making. The emergence of federated learning marks a shift toward distributed computation, emphasizing data privacy.
Figure 1 .
Figure 1. Four levels of functionalities of digital twins.
Figure 2 .
Figure 2. Six primary digital twin application categories.
Figure 3 .
Figure 3. Number of articles, keywords, and links over the four time periods.
Figure 4 .
Figure 4. Growth trends in KCN parameters such as average network strength, maximum strength, average network degree, and maximum degree.
Figure 5 .
Figure 5. Boxplots of keyword degree, strength, and link weight distribution in the KCN.
Figure 7 .
Figure 7. Emerging and declining keywords of digital twin applications from 2000-2020 to 2023.
Figure 7
Figure 7 presents keywords related to digital twin applications. Notably, the top five keywords, namely digital twins, Internet of Things, Cyber-Physical Systems, Industry 4.0, and simulation, have maintained the top five positions with no rank change, indicating that they have retained centrality and significance in digital twin research for over two decades. Digital twins, as the literature search criterion, is naturally included in all the research. The Internet of Things is significant for providing the sensor data that feeds digital twins. Cyber-Physical Systems are essential as they constitute the framework in which digital twins operate, integrating computation with physical processes to enable automated decision making. Industry 4.0 represents the current trend of automation and data exchange in manufacturing technologies, including Cyber-Physical Systems, the IoT, and cloud computing, which are inherently linked to the concept of digital twins. In addition, simulation serves as the analytical engine that enables the virtual representation to predict the behavior and performance of its physical counterpart. There are two types of keywords that indicate emerging trends: application fields (areas where digital twins are being applied) and functions (what digital twins help achieve). The emerging application fields for digital twins include smart cities, energy consumption, healthcare, the construction industry, power systems, smart grids, and autonomous vehicles. The increasingly digitalized and intelligent infrastructure in these areas enables the implementation of digital twins. The increasingly diversified application fields for digital twins also explain the slight decline of smart manufacturing and the manufacturing industry in the right panel. The emerging functions include digital transformation, decision making, resource allocation, predictive maintenance, fault diagnosis, and real-time monitoring. The trend can be attributed to advancements in machine learning and sensor technologies. As machine learning algorithms have become more sophisticated, digital twins are now able to not only replicate physical systems but also transform and optimize them.
Figure 8 .
Figure 8. Emerging and declining keywords of digital twins sensing technology from 2000-2020 to 2023.
Figure 8
Figure 8 presents keywords related to sensor and machine learning technology. The top two keywords are machine learning and artificial intelligence. The emerging keywords related to sensors are real time, point cloud, and sensor network, highlighting the growing demand for sensors that can deliver immediate, interconnected, and diverse data types. Regarding the computation architecture that supports digital twins and machine learning functions, we notice a rising trend in edge computing and the metaverse and a declining trend in cloud computing. This points to a research area pivoting toward distributed computing paradigms, suggesting a move to bring processing closer to the data source for quicker insights. This trend implies that while cloud computing has become a well-established field, the frontier of research is moving toward systems that can handle analytics at the edge of networks. As for the machine learning-related keywords, emerging models include deep learning, reinforcement learning, federated learning, surrogate models, and convolutional neural networks. This emergence corresponds to the need for sophisticated analytical tools capable of processing complex, multimodal sensor data. These methods are particularly suited to the demands of digital twins, offering enhanced capabilities for privacy preservation and data security.
Figure 9 .
Figure 9. Mapping of sensor technology to digital twin application areas.
Figure 10
Figure 10 displays the mapping from machine learning methods to different digital twin application fields. Machine learning bestows active digital twins with decision-making capabilities in various applications. Neural networks and deep learning algorithms play a prominent role in pattern recognition and predictive analytics. The strong presence of reinforcement learning, particularly in fundamental research, signals an interest in developing digital twins capable of autonomous decision making and optimization, a critical feature for systems that learn and adapt over time.
Figure 10 .
Figure 10. Mapping of machine learning methods to digital twin application fields.
Figure 11 .
Figure 11. Mapping of computation technology to digital twin application areas.
Table 1 .
A selection of digital twins review papers.
Table 2 .
Temporal analysis of four KCNs from four time periods.
Table 3 .
Examples of articles covering fundamental research in digital twins.
Table 4 .
A sample of articles covering manufacturing and supply chain digital twin research.
Table 5 .
A selection of research articles covering energy and power grid digital twins research.
Table 6 .
A selection of research articles covering healthcare and human-centric digital twin research.
Table 7 .
A selection of infrastructure and transportation digital twins research.
Table 8 .
A selection of business and asset management digital twins research.
|
2024-02-14T16:18:47.362Z
|
2024-02-01T00:00:00.000
|
{
"year": 2024,
"sha1": "ba19e2d952ee2b2c91492ab9597ec8608974757d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/4/1202/pdf?version=1707748551",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f1ac9aa99384a8c08ad60ca1b6b3fe177cfac59",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
118395001
|
pes2o/s2orc
|
v3-fos-license
|
Quantum chaotic subdiffusion in random potentials
Two interacting particles (TIP) in a disordered chain propagate beyond the single particle localization length $\xi_1$ up to a scale $\xi_2>\xi_1$. An initially strongly localized TIP state expands almost ballistically up to $\xi_1$. The expansion of the TIP wave function beyond the distance $\xi_1 \gg 1$ is governed by highly connected Fock states in the space of noninteracting eigenfunctions. The resulting dynamics is subdiffusive, and the second moment grows as $m_2 \sim t^{1/2}$, precisely as in the strong chaos regime for corresponding nonlinear wave equations. This surprising outcome stems from the huge Fock connectivity and resulting quantum chaos. The TIP expansion finally slows down towards a complete halt -- in contrast to the nonlinear case.
Anderson localization (AL), the absence of diffusion in linear lattice wave equations due to disorder 1 , is now broadly seen as a fundamental physical phenomenon manifested by light, sound, and matter waves 2,3 . Rigorous results state that in one dimension all single-particle (SP) states become exponentially localized at arbitrary weak disorder [4][5][6] . Going beyond the assumption of noninteracting particles or linear waves has proved to be extremely complex, and the current answers on the interplay between disorder and interactions remain controversial and debated.
For quantum many-body systems, predictions range from no major effect of interactions 7 to the emergence of a finite-temperature AL transition already in dimension one 8 . Advances have been achieved within mean-field approximations, which lead to nonlinear wave equations like the Gross-Pitaevskii equation 9 . Recent analytical and numerical studies demonstrate that nonlinearity breaks AL and leads to subdiffusive wave packet propagation, caused by nonintegrability, deterministic chaos, phase decoherence and a consequent loss of wave localization [10][11][12][13][14][15][16] . A positive measure of initially localized excitations gets delocalized for an arbitrarily small nonlinearity, tending to one above some threshold which may depend on the initial state 17 .
Few interacting quantum particles may bridge the two extremes from above. In particular, the case of two interacting particles (TIP) appears to be an interesting testing ground for any of the above statements. There is not much doubt that the TIP case also yields a finite localization length $\xi_2$, similar to but potentially much larger than the single particle localization length $\xi_1$. Most of the studies only debate whether and how the TIP localization length $\xi_2$ scales with $\xi_1$ 18-23 . Paradoxically, almost nothing is known about the interaction-induced wave packet dynamics beyond $\xi_1$. Numerical experiments explored only the case of strong disorder, limited to small connectivity of TIP states in the relevant Hilbert space of the problem, and reported, unsurprisingly, quite a trivial ballistic expansion up to $\xi_1$ followed by a quick saturation 24 .
Here we report the first study of TIP wave packet dynamics much beyond the single particle localization length and discover the new phenomenon of quantum chaotic subdiffusion. This is exhibited in the weak disorder regime, characterized by the large connectivity of Fock states, when the packet size exceeds $\xi_1$ but does not yet reach its asymptote $\xi_2$. We find that in this regime the system demonstrates the two main signatures of quantum chaos: dynamical excitation of a wealth of Fock states and a strongly non-Poissonian level spacing distribution. Based on this, we argue that the TIP subdiffusion in Fock and in real space can be described as the subsequent excitation of unpopulated Fock states by a multifrequency and quasistochastic driving from a large number of already populated ones. Numerics and analysis estimate the subdiffusion exponent as 1/2, in remarkable correspondence to the strong chaos subdiffusion in the classical nonlinear wave equation. The asymptotic dynamics differs, though: instead of the crossover to weak chaos subdiffusion known from the nonlinear case, the TIP expansion slows down towards a complete arrest.
We study the TIP dynamics in the framework of the Hubbard model with the Hamiltonian
$$\hat{H} = \sum_j \left[ \epsilon_j \hat{b}^+_j \hat{b}_j + \hat{b}^+_{j+1} \hat{b}_j + \hat{b}^+_j \hat{b}_{j+1} \right] + \frac{U}{2} \sum_j \hat{b}^+_j \hat{b}^+_j \hat{b}_j \hat{b}_j, \qquad (1)$$
where $\hat{b}^+_j$ and $\hat{b}_j$ are creation and annihilation operators of indistinguishable bosons at lattice site $j$, and $U$ measures the on-site interaction strength between the particles. The on-site energies $\epsilon_j$ are random uncorrelated numbers with a uniform probability density function on the interval $\epsilon_j \in [-W/2, W/2]$, as in the original Anderson problem.
Using the vacuum state $|0\rangle$ and the basis $|j,k\rangle \equiv \hat{b}^+_j \hat{b}^+_k |0\rangle$, we write the TIP wave function as $\Psi = \sum_{j,k} \phi_{j,k} |j,k\rangle$. Note that indistinguishability implies $j \geq k$.
However, this can be eased by considering arbitrary pairs of $j, k$ with the constraint that $\phi_{j,k} = \phi_{k,j}$. Inserting into the Schroedinger equation $i\dot{\Psi} = \hat{H}\Psi$, we obtain an effective single particle problem on a two-dimensional lattice with correlated disorder and a defect line along the diagonal due to the interaction $U$ (here $\hbar = 1$):
$$i\dot{\phi}_{j,k} = \epsilon_{j,k}\,\phi_{j,k} + \phi_{j+1,k} + \phi_{j-1,k} + \phi_{j,k+1} + \phi_{j,k-1}, \qquad (2)$$
where $\epsilon_{j,k} = \epsilon_j + \epsilon_k + U\delta_{j,k}$ and $\delta_{j,k}$ is the Kronecker symbol. In the absence of interactions $U = 0$, solutions to (2) break into a product of SP solutions that follow
$$\omega_r A^{(r)}_j = \epsilon_j A^{(r)}_j + A^{(r)}_{j+1} + A^{(r)}_{j-1}. \qquad (3)$$
All eigenstates $\{A^{(r)}_j\}$ of (3) are exponentially localized, with the maximal localization length $\xi_1 \approx 96/W^2$ for $W < 4$ 6 . Their corresponding eigenvalues are $\omega_r$.
Let us rewrite the dynamical equations (2) in the basis of noninteracting eigenstates:
$$i\dot{\phi}_{r_1,r_2} = \omega_{r_1,r_2}\,\phi_{r_1,r_2} + U \sum_{s_1,s_2} I_{r_1,r_2,s_1,s_2}\,\phi_{s_1,s_2}, \qquad (4)$$
where $\omega_{r_1,r_2} = \omega_{r_1} + \omega_{r_2}$ is the sum of SP eigenvalues $\omega_{r_1}$ and $\omega_{r_2}$, and
$$I_{r_1,r_2,s_1,s_2} = \sum_j A^{(r_1)}_j A^{(r_2)}_j A^{(s_1)}_j A^{(s_2)}_j \qquad (5)$$
is the overlap integral of four SP eigenstates. The interaction $U$ transfers excitation amplitudes between two-particle Fock states. Suppose that the particles initially occupy a Fock state $(s_1, s_2)$. For $U = 0$ the solution to (4) reads $\phi_{(s_1,s_2)} = \phi_0 e^{-i\omega_{s_1,s_2} t}$. For nonzero $U$, in the first order the other states follow
$$\phi_{r_1,r_2}(t) \approx U I_{r_1,r_2,s_1,s_2}\,\phi_0\,\frac{e^{-i\omega_{s_1,s_2} t} - e^{-i\omega_{r_1,r_2} t}}{\omega_{r_1,r_2} - \omega_{s_1,s_2}}. \qquad (6)$$
If the resonance condition $|U I_{r_1,r_2,s_1,s_2}| \gtrsim |\omega_{r_1,r_2} - \omega_{s_1,s_2}|$ is fulfilled, the states $(r_1, r_2)$ and $(s_1, s_2)$ become hybridized. Note that they will partially occupy the same volume, since the overlap integrals of distant Anderson modes are exponentially small. Similar arguments can be mounted in higher orders of perturbation theory. The inverse characteristic time for the hybridization to take place is of the order $U I_{r_1,r_2,s_1,s_2}$. An increase of the interaction strength $U$ appears to increase the number of potential hybridizations and facilitates the dynamical process. However, this is only true as long as $U$ is less than or of the order of the single particle kinetic energy, which is of order one in Eq. (1). For $U \gg 1$ the TIP spectrum splits into a noninteracting spinless fermion continuum and a band of double occupied site states, separated from the continuum by energy $\sim U$. The spinless fermions will yield a localization length very close to $\xi_1$ since they do not interact (apart from avoiding double occupancy). The separated states will have a much smaller localization length, of the order of $\xi_1/U^2$. Therefore the optimal value for the interaction is $U \sim 1$. At variance with the many body problem, or the nonlinear wave case, where the number of particles or the density serve as an additional control parameter, we are then left in the TIP case with only one control parameter, the disorder strength $W$. As there are about $\xi_1^2$ Fock states residing in a volume of size $\xi_1$, their connectivity, if at all, will be large for weak disorder. It is the weak disorder case $\xi_1 \gg 1$ which we will therefore explore. We study the expansion of an initial TIP wave packet whose size is substantially smaller than $\xi_1$. A single particle wave packet will expand ballistically up to $\xi_1$, which serves both as a localization length and as a mean free path in one dimension. We therefore expect that a TIP packet will also expand ballistically (or at least faster than diffusively) up to the length scale $\xi_1$. Beyond that length, any further expansion will be due to the interaction and the nontrivial dynamics in Fock space. We perform extra-large-scale computational studies of TIP dynamics (2) for $W < 2$, where a substantial enhancement of $\xi_2$ is expected 23 . We analyze the wave packet expansion and develop a qualitative theoretical explanation of our observations. Numerical integration of (2) is performed on a finite lattice $N \times N$ with the PQ-method 14 . The two particles are initially placed at neighbour sites. The probability distribution function (PDF) of the particle density is given by $P_j = \sum_k |\phi_{j,k}|^2$, normalized to $z_j = P_j / \sum_k P_k$. We monitor the wave packet expansion by computing its mass center $m_1 = \sum_j j z_j$ and the second moment $m_2 = \sum_j (j - m_1)^2 z_j$. For each choice of parameters $W$ and $U$ we average the numerical data over 100 different disorder realizations and denote this by $\langle \cdots \rangle$. In most of the experiments we set the system size $N = 5000$, also making test simulations with $N = 8000$ to make sure that boundary effects do not matter. The evolution of the two-particle PDF in the weak disorder regime is presented in Fig. 1 for $W = 1.0$.
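Before turning to the results in Fig. 1, a reduced-scale numerical sketch of this procedure is shown below. It propagates Eq. (2) with SciPy's Krylov-based matrix exponential instead of the PQ-method used in our production runs, and shrinks the lattice size and evolution time purely for illustration:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

def tip_hamiltonian(N, W, U, rng):
    # Eq. (2) as a sparse operator on the flattened (j, k) lattice:
    # correlated on-site disorder eps_j + eps_k plus the interaction
    # line U on the diagonal j = k.
    eps = rng.uniform(-W / 2, W / 2, N)
    onsite = eps[:, None] + eps[None, :] + U * np.eye(N)
    hop = sp.eye(N, k=1) + sp.eye(N, k=-1)   # nearest-neighbour hopping
    eye = sp.identity(N)
    H = sp.diags(onsite.ravel()) + sp.kron(hop, eye) + sp.kron(eye, hop)
    return H.tocsr()

rng = np.random.default_rng(0)
N, W, U = 200, 1.0, 2.0
H = tip_hamiltonian(N, W, U, rng)

phi = np.zeros((N, N), dtype=complex)
phi[N // 2, N // 2 + 1] = 1.0                 # particles on neighbour sites
phi = (phi + phi.T) / np.sqrt(2)              # bosonic symmetry phi_jk = phi_kj
psi = expm_multiply(-1j * H, phi.ravel(), start=0, stop=50, num=6)[-1]

P = np.abs(psi.reshape(N, N)) ** 2
z = P.sum(axis=1) / P.sum()                   # normalized density z_j
j = np.arange(N)
m1 = (j * z).sum()
m2 = (((j - m1) ** 2) * z).sum()
print("second moment m2 =", m2)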
In the non-interacting case we observe a rapid expansion of the wave packet over the SP localization volume and a halt afterwards (U = 0, left panel). At variance, interactions promote the wave packet diffusion beyond the SP localization volume, and we do not observe visual signs of a halt up to $t = 10^5$, about two orders of magnitude beyond the SP expansion time (U = 2.0, right panel).
In Fig. 2 we plot the evolution of the corresponding second moment $m_2$. In a range of disorder strengths $W = 0.5 \ldots 1.5$ we observe that after an initial superdiffusive spread (a mix of ballistic transport and the influence of interaction) the spreading turns into a potentially long-lasting subdiffusive regime. A comparison to the noninteracting case ensures that the discovered subdiffusion is due to interactions (Fig. 2).
In order to quantify our findings, we first smooth $\log_{10} m_2$ with a locally weighted regression algorithm 27 , and then apply a central finite difference to calculate the local derivative $\alpha = d\log_{10} m_2 / d\log_{10} t$ for the data from Fig. 2, plotting the result in Fig. 3. For $W = 1.5$ the ballistic/superdiffusive spreading continuously slows down as time progresses. However, for weaker disorder $W = 1, 0.75, 0.5$ the ballistic/superdiffusive regime crosses over into a subdiffusive one with $\alpha \approx 0.5$, and that regime lasts for at least 1.5 decades. This is precisely what is known as the strong chaos regime for the chaotic spreading of nonlinear wave packets in disordered potentials 13 .
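A minimal implementation of this smoothing-plus-differentiation step might look as follows (LOWESS stands in for the locally weighted regression of Ref. 27; the synthetic curve is for illustration only):

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def local_exponent(t, m2, frac=0.2):
    # Smooth log10(m2) with locally weighted regression, then take the
    # numerical derivative d log10(m2) / d log10(t).
    x, y = np.log10(t), np.log10(m2)
    y_smooth = lowess(y, x, frac=frac, return_sorted=False)
    return np.gradient(y_smooth, x)

t = np.logspace(1, 5, 200)
m2 = 3.0 * t ** 0.5                 # synthetic subdiffusive curve
alpha = local_exponent(t, m2)
print(alpha[100])                   # close to 0.5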
We conjecture and demonstrate that this similarity has a profound physical origin in quantum chaos, with its two main signatures displayed 28 . First, the weak disorder regime leads to high connectivity of the Fock states. In Fig. 4 we plot the norm distribution in Fock space after an initial excitation of a single state. Clearly, a huge number of the other states become populated. Second, the normalized TIP level spacing distribution $P(s)$ becomes strongly non-Poissonian for the set of parameters that exhibits subdiffusion (cf. Figs. 2, 3), as opposed to an almost Poisson law for zero interactions (measured on a block of about $\xi_1 \times \xi_1$). Here level repulsion reveals the parameter-dependent sub-linear fit $P(s) \propto s^\beta$, $\beta < 1$, for $s \ll 1$. The resulting quantum chaotic oscillations determine the proximity to nonlinear chaotic dynamics on the timescale of their inverse average frequency spacing.
The corresponding diffusion rate $D$ is proportional to the coupling $\Gamma_{r,s}$ between the initial and final states $r \equiv (r_1, r_2)$, $s \equiv (s_1, s_2)$. Perturbative calculations 25 yield $D \sim U^2 n^2$, where $n$ is the local norm density in Fock space. Since we follow the wave packet in a one-dimensional system, it follows that $m_2 \sim 1/n^2$. Substituting the above into $m_2 = D t$, we finally arrive at $m_2 \sim U t^{1/2}$, which corresponds well to our numerical findings.
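Written out in display form, the chain of estimates above closes self-consistently (this merely restates the relations in the text):

\begin{align*}
  D \sim U^2 n^2, \qquad m_2 \sim \frac{1}{n^2}
  \quad &\Longrightarrow \quad D \sim \frac{U^2}{m_2},\\
  m_2 = D\,t \;\sim\; \frac{U^2}{m_2}\,t
  \quad &\Longrightarrow \quad m_2 \sim U\,t^{1/2}, \qquad \alpha = \tfrac{1}{2}.
\end{align*}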
In conclusion, we discovered TIP subdiffusion in the weak disorder regime of Anderson localization and found a remarkable correspondence of the power-law exponent $\alpha = 1/2$ to the one observed for classical strongly chaotic nonlinear waves 13 . We demonstrated the pronounced signatures of quantum chaos in this regime, and proposed a mechanism conjecturing the origin of subdiffusion from the quantum chaos of strongly interacting two-particle Fock states. The obtained results call for further research to provide a rigorous description of the mechanisms behind TIP subdiffusion, to explore the relation to the random-walk hopping theory of subdiffusion in disordered semiconductors 29 , and pose the intriguing question of how the asymptotic $\alpha = 1/3$ power-law subdiffusion 12 , converging to a self-similar solution for nonlinear waves 30 , is recovered in a quantum system with $N > 2$ particles. It would also be extremely interesting to address these phenomena in experiments using interacting pairs of ultracold Rb atoms in optical lattices 31,32 , employing the recent advances in single atom control 33 .
MVI and TVL acknowledge financial support of RF President grant MK-4028.2012.2 and RFBR 12-02-31403. Dynasty Foundation is also acknowledged. A significant part of numerical experiments has been carried out at the HPC of Lobachevsky State University of Nizhny Novgorod.
|
2014-02-20T12:58:15.000Z
|
2013-09-20T00:00:00.000
|
{
"year": 2014,
"sha1": "85e9edb5c9187b41e65f7ceeea2c922230c65507",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1309.5281",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6ca04f4b76426bd4957c3aa2a4a52e2eb8dfd25f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
249990010
|
pes2o/s2orc
|
v3-fos-license
|
In Situ Nanoscale Dynamics Imaging in a Proton‐Conducting Solid Oxide for Protonic Ceramic Fuel Cells
Abstract Hydrogen fuel cells and electrolyzers operating below 600 °C, ideally below 400 °C, are essential components in the clean energy transition. Yttrium‐doped barium zirconate BaZr0.8Y0.2O3‐d (BZY) has attracted a lot of attention as a proton‐conducting solid oxide for electrochemical devices due to its high chemical stability and proton conductivity in the desired temperature range. Grain interfaces and topological defects modulate bulk proton conductivity and hydration, especially at low temperatures. Therefore, understanding the nanoscale crystal structure dynamics in situ is crucial to achieving high proton transport, material stability, and extending the operating range of proton‐conducting solid oxides. Here, Bragg coherent X‐ray diffractive imaging is applied to investigate in situ and in 3D nanoscale dynamics in BZY during hydration over 40 h at 200 °C, in the low‐temperature range. An unexpected activity of topological defects and subsequent cracking is found on a nanoscale covered by the macroscale stability. The rearrangements in structure correlate with emergent regions of different lattice constants, suggesting heterogeneous hydration. The results highlight the extent and impact of nanoscale processes in proton‐conducting solid oxides, informing future development of low‐temperature protonic ceramic electrochemical cells.
Introduction
The drive for clean and energy-efficient electrochemical devices for hydrogen energy has brought much attention to high proton-conducting solids, from low-cost proton-conducting polymers to more chemically and mechanically stable proton-conducting ceramics (PCCs). [1,2] PCCs are unique solid electrolytes that acquire protons from ambient hydrogen and water vapor through equilibration with oxide lattice defects. [2,3] Efficient proton transport in PCCs has allowed solid-state electrochemistry at temperatures below 600°C or even 400°C, making them attractive for electrochemical energy conversion and electrochemical manufacturing. To date, PCCs have shown promise for a wide range of technological applications, such as fuel cells for energy conversion [4][5][6] and membrane reactors for hydrogen production. [7,8] Further advances in the performance and reliability of protonic ceramic electrochemical devices rely on improvements in the long-term transport properties and structural stability of the proton-conducting solid oxide electrolytes in the intermediate- and low-temperature range. [9,10] Acceptor-doped barium zirconate, particularly the yttrium-doped barium zirconate BaZr0.8Y0.2O3-d (BZY), is a highly chemically stable perovskite oxide with excellent proton conductivity below 600°C, making it a popular candidate for hydrogen fuel cells, electrolyzers, and electrochemical synthesis. [8,[11][12][13][14] In polycrystalline PCCs, and BZY in particular, nanostructure, including grain boundaries and defects, can increase the protonic resistance or adjust proton conductivity and lower the operating temperatures down to 400°C or less (Figure 1a). [6,[15][16][17][18][19] The interfacial area between grains can stabilize the interfacial hydrated layer to provide a pathway for protonic conduction in the polycrystalline BZY. [18] The mechanism is available at low temperatures, typically not higher than 200°C. Static strain and misfit dislocations also demonstrably affect electrochemical performance. [20,21] Structural instability during operation can affect the performance of protonic ceramic electrochemical cells, as the crystal structure changes modulate proton conduction. [9]
Figure 1. a) Defects in the crystal structure, grain boundaries, and cracks change the proton conduction of the bulk material. b) Incident coherent X-ray beam produces speckled diffraction patterns from individual grains in the polycrystalline electrolyte pellet. While the X-ray beam illuminates many grains, the angular sensitivity of Bragg scattering allows us to isolate signal from individual crystalline grains embedded in the pellet. The heating element maintains temperature while N2/H2O is pumped through the chamber. The pellet is rotated in the beam, producing a 3D diffraction peak, c). The diffraction peak can either be analyzed directly or used to retrieve the 3D particle shape and displacement field.
Recent in situ studies have widely examined chemical processes in BZY [12,[22][23][24] and macroscale or averaged structure, but the nanoscale dynamics in proton-conducting solid oxide electrolytes and electrodes remain largely unstudied. Intra- and inter-particle stresses developed from chemical processes and proton transport mean that the material grains will eventually develop defects, strain gradients, and cracks, changing the macroscopic properties. Still, the lack of in situ characterization on the nanoscale has prevented definitive conclusions about these processes' timescale and extent. To understand the nanostructure dynamics, one must extract information about strain, defects, and crystal coherence within individual submicron-sized grains embedded in a polycrystalline, 10-1000 micron thick material. The unique problem is to achieve sub-100 nm resolution while imaging in 3D the defects and shape of a single grain surrounded by millions of similar grains, despite high absorption and hazardous operating conditions. For these reasons, imaging the evolving structure with sufficient resolution in situ with electron microscopy is challenging. [2] X-ray [25] and neutron [26] diffraction methods, in comparison, provide sufficient penetration depth for in situ structure investigations in ceramic materials but conventionally only provide information averaged over multiple crystal grains and inadequate spatial resolution. [2] Recently, Bragg coherent X-ray diffraction imaging (BCDI) [27][28][29][30] has enabled operando imaging of the nanostructure dynamics in battery materials, where similar limitations exist. BCDI is performed at high-intensity coherent sources of X-ray radiation such as synchrotrons and free electron lasers. An X-ray beam focused within a polycrystalline material produces an X-ray scattering peak from a crystal grain in the beam only when the crystallographic orientation of the grain and the beam direction satisfy the Bragg condition. Therefore, scattering from different grains can be separated and selected by moving and/or rotating the material. Coherent scattering from individual grains in polycrystalline electrolytes and electrodes produces speckle patterns uniquely dependent on the internal structure and shape of the grain (example in Figure 1a). BCDI requires no spatial scanning and uses a beam size comparable to the grain size. The spatial resolution is defined by the diffraction pattern. Phase retrieval [30,31] on a 3D Bragg peak collected by rocking the sample provides the 3D structure of the grains and the atomic displacement within (Figure 1b), reaching sub-100 nm resolution for strain and particle shape, and detecting non-equilibrium defects such as dislocations and domain boundaries.
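As a schematic of the phase retrieval step, a minimal error-reduction loop is sketched below in Python; production BCDI reconstructions additionally use hybrid input-output iterations, shrink-wrap support updates, and partial-coherence corrections, and the variable names here are illustrative:

import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    # Alternate between the measured Fourier magnitudes and a real-space
    # support constraint (classic error-reduction iteration).
    rng = np.random.default_rng(seed)
    G = magnitude * np.exp(1j * rng.uniform(-np.pi, np.pi, magnitude.shape))
    for _ in range(n_iter):
        rho = np.fft.ifftn(G)
        rho = rho * support                        # zero density outside the grain
        G = np.fft.fftn(rho)
        G = magnitude * np.exp(1j * np.angle(G))   # keep measured magnitudes
    rho = np.fft.ifftn(G) * support
    return rho   # |rho| approximates the grain shape; angle(rho) encodes q.u

# Inputs would be magnitude = sqrt of the centered 3D Bragg intensity and
# support = a loose binary box around the grain (both are assumptions here).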
Here, we used grain Bragg coherent X-ray diffractive imaging (gBCDI) [32] to track in situ the evolution of nanostructure and defects within individual grains in the polycrystalline BZY. gBCDI enables nanoscale imaging of the changes in atomic displacement and of defect dynamics in submicron-sized grains within a polycrystalline material, going far beyond incoherent X-ray diffraction capabilities. We combined direct analysis of coherent diffraction and gBCDI to track nanostructure evolution within BZY at the individual grain level at 200°C during hydration. Despite macroscale stability, we find that multiple non-equilibrium structural defects and new grain facets develop on a timescale of hours, even though 200°C lies in the low operating-temperature range for BZY electrochemical devices. We find clear evidence of grains separating into smaller coherent volumes, directly visible in the reciprocal space maps of multiple X-ray Bragg peaks. We demonstrate that the dynamical evolution and nucleation of topological defects and subsequent grain cracking are present on the nanoscale in BZY during hydration even at 200°C, and find that unusual crystal facets appear under non-equilibrium conditions. Clearly separable regions of different crystal lattice constants, suggesting inhomogeneous hydration, develop in unison with the appearance of defects and cracking. Nanostructure dynamics must be controlled to make protonic ceramic electrochemical devices viable and efficient.
Evidence of Nanostructure Dynamics in Coherent X-Ray Scattering
We performed a coherent X-ray diffraction experiment on polycrystalline BZY sintered at 1100°C (details on synthesis in the Experimental Section and pre-characterization in Supporting Information). We collected (110) Bragg diffraction peaks from individual grains within the sintered pellet for over 30 h at 200°C, while a humid nitrogen atmosphere was pumped through the chamber (setup scheme in Figure 1b). Analysis of the reciprocal space maps from the grains provides immediate information on the comparative structural evolution of multiple grains (Figure 2). Before the real-space imaging with phase retrieval, significant changes in the diffraction patterns are already noticeable. We see splitting and separation of a single Bragg diffraction peak into multiple peaks (Figure 2a) over several hours. The angular separation between the splitting peaks grows initially with a speed of ≈0.5 mrad h−1 before the separation rapidly increases. At this point, the changes in the separated diffraction peaks can no longer be followed simultaneously. Splitting occurs mainly perpendicular to the scattering vector q. The remaining total scattered intensity in the brighter Bragg diffraction peak after complete separation is two to three times smaller than the intensity before the split, showing a steep decrease in the coherent crystal volume within the grain (Figure 2b, grains P4 and P5).
Figure 2 (caption, panels b and c): b) The total intensity of a Bragg peak for different grains as a function of time. Every data point represents a unique condition of an individual grain at a given time (n = 1). Error is estimated as the difference between two measurements with 2 min delay. c) Evolution of the relative peak width parallel (top) and perpendicular (bottom) to the scattering vector for different grains. Every data point represents a unique condition of an individual grain at a given time (n = 1). Peak width is estimated as the standard deviation of the intensity distribution; error is estimated as the standard error of the standard deviation. [33]
The peak splitting suggests that the crystal grain splits initially into slightly misaligned domains, producing peaks separated by an angle on the ≈1 mrad scale but still simultaneously visible. Subsequently, the growing misorientation suggests the fracture of a single grain into two different grains of smaller individual volume. Figure 2b shows divergent behavior among the measured grains, suggesting that the local environment and grain form factor affect the degree and timing of coherent volume loss (responsible for the intensity drop). Because diffraction from both domains in Figure 2a is visible while illuminated with an ≈1 μm X-ray beam, the domains remain close. The angular splitting is due to relative angular misorientation between the domains. The full separation of the peaks after several hours supports the idea of fracture, and not domains with different hydration. The slow (over several hours) speed of the misalignment is explained by the restrictions imposed by the neighboring grains in the sintered pellet. To further investigate the structural deformation during fracture, we have investigated the evolution of the diffraction peak widths during and after crack propagation. The width of a diffraction peak can serve as a proxy for the coherence of the crystal structure. Degradation of the crystal structure commonly presents itself in growing average strain and a growing number of defects in the grains, increasing the Bragg peak width. Notably, the Bragg peak width both along and perpendicular to the scattering vector q (Figure 2c) does not demonstrate a preferred increase or decrease of the width over the different grains. However, while particles 4 and 5 in Figure 2c clearly tend toward increasing peak width, up to 40% parallel to q (thus not caused simply by peak splitting, which happens perpendicular to q), signifying an increase in strain and/or defects, the peak width decreases rapidly after the peaks entirely separate. The decrease in peak width after the cracking suggests that the grain cracking relieves stress and non-equilibrium defects in the grain. The absence of a lingering strain gradient suggests brittle fracture with no significant permanent structural rearrangements away from the crack surface. The rest of the particles present diverging behavior, with peak width variation within 10-20% higher or lower than the pristine state. It is important to note that the peak width perpendicular to q is inaccessible by conventional X-ray Diffraction (XRD), in which the diffraction structure over the direction perpendicular to q is averaged out and is thus insensitive to the peak splitting observed here.
Figure 3 (caption, partial): Isosurface at 15% maximum amplitude; slight additional variation in form is due to the uncertainty in the modulus of the retrieved complex amplitude. b) Example of the cracking. Blue surface: particle shape at t = 1590 min; magenta surface: particle shape at t = 2370 min; green horizontal plane: (110) crystallographic plane; brown plane: (112) crystallographic plane. c) BaZr0.8Y0.2O3-d unit cell schematic with (110) and (112) planes marked.
3D Imaging of the Cracking Process
We further investigated the evolution of the shape and internal structure of the grains by performing phase retrieval [30] on the collected Bragg diffraction peaks. We have successfully retrieved in three dimensions the shape of, and the atomic displacement field within, grains at specific times during the 30 h period. The size of all measured grains ranged from 0.5 to 2 μm. Interestingly, even when the total scattering intensity decreases only by 10-20% and without apparent peak separation, as for grain P2 in Figure 2b, a sharp change in the grain shape consistent with cracking is visible (example in Figure 3a, top). The grain of ≈500 × 500 × 500 nm size changes shape at ≈2000 min into the in situ measurement. Part of the volume present at 1000-1800 min, marked by a green circle in Figure 3a, disappears at 2370 min and beyond, signifying the loss of crystal coherence with the rest of the grain. In the 3D coherent Bragg peak itself, the change is accompanied by the disappearance of a satellite maximum (marked by green arrows in Figure 3a, bottom). Overlapping the reconstructed grain shapes at 1590 min and at 2370 min (Figure 3b) confirms the disappearance of a crystal volume. The comparison of the shape before and after fracture allows us to determine the orientation of the crack plane with respect to the scattering vector q ∥ z, oriented normal to a crystallographic plane from the {110} family (green). We find an angle of ≈50-60 degrees, most closely matching a plane from the {112} family (brown). The (112) crystallographic plane is oriented at an angle of 54 degrees to the (110) plane in the BZY perovskite crystal structure (Figure 3c).
In equilibrium, the dominant facets of BZY crystals are along the {001} and {110} plane families. [34][35][36] In BZY nanocrystals, {111} facets have been observed. [37] Therefore, cracking along the {112} plane family, leading to {112} facets, is unexpected. Previously, similarities between BZY and CeO2 nanocrystals have been found, [37] and in CeO2, {112} planes are possible termination planes, [38] although they spontaneously turn into a stepped {111} surface. The resolution of our measurement is insufficient to observe a surface rearrangement to a stepped surface; however, to the authors' knowledge, no {112} termination planes have been previously reported in BZY.
Topological Defect Nucleation
Furthermore, the complex phase of the real-space amplitude retrieved through gBCDI provides in situ information on the 3D distribution of atomic displacement within the grains in the [110] direction. Analysis of the atomic displacement within the P2 grain demonstrates the abundance of dislocations generated during the in situ process (Figure 4). Dislocations with a component of the Burgers vector b along the scattering vector q produce a singularity in the atomic displacement. They can be pinpointed as vortices in the displacement field (Figure 4a, marked by a green circle), also producing zeroes ("holes") in the retrieved shape (see the center of the vortex in Figure 4a) because of the undefined displacement at the dislocation core. We pinpoint the dislocation lines in 3D (Figure 4b, red lines) by tracking the singularities in the retrieved displacement through the grain. Multiple dislocations with different orientations of the dislocation line are found in grain P2, evolving over time. Interestingly, the grain volume that later detaches demonstrates a particular proclivity for dislocations (Figure 4b). Note the jagged appearance of the grain surface in the region due to the zeroes in amplitude produced by dislocations. While the orientations of the dislocation lines differ, all of them have a component in the (110) plane, perpendicular to the scattering vector q. Note that a screw dislocation with a dislocation line entirely in the (110) plane would not produce a vortex in the atomic displacement, because the Burgers vector would be oriented perpendicular to the scattering vector q ∥ [110], which suggests that the dislocations are preferentially of the edge or mixed type. Additionally, our experimental geometry is only sensitive to dislocations with a Burgers vector not perpendicular to q, suggesting there might be more dislocations that we do not see in the displacement field.
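The vortex-tracking step can be illustrated with a minimal sketch that sums wrapped phase differences around each elementary plaquette of a 2D slice; nonzero winding numbers mark candidate dislocation cores. This is a generic illustration of the singularity detection described above, not our exact analysis code:

import numpy as np

def phase_winding(phase):
    # Sum wrapped phase differences around each elementary plaquette of a
    # 2D phase slice; a net winding of +/-2*pi marks a vortex core.
    def wrap(d):
        return (d + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # step along +k
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # step along +j
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # step along -k
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # step along -j
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# Stacking the nonzero windings of successive slices along the third axis
# traces candidate dislocation lines through the grain volume.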
While perovskites do not form an isomechanical group, in perovskites such as SrTiO3 and KNbO3, and theoretically in perovskite oxides generally, edge dislocations aligned along <110> are mobile at low temperatures (<1000 K) and dissociate, producing stacking faults. [39,40] Our in situ imaging results show that the dislocation configuration in BZY changes on a sub-hour timescale, demonstrating experimentally <110> dislocation behavior similar to that theoretically predicted for other oxide perovskites.
Strain Distribution: Evidence of Inhomogeneous Hydration
The displacement field provides information about the distribution of the strain in the [110] direction, which is the derivative of the displacement field along the scattering vector. Analysis of the strain distribution (Figure 5) shows a significant spatial difference in strain accumulation across the grain. In the early stages of the process, the strain is distributed homogeneously (Figure 5a), with a variation of ±0.1% of the crystal lattice spacing. However, after the first ≈1500 min, the accumulated strain in the main and detaching volumes of the grain differs by ≈0.4%. More precisely, the average lattice spacing in the volume that detaches after the cracking event seen in Figure 3 is 0.4% lower, signifying either evolving external stress from neighboring grains or a lower penetration by H and O ions. Incorporation of oxygen ions is anticipated to produce a higher impact on the strain. [41] Note that the strain difference between the center of the grains and their surface is, in comparison, much smaller (<0.1%), suggesting a more homogeneous ion distribution within the two volumes. The different lattice constants induced in the separating volumes before facet formation, suggesting different H+ concentrations, lead us to speculate, therefore, that non-equilibrium effects and the interaction with the neighboring grains make the {112} termination plane energetically more favorable.
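A minimal sketch of this strain evaluation, assuming the retrieved phase equals q·u and the third array axis lies along q (real analyses additionally mask low-amplitude voxels near dislocation cores):

import numpy as np

def strain_along_q(phase, voxel_size, q_magnitude):
    # The retrieved phase is q . u; with the z axis chosen along q the
    # strain component is eps_zz = (1/|q|) * d(phase)/dz.
    unwrapped = np.unwrap(phase, axis=2)
    return np.gradient(unwrapped, voxel_size, axis=2) / q_magnitude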
Conclusions
In summary, we reveal the in situ nanostructure dynamics in a proton-conducting solid oxide by applying coherent X-ray diffractive imaging to sintered BZY during hydration at 200°C. gBCDI fills a methodological gap for studying, in situ, the nanoscale dynamics of the crystal lattice in electrolyte candidates for protonic ceramic electrochemical devices. We found unexpectedly active defect nucleation and grain boundary changes at 200°C. Our results reveal an abundance of newly generated, mobile dislocations that align preferentially along the {110} planes, and cracking of the grains that produces uncommon facets. Imaging shows cracking of the grains along the {211} crystallographic planes, generating facets that are energetically unfavorable under equilibrium conditions. The cracking occurs in the vicinity of the mobile dislocations, suggesting strong interaction between defects. Given that protonic ceramic electrochemical cells commonly operate at even higher temperatures of 300-600°C, the observed nanoscale crystal lattice dynamics and grain instability of BZY at 200°C in the absence of electric current merit further investigation of the effects of nanostructure dynamics on stability, hydration, and proton transport in other PCCs. Future investigations with fuel, in an H2 atmosphere, will also be necessary. Electrolytes sintered at higher temperatures (1600°C [42,43]) also present an interesting avenue of investigation due to their differing density and possibly mechanics. Our results suggest a mechanism for the loss of active material during temperature and humidity cycling, for example during start-up and shut-down cycles; the lost material will afterward be unable to participate in the electrochemistry. Operando measurements are required to better quantify the connection between degradation and functional properties. Furthermore, we found the formation of clearly distinct regions in individual grains with different lattice constants, which, together with the changes in the interfacial surface-to-volume ratio, means that topological defects affect hydration in electrochemical devices on a timescale of hours, not just as a direct transport channel but also through changes in the surrounding lattice. Thus, submicron structure and mechanical properties in solid oxide proton conductors impact not just long-term structural stability and static proton conduction but give rise to a dynamic, constantly evolving system, further changing the functionality of the ceramic electrolyte. Mesostructure, external strain, superior mechanical properties, and other means of controlling the evolution of non-equilibrium defects and grain boundaries are, therefore, essential to the advancement of protonic ceramic electrochemical devices.
Experimental Section
Synthesis of the BaZr0.8Y0.2O3−δ Pellets: The BZY pellets were prepared from crystalline BZY powders, which were first formed from nitrate precursors via a sol-gel synthesis followed by calcination at 900°C for 5 h in air (3°C min−1 heating rate). Subsequently, the crystalline powders were pressed into ≈50 mm diameter, 50 μm thick pellets (see Figure S1, Supporting Information) and heated at 1100°C for 36 h in air (1°C min−1 heating and cooling rate) to sinter the grains. XRD and surface electron microscopy characterization are presented in Figure S1, Supporting Information. Due to brittleness, the pellets were later broken into smaller (≈1-10 mm diameter) plates for BCDI measurements.
Details of the In Situ X-Ray Coherent Diffraction Measurements:
The in situ coherent X-ray measurements were performed at beamline 34-ID-C of the Advanced Photon Source (Argonne National Laboratory, ANL, USA) at a photon energy of 9 keV and a sample-detector distance of 1 m. A Timepix 2D detector with a pixel size of 55 μm × 55 μm was used. The water vapor content inside the in situ chamber was estimated as 30 g m−3, assuming water vapor saturation at room temperature given the humidification method used (bubbling); this was supported by condensation appearing in the chamber after cooling. (110) Bragg diffraction peaks (scattering angle 26.6 degrees) were collected from individual grains in the sintered BZY pellet for over 30 h at 200°C in a humid nitrogen atmosphere (setup scheme in Figure 1a). At every new time point the sample was realigned to ensure that a single Bragg peak, corresponding to a single grain, was measured. No multiple intersecting peaks, which would indicate multiple grains, were observed. Collecting a full 3D reciprocal space map of a Bragg diffraction peak required 1-3 min of rocking the sample chamber in the scattering plane (shown schematically in Figure 1b). The full angular spread of a Bragg peak was below 1 degree, and an angular step below 0.01 degree was required to sufficiently oversample the speckle pattern for phase retrieval. [31] Bragg diffraction peaks from individual grains remained stable over hours in a pure nitrogen atmosphere at 200°C without introduced humidity, excluding significant radiation damage effects on the in situ measurement.
Phase Retrieval Procedure: Phase retrieval combined the error-reduction (ER) algorithm alternating with the hybrid input-output (HIO) algorithm in a 50/10 combination. Retrieval was performed without binning, as sufficient signal and resolution were achieved. The iteration number was set at 610. All attempts resulted in very similar reconstructions. We used an average of five results in this work, each being an average of the 20 best reconstructions retrieved in a guided procedure developed in ref. [6] (8 generations, 40 population). The nanoparticle shape was found by averaging the amplitudes of the reconstructions and applying a threshold of 15% (higher than the threshold used during retrieval) to that average amplitude. [5] The reconstructions were run using a GPU-optimized code on multiple GeForce 1080 and 2080 graphics cards.
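For reference, a minimal sketch of the alternating ER/HIO scheme (a generic Fienup-style implementation, not the GPU code used here; the 50/10 split below is one reading of the stated combination, with the block lengths exposed as parameters):

```python
import numpy as np

def modulus_projection(rho, amp):
    """Replace Fourier amplitudes with the measured ones, keep the phases."""
    F = np.fft.fftn(rho)
    return np.fft.ifftn(amp * np.exp(1j * np.angle(F)))

def er_step(rho, support, amp):
    # Error reduction: zero the density outside the support
    return modulus_projection(rho, amp) * support

def hio_step(rho, support, amp, beta=0.9):
    # Hybrid input-output: negative feedback outside the support
    rho_mod = modulus_projection(rho, amp)
    return np.where(support, rho_mod, rho - beta * rho_mod)

def retrieve(amp, support, cycles=10, n_er=50, n_hio=10, beta=0.9):
    """Alternate blocks of ER and HIO iterations from a random-phase start."""
    rho = np.fft.ifftn(amp * np.exp(2j * np.pi * np.random.rand(*amp.shape)))
    for _ in range(cycles):
        for _ in range(n_er):
            rho = er_step(rho, support, amp)
        for _ in range(n_hio):
            rho = hio_step(rho, support, amp, beta)
    return rho
```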
Statistical Analysis: Each operando BCDI measurement is the result of phase retrieval (involving a Fourier transform) on a unique, unrepeatable measurement of a crystal grain's condition at a certain moment in time. Nevertheless, wherever possible (Figure 2), an error estimate was provided for the values derived directly from intensity measurements. The error in integral peak intensity was estimated as the difference from a repeat measurement performed with a small delay (2 min). The error in peak width (the standard deviation of the intensity distribution) was estimated according to ref. [33] through the fourth central moment of the intensity distribution as $\sqrt[4]{\mu_4 - \sigma^4}$.
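In code, the width and its moment-based uncertainty can be computed directly from a measured intensity profile; a minimal sketch (the error expression follows the formula above, which should be checked against ref. [33]; function names are illustrative):

```python
import numpy as np

def width_and_error(x, intensity):
    """Peak width (standard deviation of the intensity distribution) and a
    moment-based error estimate from a 1D intensity profile."""
    p = intensity / intensity.sum()       # normalize to a distribution
    mean = np.sum(p * x)
    var = np.sum(p * (x - mean) ** 2)     # sigma^2
    mu4 = np.sum(p * (x - mean) ** 4)     # fourth central moment
    # mu4 >= sigma^4 for any distribution, so the root is real
    return np.sqrt(var), (mu4 - var ** 2) ** 0.25
```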
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Potent bicyclic inhibitors of malarial cGMP-dependent protein kinase: approaches to combining improvements in cell potency, selectivity and structural novelty
Graphical abstract
Focussed studies on imidazopyridine inhibitors of Plasmodium falciparum cyclic GMP-dependent protein kinase (PfPKG) have significantly advanced the series towards desirable in vitro property space. LLE-based approaches towards combining improvements in cell potency, key physicochemical parameters and structural novelty are described, and a structure-based design hypothesis relating to substituent regiochemistry has directed efforts towards key examples with well-balanced potency, ADME and kinase selectivity profiles.
Malaria is one of the most prevalent human infectious diseases of the developing world; its causative agent is the protozoan parasite Plasmodium, with most deaths caused by P. falciparum. Despite being largely preventable and treatable, it was responsible for 435,000 deaths in 2017; young children and pregnant women in sub-Saharan Africa are particularly at risk. 1 In addition to continuing challenges in the contexts of policy development and socio-economic impact, 2 the observation of increasing resistance to current standard-of-care treatments is significant. This is driving research and development efforts to uncover new mechanisms by which the disease can be controlled and prevented. 3 Studies on the malarial kinome continue to provide well characterised and credible new targets for antimalarial small molecule drug discovery. 4,5 The cGMP-dependent kinase PfPKG is one kinase which meets many of the criteria for such a target. Pharmacological characterisation using early chemical inhibitors in combination with reverse genetics has demonstrated the important role of this enzyme in numerous critical processes in the malaria life cycle. [6][7][8][9][10][11] Following previous experience with progressing chemical inhibitors of other important malarial kinases, [12][13][14][15] we have recently begun to disclose our efforts to develop a series of PfPKG inhibitors based upon both bicyclic 16 and monocyclic scaffolds. 17 In the bicyclic series, a number of advanced analogues were shown to possess promising in vitro activity, a well-defined mechanism of action and property profiles which translated to target-driven efficacy in vivo. 16 An ongoing objective is to develop this chemical series with a view to improving key physicochemical parameters and compound novelty whilst retaining cell potency and lipophilic ligand efficiency (LLE). 19 A recent report from us described initial efforts towards these goals by evaluating the aminopyrimidine hinge binding motif, bicyclic core structure and basic substituent positioning. 20 Investigation of each of those structural features was found to be both necessary and productive, and the resulting compound profiles pointed strongly to retaining these motifs in their original forms. As a result, the profiles of analogues such as 1 (Figure 1) challenged us to consider additional strategies for re-positioning the series in suitable ADME property space whilst maintaining suitable levels of in vitro activity and improving compound novelty. A first approach was to reduce the size of the 4-fluorophenyl motif to lower lipophilicity and hence increase LLE (Figure 1 - A). A second was to re-design the basic substituent (for example by increasing chain length and basicity), which could also address one likely point of metabolic liability (for example by replacing the benzylic carbon atom with a heteroatom) (Figure 1 - C). Here we discuss the results of these investigations and show their significant beneficial impact against the above criteria.
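For reference, LLE is commonly defined as the difference between potency and lipophilicity; with the measured logD values used in this work, a natural reading is

$$\mathrm{LLE} = \mathrm{pIC_{50}} - \log D,$$

so that truncating a lipophilic group raises LLE whenever the accompanying loss in potency is smaller than the drop in logD.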
We first examined the possibility of improving the lipophilic efficiency by focusing on the large 4-fluorophenyl motif, initially retaining the original basic substituent at the 7-position of the bicyclic core. The main design emphasis was to attempt to balance the size of the pyrimidine substituent with a smaller, lipophilically efficient replacement for the 4-fluorophenyl group. Among a small set of initial replacements, prepared by the general route shown in Scheme 1, the cyclopropyl analogue 9 was of lower potency in a biochemical assay 21 as compared to 2, but significantly also showed a lower mLogD value of 1.7 (Table 1). 22 Given that potency was lower than desirable, further analogues incorporating the cyclopropyl group were designed to combine a lower mLogD with improvements in potency and LLE. Hence a set of compounds with larger groups appended to the aminopyrimidine nitrogen was prepared using variations of the same chemical approach. Small alkyl groups such as that in 10 did not provide any further boost in activity or LLE but, in line with previous SAR, arylaminopyrimidines such as 11 and 12 were more biochemically active and possessed the anticipated trend towards lower mLogD. The most balanced profile was achieved in 13, 23 which showed similar levels of both biochemical potency and anti-malarial activity in a blood stage hypoxanthine incorporation (HXI) cell assay 21 compared to 2, coupled with improvements in mLogD and LLE.

Figure 1. In vitro profiles of imidazopyridines 1 and 2, and design modifications to be applied to 2: A - truncate the aryl group; B - enlarge the pyrimidine substituent if required; C - re-design the basic substituent. ADME data: mLogD = measured logD; MLM = % remaining after 30 min incubation with mouse liver microsomes.

Scheme 1. Reagents and conditions: (i) LiHMDS, R1CO2Et, THF, −78°C - rt, 3 h, 27-76%; (ii) Bu4NBr3 or NBS, CH2Cl2, rt, 2 h; (iii) 2-aminopyridine-4-methanol, EtOH, 4 Å sieves, 100°C, 18 h, 12-44% for two steps; (iv) MsCl, Et3N, THF, 0°C, 1 h or SOCl2, CH2Cl2, 50°C, 1 h; (v) Me2NH, THF, 0°C - rt, 33-65% for two steps; (vi) H2O2, Na2WO4·2H2O, AcOH, MeOH, 0°C - rt, 3 h; (vii) for 7-9: NH4OAc, melt, 130°C, 3 h, 5-27% for two steps; for 10: iPrNH2 (neat), 60°C, 3 h, 23% for two steps; for 11: 2-aminopyridine (excess), NMP, microwave, 150°C, 3 h, 8% for two steps; for 12: 4-(4-methylpiperazino)aniline, neat, microwave, 170°C, 15 min, 5% for two steps; for 13: 4-(4-aminophenyl)piperazine-1-carboxylic acid tert-butyl ester, TFA, sBuOH, 110°C, 6 h, then TFA, CH2Cl2, rt, 2 h, 10% for three steps.

Turning next to the basic substituent on the bicyclic core, a small number of molecules were initially designed to identify the optimum position at which to locate this motif. We decided to employ the benzylic dimethylaminomethyl group present in 2 for this analysis. Docking of 2 and its 5-, 6- and 8-regioisomers into an apo-structure of PfPKG (PDB: 5DYK 24) suggested that the best site for that substituent was the 7-position (Figure 2). The location of the positively charged basic center between two acidic protein residues (E625 and D682) was judged to be optimal for that particular group. Whilst relocating to the 6- or 8-positions appeared to be spatially tolerable, sub-optimal interaction with the acids and a subsequent loss in affinity were predicted.
Appending several possible groups at the 5-position appeared to result in a significant steric clash with the pyrimidine hinge binding motif (data not shown); this was predicted to cause a significant loss of activity and hence was not pursued. This hypothesis was tested by synthesizing the 8- and 6-regioisomers 15 and 16, respectively. Using variations of previously described chemical approaches, 18,20 compounds 15 and 16 could be prepared from the bromoketone building block 14 25 (Scheme 2) in good yields over five synthetic steps.
These two compounds showed reductions in their biochemical activity, as compared to 2 (Table 2), which were in line with predictions from the docking studies. Lipophilic ligand efficiency for the 8-substituted compound 15 was also higher than for 6-analogue 16, in part due to an interesting divergence in mLogD (values of 1.6 for 15, 2.3 for 16, as compared to 2.4 for 2). However, the key factor of lower synthetic accessibility for 8-position analogues emerged, which directed our efforts away from preparing further compounds of this kind. In contrast, the position of the two key acidic residues at the binding pocket mouth implied that re-design of the basic substituent into longer chain variants and appropriate conformationally constrained versions might be productive. This design hypothesis suggested that substituents of these new types at either the 6-or 7-positions should be evaluated.
We tested this proposal by making compounds bearing such modifications to the dimethylaminomethyl side chain in 2, choosing to include adjustments in both expected pKa and conformation (Table 3). Neither of these compounds appeared to possess a particular advantage in any aspect of their in vitro profiles as compared to 2. Interestingly, the related pair of piperazine regioisomers 22 and 23 showed a subtle contrast in mLogD, with the 6-isomer 23 found to possess the lower value. The cell activity of 23 was also slightly lower than that of 22. Synthetic access to 6-substituted compounds was also found to be generally less efficient; considering this and other contributing factors, 27 we decided to focus additional efforts on 7-linked analogues only.
Using the same synthetic chemistry as shown in Scheme 3, a small additional set of 7-substituted analogues was prepared and evaluated ( Table 4). As compared to 20, increasing the ring size and hence altering the conformational constraint in 24 gave modest improvements in biochemical potency and cell activity, though only a slight change to LLE. Microsomal stability was improved significantly, perhaps due to constraining the conformation in the basic side chain. For the open chain examples 25 and 29, in vitro ADME profiles very similar to 2 could be obtained, though both showed lower biochemical activity (and hence no further benefit in LLE) and microsomal stability had not improved. Both LLE and microsomal stability could be improved by returning to a nitrogen-linked design in the open chain analogue 26, for which the essentially unchanged mLogD (as compared to 25 and 29) was accompanied by better biochemical potency and LLE. Finally, positioning an additional carbon atom within the aminopyrimidine group gave 27; this notable compound showed an excellent balance of good biochemical potency, in vitro activity against the parasite and improved LLE and mLogD values. The effect of the secondary aminopyrimidine (in 27) on microsomal stability, relative to the primary aminopyrimidine (in 26), was also significant.
The two most promising compounds identified, 13 and 27, were profiled and compared in vitro (Table 5). In addition to the previously described improvements in mLogD and LLE, high kinetic solubility (measured using PBS at pH 7.4 as buffer) was maintained in each case, and both compounds were shown to be non-cytotoxic. Despite a significantly lower mLogD value, the mouse microsomal stability of 13 surprisingly remained at the same level as that of 2, for which we have no clear explanation. 28 In particular, 27 matched excellent biochemical and cell potency with significantly higher stability in mouse microsomes to give a highly promising and well-balanced overall profile. Selectivity was assessed by screening 13 and 27 against a human kinase panel 29 at a single 1 µM concentration (Figure 3). As expected, the smaller cyclopropyl motif in 13 resulted in a decreased level of selectivity, whilst 27 showed an excellent selectivity profile against the kinases screened. We also tested compounds 2 and 27 against the two human orthologues of PKG; no activity was observed up to a top assay concentration of 1 µM, 30 indicating a high level of selectivity for the malarial kinase.
We have reported here the results of our continuing effort to progress a series of imidazopyridines as inhibitors of PfPKG, focusing on alteration of the 4-fluorophenyl group and re-design of the basic substituent as key strategic aims. By concentrating on cell potency, lipophilic ligand efficiency and structural novelty in tandem, compounds such as 27 in particular were developed to populate a highly desirable and novel area of chemical space as potent, lower molecular weight, lipophilically efficient analogues with improved in vitro ADME and selectivity profiles. Studies towards the identification of additional analogues suitable for in vivo studies and further mechanistic considerations are ongoing and will be reported in due course.

a nt = not tested. b % remaining after 30 min incubation with mouse liver microsomes.
Table 5. Full in vitro profiles for compounds 2, 13 and 27; a % remaining after 30 min incubation with mouse liver microsomes; b kinetic solubility; c in vitro cytotoxicity assay measured in HepG2 human liver-derived cells - concentration at which half of cells remained viable at 48 h. 16

Figure 3. Kinase selectivity data for representative imidazopyridines 2, 13 and 27 on screening against a human kinase panel at 1 µM concentration; green < 50% inhibition; yellow 50-90% inhibition; red > 90% inhibition. 29
An Analysis on Position Estimation, Drifting and Accumulated Error Accuracy during 3D Tracking in Electronic Handheld Devices
This work focuses on a brief discussion of new concepts for using smartphone sensors for 3D painting in virtual or augmented reality. The motivation for this research comes from the idea of using the different types of sensors that exist in our smartphones, such as the accelerometer, gyroscope, and magnetometer, to track position for painting in virtual reality, like Google Tilt Brush, but cost effectively. Research studies to date on position estimation, localization, and tracking have been thoroughly reviewed to find an appropriate algorithm that provides accurate results with minimum drift error. Sensor fusion, Inertial Measurement Unit (IMU), MEMS inertial sensor, and Kalman filter based global translational localization systems are studied. It is observed that prevailing approaches suffer from issues such as instability, random bias drift, noisy acceleration output, position estimation error, and limited robustness, accuracy, or cost effectiveness. Moreover, issues with motions that do not follow the laws of physics, bandwidth, the restrictive nature of assumptions, and scale optimization for large spaces are noticed as well. Advantages of such smartphone sensor based position estimation approaches include low memory demand and very fast operation, making them well suited for real-time problems and embedded systems. Being independent of the size of the system, they can work effectively for high-dimensional systems as well. Through study of these approaches it is observed that the extended Kalman filter gives the highest accuracy with a reduced requirement for extra hardware during tracking. It renders better and faster results when used with accelerometer data. With the aid of various software tools, the error accuracy can be improved further as well.
Introduction
Portable, compact, easily accessible, and cost-effective means of services and technology are of great use in recent times. A great deal of work has been carried out in the field of computer algorithms and application development based on mobile and portable workstations like cellphones, smartphones, tablets, Personal Digital Assistants (PDAs), etc. These devices are especially favored since they feature a few unique characteristics, i.e., portability, cost effectiveness, multiple platform and OS compatibility, easy internet connectivity for firmware updates, a user-friendly UI, etc. Moreover, with advances in semiconductor chip and integrated circuit technology, these devices now boast highly powerful processors and a wide range of sensors. A developer can hence take advantage of the powerful yet portable computing ability of these devices to design innovative applications. To accurately estimate position and localization, and to achieve effective tracking, various combinations of sensors and algorithms have been designed and developed which work together in synchronization. After thorough scrutiny and analysis of relevant work in this area, it is found that it can be broadly categorized and discussed based on methodology, framework, and experimental results. Hence our discussion is separated into the three aforementioned parts so that it is possible to critically analyze relevant work in an effective way, considering every criterion, and come up with a new idea.
Using Miniature Inertial Sensors
Various types of sensors have been used to accomplish various tasks. Miniature inertial sensors based on biomechanical models and a sensor fusion algorithm can accurately track human motion [1]. A rigid body can move freely in 3D space with six degrees of freedom (6DOF), which refers to a specific number of axes. 6DOF characterizes the number of independent parameters that describe the configuration of a mechanical system. A rigid body can translate along the X, Y, and Z axes and change its orientation about these axes via rotations usually referred to as pitch, yaw, and roll. 6DOF is mainly used in engineering and robotics to count the number of degrees of freedom of an object in 3D space. In short, there are six parameters which describe the movement of a body, and the movement of a cellphone can be tracked using the phone's 6DOF sensors. The degrees of freedom are greater in number in robotics; a robotic arm can have 18 degrees of freedom because its arm has three segments, each with six degrees of freedom [2]. Such a system is expensive for robotics, and the cost of the equipment, software, and personnel required can restrict its use to large productions.
Using Sensor Fusion Approach
Sensor fusion is another effective approach, where data from several sensors is combined in a time-triggered network, correcting deficiencies of individual sensors to calculate accurate position and orientation information [3]. Two sensor fusion algorithms are proposed to accomplish this task: the systematic confidence-weighted averaging algorithm and the application-specific robust certainty grid algorithm. A time-triggered (TT) design requires a detailed design phase, and data arrivals at the fusion node do not coincide due to variable propagation delays. This sensor fusion can be used in Android devices for indoor positioning quite satisfactorily [4], focusing on estimating the position of the phone inside a building where the GPS signal is bad or unavailable. The position is determined using data from the device's different sensors, such as the accelerometer, gyroscope, and wireless adapter. Advantages of using sensor fusion are redundancy, complementarity, timeliness, and less costly information; however, it has higher power consumption.
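As an illustration of the confidence-weighted averaging idea, the following minimal sketch weights each sensor by its inverse variance (a common instantiation, not necessarily the exact algorithm of [3]):

```python
def fuse(readings, variances):
    """Confidence-weighted average of several estimates of the same quantity.

    readings, variances: per-sensor estimates and their variances
    Returns the fused estimate and its (reduced) variance.
    """
    weights = [1.0 / v for v in variances]
    fused = sum(w * r for w, r in zip(weights, readings)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Example: two noisy readings of one coordinate; the fused value
# leans toward the lower-variance (more confident) sensor.
print(fuse([1.02, 0.95], [0.04, 0.01]))
```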
Using Triaxial Accelerometers
Mobile robot position can be accurately recorded using an accelerometer [5]. Here a Kalman filter is used to reduce the error caused by random noise. Although this can reduce random-noise error while having low cost and small size, random bias drift problems can occur. A triaxial accelerometer has also been used to determine position and orientation precisely [6]. Triaxial accelerometers provide simultaneous measurements in three orthogonal directions, for analysis of all the vibrations being experienced by a structure. Here, each unit is a combination of three sensing elements, separated from one another and aligned at right angles to each other. This helps reduce cost and improve convenience and tangential sensitivity, but the unit is bulkier in size. A combination of accelerometer, gyroscope, and magnetometer can also be used for 3D knee kinematics, with low error percentages in the results [7]. A combination of two sensors has also been used to develop error propagation equations [8]. Beginning with the basic multi-sensor triangulation equations to estimate a 3D target position, error propagation equations are derived by taking the appropriate partial derivatives with respect to the various measurement errors. Gaussian measurement statistics are used as well.
Inertial Measurement Units (IMU)
Inertial Measurement Units (IMUs) are integrated electronic devices that contain accelerometers, gyroscopes, and magnetometers [9]. Such a system is not free from drift error, a difference between the actual position and the position detected by the system. A constant acceleration error produces, upon integration, a linear error in velocity and a quadratic error in position; a constant gyroscope error produces a quadratic error in velocity and a cubic error in position [10]. To mitigate drift error, a MEMS-based low-cost system has been designed for pedestrian navigation using only accelerometer and gyroscope sensors, without incorporating a magnetometer [11]. Inertial measurement units have been used for gait analysis as well [12]. A miniature inertial/magnetometer package wirelessly coupled to a PDA is reported to track pedestrian position effectively with or without GPS availability [13]. Using a MEMS inertial sensor, it is possible to measure with an accuracy of up to 50 cm in diameter [14]. The advantage of inertial sensors is that they are self-contained and do not rely on external fields; the disadvantage is that they are typically rate-measuring and expensive. Though the system does not require calibration and is stride-length independent, it can be affected by surrounding metal infrastructure or buildings, rendering erroneous data. Foot-mounted inertial sensors have also been used in indoor environments for pedestrian localization [15]. The system takes advantage of the low cost and small size of inertial sensors, but it requires a dedicated experiment space and walking range.
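The stated error growth follows directly from integrating a constant sensor bias. Assuming a constant accelerometer bias $b_a$ and a constant gyroscope bias $b_g$ (with $g$ the gravitational acceleration misprojected by the attitude error):

$$\delta v(t) = b_a t, \qquad \delta p(t) = \tfrac{1}{2} b_a t^2,$$

$$\delta \theta(t) = b_g t, \qquad \delta v(t) \approx \tfrac{1}{2} g\, b_g t^2, \qquad \delta p(t) \approx \tfrac{1}{6} g\, b_g t^3,$$

since the gyroscope bias tilts the estimated gravity direction, and the resulting spurious acceleration $g\,\delta\theta$ is integrated twice.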
Kalman Filter Based Tracking
A widely used approach for accurate tracking is Kalman filtering [16] [17]. This approach has several advantages. It requires a small memory allocation, which makes it well suited for real-time problems and embedded systems, and it is in a convenient form for online real-time processing. However, use of a Kalman filter can sometimes make the final error level worse. A 1D Kalman filter, a statistical technique, can adequately describe the random structure of experimental data, in connection with GPS and a Wiener filter [18]; here, the error is smaller with a time-varying gain. A Kalman filter can be implemented with an accelerometer sensor, giving excellent noise reduction, an increased dynamic range, and reduced displacement of the proof mass under a closed-loop structure [19]. A Kalman filter fusion algorithm with IMU/UWB can be used to track human operators effectively [20].
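A minimal 1D predict/update sketch illustrating the recursion and its constant memory footprint (the noise values are illustrative; the measurement noise echoes the value used in the experiments later in this paper):

```python
class Kalman1D:
    """Minimal 1D Kalman filter (constant-value model) showing the
    predict/update recursion; q is process noise, r is measurement noise."""

    def __init__(self, x0=0.0, p0=1.0, q=1e-5, r=0.0195):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict: the state is modeled as constant, so only uncertainty grows
        self.p += self.q
        # Update: blend prediction and measurement z via the Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Usage: smooth a stream of noisy accelerometer samples, one at a time,
# without storing past observations (constant memory).
kf = Kalman1D()
smoothed = [kf.update(z) for z in [0.12, -0.05, 0.08, 0.01]]
```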
The Kalman filter approach has also been used for global pose estimation using multi-sensor fusion [21]. The system uses a Kalman filter to fuse Differential GPS (DGPS) or Real-Time Kinematic (RTK) positioning for augmented reality (AR) with an IMU and a visual orientation tracker. The altitude Kalman filter is not independent because of the nine parameters used for the altitude representation, and it is not easy to use and understand; besides, real-time kinematics is no longer useful.
Extended Kalman Filter (EKF)
The extended Kalman filter (EKF) is the nonlinear version of the Kalman filter, which linearizes about an estimate of the current mean and covariance [22]. The EKF usually gives a less accurate measure of covariance. It has been incorporated in the design of an indoor positioning system based on an IMU/magnetometer [23]. This filter also renders better and faster results when used with an accelerometer sensor. Besides the EKF, the Unscented Kalman filter (UKF) algorithm is used for better position accuracy and reliability; it uses the unscented (U) transform during filtering [24]. The UKF is more appropriate for nonlinear systems than the EKF [25].
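In standard form, the EKF handles a nonlinear process model $f$ and measurement model $h$ by linearizing with Jacobians at each step:

$$x_k = f(x_{k-1}, u_k) + w_k, \qquad z_k = h(x_k) + v_k,$$

$$F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1}}, \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}},$$

after which the usual Kalman predict/update recursion is run with $F_k$ and $H_k$ in place of the linear system matrices.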
Framework of Position Estimation Approaches
Among the different frameworks, MVN consists of 17 inertial and magnetic sensor modules [1]. The magnetic sensor is not always able to find below-surface defects, and the inertial sensor typically measures rates and is expensive. A computer, motion sensors (accelerometer), and rotation sensors (gyroscope) are used in an inertial navigation system (INS) for continuous dead-reckoning calculation of the position, orientation, and velocity of moving objects without external references [26]. In the sensor fusion approach, the application is structured as a single activity with different layouts (views) [4]. The main processing line takes care of the sensor readings, which are handled by an event listener. When a new value from the sensors is sampled, depending on the mode it is processed by the positioning engine, displayed on the screen, or stored in a sampling array. The UI is handled from an independent thread; the screen updates are controlled by a timer and by external calls from the positioning block. Mode changes are triggered either by user interaction (through the menu) or by special events in the block which handles the sensors (e.g., when calibration is completed).
IMUs work by the principle that the first and second frames depend on orientation and mounting position, and remain constant during motion between the joint center and the origin of their position [12]. Here, the MT9 needs six channels of data, and each channel needs to be recorded separately [9]. On the other hand, magnetometer calibration is performed in implementing the NavShoe system based on inertial sensors [13].
A Kalman filter based inertial motion capture system and UWB localization system for hybrid tracking incorporates measurements from the GypsyGyro-18 [20]. These are transformed to the Ubisense coordinate system using a transformation matrix, so that both operate in the same coordinate system. The Kalman filter is used for the prediction and correction steps, whereas the Ubisense system usually returns accurate positions. However, some measurements from the Ubisense system have large errors and should not be incorporated into the Kalman filter. For orientation estimation and position estimation, a dedicated Kalman filter and an altitude Kalman filter can be used together [21]. The altitude Kalman filter, a linear Kalman filter, is used to estimate altitude and vertical velocity by fusing acceleration with any altitude measurement sensor, such as a barometer or sonar. The Kalman filter algorithm has several advantages. It is a statistical technique that adequately describes the random structure of experimental measurements. It can consider quantities that are partially or completely neglected in other techniques (such as the variance of the initial estimate of the state and the variance of the model error). It provides information about the quality of the estimation by providing the variance of the estimation error in addition to the best estimate. The Kalman filter is well suited for online digital processing; its recursive structure allows real-time execution without storing observations or past estimates. The EKF orientation estimation method is effective in providing accurately reconstructed trajectories in an indoor environment [27]. An EKF based on quaternions has also been designed for heading estimation, combining gyroscopes and accelerometers with better accuracy [28].
Result Analysis of Reviewed Estimation Approaches
It is observed that miniature inertial sensors based on biomechanical models and a sensor fusion algorithm can accurately track human motion and find an unknown initial position [1]. In estimating the position of a phone by sensor fusion, a double integration is required to calculate position from acceleration [4]. The INS algorithm needs a large processing time. To reduce the effect of the errors, the linear acceleration method is complemented with additional functions: movement detection, a walking speed limit, and step detection. This improved the INS position estimation significantly.
Successfully estimating the orientation and position of a triaxial accelerometer on an industrial robot involves two basic steps [6]: first, estimating the internal sensor parameters and the orientation of the accelerometer; second, determining the position with respect to the robot tool coordinate system. Here, there is no elimination of error. On the other hand, the two-sensor 3D position estimation technique comprises steps such as 3D position estimation, error propagation for two sensors, Taylor series expansion for unknown values, and then error propagation with Gaussian statistics to estimate the target position; however, some of these steps can be avoided [8].
It is observed that during calibration of IMUs for 3D orientation, a spread of error in absolute orientation is present [9]. In addition, during raw data processing, the total error was split into component rotations about the global axes to further understand the sources of the orientation error. Each sensor needs to be mapped to a rigid body segment. In the INS for pedestrians based on a MEMS IMU, only accelerometer and gyroscope measurements were considered (no magnetometers required) [11]. The sensor error model parameters were included in the state vector of an extended Kalman filter. This analytical approach was applied for zero-velocity foot stance detection [11].
A comparison between the Razor IMU and the Xsens MTi IMU was drawn as well. It is also observed that experimentation with various systems shows differing levels of inconsistency. Noise and slow drift occur in IMU-based gait analysis [12].
An error of only 0.3 percent is observed in indoor and outdoor experimentation with shoe-mounted inertial sensor based pedestrian tracking [13]. The indoor localization technique for pedestrians did not remove the errors present [15]; it required a dedicated experiment space and walking range.
The Kalman filter combines all previous predicted values and information when implementing accelerometer sensor data for a three-state position in a dynamic system [19]. It is very inexpensive since it completes the whole process without storing data, and it uses simple loop equations. It is also worth pointing out that the Kalman gain and error covariance equations are independent of the actual observations; these parameters are easy to use to obtain preliminary information on estimator performance. The algorithm is recursive and easy to implement because the dimensions of the matrices do not change with time. It is very useful for solving problems in multi-state or multi-dimensional conditions, and the solution is obtained with little computational time relative to the large cost overhead of modeling. An R/q value of 28 could be a good solution to reduce the errors in acceleration, velocity, and position found from graph analysis [19]. It is observed that a Kalman filter based inertial motion capture system and UWB localization system for hybrid tracking incorporating measurements from the GypsyGyro-18 [20] into the Ubisense system is not suitable, because Ubisense has a small data frequency (5-9 FPS), which causes extremely high latencies in industrial environments. Ubisense measurements with considerable errors should not be incorporated into the Kalman filter. The GypsyGyro-18 gives accumulated errors in global translational measurements, representing an error of 0.56 m with respect to the pre-established path. This error is due to the GypsyGyro-18 footstep extrapolation algorithm, which sometimes estimates wrongly when the feet meet the floor; that is why a UWB localization system is needed. The GypsyGyro-18 global translational error was thereby reduced to 0.14 m, and the resulting data rate is equal to the GypsyGyro-18 frequency (30 Hz). The system consists of two components: an inertial motion capture system (GypsyGyro-18) and a UWB localization system (Ubisense). The MoCap (Motion Capture) system can register movements of the operator's limbs with high precision, but the global position of the operator in the environment is not determined with sufficient accuracy. On the other hand, the use of a dedicated Kalman filter and an altitude Kalman filter together for orientation estimation and position estimation is achieved on a hardware platform [21]. A Kalman filter is then used for global pose estimation using multiple sensors. GPS and DGPS are used as well, which is not necessary.
A visual panorama tracker as an additional input can communicate only over a limited distance. Also, the use of a barometer gives less accurate results.
It is shown that robustness against measurement error is obtained using an EKF [29]. EKFs have been implemented for all possible combinations using gyroscopes and accelerometers as control or measurement inputs [30]. An EKF with a Taylor series expansion of the observation matrix is used for superior performance [31]. A comparison has been drawn between the Second-Order Extended Kalman Filter (SOEKF) based on a multiplicative noise model and the random matrix approach, i.e., the EKF. During orientation changes, both the SOEKF and the EKF provide better performance [32]. The SOEKF's Taylor series expansion matches the exact moments more accurately than a first-order Taylor series expansion [33]. It is also observed that the EKF is capable of incorporating different directions during movement when compared to the invariant extended Kalman filter (IEKF) [34].
Experimental Discussion
The study of the various position estimation approaches above shows that comprehensive and accurate position and localization estimation, as well as efficient tracking, involves issues of differing severity where there is scope for further improvement. The dimensions and joints of the character (creature) often do not exactly match the subject (actor) being captured in miniature inertial sensor based human motion tracking using a sensor fusion algorithm [1]. Custom calibration of inertial measurement units for 3D orientation is valid for only 22 days, with error increasing every day [9]. Moreover, the human knee being less than a perfect hinge joint, gait analysis with IMU-based joint angle measurement is not completely accurate [12]. On the other hand, a large difference between altitudes occurs at the beginning, due to slow transition oscillation, for global pose estimation using multi-sensor fusion with the Kalman filter approach [21]. A lack of accuracy and drift over time are observed as well.
To compensate for the drift, existing sensor fusion systems use various types of external sensors, such as GPS, ultrasonic beacons, etc. In our approach, we propose to implement single-camera SLAM [20] [21] with the controller smartphone's camera to obtain visual odometry of the handheld device in a real-world environment. Lucas-Kanade sparse optical flow [35] is used to track good features between frames. We then calculate the essential matrix to obtain rotation and translation information, which eliminates dependency on any external positioning system. The proposed approach is tested on a Sony Xperia Z2 smartphone, which runs Android OS 6.0.1, API level 23. The Android API's SensorManager class is used to access events and information such as the sensor's type, timestamp, accuracy, and data. The Android API provides a 3D vector indicating acceleration along each device axis, excluding gravity. The smartphone used in the experiments contains a BMA2X20 accelerometer with a resolution of 0.019 m/s2 and a maximum range of 39.227 m/s2.
Accelerometers have a very fast response, but their output is noisy. Experimental data shows white noise in the raw linear acceleration over a constant time span with the controller device lying flat. By using an EKF, relatively smooth data is obtained, as in Figure 1. During construction of the EKF, we set three values: the expectation value (0.001), the process noise (1e-5), and the measurement noise (0.0195). Linear acceleration data from the X, Y, and Z axes is sent to the EKF to convert the noisy data into smooth data. Some bias is still found in the accelerometer data after filtering. Computing the double integration of the filtered (smooth) data, we obtain a position estimate for the handheld device. However, a rise in drift is observed, and bias increases the amount of drift error as well.
We therefore accumulate a long-term average of the bias for bias estimation, and subtract the estimated bias before double integration to reduce drift. The inertial measurement sensors are MEMS based; hence flicker (1/f) noise is present.
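A minimal sketch of bias-corrected double integration for one axis (an illustration of the idea, not the exact pipeline used in the experiments; names and values are illustrative):

```python
import numpy as np

def integrate_position(accel, dt, bias_window):
    """Bias-corrected double integration of one-axis linear acceleration.

    accel       : filtered acceleration samples (m/s^2), device initially at rest
    dt          : sampling interval (s)
    bias_window : number of initial at-rest samples used to estimate the bias
    """
    accel = np.asarray(accel, dtype=float)
    bias = accel[:bias_window].mean()   # long-term average as the bias estimate
    a = accel - bias                    # remove bias before integrating
    v = np.cumsum(a) * dt               # first integration: velocity
    p = np.cumsum(v) * dt               # second integration: position
    return p
```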
Conclusion
It is observed from the detailed analysis above that there is scope for state estimation (velocity, position, and pose) using accelerometer, gyroscope, and magnetometer sensors with an EKF, which requires further study. The significance of this approach is the ability to obtain a room-scale canvas and introduce an immersive experience to users. Extensive experimentation can be performed using the IMU of a mobile device. Further analysis on accurately estimating drift and error can be performed as well. Monocular odometry, to compute the accurate relation between the IMU-provided position and visual localization in a real-world coordinate system, is another avenue of possible study. We propose to use a monocular visual-inertial system integrated with the accelerometer to reduce the observed flicker noise in the linear acceleration data. From monocular images, a depth map can be created using concurrent inertial measurements and pose recovery. With the Structure-from-Motion method, we can estimate the camera pose and trajectory to get stable 3D coordinates for the handheld device with reduced residual noise from the IMU sensors.
Figure 1. Accelerometer linear acceleration data from the X-axis: (a) noisy data; (b) smooth data after extended Kalman filtering (arbitrary units).
Physiological concentrations of soluble uric acid are chondroprotective and anti-inflammatory
High uric acid levels are a risk factor for cardiovascular disorders and gout; however, the role of physiological concentrations of soluble uric acid (sUA) is poorly understood. This study aimed to clarify the effects of sUA in joint inflammation. Both cell cultures of primary porcine chondrocytes and mice with collagen-induced arthritis (CIA) were examined. We showed that sUA inhibited TNF-α- and interleukin (IL)-1β–induced inducible nitric oxide synthase, cyclooxygenase-2 and matrix metalloproteinase (MMP)-13 expression. Examination of the mRNA expression of several MMPs and aggrecanases confirmed that sUA exerts chondroprotective effects by inhibiting the activity of many chondro-destructive enzymes. These effects attenuated collagen II loss in chondrocytes and reduced proteoglycan degradation in cartilage explants. These results were reproduced in chondrocytes cultured in three-dimensional (3-D) alginate beads. Molecular studies revealed that sUA inhibited the ERK/AP-1 signalling pathway, but not the IκBα-NF-κB signalling pathway. Increases in plasma uric acid levels facilitated by the provision of oxonic acid, a uricase inhibitor, to CIA mice exerted both anti-inflammatory and arthroprotective effects in these animals, as demonstrated by their arthritis severity scores and immunohistochemical analysis results. Our study demonstrated that physiological concentrations of sUA displayed anti-inflammatory and chondroprotective effects both in vitro and in vivo.
poorly understood. Early studies proposed that uric acid is a signal from damaged tissues that alerts the immune system 8,9 . For example, uric acid was shown to regulate the inflammatory response in damaged tissues in a mouse model of liver injury 10 . In addition, experimental evidence has suggested that uric acid may be important in vascular remodelling and is an independent risk factor for many vascular disorders 11 . Moreover, uric acid-lowering therapy was shown to exert beneficial effects in patients with cardiovascular diseases 12 . However, a recent study showed that administering allopurinol to heart failure patients treated with atorvastatin to reduce serum uric acid levels did not provide beneficial effects in these patients 13 .
In contrast to studies showing that elevated serum uric acid levels potentially increase the risk of cardiovascular disease, an early study suggested that uric acid may have antioxidant effects 14 . Uric acid scavenges singlet oxygen atoms and oxygen radicals, thereby attenuating iron-mediated ascorbic acid oxidation 14,15 . Uric acid can also attenuate reperfusion damage induced by free radical-generating granulocytes in isolated organs from pigs and humans 16 . In addition, uric acid can prevent peroxynitrite-induced nitrosation of proteins and inactivation of tetrahydrobiopterin 17,18 , a cofactor necessary for nitric oxide synthase. Increases in plasma uric acid levels are associated with reductions in plasma nitrite/nitrate levels, and treating endothelial cells with uric acid reduces vascular endothelial growth-stimulated nitric oxide (NO) production 19 .
In a mouse model of dsRNA-triggered arthritis, administering a uric acid suspension in saline reduced the frequency and severity of arthritis compared to saline treatment 20 . Interestingly, a recent study showed that uric acid concentrations in the synovial fluid of OA patients were positively correlated with the severity of knee OA 21 . Given that uric acid plays a complex role in the inflammatory response, in the present study, we investigated the possible pro-or anti-inflammatory effects of physiological concentrations (15-60 μg/ml) of soluble uric acid (sUA) in joint disease. The results of the study indicate that sUA exerts both anti-inflammatory and chondroprotective effects in vitro and in vivo.
Results
sUA inhibited IL-1β- and TNF-α-stimulated chondrocytes. We investigated the protective effects of several physiological concentrations of sUA on chondrocyte/cartilage degradation induced by inflammation. The results showed that IL-1β and TNF-α induced inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and pro-MMP-13 expression in the chondrocytes, and that these effects were suppressed by sUA (Fig. 1). We consider that the incubation of cells with sUA should be as long as possible to mimic the real situation in humans. However, because a longer incubation period may result in unwanted conditions such as contamination, we chose to pre-incubate the cells with sUA for 72 h. In rare cases, we pre-incubated the cells with sUA for only 24 h, which also worked well. We observed that the anti-inflammatory effects of sUA could be demonstrated with as little as 6 h of pre-incubation with chondrocytes (data not shown). In addition, proinflammatory cytokine-mediated reductions in collagen II (Col II) expression were abolished by sUA treatment. We also examined the effects of monosodium urate crystals (MSU) on the above parameters. In contrast to sUA, MSU did not exert anti-inflammatory effects or reverse proinflammatory cytokine-mediated reductions in Col II expression in chondrocytes (Supplementary Figure 1). sUA also inhibited both IL-1β- and TNF-α-induced reactive oxygen species (ROS) generation (Supplementary Figure 2). To determine whether the effects of sUA are mediated by regulation of the mRNA expression levels of inflammation-related proteases and enzymes, we performed quantitative PCR assays. The primers for the genes whose expression levels were assessed by qPCR are shown in Supplementary Table 1. The results of these analyses demonstrated that sUA suppressed proinflammatory cytokine-stimulated MMP-1, MMP-13, a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS)4, ADAMTS5, iNOS and COX-2 mRNA expression in chondrocytes (Fig. 2). sUA also tended to reduce TNF-α-induced MMP-3 mRNA expression in chondrocytes, although these reductions were not statistically significant. Surprisingly, both IL-1β and TNF-α inhibited Col II mRNA expression, effects that were counteracted by sUA treatment, which also facilitated significant recovery of Col II mRNA expression levels. sUA did not affect aggrecan mRNA expression.
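For reference, relative mRNA expression from qPCR is commonly quantified with the comparative $C_t$ method (an assumption here, as the quantification scheme is not stated in the text):

$$\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad \Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}, \qquad \text{fold change} = 2^{-\Delta\Delta C_t}.$$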
Signalling pathway targeted by sUA. The molecular mechanisms underlying the anti-inflammatory effects of sUA were examined. As shown in Fig. 3A and Supplementary Figure 3, sUA inhibited proinflammatory cytokine-induced activator protein-1 (AP-1), but not nuclear factor kappaB (NF-κB) or signal transducer and activator of transcription (STAT)3, DNA-binding activity. These results were confirmed by reporter assays (Supplementary Figure 4A and B). In addition, neither IL-1β- nor TNF-α-mediated NF-κB inhibitor-α (IκBα) degradation was affected by sUA treatment (Supplementary Figure 4C and D). Analysis of the activity of the mitogen-activated protein kinases (MAPKs) upstream of AP-1 that are activated by IL-1β or TNF-α, i.e., the phosphorylated forms of extracellular signal-regulated kinase (ERK), p38, and c-Jun, showed that ERK, but not p38 or c-Jun, phosphorylation was inhibited by sUA.
Effects of sUA in chondrocytes cultured in 3-D alginate beads.
To avoid the confounding effects of the occurrence of chondrocyte de-differentiation in a monolayer culture, we encapsulated chondrocytes in 3-D alginate beads to ensure that the chondrocytic phenotype was retained 22 . As shown in Fig. 4A and B, sUA treatment counteracted IL-1β- and TNF-α-mediated reductions in Col II production and inhibited MMP-13 expression in samples collected from ECM and cell lysates. sUA also effectively inhibited IL-1β- and TNF-α-induced MMP-13 mRNA expression. However, sUA did not affect aggrecan mRNA expression, which served as a control in this experiment (Fig. 4C).

sUA protected against TNF-α- and IL-1β-induced proteoglycan degradation in cartilage explants. To further investigate the chondroprotective effects of uric acid and elucidate the events associated with proteoglycan degradation in cartilage, we prepared and examined porcine cartilage explants of equal sizes. As shown in Fig. 5A, sUA prevented both IL-1β- and TNF-α-induced proteoglycan loss and inhibited both IL-1β- and TNF-α-enhanced staining of NITEGE, the carboxyl-terminal aggrecan cleavage product, in cartilage explants. Furthermore, sUA suppressed IL-1β- and TNF-α-mediated release of proteoglycan into the culture supernatants of cartilage explants (Fig. 5B). Immunohistochemical staining for COX-2, MMP-13, and Col II confirmed that sUA exerts chondroprotective effects in proinflammatory cytokine-induced inflammation and cartilage damage (Fig. 5C).
Effect of hyperuricaemia in a murine CIA model. To investigate the possible protective effect of hyperuricaemia in the murine CIA model, the animals were fed water or, to induce hyperuricaemia, oxonic acid, a uricase inhibitor effective in increasing serum uric acid levels 23 (Fig. 6A). We induced polyarthritis by injecting bovine Col II into the tails of the mice. Administration of oxonic acid did not cause changes in body weight (data not shown). Based on a previously established scoring system 24 , oxonic acid-fed mice exhibited a lower incidence of arthritis and less severe arthritis than water-fed mice (Fig. 6B-D). Immunohistochemical analysis was performed to measure the severity of structural damage, as described by other researchers 25 (Supplementary Figure 5). Figure 6E shows that uric acid exerted arthroprotective effects against arthritis in oxonic acid-treated mice, as these mice displayed less inflammatory cell infiltration in the synovium, less synovial hyperplasia, less cartilage damage and less bone erosion than control mice (higher magnification images are shown in Supplementary Figure 6). The statistical data regarding the severity of both the synovial inflammation and the cartilage damage displayed by oxonic acid-treated and control animals were analysed (Fig. 6F). The results of the analysis suggest that a high correlation exists between the histological findings characterizing inflamed joint tissues and the clinical arthritis data. Furthermore, increases in uric acid differentially regulated the mRNA levels of several cytokines, including IL-1β, TNF-α, IL-6, IL-10, IL-1 receptor antagonist (IL-1Ra) and IFN-γ, as well as the levels of chemokines, such as CXCL10 and regulated on activation, normal T cell expressed and secreted (RANTES), and the levels of cartilage-damaging enzymes, including MMP-3, MMP-13, ADAMTS4, ADAMTS5, and iNOS, in CIA mice (Fig. 7). Oxonic acid treatment did not affect the mRNA expression of ZIP8, a Zn2+ importer capable of inducing the expression of several MMPs and ADAMTS5 in OA 26 . The anti-inflammatory and chondroprotective effects of sUA and the mechanisms underlying these effects are summarized in Fig. 8.
Discussion
An early study published a few decades ago reported that sUA has no effect on chondrocyte viability, proliferation or proteoglycan synthesis 27 . No other studies have examined the effects of physiological concentrations of sUA on chondrocytes. Furthermore, many researchers have long considered plasma sUA a waste product. This study therefore aimed to clarify the roles of sUA in cartilage and joint inflammation. Although above-normal uric acid levels may be risk factors for several diseases 28 , our results demonstrate that physiological concentrations of uric acid exert anti-inflammatory and chondroprotective effects. These effects were clearly demonstrated using many different cellular and molecular approaches in different systems, including a chondrocyte-based study, a 3-D alginate bead study and a study using cartilage explants.
ECM components, such as collagen, proteoglycan and aggrecan, form structural bases that are essential for maintaining cartilage integrity 29 . Among the many types of collagen, Col II is particularly important and has long served as an accurate indicator of cartilage metabolism 30 . MMP-13 (collagenase-3) preferentially cleaves Col II with greater potency than collagenase-1 and is a critical proteinase in cartilage damage, as well as in progressive cartilage matrix and cellularity loss in the ageing process 31,32 . Many other MMPs and aggrecanases, such as the ADAMTSs, also have roles in cartilage damage in inflamed joints 33 . These results suggest that many of these proinflammatory cytokine-induced chondro-destruction-inducing enzymes can be effectively suppressed under physiological sUA concentrations. Surprisingly, in addition to downregulating chondro-destruction-inducing enzymes, sUA also inhibited proinflammatory cytokine-mediated suppression of Col II mRNA expression. It is therefore possible that sUA targets more upstream molecules in the signalling pathways involved in proinflammatory cytokine stimulation.
Molecular analysis revealed the specificity of the chondroprotective effects of sUA. Among the MAPK pathways, the ERK signalling pathway, but not the p38-or JNK-mediated signalling pathway, was affected by sUA. Similarly, the AP-1-mediated signalling pathway but not the NF-κB signalling pathway was targeted by sUA. These results are consistent with those of a previous study that showed that AP-1 activation plays an important role in MMP-13 expression 34 . Furthermore, given the critical role played by AP-1 family proteins in the mediation of OA cartilage destruction, it is possible that sUA-mediated AP-1 signalling pathway inhibition can protect against OA pathogenesis 35 . The AP-1 signalling pathway-selective character of sUA shares certain similarities with the anti-inflammatory effects of retinoic acid observed in previous studies 36 . Although our studies demonstrated that sUA could inhibit proinflammatory cytokine-induced ERK activation, these experiments did not identify the exact target that is inhibited by sUA. That is, the current study did not show whether sUA directly or indirectly inhibited the ERK/AP-1 signalling pathway. Further studies are needed to address this issue.
For decades, there have been no satisfactory animal models of OA. The commonly used traumatic OA model, which is induced by anterior cruciate ligament transection, is representative of only a very limited number of OA populations. Because the cartilage damage in OA and rheumatoid arthritis is similar 2 , we investigated the anti-inflammatory effects of sUA in a murine model of CIA. We noted significant changes in synovitis development, cartilage destruction and bone erosion, changes indicating that the CIA model is a very useful animal model of arthritis, especially rheumatoid arthritis 37 . In this model, many cytokines, such as IL-1β, TNF-α, IL-6 and IL-10, play important roles in the pathogenesis of the disease 37 . The uricase inhibitor oxonic acid, which increases plasma uric acid levels, reduced the incidence and severity of arthritis in CIA mice. We also noted significant reductions in the levels of proinflammatory cytokines, such as IL-1β and TNF-α, as well as suppression of chemokine and cartilage destruction-inducing enzyme production, in oxonic acid-treated mice compared to control mice. The cumulative effects of these changes also led to reduced inflammatory cell infiltration into the synovial tissues, as well as protection against cartilage damage and bone erosion. The anti-inflammatory roles of IL-10 38,39 and IFN-γ 40,41 have been established; our results showed that uric acid also reduced both IL-10 and IFN-γ expression in inflamed joints. However, these reductions did not have a significant impact on the chondroprotective and anti-inflammatory effects of sUA. Meanwhile, the mRNA levels of the proinflammatory cytokine IL-6 and the anti-inflammatory cytokine IL-1Ra also decreased after oxonic acid treatment, although the results of our statistical analysis indicated that these changes were not significant due to the limited number of samples analysed. Overall, the results of our in vivo studies using CIA mice fully support the results of our in vitro studies using primary chondrocytes.

[Figure 7 legend: sUA regulated the mRNA expression of proinflammatory cytokines, chemokines, and several proinflammatory markers in the inflamed joints of CIA mice. Tissues from the entire joint/paw of water- and oxonic acid-treated CIA mice (7 per group) were collected, and the mRNA expression levels of the indicated molecules were analysed; the qPCR primers for the individual genes are shown in Supplementary Table 1.]
sUA has been shown to exert anti-inflammatory effects in patients with acute knee injuries 42 . Furthermore, an extensive meta-analysis that compared bone mineral density and the incidence of osteoporosis and fractures between people with higher and lower serum uric acid concentrations revealed that uric acid plays a protective role. Moreover, subjects with higher serum uric acid levels have a significantly higher bone mineral density than subjects with lower serum uric acid levels 43 . Consistent with the results of these human studies, our results showed that physiological serum sUA levels exerted anti-inflammatory effects that protected against inflammation-induced cartilage and joint damage.
Methods
Reagents and antibodies. sUA solution was freshly prepared and filtered through a 0.22-μm syringe filter, as described previously 44 . MSU was prepared using the method reported by Lee et al. 45

Isolation and culture of porcine chondrocytes. Porcine cartilage specimens were obtained from the hind leg joints of pigs. Chondrocytes were prepared from these specimens as described in our previous report 46 . After the articular cartilage was enzymatically digested with 2 mg/ml protease in serum-free Dulbecco's modified Eagle's medium (DMEM) containing antibiotics, the specimens were digested overnight with 2 mg/ml collagenase I and 0.9 mg/ml hyaluronidase in DMEM/antibiotics supplemented with 10% foetal bovine serum (FBS). The cells were subsequently collected, passed through a cell strainer (Beckton Dickinson, Mountain View, CA, USA) and cultured in DMEM containing 10% FBS and antibiotics for 3-4 days before being used. When cultured in a monolayer, chondrocytes de-differentiate into fibroblast-like cells after a few passages [47][48][49] . To prevent this change, we maintained the chondrocytes used throughout this study at one passage so that the cells retained their shapes and characteristics [50][51][52] . During the period of cell culture and treatment, sUA was not removed.
Western blotting. Enhanced chemiluminescence Western blotting (Amersham-Pharmacia, Arlington Heights, IL, USA) was performed as described previously 52 . Briefly, equal amounts of protein were analysed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a nitrocellulose filter. For immunoblotting, the nitrocellulose filter was incubated with Tris-buffered saline with 1% Triton X-100 containing 5% non-fat milk for 1 h and then blotted with antibodies against specific proteins for another 2 h at room temperature.
Nuclear extract preparation and electrophoretic mobility shift assay. Nuclear extract preparation and electrophoretic mobility shift assay (EMSA) were performed as described in our previous report 52 . Oligonucleotides containing an NF-κB-, STAT3-, or AP-1-binding site were purchased and used as DNA probes. The DNA probes were radiolabelled with [γ-32P]ATP using T4 kinase (Promega). For the binding reaction, the radiolabelled probe was incubated with 4 μg of nuclear extracts. The binding buffer contained 10 mM Tris-HCl (pH 7.5), 50 mM NaCl, 0.5 mM ethylenediaminetetraacetic acid (EDTA), 1 mM dithiothreitol, 1 mM MgCl2, 4% glycerol, and 2 μg of poly(dI-dC). The final reaction mixture was analysed in a 6% non-denaturing polyacrylamide gel, and 0.5× Tris/Borate/EDTA was used as the electrophoresis buffer.
Analysis by real-time polymerase chain reaction with reverse transcription. Total RNA was isolated using Trizol reagent (Invitrogen; Carlsbad, CA, USA) after the cells were lysed, according to the manufacturer's protocol and as described in our previous report 52 . Reverse transcription was performed in a 20-μl mixture containing 2 μg of total RNA, 10× RT buffer (Invitrogen), random hexamers (Invitrogen), a dNTP mixture (Promega; Madison, WI, USA), and Moloney Murine Leukemia Virus Reverse Transcriptase (MMLV RTase, Invitrogen), in accordance with the protocol of the Superscript First-Strand Synthesis System (Invitrogen). After the RNA was reverse-transcribed to cDNA, the template cDNA samples were subjected to PCR reactions. Real-time measurements of the expression levels of the designated genes were performed according to the manufacturer's instructions (Power SYBR Green PCR Master Mix, Applied Biosystems, Foster City, CA, USA). Briefly, 10 ng of cDNA was amplified in a total mixture volume of 20 μl consisting of 1× Master Mix and the appropriate gene-specific primers, which were added at a final concentration of 100 nM. The primer sequences, which are shown in Supplementary Table 1, were designed by us or described by other researchers 53,54 . The reactions were performed over 40 cycles comprising steps at 95 °C for denaturation and 60 °C for annealing and extension on a Roche LightCycler 480 (Roche). The changes in gene expression caused by stimulation with TNF-α or IL-1β in the presence or absence of sUA were calculated with the following formula: fold change = 2^(−Δ(ΔCt)), where ΔCt = Ct(stimulated gene) − Ct(GAPDH), and Δ(ΔCt) = ΔCt(stimulated) − ΔCt(control).
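As a minimal sketch of this 2^(−Δ(ΔCt)) calculation, the snippet below uses invented Ct values; the function name and numbers are illustrative only and are not from the study.

```python
# Sketch of the 2^-d(dCt) fold-change calculation; Ct values are hypothetical.
def fold_change(ct_target_stim, ct_gapdh_stim, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression of a target gene, normalized to GAPDH."""
    d_ct_stim = ct_target_stim - ct_gapdh_stim   # dCt for the stimulated sample
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl   # dCt for the control sample
    dd_ct = d_ct_stim - d_ct_ctrl                # d(dCt)
    return 2 ** (-dd_ct)

# Hypothetical Ct values for an MMP-13-like induction:
print(fold_change(22.1, 18.0, 26.3, 18.2))  # -> 16.0 (16-fold induction)
```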
Transfection assays. Transient transfection was performed using the transfection reagent TransIT-LT1 (Mirus Bio LLC, Madison, WI, USA). Briefly, chondrocytes at P0 were transfected with a DNA/TransIT-LT1 preparation in 10% FBS culture medium. The transfection mixture consisted of 15 μg of AP-1 or NF-κB firefly luciferase reporter plasmid (Stratagene, La Jolla, CA, USA), 1 μg of the internal control plasmid TK-Renilla luciferase (Promega, Madison, WI, USA) and 45 μl of TransIT-LT1 in Opti-MEM. The chondrocytes were passaged in a 24-well plate at a density of 4 × 10 5 /well overnight, after which the medium was replaced with serum-free DMEM containing various concentrations of sUA. After 72 h, the cells were treated with IL-1β or TNF-α for another 24 h before the total cell lysate was collected, and luciferase activity was measured using a luminometer, according to the manufacturer's instructions (Promega). Renilla luciferase values were used to normalize each sample for transfection efficiency measurements. The results are expressed as fold inductions of luciferase activity.
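The Renilla normalization described above amounts to a ratio of ratios; the following sketch uses invented luminometer counts purely for illustration.

```python
# Sketch of dual-luciferase normalization; raw counts are hypothetical.
def fold_induction(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly activity normalized to Renilla, relative to an untreated control."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

print(round(fold_induction(52000, 800, 9000, 750), 2))  # -> 5.42-fold induction
```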
3-D alginate bead experiments.
The 3-D alginate bead experiments were performed by slightly modifying a previously described method 55 . Briefly, freshly isolated chondrocytes were gently resuspended in alginate solution (1.2% low-viscosity alginate in 0.15 M NaCl) at a density of 3.75 × 10 6 cells/ml. The chondrocyte suspension was slowly dripped (drop volume, 10 μl) into a CaCl 2 solution (102 mM) using an automatic Pipetman. After the solution had slowly mixed, and the beads had been allowed to completely polymerize for 10 min at room temperature, the CaCl 2 solution was aspirated, and the beads were washed with normal saline before being cultured in DMEM containing 10% FCS at 37 °C with 5% CO 2 . After the beads were treated for the indicated time period, the culture medium was replaced with iced normal saline. The alginate beads were then transferred into Eppendorf tubes containing a cold 55 mM sodium citrate solution and rotated for 30 min at 4 °C to dissolve the alginate gel and release the cells from the beads. After the cells were centrifuged at 12,000 g for 10 min at 4 °C, the supernatants (ECM fraction) were collected. The cell pellet was washed with cold normal saline and lysed in RIPA buffer. The ECM fraction and cell lysate were then analysed with the relevant assays.
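For orientation, the seeding numbers implied by this protocol follow directly from the stated density and drop volume; the short sketch below only restates that arithmetic.

```python
# Back-of-envelope numbers implied by the bead protocol above.
cell_density = 3.75e6     # cells per ml of alginate suspension
drop_volume_ul = 10       # volume of one bead-forming drop (ul)

cells_per_bead = cell_density * drop_volume_ul / 1000.0  # ul -> ml conversion
beads_per_ml = 1000.0 / drop_volume_ul

print(f"{cells_per_bead:.0f} cells per bead, {beads_per_ml:.0f} beads per ml")
# -> 37500 cells per bead, 100 beads per ml of suspension
```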
Preparation of cartilage explants. The cartilage explants were prepared as described in our previous report 52 . Briefly, articular cartilage specimens of uniform size from the joint located near the femoral head of the pig hind limb were excavated with a stainless-steel dermal punch (diameter, 3 mm; Aesculap, Tuttlingen, Germany) and weighed. Each cartilage explant was subsequently placed in a 96-well plate and cultured in DMEM containing antibiotics and 10% FBS for 24 h. After incubation in serum-free DMEM for 72 h, the cartilage explants were used for additional experiments.
Analysis of cartilage degradation. We assessed cartilage degradation by measuring the amount of proteoglycan released into the culture medium, as previously described 52 . Briefly, culture medium was added to 1,9-dimethylmethylene blue (DMB) solution (Sigma), a metachromatic dye that binds sulfated glycosaminoglycan (GAG), a major component of proteoglycan. GAG-DMB complex formation was quantified in a 96-well plate using a plate reader (TECAN) at a wavelength of 595 nm (released GAG). The cartilage explants were then collected into Eppendorf tubes and digested in papain (1 mg/ml, containing 5 mM cysteine HCl, 5 mM EDTA, and 0.1 M phosphate buffer, pH 6.0) at 60 °C overnight. The dissolved matrix was subsequently analysed to determine the proteoglycan content of each sample (retained GAG). GAG loss was calculated and expressed as [released GAG (μg) in the culture medium/(released GAG + retained GAG in the cartilage explant)] × 100.
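A minimal sketch of this GAG-loss calculation follows; the microgram values are hypothetical readings already converted from DMB absorbance via a standard curve.

```python
# Sketch of the GAG-loss formula defined above; inputs are hypothetical.
def gag_loss_percent(released_ug, retained_ug):
    """GAG loss (%) = released / (released + retained) x 100."""
    return released_ug / (released_ug + retained_ug) * 100.0

print(round(gag_loss_percent(12.0, 48.0), 1))  # -> 20.0 (% of total GAG lost)
```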
Safranin O staining and immunohistochemical study. Cartilage explants were mounted in embedding medium (Miles Laboratories, Naperville, IL, USA) and rapidly frozen at −80 °C; serial, noncontiguous microscopic sections (7 μm) of the cartilage explants were then cut on a Microm cryostat at −20 °C and mounted on Superfrost Plus glass slides (Menzel-Gläser, Braunschweig, Germany). To assess changes in proteoglycan content, we stained the tissue sections with safranin O/fast green before counterstaining the tissues with Weigert's iron haematoxylin 52 . We then performed immunohistochemical staining to assess pro-MMP-13, Col II and COX-2 expression and performed NITEGE staining as described in our previous report, with some modifications 50 .
Animal experiments. All animal experiments performed in this study were approved by the National Health Research Institute, Taiwan, and all mice were housed and maintained under specific pathogen-free conditions, according to the institute's animal care guidelines. In addition, all methods in the animal studies were performed in accordance with the relevant guidelines and regulations of the institution. The murine CIA model was produced as previously described 24 . The tails of male DBA/1J mice (age, 9 weeks) were injected intradermally with 100 μl of bovine Col II at a concentration of 1 mg/ml and complete Freund's adjuvant containing 0.5 mg/ml of Mycobacterium tuberculosis. The animals received a 50-μl booster injection 21 days after the first injection. Twenty-eight days after Col II injection, the animals were closely observed every 2-3 days to determine whether any inflammatory reactions had occurred in their foot paws. The following 4-point scale was used to measure clinical disease activity in each paw: 0 = no evidence of erythema and swelling; 1 = erythema and mild swelling confined to the tarsals or ankle joint; 2 = erythema and mild swelling extending from the ankle to the tarsals; 3 = erythema and moderate swelling extending from the ankle to the metatarsal joints; and 4 = erythema and severe swelling encompassing the ankle, foot and digits, or limb ankylosis 24 . The mice were sacrificed via CO2 inhalation 56 days after the first injection, after which their blood was aspirated from their hearts and their serum was collected (3,000 × g, 15 min at 4 °C) for analysis of uric acid concentrations. The foot paw samples were immersed in 10% formalin and fixed for pathological analysis after haematoxylin/eosin staining and toluidine blue O staining, or stored in liquid nitrogen for mRNA expression analysis. Pathological inflammatory changes and cartilage destruction in the joints were scored using a previously described classification system, with some modifications 25,56 . The following 3-point scale for synovial inflammation was used in the study: 0 = normal; 1 = minimal inflammatory cell infiltration into the synovium; 2 = moderate inflammatory cell infiltration into the synovium, synovial hyperplasia and oedema; and 3 = severe diffuse infiltration, pannus formation and severe oedema. The following 3-point scale for cartilage degradation was used in the study: 0 = normal; 1 = mild loss of toluidine blue O staining in the superficial layer and slight surface fibrillation; 2 = moderate loss of toluidine blue O staining and cartilage disruption; and 3 = severe loss of toluidine blue O staining, complete loss of cartilage and bone erosion. The liquid nitrogen-preserved samples were ground with a pestle in a SPEX SamplePrep 6770 Freezer/Mill and dissolved in Trizol to obtain RNA. After the RNA was reverse transcribed into cDNA, mRNA expression levels were determined. The mice in the treatment group were fed 2% oxonic acid (Sigma-Aldrich) in reverse osmosis-treated water to induce hyperuricaemia, as previously described 57 .

Statistical analysis. One-way ANOVA with Bonferroni's multiple comparison test was used for multiple comparisons, and Student's t-test was used to evaluate differences between groups. To compare arthritis incidences and histopathology scores, we performed chi-square contingency analysis and nonparametric Mann-Whitney U-tests, respectively.
P values less than 0.05 were considered significant (*p < 0.05; **p < 0.01; ***p < 0.001, ****p < 0.0001).
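For illustration, the nonparametric comparisons described above can be reproduced in SciPy; the score vectors and the 2x2 incidence table below are invented, not the study's data.

```python
# Illustrative reanalysis of the score comparisons with SciPy (invented data).
from scipy.stats import mannwhitneyu, chi2_contingency

# Histopathology scores (0-3 scale): water-fed vs. oxonic acid-fed mice
water  = [3, 2, 3, 2, 3, 2, 3]
oxonic = [1, 1, 2, 0, 1, 2, 1]
u_stat, p = mannwhitneyu(water, oxonic, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p:.4f}")

# Arthritis incidence (affected / unaffected) as a 2x2 contingency table
table = [[6, 1],   # water group
         [2, 5]]   # oxonic acid group
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```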
Determination of Brain Distribution of Amino Acid Neurotransmitters in Pigs and Rats by HPLC-UV
Introduction
Amino acid neurotransmitters are common substances that transmit nerve information and play an extremely vital role in nervous tissue, particularly in the neural functions of the brain. 1 There are two types of amino acid neurotransmitters: excitatory and inhibitory. Excitatory neurotransmitters include aspartate (Asp) and glutamate (Glu), while inhibitory neurotransmitters include glycine (Gly), taurine (Tau), and γ-aminobutyric acid (GABA).
These five amino acid neurotransmitters are involved in the functions of the nervous system. For instance, excessive activation of Glu receptors can result in central nervous system disorders. 2 Variations in the amount of Asp generated and released at neuronal terminals can affect brain functions, including cognition, memory, intellect and emotion. 3 A lack of GABA can manifest itself in different ways, such as anxiety and panic. 4 Tau has promising therapeutic potential in the central nervous system, where it protects against toxicity and damage; furthermore, Tau may help to treat a number of neurological conditions, including epilepsy, stroke, and neurodegenerative illnesses. 5 Gly not only improves sleep quality and helps prevent neurological disorders such as epilepsy, depression and pain, 6 but is also critical for regulating the balance of excitation and inhibition in the hippocampus. 7 Therefore, it is critical to quantitatively detect amino acid neurotransmitters in multiple brain regions, and to determine how amino acid neurotransmitters change during various physiological and pathological processes.
With the advancement of techniques for examining amino acids, an increasing number of detection methods have been applied to detect amino acid neurotransmitters in various sample types. At present, test samples for amino acid neurotransmitters include cerebrospinal fluid, blood and urine, and high-performance liquid chromatography (HPLC) coupled with triple time-of-flight (TOF) mass spectrometry has been used. 8 This approach is highly sensitive, but its disadvantage is a complex operation. In addition, previous studies have tested amino acids obtained from the hippocampus of the rat brain using gas chromatography-mass spectrometry (GC-MS), 9 but other areas of the brain were not tested. Furthermore, the liquid chromatography-tandem mass spectrometry (LC-MS/MS) approach has been employed to detect brain amino acid content, 10 but the high cost of equipment and maintenance has made this method less accessible. These shortcomings highlight the need to establish a simple and cost-effective detection method. Therefore, the present study developed a liquid-phase pre-column derivatization-based method to address this need.
Since most amino acids do not absorb ultraviolet (UV) light, derivatization must be performed to quantitatively measure the amino acid content before liquid-phase analysis. Common derivatization reagents include phenyl isothiocyanate (PITC), FOMC-Cl, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC), and ortho-phthalaldehyde (OPA). However, these reagents have various disadvantages. For example, FOMC-Cl itself and its decomposition product, FOMC-OH, are fluorescent, which affects the separation efficiency; 11 OPA derivatives are unstable and require immediate analysis after derivatization; 12 and the presence of PITC in a sample shortens the life of the column. A sensitive derivatization reagent that works well with amino acids is 4-fluoro-7-nitrobenzofurazan (NBD-F), 13 which has also been applied to some proteins 14 and to drug concentrations in plasma. 15 Pre-column derivatization using NBD-F offers several benefits, including gentle reaction conditions, consistent byproducts, and rapid assay times. 16 As a result, an HPLC method using NBD-F as the derivatization reagent was developed for the determination of amino acid neurotransmitters in the brain. This method was successfully used to compare the amino acid neurotransmitter content in the brains of pigs and rats, and has proven to be straightforward and useful for estimating the amount of amino acid neurotransmitters.
Reagents and materials

Asp, Glu, Gly, Tau and GABA were purchased from Beijing Solarbio Science & Technology Co., Ltd. NBD-F was purchased from Shanghai McLean Biochemical Technology Co., Ltd. HPLC-grade acetonitrile and methanol were purchased from Merck (USA). HPLC-grade phosphoric acid was purchased from Comeo Chemical Reagent Co., Ltd. Analytical-grade potassium tetraborate was purchased from Shanghai Jingchun Biochemical Technology Co., Ltd. All other substances, including analytical-grade sodium dihydrogen phosphate and disodium hydrogen phosphate, were obtained from Chinese Pharmaceutical Chemical Reagent Co., Ltd. Ultrapure water was produced with a Milli-Q ultrapure water system.
Experimental animals
Laboratory animals: The Hunan Agricultural University Ethics Committee authorized the use of animals for the research (No. 2020-43). Twelve male Sprague-Dawley rats (weighing 180-240 g) were obtained from SJA Laboratory Animal Co., Ltd. The blank pig brain tissue samples were obtained from Hunan New Wufeng Co., Ltd. (Liuyang, China). The Chinese Guidelines for the Care and Use of Laboratory Animals were followed in the present study. The investigators will continue to follow these guidelines in future experiments.
Standard solution
Standard reserve solutions: Glu, Asp, Gly, Tau and GABA were accurately weighed and dissolved in water; the first two were prepared at 10 mmol/L and the last three at 100 mmol/L, and all were stored at −20°C. The five standard amino acids were then combined into mixed standard working solutions of 1 mmol/L, 100 µmol/L and 10 µmol/L, which were stored at 4°C.
Derivative reagent: the NBD-F was accurately weighed and prepared into a reserve solution at a concentration of 0.1 mol/L, and this was kept away from light at −20°C until analysis.
Potassium tetraborate solution: potassium tetraborate was accurately weighed and dissolved in water, and the volume was brought to 100 mL. Using a calibrated pH meter, the pH of the solution was adjusted to 9.5 ± 0.1. Then, 40 kHz ultrasound was applied for 10 minutes, and the solution was passed through a membrane filter and stored at 4°C.
Sample collection and preservation
The hippocampus, cortex, striatum, brainstem and cerebellum were quickly removed from the skulls of the pigs and rats and separated on an ice plate. Before use, the separated brain tissues were kept at −80°C. After accurate weighing, normal saline was added for homogenization at a mass-volume ratio of 1:1. Then, the homogenate was centrifuged at 12,000 rpm for 10 minutes at 4°C, and the supernatant was transferred and purified using an organic membrane filter with a pore size of 0.22 µm.
Sample collection and pretreatment
The mixed amino acid solution or sample supernatant (100 µL), potassium tetraborate solution (350 µL), and NBD-F working solution (50 µL) were mixed in 1.5 mL light-proof centrifuge tubes, and allowed to react in a heated constant temperature mixer at 60°C for 10 minutes.
Chromatographic conditions
The analytes were separated on a ChromCore C18 column (150 × 4.6 mm, 5 µm) using an FLC automatic 2-D liquid chromatography coupling instrument. Mobile phase A was methanol, and mobile phase C was phosphate buffer (0.02 mol/L, including 0.2 mmol/L sodium dihydrogen phosphate and 0.2 mmol/L disodium hydrogen phosphate, pH 6.0). A 0.22 µm aqueous filter membrane was used to filter the mobile phase. The injection volume was 20 µL, the column temperature was 45°C, the detection wavelength was 472 nm, and the mobile phase flow rate was 1.0 mL/min.
Method of validation
Five standard amino acids were selected and detected in hippocampus samples obtained from pigs. Then, the limit of detection (LOD), limit of quantitation (LOQ), linearity, precision, and accuracy were determined to confirm the dependability of the method. The standard curve was drawn using the concentration and peak area of Asp, Glu, Gly, Tau, and GABA in Milli-Q water. Quality control samples at three concentrations (QCH, QCM and QCL) were prepared (n = 6). Precision and accuracy were measured within days and between days over six days using QC samples of the same concentration. The detection limit was set at a signal-to-noise ratio (S/N) of 3, and the quantitative limit was set at an S/N of 10.
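A minimal sketch of how such S/N-based limits are computed follows; the baseline noise and calibration slope below are hypothetical values, not the study's measurements.

```python
# Sketch of the S/N-based LOD/LOQ estimate; noise and slope are hypothetical.
baseline_noise = 0.05   # peak-to-peak baseline noise (signal units)
slope = 1.0             # calibration slope (signal units per umol/L)

lod = 3 * baseline_noise / slope    # concentration giving S/N = 3
loq = 10 * baseline_noise / slope   # concentration giving S/N = 10
print(f"LOD = {lod:.2f} umol/L, LOQ = {loq:.2f} umol/L")
```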
Statistical analysis
GraphPad Prism 8.0.1 was used to conduct multivariate ANOVA on the experimental results, and the findings were presented as mean ± standard deviation. A difference was considered significant when p < 0.05 and highly significant when p < 0.01.
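As a simple stand-in for the GraphPad analysis (one-way rather than multivariate ANOVA), the snippet below compares one analyte across regions; the Tau concentrations are invented numbers for illustration only.

```python
# Illustrative one-way ANOVA with SciPy; all values are invented.
from scipy.stats import f_oneway

hippocampus = [1.91, 2.05, 1.88, 2.10, 1.95, 2.02]   # Tau, umol/g (hypothetical)
cortex      = [1.60, 1.72, 1.55, 1.68, 1.63, 1.70]
striatum    = [1.20, 1.31, 1.18, 1.27, 1.22, 1.25]

f_stat, p = f_oneway(hippocampus, cortex, striatum)
print(f"F = {f_stat:.2f}, p = {p:.4g}")   # p < 0.05 -> significant difference
```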
Liquid chromatographic conditions
Isocratic (gradient-free) elution was used in the present study. Following preliminary experiments, isocratic elution with 25%, 15%, and 5% methanol buffer salt for 25 minutes was evaluated. Because GABA is strongly retained on this column, and in order to elute GABA smoothly without affecting the normal peaks of Asp and Glu, 12% methanol buffer salt was selected as the mobile phase. In addition, the impact of the buffer salt pH (6.0, 6.5 and 6.8) on the target peaks was examined, and pH 6.0 was selected. Therefore, the ChromCore C18 column was selected and eluted for 18 minutes with 12% methanol buffer salt; the five neurotransmitter amino acids were completely separated with good peak shapes. However, after the addition of the biological matrix, other unknown peaks were not completely eluted. To prevent interference with the next injection, the isocratic elution was extended to 25 minutes. The results indicated that the five amino acid neurotransmitters were totally separated after 25 minutes at pH 6.0, and isocratic elution with 12% methanol buffer salt beyond 18 minutes was adopted. Based on preliminary experiments, three derivatization systems were compared, and the best NBD-F concentration under the best derivatization environment was determined. The following were investigated: a mixed derivatization system consisting of sample + potassium tetraborate + NBD-F solution, a system consisting of sample + potassium tetraborate + NBD-F + methanol, and a system consisting of sample + potassium tetraborate + NBD-F + acetonitrile. In addition, NBD-F solutions of 10, 50 and 100 mmol/L were investigated under each of the three systems. The 10 mmol/L NBD-F solution, without acetonitrile or methanol, gave the best peak shape and separation.
pH, temperature and time
In the present experiment, after studying and analyzing the derivation conditions of amino acids, it was revealed that the pH of the derivative reagents, temperature, and reaction heating time affects the sensitivity of the detection method. Therefore, the pH of the reaction medium was investigated. The effect of various pH levels (8.5, 9.0, 9.5 and 10.0) on the peak area of the derivative product of amino acid was investigated. It was revealed that the derivative yield of potassium tetraborate with pH 9.5 was the highest.
Based on this information, the effects of several derivatization temperatures (30, 45, 60 and 75°C) and time ranges (1-20 minutes) on the peak area of the derivatives were determined. It was found that the reaction time of heating for 10 minutes at 60°C can make the derivatization reaction complete, and allowed for more ideal peak areas and separations to be obtained.
Chromatography and detection
The chromatographic conditions established in this study were used to determine the selectivity for the five amino acid neurotransmitters, and chromatograms were drawn for the five brain areas in pigs and rats (Fig. 1). Within the chosen experimental parameters, the analyte peaks were symmetrically resolved and their retention times were not disturbed by other substances. Furthermore, the five kinds of amino acid neurotransmitters were entirely separated within 18 minutes. As shown in Figure 1b, the peak emergence times of Asp, Glu, Gly, Tau and GABA were 2.44, 3.29, 8.09, 10.01 and 16.20 minutes, respectively. As shown in Figure 1c, the peak emergence times for Asp, Glu, Gly, Tau and GABA in pigs were 2.42, 3.38, 7.96, 9.88 and 17.29 minutes, respectively.
Linearity, detection and quantitative limits, precision and accuracy
The linear relationships of the five standard amino acid neurotransmitters were determined within a range of seven concentrations (0.300-100.0 µmol/L). The regression equations for the calibration curves, peak area relative standard deviations (RSDs), detection limits, and quantitation limits are presented in Table 1. The results revealed strong linearity for all five amino acids, with correlation coefficients above 0.999.
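A minimal calibration fit of the kind summarized in Table 1 can be done as follows; the concentration/area pairs below are made-up points, not the study's data.

```python
# Illustrative calibration-curve fit; data points are invented.
import numpy as np

conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 60.0, 100.0])   # umol/L
area = np.array([0.9, 3.1, 9.2, 30.5, 91.0, 183.0, 304.0])  # peak area

slope, intercept = np.polyfit(conc, area, 1)     # linear regression
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.3f} * conc + {intercept:.3f},  r^2 = {r**2:.5f}")
```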
Precision and accuracy were assessed by measuring the RSD of the five amino acid neurotransmitters in the hippocampus of pigs within and between six days (Table 1). The LOD and LOQ of the amino acid neurotransmitters were 0.15-0.20 µmol/L and 0.30-0.55 µmol/L, respectively.
According to the experimental results, the LOD values of Asp, Glu, Gly, Tau and GABA were 0.15, 0.15, 0.20, 0.20 and 0.20 µmol/L, respectively. Although the sensitivity of this method is not high, it satisfies basic quantification requirements and provides a new approach for the detection of amino acids in the brain.
Results of the method application
The method was successfully applied to two animal matrices (rats and pigs), and the outcomes are presented in Figure 2. The contents of the five amino acid neurotransmitters in the five brain regions (hippocampus, cortex, striatum, cerebellum and brainstem) were measured in rats and pigs.
The findings revealed that the method for determining the concentration of amino acid neurotransmitters in rats and pigs has strong selectivity, because the peak area and resolution of the amino acid targets in the five separate brain areas of rats and pigs were good. In the pig samples, the content of Asp was the highest in the different brain regions. Furthermore, the results revealed that the amino acid neurotransmitters in the brain of rats were generally lower when compared to those in the brain of pigs. However, the amount of Tau in the brain of rats was higher when compared to that in the brain of pigs. These results show that, for the amino acid neurotransmitters in both pigs and rats, Asp, Glu, Gly and GABA were significantly different in the hippocampus, while Tau was not. In the cortex, Asp, Glu, Gly and GABA presented with significant differences, while Tau did not. In the striatum, Asp, Gly, Tau and GABA presented with significant differences, but there was no significant difference in Glu. In the cerebellum, there were significant differences in Asp, Gly, Glu and GABA, while there was no significant difference in Tau. In the brain stem, Asp, Glu, Gly and GABA markedly varied, while Tau did not. Interestingly, it was observed that the excitatory and inhibitory amino acid neurotransmitters balanced each other in each region of the animal brain, and no single kind of amino acid neurotransmitter dominated. Animal amino acid neurotransmitters are regulated by these two types, which maintain the dynamic balance of the body under normal physiology.

[Figure 2 legend: Levels of amino acids in the five different brain regions in pigs and rats. The data are expressed as mean ± standard error of the mean (SEM, n = 6). * p < 0.05, ** p < 0.01, **** p < 0.0001, pig vs. rat.]
Comparison of the contents of amino acid neurotransmitters between pigs and rats
Earlier studies on the hippocampus of the rat brain revealed that it has the highest concentration of Glu, followed by Asp. 16 However, this result differs slightly from what was found in the present study: the Tau content was the highest in the hippocampus, cortex, striatum and cerebellum, and the glutamic acid content was the highest in the brain stem. The reason for this difference may lie in differences in processing conditions, such as sample derivatization. In previous studies, the highest concentrations of Glu were detected in the hippocampus and cortex of pigs, followed by Asp. 17 This finding differs from the present results, in which the highest concentration was Asp, followed by Glu. The reason for this difference may be that the sensitivity to different amino acids varies between methods. However, some results remained consistent. For example, in the cortex of pigs, aside from Glu and Asp, which had the highest contents, the ranking of the other three amino acids in descending order was GABA, Gly and Tau.
Future directions
A number of studies have revealed that an imbalance of amino acids in the brain is associated with a variety of neurological diseases, such as Alzheimer's disease, anxiety and depression. On this basis, a simple brain amino acid detection method was established to provide technical support for the monitoring of these diseases, with the hope of helping in the prevention of neurodegenerative diseases. Future research will focus on further exploration and refinement of the methodology, providing technical support for subsequent scientific work.
Conclusions
The present study offers a quick and low-cost method for identifying amino acid neurotransmitters in brain tissues. This technique has been used to identify amino acid neurotransmitters in five different regions of the brain in pigs and rats. These findings demonstrate the suitability of this approach for identifying amino acid neurotransmitters in animal brain tissues.
The Existence of Multi-vortices for a Generalized Self-dual Chern-Simons Model
In this paper we establish the existence of multi-vortices for a generalized self-dual Chern-Simons model. Doubly periodic vortices and topological and non-topological vortex solutions are constructed for this model. For the existence of doubly periodic vortex solutions, we establish an explicit necessary and sufficient condition. It is difficult to obtain topological multi-vortex solutions due to the non-canonical structure of the equations. We overcome this difficulty by constructing a suitable sub-solution for the reduced equation. This technique may be applied to problems with similar structures. For the existence of non-topological solutions we use a shooting argument.
Introduction
In mathematical physics, static solutions to gauge field equations with broken symmetry in two space dimensions are often called vortices. Magnetic vortices play important roles in many areas of theoretical physics including superconductivity [1,31,39], electroweak theory [2][3][4][5], and cosmology [33,43,70]. The first and also the best-known rigorous mathematical construction of magnetic vortices was due to Taubes [39,68,69], regarding the existence and uniqueness of static solutions of the Abelian Higgs model or the Ginzburg-Landau model [31]. Since then there has been much mathematical work on the existence and properties of such vortices. See, for example, the references [8, 10-12, 29, 46-48, 50, 53, 54, 58-60, 65, 71, 74]. It is also natural to consider dyon-like vortices, often referred to as electrically charged magnetic vortices, carrying both magnetic and electric charges. Such dually charged vortices are very useful in several issues in theoretical physics such as high-temperature superconductivity [42,49], Bose-Einstein condensates [36,41], optics [13], and the quantum Hall effect [57].
It is now well known that there are no finite-energy dually charged vortices in two space dimensions for the classical Yang-Mills-Higgs equations, Abelian or non-Abelian. This is known as the Julia-Zee theorem.
Generalized Chern-Simons vortices
In this section we derive the generalized self-dual Chern-Simons equations; in [9] only the radial case was considered. We adopt the notation in [37]. The (2 + 1)-dimensional Minkowski space metric tensor g_µν is diag(1, −1, −1), which is used to raise and lower indices. The Lagrangian action density of the Chern-Simons-Higgs theory is given by the expression

L = (κ/4) ε^{αβγ} A_α F_{βγ} + D_µφ (D^µφ)* − V(|φ|²),

where D_µ = ∂_µ − iA_µ is the gauge-covariant derivative, A_µ (µ = 0, 1, 2) is a 3-vector field called the Abelian gauge field, φ is a complex scalar field called the Higgs field, F_{αβ} = ∂_α A_β − ∂_β A_α is the induced electromagnetic field, α, β, µ, ν = 0, 1, 2, κ > 0 is a constant referred to as the Chern-Simons coupling parameter, ε^{αβγ} is the Levi-Civita totally skew-symmetric tensor with ε^{012} = 1, V is the Higgs potential function, and the summation convention over repeated indices is observed.
Integrating over the doubly periodic domain Ω or the full plane R², we obtain the energy identity (2.10), and hence a lower bound for the energy. We then see from (2.10) that this lower bound is attained if and only if (φ, A) satisfies the self-dual system (2.11)-(2.12) or the anti-self-dual system (2.13)-(2.14). It is easy to check that if (φ, A) is a solution of the system (2.11)-(2.12), then (φ, −A) is a solution of (2.13)-(2.14). In addition, in view of (2.5), any solution of (2.11)-(2.12) or (2.13)-(2.14) is also a solution of (2.3)-(2.4). Consequently, in the sequel we only consider (2.11)-(2.12).
To formulate our problem more properly, as in [16,39,74] we can see that the zeros of φ are isolated with integer multiplicities. These zeros are often referred to as vortices. Let the zeros of φ be p_1, p_2, . . . , p_m with multiplicities n_1, n_2, . . . , n_m, respectively. Then N = Σ_{i=1}^m n_i gives the winding number of the solution and the total vortex number. We aim to look for N-vortex solutions of (2.11)-(2.12) such that φ has m zeros p_1, p_2, . . . , p_m with multiplicities n_1, n_2, . . . , n_m, respectively, and Σ_{i=1}^m n_i = N. For the generalized Chern-Simons equations (2.11)-(2.12), we are interested in three situations. In the first situation the equations (2.11)-(2.12) will be studied over a doubly periodic domain Ω such that the field configurations are subject to the 't Hooft boundary condition [35,73,74], under which periodicity is achieved modulo gauge transformations. In the second and third situations the equations are studied over the full plane R² under the topological condition (2.8) and the non-topological condition (2.9), respectively.
The main results of this paper read as follows.
Theorem 2.1 (Existence of Doubly Periodic Vortices) Let p_1, p_2, . . . , p_m ∈ Ω, n_1, n_2, . . . , n_m be some positive integers and N = Σ_{i=1}^m n_i. There exists a critical value κ_c > 0 of the coupling parameter such that the self-dual equations (2.11)-(2.12) admit a solution (φ, A) for which p_1, p_2, . . . , p_m are zeros of φ with multiplicities n_1, n_2, . . . , n_m, if and only if 0 < κ ≤ κ_c. When 0 < κ ≤ κ_c, the solution (φ, A) also satisfies the following properties. The energy, magnetic flux, and electric charge are given by (2.15). The solution (φ, A) can be chosen such that the magnitude |φ| of the Higgs field has the largest possible value.
Let the prescribed data be denoted by S = {p 1 , p 2 , . . . p m ; n 1 , n 2 , . . . , n m }, where n i may be zero for i = 1, . . . , m, and denote the dependence of κ c on S by κ c (S).
Then κ_c is a decreasing function of S in the sense that κ_c(S′) ≤ κ_c(S) whenever the local vortex numbers of S′ dominate those of S, that is, whenever n_i ≤ n′_i for i = 1, . . . , m.

Theorem 2.2 (Multiple Existence of Doubly Periodic Vortices) Let p_1, p_2, . . . , p_m ∈ Ω, n_1, n_2, . . . , n_m be some positive integers, N = Σ_{i=1}^m n_i, and let κ_c be given in Theorem 2.1. If 0 < κ < κ_c, then, in addition to the maximal solution (φ, A) given in Theorem 2.1, the self-dual equations (2.11)-(2.12) have a second solution (φ̃, Ã) satisfying (2.15) and for which p_1, p_2, . . . , p_m are the zeros of φ̃ with multiplicities n_1, n_2, . . . , n_m.

Theorem 2.3 (Topological Solution) Let p_1, p_2, . . . , p_m ∈ R², n_1, n_2, . . . , n_m be some positive integers and N = Σ_{i=1}^m n_i. The self-dual equations (2.11)-(2.12) admit a topological solution (φ, A) such that the zeros of φ are exactly p_1, p_2, . . . , p_m with corresponding multiplicities n_1, n_2, . . . , n_m. Moreover, the energy, magnetic flux, and the charges are all quantized, as given in (2.17). The solution is maximal in the sense that the Higgs field φ has the largest possible magnitude among all the solutions with the same zero distribution and local vortex charges in the full plane.
Theorem 2.4 (Radially Symmetric Topological Solution) For any point x̄ ∈ R² and a given integer N ≥ 0, the self-dual equations (2.11)-(2.12) admit a unique topological solution (φ, A), which is radially symmetric about the point x̄, such that x̄ is the zero of φ with multiplicity N. Moreover, the energy, magnetic flux, and the charges are all quantized, given by (2.17).
Theorem 2.5 (Radially Symmetric Non-topological Solution) For any point x̄ ∈ R², a given integer N ≥ 0, and any β > 2N + 4, the self-dual equations (2.11)-(2.12) admit a non-topological solution (φ, A), which is radially symmetric about the point x̄, such that x̄ is the zero of φ with multiplicity N and |φ(x)|² realizes the prescribed decay rate O(|x|^{−β}) as |x| → ∞.

Existence of doubly periodic vortices

Suppose that (φ, A) solves the self-dual equation (2.11). Then from (2.11) we can express the gauge field A in terms of φ; denote this relation by (3.1). Inserting (3.1) into (2.12) gives rise to the reduced equation

Δ ln |φ|² = λ |φ|² (|φ|² − 1)^5   (3.2)

away from the zeros of φ, where we write λ ≡ 12/κ² throughout this paper. Counting all the multiplicities of the zeros of φ, we write the prescribed zero set as Z(φ) = {p_1, . . . , p_N}. Let |φ|² = e^u. Then the generalized self-dual Chern-Simons equations (2.11)-(2.12) are transformed into the following scalar equation:

Δu = λ e^u (e^u − 1)^5 + 4π Σ_{j=1}^N δ_{p_j} in Ω,   (3.3)

where δ_p is the Dirac distribution centred at p ∈ Ω. Conversely, if u is a solution of (3.3), we can obtain a solution of (2.11)-(2.12) by reversing the transformation. Then it is sufficient to solve (3.3). Let u_0 be a solution of the equation (see [7])

Δu_0 = 4π Σ_{j=1}^N δ_{p_j} − 4πN/|Ω| in Ω,

and set u = u_0 + v, so that v satisfies

Δv = λ e^{u_0+v} (e^{u_0+v} − 1)^5 + 4πN/|Ω| in Ω.   (3.5)

It is easy to check that the function f(t) = e^t (e^t − 1)^5 (t ∈ R) has a unique minimal value −5^5/6^6. Then, if v is a solution of (3.5), we have

Δv ≥ −λ 5^5/6^6 + 4πN/|Ω|.   (3.6)

Integrating (3.6) over Ω, we have

λ ≥ (6^6/5^5) · 4πN/|Ω|,   (3.7)

which is a necessary condition for the existence of solutions to (3.3). As in [16] or Chapter 5 in [74] we can use a super- and sub-solution method to establish the existence results for (3.3).
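The claimed minimum of f follows from one differentiation:

```latex
f'(t) = e^{t}\left(e^{t}-1\right)^{5} + 5e^{2t}\left(e^{t}-1\right)^{4}
      = e^{t}\left(e^{t}-1\right)^{4}\left(6e^{t}-1\right),
\qquad
\min_{t\in\mathbb{R}} f(t) = f(-\ln 6)
      = \frac{1}{6}\Big(\frac{1}{6}-1\Big)^{5}
      = -\frac{5^{5}}{6^{6}} .
```

Indeed, f decreases on (−∞, −ln 6) and is nondecreasing thereafter (with a degenerate critical point at t = 0), so the global minimum is attained where e^t = 1/6.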
To solve (3.5), we introduce the following iterative scheme:

(Δ − K) v_n = λ e^{u_0+v_{n−1}} (e^{u_0+v_{n−1}} − 1)^5 + 4πN/|Ω| − K v_{n−1}, n = 1, 2, . . . , v_0 = −u_0,   (3.8)

where K > 0 is a suitably large constant to be determined. By the maximum principle, the scheme produces a monotone sequence

−u_0 = v_0 ≥ v_1 ≥ · · · ≥ v_n ≥ · · · ≥ v   (3.9)

for any sub-solution v of (3.5). Therefore, if (3.5) has a sub-solution, the sequence {v_n} converges to a solution of (3.5) in the space C^k(Ω) for any k ≥ 0, and such a solution is the maximal solution of the equation.
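As a purely numerical illustration of this monotone scheme (not part of the argument), one can run (3.8) on a discrete torus, inverting (Δ − K) by FFT. The smoothed background u_0 below merely mimics a single vortex, since the genuine u_0 has logarithmic singularities; all parameter values are ad hoc.

```python
# Rough numerical illustration of scheme (3.8) on the unit torus via FFT.
import numpy as np

n, L, N, lam = 128, 1.0, 1, 5000.0           # grid, torus size, vortex number
K = 6 * lam                                  # iteration constant, K >= 6*lam
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = (X - 0.5) ** 2 + (Y - 0.5) ** 2
u0 = N * np.log(r2 + 1e-3)                   # smoothed stand-in for u0 (~ 2N ln r)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
symbol = -(KX ** 2 + KY ** 2) - K            # Fourier symbol of (Laplacian - K)

def F(v):                                    # right-hand side of (3.5)
    e = np.exp(np.minimum(u0 + v, 0.0))      # the relevant solutions have u0+v < 0
    return lam * e * (e - 1.0) ** 5 + 4.0 * np.pi * N / L ** 2

v = -u0.copy()                               # v_0 = -u0, as in (3.8)
for _ in range(200):
    rhs = F(v) - K * v
    v = np.real(np.fft.ifft2(np.fft.fft2(rhs) / symbol))

print("max(u0 + v) =", float(np.max(u0 + v)))  # should stay <= 0, mirroring (3.9)
```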
In what follows we just need to construct a sub-solution of (3.5). Indeed, we have the following lemma.

Lemma 3.2 When λ > 0 is sufficiently large, the equation (3.5) admits a sub-solution.

Proof. Choose ε > 0 sufficiently small such that the balls B(p_j, ε), j = 1, . . . , m, are mutually disjoint, and define a function g_ε, as in (3.11), which takes suitable constant values on each B(p_j, ε) and makes a smooth connection elsewhere. It is easy to see that ∫_Ω g_ε dx = 0. Then we see that the equation

Δw = g_ε in Ω   (3.12)

admits a unique solution up to an additive constant. First, it follows from (3.11) that (3.13) holds for x ∈ B(p_j, ε) if ε is small enough. In the sequel we fix ε such that (3.13) is valid. Next, we choose a solution of (3.12), say w_0, with a suitable normalization. Hence, for any λ > 0, there are constants 0 < µ_0 < µ_1 controlling e^{u_0+w_0}(e^{u_0+w_0} − 1). As a consequence, we can choose λ > 0 sufficiently large to fulfill (3.14) in the entire Ω. Thus, w_0 is a sub-solution of (3.5). The proof of Lemma 3.2 is complete.

Now we seek the critical value of the coupling parameter. We establish the following lemma.
Lemma 3.3
There is a critical value of λ, say λ_c, satisfying

λ_c ≥ (6^6/5^5) · 4πN/|Ω|,   (3.15)

such that, for λ > λ_c, the equation (3.5) has a solution, while for λ < λ_c, the equation (3.5) has no solution.
Proof.
Assume that v is a solution of (3.5). Then u = u_0 + v satisfies (3.3) and is negative near the points x = p_j, j = 1, · · · , N. Applying the maximum principle away from the points x = p_j, j = 1, · · · , N, we see that u < 0 throughout Ω. Integrating (3.5) over Ω as in the derivation of (3.7), we obtain λ ≥ (6^6/5^5) · 4πN/|Ω| for any λ > λ_c. Taking the limit λ → λ_c, we obtain (3.15). Then Lemma 3.3 follows. Now we need to consider the critical case λ = λ_c. We use the method of [66] to deal with this case.
We first make a simple observation: the maximal solutions {v_λ | λ > λ_c} of (3.5) form a monotone family in the sense that v_{λ_1} > v_{λ_2} whenever λ_1 > λ_2 > λ_c; this follows from the construction of the maximal solutions through the monotone scheme (3.8). Set

X = { v ∈ W^{1,2}(Ω) : ∫_Ω v dx = 0 }.

Then X is a closed subspace of W^{1,2}(Ω) and W^{1,2}(Ω) = R ⊕ X. In other words, for any v ∈ W^{1,2}(Ω), there exists a unique number c ∈ R and a unique v′ ∈ X such that v = c + v′. In what follows, we will use the Trudinger-Moser inequality (see [7])

∫_Ω e^v dx ≤ C exp( (1/(16π)) ‖∇v‖_2² ), v ∈ X,   (3.16)

where C is a positive constant depending only on Ω.
Lemma 3.4 Let v_λ = c_λ + v′_λ be a solution of (3.5), decomposed as above with c_λ ∈ R and v′_λ ∈ X. Then

‖∇v′_λ‖_2 ≤ C,

where C is a positive constant depending only on the size of the torus Ω. Furthermore, {c_λ} satisfies a two-sided estimate, uniform for λ in bounded intervals.

Proof. Multiplying (3.5) by v′_λ, integrating over Ω, and using the Schwarz inequality and the Poincaré inequality, we obtain the gradient bound. Noting the property u_0 + v_λ = u_0 + c_λ + v′_λ < 0, we obtain the upper bound on c_λ. From the equation (3.5), integrating over Ω and estimating the resulting exponential terms by the Trudinger-Moser inequality (3.16), we obtain a lower bound on c_λ. For the critical case we have the following result.
Lemma 3.5 The set of λ for which the equation (3.5) has a solution is a closed interval. That is to say, at λ = λ c (3.5) has a solution as well.
Proof. For λ_c < λ < λ_c + 1 (say), by Lemma 3.4 the set {v_λ} is bounded in W^{1,2}(Ω). Noting that {v_λ} is monotone with respect to λ, we conclude that there exists v* ∈ W^{1,2}(Ω) such that v_λ → v* weakly in W^{1,2}(Ω) as λ → λ_c. Therefore v_λ → v* strongly in L^p(Ω) for any p ≥ 1 as λ → λ_c. Using the Trudinger-Moser inequality (3.16) again, we obtain e^{v_λ} → e^{v*} strongly in L^p(Ω) for any p ≥ 1 as λ → λ_c. Using this result in (3.5) and the L² estimates for elliptic equations, we have v* ∈ W^{2,2}(Ω) and v_λ → v* strongly in W^{2,2}(Ω) as λ → λ_c. In particular, taking the limit λ → λ_c in (3.5), we see that v* is a solution of (3.5) for λ = λ_c. Then the lemma follows.

Denote P = {p_1, · · · , p_m; n_1, n_2, · · · , n_m} and denote the dependence of λ_c on P by λ_c(P). For integers n′_j with n_j ≤ n′_j, j = 1, . . . , m, write P′ = {p_1, · · · , p_m; n′_1, · · · , n′_m} and consider the equation

Δu = λ e^u (e^u − 1)^5 + 4π Σ_{j=1}^m n′_j δ_{p_j} in Ω.   (3.22)

Lemma 3.6 If n_j ≤ n′_j for j = 1, . . . , m, then λ_c(P) ≤ λ_c(P′).

Proof. It is sufficient to show that, if λ > λ_c(P′), then λ ≥ λ_c(P). Let u′ be a solution of (3.22), and let u_0 be the background function associated with the data P as above. Setting v = u′ − u_0, we see in particular that v is a sub-solution of (3.5) in the sense of distributions and that (3.9) holds pointwise. It is easy to check that the singularity of v is at most of the type ln |x − p_j|. Hence, the inequality (3.9) still yields the convergence of the sequence {v_n} to a solution of (3.5) in any C^k norm. Indeed, by (3.9) we see that {v_n} converges almost everywhere and is bounded in the L² norm. Therefore, the sequence converges in L². Analogously, the right-hand side of (3.8) also converges in L². Applying the standard L² estimate, we see that the sequence converges in W^{2,2}(Ω) to a strong solution of (3.5). Thus, a classical solution is obtained. Using a bootstrap argument, we obtain the convergence in the C^k norm. This proves λ ≥ λ_c(P). Therefore, λ_c(P) ≤ λ_c(P′).
From the above discussion we complete the proof of Theorem 2.1. Now we carry out the proof of Theorem 2.2. It is easy to see that (3.5) is the Euler-Lagrange equation of the following functional:

I_λ(v) = ∫_Ω { (1/2)|∇v|² + (λ/6)(e^{u_0+v} − 1)^6 + (4πN/|Ω|) v } dx.   (3.23)

Lemma 3.7 For every λ > λ_c, the problem (3.5) admits a solution v_λ ∈ W^{1,2}(Ω) which is a local minimum of the functional I_λ(v) defined by (3.23).
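The Euler-Lagrange property of (3.23) can be checked directly: for any test function ϕ ∈ W^{1,2}(Ω),

```latex
\frac{d}{dt}\Big|_{t=0} I_{\lambda}(v+t\varphi)
 = \int_{\Omega}\Big( \nabla v\cdot\nabla\varphi
 + \lambda e^{u_{0}+v}\big(e^{u_{0}+v}-1\big)^{5}\,\varphi
 + \frac{4\pi N}{|\Omega|}\,\varphi \Big)\,dx ,
```

since the derivative of (λ/6)(e^{u_0+v} − 1)^6 with respect to v is λ e^{u_0+v}(e^{u_0+v} − 1)^5. A critical point of I_λ is therefore a weak solution of (3.5).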
Proof. We apply the method in [66]. Since u_0 + v* < 0, we see that v* is a sub-solution of (3.5) for any λ > λ_c. Define

V = { v ∈ W^{1,2}(Ω) : v ≥ v* a.e. in Ω }.   (3.24)

Then the functional I_λ is bounded from below on V, and we can study the minimization problem

min { I_λ(v) : v ∈ V }.   (3.25)

We will show that the problem (3.25) admits a solution. Let {v_n} be a minimizing sequence of (3.25). Then, by the decomposition formula, we see that {‖∇v_n‖_2} is bounded; moreover, the definition of V gives a lower bound on {c_n}, and the boundedness of {I_λ(v_n)} gives an upper bound on {c_n}. Then {v_n} is a bounded sequence in W^{1,2}(Ω). Without loss of generality, we may assume that {v_n} converges weakly to an element v ∈ W^{1,2}(Ω) as n → ∞. Hence, v is a solution to the problem (3.25). Using Lemma 5.6.3 in [74] or the appendix of [66], we conclude that v is a solution of the equation (3.5) and v ≥ v* in Ω. By the maximum principle we obtain the strict inequality v > v* in Ω.
Next we prove that v is a local minimum of the functional (3.23) in W^{1,2}(Ω). We use the approach of Brezis and Nirenberg [15], as in Tarantello [66] and Yang [74], and argue by contradiction. Suppose otherwise that v is not a local minimum of I_λ(v) in W^{1,2}(Ω). Then, for any integer n ≥ 1, the infimum of I_λ over the sphere of radius 1/n centred at v lies below I_λ(v); this is (3.26). Similarly to the above, for any n ≥ 1, we can conclude that the infimum in (3.26) is achieved at a point v_n ∈ W^{1,2}(Ω). Then, by the principle of Lagrange multipliers, there exists a number µ_n ≤ 0 such that the corresponding Euler-Lagrange equation (3.27) holds. We rewrite (3.27) in a form whose right-hand side is controlled by ‖v_n − v‖_{W^{1,2}(Ω)}. Noting that ‖v_n − v‖_{W^{1,2}(Ω)} → 0 as n → ∞ and using the Trudinger-Moser inequality (3.16), we see that the right-hand side of (3.27) converges to 0 as n → ∞. Then, using the elliptic L² estimate, we have v_n → v in W^{2,2}(Ω) as n → ∞. By the embedding theorem we see that v_n → v in C^α(Ω) for any 0 < α < 1.
Since Ω is compact and v > v* in Ω, we have v_n > v* for n sufficiently large. This implies v_n ∈ V for n sufficiently large, which leads to I_λ(v_n) ≥ I_λ(v). Then we obtain a contradiction, and the conclusion follows.
In the sequel we show that the functional I_λ(v) satisfies the Palais-Smale condition in W^{1,2}(Ω).
Lemma 3.8 Any sequence {v_n} ⊂ W^{1,2}(Ω) verifying I_λ(v_n) → α and ‖dI_λ(v_n)‖_d → 0 as n → ∞ admits a convergent subsequence, where we use ‖·‖_d to denote the norm of the dual space of W^{1,2}(Ω).
Proof. By (3.28) we have the weak-form estimate (3.30), with ε_n → 0 as n → ∞. Setting ϕ = 1 in (3.30), we obtain

| λ ∫_Ω e^{u_0+v_n}(e^{u_0+v_n} − 1)^5 dx + 4πN | ≤ ε_n |Ω|,

which implies (3.32). Then (3.35) follows as n → ∞, and from (3.35) it follows that c_n is bounded from above. Since I_λ(v_n) → α as n → ∞, we may assume that α − 1 < I_λ(v_n) < α + 1 for all n, which leads to (3.36). Therefore it follows from (3.32) and (3.36) that (3.37) holds. Now we aim to get a lower bound for c_n. Letting ϕ = v′_n in (3.30), we obtain (3.38), which is equivalent to an estimate of ‖∇v′_n‖_2² by lower-order terms, among them

C ∫_Ω e^{u_0+v_n}( e^{4(u_0+v_n)} + e^{3(u_0+v_n)} + e^{2(u_0+v_n)} + e^{u_0+v_n} + 1 ) |v′_n| dx.   (3.39)

We now estimate the right-hand side terms in (3.39). Using the Hölder inequality and the Poincaré inequality, together with (3.32) and the Sobolev embedding theorem, each term of the form ∫_Ω e^{k(u_0+v_n)} |v′_n| dx can be bounded by C‖∇v′_n‖_2, and all the other terms on the right-hand side of (3.39) can be estimated in the same way. Then we obtain (3.40), from which it follows that

‖∇v′_n‖_2 ≤ C.   (3.41)

Inserting (3.41) into (3.37), we see that c_n is bounded from below. Then we can derive that {v_n} is uniformly bounded in W^{1,2}(Ω). Without loss of generality, we may assume that there exists an element v ∈ W^{1,2}(Ω) such that v_n → v weakly in W^{1,2}(Ω) and strongly in L^p(Ω) for any p ≥ 1.
Setting n → ∞ in (3.30), we obtain the limiting identity (3.42). Then v is a critical point of the functional I_λ.
Next we show that v n → v strongly in W 1,2 (Ω) as n → ∞.
Letting ϕ = v_n − v in (3.30) and (3.42) and subtracting the resulting expressions, we obtain (3.43), which bounds ‖∇(v_n − v)‖_2² by terms that vanish in the limit. Since the right-hand side of (3.43) tends to 0 as n → ∞, we have ∇v_n → ∇v strongly in L²(Ω).
Then we can obtain that v n → v strongly in W 1,2 (Ω) as n → ∞. Then the proof of Lemma 3.8 is complete.
Next we establish the existence of second solutions of the equation (3.5).
Let v_λ be the local minimum of I_λ obtained in Lemma 3.7. Then there exists a positive constant δ > 0 such that

I_λ(v) ≥ I_λ(v_λ) for all v ∈ W^{1,2}(Ω) with ‖v − v_λ‖_{W^{1,2}(Ω)} ≤ δ.

Here we assume that v_λ is a strict local minimum, because otherwise we would already have additional solutions. Therefore we can assume that there exists a positive constant δ_0 > 0 such that

inf { I_λ(v) : ‖v − v_λ‖_{W^{1,2}(Ω)} = δ_0 } > I_λ(v_λ).

We will show that the functional I_λ possesses a "mountain pass" structure. Indeed, since u_0 + v_λ < 0, the term (λ/6)(e^{u_0+v_λ−c} − 1)^6 in I_λ(v_λ − c) remains bounded as c → +∞, while the linear term (4πN/|Ω|) ∫_Ω (v_λ − c) dx tends to −∞; hence I_λ(v_λ − c) → −∞. Then we can choose c_0 > δ_0 sufficiently large such that I_λ(v_λ − c_0) < I_λ(v_λ). Since I_λ satisfies the Palais-Smale condition by Lemma 3.8, the mountain pass theorem yields a critical point of I_λ different from the local minimum v_λ, which gives the second solution stated in Theorem 2.2.
Existence of topological solutions
In this section we establish the existence of topological solutions of the generalized self-dual Chern-Simons equations (2.11)-(2.12); that is, we prove Theorem 2.3. We will use a super- and sub-solution method to construct solutions. The key step is to find a suitable sub-solution for the reduced equation; this technique may be applied to problems with similar structures. As in Section 3, let |φ|² = e^u, let the prescribed zeros of φ be p_1, . . . , p_m with multiplicities n_1, . . . , n_m, respectively, and let N = Σ_{s=1}^m n_s. Then we arrive at the following governing equation over the full plane:

Δu = λ e^u (e^u − 1)^5 + 4π Σ_{s=1}^m n_s δ_{p_s} in R², u → 0 as |x| → ∞.   (4.1)

Choosing a background function u_0 with Δu_0 = 4π Σ_{s=1}^m n_s δ_{p_s} − g, where g is a suitable smooth function, and setting v = u − u_0, the problem takes the form

Δv = λ e^{u_0+v} (e^{u_0+v} − 1)^5 + g in R²,   (4.5)

v → 0 as |x| → ∞.   (4.6)

It is easy to check that v^* = −u_0 is a super-solution to the problem (4.5)-(4.6).
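The super-solution property can be seen in one line: with the background equation for u_0 as above,

```latex
u_{0}+v^{*}\equiv 0 \;\Longrightarrow\;
 e^{u_{0}+v^{*}}\big(e^{u_{0}+v^{*}}-1\big)^{5}=0,\qquad
\Delta v^{*} = -\Delta u_{0} = g - 4\pi\sum_{s=1}^{m} n_{s}\,\delta_{p_{s}} \le g ,
```

so Δv^* ≤ λ e^{u_0+v^*}(e^{u_0+v^*} − 1)^5 + g in the sense of distributions.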
Next we construct a sub-solution to the problem (4.5)-(4.6); this construction is the crucial part of the proof. Let v_* = u_* − a − u_0. From (4.9) we have

Δv_* ≥ λe^{u_0+v_*}(e^{u_0+v_*} − 1)^5 + g  (4.10)

and v_* satisfies v_* → −a as |x| → ∞. Hence v_* is a sub-solution to the problem (4.5)-(4.6), and the lemma follows. At this point we can establish a solution to the problem (4.5)-(4.6) between the super-solution v^* and the sub-solution v_*.
Let B_r be a ball centered at the origin with radius r in ℝ², where r > |p_s|, s = 1, . . . , m. Consider the boundary value problem

Δv = λe^{u_0+v}(e^{u_0+v} − 1)^5 + g in B_r  (4.11)

with the boundary condition (4.12). We first prove that the problem (4.11)-(4.12) has a unique solution v satisfying v_* < v < v^*. It is easy to see that v^* = −u_0 and v_* = u_* − a − u_0 are a pair of ordered super- and sub-solutions to the problem (4.11)-(4.12).
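The iteration scheme (4.13)-(4.14) referred to below is elided in this copy; a plausible form, consistent with the requirement K ≥ 6λ, is

v_0 = v^*;  (Δ − K)v_k = f(v_{k−1}) − K v_{k−1} in B_r, k ≥ 1, with the Dirichlet data of (4.12) on ∂B_r,

where f(v) = λe^{u_0+v}(e^{u_0+v} − 1)^5 + g abbreviates the right-hand side of (4.11). The point of the choice K ≥ 6λ is that |∂_v f| ≤ 6λ whenever u_0 + v ≤ 0, so v ↦ f(v) − Kv is non-increasing there and the comparison arguments of the proof go through.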
We use the monotone iterative method. Let K > 0 be a constant satisfying K ≥ 6λ. We first introduce the iteration sequence (4.13)-(4.14) on B_r (a plausible form is sketched above); its monotonicity is the content of Lemma 4.2. Proof. We prove this lemma by induction.
It is easy to see that the right-hand side of (4.14) belongs to L^p(B_r) for p > 2. Then, by the standard theory, we have v_1 ∈ C^{1,α}(B_r) (0 < α < 1). Near the set {p_1, . . . , p_m} the required inequality can be checked directly; then, by the maximum principle, we have v_1 < v^* in B_r. Noting that v_* < v^*, we obtain the corresponding differential inequality for v_* − v_1; here and in what follows we use ξ to denote an intermediate quantity arising from the mean value theorem. Hence, by the maximum principle again, we have v_* < v_1 in B_r.
Suppose that we have already obtained the inequalities v_* < v_k and v_k < v_{k−1} for some k ≥ 1. Then, by (4.13), we obtain the analogous differential inequality for v_{k+1} − v_k; therefore v_{k+1} < v_k in B_r by the maximum principle. Similarly, we obtain v_* < v_{k+1} in B_r. Then we get (4.16), and Lemma 4.2 follows.
Since v_* is a bounded function, the pointwise limit (4.18) exists. Letting n → ∞ in (4.13), by the elliptic estimates and the embedding theorem we see that the limit (4.18) is achieved in any strong sense, and v is a smooth solution of (4.11)-(4.12). It is easy to see that the solution v is unique and satisfies v_* < v < v^*. Now we denote by v^{(n)} the solution of (4.11)-(4.12) with r = n (n large enough that n > |p_s|, s = 1, . . . , m). By the construction of v^{(n)}, we have v^{(n+1)} ≤ v^* on ∂B_{n+1}; hence v^{(n+1)} is a sub-solution of (4.11)-(4.12) with r = n. Therefore, from Lemma 4.2, we have v^{(n+1)} ≤ v^{(n)} in B_n for any n. Then, for each fixed n_0 ≥ 1, we have the monotone sequence v^{(n_0)} > v^{(n_0+1)} > · · · > v^{(n)} > v^{(n+1)} > · · · > v_* in B_{n_0}. Hence the sequence {v^{(n)}} converges to a solution, say v, of the equation (4.5) over the full plane ℝ². By the elliptic L^p estimate, we have v ∈ W^{2,2}(ℝ²); then v(x) → 0 as |x| → ∞, which is the topological boundary condition (4.6). Thus we obtain a topological solution u of (4.1) satisfying u < 0 in ℝ². Now we show that v is maximal. Let ṽ be another solution of (4.5)-(4.6). Then ṽ satisfies

Δ(u_0 + ṽ) = λe^{u_0+ṽ}(e^{u_0+ṽ} − 1)^5 in ℝ² \ {p_1, . . . , p_m},

with u_0 + ṽ = 0 at infinity and u_0 + ṽ < 0 in a small neighborhood of {p_1, . . . , p_m}. Using the maximum principle, we see that u_0 + ṽ ≤ 0. Then, by Lemma 4.2, we obtain ṽ ≤ v, which is to say that v is maximal. Let u be the solution of (4.1) obtained above and define (φ, A) by the reconstruction formulas; then (φ, A) is a topological solution of the system (2.11)-(2.12). Hence the proof of Theorem 2.3 is complete.
Existence of radially symmetric topological solutions and nontopological solutions
In this section we establish the existence of radially symmetric topological and nontopological solutions of the generalized self-dual Chern-Simons equations (2.11)-(2.12); that is, we prove Theorems 2.4 and 2.5. We use the method developed in [20,74]. For convenience, we assume that the zeros of φ are concentrated at the origin with total multiplicity N. Let |φ|² = e^u; similarly to Section 3, we obtain the corresponding governing equation. Then the electric charge is Q = κΦ = κπ(2N + β).
Noting (5.4)-(5.5), we can compute the magnetic flux Φ = π(2N + β). Then it follows from (2.10) that the energy can be evaluated accordingly, and the proof of Theorem 2.5 is complete. It remains to prove Theorem 5.1.
Using the equation (5.10), there exists a positive constant δ_0 depending on t_0 such that the corresponding differential inequality holds; then it is easy to see that u(t) blows up at some finite time t > t_0. Hence, by Theorem 5.2, we conclude the assertion of Theorem 5.1.
In the sequel we just need to prove Theorem 5.2. Setting t = ln r, it is sufficient to prove the same result for the following problem:

u''(t) + λe^{2t} g(u(t)) = 0, −∞ < t < +∞.  (5.15)

First we establish existence for the initial value problem (5.15).
Lemma 5.1 For any a ∈ ℝ, there exists a unique solution u to the problem (5.15) such that (5.17) holds. Moreover, if u(t) is a solution of (5.15) in some interval, it can be extended to a global solution of (5.15) in ℝ which satisfies (5.17) for some a ∈ ℝ.
Proof. It is easy to check that u(t) is a solution of (5.15) if and only if u(t) satisfies the integral equation

u(t) = 2Nt + a − λ∫_{−∞}^{t} (t − s)e^{2s} g(u(s)) ds.

Noting that |g(u)| + |g'(u)| < 7, by Picard iteration with u_0 = 2Nt + a we can establish the solution of (5.15) in an interval (−∞, T]. Since g(u) is bounded, we can extend u to a solution of (5.15) in ℝ. Now we prove the uniqueness of the solution. Suppose that u_1, u_2 are two solutions of (5.15) in the interval (−∞, T]. Letting ũ = u_1 − u_2, we obtain an integral inequality for |ũ|; since

∫_{−∞}^{t} (t − s)e^{2s} ds < +∞,

a Gronwall-type argument yields ũ ≡ 0, and Lemma 5.1 follows. Now we investigate the behavior of the solutions as t → +∞. In the sequel we denote by u(t, a) the solution given by Lemma 5.1; we use ' to denote the derivative with respect to t and the subscript a to denote the derivative with respect to a. We define the parameter sets A_+, A_0 and A_−; it is easy to see that ℝ = A_+ ∪ A_0 ∪ A_−. Furthermore, we can obtain the following lemma. Proof. (1) Let a ∈ A_+ and let t_0 be the first time such that u(t, a) hits the t-axis from below. Then u(t, a) < 0 for all t ∈ (−∞, t_0). By the equation (5.15) we have u'' = −λe^{2t}g(u(t)) < 0 in (−∞, t_0); hence u'(t, a) > 0 in (−∞, t_0).
(2) By the definition of A_0, we see that the limit b ≡ lim_{t→+∞} u(t, a) exists. (4) If a > (5^5/6^6)(λ/4), then u(0, a) > 0, which says a ∈ A_+. (5) If a_0 ∈ A_−, then there exists t_0 ∈ ℝ such that u'(t_0, a_0) < 0; hence u'(t_0, a) < 0 when a is close to a_0. By (3) we have u(t, a) < 0 for all t ≤ t_0 and a close to a_0. By (5.18), we see that u cannot take a negative local minimum; hence u(t_0, a) < 0 and u'(t_0, a) < 0 imply u'(t, a) ≤ 0 for all t > t_0. Therefore u(t, a) < 0 for all t > t_0 when a is close to a_0, and we see that A_− is open.
(6) Let a < −(5^5/6^6)(λ/4). If a ∉ A_−, then, since u(t, a) cannot assume a negative local minimum, there exist constants T_1 and T_2 such that T_2 < T_1; similarly, we have T_1 − T_2 ≥ 1/(2N). Hence, by the choice of T, we arrive at a contradiction. Therefore a ∈ A_−. (7) By the assertions of (4)-(6), we can get (7).
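The shooting alternative in this lemma can be explored numerically. The sketch below assumes the nonlinearity g(u) = e^u(1 − e^u)^5, which is consistent with the sign structure of (5.15) and with the constant 5^5/6^6 in items (4) and (6) (it is the maximum of g on u ≤ 0); N, λ and the integration window are illustrative choices, and all function names are ours.

```python
# Crude numerical classification of the shooting parameter a for
#     u''(t) + lam * e^{2t} g(u(t)) = 0,   u(t) - 2N t -> a  (t -> -infty),
# with the *assumed* nonlinearity g(u) = e^u (1 - e^u)^5.
import numpy as np
from scipy.integrate import solve_ivp

N, lam = 1, 1.0

def g(u):
    return np.exp(u) * (1.0 - np.exp(u)) ** 5

def rhs(t, y):
    u, up = y
    return [up, -lam * np.exp(2.0 * t) * g(u)]

def hits_zero(t, y):           # u reaches the t-axis from below -> a in A+
    return y[0]
hits_zero.terminal = True
hits_zero.direction = 1

def classify(a, t0=-15.0, t1=25.0):
    # far to the left u ~ 2N t + a and u' ~ 2N (Lemma 5.1)
    sol = solve_ivp(rhs, (t0, t1), [2 * N * t0 + a, 2.0 * N],
                    events=hits_zero, rtol=1e-10, atol=1e-12)
    if sol.status == 1:        # stopped because u hit 0
        return 'A+'
    if np.any(sol.y[1] < 0):   # u' turns negative: u decays to -infinity
        return 'A-'
    return 'A0'                # u < 0 and u' > 0 throughout: borderline set

if __name__ == '__main__':
    for a in (-5.0, -0.5, 0.5, 5.0):
        print(f'a = {a:5.1f}  ->  {classify(a)}')
```

Since A_0 is a borderline set, a numerical scan essentially produces only 'A+' and 'A-' labels, with the threshold a_1 bracketed between them.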
To prove A_− = (−∞, a_1), it is sufficient to prove that if (b_1, b_2) ⊂ A_−, then b_1 ∈ A_−. For a ∈ A_−, let z_1(a) be the first point such that u'(z_1(a), a) = 0 and let m(a) = u(z_1(a), a) be the maximum of u(·, a) in ℝ. Noting that u''(z_1(a), a) < 0 for a ∈ A_−, by the implicit function theorem we see that z_1(a) is a differentiable function on A_−. Hence we have

dm(a)/da = u'(z_1(a), a) · dz_1(a)/da + u_a(z_1(a), a) = u_a(z_1(a), a) ≥ 0, for all a ∈ (b_1, b_2).
Then, via continuity, we obtain b_1 ∈ A_−. Step 2. We show that a_1 = a_2. For a ∈ A_0 we have u'(t, a) > 0 in ℝ, and the claim then follows from the lemma above.
Direct visualization of flow-induced conformational transitions of single actin filaments in entangled solutions
While semi-flexible polymers and fibers are an important class of materials due to their rich mechanical properties, it remains unclear how these properties relate to the microscopic conformation of the polymers. Actin filaments constitute an ideal model polymer system due to their micron-sized length and relatively high stiffness, which allow imaging at the single-filament level. Here we study the effect of entanglements on the conformational dynamics of actin filaments in shear flow. We directly measure the full three-dimensional conformation of single actin filaments, using confocal microscopy in combination with a counter-rotating cone-plate shear cell. We show that initially entangled filaments form disentangled, orientationally ordered hairpins, confined in the flow-vorticity plane. In addition, shear flow causes stretching and shear alignment of the hairpin tails, while the filament length distribution remains unchanged. These observations explain the strain-softening and shear-thinning behavior of entangled F-actin solutions, which aids the understanding of the flow behavior of complex fluids containing semi-flexible polymers.
Networks of semi-flexible polymers or fibers form the basis of many materials that we encounter in daily life. Concentrated dispersions of wormlike micelles are used to engineer the viscoelastic properties of industrial and consumer products [1], while polysaccharides and selfassembled supra-molecular structures are used in tissue engineering [2] and smart gels [3]. In biology, eukaryotic cells are mechanically supported by an internal cytoskeleton composed of protein filaments including filamentous (F-)actin. These filaments can form striking non-equilibrium patterns when they are subjected to cytoplasmic flows within plants [4] and animal embryos [5] generated by molecular motor activity. The basis of understanding these phenomena is knowledge of the conformational response of the stiff polymers constituting the materials to an applied flow.
Here, we directly visualize the full three-dimensional (3D) contour of labeled F-actin subjected to shear flow, in order to resolve the microscopic basis of the macroscopic non-Newtonian flow response. Our system lies in between the limiting cases of permanently cross-linked networks, where filaments do not relax, and of dilute non-interacting polymers, where filaments relax freely. Permanently cross-linked networks, which represent a model for cytoskeletal networks, show remarkable viscoelastic properties such as elasticity at low filament density and strong strain-stiffening behavior, where the stress increases with increasing strain [6-8]. Theoretical models can capture this behavior on the basis of microscopic properties of the filaments, such as the bending rigidity, length distribution, and cross-link density [9].
The rheology of F-actin solutions in the absence of cross-links [10-14] is comparatively poorly understood. The complex behavior of such solutions is due to entanglements between the filaments that form at concentrations above the overlap density, where diffusion takes place by reptation within tube-like confinement zones defined by the entanglements [15]. The tube itself is not a rigid object, but rather describes a confining potential with a varying tube radius [16]. The linear viscoelastic response of such entangled F-actin solutions to small deformations that leave the tubes unaffected has been described by wormlike chain models [17,18]. When starting up shear flow, entangled F-actin solutions first display strain-stiffening, followed by strain-softening, where the stress decreases with increasing strain [13,14,19]. In addition, the viscosity of entangled F-actin decreases with increasing shear rate, known as shear-thinning [10,19,20]. These are key ingredients for the formation of flow instabilities, which often occur in concentrated semi-flexible polymers [21] such as DNA [22], wormlike micelles [23] and also F-actin [20]. Though the connection between shear-thinning and flow instabilities is fairly well understood [24,25], the microscopic mechanism of shear-thinning is still under debate, mainly because real-space information for nanoscopic systems like DNA and wormlike micelles is limited. F-actin is a particularly useful semi-flexible model polymer because the micron-sized lengths and relatively high stiffness allow imaging of the conformation at the single-filament level by fluorescence microscopy [15,26-28]. In dilute solutions, the conformation of a single filament in shear flow is governed by the interplay between the Brownian motion of the filament ends and the shear flow [26]. Due to this competition, the filaments tumble in the gradient direction, forming so-called hairpins, which contain highly curved segments between the stretched parts of the filaments. Here, the only relevant stress component is the shear stress, while the reorientational motion takes place in the flow-gradient plane. The frequency of this tumbling motion was shown to decrease when the system is entangled [31]. Since, however, stress can develop in all directions for entangled systems [22,33,34], knowledge of the 3D contour of the filament is crucial for a full understanding of the mechanical response of these systems.
We use here concentrations just below and above the concentration where a confining tube can be defined [35]. We achieve 3D in situ imaging by employing a counter-rotating cone-plate shear cell (Supplementary Fig. 1) in combination with a fast confocal microscope [36]. The advantage of this shear cell is that it induces a simple shear flow with a linear velocity gradient, while the presence of a zero-velocity plane guarantees that the filaments stay sufficiently long in the field of view to obtain the full 3D contour. This approach allows us to test predictions on the structural origin of strain-softening of entangled semi-flexible polymers [29,30]. We quantitatively measure the effect of strain on the distribution of the local curvature. Moreover, a detailed analysis of the orientational distributions of the filaments reveals that entanglements are lost during strain deformation while hairpins are formed that are confined in the flow-vorticity plane. Thus we identify the mechanism for strain-softening and shear-thinning of entangled F-actin solutions.
A. Strain-softening and shear-thinning
We subjected F-actin solutions of 0.02 and 0.15 mg/ml to shear flow, increasing the shear rate from 0.075 to 0.3 s⁻¹ in 4 steps of 5 minutes while keeping track of the total acquired strain (see Section III). The stress as a function of strain at 0.15 mg/ml is plotted in Fig. 1a. The stress decreases with increasing strain after start-up of the shear flow at the lowest shear rate; this behavior is indicative of strain-softening. After about 8 strain units the stress stays constant and the viscosity of the system can be determined. In Fig. 1b the viscosity is plotted as a function of shear rate for both concentrations. Both systems display clear shear-thinning behavior. Thus we observe both strain-softening and shear-thinning. The viscosity at 0.02 mg/ml is an order of magnitude lower than that at 0.15 mg/ml and approaches the buffer viscosity at high shear rates, exemplifying the effect of entanglement. From now on we refer to these samples as c_high = 0.15 mg/ml and c_low = 0.02 mg/ml, where the indices refer to the viscosity of the samples.
B. Global and local information as obtained from imaging
To visualize shear-induced conformational transitions of entangled individual actin filaments, we embedded trace amounts of fluorescently labeled actin filaments in the host dispersion. For imaging we used a home-built counter-rotating cone-plate cell mounted on an inverted microscope (Fig. 1c) that was equipped with a multi-pinhole confocal scanning head, allowing frame rates of 8 frames per second. We obtain the full 3D contour of, on average, 150 filaments per analyzed dimensionless strain. Fig. 1d shows several filaments, including one for which the contour is fitted. The coordinate system of this fitted contour, parametrized by r_j (j is the index of the coordinates along the contour), is given by the velocity direction (along the shear direction), the gradient direction (along the gap direction) and the vorticity direction. From r_j we can extract information about the global filament conformation, such as the filament contour length L and its end-to-end vector R_ee. Fig. 2a and b show the distribution of the ratio R_ee/L for c_high, which is a measure of the degree of stretching. This distribution does not show a significant change with strain for the short filaments (< 13 µm, Fig. 2a). For the long filaments (> 21 µm, Fig. 2b), the distribution widens towards small R_ee/L while a pronounced peak forms for R_ee/L → 1. This means that the filaments become more stretched and at the same time more bent, which is indicative of the formation of hairpins. Fig. 2c displays the length distribution at different dimensionless strain values for c_high. The distribution does not shift to lower filament lengths with increasing strain, so we conclude that strain-softening and shear-thinning are not due to rupture of actin filaments.
We will characterize the filament conformations by extracting the local curvature κ_j, tangent vector T̂_j, and binormal vector B̂_j along the contour (see Fig. 4a). To calculate these values we average over two neighboring points in the contour; thus, the contour is split up into segments of an average length of L_n = 2.36 µm (Supplementary Note 1). Fig. 4b and c display the projections of two typical examples of filaments strained at c_low (b) and c_high (c). At c_low, which corresponds to the onset of entanglements, we find filaments with highly curved segments (region I) and segments that are stretched and oriented along the flow direction (region II). At c_high, the filament has a hairpin conformation (region III), with two dangling ends which are stretched in the flow direction (region IV). Strikingly, for c_high the hairpin is confined to the flow/vorticity plane, with its binormal vectors (blue arrows) pointing in the gradient direction (Fig. 4c). This behavior is markedly different from that observed for filaments in dilute solutions [26], which tumble in the gradient direction. The behavior at c_low is intermediate.
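Since κ_j, T̂_j and B̂_j are central to all of the following analysis, a minimal sketch of how they can be computed from a tracked contour may be helpful. The paper's actual Imaris/Matlab pipeline is not reproduced here; the discrete estimators below (finite differences averaged over two neighboring points, as described above) are one standard choice, and all function names are ours.

```python
# Per-segment tangent, curvature and binormal estimators for a tracked
# 3D contour r of shape (n, 3), coordinates in micrometres.
import numpy as np

def contour_frames(r):
    """Return unit tangents T_j, curvatures kappa_j (1/um), binormals B_j."""
    dr = np.diff(r, axis=0)                       # bond vectors
    ds = np.linalg.norm(dr, axis=1)               # bond lengths
    t = dr / ds[:, None]                          # unit bond tangents
    # averaging over two neighbouring bonds gives per-vertex quantities
    T = t[:-1] + t[1:]
    T /= np.linalg.norm(T, axis=1)[:, None]
    # discrete curvature: turning angle per unit arc length
    cosang = np.clip(np.einsum('ij,ij->i', t[:-1], t[1:]), -1.0, 1.0)
    kappa = np.arccos(cosang) / (0.5 * (ds[:-1] + ds[1:]))
    # binormal: unit normal of the plane spanned by consecutive bonds
    B = np.cross(t[:-1], t[1:])
    nB = np.maximum(np.linalg.norm(B, axis=1), 1e-12)
    B = np.where(nB[:, None] > 1e-10, B / nB[:, None], 0.0)
    return T, kappa, B
```

Segments can then be binned by curvature, e.g. κ_j < 0.1 µm⁻¹ for "stretched" and κ_j > 0.2 µm⁻¹ for "bent", as in the analysis below.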
C. Filament stretching and bending in shear flow
The typical configuration of a sheared filament, as displayed in Fig. 4a, shows a hairpin, which is characterized by two stretched tails connected by a bent part. The formation of these hairpins suggests that the distribution of the curvature of the segments changes when entangled semi-flexible polymers are sheared. Stretching and bending of filaments both contribute to the free energy of a filament. Usually this is connected to R_ee, which measures the stretching of a filament [37]. R_ee, however, can be small for hairpins, which can thus be incorrectly taken to suggest that the entropic contribution to the energy is low. This is clearly a flaw, because the main part of the filament is stretched. One therefore needs to analyze the filaments at smaller length scales. We do this by using the local curvature κ_j, which is related to the scaled end-to-end vector x_j = R_j/L_n (Fig. 4a). In this analysis we only used filaments with length L > 21 µm to assure good statistics and sufficient flexibility (L/l_p > 1). On average we analyzed about 1200 segments per image stack, allowing us to obtain distributions of the curvature, P(κ_j). The distributions of κ_j are plotted in Fig. 3 for zero shear strain and high strain (γ = 215 ± 12). For both concentrations the equilibrium curvature distribution is well described by a Gaussian distribution. The distribution in equilibrium deviates from earlier studies of unsheared F-actin [38], probably due to the different sample environment and the relatively high concentrations of filaments used in this study. For the low concentration the distribution remains unchanged when straining the system. In contrast, at the high concentration we observe that the distribution takes the form of a lognormal distribution. Compared to the unsheared situation, the peak of the distribution shifts towards smaller values of κ_j, showing that the majority of segments are more stretched. In addition, the distribution has a long tail with a power-law form, showing that strain induces curvature in the system. These observations hint that at c_high energy is stored in the system, but not at c_low. This is consistent with the presence of entanglements at c_high. However, these observations do not yet explain the strain-softening behavior.

[Figure 4 caption] Right: corresponding views in the flow/gradient plane. At c_low, the filament has a highly curved segment (region I) and stretched segments that are somewhat oriented in the flow direction (region II); it shows no confinement in any plane. For c_high, the filament also has a highly curved segment (III) and a stretched segment (IV) and is confined to the flow/vorticity plane. Tick unit: µm.
D. Orientation of shear-induced hairpins
To quantify the orientations of the filaments, we collected all local tangent and binormal vectors along the filament contour, as plotted on a unit sphere in Fig. 5a,d, and calculated the corresponding two-dimensional angular orientational distribution functions, which are best fitted with a 2D Lorentzian (one common parameterization is sketched below). Here the angles φ and θ, which set up the vectors T̂_j and B̂_j, are defined as shown in Fig. 4a (right). As examples we show in Fig. 5 the distributions for c_high after straining the sample to γ = 215 ± 12, separating the orientation of the stretched segments, using all segments with κ_j < 0.1 µm⁻¹ (top), from that of the bent segments, using all segments with κ_j > 0.2 µm⁻¹ (bottom).
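The explicit Lorentzian used for the fits is not reproduced in this copy of the text; one common 2D Lorentzian parameterization, consistent with the description above, is

f(θ, φ) = f₀ + A / [1 + ((θ − θ_c)/w_θ)² + ((φ − φ_c)/w_φ)²],

with peak position (θ_c, φ_c), widths w_θ and w_φ, amplitude A and baseline f₀ as fit parameters; the functional form actually used by the authors may differ in detail.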
The stretched segments display a strong shear-alignment of the tangent: Fig. 5a,b shows that the segments all point in the flow direction. Note, however, that the distributions are slightly biaxial, i.e. not symmetric around the maximum. The binormal distribution has a low degree of order and is strongly biaxial (Fig. 5c). This can be rationalized as follows: when the tangent is well aligned, there is no clearly defined plane spanned by two sequential segments. Thus the binormal is ill-defined, though by definition it lies in the plane perpendicular to the tangent: the distribution in Fig. 5c is very sharp with respect to φ_B and very broad with respect to θ_B. In contrast, the bent segments have a pronounced uniaxial ordering of the binormal pointing in the gradient direction (Fig. 5d,f), while the distribution of the tangent is biaxial (Fig. 5e), which is to be expected for hairpins (see for example region I in Fig. 4b).
In order to quantify separately the orientational behavior of the stretched and bent parts of the filaments, we use the distribution functions f(θ, φ) to calculate the orientational order tensors

S̄_T = ∫₀^π ∫₀^{2π} dθ dφ sin(φ) f(θ_T, φ_T) T̂T̂,

and similarly S̄_B. For our purpose the traceless diagonalized form Q̄ = ½(3S̄ − I) is particularly useful, since Fig. 4 and Fig. 5b both suggest that the orientational distributions can be biaxial. Q̄ may be written in diagonal form in terms of λ_{T,B}, the orientational order parameter of the main orientation axis, and η_{T,B}, which parametrizes the biaxiality of the system. We will now discuss the behavior of these parameters for the four features that we indicated in Fig. 4b and c: segments with high curvatures of κ_j > 0.2 µm⁻¹ for c_low (I) and c_high (III), and segments with low curvatures of κ_j < 0.1 µm⁻¹ for c_low (II) and c_high (IV). The stretched segments at c_high (IV in Fig. 4) clearly display a high degree of ordering of the tangent, with λ_T → 1 (solid symbols in Fig. 6b). The ordering is uniaxial (solid symbols in Fig. 6f) and almost along the flow direction, as expected (solid symbols in Fig. 7c and f). λ_T is higher for c_high than for c_low (compare Fig. 6a and b, II and IV), which shows that entanglements enhance ordering when the sample is sheared. Interestingly, λ_T jumps immediately to a high value when strain is applied for c_high. In contrast, λ_T displays a moderate increase with increasing strain for c_low, while the corresponding eigenvector tilts towards the flow direction (solid symbols in Fig. 7a and e). This complies with measurements on wormlike micelles [39]. Thus, the entanglements not only cause stretching of parts of the filaments, as seen from Fig. 3, but also strong shear-alignment of these stretched segments along the flow direction.
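A compact sketch of this order-parameter extraction follows. Plain averaging over the detected segments stands in here for the f(θ, φ)-weighted integral, and the eigenvalue-based definitions of λ and η follow one common convention; function names are ours.

```python
# Build the second-moment tensor S from unit vectors (tangents or
# binormals), then extract the order parameter lambda and biaxiality eta
# from the traceless form Q = (3S - I)/2.
import numpy as np

def order_parameters(v):
    """v: (n, 3) array of unit vectors. Returns (lam, eta)."""
    S = np.einsum('ni,nj->ij', v, v) / len(v)   # <v v^T>
    Q = 0.5 * (3.0 * S - np.eye(3))             # traceless diagonalizable form
    w = np.sort(np.linalg.eigvalsh(Q))          # ascending eigenvalues
    lam = w[2]                                  # main-axis order parameter
    eta = w[1] - w[0]                           # biaxiality (one convention)
    return lam, eta
```

For a perfectly flow-aligned set of tangents this gives λ → 1 and η → 0, as reported for the stretched segments at c_high.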
Whilst this shear-alignment is a well-known phenomenon, the orientational behavior of segments with high curvature (I and III in Fig. 4) is a priori not obvious. The binormal is a particularly informative parameter, since it is oriented along the normal of the plane spanned by a curved segment. When there is a train of segments with similar curvature, which is the case in the bent part of a hairpin, the binormal vectors point in the same direction. This can be seen for c_high, where the blue arrows in region III of Fig. 4c (right) all point in the gradient direction. The behavior of the binormal order parameter λ_B confirms this observation: indeed, λ_B is higher for the highly curved than for the stretched segments (compare open and solid symbols in Fig. 6d). The eigenvectors belonging to λ_B point along the gradient direction (Fig. 5f and open symbols in Fig. 7d and h), and therefore so does the normal of the plane spanned by the highly curved segments. The fact that the binormal is well defined for κ_j > 0.2 µm⁻¹ implies that the ordering of the tangent is low and biaxial. This is indeed confirmed by Fig. 6f, where the biaxiality of the tangent η_T is plotted: for c_high, η_T increases with strain when κ > 0.2 µm⁻¹. We therefore conclude that the plane spanned by the bent part of the hairpin, which we labeled III in Fig. 4c, is indeed located in the flow-vorticity plane, with its normal pointing in the gradient direction.
The behavior of the binormal and the biaxiality is distinctly different for c_low: neither λ_B nor η_T increases with increasing strain (Fig. 6c,e). Thus, there is no well-defined orientation of hairpins at the low concentration, as is exemplified in region I of Fig. 4b. In Fig. 6 we also plot the points measured about 150 s after cessation of shear flow. We note a clear difference between the relaxation times at low and high concentration: for c_low the orientation is partly lost, whereas for c_high the orientations remain unchanged. This indicates that the relaxation time at c_high is indeed much longer than for c_low. A time series after cessation of flow confirms these observations: the shape is mostly lost for c_low (Fig. 6g), while at c_high the filament conserves its shape (Fig. 6h).
II. DISCUSSION
By directly visualizing the full 3D conformation of individual actin filaments within entangled F-actin solutions, we can show how entanglements influence the conformation of the filaments in response to shear flow.
First, the distribution of curvature for c_low, at the onset of the entangled regime, remains unchanged when applying shear flow, while for c_high the number of stretched as well as bent segments increases under shear (see Fig. 3). This explains the much higher viscosity for c_high. Second, we observe at c_high that hairpins form in shear flow, which tilt into the flow-vorticity plane. The behavior of entangled filaments is in marked contrast with the dynamics of filaments in dilute solutions, which tumble in the gradient direction [26]. Our observations also explain why tumbling was previously shown to be strongly reduced at concentrations three times higher than c_high [31], although the shear rates used in that reference were O(10²) higher. We find that the response at c_low lies between these two extremes, since there is no well-defined plane into which the filaments turn. In this case we find shear-thinning which is purely due to increased alignment of the filaments, as we conclude from Fig. 6a and e.
We will now try to relate the formation of the strongly aligned hairpins to the strain-softening in entangled solutions. Hairpins are generally viewed as a signature of an entanglement: two entangled filaments will strongly bend around the point where they are entangled when they are moved in opposite directions faster than they can relax. Mechanically this leads to strain-stiffening, while pairs of hairpins with the orientations of the bent segments roughly perpendicular to each other would result in a very flat distribution of the binormal of the bent segments. We observe, however, exactly the opposite: the binormal of the bent segments is highly aligned when straining the system at c_high. Moreover, there is a strong increase in the number of stretched segments and only a marginal increase in the number of bent segments, and we predominantly find only one hairpin per filament (see Fig. 3b). These observations strongly hint that entanglements disappear as the system is strained. Since the hairpins and their stretched tails are strictly located in the flow-vorticity plane, they have no components in the gradient direction that cause the shear stress, which results in strain-softening. The filaments slide over each other, facilitating lamellar flow. These findings are contrary to theoretical predictions that hairpins cause strain-stiffening. However, the theory considered contour lengths much longer than the persistence length [29,30], while in our experiments they are of the same order. Theory does predict an instability where shear deformation pushes out contacts between filaments, causing strain-softening [30]. This strain-induced loss of entanglements, very similar to convective constraint release [32], is exactly what we find.
There are no predictions of the shape of the filaments at high strains after strain-softening. We find experimentally that the effect of the surrounding filaments is to confine the hairpins in the flow-vorticity plane. Instead of a confining tube, as can be defined in equilibrium, there are now confining planes, without entanglements. This can also be appreciated from the movie (Supplementary Movie 1), where fluctuations in the vorticity direction are observed, but not in the gradient direction, in contrast to the movie of a filament at c_low (Supplementary Movie 2). While the hairpins form immediately and can be related to the strain-softening, we also observe shear-thinning (see Fig. 1b). This behavior is likely related to the small but significant increase in the orientational ordering of the stretched segments (solid symbols in Fig. 6b) and the decrease in the biaxiality (solid symbols in Fig. 6f), parameterizing the flow alignment of the segments. These new scenarios for strain-softening and shear-thinning exclude the need for scission of the F-actin filaments as a pathway to explain non-linear rheology, which is known for living polymers such as wormlike micelles [23]. Indeed, we observe in Fig. 2c no change in the filament length distribution over the full measured range of strain. The mechanism for stress release by the formation of disentangled sliding hairpins could be a precursor for the formation of flow instabilities, which is often related to local reorganization of the constituting particles [23]. The distinct orientation of the hairpins also suggests that a normal stress builds up, which could lead to flow instabilities. Polymer solutions [22,33,34], polymer melts [40] and sticky carbon nanotubes [41] are all systems that display pronounced normal stresses as well as flow instabilities. Flow instabilities have also been observed for actin dispersions at higher concentrations than we used [20]. We did not find any signature of such behavior when scanning a significant part of the gap of the shear cell at a fixed position from the center of the shear cell. This could be due to the limited strain applied to the system as well as the relatively low filament concentration compared to ref. [20].
In conclusion, we believe that the mechanism of stress release we identified here may be generally valid for solutions of semi-flexible polymers, including supramolecular systems [23,42-44]. Thus, our findings will aid the understanding of the complex flow behavior of such systems. Likewise, this mechanism could impact the self-organization of cytoskeletal filaments in response to intracellular shear flows created by processes like cytoplasmic streaming [4,5].
III. METHODS
A. Protein purification and sample preparation

G-actin was isolated from rabbit skeletal muscle [45]; fluorescently labeled F-actin was prepared following established protocols [46-49]. Samples were prepared by diluting labeled filaments in a 1:2000 ratio with GFS-buffer solution (10% 10x F-buffer, 60% sucrose in G-buffer) and mixing this solution in equal volume with unlabeled F-actin to reach final concentrations of 0.02 and 0.15 mg/ml. The final buffer solution thus contained 30% sucrose, which reduced the off-rate of labeled phalloidin [50], improving the signal-to-noise ratio during image acquisition. All measurements were done at 21 °C.
B. Shear cell and Microscopy
To produce shear flows with a well-defined linear velocity gradient, we used an adapted version of the counter-rotating cone-plate shear cell used in ref. [51] (Supplementary Fig. 1). It consists of a bottom glass plate (diameter 80 mm, thickness 170 µm, Menzel) which is fixed by two Teflon rings that are pressed together between a titanium plate at the bottom and a stainless steel block at the top. In this block there is a hole where the top cone, also made out of stainless steel, is inserted. Both the glass plate and the steel cone can move independently and in our experiments are moved counter-clockwise. The shear cell was mounted on an inverted microscope (Zeiss/Axiovert 200 M), equipped with a multi-pinhole confocal system (VisiTech/VT-Infinity-I) and an Epiplan-Neofluar 50x/1.0 Oil Pol objective (Zeiss). An Argon/Krypton laser (Spectra Physics/Stabilite 2250) operating at 647 nm was used for excitation of the fluorescent dye. An observation area of 151 µm x 151 µm was imaged onto an EMCCD camera (Andor/iXon DU-897) operated with IQ software. Confocal stacks consisted of 51 frames taken at 1 µm intervals at a rate of 7.2 s per stack. The difference between the geometrical and the optical path length, due to the different refractive indices of the immersion oil (n_oil = 1.518) and the actin solution (≈ n_water = 1.33), was determined by two independent methods. First, we measured the refractive indices of the immersion oil and the actin solution and calculated the correction factor n_k from the ratio of those indices. Second, we filled a glass capillary of known thickness (10 µm ± 10%) with a solution of fluorescent beads (Ø 0.5 µm, Latex beads, Sigma) and measured the geometrical path length by means of a calibrated piezo element, calculating the correction factor from the ratio of the capillary thickness and the geometrical length:

n_k = n_buffer/n_oil ≈ h_capillary/h_geometrical = 0.9 ± 0.04.

Both methods gave equivalent results. Data were taken 30-80 µm into the sample to reduce wall effects.
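As a small illustration of how the calibrated correction factor would be applied (the function name and its use are ours; the paper only reports the calibration itself):

```python
# Axial (z) correction for the refractive-index mismatch between the
# objective's immersion oil and the aqueous actin sample.
n_oil, n_buffer = 1.518, 1.33
n_k = n_buffer / n_oil            # ~0.876; measured value 0.9 +/- 0.04

def true_depth(z_stage_um):
    """Convert nominal stage travel (um) to true depth in the sample."""
    return n_k * z_stage_um

print(true_depth(51))  # a 51-frame stack at 1 um spacing spans ~45 um
```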
Our shear protocol consists of four blocks of five minutes in which shear rates of 0.075, 0.15, 0.225 and 0.3 s⁻¹, respectively, are applied. The reason for this protocol is that after sample loading and inserting the cone, the orientation of the filaments for c_high is not well defined, while it does not relax; this cannot be avoided. Thus quite a few strain units are needed to remove this memory effect. For c_low the sample does relax before we start the experiment, and indeed we can follow trends with increasing strain.
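For reference, the cumulative strain implied by this protocol can be tallied directly (a trivial sketch; the quoted γ values in the data analysis are sampled slightly before each rate change, hence fall a little below the block totals):

```python
# Cumulative strain accrued under the stepped shear protocol
# (four 5-minute blocks at increasing shear rate).
rates = [0.075, 0.15, 0.225, 0.3]   # shear rates in 1/s
block = 5 * 60                      # seconds per block
strain = 0.0
for gd in rates:
    strain += gd * block
    print(f'after the {gd} 1/s block: gamma = {strain:.1f}')
# -> 22.5, 67.5, 135.0, 225.0; the analysed stacks correspond to
#    gamma = 21, 59, 126 and 215.
```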
Rheology data are taken with an Anton Paar MCR501, using a cone-plate geometry with 30 mm diameter and 1 degree angle.
C. Data analysis
The 3D filament contours were tracked with the auto-depth function of the visualization and analysis software Imaris (Bitplane/Imaris 6.1), and subsequent filament position analysis was done using Matlab (Mathworks). As a control for the conformation of the filaments before the shear experiment (γ = 0), one frame was taken before placing the cone. This frame contained about 50 tracer filaments, which were tracked with Imaris. For the analysis of the filament conformation at a certain strain value, 3 statistically independent stacks (circa 30 s apart) taken before changing the shear rate were analyzed, each also containing about 50 tracer filaments, corresponding to strains of γ = 21 ± 3, 59 ± 6, 126 ± 7 and 215 ± 12. For the last data point, 3 frames, taken 120 s, 150 s and 180 s after cessation, were analyzed.
ICP-OES Determination of Titanium (IV) in Marine and Wastewater Samples after Preconcentration onto Unloaded and Reagent Immobilized Polyurethane Foams Packed Columns
A novel method that utilizes untreated polyurethane foam (PUF) and PUF physically immobilized with the reagent 4-(2-pyridylazo)resorcinol (PAR) or 2,3,5-triphenyl-2H-tetrazolium chloride (TZ⁺Cl⁻) as a solid-phase-extractor packed column has been developed for preconcentration and subsequent determination of titanium (IV) ions in marine and wastewater samples. The method is based on retention of the titanium (IV) traces present in aqueous media at pH 3-4 onto the reagent-treated PUF packed column, followed by recovery with HNO₃ (2.0 mol dm⁻³) and subsequent ICP-OES determination. The uptake of titanium species onto the unloaded and reagent-impregnated PUF was fast and followed a first-order rate equation. Titanium sorption onto PUF followed Langmuir, Freundlich and Dubinin-Radushkevich (D-R) type isotherm models. Thus, a dual-mode sorption mechanism involving absorption related to "weak-base anion exchange" with an added component of "surface adsorption" seems the more likely retention model. The PAR-immobilized PUF packed column has been applied successfully for complete collection of titanium (IV) species in fresh and wastewater samples at low titanium levels (<0.5 ng Ti mL⁻¹) at pH 3-4. The retained titanium species were then recovered (97-101%) from the packed column with HNO₃ and determined by ICP-OES. The proposed PUF packed column method was further applied for the analysis of picomolar concentrations of dissolved Ti species in marine water.
Introduction
Titanium is well known for its excellent corrosion resistance, having the ability to withstand attack by dilute H₂SO₄ and HCl or even moist chlorine. These properties make titanium highly resistant to the usual kinds of metal fatigue. Titanium alloys are principally used for aircraft and missiles, where light weight, strength and the ability to withstand extremes of temperature are important. In nature, titanium exists in its most stable and common oxidation state (IV). Many organic compounds of titanium, such as phthalates, oxalates, tetraethylate and butyl titanate, are widely synthesized and used extensively. Titanium is naturally present in sea and ocean water (at picomolar levels) and in food at only trace levels (µg kg⁻¹) [1]. The presence of TiO₂ as an excipient in most pharmaceutical preparations, as a pigment and particulate food additive, and its occurrence in human intestinal tissue have been proposed to provoke an abnormal response in the pathogenesis of Crohn's disease [2-4]. Hence, determination of titanium (IV) at trace levels in various samples is of paramount importance.
Few methods for the separation and subsequent determination of titanium in food and industrial wastewater samples are known [4]. A reversed-phase liquid chromatographic method for the determination of titanium with 5,5'-methylenedisalicylohydroxamic acid (MEDSHA) has been described by Bagur et al. [5]. Separation and determination of titanium (IV) at trace levels in different matrices, including industrial wastewater, have been reported employing solid-phase extraction (SPE) [6-11]. SPE has several advantages, e.g. simple operation, low cost, short analysis time, good selectivity, a high preconcentration factor, rapid phase separation and the ability to be combined with different modern analytical techniques [6-11].
In the last four decades, polyurethane foam (PUF) sorbent has been tested as an excellent support in reversed-phase extraction chromatography and in gas-solid and gas-liquid partition chromatography [12-23]. The cellular structure and the available surface area of PUF in both foamed and microspherical forms make it suitable as an excellent extractor and as a column-filling material with good capacity for firmly retaining various loading and extracting agents [21]. Preconcentration, separation and subsequent sensitive determination of titanium (IV) at trace levels in various matrices are of prime importance. Thus, the present article is focused on: i. studying the retention profile of titanium (IV) onto PAR- or TZ⁺Cl⁻-treated and untreated PUF; ii. developing a convenient and low-cost extraction procedure for separation and subsequent ICP-OES determination of titanium (IV) species in water samples employing PAR-immobilized PUF in a packed column; and iii. assigning the most probable sorption mechanism of Ti retention.
Reagents and materials
All chemicals and solvents used were of analytical reagent grade and were used without further purification. Doubly deionized water was used throughout. Stock solutions (0.1% w/v) of 4-(2-pyridylazo)resorcinol (PAR; BDH, Poole, England) and 2,3,5-triphenyl-2H-tetrazolium chloride (TZ⁺Cl⁻) were prepared by dissolving the required weight in a few drops of ethanol and completing the solution with water. A stock solution (1 mg mL⁻¹) of titanium (IV) nitrate (BDH) was used for the preparation of diluted solutions (0.05-150 µg Ti mL⁻¹) in water. Stock solutions (1.0% w/v) of sodium dodecyl sulphate (SDS, BDH), tetrabutylammonium bromide (TBA⁺Br⁻, BDH) and Triton X-100 (Analar) were prepared in water. Foam cubes (10-15 mm edge) of commercial white sheets of polyether-type PUF were cut from the foam sheets, purified and finally dried at 80 °C [20]. The reagent-immobilized PUF cubes were prepared by mixing the dried foam cubes with an aqueous solution (50 mL g⁻¹ dry foam) containing PAR or TZ⁺Cl⁻ (0.1% w/v) with efficient stirring for 30 min, followed by squeezing and drying as reported [21]. The PAR-immobilized PUF was packed in the glass columns (2 cm, 10 mm ID) by applying the vacuum method of foam packing [13,18]. All containers used were pre-cleaned by soaking in HNO₃ (20% w/v) and rinsed with de-ionized water before use.
Apparatus
A Perkin-Elmer Lambda 25 spectrophotometer (190-1100 nm; Shelton, CT, USA) with a 10 mm (path width) quartz cell was used for recording the electronic spectra and measuring the absorbance of the complex species of titanium. A Perkin-Elmer inductively coupled plasma-optical emission spectrometer (ICP-OES, Optima 4100 DC, Shelton, CT, USA) was operated at the optimum instrumental parameters for titanium determination before and after extraction with the reagent-treated PUFs (Table 1). A Soxhlet extractor and a Lab-Line orbital mechanical shaker (Corporation Precision Scientific, Chicago, USA) with a shaking rate in the range of 10-250 rpm were used for the foam purification and for shaking in batch experiments, respectively. De-ionized water was obtained from a Milli-Q Plus system (Millipore, Bedford, MA, USA). A Thermo Orion model 720 pH meter (Thermo Fisher Scientific, MA, USA) was employed for pH measurements, with absolute accuracy limits defined by NIST buffers. Self-made columns (16, 10 and 2 cm height, 10 mm i.d.) were used in flow experiments.
Recommended procedures
Batch experiments: In a dry 100 mL polyethylene bottle, an accurate weight (0.05 ± 0.001 g) of the unloaded or PAR-immobilized foam cubes was shaken for 1 h in a mechanical shaker with 50 mL of an aqueous solution containing titanium (IV) ions at 100 µg mL⁻¹ concentration at 25 ± 0.1 °C at the required pH, employing Britton-Robinson (B-R) buffer (pH 2.5-11.5). After phase separation, the aliquot solution was analyzed for titanium by ICP-OES under the optimal operational parameters of the instrument (Table 1). The amount of titanium (IV) retained at equilibrium, q_e, on the foam cubes was then determined from the difference between the titanium (IV) concentration measured in solution before (C_i) and after (C_f) shaking with the unloaded or reagent-loaded foam cubes, employing the equation

q_e = (C_i − C_f) v / w,

where v and w are the volume (mL) of the aqueous solution and the weight (g) of the foam cubes, respectively. The extraction percentage (%E) and the distribution ratio (D) of the titanium sorption onto the unloaded and reagent-loaded foam were then calculated as reported [20]. Following these procedures, the influence of different parameters was critically investigated. The values of E and D are the averages of three independent measurements, and the precision in most cases was ± 2%.
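A minimal numerical sketch of this bookkeeping follows; the q_e expression is the one given above, while D = q_e/C_f is the conventional definition of the distribution ratio (the text refers to ref. [20] for %E and D), and the function name is ours.

```python
# Batch-uptake bookkeeping: amount retained q_e, extraction percentage
# %E and distribution ratio D from the Ti concentrations before (C_i)
# and after (C_f) equilibration.
def batch_uptake(C_i, C_f, v_mL=50.0, w_g=0.05):
    q_e = (C_i - C_f) * v_mL / w_g                # ug Ti per g foam
    E = 100.0 * (C_i - C_f) / C_i                 # extraction percentage
    D = q_e / C_f if C_f > 0 else float('inf')    # distribution ratio (mL/g)
    return q_e, E, D

# e.g. 100 ug/mL initially, 5 ug/mL left in solution after shaking:
print(batch_uptake(100.0, 5.0))   # (95000.0, 95.0, 19000.0)
```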
Column experiments: An accurate weight (0.50 ± 0.01 g) of the unloaded or PAR-immobilized PUF was packed in a column using the vacuum method of foam packing [23]. Aqueous solutions (0.1-10 L) containing titanium (IV) at various concentrations (0.01-10 µg mL⁻¹), adjusted with Britton-Robinson buffer to pH 3-4, were percolated through the foam column at 15-20 mL min⁻¹ flow rate. The sample and blank foam packed columns were then washed with 100 mL of B-R buffer solution at the same pH. Complete retention of titanium (IV) took place on the unloaded and PAR- or TZ⁺Cl⁻-immobilized PUF, as indicated by the ICP-OES determination of titanium in the effluent solutions. The sorbed titanium (IV) species were then recovered quantitatively from the sorbent packed column with HNO₃ (50 mL, 2 M) at 5 mL min⁻¹ flow rate. Equal fractions (10 mL) of the eluate were collected and the titanium species determined with ICP-OES.
Analysis of titanium (IV) in fresh, sea and wastewater samples:
A 10 mL portion of concentrated HNO₃ was added to 0.1-0.5 L of tap, seawater or industrial wastewater sample. The mixture was boiled until the volume of the sample solution was reduced to two-thirds, allowed to cool and filtered through a Whatman No. 1 filter paper. The pH of the solution was adjusted to 3-4 with B-R buffer. The water samples were then spiked with (or without) titanium (IV) at a total concentration of 0.1-10 µg mL⁻¹ and diluted to the original volume with water in a volumetric flask. The water samples were then percolated through the unloaded or PAR-loaded PUF packed column at 20-25 mL min⁻¹ flow rate. The retained titanium (IV) species on the PUF column were then recovered with HNO₃ solution (25 mL, 2 M) at 5 mL min⁻¹ flow rate. The titanium (IV) concentration before extraction and after recovery in the eluate was finally determined by ICP-OES.
Results and Discussion
The concentrations of heavy metals in natural water and wastewater samples are frequently lower than the limits of detection (LOD) of common techniques. Therefore, recent years have seen an upsurge of interest in developing solid sorbents [22,23] and exploring them for the separation and chemical speciation of metal ions [20]. PUF represents an inexpensive and efficient separation and preconcentration medium with steadily growing applications to inorganic and organic complex species [22].
Retention profile of titanium (IV) onto the PUF
The retention behavior of titanium (IV) ions from aqueous solutions by the untreated and reagent-immobilized PUF cubes after 1 h shaking at different pH, employing B-R buffer (pH 2.5-11.5), was investigated. The uptake of titanium (IV) onto the unloaded and PAR-loaded PUF increases on raising the solution pH up to 3.1-4.0 and decreases markedly on increasing the pH further (Figure 1). The observed sorption of titanium (IV) species onto the unloaded and TZ⁺Cl⁻-loaded PUF at pH 3.1-4.0 is most likely attributed to the formation of binary and ternary complex ion associates of titanium (IV) via the protonated ether (-CH₂-O⁺H-CH₂-) oxygen linkage of the PUF and with the TZ⁺Cl⁻-loaded PUF, respectively. The PAR-immobilized PUF also showed a similar sorption trend, with better extraction performance (Figure 1). The low titanium sorption onto PUF at pH higher than 4.0 is attributed to the instability, hydrolysis or incomplete extraction of the produced ion associates of titanium (IV) with the unloaded PUF or TZ⁺Cl⁻. On the other hand, PAR molecules immobilized onto the PUF most likely complex with titanium (IV) species in solution via a ligand exchange or ligand addition mechanism [2]. Thus, in the subsequent work, the aqueous solution was adjusted to pH 3-4.
A possible explanation of the observed trend involves a "weak-base anion exchanger" mechanism for the unloaded and TZ⁺Cl⁻-treated PUF and a "cation chelation or ligand addition extraction" mechanism for the PAR-immobilized PUF [18,20]. The influence of shaking time (1-60 min) on the sorption of titanium (IV) from aqueous solution at pH 3-4 onto the unloaded and TZ⁺Cl⁻- or PAR-immobilized PUF was investigated. The extraction was fast, followed a first-order rate equation, and equilibrium was attained in ~10 min (Figure 2). Thus, a shaking time of 20 min was adopted in the subsequent work. The calculated half-life time (t₁/₂) of the equilibrium sorption, i.e. the time to reach 50% saturation of the sorption capacity of the PAR- and TZ⁺Cl⁻-loaded and untreated PUF (Figure 2), was in the range 1.5-2 min. The values of E and D for titanium sorption onto the unloaded and PAR-immobilized PUF were better than those for the TZ⁺Cl⁻-immobilized PUF. Thus, in the subsequent experiments, the unloaded and PAR-immobilized foams were used.
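The first-order rate equation referred to here is not reproduced in this copy; the standard Lagergren form, which is consistent with the quoted half-life, reads

log(q_e − q_t) = log q_e − (k₁/2.303) t,  with  t₁/₂ = ln 2 / k₁,

so the reported t₁/₂ = 1.5-2 min corresponds to a rate constant k₁ ≈ 0.35-0.46 min⁻¹.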
The effect of cation size (Na⁺, K⁺, NH₄⁺ and Ca²⁺, as chloride salts at 0.05% w/v concentration) on titanium sorption by the treated and untreated PUF was studied. In the unloaded foam, the uptake followed a distinct cation-size sequence. A different trend was observed for the PAR- or TZ⁺Cl⁻-immobilized PUF, with a reasonable increase (5-10%) of titanium (IV) sorption in the presence of K⁺ at the top of the retention order. The reduction of the repulsive forces between adjacent sorbed titanium (IV) complex ion associates in the unloaded PUF membrane may account for the trend observed [22,23]. Thus, the ion-dipole interaction of NH₄⁺ with the oxygen sites of the PUF is not the predominant factor in the extraction step. The added K⁺ ions most likely reduce the number of water molecules available to solvate the titanium ions, which would therefore be forced out of the solvent phase onto the PUF. Thus, "weak-base anion exchange" and "cation chelation or ligand addition extraction" are the most probable sorption mechanisms for the sorbent.
The influence of the surfactants SDS, TBA⁺Br⁻ and Triton X-100 on titanium (IV) sorption from aqueous solution onto the unloaded and loaded PUF was investigated. Titanium sorption onto the PUF sorbent increased in the presence of SDS (0.1% w/v) and leveled off on raising the surfactant concentration. This behavior is most likely attributed to the increase of the solution viscosity, leading to a progressive change in the physical properties of the microenvironment of the complex ion associates produced with the unloaded or TZ⁺Cl⁻-treated PUF and of the chelate formed with the PAR-treated PUF [18,20], respectively. The increase in solution viscosity enhances the dissociation and/or the formation of aggregate complexes with low diffusion constants [24,25]. Competition between the surfactant and the anionic complex of titanium (IV) may also contribute to the observed trend. The surfactant may also react directly with the anionic complex of titanium, retarding the extraction process [25].
Sorption isotherms of titanium (IV) by PUF
The sorption profile of titanium (IV) from the bulk aqueous solution onto the untreated and PAR-treated PUF was determined over a wide range of concentrations. The amount of titanium (IV) retained onto the unloaded and reagent-treated PUF varied linearly with the corresponding equilibrium amount of titanium (IV) at low and moderate Ti concentrations. Thus, the titanium (IV) sorption data were subjected to the Freundlich [26] and Dubinin-Radushkevich [27] isotherms over a wide range of equilibrium concentrations. The linearized Freundlich model [26] is expressed as follows:

log C_ads = log A + (1/n) log C_e,

where C_e is the equilibrium concentration (M) of titanium (IV) in solution, C_ads is the sorbed titanium (IV) concentration (mmol g⁻¹), and A and 1/n are the Freundlich parameters related to the maximum sorption capacity of the solute (mol g⁻¹). The values of A and 1/n, computed from the intercepts and slopes of the linear plots of log C_ads versus log C_e over the entire range of titanium (IV) concentrations (0.05-150 µg mL⁻¹), were 0.0156 ± 0.004 and 0.0173 ± 0.003 mol g⁻¹, and 0.571 ± 0.07 and 0.642 ± 0.18, for the unloaded and PAR-treated PUF, respectively. The values of 1/n < 1 indicate that the isotherms do not predict any saturation of the surface of the solid sorbent by the adsorbate and that the sorption capacity is slightly reduced at lower concentration.
The linear form of the Dubinin-Radushkevich (D-R) model [27], postulated within the adsorption space close to the PAR-treated PUF surface, is expressed as follows:

ln C_ads = ln K_DR − βε²,

where K_DR is the maximum amount of titanium (IV) retained onto the PAR-treated PUF, β is a constant related to the energy of transfer of the solute from the bulk solution to the solid sorbent, and ε is the Polanyi potential, given by

ε = RT ln(1 + 1/C_e),

where R is the gas constant (kJ mol⁻¹ K⁻¹) and T is the absolute temperature (298 K). The plot of ln C_ads vs. ε² was linear (Figure 3), indicating that the D-R isotherm is obeyed for titanium (IV) sorption onto the sorbent over the entire concentration range. The computed values of β and K_DR from the slope and intercept of Figure 3 were found in the ranges 0.0027-0.0032 mmol² kJ⁻² and 105-127 µmol g⁻¹, respectively. These results and the data reported earlier [18-20] suggest a dual sorption mechanism involving absorption related to "weak-base anion exchange" or "cation chelation" and an added component of "surface adsorption" for the uptake of titanium (IV) ions by the unloaded and PAR-immobilized PUF. This model can be expressed as follows [20]:

C_r = C_abs + C_ads = K_D C_aq + (S K_L C_aq)/(1 + K_L C_aq),

where C_r and C_aq are the equilibrium concentrations of titanium (IV) ions on the PUF and in solution, respectively, C_abs and C_ads are the equilibrium titanium (IV) concentrations on the PUF as absorbed and adsorbed species, respectively, and S, K_D and K_L are the saturation value for the Langmuir adsorption, the distribution coefficient and the Langmuir constant, respectively. This equation can be solved for D as reported earlier [20,21]:

D = C_r/C_aq = K_D + (S K_L)/(1 + K_L C_aq).

The D values thus depend on the titanium ion concentration, confirming the proposed mechanism. These results suggested the use of PAR-loaded PUF in flow mode for complete collection, recovery and subsequent ICP-OES determination of Ti in water.
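A short sketch of the D-R fit described above follows; the regression is exactly the ln C_ads versus ε² line of Figure 3, the units follow the ranges quoted in the text, and the function name is ours.

```python
# Dubinin-Radushkevich fit: linear regression of ln(C_ads) against eps^2,
# with eps = R T ln(1 + 1/C_e) the Polanyi potential.
import numpy as np

R, T = 8.314e-3, 298.0                    # kJ mol^-1 K^-1, K

def dr_fit(C_e, C_ads):
    """C_e in mol/L, C_ads in mmol/g; returns (K_DR, beta)."""
    eps = R * T * np.log(1.0 + 1.0 / C_e)          # Polanyi potential (kJ/mol)
    slope, intercept = np.polyfit(eps ** 2, np.log(C_ads), 1)
    beta = -slope                                  # mmol^2 kJ^-2, as quoted
    K_DR = np.exp(intercept)                       # maximum retained amount
    return K_DR, beta
```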
Chromatographic separation of titanium (IV)
Preliminary investigation of the use of the PAR-PUF packed column for the collection of titanium (IV) ions from aqueous media indicated that the column performance towards titanium ions is good. Thus, aqueous solutions of deionized and tap water samples (1.0 L) containing various concentrations (0.01-10 µg mL⁻¹) of titanium (IV) at pH 3-4 were percolated through the unloaded and PAR-treated PUF packed columns at 20-25 mL min⁻¹ flow rate. Analysis of titanium in the effluent solution versus a reagent blank indicated complete (98 ± 3.1%) retention of titanium. A series of eluting agents, e.g. HNO₃, EDTA, NaF and HCl, was then tested for recovery of titanium (IV) from the PUF packed column. Nitric acid (50 mL, 2.0 M) was found suitable for complete recovery of titanium (IV) from the packed column at 5.0 mL min⁻¹ flow rate. The results obtained are shown in Figure 4.
The performance of the developed unloaded and PAR-immobilized PUF columns was determined by passing 0.5 L (10 µg mL⁻¹) of titanium(IV) solution at pH 3-4 through the PUF packed column at a 20 mL min⁻¹ flow rate. Complete sorption of titanium(IV) onto the PAR-loaded foam column took place at 20 mL min⁻¹. The retained titanium(IV) species were recovered with 50 mL HNO3 (2.0 M). The results are shown in Figure 5. The height equivalent to a theoretical plate (HETP) and the number of theoretical plates (N) were calculated from the elution curves using:

N = 8 (V_max / w_e)² and HETP = L / N

where V_max is the eluent volume at maximum elution of the solute, w_e is the width of the chromatogram peak at (1/e) times the maximum recovery of the solute, and L is the length of the PUF bed in the packed column. HETP and N values were found in the ranges 0.5-0.75 ± 0.04 mm and 80 ± 4, respectively. The HETP and N values evaluated from the breakthrough capacity curves (Figure 6) were 0.74 ± 0.01 mm and 78 ± 3 (n=5) for titanium retention onto the unloaded PUF, and 0.51 ± 0.02 mm and 83 ± 2 for the PAR-loaded PUF packed column. The critical capacities of titanium(IV) sorption onto the unloaded and loaded foam packed columns, calculated from Figure 6, were 56.3 ± 2.2 and 60.4 ± 1 mg titanium per gram PUF, respectively, at a 20-25 mL min⁻¹ flow rate. The capacity calculated for the PAR-PUF packed column was higher than that obtained in batch mode (40.4 ± 1 mg g⁻¹ PUF).
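A small numerical check of the plate formulas reconstructed above; the identity N = 8 (V_max / w_e)² holds exactly for a Gaussian peak when w_e is measured at 1/e of the peak maximum. The input values below are illustrative, not read off the paper's figures:

```python
# Plate count and HETP from an elution curve (illustrative values).
def plate_count(v_max_mL: float, w_e_mL: float) -> float:
    return 8.0 * (v_max_mL / w_e_mL) ** 2

N = plate_count(v_max_mL=25.0, w_e_mL=7.9)  # ~80 plates
hetp_mm = 50.0 / N                          # assuming a 50 mm foam bed -> ~0.6 mm/plate
print(f"N = {N:.0f}, HETP = {hetp_mm:.2f} mm")
```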
Interference Study
The analytical utility of the PAR-immobilized PUF packed column for the retention and recovery of titanium ions (10 µg mL⁻¹) from aqueous solutions (100 mL) was tested in the presence of a relatively high excess (100-1000 fold) of diverse ions (Fe3+, Al3+, Ca2+, Mg2+, Cr3+, V4+, Ni2+, Mn2+, Co2+, Cu2+, Zn2+, Hg2+ and Cd2+) relevant to wastewater. A change of less than ±2% in the recovery of titanium ions was considered free from interference. Good extraction efficiency (>97 ± 3%) for titanium(IV) sorption and recovery was achieved in the presence of the investigated diverse ions, except Al and Fe. Addition of NaF (100 µg mL⁻¹) eliminated the interference of both Al and Fe.
Analysis of titanium (IV) ions in tap-and wastewater samples:
The validity of the proposed unloaded and PAR-loaded PUF packed columns for the collection, recovery and ICP-OES determination of titanium(IV) ions in tap and wastewater samples was tested as described in the experimental procedures. Low concentrations (0.01-1.0 µg mL⁻¹) of titanium ions spiked into tap and/or wastewater samples were retained quantitatively, as indicated by ICP-OES analysis of Ti in the effluent solutions. The retained titanium species on the unloaded and PAR-PUF columns were then recovered with HNO3 (25 mL, 2 M) at a 3-5 mL min⁻¹ flow rate and subsequently determined by ICP-OES. The results are summarized in Table 2. Satisfactory recoveries (96.5-102 ± 2.9%) of the spiked titanium(IV) ions in tap and wastewater samples were achieved by the proposed PUF packed columns, in good agreement with the data obtained by the standard ICP-OES method. The results also revealed the absence of detectable native Ti in the tested samples; on this basis, titanium ions are not detectable in the tap and wastewater samples.
Analysis of titanium (IV) ions in seawater samples:
Satisfactory results (97 ± 2.7%) were obtained for the preconcentration, recovery and subsequent ICP-OES determination of a very low concentration of titanium (≤ 0.5 µg L⁻¹) spiked into Red Sea water samples (Jeddah, Saudi Arabia) by the proposed method. The titanium(IV) concentration (0.05 µg L⁻¹) obtained with the proposed packed column was in acceptable agreement with the data obtained by ICP-mass spectrometry (ICP-MS) and cathodic voltammetric [28] methods.
Conclusion
The present paper demonstrates the application of a PAR-immobilized PUF solid sorbent packed column for the complete removal of titanium(IV) from wastewater samples and its subsequent ICP-OES determination. The method is simple to operate and lower in cost than conventional methods. The PAR-PUF packed column was found to be stable and reusable many times without a decrease in the extraction and recovery percentage of titanium (over 95%). Work is continuing on the online chemical speciation of inorganic titanium(III) and (IV) and organotitanium(IV) compounds using the PAR-immobilized PUF packed column and ICP-OES.

[Figure caption fragment: unloaded (1) and PAR (2) immobilized PUF packed column (0.3 ± 0.01 g) using HNO3 (2 M) at a 5 mL min⁻¹ flow rate.]
Prevalence and associated factors of unintended pregnancy among pregnant women of reproductive age group in chencha woreda, gammo gofa zone, Southern Ethiopia
Introduction: Unintended pregnancies and unplanned births can have serious health, economic, and social consequences for women and their families. The immediate outcome of some unintended pregnancies is induced abortion, which is unsafe in many countries with highly restrictive abortion laws. In these countries, abortion often damages women's health and sometimes results in their death. Method: A community-based cross-sectional study was conducted with a total of 420 study participants. Simple random sampling was used to draw participants; the collected data were entered into EPI-Data version 7.9.0 and then exported to SPSS version 20.0 for analysis. Descriptive statistics and binary and multiple logistic regression analyses were carried out, and odds ratios with 95% CIs were calculated. Result: The prevalence of unintended pregnancy was found to be 30.2%. Multiple logistic regression results showed that a previous history of abortion (AOR=8.262; 95%CI=3.692, 18.489), not discussing sexual and reproductive health (SRH) issues with the husband (AOR=3.086; 95%CI=1.830, 5.205) and an age of the last child of less than three years (AOR=1.870; 95%CI=1.100, 3.179) were significantly associated with unintended pregnancy. Conclusion: This study showed that the prevalence of unintended pregnancy is high in the study area; hence, strengthening the provision of post-abortion services, counseling on long-term family planning services and male involvement in all reproductive health services are highly recommended.
Introduction
Unintended pregnancy is associated with an increased risk of problems for the mother and baby. If a pregnancy is not planned before conception, a woman may not be in optimal health for childbearing. For example, women with an unintended pregnancy may delay prenatal care, which may affect the health of the baby. 1 Globally, approximately 40 percent of pregnancies, or 85 million pregnancies, were unintended in 2012; of these, 50 percent ended in abortion, 13 percent ended in miscarriage, and 38 percent resulted in an unplanned birth. 2 Unintended pregnancies and unplanned births can have serious health, economic, and social consequences for women and their families. 3 One immediate outcome of some unintended pregnancies is induced abortion, which is unsafe in many countries with highly restrictive abortion laws. In these countries, abortion often damages women's health and sometimes results in their death. 4 The World Health Organization (WHO) estimates that every year nearly 5.5 million African women have an unsafe abortion. As many as 36,000 of these women die from the procedure, while millions more experience short- or long-term illness and disability. 5 In sub-Saharan Africa, it is estimated that 14 million unintended pregnancies occur every year, with almost half occurring among women aged 15-24 years. 6 In Ethiopia, there was a five-fold increase in the use of a method of contraception by currently married women, from 8 percent in 2000 to 42 percent in 2014. 7 In 2005, Ethiopia expanded its abortion law, which had previously allowed the procedure only to save the life of a woman or protect her physical health. Abortion is now legal in Ethiopia in cases of rape, incest or fetal impairment. In addition, a woman can legally terminate a pregnancy if her life or her child's life is in danger, or if continuing the pregnancy or giving birth endangers her life. Notwithstanding the new law, almost six in 10 abortions in Ethiopia are unsafe. 8 Despite these efforts, the prevalence of unintended pregnancy remains very high. Hence, studying the prevalence and associated factors of unintended pregnancy is of great importance and would help to design useful strategies and cost-effective interventions to reduce the burden of unintended pregnancy.
Chencha woreda is one of 13 woredas in Gammo Gofa zone, Southern regional state, located 250 km south of Hawassa, the capital of the southern regional state, and 480 km southeast of Addis Ababa, the capital city of Ethiopia. It is bordered by Kucha and Boreda weredas in the north, Arbaminch Zuria wereda in the south, Mirab-Abaya wereda in the east and Dita in the west. It has 50 rural administrations, called kebeles, and currently covers an estimated area of 445 km², divided into 45 rural peasant associations and 5 urban dwellers' associations. According to data obtained from the woreda health office, the 2016/2017 projected population of the woreda is around 143,560; of the total population, the number of women of childbearing age is 33,449, of whom 4,967 are expected to be pregnant, and there are currently 1,234 pregnant mothers in the woreda.
Family planning coverage of the woreda in the previous year was 76%. There are 1 district hospital, 7 health centers, 5 private clinics, two drug vendors and 49 health posts, with 2 health extension workers in each kebele (small administrative unit). 9 A community-based cross-sectional study design was used to assess the prevalence and associated factors of unintended pregnancy.
Sample size determination
The required sample size was determined using EPI-INFO version 7.1, considering a single population proportion based on the following assumptions: the prevalence of unintended pregnancy among pregnant women was estimated to be 36.5%, 10 with a 95% level of confidence and a 5% margin of error, giving a final sample size of 356 (a worked check of this calculation is given at the end of this subsection). For the associated factors, the required sample size was determined using EPI-INFO version 7.1 considering a double population proportion (Table 1). Data collection tool and procedure: A pre-tested, semi-structured interview questionnaire was used for data collection. The questionnaire has different parts: questions related to socio-demographic variables, fertility-related variables, access to health information and services, family planning-related variables and pregnancy intention were used to determine the prevalence and identify factors of unintended pregnancy among currently pregnant women in the woreda. The questionnaire was prepared in English and translated into Amharic prior to the start of fieldwork; it was pretested among 5% of women in Arbaminch Zuria woreda to make sure the questions were clear and could be understood by the respondents, and then translated back into English to keep consistency. It was checked for clarity and understandability, and findings and experiences from the pre-test were used to modify and rearrange the data collection instrument. Data were collected from June 17 to July 2, 2018 in Chencha woreda by 15 female BSc midwife data collectors and 6 supervisors who had direct experience.
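As referenced above, a worked check of the single population proportion sample-size figure; the formula is the standard one rather than a quotation from the paper:

```latex
n = \frac{z_{\alpha/2}^{2}\, p\,(1-p)}{d^{2}}
  = \frac{(1.96)^{2}\,(0.365)\,(1 - 0.365)}{(0.05)^{2}} \approx 356
```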
Dependent variable
Unintended pregnancy (unwanted and mistimed pregnancy)
Independent variables
i. Socio-demographic and economic characteristics: age, religion, marital status, residence, educational status, occupational status, women's decision-making power/autonomy
ii. Fertility-related factors: age at first marriage, gravidity, parity, history of abortion, history of stillbirth, age of last living child
iii. Family planning-related factors: knowledge about FP, use of contraceptive methods
iv. Access to health information and services: source of family planning information, accessibility of the services (distance from home)
Data quality assurance:
To ensure the quality of the data, data collectors were trained for two days by the investigators on data collection, how to keep the confidentiality of information, the contents of the questionnaire and data quality management. Training was given to both data collectors and supervisors. On the days of data collection, the investigators supervised the data collection process by checking the completeness of the data. Clarification was provided on all contents of the formats, areas of difficulty were discussed, and direction on possible solutions was given. The questionnaires were checked by data collectors and supervisors on a daily basis for completeness and consistency.
Data analysis and processing: The collected data were checked for completeness and consistency by the investigator. The data were cleaned, coded and entered into EPI-Data version 7.9.0 and then exported to SPSS version 20.0 for analysis. A multicollinearity test was performed to examine interactions among the explanatory variables. Descriptive statistics were computed and presented using tables, figures and charts. Binary and multiple logistic regression analyses were carried out to assess the effect of potential factors on the occurrence of unintended pregnancy. Odds ratios with 95% CIs were calculated to measure the strength of association between the explanatory variables and the outcome variable, as sketched below.
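A minimal sketch of this analysis step, producing adjusted odds ratios with 95% CIs. The original analysis was run in SPSS; this Python/statsmodels version, with hypothetical variable names and synthetic data, only illustrates the shape of the computation:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 414
df = pd.DataFrame({
    "prev_abortion":  rng.integers(0, 2, n),  # 1 = previous history of abortion
    "no_srh_discuss": rng.integers(0, 2, n),  # 1 = no SRH discussion with husband
    "last_child_lt3": rng.integers(0, 2, n),  # 1 = last child younger than 3 years
})
# Synthetic outcome loosely tied to the predictors (for demonstration only).
logit = -1.5 + 2.0 * df.prev_abortion + 1.1 * df.no_srh_discuss + 0.6 * df.last_child_lt3
df["unintended"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["prev_abortion", "no_srh_discuss", "last_child_lt3"]])
fit = sm.Logit(df["unintended"], X).fit(disp=0)

aor = np.exp(fit.params).rename("AOR")                          # adjusted odds ratios
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})  # 95% CI, OR scale
print(pd.concat([aor, ci], axis=1))
```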
Ethics approval and consent to participate
Ethical approval was obtained from the ethical review committee of the College of Medicine and Health Sciences, Arbaminch University. A letter of permission was also obtained from the Chencha Woreda Health Office. Verbal consent was obtained from the study participants, who were informed about the purpose of the study, that all information gained during data collection would be kept confidential, and that no personal identification would be recorded on the questionnaire. … women did not have formal education and 169 (40.8%) of them were housewives (Table 2).
Reproductive history of pregnant women
Of the total pregnant women, 164 (39.6%) had one to two pregnancies and 87 (21%) had five or more; 79 (19.5%) had no live birth and 8.7% had five or more live births; 278 (67.1%) had their first sexual intercourse at the age of 18 years or above; 41 (9.9%) had a previous history of abortion; 14 (3.4%) had a previous history of stillbirth; and 181 (54%) had a last child aged less than three years. Out of the total pregnant women, 125 (30.2%) had not planned their current pregnancy, 182 (45.2%) had no discussion on SRH issues with their husband, and 51.4% reported that decision-making power over everything in the household belonged to their husband (Table 3).
Access to health information and services for pregnant women in Gammo Gofe zone, Chencha Woreda, 2018
Of the 414 pregnant women, 178 (43%) could reach their nearest health facility within 30 minutes to 1 hour of walking, and 10 (2.4%) only after more than 2 hours of walking; 372 (89.9%) had visited the nearby health facility during this pregnancy (Figures 1 & 2).
Factors associated with unintended pregnancy
Eight variables with a p-value of less than 0.25 were entered into the multiple logistic regression model. Of these, previous history of abortion, communication on SRH issues and age of the last child were significantly associated with unintended pregnancy. The likelihood of unintended pregnancy was 8.262 times higher (AOR=8.262; 95%CI=3.692, 18.489) among mothers with a previous history of abortion than among mothers without such a history; 3.086 times higher among mothers who did not communicate on SRH issues with their husband (AOR=3.086; 95%CI=1.830, 5.205) than among couples who communicated; and 1.870 times higher (AOR=1.870; 95%CI=1.100, 3.179) among mothers whose last child was younger than three years than among mothers whose last child was three years or older (Table 4).
Discussion
In this study, 125 of the 414 women studied (30.2%; 95%CI: 26.1, 34.3) reported that their most recent pregnancy was unintended, and previous history of abortion, absence of discussion on SRH issues, and age of the last child were significantly associated with unintended pregnancy. The prevalences of unintended pregnancy reported in West Iran (31.6%), Helwan district (32.4%), Sudan (30.2%) and Bangladesh (30%) are consistent with our finding. 11-14 The prevalence of unintended pregnancy in Chencha woreda is higher than in studies conducted in Nairobi, Kenya (24%), Tigray region (26%) and Debre Berhan (23.5%); 15-17 this might be due to a lack of infrastructure for accessing family planning services from health facilities, the absence of sexual and reproductive health information from media and responsible institutions, and the absence of different private health facilities as a result of infrastructure barriers.
The prevalence in our study area was lower than in studies conducted in Malawi (45%), western Nigeria (35.9%), Ganji woreda in Oromiya region (36.5%), Kersa in Eastern Hararghe (33.3%) and Hosana (34%). 10,13,18-21 This might be due to cultural and religious barriers to contraceptive use in those study areas. Pregnant women with a previous history of abortion were 8.262 times more likely to experience unintended pregnancy than mothers without such a history; studies from Arsi Negele woreda, West Arsi zone, revealed a similar finding. 14 This points to the absence of post-abortion care services and of counseling on long-acting contraceptive methods. The likelihood of unintended pregnancy was 1.870 times higher among mothers whose last child was younger than three years than among mothers whose last child was three years or older; this finding is similar to a study in Iran showing that the age of the last living child was a main risk factor for unwanted pregnancies. 22 Absence of discussion on SRH issues made unintended pregnancy 3.086 times more likely than among mothers who discussed SRH issues with their husband; similarly, a study done in Damot Gale district revealed that those who discussed FP issues were 57% less likely to experience unintended pregnancy compared to the reference category. 23
Conclusions
The study has shown that the prevalence of unintended pregnancy was high in the study area. Previous history of abortion, lack of discussion on SRH issues with the husband and age of the last child were significantly associated with unintended pregnancy.
A Generalized Method for Automated Multilingual Loanword Detection
Loanwords are words incorporated from one language into another without translation. If two words from distantly related or unrelated languages sound similar and have a similar meaning, this is evidence of likely borrowing. This paper presents a method to automatically detect loanwords across various language pairs, accounting for differences in script, pronunciation and phonetic transformation by the borrowing language. We incorporate edit distance, semantic similarity measures, and phonetic alignment. We evaluate on 12 language pairs and achieve performance comparable to or exceeding state-of-the-art methods on single-pair loanword detection tasks. We also demonstrate that multilingual models perform the same as or often better than models trained on single language pairs and can potentially generalize to unseen language pairs given sufficient data, and that our method can exceed human performance on loanword detection.
Introduction
Throughout history, words and phrases have been exchanged between languages around the world (Weinreich, 1954). This can obscure genetic relations between languages (e.g., many people erroneously believe English and French are more closely related than they are) but may also increase comprehension of foreign languages by monoglots (e.g., written French is often partially comprehensible by English speakers).
As Zhang et al. (2021) observe, detecting that a word is a loanword is conceptually straightforward: similarity in both sound and meaning suggests too great a coincidence for different words to have converged by chance 1 . Detecting loanwords computationally has therefore relied on pairwise similarity measures based on transliteration detection and edit distance. However, foundational work in linguistic borrowing, e.g., by Haugen (1950) and Betz (1959), established that when borrowing words into a recipient language, speakers of that language will reproduce existing linguistic patterns when using new words, and the patterns that recipient speakers impose upon a borrowed word vary across time (Köllner and Dellert, 2016) and language pairs. Some languages may adopt a word without much phonetic change due to already-similar phonotactics. Others may fit imported words into a rigid sound pattern, with sometimes significant transformation. Still others may change the meaning. Changes are particular to the language pair, so automatically detecting loanwords between arbitrary languages is challenging. However, if successful, such capabilities would also benefit many other NLP tasks such as machine translation, coreference, and named-entity recognition (NER), because common vocabulary, coreferents, or named entities across languages may often be loanwords.
Here, we present a novel method for automated loanword detection between arbitrary language pairs. We build upon existing edit distance-based approaches, incorporating semantic similarity metrics from the multilingual language models MBERT (Devlin et al., 2019) and XLM (Conneau et al., 2020), and a method of assessing the alignment of phonemes between donor words and loans to account for differences in phonotactics between the relevant languages. We also present and evaluate on the WikLoW (Wiktionary LoanWord) Dataset, currently consisting of 13 language pairs with a high density of loanwords and 3 further language pairs with a lower density of loanwords, and we provide a methodology for expanding the dataset to new language pairs. We demonstrate that our method for detecting loanwords across all language pairs in the dataset performs comparably to or better than existing methods on language-specific loanword detection tasks, that multilingual models can perform better than models trained on individual language pairs, even on data from that pair itself, and that our model can also exceed human performance. 2 Our method supports both loanword detection and the construction of parallel corpora of loanwords for other tasks. Our conclusions suggest that there are some general principles of loanword detection that can be picked up by machine learning models independently of specific languages, and we propose follow-up challenges for NLP research in this area.
Related Work
Prior approaches to detecting loanwords computationally follow the intuition mentioned above: that if two words in otherwise not closely related languages have similar meaning and sound similar, then this is likely evidence of borrowing. Van Der Ark et al. (2007) use a Levenshtein-distance based approach to identify language groups and loanwords among languages of Central Asia.
Delz (2013) and Köllner (2021) propose theoretical approaches to loanword identification based on phylogenetic methods. Zhang et al. (2021) also point out an issue we address herein: loanwords may be transformed to fit the borrowing language's phonology and phonotactics, so pronunciation similarity alone may be a weaker-than-ideal signal.
Existing data resources relevant to loanwords include the Automated Similarity Judgment Project (ASJP) database (Brown et al., 2008) and the World Loanword Database (WOLD) (Haspelmath and Tadmor, 2009). Our data source is Wiktionary, which has previously been used in related etymological tasks by De Melo (2014) and Sagot (2017).
One thing we should note is that much work in computational loanword detection and similar tasks is targeted at a specific language or group of languages, e.g., Romance (Cristea et al., 2021;Tsvetkov and Dyer, 2015), Japanese (Takamura et al., 2017), Uyghur (Mi et al., 2014, 2018, 2020, Spanish (Álvarez-Mellado and Lignos, 2022), Central Asian languages (Van Der Ark et al., 2007), or Turkic and Indo-Iranian (Zhang et al., 2021). Our approach attempts to address the problem at a multilingual level. We use and extend existing work in phonological processing by the NLP community, including the Epitran (Mortensen et al., 2018) and PanPhon (Mortensen et al., 2016) packages for representing phonetic and articulatory features. We incorporate semantic similarity measures from multilingual language models MBERT and XLM, and develop a method of scoring the level of alignment of phonemes between a donor and a loanword to account for differences in language-specific phonology and phonotactics. Our approach in principle supports loanword detection on any pair of languages supported by the upstream packages/models Epitran, MBERT, and XLM, but we discuss how we have (Sec. 3) and can (Sec. 8) also extend our approach to languages that are not at present covered by all of these.
A work at a similar scale, albeit on the slightly different task of cognate classification, is Jäger (2018), which evaluates PMI and SVM-based methods over the ASJP database. Cognate detection work generally uses similar methods to those we use here, e.g., semantic and phonetic similarity (Kondrak, 2001), orthographic distance (Mulloni and Pekar, 2006) combined with semantic information (Labat and Lefever, 2019;Lefever et al., 2020), or global constraints (Bloodgood and Strauss, 2017). Work in translation lexicons (e.g., Schafer and Yarowsky (2002)) is also relevant, for the hybrid approach to similarity metrics.
Loanword detection may be useful for phylogenetic reconstruction, like cognate detection (Rama and List, 2019). However, cognates are valid for reconstructing common ancestry; loanwords are not. For historical reconstruction, the two must be separated. Many in the NLP community adopt a definition of "cognate" that subsumes loanwords (e.g., Kondrak (2001)). We do not adopt this definition, and use the linguistic definition that treats loanwords and cognates as distinct.
Data Collection
The WikLoW dataset is collected using the process outlined in this section, which can be run for any pair of languages that have loans between them catalogued in Wiktionary, making it easy to expand to new data. We begin by collecting data from Wiktionary categories of the form [Recipient]_terms_borrowed_from_ [Donor] 3 . Each link in the category is scraped for a loanword in the recipient language and the original form of that word in the donor language. Table 1 shows the language pairs currently contained in the WikLoW dataset and the number of loans for each pair. There is no global definition of a "low-resourced" language, as this is task-dependent, but we have intentionally tried to represent languages that are not well-represented in large corpora like CC-100 (Conneau et al., 2020). We hereafter refer to language pairs using the format "borrower-donor," e.g., "Hindi-Persian" to refer to Hindi words borrowed from Persian. The directionality between the two languages is important to the pair definition, as only words loaned from the donor language to the borrower are properly considered loanwords. If the direction of the languages were flipped, not only would the class labels be different (the donor word loaned into the borrower would not be considered a loanword in the donor language), but while the phonetic and semantic similarities (Secs. 4.2 and 4.3) would probably be the same, the alignment score (Sec. 4.4) would not be, since the output label when training that network is the loanword status, which would be likewise flipped. We also scrape the Wikipedia page listing languages by writing system 4 , to include the script name for each language in our datasets. This allows us to filter out words not written in the typical script of the recipient language. For example, some Chinese "loanwords" from English are incorporated keeping the Latin script intact; we don't need machine learning to tell us that these are borrowed terms. Having script information also proves beneficial in later experiments (see Sec. 5).
We also collect all the available lemmas in the donor language, which we use later to calculate the closest phonetic neighbors for each loanword. We also collect homonyms for each loanword where available; homonyms are considered those words that have more than one etymology, where one is a loan from the relevant donor language 5 .
Using the Epitran package (Mortensen et al., 2018), we transliterate both loans and original words into the International Phonetic Alphabet (IPA). The Epitran package can be extended to support new languages, as we did here in the case of Finnish, using Omniglot 6 as a resource. Epitran is not a perfect mapping to real pronunciation, especially in the case of abjads such as Arabic script, a point of relevance later (Sec. 4.4, Sec. 7.1).
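A minimal sketch of this G2P step with Epitran; 'hin-Deva' and 'fra-Latn' are standard Epitran language-script codes, and the transcriptions shown in the comments are approximate:

```python
import epitran

epi_hi = epitran.Epitran("hin-Deva")   # Hindi in Devanagari
epi_fr = epitran.Epitran("fra-Latn")   # French in Latin script

print(epi_hi.transliterate("अगर"))      # roughly 'əɡər' (the Persian loan meaning "if")
print(epi_fr.transliterate("science"))  # IPA for French 'science'
```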
Having gathered positive examples of loanwords, we need to gather sufficient negative examples to both train an algorithm, and to try and fool the trained algorithm. Negative examples can be: • Synonyms: words with similar meaning to a loanword but pronounced differently, e.g., "driver" vs. chauffeur. • Hard negatives: closest phonetic neighbors to a loanword that have different meaning, e.g., "annex" vs. ânesse. • Randoms: random pairings where the two words have no discernible phonetic or semantic relationship.
To create the synonyms dataset, we take a list of 440 English words, each of which has multiple synonyms associated with it. With the Google Translate API, we translate the main word into one language from our current relevant pair, and each synonym into the other. We then construct word pairs in the donor and recipient language using the Cartesian product of each word with each translated synonym. We remove any duplicates, and any pairs that also occur in the loanword dataset, as we do not want true positives labeled as negatives when training the loanword detection model.
To create the hard negatives dataset, we use the PanPhon package (Mortensen et al., 2016) to compute six edit distances (see Sec. 4.2) between the IPA transcriptions of the gathered loanwords and up to 20,000 candidate lemmas of the donor language, which are also transliterated into the IPA using Epitran. The result is that each loanword is paired with up to six candidates that have a low phonetic edit distance but are not the original word in the donor language. We remove duplicates where multiple distance metrics chose the same closest neighbor, and pairs that co-occur with the synonyms or loans datasets.

5 One such example is Hindi agar (/əɡər/), which can be both a loan from Persian, meaning "if," and a descendant of a Sanskrit word referring to a type of wood. 6 https://www.omniglot.com/writing/finnish.htm
Finally in the randoms dataset, we pair each loan with a random word in the donor language.
Similarity Metrics
Every word pair in the WikLoW dataset has measures of textual, phonetic, semantic, and articulatory similarity associated with it.
Textual Similarity
This is simply the Levenshtein edit distance between the two strings. Where the two languages are written in different scripts, this is simply the maximum length of the strings, but in some cases a language written in the same script as the donor language may borrow a word and keep the spelling unchanged even if the pronunciation changes. A case in point is the word "science," a loan derived from French science, which is spelled identically but pronounced very differently (/ˈsaɪən(t)s/ vs. /sjɑ̃s/). Textual edit distance may be a useful feature for some language pairs, so we keep this metric, implemented as sketched below.
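For reference, a minimal implementation of the Levenshtein distance used for this feature; any standard library implementation would do equally well:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance with unit costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("science", "science"))   # 0: identical spelling across the pair
print(levenshtein("driver", "chauffeur"))  # large: synonyms, not loans
```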
Phonetic Similarity
Having created IPA transcriptions of the words, we compute 6 distance metrics over the transcriptions, all available from the PanPhon package: • Fast Levenshtein Distance. A C implementation of Levenshtein distance (Levenshtein et al., 1966). PanPhon sets all edit costs to 1.
• Dolgo Prime Distance. Based on the notion of the Dolgopolsky list of the 15 most stable lexemes (Dolgopolsky, 1986), but extended by PanPhon to a list of the 14 most stable phoneme classes.
Phonemes are mapped to these classes, over which Levenshtein distance is calculated.
• Feature Edit Distance. IPA is converted to articulatory feature vectors (e.g., storing the presence, absence, or irrelevance of articulatory features such as place/manner of articulation, roundedness, pulmonic quality, etc.). Levenshtein distance is calculated over the feature vectors. • Hamming Feature Distance. Same as Levenshtein distance, but with the substitution cost being the Hamming distance (Hamming, 1950) between the feature vectors, normalized by the length of the vector.
• Weighted Feature Distance. Accounts for the class of the IPA symbol when calculating the Levenshtein costs as well as the probability of that specific edit. Weights are prespecified by PanPhon.
• Partial Hamming Feature Distance. Insertion and deletion costs are 1, however the cost of substitution for a zero value is half the substitution cost for a nonzero value.
We use the PanPhon normalized version of all edit distances, which divides by the maximum length of the two words in the pair. Fig. 1 shows kernel density estimation plots of the distribution of Fast Levenshtein and Dolgo Prime distances over the entire dataset. Loans have the lowest distance on average, followed by hard negatives.
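A sketch of computing several of these distances with PanPhon's Distance class. The method names follow the PanPhon documentation but may differ across versions, so treat the exact names as an assumption:

```python
import panphon.distance

dst = panphon.distance.Distance()
loan, original = "koulu", "skuːlɑː"  # Finnish loan vs. Swedish source, IPA (simplified)

print(dst.fast_levenshtein_distance(loan, original))
print(dst.feature_edit_distance(loan, original))
print(dst.weighted_feature_edit_distance(loan, original))
```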
Semantic Similarity
A loanword between a pair of languages must both sound and mean the same. While phonetic similarity, calculated with edit distance, has been a foundation for past work in loanword detection, modern large language models provide an opportunity to select for semantic similarity between word vectors, provided the models are trained over multilingual data. We make use of the simultaneous multilingual training objectives of MBERT (Devlin et al., 2019) and XLM (Conneau et al., 2020) to benefit from cross-language proximity of contextualized word embeddings, as shown in (Cao et al., 2019). We use the cosine function as our vector similarity measure. MBERT is the multilingual version of BERT, pretrained on 104 languages, with demonstrated capacity for knowledge transfer on downstream tasks. It differs from BERT in two ways: i) in its masked language modeling pretraining, each batch comprises sentences from all languages, and ii) its dictionary is shared among all languages and is created by WordPiece from concatenating all corpora. Pires et al. (2019) show that MBERT's ability to transfer is due to a multilingual representation, which enables it to manage transfer across different scripts. These representations seem to share a common subspace that contains linguistic information, independent of specific languages.
XLM-100 is a cross-lingual (100-language) pretrained model which extends previous BERT-based models with a Translation Language Modeling (TLM) objective as well as the masked language and causal language modeling objectives, and has demonstrated success in unsupervised machine translation tasks (Conneau and Lample, 2019). XLM uses byte-pair encoding subword tokenization (Sennrich et al., 2016) which includes the most frequent symbol pairs when creating the token vocabulary. This makes it suitable for encoding tokens common in low-resourced languages (LRL) while alleviating bias towards high-resource languages, by reducing tokenization of LRL words at the character level. This improves the alignment of embedding spaces of languages that share either the same alphabet or proper nouns (Smith et al., 2017), both of which occur frequently among loanwords.
To these models, we input a "sentence" consisting of the word preceded by the [CLS] or <bos> token and followed by the [SEP]/<eos> token. We retrieve the vector of the [CLS]/<bos> token as a representation of the entire semantics of the input, to account for tokenization possibly splitting the word.
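A minimal sketch of this feature using the public MBERT checkpoint via Hugging Face Transformers; the paper's exact preprocessing may differ:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def cls_vector(word: str) -> torch.Tensor:
    """Encode a one-word 'sentence' and return the [CLS] vector."""
    enc = tok(word, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return out.last_hidden_state[0, 0]

sim = torch.cosine_similarity(cls_vector("driver"), cls_vector("chauffeur"), dim=0)
print(float(sim))
```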
Alignment Network
To account for different phonotactics in paired languages (e.g., Swedish /skuːlɑː/ → Finnish /koulu/), we build a model to align phonemes in a word pair and account for epenthesis, elision, and metathesis, which provides a more informative measure than edit distance alone. Mortensen et al. (2016) show that information-rich phonological representations do better than character-based models or one-hot encodings in tasks such as NER.
We convert the IPA transcriptions to 21 subsegmental articulatory features using PanPhon 7 . These features were padded to the maximum length of a vector in the borrower-donor pair. The features for the loanword and original word were then concatenated for input to the alignment network.
The alignment network is a deep feedforward neural network trained on the aforementioned concatenated features of the alldata split of our datasets. The network was trained against the loan/non-loan binary label. This is not to predict loan status, but because we do not include any semantic information at this step, the label acts as an indicator of "phonetically aligned" or not. A positive prediction means the model predicts that the two words in the pair are strongly phonetically aligned according to the articulatory features. During inference, we get the pre-sigmoid logit value as a holistic alignment score between the two words.
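A minimal PyTorch sketch of the alignment network; the layer sizes follow Appendix A.3, while the padded segment length and the batch are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AlignmentNet(nn.Module):
    """Feedforward net over concatenated, padded articulatory feature vectors.
    Trained with BCE against the loan/non-loan label; at inference the raw
    pre-sigmoid logit serves as the alignment score."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(512, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x).squeeze(-1)  # raw logit = alignment score

# 21 articulatory features per segment; a padded length of 12 segments per word
# and two words per pair is an assumption for illustration.
net = AlignmentNet(in_dim=21 * 12 * 2)
x = torch.randn(8, 21 * 12 * 2)        # dummy batch of concatenated features
y = torch.randint(0, 2, (8,)).float()  # loan / non-loan labels
loss = nn.BCEWithLogitsLoss()(net(x), y)
loss.backward()
```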
Evaluation
For evaluation, we create three data distributions for each language pair. One (the balanced distribution) contains half loanwords and half non-loans. This is a well-behaved distribution well-suited for machine learning. The non-loans are drawn roughly 1/7 from the hard negatives, 4/7 from the synonyms, and 2/7 from the randoms, reflecting the notion that relatively few words in a language are likely to be very phonetically close to a loanword on average, while there are likely to be many more words of synonymous or similar meaning.
Another distribution attempts to approximate the actual proportion of loanwords from the donor language in the recipient language (the "realistic" distribution, or realdist). Sometimes this proportion is well-documented, and at other times not. 8 Where a figure is provided in the linguistic literature, we use it. Otherwise, we take the number of loanwords we collected from Wiktionary and divide it by the total number of lemmas in the borrowing language, imposing a lower bound of 10% to maintain enough loanwords in the testing set. The non-loans portion of the realdist set is drawn in the same proportions as in the balanced set. For all language pairs currently in the WikLoW dataset, the realdist contains <50% loanwords, but for other language pairs, e.g., Korean-Chinese, >50% loanwords is certainly possible or likely (Sohn, 2005).
The final distribution (abbreviated alldata), takes all the data we collected from Wiktionary, to purposely overweight the dataset against loanwords, to test our method in a difficult condition.
To each distribution, we concatenate two one-hot vectors representing the scripts of the languages in the pair. This allows certain models to learn dependencies between the scripts and other variables, e.g., if the languages are written in different scripts, the textual Levenshtein distance becomes nearly meaningless.
Each distribution was divided into a 90:10 train/test split and then shuffled. We evaluate four different binary classifiers on all distributions: a logistic regressor (LR), a linear SVM, a Random Forest (RF), and a deep neural network (NN). The neural network consists of 3 layers of 512, 256, and 128 hidden units respectively, all with ReLU activation and each followed by 10% dropout, with a final sigmoid activation; it is trained for 5,000 epochs with Adam optimization and BCE loss (a sketch follows below). We perform the evaluations listed below.
Single Multilingual Model (SMM): For each data distribution, we train a single model on the data from every language pair listed in Table 1 except Persian-Arabic, Hungarian-German, German-Italian, and Catalan-Arabic, which we reserve for subsequent experiments. The single multilingual model is evaluated on the unseen test sets for all language pairs used in training.
Pair-Specific Models: For each distribution, we train and evaluate on a single language pair only, so we can compare the performance of the SMM to models specialized for each language pair.
Pruned Training Set: We train on the realdist train set and evaluate on the alldata test set. This allows us to test on a much larger test set that contains a lower proportion of loanwords, and to test the ability of our model to pick out loanwords from a more challenging distribution with less training data. The realdist train set is pruned of word pairs that appear in the alldata test set, since the two distributions were originally created separately. This experiment used the neural network classifier only.
Unseen Language Pairs: We evaluate the performance of the SMM on Persian-Arabic, Hungarian-German, German-Italian, and Catalan-Arabic, which the model has never seen. This experiment used the neural network classifier only.
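As referenced above, a minimal PyTorch sketch of the neural-network classifier; the input dimension (similarity features plus script one-hots) is an illustrative assumption:

```python
import torch
import torch.nn as nn

def make_classifier(in_dim: int) -> nn.Sequential:
    """512/256/128 hidden units, ReLU, 10% dropout, sigmoid output."""
    return nn.Sequential(
        nn.Linear(in_dim, 512), nn.ReLU(), nn.Dropout(0.1),
        nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.1),
        nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.1),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

clf = make_classifier(in_dim=12)  # e.g., distances + similarities + script one-hots
opt = torch.optim.Adam(clf.parameters())
loss_fn = nn.BCELoss()
x, y = torch.randn(32, 12), torch.randint(0, 2, (32, 1)).float()  # dummy batch
for _ in range(5):                # the paper trains for 5,000 epochs
    opt.zero_grad()
    loss_fn(clf(x), y).backward()
    opt.step()
```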
Results
Our primary metrics are precision, recall, and F1 score on positive loanword identification. Table 2 shows the average positive F1 score on the realdist distribution for the 4 classifiers we evaluated:

Classifier: LR NN SVM RF
F1 (+):     85 86  84  85

The remaining tables and figures all focus on the results of the neural network, are sorted by decreasing number of loanwords in the language pair, and are discussed in Sec. 7. Table 3 presents the SMM results. Fig. 2 shows the alldata test results from Table 3 in bar graph form, compared to the performance of the loanword detection model on each language pair when trained only on data from that language pair, and to the model when trained on the smaller pruned realdist training data. Table 4 shows the SMM's performance on the unseen language pairs, and Fig. 3 plots F1 score against the number of loanwords in each pair's test set.
Discussion
We can quantitatively compare our approach to that of Mi et al. (2021), who report 75.35% average precision, 74.09% average recall, and 74.71% average F1 on loanword detection in Uyghur on borrowings from Russian, Arabic, Turkish, and Chinese. Our results are on different language pairs but are comparable to or exceed this, particularly if the testing set is balanced between loans and non-loans. In Fig. 2, we can see that in most cases the multilingual model outperforms the single-pair models on the same language pair on loanword retrieval, though this effect is most pronounced in language pairs with a higher density of loanwords. The model trained on the smaller pruned realdist data sees an appreciable drop in precision, but an equal or greater increase in loanword recall, and this effect is especially pronounced in pairs with fewer loanwords in the data overall, suggesting that training on a more realistic distribution may be advantageous when the priority is reducing false negatives. Fig. 3 shows the correlation between test set size and performance of the SMM (including unseen language pairs). There appears to be a strong correlation with the proportion of loanwords in a test set (as expected, a balanced set leads to optimal performance), but also with the raw size of the test set itself. The model performs better on larger test sets, unseen or not, regardless of what data it was trained on. We speculate that this may be because when a borrowing language borrows a lot of words from a donor language, it does so at around the same time (e.g., English from Norman French), meaning there are consistent transformations applied, which a network can pick up. This may not be the case in language pairs with a sparser density. Catalan-Arabic performance is particularly low, and there are only 10 words in the test set, many of which were likely mediated by Spanish first.

[Table 3 fragment: columns all, en-fr, en-de, id-nl, pl-fr, ro-fr, kk-ru, ro-hu, de-fr, hi-fa, fi-sv, az-ar, zh-en; row P (+): 92 96 90 96 90 94 93 88 94 94 85 85 81; two further unlabeled rows: 98 97 98 99 97 96 98 99 98 97 98 98 98 and 83 89 84 85 82 82 86 76 86 81 78 69.]
Error Analysis
Mistakes made by the SMM, particularly on language pairs where it performs less well, are illuminating. Finnish-Swedish false negatives, e.g., kyökki/kök and rontti/strunt, suggest that the additional final vowels Finnish adds to borrowed words can obscure the phonetic match. False positives are overwhelmingly hard negatives, and the model has particular trouble with languages that use abugidas, or with alphabets borrowing from languages that use abjads, due to the lack of written vowels. Examples include Hindi-Persian nisār/nasr and Azerbaijani-Arabic rəbb/rabbaba. This can largely be attributed to Epitran not inserting vowels into Perso-Arabic transcriptions.
This suggests one clear way to potentially improve our method: incorporating multi-head attention into the phonemic alignment network rather than the current feedforward structure, which is performing the task the way single-head attention would and then averaging over all alignments.
Cognates are excluded from the positive loans data unless the cognate was actually later borrowed into the recipient language, as sometimes happens (e.g. "chef" vs. "head"). It is rare for cognates to be misclassified as loanwords due to intervening sound changes between two languages with common ancestry, but there are cases where a loanword is paired with a word in the source language that is cognate to it but is not the original borrowed word. Table 5 shows some of these rare cases.
Influence of Features
Neural networks are difficult to interpret, but the weights of the logistic regression classifier, which on average performed ∼1-3% lower than the neural network, give a sense of which features are important. Overall, the alignment score is a strong positive correlate of loanword status across all language pairs. As expected, Levenshtein textual edit distance is inversely correlated with loanword status in pairs that share the same script, but not when the languages use different scripts. Interestingly, the semantic similarity metrics do not have a lot of influence on the model, but XLM is generally more influential than MBERT, and this influence is more pronounced among the lower-resourced languages (e.g., Kazakh-Russian, Hindi-Persian, Azerbaijani-Arabic), which supports XLM's claim to be more suited to LRLs; however, the influence is most pronounced on English-French, the highest-resourced language pair currently in WikLoW, which undercuts the claim somewhat. Since loanwords are expected to be semantically similar, this task allows us to investigate the quality of multilingual language models on different language pairs. These findings are also borne out by ablation tests on the neural network classifier. For instance, dropping the alignment score and semantic similarities causes recall on the different-script pairs (Hindi-Persian, Azerbaijani-Arabic, Mandarin-English) to drop by 20% or more, while not affecting the same-script pairs as significantly. Sec. A.5 in the appendix shows these findings in more detail.
Human Comparison
To compare the performance of our model to human performance on loanword retrieval, we selected three language pairs, English-French, Hindi-Persian, and Mandarin-English, took the list of loanwords from the test set of the alldata distribution, and asked N annotators who were fluent speakers of each borrowing language to mark which in the list they thought were loans from the listed donor language. This was a fast way to assess human loanword recall and provide comparative numbers to our system on these language pairs. Our system is able to significantly exceed human recall on English-French and Hindi-Persian, but not on Chinese-English (as noted those numbers may be inflated). Some loans were also homonyms, which may have had a small impact on human recall (see supplement). We also calculated Fleiss' kappa (Fleiss, 1971) over the human annotations and found that even when individual humans demonstrated moderate-to-high recall on loanword retrieval, there was virtually no agreement among annotators on which loanwords they identified.
Conclusions and Future Work
Automated loanword detection enables a number of downstream tasks. Coreferents and named entities across languages may often be loanwords, and common vocabulary enables potential improvements in machine translation (Ortega et al., 2021).
Parallel corpora of loanwords also afford learning cross-lingual contextual word embedding mappings, inspired by the success of pre-Transformer embedding mappings (Bojanowski et al., 2016) and the potential of post-Transformer alignments (Cao et al., 2019). These can be incorporated into the Transformer architecture to provide auxiliary signals that enhance translation in two ways: i) introducing another multi-head attention between the input language embeddings and their mappings in the target language space, similar to the second multi-head attention block in the original Transformer architecture (Vaswani et al., 2017). We propose to map embeddings between a source language L_X and target language L_Y by computing a transformation matrix between paired representations of semantically equivalent words or sentences, then to compute attention weights between these mapped embeddings, and to concatenate these auxiliary attention outputs with the attention between tokens from L_X and already-generated tokens from L_Y; ii) unmasking identified loanwords in the target language in the decoder's input, which is expected to provide further context to the decoder in the target language. This would replicate a uniquely human linguistic capability: the ability to pick up context in an unfamiliar language by picking out known words (i.e., loans from a known language). Fig. 4 shows a proposed architecture for these operations. Mapping between embedding spaces also allows expanding our method and dataset to new languages not covered by MBERT or XLM through resources like IndicBERT (Kakwani et al., 2020).
Why Study Loanwords?
In keeping with the COLING 2022 special theme, "Tackling the Grand Challenges of the world by promoting mutual understanding through language," we posit that common vocabulary decreases barriers to communication, and representing it offers a particular benefit to LRLs in NLP, by providing a way to leverage resources from higher-resourced languages that have contributed vocabulary to an LRL. In this, Wiktionary itself has been and can continue to be a resource (Zesch et al., 2008; Krizhanovsky and Smirnov, 2013; De Melo, 2015; Wu and Yarowsky, 2020). Loanword detection is also necessarily not language agnostic, and is therefore important for linguistic diversity and inclusion in NLP (Joshi et al., 2020), although our multilingual results suggest that there may be key features of loanwords that allow detection to generalize.
We propose these challenges to the community:
1. We have presented a novel baseline for loanword detection across arbitrary language pairs that delivers high-quality results, but challenges remain, particularly for languages with divergent phonotactics.
2. We have also presented a method to gather more data for new languages, and demonstrated our detection method's performance on unseen language pairs, which we present as a baseline for comparison.
3. We have also provided homonym data, which is tailor-made to confound a loanword detection algorithm. Discriminating loanwords from their homonyms remains a challenge that presents many interesting opportunities in areas like machine translation and comparative and corpus linguistics.
A.1 Further Details on Data Collection
We use the MediaWiki API to conduct our data collection. To maintain adherence to Wiktionary's terms of service, we make no more than 200 requests per second and sleep after a specified number of words are processed (by default, 200). When conducting the initial data collection, we exclude terms that begin or end with hyphens, as those are likely to be affixes; that are only one letter long, as those are likely to contribute too much noise to the final dataset; and those that contain numerals or non-phonetic, non-syllabic, or nonlogographic (depending on the language) symbols.
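A minimal sketch of this collection step against the MediaWiki API. The category name is a real Wiktionary category; error handling and the full politeness policy described above are omitted:

```python
import time
import requests

API = "https://en.wiktionary.org/w/api.php"

def category_members(category: str, pause: float = 0.1):
    """Yield page records from a Wiktionary category, following continuation."""
    params = {
        "action": "query", "list": "categorymembers", "format": "json",
        "cmtitle": f"Category:{category}", "cmlimit": 500,
    }
    while True:
        data = requests.get(API, params=params).json()
        yield from data["query"]["categorymembers"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # resume where the previous page ended
        time.sleep(pause)                # stay polite to the API

for page in category_members("English terms borrowed from French"):
    print(page["title"])
```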
The choice of language pairs investigated here was determined in part by the intersection of languages that are supported by all 3 of Epitran, MBERT, and XLM-100, and that have a [Recipient]_terms_borrowed_from_ [Donor] category on Wiktionary that contains more than 1,000 entries. The exceptions to this are: Finnish-Swedish, where Finnish is not natively supported by Epitran, but we built our own Finnish G2P mapping for Epitran; Mandarin-English, where some terms were discarded during preprocessing, causing the number to fall below 1,000; and Hungarian-German, German-Italian, and Catalan-Arabic, which were selected specifically for having fewer than 1,000 loanwords listed in Wiktionary. Table 7 shows the 2-letter ISO 639-1 codes for these languages, which can help in interpreting Table 3 (Sec. 6).
A.2 Further Details on Semantic Similarity
In our experiments, for the XLM-100 and MBERT models, we extract the <bos> embeddings (equivalent to the [CLS] token for MBERT) for a word pair from the last_hidden_state. Numerous studies (Jawahar et al., 2019; Tenney et al., 2019) suggest that BERT's later layers encode comparatively more high-level semantic information than its middle layers, which tend to capture more syntactic features in the linguistic hierarchy. For both models, the generated embeddings have the shape (batch_size, sequence_length, hidden_size), where batch_size is 8 for both, sequence_length is the number of tokens from the word after tokenization (max_length is 512 for both models), and the embedding dimension, i.e., hidden_size, is 1280 for XLM and 768 for MBERT. We then compute the cosine similarities between the generated embeddings of each word pair of the borrower-donor pair in order to extract their semantic similarities.
A.3 Further Details on Alignment Network
The alignment network was trained for 5,000 epochs with Binary Cross-Entropy (BCE) loss and Adam optimization, with a 20 percent validation set to prevent overfitting. The DNN consists of two hidden layers with 512 neurons each with ReLU activation, followed by 10% dropout, and an output layer with a sigmoid function. Previous studies, such as Wu and Klabjan (2021), have suggested that the logit outputs of neural networks can be a reliable and agnostic uncertainty measure that captures innate features of classes during classification and detection tasks. The alignment network here maps the concatenated articulatory features of a word pair to their class, and therefore the logits will contain class-based information that can subsequently be used as crucial features for our classifiers. In other words, these logits encode alignment information over the articulatory features that can be mapped to whether a pair is phonetically similar, conditioned upon the sound patterns of their respective languages, or not.
A.4 Results from Other Classifiers
The main paper presented the results of the neural network classifier in detail and discussion of the weights from the logistic regressor. Here we present results from the logistic regression classifier (Table 8), the support vector machine (Table 9), and the random forest (Table 10).
The neural network is consistently the best-performing classifier, by about 1-5% F1, depending on which distribution is being evaluated. The other classifiers can be expected to perform about that much lower. One thing to note is that the effect is most pronounced on the alldata dataset, which is the hardest dataset for any classifier on average, due to the overwhelming preponderance of non-loans. When the dataset is balanced between loans and non-loans, the type of classifier chosen for loanword detection is almost immaterial, with nearly perfect performance all around. It seems that at these proportions, the information encoded in the datasets, such as alignment score, edit distances, and cosine similarities, is informative enough. For this reason we have focused most discussion in the main body of the paper on the alldata and realdist datasets.
However, while the behaviors of the logistic regressor and the SVM are largely consistent with each other, and track that 1-5% difference with the neural network across all language pairs, the behavior of the random forest is rather different and inconsistent with the other classifiers. For example, it achieves 100% recall on the balanced distributions of Indonesian-Dutch and Romanian-French (as well as Kazakh-Russian, like the other classifiers), but on the Chinese-English alldata distribution, its recall comes in ∼20% below the other classifiers. The other pairs with dissimilar scripts show a similar, albeit reduced, effect on the same distribution, but so do some pairs that share a script, such as Indonesian-Dutch and Romanian-French.
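For reference, a sketch of the classifier comparison is given below, using scikit-learn defaults on synthetic stand-in data; in the real pipeline X would hold the alignment scores, edit distances, cosine similarities, and script encodings, and y the loan/non-loan labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((2000, 6))                                  # stand-in feature matrix
y = (X[:, 0] + 0.2 * rng.random(2000) > 0.6).astype(int)   # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for clf in (LogisticRegression(max_iter=1000), SVC(), RandomForestClassifier()):
    clf.fit(X_tr, y_tr)
    print(f"{type(clf).__name__}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```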
A.5 Further Details on Influence of Features
This section contains the quantitative breakdown of the influence of different features on the results, which was discussed in Sec. 7.2. Fig. 5 is a graphical representation of the logistic regressor weights mentioned there. The circular markers represent language pairs where both languages use the same script (including extended versions), while the square markers represent pairs where the languages use different scripts. Inferences drawn from the logistic regressor weights are bolstered by ablation tests on the neural network. Table 11 shows the neural network performance when the alignment scores and cosine similarities are not used as input features.
Articulatory alignment scores and cosine similarities are most important when the languages in the pair use different scripts. When these are removed as training inputs, and only phonetic and textual distance metrics are left, along with the script encodings, performance on the Azerbaijani-Arabic alldata distribution drops by 10% positive F1, and Hindi-Persian drops by 20% positive F1. The most drastic case is Mandarin-English, where without these features, positive F1 on realdist and alldata drops by 19% and 47%, respectively, and positive recall drops by 20% and 42%, respectively. This is because the different scripts make textual Levenshtein distance a useless feature here, and the differing phonologies of Mandarin and English make the phonetic edit distances noisy (e.g., see Sec. 7.1). Meanwhile, on certain same-script pairs, particularly those where words tend to be imported with little change in spelling (e.g., English-French, English-German, German-French), performance can actually go up slightly, because in these cases textual Levenshtein distance is enough to detect that the word is a loan.
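To make the edit-distance feature concrete, here is a tiny dynamic-programming Levenshtein implementation; the example pairs are illustrative and show why textual distance is uninformative across scripts.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

# Same-script loan: small textual distance is a strong signal.
print(levenshtein("ballet", "ballet"))   # 0
# Different scripts: textual distance degenerates to the word lengths.
print(levenshtein("芭蕾", "ballet"))     # 6
```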
We should note that with only phonetic and script features, performance on the balanced distribution remains relatively high but suffers slightly. However, results vary on the realdist distribution, and there appears to be some correlation between increased performance on realdist without these features and the proportion of loans in that distribution, suggesting that this proportion is potentially important to consider (i.e., the base rate of loans from French into English, for instance, is relatively high). The performance penalty we see on LRLs and different-script pairs does suggest that, overall, the alignment score is most critical to generalizable performance, and the semantic similarities provide a way to analyze the quality of large multilingual language models for certain language pairs. These could also be augmented with other pair-specific metrics, such as overall measures of lexical or phonetic distance.
A.6 Homonyms in Human Comparison Task
The loanwords from the alldata test sets given to human annotators that are also homonyms are listed below:

• English-French:
  - "punt," from French pointe, meaning a bet or wager, with many other etymologies, including from Old English for a pontoon boat.
  - "Lemans," French surname from the toponym Le Mans, and from Middle English Lemans, "son of Leman."
  - "bride," from French bride, meaning a bridle, and from Old English brȳd, "bride, daughter-in-law."
  - "paillard," from a French surname (and name of a restaurant), and a variant of "palliard," meaning a beggar.
  - "lisse," from French lisser, to smooth, and from Old English lissīan, "to relieve."
  - "tarse," from French tarse, the tarsus or ankle-bones, and from an archaic term for a male falcon.
  - "par," from French par, meaning "through, by," with many other etymologies, including from Latin pār, "equal."
  - "bombard," actually a doublet, with two meanings both meaning "cannon," both ultimately from Middle French, one via modern French bombarde, the other via Middle English bombard (the latter form also referred to a bassoon).
• Hindi-Persian: agar, from Persian, meaning "if," and a descendant of Sanskrit agaru, a type of wood.
A.8 Supported Languages and Scripts
Our system can in principle support the languages in Table 13 out of the box. While we have only tested on the language pairs mentioned in the main paper, and not every pairing in Table 13 has a sufficient volume of loanwords listed in Wiktionary, data collected in any of these languages can be converted to IPA with Epitran or its extensions, and processed by MBERT and XLM to get cosine similarities between word vectors. Epitran can be extended to other languages by defining custom mapping, preprocessing, and postprocessing rules, as we did here for Finnish. Proper functionality assumes that each language is written in the associated script listed. This assumption serves not only to maintain support in Epitran, but also to collect clean data from Wiktionary and to assign the correct one-hot script encoding during training and evaluation.
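The G2P conversion step itself is short; below is a minimal sketch assuming the epitran Python package, with Romanian-French as an illustrative pair (language codes follow Epitran's lang-Script convention).

```python
import epitran

epi_recipient = epitran.Epitran("ron-Latn")   # Romanian (recipient language)
epi_donor = epitran.Epitran("fra-Latn")       # French (donor language)

pair = ("șofer", "chauffeur")                 # illustrative loan pair
ipa_pair = (epi_recipient.transliterate(pair[0]),
            epi_donor.transliterate(pair[1]))
print(ipa_pair)                               # IPA strings for downstream features
```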
A.9 Organization of Code/Data

README.md contains instructions to run the full pipeline. language-pairs.json is a JSON file containing information about the language pairs to make datasets for, including codes for Epitran and Google Translate and the desired realdist proportion of loans. language-pairs-holdout.json is the same for language pairs to be included in the holdout test set and withheld from training. language-pairs-pipelinetest.json contains only Catalan-Arabic, which is a small sample and runs (relatively) quickly, in order to validate the pipeline. These JSON files drive most of the rest of the code.
supported_languages.txt contains the list of supported languages (cf. Table 13). epitran-extensions contains preprocessing, mapping, and postprocessing rules for new Epitran languages. Currently this contains only Finnish, which uses only the pre and map rules. To run Epitran for a new language, these would need to be moved into the corresponding folders in the Epitran distribution.
Engineering nanoparticle synthesis using microbial factories
Biologically engineered entities have enabled discoveries in the past decade and a half, spanning from novel routes for the syntheses of drugs and value-added products to carbon capture. Precise cellular re-programming has extended to the production of nanomaterials owing to their ever-growing demand. The primary advantage of biological nanoparticle synthesis is its eco-friendly approach, performed at ambient temperature and pressure, which eliminates the use of harsh chemical stabilisers and capping agents and thereby simplifies handling and downstream processing. Although these techniques hold great promise, many shortcomings hamper their scalability, rendering them unsuitable for industrial applications. A fundamental understanding of the underlying mechanisms, which involve various enzymes of different metabolic pathways, is most crucial in surmounting these impending blocks and in arriving at successfully engineered systems which can be tuned in accordance with the goals of specific applications. This mini review highlights the recent developments in nanoparticle synthesis that employ microbial reaction vessels, with specific emphasis on the engineering of biological entities such as bacteria, yeast, fungi and algae. Also presented are the challenges and future trends in this domain, where novel and engineered approaches will be the most consequential.
Introduction
Nanoparticles, the building blocks of nanotechnology, are particles with at least one dimension of less than or equal to 100 nm. Nanoparticles have potential applications in diverse fields that include targeted drug delivery vehicles, gene therapy and cancer treatment. Other applications such as antibacterial agents, DNA analysis, biosensors, separation science, magnetic resonance imaging (MRI) and nanogenerators, have been explored [1].
Although the size and shape of these particles might appear as merely physical attributes, their complex effects on chemical properties and physical interactions are important in maintaining their functionality. For example, gold nanoparticles are reactive when they are <10 nm in size, while particles with a small radius of curvature and angular shapes have improved catalytic properties [2,3]. Therefore, it is essential to synthesise shape- and size-controlled nanoparticles. However, this is met with multiple challenges. Nanoparticle synthesis is primarily categorised as either chemical or biological. Atomistic, molecular and particulate processing, either in vacuum or in a liquid medium, are applied in various chemical synthesis methods [4]. Many extensively used physical and chemical methods result in monodispersed nanoparticles, but they require harsh chemical stabilisers and capping agents. These techniques are capital intensive and inefficient in energy usage [5]. To lower the high surface energy leading to thermodynamic instability, nanoparticles stabilise themselves either by lowering the surface area through agglomeration or by sorption of surrounding molecules. To kinetically stabilise nanoparticles, stabilisers or capping agents such as surfactants, polymers, small ligands, cyclodextrins and polysaccharides are used [6]. The presence of capping agents may affect the functionalities of the nanoparticles, and additional steps such as solvent washing, thermal annealing and UV-ozone irradiation are required to remove the stabilisers, which could in turn alter the nanoparticle shape, size and stability [7,8]. Such synthesis procedures, which use non-polar solvents and leave toxic chemicals on the surface of the nanoparticles, limit their usage in clinical fields. Hence, there exists a requirement for bio-compatible, clean, non-toxic and 'green' methods of producing monodispersed, size-controlled nanoparticles.
Many microbes are naturally capable of producing nanoparticles either intracellularly or extracellularly when challenged with metal salts. The availability of various biotechnological tools, such as genetic and protein engineering, systems and synthetic biology, encourages the use of microbial systems to fine tune nanoparticle synthesis. This mini review highlights recent developments in the field of nanoparticle production by microbes (e.g. bacteria, cyanobacteria and yeast) and microbe-derived scaffolds, and identifies some challenges associated with the approach. There have been numerous reports on microbial synthesis of nanoparticles but very few studies have contributed to understanding the process and the mechanism of the syntheses. The knowledge is important to overcome the synthesis bottlenecks and for the systematic engineering of microbes. The last part of the review sheds light on the future directions and highlights the importance of understanding the synthesis mechanisms for controlled production of nanoparticles.
Microbial systems can take up metal ions and mediate their conversion into metal nanoparticles with controlled size and morphology [27,28]. The main advantage of this approach is that it does not require any potentially toxic chemicals, and the reactions can occur at ambient temperature and pressure.
The biological process of nanoparticle formation in microorganisms can occur in two ways: intracellular (non-templated or templated) and extracellular (in the culture broth or adhered to the membrane), as demonstrated in Fig. 2. In intracellular production, the cell culture is challenged with a metal salt solution; the metal ions are transported across the cell membrane and nanoparticle formation occurs within the cell. Subsequently, the nanoparticles are recovered by lysing the cells and purified. In contrast, during extracellular synthesis the added metal salts are converted to nanoparticles either on the cell membrane or in the culture broth. Recovery of extracellularly produced nanoparticles is expected to involve fewer downstream processing steps.
Bacterial production
Bacteria, owing to the ease of their manipulation and the availability of well-established genetic tools, are of particular importance in the field of nanoparticle synthesis. Easy maintenance and fast growth are added advantages. The first genetic engineering of bacteria for nanoparticle synthesis was reported by Chen et al. [16]. Genetically engineered Escherichia coli JM109 expressing the phytochelatin synthase gene from Schizosaccharomyces pombe (SpPCS) along with a modified γ-glutamylcysteine synthetase (GSHI*) was used as a synthetic host to produce cadmium sulphide (CdS) nanocrystals. GSHI* catalysed the synthesis of glutathione (GSH), the precursor for phytochelatin, which in turn enhanced the production of phytochelatin, the capping agent for the CdS nanocrystals. The strategy was adapted from the natural defence mechanism of S. pombe. Further improvement showed that the strategy is extendable to another strain, E. coli R189, resulting in the synthesis of uniform CdS quantum dot nanocrystals (3-4 nm).

[Fig. 2: Different modes of nanoparticle biosynthesis by microorganisms: (a) CdS quantum dots produced in genetically engineered E. coli [29]; (b) gold nanoparticles produced on the membrane of Synechocystis sp. PCC 6803 [30]; (c) silver nanoparticles produced by P. aeruginosa SM1 [31]; (d) silver nanoparticles on plasmid scaffolds [19]; (e) magnetite nanoparticles produced in Rhodospirillum rubrum [32]. Parts of the figure are reproduced with permission.]
In another study, E. coli was transformed with plasmids containing a gene encoding a foreign CdS-binding histidine-rich peptide (CDS7) [29]. High-resolution transmission electron microscopy, X-ray diffraction, luminescence spectroscopy and energy-dispersive X-ray spectroscopy were used to characterise the quantum dots and showed that the average particle diameter was 6 nm. Various alkaline earth, semiconducting, magnetic and noble metal nanoparticles have been synthesised using recombinant E. coli expressing phytochelatin from Arabidopsis thaliana and metallothionein from Pseudomonas putida [33]. Some extremophiles, such as Antarctic bacteria, have natural resistance and tolerance towards cadmium and telluride, and have hence been exploited for the synthesis of the respective fluorescent nanoparticles [34]. However, the details of the mechanisms and the molecular machineries have not yet been elucidated.
Silver nanoparticles produced from Rhodobacter sphaeroides are spherical and relatively monodispersed, with an average size of 9.56 ± 0.32 nm. The 6-h production time is an improvement compared to the few days in previous studies, making it a rapid method to produce silver nanoparticles in vivo [35]. Some silver-resistant E. coli strains contain the CusCFBA silver/copper system, which helps in the accumulation of silver nanoparticles in the periplasm [36]. Lin et al. have exploited such a strain for the anaerobic production of silver nanoparticles in the periplasm [37]. This process employs oxidised metal ions as electron acceptors, resulting in the generation of reduced metal nanoparticles with the help of multi-heme cytochrome c [38,39]. A nitrate reductase (NapC) mutant of this strain ceased to produce silver nanoparticles, establishing the role of cytochrome c in the synthesis of silver nanoparticles. This study attempts to shed light on the mechanism of nanoparticle synthesis at the protein and metabolism levels, which deepens our understanding of the process (Fig. 3).
Many studies have utilised bacteria to produce various kinds of nanoparticles, but only a few have focused on understanding the synthesis mechanisms. Despite the exciting findings from these studies, limited information is available on the factors responsible for the synthesis. Understanding these systems is particularly important for the controlled synthesis of unconventional nanoparticles, such as ruthenium and rhodium, and for further extrapolation into synthesis in easy-to-handle bacteria, such as E. coli. Pseudomonas aeruginosa SM1 was used for the intracellular production of lithium and cobalt nanoparticles [31]. The production was achieved without the addition of growth media, stabilisers, electron donors or pH adjustments, while being performed at room temperature. The same strain was also shown to produce extracellular silver, palladium, iron, rhodium, nickel, ruthenium and platinum nanoparticles [31]. It is to be noted that this is the first report of rhodium and ruthenium nanoparticle production by a living organism and of nickel nanoparticle synthesis in bacteria.
Morganella morganii, a silver-resistant bacterium, was the first synthetic host reported to produce copper nanoparticles in the aqueous phase [40]. Production of homogeneous copper nanoparticles in the aqueous phase is challenging due to the formation of copper oxide on the surface. The silver-resistant bacterium contains proteins that are similar to copper-binding proteins; hence, it was envisaged that M. morganii is a suitable candidate for pure metallic copper nanoparticle synthesis. The resulting copper nanoparticles are 19 nm in size and devoid of copper oxide.
To add further control to the biosynthesis of nanoparticles, biological templates have been explored to give better control over the shape and size distribution. Biological systems have a natural ability for the controlled deposition and structuring of inorganic materials, which has given rise to biomimetic approaches to synthesise inorganic nanomaterials. Protein shells with hollow central cavities, such as cowpea chlorotic mottle virus capsids, ferritin and ferritin-like proteins, serve as size-constrained reaction vessels for the synthesis of inorganic materials with controlled dimensions [41]. In addition, the protein surface provides a platform for surface modifications. An example of templated nanoparticle synthesis is the synthesis of 1D C-doped Fe3O4 nanoparticles inside self-assembled magnetosomes in the magnetotactic bacterium Magnetospirillum gryphiswaldense [42]. Characterisation of the nanoparticles using field emission scanning electron microscopy and transmission electron microscopy revealed nanoparticles of 50 nm size assembled into 1-2 µm long chains. Attempts to transfer the magnetosome biomineralisation pathway from the magnetotactic bacterium Magnetospirillum gryphiswaldense for heterologous expression into another synthetic host, Rhodospirillum rubrum, were successful by incorporating the mamAB, mamGFDC, mamXY and mms6 genes. The resulting magnetite nanoparticles were 24 nm in size and surrounded by a protein shell [32]. The role of the mamO gene in the formation of magnetic nanoparticles in magnetotactic bacteria was recently established, overturning its long-believed putative function as a serine protease [43]. X-ray crystallography and genetic analysis identified that the degenerate active site of the protein abolishes its protease activity. Also, a surface-exposed di-histidine motif confers the metal-binding capability responsible for the initiation of biomineralisation in vivo. A recent report suggests using the mms6 gene as a reporter for MRI [44]. This was achieved by expressing the mms6 gene in a mammalian cell line, which in turn resulted in changes in magnetic resonance contrast due to the formation of nanoparticles within the mammalian cells. This study showcases the myriad applications made possible by understanding the mechanism of nanoparticle synthesis.
Production of nanoparticles by photosynthetic microorganisms
Cyanobacteria, robust microorganisms which have the capability to adapt to extreme environmental conditions, are of particular interest as synthetic hosts for nanoparticle production, as they convert CO2 to other forms of carbon using sunlight [45]. This feature implies a reduced cost for growth medium, hence lower production costs, and a potentially reduced carbon footprint of the process. Silver and gold are the most reported nanoparticles synthesised using cyanobacteria as the host system. Intracellular gold nanoparticles have been made using Synechocystis sp. PCC 6803 [30]. The nanoparticles, of average size 13 ± 2 nm, were found to localise at the cell wall, the plasma membrane and inside the cytoplasm. The study compared the gold nanoparticle synthesis to the metabolic activity of cyanobacteria, namely photosynthesis and respiration. It was observed that the production of nanoparticles was detrimental to photosynthesis in the presence of light, but the same was not observed with respiration when cultured in the dark. The study also reported that photosynthetic electron transport in the thylakoid membranes played a key role in gold nanoparticle synthesis compared to the respiratory electron transport taking place at the cellular and thylakoid membranes. It is also speculated that the formation of gold nanoparticles within the cell wall of cyanobacteria may be due to the polyphosphates, polysaccharides and carboxyl groups present on the cell membrane, which catalyse the reduction of gold ions. The findings are important for the process development of gold nanoparticle production using cyanobacteria cultured in water under sunlight.
Sixteen different strains of cyanobacteria and microalgae were tested for their ability to produce silver nanoparticles, of which fourteen were successful [46]. Both cell extracts and extracellular medium (i.e. medium that had been used to grow the microbes) were capable of producing nanoparticles, indicating that the extracellular medium contains excreted compounds responsible for the synthesis of nanoparticles of sizes 13-31 nm. Experiments showed that extracellular polysaccharides released by cyanobacteria and algae act as reducing agents in the nanoparticle synthesis. Interestingly, the extracellular medium failed to produce silver nanoparticles in the dark. In contrast, the washed biomass of a few strains retained the nanoparticle-forming ability in the dark, suggesting that light played a role in the process [47]. The study also demonstrated that C-phycocyanin, the blue-coloured accessory pigment produced by cyanobacteria, can reduce silver to form silver nanoparticles. The extracellular medium of the alga Chlorella vulgaris, when treated with chloroauric acid, resulted in size- and shape-controlled nanogold crystal formation [48]. The protein aiding in gold reduction, referred to as the gold shape-directing protein, was isolated and purified, and was further shown to produce triangular and hexagonal gold nanoplates.
A recent study employed the genetically engineered microalga Thalassiosira pseudonana to attach an IgG-binding domain to biosilica for use as a cancer-targeting moiety. The IgG-biosilica complex and liposomes loaded with the chemotherapy drugs camptothecin and 7-ethyl-10-hydroxy-camptothecin were individually prepared. Subsequently, the loaded liposomes were attached to the IgG-biosilica complex and targeted towards cancer cells [49].
Cyanobacteria and algae seem to be promising hosts for nanoparticle synthesis. However, increasing the feasibility of using these reaction vessels for various applications, including nanoparticle synthesis, requires established genetic tools, which are currently still lacking.
Production by yeast
Yeast, being a unicellular eukaryote, is an important model organism in molecular biology. The availability of genetic tools for manipulation makes it an interesting host to engineer for nanoparticle synthesis. Yeast has been employed for producing various metallic and non-metallic nanoparticles. Extracellular syntheses of gold, silver and palladium nanoparticles have been achieved using the yeast Hansenula anomala [50]. Gold nanoparticles have been produced with and without the addition of stabilisers. The use of the G5 PAMAM dendrimer as the stabiliser gave bigger gold nanoparticles with an average particle size of 40 nm, whereas the average particle size without the stabiliser was 14 nm. This general bio-reduction approach was applied to silver and palladium, resulting in nanoparticles of 35 nm as characterised by TEM.
Selenium nanoparticles, which have great potential as anti-cancer agents, were synthesised using the yeast Saccharomyces boulardii [51]. Extracellular synthesis of selenium nanoparticles was observed as the culture solution turned from colourless to red, and the nanoparticle size averaged 200 nm as revealed by TEM and dynamic light scattering.
The yeast Rhodotorula mucilaginosa was employed as a biofactory to produce copper nanoparticles [52]. Both dead and live biomass could produce pure metallic copper nanoparticles, and it was found that the dead biomass was more efficient, producing the nanoparticles within 1 h. It was speculated that nanoparticle synthesis using dead biomass bypassed the toxicity barrier, so the reducing enzymes formed nanoparticles more efficiently. This provides an added advantage by removing the requirement for growth medium during synthesis.
Despite the availability of genetic tools and well established methods of manipulation, genetic engineering in yeast to produce nanoparticles has not been well explored. There is immense scope in this area to identify limiting parameters leading to novel engineered pathway.
Other scaffolds
In addition to microbial synthesis and protein-templated synthesis of nanoparticles, cell-derived substrates such as plasmids and bacteriophages have been explored for nanoparticle synthesis. Plasmids derived from a Bacillus host were employed as scaffolds to form silver nanoparticles [19]. Silver ions incubated with plasmids were photoirradiated under UV at 254 nm at room temperature, resulting in nanoparticles with an average diameter of 20-30 nm. The phosphate backbone of DNA, being negatively charged, binds the positively charged metal ions through electrostatic interactions. Photoirradiation with UV light led to nucleation of nanoparticles on the plasmid, which acted as a reducing agent, aiding the formation of nanoparticles. This study shows that plasmids can be used not only as templates but also as reducing agents to drive the nucleation of nanoparticles.
Jeong et al. [53] fused three glutamates onto the N-terminus of the major capsid protein P8 of the M13 bacteriophage. Incubation with barium glycolates and subsequently with titanium glycolates, followed by calcination, formed a perovskite crystal structure while retaining the viral fibrous morphology (50-100 nm). Electrostatic interactions and hydrogen bonding led to the formation of barium titanate nanoparticles, which were used as nanogenerators. This system could generate an electrical output of up to ∼300 nA and ∼6 V.
Discussion and future prospects
Various attempts to utilise microbes as factories to produce nanoparticles have resulted in nanoparticles of varied type, shape, colour and size. Bacteria, owing to the ease of their handling and short multiplication times, have been a favourite choice as synthesis hosts. Thus far, the 'green' methods in nanoparticle synthesis that have been explored are those available in nature, almost without any modifications. Systemic design and production optimisation for industrial-level synthesis are challenged by the lack of understanding of the causal mechanisms and the various interactions within the microbial system. For significant enhancement of the system, there is a definitive need for understanding the underlying mechanisms of biological nanoparticle synthesis. By establishing the pathways or the enzymes required for a particular kind of nanoparticle synthesis in a given organism, it is possible to extrapolate this system to other hosts which would better suit the applications at hand.
Another important aspect to be taken into consideration is the availability of genetic tools for systemic manipulation of the genetic circuits in microorganisms. E. coli has been the model organism for genetic manipulation in the production of nanoparticles. However, more research needs to focus on improving the available genetic tools for other organisms, such as cyanobacteria and unconventional yeasts, which would aid in the employment of these non-traditional microorganisms for the synthesis of nanoparticles [54][55][56].
The challenges that remain in bringing these processes to industrial scale are optimising the production and minimising the time required while choosing a suitable strain [57]. The development of microbial strains for industrial applications, employing system-wide engineering, cellular metabolism optimisation and product recovery optimisation, is a research topic in itself [58]. Contemporary techniques in synthetic biology, systems biology and metabolic engineering serve as building blocks in industrial strain development. Strain improvement and controlled nanoparticle synthesis efforts should be brought together to design robust industrial strains capable of producing nanoparticles of desired shape and size.
The emergence of synthetic biology in this era will chart the development of this field in the coming decades. Engineering non-native genetic circuits in yeast to produce artemisinic acid and engineering E. coli to see light are classic examples of the potential of this field [26,59]. Synthetic biology, which deconstructs biological systems into synthetic circuits and logic gates, will provide indispensable tools for the tuneable synthesis and optimisation of nanoparticle production in biological hosts. Synthetic circuits may be generated for the synthesis of nanoparticles with controlled size and shape. Understanding how microbes react to stress and how the corresponding regulatory networks play a role will provide insights into metal toxicity and tolerance mechanisms. Integrating a systems engineering approach in biology has paved the way for the synthesis of small chemicals, and similar advances are expected for the synthesis of nanoparticles.
Simulation of a collision-less planar electrostatic shock in a proton-electron plasma with a strong initial thermal pressure change
The localized deposition of the energy of a laser pulse, as it ablates a solid target, introduces high thermal pressure gradients in the plasma. The thermal expansion of this laser-heated plasma into the ambient medium (ionized residual gas) triggers the formation of non-linear structures in the collision-less plasma. Here an electron-proton plasma is modelled with a particle-in-cell (PIC) simulation to reproduce aspects of this plasma expansion. A jump is introduced in the thermal pressure of the plasma, across which the otherwise spatially uniform temperature and density change by a factor of 100. The electrons from the hot plasma expand into the cool one and the charge imbalance drags a beam of cool electrons into the hot plasma. This double layer reduces the electron temperature gradient. The presence of the low-pressure plasma modifies the proton dynamics compared to a plasma expansion into a vacuum. The jump in the thermal pressure develops into a primary shock. The fast protons, which move from the hot into the cold plasma in the form of a beam, give rise to the formation of phase space holes in the electron and proton distributions. The proton phase space holes develop into a secondary shock that thermalizes the beam.
Introduction
The impact of a laser pulse on a solid target results in the evaporation of the target material. The heated plasma expands under its own thermal pressure, and shocks as well as other nonlinear plasma structures form. Generating collision-less plasma shocks in a laboratory experiment permits us to study their detailed dynamics in a controlled manner. A better understanding of such shocks is not only relevant for the laser-plasma experiment as such and for inertial confinement fusion experiments. It can also provide further insight into the dynamics of solar system shocks and of nonrelativistic astrophysical shocks, like the supernova remnant shocks [1,2,3,4,5].
An obstacle to an in-depth investigation of laser-generated shocks has so far been that the frequently used optical probing techniques could not resolve the shock structure at the required spatio-temporal resolution. The now available proton imaging technique [6,7] helps us overcome this limitation. This method can provide accurate spatial electric field profiles at a high time resolution, as long as no strong magnetic fields are present. The nonrelativistic flow speed of the laser-generated shock, e.g. that in Ref. [8], implies that no strong self-induced magnetic fields due to the filamentation instability or the mixed-mode instability [9,10] occur at the shock front.
The availability of electric field data at a high resolution serves as a motivation to perform related numerical simulations and to compare their results with the experimental ones. The experimental observations from Ref. [8], which are most relevant for the simulation study we perform here, can be summarized as follows. The ablation of a solid target consisting of aluminium or tungsten by a laser pulse with a duration of ≈ 470 ps and an intensity of 10^15 W/cm^2 results in a plasma with a density ≈ 10^18 cm^-3 and an electron temperature of a few keV. This plasma expands into an ambient plasma with a density ≤ 10^15 cm^-3. The ambient plasma has been produced mainly by photo-ionization of the residual gas. The dominant components of the residual gas, which consists of diluted air, are oxygen and nitrogen. Electrostatic structures, which move through the ionized residual gas, are observed. Their propagation speeds suggest that one is an electrostatic shock [11] with a thickness of a few electron Debye lengths, which expands approximately with the ion acoustic velocity of 2-4×10^5 m/s. Ion-acoustic solitons trail the shock. Another structure moves at twice the shock speed, which is probably related to a shock-reflected ion beam. The electron-electron, electron-ion and ion-ion mean free paths for the residual gas have been determined for this particular experiment. They are of the order of centimetres and thus much larger than the shock width of a few tens of µm. The shock and the electrostatic structures are collision-less.
The experiment can measure the electric fields, the propagation speed of the electric field structures and it can estimate the electron temperature and density. The bulk parameters of the ions, such as their temperature, mean speed and ionization state, are currently inaccessible, as well as detailed information about the spatial distribution of the plasma. We can set up a plasma simulation with the experimentally known parameters, and we can introduce an idealized model for the unknown initial conditions.
The detailed information about the state of the plasma, which is provided by Vlasov simulations [12] or by particle-in-cell (PIC) simulations [13,14], can then provide further insight into the expansion of this plasma.
Here we investigate a mechanism that could result in the shock observed in Ref. [8]. We model with PIC simulations the interplay of two plasmas with a large difference in thermal pressure, which are initially spatially separated. We aim at determining the spatio-temporal scale over which a shock forms under this initial assumption, and we want to reveal the structures that develop in the wake of the shock. The temperature and density of the hot laser-ablated plasma both initially exceed those of the cold ambient plasma by two orders of magnitude. The density ratio is less than that between the expanding and the ambient plasma in Ref. [8]. However, the density will not change in the form of a single jump in the experiment, and realistic density changes will probably be less than or equal to the one we employ. Selecting the same jump in the density and temperature is computationally efficient, because both plasmas then have the same Debye length, which determines the grid cell size and the allowed time step. The ion temperature in the experiment is likely to be less than that of the electrons. The electron distribution can also not be approximated by two separate, spatially uniform thermal electron clouds, because the plasma generation is not fast compared to the electron diffusion. We show, however, that the shock forms long after the electrons have diffused through the simulation box and reached almost the same temperature everywhere.
A change in the thermal pressure by a factor of 10^4 should imply a plasma expansion that is similar to the expansion into a vacuum. This process has received attention in the context of auroral, astrophysical and laser-generated plasmas, and it has been investigated analytically within the framework of fluid models [15,16] or Vlasov models [17,18]. It has been modelled numerically using a cold ion fluid and Boltzmann-distributed electrons [19] and with kinetic Vlasov and PIC simulations [20,21]. The plasma expansion of hot electrons and cool ions into a tenuous medium has also been examined with PIC simulations, such as the pioneering study in Ref. [22], which reported the formation of a double layer [23,24,25] that cannot form if the plasma expands into a vacuum. Our simulation also examines the dynamics of protons as a first step towards a simulation of the mix of oxygen and nitrogen ions that constitutes the residual gas in the physical experiment. Notable differences between the expansion of the hot and dense plasma into the ambient plasma and the expansion into a vacuum are observed.
The structure of this paper is the following. We describe the PIC method in Section 2 and we give the initial conditions and the simulation parameters. Section 3 models the initial phase of the plasma expansion at a high phase space resolution, revealing details of the electron expansion and of the quasi-equilibrium, which is established for the electrons. A double layer develops at the thermal pressure jump, which drags the electrons from the tenuous plasma into the hot plasma in form of a cool beam. The electrons from the hot plasma leak into the cool plasma, which reduces the temperature difference between both plasmas. Section 4 examines the proton dynamics. The ambient plasma modifies the proton expansion. The thermal pressure jump evolves into a shock, which moves approximately with the proton thermal speed of the hot plasma. If the plasma expands into a vacuum, then a plasma density change can only be accomplished by ion beams [21], while the plasma is here compressed by the shock. The fastest protons in our simulation form a beam that outruns the shock. It interacts with the protons of the ambient medium to form phase space holes in the electron and proton distributions. The proton phase space holes develop into a secondary shock ahead of the primary one. This process may result in secondary shocks in experiments, similar to the radiation-driven ones [26]. The results are summarized in Section 5.
The PIC simulation method and the initial conditions
A PIC code approximates a plasma by an ensemble of computational particles (CPs), each of which is representing a phase space volume element. Each CP follows a phase space trajectory that is determined through the Lorentz force equation by the electric field E(x, t) and the magnetic field B(x, t). Both fields are evolved self-consistently in time using the Maxwell's equations and the macroscopic current J(x, t), which is the sum over the microcurrents of all CPs. The standard PIC method considers only collective interactions between particles, although some collisional effects are introduced through the interaction of CPs with the field fluctuations [27].
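To make the basic PIC cycle concrete, the following toy sketch implements a 1D electrostatic loop in normalized units (deposit charge, solve for the field, push particles). It is an illustration of the method described above, not the code used here, which is electromagnetic and based on the virtual-particle scheme; the grid size, particle number, and nearest-grid-point deposition are simplifying choices.

```python
import numpy as np

nx, L = 128, 2 * np.pi        # grid cells, box length
dx = L / nx
n_cp = 10000                  # computational particles (CPs)
dt = 0.1
q, m = -1.0, 1.0              # normalized electron charge and mass

rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_cp)
v = rng.normal(0, 1, n_cp)    # Maxwellian velocities, v_t = 1
w = L / n_cp                  # weight so the mean electron density is 1

for step in range(100):
    # 1) deposit charge on the grid (nearest grid point, for brevity)
    idx = (x / dx).astype(int) % nx
    rho = q * np.bincount(idx, minlength=nx) * w / dx + 1.0  # + ion background
    # 2) solve Gauss's law dE/dx = rho in Fourier space: ik E_k = rho_k
    rho_k = np.fft.rfft(rho)
    k = 2 * np.pi * np.fft.rfftfreq(nx, dx)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    E = np.fft.irfft(E_k, nx)
    # 3) interpolate E to the particles and advance them (leapfrog)
    v += (q / m) * E[idx] * dt
    x = (x + v * dt) % L      # periodic boundary conditions
```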
Collision operators have been prescribed for PIC simulations [28,29]. The structures in the addressed experiment form and evolve in a plasma in which collisional effects are not strong, and such operators are thus not introduced here. We may illustrate this with the help of the electron collision rate ν_e ≈ 2.9×10^-6 n_e ln Λ T_e^-3/2 s^-1 and the ion collision rate ν_i ≈ 4.8×10^-8 Z^4 µ^-1/2 n_i ln Λ T_i^-3/2 s^-1 [30] for a spatially uniform plasma with the number density n_e = n_i = 10^15 cm^-3 and the temperature T_e = T_i = 10^3 eV. We take a Coulomb logarithm ln Λ = 10 and we consider oxygen with µ = 16. Both collision rates are comparable if the mean ion charge Z ≈ 4. We assume ν_e ≈ ν_i. The electron plasma frequency ω_p ≈ 10^12 s^-1 gives the low relative collision frequency ν_e/ω_p ≈ 10^-6. The plasma flow in the experiment and other aspects, which are not taken into account by this simplistic estimate, alter this collision frequency. The mean free path has been estimated to be of the order of a cm [8] and an ion beam with the speed 4×10^5 m/s crosses this distance during the time ω_p t ≈ 25000. This presumably forms the upper time limit for which we can neglect collisions.
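A quick numeric check of this estimate is shown below; the rate formulas are taken directly from the text, and the plasma-frequency prefactor is the standard formulary value ω_p ≈ 5.64×10^4 n_e^1/2 rad/s with n_e in cm^-3.

```python
import math

n_e = n_i = 1e15      # number density, cm^-3
T_e = T_i = 1e3       # temperature, eV
lnLambda = 10.0       # Coulomb logarithm
Z, mu = 4, 16         # mean ion charge and ion mass number (oxygen)

nu_e = 2.9e-6 * n_e * lnLambda * T_e**-1.5            # electron collision rate, s^-1
nu_i = 4.8e-8 * Z**4 * mu**-0.5 * n_i * lnLambda * T_i**-1.5  # ion rate, s^-1
omega_p = 5.64e4 * math.sqrt(n_e)                     # electron plasma frequency

print(f"nu_e = {nu_e:.1e} s^-1, nu_i = {nu_i:.1e} s^-1")   # both ~10^6 s^-1
print(f"omega_p = {omega_p:.1e} rad/s")                    # ~10^12 s^-1
print(f"nu_e/omega_p = {nu_e / omega_p:.0e}")              # ~10^-6
```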
The presence of particles with keV energies and the preferential expansion direction of the plasma in the experiment imply that multi-dimensional PIC simulations should be electromagnetic in order to resolve the potentially important magnetic Weibel instabilities, which are driven by thermal anisotropies [31]. Such instabilities can grow in the absence of relativistic beams of charged particles, but they are typically weaker than the beam-driven ones [32]. Here we restrict our simulation to one spatial dimension x (1D) and we set B(x, t = 0) = 0. The plasma expands along x and all particle beams will have velocity vectors aligned with x. The magnetic beam-driven instabilities have wavevectors that are oriented obliquely or perpendicular to the beam velocity vector, and they are not resolved by a 1D simulation. The wavevectors that are destabilized by the Weibel instability can be aligned with the simulation direction, but only if the plasma is cooler along x than orthogonally to it. Such a thermal anisotropy can probably not form. Our electromagnetic simulation confirms that no magnetic instability grows; the ratio of the magnetic to the total energy remains at noise levels below 10^-4.
A 1D PIC simulation should provide a reasonable approximation to those sections of the expanding plasma front observed in Ref. [8] which are planar over a sufficiently wide spatial interval orthogonal to the expansion direction. We set the length of the 1D simulation box to L. The plasma 1, consisting of electrons (species 1) and protons (species 2), each with the density n_h and the temperature T_h = 1 keV, fills the half-space −L/2 < x < 0. A number density n_h = 10^15 cm^-3 should be appropriate with regard to the experiment. The half-space 0 < x < L/2 is occupied by the plasma 2, which is composed of electrons (species 3) and protons (species 4) with the temperature T_c = 10 eV and the density n_c = n_h/100. All plasma species initially have a Maxwellian velocity distribution, which is at rest in the simulation frame.
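The sampling of this initial condition is straightforward; the sketch below shows it for the two electron species in normalized units (particle counts and the equal-weight sampling are illustrative simplifications, since the actual simulations resolve the dense and tenuous species with different numbers of CPs per cell). The protons are sampled analogously with thermal speeds v_tj = (T_j/m_j)^1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 3350.0                     # box length in units of lambda_D (simulation 1)
n_h, n_c = 1.0, 0.01           # densities in units of n_h
T_h, T_c = 1.0, 0.01           # temperatures in units of T_h = 1 keV

N_h = 200_000                  # electrons of plasma 1 (illustrative count)
N_c = int(N_h * n_c)           # plasma 2 carries 100x fewer particles

x_hot = rng.uniform(-L / 2, 0.0, N_h)        # plasma 1 fills -L/2 < x < 0
v_hot = rng.normal(0.0, np.sqrt(T_h), N_h)   # v_t1 = (T_h/m_e)^(1/2) = 1
x_cold = rng.uniform(0.0, L / 2, N_c)        # plasma 2 fills 0 < x < L/2
v_cold = rng.normal(0.0, np.sqrt(T_c), N_c)  # v_t3 = v_t1/10
```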
The ablated target material drives the plasma expansion, but its ions are probably not involved in the evolution of the shock and of the other plasma structures. These structures are observed already 100-200 ps after the laser impact at a distance of about 1 mm from the target. Aluminium ions, which are, with mass m_A, the lightest constituents of the target material, would have the thermal speed (T/m_A)^1/2 ≈ 10^5 m/s for T = 1 keV. One hundred times this speed, or a temperature of 10 MeV, would be necessary for them to propagate 1 mm in 0.1 ns. We thus assume here that the shock and the other plasma structures involve only the ions of the residual gas, which is air at a low pressure. If we assume that these ions have a high ionization state and comparable charge-to-mass ratios, then the protons may provide a reasonable approximation to their dynamics.
The equations solved by the PIC code are normalized with the number density n_h, the plasma frequency Ω_1 = (n_h e^2 / m_e ε_0)^1/2 and the Debye length λ_D = v_t1/Ω_1 of species 1, which equals that of the other species. The thermal speeds of the respective species are v_tj = (T_j/m_j)^1/2, where j is the species index. We express the charge q_k and mass m_k of the k-th CP in units of the elementary charge e and the electron mass m_e. Quantities in physical units carry the subscript p and are converted by the corresponding substitutions. The Lorentz force is solved for each CP with index k, position x_k and velocity v_k. It is necessary to interpolate the electromagnetic fields from the grid to the particle position to update p_k, and the microcurrents of each CP have to be interpolated to the grid to update the electromagnetic fields. Interpolation schemes are detailed in Ref. [13]. Our code is based on the virtual-particle electromagnetic particle-mesh method [14] and it uses the lowest interpolation order possible with this scheme. Our code is parallelized through the distribution of the CPs over all processors. Simulation 1 (Section 3) resolves the box length L_S = 3350 by N_S = 5×10^3 grid cells of size Δ_xS = 0.67 λ_D. The dense species 1 and 2 are each resolved by 8×10^4 CPs per cell and the tenuous species 3 and 4 by 800 CPs per cell, respectively. The simulation is evolved in time for the duration t_S = 800, subdivided into 45000 time steps Δt_S. Simulation 2 (Section 4) resolves the box length L_L = 10 L_S by N_L = 2.5×10^4 grid cells of size Δ_xL = 1.34 λ_D. This grid cell size is sufficiently small to avoid significant numerical self-heating [33] of the plasma during the simulation time. The total energy in the simulation is preserved to within ≈ 10^-5. Species 1 and 2 are approximated by 6400 CPs per cell each, and species 3 and 4 by 64 CPs per cell, respectively. The system is evolved during t_L = 25500 with 6.4×10^5 time steps.
We use periodic boundary conditions for the particles and the fields in all directions. Ideally, no particles or waves should traverse the full box length during the simulation. The group velocity of the electrostatic waves and the propagation speed of the electrons are both comparable to v_t1. We obtain v_t1 t_S / L_S ≈ 0.24 for simulation 1 and v_t1 t_L / L_L ≈ 0.76 for simulation 2. Both simulations ran on 16 CPUs of an AMD Opteron cluster (2.2 GHz); simulation 1 ran for 100 hours and simulation 2 for 800 hours.
Simulation 1: Initial development
Our initial conditions involve a jump in the bulk plasma properties at x ≈ 0. Some electrons of the plasma 1 will expand into the half-space x > 0 occupied by the plasma 2. The slow protons cannot keep up with the electrons and the resulting charge imbalance gives rise to an electrostatic field E_x. This E_x confines the electrons of plasma 1 and it accelerates the electrons of plasma 2 into the half-space x < 0. The electrons of plasma 1 and 2 with x < 0 are separated along the velocity direction by the electrostatic potential and form a double layer. Figure 1 examines E_x and its potential. The amplitude of E_x peaks initially at x ≈ 0 and it accelerates the electrons into the negative x-direction. The position of the maximum of E_x moves to larger x with increasing time and the peak amplitude decreases. The spatial profile of E_x is smooth, in contrast to the one that drives the plasma expansion into a vacuum, which has a cusp [21]. The potential difference of ≈ 5 kV between plasma 1 and 2 remains unchanged. The spatial interval in which the amplitude of E_x is well above noise levels is bounded. An interesting property of the double layer can thus be inferred according to Ref. [25]: its electrostatic field can only redistribute the momentum between the four plasma species, but it cannot provide a net flow momentum. This is true if the double layer is one-dimensional and electrostatic. The decrease of the peak electric field in Fig. 1(b) resembles that in Fig. 3 of Ref. [19]. The decreasing electric force, in turn, implies that the ion acceleration in Fig. 4 of Ref. [19] decreases as time progresses, which should hold for our simulation too.
The plasma phase space distribution at t = 60 is investigated in Fig. 2. A tenuous hot beam of electrons diffuses from the plasma 1 into the half-space x > 0, while the mean speed of the electrons of the plasma 2 becomes negative. The electrons of plasma 1 and 2 with x < 0 are separated by a velocity gap of ≈ v_t1/10. The protons that were close to the initial boundary x = 0 at t = 0 have propagated until t = 60 over a distance proportional to their speed. A sheared velocity distribution can thus be seen in Fig. 2(b). The fastest protons of the plasma 1 with x > 0 have also been accelerated by E_x by about v_t2/2, now reaching a peak speed ≈ 4 v_t2. The fastest protons are found to the right of the maximum of E_x at x ≈ 2 at t = 60 in Fig. 1(a). A similar acceleration is observed for the protons of plasma 2 in 0 < x < 5. The densities of the electrons and protons disagree in the interval −5 < x < 5 and the net charge results in the electrostatic field E_x > 0. Both curves in Fig. 2(d) intersect at x ≈ 2, which coincides with the position in Fig. 1(a) where E_x has its maximum at t = 60.
The density of the cold protons in Ref. [21] is practically discontinuous at the front of the expanding plasma, while it changes smoothly in our simulation. This is a result of our high proton temperature, which causes the thermal diffusion of the protons. The contour lines of the electron phase space density are curved at x ≈ 0. Most electrons of plasma 1 that move to increasing values of x are reflected by the electrostatic potential at x ≈ 0. These density contour lines resemble those of the distribution of electrons that expand into a vacuum at an early time in Ref. [21], which are all reflected by the potential at the plasma front. Here the inflow of electrons from plasma 2 into plasma 1 allows some of the electrons of plasma 1 to overcome the potential. The electrons provide all the energy for the proton expansion in Ref. [21] and their distribution develops a flat top. Here the proton thermal energy is the main driver, and consequently the electron velocity distribution shows no clear deviation from a Maxwellian at any time.

Figure 3 shows the plasma phase space distributions at the times t = 120 and t = 180. The plasma distributions are qualitatively similar to those in Fig. 2. Electrons diffuse out from plasma 1 into plasma 2, forming a hot beam, while the electrons of plasma 2 are dragged into the half-space x < 0 in the form of a cold beam. The confined electrons of plasma 1 expand to increasing x at a speed which is determined mainly by the protons. The proton distribution shows an increasing velocity shear, but the apparent phase space boundary between the protons of plasma 1 and 2 still intersects v_x = 0 at x = 0. The front of the protons of plasma 1 at t = 120 and t = 180 is close to the position of the maximum of E_x in Fig. 1(a), at x ≈ 5 for t = 120 and x ≈ 10 for t = 180. The protons at the front of plasma 1, and the protons of plasma 2 in the same interval, are accelerated by E_x > 0 and reach the peak speed ≈ 5 v_t2.
The electrons of plasma 1 in Fig. 4 at t = 300 have expanded into the half-space x > 0 over several hundred Debye lengths. The electrons from the plasma 2, which have been dragged towards x < 0, interact with the electrons of plasma 1 through a two-stream instability. A chain of large electron phase space holes has developed for −400 < x < −300, which thermalizes the beam distribution. No two-stream instability is yet observed in the interval x > 0, even though a beam distribution is present, for example, at x ≈ 250. The change of the mean speed of the electron beam leaked from plasma 1 for x > 0 inhibits the resonance that gives rise to the two-stream instability. The mean speed of the electrons of plasma 2 no longer vanishes and it varies along x > 0 to provide the return current that cancels that of the electrons of plasma 1. The E_x has noticeably accelerated the protons in the interval 10 < x < 30, which still show the sheared distribution in the interval −25 < x < 25.
The evolution of the plasma is animated in the movies 1 (electrons) and 2 (protons). The axis labels are v_eh = v_t1 and v_ph = v_t2. The colour scale denotes the 10-logarithmic number of CPs. Movie 1 reveals that a thin band of electrons parallel to v_x propagates away instantly from the plasma 1 towards x > 0. These electrons leave the plasma 1 before E_x has grown. The electrons diffusing into x > 0 at later times, when E_x has developed, form a tenuous beam with a broad velocity spread. The electrons of plasma 1 can overcome the double layer potential of ≈ 5 kV if their speed is v ≥ 3 v_t1 prior to the encounter with its electrostatic field. Movie 1 furthermore illustrates the growth of the two-stream instability between the electron beam originating from plasma 2 and the confined electrons of plasma 1 in x < 0, and its saturation through the formation of electron phase space holes. Movie 2 demonstrates how the velocity shear of the protons develops and how the fastest protons of plasma 1 in x > 0 are accelerated by E_x. Neither Fig. 4 nor movie 2 reveals the formation of a shocked proton distribution prior to the time t_S.
We expand the simulation box and we reduce the statistical representation of the plasma. Ideally, the plasma evolution should be unchanged. Figure 5 compares the plasma data provided by simulation 1 (box length L_S) and by simulation 2 (L_L = 10 L_S) at the time t_S, when we stop simulation 1. The proton distributions in both simulations are practically identical and we notice only one quantitative difference. The sheared proton distribution of plasma 1 extends to x ≈ −60 and v_x ≈ −3 v_t2 in simulation 1, while it reaches only x ≈ −50 and v_x ≈ −2 v_t2 in simulation 2. This can be attributed to the better representation of the high-energy tail of the Maxwellian in simulation 1.
The bulk electron distributions in both simulations agree well for x < 100. The interaction of the confined electrons of plasma 1 with the expanding protons is thus reproduced well by both simulations. We find a beam of electrons with x > 100 and v_x ≈ −3 v_t1 in Fig. 5(c), which is accelerated by the double layer to −4 v_t1 in the interval −100 < x < 100. This beam originates from the second boundary between the dense and the tenuous plasma at x = L_S/2 in simulation 1. It is thus an artifact of our periodic boundary conditions. Its density is three orders of magnitude below the maximum one and it thus does not carry significant energy. This tenuous beam does not show any phase space structuring, which would be a consequence of instabilities, and it has thus not interacted with the bulk plasma. Its only consequence is to provide a weak current that should not modify the double layer. This fast beam is absent in Fig. 5(d), because the electrons could not cross the distance L_L/2 in simulation 2 during the time t_S.
The electron distributions for x > 100 and v_x > 0 computed by the two simulations differ substantially. The electrons form phase space vortices in simulation 1, while the electrons in simulation 2 form a diffuse beam with some phase space structures, e.g. at x ≈ 300. Phase space vortices are a consequence of an electrostatic two-stream instability, which must have developed between the leaked electrons of plasma 1 and the electrons of plasma 2. Only the electrons of plasma 1 with v > 3 v_t1 can overcome the double layer potential. These leaked electrons form a smooth beam in simulation 1 that can interact resonantly with the electrons of plasma 2 to form well-defined phase space vortices. The statistical representation of the leaking electrons in simulation 2 provides a minimum density that exceeds the density of these vortices.
Simulation 2: Long-term evolution
We examine the plasma at three times. The snapshot $S_1$ corresponds to the time $t = 8000$, $S_2$ to $t = 16000$ and $S_3$ to $t = 25500$. The plasma phase space distributions for $S_1$ and $S_2$ are displayed in Fig. 6. The proton distribution is still qualitatively similar to that at $t = 300$ in Fig. 4. The phase space boundary between the protons of plasmas 1 and 2 has been tilted further by the proton streaming. The key difference between Figs. 4 and 6 is found where the proton distribution of the plasma 1 merges with that of the plasma 2. This collision boundary is located at $x \approx 300$ for $S_1$ and at $x \approx 600$ for $S_2$, which evidences an approximately constant speed of this intersection point. The propagation speed is $\approx v_{t2}$. The protons directly behind this collision boundary, e.g. in $450 < x < 550$ for $S_2$, do not show a velocity shear. Their mean speed and velocity spread are spatially uniform in this interval, evidencing the downstream region of a shock. The upstream proton distribution with $x > 600$ for $S_2$ resembles, however, only qualitatively that of an electrostatic shock [11], which consists of the incoming plasma and the shock-reflected ion beam. The density of the beam with $v_x \approx 4v_{t2}$ exceeds that of the plasma 2 in the same interval, and its mean speed exceeds the shock speed $v_s \approx v_{t2}$ by the factor 4. A shock-reflected ion beam would move at twice the shock speed and its density would typically be less than that of the upstream plasma, which the shock reflects. The linear increase of the proton beam velocity with increasing $x$ is reminiscent of the plasma expansion into a vacuum [20], but it is here a consequence of the shear introduced by the proton thermal spread.
The electron distribution at $t = t_S$ in Fig. 5(d) could be subdivided into the cool electrons of plasma 2 and the leaked hot electrons of plasma 1, while the electrons in the interval $x > 750$ have a symmetric velocity distribution in Fig. 6(b) that does not permit such a distinction. The electron temperature gradient has also been eroded. The electron phase space density decreases by an order of magnitude as we go from $v_x = 0$ to $v_x \approx 2v_{t1}$ at $x \approx 0$ and at $x \approx 2000$ in Fig. 6(d), and the thermal spread is thus comparable at both locations. We attribute this temperature equilibration to electrostatic instabilities, which were driven by the electron beam that leaked through the boundary at $x = 0$, and also to the electron scattering by the simulation noise. The noise amplitude is significant in the interval $x > 0$ due to the comparatively low statistical representation of the plasma, in particular that of the hot leaked electrons.
The electron density jumps at both times in Fig. 6 at the positions where the protons of plasmas 1 and 2 intersect. The electron distribution for $S_2$ furthermore shows a spatially uniform distribution in $450 < x < 550$, as the protons do. The electrons have thermalized and any remaining free energy would be negligible compared to that of the protons. The electron density merely follows that of the protons to conserve the plasma quasi-neutrality. This electron distribution thus differs from the similarly looking one, which has been computed at late times in Ref. [21]. There the electrons changed their velocity distribution in response to the energy lost to the protons.
The time $10t_S$ corresponding to $S_1$ and the box length $L_L = 10L_S$ imply that we should see some electrons emanated by the plasma boundary at $x = L_L/2$, as in Fig. 5. Only the electrons with $v < -2.1v_{t1}$ would be fast enough to cross the interval $0 < x < L_L/2$ occupied by plasma 2 during the time $10t_S$. These electrons correspond to the few fast electrons in Fig. 6(b) with $x > 0$ and $v < 0$. An increased number of fast electrons moving in the negative $x$-direction is visible at the snapshot $S_2$. The electrons emanated from the plasma boundary at $x = L_L/2$ now reach the boundary at $x = 0$ in significant numbers. The diffuse phase space distribution of these electrons implies, however, that they do not carry enough free energy to drive instabilities with strong electrostatic fields.
The shock structure and the density jump in the electron distribution have propagated to $x \approx 900$ for $S_3$, and the proton beam ahead of the shock has started to thermalize through its interaction with the upstream plasma, as evidenced by Fig. 7. An electron phase space hole doublet and proton phase space structures are visible. These structures have grown out of the phase space oscillation of the proton beams and the electron phase space hole at $x \approx 1250$ in Fig. 6(c). The proton distribution in Fig. 7(c) in $x \geq 1700$ reveals that a second shock is forming, which will thermalize the dense and fast beam of protons that expands out of the plasma 1 into the plasma 2. The spatially uniform electron distribution outside the interval occupied by the electron phase space holes changes only its thermal spread and density along $x$ and could be approximated by a Boltzmann distribution. The electrons are not accelerated to high energies by either the shocks or the other phase space structures.
The expansion of the protons of plasma 1 in simulation 2 is captured by movie 3. The colour scale corresponds to the number of CPs on a $\log_{10}$ scale. Movie 3 evidences the formation of the shock and of its downstream region, and it displays how the proton phase space hole and, subsequently, the secondary shock develop. The mean velocity of the upstream protons is modulated along $x$, which is probably a result of the same wave fields that thermalized the electrons.
The proton distribution at $x \approx 0$ changes in time primarily due to the free motion of a proton $i$ with the speed $v_{x,i}$, which is displaced as $x_i = v_{x,i}t$. The phase space boundary between the plasmas 1 and 2 is thus increasingly sheared. Further acceleration mechanisms are the drag of the protons by the thermally expanding electrons and the shock formation. Figure 8 assesses their relative importance. The plasma density distribution should be invariant if the protons expand freely and if we scale the position $\propto x/t$. This is indeed the case and the proton density distributions for $S_1$, $S_2$ and $S_3$ match if we use the scaled positions, except at the shock and within its downstream region. The electron densities (not shown) closely follow those of the protons.

[Figure 8 caption: The proton densities, normalized to $n_h$, as a function of the scaled position $xt_1/t_j$, where $t_j$ corresponds to the snapshot $S_j$. The curves match, except within the downstream region of the shock at $200 < xt_1/t_j < 400$, which is characterized by a constant density. The density doubles through the shock compression at $xt_1/t_j \approx 350$.]

Figure 9 compares the electrostatic field with the electron distributions for the snapshots $S_1$ and $S_3$. An electric field peak at $x \approx 400$ coincides with the shock in the snapshot $S_1$. The peak $E_x \approx 0.04$ and it confines the electrons to the left of the density jump by accelerating them into the negative $x$ direction. The electric field can be scaled to physical units with $n_M = 10^{15}$ cm$^{-3}$ and $v_{t1} = 1.325 \times 10^7$ m/s to give $\approx 5 \times 10^6$ V/m. The electric field, which has been measured close to the shock in Ref. [8], is $\leq 2 \times 10^7$ V/m. The plasma density in the region where the shock develops in the experiment may be higher than $10^{15}$ cm$^{-3}$. The electric field amplitudes associated with the shock are thus comparable. The noise levels in PIC simulations are typically higher than in a physical plasma, explaining the strength of the evenly spread noise in the simulation box, which is not observed to the same extent in the experiment. The electric field at the shock at $x \approx 10^3$ is at noise levels for $S_3$, while the phase space holes at $x \approx 1700$ give an electric field exceeding that sustained by the shock for $S_1$.
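The unit conversion quoted above can be reproduced under the PIC normalization $E_{\mathrm{phys}} = E_{\mathrm{sim}}\, m_e \omega_{pe} v_{t1}/e$; this normalization is an assumption on our part, since the text does not spell it out, but it recovers the quoted value:

```python
# Hypothetical reconstruction of the unit conversion quoted above, assuming
# the common PIC normalization E_phys = E_sim * m_e * omega_pe * v_t1 / e
# (the paper does not state its normalization explicitly here).
import math

e    = 1.602e-19     # elementary charge [C]
m_e  = 9.109e-31     # electron mass [kg]
eps0 = 8.854e-12     # vacuum permittivity [F/m]

n_M   = 1.0e15 * 1e6  # reference density: 1e15 cm^-3 in m^-3
v_t1  = 1.325e7       # electron thermal speed [m/s]
E_sim = 0.04          # peak field at the shock in simulation units

omega_pe = math.sqrt(n_M * e**2 / (eps0 * m_e))  # electron plasma frequency [rad/s]
E_phys = E_sim * m_e * omega_pe * v_t1 / e       # [V/m]
print(f"E_phys ~ {E_phys:.1e} V/m")              # ~5e6 V/m, as quoted in the text
```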
Discussion
We have investigated the thermal expansion of a hot dense plasma into a cooler tenuous plasma. The thermal pressure of the hot plasma exceeded that of the cool plasma by a factor of $10^4$. Our study has been motivated by the laser-plasma experiment in Ref. [8], which examined the expansion of a hot and dense plasma into a tenuous ambient medium. Our initial conditions and the 1D geometry are, however, idealized, and the simulation results can thus not be compared quantitatively to the experimental ones. The aim of our work has been to better understand the qualitative effects of the ambient medium on the plasma expansion. For this purpose we have compared our results with some of those in the related study in Ref. [21], which considered the plasma expansion into a vacuum. There, the electron temperature exceeded that of the protons by a factor of $10^3$, while we consider here the same temperature for electrons and protons.
Our results are summarized as follows. An electric field grows almost instantly at the boundary between both plasmas, because the ion expansion of the hot plasma is slower than the electron expansion. The electric field forms irrespective of the ambient medium. If the plasma expands into a vacuum, it accelerates only the ions and has a cusp in its spatial profile. In our simulation, the acceleration of the electrons of the ambient medium triggers the formation of a double layer [22] with a smooth electric field profile. This double layer redistributes the momentum between the individual plasma species [25]. A tenuous hot beam of electrons streams from the hot plasma into the cool plasma, while all the electrons of the cool plasma are dragged into the hot plasma. These beams thermalize through electrostatic two-stream instabilities, which equilibrate the electron temperatures of both plasmas on electron time scales. This rapid thermalization cancels any significant proton acceleration by hot electrons already at the relatively low density of the ambient medium we have used. Proton acceleration is, however, still possible because a thermal pressure gradient is provided by the density jump. After their thermalization, most electrons merely follow the motion of the protons to conserve the quasi-neutrality of the plasma. They maintain their Maxwellian velocity distribution, which would not be the case for an expansion into a vacuum [21].
The protons at the front of the hot plasma are accelerated by the electric field of the double layer to about 5.5 times the proton thermal speed, while the Maxwellian distribution is represented only up to 3-4 times the proton thermal speed. The expansion of the protons from the hot into the cool plasma is dominated by the free streaming of the fastest protons (diffusion). The effects of the ambient medium on the proton expansion are initially negligible. Eventually the interaction of the expanding and the ambient plasma results in the formation of shocks. We have observed one shock at the position where the protons of both plasmas merge. This shock did not result in the acceleration of electrons or in the modification of their phase space distribution.
The protons of the hot plasma expand farther than the position of this shock and they can interact with the protons of the cool plasma through ion beam instabilities. The interval in which the protons of both plasmas co-exist resembles qualitatively the upstream region of an electrostatic shock [11]. However, the density and the speed of the beam of expanding protons of the hot plasma are both higher than what we expect for a shock-reflected ion beam. We have observed in the simulation the growth of a phase space structure in the upstream proton distribution that gave rise to an electron phase space hole. The proton structure evolved into a second shock ahead of the primary one. The presence of multiple shocks has been observed experimentally [26], although there the second shock was radiation-driven and not beam-driven.
NAVIGATING INTEGRATION AND ALIENATION IN MIGRATION: A READING OF LEILA ABOULELA’S “THE MUSEUM”
When migrants move from their homeland to a new country, they carry their memories, beliefs, traditions, and feelings of belonging with them. Arab Anglophone literature is a genre that deals with the distresses and difficulties of Arab and African migrants, including cross-cultural conflicts and Western perceptions and misconceptions of their identity, which lead to feelings of dislocation, alienation, and depression. The work of the Sudanese feminist and Scottish migrant Leila Aboulela (b. 1964) is part of the growing corpus of Anglophone Arab fiction. Most of her works explore the complex cultural perceptions between East and West in migration. In this discussion, the study elects to interrogate Aboulela's prize-winning short story, "The Museum" (1999), in which the Sudanese female protagonist, in Aberdeen, is torn between her expectations of integrating and improving her life and her feelings of isolation and strangeness in the host country. Thus, through a psychosocial analytic approach, the paper engages the concepts of "identity", "acculturation", and "integration" as tools to examine the East-West encounter in a migration experience. In so doing, the study elucidates the following issues: to what extent does Shadia, as a migrant, strive to adjust to the new culture in the receiving country? And how do the hostility and misconceptions of the West toward her African identity negatively affect her psychological well-being?
Introduction
Migration research has a long history of negotiating cross-cultural encounter and social psychology. The migration process can be better understood through three distinct stages: pre-migration, during migration, and post-migration. Generally speaking, migrants leave their homeland in search of better social and economic chances. However, in some cases, migrants in the post-migration stage experience various anxieties during their attempts to adjust to the culture of the new land, for several reasons. Two of the crucial reasons, according to C. C. Sangalang, D. Becerra, F. M. Mitchell, and I. Kim, are the "dissonance between one's culture of origin and the host culture (acculturative stress)… [and] experiences of racial/ethnic discrimination" (2019, p. 910). Thus, considering these intricacies and drawing on psychological studies of migration, the study focuses on the impact of the cultural clash and of the West's misleading image of Africa on the psyche of the Sudanese female migrant in Leila Aboulela's "The Museum", in two main settings: Aberdeen University and the African museum. The paper is organized as follows: the first part defines the acculturation process with a brief outline of the primary concepts that structure this study, namely 'identity' and 'integration', which offer a framework for the analysis of the protagonist's migration experience. Consequently, the study answers the following: to what extent did the protagonist, Shadia, as a migrant, attempt to integrate into Western society? And how did the hostility and misconceptions of the West toward her African identity lead to psychological injuries?
Conceptual Framework
Identity is considered a key aspect in all migration studies. Conceptually, it is defined as a group of implications that help individuals to know who they are, and to determine whether they are active members of a particular group, or endowed with specific features that identify them as special persons. Moreover, identity can be perceived as personal beliefs in relation to social groups, such as religious, ethnic, or racial groups. In addition, Dominic Abrams and Michael A. Hogg, in "Social identifications: A social psychology of intergroup relations and group processes", perceive identity as a person's concept of who he/she is in the eyes of his fellow-men and in the eyes of the Other (2006, p. 2). Migration transforms identity, substituting the sense of who the individual is with the sense of where he/she fits and what his/her future roles within the new society are. Thus, migrants' identities change according to new dynamic contexts in which new forms of cultural identities are produced. In the host country, migrants feel alienated and isolated when faced with the shocking reality of exclusion and rejection, which differs from their impression of the receiving country as a place better than one's country of origin. Thus, by experiencing exclusion, migrants find themselves devoid of history or image and, consequently, try to search for social approval through a psychosocial process termed the acculturation process.
The acculturation process, according to J. W. Berry (2001), refers to change and adaptation. In "A psychology of immigration", Berry divides acculturation into four strategies, namely assimilation, integration, separation, and marginalization. Since our focus of interest is on integration, it is important to know what it means. For Berry, integration is defined as "maintaining one's original culture…while engaging into a positive communication with the receiving society" (2001, p. 619). He further adds that this strategy can encourage positive adaptation of migrants only if the receiving culture endorses cultural diversity through openness and inclusion. In this sense, integration is known as "sociocultural integration", which refers to the individual's ability to successfully cope with daily events and demands in the new land. This process includes learning a new culture/language and forming interpersonal relations with members of the new society, such as friendships, enterprises, and marriages. Acculturation, as a bridge between the migrant's culture of origin and the culture of the new society, can lead to an emotional reaction referred to as "acculturative stress" (2001, p. 624) when the migrant fails to achieve this strategy.
Acculturative stress is usually associated with the migrant's negative integration, which includes stereotyping, discrimination, and rejection by the host society, leading, on a psychological level, to identity confusion, stress, depression, and culture shock experienced by some acculturating migrants. João Sardinha, in "Immigrant Associations, Integration and Identity", holds that discrimination and rejection are central barriers that exacerbate psychological dysfunctions among migrants, including feelings of isolation and marginalization. Such barriers are best described by Sardinha (2009) as "institutional, coming in the form of unequal citizenship rights, …or structural barriers in different public spheres… Additionally, they may also be societal or individual, coming in the form of racism, discrimination and distancing. Differences related to the immigrants' culture of origin or professed religion, racist perceptions and linguistic differences are such barriers that can curtail integration. Immigrants cannot successfully be inserted unless the host society is ready to accept their differences and receive their contributions" (pp. 38-39). These barriers can impede the migrant's attempt to integrate into the new society, leading to unexpected depressive feelings contrary to what an individual expects from the receiving country, which consequently cause migrants to underestimate their origin.
A corresponding concept for understanding acculturation in migration is cultural identity. Cultural identity refers to the characteristics and knowledge of every individual, including the totality of attitudes, behaviors, history, place, nationality, gender, language, religious beliefs, and sexual orientation. D. Bhugra (2004), in "Migration, distress and cultural identity", posits that religious beliefs are important aspects that build the individual's cultural identity, as they preserve one's values and give him/her a sense of belonging anywhere away from home (p. 129). That is why losing the migrant's moral values, or feeling guilty for deserting religious values, are aspects that lead to "cultural bereavement" (Eisenbruch, 1991, p. 674). In the migration context, cultural identity is commonly formed by the cultural norms of the group to which migrants belong and the new environment they become part of. Thus, the contact between the migrant and the host society may lead to assimilation, rejection, integration or deculturation. In terms of rejection, Bhugra holds that the individual, or the collective group to which he/she belongs, withdraws from the larger society. With deculturation, the individual experiences a loss of cultural identity, alienation and acculturative stress that further add to the sense of failure, loss and poor self-esteem (2004, p. 133).
Leila Aboulela
Leila Aboulela, an acclaimed writer, was born in Cairo to a Sudanese father and an Egyptian mother in 1964. She grew up in Khartoum where, although she was a Muslim, she went to a Catholic school and the Khartoum American School, an education which inspired her with insights into other cultures and beliefs. She belongs to the Arab Anglophone writers who use English rather than Arabic to avoid "cultural restriction and censorship and to optimize exposure" (Nash, 2007, p. 12). Her residence in Britain provides her with a subject matter that shapes an emerging awareness of the conflicts of migration and the difficulty of creating a sense of home in the new country of residence. Her literary publications are now widely recognized by Western critics and attract the interest of academics and researchers alike. As a migrant writer, her writings go beyond notions of diasporic lives and transnational experiences, standing between the culture of her homeland and that of the host country and "equipped with first-hand knowledge of both cultures" (Sarnou, 2014, p. 68). She has remarkably established a significant literary reputation, winning several awards and receiving critical praise from two of Africa's leading contemporary writers, Ben Okri and J.M. Coetzee.
Like most of her fellow Arab Anglophone writers, she engages her reader with themes such as double-consciousness, hybridity, in-betweenness, and transcultural singular experiences, as well as questions of stereotyping, ethnic representation, reception, and identity formation (Al Maleh, 2009, p. xi). She stresses the idea that hybridization in her lifestyle in Scotland is a way of inscribing her African and Muslim identity in the mainstream culture of the British. This is clearly stated in Geoffrey Nash's words: "Aboulela's residence in Britain provided her with a subject matter: a terrain against which she could not only set her Sudanese heritage, but which she could employ to encapsulate a new identity: that of the Muslim Arab/African woman in exile" (2007, p. 135).
"The Museum" (1999)
Moving to Aboulela's "The Museum" (1999), it was the winner of the first Caine Prize for African Writing in 2000. It was selected among fifteen short stories by diverse women writers in Africa to be published in the short story collection Opening Spaces: An Anthology of Contemporary African Women's Writing (1999). Vera, the editor of this collection, states that Leila Aboulela "welcomes the reader of 'The Museum' with a controlled and confident exploration of a woman in exile as she ponders the dichotomies of arranged marriages, the transforming power of an overseas education, the imbalance of family ties […]". More recently, Arifa Akbar, in her review "Elsewhere, Home by Leila Aboulela", claims that homesick immigrants and Islam are the main themes in Aboulela's stories (2018). Additionally, Porochista Khakpour, in her review "Stories of the Muslim Immigrant Experience, From a Sudanese Writer Now Living in Scotland", describes Aboulela's use of realism in "The Museum" as a "vivid reflection in a pond, as accurate as glass's gaze but rippled to capture life as a thing shivering and fluid even when seemingly still" (2019).
Depiction of Alienation at Aberdeen University
Shadia, the protagonist, a Sudanese postgraduate student, moves from Khartoum, the capital of her Arab African Islamic country, heading to Aberdeen in search of a better education. The story begins with a graphic portrayal of her disorientation, which results from the strange appearance of her Scottish colleague with his "long, straight hair that tied up with a rubber band" and "his silver earrings". This different look, through Shadia's cultural lens, signifies "the strangeness of the West, another culture shock" (Aboulela, 2018, p. 157). It makes her feel afraid, as she has never seen such a man. In the early days of the term, Shadia is also confused by the "cold" and "grim" inhospitable university. She describes her perplexity as that of "[s]omeone tossed around by monstrous waves. Battered as she lost her way to the different lecture rooms…The course required a certain background, a background she didn't have…she and the other African students, the two Turkish girls and the men from Brunei. As this congregation from the Third World whispered anxieties in grim Scotland corridors… Us and them, she thought" (Aboulela, 2018, p. 158). The above quotation signifies the migrant's loss of social role through becoming alienated in an unfriendly sociocultural environment. "Us" and "them", in the above quotation, stand for the opposing interaction between self-representation and social categorization, which establishes social alienation.
Likewise, Aboulela represents the confusion of other migrant students by pointing to a racist incident, recounted by Badr, Shadia's Malaysian friend, about being attacked in his house by racists who damaged his window, just after Shadia informs the reader about the glossy university handbook for international students, which implies that they are asked to be grateful, to accept the unfriendly treatment given to them, and to surrender to the vague and incomprehensible course system at Aberdeen University. Badr's story revisits Castro's view of racism and discrimination as critical barriers that disperse the migrant's expectations of a successful integration into the new culture of the host country.
Although this unfriendly environment makes Shadia feel out of place, she belongs to a kind of migrants known as "the bearers of hope", a term coined by C. Wagner to describe those who migrate seeking a better standard of living or the fulfilment of their desire to obtain a higher education. Certain features are allied to this group, including preparedness for either opposition to or integration into the host country (Wagner, 2016, p. 241). Thus, unlike her "Third World" colleagues who avoid interaction with the other, Shadia manages to get a better understanding of Scottish society through her growing cross-cultural relationship with Bryan, her Scottish male colleague, as he knows all the lectures and knows the system. Interestingly, the issue of an Arab female migrant's emotional relationship with a Western non-Muslim man is a recurring topic in Aboulela's work. Despite her sense of strangeness and alienation, Shadia finds relief in beginning her friendship with Bryan, to cope with the new society, hoping to gain social approval, praise, and admiration. Her attitude, at this point, recalls Berry's term "sociocultural integration" (2001).
Shadia, in her relationship with Bryan, tries to bridge the gap of communication between two different cultures to overcome her feelings of inferiority. Sometimes, Shadia achieves this by trying to be superior to Bryan. This is shown in a comparison drawn by Shadia between the Nile in Khartoum and the river Dee in the host country, when she says that the Nile is greater than the Dee and tells Bryan that his Dee is nothing but a stream. Bryan, on the other side, does not exert effort to defend his river Dee. In another situation, when she expresses her dislike for Bryan's earrings, he immediately tugs them off. At this moment, she feels troubled as "he wasn't smiling". She fails to understand whether he does so to please her or for another reason. One single time, he expresses his admiration for her walking style, saying, "Ye walk like a princess" (Aboulela, 2018, p. 171). It is noticeable that Shadia's overt offensiveness and snobbery towards Bryan, and her inability to comprehend his feelings towards her, preclude any possibility of mutual understanding between East and West.
Though Shadia's relationship with Bryan provides her with feelings of integration in her new place, it creates inside her a sense of guilt, which "was like a hard-boiled egg stuck in her chest" (Aboulela, 2018, p. 175). Entering the cafeteria hand in hand with Bryan is also deplored by some of Shadia's foreign colleagues who possess the same cultural identity. For example, one of her friends, the Turkish girl, "raised her perfect eyebrows" as a sign of condemnation, and her friend Badr "quickly looked away" (Aboulela, 2018, p. 169) when his eyes and Shadia's met. Shadia's sense of guilt results from her respect for and conformity to her Arab Islamic identity, which considers any premarital relationship between men and women a sin that damages the family's reputation and violates the teachings of Islam. Psychologically, this conflict inside her recalls the "acculturative stress" and "cultural bereavement" which result from the dissonance between one's culture of origin and the host culture.
Invisibility of African Identity at the African Museum
Museums are, by and large, supposed to keep and display cultural heritage, to transmit its real meaning. However, R. MacLeod, in "Postcolonialism and museum knowledge: revisiting the museums of the Pacific", describes museums as "embodiments of possession and power, part of whose business was setting boundaries-architectural and conceptual-imposing hierarchies and structuring meanings" (1998, p. 313). In the same vein, Michael Baxandall asserts that it is impossible to display other cultures without putting a construction upon them (1991, p. 34). This notion is further explicated by W. E. B. Du Bois' view that many migrant visitors face a dilemma and feelings of alienation when entering a national or international museum where they are not figured. In this regard, the museum as represented in Aboulela's "The Museum" exemplifies the Western misrepresentation of Africa. Towards the end of the story, Shadia is invited by Bryan to visit an African museum in Aberdeen. There, she is stunned by the same negative feelings of rejection and inferiority she had in her first days at university. During their visit, Shadia and Bryan come across a statue of "a Scottish man from Victorian times…, [sitting] on a chair surrounded with possessions from Africa" (Aboulela, 2018, p. 176). Staring over the African artifacts, Shadia is unable to identify herself or her history with anything in this museum, for "[n]othing was of her, nothing belonged to her life at home, what she missed. Here was Europe's vision, the clichés about Africa: cold and old" (Aboulela, 2018, p. 177). In this sense, the African objects fail to speak with their own voice; instead, they speak with the dominant voice of the host country. That is why the protagonist feels denigrated and outraged by the lies transmitted by the orientalist images of Africa.
The misleading role of the museum in shaping how the West perceives Africa recalls Edward Said's words on the East-West encounter. In his groundbreaking book Orientalism (1978), Said demonstrates how the West has misrepresented the Orient, and how the East, or the Orient, has always been represented not by itself, but by the Occident as a contrast to Europe. This is best described by Said as follows: "It is Europe that articulates the Orient; this articulation is the prerogative, not of a puppet master, but of a genuine creator, whose life-giving power represents, animates, constitutes the otherwise silent and dangerous space beyond familiar boundaries" (p. 57). In a similar vein, John McLeod (2020) explains this opposition by saying that it "[i]s not of equal partners. The orient is frequently described in a series of negative terms that serve to buttress a sense of the west's superiority and strength…thus…east and west are positioned through the construction of an unequal dichotomy. The west occupies a superior rank while the orient is its 'other' in a subservient position" (p. 41). In this context, Aboulela problematizes Western museums, revealing their Eurocentric prejudices against the East.
As a "bearer of hope", Shadia comes to this museum "expecting sunlight and photographs of the Nile, something to appease her homesickness, a comfort, a message. But the messages were not for her, not for anyone like her". Unlike the toxic exhibits detached from place and time, she asserts that she is "too modern, too full of mathematics" (Aboulela, 2018, p.178). It is noted that Bryan's preoccupation with what is written on the glass of the cabinet, 'that strength" in his eyes, highlights his strong faith in those days, which adds to her sense of alienation. As a result, her relationship with Bryan comes to an end because, for her, his vision of Africa is a symbol of the European's image of her homeland, which denigrates her identity. One would see that the confusion here is that Bryan sees Shadia as an African from Sudan in the sense of being a Black African, whereas Shadia sees herself as an Arab in Africa. Revisiting Berry's concept of "deculturation", the protagonist's poor self-esteem, at this moment, is intensified, leading to feelings of worthlessness, hopelessness and helplessness, which are acute symptoms of depression. That is why when Bryan tries to discuss his Orientalist misconceptions with her, Shadia feels exhausted, too tired to challenge these misrepresentations. Hence, instead of getting them closer to each other, the visit to the museum produces a cultural gap between them, mystifies the protagonist's vision of her own cultural identity, and causes psychological injuries. Commenting on her helpless behavior towards rejection, the narrator says, If she was strong she would have explained and not tired of explaining. She would have patiently taught him another language, letters curved like the epsilon and gamma he knew from mathematics. She would have showed him that words could be read from right to left. If she was not small in the museum, if she was really strong, she would have made his trip to Mecca real, not only in a book. (Aboulela, 2018, p. 182) This was indeed the reason for the separation. Shadia, as an Arab woman from Sudan in Africa, failed to reveal her true identity to Bryan as an Arab but instead pretended to be an African. And having already presenting herself as an African to Bryan she should have expected nothing less order than Bryan's vision of her as an African in the right sense of the concept which Europeans understand. So in effect Shadia's identity alienation arose fundamentally from her false identity definition and from clash of cultures. This was further proven by the absence of a picture of the River Nile-a symbol of life among the Arabs of Sudan and Egypt in the Museum.
Conclusion
Leila Aboulela's "The Museum" (1999) provides an authentic model of cross-cultural encounter, highlighting two important issues: first, the "integration" strategy adopted by Shadia, the Sudanese female protagonist, as a way to cope with the new culture of the receiving land; second, the feelings of alienation resulting from the misleading European vision of African culture exhibited in the Western museum, which leads to a sense of "deculturation". The protagonist, as a representative of the Arab African migrant, is torn between her desire to assert her identity and the need to accommodate herself to a new society that is unfriendly and indifferent. Analyzing the short story has provided a clear image of the hostile and unwelcoming host land, represented by behaviors of racism, discrimination, and misrepresentation of the migrants' "cultural identity", and of its negative impact on the psyche of newcomers, leading to serious depressive symptoms such as feelings of alienation, guilt, and hopelessness. The migrant's daily life is thus marked as complex, torn between continual attempts to become part of the new culture while resisting exclusion and deprecation. Finally, through the depiction of Shadia's fateful visit to the Scottish museum, Aboulela removes the museum from its basic setting as a knowledge-based location and recasts it as a place spreading delusions and misrepresentations of ethnic identities.
Screening for CLCN5 mutation in renal calcium stone-forming patients
Thirty-five patients (23 males and 12 females), aged 35 ± 13 years, presenting either idiopathic calcium nephrolithiasis, nephrocalcinosis, or mild renal failure with idiopathic calcium nephrolithiasis, were selected for the analysis of low molecular weight proteinuria and for screening of possible mutations in the chloride channel gene CLCN5. The urinary ratio of β2-microglobulin to creatinine (β2M/Cr) was very high in a transplanted woman with nephrocalcinosis (>3.23 mg/mmol) and slightly high in five patients (>0.052 but <1.0 mg/mmol) with multiple urological manipulations. The other studied patients showed a β2M/Cr ratio in the normal range (0.003-0.052 mg/mmol) without gender difference (p > 0.05). Mutation analysis of the CLCN5 gene was performed in 26 of the 35 selected patients (11 with idiopathic hypercalciuria; 6 men with normal calciuria; 3 with mild renal insufficiency; and 6 with nephrocalcinosis) and was normal in all subjects, even in those with abnormal low molecular weight proteinuria. Conclusion: CLCN5 gene mutation is not a common cause of kidney stone disease or nephrocalcinosis in the group of Brazilian patients studied.
INTRODUCTION
The genetic background of idiopathic calcium nephrolithiasis is unknown. Some advances have been made in the understanding of disorders that can exhibit nephrolithiasis as a symptom, such as primary hyperoxaluria (Danpure et al. 1993), cystinuria (Stoller et al. 1999) and Dent's disease (X-linked hypercalciuria and nephrolithiasis) (Scheinman et al. 1993), giving some insight into the etiopathogenesis of idiopathic calcium nephrolithiasis. Dent's disease, for example, is a rare form of renal tubular disorder characterized by hypercalciuria and low molecular-weight proteinuria besides all the features of idiopathic nephrolithiasis, such as calcium stone formation and occasionally nephrocalcinosis and renal failure (Scheinman et al. 1993, Frymoyer et al. 1991, Wrong et al. 1994, Igarashi et al. 1995, Hoopes et al. 1998). The discovery of an apparently idiopathic hypercalciuric man who was in fact a true case of asymptomatic Dent's disease (Scheinman et al. 1993) prompted a group of stone investigators to look for mutations in CLCN5, the gene that encodes the ClC-5 chloride channel. Mutation in CLCN5 is the pathophysiological basis of Dent's disease, which can also present with calcium stone formation with idiopathic hypercalciuria, nephrocalcinosis and renal insufficiency. Scheinman et al., in a study that screened 101 patients with idiopathic hypercalciuria for low molecular weight proteinuria (LMWP), found only slight abnormalities in the LMWP of nine patients, none of whom had a mutation in CLCN5 (Scheinman et al. 2000). Nevertheless, the LMWP (β2-microglobulin or retinol-binding protein) assay remains a useful tool for screening for genetic involvement in these patients, who are usually male. The same procedure can also be used for the screening of female carriers of the genetic defect (Scheinman et al. 2000).
In idiopathic lithiasis, which is the main etiologic diagnosis among renal stone formers, hypercalciuria is the major urinary risk factor identified. The cellular mechanism of this metabolic disorder is unclear. Nephrocalcinosis is found during imaging studies of renal stone patients, and on such occasions idiopathic hypercalciuria, hyperparathyroidism and distal acidification defects must be investigated.
Among all the factors involved in lithogenesis, the present study aims to search for CLCN5 gene mutations in patients with idiopathic renal calcium lithiasis and/or nephrocalcinosis.
Patients
Recurrent calcium stone formers and/or patients with nephrocalcinosis from the Outpatient Clinic of Pedro Ernesto Hospital, Rio de Janeiro, underwent a routine etiologic and metabolic investigation as previously described (Rebelo et al. 1996). Thirty-five patients (23 males), aged 35 ± 13 (SD) years, with idiopathic calcium nephrolithiasis, nephrocalcinosis or mild renal failure with idiopathic calcium nephrolithiasis were selected. The procedure, briefly, comprised clinical history and physical examination; review of previously performed abdominal roentgenograms and renal ultrasound; chemical composition analysis of stones (if available); urine spot sample for urinalysis and qualitative cystine investigation; urine culture; and a 24-hour urine collection and fasting venous blood sampling to determine creatinine clearance, proteinuria, calcium, phosphate, uric acid, electrolytes, urine citrate and oxalate, peripheral blood cell count and serum parathyroid hormone (iPTH). Hyperparathyroidism or other hypercalcemic disorders, the complete form of distal renal tubular acidosis and anatomic abnormalities were excluded. Tests of distal acidification ability were used in order to detect the incomplete form of distal renal tubular acidosis (iRTA). The test consisted of urinary pH measurement after 12 hours of water deprivation. If the pH was less than 5.5, distal acidification was interpreted as normal; otherwise, the test was complemented by oral furosemide (Lasix, 40 mg). In this case, urinary pH was measured hourly, up to 4 hours post-ingestion; if urine pH was less than 5.5 at any time, acidification was interpreted as normal; otherwise, a short ammonium chloride loading test was performed (the patient was challenged with an acute acid load of ammonium chloride, 0.1 g/kg body weight, ingested over 45 minutes to 1 hour, and urine pH was measured hourly during the 8 hours following drug ingestion). The averages of the 6 last samples were used to interpret the test. The acidification ability is normal if the attained pH is 5.3 or less. If the patient fails to lower the pH to 5.3 or less, i.e. pH > 5.3, the diagnosis of incomplete distal renal tubular acidosis is considered.
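For clarity, the staged interpretation of this acidification protocol can be laid out as a simple decision rule. The sketch below is purely illustrative (not clinical software and not the authors' code); the function name and the staged inputs are hypothetical, while the pH thresholds come from the protocol above.

```python
# Illustrative decision rule (not clinical software, not the authors' code)
# for the staged distal acidification protocol described above. The function
# name and staged inputs are hypothetical; the pH thresholds are from the text.
def interpret_acidification(ph_after_deprivation,
                            ph_min_after_furosemide=None,
                            ph_mean_nh4cl=None):
    """Return 'normal', 'iRTA' or 'indeterminate'."""
    # Stage 1: urinary pH after 12-hour water deprivation; < 5.5 is normal.
    if ph_after_deprivation < 5.5:
        return "normal"
    # Stage 2: oral furosemide 40 mg, hourly pH up to 4 h; any value < 5.5 is normal.
    if ph_min_after_furosemide is not None and ph_min_after_furosemide < 5.5:
        return "normal"
    # Stage 3: ammonium chloride load (0.1 g/kg); mean of the last 6 hourly
    # samples <= 5.3 is normal, otherwise incomplete distal RTA is considered.
    if ph_mean_nh4cl is not None:
        return "normal" if ph_mean_nh4cl <= 5.3 else "iRTA"
    return "indeterminate"  # later stages not performed

print(interpret_acidification(5.9, 5.7, 5.6))  # -> iRTA
```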
Analytical Methods
Blood and urine biochemical parameters were determined by standard techniques previously used (Rebelo et al. 1996). Immunoreactive parathyroid hormone in serum (iPTH, intact molecule) was assayed by radioimmunoassay (Immunolite kits, Diagnostic Products, Los Angeles, CA, USA). Urinary citrate was measured enzymatically with citrate lyase (Sigma-Aldrich Corporation, St. Louis, MO, USA) and oxalate by an enzymatic-colorimetric assay (Sigma-Aldrich Corporation, St. Louis, MO, USA). Urine pH was measured using a pH meter (Metronic, Minneapolis, MN, USA). The term idiopathic hypercalciuria is applied to hypercalciuria with normocalcemia in the absence of other mineral disorders known to cause hypercalciuria.
β2-microglobulin (β2M)
The subjects collected 250 ml of the first morning urine in sodium azide (200 mg/l final concentration), which was sent to the laboratory at room temperature. The pH was measured immediately and, if necessary, adjusted to pH > 5.5 with alkali. β2M was measured by fluoroimmunoassay (Vidas β2-microglobulin; bioMérieux, MO, USA) within four hours of collection. Creatinine and total proteinuria were also evaluated in the same sample.
The β2M results were expressed in relation to the creatinine in the same sample (β2M/Cr ratio; mg/mmol) and as a concentration (mg/l). The normal β2M/Cr ratio is less than 0.052 mg/mmol (Scheinman et al. 2000).
The reference concentration ranges (mg/l) are: 20 to 39 years old, mean 0.01 and upper limit 0.74; 40 to 59 years old, mean 0.05 and upper limit 1.2.
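As a minimal illustration of the screening arithmetic, assuming β2M measured in mg/l and creatinine in mmol/l as above (the 0.052 mg/mmol cutoff is from the text; the sample values are hypothetical):

```python
# Minimal sketch of the beta2-microglobulin screen, assuming beta2M in mg/l
# and creatinine in mmol/l; the 0.052 mg/mmol cutoff is from the text and
# the sample values are hypothetical.
def b2m_cr_screen(b2m_mg_per_l, creatinine_mmol_per_l):
    ratio = b2m_mg_per_l / creatinine_mmol_per_l  # beta2M/Cr ratio [mg/mmol]
    return ratio, ("abnormal" if ratio > 0.052 else "normal")

print(b2m_cr_screen(0.30, 4.0))  # -> (0.075, 'abnormal')
```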
Mutation Analysis of the CLCN5 Gene
In 26 patients, leukocyte DNA was extracted (Miller et al. 1998) and used with CLCN5-specific primers for polymerase chain reaction (PCR) amplification under the conditions described in Table I. The PCR products were purified (QIAquick PCR purification kit; Qiagen, Valencia, CA, USA) and the DNA sequence of the PCR products was determined by Taq polymerase cycle sequencing and a semi-automated detection system (Perkin-Elmer, Applied Biosystems, Foster City, CA, USA). The primers were designed based on the CLCN5 gene (GenBank accession number 15309448).
RESULTS
Urinary β2-microglobulin (β2M) was evaluated in 35 subjects (23 male), of whom 25 presented idiopathic calcium stone disease (3 with mild renal insufficiency), 6 presented nephrocalcinosis and 4 were asymptomatic offspring of stone and nephrocalcinosis patients. The means and medians of the results are shown in Table II. The urine pH varied from 5.53 to 7.60 (median 6.20). The results of β2M, expressed as a creatinine ratio, disclosed six patients as having abnormal low molecular weight proteinuria: the transplanted one and 5 cases with multiple urological manipulations for relief of stone obstructions; the total proteinuria was slightly increased, i.e., less than 700 mg protein/g creatinine. Overall, not including the transplanted patient, β2M represented less than 20% of total proteinuria (3% to 16%). In the transplanted patient, β2M corresponded to more than 73% of total protein excretion (Table II).
DNA Analyses for CLCN5 Mutations
The CLCN5 gene was analyzed in 26 subjects: 11 with idiopathic hypercalciuria; 6 men with idiopathic calcium lithiasis and normal calciuria; 5 with calcium stone disease and mild to moderate degrees of renal insufficiency; and 6 with nephrocalcinosis (3 without renal stones). A family history of renal stone disease could be obtained in 23 cases and was positive in about 70%. Some features of these patients are given in Table III.
Direct DNA sequencing of the CLCN5 gene did not show any mutation even in those cases with low molecular weight proteinuria.
DISCUSSION
Renal stone disease is a clinical condition with high prevalence (affecting 1 to 12% of the population), significant recurrence rates and high morbidity, and it often requires hospitalization for relief of renal colic or to treat unusual complications, such as acute urinary obstruction, infection or urinary sepsis.
In idiopathic calcium stone disease, the high frequency of a familial history of nephrolithiasis, as in this case series, suggests a genetic basis. Although there are many candidate genes that could account for idiopathic hypercalciuria (the vitamin D receptor gene, the sodium-phosphate co-transporter gene, the human homologue of the rat soluble adenylate cyclase gene, the renal chloride channel gene and others), so far none has been found to be prevalent (Scheinman et al. 2000, Reed et al. 2002).
In this study, patient selection was based on the presence of either idiopathic calcium nephrolithiasis (with or without renal insufficiency) or an image of nephrocalcinosis, in order to increase the odds that the group would harbor a CLCN5 mutation. Another criterion was agreement to undergo blood and urine sampling for genetic analysis. The higher percentage of idiopathic hypercalciuria (52%) than previously described (Rebelo et al. 1996) is in part due to the selection criteria. As is known, patients with absorptive hypercalciuria type II on a low-calcium diet can reduce urinary calcium excretion to the normal range, and, in this study, dietary calcium content was not taken into account. This was the reason to include "normal" calciuric idiopathic nephrolithiasis, with or without nephrocalcinosis, in the search for CLCN5 mutations, although the mutation disease is generally described as hypercalciuric.
Although the clinical features of CLCN5 mutation diseases manifest mainly in affected men, female nephrolithiasis and/or nephrocalcinosis have been reported, but normally the carrier woman is not symptomatic (Scheinman 1998, Reed et al. 2002).
Based on the aforementioned clinical features, the patients were enrolled in a CLCN5 gene study; mutations of this gene lead to a rare condition that could be misinterpreted as idiopathic nephrolithiasis and nephrocalcinosis, since the affected patients can manifest any of the metabolic derangements seen in idiopathic nephrolithiasis. However, in contrast to idiopathic lithiasis, chloride channel disease has a worse prognosis in men, though not in women, owing to progressive renal failure culminating in end-stage renal failure at a young age.
None of our patients had a CLCN5 gene mutation, supporting the well-known idea that most calcium stone formers and nephrocalcinosis patients are not phenotypes of the chloride channel disease named Dent's disease or X-linked calcium nephrolithiasis (Scheinman et al. 2000).
Soluble urokinase plasminogen activator receptor (suPAR) as an early predictor of severe respiratory failure in patients with COVID-19 pneumonia.
As of April 1, 2020, 885,689 cases of infection by the novel coronavirus SARS-CoV-2 (COVID-19) had been recorded worldwide; 44,217 of those infected have died (https://www.worldometers.info/coronavirus). At the beginning of the illness, patients may experience low-grade fever or flu-like symptoms, but then severe respiratory failure (SRF) may suddenly emerge [1]. Increased circulating levels of D-dimers [1,2] suggest endothelial activation. The urokinase plasminogen activator receptor (uPAR) that is bound on the endothelium may be cleaved early during the disease course, leading to an increase of its soluble counterpart, suPAR [3]. If this holds true, then suPAR may be used as an early predictor of the risk of SRF.
The Hellenic Sepsis Study Group (HSSG, www.sepsis.gr) is collecting clinical information and serum samples within the first 24 h of admission from patients with infections and at least two signs of the systemic inflammatory response syndrome. Since March 1, 2020, 57 patients with community-acquired pneumonia and molecular documentation of SARS-CoV-2 in respiratory secretions were enrolled. Patients were followed up daily for 14 days; the development of SRF, defined as a PO2/FiO2 ratio less than 150 requiring mechanical ventilation (MV) or continuous positive airway pressure treatment (CPAP), was recorded. suPAR was measured by an enzyme immunoassay in duplicate (suPARnostic™, ViroGates, Lyngby, Denmark); the lower detection limit was 1.1 ng/ml. Measurements were performed and reported by one technician who was blinded to clinical information. The study endpoint was the prognostic performance of suPAR admission levels for the development of SRF within 14 days. Measured levels were compared to those collected from 15 patients with COVID-19 from the emergency department (ED) of Rush University Medical Center.
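For concreteness, the SRF endpoint can be written as a one-line predicate; the sketch below is purely illustrative and the function name is our own:

```python
# Simple helper (illustrative only) encoding the SRF endpoint used above:
# PO2/FiO2 ratio below 150 together with a need for MV or CPAP.
def meets_srf_criterion(pao2_mmhg, fio2_fraction, on_mv_or_cpap):
    return (pao2_mmhg / fio2_fraction) < 150 and on_mv_or_cpap

print(meets_srf_criterion(90.0, 0.8, True))   # 112.5 < 150 -> True
print(meets_srf_criterion(90.0, 0.5, True))   # 180.0      -> False
```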
Thirty-four (59.6%) patients were male and 23 (40.4%) female; the mean ± SD age was 64.0 ± 10.3 years, and the Charlson's comorbidity index was 2.70 ± 1.80. The mean ± SD admission total neutrophil count was 4414.1 ± 2526.5/mm³; the total lymphocyte count was 1149.1 ± 1131.4/mm³; the C-reactive protein was 73.1 ± 76.4 mg/l. Admission levels of suPAR were significantly greater among patients who eventually developed SRF (Fig. 1a). Circulating levels of suPAR were in the same range as those of the US cohort (Fig. 1b). Receiver operating characteristic curve analysis identified levels ≥ 6 ng/ml as the best predictor of SRF. At that cutoff point, the sensitivity, specificity, positive predictive value, and negative predictive value for the prediction of SRF were 85.7%, 91.7%, 85.7%, and 91.7%, respectively. The time to SRF was much shorter among patients with suPAR ≥ 6 ng/ml (Fig. 1c). The only admission variables that were independently associated with the development of SRF were male gender and suPAR ≥ 6 ng/ml (Table 1). A positive association was found between admission suPAR and D-dimers (r_s = +0.777, p < 0.0001). suPAR has been proposed as a biomarker for the risk of death. An analysis of the TRIAGE III trial in 4420 patients admitted to the ED in Denmark revealed that suPAR ranged between 2.6 and 4.7 ng/ml in 30-day survivors and between 6.7 and 11.8 ng/ml in 30-day non-survivors [4]. An early increase of suPAR has also been reported to be a predictor of 28-day outcome in sepsis [5]. uPAR is bound to the endothelial membrane and mediates the differential signaling between the cleaved and uncleaved forms of kininogen [3]. The positive association between D-dimers and suPAR suggests early, complex kininogen-uPAR interactions at the endothelial level in the early stages of COVID-19. Higher plasma levels of suPAR are predictive of, and potentially causally involved in, kidney disease [6], which can be a feature of severe COVID-19 infection.
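The four reported test statistics above follow directly from a 2×2 confusion matrix. The sketch below uses hypothetical counts chosen only because they reproduce the reported percentages; the letter does not publish the underlying table.

```python
# How the four reported statistics follow from a 2x2 table. The counts below
# are hypothetical, chosen only because they reproduce the reported
# percentages; the letter does not publish the underlying table.
TP, FN, FP, TN = 12, 2, 2, 22   # suPAR >= 6 ng/ml vs. development of SRF

sensitivity = TP / (TP + FN)    # 12/14 = 85.7%
specificity = TN / (TN + FP)    # 22/24 = 91.7%
ppv         = TP / (TP + FP)    # 12/14 = 85.7%
npv         = TN / (TN + FN)    # 22/24 = 91.7%
print(f"sens={sensitivity:.1%}  spec={specificity:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```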
The findings suggest that suPAR may identify early those patients who need intensified management, probably including anti-inflammatory treatment [6]. Whether modification of circulating suPAR is a useful therapeutic option will require further study.

Authors' contributions
NR and KA contributed to the collection and analysis of clinical data, critically revised the manuscript, and gave final approval of the version to be published. JEO and JR conceptualized the study design, contributed to the analysis of the data, critically reviewed the manuscript, and gave final approval of the final version to be published. SH participated in study design and data interpretation, critically reviewed the manuscript, and gave final approval of the final version to be published. EJGB conceptualized the study design, contributed to the analysis of the data, wrote the manuscript, critically reviewed the manuscript, and gave final approval of the version to be published.
Funding
The study was funded by unrestricted educational grants provided by the Hellenic Institute for the Study of Sepsis. Funds were also provided by Rush University Medical Center.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Ethics approval and consent to participate
Written informed consent was provided by all participants. The study was approved by the Ethics Committees of the participating hospitals.
Consent for publication
Not applicable.
Competing interests
JEO is a co-founder, shareholder, and CSO of ViroGates A/S, Denmark. JEO is an inventor on patents on suPAR owned by Copenhagen University Hospital Hvidovre, Denmark. JR is a co-founder and shareholder of Trisaq, a biopharmaceutical company that develops drugs that target suPAR. EJGB has received honoraria from AbbVie USA, Abbott CH, InflaRx GmbH, MSD Greece, XBiotech Inc., and Angelini Italy; independent educational grants from AbbVie, Abbott, Astellas Pharma Europe, AxisShield, bioMérieux Inc., InflaRx GmbH, and XBiotech Inc.; and funding from the FrameWork 7 program HemoSpec (granted to the National and Kapodistrian University of Athens), the Horizon2020 Marie-Curie Project European Sepsis Academy (granted to the National and Kapodistrian University of Athens), and the Horizon 2020 European Grant ImmunoSep (granted to the Hellenic Institute for the Study of Sepsis). All other authors have disclosed that they do not have any conflicts of interest relevant to this submission.
A review of groundwater in high mountain environments
Mountain water resources are of particular importance for downstream populations but are threatened by decreasing water storage in snowpack and glaciers. Groundwater contribution to mountain streamflow, once assumed to be relatively small, is now understood to represent an important water source to streams. This review presents an overview of research on groundwater in high mountain environments (as classified by Meybeck et al. (2001) as very high, high, and mid-altitude mountains). Coarse geomorphic units, like talus, alluvium, and moraines, are important stores and conduits for high mountain groundwater. Bedrock aquifers contribute to catchment streamflow through shallow, weathered bedrock but also to higher order streams and central valley aquifers through deep fracture flow and mountain-block recharge. Tracer and water balance studies have shown that groundwater contributes substantially to streamflow in many high mountain catchments, particularly during low-flow periods. The percentage of streamflow attributable to groundwater varies greatly through time and between watersheds depending on the geology, topography, climate, and spatial scale. Recharge to high mountain aquifers is spatially variable and comes from a combination of infiltration from rain, snowmelt, and glacier melt, as well as concentrated recharge beneath losing streams, or through fractures and swallow holes. Recent advances suggest that high mountain groundwater may provide some resilience—at least temporarily—to climate-driven glacier and snowpack recession. A paucity of field data and the heterogeneity of alpine landscapes remain important challenges, but new data sources, tracers, and modeling methods continue to expand our understanding of high mountain groundwater flow.
| INTRODUCTION
Mountains, which cover 24% of earth's land mass (Kapos, Rhind, Edwards, Price, & Ravilious, 2000), are a disproportionately important component of global water supply because they receive more precipitation than lowland areas, experience less evapotranspiration at high elevations, and contain large stores of water as snow and ice. Runoff from precipitation, meltwater from mountain snow pack and glaciers, and groundwater discharge (or exfiltration) provide a valuable water resource to surrounding areas which often include arid or semi-arid landscapes. Viviroli, Dürr, Messerli, Meybeck, and Weingartner (2007) estimate that more than half of mountain areas play either an essential or supporting role in downstream water supply. Furthermore, demand on mountain water resources is growing; Viviroli, Kummu, Meybeck, Wada, & Pierre (2020) estimate that 1.4 billion people will depend critically on mountain runoff by 2050.
In this review, we focus on areas that Meybeck, Green, & Vörösmarty (2001) classify (based on topographic roughness and maximum altitude) as "high and very high mountains" (e.g., much of the Andes, Himalayas, Karakoram, and Southern Rocky Mountains) and "mid-altitude mountains" (e.g., much of the Northern Rocky Mountains, European Alps, and Cascades; parts of the Sierra Nevada and Alaska Range) with a few examples from "high and very high plateaus" (e.g., mountainous portions of the Tibetan Plateau). Combined, these regions represent 15% of the earth's land area and are estimated to contribute 17% of global runoff (Meybeck et al., 2001). These areas exhibit alpine and subalpine characteristics such as steep slopes, exposed bedrock, talus fields, moraines, alpine grasslands, shrublands, and sub-alpine forest. For simplicity, we refer to these mid-altitude, high, and very high mountain regions as "high mountains" in the text. Some examples from "low mountain" environments are used where they make a relevant and transferable contribution to knowledge of higher mountain systems. These low mountain examples are explicitly identified as such.
Mountain regions are being subjected to larger temperature increases than lowland areas under anthropogenic climate warming. Average global temperature has increased by 1 °C above pre-industrial levels and is expected to reach 1.5 °C between 2030 and 2052 (Intergovernmental Panel on Climate Change [IPCC], 2018), with faster warming expected at higher elevations and depending on the season (IPCC, 2019; Vuille et al., 2018). This phenomenon, known as elevation-dependent warming, is of particular importance for the Andes and Himalayas where glaciers are located at very high elevations (Pepin et al., 2015). Changes in temperature dramatically alter mountain hydrological regimes by reducing water storage in snow and glaciers, and by increasing evapotranspiration and permafrost degradation (Barnett, Adam, & Lettenmaier, 2005; Immerzeel, van Beek, & Bierkens, 2010). Continuing and projected cryosphere decline will have negative impacts on downstream agriculture, hydropower, and water quality (IPCC, 2019). However, groundwater (subsurface water in the saturated zone) may provide resilience against the hydrological impacts of climate change in high mountain regions (Somers et al., 2019; Tague, Grant, Farrell, Choate, & Jefferson, 2008).
In mountain hydrologic research, the primary focus has often been cryosphere landscape features, such as glaciers and snowpack. While critically important, this research overlooks water that is "hidden" below the land surface. Our understanding of mountain groundwater processes has historically been limited by the scarcity and cost of well data, the complexity and heterogeneity of mountain aquifers (including bedrock structural features), and the variability of alpine climates (Manning & Solomon, 2005). Though some early studies hypothesized the importance of groundwater in the mountain hydrological system (Flerchinger, Cooley, & Deng, 1994; Forster & Smith, 1988b; Snow, 1972), groundwater was commonly considered a minor contributor to mountain streamflow because the steep slopes and shallow soil development were hypothesized to form small and short-lived storage reservoirs for groundwater (McGlynn, McDonnell, & Brammer, 2002; Weiler, McDonnell, Tromp-van Meerveld, & Uchida, 2005). However, recent work has demonstrated the substantial capacity for groundwater storage and discharge in mountain watersheds and its importance in buffering streamflow during dry periods (Liu, Williams, & Caine, 2004; Soulsby, Malcolm, Helliwell, Ferrier, & Jenkins, 2000 [low mountains]; Uhlenbrook, Frey, Leibundgut, & Maloszewski, 2002).
Groundwater processes in mountain regions differ from lower relief areas in three main ways: (a) water table position and hydraulic gradients are much higher which influence the dominant local flow paths and discharge rates (Forster & Smith, 1988a), (b) the near surface hydrogeologic stratigraphy is very complex due to the high energy depositional environment and glacial deposition processes (Cairns, 2014), and (c) the high relief of the surface topography drives deeper groundwater circulation, recharging regional and even continental scale flow systems and potentially allowing the geothermal temperature gradient to affect flow (Forster & Smith, 1988a).
We review primarily peer-reviewed research on groundwater processes in high mountain environments. Though our review is not exhaustive, we seek to provide a comprehensive overview of research on the subject. As summarized in Figure 1, we review studies from 1972 to present and note an increase in mountain groundwater research beginning in 2014. The Rocky Mountains are the most studied mountain range, while the Himalayas have received relatively little attention for groundwater research. We first outline the different types of high mountain aquifers and flow pathways that have been described in the literature and integrate them into a conceptual model of high mountain groundwater flow. Second, we describe research that has quantified the contribution of groundwater to streamflow in high mountain regions, mostly tracer and water balance studies. Third, we examine the suggested recharge sources and mechanisms for high mountain groundwater, and fourth, we look at numerical modeling approaches and their findings for high mountain environments as well as climate change impacts.
| MOUNTAIN AQUIFERS AND FLOW PATHWAYS
Sedimentologically, mountains are high-energy environments that experience a disproportionately large amount of weathering, erosion, and mass wasting. Furthermore, the vast majority of mountain ranges once hosted glaciers (or still do), leading to glacial depositional features such as till deposits, moraines, outwash plains, and so forth. Many of these deposits are highly heterogeneous and can act as storage reservoirs and/or conduits for groundwater. A single mountain watershed can include talus slopes, moraine and alluvial deposits, lacustrine clays, weathered and unweathered bedrock, geologic faulting, karst formations, permafrost, and rock glaciers (Barsch & Caine, 2007). Furthermore, these glacially derived features can control the presence and location of other alpine geomorphic features such as wetlands.
A considerable amount of research has focused on identifying subsurface features that store and transport groundwater in high mountains. In the literature, the importance assigned to different aquifers or pathways is somewhat dependent on the spatial scale of the study where headwater scale studies generally emphasize small scale coarse deposits and large-scale studies may attribute more flow to bedrock pathways.
| Coarse deposits
Coarse geomorphic units play an important role in storing and channeling groundwater flow in high mountains (Clow et al., 2003; Gordon et al., 2015; Hood & Hayashi, 2015; Käser & Hunkeler, 2016; Liu et al., 2004; Pierson, 1982; Somers et al., 2016; Szmigielski et al., 2018). Coarse deposits in high mountain regions include talus slopes, debris fans, alluvium, and some moraines. The relatively high permeability of these materials allows them to channel preferential flow from steep alpine ridges, glacier forefields, and through valley bottom sediments, and their high porosity allows for potentially significant groundwater storage (Figure 2).

[Figure 1: Histogram indicating the year of publication and geographic setting of high mountain groundwater studies included in this overview. Review papers for specific geographic regions and studies that use data from two ranges are colored as half and half. Studies labeled as modeling are pure theoretical modeling that is not associated with a field site. Only peer-reviewed publications (no dissertations or theses) in English are included in the plot. While not exhaustive, we aim to present a comprehensive overview of mountain groundwater research.]
Proglacial moraines, composed of mostly cobbles and boulders, have been identified as important landforms for groundwater storage in the Canadian Rocky Mountains (Hood & Hayashi, 2015). Langston, Hayashi, & Roy (2013) estimated groundwater flow through a proglacial talus and moraine complex in the Canadian Rockies using salt tracing and energy balance approaches, and found that groundwater flow dominates the water balance of a tarn lake. This technique provided one of very few field-scale measurements of the hydraulic conductivity of these materials, 10⁻³ m/s. Furthermore, groundwater flow through proglacial moraine/talus features can follow multiple, possibly disconnected flow paths, may exhibit distinct geochemical and hydrological characteristics (Roy & Hayashi, 2009), and can collectively dampen and delay the transmission of snowmelt. Hayashi (2019) expands on this research and shows that these talus aquifers have a fast recession of discharge (i.e., groundwater exfiltration) after recharge (e.g., from snowmelt or rainfall), followed by a longer, slower recession. Furthermore, Hayashi notes that the hydrogeologic setting of the talus deposits (e.g., internal deposition structure, adjacency to wetlands) further controls groundwater discharge.
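The two-stage recession described by Hayashi (2019) can be illustrated with a simple two-reservoir model, where discharge is the sum of a fast-draining and a slow-draining exponential. This is a minimal sketch with illustrative parameter values, not a calibrated model from the cited work.

```python
# Two-stage talus-aquifer recession: Q(t) = Qf*exp(-t/kf) + Qs*exp(-t/ks).
# All magnitudes and time constants are illustrative assumptions.
import numpy as np

def talus_recession(t_days, q_fast=8.0, k_fast=3.0, q_slow=1.0, k_slow=40.0):
    """Discharge (arbitrary units) t_days after a recharge pulse."""
    return q_fast * np.exp(-t_days / k_fast) + q_slow * np.exp(-t_days / k_slow)

t = np.arange(0, 61)
q = talus_recession(t)
# Early recession is dominated by the fast reservoir; after ~2 weeks the
# slow reservoir sustains low but persistent groundwater discharge.
print(q[0], q[14], q[60])
```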
Elsewhere, in the Colorado Rocky Mountains, talus fields were found to contribute more than 40% of total stream discharge during the summer (Liu et al., 2004). Glas et al. (2018) and Chavez (2013) propose conceptual models of groundwater recharge in proglacial valleys of Peru's Cordillera Blanca, where recharge is channeled through talus deposits, which line the valley walls, into an aquifer system beneath the valley floor. There, coarse talus aquifers are interbedded with and confined by fine glaciolacustrine clays to create confined and sometimes artesian aquifers.
Coarse alluvial deposits can also channel groundwater flow through high mountain systems. Hydraulic conductivity and gradients were measured in 13 monitoring wells in a 10 km² alpine headwater catchment in British Columbia, Canada, to investigate the groundwater transport of industrial contaminants. Most groundwater flow through the catchment was channeled through unconfined coarse basal alluvial deposits above shale bedrock, and groundwater flow accounted for approximately 15% of watershed outflow (Szmigielski et al., 2018). Käser & Hunkeler (2016) monitored a watershed in the Swiss Alps to assess the role of alluvial aquifers in basin discharge. The alluvial deposits were composed of sandy gravel and cobbles with variable silt content. Though the alluvial aquifer had limited spatial extent (3% of basin area), it played an important role in storing groundwater in the catchment and sustaining streamflow, by providing a third of total stream discharge during a drought. Furthermore, significant groundwater flow out of the watershed was found to occur through the alluvial aquifer below the stream channel.

[Figure 2: Schematic of groundwater flow through coarse high mountain geomorphic units including a debris fan, talus slope, and moraine complex. From Gordon et al. (2015).]
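As a rough illustration of how such field measurements translate into flow estimates, the sketch below applies Darcy's law (Q = K i A) to a hypothetical alluvial cross-section. All values are assumed for illustration and are not taken from Szmigielski et al. (2018) or Käser & Hunkeler (2016).

```python
# Back-of-envelope Darcy calculation for a coarse alluvial aquifer
# cross-section, Q = K * i * A. All values are illustrative assumptions.
K = 1e-3         # hydraulic conductivity (m/s), typical of coarse gravel
i = 0.05         # hydraulic gradient (m/m), steep mountain valley
width = 50.0     # aquifer width (m)
thickness = 5.0  # saturated thickness (m)

A = width * thickness  # cross-sectional area (m^2)
Q = K * i * A          # volumetric groundwater flow (m^3/s)
print(f"Groundwater flux: {Q:.4f} m^3/s ({Q * 1000:.1f} L/s)")
```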
| Bedrock
Groundwater flow through bedrock represents an important flow path in the high mountain hydrological system. Several studies in the related discipline of hillslope hydrology illustrate the importance of flow through bedrock in steep slopes (Box 1). Early hillslope hydrological studies suggested that subsurface flow was concentrated in the soil layer above bedrock in steep hillslopes (Mosley, 1979; Tani, 1997). Since then, several hillslope studies have debunked this assumption. Tromp-van Meerveld, Peters, & McDonnell (2007) used a sprinkler plot study to physically simulate the infiltration of precipitation into bedrock in the Panola Mountain Watershed, GA (low mountain environment). They initially anticipated that subsurface water would flow toward the stream through the unconsolidated layer (coarse sandy loam, high permeability) above the bedrock (granodiorite, relatively low permeability), which was assumed impermeable in previous studies. Instead, they found that 91% of the water applied to a 66 m² study patch infiltrated into the bedrock layer. Another hillslope sprinkler plot study in the western Cascade Range in Oregon attempted to quantify the amount of "deep seepage" in a 172 m² hillslope plot underlain by thin soil above andesite and coarse breccia. The authors define deep seepage as infiltrated water that does not resurface in a collection trench below the hillslope plot but instead is detected in the stream at the catchment outlet. They found that 27% of the applied water went to deep seepage (Graham, Van Verseveld, Barnard, & McDonnell, 2010).
BOX 1 INTEGRATING OUR UNDERSTANDING OF SURFACE AND SUBSURFACE HYDROLOGICAL PROCESSES IN MOUNTAINS
Current literature on groundwater processes in high mountain regions is divided across several sub-disciplines of hydrology including hydrogeology, surface hydrology, hillslope hydrology, and cryospheric science. Across, and even within, these sub-disciplines, there is variability in the terminology and assumptions used to describe groundwater flow and storage in high mountain regions, making it difficult to compare findings (Staudinger et al., 2019).
Mountain surface hydrology research papers tend to focus on shallow flow and use terms like soil water, interflow, groundwater runoff, old/new water, and deep seepage. Hydrogeology research papers are more likely to include deeper sub-surface flow and use terms such as groundwater discharge/exfiltration, shallow/deep groundwater flow, mountain-block recharge (MBR), and unsaturated and saturated zones. Further complication arises from the dual usage of "discharge," which can describe both the rate of stream flow and groundwater leaving the subsurface system. Rosenberry, Lewandowski, Meinikmann, and Nützmann (2015) suggest the term exfiltration should be used in place of groundwater discharge to surface water for disambiguation. In terms of assumptions, there is a lack of consistency in defining the depth of watersheds (Condon et al., 2020), which is particularly important in mountain watersheds where the steep topography drives deeper groundwater circulation. Additionally, mountain hydrological studies sometimes assume that all groundwater recharge is returned to a river within the surface watershed. This assumption must be carefully examined, as small headwater catchments are more likely than other catchments to be "leaky" in that groundwater export is a non-negligible component of the water balance (Fan, 2019).
Groundwater and surface water are deeply integrated and represent a single resource (Winter, Harvey, Franke, & Alley, 1999). Therefore, more consistency and clear definition of groundwater terminology and assumptions used may facilitate intercomparison between studies and narrow the gap between hydrologic and hydrogeologic research in high mountains.

Some bedrock studies focus on flow through fractured and weathered shallow bedrock, occasionally assuming that deeper bedrock remains impermeable. Other research examines flow through deeper competent bedrock, which may still be fractured to a lesser extent. Frisbee et al. (2011) tested these competing conceptual models of high mountain streamflow generation using stream chemistry and numerical modeling. Their results suggested that streamflow was generated through both hillslope response and fully three-dimensional flow through bedrock (Figure 3b) and was not merely an aggregate of near-surface hillslope responses (Figure 3a; also see Voeckler et al. (2014) for a low-mountain examination of shallow versus deep bedrock flow). The distinction between shallow and deep bedrock is relative and defined on a case-by-case basis. Related research on MBR usually includes groundwater flow through deep bedrock aquifers to discharge in higher order streams in adjacent valleys, and is summarized later in this review.
| Shallow, weathered bedrock
Given that bedrock permeability decreases exponentially with depth (Ren, Gragg, Zhang, Carr, & Yao, 2018), some studies rely on the assumption that groundwater flow through bedrock is concentrated near the bedrock surface where the rock may be more heavily fractured (Flerchinger, Deng, & Cooley, 1993). For example, a recent study of borehole data in a fractured granite aquifer in the Laramie Range of the Rocky Mountains in Wyoming indicated that the hydraulically significant zone, where hydraulic conductivity was above 10⁻¹⁰ m/s, was confined to depths shallower than 40-53 m below ground surface (Ren et al., 2018). Additionally, Andermann et al. (2012) used hydrograph data to calculate groundwater transit times that correspond to fractured basement rock. Unfortunately, it is rare for hydrological studies in mountain regions to have access to substantial data on the hydrogeological properties of bedrock, due to the cost of collecting such data and the difficulty of accessing remote and rugged locations.
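The exponential decay of permeability with depth is commonly parameterized as K(z) = K0·exp(−z/d). The sketch below uses illustrative values of K0 and d (assumptions of ours, not parameters fitted by Ren et al. (2018)), chosen so that the 10⁻¹⁰ m/s threshold falls within the reported 40-53 m range.

```python
# Exponential permeability-depth model for fractured bedrock:
# K(z) = K0 * exp(-z / d). K0 and d are illustrative assumptions.
import math

K0 = 1e-7  # near-surface hydraulic conductivity (m/s), assumed
d = 7.0    # e-folding depth scale (m), assumed

def conductivity(z_m: float) -> float:
    """Hydraulic conductivity (m/s) at depth z_m below ground surface."""
    return K0 * math.exp(-z_m / d)

# Depth at which K falls below the "hydraulically significant" threshold
# of 1e-10 m/s used in the Laramie Range study:
threshold = 1e-10
z_cut = -d * math.log(threshold / K0)
print(f"K drops below {threshold} m/s at ~{z_cut:.0f} m depth")
```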
| Deep bedrock
The depth of a local groundwater flow system is understood to increase with topographic relief (Toth, 1963). Therefore, mountain regions should theoretically experience enhanced groundwater circulation depths compared to low-relief areas. Accordingly, several studies note the contribution of both shallow and deeper bedrock aquifers to streamflow in mountains. A recent hydrochemical study of Mount Daisen Volcano (a high mountain in a low- to mid-altitude mountain range), Japan, detected both shallow and deep bedrock groundwater contributions to streamflow in two small (4.0 and 6.6 km²) headwater catchments. Deep bedrock groundwater, flowing through ash, pumice, and pyroclastic deposits, reportedly dominated streamflow and contributed more streamflow per unit catchment area further downstream (Fujimoto et al., 2016).
The temperature and geochemistry of high mountain groundwater and springs also provide evidence of circulation of meteoric water through deeper, intact bedrock in mountains (Gleeson, Manning, Popp, Zane, & Clark, 2018; Liu et al., 2008; Manning & Caine, 2007). Frisbee et al. (2017) estimated groundwater circulation depths in two watersheds of the Colorado and New Mexico Rocky Mountains. They used the geochemical signature of discharging groundwater to deduce the water temperature at depth (also known as geothermometry). They then converted the temperature to depth based on a previously estimated geothermal gradient. They found circulation depths ranging from 0.6 to 2.5 km below ground surface, well below what might be considered shallow weathered bedrock. Furthermore, the cause of fracturing (i.e., volcanic versus tectonic) impacts the connectivity of fractures and therefore the permeability and circulation depth. They further suggest that model domains assigned in groundwater modeling efforts for mountain environments are often too shallow and cut off deeper flow paths. Despite the dramatic heterogeneity that can exist in mountain bedrock, a study in the Colorado Rocky Mountains yielded spatially consistent groundwater ages of 8-11 years in the fractured crystalline rock along an alpine stream. Their findings support the applicability of relatively simple numerical modeling of mountain groundwater systems where permeability mainly varies with depth (Manning & Caine, 2007).

[Figure 3: Two conceptual models of streamflow generation in high mountain terrain where streamflow is generated from (a) a combination of hillslope responses and (b) a combination of hillslope and fully 3-dimensional groundwater flow. From Frisbee et al. (2011).]
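The geothermometry approach described above reduces to simple arithmetic once a geothermal gradient is assumed. Below is a minimal sketch; all temperatures and the gradient are illustrative assumptions, not values from Frisbee et al. (2017).

```python
# Convert a geothermometer reservoir temperature to circulation depth
# using an assumed geothermal gradient. All numbers are illustrative.
def circulation_depth_km(t_reservoir_c, t_surface_c, gradient_c_per_km):
    """Depth (km) below surface implied by a reservoir temperature."""
    return (t_reservoir_c - t_surface_c) / gradient_c_per_km

# e.g., a geothermometer temperature of 40 C, a 5 C mean surface
# temperature, and an assumed gradient of 25 C/km imply ~1.4 km depth,
# within the 0.6-2.5 km range reported above.
print(circulation_depth_km(40.0, 5.0, 25.0))
```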
As many mountain ranges occur in tectonically active regions, other studies use mountain groundwater from hot springs as observation points for deep crustal groundwater processes (Diamond, Wanner, & Waber, 2018; Newell, Jessup, Hilton, Shaw, & Hughes, 2015; Van Hinsberg et al., 2017), though these are beyond the scope of this review. There is an ongoing need to quantify the fraction of groundwater discharge to mountain streams that follows very deep (approximately >500 m) flow paths and the impact of neglecting these deep flow paths in modeling.
Karst aquifers in high mountain regions often have relatively short transit times. Infiltration can be diffuse or concentrated through sinking streams, swallow holes, or fractures. Groundwater discharge to the surface often occurs through springs (Gremaud & Goldscheider, 2010; Vigna & Banzato, 2015). A tracer study of high alpine karst systems in the limestone Wetterstein Mountains of the German Alps estimated transit times for three components of groundwater discharge: 3-13 days, 2.9-4.9 months, and >1 year for fast, intermediate, and slow flow components, respectively (Lauber & Goldscheider, 2014). Gremaud et al. (2009) used 19 tracer tests to infer the internal structure of the Tsanfleuron-Sanetsch karst aquifer system in the Swiss Alps. Tracer transit times were short (5-57 hr) and did not correlate to the distance from the injection point to the spring, indicating the heterogeneity of the system. Groundwater flow occurred in distinct flow paths, largely following the limestone and marl stratification, and fold structures served to direct many of the observed flow paths, which converged to discharge at Glarey Spring. Cross-layer flow was also observed through fractures and faults. Elsewhere, short transit times have also been observed in karst systems in the western Himalayas of India (Shah, Jeelani, & Jacob, 2017). Conduit flow often dominates river baseflow in karst mountains, but matrix flow becomes appreciable during drier periods, as observed in the Rocky Mountains of Utah (Neilson et al., 2018).
| Permafrost and rock glaciers
Groundwater in high mountain environments can also exist in the solid phase as ice-rich permafrost and rock glaciers. These features occur at high elevations and/or latitudes where mean annual air temperature is sufficiently low. Permafrost is perennially frozen ground which remains at or below 0 °C for two or more consecutive years. The spatial coverage of permafrost (isolated to continuous) in mountains varies mainly with elevation and aspect, but other factors like vegetation and snow coverage also impact ground temperature and therefore permafrost occurrence (Gruber et al., 2017). Permafrost that has substantial pore water is called ice-rich permafrost (Ge, McKenzie, Voss, & Wu, 2011) and can represent an important store of groundwater (Clow et al., 2003). Rock glaciers, on the other hand, are deposits of rock debris cemented by interstitial ice that originated from former glaciers or from the re-freezing of glacier melt (Harrington et al., 2018). Rock glaciers can be an important source of water for baseflow in alpine environments (Williams, Knauf, Caine, Liu, & Verplanck, 2006) and influence stream water quality (Williams, Knauf, Cory, Caine, & Liu, 2007).
Permafrost acts as a barrier to groundwater flow. In high mountain areas where topographic gradients are high, permafrost limits deeper groundwater flow paths that would otherwise occur (Ge et al., 2011; Rogger et al., 2017). Evans et al. (2015) used field observations and groundwater modeling to investigate the role of permafrost in a headwater mountain watershed of the Qinghai-Tibet Plateau. In total, 50-80% of the 25 km² mountain watershed is underlain by permafrost at higher elevations. They estimated, using thermal modeling, that the supra-permafrost (or active) layer ranged from 0.6 to 3.3 m deep above 3,400 m elevation. The results of their groundwater modeling indicated that 95% of volumetric flow was channeled through the supra-permafrost layer. For comparison, lower in the catchment where no permafrost exists, 89% of volumetric flow occurred within 108 m of ground surface through surficial deposits. Thus, permafrost creates a shallow, perched flow system. Baseflow currently contributes 43% of streamflow in the watershed from June to November. As the climate warms, permafrost will continue to degrade, increasing the hydraulic conductivity of the subsurface and increasing baseflow.
Rock glaciers can act as an important store of subsurface water on multiple time scales in alpine environments. Perennial ice melt can contribute non-negligible amounts of water to rivers, particularly in periods of deglaciation and in semi-arid and arid environments (Jones, Harrison, Anderson, & Betts, 2018;Jones, Harrison, Anderson, & Whalley, 2019;Williams et al., 2006). Globally, it is estimated that rock glaciers store approximately 83 Gt of water, around 1/456 as much water as glaciers hold globally (Jones et al., 2018).
Rock glaciers can act as barriers to groundwater flow when they contain substantial ice, or as conduits for groundwater flow when they contain little ice. Harrington et al. (2018) studied an inactive rock glacier in the Canadian Rockies. Geophysical surveys showed that the rock glacier contained little ground ice and that perennial ice melt was small. The coarse debris of the rock glacier then acted much like an unconfined aquifer that contributed 50% of summer streamflow in a headwater catchment. In the same study area, Harrington, Hayashi, & Kurylyk (2017) found that springs discharging from the rock glacier cooled the average stream temperature by 3 °C and the maximum daily stream temperature by 5 °C. In July and August, this groundwater provides an important downstream thermal refuge for at-risk cold-water fish species.
Rock glaciers have also been shown to influence the geochemistry of groundwater and the surface waters they feed by enhancing mechanical weathering of rock and providing meltwater for continued dissolution and solute transport during dry periods. However, it has also been suggested that rock glaciers and permafrost can reduce weathering in colder areas by limiting rock exposure to liquid water (Ilyashuk, Ilyashuk, Psenner, Tessadri, & Koinig, 2018). Thies, Nickus, Tolotti, Tessadri, & Krainer (2013) found higher concentrations of dissolved ions and heavy metals in alpine streams fed in-part by active rock glaciers compared to adjacent streams with no input from rock glaciers in the Tyrolean Alps. At a nearby field site, Ilyashuk et al. (2018) found elevated dissolved ions, heavy metals, and rates of macroinvertebrate deformities, all as a result of enhanced acid rock drainage, in two alpine lakes fed by rock glaciers compared to a nearby lake which had no rock glaciers in its catchment. Rock glaciers can also be important nutrient sources in high mountain environments. For example, Williams et al. (2007) observed much higher nitrate concentrations in rock glacier discharge compared to other surface water in the Rocky Mountains of Colorado and Wyoming, and suggest that microbial activity within the rock glaciers themselves is responsible for the high concentrations.
| Wetlands
Wetlands are geomorphic features with the water table at or near the land surface for extended periods of time, leading to unique hydrophilic soils, plants, and hydrologic functionality. For wetlands to form, poorly draining substrates (e.g., glaciolacustrine clays) and a wet climate (i.e., precipitation well in excess of evapotranspiration) are required (Tarnocai et al., 1997). Given the extensive till deposits and higher precipitation rates in mountain regions, wetlands often form and can act as carbon sinks and biodiversity hot-spots (Buytaert, Cuesta-Camacho, & Tobón, 2011). Mountain wetlands are described by many different terms geographically and across hydrology, ecology, and geomorphology literature including páramo, jalca, pampa, bofedal, bog, fen, peatland, and mire (Buytaert & Beven, 2011;Maldonado Fonkén, 2015;Tarnocai et al., 1997;Tomaselli et al., 2018).
Several studies have examined the hydrologic function of wetlands and meadows in high mountain catchments (Chignell, Laituri, Young, & Evangelista, 2019; Chimner et al., 2019; Cooper et al., 2010, 2019; Lowry, Loheide, Moore, & Lundquist, 2011; Millar, Cooper, & Ronayne, 2018; Mosquera, Lazo, Célleri, Wilcox, & Crespo, 2015; Mosquera et al., 2016; Polk et al., 2017; Streich & Westbrook, 2019). Due to their excess of water and decreasing permeability with drying, wetlands can self-regulate to keep the water table near the land surface (Rezanezhad et al., 2016). The relatively high porosity of alpine wetland soils provides an important groundwater store, slowing the movement of water from high to low elevations (Mosquera et al., 2015) and attenuating high flows (Buytaert & Beven, 2011). In the wider hydrogeological context, alpine wetlands often have a dual hydrologic function: they receive shallow runoff and precipitation, but due to perennial saturation, also may serve as groundwater recharge areas (Winter, 1999). Groundwater dynamics of alpine wetlands can also be affected by beaver (Castor canadensis and Castor fiber) activity in some mountain ranges in western North America, Eurasia, and Argentina (Morrison, Westbrook, & Bedard-Haughn, 2014; Pietrek & Fasola, 2014). Beaver dams have the net effect of raising the water table, which increases groundwater recharge (Karran, Westbrook, & Bedard-Haughn, 2018) and enhances hyporheic flows (Lautz, Siegel, & Bauer, 2006). Some studies have suggested that alpine wetlands may be particularly sensitive to climate change and glacier recession (Polk et al., 2017).
| Conceptual model of groundwater flow in high mountains
High mountain watersheds each contain some combination of the hydrogeological features outlined above. The individual flow regimes depend on which features are present as well as the rock type, topography, and climate. Subsurface heterogeneities can also cause inter-basin flow where groundwater is exported or imported across the boundary of the topographically defined watershed (Fan, 2019). Figure 4 summarizes the groundwater aquifers and flow paths described in the previous sections. At headwater scales, flow paths through coarse deposits like talus and moraines are particularly important to the catchment water balance. At larger scales, alluvial and valley bottom aquifers become increasingly important as well as deeper groundwater flow paths.
| QUANTIFYING GROUNDWATER CONTRIBUTION TO HIGH MOUNTAIN RIVERS
Tracer methods (natural and artificial) and water balance studies provide useful techniques to quantify different water sources in hydrological systems. Tracer methods are particularly useful in high mountain regions because they do not necessarily require long-term monitoring, and data can be collected in remote areas and rugged landscapes through periodic synoptic water sampling (as opposed to continuous data collection). Water balance studies, while generally more data intensive, can serve as a measurement of groundwater storage, flow, and discharge/exfiltration. Table 1 summarizes results of studies which have quantified groundwater contribution to high mountain streamflow using a variety of methods.
| Geochemical tracers
A challenge in high mountain research is that these regions are often data poor due to the difficulty of making physical field measurements for groundwater studies (e.g., installing piezometers) in remote sites with difficult access. The use of geochemical tracers helps to overcome this challenge. Geochemical tracers include dissolved ions and isotopes of water and solutes, and are frequently used to detect groundwater discharge in high mountain watersheds (Baraer et al., 2009; Burns et al., 2001; Carey, Boucher, & Duarte, 2013; Carroll et al., 2018; Cowie et al., 2017; Engel et al., 2016; Frisbee et al., 2011; Huth et al., 2004; Liu et al., 2004; Liu, Conklin, & Shaw, 2017; Mark & Mckenzie, 2007; Mckenzie, Mark, Thompson, Schotterer, & Lin, 2010; Neilson et al., 2018; Saberi et al., 2019; Shaw et al., 2014). Groundwater, having been in contact with geologic materials for extended periods of time, usually has a higher concentration of dissolved ions than precipitation, surface runoff, glacier melt, or snowmelt. Groundwater that has followed a longer flow path and/or has a longer residence time may also have a higher concentration of solutes. This leads to a trend of increasing solute concentrations lower in a watershed, which has been observed in the field (Frisbee et al., 2011) but can be limited by mineral solubilities in different geological or climatic settings. Additionally, the ratio between the different ionic concentrations can be related to the geologic material through which the groundwater flows (Clow & Sueker, 2000; Hem, 1985). This difference in hydrochemical signature is used along with conservative mixing analysis to quantify the contributions of different source waters. Two commonly used methods of mixing analysis include end-member mixing analysis (Burns et al., 2001; Hooper, Christophersen, & Peters, 1990) and the similar hydrochemical basin characterization method (Saberi et al., 2019). Similarly, stable isotopes of water (δ¹⁸O and δ²H) can be used in combination with mixing analysis to quantify contributions to streamflow if there is a significant difference in isotopic values between groundwater and stream water. The use of stable isotopes of water as tracers presents some additional challenges, as the isotopic composition of precipitation is highly variable as a function of altitude, season, and moisture sources (Lachniet & Patterson, 2002) and may require extensive seasonal baseline data for proper interpretation (Carey et al., 2013; Mark & Mckenzie, 2007).

[Table 1 notes: Where multiple groundwater contribution percentages are listed, the study either looks at different times of year or at more than one catchment, as indicated.]
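At its simplest, tracer-based separation is a two-component mixing calculation. The sketch below solves the conservative tracer mass balance for the groundwater fraction of streamflow; the end-member concentrations are hypothetical, not data from the studies cited above.

```python
# Two end-member mixing: Qs*Cs = Qgw*Cgw + Qmelt*Cmelt, with
# Qgw + Qmelt = Qs, solved for the groundwater fraction of streamflow.
def groundwater_fraction(c_stream, c_gw, c_melt):
    """Fraction of streamflow from groundwater (conservative tracer)."""
    return (c_stream - c_melt) / (c_gw - c_melt)

# Hypothetical silica concentrations (mg/L): dilute snowmelt,
# solute-rich groundwater, and a stream sample between them.
f_gw = groundwater_fraction(c_stream=3.5, c_gw=8.0, c_melt=0.5)
print(f"Groundwater fraction: {f_gw:.0%}")  # 40%
```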
Several seminal field studies in the United States and Scotland (low mountains) used geochemical tracers to establish that groundwater is a substantial contributor to mountain streamflow despite the pervasive conception that near-surface flow alone dominates mountain hydrological systems (Burns et al., 2001; Clow et al., 2003; Liu et al., 2004; Soulsby et al., 2000 [low mountains]). For example, Liu et al. (2004) used isotopic and geochemical tracers to estimate groundwater contribution to streamflow in two small (0.08 and 2.25 km²) watersheds of the Colorado Front Range. They found that sub-surface flow (defined as the sum of soil water, baseflow, and talus water) contributed more than two thirds of streamflow in both catchments. More specifically, 54% of streamflow originated from baseflow in the smaller catchment, and 28% of streamflow originated from baseflow, plus 36% from talus deposits, in the larger catchment.
Subsequent work with geochemical tracers has geographically expanded our understanding of groundwater discharge to different mountain hydrological regimes in the Rockies (Carroll et al., 2018; Cowie et al., 2017; Frisbee et al., 2011; Liu et al., 2008), Andes (Baraer et al., 2009; Saberi et al., 2019), Alps (Engel et al., 2016; Schmieder, Garvelmann, Marke, & Strasser, 2018), Himalayas (Jeelani, Bhat, & Shivanna, 2010; Maurya et al., 2011; Williams et al., 2016), Tianshan Mountains (Wang et al., 2017), and Canadian North (Carey et al., 2013). For example, in Wolf Creek, Yukon, Canada (a low- to mid-altitude mountain watershed with alpine characteristics), Carey et al. (2013) showed that water stored in near-surface soils within the catchment dominated the snowmelt hydrograph based on a multi-year combination of isotope and major ion data. Through tracer studies, even glacierized environments were shown to have substantial groundwater inflow to rivers. Baraer et al. (2015) found that groundwater contributes 24-80% of dry season stream discharge in four proglacial valleys of the Cordillera Blanca in the northern Peruvian Andes, while Wang et al. (2017) found that groundwater contributed 38% of streamflow in a glacierized watershed of the Tianshan Mountains in China. More recent work has increased the spatial and temporal resolution of results, examined fine-scale groundwater flow paths, and made use of new tracers such as chloride isotopes (Shaw et al., 2014), sulfur isotopes (Urióstegui, Bibby, Esser, & Clark, 2017), and dissolved noble gases (Gleeson et al., 2018).
| Heat, dye, and chloride tracers
In addition to natural geochemical tracers, heat, dye, and chloride (and other ions less commonly) can also be used to trace sources of streamflow and groundwater flow through high mountain aquifers. Langston et al. (2013) used heat tracing along with chloride tracing to estimate the hydraulic conductivity of an alpine moraine in the Canadian Rocky Mountains. Gordon et al. (2015) combined geochemical sampling with Rhodamine dye tracing to quantify groundwater contribution to streamflow in proglacial valleys of the Peruvian Andes. Somers et al. (2016) used a combination of rhodamine dye tracing and heat tracing to calculate that 29% of stream flow came from groundwater over a 4 km reach in the Peruvian Andes. Tracers have also been used to characterize flow paths through mountain geomorphic features (Gremaud et al., 2009;Roy & Hayashi, 2009).
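A minimal sketch of the dilution-gauging arithmetic behind such studies: a constant-rate tracer injection yields discharge at each station, and the downstream gain is attributed to net groundwater inflow. The injection rate and concentrations below are hypothetical, chosen only so the groundwater share lands near the ~29% reach-scale value reported by Somers et al. (2016).

```python
def discharge_from_dilution(m_dot_g_per_s, c_plateau_mg_l, c_background_mg_l):
    """Stream discharge (m^3/s) from a constant-rate tracer injection.

    Plateau mass balance: m_dot = Q * (C_plateau - C_background),
    with concentrations in mg/L (numerically equal to g/m^3).
    """
    return m_dot_g_per_s / (c_plateau_mg_l - c_background_mg_l)

q_up = discharge_from_dilution(20.0, 45.0, 5.0)    # 0.50 m^3/s upstream
q_down = discharge_from_dilution(20.0, 33.3, 5.0)  # ~0.71 m^3/s downstream
gw_inflow = q_down - q_up                          # net gain over the reach
print(f"Groundwater inflow: {gw_inflow:.2f} m^3/s "
      f"({gw_inflow / q_down:.0%} of downstream flow)")
```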
| Water balance studies
Various types of water balance studies have been employed in high mountain environments to quantify groundwater storage, recharge, and discharge (Andermann et al., 2012;Clark et al., 2014;Cochand, Christe, Ornstein, & Hunkeler, 2019;Flerchinger & Cooley, 2000;Hood, Roy, & Hayashi, 2006;Hood & Hayashi, 2015;McClymont, Hayashi, Bentley, Muir, & Ernst, 2010;Paznekas & Hayashi, 2016). Water balance studies capitalize on the difference in timing between water inputs (precipitation-evapotranspiration, snowmelt, glacier melt) and outputs (stream discharge) from a catchment to provide an indication of transient catchment water storage and discharge. One limitation of water balance studies is that the calculation of groundwater storage is subject to errors from the measurement of all other hydrologic fluxes (Winter, 1981). Andermann et al. (2012) examined 30 years of river discharge records from three large Himalayan basins and show hysteresis in the relationship between precipitation and streamflow throughout the year. This hysteresis indicates substantial transient water storage which can be explained by groundwater storage. They use hydrological modeling to estimate that the volume of water flowing through the groundwater system represents two thirds of annual streamflow and is approximately six times greater than the glacier and snowmelt contribution.
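The underlying bookkeeping can be written as a lumped catchment water balance. The notation below is ours, not taken from the cited studies, with S the transient catchment storage (including groundwater), P precipitation, ET evapotranspiration, M_snow and M_ice melt inputs, and Q streamflow:

```latex
\frac{dS}{dt} = P - ET + M_{\mathrm{snow}} + M_{\mathrm{ice}} - Q
```

Because the storage change is inferred as a residual, it accumulates the measurement errors of every flux on the right-hand side, which is the limitation noted above (Winter, 1981).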
Similarly, Hood and Hayashi (2015) used detailed measurement and modeling of hydrological fluxes, including precipitation, snowmelt, and streamflow, in a proglacial headwater catchment in the Canadian Rocky Mountains to characterize the timing of groundwater recharge, discharge, and storage capacity. They show that peak groundwater storage is 60-100 mm averaged over the watershed area. This groundwater storage is much less than the peak snowpack storage (500-640 mm snow water equivalent) but is important when compared to the average fall and winter baseflow, which is typically less than 0.5 mm/d. Cochand et al. (2019) employed a water balance approach similar to Hood and Hayashi (2015) but found a larger change in groundwater storage during the snowmelt period of 300 mm, or 45% of the pre-melt snow water equivalent, in the Valais Alps of Switzerland. Accordingly, Cochand et al.'s minimum stream baseflow of 0.9 mm/d was higher than Hood and Hayashi's, indicative of greater groundwater storage.
| What controls groundwater contribution to high mountain streamflow?
The above tracer and water balance studies estimate a wide range of groundwater contributions to high mountain streams, summarized in Table 1. The extent to which groundwater contributes to streamflow in mountain environments is controlled by several different factors. Forster and Smith (1988b) suggested that surface topography, geology, climate, and regional heat flux all affect groundwater flow and water table position in mountains. It should also be noted that mountain groundwater contribution to streamflow is spatially and temporally variable within a given watershed (Neilson et al., 2018;Payn, Gooseff, McGlynn, Bencala, & Wondzell, 2009) and can be correlated to antecedent moisture content in the previous year (Baraer et al., 2009;Burns et al., 2001). Furthermore, the lag time between recharge and discharge is dependent on the scale of groundwater flow paths. Paznekas & Hayashi (2016) analyzed streamflow records of 18 mountain watersheds in the Rocky and Columbia mountain ranges in Canada. Since snowmelt, precipitation, and glacier melt are negligible during the winter, they examined winter baseflow to determine what controls groundwater flow. Precipitation in the previous year was uncorrelated to winter baseflow, leading the authors to conclude that the groundwater storage was completely filled each year and that winter baseflow depended on stationary variables like bedrock and topography. They found that bedrock geology exerted a strong control on winter baseflow where watersheds underlain by younger sedimentary rocks had higher winter baseflow than those underlain by older metamorphic rock (also see Liljedahl, Gädeke, O'Neel, Gatesman, & Douglas (2017) and Tsinnajinnie (2018)). Similarly, water balance studies by Cochand et al. (2019) and Hood & Hayashi (2015) show that quartzite bedrock in the Opabin watershed of the Canadian Rockies accommodates much less annual groundwater storage than the evaporites present in a catchment of the Swiss Alps.
Other researchers have indicated that watershed size influences the relative and absolute contribution of groundwater to streamflow in high mountains. Several authors have noted that groundwater inputs were higher downstream, closer to the outflow of their study catchments (Cowie et al., 2017; Frisbee et al., 2011; Fujimoto et al., 2016; Soulsby et al., 2000), and that as watershed scale increases, new larger scale groundwater flow paths are incorporated into the hydrological system. This can be considered an extension of seminal hydrogeological theory by Toth (1963) outlining nested groundwater flow systems. Baraer et al. (2015) also point out that relative groundwater contribution to streamflow is related to glacierized area in the Cordillera Blanca, and therefore relative groundwater contribution increases with basin area as the relative glacier coverage diminishes. While increased groundwater contribution with distance downstream is detectable in individual watersheds, the phenomenon is not clearly generalizable across watersheds in the current literature given large differences in precipitation regimes, basin characteristics, and the extent of glaciers and snowpack.
| Recharge from rain and snowmelt
Precipitation is the primary driver of groundwater recharge. In high mountains, recharge from precipitation can occur as diffuse recharge from rain or snowmelt, or as seepage from ephemeral or perennial streams (Smerdon et al., 2009). The low temperatures and associated vegetative community in mountains also lower the amount of evapotranspiration, which substantially enhances the potential for recharge with increasing elevation (Goulden et al., 2012;Goulden & Bales, 2014).
At lower elevations and/or latitudes, rain may dominate year-round. However, given the high elevation of many high mountain regions, seasonal snowmelt often plays an important role in recharging the groundwater system (Earman, Campbell, Phillips, & Newman, 2006; Flerchinger, Cooley, & Ralston, 1992; Lowry, Deems, Loheide, & Lundquist, 2010). In the spring, the snowpack melts and some of the meltwater follows shallow or preferential flow paths and produces high river flows during freshet. At the same time, the annual pulse of meltwater percolates toward the saturated zone (Hammond, Harpold, Weiss, & Kampf, 2019). In the Sierra Nevada of California, analysis of short-lived cosmogenic sulfur isotopes revealed that less than 15% of freshet streamflow originated from the previous winter's snowpack and that a significant fraction of the annual snowmelt was recharging the groundwater system (Urióstegui et al., 2017). Snowmelt is a more efficient contributor to streamflow (through shallow runoff) than rainfall because the concentrated period of infiltration allows less time for evapotranspiration compared to intermittent precipitation, and snowmelt generally occurs in the spring when potential evapotranspiration is lower. Deeper groundwater recharge seems to be less sensitive to the snow and rain fraction (Hammond et al., 2019; Liu et al., 2008), though groundwater recharge from losing mountain streams is certainly affected.
Groundwater recharge from precipitation in high mountains can be spatially variable for several reasons. More precipitation occurs at higher elevations due to the orographic effect. Lower temperatures and more snow coverage often decrease evapotranspiration at higher altitudes (Gurtz, Baltensweiler, & Lang, 1999). Locally, slope and aspect can affect the amount of precipitation received, snow accumulation, alter snowmelt patterns and evapotranspiration (Flerchinger & Cooley, 2000;Gurtz et al., 1999;Luce, Tarboton, & Cooley, 1998). Additionally, less or different vegetation may be present at higher elevations, decreasing transpiration and thereby increasing recharge with elevation (Goulden et al., 2012;Gurtz et al., 1999). Valley bottoms and depressions can be sites of groundwater discharge which prevents recharge from occurring. Smerdon et al. (2009) found that groundwater recharge varied from 0-20 mm/year at low elevations and from 20-50 mm/year at higher elevations in a semi-arid low-to mid-altitude mountain watershed in the Okanagan Basin of British Columbia, Canada. Hydraulic conductivity of subsurface materials can also control groundwater response to precipitation (Smith et al., 2014) and geomorphic features of high mountain basins can redistribute groundwater recharge. For example, alluvial fan aquifers can channel flow into valley bottom aquifers (Glas et al., 2018;Smerdon et al., 2009;Winter et al., 1999).
| Recharge from glaciers
Relatively little is known about interactions between mountain glaciers and the groundwater system in high mountain regions (Gremaud & Goldscheider, 2010; Levy, Robinson, Krause, Waller, & Weatherill, 2015; Liljedahl et al., 2017; Ó Dochartaigh et al., 2019; Saberi et al., 2019; Somers et al., 2019; Vuille et al., 2018). Meanwhile, the importance of these linkages is increasing as mountain glaciers retreat globally under climate change. Only a handful of studies estimate the extent to which groundwater in proglacial watersheds is recharged by glacier melt, either directly below the glacier, at the glacier margin, or through glacial lakes and streams (Gremaud & Goldscheider, 2010).

Most glacier melt occurs on the glacier surface under meteorological forcing, and a small amount occurs beneath the glacier as a result of friction from glacier flow, heat transfer from water flow, and geothermal heat flux. Supraglacial meltwater drains over the surface of the glacier and toward the base through fractures, crevasses, and moulins (Ravier & Buoncristiani, 2017). In karst systems, glacier melt recharges groundwater through a combination of swallow holes and fractures. These features can occur beneath the glacier, intersecting small meltwater streams near the glacier toe or intersecting proglacial rivers (Gremaud & Goldscheider, 2010). In these rapid karst conduits, strong diurnal and seasonal patterns are observed in streamflow corresponding to glacier melt (Gremaud et al., 2009). Some studies consider glaciers themselves a barrier to groundwater recharge near mountain tops, channeling melt closer to the glacier margin where recharge occurs through percolation or seepage from proglacial streams (Forster & Smith, 1988b). At longer time scales, glacial loading/unloading can alter the hydraulic conductivity of subsurface materials by compressing pores and fractures (lowering K) or by creating new fractures (increasing K) (Ravier & Buoncristiani, 2017).
Two coupled groundwater and surface water studies in the Andes estimate glacier melt contribution to groundwater. Saberi et al. (2019) use field data and numerical modeling of a proglacial headwater catchment on Volcán Chimborazo in Ecuador to estimate that 18% of groundwater discharge is sourced from glaciers which cover 34% of the watershed area. Somers et al. (2019) examine a proglacial watershed in the tropical Andes of Peru and estimate that glaciers contribute approximately 2% of groundwater discharge to the Shullcas River which has approximately 2% basin glacier coverage. Using contrasting methods, Liljedahl et al. (2017) examined two glacierized watersheds in the Alaska Range. They found that glacier melt contributed 15 to 28% of annual streamflow in a watershed with 3% areal glacier coverage. Furthermore, differential stream gauging revealed that 46% of annual streamflow was lost to the underlying aquifer in headwater streams. These three studies demonstrate a wide spectrum of glacier-groundwater connections and more research is required to determine what governs this relationship.
| Mountain system, mountain-front, and mountain-block recharge
Since mountains receive more precipitation than nearby lowlands and are often subjected to less evapotranspiration at higher elevations where vegetation is sparse (Goulden & Bales, 2014), they can play an important role in replenishing central valley (or basin) aquifers, particularly in arid or semi-arid regions (Ajami et al., 2011; Manning & Solomon, 2003; Meixner et al., 2016; Wahi et al., 2008). Groundwater recharge that originates in mountains (the mountain block), or in the transition between mountains and the basin valley floor (the mountain front), and feeds a basin aquifer, is known collectively as mountain system recharge (MSR) (Wahi et al., 2008). MSR can be subdivided into MBR and mountain-front recharge (MFR). MBR is sub-surface flow from the mountains to the basin aquifer. MFR is sub-surface flow from the mountain front zone toward the basin aquifer and mostly occurs where mountain streams and rivers reach the mountain front and subsequently infiltrate through the streambed (Figure 5; Bresciani et al., 2018; Wahi et al., 2008; J. L. Wilson & Guan, 2013). Either MBR or MFR can dominate MSR to basin aquifers depending on the setting, and noble gases have been used as a tracer to differentiate between the two (Manning & Solomon, 2003).
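Written in shorthand (the Q notation here is ours, not from the cited studies), the partition is simply:

```latex
Q_{\mathrm{MSR}} = Q_{\mathrm{MBR}} + Q_{\mathrm{MFR}}
```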
Broadly speaking, groundwater recharge that occurs in high mountains may follow two different paths: (a) it may follow a relatively shallow flow path and discharge into mountain streams or (b) it may flow through deeper bedrock toward the central valley aquifer of the basin as MBR. Welch and Allen (2014) used numerical groundwater models to investigate the partitioning of groundwater recharge between discharge to a mountain stream within a defined watershed (baseflow) and MBR. They found that 12-15% of total recharge became MBR which was eventually discharged to a higher order river in the basin, while 85-88% of recharge contributed to low-order mountain streams within the defined watershed. Though not the focus of this review, an in-depth review of MBR is presented by Markovich et al. (2019).
| NUMERICAL MODELING OF GROUNDWATER IN HIGH MOUNTAINS
A variety of numerical modeling approaches (e.g., conceptual water balance, linear reservoir, finite volume, finite difference, finite element) have been used to investigate high mountain groundwater dynamics with varying amounts of constraining field data. Conceptual water balance and linear reservoir models use simplified parameterizations of groundwater recharge, storage, and discharge, and may or may not be spatially distributed. They are frequently employed in surface-focused hydrological models to represent groundwater processes. Distributed two- and three-dimensional groundwater flow models discretize the subsurface into grid cells or elements and apply Darcy's Law to simulate groundwater flow.
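To make the finite-difference idea concrete, the sketch below solves a one-dimensional, steady-state Darcy flow problem for hydraulic heads along a hypothetical hillslope transect. Geometry, conductivity, recharge, and boundary heads are all illustrative assumptions, not a model from any study cited here.

```python
# 1D steady-state groundwater flow (Darcy's law + continuity):
# -d/dx(K*b*dh/dx) = R, discretized with central differences.
import numpy as np

n, dx = 50, 20.0      # 50 cells, 20 m spacing (1 km transect), assumed
K = np.full(n, 1e-5)  # hydraulic conductivity (m/s), assumed
R = 5e-9              # uniform recharge (m/s, ~160 mm/yr), assumed
b = 30.0              # saturated thickness (m), assumed constant

# Assemble the linear system A h = rhs for the interior cells:
# K*(h[i-1] - 2*h[i] + h[i+1]) = -R * dx^2 / b
A = np.zeros((n, n))
rhs = np.full(n, -R * dx**2 / b)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = K[i], -2 * K[i], K[i]

# Dirichlet boundaries: fixed heads at the ridge-side and valley-side cells
A[0, 0] = A[-1, -1] = 1.0
rhs[0], rhs[-1] = 2500.0, 2400.0  # heads (m asl), assumed

h = np.linalg.solve(A, rhs)
print(f"Max head along transect: {h.max():.1f} m")
```

Recharge produces a slight water table mound superimposed on the ridge-to-valley head gradient; in distributed models the same balance is written per cell in two or three dimensions.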
Early numerical modeling of groundwater in high mountain massifs was performed by Forster and Smith (1988a, 1988b) and emerged before much of the field-based work summarized in this review. They developed a steady-state, two-dimensional, finite element, free-surface approach for modeling groundwater flow and heat transfer in mountain regions with the goal of quantifying the factors that control groundwater flow in mountains. Sensitivity analysis found that the simulated groundwater flow and water table position were most sensitive to: (a) the topographic slope profile (convex versus concave profile), (b) bulk permeability, (c) available infiltration, (d) presence of alpine glaciers, and (e) basal heat flow, among 17 parameters investigated.
Since then, conceptual hydrological models have been used to highlight how subsurface flow slows the transmission of precipitation to rivers (Andermann et al., 2012; Jódar et al., 2017; Pohl, Knoche, Gloaguen, Andermann, & Krause, 2015; Tague & Grant, 2009), and two- and three-dimensional modeling based on Darcy's Law has been applied in a variety of high mountain settings. Of these, some studies have focused on groundwater flow through bedrock (Gleeson & Manning, 2008; Ofterdinger, Renard, & Loew, 2014; Welch & Allen, 2014) or valley sediments (Ciruzzi & Lowry, 2017), while others have coupled groundwater and surface water modeling (Engdahl & Maxwell, 2015; Foster & Allen, 2015; Voeckler et al., 2014), incorporated interactions with glaciers (Saberi et al., 2019; Somers et al., 2019), and/or permafrost and frozen soil (Evans et al., 2015; Evans, Ge, Voss, & Molotch, 2018; Ge et al., 2011). Numerical modeling studies of central valley aquifers often simplify hydrological processes above the mountain front and use MSR as a boundary condition to model basin aquifers (Manning & Solomon, 2005).

FIGURE 5 Conceptual diagrams illustrating mountain-front recharge (MFR) and mountain-block recharge (MBR). (a) An example physical configuration of the transition between mountain and basin. (b) A system where mountain-front recharge dominates. (c) A system where mountain-block recharge dominates. (d) A system with both MFR and MBR. From Bresciani et al. (2018).
Furthermore, simulations often indicate that the water table lies far below the ground surface beneath mountain ridges, by up to several hundred meters (Forster & Smith, 1988b; Ofterdinger et al., 2014; Somers et al., 2019). While data to constrain this are scarce, upland wells in the Okanagan region of British Columbia, Canada (low- to mid-altitude mountains) were found to have water table depths in excess of 91 m (Smerdon et al., 2009).
Reconciling small-scale observations of hydrological and hydrogeological processes with watershed-scale models of mountain catchments remains challenging. Longer and deeper groundwater flow paths come into play as watershed area increases. As previously noted, Frisbee et al. (2011) tested two types of modeling approaches for a large (1,600 km²) mountain watershed in the Colorado Rocky Mountains and found that a fully three-dimensional hydrological model reproduced observed data much better than a two-dimensional model composed of many hillslopes. Furthermore, Frisbee et al. (2017) suggest that the modeled domains of many mountain groundwater models may be too shallow to accurately represent deep groundwater circulation.
| High mountain groundwater under climate change
High mountain regions are warming faster than lowlands (Pepin et al., 2015). Increasing air temperatures reduce winter snowpack, cause peak snowmelt flows to occur earlier in the spring, and drive glacier retreat and permafrost degradation in mountains. Glacier and snow loss threaten mountain water resources, particularly during the summer, autumn, or dry season, when mountain streamflow is low and water demand is high (Barnett et al., 2005). Meanwhile, the number of people who depend on mountain water resources is expected to increase (Viviroli et al., 2020).
The buffering capacity of groundwater is expected to provide some resilience against climate change-driven hydrologic changes in high mountains by continuing to store water during wet periods and discharge water during dry periods. Studies on this topic often combine numerical models with climate projections and many have focused on the western United States (Engdahl & Maxwell, 2015;Evans et al., 2018;Huntington & Niswonger, 2012;Markovich, Maxwell, & Fogg, 2016;Tague et al., 2008;Tague & Grant, 2009) with a few others focusing on the Andes (Somers et al., 2019) and the Tibetan Plateau (Evans et al., 2015;Ge et al., 2011).
The geology underlying a high mountain watershed is an important control on the streamflow response to climate change, often as important as snow distribution and melt timing. Higher-permeability bedrock is better able to maintain streamflow in response to earlier snowmelt, but at the cost of depleting groundwater storage during summer (Markovich et al., 2016; Tague et al., 2008; Tague & Grant, 2009). Low-permeability basins do not lose as much groundwater storage but are more sensitive to the timing of snowmelt, where an earlier freshet results in an earlier peak in groundwater discharge. Huntington and Niswonger (2012) explain that this phenomenon can lead to decreasing summer streamflow, even when annual precipitation is increasing, in low-storage granitic watersheds of the Sierra Nevada.
The presence of glaciers and permafrost further complicates the high mountain groundwater response to climate change. The contribution of mountain glacier melt to groundwater recharge is variable and poorly constrained (Levy et al., 2015; Liljedahl et al., 2017; Ó Dochartaigh et al., 2019; Saberi et al., 2019; Somers et al., 2019). Somers et al. (2019) integrate groundwater, surface water, and glacier melt modeling and apply downscaled climate projections to a proglacial watershed in the Peruvian Andes. They find that as the glaciers in the watershed disappear, the groundwater contribution to streamflow remains large and relatively consistent in the short term. In the long term, however, evapotranspiration increases with temperature, decreasing groundwater recharge and exfiltration. The resulting dry-season streamflow is projected to decrease by 20-50% by 2100 (under representative concentration pathways [emissions scenarios] 4.5 and 8.5, respectively).
In mountain watersheds with extensive permafrost, permafrost degradation is expected to increase the hydraulic conductivity of the sub-surface, decrease peak flows, and increase baseflow (Evans et al., 2015; Ge et al., 2011; Rogger et al., 2017). Modeling of a groundwater system on the Qinghai-Tibet Plateau suggests that a three-fold increase in baseflow will result from a 2 °C increase in mean annual air temperature (Evans et al., 2015). Furthermore, changes in seasonal soil freezing will impact both groundwater recharge and baseflow (Evans et al., 2018).
Groundwater recharge in several mountain regions is projected to decrease as the climate changes. In the western United States, declining snowpack is projected to decrease recharge to mountain aquifers and MSR alike (Meixner et al., 2016). In addition to declining snowpack, increasing temperatures stimulate evapotranspiration and decrease groundwater recharge in high mountains. In a warming climate, vegetation type, coverage, and activity in high mountains will change as cold-limited vegetation encroaches on higher elevations, increasing ET and decreasing recharge (Goulden et al., 2012). Goulden and Bales (2014) examined the relationship between precipitation, ET, and elevation in California's Sierra Nevada. Using a space-for-time approach, they projected a 28% increase in ET across the entire King's River basin and a corresponding 26% decrease in river flow by 2100. Increasing groundwater age of spring water in the Sierra Nevada between 1997 and 2003 provides field evidence of decreasing groundwater recharge (Manning et al., 2012).
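A toy version of the space-for-time substitution mentioned above: fit ET against temperature sampled across an elevation gradient, then apply a uniform warming to project future ET. Every number below (lapse rate, ET-temperature relation, assumed warming) is invented for illustration and none comes from Goulden and Bales (2014).

```python
import numpy as np

rng = np.random.default_rng(0)
elev = np.linspace(500, 3500, 30)                    # station elevations (m)
temp = 15.0 - 6.5e-3 * elev                          # lapse-rate temperatures (C)
et = 300 + 25 * temp + rng.normal(0, 20, temp.size)  # synthetic ET (mm/yr)

slope, intercept = np.polyfit(temp, et, 1)  # empirical ET-temperature relation
warming = 4.0                               # assumed warming by 2100 (C)
et_future = slope * (temp + warming) + intercept
change = (et_future.mean() / et.mean() - 1) * 100
print(f"projected basin-mean ET change: {change:+.0f}%")
```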
Though this review has focused on water quantity, high mountain groundwater quality may also be affected by climate change in some regions. For example, ice, in the form of glaciers, permafrost, or rock glaciers, can isolate sulfide-bearing rocks from liquid water while also contributing to the mechanical weathering of rock. As glaciers, rock glaciers, and permafrost retreat upslope, previously frozen ground is exposed to more liquid water and higher temperatures, facilitating the oxidation of sulfides, which causes acid rock drainage and leads to high concentrations of heavy metals (Ilyashuk et al., 2018).
| CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
Historically, high mountain research has focused on the visually stunning aspects of mountains, including glaciology, geology, and ecology. Though not visible, groundwater is a critically important component of high mountain hydrological systems and has special importance for water resource management in the face of environmental change. As our review demonstrates, groundwater research in high mountain environments has received increasing attention in recent years. A variety of field and modeling studies have described the hydrogeologic functioning of mountain geomorphic features. Small-scale studies tend to highlight the importance of coarse deposits, including talus and alluvium, in storing and transmitting groundwater, while larger-scale studies often emphasize the role of valley-bottom and bedrock flow paths. Tracer and water balance studies have demonstrated that groundwater is an important contributor to streamflow in a variety of high mountain regions, particularly during low-flow periods. High mountain groundwater recharge, driven by rain, snowmelt, and glacier melt, can be highly spatially variable and is an important source of recharge to distal basin aquifers. Numerical modeling has provided important insights into high mountain groundwater flow, though best practices are not well established. In the face of a warming climate, groundwater will provide resilience to high mountain hydrological systems, though the nature of the response is controlled by complex interactions among geological conditions, changes to snow and glacier melt regimes, the presence of permafrost, and vegetation impacts on recharge.
Despite the strides made in recent years, significant gaps remain in our understanding of high mountain groundwater systems. Specifically, the field is still limited by a scarcity of field data and, relatedly, by the challenge of representing the high degree of subsurface heterogeneity in numerical models, which limits our ability to project the impacts of climate change.
6.1 | Future directions to combat data limitations

Groundwater observation wells are particularly scarce in high mountain environments. Those that exist are often located in valley bottoms, where they are more easily accessible and hydraulic head is more stable. Furthermore, long-term hydraulic head measurements are scarce. This lack of data limits our ability to constrain groundwater levels in modeling studies, leading to high uncertainty in conclusions. Forster and Smith (1988b) pointed out that overestimating bedrock permeability by a factor of 5 led to underestimates in water table elevation of more than 1,000 m in their seminal modeling study. Addressing this uncertainty remains a critical challenge today and is particularly difficult in remote, data-scarce regions like the Andes and Himalayas.
Increased investment in high mountain groundwater monitoring, particularly in uplands, is a good starting point to combat data limitations, though it is important to remember that no number of wells will produce a perfect model. Water sampling of existing wells, springs and surface waters is a relatively low-cost way to investigate groundwater processes in remote high mountains and new information continues to emerge from innovative applications of natural and artificial tracers (Frisbee et al., 2017;Gremaud et al., 2009). Furthermore, geophysical investigations can help to visualize complex subsurface structures and improve our understanding of groundwater storage and flow in high mountains (Glas et al., 2019;Harrington et al., 2018;Ó Dochartaigh et al., 2019).
In the face of limited funding for increased data collection, an open-access data-sharing system for high mountain groundwater data is needed, potentially as part of an existing mountain data-sharing initiative such as GEO GNOME (Mountain Research Initiative, n.d.). There is also potential to harness under-utilized data in the gray literature. Mining activities are common in mountain regions globally, and geotechnical and hydrogeologic data are collected and often reported for mine planning and environmental assessments. Tunneling projects (for road and rail transportation) can also provide similar data (Corniello et al., 2018). Though such data may be proprietary, site-specific data-sharing agreements are often possible for academic research projects (Gleeson et al., 2011; Scibek, Gleeson, & McKenzie, 2016). A comprehensive, accessible dataset of high mountain groundwater observations would allow for better comparison between sites and, with enough data, a better statistical understanding of how such systems work.
| Representing heterogeneity
High mountains are incredibly complex hydrogeological environments, and many studies focus on individual geomorphic features. It remains difficult to reconcile small- and large-scale studies of groundwater processes in high mountain watersheds. Small-scale processes, such as flow through individual talus features or upwelling springs in valley bottoms, are difficult to incorporate in watershed-scale models that often have grid cells several hundred meters across. Likewise, it is unclear when and how to quantify or simulate deep flow through the mountain block when modeling high mountain watersheds. Furthermore, many numerical models are not designed to simulate multi-scale processes, and some groundwater flow models will not converge (i.e., produce errors) where sharp boundaries between high- and low-permeability units exist, particularly in regions of very steep topographic gradients, all fundamental high mountain characteristics.
BOX 2 HIGH MOUNTAIN GROUNDWATER AND PEOPLE
Mountain hydrology (including groundwater) is critical for society in multiple and complex ways. Mountains act as "water towers" for both local inhabitants and downstream users (Viviroli et al., 2020). Globally, the retreat of mountain glaciers and snowpack, including the resulting variability in discharge, has a variety of human impacts on municipal water supply, hydropower, food security, and culture. High mountain communities are already experiencing climate change impacts and are particularly vulnerable (Gurgiser et al., 2016; Heikkinen, 2017). Though groundwater provides some resilience to high mountain water resources, it is also vulnerable to long-term climate change (Somers et al., 2019).
Human activity impacts high mountain groundwater systems, both intentionally and inadvertently. For example, livestock grazing is a common practice in mountain regions globally; however, overgrazing is thought to compact the near-surface soil and increase runoff (e.g., in the Ecuadorian páramo; Buytaert et al., 2006). Groundwater-based adaptation strategies have been proposed in mountain regions as ways to increase groundwater recharge, on the hypothesis that increased recharge during wet seasons will increase baseflow during dry periods (Ochoa-Tocachi et al., 2019; Somers et al., 2018).
While mountain hydrology and hydrogeology are important research areas, future changes in water use are likely to exceed changes in water supply (de Jong, 2015). Long-term and thoughtful collaboration with local scientists, stakeholders, and social scientists is critical to ensure that high mountain groundwater research is useful to society. Local user experiences can provide important avenues for knowledge development and future research directions. As Carey et al. (2014) show, the broad evaluation and quantification of mountain water resources is much more complicated than simply measuring flows and must also account for the social dimensions of water usage.
Going forward, new data collection techniques and modeling methods may help to fill remaining knowledge gaps. For example, remote sensing may be useful in collecting higher spatial resolution hydrological data in remote high mountain regions, such as soil moisture (Wigmore, Mark, McKenzie, Baraer, & Lautz, 2019) and spatially variable precipitation and snowmelt (Girona-Mata, Miles, Ragettli, & Pellicciotti, 2019). Furthermore, geochemical tracers can be used in combination with numerical modeling to constrain model parameters in data-poor regions (Doyle, Gleeson, Manning, & Mayer, 2015). There may also be potential to use geomorphometry of mountain landforms to infer hydrological storage and functioning (after Cairns, 2014;Carlier, Wirth, Cochand, Hunkeler, & Brunner, 2019;Gleeson & Manning, 2008). Given the research focus on remote sensing of mountain glaciers, there should be additional applicable proglacial geospatial datasets available. Improved data coverage and better methods to represent and model heterogeneous groundwater systems will help to guide mountain water resource management in the face of a changing climate (Box 2).
Health At Every Size intervention® under real-world conditions: the rights and wrongs of program implementation
ABSTRACT Implementation integrity is known to be critical to the success of interventions. The Health At Every Size® (HAES®) approach is deemed to be a sustainable intervention on weight-related issues. However, no study in the field has yet investigated the effects of implementation on outcomes in a real-world setting. Objective This study aims to explore to what extent implementation integrity moderates program outcomes across multiple sites. Methods One hundred sixty-two women nested in 21 health facilities across the province of Québec (Canada) took part in a HAES® intervention and completed questionnaires at baseline and after the intervention. Participant responsiveness (e.g. home practice completion), other implementation dimensions (dosage, adherence, adaptations), and providers' characteristics (n = 45) were assessed using a mix of qualitative and quantitative data analysis. Adaptations to the program curriculum were categorized as either acceptable or unacceptable. Multilevel linear modeling was performed with participant responsiveness and the other implementation dimensions as predictors. Intervention outcomes were intuitive eating and body esteem. Results Unacceptable adaptations were significantly associated with providers' self-efficacy (rs(23) = .59, p = .003) and past experience with facilitating the intervention (r(23) = .47, p = .03). Participant responsiveness showed a significant interaction between time and home practice completion (B = .07, p < .05) on intuitive eating scores. Conclusion Except for participant responsiveness, implementation dimensions did not moderate outcomes. Implications for future research and practice are discussed.
Health care professionals and researchers call for a need to intervene more effectively to reduce the disease burden associated with high body mass index (BMI) (GBD 2015 Obesity Collaborators, 2017; Ng et al., 2014). Long-term healthy lifestyle changes seem to be common ground for many health care professionals, although many different approaches are proposed to achieve this goal. Among the approaches focusing on health behaviors, the Health At Every Size (HAES®) movement is one of the most referenced (Cadena-Schlam & López-Guimerà, 2015). It advocates for health gains without necessarily losing weight and promotes intuitive eating, an active lifestyle, and self-acceptance. It also aims to stop weight-related stigma (Burgard, 2009). HAES® interventions are accumulating empirical evidence of their efficacy on health-related outcomes (Ulian et al., 2015), as well as on psychological well-being and eating behaviors (Clifford et al., 2015). This approach has been suggested as a promising new direction in the public health sphere for long-lasting behavioral changes (Bombak, 2014), although most research so far has been conducted in well-controlled settings. In this regard, Penney and Kirk (2015) pointed out the need for empirical studies to be conducted in a wider range of populations to move the 'reframing obesity debate' forward. Studies in real-world settings allow not only to test evidence-based interventions within a more representative sample of a targeted population, but also to take into consideration many important yet overlooked factors, such as program implementation integrity, sociopolitical context, funding, and organizational characteristics. These factors can contribute to greater heterogeneity in responses but are unfortunately underreported in studies despite being known as crucial (Allen et al., 2012; Glasgow et al., 2012; Peters et al., 2014). This knowledge gap is unsurprising, as translating evidence-based interventions into real-world settings can be challenging, especially in health and social care science (Hasson, 2010).
Surprising outcomes can result from implementing an intervention in a natural environment (Domitrovich & Greenberg, 2000; Durlak & DuPre, 2008). The assessment of implementation is therefore useful to interpret outcomes accurately, namely to determine whether they are attributable to the intervention's theoretical components or to the integrity of its application (Durlak & DuPre, 2008; Helmond et al., 2012; Mowbray et al., 2003). Most importantly, this can provide a possible explanation when weaker results (or an absence of results) are observed (Dobson, 1980). Implementation has been widely reported to influence intervention outcomes, with mean effect sizes two to three times higher in studies that monitor implementation than in studies that do not (Durlak & DuPre, 2008). Implementation monitoring indeed unveils the highest potential benefits of an intervention (Cutbush et al., 2017; Elliott & Mihalic, 2004) by providing support to program instigators to increase the quality of delivery. It also leads to a better understanding of the setting factors that foster or hinder outcomes, the processes by which they operate, and how they can be improved (Carroll et al., 2007; Dobson, 1980). Program implementation has been conceptualized in many ways and lacks standardization in its nomenclature (Toomey et al., 2020). One of the most comprehensive conceptualizations in the literature stems from Durlak and DuPre (2008), who envision implementation as a multidimensional construct grouping several aspects: (1) adherence (often used interchangeably with fidelity), the degree to which an intervention is delivered as intended; (2) dosage, the quantity of the program actually delivered; (3) quality of delivery, the skill with which the intervention is provided (e.g. clarity of instructions, ability to interact with participants); (4) participant responsiveness, the degree to which a participant displays interest in the intervention; (5) program differentiation, the uniqueness of the program in comparison with other interventions; (6) monitoring of the control/comparison conditions; (7) program reach, the scope of the program; and (8) adaptation, the changes made to the original program while it is delivered. Based on this theoretical frame, Berkel et al. (2011) provided evidence that all these dimensions are positively associated with program outcomes.
Durlak and DuPre's (2008) model was used in the current study because adherence (fidelity) is specifically distinguished from adaptations, which have traditionally been seen as a lack of fidelity (Blakely et al., 1987). The degree to which interventions are expected to be implemented with fidelity varies greatly from one perspective to another (Cutbush et al., 2017) and can be debated. Supporters of strict adherence are opposed to those who adopt a more flexible view of fidelity, where adaptations are 'allowed' if they do not compromise the intervention's core components (Cohen et al., 2008). Core components are defined as 'the most essential and indispensable components of an intervention practice or program' and are thought to determine the success of the intervention (Gould et al., 2014). Adaptation supporters argue that adaptations can preserve and even enhance program effectiveness by making it more relevant to diverse audiences and more culturally competent (Castro et al., 2004). Interestingly, adaptations have been associated with both better and worse program outcomes (Stirman et al., 2013). This inconsistent body of literature leads researchers to think that some modifications may indicate decreases in fidelity, while others embrace the intended purpose of the intervention. Yet treatment manuals cannot possibly list exhaustively in advance which behaviors and adaptations are acceptable or proscribed. Stirman et al. (2013) have thus highlighted the relevance of empirically determining the core components of an intervention and coding adaptations in addition to monitoring fidelity.
Few studies include more than two components of implementation (Durlak & DuPre, 2008). Documenting the effects of several facets of implementation integrity on program outcomes at once is nevertheless relevant to better understand which dimensions account for the most variance (Giannotta et al., 2019). Within our field of research, some weight management programs have been implemented in real-world settings and have addressed the issue of implementation (Campbell-Scherer et al., 2014; Damschroder & Lowery, 2013; Lombard et al., 2014). However, those studies either used different conceptualization frameworks that did not focus on individual-level outcomes, or did not explicitly report outcomes in conjunction with a comprehensive assessment of implementation integrity. In the specific case of the HAES® approach, our research group is, to our knowledge, the first to investigate the effects of implementation on program outcomes.
The purpose of this study is to explore to what extent implementation integrity moderates the outcomes of a HAES® intervention disseminated within the community. This study is in line with previous publications reporting on the program's effectiveness (Bégin et al., 2018; Carbonneau et al., 2016), and herein focuses on the effects of implementation. More particularly, it examines implementation dimensions at two different levels: program participants (participant responsiveness) and providers (dosage, adherence, adaptation). It should be noted that adaptations were classified as either acceptable or unacceptable by the program instigators (according to the core components of the program). We hypothesized significant positive associations between all dimensions of implementation, with the exception of unacceptable adaptations, which were assumed to be detrimental and negatively correlated with the other dimensions. We also hypothesized that program outcomes would vary significantly across sites of implementation and would be predicted by implementation dimensions.
Method and materials
Intervention

'Choisir de maigrir?' (CdM?) (What about losing weight?) is a HAES®-based intervention for women which promotes intuitive eating and self-acceptance, following the example of the fat acceptance and size diversity movement. The intervention consists of 13 weekly three-hour sessions, plus an intensive six-hour day, provided in small groups of 10-15 participants and led by a social worker or psychologist as well as a dietitian. It aims to develop healthy ways of coping with weight management, such as reevaluating eating habits and food intake, enjoying physical activity, and being critical of diets. By the end of the program, it prompts participants toward free and informed decision-making about losing weight. A realistic action plan of behavior changes, customized to their own personal situation, is then designed accordingly. As such, the success of CdM? relies on outcomes reflecting a healthier relationship with oneself, such as improvements in body esteem and intuitive eating (Bégin et al., 2018; Carbonneau et al., 2016). CdM? has the special feature of giving participants the opportunity to lead some discussions and to customize their goals and the actions to achieve them, in a way that encourages empowerment throughout the program. The main themes addressed during the CdM? program, with examples of activities, have been previously published (Carbonneau et al., 2016). The efficacy of CdM? has been assessed several times and has revealed mostly positive outcomes on eating behaviors and psychological variables (Gagnon-Girouard et al., 2010; Mongeau, 2004; Provencher et al., 2007; Provencher et al., 2009).
CdM? dissemination overview
The current research took place in the context of a massive dissemination of CdM? in Health and Social Services Centres (HSSC) across the province of Québec in response to the public health action plan led by the Ministère de la santé et des services sociaux (MSSS, Ministry of Health and Social Services) to promote healthy habits and prevent weight-related issues (MSSS, 2012). The dissemination of CdM? was entrusted to 'Équi-Libre', a Québec-based nonprofit organization aiming at preventing and reducing issues related to weight and body image in the population (Groupe d'action sur le poids Équi-Libre, 2021). They ensured the training of all providers with a free five-day seminar. They also provided them with a turnkey intervention toolkit including a detailed step-by-step description of the intervention, as well as an explanation of the theoretical rationale behind the program, a comprehensive review of literature on weight management, intervention materials, practical advice for starting the program and videoclips from previous CdM? facilitation (Groupe d'action sur le poids ÉquiLibre, 2005).
Procedure
HSSCs from across the province of Quebec (Canada), spread over 9 different regions (80% urban and 20% rural areas), were provided with the instructions and assessment materials by the research team. They were entirely in charge of recruitment, data collection, and delivery of the intervention. Participants of CdM? were recruited from September 2010 to December 2011. Procedures were approved by the research ethics committee (REC) of the Health and Social Services Agency of Montreal and were also ratified by each HSSC's local ethics committee. This study was conducted following the principles of the Declaration of Helsinki.
CdM? providers gave participants a series of pen-and-paper questionnaires to complete at home at baseline (T1 = 0 months), after the intervention (T2 = 4 months), and at 1-year follow-up (T3 = 16 months). Only data from T1 to T2 were used, as this study focuses on processes related to implementation. Participants also completed a sociodemographic questionnaire at baseline, and a feedback survey about CdM? at the end of the intervention (T2). Providers completed an evaluation grid, used as a reminder at the end of every session, to report the conduct of each planned activity. These reminders were all sent back to the research team by mail at the end of the intervention. Providers had to report in their grid the extent to which each activity was performed with integrity, that is, whether the activity was: (1) performed according to the manual; (2) performed with some modifications; or (3) not performed. They also had to report the length of each activity and, if applicable, describe the modifications they made. An individual semi-structured interview was then conducted by phone at the end of the program with each provider in order to better understand the course of implementation in their respective HSSC. The interview was recorded and lasted approximately one hour. The research goals as well as the independence of the sources of funding were recalled at the beginning of each interview. The interview guide had 30 questions derived from evidence-based factors known to influence implementation integrity (Durlak & DuPre, 2008) and related to the core components identified by the instigators of the program (see Identification of CdM? core components below). Qualitative data from the reminder grids and interview verbatim transcripts were imported into NVivo v9.0 for analysis.
Identification of CdM? core components
The initial instigators of CdM? (program developers) were invited to provide the research team with their subjective insight on the core components of the program. The instigators first identified the primary theoretical core components of the program, namely the non-diet, self-acceptance, and empowerment approaches. They completed a grid listing each activity of the program, rated according to its degree of importance (1 = very important, 2 = somewhat important, 3 = slightly important) and then associated with core components. They were also asked to comment on the level of flexibility they would allow for adaptations to (a) the holding of the activity, (b) its content, (c) its facilitation style, and (d) its length. Grids were sent back to the research team and led to the creation of a decisional algorithm allowing the classification of modifications made to the program by providers into two distinct categories: (1) acceptable adaptations and (2) unacceptable adaptations (see Figure 1). Five main core components emerged from the qualitative analyses (Samson, 2015): (1) empowerment; (2) a healthy group environment; (3) natural flow; (4) learning objectives (key messages regarding the non-diet and self-acceptance approaches); and (5) the aim of the program (an informed decision-making process toward the design of a personalized action plan).
Sampling
Participants

216 adult women from the community participated in CdM?, nested within HSSCs across the province of Quebec (Canada). The intervention was open to any woman seeking treatment for eating or weight-related problems. Apart from being aged 18 years or over, no formal exclusion criteria were used to restrict entrance to the program. A mandatory information session was held prior to the program to ensure participants' understanding of the themes covered. Participants reporting a pregnancy over the course of the study (n = 5) were excluded from the analyses for reasons of sample homogeneity. 162 participants completed the questionnaire at the end of the intervention and were included in the analyses.
Providers
Recruited dyads of providers consisted of a registered dietitian and a psychologist or social worker who were already trained by ÉquiLibre and engaged in delivering CdM?. All HSSCs that had a trained dyad of CdM? providers at their disposal were approached for recruitment (n = 41). Of these, 6 refused to participate in the study and 14 were not offering the intervention at the time of data collection, resulting in 21 eligible HSSCs and a total of 24 dyads (some HSSCs had trained several dyads of providers across different establishments). All providers agreed to participate in the study, although three of them did not complete data collection: one for medical reasons and the remaining two due to lack of time. The final sample size of providers was n = 45 (see Figure 2 for a flowchart of HSSC/provider recruitment).
Measurements
Implementation dimensions

Adherence. Adherence was computed by listing all activities that were completed according to the program manual. This number was then divided by the total number of activities (n = 124) and multiplied by 100 to obtain a percentage of adherence to the curriculum. Adherence has been documented in the past as a percentage of program activities completed (Dusenbury et al., 2005). As mentioned earlier, the measure of adherence results from the providers' evaluation grids completed at the end of each session (see Procedure).
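A minimal sketch of this computation, assuming each of the 124 activities is coded 1 (performed per manual), 2 (performed with modifications), or 3 (not performed); the simulated grid below is illustrative, not actual study data:

```python
import random

random.seed(1)
# One dyad's evaluation grid, coded 1 = per manual, 2 = modified,
# 3 = not performed; the weights below are invented for illustration.
grid = random.choices([1, 2, 3], weights=[0.80, 0.15, 0.05], k=124)

adherence_pct = grid.count(1) / len(grid) * 100
print(f"adherence to the curriculum: {adherence_pct:.1f}%")
```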
Dosage. The amount of exposure to CdM? was calculated from the length of activities reported by providers in their reminder grid. For each HSSC, dosage was computed by dividing the amount of program time actually delivered by the total intended duration of the program, then multiplying by 100 to obtain a percentage of exposure.

Acceptable adaptation. Acceptable adaptations are the modifications made to the program curriculum per providers' dyad that were ultimately classified as acceptable according to the standards of the instigators. Adaptations could be either modifications or additions to the program, as long as they did not alter core components of CdM?. Examples of acceptable adaptations would be adding an optional activity planned in the manual; providing additional explanations, examples or visual support: 'an example [...] of these documents (activity journal and compilation of activities) was photocopied and given to participants as examples to make their own compilations at home'; or bringing together activities with similar themes: 'Moved the [presentation on the body's resistance to weight loss] to Session 12 (I become critical of diets)'.
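The dosage measure described above reduces to a similar ratio. A sketch using the program's planned schedule (13 three-hour sessions plus one six-hour day) and invented delivered lengths:

```python
# Planned exposure: 13 weekly three-hour sessions plus one six-hour day.
planned_minutes = 13 * 180 + 360
# Delivered exposure as a dyad might report it (invented values).
delivered_minutes = 13 * 165 + 330
dosage_pct = delivered_minutes / planned_minutes * 100
print(f"dosage (exposure): {dosage_pct:.1f}%")
```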
Unacceptable adaptation. This variable refers to the number of modifications performed per providers' dyad (e.g. additions, removals, changes in the progression of activities, alterations of core components) that were classified as unacceptable by the decisional algorithm. These could be, for instance, the addition of a new activity without consulting participants ('We'll do the taboo foods exercise (which is not planned in the program) [...]'), the omission of theoretical content ('[The physiological consequences of obesity of the energy balance presentation sheet] not completed [...]'), or limiting group discussions and interventions, which was considered to obstruct empowerment ('[...] we asked women to limit their interventions when giving feedback about their visualization exercise'). Note that all removals were classified as unacceptable.
Participant responsiveness. A questionnaire developed by the research team was used to assess participants' responsiveness to CdM?. We define participant responsiveness as a multidimensional construct referring to involvement in and interest toward the program, comprising attendance, subjective improved knowledge, home practice completion, and satisfaction. We also included goal achievement as further evidence of responsiveness to the program. Qualitative information was collected alongside the quantitative assessment of responsiveness, using written feedback on the intervention. For instance, participants were asked to explain why they had not met their goals, or the reasons behind their satisfaction ratings for each activity.
Goal achievement. Participants were asked whether they had achieved their main goal over the course of the program, to which they could answer (1) yes, (2) more or less, or (3) no.
Attendance. Participants' attendance at each session of CdM? was documented by the program providers at the beginning of each session. Total attendance was computed by summing attendance across sessions at the end of the program.
Subjective Improved Knowledge (SIK). Participants were assessed on their improvement of knowledge on theoretical subjects thoroughly discussed during the program (e.g. energy balance, determinants of weight regulation, physical activity, body dissatisfaction, weight-loss products and programs). Participants could answer either '1 = slightly', '2 = moderately' or '3 = a lot'. A mean score was calculated by averaging scores for the six learning components. Higher scores indicate higher improved knowledge. Internal consistency was acceptable (α = .70).
Home Practice Completion (HPC). Participants self-reported on a Likert-scale how often they put into practice the several methods and problem-solving exercises learned during the program (1 = never; 5 = very often). Ten behaviors were assessed: (1) be aware of false hunger; (2) do an enjoyable substitutive activity not related to food; (3) listen to your body; (4) taste the food you eat; (5) feel and respect satiety signals; (6) choose the desired foods; (7) relax; (8) be active; (9) express your feelings; and (10) assess difficult situations. A mean score was then computed, where higher scores reflect higher home practice completion. Internal consistency was acceptable (α = .73).
Satisfaction. Participants were asked to rate their level of satisfaction with the various types of activities conducted during the program. They were asked, 'Are you satisfied with the following activities?', to which they could answer '1 = slightly', '2 = moderately' or '3 = a lot'. More specifically, nine types of activities were assessed: (1) energy balance assessment; (2) true signals of hunger exercise; (3) tasting exercise; (4) role-playing; (5) modeling dough exercise; (6) theoretical/conceptual presentation; (7) visualization exercises; (8) relaxation/mindfulness exercise; and (9) action plan design. A final score was averaged across all types of activities. Higher scores indicate higher satisfaction. The internal consistency for this variable was, however, lower than for the other measures of responsiveness (α = .61). This weaker reliability may be attributed to the eclectic and multidimensional nature of the program, which relies on significantly different types of activities.
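Since Cronbach's alpha is reported for several of these scales, the following self-contained sketch shows how alpha is computed from an item-response matrix; the simulated Likert responses stand in for the 10 home practice completion items and are not study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 1-5 Likert responses standing in for the 10 HPC items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(162, 1))                  # shared trait
items = np.clip(base + rng.integers(-1, 2, (162, 10)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```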
Provider characteristics
Experience. Providers were asked to report the number of years of experience they had in their respective professional area (dietitian or social worker/psychologist). A mean score was computed for each providers' dyad.
CdM? experience. CdM? experience refers to the providers' program-specific experience, that is, how often they had facilitated CdM? in the past in their current HSSC or other establishments.
Self-efficacy. Providers were assessed on the three theoretical founding pillars of CdM?: (1) the non-diet approach; (2) the self-acceptance approach; and (3) the empowerment approach. Providers were asked, 'To which degree do you feel able to convey information about [e.g. the non-diet approach]?', and reported it on a 5-point Likert scale (1 = not at all, 5 = entirely). They were similarly assessed on a set of skills relevant to group-based interventions: (1) group dynamic facilitation, (2) handling emotional participants, (3) handling quiet participants, (4) handling overwhelming participants, (5) managing auto-facilitated sessions, (6) managing dyadic facilitation, and (7) adopting a non-directive facilitation style. A mean score averaging the above-mentioned items was computed for each provider and dyad of providers, resulting in a mean self-efficacy score. Higher scores indicate a higher sense of self-efficacy. Internal consistency was good (α = .83).
Program outcomes

Intuitive eating. The Intuitive Eating Scale (IES) is a 21-item scale yielding a total score as well as 3 subscales: (1) Eating for physical rather than emotional reasons, (2) Unconditional permission to eat when hungry and whatever food is desired, and (3) Reliance on internal hunger and satiety cues (Tylka, 2006). This questionnaire broadly indicates the extent to which an individual is inclined to eat according to their hunger and satiety signals, and is able to listen to their body to guide what, when, and how much to eat. For the purposes of this study, the total score was used as a main outcome rather than examining each subscale individually. Many studies have supported the construct validity of this scale with women (Tylka & Van Diest, 2013). The Cronbach alpha coefficient for the total score was above .70 at baseline and post-intervention (α = .78 and .82, respectively), indicating good internal reliability in the current study.
Body esteem. The Body Esteem Scale (BES) is a validated 23-item questionnaire composed of 3 subscales: (1) Appearance, (2) Weight, and (3) Attribution (Mendelson et al., 2001). While the first refers to general self-appreciation of one's appearance, the second focuses on weight satisfaction strictly speaking. The Attribution subscale refers to social attributions made about one's body and weight. The BES has been validated in adults across a wide age range and has shown good test-retest reliability, as well as convergent and discriminant validity (Mendelson et al., 2001). Only the Appearance subscale was used to assess the construct of body esteem (hereafter BESAP). This subscale has 10 items for which a mean is computed. Items range from 0 to 4 on a Likert scale (0 = never; 4 = always), with lower scores indicating lower body esteem. The choice of this subscale relied mostly on conceptual considerations, appearance being closer to what is addressed in CdM? sessions. Internal consistency for this scale was good (α = .89 at baseline and α = .90 at post-intervention).
Statistical methods
Using SPSS v25, we performed descriptive statistics and bivariate pairwise Pearson correlations between variables, and used Spearman correlations (rs) when the assumptions of parametric tests could not be met (Aggarwal & Ranganathan, 2016). Univariate outliers identified with the outlier labeling rule underwent 90% winsorization, though unwinsorized data are shown in the descriptive statistics. Independent-samples t-tests or Mann-Whitney U tests were also performed to compare providers by occupation. A two-tailed p-value of .05 was employed as the criterion of statistical significance for all tests, with Bonferroni correction applied for multiple testing. A three-level multilevel (hierarchical) linear model (MLM), with time (level 1) and participants (level 2) nested in HSSC implementation sites (level 3), was fitted through SPSS MIXED MODELS to examine the effect of implementation dimensions on intervention outcomes over time. MLM is an appropriate statistical method when data are nested in units of a higher level of analysis, allowing the study of the relationship between a dependent variable and one or more explanatory variables without violating the independence assumptions of linear multiple regression. Assumptions regarding normality of residuals and absence of outliers (using Mahalanobis distance) were assessed for the set of predictors. A null model with no covariates (intercepts-only model) was tested for each outcome to provide intraclass correlations for each level of the hierarchy. Models were then tested with centered predictors (implementation dimensions, participant responsiveness, and time).
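The analyses were run in SPSS MIXED MODELS; as a rough open-source analogue, the sketch below fits a two-level random-intercept growth model with a time x home-practice-completion interaction using Python's statsmodels on simulated data. All variable names and simulated values are assumptions for illustration, not the study's data or its exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 162
hpc = rng.uniform(1, 5, n) - 3.0           # centered home practice completion
person = 2.7 + rng.normal(0, 0.26, n)      # person-specific baseline IES

rows = []
for pid in range(n):
    for t in (0, 4):                       # months: baseline and post
        ies = person[pid] + (0.08 + 0.07 * hpc[pid]) * t + rng.normal(0, 0.4)
        rows.append({"pid": pid, "time": t, "hpc": hpc[pid], "ies": ies})
df = pd.DataFrame(rows)

# Random intercept per participant; time, hpc, and their interaction fixed.
model = smf.mixedlm("ies ~ time * hpc", df, groups=df["pid"])
print(model.fit().summary())
```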
Results
Descriptive statistics of CdM? participants are presented in Table 1, and those of CdM? providers in Table 2. On average, providers had 15.75 years of experience and had facilitated the CdM? program 3.76 times in the past (see Table 3 for means and standard deviations per dyad). No significant differences were found between providers according to their occupation, except for self-efficacy in handling emotional participants: a Mann-Whitney U test indicated that psychosocial professionals (M = 4.64) reported greater self-efficacy on this skill than dietitians (M = 3.68), U = 111.5, p = .001, η² = .25.
Means and correlations regarding participant responsiveness and the other implementation dimensions are shown in Table 3. Bivariate correlations between the subdimensions of participant responsiveness revealed positive associations between subjective improved knowledge and satisfaction (rs = .37, p < .001), and between home practice completion and satisfaction (rs = .26, p = .001), both with a medium effect size. Unacceptable adaptations showed strong positive associations with providers' dyads' CdM? experience (r = .47, p = .03) and their overall self-efficacy (rs = .59, p = .003), while self-efficacy correlated with CdM? experience (rs = .42, p < .05). Dosage did not correlate significantly with any other variable. Adherence correlated positively with subjective improved knowledge (rs = .25, p = .01), but with no other variable.
The null model tested with intercepts only resulted in a total mean score on the IES of 2.93 (t(18.91) = 50.28, p < .001), with s²error = .22, p < .001, s²participant = .07, p = .03, and s²HSSC = .03, p = .11. Intraclass correlations (ICCs) were calculated accordingly: .68, .21, and .10 for the intraindividual residual error, participant, and HSSC levels, respectively. We similarly obtained a total mean score on the BESAP of 1.39 (t(15.20) = 20.72, p < .001), with s²error = .21, p < .001, s²participant = .44, p < .001, and s²HSSC = .004, p = .89, yielding ICCs of .33, .67, and .01, respectively. As no support was found for three-level modeling, we examined predictors related to implementation dimensions using two-level models. Note that only the intercept and residual deviations were entered as random effects, since slope and slope-intercept covariance deviations did not allow the models to converge properly. As such, time was entered as a fixed effect only and an identity variance-covariance matrix was used. The models were significantly better fitted to the data than the intercepts-only models: χ²(11, N = 162) = 636.03 - 416.78 = 219.25, p < .001 for the IES and χ²(11, N = 162) = 259.03, p < .001 for the BESAP. Results of the models are presented in Tables 4 and 5 with participant responsiveness predictors only, as the other implementation dimensions did not interact with time for either intuitive eating or BESAP (data available in supplemental material). For intuitive eating, participants on average had an IES score of 2.71 at baseline (p < .001), which deviated significantly between participants, and an average increase in IES of .08 per month over the course of the intervention. Home practice completion was the only predictor significantly interacting with time (b = .07, t(133.52) = 2.13, p < .05), showing an additional increase in IES of .07 per month for each one-unit increase in home practice completion. Other predictors did not have a significant effect on the slope. For body esteem, Table 5 displays significant interindividual differences around the intercept, which was 1.02 on average, as well as a significant slope of .08 (p < .001). Participants had a lower intercept by .46 points for each one-unit increase in subjective improved knowledge, meaning that those who reported higher subjective improved knowledge had lower body esteem at baseline. No other significant effect was found among the participant responsiveness predictors.
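As a quick worked check, each ICC reported above is simply one variance component divided by the total variance from the null model:

```python
# IES null-model variance components as reported in the text.
variance = {"residual (time)": 0.22, "participant": 0.07, "HSSC": 0.03}
total = sum(variance.values())
for level, v in variance.items():
    print(f"ICC {level}: {v / total:.2f}")
# Prints roughly .69, .22, .09; the reported .68, .21, .10 presumably
# reflect rounding of the underlying (unrounded) variance components.
```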
Discussion
This study aimed to examine the effect of implementation on the outcomes of a community-based HAES® intervention. Regarding our main research goal, our preliminary analysis did not support evidence of significant variability across implementation sites, and implementation dimensions did not moderate outcomes, except for home practice completion. While these results go against the main body of literature stating that implementation integrity moderates program outcomes, we call for caution in their interpretation for several reasons. First, it should be reiterated that 'no evidence of effect' is not to be confused with 'evidence of no effect' (Ranganathan et al., 2015), and we might have encountered a type II error, perhaps due to limitations of the measurements used or the sample size. Secondly, it is possible that CdM? was delivered faithfully enough by providers to reach a 'good-enough' threshold allowing participants to benefit from the program at scale. In the same vein, it could mean that the adaptations performed, regardless of whether they were classified as acceptable or unacceptable by the algorithm, were either in line with the intended philosophy of the intervention or inconsequential. This would be plausible given that participant responsiveness was globally positive (quantitatively and qualitatively). Thirdly, we must point out that CdM? is a program that is also facilitated by participants, and group dynamics take up a lot of space in the program. This could partially explain why no provider effect was found, as opposed to therapist effects usually accounting for around 7% of outcomes (Schiefele et al., 2017). Fourthly, the HAES® approach is known to have multiple outcomes, so the absence of an implementation effect on the two outcomes chosen in this study does not preclude an influence on other outcomes, such as reducing maladaptive eating behaviors or other measures of psychological well-being. Nevertheless, we should mention that other studies have reported results similar to ours, where no effect was found for implementation dimensions except participant responsiveness (Giannotta et al., 2019). This study is thus in line with a body of literature highlighting the importance of participant responsiveness (Berkel et al., 2011). One study even recently found that participant responsiveness had a direct effect on outcomes, rather than being a mediating influence in the association between quality of delivery and outcomes (Doyle et al., 2018).
An interesting result emerging from our analysis was that home practice completion had a positive effect on change over time in intuitive eating. This is concordant with basic principles invoked in behavioral science, which emphasize the importance of performing successive approximations of a behavior to experience change (Jackson, 1997). Although this finding is rather self-explanatory, it emphasizes that intuitive eating improves over time through practice. Therapists who teach intuitive eating could therefore put emphasis on practicing at home the principles seen in session and present intuitive eating as a skill that can be learned. On the other hand, no significant participant responsiveness predictor was found for body esteem. It is possible that the choice of variables was less appropriate for this particular outcome. It might have been useful, retrospectively, to have assessed participant responsiveness in a way that captures satisfaction with group exchanges, such as feeling accepted by the group and being treated in a non-judgmental manner. Perhaps the atmosphere of the group could have been used to predict change in participants' body image.
Our study also led us to take a closer look at the implementation process. Although modifications were expected, the extent to which certain dyads departed from the program curriculum was considerable. This should, however, be interpreted with caution considering the self-reported nature of our measurement, which can deviate from the true course of events (as opposed to the use of observers rating providers' behaviors). Meanwhile, results in terms of adherence are difficult to interpret since no benchmark was defined prior to the conduct of this study, again suggesting caution in interpretation. We also noted that every providers' dyad performed both acceptable and unacceptable modifications to the program. Moreover, providers reported a very high sense of self-efficacy regarding the theoretical pillars of the HAES® movement, as well as the skills required to facilitate a program in a group setting, regardless of their occupation. An exception was the handling of emotional participants, as dietitians reported feeling less confident on this matter. This could indicate the relevance of having providers with complementary fields of expertise.
Regarding the associations between implementation dimensions, adherence did not correlate significantly with most variables, contrary to our expectations. Only subjective improved knowledge correlated moderately with adherence, which could indicate that participants received more educational material when providers closely followed the program curriculum. Another surprising result related to unacceptable adaptations, where moderate associations were found between providers' self-efficacy, CdM? experience, and the performance of unacceptable adaptations. These results appear counterintuitive, as self-efficacy has been found to be associated with adherence (Campbell et al., 2013; Thierry et al., 2022). However, it is important to recontextualize these results within the current study, as the decisional algorithm separating acceptable from unacceptable adaptations, based on the instigators' view of the program, seems especially unforgiving of providers' initiatives. Indeed, not only did the algorithm disallow alterations of the learning objectives, but it also severely restricted the manner in which the program was given (e.g. empowerment, group environment, natural flow). Therefore, many adaptations, such as 'doing a lot of mirroring during and after group discussions [...]', were classified as unacceptable while not altering, in our opinion, core components per se. As such, it is likely that providers confident in their abilities felt more comfortable performing adaptations of greater extent, which could instead reflect a good mastery of the program. Providers could also have either overreported or underreported the adaptations made, providing an alternative explanation as to why unacceptable adaptations were associated with self-efficacy. Overreporting adaptations, a downside of using self-reported measures of implementation fidelity (Allen et al., 2012), could simply reflect conscientiousness on the part of some providers.
In terms of participant responsiveness, it seems that most participants expressed genuine enthusiasm and engagement towards CdM?. They indeed reported high satisfaction with the activities of the program. The self-reported scores matched the qualitative data as well, which was very positive overall. More than two thirds of participants reported having achieved their main goal by attending CdM?, and those who reported not having met their goal recognized, however, that their goals changed throughout the intervention. Several expressed having new 'insights' regarding their issues, such as 'needing to address their mental health problems prior to (their weight)', while others held onto their goal to lose weight and expressed being now 'better equipped' for it. When participants reported dissatisfaction with their participation in the program, they generally mentioned having felt 'overwhelmed by the amount of homework to do' or 'not having invested as much (effort) as they would have wanted to'. Regarding satisfaction with each activity, most participants' feedback was generally positive, except for the play dough activity, which generated polarized reactions. This high responsiveness among participants could, all in all, reflect a high-quality implementation.
This study presents several strengths that are worth mentioning. First, this is, to our knowledge, the first study in the field of HAES® to examine outcomes of a disseminated, community-based intervention through the lens of implementation science. We might as well point out that we based our study on a model of implementation, as recommended by current guidelines in implementation science (Toomey et al., 2020). This brought us to consider several dimensions of implementation rather than only measuring treatment adherence. We also used a mixed methodology, combining qualitative and quantitative data, to gain a better understanding of implementation processes, which, once again, has been repeatedly recommended in the literature (Peters et al., 2014; Toomey et al., 2020). Another strength was the consideration of adaptations independently from adherence, especially since we separated unacceptable from acceptable adaptations using an assessment of the intervention's core components. Indeed, measuring treatment adherence alone would have failed to capture the occurrence of certain adaptations. However, this study also includes some important limitations. The biggest limitation relates to the self-reported methodology used for measuring adherence and adaptations made to the program, which could have compromised the quality of the data through social desirability, omissions, overreporting and underreporting. It would have been more accurate to have these concepts measured by independent, external observers rating the behaviors of providers, and to determine a gold standard in terms of fidelity (or benchmark) that would have allowed, for instance, a categorization into low-quality and high-quality implementation. Another limitation regarding adaptations is the determination of essential components, which was done after the implementation of CdM?, using only the point of view of its instigators. It is also important to note that measurements of participant responsiveness were derived from a questionnaire developed by the research team, which is suboptimal in terms of reliability and validity. For instance, the frequency quantifiers for home practice completion could have been more precise, and an objective assessment of knowledge change could have been used. As such, our results should be interpreted very cautiously.
In conclusion, our study showed the complexity of implementing a multi-dimensional intervention in a community-based setting. Our main analysis failed to demonstrate an effect of HSSC-level implementation on outcomes. While disappointing in terms of findings, this 'absence of results' could be seen positively, as it is possible that participants improved their intuitive eating and body esteem independently of how the intervention was given. It challenges the attention given to treatment adherence. Meanwhile, we found that participant responsiveness (home practice completion) had a positive effect on intuitive eating. More studies in the field are needed to explore which components of implementation matter the most, especially given the complexity of implementing and scaling up interventions. The question of whether a 'lack of adherence' could have the same detrimental effect as making unacceptable modifications to the treatment would also deserve, in our opinion, further investigation. More particularly, researchers should explore the impacts of different types of adaptations and better understand the contexts in which they are performed. In that regard, our study showed unexpectedly that unacceptable adaptations made to the program were associated with greater self-efficacy and experience with the program. Finally, more attention should be given to the monitoring of implementation in the field of obesity, as it would step up the quality and accuracy of findings in future effectiveness studies.
Promoting physical literacy in Irish adolescent youth: the youth-physical activity towards health (Y-PATH) intervention
Introduction
In 2007, Margaret Whitehead outlined that the sporadic term of "physical literacy" was gaining momentum, specifically in relation to physical education (PE) practice in the United Kingdom. 1 Most recently, in 2014, the Physical Education Association of Ireland (PEAI) launched their annual conference with a major focus on "physical literacy" and the importance of building a "movement culture" during childhood. 2 In the global context, physical literacy across the lifespan is now becoming a critical field of focus in physical activity (PA), exercise, sport settings and other public health sectors. 3,4 Previous research by Whitehead contextualised physical literacy as a multifaceted conceptualisation of the skills required to fully realise physical activity potential through the embodied experience. 5 In a more recent classification, Whitehead defined physical literacy as having the motivation, confidence, physical competence, knowledge and understanding that underpin someone's values and responsibilities for life-long purposeful physical pursuits. 6 An important component of lifelong physical literacy development is the acquisition of fundamental movement skills (FMS). 7 In recent years, the implementation of FMS programmes in sport, exercise and school-based environments has received considerable evidence-based attention. 8-12 FMS, as aligned with physical literacy, can be defined as basic observable patterns of behaviour and movement present from childhood to adulthood. 13,14 The skills include, for example, running, hopping, skipping (locomotor), balancing, twisting, dodging (stability), and throwing, catching and kicking (object control). Individuals at the fundamental movement stage are preparing for the acquisition of more advanced skills within the sport-specific stage. 13,14 Whilst FMS are often considered the initial building blocks of more complex movements, 14 their mastery is a prerequisite for everyday movements and for participation in sports and PA. 15,16 While physical literacy, as a concept, is the "new kid on the block", 17 it is reasonable to state that both PA and physical fitness in childhood have been extensively researched over the past number of years. 18-23 Undeniably, there is now a plethora of strong research evidence demonstrating that the physical fitness and health status of youth are substantially enhanced by regular PA participation. 24-26 Lifestyle changes in industrialised countries over the past few years have resulted in a decline in engagement in physically active behaviours. 27 A growing body of literature is now showing that the prevalence of chronic disease risk factors increases during adolescence, 28,29 and that levels of PA decline dramatically during adolescence. 30,31 Therefore, it is a valuable contribution to the literature that Tremblay & Lloyd's concept of physical literacy for youth specifies the importance of integrating assessment of FMS, PA and physical fitness. 17 Yet, it must be recognised that knowledge of health is a critical component for skill development, PA participation and physical fitness; it is the foundation of characteristics, attributes, behaviours, awareness and understanding related to healthy active living. 17 Previous work by Alexander on the alteration of an individual's knowledge base describes learning as a relatively permanent change in the way a person thinks and processes information. 32
In light of the concept of physical literacy, it is interesting that research by Ennis suggests that participants can learn the kinaesthetic principles of fitness-related sciences within an educational fitness curriculum. 33 The importance of knowledge within physical literacy 17 is reinforced by Physical and Health Education Canada, specifically that PE programmes for youth provide the best opportunity to experience a variety of activities in a progressive, sequential format to ensure maximum learning and enjoyment. 34 The "knowledge, skills and understanding" of physical literacy 35 can be acquired constructively through the medium of PE for children and youth. 36 In this particular instance, the provision of a knowledge-based PE programme may provide a platform for the development of physically literate and physically active youth. 8 Most recently, the concept of physical literacy across the lifespan has been subject to strategic action planning within the United States. 4 Critical philosophical debate around physical literacy began in the mid-1990s, yet there is now a global emphasis towards the promotion of physical literacy through the enactment of multi-sectoral approaches, 3-5,35 with education firmly positioned as a platform for delivery. The purpose of the present evidence-based study is to highlight that many of the well-established components associated with physical literacy promotion (such as FMS, PA, and health-related knowledge and awareness) are currently being delivered in Irish second-level schools for adolescent youth, aged 12 to 15 years old, as part of the longitudinal Y-PATH randomised controlled trial.
Case presentation: synopsis of the Y-PATH intervention
A systematic review on youth PA intervention effectiveness concluded that, for adolescent youth, multi-component interventions involving the school, family and community have the potential to make important differences in increasing youth PA. 37 More recently, a systematic review summarising the effectiveness of school-based interventions in promoting youth PA and fitness found that the evidence continues to advocate for the ongoing implementation of school-based PA interventions for youth. 24 The Y-PATH programme is fundamentally guided by research-informed findings. 8,38 As previously reported in O'Brien et al., 39 there are four essential intervention components within the existing Y-PATH evidence-based study:

Student component: A targeted focus on the integration of HRA and FMS for students within a specifically tailored post-primary PE curriculum (delivered by specialist PE teachers).

Parent/guardian component: A PA promotion (across the lifespan) workshop prior to the beginning of the intervention, and distribution of research-informed Y-PATH information leaflets for parents and guardians.

Teacher component: All school teachers attend two workshops (pre and mid intervention) which, among other concepts, highlight the importance of "active role modelling", and voluntary participation in a one-week "Teacher Pedometer Challenge" to enhance participation and compliance.

Website component: All student, parent and teacher resources are made readily available for all intervention participants (http://www.dcu.ie/shhp/y_path.shtml).
A comprehensive overview and theoretical structure for the Y-PATH programme has been reported elsewhere; 8 Figure 1 contextualises the component structure of the intervention. In line with the promotion of physical literacy, 5,6,40 it is important to note that the Y-PATH intervention was developed with a strong focus on PE-based HRA 41-46 and FMS, 47-50 with additional school, teacher and parental components.
Research considerations in the development and extension of the Y-PATH intervention
The Y-PATH intervention was initially based on the Medical Research Council's (MRC) guidance document, 'Developing and Evaluating Complex Interventions', 51 which is shown in Figure 2. Since the inception of this evidence-based study, the Y-PATH programme has progressed from the theory/modelling phase one, 8,38 to the exploratory trial phase two, 39 and is currently undergoing the definitive randomised controlled trial of phase three. The four stages outlined in the MRC document for 'Developing and Evaluating Complex Interventions' are: 1) Development, 2) Feasibility, 3) Evaluation, and 4) Implementation.
Following the MRC's guidelines, 51 the Y-PATH research study is continuing to generate and increase the longitudinal evidence base. The development of the Y-PATH intervention (phase one, theory/modelling, 2010-2011) was guided by previously reported research findings, 8 specifically the low levels of PA participation and FMS proficiency amongst Irish adolescent youth. As part of this theory and modelling phase, the development of the Y-PATH intervention also identified an evidence base of literature relating to youth PA and FMS promotion during PE classes. The development of this intervention then used the Youth Physical Activity Promotion (YPAP) model as a theoretical framework. 52 The Y-PATH intervention hypothesised that if the research team (principal investigators and trained field staff) were successful in positively influencing the enabling, predisposing and reinforcing factors for PA experienced by youth, then a successful adolescent PA intervention would occur. The purpose of using the YPAP framework, as part of the theory/modelling phase, was to collect specific data, including levels of PA, FMS, body mass index and psychological influences (including attitudes and self-efficacy), so that a meaningful and relevant intervention could be implemented. 8 Phase two's exploratory trial (2011-2012) within the MRC framework evaluated the Y-PATH intervention's effectiveness after nine and twelve months, respectively. Findings from this quasi-experimental non-randomised controlled trial suggested a positive effect of the Y-PATH intervention on the PA and FMS levels of Irish adolescent youth. 39 The Y-PATH intervention 8 was therefore shown to be effective in increasing adolescent PA and FMS levels. As part of the definitive randomised controlled trial (phase three, 2013-2014), the Y-PATH research team evaluated the programme's evidence base on a larger population sample (data analysis and results under review) to further demonstrate the impact of Y-PATH on PA and FMS levels.
Within the Y-PATH intervention, physical literacy promotion is ideally positioned for Irish post-primary school PE, specifically to foster students' development of the skills, knowledge, and attitudes needed to become physically literate. 53 Through this evidence-based Y-PATH programme, 8,39 students become physically active and skilled through PE, which in turn enables them to demonstrate the core components of physical literacy throughout the lifespan. A positive spiral towards physical literacy promotion for adolescent youth can be achieved within the Y-PATH intervention, specifically with the emphasis on 'quality, well-delivered PE'. 53
Making the physical literacy case for the Y-PATH programme in Ireland
In her response to 'Physical Literacy in the context of Physical Education in the Secondary School', Whitehead outlined: "The development of physical literacy depends as much, if not more, on the nature of the interaction between the teacher and the pupil, as on the content of the lesson. Above all physical education must provide a positive and rewarding experience for all young people, whatever their ability. At the heart of Physical Literacy is the motivation to take part in physical activity. This is acquired as young people make progress in movement mastery and develop self-confidence and self-esteem in this significant aspect of their human potential." 54 From the recently published evidence, it appears that positive school-based PE provides an opportunistic window for physical literacy promotion across the lifespan, 4-6,17,40 particularly for youth. In this section, Whitehead and Murdoch's conceptual map on the attainment and maintenance of lifelong physical literacy for second-level youth 55 is documented specifically in relation to the Y-PATH programme (Table 1). 56

Table 1. A comparative relationship between Whitehead and Murdoch's conceptual map and the Y-PATH programme in the attainment of physical literacy for secondary school youth.

Activity opportunities: in Y-PATH, activity opportunities outside of school are introduced through the 'Pathways to Activity' initiative (an inventory of, and links to, the extra-curricular opportunities for PA and sport within the local community).

Personnel influencing the attainment and maintenance of physical literacy in Y-PATH: teachers, parents, family, peers, coaches, club and local facility personnel.

Situations and contexts where physical literacy can be encouraged, established and maintained in Y-PATH: school physical education, extracurricular opportunities, sports/activity clubs, home, the local environment and local facilities. Y-PATH is a multi-component whole-school approach to PA promotion, with school physical education the medium for programme dissemination. The developed resources also provide direct links to the local sports clubs, home, environment and facilities.
Discussion
In the present Y-PATH evidence-based study, the authors are creating awareness of, and equally advocating for, an approach to increasing adolescent PA participation through the structured teaching of secondary-school PE in Ireland that is fundamentally rooted within components of physical literacy promotion. 54 The domains of the Canadian Assessment of Physical Literacy include 'physical activity behaviours, physical fitness, motor skills, awareness, knowledge and understanding'; 17 from the evidence-based study presented, it is clear that the Y-PATH programme is targeting the promotion of physically literate secondary-school youth through the medium of PE, as delivered by specialist teachers. Whitehead has suggested that physically literate individuals ought to possess assurance and self-confidence in parallel with their movement proficiency. 5 A recent publication from the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2015 on 'Quality Physical Education (QPE) guidelines for policy-makers' 57 documented the importance of physical literacy for healthy, able and active citizens. In this policy guideline document, 57 UNESCO outlined that QPE should comprise the following: 1. QPE should enable children and young people to become physically literate, and provision should feature from the early years through the entire school journey to secondary school education.
2. Fundamental movement skills are a vital aspect of physical literacy and, also, to the development of healthy, able, and active citizens.
3. The promotion of physical literacy should then remain a key feature of any physical education curriculum throughout primary and secondary education. 57 From the presentation of this evidence-based study, physical literacy promotion is clearly a strong platform within the delivery of Y-PATH PE. The core components of physical literacy (PA behaviours, motor skills, physical fitness, PA knowledge, awareness and understanding) have been at the heart of the Y-PATH programme since its inception in 2010. This was not due to a direct attempt by researchers to address physical literacy per se, but rather was driven by the research-identified needs evident in the Irish adolescent youth 8 for whom the intervention was developed. The attainment of physically literate youth is readily integrated and embedded within Y-PATH, specifically as learners encounter a range of age- and stage-appropriate opportunities. 57
Conclusion
Over the past decade, many public health agencies have introduced and embraced a variety of initiatives based on the desired outcome of physically literate individuals and populations. 4 While this concept is primarily aimed at young people, physical literacy programmes seek to provide the motivation, confidence, physical competence, knowledge and understanding to be active for life. 6 With this emergent shift towards physical literacy, the Y-PATH school-based PE intervention for Irish adolescent youth is well positioned to address this call to action. This Y-PATH evidence-based study has introduced the guiding principles of the PE-based intervention, specifically the sustainability of learning to be active during adolescence. It is important to note that the Y-PATH intervention consists of a multi-component approach to whole-school PA promotion, many components of which reflect the integration of physical literacy in the school environment. By actively engaging the student, teacher, parent, guardian and local community in the intervention process, the Y-PATH intervention adheres to Whitehead and Murdoch's previously published conceptual map for the attainment of lifelong physical literacy. 55 In terms of originality, the PE components of the intervention address psychosocial, HRA and FMS content in the promotion of skill competency, attitudes, self-efficacy and educational beliefs towards the importance of sustainable PA participation. The intervention is grounded within a cost-efficient and feasible approach to overall physical literacy promotion in the school context. The interpretation of the physical literacy concept continues to be refined with updated research-informed data, 3,4,6,11 and contributions to this 'new kid on the block' are set to continue. This evidence-based study set out to highlight how physical literacy is being promoted within a school-based programme for Irish adolescent youth.
Electroencephalography as a tool to predict cerebral oxygen metabolism during deep-hypothermic circulatory arrest in neonates with critical congenital heart disease
Objectives Recent research suggests that increased cerebral oxygen use during surgical intervention for neonates with congenital heart disease may play a role in the development of postoperative white matter injury. The objective of this study is to determine whether increased cerebral electrical activity correlates with greater decrease of cerebral oxygen saturation during deep hypothermic circulatory arrest. Methods Neonates with critical congenital heart disease requiring surgical intervention during the first week of life were studied. All subjects had continuous neuromonitoring with electroencephalography and an optical probe (to quantify cerebral oxygen saturation) during cardiac surgical repair that involved the use of cardiopulmonary bypass and deep hypothermic circulatory arrest. A simple linear regression was used to investigate the association between electroencephalography metrics before the deep hypothermic circulatory arrest period and the change in cerebral oxygen saturation during the deep hypothermic circulatory arrest period. Results Sixteen neonates had both neuromonitoring modalities attached during surgical repair. Cerebral oxygen saturation data from 5 subjects were excluded due to poor data quality, yielding a total sample of 11 neonates. A simple linear regression model found that the presence of electroencephalography activity at the end of cooling is positively associated with the decrease in cerebral oxygen saturation that occurs during deep hypothermic circulatory arrest (P < .05). Conclusions Electroencephalography characteristics within 5 minutes before the initiation of deep hypothermic circulatory arrest may be useful in predicting the decrease in cerebral oxygen saturation that occurs during deep hypothermic circulatory arrest. Electroencephalography may be an important tool for guiding cooling and the initiation of circulatory arrest to potentially decrease the prevalence of new white matter injury in neonates with critical congenital heart disease.
Quantitative EEG metrics before circulatory arrest predict a subsequent decrease in cerebral oxygen saturation.
CENTRAL MESSAGE
Quantitative EEG before DHCA predicts the extent of cerebral oxygen desaturation during DHCA in neonates with CHD undergoing surgical repair.
PERSPECTIVE
Neurologic injury is common among neonates undergoing surgery for CHD. Recent research indicates that cerebral oxygen saturation decreases during circulatory arrest and that these desaturation events are associated with brain injury. We show that EEG metrics before arrest can predict the extent of cerebral oxygen desaturation, providing a potentially useful tool for mitigating injury.
Infants born with complex congenital heart disease (CHD) requiring surgery during the neonatal period are at an increased risk for neurodevelopmental disabilities. 1,4,5 Neonates undergoing deep hypothermic circulatory arrest (DHCA) for aortic arch reconstruction are at an increased risk for neurologic injury and subsequent poor outcomes. In these procedures, arch perfusion must be paused during surgical correction to maintain the absence of blood in the operative field. Cooling to deep hypothermic conditions is used to reduce the risk of brain injury that occurs due to a lack of perfusion, and thus cerebral oxygen delivery, during surgery. Despite the use of deep hypothermia, neurodevelopmental disabilities in children who have undergone neonatal cardiac surgery with arch reconstruction are common, with a population median IQ of 86 (significantly lower than average) and 30% requiring special education services. 6 The development of neurologic deficits in this patient population may be partially due to the incidence of new postoperative white matter injury (WMI) that has been recently linked to ongoing cerebral oxygen metabolism (CMRO2) during DHCA. 4 Further research has found that cooling to the standard 18 °C before initiation of circulatory arrest is insufficient in this patient population to induce isoelectric electroencephalography (EEG) patterns. 7 These data indicate that clinically used efforts to reduce cerebral metabolism before circulatory arrest are likely insufficient. Improved neuromonitoring methods are needed to individualize cooling and perfusion methods to reduce cerebral oxygen demand to a sufficient level before initiation of DHCA.
EEG measures the electrical activity of the brain in real time and can provide clinically relevant insight into cerebral oxygen demand. Several metrics of electrical activity derived from EEG signals have been shown to correlate with other measures of CMRO2. 8,15 This study aims to use quantitative EEG metrics (95% spectral edge frequency [SEF95], total power [TP], and suppression ratio [SR]) as a tool to predict ongoing CMRO2 (measured with a novel optical modality, frequency-domain diffuse optical spectroscopy [FD-DOS]) during DHCA. Establishing a potential relationship between pre-DHCA EEG suppression and ongoing CMRO2 during DHCA could lead to individualized cooling for each patient, with the goal of preventing perioperative neurologic injury in neonates with CHD.
MATERIALS AND METHODS
This is a retrospective analysis of a prospectively studied cohort of full-term neonates with complex CHD who underwent surgical intervention requiring cardiopulmonary bypass (CPB) in the neonatal period at the Children's Hospital of Philadelphia. The goal of the prospective study was to evaluate risk factors for development of brain WMI. Although the prospective study cohort included patients enrolled between 2008 and 2018, this analysis was conducted on patients enrolled between August 31, 2015, and July 24, 2017. Study procedures were approved by the Institutional Review Board at the Children's Hospital of Philadelphia on August 8, 2011 (Number: 11-008191). Parents were approached for consent after birth and before the day of surgery for preoperative and postoperative magnetic resonance imaging (MRI) and perioperative EEG and optical monitoring. Participants were given the option to opt out of EEG monitoring but enroll in the larger study investigating the role of risk factors for development of brain WMI. Patients also concurrently provided informed written consent for the publication of their study data. Exclusion criteria were a birth weight less than 2 kg, a history of perinatal depression (ie, 5-minute APGAR < 5 or cord blood pH < 7.0), perinatal seizures, evidence of end-organ injury, preoperative cardiac arrest, and significant preoperative intracerebral hemorrhage (eg, grade 3 or 4 intraventricular hemorrhage).
For this study, all subjects had subcutaneous bilateral centro-parietal EEG leads (C3, C4, P3, P4) and an optical probe on the forehead placed for the duration of the surgery. FD-DOS and EEG data were captured continuously throughout the surgery. The surgeries of the 16 subjects in this cohort were performed by 3 surgeons, although the majority were performed by a single surgeon (12/16). Surgical strategy was the same for all subjects in this cohort. After heparinization, the pulmonary artery and the right atrium are cannulated and CPB is commenced. All study subjects were maintained on a combination of volatile (sevoflurane) and intravenous anesthetics (ketamine, fentanyl). pH-stat blood gas management was used during cooling and while hypothermic; alpha-stat was used during rewarming and at normothermia per institutional protocol. Systemic cooling is performed to a nasopharyngeal temperature of 18 °C for at least 15 to 20 minutes, and then circulatory arrest is initiated.
All subjects received a preoperative MRI under general anesthesia on the day of surgery and an unanesthetized postoperative MRI approximately 1 week after surgery. WMI in the periventricular white matter was identified as T1 hyperintensity and conventionally rated using the previously validated quadrant scoring system. 16 Further MRI methodology has been published. 4,17 The FD-DOS cerebral tissue oxygen saturation (ScO2) data have been published, but a brief methodology is described here. 4
Frequency-Domain Diffuse Optical Spectroscopy
FD-DOS is a method to quantify tissue oxygenation that has been validated against MRI in neonates. 18 Specifically, multi-separation FD-DOS, used in the present study, is capable of accurate quantification of ScO2 (in contrast to commercial oximeters, which use continuous-wave near-infrared spectroscopy to monitor trends in saturation). FD-DOS uses photon diffusion theory to relate the measured amplitude attenuation and phase shift of modulated and multiply scattered light detected on the tissue surface to the wavelength-dependent tissue absorption (μa) and scattering (μ's) properties. The wavelength- and time-dependent absorption coefficient, μa(λ,t), depends linearly on the oxy- ([HbO2]) and deoxyhemoglobin ([Hb]) concentrations; thus, measurements at multiple wavelengths yield these 2 parameters via linear absorption spectroscopy. From [HbO2] and [Hb], we can derive the total hemoglobin concentration (THC). The oxygen extraction fraction (OEF), a surrogate marker for CMRO2, can be calculated from the ScO2 and the arterial oxygen saturation measured clinically from an arterial blood gas sample. 18 The DOS device used in the present study (Imagent, ISS Inc, Champaign, Ill) is amplitude modulated at 110 MHz and uses source lasers at 2 wavelengths, 688 and 830 nm, with 1 detection fiber. We used 4 source-detector separations (1.5, 2.0, 2.5, and 3.0 cm along the tissue surface). The patient interface for this instrument consists of a custom-made flexible black rubber probe secured to the subject's forehead with a soft wrap.
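To make these quantities concrete, the relations below restate, in standard notation, how the chromophore concentrations, ScO2, and OEF follow from the measured absorption. This is a minimal sketch: the cerebral venous blood fraction γ is an assumed model parameter (values near 0.75 are commonly used in the optical literature) and is not reported in this section.

```latex
\begin{aligned}
\mu_a(\lambda) &= \varepsilon_{\mathrm{HbO_2}}(\lambda)\,[\mathrm{HbO_2}]
               + \varepsilon_{\mathrm{Hb}}(\lambda)\,[\mathrm{Hb}] \\[4pt]
\mathrm{THC} &= [\mathrm{HbO_2}] + [\mathrm{Hb}], \qquad
\mathrm{ScO_2} = \frac{[\mathrm{HbO_2}]}{\mathrm{THC}} \\[4pt]
\mathrm{ScO_2} &\approx \gamma\,\mathrm{SvO_2} + (1-\gamma)\,\mathrm{SaO_2}
\;\;\Longrightarrow\;\;
\mathrm{OEF} = \frac{\mathrm{SaO_2}-\mathrm{SvO_2}}{\mathrm{SaO_2}}
             = \frac{\mathrm{SaO_2}-\mathrm{ScO_2}}{\gamma\,\mathrm{SaO_2}}
\end{aligned}
```

Here ε denotes the wavelength-dependent extinction coefficient of each chromophore, and SvO2 is the cerebral venous oxygen saturation implied by the mixed arterial-venous measurement.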
Electroencephalography
Four electrodes were placed subcutaneously to create 2 recording channels (C3-P3, C4-P4) according to the international 10-20 system before surgical repair. EEG leads were attached for the entirety of the procedure (baseline, cooling, circulatory arrest, and rewarming). EEG data were collected and stored on a CNS-310 Moberg monitor and then translated for analysis in MATLAB. Postoperatively, the EEG data went through a series of postprocessing filters: a 0.5 Hz high-pass, a 30 Hz low-pass, and a 60 Hz notch filter.
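As an illustration of this filtering chain, a minimal sketch in Python with SciPy is shown below (the study itself used MATLAB). The sampling rate and filter orders are assumptions for illustration; they are not stated in the text.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256.0  # Hz; assumed sampling rate -- not reported in the text


def preprocess_eeg(x, fs=FS):
    """Apply the postprocessing chain described above:
    0.5 Hz high-pass, 30 Hz low-pass, and 60 Hz notch."""
    b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=fs)   # order 4 assumed
    b_lp, a_lp = butter(4, 30.0, btype="lowpass", fs=fs)
    b_nf, a_nf = iirnotch(60.0, Q=30.0, fs=fs)             # mains interference
    y = filtfilt(b_hp, a_hp, x)  # filtfilt gives zero-phase filtering
    y = filtfilt(b_lp, a_lp, y)
    y = filtfilt(b_nf, a_nf, y)
    return y


# Example: filter one minute of (simulated) EEG
eeg = np.random.randn(int(60 * FS))
clean = preprocess_eeg(eeg)
```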
Despite the use of postprocessing filters, there was considerable electrical artifact infiltrating the EEG signal due to clinician movement, the proximity of the optical probe on the head, and the presence of numerous electrical devices placed on the patient throughout surgery. All EEG data in this study were visually screened, first by a research assistant and then by a pediatric epileptologist (S.M.) on CNS Envision software (Moberg Research, Inc), to ensure that the data analyzed in the study did not contain electrical artifacts. The pediatric epileptologist discarded temporal periods of the EEG waveform that were nonphysiologic. For example, EEG data including a supraphysiologically high amplitude (>100 μV) or a repeating pattern of EEG signal (the same waveform repeating itself) over several seconds would be discarded. A single channel was analyzed for each patient; the channel with the least artifact was selected for analysis.
The presence of artifact in the EEG signal was pervasive. To gather consistent data from each patient, data were assessed at four 5-minute epochs: baseline (any 5-minute period before the start of cooling), end-cooling (within the last 5 minutes of cooling), end-arrest (within the last 5 minutes of DHCA), and post-rewarming (within the first 5 minutes after the end of rewarming). A minimum of 1 minute of continuous, artifact-free EEG data from each of these 4 epochs was analyzed for each subject.
We analyzed EEG data using 3 quantitative EEG metrics: SR, TP, and SEF95. SR is defined as the proportion of time that the EEG signal is suppressed under a certain threshold. In this analysis, the threshold required to be considered suppressed was set at 5 μV, as a neonatal EEG amplitude below this threshold is considered a "severe burst suppression pattern". 19 TP is the total energy of the EEG signal in nanowatts, measured by continuously summing the power of each frequency band (delta, theta, alpha, beta) over time. SEF95 is the frequency below which 95% of the power spectrum is observed. The mean SR, TP, and SEF95 were calculated for each of the 4 epochs for every subject with at least 1 minute of artifact-free data. SR was calculated using MATLAB; TP and SEF95 were calculated using CNS Envision software.
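A minimal sketch of how these three metrics can be computed from one artifact-free epoch is shown below. The windowing choices (1-second suppression windows, Welch spectra restricted to a 0.5-30 Hz band) are illustrative assumptions, not the exact MATLAB/CNS Envision implementations used in the study.

```python
import numpy as np
from scipy.signal import welch


def qeeg_metrics(x, fs=256.0, sr_threshold_uv=5.0):
    """Compute SR, TP, and SEF95 from one artifact-free epoch
    (x in microvolts; fs is an assumed sampling rate)."""
    # Suppression ratio: fraction of 1-second windows whose absolute
    # amplitude stays below the 5-uV suppression threshold.
    win = int(fs)
    n_win = len(x) // win
    suppressed = [np.all(np.abs(x[i * win:(i + 1) * win]) < sr_threshold_uv)
                  for i in range(n_win)]
    sr = float(np.mean(suppressed))

    # Power spectral density restricted to the clinical band (0.5-30 Hz).
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    band = (f >= 0.5) & (f <= 30.0)
    f, pxx = f[band], pxx[band]

    tp = np.trapz(pxx, f)                  # total power in the band

    cum = np.cumsum(pxx) / np.sum(pxx)     # normalized cumulative power
    sef95 = f[np.searchsorted(cum, 0.95)]  # frequency below which 95% lies
    return sr, tp, sef95
```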
We examined the correlation between quantitative EEG parameters (SR, TP, SEF95) at the end of cooling and the change in ScO2 (ΔScO2) during DHCA. We also sought to determine the correlation between quantitative EEG parameters at the end of cooling and the change in volume of WMI (postoperative WMI vs preoperative WMI) reflected on MRI. We hypothesized that cooling-induced EEG suppression of high-frequency brain activity and overall power would be correlated with lower CMRO2 during DHCA (as reflected by a minimal decrease in ScO2). We also hypothesized that cooling-induced EEG suppression would be correlated with a lower burden of WMI on postoperative MRI. A linear regression model was used to test these hypotheses.
We also examined the relationship between the magnitude of each EEG parameter at baseline, end-cooling, and post-rewarming and the OEF during the same epochs, hypothesizing that measures of increased cerebral activity would be associated with higher OEF (Figure 1). For instance, we compared each subject's SEF at baseline with that subject's OEF calculation over the same period. These data, along with the analogous paired data for the other 2 epochs, were plotted on a single scatter plot (comparing quantitative EEG values to OEF). To test our hypothesis, we used a mixed-effects linear regression model with patient identification as the grouping variable to minimize the presence of intra-subject variability; oxygen extraction fraction is the outcome variable, and SEF95, TP, and SR are each separately predictor variables. Only baseline, end-cooling, and post-rewarming were chosen because the OEF calculation during the DHCA period is not meaningful when blood flow is zero (conditions are not in steady state). For the linear regression and mixed-effects linear regression models, statistical tests for a slope different from zero were done using a t statistic. Summary statistics are presented using medians and interquartile ranges (IQRs).
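For illustration, the sketch below fits the same kind of random-intercept model with statsmodels on hypothetical long-format data; the column names and synthetic values are assumptions, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per subject x epoch
# (baseline, end-cooling, post-rewarming); names are illustrative only.
rows = []
for s in [f"s{i:02d}" for i in range(1, 12)]:
    intercept = rng.normal(0.25, 0.03)          # subject-specific baseline OEF
    for sef95 in (rng.uniform(5, 9),            # baseline
                  rng.uniform(0.5, 2),          # end-cooling (suppressed EEG)
                  rng.uniform(4, 8)):           # post-rewarming
        rows.append({"subject": s, "sef95": sef95,
                     "oef": intercept + 0.02 * sef95 + rng.normal(0, 0.01)})
df = pd.DataFrame(rows)

# A random intercept per subject absorbs intra-subject correlation across
# epochs, mirroring the grouping-by-patient strategy described above.
fit = smf.mixedlm("oef ~ sef95", df, groups=df["subject"]).fit()
print(fit.summary())  # the t test on the sef95 slope parallels the paper's test
```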
RESULTS
In this study, we obtained EEG data from 16 subjects undergoing CPB with the use of DHCA for cardiac repair. All 16 subjects had at least 1 minute of usable EEG data for the 4 epochs analyzed. Of these 16, 5 subjects had unusable FD-DOS data due to poor data quality, which yields 11 subjects with both FD-DOS and EEG data. Therefore, data solely describing quantitative EEG changes represent 16 subjects (Figure 2), and data describing associations between EEG and OEF or ScO2 represent 11 subjects (Figures 1 and 3). The median cooling time before DHCA was 17.05 minutes (IQR, 15.53-29.18). The median DHCA duration was 38.78 minutes (IQR, 30.10-46.17), while the median total time on CPB (excluding DHCA duration) was 39.58 minutes (IQR, 37.22-52.10). The median temperature at which end-cooling EEG activity was extracted was 19.5 °C (IQR, 18.58-20.4 °C); the lowest temperature was 17.35 °C, and the highest was 22.08 °C. At our institution, temperatures within 18 to 22 °C are adequate to initiate DHCA, and thus cooling was fully complete during the end-cooling EEG activity assessments. Patient demographic data are summarized in Table 1.
Correlation With Postoperative Injury on Magnetic Resonance Imaging
Of the 9 subjects with preoperative and postoperative MRI measurements, the median volume of new WMI on postoperative MRI was 19.38 mm³ (IQR, 5.22-86.83). Three subjects were observed to have preoperative WMI, with a median volume of 22.71 mm³ (IQR, 10.9-31.85). No significant or trending correlations were observed between quantitative EEG metrics at the end of cooling and new postoperative WMI.
DISCUSSION
This study demonstrates that the use of EEG during neonatal cardiac surgery can produce several quantitative EEG parameters that give insight into CMRO2. Specifically, SEF95, TP, and SR were found to correlate with OEF throughout the intraoperative period (Figure 1). We have demonstrated that high levels of EEG suppression (low SEF95, low TP, and high SR) are significantly associated with lower levels of cerebral oxygen extraction. Furthermore, these EEG parameters before the initiation of DHCA predicted the amount of cerebral oxygen desaturation during DHCA (Figure 3), highlighting the potential usefulness of EEG for patient-specific cooling goals. Because this study was a retrospective analysis of neonatal EEG activity and cerebral oxygenation and not a controlled experiment, we cannot exclude the role of other postoperative variables in contributing to ongoing cerebral oxygen desaturation.
EEG-guided reduction of CMRO2 has been linked to a decrease in the prevalence of adverse neurologic outcomes in adult patients undergoing cardiac surgery, but EEG neuromonitoring is not used as standard of care during neonatal cardiac surgery in many institutions. 13,14 One previous study found statistically significant decreases in neurologic sequelae, length of stay in the hospital, and estimated overall hospital expenditure in pediatric patients undergoing EEG monitoring during repair of CHD compared with patients without EEG neuromonitoring. 20 The results presented herein (Figure 3) provide evidence that EEG parameters (SEF95, TP, and SR) before the initiation of DHCA can predict the degree of ongoing CMRO2 during DHCA. Given recent research establishing that ongoing cerebral metabolism during DHCA is associated with increased postoperative WMI, 4 our data suggest that quantitative EEG metrics can provide greater insight into the intraoperative causes of WMI and serve as a potential biomarker for therapeutic intervention (ie, CPB pump flow increases, increasing FiO2, increasing cooling duration, increasing anesthetic dosages).
We sought to establish which EEG parameter would be most correlated with cerebral oxygen consumption. SEF95 and TP both correlate negatively with intraoperative anesthetic dosage and positively with nasopharyngeal temperature in adult patients, suggesting that these metrics can be useful for extrapolating cerebral functional activity. 9,10,12 The EEG SR, or burst suppression ratio, quantifies the percentage of an EEG signal that has low voltage. 21 The SR increases when the brain exhibits a characteristic EEG pattern termed burst suppression. 11 Burst suppression occurs during periods of brain inactivation such as general anesthesia, deep hypothermia, or brain injury. 10,11
Prior research consistently reveals that pediatric EEG during cardiac surgery becomes more suppressed during cooling and more active during rewarming periods of CPB. 7,22,23 Both SEF95 and TP exhibited this pattern in Figure 2, suggesting that they are sensitive to the same changes in cerebral electrical activity. SR increased after baseline but did not appear significantly different thereafter. The lack of differentiation in SR during the other epochs may indicate that SR is less sensitive to functional brain activity than SEF95 and TP. Furthermore, although all 3 quantitative EEG metrics exhibited a statistically significant correlation with FD-DOS-derived OEF values according to a mixed-effects linear regression model, SR was the least statistically significant, with a P value of .049 compared with less than .001 and .012 for SEF95 and TP, respectively (Table 2). Given that SEF95 is a significantly better predictor of OEF than TP and SR, it should be further studied as an EEG metric used to assess cerebral activity.
Several studies suggest that prolonged time under DHCA is a risk factor for various forms of neurologic injury such as postoperative seizures, new WMI, and neurodevelopmental delays. 4,24,25 There are alternative perfusion strategies to DHCA that are predominant in the current era, but DHCA is still used by a subset of surgeons, and brief periods of DHCA are still sometimes necessary even if not used for the majority of the repair.
Study Limitations
A potential confounding variable in the correlation between the end-cooling EEG metrics and the ΔScO2 during DHCA is the duration of DHCA. A linear regression analysis found a trending negative correlation between the duration of DHCA and the ΔScO2 during DHCA (P = .09, R² = .285). That is, subjects with longer DHCA times tended to have a greater decrease in cerebral oxygen saturation (ScO2) during DHCA.
A major limitation of EEG is the presence of artifact infiltrating the signal. In this study, artifact caused by electrical devices in the OR and movement in the operative field reduced the availability of usable data. To gather data, the filtered EEG signal had to be manually screened by a pediatric epileptologist (S.M.). Given the need to manually screen for artifact, only 4 discrete time periods were chosen for screening per subject. Although EEG was manually screened for artifacts by a well-trained epileptologist, in the future, automated EEG algorithms to identify and remove artifacts promise to allow clinicians to visualize quantitative EEG metrics in real time (ie, no post hoc analysis needed).
Another major limitation of this study is the small sample size. Although 16 patients had both EEG and FD-DOS attached to the head for the entire procedure, FD-DOS probe displacement reduced the number of subjects with both neuromonitoring modalities to 11. Because of the small sample size, 1 subject (highlighted in yellow in Figure 3) had a disproportionate influence on the correlation between end-cooling EEG activity and ΔScO2 during DHCA. This subject's SEF95 immediately before DHCA (SEF95 = 3.74) was more than twice the study population's mean (SEF95 mean = 1.38). If this subject were excluded from the linear regression analyses shown in Figure 3, the relationships between end-cooling SEF95, TP, and SR and ΔScO2 during DHCA would not be significant (P = .231, P = .13, P = .296, respectively). However, because the data quality was adequate, we included this subject in all analyses.
Another consequence of this study's small sample size is the lack of any correlation between end-cooling EEG metrics and the occurrence of new WMI on postoperative MRI. Although there are EEG data for 16 subjects, preoperative and postoperative MRI data are available for only 9 subjects (of whom only 7 had both FD-DOS and EEG data). Furthermore, most subjects experienced only small increases in the volume of WMI, making it difficult to perform a linear regression analysis. Our results motivate a larger study to investigate the relationship between intraoperative EEG and postoperative WMI.
CONCLUSIONS
EEG is a useful tool for intraoperative neuromonitoring and can give insight into the efficacy of cooling for decreasing CMRO2. These findings, combined with our prior findings that increased CMRO2 during DHCA is associated with increased WMI, suggest that EEG-guided cooling (by increasing the duration of cooling to ensure electrocerebral silence) may help to individualize and optimize pre-DHCA cooling to reach the cerebral metabolic nadir (Figure 4). In doing so, the data suggest a potential for decreasing the risk of hypoxic-ischemic injury in neonates undergoing complex congenital cardiac surgery with DHCA.
FIGURE 1. Scatter plots of the (A) spectral edge frequency 95% (Hz), (B) TP (nW), and (C) SR versus the OEF for every subject at baseline, end-cooling, and post-rewarming. A mixed-effects linear regression model was used to generate a thick black line plotting the model-estimated OEF versus given quantitative EEG values in all 3 subplots. The 95% CIs from the model are indicated by blue shaded error bars. The model revealed that SEF95, TP, and SR are significant predictors of OEF. The x-axis in (B) is log-scaled for better visualization given the large number of near-zero TP values. Each dot color corresponds to a different subject (3 measures per subject: baseline, end-cooling, and post-rewarming).
FIGURE 2. Three boxplots showing the change in (A) spectral edge frequency 95, (B) TP, and (C) SR over the course of surgery for 16 subjects. Significance is determined by pairwise Wilcoxon signed-rank tests between timepoints. Significance is Bonferroni-adjusted to an alpha level of 0.0083; significant differences (adjusted P value < .05) are indicated by horizontal black lines. The lower and upper borders of each box represent the lower and upper quartiles (25th and 75th percentiles). The middle horizontal line represents the median. The lower and upper whiskers represent the minimum and maximum values of non-outliers. Outliers (indicated by red plus signs) are defined as values greater than q3 + 1.5 × (q3 − q1) or less than q1 − 1.5 × (q3 − q1), corresponding to approximately ±2.7σ; q1 = first quartile, q3 = third quartile, σ = standard deviation.
FIGURE 3. End-cooling EEG versus ΔScO2 during DHCA: scatter plots indicating the value of 3 quantitative EEG metrics, (A) spectral edge frequency 95%, (B) TP, and (C) SR, at the end of cooling (before DHCA) compared with each subject's change in ScO2 during DHCA. Black lines of best fit are plotted in (A), (B), and (C). Blue shaded error bars represent the 95% CI for the results of the linear regression model. Each dot color corresponds to a different subject. Results of a simple linear regression model revealed that every quantitative EEG metric at the end of cooling is a significant predictor of ΔScO2 during DHCA. ScO2, cerebral oxygen saturation.
[Graphical abstract] Sixteen neonates with congenital heart disease (CHD) undergoing surgical repair had both bilateral centro-parietal EEG leads and an optical probe attached to the head to measure cerebral electrical activity and cerebral oxygen saturation (ScO2), respectively. Implications: EEG may be a useful tool for intraoperative neuromonitoring of neonates with CHD undergoing surgical repair. Decreases in ScO2 during DHCA have previously been shown to correlate with new postoperative white matter injury on MRI. EEG may help personalize cooling prior to DHCA to minimize the loss of ScO2 during DHCA.
TABLE 2. Results from 3 mixed-effects linear regression models in which oxygen extraction fraction is the outcome variable and SEF95, TP, and SR are each separately predictor variables. These data correspond to the lines of best fit plotted in Figure 1. SE, standard error; SEF, spectral edge frequency; TP, total power; SR, suppression ratio.
Influence of the Downwash Wind Field of Plant Protection UAV on Droplet Deposition Distribution Characteristics at Different Flight Heights
The aerial spraying of pesticides by plant protection unmanned aerial vehicles (UAV) is a process in which spray droplet deposition on target sites occurs under the influence of the downwash wind field. The downwash wind field is the most important factor affecting droplet deposition distribution characteristics in aerial spraying. To understand the mechanism of the downwash wind field, spray tests were conducted at different flight heights using a DJI UAV, and the downwash wind field in the three-dimensional directions (X-directional wind, Y-crosswind, and Z-vertical wind) was measured using a wind speed measurement system for UAV. Combined with the droplet deposition of the aerial spray, the distribution characteristics of the downwash wind field and its influence on droplet deposition were studied. The results showed obvious differences in the distribution of the downwash wind field at different flight heights. As flight height increased, the downwash wind field in the X- and Z-directions weakened, while the downwash wind field in the Y-direction showed the opposite trend. In addition, the downwash wind fields in the Y- and Z-directions were both found to have a significant influence on droplet deposition. With the increase of flight height, the change in the downwash wind field led to a gradual decrease in droplet deposition in the effective spray area, and droplets were deposited more uniformly. For the DJI T16 plant protection UAV in this test, the optimal flight height was 2.0 m, at which the downwash wind field had the best improvement effect on droplet deposition. Therefore, to make full use of the downwash wind field of the UAV, an appropriate flight height should be selected to improve the deposition of liquid pesticide droplets and achieve better control of crop diseases and pests when the UAV is used for aerial spray operations in the field. This study revealed the influence mechanism of the downwash wind field on droplet deposition in aerial spraying and proposed appropriate operation parameters from the perspective of practical operation. It is expected to provide data support for improving the quality of aerial spraying operations and the formulation of field operation specifications.
Introduction
Chemical application is an important agricultural production method for controlling plant diseases, insect pests, and weeds. However, the extensive spraying of pesticides that is currently common in China not only leads to low effective utilization of pesticides, but also leaves large amounts of pesticide residues, which seriously pollute the ecological environment and threaten food and life safety [1]. According to statistics, the use of pesticides per unit area in China is 2.5 times the world average, and the area of contaminated arable land reaches 1 × 10⁷ hm², which accounts for about one tenth of the arable area [2]. In the current crop production process, traditional manual and semi-mechanical operations are still the main methods of plant protection in China; these not only have low efficiency and high labor intensity, but also result in low utilization of pesticides [3,4]. Therefore, although the use of pesticides improves crop yields, it also pollutes food and the environment, contributing to a global food and environmental crisis.
In recent years, China's agricultural aviation industry has developed rapidly; in particular, the rapid development and application of plant protection UAV, one of the important components of the agricultural aviation industry, has attracted widespread attention [5,6]. As a new type of plant protection operation in China, the aerial spray technology of plant protection UAV has overcome the shortcomings of traditional plant protection operations. Owing to its high spraying efficiency, better atomization effect, and higher pesticide utilization, and because it solves the problem that ground machinery is difficult to use in the field during crop growth, plant protection UAV is gradually becoming the preferred method of plant protection operations [7-9]. The use of plant protection UAV to spray pesticides for the prevention and control of crop diseases and pests has become a new feature in the development of plant protection machinery.
With the widespread application of plant protection UAV, research on its low-altitude, low-volume aerial pesticide application technology has gradually become a research hotspot, and a series of explorations of its operation quality and droplet deposition distribution effects have been performed by researchers [10-13]. Qiu et al. [14] used a two-factor, three-level test method to study the relationship between the spray deposition concentration and deposition uniformity of the CD-10 single-rotor UAV and its flight altitude, flight speed, and the interaction between the two factors, and established the corresponding relational model. Qin et al. [15] studied the effect of spraying parameters on droplet deposition distribution in the maize canopy by changing the operation height and spraying width of the N-3 single-rotor UAV. Chen et al. [16] studied the effect of different spray parameters on droplet deposition distribution in the rice canopy by using different flight parameters of an HY-B-10L single-rotor electric unmanned helicopter.
It can be seen that the studies on the aerial spraying technology of plant protection UAVs were mainly focused on the influence of aerial spraying operation parameters on droplet deposition distribution, while the importance of the influence of the downwash wind field on droplet deposition distribution was ignored. In fact, the main factor that affects the droplet deposition distribution of aerial spraying is the downwash wind field below the UAV rotor, which is made up of the wind field generated by the rotating rotor and the wind field of the external environment [17]. Some researchers have used computational fluid dynamics (CFD) to simulate the wind field below the UAV rotor to analyze the droplet deposition distribution [18-21]. However, due to the interference of the external environment, the wind field distribution under simulated conditions is quite different from that under actual field conditions. Researchers have paid attention to the downwash wind field since the early application of rice pollination technology assisted by agricultural UAVs. Zhou and Li et al. [22-24] collected the downwash wind field of a single-rotor helicopter and multi-rotor UAVs in rice pollination operations, and selected the best operating parameters based on the pollen distribution, respectively. However, there are no relevant reports on the research and application of the downwash wind field of plant protection UAVs in aerial spraying technology. It is necessary to consider the influence of the downwash wind field fundamentally in the study of the characteristics of droplet deposition and drift in aerial spray operation. Therefore, in this study, a plant protection UAV was used as the research object to conduct aerial spraying tests under four different flight heights, and the downwash wind field was measured by using a wireless wind speed sensor network measurement system for UAV. Combined with the droplet deposition of aerial spray, the distribution characteristics of the downwash wind field and the influence of the downwash wind field on droplet deposition were studied, and it was expected to provide data support for improving the quality of aerial spraying operation and the formulation of field operation specifications.
Materials and Equipment
The UAV used in this spray test was a DJI T16 six-rotor electric UAV for plant protection (Shenzhen DJI Technology Co., Ltd., Shenzhen, China), as shown in Figure 1. The spraying system of the UAV is composed of hydraulic nozzles, a pressure pump, a tank, etc. The number of nozzles is eight, and two nozzles are used as a group, symmetrically distributed under the rotors on both sides of the fuselage. The UAV model has the functions of route planning and autonomous obstacle avoidance, which allow it to complete aerial spraying operations autonomously, and its main performance indicators are shown in Table 1. The nozzle type selected in this test was the Teejet XR11001VS. When the UAV is spraying, the four nozzles behind the fuselage are turned on for spraying. The maximum flow rate of the spraying system can reach 4.8 L/min, and the spraying flow rate can be adjusted by the handheld ground station. In addition, when the flight height is 1.5~3.0 m, the range of the effective spray width is 4.0~7.5 m.

As shown in Figure 2, the wireless wind speed sensor network measurement system (WWSSNMS) for UAV (Guangzhou Fumin Measurement and Control Technology Co., Ltd., Guangzhou, China) was used in this spray test, which includes impeller-type wind speed sensors (Figure 3) and wind speed sensor wireless measurement nodes. The impeller wind speed sensors measure the three-dimensional wind speed generated by the UAV flying. The measurement range of the system is 0~45 m/s, the measurement accuracy is ±3%, and the measurement resolution is 0.1 m/s. The wind speed sensor wireless measurement node is composed of a 490 MHz wireless data transmission module, a microcontroller, and a power supply module to realize the transmission of wind speed data to the laptop. The working principle is shown in Figure 2. The single sampling time of the system is 5 s, the sampling frequency is 20 Hz, and the continuous working time in normal field work is 10 h.

Test Site

The test site was the Wind Tunnel Laboratory of South China Agricultural University, Guangzhou City, Guangdong Province, China. A test site with a length and width of more than 120 × 30 m was selected, and the test site was covered with vegetation to simulate the actual field operation, where the height of the vegetation was about 0.4 m (Figure 4). In addition, a surrounding wall with a height of 3 m was built around the site to block the external wind and eliminate the influence of external crosswind on the test.
Sampling Point Layout
The layout of droplet sampling points and wind field sampling points is shown in Figure 5.
The droplet sampling line perpendicular to the flight direction of the plant protection UAV was arranged in the center of the test site. The middle point was marked as 0 m, and 13 sampling points were arranged symmetrically on each of the left and right sides. The sampling points were all separated by 0.3 m, and these points were respectively recorded as −3.9, −3.6, …, −0.3, 0, 0.3, …, 3.6, and 3.9 m. The total length of the droplet sampling line was close to 8 m, which is greater than the effective spray width provided in the UAV parameter index. Water-sensitive papers (WSPs, 26 × 76 mm, Syngenta Inc., Basel, Switzerland) were fixed horizontally on a tripod by using double-head clamps at each sampling point, and were used to measure the droplet deposition distribution. The height of these WSPs was about 0.6 m, which is close to the canopy height of the actual crop.
The wind field sampling line was parallel to the droplet sampling line and perpendicular to the UAV flight path. Six wind speed sampling points were distributed symmetrically about the UAV flight path, and the measurement points were marked as −3.6, −2.1, −0.6, 0.6, 2.1, and 3.6 m from left to right. The field layout of the WWSSNMS nodes followed the three-dimensional three-way linear gust wind field measurement method proposed by Hu et al. [25]. Each measurement node was equipped with three wind speed sensors to measure the wind speed parallel to the route (X-direction), horizontal and perpendicular to the route (Y-direction), and perpendicular to the ground (Z-direction).
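As a sketch of how such node records translate into the wind field intensities used later in the analysis, the following Python snippet (the array names and synthetic values are hypothetical; real data would come from the WWSSNMS log files) extracts the peak wind speed per direction from one 5 s, 20 Hz record:

```python
import numpy as np

# Hypothetical 5 s record sampled at 20 Hz (100 samples) for one measurement
# node; in practice these arrays would be read from the WWSSNMS log files.
rng = np.random.default_rng(0)
record = {
    "X": rng.uniform(0, 4, 100),  # parallel to the route, m/s
    "Y": rng.uniform(0, 5, 100),  # horizontal, perpendicular to the route, m/s
    "Z": rng.uniform(0, 8, 100),  # vertical, toward the ground, m/s
}

# Peak wind speed per direction, later used as the downwash wind field intensity.
peaks = {axis: float(values.max()) for axis, values in record.items()}
print(peaks)
```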
Test Plan Design
In this test, pure water was used in place of the chemical liquid for spraying. In order to eliminate the influence on the test results of wind field differences caused by different load parameters, the volume of solution in the tank was the same before each test, namely 10 L. Four UAV flight heights of 1, 1.5, 2, and 2.5 m were set during the experiments, and at least three valid repeat tests were performed for each group. To ensure the authenticity and validity of the test results, a normal flight speed of 5 m/s and the fully autonomous operation mode of the plant protection UAV were selected for this spray test. During the test, the UAV took off autonomously from 30 m outside the sampling area. After passing through the buffer area, the UAV was accelerated to the set speed and passed over the sampling line at a constant speed. The wind speed values started to be collected when the UAV was 5 m away from the wind field sampling line. The sampling duration was 5 s, and the acquisition frequency was 20 Hz.
Data Processing
Nearly 30 s after spraying, the sampling cards were collected and placed in properly labeled bags. The cards were scanned one by one with a scanner, and the DepositScan software (USDA, USA) was used to obtain the coverage density, deposition, and droplet size at different locations after scanning [26]. The average value of droplet coverage density at each sampling location was used to represent the number of droplets per unit area, and the average value of droplet deposition at each sampling location was used to represent the deposition rate per unit area. In order to characterize the uniformity of droplet deposition between these sampling points, the coefficient of variation (CV) value was used to measure the uniformity of droplet deposition. The smaller the CV value, the better the uniformity of droplet deposition.
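As an illustration of this step, a minimal Python sketch of the mean deposition and CV computation; the deposition readings are hypothetical, with real values coming from the DepositScan output:

```python
import numpy as np

# Hypothetical deposition readings (uL/cm^2) from the WSPs inside the
# effective spray width of one flight pass (real values come from DepositScan).
deposition = np.array([0.35, 0.52, 0.61, 0.44, 0.29, 0.48, 0.55])

mean_dep = deposition.mean()                  # average deposition per unit area
cv = deposition.std(ddof=1) / mean_dep * 100  # coefficient of variation, %

print(f"mean deposition: {mean_dep:.3f} uL/cm^2, CV: {cv:.1f}%")
```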
The downwash wind field data of the plant protection UAV collected by the WWSSNMS at the four different flight heights were imported into the Origin 2018 software (OriginLab, USA), and the wind speed distribution maps were drawn. To further demonstrate the influence of the downwash wind field on the droplet deposition characteristics of aerial spray, the significance of differences in the droplet deposition results was tested using one-way analysis of variance (ANOVA) with Duncan's test at a 95% confidence level in SPSS v22.0 (SPSS Inc., an IBM Company, Chicago, IL, USA). Data are expressed as the mean ± standard deviation (SD).
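For the ANOVA step, a minimal sketch of the one-way test is shown below; the grouped deposition values are hypothetical, and the Duncan post-hoc comparison performed in SPSS is not reproduced here:

```python
from scipy import stats

# Hypothetical mean deposition (uL/cm^2) of the repeated passes at each
# flight height; real values come from the WSP scans.
h10 = [0.47, 0.49, 0.48]
h15 = [0.42, 0.45, 0.44]
h20 = [0.30, 0.29, 0.30]
h25 = [0.19, 0.18, 0.20]

# One-way ANOVA across the four flight heights; a p-value below 0.05
# indicates a significant effect of flight height on deposition.
f_stat, p_value = stats.f_oneway(h10, h15, h20, h25)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```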
Downwash Wind Field Distribution
The wind speed distribution map is shown in Figure 6, and can directly reflect the downwash wind field distribution at different flight heights. As shown in Figure 6a, the downwash wind field in Z-direction had two airflow centers at the acquisition position of ±0.7 m, which were symmetrically distributed along the center route when the flight height was 1.0 m. The distance between the centers of the two airflows was about 1.5 m, which roughly corresponds to the distance between the centers of the rotors on both sides of the UAV. The peak value of wind speed in the airflow center can reach 8.1 m/s, and the wind speed directly below the fuselage center was significantly lower than that on both sides. It also showed that the downwash wind field below the fuselage was concentrated in the coverage range (−1.5~1.0 m) directly below the rotor. The main downwash wind field in Y-direction was concentrated below the fuselage, and it was mainly distributed at the lower right of the fuselage with a peak wind speed of 4.2 m/s. This may be mainly due to the low flight height of the UAV, resulting in insufficient space for the downwash wind field on both sides of the rotor to spread out fully. Different from the downwash wind field in Z-direction and Y-direction, the distribution of the downwash wind field in X-direction was relatively scattered, and did not form a large continuous airflow central area, but it covered a wider area. Moreover, there were some concentrated areas of wind speed in X-direction at the collection positions of −2.0, −0.6, and 2.0 m, where the wind speed value was 3.8 m/s. As shown in Figure 6b, the downwash wind field distribution in Y-direction and Z-direction at a flight height of 1.5 m had no significant change compared with that at a flight height of 1.0 m, while the coverage of the wind field in X-direction became larger and a large continuous airflow area was formed. As shown in Figure 6c, when the flight height reached 2.0 m, the airflow centers of the wind field in Z-direction on both sides of the fuselage had disappeared and connected to form an airflow field with a wider central area. This phenomenon is mainly due to the gradual integration of the wind field in Z-direction on both sides of the fuselage into one airflow field under mutual disturbance with the increase of flight height. The distribution range of the wind field in Y-direction became larger, mainly −1.5~3.6 m, and two strong airflow areas gradually formed at −1 and 2 m on both sides of the central route with a peak wind speed of 5.0 m/s. As shown in Figure 6d, when the flight height was 2.5 m, the downwash wind field in X-direction and Z-direction was similar to the wind field distribution at a flight height of 2.0 m. However, as the flight height increased, the vertical downward flow field in Z-direction weakened, with a peak wind speed of 5.8 m/s. The wind field in Y-direction spread to both sides of the fuselage, and obviously formed a distribution law of weak in the middle and strong on both sides, with a peak wind speed of 7.9 m/s.
In plant protection operations, the distribution law of the wind field in Y-direction will make the sprayed droplets spread to both sides of the fuselage, which can increase the effective spray width of plant protection UAV to a certain extent, but it will also force the droplets to spread out of the target area, causing the droplets to drift horizontally.
From the above visual analysis of the wind field, it can be seen that the downwash wind field distribution of the UAV was obviously different at different flight heights. The downwash wind field in X-direction and Z-direction generally showed a trend of wind speed from strong to weak and a distribution range from large to small, while the downwash wind field in Y-direction showed an opposite trend. The wind field in X-direction was derived from the winding airflow generated by the interaction between the downwash airflow at different horizontal positions and the external ambient wind, which may increase the risk of droplet drift at the edge of the farmland plot for spray operations on small-scale plots. Similarly, the wind field in Y-direction was the result of the downwash airflow spreading to both sides of the fuselage. The action direction of the wind field in Y-direction makes the droplets move towards both sides of the route, which can further aggravate the horizontal drift of droplets under crosswind conditions. In contrast, the wind field in Z-direction was the vertical downward component of the downwash airflow, which was generally considered to have a promoting effect on droplet deposition. Therefore, for spraying operations by plant protection UAV, it is necessary to avoid a strong wind field in the horizontal direction while making full use of the wind field in the downward direction to improve the deposition rate of droplets and reduce the drift risk of pesticides.

Figure 7 shows the droplet deposition distribution in the spraying test of the plant protection UAV. According to previous studies [27], the flight height of the plant protection UAV would affect the effective spray width of droplet deposition. Therefore, in order to ensure the effectiveness of comparison and analysis on the droplet deposition results, it is necessary to evaluate the effective spray width of the plant protection UAV in the different spray tests [27]. According to the evaluation method of effective spray width with droplet density, the average effective spray width was 6.0, 6.9, 7.2, and 7.8 m at the different flight heights.

The peak wind speed of the downwash wind field and the droplet deposition distribution of the UAV at the four different flight heights are shown in Table 2. It can be seen that the average droplet deposition in the effective spray width was 0.482, 0.436, 0.295, and 0.189 µL/cm² respectively, at the four different flight heights. The results showed that the droplet deposition on the sampling line gradually decreased with the increase of flight height. It is worth noting that the average droplet deposition on the sampling line decreased sharply when the flight height increased from 2 to 2.5 m. Combined with the peak wind speed of the downwash wind field in Table 2, it can be seen that the main reason was that the downwash wind field in Z-direction gradually weakened with the increase of the flight height, resulting in the decrease of droplet deposition in the effective spray area. On the other hand, the gradual increase of the downwash wind field in Y-direction also made droplets drift more. In addition, it can be seen from the droplet deposition curves that there were two peaks of droplet deposition on both sides of the flight route, which was consistent with the distribution of the downwash wind field in Z-direction. The lower the flight height, the stronger the wind field in Z-direction on both sides of the flight route, and the more obvious the peaks of the droplet deposition curve. Similarly, it can be seen that the CVs of droplet deposition in the effective spray area were 58.3%, 62.0%, 48.6%, and 42.5% at heights of 1, 1.5, 2, and 2.5 m. The higher the flight height, the better the uniformity of droplet deposition. Combined with the analysis of the downwash wind field of the UAV, this was mainly caused by the influence of the downwash wind field in Y-direction and Z-direction on the droplet deposition distribution. When the flight height was low (1.0 and 1.5 m), the downwash wind field in Y-direction was weak, and the downwash wind field in Z-direction was strong and had two strong airflow central areas below the fuselage. This distribution of the downwash wind field was not conducive to the diffusion of droplets to both sides, so the droplet deposition under the airflow central areas was significantly higher than that elsewhere, resulting in poor uniformity of droplet deposition. When the flight height of the UAV became higher (2.0 and 2.5 m), the downwash wind field in Y-direction was enhanced, while the downwash wind field in Z-direction connected to form a whole airflow field. In this case, the distribution of the downwash wind field was more uniform, so the droplets were deposited more uniformly under the action of the wind field.
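The effective spray width evaluation itself can be sketched as follows; the density profile and the droplet density threshold are assumptions for illustration, since the exact criterion of [27] is not reproduced here:

```python
import numpy as np

# Sampling positions (m) on the droplet sampling line and a synthetic
# droplet density profile (droplets/cm^2) peaking under the flight route.
positions = np.arange(-3.9, 4.0, 0.3)
density = np.maximum(0.0, 40 - 4 * np.abs(positions) ** 1.5)

# Assumed density criterion; the threshold value is hypothetical.
threshold = 15.0
idx = np.flatnonzero(density >= threshold)
width = positions[idx[-1]] - positions[idx[0]] if idx.size else 0.0
print(f"effective spray width ~ {width:.1f} m")
```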
Analysis and Discussion
In order to further reveal the influence mechanism of the downwash wind field on the droplet deposition distribution in the effective spray area, the peak wind speeds in the X, Y, and Z directions in each test were taken as the downwash wind field intensity to study the relationship between the downwash wind field and the droplet deposition distribution. The peak values of wind speed in the X, Y, and Z directions and the droplet deposition distribution were analyzed by variance analysis and regression analysis, respectively, and the analysis results are shown in Table 3. It can be seen that the p-values of the wind field in Y-direction and Z-direction on droplet deposition in the effective spray area were 0.041 and 0.001 respectively, which indicated that the wind field in Y-direction and Z-direction had a significant and an extremely significant influence on the droplet deposition, respectively. Moreover, according to the regression coefficients, the wind field in Y-direction and Z-direction had a negative and a positive correlation with droplet deposition in the effective spray area respectively, indicating that the stronger the wind field in Z-direction and the weaker the wind field in Y-direction, the more droplet deposition in the effective spray area. For the uniformity of droplet deposition, the p-value of the wind field in Z-direction on the uniformity of droplet deposition in the effective spray area was 0.036, which indicated that the wind field in Z-direction had a significant influence on the uniformity of droplet deposition. Similarly, the wind field in Z-direction was positively correlated with the CV of deposition: the stronger the wind field in Z-direction, the worse the uniformity of droplet deposition, which was consistent with the above results [17].
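A minimal sketch of such a regression between the Z-direction peak wind speed and droplet deposition (all values are hypothetical; the regression models of Table 3 are not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical per-pass peak Z-direction wind speed (m/s) and mean droplet
# deposition (uL/cm^2); real values come from the WWSSNMS and the WSP scans.
z_peak = np.array([5.9, 6.3, 6.8, 7.1, 7.6, 8.0])
deposition = np.array([0.24, 0.26, 0.29, 0.30, 0.33, 0.35])

# Simple linear regression of deposition against the Z-direction intensity.
res = stats.linregress(z_peak, deposition)
print(f"slope = {res.slope:.3f}  (positive -> Z wind promotes deposition)")
print(f"R^2 = {res.rvalue**2:.4f}, p = {res.pvalue:.4f}")
```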
According to the analysis of the test results, an appropriate flight height should be set when the UAV is used for aerial spraying in the field, neither too high nor too low. When the flight height is too high, the downwash wind field above the crop canopy will be weakened. The weakened wind field in the vertical direction would cause a sharp decrease in the amount of droplets deposited on the crop canopy, and the increase of the horizontal wind field will aggravate droplet drift to the non-target area. When the flight height is too low, the strong downwash wind field in the vertical direction above the crop canopy will result in poor uniformity of droplet deposition, which cannot achieve the ideal control effect, leading for instance to re-spray and missed spray [16,28]. Therefore, the downwash wind field in the vertical (Z) direction is regarded as the most important analysis factor affecting droplet deposition. It can be seen from the regression model that the regression coefficient of the model increased with the increase of flight height, indicating that the promoting effect of the wind field in Z-direction on droplet deposition also increased with the increase of flight height. However, when the flight height increased to 2.5 m, the regression coefficient of the model decreased sharply (Figure 8), which indicated that the promoting effect of the wind field in Z-direction on droplet deposition was weakened. At the same time, when the flight height was 2.0 m, the fitting degree of the regression model between the wind field in Z-direction and the droplet deposition was the best, and its R² reached 0.9020. Therefore, according to the test results, the optimal flight height of the DJI T16 plant protection UAV was 2.0 m without the influence of external environmental conditions.
Conclusions
In this study, the DJI T16 plant protection UAV was used as the research object to conduct aerial spraying tests under four different flight height parameters, and the downwash wind field was measured by using a wireless wind speed sensor network measurement system for UAV. Combined with the droplet deposition of aerial spray, the distribution characteristics of the downwash wind field and the influence of the downwash wind field on droplet deposition were studied. The conclusions are as follows: (1) There were obvious differences in the distribution of the downwash wind field for the plant protection UAV at different flight heights. As the flight height increased, the downwash wind field in X-direction and Z-direction showed a trend from strong to weak, and the distribution range decreased from large to small, while the downwash wind field in Y-direction showed an opposite trend.
(2) It was found that the downwash wind field in Y-direction and Z-direction both had a significant influence on the characteristics of droplet deposition. With the increase of flight height, the intensity of the downwash wind field in Y-direction and Z-direction gradually increased and decreased, respectively, and the change of the downwash wind field led to a gradual decrease of droplet deposition and more uniform deposition in the effective spray area.
(3) In order to make full use of the downwash wind field, the appropriate flight height should be selected when UAV is used for aerial spray operations in the field. For the DJI T16 plant protection UAV in this test, the optimal flight height was 2.0 m, and the downwash wind field had a better promotion effect on droplet deposition.
NUMERICAL SIMULATION OF THE FRACTIONAL LANGEVIN EQUATION
The fractional calculus has been studied for more than three hundred years. For a long time the fractional calculus was studied only in the pure mathematical field. In history, Euler, Riemann, Liouville, Grünwald, Letnikov, Leibniz, L'Hôpital, et al., contributed to the fractional calculus [1-3]. It did not attract more attention because no applications were found. However, in recent decades, the fractional calculus has been widely used in many fields such as chaotic dynamics, viscoelasticity, acoustics, physical chemistry, electromagnetics, signal processing, earthquake prediction, etc. The study of fractional differential equations shifts from pure theory to real applications [2], [4-15]. Especially, the stochastic fractional differential equations have attracted increasing interest due to their potential applications. In effect, human evolutions are deterministic in local situations and in short terms but are random globally and in the long run. The stochastic fractional differential equation can reflect both characters, stochasticity and globality. Hence, the stochastic fractional differential equation is possibly the better choice for characterizing human evolutions. In this paper we will study the fractional Langevin equation where the fractional derivative is in the Caputo sense. In 1908 the French physicist Langevin introduced the concept of the equation of motion with a random variable.
Introduction
The fractional calculus has been studied for more than three hundred years. For a long time the fractional calculus was studied only in the pure mathematical field. In history, Euler, Riemann, Liouville, Grünwald, Letnikov, Leibniz, L'Hôpital, et al., contributed to the fractional calculus [1-3]. It did not attract more attention because no applications were found. However, in recent decades, the fractional calculus has been widely used in many fields such as chaotic dynamics, viscoelasticity, acoustics, physical chemistry, electromagnetics, signal processing, earthquake prediction, etc. The study of fractional differential equations has shifted from pure theory to real applications [2], [4-15]. Especially, the stochastic fractional differential equations have attracted increasing interest due to their potential applications. In effect, human evolutions are deterministic in local situations and in short terms but are random globally and in the long run. The stochastic fractional differential equation can reflect both characters, stochasticity and globality. Hence, the stochastic fractional differential equation is possibly the better choice for characterizing human evolutions.
In this paper we will study the fractional Langevin equation where the fractional derivative is in the Caputo sense. In 1908 the French physicist Langevin introduced the concept of the equation of motion with a random variable, which reads as:

$$m\,\ddot{x}(t) = -g\,\dot{x}(t) + F(x) + \xi(t) \qquad (1)$$

where m is the mass of the particle, g the coefficient of viscosity, F(x) the external force, and ξ(t) the random force. The Langevin equation is always regarded as the first stochastic differential equation. Although the classical Langevin equation has a fundamental role in many areas such as physics, chemistry, signal processing, and the financial market, there are still some dynamics, for example the anomalous diffusion (sub-diffusion and super-diffusion), power-law phenomena, long-tail character, long-range interaction, etc., which cannot be described by the classical Langevin equation. Therefore, the generalized Langevin equations have been introduced to model the above behaviors [16-23].
Among the generalized Langevin equations, the fractional version is often used, which is of the following form:

$$\ddot{x}(t) + g\,{}^{C}_{0}D^{\alpha}_{t}x(t) = F(x) + \xi(t) \qquad (2)$$

where 0 < α < 1, g is a constant, F(x) is an external force field, and ξ(t) is a random force with zero mean; the Caputo fractional derivative ${}^{C}_{0}D^{\alpha}_{t}x(t)$ will be defined in the following section.
Preliminaries
First of all, we give some basic definitions. In general, fractional calculus includes both fractional integration and fractional differentiation. The fractional integral mainly means the Riemann-Liouville integral. For fractional differentiation, however, there exist more than six kinds of fractional derivatives. Among them, the Riemann-Liouville derivative and the Caputo derivative are the most utilized. In the following, we only introduce the fractional integral, the Riemann-Liouville derivative, and the Caputo derivative [2,3,8-10].
Definition 1. The fractional integral of a function f(x) with order α > 0 is defined by:

$$I^{\alpha}f(x) = \frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}f(t)\,\mathrm{d}t$$

Definition 2. The Riemann-Liouville fractional derivative of a function f(x) with order α > 0 is defined by:

$$D^{\alpha}f(x) = \frac{1}{\Gamma(n-\alpha)}\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\int_{0}^{x}(x-t)^{n-\alpha-1}f(t)\,\mathrm{d}t, \qquad n-1 < \alpha \le n$$

Definition 3. The Caputo fractional derivative of a function f(x) with order α > 0 is defined by:

$${}^{C}D^{\alpha}f(x) = \frac{1}{\Gamma(n-\alpha)}\int_{0}^{x}(x-t)^{n-\alpha-1}f^{(n)}(t)\,\mathrm{d}t, \qquad n-1 < \alpha \le n$$

The above two fractional derivatives are not equivalent.
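For readers who want to experiment numerically with the Caputo derivative, the widely used L1 discretization can serve as a starting point; the sketch below is a generic scheme and not the paper's own rectangle-rule coefficients, which are not reproduced in the text:

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative (0 < alpha < 1) of a
    uniformly sampled function, evaluated at the last grid point t_n."""
    n = len(f_vals) - 1
    if n == 0:
        return 0.0
    j = np.arange(n)                                         # j = 0, ..., n-1
    w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)  # L1 weights
    df = np.diff(f_vals)                                     # f_{j+1} - f_j
    return dt ** (-alpha) / gamma(2 - alpha) * np.dot(w, df)

# Sanity check against the known result D^alpha t = t^(1-alpha) / Gamma(2-alpha):
alpha, dt = 0.5, 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
print(caputo_l1(t, dt, alpha), 1.0 / gamma(2 - alpha))
```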
Algorithm for the fractional Langevin equation
In this paper we will give the numerical simulation for the fractional Langevin equation (2).
Without external force
In the following, we consider the force-free type, i.e., F(x) = 0, and rewrite the above equation accordingly; integrating both sides of the equation yields its integral form, eq. (7). The first and the second integral terms in eq. (7) are approximated by the rectangle formula, from which the coefficients of the discrete scheme for both integral parts of the r.h.s. are calculated. For the l.h.s. of eq. (7), we use the finite difference method to approximate it. Therefore we get the discrete formula, where W(t) is a Wiener process with mean zero and dW(t) is in the Ito sense, i.e., an independent random increment. With the above algorithm we simulate its dynamic behaviors. The following figures give the displacement and the mean square displacement under different parameters.
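Since the display equations and coefficients of the paper's own scheme are not reproduced here, the following Python sketch is only a generic stand-in under stated assumptions: the force-free equation with an L1 approximation of the Caputo damping term and an Euler-Maruyama step for the Wiener increment:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(42)

alpha, g = 0.5, 1.0        # fractional order and damping coefficient
dt, n_steps = 1e-2, 2000   # step size and number of time steps

x = np.zeros(n_steps + 1)  # displacement samples x(t_k)
v = 0.0                    # velocity dx/dt
c = dt ** (-alpha) / gamma(2 - alpha)  # L1 prefactor

for n in range(n_steps):
    # L1 approximation of the Caputo derivative of x at t_n (zero at t_0)
    if n > 0:
        j = np.arange(n)
        w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
        d_alpha_x = c * np.dot(w, np.diff(x[: n + 1]))
    else:
        d_alpha_x = 0.0
    dW = rng.normal(0.0, np.sqrt(dt))  # Ito increment of the Wiener process
    v += -g * d_alpha_x * dt + dW      # Euler-Maruyama step for the velocity
    x[n + 1] = x[n] + v * dt           # displacement update

# Crude single-trajectory readout; a proper mean square displacement
# averages x(t)^2 over many independent realizations.
print(f"final displacement: {x[-1]:.4f}")
```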
With constant external force

For the case of a constant external force, the algorithm is almost the same as that for the fractional Langevin equation without external force.
Conclusions
In this paper we study the generalized Langevin equation with a memory kernel in the damping term, i.e., the fractional Langevin equation in the Caputo sense. We study two cases of the fractional Langevin equation (i.e., without force and with a constant external force). Finally, we give an algorithm and numerical experiments with different parameters. From the numerical simulations, we find that the displacement is bigger if the coefficient of the damping term is relatively smaller, and vice versa.
Analysis of Community Satisfaction Level Against the Ministry of Health’s Infection Emerging Websites Using Webqual 4.0
As one form of communication and information media, websites have a very big role in representing a government institution that interacts with the public. A website is designed in such a way as to meet certain service quality standards set by the developer. However, good service quality must also consider perceptions from the point of view of its users, in this case the wider community. The purpose of this study is to determine the level of community satisfaction with the quality of the website as an indicator of the government's success in conveying information to the public. The method used in this study is to distribute questionnaires with a Webqual approach, which comprises three categories, specifically the Usability dimension, the Information Quality dimension, and the Service Interaction dimension. The data obtained were then analyzed using the Structural Equation Modeling (SEM) technique with the SmartPLS 3 software. Based on a survey of 104 respondents, it was found that in general users were satisfied with the services on the website.
Introduction
Information technology and communication technology (ITCT) is now widely used by many countries. In government organizations, ITCT is utilized to improve the transparency of the administration system. Along with the development of ITCT, the need for ITCT in all walks of life has also increased, and public acceptance of the Internet has produced several implications for the public sector [1].
During the Covid-19 pandemic, the need for data on the number of people affected by the virus was even greater. Not only the government but also the people need information on the extent and impact of the spread of Covid-19. The government has made a policy to utilize ITCT in the field of integrated government, which is contained in Presidential Instruction (INPRES) No. 3 of 2003 concerning National Policies and Strategies for the Development of E-Government [2]. As the government agency dealing with health issues, the Ministry of Health applies ITCT to manage data and information related to emerging infections, which is manifested in a website named Infection Emerging.
Infection Emerging is managed by the Ministry of Health to provide the best service in the field of information technology as an effort to provide satisfaction to its users, namely the community. It provides reliable information about global emerging diseases, emerging diseases in the Southeast Asia region, the number of infected countries, the number of countries with local transmission, the numbers of confirmed, death, cured, and in-treatment cases, the number of regencies or cities affected, the Indonesian regions with local transmission, and the gender and age of patients who tested positive. The government hopes that this website gives valuable information to decision makers, the media, and anyone who needs it.
In order to make the Infection Emerging website the best in accordance with what is desired by its visitors, it is necessary to know the extent to which the website can be accepted by the public, by holding an assessment to measure whether the available website is accepted by society well. The measurement serves to enhance the standard of service to the community [3]. This mechanism permits citizens to assume an active role in discovering, distinguishing, and shaping the public services that need to be provided [4].
From the explanation above, it is necessary to conduct a study to measure the service quality of the Infection Emerging website by referring to the Webqual indicators, analyzing the relationship between the aspects of Usability, Information Quality, and Service Interaction and the community satisfaction level with the Infection Emerging website, with the help of the SmartPLS 3 software.
Method
Quality of service plays a very important role in determining success for an organization. Consumer perception is the comparison between expectations of the quality of a service and the quality of the service received by consumers. Quality of service represents the relationship between the customer and the service provider and between the level of service perception and the service provided. Service quality is conceptualized as multidimensional. The concept and measurement of service quality must be based on user perception and be context specific, hierarchical, and multidimensional. Service quality includes two dimensions, namely technical and functional [5].
Webqual was first created in 2000, and over the years the scale was iteratively developed by Barnes and Vidgen until Webqual 4.0 in 2002 [6]. The scale used in Webqual is based on a scale developed by Parasuraman, Zeithaml, and Berry in 1988 called Servqual, which has been widely accepted. There are three main dimensions on the Webqual scale, namely the Usability scale, the Information Quality scale, and the Service Interaction scale [7]. Webqual facilitates marketers in transforming a qualitative assessment into a quantitative measure. The only focus of the Webqual scale is the Internet user experience [8].
The Likert scale was designed in 1932 to measure 'attitudes' scientifically and in a validated way. An attitude can be defined as a way of behaving or reacting preferentially in specific circumstances, rooted in relatively long-standing beliefs and ideas (around an object, subject, or concept) obtained through social interaction. A Likert scale is defined as a collection of statements (items) that are asked about a real or hypothetical situation under study. The level of acceptance of a statement (item) is expressed on a metric scale from fully disagree to fully agree. All statements are combined to reveal the specific dimensions of attitude towards the problem and hence, of course, are interrelated with one another [9].
PLS is understood as Structural Equation Modeling (SEM) with a component- or variance-based equation model. PLS was first introduced in general by Herman Wold in 1974. The PLS approach shifts the covariance-based SEM approach into a variance-based one. SEM generally tests causality or theoretical models, while PLS is a predictive model. The PLS analysis sub-models consist of the structural model, often called the inner model, and the measurement model, called the outer model. The structural model or inner model shows the strength of the estimates between constructs, whereas the measurement model or outer model shows how the indicators represent the latent variables to be measured. The latent variables formed by the PLS indicators can be either reflexive or formative [10].
This study was conducted to measure the quality of the Infection Emerging website (infeksiemerging.kemkes.go.id), which belongs to the Ministry of Health of the Republic of Indonesia, from the perception of website users using quantitative descriptive research. Survey techniques were used to get primary data by distributing questionnaires. The sample or respondents were determined using random sampling techniques.
The questionnaire-based survey was used as the instrument at the research stage and was distributed to respondents, in this case the community, which became the sample of users of the Infection Emerging website. A WebQual 4.0-based questionnaire was used according to the established standards. The use of WebQual 4.0 as a theory to determine community satisfaction has often been done, for example for educational sites [3] and also for online banking [6]. However, this theory has not been used much for the assessment of community satisfaction with websites relating to the handling of Covid-19. The contribution of this research is therefore that it can provide recommendations on the satisfaction indicators of users of Covid-19 websites, specifically the Infection Emerging website of the Ministry of Health. The total number of questions used is 20, with 8 questions for the usability dimension, 5 questions for the information quality dimension, and 7 questions for the service interaction dimension. The assessment for each question uses a Likert scale consisting of 5 answer choices to assess perceptions of website quality, as shown in Table 2. The respondents of this research were Indonesian citizens who had used the Infection Emerging website at least once. Respondents who had never visited the Infection Emerging website were directed to visit it before filling out the questionnaire, as shown in Figure 2. Respondents were drawn from the population using random sampling. Only 104 responses were categorized as appropriate for the analysis step, as shown in Table 3. Arguably, the respondents are people who are concerned about what is going on, moreover about a pandemic that has infected millions and created a high rate of casualties. Therefore, the responses are valid and the data can be accounted for in this study.
Measurement and Structural Model Testing
At this stage, three types of testing were carried out, namely convergent validity, discriminant validity, and reliability testing. This testing examines the extent of the link between the latent variables and each indicator. The convergent validity value is taken from the loading factor of every indicator of every latent variable. In order to be processed further, the expected loading factor value is 0.7. In Figure 3, the research model and output are shown after the questionnaire results are processed using the PLS Algorithm in the SmartPLS application. Based on user perception, each indicator has an outer loading value of more than 0.7. This means it has a positive impact on users of the Infection Emerging website (infeksiemerging.kemkes.go.id). A construct is valid and reliable if it has an AVE value above 0.50, a composite reliability above 0.70, and a Cronbach's alpha value above 0.70 [12]. Table 4 shows that these requirements are met, so it can be stated that the research model has a positive and significant result for society. The T-statistic values are shown in Table 5. A hypothesis can be accepted if the T-statistic value is greater than 1.64; if the opposite occurs, the hypothesis is not accepted, where the α value used is 5 percent. As shown in Table 5, the tests conducted using SmartPLS give T-statistic values greater than 1.64, so Hypothesis 1 (H1), Hypothesis 2 (H2), and Hypothesis 3 (H3) are accepted. The three variables, specifically the Usability variable, the Information Quality variable, and the Service Interaction variable, have a positive and significant effect on the user satisfaction variable.
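As an illustration of the validity criteria above, a small Python sketch computing AVE and composite reliability from a set of outer loadings; the loading values are hypothetical, with real ones coming from the SmartPLS output:

```python
import numpy as np

# Hypothetical outer loadings for one latent variable (e.g., Usability);
# real values come from the PLS Algorithm output in SmartPLS.
loadings = np.array([0.78, 0.82, 0.75, 0.80, 0.77, 0.84, 0.79, 0.81])

ave = np.mean(loadings ** 2)  # average variance extracted
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

print(f"AVE = {ave:.3f} (valid if > 0.50)")
print(f"composite reliability = {cr:.3f} (reliable if > 0.70)")
```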
Conclusion
Based on the data analysis results of the study, it can be concluded that the variables Usability, Information Quality, and Service Interaction have an effect on community satisfaction among users of the Infection Emerging website belonging to the Ministry of Health. However, the service provider should still improve the quality of the information, services, and interactions for the broader community as its users.
Occurrence of the mcr-1 Colistin Resistance Gene and other Clinically Relevant Antibiotic Resistance Genes in Microbial Populations at Different Municipal Wastewater Treatment Plants in Germany
Seven wastewater treatment plants (WWTPs) with different population equivalents and catchment areas were screened for the prevalence of the colistin resistance gene mcr-1, which mediates resistance against the last resort antibiotic polymyxin E. The abundance of the plasmid-associated mcr-1 gene in total microbial populations during water treatment processes was quantitatively analyzed by qPCR analyses. The presence of the colistin resistance gene was documented for all of the influent wastewater samples of the seven WWTPs. In some cases the mcr-1 resistance gene was also detected in effluent samples of the WWTPs after conventional treatment, reaching the aquatic environment. In addition to the occurrence of the mcr-1 gene, the CTX-M-32, blaTEM, CTX-M, tetM, CMY-2, and ermB genes coding for clinically relevant antibiotic resistances were quantified in higher abundances in all WWTP effluents. In parallel, the abundances of Acinetobacter baumannii, Klebsiella pneumoniae, and Escherichia coli were quantified via qPCR using specific taxonomic gene markers, which were detected in all influent and effluent wastewaters in significant densities. Hence, opportunistic pathogens and clinically relevant antibiotic resistance genes in the wastewaters of the analyzed WWTPs bear a risk of dissemination to the aquatic environment. Since many of the antibiotic resistance genes are associated with mobile genetic elements, horizontal gene transfer during wastewater treatment cannot be excluded.
INTRODUCTION
Antibiotic-resistant intestinal bacteria enter the environment through sewage water treatment plants. Some survive or even multiply during the wastewater treatment and are capable of transferring genes to other microorganisms (Davies et al., 2006; Rizzo et al., 2013; Berendonk et al., 2015). There is a potential risk of people getting colonized with these bacteria, for example via contact with wastewater-contaminated surface water (Ferrer et al., 2012). The resistant bacteria can cause infections which are difficult to treat because of the insensitivity to antibiotics. It is therefore in the interest of our society to quickly determine whether and how resistant bacteria spread via sewage water, and how this could be prevented.
The worldwide increase of antibiotic-resistant bacteria is considered a major challenge by the World Health Organization (WHO, 2014), the US Centers for Disease Control and Prevention, and the German Antibiotic Resistance Strategy (DART 2020; The Federal Ministry of Health, 2012). To minimize the leaking of antibiotics or antibiotic-resistant bacteria into the environment, the use of antibiotics in human and veterinary medicine needs to be reduced. Indications of the importance of sewage water and wastewater treatment plants (WWTPs) for the spread of antibiotic-resistant pathogens have already been described (Rizzo et al., 2013; Alexander et al., 2015, 2016). A particular danger is represented by pathogens with a resistance against last resort antibiotics. It can be very difficult to cure patients that suffer from an infection with this kind of resistant bacteria. These serious concerns have been catalyzed by the rapid increase in carbapenemase-producing Enterobacteriaceae expressing enzymes such as KPC-2 (Klebsiella pneumoniae carbapenemase-2) and NDM-1 (New Delhi metallo-beta-lactamase; Kumarasamy et al., 2010; Munoz-Price et al., 2013). The global increase in such carbapenemase-producing Enterobacteriaceae has resulted in increased use of colistin with the risk of emerging resistance. The development of colistin resistance is also directly linked with the agricultural use of human antibiotics, as some countries have actively used colistin in animal production (Hao et al., 2014).
Colistin belongs to the family of polymyxins, cationic polypeptides with broad-spectrum activity against Gram-negative bacteria, including most species of the family Enterobacteriaceae. The two polymyxins currently in clinical use are polymyxin B and polymyxin E (colistin), which differ only by one amino acid from each other and have comparable biological activity. The mechanism of resistance to polymyxins is a modification of lipid A, resulting in a reduction of polymyxin affinity. The polymyxin resistance can be traced back to the plasmid-mediated mcr-1 gene. The plasmid carrying mcr-1 has already been characterized and can be mobilized by conjugation. The prevalence of mcr-1 in E. coli from livestock and food in Germany is documented by Irrgang et al. (2016), with the highest prevalence of mcr-1-positive isolates of Enterobacteriaceae from poultry food chains. Another report documented the presence of mcr-1-carrying plasmids in pig slurry in Estonia (Brauer et al., 2016). Furthermore, Ovejero et al. (2017) were able to isolate 30 mcr-1-positive isolates from the sewage of two WWTPs in Spain. Their analysis suggested that mcr-1 in Spain has, besides E. coli, successfully transferred to K. pneumoniae. Bernasconi et al. (2016) also detected the mcr-1 gene in stool samples of travelers returning from India. This indicates that humans can get colonized by mcr-1-resistant bacteria which are acquired through the food chain or from other environmental sources.
We report the prevalence of the plasmid-mediated colistin resistance gene mcr-1 together with other clinically relevant antibiotic resistance genes during wastewater treatment at municipal WWTPs in Germany. Quantitative PCR (qPCR) was used for the detection of the antibiotic resistance genes in native wastewater populations, aiming at a microbiological risk characterization concerning the dissemination of colistin and other resistance genes into the adjacent aquatic environment.
Whereas WWTP-1 and WWTP-2 treat wastewater from urban areas including clinical, household, and industrial wastewaters, the wastewaters of the other four WWTPs are mainly influenced by agricultural catchment areas including animal and food farming. WWTP-7 treats only municipal wastewater, without influence from agriculture or industry.
Sampling points were located at the influents of the WWTPs after mechanical separation and at the effluent sites after the sedimentation tanks, before the purified wastewater is released into the receiving bodies.
Sample Preparation and DNA Extraction
Volumes of about 100 mL of the 24 h-composite WWTP influent samples and about 300 mL of the 24 h-composite WWTP effluent samples were used for DNA extraction. Wastewater samples were filtered using polycarbonate membranes with a diameter of 47 mm and a pore size of 0.2 µm (Whatman Nuclepore Track-Etched Membranes, Sigma-Aldrich, Munich, Germany). DNA extraction was performed using the Fast DNA spin kit for soil (MP Biomedical, Illkirch, France) utilizing the lysing matrix E according to the manufacturer's protocol for wastewater. The quantities and purities of the DNA extracts were measured by means of Qubit fluorometric quantitation (Thermo Scientific, Waltham, USA).
The yield of DNA obtained from the influent water samples ranged from 71 to 244 µg per mL (n = 14) and from 15 to 143 µg per mL (n = 14) for the effluent water samples. To determine the abundances of 7 clinically relevant antibiotic resistance genes including the mcr-1 colistin resistance gene, the 16S rRNA gene, and the gene markers for selected Enterobacteriaceae, qPCR analyses were performed.
Primer Design, qPCR Protocol, and Evaluation
All primers and references are listed in Table 2, which also contains the accuracy (R²) and efficiency (%) values for each qPCR detection system. More specifically, the Escherichia coli NRZ-14408 reference strain carrying the mcr-1 colistin resistance gene was kindly provided by the National Reference Center in Bochum, Germany. For mcr-1 primer design, the NCBI Primer BLAST software was used. For the quantification of the mcr-1 gene in environmental water samples, a calibration curve was generated with this colistin-resistant reference strain (see Supplementary Material). An R² value of 0.999 and an efficiency of 97.6% were determined, indicating the high specificity of the mcr-1 detection system. The antibiotic resistance genes blaTEM, CTX-M, CTX-M-32, and CMY-2 are directed against ß-lactam antibiotics, whereas the tetM resistance gene codes for resistance against tetracycline and the ermB gene mediates resistance against erythromycin. All primers, reference strains, and quality values of each detection system are listed in Table 2. In addition, calibration and melting curves are given in the Supplementary Material. The abundances of these ARGs were quantified in all WWTP effluent wastewater samples via a qPCR approach using different reference strains carrying the mentioned resistance genes.
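As a worked illustration of how R² and efficiency values such as those reported in Table 2 are typically derived from a serial-dilution standard curve, the following Python sketch fits Ct against log10 copy number; the dilution series and Ct values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative 10-fold serial dilution of the reference-strain DNA (copies per reaction)
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
# Hypothetical measured quantification cycles (Ct) for each dilution
ct = np.array([14.2, 17.6, 21.0, 24.4, 27.8, 31.2])

# Linear regression of Ct against log10(copies): Ct = slope * log10(N0) + intercept
slope, intercept, r_value, _, _ = stats.linregress(np.log10(copies), ct)

# Amplification efficiency: E = 10^(-1/slope) - 1; perfect doubling gives slope -3.32 and E = 100%
efficiency = (10 ** (-1.0 / slope) - 1.0) * 100.0
print(f"slope = {slope:.2f}, R^2 = {r_value**2:.3f}, efficiency = {efficiency:.1f}%")
```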
The yccT gene present in E. coli DSM 1103 was used as a taxonomic gene marker, as was the gltA gene from K. pneumoniae DSM 30104 (Clifford et al., 2012). Total DNA of pure cultures was extracted with the DNA extraction kit for soil (MP Biomedical, Illkirch, France) and subsequently used for the generation of target-specific calibration curves. 16S rRNA primers were used to quantify eubacterial rDNA in the water samples for normalization. Here, an already cloned fragment of the eubacterial 16S rRNA gene in the pNORM plasmid of E. coli DH5α was used (Stalder et al., 2014). Plasmid DNA was extracted with the GeneJet Plasmid Miniprep Kit (Thermo Scientific, Waltham, USA) and used for the generation of the calibration curve, which in turn was used for the calculation of the 16S rRNA gene copy number in water samples.
The mcr-1, ermB, and tetM resistance genes, the four ß-lactamase genes (CTX-M, CTX-M-32, CMY-2, blaTEM), the ribosomal 16S rRNA gene for Eubacteria, and the specific taxonomic gene markers of E. coli, K. pneumoniae, and Acinetobacter baumannii were quantified in a SYBR Green qPCR approach. Reactions were run in volumes of 20 µL, containing 10 µL Maxima SYBR Green/ROX qPCR Master Mix (2x) (Thermo Scientific), 8.2 µL of nuclease-free water (Ambion, Life Technologies, Karlsbad, Germany), 0.4 µL of the respective primers (stock concentration 10 µM, Table 2), and 1 µL of template DNA (20 ng µL⁻¹). The qPCR protocol comprised 10 min at 95 °C for activation of the DNA polymerase followed by 40 cycles of 15 s at 95 °C and 1 min at the appropriate temperature for primer annealing and elongation (see Supplementary Material). Each water sample was analyzed in technical triplicates. To determine the specificity of the amplification, a melting curve was recorded by raising the temperature from 60 to 95 °C (1 °C every 10 s). Data analysis was performed using the Bio-Rad CFX Manager software.
Cell Equivalent Calculation
To calculate the gene copy numbers of the respective antibiotic resistance genes, the 16S rRNA gene, and the taxon-specific gene markers, reference strains carrying the genetic targets of interest were used. Given the known genome sizes of the reference bacteria, it is possible to calculate the cell copies. The following equation was used, with an average molecular weight of about 650 g/mol for one base pair, Avogadro's number of 6.022 × 10²³ molecules/mol, and a converting factor of 10⁹ ng/g.
$$\text{number of copies} = \frac{\text{amount of DNA [ng]} \times 6.022 \times 10^{23}}{\text{average size of genome [bp]} \times 10^{9} \times 650} \quad (1)$$

Serial dilutions of the DNA stocks were prepared and used for the calculation of the calibration curves, starting with 500 ng DNA (see Supplementary Material). The coefficient of determination of all standard curves was above 0.994 in all experiments (Table 2), indicating minimal variability within the linear data range.
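Equation (1) can be applied directly; the following minimal Python sketch implements it, with the 4.6 Mbp genome size as an illustrative E. coli-scale value rather than a figure taken from the study.

```python
# Minimal sketch of Eq. (1): gene copies in a DNA standard of known mass.
AVOGADRO = 6.022e23   # molecules per mol
BP_WEIGHT = 650.0     # average molecular weight of one base pair (g/mol)
NG_PER_G = 1e9        # converting factor, ng per g

def copy_number(dna_ng: float, genome_size_bp: float) -> float:
    """Number of genome copies in `dna_ng` nanograms of DNA (Eq. 1)."""
    return (dna_ng * AVOGADRO) / (genome_size_bp * NG_PER_G * BP_WEIGHT)

# Example: 500 ng of an E. coli-sized genome (~4.6 Mbp, illustrative value)
print(f"{copy_number(500, 4.6e6):.3e} copies")
```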
Using these curves, the measured Ct values of the mcr-1 gene, the 16S ribosomal RNA gene, or the taxon-specific gene markers from water samples can be used for copy number calculations (Alexander et al., 2015, 2016). The abundances of antibiotic resistance genes (ARGs) and opportunistic bacteria were quantified in each water sample and normalized to 100 ng total extracted DNA (cell equivalents per 100 ng DNA). In addition, the qPCR data were also normalized to the 16S rRNA gene abundances of the corresponding water samples.
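A short sketch of these two normalizations, assuming hypothetical calibration parameters (slope, intercept) and Ct values; only the general Ct-to-copy-number inversion and the per-100 ng and per-16S normalization steps reflect the procedure described here.

```python
import numpy as np

# Hypothetical calibration parameters from a standard curve: Ct = slope*log10(copies) + intercept
slope, intercept = -3.40, 38.0

def ct_to_copies(ct: np.ndarray) -> np.ndarray:
    """Invert the calibration curve to obtain copy numbers from Ct values."""
    return 10 ** ((ct - intercept) / slope)

# Illustrative triplicate Ct values for mcr-1 and the 16S rRNA gene in one sample
mcr1_copies = ct_to_copies(np.array([31.5, 31.7, 31.6])).mean()
rrn16s_copies = ct_to_copies(np.array([12.1, 12.0, 12.2])).mean()

template_ng = 20.0  # ng of template DNA per reaction, as in the qPCR setup
per_100ng = mcr1_copies * (100.0 / template_ng)  # cell equivalents per 100 ng DNA
per_16s = mcr1_copies / rrn16s_copies            # cell equivalents per 16S rRNA gene copy
print(f"{per_100ng:.2e} per 100 ng DNA, {per_16s:.2e} per 16S copy")
```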
Data Statistics and Presentation
In total, 14 biological samples from the influents and effluents of 7 WWTPs were obtained in two individual sampling campaigns in autumn and spring. For data presentation, the standardized box plot diagram was used, displaying the distribution of the data based on the five-number summary: minimum, first quartile, median, third quartile, and maximum. In the box plot, the central rectangle spans the first quartile (P = 25) to the third quartile (P = 75). A segment inside the rectangle shows the median, and whiskers above and below the box show the minimum and maximum values. Thus, the box plot displays the full range of variation (from min to max), the likely range of variation, and a typical value (the median). In addition, Student's t-test calculations were performed to assess the significance of the reduction of gene targets during conventional wastewater treatment.
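The significance test can be sketched as follows; the abundances are hypothetical placeholders, and the paired form of the t-test is one plausible reading of the influent/effluent comparison, not a detail confirmed by the text.

```python
import numpy as np
from scipy import stats

# Hypothetical paired log10 abundances (cell equivalents per 100 ng DNA)
# for one gene target in the influents and effluents of the 7 WWTPs.
influent = np.log10([8.1e1, 2.0e2, 1.5e2, 9.0e1, 4.5e1, 1.2e2, 6.0e1])
effluent = np.log10([7.0e1, 9.5e1, 8.0e1, 3.0e1, 2.0e1, 5.5e1, 2.5e1])

# Paired Student's t-test on the log-transformed abundances
t_stat, p_value = stats.ttest_rel(influent, effluent)
print(f"mean reduction = {np.mean(influent - effluent):.2f} logs, p = {p_value:.3f}")
```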
Propidium Monoazide (PMA) Treatment
PMA can enter dead or injured bacteria through their compromised membranes and intercalates with intracellular and extracellular DNA. The resulting PMA-DNA complex blocks polymerase activity at the target sites; in consequence, no PCR product is generated. The protocol is in accordance with that of our previous study (Villarreal et al., 2013) and based on the publications of Nocker et al. (2007a,b) and Nocker and Camper (2009), where the method was also optimized for natural mixed populations. To separate the living bacterial population from dead or injured bacteria during conventional wastewater treatment, the PhAST Blue Photo-Activation System (GenIUL, Barcelona, Spain) was used. After filtration of the influent and effluent wastewater samples originating from WWTP-1 (n = 3), the polycarbonate membranes (Nucleopore) were submerged in 300 µL of a 25 µM propidium monoazide (PMA; Biotium, Hayward, California, USA) solution and placed into a 1.5 mL colorless tube (SafeSeal tubes, Carl Roth, Karlsruhe, Germany). After incubation in the dark at 4 °C for 5 min, the samples were exposed to the LED light of the photoactivation system at 100% intensity for 15 min. After PMA treatment, the samples were prepared for DNA extraction using the Fast DNA spin kit for soil according to the manufacturer's protocol for wastewater.
Agarose Gel Electrophoresis and Sequencing of mcr-1 Amplicons
The mcr-1 amplicons were separated on a 1% agarose gel to examine the primers and PCR protocols for proper specificity.
The results demonstrated that mcr-1-positive water samples generated amplicons with the expected length of 183 bp (Figure 2). Sequence analyses confirmed the mcr-1 identity (see Supplementary Material). For that purpose, the mcr-1 PCR products were purified with ExoSAPit (GE Healthcare, Munich, Germany) before sequencing. The sequencing reactions were performed in a 10 µL reaction volume, including 2 µL of Premix (BigDye® Terminator v1.1 Cycle Sequencing Kit, Applied Biosystems), 0.5 µL primer 517R (10 pM), and 1 µL of purified, 1:3 diluted PCR product. This volume was filled up to 10 µL with sterile water. The temperature profile of the sequencing reaction was as follows: 5 min at 96 °C; 25 cycles of 10 s at 96 °C, 5 s at 55 °C, and 1 min at 60 °C; afterwards cooling down to 4 °C.
To remove excess dye, the sequencing products were purified with the DyeEx 2.0 Spin Kit (Qiagen, Hilden, Germany). Three microliters of this product were added to 12 µL of Hi-Di Formamide (Applied Biosystems) and analyzed in the ABI Prism 310 Genetic Analyzer (Applied Biosystems) using POP4 Polymer (Applied Biosystems) and 47 cm × 50 µm capillaries (Applied Biosystems) according to the manufacturer's instructions. For database analyses, NCBI (National Center for Biotechnology Information) BLAST alignments were performed.
Feasibility of the Detection System Targeting the mcr-1 Resistance Gene in Wastewater Populations, Demonstrated at WWTP-1

The accuracy of the novel qPCR detection system targeting the mcr-1 colistin resistance gene in wastewater samples was demonstrated with the extracted total DNA of original water samples. Here, total DNA from WWTP-1, with the highest population equivalent of 440,000, was used. No irregular amplification results were observed during qPCR, nor were unexpected shoulders seen during the melting curve analyses (Figures 1A,B).
After qPCR amplification, the amplicons were separated on an agarose gel and resulted in one specific DNA band positioned at the expected product size of 183 bp (Figure 2). The amplicons were also sequenced. The evaluation of the BLAST search confirmed 99% identity with the mcr-1 resistance gene from E. coli (LC191581), K. pneumoniae (KX377410), Salmonella enterica (KX257482), and Cronobacter sakazakii (KX505142) strains. As a first result of this study, it could be shown that this newly developed qPCR system is suited for the reliable quantification of the mcr-1 gene in wastewater populations. The Ct values from qPCR were used for cell equivalent calculations normalized to 100 ng total DNA, a relative quantification related to the population microbiome that is independent of the sample volume. Additionally, the cell equivalents from the calibration curve were normalized to the 16S rRNA gene copies as an alternative eubacterial biomass gene marker. It became obvious that the mcr-1 gene was present in influent samples of WWTP-1 with a calculated abundance of 8.11 × 10^1 cell equivalents per 100 ng total DNA. The median values of cell equivalents per 100 ng DNA of the effluent water samples were quantified at 7.0 × 10^1. Alternatively, the cell equivalents were also referred to 16S rRNA gene copies and corresponded to 2.64 × 10^−8 cell equivalents in the influent and 1.30 × 10^−9 cell equivalents in the effluent water samples, respectively. The data from WWTP-1 indicate the stable presence of the mcr-1 gene within the microbiome of the wastewaters of this large WWTP.
Additionally, influent and effluent wastewater samples of WWTP-1 were treated with PMA to discriminate living bacteria from dead bacteria or extracellular DNA (eDNA) prior to DNA extraction. The presence of eDNA, which might be released from dead or injured bacteria, is known to be a promoting factor for horizontal gene transfer (HGT) (Davies et al., 2006; Aminov, 2011). In the case of the colistin resistance, the mcr-1 gene is located on a conjugative plasmid, which has already been described to transfer the colistin resistance among Enterobacteriaceae (e.g., Liu et al., 2016). After normalization to 100 ng DNA and to 16S rRNA gene copies, slightly reduced cell equivalents were found for PMA-treated influent and effluent samples (influent, PMA-treated: 6.2 × 10^0 cell equivalents per 100 ng DNA and 8.92 × 10^−8 per 16S rRNA gene copy; effluent, PMA-treated: 6.2 × 10^0 cell equivalents per 100 ng DNA and 3.66 × 10^−8 cell equivalents per 16S rRNA gene copy). Comparing PMA-treated with untreated influent and effluent samples, at most one order of magnitude difference became apparent. These results indicate the presence of low amounts of eDNA or a distinct number of injured/dead bacteria in the wastewater samples, which might be relevant for transformation and therefore HGT.
Nevertheless, it could be shown that the mcr-1 colistin resistance gene is present in water samples from WWTP-1 and that living bacteria carrying the mcr-1 gene are still released into the adjacent receiving body.
Comparing Different WWTPs According to the Presence of the mcr-1 Gene
As mentioned before, the influent and effluent wastewaters of 6 additional WWTPs were investigated for the abundance of the mcr-1 gene coding for colistin resistance in Enterobacteriaceae. In contrast to WWTP-1, the population equivalents of these WWTPs were much lower, ranging from 8,000 to 210,000 p.e. The mcr-1 gene was quantified in the influent wastewater samples of all six WWTPs. These mcr-1-positive WWTPs treat urban and rural wastewaters, including those of hospitals, livestock, and food industries. Here, the cell equivalents ranged from 4.45 × 10^1 to 2.01 × 10^2 per 100 ng total DNA, and from 9.89 × 10^−8 to 3.44 × 10^−7 per 16S rRNA gene copy number (Figure 3).
Compared with WWTP-1 (440,000 p.e.), the influent waters of these WWTPs showed higher abundances of the mcr-1 gene (Figure 3), which might result from agricultural or food industry impacts. Even in influent samples of the smallest WWTP-7 (8,000 p.e.), mcr-1 gene copies were detected, despite the fact that no hospitals, intensive animal farms, or food industries release wastewaters to this WWTP.
Hence, the mcr-1 gene was detected both in the influent waters of all seven WWTPs and in the effluent water samples of WWTP-1, WWTP-2, and WWTP-3.
These WWTPs differed in population equivalents (Table 1), suggesting that the dimension of a WWTP does not significantly impact the survival and persistence of mcr-1-carrying bacteria. For the mcr-1 gene, a maximum reduction of one log scale was determined in WWTP-1, WWTP-2, and WWTP-3.
Detection of Other Clinically Relevant Antibiotic Resistance Genes
Besides the mcr-1 colistin resistance gene, 6 other clinically relevant antibiotic resistance genes were quantified in the effluent wastewater samples of the 7 WWTPs. Both the relative quantification and the cell equivalents per 16S rRNA gene copy number are shown in Figures 4, 5. The targeted genes are the ermB gene coding for erythromycin resistance, the tetracycline resistance gene tetM, and 4 different ß-lactam resistance genes (CTX-M-32, blaTEM, CMY-2, and CTX-M). All resistance genes were distinctly present in all wastewater effluent samples. The most abundant resistance gene was ermB, with a median value of 2.39 × 10^5 cell equivalents per 100 ng DNA or 3.08 × 10^−3 cell equivalents per 16S rRNA gene copy number. The second most abundant gene was the tetracycline resistance gene tetM, with 1.26 × 10^4 cell equivalents per 100 ng DNA and 1.68 × 10^−4 per 16S rRNA gene copy number. The abundances of the blaTEM and CTX-M-32 genes were found in a similar range slightly below the tetM gene. The median values for the CMY-2 and CTX-M ß-lactamase genes were detected at a lower abundance of 1.0 × 10^2 per 100 ng DNA and 8.7 × 10^−7 per 16S rRNA gene copy number.
In comparison with the mcr-1 gene abundances, these resistance genes were much more frequently found and were quantified at higher concentrations in all wastewater effluents released to the aquatic environment. Data from the influent samples of the WWTPs demonstrated a reduction of the mentioned resistance genes ranging from one log to a maximum of two logs during wastewater treatment (data not shown), which is in accordance with the data targeting the mcr-1 gene.
Opportunistic Pathogens in the Wastewater Samples
The abundance of specific taxonomic gene markers was quantified targeting A. baumannii and the Enterobacteriaceae K. pneumoniae and E. coli. These three bacterial species are described as belonging to a network of horizontal gene transfer with clinical relevance. All of them are opportunistic pathogens and were shown in previous studies to carry colistin resistance (Hua et al., 2017; Jeannot et al., 2017). To address the growing concern of emerging colistin-resistant pathogens, the abundances of these potential carrier bacteria were also analyzed in the German WWTPs of this study.

FIGURE 3 (caption fragment) | (n = 14). Significance of reduction was assessed by Student's t-test with p ≤ 0.05 and is indicated by an asterisk.

FIGURE 4 | Abundances of antibiotic resistance cell equivalents per 100 ng DNA derived from the total extracted DNA of all 7 WWTP effluents. Erythromycin resistance was detected by ermB gene abundance, ß-lactam resistance genes were detected by CTX-M-32, blaTEM, CMY-2, and CTX-M gene abundances, and tetracycline resistance was detected by tetM gene abundance. Displayed are the median values as well as the quartiles [p = 0.25 (dark), p = 0.75 (bright)] and the standard deviations (n = 14).
Figures 6, 7 summarize the data of all seven investigated WWTPs (influent/effluent) according to the two different normalization approaches, i.e., per 100 ng total DNA (Figures 6A-C) and per 16S rRNA gene copy (Figures 7A-C). Both figures show the abundances of the taxonomic gene markers specific for A. baumannii (secE), K. pneumoniae (gltA), and E. coli (yccT). Calibration curves and melting curve analyses for each parameter are documented in the Supplementary Material. The obtained Ct values matched the linear ranges of the calibration curves.
The median values of the influent samples were calculated for the specific taxonomic genes as 1.12 × 10^4 for A. baumannii, 2.15 × 10^4 for K. pneumoniae, and 2.96 × 10^4 for E. coli cell equivalents per 100 ng total DNA. Decreased median values per 100 ng DNA resulted for the effluent samples for all three target genes, i.e., 1.05 × 10^2 for A. baumannii, 1.60 × 10^3 for K. pneumoniae, and 1.80 × 10^3 for E. coli (Figure 6). In total, the abundances of these taxonomic gene markers decreased during conventional wastewater treatment by one to two orders of magnitude but were still present in significant amounts in the effluent samples. Figure 7 illustrates the abundances of the taxonomic marker genes referred to the 16S rRNA copy number. The median values in the influent samples were 2.17 × 10^−5 cell equivalents for A. baumannii, 8.25 × 10^−5 for K. pneumoniae, and 1.3 × 10^−4 for E. coli. The investigation of the cell equivalents in the effluent samples resulted in medians of 2.60 × 10^−6 for A. baumannii, 5.86 × 10^−5 for K. pneumoniae, and 3.29 × 10^−5 for E. coli.
DISCUSSION
To the best of our knowledge, this is the first study to demonstrate the occurrence of the colistin resistance mcr-1 gene in bacterial populations of wastewater. The mcr-1 gene was detected in influent samples of all seven WWTPs and was not eliminated during wastewater treatment, thus reaching the aquatic environment. The overall abundances, expressed as cell equivalents per 100 ng DNA or per eubacterial 16S rRNA gene copy number, are still low, and live/dead analyses demonstrated that the mcr-1 gene was present in living bacteria released to the receiving bodies.
The colistin resistance can be traced back to the plasmid carrying the mcr-1 gene. This plasmid has already been characterized and can be mobilized by conjugation. In wastewater systems, a potentially high risk for horizontal gene transfer of the mcr-1-carrying conjugative plasmid is present, since many factors promoting horizontal gene transfer occur in water samples from WWTPs (Bellanger et al., 2014). The combination of high cell densities in activated sludge tanks, increased nutrient availability, co-selection by heavy metals, and selective pressures such as complex mixtures of low-concentrated xenobiotics (e.g., antibiotics, biocides, disinfectants, pharmaceuticals) is hypothesized to promote horizontal gene transfer and therefore the persistence of certain antibiotic-resistant bacteria in wastewater environments (Rizzo et al., 2013; Berendonk et al., 2015). This is of major concern because it has been shown that some human pathogens, like K. pneumoniae, are involved in high rates of horizontal gene transfer (Hu et al., 2016; Navon-Venezia et al., 2017) and could enhance the dissemination of the mcr-1 plasmid. Potential mcr-1-carrying opportunistic pathogens, i.e., A. baumannii, E. coli, and K. pneumoniae, were quantified in higher abundances even at the effluent sampling sites.
Cultivation experiments targeting specific mcr-1-positive Enterobacteriaceae from the wastewater populations failed due to the low abundances of the target bacteria and the high abundance of the accompanying bacterial flora, which overgrew the agar plates supplemented with polymyxin for the selection of colistin-resistant Enterobacteriaceae.
The dissemination of colistin-resistant bacteria via municipal wastewaters is only one route of dissemination of these antibiotic-resistant bacteria. Other important dissemination pathways of antibiotic-resistant pathogens have already been described (Vaz-Moreira et al., 2014). It remains to be studied how the abundance of the mcr-1 gene target changes over time. An increasing number of this gene target, especially in clarified wastewater samples released to the aquatic environment, would be directly linked with an increasing microbiological risk potential and would underline the requirement for additional treatment steps to eliminate these opportunistic and antibiotic-resistant bacteria.
According to the results of this study, the release of antibiotic resistance genes and opportunistic pathogens from WWTPs to the environment in significant amounts was demonstrated, which underlines the requirement for an expanded wastewater treatment (Rizzo et al., 2013; Alexander et al., 2015). Most available data for the selected antibiotic resistance genes derive from clinical studies. Here, the occurrence of blaTEM, CTX-M, CTX-M-32, and CMY-2, all present in the investigated wastewater samples, leads to resistance against ß-lactam antibiotics. ß-lactams made up 30% of all prescribed antibiotics in Germany (ECDC, 2015). As a consequence, the high prescription rate may lead to increased resistance gene evolution and finally dissemination to the wastewaters. The blaTEM gene, quantified in high abundance in the WWTP effluents under investigation, is responsible for high percentages of ampicillin resistance in E. coli and is also prevalent in K. pneumoniae (Cooksey et al., 1990). It is considered to be a precursor of the extended-spectrum ß-lactamases (ESBL) (Emery and Weymouth, 1997). CTX-M-type ß-lactamases are the most widespread types of ESBL (Bonnet, 2004). Plasmids containing blaCTX-M genes often also contain blaTEM as well as blaOXA genes (Poirel et al., 2005). The CTX-M-32 gene mediates a high resistance in pseudomonads and renders ceftazidime completely ineffective (Fernández et al., 2007). In addition, the CMY-2 gene represents a resistance gene against carbapenems. Carbapenems are used when therapies with other ß-lactam antibiotics have failed (Bauernfeind et al., 1996).
It became obvious that these ß-lactam resistance genes were still present in all effluent samples, indicating an insufficient reduction during conventional wastewater treatment. This was also demonstrated by the t-test calculations with values of p < 0.05. In addition, the ermB gene provides resistance against erythromycin, a macrolide antibiotic used as a substitute for penicillin in patients with penicillin allergies or infections with bacteria resistant to ß-lactams (Jelić and Antolović, 2016). Macrolides are used for respiratory or gastrointestinal infections, and erythromycin made up to 16% of all antibiotics in Germany (ECDC, 2015). Finally, the tetM resistance gene mediates resistance against tetracycline, the second most common antibiotic in the world, which is currently also used as a feed additive in animal farming (Gu and Karthikeyan, 2005). Tetracycline and erythromycin often reach WWTPs in sub-lethal concentrations, which might promote the spread of the corresponding resistances through horizontal gene transfer (Alexander et al., 2015).
In consequence, the integration of oxidative, chemical-physical, or membrane-based technologies for an adjusted wastewater treatment is necessary to interrupt the dissemination pathways of ARBs and ARGs to the aquatic environments. The fate of antibiotic-resistant and opportunistic pathogens in different environmental habitats depends on the species and their genetic backgrounds. Some opportunistic pathogens are highly adaptable to stress situations and are able to activate effective stress responses to survive adverse growth conditions (nutrient limitation, low temperatures, etc.; Alexander et al., 2015, 2016). Hence, these microorganisms are able to persist in unfavorable situations and will not die off. The risk of contamination via direct contact of humans with insufficiently cleaned wastewater or via vegetables irrigated with contaminated water (direct or indirect re-use) cannot be excluded. The degree of environmental contamination via wastewater effluents depends on (1) the densities of WWTPs in a specific area, (2) the volume of conditioned wastewaters released to the receiving bodies, and (3) the catchment area of the WWTP. It became clear from this study that the size of the WWTP is not an adequate parameter for discussing a possible reduction of antibiotic resistance. The reduction was limited to one to a maximum of two logs during conventional treatment in all WWTPs, ranging from low to high population equivalents. Thus, WWTPs should include an effective disinfection technique to avoid environmental contamination with clinically relevant microbes and to stop the dissemination and evolution of antibiotic resistances. Such an effort would also contribute to an effective protection of reservoirs for drinking water production, even in industrialized countries. In fact, no national or international regulations or threshold values are available that address the dissemination of clinically relevant microbes, including antibiotic resistances, via WWTPs to the aquatic environments. Integrated evaluation concepts to assess the efficiency of advanced wastewater treatment processes for the elimination of antibiotic-resistant bacteria and micro-pollutants have already been published (Ternes et al., 2017) and are supposed to be a useful tool for initiating regulation processes.
AUTHOR CONTRIBUTIONS
TS: Experimental organization, preparation of the manuscript. NH, JA, and FS: Performed the experiments and generated the scientific data; co-authors of the manuscript. CH: Local support at the municipal wastewater treatment plants, sampling procedures, provision of WWTP-specific data. ER: Provision of sequencing data.
Visualization of the distortion induced by nonlinear noise reduction in computed tomography
Abstract

Purpose: We developed a method to visualize the image distortion induced by nonlinear noise reduction algorithms in computed tomography (CT) systems.
Approach: Nonlinear distortion was defined as the induced residual when testing a reconstruction algorithm by the criteria for a linear system. Two types of images were developed: a nonlinear distortion of an object (NLD object) image and a nonlinear distortion of noise (NLD noise) image to visualize the nonlinear distortion induced by an algorithm. Calculation of the images requires access to the sinogram data, which is seldom fully provided. Hence, an approximation of the NLD object image was estimated. Using simulated CT acquisitions, four noise levels were added onto forward-projected sinograms of a typical CT image; these were noise reduced using a median filter with the simultaneous iterative reconstruction technique or a total variation filter with the conjugate gradient least-squares algorithm. The linear reconstruction technique filtered back-projection was also analyzed for comparison.
Results: Structures in the NLD object image indicated contrast and resolution reduction by the nonlinear denoising. Although the approximated NLD object image represented the original NLD object image well, it had a higher random uncertainty. The NLD noise image for the median filter indicated both stochastic variations and structures reminiscent of the object, while for the total variation filter only stochastic variations were indicated.
Conclusions:
The developed images visualize nonlinear distortions of denoising algorithms. The object may be distorted by the noise and vice versa. Analyzing the distortion correlated to the object is more critical than analyzing a distortion of stochastic variations. The absence of nonlinear distortion may measure the robustness of the denoising algorithm.
1 Introduction

Recent advancements in CT technology have led to the possibility of acquiring CT images at sub-mSv radiation doses 1,2 and even at the same dose as in conventional radiographic imaging. 3 Improvements in noise reduction algorithms in the reconstruction of CT images have been important in this development. Recently developed algorithms are mainly based on iterative approaches and deep learning to reduce the noise in CT images. 4,5 A clinical CT system will always exhibit nonlinear distortion due to its physical limitations, as the detector elements cannot be infinitely small and data cannot be collected continuously around the patient. In addition to leading to aliasing, these limitations, in combination with the presence of scattered radiation and a diverging beam, may create a partial-volume effect and streak artifacts. 6 As filtered back-projection (FBP) has been the gold standard of reconstruction techniques, convolution kernels have been the only reconstruction option to balance distortion suppression, for example, between reducing streak artifacts and increasing the blurring of the image and vice versa. Iterative reconstruction is often less sensitive to abrupt divergences between adjacent projections than FBP, and the nonlinear distortion due to geometrical inconsistencies may thus be reduced. 7 However, other distortion effects may arise from the often nonlinear behavior of the algorithms included in iterative reconstruction. For example, overregularization in a nonlinear noise reduction algorithm may lead to an unfamiliar smoothing often described as plastic. 8 Also, the regularization factors in the algorithm may adapt to the composition of the patient using prior object information modeling. 9 Consequently, the image quality cannot be generalized, as it will depend on the contrast of the imaged objects and the noise level.
The image quality of images reconstructed using nonlinear noise reduction algorithms has been evaluated with adapted metrics from the theory of linear systems. For example, the concept of the modulation transfer function has been adapted to apply to specific tasks via the task-specific transfer function (TTF), as the spatial resolution may vary depending on the noise and the contrast of the imaged object. 10,11 The TTF has further been applied in the concept of model observers to calculate a detectability index (d′) for specific detection tasks. 12 Recently, the method of TTF has been applied to patient images. 13,14 Other approaches, such as quantifying the overregularization, have also been used to describe the performance of a nonlinear noise reduction algorithm. 8 The performance of nonlinear noise reduction algorithms may also be characterized by decoupling the distortion from the system resolution, as the distortion may masquerade as a degradation in spatial resolution. 15,16 However, a diagnosis will often depend on the radiologist's interpretation of the detected pathology, and a nonlinear noise reduction algorithm may distort the image such that the new image impression alters this interpretation. 9 Hence, an overview of the distortion of a reconstructed image would be useful to improve our understanding of the effects of a nonlinear noise reduction algorithm. A comparison with an FBP image is often used to demonstrate the robustness of nonlinear noise reduction algorithms. 17 Although such a comparison may be interesting, it does not analyze the nonlinear effect of the algorithm but instead the difference between the algorithms. Furthermore, the difference will consist of both linear and nonlinear distortions induced by the algorithms, in which the absence of nonlinear distortion is a measure of the robustness of the noise reduction algorithm. Thus, the purpose of this study is to develop a method that can isolate and visualize the location of the nonlinear distortion caused by a nonlinear noise reduction algorithm in an arbitrary object, independently of other algorithms. Two new types of images that estimate and visualize the behavior and noise dependence of a nonlinear algorithm are proposed. The first type, denoted the NLD object image, visualizes the systematic nonlinear distortion of the object at a given noise level. The second type, denoted the NLD noise series (which consists of many images), visualizes the nonlinear distortion of the noise caused by the object.
Although the distortion at individual frequencies may characterize a nonlinear algorithm, the distortion is dependent on the composition of the object, i.e., the distortion of an object is not equal to the sum of the distortions of the individual frequencies of the object. Hence, the present method was developed for the analysis of the distortion of an arbitrary object. Further, the methods used in the mentioned studies and in the present one basically test the criteria for a linear system. In contrast, the present method investigates the distortion in the image domain and any changes to an object when reconstructed in low versus high noise. Only a nonlinear reconstruction will be dependent on the noise level. Hence, the present study defined the observed changes between noise levels as nonlinear distortions. The method was applied to a typical CT image and tested using simulations of the CT acquisition and reconstruction algorithm, as access to projection data was limited on the existing CT systems at our disposal. Similar to Larsson et al., 16 the present method is based on manipulation of the sinogram. However, the NLD object was approximated so as not to require access to the sinogram and was investigated here using simulations.
Description of the Method
The noise reduction algorithms used in CT image reconstruction may affect image quality nonlinearly, i.e., they may be dependent on the imaged object and the quantum noise. One of the effects may be nonlinear distortion, i.e., instead of a reduction in the signal, part of the signal of the object is transferred to other image structures. To identify the nonlinear distortion of a reconstruction algorithm, two types of images are proposed: one containing the nonlinear distortion of objects (NLD object) and the other containing the nonlinear distortion of noise (NLD noise). These images reveal the nonlinearity of a system by visualizing the extent to which the change in the output is not directly proportional to the change in the input. 18 This is achieved by calculating and comparing the left and right sides of the conditions of the superposition principle (additivity and homogeneity). The NLD object image is the residual of a comparison between the average of the acquired image data before and after reconstruction. An NLD noise image is the residual of a comparison between noise data reconstructed with and without an object being present. The NLD object image thus describes the systematic nonlinear distortion of the object, whereas the NLD noise series represents a series of images describing the nonlinear distortion of the noise. Both image types are presented in the image domain. For a linear reconstruction system, the residual is close to zero. However, the residual may visualize distortions originating from nonlinearities in the CT configuration, e.g., the detector response.
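A minimal Python sketch of the underlying superposition test for an arbitrary image operator; the clipping operator, array sizes, and scaling factor are arbitrary stand-ins chosen only to make the residuals visibly nonzero.

```python
import numpy as np

def nonlinearity_residuals(f, a, b, alpha=2.0):
    """Residuals of the superposition tests for an image operator f:
    additivity  f(a + b) - (f(a) + f(b))  and homogeneity  f(alpha*a) - alpha*f(a).
    Both residuals vanish (up to numerics) if f is linear."""
    additivity = f(a + b) - (f(a) + f(b))
    homogeneity = f(alpha * a) - alpha * f(a)
    return additivity, homogeneity

# Example with a deliberately nonlinear operator (clipping)
rng = np.random.default_rng(0)
a, b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
add_res, hom_res = nonlinearity_residuals(lambda x: np.clip(x, -1, 1), a, b)
print(np.abs(add_res).max(), np.abs(hom_res).max())  # clearly nonzero
```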
Nonlinear distortion of objects
A nonlinear noise reduction algorithm may distort the reproduction of an object in an image depending on the level of noise in the input data. For a CT system Ƥ, the input data are represented by a sinogram of an object s acquired with a background b, written as $sb_n(p, q)$, where the subscript n indicates the noise level of the background and p and q are the projection angle and detector position, respectively. Noise reduction in a CT system may be implemented during the reconstruction step. Hence, to isolate the nonlinear distortion introduced by a noise reduction algorithm, two reconstructions may be performed on a sinogram of the same object at different noise levels. One of the noise levels can be approximated to zero (noise-free) to obtain as large a difference in the comparison of the distortion as possible (the distortion is assumed to be smaller at lower noise levels). If manipulation of the sinogram is possible, the acquisition of a sinogram at a background n ($sb_n$) can be repeated N times and averaged before reconstruction to estimate the noise-free sinogram $\widetilde{sb}_{n,N}$ (provided N is sufficiently large). The reconstruction of this sinogram represents the object with the least nonlinear distortion, $\widetilde{SB}_{n,N}$. In images reconstructed at high noise levels, there is a risk that the distortion of the object may be obscured by the distortion of the high noise level. It may therefore be more appropriate to consider the systematic nonlinear distortion. Thus, the acquired sinograms $sb_n$ should be reconstructed separately and then averaged after reconstruction to represent both the reconstructed object and the systematic nonlinear distortion. Any difference between this averaged image ($\overline{SB}_{n,N}$) and the image reconstructed from the approximate noise-free sinogram ($\widetilde{SB}_{n,N}$) results in a residual consisting of the systematic nonlinear distortion of the object:

$$\mathrm{NLD}_{\mathrm{object}} = \overline{SB}_{n,N}(x, y) - \widetilde{SB}_{n,N}(x, y), \quad (1)$$

where x and y are the reconstructed pixel positions in Cartesian coordinates (Fig. 1, workflow of the calculation of the NLD object image, top and middle rows of images). When manipulation of the sinograms is not possible, an approximate noise-free image of the object $\widetilde{SB}'_{n_{low},N_{low}}$ acquired separately may be compared with the average of the high-noise images ($\overline{SB}_{n,N}$) to estimate an approximation of the NLD object image:

$$\mathrm{NLD}'_{\mathrm{object}} = \overline{SB}_{n,N}(x, y) - \widetilde{SB}'_{n_{low},N_{low}}(x, y), \quad (2)$$

where the subscripts $n_{low}$ and $N_{low}$ indicate the noise level of the background and the number of repeated acquisitions for the approximate noise-free estimation, respectively (Fig. 1 compares the workflows for the calculation of the NLD object and NLD′ object images, middle and bottom rows of images, respectively). Each of these images (NLD object and NLD′ object) provides a map of the nonlinear distortion that characterizes the systematic nonlinear distortion of the reconstructed object. However, the noise in these images does not have the same origin, and the random uncertainty in the resulting systematic nonlinear distortion will be higher than in the case in which sinogram manipulation is possible.

Fig. 1 (caption) | Workflow of the calculation of the NLD object image; sinogram acquisition (Ƥ) and reconstruction (f), respectively. The index i represents the acquisition number, and (x, y) and (p, q) indicate that averaging was performed in the image domain (middle and bottom rows of images) and the sinogram domain (top row of images), respectively. A series of N acquired noisy sinograms of the object ($sb_n$) was duplicated and averaged after reconstruction (i.e., a pixelwise average of the reconstructed images at the Cartesian coordinates x and y, middle row) and before reconstruction [i.e., a pixelwise average of the sinograms at the projection angles (p) and detector positions (q), top row] to provide the NLD object image as the residual of these two calculations. The NLD′ object image was the residual of the reconstructed noisy sinograms (middle row) and the reconstructed low-noise sinograms, indicated by the subscript low (bottom row). In the case of a linear reconstruction, the pixel values of the NLD object image will be close to zero (i.e., an overall gray image).

In the application of the proposed method, using simulations of a CT acquisition of a typical abdominal image (see Sec. 2.2 for details), the acquisition of $sb_n$ was repeated 16, 32, 64, 128, and 256 times (N = 16, 32, 64, 128, and 256) to illustrate how the difference in the uncertainty between the NLD object and NLD′ object images changes with the number of repetitions. For the case in which manipulation of the sinograms was possible, these sinograms were averaged to give an approximate noise-free sinogram:

$$\widetilde{sb}_{n,N}(p, q) = \frac{1}{N} \sum_{i=1}^{N} sb_{n,i}(p, q), \quad (3)$$

where the index i is the acquisition number and N is the number of acquisitions at background noise level $b_n$ (p and q are still the projection angle and the detector position, respectively; Fig. 1). The reconstruction of $\widetilde{sb}_{n,N}$ with the reconstruction algorithm f provides an estimate of the noise-free image of the object, calculated according to

$$\widetilde{SB}_{n,N}(x, y) = f\big(\widetilde{sb}_{n,N}(p, q)\big)(x, y), \quad (4)$$

and the average of the separately reconstructed noisy images according to

$$\overline{SB}_{n,N}(x, y) = \frac{1}{N} \sum_{i=1}^{N} SB_{n,i}(x, y). \quad (5)$$

The NLD object image was calculated by inserting Eqs. (4) and (5) into Eq. (1) (Fig. 1) at background noise level $b_n$. The $\widetilde{SB}'_{n_{low},N_{low}}$ in the calculation of the NLD′ object image in Eq. (2) can, if the acquisition procedure changes nonlinearly, for example due to a change in the size of the x-ray focal spot as the tube current is increased, be estimated by an average of many reconstructed images of the object such as $\overline{SB}_{n,N}$ [Eq. (5)]. However, this study used simulations of CT acquisitions with fixed acquisition parameters to focus on the nonlinearities in the reconstruction algorithm. Hence, $\widetilde{SB}'_{n_{low},N_{low}}$ was estimated using only one separately acquired image with a noise level corresponding to the average of the high-noise images ($\overline{SB}_{n,N}$). The noise level was defined at a contrast-to-noise ratio (CNR) dependent on the simulated noise level and the number of repeated acquisitions.
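The Eq. (1) workflow can be sketched as follows. This is not the study's implementation: scikit-image's parallel-beam radon/iradon (recent versions, with the filter_name argument) replaces the fan-beam ASTRA simulation, a 3 × 3 median filter on the sinogram stands in for the nonlinear noise reduction, and the phantom, noise level, and N are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Stand-in nonlinear reconstruction: median-filter the sinogram, then filtered back-projection
theta = np.linspace(0.0, 180.0, 360, endpoint=False)

def reconstruct(sino):
    return iradon(median_filter(sino, size=(3, 3)), theta=theta, filter_name="ramp")

phantom = rescale(shepp_logan_phantom(), 0.5)  # 200x200 test object
clean_sino = radon(phantom, theta=theta)

rng = np.random.default_rng(1)
N, sigma = 16, 0.5
noisy_sinos = clean_sino + rng.normal(0, sigma, size=(N,) + clean_sino.shape)

# Eq. (1): average after reconstruction minus reconstruction of the averaged sinogram
mean_after = np.mean([reconstruct(s) for s in noisy_sinos], axis=0)  # SB-bar
noise_free = reconstruct(noisy_sinos.mean(axis=0))                   # SB-tilde
nld_object = mean_after - noise_free
print(f"max |NLD_object| = {np.abs(nld_object).max():.4f}")
```

For a strictly linear reconstruction (no median filter), the two averaging orders commute and the residual collapses to numerical noise, which is exactly the linearity criterion the method exploits.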
Nonlinear distortion of noise
When the noise reduction algorithms are nonlinear, the distortion may differ between acquisitions, as quantum noise is generated randomly. Further, the noise reduction algorithm may reconstruct noise differently when an object is present. It is then possible to estimate the nonlinear distortion of the noise by comparing the noise reconstructed with and without an object being present. However, the noise must be isolated from the object in both cases. The noise in an image reconstructed with the object present can be isolated after reconstruction by subtracting an estimate of the reconstructed object $\hat{S}(x, y)$ from each noisy image of the object $SB_{n,i}$. In the case in which the noise is reconstructed without an object being present, the noise is isolated before reconstruction by subtracting an estimate of the object sinogram $\hat{s}(p, q)$ from each noisy sinogram $sb_{n,i}$. Thus, the NLD noise for a series of images is the difference between each pair of isolated noise images:

$$\mathrm{NLD}_{\mathrm{noise}} = B_{n,i}(x, y) - \widetilde{B}_{n,i}(x, y), \quad (6)$$

where $B_{n,i}$ and $\widetilde{B}_{n,i}$ are the noise images reconstructed with and without the object being present, respectively; i runs from 1 to N and represents the acquisition number of the reconstructed noise; and n denotes the level of noise at which the NLD noise series is assessed (Fig. 2, workflow of the calculation of the NLD noise series). Each of the images in the NLD noise series provides a map of the nonlinear distortion that characterizes the random nonlinear distortion of the reduced noise. An assessment using a linear reconstruction algorithm will result in an NLD noise series with pixel values close to zero. In the application of the proposed method (see Sec. 2.2), the average of the acquired noisy images $\overline{SB}_{n,N}$ [described in Eq. (5)] was used as the estimate of the reconstructed object $\hat{S}(x, y)$, and the calculation of $B_{n,i}$ was expressed as

$$B_{n,i}(x, y) = SB_{n,i}(x, y) - \overline{SB}_{n,N}(x, y),$$

where $SB_{n,i}$ is the i'th reconstructed noisy image of the object at noise level n (Fig. 2). The average of the acquired noisy sinograms $\widetilde{sb}_{n,N}$ [described by Eq. (3)] was used as the estimate of the object sinogram, and the calculation of $\widetilde{B}_{n,i}$ was expressed as

$$\widetilde{B}_{n,i}(x, y) = f\big(sb_{n,i}(p, q) - \widetilde{sb}_{n,N}(p, q)\big)(x, y),$$

where f is the reconstruction algorithm and $sb_{n,i}$ is the i'th noisy sinogram of the object at noise level n (Fig. 2). The NLD noise series was calculated for the same numbers of repeated acquisitions used for the NLD object images (N = 16, 32, 64, 128, and 256), using Eq. (6). However, for the estimation of the NLD noise series in this study, N was equal to 256 to show the NLD noise series with negligible uncertainties. Further, the NLD noise series consisted of N separate estimates of the NLD noise.

Fig. 2 (caption) | Workflow of the calculation of the NLD noise series; sinogram acquisition (Ƥ) and reconstruction (f), respectively. The index i is the acquisition number, and (x, y) and (p, q) indicate that averaging was performed in the image domain and the sinogram domain, respectively. A series of N sinograms of an object s acquired at noise level n ($sb_n$) was duplicated. The first series was used to obtain the isolated noise images (with noise level n) reconstructed in the presence of the object ($B_{n,i}$, top row), i.e., the mean of the reconstructed noisy images ($\overline{SB}_{n,N}$) was subtracted from each reconstructed noisy image of the object ($SB_{n,i}$). The second series was used to obtain the isolated noise images (with noise level n) reconstructed in the absence of the object ($\widetilde{B}_{n,i}$, bottom row), i.e., the mean of the acquired noisy sinograms ($\widetilde{sb}_{n,N}$) was subtracted from each acquired noisy sinogram of the object ($sb_{n,i}$) and then reconstructed. Finally, the series of NLD noise images was obtained by subtracting each isolated noise image reconstructed in the absence of the object ($\widetilde{B}_{n,i}$) from the corresponding isolated noise image reconstructed in the presence of the object ($B_{n,i}$). In the case of a linear reconstruction, the pixel values of the NLD noise images would be close to zero.
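Under the same stand-in assumptions as the previous sketch (scikit-image in place of the fan-beam simulation, a median filter in place of the study's denoisers, illustrative parameters), Eqs. (6)-(8) translate to the following self-contained Python sketch.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

theta = np.linspace(0.0, 180.0, 360, endpoint=False)

def reconstruct(sino):
    # Stand-in nonlinear reconstruction: sinogram median filter + FBP
    return iradon(median_filter(sino, size=(3, 3)), theta=theta, filter_name="ramp")

phantom = rescale(shepp_logan_phantom(), 0.5)
clean_sino = radon(phantom, theta=theta)
rng = np.random.default_rng(2)
N, sigma = 16, 0.5
sinos = clean_sino + rng.normal(0, sigma, size=(N,) + clean_sino.shape)

recons = np.array([reconstruct(s) for s in sinos])
sino_mean = sinos.mean(axis=0)   # sb-tilde: estimate of the object sinogram
img_mean = recons.mean(axis=0)   # SB-bar: estimate of the reconstructed object

# Noise isolated with the object present (B) vs. absent (B-tilde)
B = recons - img_mean                                            # one image per acquisition i
B_tilde = np.array([reconstruct(s - sino_mean) for s in sinos])  # noise reconstructed alone
nld_noise = B - B_tilde                                          # Eq. (6): the NLD noise series
print(f"median of max |NLD_noise|: {np.median(np.abs(nld_noise).max(axis=(1, 2))):.4f}")
```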
2.2 Application of the Method

As described above, the method was evaluated using simulations. The open-source Compute Unified Device Architecture-integrated CT simulation toolbox [ASTRA© (version 1.9.9.dev1) 19-21 for MATLAB™ (R2020b)] was used to perform reconstructions of the test object. In the toolbox, the projection geometry was set to fan-beam using a flat detector array. The 1474 detector elements with a detector element width of 1 mm (defined at the isocenter) were used to reconstruct images with a volume of 1 × 768 × 768 voxels with an equilateral voxel size of 1 mm³. However, the distortion was assessed in the trans-axial image plane using a 512 × 512 pixel region of interest (ROI) to represent the most common field of view used clinically. The 1152 projection angles were evenly spaced over a whole rotation. The source-to-detector distance was set to 1085.6 mm, and the isocenter was placed 696.7 mm from the source. A function incorporated in the toolbox was used to compute forward projections of the test object (in this case representing a sinogram map of linear attenuation) with respect to the volume and projection geometry.
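A hedged configuration sketch of this geometry using the ASTRA toolbox's Python interface (the study used the MATLAB toolbox); the phantom is a placeholder, and the detector width is rescaled because ASTRA's fanflat geometry expects the element width at the detector rather than at the isocenter.

```python
import numpy as np
import astra  # ASTRA toolbox, Python interface (the study used the MATLAB toolbox)

# Fan-beam geometry as stated in the text: 1474 detector elements of 1 mm width
# defined at the isocenter, 1152 evenly spaced angles over a full rotation,
# source-isocenter distance 696.7 mm, isocenter-detector 1085.6 - 696.7 = 388.9 mm.
SDD, SOD = 1085.6, 696.7
det_width_at_detector = 1.0 * SDD / SOD  # scale the 1 mm isocenter width to the detector

angles = np.linspace(0, 2 * np.pi, 1152, endpoint=False)
proj_geom = astra.create_proj_geom("fanflat", det_width_at_detector, 1474,
                                   angles, SOD, SDD - SOD)
vol_geom = astra.create_vol_geom(768, 768)

# Forward project a placeholder attenuation map to obtain a sinogram
phantom = np.zeros((768, 768), dtype=np.float32)
phantom[300:460, 300:460] = 0.02  # illustrative attenuation block (mm^-1)
proj_id = astra.create_projector("line_fanflat", proj_geom, vol_geom)
sino_id, sinogram = astra.create_sino(phantom, proj_id)

astra.data2d.delete(sino_id)
astra.projector.delete(proj_id)
```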
Poisson noise was used to simulate the noise originating from the simulated CT acquisition of the test object. The CT simulation generated a sinogram mapping the linear attenuation, which was first transformed by the Beer-Lambert law into the detector signal, where the noise was added, and then transformed back into linear attenuation data. The final sinograms used in the calculation of the NLD object and NLD noise images may be averaged either at the level of the detector signal or at the level of the attenuation data. The averaging in this study was done on the attenuation data, as the noise reduction algorithms investigated operated only on the attenuation data and did not involve transformation from the detector signal. A CNR was defined in an FBP-reconstructed image as the difference between the CT numbers of liver and muscle tissue divided by the standard deviation of the CT number for the muscle tissue. The CT numbers of the tissues were estimated using ROIs sufficiently large to generate stable standard deviations [in Fig. 3(a), the location of the ROIs is indicated in the FBP image at CNR = 2.4]. The expected value in the Poisson noise calculation was varied so that the linear attenuation coefficient ranged between that for dry air (near sea level) and cortical bone at 60 keV, taken from the NIST database. 22 CNR values of 0.4, 0.7, 1.4, and 2.4 were used to determine the nonlinear performance of the noise reduction algorithms at different noise levels.
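A minimal sketch of this noise model: line integrals are converted to expected counts via the Beer-Lambert law, Poisson noise is drawn, and the result is log-converted back to attenuation data. The photon fluence I0 and the sinogram values are illustrative, and the clipping guard against zero counts is an added practical detail, not taken from the text.

```python
import numpy as np

def add_poisson_noise(attenuation_sino, I0, rng):
    """Add quantum noise to a line-integral sinogram via the Beer-Lambert law.
    I0 sets the unattenuated photon count and thus the noise level."""
    counts = I0 * np.exp(-attenuation_sino)        # expected detector signal
    noisy_counts = rng.poisson(counts)
    noisy_counts = np.clip(noisy_counts, 1, None)  # avoid log(0) (photon starvation)
    return -np.log(noisy_counts / I0)              # back to attenuation data

rng = np.random.default_rng(3)
sino = np.full((1152, 1474), 2.0)                  # illustrative line integrals
noisy = add_poisson_noise(sino, I0=1e4, rng=rng)
print(noisy.std())
```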
Two combinations of nonlinear noise reduction algorithm and reconstruction algorithm were tested: a median filter (using a kernel of 3 × 3) in combination with the simultaneous iterative reconstruction technique (SIRT, SIRT med) and a total variation algorithm 23 (TV-L1, 24,25 henceforth denoted TV) in combination with the conjugate gradient least-squares algorithm (CGLS, CGLS TV). Each nonlinear noise reduction algorithm was applied to the sinograms before reconstruction. Both iterative reconstruction algorithms were stopped after 100 iterations, and an image reconstructed with FBP in combination with each respective nonlinear noise reduction algorithm (using a Hamming filter) was used as the initial estimate in the iteration sequence. An image with high noise has a high total variation, which is the integral of the absolute image gradient. A TV denoising algorithm can reduce noise by minimizing the total variation while preserving the edges of objects in an image, as noise contributes more to the total variation of the image than edges do. The amount of denoising of the algorithm was adjusted by the regularization parameter λ, where too high a λ could lead to blurring of edges, as the parameter allows the image to be less consistent with the noisy image. The denoised image was estimated iteratively, and the number of iterations N iter and the amount of denoising λ were set to 50 and 1.9, respectively, to match the noise reduction (the reduction in standard deviation) of the median filter. Further, to obtain equal contrast after processing with the TV algorithm, the mean ratio between unprocessed and processed sinograms was calculated to allow correction of the contrast level resulting from the normalization process in the TV algorithm.
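The weight-matching step can be sketched as follows, with scikit-image's Chambolle TV denoiser standing in for the TV-L1 algorithm (so the weight searched here is not the paper's λ = 1.9) and a flat noisy patch standing in for the sinogram.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import denoise_tv_chambolle

# Search for a TV regularization weight giving the same noise reduction (drop in
# standard deviation within a flat ROI) as a 3x3 median filter.
rng = np.random.default_rng(4)
image = 0.2 + rng.normal(0, 0.05, size=(256, 256))  # flat object plus noise
roi = (slice(64, 192), slice(64, 192))

target_std = median_filter(image, size=3)[roi].std()
weights = np.linspace(0.005, 0.1, 40)
stds = [denoise_tv_chambolle(image, weight=w)[roi].std() for w in weights]
best = weights[int(np.argmin(np.abs(np.array(stds) - target_std)))]
print(f"matched TV weight: {best:.3f}")
```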
The visualization of the average nonlinear distortion cannot show the distribution of the distortion. However, each repeated acquisition was an estimate of the nonlinear distortion. Hence, for each investigated algorithm, a pixelwise distribution of the distortion was estimated as an NLD object image at the 5th, 50th, and 95th percentiles together with a plot of the corresponding horizontal profile [in Fig. 3(a), the location of the horizontal profile plots is indicated in the FBP image at CNR = 2.4]. The distribution of the nonlinear distortion was analyzed in more detail by depicting the sharp edge of the dexter costa against the lung (the more peripherally located costa, left in the image) using 20 pixels along the same horizontal profile as described before. Plots of the NLD object at the 5th, 50th, and 95th percentiles together with the profiles of the $\widetilde{SB}_{n,N}$ and the $\overline{SB}_{n,N}$ were viewed to illustrate the correlation between the object and the nonlinear distortion.
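A pixelwise percentile stack of repeated NLD object estimates reduces to a single NumPy call; the stack below is random placeholder data at a reduced size.

```python
import numpy as np

# Stack of per-acquisition NLD_object estimates: N x rows x cols, in HU
# (random placeholders standing in for the repeated estimates)
rng = np.random.default_rng(5)
nld_stack = rng.normal(0, 10, size=(64, 128, 128))

# Pixelwise 5th, 50th, and 95th percentile images across the acquisition axis
p5, p50, p95 = np.percentile(nld_stack, [5, 50, 95], axis=0)

# Horizontal profile through a row of interest (placeholder row index)
row = 64
profile_p5, profile_p50, profile_p95 = p5[row], p50[row], p95[row]
print(profile_p50.shape)
```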
Results
The noise reduction, in terms of the standard deviation (σ), was similar for both combinations of reconstruction algorithm and noise reduction algorithm (SIRT med and CGLS TV) at the three lowest CNRs (Table 1 and Fig. 3). At these CNRs, each combination reduced the standard deviation by ∼40% in the ROI in the longissimus muscle [Table 1, indicated by the white square in the muscle in Fig. 3(a) at CNR = 2.4] and had a smoothing effect on the noise texture (Fig. 3). The noise reduction of SIRT med was slightly stronger than that of the other combination (Table 1). The nonlinear distortion of FBP [Figs. 4(a) and 6(a)] was negligible, and the maximum deviation from zero in any of the images of the nonlinear distortion (NLD object and NLD noise) was in the range of ±10⁻³ HU.

Table 1 (caption) | Noise at four CNRs, in terms of the standard deviation (σ) in a 60 × 20 pixel ROI in the longissimus muscle [the location of the ROI is shown in the FBP image at CNR = 2.4 in Fig. 3(a)] for FBP alone and the two combinations of reconstruction algorithms and noise reduction algorithms described in the text (SIRT med and CGLS TV).
Nonlinear Distortion of Objects
The NLD object image was used to identify the nonlinear properties of an algorithm by visualizing the systematic nonlinear distortion in the reconstruction of objects at a given CNR. As expected, FBP did not show any nonlinear distortion [Fig. 4(a)]. For the two combinations investigated, the NLD object images showed distortions at most edges of the anatomical structures in the CT image used to test the method [Figs. 4(b) and 4(c)]. These structures of distortion in the NLD object image indicate smoothing of the reconstructed object caused by the nonlinear algorithm as the noise increases. The NLD object images indicated that SIRT med caused less systematic distortion than the CGLS TV algorithm [Figs. 4(b) and 4(c)]. One indication was the more prominent distortion at the contrast-filled vessels in the NLD object image of CGLS TV [Fig. 4(c)]. A dark region in the NLD object images indicates a nonlinear contrast reduction at the CNR tested. Compared with the SIRT med algorithm, the contrast reduction was higher using the CGLS TV algorithm, especially for the contrast-filled vessels [Fig. 4(c)]. Both algorithm combinations exhibited a CNR dependence such that the edge distortions were more pronounced at lower CNRs [Fig. 4(c)]. The random uncertainty in the NLD object image was higher at lower CNRs due to the higher noise (Fig. 4). A comparison between the NLD object image and its approximation, the NLD′ object image, indicated good correspondence other than additional noise (higher random uncertainty) in the NLD′ object image (Fig. 5). The uncertainty in both NLD object and NLD′ object increased as the number of repeated acquisitions was decreased (Fig. 5). However, the estimation of the NLD object by the NLD′ object was shown to be comparable at 128 repetitions, where most of the distorted structures could be detected [Figs. 5(b) and 5(c)]. The estimation of the NLD object itself was almost unchanged down to between 32 and 64 repetitions [Fig. 5(c)].
The distribution of the distortion visualized by the percentile images (5th and 95th) showed the distribution of NLD object of the investigated algorithms to have a maximum range between about −40 and 40 HU (Fig. 6). Further, the variation of the distortion across space was small between the percentile images (Fig. 6). A comparison between the distribution estimations at the 5th- and 95th-percentile plots showed the nonlinear distortion to have a smaller amplitude in general for the noise reduction algorithms (SIRT med and CGLS TV ) than the noise for FBP (not affected by nonlinear distortion, Fig. 6). The general amplitude levels of the 5th and 95th percentiles for SIRT med , CGLS TV , and FBP were about −20 and 20 HU, −30 and 30 HU, and −40 and 40 HU, respectively (Fig. 6). However, the nonlinear distortion had amplitude peaks in the profile higher than the noise distribution of FBP. The distribution in space at the 50th percentile followed the NLD object (see the edge analysis below). The more detailed analysis of the sharp edge of a costa against the lung illustrated the correlation in space between the NLD object and the variations of S̃B 2.4,256 and SB 2.4,256 for the investigated algorithms (Fig. 7). The FBP algorithm did not show any distortion, which was shown by the flat line of the NLD object and the overlapping lines of S̃B 2.4,256 and SB 2.4,256 [Fig. 7(a)]. Further, the noise reduction algorithms (SIRT med and CGLS TV ) showed specific characteristics of the nonlinear distortion; for example, the distortion at the lung had different signs for the two algorithms [pixel 15 to 20, Figs. 7(b) and 7(c)]. The distortion at the edge (pixel 10 to 15) for the noise reduction algorithms was visualized by the NLD object even though the difference between S̃B 2.4,256 and SB 2.4,256 was difficult to see. As expected, the estimation of the distortion distribution by the percentile profiles was shown to be affected by noise. Further, the profile of the 50th percentile and the NLD object for each algorithm were shown to be equal. However, the amplitude of the distortion described by these profiles was shown to have a variation as large as the difference between the 5th and 95th percentiles.
Nonlinear Distortion of Noise
The NLD noise series (Fig. 8) shows the difference in nonlinear distortion between two images when reconstructing the same noise with and without an object being present. This allows the consistency in noise reduction and the dependence on the object to be visualized. The linear reconstruction algorithm FBP did not show any nonlinear distortion of the noise [Fig. 8(a)]. Of the two combinations investigated, SIRT med was the only one to show the structure of the distortion at contours of the anatomy in the abdominal CT image in the NLD noise series [Fig. 8(b)]. However, the pixel values in the NLD noise series obtained with SIRT med were about two orders of magnitude lower than for the CGLS TV combination [Figs. 8(b) and 8(c)]. The structures seemed to have a mean of zero, as no structures were observed when the series of NLD noise images was averaged (data not shown). The NLD noise series showed a correlated noisy texture that varied depending on the combination. The magnitude of the pixel values in the NLD noise series using the SIRT med combination increased as the CNR decreased, such that the distorted structures became more distinct [Fig. 8(b)]. This is in contrast to the CGLS TV combination, which did not show any dependence on CNR [Fig. 8(c)].
Discussion
An ideal noise reduction algorithm decreases the noise power, preserves the object signal power, and does not cause any distortion. This paper presents two new types of images that were used to visualize the performance of a nonlinear noise reduction algorithm in terms of nonlinear distortion. The distortion of an image can be described in the frequency domain as the transfer of frequencies of an object to other frequencies, especially to harmonics of the object frequencies. 15 However, in this study, the distortion is defined in the spatial domain as morphing of the image content, hence including both object and noise. This is a broad definition of the distortion, as it is more common to estimate distortion only when a severe morphing of the object has occurred. However, using the present definition, many of the effects of a nonlinear algorithm on the image quality can be understood to originate from distortion. One of the new types of images, the NLD object image, visualized the systematic distortion by the difference between reconstructing an arbitrary object in high versus low noise; it could then indicate which structures were affected by the algorithm as the noise was increased. The noise power may also be distorted; hence, noise is also a part of the image content. Further, in the image domain, the noise may be distorted systematically such that noise mimics the object, as nonlinear noise reduction algorithms often depend on the object. The other type of image, the NLD noise series, visualizes this distortion of noise by estimating the difference in reconstructing the noise with and without the object being present. Thus the NLD images did not test the algorithm against the ground truth, but rather to what extent the algorithm met the criteria for linear systems. Further, the test was based on comparing the average of many acquisitions before and after reconstruction, which should be equal for a linear system. Hence, the robustness of the algorithm was tested rather than the image quality. This concept of analyzing distortion was tested using a median filter in combination with the SIRT algorithm and a TV algorithm in combination with the CGLS algorithm.

[Figure caption: the horizontal profile plots, at the location indicated in Fig. 3(a), are shown below the images to better illustrate the changes in the magnitude of the distortion (amplitude range for FBP and SIRT med : −50 to 50 HU; for CGLS TV : −5000 to 5000 HU). Note: the amplitude range has been increased by a factor of 100 for CGLS TV to better visualize the differences in magnitude between the CNRs. The profile plots are arranged by CNR from the highest (2.4, top) to the lowest (0.4, bottom) for each image (a)-(c).]
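The two image types reduce to simple combinations of reconstructions, so a compact sketch may help fix the definitions. This is one reading of the text, not the authors' code: recon stands for any reconstruction algorithm (with any preceding noise filter), noisy_sinos is a list of repeated noisy sinograms of the same object, and clean_sino is the (near) noise-free sinogram. For a linear algorithm both outputs are close to zero.

```python
import numpy as np

def nld_object(noisy_sinos, clean_sino, recon):
    # Systematic distortion: the object reconstructed in high noise
    # (averaged over repetitions) minus the same object reconstructed
    # in (near) zero noise
    mean_noisy_recon = np.mean([recon(s) for s in noisy_sinos], axis=0)
    return mean_noisy_recon - recon(clean_sino)

def nld_noise_series(noisy_sinos, clean_sino, recon):
    # Noise isolated before reconstruction by subtracting the average
    # sinogram, then compared when reconstructed with and without the
    # object being present
    mean_sino = np.mean(noisy_sinos, axis=0)
    series = []
    for s in noisy_sinos:
        noise = s - mean_sino                      # estimated noise sinogram
        with_object = recon(s) - recon(mean_sino)  # noise reconstructed with the object present
        without_object = recon(noise)              # noise reconstructed alone
        series.append(with_object - without_object)
    return series
```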
An approximation of NLD object , NLD′ object , was also introduced based on assuming the distortion to be high compared with an averaged noise. By comparing one CT image acquired using a high dose and an average of many images acquired using a low dose, NLD′ object demonstrates the systematic nonlinear distortion difference between these dose levels. Noise may interfere with the nonlinear distortion analysis if the dose of the high-dose image is not high enough. However, the acquisition of the high-dose image may also be repeated and averaged, similar to the low-dose images, to represent a CT acquisition of a higher dose and lower the random uncertainty of the distortion estimation. Further, the NLD′ object image can be applied directly in an existing CT system to analyze the integrated noise reduction algorithms without the need to perform any mathematical operations on the raw data. However, it must be remembered that the nonlinear differences of an existing CT system acquisition configuration (e.g., size of the focal spot and number of acquired projections) between these dose levels would also be visualized. Further, an NLD′ object image may also visualize ring artifacts, as the detector response between detector elements at a low dose may vary. This was not seen in this study, as it was assumed in the simulations that the detectors had a perfect response. A more complex noise model including detector noise could perhaps have shown the effect of ring artifacts when simulating noise from CT examinations acquired at a low dose. 26 The degree of noise reduction by an algorithm would thus have been shown to be limited by the quality and performance of the hardware components in the CT system. Further, these hardware effects may be isolated by analyzing a linear reconstruction algorithm, for example FBP, which will not exhibit nonlinear distortion effects due to the reconstruction algorithm.
In a dose optimization of CT examinations, an NLD′ object image might be more useful than the NLD object image, as the NLD′ object image accounts for all changes in the image quality between two dose levels. In contrast, NLD object is more suited to an optimization procedure of a nonlinear noise reduction algorithm itself, as it isolates the distortion of the nonlinear noise reduction algorithm. Further, a reasonable uncertainty in the NLD′ object image was obtained at 128 repetitions of the noise levels used in this study. The same number of acquisitions acquired on an existing CT system using the lowest tube current of about 10 mA and a rotation time of 1 s will result in an averaged image representing 1280 mAs, which should be well approximated as noise-free. The size of the x-ray focal spot would have changed between acquisitions using 10 and 1280 mA, where the latter tube output is approximately the maximum for such a CT system. A change from a small to a large focal spot will cause degradation in the spatial resolution and will be analyzed as a nonlinear distortion in an NLD′ object image. Hence, an analysis of the nonlinear distortion using the NLD′ object image may best be performed with the same size of the focal spot. The available tube current range for a small x-ray focal spot can be between about 10 and 400 mA. Thus, if the NLD′ object image was acquired using these limits of the tube current and repeated to achieve an averaged image representing 1600 mAs using a rotation time of 1 s, the theoretic acquisition time of all images required for the calculation would have been <3 min (160 s + 4 s = 2 min 44 s) plus perhaps some tube cooling time.
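A quick check of the acquisition-time arithmetic above (values taken from the text; the variable names are illustrative):

```python
target_mas = 1600            # averaged image should represent 1600 mAs
rotation_time_s = 1.0
low_ma, high_ma = 10, 400    # tube current limits for the small focal spot

n_low = target_mas / (low_ma * rotation_time_s)    # 160 repetitions at 10 mA
n_high = target_mas / (high_ma * rotation_time_s)  # 4 repetitions at 400 mA
total_s = (n_low + n_high) * rotation_time_s       # 164 s = 2 min 44 s
print(n_low, n_high, total_s)
```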
It may not be possible to estimate an approximation for the NLD noise series as noise is randomly generated and cannot be easily approximated without the object being present. In the estimation of NLD noise , the noise is isolated before and after reconstruction, i.e., reconstructed without and with the object being present. Further, the noise sinograms are estimated by subtracting the average of the noisy sinograms. Hence, the estimation of the object will not be perfect but also consists of the noise variations originating from the sampled acquisitions. In practice, the test object will further be affected by a range of other types of variations, such as electronic noise, scatter, vibrations of the CT, and a polychromatic x-ray spectrum. However, as the systematic part (the estimation of the test object) is subtracted, only the random fluctuations (the estimation of the noise) are left, both before and after the reconstruction. Hence, the NLD noise will indicate the difference in how the reconstruction algorithm handles these fluctuations when an object is present or not. For a linear CT system, the variations will be reconstructed equally and NLD noise will be close to zero, which was indicated by the FBP algorithm in this study.
It may not be obvious how the content of the NLD object or NLD noise images should be interpreted. However, noise-dependent smoothing of the reconstructed object can be indicated by the structures of distortion in the NLD object or NLD noise image. Further, dark areas corresponding to the reconstructed object can indicate degradation of the contrast of these objects. Improved spatial resolution or contrast enhancement cannot be indicated as this type of image shows the difference when reconstructing an object in low versus high noise. Hence, it is unlikely that a noise reduction algorithm will increase the image quality as the noise is increased. Thus bright or dark regions in the NLD object or NLD noise images that do not correspond to the reconstructed object could indicate that an additional object or texture has been added to the image by the nonlinear noise reduction algorithm due to the noise or the reconstructed object. Accordingly, the NLD object and NLD noise images visualize undesirable effects of the algorithm. An ideal noise reduction algorithm would have generated distortion images without structures or unwanted alterations in the noise texture. However, it could be possible to approximate the CT system as linear if the NLD object and NLD noise only indicated stochastic variations. Further, such approximation may only be valid for the object used in the analysis. Although four CNRs were considered in this study, the NLD object and NLD noise images can be used to visualize the nonlinear properties of a noise reduction algorithm at any noise level, to indicate the noise dependence of the algorithm. A deep learning algorithm can train a neural network to reduce noise in high-noise images by comparing them to a low-noise image of the same object. The NLD image would probably show less distortion for the noise levels at which the algorithm was trained, and the distortion may be increased for the noise levels far from the trained levels. However, other types of noise reduction algorithms, such as model-based iterative reconstruction techniques, would have probably induced distortions similar to the algorithms tested in this study.
The distribution image of the nonlinear distortion was contaminated by noise, as each estimation of the distortion contained both distortion and noise. The systematic nonlinear distortion described by the NLD object did not vary much across space between the 5th- and 95th-percentile images. Hence, the difference between these images reflected the noise distribution more than the nonlinear distortion, as the Hamming filter for the FBP algorithm did not reduce noise to the same extent as the nonlinear noise reduction algorithms. However, the distortion variation across space could be related to the noise distribution and could indicate whether the amplitude of the nonlinear distortion was larger than the noise. Further, the more detailed analysis of the costa edge against the lung indicated that the systematic nonlinear distortion was comparable to the noise level. However, the nonlinear distortion and noise in a single image could be higher or lower at specific positions in space depending on the object and noise variations. Hence, the risk of a nonlinear noise reduction algorithm obscuring pathology due to distortion may still be difficult to assess. However, the NLD methodology might help with understanding many properties of various nonlinear algorithms in the future.
The method presented here, using NLD object and NLD noise images, was inspired by the methodology of DPS, which relates the distorted signal to the original signal, i.e., the object before reconstruction. 15,16 Such an analysis will handle all of the distortion effects of a CT system, including aliasing and other nonlinear distortions due to geometrical inconsistency. However, when analyzing nonlinear noise reduction algorithms, it may be better to exclude distortion effects that do not originate from the noise reduction algorithm. This was achieved in this study by relating the nonlinear distortion effect to that between two different noise levels after reconstruction. Hence, both reconstructed images have been affected by the same geometrical distortion. The DPS can also be estimated without geometrical distortion using a method similar to the present method, but that has yet to be tested. Further, this study showed how to separate the distortion of the signal from the distortion of the noise. Hence, the general theory of the DPS estimation applied in the image domain and on a typical CT image with anatomical structures will result in an image containing the sum of the NLD object and an average of the NLD noise series.
Solomon et al. 27 proposed a method that can assess noise properties using anthropomorphic phantoms and compared nonlinear noise reduced images with images reconstructed using FBP. Their method clearly visualized the noise reduction of a nonlinear algorithm to be dependent on the object by isolating the noise from the object after the reconstruction. Further, the subtraction of the noise magnitude of one image from another reconstructed with a different algorithm may show the location of the noise difference between the algorithms. However, the effects of nonlinear distortion may be altered or completely concealed by the difference in resolution or noise properties of the reconstruction algorithms being compared. Hence, even if the method is similar to the present method, the focus of the studies was not the same because, in such a method, the nonlinear effects are not isolated even if the compared algorithm is FBP, i.e., is linear. In contrast, the method described in this study isolates the nonlinear distortion using the same nonlinear algorithm. Thus the analysis is not influenced by or dependent on the performance of a second algorithm. Hence, the properties of the nonlinear algorithm may be analyzed independently. Further, the focus of the NLD object and NLD noise images is not to estimate noise reduction but the distortion effect caused by the nonlinear noise reduction algorithm.
The observed degradation of the resolution and contrast in nonlinear reconstructions as the noise increases is supported by previous findings using lesion simulation in patient images and by findings in studies using the task transfer function (TTF). 10,28 The lesion simulation study described how a nonlinear noise reduction algorithm of an existing CT system affected the reconstruction of a single simulated lesion at different noise levels. 28 The present method was developed to trace the same nonlinear effect but for the whole patient content. Further, the method applied on an existing CT system would not need to simulate objects or noise. In the case of the TTF method, it has recently been modified to be analyzed in clinical images and, in combination with the noise power spectrum, has been used to estimate the detectability index (d′) of specific tasks. 13,14 The d′ is useful for predicting the correlation between the image quality and dose reduction in the optimization of CT examinations. However, the appearance of the NLD object and NLD noise images may be a complement to the d′ estimation by reflecting the overall nonlinear effect visually. Thus it would indicate whether there is a risk that the diagnostic task has been affected by the noise reduction algorithm and needs further investigation. A numerical figure of merit could be developed from the NLD object and NLD noise images as a measure of the effect on diagnostic tasks. For example, such a figure of merit could analyze the correlation of the distortion to the object and may further be used to indicate whether the nonlinear noise reduction algorithm may generally be handled as a linear algorithm.
There are obvious limitations to this study. In addition to the lack of a specific figure of merit, the simulation of an ideal CT system was another limitation, as noise reduction algorithms on existing CT systems were not analyzed with the proposed method. Further, the investigated nonlinear noise reduction algorithms were only applied to the sinogram data and were not integrated in the updating procedure of the pixel values. However, the focus of this study was not to develop a nonlinear noise reduction algorithm but a method to visualize its effect. Furthermore, assessment on a patient should theoretically be possible, although neither convenient nor suitable with conventional CT systems. However, the first photon-counting CT systems have been installed at radiological departments around the world. The generation of NLD object and NLD noise images without the need for repeating the acquisition should be possible on these systems if equipped with a feature to time-integrate the acquisition at various intervals (similar to modern single photon energy CT systems), as a new noise realization will randomly be obtained at each time interval. Subtraction in the projection domain would still be needed to generate NLD noise images. Nevertheless, the use of anthropomorphic phantoms may be sufficient to gain a better understanding of how the anatomy of a patient and/or pathology is affected by nonlinear noise reduction algorithms in conventional CT systems. Furthermore, the present method may be applied to any imaging modality involving nonlinear denoising algorithms, such as compressed sensing in magnetic resonance imaging or deep learning reconstruction in positron emission tomography.
Conclusions
We have described a method of analyzing nonlinear noise reduction algorithms in CT imaging independently of other algorithms by estimating and visualizing the NLD object and NLD noise . The NLD object image describes how objects are distorted due to noise by visualizing the degradation of resolution and contrast of an arbitrary object, whereas the NLD noise series indicates how the noise is affected by the noise reduction of a nonlinear reconstruction algorithm in the presence of an object. The induced distortion may be both stochastic and correlated to the object, and analyzing the latter may be more critical. The absence of nonlinear distortion may be used as a measure of robustness of the denoising algorithm. Further analysis using the proposed method may improve our understanding of the effects of noise reduction algorithms on image quality and their dependence on anatomical structures and noise.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
Parental Concerns about the Health of Adolescents with Intellectual Disability: A Brief Report
Background. Parents of adolescents with intellectual disability are concerned about the future health and well-being needs of their children. Method. Qualitative data were collected as part of a cross-sectional descriptive study, and semi-structured interviews were conducted with 32 parents. The results were analyzed thematically. Results. Most parents discussed areas of their children's health which made them anxious about the future. These concerns were collated into four themes. Conclusion. The health and well-being themes were dependency, general health, challenging behaviours, and increasing support needs.
Introduction
Compared with the general population, adults with intellectual disability experience significant healthcare inequalities in areas including general health screening, mental health support, women's health screening, and oral healthcare services [1]; this equally applies to adolescents with intellectual disability [2,3]. The majority of Australian adolescents (11-19 years) with intellectual disability live with their parents, who have the responsibility for the healthcare of their adolescent child [4,5].
Parents are concerned about the health of their adolescents with intellectual disability [6][7][8][9], especially about what will happen when they can no longer care for them [10]. As adolescents transition out of specialist-based pediatric care, they move to the primary care system and general medical practitioners (GPs); this creates additional parental concerns about their child's health [11,12]. GPs have expressed concern at being expected to take on a similarly intensive role [13]. This study reports the major themes expressed by parents regarding the health of their adolescent with intellectual disability.
Method
Interviews with parents were undertaken as part of a six-month cross-sectional descriptive study that examined the effect of health interventions and collected both qualitative and quantitative data. Qualitative data were collected from adolescents with intellectual disability, their parents, and the adolescents' teachers. The other findings are reported elsewhere [3]. In a semi-structured interview, parents were asked about the three main health issues relevant to their adolescent in the next ten years. The dominant themes are discussed here.
Results
Thirty-two parents participated, of which 31 were mothers, with a mean age of 46 years, with 21% having tertiary qualifications. They were employed in a range of occupations including home duties, teaching, small business, retail, and nursing. Their adolescent children were described through GP notes and parent/teacher reporting as having mild (3/32), moderate (17/32), severe (11/32), and profound (1/32) intellectual disability. Only 33% (11/32) said that their children were strong and healthy and they had no concerns for the future, and 58% (18/32) said that there were particular areas of their children's health that made them anxious about the future, those being dependency, general health, challenging behaviour, and increasing support needs.
Theme One: Dependency.
Parents made practical observations about their children's capacity for independence in future health decisions: "He will always be very dependent on us, and the resources we provide and the professional advice we seek." A few parents spoke of independence and how the future was not such a worry for them: "Independence-he likes to have his own things. The (health) diary (the study intervention) is his way of telling people about his health without his mum having to talk all the time."

Theme Two: General Health.
Weight was most frequently named as the biggest challenge. There were additional concerns about management of medication, epilepsy management, "staying healthy mentally," mobility reduction over time, "getting herself to and from the doctor's when she is not feeling well and knowing when to go," "moving her from the child health to adult health system," and maintenance of health checks.
Theme Three: Challenging Behaviours.
Parents considered that others outside the school environment would not know about "behavioural issues related to his condition." One parent added that her main concern was "anger management," and another that "his behavioural issues may affect his social and daily life."

Theme Four: Increasing Support Needs.
Parents perceived that they will need support for their own health as they age: "Support structures-we will need support in place as he and I get older"; "Advocacy-someone else needs to be aware of his health needs"; "Someone else to know his normal health-related patterns would be helpful. As his primary care-provider I may not always be around."
Conclusion
These findings are not a comprehensive list of parental concerns, but they contribute to our common understanding of parental health concerns for the future of their adolescent with intellectual disability.
The Value of Magnetic Resonance Diffusion-Weighted Imaging and Dynamic Contrast Enhancement in the Diagnosis and Prognosis of Treatment Response in Patients with Epithelial Serous Ovarian Cancer
Simple Summary: Epithelial ovarian cancer is one of the greatest challenges for a gynecologist and oncologist, both in terms of diagnosis and treatment. Modern imaging techniques such as DWI or DCE MRI allow for better planning of the treatment strategy. This is related not only to a more precise localization of lesions, but also to the relationship between the values of DWI and DCE parameters and specific histological types of ovarian cancer. In our study, we demonstrated the previously suggested relationships between the values of DWI parameters and the types of ovarian cancer. We described the relationship with the results of immunohistochemical tests. We also showed a correlation of DWI and DCE values with time to relapse. We have made an attempt to describe such correlations in the group of patients treated with bevacizumab.

Abstract: Background. The aim of our study was to describe selected parameters of diffusion-weighted imaging (DWI) and perfusion dynamic contrast enhancement (DCE) MRI in primary tumors in patients with serous epithelial ovarian cancer (EOC), as well as in disease course prognosis and treatment response, including bevacizumab maintenance therapy. Materials and Methods. In total, 55 patients with primary serous EOC were enrolled in the study. All patients underwent preoperative MR imaging using a 1.5 T clinical whole-body MR system, and selected DWI and DCE parameters, namely the apparent diffusion coefficient (ADC), time to peak (TTP), and perfusion maximum enhancement (Perf. Max. En.), were measured. The data were compared with histopathological and immunohistochemistry results (with Ki67 and VEGF expression) and clinical outcomes. Results. Higher mean ADC values were found in low-grade EOC compared to high-grade EOC: 1151.27 vs. 894.918 (p < 0.0001). A negative correlation was found between ADC and Ki67 expression (p = 0.027), and between ADC and VEGF expression (p = 0.042). There was a negative correlation between TTP and PFS (p = 0.0019) and between Perf. Max. En. and PFS (p = 0.003). In the Kaplan–Meier analysis (log rank), a longer PFS was found in patients with ADC values greater than the median (p = 0.046). The Kaplan–Meier analysis showed a longer PFS (p = 0.0126) in the group with TTP below the mean value for this parameter in patients who received maintenance treatment with bevacizumab. Conclusions. The described relationships between PFS and DCE and DWI allow us to hope to include these parameters in the group of EOC prognostic factors. This aspect seems to be of particular interest in the case of the association of PFS with DCE values in the group of patients treated with bevacizumab.
Introduction
Epithelial ovarian cancer (EOC) is the fifth most common cancer in women. It is also the fourth leading cause of death from cancer because of the lack of discernible symptoms and effective screening tools [1,2]. Tumor prognosis depends on the use of optimal cytoreductive surgery and adjuvant platinum-based chemotherapy [3,4].
Surgical outcome in EOC is usually classified according to the amount of postoperative residual tumor. A resection is regarded as complete if no macroscopically visible tumor is left. If any visible tumor remains after surgery, it is classified according to its largest diameter. Operations that ended with residual disease of up to 10 mm in largest diameter were formerly classified as optimal debulking, whereas those resulting in any larger residual tumor were defined as suboptimal debulking [5,6].
Following the publication of the results of two phase III clinical trials (GOG 218 and ICON 7), the adjuvant treatment of advanced or high-risk early-stage EOC is as follows: six 3-weekly cycles of intravenous carboplatin (AUC 5 or 6) and paclitaxel (175 mg/m² of body surface area), with maintenance intravenous bevacizumab at a dose of 7.5 mg/kg of bodyweight continued for twelve further 3-weekly cycles (ICON 7) or 15 mg/kg of bodyweight during sixteen 3-weekly cycles (GOG 218) [7][8][9].
Transvaginal ultrasonography (TVUS) is the initial modality for investigating ovarian tumors, and the International Ovarian Tumor Analysis Group guideline can be used to estimate the malignancy risk of ovarian tumors [10]. According to the guidelines of the European Society of Uro-Genital Radiology (ESUR), the imaging modality of choice for the preoperative evaluation of such patients is abdomino-pelvic and chest computed tomography (CT) [11]. However, in some cases CT is not able to show very small and diffuse peritoneal implants or infiltration of the bowel wall or mesentery. In the pretreatment diagnosis of EOC, magnetic resonance imaging (MRI) can yield more information than CT. Compared to CT, MRI diffusion-weighted imaging (DWI) has shown promise in tumor staging and in predicting the aggressiveness of the tumor and the clinical outcome [12,13].
DWI, in combination with the apparent diffusion coefficient (ADC), brings new possibilities to the imaging of EOC, particularly in diagnosing intraperitoneal implants. According to recent studies, diffusion restriction is higher in intraperitoneal implants than in primary tumors [14]. Other studies confirm that ADC values correlate with vascular endothelial growth factor (VEGF), whose expression is higher in intraperitoneal implants than in primary tumors. There is also a reported inverse correlation between ADC values and the Ki67 protein, a proliferation marker [15].
Dynamic contrast-enhanced (DCE) MRI is used to improve the diagnostic accuracy of conventional MRI. Most DCE-MRI studies of ovarian tumors have targeted differentiating among benign, borderline, and malignant lesions [16]. There is some evidence that DCE parameters may be useful in differentiating between highly and less aggressive EOC [17]. Moreover, some studies have proposed the application of perfusion MRI as a prognostic study in EOC [17]. There are no unequivocal data on the relationship between the results of DWI and perfusion MRI in the primary tumor and disease progression.
The aim of our study is an attempt to describe the parameters of DWI and DCE MRI in primary tumors in patients with serous EOC. Additionally, we analyzed selected parameters of DWI and DCE of the primary tumor in early and advanced disease as well as in disease prognosis.
Study Protocol and Patient Population
A single-center prospective study was conducted at the Medical University of Warsaw in the 2nd Department of Obstetrics and Gynecology and the 2nd Department of Clinical Radiology. The inclusion criterion for the study was clinical suspicion of ovarian cancer on CT or TVUS. The exclusion criteria were contraindications to MRI with gadolinium contrast, current therapy of coexisting neoplasms, starting EOC chemotherapy before performing MRI, and surgery outside our center. The study included 55 women aged 30-78 years with primary serous EOC diagnosed in the final histopathological examination. The type and histological differentiation were assessed according to the WHO criteria of 2014, and serous EOC was classified into low-grade (LG) and high-grade (HG) EOC. The advancement of the disease was assessed according to the FIGO criteria (International Federation of Gynecology and Obstetrics).
Treatment Protocol
First-line treatment consisted of primary cytoreduction followed by chemotherapy. In patients disqualified from primary cytoreduction, an exploratory laparoscopy was performed to establish the histopathological diagnosis, followed by the neoadjuvant chemotherapy. Systemic treatment was continued after the postponed cytoreduction.
The adjuvant treatment consisted of six courses of intravenous carboplatin (AUC 5 or 6) and paclitaxel at 175 mg/m² of body surface area administered every three weeks. According to the results of the ICON 7 study, patients at high risk of relapse received maintenance treatment with bevacizumab at a dose of 7.5 mg/kg every three weeks for a total of 18 courses or until progression. Neoadjuvant chemotherapy consisted of the administration of 3 courses according to the above-mentioned scheme. After delayed cytoreduction, chemotherapy was continued for up to 6 or 8 courses, and in high-risk patients, bevacizumab was administered for a total of 18 courses or until progression. Patient characteristics and clinical-histopathological data are presented in Table 1.
MRI Protocol
All patients underwent MR imaging using a 1.5 T clinical whole-body MR system (MAGNETOM Avanto; Siemens AG, Erlangen, Germany).
The MRI protocol for the detection of pelvic and abdominal lesions contained turbo spin-echo (TSE) T2-weighted images (T2w), fat-suppressed T2-weighted (fsT2w), turbo inversion recovery magnitude (TIRM), diffusion-weighted echo planar imaging (DW-EPI), and pre- and postcontrast dynamic T1-weighted gradient echo (3D T1 GRE) sequences. The details of the applied MR imaging parameters are presented in Table 2. Axial DW images were acquired using the same multi-slice EPI sequence for all patients: 30 × 6 mm slices (pelvic part); 360 × 360 mm FoV; TR = 4250 ms; TE = 73 ms; with diffusion weightings (b-values) of 0, 50, 500, 1000, and 1500 s/mm². These parameters are shown in Table 2. Motion correction was completed automatically. Two radiologists experienced in pelvic MRI, blinded to the histological information, documented the character of the adnexal masses (one board-certified specialist with more than 15 years of experience and a specialist holding a European Diploma in Radiology certificate). On all DW images (with b-values of 0, 50, 500, 1000, and 1500 s/mm²), the ROI was a small circle 5-6 mm in diameter placed on the solid part of the primary tumor, avoiding the partial volume effect, areas of necrosis, and artifacts. ROIs were replicated from the DW image to the corresponding ADC map, and the measurement on the ADC map was recorded. T1WI (non-contrast and contrast-enhanced) and DCE sequence parameters for the dynamic analysis are presented in Table 2. ROIs were drawn on enhancement DCE images and replicated to DCE parameter maps. During DCE image acquisition, non-contrast images were acquired first, followed by contrast agent administration and continued image acquisition. Time to peak (TTP) and perfusion maximum enhancement (Perf. Max. En.) were measured. In all patients, Gadobutrol (Gadovist, Bayer Schering, Berlin, Germany) was administered as a bolus dose of 0.1 mmol/kg, immediately followed by a bolus of 20 mL of physiological saline (NaCl 0.9%).
DCE parameter maps were generated automatically using Workplace Station.
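For orientation, the ADC value behind such a map follows from a mono-exponential decay S(b) = S0 · exp(−b · ADC) fitted over the acquired b-values; the sketch below shows a simple log-linear fit. The ROI signal values are invented for illustration, and clinical ADC maps are generated by the scanner/workstation rather than by such a script.

```python
import numpy as np

b_values = np.array([0, 50, 500, 1000, 1500])               # s/mm^2, as in the protocol
roi_signal = np.array([900.0, 870.0, 620.0, 420.0, 300.0])  # assumed ROI mean signals

# log(S) = log(S0) - b * ADC  ->  straight-line fit in b
slope, intercept = np.polyfit(b_values, np.log(roi_signal), 1)
adc = -slope                                                # in mm^2/s
print(f"ADC = {adc * 1e6:.0f} x 10^-6 mm^2/s")
```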
Immunohistochemistry
The study included samples obtained from the primary tumor prior to the initiation of chemotherapy. In patients undergoing neoadjuvant chemotherapy, the material was obtained from tissue collected during laparoscopy. The tissue was embedded in paraffin and then cut into 5 µm thick sections. A histopathological examination was performed after staining with hematoxylin and eosin. In the Ki67 immunohistochemical study, the EnVision FLEX Mini Kit (High pH) was used, while in the VEGF study, the DAKO Monoclonal Mouse Anti-Human VEGF antibody (Clone VG1, 1:50) was used. The expression of Ki67 was assessed in the cell nucleus and VEGF in the epithelium and stroma. Ki67 was determined in all 55 patients, and VEGF in 51 patients. The result was reported as the percentage of cells showing staining.
Statistical Analysis
Dell Statistica (data analysis software system, version 13.1) and MedCalc (ver. 20.014, MedCalc Software Ltd., Acacialaan, 8400 Ostend, Belgium) were used for the statistical analysis. All continuous variables were assessed for normality using a one-sample Kolmogorov-Smirnov test, and the data were expressed as the mean ± standard deviation or median. Parametric t-tests for independent groups were used for testing the significance of differences between mean values because the data were normally distributed. All correlations were analyzed using a linear model with the Pearson correlation coefficient. For the survival analyses, imaging parameters (ADC, TTP, Perf. Max. En.) were dichotomized using the mean values as a cut-off. Recurrence-free survival (RFS) was defined as the interval between the date of surgery and the date of identified recurrence, and overall survival (OS) as the interval between the date of surgery and the date of death or the end of follow-up. The Kaplan-Meier method (log rank) was used for the univariate survival analysis. A p-value of <0.05 was considered statistically significant.
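As a hedged sketch of this workflow in open-source tools (lifelines and scipy stand in for Statistica/MedCalc), assuming a data frame with columns adc, ki67, pfs_months, and relapsed:

```python
import pandas as pd
from scipy.stats import pearsonr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def analyze(df: pd.DataFrame):
    # Pearson correlation, e.g., ADC vs. Ki67 expression
    r, p_corr = pearsonr(df["adc"], df["ki67"])

    # Dichotomize an imaging parameter at its mean, as done for ADC/TTP/Perf. Max. En.
    high = df["adc"] > df["adc"].mean()

    # Kaplan-Meier curve for one group and a log-rank test between groups (PFS)
    km = KaplanMeierFitter()
    km.fit(df.loc[high, "pfs_months"], event_observed=df.loc[high, "relapsed"])
    lr = logrank_test(df.loc[high, "pfs_months"], df.loc[~high, "pfs_months"],
                      event_observed_A=df.loc[high, "relapsed"],
                      event_observed_B=df.loc[~high, "relapsed"])
    return r, p_corr, lr.p_value
```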
Results
The inclusion criteria for the study were met by 55 patients, whose median age at diagnosis was 57 years (range 30-78). In total, 74.5% of the patients were in FIGO stages III and IV (n = 41).
Primary Tumor
All studies managed to visualize the primary tumor in which the ROI was located. The median of the greatest size of the primary tumor was 78 mm (range 60-139).
LG EOC was diagnosed in 16 patients, and HG EOC in 39 patients (Figures 1-3). A very high agreement was obtained both in the results of the two ADC measurements obtained by each radiologist, and in the comparison of the mean measurements between the radiologists. The intraclass correlation coefficient (ICC) for radiologist A's mean measurement was 0.966. The ICC for radiologist B's mean measurement was 0.955. Concordance between radiologists A and B for the mean of the first measurement was ICC = 0.932, and for the mean of the second measurement, ICC = 0.916 ( Figure 4).
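A hedged sketch of such an agreement analysis: pingouin's intraclass_corr is one common way to compute ICCs from long-format data. The toy values and the averaging of the two repetitions per radiologist below are illustrative, not the study's data.

```python
import pandas as pd
import pingouin as pg

# One ADC measurement per (lesion, radiologist, repetition), long format
long = pd.DataFrame({
    "lesion":      [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
    "radiologist": ["A", "A", "B", "B"] * 3,
    "adc":         [910, 905, 921, 915, 1150, 1142, 1160, 1149, 1010, 1004, 1021, 1012],
})

# Average the two repetitions per radiologist, then ICC between radiologists
mean_per_rater = long.groupby(["lesion", "radiologist"], as_index=False)["adc"].mean()
icc = pg.intraclass_corr(data=mean_per_rater, targets="lesion",
                         raters="radiologist", ratings="adc")
print(icc[["Type", "ICC"]])
```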
A significantly higher mean ADC value was found in low-grade EOC tumors compared to high-grade EOC tumors: 1151.27 vs. 894.918 (p < 0.0001). No differences were found in TTP (p = 0.87) or Perf. Max. En. (p = 0.43) between these histopathological diagnoses (Table 3).
MRI DWI and DCE Parameters and Immunohistochemistry
Examples of Ki67 and VEGF immunohistochemical staining are provided in Figures 5 and 6.
A significant negative correlation was found between ADC values and Ki67 expression (p = 0.027, r = −0.298; Figure 7), and a negative correlation (p = 0.042, r = −0.285) between ADC and VEGF expression in the primary tumor (Figure 8). No correlation was found between TTP or Perf. Max. En. values and Ki67 or VEGF expression.
Relapse of the Disease
In total, 34 patients (all FIGO stages) had a recurrence of the disease. The mean PFS was 17.6 months (range 0-40). There was a significant negative correlation between TTP and PFS values (p = 0.0019, r = −0.51) and between Perf. Max. En. and PFS (p = 0.003, r = −0.49). No significant relationship was found between ADC and PFS (p = 0.836) (Table 5, Figures 9 and 10).
No correlation was found between ADC (p = 0.12), TTP (p = 0.55), or Perf. Max. En. (p = 0.26) and OS. In the Kaplan-Meier analysis (log rank), a significantly longer PFS was found in the group of patients with ADC values greater than the median (p = 0.046). No such correlation was found in the Kaplan-Meier analysis for TTP (p = 0.19) or Perf. Max. En. (p = 0.39) (Figure 11).
Survival curves were also analyzed in the group of patients who received maintenance treatment with bevacizumab. A Kaplan-Meier analysis showed a longer PFS in patients with TTP values below the mean value for this parameter (p = 0.0126; Figure 12). In the case of Perf. Max. En. and ADC, no such correlation was found.
Discussion
Our study involving 55 patients showed significant correlations between ADC and the results of histopathological and immunohistochemical tests (Ki67, VEGF) of serous EOC, confirming its significance in predicting the course of the disease. We also showed that DCE parameters such as TTP and Perf. Max. En. correlate with PFS. We are probably the first to analyze and describe the correlation between DCE parameters and PFS in patients receiving maintenance treatment with bevacizumab.
EOC is the biggest problem faced by clinicians who treat cancers of the female genital organs. It is usually diagnosed in advanced stages III and IV. This especially concerns the serous type studied by us. According to the National Cancer Institute data, high-grade serous ovarian cancer is diagnosed in 51% of cases in stage III and in 29% in stage IV according to FIGO [18].
The standard of treatment for EOC is primary optimal or complete cytoreduction followed by platinum-based chemotherapy. The 2019 ESMO-ESGO consensus confirmed the role of cytoreductive surgery as a prognostic factor in EOC [19]. Patients operated on without leaving macroscopic disease, or with macroscopic disease up to 10 mm, have a better prognosis than patients with a residual tumor larger than 10 mm [20]. In patients in whom primary optimal cytoreduction cannot be performed, treatment is started with exploratory laparoscopy and neoadjuvant chemotherapy [21]. Hence the huge role of preoperative imaging tests, not only in determining the advancement of the disease, but also in qualifying patients for an appropriate treatment method [22]. According to the recommendations of the European Society of Urogenital Radiology (ESUR) from 2010, the recommended imaging modality for the management and initial preoperative staging is CT of the chest, abdominal cavity, and pelvis [11]. However, we know that a CT examination has a number of limitations, especially in the diagnosis of intraperitoneal dissemination, in particular in the case of small lesions without the presence of ascites [23,24]. It seems that MRI can bring new possibilities both in the differentiation of ovarian neoplasms and in the prognosis of the course of the disease. Previous reports have found that ADC values correlate with established immunohistochemical prognostic factors for ovarian cancer, such as the proliferation marker Ki67. In our research, we confirmed the negative correlation between Ki67 and ADC (r = −0.2981, p = 0.027). Lower ADC values and higher Ki67 values correspond to poorly differentiated EOC, which is associated with a worse prognosis. Thus, we confirmed the results obtained in the research by Lindgren et al. on EOC [15]. Similar negative correlations of ADC and Ki67 are also found in research on ductal breast cancer (r = −0.717 to r = −0.410, p < 0.001) [25], prostate cancer (r = −0.332, p < 0.05) [26], and rectal cancer (r = −0.555, p < 0.001) [27]. This dependence is indirectly confirmed by the correlation of ADC with the two types of EOC: low-grade and high-grade (type I and II). Greater diffusion restriction, and thus lower mean ADC values, were recorded in high-grade EOC tumors, i.e., tumors with lower differentiation and higher aggressiveness [28]. Our studies confirmed this observation, and the correlation of ADC with low-grade vs. high-grade EOC was significant (p < 0.0001). Currently, MRI gives more and more possibilities, with features such as FS-T2WI, DWI, CE-T1WI, and DCE, which allow for the differentiation of the two types of EOC [29,30].
VEGF is one of the most important cytokines responsible for angiogenesis in EOC. By binding to a cellular receptor, it is involved in the formation of new tumor vessels [31]. However, one of the first studies on VEGF in ovarian cancer found that in patients with advanced EOC, intense VEGF immunostaining was more often detected in peritoneal metastases than in primary tumors. VEGF immunostaining in primary as well as in metastatic lesions correlated neither with the response to chemotherapy nor with the clinical outcome. Therefore, the detection of VEGF in tissue samples failed to have a predictive or prognostic relevance for patients with advanced OC [32]. In the other study from that time, the authors concluded that VEGF-C, VEGF-D and VEGFR-3 play an important role in lymphatic spread and intraperitoneal tumor development in OC [33].
After about ten years, it was shown that VEGF and its receptor can be inhibited by the anti-VEGF antibody bevacizumab, used in the maintenance therapy of advanced EOC [34,35]. Its effectiveness was confirmed by the ICON 7 and GOG 218 studies mentioned in the introduction [7][8][9][36]. In our study, the correlation between ADC and VEGF protein expression in the primary tumor was negative (r = −0.2858, p = 0.04). The result differed from that obtained by Lindgren in one of the earlier studies, which found no correlation between ADC and VEGF in the primary tumor. On the other hand, the correlation of ADC with the three receptor types VEGFr-1 (r = 0.838, p = 0.001), VEGFr-2 (r = 0.764, p = 0.006), and VEGFr-3 (r = 0.627, p = 0.039), and with VEGFr-mRNA (r = 0.855, p = 0.001), was confirmed in intraperitoneal dissemination [15]. Similar to ours, negative correlations of ADC with the VEGF protein in the primary tumor have been shown in studies on prostate cancer (r = −0.714, p = 0.005) [26] and rectal cancer (r = −0.290, p = 0.005) [27].
When analyzing the recurrent disease, we showed an inverse correlation between PFS and the values of TTP (p = 0.0019) and Perf. Max. En. (p = 0.003) in the primary tumor.
Higher DCE values were associated with a shorter time to relapse. In the Kaplan-Meier analysis for the entire study group, we found differences in the probable time to relapse between groups with ADC values above and below the median for this parameter. Higher ADC values were associated with longer survival (p = 0.046). Perhaps this correlation is explained by the relationship between higher ADC values and a better differentiated neoplasm with a lower percentage of Ki67. However, no such differences in the probable PFS were confirmed for TTP and Perf. Max. En. values above and below the mean.
Lindgren, in a study from 2019, confirmed the difference in PFS curves for other DCE parameters, such as the contrast agent distribution volume (Ve) and plasma volume (Vp). For TTP, the study showed a longer PFS in the group where the value of this parameter was greater than the median (the opposite was true for Ve and Vp) [17]. Our analysis of the survival curves in the group of patients who received bevacizumab maintenance treatment seems interesting in this context. The Kaplan-Meier analysis showed a longer PFS for the group with TTP values lower than the mean for this parameter (p = 0.0126). We did not obtain such a correlation in the case of ADC, although a negative correlation of ADC with the VEGF protein was shown earlier. Our study seems to open up the topic of the correlation of DWI and DCE parameters with survival curves in the group of patients receiving maintenance treatment with bevacizumab. This topic requires further analysis.
Our work has several limitations. The first is the single-center nature of the study. The second limitation is the analysis of patients with serous EOC. It is true that this type accounts for over 75% of cases of this cancer, but in clinical practice we will encounter other types of EOC. The obtained parameters may then differ from those described in the study. However, narrowing the group to the serous type made it possible to standardize the study group. The third limitation is the small number of patients treated with bevacizumab. However, this form of maintenance treatment is used only in selected patients.
Conclusions
The correlation of DWI parameters with markers of proliferation (Ki67) and factors influencing angiogenesis such as VEGF in the tumor, as well as the significant correlation of ADC values with the EOC type (low-grade vs. high-grade), make MRI an excellent tool in the diagnosis of serous ovarian cancer. The described correlations between PFS and DCE and DWI give hope that parameters such as TTP, Perf. Max. En., or ADC can be included in the group of prognostic factors of EOC. These parameters seem to be of particular interest in the association of PFS with DCE values in the group of patients treated with bevacizumab. However, this requires further research.
A Vessel Schedule Recovery Problem at the Liner Shipping Route with Emission Control Areas
Liner shipping is a vital component of world trade. Liner shipping companies usually operate fixed routes and announce their schedules. However, disruptions at sea and/or at ports affect the planned vessel schedules. Moreover, some liner shipping routes pass through areas designated by the International Maritime Organization (IMO) as emission control areas (ECAs). IMO imposes restrictions on the type of fuel that can be used by vessels within ECAs. The vessel schedule recovery problem becomes more complex when disruptions occur at such liner shipping routes, as liner shipping companies must comply with the IMO regulations. This study presents a novel mixed-integer nonlinear mathematical model for the green vessel schedule recovery problem, which considers two recovery strategies: vessel sailing speed adjustment and port skipping. The objective aims to minimize the total profit loss endured by a given liner shipping company due to disruptions in the planned operations. The nonlinear model is linearized and solved using CPLEX. A number of computational experiments are conducted for a liner shipping route passing through ECAs. Important managerial insights reveal that the proposed methodology can assist liner shipping companies with efficient vessel schedule recovery, minimize the monetary losses due to disruptions in vessel schedules, and improve energy efficiency as well as environmental sustainability.
Background
Maritime transportation is a critical component of global trade. According to the United Nations Conference on Trade and Development (UNCTAD), more than 80% of the global trade tonnage and 70% of the global trade value are carried by oceangoing vessels around the world [1]. Over the past three decades, liner shipping, which involves transporting containerized cargoes for various customers, has become a significant part of maritime transportation [1]. The latter tendency has been predicted to be sustained. In liner shipping, a vessel transports containerized cargo to a set of ports based on a fixed schedule. Usually, liner shipping companies form a round trip and call at several ports during a voyage (the term "voyage" refers to a single round-trip). Moreover, their schedules and transit routes are published. According to the World Shipping Council [2], there are approximately 400 liner services around the world, which adopt a regular service frequency (mostly weekly). For a weekly service frequency, each port of call at the liner shipping route is visited the same day every week. Generally, the service network is designed by the liner shipping company at the strategic level of decision-making over a fixed time period [3]. This study explicitly accounts for the environmental regulations, enforced by IMO, and proposes two vessel schedule recovery strategies. The proposed nonlinear mathematical model is linearized and solved to optimality using CPLEX.
A number of computational experiments are conducted for the Asia-North Europe LL5 route, which is served by OOCL liner shipping company and passes through ECAs with the SO x emission control. Important managerial insights, which would be of interest to liner shipping companies, are revealed using the proposed mathematical model and the developed solution approach. The proposed methodology (i.e., the mathematical model and the solution approach) will assist liner shipping companies with efficient vessel schedule recovery, minimizing monetary losses endured as a result of disruptions in vessel schedules, improving energy efficiency and environmental sustainability throughout the transportation process. The latter aspects are considered as critical [12][13][14][15].
The rest of the manuscript is organized in the following manner. Section 2 presents a review of the state-of-the-art with a focus on vessel schedule recovery and green vessel scheduling. Section 3 provides a detailed description of the green vessel schedule recovery problem, investigated in this study, while Section 4 presents a mixed-integer nonlinear model for the problem. Section 5 presents the solution methodology, adopted in this study, while Section 6 describes in detail the computational experiments that were conducted and showcases the important managerial insights that were revealed. Section 7 summarizes the entire research effort and highlights potential future research directions.
Review of the Relevant Literature
The studies, collected from the state-of-the-art, were classified into two main categories: (a) vessel schedule recovery (studies that focus on disruptions in liner shipping and suggest various vessel schedule recovery strategies); and (b) green vessel scheduling (studies that propose various mathematical models and solution approaches, which address the environmental issues in vessel routing and scheduling). Both study categories are presented in the following sections of the manuscript.
Vessel Schedule Recovery
Various uncertainties, associated with the liner shipping operations, have received attention in the literature; however, only a few studies examined the vessel schedule recovery problem in liner shipping. Paul and Maloni [16] presented a nonlinear programming model for disruptions at seaports due to disasters, minimizing the total operational cost. The results from the numerical experiments, conducted for the North American container port network, indicated that single and multiple port shutdown scenarios incurred additional costs. Brouer et al. [4] conducted a pioneering research on vessel schedule recovery. The authors studied a mixed-integer vessel schedule recovery problem of NP-hard complexity to evaluate the effects of disruptions. The considered vessel schedule recovery strategies included: (i) swap port calls; (ii) omit a port of call; and (iii) adjust vessel sailing speeds. It was found that a 58% cost reduction was achieved when the liner shipping company decided to omit a port of call or swap ports of call, aiming to ensure timely cargo delivery.
Lee et al. [17] proposed a model that incorporated port time uncertainty and slow steaming in determining service reliability, fuel consumption, and emissions. The study underlined that delays at ports might require liner shipping companies to speed up in order to recover the vessel schedules. The decision to implement the latter strategy could increase the fuel consumption and the associated emissions. Li et al. [9] formulated the vessel schedule recovery problem and considered the following recovery options: (a) speeding up only; (b) port skipping only; and (c) port swapping. The results showed that if the delay was not too large, the option of speeding up was preferred for the vessel schedule recovery, as the vessel might not need to sail at the maximum speed to recover the schedule. Furthermore, port swapping and port skipping were found to be better alternatives with significant cost savings if the delay was large. It was also found that additional buffer time could yield cost savings for the recovered vessel schedules. Wang and Meng [18] and Qi and Song [19] highlighted that short segments of a given liner shipping route require more buffer time due to weak flexibility. Li et al. [10] focused on a real-time schedule recovery problem for a liner shipping service. Two types of uncertainties in liner shipping operations were defined: (1) regular uncertainties; and (2) disruptive events. The results from numerical experiments indicated that, for scenarios without earliest handling time constraints at ports, skipping the disrupted port might allow the liner shipping company to effectively recover the vessel schedule. Cheraghchi et al. [11] formulated the vessel schedule recovery problem as a bi-objective optimization problem, minimizing the total monetary losses and the total delay due to disruptions. The authors focused on a speed-based vessel schedule recovery. A total of six different multi-objective algorithms were applied to solve the problem. The results from computational experiments showed that the NSGA-II algorithm performed better than the other multi-objective metaheuristics used.
Green Vessel Scheduling
Although vessel scheduling and routing have received a lot of attention from researchers [20][21][22][23][24][25][26][27][28], only a few studies have addressed the vessel scheduling problem, taking into account the environmental concerns [29]. For example, Chang and Wang [30] conducted a comprehensive assessment of the approaches used in commercial maritime transportation to determine the optimal sailing speed of vessels. The results demonstrated that speed reduction was the most efficient for the cases with high unit fuel costs and low charter rates. Kontovas [31] presented a generalized formulation for the green vessel routing and scheduling problem and discussed the need for adopting more accurate models in order to estimate the vessel emissions and the fuel consumption. Psaraftis and Kontovas [32] investigated the factors that influence the choice of vessel sailing speed. Some of the critical factors in determining the vessel sailing speed included the unit fuel cost, state of the market, freight rates, inventory costs, and dependency of the fuel consumption on payload.
Fagerholt et al. [33] evaluated the impacts of environmental regulations in ECAs on sailing paths, sailing speed, and operational costs in maritime shipping. It was found that liner shipping companies might choose to sail along longer voyage legs in order to avoid sailing at a lower speed through ECAs. Fagerholt and Psaraftis [34] presented two speed optimization models for the vessels sailing through ECAs. The numerical computations revealed that the best alternative for the vessels would be sailing through the shortest route within the ECA in order to minimize the total fuel cost and maximize the total profit. Mansouri et al. [29] examined the use of multi-objective decision support tools for enhancing the environmental sustainability of maritime transportation. The authors indicated that simulation and optimization are promising approaches to address the emission reduction challenges in maritime shipping. The importance of modeling emissions in multi-objective vessel scheduling was also discussed by Dulebenets [35]. Song et al. [36] formulated a stochastic multi-objective vessel scheduling problem with uncertain port times, which included the following objectives: (i) minimization of the annual vessel operational costs; (ii) minimization of the average schedule unreliability; and (iii) minimization of the annual total CO 2 emissions. The numerical experiments indicated that about 5-10% reduction in CO 2 emissions could be achieved when either the operating cost is minimized or the schedule unreliability is minimized.
Dulebenets [37] extended the work, conducted by Dulebenets et al. [38], and assessed the benefits and the drawbacks of introducing restrictions on the emissions, produced by oceangoing vessels within ECAs. The study compared the existing IMO regulations regarding the fuel sulfur content while sailing through ECAs with an alternative environmental policy, which imposed restrictions on the amount of emissions, produced within ECAs, along with the existing IMO regulations. The results from computational experiments demonstrated that the emission restrictions reduced the SO 2 emissions by ≈40% but increased the total route service cost. Dulebenets [39] also focused on the green vessel scheduling problem, considering the liner shipping routes with ECAs. It was found that imposing the transit time constraints along with the emission restrictions could lead to significant changes in the vessel schedules and incur additional route service costs. Dulebenets [40] designed a mathematical model for green vessel scheduling, considering CO 2 emission costs of oceangoing vessels in sea and at ports of call due to container handling. The numerical experiments showed how changes in the CO 2 tax could affect vessel schedules.
Literature Summary and Contributions
A review of the literature revealed that the vessel schedule recovery problem in liner shipping has not received a lot of attention from researchers. A number of strategies have been proposed in the literature for the vessel schedule recovery. The most common strategies include: (a) vessel sailing speed adjustment; (b) port skipping; and (c) port swapping. Some studies have developed mathematical models, which incorporated one or more vessel schedule recovery strategies. On the other hand, green vessel scheduling has received growing attention from researchers. A number of vessel scheduling models, which account for the environmental issues, energy efficiency, environmental sustainability, and emissions produced, have been proposed in the past. From a detailed review of the literature, none of the existing studies on vessel schedule recovery has explicitly taken into account the environmental issues or modeled the liner shipping routes, which pass through ECAs and where additional regulations are imposed by IMO. Considering the importance of vessel schedule recovery and the growing attention of the community to environmental issues, this study aims to fill the gaps in the state-of-the-art by making the following contributions:
1. Present a novel mixed-integer nonlinear programming model for the green vessel schedule recovery problem, which can be applied to the liner shipping routes, passing through ECAs;
2. Explicitly account for the regulations, imposed by IMO at voyage legs within ECAs;
3. Consider two common strategies for the vessel schedule recovery (i.e., vessel sailing speed adjustment and port skipping);
4. Propose an exact solution methodology for solving the formulated mathematical model;
5. Reveal a set of managerial insights, which will be of interest to liner shipping companies, using the proposed mathematical model and the developed solution approach.
Problem Description
A detailed description of the vessel schedule recovery problem for the liner shipping route, passing through ECAs, and the main modeling assumptions are presented in this section of the manuscript.
Liner Shipping Route Description
A liner shipping route, which connects several ports of call, can be served either by one liner shipping company or by multiple liner shipping companies that form an alliance. Throughout this study, the liner shipping route, served by one liner shipping company, will be considered. Let P = {1, . . . , m 1 } be the set of ports, which will be visited by vessels of the considered liner shipping company. The vessels, which are dispatched for service of the liner shipping route, sail from port p to p + 1 along voyage leg p. The sequence of ports that have to be visited is assumed to be known. Generally, each port of the liner shipping route is called once. However, in certain cases, a port could be visited more than once during the voyage. Liner shipping companies typically determine the sequence of port visits (which is also referred to as "port rotation" or "port network" in the liner shipping literature) and the port service frequency at the strategic and tactical levels, respectively [3]. Consider an illustrative port rotation in which the Ports of New York, Norfolk, and Savannah are visited once, while the Ports of Le Havre and Antwerp are visited twice. In order to model multiple visits to the same ports of call during the voyage, two dummy nodes are included in the port rotation graph to represent the additional visits. The port rotation graph, which will be used for modeling of the liner shipping route, is presented in Figure 2. Two dummy nodes, "Le Havre*" and "Antwerp*", are added to the graph to represent the second visits to the Ports of Le Havre and Antwerp.
Vessel Service at Ports
The arrival time of vessels at each port along the liner shipping route is planned by the liner shipping company at the tactical level. Let τ arr p , p ∈ P (hours) denote the planned arrival time of vessels at port p. Generally, the marine container terminal (MCT) operator at each port of call and the liner shipping company coordinate the arrival time of each vessel. The planned arrival time at each port falls within the time window (TW), which was negotiated between the MCT operator and the liner shipping company. The duration of the arrival TW at port p is defined based on the start of TW at port p (τ st p , p ∈ P-hours) and the end of TW at port p (τ f t p , p ∈ P-hours). Some of the factors, which may affect the planned arrival TW duration at each port of call, include the availability of MCT berthing positions, the availability of handling equipment, the vessel arrival frequency, and others [35,[39][40][41]. The vessels, which arrive at a port of call before the start of negotiated arrival TW, are moored at a dedicated waiting area before service. The planned waiting time of vessels at port p + 1 (τ wait p+1 , p ∈ P-hours) can be calculated based on the planned vessel departure time from port p (τ dep p , p ∈ P-hours), the planned sailing time between ports p and p + 1 (τ sail p , p ∈ P-hours), and the start of TW at port p + 1 using the following relationships:
τ wait p+1 = max(0, τ st p+1 − (τ dep p + τ sail p)) ∀p ∈ P, p < m 1
τ wait 1 = max(0, τ st 1 + ϕ · V − (τ dep m1 + τ sail m1))
Note that estimation of the vessel waiting time at the first port of call (i.e., τ wait 1 ) also includes the product of the agreed port service frequency (ϕ-hours) and the total number of vessels deployed for service of ports along the liner shipping route (V-vessels), which represents the planned total turnaround time of vessels at the given liner shipping route (see Section 3.4 of the manuscript for more details). Introduction of the latter term is required in order to capture a round voyage of vessels and return at the first port of call [35,39,40].
Furthermore, throughout the vessel schedule design at the tactical level, the liner shipping company negotiates the handling rate with the MCT operator at each port of call of the given liner shipping route. It is assumed that the MCT operator at port p offers a set of handling rates H p = {1, . . . , m 2 p }, p ∈ P to the liner shipping company for service of the arriving vessels. Based on the selected handling rate, the MCT operator has to provide a specific handling productivity (pr ph , p ∈ P, h ∈ H p , measured in twenty-foot equivalent units per hour-TEUs/hour) throughout the vessel service at a given port of call. The container demand (d port p , p ∈ P-TEUs) is assumed to be known. The planned handling time of vessels at port p (τ hand p , p ∈ P-hours) can be calculated based on the container demand at that port and the requested handling productivity using the following relationship:
τ hand p = Σ h∈Hp (d port p / pr ph) · x ph ∀p ∈ P
where: x ph , p ∈ P, h ∈ H p -is the handling rate selection parameter (=1 if handling rate h is requested at port p; = 0 otherwise). The handling cost (c port ph , p ∈ P, h ∈ H p -USD/TEU) will be imposed to the liner shipping company for requesting handling rate h at port p to serve the vessels. Note that disruptions at MCTs (e.g., disruptions due to labor strikes, natural disasters resulting in port infrastructure damages, port congestion) may further cause an increase in the planned waiting and handling times of vessels.
The actual waiting and handling times of vessels will be further referred to as τ wait p , p ∈ P (hours) and τ hand p , p ∈ P (hours), respectively. Furthermore, disruptions at preceding ports of call may cause deviations from the planned arrival time of vessels at a given port of call. The actual vessel arrival time will be denoted as τ arr p , p ∈ P (hours). The vessel arrival delay (τ del p , p ∈ P-hours) at port p can be calculated based on the actual and planned vessel arrival times using the following relationship:
τ del p = max(0, τ arr p − τ̄ arr p) ∀p ∈ P
where τ̄ arr p denotes the planned arrival time. Delays in the planned arrival time of vessels at ports of call are not desirable, as they may negatively affect both liner shipping and MCT operations. Therefore, an additional delayed vessel arrival time cost (c del p , p ∈ P-USD/hour) will be imposed to the liner shipping company for any deviations from the planned arrival time of vessels at ports of call.
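To make this port-time bookkeeping concrete, the following minimal Python sketch computes planned waiting, handling, and delay values for a toy example; the function names, the max(0, ·) forms, and all numbers are illustrative assumptions rather than code from the study.

```python
# Minimal sketch of the port-time relationships described above.
# All names, numbers, and the max(0, .) forms are illustrative assumptions.

def waiting_time(tw_start_next, dep_prev, sail_prev):
    """Planned waiting at port p+1: vessel moors if it arrives before the TW start."""
    return max(0.0, tw_start_next - (dep_prev + sail_prev))

def handling_time(demand_teu, productivity_teu_per_h):
    """Planned handling time at a port under the requested handling rate."""
    return demand_teu / productivity_teu_per_h

def arrival_delay(actual_arrival, planned_arrival):
    """Delay that triggers the delayed-arrival cost c_del per hour."""
    return max(0.0, actual_arrival - planned_arrival)

# Toy example: vessel departs port p at hour 100, sails 48 h,
# and the TW at port p+1 opens at hour 152.
print(waiting_time(tw_start_next=152, dep_prev=100, sail_prev=48))   # 4.0 h
print(handling_time(demand_teu=1200, productivity_teu_per_h=100))    # 12.0 h
print(arrival_delay(actual_arrival=160, planned_arrival=152))        # 8.0 h
```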
Fuel Consumption
One of the common assumptions used for the vessel schedule design at the tactical level is a homogeneous nature of the vessel fleet, which is deployed to serve the ports of the given liner shipping route [35,39,40,42,43]. Homogeneous vessels have the same major technical specifications (e.g., structure of the main and auxiliary vessel engines, capacity of the main and auxiliary vessel engines, maximum possible sailing speed). However, the assumption of homogeneous vessels may not be fully accurate in real-life liner shipping operations for certain routes, as the vessels of exactly the same types may have differences in their technical characteristics due to vessel repairs that were conducted in the past, age, utilization, and other factors. Furthermore, a large number of different factors may impact the fuel consumption, including but not limited to sailing speed, payload, vessel geometric characteristics, and weather conditions. However, the vessel sailing speed has been identified as the key factor that directly influences the fuel consumption [6,18,35]. Based on the available liner shipping literature [6], the planned fuel consumption per nautical mile (nmi) at voyage leg p ( f p , p ∈ P-tons/nmi) can be calculated using the following relationship:
f p = α · (s p)^γ ∀p ∈ P
where: α, γ-are the fuel consumption function coefficients; s p , p ∈ P-is the planned sailing speed adopted for vessels at voyage leg p (knots). The fuel consumption coefficients (α, γ) are typically estimated based on the historical data, which contain the information regarding the fuel consumption and sailing speed for the vessel fleet, serving a given liner shipping route. The planned sailing time at voyage leg p can be calculated based on the length of that voyage leg (d leg p , p ∈ P-nmi) and the planned sailing speed using the following relationship:
τ sail p = d leg p / s p ∀p ∈ P
Note that disruptions in sea (e.g., inclement weather, mechanical failure of vessel engines, inexperience/errors of the vessel crew) may further cause a decrease in the planned sailing speed of vessels. Hence, the actual sailing speed at a given voyage leg (s p , p ∈ P-knots) would be lower than the planned sailing speed: s p < s̄ p , p ∈ P. A reduction of the vessel sailing speed will decrease the fuel consumption and may result in the actual fuel cost savings. However, reduction in the vessel sailing speed will increase the actual sailing time at a given voyage leg (τ sail p , p ∈ P-hours) and cause a delayed arrival at the next port of call.
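As a quick illustration of how strongly speed drives fuel burn under a power-law relationship of this kind, consider the following hedged Python sketch; the coefficient values are invented for illustration and are not the study's calibration.

```python
# Power-law fuel consumption per nautical mile, f(s) = alpha * s**gamma.
# ALPHA and GAMMA below are illustrative values, not the paper's calibration.
ALPHA, GAMMA = 0.003, 2.0

def fuel_per_nmi(speed_knots):
    return ALPHA * speed_knots ** GAMMA   # tons/nmi

def sailing_time(leg_nmi, speed_knots):
    return leg_nmi / speed_knots          # hours

leg = 480.0  # nmi, invented leg length
for s in (16, 20, 24):
    print(s, round(fuel_per_nmi(s) * leg, 1), "t over", sailing_time(leg, s), "h")
# Raising speed from 16 to 24 knots cuts the sailing time by a third (30 h -> 20 h)
# but increases the fuel burned on the leg by about 125% under gamma = 2.
```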
Port Service Frequency
Throughout the vessel schedule design at the tactical level, the liner shipping company needs to ensure that the agreed port service frequency is met at ports of the liner shipping route, which can be achieved using the following relationship [39,40,42,44]:
ϕ · V = Σ p∈P (τ sail p + τ wait p + τ hand p) (7)
where the planned sailing, waiting, and handling times are used on the right-hand side. The planned total vessel turnaround time at the liner shipping route (i.e., the time taken by a vessel to sail from the port of origin, visit other ports of the liner shipping route, and return to the port of origin) is estimated based on the right-hand side of Equation (7). The planned total vessel turnaround time is computed based on the following three components: (a) the planned total sailing time of vessels; (b) the planned total waiting time of vessels at ports of call; and (c) the planned total port handling time. However, due to disruptions at ports and in sea, the actual total vessel turnaround time (VTT-hours) can deviate from the planned total vessel turnaround time. The actual total vessel turnaround time can be calculated using the following relationship:
VTT = Σ p∈P (τ sail p + τ wait p + τ hand p) (8)
where the actual sailing, waiting, and handling times are used on the right-hand side.
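A small sketch can verify the frequency identity above for a candidate plan; all times below are illustrative assumptions.

```python
# Planned turnaround must equal (service frequency) x (fleet size):
# phi * V = sum over ports of (sail + wait + handle) under the plan.
phi = 168                           # weekly service frequency, hours
sail = [120, 200, 150, 180, 190]    # illustrative planned leg times, hours
wait = [4, 6, 2, 3, 5]              # planned waiting times, hours
hand = [12, 18, 10, 14, 12]         # planned handling times, hours

turnaround = sum(sail) + sum(wait) + sum(hand)   # 926 h
V = turnaround / phi                             # vessels implied by the plan
print(turnaround, V)  # 926, ~5.51
# In practice the schedule is built so that the turnaround is an exact
# multiple of phi; a fractional value here flags an infeasible plan
# (round up to 6 vessels and re-time the schedule).
```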
Vessel Sailing Speed Selection
The planned vessel sailing speed (s p , p ∈ P-knots) is determined by the liner shipping company at each voyage leg of a given liner shipping route at the vessel scheduling stage [42,45]. The sailing speed of vessels at each voyage leg along the liner shipping route is generally affected by a wide range of factors. According to Wang et al. [45], the lower bound on the vessel sailing speed (s min -knots) is selected to minimize the wear of the vessel's engine. The upper bound on the vessel sailing speed (s max -knots) is generally determined by the vessel's engine capacity [7]. As discussed earlier, disruptions in sea may significantly affect the planned vessel sailing speed. The latter will further result in fluctuations of the fuel consumption of vessels and cause late arrivals at the consecutive ports of call. However, the vessel sailing speed adjustment may serve as an efficient means for recovery of the vessel schedules from disruptive events (as will be discussed more in Section 3.8 of the manuscript).
Container Inventory Cost
Throughout the vessel schedule design, it is assumed that the liner shipping company knows the number of containers (i.e., TEUs) to be transported at each voyage leg of the liner shipping route. Based on the liner shipping literature, the planned container inventory cost (CIC-USD) can be estimated based on the unit cost of shipping a TEU (c inv -USD/TEU/hour), the total number of TEUs transported at voyage legs (d sea p , p ∈ P-TEUs), and the planned total vessel sailing time at voyage legs of the liner shipping route [39,40,42] using the following relationship:
CIC = c inv · Σ p∈P d sea p · τ sail p (9)
where the planned sailing times are used on the right-hand side. Disruptions in sea and/or at ports will affect the planned vessel sailing speed at voyage legs. Specifically, at the voyage legs that experience disruptions, the planned vessel sailing time is expected to increase due to reduction in sailing speed (s p < s̄ p , p ∈ P). A vessel schedule recovery can be achieved by increasing the vessel sailing speed at the consecutive voyage legs. The latter action will reduce the planned sailing time. The actual container inventory cost (CIC-USD) can be estimated by replacing the planned vessel sailing time in Equation (9) with the actual sailing time of vessels at voyage legs (τ sail p , p ∈ P-hours) as follows:
CIC = c inv · Σ p∈P d sea p · τ sail p (10)
where the actual sailing times are used on the right-hand side.
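The planned versus actual inventory cost comparison can be sketched as follows; the unit cost, demands, and sailing times are invented for illustration.

```python
# Container inventory cost: unit cost x TEUs on board x hours at sea, per leg.
# All figures are illustrative assumptions.
c_inv = 0.5                      # USD per TEU per hour
teu_at_sea = [8000, 7600, 8200]  # d_sea per leg
sail_planned = [120, 200, 150]   # planned leg times, hours
sail_actual  = [130, 200, 140]   # leg 1 delayed, leg 3 sped up to recover

cic_planned = c_inv * sum(d * t for d, t in zip(teu_at_sea, sail_planned))
cic_actual  = c_inv * sum(d * t for d, t in zip(teu_at_sea, sail_actual))
print(cic_planned, cic_actual, cic_actual - cic_planned)
# Speeding up on leg 3 offsets most of the inventory-cost increase from leg 1.
```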
Existing International Maritime Organization Regulations on Emissions
In order to improve energy efficiency (i.e., cost-effective utilization of fuel) and environmental sustainability of liner shipping, this study considers the existing IMO environmental policies under "Sulphur oxides (SO x ) and Particulate Matter (PM)-Regulation 14", which limits the emissions of SO x and PM by vessels in the areas that are designated as ECAs [8]. Specifically, regulation 14 requires that the sulfur content in fuel, used by the vessels that sail through ECAs, must not exceed 0.10% m/m [8].
In order to comply with the existing IMO regulations, this study assumes that the liner shipping company will use two types of fuel (namely, marine gas oil (MGO) and heavy fuel oil (HFO)) at the considered liner shipping route. It is assumed that the liner shipping company will use a more costly MGO with the sulfur content of 0.10% only when sailing through ECAs (SC p = 0.10% ∀p ∈ P * , where P * -is a set of voyage legs, passing through ECAs; SC p , p ∈ P-is the sulfur content of fuel used at voyage leg p), while HFO with the sulfur content of 3.50% will be used at other voyage legs of the liner shipping route (SC p = 3.50% ∀p ∈ P 0 , where P 0 -is a set of voyage legs, passing outside ECAs). However, without loss of generality, the proposed methodology will be still applicable after 1 January 2020 (when the vessels that are sailing outside ECAs would be required to use fuel with the sulfur content of 0.50%) by setting an appropriate value for the unit fuel cost.
Note that along with fuel switching the liner shipping company may consider other alternatives to meet the ECA sulfur regulations, which include, but not limited to, installation of scrubbers and utilization of liquefied natural gas (LNG) [34]. In order to comply with the IMO regulations on nitrogen oxide (NO x ) pollutants, produced by vessels in certain ECAs (e.g., the North American area and the United States Caribbean Sea), this study assumes that the diesel engines of vessels, deployed by the liner shipping company, satisfy the Tier III limits. The latter requirement can be relaxed for the liner shipping routes, which pass through ECAs in the Baltic Sea and the North Sea (which have the SO x emission control only).
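A hedged sketch of the leg-level fuel switching logic described above follows; the prices and the ECA leg set are illustrative assumptions.

```python
# Legs inside ECAs must burn low-sulfur MGO; all other legs may burn HFO.
# Prices and the ECA leg set are illustrative assumptions.
C_MGO, C_HFO = 500.0, 200.0          # USD/ton
eca_legs = {1, 2, 3, 4}              # P*: legs with SOx emission control

def unit_fuel_cost(leg):
    return C_MGO if leg in eca_legs else C_HFO

def leg_fuel_cost(leg, fuel_tons_per_nmi, leg_nmi):
    return unit_fuel_cost(leg) * fuel_tons_per_nmi * leg_nmi

print(leg_fuel_cost(2, 0.8, 300))   # ECA leg:     500 * 0.8 * 300 = 120000
print(leg_fuel_cost(7, 0.8, 300))   # non-ECA leg: 200 * 0.8 * 300 = 48000
```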
Vessel Schedule Disruptions and Recovery
Disruptions may occur in sea and/or at ports of call. Disruptions at ports may be caused by labor strikes, natural disasters that result in port infrastructure damages, port congestion, and other events. On the other hand, disruptions in sea may occur due to inclement weather, mechanical failure of vessel engines, inexperience/errors of the vessel crew, and other events. Let y port p = 1, p ∈ P if a disruption occurred at port p (=0 otherwise). Let y sea p = 1, p ∈ P if a disruption occurred at voyage leg p (=0 otherwise). Note that both y port p , p ∈ P and y sea p , p ∈ P are treated as parameters in this study, as the considered vessel schedule recovery problem for the liner shipping route, passing through ECAs, is an operational-level decision problem, and the liner shipping company already knows where disruption(s) occurred. The liner shipping company also has the information regarding the expected duration of a disruptive event. The expected duration of a disruptive event at ports will be further referred to as δ port p , p ∈ P (hours), while the expected vessel sailing speed change due to a disruptive event in sea will be denoted as δ sea p , p ∈ P (knots). In order to recover the vessel schedule due to disruptions at the liner shipping route, passing through ECAs, this study considers the following recovery strategies: (1) vessel sailing speed adjustment; and (2) port skipping. The liner shipping company may decide to increase the vessel sailing speed at voyage legs of the liner shipping route in order to recover the vessel schedule and compensate for the delays, incurred as a result of disruptions in sea and/or at ports of call. Denote ∆ sea p , p ∈ P (knots) as the vessel sailing speed adjustment at voyage leg p of the liner shipping route. As discussed earlier, increasing vessel sailing speed will further increase the fuel consumption by the vessels, deployed for service of the given liner shipping route, which will increase the actual total fuel cost. It is assumed that the liner shipping company will not be able to adjust the vessel sailing speed at the voyage leg that experienced a disruptive event. If a disruptive event substantially affected the liner shipping operations at a certain voyage leg of the liner shipping route, even selection of the maximum vessel sailing speed at the consecutive voyage legs may not be sufficient to fully recover the vessel schedule. Figure 3 presents an example of the vessel schedule recovery strategy based on the vessel sailing speed adjustment. The horizontal and vertical axes of the time-space network represent the time within the planning horizon (in days) and the port position, respectively. A vessel, scheduled to visit three ports (the Ports of Le Havre, New York, and Norfolk), experienced a disruption after departure from the Port of Le Havre, which resulted in a late arrival at the Port of New York. In order to recover the schedule and maintain the planned arrival time at the Port of Norfolk, the liner shipping company selected a higher sailing speed at the voyage leg between the Ports of New York and Norfolk.
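The speed-up arithmetic behind the Figure 3 example can be sketched as a toy calculation; the model itself selects speeds via optimization, and the numbers and the s_max value below are assumptions.

```python
# Toy version of the speed-adjustment recovery illustrated in Figure 3:
# given a delay inherited at a port, find the speed on the next leg that
# restores the planned arrival, if one exists within the speed bounds.
def recovery_speed(leg_nmi, planned_speed, delay_h, s_max=25.0):
    planned_time = leg_nmi / planned_speed
    target_time = planned_time - delay_h
    if target_time <= 0 or leg_nmi / target_time > s_max:
        return None          # even sailing at s_max cannot absorb the delay
    return leg_nmi / target_time

print(recovery_speed(300, 20, 2))   # ~23.1 knots recovers a 2 h delay
print(recovery_speed(300, 20, 6))   # None: a 6 h delay exceeds what s_max allows
```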
In order to recover the schedule, the liner shipping company may decide not to skip port p and endure the delay. Let x skip p , p ∈ P be the port skipping decision variable (=1 if port p is skipped; =0 otherwise). However, if the duration of a disruptive event at a given port of call is expected to be significant, the liner shipping company will have to skip that port in order to reduce the additional liner shipping route service costs due to the disruptive event. Figure 4 presents an example of port skipping due to a disruptive event, where the vessel, scheduled to visit the Port of Le Havre, the Port of New York, the Port of Norfolk, the Port of Savannah, and the Port of Antwerp, respectively, skips the Port of Norfolk due to a disruption and visits only the Port of Le Havre, the Port of New York, the Port of Savannah, and the Port of Antwerp. If the liner shipping company decides to skip the port, which experiences a disruptive event, the sequence of visits to the consecutive ports of the port rotation is assumed to remain unchanged. If the liner shipping company decides to skip a given port of call due to a disruptive event, it will not be able to unload the import containers and load the export containers at that particular port. A port skipping cost (c skip p , p ∈ P-USD) is imposed to the liner shipping company for skipping the port affected by a disruption.
The penalty may be imposed for every single skip or over a certain time period (e.g., quarterly or annually). The latter aspect is generally negotiated between the MCT operators and the liner shipping company in the corresponding contractual agreements. The port skipping cost is introduced to compensate the MCT operator for the unmet demand, reserved handling equipment, as well as reserved storage yard space.
The vessel schedule recovery strategies considered (i.e., vessel sailing speed adjustment and port skipping) can be implemented by the liner shipping company to recover the vessel schedule from disruptions in sea and/or at ports of call. Depending on the effects of a disruptive event, the liner shipping company may be willing to use both strategies (if they allow reducing the additional liner shipping route service costs due to that disruptive event). However, if the cost of implementing the aforementioned vessel schedule recovery strategies is substantial, the liner shipping company may decide to endure the effects of a disruptive event without application of any vessel schedule recovery actions.
Decisions
The decision problem, examined in this study, can be classified as the operational-level problem in liner shipping and will be referred to as the green vessel schedule recovery problem. The major decisions that need to be addressed in this problem by the liner shipping company are as follows: (1) identify the voyage legs of the liner shipping route, where the vessel sailing speed adjustment will be a favorable alternative to compensate for the delays, caused by a disruptive event; (2) determine whether energy efficient vessel schedule recovery strategies would be favorable to compensate for the delays, caused by a disruptive event (i.e., a limited increase in the vessel sailing speed will not significantly increase the actual fuel consumption and the associated cost, but may not be sufficient to recover the delays); (3) determine whether the vessel sailing speed adjustment would be a favorable alternative at the voyage legs, passing through ECAs; (4) skip a port of call due to a disruption or wait until the port recovers from that disruption and is able to provide service of vessels; (5) select effective vessel schedule recovery strategies, aiming to reduce the vessel arrival delays at ports of the liner shipping route. The overall objective of the liner shipping company is to effectively recover the vessel schedule and minimize the total profit loss as a result of disruptions in sea and/or at ports of call.
Mathematical Model
This section of the manuscript presents a mixed-integer nonlinear programming model for the green vessel schedule recovery problem (the model will be referred to as GVSRP), which takes into account the existing IMO regulations at the voyage legs that pass through ECAs. The GVSRP mathematical model guarantees that low-sulfur fuel is used when sailing through ECAs. The nomenclature, which will be adopted throughout the manuscript, is also presented in this section.
Nomenclature
Sets
P = {1, . . . , m 1 } set of ports to be visited (ports)
H p = {1, . . . , m 2 p }, p ∈ P set of available handling rates at port p (handling rates)
Decision Variables
∆ sea p ∈ R ∀p ∈ P vessel sailing speed adjustment at voyage leg p (knots)
Auxiliary Variables
s p ∈ R+ ∀p ∈ P actual sailing speed of vessels at voyage leg p (knots)
τ arr p ∈ R+ ∀p ∈ P actual arrival time of vessels at port p (hours)
τ wait p ∈ R+ ∀p ∈ P actual waiting time of vessels at port p (hours)
τ hand p ∈ R+ ∀p ∈ P actual handling time of vessels at port p (hours)
τ dep p ∈ R+ ∀p ∈ P actual departure time of vessels from port p (hours)
τ sail p ∈ R+ ∀p ∈ P actual sailing time of vessels at voyage leg p that connects ports p and p + 1 (hours)
f p ∈ R+ ∀p ∈ P actual fuel consumption at voyage leg p (tons/nmi)
τ del p ∈ R+ ∀p ∈ P vessel arrival delay at port p (hours)
VTT ∈ R+ actual total vessel turnaround time (hours)
REV ∈ R+ actual total revenue to be generated by the liner shipping company (USD)
PHC ∈ R+ actual total port handling cost (USD)
LAC ∈ R+ actual total late vessel arrival cost (USD)
FCC ∈ R+ actual total fuel cost (USD)
CIC ∈ R+ actual total container inventory cost (USD)
TP ∈ R+ actual total profit to be generated by the liner shipping company (USD)
Parameters
m 1 ∈ N number of ports to be visited (ports)
m 2 p ∈ N ∀p ∈ P number of available handling rates at port p (handling rates)
τ arr p ∈ R+ ∀p ∈ P planned arrival time of vessels at port p (hours)
τ st p ∈ R+ ∀p ∈ P start of TW at port p (hours)
τ f t p ∈ R+ ∀p ∈ P end of TW at port p (hours)
ϕ ∈ N agreed port service frequency (hours)
V ∈ N total number of vessels deployed for service of ports along the liner shipping route (vessels)
d port p ∈ R+ ∀p ∈ P container demand at port p (TEUs)
pr ph ∈ R+ ∀p ∈ P, h ∈ H p handling productivity at port p under handling rate h (TEUs/hour)
x ph ∈ B ∀p ∈ P, h ∈ H p =1 if handling rate h is requested at port p (=0 otherwise)
d leg p ∈ R+ ∀p ∈ P length of voyage leg p (nmi)
α, γ ∈ R+ fuel consumption coefficients
s p ∈ R+ ∀p ∈ P planned vessel sailing speed at voyage leg p (knots)
s min ∈ R+ minimum vessel sailing speed (knots)
s max ∈ R+ maximum vessel sailing speed (knots)
d sea p ∈ R+ ∀p ∈ P number of containers transported at voyage leg p (TEUs)
y port p ∈ B ∀p ∈ P =1 if a disruptive event occurred at port p (=0 otherwise)
y sea p ∈ B ∀p ∈ P =1 if a disruptive event occurred at voyage leg p (=0 otherwise)
δ port p ∈ R+ ∀p ∈ P expected duration of a disruptive event at port p (hours)
δ sea p ∈ R ∀p ∈ P expected vessel sailing speed change due to a disruptive event at voyage leg p (knots)
c cargo p ∈ R+ ∀p ∈ P freight rate for shipping a TEU from port p to port p + 1 (USD/TEU)
c port ph ∈ R+ ∀p ∈ P, h ∈ H p handling cost at port p under handling rate h (USD/TEU)
c del p ∈ R+ ∀p ∈ P delayed vessel arrival cost at port p (USD/hour)
c f p ∈ R+ ∀p ∈ P unit fuel cost when sailing at voyage leg p (USD/ton)
c oper ∈ R+ vessel operational cost (USD/hour)
c inv ∈ R+ unit container inventory cost (USD/TEU/hour)
c skip p ∈ R+ ∀p ∈ P cost for skipping port p due to a disruptive event (USD)
VOC ∈ R+ planned total vessel operational cost (USD)
TP ∈ R+ planned total profit to be generated by the liner shipping company (USD)
Green Vessel Schedule Recovery Problem (GVSRP):
Subject to:
s p ≥ s min + δ sea p · y sea p ∀p ∈ P (14)
τ arr p+1 = τ dep p + τ sail p ∀p ∈ P, p < m 1 (17)
In the proposed GVSRP mathematical model, the objective function (11) aims to minimize the total profit loss as a result of disruptions in sea and/or at ports of call for the liner shipping route, passing through ECAs.
Constraint set (12) computes the vessel sailing speed for the recovered vessel schedule at each voyage leg of the given liner shipping route, passing through ECAs. It is assumed that the vessel sailing speed adjustment strategy cannot be used for the vessel schedule recovery by the liner shipping company at the voyage legs, where a disruption occurs in sea (i.e., the vessel sailing speed cannot be increased at the voyage legs, experiencing disruptions, to reduce the delays). Constraint sets (13) and (14) define the limits for the vessel sailing speed at each voyage leg of the liner shipping route, passing through ECAs. Constraint set (24) guarantees that the liner shipping company can skip a port of the given liner shipping route, passing through ECAs, only if that particular port experienced a disruption. Constraint set (25) calculates the actual total vessel turnaround time. Constraint sets (26)-(32) estimate the individual cost components of the objective function (11) for the GVSRP mathematical model, which include the following: (i) the actual total revenue; (ii) the actual total port handling cost; (iii) the actual total late vessel arrival cost; (iv) the actual total fuel cost; (v) the actual total container inventory cost; (vi) the total vessel operational cost; and (vii) the actual total profit for the recovered vessel schedule. Note that the unit fuel cost c f p , p ∈ P (USD/ton) varies when the vessel is sailing at voyage legs within and outside ECAs. Based on the existing IMO regulations, the liner shipping company is mandated to use a more expensive low-sulfur fuel when sailing through ECAs (c f p = c MGO ∀p ∈ P * , where c MGO -is the unit cost of MGO). On the other hand, the liner shipping company will switch to cheaper HFO fuel at the voyage legs outside ECAs (c f p = c HFO ∀p ∈ P 0 , where c HFO -is the unit cost of HFO).
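To show how the cost components combine into the objective, here is a hedged sketch of the profit-loss computation; the figures are invented, and the additive form TP = REV - PHC - LAC - FCC - CIC - VOC is an assumption consistent with components (i)-(vii) above, not the model's exact equations.

```python
# Profit loss = planned total profit - actual total profit, where (assumed)
# TP = REV - PHC - LAC - FCC - CIC - VOC. All figures are illustrative.
def total_profit(rev, phc, lac, fcc, cic, voc):
    return rev - phc - lac - fcc - cic - voc

tp_planned = total_profit(rev=9.0e6, phc=1.2e6, lac=0.0,   fcc=1.5e6, cic=1.8e6, voc=0.9e6)
tp_actual  = total_profit(rev=8.6e6, phc=1.1e6, lac=0.2e6, fcc=1.7e6, cic=1.9e6, voc=0.9e6)
print(tp_planned - tp_actual)   # the profit loss the recovery plan tries to minimize
```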
Solution Methodology
The proposed GVSRP mathematical model is nonlinear due to constraint set (15), which was used for estimating the actual vessel sailing time at each voyage leg, and constraint set (16), which was used for estimating the actual fuel consumption at each voyage leg. In order to linearize constraint set (15), the actual vessel sailing speed s p , p ∈ P was replaced with its reciprocal v p = 1/s p ∀p ∈ P. Similarly, the planned vessel sailing speed s̄ p , p ∈ P will be replaced with its reciprocal v̄ p = 1/s̄ p ∀p ∈ P. Also, the expected vessel sailing speed change due to a disruptive event at a given voyage leg (δ sea p , p ∈ P, measured in knots) should be adjusted accordingly. Denote δ̇ sea p , p ∈ P as the expected vessel sailing speed change due to a disruptive event at a given voyage leg, measured in knots −1 (i.e., converted to the reciprocal of the vessel sailing speed change for consistency). Denote ∆̇ sea p , p ∈ P as the vessel sailing speed adjustment at a given voyage leg, measured in knots −1 .
Let the fuel consumption function, which was computed based on the reciprocal of the vessel sailing speed v p , p ∈ P, be represented by FC p , p ∈ P (tons/nmi). Throughout this study, the piecewise linear static secant approximation will be adopted to linearize the GVSRP mathematical model. For static piecewise linear secant approximations, the number of secant lines is predefined [40,45,46]. The piecewise function FC pk , p ∈ P, k ∈ K approximates the nonlinear function FC p , p ∈ P using a defined number of linear segments, where K = {1, . . . , r} represents the set of linear segments in the piecewise function. Let b pk = 1 if segment k is selected to approximate the fuel consumption function at voyage leg p of the liner shipping route (=0 otherwise). Denote st k , k ∈ K and ed k , k ∈ K as the vessel speed reciprocal values at the start of linear segment k and at the end of linear segment k, respectively. Let SL k , k ∈ K and IN k , k ∈ K be the slope of linear segment k and the intercept of linear segment k, respectively. Denote M 1 and M 2 as sufficiently large positive numbers. The mixed-integer nonlinear GVSRP mathematical model can be further reduced to a mixed-integer linear mathematical model (the model will be referred to as GVSRPL) as follows.
Linearized Green Vessel Schedule Recovery Problem (GVSRPL):
Subject to: constraint sets (17)-(28) and (30) of the GVSRP mathematical model, along with constraint sets (34)-(42).
In the proposed GVSRPL mathematical model, the objective function (33) aims to minimize the total profit loss as a result of disruptions in sea and/or at ports of call for the liner shipping route, passing through ECAs. Constraint set (34) computes the reciprocal of the vessel sailing speed for the recovered vessel schedule at each voyage leg of the given liner shipping route, passing through ECAs. Constraint sets (35) and (36) define the limits for the reciprocal of the vessel sailing speed at each voyage leg of the liner shipping route, passing through ECAs. Constraint set (37) calculates the vessel sailing time based on the reciprocal of the vessel sailing speed for the recovered vessel schedule at each voyage leg of the given liner shipping route, passing through ECAs. Constraint set (38) ensures that only one linear segment will be selected to approximate the fuel consumption function for the recovered vessel schedule at each voyage leg of the liner shipping route, passing through ECAs. Constraint sets (39) and (40) define the ranges for the reciprocal of the vessel sailing speed, when a given linear segment k is selected to approximate the fuel consumption function, at each voyage leg of the liner shipping route, passing through ECAs. Constraint set (41) estimates the fuel consumption based on the reciprocal of the vessel sailing speed for the recovered vessel schedule at each voyage leg of the given liner shipping route, passing through ECAs. Constraint set (42) estimates the actual total fuel cost.
Based on the preliminary numerical experiments, it was found that the GVSRPL mathematical model can be solved to the global optimality using CPLEX for the realistic-size liner shipping routes in a reasonable computational time. Therefore, development of the approximate solution approaches is not required for the GVSRPL mathematical model, and CPLEX will be further used as a solution approach. Note that increasing the number of linear segments in the piecewise approximation for the fuel consumption function typically enhances its accuracy but may cause an increase in the computational time, required to solve the GVSRPL mathematical model (due to increasing number of variables in the GVSRPL mathematical model). The latter tradeoff will be further investigated throughout the numerical experiments (see Appendix A that accompanies the manuscript for more details).
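To make the segment-count and accuracy tradeoff tangible, the following Python sketch builds K secant segments for the fuel consumption function and evaluates the approximation at one speed; the coefficients, speed bounds, and K are illustrative assumptions, and the study's own procedure was implemented in MATLAB.

```python
# Static piecewise secant approximation of the fuel consumption
# FC(v) = alpha * (1/v)**gamma on v in [1/s_max, 1/s_min].
# ALPHA, GAMMA, the speed bounds, and K are illustrative assumptions.
ALPHA, GAMMA = 0.003, 2.0
S_MIN, S_MAX, K = 15.0, 25.0, 4

def fc(v):                      # fuel per nmi as a function of the speed reciprocal
    return ALPHA * (1.0 / v) ** GAMMA

lo, hi = 1.0 / S_MAX, 1.0 / S_MIN
step = (hi - lo) / K
segments = []
for k in range(K):
    st, ed = lo + k * step, lo + (k + 1) * step
    slope = (fc(ed) - fc(st)) / (ed - st)       # SL_k: secant slope
    intercept = fc(st) - slope * st             # IN_k: secant intercept
    segments.append((st, ed, slope, intercept))

# Approximate FC at some v by the segment containing it:
v = 1.0 / 20.0
st, ed, sl, ic = next(s for s in segments if s[0] <= v <= s[1])
print(fc(v), sl * v + ic)  # exact vs. secant value
# FC is convex in v, so the secant overestimates; increasing K shrinks the
# gap at the cost of more binary variables b_pk in the GVSRPL model.
```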
Computational Experiments
This section of the manuscript presents the numerical experiments, which were performed to exhibit how the proposed GVSRPL mathematical model could be used for decision making by liner shipping companies for the liner shipping routes, passing through ECAs, in case of disruptive events. All the numerical experiments in this study were performed using a Dell Inspiron AMD FX-9800P processor with 16 GB of RAM. The MATLAB 2016a software [47] was used to develop a procedure for generating the piecewise secant approximations, which were further applied to linearize the fuel consumption function. The General Algebraic Modeling System [48] was used to encode the GVSRPL mathematical model and solve it to the global optimality with CPLEX. The following aspects are further discussed in this section: (1) input data generation; and (2) managerial insights.
Input Data Generation
This study considered the Asia-North Europe LL5 liner shipping route, passing through ECAs. The liner shipping route is served by OOCL liner shipping company [41] and is presented in Figure 5. Note that Figure 5 was developed based on the data, provided by OOCL [41]. The route connects Asia, the Arabian Sea, the Red Sea, North Africa, Malta, and North Europe. As illustrated in Figure 5, the Asia-North Europe LL5 liner shipping route passes through the North Sea and the English Channel, which are designated as ECAs with the SO x emission control. The port rotation for the Asia-North Europe LL5 liner shipping route includes 14 ports of call, where the Port of Le Havre (France) is visited twice. Each one of the ports is scheduled to be visited by vessels on a weekly basis. The distances between the consecutive ports, measured in nautical miles, were obtained from the world seaports catalogue [49].
In order to conduct the computational experiments for this study, the data from the liner shipping literature and the MCT operations literature [6][7][8][9][10][11][37][38][39][40][50][51][52][53][54][55] were used to generate the parameter values for the GVSRPL mathematical model. The adopted parameter values are presented in Table 1. The start of TW at each port of the port rotation was estimated based on a mathematical relationship between the start of TW at the preceding port, the upper and lower bounds of the vessel sailing speed, and the length of a voyage leg between the consecutive ports as follows: τ st p+1 = τ st p + d leg p / U[s min ; s max ] ∀p ∈ P (hours). The notation U will be further adopted for the values that are drawn from a set of uniformly distributed pseudorandom numbers within a specified range. The planned handling time of vessels at port p was calculated as follows: τ hand p = Σ h∈Hp (d port p / pr ph) · x ph ∀p ∈ P (hours). The negotiated handling rate (i.e., the value of parameter x ph , p ∈ P, h ∈ H p ) was selected randomly from the available handling rates at each port. The planned arrival time of vessels at port p + 1 was set based on the following relationship: τ arr p+1 = τ arr p + τ hand p + d leg p / s p ∀p ∈ P (hours). Different cases of disruption occurrence in sea and at ports of call will be modeled throughout the numerical experiments for the considered liner shipping route (see Section 6.2 of the manuscript for more details).
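The time-window and arrival-time bootstrapping just described can be sketched in a few lines; the leg lengths, speeds, demands, and productivities below are invented for illustration.

```python
import random

# Sketch of the input-data generation described above. All numbers invented.
random.seed(42)
S_MIN, S_MAX = 15.0, 25.0
legs = [480, 620, 300, 550]          # d_leg, nmi
speeds = [20.0, 22.0, 18.0, 21.0]    # planned s_p, knots
demand = [1200, 900, 1500, 1100]     # d_port, TEUs
prod = [100, 120, 90, 110]           # negotiated handling productivity, TEUs/h

tw_start, arr = [0.0], [0.0]
for p in range(len(legs) - 1):
    # TW start at the next port: previous TW start plus leg time at a random speed
    tw_start.append(tw_start[p] + legs[p] / random.uniform(S_MIN, S_MAX))
    # Planned arrival at the next port: arrival + handling + sailing at planned speed
    arr.append(arr[p] + demand[p] / prod[p] + legs[p] / speeds[p])

print([round(t, 1) for t in tw_start])
print([round(t, 1) for t in arr])
```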
Note that the unit cost of HFO fuel was originally assumed to be $c^{HFO} = 200$ USD/ton, while the unit cost of MGO fuel was set to $c^{MGO} = 500$ USD/ton. However, an additional sensitivity analysis will be conducted for the unit cost of HFO and the unit cost of MGO (details will be presented in the following sections of the manuscript). The port skipping inconvenience coefficient was originally set to $\mu = 1.10$. However, an additional sensitivity analysis will be conducted for the port skipping inconvenience coefficient (details will be presented in the following sections of the manuscript).

(1) Case 1: no disruptions at sea and at ports. This is an ideal condition, as there are no disruptions at sea, which may significantly affect the planned vessel sailing speed at voyage legs of the considered liner shipping route. Also, there are no disruptions at ports of call, which may cause delays in vessel service. The vessels visit each port of the port rotation as planned.
(2) Case 2: disruptions at sea and at ports within ECAs. There is a strike at the Port of Antwerp (Belgium) with an expected duration of $\delta^{port}_{4} = 50$ hours. Also, there is a disruption at sea due to inclement weather at voyage leg "4", connecting the Port of Antwerp (Belgium) and the Port of Le Havre (France), which is expected to reduce the planned vessel sailing speed at voyage leg "4" by $\delta^{sea}_{4} = -4.0$ knots. Note that the Port of Antwerp (Belgium) and the Port of Le Havre (France) are located within ECAs, where the liner shipping company must use low-sulfur MGO fuel only. Note that the ports, connected by voyage legs "1" to "4", are within the areas that are designated as ECAs, while the other ports are outside the ECAs.
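To make the two disruption mechanisms concrete, the following sketch shows one way such events could be imposed on a planned schedule. It reuses the names from the previous sketch; the 0-based indices are illustrative placeholders, not the actual positions of the affected leg and port in the LL5 rotation.

```python
# Stylized encoding of the Case 2 disruptions: delta_sea reduces the attainable
# sailing speed at a voyage leg, while delta_port adds a service delay at a port.
delta_sea = {3: -4.0}    # inclement weather: -4.0 knots at voyage leg "4"
delta_port = {3: 50.0}   # strike: 50 extra hours at the Port of Antwerp

def disrupted_leg_time(p, planned_speed, d_leg):
    """Sailing time at leg p after the weather-induced speed reduction."""
    attainable_speed = planned_speed + delta_sea.get(p, 0.0)
    return d_leg[p] / attainable_speed

def disrupted_departure(p, arrival_time, handling_time):
    """Departure from port p after adding any strike-related delay."""
    return arrival_time + handling_time + delta_port.get(p, 0.0)
```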
Table 2. The considered unit fuel cost values for the generated problem instances.

Instance   Unit HFO Cost (USD/ton)   Unit MGO Cost (USD/ton)
1          200                       500
2          250                       550
3          300                       600
4          350                       650
5          400                       700
6          450                       750
7          500                       800
8          550                       850
9          600                       900
10         650                       950

Sensitivity of the GVSRPL mathematical model to changes in the unit fuel costs (both HFO and MGO) is further presented in this section of the manuscript with a primary emphasis on the following two aspects: (a) port skipping decisions; and (b) vessel sailing speed and sailing time decisions.
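The instance grid in Table 2 follows a simple pattern, reproduced below for convenience: HFO rises in 50 USD/ton steps from 200 USD/ton, and MGO is kept 300 USD/ton above HFO in every instance.

```python
# Reproduce the unit fuel costs of Table 2: HFO increases by 50 USD/ton per
# instance, and MGO is always 300 USD/ton more expensive than HFO.
instances = [(i + 1, 200 + 50 * i, 500 + 50 * i) for i in range(10)]
for inst, c_hfo, c_mgo in instances:
    print(f"Instance {inst}: HFO = {c_hfo} USD/ton, MGO = {c_mgo} USD/ton")
```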
Port Skipping Decisions
The port skipping decisions were retrieved for each one of the considered problem instances and each one of the three disruption cases using the GVSRPL mathematical model. Throughout the numerical experiments, it was found that the port skipping decisions were not affected by changes in the unit cost of HFO and MGO. The primary factor influencing the port skipping decisions was determined to be the disruption type. The port skipping decisions for all the considered disruption cases are presented in Figure 6. In disruption case 1 (no disruptions occur at sea and at ports within and outside ECAs), it was observed that the vessels did not skip any port of call (see Figure 6a). Hence, the liner shipping company tended to adhere to the planned vessel schedule even after increasing the unit cost of HFO and MGO. On the other hand, in disruption case 2 (disruptions occur at sea and at ports within ECAs), skipping the Port of Antwerp (Belgium) was found to be the most favorable decision for the liner shipping company in order to recover the vessel schedule (see Figure 6b). As for disruption case 3 (disruptions occur at sea and at ports within and outside ECAs), skipping the Port of Antwerp (Belgium) and the Port of Le Havre (France) was found to be the most favorable decision for the liner shipping company in order to recover the vessel schedule (see Figure 6c). Therefore, the results from the conducted numerical experiments demonstrate that an increasing number of disruptive events along the given liner shipping route caused significant changes in the vessel schedule and required the liner shipping company to implement more radical recovery strategies (i.e., port skipping rather than vessel sailing speed adjustment). Moreover, increasing unit fuel cost further imposes limitations on the implementation of the vessel sailing speed adjustment strategy, since increasing unit fuel cost and increasing vessel sailing speed are expected to significantly increase the actual total fuel cost. The latter will reduce the actual total profit to be generated by the liner shipping company.
Vessel Sailing Speed and Sailing Time Decisions
The average vessel sailing speed adjustment was calculated for each one of the generated problem instances and disruption cases 2 and 3 using the GVSRPL mathematical model, and the results are presented in Figure 7. Furthermore, the average vessel sailing speed was estimated for each one of the considered problem instances and disruption cases 1, 2, and 3 using the GVSRPL mathematical model, and the results are presented in Figure 8. Note that the vessel sailing speed adjustment was not computed for disruption case 1, as no disruptions at sea and at ports were modeled for the latter case. Although changes in the average vessel sailing speed were observed after increasing the unit cost of HFO and MGO even for disruption case 1 (see Figure 8), the vessel sailing speed adjustment was not used as the vessel schedule recovery strategy. Along with the average vessel sailing speed adjustment and the average vessel sailing speed, the total vessel sailing time was retrieved for each one of the generated problem instances and disruption cases 1, 2, and 3 using the GVSRPL mathematical model, and the results are presented in Figure 9.
The numerical experiments demonstrate that the average vessel sailing speed was generally lower for disruption cases 2 and 3 as compared to disruption case 1, which further increased the total vessel sailing time. The latter pattern can be supported by the fact that disruptive events at certain voyage legs caused a substantial reduction in the vessel sailing speed (i.e., the vessel sailing speed values under disruption cases 2 and 3). Even application of the vessel sailing speed adjustment strategy did not allow the liner shipping company to reach the planned vessel sailing speed (i.e., the vessel sailing speed values under disruption case 1). Furthermore, an increase in the unit cost of HFO and MGO typically reduced the average vessel sailing speed for all the considered disruption cases. Such a reduction can be justified by the fact that the liner shipping company aimed to decrease the actual fuel consumption and the associated cost. Moreover, an increase in the unit cost of HFO and MGO also resulted in a reduction of the average vessel sailing speed adjustment for the recovered vessel schedules under disruption cases 2 and 3. However, some fluctuations in the average vessel sailing speed can be noticed from one problem instance to another, which can be explained by the fact that the arithmetic average vessel sailing speed does not consider the length of voyage legs. On the other hand, clearer patterns were recorded for the total vessel sailing time, as it accounts not only for the vessel sailing speed but also for the length of voyage legs: $\tau^{sail}_{p} = d^{leg}_{p}/s_{p} \ \forall p \in P$ (hours). Higher total vessel sailing time values were generally obtained for the case when disruptions occur at sea and at ports within and outside ECAs (i.e., disruption case 3), as compared to the case when disruptions occur at sea and at ports within ECAs only (i.e., disruption case 2). Hence, an increasing number of disruptive events along the given liner shipping route caused significant changes in the vessel schedule and increased the total vessel sailing time. The scope of the numerical experiments also included a detailed assessment of the effects of changing port skipping cost on the recovered vessel schedules. More information regarding the latter analysis is provided in Appendix B that accompanies this manuscript.
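The averaging caveat noted above is easy to verify numerically. The sketch below contrasts the arithmetic average speed with the distance-weighted effective speed implied by the total sailing time, using two hypothetical voyage legs.

```python
# Two hypothetical voyage legs: a short fast leg and a long slow leg.
d_leg = [100.0, 900.0]   # nautical miles
s = [20.0, 15.0]         # sailing speeds (knots)

arithmetic_avg = sum(s) / len(s)                    # 17.5 knots
total_time = sum(d / v for d, v in zip(d_leg, s))   # 5 + 60 = 65 hours
effective_speed = sum(d_leg) / total_time           # ~15.38 knots

# The arithmetic average overstates the pace actually achieved over the route,
# because it ignores that most of the distance is sailed at the lower speed.
print(arithmetic_avg, round(total_time, 1), round(effective_speed, 2))
```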
Concluding Remarks
Liner shipping has maintained steady growth over the years. However, disruptions at sea and at ports of call significantly affect the planned vessel schedules and result in negative externalities for liner shipping companies, including monetary losses. Effective vessel schedule recovery is critical to reduce monetary losses. The vessel schedule recovery problem becomes more complex when disruptions occur at the liner shipping routes passing through emission control areas (ECAs), as liner shipping companies must comply with the established International Maritime Organization (IMO) regulations. This study presented a novel mixed-integer nonlinear mathematical model for the green vessel schedule recovery problem at the liner shipping route, which passes through ECAs and where additional regulations are imposed by IMO on the fuel sulfur content. The objective of the proposed model aimed to minimize the total profit loss, endured by a given liner shipping company due to disruptions in the planned operations. Two of the most common vessel recovery strategies were considered, including vessel sailing speed adjustment and port skipping. The nonlinear model was linearized and solved to global optimality by applying CPLEX.
A number of computational experiments were conducted for the Asia-North Europe LL5 liner shipping route, served by the OOCL liner shipping company and passing through ECAs. It was found that an increasing number of disruptive events along the given liner shipping route generally resulted in significant vessel schedule changes and required the liner shipping company to implement more radical recovery strategies (i.e., port skipping became preferential over vessel sailing speed adjustment). Furthermore, increasing unit fuel cost imposed limitations on the implementation of the vessel sailing speed adjustment strategy, since increasing unit fuel cost and increasing vessel sailing speed would substantially increase the actual total fuel cost. The findings also indicate that the port skipping decisions could be significantly affected by the port skipping cost for the cases when disruptions occur at sea and at ports within and outside ECAs. The computational experiments showcase that the proposed mathematical model and the developed solution methodology can assist liner shipping companies with efficient vessel schedule recovery, minimize the monetary losses due to disruptions in vessel schedules, and improve energy efficiency as well as environmental sustainability.
This study can be extended in several dimensions that include, but are not limited to, the following: (1) applying the proposed methodology to the liner shipping routes passing through ECAs with SOx, NOx, and PM emission control; (2) modeling other types of emissions (e.g., nitrogen oxides (NOx), carbon monoxide (CO), non-methane volatile organic compounds (VOCs)); (3) evaluating potential re-routing options for the vessels that sail through ECAs; (4) modeling stochastic disruptive events at sea and at ports of call, where the expected vessel sailing speed change at sea and the expected disruption duration at ports of call are uncertain; (5) considering alternative vessel recovery strategies in the presented mathematical model (e.g., port skipping with cargo diversion to other ports, port swapping, handling rate adjustment); (6) collecting the operational data from liner shipping companies in order to derive a more comprehensive fuel consumption function, which accounts for sailing speed, payload, and vessel geometric characteristics; and (7) evaluating alternative methods for linearizing the presented mathematical model (e.g., discretization, outer approximations, second-order cone programming).
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
A set of computational experiments was conducted to investigate the relationship between the number of linear segments in the piecewise function, the accuracy of the fuel consumption function approximation, and the computational time required to solve the GVSRPL mathematical model. A total of 19 scenarios were developed by varying the number of linear segments in the piecewise function from 2 to 100 (see Table A1). The examples of piecewise approximations for the scenarios with 1 segment, 2 segments, 5 segments, and 10 segments are illustrated in Figure A1. It can be observed that the accuracy of approximation for FC(v) generally increases with an increasing number of linear segments. The GVSRPL mathematical model was solved for each scenario with a specified number of linear segments. It was assumed that no disruptions occurred at sea and at ports throughout the analysis at this stage. The results retrieved from the numerical experiments (see Table A1) include: (i) the scenario ID; (ii) the number of linear segments, |K|; (iii) the GVSRPL objective function value, Z; (iv) the value of the nonlinear objective function at the solution provided by GVSRPL, Z*; (v) the objective gap $\nabla = (Z^* - Z)/Z^*$; and (vi) the mean value of CPU time over 10 replications. A substantial increase in the GVSRPL computational time was observed when the number of linear segments was greater than 30, without any significant change in the objective gap. Therefore, a piecewise function with 30 linear segments will be further adopted throughout this study for the approximation of the fuel consumption function and the analysis of the managerial insights.
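To illustrate the segment-count trade-off examined in this appendix, the sketch below builds a secant-based piecewise linear approximation of a fuel consumption curve and reports the maximum relative approximation gap for several segment counts. The cubic form FC(v) = a·v**3 and the coefficient value are assumptions made here for illustration (a common shape in the liner shipping literature), not the exact function used in the paper.

```python
import numpy as np

a = 0.012                     # hypothetical coefficient (tons/hour at speed v in knots)
s_min, s_max = 15.0, 25.0     # sailing speed bounds (knots)

def fc(v):
    # Assumed cubic fuel consumption function FC(v).
    return a * v ** 3

def piecewise_fc(v, n_segments):
    # Breakpoints split [s_min, s_max] into n_segments linear pieces; each piece
    # interpolates FC between its two endpoints (a secant approximation).
    bp = np.linspace(s_min, s_max, n_segments + 1)
    k = min(np.searchsorted(bp, v, side="right") - 1, n_segments - 1)
    v0, v1 = bp[k], bp[k + 1]
    return fc(v0) + (fc(v1) - fc(v0)) * (v - v0) / (v1 - v0)

# The approximation gap shrinks as the number of segments grows, mirroring the
# accuracy/run-time trade-off reported in Table A1.
for n in (2, 5, 10, 30):
    grid = np.linspace(s_min, s_max, 1001)
    gap = max(abs(piecewise_fc(v, n) - fc(v)) / fc(v) for v in grid)
    print(f"{n:>2} segments: max relative gap = {gap:.4%}")
```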
Appendix B
This section of the manuscript discusses the effects of changing port skipping cost on the recovered vessel schedules. Throughout the numerical experiments, the value of the port skipping inconvenience coefficient (µ) was altered from µ = [1.00] to µ = [10.00] with an increment of [1.00]. The GVSRPL mathematical model was solved for all the considered problem instances and disruption cases 2 and 3. The results from the computational experiments are presented next for disruption cases 2 and 3.

The port skipping decisions for all the generated problem instances of disruption case 2 and the Port of Antwerp (Belgium) are presented in Figure A2. It was observed that the Port of Antwerp (Belgium) was skipped by the liner shipping company for all the considered problem instances when the port skipping inconvenience coefficient was set to µ = [1.00] (see Figure A2a). Such a finding can be supported by the fact that skipping the Port of Antwerp (Belgium) was the most favorable decision for the liner shipping company in order to recover the vessel schedule (i.e., the port skipping cost was lower as compared to the other additional route service costs, which could be endured as a result of disruptions).

On the other hand, an increase in the port skipping inconvenience coefficient to µ = [2.00] could significantly increase the port skipping cost to be endured by the liner shipping company. Therefore, the liner shipping company did not skip the Port of Antwerp (Belgium) for the majority of the considered problem instances and the port skipping inconvenience coefficient of µ = [2.00] (see Figure A2b). The Port of Antwerp (Belgium) was skipped only for problem instance 10 with the highest unit cost of HFO and the highest unit cost of MGO. The latter can be explained by the fact that the liner shipping company was not able to make substantial changes in the vessel sailing speed due to the increasing unit cost of both fuel types, and port skipping was the most favorable vessel schedule recovery strategy for problem instance 10. Based on the conducted numerical experiments, it can be concluded that the port skipping decisions could be significantly affected by the port skipping cost for the cases when disruptions occur at sea and at ports within ECAs.
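A stylized version of this trade-off can be written as a one-line decision rule, as sketched below. The cost values are hypothetical and the rule abstracts away from the full model; it only illustrates why raising µ eventually makes skipping unattractive.

```python
# Stylized trade-off: a port is skipped only when the inconvenience-scaled
# skipping cost stays below the additional route service cost of serving the
# disrupted port. All cost values here are hypothetical.
def skip_port(mu, base_skip_cost, extra_service_cost):
    return mu * base_skip_cost < extra_service_cost

for mu in (1.0, 2.0, 8.0):
    decision = skip_port(mu, base_skip_cost=100_000.0, extra_service_cost=150_000.0)
    print(f"mu = {mu}: skip = {decision}")
```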
The port skipping decisions for all the generated problem instances of disruption case 3 are presented in Figure A3 for the following ports: (1) the Port of Antwerp (Belgium); (2) the Port of Le Havre (France); and (3) the Port of Marsaxlokk (Malta). It was observed that the Port of Antwerp (Belgium) and the Port of Le Havre (France), which are both located within ECAs, were skipped by the liner shipping company for all the considered problem instances when the port skipping inconvenience coefficient was set to µ = [1.0] ÷ [7.0] (see Figure A3a). Such a finding can be supported by the fact that skipping the Port of Antwerp (Belgium) and the Port of Le Havre (France) was the most favorable decision for the liner shipping company in order to recover the vessel schedule (i.e., the port skipping costs were lower as compared to the other additional route service costs, which could be endured as a result of disruptions).
On the other hand, an increase in the port skipping inconvenience coefficient to µ = [8.0] ÷ [10.0] could significantly increase the port skipping cost to be endured by the liner shipping company. Therefore, the liner shipping company did not skip the Port of Antwerp (Belgium) for all the considered problem instances and the port skipping inconvenience coefficient of µ = [8.0] ÷ [10.0] (see Figure A3b). Changes in the unit cost of HFO and MGO from one problem instance to another did not influence the port skipping decisions. However, the Port of Le Havre (France) was still skipped by the liner shipping company even after increasing the port skipping inconvenience coefficient to µ = [8.0] ÷ [10.0] in order to effectively recover the vessel schedule.
Subcellular Localization Relevance and Cancer-Associated Mechanisms of Diacylglycerol Kinases
An increasing number of reports suggests a significant involvement of the phosphoinositide (PI) cycle in cancer development and progression. Diacylglycerol kinases (DGKs) are very active in the PI cycle. They are a family of ten members that convert diacylglycerol (DAG) into phosphatidic acid (PA), two second messengers with versatile cellular functions. Notably, some DGK isoforms, such as DGKα, have been reported to possess promising therapeutic potential in cancer therapy. However, further studies are needed in order to better comprehend their involvement in cancer. In this review, we highlight that DGKs are an essential component of the PI cycle that localize within several subcellular compartments, including the nucleus and plasma membrane, together with their PI substrates, and that they are involved in mediating major cancer cell mechanisms such as growth and metastasis. DGKs control cancer cell survival, proliferation, and angiogenesis by regulating the Akt/mTOR and MAPK/ERK pathways. In addition, some DGKs control cancer cell migration by regulating the activities of the Rho GTPases Rac1 and RhoA.
Introduction
Phosphoinositides (PIs) represent a tiny component of the total phospholipid content in eukaryotic cell membranes, but they regulate numerous cellular activities such as cell adhesion [1], migration [2], apoptosis [3], vesicular trafficking [4], and post-translational modifications [5]. These processes are consistent with cancer-associated cellular mechanisms. PI metabolism is controlled by several kinases, phosphatases, and phospholipases following their stimulation by different external stimuli. An increasing number of studies report that alterations in the PI cycle, resulting from dysfunctional PI metabolic enzymes, are involved in cancer [6][7][8].
Accumulating evidence demonstrates that DGKs, as well as phospholipases C (PLCs) and protein kinases C (PKCs), are distributed across several subcellular compartments together with their substrates [15–17] and, like PLCs and PKCs, they are involved in cell regulation [18,19]. Nuclear localization allows DGKs to participate in a PI cycle which is independent of that of the plasma membrane [20,21]. Therefore, DGK activity may regulate distinct cellular functions, which may explain the complexities that surround DGK signaling [17]. In fact, DGKs regulate cytokine/growth factor-mediated cell proliferation and migration, cell growth, seizure activity, and insulin receptor-mediated glucose metabolism, suggesting that DGKs may also be involved in several diseases, including epilepsy and diabetes [22,23].
Of particular note is the involvement of DGKs in cancer development and progression [24][25][26]. For instance, mutations in the DGKα gene can drive pancreatic cancer [24] as well as promote hepatocellular carcinoma (HCC) progression by activating the mitogen-activated protein kinase (MAPK) pathway [27]. In 3D colon and breast cancer models, DGKα was reported to promote cell survival by regulating Src [28]. For these reasons, there are several reports suggesting that DGKα may be a promising therapeutic target in cancer therapy [7,29]. Meanwhile, in colorectal cancer (CRC), DGKγ plays tumor-suppressive roles, while DGKζ activity promotes tumor progression [26,30,31]. Moreover, DGKη and DGKδ regulate cell growth and proliferation in cervical cancer cell lines [32,33], whereas epigenetic changes in DGKι are reported in glioblastoma and HCC cells [34,35]. Despite the numerous reports of the involvement of DGKs in cancer as well as their clinical potential, the comprehension of the specific cellular functions regulated by DGKs in cancer is not yet complete. This review aims to provide up-to-date knowledge of the regulatory roles played by DGKs in cancer cell survival and metastasis, while also highlighting the downstream regulation of DGKs, the role of their cellular localization and summing up current knowledge on targeting DGKs in cancer therapies.
Activation and Regulation of DGK Isozymes
The 10 members of the mammalian DGK family are classified into 5 different subtypes depending on their structural motifs [36]: type I (DGKs α, β, and γ), type II (DGKs δ, η, and κ), type III (DGKε), type IV (DGKs ζ, and ι) and type V (DGKθ). Moreover, some DGK isoforms undergo further alternative splicing which often changes either the distribution or activity of the enzyme [37,38]. Their differences may be attributed to their evolution, in order to regulate specific cellular processes evident in higher vertebrates [39]. DGKs are ubiquitous kinases that are mainly expressed in the brain and hemopoietic tissue [40]. They are prominently distributed across several different regions of the brain, including the cerebellum, hippocampus, and the olfactory bulb, therefore suggesting involvement of DGKs in central nervous system functions [41]. Some DGK isoforms are also expressed in the retina (ε, γ, ι) [41], in striated (δ and ζ) and cardiac muscle (β and ε) [41,42] and in the lungs (α, ε, ζ and η) [43]. So far, it has been shown that different DGK isoforms can be co-expressed in the same tissue and even in the same cell, suggesting that each subtype may carry out tissue or cell-specific functions [41].
Until now, all recognized mammalian DGKs possess two common kinase domains comprising a conserved catalytic domain, which is characterized by a highly conserved ATP binding site, and an accessory domain [44]. The possession of additional distinct domains, which seem to have regulatory roles, as shown across the DGK family types, contributes to isoform-specific functions and diversity among mammalian DGK isoforms [17]. For instance, type I isoforms (α, β, and γ) participate in calcium (Ca2+) signaling because of their Ca2+ binding motif, while the carboxyl terminus-based sterile alpha motif (SAM) of DGKδ, which is a type II DGK, promotes protein-protein interactions [45]. In addition, DGKδ possesses a PH domain that weakly binds to phosphatidylinositols [46]. The nuclear localization of type IV DGKs (ζ and ι) is enhanced via their nuclear localization sequence (NLS). This domain also serves as a substrate for conventional PKCs and is homologous to the phosphorylation domain of the myristoylated alanine-rich kinase substrate (MARCKS) protein. DGKθ, which is the only type V DGK isoform, can be differentiated by its PH domain, three C1 domains, and a Ras-associating domain that mediates Ras signaling [47].
All DGKs possess at least two cysteine-rich regions similar to the DAG-binding C1A and C1B domains of PKC [48]. The C1 regions of DGKs allow membrane binding either through protein interactions, as demonstrated by several DGKs and β-arrestin [49], or through lipid interactions, as shown by DGKα and the lipid product of phosphoinositide 3-kinase (PI3K) [50,51]. C1 domains are recognized phorbol ester or DAG binding regions, but several studies that evaluated the binding potential of the C1 domains of some DGKs to a DAG analog or phorbol ester reported that only the C1A domains of DGKs β and γ display successful binding [52]. Therefore, it is not clear whether the C1 domains of all DGKs can actually bind DAG.
The activation and regulation of DGKs is a very complex process that needs further studies to be fully comprehended [37]. Given the structural and subcellular localization differences, it may be possible that different activation mechanisms exist for each individual DGK isoform. Considering also the ability of DGKs to translocate to different cellular sites, the presence of post-translational modifications and their binding to different cofactors, such as membrane lipids and Ca2+, may produce some degree of diversity in their functions. DAG is accessed by DGKs upon their translocation to DAG-producing cellular membranes, where DGKs are proposed to be activated during agonist- or kinase-promoted phosphorylation or following their binding to some cofactors or to other proteins [39]. Indeed, some studies reported that the distinct activities observed in the various DGK isoforms may depend on the type of agonist and the cofactors they bind to during their activation [53]. For example, DGKα, which is one of the most studied DGK isoforms, demonstrates this complexity in T lymphocytes. Based on the type of agonist used to activate DGKα in T cells, it translocates to two different membrane compartments: stimulation with interleukin 2 (IL-2) induces the translocation of DGKα from the cytosol to the perinuclear region [54], whereas the translocation from the cytosol to the plasma membrane occurs when DGKα is stimulated by the T-cell antigen receptor [55]. Several different cofactors, such as Ca2+, which binds to the EF-hand motif, and membrane lipids, including phosphatidylserine (PS), sphingosine, and the PI3K lipid products PtdIns(3,4)P2 and PtdIns(3,4,5)P3, have been reported to modify DGKα activity both in vitro and in vivo [56]. As for other DGK isoforms, activation of DGKδ may be enhanced by the binding of its PH domain to phosphatidylinositols [41], DGKε is inhibited by both PtdIns(4,5)P2 and PS, while DGKζ is activated by both PtdIns(4,5)P2 and PS [57]. Moreover, the protein-protein interaction between DGKθ and RhoA is involved in the regulation of the activity of DGKθ, where its kinase activity is completely reduced by RhoA [17].
Several studies showed that the specificity of DGK activities could also be attributed to their association with or inhibition of DAG-activated proteins, such as the RasGRP proteins [58]. Generally, when DAG is abundant, RasGRPs activate either Rap or Ras, or both, and this mechanism is RasGRP-isoform specific. Consequently, the downstream effects of DGKs diverge because DGK isozymes bind to different isoforms of RasGRP [59]. Hence, the functional specificity of DGKs depends on their interactions. For example, type IV DGKs ζ and ι are structurally similar, but they induce opposing effects on Ras signaling. DGKζ attenuates Ras signaling both in vitro [58,60] and in vivo [60], whereas DGKι enhances it [59]. These opposing effects were mainly dependent on the ability of DGKs ζ and ι to bind and inhibit specific RasGRP enzymes, respectively RasGRP1 and RasGRP3 [58,59]. Since the activities of DGKs maintain a balance between DAG and PA levels, DGKs can also be associated with proteins whose activities are regulated by PA. In fact, DGKs regulate, either directly or indirectly, Rac1 [30], mTOR [10], PIP5K type 1α [17], and atypical PKCs [50], all of which are regulated by PA and mediate several essential cellular effects such as cell survival, migration, and vesicle trafficking [11,12].
DGKs were initially described as modulators of the classical and novel PKC family members. However, some DGKs form complexes with certain DAG-sensitive PKC isoforms, thus being regulated by PKC-dependent phosphorylation [36,50]. Indeed, these DGKs and their respective PKC counterparts mutually regulate each other's enzyme activities, as seen in the case of DGKζ and PKCα. At immune and nervous synapses, the activation of DGKζ and PKCα is mutually regulated by both kinases [50]. At basal conditions, DGKζ phosphorylates DAG and prevents PKCα activation [61] but upon stimulation, there is an overproduction of local DAG levels that is too abundant for DGKζ to phosphorylate, leading to the availability of excess DAG. Consequently, high levels of DAG activate PKCα, which phosphorylates DGKζ, causing their physical dissociation [61] and promoting transient or even prolonged activation of PKCα. In the context of cancer, the DGKζ-PKCα axis could be important in regulating signaling in tumor cells. For instance, DGKζ undergoes a PKCα-dependent phosphorylation to enhance its shuttle from the nucleus to the cytosol [62], a biological process which is implicated in cancer cells responding to stress conditions [63]. Moreover, DGKα is involved in inflammation in tumor cells by positively regulating tumor necrosis factor α (TNFα)-induced nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) activation via a PKCζ-mediated Ser311 phosphorylation of the NF-κB subunit p65/RelA [64].
Overall, DGKs can be activated by several mechanisms which might be isoform-specific, and they produce different cellular outcomes based on the type of co-factors and proteins they associate with. The concept of DGK-isoform specific functions is supported by mouse knockout studies showing that targeted deletion of specific DGK isoforms leads to a distinct phenotype [59,[65][66][67][68].
Cellular Localization and Distribution of DGKs
As previously reported, DGKs can translocate to various distinct cellular compartments depending on the type of agonist. This supports the fact that the activities of DGKs may be confined to specific DAG pools produced after receptor activation [17]. This is the case of DGKε, which only phosphorylates DAG species that possess an arachidonoyl group at the sn-2 position [50]. Moreover, recent studies have also shown the presence of an alternative DAG metabolic pathway where individual DGK isoforms phosphorylate different molecular DAG pools or species independently from PI turnover [13]. For instance, DGKδ2 interacts via its SAM domain with the ER enzyme sphingomyelin synthase related protein (SMSr) generating DAG from phosphatidylethanolamine and ceramide [69]. Through a flip-flop mechanism, DAG produced by SMSr in the ER crosses the ER membrane from the lumen region to the cytosolic region to give access to DGKδ2 [69]. A previous work by van der Bend and colleagues showed that receptor activation of DGK in cells induces a physiological DAG generation while treating cells with exogenous PLC induces a global, non-specific DAG production [17,70]. Moreover, receptor activation of DGKs exhibited a significant increase in kinase activity, compared to the kinase activity produced from treating cells with exogenous PLC. Following this observation, the authors suggested that DGKs are activated only in spatially restricted subcellular sites characterized by DAG production because DGKs cannot use DAG pools that are randomly generated in the plasma membrane [17,70].
DGKs usually localize within several cell compartments, with the majority localizing at least partially within the plasma membrane. This occurs either constitutively, as seen in the case of DGKκ [71], or following stimulation with specific agonists, such as DGKδ1, which is translocated to the plasma membrane upon exposure to phorbol esters [72], or DGKα, following engagement of the T cell receptor [55]. Moreover, DGKs θ and ζ are found at the plasma membrane upon activation of some G protein-coupled receptors (Table 1) [73,74]. Several studies showed that nuclear inositide signaling is involved in regulating essential cellular processes, such as the cell cycle and differentiation [75–78], while it is also implicated in several pathologies, including myelodysplastic syndromes, brain diseases, and cancer [79–82]. Interestingly, DGKs, which have been discovered in nearly all cell compartments, were also found in the nucleus. Notably, DGKs play a critical role in the PI cycle, and the conversion of DAG to PA by DGKs represents the first step in resynthesizing PIs [14]. In addition, different agonists, such as insulin-like growth factor 1 (IGF-1) or thrombin, can generate DAG in the nucleus but not in the plasma membrane. Moreover, nuclear DAG levels fluctuate independently of extranuclear DAG during the cell cycle [20]. Among the nuclear DGKs, DGKs α, ζ, and ι translocate in and out of the nucleus [17], while DGKγ is shuttled from the cytoplasm to the nucleus [83], and a significant fraction of DGKθ localizes mainly within nuclear speckles [84,85]. DGKθ can also translocate from the cytosol to the plasma membrane upon stimulation by PKCs and activated GPCRs [86]. In addition, nuclear DGKs ζ and ι localize within distinct nuclear compartments [17,84], whereas DGKα localizes mainly at the nuclear periphery [17,71]. DGKs also localize within other organelles, and this may be cell type-specific [87,88]. Localization and expression of DGKs in the brain remain elusive. However, DGKε localizes to the subsurface cisterns of cerebellar Purkinje cells and colocalizes with inositol trisphosphate receptor-1 (InsP3R-1) in dendrites and axons of the brain, thus confirming the involvement of DGKε in neuronal and brain functions [89]. Moreover, DGKε can also localize within the endoplasmic reticulum and the plasma membrane [87]. In adrenal cells, a group of PI signaling molecules is expressed significantly in zona glomerulosa cells and medullary chromaffin cells in the adrenal gland. The same study showed that DGKγ localizes to the Golgi complex, DGKε to the plasma membrane, and DGKζ to the nucleus of adrenal cells [88]. Furthermore, DGKs δ and η localize to endosomes [37,45].
Even though the specific functions of DGKs are not clearly known yet, several reports demonstrated the presence of DGK activity in cellular contents containing cytoskeletal components and the possible involvement of DGKs in cytoskeletal remodeling and cellular morphology [90,91]. DGKs can interact with proteins associated with cytoskeletal reorganization, such as Rac and Rho GTPases, PIP5K, Cdc42, and Rho-GDI [17,91,92]. For example, DGKζ binds directly to Rac1 to form a complex with Rho-GDI and PAK1 [93], DGKβ co-localized with actin filaments [94], whereas endogenous DGKζ co-purified with cytoskeletal proteins and localized to the leading edge of C2 myoblasts [42] and glioblastoma cells [58].
The Impact of DGKs in the Regulation of Cancer Cell Mechanisms
Despite all the progress made in medicine, cancer remains one of the most frequent causes of death in humans. Therefore, it is important to better understand the various mechanisms associated with cancer development and progression, in order to pave the way for new personalized medicine approaches. Interestingly, abnormal levels of the DGK substrate, DAG, are involved in malignant cell transformation, as increased DAG levels induce tumor-promoting effects. Consequently, a decreased expression or activity of DGKs could lead to higher DAG levels that could promote malignant cell transformation (Figure 1) [17,35]. DGKζ activity clearly illustrates how DGKs negatively regulate DAG in order to limit the transforming potential of DAG in cancer [37]. Following T cell receptor stimulation, DGKζ negatively regulates RasGRP1, an enzyme involved in cell proliferation. As such, excessive activation of the oncogenic protein Ras is observed in DGKζ-deficient lymphocytes upon T cell receptor stimulation, and this correlates with high DAG levels. DGKα also seems to modulate RasGRP1, and its deletion induces hyperproliferation in T cells [58,65]. Besides, high levels of DGKα expression correlate positively with lung cancer patient survival [95], whereas in HCC, the downregulation of DGKα inhibits cell proliferation and metastasis [27]. Hence, DGKs act as both tumor suppressors and tumor promoters in cancer, in a cancer cell type-dependent manner.
All these reports demonstrate that DGK signaling may be essential in cancer development and progression. As such, this part of the review highlights the impact of DGKs in the regulation of cell growth, proliferation, and metastasis in cancer.
Cell Growth and Proliferation in Cancer
Cell growth and proliferation are essential factors in cancer development and progression. These processes have been shown to be regulated by the altered expression and/or activity of cell cycle-related proteins in cancer cells [17,96]. Several reports demonstrated the involvement of the DGK family in cell cycle regulation. For example, DGKs α and ζ play contrasting roles in regulating the cell cycle. DGKζ inhibited the progression of cells from the G1 to the S phase of the cell cycle [62], while DGKα-induced PA was required for the IL-2-mediated progression of cells to the S phase [54]. DGKζ is a negative regulator of cell cycle progression in C2C12 mouse myoblasts: DGKζ overexpression blocked cells at the G1 phase of the cell cycle via its interaction with the Retinoblastoma protein (pRb), which is a tumor suppressor and a cell cycle regulator, and DGKζ downregulation increased the number of cells at both the S and G2/M phases of the cell cycle [97]. Interestingly, DGKζ is highly expressed in patient-derived acute myeloid leukemia cells, and the knockdown of DGKζ in HL-60 promyelocytic cells induces a cell cycle arrest at the G2M checkpoint, inhibiting cell proliferation while increasing apoptosis (Table 2) [98]. In addition, DGKζ inhibition in U251 and U87 glioblastoma cells caused a marked decrease in cyclin D1 (CCND1) protein expression, which led to an arrest of cells at the G0/G1 phase [99]. Furthermore, major regulators of cancer cell growth and proliferation, such as the phosphorylated forms of Akt and mTOR, were also decreased, resulting in a significant reduction of cell proliferation in DGKζ knockdown cells compared to control cells. The authors also showed in an in vivo model that the tumorigenic capability of glioblastoma cells was reduced when DGKζ expression was decreased. Hence, DGKζ inhibition may confer advantages to glioblastoma patients [99].

Table 2 (excerpt).

DGK Isoform   Cancer Type              Effect                                                                                   Ref.
DGKζ          Acute myeloid leukemia   Induces cell cycle arrest at G2M; inhibits cell proliferation and increases apoptosis    [98]
DGKη          Lung cancer              Impairs MAPK signaling                                                                   [103]

In other cancer cells, such as K562 human erythroleukemia cells, DGKα can modulate cell cycle progression by influencing the phosphorylated status of pRb, which subsequently induces cell cycle arrest by impairing the G1/S transition [100]. In HCC cells, DGKα knockdown significantly suppresses cell proliferation, whereas overexpressing wild-type DGKα, but not the kinase-dead mutant, in the same cells significantly enhances proliferation. Similar results were obtained in HCC xenograft model experiments, where DGKα regulates cell proliferation via activation of the MAPK pathway. Specifically, DGKα downregulation impaired mitogen-activated protein kinase kinase (MEK) and extracellular signal-regulated kinase (ERK) phosphorylation, both of which are crucial in the regulation of cell growth and migration [27]. Moreover, a novel DGKα-specific inhibitor, CU-3, which was successfully obtained after a high-throughput screening of about 9600 chemical compounds, induced apoptosis in HepG2 HCC cells and HeLa cervical cancer cells, while simultaneously enhancing the immune response by promoting IL-2 production [101]. Consistent with these data, it was also reported that silencing or inhibiting DGKα activity with short interfering RNA (siRNA) or the small-molecule inhibitor R59022 caused increased death of glioblastoma and melanoma cells by interrupting essential oncogenic pathways [29].
DGKα knockdown decreased both the total and phosphorylated forms of mTOR, the levels of hypoxia-inducible factor 1-alpha (HIF1α) and c-Myc, and the phosphorylation of Akt in glioblastoma cells. Xenograft experiments also demonstrated that DGKα knockdown and inhibition affect tumor growth, angiogenesis, and survival of mice with intracranial and subcutaneous tumors. Intriguingly, knockdown of DGKα in non-cancerous cells, such as astrocytes and fibroblasts, showed none of the cytotoxicity observed in glioblastoma and melanoma cells. Hence, small-molecule inhibition of DGKα is selectively toxic to human cancer cells but not normal human cells, thus making DGKα inhibition a promising therapeutic target [29].
Several studies demonstrated an active role of DGKα also in Src oncogenic functions [8,28]. Src is a regulator of mitogenic and survival signaling pathways that are downstream of receptor and non-receptor tyrosine kinases, such as the vascular endothelial growth factor receptor (VEGFR), human epidermal growth factor receptor-2 (HER2) and focal adhesion kinase (FAK), which are often aberrantly expressed in colon, breast, and pancreatic cancer [8]. Using 3D colon and breast cancer cell cultures, it was demonstrated that DGKα is essential in cell growth and survival by promoting the stabilization of Src activation. Importantly, DGKα enzymatic activity is necessary for Src activation. Pharmacological or genetic DGKα silencing restricted tumor growth in vivo, thus confirming the function of DGKα in malignant transformation [28].
Furthermore, DGK is involved in the major biological features of the transformed phenotype of Kaposi's sarcoma (KS) cells, where DGK is essential for cell proliferation and DGK inhibitors could be promising for therapy [104]. Indeed, the DGK pharmacological inhibitor R59949 strongly reduces hepatocyte growth factor (HGF)-induced KS proliferation and anchorage-independent growth without affecting cell survival or the classical Akt and MAPK pathways, which are usually implicated in KS.
On the other hand, further studies showed that CHO-K1 ovary cells expressing the kinase-negative mutant of DGKγ exhibited a larger size, a slower growth rate, and an extended S phase, suggesting that the increase in cell size was induced by protein synthesis during the extended S phase and that DGKγ regulates the cell cycle [61]. Even though the activities of DGKs in cancer seem to support tumor-promoting roles, there is evidence that DGKs can also support tumor suppressor activities [102]. For instance, DGKγ expression is downregulated in HCC tumor tissues and colorectal cancer (CRC) cell lines when compared to non-tumor control tissues, and this correlates with poor clinical outcomes [102]. Interestingly, DGKγ downregulation in HCC is due to epigenetic modifications induced by histone H3 and H4 deacetylation. In addition, an analysis of the methylation of the CpG islands of DGK promoter genes in CRC cells revealed that DGKγ is hypermethylated in CRC cells but not in normal colonic tissue, and this corresponds with reduced DGKγ expression in CRC cell lines compared to control cells [26]. However, both constitutively active and kinase-dead DGKγ mutants induced inhibitory effects on CRC cell proliferation [26]. Notably, the ectopic expression of DGKγ in HCC cells decreased cell growth by downregulating glucose transporter 1 (GLUT1) expression and inhibiting cell glycolysis. In fact, GLUT1 expression is high in HCC and promotes tumorigenicity; therefore, DGKγ plays tumor suppressor roles in HCC by lowering GLUT1 levels [102].
DGKε activity can also regulate the Ras/RAF/MEK/ERK signaling in cervical cancer cell line models [33]. This pathway plays pivotal roles in the regulation of cell proliferation, survival, and differentiation. The study showed that siRNA downregulation of DGKε impairs the epidermal growth factor (EGF)-activated Ras/RAF/MEK/ERK signaling cascade in HeLa cells. However, the mechanism through which DGKε regulates this pathway is still unknown [33]. Additionally, the potential of DGKη to regulate MAPK signaling, which is a downstream target of epidermal growth factor receptor (EGFR), led a group to study the oncogenic effects of DGKη in lung cancer, which is often characterized by mutations in EGFR and KRAS. The authors reported that silencing DGKη in lung cancer models, characterized by EGFR and KRAS mutations, reduced cancer cell growth while enhancing the cells' sensitivity to EGFR inhibitor Afatinib [103].
Cell Migration, Invasiveness, and Metastasis
The motility and invasion of cancer cells from the primary tumor to a distant organ is an essential step in tumor metastasis. This event requires chemotactic migration of cancer cells and crossing of the extracellular matrix barriers that surround the tumor [105]. As previously stated, DGKγ plays tumor-suppressive roles in CRC. The ectopic expression of wildtype, as well as kinase active and inactive mutant forms of DGKγ, restricts cell migration and invasion in CRC cells by inhibiting Rac1 activity [26]. Rac1 is a member of the Rho GTPases, which are small GTP-binding proteins that regulate cytoskeletal dynamics and activate essential protein kinases involved in Epithelial to Mesenchymal Transition (EMT) [106]. EMT involves the reprogramming of epithelial cells into mesenchymal cells, leading to morphological changes, specifically more elongated and spindle-like forms with increased migratory and invasive properties [107]. Notably, Rac1 is highly expressed in different stages of colorectal tumors. Its activity in CRC tissues positively correlates with poor prognosis of CRC patients by promoting EMT-mediated invasion of CRC cells via the activation of the signal transducers and activators of transcription 3 (STAT3) pathway [106]. Therefore, it would be interesting to elucidate the mechanisms associated with DGKγ-mediated inhibition of Rac1 activity for potential CRC therapy. DGKγ also plays tumor suppressor roles in HCC cells by reducing cell migration when DGKγ is overexpressed [102]. Conversely, DGKα is highly expressed in HCC and promotes tumorigenicity [27]. In fact, knockdown of DGKα suppresses cell migration by impairing the Ras/RAF/MEK/ERK pathway in HCC cells. The Ras/RAF/MEK/ERK pathway is indeed frequently deregulated in HCC, and the activation of this pathway is significantly involved in cancer cell invasion [27]. In fact, ERK signaling is a critical mediator of cell migration, although it is also a classic mediator of cell growth, proliferation, and differentiation. ERK activates several proteins that regulate cell-matrix adhesion, cell protrusion, and retraction, all of which are essential processes recognized during cell motility [108]. Moreover, ERK controls EMT-regulated cell migration through Rac1/FoxO1 activation [107]. DGKα activity has also been reported to be a key factor in the migratory and invasive responses induced by several growth factors, including HGF and vascular endothelial growth factor (VEGF), in endothelial, epithelial, and leukemic cells [109–112]. In line with the potential of DGKα to regulate migration in endothelial cells, a study employing both DGKα-specific siRNA and/or the DGK pharmacological inhibitor R59949 demonstrated that DGKα activity is a key regulator of migration in the Hec-1A endometrial cancer cell line [112]. Inhibition of DGKα indeed reduced cell migration towards an estrogen chemoattractant as well as abolished ruffle formation in Hec-1A cells [112]. In addition, DGKα promotes invasive migration in H1299 lung cancer cells and A2780 ovarian carcinoma cells by controlling Rab coupling protein (RCP)-driven integrin trafficking [113]. Furthermore, R59949 significantly reduced HGF-induced motility in KS cell lines with limited effects on cell adhesion and spreading [104], but it did not affect the MAPK and Akt signaling pathways.
The downstream product of DGKs, PA, has been associated with receptor tyrosine kinase (RTK) signaling, an upstream regulator of the Ras/RAF/MEK/ERK cascade [114]. Another study attributed a potential role to PA in regulating tumor metastasis due to its ability to induce the secretion of type 1 matrix metalloproteases (MMP1), enzymes able to promote metastasis [115]. However, these studies refer to PA generated by phospholipase D (PLD); hence, it would be important to understand whether PA generated by DGKs performs the same functions. Interestingly, the application of nanomolar concentrations of PA increased cell migration in invasive MDA-MB-231 human breast cancer cells but had no effect on non-neoplastic control cells. Moreover, applying Clostridium difficile Toxin B to the PA-treated MDA-MB-231 breast cancer cells inhibited Rho activity and was followed by a marked decrease in cell migration [116]. These data strengthen the link between DGK/PA and Rho GTPases in cytoskeletal organization and subsequent cell migration. In addition, PA may be central to the regulation of cell motility by controlling the activity of type I PIP5K isozymes and PtdIns(4,5)P2 [117]. In fact, PA stimulates PIP5K, which participates in actin reorganization by generating PtdIns(4,5)P2, a primary regulator of cytoskeletal organization, so that PA signaling is also critical in PtdIns(4,5)P2 resynthesis [117].
A study reported that DGKζ deficiency in fibroblasts induces a reduction in Rac1 and RhoA activation, as well as a significant reduction in cell migration [118]. Considering this finding, the authors extended their study by elucidating the impact of DGKζ signaling on CRC metastasis [30]. In tumor-derived CRC cell lines, knocking down DGKζ expression produced results similar to those seen in fibroblasts. A significant decrease in Rac1 and RhoA activity in DGKζ-knockdown CRC cells was also observed and was accompanied by a decrease in cell invasion. Concomitantly, DGKζ depletion decreased the invasiveness of prostate cancer and metastatic breast cancer cells [30]. Thus, opposite to the tumor-suppressive roles of DGKγ described above, DGKζ may promote tumorigenesis by potentiating cell invasion and migration in several cancer types through regulation of Rac1 and RhoA activity. This may be because coordinated events between Rac1 and RhoA are necessary for effective migration in cancer metastasis. Indeed, RhoA is involved in the maintenance of actin stress fibers and focal adhesions, while Rac1 regulates the generation of lamellipodia, membrane ruffle formation, and Cdc42 signaling in filopodia production [119].
As for other DGKs, such as DGKδ and DGKι, there are a few reports demonstrating their participation in the development and progression of cancer. Downregulation of DGKδ in cervical and lung adenocarcinoma cell line models induced a downregulation of Akt activity, leading to a decrease in cell migration and proliferation. Moreover, DGKδ can control Akt activity through pleckstrin homology domain leucine-rich repeat protein phosphatase 2 (PHLPP2) [32]. Epigenetic studies have also revealed that DGKι may be methylated in cancer, including glioblastoma and HCC [34,35], while it is still unknown whether DGKι mutations may produce effects directly involved in metastasis of these cancer types.
Targeting DGKs in Cancer Therapies
The development of effective therapies to fight cancer continues to be one of the major challenges in modern medicine. Chemotherapy and radiotherapy are somewhat successful, but these approaches are non-specific and often lead to short- or long-term adverse effects which usually affect quality of life [7]. Recently, exploring immune-based therapeutic systems, such as blockade of immune checkpoints or adoptive cell transfer (ACT), which stimulate antitumor immunity by targeting and attacking tumor cells, has seemed promising because of its specificity. A prime example of this immune approach is chimeric antigen receptor (CAR)-T cells [120]. Interestingly, emerging reports suggest that targeting DGK activity could be a strong approach to reinforce the anti-tumor functions of T cells [7].
Currently, DGKα holds much promise in cancer therapy [7,29], as its inhibition is toxic to various cancer cells, but not to normal human cells [29]. For instance, in T cells, the inhibition of DGKα activity may generate a dual response, reinforcing the T cell attack on tumor cells while directly inhibiting tumor growth [7].
The inhibition of DGKα by R59949 in KS and endometrial cancer cells leads to decreased cell proliferation, growth, and migration [104,112]. On the other hand, using the small-molecule DGK inhibitor R59022, DGKα was inhibited in glioblastoma, cervical cancer, melanoma, and breast cancer cell lines. In these cells, the percentage of cell death was increased compared to control normal fibroblasts and astrocytes [29]. Indeed, in in vivo glioblastoma tumor models, the same authors showed that DGKα inhibition decreases angiogenesis and tumor growth and prolongs the survival of tumor-bearing mice [29]. Targeting DGKs may also be important in cancer immunotherapy, as intratumoral injection of DGK-knockout T cells, generated by CRISPR/Cas9, into U87MGvIII glioblastoma tumor models caused significant suppression of the tumors [25]. More importantly, this result was due to an enhancement of the effector functions of T cells in the xenograft model. The authors also showed that the CRISPR/Cas9-generated DGK knockout in CAR-T cells potentiates T cell functions by increasing cluster of differentiation 3 (CD3) signaling. Consequently, the cells became resistant to immunosuppressive factors such as transforming growth factor-β (TGFβ) and prostaglandin E2, which are known mediators of cancer cell survival [25].
Other studies tested CU-3, a pharmacological DGK inhibitor with higher specificity for DGKα than R59949 and R59022, mainly due to its specific targeting of the ATP-binding site in the catalytic domain of DGKα. This molecule induced apoptosis in several cancer cell types while enhancing the immune response [101]. Similarly, compound A, which specifically inhibits type I DGKs and especially DGKα, induced apoptosis and reduced viability in melanoma and several other cancer cell lines [121].
In addition, two novel DGKα inhibitor compounds, namely 11 and 20 (with an IC50 …), have been reported. On the other hand, Ritanserin, an established serotonin receptor inhibitor, has recently been identified as a DGKα inhibitor. Interestingly, it is more potent than R59022, although these two compounds differ structurally by just a single fluorine [123]. Ritanserin has already been shown to be well-tolerated and safe for human use in clinical trials, potentially paving the way to use it clinically as a DGKα inhibitor [123]. In fact, treatment of several cancer cells with Ritanserin has yielded similar results as other DGK inhibitors [7,124]. For example, the mesenchymal subtypes of lung and pancreatic carcinoma, as well as the mesenchymal subtype of glioblastoma, are sensitive to Ritanserin. Indeed, DGKα inhibition by Ritanserin induced cell death in glioblastoma stem cells, and this was partially mediated by apoptosis [124]. Additionally, Ritanserin, like other small-molecule inhibitors of DGKα, also enhanced T cell signaling but failed to promote long-term T cell activation [125].
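Potency figures such as the IC50 values cited for these compounds are usually obtained by fitting a dose-response model to cell viability data. The sketch below illustrates that general procedure with a four-parameter logistic (Hill) fit; the data points and starting guesses are entirely hypothetical and are not taken from the studies discussed here.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical viability data (% of control) at increasing inhibitor doses (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([98.0, 95.0, 90.0, 75.0, 52.0, 28.0, 12.0, 6.0])

# Initial guesses: low/high plateaus, IC50 near mid-range, unit slope
popt, _ = curve_fit(hill, conc, viability, p0=[5.0, 100.0, 1.0, 1.0])
print(f"estimated IC50 = {popt[2]:.2f} uM, Hill slope = {popt[3]:.2f}")
```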
Conclusions
All the reported studies highlight DGK signaling as a promising target for cancer therapy. However, more studies are needed to fully comprehend DGK-specific roles in cancer development and progression. Due to the isoform-specific functions observed in different types of cancer cells and even subcellular sites, it would be crucial to fully understand how the specific DGK isoforms control downstream oncogenic signaling, as these pathways can regulate proliferation, growth, angiogenesis, immunity, and migration. To better understand DGK signaling, it would also be essential to explore the potential crosstalk among the various DGK isoforms, the possible redundancy or compensatory functions of other isoforms during the inhibition of one or more DGK isoforms, and the relevance of DGK-specific subcellular localization. Moreover, the discovery of more potent DGK isoform-specific inhibitors may be useful to study the isoform-specific functions and develop new targeted cancer therapies. Future studies may also benefit from the combinatorial effect of DGK inhibitors and other standard cancer therapies, such as radiation and chemotherapy. Furthermore, since PA is involved in several cellular processes, the combination of DGK inhibitors and inhibitors of PA-synthesizing enzymes may prove to be more beneficial in cancer therapy than either used individually. Finally, since recent reports demonstrated that DGKs may have a preference for specific DAG species, which are independent of PI turnover pathways, it would be strategic to further investigate the cellular signaling pathways associated with PA produced by both PI-independent and PI-dependent pathways.
Conflicts of Interest:
The authors declare no conflict of interest.
Production of high titer of citric acid from inulin
Background Citric acid is considered the most economically feasible product of microbiological production; therefore, studies on cheap and renewable raw materials for its production are highly desirable. In this study citric acid was synthesized by genetically engineered strains of Yarrowia lipolytica from a widely available, renewable polysaccharide, inulin. Hydrolysis of inulin by the Y. lipolytica strains was enabled by expressing the inulinase gene (INU1; GenBank: X57202.1), amplified together with its native secretion signal sequence from genomic DNA of Kluyveromyces marxianus CBS6432. To ensure the maximum citric acid titer, the optimal cultivation strategy, repeated-batch culture, was applied. Results The strain Y. lipolytica AWG7 INU 8 secreted more than 200 g dm−3 of citric acid during repeated-batch culture on inulin, with a productivity of 0.51 g dm−3 h−1 and a yield of 0.85 g g−1. Conclusions The citric acid titer obtained in the proposed process is the highest value reported in the literature for Yarrowia yeast. The obtained results suggest that citric acid production from inulin by engineered Y. lipolytica may be a very promising technology for industrial citric acid production.
Background
Production of citric acid (CA), an intermediate of the Krebs cycle, is one of the oldest technologies of organic acid production applied at industrial scale [1]. Initially, CA was extracted from Italian lemons, which was later replaced by its biosynthesis using Aspergillus niger [2]. Afterwards, the discovery of CA secretion by the yeast Yarrowia lipolytica caused rapid progress in research on that process [3]. The global CA production in 2015 reached 2 million tonnes, and an annual growth of 3.7% is expected until 2020 (https://ihsmarkit.com/products/citric-acid-chemical-economics-handbook.html). CA is therefore among the most abundantly produced chemicals at industrial scale and is the most widely used organic acid; it also has "generally regarded as safe" (GRAS) status [4]. The main function of this acid is its application as a food acidulant. It is also used for the prevention of oxidative deterioration of flavor or color [5]. CA and its derivatives are also ingredients in detergents and personal care products, and they are widely used in the pharmaceutical and biomedical industries. Due to the many applications of this valuable compound, there is a great demand for very efficient technologies for its production. Nowadays, from an industrial point of view, the most important challenges when developing novel technologies are to design a production process with high titer, productivity and yield, while simultaneously fulfilling the main principles of cleaner production, energy saving and sustainable development [6].
One of the criteria when developing a new biotechnological process is the cultivation system. The most common systems used industrially are batch or fed-batch cultures. To achieve better results, repeated-batch culture (RBC) can be used. This cultivation system allows for better dynamics and higher efficiency of the biosynthesis process by extending the effective production phase in comparison to traditional batch culture [7]. RBC has already been used successfully for lipid, ethanol and erythritol production [7][8][9].
Another important criterion of an efficient technology using microorganisms is the substrate applied in the process. The raw materials used for biotechnological processes must be cheap and renewable and must not compete with food production. Ideally, such a feedstock should be a waste or by-product from other industries.
The nonconventional yeast Y. lipolytica is rapidly emerging as a valuable host for the production of a variety of lipids, organic acids, polyols and other metabolites [10]. Nowadays, both lab-scale and industrial production of chemicals using Y. lipolytica are being improved through genetic engineering. Recently, Sabra et al. [11] focused on transcriptome and fluxome characterization of the citrate-producing strain ACA DC 50109 and enhanced citrate secretion in a glucose-based medium to 55 g dm−3. Furthermore, improvement of succinic acid production (110.7 g dm−3) was achieved through deletion of succinate dehydrogenase and CoA-transferase genes followed by overexpression of TCA cycle genes [12]. More recent work characterized the biological pathway of erythritol biosynthesis and improved its production through overexpression of transketolase or erythrose reductase genes in Y. lipolytica [13,14]. Furthermore, β-carotene production in Y. lipolytica was also investigated [15]. Overexpression of the optimum promoter-gene pairs for each transcriptional unit in a lipid-overproducing strain, followed by optimization of the cultivation method, significantly improved β-carotene production in comparison to a strain without lipid overproduction.
In the processes of CA production using Y. lipolytica, many research groups have focused on glycerol, a by-product of biodiesel or soap production [16][17][18]. However, some other waste substrates have also been applied: olive mill waste-water [19], molasses [20] or pretreated cellulose [21]. Plant biomass constitutes a very promising raw material due to its high sugar content and availability. Besides cellulose, some plants, such as chicory, Jerusalem artichoke or dahlia, are known to store energy in the form of inulin (IN), a fructose polymer, which is accumulated in large quantities in roots and rhizomes [22]. Inulin is classified as a dietary fibre and promotes the growth of intestinal bacteria [23]. IN has a significant number of pharmaceutical and food applications. It is frequently used as a sugar or fat substitute in different types of food and as an excipient and stabilizer in many pharmaceuticals [24,25]. Furthermore, IN has also been found to have anti-cancer [26] and immuno-modulatory properties [27]. The global IN market was valued at USD 1674.3 Mln in 2017, and is expected to reach USD 5099.2 Mln by 2025. Interestingly, Europe is the largest IN market, accounting for approximately 42% of the global IN market (http://www.acutemarketreports.com/report/artichoke-inulin-market-professional-survey-report). The rapidly growing IN market results in increasing amounts of inulin-rich wastes from its production that must be managed. One of the possibilities for utilizing these raw materials is microbiological production of bioethanol, single-cell protein, lipids, CA, butanediol or lactic acid [28][29][30][31][32].
Due to the consistently high demand for CA and the growing market for IN, the main goal of this study was to improve the biosynthesis of CA by engineered Y. lipolytica strains using IN as a carbon source. To enhance the CA titer, yield and productivity from IN, repeated-batch culture was applied.
Results
Overexpression of INU1 gene in Y. lipolytica strains

It is a known phenomenon that CA secretion by Y. lipolytica occurs when nitrogen and, to some level, phosphorus are deficient in the medium. The ability to secrete large amounts of this acid is also strain-dependent [33][34][35]. In the current study, new inulinase-expressing derivatives of Y. lipolytica AWG7, able to grow on IN, were investigated for CA production from this raw material. The parental AWG7 strain was selected as an efficient CA producer with high purity of CA over ICA. During continuous culture with glycerol as carbon source, it secreted over 97 g dm−3 of CA from 150 g dm−3 of glycerol [36]. In the current study, growth of the wild strain and its inulinase-positive derivatives was analyzed in YNB medium supplemented with fructose or IN (Fig. 1). Furthermore, a control experiment with glucose was also performed to verify that the random integration of the INU1 gene into the Y. lipolytica genome did not affect growth of the resulting transformants (Fig. 1). No difference was observed among the analyzed strains in YNB medium with either glucose or fructose (Fig. 1a, b). Although the integration cassette for INU1 gene expression was inserted into the genome using retrotransposable zeta sequences [37], which causes random integration and may result in different inulinase expression levels, no differences were observed among the analyzed transformants growing in medium with IN (Fig. 1c). The slow and insignificant growth of the wild strain AWG7 may be the result of trace amounts of free fructose present in the medium (Fig. 1c). The good growth of all analyzed transformants in medium with IN is indirect proof of efficient inulinase expression and hydrolysis of IN. The similar growth pattern of all strains was the basis for their further analysis for CA biosynthesis.
Selection of the best CA producer using batch cultures
In the next step of the study, all the engineered strains were analyzed for their ability to secrete CA while growing in medium with IN. In control experiments, the AWG7 strain grew in synthetic medium with fructose or glucose. The obtained results are presented in Table 1. An inulin concentration of 100 g dm−3 was used at the beginning of the process. The culture was carried out until the available substrate (fructose released from inulin) was exhausted from the medium. Fructose concentration was monitored by HPLC. Six out of seven analyzed transformants were able to secrete more than 60 g dm−3 of CA from 100 g dm−3 of IN. The best CA-producing strain, AWG7 INU8, secreted 75.5 g dm−3. Notably, the parental AWG7 strain secreted only 48.7 g dm−3 of CA in fructose-based medium, which was significantly lower not only than in glucose-based medium but also than all its derivatives growing in medium with IN (a fructose polymer). It is also important to mention that the concentration of the undesirable ICA remained at a very low level for all transformants (less than 2 g dm−3). The processes using the different inulinase-expressing strains differed significantly in terms of CA productivity and yield. While the best CA producer secreted this acid with a productivity of 0.8 g dm−3 h−1, the highest value of this parameter was reached by the AWG7 INU5 strain (0.93 g dm−3 h−1). Additionally, although the CA yield varied considerably depending on the strain, all the obtained values were satisfactory, reaching 0.76 g g−1 for the AWG7 INU 8 strain. This value was 35% higher than the CA yield obtained by the parental AWG7 strain on fructose. In the process with IN as carbon source, some of the analyzed strains also secreted small amounts of mannitol into the culture broth; however, the concentration of this by-product remained low and did not exceed 7 g dm−3.
Significant differences were also observed among the analyzed strains in terms of biomass production. The concentration of cells in the culture ranged from 9.0 g dm−3 to 16.5 g dm−3. The parental AWG7 strain grew only slightly better in medium with glucose compared to fructose-based medium.
RBC as a tool of process intensification
The two best CA producers identified in the previous stage of the study were chosen for further analysis. The AWG7 INU8 transformant was used due to its highest CA yield, and the AWG7 INU5 strain was chosen due to its highest CA productivity. Both strains were cultured using the RBC system to increase the titer, yield and productivity obtained during traditional batch cultures. The kinetics of the RBC processes (changes of biomass, CA and fructose over time) for both transformants is presented in Fig. 2. No technical difficulties were observed during the quintuple RBC experiment, and no microbiological contamination occurred. CA biosynthesis using this system is thus a stable and feasible new process. The whole process remained stable for 450 to 600 h. Medium exchange in the bioreactor (40% of working volume each time) was carried out four times (Figs. 2 and 3). The rate of medium exchange was chosen based on previous work on erythritol production in the RBC system [6].
The RBC processes for both strains differed slightly in terms of biomass production (Fig. 1). A higher concentration of cells was observed for the AWG7 INU5 strain (Fig. 1b); however, the biomass concentration dropped after the first medium exchange for both strains (Fig. 1). IN hydrolysis occurred rapidly in both processes, as seen in the sharp increase of fructose concentration in the medium. A higher fructose utilization rate was reached by the AWG7 INU5 strain (Fig. 1b), which resulted in a process 150 h shorter than that of the AWG7 INU8 transformant (Fig. 1a). In contrast, the AWG7 INU8 strain secreted 30 g dm−3 more CA into the medium (203 g dm−3) than the AWG7 INU5 strain (170 g dm−3). Furthermore, after two medium exchanges, the concentration of CA continued increasing for both strains.
The overall CA yield of the RBC process was higher for the AWG7 INU 8 strain, reaching 0.51 g g−1 (Table 2). Although the overall CA yield did not reach very high values, the maximum partial yield was 0.85 g g−1 for the AWG7 INU 8 strain during the 2nd medium exchange (Table 2). The overall CA productivity was almost the same for both strains and reached 0.3 g dm−3 h−1 (Table 2). The highest CA productivity during the RBC process (0.51 g dm−3 h−1) was noted for the AWG7 INU8 strain during the 2nd medium exchange (Table 2). The concentration of the undesired ICA during the whole RBC process did not exceed 2 g dm−3 for either transformant (Table 2).
Activity of inulinase displayed on the yeast cells
During RBC on IN, the activity of inulinase was measured. Both strains showed activity of the expressed enzyme (Fig. 4). For both strains, the inulinase activity increased until the first replacement of the medium (Fig. 4) and then decreased as the concentration of released fructose declined. The highest inulinase activity was noted for the AWG7 INU 8 strain, reaching 120 U g−1 of cell dry mass (Fig. 4a). For the AWG7 INU5 strain, the maximum inulinase activity was reached after 320 h of culture (Fig. 4b).
Discussion
CA is the most commonly produced organic acid, manufactured mainly from molasses using the fungus A. niger. Although the process is well known and has been used for decades, it shows some disadvantages which must be eliminated: it is multi-stage and dependent on the purity of the substrate. Therefore, the whole process generates some environmentally undesired wastes [17]. As shown previously, CA biosynthesis using Y. lipolytica may become an efficient alternative method of its production [36,38]. The process with yeast is more favorable due to the higher resistance of the cells to high substrate concentrations and to metal ions. Furthermore, yeasts are easy to culture and suitable for establishing stable continuous processes of desired metabolite production. An additional advantage of using yeast, especially Y. lipolytica, is the availability of genetic tools to engineer their metabolism and boost the production processes [39]. CA production by Y. lipolytica was based mainly on glucose and crude glycerol [33,36,40,41]. However, the global increase in IN and prebiotic food manufacturing has caused growing interest in the utilization of IN for biotechnological processes, which may help to reduce the amount of IN-rich wastes generated during the production process [30,31,42]. Application of IN in processes with Y. lipolytica, in concert with the use of the RBC system, could help to develop a more efficient CA production technology. The use of the RBC system for CA biosynthesis is a very desirable solution. It allows the application of highly concentrated media without inhibition of cell growth, owing to the high concentration of biomass in the bioreactor. Furthermore, the secreted acid is partially removed from the tank, which reduces its toxicity to the cells. In the literature, many examples of CA production by Y. lipolytica using different cultivation systems can be found. A summary of the available results is presented in Table 3. The highest CA titer was obtained in the presented work by the Y. lipolytica AWG7 INU 8 transformant during the RBC process with inulin. The CA titer obtained by the AWG7 INU 5 strain is also higher than those obtained for other strains growing on sunflower oil in batch cultures or on glycerol in fed-batch systems [16,43,44]. Furthermore, studies with the parental AWG7 strain using the RBC system with glycerol [44] also did not reach the concentration of CA obtained by its inulinase-positive transformants studied in the current work (Table 3). Nevertheless, the parental strain used in the previous work presented higher productivity during the RBC process [44]. In the study conducted by Liu et al. [42], a genetically modified strain of Y. lipolytica growing on inulin in batch culture produced CA with almost the same yield (0.84 g g−1) as the strain AWG7 INU8 used in this study in the RBC system.
An additional advantage of the process with inulinase-positive transformants of the AWG7 strain growing on inulin is the low level of the undesired ICA (below 2 g dm−3, Tables 2 and 3). The formation of ICA is strain- and substrate-dependent and differs among cultivation systems [43]. In general, the proportion of ICA secreted by different wild strains of Y. lipolytica growing on carbohydrates or glycerol ranges from 8 to 16% [45,46]. Another Y. lipolytica strain, Wratislavia K1 (also a derivative of the Polish A-101 strain), secreted 6.6-12.9 g dm−3 of ICA during fed-batch culture on IN [31]. The high CA/ICA ratio obtained in the RBC process with IN in the presented study will facilitate the purification of CA from the post-culture medium. The transformant AWG7 INU8 can produce 120 U g−1 of inulinase activity. Liu et al. [30] determined the activity of immobilized inulinase with a 6× His-tag to be 22.6 U mg−1 of cell dry mass after 96 h of cell growth for Y. lipolytica expressing the inulinase gene.
Conclusion
In conclusion, the growing IN market and the increasing amount of IN-rich wastes prompt the search for new ways of utilizing them. The presented study showed the great potential of genetically engineered strains of Y. lipolytica AWG7 expressing inulinase to efficiently convert IN to the highly desired CA. Furthermore, application of the RBC system allowed the CA titer to be increased to above 200 g dm−3, which is, to our knowledge, the highest concentration of CA reported for Y. lipolytica.
Strains
The parental strain Y. lipolytica AWG7 used in this study was isolated from the Y. lipolytica A-101-1.31 strain after its exposure to UV irradiation [47]. The strain belongs to the Yeast Culture Collection of the Department of Biotechnology and Food Microbiology, Wroclaw University of Environmental and Life Sciences, Poland. The strain has been deposited in the culture collection CIRM-Levures (INAG 36017) under number CLIB 81. This strain is a very efficient CA producer and is not able to form mycelium (data not shown). The new derivatives of the AWG7 strain (AWG7 INU 1, AWG7 INU 2, AWG7 INU 3, …, AWG7 INU 8), expressing the inulinase gene from Kluyveromyces marxianus CBS6432 (INU1 gene; GenBank: X57202.1), were constructed according to the procedure described previously [31]. Y. lipolytica transformants were selected on YPD medium with 400 μg cm−3 of hygromycin B. Verification of the transformants was performed both by PCR and by growth detection on solid minimal medium with 1% IN. Seven randomly selected transformants were chosen for further analysis of citric acid production in bioreactors. All yeast strains used in this study were stored at −80 °C and refreshed before use by 24-h culture in YPD medium at 28 °C, 2.33 Hz.
Determination of IN utilization using microcultures
Inoculation medium (YPD) contained: glucose, 20 g; yeast extract, 10 g; and bacteriological peptone, 20 g in 1 dm3 of distilled water. The inoculation cultures were grown for 24 h at 28 °C, 2.33 Hz on a rotary shaker (CERTOMAT IS, Sartorius Stedim Biotech). The growth of Y. lipolytica AWG7 and its derivatives was analyzed in a Bioscreen C system (Oy Growth Curves Ab Ltd., Finland), in Yeast Nitrogen Base medium (Sigma-Aldrich) supplemented with 2% (w v−1) of glucose, fructose or IN. The overnight inoculation cultures were centrifuged and washed twice with sterile water. The yeast strains were grown in 150 μl of the appropriate medium in 100-well microplates. The optical density (OD600) of the cells was standardized to 0.15. The experiments were performed in quintuplicate at 28 °C under constant, intensive agitation. The growth of cells was monitored by measuring the OD at 420-600 nm every 30 min for 48 h. This methodology was published previously by Rakicka et al. [49].
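From such OD time courses, the maximum specific growth rate can be estimated as the steepest slope of ln(OD) over the exponential phase. A minimal sketch of this calculation is shown below; the synthetic readings and the window length are illustrative assumptions, not data from the study.

```python
import numpy as np

def max_specific_growth_rate(t_h, od, window=8):
    """Estimate mu_max (1/h) as the steepest sliding-window slope of ln(OD)."""
    ln_od = np.log(od)
    best = 0.0
    for i in range(len(t_h) - window):
        slope = np.polyfit(t_h[i:i + window], ln_od[i:i + window], 1)[0]
        best = max(best, slope)
    return best

# Synthetic OD600 curve: exponential growth from 0.15 that plateaus at 1.8
t = np.arange(0.0, 48.0, 0.5)                      # readings every 30 min for 48 h
od = 0.15 * np.exp(np.minimum(0.25 * t, np.log(1.8 / 0.15)))
print(f"mu_max ~= {max_specific_growth_rate(t, od):.3f} 1/h")
```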
CA biosynthesis in bioreactors
The differences in CA production among the 8 investigated strains were tested using batch cultures in medium containing: inulin, 100 g; NH4Cl, 2 g; KH2PO4, 0.25 g; MgSO4·7H2O, 1.0 g; and yeast extract, 1 g in 1 dm3 of tap water [35]. The parental strain (AWG7) was cultivated on glucose and fructose as control experiments.
The intensification of CA production was performed by applying the repeated-batch strategy (RBC) to the two best CA producers only, the AWG7 INU 5 and AWG7 INU 8 transformants. The medium used for CA biosynthesis was as described above. After the available substrate was exhausted from the medium, 800 cm3 of the culture was withdrawn and replaced by the same volume of fresh medium. This procedure was repeated three times (Fig. 3). During each RBC culture the working volume at the beginning of the process was maintained at 2 dm3. The end of each RBC cycle was determined when the concentration of fructose dropped below 20 g dm−3.
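The mass balance of each exchange is straightforward: replacing a fraction of the working volume with fresh medium scales every dissolved concentration by the fraction of broth that remains, plus whatever the fresh medium supplies. A minimal sketch of this accounting, with hypothetical concentrations, is given below.

```python
def after_exchange(conc_g_dm3, v_work_dm3=2.0, v_exch_dm3=0.8, feed_conc_g_dm3=0.0):
    """Concentration right after replacing v_exch of v_work with fresh medium."""
    kept = 1.0 - v_exch_dm3 / v_work_dm3            # 60% of the broth stays in the tank
    return conc_g_dm3 * kept + feed_conc_g_dm3 * (1.0 - kept)

# Citric acid at, say, 75 g/dm3 is diluted to 45 g/dm3 by a 40% exchange,
# while a substrate supplied at 100 g/dm3 in the feed is topped back up to 40 g/dm3
print(after_exchange(75.0))                          # 45.0
print(after_exchange(0.0, feed_conc_g_dm3=100.0))    # 40.0
```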
All bioreactor cultures were performed in a 5-dm3 stirred-tank reactor (BIOSTAT B-PLUS, Sartorius, Germany) with a working volume of 2.0 dm3 at 28 °C. Aeration and stirring rates were set at 0.6 m−1 and 13.3 Hz, respectively. The pH was maintained at 5.5 by additions of 5 M NaOH solution, as described previously [48,50]. All cultures were run until complete exhaustion of the carbon source. The bioreactor with the appropriate medium was autoclaved at 121 °C for 20 min. All cultures were conducted in two biological replicates.
Analytical methods
Ten milliliters of culture broth was centrifuged (5 min, 2700 RCF). The biomass was washed twice with distilled water and filtered on 0.45 μm pore-size membranes. The biomass concentration was determined gravimetrically after drying at 105 °C and expressed in grams of cell dry weight per liter (g dm−3). The concentrations of substrate (glucose, fructose) and CA were measured in the supernatants by high-performance liquid chromatography (Dionex-Thermo Fisher Scientific, UK) using a Carbohydrate H+ column (Thermo Scientific, Waltham, MA) coupled to a UV detector (λ = 210 nm) and a refractive index detector (Shodex, Ogimachi, Japan). The column was eluted with 25 mM trifluoroacetic acid at 65 °C and a flow rate of 0.6 cm3 min−1. The concentration of isocitric acid (ICA) in the supernatant was determined using an Isocitrate Assay Kit (Sigma-Aldrich). This methodology has been described previously by Rakicka et al. [48,50].
Inulinase activity of the strains was determined according to Gong et al. [51]. The reaction mixture, containing 0.1 cm3 of the supernatant, 0.9 cm3 of phosphate buffer (0.1 M, pH 6.0) and 1% (mass per volume) of IN (Sigma), was incubated at 50 °C for 15 min. The reaction was stopped immediately by keeping the reaction mixture at 100 °C for 10 min. The amount of reducing sugar in the reaction mixture was assayed by the method introduced by Miller [52]. One unit of inulinase activity (U) was defined as the amount of enzyme that releases 1 μmol of reducing sugar per minute under the assay conditions applied in this study. The specific inulinase activity was calculated as units per gram (U g−1) of cell dry mass.
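Converting the raw assay readout into the specific activity reported here is a short calculation: micromoles of reducing sugar per minute in the aliquot, scaled to the culture volume and normalized by cell dry weight. The sketch below shows this arithmetic; the readout and the cell dry weight are hypothetical values chosen only for illustration.

```python
def inulinase_units_per_g(reducing_sugar_umol, assay_min=15.0,
                          aliquot_cm3=0.1, cdw_g_dm3=12.0):
    """Specific inulinase activity in U per g cell dry weight,
    with 1 U = 1 umol reducing sugar released per minute."""
    units_in_aliquot = reducing_sugar_umol / assay_min
    units_per_dm3 = units_in_aliquot / (aliquot_cm3 / 1000.0)  # scale to 1 dm3
    return units_per_dm3 / cdw_g_dm3

# Hypothetical DNS readout: 2.0 umol reducing sugar released in 15 min
print(f"{inulinase_units_per_g(2.0):.0f} U/g CDW")   # ~111 U/g
```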
Calculation of fermentation parameters and list of abbreviations
The yield of CA production from IN (Y), expressed in g g−1, was calculated using the formula:

Y = CA / IN

The productivity of CA in batch and repeated-batch culture (Q), expressed in g dm−3 h−1, was calculated as:

Q = CA / t

In all formulas, CA stands for the citric acid concentration (g dm−3), IN stands for the consumed inulin concentration in the culture (g dm−3), and t is the duration of the fermentation process (h).
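A minimal sketch of these two formulas, applied to illustrative numbers of the magnitude reported above, is given below; the exact consumed-inulin and duration figures are placeholders, not the study's raw data.

```python
def citric_acid_yield(ca_g_dm3, inulin_consumed_g_dm3):
    """Y = CA / IN, in g of citric acid per g of consumed inulin."""
    return ca_g_dm3 / inulin_consumed_g_dm3

def citric_acid_productivity(ca_g_dm3, duration_h):
    """Q = CA / t, in g dm^-3 h^-1."""
    return ca_g_dm3 / duration_h

# e.g. a batch that converts 100 g/dm3 of inulin into 75.5 g/dm3 of CA in 95 h
print(f"Y = {citric_acid_yield(75.5, 100.0):.2f} g/g")
print(f"Q = {citric_acid_productivity(75.5, 95.0):.2f} g dm-3 h-1")
```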
Renal impairment after liver transplantation - a pilot trial of calcineurin inhibitor-free vs. calcineurin inhibitor sparing immunosuppression in patients with mildly impaired renal function after liver transplantation
Objectives Chronic kidney disease is frequent in patients after orthotopic liver transplantation (OLT) and has an impact on survival. Patients receiving calcineurin inhibitors (CNI) are at increased risk of developing impaired renal function. Early CNI reduction and concomitant use of mycophenolate mofetil (MMF) has been shown to improve renal function. Methods The aim of this trial was to compare dose-reduced CNI/MMF versus CNI-free MMF/prednisone-based treatment in stable patients after OLT with respect to glomerular filtration rate (GFR). 21 patients [GFR 44.9 ± 9.9 mL/min/1.73 m2 measured by 99m-Tc-DTPA clearance, serum creatinine (SCr) 1.5 ± 0.42 mg/dL] were randomized either to exchange CNI for 10 mg prednisone (group 1; n = 8) or to receive CNI at 25% of the initial dose (group 2; n = 13), each in combination with 1000 mg MMF b.i.d. Results At month 12, mean SCr (−0.3 ± 0.4 mg/dL, p = 0.031) and GFR (+8.6 ± 13.1 mL/min/1.73 m2, p = 0.015) improved in group 2 but remained unchanged in group 1. Main side effects were gastrointestinal symptoms (14.3%) and infections (4.8%). Two biopsy-proven, steroid-responsive rejections occurred. In group 1, mean diastolic blood pressure (BP) increased by 11 ± 22 mmHg (p = 0.03). Conclusions Reduced-dose CNI in combination with MMF, but not CNI-free immunosuppression, leads to improvement of GFR in patients with moderately elevated SCr levels after OLT. The addition of steroids resulted in increased diastolic blood pressure, presumably counterbalancing the benefits of CNI withdrawal on renal function.
INTRODUCTION
Liver transplant recipients are the second most frequent group of patients who develop chronic renal failure among all recipients of solid organ transplants [1]. Up to 18% of patients after OLT have been shown to suffer from end-stage renal disease (ESRD) within 10 years after engraftment [2]. Apart from preoperative GFR, diabetes and chronic hepatitis C infection [1,3], the use of calcineurin inhibitor (CNI)-based immunosuppression has been shown to put the patient at particular risk of developing chronic kidney disease [4].
Therefore, both CNI-sparing and CNI-free regimens should offer an opportunity to treat liver transplant recipients with impaired renal function. Whereas CNI reduction with concomitant use of mycophenolate mofetil (MMF) has been demonstrated to be a successful and safe option [6][7][8][9][10][11][12][13], MMF monotherapy was complicated by a high frequency of rejection episodes [5,14]. Alternatively, CNI withdrawal has been studied in combination with conversion to mTOR inhibitors; however, results in such studies are controversial [15,16]. Finally, it remains unclear whether the high rate of rejection episodes can be prevented when MMF monotherapy is combined with steroids.
We performed a single-center prospective randomized pilot trial to compare a dose-reduced CNI/MMF regimen versus an MMF/prednisone regimen in patients after liver transplantation with moderately impaired renal function.
PATIENT RECRUITMENT
Patients were screened for renal impairment by 99m-Tc-DTPA clearance following the method of Russell et al. [17] and included if GFR was below 70 mL/min/1.73 m2. Additional inclusion criteria consisted of age greater than 18 years and CNI toxicity as the main cause of impaired renal function, established by excluding other factors such as relevant proteinuria (>1 g/24 h), hematuria or diabetes mellitus. Exclusion criteria comprised severely increased serum creatinine (SCr) > 5 mg/dL, (re-)infection with hepatitis B (HBs-antigen positive), rejection episodes during the previous 12 months, CMV infection during the previous 6 months, contraindications for the use of corticosteroids or MMF, pregnancy or breast-feeding, unwillingness in pre-menopausal women to take contraceptives for the duration of the study, active gastric ulcer disease, malignancy, hemoglobin < 10 g/dL, total leucocyte counts < 3.5 G/L and platelet counts < 70 G/L.
The study protocol was approved by the local ethics committee in accordance with the principles of the declaration of Helsinki.
STUDY PROTOCOL
Patients were randomized either to completely discontinue CNI and add 10 mg prednisone instead (group 1) or to reduce the CNI to 25% of the initial dosage without any additional corticosteroids (group 2). Each patient received 1000 mg MMF b.i.d. CNI was tapered stepwise over 8 weeks after the target MMF dose was achieved. To ensure equal starting levels of renal function, patients were first stratified with respect to serum creatinine above or below 1.4 mg/dL and then randomly allocated to the study groups at a 1:1 ratio using randomization blocks of five subjects.
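A minimal sketch of such a stratified block randomization is shown below. The patient identifiers, SCr values and seed are hypothetical; note that with 1:1 allocation inside blocks of five, one arm necessarily receives the odd assignment in each block, which is consistent with the unequal final group sizes.

```python
import random

def stratified_block_randomization(patients, block_size=5, seed=42):
    """Assign patients to groups 1/2 in shuffled blocks within each SCr stratum."""
    rng = random.Random(seed)
    strata, assignment = {}, {}
    for pid, scr in patients:                    # stratify by SCr above/below 1.4 mg/dL
        strata.setdefault(scr > 1.4, []).append(pid)
    for members in strata.values():
        for start in range(0, len(members), block_size):
            block = members[start:start + block_size]
            labels = ([1, 2] * block_size)[:len(block)]
            rng.shuffle(labels)
            for pid, group in zip(block, labels):
                assignment[pid] = group
    return assignment

patients = [("P01", 1.6), ("P02", 1.2), ("P03", 1.5), ("P04", 1.3),
            ("P05", 1.8), ("P06", 1.1), ("P07", 1.4), ("P08", 1.7)]
print(stratified_block_randomization(patients))
```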
For every suspected rejection, histological confirmation was obtained. Patients with biopsy proven rejection (BPR) were judged as treatment failure and withdrawn from the study.
FOLLOW UP
After study inclusion, patients were monitored at weekly intervals for the first 14 weeks. At each visit, blood counts, liver function tests and ciclosporine or tacrolimus 12-h serum concentrations were measured, and any side effects were recorded. At inclusion as well as after months 6 and 12, serum creatinine (SCr) and 99m-Tc-DTPA clearance were measured.
ANALYSIS OF THE DATA
The study was evaluated on an intent-to-treat (ITT) basis. Results are expressed as arithmetic means ± SD. Statistical analysis was performed using StatView 5.0™ software (version for Windows; SAS Institute Inc., Cary, NC, USA). Differences between the groups were analyzed by the Mann-Whitney test and differences of paired values by the Wilcoxon rank test. p-values < 0.05 were considered significant.
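For illustration, the sketch below runs the same two tests on hypothetical SCr values; the numbers are invented and only show how the within-group (paired) and between-group (unpaired) comparisons were set up.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical SCr values (mg/dL) for group 2: baseline vs. month 12
baseline = np.array([1.6, 1.4, 1.9, 1.3, 1.5, 1.7, 1.4, 1.8, 1.2, 1.6, 1.5, 1.3, 1.7])
month12 = np.array([1.3, 1.2, 1.5, 1.2, 1.1, 1.4, 1.3, 1.5, 1.1, 1.2, 1.3, 1.1, 1.4])

# Paired within-group change: Wilcoxon rank test
print(wilcoxon(baseline, month12))

# Unpaired between-group comparison of the changes: Mann-Whitney test
group1_delta = np.array([0.1, -0.2, 0.0, 0.1, -0.1, 0.2, 0.0, 0.1])
group2_delta = month12 - baseline
print(mannwhitneyu(group1_delta, group2_delta))
```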
BASELINE CHARACTERISTICS
Between May 2003 and May 2005, twenty-one patients with a GFR < 70 mL/min/1.73 m2 were randomized into the treatment groups (group 1: n = 8; group 2: n = 13). Patient characteristics and immunosuppressive medication at the time of study inclusion are given in Table 1. There were no statistically significant differences between the groups regarding age, time since engraftment and renal function as determined by SCr and 99m-Tc-DTPA clearance.
EFFECT ON RENAL FUNCTION
SCr levels and GFRs are summarized in Figures 1 and 2, respectively. At month 6, neither group revealed a significant change of SCr or GFR compared to baseline. After 12 months, however, a statistically significant decrease of mean SCr (−0.3 ± 0.4 mg/dL, p = 0.031) and a corresponding rise in mean GFR (8.6 ± 13.1 mL/min/1.73 m2, p = 0.015) could be recorded in patients randomized into group 2, whereas no significant improvement of GFR was noted in patients of group 1.
COURSE OF BLOOD PRESSURE AND BLOOD LIPID LEVELS
In the CNI-free, steroid-containing treatment arm a significant rise in mean diastolic blood pressure was recorded, from 73.2 ± 15.1 mmHg at study inclusion to 83.6 ± 10.3 mmHg at month 12 (11 ± 22 mmHg; p = 0.03), whereas diastolic and systolic blood pressure remained unchanged in patients receiving CNI at a reduced dosage (Fig. 3). At baseline, 7 patients received antihypertensive medication, which was intensified in 2 patients during the course of the study (one in each group). One additional patient from group 1 had to be started on antihypertensive treatment. In contrast to blood pressure, mean levels of cholesterol and triglycerides remained unchanged after conversion to the study medication and during follow-up. At baseline, a single patient in group 2 received a statin owing to the need for highly active antiretroviral therapy to treat coexisting HIV infection. In this individual, a dramatic decrease of triglycerides was noted after conversion to the study regimen. One more patient (group 2) had to be started on a lipid-lowering medication during the course of the study.
SIDE EFFECTS AND REJECTION EPISODE
Observed side effects (SE) consisted of gastrointestinal problems (abdominal discomfort and diarrhea) (14.3%), genital herpes simplex infection (4.8%), pleural effusion and ascites of unknown etiology (polyserositis) (4.8%) and myoclonia (4.8%). One patient experienced acute renal failure with the onset of heavy diarrhea and was discontinued from the study. Under treatment with intravenous fluids, renal function could be restored to the previous level. The allocation of side effects to the two treatment groups and SE-related discontinuation of MMF therapy are depicted in Table 2. In the patient with polyserositis, symptoms re-occurred 17 months after discontinuation of MMF. There was no significant change in serum ALT levels during the study period (Fig. 4). BPR occurred in 2 patients. The first patient, allocated to treatment group 1, underwent liver biopsy because of an increase in liver function tests at week 32. She was placed back on tacrolimus-based immunosuppression after receipt of the pathology report. The second patient underwent liver biopsy because of the appearance of unclear polyserositis in the absence of increased liver function tests. Both rejections were considered mild by the pathologist and responded well to steroid pulse therapy. Altogether, study medication had to be terminated in 5 patients (23.8%). Discontinuation of the study medication occurred due to side effects or rejection (GI side effects n = 2, myoclonia n = 1, BPR n = 2). One additional patient withdrew consent shortly after study inclusion due to the discontinuation of contraceptive medication.
None of the patients required dose adjustments for low leukocyte counts, low platelet counts or anemia, and none developed diabetes.

DISCUSSION

Here, we report the results of a pilot trial of CNI-free versus CNI-reduced immunosuppression under concomitant use of MMF in liver transplant recipients with impaired renal function, applying 99m-Tc-DTPA clearance, the gold standard method, to measure GFR. We found that conversion to MMF in combination with CNI dose reduction significantly improved GFR, whereas the CNI-free regimen in combination with steroids did not ameliorate renal function.
Previous studies have already demonstrated a positive effect of CNI dose reduction on renal function in liver transplant recipients with chronic kidney disease. Various approaches have been studied: de novo immunosuppression with MMF combined with delayed introduction of CNIs [7][8][9], and reduced-dose CNIs either de novo [10] or after reduction more than one year after engraftment [11][12][13][14]. However, in these studies the assessment of renal function had been based on indirect methods such as serum creatinine [5], creatinine clearance [11][12][13] or creatinine-based estimated GFR (eGFR) [7][8][9]. In contrast, we applied 99m-Tc-DTPA clearance, a radioisotope gold standard method of measuring glomerular function, to evaluate the course of renal function after conversion, since it has meanwhile been well established that serum creatinine and creatinine-based equations, e.g. MDRD, only insufficiently reflect true GFR [18,19]. In contrast to the work of Schlitt, Reich and Créput [5,11,14], CNI-free immunosuppression did not reveal any improvement of renal function in our patients. However, since we used an ITT analysis, the number of drop-outs may have influenced our assessment of outcomes. In our study, CNI-free immunosuppression comprised MMF in combination with corticosteroids, which had been chosen because a high number of rejections had been reported in patients on MMF monotherapy. Although only a low dose of 10 mg prednisone was administered, this type of treatment was associated with an increment of 11 mmHg in diastolic blood pressure. This unexpected effect may have counterbalanced the potentially positive effects of CNI withdrawal in our patients in group 1. On the other hand, both MMF in combination with reduced-dose CNI and MMF in combination with corticosteroids proved to be relatively safe, taking into account that acute rejection occurred in only 10% (2/21) of patients. Of note, all rejection episodes responded well to corticosteroid pulse therapy, so that normal hepatic function was restored in all patients with rejection. Both rejection episodes occurred in a "stable phase" of immunosuppression, 4.6 and 7.5 months after conversion, so that the causal relationship to the treatment protocol remains unclear. In our study the frequency of BPR was similar to that reported by Reich et al. (9-14%) but higher than in the trials of Pageaux and Créput (0%) [11][12][13].
As expected, gastrointestinal intolerance was the most frequent side effect and required treatment discontinuation in 2 of 3 patients with this complication. Otherwise, the spectrum of side effects was similar to other studies. Herpes infection is a complication which has previously been attributed to MMF treatment [20]. In contrast, myoclonia, which was observed in a single patient of group 2, has not yet been reported under MMF medication. Finally, polyserositis, which was also observed in one patient, most likely cannot be attributed to MMF medication, because the syndrome re-occurred 17 months after MMF had been discontinued.
This study had been planned as a pilot trial to provide data for a larger multicentre trial. Thus, the low number of patients and the single-centre design may limit the interpretation of our results. Nevertheless, we obtained a clear result not favouring CNI-free immunosuppression in liver transplant recipients with impaired renal function. Of note, this result is based on direct measurement of GFR with a gold standard method rather than indirect assessment of renal function by serum creatinine or eGFR, which may be invalid in patients after liver transplantation [18,19] or with impaired liver function.
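For readers unfamiliar with such equations, the sketch below implements the common 4-variable MDRD estimate that the text contrasts with measured 99m-Tc-DTPA clearance; the IDMS-traceable constant 175 and the example patient are assumptions for illustration, not data from this trial.

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD estimate of GFR (mL/min/1.73 m^2)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient near the cohort's mean baseline SCr of 1.5 mg/dL
print(f"{egfr_mdrd(1.5, 55):.1f} mL/min/1.73 m^2")   # ~48.6
```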
Thus, the obligate use of corticosteroids, with their potentially hypertensive effect, cannot be generally recommended in a CNI-free regimen of MMF-based immunosuppression. In addition, using a gold standard GFR measurement, our trial adds evidence to the existing data that CNI reduction in combination with MMF is a safe and effective method to treat liver transplant recipients with impaired renal function.
Inverse Photoemission Spectroscopy of Multiwall Carbon Nanotubes
Multiwall Carbon Nanotubes (MWCNTs) were synthesized by Chemical Vapor Deposition (CVD). Two different procedures were used to grow MWCNT films roughly aligned along the normal to the SiO2/Si(111) substrate. Inverse Photoemission Spectroscopy measurements on these samples show the existence of resonances which can be traced back to a flat graphene sheet. The unoccupied valence band is fairly similar to that shown by graphite, except for an additional intensity in the vicinity of the Fermi level. This resonance could be interpreted either as a tube-tip end effect or as van Hove singularities in the density of states.
I. INTRODUCTION
In the last few years an important number of scientists from different disciplines have concentrated their efforts on the field of nanoscale materials science and technology. In particular, the study of Carbon Nanotube (CNT) properties has attracted special interest. Field emitters, diodes, transistors, flat-panel displays, SPM tips, hydrogen storage, reinforced polymers and molecular delivery are only a few of their possible applications.
Photoemission experiments [1] have demonstrated that the occupied electronic structure of MWCNT arrays is similar to that of highly ordered pyrolytic graphite (HOPG). The main differences are found in the existence of resonances very close to the Fermi edge [1]. In SWCNTs these states were attributed to van Hove singularities of the density of states (DOS). Due to the additional quantization along the nanotube circumference, peaks appear symmetrically around the Fermi level (εF) and their energy splitting depends mainly on the diameter of the tubes. In the case of MWCNTs, the nature of these additional intensities close to the Fermi level is still controversial. They have been attributed to emission from the tube tips [2] or to resonances derived from a mixture of the π bands and the 1D subbands of the nanotubes [3].
It is well known that CNTs can be grown by Chemical Vapor Deposition (CVD) methods, by nucleation of C around metallic particles of Fe, Co and Ni [4]. Nanoparticles of these metals act as catalysts in the decomposition of hydrocarbon molecules such as methane, ethane or acetylene.
In this work we used two CVD procedures in order to synthesize aligned MWCNT films on flat SiO2/Si(111) substrates. One of the films was obtained by pyrolysis of Iron (II) Phthalocyanine [5]. This compound has a double function, first supplying iron atoms for the formation of the catalytic nanoparticles and then serving as a carbon source. In the other method, pyrolysis of acetylene over nanostructured Fe films was used [6]. Most of the previous measurements of the electronic structure of these systems have been done on states below εF, and there is not much information on the empty electronic states. Consequently, the aim of this paper is to describe the unoccupied electronic structure of MWCNTs above εF using IPS.
II. EXPERIMENTAL SECTION
Multiwall Carbon Nanotubes were synthesized by Thermal Chemical Vapor Deposition (CVD) in a horizontal tube furnace (4.5 cm inner diameter).
Growth method I (sample I): This procedure has been previously reported [5]. In a typical reaction, 0.05 g of Fe-Pc was decomposed over SiO2/Si(111) wafers at 1000 °C. The reaction under these conditions takes about 20 min.
Growth method II (sample II): Prior to introduction into the furnace, Fe thin films were electron-beam evaporated onto SiO2/Si(111) [6]. The coverage of the iron films was estimated by Auger Electron Spectroscopy (AES) to be 60% atomic abundance. Post-treatment annealing of this film (800 °C for 20 min in the CVD furnace) in the presence of hydrogen induces the formation of Fe nanostructures (np-Fe). The CNT growth was carried out by catalytic decomposition of acetylene at 800 °C for half an hour.
Scanning Electron Microscopy (SEM) micrographs were obtained from the as-prepared samples in a LEO SEM 1420VP. Transmission Electron Microscopy (TEM) measurements were performed on dispersed samples. The micrographs were taken in a Zeiss EM900 operated at 80 kV.
Inverse photoemission spectroscopy (IPS) was used to obtain information regarding the unoccupied density of states. The signal corresponds to the intensity of photons emitted by a sample bombarded with low-energy electrons. In a process much like the generation of X-rays, Vacuum Ultraviolet (VUV) photons are created by an electronic transition in the solid. The intensity of emitted photons is closely dependent on the unoccupied density of states (uDOS). Measurements were performed in a home-built isochromat spectrometer [7].
III. RESULTS
In both methods, the resulting product includes the formation of densely packed CNTs with a preferential orientation perpendicular to the substrate surface, as can be seen in Figures 1 and 2. Collectively, the CNTs form a thin film of constant thickness, which looks like a turf or carpet.
Figure 1 shows a series of SEM micrographs of a CNT film grown by pyrolysis of Iron Phthalocyanine (sample I). Figure 1(a) shows the side view of an 18 µm thick CNT film. Figures 1(b) and 1(c) are a magnification of the lateral profile and the top view, respectively.
Figure 2 shows the SEM micrographs of a CNT array grown by pyrolysis of acetylene over an annealed 60%-iron film (sample II). The thickness of the CNT film estimated from Figure 2(a) is 82 µm. Figures 2(a), 2(b), and 2(c) correspond to the same views shown in Fig. 1. Transmission electron micrographs of dispersed tubes were also analyzed. For sample I the mean diameter was 56 nm, whereas for sample II the mean diameter was 49 nm.
From the SEM and TEM micrographs it is possible to verify some differences between samples I and II. The first one is the film thickness, as described above. Additionally, sample I presents straight tubes, whereas the CNTs in sample II have a helical structure.
From the top view it can be verified that although the MWCNT films show a good microscopic order, most of the tube tips on the top of the film are bent with a random orientation in the plane.

Figure 3(a) shows normal-incidence IP spectra of samples I and II taken over a wide energy range. The energies are measured with respect to the Fermi level, which is fixed using normal-incidence IP spectra from an Al sample. The main features in both spectra have been labeled A and A'; they are both close to 3 [eV] but with a clear shift in energy between the two samples, indicating some electronic structure differences between the samples. B and B' are broad resonances located close to an energy of 12.5 [eV].

With the idea of searching for states closer to ε_F, we collected a series of spectra with a very low e-beam dose per point to minimize induced damage and thus obtain a higher definition of the spectra in the corresponding energy region. Indeed, as shown in Fig. 3(c), resonance F, close to 0.8 [eV], is consistently present in all measured spectra. This resonance is compared with the emission from HOPG, which over the same energy range only shows a smooth increase, with no significant variations of the photon intensity. We have searched, without success, for an angular dependence of the different spectral features. This last result seems to be consistent with the micrographs in Fig. 1, which show that the tubes have no preferential orientation on the top of the CNT film.

IV. DISCUSSION
The general belief, at least for large-diameter nanotubes, is that most of the electronic structure of Carbon Nanotubes can be traced back to the two-dimensional material from which they are constructed, graphene. This single atomic layer of graphite consists of a 2-D honeycomb structure of sp2-bonded carbon atoms. The in-plane σ orbitals form strong covalent bonds with neighboring carbon atoms; therefore, occupied σ and unoccupied σ* bands are formed. The third C 2p electron is in a 2p_z orbital perpendicular to the plane and forms a weaker π bond with the 2p_z orbitals of the atoms in neighboring sites. Due to the weaker bonding, the splitting between the occupied and the unoccupied π bands is smaller; thus the π* band appears closer to ε_F.
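For reference, the π/π* splitting described above follows directly from the standard nearest-neighbor tight-binding dispersion of graphene; the short sketch below, with a typical literature value for γ0 (an assumption, not a value fitted in this work), shows the bands closing at the K point, which is why the π* band sits closest to ε_F.

```python
import numpy as np

# Nearest-neighbor tight-binding pi bands of graphene (the standard
# Saito-Dresselhaus form); gamma0 is a typical literature value.
GAMMA0 = 2.9          # eV
A = 0.246             # nm, graphene lattice constant (= sqrt(3) * a_CC)

def pi_bands(kx, ky):
    """Return (E_pi, E_pi_star) in eV at wavevector (kx, ky) in 1/nm."""
    f = (1.0
         + 4.0 * np.cos(np.sqrt(3) * kx * A / 2) * np.cos(ky * A / 2)
         + 4.0 * np.cos(ky * A / 2) ** 2)
    e = GAMMA0 * np.sqrt(np.clip(f, 0.0, None))   # clip guards rounding
    return -e, +e                                  # bonding pi, antibonding pi*

# The bands touch at the K point, where the pi-pi* splitting closes:
K = (2 * np.pi / (np.sqrt(3) * A), 2 * np.pi / (3 * A))
print(pi_bands(*K))    # ~ (0.0, 0.0): gapless Dirac point
```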
Photoemission spectra from CNTs are dominated by emission from the π and σ bands, with resonances at 3 [eV] and 8 [eV] below ε_F, respectively [2]. One of the main differences with graphite is the extra emission close to ε_F, which has been observed in SWCNTs, MWCNTs, and even in more complex carbon "onion"-like structures [10]. In SWCNTs, the photoemission data show clear oscillations in the density of states, in agreement with theoretical predictions based on van Hove singularities of the DOS. For MWCNTs this increased intensity has been explained in two ways. Suzuki et al. [2], using photoelectron spectro-microscopy, have studied the valence band structure along a single tube axis. Their spectra showed an increased intensity at ε_F only from the spatial region close to the top of the tubes. They explain this behavior as structural defects of the tube tips caused by the insertion of pentagons into the graphene network, hence implying a higher density of dangling bonds at the spherical tips than in the cylindrical side walls. On the other hand, Choi et al. [3], using photoemission and density functional calculations, postulate that the additional ε_F intensity is due to mixing between the π bands and a truly 1D subband between layers.
If the behavior of the unoccupied valence band in CNTs is similar to the occupied states, as expected, the resulting spectra should resemble those obtained from graphite.
Figure 3(b) shows the IPS spectrum from HOPG. There are two dominant features: one at 1.8 [eV] above ε_F, which corresponds to the non-dispersing π* band resonance, and the image state at 3.5 [eV]. For HOPG no increased intensity is observed close to ε_F. When we compare the graphite spectrum with the one from the MWCNT sample, we can observe a small shoulder in the emission which seems to be related to the π* bands, with an obvious reduction in intensity. Nevertheless, this feature is always present in all collected spectra, with a small fluctuation in energy as we move to different places on the sample, but it remains unchanged within the resolution of the spectrometer. Above 3.5 [eV] we detect only one reproducible structure in each sample, around 12.5 [eV] (B and B'). By comparison with graphite, the energy of this fairly broad resonance corresponds to the σ* band. Probably the most significant departure from HOPG is the complete disappearance of the image resonance D, which, by the very nature of this state, requires a flat surface to exist. If this type of state exists in CNTs, the symmetry of the tubes must induce a shift in the energy of the resonance. Spectra were also collected from sputtered HOPG in order to compare the effects of generating many defects on the HOPG surface with the disorder-induced states in the NTs' spectra. There are two important effects on HOPG: the complete disappearance of resonance D, as expected, and the increased intensity (E) in the IP spectrum close to ε_F. These extrinsic surface states, which are linked to disorder, have a higher contribution to the DOS for energies of the order of 1 [eV] and below. In fact, this result is consistent with the difficulties we had in getting a low-noise measurement on CNT samples close to ε_F. Most likely this noise was due to e-beam-induced damage of the tubes, hence the need for lowering the dose. Even though both helical and straight tubes are fairly well aligned with the substrate normal, we see no angular dependence of the IPS intensity of any feature. This result, together with the SEM images of the top of the film, leads us to believe we only have access to the tips of the tubes, or to whatever tubes are lying on the top of the CNT film. In both cases no dependence on the orientation of the e-beam is expected. Resonances A and A' could be related to band structure effects of the tubes, but further measurements are required to clarify this point, since the energy of the resonances could be linked to the tube diameter.
For resonance F, which appears robustly in the CNT samples, we could not narrow down the origin of this feature to a single explanation. Two competing explanations are: the resonance is linked to dangling bonds at the tips of the tubes; or it reflects the 1D nature of the tubes' DOS, which can manifest itself even at these very large tube diameters. This is clearly an open question which, with the information available to us now, cannot be resolved.
FIG. 1: SEM micrographs of the CNT film grown by pyrolysis of Iron Phthalocyanine (sample I): (a) side view of the 18 µm thick film; (b) magnified lateral profile; (c) top view.
FIG. 2: SEM micrographs of the CNT array grown by pyrolysis of acetylene over an annealed 60%-iron film (sample II); panels (a)-(c) correspond to the same views as in Fig. 1, with a film thickness of 82 µm estimated from (a).
FIG. 3: (a) Normal-incidence IP spectra of samples I and II over a wide energy range, referenced to the Fermi level fixed with an Al sample; features A and A' lie close to 3 [eV], and broad resonances B and B' close to 12.5 [eV].
CsbZIP1-CsMYB12 mediates the production of bitter-tasting flavonols in tea plants (Camellia sinensis) through a coordinated activator–repressor network
Under high light conditions or UV radiation, tea plant leaves produce more flavonols, which contribute to the bitter taste of tea; however, neither the flavonol biosynthesis pathways nor the regulation of their production are well understood. Intriguingly, tea leaf flavonols are enhanced by UV-B but reduced by shading treatment. CsFLS, CsUGT78A14, CsMYB12, and CsbZIP1 were upregulated by UV-B radiation and downregulated by shading. CsMYB12 and CsbZIP1 bound to the promoters of CsFLS and CsUGT78A14, respectively, and activated their expression individually. CsbZIP1 positively regulated CsMYB12 and interacted with CsMYB12, which specifically activated flavonol biosynthesis. Meanwhile, CsPIF3 and two MYB repressor genes, CsMYB4 and CsMYB7, displayed expression patterns opposite to that of CsMYB12. CsMYB4 and CsMYB7 bound to CsFLS and CsUGT78A14 and repressed their CsMYB12-activated expression. While CsbZIP1 and CsMYB12 regulated neither CsMYB4 nor CsMYB7, CsMYB12 interacted with CsbZIP1, CsMYB4, and CsMYB7, but CsbZIP1 did not physically interact with CsMYB4 or CsMYB7. Finally, CsPIF3 bound to and activated CsMYB7 under shading to repress flavonol biosynthesis. These combined results suggest that UV activation and shading repression of flavonol biosynthesis in tea leaves are coordinated through a complex network involving CsbZIP1 and CsPIF3 as positive MYB activators and negative MYB repressors, respectively. The study thus provides insight into the regulatory mechanism underlying the production of bitter-tasting flavonols in tea plants.
Introduction
Tea plants (Camellia sinensis) synthesize diverse flavonoids, such as catechins, flavonols, and anthocyanins and their derivatives, at significant levels in tender tissues, such as apical buds and young leaves. These flavonoids, together with caffeine and theanine, constitute the major bioactive secondary metabolites in teas, contributing to their pleasant flavors, rich tastes, and multiple health benefits, features that are of vast importance given that tea is the most consumed nonalcoholic beverage in the world [1][2][3]. Both catechins, primarily epigallocatechin-3-gallate (EGCG), and flavonols, mainly kaempferol glycosides, are the major contributing factors to the bitter and astringent tastes, with very low sensory doses being recognized by the human tongue 4,5. The tender shoot tips of tea plants are usually picked during early spring, when the weather is still cool and misty with less sunlight radiation, to ensure the highest quality of teas. It has been well documented that these tender shoot tips contain higher levels of amino acids, mainly theanine, and fewer bitter-tasting catechins and flavonols in spring 6. Indeed, tea plant leaves often accumulate higher levels of flavonols and catechins, which may result from high-intensity light irradiation during the summer-autumn seasons 7. Light intensity and light quality significantly affect the accumulation of characteristic secondary metabolites in tea plant leaves 8,9. Both red and blue light promote the production of catechins and caffeine 9, while UV-A and UV-B promote the production of anthocyanins 10. Thus, shading of tea plants has been frequently applied in tea gardens to reduce the contents of these bitter-tasting and astringent flavonoids in tea plant leaves 8,11,12. Transcriptome and metabolite profiling revealed that transcription factors (TFs) involved in light perception and signaling may be connected with TFs regulating flavonoid biosynthetic genes 8,9. However, the genetic factors and detailed molecular mechanisms underlying how light exposure promotes and shading reduces the accumulation of flavonoids in tea plant leaves are not yet understood 8,13,14. Since the levels of flavonols significantly affect tea flavor and health function, it is highly desirable to understand how environmental factors regulate their biosynthesis.
Flavonols are a particular class of flavonoids that are present in most green leaves. The biosynthesis and regulation of flavonol glycosides in tea plant leaves attracted our attention, as they are the major bitter-tasting substances in tea leaves grown under strong light conditions. The branched flavonol pathway has been studied extensively, including work on common shared enzymes such as F3H, F3′H, and F3′5′H, as well as specific enzymes such as flavonol synthase (FLS) and UDP-glucose:flavonol glycosyltransferases (UGTs) 6,15. Flavonol-specific FLS competes for the precursor dihydroflavonols with dihydroflavonol 4-reductase (DFR), leading to varying amounts of anthocyanin and proanthocyanidin synthesis 16,17. Flavonol synthesis is also highly regulated at the transcriptional level by several tissue-specifically expressed R2R3-MYB transcriptional activators, such as Arabidopsis AtMYB11, 12, and 111 18, apple MYB12 and MYB22 19, and grapevine VvMYBF1 20. Meanwhile, R2R3-MYB repressors, such as Arabidopsis AtMYB7 and AtMYB4, have been demonstrated to be regulators of flavonoid biosynthesis in plants 21. These activators and repressors, as well as other TFs, are specifically responsive to certain environmental cues and together form a regulatory network to fine tune flavonol biosynthesis in plants 22,23.
Light is a crucial signal that affects plant growth and development and involves light receptors, phytohormones, signaling proteins, and many downstream effectors, including metabolic enzymes and developmental regulators 24,25. Several photoreceptors are characterized to respond to different wavelengths of light: the red/far-red light photoreceptor phytochromes, the blue/UV-A light photoreceptors cryptochromes and phototropins, and the UV-B light photoreceptor UVR8 26,27. These activated photoreceptors directly or indirectly modify the stability of primary TFs such as ELONGATED HYPOCOTYL 5 (HY5), PHYTOCHROME INTERACTING FACTOR 3 (PIF3), and PHYTOCHROME INTERACTING FACTOR 4 (PIF4) 28. It is well known that plants accumulate higher levels of flavonols under high light conditions or UV-B irradiation than under regular light irradiation 29. The E3 ubiquitin ligase CONSTITUTIVE PHOTOMORPHOGENIC 1 (COP1) negatively controls photomorphogenesis by interacting with SUPPRESSOR OF PHYTOCHROME A (SPA1-SPA4) proteins to inhibit photomorphogenic growth 30. HY5 is a key photomorphogenesis-promoting factor downstream of COP1 and is destabilized by COP1 in darkness 31. HY5 directly regulates the promoters of thousands of genes involved in plant development and flavonoid biosynthesis 32. HY5 can regulate flavonol biosynthesis by mediating UV-B or light irradiation-induced AtMYB12 activation and flavonol accumulation 33. In the second branch of the pathway, the basic helix-loop-helix (bHLH) TFs PHYTOCHROME-INTERACTING FACTORs (PIFs) promote skotomorphogenesis and repress photomorphogenesis under red and far-red light conditions 34. PIFs play diverse roles in plant growth and development by positively or negatively regulating a large number of downstream genes 35. PIF3 plays multiple roles in light signaling, as a negative factor in hypocotyl elongation and anthocyanin biosynthesis and a positive factor in the plant shading response 36. In contrast to the case for HY5, light irradiation leads to PIF3 protein phosphorylation and degradation 37. HY5 and PIFs are oppositely regulated by light. PIF3 and HY5 interact with cryptochromes and UVR8 to regulate light-responsive genes 38. HY5 and PIF1/PIF3 interact with each other directly and antagonistically regulate reactive oxygen species-responsive genes and the greening of etiolated seedlings upon light irradiation 39. However, how these factors are involved in light- or shading-regulated flavonoid biosynthesis remains unknown. An improved understanding of these mechanisms in tea plants is highly important given the common application of shading to tea plants to mediate the quality of tea production 8,9,11.
This study attempts to dissect the comprehensive regulatory network mediating light-and shading-regulated biosynthesis of flavonols in tea plants. UV-B or shading treatment prominently altered bitter-tasting flavonol contents in tea plant leaves. UV-B radiation acted through CsbZIP1-CsMYB12 on the key flavonol biosynthetic genes CsFLS and CsUGT78A14, while shading repressed flavonol biosynthesis not only by inactivation of HY5-like CsbZIP1 but also via activation of CsPIF3, which further activated the MYB repressor genes CsMYB4 and CsMYB7. Transactivation assays revealed that CsMYB4 and CsMYB7 repressed CsFLS. We therefore demonstrated a complex regulatory network composed of both activators and repressors in the regulation of bitter-tasting flavonol production by UV-B exposure and shading treatment in tea plant leaves.
Materials and methods
Plant material and growth conditions
"Shu Cha Zao", "Long Jing", "Huang Shan Bai Cha", "Zi Juan", and "Huang Kui" tea plants were grown in the experimental tea garden of Anhui Agricultural University (31°55′ N, 117°12′ E; Hefei City, Anhui Province, China). UV-B conditions (300 μW cm−2, photoperiod of 12 h per day) were provided using a special lamp (PHILIPS NARROWBAND TL 20 W, Poland) with a characteristic peak at 311 nm, at 25/18°C light/dark. The shading experiment consisted of two treatments: tea plants with natural growth (control) and tea plants with 90% shading treatment. The nets were placed over the plants on 27 July 2019, when a new round of bud burst started. Second leaves of the same growth stage were collected throughout the shading treatments (0 h, 4 h, 8 h, 12 h, 2 days, 4 days, 8 days, and 14 days after shading). All samples were stored at −80°C until use. Methyl jasmonate (MeJA) treatment experiments were performed as described previously 40. Tea plant leaves sprayed with 100 µM MeJA solution or distilled water (control) were collected at 0, 12, 24, and 48 h after the onset of treatment. The polyethylene glycol (PEG) and NaCl treatment experiments were performed as described previously 41. Briefly, tea plant seedlings were treated with 25% PEG or 200 mM NaCl for 0, 24, 48, and 72 h to mimic drought and salinity stress conditions, respectively. For the cold treatment experiments, tea plant leaves were collected during the cold acclimation (CA) process: control (CK), 25°C; CA1-6 h, 10°C for 6 h; CA1-7 d, from 10°C to 4°C over 7 days; CA2-7 d, from 4°C to 0°C over 7 days; DA-7 d, recovery under 25°C to 20°C for 7 days, as described previously 42. Transcriptome data from experiments with tea cv. "Shu Cha Zao" were retrieved from the tea plant information archive (http://tpia.teaplant.org/index.html).
Detection of flavonols from leaves
Leaves of "Shu Cha Zao", "Long Jing", "Huang Shan Bai Cha", "Zi Juan", and "Huang Kui" tea plants and different tissues of "Shu Cha Zao" were ground to a fine powder using a mortar and pestle in liquid nitrogen. The powdered leaf samples (0.2 g) were extracted with 2 mL 80% methanol by sonication at room temperature for 5 min, followed by centrifugation at 4500×g for 10 min. The residues were re-extracted twice by this method. The supernatants were filtered through a 0.22-μm membrane. Flavonols were analyzed according to previously described UPLC methods 6. Methanol, acetonitrile, and acetic acid of chromatographic grade were purchased from Shanghai GuoMei Pharmaceutical Co. UPLC-grade water was prepared from distilled water using a Milli-Q system (Millipore Laboratory, Bedford, MA, USA). Flavonols were detected by ultrahigh-performance liquid chromatography (UPLC) on an Agilent InfinityLab Poroshell HPH-C18 column (4.6 × 100 mm, 2.7 μm; Agilent, Santa Clara, CA, USA). The samples (5 μL injection volume) were loaded on an Inertsil ODS-3 column and eluted at a flow rate of 1.0 mL/min. Mobile phases A and B were composed of 0.1% acetic acid in distilled water and acetonitrile, respectively. The elution program was as follows: calibration with 95% A (1% acetic acid) and 5% B (100% acetonitrile), then a linear gradient from 5 to 10% B over 0−2 min, from 10 to 20% B over 2−15 min, from 20 to 30% B over 15−30 min, and from 30 to 55% B over 30−55 min, followed by washing and equilibration. The flavonols were detected at a wavelength of 350 nm, and the column temperature was set at 35°C 6.
Quantitative real-time PCR
Total RNA was isolated from leaves with RNAiso Plus and RNAiso Mate for Plant Tissue Kits (TaKaRa, China). Double-stranded cDNA was prepared using the Super SMART PCR cDNA Synthesis Kit (Clontech, Palo Alto, USA) following the manufacturer's instructions. Quantitative real-time PCR (qRT-PCR) was carried out using the SYBR Green method for the detection of double-stranded PCR products (TaKaRa, Dalian, China). An IQ5 real-time PCR detection system (Bio-Rad) was utilized in this study as previously described. The tea β-actin gene was used as an internal reference gene (HQ420251.1, https://www. ncbi.nlm.nih.gov/nuccore/). qRT-PCR data were generated using an Applied Biosystems 7900HT instrument, and analyses were performed using SDS software (Applied Biosystems). PCR efficiencies were calculated using Lin-Reg software. The primers for representative genes in this study were designed by Primer Premier 5.0 software (PREMIER Biosoft company; Tables S1 and S2).
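For illustration, relative expression from such qRT-PCR data is commonly computed with the 2^−ΔΔCt method against the β-actin reference; the sketch below assumes ideal amplification efficiency (the LinReg efficiency correction used here is omitted) and uses made-up Ct values.

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation, assuming
# perfect doubling per cycle. Ct values below are hypothetical.

def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change of a target gene vs. beta-actin, treated vs. control."""
    d_ct_treated = ct_target - ct_actin          # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. CsFLS after UV-B vs. untreated control (invented Ct values):
print(relative_expression(22.1, 18.0, 25.3, 18.1))  # ~8.6-fold induction
```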
Sequence alignment and phylogenetic analysis
In this study, amino acid sequence alignment analysis of MYBs was conducted using DNAMAN 8.0 software (Lynnon, Quebec, Canada). A phylogenetic analysis using the amino acid sequences of MYB members was performed using MEGA 7.0 software (http://www.megasoftware.net/, Mega Software, State College, PA, U.S.A.), and a phylogenetic tree was constructed using neighbor-joining distance analysis. The tree nodes were evaluated with the bootstrap method for 1000 replicates, and the evolutionary distances were computed using the p-distance method. Sequence information used in the phylogenetic tree is shown in Table S3.
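A hypothetical sketch of an equivalent neighbor-joining workflow in Biopython is given below; the authors used MEGA 7.0 with p-distances and 1000 bootstrap replicates, so this only approximates their procedure (the 'identity' model yields 1 − fractional identity, i.e., the p-distance, and the bootstrap step is omitted). The alignment file name is assumed.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumed input: an aligned FASTA of MYB amino acid sequences.
alignment = AlignIO.read("myb_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")             # 1 - identity = p-distance
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor-joining
tree = constructor.build_tree(alignment)
Phylo.draw_ascii(tree)
```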
Subcellular localization
Sequence information on CsMYB4, CsMYB7, CsMYB12 and CsPIF3 was obtained from the tea plant genome (http:// tpia.teaplant.org/). Sequence information of CsbZIP1 was obtained from transcriptome data 8 . The ORFs of CsMYB4, CsMYB7, CsMYB12, CsbZIP1, and CsPIF3 within the entry vector pDONR211 were cloned into the destination binary vector, namely, PK7WGF2.0, for subcellular localization studies. Positive vectors in which the ORF was fused at the N-terminus of GFP were obtained and named PK7WGF2.0-CsMYB4, PK7WGF2.0-CsMYB7, PK7WGF2.0-CsMYB12, PK7WGF2.0-CsbZIP1, and PK7WGF2.0-CsPIF3, respectively. As described above, the plasmids were introduced into A. tumefaciens strain GV3101 to select a positive colony for infiltration of Nicotiana benthamiana. After 48 h of infiltration, leaves were examined using an Olympus FV1000 confocal microscope (Olympus, Tokyo, Japan). GFP fluorescence signals were excited with a 488-nm laser, and the emitted light was recorded from 500 to 530 nm to display the subcellular localization of CsMYB4, CsMYB7, CsMYB12, CsbZIP1, and CsPIF3.
Overexpression of CsMYB12 in soybean hairy roots
CsMYB12 was cloned into pB2WG7 for overexpression, with GUS as a control. These confirmed constructs were transformed into Agrobacterium rhizogenes strain K599 by electroporation. Positive colonies were selected on LB-agar medium containing selective antibiotics at 28°C. Positive K599 colonies were used to generate hairy roots from germinated soybean (Glycine max) seeds. Soybean cultivar "Tianlong #1" seeds were surface sterilized and germinated in Petri dishes containing sterilized filter paper. The surfaces of 7-day-old green cotyledons were wounded and infected with K599 harboring the overexpression vectors. The transgenic hairy roots were subjected to semiquantitative or qRT-PCR analyses to validate their identity. The transgenic hairy roots were maintained on half-strength Murashige and Skoog (MS) medium containing 7.5 mg L−1 phosphinothricin (ppt) for selection in a growth chamber at 23°C with a 16 h/8 h light/dark photoperiod and subcultured every 3-4 weeks.
Yeast one-hybrid and two-hybrid assays
Yeast one-hybrid (Y1H) and two-hybrid assays were conducted as previously described 43. Y1H assays were performed using the Matchmaker Gold Yeast One-Hybrid System (Clontech). To construct transcription factor-expressing cassettes, the ORFs of CsMYB4, CsMYB7, CsMYB12, CsPIF3, and CsbZIP1 were recombined into the pGADT7 vector (Clontech, Palo Alto, USA). The cloned promoter fragments of CsMYB12, CsMYB7, CsFLS, and CsUGT78A14 were inserted into the pHIS2.1 vector. The yeast strain Y187 containing the recombinant pHIS2.1 vector was grown on -Trp-Leu (-T-L) screening medium for 3 days at 30°C. Then, the interactions between the MYB TFs and promoter fragments were detected on medium lacking Trp, Leu, and His (-T-L-H) for 3-5 days at 30°C. Empty pGADT7 vectors were used as controls.
For yeast two-hybrid (Y2H) assays, the ORFs of the CsMYB4, CsMYB7, CsbZIP1, and CsMYB12 genes were recombined into the pGBKT7 and pGADT7 vectors, respectively (Clontech, Palo Alto, USA). The recombinant plasmids were cotransformed into the yeast strain AH109 and cultured on medium lacking Trp and Leu (-T-L) for 3 days at 30°C. For interaction screening, the yeast cells were transferred to medium lacking Trp, Leu, His, and adenine (-T-L-H-A) with X-gal for 3-5 days at 30°C. Empty pGADT7 and pGBKT7 vectors were used as controls.
Luciferase reporter assay
The ORFs of CsMYB4, CsMYB7, CsMYB12, CsbZIP1, and CsPIF3 were recombined into the p2GW7 effector expression vector, as described previously 43. The cloned promoter fragments of CsMYB12, CsMYB7, and CsFLS were inserted into the pGreen-0800-LUC reporter. Protoplasts derived from Arabidopsis thaliana were used as the materials for transient transfection. Each transfection contained the GUS plasmid for normalization. For transient transfection, 1 μL of GUS plasmid, 5 μL of LUC reporter, and 10 μL of effector were mixed together and transformed into Arabidopsis thaliana protoplasts using 40% polyethylene glycol. After reaction at 24°C for 12 h, the LUC and GUS activities were measured using a Multimode Plate Reader (Victor X4, PerkinElmer, http://www.perkinelmer.com/). The promoter activity was calculated as the ratio of LUC to GUS activity.
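As a worked illustration of this normalization, promoter activity is the per-transfection LUC/GUS ratio (GUS corrects for transfection efficiency); the readings below are invented, chosen so that the effector gives roughly the 3-fold activation reported later for CsMYB12 on proCsFLS.

```python
import statistics

# Promoter activity = LUC / GUS per transfection; readings are hypothetical.
def promoter_activity(luc_readings, gus_readings):
    ratios = [l / g for l, g in zip(luc_readings, gus_readings)]
    return statistics.mean(ratios), statistics.stdev(ratios)

control = promoter_activity([1200, 1100, 1350], [950, 900, 1010])
with_effector = promoter_activity([3900, 4100, 3600], [980, 940, 1000])
print(f"fold activation ~ {with_effector[0] / control[0]:.1f}x")  # ~3.1x
```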
Suppression of CsMYB12 and CsbZIP1 in tea shoot tips by using candidate antisense oligonucleotides
Since tea plant transformation technology has not yet been developed, knockdown of the target gene with antisense oligonucleotides (asODNs) containing segments complementary to the target gene was used to examine how CsbZIP1 and CsMYB12 affect flavonol synthesis in tea shoot tips 44,45. The asODNs were selected by using Soligo software (http://sfold.wadsworth.org/cgi-bin/soligo.pl) with CsMYB12 and CsbZIP1 as input sequences (Table S2). To silence the genes, fresh shoot tips (with the apical bud and 1st leaf) of the tea plant variety "Shu Cha Zao" were incubated in 2 ml Eppendorf tubes containing 40 μM asODN-CsMYB12 or asODN-CsbZIP1 solution for various times. Shoot tips incubated with the same concentrations of sense oligonucleotides (sODN) were used as the control. Shoot tips were sampled at different time intervals for RNA and flavonol analysis.
Bioinformatic analysis
The GenBank accession numbers for the genes characterized in this study are as follows: CsMYB12 (MT498592), CsMYB4 (MT498593), CsMYB7 (MT498594), CsbZIP1 (MT498595), and CsPIF3 (MT498596). A multiple sequence alignment of the amino acid sequences of the CsMYB TF proteins of tea plant, rice, and Arabidopsis was generated with ClustalW. An unrooted phylogenetic tree based on the sequence alignments was constructed using MEGA 7.0 software (http://www.megasoftware.net/) and the neighbor-joining method with the following parameters: pairwise alignment and 1000 bootstrap replicates. All expression heatmaps were generated with the pheatmap R package.
Statistical analysis
All experimental data are taken from at least three independent experiments. For C. sinensis shoot tip antisense inhibition experiments, at least 10 independent plants were analyzed with three repeats each. For Y2H assays, subcellular localization, and transgenic hairy root experiments, representative pictures are shown. Differences at the 95% confidence level in two-tailed Student's t test were considered significant.
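A minimal sketch of the significance test used throughout (two-tailed Student's t test on three replicates) is shown below; the expression values are invented for illustration.

```python
from scipy import stats

# Two-tailed Student's t test mirroring the paper's significance criteria
# (*p < 0.05, **p < 0.01); replicate values below are made up.
control = [1.00, 1.08, 0.95]     # e.g. relative CsFLS expression, full light
shaded  = [0.31, 0.28, 0.36]     # same gene under 90% shading

t, p = stats.ttest_ind(control, shaded)           # two-tailed by default
stars = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"t = {t:.2f}, p = {p:.4f} -> {stars}")
```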
UV-B and shading treatments regulated flavonol biosynthesis in tea plant leaves
To understand the molecular regulatory mechanisms underlying the regulation of flavonol synthesis by light exposure, we conducted both UV-B radiation and shading treatment experiments on tea plant seedlings hydroponically grown in SK nutrient solution and on tea plants grown in tea gardens. Tea plant seedlings were grown hydroponically under UV-B conditions (300 μW cm−2, photoperiod of 12 h per day, provided with a special lamp (PHILIPS NARROWBAND TL 20 W, Poland); Fig. 1a, Fig. 2c). The expression patterns of genes associated with flavonol biosynthesis were next analyzed to understand how light regulates flavonol accumulation in tea plant leaves. Two structural genes and a MYB TF (Fig. 1d) of the flavonol biosynthetic pathway, namely, CsFLS, CsUGT78A14, and CsMYB12, were revealed by qRT-PCR analysis to be significantly activated by UV-B radiation (Fig. 1c, e). In addition, flavonol biosynthetic pathway genes displayed significantly lower expression levels in shaded leaves than in tea plant leaves fully exposed to sunlight (Fig. 2d). Moreover, CsCOP1 transcript levels were upregulated by shading treatment, but CsUVR8 transcript levels were repressed (Fig. S4a, b).
CsMYB12 mediated light-induced flavonol biosynthesis
We next identified the TFs that may regulate the light-induced or shading treatment-repressed biosynthesis of flavonols. When analyzing transcriptome data from previous experiments, we identified an AtMYB12 homolog, the MYB TF TEA009412 (tentatively named CsMYB12), which is more highly expressed in the tea plant varieties Longjing (LJ) and Shu Cha Zao (SCZ) than in Huang Shan Bai Cha (HSBC), Huang Kui (HK), and Zi Juan (ZJ), corresponding to the higher flavonol contents in LJ and SCZ than in the HSBC, HK, and ZJ varieties (Fig. S5a, b). Another AtMYB12 homolog, TEA016401, was expressed at very low levels in most tissues and did not respond to light radiation or shading treatment (Figs. S6, S7). Light and shading treatment experiments with SCZ and qRT-PCR verification of CsMYB12 transcripts also showed that CsMYB12 was repressed by shading treatment, coincident with the reduced total flavonols (Fig. 3a), and that CsMYB12 transcript levels in various tissues of tea plants were tightly associated with the total flavonol contents in these tissues (Fig. 3b). When CsMYB12 was overexpressed in soybean hairy roots (Fig. S8), it also triggered significant increases in flavonol and flavanone biosynthesis (Figs. 3c, S9). Metabolite profiling revealed that K-3-O-Glu, K-7-O-Glu, A-7-O-Glu, and A-8-C-O-Glu levels were significantly higher in the CsMYB12-overexpression (OE) hairy root lines than in the GUS control lines (Fig. 3d). Naringenin, kaempferol, and eriodictyol contents were markedly increased in the CsMYB12-OE hairy root lines compared with the GUS lines (Fig. 3e). Thus, CsMYB12 is a flavonol biosynthesis regulator in tea plants. Consistent with this, GFP-CsMYB12 fusion protein signals in tobacco epidermal cells were observed in the nucleus, suggesting its function as a TF (Fig. 3f). We further investigated the regulatory function of CsMYB12. Yeast one-hybrid (Y1H) studies showed that, as a nuclear R2R3-MYB TF, CsMYB12 could bind to the promoters of the critical flavonol synthetic genes CsFLS and CsUGT78A14, whose promoter regions contain several putative MYB-binding cis-elements (Fig. 3g). Transactivation assays using a dual luciferase reporter system showed that CsMYB12 resulted in 3-fold activation of CsFLS (Fig. 3h).
The light signaling bZIP TF CsbZIP1 regulates CsMYB12 and CsFLS
We next analyzed the transcriptome data of tea plant leaves under shading treatment and found that several bZIP TFs were downregulated, including three HY5 homologs, TEA012075, TEA014348, and TEA032623 (Fig. S10). However, the bZIP gene most significantly downregulated following shading is a nonannotated gene 46. We cloned it and found that it shared 72.62% similarity with Arabidopsis HY5 at the amino acid sequence level; therefore, we named it the HY5-like TF CsbZIP1 (Fig. S11). Phylogenetic analysis revealed that CsbZIP1 clustered together with VvHY5 but apart from three other HY5 orthologs, AtHY5, HaHY5, and AaHY5 (Fig. 4a). Furthermore, CsbZIP1 transcript levels were repressed by shading treatment (Fig. 4b). qRT-PCR analysis results showed that CsbZIP1 was expressed at higher levels in the first, second, and third leaves than in the buds, flowers, stems, fruits, and roots (Fig. 4c). GFP-CsbZIP1 fusion protein signals in tobacco epidermal cells were observed in the nucleus, suggesting its nuclear localization as a TF (Fig. 4d). Moreover, Y1H assays revealed that CsbZIP1 could bind to the promoters of CsMYB12 and two flavonol biosynthetic genes, CsFLS and CsUGT78A14 (Fig. 4e). Furthermore, CsMYB12 and CsbZIP1 physically interacted with one another in a yeast two-hybrid assay (Fig. 4f), indicating a possible synergistic activation effect on CsFLS, CsUGT78A14, and other genes associated with flavonol biosynthesis. Furthermore, a transactivation assay revealed that CsbZIP1 bound to the promoters of CsMYB12 and CsFLS and markedly activated proCsMYB12 and proCsFLS (Fig. 4g, h). These results suggested that CsbZIP1 bound directly to the CsMYB12 promoter via the C region that contained the G-box to indirectly regulate flavonol biosynthesis in tea plants.
R2R3-MYB repressors mediated the shading-treatment repression of flavonol synthesis
During the analysis of the transcriptome data from several light-treatment experiments [8][9][10][11], we observed that two other R2R3-MYB TFs, CsMYB4 and CsMYB7, could be markedly activated by shading treatment (Fig. 5a, c). Both CsMYB4 and CsMYB7 clustered together with VvMYBC2-L1, PtoMYB156, and other R2R3-MYB repressors in our sequence phylogeny (Fig. S12). Furthermore, both CsMYB4 and CsMYB7 repressors contain a conserved LxLxL sequence within the C-terminal region (Fig. S13). We next tested whether these proteins acted as negative regulators of flavonol synthesis during shading- or light treatment-modified flavonol biosynthesis. Both CsMYB4 and CsMYB7 were expressed in green tissues in tea plants (Fig. 5b, d), and both CsMYB4 and CsMYB7 were localized to the nuclei, as shown by GFP-CsMYB4 and GFP-CsMYB7 fusion expression in tobacco leaf epidermal cells (Fig. 5e, f). Furthermore, they also bound to the promoters of the CsFLS and CsUGT78A14 genes, suggesting that they could regulate flavonol synthesis (Fig. 5g). The Y2H experiment results revealed that CsMYB12 interacted with CsMYB4 and CsMYB7 and that CsMYB4 and CsMYB7 interacted with each other (Figs. 5h and S14). Using the dual luciferase reporter gene system, 0800-LUC vectors of proCsFLS, as well as p2GW7 vectors of CsMYB4, CsMYB7, CsbZIP1, and CsMYB12, were constructed and transferred into Arabidopsis thaliana protoplasts for promoter activation experiments (Fig. 5i). Transactivation assays with proCsFLS-driven LUC reporters showed that while CsMYB12 and CsbZIP1 activated proCsFLS, CsMYB7 or CsMYB4, individually or together, synergistically repressed the CsMYB12- or CsMYB12 + CsbZIP1-activated proCsFLS. From these analyses, CsMYB7 appeared to have stronger repression activity than CsMYB4 (Figs. 5j and S15). To further understand how these MYB TF genes respond to UV-B radiation, we also examined CsMYB7 and CsMYB4 expression in UV-B radiation experiments (Fig. 5k). Indeed, CsbZIP1 was significantly upregulated by UV-B radiation, CsMYB4 and CsMYB7 were less changed, and only CsMYB7 was upregulated at 48 h after radiation (Fig. 5k).
CsPIF3 activated CsMYB7 and thereby repressed flavonol synthesis
We further asked how CsMYB7 and CsMYB4 in tea plant shoot tips were activated by shading treatment. Since PIF3 genes have been shown to be upregulated by shading, we examined whether this essential light signaling gene can activate CsMYB7 and CsMYB4. Of the two Arabidopsis AtPIF3 homologs, TEA006216 and TEA007077 (Fig. 6a), only the latter was dramatically upregulated by shading (Fig. S16). We thus named it CsPIF3. CsPIF3, AtPIF1, AtPIF3, and AtPIF8 all had conserved APB and APA elements (Fig. S17), and qRT-PCR results showed that CsPIF3 was upregulated in tea plant leaves by shading treatment compared with the control (Fig. 6b). CsPIF3 displayed higher expression levels in stems, roots, and leaves (Figs. 6c and S16). Because the CsMYB4 promoter was not assembled in the reference tea plant genome 46, we cloned only the CsMYB7 promoter, which contained a G-box cis-element that has been reported as an AtPIF3 binding site (Fig. S18). CsPIF3 was localized to nuclei, as shown by GFP-CsPIF3 fusions expressed in tobacco leaf epidermal cells (Fig. 6d). Y1H experiments showed that CsPIF3 can bind to and activate the promoter of the CsMYB7 gene (Fig. 6e). Additionally, as a nucleus-localized TF, CsPIF3 activated the CsMYB7 promoter in a transactivation assay (Fig. 6f). These results showed that under shading, CsPIF3 could activate CsMYB7, through which CsPIF3 represses flavonol synthesis. Under UV-B radiation, CsUVR8 expression was slightly upregulated, and CsCOP1 was significantly upregulated. However, in contrast to CsbZIP1, CsPIF3 expression was almost unchanged by UV-B radiation (Fig. 6g).
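As a simple illustration of the promoter analysis behind this result, the snippet below scans a sequence for the palindromic G-box (CACGTG) bound by PIF3-type bHLH factors; the promoter string is invented, not the real CsMYB7 sequence.

```python
import re

# Scan a (hypothetical) promoter sequence for the palindromic G-box motif.
promoter = "TTAGCACGTGATCCGGTATACACGTGTTAAGC"  # invented example sequence

for m in re.finditer("CACGTG", promoter):
    print(f"G-box at position {m.start()} ({m.group()})")
```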
Suppression of CsMYB12 and CsbZIP1 in shoot tips affected flavonol biosynthesis
To further analyze the physiological role of CsMYB12 as a regulator of flavonol biosynthesis in tea plants, the expression level of CsMYB12 was suppressed in the C. sinensis bud and 1st leaves by using an antisense oligodeoxynucleotide (asODN)-interfering gene-specific suppression strategy (Fig. 7a) 45. asODN knockdown markedly repressed CsMYB12 and two structural genes of the flavonol biosynthetic pathway, CsFLS and CsUGT78A14, over the treatment period, as verified by qRT-PCR (Fig. 7b). The contents of K-3-O-Glu and Q-3-O-Glu in the apical bud and 1st leaf were reduced by up to 1.5-fold compared with those in the control (Fig. 7c). Furthermore, CsbZIP1 was knocked down by using a similar asODN approach to understand its regulation of CsMYB12 following UV-B treatment (Fig. 7d). Obvious asODN suppression of CsbZIP1 was observed only under UV radiation (Fig. 7e). In general, CsMYB12 transcripts did not fluctuate when CsbZIP1 was repressed under normal light intensity. However, UV-B treatment resulted in a significant upregulation of CsMYB12 in the untreated asODN control, while CsMYB12 was unchanged in asODN-CsbZIP1-treated shoot tips (Fig. 7f). Correspondingly, the flavonol contents did not change in asODN-CsbZIP1-treated shoot tips compared with the control, in which flavonoid contents increased upon UV-B (Fig. 7g). Thus, CsMYB12 appears to play a key role in the regulation of flavonol biosynthesis in tea plants.
Other studies showed that flavonol contents in tea plant tissues also increased under MeJA treatment and salinity and PEG stresses, coupled with significant changes in the structural genes involved in flavonol biosynthesis 40,41 (Fig. S19). CsMYB12 and CsMYB7 were generally more highly expressed in leaves than in roots (Figs. 3b and 5d); however, CsMYB7 was expressed at low levels following cold and MeJA treatments (Fig. S20a, c). CsMYB12 expression was induced by cold and MeJA treatment of leaves (Fig. S20a, c) but repressed by salt and PEG treatment (osmotic stress; Fig. S20b, d) 40,41 . Meanwhile, CsPIF3 and CsMYB4 showed the opposite behavior; CsPIF3 was repressed by MeJA treatment, while CsMYB4 was initially slightly induced and then repressed by MeJA when CsMYB12 transcripts reached their highest levels (Fig. S20c) 41 . These results indicate that light and abiotic stress regulation of flavonol synthesis and accumulation occurs at the level of transcription.
Discussion
Characteristics of tea, such as color, taste, smell, and levels of health-conferring metabolites, are regarded as the major tea quality parameters that guide tea plant cultivation, breeding, and tea processing. These qualities depend primarily on the types and contents of tea plant-specific secondary metabolites present in the fresh tea plant leaves and the ways these starting materials are processed into teas 5. Flavonol glycosides, such as myricetin 3-O-galactoside and quercetin-3-O-rutinoside, although present at relatively low levels in tea plant leaves compared with catechins and caffeine, have recently been recognized as among the major contributors to the bitter and astringent tastes of tea 47,48. Therefore, their biosynthesis and regulation in tea plant leaves have been the focus of considerable attention. FLS and UGT are two critical and specific genes involved in flavonol glycoside biosynthesis 6,49. While flavonol glycosides present in tea plant leaves grown under strong light in the spring-summer season are the major contributors to teas with stronger bitter tastes 7, a reduction in light radiation by various measures, such as shading, has been shown to effectively improve tea quality 8,11. Studies have revealed the biosynthetic pathways and enzymes involved in the production of the wide array of flavonol glycosides that are present at significant levels in plant leaves or fruits exposed to strong light 50,51. Despite the fact that the regulatory mechanism underlying the induction of flavonol glycoside biosynthesis by high light or UV-B radiation is well known 51, the repression of this activity by shading treatment is poorly understood. Significant progress has been made in understanding plant responses to high light or UV-B and shading treatments, including the characterization of several photoreceptors, COP1, HY5, PIFs, and other downstream effectors 52. Our current study attempted to elucidate the transcriptional regulatory mechanisms underlying light-regulated flavonol accumulation in tea plants, providing new insights into the complex regulatory network controlling light- and shading-regulated flavonol biosynthesis.
High light or UV-B radiation regulated flavonol biosynthesis in tea plant leaves
Many reports have investigated tea metabolites under shade or altered light conditions. Previous studies have focused predominantly on the effect of shading treatment on catechin biosynthesis in tea cultivars 8,11. In the present study, we found that flavonols decreased even more significantly in tea leaves than catechins did upon shading treatment (Fig. 2b). Consistent with these transcriptome data, our qRT-PCR data showed that the expression of key flavonol pathway genes, including CsF3H, CsF3′H, CsF3′5′H, CsFLS, and CsUGT78A14, was markedly reduced by shading treatment and conversely was upregulated by UV-B. We further revealed that the MYB TF CsMYB12 directly regulated these flavonol synthesis genes and that two light signaling TFs, CsbZIP1 and CsPIF3, worked upstream of CsMYB12, thereby acting in concert to translate UV-B and high-light radiation or shading treatment into effects on flavonol biosynthesis. Furthermore, we uncovered an even more complex regulatory network by naturally or deliberately regulating lighting or shading treatments. These studies likely reveal the two sides of the same coin. That is, UV-B or high-light radiation induces CsbZIP1 and CsMYB12 and dominantly upregulates flavonol synthetic genes, thereby promoting the accumulation of bitter-tasting flavonols in tea plant leaves in the spring-summer season, whereas under shading treatment, both CsbZIP1 and CsMYB12 are repressed as the result of both COP1-mediated CsbZIP1 degradation and the regulation of two R2R3 repressors, CsMYB7 and CsMYB4, by another bHLH light signaling protein (CsPIF3), which further effectively represses CsMYB12 activity and thus flavonol biosynthesis (Fig. 8). Furthermore, upregulated CsPIF3, CsMYB7, and CsMYB4 mediate red- or far-red light signaling in darkness. Thus, it is suggested that shading treatment, similar to red light radiation, can effectively reduce the biosynthesis of flavonols, and that two mechanisms explain the reduced flavonol content in tea plant leaves under shading treatment.
MYB activators or repressors as regulators of phenylpropanoid metabolism in plants
Many R2R3-MYB activators of flavonol biosynthesis have been characterized. AtMYB11, 12, and 111 control flavonol accumulation in different parts of the Arabidopsis seedling 18. VvMYBF1 was confirmed to complement the flavonol-deficient phenotype of the AtMYB12 mutant 20. In addition, under abiotic stress conditions, MYB repressor TFs are particularly important, and R2R3-MYB repressors contain a conserved LxLxL sequence within the C-terminal region 53. In grapevine, three flavonoid repressor MYBs, namely, MYBC2-L1, MYBC2-L2, and MYBC2-L3, were identified 22. CsMYB4a was identified as a lignin synthesis repressor in tea plants 54. Therefore, activator-repressor systems coordinate the fine tuning of critical metabolite biosynthesis and accumulation 22. Subgroup 4 of the R2R3-MYB transcription factors in Arabidopsis consists of the repressors MYB3, MYB4, MYB7, and MYB32, which possess the conserved EAR repression motif 54. In this study, we isolated three potential genes, CsMYB12, CsMYB4, and CsMYB7, from tea plants that were hypothesized to positively and negatively regulate flavonol biosynthesis. Overexpression of CsMYB12 promoted the accumulation of flavonols in soybean roots (Fig. S9). The luciferase reporter assay results showed that CsMYB4 and CsMYB7 significantly repressed CsFLS (Fig. 5j). Thus, CsMYB4 and CsMYB7 can affect flavonol biosynthesis, rendering them repressors with potentially broad impacts on tea plant secondary metabolism. It seems likely that the coordinated action of repressor and activator MYBs could be important for the fine tuning of flavonoid biosynthesis during development or following stress. Given the central role of HY5-type bZIP factors in light signaling 26,52, it is of interest to characterize the CsbZIP1 gene and its signaling pathway in the context of light-induced and developmental regulation of tea plant secondary metabolism. In contrast, PIF3 is a bHLH TF that mediates light regulation on the darkness side and binds to the palindromic G-box motif CACGTG, which is common to many plant genes 56,57. In this study, we functionally characterized two light signaling genes, CsbZIP1 and CsPIF3, in the regulation of flavonol biosynthesis in light- and shading-treated tea plants through their binding and activating or repressing CsMYB12, as well as two negative MYB regulator genes, CsMYB4 and CsMYB7. The complex regulatory network composed of activator and repressor TFs of various kinds related to the light and shading responses in tea plants can explain the increased levels of bitter-tasting flavonols under high light (including stronger UV-B radiation during the early summer and late autumn), as well as the drastic reduction in flavonol levels in tea plant leaves under shading treatment.
The interaction between CsbZIP1 and CsMYB12 and the direct binding of CsbZIP1 to the promoter of the CsMYB12 gene for its activation play a dominant role in connecting light signal perception to flavonol biosynthesis in tea plants. Even under shading treatment, reduced CsbZIP1 transcript levels remain a critical factor maintaining certain levels of flavonols in tea plant leaves. CsPIF3 is expressed at significantly higher levels in tea plant leaves under shading treatment, and CsPIF3 binds to the G-box in the promoter of CsMYB7 to upregulate the CsMYB7 repressor and, more likely, CsMYB4. Both CsMYB7 and CsMYB4 repressed CsFLS transcription and thus interfered with CsMYB12 function as activators of CsFLS and other flavonol biosynthetic genes under shading treatment. The regulatory function of CsMYB12 seems highly specific to flavonoid biosynthetic genes, such as CsFLS and CsUGT78A14. However, the regulatory targets of CsMYB4 and CsMYB7 may be nonspecific, since their upregulation under shading treatment or changes in lighting could also be negatively correlated with catechin levels. Under shading treatment, catechin contents also generally decreased when tea plant shoot tips became less bitter. Further study will be needed to demonstrate how these negative MYB regulators work in the regulation of flavonoid biosynthesis in tea plants.
It is possible that AtPIF3 and HY5 interact directly; alternatively, their antagonistic effects may be mediated through another factor, such as the cryptochromes and UVR8 39. HY5 binding to the promoters of UV-B-responsive genes is enhanced by UV-B in a UVR8-dependent manner in Arabidopsis thaliana. In agreement with this observation, overexpression of REPRESSOR OF UV-B PHOTOMORPHOGENESIS 2, a negative regulator of UVR8 function, blocks UV-B-responsive HY5 enrichment at target promoters 58. A T/G-box in the HY5 promoter is required for its UV-B responsiveness. HY5 and its homolog HYH bind to the T/G(HY5)-box cis-acting element to redundantly activate HY5 expression upon UV-B exposure. HY5 and HYH interact directly with a T/G-box cis-acting element of the HY5 promoter, mediating the transcriptional activation of HY5 in response to UV-B 59.
In summary, UV-B radiation promoted and shading repressed flavonol biosynthetic genes and consequently flavonol production in tea plant leaves. We demonstrated here that the different effects of light and shading involved CsbZIP1 and CsPIF3, the flavonol biosynthesis activator CsMYB12, and two MYB repressors, CsMYB7 and CsMYB4. UV-B radiation of tea plants upregulated CsbZIP1 and CsMYB12 (Figs. 1d and 5k), whereas 90% shading treatment clearly upregulated CsCOP1 and repressed CsbZIP1 and CsMYB12 (Figs. 1d, 4b, and S4a). CsbZIP1 acted with CsMYB12 as an activator of the CsMYB12, CsFLS, and CsUGT78A14 genes to promote UV-B-induced flavonol production. However, after shading treatment, CsbZIP1 and CsMYB12 were repressed to lower expression levels. Meanwhile, shading treatment activated CsPIF3, CsMYB7, and CsMYB4 to antagonize the effect of CsbZIP1 and repress the CsMYB12, CsFLS, and CsUGT78A14 genes. Both CsMYB7 and CsMYB4 repressed CsFLS and CsUGT78A14 by directly binding to their promoters. CsMYB7 and CsMYB4 also directly interact with CsMYB12 and may interfere with CsMYB12 activation activity. Furthermore, CsPIF3 activated CsMYB7 through binding to its promoter. This study provides new insights into the mechanism of how light regulates the production of bitter-tasting flavonols in tea plants, which may provide molecular tools for the genetic improvement of tea quality and flavor.
To disclose or to falsify: The effects of cognitive trust and affective trust on customer cooperation in contact tracing
Contact tracing involves collecting people’s information to track the spread of COVID-19 and to warn people who have been in the proximity of infected individuals. This measure is important to public health and safety during the pandemic. However, customers’ concerns about the violation of their privacy might inhibit their cooperation in the contact tracing process, which poses a risk to public safety. This research investigates how to facilitate customers’ cooperative behavior in contact tracing based on cognitive trust and affective trust. The findings show that cognitive trust increases people’s willingness to disclose information and reduces their willingness to falsify it, whereas affective trust increases the willingness for both disclosure and falsification. This research contributes to the literature on customer data privacy by illuminating how cognitive and affective trust distinctly influence cooperative behavior, which has important implications for hospitality businesses.
Introduction
Contact tracing is a measure for mitigating the transmission of infectious diseases, which is important to public health and safety during the COVID-19 pandemic (World Health Organization, 2020). It requires businesses to collect customers' personal information (e.g., name, phone number, location, and travel history) in order to track the spread of the virus and to warn people who have been in the proximity of infected individuals (Servick, 2020). An increasing number of countries have implemented contact tracing in hospitality venues, such as restaurants, cafes, pubs, and clubs. For example, New Zealand's largest online restaurant-booking website, Restaurant Hub, provides contact tracing instructions and QR codes for its registered venues (Restaurant Hub, 2020). Australia and the UK also require hospitality venues to collect customers' contact details to support contact tracing (Government of NSW, 2020; The UK Government, 2020).
Contact tracing requires adequate customer cooperation to be effective (The New Daily, 2020). However, many customers of hospitality businesses have been reported to be reluctant to cooperate because of privacy concerns, posing a serious threat to public safety (The Conversation, 2020; Yahoo, 2020). Therefore, the present research draws on the customer data privacy literature to investigate the main reasons behind customer reluctance to cooperate in contact tracing during the COVID-19 pandemic.
Drawing on the social exchange theory, the extant literature on customer data privacy suggests that customers relinquish their personal information in exchange for products or services (Krishen et al., 2017; Lwin et al., 2007; Martin and Murphy, 2017; Schumann et al., 2014; White, 2004). Due to increasingly personalized advertisements and widely reported data scandals (e.g., the Facebook-Cambridge Analytica scandal), customers have become wary of the potential risks of disclosing their personal information (e.g., data access, breach, and misuse) and feel vulnerable in such exchanges (Kim et al., 2018; Krafft et al., 2017). Their perceived vulnerability triggers uncooperative behavioral intentions toward information requesters, including withholding and falsifying information. Recent research on customer data privacy calls for examining interpersonal trust as a mechanism for promoting customers' cooperative behavior (Bleier and Eisenbeiss, 2015; Jagadish, 2020; Martin and Murphy, 2017; Steinhoff et al., 2019; Waldman, 2018). Trust is a sensitive indicator of people's willingness to accept vulnerability with positive expectations and beliefs about the other party in social exchanges. Concerning data privacy, customers are willing to accept being vulnerable in disclosing information to a business that they trust because they have positive expectations of it (Lwin et al., 2007; Waldman, 2018).
Despite the substantial role of trust in influencing customers' cooperative behavior, research on customer data privacy has not yet considered the effects of different forms of trust. Grounded in social exchange theory, the trust literature has identified two forms of trust: cognitive trust and affective trust (e.g., McAllister, 1995). Cognitive trust refers to the rational evaluation of whether the other party to an exchange is trustworthy based on the knowledge and information regarding its ability, professionalism, and reliability (McAllister, 1995; Schaubroeck et al., 2011; Su and Mattila, 2020; Yang and Mossholder, 2010). Affective trust refers to the emotional bonds or connections with the party to the exchange that are grounded in the care and concern that it demonstrates (McAllister, 1995; Schaubroeck et al., 2011; Su and Mattila, 2020; Yang and Mossholder, 2010).
Past research has shown that these two forms of trust are driven by different antecedents and lead to different behavioral outcomes in leadership and organizational behavior (Nienaber et al., 2015; Schaubroeck et al., 2011). However, it is unclear how these two forms of trust evolve and distinctively influence customers' cooperative behavior related to data privacy. To address this research gap and link to the COVID-19 pandemic, this research investigates the following questions: (1) How do cognitive trust and affective trust influence different cooperative behaviors toward contact tracing? (2) What are the antecedents for different forms of trust in contact tracing?
This research makes several theoretical contributions. First, it is among the first to apply cognitive and affective trust as psychological mechanisms to explicate different cooperative behaviors, namely disclosure and falsification, related to data privacy, especially in the context of contact tracing. Our findings add theoretical nuance to the literature on customer data privacy (e.g., Janakiraman et al., 2018; Kashmiri et al., 2017) by showing that these two forms of trust have divergent effects. Second, this research identifies contact tracing-related perceptual factors and illuminates how they contribute to different forms of trust, thus adding to research on the antecedents of trust (Acquisti et al., 2015, 2012). Taken together, our findings have important implications for the hospitality industry and governments in facilitating customer cooperation in contact tracing.
The remainder of this paper is organized as follows: We review the relevant literature to develop our hypotheses regarding how cognitive trust and affective trust drive cooperative behaviors in contact tracing. We then conduct an exploratory qualitative study to identify the relevant perceptual antecedents and revert to the literature to conceptualize how they influence the two forms of trust. After establishing our conceptual model, we employ a survey study to test all hypotheses. Finally, we discuss the theoretical contributions and practical implications of our findings.
Conceptual development
Despite the benefit of contact tracing to public health and safety in the COVID-19 pandemic, it has been reported that many customers are reluctant to cooperate when they are asked to provide their personal information (The Conversation, 2020; Yahoo, 2020). Thus, it is important to understand the major obstacles to customer cooperative behaviors in contact tracing. To do so, we conducted a short survey of 240 participants from the US on Amazon Mechanical Turk (see Appendix A for more details). 74 % of the participants reported that they had been asked to provide contact information at hospitality venues. However, only 24 % of them reported having provided correct and complete information each time. Furthermore, among a range of plausible reasons, the most chosen reason for their reluctance to cooperate was their concern about privacy (for 68 % of the participants). These findings allow us to draw on the customer data privacy literature to investigate cooperative behavior in contact tracing.
Contact tracing as social exchange
By drawing on social exchange theory (Blau, 1964), the literature on customer data privacy conceptualizes the provision of personal information as a form of social exchange in which customers relinquish their personal information in order to gain access to services or to obtain more relevant and better services from businesses (e.g., Martin and Murphy, 2017; Schumann et al., 2014; White, 2004). Whether to provide personal information or not depends on the customer's evaluation of the expected benefit and cost of doing so. In this evaluation, the cost is largely associated with the perceived privacy vulnerability (Schumann et al., 2014). Privacy is defined as the customer's control over the dissemination and use of their information (Jaap et al., 2021; Martin and Murphy, 2017). As customers have less control over their information after relinquishing it to a business, they become vulnerable to its potential misuse.
By applying this notion to the context of our research, contact tracing can be viewed as an exchange between customers and hospitality businesses in which customers cede control over their information in exchange for services, leaving them vulnerable to privacy-related risks. Such vulnerability creates barriers to customers' cooperation in contact tracing.
Trust as the core mechanism of cooperation in contact tracing
Trust refers to an individual's intention to accept vulnerability based upon positive expectations of the exchange party's intentions or behavior. Trust acts as a core mechanism to facilitate people's cooperation with the exchange party (Aguirre et al., 2015; Luo, 2002; Schaubroeck et al., 2011). When customers trust a business, they tend to accept their vulnerability because they believe that the business will handle their information appropriately. Hence, in terms of data privacy, trust generally increases customers' cooperation with the business by disclosing information to it (Lwin et al., 2007; Wirtz and Lwin, 2009). Our literature review (Table 1) shows that prior research on trust and cooperative behavior related to data privacy has not considered different forms of trust and their effects on different cooperative behaviors such as willingness to disclose and falsify. The present research aims at addressing these research gaps.
Trust consists of two conceptually different forms: cognition-based and affect-based trust (Hon and Lu, 2010; Massey et al., 2019; McAllister, 1995; Tomlinson et al., 2020). Cognitive trust develops through knowledge and information that enables individuals to evaluate the other party's competence and reliability, while affective trust is generated from positive feelings based on care and concern demonstrated by the other party (McAllister, 1995; Schaubroeck et al., 2011; Yang and Mossholder, 2010). We propose that these two forms of trust may influence customer cooperative behaviors differently due to their distinct characteristics. Specifically, we focus on disclosure and falsification as two specific behaviors.
Customers' willingness to disclose represents their intention to cooperate publicly, and thus is an important behavioral outcome in the exchange of information between customers and businesses (Martin and Murphy, 2017). When customers disclose information, they can inconspicuously provide truthful information or offer false or incomplete information, which is called "falsification" (Lwin et al., 2007; Norberg and Horne, 2014). Falsification can complicate contact tracing efforts and put public health at risk (Queensland Government, 2020). Therefore, we conceptualize willingness to disclose and willingness to falsify as separate behaviors toward contact tracing and investigate how they are influenced by cognitive trust and affective trust.

Table 1
Empirical studies on trust and customer cooperative behavior related to data privacy.

Hospitality and tourism
Ioannou et al. (2020). Context: travel (online). Behavior: willingness to share information. Trust construct: travelers' trust, defined as "willingness to be vulnerable to the actions of another" (p. 4). Finding: trust can reduce risk beliefs and encourages people to share their information.
Hotel apps (online). Behavior: willingness to disclose. Trust construct: trust in organization and trust in system/app, viewed as "a social relationship where principals invest resources in agents in exchange for an uncertain future benefit" (p. 123). Finding: consumer trust in hotel apps has a positive effect on willingness to disclose information.

Marketing
Aiken and Boush (2006). Context: websites. Behavior: willingness to provide personal information. Trust construct: signals of trust (a third-party certification, an objective-source rating, and an implication of investment in advertising); perceived trustworthiness (cognitive, affective, and behavioral). Finding: trust signals can influence consumers' perceived security and privacy, perceived firm trustworthiness, and willingness to provide personal information.
Bansal et al. Context: online service context (financial websites). Behavior: intention to disclose information. Trust construct: trust in the website, defined as "the willingness to depend on another person or institution based on the belief in the integrity, ability, and benevolence of the other party" (p. 1). Finding: trust has an important impact on disclosure intention; this impact is influenced by customer personality and the sensitivity of the context.
Bart et al. Context: websites. Behavior: behavioral intent (e.g., providing personal information, purchase intention, word of mouth). Trust construct: online trust, defined as "a psychological state comprising the intention to accept vulnerability based on positive expectations of the intentions or behaviors of another" (p. 134). Finding: online trust mediates the relationships between the characteristics of websites and consumers and consumer behavioral intent.
Cho (2006). Context: business-to-consumer (B2C) Internet exchange relationships. Behavior: self-disclosure and willingness to commit. Trust construct: trust and distrust, considered as distinct entities. Finding: trust and distrust have different effects on customer behavioral intentions such as self-disclosure.
Grosso et al. Context: retailing (online and offline). Behavior: willingness to disclose. Trust construct: micro- and macro-level trust (trust in retail personnel, trust in a retailer, trust in a country). Finding: the interaction between the three types of trust, privacy concerns, and information type influences consumers' willingness to disclose.
Martin et al. Behavior: willingness to disclose. Trust construct: firm trust, measured based on whether a firm could be trusted, counted on, and relied on. Finding: firm trust is a key mechanism that mediates antecedents and disclosure intention; specifically, firm trust can reduce the perceived risk of disclosure.
White (2004). Context: relationship marketing. Behavior: willingness to disclose. Trust construct: relational depth (trust is considered as the underlying theoretical mechanism without being explicitly tested). Finding: deep relationship perceptions are associated with strong satisfaction and trust; such perceptions can reduce risks in disclosing certain kinds of information but increase risks in disclosing embarrassing information.
Wirtz and Lwin (2009). Context: online retailing. Behavior: relational behavior (provision and updating of customer information). Trust construct: trust in the website, meaning that "customers have faith in the organization's reliability and integrity and feel secure about sharing their personal information with the organization" (p. 192). Finding: trust facilitates consumers' relational behavior such as providing and updating their information on the website.

Information management
Electronic storage of health information. Behavior: willingness to provide access to personal information, willingness to disclose. Trust construct: trust in the electronic medium, defined as "a multi-dimensional construct comprising of competence, reliability, and safety trusting beliefs; i.e., the individual's belief that electronic storage provides a reliable and safe environment in which to store health information, and her belief that the electronic storage format provides the necessary components to facilitate electronic storage of health information" (p. 474). Finding: trust, intended purpose, and type of information have interaction effects on willingness to disclose and willingness to provide access.
Chai et al. Context: blogs. Behavior: knowledge sharing behaviors. Trust construct: bloggers' trust, defined as "a user's beliefs about the reliability, credibility, and accuracy of information gathered through the Web" (p. 318). Finding: bloggers' trust, strength of social ties, and reciprocity all influence their knowledge sharing behavior positively.
Dinev and Hart (2006). Context: online transactions. Behavior: willingness to provide personal information. Trust construct: Internet trust, defined as "beliefs reflecting confidence that personal information Internet websites will be handled competently" (p. 64). Finding: Internet trust has a positive effect on willingness to provide personal information to transact on the Internet.
Kehr et al. Context: mobile application (driving behavior app). Behavior: intention to disclose. Trust construct: institutional trust, defined as "an individual's confidence that the data-requesting medium will not misuse his or her data" (p. 611). Finding: institutional trust influences consumers' intention to disclose private information.
Malhotra et al. Context: e-commerce. Behavior: behavioral intention (intention to reveal personal information). Trust construct: trusting belief, defined as "the degree to which people believe a firm is dependable in protecting consumers' personal information" (p. 341). Finding: trusting beliefs negatively influence risk beliefs and positively influence intentions to reveal personal information.
Miltgen and Smith (2019). Context: commercial websites. Behavior: withholding, falsification. Trust construct: trust, referring to "an individual's trust in an entity that is requesting data" (p. 706). Finding: trust reduces customers' intentions to withhold and falsify their information.
Zimmer et al. Context: websites. Behavior: intention to disclose. Trust construct: trust in the website, referring to the "belief that the website is benevolent, competent, or honest in handling personal information" (p. 117). Finding: trust can encourage people to disclose information because it reduces risk perceptions associated with disclosure.

Contact tracing
Guillon and Kergall (2020). Context: digital contact tracing. Behavior: willingness to use a contact-tracing application (disclosing information to the application). Trust construct: trust in the government to handle the health crisis. Finding: trust is significantly related to people's willingness to use a contact-tracing application.
Contact tracing apps. Finding: the benefit appeals, privacy designs, and convenience influence people's intention to install contact tracing apps.

Cognitive trust. Cognitive trust is individuals' evaluation of the exchange party regarding whether it is competent, professional, and capable of handling the exchange (Johnson and Grayson, 2005; McAllister, 1995). In the context of contact tracing, customers who have cognitive trust in a business hold positive expectations that the business will implement an appropriate process in collecting, storing, and using their data. Such positive expectations of competence in data management increase customers' willingness to accept their vulnerability in this exchange. Therefore, high cognitive trust in a business motivates customers to cooperate by disclosing truthful information that can support its purpose of contact tracing. In contrast, low cognitive trust in a business makes customers feel vulnerable to privacy-related risks such that they tend to exhibit behavioral reactance, such as withholding and falsification. Therefore, we propose the following:

H1. Customers' cognitive trust in the business is positively related to their willingness to disclose (H1a) and negatively related to their willingness to falsify (H1b) in contact tracing conducted by this business.
Affective trust. Affective trust captures individuals' positive feelings and emotions toward the exchange party, which are grounded in its demonstrated concern and care (Johnson and Grayson, 2005; McAllister, 1995). If individuals feel that the exchange party has concern and care for them, they tend to disclose broader and deeper information (Borg and Freytag, 2012; Park et al., 2011). This is because individuals reciprocate by providing their information to the exchange party in order to maintain a positive relationship (Moon, 2000; Park et al., 2011; White, 2004). In light of this notion, high affective trust associated with a business should increase customers' willingness to disclose in contact tracing. However, as positive feelings and emotions toward the business deepen, the associated motive to maintain this relationship can also become a pressure that leads to the invasion of people's privacy (O'Malley et al., 1997; Song et al., 2016; Steinhoff et al., 2019). That is, individuals may feel obligated to comply with the request for disclosure from the exchange party without knowing if their privacy will be competently protected. To resolve this relationship maintenance-privacy risk dilemma, they can take a symbolic cooperative action, falsification, to maintain the relationship with the business without putting their privacy at risk. Therefore, affective trust can increase customers' willingness to disclose but also increase their willingness to falsify. Formally, we propose:

H2. Customers' affective trust in the business is positively related to their willingness to disclose (H2a) and their willingness to falsify (H2b) in contact tracing conducted by this business.
Perceptual antecedents of trust in the context of contact tracing
Past studies on customer data privacy show that customers' trust and cooperative behaviors toward businesses are driven by various perceptual antecedents (Acquisti et al., 2015, 2012; Bansal et al., 2016; Su and Mattila, 2020; Yang et al., 2019). As hospitality businesses have only started adopting contact tracing since the COVID-19 pandemic, the perceptual antecedents of cognitive trust and affective trust in this context are unclear. Therefore, we conducted a pilot qualitative study to identify the most relevant perceptual antecedents. We followed protocols from Kvale (1996) to design semi-structured interviews and used convenience sampling to recruit 24 participants (including a pilot test with five participants) with experience and knowledge of contact tracing from Anglosphere countries, including Australia, New Zealand, the UK, the US, and Canada (see Appendix B for the participants' profile). Our interview questions primarily focused on exploring participants' experiences with contact tracing in hospitality venues (e.g., restaurants, cafes, and bars), the main reasons and drivers for their cooperative (or uncooperative) behavior, and their perceptions of the practices of hospitality businesses and governments (see Appendix C for interview questions).
We conducted thematic analysis (Braun and Clarke, 2006) and identified four key perceptual factors influencing cooperative behaviors in contact tracing: 1) perceived ethics of data collection, 2) perceived data protection policy, 3) perceived governmental regulation, and 4) perceived prevalence of participation, as shown in Table 2 (see Appendix D for detailed data analysis). We explain each of these perceptual antecedents in the following section. For each perceptual antecedent, we first illustrate its meanings and characteristics based on our qualitative findings, and then draw on the relevant literature to explain and predict how this antecedent influences the two forms of trust.

Table 2
Summary of first-order codes, and second- and third-order themes (pilot study).

First-order codes (second-order themes):
Hospitality businesses collect information because they want to protect customers; hospitality businesses lack ethics because they collect customers' data for targeted marketing (perceived ethical consideration).
Collecting data for contact tracing is the businesses' civic duty; every member of society has the obligation to support contact tracing (perceived social responsibility).
Only customers own their data, and they decide what information to share and with whom they share their information.
The use of QR codes for contact tracing is a new normal; all my family members, friends, and colleagues are okay with providing their information (social norm).
Perceived ethics of data collection. Our qualitative findings reveal that participants were willing to disclose their personal information if they perceived the following ethical characteristics of hospitality businesses: ethical considerations (e.g., the data were collected to protect other customers rather than to target customers for commercial purposes), societal benefits (e.g., data collection helps stop the spread of COVID-19, and helps businesses stay open and provide services for people), and social responsibility (e.g., businesses exercise responsibility in protecting their customers and honor their obligation to support contact tracing). For example, one participant (ID 01) emphasized that restaurants do not collect data for self-interest by stating: "They [restaurants] have collected information because they're looking after their customers. The information that they've collected is basically for the use of public health and for the benefit of customers … it's to protect them …. I very much doubt that they [restaurants] would be misusing it." Many studies have shown that ethics are vital to the development of customer trust (Martínez and Del Bosque, 2013; Stanaland et al., 2011; Sung and Kim, 2010). Customers have positive expectations of an ethical business that exercises social responsibility and cares about public well-being (Fan, 2005; Singh et al., 2012). This expectation can be generalized to the customers' belief that ethical businesses are also concerned with, and care about, their customers in implementing contact tracing. Therefore, customers feel less vulnerable and have higher trust in ethical businesses.
As businesses' ethics reflect their motives and characteristics, such as altruism, benevolence, and sincerity, that generate positive feelings among customers and help form emotional bonds with them (Fan, 2005; Sung and Kim, 2010), such ethics contribute to affective trust. However, ethics do not convey any information about the competence and professionalism of the business in managing customers' data. Therefore, we propose the following hypothesis:

H3. Customers' perceived ethics of data collection conducted by the business is positively related to their affective trust in this business.
Perceived data protection policy. Our interview participants believed that their data should be handled safely by businesses to avoid data breaches and scandals. They stressed the importance of a business's control of data (e.g., ability to control the flow of data, and management of data storage and data safety), protection of data (e.g., confidentiality of customer information and employees' professionalism in data management), and ownership of data (e.g., who owns customer data, and customers' own access to their data). These three themes can be merged and interpreted as perceived data protection policy. For example, one participant (ID 02) mentioned that data protection policy is an important driver for information disclosure: "Usually, when you enter your information on the Internet, they [websites] recommend that you read the privacy policy, which indicates how they're going to protect your information. But at restaurants, they do not indicate anything like this. I don't actually know how they're going to protect my information. I feel kind of insecure, and I don't really want to give up my information." Data protection policy describes the way in which businesses exercise ownership of and power over customer data, and how they protect their use of customer data (Lwin et al., 2007; Martin and Murphy, 2017; Xu et al., 2012). Customers nowadays have become increasingly wary of the data protection policies of businesses (Kim et al., 2018; Wright and Xie, 2019). If customers perceive that a business implements adequate data protection, they believe that it has the competence and professionalism to manage their data properly, and are thus more willing to accept vulnerability (Lwin et al., 2007; Steinhoff et al., 2019). Therefore, businesses that integrate strong data protection policies into effective marketing communication can build the positive belief of competence, professionalism, and reliability among customers (Martin and Murphy, 2017; Trim and Lee, 2019), which in turn contributes to cognitive trust. However, data protection policy is limited in building emotional bonds through care and concern toward customers. Therefore, we hypothesize:

H4. Customers' perceived data protection policy of the business is positively related to their cognitive trust in this business.
Perceived governmental regulation. Our interview participants attached importance to how governments regulate their businesses' contact tracing practices. They emphasized the following characteristics of governmental regulation: confidence in regulators (e.g., positive beliefs about the government's efforts to control the pandemic and protect their citizens), sufficiency of regulations (e.g., adequacy of data security laws and policy of contact tracing), and regulation enforcement and policy support (e.g., governments encouraging citizens to participate in contact tracing and providing advice to businesses on how to collect, store, and protect customer data). For example, one participant (ID 10) claimed that governments should guide restaurants to conduct contact tracing by providing the relevant policies. She explained: "There should be some level of government intervention, at least some regular check or some regular audit to make sure the hospitality venues are following the rules and guidelines properly." Customers expect governments to act as an external force to supervise the data practices of businesses through regulations and policies (Acquisti et al., 2015; Kashmiri et al., 2017; Lwin et al., 2007; Poortinga and Pidgeon, 2006; Xu et al., 2012). Well-developed governmental regulation deters business misconduct, such as the misuse of customer data, and provides detailed guidance and support for businesses to engage in appropriate data practices (Lwin et al., 2007; Xu et al., 2012). With more regulations, guidance, and support from governments, businesses can more knowledgeably, competently, and professionally collect and manage customer data, thus gaining cognitive trust from customers. However, complying with government regulations does not necessarily mean that the business cares for, and is concerned about, its customers, which limits its impact on affective trust. We thus propose the following hypothesis:

H5. Customers' perceived governmental regulation is positively related to their cognitive trust in the business.
Perceived prevalence of information disclosure. Participants tended to observe how other people respond to contact tracing requests and used their immediate social circle as a reference to guide their own behavior. They paid attention to prevalence (e.g., the majority of the population, popular content in media channels, and employers' emphasis on contact tracing) and social norms (e.g., popular use of QR codes, and family, friends, and colleagues respecting contact tracing in hospitality venues). For example, one participant (ID 08) claimed that she would participate in contact tracing at restaurants without too much concern if she saw this behavior becoming a norm followed by many other people. She said: "When you see people write in their names, you just follow that [behavior] without even thinking about it." Social and peer influence play an important role in how people perceive and respond to privacy (Acquisti et al., 2015, 2012). Customers' perceived prevalence of information disclosure captures the extent to which others are engaging in information disclosure (Dootson et al., 2017). According to the "bandwagon effect," customers tend to follow what the majority believes and does because other people's actions serve as social proof (Van Herpen et al., 2009). This social proof is premised on the belief that if other people believe in or follow something, it must be good. Applying this logic to contact tracing, if customers observe many other people disclosing information to a business, they infer that these people must trust the business (or that this business is trustworthy). It may not be straightforward or easy for customers to observe which form of trust is involved, so they can attribute such trust to either cognitive trust or affective trust. Therefore:
H6. Customers' perceived prevalence of information disclosure is positively related to their cognitive trust (H6a) and affective trust (H6b).
The conceptual model
Based on our hypothesis development, we propose a perception-trust-behavior intention framework to capture the psychological process that drives the cooperative behaviors of customers in the context of contact tracing. The antecedents of trust include four perceptual factors, as identified in the qualitative pilot study. The two forms of trust, cognitive and affective, in turn, influence customers' willingness to disclose and willingness to falsify. Fig. 1 summarizes the conceptual model and the predicted relationships.
Method
Our study used a survey to test the conceptual model and our hypotheses. We provided a brief to give participants the background information on contact tracing (see Appendix E). To ease participants into a concrete context of consumption, the survey instructed them to read that a restaurant chain "X" was adopting contact tracing and asked them questions related to this restaurant's contact tracing. We purposefully kept the brief short and generic in order to elicit participants' reactions based on their experiences and perceptions of contact tracing in the context of restaurants/eateries. We pre-tested the brief with 15 participants and sought feedback from them to ensure that it was easy to understand and had no ambiguity.
Sample and procedure
We collected data from US participants in June 2020, when contact tracing in restaurants had already begun in other Anglosphere countries, such as the UK, Australia, and New Zealand, and was on the horizon for the US. Participants residing in the US were recruited from Amazon Mechanical Turk. Upon giving their consent, they were instructed to read the brief and then complete a questionnaire. In total, 420 participants took part in the survey. After removing cases with incomplete responses (n = 12) and those that failed the attention check (n = 43) (Huang et al., 2015), we obtained 365 valid responses (29.9 % female, 82.5 % aged from 18 to 45 years, 84.9 % with university degrees, 44.1 % with an annual income from $45,000 to $90,000). The demographic details are presented in Table 3.
Measurements
All measurements were adapted from existing scales into our research context. The measurement items were subjected to a series of pre-tests (n = 15) to check for their relevance, readability, and comprehensiveness. In addition, to ensure the substantive validity of the measurements, a committee of five marketing researchers was asked to assess the items and assign them to the corresponding constructs according to how well they reflected their full contents. In response, minor revisions in terms of wordings were made to improve the quality of the questionnaire.
Specifically, we measured perceived ethics of data collection (ETH) based on items from Edinger-Schons et al. (2018). Perceived data protection policy (POL) was measured with items devised by Lwin et al. (2007). We adapted items from the literature (Lwin et al., 2007; Poortinga and Pidgeon, 2006) to measure perceived governmental regulation (REG), and items from Dootson et al. (2017) to measure perceived prevalence of information disclosure (PRE). Regarding trust, we modified items from interpersonal trust research (Johnson and Grayson, 2005; McAllister, 1995) to measure cognitive trust (COT) and affective trust (AFT). Finally, willingness to disclose (DCL) and willingness to falsify (FAL) were measured by items adapted from Morosan (2018) and prior work, respectively. All items were measured on seven-point Likert scales (see Appendix F for the full item list).
Analysis and results
We carried out data analysis following a two-stage procedure of structural equation modeling (Anderson and Gerbing, 1988) with SmartPLS 3 (Ringle et al., 2015). In the first stage, the psychometric properties (reliability and validity) of the measurement model were examined. In the second stage, the structural model was estimated, and the hypotheses were tested using a partial least-squares structural equation model (PLS-SEM). Parameters in the model were estimated by a bootstrapping method with 5000 resamplings. The PLS-SEM approach was used because it is suitable for research that focuses on prediction and theory development (Reinartz et al., 2009), and it is appropriate for evaluating models with complex relationships (Chin, 1998).
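To make the estimation step concrete, the sketch below illustrates percentile bootstrapping of path coefficients with 5000 resamples. It is a minimal illustration only: it uses ordinary least squares on standardized composite scores as a simplified stand-in for the PLS-SEM algorithm in SmartPLS, and the data and variable names (COT and AFT scores predicting willingness to disclose) are simulated placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_paths(X, y, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap standardized regression (path) coefficients.

    X: (n, p) matrix of predictor composite scores (e.g., COT, AFT).
    y: (n,) outcome composite score (e.g., willingness to disclose).
    """
    n = len(y)
    # Standardize so the coefficients resemble standardized path weights.
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()

    def fit(Xm, yv):
        # OLS via least squares; a simplified stand-in for PLS estimation.
        beta, *_ = np.linalg.lstsq(Xm, yv, rcond=None)
        return beta

    est = fit(Xs, ys)
    boots = np.empty((n_boot, Xs.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows with replacement
        boots[b] = fit(Xs[idx], ys[idx])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return est, lo, hi

# Toy data standing in for the 365 survey responses.
X = rng.normal(size=(365, 2))                # columns: COT, AFT scores
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.8, size=365)
est, lo, hi = bootstrap_paths(X, y)
for name, e, lo_i, hi_i in zip(["COT", "AFT"], est, lo, hi):
    print(f"{name}: beta = {e:.3f}, 95% CI [{lo_i:.3f}, {hi_i:.3f}]")
```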
Measurement model validation
The reliability of the measurement of each construct was assessed by composite reliability (CR) and Cronbach's alpha (α) scores. As is shown in Table 4, all construct measures achieved high internal consistency, with the values of CR and α exceeding the recommended threshold of 0.7 (Hair et al., 2016). Moreover, the average variance extracted (AVE) of each construct was greater than 0.5, and all factor loadings were higher than 0.7 and statistically significant (p < 0.001). This established the convergent validity of the measurement of the constructs (Hair et al., 2016).
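For readers who want to reproduce these reliability checks, the following sketch computes Cronbach's alpha, composite reliability, and AVE for one construct from item responses and standardized loadings using the standard formulas; the item data and loading values below are simulated placeholders, not the study's results.

```python
import numpy as np

def reliability_metrics(items, loadings):
    """Cronbach's alpha, composite reliability (CR), and AVE for one construct.

    items:    (n, k) matrix of item responses for the construct.
    loadings: (k,) standardized factor loadings of the items.
    """
    n, k = items.shape
    # Alpha = k/(k-1) * (1 - sum of item variances / variance of the item sum)
    item_var = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_var.sum() / total_var)
    # CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
    lam = np.asarray(loadings, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    # AVE = mean of squared standardized loadings.
    ave = (lam ** 2).mean()
    return alpha, cr, ave

# Hypothetical example: five items of one construct with assumed loadings.
rng = np.random.default_rng(0)
factor = rng.normal(size=365)
items = np.column_stack([0.8 * factor + rng.normal(scale=0.6, size=365)
                         for _ in range(5)])
alpha, cr, ave = reliability_metrics(items, loadings=[0.82, 0.85, 0.80, 0.79, 0.84])
print(f"alpha={alpha:.3f}, CR={cr:.3f}, AVE={ave:.3f}")
```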
As is shown in Table 5, in terms of discriminant validity, the measurement model passed the Fornell and Larcker criterion (i.e., the square root of the AVE of each construct was higher than its correlations with other constructs) and the heterotrait-monotrait ratio of correlations (HTMT) criterion (i.e., HTMT values were lower than 0.9 and statistically different from one) (Hair et al., 2017). The loading of each indicator on its corresponding construct was also greater than its cross-loadings on the other constructs (see Table 4). These tests confirmed the discriminant validity of the measurement of the constructs in the model.
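Both discriminant validity criteria can be computed directly from the data. The sketch below checks the Fornell-Larcker condition (square root of AVE versus inter-construct correlations) and computes the HTMT ratio from raw item correlations; the construct names, AVE values, and data are illustrative assumptions only.

```python
import numpy as np

def fornell_larcker(ave, construct_corr):
    """Check that sqrt(AVE) of each construct exceeds its correlations
    with every other construct."""
    root_ave = np.sqrt(np.asarray(ave, dtype=float))
    p = len(root_ave)
    return all(root_ave[i] > abs(construct_corr[i][j])
               for i in range(p) for j in range(p) if i != j)

def htmt(items_a, items_b):
    """Heterotrait-monotrait ratio for two constructs, given their item
    response matrices of shape (n, k_a) and (n, k_b)."""
    R = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)
    ka, kb = items_a.shape[1], items_b.shape[1]
    hetero = np.abs(R[:ka, ka:]).mean()                          # between-construct
    mono_a = np.abs(R[:ka, :ka][np.triu_indices(ka, 1)]).mean()  # within construct A
    mono_b = np.abs(R[ka:, ka:][np.triu_indices(kb, 1)]).mean()  # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# Toy demo with two simulated constructs (names are illustrative).
rng = np.random.default_rng(3)
f1, f2 = rng.normal(size=(2, 365))
items_cot = np.column_stack([0.8 * f1 + rng.normal(scale=0.6, size=365) for _ in range(5)])
items_aft = np.column_stack([0.8 * f2 + rng.normal(scale=0.6, size=365) for _ in range(5)])
print("HTMT =", round(htmt(items_cot, items_aft), 3))              # should be < 0.9
print("Fornell-Larcker holds:", fornell_larcker([0.65, 0.66], np.corrcoef(f1, f2)))
```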
Because our data were collected in a cross-sectional survey with single informants, we used both ex-ante and ex-post remedies to minimize the threat of common method bias (CMB) (Podsakoff et al., 2003). By way of ex-ante remedies, we tried to control for CMB by obtaining feedback through a series of pre-tests while ensuring respondent anonymity, reducing the apprehension of evaluation, including attention checks, and organizing the order of questions to prevent item-priming effects (Podsakoff et al., 2003). Of the ex-post remedies, first, Harman's single-factor test (Podsakoff and Organ, 1986) showed that the most variance explained by one factor met the criterion of less than 50 % (44.5 %). Second, a full collinearity test (Kock and Lynn, 2012) resulted in VIF values less than 3.33 (maximum VIF = 2.353), indicating no pathological collinearity (Kock, 2015). Third, the highest correlation between the constructs was lower than 0.9 (r = 0.77 between COT and AFT), indicating that there was no extremely high correlation (Pavlou et al., 2007). Fourth, the results of the common method factor approach, as suggested by Liang et al. (2007), showed that the variance of each item was mostly explained by its theoretical construct (average variance = 74.4 %), rather than by the common method factor (average variance = 1.3 %). Moreover, while all item loadings on the corresponding theoretical constructs were statistically significant (p < 0.001), several loadings on the method factor were insignificant (p > 0.05) (see Appendix G). Taken together, we conclude that CMB was not a serious issue in this study.
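As an illustration of the ex-post checks, the sketch below approximates Harman's single-factor test via the leading eigenvalue of the item correlation matrix and implements the full collinearity VIF test in the spirit of Kock and Lynn (2012) by regressing each construct score on all the others; the inputs are simulated matrices, not the study's data.

```python
import numpy as np

def harman_single_factor(items):
    """Share of total variance captured by the first unrotated factor,
    approximated by the leading eigenvalue of the item correlation matrix."""
    R = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)          # ascending order
    return eigvals[-1] / eigvals.sum()       # < 0.50 suggests CMB is not dominant

def full_collinearity_vifs(scores):
    """Regress each construct score on all the others and report
    VIF = 1 / (1 - R^2); values below 3.33 indicate no pathological collinearity."""
    scores = np.asarray(scores, dtype=float)
    n, p = scores.shape
    vifs = []
    for j in range(p):
        y = scores[:, j]
        X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1 / (1 - r2))
    return vifs

# Toy demo: eight items loading on one factor plus noise, and four construct scores.
rng = np.random.default_rng(4)
g = rng.normal(size=365)
items = np.column_stack([0.5 * g + rng.normal(size=365) for _ in range(8)])
print("first-factor variance share:", round(harman_single_factor(items), 3))
scores = rng.normal(size=(365, 4))
print("full collinearity VIFs:", [round(v, 2) for v in full_collinearity_vifs(scores)])
```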
Fig. 2 shows the results of model estimation from the PLS-SEM in terms of path coefficients (β) and significance (p value) as well as the model's explained variance (R²) and predictive relevance (Q²). More details of the PLS-SEM results are provided in Appendix H.

Overall, the structural model did not suffer from collinearity problems, as indicated by the low VIF values (< 3) of all sets of exogenous constructs (see Appendix G). It also had a moderate explanatory power or predictive accuracy (Hair et al., 2016), as the R² values were within the acceptable range (> 10 %, with the exception of the R² for willingness to falsify) (Falk and Miller, 1992). All Q² statistics obtained by a blindfolding procedure were greater than zero, indicating the predictive relevance of the endogenous constructs in the model (Hair et al., 2016). We further analyzed the mediating roles of the two forms of trust on the relationships between perceptions and behavioral outcomes. Table 6 shows that cognitive trust significantly mediated the effect of perceived data protection policy, governmental regulation, and prevalence of information disclosure on customers' willingness to disclose (p < 0.05), but did not significantly mediate the relationships between any perception and willingness to falsify (p > 0.05). By comparison, affective trust significantly mediated the influence of perceived ethics and the prevalence of information disclosure on willingness to disclose (p < 0.001), and was a significant mediator of the effects of perceived ethics and the prevalence of information disclosure on willingness to falsify (p < 0.01). These results provide further evidence of the distinct roles of cognitive trust and affective trust in our conceptual model.
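The indirect effects in Table 6 were obtained by bootstrapping within the PLS-SEM model. The simplified sketch below shows the same logic for a single mediation path (e.g., a perception influencing willingness to disclose through cognitive trust), using a percentile bootstrap of the a*b product; the data are simulated scores, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_indirect(x, m, y, n_boot=5000):
    """Percentile bootstrap of the indirect effect a*b in a simple
    mediation model x -> m -> y."""
    n = len(x)
    effects = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)               # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        # a-path: simple regression slope of m on x.
        a = np.cov(xb, mb)[0, 1] / xb.var(ddof=1)
        # b-path: slope of m in the regression of y on x and m.
        X = np.column_stack([np.ones(n), xb, mb])
        beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
        effects[b] = a * beta[2]
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return effects.mean(), lo, hi                 # CI excluding zero => significant

# Toy data standing in for construct scores (perception -> trust -> disclosure).
x = rng.normal(size=365)
m = 0.6 * x + rng.normal(scale=0.8, size=365)
y = 0.5 * m + rng.normal(scale=0.8, size=365)
print(bootstrap_indirect(x, m, y))
```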
Conclusions
Contact tracing is a safety practice for hospitality businesses to remain open in a safe environment during the COVID-19 pandemic. This research investigates customers' cooperative behavior toward contact tracing based on cognitive trust and affective trust, and it examines perceptual antecedents of the two forms of trust. The findings show that cognitive trust facilitates willingness to disclose and reduces willingness to falsify, leading to actual cooperative behavior in contact tracing. By contrast, affective trust increases both the willingness toward disclosure and falsification, suggesting that it may encourage symbolic cooperative behavior. Such symbolic cooperation complicates contact tracing by creating barriers to quickly and precisely reach people who have been in proximal contact with infected individuals, and thus is detrimental to public safety.
Moreover, the findings demonstrate the relevant perceptual factors in contact tracing that influence cognitive trust and affective trust. Specifically, the antecedents of cognitive trust comprise perceived data protection policy, perceived governmental regulation, and perceived prevalence of information disclosure, while the antecedents of affective trust comprise perceived ethics of data collection and perceived prevalence of information disclosure. Theoretical and practical implications can be derived from our findings.
Table 6
Estimates of indirect effects in the model. Note: ETH = Perceived ethics of data collection. POL = Perceived data protection policy. REG = Perceived governmental regulation. PRE = Perceived prevalence of information disclosure. COT = Cognitive trust. AFT = Affective trust. DCL = Willingness to disclose. FAL = Willingness to falsify.
Theoretical implications
The primary contribution of this research is the demonstration of how different forms of trust influence cooperative behavior toward contact tracing in distinct ways. Although the literature on customer data privacy has shown that trust is a key mechanism that promotes cooperative behavior, it overlooks the effects of different forms of trust (e.g., Aguirre et al., 2015; Jagadish, 2020; Martin and Murphy, 2017; Waldman, 2018). Our findings show that cognitive and affective trust differently influence two types of cooperative behaviors: the willingness toward disclosure and falsification. Cognitive trust encourages customers to disclose truthful information through positive evaluations of the competence, professionalism, and reliability of a business, which contribute to the confidence that private customer information will be handled appropriately. By comparison, affective trust can be a double-edged sword in facilitating customer cooperation in contact tracing. Customers tend to disclose information to a business with high affective trust because they have the motive to maintain a positive relationship with it. However, such a motive can pressure customers and trigger their intentions to provide false information, especially when they are unsure about how their data will be collected, managed, and used. Thus, our work broadens and deepens the understanding of the role of trust in driving cooperative behavior related to data privacy.
Moreover, we add to the research on customer data privacy by identifying the antecedents of the two forms of trust and examining their effects. While past studies have suggested various antecedents of trust and consequent behaviors (Acquisti et al., 2015; Bansal et al., 2016; Su and Mattila, 2020; Yang et al., 2019), the antecedents that specifically contribute to cognitive or affective trust are unclear. Our findings show that perceived data protection policy reflects the competence, capability, and professionalism of the business in terms of ensuring the security of customer data, which is related to the attributes of cognitive trust. Similarly, perceived governmental regulation creates positive beliefs that the business has sufficient knowledge and skills to act competently and professionally in contact tracing, thus contributing to cognitive trust. Moreover, perceived business ethics conveys such business qualities as altruism, benevolence, and sincerity that create positive feelings and emotional connections, thus strengthening affective trust. Customers apply the prevalence of information disclosure as social proof to develop both cognitive and affective trust. The investigation of these perceptions deepens our understanding of the distinct characteristics of the two forms of trust, and it helps us establish a perception-trust-behavioral intention conceptual model to understand the psychological processes of customers in the context of contact tracing.
Practical implications
Our research has direct practical implications for hospitality businesses and governments in terms of effectively encouraging customers to cooperate in contact tracing. Hospitality businesses should prioritize the development of cognitive trust over that of affective trust to improve the effectiveness of contact tracing. Despite the clear benefit of affective trust in establishing and maintaining sustainable customer relationships, hospitality businesses should be cautious about the negative and costly effects of affective trust on customer cooperation that may even backfire. Therefore, for the sake of public health and safety in the COVID-19 pandemic, hospitality businesses should gain customer confidence in their capability of handling contact tracing competently and professionally before showing care and concern to develop relationships.
Hospitality businesses can gain cognitive trust by strengthening positive perceptions of data protection policy, the prevalence of information disclosure, and governmental regulation. First, businesses should offer fact-based information about their data protection policy to strengthen customers' cognitive trust in them. For example, they can provide the relevant information on leaflets, posters, social media, and webpages to inform customers. Second, hospitality industry associations or influential platforms can collaborate with hospitality businesses to make contact tracing prevalent at a large scale through technological support and promotional activities. For example, the Restaurant Association of New Zealand has taken the lead in providing support for restaurants and cafes to conduct standardized contact tracing (Restaurant Association of New Zealand, 2020).
Finally, governments play an important role in helping hospitality businesses gain cognitive trust and facilitate contact tracing. Governments should enact sufficient regulatory mechanisms, such as administrative regulations, privacy standards, and supervision systems. Governments also need to strengthen the degree of regulatory enforcement on data practices in the hospitality industry. This regulatory enforcement is seen as an endorsement of the hospitality industry. For example, compared with Australia, where many electronic check-ins are outsourced to private companies with opaque privacy rules, the New Zealand government participates in designing and operating mobile applications for contact tracing for hospitality check-ins to balance the protection of personal privacy with protecting public health (ABC News, 2020). Such regulatory efforts send clear signals to customers that their privacy is respected. In this regard, hospitality businesses should lobby for, rather than resist, strong government regulation and intervention in contact tracing.
Limitations and future research directions
This research has limitations that offer directions for future studies. First, we measured customers' behavioral intentions rather than their actual behaviors. Future research can collaborate with hospitality businesses to conduct field studies to better measure cooperative behavior. Second, our research was conducted in the Western context, where personal privacy and self-independence are highly valued. A fruitful area for future research will be to replicate our conceptual model in Eastern countries, such as South Korea and China, that have a strong culture of interdependence. How hospitality businesses and governments can facilitate cooperative behavior might need to be evaluated through a cultural lens. Finally, future research can extend the generalizability of our findings to other privacy-related contexts, such as customer registration, that require customers to disclose their information.
Funding
The authors gratefully acknowledge grants from Macquarie University for financial support.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in the research.
Declaration of Competing Interest
None.
Appendix A. A short survey
Tables A1-A3

Appendix B. Participants profile (pilot study)

Appendix E. The brief

The COVID-19 pandemic is one of the greatest challenges we face in decades. Government officials and health experts have been working together to contain the spread of the virus and help industries to recover from the crisis. Many health experts suggest a contact tracing approach: tapping into personal information (e.g., name, phone number, location, travel history, health status) to track the spread of infection and warn people who have come into close proximity to a COVID-19 infected individual. Contact tracing has two purposes: 1) to figure out who a sick person caught an illness from, and 2) to find out who they've been in contact with while infectious. This contact tracing approach has been adopted by several countries, such as South Korea, Singapore, Australia, New Zealand, and some states in the U.S. Restaurants, cafes, and bars have been hit hard by the pandemic. Their owners hope to reopen their businesses and allow customers to dine, and some of them have adopted the contact tracing approach for safety management. These restaurants require customers to provide contact details upon entry. For example, restaurants may require customers to provide information including names, contact information, and visit times by filling in a form, scanning a QR code, or downloading a mobile app. A major restaurant chain X in the U.S. (in order to avoid legal disputes, we use "X" to replace the real restaurant name) has adopted the contact tracing approach for safety management. This restaurant chain collects personal information including names and phone numbers from their customers. Please answer the following questions based on your opinion about this restaurant chain X.
Appendix F. Measurement items (Construct, Item, Reference)

Perceived ethics of data collection (ETH): a perception of the extent to which the business takes responsibility to look after public safety and wellbeing. Reference: Edinger-Schons et al. (2018).
ETH1. The purpose for this restaurant chain to gather personal information during the COVID-19 pandemic is to fulfill responsibility for the society.
ETH2. The purpose for this restaurant chain to gather personal information during the COVID-19 pandemic is to support customers to live a healthy life.
ETH3. The purpose for this restaurant chain to gather personal information during the COVID-19 pandemic is to express their genuine feeling of responsibility.
ETH4. Given the act of collecting personal information during the COVID-19 pandemic, I believe this restaurant chain is genuinely concerned about being socially responsible.
ETH5. Given the act of collecting personal information during the COVID-19 pandemic, I believe this restaurant chain is committed to public welfare out of unselfish motives.
ETH6. Given the act of collecting personal information during the COVID-19 pandemic, I believe this restaurant chain's commitment is based on the wish to do good.

Perceived data protection policy (POL): a positive perception of the way the business exercises ownership, power, and protection over the use of customer data. Reference: Lwin et al. (2007).
POL1. This restaurant chain prevents customer information from being used for purposes other than those initially stated during the COVID-19 pandemic.
POL2. This restaurant chain prevents customer information from being shared with unauthorized external parties.
POL3. This restaurant chain's databases containing customer information are protected from unauthorized access regardless of costs.
POL4. This restaurant chain keeps customer information secure.

Perceived governmental regulation (REG): a perception regarding how well data practices and potential privacy risks are regulated and monitored by government authorities. References: Lwin et al. (2007); Poortinga and Pidgeon (2006).
REG1. The existing laws in the U.S. are sufficient to protect restaurant customers' privacy when contact tracing is adopted during the COVID-19 pandemic.
REG2. There is stringent legal enforcement in the U.S. to protect personal information of restaurant customers when contact tracing is adopted during the COVID-19 pandemic.
REG3. The U.S. government is doing enough to ensure that restaurant customers are protected against privacy violations when contact tracing is adopted during the COVID-19 pandemic.

Perceived prevalence of information disclosure (PRE): a perception of the extent to which others are engaging in information disclosure.

Cognitive trust (COT):
Given the approach of this restaurant chain, I believe in its competence during the COVID-19 pandemic.
COT3. I can rely on this restaurant chain to serve me carefully during the COVID-19 pandemic.
COT4. I am confident about this restaurant chain's ability to professionally operate its business during the COVID-19 pandemic.
COT5. I can confidently depend on this restaurant chain if I visit it during the COVID-19 pandemic.

Affective trust (AFT): emotional bonds or connections with the exchange party that are grounded in its care and concern. References: Johnson and Grayson (2005); McAllister (1995).
AFT1. I would feel a sense of personal connection with this restaurant chain if I visit it during the COVID-19 pandemic.
AFT2. I feel that this restaurant chain will respond to me caringly as a customer during the COVID-19 pandemic.
AFT3. I feel that this restaurant chain will show a warm and caring attitude toward me during the COVID-19 pandemic.
AFT4. I feel that this restaurant chain will be concerned about me during the COVID-19 pandemic.
AFT5. I feel that this restaurant chain will care about maintaining a good relationship with me during the COVID-19 pandemic.

Willingness to disclose (DCL): an intent to provide information to the business. Reference: Morosan (2018).
DCL1. How likely are you to disclose personal information to this restaurant chain during the COVID-19 pandemic?
DCL2. How willing are you to disclose personal information to this restaurant chain during the COVID-19 pandemic?
DCL3. How probable are you to disclose personal information to this restaurant chain during the COVID-19 pandemic?

Willingness to falsify (FAL): an intent to provide false or incomplete information to the business.
FAL1. If this restaurant chain asks for my personal information when I visit it, I will give them false information.
FAL2. If this restaurant chain asks for my personal information when I visit it, I will purposely try to trick them when providing my information.
FAL3. If this restaurant chain asks for my personal information when I visit it, I think it is fine to give misleading answers on personal questions.
Appendix G. Common method bias analysis using the common method factor approach (main study)
Observation of Vertical Betatron Sideband due to Electron Clouds in the KEKB LER
The effects of electron clouds on positively-charged beams have been an active area of research in recent years at particle accelerators around the world. Transverse beam-size blow-up due to electron clouds has been observed in some machines, and is considered to be a major limiting factor in the development of higher-current, higher-luminosity electron-positron colliders. The leading proposed mechanism for beam blow-up is the excitation of a fast head-tail instability due to short-range wakes within the electron cloud. We present here observations of betatron oscillation sidebands in bunch-by-bunch spectra that may provide direct evidence of such head-tail motion in a positron beam.
The development of clouds of electrons in positively-charged-beam storage rings has been observed at several machines, including the KEKB Low Energy Ring (LER), a 3.5 GeV positron storage ring which is part of the KEK B-Factory. Observations at the KEKB LER of betatron tune shifts along a bunch train via gated tune meter [1], and of transverse bunch size along the train via high-speed gated camera [2] and streak camera [3], show a characteristic increase of transverse tune shifts and beam size starting near the head of the train, reaching saturation at some point along the train. Simulations of electron cloud density due to photo-electrons being drawn towards the positron beam have shown a similar build-up of cloud density along the train, reaching saturation at some point [4,5]. Electrons from the cloud have also been measured directly via electrode [6]. Solenoids have been wound around approximately 95% of the drift space in the LER, with a maximum field at the center of the beam pipe of 45 Gauss [7]. The beam-size blow-up has been observed to occur above a threshold average bunch current of ∼ 0.35 mA/bunch at 4-RF-bucket spacing between bunches with the solenoids off; this threshold is raised when the solenoids are powered on [8]. The beam blow-up has been found to reduce the specific luminosity of the affected bunches [8].
One proposed mechanism for the beam blow-up due to the presence of electron clouds is a strong head-tail instability caused by wake fields created by the passage of the bunch particles through the cloud [9]. Attempts have been made to observe this head-tail motion directly via streak camera [3], but have been unsuccessful, possibly due to a lack of sufficient light intensity. A vertical sideband peak has been reported for a proton beam at the CERN SPS which could be an indication of head-tail motion [10], though no clear signature has yet been reported at a positron machine. We report here on observations of a sideband peak, above the betatron tune, which may provide direct evidence of such a coupled-mode spectral peak in a positron beam.
FIG. 1: Two-dimensional plot of vertical bunch spectrum versus bunch number. The horizontal axis is fractional tune, from 0.5 on the left edge to 0.7 on the right edge. The vertical axis is bunch number in the train, from 1 on the bottom edge to 100 on the top edge. The bunches in the train are spaced 4 RF buckets (about 8 ns) apart. The bright, curved line on the left is the vertical betatron tune, made visible by reducing the bunch-by-bunch feedback gain by 6 dB from the level usually used for stable operation. The line on the right is the sideband.
The sideband peak first appears near the bunch-current threshold of beam blow-up: the sidebands cannot be seen when the average bunch current is below the blow-up threshold, and can be seen when the average bunch current is over the threshold. In addition, the presence of the sideband is affected by the electron-cloud-suppression solenoids; for example, it has been observed to appear in bunches at 1 mA per bunch and a 4-bucket spacing only when the solenoids are turned off, and does not appear when the solenoids are turned on. These behaviors cannot be explained by ordinary head-tail mechanisms; we conclude that the sidebands, like the beam blow-up, are caused by the presence of electron clouds.
Observations were made using signals taken from a pair of Beam Position Monitor (BPM) electrodes, which are mounted on the beam pipe and measure 6 mm in diameter. The difference signal from two electrodes on opposite sides of the beam pipe is detected at 2.032 GHz (= 4 × f_RF) and recorded by the Bunch Oscillation Recorder (BOR), a diagnostic tool in the bunch-by-bunch feedback system [11,12]. The BOR itself consists of an 8-bit digitizer front-end with a 20-MByte memory, which is capable of storing one beam centroid position measurement per bunch for all 5120 RF buckets in the ring over 4096 turns.

Data were taken at the LER on 24 June 2004, in single-beam mode (no colliding bunches in the HER) and with the majority of the solenoids turned off. The fill pattern consisted of four trains of bunches, spaced evenly around the ring. Each train consisted of 100 bunches, spaced 4 RF buckets (≈ 8 ns) apart. In Figure 1, the spectrum for each bunch is plotted, with fractional tune on the horizontal axis and bunch number on the vertical axis. The betatron tune (made visible by lowering the gain of the bunch-by-bunch feedback system) is seen as the left curved line. The betatron tune shifts successively higher as one moves from the head of the train towards the tail, saturating at around the 40th bunch. To the right of the ν_y peak is the sideband peak, which likewise shifts along the train, until saturating a little after the ν_y peak. In experiments performed over the past year, it has been observed that changing the vertical betatron tune causes the sideband peak to shift by an equal amount, and in the same direction as the betatron tune. Changing the horizontal tune has no effect on this sideband.

Figure 2a shows the observed bunch spectra with the feedback gain set at the nominal value for physics running, −9.45 dB. In this plot, the Fourier power spectrum of each bunch in the ring is calculated individually, then the power spectra of all bunches are averaged together. The horizontal scale is in units of fractional tune, and the vertical scale is in units of µm². The vertical betatron tune can be seen as a broad, low peak at a fractional tune of approximately 0.58-0.59. To the right of it can be seen the sideband peak at approximately 0.64. The broad, pedestal-like tail to the left of the peak is due to the projection of a succession of narrow peaks, one for each bunch, which have lower tunes near the head of the train.
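As a concrete illustration of the analysis just described, the following minimal sketch (assuming NumPy; the function name, the synthetic data, and the noise level are ours, not the paper's) computes the per-bunch Fourier power spectra from BOR-style turn-by-turn data and averages them over the train. Note that with one sample per turn, a fractional tune above 0.5 appears aliased at 1 − ν.

```python
import numpy as np

def average_bunch_spectrum(positions):
    """Per-bunch power spectra and their train average from turn-by-turn data.

    positions: (n_bunches, n_turns) array of vertical centroid readings,
    e.g. 100 bunches x 4096 turns.  Each bunch is transformed individually,
    then the power spectra of all bunches are averaged together.
    """
    n_bunches, n_turns = positions.shape
    x = positions - positions.mean(axis=1, keepdims=True)   # remove closed orbit
    power = np.abs(np.fft.rfft(x, axis=1)) ** 2 / n_turns ** 2
    freqs = np.fft.rfftfreq(n_turns, d=1.0)                 # 0 .. 0.5 tune units
    return freqs, power, power.mean(axis=0)

# Synthetic example with a betatron line at nu_y = 0.59 (aliases to 0.41).
rng = np.random.default_rng(0)
turns = np.arange(4096)
pos = np.sin(2 * np.pi * 0.59 * turns)[None, :] + rng.normal(0, 0.1, (100, 4096))
freqs, per_bunch, avg = average_bunch_spectrum(pos)
print(freqs[np.argmax(avg)])  # ~0.41, i.e. the alias of nu_y = 0.59
```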
The vertical gain of the feedback system was lowered by 6 dB, to the point where the beam started to become slightly unstable, as seen in oscilloscope traces and as reflected in a reduced beam lifetime. Under these conditions, the vertical betatron peak becomes enhanced, as shown in Fig. 2b; however, the sideband peak amplitude is virtually unchanged.
Finally, the feedback was turned off entirely. The BOR was set to record the 4096 turns immediately following the feedback being turned off. As seen in Fig. 2c, the betatron peak grows enormously, but the sideband peak height is again essentially unchanged. This indicates that this peak does not respond to dipole kicks from the feedback system. The estimated amplitude of the motion at this peak is approximately 1.6 µm, or half of one percent of the vertical beam size of 320 µm at the pickup location.
Experiments were done with changing the synchrotron tune. In one set of measurements, the RF voltage was reduced, which lowered the synchrotron tune by 0.0012. The position of the sideband relative to the vertical betatron tune for both values of ν_s is shown in Figure 3a; the sidebands are visible starting from the fifth bunch in the train. The difference between the two curves is shown in Figure 3b. The average of the peak separation over all bunches is not statistically different from zero.
Observations have also been made using the same BOR memory recorder, but using a fast photo-multiplier tube (PMT) as the input device instead of a BPM electrode. A Hamamatsu H6780 PMT was set up to record the light intensity from a focused image of the beam using synchrotron radiation. The image was partially obscured in the vertical direction, leaving only the upper edge of the image visible. The spectra obtained via PMT were identical to those obtained from the BPM electrode, though with a lower signal-to-noise ratio. The amplitude of the peak seen by the PMT can only be crudely estimated, but agrees roughly with that seen by the BPM electrode. One feature that the PMT can detect that the BPM cannot is changes in the beam size.
The time-series data of the BPM, shown in Fig. 4a, reveal a burst-like time structure to the sideband oscillations. The sideband peak is present as a low-level oscillation that suddenly grows and damps in a burst lasting ∼ 500 turns (5 ms). A breakdown of the data into 512-turn slices shows that the sideband peak is seen most strongly during the burst, and disappears entirely just after the burst.
In the PMT data, as seen in Fig. 4b, a similar 500-turn-duration phenomenon is observable wherein the light level (beam size) increases over the course of 500 turns, then slowly damps afterwards, over the course of ∼ 1500 turns. A slice-by-slice breakdown of such events reveals that the sideband peak is a maximum during the ramp-up, and disappears momentarily just after the burst.
The two sets of observations suggest that in the blown-up state, a series of bursts and quiescent periods alternate. During the bursts of violent dipole motion, the beam size increases by a further ≈ 5% from its already blown-up state. After it blows up, the dipole motion is temporarily absent, as the emittance of the beam damps down.

One possible interpretation for this sideband is that it is a signature of mode-coupling due to the head-tail instability predicted by Ohmi, Zimmermann and Perevedentsev [13]. A notable feature of the sideband is that it occurs on the upper side of the betatron peak, which suggests that the effective wake function in the region of the tail of the bunch is a focusing wake. A possible mechanism for producing such a wake is a pinching effect on the electron cloud. Wakes that change from defocusing to focusing with distance along the bunch have been found in simulations using the KEKB parameters [13,14].

When the synchrotron tune is changed, the average separation between the sideband peak and the betatron peak does not change significantly. In the case of a strong head-tail instability, the coupled-mode frequency does not necessarily depend strongly on ν_s. As an illustration, mode spectra were generated using a toy model with an airbag charge distribution and a simple effective wake, shown in Fig. 5, which uses a resonator-like wake W increasing along (−z) to represent the enhancement of the wake near the tail of the bunch due to pinching of the electron cloud, with α = ω_R/4 and ω_R = 2π × 40 GHz. (Note: the oscillation frequency of cloud electrons as calculated from the LER beam size and positron charge density is ∼ 2π × 43 GHz.) Plots of mode spectra as a function of effective R/Q are shown in Fig. 6 for synchrotron tunes of 0.022 and 0.024. As can be seen, the tune of the coupled mode in the region far above the coupling threshold does not change significantly with the synchrotron tune. However, the coupling point between the l = +1 mode (ν_y + ν_s) and the l = +2 mode (ν_y + 2ν_s) shifts to the right. Since the electron cloud density increases along the leading bunches of the train, this change in the threshold would lead one to expect that the position of the first bunch to exhibit the sideband should shift as the synchrotron tune is changed.

To investigate this behavior near the threshold, data originally taken on 23 December 2003 at two different values of the synchrotron tune were re-examined. The LER was in single-beam mode, with all solenoids off. The bunches were stored at a four-bucket spacing, at a bunch current of 0.52 mA/bunch. Due to the lower bunch current, the growth of the sideband peak is more gradual in this data set than it is in the July 2004 data set. (Due to a high feedback gain, the betatron peak is not pronounced enough to measure.) One set, of four measurements, was taken at an RF voltage of 8 MV, and the other set, of three measurements, was taken at 6 MV. The synchrotron tunes of the two sets, as measured from the synchrotron peak visible in the spectra, were 0.0237 and 0.0203, respectively. The maximum-height frequency bin in the region of the sideband was found for each bunch in the train, and the peak heights of those maximum bins were averaged together within each set. The average peak heights, with 1-sigma statistical error bars, at each synchrotron tune are plotted as a function of bunch number along the train in Fig. 7.
As can be seen, the development of the sideband peak height occurs earlier in the train at the lower synchrotron tune, in agreement with expectation.
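The peak-height extraction used for Fig. 7 is simple to reproduce; here is a minimal sketch (assuming NumPy; the function name and the tune window bracketing the sideband are our choices, not values from the paper):

```python
import numpy as np

def sideband_peak_heights(per_bunch_spectra, freqs, band=(0.60, 0.68)):
    """Height of the tallest frequency bin in the sideband region, per bunch.

    per_bunch_spectra: (n_bunches, n_freqs) power spectra along the train.
    band: fractional-tune window assumed to bracket the sideband.
    """
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return per_bunch_spectra[:, mask].max(axis=1)

# For repeated measurements at one synchrotron tune, stack the per-bunch
# heights into shape (n_measurements, n_bunches) and plot
# heights.mean(axis=0) with heights.std(axis=0, ddof=1) as 1-sigma bars.
```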
Simulations and linear theory also indicate that for a given cloud density, larger beam sizes should be more stable due to a weaker beam-cloud interaction [13]. The burst-like activity of the blown-up beam may be the result of the beam size varying around some threshold value, as the bunch alternates between states of emittance growth due to the instability occurring when the beam size is below the threshold, and the beam size damping down once over the threshold.
A betatron sideband peak has been found in the vertical tune spectrum of positron bunches in the presence of beam-size blow-up due to electron clouds. The sideband peak is on the upper side of the betatron peak in terms of fractional tune, first appears early in the bunch train, and the separation between this peak and the betatron tune peak increases going along the train, until it saturates at a certain point. The best explanation for it is that it is a signature of the head-tail instability hypothesized to explain transverse beam blow-up due to electron clouds. The presence of this sideband peak also provides a sensitive diagnostic for the presence of electron clouds.
The authors would like to thank Professor K. Oide for his support of this work, and Drs. Y. Funakoshi, T. Ieiri, H. Ikeda, H. Koiso, M. Masuzawa and A. Valishev for many fruitful discussions.
Occurrence of antibodies to Toxoplasma gondii in rheas ( Rhea americana ) and ostriches ( Struthio camelus ) from farms of different Brazilian regions
This study aimed to verify the occurrence of antibodies against Toxoplasma gondii in rheas (Rhea americana) and ostriches (Struthio camelus) bred commercially in Brazil. Blood samples from 20 rheas and 46 ostriches (young and adult) were serologically tested using the modified agglutination test (MAT) at an initial titration of 1:16 for ostriches and 1:25 for rheas. Antibodies against T. gondii were found in 50% (10/20) of the rheas, with titers ranging from 1:25 to 1:6,400. The occurrence of antibodies against T. gondii in ostriches was 17.4% (8/46), with titers ranging from 1:16 to 1:256. Birds showing titers higher than 1:200 for T. gondii were mainly young ones. Therefore, rheas and ostriches may be parasitized by T. gondii and may show high levels of antibodies against this parasite.
Toxoplasma gondii causes a parasitic disease of worldwide occurrence that has been identified in different species and is of great importance as a zoonosis (MONTEIRO, 2011). Toxoplasmosis is usually considered to be asymptomatic, because the host immune defense is able to control the infection and prevent further pathogenic damage. However, depending on the immune status (situations of poor nutrition, stress and immunosuppressive diseases, among others), a decline in the immunological defenses may occur and the individual might develop a clinical stage of the disease (FAGUNDES, 2009). It has been found that when humans and animals are infected by T. gondii, they may have fever, lymphadenopathy and anorexia (BOWMAN et al., 2010; MONTEIRO, 2011). Previous studies described occurrences of positive serology for T. gondii in ostriches and rheas (DUBEY et al., 2000; MAROBIN et al., 2004; CONTENTE et al., 2009; SOARES et al., 2010).
Rheas (Rhea americana) and ostriches (Struthio camelus) are ratites found in different countries, with rheas typically found in South America. Ostrich and rhea farming has become a growing business in Brazil, serving as an agricultural activity of national and international importance (FILHO; LUCIO, 2006). As this activity has grown, there has been a trend towards increasing health problems in these birds, as well as bird mortality and expenses with treatments (FILHO; LUCIO, 2006). These birds are believed to have great importance in the T. gondii life cycle, since they might be consumed by predators such as felines (BOWMAN et al., 2010; MONTEIRO, 2011). Likewise, it is likely that ratites are involved in the epidemiology of T. gondii in large wild felines. Therefore, the aim of this study was to detect antibodies against T. gondii in R. americana and S. camelus from Brazilian commercial herds.
This study evaluated two rhea farms, in the municipalities of Rio Rufino, state of Santa Catarina, and Santa Maria, state of Rio Grande do Sul, in southern Brazil, and three ostrich farms, in the municipalities of São Paulo, state of São Paulo, São Miguel do Oeste, state of Mato Grosso, and Santa Maria, state of Rio Grande do Sul. The first rhea farm had 40 birds: 21 adults aged from 3 to 7 years, and 19 rhea chicks from 4 to 6 months of age. Blood samples were collected from only 17 of these young and adult birds (Table 1). From the second farm, blood samples were collected from three young rheas aged one year. The ostrich blood samples were well distributed: from the farm of the State University of São Paulo in the city of São Paulo, samples were collected from 17 adult ostriches out of a total population of 30 birds; from a commercial farm with over 5,000 birds in São Miguel do Oeste, samples were collected from 20 ostriches aged 14 months; and from a farm that was just starting in the business in Santa Maria, nine samples were collected. On this last farm, the adult birds had been living there for the last two years (aged between 3 and 5 years), and the young birds, which were ten months of age at the time of sampling, had been acquired at the age of three months. On all farms (rhea and ostrich), all the birds were kept on natural pasture and their diet was supplemented with commercial feed.
Blood samples (3 to 5 mL/bird) were collected using a needle (22 gauge) by means of brachial vein puncture. The samples were stored in tubes without anticoagulant, refrigerated at 10 °C, transported to the laboratory and centrifuged at 3,500 g for 10 min. Serum samples were stored at −20 °C until serological analysis for T. gondii.
Rhea and ostrich serum samples were assessed for antibodies against T. gondii by means of the modified agglutination test (MAT), in accordance with the methodology described by Desmonts and Remington (1980). The initial dilutions for the serum samples were 1:16 for ostriches (SOARES et al., 2010) and 1:25 for rheas (CONTENTE et al., 2009), in buffered saline solution. Thus, titers ≥ 1:16 and ≥ 1:25 were considered positive for ostriches and rheas, respectively. Based on this information, positive samples were further diluted in order to identify the maximum antibody titer for each bird.
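The seropositivity bookkeeping that follows from these cut-offs is straightforward; the sketch below (plain Python; the titer list is hypothetical, not the study's data) applies the rhea cut-off of 1:25 to reciprocal titers:

```python
def seroprevalence(titers, cutoff):
    """Count and percentage of samples with a MAT titer at or above the cut-off.

    titers: reciprocal titers (e.g. 25 for 1:25); 0 means no agglutination.
    """
    n_pos = sum(1 for t in titers if t >= cutoff)
    return n_pos, 100.0 * n_pos / len(titers)

rhea_titers = [0, 25, 0, 6400, 200, 0, 3200, 0, 25, 0]  # hypothetical sample
n_pos, pct = seroprevalence(rhea_titers, cutoff=25)
print(f"{n_pos}/{len(rhea_titers)} positive ({pct:.0f}%)")  # -> 5/10 positive (50%)
```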
Out of the 20 rhea serum samples analyzed using MAT, ten were positive for T. gondii (50% seropositive) (Table 1). These results were higher than those found by other researchers. In a study of 74 rheas from commercial farms in the state of Rio Grande do Sul, Brazil, 8.1% were found to be positive for T. gondii using the hemagglutination test (MAROBIN et al., 2004). In another study of 69 rheas, 4.3% showed antibodies against T. gondii (SOARES et al., 2010). In the present study, we found high anti-T. gondii titers (1:200, 1:3,200 and 1:6,400), particularly in young rheas (4 to 6 months of age). This finding might be related to recent infection in young ratites, thus suggesting a strong immune response against the parasite.

Table 1. Antibodies against Toxoplasma gondii in rheas and ostriches on farms located in the municipalities of Rio Rufino, state of Santa Catarina; São Miguel do Oeste, state of Mato Grosso; and Santa Maria, state of Rio Grande do Sul, Brazil, according to the modified agglutination test (MAT).
One of the first studies on T. gondii seroprevalence, in which 973 ostriches were sampled, found that 2.9% were positive (DUBEY et al., 2000). In the current study, 17.4% of the serum samples from ostriches had differing levels of antibodies against T. gondii, ranging from 1:16 to 1:256. These results were similar to those reported by Contente et al. (2009) in São Paulo, Brazil, who demonstrated 14.36% seropositivity for T. gondii. Considering that the 17 samples collected on the São Paulo farm in the current study were seronegative for T. gondii, it appears that the prevalence may depend mainly on the local epidemiological situation.
The positivity for T. gondii in ostriches and rheas can be linked to several factors. Among these, the most important is free access by felines to environments shared by ratites, like those investigated in the current study. Felines may have eliminated oocysts (the infective form) in their feces (MONTEIRO, 2011). Rheas and ostriches, along with many other organisms (insects, worms, and small rodents), serve as intermediate hosts of T. gondii (RUIZ; FRENKEL, 1980). Therefore, rheas and ostriches play an important role in the epidemiology of toxoplasmosis when they ingest intermediate hosts infected by the parasite, since they can later be consumed by wild felines, infecting these predators (DUBEY; BEATTIE, 1988; MONTEIRO, 2011).
As previously mentioned, the ostriches and rheas apparently showed no clinical changes, although young birds showed higher levels of circulating antibodies. As in ratites, toxoplasmosis in another intermediate host, the backyard chicken (Gallus gallus domesticus), is usually asymptomatic (GARCIA et al., 2000). However, clinical signs have been recorded in some birds: eye and brain injuries have been seen to affect turkeys and canaries (QUIST et al., 1995; WILLIAMS et al., 2001). Implementing good sanitary management for these birds is important in order to prevent infection and environmental contamination. Thus, feed and water that are free from T. gondii should be provided, so as to decrease the prevalence of birds infected by the parasite, consequently reducing the risk of infection among humans (CONTENTE et al., 2009).
Based on these results, we conclude that the occurrence of T. gondii infection in R. americana and S. camelus varied among the farms investigated in Brazil. This study suggests that the young birds tested had probably been affected by recent infection, given that they presented higher levels of antibodies against the parasite.
On the Performance of PF, MLWDF and EXP/PF algorithms in LTE
This paper explores the performance of three packet scheduling algorithms, namely the Proportional Fair (PF), Exponential/Proportional Fair (EXP/PF), and Maximum Largest Weighted Delay First (MLWDF) algorithms, from the real-time (RT) traffic perspective. Simulation results show that, in the downlink of the 3GPP LTE system, MLWDF outperforms the PF and EXP/PF algorithms in terms of packet throughput, packet-loss ratio, packet latency, fairness index and total cell spectral efficiency.
I. Introduction
Long Term Evolution (LTE) is proposed by the Third Generation Partnership Project (3GPP) in order to provide support for high-speed data networks. The access technology in the downlink of the 3GPP LTE system is Orthogonal Frequency Division Multiple Access (OFDMA). In OFDMA, the available bandwidth is divided into groups of orthogonal and narrowband subcarriers, and subcarriers are allocated to users based on their requirements, system configuration and current system load [1].
The radio access network of LTE is composed of only a single logical node, called the evolved Node Base station (eNB). The eNB handles all Radio Resource Management, including packet scheduling. The packet scheduler is responsible for transmitting users' data packets and for efficient utilization of the available radio resources, so that users' Quality of Service (QoS) can be maintained [2]. In order to satisfy users' QoS, different packet scheduling algorithms have been developed for different traffic types. In this paper the performance of the PF, MLWDF and EXP/PF algorithms [3][4][5] has been studied.
In the aforementioned algorithms, each connection between the user and the eNB is assigned a priority, and the connection with the highest priority is scheduled first at each scheduling interval.
This paper is organized as follows. Section II describes the LTE downlink system model. Packet scheduling algorithms are described in more detail in Section III; simulation environment is presented in Section IV. Simulation results are shown in Section V, and finally Section VI concludes the paper.
II. Downlink 3GPP LTE system model
In downlink of the 3GPP LTE system, the minimum unit of resource that is allocated to the user called Resource Block (RB). The RB is defined in both time and frequency domains. In the frequency domain it comprises 12 consecutive subcarriers, with the bandwidth of each subcarrier is 15 kHz (i.e. total bandwidth of the RB is 180 KHz), while in the frequency domain it is made up of one time slot which lasts for 0.5 ms duration. A time slot is 7 OFDM symbols [6].
Packet scheduling and all RRM functionalities are conducted at the eNB. In this study an eNB processing 10 MHz bandwidth with inter-cell interference is modeled. The process of packet scheduling is conducted every 1 ms interval, or called Transmission Time Interval (TTI), and each user is allocated two consecutive RBs. In the uplink direction, users report their instantaneous downlink channel conditions on each RB (i.e. Signal-to-Noise-Ratio, SNR) to the serving eNB at each TTI. And the reported SNR values are used to determine the downlink data rate of each user in each scheduling interval (i.e. number of bits per two consecutive RBs) [7].
The method proposed in [8] can be used to calculate the number of bits per symbol, b_{i,j}(t), for user i at time t on the subcarriers of RB j. The user's data rate during a scheduling interval can then be calculated using Equation (1):

R_i(t) = b_{i,j}(t) · N_sym · N_slot · N_sc    (1)

where N_sym is the number of symbols per slot, N_slot is the number of slots per TTI and N_sc is the number of subcarriers per RB. The SNR values and the data rates associated with these values are given in Table 1.
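As a numerical check of Equation (1), here is a minimal sketch (plain Python; the function name and the 16-QAM example value are ours) under the parameters given in the text: 7 symbols per slot, 2 slots per TTI (1 ms), 12 subcarriers per RB, and 2 consecutive RBs per user.

```python
def user_rate_bps(bits_per_symbol, n_rb=2, sym_per_slot=7,
                  slots_per_tti=2, sc_per_rb=12, tti_s=1e-3):
    """Per-user data rate over one TTI, in bits per second (Eq. (1))."""
    bits_per_tti = (bits_per_symbol * sym_per_slot * slots_per_tti
                    * sc_per_rb * n_rb)
    return bits_per_tti / tti_s

# 16-QAM carries 4 bits/symbol: 4 * 7 * 2 * 12 * 2 = 1344 bits per TTI,
# i.e. 1.344 Mbps over the user's two RBs.
print(user_rate_bps(bits_per_symbol=4))  # -> 1344000.0
```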
Each active user has a buffer at the eNB acting as a packet container. Arriving packets are time-stamped at the buffer and are transmitted to users on a First-in-First-out (FIFO) basis. At each TTI, the packet scheduler (located at the eNB) determines user priorities based on the configured scheduling algorithm. Different algorithms use different scheduling criteria (e.g., Head-of-Line (HOL) packet delay, service type, channel condition, buffer status) when making scheduling decisions. Once the user with the highest priority has been selected for transmission, one or more resources are allocated to that user.
III. Packet Scheduling Algorithms
The purpose of the packet scheduling algorithms is to maintain the QoS and fairness demands of each user along with effective utilization of the available radio resources [9]. The packet scheduling algorithms considered in this paper were developed for single-carrier wireless systems: the Proportional Fair (PF) algorithm, the Maximum-Largest Weighted Delay First (MLWDF) algorithm and the Exponential/Proportional Fair (EXP/PF) algorithm.
A. Proportional Fair (PF) Algorithm
The PF algorithm was developed to support Non Real Time (NRT) services in the Code Division Multiple Access-High Data Rate (CDMA-HDR) system [2]. It provides a trade-off between total system throughput and fairness among users, taking into account both the past data rate and the experienced channel conditions when assigning radio resources. The PF algorithm allocates resources to the user who maximizes the metric

k_i(t) = d_i(t) / R_i(t)    (2)

where the average data rate is updated as

R_i(t) = (1 − 1/t_c) · R_i(t−1) + (1/t_c) · d_i(t−1)    (3)

Here d_i(t) is the achievable data rate of user i at time t, R_i(t) is the average data rate of user i at time t, and t_c is the size of the update window, which enables the PF algorithm to balance the throughput and fairness of each user; d_i(t−1) = 0 if user i was not selected for transmission at time t−1.
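A minimal sketch of one PF scheduling decision, assuming NumPy (the function name, window size and example numbers are illustrative, not from the paper):

```python
import numpy as np

def pf_schedule(d, R_avg, t_c=1000.0):
    """One PF scheduling decision.

    d: achievable rates d_i(t) of all users this TTI.
    R_avg: running average rates R_i(t); updated in place per Eq. (3).
    Returns the index of the scheduled user.
    """
    k = np.argmax(d / R_avg)        # Eq. (2): maximize d_i / R_i
    served = np.zeros_like(d)
    served[k] = d[k]                # d_i = 0 for users not selected
    R_avg *= (1.0 - 1.0 / t_c)      # Eq. (3): update the moving average
    R_avg += served / t_c
    return k

d = np.array([1.2e6, 0.8e6, 2.0e6])  # achievable rates this TTI, bps
R = np.array([1.0e6, 0.5e6, 2.5e6])  # past average rates, bps
print(pf_schedule(d, R))             # -> 1, best rate-to-average ratio
```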
B. Maximum Largest Weighted Delay First (M-LWDF) Algorithm
The M-LWDF algorithm was developed to support multiple real-time data users in the CDMA-HDR system [10]. It considers channel variations when allocating radio resources and, additionally, in the case of video traffic, it considers time delay; thus, it can serve users with different QoS requirements. In M-LWDF, radio resources are granted to the user who maximizes the metric

k_i(t) = a_i · W_i(t) · d_i(t) / R_i(t)    (4)

where

a_i = −log(δ_i) / τ_i    (5)

Here W_i(t) is the HOL packet delay of user i at time t (i.e., the time difference between the current time and the arrival time of the packet), d_i(t) is the achievable data rate of user i at time t, R_i(t) is the average data rate of user i at time t, τ_i is the delay threshold of user i's packets, and δ_i is the maximum probability for the HOL packet delay of user i to exceed that threshold.
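A minimal sketch of the M-LWDF decision, assuming NumPy; the delay budget and violation probability below are illustrative, not values from the paper:

```python
import numpy as np

def mlwdf_schedule(d, R_avg, hol_delay, tau, delta):
    """Pick the user maximizing a_i * W_i(t) * d_i(t) / R_i(t) (Eq. (4))."""
    a = -np.log(delta) / tau                    # Eq. (5)
    return np.argmax(a * hol_delay * d / R_avg)

d = np.array([1.2e6, 0.8e6, 2.0e6])             # achievable rates, bps
R = np.array([1.0e6, 0.5e6, 2.5e6])             # average rates, bps
w = np.array([0.020, 0.090, 0.010])             # HOL delays, seconds
tau = np.full(3, 0.1)                           # 100 ms delay budget
delta = np.full(3, 0.05)                        # 5% violation probability
print(mlwdf_schedule(d, R, w, tau, delta))      # -> 1, closest to its deadline
```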
C. Exponential/Proportional Fair (EXP/PF) Algorithm
The EXP/PF algorithm was proposed for multimedia applications in Adaptive Modulation and Coding and Time Division Multiplexing (AMC/TDM) systems. It is used when different types of service (NRT and RT) coexist. Resources are allocated to RT users based on the metric

k_i(t) = exp( (a_i · W_i(t) − aW) / (1 + √aW) ) · d_i(t) / R_i(t)    (6)

where aW is the average of a_i · W_i(t) over the active RT users, and to NRT users based on

k_i(t) = ( w(t) / M(t) ) · d_i(t) / R_i(t)    (7)

Here W_i(t) is the HOL packet delay of the RT service, M(t) is the average number of packets at the eNB's buffer at time t, and w(t) is a weight adjusted iteratively, in steps controlled by the constants k and ε, according to whether the maximum HOL packet delay of the RT service exceeds a fraction of τ_max, the maximum delay of RT service users. In this way the EXP/PF algorithm prioritizes RT traffic users over NRT traffic users when their HOL delays approach the delay deadline.
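A minimal sketch of the RT branch of the EXP/PF metric as reconstructed in Equation (6), assuming NumPy; all numbers are illustrative:

```python
import numpy as np

def exp_pf_rt_metric(d, R_avg, hol_delay, a):
    """EXP/PF priority of RT users: the exponential term boosts a user
    whose weighted HOL delay is far above the RT-average weighted delay."""
    aw = a * hol_delay
    aw_bar = aw.mean()                          # average over active RT users
    return np.exp((aw - aw_bar) / (1.0 + np.sqrt(aw_bar))) * d / R_avg

a = -np.log(0.05) / 0.1                          # as in the M-LWDF sketch
d = np.array([1.2e6, 0.8e6, 2.0e6])              # achievable rates, bps
R = np.array([1.0e6, 0.5e6, 2.5e6])              # average rates, bps
w = np.array([0.020, 0.090, 0.010])              # HOL delays, seconds
print(np.argmax(exp_pf_rt_metric(d, R, w, a)))   # -> 1, the most delayed user
```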
IV. Simulation Environment
In this paper a simulator called LTE-Sim is used to perform the entire simulation [11]. A single cell with a radius of 1 km and inter-cell interference is modeled. Half of the users have VoIP flows and the rest have video traffic. Users are uniformly distributed within the cell and move constantly at a speed of 3 km/h. The implemented propagation loss model includes path loss, penetration loss, multi-path loss and shadow fading [12]. The performance of the aforementioned algorithms is judged based on packet throughput, Packet Loss Ratio (PLR), packet latency (delay), fairness index and cell spectral efficiency. Fairness among users is computed using Jain's method [13]; a minimal sketch of this index is given at the end of this section. The entire set of system simulation parameters is shown in Table 2.

V. Simulation Results

Fig. 2 shows the average throughput per video flow. As the cell is loaded with more users, the average throughput per video flow decreases for all scheduling algorithms. When the number of users exceeds 20, the PF algorithm suffers a sharp decrease in average throughput, while the MLWDF and EXP/PF algorithms show only a small decline in average throughput per flow.

Fig. 3 shows that the PLR of all algorithms is less than 1% when the number of users in the cell is 20. When the cell is loaded with more users, the PLR increases rapidly for the PF and EXP/PF algorithms, with slightly lower growth for the MLWDF algorithm.

Fig. 4 shows the video delay. While the number of users in the cell is less than 40, all algorithms perform similarly. When the number of users exceeds 40, the packet delay of video flows increases sharply for PF and remains roughly constant for the other algorithms.

As shown in Fig. 5, the fairness index of all simulated algorithms is close to 0.5 when the number of users in the cell is less than 30. Beyond 30 users, the fairness index of PF drops to 0.35, while for the other algorithms it stays around 0.4.

The average throughput per VoIP flow is shown in Fig. 6. It is essentially the same for all scheduling algorithms, remaining between 3450 bps and 3600 bps.

The PLR of the VoIP flows is shown in Fig. 7. When the number of users in the cell is less than 40, there is no considerable difference in PLR between the three algorithms. Beyond 40 users, the PF algorithm shows a sharp increase in PLR compared to the other algorithms: at 80 users its PLR is 3%, versus 1.5% for the EXP/PF algorithm and 0.5% for the MLWDF algorithm.

The packet delay of VoIP flows is shown in Fig. 8. As the number of users increases, users experience longer latency. With more than 30 users, the packet delay of VoIP flows grows faster under the PF algorithm than under the MLWDF or EXP/PF algorithms. At 80 users, the VoIP packet delay is 0.25 s under PF, but less than 0.05 s under MLWDF or EXP/PF.

The fairness index for VoIP flows is almost the same for all simulated algorithms, at around 0.5, as shown in Fig. 9.

Finally, Fig. 10 shows the total cell spectral efficiency. Generally, the total cell spectral efficiency increases with the number of users up to a certain point and then levels off.
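As referenced above, Jain's fairness index [13] over per-user throughputs is easy to compute; a minimal sketch assuming NumPy (the throughput values are illustrative):

```python
import numpy as np

def jain_index(throughputs):
    """Jain's fairness index: 1.0 for a perfectly equal allocation,
    approaching 1/n when a single user takes everything."""
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * np.square(x).sum())

print(jain_index([3600, 3550, 3450]))  # ~0.9997: near-equal VoIP throughputs
print(jain_index([3600, 400, 100]))    # ~0.43: strongly unfair allocation
```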
VI. Conclusion
This paper investigated the performance of three packet scheduling algorithms developed for single-carrier wireless systems: the PF, EXP/PF and MLWDF algorithms. The performance of these algorithms was tested from the RT perspective, using five metrics: packet throughput, PLR, packet delay, fairness index and total cell spectral efficiency. Simulation results indicated that the MLWDF algorithm outperforms the other algorithms in terms of these metrics when used for RT traffic.
Audiovisual Biofeedback-Based Trunk Stabilization Training Using a Pressure Biofeedback System in Stroke Patients: A Randomized, Single-Blinded Study
The purpose of this study was to assess the effects of audiovisual biofeedback-based trunk stabilization training using a pressure biofeedback system (PBS) in stroke patients. Forty-three chronic stroke patients, who had experienced a stroke more than 6 months ago and were able to sit and walk independently, participated in this study. The subjects were randomly allocated to an experimental group (n = 21) or a control group (n = 22). The experimental group participated in audiovisual biofeedback-based trunk stabilization training for 50 minutes/day, 5 days/week, for 6 weeks. The control group underwent trunk stabilization training without any biofeedback. The primary outcome of this study was the thickness of the trunk muscles. The secondary outcomes included static sitting balance ability and dynamic sitting balance ability. The thickness of the trunk muscles, static sitting balance ability, and dynamic sitting balance ability were significantly improved in the experimental group compared to the control group (p < 0.05). The present study showed that trunk stabilization training using a PBS had a positive effect on the contracted ratio of trunk muscles and balance ability. By providing audiovisual feedback, the PBS enables accurate and effective training of the trunk muscles, and it is an effective method for trunk stabilization.
Introduction
In stroke patients, the normal muscle stiffness is lost, muscle strength is impaired, and postural control becomes difficult due to asymmetry [1]. Balance disabilities lead to an increased risk of falls and also affect activities of daily living (ADL) [2,3]. Therefore, improving balance is one of the major goals of rehabilitation in patients with stroke-induced hemiplegia. Although numerous studies have been conducted on improvement in balance [4], a large number of stroke patients continue to have difficulties in these areas.
Postural balance involves control of individual components of the musculoskeletal system, which is achieved by cerebellar integration of information from the vestibular organs and the visual and proprioceptive information [5]. Of these, impaired proprioception and lack of appropriate control of muscle contraction, which are sequelae of brain damage, are the primary concerns in stroke patients [6]. The limb asymmetry makes it difficult for the patient to achieve trunk control [7,8]. Instability and impaired trunk control lead to problems in sitting balance [2,9]. Therefore, in order to maintain postural balance, trunk control and stabilization need to be prioritized [10,11]. Trunk control helps maintain balance by regulating the shifting of body weight during postural changes on various surfaces [9]. Stabilizing the body proximally is important for efficient movement of the limbs [7].
When aiming to improve postural balance clinically, the focus has been on pelvic movements and trunk stability. Trunk stability training helps to control trunk movements by synergistically activating the postural muscles, namely, the abdominal and multifidus muscles, through pelvic and abdominal training [12,13]. Additionally, previous research has shown that strengthening the transversus abdominis (TrA) provides stability to the sacroiliac joints and is therefore important for improving trunk stability [14]. However, during such training, subjects have demonstrated compensatory movement patterns using muscles other than the target muscles. Moreover, difficulty in recognizing the use of trunk muscles during training is another factor that interferes with trunk training [15,16].
The trunk muscles are divided into deep muscles and global muscles. The TrA and internal oblique (IO) are deep muscles that contribute to trunk stabilization, while the external oblique (EO) and rectus abdominis (RA) are global muscles that contribute to dynamic movements [12,13].
Recently, several studies have reported that TrA training is effective for trunk stabilization, and training for facilitating isolated TrA contraction has been reported in low back pain patients [17,18]. The abdominal drawing-in maneuver (ADIM) is often used for this purpose [19]. Real-time ultrasound imaging (RTUI) is used during training for a more precise recognition of muscle contraction techniques [20].
Several studies have recently attempted trunk stabilization training in stroke patients, including trunk control training through proprioceptive exercise and tasks, weight-shift training, and visual and auditory feedback training [21][22][23][24]. However, although the majority of these studies found that trunk stabilization training affected trunk performance, the effects are still not clear, and there have been no specific reports on the effect of trunk stabilization training on the trunk muscles in stroke patients.
Recently, an exercise method that utilizes a pressure biofeedback unit (PBU) was introduced; this method promotes symmetrical contraction of the trunk muscles to effectively train the patient for isolated TrA contraction [25]. A PBU involves placing an air pocket between the patient's lower back and a hard surface and using a pressure meter; the extent of movement is verified in real time. This method is used frequently in stabilization of the back or neck. The feedback from the PBU has been shown to be effective in improving trunk stability in low back pain patients by promoting recognition of the correct contraction techniques [26].
Hence, the present study aimed to verify these effects in stroke patients by educating them in the precise exercise methods for isolated TrA contraction using RTUI and applying audiovisual biofeedback-based trunk stabilization training using a PBS.
Subjects.
The subjects in this study were all individuals diagnosed with stroke and admitted to "D" rehabilitation hospital as inpatients in South Korea. The inclusion criteria were as follows: hemiplegic patients who had been diagnosed with stroke at least 6 months ago; patients who had experienced only 1 stroke; patients who scored at least 24 points on the Mini-Mental State Examination; patients capable of unassisted sitting for at least 10 minutes; patients capable of gait for a distance of at least 10 m independently, with or without assistive tools; and patients with a Brunnstrom motor recovery stage of at least 4. The exclusion criteria were as follows: patients participating in another experiment that could affect this study; patients with visual or auditory abnormalities such as vestibular disease, cerebellar disease, unilateral neglect, or apraxia; patients with brain abnormalities outside of the stroke region such as the cerebellum or brainstem; patients with a surgical condition such as a lower limb fracture or peripheral nerve damage; patients with severe renal, musculoskeletal, or cardiovascular disease that would impair training; and patients with visual disability, loss of visual field, or auditory disability. Prior to the study, the aims and procedures of the study were explained to all participants, who signed the research participation consent form of their own free will. The entire study procedure was approved in advance by the Institutional Review Board of the University of Sahmyook.
Sample Size.
This study used a randomized, single-blinded design. To determine the sample size, the G-Power 3.19 software was used [27]. To calculate the sample size, the probability of alpha error and the power were set at 0.05 and 0.8, respectively. In addition, the effect size was set at 0.92, based on the trunk ability results in a prior pilot test. Therefore, a sample size of 20 patients per group was necessary. By estimating a dropout rate of about 15%, 23 participants per group needed to be recruited for randomization.
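The same calculation can be reproduced outside G*Power; a minimal sketch assuming the statsmodels package (a two-sided independent-samples t-test with the inputs stated above):

```python
from statsmodels.stats.power import TTestIndPower

# Effect size d = 0.92, alpha = 0.05, power = 0.8, two-sided test.
n_per_group = TTestIndPower().solve_power(effect_size=0.92,
                                          alpha=0.05, power=0.8)
print(n_per_group)  # ~19.6 -> round up to 20 subjects per group
# Allowing for a ~15% dropout rate: 20 * 1.15 = 23 recruited per group.
```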
Procedure.
Among 52 hospitalized stroke patients, 46 patients met the inclusion criteria, and these were randomly allocated to an experimental group or a control group of 23 patients each. Random allocation software was used to minimize selection bias [28]. The experimental group used a PBU and performed audiovisual biofeedback-based trunk stabilization training for 50 minutes/session, 5 sessions/week, for 6 weeks. The control group performed identical trunk stabilization training, but without the PBU. The changes in trunk muscle thickness, static sitting balance ability, and dynamic sitting balance ability were assessed before and after the training. The tests were performed by trained assessors, who were blinded to the subjects' groups. Subjects who became unable to participate in the program during the study due to a change in medical status, or who were unable to undergo the posttraining tests, were excluded from the final analysis. In the experimental group, statistical analysis was conducted on 21 patients, excluding 2 who were unable to participate in the posttraining tests, and in the control group, the final analysis was conducted on 22 patients, excluding 1 patient who was unable to participate in the posttraining tests (Figure 1).
ADIM Education.
Prior to the training, all subjects in both groups underwent ADIM education. The education was provided by a skilled assessor. RTUI was used to educate the subjects in isolated TrA contraction, without contraction of the EO. With the patient in the supine hook-lying position, ultrasound gel was applied to the region of measurement, and the middle of the probe was placed 2.5 cm anterior to the mid-axillary line, at the midpoint between the 12th rib and the iliac crest. During the measurement, the patient was instructed to slowly and gently pull the lower abdomen below the navel in. The patient was instructed not to move the upper abdomen, back, or pelvis and to focus on the monitor during the movement. The patients were educated until they were capable of performing an isolated TrA contraction [29].
Audiovisual Biofeedback-Based Trunk Stabilization Training with a PBS.
The patients assumed the supine position with the knees raised (supine hook-lying position). A pillow was used to maintain a neutral cervical spine, and the patient was instructed to release the tension in the neck, which was checked via the sternocleidomastoid muscles. Three PBSs (Achievo CST, V2U Healthcare, Pte., Ltd., Singapore) were used to provide audiovisual biofeedback-based trunk stabilization training. Consisting of an inflatable cushion, a computer system, and a monitor, the PBS detects pressure changes, and when the pressure falls outside a certain range, a red light appears on the monitor and a warning sound is heard.
The monitor was placed in the direction of the patient's gaze, so that the patient could look comfortably at the monitor during the exercise. A stabilizer was placed below the anterior curvature of the low back, and the lower part of the stabilizer was aligned with the posterior superior iliac spine. Once the patient adopted the correct posture for the exercise, the pressure of the stabilizer was set to 40 mmHg, and the exercise range was selected. The acceptable pressure range started at 20% and decreased by 5% for each stage. The stabilizer pressure was maintained at 40 mmHg, so that the patient would perform the ADIM [29,30]. If the patient was unable to maintain the proper ADIM, and the pressure exceeded the acceptable range, a red light was seen on the monitor and a warning sound was heard.
To stabilize the trunk, 4 stages of the sliding movement were performed, with the stabilizer pressure maintained. During the sliding exercise, the patient fully extends the bent knees and then returns to the original position. The first stage is semisliding, where the feet remain on the ground, and the patient only performs the exercise through half the total range. The second stage is ball sliding, where the full range of sliding is performed, but the patient's feet are placed on top of a ball to make the action easier. The third stage is sliding, where the patient's feet remain on the ground, and the patient fully extends the knees before returning to the original position. The fourth stage is raised sliding, where sliding is performed with the feet lifted slightly off the ground [31][32][33] (Figure 2). Each movement was performed as a set of 10 repetitions [34]. The movements were performed gradually according to individual ability, with the patients advancing to the next stage when they achieved a success rate of at least 90%. Even those who were able to perform the latter stages of the exercise had to start with the first stage of the exercise. Patients were instructed to breathe normally during the exercise, which was monitored, and in the event of breathing difficulties, the patient was allowed to rest before resuming the exercise. The therapist provided assistance to those who required support on the affected side. Care was taken to avoid unnecessary hypertonus in other areas during the exercise. The control group performed the same movements as described above but without any biofeedback.
Measurements.
The outcomes were measured by assessors who were blinded to subjects' group placement before intervention and after completing the 6-week training. The primary outcome was thickness of the trunk muscles. Secondary outcome measures were used to estimate the clinical relevance of the primary outcome results. Static sitting balance ability and dynamic sitting balance ability were assessed for the subjects in each group.
Ultrasonography equipment (Achievo CST, V2U Healthcare, Pte., Ltd., Singapore) was used to measure the thickness of the trunk muscles, with a 5 MHz convex transducer. With the patient in the supine hook-lying position, ultrasound gel was applied to the measurement area, and the transducer was placed 2.5 cm anterior to the mid-axillary line on the right side of the trunk, at the midpoint between the 12th rib and the iliac crest. Measurements were performed on the unaffected and affected sides during contraction and relaxation. To measure the thickness during contraction, the patient adopted the ADIM position after being educated on it. The patient was instructed to pull the lower abdomen back towards the spine in the final 2/3 of the normal exhalation phase [29,30]. Each measurement was repeated 3 times. On the ultrasound imaging screen, the thickness of the TrA, IO, and EO was measured by drawing a vertical line to a point 2.5 cm from the myofascial junction of the TrA and the thoracolumbar fascia [35]. The average of the 3 measurements was used in the final analysis. After measuring trunk muscle thickness, this study compared the symmetric ratio, calculated as unaffected side/affected side, and the contracted ratio, calculated as contraction/rest.
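The two ratios defined above reduce to simple arithmetic on the averaged thickness measurements; a minimal sketch in plain Python (the millimetre values are illustrative, not study data):

```python
def mean_of_three(m1, m2, m3):
    """Average of the three repeated ultrasound measurements."""
    return (m1 + m2 + m3) / 3.0

def symmetric_ratio(unaffected_mm, affected_mm):
    return unaffected_mm / affected_mm   # unaffected side / affected side

def contracted_ratio(contracted_mm, resting_mm):
    return contracted_mm / resting_mm    # contraction / rest

tra_rest = mean_of_three(3.1, 3.0, 3.2)  # TrA thickness at rest, mm
tra_adim = mean_of_three(4.4, 4.6, 4.5)  # TrA thickness during ADIM, mm
print(round(contracted_ratio(tra_adim, tra_rest), 2))  # -> 1.45
```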
To evaluate static sitting balance, the Good Balance System (GB300; Metitur Ltd., Finland) was used. The system consists of an equilateral triangular force platform connected to a computer using a 3-channel amplifier with an A/D converter. The sampling frequency used was 50 Hz. This equipment is widely available and is used to assess balance in elderly individuals as well as stroke patients [36]. The Good Balance System measures the medial-lateral and anterior-posterior sway speed and the velocity moment in the sitting position in stroke patients. The intrarater reliability of the Good Balance System was reported as intraclass correlation coefficients (ICC) of 0.51-0.74 (anterior-posterior speed) and 0.63-0.83 (right-left speed) [37]. To assess static balance, the patients sat on a high chair with the feet not contacting the floor. The patients were asked to look at a point (10 cm diameter) at a distance of 1 m in front of them for 30 s, while their balance was measured. This test was repeated 3 times. The same procedure was repeated with the patients' eyes closed. For the data analysis, the average values were recorded.
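The sway-speed measures mentioned above can be derived from the platform's center-of-pressure (COP) trace; the Good Balance System's exact internal definitions are not given here, so the sketch below (assuming NumPy) uses the common mean-absolute-velocity convention at the stated 50 Hz sampling rate, with synthetic COP data:

```python
import numpy as np

def sway_speeds(cop_x_mm, cop_y_mm, fs_hz=50.0):
    """Mean ML (x) and AP (y) sway speed in mm/s over one trial."""
    vx = np.abs(np.diff(cop_x_mm)) * fs_hz
    vy = np.abs(np.diff(cop_y_mm)) * fs_hz
    return vx.mean(), vy.mean()

rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / 50.0)               # one 30 s trial at 50 Hz
x = np.cumsum(rng.normal(0, 0.05, t.size))   # synthetic ML COP drift, mm
y = np.cumsum(rng.normal(0, 0.08, t.size))   # synthetic AP COP drift, mm
print(sway_speeds(x, y))                     # (ml_speed, ap_speed) in mm/s
```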
Dynamic balance in the sitting position was assessed using the modified functional reach test (MFRT). A stick ruler was set at the patient's acromial height and fixed on the wall, with the patient seated comfortably on a stool; the ruler was used to measure distance during the test. The patient's hips and knees were flexed to 90°, with the chair and the popliteal area 5 cm apart and the feet in contact with the ground. For anterior measurements, the shoulder was flexed to 90° with the elbow fully extended, and the subject moved the upper extremities and trunk as far forward as possible. The distance from the starting position to the ending position of the middle fingertip was measured using the stick ruler. For lateral measurements on the unaffected side, the shoulder was abducted to 90° with the elbow fully extended, and the subject moved the upper extremities and trunk towards the unaffected side through the maximum range possible. The distance from the starting position to the ending position of the middle fingertip was again measured using the stick ruler. All evaluations were repeated 3 times, and the average values were recorded. The interrater reliability of this test was reported as ICC = 0.97, indicating excellent reliability [38].
Data Analysis.
Descriptive statistics were used to summarize the baseline characteristics. The Shapiro-Wilk test was used to test the variables for normality. The Chi-square test was used for comparison of categorical dependent variables between the groups. The independent t-test was used for comparison of the changes in trunk muscle thickness and balance ability values between the experimental and control groups. Comparisons between pre- and posttreatment data within each group were analyzed using a paired t-test. SPSS version 19.0 for Windows was used to perform all analyses, and p values < 0.05 were regarded as significant.
Results
The general characteristics of the 43 subjects with chronic stroke who fulfilled the inclusion criteria for the study are shown in Table 1. No significant differences in general characteristics and dependent variables were observed between the experimental and control groups.
Results for the primary outcomes are shown in Table 2. Regarding changes in the thickness of the trunk muscles, the contracted ratio of the TrA in the experimental group was significantly increased after the intervention (p < 0.05), whereas the control group displayed no significant difference. After training, the contracted ratios of the IO in both the experimental and control groups were significantly increased (p < 0.05), with no significant improvement in the experimental group compared with the control group. The contracted ratio of the EO in the control group was significantly increased (p < 0.05), whereas the experimental group displayed no significant difference. In addition, after the 6-week training, the symmetric ratios of all muscles in both the experimental and control groups were not significantly increased.
Results for the secondary outcomes are shown in Table 3. Regarding changes in static sitting balance ability, the medial-lateral sway speed, anterior-posterior sway speed, and velocity moment in both the experimental and control groups, regardless of vision condition, displayed significant improvement after the intervention. In addition, the improvement was significantly better in the experimental group than in the control group (p < 0.05).
In the MFRT, the reaching distances with the forward, affected-side, and unaffected-side movements in both the experimental and control groups were significantly increased after the intervention (p < 0.05). In addition, the training resulted in significantly larger improvement in all three variables in the experimental group than in the control group (p < 0.05).
Discussion
The effects of audiovisual trunk stabilization training in patients with conditions such as low back pain and stroke have received considerable attention, and several studies have been conducted on this topic. As most studies have focused on the trunk-stabilizing effects of strengthening the trunk muscles, there is still a lack of studies demonstrating the effects of trunk stabilization on functional activity [21][22][23][24]. Therefore, the primary aim of the present study was to verify the effects of 6 weeks of audiovisual biofeedback-based trunk stabilization training using a PBU on the trunk muscles of stroke patients. The secondary aim of this study was to verify the carryover effect of the training on static sitting balance and dynamic sitting balance.
Proximal stability must be achieved prior to distal movement, whereas functional activity and activation of the trunk muscles are essential preconditions for spinal stabilization during exercise [7]. The trunk muscles are categorized into local and global muscles [39]. Global muscles are located near the surface and include the EO and RA [40]. These muscles provide strength for gross movements of the trunk and not only move the spine, but also enable shifting of loads between the chest and pelvis [12,41]. The local muscles are located deeper and include the multifidus, TrA, and IO; these provide stability to the lumbosacral spine [13].
Karatas et al. [2] reported weakness of trunk muscles in stroke patients compared to elderly individuals without stroke. Dickstein et al. [15] evaluated the trunk muscles in stroke patients and elderly individuals using electromyography and reported that the trunk muscles in stroke patients showed delayed contraction on the affected side compared to the trunk muscles in elderly individuals, and that symmetrical contraction of the trunk muscles was also significantly impaired in stroke patients. Moreover, according to the results of previous studies, while healthy adults show activation of the TrA prior to movement, subjects with impaired trunk stability, such as patients with low back pain, had delayed TrA activation, and trunk stabilization training to strengthen the multifidus and TrA was reported to contribute considerably to lumbar stabilization [18,42,43].
The aim of trunk stabilization training is to improve trunk stability by strengthening the deep muscles and promoting synergistic action [12,13]. A large number of studies have used reeducation of muscle control and muscle performance to achieve trunk stabilization, and these studies describe 3 stages of segmental control [44]. In the first stage, feedback is provided to stimulate and activate the local muscles. Feedback methods include palpation, EMG, and RTUI, and these methods aim to increase use of the local muscles and suppress use of the global muscles. In the second stage, the aim is to improve motor control and movements using closed chain exercises. This stage involves gradual weight loading while maintaining co-contraction of the local muscles. The third stage uses open chain exercises and aims to train the patient to maintain local segmental control while performing functional activities.
From a biomechanical perspective, the present study aimed to stimulate and activate local muscles using RTUI and visual biofeedback and used a compound method combining closed chain and open chain exercises using sliding motion. In a study by Lee et al. [45], palpation feedback was used to investigate activation of local muscles. This method is frequently used for trunk stabilization training in clinical practice; however, selective contraction of the deep muscles without biofeedback seems to be difficult. Previously, studies have been conducted using RTUI or pressure feedback to overcome this difficulty. Pressure feedback was used in patients with lower back pain to facilitate independent contraction of the TrA, and it was found to be effective for stabilization of the sacroiliac joint [7,14]. RTUI has been reported to be more accurate and more effective than pressure feedback [44,46,47]; Seo et al. [48] therefore used RTUI in stroke patients to effectively implement trunk stabilization training. Using RTUI may be effective; however, it has the following disadvantages: it requires expensive equipment; patients experience some discomfort when the ultrasound transducer is placed against their skin; and patients have difficulty interpreting the ultrasound images. Therefore, the present study used a PBU to provide audiovisual feedback. It is thought that a PBU could be used easily in clinical practice.
In the present study, ultrasound was used to measure changes in the thickness of the trunk muscles following training, and trunk stabilization training was found to be effective. After training, the experimental group showed a significant improvement in the contraction ratio of the TrA, at 28% on the affected side and 11% on the unaffected side, and the IO, at 4% on the affected side and 6% on the unaffected side. Conversely, the control group showed a significant change in the contraction ratio of the IO, at 4% on the affected side and 7% on the unaffected side, and the EO, at 8% on the affected side and 11% on the unaffected side.
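The ratios reported above can be made concrete with a short sketch; the formulas used here (contracted thickness divided by resting thickness for the contracted ratio, and the ratio between sides for the symmetric ratio) are common ultrasound-imaging conventions assumed for illustration, as the paper's exact operational definitions sit in its Methods section.

def contracted_ratio(resting_mm, contracted_mm):
    # thickness during contraction relative to rest; > 1 means the muscle thickened
    return contracted_mm / resting_mm

def symmetric_ratio(affected, unaffected):
    # values close to 1 indicate symmetrical activation between sides
    return affected / unaffected

# hypothetical TrA thicknesses (mm), not the study's data
print(contracted_ratio(3.0, 4.1))                   # about 1.37
print(symmetric_ratio(contracted_ratio(3.0, 4.1),
                      contracted_ratio(3.2, 4.5)))  # about 0.97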
Vasseljen and Fladmark [20] applied the ADIM using RTUI in patients with lower back pain and reported an increase of 3% in the thickness of the TrA. Seo et al. [48] applied trunk stabilization exercises using a PBU in chronic stroke patients and reported results similar to those of the present study, with an improvement of 17% and 15% in the thickness of the TrA during contraction on the affected and unaffected sides, respectively.
The trunk stabilization training and feedback used in the present study promoted isolated contraction of the deep muscle, the TrA, and improved trunk stability with strengthening of the TrA. Compared to the control group, the experimental group subjects were thought to have achieved more effective learning of selective TrA contraction, because they were provided with audiovisual feedback. In addition, the effect of selective training with feedback combined with trunk stability training in the present study is thought to have activated the tonic stabilizing muscle, the TrA, by facilitating co-contraction in a multidimensional manner. Hodges and Richardson [43] also reported that motor control, achieved by combining functional movements with PBU training, is more effective at promoting activation of local muscles.
In the experimental group, because subjects were given feedback to help maintain a neutral pelvic position during exercise, lumbopelvic motion was restricted, and the TrA could be activated more than the other abdominal muscles [49]. Meanwhile, because the control group underwent training without feedback, it was difficult to maintain the precise posture during exercise, and it is thought that these patients performed the actions with a posterior pelvic tilt. When the pelvis is tilted posteriorly, the global muscles such as the RA and EO are activated more than the muscles of the anterolateral abdomen, and this is considered to be an undesirable pattern for lumbar stabilization [50].
Although the symmetric ratio improved in both groups, there was no significant difference. This may be because although muscle activation improved on the affected side, it improved to the same extent or more on the unaffected side. Moreover, with the exercise methods used in the present study, it was not possible to selectively target the unaffected or affected side. Due to the anatomical nature of the trunk muscles, it is very difficult to perform the exercise only on one side. Therefore, improving the symmetric ratio with trunk stabilization training is expected to be difficult.
The present study was conducted under the assumption that changes in trunk muscles would affect static and dynamic sitting balance.
Stroke patients show a greater impairment of trunk proprioception with an increasing trunk reposition error [9], and improvement of proprioception in stroke patients is reported to positively affect trunk control [51]. Mudie et al. [51] applied body position awareness training in stroke patients and reported improved proprioception. Gruber and Gollhofer [52] used trunk control training on an unstable surface and found that it was very effective at increasing proprioceptive input to the neuromuscular system. Additionally, Kawato et al. [53] reported that trunk stabilization training improved postural control when correcting errors through feedback. The present study was also designed to utilize a PBU, because training with feedback was thought to improve trunk stability by providing awareness of the trunk position and improving postural control. Hence, improvement in trunk stability is thought to have affected the patients' sitting balance.
In the present study, both groups showed a significant improvement in static and dynamic sitting balance, with a greater effect in the experimental group than in the control group. Among the various factors that affect sitting balance in stroke patients, stabilization of the trunk muscles is very important. Previous studies have also shown that improving trunk stability improves sitting balance ability [21,54]. The experimental group is thought to have shown improved sitting balance because the TrA was strengthened using trunk stabilization training and feedback. The TrA provides trunk stability by acting preemptively in feed-forward postural control and in various postural changes that increase the spinal load. Conversely, the control group showed improvements in the IO and EO without any feedback, and the global muscles in this group seem to have contributed to trunk stabilization by acting as stabilizers. Combined training of the TrA and EO can be predicted to be even more effective, although this cannot be demonstrated clearly in our results; this should be confirmed by future research.
Both groups also showed improvement on the MFRT, which tests not only static balance, but also reaching with the arms, while maintaining a seated position. The experimental group showed a significant improvement of 10% in the forward direction, 13% on the unaffected side, and 18% on the affected side on the MFRT, and this improvement was greater than that shown by the control group, which showed improvements of 4%, 6%, and 9%, respectively. Lee et al. [55] applied trunk stabilization training with visual feedback in chronic stroke patients, who showed a significant improvement on the MFRT; this is consistent with the results of our study.
When a patient attempts to maintain balance in the seated position, compensatory movements of the limbs can occur to control the anterior-posterior sway. Control of medial-lateral movements is closely related to trunk control [56]. The present study showed a significant improvement in the medial-lateral direction, demonstrating that the intervention in this study was closely related to trunk control.
In the present study, as both groups performed trunk stabilization exercise, it was not possible to isolate the precise effects of the stabilization training itself. In addition, the 6-week intervention duration was not long enough to produce changes in gait.
Conflicts of Interest
The authors have no potential conflicts of interest to declare.
Authors' Contributions
Sangwoo Jung and Kyeongjin Lee contributed equally to this work as the co-first authors.
CURRICULA AND ART SIGNS AROUND A COUNTERCULTURAL EDUCATION
The article discusses how curricula that use art, with its sensitive signs, can enable a movement of thought around a countercultural education that affirms life in counterpoint to capitalist processes. It dialogues with post-fundamentalist theoretical intercessors on the concepts of cultures, curricula, teachings, and images of artistic signs in forming a people to come. Methodologically, it is a training-research carried out with teachers of a municipal school network, virtually via Google Meet, in the year 2021. It aims, therefore, to think about the strength of artistic signs in cultural and curricular inventions and compositions. In its results, the text points out some destabilizing points at which teachers, during encounters with the signs of the arts, excavate and open fissures in dogmatic thinking (thought as representation), opening gaps for the passage of inventive nomadic thought, the thought that escapes, experiments, and creates possibilities for the flow of forces. This process expands differentiation processes and thus invents new images for curricula, cultures, and schools with the force of difference.
The arrow of thought, transformed into pure force, likewise brings us closer to the pathways trodden by Deleuze and Guattari (1997, p. 18), to whom "[...] affections cross the bodies as arrows, not as war weapons"; that is, Deleuze and Guattari think this image as a poetics of forces, including, as the target of the arrow and "spiritualized" shot of thought, the culture of a people to come.
To Guattari and Rolnik (1986, p. 15), "[...] the concept of culture is deeply reactionary" because it separates social work into isolated spheres, standardized, established, and capitalized by the dominant mode of semiotization, crossed by its political realities. In his turn, Deleuze (2008) points out that, as an arrow, the assumptions of a culture work as a kind of battlefield, a theater of operations, in which capital would be in charge of economic subjection and culture of subjective subjection.
It is a merger between two regimes, two dimensions of the capitalist subjection process. However, to these authors, resistance would be intertwined with the subjection dimension. Thus, culture, as a generic activity, presupposes a negative idea of culture and a positive problematization.
Criticism is directly correlated with creation because one always thinks against a culture but always about a culture, waiting, if possible, for a culture to come. Thus, when considering the possibilities of a culture to come, the authors assume and defend a countercultural perspective and/or resistance against capitalist models. It is counterculture in the sense that it is possible to develop singular forms of subjection and/or what can be called "singularization processes" (Guattari & Rolnik, 1986, p. 17), refusing and questioning these established forms of coding, because "[...] they refuse them to build ways of sensitiveness, ways to relate with the other, creative production forms that produce a singular subjectivity. An existential singularization [...]".
In a world in which capital is the general reference of human relations, regardless of the so-called political ideologies that have become indiscernible nowadays, we can perceive the commodification and massification of the ways of dressing, eating, feeling, loving, and consuming: The capitalist order produces the forms of human relations even in their unconscious representations: how they work, how they are taught, how they love, how they have sex, how they speak, etc. It fabricates the relationship with production, with nature, with facts, with movement, with the body, with food, with the present, with the past, and with the future; summing up, it creates men's relationship with the world and with themselves (Guattari & Rolnik, 1986, p. 42).
Therefore, we see in Deleuze and Guattari that the countercultural tone passes mainly through opposition to a concept of culture as the materialization of an image of representational thought grounded on common sense and good sense.
To Guattari and Rolnik (1986), the concept of culture normally involves a sense of "cultivating the spirit", assuming a correspondence with a culture-value that determines those who have culture and those who do not, erudite culture versus popular culture and/or schooling level; a relationship with the sense of a "collective culture-soul", in which to each collective soul (peoples, ethnicities, social groups) an identity culture is attributed; and the sense of "culture-good", in which the dissemination and production of culture are the focus.
Hence, these conceptions permeate teachers' practice and education, but not only these.
There is always a certain openness to the possibilities of a concept of culture beyond common sense and subjection: a culture to come, with resistance and problematization, a concept of countercultural culture.
Deleuze, to whom culture can and should be something else, calls the recurrent concepts of culture 'grotesque images of culture', which find a privileged space in the contemporary world, as a patina of erudition or an index of depth, mirrored in "[...] the tests, the government slogans, the newspaper contests (which invite us to choose according to our taste, as long as it coincides with the taste of others)" (Deleuze, 1988, p. 171).
So, it is essential to seek forms through which culture leaves these intertwined and self-enclosed spheres, counterposing them to the concepts of culture-value, culture-group (social or ethnic), and culture-good, producing and creating projects of culture singularization that dismantle the particularities and traits of reproduction in the social-political and cultural-educational field and, within the latter, in teaching.
In this sense, we are, without a doubt, in a crisis and/or a cultural desert, defined mainly by the fact that culture has become a good to be consumed and that its clients have changed, as in television, in which the real clients are no longer the audience but the advertisers. Thus, the audience members receive the cultural goods the advertisers want, erasing any criticism against commercial advertisements (Pellejero, 2008). Nonetheless, according to Deleuze (1988 apud Pellejero, 2008), it is clear that there will always be other circuits, so that a parallel agency will regain cultural richness, similar to what Nietzsche said: someone shoots an arrow, an arrow in space or even in a period; a collective group shoots an arrow, and later it falls, and someone takes it and shoots it back somewhere else. Creation works this way: literature, cinema, and artistic signs, in general, cross over deserts and make the oasis flourish.
Many arrows have been shot by artists, researchers, teachers, children, and young people who seek to bring life to the deserts of the world, to (re)invent it and (re)populate it with more joy, resistance, invention, and, above all, more collective power, so that we can live a beautiful life with the force of difference. After all, "[...] we can only wish together", as Deleuze stated (2008, p. 14). This means escaping from the interiority of a culture through the exteriority of encounters, analyzing the connection between the movement of thought and a given culture.
Indeed, these are meetings with paintings, music, cinema, and literature, not in their cultural dimensions, but in the sense that they hide something that escapes the cultural domain, because it is only from these points of non-culture or counterculture that it is possible to go beyond a given culture (Pellejero, 2008, p. 3 - our highlight).
To enact collective wishes in teaching and to move the images of schools, cultures, and curricula, we held encounters with art signs (from cinema and literature) in conversation webs with teachers. Signs are affections that ask for a way and free the variation of life's power in school spaces.
Therefore, art signs work as triggers of thought and allow the problematization of educational and curriculum policies that "try" to belittle the life in/of schools. These meetings expand the power of joy and affirm life at its highest power level.
Commonly, in the scope of cultural and curriculum policies, in different educational contexts, subjectivation processes are activated through the multiple images circulating there. Thus, we question how, in standardized, vertical, and hierarchical relationships, we have been inhibiting creative activity and the field of possibilities in the curricula experienced in daily school life: What images, sounds, words, gestures, and smells cross the cultures and curricula that are constantly moving in the schools? What senses and meanings of curricula and culture are created with the multiple images that inhabit the school spaces-times? Can art potentialize curriculum problematization and creation?
We consider curriculum in the plural (curricula), inspired by the post-fundamentalist movement (referring to all theoretical-practical perspectives contrary to the defense of universal principles, essentialism, and a non-contingent approach; in Brazil, the theoreticians who stand out in this curriculum perspective are Marlucy Alves Paraíso, Elizabeth Macedo, Alice Alves Casimiro, Nilda Alves, and Sandra Mara Corazza, among others). Curricula, because life is woven in a web of multiple lines and threads that cannot be reduced to the propositions of national curriculum guidelines and/or Education Secretaries and/or the curriculum syllabuses established by/for schools. Curricula cannot be limited to these, as they go beyond an organizational plan. At the level of composition, the curricula are traversed by relational forces, such as school, family, school community, management bodies, the political-economic-social system, media, etc. Even though the tension of prescriptions and fundamentalist predeterminations is present, we highlight that the curricula are constituted in networks of complex actions established in an immanence field not determined a priori. Not the curriculum but the curricula! In this perspective, there is no inner sense of curriculum (a sense in itself) because its meaning is always derived from the contingency of sayings and actions that give consistency to what is experienced in school by different forces in relation.
These are curricula that are lived and boosted by collective forces and desires, entangled by encounters, collective experiences, and events. To affirm the force of encounters means understanding "[...] curricula beyond the processes of learning-teaching to the condition of something solitary, individual, personal, and in the level of awareness' interiority" (Carvalho, Silva, & Delboni, 2018, p. 814), and to rely on learning and teaching that are established in networks of affections and conversations, through composition, singularization, and differentiation processes.
In this sense, we question: in standardized, vertical, and hierarchical cultural relationships, how are we inhibiting creative activity and the field of possibilities in the curricula experienced in the school's daily life? Can art potentialize curriculum problematization and creation? Methodologically, this is a training-research1 held with teachers from the municipal system, via Google Meet, during the night period in 2021, amidst the Covid-19 pandemic. Initially, the research foresaw the participation of 75 teachers but underwent a significant change because, on the eve of the start of the training, the city of Serra/ES (Brazil), as well as other cities in the metropolitan region of Vitória, was strongly pressured by the Federal Prosecution Service to anticipate the return to in-person classes, following a hybrid model. With the return to in-person work, the teachers were faced with a schedule conflict that hindered participation in the formation activity, leading to a sudden decrease in the number of participants.
On the other hand, in this context of return to in-person work and the need to make the schedules of teachers and trainers more flexible, the formation was kept with 42 teachers enrolled, divided into three groups.
For this text, we selected only part of the material produced by a group of 14 teachers.
We should also stress our option to present some results of the research at the end, as examples of possibilities for reaching the formulated objective2. To do justice to our theoretical reference, we must say that in no way will we indicate who is speaking, not only due to the ethical issue of anonymity but mainly because we affirm the power of the collective agency of enunciation (Deleuze & Guattari, 1995).
We start from the premise that no statement occurs in the individual field, i.e., it is not a speaking being that expresses their ideas in isolation. "One individual or another, considered within a mass, has a pack subconscious that is not necessarily similar to the mass packs they partake," say Deleuze and Guattari (1995, p. 49). They go further: "Each of us is involved in such an agency, reproducing the statement when believing we speak on its behalf or speaking on its behalf when producing the statement" (Deleuze & Guattari, 1995, p. 50). Because of this, we do not rely on a text based on creating fictional names for the teachers but allow the statements to appear with their force and pure liveliness. What matters more are the forces that follow them in their statements and how they can expand in conversation with other forces. It is important to follow the force of nomadic thought, which escapes representations to think and experience the novelty, the unthinkable, and the difference in curricula. We argue that these inventive forces expand the power of a collective action that desertifies and depopulates schools in order to repopulate them.
Therefore, this article argues for the use of artistic signs in teacher education as a way to potentialize resistance and/or a movement of thought around a countercultural education that affirms life against the processes of capitalist subjection. Dialoguing with post-fundamentalist theoreticians about the concepts of cultures, curricula, and teaching as shared encounters, we debate their relationship with the formation of a people to come (in this case, elementary school teachers). We argue that in their encounters with artistic signs, the teachers destabilize dogmatic thought (thought as representation), opening gaps for a nomadic, inventive thought, which escapes, experiments, and creates openings for the flowing of pure thought, with no images, allowing the collective body to expand processes of differentiation and, thus, to invent new images for the curricula, the culture of resistance, and schools with the power of difference.
NOMADISM…
In the research trajectory, we see curricula as nomadic experimentations beyond rigid, predefined contours, without the indicative compass that guides the object; we consider what Deleuze and Guattari (1995, p. 15) call the rhizome a good translation of the multiple connections that occur in the ways of culture and curricula. As the Greeks said, the idiots were the inhabitants of themselves (Deleuze & Guattari, 1992). Within themselves, they see new possibilities of thinking and acting with much more resistance than those who are vulnerable to the varied becomings that present and offer themselves in the fog.
The conceptual characters of Deleuze and Guattari (1992), the idiot and the nomad, illustrate two possibilities. The first refers to the dogmatic closure of thinking and acting. The second, related to the nomad, refers to a creature whose territory is produced by the pathway from one point to another but that does not have, in any of these points, fixed limits, as they are points to be abandoned again by the nomadic need (Deleuze & Guattari, 1997).
However, what matters is to consider that we are all, in a way, idiots and nomads. Therefore, we should live an existence with a certain perception of when we are one more than the other and, if we believe it pertinent, change. Recreating ourselves along our own pathway gives our senses and perceptions new ways to handle what we are, where we are, and what we will do.
That is why we want to get rid of ready-made opinions. We only ask for our ideas to connect following a minimum of constant rules. The association of ideas has always meant providing us with protective rules, similarities, and causality, which allow us to give some order to ideas, to go from one to another according to an order in space and time, stopping our "fantasy" from wandering the universe (Deleuze & Guattari, 1992, p. 259).
The nomad, vulnerable to the fog and to the chaos of the relationships created when allowing encounters and, thus, enabling events, would be able to produce thoughts, not only opinions, because he would be in the vortices of the realities produced on levels that cross chaos: "[...] art, science, and philosophy - as forms of thought or creation" (Deleuze & Guattari, 1992, p. 267).
This chaotic flow holds the unlimited speed of birth and fading [...]. Imprisoned in his opinion-body-truth, the idiot, through the iron of his mirror, believes himself free and careless, not noticing that the points that delimit his thought, actions, and view are not movable, narrowed to the tiny space of the shield visor of his identity helmet (Deleuze & Guattari, 1992, p. 153).
While the nomad faces the fog and insinuates himself into it, the idiot protects himself in tradition, in the previously said, the easily assimilated, the plausible, the palpable, in the representation of reality.
The nomad learns through movement, displacement, the need to expand and leave the house-body, the body-territory3; the nomad is the one that has no points, pathways, nor land, while, obviously, having them4. If the nomad can be called 'deterritorialized' par excellence, it is exactly because reterritorialization is not done afterwards, as in the case of migrants, nor in something else, as in the case of the sedentary (in fact, the relationship of the sedentary with the land is mediated by something else: the regime of property, the State apparatus...). On the contrary, the nomad is deterritorialization itself establishing his relationship with the land, because he reterritorializes himself in deterritorialization itself (Deleuze & Guattari, 1997). The nomad allows thought by abandoning and creating territories, as the "[...] act of thinking is established in the relationship between territory and land, i.e., as the deterritorialization of the territory to the land, and the reterritorialization of the land to the territory" (Deleuze & Guattari, 1992, p. 113).
When abandoning his territory, the nomad opens himself to risk and to new creations in order to deal with himself and with the encounters he will find on his path. Hence, he seeks to create senses, perspectives, and thoughts. He tries not to give in to the opacity of the fog but wants to enable new forms of seeing, thinking, and feeling within it. He will seek ways for us to understand each other and the world and to live the best way possible, to leave and to arrive, to learn to feel, think, and see. He may try to abandon the persistent search for the grand truth and worry about the small, wavering truths of the path. The aim is to make this middle, this walk between undefined and moving points, the territory itself, also moving, and, with it, to produce approximations and relationships with others, to enable life, an intensely creative life, which allows us to think beyond the plausibility and predictability of the paths already wandered that only lead us to our walls.
Nomadism translates "[...] a pure and measureless multiplicity, the gang, the irruption of the ephemeral, and the power of metamorphosis" (Deleuze & Guattari, 1997, p. 13). The challenge is not to think whether a curriculum or a nomadic teaching has its territory or whether it crosses landscape points; more than this, it is to notice that, in the case of nomadic curriculum interventions, what is done, what exists, is there to be abandoned and changed. It exists in schools, in the layers over layers of visible interventions, an elusive, abandoned existence.
In each curriculum wall lie layers and layers of the ephemeral. This is the most evident view of the continuous process of territorialization and deterritorialization of the arts in the urban scene, a type of nomadism that, as stated by Deleuze and Guattari (1997), does not necessarily need to leave the place.
The nomad would be the one that acts by undoing the grooves of the spaces created to classify, work on, and standardize behaviors, the one that "[...] creates the desert and is created by it. He is the vector of deterritorialization" (Deleuze & Guattari, 1997, p. 53). The desert does not necessarily assume the shape of nothingness; on the contrary, it composes "a bunch of stories", galleries of memory layers on the school curriculum walls.
Stories persist in the images that resist under the overlapping layers of white between the walls, existing without their regimes of resistance, waiting to gain force, to be activated by affections as intensities.
To Deleuze and Guattari (1997, p. 78), "[...] agencies are passional, they are composed by desire. Desire is not related to a natural or spontaneous determination; there is only agencying, agencied, contrived desire." Affections are intensities. They cross and recreate speeds and flows in the grooved spaces of barriers, the fixed and segregationist frontiers of cultural processes in curricula, interposing themselves in the interstices "from the filters to the fluidity of the masses" (Deleuze & Guattari, 1997, p. 60). How does one follow the acts of non-authentic intervention in school spaces, arts, and registrations that act without having a place? How do we define this matter-movement, related to fraudulent art, "[...] this matter-energy, this matter flow, this matter in variation, which enters the agencies and leaves them"?
In this perspective, we establish a dialogue with Deleuze and Guattari, pointing out that, though art is organized as a subject, displaying a corporeity, it "neither mixes itself with the intelligible formal essence nor the sensitive, formed, perceived thingness" (Deleuze & Guattari, 1997, p. 89). When this art enters the folds between curricula and material and digital teaching, the more it presents itself in "[...] a space-time itself inaccurate". As the body cannot be reduced to an organism (Deleuze & Guattari, 1995), art is condensed from materialities, affections, expressivities, and intensities.
What founds the nomadism of art is movement, and this does not mean undoing the organism of the school curriculum but insisting on opening it to "[...] connections that suppose a whole agency, circuits, conjunctions, superpositions and limits, passages, and distributions of intensity, territories, and deterritorializations, as a land surveyor" (Deleuze & Guattari, 1996, p. 22).
In the teachers' vital effort, the wish is to overcome the "city with no windows", the landscapes erected as extensive walls. This is also the challenge that moves teachers' practice in public schools: how to delineate pathways between walls and windows, reach the between-things, and overcome curricula that reduce the lives of public-school children to "minimal existences" (Lapoujade, 2017)?
Nomad art can insist on a type of support, on topology, on images repeated to assert the clamor of war machines. This is also the adventure of a teacher who moves under the sign of a "moving science", a trip that only starts when we burn our ships, as adventure starts with a shipwreck. Deleuze and Guattari (1992, p. 253) affirm that only art conserves. Nevertheless, what does art conserve? Exactly the affects and percepts, because "[...] art wants to create a finite that reestablishes the infinite: to draw a composition plan that, in its turn, carries monuments or composed sensations, under the action of aesthetic figures".
The images of deserts5 make us think about cultural and curricular processes and all the codes, norms, rules, standards, and universalisms produced by educational policies in action, and about how we relate to the images created in schools. Displacements, concerns, problematizations: could we attribute new meanings to common school objects? Could we think about the culture and curricula to come in schools? What other unexpected looks and arrows can be shot at schools and their inhabitants?
CULTURE AND COUNTERCULTURE AND ART AND EDUCATION AND...
In an interview in 1980, Deleuze stated that "[...] contemporary culture is an offense to any thought". This provocation led a generation of young people to think about the senses and meanings of a culture. On that occasion, Deleuze declared himself countercultural by refusing any cultural reserve.
He understood that philosophy could not be defined only formally or methodologically; mainly, it needed to position itself on the horizon of a given culture. Emphasizing that we could not think the cultural dimension through the bias of subordination, dialogue, or consensus, because what lies behind this idea of consensus is always a fight between thought and stupidity, he highlights: "Always thinking oneself against culture but always about culture, if possible, hoping for a culture to come" (Pellejero, 2008, p. 2).
Criticizing the intellectuals, Deleuze (2008, p. 8) said: "I hate culture, I cannot stand it. [...] I don't believe in culture; in a way, I believe in meetings. [...] We do not meet people, but things, works".
Following Spinoza's (2007) thought, reason can be defined in two ways. First, by the effort to select and organize good encounters, i.e., the meetings of the ways that compose with us and inspire in us joyful passions (feelings that agree with reason). Second, through the perception and understanding of common notions, i.e., the relations that enter into this composition, from which other relationships (thoughts) arise and through which new feelings are experienced, this time actively (feelings that emerge from reason). Deleuze (2002) defends that good meetings increase our power to act. In this perspective, the formal possession of this power to act and know emerges with one principal aim. Thus, reason, instead of floating randomly through meetings, should aim to unite things and beings whose relations are directly composed with ours: "Thus reason seeks the sovereign good or 'our own advantage,' proprium utile, which is common to all men (V, 24-28)" (Spinoza apud Deleuze, 2002, p. 61). To Deleuze (2002), good encounters can happen with humans and non-humans that raise joyful passions and/or the passage from a passive regime of affection to an active one.
In this sense, Deleuze (2002) considered that good encounters would increase our power to act and to evaluate the relationship of thought with a given culture. He did not want to think about what was cultural in that scenario but about what escaped the cultural domain because, in this way, it was possible to go beyond a given culture. Therefore, being counterculture is important to destabilize power relations, because by transforming, changing, and desertifying the existing power networks, it would be possible to make the compossible and the incompossible in cultures and curricula emerge. Deleuze (2008), thus, affirms that encounters are not only with people but with flows and forces, similar to when we go to an exhibition seeking a painting that touches and moves us: a painting exhibition or a trip to the movies, seeking, lurking for an encounter with an idea.
To problematize and experiment differently with cultural products implies encounters; but, to do so, we need to look out for these encounters: a conversation, a painting, a movie, a drawing, a short story, all of them can take place in a fold, unfold, refold, producing a reversion that bursts the routine and allows us to break away from the pattern, in this case, the standard of curricula and teaching.
In this sense, the curricula cannot be understood in a Cartesian way, as a path to be followed, with definitions and determinations. We understand the maze from Deleuze's problematization (1991) in his book The Fold: Leibniz and the Baroque. Leibniz uses the maze to explain the concept of space. Thus, space is established as a maze with endless folds, as a city is composed of blocks, houses, buildings, rooms, and furniture. There are always folds within the folds, which constitute the spaces, as in origami, "[...] the art of paper folding" (Deleuze, 1991, p. 18). Therefore, curricula, art, and culture are a crossing of ways, paths, derivations, and bifurcations for which it is never possible to delineate only one plan or previously defined trajectory, exactly because in these webs (or folds) something will not fit and will compose other plans or possible worlds that will lead to other trajectories and folds. For this reason, Deleuze (1991, p. 17) affirms that "[...] the smallest element in the maze is the fold [...]", because we understand that the maze is multiple, as it is folded in many ways, considering that the webs that compose it change and metamorphose endless times. "That is why the unfold is never the contrary of the fold, but it is the movement that goes from some folds to others" (Deleuze, 1991, p. 140).
With this said, we can imagine multiple plans crossing each other, established in these possible worlds, created and imagined: immanent movements that allow us to fold and refold together.
In this pathway and now in the context of integrated world capitalism, we can ask: amidst the massive worldwide production of specific ways of living, dressing, and loving, disseminated by mass media and consumed by crowds, can we think about producing singular and singularizing subjectivities that escape the dominant models?
We believe we can. About this, with Guattari and Rolnik (1986), we affirm that if contemporary subjectivation is inexorably anchored in capitalist devices, this does not mean its complete imprisonment.
It is always possible to resist the present, to escape the dominant models, to appropriate oneself differently from what is daily offered by the television, the cinema, the boss, the spouse, the school, or the outdoor billboards, because this development of capitalistic subjectivity brings enormous possibilities of deviation and singularization. Summing up, it is always possible to dare to singularize oneself (Guattari & Rolnik, 1986). In this way, as Guattari (1992) taught us, nothing is given. One needs to fight for new fields of possibilities, from the understanding that subjectivity is constantly produced, inventing in daily life new ways of existence and new relationships with oneself and with the world.
ABOUT SO MANY OTHER WORDS AND THE ARROWS-FORCES OF ART SIGNS MOVING TEACHERS' PROBLEMATIZATION ABOUT CURRICULA AND TEACHING
In our studies with the teachers, we sought to induce thought in encounters with artistic signs in conversation nets because, as Deleuze (2010) explains in Proust and Signs, "[...] we only think when forced"! Thought can be violated by encountering cinematographic, artistic, and literary images, breaking down the clichés and the "truths" imagined for schools. The rupture of dogmatic thought creates openings for the flow of forces of pure thought, i.e., an imageless thought that allows the collective body to create images for the curricula, teaching, and childhood. Thus, we question: what forces us to think? What elements make cultures and curricula move? What forces fixate cultures and curricula, stopping them from entering a constant movement?
What forces can escape a culture or a given curriculum? What can make cultures and curricula shake and destabilize so as to create a culture to come, a becoming-culture? Questioning does not mean the adaptation of representations but a work of thought that questions educational and curriculum policies, constantly throwing out new and disquieting questions that, when pointing out contradictions, exercise argument and the confrontation of ideas.
In one of the meetings, the triggering element for the conversation network was Manoel de Barros's literature in the book Exercícios de ser criança. From Barros's work, the teachers were invited to problematize the current legislation, analyzing the (im)possibility of materializing a national democratic and participative base, as well as the relation and impact of the Base Nacional Comum Curricular (BNCC - National Curriculum Framework) on teachers' education and work in schools' daily lives. The teachers pointed out the difficulty of finding spaces and times to dialogue and to plan their actions collectively. They criticized the lack of public exams for teachers and the policy of hiring temporary teachers and of transfers, which hinders the establishment of an organized collective body. They questioned the large-scale evaluation policies and the demands concerning security protocols. However, they indicated some lines of escape and lifelines for schools: So, we are always trying to seek new forms to do things differently. We seek to change when we perceive that things are not working.
We have to adapt to the children's reality. To reach each one, seeking the best for their lives. Children can surprise us a lot.
We are on rotation, leaving remote teaching towards the hybrid model, which includes restrictions and security protocols, rules, and norms. Nevertheless, we must remember that the children spent a period at home and came back surprising us.
In nets of conversation and solidarity, the teachers questioned the protocols and the norms thought up without the participation of those who make the school every day. They criticized the plans proposed without students' involvement and pointed out that the events and daily relationships create fabulations and curricular inventions.
The images shown in these meetings made us reflect beyond our possibilities. As teachers, we can always dare in our teaching work, and this also makes us think about the students: how do we let them have the freedom to express themselves? Children always invite us to go beyond and call us to observe the daily details, but we often do not join their ideas, as we are stuck in the plans proposed.
As Pellejero (2008, p. 4) points out, culture exists not to be understood, nor recovered, nor inhabited, "[...] but to escape, to provoke escapes, to do something that escapes all codes: flows and noncoded elements, active and revolutionary escape lines". It is a throwing of darts, because only then is it possible to enact devices of resistance and creation.
In the meetings with the schools, we mapped a plurality of viewpoints in which the different relationships are not reduced to oppositions but to possible solutions, with different positions in dispute and processes of negotiation and differentiation. Therefore, culture no longer represents [...] the sum of objective assumptions of an image of thought that prevents us from asking what it means to think, and appears as an adventure of the involuntary, which interlinks a sensibility, a memory, and, soon, a thought, with all the acts of violence and cruelty needed to trace a new people of thinkers and give rise to the spirit (Pellejero, 2008, p. 6). Rolnik (2015) presents us with two of the multiple experiences we have in the world that subjectivity is willing to apprehend. The first is based on perception, because we live experiences by associating them with our codes and representations. This perception allows us to give meaning and establish communication and sociability. However, this perception is not the only one to conduct our existence; several other ways of apprehending the world operate simultaneously, establishing our subjectivity.
Another type of experience that subjectivity undergoes is that of the powers around us, which move the world as a living body. These forces produce effects on our bodies. Such effects consist of another way of seeing and feeling what happens in each moment. Deleuze and Guattari (1992, p. 194) call these effects percepts and affects. The percepts and affects that cross our bodies boost the constant process of recreating ourselves and our surroundings, because they trigger disquiet and destabilization. However, they cannot be represented, so they do not fit the current cultural cartography, putting it at risk. The affections "[...] overflow the power of those permeated by them".
The short films, with their arts, as in Manoel de Barros with his poetry, created blocs of sensation that allowed teachers to move their thoughts and fabulate possibilities for the schools. What percepts and affects allow displacements in cultures and curricula? What effects does the meeting with images (literary and cinematographic) provoke in the conversation nets with the teachers? Do they allow new cultural and curricular movements and the invention of incompossible worlds?6 This sharing of experiences has enriched our practice. It is good to hear teachers' reports and their daily experiences, improving the formation process. We have started to think with Manoel de Barros about the water that flows outside the sieve. As teachers, we began to give specific importance to the children's voices. We are sad to see the context of current society, so we need to value our daily work and share more about what we do.
The effect of art signs in the conversation webs with the teachers reverberates the force of collective action, creating bonds and groups, movements that touch and affect us. With Rolnik (2015), we understand the need to problematize the reactive and conservative macro- and micropolitical forces and to seek to produce cultural changes and displacements in the networks of power, affections, and subjectivities, because only in this way can we think about new ways to populate the world. Populating the world with art makes it more joyful, colorful, pulsating, and wishful, because art and culture potentialize new curricula to establish a people that is lacking. Therefore, we need to activate the displacement of the reactive micropolitics of inconsistent colonial capitalism to create a new concept of politics, which would be a micropolitical action in its active sense: "[...] a new way to decipher reality, to situate the problems, and to act from them critically" (Rolnik, 2015, p. 10).
In the formative encounters with the teachers, departing from Manoel de Barros's work, the teachers exemplified some movements that work as an active force and a countercultural resistance, such as the campaign Aqui já tem Currículo ("There is already a curriculum here"), created by the Associação Nacional de Pesquisas em Educação (Anped - National Association of Education Research) during the release of the first version of the Base Nacional Comum Curricular (BNCC - National Curriculum Framework). Teachers from all over Brazil joined the campaign and sent their experiences and curriculum compositions collectively created in their schools and local communities.
Another example of active micropolitical action debated in the conversations, from the reading of Manoel de Barros, was the movement of high schoolers when more than two hundred schools in São Paulo were occupied to protest against a plan to reorganize the state public system (Pelbart, 2016). This gesture was transformed into power and collective intelligence, because the "intolerable", such as the commodification of education, the current power relationships, and the worn-out ways of thinking about education, learning, and evaluation processes, was questioned. Thus, other possibilities were contemplated and wished for: the (un)thinkable started to be collectively imagined. Rolnik (2018) highlights that we always oscillate between active and reactive micropolitics.
Therefore, we must combat the reactive tendencies within ourselves and within our actions and relations: a life's work, an ethics of existence. To do so, we need to listen carefully to the affects and percepts responsible for destabilization, because in the action of desire lies the opening for new possibilities and for creation. In this movement, the virtual world that inhabits subjectivities updates itself.
I wanted to share what happened to me this week, which is very similar to what we are saying: I did a reinterpretation activity with my students of Alfredo Volpi's work "Boat with birds". When it was time to present, the greatest concern was to show the students' work. So, I set up a panel. The letters were not too perfect. And when I showed the panel at school, people were concerned with the perfection of the panel and not with the students' mental work. That hurt me a lot, and I was very upset. So, I think we have to rethink our way of working. A suggestion for the other girls in the formation: to enact the students' work and not perfection. There we have a little of our practice, fight, and sensibility.
In the conversation networks with the teachers, we sought to destabilize, through artistic signs, the sensory-motor arc, making the teachers fabulate new possibilities for the schools. Fabulation goes through becomings that ask for a way. Fabulating is never speaking in one's own name. It is to speak through another, in the name of minorities, of the multiple nomads that populate them and with whom one populates the world. "To fabulate is to convey the powers that the becomings will raise in us and that are devoid of language" (Lapoujade, 2017, p. 282). The effect of this policy of desire in action is a becoming of subjectivity and its immediate relational field.
When questioned, the teachers said that what mattered most in this pandemic moment for strengthening the collective were encounters, friendship, solidarity, and conversation networks. We have been relying on the possibility of following the processes of collectively creating new images of schools, created with the power of the intensive movement of the different cultures and curricula practiced and experienced in schools' daily life. Such a collective and inventive force states difference as a multiplicity and engenders singularities, exploring the power of nomadic thought, a thought in constant becoming. It is a way of thinking that seeks to fabricate new possibilities of life, other ways of existence, an aesthetics of life, an ethics, because movements and singularities do not wish for the idea of a single world (Deleuze, 2008).
The revolutionary becomings, boosted by the outbursts of affects and percepts in/of encounters with images and artistic signs, force us to problematize and reinvent reality. These are moments in which the collective imagination is triggered to create resistance and new ways of existing, seeking new alliances and new senses for cultures, curricula, and teaching. As Rolnik (2015) points out, it is not enough to accept this responsibility as a citizen; one needs to assume responsibility as a living being in order to act in the sense of an active micropolitics. This condition turns us into agents of the creation of forms of collective existence.
The short film The Other Me (École Supérieure des Métiers Artistiques, 2020) was also used as a triggering element of a meeting with the teachers. The film allowed us to think about routines, how much they can robotize us, and how subtly we follow the logic of a culture and/or a curriculum thought out by the edicts of its funding agencies, not listening to collective desires and abdicating the power of joy, so crucial for our lives. As Carvalho, Silva, and Delboni (2020) state, images promote changes in subjective forms (from within), granting singularities that, when shared, make thoughts shake. "[...] They foresee and glimpse what we can, with some effort, imagine having seen" (Samain, 2018, p. 35).
Thus, the teachers talk about their affections with the images of the film and show how much we need pauses, breaths, fabulation, and contemplation (in Guattari's sense), which means breathing together, thinking together, pulsing together, walking hand in hand, seeking recognition of what we do collectively, valuing our inventions and our choices and, mostly, sharing our differences. In this movement, we imagine the possibilities amidst the power of complex and collective action, creating ways to live with more joy and inventiveness, and renovating the images of culture, teaching, and curricula. In this sense, the teachers state: I think the short film was very interesting. It made me think about the mechanical work in/of teaching. That constant daily work, teaching the same content at the same rhythm. And then we are faced with another opportunity, another possibility.
The possibility of finding another world. I thought it was interesting that the character had the opportunity of returning, but he didn't want to. And during every moment of the trajectory, his suitcase was with him. In teaching, this is very strong: my material, my locker, that content, those activities you work on every year. We carry all of that with us and don't want to throw anything away. He left his shirt and the jacket; he left things little by little, small things. This process of gradually letting go of some things that we think we need or that we will need along the way is very interesting. His decision to leave the elevator and leave the suitcase, as if he were now free to dare and see other possibilities. This scene really touched me in a teaching sense, as sometimes we leave some habits behind and allow inventive teaching for the children by listening to them. Sometimes, I go through my locker and throw some things away. But I need more. [...] "become conditions of real experience; the artwork, on the other hand, really appears as experimentation" (Deleuze, 1974, p. 262).
Thus, in its relationship with education and in new forms of experimentation, art can become a counterculture. Therefore, culture no longer represents [...] the sum of objective assumptions of an image of thought that prevents us from asking what thinking means, and appears as an adventure of the involuntary, which unchains a sensibility, an aesthetic and, soon, a thought with all the violences and cruelties necessary to delineate a people of thinkers and give rise to the spirit (Pellejero, 2008, p. 6).
Culture does not exist to be understood, nor recovered, nor inhabited; as Pellejero (2008, p. 4) points out, "[...] but to escape it, to provoke escapes, active and revolutionary escaping lines, lines of absolute decoding that oppose culture". Therefore, in the encounters with the teachers, we witnessed the problematization of school routines, teachers' procedures, the relationship between education, culture, and art, and the need to seek other possibilities to create inventive curriculum movements. Summing up, the search for other ways for school and teaching to exist, which implies movements of deterritorialization and the agency of resistant devices, is understood as creation, because every creation is an act of resistance.
Finally, culture in school is understood not as erudition, collective soul, or goods but as an act of resistance that makes itself countercultural when questioning the bases of the social system and the current political-economical capitalism, which establishes the schooling processes.
NOT TO CONCLUDE... AS WE SEEK THE OASIS AND FLOWER, DETERRITORIALIZING DESERTS
In the work The Transparency Society, Han (2017) clearly and pessimistically shows the condition of contemporary culture, referring to the control of life and/or of other possible worlds. He argues that the current economic system needs a similitude between the social relations built by individuals and social groups, considering that neoliberalism does not work if people act differently, because from the digital social networks a mass of quantifiable data is produced that allows tendencies and reactions to be seen, resulting in algorithmic operations that equate and dominate individuals and social groups.
Thus, according to Han (2017), individuals and social groups are transformed into dividable beings, a mass that is an agglomerate of data; i.e., globalization demands the overcoming of differences between people because the more similar they are, the faster the circulation of capital, goods, and information will be. The tendency is to make everyone similar as consumers.
Hence, mass culture becomes something inevitable because the desacralization of the world guides our activities toward market value, disregarding any production that does not aim to become a standardized good to be consumed. The transparency violence described by Han as a digital panopticon reflects a pessimistic political perspective. How can we become oases and flowers, deterritorializing these curriculum deserts? Would culture, art, and education be encapsulated in the capitalist system of our globalized world? Han (2017, p. 69-70) leaves an opening when stating that "[...] total control destroys the freedom of action and leads, in the end, to uniformity and, thus, nowadays new configurations are demanded, even in public spaces shared in the cities". In this sense, we agree and defend the optimistic understanding that there are agencies that affirm life, even in the most limited existential conditions. If different ways to produce agencies coexist, we need to perceive those that enclose our vital drive, the creative impulse that allows us to escape the automatisms that conform us to bare life (Agamben, 2015).
Taken as power, life and cultural processes cannot be conceived from the establishment of an ontologically essentialized life, in identification and homogenization processes, because life always overflows. Therefore, we should not ask, "What is it?", because we would take an essentialist perspective through an identity bias. We need to break away from the questions "What is culture?", "What is art?", "What is curriculum?", "What is school?", "What is life?" and ask instead about the relations and possibilities of life and their effects: about the traces through which homogenization and standardization processes are delineated, or the traces that indicate singularization processes and the affirmation of plurality, a bare life or a life? So, not to conclude, because there is no recipe, only possibilities in relational education and in culture as counterculture, we ask: what deterritorialization lines, what lines of active and revolutionary escape have we been creating in our meetings with the artistic signs that permeate the cultures and curricula in schools? Have we been finding escaping lines amid so many policies to regulate education? Have we been creating with the teachers new and possible images for the cultures and curricula? Have we been placing cultures and curricula in movement? Have we been creating forces to resist the images of schools and curricula that asphyxiate teachers in measurement scales, indicators of performance profiles, and good conducts? Have we been transgressing these regulation forces, creating collective resistance to build a culture to come?
In the encounters with the teachers, having artistic signs as thought triggers, like arrows shot into a desert to be populated, teachers create connections and agencies that act in the collective bodies as a revolutionary, subversive force, one that creates the desire for new experiences and overflowing collective creations. Even with all the sad moments the world faces, the teachers collectively problematized the precariousness of teachers' work in this pandemic scenario, in which educational inequalities become even more visible. However, in fighting movements, they persist, affirming the sensitive life that pulses with the desire for a better world for all. They deny the imposition of a dogmatic education inspired by an arbitrary hegemonic culture, asserting a countercultural education.
Manoel de Barros's literature calls us to the absurd. I was thinking: instead of keeping this expectation that the student will get things right, will do what we planned, why not give the student a vote of confidence? Because the vote of confidence is for them to do what we want, what we think… And if we think about the possibility of creating the absurd with the children? It is very important to allow movements of the absurd [...]. To let children surprise us.
Creation and Evaluation of Software Teams - A Social Approach
This work discusses an important issue in the area of human resource management by proposing a novel model for the creation and evaluation of software teams. The model consists of several assessments, including a technical test, a quality of life test and a psychological-sociological test. Since the technical test requires particular organizational specifications and cannot be examined without reference to a specific company, only the sociological test and the quality of life test are extensively discussed in this work. Two strategies are discussed for assigning roles in a project. Initially, six software projects were selected, and after extensive analysis of the projects, two projects were chosen and corrective actions were applied. An empirical evaluation was also conducted to assess the model's effectiveness. The experimental results demonstrate that the application of the model improved the productivity of project teams.
Introduction
There are various techniques and guidelines for improving the process of building project teams. However, these guidelines should be adapted to specific environments. Generally, each member of a given team possesses a special area of expertise or natural ability that should be utilized by project managers. Accordingly, many successful organizations depend on the optimal mix of competence, trust and mutual esteem in team relationships. Human resource management is an interdisciplinary area in project management. Some project managers perceive and manage individuals as if they were modular components rather than unique team members; however, software production processes are different from other industrial production processes. During software production, many of the problems that occur are directly related to software teams and to the mutual relationships among their members. For instance, DeMarco and Lister (1999) argue that team relationships are highly relevant, and consequently, there are four elements that affect human resources: the management of human resource techniques, human resource acquisition processes, activities that improve team productivity and the office environment. According to Curtis et al. (1988), human resource selection and management is more important than technologies and tools. The IEEE vice-president suggests that in order to develop a successful project, managers should focus on understanding the project goals, appropriately handling the flow of ideas, and honing the team members' relationships (Weinberg, 1986). Overall, he maintains that the quality of products depends on software teams, where each member contributes to the quality by performing his/her part.

In general, selection processes consist of applying technical tests and interviews. However, these procedures alone do not ensure the selection of successful software teams, especially since interviews do not always properly account for all aspects of human behavior. As DeMarco and Lister (1999) explain, skill tests are usually focused on the tasks that candidates would perform at the beginning of the work. However, these tests do not necessarily guarantee the correct evaluation of each candidate during the entire project. Members of a software team often change their activities or roles during the span of the project, thus indicating that such tasks have not been adequately considered during the initial human resource acquisition process. Other viewpoints on the selection of software teams are presented by Edgemon (1995) and Pressman (2005); Edgemon proposes the following four areas: problem resolution, leader skills, reward management, and sociological behavior. There are several tests to assess the personality of individuals (Myers et al., 1985; Cattell et al., 2008; Belbin & Mead, 2010). However, none has the particularity of evaluating people both in normal situations and in stress situations, an important element in the work environments of software development. Pressman, on the other hand, promotes project management on the basis of four elements, known as the four "Ps": People, Product, Process and Project. The order of Pressman's elements is not arbitrary, as he explicitly states that people management is the most important aspect in software projects. The Project Management Institute deals with human resource management, process organization, and the management and leadership of project teams.
Accordingly, the Institute has proposed the following four processes: developing human resource plans, acquiring a project team, developing a project team, and managing a project team. There are four techniques for acquiring project teams, as described in the Guide to the Project Management Body of Knowledge (PMBOK) (Project-Management-Institute, 2004): pre-assignment, negotiation, acquisition and virtual teams. Although the PMBOK guide is one of the most accepted international standards of project management, it constitutes an abstract guide that should be adapted to specific situations and particular environments. The People Capability Maturity Model (P-CMM) defines "staffing" as one of the prime process areas at the "Managed Level" (Curtis et al., 2009), thus indicating the importance of staffing for organizations. Specifically, the purpose of staffing is to establish a process by which qualified individuals are recruited, selected, and transitioned into assignments. The "ability to perform" statements include the required definitions used in an organization's selection process and the necessary methods and procedures for individuals involved in staffing activities. Moreover, the "practices" description establishes that a selection process and appropriate selection criteria are defined for each available position. In particular, Thomsett considers a team's relationships highly relevant to a project's success (Thomsett, 1990). TSPi is a methodology that provides a defined process for developing software in teams; its aim is to provide defined process components (roles, scripts, forms and standards) (Carleton et al., 2010). However, this methodology does not show how to form a good team.

This paper proposes a model that concerns the acquisition of human resources for software teams. The main idea of this model entails the combination of technical expertise and the sociological relationships among team members. This proposal can be utilized independently of the software development methodology used or the size of the team. Section 2 describes the techniques involved in the new model and provides details of the model's acquisition algorithm. However, these techniques are not an appropriate substitute for human expertise; rather, they solely constitute a decision-making tool. Section 3 analyses the experimental results, and Section 4 presents concluding remarks.

A social model for acquiring software development teams

Technical knowledge is considered a prime requirement among software team members; however, elements pertaining to human resources also need to be considered. Specifically, these elements include sociological behavior and human relationships, technical knowledge and software team competencies, and the quality of life of software team members, as shown in Figure 1. Thus, in order to achieve the optimal balance during the software development process, the human resource selection process should guarantee equilibrium among these elements.
The model presented in this paper consists of four processes, as depicted in Figure 1:

Process 1: Open process and initialization;
Process 2: Competence evaluation process and interviews;
Process 3: Roles assignment process;
Process 4: Close process.
Open Process
Prior to the open process, team managers need to know the project objectives. Subsequently, the managers should define four milestones:

a) Create the human resources management group
b) Establish the number of work places in a human organization chart
c) Define the specific roles required for the project
d) Receive personal requests

In order to achieve the first milestone, the project manager should create a special group, the HR management group, or contact human resource management services for outsourcing. For the second and third milestones, human resource management experts should define a hierarchical organization chart. The fourth milestone consists of a voluntary request list, which requires the candidate's name, contact address, possible role, and other basic information. By the end of this step, project managers should have a list of candidates interested in the project.
Competence Evaluation Process
The proposed model recommends the application of three aptitude tests to each candidate: a technical test, a sociological test and a quality of life test. First, the technical test should be applied in conjunction with each candidate's role aspirations and should be based on the competency evaluation processes. As previously mentioned, each organization should define the required roles by considering the characteristics of the team members. The technical test should be developed according to these requirements. For example, in software production projects, the common roles include analyst, designer, architect, developer and project manager. In the technical test, however, the roles are entirely dependent on the project features. Accordingly, Brainbench (Brainbench Previsor Company, 2008) and Verio (Verio, 2008) have discussed test solutions for technical skills. The second test consists of a questionnaire for evaluating the sociological state of candidates (Aragon, 2007). This assessment provides an integrated perspective of individuals' conduct under normal conditions as well as in tense situations. Specifically, the test evaluates candidates' activity level in a group and their attitudes towards people in a work environment. As a result, project managers can utilize these tests to predict an individual's behavior prior to their assignment to a software project. There are two elements in this proposed test: a sociological questionnaire (Tables 1 and 2) and a guide for applying it (see below).
1) The Sociological Questionnaire
The questionnaires presented in Tables 1 and 2 were created for this work based on (Gomez & Acosta, 2003). Sample item pairs include: stubborn/bull-headed and absent-minded vs. an enthusiastic person who understands easily and adapts to any situation; and superficial/shallow and disloyal vs. being inconsistent to attract attention. The third group of tests consists of a questionnaire for evaluating the quality of life of candidates, as presented in Table 5.
The sociological test guide
This section presents the steps for analyzing individual personalities. Additionally, it discusses some tools for analyzing team balance in the sociological test, ensuring that software teams consist of diverse personalities that create equilibrium amongst team members and minimize discord.
The validation of the measurement scales used in this test was performed through the application of the Delphi Method to 29 experts from different organizations in the Cuban software industry, dedicated to management or human resources research. There were three rounds, in each of which the experts evaluated the different scales for the measurements. Four experts were eliminated, so in the final round there were only 25. The statistics used in the study were the mean, mode and standard deviation, which give an overview of the results obtained for each of the questions (Torres, 2011).
Step 1: Create a graph to represent an integrated view of a person's characteristics, as shown in Figure 2.

Step 2: Complete the questionnaires presented in Table 1 and Table 2. Each question has four possible answers, to which respondents should assign a value between 1 and 4; repeated values are not permitted. Higher values mean that respondents believe they possess a certain characteristic, whereas lower values indicate that respondents do not associate themselves with a particular attitude.
Step 3: Summarize the results by using Equations 1 and 2, which total the scores assigned to the answers for each behavioural category:

Z + X + W + Y = 60    (1)
z + x + w + y = 60    (2)

In these equations, the uppercase letters represent an individual's behavior under normal conditions, whereas the lowercase letters denote a person's actions in stressful situations. The resultant value for high-intensity situations provides an overall perspective of a person's behavior in tense situations, which may serve as a starting point for a subsequent analysis of an individual's conflict style. The variables Z, X, W, and Y contain the total values obtained from both questionnaires in normal situations, while the variables z, x, w, and y indicate the same values in stressful situations. These variables are explained in further detail below:

Variable Z (and z) contains the total value obtained from both questionnaires related to the respondent's behavior in supporting other people.

Variable X (and x) measures the degree to which a respondent is proactive.
Variable W (and w) assesses the respondent's behavior in decision-making.
Variable Y (and y) evaluates the degree to which a person is relaxed and agreeable.
Step 4: Define the person's activity level by following Rule 1 and using Equation 3, where the variable M contains the values related to a person's positive and proactive attitudes and the variable N denotes the person's score as it relates to passive attitudes. Accordingly, a person can be classified as an active or a passive individual based on a comparison between these variables, as stated in Rule 1.
Rule 1: IF M > N THEN Person is Active ELSE Person is Passive

Step 5: Define the person's orientation by using Rule 2 and Equation 4. Specifically, Equation 4 determines the extent to which a person is people-orientated or task-orientated: P contains the results related to a person's tendency to support other people, and R contains the score associated with the respondent's focus on task execution. The resulting variables P and R can be compared to see which is greater, and the respondent can accordingly be classified as "Oriented to persons" or "Oriented to tasks", as shown in Rule 2.

Rule 2: IF P > R THEN Person is Oriented to Persons ELSE Person is Oriented to Tasks

Step 6: Use the following rules to determine the style of each person, using the variables X, Y, Z, and W in a normal situation and x, y, z, and w in a tense situation. The variable Diff_i,j identifies the difference between two variables i and j, with i, j ∈ {X, Y, Z, W, x, y, z, w}. By analyzing the questionnaires' characteristics, it should be evident that the maximum difference between two variables is 12; thus Max Diff_i,j = 12. When the difference between i and j is equal to or greater than 80% of this maximum, it is considered a Remarkable Difference (Diff_i,j >= 10). When the difference is between 50% and 80%, it is considered a Discrete Difference (Diff_i,j >= 6 and Diff_i,j <= 9). When the difference is less than or equal to 50%, it is considered a Short Difference (Diff_i,j <= 5).

Step 7: After evaluating each individual in the preceding steps, describe the features of each worker based on the information contained in Table 3.

Step 8: Complete Table 4 by inputting the descriptive information for all team members and use this information in the process of role assignment.

As a worked example, the analysis of one respondent's answers identified an active person whose dominant feature is being a Controller. The other characteristics measured by the test were somewhat farther from this main one; the closest is Analyzer. The person's orientation is directed to tasks, an orientation typical of directors, economists and mathematicians. Being a Controller reveals that, when confronting a problem or a question, this person always believes they have the solution and looks for what is best. In stressful situations the Controller characteristics do not change; the key features remain. Using the results of Equations 1 and 2 in Step 6, it was found that this person has a Major/Minor style, meaning a Controller closely followed by a second feature, Analyzer. It can be concluded that this person is able to lead a team and take on responsibility and challenges without fear, because of a strong self-confidence.
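As a minimal sketch of Steps 4-6, the classification rules above can be expressed directly in code. The exact compositions of M, N, P, and R in Equations 3 and 4 are described only verbally in the text, so they are passed in here as precomputed totals; the thresholds follow Step 6, and the example scores at the bottom are hypothetical.

```python
def activity_level(M, N):
    """Rule 1: compare the positive/proactive score M with the passive score N."""
    return "Active" if M > N else "Passive"

def orientation(P, R):
    """Rule 2: compare the person-support score P with the task-execution score R."""
    return "Oriented to persons" if P > R else "Oriented to tasks"

def difference_class(i, j):
    """Step 6: classify the gap between two category scores (maximum difference = 12)."""
    diff = abs(i - j)
    if diff >= 10:             # equal to or greater than 80% of the maximum
        return "Remarkable Difference"
    if 6 <= diff <= 9:         # between 50% and 80%
        return "Discrete Difference"
    return "Short Difference"  # less than or equal to 50%

# Hypothetical respondent: M = 38, N = 22; P = 25, R = 35
print(activity_level(38, 22))    # Active
print(orientation(25, 35))       # Oriented to tasks
print(difference_class(14, 6))   # Discrete Difference
```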
Quality of life test
Our quality of life test is based on the Chronic Heart Failure Questionnaire proposed by Guyatt et al. (1989). For our test, the questions have been divided into two categories: Fatigue (questions 2, 4, 7 and 9) and Emotions (questions 1, 3, 5, 6, 8, 10 and 11). As shown by the questionnaire in Table 5, each question has a rating of 1 to 7, where 1 indicates a lower quality of life and 7 denotes a higher quality of life. In each category, the scores for the questions are added together, as shown in Table 6. A low overall score indicates that a person's lifestyle causes unhappiness or frustration, whereas a higher score denotes that an individual's lifestyle does not have an adverse effect on that person. Quality of life questionnaires are often used to recommend ways for people to experience more enjoyment in life (Rothstein & Goffin, 2006; ISQOLS, 1995).
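A minimal sketch of the scoring, assuming the answers arrive as a mapping from question number to rating (the category groupings are the ones given above; the example ratings are invented):

```python
FATIGUE = (2, 4, 7, 9)
EMOTIONS = (1, 3, 5, 6, 8, 10, 11)

def quality_of_life_scores(answers):
    """answers maps question number (1-11) to a rating from 1 (low) to 7 (high)."""
    fatigue = sum(answers[q] for q in FATIGUE)
    emotions = sum(answers[q] for q in EMOTIONS)
    return fatigue, emotions

# Example with hypothetical ratings for one candidate
answers = {1: 5, 2: 3, 3: 6, 4: 4, 5: 5, 6: 6, 7: 2, 8: 5, 9: 3, 10: 6, 11: 4}
print(quality_of_life_scores(answers))  # (12, 37)
```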
Roles Assignment Process
There are two strategies for assigning specific roles to each person: roles could be assigned on the basis of an expert's judgment or, alternatively, automatic tools could recommend roles. Both strategies use information generated during the process of competence evaluation; specifically, the human resources organization chart and the results of the competence evaluation process are used as inputs. Regardless of which strategy is used, the system merely suggests roles for each person rather than assigning them on the basis of the applied algorithm. Using these two strategies, the system guarantees the use of the information obtained in the competence evaluation process, and it makes the following suggestions:

1. Each person should occupy a specific role in an appropriate workplace on the basis of his/her technical evaluation.
2. The teams should contain a balance of personalities. Specifically, a good team exists when the difference among the variables X, Y, Z, and W is appropriate, indicating that the team members should have positive relations and work efficiently with one another.
3. In the first strategy, of expert judgment, the team in charge of human resource acquisition should obtain the necessary information from each individual and assign the roles to each project member.
4. In the second strategy, the use of a semi-automatic tool or software helps to assign the roles by providing a preliminary structure of the human resources organization. However, this initial structure does not constitute the final human resources organization, which depends on the human resource acquisition team.
5. The semi-automatic strategy is not a substitute for human experience. Accordingly, the results and suggestions generated by this strategy should leave room for modifications and adaptations by humans; a sketch of one possible heuristic is given below.
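The text does not spell out the semi-automatic assignment algorithm, so the following is only one possible greedy heuristic consistent with suggestions 1 and 2: fill each role with the technically strongest remaining candidate, preferring candidates whose sociological style differs from those already placed. All names, styles, and scores are illustrative.

```python
def suggest_roles(roles, candidates):
    """Greedy sketch: roles is a list of role names; each candidate is a dict
    with a 'name', per-role technical scores under 'tech', and a 'style'
    label taken from the sociological test."""
    team, used_styles, remaining = {}, set(), list(candidates)
    for role in roles:
        # prefer style diversity (suggestion 2), then the technical score (suggestion 1)
        best = max(remaining,
                   key=lambda c: (c["style"] not in used_styles, c["tech"][role]))
        team[role] = best["name"]
        used_styles.add(best["style"])
        remaining.remove(best)
    return team

candidates = [
    {"name": "Ana",  "style": "Controller", "tech": {"analyst": 7, "developer": 9}},
    {"name": "Luis", "style": "Analyzer",   "tech": {"analyst": 8, "developer": 6}},
    {"name": "Mia",  "style": "Controller", "tech": {"analyst": 6, "developer": 8}},
]
print(suggest_roles(["developer", "analyst"], candidates))
# {'developer': 'Ana', 'analyst': 'Luis'}
```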
Close Process
Finally, the close process consists of two main activities: the completion of the project staff, and the communication of the acquisition results to the stakeholders. Accordingly, we propose two reports to be generated in this process:

1. The report of human resource completion, which specifies the selected individuals and the position of each person in the project.
2. The report of the acquisition process, which includes all of the elements and aspects involved in the acquisition process.
Experimental Results
In order to verify the model's effectiveness, we selected six software development projects to which we applied the model and its tools in April 2008. We proposed changes to these projects' organizational structure, which were applied in the following year. As explained in the following sections, our conclusions are based on three statistical tests.
a) Project characteristics
All the projects analyzed came from a single software product development center. Project 1 was related to software quality management, and twelve people were working on the project. In Project 2, twenty-two people were involved in a project addressing issues of business management. Project 3 concerned an e-commerce system, and the development team consisted of ten individuals. In Project 4, twenty-two people were working on various topics related to project management software systems. Project 5 involved the development of a statistical system with a twenty-three-person team. Finally, Project 6 included twenty-two persons, and it focused on the development of a generic platform for producing dynamic reports.
Statistical tests to detect difficulties (April 2008)
We applied statistical tests to a set of variables in each project. The purpose of including these variables was to measure their status in teams that had been created without taking into account the elements suggested in this research: to assess the level of each variable, propose changes as suggested by the model, and then assess whether there had been improvements in the productivity and personal relationships of these teams.
A simple random sampling method was used to select two of the six projects, representing 33% of the original project sample. These two projects, Projects 5 and 6, were evaluated in October 2009, and the results demonstrated that the application of our model significantly improved the projects' performance. Specifically, we detected considerable differences in these projects on the basis of the following variables:

Orientation in Normal Situations and Tense Situations: Human beings change their orientation significantly depending on whether the situation is considered normal or tense. In normal situations, the orientation of team members is focused on individuals, whereas in tense situations, team members are focused more strongly on the task.

Expected Role vs. Current Role (Normal Situations): There is a remarkable difference between the expected role and the current role in normal situations.

Expected Role vs. Current Role (Tense Situations): There is a significant difference between the expected role and the current role in high-pressure situations.
Recommended changes for improving projects' performance
Our observations demonstrated that most individuals were performing tasks different from those recommended by the model. Accordingly, their roles within the team required modification, and, in order to improve project performance, we recommended the reallocation of human resources within the projects, the most convenient rearrangement of roles, and a reorganization of the projects to achieve more balanced teams.
Statistical test, checking the improvements
In order to evaluate the projects, we compared the results of Projects 5 and 6 with their previous results. Specifically, two pairs of samples, (Project-5 2008, Project-5 2009) and (Project-6 2008, Project-6 2009), were compared, focusing on the teams' balance and performance. After the application of the corrective measures, Projects 5 and 6 were observed to have balanced teams. We applied the Wilcoxon test to compare the results, which are displayed in Table 8. This table reflects the increased productivity of Projects 5 and 6 after the application of the corrective actions to the projects' teams. To estimate productivity, we took into account the efficiency (E) of the teams, taking as an indicator the number of requirements (R) divided by the time t, in months, needed to develop them, i.e., E = R/t (Oficina Nacional de Normalización, 2007). Variables such as the number of people in the teams, the characteristics of the requirements specification and the daily time utilized remained stable throughout the experiment, so they were not taken into account when calculating efficiency.
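A minimal sketch of this paired comparison, with invented requirement counts standing in for the values in Table 8; scipy's Wilcoxon signed-rank test plays the role of the test applied above:

```python
from scipy.stats import wilcoxon

def efficiency(requirements, months):
    """E = R / t: requirements developed per month, the productivity indicator."""
    return requirements / months

# Hypothetical (requirements, months) measurements for six team iterations,
# before and after the corrective actions were applied.
before = [efficiency(r, m) for r, m in [(12, 2), (10, 2), (14, 3), (11, 2), (13, 3), (12, 2)]]
after  = [efficiency(r, m) for r, m in [(16, 2), (15, 2), (18, 3), (14, 2), (17, 3), (16, 2)]]

stat, p_value = wilcoxon(before, after)
print(f"W = {stat}, p = {p_value:.3f}")  # a small p-value indicates a real change
```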
Threats to validity
There are two major threats to the model's validity:

1. The quality of the collected data depends on how the tests are applied. The organization must assure the quality of the application of the questionnaires required for the attitude and quality of life assessments.
2. The results obtained in this study could be influenced by other factors, such as the improvement of individual competence during project development.
Conclusions
The acquisition process of human resource management consists of four main activities: initialization, competence evaluation, role assignment, and communication to the stakeholders. Within the stage of competence evaluation, we have proposed three types of tests: the technical test, the quality of life test, and the psychological-sociological test. These tests form the basis for our proposed model for evaluating the quality and balance of software teams, which has been applied to real software projects. The experimental results demonstrate that the application of the model improved the productivity of project teams. We have also proposed two alternatives for the role assignment of individuals: manual techniques and automatic techniques; however, algorithms are no substitute for human experience, as their output needs to be revised by humans.
Directed random walks on polytopes with few facets
Let $P$ be a simple polytope with $n-d = 2$, where $d$ is the dimension and $n$ is the number of facets. The graph of such a polytope is also called a grid. It is known that the directed random walk along the edges of $P$ terminates after $O(\log^2 n)$ steps, if the edges are oriented in a (pseudo-)linear fashion. We prove that the same bound holds for the more general unique sink orientations.
Introduction
Our research is motivated by the simplex algorithm for linear programming. We consider the variation where the algorithm chooses at each step the next position uniformly at random from all improving neighbouring positions; this rule is commonly called Random-Edge. Its expected runtime on general linear programs can be mildly exponential; cf. Friedmann et al. [2011]. Better bounds can be hoped for if one imposes restrictions on the input. It is intuitively plausible that Random-Edge should run very fast if the number of constraints (or facets) is very small in relation to the dimension. Gärtner et al. [2001] analyzed the performance of Random-Edge on simple polytopes with n facets in dimension d = n − 2, and obtained the tight bound O(log^2 n). It is natural to ask to what extent this bound depends on the geometry of the problem. To this end we consider the setting where the notion of 'improving' is specified not by a linear objective function, but by a unique sink orientation, which is a more general object with a simple combinatorial definition.
Unique sink orientations have been studied in numerous contexts; see e.g. Szabó and Welzl [2001], Gärtner and Schurr [2006], and Gärtner et al. [2008]. They are defined as follows. A sink in a directed graph is a vertex without any outgoing edges. Now, an orientation of the edges of a polytope is a unique sink orientation if every non-empty face of the polytope has a unique sink. The definition is motivated by the fact that every linear orientation (the orientation obtained from a generic linear objective function) is a unique sink orientation (but the converse does not hold).
The purpose of this note is to prove the following theorem.
Theorem 1. Let n − d = 2. Let P be a simple d-dimensional polytope with n facets, endowed with a unique sink orientation. A directed random walk on P, starting at an arbitrary vertex, arrives at the sink after an expected number of O(log^2 n) steps.
Figure 1: The graph of a polytope (a prism) with n = 5 facets and dimension d = 3, which is a grid. Every vertex is identified by a pair {x_i, y_j}. The arrows give an example of a unique sink orientation.
To be perfectly clear, by a directed random walk on a given directed graph we mean the following process: We begin in some given vertex v_0; we choose one edge uniformly at random from the set of outgoing edges at v_0; we move to the other endpoint and call it v_1; then we continue in the same fashion until we possibly arrive at a sink.
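For intuition, this process is easy to simulate. The sketch below covers only the special case of a linear-type unique sink orientation on a grid, obtained by orienting every edge towards the endpoint with the smaller random potential (so every face's unique sink is its vertex of minimum potential); general unique sink orientations form a strictly larger class, so this illustrates the walk rather than verifying theorem 1.

```python
import random

def random_walk_steps(nx, ny, seed=None):
    """Run Random-Edge on an nx-by-ny grid oriented by a random potential.

    Every vertex (x, y) gets a distinct random value; edges point towards the
    smaller value, which yields an acyclic orientation in which every sub-grid
    has a unique sink (its minimum).  Returns the number of steps until the
    walk reaches the global sink."""
    rng = random.Random(seed)
    pot = {(x, y): rng.random() for x in range(nx) for y in range(ny)}
    v = (rng.randrange(nx), rng.randrange(ny))   # arbitrary starting vertex
    steps = 0
    while True:
        x, y = v
        outs = ([(x2, y) for x2 in range(nx) if pot[(x2, y)] < pot[v]]
                + [(x, y2) for y2 in range(ny) if pot[(x, y2)] < pot[v]])
        if not outs:          # no outgoing edge: this is the unique sink
            return steps
        v = rng.choice(outs)  # Random-Edge: uniform among outgoing edges
        steps += 1

# average number of steps over a few runs on a 50-by-50 grid (n = 100 facets)
runs = [random_walk_steps(50, 50, seed=s) for s in range(200)]
print(sum(runs) / len(runs))
```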
For the more restrictive Holt-Klee (or pseudo-linear) orientations, the bound in theorem 1 has been proved by Tschirschnitz [2003]. The general structure of our proof is very similar to Tschirschnitz'; the only notable deviation will be the proof of lemma 4.
In order to fix some notation, let H be the set of the n halfspaces that define the polytope P , and let V be the set of vertices. We will identify every vertex v ∈ V with the set {h ∈ H : v lies in the interior of h}, so that V becomes a subset of the powerset of H.
The following fact appeared as lemma 2.1 in Felsner et al. [2005], and may also be seen as a consequence of the existence of Gale diagrams: Assuming that P is a simple polytope with n − d = 2, there exists a partition H = X ∪ Y such that, under the identification of vertices with subsets of H described above, we have V = { {x, y} : x ∈ X, y ∈ Y }. Practically speaking, we can thus refer to each vertex by its X-coordinate and Y-coordinate. Furthermore, a set of vertices forms a face if and only if it is of the form V ∩ 2^{H'} for some H' ⊆ H. In particular, two vertices v, v' are adjacent if and only if the sets v, v' are not disjoint.
The graph of the polytope P is thus isomorphic to the product of two complete graphs, as in fig. 1. Such a graph is also called a grid. Those readers who are not used to unique sink orientations might at this point want to check that the orientation shown in the figure is indeed a unique sink orientation. Note in particular that every row or column in fig. 1 is also a face of P , the edges of such a face constitute a complete graph, and they must be oriented in an acyclic fashion.
We write u → v for a directed edge from a vertex u to a vertex v. A non-empty directed path from u to v is denoted by u →+ v. The outmap Φ : V → 2^H specifies the outgoing edges at each vertex: Φ(v) = { h ∈ H \ v : v → v', where v' is the unique neighbour of v with h ∈ v' }. We also abbreviate Φ_X(v) = Φ(v) ∩ X and Φ_Y(v) = Φ(v) ∩ Y. The pair (|Φ_X(v)|, |Φ_Y(v)|) is known as the refined out-degree. From Felsner et al. [2005], Lemma 3.1, we know that for every pair of indices (i, j) with 0 ≤ i ≤ |X| − 1 and 0 ≤ j ≤ |Y| − 1 there exists a unique vertex with refined out-degree (i, j). We use this property to define the following 'milestones' for our random walk.
For i = 1, . . . , L, let the milestone w_i be the unique vertex with refined out-degree (2^{i−1}, 2^{i−1}), where L is the largest index for which such a vertex exists; the number of milestones will then be L + 1. (The indices are chosen in such a way that the vertex w_i exists and has exactly 2^i outgoing edges.) Furthermore we define w_0 as the unique sink of P; in other words, w_0 is the unique vertex with refined out-degree (0, 0). Now define W_i as the set consisting of w_i together with the vertices to which there is a non-empty directed path from this vertex, i.e., W_i = {w_i} ∪ { v ∈ V : w_i →+ v }. The sets W_i serve as a measure of progress: Starting from a vertex in the set W_{i+1}, the next 'milestone' is hit when the random walk arrives for the first time in a vertex of W_i; and once the random walk arrives in some W_i, it stays therein. Note that the indices are counting down: The random walk arrives in the global sink as soon as the milestone W_0 = {w_0} is hit. Our goal is now to prove the following propositions.
Proposition 1. The expected time until the random walk, starting from a vertex in W_{i+1}, arrives in W_i, is bounded by O(log n) (i = 0, . . . , L − 1).

Proposition 2. The expected time until the random walk, starting from an arbitrary position, arrives in W_L, is also bounded by O(log n).
The bound O(log^2 n) in theorem 1 will follow by observing that there are L + 1 = O(log n) many milestones, each of which is hit, according to the two propositions, after at most O(log n) steps in expectation.
The technical statements of the following lemma will be useful for deducing the orientation of some edges incident to the current position. The statements are known or follow readily from known results.

Lemma 1.
(a)–(d) [...]
(e) Every unique sink orientation of P is acyclic.
Proof of proposition 1
We write v_0, v_1, . . . for the positions of the random walk, where we consider the starting position v_0 as a fixed element of V (not a random variable). Let i be chosen such that v_0 ∈ W_{i+1}, and let T denote the hitting time of the set W_i: T = min{ k ≥ 0 : v_k ∈ W_i }. We want to bound the expected time until the random walk arrives in W_i; in other words, we want to bound E T.
One way to look at the directed random walk is as follows: At time k + 1 it picks a pivot h_{k+1} uniformly at random from the set Φ(v_k). This pivot determines the edge along which to move away from the vertex v_k. Concretely, the next position v_{k+1} is the unique neighbour of v_k with h_{k+1} ∈ v_{k+1}. Note that the pivot h_{k+1} is only defined in this way when the position v_k is not already the global sink; so if v_k = w_0 is the global sink then we let h_{k+1} = ♦, where ♦ is just a formal symbol to remind us that the random walk has already terminated.
We define some auxiliary stopping times. Let σ denote the first time that an element of the set Φ(w_{i+1}) is pivoted. Furthermore let τ_1 < τ_2 < . . . be the instants in time when an element of the set Φ(w_{i+1}) ∪ Φ(w_i) is pivoted, and let τ_N be the first among these instants when the random walk hits the set W_i.
More precisely, we let σ = min{ k ≥ 1 : h_k ∈ Φ(w_{i+1}) } and N = min{ j ≥ 1 : v_{τ_j} ∈ W_i }. We suppress the dependence on i in the notation, considering i (the index of the next milestone) fixed throughout the section.
Lemma 2. The random set {h_1, . . . , h_{σ−1}} is always either a subset of X or a subset of Y. As a consequence, the vertices v_0, . . . , v_{σ−1} share either their Y-coordinate or their X-coordinate.
Proof. Assume we encounter the event that, say, h_2 ∈ X and h_3 ∈ Y, where 3 < σ. Then v_3 = {h_2, h_3}. By definition of our stopping time σ, none of the pivots considered here are elements of Φ(w_{i+1}); hence we have v_3 ∩ Φ(w_{i+1}) = ∅. On the other hand, by our choice of i we have v_3 ∈ W_{i+1} and thus v_3 ∩ Φ(w_{i+1}) ≠ ∅ by lemma 1(b); a contradiction.
Lemma 3. We have E σ ≤ H_n + 1, where H_n denotes the nth harmonic number.
Proof. By lemma 2, either the pivots h_1, . . . , h_{σ−1} are all elements of X, or they are all elements of Y. Thus it suffices to bound the expected time until, say, the pivot is not an element of Y. In terms of fig. 1, this means bounding the expected time until the random walk leaves the current row of the grid. This expectation only becomes larger if we condition on the event that the random walk stays in the current row until it reaches the sink of the row. Reaching the sink of the row can be shown to take at most H_n steps in expectation; we still have to add 1 for the possible additional step that leaves the current row.
Proof. We have T ≤ τ_N; so we will concentrate our efforts on bounding the expectation of τ_N. In the following we will need to make the starting position v_0 of the random walk explicit in the notation; we will do so by writing the starting position as a subscript, as in E_{v_0} τ_N. Using the Markov property we find, for all j ≥ 1, E(τ_j − τ_{j−1} | N ≥ j) ≤ E σ ≤ H_n + 1, where the last step used lemma 3. Hence, applying lemma 5 (appendix) to the sequence (τ_j − τ_{j−1})_{1 ≤ j ≤ N}, we obtain E τ_N ≤ (H_n + 1) · E N. It remains to show that the number E N can be bounded from above by the constant 155. To this end we consider the following events.
Claims. For any choice of starting position we have

(i) Pr(E_1) ≥ 1/5,
(ii) Pr(E_2 | E_1) ≥ 1/5,
(iii) Pr(E_3 | E_1 ∩ E_2) ≥ 1/5,

where E_1 denotes the event that h_{τ_1} ∈ Φ(w_i) or the next milestone has already been hit at time τ_1; E_2 the analogous event at time τ_2, with h_{τ_2} lying in the coordinate class of Φ(w_i) not used by h_{τ_1}; and E_3 the event that v_{τ_3} ∈ W_i.

Note that the event E_3 is equivalent to the event N ≤ 3 (or in other words, the event that the next milestone is hit no later than at time τ_3). Thus, once these claims are proved, we can conclude that E N can be bounded from above by the expected number of steps that it takes a Bernoulli process with parameter p = 1/5 to hit 3 successive successes. By theorem 2 in the appendix this number is, as desired, (1 − p^3)/(p^3(1 − p)) = 155.

Proofs of the claims.
(i) In order to show (i), it suffices to show Pr(E_1 | v_{τ_1} ∉ W_i) ≥ 1/5. Let us thus assume v_{τ_1} ∉ W_i, which means that the random walk has not yet hit the next milestone at time τ_1. In particular, the random walk has not yet terminated at time τ_1. We want to show that now the event h_{τ_1} ∈ Φ(w_i) happens with probability at least 1/5. By definition of the random walk, the kth pivot h_k is chosen uniformly at random from the set of violating constraints, Φ(v_{k−1}). This is true for any time k at which the random walk has not yet terminated; now we consider k = τ_1: By definition of τ_1, the pivot h_{τ_1} is then chosen (still uniformly at random) only from the smaller set S = Φ(v_{τ_1−1}) ∩ (Φ(w_{i+1}) ∪ Φ(w_i)). Since the next milestone has not yet been hit, we know that v_{τ_1−1} ∉ W_i holds, which is equivalent to writing v_{τ_1−1} ⊄ Φ(w_i); that is, either the X-coordinate or the Y-coordinate of v_{τ_1−1} lies outside Φ(w_i). In both these cases we see that S contains a subset of Φ(w_i) of size 2^{i−1}. On the other hand, |S| ≤ 5 · 2^{i−1}, which proves the claim.

(ii) Similarly to how we proceeded in the proof of (i), it suffices to show Pr(E_2 | E_1 and v_{τ_2} ∉ W_i) ≥ 1/5. So we assume E_1 and v_{τ_2} ∉ W_i, and without loss of generality we assume h_{τ_1} ∈ Φ_X(w_i). We want to show that now h_{τ_2} ∈ Φ_Y(w_i) happens with probability at least 1/5. This time the pivot h_{τ_2} is chosen uniformly at random from the set S' = Φ(v_{τ_2−1}) ∩ (Φ(w_{i+1}) ∪ Φ(w_i)). As before, |S'| ≤ 5 · 2^{i−1}. Also as before, either S' ⊇ Φ_X(w_i) or S' ⊇ Φ_Y(w_i) must hold. Now, however, we can observe that the latter alternative must be true (implying the claim because |Φ_Y(w_i)| = 2^{i−1}), as follows: It suffices to show that h_{τ_1} ∉ Φ(v_{τ_2−1}). (Indeed, this implies S' ⊉ Φ_X(w_i), which leaves us only with the other alternative, S' ⊇ Φ_Y(w_i).) We know from lemma 2 that the vertices v_{τ_1} and v_{τ_2−1} share either their X-coordinate or their Y-coordinate. If they share their X-coordinate, so that h_{τ_1} ∈ v_{τ_2−1}, then clearly h_{τ_1} ∉ Φ(v_{τ_2−1}). If on the other hand our two vertices share their Y-coordinate but not their X-coordinate, let us consider the grid edge between our two vertices: Since there is a walk from v_{τ_1} to v_{τ_2−1}, the edge cannot be directed from v_{τ_2−1} to v_{τ_1}, by acyclicity (lemma 1(e)). Note furthermore that we have v_{τ_1} = {h_{τ_1}, y}, where y is the shared Y-coordinate. Hence the non-existence of a directed edge from v_{τ_2−1} to v_{τ_1} translates into saying that we have h_{τ_1} ∉ Φ(v_{τ_2−1}), as desired.
(iii) We assume that E_1 and E_2 occur, and we can also assume h_{τ_1} ∈ Φ_X(w_i), h_{τ_2} ∈ Φ_Y(w_i), without loss of generality. We can also assume that the next milestone has not already been hit before the time τ_3 (i.e., we assume v_{τ_3−1} ∉ W_i). Now we want to show that with probability at least 1/5 we have v_{τ_3} ∈ W_i. Analogously to the proof of (ii) we note that h_{τ_3} is chosen uniformly at random from a set of cardinality at most 5 · 2^{i−1}, and Φ_X(w_i) is a subset of this set, with cardinality 2^{i−1}. This shows already that, with probability at least 1/5, we have h_{τ_3} ∈ Φ_X(w_i). If we can show in addition that the Y-coordinate of v_{τ_3−1} is an element of Φ_Y(w_i), then the claim will follow (because then, with probability at least 1/5, we have v_{τ_3} ⊆ Φ(w_i) and hence v_{τ_3} ∈ W_i, cf. lemma 1(a)).
By lemma 2 we know that v_{τ_2} and v_{τ_3−1} share either their X-coordinate or their Y-coordinate. If they share their Y-coordinate then we are done, because then h_{τ_2} ∈ v_{τ_3−1} ∩ Φ_Y(w_i). Assume now that they share their X-coordinate but not their Y-coordinate; we will now examine the X- and Y-coordinates of v_{τ_3−1} separately, and show that neither of them can be an element of the set Φ(w_{i+1}), which by lemma 1(b) yields a contradiction to the fact that we have v_{τ_3−1} ∈ W_{i+1}.
- The Y-coordinate of v_{τ_3−1} is given by h_{τ_3−1}, which by definition of our stopping times is not an element of Φ(w_{i+1}).
- Let x denote the X-coordinate of v_{τ_3−1}. This x is also the X-coordinate of v_{τ_2}, and also that of v_{τ_2−1}. Suppose we had x ∈ Φ(w_{i+1}): Then by definition of our stopping times x cannot be the pivot h_{τ_2−1}; hence v_{τ_2−1} = {x, h_{τ_2−1}} and h_{τ_2−1} ∈ Y. By lemma 2 we obtain that {h_{τ_1+1}, . . . , h_{τ_2−1}} is a subset of Y, so that as a consequence v_{τ_1} and v_{τ_2−1} share their X-coordinate. But the X-coordinate of v_{τ_1} is h_{τ_1}; thus we have found x = h_{τ_1} ∈ Φ(w_i) and v_{τ_2} = {x, h_{τ_2}} ⊆ Φ(w_i). By lemma 1(a) we obtain v_{τ_2} ∈ W_i. So the next milestone has already been hit by the time τ_2: a contradiction.
This concludes this section, establishing that the expected time going from one milestone to the next is bounded by O(log n) (proposition 1). A very similar argumentation can be used to also bound by O(log n) the expected time until the initial milestone is hit, yielding proposition 2. Theorem 1 now follows by observing that there are O(log n) many milestones.
Conclusion
In this note we have shown that the performance of Random-Edge on simple d-polytopes with d + 2 facets does not suffer if the improving directions are specified by an arbitrary unique sink orientation.
The exact performance of Random-Edge on simple d-polytopes with d + k facets, where k ≥ 3 is considered constant, remains an open problem. The question is open even for k = 3 and with the improving directions specified by a linear objective function.
Appendix
The lemma and the theorem below, both of elementary nature, are used in section 3.
Lemma 5. Let X_1, X_2, . . . be non-negative random variables, let N be a random variable with values in N_0 ∪ {∞}, and let X = Σ_{j=1}^N X_j. Assume that there is M > 0 such that for all j, E(X_j | N ≥ j) ≤ M. Then E X ≤ M · E N.

Proof. Without loss of generality we can assume that E(X_j | j > N) vanishes for all j. (If this is not the case then we can just replace each X_j by the random variable that equals X_j when j ≤ N, and equals 0 otherwise.) Now we can write the expectation of each individual variable X_j as E X_j = E(X_j | N ≥ j) Pr(N ≥ j) ≤ M Pr(N ≥ j) and conclude using monotone convergence: E X = Σ_{j≥1} E X_j ≤ M Σ_{j≥1} Pr(N ≥ j) = M · E N.

Theorem 2. For p ∈ (0, 1) and n ∈ N_0, let τ(p, n) denote the first time that, during a Bernoulli process with parameter p, one has encountered n successive successes. Its expectation is given by E τ(p, n) = (1 − p^n)/(p^n (1 − p)).
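As a quick sanity check of theorem 2 (and of the constant 155 used in section 3), the expectation satisfies the standard recurrence E τ(p, k) = (E τ(p, k−1) + 1)/p with E τ(p, 0) = 0, which the closed form solves; a few lines of code confirm the numbers:

```python
# Iterate the recurrence E[tau(p, k)] = (E[tau(p, k-1)] + 1) / p, which the
# closed form (1 - p**n) / (p**n * (1 - p)) from theorem 2 solves exactly.
def expected_runs(p, n):
    e = 0.0
    for _ in range(n):
        e = (e + 1.0) / p
    return e

def closed_form(p, n):
    return (1 - p**n) / (p**n * (1 - p))

assert abs(expected_runs(0.2, 3) - 155.0) < 1e-9     # the constant in section 3
assert abs(closed_form(0.2, 3) - expected_runs(0.2, 3)) < 1e-9
```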
Prioritization of health emergency research and disaster preparedness
The spontaneous nature of health emergencies and disasters (HED) requires research prioritization and preparedness from multidisciplinary sectors, as with the current coronavirus disease 2019 (COVID-19) pandemic that has become a center of attention for the research community globally. This study aims at assessing global research evolution, precedence, and preparedness toward combating the COVID-19 pandemic via systematic analysis of published studies. We retrieved COVID-19 studies from the Scopus and Web of Science databases from January 01, 2020, to March 23, 2020, according to the PRISMA guidelines, using the search term "COVID-19 OR coronavir*". The dataset was analyzed for productivity indices, conceptual frameworks (CFs), disciplines, and collaboration networks (CNs). Results revealed a total of 817 studies on COVID-19. The top two productive researchers were Wang Y. (3.55%) and Li Y. (2.94%). Among disciplines, virology (n = 40, h-index 5), microbiology (n = 27, h-index 2), immunology (n = 22), and infectious diseases (n = 21) were at the forefront. China (n = 181) and the United States (n = 69) ranked as the first and second most productive nations, respectively. Country CNs in COVID-19 can be clustered into four subnetworks. Also, four thematic areas evolved in COVID-19 research for the period, namely, epidemiologic studies of infectious bronchitis virus including coronavirus, elucidation of historical respiratory viral outbreaks, zoonoses and phylogenetic analysis, and influenza zoonosis; the prevailing CFs of research prioritization ranged from comparative symptomatology of severe acute respiratory syndrome coronavirus (SARS-CoV)-2 and Middle East respiratory syndrome coronavirus (MERS-CoV), perceptivity studies from SARS-CoV-1,2 outbreaks, and antigenic structural studies for vaccine production to antibody therapeutic target studies. In conclusion, COVID-19 research has received progressive attention since the beginning of the pandemic; however, this study recommends that integrative and multidisciplinary research priority and preparation should be channelled toward HED from all experimental and nonexperimental bases of knowledge.
Introduction
The current coronavirus disease 2019 (COVID-19) outbreak that suddenly spread from China to the other parts of the world has become a center of attention for the research world, scientists, government agencies, nongovernmental organizations, and individuals. Large behavioral, health, and state measures were undertaken to alleviate the outbreak and prevent the virus from persisting in the human population in China and around the world. However, the efforts to mitigate or reduce the spatial distribution of the virus have become a mirage. How these unprecedented interventions, including travel restrictions, have affected the spread of COVID-19 in our world remains unclear and yet to be investigated.

1.1 Methods
COVID-19 data sources
We are interested in information/research evolution and the response to a health emergency, using COVID-19 as an ongoing global pandemic. For this purpose, we retrieved COVID-19-related documents from the Scopus database and the Web of Science (WoS) core collections from January 01, 2020, to March 23, 2020 (19:39:27 GMT+2), according to a modified version of the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)" method [5]. No exclusion criteria were considered, in order to map the various information and research responses to the COVID-19 pandemic from vast research landscapes. Both databases were searched using the term "COVID-19 OR coronavir*".
Data analytics
Data retrieved from the two databases were combined, de-duplicated, and normalized for bibliometric attributes such as authors' names, affiliations (institutions, country), article source, and keyword synonymic forms. The normalized data were analyzed for descriptive performance indices/rates in terms of documents, authors, productivity, journal source, institution, country, total citations, intellectual domain, and collaboration index. We mapped the most cited documents related to the COVID-19 pandemic and identified the conceptual framework through multidimensional scaling analysis of author-keyword co-occurrences [3].
Evaluation of collaboration network in health emergency research
Collaboration networking during health emergencies is necessary for various reasons: for instance, to pool inadequate resources together to achieve desired results, for intellectual and knowledge sharing, and for the transfer of technical know-how and skills required to halt health emergencies such as outbreaks and pandemics. For this study, we assessed collaboration in the efforts channelled toward combating the ongoing COVID-19 pandemic author-, institution-, and country-wise. In all cases, the network has a simple bipartite form typical of authors × articles, institutions × articles, and countries × articles. Mathematically, Network = C × C^T, where Network is a symmetrical matrix and C is a bipartite network matrix. The nodes of the network represent authors/institutions/countries, and the associated curves the basis/means of collaboration among them. The network visual presentation was according to the Jaccard's similarity index normalized Fruchterman force-directed layout [6].
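A minimal sketch of this computation, with a toy country-by-article incidence matrix (the country labels and entries are illustrative only, not the study's data):

```python
import numpy as np

# Bipartite matrix C: rows are countries, columns are articles;
# C[i, j] = 1 if country i appears on article j.
countries = ["China", "USA", "UK"]
C = np.array([
    [1, 1, 0, 1],   # China
    [1, 0, 1, 1],   # USA
    [0, 1, 1, 0],   # UK
])

# Collaboration network: Network = C C^T.  The off-diagonal entry (i, j)
# counts the articles co-authored by countries i and j; the diagonal holds
# each country's total article count.  The result is symmetric by construction.
network = C @ C.T
print(network)
# [[3 2 1]
#  [2 3 1]
#  [1 1 2]]
```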
Analytic platforms
The open-source analytic platforms R and Python were employed in this study. Data analysis was performed through the integrated usage of the bibliometrix package in RStudio v.3.6.2, the ScientoPy package, and Excel 2016 [7,8].
Results
This research systematically analyzed the distribution of the articles, stratified by geography, organization, journals, relevant sources, and more. This study also analyzed keyword frequency and then used bibliometric mapping methods to illustrate research trends and evolution on COVID-19. Results were examined to better clarify the field's structure, research hotspots, and trends in COVID-19 studies, and the need for prioritization of health emergency research and disaster preparedness. This study also provides information on the most influential themes and keywords for developing related themes of research on COVID-19. Furthermore, discipline-based COVID-19 research and information is presented in Table 24.4. Among all the disciplines evaluated, virology and microbiology ranked first and second, with about 40 (h-index 5) and 27 (h-index 2) published articles, respectively, while immunology, infectious diseases, veterinary sciences, general and internal medicine, and pharmacology and pharmacy ranked third, fourth, fifth, sixth, and seventh with about 22 (h-index 2), 21 (h-index 3), 15 (h-index 1), 14 (h-index 6), and 11 (h-index 1) articles, respectively. These are the most influential or most productive fields in COVID-19-related studies between January and March 2020.
The contribution of various nations toward research aimed at this pandemic, based on published articles, is investigated in this study. The global distribution of scientific articles indirectly reflects health emergency research tailored toward COVID-19 and may overlap with the availability/advancement of analytical tools and the capacity of researchers from various nations in both developed and developing countries [9]. Among the top countries, China and the United States of America ranked first and second among the most productive countries, with a total of 181 and 69 published articles, accounting for about 22.15% and 8.45%, respectively, of the total articles published on COVID-19 within the study period. Other information on the most productive countries on COVID-19 is presented in Table 24.5; readers may refer to the information in Table 24.5. The high research outputs from China, the United States, the United Kingdom, and Korea may be attributed to the fact that they are the countries most affected by this pandemic as well as the countries with funding available for research on COVID-19. Also, world leaders and research scientists are looking for a solution to the pandemic, which might trigger the need for research on the pandemic [10,11]. The information in Fig. 24.2 presents country collaboration networks on COVID-19 research during the period of the survey. The function estimates and goodness-of-fit show that the output of COVID-19 research evolved over the past few months of its spread. In addition to its effects on human health, COVID-19 can wreak havoc on the global economy, which can linger with continuous adverse impact on development and other global environments; this likely accounts for the increased number of articles related to research on COVID-19, with a possible increase in the nearest future [12,13]. The results from this study reveal that China, the United States, the United Kingdom, Saudi Arabia, Germany, Switzerland, Canada, and Italy ranked first, second, third, fourth, fifth, sixth, seventh, and eighth, respectively, in terms of collaboration on COVID-19 research during the study period. Other countries, including Japan, Sweden, the Netherlands, Nigeria, Thailand, and South Africa, are also identified for their collaborative studies on this pandemic between January and March 2020. This study also reveals that the top collaborative nations are the countries that this pandemic affected most, especially in the first 2 months after COVID-19 started, while few studies are from countries recently hit by the pandemic, including Croatia, Austria, Tanzania, Sudan, New Zealand, Russia, and Chile, among other nations.
Authors from China, the United States, the United Kingdom, Germany, and Korea ranked first, second, third, fourth, and fifth, with about 498, 71, 17, 14, and 9 citations, respectively (Table 24.6). The high research outputs and citations received by these nations are attributed to the fact that these regions of the world were seriously affected by the pandemic, especially in the areas of disease monitoring and control as well as the search for a way out of its spatial distribution and infection [14-16]. This might also have encouraged researchers in these areas to focus on a cure for COVID-19, influencing research output on the mode of transmission, the most vulnerable age groups, and other genetics-related questions around the pandemic, as well as mitigation, with possibly more publications on COVID-19-related issues to come [14,16,17]. The information in S1 Table reveals the most cited documents, including "Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study", "First case of 2019 novel coronavirus in the United States", "Genomic characterisation and epidemiology of 2019 novel coronavirus: implications for virus origins and receptor binding", "A novel coronavirus outbreak of global health concern", and "The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health - The latest 2019 novel coronavirus outbreak in Wuhan, China." Table 24.7 presents the most influential and relevant sources of research on COVID-19 between January and March 2020. The results reveal the top journals or sources with the most published research articles on COVID-19-related studies; these relevant sources cover a range of subjects in their respective articles. The Lancet and the Journal of Medical Virology ranked first and second, with about 38 (4.65%) and 32 (3.92%) published articles, respectively (Table 24.7). Fig. 24.3 presents Lotka's model of scientific productivity on COVID-19 during the survey period. Lotka's model presents the frequency of authors' publications in a specified field: it shows the authors making contributions in the field and the ratio of contributors who make a single contribution, which accounts for a significant percentage [18,19]. The estimated value of n for the dataset is calculated using Lotka's model. The Beta value for COVID-19 is 2.44 for the all-author data, which provides the best-fitting value for the dataset. Fig. 24.3 shows the log-transformed Lotka's model plot with a P-value, R², Beta coefficient, and constant C of 0.02, 0.97, 2.44, and 0.58, respectively, where P is the number of authors producing n papers and C is a constant characteristic of the particular subject area. Other statistics about the collaboration network in COVID-19 research are presented in S2 Table. The network diameters of 6 and 4 show that collaboration based on author coupling and on country, respectively, is typical of acquaintanceship.
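Lotka's law can be checked on any bibliometric dataset given only the per-author publication counts. The Python sketch below is a minimal illustration of the log-transformed fit that yields the Beta coefficient, constant C, and R² reported above; the synthetic Zipf-distributed data stands in for the chapter's actual dataset, so the function name and inputs are illustrative assumptions.

```python
import numpy as np

def fit_lotka(papers_per_author):
    """Fit Lotka's law f(x) = C / x^beta to author-productivity counts.

    papers_per_author: iterable of ints, number of papers per author.
    Returns (beta, C, r_squared) from a least-squares fit in log-log space.
    """
    counts = np.bincount(np.asarray(papers_per_author))
    x = np.nonzero(counts)[0]              # productivity levels 1, 2, 3, ...
    x = x[x > 0]
    f = counts[x] / counts[x].sum()        # fraction of authors at each level

    # Log-transform: log f = log C - beta * log x
    log_x, log_f = np.log10(x), np.log10(f)
    slope, intercept = np.polyfit(log_x, log_f, 1)
    beta, C = -slope, 10.0 ** intercept

    residuals = log_f - (intercept + slope * log_x)
    r_squared = 1.0 - residuals.var() / log_f.var()
    return beta, C, r_squared

# Illustrative data: most authors publish once, few publish many times.
rng = np.random.default_rng(0)
data = np.minimum(rng.zipf(2.4, size=5000), 10_000)  # Lotka-like heavy tail
print(fit_lotka(data))
```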
The top author-coupling clusters, together with concepts and frameworks often related to COVID-19, could be detected via country collaboration (Fig. 24.2), the conceptual framework (S1 Figure), and the co-occurrence of terms and keywords (Fig. 24.4). Bibliographic coupling exists when two publications co-reference a third document in their contents; it indicates a probability that the two documents address a related subject matter. Two documents are scientometrically coupled when they cite one or more publications in common, an indicator that such an area of interest is very important and can be a research hotspot in the field, for instance, COVID-19. The information in Fig. 24.5 presents co-occurring keywords, which reflect research on pressing and emerging issues facing the world, especially the COVID-19 pandemic. Research conducted between January and March 2020 was chosen for RStudio analysis with a time slice of months; from each slice, the most frequently occurring items were picked. The nodes represent keywords, and each node's size correlates with the keyword's co-occurrence frequency. The color of the lines between keywords reveals chronological order: red, yellow, green, and pink. Among the highest-frequency clusters were cluster #1 (comparative symptomatology of the novel coronavirus (SARS-CoV-2) and MERS-CoV) and cluster #2 (perceptivity studies from the SARS-CoV outbreak and epidemic), while cluster #3 (antigenic structural studies for vaccine production) comprises virus, antiviral, phylogenetic analysis, spike protein, and nucleocapsid protein, and cluster #4 (antibody therapeutic target studies) consists of infectious bronchitis, neutralizing antibody, bat, ACE2, and angiotensin-converting enzyme 2. The nodes marked with blue circles represent a good relationship between a keyword and its centrality, and these keywords are very important in COVID-19 research and its occurrences. In other words, these nodes represent emerging trends in the field of COVID-19, with the strongest bursts.
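The keyword co-occurrence counts underlying a network such as Fig. 24.5 can be tallied directly from per-article keyword lists. A minimal sketch follows, assuming illustrative keyword lists rather than the actual corpus; node sizes and edge weights in such a figure correspond to the two counters computed here.

```python
from collections import Counter
from itertools import combinations

# Each article contributes one keyword list (illustrative examples only).
articles = [
    ["coronavirus", "pneumonia", "wuhan"],
    ["coronavirus", "spike protein", "vaccine"],
    ["coronavirus", "pneumonia", "epidemiology"],
]

# Node size ~ keyword frequency across articles.
node_freq = Counter(kw for kws in articles for kw in kws)

# Edge weight ~ number of articles in which two keywords co-occur.
edge_freq = Counter(
    tuple(sorted(pair))
    for kws in articles
    for pair in combinations(set(kws), 2)
)

print(node_freq.most_common(3))
print(edge_freq.most_common(3))
```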
FIGURE 24.5 Keyword co-occurrence networks in COVID-19 research and information from 01/01/2020 to 23/03/2020.

Institutional collaboration networks are evaluated in this study, where the size of each circle represents the research effort/outcome in terms of documents published by the different affiliations (Fig. 24.6). The link between two circles denotes the strength of the bidirectional collaboration between them, quantified via their coauthored documents. The sum of all links a circle possesses represents the overall strength of the collaborations the corresponding institution has made with other institutions [4]. Among the top collaborative institutions are the University of Toronto, the Chinese University of Hong Kong, the Wuhan Institute of Virology, Fudan University, Peking University, and Guangzhou Medical University. Furthermore, studies have shown that in different university ranking systems, the number of citations has more than a 20% share [20-22]. Therefore, many institutions encourage their researchers to publish high-quality and influential research articles that reach the broadest possible audience or receive high citations [23]. Consequently, the published literature has revealed increased visibility through the availability of research outputs via open-access repositories, broader access outcomes, and higher citation effects [23-26]. This research visibility improves both the report's and the researcher's citations and, over time, the h-index.
The study identified four thematic evolutionary thrust areas in COVID-19 emergency research prioritization: the first cluster (bottom right: epidemiologic studies of infectious bronchitis virus, including coronavirus) consists of coronavirus, infectious bronchitis virus, and epidemiology, followed by the second cluster (top right: elucidation of historical respiratory viral outbreaks), which consists of MERS and SARS. The third cluster (top left) comprises zoonoses and phylogenetic analysis, and the fourth cluster (bottom left) consists of zoonosis and influenza (influenza zoonosis) (Fig. 24.7). These domains of research received progressively more attention in the first few months after the beginning of COVID-19. In analyzing the thematic evolution of this pandemic, Figs. 24.4 and 24.6 show several prevailing themes that sought solutions in an effort to halt the COVID-19 pandemic. Research related to COVID-19 contributes to scientific advancement and provides the needed information on the disease.
Discussion
This study offers a conceptual representation of COVID-19 research progression. It has been noted that studies in this area have both disciplinary and multidisciplinary emphases (combining two or more fields), in which new knowledge is gained through the interaction and incorporation of new ideas, views, tools, and techniques across different fields. More so, interdisciplinary work frequently involves institutions, organizations, scientists, and nations.
This study attempted to provide a concise quantitative and qualitative overview of the world's prioritization of health emergency research and preparedness, using publications on COVID-19 between January and March 2020 as a model scenario. The results indicate that researchers from around the world started publishing articles immediately after the emergence of COVID-19, and the number of articles in this field is still growing quickly. This new virus is a concern for the world, as it has affected various sectors globally, including the global economy, health, migration, airlines, and other vital sectors since its inception [26-29]. As reflected in the study, all continents have been affected and almost all activities have been grounded [30,31].
The development of a multidisciplinary task force involving researchers, institutional leaders, infectious disease and infection prevention specialists, and technology experts is a critical step in addressing global concerns and developing open and productive communication on COVID-19. An initial needs-based assessment of the current state was done to determine the necessary operational processes for outbreak management, the existing informatics structure to support these processes, and the gaps that needed to be bridged in a timely fashion. Doing so allowed us to expediently assess studies on COVID-19 between January and March 2020. This study revealed that, globally, China and the United States ranked first and second, respectively, in all the research productivity measures, including production, citations, authors, and single- and multiple-country authorship of COVID-19 research during the study period. Other countries, including Korea, Italy, Germany, Canada, and Saudi Arabia, also ranked high in research on COVID-19. This also reveals that most of these nations were the most affected, especially in the first month after the inception of COVID-19, while countries that were less affected have a low record of novel studies on this pandemic. The result from this study therefore suggests the need for countries lagging in research or scientific findings, especially nations in Africa, to put more effort into finding solutions before they are hit by the pandemic.
As of March 31, 2020, the number of confirmed COVID-19 cases globally was 784,392, with about 37,780 recorded deaths and 167,035 recoveries. Therefore, prioritization of health emergency research and disaster preparedness for COVID-19 and its impact on global health and the economy is paramount [14]. COVID-19 has affected about 200 countries and territories around the world and two international conveyances, i.e., Holland America's MS Zaandam cruise ship and the Diamond Princess cruise ship harbored in Yokohama, Japan [32]. Due to the obvious impact of COVID-19, various sectors have been grounded globally, lives have been lost, businesses are collapsing, and nations under lockdown are experiencing disrupted activities in all spheres of life.
One of the ways to prioritize health emergency and disaster preparedness for COVID-19 is to balance the need to concentrate on the pandemic with ensuring high-quality healthcare and non-infection-related operations, while sustaining research on the pandemic to support all facets of the population and all sectors [14,16,33]. Finally, in an evolving pandemic environment, faced with the challenge of developing guidelines or protocols that typically require inputs and approvals from multiple stakeholders alongside emerging research outcomes, it is unavoidable that the proper dissemination of guidelines and protocols to sustain the rapid reduction of the spread and impact of a health emergency such as COVID-19 will encounter many obstacles [11,34]. The COVID-19 pandemic has revealed the importance of a multidisciplinary team of health workers in combating health emergencies and disasters, and of building in advance strong and consolidated health systems capable of sustaining unanticipated health emergencies and disasters. The most significant mitigation strategy and disaster preparedness measure for the challenges around COVID-19 is the establishment of a 24-hour information platform that includes representation from the WHO Information Services. The information received from this center will be very useful for researchers and scientists in further analysis and evaluation of the issues around COVID-19. This will immensely contribute to the mitigation and preparedness strategies for the pandemic. More so, it will enable real-time identification of failures and successes, a focus on evolving needs, and feedback for subsequent interventions.
Conclusion
This study assessed global research evolution, prioritization, and preparedness toward health emergencies and disasters, using the COVID-19 pandemic as a typical model based on productivity indices, conceptual frameworks, disciplines, and collaboration networks. The study unveiled the global research efforts made by researchers from different nations, disciplines, institutions, and fields. Fundamental research prioritization was noticed in China, the United States, the United Kingdom, Saudi Arabia, Germany, Switzerland, Canada, and Italy, as well as in the disciplines of virology, microbiology, immunology, infectious diseases, veterinary sciences, general and internal medicine, and pharmacology and pharmacy. The conceptual frameworks and thematic areas that defined the research priorities during the period were (1) epidemiologic studies of infectious bronchitis virus, including coronavirus, (2) elucidation of historical respiratory viral outbreaks, (3) zoonoses and phylogenetic analysis, (4) influenza zoonosis, (5) comparative symptomatology of the novel coronavirus (SARS-CoV-2) and MERS-CoV, (6) perceptivity studies from the SARS-CoV-1/2 outbreaks, (7) antigenic structural studies for vaccine production, and (8) antibody therapeutic target studies.
Generally, the study revealed a skewed health emergency research response and prioritization coming only from the affected nations, which could have informed prior research preparedness in the then-unaffected countries to support decision-making and possible implementation to mitigate the pandemic. Although COVID-19 research has received progressive attention since the beginning of the pandemic, the number of studies included here might not be exhaustive of COVID-19 research, given the limited number of databases consulted and the fact that new studies are being published daily. However, this study recommends integrative and multidisciplinary research prioritization and preparation toward health emergencies and disasters, drawing on all experimental and nonexperimental bases of knowledge from affected and unaffected nations alike.
Object-based change detection using semivariogram indices derived from NDVI images: The environmental disaster in Mariana, Brazil
Object-based change detection is a powerful analysis tool for remote sensing data, but few studies consider the potential of temporal semivariogram indices for mapping land-cover changes using object-based approaches. In this study, we explored and evaluated the performance of semivariogram indices calculated from remote sensing imagery, using the Normalized Difference Vegetation Index (NDVI) to detect changes in spatial features related to land cover caused by a disastrous 2015 dam failure in Brazil's Mariana district. We calculated the NDVI from Landsat 8 images acquired before and after the disaster, then created objects by multiresolution segmentation analysis based on the post-disaster images. Experimental semivariograms were computed within the image objects, and semivariogram indices were calculated and selected by principal component analysis. We used the selected indices as input data to a support vector machine algorithm for classifying change and no-change classes. The selected semivariogram indices showed their effectiveness as input data for object-based change detection analysis, producing highly accurate maps of areas affected by post-dam-failure flooding in the region. This approach can be used in many other contexts for rapid and accurate assessment of such land-cover changes.
INTRODUCTION
The collapse of a mining dam in the Brazilian state of Minas Gerais on November 5th, 2015, considered one of the biggest environmental disasters in the country's history, resulted in the destruction of whole communities by a river of mud and mining waste. This calamity affected the Gualaxo River, a tributary of the Carmo River and ultimately of the Doce River, waterways that supply water to a significant number of municipalities. The flood affected 600 kilometers of riverbed and destroyed human and animal lives as well as several land-cover classes (such as grasslands, urban areas, and native vegetation), including permanent preservation areas. The full extent of the environmental impacts is not yet known, and the changes within the affected area have yet to be fully quantified.
Remote sensing techniques are effective in capturing the structure, rates, and changes of land cover. They can supply essential information concerning the ecological status of a region, including changes that modify plant phenological patterns and deforestation (Munroe; Southworth; Tucker, 2002; Tucker et al., 2005; Yue et al., 2003). The Normalized Difference Vegetation Index (NDVI) is an important approach to the analysis of land-cover structure and its temporal modifications (Griffith et al., 2007). According to Costantini et al. (2012) and Garrigues et al. (2006), NDVI images are the most robust variable used to describe the spatial and temporal heterogeneity of a landscape's biosphere. In addition, these data can be treated as regionalized variables, since the information contained in a pixel is highly correlated with the information contained in neighboring pixels (Acerbi Junior et al., 2015; Curran, 1988).
Studies of environmental disasters have emphasized the importance of damage assessment to assist environmental management programs and have stressed the use of remote sensing images and geostatistical techniques as central tools for this kind of analytical approach (Sertel; Kaya; Curran, 2007). Combining remote sensing information with GIS techniques and geospatial databases can increase the accuracy and reduce the processing time of change detection and classification procedures (Berberoglu et al., 2000; Berberoglu; Akin, 2009; Garcia-Pedrero et al., 2015).
For example, semivariograms are an analytical technique used to assess the relationship and variance between points based on distance for a given variable. They have been used as measures of texture (Curran, 1988; Woodcock; Strahler; Jupp, 1988), for improved image classification (Balaguer et al., 2010; Balaguer-Beser et al., 2011; Wu et al., 2015; Yue et al., 2013; Powers et al., 2015), and, more recently, in change detection studies (Costantini et al., 2012; Sertel et al., 2007; Gil-Yepes et al., 2016). Acerbi Junior et al. (2015) demonstrated the potential of semivariogram parameters (derived from bitemporal NDVI images) to detect changes in Brazilian savanna vegetation, showing that these parameters increased in deforested areas and remained constant in regions where the land cover had not changed.
In recent years, semivariograms have also contributed to object-based image analysis (OBIA) (Meer, 2012). Powers et al. (2015) used semivariogram features and OBIA for the classification of industrial disturbances in forest areas. Balaguer et al. (2010) achieved high accuracy by combining semivariogram features and spectral information in land-cover mapping. Gil-Yepes et al. (2016) proposed and evaluated a set of new temporal geostatistical features for object-based change detection (OBCD) analysis within agricultural plots at two different dates, showing that the new set of cross-semivariogram and codispersion features provided high global accuracy when compared to the use of spectral information alone.
Textural features have proven to be more effective than spectral bands alone for change detection (Chen et al., 2012; Wu et al., 2000). However, few studies have explored the potential of temporal semivariogram features for mapping land-cover changes using the OBCD approach. We hypothesized that landscape changes could be accurately detected using only semivariograms calculated from NDVI images, and so we explored and evaluated the performance of semivariogram indices in an object-based approach to detecting land-cover changes caused by the 2015 dam-collapse disaster in Brazil.
MATERIAL AND METHODS
We derived the NDVI from Landsat 8 images for use in an object-based change detection approach to analyzing land-cover changes in the afflicted area, using the following methodology (graphically summarized in Figure 1):
(1) Image acquisition and NDVI transformation;
(2) Object delimitation by the multiresolution algorithm, based on the post-disaster image;
(3) Experimental semivariogram computation within the objects;
(4) Generation of semivariogram indices, as proposed by Balaguer et al. (2010);
(5) Selection of the most important semivariogram indices by PCA;
(6) Change detection using the Support Vector Machine (SVM) algorithm;
(7) Evaluation by the confusion matrix and its accuracy measures.
Study area and data
The district of Mariana is located in the central region of Minas Gerais state, Brazil, between the 43º 05' 00" and 43º 30' 00" meridians and the 20º 08' 00" and 20º 35' 00" parallels (Figure 2). The district includes the upper portion of the Doce River basin and is characterized by hilly relief and abundant tablelands. The climatic conditions are typical of humid tropical highlands, with hot and rainy summers. The vegetation is predominantly composed of the Atlantic Forest and Savanna biomes.
We acquired Landsat 8 satellite images from the United States Geological Survey Earth Resources Observation and Science Center (USGS/EROS) for October 2015 (pre-disaster) and November 2015 (post-disaster), at the Landsat Surface Reflectance processing level, with the appropriate geometric corrections and reflectance values at the soil level. We then generated the NDVI (Equation 1), a quotient-based index that uses the red and near-infrared spectral bands to enhance vegetation characteristics and minimize the effects of shadows caused by the terrain's topography (Berra et al., 2012; Vorovencii, 2014). The values of this index vary from -1 to 1 and are calculated as

NDVI = (ρNIR - ρRED) / (ρNIR + ρRED)     (1)

where ρNIR and ρRED are the reflectance values for the near-infrared and red wavelengths, respectively.
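As a minimal illustration of Equation (1), the following Python sketch computes the NDVI from two reflectance arrays (for Landsat 8, the near-infrared and red bands are bands 5 and 4, respectively); the small epsilon guard is an implementation detail added here, not part of the original definition.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Compute NDVI from near-infrared and red surface-reflectance arrays.

    Values fall in [-1, 1]; eps guards against division by zero in
    masked or zero-reflectance pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Example with a tiny synthetic 2x2 scene (reflectance in [0, 1]).
nir = np.array([[0.5, 0.4], [0.3, 0.6]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
print(ndvi(nir, red))
```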
Image segmentation
In the object-based change detection method, pixels are not classified individually but rather combined into homogeneous groups (objects) and classified together (Chen et al., 2012; Desclée; Bogaert; Defourny, 2006; Hussain et al., 2013). Each object is characterized using a large number of descriptive features derived from the images and becomes the basic unit of analysis. In comparison with pixel-based methods, additional spatial and contextual information can be obtained from the objects (Blaschke, 2010; Hussain et al., 2013; Ruiz et al., 2011; Wu et al., 2015).
Object-based semivariogram analysis is based on the delimitation of homogeneous groups, in which the objects' boundaries are pre-defined and the semivariogram features are extracted from each object. Multiresolution segmentation is a basic procedure in the eCognition software employed in this study; we used a multiresolution segmentation algorithm (Baatz; Schäpe, 2000) to generate objects based on the post-disaster NDVI image. The size, shape, and spectral variation of each object are controlled by three key segmentation parameters: shape, compactness, and scale. The shape parameter was set to 0.1 and the compactness to 0.5. The most critical step is the selection of the scale parameter, which controls the size of the image objects; it sets a threshold of homogeneity that determines how many neighboring pixels can be merged together to form an image object (Mui et al., 2015). We tested values from 80 to 200 for this parameter and obtained the best segmentation result with a value of 150. Figure 3 shows the image segmentation procedure.
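eCognition's multiresolution segmentation is proprietary, but the role of a scale-type parameter can be illustrated with an open-source stand-in. The sketch below uses scikit-image's Felzenszwalb segmentation purely as an analogy; the algorithms differ, so its scale value is not comparable to the value of 150 used in this study, and the input scene is synthetic.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Illustrative NDVI scene; in practice this would be the post-disaster image.
ndvi_post = np.random.default_rng(4).random((200, 200))

# Larger `scale` -> larger objects, loosely analogous to a scale parameter
# controlling how many neighboring pixels merge into one image object.
objects = felzenszwalb(ndvi_post, scale=150, sigma=0.5, min_size=20)
print("number of objects:", objects.max() + 1)
```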
Experimental semivariogram
For continuous variables such as the NDVI, the experimental semivariogram is defined as half of the average squared difference between values separated by a given lag, where the lag is a vector in both distance and direction (Atkinson; Lewis, 2000). The semivariance is estimated from the spatial variance of measurements taken in samples separated by a distance h, as the sum of the squared differences between the sampled values separated by h, divided by twice the number of possible pairs at each distance. This was estimated using Equation 2:

γ(h) = (1 / 2N(h)) Σ [Z(x) - Z(x+h)]²     (2)

where N(h) is the number of pairs of points separated by the distance h, Z(x) is the value of the regionalized variable at point x, and Z(x+h) is the value at point (x+h).
The semivariogram is the graphic representation of the spatial variance versus the distance h, which allows the variance to be estimated for different combinations of pairs of points. Semivariance functions are characterized by three parameters: sill (σ²), range (φ), and nugget effect (τ²). The sill is the plateau reached by the semivariance values and shows the quantity of variation explained by the spatial structure of the data. The range is the distance at which the semivariogram reaches the sill, i.e., the distance up to which the data are correlated. The nugget effect is the combination of sampling errors and variations that occur at scales smaller than the distance between the sampled points (Curran, 1988).
Since we wanted to characterize the NDVI spatial variability in maximum detail, we used a one-pixel interval between two lags (the distance between pairs of points in the semivariogram calculation), so the lag size was equivalent to the pixel size (30 m). After some experimentation to find an appropriate optimal lag distance, we fixed the number of lags at 20 pixels (resulting in a maximum lag distance of 600 m) to ensure that sill values would provide a concise description of data variability. According to Woodcock, Strahler, and Jupp (1988), the size of the samples needs to be larger than the range of influence to characterize the initial part of the semivariogram, and large enough to reveal the presence of periodicity.
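A minimal implementation of Equation (2) for a single image object is sketched below, pooling row and column lags as a simple isotropic approximation; FETEX 2.0 computes the experimental semivariogram with its own conventions, so this is an illustration rather than a reproduction.

```python
import numpy as np

def experimental_semivariogram(z, n_lags=20):
    """Omnidirectional experimental semivariogram of a 2-D array z.

    Lags are taken in whole-pixel steps along rows and columns, pooling
    both directions (a simple isotropic approximation). Returns an array
    gamma where gamma[h-1] is the semivariance at a lag of h pixels.
    """
    gamma = np.zeros(n_lags)
    for h in range(1, n_lags + 1):
        diffs = []
        if z.shape[0] > h:
            diffs.append(z[h:, :] - z[:-h, :])      # vertically separated pairs
        if z.shape[1] > h:
            diffs.append(z[:, h:] - z[:, :-h])      # horizontally separated pairs
        sq = np.concatenate([d.ravel() ** 2 for d in diffs])
        gamma[h - 1] = sq.mean() / 2.0              # Equation (2)
    return gamma

# Example: a spatially correlated field gives low semivariance at short lags.
rng = np.random.default_rng(1)
field = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
print(experimental_semivariogram(field, n_lags=5))
```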
Set of semivariogram indices
The set of semivariogram indices we used was described by Balaguer et al. (2010) based on the points defining the experimental semivariogram. These indices describe the shape of the experimental semivariogram and therefore the properties that characterize the spatial patterns of the image object (Table 1); they are categorized according to the position of the lags used in their definition (near the origin and up to the first maximum). The devised feature groups provide information such as the change ratio, slope, and concavity or convexity (curvature) level of the images, as well as data variability.
A common alternative is to fit the experimental semivariogram with a theoretical function (a semivariogram model) whose parameters (such as sill and range) are adopted as texture measures (Chen; Gong, 2004; Woodcock; Strahler; Jupp, 1988). This method often suffers from the difficulty of selecting a proper function, because simple functions are not sufficiently distinguishable and complex ones may be subject to overfitting (Chica-Olmo; Abarca-Hernández, 2000). The semivariogram indices are free of the problems caused by modeling the experimental semivariogram and have thus become more popular for describing the spatial properties of remote sensing images (Wu et al., 2015).
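To make the indices concrete, the sketch below reads two of them, FDO (the slope of the semivariogram over the first two lags) and SDT (the approximate second derivative at the third lag, discussed further in the Results), as finite differences of the experimental semivariogram; the exact definitions are those of Balaguer et al. (2010), so these forms are illustrative approximations.

```python
import numpy as np

def fdo(gamma, lag_size=30.0):
    """First derivative near the origin: slope over the first two lags."""
    return (gamma[1] - gamma[0]) / lag_size

def sdt(gamma, lag_size=30.0):
    """Approximate second derivative at the third lag; negative values
    correspond to what the text calls a convex semivariogram, i.e.
    heterogeneity at short distances."""
    return (gamma[3] - 2.0 * gamma[2] + gamma[1]) / lag_size**2

# Illustrative semivariance values at lags 1..5 (lag size = 30 m pixels).
gamma = np.array([0.10, 0.18, 0.24, 0.27, 0.29])
print(fdo(gamma), sdt(gamma))
```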
Feature extraction
We focused on two classes in this study: (1) no-change objects, consisting of areas with the same cover in both images, and (2) change objects, consisting of areas affected by flooding from the dam failure. A data set of 200 objects (100 objects per class) was sampled, with 50% of the samples randomly chosen as training samples and the rest used as evaluation samples. Within the objects, the semivariogram indices were extracted from both images using the FETEX 2.0 software (Ruiz et al., 2011), a feature extraction tool for object-based image analysis.
Due to the high number of indices, some of the information they provide may overlap, making some indices redundant for efficiently describing the objects. We therefore employed principal component analysis (PCA) to group and interpret the redundancies in the information provided by the analyzed semivariogram indices. By choosing the variables with the highest impact on the first two principal components, we were able to reduce the number of variables, avoid redundant variables (multicollinearity), and make further analyses more efficient.
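A sketch of this selection step with scikit-learn is given below; the selection rule (largest absolute loadings on PC1 and PC2) approximates the criterion described above, and the feature matrix is a synthetic stand-in for the extracted indices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def select_indices(X, feature_names, n_keep=4):
    """Keep the features with the largest absolute loadings on PC1/PC2.

    X: (n_objects, n_features) matrix of semivariogram indices.
    """
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2).fit(Xs)
    # Importance of each feature = max |loading| over the first two PCs.
    importance = np.abs(pca.components_).max(axis=0)
    order = np.argsort(importance)[::-1][:n_keep]
    return [feature_names[i] for i in order], pca.explained_variance_ratio_

names = ["FDO", "SDT", "DMF", "SDF", "RVF", "RSF", "AFM", "VFM"]
X = np.random.default_rng(2).normal(size=(100, len(names)))
print(select_indices(X, names))
```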
Change detection and evaluation
To detect changes in the images, we chose to use a support vector machine (SVM) algorithm. Belonging to a group of theoretically superior machine learning algorithms, this approach is especially advantageous in the presence of heterogeneous classes for which only a few training samples are available (Wu et al., 2015).
SVMs operate by assuming that each set of inputs has a unique relation to the response variable, and that the grouping and relation of these predictors to one another is sufficient to identify rules that can predict the response variable from new input sets. To do this, SVMs project the input-space data into a feature space of much larger dimension, enabling linearly non-separable data to become separable in the feature space. This method has been successfully used, for example, in forestry classification problems (García-Gutiérrez et al., 2015; Wu et al., 2015). We used the Gaussian or radial basis function (RBF) as the kernel function, and we evaluated the change detection using a confusion matrix (Congalton, 1991) and its accuracy measures, validating the results against a manually produced map.
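The classification step can be sketched with scikit-learn's SVC, mirroring the 50/50 split of the 200 sampled objects; the data here are synthetic stand-ins for the selected semivariogram indices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

# X: selected semivariogram indices per object; y: 0 = no-change, 1 = change.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.repeat([0, 1], 100)

# 50/50 split, mirroring the paper's sampling of 200 objects.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # Gaussian (RBF) kernel
y_hat = clf.predict(X_te)
print(confusion_matrix(y_te, y_hat))
print("overall accuracy:", accuracy_score(y_te, y_hat))
```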
Semivariogram indices selection
By computing the PCA over the complete set of semivariogram features, we concentrated most of the data's variability in the first components; the resulting visualization of the data allows a better understanding of redundancies (Figure 4). The proportion of variability explained by PC1 and PC2 (the first two principal components) was 53.15%. As a result of the PCA analysis of the group of indices that provide information near the origin, we removed RVF and RSF and included FDO and SDT as input data for the change detection analysis. After analyzing the indices that provide information up to the first maximum, we also removed AFM, VFM, FML, and RMM and included DMF and SDF as further input data. We selected the variables that presented the highest absolute values in the first two components (Table 2).
Exploring the semivariogram indices
We analyzed the semivariogram curves considering both the change (Figure 5a) and no-change (Figure 5b) classes. In the former, the image's spatial variability changed considerably from native vegetation (pre-disaster image) to flooded areas (post-disaster image). The flooded areas had low overall variability due to the homogeneity of the NDVI pixels, with low internal variation. The high relative variability of native vegetation is explained by the presence of both high and low NDVI values in the same object. In contrast, the semivariogram curves for the no-change objects presented similar values.
The pre-selected semivariogram indices decreased (FDO and DMF) or increased (SDT and SDF) considerably in the presence of changes (Figure 6a) and remained almost constant in the absence of changes (Figure 6b). FDO is the first derivative near the origin and represents the slope of the semivariogram at the first two lags; it shows the variability of the data at short distances. FDO presented high values for heterogeneous objects (Figure 7a) and low values for homogeneous objects (Figure 7b). SDT approximates the value of the second derivative of the semivariogram at the third lag. It quantifies the concavity or convexity level of the semivariogram at short distances, corresponding to the heterogeneity of the objects in the image. Negative values indicate that the semivariogram is convex and thus that the image is heterogeneous at short distances. SDT presented high negative values for change objects (Figure 7a) and low negative values for no-change objects (Figure 7b).

Table 3: Confusion matrix of the support vector machine classification.
Change detection assessment
The classification accuracy measures, using the selected semivariogram indices as input for the SVM algorithm, are shown in Table 3. The semivariogram indices were effective in the classification of the change and no-change classes, with an overall accuracy of 95.12% and producer's and user's accuracies higher than 85%. Figure 8 shows the change detection map. All objects classified as no-change in the map are correct (user's accuracy = 100%), and no no-change objects were omitted (producer's accuracy = 100%). However, according to the validation data set, there are still some misclassification problems, with 14.29% of the objects classified erroneously as change (user's accuracy = 85.71%) and 6.9% of change-class objects omitted from the map.
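The accuracy measures follow directly from the confusion matrix; a small sketch is given below, with illustrative matrix entries rather than the actual values behind Table 3.

```python
import numpy as np

def accuracies(cm):
    """Overall, producer's, and user's accuracies from a confusion matrix.

    cm[i, j] = number of objects of reference class i mapped to class j.
    Producer's accuracy is recall per class; user's accuracy is precision.
    """
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)   # per reference class
    users = np.diag(cm) / cm.sum(axis=0)       # per mapped class
    return overall, producers, users

# Illustrative entries only (rows: reference no-change/change; columns: mapped).
cm = np.array([[14, 1],
               [1, 25]])
print(accuracies(cm))
```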
In summary, the semivariogram indices synthesized the most relevant information about the shape of the semivariogram (its slope) into a few features. They identified the singular points (maxima) and enhanced the information contained in the first lags, where spatial correlation at short distances is higher. These indices also have specific meanings, allowing them to be easily interpreted.
CONCLUSIONS
In this study, we used spatial context to detect land-cover changes resulting from a Brazilian dam failure using an object-based approach. We explored and investigated the potential of semivariogram indices as inputs for training the support vector machine algorithm for change detection. Our results indicate that landscape changes can be accurately detected using only textural features calculated from semivariograms derived from NDVI images. The semivariogram indices selected by PCA were effective in the classification, presenting high accuracy values. Using semivariograms as the main geostatistical tool to describe spatial variability means that indices derived from NDVI variability have the potential to discriminate between homogeneous and heterogeneous classes within objects. This approach can be used in many other contexts for rapid and accurate assessment of such land-cover changes. Further research should explore the use of geostatistical features to characterize the degree of change, as well as the impact of the initial land-cover class and the image segmentation epoch on the analysis results. Other studies could analyze the influence of seasonality on change detection in vegetated areas.
Figure 2: Study area location within Minas Gerais state, Brazil.
Figure 3: Image segmentation procedure for feature extraction.
Figure 5: Semivariograms from pre- and post-disaster images for: (a) change objects; (b) no-change objects.
Figure 6: Values of pre-selected semivariogram indices from image epoch 1 and image epoch 2 for: (a) change objects; (b) no-change objects.
Figure 7: Semivariogram representation of the total data variance for the FDO and SDT indices: (a) heterogeneous objects, and (b) homogeneous objects.
DMF is the difference between the mean of the semivariogram values up to the first maximum (MFM) and the semivariance at the first lag. This index shows the decreasing rate of the spatial correlation in the image up to the lags where the semivariogram theoretically tends to stabilize. The results showed a high variation of DMF values for change objects and a relatively low variation for no-change objects. SDF is the second-order difference between the first lag and the first maximum. This parameter provides information about the semivariogram curvature in that interval, also representing the low-frequency values in the image. SDF values presented high variation for change objects and low variation for no-change objects.
a = Indices that provide semivariogram information near the origin; b = Indices that provide semivariogram information up to the first maximum.
Interactions of Charged Spin-2 Fields
In light of recent progress in ghost-free theories of massive gravity and multi-gravity, we reconsider the problem of constructing a ghost-free theory of an interacting spin-2 field charged under a U(1) gauge symmetry. Our starting point is the theory originally proposed by Federbush, which is essentially Fierz-Pauli generalized to include a minimal coupling to a U(1) gauge field. We show the Federbush theory with a dynamical U(1) field is in fact ghost-free and can be treated as a healthy effective field theory to describe a massive charged spin-2 particle. It can even potentially have healthy dynamics above its strong-coupling scale. We then construct candidate gravitational extensions to the Federbush theory both by using Dimensional Deconstruction, and by constructing a general non-linear completion. However, we find that the U(1) symmetry forces us to modify the form of the Einstein-Hilbert kinetic term. By performing a constraint analysis directly in the first-order form, we show that these modified kinetic terms inevitably reintroduce the Boulware-Deser ghost. As a by-product of our analysis, we present a new proof for ghost-freedom of bi-gravity in 2+1 dimensions (also known as Zwei-Dreibein gravity). We also give a complementary algebraic argument that the Einstein-Hilbert kinetic term is incompatible with a U(1) symmetry, for a finite number of gravitons.
Introduction
It has been more than seventy years since Wigner demonstrated that all consistent, relativistic, quantum particles can be classified by their mass m and their spin j [1,2]. Experimentally, particle accelerators have established the existence of composite, charged massive higher spin particles [3]. Nevertheless, the theoretical understanding of higher spin fields is considerably less developed than their lower spin counterparts.
The most obvious bosonic higher spin theory to consider is spin-2. There are arguments that the only consistent theory of a massless, self-interacting, Lorentz-invariant spin-2 field is General Relativity [4][5][6][7][8]. In fact, recent work has established that these assumptions may be weakened somewhat. Ghost-freedom alone is sufficient to derive the Einstein-Hilbert action as the kinetic term for Lorentz-invariant massive fields [9] or for massless gravity theories where time translation invariance is broken explicitly [10,11].
However, the massive case is less well understood. In the 1930s, Fierz and Pauli wrote down the linearized, non-interacting theory of a single massive spin-2 field [12,13]. There are several issues. The first is the vDVZ discontinuity of this model [14,15]. The vDVZ discontinuity is a curious feature of the Fierz-Pauli action, which is that in the limit m → 0, the Fierz-Pauli predictions do not become equivalent to those of the linearized Einstein-Hilbert Lagrangian. Vainshtein was the first to see that this discontinuity could be avoided by adding self-interactions for the massive spin-2 field, and recognizing the limited regime of validity of the linear approximation [16]. However, Boulware and Deser showed that generically a non-linear extension of the Fierz-Pauli action would introduce a sixth ghost mode, the Boulware-Deser ghost [17].
Only recently has a theory of a Lorentz-invariant, self-interacting, massive spin-2 field that propagates 2(2) + 1 = 5 healthy degrees of freedom (dofs) been found [18][19][20][21][22][23]. This was generalized to an arbitrary number of interacting spin-2 fields in [24,25]. Typically in these theories one has in mind that the graviton itself has a mass. The theory has been applied in cosmology, where the mass of the graviton may be relevant for explaining the observed acceleration of the universe if the mass corresponds to the Hubble scale today, m ∼ H_0, while remaining technically natural [26,27]. For a recent review of these types of theories, see [28].
However, we do not necessarily need to identify a massive spin-2 field with gravity. In this context it is interesting to think about possible additional interactions for a massive spin-2 field. A natural extension is to allow the massive spin-2 dofs to be charged under a local U(1) symmetry. For example, we might try to minimally couple the spin-2 field by taking the Fierz-Pauli action for a spin-2 field H_{μν} and promoting H_{μν} to a complex field. We can then minimally couple H_{μν} to a U(1) gauge field A_μ by replacing

∂_μ H_{±,νσ} → D_μ H_{±,νσ} = (∂_μ ∓ iqA_μ) H_{±,νσ},   (1.1)

where q is the charge of the spin-2 field. We might ask if there is a consistent effective field theory description for these dofs, and whether there are consistent gravitational interactions of the massive charged spin-2 field H_{±,μν}. In fact the minimally coupled theory of a charged spin-2 field at the linear level was studied originally by Federbush [29]. There it was argued that there was a unique minimally coupled theory at the linear level that propagated the correct number of dofs in the background of a constant electromagnetic field.
However, Velo and Zwanziger showed that the minimal coupling procedure would typically lead to the presence of superluminal group and phase velocities around certain backgrounds [30][31][32]. This result was also confirmed more recently in [33,34]. However, in light of the fact that superluminal phase and group velocities have been observed in nature (see, for example, [35]), we should not be so quick to use this result to infer a failure of causality. Acausality only occurs if the front velocity is superluminal. At tree level this is equivalent to the velocity obtained through a characteristic analysis, but at the quantum level the computation of this velocity is strongly sensitive to the strong-coupling physics, and the classical characteristic analysis cannot be trusted. A well-known case of quantum effects rendering the front velocity luminal, when the low-energy phase/group velocity is superluminal, is the propagation of light in gravitational fields [36][37][38][39]. It can be shown that the effective theory obtained from integrating out the electron gives rise to superluminal phase velocities at low energies, whereas the complete one-loop photon propagator is causal.
There are several models of interacting charged spin-2 fields which are known to be consistent. Recently, there has been the development of Vasiliev's higher spin theory [40], but an older model is that of the spin-2 resonances coming from the Kaluza-Klein tower [41]. There has also been a lot of work in constructing theories of spin-2 fields arising from string theory, for example, [42][43][44]. The drawback in these approaches is that they entail an infinite tower of charged, massive spin-2 fields or an infinite tower of higher spin fields. While these are all excellent examples of UV complete theories which contain charged spin-2 states, the infinite tower structure is not palatable if we only wish to describe single meson resonances through an effective field theory.
Charged spin-2 fields were also studied by Porrati in [34,45,46]. In theories of a single massive charged spin-2 field with charge q coupled to a U(1) gauge field there is a model-independent strong-coupling scale Λ_{q,3} = q^{-1/3} m. In other words, perturbative unitarity always breaks down at the scale Λ_{q,3} or lower.
However, the breakdown of perturbative unitarity at some scale may not require the introduction of new physics at that scale; for explicit examples see [48]. Indeed, it has been argued that the Vainshtein mechanism can act as a way of recovering unitarity non-perturbatively [47]. Thus it is not necessarily appropriate to think of Λ_{q,3} as the cutoff (meaning a scale at which new physics enters), but rather as an energy scale at which perturbation theory breaks down, i.e. the strong-coupling scale. The idea that a theory can self-unitarize above its strong-coupling scale is also the essence of the 'classicalization' picture [49,50].
In this work we shall extend the results in the literature by showing that the Federbush theory is in fact completely ghost-free, even for a dynamical U(1) field. Thus, while the Federbush action describes a consistent effective field theory for the spin-2 field, it may be possible to extend the regime of validity of the theory above the scale Λ_{q,3}, provided one can make sense of the strong-coupling region.
A charged spin-2 field has several applications. It is known that Nature furnishes several composite charged massive spin-2 fields, e.g. hadronic resonances such as π_2(1670), ρ_3(1690), and a_4(2040). Indeed, early attempts at bi-gravity and charged massive spin-2 fields were aimed at building a consistent description of these mesons [51], and the work on constructing linearized charged spin-2 fields also existed to help describe mesons [29].
Additionally, a charged spin-2 field may be useful in condensed matter applications of the AdS/CFT correspondence, such as holographic superconductivity. For a review of holographic superconductors, see for example [52]. In studying superconductors, a standard set-up is to consider a black hole with scalar hair. The (massive) scalar field plays the role of spontaneously breaking a U(1) symmetry, giving rise to superconductivity [53]. However, the scalar is only capable of describing S-wave superconductivity. In order to describe a D-wave superconductor, one needs black hole hair with charged helicity-2 dofs. A massive graviton can also be useful to break translation invariance in the bulk space, which can be useful for studying the DC conductivity. See also [54][55][56][57][58] for work on applying massive gravity in a holographic context. Especially in light of AdS/CFT applications, another question that we can ask is whether a charged spin-2 field can be consistently coupled to gravity. Given recent progress in massive gravity, one might hope that the key to describing gravitational interactions of a single, self-interacting, massive, charged spin-2 field will lie in the recently discovered non-linear ghost-free mass structure [20]. It therefore seems timely to inspect if the recently discovered self-interacting massive spin-2 fields can help us describe a unitary Lagrangian for any sufficiently long-lived meson. 2 Massive gravity can be written as where the interaction potential U is built out of a dynamical metric (with associated vielbein e a ) and a fixed reference metric (with associated vielbein f a ). The graviton potential that is free of the Boulware-Deser ghost at the non-linear is given by the set of interactions This form of the mass term was recently shown in [59] to emerge from an extra dimensional picture using Dimensional Deconstruction. Briefly, the Einstein-Hilbert term in 5 dimensions can be written in a particular gauge as where y is a coordinate along the compact direction, and where we have temporarily neglected the zero modes corresponding to the radion and gravi-photon. We then discretize the compact direction, replacing the continuous coordinate y by a discrete "site index." In particular, by discretizing in the sense ∂ y e a µ → m(e a 2 − e a 1 ), we recover a particular combination of the ghost-free interactions in Equation (1.3). By considering a more general discretization procedure, we may generate all of the interactions. This procedure was also generalized to multi-gravity in [60].
Deconstruction was also shown to be equivalent to truncating the Kaluza-Klein tower, essentially by interpreting ∂_y ∼ inm for integer n. This suggests a method for generating a theory of a charged spin-2 field. In the Kaluza-Klein representation, the vielbeins are complex, ẽ^a_{n,μ}. In this representation the continuum theory has a global U(1) symmetry under which

ẽ^a_{n,μ} → ẽ^a_{n,μ} e^{inθ}.   (1.5)

We may make this symmetry local by the minimal coupling replacement

∂_μ ẽ^a_{n,ν} → D_μ ẽ^a_{n,ν} = (∂_μ - in A_μ) ẽ^a_{n,ν}.   (1.6)

In fact, the field A_μ appears naturally in the Kaluza-Klein context as a zero mode, where it is sometimes known as the 'gravi-photon.' The U(1) symmetry is associated with the group of continuous translations in the compact direction. This will be broken to a discrete subgroup upon discretizing. In Fourier space, this manifests itself through the presence of operators that violate charge conservation. However, there is a natural way to recover the U(1) symmetry by simply projecting out those charge-violating operators. This will generate a candidate non-linear theory for a charged spin-2 field. However, this projection modifies the kinetic structure. In light of recent results [9], we might expect that this will inevitably introduce ghosts. In fact we will give several arguments that the Boulware-Deser mode is present in any theory with a linearly realized U(1). Indeed, the new kinetic interactions we will derive by this method are closely related to the interactions considered in [9].
Summary: Our main results are
• The Federbush theory of a single massive spin-2 field interacting with a U(1) gauge field, propagating on Minkowski space, is ghost-free. While it has been known that the Federbush theory is ghost-free around constant electric field backgrounds (see, for example, [29,43]), here we present a proof that it is in fact fully ghost-free, even with a dynamical U(1) field. As a result, it is possible for the Federbush theory to have strongly coupled dynamics at the scale q^{-1/3} m without violating unitarity.
• There is a unique set of gravitational interactions for the Federbush theory in three dimensions that can be written with differential forms, and which reduces to Federbush around Minkowski space.
• This unique non-linear extension to Federbush propagates a ghostly mode on a curved background.
• As a by-product of our analysis, we develop some novel techniques to perform a constraint analysis based on [61] in the Einstein-Cartan formalism to check for the absence of ghosts. In Appendix B, we provide an alternative proof for the ghost-freedom of bi-gravity in three dimensions (also known as Zwei-Dreibein gravity [62]) using our techniques.
• We finally give an algebraic argument preventing the Einstein-Hilbert term from being compatible with a U(1) symmetry.
Outline: The rest of this work is organized as follows. In section 2 we review what is known about the linear theory of a charged, massive spin-2 field and show that the Federbush action with a fully dynamical U(1) gauge field is actually ghost-free. In section 3, we apply the method of Dimensional Deconstruction to generate a theory of a charged spin-2 field. In section 4, we show that actions attempting to generalize Federbush to include gravitational couplings generically introduce ghosts, by performing a constraint analysis in the Stückelberg formalism. In section 5, we analyze the group structure necessary for a non-linearly completed action and demonstrate several of the initial problems with the theory. Finally, the appendices contain supplementary details and alternative arguments.
2 Ghost-free charged spin-2 fields on Minkowski

Before attempting to construct an interacting theory of a massive charged spin-2 field, we first consider the flat space limit as a starting point. Federbush wrote down a theory of a single, massive, charged spin-2 field in [29] that was argued to propagate five dofs in the background of a constant electromagnetic field. Charged spin-2 fields have also been studied by Porrati in [34]. In this section we review what is known about the flat space case, following the discussion in [34]. We will also find that the Federbush theory (also derived by Porrati) is completely ghost-free. To our knowledge this goes beyond what has been done in the literature, where the stability analysis has been restricted to constant electromagnetic backgrounds.
We start with the Fierz-Pauli action for a complex spin-2 field H_{μν},

L_FP = -H^{*μν} (E H)_{μν} - m² ( H^*_{μν} H^{μν} - [H^*][H] ),   (2.1)

where E is the Lichnerowicz operator, normalized so that -h^{μν}(E h)_{μν} reproduces the linearized Einstein-Hilbert Lagrangian for a real field h_{μν}. Square brackets refer to the trace with respect to the flat space-time metric, [H] = η^{μν} H_{μν}. Since H_{μν} is complex, this theory propagates 2 × 5 = 10 real dofs. This theory has a global U(1) symmetry under which H → H e^{iθ}. We can make this symmetry local by coupling H_{μν} to a U(1) gauge field A_μ, adding a kinetic term for A_μ, and making the replacement ∂_μ → D_μ = ∂_μ - iqA_μ, where q is the charge. When applied to Fierz-Pauli, this procedure is ambiguous, because the covariant derivatives do not commute: when acting on a field φ with charge q, [D_μ, D_ν]φ = -iqF_{μν}φ. Since there are different representations of the Lichnerowicz operator that differ by integration by parts and commuting partial derivatives, there are different "minimal" covariantizations. In the most general minimally coupled action, the ordering ambiguity is represented by a parameter g, which we may identify with the gyromagnetic ratio [33]. Already we may comment that, from the point of view of an effective field theory, as long as this additional operator is not forbidden by some symmetry we expect it to arise, at least from quantum corrections. Following [34], we can study this theory using a Stückelberg analysis. We may introduce complex Stückelberg fields B_μ and π, after which the action is invariant under charged linearized diffeomorphisms (diffs); here (a, b) ≡ ab + ba denotes symmetrization. We now study the interactions in this theory, which arise entirely through the coupling between the U(1) gauge field and the spin-2 field; that is, there are no self-interactions of the spin-2 dofs. It will be useful to consider a decoupling limit in which q → 0 and m → 0 with the scale Λ_{q,n} = q^{-1/n} m held fixed; the parameter n is fixed by the interaction that arises at the lowest scale in this limit. Interestingly, for q = m/M_Pl, Λ_{q,n} = (m^{n-1} M_Pl)^{1/n}, which we may identify as the usual scale Λ_n arising in the effective field theory approach to massive gravity [63]. We may de-mix the kinetic term for the helicity-0 mode by performing a conformal field redefinition, shifting h_{μν} by a term proportional to π η_{μν}, after which the kinetic terms for h, B, π, and A are diagonal. From the scalings in the decoupling limit, the kinetic terms for the Stückelberg fields B_μ and π do not scale with q. Thus for a generic choice of g we can identify the scale of the lowest-order interactions as Λ_{q,4} = q^{-1/4} m. These interactions are higher derivative and signal the presence of ghosts arising at the scale Λ_{q,4}. Since this interaction is genuinely ghostly, we cannot imagine any strong-coupling self-unitarization mechanism resolving it. Thus we may definitively say that the cutoff of this theory is at highest Λ_c ∼ Λ_{q,4}/(2g - 1)^{1/4}.
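The identification of Λ_{q,n} with the massive-gravity scale Λ_n is a one-line algebraic identity, which can be verified symbolically; the following sketch checks it with SymPy (the symbol names are illustrative).

```python
import sympy as sp

q, m, M_pl, n = sp.symbols('q m M_pl n', positive=True)

Lambda_qn = q**(-1/n) * m                  # strong-coupling scale Lambda_{q,n}
Lambda_n = (m**(n - 1) * M_pl)**(1/n)      # the usual massive-gravity scale

# Setting the charge to its gravitational value q = m / M_pl:
ratio = Lambda_qn.subs(q, m / M_pl) / Lambda_n
print(sp.simplify(ratio))                  # 1 for general n
print(sp.simplify(ratio.subs(n, 3)))       # 1, e.g. Lambda_3 = (m^2 M_pl)^(1/3)
```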
Federbush is ghost-free
However, as shown in [34], we may remove all interactions arising at the scale Λ q,4 by the special choice of gyromagnetic ratio g = 1/2 (this corresponds to the theory originally proposed by Federbush [29]). In our conventions, it is clear that this choice corresponds to minimal coupling prescription To identify the leading order interactions, we do an expansion in powers of q, keeping in mind that D ∼ ∂ − qA and that [D, D] ∼ qF . The leading order interactions come from the Lichnerowicz operator, which after introducing the Stückelberg fields takes the schematic form (2.12) Let us first consider the interactions at order q. Because of the double epsilon structure the only non-vanishing term at this order uses the commutator to make εεDDDB ∼ qεεF ∂B. The interaction arises at the scale Λ q,3 . It is given explicitly by Because of the double epsilon structure, the equations of motion for L Λ q,3 are manifestly second order. As a result, the Federbush theory is ghost-free at the scale Λ q, 3 . This means that there is no obstacle to treating the Federbush theory as a strongly coupled theory till energy scales Λ c where we could potentially have Λ c ≫ Λ q,3 , so long as no new dofs enter below Λ c .
In fact, the Federbush theory is ghost-free to all orders in q. This follows directly from the double epsilon structure, which automatically removes any higher derivatives from the equations of motion. The ghost-freedom has also been checked explicitly by computing the equations of motion for the Stückelberg-ed action and showing that all of the equations of motion are second order in time derivatives, using the techniques described in [9].
Velo-Zwanziger problem in the Stückelberg language
Even though it is ghost-free, the Galileon-type structure of the interactions might lead us to suspect that the Federbush theory admits superluminal propagation around certain backgrounds. Indeed this is simply a manifestation of the well-known Velo-Zwanziger problem, expressed in modern language.
Let us consider an external electromagnetic field F̄_{μν} and the quadratic action for the perturbations around it. In this language it is clear that we can find backgrounds with superluminal group velocity. For example, perturbing around an electromagnetic background F̄_{μν}, the operator Λ_{q,3}^{−3} ∂F̄ B* ∂²π will modify the kinetic structure and can lead to superluminalities. This problem can occur even for arbitrarily small values of the electromagnetic field, since a sound speed c_s² = 1 + ε for small ε is still superluminal. In the literature the Velo-Zwanziger problem has traditionally been studied for backgrounds with a constant electromagnetic field. For such backgrounds, there is no contribution to the kinetic term at the scale Λ_{q,3}, as is evident from the expression above. Instead, for backgrounds with constant electromagnetic fields, the leading correction to the kinetic term arises at the higher scale Λ_{q,2}. Thus in standard presentations of the Velo-Zwanziger problem, which consider constant electromagnetic backgrounds, the superluminalities come from an operator arising at the higher scale Λ_{q,2}.
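The statement that an arbitrarily small kinetic-structure modification suffices can be made concrete with a toy dispersion relation ω² = (1 + ε)k² (our caricature of the modified kinetic term, not the paper's full quadratic action):

```python
import sympy as sp

k, eps = sp.symbols('k epsilon', positive=True)
omega = sp.sqrt(1 + eps) * k      # toy dispersion with shifted sound speed

v_phase = sp.simplify(omega / k)
v_group = sp.simplify(sp.diff(omega, k))

# Both equal sqrt(1 + epsilon) > 1 for any eps > 0, however small:
print(v_phase, v_group)
```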
We may be tempted, as Velo and Zwanziger were, to attribute this apparent superluminality to a failure of causality. However, the group and phase velocities can both be superluminal at low energies without conflicting with causality, since the speed of information is set by the front velocity (see the review [28] for a discussion and references on this point). The front velocity lies in the strong-coupling region, for which this tree-level analysis is not appropriate. More precisely, the test of causality is whether the commutator [π(x), π(y)] vanishes outside the light cone. This vanishing is tied to the analyticity of its Fourier transform, which is sensitive to the high-energy behavior of the correlation function. Group and phase velocities that exceed the speed of light in vacuum have been observed in nature (see for example [35]). These measurements also explicitly confirm that the front velocity is luminal, consistent with causality. In addition, it is known that the propagation of photons in a curved space-time can exhibit superluminalities in its low-energy effective theory which are absent in the UV completion [36][37][38][39].
Ghost-free extensions to Federbush
We can construct other charged spin-2 theories that are ghost-free at the scale Λ_{q,3} by covariantizing the interactions proposed by Hinterbichler in [64]. For example, in 4+1 dimensions there is an operator L^{5d}_{kin} built with capital indices A, B ∈ {0, 1, 2, 3, 4}, with the overall scale chosen so that a consistent decoupling limit exists at Λ_{q,3}. To identify the leading interaction, we may use the same argument as above, since all we have done is replace one η with an h; L^{5d}_{kin} gives rise to an interaction at Λ_{q,3}. As before, the double epsilon structure prevents higher-order derivatives from appearing in the equations of motion. This can clearly be extended to the full set of interactions in any dimension, of the form εε H* DD H (H*H)^n η^{d−6−2n} proposed in [64]. Thus these represent consistent self-interactions of a spin-2 field on Minkowski space. However, in four dimensions there are no such terms invariant under a U(1) symmetry, so we will not consider this possibility further in this work.
First-order form
It is useful to recast the Federbush action in first-order form. By first-order form, we mean that the action is written so that all fields appear with at most one derivative. This may be viewed as an intermediate step in passing to the Hamiltonian. It will be convenient to work in first-order form when attempting to construct gravitational interactions.
To go to first-order form, we introduce a new field θ^{ab}_μ (essentially the linearized spin connection), which plays the role of the momentum conjugate to H. We treat θ on an equal footing with H. The first-order form of Fierz-Pauli is given by an action involving the one-form 1^a with components 1^a_μ = δ^a_μ. Upon integrating out the auxiliary field θ, we find a solution whose derivation uses the linearized version of the symmetric vielbein condition, which, as is well known, is needed to show the equivalence of the vielbein and metric formulations of massive gravity. Putting this solution back into the action (which is allowed since θ is not a dynamical field), we recover the usual form of the Fierz-Pauli action.
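The mechanics of integrating out an auxiliary momentum-like field can be illustrated with a one-dimensional caricature (our toy, not the paper's action): a first-order Lagrangian θq̇ − θ²/2 − V(q) collapses to the familiar second-order form once θ is eliminated by its own algebraic equation of motion.

```python
import sympy as sp

qvar, qdot, theta = sp.symbols('q qdot theta')
V = sp.Function('V')

# First-order form: theta plays the role of the conjugate momentum
L_first = theta * qdot - theta**2 / 2 - V(qvar)

# theta is auxiliary: solve its (algebraic) equation of motion
theta_sol = sp.solve(sp.diff(L_first, theta), theta)[0]   # theta = qdot

# Substituting back recovers the second-order Lagrangian
L_second = sp.expand(L_first.subs(theta, theta_sol))
print(theta_sol, "->", L_second)    # qdot -> qdot**2/2 - V(q)
```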
We can obtain a first-order representation of the Federbush action by simply applying the minimal coupling procedure d → D to the first-order Fierz-Pauli action. A short calculation shows that integrating out θ reproduces the Federbush action. It makes sense that covariantizing the theory in first-order form preserves the dofs: in first-order form the dofs and Lagrange multipliers are manifest, and adding a gauge potential in this form does not change the constraint structure.
Note that the spin connection is modified already at the linear level due to the presence of the gauge field A_μ. This behavior will persist non-linearly, which is why it will be convenient to work in first-order form at the non-linear level.
Gravitational interactions?
At this stage, from the point of view of massive gravity, the natural step is to try to construct a non-linear completion by adding self-interactions for the graviton of the form H^n and ∂²H^m. The reason is that in the case of massive gravity, one expects to couple the massive spin-2 directly to the stress-energy tensor T_{μν} of matter fields. By the standard arguments (for example [7]), this forces the spin-2 field to have non-linear interactions that realize a diffeomorphism symmetry.
However, as emphasized in the introduction, we do not have in mind that the charged spin-2 field carries the gravitational force. In other words, we will not couple the charged spin-2 field to matter directly. As a result, we do not necessarily need to add non-linear self-interactions to the massive graviton, beyond those considered in Section 2.3.
Nevertheless, it is interesting and important to understand the interactions of the massive charged spin-2 field with the true carrier of the gravitational force. In other words, we can view the charged spin-2 field itself as a matter field, and attempt to couple it to an electrically neutral, massless graviton.
In order to see if such an effective theory exists, we can get inspiration from multigravity and extra dimensional theories. A charged spin-2 field is built out of two real spin-2 fields. When we include the coupling to gravity, we will get a theory of multiple interacting spin-2 fields. In fact, in section 5 we will show that it is impossible to view a charged spin-2 field as a gravitational theory with an Einstein-Hilbert kinetic term, because the charged spin-2 cannot realize a gravitational-type symmetry.
It is worth spending a moment defining what we would want for our gravitationally extended theory. We will require: 1. The gravitational extension should have a U(1) symmetry. We will only consider non-linear extensions where this U(1) symmetry is linearly realized.
2. The theory should be fully ghost free at all orders.
3. We would like a theory with a single charged spin-2 dof, coupled to gravity. As a result, the action should be built only out of a single complex spin-2 field H µν , and a neutral metric g µν . 4. The non-linear theory should reduce to the Federbush theory in the appropriate limit. Implicit in this requirement is that no dofs should become infinitely strongly coupled in this limit.
Given these expected properties, we will attempt to construct a non-linear theory using various techniques. Ultimately we will discover that the ghost is re-introduced at some scale. When considering non-linear completions, we will work in 2+1 dimensions. We emphasize that a necessary condition for the theory to exist in higher dimensions is that it must work in 2+1 dimensions.
This can be seen from multiple perspectives. If a consistent theory exists in 3+1 dimensions, there must be a consistent theory in 2+1 dimensions, because it is always possible to do a Kaluza-Klein compactification to reduce the 3+1 theory to the 2+1 theory. Furthermore, in d spatial dimensions it is always possible to consider physical situations with translation invariance in d − 2 spatial directions, so that the system effectively becomes 2+1 dimensional. As a more general statement, there is no physical reason to expect that adding more complication in extra spatial dimensions can resolve a difficulty that is already present in 2+1 dimensions. One possible objection to this reasoning is that in higher dimensions we can add operators that would be topological in lower dimensions (such as the Lovelock terms), so there is more freedom in higher dimensions. In fact, as we will see, the main obstruction to constructing ghost-free theories of a charged spin-2 field is that the Einstein-Hilbert term itself is incompatible with the U(1) symmetry. As a result we are forced to modify the kinetic structure, and this forces us to re-introduce the Boulware-Deser ghost. The higher-order Lovelock terms will share this property. A group-theoretic version of this argument, which is independent of the spatial dimension, is given in section 5.
The main reason for working in 2+1 dimensions is that the theory in 2+1 dimensions is much easier to work with technically. More detail on the advantages and formalism of 2+1 gravity (as well as conventions) are given in Appendix A. For other work studying the constraint analysis of massive-gravity type theories in three dimensions, see for example [65][66][67].
Charged Deconstruction
As described in the previous section, we will be working in 2+1 dimensions for the remainder of the paper. Thus, starting from this section, we will use Greek indices μ, ν, · · · to represent space-time indices in 2+1 dimensions. Capital Roman letters M, N, · · · will be used for space-time indices in 3+1 dimensions. In this section, we will also use a hat to distinguish the four-dimensional exterior derivative d̂ from the three-dimensional exterior derivative d.
We will apply the formalism of deconstruction to General Relativity in 3+1 dimensions to generate a candidate theory for a charged spin-2 field in 2+1 dimensions. First we will review the relevant Kaluza-Klein decomposition to clarify the gauge choices which are important for discretization. Then by discretizing the action we will generate a candidate theory for a massive graviton charged under a U(1) group in 2+1 dimensions.
In fact, the naïve discretization process will break the U(1) symmetry, because the continuous translation symmetry is broken to a discrete subgroup. However, we will find a natural way to restore the U(1) symmetry in the resulting candidate theory. In the next sections, we will consider the consistency of this candidate non-linear extension.
Kaluza-Klein with a vector zero mode
As discussed in [59], it is crucial to apply the deconstruction procedure using the vielbein language. The vielbein E^A_M is related to the metric by g_{MN} = E^A_M E^B_N η_{AB}. For our purposes it will be useful to work with the Einstein-Cartan formalism, in which the spin connection Ω^{AB}_M is treated as an independent variable that is determined by its own equation of motion. This is analogous to the Palatini formalism in the metric language.
In terms of E and Ω, the four-dimensional action for pure gravity is the Einstein-Cartan action, schematically S_4d ∝ ∫ ε_{ABCD} R^{AB} ∧ E^C ∧ E^D, where the Riemann curvature two-form is given by R^{AB} = dΩ^{AB} + Ω^A{}_C ∧ Ω^{CB}. S_4d is invariant under diffeomorphisms, under which E and Ω both transform as one-forms. It also enjoys a local Lorentz symmetry under which the fields transform in the usual way. By varying the action with respect to the spin connection one obtains the torsion-free condition in four dimensions, dE^A + Ω^A{}_B ∧ E^B = 0, and by varying with respect to the vielbein one obtains the vacuum Einstein equations. We perform a 3+1 split along the y direction by parameterizing the vielbein in terms of a three-dimensional vielbein e^a_μ, the Kaluza-Klein vector A_μ, and the fields N and N^a, and the spin connection in terms of ω^{ab}, β^{ab}, K^a_μ ≡ Ω^{a4}_μ and λ^a ≡ Ω^{a4}_y. In terms of these variables the action may be written in three-dimensional form, where ε_{abc} ≡ ε_{abc4}, D = d + ω is the three-dimensional covariant exterior derivative, and [a, b] = ab − ba.
Our strategy will be to integrate out the components of the spin connection associated with the fourth direction, namely β ab , K a µ , λ a . The resulting action will be in a form appropriate for a three-dimensional observer, with a three dimensional spin connection ω ab and with fields transforming in the three-dimensional Poincaré group.
First however we will fix some of the gauge freedom. This is an important step, because as discussed in [59], different gauges in the continuum theory can produce different theories upon discretization.
• We fix 3 of the 6 Lorentz symmetries by setting N^a = 0 (3.10). We can do this by Lorentz transforming N^a → Λ^{ab} N_b + Λ^{a5} and taking Λ^{a5} = −Λ^{ab} N_b.
• We also partially fix four of the diff gauge symmetries by removing the y dependence of A_μ and N. We cannot use the gauge freedom to set A_μ = 0 and N = 1 completely; we may only remove the y dependence. These fields represent the massless vector and scalar zero modes.
• In fact we will neglect the scalar mode (the radion). We are using Kaluza-Klein to motivate an action for a charged spin-2 field, and for these purposes the radion is not relevant. Thus we will set N = 1 here.
With these gauge conditions in mind, we can write down the torsion-free conditions (3.5) in which at least one of the local Lorentz or space-time indices lies along the extra dimension; among them is λ_a e^a_μ = 0 (3.12). These equations are easily solved. Equation (3.12) sets λ^a = 0, since e^a_μ is invertible. We may also take advantage of our remaining three local Lorentz gauge freedoms to impose the gauge choice (3.13), after which we may solve for K^a_μ in terms of ∂_y e^a_μ, Equation (3.14). The remaining equation of motion becomes e_{a[μ} ∂_y e^a_{ν]} = 0 (3.15). This last condition is the symmetric vielbein condition; here we see that it follows as an algebraic identity as a result of our gauge choice (3.13).
Finally, plugging the solutions for the auxiliary fields β, λ, K into the action (3.9), we obtain a three-dimensional action for e^a, ω^{ab} and A_μ. In deriving this expression we have used the symmetric vielbein condition (3.15).
We have also set to zero an interaction involving De and the Kaluza-Klein vector A. We emphasize that we are not assuming the torsion-free condition De = 0; we have not yet integrated out ω. Based on the discussion in Section 2, we expect the spin connection to be modified by the presence of the electromagnetic field, which implies that we do not want to assume the torsion-free condition. However, De ∼ A.
As a result, after integrating out the spin connection, the above interaction will be proportional to A ∧ A = 0. The presence of the Kaluza-Klein vector mode gives us a physically motivated starting point for considering theories of massive charged spin-2 fields. We will apply Dimensional Deconstruction to the four-dimensional action and generate a candidate action for a massive charged spin-2 field.
Using deconstruction to generate a charged spin-2 theory
Now we imagine a discrete set of N special places along the fourth direction, with y coordinates y_I, I = 1, · · · , N. We discretize the fourth compact dimension, keeping only the fields located at y = y_I. Following [59], we discretize the derivative ∂_y in the sense ∂_y f(y_I) → m Σ_J α_IJ f(y_J), where the mass scale m is set by the inverse lattice spacing and the α_IJ are in principle arbitrary coefficients that form some representation of a discretized derivative. Two natural choices considered in [59] are a "local" discretization α_IJ = δ_{I,J+1} − δ_{I,J} and a "truncated Kaluza-Klein" discretization α_IJ = [sin(2π(I − J)/N)]^{−1}. We also replace the integral over y with a sum over sites. Applying this procedure to the action and canonically normalizing the photon kinetic term yields a three-dimensional multi-vielbein action, where M_3 ≡ M_Pl²/m. We consider the charge q to be an arbitrary parameter; the value of q that arises from Deconstruction is given in (3.22). The determinant |e| and the inverse vielbeins that appear in the photon kinetic term are somewhat ambiguous. In light of recent work on matter couplings [68][69][70], the safest choice would be to have |e| represent the determinant of a vielbein on just one site. In fact, in this work we will be mostly concerned with the self-interactions of the spin-2 field, and the determinant factor will not matter for the rest of our analysis.
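A small sketch of the two discretization choices quoted above (ours; the periodic identification for the local derivative is our assumption), checking that both annihilate the constant zero mode, as a derivative should:

```python
import numpy as np

N = 5

# "Local" discretization alpha_IJ = delta_{I,J+1} - delta_{I,J},
# closed up periodically (our assumption for the compact dimension)
local = np.zeros((N, N))
for I in range(N):
    local[I, (I + 1) % N] = 1.0
    local[I, I] = -1.0

# "Truncated Kaluza-Klein" discretization alpha_IJ = 1/sin(2*pi*(I-J)/N)
kk = np.zeros((N, N))
for I in range(N):
    for J in range(N):
        if I != J:
            kk[I, J] = 1.0 / np.sin(2 * np.pi * (I - J) / N)

zero_mode = np.ones(N)
print(np.allclose(local @ zero_mode, 0), np.allclose(kk @ zero_mode, 0))  # True True
```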
The Fourier transformed action
As discussed above, the discretized theory does not have a U(1) symmetry. To see this it is easiest to set q = 0 and to ignore the vector zero mode; we will then show that there is no global U(1) symmetry present in this limit. We may work in a representation where the (lack of) U(1) symmetry is manifest by using a discrete Fourier transform, Φ̃_n = N^{−1/2} Σ_I e^{2πinI/N} Φ_I, where Φ^a_I = {e^a_I, ω^{ab}_I}. Assuming N is odd for simplicity, the inverse Fourier transform is Φ_I = N^{−1/2} Σ_n e^{−2πinI/N} Φ̃_n, with n running from −(N − 1)/2 to (N − 1)/2. Note that while the Φ_I fields are real, the Fourier-transformed fields Φ̃_n are complex. However, since the Φ̃_n fields obey the condition Φ̃*_n = Φ̃_{−n}, there are the same number of dofs in each representation (as there must be, since the discrete Fourier transform is an invertible field redefinition that cannot change the physics).
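A quick numerical check (our illustration) of the stated reality condition on the Fourier modes of real site fields:

```python
import numpy as np

N = 3
Phi = np.random.randn(N)                     # real site fields Phi_I
sites = np.arange(N)

Phi_tilde = np.array([np.sum(Phi * np.exp(2j * np.pi * n * sites / N)) / np.sqrt(N)
                      for n in range(N)])

# On the discrete circle, mode -n is the same as mode N-n
for n in range(N):
    assert np.allclose(np.conj(Phi_tilde[n]), Phi_tilde[(-n) % N])
print("Phi~_n* = Phi~_{-n} holds for real site fields")
```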
Interestingly, the ω̃_n are not connections for n ≠ 0. Instead, these ω̃_n transform as tensors under diagonal local Lorentz transformations. To see this, note for example that in the case N = 3, ω̃_1 is a linear combination of differences of the site connections; since the difference of two connections transforms as a tensor, ω̃_1 transforms as a tensor.
In the form of Equation (3.29), it is clear that the theory has a global U(1) symmetry, because each interaction comes with a charge-conserving delta function. This is nothing more than the usual statement that translation invariance in real space corresponds to momentum conservation in momentum space. For our purposes, the physical significance of this observation is that if the global U(1) symmetry were present, then by introducing a gauge field A_μ and making the U(1) symmetry local we could discover a theory of massive spin-2 particles charged under a U(1) gauge symmetry. In the continuum four-dimensional theory, this minimally coupled field appears and is the massless KK vector mode. In the discretized theory, at finite N, it could in principle be any Abelian gauge field. The quadratic terms take the schematic form δ_{n1+n2,0} (dω̃^a_{n1} ∧ ẽ^a_{n2}) (3.31). When we truncate the sum at finite N, we must allow for operators that violate charge conservation by an integer multiple of N. Thus the process of discretization breaks the U(1) symmetry present in the continuum theory, corresponding to the statement that discretization has broken translation invariance in the compact direction.
Note that this subtlety does not affect the quadratic terms, because n_1 + n_2 = kN implies k = 0 for |n| ≤ (N − 1)/2. However, charge violation is allowed for the cubic terms, because n_1 + n_2 + n_3 = kN admits k = −1, 0, 1. Thus the obstruction to the U(1) symmetry arises only at the non-linear level, and is invisible in the linear theory.
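A brute-force enumeration (ours) of the cubic mode sums for N = 3 makes the two charge-violating structures explicit:

```python
from itertools import product

N = 3
modes = range(-(N - 1) // 2, (N - 1) // 2 + 1)   # -1, 0, 1

conserving = [c for c in product(modes, repeat=3) if sum(c) == 0]
violating = [c for c in product(modes, repeat=3)
             if sum(c) % N == 0 and sum(c) != 0]

print(len(conserving), "charge-conserving cubic terms, e.g.", conserving[0])
print("charge-violating terms (k = -1, +1):", violating)
# -> [(-1, -1, -1), (1, 1, 1)], i.e. n1+n2+n3 = -N and +N
```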
This can be written out explicitly for N = 3. Nevertheless, we may restore the U(1): we simply introduce a projection operator that subtracts off the charge-violating terms, keeping only the terms with k = 0. As we have seen, the quadratic terms are unaffected by this projection. The form structure implies that only cubic interactions can arise in three dimensions. Thus let us see what the impact of applying this projection operator is for a generic cubic interaction. We will specialize to the case N = 3 for simplicity.
Mass term
The mass term is actually simpler to deal with. If we demand that the mass term has U(1) invariance, we simply limit the permissible choices of β IJK . Since the resulting mass term is still of the form of a ghost-free theory, there is no obstruction to choosing a U(1) invariant mass term.
The U(1)-invariant mass term has two free parameters. Written in site language, this amounts to a two-parameter family for the β_IJK coefficients.
Kinetic term
The kinetic term includes a cubic interaction among the Fourier modes. If we focus on the terms with k = ±1 and perform the inverse Fourier transform (3.24), we obtain the U(1)-invariant cubic interaction in the site language. In other words, the cost of throwing out the terms that violate charge conservation in Fourier space is that we generate nonlocal (multi-site) terms upon taking the inverse Fourier transform. Thus we can write the full U(1)-invariant kinetic term as S_GR + S^{new}_{kin}, where S_GR is the sum of the usual Einstein-Hilbert terms. In this language, it is clear that maintaining the U(1) symmetry has forced us to modify the Einstein-Hilbert structure of the kinetic term. As we will see in the next section, this is ultimately fatal. In fact, the interactions we have generated are closely related to the interactions found by applying Dimensional Deconstruction to the Gauss-Bonnet term in 5 dimensions, as considered in [9]. This can be made more explicit by performing a field redefinition of the Einstein-Hilbert term: the interactions R[ω_1] ∧ e² that appear are of the same form as the interactions in [9]. However, these interactions differ in that they are here considered in first-order form.
It is worth emphasizing that this illustrates again why the situation will not get better in higher space-time dimensions. By going to higher dimensions, we can potentially add more Lovelock terms. However, the issue is that the Lovelock terms themselves necessarily break the U(1) symmetry, and so must be modified. It is the modification of the Lovelock term that is ultimately responsible for re-introducing the Boulware-Deser mode, as we will show in the next section. We have illustrated this explicitly in 2+1 dimensions for the Einstein-Hilbert combination.
Deconstruction-Motivated Charged Spin-2 Theory
We have now reached the main result of this section: a natural candidate theory with a global U(1) symmetry, Equation (3.45). A few remarks are in order: • The next step, in principle, is to couple a U(1) gauge field through the minimal coupling procedure d → d − ieA. However, first we should check whether the candidate theory with a global symmetry is ghost-free.
• After introducing the gauge field through minimal coupling, the theory given in equation (3.45) reduces to Federbush in the limit M 3 → ∞. This is most easily seen by comparing the theory with the first-order form of Federbush given in (2.22).
• Note that S^{new}_{kin} has no dependence on the graviton mass m; the only scale present is M_3. This scale is completely fixed by the U(1) invariance, since S^{new}_{kin} is not U(1) invariant by itself; only the combination S_GR + S^{new}_{kin} is U(1) invariant.
• S^{new}_{kin} has diagonal diff invariance, guaranteed by the form structure, as well as diagonal local Lorentz invariance, which can be seen by expanding out the γ_IJK explicitly. • We also see an advantage of working in the first-order formalism: the equation of motion for the spin connections has been modified in a non-trivial way, and it is much easier to keep the spin connections as independent variables than to integrate them out explicitly.
Degrees of freedom of generic non-linear completions
Rather than moving directly to establishing the number of dofs of the action inspired by Deconstruction, we will now re-consider the problem of constructing a non-linear completion for Federbush from a more general perspective. The lesson from Deconstruction is that there is no way to associate a linearly realized U(1) symmetry directly with the Einstein-Hilbert kinetic term. As a result, the first step is to try to find an appropriate ghost-free U(1)-invariant kinetic term for the spin-2 field. As in the previous section, it is simpler to start by constructing a theory with no interaction with the U(1) gauge field by taking the limit q → 0. In this limit, the non-linear completion will have a global U(1) symmetry. If the non-linear completion is ghost-free for finite q, then the theory should also be ghost-free in this limit.
We will write down the full set of terms in 2+1 dimensions consistent with the desired symmetries (a linearly-realized U(1) symmetry). We will find a unique ansatz, which remarkably is equivalent to the one discovered using Deconstruction.
We will in fact show that there is no ghost-free, non-linear completion in three dimensions with a linearly realized global U(1) symmetry. As a result, the corresponding theory with a local U(1) and q ≠ 0 cannot exist. Thus there is no non-linear, ghost-free gravitational completion of Federbush.
U(1) invariant actions
More precisely, let us start trying to build the most general non-linear theory, following the guidelines in section 2.5. The dofs should be limited to a single massive, charged spin-2 field H a ±,µ , and a dynamical vielbein e a µ that is neutral under the U(1) symmetry representing a massless graviton.
We may always choose to work with a representation of the action where only first derivatives appear. We will choose to work with this form, to simplify the appearance of the non-linear interactions. In first order form, we also need to introduce auxiliary fields Θ ±,µν that carry information about the charged spin-2 fields.
The theory will be built out of the following fields: • e^a_μ, ω^{ab}: the gravitational background, which transform as U(1) scalars; • H^a_{±,μ}, Θ^a_{±,μ}: the charged spin-2 field and its conjugate, which carry the U(1) charge.
In terms of the language of the previous section, we may think of Θ^a as the dual of the discrete Fourier transform of the spin connection, Θ^a_+ = ε^{abc} ω̃_{bc,1}. However, here we are simply thinking of Θ^a_{±,μ} as a field that will play the role of the momentum conjugate to H^a_{±,μ}, without any a priori geometric interpretation (the fact that this can be done in a Lorentz-invariant way is what makes three dimensions special). Both H and Θ transform as Lorentz and diff tensors.
U(1) invariance is manifest in this representation. To ensure diagonal Lorentz invariance, the spin connection ω should appear only through the curvature R[ω]^{ab} or the exterior covariant derivative D = d + ω. We will also limit our attention to actions that can be expressed with a wedge structure, without using a Hodge dual; we expect theories that are not of this form to have ghosts. As we will discuss in more detail below, in the Stückelberg language, in order to avoid Boulware-Deser ghost modes it is crucial that some combination of the Stückelberg fields be non-dynamical. However, for non-form-like interactions this will almost always make the situation worse. The reason is that if we have a non-wedge interaction in unitary gauge involving some function X^{μν}_{ab} of the other fields, then after introducing the Stückelberg fields by H → H + Dφ, it will generically lead to kinetic terms for the Stückelberg fields of the form φ̇^a φ̇^b X^{00}_{ab} (4.3).
By local Lorentz invariance, this gives ALL of the Stückelberg fields φ^a a kinetic term, and so they are all dynamical. This is already too many dofs, without even considering what happens to the Lorentz Stückelberg fields, which either are also part of the momenta conjugate to the φ^a or could in principle form their own dofs. The wedge structure will guarantee invariance under diagonal diffeomorphisms. With these restrictions, the most general action, up to total boundary terms, is parameterized by coefficients c_1, . . . , c_5, together with a mass parameter m² and a cosmological constant Λ. Note that there is only one U(1)-invariant mass term once we separate out the cosmological constant, consistent with what was found above. This action may be simplified with a field redefinition. We may factor the kinetic terms and, by performing a linear field redefinition while maintaining Θ_− = Θ*_+ and H_− = H*_+, set c_2 = c_3 = 0. This amounts to diagonalizing the kinetic term.
This linear field redefinition will of course renormalize the coefficients c_4, c_5, m², Λ; however, since we have kept these parameters general up until now, we simply absorb the effects of the transformation into our definition of those parameters. After this field redefinition, we may rescale the fields to absorb c_1 and c_4. Thus we are led to an action depending only on c_5, m² and Λ. This is the most general non-linear completion, given the assumptions outlined above.
Reproducing Federbush
In fact, we may immediately conclude that c_5 = 0, simply because a non-zero c_5 does not reproduce the Federbush action at the linear level. Perturbing around flat space, and focusing only on the charged sector h_±, θ_±, we may integrate out θ and plug the solution back into the action. We find a second-order action consisting of the usual Fierz-Pauli action S_F.P. together with a non-Fierz-Pauli term proportional to c_5.
Perhaps unsurprisingly, the non-Fierz-Pauli interaction has a ghost, because the shift h_{0i} appears with a time derivative and so becomes dynamical. This may be seen in the Stückelberg language as well. Replacing h_{±,μν} → h_{±,μν} + ∂_{(μ}B_{±,ν)}, we find a term with manifestly higher-order equations of motion for B_μ. This leads to a ghost in the free theory, so in the decoupling limit the ghost will be massless. This is unacceptable, so we conclude that c_5 = 0.
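The link between higher-order equations of motion and a ghost can be illustrated with a 0+1-dimensional caricature (ours): a Lagrangian quadratic in second derivatives of the Stückelberg field yields fourth-order dynamics, the classic Ostrogradsky signal.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
B = sp.Function('B')(t)

L = sp.diff(B, t, 2)**2          # schematic (d^2 B)^2 term

eom = euler_equations(L, B, t)[0]
print(eom)   # Eq(2*Derivative(B(t), (t, 4)), 0): fourth order in time
```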
Unique non-linear ansatz
Thus we are led to a unique ansatz for the U(1)-invariant kinetic term, Equation (4.14). Remarkably, this is the action that we arrived at from the modified Deconstruction procedure in (3.45), if we identify H^a_± with ẽ^a_{±1} and Θ^a_± with ε^{abc} ω̃_{bc,±1}. We now want to determine the number of dofs in Equation (4.14). This can be done by an ADM analysis; however, there is another way we can proceed, which we now describe.
Phase space analysis of the non-linear theory
We will do the analysis in the Stückelberg language directly in first-order form. The precise method we are using is new.
We will introduce Stückelberg fields for the diffeomorphism and Lorentz symmetries. The advantage of this method is that all additional constraints other than the usual one which removes the BD ghost are first class. Then in principle one simply needs to count the dofs in the naïve phase space. This is sufficient to count the number of dofs, and thus we will be able to diagnose the presence or absence of Boulware-Deser modes.
In typical massive gravity and bi-gravity contexts, the analysis is done in second-order form. In order to determine whether all the Stückelberg fields are dynamical (in which case the Boulware-Deser ghost is present), one needs to check whether the Hessian δ²S/δφ̇^a δφ̇^b is invertible (for example, see [22]). However, this condition is extremely hard to check in the fully non-linear theory.
Nevertheless, there is an equivalent condition that we can use to simplify the analysis. If, and only if, the theory is free of the Boulware-Deser ghost, then the Boulware-Deser ghost mode should be absent in the quadratic lagrangian, perturbing around an arbitrary, off-shell background.
Thus we may diagnose the presence of a Boulware-Deser ghost by studying the quadratic action around an arbitrary, off-shell background. We can perturb the action in unitary gauge, and then introduce the Stückelberg fields directly at the level of the perturbations. This greatly simplifies the way the Stückelberg fields enter the action. Furthermore, it is much easier to establish the dofs of a quadratic action, than an arbitrary non-linear one.
The appendices contain some useful supplementary material. In Appendix B, we apply this method to bi-gravity in three dimensions (also known as Zwei-Dreibein gravity) and confirm that bi-gravity is ghost free. In Appendix C, we perform a more brute force approach by perturbing to cubic order around Minkowski space.
Strategy
The starting point is to perturb the action around an arbitrary background. As discussed above, we do not require the background to be on-shell. We will then introduce the Stückelberg fields directly at the level of the perturbations. Since we are dealing only with the quadratic action, we do not necessarily need to pattern the Stückelberg fields off of the non-linear symmetry. It is enough to introduce enough new gauge symmetries to make all constraints first class, with corresponding phase-space variables (i.e. we must introduce derivatives along with the fields). Additionally, we would like to maintain the background gauge symmetries (the diff and local Lorentz symmetries associated with the gravitational background ē^a_μ) at the level of the perturbations. We choose the following convenient Stückelberg decomposition: v^a_± → v^a_± + D̄φ^a_±, μ^a_± → μ^a_± + D̄λ^a_± (4.16), where D̄φ^a = dφ^a + ω̄^{ab} φ_b is the background covariant derivative. Because we maintain the background symmetries, the action remains in first-order form after introducing the Stückelberg fields. Terms with two derivatives can be rewritten as terms with one derivative on the fluctuations after integration by parts; a generic term with scalar fluctuations χ^a and ψ^b and a background field Φ̄^a_μ can be brought to this form. The antisymmetry of the wedge structure allows us to use the identity D̄²ψ = R̄ψ.
Similarly, terms with three derivatives can be rewritten with one derivative using integration by parts and the Bianchi identity for the background, D̄R̄ = 0. Additionally, it is clear that the zero components h^a_0, θ^{ab}_0, v^a_{±,0}, μ^a_{±,0} will appear as Lagrange multipliers to this order because of the form structure. The next step is to establish the size of the phase space. Before performing this step, we will first perform a counting argument to establish how the Boulware-Deser ghost manifests itself in this representation.
Degrees of freedom for healthy spin-2 fields in three dimensions
After introducing the diff Stückelberg fields B^μ_I and Lorentz Stückelberg fields λ^a_I, we may identify the dynamical fields and their conjugate momenta as follows: • e^a_i, ω^{ab}_i: 6 components × 2 = 12 fields; • the charged fluctuations and their conjugates: 24 fields; • the Stückelberg fields and their conjugates: 12 fields.
We also have several first-class constraints, associated with the gauge symmetries: • 3 diagonal diffeomorphism symmetries (with Lagrange multipliers e^a_0); • 3 diagonal local Lorentz symmetries; • 2 × 3 Stückelberg diffeomorphism symmetries;
• 2 × 3 Stückelberg local Lorentz symmetries (with Lagrange multipliers Θ^a_{±,0}). Thus, in general, the dof counting is (12 + 24 + 12) dynamical variables − 2 × 18 first-class constraints = 2 × (2 + 2) + 2 × (1 + 1) phase-space dofs (4.18). A massless graviton has 0 propagating dofs in three dimensions, and a charged massive graviton has 2 × 2 = 4, so we expect to have 8 phase-space dofs. These 8 phase-space dofs are represented by the first term above. The second term represents 2 extra phase-space dofs for each of the Stückelbergized sites. This corresponds to one extra scalar dof for each of the massive modes, which is the usual Boulware-Deser ghost. In order to avoid the existence of these Boulware-Deser ghost modes, we must project out some of the phase-space dofs. This must be done by writing an action in which four independent combinations of the Stückelberg fields are non-dynamical.
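The bookkeeping can be checked with a two-line computation (pure arithmetic restating the count above):

```python
fields = 12 + 24 + 12        # gravitational, charged, and Stueckelberg sectors
first_class = 18             # constraints listed above (each removes 2 phase-space dofs)

phase_space = fields - 2 * first_class
healthy = 2 * (2 + 2)        # massless (0 dofs) + charged massive graviton (4 dofs)
bd_modes = 2 * (1 + 1)       # extra scalar per Stueckelbergized site

print(phase_space, "=", healthy, "+", bd_modes)   # 12 = 8 + 4
```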
Constant gravitational background
Having set up this formalism, it is not hard to see that there is a ghost. We simply need to work on a fixed gravitational background, with h = θ = 0. In order to avoid a ghost, it is necessary for the theory to be ghost-free on a fixed gravitational background. We will also assume the background is torsion-free, D̄ē = 0.
Perturbing our non-linear ansatz (4.14) around an arbitrary off-shell background and introducing the Stückelberg fields, we are led to a quadratic action in which N.D. refers to terms with no derivatives acting on the fluctuations or Stückelberg fields.
Focusing on the time derivatives, we note that in first-order form the Lorentz Stückelberg fields λ^a play the role of momenta conjugate to the diff Stückelberg fields φ^a. This is explored in more detail in Appendix C.
The key issue is whether or not all of the Stückelberg fields have independent conjugate momenta. We can make this more explicit by rewriting π_{−,a} in terms of the other phase-space variables. The worrying term is the first one, proportional to λ_{−,c}. The reason is that P^i_{−,a} is already a momentum conjugate to e^a_{+,i}, so if one linear combination of the π_{−,a} depends only on P_−, then there is no independent momentum for the corresponding linear combination of the φ_{−,a}. A different version of this argument is given in Appendix C.
For Minkowski space, with R̄ = 0 and ē^a_i = δ^a_i, we find that the Stückelberg field φ_{+,0} does not have an independent conjugate momentum, because π_{−,0} = m² P^i_{−,i}. This is simply a confirmation, in three dimensions, of the fact that the first-order form of Fierz-Pauli is ghost-free.
However for a generic background, all three components of π will be independent of P through the dependence on λ. Thus around curved backgrounds, the Boulware-Deser mode will appear in the phase space.
To summarize, we have shown that the unique form-like extension of the Federbush theory contains a Boulware-Deser mode in the q → 0 limit. The argument in this section covers both of the possible kinds of non-linear completion discussed in section 2.5. Since the Federbush theory propagates ten dofs, the kinetic term of the ghost vanishes around Minkowski, so the new dof, if taken seriously, would be infinitely strongly coupled around Minkowski space. However, from an effective field theory point of view, the new dof can be taken as an indication of higher-derivative terms in the Lagrangian, which indicate unitarity violation at some scale. As usual in an EFT, as long as we consider physics below that scale, the ghostly mode can be harmless. The crucial point is that the scale of the ghost is hierarchically above the strong-coupling scale of the Federbush theory.
Group-theoretic obstructions to non-linear charged spin-2 fields
Having demonstrated the general problems associated to our attempt to enforce U(1) symmetry on spin-2 fields, we now present an alternative argument. While before we focused on specific lagrangians, and found it easier to work in 2 + 1 space-time dimensions, in this section we will give a group theoretic argument that works in any space-time dimension. Thus in this section we will work in d + 1 space-time dimensions, and show that there is a group theoretic explanation for why it is impossible to construct a U(1) invariant theory while preserving ISO(1, d) symmetry associated to the spin-2 field in d + 1 space-time dimensions.
Obstructions to finding [ISO(1, d) × ISO(1, d)] ⋊ U(1)
If we suppose that the kinetic terms must be given by an Einstein-Hilbert kinetic term, then this entails two distinct copies of the ISO(1, d) algebra, one for each copy of the Einstein-Cartan action. The two vielbeins together form massive representations of the Poincaré group after the ghost-free mass terms are added. The mass terms break one copy of the local ISO(1, d) symmetries. This copy can be restored via a set of Stückelberg fields; this makes the ISO(1, d) × ISO(1, d) symmetry non-linearly realized, but still present. The U(1) ≅ SO(2), by contrast, must mix with these two Poincaré algebras. This is because, at the level of field representations (using, for the moment, the real representation of the 2 of SO(2)), when one says that a spin-j particle is "charged", one means that the particle is both complex (in other words, in the 2 of U(1)) and a spin-j representation of the Poincaré group. This tells us, then, that the group G that we are looking for is of the form [ISO(1, d) × ISO(1, d)] ⋊ U(1). It is natural then to ask whether one can consistently construct this group. We assign Q as the generator of U(1) and P^a_i, M^{ab}_i as the generators of ISO(1, d) × ISO(1, d). If we attempt to construct the given algebra, we find that most commutation relations can be assigned without trouble. Unfortunately, a problem arises for the commutation relations among the two Poincaré copies themselves. The issue is that the two algebras should separately generate two copies of ISO(1, d), but the SO(2) index clearly obstructs this: the exact object we would need in order to accomplish this would be a structure constant f_ijk (i.e. something to convert two indices into one free index). It is well known that U(1) is abelian, and thus f_ijk ≡ 0; this then requires that the generators M^{ab}_i commute with all other generators. In other words, the non-Abelian properties of the generators are incompatible with the 2 of U(1) structure.
For reference, this can be seen even at the level of Yang-Mills theory, where a non-Abelian internal group cannot be joined to an abelian factor by a semi-direct product (i.e. there cannot be a photon charged under an abelian symmetry); a consistent theory can only be made with a direct product.
Checking the Jacobi identity
One may also see this by analyzing the Jacobi identity. Here we write down the most natural commutation relations that force the P's into a 2 of U(1), for the generators {Q, M^{ab}_1, M^{ab}_2, P^c_1, P^c_2}. One then finds a Jacobi identity that fails, and thus these generators fail to form a Lie algebra, which means that exponentiating them will fail to lead to a closed Lie group.
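The failure can be exhibited in a stripped-down toy version (ours, with Lorentz indices suppressed): take two commuting copies of generators (M_i, P_i) with an abelian Q rotating the copies into each other, and check a Jacobi identity by brute force.

```python
from collections import defaultdict

# Proposed brackets (schematic): [Q,P_i] = eps_ij P_j, [Q,M_i] = eps_ij M_j,
# [M_i,P_i] = P_i within each copy; the two copies commute with each other.
table = {
    ('Q', 'P1'): {'P2': 1}, ('Q', 'P2'): {'P1': -1},
    ('Q', 'M1'): {'M2': 1}, ('Q', 'M2'): {'M1': -1},
    ('M1', 'P1'): {'P1': 1}, ('M2', 'P2'): {'P2': 1},
}

def bracket(a, b):
    if (a, b) in table:
        return dict(table[(a, b)])
    if (b, a) in table:
        return {k: -v for k, v in table[(b, a)].items()}
    return {}                      # all remaining brackets vanish

def jacobi(a, b, c):
    """Cyclic sum [a,[b,c]] + [b,[c,a]] + [c,[a,b]]; zero for a Lie algebra."""
    total = defaultdict(int)
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        for gen, coeff in bracket(y, z).items():
            for gen2, coeff2 in bracket(x, gen).items():
                total[gen2] += coeff * coeff2
    return {k: v for k, v in total.items() if v != 0}

print(jacobi('Q', 'M1', 'P1'))     # {'P2': 1}: the Jacobi identity fails
```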
Kac-Moody algebra admits no finite truncations
Finally, a different approach can be found by studying Kac-Moody algebras; see Ref. [71]. One takes the usual prescription for Kaluza-Klein compactification (here done in a different gauge, but a separate gauge-fixing will result in the same story), with the gauge fields parameterized in the standard Kaluza-Klein way. After performing the Fourier expansion over the compact extra dimension, y ∈ [0, 2π), we find an infinite tower of modes. Performing a (4+1)-split on the corresponding generators, with μ labeling the non-compact directions and x⁵ ≡ y, the translation generators take the form P_μ(x, y) → P^n_μ = e^{iny} ∂_μ (5.12), and similarly for the remaining generators (5.14). Next we impose the conditions (5.10) on this splitting, which yields a Kac-Moody algebra, of which there are several important things to note. Firstly, this is an infinite-dimensional Lie algebra, since the index n on the generators runs over all integers. Secondly, there cannot be a finite truncation of the generators containing multiple copies of the Poincaré generators: the algebra only closes consistently, for instance, when the Lorentz generators form their Virasoro-like algebra. This is a group-theoretic explanation for why one can have linearly realized charged spin-2 fields if there is an infinite tower, while finite truncations are inconsistent. In other words, this is why Dimensional Deconstruction recovers a copy of U(1) only as N → ∞. We see here that we cannot simultaneously diagonalize the charge basis and the Lorentz boost or momentum basis (since they do not commute), and thus charge always entangles itself into these operators. The only exception, of course, is a finite truncation to a single graviton, but this prohibits us from having 1 < N < ∞ gravitons. This means that we can see the failure to generate consistent charged spin-2 theories from Dimensional Deconstruction's relationship with the Kaluza-Klein procedure.
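The level-adding structure behind the "no finite truncation" statement can be checked directly with sympy (our construction, using P^n_μ = e^{iny} ∂_μ and Q^n = e^{iny} ∂_y acting on a test function): commutators of modes m and n produce a mode m + n, so no finite set of levels can close.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)       # test function on which the generators act

def P(n):                        # P^n = exp(i n y) d_x
    return lambda g: sp.exp(sp.I * n * y) * sp.diff(g, x)

def Q(n):                        # Q^n = exp(i n y) d_y
    return lambda g: sp.exp(sp.I * n * y) * sp.diff(g, y)

m, n = 2, 3
comm = sp.expand(Q(m)(P(n)(f)) - P(n)(Q(m)(f)))

# The commutator is i*n * P^{m+n} f: mode numbers add, generating the tower
ratio = sp.simplify(comm / (sp.exp(sp.I * (m + n) * y) * sp.diff(f, x)))
print(ratio)                     # 3*I, i.e. i*n with the result at level m+n = 5
```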
Ostensibly, one might expect that there could be an alternative infinite-dimensional algebra, generated by sending N → ∞ in Dimensional Deconstruction (with a different topology, for instance), which might have a consistent finite truncation. However, the non-existence of the group [ISO(1, d) × ISO(1, d)] ⋊ U(1) prevents this from happening; the most one can hope for is [ISO(1, d) × ISO(1, d)] × U(1). These group-theoretic arguments are consistent with our explicit findings.
Discussion
We have explored whether the ghost-free properties of massive gravity might allow for the existence of a single charged spin-2 field. We have defined a set of natural requirements for a charged spin-2 field: 1.) A ghost-free theory of a single massive complex spin-2 field with a linearly realized U(1) symmetry.
2.) This theory simultaneously exhibits a non-linearly realized double copy of ISO(1, d) symmetry through its Stückelberg fields. (Or triple copy if a massless spin-2 field is added).
Using a modified variant of Dimensional Deconstruction that unfreezes the vector zero-mode of the graviton (i.e. the gravi-photon), we obtained an interesting model that had many novel and non-trivial features such as spin-1 and spin-2 coupling, but ultimately broke the U(1) symmetry. We see that through a straightforward process, the U(1) symmetry may be restored to a unique theory. This unique candidate theory has manifest U(1) invariance, and can be derived from only assuming U(1) invariance and a general form structure. Unfortunately, the U(1) structure explicitly breaks the finely-tuned structure of the kinetic term for General Relativity, and the de-tuning was demonstrated to give rise to a spurious BD ghost dof which is infinitely strongly coupled around flat-space. This more or less prohibits such a theory arising in higher dimensions, since they would presumably have to give rise to a healthy three-dimensional theory via dimensional reduction. From an EFT point of view the existence of the BD ghost may just be taken as an indication of higher derivative operators in the EFT. These operators will be suppressed by a scale which tends to infinity in the limit M Pl → ∞ in which we recover the Federbush theory.
Alternatively, one can view this question from the standpoint of group theory, as was done previously [71]. Without assuming any higher-dimensional structure, we explicitly demonstrate that there cannot exist a group mixing the vielbeins (and thus the copies of ISO(1, d)), because doing so requires a violation of the Jacobi identity, and the group cannot close. In essence, this is the obstruction to the theory that was almost generated by Dimensional Deconstruction, where the U(1) leaves the action invariant and the algebra closes only when the number of gravitons is taken to infinity; at least one example of a resulting consistent algebra is the Kac-Moody algebra. It is well known that Kac-Moody has no finite truncation containing two or more copies of the Poincaré group. In principle, one might imagine that the infinite collection of gravitons could give rise to other infinite-dimensional Lie algebras that do admit such truncations; it is therefore useful to see explicitly that the guilty assumption lies in the U(1) rotating the Poincaré copies into one another, which gives a rather general argument against such a structure.
However, if one weakens this requirement, as is done in the unique candidate theory, and instead of making the ISO(1, 2) symmetry manifest one enforces the U(1) symmetry from the outset, the semi-direct product is broken down to a direct product. Again, the resulting theory appears to be unique, assuming that it can be cast into differential-form language, but it gives rise to an unphysical dof.
Nevertheless we attempted to construct an appropriate U(1) invariant kinetic term that was not of the Einstein-Hilbert form. We showed that the new kinetic term that we created propagated a Boulware-Deser ghost by performing a Stückelberg analysis directly in first-order form. The methods described in this paper can be extended easily in three dimensions to discuss the first-order form of the kinetic interactions described in [9]. It would also be interesting to extend this method to four dimensions, however this is complicated by the well-known fact that in dimensions greater than 3 the spin connection ω ab has more components than the vielbein e a , thus the Lorentz invariant first-order form contains redundant variables that must be eliminated before the constraint analysis can be performed.
This concretely demonstrates that the existence of ghost-free mass terms is not the obstruction to a charged spin-2 field; instead, the Einstein-Hilbert terms are incompatible with the requisite U(1) structure needed to support a charged spin-2 theory. Thus, having a ghost-free theory of a massive, self-interacting spin-2 field does not help one write down a ghost-free theory of a self-interacting, charged spin-2 field.
Acknowledgments
We would like to thank Shuang-Yong Zhou and Raquel Ribeiro for useful comments on the manuscript, and Kurt Hinterbichler for useful discussions. CdR is supported by Department of Energy grant DE-SC0009946. AJT is supported by a Department of Energy Early Career Award DE-SC0010600. AM is supported by the NSF Graduate Research Fellowship Program. The authors would like to thank the Perimeter Institute for Theoretical Physics for hospitality and support during part of this work.
Appendices

A Three-dimensional Einstein-Cartan formalism
The vielbein formalism has already been shown to greatly simplify the form of the interactions of ghost-free massive gravity and multi-gravity theories [25,72,73]. Since we will be interested in modified kinetic terms in this work, we will be including the spin connection in our ADM analysis. This can be done using the Einstein-Cartan (EC) formalism, where the spin connection ω ab is treated as an independent field.
The EC formalism is particularly simple in three dimensions, which is why we focus on three dimensions.
1.) In a D-dimensional spacetime, the Hamiltonian analysis of the EC action is, in general, very complicated. This is because the kinetic terms in the Hamiltonian go as ė^a_i ω^{bc}_j ε^{ij} ε_{abc}; therefore the spin connection ω^{ab} supplies the momenta conjugate to e^a. However, the number of spin connections ω^{ab}_i, which is D(D − 1)/2 × (D − 1), is in general much larger than the number of genuine conjugate momenta to e^a_i, which is D × (D − 1). To reconcile this, one finds that there are many secondary, second-class constraints that project out the excess conjugate momenta and return the theory to the healthy number of phase-space dofs. Such an analysis is quite laborious even for ordinary gravity [74]. By contrast, it is uniquely true in D = 3 that the conditions become just right and the number of spin connections exactly equals the number of conjugate momenta. This makes D = 3 ideal for the analysis of potentially ghostly interactions of gravity theories. The naïve expectation is that if the theories fail in D = 3, a compactification argument tells us that they are unlikely to work in any higher dimension (see the discussion in Sec. 2.5).
2.) In three dimensions, we are greatly aided by Poincaré duality, which relates 1-forms, i.e. vectors, with 2-forms via the Hodge star, (⋆B)^μ = ½ ε^{μρσ} B_{ρσ}. Using these tricks, we can define a dual spin connection ω^a ≡ ε^{abc} ω_{bc}, which naturally comprises the conjugate momenta to e^a, rather than its more complicated form ω^{ab}.
This will cause the Hamiltonian analysis to simplify much more than in four or higher dimensions, however we emphasize that the main results of this paper are fully generalizable to arbitrary dimensions.
The EC Action in D = 3
To make this more concrete, let us start with the D = 3 EC action in differential-form notation. We define the dual of ω^{ab} as ω^a ≡ ε^{abc} ω_{bc}; the inverse is ω^{ab} = ½ ε^{abc} ω_c. Distributing the overall ε_{abc} into the two terms and applying the definition of ω^a, one is led to define the dual Riemann tensor R^a ≡ ε^{abc} R_{bc}. Similarly, we can express the covariant derivative of a Lorentz vector λ^a in terms of the dual spin connection. Varying the resulting EC action with respect to e^a yields the Einstein equation. We will now convert the EC action into its Hamiltonian form, after integrating the exterior derivative by parts (A.10). (In what follows we Wick rotate to Euclidean space, so the position of the indices does not matter. Note that this is only true because we are working with the vielbein indices; if we were working with the space-time indices, the difference would be important. One may trivially Wick rotate back to the Lorentzian by forcing upstairs indices to contract only with downstairs indices and interpreting this as the standard Minkowski inner product.)
In index notation, we then perform the (2+1)-split of the action. Here we see that e^a_0 and ω^a_0 enter the theory as Lagrange multipliers, and given the definition of the conjugate momenta (A.13), the ω^a are the momenta conjugate to the e^a, as promised. The inverse Legendre transformation then shows that the Hamiltonian is pure constraint. This is expected, because all diffeomorphism-invariant theories give rise to Hamiltonians that are pure constraint.
B Application of first-order constraint analysis to bi-gravity

As a check on the method described in section 4, we show here that it can be used to demonstrate the absence of the Boulware-Deser mode in bi-gravity in three dimensions. Start with bi-gravity with no cosmological constants (B.1). As in other sections, it is useful to work with the dual of the spin connection, ω^a = ε^{abc} ω_{bc}. We then perturb to quadratic order around an arbitrary background, e^a_{I,μ} = ē^a_{I,μ} + v^a_{I,μ}, ω^{ab}_{I,μ} = ω̄^{ab}_{I,μ} + μ^{ab}_{I,μ} (B.2). Next we introduce the Stückelberg fields at the level of the perturbations, through site 2 for convenience: v^a_2 → v^a_2 + D̄φ^a, μ^{ab}_2 → μ^{ab}_2 + D̄λ^{ab} (B.3), where D̄ = d + ω̄_1 is the background covariant derivative. Since the Stückelberg fields act as maps from site 2 to site 1, in this representation we may identify the diagonal local Lorentz transformations with site 1, and thus the spin connection appearing in D̄ is the spin connection of site 1.
The quadratic action takes the form (assuming the torsion vanishes, D̄ē = 0) of a derivative piece plus S_{N.D.}, where S_{N.D.} refers to terms with no derivatives on the fluctuations.
We now define the duals μ^a ≡ ε^{abc} μ_{bc} and λ^a ≡ ε^{abc} λ_{bc}. Focusing on the time derivatives, Ω^{AB} is in this case a 30 × 30 matrix, involving a background-dependent function A^{ab}_i[e]. Computing the eigenvalues of Ω^{AB} explicitly, we find that there are 2 eigenvalues that vanish identically, independently of the choice of background and of the parameter choices. This is a proof, in 3 dimensions, that ghost-free bi-gravity (and thus ghost-free massive gravity) propagates no more than two dofs around any arbitrary off-shell background. This method is very simple. Of course, there are backgrounds for which there are more than 2 zero eigenvalues. This corresponds to the well-known backgrounds in the literature where the kinetic term for one or more of the perturbations vanishes, signaling a strongly coupled background solution.
In [65], the analysis was done in unitary gauge, and it was pointed out that there is an ambiguity corresponding to the need to impose a secondary constraint. That ambiguity corresponds here to the way we introduce the Stückelberg fields. After introducing the Lorentz Stückelberg fields, we may always choose a gauge where the symmetric vielbein condition e_{a[μ} f^a_{ν]} = 0 is satisfied.
C Alternative approach to Hamiltonian analysis
In this appendix we provide an alternative, perhaps more direct argument that the new kinetic terms that we were forced to introduce by the U(1) symmetry reintroduce the Boulware-Deser ghosts. The outline of the argument is • We will start with the U(1) invariant action suggested by deconstruction. We will introduce 2 copies of the Lorentz and diff Stückelberg fields so that we reintroduce the full undiagonalized gauge symmetries. As a result, all constraints will be first class.
• By the counting argument of section 4.2.2, we will see that the theory will only propagate 4 dofs (the correct number for two massive gravitons in 3 dimensions) if one linear combination of the Stückelberg fields on each site is non-dynamical.
• By perturbing the action to cubic order about Minkowski space, we will see that for a generic choice of parameters that all of the Stückelberg fields will be dynamical, and so the theory will propagate too many dofs. We may identify these extra propagating modes as Boulware-Deser ghosts.
A key feature of our analysis is that all of the constraints are first class. As a result, we will not generate any new secondary constraints.
Perturbation theory in the Stückelberg language
Our starting point is the Deconstruction-inspired theory written in site language, given by Equations (3.34-3.42). We showed that this was equivalent to the non-linear ansatz in Equation (4.14). We first introduce the Stückelberg fields for both diff and local Lorentz symmetries [75],
$$e^a_{I,\mu}(x) \to \partial_\mu \Phi^{\mu'}_I\,\Lambda^{aa'}_I\,e^{a'}_{I,\mu'}, \qquad I = 2, 3. \tag{C.1}$$
Note that we only introduce Stückelberg fields on sites 2 and 3. In this way we associate the diagonal copies of the gauge symmetries with the gauge symmetries acting on site 1. The choice of site 1 here is arbitrary but convenient for the analysis. The $\Phi^\mu_I$ are maps from site I = 2, 3 to site 1, which has the coordinates $x^\mu$. Thus diff indices are raised and lowered with the metric on site 1. We will now work perturbatively around flat space,
$$e^a_{I,\mu} = \delta^a_\mu + h^a_{I,\mu}, \qquad \omega^{ab}_{I,\mu} = \theta^{ab}_{I,\mu} = \tfrac{1}{2}\varepsilon^{abc}\theta_{I,\mu\,c}, \qquad \Phi^\mu_I = x^\mu + B^\mu_I, \qquad \Lambda^{ab}_I = \big(e^{\lambda_I}\big)^{ab} = \delta^{ab} + \lambda^{ab}_I + \cdots, \quad \lambda^{ab}_I = \tfrac{1}{2}\varepsilon^{abc}\lambda_{I,c}. \tag{C.4}$$
Note that $\lambda^{ab}_I = -\lambda^{ba}_I$. Also note that we have used the fact that we are in three dimensions to rewrite antisymmetric tensors with 2 indices as vectors, $V^a = \varepsilon^{abc}V_{bc}$. We will now work in units where $M_{\rm Pl} = 1$, and work perturbatively in the variables above.
Determining the size of the naïve phase space
To implement the counting of section 4.2.2, it is thus necessary to establish the size of the naïve phase space. In other words, we must count the number of dynamical variables before any constraints are imposed. The form structure of the action guarantees that, to cubic order, the action takes a first-order form in the dynamical variables $\xi^m$, with a single time derivative per term. When we introduce our new kinetic terms, we will find that the phase space measure $\Omega[\xi]$ is not written in Darboux form. In other words, it will not be possible to cleanly separate the fields into coordinates and conjugate momenta without doing a field redefinition.
To avoid needing to explicitly find the field redefinition to go to Darboux form (which is always possible locally), we will determine the naïve phase space directly from the symplectic form $\Omega$. By varying the action with respect to $\xi^m$, we obtain a set of dynamical equations of motion. If $\Omega$ is invertible, then all of the $\xi^n$ have independent, dynamical equations. If $\Omega$ is not invertible, then not all of the equations are independent. The number of nonzero eigenvalues of $\Omega[\xi]_{mn}$ gives the number of dynamical variables in the naïve phase space. For more details see for example [61].
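For concreteness, the eigenvalue counting can be sketched numerically (a toy antisymmetric matrix, not the actual bi-gravity $\Omega$; all names here are illustrative):

```python
import numpy as np

def count_dynamical_variables(omega, tol=1e-10):
    """Count nonzero eigenvalues of an antisymmetric symplectic form Omega.

    Each nonzero eigenvalue corresponds to one dynamical variable in the
    naive phase space; zero eigenvalues signal non-dynamical directions.
    """
    eigvals = np.linalg.eigvals(omega)
    return int(np.sum(np.abs(eigvals) > tol))

# Toy example: a 6x6 antisymmetric form with a 2-dimensional kernel.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
omega = A - A.T                       # antisymmetric, generically full rank
null = rng.standard_normal((6, 2))    # force a 2-dimensional kernel
P = np.eye(6) - null @ np.linalg.pinv(null)   # projector onto the complement
omega = P.T @ omega @ P

print(count_dynamical_variables(omega))  # prints 4: two directions are non-dynamical
```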
Counting degrees of freedom at quadratic order
We now obtain the derivative parts of the action at quadratic order. We see that to this order, the action is still in first-order form, with one time derivative per field. However, $B$, $\lambda$, and $\omega$ all appear with time derivatives, and it is not possible to integrate by parts so that only two of them contain time derivatives. Thus the introduction of our new kinetic interaction has taken the action out of Darboux form; instead the action is written in a more general form. To determine the size of the naïve phase space, we must therefore determine the number of nonzero eigenvalues of the phase space measure $\Omega$.
Form of Ω
The crucial question is whether or not $\Omega_{mn}$ is invertible. If it is, then the naïve phase space contains all of the fields as potential dofs. If it is not, then some of the fields are not dofs. We have seen that we will not propagate the correct number of dofs for a massive spin-2 field unless some of the freedom is projected out. As stressed above, the Boulware-Deser mode is associated with some of the components of the Stückelberg fields. Thus in order to remove the Boulware-Deser ghost, it is crucial that $\det \Omega = 0$. Since we are working perturbatively, we may write
$$\Omega = \Omega^{(0)} + \varepsilon\,\Omega^{(1)} + \cdots, \tag{C.23}$$
where the superscript indicates the order in the fields. The constant part $\Omega^{(0)}$ is determined from the quadratic action, and the part linear in the fields is determined from the cubic action. $\Omega$ has a block form in which the indices $i, j$ run over $h^a_{I,i}$, $\chi^a_{I,i}$, and the indices $a, b$ run over $\lambda^0_I$, $B^0_I$. The determinant of this matrix can be computed perturbatively; it is not necessary to compute $\det \Omega_{ij}$ explicitly, it is enough to know that it is nonzero. The reason it is nonzero is that at quadratic order, all of the fields that the $i, j$ indices run over are dynamical.
Now $\Omega^{(1)}_{ab}$ will in principle have contributions from both the mass and kinetic terms. The mass terms alone do not render $\Omega$ invertible, consistent with the expectation that $\det \Omega = 0$ for ghost-free tri-gravity.
Computing det Ω^(1)_ab

Since $\Omega^{(0)}_{ab} = 0$, we only need to compute $\Omega^{(1)}_{ab}$. Because in a healthy theory the Boulware-Deser ghost must be absent from all solutions, we need only find one solution for which $\Omega$ is invertible. Thus we will consider the case that $B^0_I = \chi^a_{I,i} = 0$. We emphasize that this may only be done after computing $\Omega$.
Now the schematic forms of the momenta are
$$\pi_{B^0} \sim \partial B\,(h + \lambda) + \lambda^2 + \lambda h + h^2, \qquad \pi_{\lambda^0} \sim B\,\partial B + B\lambda + Bh.$$
There are clearly solutions for which this is non-zero, signaling the presence of a Boulware-Deser mode.
Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation
Magic state distillation is an important primitive in fault-tolerant quantum computation. The magic states are pure non-stabilizer states which can be distilled from certain mixed non-stabilizer states via Clifford group operations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli eigenstates are not expected to be magic state distillable, but it has been an open question whether all mixed states outside this set may be distilled. In this Letter we show that, when resources are finitely limited, non-distillable states exist outside the stabilizer octahedron. In analogy with the bound entangled states, which arise in entanglement theory, we call such states bound states for magic state distillation.
The significant noise and decoherence in quantum systems mean that harnessing these systems for computational tasks must be performed fault tolerantly [1,2]. In a wide variety of setups only a limited set of gates, known as the Clifford group, are implemented in a manifestly fault-tolerant manner. Examples include some anyonic topological quantum computers [3][4][5], post-selected quantum computers [6,7] and measurement-based topological quantum computers [8]. This motivates the problem of when such devices, with practically error-free Clifford gates, may be promoted to a full quantum computer. The celebrated Gottesman-Knill theorem shows that a Clifford circuit acting on stabilizer states (simultaneous eigenstates of several Pauli operators) can be efficiently simulated by a classical computer [9]. However, given a resource of pure non-stabilizer states, we can implement gates outside the Clifford group. For example, a qubit in an eigenstate of the Hadamard enables one to implement a π/8 phase gate, which when supplementing the Clifford group gives a dense covering of all unitary operations [10], and so enables universal quantum computation.
Preparation of non-stabilizer states would usually require a non-Clifford operation, so in this context, one would require that even noisy copies of these states enable high-fidelity quantum computation. Bravyi and Kitaev [10] showed that this can be achieved. Coining the term magic state distillation, they showed that most mixed non-stabilizer states can be distilled via Clifford group circuits to fewer copies of a lower entropy state, reaching in the limit of infinite iterations a pure non-stabilizer magic state. However, the protocols they presented do not succeed for all mixed non-stabilizer states. Bravyi and Kitaev were not satisfied by the ambiguous status of these states and concluded that "The most exciting open problem is to understand the computational power of the model in [this] region of parameters." Either all non-stabilizer states are efficiently distillable by an undiscovered protocol, or there exist non-stabilizer states that are impossible to distill. Such undistillable states we call bound states for magic state distillation, in analogy with bound states in entanglement distillation [11]. Here we make progress by showing that bound states exist for a very broad class of protocols. By showing that a single round of a finite-sized protocol will not improve these states, it follows that repeating such a protocol, even with an infinite number of iterations, will also have no benefit. Hence, we explain why all known protocols fail to distill some states.
The single-qubit stabilizer states, for which the Gottesman-Knill theorem applies, are the six pure stabilizer states (the eigenstates of ±X, ±Y and ±Z) and any incoherent mixture of these. In the Bloch sphere, this convex set with 6 vertices forms the stabilizer octahedron partially shown in figure 1a. Single-qubit states have density matrices
$$\rho(f, \mathbf{a}) = f\,|\psi_{\mathbf{a}}\rangle\langle\psi_{\mathbf{a}}| + (1-f)\,|\psi_{-\mathbf{a}}\rangle\langle\psi_{-\mathbf{a}}|, \tag{1}$$
where $\mathbf{a} = (a_X, a_Y, a_Z)$ is a unit vector, and $f$ is the fidelity w.r.t. the pure state $|\psi_{\mathbf{a}}\rangle\langle\psi_{\mathbf{a}}| = (\mathbb{1} + a_X X + a_Y Y + a_Z Z)/2$. Stabilizer states satisfy
$$(2f - 1)(|a_X| + |a_Y| + |a_Z|) \le 1, \tag{2}$$
where the equality holds for states on the surface of the octahedron, and we denote the fidelity of such surface states as $f^S_{\mathbf{a}}$, which is unique assuming $f \ge 1/2$. Prior protocols for magic state distillation [7,10,12,13] increase fidelity towards eigenstates of Clifford gates, such as the Hadamard H and the T gate [18]. These eigenstates have $\mathbf{a}_H = (1, 0, 1)/\sqrt{2}$ and $\mathbf{a}_T = (1, 1, 1)/\sqrt{3}$, with $f = 1$ for ideal magic states. Given the ability to prepare a mixed non-stabilizer state, ρ, we can perform an operation called polarization, or twirling, that brings ρ onto a symmetry axis of the octahedron, for example by randomly applying Clifford gates that preserve that axis.

Bravyi and Kitaev proposed the following protocol [10] for $|T\rangle$ state distillation: (1) Prepare 5 copies of $\rho(f, \mathbf{a}_T)$; (2) Measure the 4 stabilizers of the five-qubit error correcting code; (3) If all measurements give +1, the protocol succeeds and the encoded state is decoded into a single qubit state, and otherwise restart. Upon a successful implementation of this protocol the output qubit has a fidelity F(f) plotted in figure 2b. Provided the initial fidelity is greater than some threshold, a successful implementation yields a higher fidelity. This protocol has a non-tight threshold, and exhibits a gap between the threshold and the set of stabilizer states. Because the initial state was twirled onto the T axis, the threshold forms a plane in the Bloch sphere (see figure 1). In contrast, Reichardt has proposed a protocol that does have a tight threshold for distillation of $\rho(f, \mathbf{a}_H)$ states in an H-like direction [12]. His protocol is similar to the above, but uses 7 qubits in each attempt and measures the 6 stabilizers of the Steane code [1]. In figure 2a we show the performance of this protocol, where there is no threshold gap. When the initial mixture is not of the form $\rho(f, \mathbf{a}_H)$, we twirl the initial mixture onto the H axis. Hence, the threshold forms a plane for each H-like direction (see figure 1). Although the protocol is tight in directions crossing an octahedron edge, the protocol fails to distill some mixed states just above the octahedron faces, and so is not tight in all directions. Even the combined region of states distilled by all known protocols still leaves a set of states above the octahedron faces, whose distillability properties are unknown.
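As a numerical illustration (a sketch using numpy; the helper names are our own, and only equations (1) and (2) above are assumed):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(f, a):
    """Density matrix of eq. (1): Bloch vector of length 2f-1 along unit vector a."""
    proj_plus = (I2 + a[0]*X + a[1]*Y + a[2]*Z) / 2    # |psi_a><psi_a|
    proj_minus = (I2 - a[0]*X - a[1]*Y - a[2]*Z) / 2   # |psi_-a><psi_-a|
    return f * proj_plus + (1 - f) * proj_minus

def is_stabilizer_mixture(f, a):
    """Octahedron condition of eq. (2)."""
    return (2*f - 1) * (abs(a[0]) + abs(a[1]) + abs(a[2])) <= 1

a_T = np.ones(3) / np.sqrt(3)     # T-type magic direction
f_S = (1 + 1/np.sqrt(3)) / 2      # surface fidelity: (2f-1)(|aX|+|aY|+|aZ|) = 1
print(is_stabilizer_mixture(f_S, a_T))         # True (on the octahedron surface)
print(is_stabilizer_mixture(f_S + 0.01, a_T))  # False (a non-stabilizer state)
```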
Here we show that for all size-n protocols there is a region of bound states above the octahedron faces. More formally, we consider all states $\rho(f, \mathbf{a}_P)$ where $\mathbf{a}_P$ has all positive (non-zero) components. Having all components non-zero excludes states above octahedron edges. Considering only states in the positive octant is completely general, as Clifford gates enable movement between octants. Many copies of bound states cannot be used to improve on a single copy; below we formalize the idea of "not an improvement" and state our main result.
Definition 1 We say ρ′ is not an improvement on $\rho(f, \mathbf{a}_P)$ when ρ′ is a convex mixture of $C_i\,\rho(f, \mathbf{a}_P)\,C_i^\dagger$ and stabilizer states, where the $C_i$ are Clifford group gates.

FIG. 2. The performance of magic state distillation of: (a) the Steane code for distilling states in an H-like direction; (b) the five-qubit code distilling states in a T-like direction. Notice that both functions are continuous, and that in (b) an input state on the octahedron surface, $f = f^S_{\mathbf{a}_T}$, will output a state below the surface. Consequently, there is also a region above the surface where the output is a stabilizer state, and hence, not an improvement on the initial state.
Theorem 1 Consider a device capable of ideal Clifford gates, preparation of stabilizer states, classical feedforward and Pauli measurements. For any protocol on this device that takes $\rho(f, \mathbf{a}_P)^{\otimes n}$ and outputs a single qubit, ρ′, there exists an ε > 0 such that ρ′ is not an improvement on $\rho(f, \mathbf{a}_P)$ for $f \le f^S_{\mathbf{a}_P} + \varepsilon$.
Theorem 1 covers a wide class of protocols, which attain a fidelity that is upper-bounded by a narrower class of protocols [14], such that theorem 1 follows from: Theorem 2 Consider all protocols that follow these steps: (i) prepare $\rho(f, \mathbf{a}_P)^{\otimes n}$; (ii) measure the n − 1 generators of an n-qubit stabilizer code $S_{n-1}$ with one logical qubit; (iii) postselect on all "+1" measurement outcomes; (iv) decode the stabilizer code and output the logical qubit as the single qubit state ρ′. For all such protocols there exists an ε > 0 such that ρ′ is not an improvement on $\rho(f, \mathbf{a}_P)$ for $f \le f^S_{\mathbf{a}_P} + \varepsilon$.
Prior protocols, such as those based on the Steane code and the 5-qubit code, are covered explicitly by theorem 2. Here we use the structure of stabilizer codes to prove theorem 2, with theorem 1 following directly from the results of [14], where such distillation protocols are shown to have equal efficacy with more general Clifford protocols. It is crucial to consider the implication of these theorems when an n-qubit protocol is iterated m times. When a single round provides no improvement on the initial resource, the input into the second round will only differ by Clifford group operations, and hence our theorem applies to the second, and all subsequent, rounds. Hence, repeated iteration cannot be used to circumvent our theorem. Before proving these theorems, we derive a pair of powerful lemmas that identify bound states.
Lemma 1 Consider n copies of an octahedron surface state $\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)$ projected onto the codespace of $S_{n-1}$ and then decoded. If the output qubit is in the octahedron interior, then there exists an ε > 0 such that for $f \le f^S_{\mathbf{a}_P} + \varepsilon$ the same projection on $\rho(f, \mathbf{a}_P)^{\otimes n}$ also projects onto a mixed stabilizer state.
This lemma follows directly from the dependence of the output on f, which for finite n is always continuous. We can observe this lemma at work in figure 2b. Our next lemma identifies when octahedron surface states are projected into the octahedron interior. Before stating this we must establish some notation. An initial state, $\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)^{\otimes n}$, is an ensemble of pure stabilizer states
$$\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)^{\otimes n} = \sum_{\mathbf{g}} q_{\mathbf{g}}\,|\Psi_{\mathbf{g}}\rangle\langle\Psi_{\mathbf{g}}|, \tag{3}$$
where $|\Psi_{\mathbf{g}}\rangle$ is stabilized, $g|\Psi_{\mathbf{g}}\rangle = |\Psi_{\mathbf{g}}\rangle$, by the group $\mathcal{G}_{\mathbf{g}}$ generated by $\mathbf{g} = (g_1, g_2, \ldots, g_n)$. The operator $g_i$ is $X_i$, $Y_i$ or $Z_i$, with i labeling the qubit on which it acts. Each contribution has a weighting $q_{\mathbf{g}} = \prod_i \big(a_{g_i}/(a_X + a_Y + a_Z)\big)$.
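As a check of equation (3) for the simplest case n = 1 (a sketch with illustrative variable names), the surface state decomposes exactly into the three positive Pauli eigenstates:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
paulis = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

a = {"X": 1/np.sqrt(3), "Y": 1/np.sqrt(3), "Z": 1/np.sqrt(3)}  # a_T direction
s = sum(a.values())
f_S = (1 + 1/s) / 2  # surface fidelity: (2f-1)(a_X + a_Y + a_Z) = 1

# Surface state with Bloch vector (2f_S - 1) * a
rho_surface = (I2 + (2*f_S - 1) * sum(a[g] * paulis[g] for g in paulis)) / 2

# Ensemble of eq. (3) for n = 1: weights q_g = a_g / (a_X + a_Y + a_Z)
ensemble = sum((a[g] / s) * (I2 + paulis[g]) / 2 for g in paulis)

print(np.allclose(rho_surface, ensemble))  # True
```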
Measuring the generators of $S_{n-1}$ and post-selecting on "+1" outcomes projects onto the codespace of $S_{n-1}$ with projector $P = \sum_{s \in S_{n-1}} s / 2^{n-1}$, producing
$$P\,\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)^{\otimes n}\,P \propto \sum_{\mathbf{g}} q'_{\mathbf{g}}\,|\Psi'_{\mathbf{g}}\rangle\langle\Psi'_{\mathbf{g}}|, \tag{4}$$
with projected terms, $|\Psi'_{\mathbf{g}}\rangle$, of new weighting $q'_{\mathbf{g}}$. Each $|\Psi'_{\mathbf{g}}\rangle$ has its stabilizer generated by $(G_{\mathbf{g}}, s_1, s_2, \ldots, s_{n-1})$, where $G_{\mathbf{g}}$ is an independent generator that: (a) was present in the initial group, $G_{\mathbf{g}} \in \mathcal{G}_{\mathbf{g}}$; and (b) commutes with the measurement stabilizers, $G_{\mathbf{g}} S_{n-1} = S_{n-1} G_{\mathbf{g}}$. In other words, it must be equivalent to one of six logical Pauli operators of the codespace. We denote the set of logical operators as $\mathcal{L}$, with elements $\pm X_L$, $\pm Y_L$ and $\pm Z_L$, and so $G_{\mathbf{g}} \in \mathcal{L}.S_{n-1}$. This defines a decoding via the Clifford map, $X_L \to X_1$ and $Z_L \to Z_1$. Since there are only six distinct logical states, we can combine many terms in equation (4):
$$P\,\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)^{\otimes n}\,P \propto \sum_{L \in \mathcal{L}} q_L\,|\Psi_L\rangle\langle\Psi_L|, \tag{5}$$
where $|\Psi_L\rangle$ has stabilizer generators $(L, s_1, s_2, \ldots, s_{n-1})$. The new weighting is $q_L = \sum q'_{\mathbf{g}}$, with the sum taken over all $\mathbf{g}$ that generate $\mathcal{G}_{\mathbf{g}}$ containing an element $G_{\mathbf{g}} \in L.S_{n-1}$. We can now state the next lemma: Lemma 2 Given n copies of $\rho(f^S_{\mathbf{a}_P}, \mathbf{a}_P)$ projected into the codespace of $S_{n-1}$ and decoded, the output qubit is in the octahedron interior if there exist any two pure states in the initial ensemble, $|\Psi_{\mathbf{g}}\rangle$ and $|\Psi_{\mathbf{g}'}\rangle$ (defined in equation (3)), such that both: (i) the projected pure states are orthogonal, so that $L \in \mathcal{G}_{\mathbf{g}}$ and $-sL \in \mathcal{G}_{\mathbf{g}'}$ where $L \in \mathcal{L}$ and $s \in S_{n-1}$; and (ii) upon projection $|\Psi_{\mathbf{g}}\rangle$ and $|\Psi_{\mathbf{g}'}\rangle$ do not vanish, so $q'_{\mathbf{g}} \neq 0$ and $q'_{\mathbf{g}'} \neq 0$.
We prove this lemma by contradiction. From equation (2), and $(2f - 1)a_L = (q_L - q_{-L})$, surface states satisfy
$$\sum_{L} |q_L - q_{-L}| = 1, \tag{6}$$
and we assume to the contrary that the projected state has this form. Since $q_{\pm L}$ are non-negative reals, we have $|q_L - q_{-L}| = q_L + q_{-L} - 2\,\mathrm{Min}(q_L, q_{-L})$, where $\mathrm{Min}(q_L, q_{-L})$ is the minimum of $q_L$ and $q_{-L}$. Along with the normalization condition, $\sum_L q_L = 1$, this entails
$$\sum_{L} \mathrm{Min}(q_L, q_{-L}) = 0. \tag{7}$$
Since all terms are positive, no cancellations can occur and so every term must vanish, hence $\mathrm{Min}(q_L, q_{-L}) = 0$, $\forall L$. However, conditions (i) and (ii) of the lemma entail that there exists a non-vanishing $\mathrm{Min}(q_L, q_{-L})$, as $q_L \ge q'_{\mathbf{g}} \neq 0$ and $q_{-L} \ge q'_{\mathbf{g}'} \neq 0$. Having arrived at this contradiction, we conclude the falsity of the assumption that the projected state remains on the octahedron surface; it must instead be in the octahedron interior. This proves lemma 2, and we now show that lemma 2 applies to all stabilizer reductions that do not trivially take $\rho(f, \mathbf{a}_P)^{\otimes n} \to C_i\,\rho(f, \mathbf{a}_P)\,C_i^\dagger$.

Our proof continues by finding canonical generators for the code $S_{n-1}$. A related method has been used to prove that all stabilizer states are local Clifford equivalent to a graph state [15], and we review this first. All stabilizer states have a stabilizer $S_n$ with n generators. Each generator is a tensor product of n single-qubit Pauli operators. This can be visualized as an n-by-n matrix with elements that are Pauli operators, each row a generator and each column a qubit. Different, yet equivalent, generators are produced by row multiplication, via which we can produce a canonical form. In this form column i has a non-trivial Pauli operator $A_i$ that appears on the diagonal, and all other operators in that column are either the identity or another operator $B_i$. Note that $A_i$ and $B_i$ compose a third non-trivial Pauli, $A_i B_i = i(-1)^{\gamma_i} C_i$ with $\gamma_i = 0, 1$. Hence, all stabilizer states differ from some graph state by only local Cliffords.

A code, $S_{n-1}$, has one less generator than the number of qubits, and so more columns than rows. We can apply the diagonalisation procedure on an (n−1)-by-(n−1) submatrix, to bring this submatrix into canonical form. Hence, we can find generators $s_j$ of $S_{n-1}$ in this canonical form, where the variables $\beta_{k,j} = 0, 1$ denote whether $B_k$ or $\mathbb{1}_k$ is present, and $\alpha_j = 0, 1$ defines the phase. With the n-th column out of canonical form, this leaves the n-th qubit operator $T_{j,n}$ unspecified. However, if all these generators have $T_{j,n} = \mathbb{1}_n$, then the protocol is trivial: it projects n − 1 qubits into a known stabilizer state and leaves the last qubit untouched, and so no improvement is made for any f. Hence, herein we assume the non-trivial case; in particular we assume stabilizer $T_{n-1,n} \neq \mathbb{1}_n$. Since we can always relabel qubits this is completely general. Furthermore, we can define $T_{n-1,n} = A_n$. Now we can define a logical operator $Z_L$ in the codespace of $S_{n-1}$, where the variables $\zeta_j = 0, 1$ are uniquely fixed by the commutation relations $Z_L s_j = s_j Z_L$. Note that $Z_L$ has some inbuilt freedom, as $B_n$ is not fixed other than that $B_n \neq A_n, \mathbb{1}_n$, which is equivalent to free choice of $\gamma_n$ in the expression $A_n B_n = i(-1)^{\gamma_n} C_n$. Now we enquire whether the final state contains two terms stabilized by $Z_L$ and $-sZ_L$ respectively, hence satisfying the conditions for lemma 2.
If we consider the product of $Z_L$ and $s_{n-1}$, and choose $\gamma_n = \alpha_{n-1} + \gamma_{n-1} \bmod 2$, this choice of $\gamma_n$ ensures a minus sign on the left-hand side, which aids in finding $|\Psi_{\mathbf{g}}\rangle$ and $|\Psi_{\mathbf{g}'}\rangle$ that satisfy our lemma by being stabilized by $G_{\mathbf{g}} = Z_L$ and $G_{\mathbf{g}'} = -s_{n-1}Z_L$ respectively; this criterion fixes the required $\mathbf{g}$ and $\mathbf{g}'$. These states only vanish under projection, $q'_{\mathbf{g}}, q'_{\mathbf{g}'} = 0$, if they are stabilized by the negative of some element of the code $S_{n-1}$. To prove they don't vanish, we first observe that every element of $\mathcal{G}_{\mathbf{g}}$ and $\mathcal{G}_{\mathbf{g}'}$ has either $\mathbb{1}_j$ or $B_j$ acting on qubit j, for all j = 1, 2, ..., n−2. The only elements of $S_{n-1}$ for which this is true are $\mathbb{1}$ and $s_{n-1}$, but $s_{n-1}$ has $A_{n-1}A_n$ acting on the last two qubits and neither $\mathcal{G}_{\mathbf{g}}$ nor $\mathcal{G}_{\mathbf{g}'}$ contains any such element.
Using a canonical form of the generators of $S_{n-1}$, we have shown that non-trivial codes always satisfy the conditions of lemma 2. That is, all non-trivial codespace projections take many surface states into the octahedron interior. From the continuity expressed by lemma 1, this entails the existence of a finite region of non-stabilizer states that are also projected into the octahedron. Hence, all n-copy protocols do not improve on a single copy for some region of bound states above the octahedron faces, completing the proof. This does not contradict known tight thresholds in edge directions, as these directions have $\mathbf{a}$ with one zero component.
Although our proof holds for protocols using fixed and finite n copies of $\rho(f, \mathbf{a}_P)$, we could conceive of a protocol that varies n. If this varying-n protocol has an n-dependent threshold, $f^T_{\mathbf{a}_P}(n)$, and $f^T_{\mathbf{a}_P}(n) \to f^S_{\mathbf{a}_P}$ as n → ∞, then its threshold would be arbitrarily suppressible. Repeated iterations of a protocol, or equivalently employing concatenation of a single-qubit code, will not change the threshold. However, one could consider a broader class of protocols consisting of iterates that act on p qubits and output q qubits (for p > q > 1) followed by a final round outputting a single qubit. Such protocols map n qubits to 1 qubit, with n growing each iterate, but with only p qubits involved in each iterate. This implies that multi-qubit output iterates may suppress the threshold effectively, and are worth further study. Currently, no such protocol is known. As such, in the asymptotic regime, bound magic states may not exist. However, numerical evidence so far indicates that smaller codes tend to produce better thresholds than larger codes. Nevertheless, the theorem does not rule out infinite cases from attaining a tight threshold. In the regime of finite resources, bound states do exist, and it is interesting to ask what computational power Clifford circuits acting on such states possess. Can we find methods of efficiently classically simulating bound states; or can bound states be exploited in algorithms that offer a speedup over classical computation? Furthermore, our proof assumes a protocol acting on identical copies, which invites study into whether our results extend to non-identical copies. In particular, following the analogy with entanglement distillation, we speculate that bound magic states may be distillable via "catalysis", where some non-consumed distillable resource activates the distillation [11]. Finally we note that noisy Clifford gates can also enable quantum computation [16,17], and we conjecture that a similar theorem will apply to a class of noisy Clifford gates analogous to states just above the octahedron faces.
Multiple Osteochondral Allograft Transplantation with Concomitant Tibial Tubercle Osteotomy for Multifocal Chondral Disease of the Knee
Symptomatic patellofemoral chondral lesions are a challenging clinical entity, as these defects may result from persistent lateral patellar maltracking or repetitive microtrauma. Anteromedializing tibial tubercle osteotomy has been shown to be an effective strategy for primary and adjunctive treatment of focal or diffuse patellofemoral disease to improve the biomechanical loading environment. Similarly, osteochondral allograft transplantation has proven efficacy in physiologically young, high-demand patients with condylar or patellofemoral lesions, particularly without early arthritic progression. The authors present the surgical management of a young athlete with symptomatic tricompartmental focal chondral defects with fresh osteochondral allograft transplantation and anteromedializing tibial tubercle osteotomy.
Symptomatic patellofemoral chondromalacia is a challenging clinical scenario in young, active individuals. Although previous reports have shown good to excellent outcomes with combined tibial tubercle osteotomy in conjunction with autologous chondrocyte implantation, 1-3 a recent publication reported only 63% return to preoperative physical function in an active population at minimum 2 years after tibial tubercle osteotomy with or without a cartilage restoration procedure. 4 Similarly, osteochondral allograft transplantation (OCA) is a well-established surgical option that addresses the injury to both the articular cartilage and underlying subchondral bone, with reported rates of return to sporting activity of up to 88%. 5-7 At our institution, a recent analysis of the OCA patient population revealed an overall 87% allograft survival rate for all lesions of the knee at an average 5-year follow-up. 8 Treatment of concomitant pathology such as patellar maltracking or rotational malalignment is critical to enhance long-term osteochondral allograft viability and prevent progression of disease. The purpose of this surgical technique description was to describe the method for concomitant fresh osteochondral allograft transplantation and anteromedializing (AMZ) tibial tubercle osteotomy for treatment of multicompartment articular cartilage defects of the knee.
Technique

Diagnosis
In addition to history and physical examination, standard weightbearing preoperative knee radiographs, including large-cassette alignment views, should be performed (Fig 1). 9 Sizing markers are used on these radiographs to allow appropriate graft size matching. 10 Advanced axial imaging is frequently used to better delineate the tibial tubercle-trochlear groove distance (TT-TG) prior to tibial tubercle osteotomy, 11 and magnetic resonance imaging may be performed preoperatively to better assess the extent of subchondral involvement and concomitant ligamentous or meniscal pathology. 9
Indication
OCA is predominantly indicated for high-demand, nonobese, physiologically young (i.e., age <50 years) patients who have failed conservative and/or prior articular cartilage repair techniques. 12,13 OCA may be used as primary treatment of focal, medium to large cartilage defects that are less amenable to marrow stimulation and in patients not ideally indicated for arthroplasty procedures. 8,14 Conversely, significant osteoarthritis or diffuse, nonfocal chondral pathology remains a contraindication to OCA. Symptomatic concomitant pathology such as malalignment (i.e., coronal and rotational), meniscal insufficiency, ligamentous injury, or adjacent compartment chondral disease may be treated simultaneously to optimize clinical outcomes. 15,16 AMZ has been indicated for patients with refractory anterior knee pain to offload the pathologic contact pressures, particularly in the presence of cartilage restoration. 17
Patient Positioning and Anesthesia
The patient was positioned supine on a flat-top table with a thigh tourniquet and general anesthesia. Although not the authors' preference, a foot positioner may be useful to maintain adequate knee flexion (70°-110°) for visualization of condylar defect(s).
Surgical Technique
If not previously performed (i.e., within 3-6 months), staging arthroscopy is preferred to determine the extent and location of symptomatic cartilage disease prior to ordering fresh osteochondral allograft. Diagnostic arthroscopy confirmed the presence of moderately sized, focal, Outerbridge grade III/IV chondral defects of the patella, trochlea, and medial and lateral femoral condyles without further ligamentous or meniscal injury (Video 1).
A 10-cm longitudinal midline incision and dissection was performed from the superior aspect of the patella distally to the tibial tubercle. Fresh (15-28 days postharvest) osteochondral allografts (JRF Ortho, Centennial, CO) of the distal femur and patella were thawed in room-temperature saline on the back table. The Arthrex T3 AMZ (Arthrex, Naples, FL) system was used to perform the tibial tubercle osteotomy (Video 1). On visualization of the patellar tendon insertion, the pin guide was inserted into the tibial tubercle perpendicular at the level of the Gerdy tubercle. The 60° cutting guide was assembled and provisionally pinned on the anteromedial aspect of the tibia (Fig 2). The horizontal guide and tibial pin were removed, the anterior compartment was sharply elevated in subperiosteal fashion, and multiple Chandler retractors were placed around the posterolateral flare of the tibia. An oscillating saw was used to obliquely cut the tibial tubercle distally through the slot capture, and ½- and 1-in. osteotomes were used to complete the cut laterally and superiorly beneath the patellar tendon, while maintaining the distally based periosteal sleeve. To normalize the TT-TG and offload bipolar, lateral defects of the patella and trochlea, approximately 10 mm of translation of the tibial tubercle was confirmed. Two 4.5-mm cortical screws were drilled in standard lag fashion and placed perpendicularly to the plane of the osteotomy. Prominent anteromedial bone was sharply resected and employed for bone grafting of the anterolateral defect after translation. Alternatively, tibial tubercle fixation may be delayed until after OCA depending on difficulty of surgical exposure.
A lateral parapatellar arthrotomy with soft tissue lengthening and a limited, medial vastus-sparing arthrotomy were performed for exposure and perpendicular lesion access (Video 1). With an assistant gently flexing the knee, the trochlear defect was delivered using a large rake and z-retractor (Fig 3A). Cannulated cylindrical sizing guides from the Allograft OATS system (Arthrex) were placed over the defect to determine the diameter of donor allograft needed. The 22.5-mm trochlear defect was sized, and a 2.4-mm guide pin was inserted through the cannulated sizing guide in the center of the defect to a depth of at least 3 cm (Fig 3B and C). The sizing guide was removed and a cannulated recipient harvester of the same size was placed over the guidepin to score the peripheral cartilage. A cannulated cutting reamer was then placed over the guidepin, and the defect was reamed to a depth of approximately 7 mm under cold irrigation to prevent thermal necrosis. The reamer, guidepin, and particulate debris were removed and a small ruler was used to measure the depth of the 4 quadrants (3-, 6-, 9-, and 12-o'clock). A fresh no. 15 blade may be used to debride any frayed cartilage around the rim.
On the back table, a bushing of the same size as the defect was placed over the donor condyle at the exact location and held firmly by an assistant, while the surgeon used a donor harvester to extract the donor osteochondral graft (Fig 3D). Graft measurements were marked out on the donor plug, and the donor allograft was trimmed to the appropriate depth using an oscillating saw, rasp, and rongeurs. Pulsatile lavage with bacitracin-mixed saline was used for 2 minutes over the donor plug. The donor plug was then press-fit by hand, with care to ensure that the 12-o'clock position on the graft and recipient site were matched. An oversized tamp was used to impact the plug flush to the surrounding articular surface (Fig 3E). A similar technique was applied to lesions on the patella, medial femoral condyle, and lateral femoral condyle with increasing degrees of knee flexion. To access the undersurface of the patella, the deep capsule in the suprapatellar pouch and prepatellar fat pad may be elevated to allow an assistant to evert the patella 90°. After graft implantation and copious irrigation, layered wound closure was performed with titrated lateral retinacular lengthening, and a hinged knee brace was applied (Table 1).
Postoperative Rehabilitation
A hinged knee brace is locked in full extension and taken off only for physical therapy and use of a continuous passive motion machine. At 2 weeks, the brace is unlocked, and it is discontinued when the patient is able to perform a straight-leg raise without an extension lag. Depending on the quality of fixation, weight bearing may range from complete non-weight bearing to touch-down weight bearing. Partial weight bearing is initiated at 6 weeks with range-of-motion goals of 130° knee flexion and full extension. Full weight bearing and range of motion should be achieved by 8 weeks. Physical therapy begins closed-chain exercises and gait training. At 12 weeks, strength training including stationary bicycling and light jogging is advanced, with full return to vigorous athletic activities discouraged until approximately 8 months.
Discussion
Recently, several authors have reported clinical outcomes of osteochondral allografts for treatment of isolated full-thickness patellofemoral cartilage defects. 5,18 Historically, OCA for patellar lesions has yielded inferior results when compared with corresponding defects involving the femoral condyle or trochlea. However, for patients with extensive failure of both conservative and surgical interventions, few evidence-based treatment options address large, multifocal defects with restoration of the native osteochondral architecture in a single-stage procedure. 5,8,18 To date, limited clinical outcomes are available detailing the results with multiple OCA. In selected patients (i.e., young, active) with multiple, focal cartilage defects of the knee, this approach represents a viable strategy for joint preservation with anatomic reconstruction. Conversely, certain limitations must be acknowledged, including its technical difficulty, increased cost, and limited graft availability with a 4- to 6-month waiting period (Table 2).

Fig 3 (caption). The defect is sized using a cannulated cylindrical sizing guide (Arthrex) to encompass the full extent of the defect. A large rake was placed laterally and a z-retractor was placed medially, superior to the patella, to allow the cylindrical sizing guide to be placed flush over the defect. (C) Intraoperative image of a right knee trochlear defect being prepared to be reamed to a depth of approximately 6 mm to 8 mm using a cannulated cutting reamer of the same size as the cylindrical sizing guide previously used to measure the diameter of the defect. (D) Intraoperative image of a graft harvester placed over a bushing of the same size as the measured defect and used to core the donor plug through the full extent of the donor tissue. An assistant is used to hold the bushing firmly in the appropriate location on the donor tissue. (E) Intraoperative image showing an osteochondral allograft plug being press fit into the previously reamed patellar defect. The graft was then gently tamped flush to the surrounding articular cartilage.
For focal patellofemoral chondral defects, adjunctive AMZ may be performed to optimize underlying pathomechanics, correct rotational malalignment, and transfer adverse contact pressures. 2 Although our technique highlights the utility of OCA for multiple, symptomatic chondral lesions, it also underscores the importance of concomitant AMZ in improving patellofemoral kinematics. The TT-TG should be normalized to a target goal of less than 10 to 12 mm, whereas a 60° cut is commonly used by the senior author to achieve offloading of lateral and distally based patellar lesions while avoiding overmedialization. 19,20 Maintaining optimal patellofemoral biomechanics is essential to protect the biologic microenvironment of the osteochondral allograft, both to enhance incorporation and to reduce adjacent articular disease progression.
An Optimization of the Signal-to-Noise Ratio Distribution of an Indoor Visible Light Communication System Based on the Conventional Layout Model
For an actual visible light communication system, it is necessary to consider the uniformity of indoor illumination. Most of the existing optimization schemes, however, do not consider the effect of the first reflected light, and do not conform to the practical application conventions, which increases the actual cost and the complexity of system construction. In this paper, considering the first reflected light and based on the conventional layout model and the classic indoor visible light communication model, a scheme using the parameter Q to determine the optimal layout of channel quality is proposed. We determined the layout, and then carried out a simulation. For comparison, the normal layout and the optimal layout of illumination were also simulated. The simulation results show that the illuminance distributions of the three layouts meet the standards of the International Organization for Standardization. The optimal layout of channel quality in the signal-to-noise ratio distribution, maximum delay spread distribution, and impulse response is obviously better than the optimal layout of illumination. In particular, the effective area percentage of the optimal layout of channel quality is increased by 0.32% and 6.08% to 88.80% as compared with the normal layout’s 88.48% and the optimal layout of illumination’s 82.72%. However, compared with the normal layout, the advantages are not very prominent.
Introduction
For an actual visible light communication (VLC) system, it is necessary to consider the uniformity of indoor illumination, so it is preferred to use light-emitting diodes (LEDs) with a larger half-power angle, which enhances the multipath effect while causing inter-symbol interference (ISI) and reducing the signal-to-noise ratio (SNR). In order to improve the performance of a VLC system, various optimization schemes have been proposed [1][2][3][4][5].
An optimization scheme based on an evolutionary algorithm is proposed to modify the optical intensity of LED transmitters for reducing the signal power fluctuation extent [1]. A new arrangement scheme of LED lamps is proposed, and the SNR fluctuation is reduced from 14.5 to 0.9 dB [2]. A hybrid scheme that combines spotlighting with uniform lighting is proposed to achieve uniform illumination and high data rate [3]. An optimization scheme based on light-shaping diffusers is proposed to achieve uniform power distribution [4]. An optimization scheme based on the Lambertian order is proposed to improve the performance of a VLC system [5].
According to published papers [6,7], it is necessary to consider the effect of the first reflected light, but most of the existing optimization schemes only consider the direct radiation, and the optimization results usually do not conform to the practical application conventions, which increases the actual cost and the complexity of system construction. In this paper, considering the first reflected light and based on the conventional layout model and the classic indoor VLC model, the normal layout, the optimal layout of illumination, and the optimal layout of channel quality are simulated, and the relevant data are analyzed.
Following the introduction, this paper is organized in four sections. In Section 2, the basic theories and calculation formulas of VLC simulation are shown. In Section 3, the data and indoor environment used in the simulation are shown, the parameter Q is defined, and the three layouts are determined. In Section 4, the illuminance distribution, SNR distribution, maximum delay spread (M-DS) distribution, and impulse response (IR) of the three layouts are shown and analyzed. Finally, our conclusions are given in Section 5.
Basic Theories of VLC
This section is divided into three parts, which respectively explain the basic theories and calculation formulas used in the simulation of illuminance, optical radiation power, and channel quality in VLC. In existing papers [5,6], it is assumed that the light source is a near-Lambertian light source and the reflective surface is a standard Lambertian surface. These two assumptions are used in this paper. Some basic photometric parameters are described below.
The basic physical quantity used in optical radiometry is the radiant flux or radiant power, symbolized by Φe, in watts (W). Photometry is a discipline that studies the visual sensation of the human eye. The units used have a physical basis combined with the characteristics of the human eye; therefore, all units in photometry are artificial.
Luminous flux indicates the visual intensity perceived by the human eye caused by radiant flux; it is the physical quantity derived from the radiant flux Φe, that is, the photometric quantity of the radiant flux, based on the effect of the radiation on the standard photometric observer of the International Commission on Illumination (CIE). Luminous flux is an intrinsic property of a light source. The symbol is Φv and the unit is the lumen (lm = cd·sr):
$$\Phi_v = K_m \int V(\lambda)\,\Phi_{e,\lambda}(\lambda)\,d\lambda, \tag{1}$$
where V(λ) is the photopic optical efficiency function, as shown in Figure 1; Km is the photopic optical efficiency at λ = 555 nm, and is also the maximum of V(λ), one of the photometric constants, Km = 683 lm/W.

Luminous intensity indicates the illuminating characteristic of a light source in a given direction, which is defined as the magnitude of the luminous flux Φv per unit solid angle Ω in a given direction. The symbol is Iv and the unit is the candela (cd):
$$I_v = \frac{d\Phi_v}{d\Omega}. \tag{2}$$
Luminance indicates the illuminating characteristic of the illuminating surface in a given position and direction, which is defined as the luminous intensity Iv per unit cross-section perpendicular to the direction. The symbol is Lv and the unit is candela per square meter (cd/m²):
$$L_v = \frac{dI_v}{dA\cos\theta}, \tag{3}$$
where dA is the surface element of the beam section; θ is the angle between the normal of the surface element and the given direction, as shown in Figure 2.

Illuminance indicates the light-receiving characteristic of the illuminated surface in a given position, which is defined as the luminous flux Φv received on the unit tangent plane at the fixed point. The symbol is Ev and the unit is the lux (lx = cd·sr/m²):
$$E_v = \frac{d\Phi_v}{dA} = \frac{I_v\cos\theta}{r^2}, \tag{4}$$
where dA is the surface element of the tangent plane at the fixed point; θ is the angle between the normal of the surface element and the direction from the fixed point to the point source; r is the distance between the fixed point and the point source, as shown in Figure 3.

The luminous intensity of a standard Lambertian source (standard Lambertian surface) conforms to the cosine law, as shown in Figure 4:
$$I_v(\theta) = I_0\cos\theta. \tag{5}$$
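As a numerical illustration of Equation (1) (a sketch; the Gaussian below is a rough stand-in for the tabulated CIE V(λ) curve, not the real data):

```python
import numpy as np

KM = 683.0  # lm/W, photopic optical efficiency at 555 nm

def luminous_flux(wavelengths_nm, spectral_power_w_per_nm):
    """Equation (1): Phi_v = Km * integral of V(lambda) * Phi_e,lambda dlambda."""
    # Rough Gaussian stand-in for the CIE photopic efficiency function V(lambda)
    V = np.exp(-((wavelengths_nm - 555.0) ** 2) / (2 * 45.0 ** 2))
    return KM * np.trapz(V * spectral_power_w_per_nm, wavelengths_nm)

# Example: 1 W of optical power spread uniformly over 400-700 nm
wl = np.linspace(400.0, 700.0, 301)
spd = np.full_like(wl, 1.0 / 300.0)  # W/nm, integrates to 1 W
print(f"{luminous_flux(wl, spd):.0f} lm")  # roughly 260 lm for this toy spectrum
```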
From Equations (3) and (5), the luminance of a standard Lambertian source (standard Lambertian surface) is independent of the direction, as shown in Figure 5:
$$L_v = \frac{I_v(\theta)}{dA\cos\theta} = \frac{I_0}{dA}. \tag{6}$$
Symbolizing the luminance of a standard Lambertian source as Lv and the area as dA, the total luminous flux Φv radiated into the entire hemispherical surface, SP, is given in Equation (7):
$$\Phi_v = \int_{SP} L_v\,dA\cos\theta\,d\Omega = \pi L_v\,dA. \tag{7}$$
Therefore, the luminous exitance Mv and the luminance Lv of a standard Lambertian source are in the following relationship:
$$M_v = \frac{\Phi_v}{dA} = \pi L_v. \tag{8}$$
In the LED industry, people use the half-power angle $\theta_{1/2}$ as a measure of the angle of illumination. The half-power angle $\theta_{1/2}$ is defined as the angle between the axial direction and the direction in which the luminous intensity falls to half of its axial value. From Equation (5), the half-power angle $\theta_{1/2}$ of the standard Lambertian source is 60°.
In practical applications, the LED half-power angle $\theta_{1/2}$ is often not equal to 60°. Such an LED is called a near-Lambertian light source, and the luminous intensity function is expressed as
$$I_v(\theta) = I_0\cos^m\theta, \tag{9}$$
where m is the order of the Lambertian source. Combined with the definition of the half-power angle $\theta_{1/2}$, the following is obtained:
$$m = -\frac{\ln 2}{\ln(\cos\theta_{1/2})}. \tag{10}$$
So we know that the order of the Lambertian source m and the half-power angle $\theta_{1/2}$ correspond to each other. The order of the Lambertian source m can be obtained from the half-power angle $\theta_{1/2}$, and the normalized luminous intensity distribution function curve is plotted, as shown in Figure 6. From Equations (3) and (9), the luminance of a near-Lambertian source is related to direction, and the normalized luminance distribution is shown in Figure 7. It has been proved that the proportion of the secondary reflected light to the total is too small to matter [6]. In this paper, only direct light and first reflected light are considered.
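For example, Equation (10) can be evaluated directly (a minimal sketch):

```python
import numpy as np

def lambertian_order(theta_half_deg):
    """Equation (10): Lambertian order m from the half-power angle."""
    return -np.log(2) / np.log(np.cos(np.radians(theta_half_deg)))

print(lambertian_order(60.0))  # 1.0   (standard Lambertian source)
print(lambertian_order(70.0))  # ~0.65 (wider beam, smaller m)
```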
Horizontal Illuminance of LOS
From Equations (4) and (9), the horizontal illuminance of LOS, $E_{v,\mathrm{LOS}}$, is known:
$$E_{v,\mathrm{LOS}} = \frac{I_0\cos^m\phi\,\cos\psi}{d^2}, \tag{11}$$
where $I_0$ is the axial luminous intensity of the LED lamps, $\phi$ is the angle of irradiance, $\psi$ is the angle of incidence at the receiving point, and $d$ is the distance between the LED and the receiving point.
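A sketch of Equation (11) evaluated over the receiving plane (the room geometry matches Section 3, but the axial intensity value and grid resolution here are illustrative):

```python
import numpy as np

def horizontal_illuminance(xg, yg, lamps, i0=2000.0, m=1.0, h=1.75):
    """Equation (11) summed over lamps: E = I0 cos^m(phi) cos(psi) / d^2.

    h is the vertical distance from the lamp plane (2.5 m) to the desk
    plane (0.75 m). For a downward-pointing lamp, phi = psi, so cos = h/d.
    """
    E = np.zeros_like(xg)
    for lx, ly in lamps:
        d = np.sqrt((xg - lx) ** 2 + (yg - ly) ** 2 + h ** 2)
        cos_angle = h / d
        E += i0 * cos_angle ** m * cos_angle / d ** 2
    return E

# 5 m x 5 m room; four lamp centers approximating the normal layout
grid = np.linspace(0, 5, 51)
xg, yg = np.meshgrid(grid, grid)
lamps = [(1.25, 1.25), (1.25, 3.75), (3.75, 1.25), (3.75, 3.75)]
E = horizontal_illuminance(xg, yg, lamps)
print(f"mean {E.mean():.0f} lx, min {E.min():.0f} lx, max {E.max():.0f} lx")
```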
Horizontal Illuminance of NLOS
The first reflected light is computed by treating each small region of the wall as a secondary Lambertian source, where ρ is the reflectivity of the wall. From Equations (3), (8) and (14), the luminous intensity in the normal direction at the reflection point follows from the illuminance arriving at the wall element, where $dA_{\mathrm{WALL}}$ is the reflective area of the small region. From Equations (4), (5) and (15), the horizontal illuminance of NLOS at a receiving point is then obtained by summing the contributions of all wall elements.
Received Power of LOS and NLOS
Analogously to the definition of luminous intensity, in radiometry the radiation intensity is defined as the radiant flux Φe per unit solid angle Ω in a given direction. The symbol is Ie and the unit is watts per steradian (W/sr):
$$I_e = \frac{d\Phi_e}{d\Omega}. \tag{17}$$
Because luminous intensity and radiation intensity are descriptions of the same physical phenomenon from different aspects, according to Equation (9), the radiation intensity of a near-Lambertian source can be assumed as follows:
$$I_e(\theta) = V_m\cos^m\theta, \tag{18}$$
where $V_m$ is a variable associated with m and determining the relationship between Φe and Ie. For a single-sided LED, the total radiant flux Φe radiated throughout the hemispherical surface SP can be calculated as
$$\Phi_e = \int_{SP} I_e(\theta)\,d\Omega = \frac{2\pi V_m}{m+1}. \tag{19}$$
From Equations (18) and (19), the radiant intensity Ie is known:
$$I_e(\theta) = \frac{(m+1)\Phi_e}{2\pi}\cos^m\theta. \tag{20}$$
The received optical power then follows as
$$P_r = \frac{(m+1)A_R}{2\pi d^2}\cos^m\phi\;T_S(\zeta)\,g(\zeta)\cos\zeta\;P_S, \tag{21}$$
where $A_R$ is the physical area of the detector in a photo diode (PD); $P_S$ is the transmitted optical power of the source; $T_S(\zeta)$ is the gain of an optical filter; $\Psi_c$ is the width of the field of vision (FOV) at a receiver; $g(\zeta)$ is the gain of an optical concentrator. $g(\zeta)$ is known [6], and is given in Equation (22):
$$g(\zeta) = \begin{cases} \dfrac{n^2}{\sin^2\Psi_c}, & 0 \le \zeta \le \Psi_c, \\ 0, & \zeta > \Psi_c, \end{cases} \tag{22}$$
where n is the refractive index.
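A sketch of Equations (21) and (22) (angles in radians; detector parameters follow Table 3, while the example link values are illustrative):

```python
import numpy as np

def concentrator_gain(zeta, psi_c=np.radians(60), n=1.5):
    """Equation (22): optical concentrator gain inside the FOV, zero outside."""
    return n**2 / np.sin(psi_c)**2 if 0 <= zeta <= psi_c else 0.0

def received_power(ps, d, phi, zeta, m=1.0, ar=1e-4, ts=1.0):
    """Equation (21): LOS received optical power for a near-Lambertian LED."""
    return (m + 1) * ar / (2 * np.pi * d**2) * np.cos(phi)**m \
           * ts * concentrator_gain(zeta) * np.cos(zeta) * ps

# Example: 1 W source, 2 m link, receiver directly below the LED
print(received_power(ps=1.0, d=2.0, phi=0.0, zeta=0.0))  # a few tens of microwatts
```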
Channel Quality
Channel noise is divided into three parts: ISI noise, shot noise, and thermal noise. In the indoor VLC communication system, the signal-to-noise ratio at the receiving end is expressed as follows [6]:
$$\mathrm{SNR} = \frac{(\gamma P_{r\mathrm{Signal}})^2}{\sigma^2_{\mathrm{shot}} + \sigma^2_{\mathrm{thermal}} + (\gamma P_{r\mathrm{ISI}})^2}, \tag{23}$$
where $\gamma$ is the detector responsivity; $P_{r\mathrm{Signal}}$ is the signal power; $\sigma^2_{\mathrm{shot}}$ is the shot noise; $\sigma^2_{\mathrm{thermal}}$ is the thermal noise; $P_{r\mathrm{ISI}}$ is the ISI noise power.
ISI Noise
ISI noise is caused by ISI power. In a VLC system, the power received by the PD is divided into signal power and ISI power. When a signal is transmitted, paths whose signals arrive within one symbol period of the earliest arrival contribute signal power; paths whose signals arrive after that first symbol period is completed contribute ISI power, as shown in Figure 9. In Figure 9, $t_{\min}$ is the earliest arrival time of the signals over all paths, $t_i$ is the arrival time of the signal in the i-th path, and T is the symbol period.
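This bookkeeping can be sketched as follows (the multipath profile below is hypothetical):

```python
import numpy as np

def split_signal_isi(arrival_times, path_powers, T):
    """Paths arriving within one symbol period T of the earliest arrival
    contribute signal power; later arrivals contribute ISI power."""
    t = np.asarray(arrival_times)
    p = np.asarray(path_powers)
    in_symbol = t <= t.min() + T
    return p[in_symbol].sum(), p[~in_symbol].sum()

# Hypothetical multipath profile: times in ns, powers in microwatts
times = [10.0, 12.5, 18.0, 22.0, 31.0]
powers = [20.0, 6.0, 2.5, 1.2, 0.4]
sig, isi = split_signal_isi(times, powers, T=10.0)  # 100 Mb/s OOK -> T = 10 ns
print(sig, isi)  # 28.5 1.6
```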
Shot Noise
Shot noise originates from active components in electronic devices and is caused by uneven electron emission. In this system, it results from the photocurrent caused by LED illumination and the dark current caused by ambient illumination.
The shot noise caused by LED illumination, $\sigma^2_{\mathrm{shot,LED}}$, is known [6]:
$$\sigma^2_{\mathrm{shot,LED}} = 2q\gamma P_r B, \tag{24}$$
where q is the electronic charge; B is the equivalent noise bandwidth, and it is equal to the data rate. The shot noise caused by ambient illumination, $\sigma^2_{\mathrm{shot,bg}}$, is known [6]:
$$\sigma^2_{\mathrm{shot,bg}} = 2q I_{bg} I_2 B, \tag{25}$$
where $I_{bg}$ is the background current; $I_2$ is one of the noise bandwidth factors.
Thermal Noise
Thermal noise is also called resistance noise; it originates from passive components in electronic devices and is caused by electronic Brownian motion. In this system, it results from feedback resistors and field effect transistors (FETs).
The thermal noise resulting from feedback resistors, $\sigma^2_{\mathrm{thermal,Res}}$, is known [6]:
$$\sigma^2_{\mathrm{thermal,Res}} = \frac{8\pi k T_k}{G}\,\eta A_R I_2 B^2, \tag{26}$$
where k is Boltzmann's constant; $T_k$ is the absolute temperature; $\eta$ is the fixed capacitance of the photo detector per unit area; G is the open-loop voltage gain; $A_R$ is the physical area of the detector in a PD; B is the equivalent noise bandwidth, and it is equal to the data rate. The thermal noise resulting from FETs, $\sigma^2_{\mathrm{thermal,FET}}$, is known [6]:
$$\sigma^2_{\mathrm{thermal,FET}} = \frac{16\pi^2 k T_k \Gamma}{g_m}\,\eta^2 A_R^2 I_3 B^3, \tag{27}$$
where $\Gamma$ is the FET channel noise factor; $I_3$ is one of the noise bandwidth factors; $g_m$ is the FET transconductance; $\eta$ is the fixed capacitance of the photo detector per unit area; $A_R$ is the physical area of the detector in a PD; B is the equivalent noise bandwidth, and it is equal to the data rate.
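Putting Equations (23)-(27) together (a sketch; the numerical parameter values are illustrative and not necessarily those of Table 4):

```python
import numpy as np

Q_E = 1.602e-19   # electronic charge, C
K_B = 1.381e-23   # Boltzmann constant, J/K

def snr_db(pr_signal, pr_isi, gamma=0.53, B=100e6, i_bg=5100e-6,
           I2=0.562, I3=0.0868, tk=295, G=10, eta=1.12e-6, ar=1e-4,
           gm=30e-3, gamma_fet=1.5):
    """Equation (23) with the noise terms of Equations (24)-(27).

    eta is in F/m^2 (1.12e-6 F/m^2 = 112 pF/cm^2), ar in m^2 (1 cm^2).
    """
    shot = 2*Q_E*gamma*(pr_signal + pr_isi)*B + 2*Q_E*i_bg*I2*B        # (24)+(25)
    th_res = (8*np.pi*K_B*tk/G) * eta*ar * I2 * B**2                   # (26)
    th_fet = (16*np.pi**2*K_B*tk*gamma_fet/gm) * (eta*ar)**2 * I3 * B**3  # (27)
    noise = shot + th_res + th_fet + (gamma*pr_isi)**2
    return 10*np.log10((gamma*pr_signal)**2 / noise)

print(snr_db(pr_signal=25e-6, pr_isi=1.5e-6))  # about 24 dB for these numbers
```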
Simulation Data and LED Layouts
This section is divided into two parts. The first part describes the data used in the simulation. The second part determines the three different LED layouts (normal, optimal illumination, optimal channel quality). All the simulations in this paper were completed using MATLAB software.
Conventional LED Layout Model
The conventional indoor LED layout model is shown in Figure 10. The room size is 5 m × 5 m, the height of the table is 0.75 m from the floor, and the lamps are mounted 2.5 m above the floor. The four lamps are symmetrically distributed in the room, and each lamp consists of 60 × 60 LED chips with a pitch of 0.01 m [6]. From Figure 10, it is known that if the distance x between the edge of one of the lamps and the edge of the adjacent wall is determined, the overall layout of the indoor LEDs is uniquely determined.
Simulation Parameters
The simulation settings are shown in Table 1. The parameters of the LED chips are shown in Table 2. The parameters of the PDs are shown in Table 3.

Table 3. Parameters of the photo diodes (PDs).

Item | Data
Detector physical area, AR | 1 cm²
Gain of an optical filter, TS | 1
FOV, Ψc | 60°
Refractive index of a lens, n | 1.5
O/E conversion efficiency, γ | 0.53 A/W

The parameters of the noise calculation are shown in Table 4.
Normal Layout
The normal layout is the equally spaced layout used in [6]. Each lamp is located at the center of the respective area, and x is 0.955 m, as shown in Figure 11.
Optimal Layout of Illumination
Drawing on existing literature [2], we use the parameter $Q_E$ to evaluate the horizontal illuminance:
$$Q_E = \frac{\bar{E}_v}{\sqrt{\mathrm{var}(E_v)}}, \tag{28}$$
where $\bar{E}_v$ is the mean of the horizontal illuminance and $\mathrm{var}(E_v)$ is the variance of the horizontal illuminance.
The larger QE is, the more uniform the horizontal illumination distribution will be. The relationship between x and QE is shown in Figure 12. As can be seen from the figure, when x is 0.77 m, QE reaches the maximum value of 8.0945, meaning that this LED layout is the optimal layout of illumination under the conventional indoor LED layout model.
Optimal Layout of Channel Quality
Drawing on existing literature [4], we use the parameter $Q_{\mathrm{SNR}}$ to evaluate the channel quality:
$$Q_{\mathrm{SNR}} = \frac{\overline{\mathrm{SNR}}}{\sqrt{\mathrm{var}(\mathrm{SNR})}}, \tag{29}$$
where $\overline{\mathrm{SNR}}$ is the mean of the SNR and $\mathrm{var}(\mathrm{SNR})$ is the variance of the SNR. The larger $Q_{\mathrm{SNR}}$ is, the more uniform the SNR distribution will be. The relationship between x and $Q_{\mathrm{SNR}}$ is shown in Figure 13. As can be seen from the figure, when x is 0.97 m, $Q_{\mathrm{SNR}}$ reaches the maximum value of 18.1616, meaning that this LED layout is the optimal layout of channel quality under the conventional indoor LED layout model.
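The layout scan itself can be sketched as follows (for brevity this uses only the direct LOS illuminance of Equation (11), whereas the paper's Q includes the first reflection; the same loop applies to $Q_{\mathrm{SNR}}$ with the SNR map in place of the illuminance map):

```python
import numpy as np

def illuminance_map(lamps, i0=2000.0, m=1.0, h=1.75, npts=51):
    """Direct horizontal illuminance (Equation (11)) over the desk plane."""
    g = np.linspace(0, 5, npts)
    xg, yg = np.meshgrid(g, g)
    E = np.zeros_like(xg)
    for lx, ly in lamps:
        d2 = (xg - lx) ** 2 + (yg - ly) ** 2 + h ** 2
        E += i0 * (h / np.sqrt(d2)) ** (m + 1) / d2
    return E

def uniformity_q(values):
    """Q = mean / standard deviation; larger Q means a more uniform map."""
    return values.mean() / values.std()

# Scan the edge-to-wall distance x; lamp is 0.6 m wide, so centers sit at x + 0.3
best = max(
    (uniformity_q(illuminance_map(
        [(x + 0.3, x + 0.3), (x + 0.3, 4.7 - x), (4.7 - x, x + 0.3), (4.7 - x, 4.7 - x)]
    )), x)
    for x in np.arange(0.0, 1.91, 0.01)
)
print(f"Q_E = {best[0]:.2f} at x = {best[1]:.2f} m")
```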
Data Analysis
In Section 3, three kinds of LED layouts were determined. In this section, we compare and analyze the illuminance distribution, SNR distribution, M-DS distribution, and IR of the three layouts.
Illuminance Distribution
The illuminance distribution of the three different LED layouts is simulated. The simulation results are shown in Figure 14, and the data comparison is shown in Table 5. In Section 3, we found that the normal layout and the optimal layout of channel quality are so similar that the difference between the illuminance distribution data of these layouts is very small. The standard deviation (STD) of the optimal layout of illumination is reduced by 42.7798 and 48.7914 to 120.4515 lx compared with the normal layout's 163.2313 lx and the optimal layout of channel quality's 169.2429 lx.
In general, the illuminance distribution of the three layouts meets the standards of the International Organization for Standardization [8]. However, it is recommended to use the optimal layout of illumination in applications with high illumination uniformity requirements.
SNR Distribution
The SNR distribution of the three different LED layouts is simulated. The simulation results are shown in Figure 15. An SNR of at least 13.6 dB is required for a stable communication link [6]. In Figure 16, the dark area is the area in which SNR < 13.6 dB. The data comparison is shown in Table 6. From Table 6, the effective area percentage of the optimal layout of channel quality is increased by 0.32% and 6.08% to 88.80% as compared with the normal layout's 88.48% and the optimal layout of illumination's 82.72%. The STD of the SNR distribution of the optimal layout of channel quality is reduced by 0.0116 and 0.7338 to 0.7978 dB as compared with the normal layout's 0.8094 dB and the optimal layout of illumination's 1.5316 dB. The minimum SNR of the optimal layout of channel quality is increased by 0.1138 and 1.1945 to 13.0040 dB as compared with the normal layout's 12.8902 dB and the optimal layout of illumination's 11.8095 dB.
The normal layout and the optimal layout of channel quality are so similar that either can be chosen in most practical applications, but it is recommended to use the optimal layout of channel quality in applications with high SNR requirements.
M-DS Distribution
In an indoor VLC system, there are many different channel paths; the arrival times of the signals via the different paths differ due to the different path lengths, and the pulse width of the received signal is widened by the multipath effect. This phenomenon is called delay spread.
Differently from most of the existing literature [1,2,5], in this paper we consider the effect of the FOV on the M-DS distribution, and this leads to some new observations.
The M-DS refers to the difference in arrival time between the earliest arriving signal and the latest arriving signal. The M-DS distribution of the three different LED layouts is simulated. The simulation results are shown in Figure 17 and the data comparison is shown in Table 7. By comparing Figures 15 and 17, we find that, in general, where the M-DS is smaller, the SNR is larger. As for the difference in the data in the corner, it can be explained by the FOV. Due to the presence of the FOV, most signals with a long transmission path in the corner are not within the receiver's FOV and are therefore not recorded, resulting in a large drop in the M-DS at that location. The same reason causes the ISI noise power and the signal power at that position to be correspondingly reduced, while the shot noise and thermal noise are constant. All of this results in the M-DS in the corner being the shortest, but the SNR in the same position not being large. From Table 7, the minimum M-DS of the optimal layout of channel quality is reduced by 0.1093 and 1.4204 to 19.2880 ns as compared with the normal layout's 19.3973 ns and the optimal layout of illumination's 20.7084 ns. The mean of the M-DS distribution of the optimal layout of channel quality is reduced by 0.0706 and 0.8962 to 22.8768 ns as compared with the normal layout's 22.9474 ns and the optimal layout of illumination's 23.7730 ns. The maximum M-DS of the optimal layout of channel quality is reduced by 0.0549 and 0.7744 to 24.2783 ns as compared with the normal layout's 24.3332 ns and the optimal layout of illumination's 25.0527 ns.
Impulse Response
The impulse response of the three different LED layouts is simulated. The simulation results are shown in Figure 18. Since the normal layout and the optimal layout of channel quality are so close, their impulse responses are also similar. From Figure 18, there is no significant delay between the impulse response of the normal layout (optimal layout of channel quality) at the center and at the corner, and the difference between the amplitudes of the two is large. There is a significant delay between the impulse response of the optimal layout of illumination at the center and at the corner, and the difference between the amplitudes of the two is smaller.
Conclusions
Considering the first reflected light, and based on the conventional layout model and the classic indoor VLC model, we first defined the parameter Q and then plotted the curve relating the layout to the parameter Q, thus determining the optimal layout of illumination and the optimal layout of channel quality. Together with the normal layout, three layouts were determined. Then, an illuminance distribution simulation, an SNR distribution simulation, an M-DS distribution simulation, and an impulse response simulation were performed for each layout. The results show that the illuminance distributions of the three layouts meet the standards of the International Organization for Standardization. In the SNR distribution, the M-DS distribution, and the impulse response, the optimal layout of channel quality is clearly better than the optimal layout of illumination. In particular, the effective area percentage of the optimal layout of channel quality is increased by 0.32% and 6.08% to 88.80% as compared with the normal layout's 88.48% and the optimal layout of illumination's 82.72%. However, compared with the normal layout, the advantages are not very prominent.
Funding: This research was funded by National Natural Science Foundation of China, grant number 61301175.
Conflicts of Interest:
The authors declare no conflict of interest.
Parareal with a Learned Coarse Model for Robotic Manipulation
A key component of many robotics model-based planning and control algorithms is physics predictions, that is, forecasting a sequence of states given an initial state and a sequence of controls. This process is slow and a major computational bottleneck for robotics planning algorithms. Parallel-in-time integration methods can help to leverage parallel computing to accelerate physics predictions and thus planning. The Parareal algorithm iterates between a coarse serial integrator and a fine parallel integrator. A key challenge is to devise a coarse level model that is computationally cheap but accurate enough for Parareal to converge quickly. Here, we investigate the use of a deep neural network physics model as a coarse model for Parareal in the context of robotic manipulation. In simulated experiments using the physics engine Mujoco as fine propagator we show that the learned coarse model leads to faster Parareal convergence than a coarse physics-based model. We further show that the learned coarse model makes it possible to apply Parareal to scenarios with multiple objects, where the physics-based coarse model is not applicable. Finally, we conduct experiments on a real robot and show that Parareal predictions are close to real-world physics predictions for robotic pushing of multiple objects. Some real robot manipulation plans using Parareal can be found at https://www.youtube.com/watch?v=wCh2o1rf-gA .
Introduction
We present a method for fast and accurate physics predictions during non-prehensile manipulation planning and control. An example scenario is shown in Figure 1, where a robot arm pushes the marked cylindrical object into a target zone without pushing the other three objects off the table. We are interested in predicting the motion of the objects in a fast and accurate way. Physics engines like Mujoco solve Newton's equations to predict motion. They are accurate but slow. Coarse models can be built by introducing simplifying assumptions, trading accuracy for solution speed, but their lack of precision will eventually compromise the robot's chance of completing a given task successfully.
Given an initial state and a sequence of controls, the problem of predicting the resulting sequence of states is a key component of a variety of model-based planning and control algorithms [10,12,11,25,5,9,2,22,14]. Mathematically, such a prediction requires solving an initial value problem. Typically, those are solved through numerical integration over time-steps using e.g. Euler's method or Runge-Kutta methods and an underlying physics model to provide the forces. However, the speed with which these accurate physics-based predictions can be performed is still slow [6] and faster physics-based predictions can contribute significantly to contact-based/non-prehensile manipulation planning and control.
In a previous paper [3], we demonstrated that predictions for a robot pushing a single object can be made faster by combining a fine physics-based model with a simple, coarse physics-based model using the parallel-in-time method Parareal. Using 4 cores, Parareal was about a factor two faster than the fine physics engine alone while providing comparable accuracy and the same success rate for push planning with obstacle avoidance. Here, we extend these results by investigating a learned deep neural network as coarse model and show that it leads to faster Parareal convergence. We also demonstrate that Parareal can be used to speed up physics prediction in scenarios where the robot pushes multiple objects.
Related Work
Parareal has been used in many different areas. Trindade et al., for example, use it to simulate incompressible laminar flows [24]. Maday et al. have tested it for simulating dynamics in quantum chemistry [16]. The method was introduced by Lions et al. in 2001 [15]. Combinations of parallel-in-time integration and neural networks have not yet been studied widely. Very recently, Yalla and Enquist showed the promise of using a machine learned model as coarse propagator [26] for test problems. Going the other way, Schroder [21] and Günther et al. [20] recently showed that parallel-in-time integration can be used to speed up the process of training neural networks. Parareal's potential to speed up planning simulations for robotic manipulation in single-object scenarios using a physics-based coarse model was recently demonstrated by Agboh et al. [3].
Combining different physics models for robotic manipulation has been the topic of recent research, although not with a focus on improving prediction speed. Kloss et al. [13] address the question of accuracy and generalization in combined neural-analytical models. Ajay et al. [4] focus on modeling of the inherent stochastic nature of the real world physics, by combining an analytical, deterministic rigid-body simulator with a stochastic neural network.
We can make physics engines faster by using larger simulation time steps; however, this decreases the accuracy and can result in unstable behavior. To generate stable behaviour at large time-step sizes, Pan et al. [18] propose an integrator for articulated body dynamics that uses only position variables to formulate the dynamic equation. Moreover, Fan et al. [7] propose linear-time variational integrators of arbitrarily high order for robotic simulation and use them in trajectory optimization to complete robotics tasks. Recent work has focused on making the underlying planning and control algorithms faster. For example, Giftthaler et al. [8] introduced a multiple-shooting variant of the trajectory optimizer, the iterative linear quadratic regulator (iLQR), which has shown impressive results for real-time nonlinear optimal control of complex robotic systems [17,19].
Robotic manipulation
Consider the scene shown in Figure 1. The robot's manipulation task is to control the motion of the green goal object through pushing contact from the cylindrical pusher in the robot's gripper. The robot needs to push the goal object into a goal region marked with an X. It is allowed to make contact with other sliders but not push them off the table or into the goal region.
The system's state at time point n consists of the pose q and velocity q̇ of the pusher P and the N_s sliders S_1, ..., S_{N_s}:

x_n = [q^P_n, q^{S_1}_n, ..., q^{S_{N_s}}_n, q̇^P_n, q̇^{S_1}_n, ..., q̇^{S_{N_s}}_n].

The pose of slider i consists of its position and orientation on the plane: q^{S_i} = [q^{S_i}_x, q^{S_i}_y, q^{S_i}_θ]^T. The pusher's pose is q^P = [q^P_x, q^P_y]^T, and the control inputs are velocities u_n = [u^x_n, u^y_n]^T applied on the pusher at time n for a control duration of Δt.
A robotics planning and control algorithm takes in an initial state of the system x_0 and outputs an optimal sequence of controls {u_0, u_1, ..., u_{N−1}}. However, to generate this optimal sequence, the planner needs to simulate many different control sequences and predict the many resulting sequences of states {x_1, x_2, ..., x_N}.
The planner makes these simulations through a physics model F of the real world that predicts the next state x_{n+1} given the current state x_n and a control input u_n:

x_{n+1} = F(x_n, u_n, Δt). (1)

We use the general physics engine Mujoco [23] to model the system dynamics. It solves Newton's equations of motion for the complex multi-contact dynamics problem.
Parareal
Fig. 1: Example of a robotic manipulation planning and control task using physics predictions. The robot controls the motion of the green object solely through contact. The goal is to push the green object into the target region marked X. The robot must complete the task without pushing other objects off the table or into the goal region.

Normally, computing all states x_n happens in a serial fashion, by evaluating (1) first for n = 0, then for n = 1, etc. Parareal replaces this inherently serial procedure by a parallel-in-time integration process where some of the work can be done in parallel. For Parareal, we need a coarse physics model

x_{n+1} = C(x_n, u_n, Δt), (2)

which needs to be computationally cheap relative to the fine model but does not have to be very accurate. Parareal begins by computing an initial guess x^{k=0}_n of the state at each time point n of the trajectory using the coarse model.
This guess is then corrected via the Parareal iteration

x^{k+1}_{n+1} = C(x^{k+1}_n, u_n, Δt) + F(x^k_n, u_n, Δt) − C(x^k_n, u_n, Δt), (3)

for all timesteps n = 0, ..., N−1. The newly introduced superscript k counts the number of Parareal iterations.
The key point in iteration (3) is that evaluating the fine physics model can be done in parallel for all n = 0, ..., N−1, while only the fast coarse model has to be computed serially. After one Parareal iteration, x^1_1 is exactly the fine solution. After two iterations, x^2_1 and x^2_2 are exactly the fine solutions. When k = N, Parareal produces the exact fine solution. However, to produce a speed-up, we need to stop Parareal at much earlier iterations. This way, Parareal can run in less wall-clock time than running the fine model serially step-by-step. Below, we demonstrate that even after a small number of iterations, the solution produced by Parareal is of sufficient quality to allow our robot to succeed at different tasks. Note that, for the sake of simplicity, we assume here that the number of controls N and the number of processors used to parallelize in time are identical, but this can easily be generalised.
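The iteration is compact enough to sketch in full. The code below is our generic illustration of Parareal with interchangeable one-step propagators; the toy fine and coarse models stand in for Mujoco and the learned network, and the fine evaluations in each iteration are exactly the part that would be distributed across cores.

```python
import numpy as np

def parareal(x0, controls, fine, coarse, iterations):
    """Parareal for x_{n+1} = F(x_n, u_n): coarse serial sweeps plus
    parallelisable fine corrections (eq. (3) in the text)."""
    N = len(controls)
    x = [x0] + [None] * N
    for n in range(N):                       # initial coarse guess
        x[n + 1] = coarse(x[n], controls[n])
    for _ in range(iterations):
        # Fine evaluations use the *previous* iterate; in a real
        # implementation each call runs on its own core.
        fine_vals = [fine(x[n], controls[n]) for n in range(N)]
        coarse_old = [coarse(x[n], controls[n]) for n in range(N)]
        new_x = [x0]
        for n in range(N):                   # serial correction sweep
            new_x.append(coarse(new_x[n], controls[n])
                         + fine_vals[n] - coarse_old[n])
        x = new_x
    return x

# Toy example: fine model is exact linear dynamics, coarse model drops a term.
fine = lambda x, u: 0.9 * x + u
coarse = lambda x, u: x + u
states = parareal(np.array([1.0]), [0.1] * 4, fine, coarse, iterations=2)
print(states)
```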
Coarse models
In this section, we introduce two coarse physics models for Parareal -a learned coarse model and the analytical coarse model from Agboh et al. [3].
Learned coarse model
As an alternative to the coarse physics model, we train a deep neural network as a coarse model for Parareal for robotic pushing.
Network architecture
The input to our neural network model is a state x_n and a single action u_n. The output is a single next state x_{n+1}. We use a feed-forward deep neural network (DNN) with 5 fully connected layers. The first 4 contain 512, 256, 128 and 64 neurons, respectively, with ReLU activation functions. The output layer contains 24 neurons with linear activation functions.
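A sketch of this architecture in PyTorch (the framework choice is ours; the paper does not name one). The layer widths and activations follow the text; the state dimension of 28 and the reading of the 24 outputs as the poses and velocities of four sliders, with the pusher's next state following deterministically from the control, are our assumptions.

```python
import torch
import torch.nn as nn

class CoarsePushModel(nn.Module):
    """Feed-forward coarse model: (state, action) -> next state."""
    def __init__(self, state_dim, action_dim, out_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, out_dim),           # linear output activation
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

model = CoarsePushModel(state_dim=28, action_dim=2)
x_next = model(torch.zeros(1, 28), torch.zeros(1, 2))
print(x_next.shape)  # torch.Size([1, 24])
```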
Dataset
We collect training data using the physics engine Mujoco [23]. Each training sample is a tuple (x_n, u_n, x_{n+1}). It contains a randomly sampled initial state, action, and next state. We collect over 2 million such samples from the physics simulator.
During robotic pushing, a physics model may need to predict the resulting state even for cases when there is no contact between pusher and slider. We include both contact and no-contact cases in the training data.
We train a single neural network to handle one pusher with at least one and at most N_s objects being pushed (also called sliders). While collecting data for a particular number of sliders, we placed the unused sliders in distinct fixed positions outside the pushing workspace. These exact positions must be passed to the neural network at test time if fewer than N_s sliders are active. For example, if N_s = 4, to make a prediction for a 3-slider scene, we place the last slider at the same fixed position used during training.
Loss function
The standard loss function for training is the mean squared error between the network's prediction and the training data. On its own, this leads to infeasible state predictions in which there is pusher-slider or slider-slider penetration. We resolve this by adding no-penetration loss terms, so that the final loss function (Equation 4) consists of three parts. Here, W_F is a constant weight, B is the batch size, V is the number of samples per batch, x^f_{ij} is the next state predicted by the fine model, and x^{NN}_{ij} is the next state predicted by the DNN model. p_i and p_j are the positions of sliders i and j, respectively, r_p is the radius of the pusher, and r_i and r_j are the radii of sliders i and j, respectively. The first line of Equation 4 is the standard mean squared error. The second line penalizes pusher-slider penetration and the third line penalizes slider-slider penetration.
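A sketch in the spirit of Equation 4 (the exact weighting, indexing, and use of a linear hinge are our assumptions): a mean squared error term plus penalties that activate whenever predicted centre distances fall below the sums of the relevant radii.

```python
import torch

def push_loss(pred_next, true_next, pusher_pos, slider_pos,
              r_pusher, r_sliders, w_f=10.0):
    """MSE plus no-penetration penalties (cf. Equation 4).

    pred_next, true_next: (B, D) predicted / fine-model next states.
    pusher_pos: (B, 2) predicted pusher position.
    slider_pos: (B, Ns, 2) predicted slider positions.
    r_pusher: pusher radius; r_sliders: (Ns,) slider radii.
    """
    mse = torch.mean((pred_next - true_next) ** 2)

    # Pusher-slider penetration: positive when centres are closer
    # than the sum of the radii.
    d_ps = torch.norm(slider_pos - pusher_pos[:, None, :], dim=-1)
    pen_ps = torch.clamp(r_pusher + r_sliders - d_ps, min=0.0)

    # Slider-slider penetration over all pairs i < j.
    d_ss = torch.cdist(slider_pos, slider_pos)
    r_sum = r_sliders[:, None] + r_sliders[None, :]
    pen_ss = torch.clamp(r_sum - d_ss, min=0.0)
    pen_ss = torch.triu(pen_ss, diagonal=1)   # keep each pair once

    return mse + w_f * (pen_ps.mean() + pen_ss.mean())

# Example call with random tensors in place of network outputs:
B, Ns = 8, 4
loss = push_loss(torch.randn(B, 24), torch.randn(B, 24),
                 torch.randn(B, 2), torch.randn(B, Ns, 2),
                 0.05, torch.full((Ns,), 0.06))
print(loss)
```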
Finally, the network makes a single step prediction. However, robotic manipulation typically needs a multistep prediction as a result of a control sequence. To do this, we start from the initial state and apply the first action in the sequence to get a resulting next state. Then, we use this next state as a new input to the network together with the second action in the sequence and so on. This way, we repeatedly query the network with its previous predictions as the current state input.
Analytical coarse model
Agboh et al. [3] have proposed a simple, kinematic coarse physics model for pushing a single object. The model moves the slider with the same linear velocity as the pusher as long as there is contact between the two. We give details below for completeness:

q^S_{n+1} = q^S_n + [u^x_n, u^y_n, ω]^T · p_c · Δt, (5)

q̇^S_{n+1} = [u^x_n, u^y_n, ω]^T if p_c > 0, and q̇^S_{n+1} = q̇^S_n otherwise, (7)

q^P_{n+1} = q^P_n + u_n · Δt, q̇^P_{n+1} = u_n.
Here, p_c is the ratio of the contact distance d_contact (the distance travelled by the pusher while in contact with the slider) to the total pushing distance, r_c is the vector from the contact point to the object's center at the current state q^S_n, θ is the angle between the pushing direction and the vector r_c, ω is the coarse angular velocity induced by the pusher on the slider, and K_ω is a positive constant.
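For completeness, a sketch of one step of this coarse model (our illustration). Since the formula for ω, given by (6) in the original paper, depends on the contact geometry and K_ω, we treat p_c and ω as precomputed inputs here rather than deriving them.

```python
import numpy as np

def analytic_coarse_step(q_s, qdot_s, q_p, u, dt, p_c, omega):
    """One step of the kinematic coarse model (cf. eqs. (5) and (7)).

    q_s: slider pose [x, y, theta]; q_p: pusher position [x, y];
    u: pusher velocity control [ux, uy]; p_c: fraction of the push
    spent in contact; omega: coarse angular velocity induced on the
    slider (computed elsewhere from K_omega and the contact geometry).
    """
    v = np.array([u[0], u[1], omega])
    q_s_next = q_s + v * p_c * dt
    qdot_s_next = v if p_c > 0 else qdot_s
    q_p_next = q_p + np.asarray(u) * dt
    return q_s_next, qdot_s_next, q_p_next, np.asarray(u)

print(analytic_coarse_step(np.zeros(3), np.zeros(3),
                           np.zeros(2), [0.1, 0.0], 1.0, 0.5, 0.02))
```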
Planning and control
We use the predictive model based on Parareal described above in a planning and control framework for pushing an object on a table to a target location. We take an optimization approach to solve this problem. Given the table geometry, the goal position, the current state of the pusher and all sliders x_0, and an initial candidate sequence of controls {u_0, u_1, ..., u_{N−1}}, the optimization procedure outputs an optimal sequence {u*_0, u*_1, ..., u*_{N−1}} according to some defined cost.
The predictive model is used within this optimizer to roll out a sequence of controls to predict the states {x_1, ..., x_N}. These are then used to compute the cost associated with those controls. The details of the exact trajectory optimizer can be found in Agboh et al. [1]. The cost function we use penalizes moving obstacle sliders and dropping objects from the table, but encourages getting the goal object into the goal location.
We use the trajectory optimizer in a model-predictive control (MPC) framework. Once we get an output control sequence from the optimizer, we do not execute the whole sequence on the real robot serially one after the other. Instead, we execute only the first action, update x_0 with the observed state of the system, and repeat the optimization to generate a new control sequence. We repeat this process until the task is complete.
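The resulting control loop is simple once the optimiser and the observation of the real system are abstracted away; the sketch below (ours, with placeholder callables) shows only the MPC skeleton described above.

```python
def mpc_loop(x0, optimiser, execute_and_observe, task_done, horizon=4):
    """Model-predictive control: re-plan after every executed action.

    optimiser(x, horizon) returns a control sequence; only its first
    action is executed before the state is re-observed and planning
    restarts from the updated state.
    """
    x = x0
    while not task_done(x):
        controls = optimiser(x, horizon)         # trajectory optimisation
        x = execute_and_observe(controls[0])     # apply only the first action
    return x
```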
Such an optimization-based MPC approach to pushing manipulation is frequently used to handle uncertainty and improve success in the real-world [5,10,13,2]. Here, our focus is to evaluate the performance of Parareal with learned coarse model for planning and control.
Experiments and Results
In our experiments, we investigate three key issues. First, we investigate how fast Parareal converges to the fine solution for robotic pushing tasks with different coarse models. Second, we investigate the physics prediction accuracy of Parareal with respect to real-world pushing data. Finally, we demonstrate that the Parareal physics model can be used to complete real-robot manipulation tasks.
In Subsection 6.1 we provide preliminary information used throughout the experiments. Subsection 6.2 investigates convergence of Parareal for two different coarse models: the analytical coarse model for single object pushing and a learned coarse model for both single and multiple object pushing. In Subsection 6.3 we present results from real-robot experiments. First, we compare the accuracy of Parareal predictions against real-world pushing physics. Then, we show several real-robot plan executions using Parareal with a learned coarse physics model as predictive model.
Preliminaries
In all experiments, we run Mujoco at the largest possible time-step (1 ms), beyond which the simulator becomes unstable. All computations run on a standard laptop PC with an Intel(R) Core(TM) i7-4712HQ CPU @ 2.3 GHz with N = 4 cores. Our control sequences consist of four actions, each applied for a control duration Δt = 1 s.
Our real robot setup is shown in Figure 1. We have a Robotiq two-finger gripper holding the cylindrical pusher. We place markers on the pusher and sliders to sense their full pose in the environment with an OptiTrack motion capture system.
Parareal convergence
In this section we investigate how fast Parareal converges using two coarse models: the analytical model for single object pushing and the learned model for both single-object and multi-object pushing. At each iteration, we compute a root mean square (RMS) error between Parareal's predictions and the fine model's predictions of the corresponding sequence of states.
Single object pushing
We randomly sample an initial state for the pusher and slider. We also randomly sample a control sequence where the pusher contacts the slider at least once during execution. Thereafter, we execute the control sequence starting from the initial state using Parareal. For the sampled state and control sequence, we perform two runs, one using the learned model and the other using the physics model as coarse propagator in Parareal. The analytical model makes a single step prediction 170 times faster than the fine model on average, while the learned model is 130 times faster on average. While technically this means a tighter bound on speedup for the learned model (see Section II.A in Agboh et al. [3]), both models are so fast that our actual speedup is almost completely governed by the number of iterations. Therefore, the slightly higher cost of the learned versus the physics coarse model does not have a significant impact on Parareal's performance.

Fig. 3: Root mean square error (in log scale) along the full trajectory per slider in a 4-slider pushing experiment (left) using only the learned model. We find that the learned model enables Parareal convergence for the multi-object case. Two sample motions are illustrated (center and right) for multi-object physics prediction: a 4-slider Parareal prediction and a 2-slider Parareal prediction.
We collect 100 state and control sequence samples and compute the RMS error between Parareal and the fine model run in serial. The results are shown in Figure 2 (left). We see that the learned model leads to faster convergence of Parareal than the analytical model for single object pushing. This is because the learned model is more accurate. For example, the single-step prediction of the learned model, shown in red in Figure 2 (right), is much closer to the fine prediction shown in green than that of the analytical model shown in Figure 2 (center).
Multi-object pushing
We randomly sample a valid initial state for the pusher and multiple sliders. Then, similar to the single object pushing case, we also sample a random control sequence that makes contact with at least one slider. We then predict the corresponding sequence of states using Parareal. However, for multi-object pushing we use only the learned model as the coarse physics model within Parareal. The analytical model for single-object pushing would need significant modifications to work for the multi-object case. Again, we collect 100 state and control sequence samples and run Parareal for each of them. Our results are shown in Figure 3. Figure 3 (left) shows the RMS error per slider for each Parareal iteration. While there are differences in the accuracy of the predictions for different sliders, all errors decrease and Parareal converges at a reasonable pace. Some sample predictions are shown for a 4-slider environment in Figure 3 (center), and for a 2-slider environment in Figure 3 (right). In both scenes, the pusher moves forward making contact with multiple sliders and Parareal is able to predict how the state evolves.
Real robot experiments
In this section we investigate the physics prediction accuracy of Parareal with respect to real-world pushing physics. We do this for the multi-object case. In addition, we show real-world demonstrations for robotic manipulation where we use Parareal for physics prediction.
Parareal prediction vs. real-world physics
Our coarse model neural network was trained using simulated data. Here, we demonstrate that Parareal using the trained coarse model is also able to predict real-world states. We randomly set an initial state in a real-world example by selecting positions for the pusher and sliders. This state is recorded using our motion capture system. Next, we sample a control sequence and let the real robot execute it. Again, we record the corresponding sequence of states using motion capture. Then, for the recorded initial state and control sequence pair, we use Parareal to produce the corresponding sequence of states and compare the result against the states measured for the real robot with optical tracking. Figure 5 shows the RMS error between Parareal's predictions at different iteration numbers and the real-world pushing data. Vertical red bars indicate 95% confidence intervals.
Parareal's real-world error decreases with increasing iteration numbers, and it is eventually twice as accurate as the coarse model. These results indicate that Parareal with a learned coarse model can produce accurate predictions of real-world pushing physics.
Planning and control
We use the Parareal predictive model for robotic manipulation to generate plans faster than using the fine model directly. In this section, we complete 3 real robot executions with Parareal at 1 iteration. We use the learned model as the coarse model in all cases.
As can be seen in Figure 6, the robot's task is to push the green slider into the target region marked with X. The robot is allowed to make contact with other sliders but not push them off the table or into the goal region.
The robot was successful for all 3 sample scenes. Some sample plans for two scenes are shown in Figure 6. The third scene is shown in Figure 1. We find that using Parareal with a learned coarse model for physics predictions, a robot can successfully complete complex real-world pushing manipulation tasks involving multiple objects. At 1 Parareal iteration, we complete the tasks about 4 times faster than directly using the fine model.
Summary
We demonstrate the promise of using Parareal to parallelize the predictive model in a robot manipulation task involving multiple objects. As coarse model, we propose a neural network trained with a physics simulator. We show that for single object pushing, Parareal converges faster with the learned model than with the coarse physics-based model we introduced in earlier work. Furthermore, we show that Parareal with the learned model as coarse propagator can successfully complete tasks that involve pushing multiple objects. We also show that although a simulator is used to provide training data, Parareal with a learned coarse model can accurately predict experiments that involve pushing with a real robot.

Fig. 5: The resulting sequence of states for applying a random control sequence starting from some random initial state in the real world. Our goal is to assess the accuracy of the Parareal physics models with respect to real-world physics. We collect 50 such samples. These are snapshots for 3 such scenes, one per row, with the initial state on the left and the final state on the right.
Lethal Bleeding from a Duodenal Cancerous Ulcer Communicating with the Superior Mesenteric Artery in a Patient with Pancreatic Head Cancer
Pancreatic cancer often invades the duodenum and causes obstruction, but rarely causes massive duodenal bleeding. A 68-year-old male was admitted to our hospital because of vomiting. Enhanced abdominal CT showed a hypovascular tumor with air bubbles in the uncinate process of the pancreas. The tumor invaded the duodenum and metastasized to the liver and peritoneum. The main trunk of the superior mesenteric artery (SMA) was circumferentially involved. After admission, he had hematemesis and melena. Emergency gastroduodenoscopy revealed pulsating vessels in the third portion of the duodenum and he eventually experienced hemorrhagic shock. Severe bleeding occurred from his mouth and anus like a catastrophic flood. It was difficult to sustain blood pressure even with massive blood transfusion with pumping. After insertion of an intra-aortic balloon occlusion catheter, the massive bleeding was eventually stopped. Although we attempted interventional radiography, aortography revealed direct communication between the main SMA trunk and the duodenal lumen. The tumor was considered anatomically and oncologically unresectable. Thus, we did not perform further intervention. The patient died 2 h after angiography. Herein, we report the case of pancreatic head cancer causing lethal bleeding associated with tumor-involved SMA. Duodenal bleeding associated with pancreatic cancer invasion should be considered as an oncogenic emergency.
Introduction
Pancreatic cancer is highly malignant and easily invades adjacent organs such as the stomach, duodenum, and bile duct. The main symptoms are abdominal pain, jaundice, nausea, and body weight loss. However, massive gastrointestinal bleeding associated with direct invasion of the duodenum is rare. Previously, Lee et al. [1] reported that only 2.6% of pancreatic cancer patients initially presented with gastrointestinal bleeding. Usually, hemostasis can be successfully achieved with endoscopic, radiological, or surgical interventions. However, uncontrollable hemorrhage might lead to an oncological emergency in some cases.
Here we report a case of disastrous duodenal bleeding caused by direct communication between the main trunk of the superior mesenteric artery (SMA) and the duodenal lumen associated with pancreatic cancer.
Case Report
A 68-year-old male visited the primary hospital with complaints of body weight loss (20 kg/3 months) and vomiting. Abdominal ultrasonography revealed a pancreatic tumor and gastric dilatation. He was referred to our hospital with suspected duodenal obstruction associated with pancreatic cancer. Blood examination revealed elevation of biliary tract enzymes such as ALP (481 U/L) and GGT (78 U/L), elevated tumor markers such as CEA (6.8 ng/mL) and CA19-9 (987 U/mL), and slight anemia (Hb level 13.1 g/dL). Enhanced CT showed a hypovascular tumor in the uncinate process of the pancreas with dilatation of the bile duct, pancreatic duct, and stomach (Fig. 1a). The pancreatic tumor contained air bubbles, presumably from direct invasion into the duodenum. The main trunk of the SMA was circumferentially involved in the collapsing tumor (Fig. 1b). The superior mesenteric vein and splenic vein were narrowed. Multiple low-density lesions highly suspected to be metastatic tumors were also detected in both lobes of the liver (Fig. 1c). Intra-abdominal nodules suspected to be peritoneal dissemination were also detected. The patient was admitted to our department and a nasogastric tube was inserted for decompression. Elective gastrojejunostomy was scheduled several days later. The day before surgery, the drain fluid from the nasogastric tube changed from clear gastric juice to dark blood. The amount of excreted fluid was 300 mL. Emergency gastroduodenoscopy was performed, and a deep ulcer with pulsating vessels was seen in the third portion of the duodenum (Fig. 2). A hemostatic procedure was not performed because active bleeding from the ulcer was not observed. A few hours later, hemorrhagic shock occurred because of recurrent hematemesis and melena. The patient was transferred to the ICU and underwent resuscitation. Massive bleeding occurred from the mouth and anus like a catastrophic flood. It was difficult to sustain blood pressure even with massive blood transfusion and pumping. The estimated blood loss extended well beyond 10 L. In total, we administered 6,000 mL of 5% albumin, 5,040 mL of red blood cells, and 3,360 mL of fresh frozen plasma. We performed angiography to identify the source of bleeding. After aortic occlusion with an intra-aortic balloon catheter above the bifurcation of the celiac artery (Fig. 3a), the massive bleeding stopped. Aortography revealed direct communication between the main SMA trunk and the duodenal lumen (Fig. 3b, c). We did not perform further intervention because peripheral flow of the SMA could not be observed and the pancreatic cancer appeared to be anatomically and oncologically unresectable. The patient died 2 h after angiography.
Discussion
Pancreatic cancer has highly invasive characteristics and can cause various clinical symptoms such as abdominal pain, anorexia, nausea, body weight loss, and jaundice. Pancreatic cancer easily invades adjacent organs and sometimes causes gastrointestinal obstruction. Although duodenal invasion frequently occurs in patients with pancreatic cancer, massive gastrointestinal bleeding is seldom encountered. Pancreatic cancer is responsible for only 0.35-1.9% of upper gastrointestinal bleeding cases [2]. Lee et al. [1] reported that 2.6% of patients with pancreatic cancer presented with gastrointestinal bleeding as the initial manifestation. On the other hand, a few authors have reported serious hemorrhage as an initial manifestation of pancreatic cancer [3][4][5][6].
The types of bleeding associated with pancreatic cancer are divided into the following three categories: (1) rupture of esophageal or gastric varices caused by narrowing of the splenic or portal vein, (2) bleeding from the pancreatic duct orifice (hemosuccus pancreaticus or wirsungorrhagia), and (3) bleeding from direct invasion of the tumor into the stomach or duodenum [7].
Although endoscopic hemostasis is effective at stopping bleeding and is considered the first-choice treatment, such a strategy is not always successful. Additionally, endoscopic clipping of exposed vessels can drastically change the situation from a steady state to an uncontrollable state. In our case, active bleeding was not observed on gastroduodenoscopy. We did not treat the pulsating vessels located in the cancerous ulcer because their diameter appeared too large to manage endoscopically.
Transcatheter arterial embolization is useful for temporary hemostasis in cases with arterial bleeding as a second step. In cases of active bleeding, improvement of the general conditions has the highest priority. On the other hand, we should keep in mind that posttreatment ischemia can be caused by occlusion of the culprit vessels. Moreover, transcatheter arterial embolization is not always effective because of the presence of multiple collateral vessels from the celiac and superior mesenteric arterial communication. In our case, we did not perform radiological interventions because peripheral flow of the SMA could not be observed. As enhanced abdominal CT showed that the main trunk of the SMA was circumferentially involved in the collapsing tumor, prophylactic stent placement would be one of the choices of treatment. Previously, Nakai et al. [8] reported that a self-expanding bare metal stent and stent graft were successfully used to repair an SMA pseudoaneurysm and dissection after pancreaticoduodenectomy (PD). However, the possibility of graft infection should be considered even after successful hemostasis with stent placement [9]. Intra-aortic balloon occlusion (IABO) has been reported to be very useful in the case of hemorrhagic shock associated with abdominal or pelvic trauma [10]. IABO can block blood flow below the inflatable balloon and maintain central arterial pressure. In our case, after IABO above the bifurcation of the celiac artery, massive bleeding was effectively stopped. Few authors have reported the usefulness of IABO in temporary hemostasis for gastrointestinal bleeding [11][12][13]. We should keep in mind that long occlusion time is associated with abdominal organ ischemia and reperfusion injury.
Surgical hemostasis including PD is highly controversial. Z'graggen et al. [14] reported that emergency PD could be considered in the case of repetitive nontraumatic bleeding that required massive transfusion, unless the coagulation disorder showed progression. Lissidini et al. [15] suggested that emergency PD could be an effective life-saving operation for pancreaticoduodenal trauma, perforations, and bleeding. However, this approach should be performed in a high-volume center by surgeons with a high level of experience in hepatobiliary and pancreatic surgery. We did not perform emergency PD because the pancreatic cancer appeared to be anatomically and oncologically unresectable.
In conclusion, pancreatic cancer rarely causes disastrous bleeding. As the presence of air bubbles and vascular involvement may indicate an impending catastrophe, prophylactic stent placement can be considered as a treatment option. Additionally, IABO appeared to be useful for uncontrollable gastrointestinal bleeding.

Fig. 1b: The pancreatic tumor shows air bubbles (arrow). The main SMA trunk (arrowhead) is circumferentially involved in the collapsing tumor. Fig. 1c: An intrahepatic low-density lesion, highly suspected to be a metastatic tumor (arrow), and dilatation of the intrahepatic bile ducts are seen. Fig. 3a: An intra-aortic balloon catheter is placed above the bifurcation of the celiac artery (arrow). A nasogastric tube is placed in the stomach. Fig. 3b, c: Aortography shows direct communication between the main trunk of the SMA (arrow) and the inner cavity of the duodenum (arrowhead).
The Power of Two Choices for Random Walks
We apply the power-of-two-choices paradigm to a random walk on a graph: rather than moving to a uniform random neighbour at each step, a controller is allowed to choose from two independent uniform random neighbours. We prove that this allows the controller to significantly accelerate the hitting and cover times in several natural graph classes. In particular, we show that the cover time becomes linear in the number $n$ of vertices on discrete tori and bounded degree trees, of order $\mathcal{O}(n \log \log n)$ on bounded degree expanders, and of order $\mathcal{O}(n (\log \log n)^2)$ on the Erd\H{o}s-R\'{e}nyi random graph in a certain sparsely connected regime. We also consider the algorithmic question of computing an optimal strategy, and prove a dichotomy in efficiency between computing strategies for hitting and cover times.
Introduction
The power of choice paradigm asserts that when a random process is offered a choice between two or more uniformly selected options, as opposed to being supplied with just one, then a series of choices can be made to improve the overall performance. This idea was first applied to the 'balls into bins' model [5,9,31], where it was proved that the power of choice decreases the maximum load from (log n)/(log log n) to log log n when assigning n balls to n bins. The power of choice was later extensively studied for random graphs under the broader class of rule-based random graph processes, known as Achlioptas processes, see for example [1,10,11,33,34] and references therein. The power of choice has also been studied with regard to the Preferential Attachment process for growing a random connected graph; in this context, the choices may have a powerful effect on the degree distribution, see e.g. [24,30].
In this paper, we extend the power-of-two-choices paradigm to random walks on a graph. We show that for many natural classes of graphs, this results in a significant speed-up of the cover and hitting times, which are the expected times to visit all vertices or any fixed vertex from a worst-case start vertex. We study the choice random walk (CRW), which at every step is offered two uniformly random independently sampled neighbours (with repetition) of the current location and (with full knowledge of the graph) must choose one as the next step; see Section 2 for more details. We prove that the cover time of the CRW decreases to Θ(n) for grids (i.e. finite quotients of Z^d) and bounded degree trees on n vertices, and that the cover time of expander graphs decreases to O(n log log n). We note that for the simple random walk (SRW) these cover times are all Ω(n log n) and some are Θ(n²) [2]. We also consider computational questions relating to choosing a good strategy: we show that an optimal strategy for minimising a hitting time can be computed in polynomial time, but choosing an optimal strategy for minimising the cover time is NP-hard. See Section 1.2 for more details and other results.
Part of our motivation is to improve the efficiency of random walks used in algorithmic applications such as searching, routing, self-stabilization and query processing in wireless networks, peer-to-peer networks and other distributed systems. One practical setting where routing using the power of choice walk may be advantageous is in relatively slowly evolving dynamic networks such as the internet. For example, say a packet has a target destination v and each node stores a pointer to a neighbour which it believes leads most directly to v. If this network is perturbed, then the deterministic scheme may get stuck in 'dead ends', whereas a random walk would avoid this fate. The CRW which prefers edges pointed to by a node may be the best of both worlds as it would also avoid traps but may see a speed-up over the SRW when the original paths are still largely intact.
Related literature
To the best of our knowledge, Avin and Krishnamachari [3] were the first to apply the principle of the power of choice to random walks. However, their version only considers a simple choice rule where the vertex with fewer previous visits is always preferred, and ties are broken randomly. This is in the spirit of balanced allocations, the origin of the power-of-two-choices paradigm. Their results are mainly empirical and suggest a decrease in the variance of the cover time, and a significant improvement in visit load balancing. This is related to the greedy random walk of Orenshtein and Shinkar [32], which chooses uniformly from adjacent vertices that have not yet been visited (if possible). This model is well studied for expanders [8,17]. The power of choice has also been studied in the context of deterministic random walks and the rotor-router model [7,18].
Perhaps closest to our work, Azar, Broder, Karlin, Linial and Phillips [4] introduced the ε-biased random walk (ε-BRW) where at each step with probability ε > 0 a controller can choose a neighbour of the current vertex to move to, otherwise a uniformly random one is selected. The model is quite similar to ours in the sense that the controller has full knowledge of the graph when choosing a neighbour. They obtained bounds on the stationary probabilities and show that optimal strategies for maximising or minimising stationary probabilities or hitting times can be computed in polynomial time. There is some overlap with our results in Section 7, where in particular Theorem 7.4 uses a clever substitution from [4] to express an optimisation problem as a linear program. One major difference is that Azar, Broder, Karlin, Linial and Phillips restrict their study to time-independent strategies and do not investigate cover times. Three of the authors of this paper have recently extended [4] to the time-dependent setting and studied cover times for ε-BRWs [25]. The conference paper [22] collects some of our results on the CRW from here and on the ε-BRW from [25] giving a comparison between the two processes.
Azar, Broder, Karlin, Linial and Phillips [4] suggest that the most natural choice of bias for the ε-BRW is ε = Θ(1/d_max), where d_max is the maximum degree. It is shown in [22, Prop. 1] that the CRW can emulate the ε-BRW provided ε ≤ 1/d_max. However, the reverse does not hold unless the bias ε is close to 1; the main obstacle is that avoiding a particular next step is much more difficult for the ε-BRW. Further evidence that the CRW is more powerful than the ε-BRW is in the cover time bounds we prove for the CRW in Theorem 6.1 and for the time-dependent version of the ε-BRW in [25, Thm. 3.2]. For the most natural choice ε = Θ(1/d_max), these bounds differ by a factor which is almost linear in d_max, suggesting that the CRW deals better with high-degree graphs than the ε-BRW.
With regard to complexity questions, we note that for the SRW, hitting times can be expressed as the solution to a set of n linear equations and can therefore be computed in polynomial time. Determining the complexity of computing the cover time, however, is far more challenging and still remains open [2,Open Problem 6.35]. Significant progress was made by Ding, Lee and Peres [20] who discovered a deterministic polynomial time O(1)-approximation algorithm for the cover time. In this paper, we show that computing an optimal strategy for the cover time of the CRW is NP-hard.
Our results
In this section, we shall present the main results we have obtained for CRW. The numbers of these theorems correspond to where they appear in the paper, although some theorem statements have been simplified for ease of exposition.
The CRW is not reversible in general; however, we show that it can emulate certain reversible chains. Combining this with the well-known connection between electrical networks and reversible Markov chains, we obtain the following general bound on the maximum hitting time t^{two}_{hit}(G) between any two vertices of a graph G.

Theorem 1.1. For any finite graph G, we have t^{two}_{hit}(G) < min{3|E|, n²}.

This is tight up to constants at both ends of the density spectrum and improves considerably over the well-known O(n|E|) worst-case bound for the SRW. A witness to tightness for sparse graphs is traversing a path from end to end, and for dense graphs hitting a vertex connected by a single edge to a clique.
Most of this paper focuses on the cover time t^{two}_{cov}(G) for the CRW on a graph G under an optimal strategy. For the SRW, the maximum hitting time t_{hit} between any two vertices determines the cover time up to a log n factor by Matthews' bound [28, Ch. 11.2]. However, due to the effect of the choices, this does not apply to the CRW, and so we develop other methods to bound t^{two}_{cov}. The next result implies that t^{two}_{cov}(T) is linear for a bounded degree tree T:

Theorem 1.2. For every d ∈ N and every n-vertex tree T with maximum degree d, we have t^{two}_{cov}(T) ≤ 8dn.

Our strategy for achieving this changes with time and covers the vertices of T in a prescribed order.
Next, we obtain a similar result for d-dimensional grids and tori. The proof technique is different: we show that there exists a CRW strategy for the infinite d-dimensional grid under which the CRW becomes strongly recurrent. In particular, the expected crossing time of any edge is finite. We use this to deduce

Theorem 1.3. For any d, and any d-dimensional n-vertex torus or grid G, we have t^{two}_{cov}(G) = Θ(n) and t^{two}_{hit}(G) = Θ(diam(G)) = Θ(n^{1/d}).
Avin and Krishnamachari [3] conjecture a speed-up for their aforementioned local power of two choice walk on the two-dimensional grid. Theorem 1.3 corroborates this for our global version of the process but does not yet prove their conjecture.
We develop a method for boosting the probabilities of rare events in the CRW, which gives bounds on hitting and cover times. Perhaps the most important application of these methods is to expander graphs:

Theorem 1.4. For every sequence (G_n)_{n∈N} of bounded degree expanders, where G_n has n vertices, we have t^{two}_{cov}(G_n) = O(n log log n).

Theorem 1.4 is in fact an immediate corollary of a more general bound (Theorem 6.1), bounding t^{two}_{cov}(G) in terms of the hitting time (of the SRW), the relaxation time and the degree discrepancy. In particular, these bounds apply w.h.p. to the random d-regular graph for fixed d. Another application of these methods gives the following bounds for the Erdős-Rényi random graph, showing a significant improvement on cover time for the regime with subpolynomial growth of the average degree.

Theorem 1.5. Let G ~ G(n, p), where np ≥ c ln n for some fixed c > 1 and log np = o(log n). Then w.h.p. t^{two}_{cov}(G) = O(n (log log n)²).
Finally, Section 7 deals with the computational complexity of computing optimal strategies to minimise hitting and cover times. We show the following dichotomy: an optimal strategy to hit a set of vertices can be computed efficiently, whereas choosing between two cover time strategies is NP-hard. More precisely, we have Theorem 1.6. For any graph G, S ⊂ V and x ∈ V \ S, a strategy minimising the hitting time of S from x can be computed in time poly (|V|).
Notice that any strategy for covering a graph must specify a set of choice preferences from every vertex for every possible set of vertices covered and thus may have size exponential in n. This makes the second half of the dichotomy as phrased above sound somewhat modest. However, what we show is that even in the 'on-line' setting where one is given the set covered so far then just choosing the next step (outputting something of polynomial size) is NP-hard. Theorem 1.7. Given the covered set X and position v of the walk at some time, it is NP-hard to choose the next step from two neighbours of v so as to minimise the expected time for the CRW to visit every vertex not in X, assuming an optimal strategy is followed thereafter.
Our proof shows that this remains NP-hard if G is constrained to have maximum degree 3. To the best of our knowledge, this is the first intractability result for processes involving random walks with choice.
Our results for fundamental graph topologies are summarised in Table 1, along with the corresponding hitting and cover times for the SRW for ease of comparison.
Preliminaries
The CRW is a discrete time stochastic process (X_t)_{t≥0} on the vertices of a connected graph G = (V, E), influenced by a controller. The starting state is a fixed vertex; at each time t ∈ N the controller is presented with two neighbours {c^t_1, c^t_2} of the current state X_t, chosen uniformly at random with replacement, and must choose one of these neighbours as the next state X_{t+1}. We assume that at each time t the controller knows the graph G, its current position X_t ∈ V, and H_t, the history of the process so far. The controller has access to arbitrary computational resources and an infinite string of random bits ω in order to choose X_{t+1} from {c^t_1, c^t_2}. A CRW strategy is a function which, given any t, H_t and {c^t_1, c^t_2} ⊆ N(X_t), outputs one of the two offered neighbours as the next state X_{t+1}. We say that a CRW strategy is unchanging if it is independent of both time and the history of the walk. We say that an unchanging strategy is reversible if the Markov chain it defines is reversible. We recall that any reversible Markov chain is identically distributed with a random walk on an edge-weighted graph, as explained, for example, in [2]; we shall make use of this representation. For many graphs with a high degree of symmetry, we can find good reversible strategies, and we can then use tools from the theory of reversible Markov chains to analyse the CRW on these graphs. The strategies we consider may use random bits in addition to those used for choosing {c^t_1, c^t_2}; we say a strategy is deterministic if no additional random bits are used. If we are trying to minimise the expected hitting time of a given vertex, it is easy to see that there is an unchanging, deterministic optimal strategy. However, it need not be reversible; an example where it is not is given in Figure 1. We shall use reversible strategies to bound the hitting time of the optimal strategy; these will also in general not be deterministic.
For a strategy α and for a vertex v and distinct neighbours i, j, let α^j_{v,i} ∈ [0, 1] be the probability that when the walk is at v it chooses i when offered {i, j} as choices, that is, α^j_{v,i} := P(X_{t+1} = i | X_t = v, c^t = {i, j}) (this probability is also conditional on H_t, but we suppress this for notational convenience). These are the only parameters we may vary, but we shall find it convenient to define α^i_{v,i} := 1/2 for each i adjacent to v. Thus,

α^j_{v,i} + α^i_{v,j} = 1 for all i, j ∈ N(v). (1)

The transition probabilities q_{v,i} for the strategy α are then given by:

q_{v,i} = (2/d(v)²) Σ_{j ∈ N(v)} α^j_{v,i}. (2)

For a family of parameters α^j_{v,i} to yield a valid set of transition probabilities q_{v,i}, they must satisfy

Σ_{i ∈ N(v)} q_{v,i} = 1 (3)

for every v ∈ V. Notice that any weights satisfying (1) also satisfy (3). Let C^{two}_v(G) denote the minimum expected time (taken over all strategies) for the CRW to visit every vertex of G starting from v, and define the cover time t^{two}_{cov}(G) := max_{v∈V} C^{two}_v(G). Analogously, let H^{two}_x(y) denote the minimum expected time for the CRW to reach y, which may be a single vertex or a set of vertices, starting from a vertex x, and define the hitting time t^{two}_{hit}(G) := max_{x,y∈V} H^{two}_x(y). We drop the superscript from this notation when referring to the associated quantities for the SRW.
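The relations (1)-(3) are easy to check numerically. The sketch below (our illustration) draws a random table of choice probabilities α satisfying (1) and verifies that the induced transition probabilities of (2) sum to 1.

```python
import numpy as np

def transition_probs(alpha):
    """q_{v,i} = (2 / d^2) * sum_j alpha[j, i], where alpha[j, i] is the
    probability of choosing neighbour i when offered the pair {i, j},
    with the convention alpha[i, i] = 1/2."""
    d = alpha.shape[0]
    return (2.0 / d**2) * alpha.sum(axis=0)

d = 5
rng = np.random.default_rng(1)
a = rng.random((d, d))
# Enforce the consistency condition alpha[j, i] + alpha[i, j] = 1 (eq. (1)).
alpha = np.where(np.arange(d)[:, None] > np.arange(d)[None, :], a, 1.0 - a.T)
np.fill_diagonal(alpha, 0.5)

q = transition_probs(alpha)
print(q, q.sum())  # the probabilities sum to 1, verifying (3)
```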
Bounds from weighted graphs
In this section, we analyse CRW strategies which emulate a random walk on a weighted graph. We prove a tight general bound on hitting times and show that any vertex of a graph with maximum degree 3 can be hit in time proportional to its distance from the start vertex.
An extremal hitting time result
In this section, we prove that t^{two}_{hit}(G) = O(e(G)) for an arbitrary graph G, where e(G) is the number of edges. This bound is best possible up to the implied constants: for sparse graphs, the path has t^{two}_{hit} around 2e(G). For dense graphs, a clique with a pendant path, where the length of the path is growing much slower than the size of the clique, gives t^{two}_{hit} around 3n²/8.
Lemma 3.1. Fix a vertex v, and partition its neighbours into two sets, A and B.
There is an unchanging strategy for the CRW such that whenever the walker is at v, it moves to a random neighbour according to the probability distribution in which every vertex in B is twice as likely as every vertex in A.
Proof. Fix some number p ∈ [0, 1] and consider the following strategy for moving from v. If offered two choices from the same set, choose between them uniformly at random, but if offered one choice from A and one choice from B, choose the one from A with probability p. Clearly, all elements of A are equiprobable, as are all elements of B, so it is sufficient to show that for some p this strategy chooses an element of A with probability q = |A|/(|A| + 2|B|). If this is the case, each element of A will be chosen with probability 1/(|A| + 2|B|) and each element of B with probability 2/(|A| + 2|B|). When p = 0, an element of A is chosen with probability q₀ = |A|²/(|A| + |B|)², and when p = 1/2 every offered option is equally likely, so an element of A is chosen with probability |A|/(|A| + |B|) ≥ q. Since (|A| + |B|)² ≥ |A|(|A| + 2|B|), we have q₀ ≤ q, and hence for some p ∈ [0, 1/2] we have the required probability by continuity.
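The proof is constructive, and the required p can be computed by solving a linear equation in p. A short sketch (our illustration) using exact rational arithmetic:

```python
from fractions import Fraction

def choice_bias(a, b):
    """Probability p of picking the A-option in a mixed offer so that each
    B-neighbour ends up exactly twice as likely as each A-neighbour."""
    if a == 0 or b == 0:
        return Fraction(1, 2)  # no mixed offers ever arise
    n = a + b
    target = Fraction(a, a + 2 * b)     # desired total mass on A
    base = Fraction(a * a, n * n)       # P(both offers land in A)
    mixed = Fraction(2 * a * b, n * n)  # P(one offer from each set)
    p = (target - base) / mixed
    assert Fraction(0) <= p <= Fraction(1, 2)
    return p

print(choice_bias(3, 2))  # 1/7
```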
By considering the strategy at each vertex separately, we immediately get the following consequence.
Corollary 3.2. Let G = (V, E) be a locally finite weighted graph with weight function w:E → R + , having the property that for any two incident edges xy, xz either w(xy) = w(xz), or w(xy) = 2w(xz), or 2w(xy) = w(xz). Then there is an unchanging strategy for the CRW on G which simulates the random walk defined by the weights w.
Here, by the random walk defined by the weights w, we mean the reversible Markov chain where the transition probability from a vertex x to a neighbour y is proportional to w(xy). For a weighted graph (G, w), write w(G) = Σ_{e∈E(G)} w(e).

Lemma 3.3. Let (G, w) be a finite weighted graph, and let x be a vertex such that every edge incident with x has weight 1. Then for any vertex y adjacent to x, we have H^w_y(x) ≤ 2w(G) − d(x).

Proof. Since the stationary distribution is given by π(v) = w(v)/(2w(G)), where w(v) is the total weight of the edges at v, the expected return time to x is 2w(G)/d(x). Decomposing the return time over the first step gives Σ_{z∼x} H^w_z(x) = 2w(G) − d(x), and the claim follows since hitting times are non-negative.

We now restate and prove our result for CRW hitting times.
Proof. We have to show that the above bounds apply to H^{two}_y(x) for two arbitrary vertices x, y. Define a weight function w: E → R_+ by w(e) := 2^{−d(x,e)}, where d(x, e) denotes the distance from x to the nearer endpoint of e. Note that w satisfies the requirements of Corollary 3.2, so we can bound H^{two}_y(x) by the corresponding hitting time of the random walk on (G, w). We will now bound the latter hitting time.
Write d for the maximum distance of a vertex from x, and V_k for the set of vertices at distance exactly k from x. Note that if y ∈ V_{k+1} then the walk must visit V_k, V_{k−1}, ..., V_0 = {x} in turn, so H^w_y(x) is at most the sum over 0 ≤ i ≤ k of the worst-case hitting time of V_i from V_{i+1}. For each 0 ≤ k ≤ d − 1, let G_k be the simple weighted graph obtained by deleting ∪_{i<k} V_i and identifying the vertices in V_k to give a vertex v_k; if a vertex in V_{k+1} has multiple edges to V_k, delete all but one of them to leave a simple graph. Since removing edges between V_{k+1} and V_k cannot reduce the hitting time of V_k, we have for any z ∈ V_{k+1} that the hitting time of V_k from z in (G, w) is at most the hitting time of v_k from z in G_k. Note that the latter hitting time is unchanged by multiplying all weights by 2^k, and since every z ∈ V_{k+1} is adjacent to v_k and, after this rescaling, every edge at v_k has weight 1, Lemma 3.3 bounds it by 2^{k+1} w(G_k) − d(v_k). If e is an edge between V_j and V_{j+1}, then the contribution of e to the kth term of the above sum is 2^{k−j+1} if k < j, at most 1 if k = j, and 0 otherwise, so its total contribution is less than 3, and is less than 2 if e is one of the edges deleted to make G_j simple. If e is an edge within V_j, then its contribution to the kth term is 2^{k−j+1} if k < j and 0 otherwise, so its total contribution is less than 2. The first bound follows. Note that of the edges of the first type which are not deleted, there is exactly one from each vertex (other than x) to a vertex in a lower layer of G, and so these edges form a tree. Thus, there are n − 1 such edges, whose contribution is bounded by 3, and at most \binom{n}{2} − (n − 1) other edges, whose contribution is bounded by 2, giving a bound of 2\binom{n}{2} + n − 1 = n² − 1.
Cover times of subcubic graphs
In this section, we prove the CRW cover time of any subcubic graph is linear in the number of vertices, where we remind the reader that a subcubic graph is a graph with maximum degree 3.
Proposition 3.4.
Let G be any connected graph of maximum degree 3. Then H^{two}_u(v) ≤ 9 for any uv ∈ E(G). (4) If in addition G is finite with n vertices, then t^{two}_{cov}(G) = Θ(n).
If G has n vertices, let v be any vertex and choose a spanning walk in the graph starting at v and having at most 2n − 3 edges. Such a walk always exists, for example, a depth-first exploration of a spanning tree. Proceed in 2n − 3 rounds, in each round using the strategy above to hit the next vertex of the walk. Each round has expected duration at most 9 by (4), and so t^{two}_{cov}(G) ≤ 18n − 27.
Remark. Since t^{two}_{hit}(G) ≤ t^{two}_{cov}(G), this is also linear. Even for 3-regular graphs the diameter could grow linearly, so this is best possible.
Trees
In this section, we show that $t^{\mathrm{two}}_{\mathrm{cov}}(T) = \Theta(n)$ for trees T of bounded degree. Even more, we will prove that we can specify an arbitrary (closed) walk W traversing each edge of T once in each direction and cover the vertices of T in the order dictated by W in linear expected time. This is the gist of the following result:
Theorem 1.2. For every $d \in \mathbb{N}$ and every tree T with maximum degree d on n vertices, we have $\sum_{x,y \in V(T),\, xy \in E(T)} H^{\mathrm{two}}_x(y) = O(dn)$.
This result will be proved by realising a strategy to cover T as a sequence of weighted walks, and then bounding the hitting times in these walks using the Essential Edge Lemma. We shall now remind the reader of the setting and statement of this lemma: we say that an edge vx of a graph is essential if its removal would disconnect the graph into two components A(v, x) and A(x, v), say, containing v and x, respectively. Let E(v, x) be the set of edges of A(v, x).
Lemma 4.1 (Essential Edge Lemma). Let vx be an essential edge of a finite weighted graph (G, w). Then $H_v(x) = \frac{2\, w(E(v,x))}{w(vx)} + 1$, where H is the hitting time of the reversible Markov chain defined by these edge weights.
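The lemma is easy to check numerically. The sketch below (our own, with an illustrative three-vertex tree) computes the exact hitting time of the weighted walk by solving the linear system and compares it with the formula of Lemma 4.1.

```python
import numpy as np

def hitting_time(wedges, v, x):
    """Exact H_v(x) for the reversible walk with edge weights `wedges`
    (list of (a, b, weight) triples), via the linear system."""
    V = sorted({u for a, b, _ in wedges for u in (a, b)})
    i = {u: k for k, u in enumerate(V)}
    W = np.zeros((len(V), len(V)))
    for a, b, w in wedges:
        W[i[a], i[b]] = W[i[b], i[a]] = w
    P = W / W.sum(axis=1, keepdims=True)   # transition matrix
    A = np.eye(len(V)) - P
    A[i[x]] = 0.0
    A[i[x], i[x]] = 1.0                    # boundary condition h_x = 0
    rhs = np.ones(len(V))
    rhs[i[x]] = 0.0
    return np.linalg.solve(A, rhs)[i[v]]

# Path u - v - x with w(uv) = 2 and w(vx) = 1: the essential edge vx
# gives H_v(x) = 2 * w(E(v, x)) / w(vx) + 1 = 2 * 2 / 1 + 1 = 5.
print(hitting_time([("u", "v", 2.0), ("v", "x", 1.0)], "v", "x"))
```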
We now define the CRW strategies we use in the proof of Theorem 1.2. Given a tree T, we pick an arbitrary 'root' vertex $r \in V(T)$. In order to obtain an upper bound on $H^{\mathrm{two}}_x(y)$ for $x, y \in V(T)$ such that $xy \in E(T)$, we follow the (unchanging) strategy $\sigma_{xy}$ making the following choices at each vertex v: reduce the distance to y if possible; otherwise, choose uniformly an option that increases the distance to r if at least one is available. (5)
In other words, $\sigma_{xy}$ prefers the unique neighbour w of v with d(w, y) < d(v, y), avoids the unique neighbour z with d(z, r) < d(v, r), and is indifferent among all other neighbours of v. We emphasise that r was an arbitrary vertex, but it is important for our calculations below that it is fixed for all $\sigma_{xy}$, $x, y \in V(T)$.
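A single decision of $\sigma_{xy}$ is easy to implement; the following sketch (our own rendering of the rule above) assumes precomputed tree distances to the target y and to the root r.

```python
import random

def sigma_choice(v, offered, dist_y, dist_r, rng=random):
    """One decision of sigma_{xy} at vertex v: prefer the neighbour
    closer to y; otherwise avoid the neighbour closer to r."""
    a, b = offered
    if dist_y[a] != dist_y[b]:                  # one option approaches y
        return a if dist_y[a] < dist_y[b] else b
    toward_r = lambda u: dist_r[u] < dist_r[v]  # u is the parent of v
    if toward_r(a) != toward_r(b):
        return b if toward_r(a) else a          # avoid moving towards r
    return rng.choice((a, b))                   # indifferent otherwise
```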
Since the strategy $\sigma_{xy}$ is unchanging, there is an assignment of weights $w_{x,y}(e)$, $e \in E(T)$, such that the corresponding random walk (as defined after Corollary 3.2) is equidistributed with the CRW under strategy $\sigma_{xy}$ when both walks start at x and stop when first visiting y. These weights can be multiplied by any positive constant without changing the random walk they define, and we normalise by fixing $w_{x,y}(xy) = 1$ for concreteness. The rest of the weights can be calculated explicitly, and so we can apply Lemma 4.1 to give the bound
$$H^{\mathrm{two}}_x(y) \le 1 + 2\sum_{e \in E(T)} w_{x,y}(e), \qquad (6)$$
with the understanding that we set $w_{x,y}(e) = 0$ if y separates x from e, as this edge does not contribute to the sum in Lemma 4.1. The latter formula expresses $H^{\mathrm{two}}_x(y)$ as a sum of contributions of each $e \in E(T)$. The main surprise in the proof of Theorem 1.2 is the following lemma, which says that for each $e \in E(T)$, the sum of these contributions $w_{x,y}(e)$ over all $H^{\mathrm{two}}_x(y)$, $x, y \in V(T)$, $xy \in E(T)$, is bounded.

Lemma 4.2. For every tree T with maximum degree d and every $e \in E(T)$, we have $\sum_{x,y \in V(T),\, xy \in E(T)} w_{x,y}(e) \le 4d$.

An obvious double-counting argument involving (6) will then establish Theorem 1.2. We emphasise that this sum is taken over all ordered pairs of adjacent vertices. The proof of Lemma 4.2 is based on the fact that, for a fixed e, the values $w_{x,y}(e)$ decay fast with the distance d(xy, e), and even more so in the direction of r. (Here, we define the distance between two edges $xy, wz \in E$ to be $d(xy, wz) := \min\{d(x,w), d(y,z), d(x,z), d(y,w)\}$.) The following two propositions will yield quantitative bounds on the speed of this decay.
Proposition 4.3. Let G be any graph, $x \in V(G)$, and $v \in N(x)$. Consider a CRW strategy that when at x always chooses v when that choice is available, and otherwise chooses each of the available options independently with probability 1/2. Then for every $w \in N(x) \setminus \{v\}$ the resulting transition probabilities from x satisfy $q_w/q_v = \frac{d(x)-1}{2d(x)-1} < \frac12$.

Proof. We have $q_v = \frac{2d(x)-1}{d(x)^2}$, since v is chosen whenever it is among the two options, and $q_w = \frac{d(x)-1}{d(x)^2}$, since each $w \ne v$ is chosen with equal probability, and only when v is not among the options. Thus $q_w/q_v = \frac{d(x)-1}{2d(x)-1} < \frac12$.

Proposition 4.4. Let G be any graph, $x \in V(G)$, and $v, y \in N(x)$ distinct. Consider a CRW strategy that when at x always chooses v when available, never chooses y when another option is available, and otherwise chooses each of the available options independently with probability 1/2. Then $q_y/q_v = \frac{1}{2d(x)-1}$.

Proof. As in the proof of Proposition 4.3, we have $q_v = \frac{2d(x)-1}{d(x)^2}$, while $q_y = \frac{1}{d(x)^2}$ since y is chosen only when both options are y.

Armed with these propositions, we can now prove Lemma 4.2.
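Both propositions can be verified by exact enumeration of the $d^2$ ordered pairs of offered neighbours. The following sketch (ours) computes the transition law at a single vertex of degree d with neighbours 0, …, d−1, where 0 plays the role of the preferred vertex and 1 the avoided one.

```python
from fractions import Fraction

def transition_law(d, prefer=None, avoid=None):
    """Exact CRW transition law at a degree-d vertex with neighbours
    0..d-1: always take `prefer` if offered, never take `avoid` unless
    forced, otherwise split the pair with probability 1/2 each."""
    q = {w: Fraction(0) for w in range(d)}
    u = Fraction(1, d * d)              # weight of one ordered pair
    for a in range(d):
        for b in range(d):
            if prefer in (a, b):
                q[prefer] += u
            elif a == b:
                q[a] += u
            elif a == avoid:
                q[b] += u
            elif b == avoid:
                q[a] += u
            else:
                q[a] += u / 2
                q[b] += u / 2
    return q

d = 5
q43 = transition_law(d, prefer=0)               # Proposition 4.3
assert q43[1] / q43[0] == Fraction(d - 1, 2 * d - 1)
q44 = transition_law(d, prefer=0, avoid=1)      # Proposition 4.4
assert q44[1] / q44[0] == Fraction(1, 2 * d - 1)
```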
Proof of Lemma 4.2. Fix $e \in E(T)$. We split $\Lambda := \sum_{x,y \in V(T),\, xy \in E(T)} w_{x,y}(e)$ as a sum $\Lambda = \sum_{i \in \mathbb{N}} \Lambda_i$ of 'layers' $\Lambda_i$, corresponding roughly to distance from e, and show that $\Lambda_i$ decays exponentially in i.
Let P be the path from e to r in T (excluding e), and let $L_0$ be the set of all edges of P and all edges incident with P (including e). Let $\Lambda_0 := \sum_{x,y \in V(T),\, xy \in L_0} w_{x,y}(e)$ be the total weight assigned to e by pairs of adjacent vertices of $L_0$. Define $L_i$, $i \ge 1$, recursively as the set of edges incident with $L_{i-1}$ not contained in $\bigcup_{j<i} L_j$, and let $\Lambda_i := \sum_{x,y \in V(T),\, xy \in L_i} w_{x,y}(e)$. See Figure 2 for an illustration of the different sets $L_i$.
Claim. The following inequalities hold: (a) $\Lambda_0 \le 2d$; (b) $\Lambda_i \le \Lambda_{i-1}/2$ for every $i \ge 1$.

Proof of claim. Let $x_1, \dots, x_k$, where $x_k = r$, be the vertices of P as they appear from e to r.
Recall that $w_{x_{i+1},x_i}(e) = 0$ for every $i \ge 1$, as e is separated from $x_{i+1}$ by removing $x_i$ and thus does not contribute to the sum in the formula for $H_{x_{i+1}}(x_i)$ from Lemma 4.1. In the other direction, we have $w_{x_i,x_{i+1}}(x_{j-1}x_j) / w_{x_i,x_{i+1}}(x_j x_{j+1}) < 1/2$ because this ratio coincides with the ratio of the corresponding transition probabilities by the definitions. Moreover, at each $x_j$, $1 \le j < i$, the strategy $\sigma_{x_i x_{i+1}}$ makes the same choices as $\sigma_{x_j x_{j+1}}$, hence by Proposition 4.3 again, $w_{x_i,x_{i+1}}(e) < (1/2)^i$, with the convention that $x_0 x_1 = e$. Our claim follows by multiplying these fractions for j ranging from 1 to i. For each of the at most $d - 1$ other edges $x_i z \ne e$ of $L_0$ incident with $x_i$, where $i \ge 1$, we use the rather generous bound $w_{x_i,z}(e) < (1/2)^{i-1}$, which is true by similar arguments. Adding these contributions, we obtain $\Lambda_0 \le 2d$, proving (a). For (b), we will bound the contribution of the edges incident with yv to $\Lambda_i$ in terms of the contribution of yv to $\Lambda_{i-1}$. For this, let $vw \in L_i$. Note that v separates w from r, and so first, $w_{w,v}(e) = 0$. Second, this implies that $\sigma_{vw}$ avoids moving from v to y whenever possible by (5). Thus, Proposition 4.4 yields $w_{v,w}(e) \le \frac{1}{2d(v)-1}\, w_{y,v}(e)$ for each edge $yv \in L_{i-1}$. Adding together over the at most $d(v) - 1$ such edges vw, and noting that at most one end vertex v of yv is incident with edges in $L_i$ by construction, we finally deduce $\Lambda_i \le \Lambda_{i-1}/2$. Combining both parts of the Claim proves our statement, as $\Lambda = \sum_i \Lambda_i \le 2\Lambda_0 \le 4d$.
It is now easy to complete the proof of Theorem 1.2.
Proof of Theorem 1.2. By (6), we have
$$\sum_{x,y \in V(T),\, xy \in E(T)} H^{\mathrm{two}}_x(y) \le \sum_{x,y \in V(T),\, xy \in E(T)} \Big(1 + 2 \sum_{e \in E(T)} w_{x,y}(e)\Big).$$
Changing the order of summation, and then applying Lemma 4.2 to each summand, we bound the right-hand side by $2(n-1) + 2 \sum_{e \in E(T)} 4d \le 2(n-1)(1 + 4d) = O(dn)$.
Infinite graphs and cover time of tori
In this section, we bound the cover time of the d-dimensional discrete torus $\mathbb{Z}^d_k$, which has $n = k^d$ vertices. Here, we think of the dimension d as being fixed while the side length k grows. In order to prove a linear bound on the cover time, we will instead consider the infinite limit $\mathbb{Z}^d$ and infinite (but locally finite) graphs more generally.
For infinite graphs, it is meaningless to ask about the CRW cover time, but still interesting to ask about hitting times. The most fundamental question is whether these can be made finite, which corresponds to asking for positive recurrence.
Definition 5.1. A graph is positive choice recurrent (PCR) if there exists an unchanging strategy for the CRW such that the expected return time to any given vertex is finite. A graph is strongly PCR if for every p ∈ (0, 1) there exists an unchanging CRW strategy such that expected return times are finite for the process which, at every time step, takes a step of the CRW with that strategy with probability p and a step of the SRW otherwise.
A natural question is whether there is a strategy under which the walk becomes a transient Markov chain. The answer is always yes: fixing a root r and giving each edge uv weight $2^{\min(d(u,r),\, d(v,r))}$ produces a suitable weighting to apply Corollary 3.2. This weighted graph is transient because any infinite geodesic starting at the root has total resistance $\sum_{k \ge 0} 2^{-k} = 2$ (see e.g. [29, Thm. 2.3]), and taking other edges into account cannot increase the effective resistance to infinity.
While positive recurrence is the property which will be useful to us, we might also ask for the weaker property of choice recurrence, where we simply require return times to be almost surely finite. It is possible for a graph to be choice recurrent but not PCR; indeed, there are graphs which are recurrent under the SRW but not PCR.
Remark. Proposition 3.4 implies that any graph of maximum degree 3 is PCR. This is not true for higher degrees, since for the infinite 4-regular tree any strategy is more likely to move away from a given target vertex than towards it.
Note that $\mathbb{Z}^d = \mathbb{Z}^{d-1} \,\square\, \mathbb{Z}$, where $\square$ denotes the Cartesian product. We will need the following result about Cartesian products of PCR graphs.
Lemma 5.2. If G is PCR, H is strongly PCR and both G, H are regular, then $G \,\square\, H$ is PCR.
Proof. Define the p-product of two time-homogeneous Markov chains A, B to be the chain with state space $S(A) \times S(B)$ where at each time step, with probability p, a transition of B occurs, and otherwise a transition of A occurs. If both chains are irreducible and positive recurrent, then so is the p-product (this follows easily from the existence of stationary distributions). Now we define a strategy for the CRW on $G \,\square\, H$ as follows. If at least one of the choices given is a move in the H co-ordinate, we make such a move. Now the probability of exactly one of the options being a move in H is $\frac{2rs}{(r+s)^2}$, where G is r-regular and H is s-regular, and the probability of both options being moves in H is $\frac{s^2}{(r+s)^2}$. Thus, conditional on at least one option being in H, both are in H with probability $\frac{s}{2r+s}$. There is a strategy on H, for this probability of having two choices, which reaches the root in finite expected time; whenever we move in the H co-ordinate we use this strategy. If both choices are moves in G, then we follow the appropriate strategy for the random walk with two choices in G. The resulting Markov chain is the $\frac{2rs+s^2}{(r+s)^2}$-product of positive recurrent Markov chains on G and H, hence positive recurrent.
The same argument shows that if in addition G is strongly PCR, then so is $G \,\square\, H$. Lemma 5.2 allows us to conclude that $\mathbb{Z}^d$ is PCR and consequently obtain a bound on its cover times and hitting times.
Hitting and cover times in expanders
In this section, we prove bounds on the cover and hitting times of the CRW on a graph G in terms of fundamental parameters. First, we introduce our notation. Let G be a graph with n vertices, and write $d_{\max}$, $d_{\min}$ and $d_{\mathrm{avg}}$ for the maximum, minimum and average degree of G, respectively. Let $t_{\mathrm{rel}}$ be the relaxation time of G, defined as $\frac{1}{1-\lambda_2}$, where $\lambda_2$ is the second largest eigenvalue of the transition matrix of the lazy random walk (LRW) on G with loop probability 1/2. Recall that $t_{\mathrm{hit}}$ is the maximum over all pairs $u, v \in V$ of the expected time it takes the SRW to reach u from v. Our first result bounds the CRW cover time.
Theorem 6.1. For any connected n-vertex graph G the following holds:
$$t^{\mathrm{two}}_{\mathrm{cov}}(G) = O\!\left(\frac{t_{\mathrm{hit}}}{1-\gamma_{d_{\max}}} \cdot \log\!\left(\frac{d_{\mathrm{avg}}}{d_{\min}} \cdot t_{\mathrm{rel}} \cdot \log n\right)\right).$$
We also bound hitting times. First, we define the exponent $\gamma_d = \log_d\!\big(\frac{d^2}{2d-1}\big)$; note that $\gamma_d$ is increasing in d, $\gamma_d < 1$ and $1 - \gamma_d \sim 1/\log_2 d$. Also recall that for a set $S \subseteq V$ we let $\pi(S) = \sum_{s \in S} \pi(s)$ be the stationary probability of S.

Theorem 6.2. For any graph G, and any $x \in V$ and $S \subset V$, we have $H^{\mathrm{two}}_x(S) \le 12 \cdot \pi(S)^{-\gamma_{d_{\max}}} \cdot t_{\mathrm{rel}} \cdot \ln n$; this bound also holds for return times. Consequently, $t^{\mathrm{two}}_{\mathrm{hit}}(G) \le 12 \left(\frac{n\, d_{\mathrm{avg}}}{d_{\min}}\right)^{\gamma_{d_{\max}}} t_{\mathrm{rel}} \ln n$.

We say that a sequence of graphs $(G_n)$ is a sequence of expanders if $t_{\mathrm{rel}}(G_n) = O(1)$. Theorems 6.1 and 6.2 yield the following corollary:

Theorem 1.4. For every sequence $(G_n)_{n \in \mathbb{N}}$ of bounded degree n-vertex expanders, we have $t^{\mathrm{two}}_{\mathrm{cov}}(G_n) = O(n \log \log n)$ and $t^{\mathrm{two}}_{\mathrm{hit}}(G_n) \le n^{\alpha}$ for some fixed $\alpha < 1$.

These are significantly less than the corresponding cover and hitting times of the SRW, which are $\Omega(n \log n)$ and $\Omega(n)$, respectively [2, Thm. 10.1]. Theorems 6.1 and 6.2 will follow from Theorem 6.3 below. For a given graph G, we consider possible trajectories of a (non-lazy) walker, that is, finite sequences of vertices in which any two consecutive vertices are adjacent; the length of a trajectory will be the number of steps taken. In the following, we use bold characters to denote trajectories in G, and if $u \in V(G)$, then $\mathbf{u}$ will denote the length-0 trajectory from u. Fix a non-negative integer t and a set S of trajectories of length t.
Let $p_{\mathbf{x},S}$ denote the probability that extending a trajectory $\mathbf{x}$ to length t according to the law of a SRW results in a member of S. Let $q_{\mathbf{x},S}$ denote the corresponding probability under the CRW law; this probability will depend on the particular strategy used. This function can encode probabilities of many events of interest, such as 'the graph is covered by time t', 'the walk is in a set X at time t' or 'the walk has hit a vertex x by time t', for example. However, let us emphasise that our result in fact applies to any possible event.

Theorem 6.3. Let G be a graph, $u \in V$, $t > 0$ and S be a set of trajectories of length t from u. Then, there exists a strategy for the CRW such that $q_{u,S} \ge p_{u,S}^{\gamma_{d_{\max}}}$.
We also give an analogue of Theorem 6.3 for bad events. This analogue, unlike Theorem 6.3, gives an exponent which does not depend on the maximum degree $d_{\max}$ of G, and so a significant reduction is possible even if $d_{\max}$ is large.

Theorem 6.4. Let G be a graph, $u \in V$, $t > 0$, and S be a set of trajectories of length t from u. Then there exists a strategy for the CRW such that $q_{u,S} \le p_{u,S}^2$.
Remark. The exponent 2 in Theorem 6.4 is best possible, since we have equality whenever t = 1, and therefore also when t > 1 but every trajectory of the SRW of length t − 1 has the same probability of reaching S. Similarly, the exponent given in Theorem 6.3 is best possible, as evidenced by the case where this probability is $1/d_{\max}$ for every trajectory of length t − 1.
After stating two technical lemmas in Section 6.2, we then explain an alternative way of considering the CRW in Section 6.3, which enables the proof of Theorems 6.3 and 6.4 to be completed. To motivate the importance of Theorem 6.3, we shall begin by showing how it implies our main results on cover time and hitting times.
Deducing Theorems 6.1 and 6.2 from Theorem 6.3
In order to prove our main bounds from the key tool, Theorem 6.3, we must first overcome the obstacle that Theorem 6.3 is expressed in terms of the SRW probabilities, whereas our bounds involve the relaxation time, which is defined in terms of the LRW. The reason for using the LRW to define relaxation time is to ensure that the associated Markov chain is aperiodic. Our next lemma resolves this issue by relating the relaxation time to SRW probabilities.
Write $p^{(t)}_{x,\cdot}$ and $\tilde p^{(t)}_{x,\cdot}$ for the distribution of the SRW and LRW, respectively, after t steps started at x, and write $\pi(S)$ for the stationary probability of a set S (note that the two walks have the same stationary distribution).
Lemma 6.5. For any graph G, $S \subset V$ and $x \in V$, there exists $t \le 4 t_{\mathrm{rel}} \ln n$ such that $p^{(t)}_{x,S} \ge \pi(S)/3$.
Proof. If G is bipartite, then we may find a subset $\tilde S \subseteq S$ which lies entirely within one part satisfying $\pi(\tilde S) \ge \pi(S)/2$. Otherwise, the SRW is aperiodic and we set $\tilde S = S$. We now consider the multigraph $\bar G$ formed from G by contracting $\tilde S$ to a single vertex $\tilde s$, retaining all edges (with edges inside $\tilde S$ becoming loops at $\tilde s$). Retaining edges ensures that the stationary probability of $\tilde s$ in $\bar G$ is precisely $\pi(\tilde S)$. Let $\tilde \lambda_2$ be the second largest eigenvalue of the LRW on $\bar G$. Then for any $x \notin \tilde S$ and $t \ge 0$, by [28, (12.11)], we have $|\tilde p^{(t)}_{x,\tilde s} - \pi(\tilde S)| \le \sqrt{\pi(\tilde S)/\pi(x)} \cdot e^{-t(1-\tilde\lambda_2)}$. It follows that if we run the LRW on $\bar G$ for $T = \log\!\big(3/\sqrt{\pi(\tilde S)\pi(x)}\big)/(1-\tilde\lambda_2)$ steps then $\tilde p^{(T)}_{x,\tilde s} \ge 2\pi(\tilde S)/3$. Now, we can express the density of the LRW by $\tilde p^{(T)}_{x,\cdot} = \mathbb{E}\big[p^{(X_T)}_{x,\cdot}\big]$, where the random variable $X_T \sim \mathrm{Bin}(T, 1/2)$ is the number of non-lazy steps taken by the LRW in time T. Thus, there is some $t \le T$ with $p^{(t)}_{x,\tilde S} \ge 2\pi(\tilde S)/3 \ge \pi(S)/3$. We can assume $n \ge 2$, or else the result holds trivially, so $\log\!\big(3/\sqrt{\pi(\tilde S)\pi(x)}\big) \le \log 3 + 2 \log n \le 4 \log n$. Finally, [2, Cor. 3.27] gives that $\tilde\lambda_2 \le \lambda_2$, so $T \le 4 t_{\mathrm{rel}} \ln n$.
Our strategy to bound the cover time will be to emulate the SRW until most of the vertices are covered, only using the additional strength of the CRW when there are few uncovered vertices remaining. We will need a simple lemma to bound how long the first stage takes.
Lemma 6.6. Let U(t) be the number of unvisited vertices at time t by a SRW on a graph, and let $T_{n/2^x}$ be the number of SRW steps taken before $U \le n/2^x$. Then $\mathbb{E}[U(2x\, t_{\mathrm{hit}})] \le n/2^x$ and $\mathbb{E}[T_{n/2^x}] \le 4(x+1)\, t_{\mathrm{hit}}$.
Proof. Let $v \in V$. Then by Markov's inequality, $\mathbb{P}_w[X_t \ne v\ \ \forall\, 0 \le t \le 2t_{\mathrm{hit}}] \le 1/2$ for any $w \in V$. Thus, the probability that v is not visited by time $2x \cdot t_{\mathrm{hit}}$ is at most $2^{-x}$ by sub-multiplicativity, and so the expected number of unvisited vertices at time $2x \cdot t_{\mathrm{hit}}$ is at most $n \cdot 2^{-x}$. By the above, $\mathbb{E}[U(2(x+1)t_{\mathrm{hit}})] \le n/(2 \cdot 2^x)$, and so $\mathbb{P}[U(2(x+1)t_{\mathrm{hit}}) \ge n/2^x] \le 1/2$ by Markov's inequality. Considering sections of length $2(x+1)t_{\mathrm{hit}}$ separately, and continuing until one section covers the required number of vertices, we use in expectation at most two such sections, thus $\mathbb{E}[T_{n/2^x}] \le 4(x+1)\, t_{\mathrm{hit}}$.
We now have what we need to prove the cover and hitting time bounds.
Proof of Theorem 6.1. For convenience, we write $\gamma = \gamma_{d_{\max}}$. We first emulate the SRW (i.e. set $\alpha^z_{x,y} = 1/2$ for all $x, y, z \in V(G)$ with $y, z \in \Gamma(x)$) until all but $m = n/\log^C n$ vertices have been visited, for some C to be specified later. Let $\tau_1$ be the expected time to complete this phase. Then, by Lemma 6.6, we have $\tau_1 \le 4(C \log_2 \log n + 1)\, t_{\mathrm{hit}}$.
We cover the remaining vertices in m different phases, labelled $m, m-1, \dots, 1$, each of which reduces the number of uncovered vertices by 1. In phase i, a set of i vertices is still uncovered, and we write $S_i$ for this set. By Lemma 6.5, for any vertex x there is some $t \le 4 t_{\mathrm{rel}} \log n$ such that $p^{(t)}_{x,S_i} \ge \pi(S_i)/3 \ge d_{\min}\, i/(3 n d_{\mathrm{avg}})$, and thus $q^{(t)}_{x,S_i} \ge \big(d_{\min}\, i/(3 n d_{\mathrm{avg}})\big)^{\gamma}$ by Theorem 6.3. Since from any starting point we can achieve this probability of hitting a vertex in $S_i$ within the next $4 t_{\mathrm{rel}} \log n$ steps, the expected number of attempts needed is at most $\big(d_{\min}\, i/(3 n d_{\mathrm{avg}})\big)^{-\gamma}$, meaning that the expected time required to complete phase i is at most $4 t_{\mathrm{rel}} \log n \cdot \big(\frac{3 n\, d_{\mathrm{avg}}}{d_{\min}\, i}\big)^{\gamma}$. Hence, the expected time $\tau_2$ to complete all m phases satisfies
$$\tau_2 \le 4 t_{\mathrm{rel}} \log n \cdot \left(\frac{3 n\, d_{\mathrm{avg}}}{d_{\min}}\right)^{\gamma} \sum_{i=1}^{m} i^{-\gamma}. \qquad (7)$$
Then, since $\sum_{i=1}^m i^{-\gamma} = O\big(m^{1-\gamma}/(1-\gamma)\big)$, we choose $C = \log\!\big((d_{\mathrm{avg}}/d_{\min}) \cdot t_{\mathrm{rel}} \cdot \log^2 n\big)\big/\big((1-\gamma) \log\log n\big)$; then, since $\log^{C(1-\gamma)} n = (d_{\mathrm{avg}}/d_{\min})\, t_{\mathrm{rel}} \cdot \log^2 n$ and $\gamma < 1$, this gives $\tau_2 = O(n)$ by (7) above. Since in any graph $t_{\mathrm{hit}} = \Omega(n)$, the total time is therefore $O(\tau_1)$, and for this value of C the bound of the theorem follows.

Proof of Theorem 6.2. Write $T = 4 t_{\mathrm{rel}} \ln n$. For any $x \in V$ and $S \subset V$, Lemma 6.5 gives a $t \le T$ such that $p^{(t)}_{x,S} \ge \pi(S)/3$, and Theorem 6.3 consequently gives a strategy for the CRW such that $q^{(t)}_{x,S} \ge (\pi(S)/3)^{\gamma}$. Thus, for any target set S and start vertex x, we need in expectation at most $(3/\pi(S))^{\gamma}$ attempts to hit S in at most T steps, since if an attempt fails, ending at some vertex z, we have the same bound on the probability of hitting S from z. Therefore, there is a strategy for the CRW with hitting time $H^{\mathrm{two}}_x(S) \le 12 \cdot \pi(S)^{-\gamma} \cdot t_{\mathrm{rel}} \ln n$. The second result follows since for any vertex $\pi(v) \ge \frac{d_{\min}}{n\, d_{\mathrm{avg}}}$.
The max choice and min choice operations
In this section, we introduce two operators which represent the effect of making optimal choices for a single step of the random walk, assuming that the effects of choice on future steps are already known, and prove inequalities relating them to power means. Define the max choice operator $\mathrm{MC}_2: [0,\infty)^m \to [0,\infty)$ as follows:
$$\mathrm{MC}_2(x_1, \dots, x_m) := \frac{1}{m^2} \sum_{i,j=1}^{m} \max\{x_i, x_j\}.$$
For $p \in \mathbb{R} \setminus \{0\}$, the p-power mean $M_p$ of non-negative reals $x_1, \dots, x_m$ is defined by
$$M_p(x_1, \dots, x_m) := \left(\frac{1}{m} \sum_{i=1}^{m} x_i^p\right)^{1/p}.$$
We use a key lemma which could be described as a multivariate anti-convexity inequality.

Lemma 6.7. For every $m \le d$ and all non-negative reals $x_1, \dots, x_m$, we have $\mathrm{MC}_2(x_1, \dots, x_m) \ge M_{1/\gamma_d}(x_1, \dots, x_m)$.
Proof. By the power-mean inequality, since $\gamma_m^{-1} \ge \gamma_d^{-1}$, it is sufficient to prove the case $m = d$. We show this by induction on d; we have equality for $d = 1$. Suppose that either $d = 2$, or $d \ge 3$ and the result holds for $d - 1$. Without loss of generality, using symmetry and homogeneity of both operators, we may assume that $\max\{x_1, \dots, x_d\} = x_d = 1$.

We first claim that we may further assume $x_1 = \dots = x_{d-1}$. If $d = 2$, this claim is trivial. If $d \ge 3$, set $\bar x := M_{1/\gamma_{d-1}}(x_1, \dots, x_{d-1})$. Then
$$\mathrm{MC}_2(x_1, \dots, x_{d-1}, 1) - M_{1/\gamma_d}(x_1, \dots, x_{d-1}, 1) \ge \mathrm{MC}_2(\bar x, \dots, \bar x, 1) - M_{1/\gamma_d}(x_1, \dots, x_{d-1}, 1) \ge \mathrm{MC}_2(\bar x, \dots, \bar x, 1) - M_{1/\gamma_d}(\bar x, \dots, \bar x, 1),$$
where the first inequality uses the assumption that the result holds for $d - 1$ and the second uses the power-mean inequality. Thus, replacing $x_1, \dots, x_{d-1}$ by $\bar x, \dots, \bar x$ does not increase the difference between the two operators, proving the claim. Next, note that the function $x \mapsto M_{1/\gamma_d}(x, \dots, x, 1)$ is convex on [0, 1], being the $\ell_{1/\gamma_d}$-norm of a vector depending linearly on x. Since $x \mapsto \mathrm{MC}_2(x, \dots, x, 1)$ is linear, and the two functions agree at 0 (by choice of $\gamma_d$) and at 1, this completes the proof.

Lemma 6.7 will be used to prove Theorem 6.3. In order to prove Theorem 6.4, we will need a corresponding inequality for an appropriate operator. To that end, we define the min choice operator $\mathrm{mC}_2: [0,\infty)^m \to [0,\infty)$ by
$$\mathrm{mC}_2(x_1, \dots, x_m) := \frac{1}{m^2} \sum_{i,j=1}^{m} \min\{x_i, x_j\}.$$

Lemma 6.8. For all non-negative reals $x_1, \dots, x_m$, we have $\mathrm{mC}_2(x_1, \dots, x_m) \le M_{1/2}(x_1, \dots, x_m)$.

Proof. Observe that $\min\{x_i, x_j\} \le \sqrt{x_i x_j}$, and hence $\mathrm{mC}_2(x_1, \dots, x_m) \le \frac{1}{m^2} \sum_{i,j} \sqrt{x_i x_j} = \big(\frac{1}{m} \sum_i \sqrt{x_i}\big)^2 = M_{1/2}(x_1, \dots, x_m)$.
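Both lemmas are easy to test numerically; the following sketch (ours) samples random inputs and checks the two inequalities, with $\gamma_d$ as defined in Section 6.

```python
import math
import random

def power_mean(p, xs):
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

def MC2(xs):
    m = len(xs)
    return sum(max(a, b) for a in xs for b in xs) / (m * m)

def mC2(xs):
    m = len(xs)
    return sum(min(a, b) for a in xs for b in xs) / (m * m)

random.seed(0)
for _ in range(10_000):
    d = random.randint(2, 8)
    xs = [random.random() for _ in range(d)]
    gamma = math.log(d * d / (2 * d - 1), d)
    assert MC2(xs) >= power_mean(1 / gamma, xs) - 1e-12   # Lemma 6.7
    assert mC2(xs) <= power_mean(1 / 2, xs) + 1e-12       # Lemma 6.8
```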
The tree gadget for graphs
In this section, we prove Theorem 6.3. To achieve this, we introduce the tree gadget which encodes trajectories of length at most t from u in a rooted graph (G, u) by vertices of an arborescence $(T_t, r)$, that is, a tree with all edges oriented away from the root r. Given (G, u), we represent each trajectory of length $i \le t$ started from u in G as a node at distance i from the root r in the tree $T_t$. The root r represents the trajectory of length 0 from u. There is an edge from $\mathbf{x}$ to $\mathbf{y}$ in $T_t$ if $\mathbf{x}$ is obtained from $\mathbf{y}$ by deleting the final vertex. See Figure 3 for an illustration of the tree gadget.
Also, for $\mathbf{x} \in V(T_t)$, let $\Gamma^+(\mathbf{x}) = \{\mathbf{y} \in V(T_t): \mathbf{xy} \in E(T_t)\}$ be the offspring of $\mathbf{x}$ in $T_t$; as usual we write $d^+(\mathbf{x})$ for the number of offspring. Write $|\mathbf{x}|$ for the length of the trajectory $\mathbf{x}$.

Proof of Theorem 6.3. For ease of notation, we write $\eta = 1/\gamma_{d_{\max}}$. To each node $\mathbf{x}$ of the tree gadget $T_t$, we assign the value $q_{\mathbf{x},S}$ under the CRW strategy of preferring the choice which extends to a trajectory $\mathbf{y} \in \Gamma^+(\mathbf{x})$ giving a higher value of $q_{\mathbf{y},S}$. This is well defined because both the strategy and the values $q_{\mathbf{x},S}$ can be computed in a 'bottom-up' fashion starting at the leaves, where if $\mathbf{x} \in V(T_t)$ is a leaf then $q_{\mathbf{x},S}$ is 1 if $\mathbf{x} \in S$ and 0 otherwise. Suppose $\mathbf{x}$ is not a leaf. The controller is presented with two uniformly random offspring $\mathbf{y}, \mathbf{z} \in \Gamma^+(\mathbf{x})$ and chooses $\mathbf{y}$ if $q_{\mathbf{y},S} \ge q_{\mathbf{z},S}$ and $\mathbf{z}$ otherwise. Thus, we have
$$q_{\mathbf{x},S} = \frac{1}{d^+(\mathbf{x})^2} \sum_{\mathbf{y},\mathbf{z} \in \Gamma^+(\mathbf{x})} \max\{q_{\mathbf{y},S}, q_{\mathbf{z},S}\} = \mathrm{MC}_2\big((q_{\mathbf{y},S})_{\mathbf{y} \in \Gamma^+(\mathbf{x})}\big). \qquad (9)$$
We define the following potential function $\Phi^{(i)}$ on the ith generation of the tree gadget $T_t$:
$$\Phi^{(i)} := \sum_{|\mathbf{x}| = i} p_{\mathbf{x}}\, q_{\mathbf{x},S}^{\eta}, \qquad (10)$$
where the sum ranges over all trajectories $\mathbf{x}$ of length i and $p_{\mathbf{x}}$ denotes the probability that the SRW follows $\mathbf{x}$. Notice that if $\mathbf{xy} \in E(T_t)$ then $p_{\mathbf{y}} = p_{\mathbf{x}}/d^+(\mathbf{x})$. Also, since each $\mathbf{y}$ with $|\mathbf{y}| = i$ has exactly one parent $\mathbf{x}$ with $|\mathbf{x}| = i - 1$, we can write
$$\Phi^{(i)} = \sum_{|\mathbf{x}| = i-1} \sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})} \frac{p_{\mathbf{x}}}{d^+(\mathbf{x})}\, q_{\mathbf{y},S}^{\eta}. \qquad (11)$$
We now show that $\Phi^{(i)}$ is non-increasing in i. By combining (10) and (11), we can see that the difference $\Phi^{(i-1)} - \Phi^{(i)}$ is given by
$$\Phi^{(i-1)} - \Phi^{(i)} = \sum_{|\mathbf{x}| = i-1} p_{\mathbf{x}} \left( q_{\mathbf{x},S}^{\eta} - \frac{1}{d^+(\mathbf{x})} \sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})} q_{\mathbf{y},S}^{\eta} \right).$$
Recalling (9), to establish $\Phi^{(i-1)} - \Phi^{(i)} \ge 0$, it is sufficient to show that the following inequality holds whenever $\mathbf{x}$ is not a leaf:
$$q_{\mathbf{x},S}^{\eta} \ge \frac{1}{d^+(\mathbf{x})} \sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})} q_{\mathbf{y},S}^{\eta}.$$
Raising both sides to the power $1/\eta = \gamma_{d_{\max}}$, since $d^+(\mathbf{x}) \le d_{\max}$, this inequality holds by Lemma 6.7, and thus $\Phi^{(i)}$ is non-increasing in i.
Observe that $\Phi^{(0)} = q_{u,S}^{\eta}$. Also, if $|\mathbf{x}| = t$ then $q_{\mathbf{x},S} = 1$ if $\mathbf{x} \in S$ and 0 otherwise. It follows that $\Phi^{(t)} = \sum_{|\mathbf{x}| = t,\ \mathbf{x} \in S} p_{\mathbf{x}} = p_{u,S}$. Thus, since $\Phi^{(i)}$ is non-increasing, $q_{u,S}^{\eta} = \Phi^{(0)} \ge \Phi^{(t)} = p_{u,S}$, as required.
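For the special event 'the walk visits a fixed vertex w within t steps', the bottom-up computation collapses to a small dynamic program over vertices rather than trajectories, since the optimal continuation value depends only on the current vertex and the remaining time. The sketch below is our own illustration: it computes q via the MC₂ recursion, p via the SRW average, and checks the bound of Theorem 6.3.

```python
import math

def MC2(vals):
    m = len(vals)
    return sum(max(a, b) for a in vals for b in vals) / (m * m)

def hit_probabilities(adj, u, w, t):
    """q: optimal CRW probability of visiting w within t steps;
    p: the same probability for the SRW (w is made absorbing)."""
    q = {x: 1.0 if x == w else 0.0 for x in adj}
    p = dict(q)
    for _ in range(t):
        q = {x: 1.0 if x == w else MC2([q[y] for y in adj[x]])
             for x in adj}
        p = {x: 1.0 if x == w else sum(p[y] for y in adj[x]) / len(adj[x])
             for x in adj}
    return p[u], q[u]

# Check Theorem 6.3 on a 6-cycle (d_max = 2, gamma_2 = log_2(4/3)).
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
p, q = hit_probabilities(adj, u=3, w=0, t=4)
gamma = math.log(4 / 3, 2)
assert q >= p ** gamma - 1e-12
print(p, q)
```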
Theorem 6.4 now follows similarly to Theorem 6.3.
Proof of Theorem 6.4. Construct the tree gadget to height t. We associate each node $\mathbf{x}$ with the probability $q_{\mathbf{x},S}$ under a strategy which always prefers the smaller value. For a leaf this is simply the indicator $\mathbb{1}_{\{\mathbf{x} \in S\}}$, whereas for an internal vertex it is given by $\mathrm{mC}_2\big((q_{\mathbf{y},S})_{\mathbf{y} \in \Gamma^+(\mathbf{x})}\big)$. We define a potential function by
$$\Phi^{(i)} := \sum_{|\mathbf{x}| = i} p_{\mathbf{x}} \sqrt{q_{\mathbf{x},S}}.$$
As before, $\Phi^{(0)} = \sqrt{q_{u,S}}$ and $\Phi^{(t)} = p_{u,S}$. Further, for each internal vertex $\mathbf{x}$ we have, using Lemma 6.8,
$$\sqrt{q_{\mathbf{x},S}} = \sqrt{\mathrm{mC}_2\big((q_{\mathbf{y},S})_{\mathbf{y}}\big)} \le \frac{1}{d^+(\mathbf{x})} \sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})} \sqrt{q_{\mathbf{y},S}}.$$
Summing over all $\mathbf{x}$ at level i, we obtain $\Phi^{(i)} \le \Phi^{(i+1)}$ for each $i < t$, and consequently $\sqrt{q_{u,S}} = \Phi^{(0)} \le \Phi^{(t)} = p_{u,S}$, as required.
Random graphs
We now consider CRW hitting and cover times in the Erdős–Rényi random graph G(n, p). This is the probability distribution over all n-vertex simple graphs generated by sampling each possible edge independently with probability p; see [12] for more details.

Theorem 1.5. Let $G \sim G(n, p)$, where $np \ge c \ln n$ for some fixed $c > 1$ and $\log np = o(\log n)$. Then w.h.p. $t^{\mathrm{two}}_{\mathrm{hit}}(G) = o(n)$ and $t^{\mathrm{two}}_{\mathrm{cov}}(G) = o(n \log n)$.
Proof. To begin, we show that the graph is almost regular w.h.p.
Claim. For p as above, $d_{\min}, d_{\max} = \Theta(np)$ w.h.p.
Proof of claim.
In $G \sim G(n, p)$, since each edge is present independently with probability p, each degree d(u) is distributed as a binomial random variable $\mathrm{Bin}(n-1, p)$. The Chernoff bound [14, Thm. 3.2] states that for any $\lambda > 0$, $\mathbb{P}[\mathrm{Bin}(n, p) \ge np + \lambda] \le \exp\!\big(-\frac{\lambda^2}{2(np + \lambda/3)}\big)$. Thus, by a union bound over all vertices, $d_{\max} \le 5np$ w.h.p. For $d_{\min}$, note that the expected number of vertices of degree k is given by $x_k = n\binom{n-1}{k}p^k(1-p)^{n-1-k}$. We shall consider $k = \kappa np$ for $\kappa \le 1/2$; in this case $x_k/x_{k-1} \approx \frac{np}{k(1-p)} \ge 2$, and so the expected number of vertices with degree at most k is $O(x_k)$. Observe that $x_{\kappa np} = o(1)$ for sufficiently small $\kappa = \kappa(c) > 0$, since $np \ge c \ln n$ with $c > 1$; hence $d_{\min} = \Omega(np)$ w.h.p., proving the claim.

Cooper & Frieze [16] show that for $np = c \ln n$, $c > 1$, w.h.p. the conductance of G(n, p) is at least 1/6, implying that $t_{\mathrm{rel}} = O(1)$ [28, Thm. 13.14]. For larger values of np, Coja-Oghlan [15, Thm. 1.2] showed that there exists some $\bar c < \infty$ such that for $np \ge \bar c \log n$ the spectral gap of the normalised Laplacian of G(n, p) is $\Omega(1)$ w.h.p. Since the normalised Laplacian $\mathcal{L}$ is similar to the random walk Laplacian L, and the latter is given by $L = I - P$, we see that also in this range $t_{\mathrm{rel}} = O(1)$. We have shown that, in this regime, G(n, p) is almost regular and has constant relaxation time w.h.p., thus $t_{\mathrm{hit}} = O(n)$ w.h.p. by [13, Thm. 5.2]. Theorems 6.1 and 6.2 now yield the results.
Thus, the CRW gives a significant improvement in the cover and hitting times whenever degrees of G(n, p) are subpolynomial in n.
Computing optimal choice strategies
In this section, we focus on the following problem: given a graph G and an objective, how can we compute a series of choices for the walk which achieves the given objective in optimal expected time? In particular, we consider the following computational problems related to our main objectives of max/minimising hitting times, cover times and stationary probabilities π v .
Stat(G, w): Find a CRW strategy min/maximising $\sum_{v \in V} w_v \pi_v$ for vertex weights $w_v \ge 0$.

Hit(G, v, S): Find a CRW strategy minimising $H^{\mathrm{two}}_v(S)$ for a given $S \subseteq V(G)$ and $v \in V(G)$.

Cov(G, v): Find a CRW strategy minimising $C^{\mathrm{two}}_v(G)$ for a given $v \in V(G)$.
The analogous problems to Stat (G, w) and Hit (G, v, S) were studied in [4] for the BRW. While Stat is not one of our primary objectives, we include it here both as a natural problem to consider but also because of its relationship to Hit in the case where w is the indicator function of a set S; we shall abuse notation by writing Stat(G, S) for this case. Clearly for Stat, we must restrict ourselves to unchanging strategies for the stationary probabilities π v to be well defined; we shall show that Hit also has an unchanging optimal strategy. For Hit and Cov, there are two possible interpretations of what it means to 'find' a CRW strategy. Perhaps the most natural is to compute a sequence of optimal choices in an online fashion, that is at each time step to compute which of the two offered choices to accept. For any particular walk, with suitable memoisation, at most a polynomial number of such computations will be required for either problem: which choice to accept depends only on the current vertex, the two choices, and in the case of Cov the vacant set, which can change at most n times. We might alternatively want to compute a complete optimal strategy in advance; for Hit this requires only a polynomial number of single-choice computations, but for Cov the number of possible situations our strategy must cover will be exponential. However, we shall show that Cov is hard even for individual choices.
A polynomial-time algorithm for Stat and Hit
First, we show how the (unknown) optimal values $H^{\mathrm{two}}_x(v)$ determine an optimal strategy for Hit(G, ·, v). In the following two lemmas, we will need to work with a multigraph F; in this context, the choice offered at each stage is between two random edges from the current vertex.
Lemma 7.2. Let F be a multigraph, $v \in V(F)$, and label the vertices of F as $v_1, v_2, \dots$ in non-decreasing order of the optimal hitting times $H^{\mathrm{two}}_{v_i}(v)$. Let $\beta$ be the deterministic unchanging strategy given by $\beta^{v_k}_{v_i, v_j} = 1$ whenever $j < k$. Then $\beta$ is optimal (among all strategies) for Hit(F, x, v) for every $x \ne v$, and also for the problem of minimising $\mathbb{E}_v[\tau^+_v]$.

Proof. Fix an optimal strategy $\alpha$ for Hit(F, x, v), and for each $y \in \Gamma(x)$ write $q_y$ for the probability that the first step under this strategy is from x to y. Recall that $q_y = \sum_{z \in \Gamma(x)} \frac{2\alpha^z_{x,y}}{d(x)^2}$. Now, given that the first step is at y, an optimal strategy for the remaining steps is precisely an optimal strategy for Hit(F, y, v), and thus the expected hitting time under $\alpha$ is $1 + \sum_{y \in \Gamma(x)} q_y H^{\mathrm{two}}_y(v)$. Suppose there exist $y, z \in \Gamma(x)$ with $H^{\mathrm{two}}_y(v) < H^{\mathrm{two}}_z(v)$ but $\alpha^z_{x,y} < 1$ at the first step. By instead (at time 1 only) always choosing y in preference to z, the expected hitting time is decreased, contradicting optimality. If instead $H^{\mathrm{two}}_y(v) = H^{\mathrm{two}}_z(v)$, then the expected hitting time does not depend on $\alpha^z_{x,y}$, and so any strategy satisfying these conditions at time 1, and thereafter following an optimal strategy, is itself optimal.
It follows by induction that following $\beta$ for k turns and thereafter following $\alpha$ is optimal; since this gives arbitrarily good approximations of the expected hitting time under $\beta$, $\beta$ is itself optimal for Hit(F, x, v) and, since the definition of $\beta$ does not depend on x, for Hit(F, y, v) for any $y \ne v$.
Next, we show that $\beta$ is also an optimal strategy for minimising $\mathbb{E}_v[\tau^+_v]$. Suppose not, and let $\gamma$ be an optimal strategy. Write $q^{\gamma}_x$ for the probability of moving from v to x at time 1 under $\gamma$, so that $\mathbb{E}^{\gamma}_v[\tau^+_v] = 1 + \sum_{x \in \Gamma(v)} q^{\gamma}_x H^{\mathrm{two}}_x(v)$. As above, changing the first step of $\gamma$ to agree with $\beta$ changes this expectation by an amount which is non-positive by choice of $\beta$. Thus, after a sequence of such changes, we obtain $\mathbb{E}^{\beta}_v[\tau^+_v] \le \mathbb{E}^{\gamma}_v[\tau^+_v]$, so $\beta$ is optimal.
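Although Theorem 1.6 below computes optimal strategies via linear programming, for intuition the optimal values $H^{\mathrm{two}}$ can also be approximated by straightforward value iteration, which directly exploits the structure identified in Lemma 7.2: the optimal choice between two offered neighbours is the one with the smaller value. This sketch is our own illustration, not the algorithm of the paper.

```python
def optimal_hitting_values(adj, v, sweeps=100_000, tol=1e-12):
    """Approximate H^two_x(v) under the optimal strategy by value
    iteration: H(x) = 1 + (1/d^2) * sum over ordered offered pairs of
    the smaller continuation value."""
    H = {x: 0.0 for x in adj}
    for _ in range(sweeps):
        delta = 0.0
        for x in adj:
            if x == v:
                continue
            nb = adj[x]
            d = len(nb)
            s = sum(min(H[y], H[z]) for y in nb for z in nb)
            new = 1.0 + s / (d * d)
            delta = max(delta, abs(new - H[x]))
            H[x] = new
        if delta < tol:
            break
    return H

# The strategy beta simply prefers the offered neighbour of smaller H:
# beta = lambda x, a, b: a if H[a] <= H[b] else b
```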
Lemma 7.3. For any simple graph G of order n, any $S \subseteq V$, and every pair of vertices x, y with $H^{\mathrm{two}}_x(S) \ne H^{\mathrm{two}}_y(S)$ under an optimal strategy, we have $|H^{\mathrm{two}}_x(S) - H^{\mathrm{two}}_y(S)| \ge (2D)^{-n}$, where $D := \mathrm{LCM}\big((d(x)^2)_{x \in V}\big)$.
Proof. Note that the hitting times $(h_x)_{x \in V}$ of S for any given unchanging strategy are uniquely determined by the equations $h_x = 0$ for $x \in S$ and $h_x = 1 + \sum_y P_{x,y} h_y$ otherwise, where P is the transition matrix for the strategy. This set of equations can be written as $Ah = b$, where $A := (I - Q)$, $Q_{i,j} = P_{i,j}$ if $i \notin S$ and 0 otherwise, and b is a 0–1 vector. Notice that A is diagonally dominant, and from any row where equality occurs there is a path of non-zero entries to a strictly dominant row. It is straightforward to check that such a matrix is invertible: see for example [6, Lem. 3.2]. For any non-random strategy, and in particular for the optimal strategy described above, every transition probability from x is a multiple of $d(x)^{-2}$. Thus, all the elements of A can be put over a common denominator D, where $D := \mathrm{LCM}\big((d(x)^2)_{x \in V}\big) < (n!)^2 < n^{2n}/2$.
By Cramer's rule, $h = A^{-1} b = \frac{1}{|A|} C^{T} b$, where C is the matrix of cofactors. Each entry in C can be put over a common denominator which is at most $D^n$, and so the same applies to each entry of $C^T b$. Also, $|A| < 2^n$ by Hadamard's inequality [26, Thm. 7.8.1]. It follows that if two hitting times differ, they differ by at least $(2D)^{-n}$.
For any graph G and weighting $w: V \to [0, \infty)$ on the vertices of G, we can phrase Stat(G, w) as an optimisation problem as follows, where we shall encode our actions using the probabilities $\alpha^z_{x,y} = \mathbb{P}\big[X_{t+1} = y \mid X_t = x,\ c = \{y, z\}\big]$ from Section 2:
$$\text{maximise } \sum_{v \in V} w_v\, \pi(v) \quad \text{subject to} \quad \pi(y) = \sum_{x \in \Gamma(y)} \frac{1}{d(x)^2}\Big(\pi(x) + 2 \sum_{z \in \Gamma(x) \setminus \{y\}} \pi(x)\, \alpha^z_{x,y}\Big) \ \forall y \in V, \qquad (13)$$
$$\sum_{v \in V} \pi(v) = 1, \qquad \alpha^z_{x,y} + \alpha^y_{x,z} = 1, \qquad \alpha^z_{x,y} \ge 0.$$
For minimising the stationary probabilities, we maximise −1 times the objective function.
To prove the following theorem, the quadratic terms in (13) can be eliminated using the same substitution as [4, Thm. 6]; we can then solve (13) as a linear program.

Theorem 7.4. For any graph G, weighting w and $\varepsilon > 0$, Stat(G, w) can be solved to within an additive error $\varepsilon$ in time polynomial in n and $\log(1/\varepsilon)$.

Proof. We prove the simple graph case; this proof may be easily extended for multigraphs with suitably adapted notation. The optimisation problem (13) above can be rephrased as a linear program by making the substitution $r_{x,y,z} = \pi(x) \cdot \alpha^z_{x,y}$. Either the Ellipsoid method or Karmarkar's algorithm will approximate the solution to within an additive $\varepsilon > 0$ in time which is polynomial in the dimension of the problem and $\log(1/\varepsilon)$; see for example [23, 27].
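A minimal sketch of the resulting linear program (our own rendering, using scipy; the variable names r and pi follow the substitution above) is given below. It maximises $\sum_v w_v \pi(v)$ over unchanging strategies on a simple graph.

```python
import numpy as np
from scipy.optimize import linprog

def stat_lp(adj, w):
    """Solve Stat(G, w) (maximisation) as an LP after the substitution
    r[x, y, z] = pi(x) * alpha^z_{x, y}."""
    V = sorted(adj)
    idx = {('pi', x): i for i, x in enumerate(V)}
    k = len(V)
    for x in V:
        for y in adj[x]:
            for z in adj[x]:
                if y != z:
                    idx[('r', x, y, z)] = k
                    k += 1
    rows, rhs = [], []
    def new_row():
        rows.append(np.zeros(k)); rhs.append(0.0); return rows[-1]
    for x in V:                  # pairing: r[x,y,z] + r[x,z,y] = pi(x)
        for i, y in enumerate(adj[x]):
            for z in adj[x][i + 1:]:
                a = new_row()
                a[idx[('r', x, y, z)]] = a[idx[('r', x, z, y)]] = 1.0
                a[idx[('pi', x)]] = -1.0
    for y in V:                  # stationarity of pi
        a = new_row()
        a[idx[('pi', y)]] -= 1.0
        for x in adj[y]:
            d2 = len(adj[x]) ** 2
            a[idx[('pi', x)]] += 1.0 / d2
            for z in adj[x]:
                if z != y:
                    a[idx[('r', x, y, z)]] += 2.0 / d2
    a = new_row(); rhs[-1] = 1.0  # normalisation: sum of pi = 1
    for x in V:
        a[idx[('pi', x)]] = 1.0
    c = np.zeros(k)
    for x in V:
        c[idx[('pi', x)]] = -w[x]   # linprog minimises, so negate
    return linprog(c, A_eq=np.vstack(rows), b_eq=rhs, bounds=(0, None))
```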
We now show how one can use this linear program to determine the hitting times.

Theorem 1.6. For any graph G and any $S \subset V$, a solution to Hit(G, x, S) for every $x \in V \setminus S$ can be computed in time poly(n).
Proof. Contract S to a single vertex v to obtain a multigraph F; where a vertex x has more than one edge to S in G, retain multiple edges between x and v in F. Note that F has at most n vertices and at most $n^2$ edges. Provided that the CRW on G has not yet reached S, there is a natural correspondence between strategies on G and F with the same transition probabilities, and it follows that $H^{\mathrm{two}}_x(S)$ for G and $H^{\mathrm{two}}_x(v)$ for F are equal for any $x \in V(G) \setminus S$. We compute an optimal strategy for Stat(F, {v}) to within an additive error of $\varepsilon := n^{-10n^2}$; note that $\log(1/\varepsilon) = o(n^3)$, and so this may be done in time poly(n) by Theorem 7.4. Applying Lemma 7.2 to F and Lemma 7.3 to G, using the equality of corresponding hitting times, implies that this strategy has $\alpha^z_{x,y} > 1/2$ whenever $H^{\mathrm{two}}_y(v) < H^{\mathrm{two}}_z(v)$, and so rounding each of the probabilities $\alpha^z_{x,y}$ to the nearest integer gives an optimal strategy (on F) for every x, which may easily be converted to an optimal strategy for G.
A hardness result for Cov
We show that in general even the online version of Cov (G, v) is NP-hard. To that end, we introduce the following problem, which represents a single decision in the online version. The input is a graph G, a current vertex u, two vertices v and w which are adjacent to u, and a visited set X, which must be connected and contain u.
NextStep (G, u, v, w, X): Choose whether to move from u to either v or w so as to minimise the expected time for the CRW to visit every vertex not in X, assuming an optimal strategy is followed thereafter.
Any such problem may arise during a random walk with choice on G starting from any vertex in X, no matter what strategy was followed up to that point, since with positive probability no real choice was offered in the walk up to that point.
Theorem 1.7. NextStep is NP-hard, even if G is constrained to have maximum degree 3.
Proof.
We give a (Cook) reduction from the NP-hard problem of either finding a Hamilton path in a given graph H or determining that none exists. This is known to be NP-hard even if H is restricted to have maximum degree 3 [21]. We shall find it more convenient to work with the following problem, which takes as input a graph G, a current vertex u and a connected visited set X containing u.
BestStep (G, u, X): Choose a neighbour of u to move to so as to minimise the expected time for the CRW to visit every vertex not in X, assuming an optimal strategy is followed thereafter.
We may solve BestStep(G, u, X) by computing NextStep(G, u, v, w, X) for every pair v, w of neighbours of u; since all optimal neighbours must be preferred to all others, this will identify a set of one or more optimal choices for BestStep(G, u, X). Consequently, it is sufficient to reduce the Hamilton path search problem to BestStep. Given an n-vertex graph H, construct the graph G as follows. First replace each edge of H by a path of length $2cn^2$ through new vertices. Next add a new pendant path of length $n^3$ starting at the midpoint of each path corresponding to an edge of H. Finally, add edges to form a cycle consisting of the end vertices of these pendant paths (in any order). Note that if H has maximum degree 3, so does G.
Fix a starting vertex u and a non-empty unvisited set Y ⊆ V(H) \ {u} and set X = V(G) \ Y. (The purpose of the second and third stages of the construction is to make X connected without affecting the optimal strategy.) Suppose that H contains at least one path of length |Y| starting at u which visits every vertex of Y; in particular if Y = V(H) \ {u} this is a Hamilton path of H. We claim that any optimal next step is to move towards the next vertex on some such path. Assuming the truth of this claim, an algorithm to find a Hamilton path starting at x, if one exists, is to set u = x and Y = V(H) \ {x}, then find the vertex y such that moving towards y is optimal, set u = y and remove y from Y, then continue. If this fails to find a Hamilton path, repeat for other possible choices of x.
To prove the claim, first we argue by induction that there is a strategy to visit every vertex in Y in expected time $(4cn^2 + O(n))|Y|$, where the implied constant does not depend on c. This is clearly true for $|Y| = 0$. Let y be the next vertex on a suitable path in H, and let z be the middle vertex of the path corresponding to the edge uy. Attempting to reach z by a straightforward strategy, the distance to z evolves as a random walk with probability 3/4 of decreasing unless the current location is a branch vertex. We thus reach z in expected time $2cn^2$ plus an additional constant time for each visit to u, of which we expect $O(d(u)) = O(n)$, giving a total expected time of $2cn^2 + O(n)$ (if the walker is forced to a different branch vertex first, the expected time to return from this point is polynomial in n, but this event occurs with exponentially small probability). Similarly, the time taken to reach y from z is $2cn^2 + O(1)$. Once y is reached, there is (by choice of y) a path of length $|Y| - 1$ in H starting from y and visiting all of $Y \setminus \{y\}$. Thus, by induction, the required bound holds. Secondly, suppose that an optimal first step in a strategy from u moves towards a vertex y′ of H which is not the first step in a suitable path. Since the expected remaining time decreases whenever an optimal step is taken, two successive optimal steps cannot be in opposite directions unless the walker visits a vertex of Y in between. Thus, the optimal strategy is to continue in the direction of y′ if possible, and such a strategy reaches y′ before returning to u with at least constant probability p, and this takes at least $2cn^2$ steps. Note that the expected time taken to reach another vertex of H from a vertex in H, even if the walker is purely trying to minimise this quantity, is at least $4cn^2$, and from either u or y′ at least |Y| such transitions are necessary to cover Y. Thus, such a strategy, conditioned on the first step being in the direction of y′, has expected time at least $4cn^2|Y| + 2pcn^2$, which, for suitable choice of c, proves the claim.
Computing Cov via Markov decision processes
To compute a solution for Cov(G, v), we can encode the cover time problem as a hitting time problem on a (significantly) larger graph.

Lemma 7.5. Given a graph G, let $\tilde G$ be the directed graph with vertex set $\{(u, T): u \in V(G),\ T \subseteq V(G)\}$ and an edge from (u, T) to $(u', T \cup \{u'\})$ for each $uu' \in E(G)$, and let $W := \{(u, V): u \in V\}$. Then, for any $v \in V(G)$, the optimal strategies for Cov(G, v) correspond to the optimal strategies for Hit($\tilde G$, (v, {v}), W).

Proof. There is a natural bijection between the out-edges in G from u and those in $\tilde G$ from (u, T) for any $u \in V$, $T \subseteq V$. This extends to a natural bijection from finite walks (which we may think of as a vertex together with a history) in G starting from v to walks in $\tilde G$ starting from (v, {v}), and also to a measure-preserving bijection between the choices which may be offered from u and (u, T). Thus, there is a natural bijection between strategies for the two walks, and both the choices offered and any random bits used may be coupled so that corresponding strategies produce corresponding walks. Since the walk in G has covered V if and only if the walk in $\tilde G$ has hit some vertex in W, the times at which these events first occur are identically distributed for corresponding strategies, and in particular the sets of optimal strategies correspond.
In light of Lemma 7.5, it may appear that we can solve Cov(G, v) by converting it to an instance of Hit($\tilde G$, (v, {v}), W) and appealing to Theorem 1.6. This is unfortunately not the case, as $\tilde G$ is a directed graph and Theorem 1.6 cannot handle directed graphs. Lemma 7.5 is still of use, as we can phrase Hit in terms of Markov decision processes (MDPs), and then standard results tell us that an optimal strategy for the problem can be computed in finite time.
An MDP is a discrete-time finite-state stochastic process controlled by a sequence of decisions [19]. At each step, a controller specifies a probability distribution over a set of actions which may be taken, and this has a direct effect on the next step of the process. Costs are associated with each step/action, and the aim of the controller is to minimise the total cost of performing a given task, for example, hitting a given state. In our setting, the actions are orderings of the vertices in each neighbourhood and the cost of each step/action is one unit of time. The problem Hit($\tilde G$, (v, {v}), W) is then an instance of the optimal first passage problem, which is known to be computable in finite time [19].

Corollary 7.6. For any graph G and $v \in V$, an optimal policy for the problem Cov(G, v) can be computed in exponential time.
Proof. We first encode the problem Cov(G, v) as the problem Hit($\tilde G$, (v, {v}), W) as described in Lemma 7.5. As mentioned, this is an instance of the optimal first passage problem, which for a given graph, start vertex and target set can be computed in finite time using either policy iteration or linear programming; see for example [19, Ch. 5, Cor. 1]. Examination of the linear program on [19, p. 58] reveals that there is a constraint for every ordering of the neighbours of each vertex. Since $\tilde G$ has at most $n \cdot 2^n$ vertices and each of these has at most n neighbours, we see that there are at most $n \cdot 2^n \cdot n! \le e^{n^3}$ constraints. It follows that this linear program can be solved in time $\mathrm{poly}(e^{n^3})$, thus Cov(G, v) ∈ EXP.
Remark. Since in our setting actions are orderings of neighbourhoods, the space of actions may be factorial in the size of the graph. The algorithms for computing the optimal first passage problem from [19] used to establish Corollary 7.6 are polynomial in the number of actions and thus will not yield a polynomial-time algorithm for the problem. This is why we resisted appealing to MDP theory when finding a polynomial-time algorithm for Hit(G, u, v) on undirected graphs in Section 7.1.
Summary
In this paper, we proposed a new random walk process inspired by the power of choice paradigm. We derived several quantitative bounds on the hitting and cover times and also presented a dichotomy with regard to computing optimal strategies.
While we were able to show that on an expander graph the CRW significantly outperforms the SRW in terms of its cover time, we do not yet know the exact order of magnitude of $t^{\mathrm{two}}_{\mathrm{cov}}$. In fact, we do not have any lower bound on $t^{\mathrm{two}}_{\mathrm{cov}}$ improving the trivial $\Omega(n)$ for any sequence of bounded degree graphs. Constructing a sequence of graphs $(G_n)$, especially expanders, with $t^{\mathrm{two}}_{\mathrm{cov}}(G_n) = \omega(n)$ would be very interesting.
We have shown that Cov ∈ EXP and that the problem is NP-hard. It would be interesting to find a complexity class for which the problem is complete, and we suspect it is PSPACE-complete.
Delay and Lifetime Performance of Underwater Wireless Sensor Networks with Mobile Element Based Data Collection
Introduction
An underwater sensor network is made up of a group of many autonomous sensor nodes and vehicles deployed underwater and networked via acoustic links, performing collaborative tasks. They enhance our ability to observe and predict the ocean by enabling many applications such as oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation, and tactical surveillance. The underlying physical layer technology used in UWSNs is acoustic communication, since electromagnetic waves and optical signals are unsuitable due to high attenuation and scattering, respectively, in the underwater environment. UWSNs differ significantly from terrestrial sensor networks in several aspects: low and distance-dependent bandwidth, high latency, node mobility, high error probability, frequency-dependent transmission range, 3-dimensional space, and so on [1]. The available bandwidth is of the order of 10 kHz at a range of a few kilometres, and the propagation speed is only around 1500 m/s. The high cost of deploying and/or redeploying underwater equipment makes the issue of energy saving/efficiency a critical one for UWSNs. Underwater sensors are expensive, partially because of their more complex transceivers, and the ocean area that needs to be sensed is quite large. Hence, UWSN deployment can be much sparser compared with terrestrial sensor networks. In addition, the network may be easily partitioned due to node mobility, harsh environment, sparse deployment, and resource constraints. Such networks may never have an end-to-end contemporaneous path, and traditional routing protocols are not practical since packets will be dropped when no routes are available. Hence, sparse and/or disconnected UWSNs are to be treated as Intermittently Connected Networks (ICNs) or Delay-Tolerant Networks (DTNs), but the conventional multicopy DTN approaches are not suitable due to resource constraints.
Three approaches have been reported for data collection in Wireless Sensor Networks [2], and they can be used in UWSNs also: (i) the base station approach, which uses direct communication between the sensor node and the sink; (ii) the ad hoc network approach, which uses a multihop path from the source to the sink; and (iii) the mobility-assisted approach, which makes use of mobile elements and the store-carry-forward concept for data collection. The first approach provides fast delivery but suffers from reduced lifetime of sensors due to relatively high communication energy. The ad hoc multihop network reduces the transmit power requirement and achieves medium delay but suffers from the "hot spot" problem near the sink and necessitates an end-to-end contemporaneous path between the source and the sink. Identifying the opportunities and challenges associated with the use of the mobility-assisted approach for application-oriented periodic data collection in the harsh underwater environment, and quantifying the resulting change in network performance, constitute the focus of this paper.
We investigate the effectiveness of the mobile sink (MS) based architecture for continuous monitoring and offline data collection in sparse underwater acoustic sensor networks. We consider delay-tolerant deep water applications and use static two-dimensional UWSNs for ocean bottom monitoring. Analytical models for evaluating message delay, sensor buffer occupancy, Packet Delivery Ratio, node energy consumption, and network lifetime are presented. Three different analytical models using queueing theoretic approaches are used to evaluate and predict the delay performance: bulk service queue, M/G/1 queue with vacation, and polling. A comparison of these delay models and the validation/comparison of the analytical results with the simulation results are also presented. A significant contribution of our work is the development of an analytical model for network lifetime improvement factor achieved by employing mobility-assisted data collection in UWSNs, taking into account all the peculiarities of the underwater channel and propagation. Another major contribution is the identification of the appropriate delay model to be used, based on the service policy of the mobile sink, which, in turn depends on the requirements of the application. Yet another contribution is the investigation of the impact of MS trajectories on the network performance. The analytical models are validated using simulation models developed in Aqua-Sim [3], an NS-2 [4] based network simulator developed by the University of Connecticut. By incorporating the DTN concepts and polling schemes into Aqua-Sim, an improved NS-2 based simulation model for application-oriented and energy-efficient data collection in underwater environment is made available, which we hope will widen the facility for further research in this area.
The remainder of this paper is organized as follows: Section 2 summarizes the related work. Proposed MS-based data collection framework is presented in Section 3. Analytical models for node energy consumption, network lifetime and delay performance are presented in Section 4. Section 5 presents the analytical and simulation results, followed by a discussion and comparison of delay models in Section 6. Finally, the paper is concluded in Section 7.
Related Work
The main research challenges in this area are identified in [1], which concentrates mainly on hardware and protocol issues and strongly advocates a cross-layer design approach. In spite of the significant work envisioning novel applications and high-level architectures, not much research has addressed the design of supporting communication and networking protocols. Most original contributions have focused on acoustic modem design. Recently, several routing protocols have been developed for underwater sensor networks, most of them suitable only for connected networks. A detailed review and comparison of different routing techniques for UWSNs is given in [5]. Vector-based forwarding (VBF) is a typical geographical routing protocol [6], and hop-by-hop vector-based forwarding (HH-VBF) [7] is its more energy-efficient version. Both VBF and HH-VBF take care of node mobility, but they require the network to be connected and the energy overhead is quite high. Distributed routing algorithms for delay-sensitive and delay-insensitive applications are proposed in [8]. Cui et al. [9] present the interesting paradigm of mobile UWSNs, that is, networks where the underwater sensors move because of water currents. Two interesting architectures are investigated, one for long-term non-time-critical applications (e.g., oceanography and pollution detection) and the other for short-term time-critical explorations (e.g., disaster prevention and military surveillance). Energy analysis of routing protocols for UWSNs is presented in [10], and energy-efficient routing schemes for UWSNs have been discussed in [11]. The key parameters that affect the lifetime of Wireless Sensor Networks have been identified in [12], and the use of AUVs to prolong lifetime in UWSNs is discussed in [13].
Recently, considerable effort has been devoted to developing architectures and routing algorithms for DTNs [14,15], characterized by frequent partitions and potentially long message delivery delays. A survey of routing in ICNs and DTNs is presented in [16]. Shah et al. [2] have presented a three-tier architecture based on mobility to address the problem of energy-efficient data collection in a terrestrial sensor network. For the same architecture, an enhanced analytical model has been presented in [17]. The current state-of-the-art solutions in underwater DTNs are presented in [18]. An adaptive routing protocol has been proposed in [19] for UWSNs, considering them as DTNs. In order to increase the energy efficiency in the resource-constrained underwater environment, a delay-tolerant data dolphin (DDD) scheme is presented in [20]. A survey of data collection in Wireless Sensor Networks with mobile elements has been done in [21]. Use of Mobile Data Collectors (MDC) for reliable and energy-efficient data collection in sparse sensor networks is studied in [22]. A message ferrying approach for data delivery in sparse mobile ad hoc networks is presented in [23], and controlling the mobility of multiple data transport ferries in a DTN is discussed in [24]. The use of controllably mobile elements to reduce the energy consumption for communication and thus to increase useful network lifetime has been discussed in [25]. The usage of message ferries in ad hoc networks is considered in [26], and the design of message ferry routes for sparse ad hoc networks with nodes having arbitrary movement is presented in [27]. AUV-aided routing for UWSNs is discussed in [28,29], and the performance of DTN routing protocols for UWSNs is analyzed in [30].
To analytically evaluate the latency performance of data collection, the data collection process with a single mobile sink is modelled as an M/G/1 queueing model in [31]. Polling-based scheduling with QoS capability for wireless body sensor networks is presented in [32]. Mobile element scheduling for data collection in terrestrial sensor networks is discussed in [33][34][35]. A rendezvous based approach with mobile sinks is used in [36] and rendezvous planning in sensor networks is investigated in [37]. The message ferrying DTN schemes and algorithms presented in [23,24,27] are found to be interesting and effective for successful data delivery in sparse networks with mobile nodes; however, their impact on network performance has not been properly quantified using analytical techniques.
Mobility-assisted on-demand data collection in sparse UWSNs is presented in our previous paper [38], which is aimed at supporting delay-sensitive event-driven applications in sparse UWSNs and makes use of a polling model for delay analysis. In contrast, the work presented in this paper is aimed at supporting an energy-efficient offline data collection scheme for periodic sensing applications in UWSNs, where network lifetime is more important than message delay. The existing literature on mobility-assisted data collection in UWSNs has focussed either on the aspects of energy efficiency and lifetime enhancement or on the aspects of message delay and successful delivery. We have adopted a comprehensive approach that considers all the aspects of data collection, such as energy efficiency, network lifetime, message latency, sensor buffer occupancy, and Packet Delivery Ratio, for the proposed mobility-assisted data collection framework for delay-tolerant applications in sparse UWSNs. Three different analytical models are proposed to investigate the latency performance of the data collection scheme, and the merits/demerits of each model and the factors affecting their choice for a specified application are identified. To the best of our knowledge, there exists no other work in the literature in which such a comprehensive approach for network performance evaluation is adopted and a comparison of delay models is made to identify the most suitable one to be adopted by the network designer, according to the resource constraints and application requirements.
System Model
The general UWSN scenario as shown in Figure 1 consists of a group of different kinds of sensor nodes and underwater vehicles used for collaborative monitoring. Depending on the application, either 3-dimensional or 2-dimensional node deployment is possible. In a 3D network, ordinary sensor nodes float at different depths and they can communicate using acoustic links. They relay data to the sinks using direct link or through multihop paths. In a 2D network, sensor nodes are anchored to the ocean bottom and are interconnected to one or more underwater sinks by means of wireless acoustic links. The sinks may be surface sinks or underwater sinks; the former can communicate with the command centre on shore or mother ship via RF links and with the underwater sensor nodes via acoustic link, while the latter is responsible for receiving data from the neighbouring nodes in the ocean bottom and relaying it to the surface sink. There will be mobile nodes with varying capabilities, ranging from courier nodes capable of moving in vertical direction alone to sophisticated Autonomous Underwater Vehicles (AUVs) with multiple underwater sensors and trajectory control mechanisms.
The UWSNs considered in our study are large and mostly sparse, with possibly disconnected components and with mobile elements used for data collection. We consider a 2-dimensional network with sensor nodes anchored to the ocean bottom and a mobile node used for data collection, a model of which is illustrated in Figure 2. The mobile node is assumed to be rechargeable or resource-unconstrained, and hence its energy consumption for communication with the underwater or surface sink is not considered. Since the underwater sink nodes can be equipped with an optional high-speed fibre-optic link to the surface sink, and the surface sinks are equipped with RF links to the command centre, we restrict our study to mobility-assisted data collection in the 2-dimensional ocean bottom area alone. In addition, data is assumed to be successfully delivered once it has been collected by the mobile sink. Therefore, aspects like the distribution of surface sinks, communication between the mobile sink and the underwater sink, communication between the underwater sink and the surface sink, and so forth are not considered. For the purpose of analyzing the network lifetime in the ad hoc multihop scheme, we assume the network to be dense or connected, since the routing overhead can be best illustrated with such a network.
Mobility-Assisted Data Collection.
We consider a modified form of the 3-tier architecture [17] for mobility-assisted data collection, with the upper two layers merged together, resulting in a mobile sink (MS) architecture. The static sensors monitor the underwater surroundings, generate data, and store it in the sensor buffer until a contact or transmission opportunity occurs. They have limited battery power and buffer space and can communicate using acoustic links only. Their data communications are limited to single hop data transfer to a nearby mobile sink (MS), so as to reduce energy consumption.
Mobile sinks are mobile entities with large processing and storage capacity, renewable power, and the ability to communicate with static sensors and other sinks (if any). As an MS moves in close proximity to (i.e., within transmission range of) a static sensor, the sensor's data is transferred to the MS and buffered there for further processing. The MS can pause in the vicinity of the sensor till all the buffered data has been transferred. The mobility of the MS can be either random or controlled. In the case of random mobility, the worst-case latency of data transfer cannot be bounded. This unbounded latency may lead to excessive data caching and result in buffer overflows. Hence, it is better to use controlled mobility whenever it is feasible. If the MS is already deployed and its trajectory cannot be controlled (e.g., sensors attached to marine mammals), random mobility can be made use of, albeit with reduced performance.
A sensor needs to detect a contact or the presence of a nearby MS to be able to send its data. In the proposed framework, the responsibility of contact discovery is with the MS and it periodically sends out HELLO messages to notify its presence. The amount of data collected in a single visit of the MS depends on the service policy and can be fixed according to the application requirements and network conditions. After collecting the data generated and buffered by the sensor node, the MS proceeds to the next location and this process is repeated. If the location information of the sensors with buffered data is known to the MS in advance, the MS may visit them in an optimum manner; otherwise, a trajectory that covers the deployment area is selected and both contact discovery and data transfer are carried out while moving along this trajectory.
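To make the contact-discovery and service loop concrete, the toy sketch below mimics a single MS tour in Python; the class and method names (Sensor, MobileSink, on_hello) and the in-memory "channel" are our own illustrative inventions, not part of any simulator or protocol stack referenced in this paper.

```python
# Toy model of MS-driven data collection: the MS beacons HELLO along its
# trajectory; any sensor hearing the beacon uploads its buffered packets.

class Sensor:
    def __init__(self, sid):
        self.sid, self.buffer = sid, []

    def on_hello(self, sink):
        # Service policy (ii): hand over everything buffered up to the
        # MS arrival instant, then clear the local buffer.
        sink.collect(self.sid, self.buffer)
        self.buffer = []

class MobileSink:
    def __init__(self):
        self.store = []

    def collect(self, sid, packets):
        self.store.extend((sid, p) for p in packets)

    def tour(self, trajectory):
        # One cycle: visit each sensor location and beacon HELLO there.
        for sensor in trajectory:
            sensor.on_hello(self)

if __name__ == "__main__":
    sensors = [Sensor(i) for i in range(4)]
    for s in sensors:
        s.buffer = [f"pkt{k}" for k in range(s.sid + 1)]
    ms = MobileSink()
    ms.tour(sensors)                            # e.g., one lap of trajectory A
    print(len(ms.store), "packets delivered")   # -> 10
```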
3.1.1. MS Trajectories. Different MS trajectories, as shown in Figure 3, can be employed [28,39,40], keeping in mind that the basic application is periodic surveying of a region at sea. Some other possible trajectories (not shown in the figure) are circular, lawn-mower, and figure of eight [41]. Complex trajectories will be required for event-driven on-demand data collection, but here we consider simple cyclic paths suitable for periodic offline data collection for delay-tolerant applications. Trajectories B and C are similar, the difference being in the time to complete one cycle and the transmit power requirement of static sensors to ensure connectivity. E consists of three mobile sinks, each following a separate elliptical trajectory.
The choice of a particular trajectory for a specific application depends on the application requirements, position of underwater sink, and energy constraints of nodes. Trajectories A and D with large travel time of MS are expected to be suited for delay-tolerant applications, whereas C and E with less travel times may be suitable for delay-sensitive applications. B, C, and D fit well for a scenario in which the UW sink is located at the centre, while E is specially suited for the sink located at the middle of one side of the square deployment area. We restrict our study to the first three trajectories alone, because of our assumption of a single MS in the system model, and the practical difficulty in guiding the MS along a cyclic spiral path on the ocean bottom.
Performance Metrics.
The important performance metrics for mobility-assisted data collection are Packet Delivery Ratio (PDR), message latency, node energy consumption, network lifetime, and buffer space requirement at sensors.
Packet Delivery Ratio.
The effectiveness of data delivery is indicated by the PDR and it can be affected by errors in communication, buffer overflow, or failure of MS. If the MS does not approach the sensor for a long time, the buffer may fill up and eventually overflow, resulting in packet loss.
Message Latency.
Latency is the average time taken by the data to reach the sink from the time of its generation. Since data is assumed to be successfully delivered once it has been collected by the MS, latency has only two components in our model: the queueing delay in the sensor buffer and the service time for transferring the data from the sensor to the MS. Due to the limited speed of the MS (maximum 20 m/s for practical reasons), the travel time of the MS is expected to be very large (of the order of several minutes or even a few hours for large networks).
Energy Consumption.
Energy required for sensing is negligible compared to that for communication, and energy consumption for reception is very small compared to that for transmission. Assuming tunable transmit power, short range single hop communication between the static sensors and MS is expected to reduce and balance the energy consumption among sensors.
Network Lifetime.
Network lifetime is the time span from the deployment to the instant when the network is considered nonfunctional: when a network should be considered nonfunctional is, however, application-specific [12]. It has been defined in many ways by different researchers [42]. Network lifetime is a key characteristic to evaluate the performance of sensor networks and parameters like coverage, connectivity, and node availability can be reduced to lifetime considerations. Reduced and balanced energy consumption among the nodes will lead to enhanced network lifetime in MS-based architecture.
Sensor Buffer Occupancy.
Since the generated packets are to be stored in the sensor buffer till the arrival of the MS, buffer capacity should be sufficiently high to avoid buffer overflow and packet loss. Average buffer occupancy gives an indication about the buffer space to be allotted for a particular application depending on the data generation rate and the MS arrival rate.
Analytical Study
In this section, the energy consumption and lifetime of the sensor network as well as the latency and delivery ratio of the sensed data are investigated through analytical means. The average sensor buffer occupancy and the network lifetime improvement factor are also evaluated. All the features of acoustic propagation and devices significantly affect these performance measures and hence the performance of the data collection schemes. The study permits us to assess the impact of a set of network parameters like depth of deployment, target SNR, sensor power profile, data arrival rate, data transfer rate, sensor buffer size, number and range of sensor nodes, MS visit frequency, service discipline, MS movement pattern, and so forth on the performance metrics and hence to decide the appropriate values for these parameters according to specific application requirements.
Energy Consumption and Network
Lifetime. Energy consumption needs to be considered as a critical issue since it is difficult to recharge or even replace batteries for a large number of sparsely distributed sensors. One important motivation for employing a mobile sink is that it increases the energy efficiency and lifetime of the network by reducing the source-destination distance and eliminating the relaying of data. We compare the energy consumption of static sensor nodes considering direct, multihop and MS-based data transmission schemes. Since multihop communication necessitates an end-to-end contemporaneous path between the source and sink, we assume the network to be connected (by deploying sufficiently large number of nodes or by increasing the transmission range of the nodes to ensure connectivity).
Hop Energy Consumption.
The battery power required for sensing and processing is negligible compared to that for underwater acoustic data transmission, and hence we consider the energy consumption for data transmission only. Also, we assume that the sensor nodes have fixed receive power and tunable transmit power; that is, the transmit power can be varied according to the range of operation. We consider the effects of path loss and noise for our energy analysis, the two main sources of transmission losses being attenuation and spreading [10]. The propagation model helps us to estimate the transmit power required for a specified target signal-to-noise ratio (SNR). The SNR of an emitted underwater signal at the receiver is expressed by the passive sonar equation as [43]

SNR = SL − TL − NL + DI,  (1)

where SL is the source level, TL is the transmission loss, which is a function of the distance d and the frequency of operation f, NL is the frequency-dependent noise level, and DI is the directivity index. The source level is SL = 20 log I_t, where I_t is the intensity at 1 m from the source (expressed in watts/m²) and is given by I_t = P_t / (2πH), where P_t is the transmit power and H is the water depth in metres. All quantities in (1) are in dB re μPa, where the reference value 1 μPa corresponds to the intensity value of 0.67 × 10⁻¹⁸ W/m². Assuming a target SNR of 20 dB at the receiver, an ambient noise level of 70 dB (which is representative of underwater environments), and omnidirectional antennas for transmission and reception (DI = 0), we have the required source level

SL = SNR + TL + NL = TL + 90 dB.  (2)

Considering the effects of absorption and spreading, the transmission loss or attenuation factor TL(d, f) of an underwater acoustic channel for a distance d and frequency f is given by [43]

10 log TL(d, f) = k · 10 log d + d · 10 log a(f),  (3)

where the first term is the spreading loss and the second term is the absorption loss. Spreading loss is independent of frequency and depends only on the geometry. The spreading coefficient k = 1 for cylindrical spreading and k = 2 for the spherical case. The spreading is assumed to be cylindrical in shallow water and spherical in deep water; hence, the variation of spreading loss with distance is linear in shallow water, while it is quadratic in deep water. Absorption loss increases with frequency and the distance between nodes.
Thorp's formula [44] is used to express the absorption coefficient a(f) (in dB/km, with f in kHz) as

10 log a(f) = 0.11 f²/(1 + f²) + 44 f²/(4100 + f²) + 2.75 × 10⁻⁴ f² + 0.003.  (4)

If a tone of frequency f and power P is transmitted over a distance d, the received signal power will be P/TL(d, f). For a given target signal-to-noise ratio SNR_tgt, available channel bandwidth B(d), and noise power spectral density N₀, the required transmit power P_t(d) can be expressed as a function of the transmitter-receiver distance [11] as

P_t(d) = SNR_tgt · TL(d, f) · N₀ · B(d) · SM · β,  (5)

where SM is a margin to ensure that the average SNR at the receiver is larger than the minimum required value and β is a penalty factor that accounts for signal processing inefficiencies at the receiver. Acoustic power is converted to electrical power by the relation [43]

P_el(d) = 10^(P_t(d)/10) × 10⁻¹⁷·² / η,  (6)

where P_t(d) is expressed in dB re μPa, 10⁻¹⁷·² is the conversion factor from acoustic power in dB re μPa to electrical power in watts, and η is the overall efficiency of the electronic circuitry [11]. The receive power P_r is independent of distance and depends on the complexity of the receiver circuitry. If L_p is the packet size in bits and n is the bandwidth efficiency of the modulation, the energy consumption for the transfer of a packet over a single hop of length d becomes

E_hop(d) = (L_p / (n · B(d))) (P_el(d) + P_r),  (7)

where P_el(d) is the electrical power (in watts) corresponding to P_t(d) in dB re μPa. Compared to P_r, P_el is very large and hence its contribution to the energy consumption of sensor nodes is significant. As the transmission loss is proportional to the square of the hop length in the deep water scenario, the energy consumption of sensor nodes can be reduced considerably by reducing the source to sink distance and/or increasing the channel bandwidth. Unlike terrestrial channels, the underwater acoustic channel exhibits distance-dependent bandwidth, from a few kHz in long range (several tens of kilometres) systems to more than a hundred kHz in short-range (several tens of metres) systems. The very large transmit power requirement at large distances results in heavy energy consumption and an increased level of interference. Hence, single hop (direct) data transmission is not feasible if the source-to-destination distance is large, and we will not consider direct transmission (the BS approach) further.
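The sketch below strings the propagation and energy relations (1)-(7) together; the function names, the receiver efficiency η = 0.25, and the omission of the SM and β factors (taken as 1) are our own illustrative assumptions, while the target SNR, noise level, receive power, packet length, and modulation efficiency follow the values quoted in this paper.

```python
import math

def thorp_db_per_km(f_khz):
    """Thorp's absorption coefficient 10*log10(a(f)) in dB/km, f in kHz (4)."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def transmission_loss_db(d_m, f_khz, k=2):
    """10*log10(TL(d,f)) as in (3): spreading (k=1 cyl., k=2 sph.) + absorption."""
    return k * 10 * math.log10(d_m) + (d_m / 1000.0) * thorp_db_per_km(f_khz)

def hop_energy_j(d_m, f_khz, bw_hz, pkt_bits=400, n_eff=0.5,
                 p_rx_w=0.075, snr_db=20, nl_db=70, eta=0.25):
    """Energy (J) to transfer one packet over a hop of length d_m, as in (7)."""
    # Required source level (2): SL = SNR + TL + NL, with DI = 0.
    sl_db = snr_db + transmission_loss_db(d_m, f_khz, k=2) + nl_db
    # Acoustic-to-electrical conversion (6).
    p_tx_w = 10 ** ((sl_db / 10) - 17.2) / eta
    t_pkt = pkt_bits / (n_eff * bw_hz)           # packet transfer time (s)
    return t_pkt * (p_tx_w + p_rx_w)

if __name__ == "__main__":
    for d in (50, 250, 1000):                    # hop lengths in metres
        print(d, "m ->", round(hop_energy_j(d, f_khz=20, bw_hz=10e3), 5), "J")
```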
Relaying Overhead and Energy Efficiency.
Assuming an ideal channel with no packet errors, energy efficiency can be increased by reducing the relaying overhead. The important motivations for using a mobile element for data collection are as follows: (i) it eliminates the need for end-to-end connectivity between source and destination; (ii) it reduces energy consumption by reducing the transmission range; and (iii) it balances the energy consumption by eliminating the relaying of data. The first feature contributes to improved data delivery performance, and the other two features lead to enhanced lifetime of the network. To quantify the potential savings in energy consumption and network lifetime, we compare the energy requirements with and without a mobile node. The energy consumption of static nodes alone is considered, since the mobile node is assumed to be rechargeable or not energy-constrained.
Though we have considered a square deployment area in our system model, for tractability in the analysis of relaying overhead in the ad hoc multihop network, we approximate it by a circular area whose diameter equals the side of the square. Assume N static sensor nodes to be randomly deployed with uniform distribution over a circular area of radius R. Unlike our earlier system model with sparse deployment of sensors, here we assume a sufficiently large value for N that results in a connected ad hoc multihop network, since the relaying overhead can be best illustrated with such a network. The static sensor nodes generate and send data to the sink, located at the centre of the circular area. We assume that an ideal Medium Access Control (MAC) is available so that no energy is wasted in collisions. To quantify the relaying overhead in the multihop ad hoc network, we follow an approach similar to the one used in [25] and consider the number of transmissions incurred by the nodes located at different distances from the sink.
Let the transmission range of the nodes be r. The circular deployment region is divided into concentric annuli of width r to count the minimum number of relay hops required for each node to reach the sink. Since the nodes are uniformly distributed, the number of nodes in the ith annular region of area A(i) is approximately equal to N · A(i)/(πR²) for large N. If each static node in the ith annulus generates one message, then the minimum number of transmissions originated from the ith annulus is T_min(i) = N · A(i)/(πR²). Now each of these transmissions will be received by at least one node in the (i − 1)th annulus and transmitted towards the inner annulus. Hence, in the case of the multihop architecture, if every node generates 1 packet, for a large value of N, on average, the number of receptions and transmissions to be undertaken by a node in annulus i will be

R_node(i) = (R² − i²r²) / ((2i − 1)r²),  (8)
T_node(i) = (R² − (i − 1)²r²) / ((2i − 1)r²),  (9)

respectively, except for the outermost annulus, where the corresponding values are 0 and 1. Hence, the energy consumed by a static node in the ith annulus (1 ≤ i ≤ ⌈R/r⌉) for the transfer of m packets of L_p bits each from each static sensor to the sink will be

E_MH(i) = m [T_node(i) E_hop(r) + R_node(i) E_r],  (10)

where E_r is the energy spent in receiving one packet. In the mobility-assisted data collection, irrespective of the position of the nodes, each static node transmits only the packets generated by it. Thus, the energy consumed by a static node for transferring m packets of L_p bits each becomes

E_MS = m E_hop(r),  (11)

which is independent of its proximity to the sink. If we define the Relaying Overhead Index (ROI) of a sensor node as the ratio of the total number of transmissions from the node to the number of transmissions corresponding to the packets originated at that node, it is evident that all the sensor nodes have the same ROI (equal to 1 with an error-free channel) in the MS-based scheme, while it is approximately equal to T_node(i) in the multihop network. Since relaying is eliminated in the MS-based scheme, the total energy consumption is equally divided among all the nodes, thus balancing the energy consumption.
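As a quick numeric illustration of the annulus bookkeeping above (helper names are ours), the following evaluates T_node(i) and R_node(i) for the deployment used later in Section 5 (R = 1000 m, r = 50 m); it reproduces the steep "hot spot" load on the sink's one-hop neighbours.

```python
import math

def tx_per_node(i, R, r):
    """T_node(i): transmissions per node in annulus i, one packet per node."""
    if i == math.ceil(R / r):
        return 1.0                       # outermost annulus: own packet only
    return (R**2 - ((i - 1) * r) ** 2) / ((2 * i - 1) * r**2)

def rx_per_node(i, R, r):
    """R_node(i): receptions per node in annulus i (relayed traffic only)."""
    if i == math.ceil(R / r):
        return 0.0
    return (R**2 - (i * r) ** 2) / ((2 * i - 1) * r**2)

if __name__ == "__main__":
    R, r = 1000.0, 50.0
    for i in (1, 2, 5, 10, 20):
        print(f"annulus {i:2d}: tx/node ~ {tx_per_node(i, R, r):7.1f}, "
              f"rx/node ~ {rx_per_node(i, R, r):7.1f}")
```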
Network Lifetime.
Network lifetime forms an upper bound for the utility of the sensor network and it strongly depends on the lifetimes of the individual nodes that constitute the network [42]. We consider lifetime as the time until the first sensor node is drained off its energy and use the general formula for it as derived in [12] and restated as follows.
For a WSN with total nonrechargeable initial energy E₀, the average network lifetime E[L], measured as the average amount of time until the network dies, is given by

E[L] = (E₀ − E[E_w]) / (P_c + λ E[E_comm]),  (12)

where P_c is the constant continuous power consumption over the whole network, E[E_w] is the expected wasted energy (i.e., the total unused energy in the network when it dies), λ is the average sensor reporting rate, and E[E_comm] is the expected communication energy consumed by all sensors in a randomly chosen data collection.
If we ignore the idle time energy consumption in our network and define the network lifetime as the time span until the first death of any sensor, then

E[L] = (E₀ − E[E_w]) / (λ E[E_comm]).  (13)

Hence, a lifetime-maximizing data collection scheme should aim at reducing the average wasted energy E[E_w] and the average communication energy E[E_comm]. Mobility-assisted data collection achieves balanced energy consumption (thus reducing the average wasted energy) and reduced communication energy, thus enhancing the network lifetime as given by (13). The potential saving in network lifetime of the MS-based model compared to the ad hoc multihop network can be quantified in terms of the relaying overhead of the nodes as derived in Section 4.1.2. If E_s represents the initial energy of a static sensor, the lifetime of a node, expressed in terms of the maximum number of original packet transmissions over a distance of length r that it can afford before being completely drained of its battery power, can be expressed as

N_MH(i) = E_s / (T_node(i) E_hop(r)),  (14)
N_MS = E_s / E_hop(r),  (15)

for the ad hoc multihop and MS-based data collection schemes, respectively. In (14) and (15), E_hop(r) is given by (7) and T_node(i) is given by (9). Since the relaying overhead of a node in the ad hoc multihop network (represented by T_node(i)) increases with its proximity to the sink, (14) shows that the lifetime of a single node is maximum for the nodes in the periphery of the network and decreases with proximity to the sink. For the MS-based scheme, the node lifetime as indicated by (15) is the same for all static sensors irrespective of their location.
The network is considered dead if any of the sensors dies due to energy depletion. Assuming the deployment area of the sensor nodes to be very large compared to the transmission range (quite typical of UWSNs), single hop direct communication is infeasible, and the network lifetime in the multihop network is limited by the relaying overhead of the single hop neighbours of the sink. Hence, in terms of the number of transmissions, the network lifetimes SN_MH and SN_MS for the multihop and MS-based schemes, respectively, can be expressed as

SN_MH = N_MH(1),  SN_MS = N_MS.  (16)

From (9) and (14),

SN_MH = E_s r² / (R² E_hop(r)).  (17)

Hence, the lifetime improvement factor of the proposed MS-based framework over the ad hoc multihop approach can be expressed as

LIF = SN_MS / SN_MH = T_node(1).  (18)

Thus, the lifetime improvement factor is the average number of transmissions to be undertaken by any of the 1-hop neighbours of the sink node, corresponding to a single packet transmission from each of the sensor nodes in the outer annuli. Using (9), (18) can be simplified to obtain

LIF = R²/r² = 1 + (πR² − πr²)/(πr²).  (19)

Equation (19) shows that the relative improvement in lifetime is proportional to the ratio of the area of the region outside the one-hop neighbourhood of the sink located at the centre to that of the region within the one-hop neighbourhood. It is evident from (19) that the network lifetime improvement is more pronounced in the case of large networks (high R) and with sensors having small transmit power or a short transmission range (r). Also, the lifetime improvement is independent of the traffic type, data generation rate, MS arrival rate, packet length, and channel bandwidth. The numerical results from this analytical model are compared with the simulation results in Section 5.
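For a concrete sense of scale under this reconstruction: with the deployment radius R = 1000 m and transmission range r = 50 m used later in Section 5, the lifetime improvement factor comes to roughly R²/r² = 400; that is, each one-hop neighbour of a static sink would relay on the order of 400 packets for every packet of its own, and eliminating this relaying stretches the network lifetime by about the same factor. This figure rests on our reconstruction of (19) and should be read as indicative rather than exact.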
Latency and Buffer Occupancy.
In the MS-based network model, packets generated at the static sensor nodes at random intervals are queued in the sensor buffer, waiting for a contact or transmission opportunity. When a contact occurs (the MS approaches the static sensor), the packets generated and buffered so far are transferred to the MS. Assuming packet generation processes to be independent Poisson processes, we use three different analytical models to evaluate the delay performance and associated parameters. These models facilitate the computation of average queueing delay, service time and response time of a packet, an estimate of the average buffer occupancy, and average number of packets in the system and system utilization.
The suitability of the model for a particular application depends on the service policy of the MS, which dictates the number of packets collected from the sensor buffer in a single visit. The MS has the freedom to decide the number of packets to be collected from the sensor buffer, based on the service policy suitable for the application. Some possible options are (i) collecting a fixed number of buffered packets, (ii) collecting all the packets generated and buffered till the instant the MS visits the sensor, and (iii) collecting all the generated and buffered packets, including those being generated while the MS is collecting the already buffered packets. The policy to be adopted depends on the nature of the deployment and the requirements of the application. For example, if the sensor deployment is for periodic monitoring and if fairness is more important than delay, option (i) is more suitable, because otherwise the MS will spend more time at heavily loaded queues. If the mean delay and the number of packets collected are of prime importance, option (iii) is the best. Option (ii) appears to be both fair and efficient, though with a larger mean delay and reduced throughput compared to (iii). Variations of these options are also possible, taking into consideration other factors like the delay-sensitivity of the application and ease of implementation.
The first model, using Bulk Service Queues with service size K, is suitable for the analysis of a system in which the MS collects a fixed number of buffered packets in each visit. Also, with a sufficiently large value of K, it can support option (ii) of the service policy. The second one, using the M/G/1 queue with vacation, is appropriate in situations where the MS adopts option (iii) as the service policy. The polling model is more general in nature, supporting all the service policies and providing further options for performance enhancement.
Bulk Service Queueing Model.
We extend the model proposed in [17] with appropriate modifications for our scenario. Random movement of the mobile node is considered in [17], which we augment first with a simpler solution for a Poisson-distributed MS arrival process. Based on the finding that random movement of the MS is not suitable for our application, the model is then modified for controlled motion of the MS in a square deployment area with N sensors. The interaction between a single sensor and the MS has been extended to a network scenario with N sensors. The concept of input load has been introduced and its impact on packet queueing delay has been studied. The factors affecting the MS arrival rate (and hence the packet delay) in a network with N sensors have been identified. Finally, the accuracy and flexibility of the model are validated using simulation and by comparison with other analytical models.
The primary component of this model is a queue of generated (but not delivered) data at each static sensor node, which is served whenever the MS is in the sensor's transmission range. The arrival of the MS near the sensor is considered as a discrete event and when the MS visits the sensor node, it can pause at the location such that all the data generated and stored in the buffer is transferred.
This queueing model resembles the bulk service model in the queueing literature and is typically denoted by G/G^[K]/1/SB, where SB is the maximum buffer capacity and K is the service size. We assume that SB ≥ K. If fewer than K units of data are available at the sensor, then that data is transferred and the MS leaves without waiting for additional data. Once the buffered data is transferred, the sensor has to wait for the next arrival of the MS for the next data transfer. The data generation and MS arrival processes are assumed to be renewal processes with average rates λ and γ, respectively. We also assume that, when an MS visits a sensor, no other sensor is nearby and contending for service. Data transmission does not incur any loss and the only loss (if any) is due to sensor buffer overflow.
Since a maximum of K packets are collected in one visit of the MS, the net service rate is Kγ. For the system to be stable, the net service rate should be greater than the mean arrival rate λ; otherwise, the sensor queue can become arbitrarily large. Hence, we have to ensure that, for a given λ and γ, the batch size K is sufficiently large such that all the packets generated in one cycle time of the MS will be transferred in one visit itself. If the random variable Q represents the queue length at the MS arrival instant, the average of Q is used as a measure of the average sensor buffer occupancy, which in turn decides the Packet Delivery Ratio (PDR). Let p_n denote the probability that the queue length at the MS arrival instant equals n; the PDR can then be expressed as

PDR = 1 − P(Q > SB) = 1 − Σ_{n > SB} p_n.  (20)

The queue length distribution depends on the arrival pattern of the MS and other system parameters, and it is difficult to obtain a closed form expression for it.
If we assume the data generation process and the MS arrival process to be Poisson, and the sensors to have infinite buffer space, we can directly apply the results from Section 3.2 of [45]. The amount of time required for the service of any batch is an exponentially distributed random variable, whether or not the batch is of full size K. Thus, our model becomes the bulk service M/M^[K]/1/∞ queue model, which is of course a non-birth-death Markovian problem. When the steady state is assumed, the stationary probabilities satisfy the operator equation

(μ D^(K+1) − (λ + μ) D + λ) p_n = 0,  (21)

where D is the linear difference operator defined by D p_n = p_(n+1). It is found that the associated characteristic equation has one, and only one, root (say z₀) in (0, 1), and p₀ = 1 − z₀. Hence,

p_n = (1 − z₀) z₀^n,  n ≥ 0,  (22)

where z₀ is the only root of (21) which is less than 1.
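Since z₀ has no closed form for a general K, it is convenient to find it numerically; the short sketch below (names ours) applies plain bisection to the characteristic polynomial of (21) and then reads off the mean queue length from the geometric distribution in (22).

```python
def bulk_service_root(lam, mu, K, iters=200):
    """Unique root z0 in (0,1) of mu*z**(K+1) - (lam+mu)*z + lam = 0."""
    g = lambda z: mu * z ** (K + 1) - (lam + mu) * z + lam
    lo, hi = 0.0, 1.0 - 1e-9        # stability requires lam < K * mu
    for _ in range(iters):          # g(lo) > 0 and g(hi) < 0 under stability
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

if __name__ == "__main__":
    lam, mu, K = 0.5, 0.1, 10       # pkts/s, batch service rate, batch size
    z0 = bulk_service_root(lam, mu, K)
    print("z0 =", round(z0, 4), " mean queue length =", round(z0 / (1 - z0), 2))
```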
Since the stationary solution is so similar to that of M/M/1, the average buffer occupancy now becomes

E[Q] = z₀ / (1 − z₀).  (23)

Now, instead of the Poisson distribution, let us assume a general distribution of the MS arrival process with mean arrival rate γ, or mean interarrival time 1/γ. Also, in our model, we assume that all the data generated and buffered so far is transferred when the MS visits the sensor. Therefore, as given in [17], the amount of data in the sensor buffer will be the minimum of (i) the amount of data generated in one cycle time of the MS and (ii) the sensor buffer size. For Poisson data generation, the amount of data generated in an interval depends only on the length of the interval and hence the expected sensor buffer length becomes

E[Q] = min(λ/γ, SB).  (24)

The average buffer occupancy gives an indication of the sensor buffer requirement for a specified average data generation rate λ, MS arrival rate γ, and service size K. For a fixed service size K, E[Q] increases with λ and decreases with γ.
With the assumption of a large K and infinite buffer space, and on substituting the value obtained from (24) into (20), we get the Packet Delivery Ratio to be 1 with this model. If the sensor buffer space or the service size K is not sufficiently large to accommodate the incoming traffic without buffer overflow, packets will be dropped and the PDR is reduced. Hence, the sensor buffer size SB and/or the service size K should be designed such that no packet is lost due to buffer overflow, for a given data generation rate λ and MS arrival rate γ. The sensor buffer size SB is limited by the size and hardware cost of the sensor memory, while the batch size K is limited by the bandwidth of the channel between the sensor and the MS.
Let W_q and W_s represent the queueing delay of a packet in the sensor buffer and its transfer time from the buffer to the MS, respectively. Due to our assumption that all the buffered data is transferred in the next visit of the MS (which is true for large values of K), the average queueing delay is the mean residual time for the next visit of the MS at the sensor node. If the interarrival time of the MS is V with mean E[V] = 1/γ and variance σ²_ms, then the mean residual time for the next visit of the MS is given by Section 7.4 of [46] as

W_q = E[V]/2 + σ²_ms / (2 E[V]).  (25)

With the Poisson assumption for the MS arrival process, σ²_ms = (E[V])² and the average queueing delay as computed using (25) becomes 1/γ. As the variance of the MS arrival process (σ²_ms) increases, the mean queueing delay (W_q) also increases, and it grows without bound as σ²_ms approaches ∞. With controlled and deterministic motion of the MS, the variance of the MS arrival process σ²_ms equals zero, and the mean queueing delay as computed using (25) reduces to E[V]/2, half the MS interarrival time. Now the mean interarrival time of the MS at the tagged node becomes

E[V] = T_w + N λ E[V] E[S],  (26)

that is,

E[V] = T_w / (1 − N λ E[S]),  (27)

where T_w is the total travel (walk) time of the MS in one cycle, N is the number of sensors, and E[S] is the mean per-packet transfer time. Since the MS arrival rate γ = 1/E[V], with controlled deterministic motion of the MS the mean queueing delay of a packet, as expressed by (25), can be rewritten as

W_q = E[V]/2 = T_w / (2(1 − N λ E[S])).  (28)

If the input load N λ E[S] is increased, the mean interarrival time of the MS and the mean waiting time of the packets increase.
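The minimal computation below (names and parameter values ours) strings (25)-(28) together for deterministic MS motion; the tour length and node count are illustrative stand-ins for the scenario studied in Section 5.

```python
def ms_interarrival_s(n_nodes, walk_time_s, lam, service_s):
    """E[V] = T_w / (1 - N*lam*E[S]), cf. (26)-(27)."""
    rho_total = n_nodes * lam * service_s
    assert rho_total < 1, "unstable: MS cannot keep up with the offered load"
    return walk_time_s / (1 - rho_total)

if __name__ == "__main__":
    N, lam, s = 10, 0.05, 0.0467        # sensors, pkts/s per node, s per pkt
    T_w = 8000.0 / 15.0                 # ~8 km tour at 15 m/s (assumed)
    EV = ms_interarrival_s(N, T_w, lam, s)
    Wq = EV / 2                         # deterministic motion: zero variance (28)
    print("cycle %.0f s, mean wait %.0f s, mean buffer %.1f pkts"
          % (EV, Wq, lam * Wq))
```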
For fixed values of λ and K, the MS arrival rate γ can be increased, or the mean waiting time reduced, by increasing the number and/or velocity of the MSs. However, the number of MSs is limited by cost considerations, and the speed of an MS cannot be increased beyond a limit (say 20 m/s) due to practical reasons.
Applying Little's theorem, the average sensor buffer occupancy is given by

E[Q] = λ W_q.  (29)

The sensor buffer occupancy increases with the data generation rate and decreases with the MS arrival rate. Similar to the case of the mean waiting time, the sensor buffer occupancy is also minimum for controlled deterministic motion of the MS. Based on this result, that controlled deterministic motion of the MS achieves minimum message latency and sensor buffer occupancy, we will consider this alone in our further study.
M/G/1 Queue with Vacation.
Here the queue of generated data waiting for transmission in the sensor buffer is modelled as an M/G/1 queue with vacation [47]. In an M/G/1 queue with vacation, at the end of a busy period the server goes on vacation for a random interval of time V with first and second moments E[V] and E[V²], respectively. A new arrival has to wait in the queue for the completion of the current service or vacation and then for the service of all the customers waiting before it. An arriving customer to an empty system must wait for service until the end of the current vacation.
M/G/1 queueing model has been used in [31] for the delay analysis of mobility-assisted routing, but the model they have considered is entirely different from ours. In [31], the M/G/1 queue is the queue maintained by the MS, where the service requests are the data collection requests from the sensor nodes. To serve each request, the MS will move towards the sensor node that has sent the request and collect all data from that sensor node. It is assumed that data transmission time is negligible, and hence the service times are the MS travelling times to the sensor nodes.
The assumption of negligible data transmission time is not valid in our case, due to the very low bandwidth available in the underwater channel. Under heavy load conditions, the time required to transfer all the packets (already buffered and being generated) is comparable to the travel time of the MS. Hence, a simple M/G/1 queueing model is not sufficient to investigate the latency and buffer occupancy of our system model and to implement the service policy options discussed in Section 4.2. Unlike [31], we use an M/G/1 queue with server vacation model for each sensor node. The service requests are the transmission requests of data generated at that node and hence the service times are the data transmission times. The time from service completion (i.e., when the queue becomes empty) till the next arrival of the MS at that queue is the server vacation time.
We focus on the interaction between a single static sensor node and the MS. The MS in our system model corresponds to the single server of the M/G/1 queueing model and the transfer of each packet from the sensor buffer to the MS corresponds to one service. The MS keeps on serving a sensor buffer until there are no more packets in it. At the end of this service period, the MS moves to the next location. The interarrival time of the MS at the tagged node corresponds to the vacation of the M/G/1 system. For better performance, we make use of controlled mobility (rather than random mobility) of the MS.
The expected waiting time in queue for the M/G/1 system with vacation is given by

W = λ E[S²] / (2(1 − ρ)) + E[V²] / (2 E[V]),  (30)

where ρ = λ E[S] is the input load or system utilization factor.
From this, the expected response time of a message, the average buffer size, and the average number of messages in the system (in queue and in service) can be evaluated as

T = W + E[S],  E[Q] = λ W,  E[N_sys] = λ T,  (31)

where the vacation interval is governed by T_w, the mean travel time in a cycle of the MS. With symmetric sensor buffers and deterministic cyclic motion of the MS, the variance of the vacation interval is zero and hence the expected waiting time becomes

W = λ E[S²] / (2(1 − ρ)) + E[V]/2.  (32)

The first term in the expression for the expected waiting time as given by (30) does not contribute considerably under light load conditions, since the packet transfer time is very low compared to the vacation interval. However, under heavy load conditions, the number of packets in the sensor buffer will be large, which results in a considerable service time at each queue. Thus, the mean queueing delay of the packets is dominated by the travel time of the MS under light load conditions and by the transfer time of the buffered packets under heavy load conditions.
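A small sketch of (30)-(32) with illustrative numbers (all ours): with a deterministic per-packet transfer time and a deterministic cyclic tour, the vacation term E[V]/2 dominates at light load, matching the discussion above.

```python
def mg1_vacation_wait(lam, es, es2, ev, ev2):
    """W = lam*E[S^2]/(2*(1-rho)) + E[V^2]/(2*E[V]), rho = lam*E[S]  (30)."""
    rho = lam * es
    assert rho < 1
    return lam * es2 / (2 * (1 - rho)) + ev2 / (2 * ev)

if __name__ == "__main__":
    lam = 0.05                      # packets/s at one sensor
    es = 0.0467; es2 = es ** 2      # deterministic per-packet transfer time
    ev = 546.0; ev2 = ev ** 2       # deterministic vacation: one MS tour (s)
    w = mg1_vacation_wait(lam, es, es2, ev, ev2)
    print("mean wait %.0f s, mean response %.1f s" % (w, w + es))
```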
Polling Model.
Polling is a technique for channel access as well as delay modelling. A typical polling system consists of a number of queues attended by a single server who visits the queues in some order to render service to the messages waiting at the queues [48]. If the server finds at least one message when it visits a queue, then the service is immediately started at that queue; otherwise, it moves to the next queue, which requires a finite switch-over time. Polling cycle time is the time between the server's visit to the same queue in successive cycles. The order in which the server visits the queues is decided by a routing mechanism.
With respect to the number of messages served during one visit of the server to a queue, three main types of service disciplines are available: exhaustive, gated, and limited. Under the exhaustive policy, the messages arriving at a queue in service are candidates for service in the same visit period, whereas under the gated policy they are not. For system stability, all messages that arrive during a cycle must be served within one cycle time.
This polling model can be used to model our system with the MS acting as the single server and the sensor buffers containing generated (but not transmitted) data behaving like the queues to be served. The time taken by the MS to travel from the proximity of one node to next is represented by the switch-over time or walk time and the time taken to transfer the data from the sensor buffer to the MS is represented by the service time.
Assuming a continuous-time polling model with symmetric queues and each queue having infinite buffer space, if the first and second moments of the message service time and the MS walk time are known, the mean message waiting time for different service disciplines can be computed. Assuming Poisson arrival of packets at rate λ at each sensor buffer, the offered load of a queue is given by ρ = λ E[S], where S is the service time of a packet. This service time is the sum of the packet transmission time (which depends on the packet length and channel bandwidth), the propagation delay (which depends on the distance between the sensor and the MS), and a minimum time spacing between the transmission of two packets to ensure interference-free operation. Assuming a packet length of 50 bytes and a bandwidth of 10 kHz, the mean service time of a packet comes to around 0.0467 seconds. For a spacing of 400 metres between two adjacent nodes and an MS velocity of 15 m/s, the MS walk time between two nodes comes to nearly half a minute.
The walk time is assumed to be independent of the arrival and service processes. If the mean of the total walk time in one cycle is denoted by T_w and the total offered load is ρ_T = Nρ = NλE[S], the mean cycle time of the MS is given by

E[C] = T_w / (1 − ρ_T),  (33)

and the mean pause (data upload) time of the MS at a sensor in one cycle is

E[P] = ρ E[C] = λ E[S] T_w / (1 − ρ_T).  (34)

If w̄ and σ²_w denote the mean and variance of a single walk time, the mean message waiting times under the exhaustive and gated disciplines are

W_exh = σ²_w/(2w̄) + [N λ E[S²] + T_w (1 − ρ_T/N)] / (2(1 − ρ_T)),  (35)
W_gated = σ²_w/(2w̄) + [N λ E[S²] + T_w (1 + ρ_T/N)] / (2(1 − ρ_T)).  (36)

With all parameters being the same, the mean waiting time is less for exhaustive service than for gated service, but the former lacks fairness if some queues are heavily loaded and the succeeding queues are lightly loaded. For controlled motion of the MS with the assumption of equal distances between the visit locations, the walk time is constant and hence the first term in the above expressions is zero, leading to minimum waiting time. This again validates our assumption that controlled motion of the MS is better than random motion, wherever feasible.
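The sketch below evaluates the exhaustive and gated waiting times as reconstructed in (35) and (36), with deterministic walk times so that the variance term vanishes; all names and numbers are illustrative.

```python
def polling_wait(n, lam, es, es2, tw, exhaustive=True):
    """Symmetric polling mean wait, deterministic walks (variance term = 0)."""
    rho_t = n * lam * es                     # total offered load
    assert rho_t < 1
    sign = -1.0 if exhaustive else 1.0
    return (n * lam * es2 + tw * (1 + sign * rho_t / n)) / (2 * (1 - rho_t))

if __name__ == "__main__":
    n, lam = 10, 0.05
    es = 0.0467; es2 = es ** 2
    tw = 8000.0 / 15.0                       # total walk time per cycle (s)
    print("cycle time  :", round(tw / (1 - n * lam * es), 1), "s   # (33)")
    print("W exhaustive:", round(polling_wait(n, lam, es, es2, tw, True), 1), "s")
    print("W gated     :", round(polling_wait(n, lam, es, es2, tw, False), 1), "s")
```

As expected, the exhaustive value coincides with the M/G/1-with-vacation result for the same load, and the gated value sits slightly above it.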
Once the mean waiting time of a packet in the sensor buffer has been evaluated using (35) or (36), the mean response time and the average buffer occupancy follow, as in the earlier models, by adding the mean service time and by applying Little's theorem, respectively.
Packet Delivery Ratio.
As discussed in Section 4.2.1, the Bulk Service Queueing model permits us to evaluate the Packet Delivery Ratio of the MS-based scheme. Substituting the value obtained from (24) into (20) and applying Jensen's inequality, the lower bound of the PDR is

PDR ≥ 1 − λ / (γ · SB).  (37)

With sufficiently large buffer space to avoid buffer overflow and a value of K large enough that all the packets generated within one cycle time of the MS are collected in that visit itself, the PDR becomes 1 for MS-based data collection.
Since vector-based forwarding (VBF) is used in the multihop network case for comparison purposes, the approach used in [6], with appropriate modifications for 2-dimensional deployment, is followed to evaluate the PDR in the ad hoc multihop network. Assuming N nodes, each with transmission range r, uniformly deployed in a square area of side a, the density of nodes in the network is σ = N/a². Now, if w represents the radius of the routing pipe in VBF and p represents the loss probability of packets, it can be shown that the probability of successful delivery of a packet over h hops is

P_s(h) = [(1 − p)(1 − e^(−σ A(w, r)))]^h,  (38)

where A(w, r) is the area of the routing-pipe segment within one transmission range in which at least one forwarder must be found. Equation (38) shows that, for a fixed packet loss probability p, the probability of successful packet delivery increases with the node density, the width of the routing pipe, and the transmission range of the sensor nodes in the ad hoc multihop network that uses VBF. Thus, achieving a reasonably good delivery performance using the ad hoc multihop approach for data collection in sparse and energy-constrained environments is almost impossible. At the same time, as shown by (37), the probability of successful packet delivery is independent of node density in MS-based data collection, thus making it the better option for sparse and constrained networks.
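To make the density dependence in (38) tangible, the rough sketch below (our reconstruction; the forwarding-area model A(w, r) ≈ w · r is a deliberately crude assumption of ours) evaluates the h-hop delivery probability for a few node counts.

```python
import math

def vbf_delivery(n_nodes, side_m, r_m, w_m, p_loss, hops):
    """h-hop success prob.: per hop, survive the channel AND find a forwarder."""
    sigma = n_nodes / side_m ** 2        # node density (nodes per m^2)
    a_fwd = w_m * r_m                    # crude routing-pipe segment area
    p_hop = (1 - p_loss) * (1 - math.exp(-sigma * a_fwd))
    return p_hop ** hops

if __name__ == "__main__":
    for n in (20, 40, 80, 160):
        print(n, "nodes ->", round(vbf_delivery(n, 2000, 250, 100, 0.05, 8), 4))
```

Even with a generous pipe, the delivery probability collapses at low densities, which is exactly the regime where the MS-based scheme keeps its PDR near 1.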
Results
To validate our analytical models, we have used an NS-2 based network simulator for underwater applications, Aqua-Sim [3]. It is an event-driven, object-oriented simulator written in C++ with an OTCL interpreter as the front-end. We have developed the simulation environment for mobility-assisted data collection in C++ and the Aqua-Sim framework. The behaviour of the underwater sink and the underlying layers have been augmented with DTN support with beaconing and node discovery, MS-based data collection with different service policies, and interference-free operation. The proposed energy models have been incorporated in the simulation model with facility for tuning the transmitting, receiving, and channel parameters and power consumption. Suitability of the three different analytical models in supporting different application requirements and the impact of MS trajectory on the performance metrics have been verified with simulations. Simulations implementing three different service policies of the MS upon visiting each sensor have been carried out: (i) collect a fixed number of the buffered packets, (ii) collect all the packets generated and buffered till the arrival instant of MS, and (iii) collect the packets buffered so far plus the packets being generated till the MS leaves the sensor.
200 simulations are performed to get each result, and in the simulations, input load has been varied by varying the packet arrival rate at the sensors. Controlled motion of the MS along trajectory A has been used for the simulation study on latency and buffer occupancy. Vector-based forwarding protocol is used in the simulation of ad hoc multihop network and the same with modifications to support mobility-assisted routing is used in MS-based data collection. Both CBR (constant bit rate) and Poisson packet arrival models have been considered with equal data generation rate at all the nodes.
Assuming a target SNR of 20 dB and a noise level of 70 dB, the analytical results corresponding to the variation of transmission loss with the frequency of operation and the distance between the sensor nodes, as expressed by (3), are plotted in Figures 4 and 5 for shallow water and deep water environments, respectively. Transmission loss is the sum of spreading loss and absorption loss. Spreading loss is independent of frequency, whereas absorption loss increases with frequency and the distance between nodes. Since spreading is cylindrical (k = 1) in the shallow water environment and spherical (k = 2) in the deep water environment, the variation of spreading loss with distance is linear in shallow water, while it is quadratic in deep water.
Assuming tunable transmit power P_t, fixed receive power P_r = 0.075 W, fixed packet length L_p = 400 bits, and bandwidth efficiency of modulation n = 0.5 [3, 11] in (7), the effect of hop length and channel bandwidth on per-hop energy consumption is plotted in Figures 6 and 7 for shallow water and deep water, respectively. The variation is sharper for deep water since the transmission loss is proportional to the square of the hop length there. From the diagrams, it is evident that, in the energy-constrained underwater environment, direct single hop communication is not feasible over large distances. Also, due to the distance-dependent bandwidth in the underwater scenario, short-range communication is advantageous in two ways, and the effect is more significant in the deep water scenario. Simulation results give slightly higher values than the analytical ones due to the presence of distance-independent fixed costs in transmission.
The analytical and simulation results corresponding to the relaying overhead and network lifetime in a dense/connected network have been obtained and plotted. Assuming 40 sensor nodes uniformly distributed in a circular area of radius 1000 m, with the communication range of the static sensor nodes fixed at the very low value of 50 m (for underwater sensors, less than 1 km is short range and 50 m is considered to be very short), the variation of per-node energy consumption with proximity to the sink, as computed using (10) and (11) for the ad hoc multihop architecture and the MS-based architecture, respectively, is illustrated in Figure 8. As expected, it demonstrates the increased relaying overhead of nodes near the static sink that results in the "hot spot" effect in the ad hoc multihop approach, contrasted with the balanced and reduced energy consumption of nodes in the MS-based scheme.
Using the values of hop energy consumption obtained from (7) for different transmission ranges and assuming each static sensor node having an initial energy of 10000 Joules, maximum number of transmissions that can be afforded by a sensor in the MS-based architecture is computed using (15). It is interesting to observe that this value decreases with an increase in transmission range (due to high transmit power requirement), but it is independent of the data generation rate. With each node generating packets of length 50 bytes each and using a channel bandwidth of 10 kHz, the mean lifetime of the network (in days) for different transmission ranges and data reporting rates is illustrated in Figure 9. It is evident that, for a fixed data reporting rate, the lifetime of each node and thus the entire network can be enhanced by adopting short range high bandwidth communication, wherever possible.
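As a hedged back-of-envelope reading of Figure 9 (all numbers below except the 10000 J initial energy are our own assumptions): a node with initial energy E_s that spends E_hop joules per packet and reports λ packets per second lasts roughly E_s/(λ · E_hop) seconds.

```python
E_s = 10_000.0      # J, initial node energy (as stated in the text)
E_hop = 0.05        # J per packet over a short hop (assumed value)
lam = 0.01          # packets/s reporting rate (assumed value)

lifetime_days = E_s / (lam * E_hop) / 86_400
print(f"~{lifetime_days:.0f} days")     # ~231 days under these assumptions
```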
Assuming the radius of the deployment area to be 1000 m, the relative improvement in network lifetime, as indicated by the lifetime improvement factor in (19), has been evaluated for different transmission ranges, traffic types (CBR and Poisson), packet generation rates, and MS arrival rates and is illustrated in Figure 10. We have considered the network to be alive till the first node dies due to energy depletion. The results demonstrate the superior performance of the MS-based architecture over the ad hoc multihop scheme as well as the advantage of using a very low transmission range. The relative improvement of the MS-based scheme over the multihop network is more pronounced for large and energy-constrained networks. It is interesting to observe that, even though the network lifetime reduces with the data generation rate in both cases, the lifetime improvement factor is independent of traffic type, data generation rate, and MS arrival rate. Another interesting observation is that, if the transmission range and the radius of deployment are the same, there exists direct single hop communication between the source and the sink, resulting in no relaying overhead and no rationale for opting for the MS-based scheme. However, such situations will never occur in practical UWSN deployments because of the heavy energy cost, high bit error rate, and very limited bandwidth of long range single hop communication in the underwater environment.
The delay performance of the mobility-assisted scheme is not directly comparable with that of the ad hoc network, since the former takes much longer for data collection. Hence, we have tried to demonstrate the factors affecting this delay and to compare the delay performance of the different analytical models for MS-based data collection, as these models represent variants of the generic scheme for MS-based data collection. The network scenario considered for the delay analysis with all three analytical models is the same: a square area of size 2000 m × 2000 m with 10 sensor nodes randomly distributed in the deployment area, with each sensor node having sufficient buffer space to avoid buffer overflow. Controlled mobility of the MS along trajectory A is used for the simulations and the input load is varied by varying the packet generation rate.
Assuming sufficiently large values of K such that any packet generated in one cycle time of the MS is transferred in the next visit of the MS itself, the delay performance of the Bulk Service Queueing model for different MS mobility patterns, as evaluated using (25), has been plotted in Figure 11. For the same data generation rate and MS velocity, the delay is least for deterministic motion of the MS and it increases with the variance of the MS arrival distribution. Sensor buffer occupancy also exhibits the same behaviour, as depicted in Figure 12. Hence, controlled and deterministic motion of the MS is the most desirable, wherever the deployment permits it.
With the same assumptions and using deterministic motion of the MS in the Bulk Service Queueing model, the variation of the mean queueing delay of packets with varying input load and speed of the MS is plotted in Figure 13, and the corresponding buffer occupancy computed using (29) is compared with the simulation results in Figure 14. Since increasing the speed of the MS is equivalent to increasing its arrival rate at the sensor, both the message waiting time and the sensor buffer occupancy decrease with increasing MS speed. However, the speed of the MS cannot be increased beyond a limit, due to practical reasons. Also, an increase in network load (due to an increase in the number of nodes and/or the data generation rate) leads to increased message delay and buffer occupancy. If the packet arrival rate λ exceeds the net service rate Kγ, system stability is lost and the delay performance parameters increase exponentially. Hence, λ, γ, and K are the design parameters that can be tuned for the desired delay performance; given any two of these three parameters, the third should be properly chosen to ensure stability, minimize delay, and avoid buffer overflow and packet loss.
The mean queueing delay obtained using the M/G/1 queue with vacation for different values of the data generation rate and different speeds of the mobile sink has been plotted in Figure 15; the analytical values (computed using (32)) closely match those obtained from simulation. With the same network scenario, input load, and MS speed, the values obtained with the Bulk Service Queueing model are larger than those with the M/G/1 vacation model, because the former behaves like the gated service scheme of the polling model and the latter like the exhaustive service scheme. In the Bulk Service Queue model with a large value of K, or in the polling model with gated service, upon visiting a node the MS collects the packets generated and buffered till its arrival instant. In M/G/1 with vacation and in polling with exhaustive service, the MS collects all the packets buffered so far, plus the packets being generated while the already buffered packets are being transferred. As indicated by (35) and (36), polling with exhaustive service results in a lower mean waiting time compared to polling with gated service.

Figure 16 shows the average buffer occupancy results obtained with the M/G/1 queue with vacation model. Similar to the case of the mean queueing delay of packets, the values obtained with this model are larger than those obtained with the Bulk Service Queue model. As expected, the number of packets awaiting their turn for transmission increases with the input load and decreases with the frequency of MS visits. Since insufficient buffer space will lead to buffer overflow and packet loss, this result allows us to design the appropriate sensor buffer size according to the application requirement and the MS arrival pattern. The sensor buffer occupancy is minimum when the MS leaves a sensor and maximum when it approaches the sensor. Thus, the maximum buffer occupancy is the mean cycle time multiplied by the packet generation rate. If the application is delay-tolerant but loss-sensitive, it is not sufficient that the sensor buffer size just exceeds the mean buffer occupancy. For such applications, the sensor buffer space should be chosen to be greater than or equal to the maximum buffer occupancy (at the MS arrival instant).

Assuming controlled motion of the MS with 10 sensor nodes (having infinite buffer size) placed at equal distances, the mean cycle time of the MS, the pause time (data upload time) of the MS at a sensor, and the average waiting time of a message have been evaluated using (33), (34), (35), and (36) and illustrated in Figure 17. As the input load increases (due to an increase in the number of nodes, the data generation rate, or the mean message service time), the mean waiting time also increases. In terms of mean waiting time, exhaustive service is superior to gated, whereas the mean cycle time is the same for both policies. At light loads, the mean cycle time of the MS and the mean waiting time of a message are dominated by the MS walk time, while at heavy loads they are dominated by the service time (pause time) at the sensors. The mean waiting time for different values of the data generation rate and MS speeds using the polling model with exhaustive service and controlled motion of the MS is exactly the same as that of the M/G/1 queue with vacation model and hence is not repeated. The variation of the mean buffer occupancy with varying load and MS speeds also shows exactly the same behaviour as that of M/G/1 with vacation.
Assuming a sufficiently large buffer size to avoid buffer overflow, no losses in communication, and controlled motion of the MS with a speed of 15 m/s, the impact of the data generation rate and the service batch size (the maximum number of packets collected from each sensor in a single visit) on the Packet Delivery Ratio (PDR) is studied using the Bulk Service Queue model and illustrated in Figure 18. Under stable conditions, all the packets generated in one cycle time of the MS are transferred in the next visit of the MS, and thus the PDR is 1. At low data generation rates, a low batch size is sufficient for good delivery performance, whereas at high data generation rates, a high batch size is necessary for system stability and successful data delivery. As the data generation rate exceeds the net service rate (Kγ), the stability condition is lost and the PDR starts dropping, finally reaching zero.
With the controlled motion of the MS at 15 m/s, simulation results illustrating the combined effect of sensor buffer size and data generation rate on the PDR are shown in Figure 19. Assuming no loss in transmission, packet loss (if any) occurs due to buffer overflow and hence the delivery ratio is strongly dependent on the buffer size. For a fixed packet arrival rate, as the buffer size is increased, the delivery ratio increases (sharply for low arrival rates and slowly for high arrival rates) and, once it reaches a maximum value, it remains insensitive to the buffer size. For the same target delivery ratio, a small buffer size is sufficient under low packet arrival rates, but as the arrival rate is increased, the buffer space requirement also increases. Also, for the same available buffer space, as the data generation rate increases, the delivery ratio decreases. Hence, this graph gives us insight into how to decide the sensor buffer size for a particular application.
Fixing the width of the routing pipe to be 100 m in multihop communication using VBF, the variation of the PDR with node density in the ad hoc multihop and MS-based data collection schemes is shown in Figure 20. Assuming infinite buffer size and no communication errors, ideally the delivery ratio should be 1 for the MS-based data collection scheme; but when we run the simulation for a finite amount of time, all the sensor nodes may not be detected and hence the delivery ratio is slightly less than 1. In the multihop ad hoc network using VBF, the delivery ratio is very small for low node density, due to connectivity gaps in the network. As the node density is increased, the PDR increases due to improved connectivity, reaches a maximum value, and then remains almost constant. Also, for a fixed node density, increasing the communication range of the sensor nodes or the width of the routing pipe results in enhanced PDR; however, it is not recommended, due to increased power consumption. Since MS-based data collection fills connectivity gaps, its PDR is independent of node density. Also, a transmission range of 250 m has given good results in the MS-based scheme and hence there is no point in increasing it further. The results show that the MS-based scheme is the better option for sparse and energy-constrained networks, to achieve successful packet delivery.

[Table 1: Impact of MS trajectory on energy efficiency and PDR (application 1: delay-tolerant; application 2: delay-sensitive with delivery deadline 3 minutes). Columns: MS trajectory (as in Figure 3), communication range, energy consumption, and PDR (%); the table body is not recoverable from the source.]
Network throughput is the number of bits transferred per unit time. Simulation results showing the impact of buffer size and data generation rate on network throughput under the assumption of controlled motion of MS at 15 m/s are given in Figure 21. As expected, for fixed buffer size, throughput increases with the data generation rate. For a fixed data generation rate, it increases with the buffer size initially and then remains constant at the maximum possible value.
For investigating the influence of the MS trajectory on the performance metrics of our data collection scheme, we have fixed the input load at 0.4 and tuned the transmission range of the sensors to be large enough for them to be detected by the MS during its travel. Assuming controlled deterministic motion of the MS along the three typical trajectories shown in Figure 3, the mean cycle time of the MS and the mean waiting time of messages are evaluated using (33) and (35), respectively, and illustrated in Figures 22 and 23. It is observed that, among the trajectories considered for the comparison study, the cycle time of the MS and the waiting time of messages are smallest for trajectory C and largest for A. The impact of the trajectory on delay is more pronounced at light loads than at heavy loads. This is because the cycle time and waiting time are dominated by the MS travel time at light loads, and by the data upload time at the sensors at heavy loads.
At the same time, though the MS requires more time to complete one cycle by following trajectory A, it ensures improved delivery performance in a highly constrained environment. This is because of the reduced transmit power requirement of static sensors to ensure short-range, high data rate single hop connectivity with the MS. Due to the requirement of increased transmission range of static sensors while using trajectory C, their energy consumption will be the highest with trajectory C and the lowest with trajectory A. Since we have fixed the transmission range to be sufficiently high such that all the nodes in the deployment area will be covered by the MS by travelling along the trajectory, PDR with all the three trajectories, as observed in simulation, nearly equals the theoretical value of 1 for a delay-tolerant application. However, for applications with tight deadline requirements, packets missing the deadline are discarded.
Since trajectory C offers the best delay performance, in the case of delay-sensitive applications its delivery performance (as indicated by the PDR) is also the best. Thus, the results illustrate a trade-off between energy consumption and latency and act as a guide for selecting the MS trajectory according to the application requirements; that is, trajectory A with a transmission range of 250 m is suitable for a delay-tolerant application in a highly constrained environment, whereas trajectory C with a 750 m range is more suited for a delay-sensitive application in a network with nodes having higher energy reserves. The static sink with direct single hop communication supports time-critical applications, at the cost of very high energy consumption. With the transmission range and transmit power level fixed at small values, the MS following trajectory A achieves better coverage and more successful data delivery than B, C, and the static sink, though at the cost of increased delay.
Discussion
Exploiting the controlled mobility of sensor nodes to improve the data collection performance of resource-constrained and disruption-prone underwater sensor networks has been presented, and the analytical techniques to estimate data collection latency and sensor buffer requirements prior to network deployment have been discussed. The proposed framework is found to be effective for delay-tolerant sensing applications and in situations where network lifetime is more important than message latency. Let us now examine the advantage of analysing the system with three different delay models and how effectively the results can be used for future research in this area.
Comparison of Delay Models.
The different analytical models of latency computation (Bulk Service Queue, M/G/1 queue with vacation, polling with exhaustive service, and polling with gated service) represent different service models of the proposed MS-assisted data collection framework. Having presented the results of the latency analysis using these models, a comparative study of the results reveals some interesting information that we hope will be useful to the network designer.
Assuming controlled motion of the MS (with speed fixed at 15 m/s) in a square area of size 2000 m × 2000 m, with 10 sensor nodes uniformly distributed in this area generating packets of size 50 bytes, Table 2 gives a comparison of the mean waiting time as evaluated using the different delay models. This comparison of the latency performance of the different analytical models is, in fact, a comparison of the different service models of the proposed MS-assisted data collection framework. For a fixed input load, the polling model with exhaustive service policy and the M/G/1 queue with vacation model provide the optimum waiting time and buffer occupancy. An interesting observation is that the behaviour of the M/G/1 queue model with vacation is exactly the same as that of the exhaustive service scheme of the polling model, and thus both models turn out to be one and the same. It is observed that the Bulk Service Queueing model with service batch size ≥ 1 is similar to the k-limited polling model, but the former permits easy analysis with a finite buffer space, which is not so with the latter. Also, with sufficiently high values of the service batch size and the sensor buffer space SB, the Bulk Service Queueing model behaves like the gated service polling model. The mean waiting times and sensor buffer space requirements with these models are larger than those of the polling with exhaustive service and M/G/1 queue with vacation models. However, they exhibit a desirable characteristic which is significant especially in asymmetrical systems, that is, systems in which the packet generation rates (and thus the offered load) vary heavily among the sensors. All sensors receive fair treatment under these policies, since only the packets buffered till the polling instant are collected in the current visit of the MS, in contrast to the other two service models, where nodes with high data generation rates may prevent the MS from visiting the lightly loaded queues.
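For reference, the M/G/1-with-vacation waiting time compared above follows the classical decomposition result: the ordinary Pollaczek-Khinchine wait plus the mean residual vacation time. The sketch below assumes this standard multiple-vacation result; the parameter values are illustrative and are not taken from Table 2.

```python
# Mean waiting time for an M/G/1 queue with (multiple) server vacations:
# E[W] = lam*E[S^2] / (2*(1 - rho)) + E[V^2] / (2*E[V]).

def mg1_vacation_wait(lam, es, es2, ev, ev2):
    """lam: arrival rate; es/es2: 1st/2nd moments of service time;
    ev/ev2: 1st/2nd moments of the vacation (MS absence) period."""
    rho = lam * es
    assert rho < 1, "queue is unstable"
    w_pk = lam * es2 / (2 * (1 - rho))   # Pollaczek-Khinchine term
    w_vac = ev2 / (2 * ev)               # residual vacation term
    return w_pk + w_vac

# Example: 50-byte packets at an assumed 10 kbps uplink -> 0.04 s service;
# a deterministic 500 s vacation models the MS touring the other sensors.
es = 50 * 8 / 10_000
print(mg1_vacation_wait(lam=0.1, es=es, es2=es**2, ev=500.0, ev2=500.0**2))
```

With a deterministic vacation the residual term is simply E[V]/2, which dominates at light load, matching the earlier observation that waiting time is governed by MS travel time when the queues are lightly loaded.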
Of the three analytical models for delay performance, the polling model is the most versatile and flexible one, mainly because of its capability to incorporate different service policies, scheduling disciplines and optimization schemes. The polling model, with its many variants, can accurately model the average-case and worst-case latency performance of the mobility-assisted data collection framework.
Scope for Further Study.
As future work, we plan to extend the proposed framework to a 3-dimensional network with multiple mobile sinks and to formulate a cross-layer adaptive approach for improved data collection performance. It is also planned to investigate techniques for reducing data collection latency and optimizing network performance dynamically according to application requirements and network conditions. Machine learning techniques need to be explored in underwater DTN schemes to improve their adaptability to the changing environment. Optimization algorithms can be developed to design better MS trajectories for periodic as well as event-driven data collection. Another interesting work will be the investigation of the impact of traffic models and MAC protocols on network performance.
Conclusion
In this paper, we have considered application-oriented and energy-efficient data collection schemes for sparse underwater acoustic sensor networks, considering them as delay-tolerant networks. Exploiting the controlled mobility of the underwater sink node, we have achieved a high data delivery ratio and enhanced network lifetime, at the cost of increased message latency. The MS-based scheme has been found to be the only feasible option in disconnected networks and the more energy-efficient option in all networks. Realistic models for energy consumption, network lifetime, Packet Delivery Ratio and message latency have been presented to analyze the proposed framework. The analytical models presented permit the evaluation of the network performance metrics prior to the actual deployment of nodes.
Delay analysis was done based on three different queueing models, a comparison of which reveals that each model is suitable for some specific application requirements. The Bulk Service Queueing model is suitable for modelling a system in which the MS follows a round-robin approach of visiting the sensors and collects a fixed number of packets from each sensor. It also facilitates modelling of finite buffer systems, which is more realistic, but whose analysis is not easy with the other models. The M/G/1 queue with vacation can effectively model a system in which, upon visiting a sensor, the MS collects all the packets buffered so far plus the packets generated while the already buffered packets are being transferred. The polling model is found to be the most versatile and flexible one among the three models considered, with its k-limited service scheme supporting periodic data collection with fair treatment given to all sensors, its exhaustive service scheme being the optimal one in terms of mean waiting time and buffer occupancy, and its gated service scheme standing between the other two in terms of delay and fairness metrics. Both analytical and simulation results show that the Bulk Service Queueing model with sufficiently large service batch size and buffer space behaves like the gated service scheme of the polling model.
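The relative ordering of the exhaustive and gated schemes noted above is easy to confirm with a small event-driven simulation. The following sketch uses invented parameters (five queues, Poisson arrivals, fixed service and switchover times, with the switchover standing in for MS travel between sensors); it is a sanity check, not the paper's simulator setup, and exhaustive service should yield the smaller mean wait.

```python
import random

def simulate_polling(discipline, n_q=5, lam=0.05, service=0.5,
                     switchover=2.0, horizon=50_000, seed=1):
    """Mean packet waiting time under 'exhaustive' or 'gated' polling."""
    rng = random.Random(seed)
    arrivals = []                       # Poisson arrival times per queue
    for _ in range(n_q):
        t, times = 0.0, []
        while t < horizon:
            t += rng.expovariate(lam)
            times.append(t)
        arrivals.append(times)
    heads = [0] * n_q                   # first unserved packet per queue
    t, waits, q = 0.0, [], 0
    while t < horizon:
        t += switchover                 # MS moves to queue q
        gate = t                        # poll instant
        while heads[q] < len(arrivals[q]):
            cutoff = gate if discipline == "gated" else t
            if arrivals[q][heads[q]] > cutoff:
                break                   # nothing (more) eligible to serve
            waits.append(t - arrivals[q][heads[q]])
            t += service                # upload one packet
            heads[q] += 1
        q = (q + 1) % n_q
    return sum(waits) / len(waits)

for d in ("exhaustive", "gated"):
    print(d, round(simulate_polling(d), 2))
```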
We have developed the simulation platform (in C++ and OTcl) by incorporating the DTN routing features and the queueing models into the NS-2 based network simulator, Aqua-Sim. Thus, an enhanced simulation environment is made available for further research and predeployment investigations in this novel area. Simulation results closely match the analytical ones, and the proposed scheme is suitable for non-time-critical applications. This paper has focused on the performance metrics of Packet Delivery Ratio and energy consumption, without giving much importance to the timely delivery of data. Hence, the scheme is suitable only for delay-tolerant applications. With a very short distance between a static sensor and the MS, optical communication with a very high data rate is feasible, minimizing the service time at the sensors and thereby the data collection latency to a great extent, especially under heavy load conditions. For delay-sensitive applications, the emergency data or requests for service will have to be routed to the sink directly or through an ad hoc network. The inevitable trade-off between energy consumption and latency has to be considered according to the application requirements and network connectivity.
|
2018-04-03T00:10:49.651Z
|
2015-05-01T00:00:00.000
|
{
"year": 2015,
"sha1": "cdcd0f26467d992fca65f468d32fc1cc5f749e06",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2015/128757",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2d64c70d35a2debe7ee66f3445556ec83a7eaa30",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
79704657
|
pes2o/s2orc
|
v3-fos-license
|
A Study on Appraisal of Knowledge, Attitude and Practices of Trained AWWs regarding Malnutrition under IMNCI
Context: Malnutrition is the biggest health problem of children in developing countries. Approximately 60 million children are underweight in India, and child malnutrition is responsible for 22% of the country's burden of disease. Aims: (i) To study the knowledge and attitude of anganwadi workers (AWWs) regarding malnutrition after IMNCI training. (ii) To assess the skills acquired by AWWs regarding malnutrition after IMNCI training. Materials and Methods: The present study was a cross-sectional study conducted in five talukas of Surendranagar district from August 2012 to January 2013. The sample included all AWWs of the five talukas of Surendranagar district who had received basic IMNCI training. Out of a total of 833 AWWs, 774 were interviewed. Statistical Analysis Used: Descriptive statistics and Chi-square test. Results: The analysis shows that the majority of AWWs were educated up to secondary level (49.49%). Nearly 20% of AWWs were educated only up to primary level, which could be a barrier to program implementation. 80.6% of the respondents correctly identified grade-4 malnutrition from the growth chart, while nearly 20% of the respondents identified it as lower-grade, i.e., first- to third-degree, malnutrition. Conclusions: Educational status plays a great role in the success of any program, as it affects the understanding and grasping level of AWWs regarding the skillful management of malnutrition. Efficient and keen work in the field requires not only proper training but also assessment of skills at all levels. Re-training at timely intervals can play a lead role in improving their skills.
Introduction
Malnutrition is the biggest health problem among children in developing countries. Approximately 60 million children are underweight in India, and child malnutrition is responsible for 22% of the country's burden of disease. 1 Malnourished children are less likely to perform well in school and more likely to grow into malnourished adults at greater risk of disease and early death. India reports that 50% of all child deaths are due to malnutrition. 2 Despite global efforts at improving MCH and specific efforts like the ICDS scheme, malnutrition among children remains a significant problem in India.
Aims and Objectives
• To study the knowledge and attitude regarding malnutrition of anganwadi workers who had undergone IMNCI training.
• To assess the skills learnt by trained anganwadi workers in managing malnutrition.
• To establish the relationship between the educational status of anganwadi workers and their knowledge, attitude and practice in managing malnutrition in children.
Materials and Methods
The present study was a cross-sectional study conducted from August 2012 to January 2013. After enlisting all talukas of Surendranagar district, five talukas (Sayla, Limbdi, Dhranghadhra, Muli and Wadhwan) were selected through simple random sampling for the study. The information was gathered using a pre-tested, semi-structured proforma from all AWWs of the five talukas who had received basic IMNCI training.
Out of a total of 833 AWWs, 774 were interviewed at the monthly taluka meeting; the remaining 59 were either absent during this meeting or submitted incomplete details. Data analysis was done using the statistical software SPSS 20. Table 1 shows that most of the anganwadi workers were in the age group between 20 and 40 years. The mean age of the anganwadi workers who participated in the study was 40.94±9.03 years. Figure 1 indicates that the majority of anganwadi workers were educated up to secondary level (49.49%). Nearly 20% of the workers were educated only up to primary level; this group will have lower understanding and grasping during training, which could end up being a barrier to program implementation.
Figure 1.Educational Status of Anganwadi Workers
Regarding the assessment of the knowledge of anganwadi workers in relation to the examination and treatment of a malnourished child (Table 2), only 26% of the respondents correctly identified all the steps needed for treatment in severe malnutrition. Of the rest, 3% believed that vitamin A supplementation alone was sufficient, 1.8% believed that advising the mother about breast feeding was sufficient, 4% believed in advising the mother to keep the baby warm, and 65.2% of AWWs suggested that urgent referral to hospital was sufficient as a treatment.

Table 2. Knowledge of AWWs regarding Malnutrition in 2 Months to 5 Years Old Children
When their attitude was judged regarding the follow-up of a malnourished child whose weight was less than expected for age, it was found that about half of the respondents (51.8%) were able to give the correct advice. 15.8% of AWWs believed that advising mothers about feeding problems would be sufficient, while only 7.9% of respondents believed in follow-up after 14 days.
A statistically significant association was found between the educational status of anganwadi workers and their knowledge regarding the clinical features of malnutrition (Table 3). It is clearly seen that as the educational status of the AWWs increased, their level of knowledge also increased, and the difference was statistically significant (χ² = 189.43, p = 0.0001).
Table 3. Association between Educational Status of Anganwadi Workers and Knowledge of Clinical Features of Malnutrition
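For readers who want to reproduce this kind of analysis, a Pearson chi-square test on an education-by-knowledge contingency table is a one-liner in scipy. The cell counts below are invented for illustration (they only echo the sample size of 774) and are not the actual counts of Table 3.

```python
# Hypothetical sketch of the chi-square test of association reported above
# (education level vs. knowledge of clinical features of malnutrition).
from scipy.stats import chi2_contingency

# rows: primary / secondary / higher education; cols: correct / incorrect
table = [[60, 95],
         [250, 133],
         [180, 56]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```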
The attitude of the AWWs regarding the follow-up of a malnourished child whose weight was less than expected for age is presented in Table 4. When the association between the educational status of the AWWs and their attitude of referral in case of malnutrition in children aged 2 months to 5 years was assessed (Table 5), it was found that more than half (51.80%) of the respondents were following the standard protocol for referral, and the difference was statistically significant (χ² = 36.49, p = 0.00003).
Table 4. When on follow-up examination you find that the child weighs less than expected for age, what advice will you give to the mother?
• Advice to mother about feeding problem: 122 (15.8%)
• Follow-up after 14 days and weighing of child: 61 (7.9%)
• If feeding problem continues, then referral to hospital and follow-up examination: 190 (24.5%)
• All of the above: 401 (51.8%)
• Total: 774 (100%)
Table 5. Association between Educational Status of Anganwadi Workers and Attitude of Referral in Case of Malnutrition in Children Aged 2 Months to 5 Years
The AWWs were also asked about the need for revised training for IMNCI.
Discussion
Assessment of the anganwadi workers' knowledge regarding the examination of malnourished children showed that around 44% of the respondents could correctly mention all the tasks needed for assessment, while 32.5% believed that plotting on the growth chart alone was sufficient; a minority believed that the presence of pedal edema (7.4%) or direct examination of a malnourished child (16.3%) was sufficient to diagnose malnutrition. A statistically significant association was found between the educational status of the anganwadi workers and their knowledge of the clinical features of malnutrition.
The study done by Chaudhary et al. 3 shows that only 63.5% of anganwadi workers trained in IMNCI could correctly check for under-nutrition, which is consistent with our findings. The above findings are also consistent with a study on evaluative supervision of IMNCI in Madagascar, 4 which shows that 71% of health workers correctly checked nutrition status during the supervisory visits.
When knowledge was assessed regarding the identification of grade-4 malnutrition from the growth chart, 80.6% of the respondents correctly identified the grade of malnutrition, while the remaining 20% could not. The findings of our study are consistent with a study done by Kapil et al. 5 The study done by Amaral et al. 6 in Brazil shows that health workers correctly performed the assessment task of checking weight against the growth chart in 77.5% of cases, which is nearly similar to our findings. The findings of a study report by Kelley and Black 7 indicate that anganwadi workers have average skills regarding growth monitoring.
When anganwadi workers were assessed regarding their practice of treatment in severe malnutrition, it was found that only 26% of respondents correctly identified all the steps needed for correct treatment, while 3% believed that vitamin A supplementation alone was sufficient for treatment. Only 1.8% believed that advising the mother about breast feeding was sufficient, 4% believed in advising the mother to keep the baby warm, and 65.2% believed that urgent referral to hospital was necessary. Similar findings were observed by Kelley and Black 7 in their study report, which shows that the sensitivity of anganwadi workers to PEM (protein energy malnutrition) was 30% and the specificity 29%.
When their attitude was assessed regarding the follow-up of a malnourished child weighing less than expected for age, it was found that almost half of the respondents were able to give the correct advice. The findings of our study are similar to those of a study conducted by Beracochea et al., 9 which shows that only 42% of health workers correctly referred severely malnourished children to the health center.
Conclusion and Recommendation
The present study shows that the skills regarding case management, observation, assessment and classification of malnutrition were adequate during the study. However, decisions about correct treatment were poor because of the long gap between training and implementation and delays in logistic supply. The study indicates that, to sustain the skills of anganwadi workers in the field, the required follow-up visits should take place at suitable intervals after training, and the interval between training and the follow-up visit should not be too long.
Educational status plays a great role in the success of any program, as it affects the understanding and grasping level of anganwadi workers regarding the skillful management of malnutrition. Efficient and keen work in the field requires not only proper training but also assessment of skills at all levels. Re-training at timely intervals can play a lead role in improving the skills of anganwadi workers. Various modes of training, for better understanding of skills and the management of malnutrition in their own words, can be helpful.
|
2019-03-17T13:11:54.565Z
|
2018-03-14T00:00:00.000
|
{
"year": 2018,
"sha1": "b33098e8d31fb97d989c49b97c669e36a64fc0a2",
"oa_license": "CCBYNC",
"oa_url": "https://medical.adrpublications.in/index.php/Preventive-Curative-CommunityMed/article/download/1472/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c1262507965ee5421c07fe478effd424cb3d31a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
224989387
|
pes2o/s2orc
|
v3-fos-license
|
New Ways of Working and Public Healthcare Professionals’ Well-Being: The Response to Face the COVID-19 Pandemic
This research proposes analyzing the influence of new ways of working (NWW) on healthcare professionals' well-being and how these may affect work performance and public service motivation. These variables and relationships were important before the COVID-19 pandemic, and everything points to their importance being even greater during and after the pandemic. To buffer the potential negative effects of implementing NWW, both organizations and employees must identify personal (psychological capital) and job resources (inter-role conflict, psychological empowerment, meaning of work) capable of acting as effective moderators to promote employee well-being and avoid negative experiences at work. This paper aims to shed light on new ways of coping with and adapting to uncertain job requirements such as those that have arisen during COVID-19. Moreover, it highlights the great changes that public healthcare needs to face to improve the quality of the service offered to society. It is urgent that public administrators and human resources managers design effective strategies and make effective decisions in which employee well-being and service quality are the main priorities.
The Influence of NWW on Employee Well-Being
The COVID-19 pandemic decreed by the World Health Organization has forced public administrations to adopt drastic measures regarding timing and ways of working. Social distancing, self-isolation or mobility restrictions to protect citizens have also made full sense at the workplace. The response to face the COVID-19 represents an extraordinary challenge for the public sector and the public healthcare system, which must propose and implement obligatory, flexible and innovative ways of working to protect employees and the entire society.
New ways of working (NWW) refers to the process of designing work so that employees can control the timing and place of work based on new technologies [1]. In other words, NWW aim to provide employees with the necessary flexibility and freedom to determine how, where and when they work, using new technological advances [2][3][4]. In the context of the collective fight against COVID-19, public healthcare professionals, namely medical and nursing staff whose work is directly aimed at improving the health of patients, have been forced to adopt NWW that have led them to work from home, or at least from places other than the workplace, requiring adaptation to carry out flexible, online and virtual tasks. In this sense, telemedicine and e-health using virtual software platforms based on information and communication technologies have been able to deliver remote care and health assessment to patients both infected with COVID-19 and not [5,6]. The COVID-19 outbreak has significantly changed the way of working in the public healthcare system, accelerating the widespread use of remote healthcare approaches such as the telephone, video-based consultations and new virtual software platforms to avoid in-person consultations and prevent the spread of the infection.
NWW have a decisive influence on how work is understood and, what is more important, on the working conditions. In this sense, NWW mean a huge change in relation to the relationship between public employees and their work; namely, where and how public employees carry out their work, the demands of the work on them, and the demands the employees face outside their jobs [2,7]. Thus, the flexibility to organize the work can have a great influence on the behaviors of employees and may lead to both negative situations that affect their well-being and performance, or, conversely, very positive experiences at work that result in higher well-being, performance and service quality [1,8]. In public healthcare, NWW based on new technologies may allow employees to carry out their job without physical interactions among them or with patients and their families, helping to achieve in this way the goal of keeping the proper social distance. At this point, it is important to analyze the potential effects of NWW both on public healthcare workers and on the functioning of the public healthcare system.
The characteristics of the public sector can lead employees to suffer negative experiences at work, particularly in the specific context of public healthcare [9]. Public healthcare professionals are under high levels of pressure because they must carry out their work allocating scant resources to equally needful patients, providing care for all severely unwell patients, balancing their own physical and mental healthcare needs with those of the patients, and aligning their desires and duties with those of the patients' families and friends [10]. This pressure has increased significantly due to the COVID-19 pandemic. Moreover, the rigid structure of public healthcare organizations can make it very difficult for healthcare professionals to show the flexibility they need to carry out their tasks in many cases [11]. All these reasons make these professionals more vulnerable to suffering negative experiences at work, like emotional exhaustion, which relates to feelings of fatigue, irritability and frustration, and the wearing out or depletion of the employee's emotional resources [12]. The very nature of health work, together with the global fight against COVID-19, has intensified the risk of suffering emotional exhaustion. To the harsh working conditions are now added the risk of infection of themselves, their colleagues, their patients and families, or some member of their families or group of friends [13,14]. The implementation of NWW as a response to COVID-19 may increase the risk of suffering emotional exhaustion as a result of informational overload, interruptions, misunderstandings, the necessity of being continuously connected or the lack of support from colleagues and superiors [2,7]. Moreover, medical care based on diagnoses without physical examinations, together with the lack of training in new technologies and the resistance to change of both healthcare professionals and patients, can seriously damage the quality of the healthcare professionals' work, leading them to become emotionally exhausted.
Given the speedy implementation of NWW due to the situation generated by COVID-19, it is urgent to review the preparation of recommendations, policies and practices related to ensuring appropriate and safe remote workplaces, providing at the same time training in the use of technical equipment and virtual collaborative environments [7]. In sum, the special nature of public healthcare together with the emergence of NWW can lead employees to be emotionally exhausted. To date, no research has analyzed the effect of NWW on emotional exhaustion among public healthcare professionals.
The fight against the COVID-19 pandemic can lead public healthcare professionals to be emotionally exhausted, but also to develop strong bonds with their work, strengthening its vocational nature and the wish to carry it out with more energy and willingness. In fact, public healthcare can be understood mostly as a vocational profession, since employees are expected to be willing to do something meaningful, help others and serve the public interest [15]. In this sense, the main aim of public healthcare professionals is promoting health, whereby the quality of care and patient safety are the main priorities [16]. Considering both the public and vocational nature of the work of healthcare professionals, it seems reasonable to think that they can experience their profession as a challenge able to promote very positive experiences that lead them to engage with their work and the public interest. Engaged employees develop "a positive, fulfilling work related state of mind that is characterized by vigor, dedication and absorption" (p. 74) [17]. Thus, extreme situations such as the COVID-19 pandemic and the rapid implementation of NWW can stimulate employees to make more effort to carry out their job (a challenging job demand), engaging them in their work and duties even more than in 'normal' situations [18]. Furthermore, always, but to a greater extent in the COVID-19 context, providing public healthcare employees more flexibility around their work, combined with adequate and agile communication technologies, can help them be more proactive, take responsibility for their own professional development and be enthusiastically involved with their work [2,7]. Some previous research has found a positive relationship between the flexibility derived from some NWW and employee work engagement [8,19]; however, more research is needed considering other aspects beyond flexibility and the high number of different situations derived from the new work contexts.
There are many work environments in which the workplaces are not completely safe, technical equipment is not always guaranteed or collaborative communication is poor [7], especially in public healthcare, where the pressure and work rate are huge, mainly in periods like the COVID-19 pandemic, and thus the well-being of employees may be negatively affected both physically and psychologically [20]. Conversely, work environments characterized by autonomy, flexibility and adequate communication practices [2,7] can lead public professionals to commit to their work, enjoying in turn better physical and psychological health. As NWW can significantly influence the work environment and an employee's well-being, much more research is needed to analyze the sense and significance of this influence and its implications for people's lives and service quality. According to job demands-resources theory [21], NWW can be considered both a job resource and a job demand depending on their positive or negative potential effects. Precisely this fact justifies the need to analyze exhaustively their influence and effects on employee well-being. As a job demand, NWW can spark a stressful process that may, via burnout, lead to the development of negative experiences at work that negatively influence employees' physical and psychological health. Conversely, as a job resource, NWW can spark a motivational process that may, via engagement, lead to the development of positive experiences at work that positively influence employees' physical and psychological health [22]. To date, no research has analyzed the mediating role of emotional exhaustion and work engagement in the relationship between NWW and employee well-being in the specific study context of public healthcare. This fact, together with the great relevance and complexity of the work of public healthcare professionals and the still uncertain consequences of the COVID-19 pandemic, justifies the need for much more scientific research in this field. Figure 1 shows the proposed model I.
The Influence of Employee Well-Being on Their Performance and Public Service Motivation
The advancement of knowledge around the implications of NWW is also justified by the fact that the physical and psychological well-being of public healthcare professionals is the cornerstone of every well-functioning healthcare system [14], affecting job performance [23,24], service quality and public service motivation. Factors like the distribution of work shifts, caring for a large number of patients in short spaces of time, intense working days, shortage of staff or the lack of means to draw up effective treatments and diagnoses [10,14] make public healthcare a very complex environment in which to work, and more so in the COVID-19 context. In this sense, low mood, sleep problems, low levels of concentration or the potential risk of infecting or being infected can lead public healthcare professionals to carry out their work under poor conditions, increasing their insecurity and consequently the possibility of making more errors [14,25]. All of this can lead to lower performance levels and lower employee motivation. On the contrary, being positive, keeping healthy and with a balanced state of mind can help employees do their job successfully, be more proactive, help patients or colleagues that require assistance, or seek solutions to regulate the flow of patients [26]; all of which leads in turn to higher performance levels and higher employee motivation.
Performance (in-role behavior) can be defined as what is required or expected as part of the duties and responsibilities formally assigned to the work of the employees [27]. The pressure of the COVID-19 pandemic, the complexity of healthcare work and the uncertainties around NWW can affect the service quality offered to patients and employee well-being, ultimately affecting the effectiveness with which employees carry out their formally prescribed job tasks. In the same way, those extra-role behaviors that go beyond the formalized requirements of work [27], and that are so important for the proper functioning of public healthcare through the promotion of voluntary cooperative behaviors [28], can also be seriously affected, making it difficult to shape informal organizational structures among healthcare professionals. Public service motivation refers to an individual's predisposition to respond to motives grounded primarily or uniquely in public institutions [29]. Thus, employees with public service motivation are altruistic and do not expect reciprocity from the recipients of their services [30,31]. In this sense, public servants with high levels of service motivation focus their available energy and dedication on the public good on a daily basis [15]. Public healthcare professionals serve other people through the delivery of public services, so they must devote all their energies and efforts to performing their tasks in the best way for the entire society. In this sense, public healthcare workers with good physical and psychological health can increase their levels of public service motivation, reinforcing the belief that work is pleasurable and meaningful [15], which in turn can lead them to improve the quality of the service offered. However, low levels of health or psychological well-being can lead public healthcare workers to have less desire to help others [15], affecting their service motivation and consequently the service quality, the effectiveness of the healthcare system and the well-being of the entire society. According to the conservation of resources theory [32], employees who lack the resources to carry out their work are more vulnerable to loss spirals, while employees with ample resources have more opportunities to obtain new ones. Thus, public healthcare professionals who do not have sufficient energy to do their work correctly try to conserve their energies, reaching only minimum performance requirements. Conversely, public healthcare professionals who have sufficient energy to do their work correctly are capable of creating new and better opportunities for establishing positive experiences at work. Despite the great relevance and implications of these variables, to date no research has analyzed the effect of employee physical and psychological well-being on performance and service motivation in the specific study context of public healthcare. Figure 2 shows the proposed model II.
The Moderating Role of Employee Psychological Capital, Inter-Role Conflict, Psychological Empowerment and the Meaning of Work
At this point, what seems clear is that NWW can both hinder and promote the work of public healthcare professionals, leading them, depending on the case, to either be emotionally exhausted or engaged in their work. The COVID-19 work context is forcing healthcare professionals to resort to personal resources to cope with and adapt to an increasingly uncertain and complex work environment. Precisely, employee psychological capital, which refers to the positive psychological state of development characterized by self-efficacy, optimism, hope, and resilience [33,34], can help employees mobilize their personal resources to develop essential strengths to face the adversities of the work environment [24,35,36]. At the same time, the rapid implementation of NWW due to the COVID-19 pandemic may provoke an inadequate work-life balance for employees. In this sense, the inter-role conflict, which is defined as the role pressures from the work and family domains that are mutually incompatible in some respect [37], may also moderate employee emotional exhaustion or, where appropriate, employee work engagement. In the COVID-19 work context, the intense pressure on healthcare professionals and the rapid adoption of NWW may lead healthcare professionals to be unable to simultaneously and effectively fulfill their family and professional responsibilities, seriously affecting their well-being. Thus, maintaining family and work boundaries has become a great challenge for public healthcare professionals, who can see how the results of their efforts can affect their emotional exhaustion (decreasing it) or their work engagement (increasing it). Despite their relevance, to date there is no research analyzing in depth the positive/negative sign and the intensity of the moderating role of employee psychological capital and inter-role conflict. At this point, the issue to be analyzed is the moderating role of both variables in the relationships between NWW and employee emotional exhaustion or employee work engagement in the specific study context of public healthcare.
Likewise, the analysis of the moderating role of psychological empowerment, which refers to a set of psychological states (meaning, competence, self-determination and impact) that are necessary to feel a sense of control in relation to one's work [38], is crucial. Psychological empowerment can help employees make decisions with greater autonomy and flexibility, as well as exert influence over their work and accomplish tasks in meaningful ways [39,40]. Psychological empowerment makes employees more active in their jobs, helps them improve their job methods and ensures compliance with job objectives in an effective way [41]. In the context of the COVID-19 pandemic, psychological empowerment can provide public healthcare professionals with new tools to cope with feelings of emotional exhaustion, or reinforce their engagement with their job, thereby increasing their well-being. In this sense, psychological empowerment can be viewed as a powerful instrument, able to encourage workers to carry out their tasks adequately, since they have the sensation of controlling their work, the capacity to perform their assigned tasks and the ability to positively influence organizational outcomes [42,43]. Thus, psychological empowerment can act as a motivational source in the workplace, improving employees' physical and psychological well-being by strengthening engagement or buffering feelings of emotional exhaustion. To the best of our knowledge, no research has analyzed the moderating role of psychological empowerment in the relationship between employee emotional exhaustion or employee work engagement and employee well-being in the specific study context of public healthcare.
Meaning of work, which refers to the basic values that employees associate with work such as work content, the meaning of the tasks developed and the visualization of their contribution to service [44], can provide employees the resources needed to value their work [45], improving their performance and service motivation levels [46]. In the context of the COVID-19 pandemic, healthcare professionals must feel that they are members of an organization whose main objective is the quality of the service offered, whereby the meaning of work can help them feel that their work context is significant and purposeful, and lead them to achieve their work goals, stimulating personal growth, learning and development [22]. To the best of our knowledge, no research has analyzed the moderating role of meaning of work in the relationships between employee well-being and employee performance or service motivation in the specific study context of public healthcare.
According to job demands-resources theory, job resources such as employee psychological capital, inter-role conflict or psychological empowerment may act as 'buffers' of the negative relationship between high job demands or low job resources and employee well-being. According to conservation of resources theory, the accumulation of resources such as meaning of work may make employees more willing to risk these resources to obtain others capable of contributing to the achievement of positive outcomes. The results of this study can surely contribute to the advancement of knowledge around NWW, emotional exhaustion and work engagement, as well as to the improvement of the well-being of public healthcare professionals and the quality of the service offered to patients, their families and the entire society. Figure 3 shows the full proposed model.
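Although this paper is conceptual, the moderating paths in the full model are commonly tested with moderated regression, i.e., an interaction term between the predictor and the moderator. The sketch below simulates one hypothetical path (psychological capital buffering the NWW-to-exhaustion link); all variable names and data are invented for illustration and are not the authors' measures.

```python
# Illustrative moderation test: does psychological capital (PsyCap)
# buffer the NWW -> emotional exhaustion relationship?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({"nww": rng.normal(size=n), "psycap": rng.normal(size=n)})
# Simulated outcome: NWW raises exhaustion less when PsyCap is high.
df["exhaustion"] = (0.5 * df.nww - 0.3 * df.psycap
                    - 0.4 * df.nww * df.psycap + rng.normal(size=n))
model = smf.ols("exhaustion ~ nww * psycap", data=df).fit()
print(model.params)   # a negative nww:psycap coefficient indicates buffering
```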
Practical Implication and Conclusions
NWW can be crucial to understand the role of variables such as emotional exhaustion and work engagement and how they can affect the public healthcare system in the future. The challenge posed by public administrations through the incorporation of NWW due to the COVID-19 pandemic may cause huge changes around the work of healthcare professionals. In fact, work environments like the current ones characterized by high volatility, uncertainty and complexity can be an exciting opportunity for encouraging public administrations and healthcare managers to propose innovative formulas related to the way of working and service delivery [47]. In this sense, it is urgent that public administrators and human resources managers design effective strategies aimed to adapt NWW as soon as possible to the needs of the employees. More specifically, the drafting of codes of good practices together with protocols for action in the event of COVID-19 outbreaks, or clearly planning the available and necessary resources to carry out the work correctly and safely, can be key actions to face these situations effectively. In addition, providing technological applications to facilitate more effective communication flows with patients and colleagues, or promote virtual collaborative workspaces, may help enhance healthcare professionals' well-being, at the same time increasing the quality of the service offered. Enhancing employee personal resources or providing tools to improve the sense of control, the autonomy, the flexibility and the meaning of work can allow public healthcare professionals to cope with and adapt to job requirements in a better way. Therefore, public healthcare is obliged to focus on stimulating positive experiences in which human resources development, public service quality and employee well-being are main priorities. In any working context, but to a greater extent in a context marked by the COVID-19 pandemic, it is essential to understand the influence of NWW on employee well-being to promote and foster job environments characterized by healthy and happy employees capable of achieving better results at all levels.
|
2020-10-02T13:04:48.416Z
|
2020-09-30T00:00:00.000
|
{
"year": 2020,
"sha1": "985e45a0fca5de658e1ebec5c46f9a59da22f4b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/19/8087/pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e6fc6e77c59cef9a0350a7553ba92d7bc86acd40",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
233866591
|
pes2o/s2orc
|
v3-fos-license
|
Improvement in quality of life among Sri Lankan patients with haemorrhoids after invasive treatment: a longitudinal observational study
Abstract Background Haemorrhoids is a common chronic disease that can significantly impact patients’ quality of life. Yet, few studies have evaluated health-related quality of life (HRQoL) of patients with haemorrhoids before and after treatment. This study investigated the HRQoL of patients with haemorrhoids before and after treatment and the change in HRQoL from baseline. Methods A prospective observational study of patients with haemorrhoids was conducted at two public hospitals in Kandy, Sri Lanka. Two questionnaires assessing symptom severity and haemorrhoid-specific QoL were administered at initial consultation and at 4- and 8-week follow-ups after treatment (sclerotherapy, rubber band ligation (RBL), haemorrhoidectomy or evacuation of haematoma). The primary outcome was the least squares (LS) change of HRQoL score from baseline, measured using the Short Health Scale adapted for Haemorrhoidal Disease (4 domains: symptom load, interference with daily activities, concern, general well-being). Results In 48 patients selected for this study, LS mean change from baseline showed significant improvement in HRQoL across all domains and total Short Health Scale adapted for Haemorrhoidal Disease score at 4- and 8-week follow-ups (P < 0.001). Difference in LS mean change from baseline also showed continued improvement of HRQoL from week 4 to week 8 (P < 0.010). ‘Concern’ showed greatest improvement at 4 and 8 weeks (P < 0.001). Averaged LS mean changes from baseline showed RBL had greater improvement of HRQoL compared with sclerotherapy (P = 0.004). Conclusion Patients with haemorrhoids had improved HRQoL after invasive treatment. Haemorrhoid-specific QoL is an important component of the extent of disease and can serve as an aid to guide treatment, assess outcomes and monitor disease.
Introduction
Haemorrhoids is a chronic and recurring disease that can significantly disrupt patients' daily lives and well-being 1. It affects about one third of the population 2 and is most common in adults in their fifties 3. More than half of people develop haemorrhoids during their lifetime 4. Yet, the quality of life of patients with haemorrhoids has not been well studied in low-to-middle-income countries (LMICs). Although haemorrhoids is prevalent in Sri Lanka, no studies have been conducted in Sri Lanka to evaluate the health-related quality of life (HRQoL) of patients with haemorrhoids.
The occurrence of symptomatic haemorrhoids does not correlate strongly with higher-grade haemorrhoids 5 . There is also poor correlation between the degree of prolapse and patient symptoms 6 . Therefore, lower-grade haemorrhoids can still be associated with severe symptoms and greatly affect daily life and wellbeing, which makes the overall effect of haemorrhoids difficult to assess 7 . When considering symptom severity, past research found a significant impact on patients' quality of life (QoL) 7 . Therefore, using haemorrhoid-specific QoL may better reflect the burden of disease rather than using clinical assessment alone.
Treatment of haemorrhoids ranges from lifestyle modifications to surgical intervention. In Kandy, Sri Lanka, invasive treatment options include sclerotherapy with 5 per cent phenol, rubber band ligation (RBL), haemorrhoidectomy, and evacuation of haematoma if present. Treatment is typically guided by the classification of haemorrhoids 6,8 . The existing grading of haemorrhoids to guide treatment is not completely satisfactory, however 9 . There is poor correlation between the grade of haemorrhoids and symptoms 6 , so the decision to treat and the type of treatment to recommend must also be guided by the severity of symptoms and their impact on HRQoL.
Standardized and validated outcome measures comparing different treatment options have also been lacking 7,10 . The Haemorrhoidal Disease Symptom Score (HDSS) 11 and the Short Health Scale adapted for Haemorrhoidal Disease (SHS HD ) 10 have been found to be reliable, responsive and valid as outcome measures 10 . These two scoring systems evaluate symptom severity and haemorrhoid-specific QoL respectively. Using these questionnaires before and after treatment can further assess treatment outcomes of patients with haemorrhoids within the Sri Lankan sociocultural context. With limited resources and high patient load, standardized outcome measures using haemorrhoid-specific QoL may also guide treatment recommendations and inform best practices.
This study aimed to investigate the HRQoL of patients with haemorrhoids in Sri Lanka. The primary objective was to investigate the HRQoL of patients with haemorrhoids before and after invasive treatment (sclerotherapy with 5 per cent phenol, RBL, haemorrhoidectomy or evacuation of haematoma if present) and the change in HRQoL from baseline. The secondary objectives were to investigate the change in HRQoL of patients with haemorrhoids across the different invasive treatments, the severity of symptoms before and after invasive treatment, and the association of potential risk factors with baseline HRQoL scores. Finally, the cultural perceptions of haemorrhoids and the health-seeking behaviours of patients were evaluated.
Study setting and design
A prospective observational study was conducted in Kandy, Sri Lanka, between October 2019 and February 2020. Enrolled patients with haemorrhoids were followed up 4 weeks and 8 weeks after initiation of treatment, or if they received no treatment, after initial consultation. Ethical approval for the study was obtained from the Ethics Review Committee at the University of Peradeniya and the Institutional Review Board at National University of Singapore.
Sample size calculation estimated that 50 patients were required to achieve 80 per cent power at alpha 0.05 to detect a clinically relevant effect size of 0.4 in HRQoL before and after treatment.
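The stated figure can be reproduced with a standard paired t-test power calculation. The sketch below assumes that design (the paper does not spell out the exact method used); statsmodels returns approximately 51, consistent with the 50 patients reported.

```python
# Re-deriving the stated sample size: paired comparison, effect size 0.4,
# power 0.80, two-sided alpha 0.05.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.4, power=0.80, alpha=0.05,
                             alternative="two-sided")
print(round(n))   # ~51; the normal approximation gives ~49
```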
Sampling and selection strategy
Patients were recruited from the general surgical and rectal outpatient clinics and inpatient wards at the Teaching Hospital Peradeniya and Teaching Hospital Kandy, the two largest public hospitals in the Central Province of Sri Lanka. They had to be at least 18 years old and present with haemorrhoids as the primary complaint. Patients might have been newly or previously diagnosed with haemorrhoids but had not received any invasive treatment in the 6 months before recruitment. Convenience sampling was used.
Patients were approached for participation in the study by a Sinhala-speaking research assistant. Exclusion criteria were: active medical conditions associated with pain or bleeding per rectum (e.g., anal fissures or fistula, rectal prolapse), surgical procedures to the anorectal region within the past 6 months or cognitive or language limitations that would affect completion of questionnaires. The diagnosis of haemorrhoids was confirmed by the surgeon during the initial consultation. Written consent was obtained from the participants.
Data collection, questionnaire content and administration
The HDSS and SHS HD questionnaires (Appendix S1), used together, give an overall perspective of symptoms experienced and their impact on daily life and well-being 10 .
The HDSS assessed symptom severity based on the frequency of five cardinal haemorrhoid-related symptoms over the past 3 months -pain, itching, bleeding, soiling and prolapse 10,11 .
The SHS HD assessed haemorrhoid-specific QoL of patients 10 . It is the only disease-specific QoL tool that has been shown to be reliable, responsive and valid in accordance with the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) guidelines, especially as a post-treatment outcome measure 10 . Other symptom-based scoring systems for haemorrhoids had not been tested for all three measurement properties.
Both questionnaires were administered by an interviewer in Sinhala at the initial visit (baseline). An additional qualitative survey was also administered to investigate patients' knowledge and perceptions of haemorrhoids and their health-seeking behaviours. The verbal questions and answers were in Sinhala, but data were recorded in English. There was no interference with regular patient treatment.
The HDSS and SHS HD were re-administered at the 4- and 8-week follow-ups. Upon completion of the 4-week follow-up questionnaires in person, patients were compensated 300 Sri Lankan Rupees for participation and transportation costs. Patients who were unable to attend follow-up at the hospital were followed up via telephone call. All 8-week follow-ups were conducted via telephone call for patients' convenience. Patients lost to follow-up were not included in the analysis.
Questions on sociodemographics of patients with haemorrhoids including age 3 , sex 3 , BMI 5 , occupation and income level, symptoms common to haemorrhoids (rectal bleeding, perianal pain, itching, soiling, swelling or lump at anus 5,12 ), bathroom habits (constipation 13,14 , straining 15 ) and strenuous activity, such as heavy lifting, were included. Clinical findings and treatment recommendations were recorded after evaluation.
Variables and co-variables
The primary outcome variable was the change of HRQoL score from baseline. HRQoL was measured using the SHS HD 10, which evaluates four domains: symptom load, interference with daily activities, concern and general well-being. Each domain was assessed using a 7-point Likert scale, giving an overall score between a minimum of 4 and a maximum of 28 points. A lower SHS HD score reflects greater HRQoL. Patients who underwent invasive treatment received one of the following: sclerotherapy with 5 per cent phenol, RBL, haemorrhoidectomy or evacuation of haematoma if present.
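As a concrete illustration of the SHS HD scoring just described, a minimal total-score function might look like the following; the domain names follow the text, but the function itself is our sketch, not an official scoring implementation.

```python
# Minimal SHS_HD scorer: four domains on 7-point Likert scales,
# summed to a 4-28 total where a lower score means better HRQoL.
def shs_hd_total(symptom_load, daily_interference, concern, well_being):
    items = (symptom_load, daily_interference, concern, well_being)
    assert all(1 <= x <= 7 for x in items), "each domain is scored 1-7"
    return sum(items)

print(shs_hd_total(3, 2, 5, 4))   # -> 14
```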
Baseline demographics (age, sex, BMI) and baseline HRQoL scores were considered co-variables as these factors are known to influence self-perceived HRQoL scores.
Statistical analysis
Continuous variables were summarized as mean(s.d.). Categorical data were summarized using frequencies. Missing height, weight and calculated BMI values were not imputed. Missing income values were accounted for by mean imputation.
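In pandas, the mean imputation described here is a one-liner; the file and column names below are hypothetical.

```python
import pandas as pd

df = pd.read_csv("patients.csv")                          # hypothetical data file
df["income"] = df["income"].fillna(df["income"].mean())   # mean imputation
```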
A repeated-measures linear mixed model analysis of the change of HRQoL scores from baseline was performed with subjects as random effects, weeks (4 and 8) as fixed effects and baseline HRQoL score as a co-variable, with post-hoc pairwise comparisons. This analysis was used as the study involved repeated measurements on individual patients at three different time points. A similar analysis was performed on the change of HDSS scores from baseline. The above analysis model was augmented to include fixed-effect terms for treatment group and week × treatment group interaction to compare the HRQoL of patients with haemorrhoids among the different invasive treatment groups (sclerotherapy, RBL, haemorrhoidectomy). For each week, contrasts were used to test a two degree-of-freedom null hypothesis of no difference among the three treatment groups (H0: μ_sclerotherapy = μ_RBL = μ_haemorrhoidectomy). If the null hypothesis was rejected, post-hoc pairwise comparisons were performed. A similar analysis was performed to compare post-treatment HRQoL between patients who had received previous invasive treatment and patients who had not. The LASSO stepwise algorithm for general linear models (GLM) was used to investigate associations of potential risk factors with baseline HRQoL scores, using significance levels to enter and stay of P < 0.050. Optimal selection was based on the Akaike information criterion. Subsequently, a GLM analysis was performed on the selected factors to obtain Type III sums of squares and corresponding F-test P values. For all tests, P < 0.050 was considered statistically significant. Data were analysed using SAS version 9.4 (SAS Institute, Cary, North Carolina, USA).
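An approximate open-source analogue of the repeated-measures mixed model described above is statsmodels' MixedLM (the authors used SAS, so this is not their exact code); the long-format column names are hypothetical.

```python
# Random intercept per patient; week as a fixed effect; baseline score
# as covariate, modelling change from baseline at weeks 4 and 8.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shs_hd_long.csv")   # one row per patient-visit (weeks 4, 8)
df["change"] = df["baseline_total"] - df["total_score"]   # positive = improved
m = smf.mixedlm("change ~ C(week) + baseline_total", data=df,
                groups=df["patient_id"]).fit()
print(m.summary())
```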
Results
Patients

A flow chart of the patients who participated in the study is shown in Fig. 1. Fifty-eight patients completed both 4- and 8-week follow-ups. Of these, 48 patients received invasive treatment, five received medical treatment and five received supportive or no treatment. As the number of patients who received medical treatment and supportive or no treatment was small, the analysis was focused on those patients who received invasive treatment (28 males and 20 females; mean age 47.7 years; Table 1).
Thirteen patients had missing self-reported height, and five patients had missing self-reported weight, resulting in missing/unknown BMI data, while four patients had missing income values. Thirty-seven patients had a history of haemorrhoids. Fifteen patients had previously tried invasive treatment for haemorrhoids (Table 1).
The most common complaints were swelling/prolapse (36 patients) and rectal bleeding (33 patients). Swelling/prolapse (HDSS) was the symptom most commonly rated as most severe.
Forty-six patients had internal haemorrhoids only. One patient had both internal and external haemorrhoids, and one patient had external haemorrhoids only. Twenty-five patients were treated with sclerotherapy, while seventeen patients had RBL (Table 2). Other patient characteristics, symptoms and clinical features at baseline are presented in Tables 1 and 2.
Health-related quality of life scores
Baseline HRQoL scores were significantly associated with posttreatment HRQoL scores and with change from baseline for all SHS HD domains and total SHS HD score (P < 0.001). Patients with higher baseline HRQoL scores (poorer HRQoL) had lower posttreatment HRQoL scores (better HRQoL) and greater change from baseline HRQoL scores (greater improvement in HRQoL). Thus, baseline HRQoL was included as a co-variable in the analysis model. Age, sex and BMI had no significant effect on HRQoL scores or change from baseline and were therefore excluded.
Post-treatment adjusted least squares (LS) mean scores showed significant differences between weeks 4 and 8 (P < 0.010). A lower SHS HD score reflects greater HRQoL, and positive values for LS mean change from baseline indicate improved HRQoL. LS mean change from baseline showed significant improvement in HRQoL across all domains and the total SHS HD score at week 4 and week 8 (P < 0.001). Differences in LS mean change from baseline were negative, indicating that HRQoL scores continued to improve from week 4 to week 8 (P < 0.010). Of the HRQoL domains, concern had the largest mean score at baseline, suggesting that patients were most worried about their symptoms compared with the other domains of the SHS HD. Notably, concern also showed the greatest improvement at weeks 4 and 8 and from week 4 to week 8 (Table 3).
LS mean change of HRQoL scores (total SHS HD score) from baseline showed a significant difference at week 4 for RBL versus sclerotherapy (change from baseline at week 4 (ΔBL W4) = 1.92, P = 0.021) but no significant difference at week 8 (ΔBL W8 = 1.54, P = 0.062). However, when averaged across weeks 4 and 8, patients who received RBL had greater improvement in HRQoL than those who received sclerotherapy (β = 1.73, P = 0.004). Averaged differences between haemorrhoidectomy and RBL (P = 0.469), and haemorrhoidectomy and sclerotherapy (P = 0.306) were not statistically significant. Evacuation of haematoma was not included in this analysis due to small sample size (two patients). LS mean change of HRQoL scores (total SHS HD score) from baseline for patients who had previously received invasive treatment and those who had not was 3.60 and 4.67 respectively at week 4, and 5.34 and 5.44 at week 8. Averaged differences showed no significant difference between patients who had previously received invasive treatment and those who had not (P = 0.430).
Symptom severity scores
LS mean change of HDSS scores from baseline showed significant improvement in symptom severity across total HDSS score and all domains at weeks 4 and 8 (P < 0.050). Bleeding, soiling and total HDSS score had significant differences in change from baseline (P < 0.050), indicating that symptom severity continued to improve from week 4 to week 8 in these domains. Improvement in symptom severity for pain, itching, and swelling/prolapse mainly occurred from baseline to week 4 as the differences in change from baseline were not statistically significant ( Table 4).
LS mean change of HDSS scores (total HDSS score) from baseline for sclerotherapy, RBL, and haemorrhoidectomy was 3.70, 6.44 and 2.29 respectively at week 4, and 4.98, 7.61 and 5.54 at week 8. Patients who received RBL had greater improvement in symptom severity at week 4 and week 8 compared with those who received sclerotherapy (ΔBL W4 = 2.74, P = 0.015; ΔBL W8 = 2.64, P = 0.019).
Risk factors associated with baseline HRQoL score
Higher HDSS scores indicate more severe symptoms. Higher baseline total SHS HD and domain scores reflect poorer baseline HRQoL. Soiling was positively correlated (P < 0.010) with total SHS HD score, symptom load, interference and concern. Higher pain scores and eating more fruits and vegetables were also associated with higher baseline concern scores ( Table 5). Other potential risk factors such as age, sex, BMI, straining, bleeding (HDSS) and swelling/prolapse (HDSS) were not selected in the stepwise regression.
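As a rough illustration of the LASSO-based risk-factor selection described in the Methods, the sketch below uses Python's scikit-learn and statsmodels in place of SAS. The feature and file names are hypothetical, and LassoLarsIC's AIC-guided path selection only approximates the SAS stepwise LASSO, so this is a sketch under those assumptions rather than the study's implementation.

```python
# Rough sketch of AIC-guided LASSO selection followed by a linear-model
# refit for F-test P values (Type III analogue). Variable names and the
# input file are hypothetical; LassoLarsIC approximates, but does not
# replicate, the SAS stepwise LASSO used in the study.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.linear_model import LassoLarsIC

df = pd.read_csv("baseline.csv")
candidates = ["pain_hdss", "soiling_hdss", "fruit_veg_intake",
              "age", "bmi", "straining"]
X, y = df[candidates], df["baseline_shs_total"]

# LASSO path, selecting the penalty that minimises the AIC.
lasso = LassoLarsIC(criterion="aic").fit(X, y)
selected = [c for c, w in zip(candidates, lasso.coef_) if w != 0.0]
print("selected risk factors:", selected)

# Refit an ordinary linear model on the selected factors (assumes at
# least one factor was selected) and report F-test P values.
refit = smf.ols("baseline_shs_total ~ " + " + ".join(selected), data=df).fit()
print(sm.stats.anova_lm(refit, typ=3))
```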
Perceptions and health-seeking behaviours
In the supplementary qualitative analysis, 30 patients did not know anything about haemorrhoids prior to the initial consultation. Seventeen patients had not seen a doctor for haemorrhoids previously, and most gave as their reason that their symptoms were mild or had only recently developed. Perianal pain (20 patients) and rectal bleeding (16 patients) were the most common reasons reported by patients for deciding to seek medical care. All 48 patients reported 'Doctor prescribed' as the reason for undergoing invasive treatment after the initial consultation.
Discussion
Haemorrhoids are a benign but common chronic disease that can disrupt patients' daily lives and well-being 1 . As 66 per cent of all employed people in Sri Lanka work in the informal sector, consisting mostly of labour-intensive jobs with dependence on a daily salary 16 , haemorrhoids can impact their HRQoL and ultimately affect their livelihoods. Therefore, evaluating HRQoL is imperative in assessing the impact of disease, with the goal of treatment being to address symptoms 17 and improve haemorrhoid-specific QoL. Parallel to HRQoL scores, symptom severity also improved across total HDSS score and all domains at 4 and 8 weeks after invasive treatment. As the SHS HD was shown to be responsive and highly correlated with symptom load and patient postoperative satisfaction 10 , improved HRQoL after invasive treatment further emphasizes the benefit of haemorrhoid-specific QoL as a way to assess treatment outcomes together with symptom severity and clinical findings. Haemorrhoid-specific QoL and overall symptom severity also continued to improve from week 4 to week 8, which may reflect the continued effects of invasive treatments. Most patients did not report taking other traditional or prescribed medication during this period, and none had returned to the hospital for worsening of symptoms or additional treatment. Proctoscopy is not routinely performed to evaluate resolution of haemorrhoids if symptoms have not worsened. However, as haemorrhoids are a multisymptomatic disease, resolution of one symptom may not improve HRQoL. Patients' symptoms may also reduce in frequency but still have a severe impact on their well-being 10 . Improvement in certain symptoms may also contribute more to the improvement in HRQoL. Bleeding and soiling showed significant improvement in symptom severity from week 4 to week 8, whereas pain, itching and swelling/prolapse did not. This highlights the importance of assessing all symptoms of haemorrhoids together with HRQoL. With a high patient load and limited resources in LMICs such as Sri Lanka, surgeons may also use the SHS HD and HDSS together as a quick and more robust means to monitor disease recurrence.
Concern in the SHS HD assessed the frequency of haemorrhoidrelated worries patients had 10 . Prior to treatment, patients were most concerned about their symptoms out of all SHS HD domains. This emphasizes the psychological impact the disease has on patients, where clinical grading of haemorrhoids alone does not adequately evaluate the whole disease 6,7 . Concern also had the greatest improvement after invasive treatment at each followup, further reflecting that treatment not only addresses the disease but disease-related concerns. As patients appeared to wholly trust surgeons' recommendations, assessing haemorrhoid-specific QoL may also bring forth their concerns, engage them in the decision-making process and improve patient-centric care.
Higher pain and soiling HDSS scores were associated with greater concern on the SHS HD . This differs from other experiences 7 where frequency of soiling was not significant in a multivariable regression model for impact on QoL. Another author 10 found that all HDSS domains had a positive correlation with symptom load on the SHS HD , whereas in the model in the present study, only soiling showed a positive correlation with symptom load. However, this may be due to the smaller sample. The use of fruits and vegetables was also positively associated with concern, which may reflect patients' health-seeking behaviours: indeed, patients who were greatly concerned may be more likely to consume fruits and vegetables to alleviate their symptoms.
Patients who received RBL had greater improvement in symptom severity and haemorrhoid-specific QoL after treatment compared to sclerotherapy, similar to other studies that showed RBL to be superior to sclerotherapy in patient-perceived response to treatment 18 . In common practice, sclerotherapy is used to treat grade I and II haemorrhoids while RBL is used for grade II and III haemorrhoids 19 . However, RBL is not available at the Teaching Hospital Peradeniya. As such, sclerotherapy is also used to treat grade III haemorrhoids at this hospital. Future studies could use a larger patient sample to investigate the improvement in symptom severity and haemorrhoid-specific QoL amongst the different types of invasive treatments over a longer period, especially since sclerotherapy and RBL can be considered as a course of treatment (i.e., can be given multiple times) 17 . This may have greater implications in reorganizing medical resources to make RBL available at all public hospitals in Sri Lanka.
As symptom severity has been shown to be related to type of treatment, i.e., ambulatory or operative care 7,10 , baseline HRQoL could also be assessed in future studies to decide between different types of invasive treatment. The SHS HD is not diagnostic or prognostic 10 , but it serves as an aid, along with clinical assessment, for surgeons when recommending treatment. Public hospitals in Sri Lanka have limited resources and conservative management, such as the use of flavonoids, is not readily available. As such, invasive treatment is the mainstay of treatment of haemorrhoids. Therefore, careful evaluation of HRQoL to guide treatment may prove beneficial.
This study illustrates that haemorrhoid-specific QoL is an important dimension of the impact of the disease on patients, and can serve as an aid for surgeons to guide treatment, assess outcomes and monitor disease. The longitudinal cohort study assessing prospective data is a strength, with fully completed questionnaires at baseline, 4-and 8-week follow-ups. Limitations of the study include the use of convenience sampling and small sample size, restricting generalizability of the results. The SHS HD is a reliable and responsive HRQoL measure in the Danish population 10 . Although the SHS HD had not yet been validated in Sinhala in the Sri Lankan population, the questionnaires were translated by a Sri Lankan healthcare professional and reviewed by a sworn translator. Additionally, the SHS HD is the first haemorrhoid-specific QoL tool 10 , which was adapted from the Short Health Scale, a validated HRQoL measure for patients with inflammatory bowel disease. Future research could cross-validate the SHS HD and further investigate its use in Sinhala in the Sri Lankan population or in other languages to assess haemorrhoid-specific QoL in other LMICs. Further studies could also test the SHS HD against other HRQoL tools to measure criterion validity.
The awarding bodies of the student grants were not involved in the study design, patient recruitment, data collection, data analysis, or in the preparation of the manuscript in any way.

Disclosure. The authors declare no conflict of interest.

Table 5 notes: In addition to the risk factors in the table, age, sex, BMI, straining, bleeding (Haemorrhoidal Disease Symptom Score, HDSS), and swelling/prolapse (HDSS) were also investigated as possible risk factors. *Higher baseline total Short Health Scale adapted for Haemorrhoidal Disease (SHS HD) score and domain score reflect poorer baseline health-related quality of life. †HDSS assesses symptom severity based on frequency of symptoms (0, never; 1, less than once a month; 2, less than once a week; 3, 1-6 days a week; 4, every day). The larger the HDSS score, the more severe the symptoms.
Extent of Participation of Farm Youth in Chrysanthemum Cultivation
Youthification of the farming population has the potential to revive, reform, and revolutionize the agriculture and allied sectors by concentrating the youth's efforts towards it. Chrysanthemum is a traditional flower crop with considerable economic importance and numerous avenues for value addition and export. The study was conducted among 120 farm youths in the Omalur, Kadayampatti, and Mecheri blocks of Salem district, Tamil Nadu. An ex post facto research design was used to study the extent of participation and the factors that contribute to the participation of farm youth in chrysanthemum cultivation. Analysis of the responses indicated that a majority of farm youth had a medium (71.66 per cent) level of participation and 19.16 per cent had a high level of participation. Statements with a higher mean score, such as availability of cultivable land (4.16), attractive remuneration (4.09), agricultural knowledge (3.90), and interest in agriculture and allied activities (3.89), were the key contributing factors that led to the increased participation of farm youth in chrysanthemum cultivation.
INTRODUCTION
According to the annual report of the Ministry of Youth Affairs and Sports (2019-20), India has one of the world's youngest populations, with roughly 65 per cent of the population under the age of 35 years. Youth aged 15 to 29 years make up 27.5 per cent of the population, representing one out of every four people. The age bracket of 16-30 years was considered 'youth' by the National Youth Policy of 2012.
In contrast to the rest of the world, India's youth bulge poses two challenges: first, the country's large and growing young population must be provided with adequate access to education; second, the working-age population must be provided with new employment prospects. To meet the employment requirements of youth, there is a need to diversify employment opportunities and develop entrepreneurship by adding value to, and innovating in, indigenous sectors such as agriculture and related activities. Since India is predominantly an agrarian economy, it must channel its resources through effective programmes and policies to meet the needs of the youth, the agriculture sector, and the country as a whole.
Floriculture is a sector with a lot of untapped potential and new job opportunities for the youth. According to APEDA, the area under floriculture production in India was 3,05,000 ha during 2019-20, with a production of 23,01,000 MT of loose flowers and 7,62,000 MT of cut flowers. India's total floriculture export was Rs. 575.98 crores (USD 77.84 million) in 2020-21. India is the world's second-largest flower-growing country after China, and it ranks 14th in terms of floriculture exports, accounting for only 0.40 per cent of global floriculture exports in 2018; this could be attributed to gaps in maintaining international quality standards, a lack of integrated cold chain management, and unorganized market and distribution networks (Nikhila et al., 2021). Major floriculture centres have emerged in Maharashtra, Karnataka, Andhra Pradesh, Haryana, Tamil Nadu, Rajasthan, and West Bengal. Tamil Nadu is currently India's leading flower producer, with 4,61,711 MT during the year 2019-20 (Department of Horticulture and Plantation Crops of Tamil Nadu).
"Queen of the East," chrysanthemum of the Asteraceae family, is a popular ornamental plant used as cut flowers, loose blooms, and pot plants eyes are in high demand during the festive season. Secondary metabolites, such as dyes, floral scents, and pyrethrums, can be extracted from chrysanthemum, making it a commercially viable, climate-resilient, and multipurpose flower crop.
To meet the requirements of the flourishing flower export centres, value addition units, and extraction units, the crop must be produced continuously and sustainably.
The Salem district has historically had a substantial area under chrysanthemum. According to the Department of Horticulture and Plantation Crops of Tamil Nadu, the district had 1143 hectares under chrysanthemum cultivation, with a production of 20574 MT and productivity of 18 MT/ha, in the year 2018-19. Climate suitability and less intensive cultivation with considerable remuneration were the reasons for the spread of the crop. Farmers select cultivars based on local demand and sell them in local markets. Chrysanthemum is not cultivated year-round, which was one of the main reasons for the absence of export units or viable value-addition or extraction units. Youth participation in agricultural activities has been linked to their economic motivation, scientific orientation, and risk orientation, implying that their involvement will lead to innovations and the discovery of new avenues and opportunities in the sector (Varsha Chouhan, 2018). In this context, the present study was carried out with the objectives of ascertaining the extent of participation of farm youth and finding the factors responsible for the participation of farm youth in chrysanthemum cultivation.
MATERIAL AND METHODS
An ex post facto research design was adopted and the study was carried out in the Salem district of Tamil Nadu. A sample size of 120 farm youths was selected for the study, with ten respondents from each village, using snowball sampling. The top three blocks, Omalur, Kadayampatti, and Mecheri, and four villages from each block, having the highest area under chrysanthemum cultivation, were purposively selected.
The extent of participation in chrysanthemum cultivation among farm youth refers to an individual's level of participation in various farming activities, obtained on a three-point continuum scale (regularly, occasionally, and not at all) developed by Martal (2019) with slight modifications. Factors responsible for the participation of farm youth in chrysanthemum cultivation were obtained on a 5-point Likert scale, following the scale developed by Rashmi Chaudhary et al. (2018) with slight modifications. Descriptive statistics were used for analysing the data collected.
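A minimal sketch of this scoring workflow is given below in Python, assuming hypothetical column names, 0-2 coding of the three-point continuum, and quantile-based cut-offs as one plausible reading of the cumulative frequency method; it is an illustration, not the authors' code.

```python
# Illustrative sketch (not the authors' code) of the scoring described
# above: 2 = regularly, 1 = occasionally, 0 = not at all for each farming
# activity, and 1-5 Likert items for the contributing factors. Column
# names and the quantile cut-offs (one plausible reading of the
# cumulative frequency method) are assumptions.
import pandas as pd

df = pd.read_csv("farm_youth_survey.csv")

activities = ["variety_selection", "ploughing", "irrigation", "spraying",
              "pinching", "ratooning", "grading", "packing", "marketing"]
df["participation"] = df[activities].sum(axis=1)

# Low / medium / high categories from cumulative-frequency cut-offs
# (quantiles assumed distinct so that the bins are strictly increasing).
low_cut, high_cut = df["participation"].quantile([0.25, 0.75])
df["level"] = pd.cut(
    df["participation"],
    bins=[-1, low_cut, high_cut, df["participation"].max()],
    labels=["low", "medium", "high"],
)
print(df["level"].value_counts(normalize=True).mul(100).round(2))

# Rank the contributing-factor statements by mean Likert score.
factors = ["cultivable_land", "remuneration", "agri_knowledge",
           "interest_in_agriculture", "job_alternatives"]
print(df[factors].mean().sort_values(ascending=False))
```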
RESULTS AND DISCUSSION
The extent of the participation of farm youth in chrysanthemum cultivation

The extent of participation of farm youth refers to the individual's participation in various farming activities such as variety selection, ploughing, irrigation, spraying of chemicals, pinching, ratooning, grading, packing of flowers, selection of mandi/seller and marketing in chrysanthemum cultivation. Data on the participation of farm youth in chrysanthemum cultivation were gathered, and respondents were divided into three categories based on their participation score, following the cumulative frequency method. Figure 1 represents the distribution of farm youth by their level of participation in chrysanthemum cultivation. Among the farm youth surveyed, a majority (71.66 per cent) had a medium level of participation, 19.16 per cent had a high level, and 9.16 per cent had a low level of participation in chrysanthemum cultivation. The factors responsible for their participation in chrysanthemum cultivation are discussed below. Most of the farm youth had medium to high levels of participation (90.82 per cent), which was in accordance with the findings of Martal (2018) and Suman Verma (2019).
Factors responsible for the participation of farm youth in chrysanthemum cultivation
Factors responsible for the participation of farm youth in chrysanthemum cultivation according to their mean value are depicted in Figure 2. The most critical factor responsible for the participation of farm youth in chrysanthemum cultivation was the availability of cultivable land with a mean value of 4.16. Land that is cultivable and fertile cannot be left unproductive unless there is a pressing reason to do so. As a result, it acts as a motivator for farm youth to participate in chrysanthemum cultivation.
The second most important factor responsible for the participation of farm youth in chrysanthemum cultivation was its attractive remuneration with a mean score of 4.09. The flowers fetch higher prices during the festive season and most of the farmers target their harvest during this season, with higher demand and localized markets offering fair prices for the farmers.
Other contributing factors were agricultural knowledge and interest in agriculture and allied activities, with mean scores of 3.90 and 3.89 respectively, which led to farm youth's active participation in cultivation.
Youths raised in farming families have a natural affinity for farming and related activities and look forward to capitalising on their existing agricultural knowledge and experience.
Lack of job alternatives, with a mean score of 3.67, was cited as a major factor contributing to the youths' return to agriculture. Unemployment caused by the COVID-led lockdown forced them to look for alternative and long-term employment in agriculture and allied activities in rural areas.
Chrysanthemum is a commercial flower crop that blooms from the third month onwards, for up to six or eight months, with an average productivity of twenty tonnes per hectare. Chrysanthemum's fertilizer requirement, water requirement and incidence of pests and diseases are much lower than those of other commercial flower crops. Further, the intensity of care and management is far less, giving youth free time to invest in other productive economic activities. Thus, the respondents gave a mean score of 3.61 for the statement 'less intensive cash crop'.
Fig 2. Factors responsible for the participation of farm youths in chrysanthemum cultivation
Though the crop's water requirement is moderate, short and intense rainfall followed by dry spells, induced by climate change, was not suitable for the crop, leading to delays in planting and harvesting, which were planned for the festive season. Hence, the statement 'environment and temperature are favourable' got a mean score of 3.20.
Chrysanthemum is primarily grown only in a few blocks of the district, and the government's shift in focus on other horticultural crops such as vegetables and fruits has left the chrysanthemum behind, as evidenced by the statement "various government schemes and incentives" with a mean score of 3.06.
Chrysanthemum is not covered under crop insurance schemes and has a much lower chance of receiving institutional or formal credit than other food and commercial crops. Farmers struggle to get fair remunerative prices for the harvest during peak season; since the produce is perishable and bulky, transporting the harvested flowers to other major markets such as Krishnagiri, Madurai, and Koyambedu poses numerous challenges for the farmer. As a result, the statements 'transportation facilities and connectivity' and 'availability of rural credit facilities' received a mean score of 2.89 from the respondents, which was in accordance with the findings of Rashmi Chaudhary et al. (2018).
CONCLUSION
When the entire economy was stressed due to the COVID lockdown, rural youths returned to agriculture, the only sector that had positive growth during the pandemic. Sectors such as IT and manufacturing have benefited the most from the participation of the younger generation. Diverting youth's efforts, knowledge, exposure, and innovation into agriculture shows promising signs for the sector's future growth. Youth participation could benefit commercial floriculture crops like chrysanthemum, which have untapped economic potential. The higher scientific orientation, rapid learning, and innovativeness of today's youth would be the icing on the cake for rural enterprises. With the help of government policies and programmes, supplemented by the guidance of NGOs and extension agencies, youth's interest and energy can be retained in agriculture.
Addressing ethnic disparities in neurological research in the United Kingdom: An example from the prospective multicentre COVID-19 Clinical Neuroscience Study
Background Minority ethnic groups have often been underrepresented in research, posing a problem in relation to external validity and extrapolation of findings. Here, we aimed to assess recruitment and retainment strategies in a large observational study assessing neurological complications following SARS-CoV-2 infection. Methods Participants were recruited following confirmed infection with SARS-CoV-2 and hospitalisation. Self-reported ethnicity was recorded alongside other demographic data to identify potential barriers to recruitment. Results 807 participants were recruited to COVID-CNS, and ethnicity data were available for 93.2%. We identified a proportionate representation of self-reported ethnicity categories, and distribution of broad ethnicity categories mirrored individual centres’ catchment areas. White ethnicity within individual centres ranged between 44.5% and 89.1%, with highest percentage of participants with non-White ethnicity in London-based centres. Examples are provided how to reach potentially underrepresented minority ethnic groups. Conclusions Recruitment barriers in relation to potentially underrepresented ethnic groups may be overcome with strategies identified here.
Introduction
Increasing the diversity of study participants, in order to represent the general population, is of crucial importance to allow research findings to be translatable and to enable the personalisation of treatment and care. Addressing ethnicity-related inequalities in research participation is of particular importance in countries or regions with a multi-ethnic population. 4,5 It is not fully understood why under-representation of minority ethnic groups occurs, although in some interventional studies, such as for vascular neurology, this may sometimes differ between acute and chronic interventions. 6,7 The consequences of the pandemic may have exacerbated existing health disparities. The mortality and morbidity burden of acute COVID-19 infection was disproportionately felt by minority ethnic groups and communities, who were also less likely to receive telehealth services. 11 Here, we aimed to assess recruitment and retainment strategies in a large observational study prospectively recruiting hospitalised patients with neurological complications following SARS-CoV-2 infection and a control group of hospitalised patients with COVID-19, but without neurological complications. In this post-hoc analysis, we sought to evaluate ethnic diversity by geographical region, to identify potential barriers to recruitment and retention, and to provide examples of strategies to increase ethnic diversity across study populations, which has implications for neurological studies in general. 12,13
Methods
Data described in this manuscript were obtained from the COVID-19 Clinical Neuroscience Study (COVID-CNS; www.covidcns.org), a multi-centre observational study in the UK, including 17 centres across England and Wales, addressing the need to understand the clinico-epidemiologic spectrum and biological causes of neurological and neuropsychiatric complications in hospitalised patients with COVID-19, caused by an infection with Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2). This study was added as a separate cohort study embedded within the existing NIHR BioResource - Research Tissue Bank study and obtained ethical approval in the UK (REC 17/EE/0025, IRAS 220277). All participants gave written informed consent, and all procedures were performed in accordance with the Declaration of Helsinki.
We obtained data regarding age, sex, level of education, relationship status, and self-reported ethnicity, collected as part of the study protocol. The latter data consisted of broad ethnicity groups as derived from NHS guidance used in the United Kingdom 14 : i) Asian, ii) Black, iii) Mixed, iv) Other, and v) White. Demographic information was reported using the recently published Updated Guidance on the Reporting of Race and Ethnicity in Medical and Science Journals. 13 Furthermore, we included admission-related data, including days spent in hospital, World Health Organisation (WHO) severity scores for COVID-19, 15 and duration of invasive ventilation.
Information regarding the distribution of the UK population in terms of broad ethnicity categories was collected from the Office for National Statistics website. 14 We examined ethnicity distribution only for centres that recruited at least 50 participants to the COVID-CNS study; centres excluded from the current analyses had a median of eight participants (range 1-30). For centres included in the current analysis, their respective Clinical Catchment Area was determined as follows, based on the National Health Service's organisational oversight, 16 described as (centre (local authority)): the Walton Centre and Liverpool University NHS Foundation Trust (Liverpool City Region), Salford Royal (Greater Manchester), Cambridge University Hospitals (Cambridge), Sheffield Teaching Hospitals (Sheffield), University College Hospital (London Boroughs of Camden, Islington, Haringey, Barnet, and Enfield), and King's College Hospital (London Boroughs of Bexleyheath, Bromley, Greenwich, Lambeth, Lewisham, and Southwark). For each catchment area, the distribution of broad ethnicity categories was obtained for each borough within the above-defined catchment from the Office for National Statistics website, with data obtained from the most recent census in the UK (Census 2021). 14,17 A weighted average for each broad ethnicity category was calculated for each catchment area using the UK Census 2021 data for ethnicity and population. 14,17 Data were summarised descriptively; where data were normally distributed, they are presented as mean ± standard deviation and analysed using Student's t test. For non-parametric data, data are presented as median (range) and analysed by Mann-Whitney U or Fisher's exact test. For dichotomous group comparisons, a Chi-square test was used. A correction for multiple testing, where relevant, was performed using the Bonferroni method. All data were analysed using SPSS Version 28 (IBM SPSS Statistics for Windows, Version 28.0. Armonk, NY: IBM Corp.).
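As an illustration of the weighted-average and group-comparison steps described above, a minimal Python sketch follows. All numbers in it are invented placeholders rather than Census 2021 or study data, and SciPy stands in for the SPSS procedures actually used.

```python
# Minimal sketch of two steps described above: a population-weighted
# average of broad ethnicity proportions across catchment boroughs, and
# a chi-square comparison with a Bonferroni-adjusted threshold. All
# numbers are invented placeholders (not Census 2021 or study data);
# SciPy stands in for the SPSS procedures actually used.
import numpy as np
from scipy.stats import chi2_contingency

# Borough populations and proportion of White residents (illustrative).
populations = np.array([210_000, 240_000, 270_000])
prop_white = np.array([0.62, 0.48, 0.55])

# Population-weighted average for the catchment area.
catchment_white = np.average(prop_white, weights=populations)
print(f"weighted White proportion: {catchment_white:.3f}")

# Chi-square test: White vs non-White counts, cohort vs catchment area.
table = np.array([
    [120, 48],              # study cohort: White, non-White
    [410_000, 310_000],     # catchment population: White, non-White
])
chi2, p, dof, _ = chi2_contingency(table)

alpha = 0.05 / 6  # Bonferroni correction for six comparisons
print(f"chi2 = {chi2:.2f}, P = {p:.4f}, significant at {alpha:.4f}: {p < alpha}")
```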
Results
Baseline demographics for the participants in the COVID-CNS study are provided in Table 1, an overview of ethnicity in Fig. 1, and the geographical distribution of participant recruitment relative to overall population density is shown in Fig. 2. We identified variety in the distribution of ethnic minority participation across centres that participated in the COVID-CNS consortium in terms of broad ethnicity groups (Table 2), but for the overall COVID-CNS cohort, as well as the individual included centres, the distribution of broad ethnicity groups was comparable to their Clinical Catchment Areas (Table 3). Moreover, when comparing non-White versus White participant percentages, no differences were observed in the distribution between the centres' Clinical Catchment Areas and the Census 2021 data, showing that the COVID-CNS study was able to recruit a cohort of participants representative of the UK population in terms of ethnicity (Table 3). In addition, there was a generally good representation of minority ethnic groups, and no difference in the distribution of ethnicity groups was observed between male and female participants (p = 0.507). Female participants more often tended to be from an ethnic minority group compared with male participants (31.2% of female participants were from a non-White ethnic background, whereas this was 26.6% for male participants), although this did not reach statistical significance (p = 0.113).

Table 1 note: Abbreviations: M: male; F: female; CSE: certificate of secondary education; NHS: National Health Service; NVQ: national vocational qualification. Note that discrepancies may be present between NHS ethnicity categories and broad ethnicity categories due to differences in the 'prefer not to say' item for respective categories.
To determine if any bias was present relating to the inclusion of participants from different ethnic backgrounds, we next determined whether demographic and COVID-19-related differences existed between the broad ethnicity groups. After correction for multiple testing through the Bonferroni method (p = 0.05/6 = 0.008), no statistically significant differences were observed for sex, body weight, the number of days spent in hospital, WHO severity scores, or duration of invasive ventilation (Table 2) between the broad ethnicity category groups, but we did observe a statistically significant difference in age, with participants from Asian and Mixed ethnic minority groups tending to be younger than participants from other groups (p = 0.001; Table 2).
Finally, to identify factors that contributed to the comparable distribution of broad ethnicity groups between the COVID-CNS study and the Clinical Catchment Areas of participating centres, researchers working on the study and our PPIE group were retrospectively asked to identify strategies they used to increase participant diversity.The identified strategies are listed in Table 4 .
Discussion
Here we provide an example of the successful recruitment of individuals from ethnic minority backgrounds in the UK to the COVID-CNS study. We observed a high degree of diversity in ethnic background and demonstrated which strategies may have helped to achieve this level of diversity by focusing on the recruitment approaches. While acknowledging the limitations of a largely post hoc approach in analysing these factors, we feel that with the use of these strategies, a similar degree of diversity in ethnic background can be achieved in research studies as is observed in the general population. The strategies employed by the COVID-CNS research teams may prove helpful in other neurological studies and trials, and tie in with the advice of widening participation of underrepresented groups in research, which is a major agenda item for funders, e.g. the NIHR. 18 When asked about barriers to recruitment of minority ethnic groups, the replies given by the COVID-CNS research teams were largely in line with known barriers described in the literature.
Solutions to these identified barriers are crucial and seem to depend on the background of specific minority ethnic groups. Although most of the evidence stems from non-neurological studies, much can be learned from advances in other fields. For example, in a study on hypertension self-management, core factors that aided recruitment in an African American population included the presence of a culturally sensitive and diverse research team, in addition to the use of incentives. 19 Specifically, the study team consisted of individuals from a diverse ethnic background, and the study used monetary incentives to increase retainment. Other factors that contributed to the high recruitment numbers (96.7% over a period of 7 months) and low attrition (16.9% after 6 months) included having previous experience with the study population, working closely with other staff at the study site, and ongoing communication with all parties involved in the study. 19 This has also been shown in other studies, where individuals from underrepresented racial and ethnic groups feel more confident about participating when the research team approaching them is led by people from the same ethnic background. 20 These strategies are largely in line with recently described approaches for neurological studies, including the 'King's Model for Minority Ethnic Research Participant Recruitment', which raises awareness about and supports the recruitment of minority ethnic groups in neurological and other studies and underlines the key unmet need of validating clinical research outcomes in non-White populations. 21

Another key factor to consider is addressing language and communication barriers. This is exemplified by an over 80% satisfaction rate among Hispanic American participants in relation to a Spanish translation of clinical information in the same format as the English information provided for a study. 22 In COVID-CNS, translation services for study documents and research appointments were available. It is important to ensure studies are appropriately funded to cover translation requirements. Other examples of successful recruitment, relevant to COVID-19 studies, include the performance of different COVID-19 vaccine studies. For example, the Novavax study recruited the highest proportion of individuals from a minority ethnic background, which was attributed to the later start of recruitment for that study, enabling it to benefit from the ongoing efforts to increase diversity in COVID-19 vaccine trials in general. This also emphasised the need for ongoing engagement and extended recruitment periods, as individuals from ethnic minority backgrounds tended to enrol later in the recruitment process, owing to strengthened community engagement efforts and access to more diverse volunteer registry records. 23 This trend was not observed in the COVID-CNS study; here, recruitment of ethnic minority groups occurred evenly throughout the recruitment period.

Table 2: Ethnicity across the larger recruiting centres participating in the COVID-CNS consortium. Only centres that recruited at least 50 participants have been included in this overview. Abbreviations: a: across all groups; b: non-White vs White.
In addition to a focus on increasing diversity in study populations from the perspective of ethnic background, other issues need to be addressed when it comes to ensuring representative recruitment for clinical trials and studies. For example, those from a low socioeconomic status remain less represented in study recruitment. Addressing this inequality may be achieved by approaching individuals over the phone through a toll-free (freephone) number, 24 as well as using partners of participants for support, and societal partnership, which have been shown to be effective and can lead to increased enrolment rates. 25,26 Moreover, higher retention rates have been observed through the use of (financial) incentives, a personalised approach, using project logos, emphasising participant convenience, and sustained contact with participants. 26 Such strategies may also be of importance when trying to recruit participants from a rural setting, where underrepresented minority groups may be more difficult to reach due to geographical isolation and lower population density. Here, the use of multiple recruitment strategies could be particularly beneficial. 27 Finally, addressing physician referrals is crucial; for example, in oncology trials 77% of trial participants reported that it was their physician who made them aware of specific studies. 28 Surveys show that many physicians do not refer patients to studies due to a lack of time or knowledge about ongoing trials and studies. 29 Physicians may have unintentional bias towards recruiting participants from non-minority backgrounds. 30 Therefore, engaging with physicians, aiming to increase the involvement of physicians from ethnic minority groups, and providing suitable information materials about studies could aid in increasing recruitment, particularly among individuals from an ethnic minority background.
As with any study, it is important to reflect on the limitations of our analyses. These include the possible bias introduced by the greater ethnic diversity in London compared with other parts of the UK. As such, it could be reasoned that the greater ethnic diversity observed in the London centres participating in the COVID-CNS study could be explained by the greater ethnic diversity in London. On the other hand, we noted that the distribution of broad ethnicity categories across the United Kingdom was in line with the Census 2021 data. Moreover, other studies undertaken in the same population in London have not reached the diversity in ethnic background observed in the COVID-CNS study. 31,32 It could be argued that, as the nodal event was admission to hospital and people from ethnic minority groups were more likely to have higher COVID-19 disease severity, 33 it was easier to recruit participants from a wider range of ethnic minority groups, also in the light of difficulties some studies on COVID-19 experienced when trying to recruit outpatients. 34 In this study we have not been able to include data related to, for example, free childhood meals and the exact postcode of participants, which are also determinants of health. Finally, we observed relatively high rates of participants who did not want to indicate their self-reported ethnicity, in addition to the limited ethnicity group options provided by the NHS. Nonetheless, we feel our results are useful and form a valuable source of information regarding recruitment of individuals from an ethnic minority background in multi-ethnic countries and regions, also by providing examples of how to successfully overcome barriers to recruitment. This also applies to studies in general, and our examples align with the priority needs and most successful strategies identified in other neurological research, such as engaging in community outreach to build trust and understanding, tailored explanation of the study based on language and cultural background, providing adequate support in relation to the time and resources that participants have to invest in study participation, as well as careful scheduling of study visits. 35
Concluding remarks
To conclude, in this study we provided an example of the successful recruitment of individuals from ethnic minority backgrounds in the UK.A high degree of diversity in ethnic background was achieved in recruitment, mirroring the ethnic diversity across the general UK population.We demonstrated which strategies could be used to achieve this level of diversity and how further research into identification of barriers to recruitment and strategies is vital in tackling these barriers across clinical trials and studies to enable the correct extrapolation of research findings to the general population.
Fig. 1. Ethnicity in the COVID-CNS study. Please note that the numbers and percentages for broad ethnicity and NHS ethnicity groups do not necessarily match due to differences in the 'Prefer not to say' items. Abbreviations: NHS: National Health Service.
Fig. 2. Geographical distribution of participant recruitment for the COVID-CNS study relative to overall population. Fig. 2A shows England and Wales with the number of participants in the COVID-CNS study per postcode area; Fig. 2B shows overall population density across England and Wales (created using Census 2021 data from the Office for National Statistics; https://www.ons.gov.uk/census/maps ).
Table 4: Strategies used in the COVID-CNS study to increase participant diversity.

Culturally sensitive and diverse research team
• Having a diverse research team from different ethnic backgrounds
• Ensuring that the research team is culturally sensitive and motivated to support diverse participation
• Video testimonials from participants acting as patient ambassadors on study website

Overcoming financial/social barriers
• Reimbursement of travel expenses / arranging travel for participants unable to pay in advance
• Support completing the online follow-up questionnaires over the phone for participants without internet access
• Evening sessions to fit around work schedules
• Session times to fit around childcare needs

Overcoming language and communication barriers
• Aural consent form for visually impaired participants
• Accessing translation services for those for whom English was not a first language
• Patient testimonial videos to ensure people from ethnic minority groups could identify with participants already recruited to the study

Meeting participants' individual needs
• Allowing relatives to attend appointments
• Working alongside caregivers to ensure both parties' needs are met
• Giving anxious participants alternative ways to provide biosamples
• Shorter and fractioned sessions
• Providing a calm and suitable environment for participants
Table 3: Ethnic background in the COVID-CNS study and individual centres compared to ethnic diversity in the United Kingdom and local centre catchment areas. Catchment areas were defined as follows (centre (local authority)): the Walton Centre (Liverpool), Salford Royal Hospital (Greater Manchester), Cambridge University Hospitals (Cambridge), Sheffield Teaching Hospitals (Sheffield), University College Hospital (London Boroughs of Camden, Islington, Haringey, Barnet, and Enfield), and King's College Hospital (London Boroughs of Bexleyheath, Bromley, Greenwich, Lambeth, Lewisham, and Southwark).
Resisting the rentier city: grassroots housing activism and renter subjectivity in post-crisis London
This article aims to open up a new discussion about the political potential of the renter to urban social movements by providing a ground-level view of renter activism in contemporary London. Drawing on participant observation conducted as an activist-researcher between 2015 and 2017, I offer an ethnographic social history of Digs, a private renters' action and support group based in the east London borough of Hackney. Examining the political and organisational evolution of Digs over a six-year period, I explore the group's struggles to cultivate a coherent collective identity for renters, its innovative approaches to mutual support and relational organising, and the difficulties its participants encountered in maintaining participation in a highly intransigent political climate. I argue that although Digs was a relatively small and largely localised group, its members nonetheless cultivated a vital set of knowledge-practices that provided a conceptual and material framework for a citywide renters' union in London. The case of Digs demonstrates that urban social movements are more likely to evolve effectively when they create the institutional capacity to retain key activists and pass knowledge on.
Introduction
The growing interest in everyday struggles for housing justice has been one of the more heartening by-products of the continuing fall-out from the global financial crisis. Over the past decade, activists, scholars and activist-scholars have documented the quotidian struggles of those who find themselves subject to displacement, dispossession or endemic housing insecurity, as well as the practices that emerged through the 'trial and error' of everyday activist life in Hackney. By paying close attention to the often unseen micro-histories of activist groups, I argue that we can develop more effective means of analysing our movements and understanding the challenges they face. In the spirit of 'militant ethnography' (Juris, 2008a, p. 20), I therefore offer these reflections in the hope that they will prove useful not only for renters' movements and housing struggles in other contexts, but also for urban social movements more generally.
Producing the rentier city
Over the past three decades, London has been transformed from a city that was predominantly populated by securely housed homeowners and council tenants to one in which insecure private renting has increasingly become the norm among a sizable portion of its population. In 1981, over 870,000 homes in the capital - almost 35 per cent of all properties - were classified as 'socially rented', of which around 770,000 were council houses with lifetime tenancies tied to a local authority. This compared with around 1.2 million owner-occupied homes and just 378,000 properties in the private rented sector (Watt & Minton, 2016, pp. 208-9). Since then, the steady dismantling of council housing by successive governments, coupled with the transformation of London into the world's prime location for real estate speculation (Beswick et al., 2016), has shaped the city into one in which growing numbers of low- and even middle-income earners find it increasingly difficult to access decent, secure and affordable housing. Though successive governments have presided over this transformation, its origins date back to the election of the Conservative Margaret Thatcher in 1979, who famously championed the dream of a 'property-owning democracy' and attacked council housing for fostering a culture of dependency on the state (Hanley, 2007; Murie & Jones, 2006; Power, 1999). As part of a broader drive to promote individualism and win over traditional Labour voters, Thatcher's government introduced the now infamous Right-to-Buy (RTB) in 1980, a policy that gave council tenants the ability to buy their homes at a discounted price and failed to replace these with equivalent properties in the public sector (Harvey, 2005; Hodkinson, Watt & Mooney, 2013). Council housing stock fell steadily after the introduction of RTB, a process that largely continued under Tony Blair's New Labour government between 1997 and 2010 (Watt & Minton, 2016, p. 208). Between 1999 and 2010, for example, London lost around 85,000 council houses to RTB (DCLG, 2017), with many finding their way into the hands of so-called Buy-To-Let (BTL) landlords, who took advantage of BTL mortgage products following their launch in 1996 (Leyshon & French, 2009; Watt & Minton, 2016, p. 207). 2 This gradual erosion of council housing as a tenable option for many low- and middle-income earners has not been the result of national-level policies alone, however. Over the past fifteen years, London's local authorities have come to play an active role in 'decanting' low-income tenants from public land in order to make way for more lucrative private developments. Inhibited by their inability to borrow money and dramatic cuts to their budgets, the city's borough-level councils have increasingly turned to private developers in order to meet housebuilding targets and redevelop public housing deemed to be in disrepair. In many instances, this 'state-led gentrification' (Watt, 2010) has involved the wholesale demolition of council estates and their replacement with denser 'mixed income' developments that maximise profits for investors by skewing units towards the high end of the market (Elmer & Dening, 2016; Lees, 2014). Developers have become adept at circumventing quotas for low-cost rental properties through so-called viability assessments (Elmer & Dening, 2016, p. 274), which enable them to reduce social units if profit margins fall below 20 per cent.
Such trends not only force low-income residents out of 'prime real estate' in London's inner-city boroughs, but also drive up rates in the private sector by accelerating existing processes of gentrification (Butler & Lees, 2006).
The loss of council housing and the failure to develop new social housing has meant that far more people find themselves looking for housing in the private rented sector (PRS), but this too has been subject to substantial reform since the 1980s. Between two separate housing acts in 1988 and 1996, the PRS was deregulated and liberalised: firstly through the removal of rent controls that had set limits on how much private landlords could charge tenants, and secondly through the replacement of lifetime Assured Tenancies (ATs) with the significantly less secure Assured Shorthold Tenancies (ASTs). 3 Born from a drive to encourage new 'investors' into the PRS, these reforms heavily weight power in favour of landlords by making it far easier to evict tenants. Issues such as disrepair, unreturned deposits, overcrowding and harassment have all become commonplace for London's private renters, who are projected to constitute 60 per cent of the city's overall population by 2025 (Fraser, 2016). Such measures have been worsened by the long-term stagnation of wages in the UK, which means that rental costs consume an ever-larger portion of renters' earnings (Edwards, 2016).
Many of these long-term trends have been compounded by recent austerity measures that aim to bring down public spending on welfare. In 2013, the Coalition government placed a cap on Housing Benefit, the state subsidy that covers shortfalls in income, pushing many low-income private renters into arrears with their landlords. Eviction by a private landlord is now the leading cause of homelessness in the UK (Butler, 2016), with homelessness applications rising annually since the cap. According to the most recent statistics released by the homelessness charity Shelter, around 170,000 people in London are currently registered as homeless, a figure that constitutes 53 per cent of the UK's overall homeless population (Shelter, 2018). This rise has taken place against the backdrop of a wider shift to more punitive models of welfare distribution in the UK, in which idioms of moral deservingness have reduced access to benefits and precipitated the stigmatisation of those reliant on state support (Hills, 2015; Hyatt, 1997; Koch, 2014, 2015; Wacquant, 2009).
Since the early 1980s, then, a succession of policies has fundamentally reshaped the UK's housing market and its connection to broader patterns of social inequality (Hamnett, 2003; Dorling, 2016). The net result is that a growing number of Londoners now find themselves forced to access their housing through a private rented sector that is among the most expensive, insecure and unregulated in Europe. As Watt & Minton put it (2016, p. 208), far from creating a 'property-owning democracy', Thatcher's legacy in London has instead created a 'private landlord owning plutocracy' (emphasis in original) that continues to preside over steadily worsening living conditions for the city's renters.

3 ASTs permit landlords to evict tenants without reason at just two months' notice.
The renter as a political subject
Digs was formed by a small group of individuals in 2012 when Heather, the organisation's longest-serving member and one of its co-founders, decided to call a meeting for private renters in Hackney after a string of bad experiences with landlords and letting agents. 4 Although she had worked in supported housing in Yorkshire where she grew up, Heather explained that she only began to view her experiences in political terms after moving to London.
It was like moving down here [to London] and realising how much worse it was because of all the market pressures, and being a private renter myself and just feeling, like, massively screwed over. So I just thought, wouldn't it be good if there was some kind of an online forum where local people could share information about landlords and letting agents? You know, finding out more about what the market pressures are, and talking to people who are in different points in the PRS.
This initial idea for a renters' support group in Hackney led to a launch event in late 2012 that was attended by around 30 people. As the group began to take form after this event, Heather and several others attended training sessions on housing law. They used the knowledge gleaned from these sessions to run renters' rights 'skill-ups' for Hackney renters, and this enabled Digs to build up a core of active members and a mailing list of supporters. As interest in the group grew, it became clear that there was an appetite for something more far-reaching than a peer advice forum. Heather described her frustration at seeing the experiences of London's private renters continually overlooked by both politicians and the media, who tended to view the city's housing crisis through the lens of potential buyers who had been priced out of their chosen areas by rising house prices. 'There was still this perception that the private rented sector was for students and for young professionals who appreciated and benefitted from its flexibility,' she recalls. 'But you know, this is my life, it's my friends' lives… And, you know, there was a huge amount of pain and disappointment at how much we were not being listened to.' Heather describes how the group was particularly frustrated that large housing charities such as Crisis and Shelter - whom they regarded as natural allies - attempted to tackle problems in the PRS by running campaigns against so-called 'rogue landlords'. As she put it: 'The idea that there's a few bad apples that can be weeded out - that's just a way to protect the status quo because it's disguising the reality, which is that there's massive systemic failure.'

According to Casas-Cortés et al., knowledge-practices constitute the 'experiences, stories, ideologies, and claims to various forms of expertise that define how social actors come to know and inhabit the world' (2008, p. 27). For Digs in that early period of formation, frustration with this inattention to the political roots of problems in the PRS convinced its members that the group 'needed to be something bigger,' as Heather put it. After a series of meetings with other fledgling renters' groups from London boroughs such as Islington, Tower Hamlets and Lambeth, the group helped to launch the 'Let Down' campaign in April of that year. Using a Monopoly board game theme, Digs members carried out a series of actions outside prominent high street letting agents in Islington and Hackney, demanding an end to the fees that letting agents routinely charge tenants in return for signing or renewing tenancies (Digs, 2013). These actions proved a powerful galvanising force, giving existing members confidence, bringing new individuals into the fold and raising the group's profile through coverage in the national press (Kennedy, 2013).

As Digs gradually established itself as a prominent voice for private renters in the capital, its position was bolstered by the formation of the Radical Housing Network (RHN), which brought together London's thirty or so active housing groups into a formal coalition around the same time. Although each organisation within RHN has its own specific goals - its members include groups of squatters, council tenants, benefit claimants, travellers and migrants - the network as a whole shares a long-term commitment to the 'radical right to housing' (Madden & Marcuse, 2016, p. 191) through de-commodification and de-financialisation (Wills, 2016).
Coupled with an effective communication strategy using Facebook and Twitter, the support of RHN meant that Digs was now able to 'punch above its weight' (as one member put it) as an influential voice for renters both within and beyond Hackney.
Scholars of social movements have long emphasised the struggle to create and maintain collective identities as one of the principal challenges for grassroots political activists. As Holland et al. (2008) argue, a movement's ability to reproduce itself often hinges on the fragile relationship between belonging and action among its constituents. Building on work that underlines identity-formation as a decentred, dialogic and place-based process (Melucci, 1995, 1996; Satterfield, 2002), they show how nascent collective identities can be challenged or compromised by factors that are often beyond a given group's control (Holland et al., 2008, p. 99). In the case of Digs, questions of scale were inherent to the challenges the group faced as members struggled to carve out a coherent collective identity. In 2015, when I first became actively involved in the group after several years as a supporter, a major preoccupation among long-standing participants was the difficulty of reaching private renters beyond a fairly narrow group of already-politicised activists. At that point the group had some 300 people on its mailing list, but only around 30 of these had attended meetings or taken part in actions or campaigns. During the time in which I was most active (2015-2017), around fifteen of us formed a core of activists, with supporters from the mailing list turning out for larger actions and others being involved for shorter periods. Most individuals in this core group were white and university-educated, a fact that made many of us uncomfortable given Hackney's large black and minority ethnic population. Digs was also a small group relative to the local population of private renters: around a third of Hackney's 270,000 residents fall into this category (London Borough of Hackney, 2019), meaning the group's members and supporters represented just a tiny fragment of its potential constituency.
While Digs activists were in no doubt that significant numbers of private renters across Hackney and London were unhappy with their housing situations, converting this discontent into a self-identifying political subjectivity as renters was by no means a straightforward task. According to Heather, one of the early decisions the group made was to focus very specifically on issues facing private renters rather than the host of related problems that characterise London's housing crisis. Although this strategy had been an effective means of establishing a core group of committed members, reaching beyond activist milieus proved far more difficult. Part of the problem lay in the very thing that had spurred Digs's formation in the first place: the absence of an adequate public debate about the entrenched injustices of the PRS and its political roots, which meant there was no readily available counterdiscourse for disgruntled renters. Digs thus faced the twin challenge of struggling to force open a political conversation through personally demanding public actions at the same time as undertaking the patient work of building a fledgling renters' organisation. The burden of operating simultaneously in these different 'political temporalities' (Wilde, 2017b, p. 48) proved a constant source of concern for Digs members. As I discuss below, however, it also proved to be a vital learning process that spurred the group into developing new organising strategies.
Relational organising, friendship and care
Although Digs struggled to attract new members from beyond activist circles, it proved much more effective at retaining existing members who had lost faith with other forms of activism. One core group member from 2013 onwards was Emma, a Hackney renter who had been involved in climate activism for several years before taking a break from political activity following the birth of her son in 2009. In the summer of 2013 Emma was introduced to Digs by Jacob, an experienced activist who became a vital link between Digs and RHN. Emma described how she was immediately attracted to the emphasis that the group placed on organising around its members' everyday problems.
What really struck me early on with Digs was, like, how much I preferred the housing stuff to all the other activism I'd done before. Because like with climate stuff, it's always really "other" -it's for other people and a bit distant and a bit abstract. Then I started with Digs it was like: this is an issue that affects me and the people around me and I could feel the power in that… One of the things that was really fundamental for Digs was that we really took being renters as the core. Like it really wasn't to help others, it was there to help ourselves in a way that would help everybody.
Alongside this focus on the immediate and the everyday, Emma recalled the social side of Digs being central to the group's appeal: I think I just really liked all the people that I met. It really suited me on a kind of personal level, and I started making friends through this thing. You know we'd always go to the pub after meetings and these things were actually really important to what we were doing.
A critical part of this sociality was the fact that Digs was rooted in Hackney, which meant that members could socialise together with ease both within and beyond the group's core activities. As Emma's words make clear, the strategy of organising around the everyday issues of Hackney renters was central to the development of 'thick' social relationships that went beyond the bounds of the group's political objectives. The strong friendships formed among members as a result became integral to both the organisation's continuity and to its growth in the coming years.
While part of Digs' decision to focus specifically on local private renters had emerged organically, it was also an outcome of the group's adoption of a particular organising strategy in 2013. A key individual in this process was Hero, a qualified community organiser who had received training from Citizens UK, a large NGO that specialises in training community organisers across the country. After adapting Citizens UK's trainings, Hero ran a number of workshops for Digs members based around the concept of 'relational organising', a model premised on the idea that successful organising emerges from strong interpersonal relationships and ties of mutual support within communities. Proponents of relational organising argue that these relationships form the foundations upon which campaigns can be launched and demands levelled at those in positions of power (Christens, 2010; Divakaran & Nerbonne, 2017; Ellis & Scott, 2003; Saundry & McKeown, 2013). Although its roots are in the community organising models originally popularised by figures such as Saul Alinsky in the United States (Alinsky, 1971), relational organising seeks to move away from hierarchical divisions between 'professional' organisers and community members and takes a less instrumentalist approach to gaining 'easy wins' in order to build collective power (Ganz, 2002, 2010; McAlevey, 2016). The version that Hero developed for Digs also aimed to move away from a focus on institutional leaders, instead concentrating on building a network of individuals who were each personally invested in empowering private renters in non-hierarchical ways.
Emma recalled how the period following Hero's workshops was a particularly rewarding one for Digs members. Drawing on their trainings, throughout 2014 the group ran Saturday stalls in central Hackney in order to increase their visibility. They used these stalls to take down the names of local renters who showed an interest in the group and then followed these up via email. Those who responded were offered the opportunity to become involved in the group after an informal one-to-one conversation with an existing member. These one-to-ones were designed to find common ground between Digs's goals as a renters' organisation and the self-interest of each renter in question, in the hope that the individual would then make a commitment to being involved on a long-term basis. After several months of stalls and an online campaign titled 'Tell us your story', the group had collected 50 personal accounts of the myriad problems that renters in the borough faced. These were then presented in theatrical form - a 'renters' bingo' game featuring common renter problems such as 'deposit not returned', 'broken boiler' and 'extortionate agency fees' - to representatives from Hackney Council in an effort to pressure the local authority into improving its regulation of the PRS. Emma remembered how the meeting was notable for the visible discomfort of local councillors when they were confronted with the realities of their constituents' housing problems. For Digs members, the experience of using personal stories to directly confront local politicians helped build further confidence in their ability to shift the terms of the public debate around renting.
While relational organising provided a coherent structural and strategic form for Digs to carry out its work, the everyday content of what its members were doing was equally important to the group growing in size and efficacy in that period. According to Emma, this was a time in which a fusion between fun, socialising and more confrontational political actions became central to the group's approach. She described the following regarding the group's anti-gentrification fly-posting around Hackney: We got quite good at doing fun activities together. So we'd do like picnics and socials and that was a really big thing. And then we used to do fly-posting, and that was where ten or fifteen of us would meet at someone's house, print off like 400 posters at the Common House [a radical social centre in East London], cook up the paste and get drunk, and then go off in little groups of four or five and send each other pictures [of the posters]. And we found that those types of social activism really worked.
Taking part in actions that were both social and transgressive was thus integral to the friendships that came to anchor Digs as an organisation. As Emma put it when I asked her what had kept the group going during periods of relatively low activity: 'I think interpersonal relationships. I don't think you can underestimate the work that Heather put in to making it all hang together. We were all doing quite a lot of work and thinking about it quite a lot.' Much of this thinking returned to the question of how to diversify the group's membership and widen its appeal among local renters. Acutely aware that the core membership of Digs was largely white, university-educated and constituted by individuals who were already politically active, discussions were regularly held about how to reach out to more marginalised renters and foreground their voices in the group's strategies and campaigns. After deliberating this issue for some time, in 2015 the group took the decision to make advice and support a more central component of their meetings.
By the time I became involved later that year, Digs meetings always began with a half hour slot in which people were encouraged to converse with the person next to them and share their renting experiences. This approach aimed to give those attending the opportunity to listen to each other informally before going into discussions about actions or campaigns. If an individual wanted to discuss her housing situation further with the wider group, her conversation would then be fed into a group discussion so that advice, support and potential collective actions could be planned in order to help resolve the issue. In one instance, for example, a woman who was attending her first meeting became very upset as she explained how the threat of eviction and a falling out with her flatmates was causing her mental health to deteriorate. The group took the time to listen and, over a cup of tea and a samosa, helped talk through her options. In the end, the woman decided against fighting the eviction and told us she would instead look for a new place to live. She expressed her gratitude to the group for listening to her problems and spent the second half of the meeting contributing her ideas for forthcoming actions.
Although it was not always easy to move swiftly from the mutual support part of the meetings to the strategic - on occasions, strategic discussions took place in the pub afterwards - there was a consensus among members that it was vital to prioritise support when pressing problems presented themselves. Danny, a Digs member who became involved shortly after me, described being particularly impressed by this approach during his first few meetings: I felt very welcome at those early Digs meetings I attended, and that was interesting to me. To be honest with you I'd never thought seriously about how to create welcoming organising spaces while I was mostly involved in student politics. In retrospect, I can think of a couple of occasions where groups I was in were criticised for not being sufficiently welcoming, but in general they had always been carried along by their own momentum - the momentum of events… But there wasn't much attention paid to the question of how to create spaces which were open for, and accessible to and warmly welcoming for people who might not immediately be drawn to activist politics.
In Danny's view, the prioritisation of mutual support was integral to the development of strong social bonds that could then be put to political use. As he put it: It's really small stuff, like letting people talk to the person sat next to them rather than immediately launching into an agenda, trying to introduce some element of mutual aid and advice within the meetings in addition to a discussion about a campaign or an action… The presence of people who are trying to think about getting some food, making sure there's tea -all of that stuff.
Recent scholarship has highlighted the importance of affective experiences to the participants of twenty-first century social movements (Garces, 2013;Graeber, 2009;Juris, 2008b;Razsa & Kurnik, 2012). Yet while much work on the alterglobalisation movement and Occupy stresses performative forms of protest, a distinctive feature of London's recent wave of housing activism has involved a redirection of political energies to material struggles in local and everyday settings. In the spaces of London's anti-austerity groups, a focus on collectivised and 'militant' forms of care (Gann, 2015) has become integral to the work of those who resist evictions, challenge local authorities and self-organise as precariously housed tenants and benefit claimants. As I have argued elsewhere (Wilde, n.d.), these practices constitute a different mode of affective experience to performative protest: one in which the act of taking care for others constitutes not merely a vital survival strategy, but also a means of fashioning embryonic moral economies and 'alternative circuits of value' (Skeggs 2011, p. 503). In a similar vein, Digs' interpretation of relational organising functioned as both a method and an ethos: it was a means of reaching out to renters who needed support, but also a knowledge-practice that formed an ethical framework for action and a means of producing a collective identity as renters.
Campaigning, 'capacity' and burn-out
In early 2016, Digs launched a campaign that aimed to bring together the immediate struggles of its members with a wider housing issue. It has long been acknowledged that letting agents and landlords routinely discriminate against benefit claimants by refusing to let properties to those who require Housing Benefit to cover their rent. This ubiquitous practice in the UK's private rented sector appears most visibly in the form of 'No DSS' on lettings adverts - DSS being the acronym of the now defunct Department of Social Security, which is still used as a byword for benefit claimants in housing circles. The group had raised this issue in its campaigns before, but in late 2015 the decision was taken to focus specifically on 'No DSS' after Emma received an eviction notice from her landlord. Emma's situation was typical of the endemic insecurity that many of London's renters face: although she had always paid her rent, her landlord opted to start eviction proceedings using the infamous Section 21 law that allows landlords to seek possession at just two months' notice and without needing to provide a reason. For Emma, this was compounded by the difficulty of finding a new landlord who was willing to let to a tenant claiming Housing Benefit. At the time she was working full-time, but since her wages fell short of the going market rate in Hackney, she received Housing Benefit as a top up. In her search for a new home, letting agents repeatedly told her that 'our landlords don't accept DSS,' meaning her prospects of finding a home close to her son's school looked bleak.
Emma's situation was the spark for a new Digs campaign - appropriately titled #YesDSS - that aimed to draw attention to the role played by letting agents and landlords in effectively barring benefit claimants from properties. After deciding on a campaign strategy in late 2015, Digs members carried out a 'mystery shopper' survey of 50 estate agents in Hackney in early 2016, with the aim of finding out how many would accept benefit claimants as tenants. After weeks of telephone calls to letting agents, the survey revealed that there was only one landlord in the whole borough that might accept a 'DSS tenant'. In our meetings following the survey, a clear consensus emerged for a series of theatrical and high-profile actions targeting letting agents who had been particularly derisory in their language towards benefit claimants.
In early February the first #YesDSS action took place in central Hackney. It featured cardboard boxes bearing slogans about homelessness and discrimination, a theatrical speech from a Digs activist dressed up as the 'Lord Mayor of private renters' and more serious speeches from those who had suffered as a result of 'No DSS', including a highly moving one by Emma. Around 70 people attended, including housing activists from across London and members of Sisters Uncut, the direct action feminist group who campaign against cuts to domestic violence services. A colourful caravan of activists and supporters then carried out flash occupations of several Hackney letting agents, during which the morality of 'No DSS' was debated with an assortment of bemused, hostile and uncomfortable letting agents. Several agreed to amend their policies as a result of these conversations. Six weeks later, a second action was carried out with slightly smaller numbers. On this occasion, Digs members 'blacklisted' letting agents who still refused to change their policies by sticking cardboard signs on their doors. In what was a clear victory for the campaign, many others simply chose not to open their doors on what would ordinarily be their busiest day of the week.
The immediate response to this action was highly positive. The campaign received coverage in the national press (Foster, 2016), local politicians made public statements denouncing the discrimination of benefit claimants, and several new people began attending meetings after being inspired by the actions. As one later told me: Both of those [#YesDSS] demos were really vibrant. I loved the spectacle of this big crowd of people dancing down Mare Street [Hackney's main high street], going into letting agent offices and making some noise, expressing dissatisfaction but in a theatrical and performative way. I enjoyed the way that people were drawn into that and the energy of it… I remember watching people coming out of JD Sports next to the letting agent and immediately picking up the chant because something was happening in the street.
Much like previous Digs campaigns, the momentum of organising #YesDSS initially had a galvanising effect on the group, with new members giving impetus to the Saturday stalls, providing much-needed support with online communications and bringing fresh perspectives to strategy meetings. But as the months wore on following the actions, the collective energy that #YesDSS produced began to dissipate as Digs members struggled with the question of where to take their campaign and their organisation. While some were keen to repeat the actions in order to maintain pressure on local letting agents, others felt that this would lead to inertia as the protests gradually diminished in size. Some advocated a more concerted effort to reach out to marginalised local renters through a door-knocking campaign, but others argued that the group lacked the numbers to carry out such endeavours effectively.
These discussions signalled a deeper problem for Digs members, which was that in spite of the justified anger that was expressed during #YesDSS, the campaign also highlighted the numerous legal and institutional barriers that stood in the way of significant PRS reform. Some letting agents could be shamed into changing their policies in Hackney, and some local politicians could be encouraged to push for better regulation, but solving such issues ultimately required a seismic shift in housing policy at the national level. For all the creativity and determination of its members, it was clear that Digs on its own would struggle to achieve such goals. From the summer of 2016 onwards, regular discussions about the group's ability to carry out its activities as a collective, and expressions of burn-out from a number of individuals, suggested that Digs might be running into its organisational and strategic limits. Danny, who was at that point still relatively new to the group, recalled hearing the word 'capacity' regularly and feeling that it reflected a struggle to reconcile people's ability to practically commit to Digs in spite of their emotional attachment to the group.
I don't think I had encountered [that word] in an organising setting before. Everyone in Digs was always thinking about "capacity", which I think I understood as a kind of internal vocabulary -an idiom for being worn out. I feel like people were talking about their own individual experience to some extent, but also they wanted to do so on behalf of the organisation, which they are individually important to.
As Jeffrey Juris observes, 'sustaining a mass movement is a complex art, requiring a delicate balance between periodic outbursts of embodied agency and their controlled management, improvisation and staged repetition' (2008b, pp. 90-91). Although Digs was never a mass movement, it had nonetheless been based around a similarly delicate balance that seemed to have been lost in that period, and this certainly chimed with my own feelings around that time. While I had been closely involved in the #YesDSS campaign, I found myself frustrated that we had been unable to build on our actions after what had been an enjoyable and inspiring campaign up to that point. I was also aware that various looming changes in my personal life meant I would be unable to commit as much time to the group in the coming months, and was concerned that this would put more pressure on friends such as Danny, Emma, Heather and Jacob. In this sense, as much as Digs members wanted to push further and harder, the challenge of being a relatively small group with limited geographical and institutional reach had gradually worn down those who formed its core base.
Scaling up
In their articulation of knowledge-practices, Casas-Cortés et al. highlight the importance that social movement activists place on 'listening, tracing, and mapping the work that they do to bring movements into being' (2008, p. 28). In the case of Digs, I argue that the group's discussions around strategy, capacity and burn-out proved highly productive even as the organisation itself gradually succumbed to inertia. As these debates periodically surfaced and then receded, they produced 'situated knowledges of the political' (Casas-Cortés et al. 2008, p. 51) that filtered into a project which, since the summer of 2015, had been running in tandem with Digs: the drive to build a London-wide renters' union. This idea had been mooted by Digs members several years before, but had been slow to gain momentum. After around a year of meetings between various housing groups and interested individuals from across London, by 2016 the renters' union had become a tangible project that several Digs members, including myself, were involved in. Although there was an array of different ideas about what this union might look like, one major motivation was to move beyond the limitations of volunteer-based local groups like Digs, who tend to rely on a small number of activists in order to function. The hope was that by forming a member-based renters' union that covered the entire city, funding could be raised to pay community organisers (at least during the first phase of the union's formation), thereby giving these individuals the means to carry out more time-consuming activities such as door-knocking and organising events. Several activists, myself included, also felt that a citywide union would prove a more appealing and convincing entity to renters who were yet to become politically active. By mid-2017, an initial framework for the London Renters Union (LRU) had been established and funding was starting to arrive.
The foundation of the LRU as an established renters' union coincided with Digs becoming largely dormant by the end of 2017. The group carried out one final action around letting agent fees in the summer of 2017, but by the end of the year most core members had either stepped back or committed their time and energy to the LRU. In September 2018, Digs formally announced that the group would be amalgamating with the LRU and helping to establish its Hackney branch, which is now one of the union's three active branches. Over the past year, the LRU has grown rapidly in size and now has over a thousand members. The union also has two paid members of staff and is looking to expand further. A number of successful actions against local authorities and landlords, coupled with an active support group in each of its branches, suggest that the LRU is set to play a leading role in London's renters' movement in the coming years. Jacob, who was involved with both Digs and RHN from early on in their formation, explains how the struggles he experienced with Digs were vital processes of learning that have now been channelled into the LRU's structure: We found you can't get people to view themselves as a renter solely through a shared experience of oppression. Building a collective entity, a union of people, as a membership organisation gives us something to identify with, and it's through this organ of collective power that we see ourselves as renters. Becoming a political subject comes out of a history of real shared experiences together and not just an abstract understanding that we are in the same boat.
In their work on emotion in social movements, Brown & Pickerill (2009) advocate for closer attention to emotional reflexivity in political spaces. Activists who seek radical political change, they argue, should set about 'making explicit the link between understanding our emotions and prefiguring social transformation' (2009, p. 11). In the case of Digs, the struggles its members faced around identity, scale and capacity were often articulated emotionally, but they were ultimately analytical in content. The determination of key individuals to use such reflections productively was instrumental to the creation of the nascent renters' union that built on the achievements of local housing groups but also sought to move beyond them. Individuals such as Heather and Jacob acted as vital repositories of knowledge for this new political body, and in doing so contributed to the political evolution of London's renters' movement as a whole.
Conclusion: longevity and reflexivity in urban social movements
This article has ethnographically documented the development of Digs from its inception as a small private renters' group in 2012 to its eventual incorporation into the LRU in 2018. In the six years in which Digs was active as a distinct local entity, the group played a pivotal role in forcing private renting onto the political agenda. It also established a number of organisational strategies and everyday practices that have since been adopted by the LRU, most notably the centrality of mutual support as a means of reproducing affective ties among members. That Digs also struggled to create a collective identity for renters beyond a relatively small number of ideologically committed activists highlights the challenge of organising across multiple scales. The local is essential as a site for forming 'thick' social relationships, but it also has a limited scope for producing broader collective identities and effecting change in wider terms. Such struggles, however, do not mean that Digs was by any means a failure. Instead, I argue that the strong affective bonds established among the group enabled Digs to maintain itself in spite of the political challenges it faced. Indeed, the importance of these enduring social relationships was not merely that they meant something to the individuals themselves, but also that they helped to retain these key activists and their knowledge within the wider movement. In this sense, the value of investing in social relationships and mutual support is that it nourishes a repository of knowledge and experience that can be put to use as political terrains shift and organising strategies evolve.
It remains to be seen how successful the LRU will become as London's renters continue to struggle for decent, secure and genuinely affordable homes. But the fact that a union exists at all is testament to the determination of Digs's members, whose dogged commitment to housing justice in an intransigent political climate helped to create the conditions for a wider renters' movement. To view such developments in broader terms, the case of Digs demonstrates that by paying greater attention to the fine-grained micro-histories of political organisations, we may find ourselves better placed to understand the future directions that our movements need to take.
Therapeutic Indication of Suranjan Shirin (Colchicum luteum)
Colchicum luteum is known by the name colchicum in English, Suranjan in Sanskrit and Hirantutiya in Hindi. It belongs to the family Liliaceae. The common names of the plant are autumn crocus and meadow saffron. The corms of the plant are usually used to make natural medicines. It is known to be a phlegm and bile suppressant; these are the principal stabilizing energies that govern the body as well as the mind. It is connected with the structure, lubrication, fluid balance and stability of the entire human body. The test drug Suranjan Shirin is a classical and famous Unani drug used extensively for gouty arthritis in almost every Unani setup. Its actions are mentioned in various ancient Unani texts, and some scientific literature also claims that it has Mulattif (demulcent), Muhallil (anti-inflammatory), Muarriq (diaphoretic), Muharik (stimulant) and other activities, and that it reduces the viscosity of all humours. Hippocrates described that it maintains the viscosity of all humours. Several studies have been carried out to evaluate its therapeutic efficacy and safety. This review therefore compiles and summarizes the available literature in one place.
INTRODUCTION
Colchicum luteum is known for its pain-relieving properties, which also help in the healing of wounds. Colchicum helps in preventing indigestion and works as a laxative that relieves constipation. This medicinal herb is helpful for all kinds of liver- and spleen-related problems. It purifies the blood and acts as a diuretic. This plant is also advised by naturalists for patients who suffer from any condition related to urination. 1,2 The drug Colchicum laetum is a species of Colchicum found from south-east Russia through to the Caucasus. A plant known in cultivation as C. laetum 'Pink Star' is thought to be a selection of Colchicum byzantinum. It has flowers which are pale purple-pink with rounded ends; the petals of each bloom are often held parallel to the soil surface. Colchicum luteum is used as a carminative, a laxative, and an aphrodisiac. Colchicine is effective in the treatment of gout, rheumatism, and diseases of the liver and spleen. Externally, the corms are applied as a paste to lessen inflammation and pain. 3,4,5,6

MATERIALS AND METHOD
Review material was collected from different ancient Unani books, PG dissertations, authentic online research journals and various websites, and was summarized with the help of a computer.
Habitat:
Distributed in the western temperate Himalaya, extending from the Murree hills to Kashmir and Chamba (India), at 700 to 3,000 metres; it is also found in Afghanistan and Turkey. Annual; leaves few, lorate, linear-oblong or oblanceolate, obtuse, appearing with the flowers, short at flowering time, 15-30 cm at fruiting, tip rounded. Flowers 1-2, 2.5-3.8 cm in diameter when expanded; perianth golden yellow, tube 7.5-10 cm; segments oblong or oblanceolate, obtuse, many-nerved; stamens shorter than the perianth; filaments very much shorter than the yellow anthers; style filiform, much longer than the perianth; capsule 2.5-3.8 cm, valves with long recurved beaks. The drug (corm), Suranjan, is yellow or black in colour. The corms are somewhat conical, ovoid or elongated, and are translucent or opaque. The flat surface has a longitudinal groove, and the surface is marked by indefinite and irregular longitudinal striations. Fresh corms measure 15-20 mm in diameter. They are odourless and have a bitter and acrid taste. 5,6

Procedure and time of collection:
The corms are dug out and separated from the shrivelled remains of the flower stalk and the adhering soil. They are tied in pieces of cloth, dipped for a short while in boiling water, and dried. A better method sometimes adopted is to expose the corms for a short time to steam before drying. This prevents the
Study of Powder drug:
The powder is creamy white to light yellow in colour, with yellow to brown streaks.
Alterative:
Colchicum luteum causes a gradual change in the body, usually through improved nutritive absorption and the elimination of toxins. The herb also works as an aphrodisiac that increases sexual desire.
Carminative:
The Colchicum luteum plant reduces flatulence and helps expel excessive gas from the intestines.
Laxative:
The herb is known to stimulate bowel movements naturally and to relieve constipation.
Anodyne:
Colchicum luteum is well known for its pain-relieving properties and is a very beneficial analgesic agent.
Joint pains:
This plant has been used to relieve joint pain for centuries.
Skin related problems:
Applied to the skin, Colchicum luteum can relieve skin ailments.
Rheumatoid arthritis:
This plant is beneficial for people suffering from swelling due to rheumatism. It is advised to use a paste of this herb with saffron and egg for relieving rheumatic pain.
Gouty arthritis:
The colchicine present in the corms is very beneficial for relieving the pain and inflammation caused by gout.
Wounds:
The dried root of the plant is beneficial for the healing of wounds.
Substitute: Turbud, Aftimum
Important formulations: Raughan-e-suranjan, Majoon-e-suranjan, Habb-e-suranjan. 7,8
Doses: 250 to 500 mg.
Side effects: The excessive intake of Colchicum luteum may prove to be harmful. It has been known to have a narcotic action and to suppress brain activity. It can also cause intestinal pain, diarrhoea and vomiting. The leaves, corm and seeds can be poisonous if taken without consultation. The herb is very bitter to the taste and can darken on exposure to light. The corm effectively functions as a mitotic poison or spindle inhibitor, but the major disadvantages of colchicine are its toxicity and its effects on non-target (normal) cells. 15
Cross-Cultural Administration of an Odor Discrimination Test
Olfactory sensitivity can be evaluated by various tests, with the “Sniffin’ Sticks” test (SST) being one of the most popular. SST consists of tests for odor threshold, discrimination, and identification. It seems relatively straightforward to administer threshold tests in different groups and societies, and it has been shown that odor identification tests require special adaptation before they can be administered to various populations. However, few studies have investigated the application of an odor discrimination task in various regions/cultures. In the present study, we compared the discrimination scores of 169 Polish people with the scores of 99 Tsimane’, Bolivian Amerindians. The Tsimane’ participants scored very low in the discrimination task, despite their generally high olfactory sensitivity. This result suggests that when a discrimination task is chosen as the form of olfactory testing, some additional variables need to be controlled. We suggest three sources of our participants’ low scores: their cognitive profile; their cultural background, i.e., little knowledge of the odors used in the discrimination test; and problems associated with the testing environment.
Introduction
Smell is an important human sense - odors can influence our mood, cognition, and behavior (for a review, see Herz 2002).
The functions of our smell may be categorized into three main groups (Stevenson 2010): related to ingestive behavior (Murphy 1985;Porter et al. 2006), alarm functions-avoiding environmental hazards (Van den Bergh 1999), and functions related to social communication (Ackerl et al. 2002;Sorokowska et al. 2012). Generally, olfaction allows us to detect subtle changes in our environment, but sensitivity of this sense varies across individuals (Murphy et al. 2003).
Olfactory sensitivity can be evaluated by various tests (overview: Thomas-Danguin et al. 2003), with the "Sniffin' Sticks" test (SST; Burghart Messtechnik, Wedel, Germany; Hummel et al. 1997; Hummel et al. 2007; Kobal et al. 1996) being one of the most popular. Previous work has established its test-retest reliability and validity (Kobal et al. 2000), and the use of this test is recommended by the "Working Group Olfaction and Gustation" of the German Society for Otorhinolaryngology, Head and Neck Surgery. The normative data of the SST have been established on the basis of results obtained in thousands of healthy subjects in Europe and Australia (Hummel et al. 2007; Katotomichelakis et al. 2007). SST consists of tests for odor threshold, discrimination, and identification. The composite of the olfactory threshold (OT), odor discrimination (OD), and odor identification (OI) scores is calculated as the total threshold-discrimination-identification (TDI) score, which is ≤15 for anosmia, ≥30 for normosmia, and in between for hyposmia (Hummel et al. 2007).
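These cutoffs amount to a simple classification rule over the composite score. A minimal Python sketch of that rule (the function and variable names are ours for illustration, not part of the SST itself) could look like this:

```python
def classify_tdi(threshold: float, discrimination: int, identification: int) -> str:
    """Classify olfactory function from the three SST subtest scores.

    Cutoffs follow Hummel et al. (2007): TDI <= 15 -> anosmia,
    TDI >= 30 -> normosmia, anything in between -> hyposmia.
    """
    tdi = threshold + discrimination + identification  # composite TDI score
    if tdi <= 15:
        return "anosmia"
    if tdi >= 30:
        return "normosmia"
    return "hyposmia"

# Example with invented subtest scores for a hypothetical participant:
print(classify_tdi(6.25, 10, 11))  # TDI = 27.25 -> "hyposmia"
```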
The "Sniffin' Sticks" olfactory test has been developed in Europe and is used for assessment of olfactory function in many countries. While it seems relatively straightforward to administer threshold tests in different groups and societies (Hoshika 2006;Sorokowska et al. 2013), it is not as clear whether the two remaining parts of SST-identification and discrimination tests-are equally appropriate for crosscultural comparisons.
Application of the identification test appears to be difficult in cross-cultural studies. Performance in odor identification test relies on prior exposure to and familiarity with the presented odors (Goldman and Seamon 1992;Richardson and Zucco 1989), so prior to usage of identification tests in a specific culture odors need to be adapted to the subjects' cultural background (Konstantinidis et al. 2008;Shu et al. 2007). Additionally, odor identification is a semantic memory task and studies show positive relationships among general semantic knowledge, verbal fluency, and proficiency in this test (Hedner et al. 2010;Larsson et al. 2000;Larsson et al. 2004). These two factors could severely limit identification tests' validity in different cultures or populations with very differentiated levels of general semantic knowledge (for example, in societies where some people do not have access to formal education). Additionally, odor identification can only be used if the test is based on a multiple-choice procedure (Cain and Krause 1979) and the verbal descriptors of odors should also be analyzed (and adapted) before it can be administered. An odor identification task is an important tool for the clinical evaluation of olfactory sensitivity and is generally used in large majority of available olfactory tests (Thomas-Danguin et al. 2003), but the results of identification tests seem to be culture-dependent (Doty et al. 1985;Thomas-Danguin et al. 2001). Thus, it seems that this element of the SST cannot be used for direct comparisons of olfactory sensitivity between different cultures and societies.
The SST discrimination task involves a triple forced-choice procedure. However, unlike the threshold test, all presented pens contain odorants. Per triplet, two distracter pens contain identical smells, while the third pen (the clue) contains a different odor. The number of correctly identified clues represents the discrimination score. Odor discrimination is easier to administer than threshold measurement, and seems to be less language-dependent than identification. However, Thomas-Danguin and collaborators (2001) showed that, even if odor discrimination seems to be a nonverbal task (Hummel et al. 1997), it is to some extent dependent on culture, probably via familiarity effects.
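In other words, the score is simply the count of triplets on which the participant picked the clue pen. A short illustrative sketch (the response data below are invented, assuming the standard 16-triplet version of the subtest):

```python
# Each trial: index of the pen the participant chose and index of the
# odd-one-out ("clue") pen; the SST discrimination subtest has 16 triplets.
responses = [0, 2, 1, 1, 0, 2, 2, 0, 1, 2, 0, 1, 2, 1, 0, 2]  # participant's choices
clues     = [0, 1, 1, 2, 0, 2, 0, 0, 1, 2, 1, 1, 2, 0, 0, 2]  # correct answers

# The discrimination score is the number of correctly identified clues.
score = sum(r == c for r, c in zip(responses, clues))
print(f"Discrimination score: {score}/16")
```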
There is also another problem, which seems even more important because it might influence the applicability of the SST discrimination test even within a single culture in which levels of education differ dramatically. In the assessment of odor discrimination abilities, the participant is required to detect similarities and differences between odorants, or between different concentrations of a presented odorant (Engen 1986). Thus, discrimination requires that participants be highly concentrated and use their memory and other cognitive abilities. In Hedner et al.'s (2010) study, the cognitive block (executive function and semantic memory composites) accounted for a significant portion of the variance (11.5 %) in odor discrimination. Variations in cognitive function (and thus performance in the discrimination task) are inevitable in the general population. But populations also differ in terms of development and education (Human Development Report, 2010), and sometimes, even within a single population, the discrepancy between the education levels (and related training of higher cognitive functions) of different groups might be high. For example, many more women are illiterate in rural areas of India than in urban areas of this country (69.8 % vs. 39.4 %; Second Human Development Report of State of India 2007). Therefore, it is not clear whether the discrimination test can be used in all groups and populations with equal effectiveness, how well it is suited for cross-cultural comparisons, and which additional variables should be controlled to obtain the least biased olfactory test score of an individual.
In summary, it has been shown that odor identification tests require special adaptation before they can be administered in a various populations; however, few existing studies have analyzed the problems associated with application of an odor discrimination task in various regions/cultures. In the present study, we wanted to test the discrimination performance of a population with no knowledge of odors used in the discrimination test and without "training" in cognitive abilities (i.e., with low education). We compared the discrimination scores of a group of Polish people with the scores of the Tsimane', Bolivian Amerindians. We chose this population because the Tsimane' were found to have very good sense of smell (as tested by SST threshold subtest; Sorokowska et al. 2013), but they have limited access to scented cosmetics and modern chemicals. Also, most Tsimane' adults are illiterate (Godoy et al. 2010) and a large percentage of Tsimane' does not receive any formal education (Godoy et al. 2005;Kirby et al. 2002;Reyes-Garcia et al. 2007).
Participants
The first phase of the study tested 99 Tsimane': 53 females aged 18-51 years (M=29.51, SD=9.44) and 46 men aged 18-50 years (M=32.13 years, SD=10.90) from six villages along the Maniqui River. None of them reported otorhinolaryngological problems at the time of the study. They received a gift (household items worth ∼6 USD) for participating in a series of studies. The second part tested 169 Polish people from Wroclaw: 94 females aged 19-60 years (M = 30.47, SD=12.16) and 75 males aged 18-60 years (M=31.25, SD=12.52).
The study was conducted according to the principles expressed in the Declaration of Helsinki. The study protocol and consent procedure received ethical approval from the Institutional Review Board (IRB) of the University of Wroclaw (Wrocław, Poland) and from the Great Tsimane' Council (the governing body of the Tsimane'). All participants provided informed consent before study inclusion. Due to the low levels of literacy of the Tsimane', we obtained oral consent for participation and documented it using a portable recorder.
Procedure
Trained experimenters assessed the olfactory function of participants using the "Sniffin' Sticks" discrimination subtest (Burghart Messtechnik, Wedel, Germany). A translator explained the procedure to the participants in their native languages. We ensured that the Tsimane' participants understood the procedure before completing the task. That is, first, we explained to them that they would smell a set of three pens, out of which two contained the same smell and one was different. We gave an example from their everyday life - that one pen could smell of soap and two could smell of food, and that they should show us which pen was different. We asked them to smell a randomly chosen set of three pens, and when they confirmed that they had smelled all of them, we asked them to choose the differently smelling pen from the set. If they performed the task correctly (they pointed to the pen with the different smell or, alternatively, showed which two pens had the same odor), we continued with the standard procedure. If they did not perform the task correctly, we explained the procedure again and asked them to smell the same three pens one by one, this time showing them the correct answer and asking whether they could smell the difference. When they confirmed and said that they understood the task after this additional explanation, we continued with the standard procedure. However, some participants did not understand the task. They expressed this in many different ways - for example, some said that the test was "stupid" or "difficult," and some stopped answering the questions. We excluded these participants from further participation. Still, a few Tsimane' participants who took part in the test admitted that they had been guessing the answers.
Results
The results of the Tsimane' (Shapiro-Wilk W=.971, p=.03) and Polish participants (Shapiro-Wilk W=.927, p<.001) were not normally distributed. Therefore, we presented the results for the two groups as medians and interquartile ranges and compared them using the non-parametric Mann-Whitney U test. We undertook two-tailed tests throughout, using STATISTICA ver 10 (StatSoft, Inc.) with p<.05 as the level of significance.
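An equivalent analysis can be sketched outside STATISTICA, for example with SciPy in Python. The score vectors below are invented placeholders standing in for the study data:

```python
import numpy as np
from scipy import stats

# Invented discrimination scores (0-16) standing in for the two samples.
tsimane = np.array([7, 6, 8, 5, 9, 7, 6, 8, 7, 10])
polish  = np.array([13, 12, 14, 11, 15, 13, 12, 14, 13, 16])

# Shapiro-Wilk tests check normality within each group...
print(stats.shapiro(tsimane))
print(stats.shapiro(polish))

# ...and the two-tailed Mann-Whitney U test compares the groups
# without assuming normal distributions.
u, p = stats.mannwhitneyu(tsimane, polish, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```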
For the Tsimane', the percentages of correctly discriminated triplets ranged from 35 to 57 %, with an average of 47 %. For the Polish participants, the percentages ranged from 41 % to 91 %, with an average of 74 %. Median values in the Tsimane' and Polish groups were 7 and 13, respectively. The lowest result in the Tsimane' group was 1 and the lowest result in the Polish group was 5; the highest results were 14 and 16, respectively.
Results were also analyzed using an analysis of variance for repeated measures (rm-ANOVA; program package SPSS 21.0, SPSS Inc., Chicago, IL, USA) with "test" as within-subject factor (odor threshold, odor discrimination) and "group" as between-subject factor (Tsimane', Polish). Although there was no significant effect of the factor "test" (F[1,309]=0.58, p=0.45), overall, Polish subjects scored higher than Tsimane' subjects (factor "group": F[1,309]=39.7, p<0.001). However, as indicated by the interaction between the factors "test" and "group" (F[1,309]=179.9, p<0.001), for odor thresholds Tsimane' subjects scored higher than Polish subjects, while it was the other way around for odor discrimination.
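The same mixed design can be sketched outside SPSS, for instance with the pingouin library in Python. This assumes the data are in long format, one row per subject per subtest; all column names and values below are illustrative, not the study data:

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per subject per subtest (invented excerpt).
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["Tsimane"] * 6 + ["Polish"] * 6,
    "test":  ["threshold", "discrimination"] * 6,
    "score": [9.5, 7, 10.25, 6, 8.75, 8, 6.5, 13, 7.0, 14, 5.75, 12],
})

# Mixed rm-ANOVA: "test" varies within subjects, "group" between subjects.
aov = pg.mixed_anova(data=df, dv="score", within="test",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc"]])
```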
Discussion
The Sniffin' Sticks test battery has been used in a large number of studies, and is a part of everyday rhinological clinical practice in many countries (Hummel et al. 2007; Kobal et al. 1996; Konstantinidis et al. 2008). In our study, we analyzed the results of the SST discrimination subtest in two cultures. The threshold subtest of the SST seems to be suitable for cross-cultural and cross-regional comparisons, whereas odor identification tests typically need to be adapted for application in various cultures/regions (Thomas-Danguin et al. 2001). However, odor discrimination tests have not been analyzed from this perspective before. The Tsimane' participants who took part in our study scored very low in the discrimination task, despite their high olfactory sensitivity (Sorokowska et al. 2013). This result suggests that when a discrimination task is chosen as the form of olfactory testing, some additional variables need to be controlled. We suggest three sources of our participants' low scores: their cognitive profile; their cultural background, i.e., little knowledge of the odors used in the discrimination test; and problems associated with the testing environment.
Odor discrimination involves complex processing of olfactory information. Hedner and collaborators (2010) showed that odor discrimination/identification was significantly connected to cognitive proficiency-participants who performed well in executive functioning also discriminated and identified more odors correctly. Also, Larsson (1997) and Larsson and collaborators (2004) showed that proficiency in general knowledge and tasks like letter fluency or vocabulary was positively related to discrimination. These reports suggest that individual's cognitive profile exerts a significant influence on higher order olfactory performance (Dulay et al. 2008;Hedner et al. 2010;Larsson et al. 2004). Interestingly, in the same study, no such influence was observed for the olfactory threshold test (Hedner et al. 2010). It is then possible that the cognitive load necessary to complete the discrimination task might be too high for people who did not have access to formal education, even if their general olfactory sensitivity (assessed by threshold test, like in Sorokowska et al. 2013) is high.
The second source of difficulties could be low knowledge of the odorants used in the discrimination test. Olfactory sensation is a complex biocultural process, and it is not a passive, merely receptive enterprise (Shepard 2004). In his work exploring the nascent field of "sensory ecology," Shepard (2004) defined a new theoretical perspective in which sensations are rooted in human physiology, but also constructed through individual experiences and culture. Organoleptic properties can change over time and across and between different cultures, because inherent qualities of chemical substances appear to interact with individual experience, context, and, to a large extent, cultural conditioning (Doty 1986; Wysocki et al. 1991; Shepard 2004). Olfaction plays important roles in (among others) the dietary habits, religious beliefs, medicine, memory, and sexuality of various societies (Classen 1992; Gollin 2004; Jernigan 2008; Leonti & Sticher, 2002; Shepard 2004; Sorokowska 2013), but these roles and their importance might differ across cultures. For example, Pieroni and Torry (2007) showed that links between taste perceptions and the medicinal uses of herbal drugs may be quite different across diverse cultural groups (South Asians (Kashmiris and Gujaratis) and English people), and Majid and Burenhult (2014) showed that some cultures (like the Jahai of Peninsular Malaysia) find it easier to name odors than others (native English speakers). All the aforementioned findings are important for the interpretation of our results. Discrimination scores might depend on the familiarity of the odors (Thomas-Danguin et al. 2001). The Tsimane' knew the smells of some of the objects used in the test - like the smell of a banana or camphor - but they generally did not have (frequent) contact with the artificial equivalents - like isoamyl acetate and fenchone. It is possible that for societies that are weakly industrialized and/or have rare contact with chemically created smelling substances, the discrimination test might be too difficult. Given that sensation (while firmly rooted in physiology) is also shaped by individual experience, cultural preconditioning, and environmental variables (Shepard 2004), the solution for similar future research programs might be to work ethnographically on local odor categories - in terms of classification, real-world referents, and symbolic and practical meanings - in conjunction with any comparative scientific methods. Perhaps local people (unaccustomed to encountering odors in pure synthetic form) need other cues - the actual odorous plant leaf, or fruit, or flower, or object - to better discriminate odors. Such custom-made, culturally adapted tests would enable scientists to analyze the actual olfactory abilities of people from all over the world. So far, general olfactory assumptions seem to be based upon tests of people from "WEIRD" (Western, Educated, Industrialized, Rich, Democratic; Henrich et al. 2010) countries, which makes the universality of these findings questionable.
There are also other possible reasons for the lower performance of the Tsimane'. Testing among the Tsimane' involved some "environmental" problems - the houses where we conducted our study did not have solid walls, so the participants could hear a lot of noise from the villages, like animal sounds, the voices of other people, etc. Recently, Seo et al. (2011) showed that subjects' performance in an odor discrimination task was impaired in the presence of background noise. The authors explained that, as the odor discrimination task is highly dependent on cognitive ability and education level (Boesveldt et al. 2008; Hedner et al. 2010), the noise-induced deterioration in performance could be mediated by the interruption of cognitive processes required to perform the task (Seo et al. 2011). However, the average Tsimane' result was so much lower than the European one that this discrepancy was probably not the effect of background noise alone.
Conclusions
Olfactory tests have significantly increased the understanding of the sense of smell in humans (review by Doty 2001), and cross-cultural testing is an exciting area of olfactory studies. However, few existing studies have analyzed the cross-cultural applicability of tests other than identification. Our study shows that there are some problems associated with the use of the discrimination test, and that researchers should be aware of possible cultural and social differences which may cause different performances in this olfactory test.
The study was conducted according to the principles expressed in the Helsinki Declaration of 1975, as revised in 2008.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Chronic subdural hematoma: A survey of neurosurgeons’ practices in Nigeria
Background: Chronic subdural hematoma (CSDH) is a commonly encountered condition in neurosurgical practice. In Nigeria, a developing country, patients with CSDH are less likely to be diagnosed and treated by surgical drainage early. Aware of the reported variations in neurosurgeons’ practices regarding CSDH in many parts of the world, we sought to determine the current practices of Nigerian neurosurgeons in managing CSDH. Methods: An Internet-based survey was carried out in which all Nigerian neurosurgeons listed in the Nigerian Academy of Neurological Surgeons directory during the July–December 2012 time period were asked to participate. Questions asked in the survey were: (1) Type of treatment used in patients with CSDH, (2) Use of drains postoperatively, (3) Postoperative patient positioning, (4) Postoperative mobilization, (5) Postoperative complications, and (6) Postoperative computed tomography (CT) scan monitoring. Results: Survey information was sent to the 25 practicing neurosurgeons in Nigeria who met the inclusion criteria described above. Each of the 14 neurosurgeons who responded reported that CSDH is often misdiagnosed initially, usually as a stroke. Once a diagnosis of CSDH was made, the most common method of treatment reported was placement of one or two burr-holes for drainage of the hematoma. Reported, but used in only a few cases, were twist drill craniostomy, craniectomy, and craniotomy. Each neurosurgeon who responded reported irrigation of the subdural space with sterile saline, and in some cases an antibiotic had been added to the irrigation solution. Six of the 14 neurosurgeons left drains in the subdural space for 24-72 hours. Seven neurosurgeons reported positioning patients with their heads elevated 30° during the immediate postoperative period. No neurosurgeon responding reported use of steroids, and only one acknowledged routine use of anticonvulsive medication for patients with CSDH. Only 3 of the 14 neurosurgeons taking part in the study said they routinely order CT scans postoperatively. Conclusion: There are several differences in the ways Nigerian neurosurgeons manage CSDH. Future studies may help to streamline the approaches to managing CSDH.
INTRODUCTION
Although chronic subdural hematoma (CSDH) has been recognized by neurosurgeons for about 16 decades since it was first described by Virchow, [25] its management is yet to be harmonized as has been done for many other neurosurgical conditions. [19,27] There are several controversies regarding its etiologies, course, optimal care, and outcome. [19,27] Issues regarding the optimal treatment options (twist drill craniostomy, burr-hole craniostomy, and craniotomy), the use of drains, the postoperative positioning of patients, and the timing of postoperative mobilization are not yet resolved. [1,8,10,12,13,19,27] Regional, institutional, and personal differences exist and persist. [4,19,21] In the author's opinion, these variations may be a reflection of the personal experiences, place of training, and mentoring of the individual attending neurosurgeon.
At the November 2010 meeting of the Nigerian Academy of Neurological Surgeons (NANS), discussions of CSDH suggested wide variations in the management of the condition among the Nigerian neurosurgeons in attendance, in line with reported variations in neurosurgeons' practices in other regions of the world. [4,21] This study therefore sought to determine the current practices of Nigerian neurosurgeons in the management of CSDH.
Survey development
An internet-based survey of Nigerian neurosurgeons was conducted between July and December 2012 using a Google document survey questionnaire, which may be accessed at https://docs.google.com/spreadsheet/viewform?fromEmail=true&formkey=dDF3T0ZDMUdMS081UTlLRUpwd3FkaWc6MQ. Background information on the participants regarding the number of years in practice and the practice setting was requested. Questions on clinical practices were set out in simple "yes or no" or multiple choice patterns as necessary. Respondents were surveyed on their case load of CSDH on a per-surgeon, per-year basis and on the clinical course and presentation of their CSDH patients. Their preferred methods of CSDH treatment (twist drill craniostomy vs. burr-hole craniostomy vs. craniectomy vs. flap craniotomy) were also assessed. Those who preferred burr-hole drainage were requested to indicate whether they make one or two burr-holes. Their adjuvant management strategies with respect to the irrigation of the subdural cavity and the use of postoperative subdural drains, steroids, and anticonvulsants were then assessed. Those who use drains were further asked to indicate the duration of drain use.
Questions on the postoperative care of CSDH patients were designed to address the following: (1) Positioning of patients in the immediate postoperative period (height of bed: flat vs. 30° head-up vs. Trendelenburg), (2) Timing of postoperative mobilization of the patients, and (3) Whether or not the surgeons obtained routine postdrainage computed tomography (CT). The next set of questions assessed the hematoma recurrence rate and the occurrence of other complications as experienced by the surgeons.
Survey administration
Participants were identified through the NANS directory used for the November 2010 and February 2012 meetings of the association, which is a complete listing of all neurosurgeons in Nigeria. Neurosurgeons who are retired or who are less than 1 year postcertification were excluded. E-mails soliciting participation in the study were sent to all 25 eligible neurosurgeons and contained a link to the online survey. Internet-based health care surveys have been validated by previous studies, [3,6] which informed the decision to use this medium for the present study.
An introductory cover letter in the e-mails, as well as in the online questionnaire, noted the apparent differences in the care of CSDH in Nigeria and the need to objectively document current practices. It also indicated the estimated time burden of 10 minutes for completing the questionnaire, assured participants that participation was entirely voluntary, and guaranteed confidentiality in the data collection and dissemination of results.
An initial study was conducted from February to July 2011. Only eight responses (representing about one-third of the survey population) were received. A preliminary presentation of the findings was made at the 2012 meeting of NANS in Enugu, Nigeria and members were called upon to participate in the survey to validate the findings. Consequently, a second survey (being reported here) was carried out from July to December 2012. The new survey included questions on the case load of CSDH, symptomatology and diagnosis of CSDH, use of steroids and anticonvulsants, as well as the diagnosis of recurrence. Reminders were sent on two occasions during the study period and telephone contacts also made with the neurosurgeons urging for participation in the study. Some respondents (when contacted by phone) had stated that poor access to the internet and their busy schedules delayed their participation.
Data analysis
The responses were recorded anonymously on the Google-based Microsoft Excel database. Simple descriptive statistics of proportions were computed using SPSS Version 15 (SPSS Inc., Chicago, IL). Differences in response rates were evaluated using chi-square statistics (Epi Info version 6). P < 0.05 was considered statistically significant.
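To illustrate the kind of proportion comparison described, here is a minimal Python sketch; this is our own illustration (the authors used SPSS and Epi Info), and the counts below are hypothetical:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: responders vs. nonresponders split by
# practice setting (counts invented for illustration only).
table = [[9, 3],   # government-owned hospitals
         [5, 8]]   # other settings
chi2, p, dof, _ = chi2_contingency(table)  # Yates-corrected for a 2x2 table
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
```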
The respondents and patient population
The response rate was 56% (14 of 25). Most of the respondents were within 10 years of certification (9/14) and worked in government-owned hospitals (12/14) [Table 1]. The average case load of CSDH per surgeon per year was 18 (range: 10-30). Most cases of CSDH presented late (>72 hours from symptom onset), largely because of delays in making the initial diagnosis. However, the patients often presented with a favorable Glasgow Coma Scale (GCS) score of 13-15. All respondents reported that CSDH is often initially misdiagnosed as stroke.
Adjuvant surgical strategy
All respondents routinely irrigate the subdural space until clean returns are obtained. Saline impregnated with antibiotics is used by 11, while 3 use saline only. Six of the respondents place subdural drains using nasogastric tubes, Foley catheters, or scalp vein needles [Table 2]. The drain is made to exit the scalp via a separate stab incision by three respondents, while the other three pass the drain through the same incision used for drainage of the hematoma. They remove the drain when the effluent is minimal and/or CSF-like.
Postoperative patient care
Half of the respondents (7/14) nurse their patients 30° head-up in the immediate postoperative period. The reported timing of postoperative mobilization of patients varied from within 24 hours to postoperative day 8-10. Most of the respondents do not obtain routine postoperative CT scans, due to financial constraints (5/14) or because they do not think it is generally useful (6/14). Only three surgeons perform routine postoperative CT, and they reported that it influenced the postoperative care of their patients [Table 3]. One of them reported the diagnosis of pneumocephalus as well as fresh bleeding into the subdural space, while one surgeon stated that it was mostly for reassurance, though it led to reoperation in some cases. The third surgeon reported that two patients required reinsertion of the subdural drain when significant residual blood was seen on the postoperative CT.
None of the respondents routinely use steroids in managing CSDH while only one routinely uses anticonvulsants.
Complications
The surgeons assessed the success of the hematoma evacuation using clinical improvement/decline, with CT combined as necessary. The reported approximate hematoma recurrence rates were 0% (5/14), 1-5% (8/14), and 6-10% (1/14). Recurrence was reported by only 3 of those who do not use drains (8) as opposed to 6/6 of those who use drains (P = 0.0309). In addition, recurrence was reported more often by those who nurse patients in the Trendelenburg position (4/14) as opposed to those who nurse them 30° head-up and flat (4/7 and 1/3, respectively) in the immediate postoperative period. These differences were not statistically significant (P = 0.1628). Recurrence was also reported more often by those who mobilize their patients within 24 hours (4/4) than by 48 hours (3/7) and after 48 hours (1/3) (P = 0.1453). Other reported complications are presented in [Table 3].
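The reported P = 0.0309 for the drain comparison can be reproduced by a two-sided Fisher exact test on the corresponding 2x2 table. The sketch below is ours; the paper reports chi-square statistics run in Epi Info, so attributing the exact test is our inference from the matching P value:

```python
from scipy.stats import fisher_exact

# Recurrence (yes/no) by drain use, from the counts reported above:
# drain users 6/6 with recurrence, non-users 3/8 with recurrence.
table = [[6, 0],
         [3, 5]]
_, p = fisher_exact(table, alternative="two-sided")
print(f"P = {p:.4f}")  # ~0.0310, matching the reported P = 0.0309
```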
DISCUSSION
The management strategies available for CSDH may be as varied as the number of attending neurosurgeons in any particular institution. These variations probably underscore the fact that much is yet unknown about this common condition. Variations in practices among neurosurgeons in different countries regarding CSDH management have been documented. [4,21] These studies showed differences in the surgical option of choice, use of steroids, drains, positioning, and mobilization.
The landmark review by Markwalder [15] provided the initial overview of the management of CSDH. Prior to that period, craniotomy with membranectomy used to be considered necessary in all cases. [7,17] Later, membranectomy or capsulectomy was deemed to be of less importance than the drainage of the hematoma itself. [18] Moreover, simple burr-hole drainage was found to be more effective than membranectomy. [22] Markwalder had concluded that "In treating chronic SDH, the twist-drill craniostomy and closed-system drainage of the subdural collection seem to be today's most rational approach to this lesion in children beyond the infant period and in adults. Craniotomy, membranectomy, and craniectomy should be reserved for those instances in which the subdural collection reaccumulates, the brain fails to expand, or there is solid hematoma." [15] Despite these conclusions, and given the absence of randomized trials to compare the various methods of draining the hematoma, these various methods have been reported thereafter with varying success rates. [5,23,24,26] In an evidence-based review of contemporary surgery for CSDH, Weigel and colleagues did not find any study that provided class I evidence on the efficacy of the various management practices. [27] However, the authors "identified twist drill craniostomy and burr-hole craniostomy as the safest methods" and noted that "burr-hole craniostomy has the best cure to complication ratio and is superior to twist drill craniostomy in the treatment of recurrences" and that "craniotomy and burr-hole craniostomy have the lowest recurrence rates". [27] Regarding the postoperative positioning of patients, the various techniques adopted also reveal how much more needs to be known about CSDH. Trendelenburg positioning has been practiced in the hope of increasing CSF pressure and aiding brain reexpansion. [22] Head-up positioning in the immediate postoperative period has been reported with conflicting results. While Abouzari and his colleagues [1] reported that assuming the head-up position significantly increased the recurrence of CSDH, Ishfaq et al. [12] reported that it does not.
To drain or not to drain used to be, and probably still is, an important discourse in CSDH management. A subdural drain combined with twist drill craniostomy was considered useful as it allows slow, steady, and more complete evacuation of the hematoma and gradual reexpansion of the brain. [23] Subperiosteal drains have also been employed and are thought to reduce the rate of seizure occurrence as well as intracranial infection. [28] Still, many neurosurgeons fear to use drains because of the potential risks of infection associated with them. [4] To further highlight these variations, Henning and Kloster [9] found that continuous irrigation of the subdural space with inflow and outflow after burr-hole decompression of CSDH has a low recurrence rate (2.6%) compared with burr-hole craniostomy with intraoperative irrigation and postoperative closed system drainage, burr-hole craniostomy with intraoperative irrigation only, and craniotomy (29.4%, 39.5%, and 44.4%, respectively). [9] Although the practices of the nonrespondents may be substantial in the overall overview of CSDH management in Nigeria, the findings of this study indicate that: • Nigerian neurosurgeons use the burr-hole as their preferred method of surgical treatment of CSDH • Subdural space irrigation is generally practiced in Nigeria • There is no consensus regarding the postoperative positioning of patients among Nigerian neurosurgeons • Routine postoperative CT scanning is not a common practice in Nigeria due to financial constraints • Nigerian neurosurgeons do not routinely use steroids in managing CSDH • A large majority of Nigerian neurosurgeons do not use prophylactic anticonvulsants in the management of patients with CSDH.
An equal number of respondents (7 each) use single and double burr-holes in managing CSDH. This lack of uniformity is supported by recent literature from Nigeria. [11] Although there is no Class I evidence supporting its superiority over other principal treatment modalities, the review by Weigel et al. indicated that burr-hole craniostomy has the best cure to complication ratio. [27] The authors evaluated the various methods of hematoma evacuation with regard to the clinical variables of cure rate, recurrence, morbidity, and mortality as published in the English and German literature and concluded that burr-hole craniostomy "shares the advantages of twist drill craniostomy, with its high cure rate and low risk of morbidity and mortality, and of primary craniotomy, with its low risk of recurrence." [27] Six of the respondents place subdural drains, and all of them reported recurrence rates of 1-5%. Five of the eight who do not employ drains reported no recurrence, while the remaining three reported rates of 1-5% (two) and 6-10% (one). These differences were statistically significant (P = 0.0309), in contrast to findings from the United Kingdom and the Republic of Ireland. [21] While there are ongoing debates about whether or not to drain, Santarius et al. recently advocated the preference for a drain after burr-hole drainage of CSDH. [20] Traditionally, CSDH patients are nursed flat and mobilized late in an effort to reduce hematoma recurrence. [14,16] This may explain why some of the respondents mobilize their patients as late as the 7th to 10th days postdrainage. We have recently shown that there is no significant complication referable to the specific type of mobilization (early [day 2] or late [day 7]). [2] It is instructive to note that postoperative CT scanning is not routine in Nigeria, in contrast with practices in some developed countries. [8] Two of the three respondents who perform routine postoperative CT scanning work in private settings. This relative nonuse of postoperative CT may be related to the fact that CT machines are not available in some Nigerian neurosurgical centers, and where they are available, they often malfunction and may not be repaired for several months. Moreover, the cost of a CT study (on average N35,000.00, or about $230) is beyond what the average Nigerian can afford. As such, CT scanning is only done when it is considered absolutely necessary.
One significant limitation of this study is the potential effect of nonresponders on the findings. It is possible that the practices of many of the nonresponders differ from those of the respondents. However, given the close interaction between, and the small number of, Nigerian neurosurgeons as well as the fact that most are trained in or affiliated to the three major local training centers (Ibadan, Lagos, and Sokoto), it is most probable that these findings are representative of the general neurosurgical practice in Nigeria.
CONCLUSION
This study has shown that there are several differences in the ways Nigerian neurosurgeons manage CSDH. The relatively high cost of CT scanning in Nigeria, its lack of general availability in Nigerian hospitals, as well as the high frequency of malfunctioning of the available CT scanners may contribute to the rarity of postoperative CT monitoring reported in this study. Future studies may help to streamline the approaches to managing CSDH.
Probabilistic growth of large entangled states with low error accumulation
The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations which are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems including those based on measurement-induced entanglement. This result therefore shows the feasibility of real devices based on such an approach.
The problem of scalability continues to be a key challenge in the field of quantum information processing (QIP). Whereas many physical systems have successfully embodied a few qubits, a clear route toward large scale universal computers has yet to be demonstrated. One promising solution is distributed QIP, where small systems (such as trapped atoms or solid state nanostructures [1]) are networked together to constitute the entire machine. While this may resolve the issue of scaling, it introduces the problem of how to entangle the physically remote subsystems. Solutions were found [2,3] involving the use of optical measurements that simultaneously monitor two, or more [4], such systems. There are now experimental demonstrations of such approaches in both ensemble systems [5] and with individual atoms [6].
Typically a remote entangling operation (EO) has two key characteristics: First, it may fail outright, but this failure will be heralded, meaning that the failure will be registered by the apparatus. Failure is destructive, leaving the qubits that were acted upon in an uncertain state, so that they will need to be reset. We should assume that such failures are common, i.e. that the success probability p_S may be very low in real systems. For example, in the work of Monroe et al. impressive proof-of-principle experiments have achieved entanglement by measurement of two remote atoms, but the success rate is below one in a million [6,7]. The second characteristic of a realistic EO is some finite probability of unheralded errors, including all imperfections in the operation that are unrecorded by the apparatus.
The challenge is then to create a large scale entangled state across the network using a basic EO of this kind. To be specific one might aim to create a so-called graph state (Fig. 1). These states are conveniently represented diagrammatically, with nodes corresponding to qubits and lines (or 'edges') representing phase entanglement between them. Graph states with certain topologies can enable quantum computing because they incorporate all the entanglement required to perform an algorithm; Fig. 1(a) shows one suitable example, a square lattice called a cluster state. A cluster state is therefore an example of the kind of large entangled state one might wish to create; we refer to such targets generically as 'the primary graph state'.
Creating a network-wide graph state through the use of failure-prone EOs is actually quite straightforward if each local subsystem of the network contains two or more qubits; then we can use one complete subset to store the growing graph state, while the other set is involved in 'brokering' new entanglement [8,9]. Unfortunately, many physical systems may be limited to embodying only a single qubit. Given only one qubit at each network site, it is inevitable that the nascent graph state will be damaged repeatedly during its creation: every time we wish to entangle two specific qubits, there is a significant risk that the EO will fail and therefore the two qubits in question will need to be reset, losing any prior entanglement they had acquired with third party qubits. At first glance it may seem that it will not be possible to grow a large state efficiently unless the probability of success is at least 0.5 (say). However, previous publications have shown that one can indeed achieve positive growth on average for any finite p_S [10,11,12,13,14]. Generally the solution involves small resource states (see Fig. 1(b)) which are created by 'brute force', i.e. suffering the cost of repeated failures. Then these small graph states are added to the primary graph state. When a small state is successfully connected to the primary graph, the several qubits that are thus added more than make up for those which are lost through failures.
This kind of strategy has two drawbacks, both of which we address in the present manuscript. Firstly, the time cost of creating the smaller 'resource' object will be very high when p_S is small. This cost cannot be ignored, since (for a finite number of physical qubits) it dictates the maximum rate at which the primary graph can be grown; if this rate is too low, decoherence will destroy the primary graph before it can be completed. While one cannot avoid this time cost, we believe that the approach that we report here is the most efficient to have been described to date. The second drawback of the use of resource objects, often overlooked in earlier works, concerns the accumulation of errors. Such errors are broadly of two kinds: the complexity of the resource states is in itself a source of errors, since imperfections in the 'successful' entanglement operations will accumulate in the resource objects and ultimately lead to degradation of the primary graph state. Furthermore, the large time cost associated with the resource preparation may imply that passive decoherence (as opposed to errors from active operations) will significantly degrade the resource during its formation, and again the primary graph would inherit this noise. In the protocol we describe here, both forms of error accumulate only as a logarithmic function of 1/p_S. We believe that our protocol is the first to have a logarithmic error accumulation, whereas in previous approaches errors accumulate linearly. In this sense the present scheme is considerably more practical.

FIG. 1: Graph states: nodes correspond to qubits, and connections ('edges') correspond to phase entanglement. (a) The cluster state is a resource permitting general quantum information processing. (b) Several kinds of graph state have been considered as 'building blocks', including star [12], linear [11] and cross [13] geometries. However, the 'snowflake' that we introduce (rightmost) offers superior error suppression. (c) When we attempt to entangle two qubits (blue) from two independent snowflakes, we either succeed in forming a single new snowflake (green arrow), or we fail and completely reset all qubits (red arrow). (d) An example of the snowflakes that will exist within our device in two consecutive time steps. At each step we pair up snowflakes of equal size and attempt to fuse all such pairs in parallel (specific qubits to be involved in entanglement operations are marked blue).
As with previous proposals, our protocol is based on the creation of relatively small resource graph states, which are fused together to create the primary graph state. Earlier schemes have used linear, star or cross-shaped topologies for the resource objects (Fig. 1). All these topologies have the property of redundancy: the structure contains a number of qubits of order 1/p_S, so that the effort to join the resource into the primary graph state can suffer multiple failures prior to success. Regrettably the primary graph will ultimately accumulate errors corresponding to this redundant structure, either during the process of attaching the resource graph state or in subsequent 'pruning' of the remaining redundant structure after success. Here we employ a different topology; it is simply a balanced binary tree, but we refer to it as a snowflake since we find it helpful to envisage it as roughly circular (see Fig. 1). This structure is chosen because (a) it is efficient to grow, and moreover (b) only a logarithmic fraction of the imperfections arising during growth eventually afflict the primary graph state.
It is efficient to grow the snowflake structure in phases. In Phase I, we begin from product state qubits and aim to create a snowflake incorporating 1/p_S qubits in total. The strategy which proves the most efficient is to fuse snowflakes in pairs, each pair being matched in size. Note that this is in the spirit of the MODESTY/GREED approaches identified by Eisert et al. for growth of linear graph states [15]; however, here we assume that the physical technology is capable of performing multiple operations in parallel (as would be the case for optical measurement-based entanglement). Snowflakes are fused at their core (see Fig. 1(c)). In the figure we depict the kind of fusion that results from a parity projection, i.e. a projector into the two-qubit subspace of given parity, since this is the type of EO that has been most commonly proposed [16,17,18,19,20,21]. If the fusion fails, then we choose to reset the complete structure back to product state qubits. This guarantees that the eventual size-1/p_S object will be a perfect binary tree, since it is the result of an unbroken chain of successes. It may seem wasteful to discard the relatively complex graph states that remain after a single failure. But we have found that the use of recycling, whereby one attempts to reuse such fragments, is not helpful in this phase: we would obtain the target 1/p_S object only very slightly more rapidly, at the cost that a random number of errors would now accumulate in the structure (see Fig. 2).
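To make the Phase I cost concrete, the following minimal Python sketch (our own; it counts entangling operations serially and ignores the parallelism and buffering discussed below) compares the analytic expectation E_k = (2 E_{k-1} + 1)/p_S against a Monte Carlo simulation of the fuse-or-reset rule:

```python
import random

def expected_eos(k, p):
    """Expected entangling operations (EOs) to grow one perfect binary-tree
    snowflake of 2**k qubits, when a failed fusion resets both partner
    structures to product states: E_k = (2*E_{k-1} + 1) / p, with E_0 = 0."""
    e = 0.0
    for _ in range(k):
        e = (2.0 * e + 1.0) / p
    return e

def simulate_eos(k, p):
    """Monte Carlo count of EOs actually used to grow one size-2**k snowflake."""
    if k == 0:
        return 0
    cost = 0
    while True:
        # Build (or rebuild) the two sub-snowflakes, then attempt one fusion.
        cost += simulate_eos(k - 1, p) + simulate_eos(k - 1, p) + 1
        if random.random() < p:
            return cost

p_S, k = 0.25, 2   # target snowflake of 1/p_S = 4 qubits
trials = 20000
mc = sum(simulate_eos(k, p_S) for _ in range(trials)) / trials
print(f"analytic: {expected_eos(k, p_S):.1f} EOs, simulated: {mc:.1f} EOs")
```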
It is interesting to note that the process depicted in Fig. 1 makes aggressive use of parallelism: for p_S ≪ 1, in a typical time step the majority of qubits must be in either the separable state or the two-qubit snowflake (the leftmost blocks in the figure). Thus most of these qubits will be designated for an entanglement attempt in the next round of (simultaneous) entanglement operations.
We find that it is important to employ a buffer, i.e. the total number of qubits available in the device should be larger than the target size 1/p_S. Otherwise, the growth process will repeatedly 'get stuck' while one waits for the emergence of a snowflake to match the size of the present largest. To mitigate this effect, it suffices to have a buffer equal to the size of the desired snowflake, i.e. the total number of qubits should be ≥ 2/p_S. However, there are advantages to using far larger buffers, as noted presently.
Having obtained snowflakes of size 1/p_S, we then proceed to Phase II, where these snowflakes are combined into larger but more loosely defined objects that we call snowballs. A snowball is created by attempting, in parallel, to fuse pairs of perimeter nodes of two snowflakes (or smaller snowballs); see Fig. 3 (upper). If at least one such fusion succeeds, then the two objects are successfully connected; the probability of this outcome is thus not limited to p_S. Instead it is a number to be obtained by numerical optimization. As shown in Figure 3, we find that a snowball comprising 4.07/p_S qubits can be obtained from 16 snowflakes of size 1/p_S with a probability of at least 2.31% that is independent of p_S. A snowball of this size is a resource that enables the final phase.

FIG. 2 (caption, partial): ...for a snowflake of ⌈1/p⌉ qubits, the age is simply ⌈log_2(1/p)⌉, where ⌈x⌉ denotes the minimum integer not less than x. Lower graph: the device size desired in order to keep errors within order log(1/p_S). The y-axis can be read as a cost factor to enable QIP with a probabilistic technology, as compared to a completely deterministic machine.

FIG. 3 (caption): A portion of the external nodes of each component object is allocated to the role of fusing to a specific partner object. On success, the new entity contains significantly more qubits. Panels 1 to 4 depict successive steps in growing a snowball that is large enough for use in a subsequent cluster state synthesis (see text). In step 1, two snowflakes of size 1/p are fused to obtain an object with 1.55/p qubits; in subsequent steps the resultant contains 2.27/p, 3.15/p and 4.07/p qubits, respectively.
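Returning to the fusion rule just described: the central quantity is the probability that at least one of m parallel perimeter fusions succeeds. The sketch below is our own illustration; the count of roughly 1.02/p_S attempted pairs per snowball face is our reading of the node allocation described in Phase III below, not a figure from the paper:

```python
def p_connect(p_s: float, m: int) -> float:
    """Probability that at least one of m parallel perimeter fusion
    attempts succeeds, each independently with probability p_s."""
    return 1.0 - (1.0 - p_s) ** m

# With m ~ 1.02/p_S attempted pairs, p_connect tends to
# 1 - exp(-1.02) ~ 0.639 as p_S shrinks -- the inter-snowball
# connection figure quoted in Phase III below.
for p_s in (1 / 8, 1 / 32, 1 / 128):
    m = round(1.02 / p_s)
    print(f"p_S = {p_s:.4f}: m = {m:3d}, p_connect = {p_connect(p_s, m):.3f}")
```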
Finally, in Phase III the large snowballs are fused to form the ultimate graph state. There are several tactics that one could employ here, depending on the desired target state. To take a concrete example we assume that the target is a canonical two-dimensional square cluster state (as depicted in Fig. 1(a)). We employ a very basic strategy for generating such a cluster: fuse snowballs in a square lattice, and then remove extraneous nodes. Specifically, we take snowballs of size 4.07/p_S as described in the previous phase, and commit a quarter of all nodes to the task of fusing to each of the four neighboring snowballs. Then we find that the probability of achieving at least one successful fusion between two specific adjacent snowballs is 0.639. Since this is significantly above the percolation threshold of 1/2, it follows from the treatment in Ref. [22] that the resulting imperfect cluster state will embed a perfect cluster state of somewhat smaller size (the scale factor, being independent of p_S, does not affect our claim of logarithmic error scaling).
To track error accumulation, we first note that graph states can be defined as states stabilized by the Pauli operators X_i ∏_{j∈Nbgh(i)} Z_j, where Nbgh(i) denotes the neighbourhood of vertex i in the graph labeling that specific graph state. A direct result of this definition is that measurements of Pauli operators made on graph states must result in other stabilizer states, which are equivalent up to local operations to smaller graph states. The relevant transformation rules were discovered independently by Hein et al. [23] and by Schlingemann [24]. Of particular consequence to graph state growth schemes are the effects of Y- and Z-basis measurements. Y-basis measurements complement the edges between neighbours of the measured vertex, which is removed along with any edges connected to it, while Z-basis measurements alter the graph simply by removing the measured vertex and any edges attached to it. Z measurements can be used to remove unwanted qubits, leaving only connected paths, while Y measurements can then be used to contract the path between two nodes, removing the intermediate qubits to leave a direct edge between the two nodes.
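These two rules are straightforward to state operationally; here is a minimal sketch using networkx (our own illustration, ignoring the local Clifford corrections that act on the surviving qubits):

```python
import networkx as nx

def measure_z(g: nx.Graph, v) -> nx.Graph:
    """Z-basis measurement: delete the vertex and its incident edges."""
    h = g.copy()
    h.remove_node(v)
    return h

def measure_y(g: nx.Graph, v) -> nx.Graph:
    """Y-basis measurement: complement the edges among the neighbours of v
    (local complementation), then delete v."""
    h = g.copy()
    nbrs = list(h.neighbors(v))
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            if h.has_edge(a, b):
                h.remove_edge(a, b)
            else:
                h.add_edge(a, b)
    h.remove_node(v)
    return h

# Contracting a path: Y-measuring the middle qubit of the path 0-1-2
# leaves a direct edge between 0 and 2.
path = nx.path_graph(3)
print(sorted(measure_y(path, 1).edges()))   # [(0, 2)]
```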
The large resource overhead required to deal with nondeterministic entangling operations can cause error accumulation to balloon, as local errors on each of the qubits used in the growth phase can be propagated to qubits in the final graph state when these ancillae are measured out. Previous schemes have relied on long paths connecting nodes in the final graph state, which scale linearly in 1/p_S, and so the probability of avoiding error once the entire path is measured out is exponentially small in 1/p_S. In the snowflake scheme, however, the maximum path length between nodes in the final graph state is only logarithmic in 1/p_S. As snowballs form tree-like graphs, in order to disentangle all unwanted branches it suffices to perform a Z measurement at the cutting points, whose number scales linearly with the path length, leading to only a polynomial decrease in success probability. Even in the worst case, the maximum path length between nodes is 10 log_2(1/p_S). Concerning error accumulation from decoherence during snowflake growth (see Fig. 1), one might have the following concern: since our strategy is to store a small snowflake until an equally sized partner emerges, it is possible that parts of the eventual large snowflake will be very 'old' and may therefore have suffered significant decoherence while 'waiting'. This would indeed be the case if a small fixed set of physical qubits were committed to the production of each snowflake. However, in reality, rather than 'walling off' parts of the device to produce individual snowflakes, all physical resources associated with snowflake growth would be shared. Thus in a full scale quantum computer there would be numerous snowflakes of each size in existence simultaneously; then it will never be the case that a specific snowflake waits for a partner, and the age of the oldest entanglement relationships within a given snowflake is only a logarithmic function of the snowflake's size, i.e. only log(1/p_S). The overall error accumulation in the ultimate graph state (e.g., cluster state) is then merely a logarithmic function of 1/p_S, so that there is no fundamental difficulty with errors to make this approach impractical. It only remains to assess the resource scaling in 1/p_S in order to gauge what values of p_S might be tolerable in a realistic system. In the lowest graph of Fig. 2 we show the size of the device needed to produce one complete snowflake of size 1/p_S per time step. This is the threshold where the error accumulation due to the 'age' of snowflakes becomes merely logarithmic in 1/p_S, and therefore one can interpret this as the scaling cost needed in order to make a probabilistic machine function similarly to a deterministic device. As can be readily seen, the factor necessary to support p_S < 1/8 is already high, in excess of 1000. While such numbers might be achievable with a sufficiently dense technology, below the p_S = 1/32 level the cost rapidly becomes unfeasible, exceeding 10^12 for p_S = 1/64.
In conclusion, we have introduced an error-minimising protocol for creating large entangled states using single-qubit nodes together with entangling operations (EOs) that succeed only with probability p_S ≪ 1. This protocol makes efficient use of parallelism and bounds the error accumulation within a logarithmic function of 1/p_S. We show how large a machine using failure-prone EOs must be in order to compete with a machine based on deterministic EOs.
AgNPs-Azolla Pinnata Extract As Larvicidal Against Aedes Aegypti (Diptera: Culicidae)
The widely used synthetic insecticides in mosquito control operations can have unfavourable impacts on the environment, human health and non-target organisms. Considering these issues, environmentally friendly insecticides from plant extracts have been used as green alternatives by recent researchers. Unfortunately, the method of using plant extract as an insecticide requires a large amount of raw plant material. In relation to this problem, the use of nanoparticles, which possess unique characteristics including small size and the potential to change the physical, chemical and biological properties of organisms, was studied. Nano-synthesized silver particles (AgNPs) from Azolla pinnata extract were thus investigated in this study in order to determine their efficacy as an Aedes aegypti larvicide. The present work focuses on extraction of the compounds in Azolla pinnata using the Soxhlet extraction method. The plant extract was mixed with 1 mM silver nitrate solution, and the biosynthesized silver nanoparticles were then characterized using a UV-Vis spectrophotometer. AgNPs from Azolla pinnata extract were prepared in six different concentrations and set in plastic cups. Late third instar larvae of Aedes aegypti were used in all tests. Based on the findings of the experiment, no larval mortality was recorded in the control groups after 24 hours of exposure. The lowest mortality recorded was at 10 ppm with only 7.5% mortality, while 95% mortality was recorded for the highest concentration, which was 250 ppm. Meanwhile, the LC50 and LC95 obtained at the 95% confidence interval after 24 hours of exposure were 121.570 ppm and 369.438 ppm respectively. Further studies should be done to determine the mechanisms by which AgNPs aid Azolla pinnata as an effective larvicide.
Introduction
Infections caused by the transmission of pathogens by certain types of arthropods are called vector-borne diseases. Ticks, lice, sand flies, black flies, mosquitoes and other related insects are a few examples of arthropods that can spread vector-borne diseases. Dengue is one of the mosquito-borne diseases, caused mainly by dengue viruses [1,2]. It is transmitted by two main species of Aedes mosquito, Aedes aegypti and Aedes albopictus, through blood feeding from one individual to another [1,2]. There were 46,713 reported dengue cases in Malaysia accumulated from 29 December 2019 to 30 May 2020 (World Health Organization, 2020). According to the World Health Organization (2020), a cumulative total of 50,511 dengue cases in Malaysia was reported as of 13 June 2020. The resurgence or re-emergence of mosquito-borne arboviruses is an important public health concern because it can lead to disease outbreaks that can occur globally. It has also been reported that the clinical characteristics of dengue are similar to those of coronavirus disease 2019 (COVID-19) (World Health Organization, 2020). A green approach to mosquito control is vital in order to improve the environment and public health. The widely used chemical insecticides are reported to have several adverse effects on target and non-target populations [3-8]. The exploration of safe and eco-friendly methods through the utilization of plants is promising, as has been shown repeatedly for the Azolla pinnata plant in preventing mosquito breeding [9-12]. This free-floating aquatic pteridophyte can double its biomass within three to five days [13].
Recently, silver nanoparticles have been applied in industry owing to their antimicrobial and antiviral properties and their suitability for various fields [14]. Studies have reported that A. pinnata extracts against Ae. aegypti require high plant concentrations, 1853 ppm for crude extract and 2,521,535 ppm for fresh A. pinnata, to achieve the highest mortality percentage [12,13]. Such concentrations are not in line with the prospects for commercializing the extract. To reduce the required concentration of A. pinnata crude extract, a silver nanoparticle (AgNPs) solution was biosynthesized with A. pinnata extract and, in this study, applied as an insecticide against late third instar larvae of Ae. aegypti. Thus, the objective of this study was to determine the lethal concentrations (LC50 and LC95) of the AgNPs from A. pinnata extract against Aedes aegypti.
Study Area
All testing and experiments were conducted in the Vector Control Research Unit (VCRU), USM, with strict adherence to WHO 2005 [17] guidelines. The environmental conditions were set at an average room temperature of 25 °C ± 2 °C with 75 ± 5% relative humidity.
Synthesis and characterization of silver nanoparticle from Azolla pinnata
The pre-extracted plant extract (10 mL) and 1 mM AgNO3 solution (90 mL) were mixed at a ratio of 1:9 and kept in the dark for 3 hours. The colour of the AgNO3 solution changed from colourless to brownish (Figure 1) after 30 minutes of incubation, indicating complexation of the plant extract with the silver particles.
The complexation of the Azolla extract with the silver ions was investigated by measuring UV-Vis spectra using a HACH DR 6000 UV-Vis spectrophotometer at a resolution of 1 nm over a wavelength range of 200 nm to 700 nm, as in the study 'Green Synthesis of Silver Nanoparticles using Apple Extract' [14].
Mosquito Larvae
The larvae were hatched in de-chlorinated water for 4 hours and maintained at room temperature, a pH of 6.95 to 7.03, a relative humidity of 80 ± 10%, and dissolved oxygen from 5.5 to 6.1 mg/L in the laboratory [11,13]. After five days, the late 3rd instar larvae were used for the bioassay test, following WHO 2005 [2].
Larvicidal Bioassay
Four replicates were tested using 20 late third instar larvae for each concentration, ranging from 5 ppm to 750 ppm, to find the optimum range for larvicidal activity. The second testing phase involved lower concentrations, from 10 ppm to 250 ppm, that yielded between 5% and 95% mortality within 24 hours of exposure. Two controls were prepared with only 10 ppm A. pinnata extract. All tests were carried out for 24 hours before mortality was observed and recorded. Figure 2 below shows the experimental set-up. The results were analysed with probit regression using the IBM SPSS 21 statistical package to estimate the LC50 and LC95 values.
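The probit analysis the authors ran in SPSS can be sketched in Python as follows. The dose levels match the second testing phase, and the endpoint mortalities match the reported 7.5% (10 ppm) and 95% (250 ppm), but the intermediate counts are invented for illustration, so the fitted LC values will differ somewhat from those reported below:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Dose-response data: 4 replicates x 20 larvae = 80 larvae per dose.
conc = np.array([10, 50, 100, 150, 200, 250], dtype=float)   # ppm
dead = np.array([6, 20, 33, 48, 64, 76])                     # out of n
n = np.full_like(dead, 80)

# Probit regression of mortality on log10(concentration).
x = sm.add_constant(np.log10(conc))
model = sm.GLM(np.column_stack([dead, n - dead]), x,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
b0, b1 = fit.params

def lc(p):
    """Concentration expected to kill a fraction p of the larvae."""
    return 10 ** ((norm.ppf(p) - b0) / b1)

print(f"LC50 = {lc(0.50):.1f} ppm, LC95 = {lc(0.95):.1f} ppm")
```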
UV-Vis Spectroscopy
To confirm the formation of AgNPs with the Azolla extract, UV-Vis analysis was applied to study the excitation of the surface plasmon resonance (SPR). During the reaction, the A. pinnata extract changed the colour of the silver nitrate solution from transparent to dark brown, owing to the reduction of Ag ions to AgNPs, within half an hour of the commencement of the reaction. This colour change arises from the excitation of surface plasmon vibrations in the silver nanoparticles [11]. An SPR peak centred near 425 nm confirmed the reduction of Ag+ to Ag0. In particular, the absorbance range indicating the presence of silver nanoparticles is between 420 nm and 450 nm [11]. The UV-visible absorbance of the reaction mixture was measured 30 minutes after the reaction commenced and thereafter remained constant.
The absorbance spectrum of the Azolla pinnata plant extract showed peaks near 240 nm and 320 nm, indicating the presence of proteins and phenols in the extract, respectively (Figure 3). The absorption peak at around 320 nm shown in Figure 3 disappeared during the reaction, which indicates the involvement and role of phenols in the reaction.
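As a minimal illustration of locating the SPR band in such a spectrum (synthetic data; the 420-450 nm search window follows the range cited above):

```python
import numpy as np

def spr_peak(wavelengths, absorbance, window=(420.0, 450.0)):
    """Return the wavelength of maximum absorbance inside the window
    where an AgNP surface plasmon resonance peak is expected."""
    wavelengths = np.asarray(wavelengths)
    absorbance = np.asarray(absorbance)
    mask = (wavelengths >= window[0]) & (wavelengths <= window[1])
    if not mask.any():
        raise ValueError("no data points inside the SPR window")
    idx = np.argmax(absorbance[mask])
    return wavelengths[mask][idx]

# Synthetic spectrum: a Gaussian SPR band centred at 425 nm on a baseline.
wl = np.arange(200, 701, 1.0)                      # 1 nm resolution, as used
ab = 0.9 * np.exp(-((wl - 425) / 30) ** 2) + 0.05
print(spr_peak(wl, ab))                            # -> 425.0
```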
Larvicidal Bioassay Test
Using log-probit regression analysis (95% confidence level) in the IBM SPSS 21 statistical package, the lethal concentration values (LC50 and LC95) after 24 hours of exposure were calculated as shown in Table 1. The lethal concentration LC99 is shown in Table 2.
Based on the LC50 and LC95 results from the log-probit analysis, 121.570 ppm of AgNPs from Azolla pinnata is needed to kill 50% of the larvae, while 369.438 ppm of the same solution is needed to kill 95% of the larvae. The p-value obtained (0.025) indicates significance, as it does not exceed 0.05, the maximum value to be categorized as significant. The result is therefore statistically significant, and the null hypothesis is rejected. The p-value reflects the characteristics and efficiency of the test carried out on the sample populations of larvae. The larvicidal properties of A. pinnata and AgNPs were expected to cause the larval mortality and behavioural changes seen in this study. A previous study suggested that A. pinnata contains active ingredients that contribute to pesticidal, insecticidal, anti-parasitic and antimicrobial activities, such as 3,7,11,15-tetramethyl-2-hexadecane-1-ol, neophytadiene, and methacrylic acid. Others have also reported that high concentrations of extract can lead to severe morphological deformities. Ravi et al. [11] showed morphological deformities in Ae. aegypti larvae, with the abdomen becoming blackened and twisted after treatment with I. cairica leaf extract. The mixing of A. pinnata extract and silver nitrate, which resulted in the synthesis of silver nanoparticles, has led to potentiation. This might be due to the small size of the silver nanoparticles, which allows them to pass through the walls of the larval body into the cells, where they can disturb the physiological processes of the larvae [11]. In parallel with this, Figure 4(a), (b) and (c) show the morphologically deformed larvae, in which darkening of the whole body can be seen clearly. This can be due to ingestion by the larvae or absorption of the AgNPs from the A. pinnata extract into the larval body. Meanwhile, the abdomen of the morphologically deformed larvae could not be seen clearly and appeared darker.
Conclusion
In this study, a co-synthesized complexation of AgNPs with A. pinnata extract has shown promising results in reducing the required concentration of plant extract while achieving the highest larval mortality. This is further confirmed by the morphological deformities of the dead larvae, which point to the potential mode of action of the combination.
Telecare Service Use in Northern Ireland: Exploratory Retrospective Cohort Study
Background Telecare is a health service that involves the home installation of a number of information technology support systems for individuals with complex needs, such as people with reduced mobility or disabilities and the elderly. It involves the use of sensors in patients’ homes to detect events, such as smoke in the kitchen, a front door left open, or a patient fall. In Northern Ireland (NI), outputs from these sensors are monitored remotely by the telecare team, who can provide assistance as required by telephone or through the emergency services. The facilitation of such rapid responses has the aim of promoting early intervention and therefore maintaining patient well-being. Objective The aims of this study were to construct a descriptive summary of the telecare program in NI and evaluate hospital-based service use by telecare patients before and after the installation of telecare equipment. Methods An exploratory retrospective cohort study was conducted involving more than 2000 patients. Data analysis included the evaluation of health care use before and after the telecare service was initiated for individual participants. Individuals with data for a minimum of 6 months before and after the installation of the telecare service were included in this analysis. Results A total of 2387 patients were enrolled in the telecare service during the observation period (February 26, 2010-February 22, 2016). The mean age was 78 years (median 81 years). More women (1623/2387, 68%) were enrolled in the service. Falls detectors were the most commonly deployed detectors in the study cohort (824/1883, 43.8% of cases). The average number of communications (calls and/or alarms) between participants and the coordinating center was the highest for patients aged ≥85 years (mean 86 calls per year). These contacts were similarly distributed by gender. The mortality rate over the study period was higher in men than women (98/770, 14.4% in men compared to 107/1617, 6.6% in women). The number of nonelective hospital admissions, emergency room visits, and outpatient clinic visits and the length of hospital stays per year were significantly higher (P<.001) after the installation of the telecare equipment than during the period before installation. Conclusions Despite the likely benefits of the telecare service in providing peace of mind for patients and their relatives, hospital-based health care use significantly increased after enrollment in the service. This likely reflects the increasing health care needs over time in an aging population.
Introduction
It has been claimed that home-based telecare, particularly for the elderly, reduces the need for community care, prevents unnecessary hospital admissions, and delays or prevents admission into residential or nursing home care [1-6]. Telecare Northern Ireland is a service which provides a range of information technology support services to assist mainly elderly people who live independently in their own homes. It typically involves the use of sensors placed in patients' homes to allow for the detection of critical events, such as smoke in the kitchen, a tap left running, a front door left open, or a patient fall [7-9]. The sensors allow for the transmission of alerts to a central coordinating center, from which staff respond as appropriate. Telecare can be used by a full spectrum of patients but is mainly used by elderly people who live alone in their own homes [4,5,9].
In 2008, the Minister for Health, Social Services, and Public Safety for Northern Ireland (NI) announced £1.5 million (US $1.97 million) funding for pilot projects to promote the development of telehealth [3]. A telecare program was introduced as part of this initiative, under the umbrella of a more extensive Telemonitoring NI initiative. At the time the cohort of users was established, there were approximately 1.7 million people in the United Kingdom (UK) using telecare services [10].
Telecare programs in NI typically involve the deployment of different equipment and/or sensors, depending on the perceived benefits to the patient. Although telecare has the potential to play an important role in enhancing the ability of elderly people to manage their activities of daily living and, if required, avail of rapid response services, there is often a misunderstanding regarding the role of the service. It should be considered an aid to improve elderly patients' independence and quality of life and not a solution to their growing need for general health care and hospital-based care [3-5,9].
A private company (TF3) won the contract to provide telecare services in NI. The UK National Health Service (NHS) operates in NI, and the telecare service, as is the case with other health care services, is free of charge to patients. Patients were enrolled in the program by their clinical team. The range of sensors and components deployed in the NI program were as follows: a pendant that the patient can activate if he or she experiences a fall, an emergency alarm button, an extreme temperature sensor, a bed or chair occupancy sensor, a home safety package (consisting of a pressure mat, bogus caller button, epilepsy sensor, and property exit sensor), a fumes detector, a flood detector, and an immobility sensor. The combination of components used with each patient was adjusted according to individual patient needs. The equipment was installed and maintained in patients' homes by TF3 and all patients across NI were connected to a call center that dealt with alarms and calls from patients receiving the service.
The aims of this exploratory study were to construct a descriptive summary of the use of the telecare program in NI and evaluate patient use of hospital-based services before and after the introduction of the telecare service.
The objectives were as follows: • Using patient administrative data collected by the provider of telecare services in NI (TF3) as part of service provision, together with health care use data sets held at the Business Services Organisation (BSO) in NI, develop a descriptive summary of the patients enrolled in the telecare service from 2010-2016.
• Using data held by TF3 and Health and Social Care in NI (HSC), compare hospital-based service use before and after telecare service initiation in patients' homes.
Ethics Approval
Ethical approval was obtained from the National Research Ethics Service Committee (Research Ethics Committees 15/SW/0015, SET/14/68, WT/14/37; Integrated Research Application System project ID: 167795). Governance approvals and data access agreements were approved by the HSC Trusts.
Data Access and Confidentiality
Access to the health care data sets of individual NHS patients in NI is only made available to researchers in anonymized form via a confidential data repository, the Honest Broker Service (HBS), established by the BSO in NI. The HBS provides a "safe haven" in which data can be accessed and analyzed within a confidential, secure environment.
Patient-level data supplied by TF3 and the HSC Trusts were anonymized by the HBS and made available to the research team. To ensure confidentiality, identifiable data are not accessible to researchers and results from analyses undergo scrutiny before being released.
Data Acquisition and Inclusion in Master Data Set
Health care use data (ie, nonelective hospital admissions, periods of hospital stay, outpatient clinic visits, and emergency room [ER] visits) were obtained for all enrolled patients.
Individual patient data sets were retrieved using Health and Social Care numbers (HCNs) for the period before and after the installation date. If a patient died after installation, the date of death was used as the endpoint for that individual. The HCN is a unique identifier for all patients registered to receive NHS services in NI and was crucial for data linkage. The date of telecare equipment installation was used as the cut-off point to demarcate preservice and postservice health care use. Following clearance by the data guardians at the 5 HSC Trusts, TF3 provided data sets on telecare usage to the HBS for linkage and access by the research team. Patients who had data relating to a minimum of 6 months before and after the initiation of the telecare service were included in the health care use aspect of the study.
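For illustration only, the inclusion logic described above (the installation date as the cut-off point and the date of death, where applicable, as the endpoint) could be expressed as a simple filter in Python with pandas. This is a minimal sketch, assuming hypothetical file and column names (install_date, death_date) and study-period bounds; the actual linkage and anonymization were performed by the HBS.

```python
import pandas as pd

# Hypothetical anonymized extract: one row per patient, with the telecare
# installation date and (if applicable) the date of death.
patients = pd.read_csv("telecare_patients.csv",
                       parse_dates=["install_date", "death_date"])

DATA_START = pd.Timestamp("2010-01-01")  # assumed start of available records
DATA_END = pd.Timestamp("2016-12-31")    # assumed end of available records

# If a patient died after installation, the date of death is the endpoint.
obs_end = patients["death_date"].fillna(DATA_END)

# Keep patients with at least 6 months of data before and after installation.
six_months = pd.DateOffset(months=6)
pre_ok = patients["install_date"] - six_months >= DATA_START
post_ok = obs_end >= patients["install_date"] + six_months
cohort = patients[pre_ok & post_ok]

print(f"{len(cohort)} of {len(patients)} patients meet the 6-month criterion")
```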
Data Analysis
The data were analyzed in the HBS using SPSS (version 22; IBM Corp). Descriptive data analyses on patient demographic characteristics (eg, age and gender), number of calls (communications between the patient and coordinating center), telecare equipment components installed, and mortality rates were performed. Differences in the continuous variables relating to health care use before and after telecare installation were tested for significance using the paired t test.
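The analyses themselves were run in SPSS; purely as an illustration of the same test, the paired t test comparing pre- and post-installation use could be reproduced in Python as sketched below. The column names (admissions_pre, admissions_post) are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical linked data set: one row per patient with counts of
# nonelective admissions in the 6 months before and after installation.
use = pd.read_csv("healthcare_use.csv")

t_stat, p_value = stats.ttest_rel(use["admissions_pre"], use["admissions_post"])
print(f"mean pre = {use['admissions_pre'].mean():.2f}, "
      f"mean post = {use['admissions_post'].mean():.2f}")
print(f"paired t = {t_stat:.2f}, P = {p_value:.3f}")
```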
Demographic Data
Data for a total of 2387 patients enrolled in the telecare service in NI indicated that more female patients than male patients received the telecare service (n=1623, 68% female patients compared to n=764, 32% male patients). The mean age of participants was 78 (SD 12) years; 1716 (1716/2387, 72%) individuals in the study population were 75 years or older. Only 295 (295/2387, 12%) were under 65 years of age.
Contact Calls by Age Group and Gender
Out of the 2387 patients enrolled to receive the telecare service, 2330 patients had records of contact with the coordinating center (eg, in a fall alarm event, both the incoming alarm and outgoing call were recorded and counted as 2 calls). There were between 1 and 7183 calls per patient per year, with a mean of 64.7 and a median of 33 calls per year.
The highest average number of annual patient contact calls was in the ≥85 years age group, with an average of 86 calls per year. This decreased to 59 calls per year in the 75-84 years age group and 54 calls per year in the 65-74 years age group. The lowest mean number of calls (47 per year) was in the ≤64 years age group. The average number of calls was very similar for female (65.6 calls per year) and male patients (62.8 calls per year).
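Summaries of this kind reduce to a grouped aggregation over age bands. The sketch below shows one way this might be computed, assuming a hypothetical per-patient table with an age at enrollment and an annualized calls_per_year column.

```python
import pandas as pd

calls = pd.read_csv("contact_calls.csv")  # hypothetical: one row per patient

# Age bands matching those reported: <=64, 65-74, 75-84, >=85 years.
bands = pd.cut(calls["age"], bins=[0, 64, 74, 84, 120],
               labels=["<=64", "65-74", "75-84", ">=85"])
summary = calls.groupby(bands)["calls_per_year"].agg(["mean", "median", "count"])
print(summary)
```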
Mortality of the Enrolled Patients
A total of 205 (205/2387, 8.6%) patients died during the observation period. As expected, the mortality rate was highest in patients who were ≥85 years when they were first enrolled. Mortality during the observation period was nearly 2 times higher in male participants (98/770, 12.7%) than in female participants (107/1617, 6.6%).
Installation Frequency of Telecare Equipment Components
Out of a total population of 2387 patients, 1883 patients had data available on the individual telecare equipment components installed in their homes. Data showed that almost all (1867/1883, 99.2%) of the patients had a call advisor or home unit installed. This equipment provides an alternative to a landline telephone and allows the patient to contact the call center. A total of 824 patients had a fall detector installed (a pendant that a patient can activate if he or she has a fall). The remaining telecare equipment components or detectors (shown in Figure 1) were less commonly installed. This includes the alarm (a button that a patient can activate in case of any emergency; 441 cases), fire alarm (an extreme temperature sensor; 276 cases), timer (a bed or chair occupancy sensor that has a timer device that can be set according to each individual's routine and is placed under their mattress or chair cushion; 181 cases), safety package (consisting of a pressure mat, bogus caller button, epilepsy sensor, and property exit sensor; 96 cases), fumes detector (detects dangerous levels of carbon monoxide; 82 cases), flood detector (detects if water has overflowed onto the patient's floor; 38 cases), and immobility sensor (detects lack of movement within the patient's home, which suggests that patient has collapsed; 37 cases).
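The component frequencies reported above amount to simple counts over an installation table. As a minimal sketch, assuming a hypothetical long-format extract with one row per (patient, component) pair:

```python
import pandas as pd

# Hypothetical long-format extract: one row per installed component per patient.
components = pd.read_csv("installed_components.csv")

n_patients = components["patient_id"].nunique()  # eg, 1883 with component data
freq = components["component"].value_counts()
pct = (freq / n_patients * 100).round(1)

print(pd.DataFrame({"installed": freq, "% of patients": pct}))
```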
Health Care Use Pre and Post Installation of the Telecare Components
The health care use parameters (nonelective hospital admissions, periods of hospital stay, outpatient clinic visits, and ER visits) all increased significantly after the installation of the telecare equipment.
Study Focus
This study had a narrow focus (ie, to construct a descriptive summary of patients enrolled in and the use of the telecare program in NI and to evaluate hospital-based service use by patients before and after installation of the telecare equipment).
In interpreting the results, one must consider that there is often tension within health and social care provision regarding the value versus the cost of various services since different services have different impacts on clinical, humanistic, and economic outcomes.
Patient Age and Gender
The highest proportions of telecare NI patients were in the elderly age groups, with only 12% (295/2387) of participants under the age of 65 years. A total of 12% of the total UK population of >66 million people are aged ≥65 years [11]. Aging statistics in NI, which had a total population of 1.9 million in February 2021, show that the total number of people aged ≥65 years has increased from 13% (mid-1994) to 16.6% (mid-2019) [12]. This age group is projected to grow in all Great Britain (GB) regions by mid-2028 [13] (ie, there is a growing elderly population who may avail of telecare services).
The patients who enrolled for this telecare service were predominantly aged 75 years and above (1716/2387, 72% of the study population). The profile of participant age groups in this study is similar to the patients who enrolled for telecare services in England in the telehealth whole system demonstrator (WSD) project, in which approximately 60% of participants (intervention group) were aged ≥75 years [7]. The mean age of the patients enrolled for the telecare service in NI was 78 (SD 12) years, while it was 75 (SD 14) years for patients in the telecare arm of the WSD project [7].
All patients enrolled in this study were still able to live in their own homes or sheltered accommodation and were deemed able to take care of themselves with the aid of telecare equipment. In GB, approximately 60% of women aged ≥75 years live alone in their own homes, compared with 36% of men of the same age [14]. This disparity in the female-to-male ratio was evident in the uptake of telecare services in NI (1218:499 for patients aged ≥75 years).
Overall data on gender showed that 68% (n=1623) of the 2387 patients enrolled in the telecare service in NI were female. This is similar to a telecare service in Scotland, where female patients comprised 62% of a cohort of 7487 patients [5].
A similar male-to-female ratio was reported in the WSD project in England, where 67.5% of patients in both the control (n=1236) and intervention (telecare arm) groups (n=1190) were female [7].
Patient Contact Calls
Despite the high level of activity in this study (eg, an average of 86 contacts annually between the telecare center and patients aged ≥85 years), markers of the need for hospital-based care increased over time among patients enrolled in the program.
Research by others has indicated more nuanced outcomes; for example, a systematic review [15] on the benefits of home telecare services for elderly patients involving 21 randomized trials and 12 observational studies found that regular calls between health care providers and patients reduced or delayed hospital admissions and improved discharge rates in elderly people with long-term conditions, leading to cost savings. The observational studies in the systematic review also indicated that supplementing the type of telecare service delivered in NI with daily follow-up telephone calls from nurses may further reduce costs by delaying hospital admissions and lowering the number of readmissions in elderly patients with heart disease, diabetes, and chronic obstructive pulmonary disease. However, the review found insufficient rigorous evidence about the effects of safety and security alert systems, such as fall detectors and community alarms, on either individual or system outcomes [15]. A more recent study conducted in England found that the number of requests for ambulances as a consequence of falls was reduced by the rapid response of a telecare call center [6].
Mortality
In NI, mortality rates have decreased in recent years across all age groups, but the mortality rates in men remain higher than in women. It has also been noted, however, that although women live longer, they often live the extra years in poor health [14]. These data help explain the greater use of the telecare service by women and their lower mortality in this study.
Telecare Equipment Installation and Health Care Use
In NI, as in other locations, a wide assortment of sensors and devices were used according to the perceived needs of clients [9,16-18]. After the advisor call unit, the most frequently installed was fall detection equipment (824/1883, 44%). Falls are particularly problematic in an aging population and can have serious consequences, including bone (especially hip) fractures [6,19].
Although telecare is increasingly being used across GB, there has been little definitive work on its impact on health outcomes [20,21]. The variety of equipment components makes the delivery of randomized trials complex and difficult to perform [3]. The range and combinations of telecare equipment components used in different regions and countries, coupled with differing health and social care delivery models, also make it difficult to compare data from different centers.
An increase in health care use over time is to be expected in this study population because the majority (1716/2387, 72%) who enrolled were aged 75 years or older. Because a control group of people with similar characteristics who did not receive the telecare services was not available, the impact of telecare could not be evaluated; however, a doubling of the mean number of hospitalizations (0.5 to 1.0; P<.001) was disappointing and clearly highlights the impact of aging on health and well-being.
These findings can be considered alongside a report which summarized details of the Scottish Telecare Development Program [22]. This telecare provision was implemented at the time of hospital discharge over a 1-year period (2007-2008). As in this NI study, there was no control group. A total of 7902 patients were provided with the telecare service (85% aged ≥65 years). It was estimated (by the 18 telecare service providers involved) that more than 500 delayed discharges were avoided by the use of telecare, saving an estimated >5000 bed days. It was also estimated that more than 1200 emergency admissions were avoided, saving an estimated 13,000 bed days [22].
The demographics of the NI telecare recipient population were similar to those of the participant population in the study in Scotland. Since both regions operate under the UK NHS system, it is likely that benefit was accrued from the telecare service in NI despite the increased use of hospital-based services post installation. Peace of mind (through feeling safe and secure) achieved by both patients and their families, as demonstrated by other researchers [6,9,22-24], was likely to have been achieved, but this benefit could not be assessed using the NI data.
Conclusions
Despite the likely benefit of the telecare service, including peace of mind for patients and their relatives [23,24], hospital-based health care use significantly increased after enrollment in the service. This may simply reflect the increasing health care needs due to health deterioration over time within an aging population; with no control data available, it was not possible to quantify the impact of the telecare service.
This quantification would require a new prospective study with a control group and, therefore, a randomized controlled trial is recommended to fully evaluate the potential of telecare services to improve clinical, humanistic, and economic outcomes across NI. This should be supplemented by a substantive qualitative aspect to the research, including interviews with both patients and their next of kin and the development of a number of case studies involving patients who engaged with the telecare service.